The thread_pool dispatcher has two FIFO modes: individual (every agent has its own event_queue) and cooperative (all agents from a cooperation share the same event_queue).
But sometimes this may not be flexible enough.
For example, let's imagine that we have agents with very fast event handlers. Such agents can receive a message, check one or two message fields and resend the message. Or they can increment a couple of counters when a specific message or signal arrives.
Effective scheduling of such fast event handlers is a hard task for the thread_pool dispatcher because the cost of awakening a sleeping worker thread may be much larger than the cost of calling an event handler.
Thread pool dispatchers in SObjectizer have a special tuning option -- next_thread_wakeup_threshold -- that allows optimizing the behavior of a dispatcher in the presence of such fast event handlers. But the use of such an option can be problematic when there is a mix of fast and slow event handlers: if we set next_thread_wakeup_threshold to a big value, then we can slow down processing of demands even while there are sleeping threads.
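For reference, tuning this option on the existing thread_pool dispatcher looks roughly like this (naming as in recent SObjectizer versions; treat it as a sketch of the configuration, not exact code):

```cpp
namespace tp = so_5::disp::thread_pool;

// Create a thread_pool dispatcher with 8 workers that keeps sleeping
// threads asleep until a queue accumulates more than 10 waiting demands.
auto disp = tp::make_dispatcher(
    env,
    "my_pool",
    tp::disp_params_t{}
        .thread_count(8)
        .set_queue_params(
            tp::queue_traits::queue_params_t{}
                .next_thread_wakeup_threshold(10)));
```

The threshold is a trade-off: a bigger value saves wakeup costs for fast handlers but delays demands behind slow ones, which is exactly the problem described below.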
For example, let's imagine that a thread_pool dispatcher has 8 worker threads, and just one of them is working now, the remaining 7 are sleeping. At this moment 3 new demands arrive: the first is for a slow event handler of agent A, the second and the third are for fast event handlers of agents B and C. If the next_thread_wakeup_threshold is big, then none of the sleeping threads will be awakened. So the second and third demands will wait until the first demand is processed (or until more demands arrive).
If agents B and C belong to different cooperations then we can't bind them to the same event_queue to optimize processing of such a queue.
In such a case, the active_group dispatcher may be more appropriate. But the active_group dispatcher has another drawback: it creates a new worker thread for every group. That might be a problem if we have to create several hundred groups.
Because of that, I have an idea for a new type of dispatcher: group_thread_pool.
It will be a mix of thread_pool and active_group dispatchers: there will be N worker threads (specified at creation time) and named groups. Every group will have a separate event_queue and all agents that belong to the group will share the same event_queue.
A binding to such a dispatcher may look like this:

```cpp
using namespace so_5::disp::group_thread_pool;

auto disp = make_dispatcher(env, 8);

// All agents will be in separate coops.
env.introduce_coop([&](so_5::coop_t & coop) {
    coop.make_agent_with_binder<agent_A>(
        disp.binder(bind_params_t{}.group("slow_handlers")),
        ...);
    ...
});
env.introduce_coop([&](so_5::coop_t & coop) {
    coop.make_agent_with_binder<agent_B>(
        disp.binder(bind_params_t{}.group("fast_handlers")),
        ...);
    ...
});
env.introduce_coop([&](so_5::coop_t & coop) {
    coop.make_agent_with_binder<agent_C>(
        disp.binder(bind_params_t{}.group("fast_handlers")),
        ...);
    ...
});
```