I see how I can have multiple actors in a group within a single executable. It is not clear whether I can have the same group in multiple executables to share the message workload. When the topology is built, it appears to separate groups into local or remote, with no way to have both. Ideally, I would like the actors of a group to be spread over many executables (pods in Kubernetes), each processing messages for the keys it manages.
An actor group cannot contain actors across multiple nodes, but each node can have a group with the same name, with routing between them:
```rust
// Service A (nodes 1, 2, 3)
let consumers = topology.local("consumers");

// Service B (nodes 4, 5)
let remote_consumers = topology.remote("consumers");
let producers = topology.local("producers");

producers.route_to(&remote_consumers, |e, nd| {
    msg!(match e {
        SomeEvent => ...,
    });
});
```
The second argument `nd` (node discovery) is supposed to be used to choose a node among the connected ones, in order to avoid explicit routing (`SomeEvent { node_no } => Outcome::Unicast(node_no)`).
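For illustration, here is a minimal sketch of that explicit routing, reusing the `route_to` closure from above; the `node_no` field on `SomeEvent` and the fallback `Outcome::Default` arm are assumptions for the example, not settled API:

```rust
// Explicit routing: the message itself must carry the target node,
// so node discovery (`nd`) is not consulted at all.
producers.route_to(&remote_consumers, |e, _nd| {
    msg!(match e {
        // Assumed: SomeEvent carries a `node_no` field naming the target node.
        SomeEvent { node_no } => Outcome::Unicast(node_no),
        _ => Outcome::Default,
    })
});
```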
However, `nd` currently doesn't provide any methods for this, because we need to decide which syntax to use. In the simplest form it could be:
```rust
SomeEvent => Outcome::Unicast(nd.random()),
// nd.nearest_by_rtt() and so on
```
But how should it look when a message is sent to multiple nodes (`Outcome::Multicast(nd.by_tag("something"))`)? It's not very flexible.
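Putting both proposals into one router, just to see how they read side by side (every `nd` method here — `random()`, `by_tag()` — is a proposal from above, and `OtherEvent` is a hypothetical second message type):

```rust
producers.route_to(&remote_consumers, |e, nd| {
    msg!(match e {
        // Unicast: let node discovery pick one connected node.
        SomeEvent => Outcome::Unicast(nd.random()),
        // Multicast: send to every connected node marked with a tag.
        OtherEvent => Outcome::Multicast(nd.by_tag("something")),
        _ => Outcome::Default,
    })
});
```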
Thus, it's not a question of implementation, but rather of API design =(