
Question: is there any 'nats' alike queue-group possible now or in future? #233

Open

mashpie opened this issue Nov 18, 2020 · 4 comments

mashpie commented Nov 18, 2020

Hi, I am impressed and eager to test out cote because of your mesh-like discovery features.

The only feature I am missing (or might not have found) is something called "queue groups" in NATS:

https://github.com/nats-io/nats.js#queue-groups

Say you have a publisher emitting an event, and several worker groups running multiple processes for different concerns. Example:

  • Group A running 5 workers
  • Group B running 5 workers

A published event (or perhaps a request?) should be processed by only one worker from group A and only one worker from group B. And of course, the publisher won't know anything about groups or workers; it just emits "something.done" to anyone interested.
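For reference, a minimal sketch of what this looks like with NATS queue groups, using the current nats.js API; the subject and group names are illustrative:

```js
// Each worker process (five per group) runs something like this; NATS delivers
// every "something.done" message to exactly one member of each queue group.
import { connect, StringCodec } from "nats";

const nc = await connect({ servers: "localhost:4222" });
const sc = StringCodec();

// "group-a" here; the other group's workers would pass queue: "group-b".
const sub = nc.subscribe("something.done", { queue: "group-a" });

for await (const m of sub) {
  console.log(`group-a worker handled: ${sc.decode(m.data)}`);
}

// The publisher side is just a plain publish, with no knowledge of groups:
//   nc.publish("something.done", sc.encode("payload"));
```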

@dmetree01

Hi, it seems to me that this can't be done without logic on the publisher's side. For example, the publisher first publishes a question, "who is ready to accept the message?", listens for the responses, and collects a list of the subscribers that replied. It then picks one at random and sends the message to that subscriber specifically. Perhaps an intermediary publisher could take care of this, so that the top-level publisher only has to emit.
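A minimal sketch of this poll-and-pick idea on top of cote's Publisher/Subscriber and Requester/Responder primitives. The event names ('who.is.ready', 'ready.reply', 'something.done'), the pollId field, and the fixed 500 ms response window are illustrative assumptions, not cote features:

```js
const cote = require('cote');

// --- coordinating publisher process ---
const poller = new cote.Publisher({ name: 'readiness poller' });
const jobPublisher = new cote.Publisher({ name: 'job publisher' });
const replies = new cote.Responder({ name: 'readiness replies' });

const candidates = new Map(); // pollId -> [workerId, ...]

replies.on('ready.reply', (req, cb) => {
  (candidates.get(req.pollId) || []).push(req.workerId); // stale polls are ignored
  cb(null, 'noted');
});

function publishToOneWorker(payload) {
  const pollId = `${Date.now()}-${Math.random()}`;
  candidates.set(pollId, []);
  poller.publish('who.is.ready', { pollId });

  // Collect replies for a fixed window, then pick one subscriber at random.
  setTimeout(() => {
    const ready = candidates.get(pollId);
    candidates.delete(pollId);
    if (!ready.length) return;
    const target = ready[Math.floor(Math.random() * ready.length)];
    jobPublisher.publish('something.done', { target, payload });
  }, 500);
}

// --- each worker process ---
const workerId = `worker-${process.pid}`;
const pollSub = new cote.Subscriber({ name: 'readiness poll listener' });
const jobSub = new cote.Subscriber({ name: 'job listener' });
const replier = new cote.Requester({ name: 'readiness replier' });

pollSub.on('who.is.ready', ({ pollId }) =>
  replier.send({ type: 'ready.reply', pollId, workerId }));

jobSub.on('something.done', (event) => {
  if (event.target !== workerId) return; // only the chosen worker handles it
  console.log(`${workerId} processing`, event.payload);
});
```

The worker-side filter on event.target is what keeps processing at exactly one subscriber; every worker still receives the event, but the others ignore it.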

@dashersw (Owner)

Yes, I believe this is outside the scope of cote. The main reason is that I believe queues are inherently not well suited for microservices "communication"; a queue is really a workload management system. If there is a need to implement queue functionality as a workload management system on top of a communication mechanism (cote), then that should be done on the client side. Cote can help with building the communication blocks for this, and @dmetreeves's recommendation isn't far off from what I would suggest.

dmetree01 commented Apr 15, 2021

Hi, thanks for the great library. I have now realized that there is an important gap in my approach: if the publisher does not know in advance exactly how many subscribers it has, how can it know when to stop listening for replies before choosing one at random? Would we have to use setTimeout inside the publisher, or constantly ping the listeners? How would you build a communication system of this type, where only one random subscriber out of many has to process a message from the publisher? Thanks!

dmetree01 commented Apr 15, 2021

Ok. The following algorithm should work:

  1. A publisher is created.
  2. At creation time, each subscriber informs the publisher of its existence, and the publisher keeps track of every known subscriber.
  3. When the publisher needs to send a message to one random subscriber, it first goes through its list and checks whether each subscriber is still alive, builds a list of live subscribers, picks one of them at random, and sends it the message (a sketch follows below).
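A minimal sketch of that algorithm on top of cote, assuming made-up event names ('worker.heartbeat', 'something.done') and timing constants, and using periodic heartbeats instead of an on-demand liveness poll for step 3 (a cote Requester addresses whichever matching Responder it finds, not one specific worker, so a push-style heartbeat is simpler):

```js
const cote = require('cote');

// --- publisher / registry process ---
const registry = new cote.Responder({ name: 'worker registry' });
const jobPublisher = new cote.Publisher({ name: 'job publisher' });

const lastSeen = new Map(); // workerId -> timestamp of the last heartbeat

// Steps 1-2: each worker announces itself and keeps re-announcing as a heartbeat.
registry.on('worker.heartbeat', (req, cb) => {
  lastSeen.set(req.workerId, Date.now());
  cb(null, 'ok');
});

// Step 3: pick one recently seen worker at random and address the event to it.
function publishToOneWorker(payload) {
  const alive = [...lastSeen.entries()]
    .filter(([, ts]) => Date.now() - ts < 10000)
    .map(([id]) => id);
  if (!alive.length) return;
  const target = alive[Math.floor(Math.random() * alive.length)];
  jobPublisher.publish('something.done', { target, payload });
}

// --- each worker process ---
const workerId = `worker-${process.pid}`;
const heartbeat = new cote.Requester({ name: 'heartbeat sender' });
const jobs = new cote.Subscriber({ name: 'job listener' });

setInterval(() => heartbeat.send({ type: 'worker.heartbeat', workerId }), 3000);

jobs.on('something.done', (event) => {
  if (event.target !== workerId) return; // only the randomly chosen worker processes it
  console.log(`${workerId} processing`, event.payload);
});
```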
