
Support for rpc pattern - reply_to #55

Open
ddegasperi opened this issue Jan 28, 2019 · 6 comments

@ddegasperi

Hi,

I've found this library and find it very useful to run "executables" (but not only php) in a container environment.

In my current case I have to wait for the result of the executable, so I want to do a "synchronous" call using the RPC pattern (described in tutorial 6 on the RabbitMQ website). I searched the code for clues to the "reply_to" property, but it doesn't seem to be used anywhere.

Probably the best place to publish the "reply_to" message (incl. the correlation_id) would be consumer.go after line 108.

Something like the code from the example but only if m.ReplyTo exists:

if m.ReplyTo != "" { // only answer when the message actually carries a reply queue
    err = ch.Publish(
        "",        // exchange (default)
        m.ReplyTo, // routing key
        false,     // mandatory
        false,     // immediate
        amqp.Publishing{
            ContentType:   "text/plain",
            CorrelationId: m.CorrelationId,
            Body:          []byte(strconv.Itoa(response)),
        })
}

Unfortunately, I am not a Go programmer and have no experience with this language, so I would be very grateful for your help.

@corvus-ch
Owner

The consumer is supposed to be only the initiator. The response to an RPC can and should be sent by the executable itself. At least that is how I have dealt with this requirement myself. This assumes the use of a technology where sending an AMQP message is easy enough to do.

Will this be an option for you?

@ddegasperi
Author

Ok, I've written a wrapper that reads the "reply_to" property and answers to the queue.

Still, I think a built-in response reporting whether the executable ran successfully or failed would be very helpful. That way, only rabbitmq-cli-consumer has to be added as a container dependency, avoiding a hand-written wrapper for each use case.
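For others with the same need, here is a minimal sketch of what such a wrapper could look like in Go. The `Meta`/`Reply` types, the JSON-on-stdin convention, and `replyFor` are assumptions of this sketch, not part of rabbitmq-cli-consumer, and the actual AMQP publish is stubbed out as a print:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// Meta holds the AMQP properties this sketch cares about; the JSON
// field names are an assumption about how metadata reaches the wrapper.
type Meta struct {
	ReplyTo       string `json:"reply_to"`
	CorrelationId string `json:"correlation_id"`
}

// Reply is what the wrapper would publish back; in a real wrapper
// this maps onto ch.Publish("", ReplyTo, ...) with an AMQP client.
type Reply struct {
	RoutingKey    string
	CorrelationId string
	Body          []byte
}

// replyFor runs the real executable and builds a reply only when the
// incoming message carried a reply_to property.
func replyFor(meta Meta, name string, args ...string) (*Reply, error) {
	out, err := exec.Command(name, args...).Output()
	if err != nil {
		return nil, err
	}
	if meta.ReplyTo == "" {
		return nil, nil // not an RPC call, nothing to answer
	}
	return &Reply{meta.ReplyTo, meta.CorrelationId, out}, nil
}

func main() {
	var m Meta
	if err := json.NewDecoder(os.Stdin).Decode(&m); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	r, err := replyFor(m, os.Args[1], os.Args[2:]...)
	if err != nil {
		os.Exit(1)
	}
	if r != nil {
		// a real wrapper would publish here via an AMQP client
		fmt.Printf("reply to %s (correlation %s): %s",
			r.RoutingKey, r.CorrelationId, r.Body)
	}
}
```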

@corvus-ch
Owner

corvus-ch commented Jan 30, 2019

The example code you provided only sends the exit code as the response body, with a hard-coded set of message headers and hard-coded assumptions about the exchange and other settings. This covers only a very narrow use case, and I am against adding such a limiting feature.

I see that this can be a feature of general interest. To make it valuable for a broader audience, the executable would need a means to control the response that is sent. I am open to discussing ideas on how this could look. STDOUT and STDERR are already used for logging. The next thing that comes to mind are pipes on file descriptors three and up.

I will think about this and any input will be welcome.
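To illustrate the file-descriptor idea, a consumer could hand the executable an extra pipe on fd 3 and read the response from it. This is only a sketch (Unix-only); `runWithResponsePipe` and the `sh` child are made up for the example:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// runWithResponsePipe runs a command with an extra pipe on file
// descriptor 3 and returns whatever the child writes to that fd.
func runWithResponsePipe(name string, args ...string) ([]byte, error) {
	r, w, err := os.Pipe()
	if err != nil {
		return nil, err
	}
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout // fd 1 and 2 stay reserved for logging
	cmd.Stderr = os.Stderr
	cmd.ExtraFiles = []*os.File{w} // becomes fd 3 in the child
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	w.Close() // parent keeps only the read end
	resp, err := io.ReadAll(r)
	if err != nil {
		return nil, err
	}
	return resp, cmd.Wait()
}

func main() {
	// the child writes its RPC response to fd 3
	resp, err := runWithResponsePipe("sh", "-c", "echo result >&3")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s", resp)
}
```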

@Magentron

The previous maintainer had it implemented in his 2.0 branch:
ricbra/rabbitmq-cli-consumer@7fa1fa9
Would it be possible to use that to implement RPC support?
We need it as well 😉

@Magentron

Magentron commented Sep 12, 2019

Ok, I've written a wrapper that reads the "reply_to" property and answers to the queue.

@ddegasperi Could you share your code please?
Or share how you resolved receiving the response data from the executable?

@Magentron

Magentron commented Sep 12, 2019

File descriptor 3 is not usable, since the --pipe flag does not work when using the queue:consumer command.

I have a basic version working that sends a temporary file path in the headers of the incoming message to the executable and retrieves the response from that temporary file (so the executable has to write its response into it). Obviously, it's not sufficient as a generic solution, but it works for me.
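A sketch of that temporary-file approach in Go: here the path is passed via a `RESPONSE_FILE` environment variable instead of a message header, purely to keep the example self-contained, and all names are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runWithResponseFile hands the executable a temp file path (via the
// assumed RESPONSE_FILE env var) and reads the response back from it
// once the command finishes.
func runWithResponseFile(name string, args ...string) ([]byte, error) {
	f, err := os.CreateTemp("", "rpc-response-*")
	if err != nil {
		return nil, err
	}
	path := f.Name()
	f.Close()
	defer os.Remove(path) // clean up even on failure

	cmd := exec.Command(name, args...)
	cmd.Env = append(os.Environ(), "RESPONSE_FILE="+path)
	if err := cmd.Run(); err != nil {
		return nil, err
	}
	return os.ReadFile(path)
}

func main() {
	// the child writes its response into the file it was told about
	resp, err := runWithResponseFile("sh", "-c", `echo ok > "$RESPONSE_FILE"`)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s", resp)
}
```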


3 participants