benchmark: add a feature for the stream count #5898
Conversation
@hueypark: If possible, could you please avoid force pushing commits? GitHub loses track of the association between code lines and comments.
```diff
@@ -232,40 +233,60 @@ func DoUnaryCall(tc testgrpc.BenchmarkServiceClient, reqSize, respSize int) error
 }

 // DoStreamingRoundTrip performs a round trip for a single streaming rpc.
-func DoStreamingRoundTrip(stream testgrpc.BenchmarkService_StreamingCallClient, reqSize, respSize int) error {
+func DoStreamingRoundTrip(stream testgrpc.BenchmarkService_StreamingCallClient, reqSize, respSize, streamCount int) error {
```
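For context, a minimal sketch of what the new `streamCount` parameter does, based on the discussion below: the function keeps the single stream it is given and repeats the `Send`/`Recv` round trip `streamCount` times. The helper name and the request construction here are illustrative, not the exact code from this PR.

```go
package benchmark

import (
	"fmt"
	"io"

	testpb "google.golang.org/grpc/interop/grpc_testing"
)

// doStreamingRoundTripSketch loops streamCount times over the one stream,
// doing a blocking Send followed by a blocking Recv on each iteration.
func doStreamingRoundTripSketch(stream testpb.BenchmarkService_StreamingCallClient, reqSize, respSize, streamCount int) error {
	req := &testpb.SimpleRequest{
		ResponseType: testpb.PayloadType_COMPRESSABLE,
		ResponseSize: int32(respSize),
		Payload: &testpb.Payload{
			Type: testpb.PayloadType_COMPRESSABLE,
			Body: make([]byte, reqSize),
		},
	}
	for i := 0; i < streamCount; i++ {
		if err := stream.Send(req); err != nil {
			return fmt.Errorf("stream.Send(_) = %v, want <nil>", err)
		}
		if _, err := stream.Recv(); err != nil {
			if err == io.EOF {
				// The server closed the stream cleanly.
				return nil
			}
			return fmt.Errorf("stream.Recv() = %v, want <nil>", err)
		}
	}
	return nil
}
```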
Sorry, I think I missed this in my previous iteration. The way this is implemented currently, it creates a single stream and performs multiple sends and receives on the same stream. What I believe you are trying to add is to actually create multiple streams. In that case, we would have to change the signature of `DoStreamingRoundTrip()` to accept a `grpc.ClientConnInterface` instead of a `testgrpc.BenchmarkService_StreamingCallClient`, and create `streamCount` number of streams inside this function (from goroutines), and perform sends and receives on them.

Also, the `Send()` and `Recv()` can be inside the same goroutine instead of the latter being outside the goroutine.
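To make the suggested shape concrete, here is a rough sketch that takes the connection, opens `streamCount` streams concurrently, and keeps each stream's `Send` and `Recv` in the same goroutine. It assumes the `errgroup` package for goroutine management; the function name is illustrative and this is not the actual benchmark code.

```go
package benchmark

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
	"google.golang.org/grpc"
	testpb "google.golang.org/grpc/interop/grpc_testing"
)

// doMultiStreamRoundTrip is a sketch only: it accepts the connection rather
// than a pre-created stream, and opens streamCount streams from goroutines.
func doMultiStreamRoundTrip(ctx context.Context, cc grpc.ClientConnInterface, reqSize, respSize, streamCount int) error {
	client := testpb.NewBenchmarkServiceClient(cc)
	req := &testpb.SimpleRequest{
		ResponseType: testpb.PayloadType_COMPRESSABLE,
		ResponseSize: int32(respSize),
		Payload: &testpb.Payload{
			Type: testpb.PayloadType_COMPRESSABLE,
			Body: make([]byte, reqSize),
		},
	}

	g, ctx := errgroup.WithContext(ctx)
	for i := 0; i < streamCount; i++ {
		g.Go(func() error {
			// Each goroutine owns one stream and does one round trip on it.
			stream, err := client.StreamingCall(ctx)
			if err != nil {
				return fmt.Errorf("StreamingCall(_) = _, %v, want _, <nil>", err)
			}
			if err := stream.Send(req); err != nil {
				return err
			}
			_, err = stream.Recv()
			return err
		})
	}
	return g.Wait()
}
```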
I would like to create a single stream and perform multiple sends and receives. The current implementation is sufficient for my needs.
We have three modes of operation for the benchmark client, controlled by the `-workloads` flag, and in each of these modes, `-maxConcurrentCalls` number of streams are created upfront:
- Unary: one goroutine per stream is created, and one RPC is done at a time.
- Streaming: one goroutine per stream is created, and one `Send()` and `Recv()` is done at a time.
- Unconstrained: two goroutines per stream are created, one for sending and one for receiving, and the two goroutines send and recv independently of each other (see the sketch below).

Is the unconstrained mode any different from what you are trying to do here?
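For reference, a simplified sketch of the per-stream goroutine pair that the unconstrained mode description above implies: one goroutine sends continuously and another receives continuously until the context is cancelled or the stream fails. This is an illustration of the pattern, not the benchmark client's actual code, and the function name is made up.

```go
package benchmark

import (
	"context"
	"sync"

	testpb "google.golang.org/grpc/interop/grpc_testing"
)

// runUnconstrainedStream drives a single stream with independent sender and
// receiver goroutines, in the spirit of the unconstrained workload.
func runUnconstrainedStream(ctx context.Context, stream testpb.BenchmarkService_StreamingCallClient, req *testpb.SimpleRequest) {
	var wg sync.WaitGroup
	wg.Add(2)

	// Sender goroutine: push requests as fast as the stream allows.
	go func() {
		defer wg.Done()
		for ctx.Err() == nil {
			if err := stream.Send(req); err != nil {
				return
			}
		}
	}()

	// Receiver goroutine: drain responses without waiting for the sender.
	go func() {
		defer wg.Done()
		for ctx.Err() == nil {
			if _, err := stream.Recv(); err != nil {
				return
			}
		}
	}()

	wg.Wait()
}
```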
> Is the unconstrained mode any different from what you are trying to do here?

It's almost the same. The unconstrained mode can accomplish what I am attempting to do here.

Sure, I will.
This PR is labeled as requiring an update from the reporter, and no update has been received after 6 days. If no update is provided in the next 7 days, this issue will be automatically closed.
@easwars Thank you for your review. I will be closing this pull request without merging it.
RELEASE NOTES: none