HttpClient: Allow queue limit and change request exceptions #32
Conversation
This commit includes two changes:

1) Allows you to set a queue limit for requests. It's still recommended that the queue size be at least as large as the number of concurrent requests, since all requests must pass through the queue. This limit, however, can alleviate conditions where we would otherwise need to time out after sitting in the queue for too long; instead we can now fail fast when the client is not making progress.

2) Changes the exceptions which may be thrown from `request` so they are no longer only `HTTPParsingException`. This makes the behavior uniform with `asyncRequest`, which may throw a `CancellationException`, a `RejectedExecutionException`, or other exceptions that are not only `HTTPParsingException`. Rather than changing the async request to always return this exception, I think it's better to let the specific exception types bubble up. Alternatively we could create `http` exceptions for each condition we want to report, however I don't believe it's worth the per-request overhead to map these exception types.

@lwahlmeier Let me know your thoughts, thanks!
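For illustration only, here is roughly how a caller might tell these failure modes apart. `sendRequest` is just a placeholder for a blocking call like `client.request(...)`, and the library's own `HTTPParsingException` is left out so the snippet stands on its own:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.RejectedExecutionException;

// Caller-side sketch only: `sendRequest` stands in for a blocking call such as
// client.request(...); the real client and response types are not reproduced here.
public final class RequestErrorHandling {

  static String requestOrNull(Callable<String> sendRequest) throws Exception {
    try {
      return sendRequest.call();
    } catch (RejectedExecutionException e) {
      // Queue limit reached: the client is not keeping up, so fail fast here
      // (shed load or retry later) instead of waiting out a queue timeout.
      return null;
    } catch (CancellationException e) {
      // The backing async request was cancelled before it completed.
      return null;
    }
    // A parsing failure (the library-specific HTTPParsingException) would get its
    // own catch block in real code; it is omitted so this compiles on its own.
  }
}
```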
All in all I like the change, and especially the addition of the `maxQueueSize` limit. I kind of feel like that in a […]
The point of limiting the queue is to avoid / limit the extra blocking that might occur here, so this would be kind of counter to the motivation of limiting the queue. I am guessing you are thinking more of a "supplier / producer" problem, needing to back pressure the supplier. Something like the sketch below.
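(Just a rough illustration of the idea, not this client's API: a plain `Semaphore` gating a hypothetical async call that completes a `CompletableFuture`.)

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Blocks the supplier once `maxOutstanding` requests are in flight, instead of
// relying on the client's bounded queue to reject the overflow.
public final class BackpressuredProducer {
  private final Semaphore permits;

  public BackpressuredProducer(int maxOutstanding) {
    this.permits = new Semaphore(maxOutstanding);
  }

  public CompletableFuture<?> submit(Supplier<CompletableFuture<?>> asyncRequest)
      throws InterruptedException {
    permits.acquire();  // the producer blocks here once the limit is reached
    CompletableFuture<?> pending;
    try {
      pending = asyncRequest.get();
    } catch (RuntimeException e) {
      permits.release();  // the request never started, so give the permit back
      throw e;
    }
    // Release the slot when the request finishes, whether it succeeded or failed.
    pending.whenComplete((result, error) -> permits.release());
    return pending;
  }
}
```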
```java
this.ssi = sts;
ntse = new NoThreadSocketExecuter();
sei = ntse;
if (maxQueueSize < 1 || maxQueueSize == Integer.MAX_VALUE) {
```
Feel like we should:

```java
if (maxQueueSize > 0 && maxQueueSize < maxConcurrent) {
  maxQueueSize = maxConcurrent;
}
```

Not strictly necessary, but it would be odd to have a max concurrency of 10 and a queue size of 1, and then have the 2nd/3rd request be rejected.
I was unsure about this. I considered the same, but decided to just document that the queue size should generally be >= the max concurrency.

The reason is one specific use case I can see: a low-throughput client, where requests are rare but potentially slow. There you may want to allow more concurrent requests than queued ones.

A queue length of 1 is still unlikely to make sense, but I thought I would give people the freedom. I suspect most people won't limit the queue and will instead go with the default we had anyway.
lwahlmeier left a comment
Left a comment; I am fine with the change either way.