
Conversation

@jentfoo (Member) commented Jun 20, 2019

This commit includes two changes:

  1. Allows you to set a queue limit for requests

It's still recommended that the queue size be at least as large as the number of concurrent requests, since all requests must pass through the queue.

This limit, however, can alleviate conditions where a request would otherwise sit in the queue until it times out. Instead we can now fail fast when the client is not making progress.
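The fail-fast behavior can be sketched with a bounded queue. This is an illustrative sketch, not the actual client internals; `MAX_QUEUE`, `requestQueue`, and `submit` are hypothetical names:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.RejectedExecutionException;

public class BoundedQueueSketch {
    // Illustrative limit; the real client would take this as a configuration value
    static final int MAX_QUEUE = 2;
    static final BlockingQueue<Runnable> requestQueue = new ArrayBlockingQueue<>(MAX_QUEUE);

    // Fail fast instead of letting requests sit in the queue until they time out
    static void submit(Runnable request) {
        if (!requestQueue.offer(request)) {
            throw new RejectedExecutionException("request queue at limit: " + MAX_QUEUE);
        }
    }

    public static void main(String[] args) {
        submit(() -> {});
        submit(() -> {});
        boolean rejected = false;
        try {
            submit(() -> {});  // queue is full, so this fails immediately
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        System.out.println("third submit rejected: " + rejected);
    }
}
```

The key difference from an unbounded queue is `offer` returning `false` immediately rather than the request waiting until its timeout fires.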

  2. This changes the exceptions which may be thrown from `request` to no longer only be `HTTPParsingException`

This is to make the behavior uniform with `asyncRequest`. `asyncRequest` may throw a `CancellationException`, a `RejectedExecutionException`, or other exceptions which are not only `HTTPParsingException`. Rather than changing the async request to always return this exception, I think it's better to let the specific exception types bubble up. Alternatively we could create `http` exceptions for each condition we want to report, however I don't believe it's worth the per-request overhead to map these exception types.
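The "let specific types bubble up" approach can be illustrated by unwrapping the cause from the async future rather than re-mapping it. `BubbleSketch` and this `extractAsyncResponse` signature are hypothetical stand-ins, not the PR's actual code:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.RejectedExecutionException;

public class BubbleSketch {
    // Hypothetical sketch: unwrap the specific failure cause instead of
    // mapping every failure to a single exception type.
    static <T> T extractAsyncResponse(Future<T> f) throws Exception {
        try {
            return f.get();  // a cancelled future throws CancellationException here directly
        } catch (ExecutionException e) {
            Throwable cause = e.getCause();
            if (cause instanceof Exception) {
                // e.g. RejectedExecutionException, or a parsing failure
                throw (Exception) cause;
            }
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<String> f = new CompletableFuture<>();
        f.completeExceptionally(new RejectedExecutionException("queue at limit"));
        try {
            extractAsyncResponse(f);
        } catch (RejectedExecutionException e) {
            System.out.println("bubbled: " + e.getMessage());
        }
    }
}
```

A caller of the blocking `request` then sees the same exception type an `asyncRequest` caller would, instead of everything arriving wrapped as `HTTPParsingException`.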

@lwahlmeier Let me know your thoughts, thanks!

@jentfoo jentfoo requested a review from lwahlmeier June 20, 2019 14:28
@lwahlmeier (Member) commented:

All in all I like the change and especially the addition of the extractAsyncResponse method.

I kind of feel like if a `.request()` can't add to the queue it should block till it can (maybe with a queue timeout param), since you're already blocking a thread for the request anyway, but maybe that's not the case.

@jentfoo (Member, Author) commented Jun 20, 2019

> I kind of feel like if a `.request()` can't add to the queue it should block till it can (maybe with a queue timeout param)

The point of limiting the queue is to avoid or limit the extra blocking that might occur here, so blocking until the queue has room would run counter to the motivation of limiting the queue.

I am guessing you are thinking more of a "supplier / producer" problem, needing to back-pressure the supplier. Something like `FlowControlledProcessor` could solve that: http://threadly.github.io/threadly/javadocs/5.37/org/threadly/concurrent/processing/FlowControlledProcessor.html
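As a rough concept sketch of that kind of producer back-pressure (this is not the `FlowControlledProcessor` API, just the idea it embodies, using a plain semaphore; `MAX_IN_FLIGHT` is an illustrative limit):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class BackpressureSketch {
    public static void main(String[] args) throws InterruptedException {
        final int MAX_IN_FLIGHT = 2;  // illustrative concurrency limit
        Semaphore permits = new Semaphore(MAX_IN_FLIGHT);
        ExecutorService pool = Executors.newFixedThreadPool(MAX_IN_FLIGHT);
        for (int i = 0; i < 6; i++) {
            permits.acquire();            // producer blocks here when saturated
            pool.execute(() -> {
                try {
                    Thread.sleep(10);     // stand-in for a slow request
                } catch (InterruptedException ignored) {
                } finally {
                    permits.release();    // frees a slot, unblocking the producer
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("all submitted, permits available: " + permits.availablePermits());
    }
}
```

Here the producer can never get more than `MAX_IN_FLIGHT` requests ahead of completion, which is the back-pressure semantic, as opposed to the fail-fast semantic the queue limit provides.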

```java
this.ssi = sts;
ntse = new NoThreadSocketExecuter();
sei = ntse;
if (maxQueueSize < 1 || maxQueueSize == Integer.MAX_VALUE) {
```
@lwahlmeier (Member) commented:

Feel like we should:

```java
if (maxQueueSize > 0 && maxQueueSize < maxConcurrent) {
  maxQueueSize = maxConcurrent;
}
```

Not strictly necessary, but it would be odd to have a max concurrency of 10 and a queue size of 1, and have the 2nd or 3rd request be rejected.

@jentfoo (Member, Author) commented:

I was unsure about this; I considered the same. I decided to just document that the queue size generally should be >= the max concurrency.

The reason is one specific use case I can see: a low-throughput client, where requests are rare but potentially slow. There you may want to allow more concurrent requests than queued.

A queue length of 1 is still unlikely to make sense, but I thought I would give people the freedom. I suspect most people won't limit the queue and will instead go with the default we had anyway.

@lwahlmeier (Member) left a comment:

Left a comment, I am fine with the change either way.

@jentfoo jentfoo merged commit f1ad5a7 into master Jun 20, 2019
@jentfoo jentfoo deleted the HttpClient-queueLimits branch June 20, 2019 22:53