x/net/http2: add a MarkComplete method to ClientConnPool #17776
CL https://golang.org/cl/32327 mentions this issue.
@tombergan any interest in this PR? In the same vein as #17775, the idea is that we may want to create a custom connection pool, as permitted by the `ClientConnPool` interface. This method is also useful for reporting.
From the CL:
I'm not sure InFlightStreams is the best way to implement that. I suspect you will have atomicity problems unless you can prevent new streams from being added to a connection concurrently, just after your call to InFlightStreams. Or maybe you won't ... I don't have enough information to say.

Along with #17775, I am sympathetic to the idea that net/http has privileged access to golang.org/x/net/http2, which makes it hard to write ClientConnPools of equivalent power to what is used internally by net/http. However, it would help me if you could state the request by: (1) saying what you want to do at a high level; (2) explaining why you cannot do that with the current API; then (3) if you have a proposal, explaining how that proposal solves your problem. In this issue, you've mostly jumped directly to (3).

I'm also wondering if this is effectively a dup of #13957. Wanting to limit the number of requests per connection is a reasonable goal, and basically what #13957 is asking for. Do you want to use the same limit for all connections? A different limit for different connections (based on some arbitrary function)? I.e., does this have to be implemented with a ClientConnPool, or could it be a knob on http2.Transport?
We are building an HTTP proxy that takes HTTP/1 and HTTP/2 connections from clients and multiplexes their requests onto a bunch of HTTP/2 connections to the origin. As we only speak HTTP/2 with the origin, we are using a custom connection pool. Here are a few examples of control we want on the pool:
The current API does not expose the information needed for this. To implement 3 we also need #17775 or #17292, or any way to gracefully dispose of a client connection. For 4 we need #17292.
There is another reason that comes to mind regarding exposing this kind of information to custom connection pools: with the introduction of #13774, if a pool is not able to grow, or cannot ensure it does not return a connection that has reached its max streams, the request will be queued. That is not an acceptable behavior for the kind of proxy we are working on. As the max streams limit is no longer checked before a connection is handed out, the pool needs access to this information itself.
Thanks! That is helpful. I am still hesitant to expose the number of inflight, pending, or max streams because of potential atomicity problems, e.g., it might look OK to accept a request but then some concurrent action might take that slot between the time your conn pool assigns that request to a conn and the time that request actually reaches conn.RoundTrip, meaning you end up queuing the request even though you didn't want to. I need to think about this in more detail. (I'm also about to go on vacation for a week so don't interpret lack of response as lack of interest.) |
If you lock the pool while you make this decision it should be OK, as no other request could obtain a connection in the interval. The only thing that could happen is connections releasing streams, but that is not a big issue in this context. I would also rethink the method, and perhaps expose it differently.
Is locking the pool enough? The issue is that locking the pool does not lock the full call to RoundTrip. For example:
Note that no lock guards the duration between transport.go lines 345 and 351. |
That is a good point. What you're saying is that we should probably track the connections we hand to the transport ourselves, instead of relying on the internal state of the client connection. The problem is then that we have no way to know when the round trip is done (i.e., the stream is closed) without doing some crazy tricks. What about adding a new (optional?) method to the `ClientConnPool` interface that is called when a stream completes?
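The self-tracking idea above can be sketched as a slot counter owned by the pool: a slot is reserved under the pool's own lock before a connection is returned, so nothing can claim it in the window between connection selection and `RoundTrip`. This is a toy illustration with hypothetical names, not `x/net/http2` API; `release` is exactly the notification a stream-completion callback would provide.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// slotPool tracks reserved stream slots per connection itself, instead of
// trusting a snapshot of the connection's internal state. A slot is taken
// under the pool lock, so no concurrent request can steal it afterwards.
type slotPool struct {
	mu         sync.Mutex
	maxPerConn int
	inFlight   map[string]int // connection key -> reserved slots
}

func newSlotPool(maxPerConn int) *slotPool {
	return &slotPool{maxPerConn: maxPerConn, inFlight: make(map[string]int)}
}

// acquire reserves a slot on the given connection, failing when it is full.
func (p *slotPool) acquire(key string) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.inFlight[key] >= p.maxPerConn {
		return errors.New("connection full")
	}
	p.inFlight[key]++
	return nil
}

// release gives a slot back; this is what a completion callback would drive.
func (p *slotPool) release(key string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.inFlight[key] > 0 {
		p.inFlight[key]--
	}
}

func main() {
	p := newSlotPool(2)
	fmt.Println(p.acquire("conn1")) // <nil>
	fmt.Println(p.acquire("conn1")) // <nil>
	fmt.Println(p.acquire("conn1")) // connection full
	p.release("conn1")
	fmt.Println(p.acquire("conn1")) // <nil>
}
```

The missing piece, as noted above, is precisely the trigger for `release`: without a callback from the transport, the pool cannot tell when the stream closed.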
Hi guys, I came here to ask the same thing. I am working on an HTTP/2 push library for Apple APNs. The use case is similar to @rs's:
Why is this important? @tombergan, your recent change committed to master, "Block RoundTrip when the Transport hits MaxConcurrentStreams", fixes a longstanding issue for us (thanks), but unfortunately changes the behaviour of the default connection pool. I may be wrong, but in my testing after commit 1c05540, the default connection pool won't ever dial a new connection (unless the stream ID hits its maximum). Exposing a method like the one discussed above would let the pool make that decision itself. Lastly, @tombergan, to your point:
The above point is not such an issue for us; if a few overflow requests are queued they will still execute under your new commit. The main thing is that the connection pool has the information to know how many requests are assigned/active on each connection, and can make the call to assign to an existing connection, open a new connection, or queue the request. Lastly, if you do consider adding this, please also expose something like the advertised max concurrent streams. Thanks!
I think you're right; ideally we would need both insight into a connection's streams as well as a more advanced pool API.
@tombergan any thoughts on this? |
Sorry for dropping this. #22537 was just filed and is related. That is on the main repo, so it won't happen until Go 1.11 at the earliest due to the feature freeze, but since this is off the main repo we could potentially do something before then. I don't know how I missed this comment from @rs, as I think it solves the concurrency problem I raised earlier:
Something like this sounds like the right approach. The pool would have the following methods (names and comments just for illustration):

```go
type Pool interface {
	// GetClientConn returns a conn for a new request.
	GetClientConn(req *http.Request, addr string) *ClientConn

	// MarkComplete is called when the request is complete
	// (response.Body has been closed). This would be new.
	MarkComplete(req *http.Request, conn *ClientConn)

	// MarkDead is called when the connection should no longer be used,
	// i.e., when the connection is aborted or when the server sends GOAWAY.
	MarkDead(conn *ClientConn)
}
```

You can then implement this interface to do your own per-connection accounting. A question for @rs and @sideshow: does the above API completely support what you need to do? I think the answer is yes, but I may have missed some subtleties in the above comments.
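To make the proposal concrete, here is a minimal sketch of a pool built on those three callbacks, enforcing a local per-connection stream limit lower than whatever the server advertises. All names are hypothetical, and `clientConn` is a stand-in for `http2.ClientConn` so the sketch stays self-contained; the real interface would also need error returns and dialing logic.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

// clientConn is a stand-in for http2.ClientConn, just for this sketch.
type clientConn struct{ id int }

// countingPool counts the streams it has handed out per connection and only
// reuses a connection below a local limit, relying on MarkComplete (rather
// than the connection's internal state) to learn when a stream finished.
type countingPool struct {
	mu       sync.Mutex
	limit    int
	conns    []*clientConn
	inFlight map[*clientConn]int
	nextID   int
}

func newCountingPool(limit int) *countingPool {
	return &countingPool{limit: limit, inFlight: make(map[*clientConn]int)}
}

// GetClientConn picks a conn with spare capacity, fabricating (standing in
// for dialing) a new one when every existing conn is at the limit.
func (p *countingPool) GetClientConn(req *http.Request, addr string) *clientConn {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, c := range p.conns {
		if p.inFlight[c] < p.limit {
			p.inFlight[c]++
			return c
		}
	}
	p.nextID++
	c := &clientConn{id: p.nextID}
	p.conns = append(p.conns, c)
	p.inFlight[c] = 1
	return c
}

// MarkComplete releases the slot taken in GetClientConn.
func (p *countingPool) MarkComplete(req *http.Request, c *clientConn) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.inFlight[c] > 0 {
		p.inFlight[c]--
	}
}

// MarkDead drops the connection from the pool entirely.
func (p *countingPool) MarkDead(dead *clientConn) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for i, c := range p.conns {
		if c == dead {
			p.conns = append(p.conns[:i], p.conns[i+1:]...)
			break
		}
	}
	delete(p.inFlight, dead)
}

func main() {
	p := newCountingPool(1)
	a := p.GetClientConn(nil, "origin:443")
	b := p.GetClientConn(nil, "origin:443") // a is full, so a second conn is created
	fmt.Println(a.id, b.id)                 // 1 2
	p.MarkComplete(nil, a)
	c := p.GetClientConn(nil, "origin:443") // a has a free slot again
	fmt.Println(c.id)                       // 1
}
```

The point of the sketch is that `MarkComplete` closes the accounting loop: without it, the pool can only increment, never decrement, its own counters.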
It would solve most of our issues, yes. Access to the max concurrent streams would be another great addition.
@tombergan, will you be able to work on this soonish, or should I give it a try?
Change https://golang.org/cl/77091 mentions this issue: |
@tombergan any chance to have this merged? |
As this PR is not yet merged, I'd like to add a few parameters to the new `MarkComplete` method. For pools that want to implement outlier detection via passive health checks, those parameters would be very helpful.
This issue celebrated its first anniversary a month ago (happy birthday). Could you guys please help me figure out what needs to be done to move it forward? Would it be more context, a different approach, more unit tests? Please don't let it die; we need this, and apparently we are not alone.
/cc @bradfitz |
@rs, if you're talking about https://go-review.googlesource.com/c/net/+/32327, Matt Layher noted in January that you needed tests, but they're not there yet, so it definitely isn't ready to be merged. But to @tombergan's point: it's not even clear this is a useful primitive that can be used properly (without races). Work towards a coherent design for #22537 would be most helpful. I'd like to get the minimal set of interfaces that solve the largest set of use cases. We don't want to just add little hooks everywhere without understanding what they're being used for. Code is relatively easy. This will hit its two-year anniversary as well if there's not a design.
@bradfitz I'm talking about the second PR attached to this issue: https://go-review.googlesource.com/c/net/+/77091. As discussed with @tombergan above, it implements the design he proposed. Let me close the previous PR and change the title to make this issue less misleading. If what @tombergan proposed is not the right design, or not considered "a design", please help me understand what you need to move forward.
Alright, thanks for the feedback everyone, and thanks @rs for addressing the concerns with your CL. @bradfitz, when you return from vacation: in regards to the design and API suitability, I believe that @tombergan's suggestion to implement `MarkComplete` covers the use cases raised here.
Here is a design proposal: https://docs.google.com/document/d/1Am4sTajtyWjcXfoxg5cRgo3HKqIK4qmM2th6bELMu-g |
Any updates on this? |
This method can be useful in the context of a custom connection pool, for example to implement a lower max concurrent streams per connection than advertised by the server.