Don't reuse the same exception through multiple paths #11827
Conversation
Because that still works?
We should ensure that server-initiated aborts result in Task/OperationCanceledExceptions and client-initiated aborts/resets result in IOExceptions. The common abort path might have led to us getting this wrong in a few places.
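For illustration, a minimal sketch of that mapping (`AbortSource` and `CreateAbortException` are hypothetical names, not Kestrel APIs):

```csharp
using System;
using System.IO;

enum AbortSource { Server, Client }

static class AbortSketch
{
    // Server-initiated aborts cancel the app's work, so they surface as
    // OperationCanceledException; client-initiated aborts/resets look like
    // an I/O failure on the connection, so they surface as IOException.
    public static Exception CreateAbortException(AbortSource source, string reason)
        => source == AbortSource.Server
            ? new OperationCanceledException(reason)
            : (Exception)new IOException(reason);
}
```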
Done. There's a problem with pipelines as well.
Fix for Pipelines exceptions: dotnet/corefx#39158
Rebased to kick off the Helix tests again (IIS issues)
Betterized it
```csharp
// Unblock the request body.
PoisonRequestBodyStream(new IOException(abortReason.Message, abortReason));
RequestBodyPipe.Writer.Complete(new IOException(abortReason.Message, abortReason));
```
I don't think we really need to new up two exceptions here, since there should only be one reader accessing a single HTTP/2 request body at a time.
@Tratcher made a good point that it might be better to have the caller of Abort or ResetAndAbort clone the exceptions if needed. After all, it's the caller that knows if the exception is being used to complete multiple streams. As it is, this seems a little overly defensive, though certainly better than what's currently in master.
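A sketch of that caller-side approach, assuming a caller that fans one root-cause exception out to several streams (`IAbortableStream` and `AbortAll` are illustrative names):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

interface IAbortableStream
{
    void Abort(Exception reason);
}

static class CallerClonesSketch
{
    public static void AbortAll(IEnumerable<IAbortableStream> streams, Exception abortReason)
    {
        foreach (var stream in streams)
        {
            // The caller knows the exception is completing multiple streams,
            // so it wraps the shared root cause in a fresh instance per stream.
            stream.Abort(new IOException(abortReason.Message, abortReason));
        }
    }
}
```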
Except it's used by both PoisonRequestBodyStream (the request body stream) and Writer.Complete (the request body pipe), which can happen in parallel?
Sure, but it's only a bug if it gets thrown/handled in parallel, right? There's only one reader for each stream, who will observe the exception either via the poisoned request body stream OR via the poisoned request body pipe.
The reader could only observe both exceptions if it swallowed the exception from the pipe first and then tried reading again anyway, thereby observing the exception from the poisoned stream. This wouldn't cause parallel access to the exception, though.
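A hypothetical illustration of that sequential double observation (the parameter names are made up):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

static class SingleReaderSketch
{
    public static async Task ObserveBothAsync(Func<Task> pipeRead, Stream poisonedBody)
    {
        try
        {
            await pipeRead(); // throws the exception that completed the pipe
        }
        catch (IOException)
        {
            // Swallowed; the reader tries reading again anyway...
        }

        // ...and now observes the exception from the poisoned stream.
        // Both observations happen on the same reader, one after the other.
        var buffer = new byte[1];
        await poisonedBody.ReadAsync(buffer, 0, buffer.Length);
    }
}
```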
The only obvious place I can see where it would keep adding to the exception stack trace is in HttpRequestPipeReader #11875, but I would have thought that would have shown up in the stack trace?
Http2Connection already creates a new exception for each Http2Stream:
https://github.com/aspnet/AspNetCore/blob/3f73a0fdcff217d061b994f3bd6528f1cd8caf2e/src/Servers/Kestrel/Core/src/Internal/Http2/Http2Connection.cs#L279-L282
However, the stack trace would suggest either that it's happening in parallel or that it's chaining Write+Flush on Write+Flush on Write+Flush without ever waiting for any completion, so an earlier Flush would be active concurrently with a later Write?
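For reference, rethrowing one shared exception instance via ExceptionDispatchInfo (which is what awaiting a faulted Task does under the hood) appends a new segment to its stack trace on every rethrow; a minimal standalone repro of that growth:

```csharp
using System;
using System.IO;
using System.Runtime.ExceptionServices;

class StackTraceGrowth
{
    static void Main()
    {
        var shared = new IOException("aborted");

        for (var i = 0; i < 3; i++)
        {
            try
            {
                // Preserves the accumulated trace and appends the new throw
                // site, just like awaiting a Task faulted with this instance.
                ExceptionDispatchInfo.Capture(shared).Throw();
            }
            catch (IOException)
            {
            }
        }

        // Prints multiple "End of stack trace from previous location" segments.
        Console.WriteLine(shared.StackTrace);
    }
}
```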
I really want to find the root of the problem. The exception set here should only be observed once, or maybe repeatedly thrown to one caller if the caller happens to swallow the exception. I really doubt that we're adding to the exception stack trace because we're throwing from both the poisoned Stream and PipeReader.
Question is, do you think that could be an accurate stack trace?
> Question is, do you think that could be an accurate stack trace?

I hope not, but I haven't ruled it out completely yet.
Once #12081 is merged, TimingPipeFlusher will no longer maintain a flush Task queue; a ConcurrentQueue will do that job instead.
I think the ConcurrentQueue stack traces will be easier to reason about, because there is only ever one TCS at a time (configured with TaskCreationOptions.RunContinuationsAsynchronously) instead of an arbitrarily long linked list of Tasks being awaited.
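A simplified sketch of the difference (assumed shapes, not the actual TimingPipeFlusher code): chained awaits link Tasks into an arbitrarily long list, while a ConcurrentQueue hands each waiter its own short-lived TCS with no chain to walk.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class FlushQueueSketch
{
    // Chained style: every queued flush awaits the one before it, so a
    // failure's stack trace can thread through the whole chain.
    private Task _previousFlush = Task.CompletedTask;

    public Task ChainFlush(Func<Task> flush)
    {
        async Task RunAfter(Task previous)
        {
            await previous;
            await flush();
        }

        return _previousFlush = RunAfter(_previousFlush);
    }

    // Queue style: pending flushes sit in a ConcurrentQueue and each caller
    // awaits a TCS whose continuations run asynchronously, so no Tasks are
    // linked together.
    private readonly ConcurrentQueue<(Func<Task>, TaskCompletionSource<object>)> _pending =
        new ConcurrentQueue<(Func<Task>, TaskCompletionSource<object>)>();

    public Task QueueFlush(Func<Task> flush)
    {
        var tcs = new TaskCompletionSource<object>(TaskCreationOptions.RunContinuationsAsynchronously);
        _pending.Enqueue((flush, tcs));
        return tcs.Task; // a drain loop (omitted) dequeues, flushes, completes
    }
}
```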
src/Servers/Kestrel/test/InMemory.FunctionalTests/Http2/Http2StreamTests.cs
Going to close this one then
I'm guessing this is the cause of the long chain in #11804 (though it doesn't fix the underlying issue).
/cc @halter73 @davidfowl @stephentoub