fix(incremental): do not ignore errors from filtered streams #4142
Conversation
Force-pushed from ed9caee to c4c07f7
Force-pushed from c4c07f7 to d6377eb
There are 2-6 cases in which we might want to call
In each of these scenarios, we would have to think about how to forward errors from
depends on #4141
Early execution may result in non-completed streams after publishing is completed -- these streams must be closed using their return methods. When this occurs, we should pass through any error that occurs in the clean-up function instead of swallowing errors.
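As a rough sketch of the situation (hypothetical names, not graphql-js internals): an async generator that has already started executing is later closed via its return() method, and an error thrown during clean-up surfaces as a rejection of that call rather than being silently dropped.

```ts
// Hypothetical stand-in for a stream that was started ("executed early")
// and later needs to be closed via its return() method.
async function* itemStream(): AsyncGenerator<number> {
  try {
    yield 1;
    yield 2;
  } finally {
    // Clean-up that runs when the stream is closed early.
    throw new Error('clean-up failed');
  }
}

async function demo(): Promise<void> {
  const stream = itemStream();
  await stream.next(); // the stream has started executing
  try {
    // Closing the not-yet-completed stream invokes return(); the clean-up
    // error rejects here instead of being swallowed.
    await stream.return(undefined);
  } catch (error) {
    console.error('clean-up error passed through:', error);
  }
}

void demo();
```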
Swallowing errors is a bad practice that could lead to memory leaks. The argument in favor of swallowing the error might be that because the stream was "executed early" and the error does not impact any of the returned data, there is no "place" to forward the error.
But there is a way to forward the error: on the next() call that returns { value: undefined, done: true } to end the stream. We will therefore send all the data appropriately and still be able to pass through an error.

Servers processing our stream should be made aware of this behavior and put error-handling procedures in place that let them forward the data to clients when they see a payload with { hasNext: false } and then withhold any further errors from clients. Servers could also postpone sending { hasNext: false } until the clean-up has been performed, and then optionally send information about the error in the extensions key, if that made sense. It would be up to the server!
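For the server side, a minimal sketch of the handling described above, assuming a generic async iterable of payloads with a hasNext field; the IncrementalPayload type and the sendToClient/logError parameters are placeholders, not graphql-js APIs.

```ts
// Hypothetical types and helpers for illustration; not graphql-js APIs.
interface IncrementalPayload {
  hasNext: boolean;
  // ...data / incremental / errors / extensions fields elided
}

async function forwardIncrementalResponse(
  payloads: AsyncIterable<IncrementalPayload>,
  sendToClient: (payload: IncrementalPayload) => Promise<void>,
  logError: (error: unknown) => void,
): Promise<void> {
  let sawFinalPayload = false;
  try {
    for await (const payload of payloads) {
      await sendToClient(payload);
      if (!payload.hasNext) {
        sawFinalPayload = true;
      }
    }
  } catch (error) {
    if (!sawFinalPayload) {
      // The response is genuinely incomplete; propagate as usual.
      throw error;
    }
    // The client already has every payload, including { hasNext: false };
    // this error surfaced on the terminating next() call while filtered
    // streams were being closed, so keep it server-side.
    logError(error);
  }
}
```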