SSL error with uvloop and already-closed transport #3115
Comments
To clarify the context here: we use uvloop. This issue doesn't look like anything I ran into while working on the ASGI conversion, but I wouldn't be surprised if this somehow has something to do with it.
Perfect, thanks for helping to confirm (and better clarify) the hunch I had! I'll dive into the logs and see if there's anything that corroborates this.
Update: There was no other useful logging information around those messages, however they did all occur while the API was doing some heavy iteration on dead link filtering, so I do believe that's the case. I'm not sure what would be the appropriate action here besides potentially handling this specific exception.
If it happens again, then adding more logging in the function so that we know precisely when things failed would be helpful. Otherwise, I have no idea how we would begin to try to handle this special case. Is it in our code? Is it in the environment of our application? Is it in the worker?
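For reference, here's a rough sketch of what that combined exception handling and logging could look like. This is not our actual code: it assumes the dead-link check issues aiohttp HEAD requests, and the function name and parameters are hypothetical.

```python
# Hypothetical sketch (not the actual Openverse dead-link code): wrap each
# HEAD request so that transport/SSL errors surfaced by the event loop are
# logged with enough context (the URL) to correlate bursts of failures,
# instead of bubbling up to Sentry with no surrounding information.
import asyncio
import logging
import ssl

import aiohttp

logger = logging.getLogger(__name__)


async def check_url_alive(session: aiohttp.ClientSession, url: str) -> bool | None:
    """Return True/False for a reachable/dead URL, or None when the check itself failed."""
    try:
        async with session.head(
            url,
            allow_redirects=False,
            timeout=aiohttp.ClientTimeout(total=2),
        ) as response:
            return response.status < 400
    except (ssl.SSLError, ConnectionResetError, aiohttp.ClientError, asyncio.TimeoutError) as exc:
        # Log the failing URL so bursts like the one in Sentry can be tied
        # to specific upstream hosts or to worker timeouts/restarts.
        logger.warning("Dead-link check failed for %s: %r", url, exc)
        return None
```

Returning `None` for a failed check would also let the caller distinguish "the check itself errored" from "the link is dead".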
One more clarification: this issue previously mentioned uvicorn in the title. We do not yet use uvicorn. This is uvloop, an asyncio implementation (though one that uvicorn depends on). Just clarifying to help focus debugging 🙂
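To illustrate the distinction (this is a generic example, not our actual setup): uvloop is installed as the asyncio event loop underneath the application, so these SSL/transport errors originate in the loop implementation rather than in any request handler we wrote.

```python
# Generic illustration of how uvloop replaces the default asyncio event loop.
# The application code above it is unchanged; only the loop implementation
# running the coroutines differs.
import asyncio

import uvloop

uvloop.install()  # make every new event loop a uvloop loop


async def main() -> None:
    await asyncio.sleep(0)  # any coroutine now runs on uvloop


asyncio.run(main())
```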
Assigning this to myself to see if this is still happening.
It is still happening. Unclear how to investigate this further though. I'll move it back to the backlog and unassign. @AetherUnbound what do you think the best way to prioritise this kind of thing is? I don't want to leave this issue open forever. If we're "okay" with the error, should we close it as "won't fix"? The issue's been open for a long time now.
What's interesting is that these seem to happen in bursts, with dozens of errors coming in at a single timestamp (across a few different tasks). This makes me think it might be deployment related, but none of the timestamps match up. That said, I don't think there's much that we can do if there's not a clear source. I've archived this for another thousand occurrences, and I'll close this issue as won't fix.
That is interesting. I wonder if it's something dead-link related specifically too 🤔 Or potentially related to not having that route be truly async. One of the SO answers I read when trying to investigate this suggested that it could be caused by the worker timing out and the loop terminating. Maybe this is just a symptom of that? We can plan to look into this again after #3449 and see if making our other potentially long-running request (search) asynchronous would prevent uvicorn from thinking it timed out?
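As a rough illustration of that last idea (hypothetical, not our actual view code): if the long-running search handler were an async view that offloads its blocking work, the event loop would keep servicing other connections instead of the worker appearing hung and being timed out mid-response.

```python
# Hypothetical sketch of an async Django view that offloads a blocking
# search call to a thread, so the ASGI event loop is not blocked for the
# duration of a slow request.
from asgiref.sync import sync_to_async
from django.http import JsonResponse


def run_search(query: str) -> list[dict]:
    """Placeholder for the existing blocking search call (e.g. Elasticsearch)."""
    ...


async def search_view(request):
    query = request.GET.get("q", "")
    # Run the blocking call in a worker thread; the event loop stays free,
    # so the process is less likely to be killed mid-response by a timeout.
    results = await sync_to_async(run_search, thread_sensitive=False)(query)
    return JsonResponse({"results": results})
```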
Sentry link
https://openverse.sentry.io/share/issue/c59d64d754414cb2bcf2887c1da580cd/
Description
We received an exception in Sentry for the following:
With the message:
9 of these occurred in quick succession on Oct 2, 2023 at 6:02am UTC. They have not come up again.
This seems to me like it might be a worker dying or requests hanging (a large number of images failed to validate their dead link checks immediately before this). It seems like `asyncio` is involved in some part of the stack here, @sarayourfriend was this similar to any issues you encountered while setting all that up? I know we don't have the ASGI version fully deployed, so I'm a little surprised to see `async` involved here. I've confirmed that the environment is production though, so unless the Sentry environment is incorrectly configured this is coming from production.

Additional context
Setting this to low priority because it hasn't occurred again and only affected a small number of requests.