
[BUG] Memory leak in Web Service Listener #4657

Closed
ChristopherSchultz opened this issue Aug 3, 2021 · 6 comments
Labels
bug (Something isn't working), Fix-Commited (Issue fixed and will be available in milestone), Internal-Issue-Created (An issue has been created in NextGen's internal issue tracker), RS-6614, triaged
Comments

@ChristopherSchultz
Contributor

Describe the bug
Under certain conditions, instances of sun.net.httpserver.HttpConnection are retained forever after the connection has been closed.

To Reproduce
Setup steps:

  1. Configure a channel whose source listener is a Web Service Listener with all defaults (e.g. Mirth is accepting the SOAP request and ingesting HL7 messages through its standard service).

Steps to reproduce the behavior:

  1. Additional steps (this issue represents a DOS and I would prefer to give details privately; please let me know how best to submit confidential information)

Expected behavior
After connections are closed, they are eventually GC'd.

Actual behavior
Connections pile up forever until the heap is exhausted.

Environment (please complete the following information):

  • OS: Linux kernel 4.14
  • OpenJDK 1.8.292-b10, 64-bit
  • Mirth Connect 3.9.0, also 3.10.1

Workaround(s)
Re-deploying the channel allows all connections to be cleaned up.

Additional Information
This appears to be a leak in Java's built-in SOAP server and not actually a problem with Mirth itself. I'm wondering if Mirth can perform some additional setup operations to allow these connections to be cleaned up properly.

@ChristopherSchultz ChristopherSchultz added the bug label Aug 3, 2021
@narupley
Collaborator

narupley commented Aug 3, 2021

👀

@narupley narupley added the Internal-Issue-Created, RS-6614, and triaged labels Aug 3, 2021
@narupley
Collaborator

narupley commented Aug 4, 2021

After some research, it appears that the default behavior of that internal Sun ServerImpl class is to hold onto "stuck" connections forever, maybe because there's always a chance that they're not really stuck and the network layer is just taking a long time. But you can override that by setting these in your vmoptions:

-Dsun.net.httpserver.maxReqTime=60
-Dsun.net.httpserver.maxRspTime=60

When at least one of those is set, the ServerImpl will start up a separate thread to periodically (once per second, but that can be tweaked too with sun.net.httpserver.timerMillis) purge any old connections.

This issue could be resolved by having Mirth Connect set those system properties by default, unless they're overridden in VM options.
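That suggestion could be sketched as follows. This is a minimal illustration, not Mirth Connect code; the class and method names (`HttpServerTimeoutDefaults`, `applyDefaults`, `setIfAbsent`) are hypothetical, and it assumes the properties are applied early in startup, before the first HttpServer instance is created, since the internal server implementation reads them when it initializes:

```java
// Hypothetical sketch: apply the connection-timeout properties as
// defaults at startup, but only when the user has not already set
// them (e.g. in *.vmoptions).
public final class HttpServerTimeoutDefaults {

    public static void applyDefaults() {
        // Values are in seconds; 60 matches the workaround in this issue.
        setIfAbsent("sun.net.httpserver.maxReqTime", "60");
        setIfAbsent("sun.net.httpserver.maxRspTime", "60");
    }

    private static void setIfAbsent(String key, String value) {
        if (System.getProperty(key) == null) {
            System.setProperty(key, value);
        }
    }

    public static void main(String[] args) {
        applyDefaults();
        System.out.println(System.getProperty("sun.net.httpserver.maxReqTime"));
        System.out.println(System.getProperty("sun.net.httpserver.maxRspTime"));
    }
}
```

Because `setIfAbsent` checks `System.getProperty` first, a user-supplied `-Dsun.net.httpserver.maxReqTime=...` in the VM options still wins over the built-in default.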

@ChristopherSchultz
Contributor Author

This solution appears to work around the problem in dev/test, and it actually looks like it's the official (if undocumented) way to prevent DOS in the built-in HTTP server. All other "real" HTTP servers do this by default. I'll be trying it in production very soon and will be able to confirm that the case-in-the-wild has been resolved as well.

+1 for enabling this by default if the user hasn't already set a value in *.vmoptions.

@ChristopherSchultz
Contributor Author

After 24 hours of observation, this workaround absolutely prevents the memory leak from getting out of control.

@lmillergithub lmillergithub added this to the 4.0.0 milestone Mar 22, 2022
@pladesma pladesma added the Fix-Commited label Mar 29, 2022
@ChristopherSchultz
Contributor Author

I see this has been fixed in 4.0.0. Was the solution to simply enable these automated clean-up threads, or has the underlying implementation been replaced with Jetty?

jonbartels referenced this issue Apr 20, 2022
…b service receiver

Merge in MC/connect from bugfix/ROCKSOLID-6614-memory-leak-in-web-service-listener to development

* commit 'e06b2f430baa073a1a4e80f7196d949efa3b07db':
  ROCKSOLID-6614 Added a fix for a memory leak in web service receiver