Description
When using the native ES-Hadoop support for Spark Streaming, the connector uses transport pools that are isolated by job and host information. These pools remain in the Executor JVMs for the lifetime of the Spark Application. For long-running Spark Applications there should be a mechanism that releases pooled resources once they have sat unused for some time. This is mostly an issue for long-running interactive sessions that create a large number of streaming jobs. Spark Streaming 1.3-1.6 is probably less affected, since the lifecycle of the StreamingContext limits how many streams can be stood up and torn down over a long period, but Spark 2.0 does not appear to have the same constraints.
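For illustration only, here is a minimal sketch of what such an idle-expiry mechanism could look like, assuming pools keyed by job and host. All names here (TransportPool, PooledTransport, borrow, etc.) are hypothetical and do not correspond to actual ES-Hadoop internals:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Hypothetical sketch: a pool that evicts transports idle longer than a TTL,
// so they do not live for the whole Spark Application.
public class TransportPool {

    /** Placeholder for the connector's per-host transport object. */
    public interface PooledTransport {
        void close();
    }

    /** A pooled transport plus the last time it was borrowed. */
    private static final class Entry {
        final PooledTransport transport;
        volatile long lastUsedMillis = System.currentTimeMillis();

        Entry(PooledTransport transport) {
            this.transport = transport;
        }
    }

    // Keyed by (jobId, host) so concurrent jobs on the same executor
    // do not share transports.
    private final Map<String, Entry> pool = new ConcurrentHashMap<>();
    private final long idleTimeoutMillis;
    private final ScheduledExecutorService reaper =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "transport-pool-reaper");
                t.setDaemon(true); // must not keep the executor JVM alive
                return t;
            });

    public TransportPool(long idleTimeoutMillis) {
        this.idleTimeoutMillis = idleTimeoutMillis;
        // Periodically close and evict transports that have sat idle
        // longer than the timeout.
        reaper.scheduleAtFixedRate(this::expireIdle,
                idleTimeoutMillis, idleTimeoutMillis, TimeUnit.MILLISECONDS);
    }

    public PooledTransport borrow(String jobId, String host,
                                  Supplier<PooledTransport> factory) {
        String key = jobId + "|" + host;
        Entry entry = pool.computeIfAbsent(key, k -> new Entry(factory.get()));
        entry.lastUsedMillis = System.currentTimeMillis();
        return entry.transport;
    }

    private void expireIdle() {
        long cutoff = System.currentTimeMillis() - idleTimeoutMillis;
        for (Iterator<Map.Entry<String, Entry>> it = pool.entrySet().iterator();
             it.hasNext(); ) {
            Map.Entry<String, Entry> e = it.next();
            if (e.getValue().lastUsedMillis < cutoff) {
                it.remove();
                e.getValue().transport.close();
            }
        }
    }
}
```

A production version would also need to guard against a borrow racing with eviction (e.g. by reference counting), but the sketch shows the basic idea: track last use per pool entry and let a daemon thread reclaim anything idle past the timeout.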