Open
Description
In my production environment, I have observed that long-running Spark rewrite files actions can lead to an OutOfMemoryError. Analyzing the heap dump, I noticed a large number of ChildAllocator objects that are referenced only by the RootAllocator. Upon reviewing the code, I found that the ChildAllocator allocated at this point is never released. Is this correct?
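For context, this is the classic leak pattern for parent/child allocators: a child allocator created from the root stays registered with (and strongly referenced by) its parent until it is explicitly closed, so an unclosed child can never be garbage collected. The sketch below illustrates the mechanism with hypothetical stand-in classes (`Root`, `Child`), not the real Arrow API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for RootAllocator/ChildAllocator, showing why
// an unclosed child allocator remains reachable from the root.
class Root {
    final List<Child> children = new ArrayList<>();

    Child newChild(String name) {
        Child c = new Child(this, name);
        children.add(c); // the root holds a strong reference to every child
        return c;
    }
}

class Child implements AutoCloseable {
    final Root parent;
    final String name;

    Child(Root parent, String name) {
        this.parent = parent;
        this.name = name;
    }

    @Override
    public void close() {
        parent.children.remove(this); // deregisters the child from the root
    }
}

public class LeakSketch {
    public static void main(String[] args) {
        Root root = new Root();

        // Leak: the child is never closed, so the root keeps it alive forever.
        root.newChild("leaked");

        // Correct: try-with-resources guarantees close() and deregistration.
        try (Child ok = root.newChild("released")) {
            // ... use the child allocator ...
        }

        // Only the unclosed child is still referenced by the root.
        System.out.println(root.children.size()); // prints 1
    }
}
```

If the ChildAllocator in the code path above is created per rewrite task but never closed, wrapping its use in try-with-resources (or closing it in a finally block) would be the usual fix.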