Inconsistent behaviour of multiprocessing.shared_memory #116849
Comments
Any news on this?
I just ran into a similar issue sharing memory with a forked process under WSL (Ubuntu) with Python 3.8. After some debugging, here is what I found out:

Whenever SharedMemory.__init__() is called, it registers SharedMemory._name with the resource_tracker. This happens twice for the same name: once during memory creation in the first terminal, and once when attaching to the existing memory in the second terminal. The resource_tracker does not check for duplicates, and both times the same name is added to the cache!

When we unlink() the memory, it removes the mmap and removes one instance of the name from the resource_tracker's cache; the other instance remains in place. When each terminal process ends, its resource_tracker tries to clean up any remaining registered named resources. So if the second terminal ends first, its resource_tracker warns about the still-registered resource and removes its mmap. Then, when the first terminal is about to end and unlink() is called on the memory from there, the corresponding mmap is already gone.

Suggested permanent fix:
Current workaround:
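The workaround commonly applied here is for the attaching side to unregister the segment it did not create, so only the creator's registration survives. A minimal sketch, driving the "second terminal" through a second interpreter; note it relies on the private `_name` attribute (which keeps the leading slash the tracker saw) and on resource_tracker internals that may change:

```python
import subprocess
import sys
from multiprocessing import shared_memory

# "First terminal": create the segment and write to it.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# "Second terminal": attach, read, and unregister before exiting, so the
# attaching interpreter's resource_tracker neither warns about the segment
# nor destroys it at shutdown.
code = (
    "from multiprocessing import resource_tracker, shared_memory\n"
    f"s = shared_memory.SharedMemory(name={shm.name!r})\n"
    "assert bytes(s.buf[:5]) == b'hello'\n"
    "resource_tracker.unregister(s._name, 'shared_memory')\n"
    "s.close()\n"
)
proc = subprocess.run([sys.executable, "-c", code],
                      capture_output=True, text=True)
print("attaching side stderr:", repr(proc.stderr))  # expect no leak warning

# The creator can still clean up normally: the segment was not removed.
shm.close()
shm.unlink()
```

Using `_name` rather than the public `name` property matters: registration happened under the slash-prefixed internal name, and unregistering a different string is silently ineffective.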
Full working equivalent of the example in the docs.
The ResourceTracker of a child process should not raise warnings or try to clean up SharedMemory that was created by, and is still in use by, the parent process, even if the ResourceTracker was asked to track that memory. Otherwise, an error is raised when the parent process later tries to unlink the memory. The previous commit also makes manual unregistration from the resource_tracker unnecessary.
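For reference, Python 3.13 added a `track` keyword to SharedMemory that gives user code this opt-out directly: a process that merely attaches can skip resource_tracker registration altogether. A minimal sketch, guarded so it degrades gracefully on older interpreters:

```python
import sys
from multiprocessing import shared_memory

# Creating side: tracked as usual.
shm = shared_memory.SharedMemory(create=True, size=16)

if sys.version_info >= (3, 13):
    # track=False (3.13+) keeps this handle out of the resource_tracker,
    # so a process that only attaches cannot destroy the creator's segment
    # or warn about it at shutdown.
    attached = shared_memory.SharedMemory(name=shm.name, track=False)
    attached.close()
else:
    print("SharedMemory(track=...) requires Python 3.13+")

# Only the creating side unlinks.
shm.close()
shm.unlink()
```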
Bug report
Bug description:
Example code already exists in the documentation (see "The following example demonstrates a practical use of the SharedMemory class with NumPy arrays, accessing the same numpy.ndarray from two distinct Python shells").
The code executes as expected, as described there, until the terminals are closed, at which point warnings are reported.
When the second terminal is closed:
/home/souradeep/miniconda/envs/py39/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d '
and when the first terminal is closed:
/home/souradeep/miniconda/envs/py39/lib/python3.9/multiprocessing/resource_tracker.py:229: UserWarning: resource_tracker: '/trial': [Errno 2] No such file or directory: '/trial'
This can have implications when one tries to use the feature somewhere in an application (see an example).
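The two-shell scenario above can be condensed into one script by running the attaching side in a second interpreter. A sketch of the failure mode, assuming a CPython that tracks attached segments by default (as 3.9 does); the size is arbitrary:

```python
import subprocess
import sys
from multiprocessing import shared_memory

# "First terminal": create the segment.
shm = shared_memory.SharedMemory(create=True, size=16)

# "Second terminal": an independent interpreter attaches, then exits
# without unlinking -- which is exactly what closing the second shell does.
code = (
    "from multiprocessing import shared_memory\n"
    f"s = shared_memory.SharedMemory(name={shm.name!r})\n"
    "s.close()\n"
)
proc = subprocess.run([sys.executable, "-c", code],
                      capture_output=True, text=True)
# The attaching interpreter's resource_tracker warns about a "leaked"
# shared_memory object and removes the segment on its way out.
print(proc.stderr)

shm.close()
try:
    shm.unlink()
    print("unlink succeeded")
except FileNotFoundError:
    # The creator can no longer unlink: the segment is already gone.
    print("unlink failed: segment already removed")
```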
CPython versions tested on:
3.9
Operating systems tested on:
Linux
Linked PRs