Description
Describe the issue:
I have Dask submitting jobs to HTCondor. They seem to run fine and produce the expected output.
However, the workers crash at the end with the following error:
2023-11-29 14:21:19,635 - distributed.sizeof - WARNING - Sizeof calculation failed. Defaulting to -1 B
Traceback (most recent call last):
File "xxx/python3.10/site-packages/distributed/sizeof.py", line 17, in safe_sizeof
return sizeof(obj)
File "xxx/python3.10/site-packages/dask/utils.py", line 642, in __call__
return meth(arg, *args, **kwargs)
File "xxx/python3.10/site-packages/dask/sizeof.py", line 96, in sizeof_python_dict
+ sizeof(list(d.values()))
File "xxx/python3.10/site-packages/dask/utils.py", line 642, in __call__
return meth(arg, *args, **kwargs)
File "xxx/python3.10/site-packages/dask/sizeof.py", line 59, in sizeof_python_collection
return sys.getsizeof(seq) + sum(map(sizeof, seq))
File "xxx/python3.10/site-packages/dask/utils.py", line 642, in __call__
--> the last two frames repeat in a cycle here <--
RecursionError: maximum recursion depth exceeded
The workers crash because the maximum recursion depth is exceeded. The problem seems to be in the safe_sizeof() function, or in the meth it dispatches to:

    meth = self.dispatch(type(arg))
    return meth(arg, *args, **kwargs)
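
For illustration, here is a minimal sketch of how a self-referential container sends this dispatch into unbounded recursion. It is hypothetical (not reduced from my actual workload), but it follows exactly the sizeof_python_dict -> sizeof_python_collection cycle shown in the traceback:

```python
# Hypothetical reproducer: a dict that contains itself.
from dask.sizeof import sizeof

d = {}
d["self"] = d  # d.values() yields d again, so the size walk never terminates

try:
    # sizeof_python_dict -> sizeof_python_collection -> sizeof(d) -> ...
    sizeof(d)
except RecursionError as e:
    print(e)  # maximum recursion depth exceeded
```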
Minimal Complete Verifiable Example:
None
Environment:
- Dask version: 2023.11.0
- Python version: 3.10.11
- Operating System: AlmaLinux9
- Install method (conda, pip, source): pip
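
For completeness: if the cycle turns out to live in one of my own types, the dispatch shown above can be extended with a flat estimate so the recursive walk is bypassed. A sketch, assuming a hypothetical offending class Node:

```python
import sys

from dask.sizeof import sizeof


class Node:  # hypothetical stand-in for the type carrying the cycle
    def __init__(self):
        self.parent = self  # cyclic reference


@sizeof.register(Node)
def sizeof_node(obj):
    # Return a flat size instead of recursing into the cycle.
    return sys.getsizeof(obj)
```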