Description
I would like to report an issue (now resolved on my end) related to parallelization. It occurred while performing motion correction with spikeinterface's built-in motion correction functions, and it is similar to other parallelization issues (#895 and #1034, both now closed).
I encountered the error while running the `localize_peaks()` function:
```
detect peaks: 100%|██████████| 4883/4883 [35:37<00:00, 2.28it/s]
localize peaks: 30%|███ | 1484/4883 [00:55<00:50, 66.67it/s]IOStream.flush timed out
localize peaks: 31%|███▏ | 1531/4883 [1:08:01<2:28:56, 2.67s/it]
Traceback (most recent call last):
  File ~/anaconda3/envs/env_17/lib/python3.9/site-packages/spyder_kernels/py3compat.py:356 in compat_exec
    exec(code, globals, locals)
  File ~/groups/PrimNeu/Aryo/analysis/sort/pipe_sort_21.py:117
    peak_locations = localize_peaks(
  File ~/anaconda3/envs/env_17/lib/python3.9/site-packages/spikeinterface/sortingcomponents/peak_localization.py:61 in localize_peaks
    peak_locations = run_peak_pipeline(recording, peaks, pipeline_nodes, job_kwargs, job_name='localize peaks', squeeze_output=True)
  File ~/anaconda3/envs/env_17/lib/python3.9/site-packages/spikeinterface/sortingcomponents/peak_pipeline.py:247 in run_peak_pipeline
    outputs = processor.run()
  File ~/anaconda3/envs/env_17/lib/python3.9/site-packages/spikeinterface/core/job_tools.py:361 in run
    for res in results:
  File ~/anaconda3/envs/env_17/lib/python3.9/site-packages/tqdm/std.py:1178 in __iter__
    for obj in iterable:
  File ~/anaconda3/envs/env_17/lib/python3.9/concurrent/futures/process.py:562 in _chain_from_iterable_of_lists
    for element in iterable:
  File ~/anaconda3/envs/env_17/lib/python3.9/concurrent/futures/_base.py:609 in result_iterator
    yield fs.pop().result()
  File ~/anaconda3/envs/env_17/lib/python3.9/concurrent/futures/_base.py:446 in result
    return self.__get_result()
  File ~/anaconda3/envs/env_17/lib/python3.9/concurrent/futures/_base.py:391 in __get_result
    raise self._exception
BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
```
I had run this function on many other files without any problem; only one of them triggered this error.

Solution: in `job_kwargs` I set `n_jobs = 4`, and it ran without any problem. The initial value of `n_jobs` was -1, and the server I am using has 144 threads in total. This number may be "optimized" by trying different values to find the one that gives the highest speed.
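As a reference, here is a minimal sketch of the fix, assuming the usual spikeinterface `job_kwargs` convention (`n_jobs`, `chunk_duration`, `progress_bar`); exact keyword names may vary between versions:

```python
# Cap the worker count instead of using n_jobs=-1 (all 144 threads),
# which triggered the BrokenProcessPool error on this server.
job_kwargs = dict(
    n_jobs=4,              # was -1; 4 workers ran without errors here
    chunk_duration="1s",   # assumed chunking option; adjust as needed
    progress_bar=True,
)

# Hypothetical call, mirroring the traceback above:
# peak_locations = localize_peaks(recording, peaks, **job_kwargs)
```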
Hence, for issue #1121, I also suggest decreasing the number of threads (choosing an optimal number of workers) as a possible fix, even though the error there arises from the sorter.
I mention this only as an experience that may help others.
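The "try different values of `n_jobs`" tuning mentioned above can be sketched generically; `run_workload` below is a hypothetical stand-in for whatever parallel call is being tuned (e.g. a wrapper around `localize_peaks`):

```python
import time

def best_n_jobs(run_workload, candidates=(1, 2, 4, 8, 16)):
    """Time run_workload(n_jobs) for each candidate and return the fastest.

    run_workload is any callable taking an n_jobs value; in practice it
    would wrap the parallel spikeinterface call with that worker count.
    """
    timings = {}
    for n in candidates:
        t0 = time.perf_counter()
        run_workload(n)
        timings[n] = time.perf_counter() - t0
    return min(timings, key=timings.get)
```

In practice, a small subset of the data is enough for the timing runs; the fastest `n_jobs` is often well below the full thread count of the machine.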