
Optimal job kwargs for write_binary_recording #2217

Closed
@Antho2422

Description


Hi all,

I have a question concerning the job kwargs especially for the function

recording.save()

I am struggling to choose the optimal kwargs. I have the feeling that the process is actually slower when n_jobs > 1, and it should not be.
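Concretely, the job kwargs I am passing look like this (the folder name is just a placeholder, and the values shown are my local-machine settings):

```python
# Job kwargs as passed to recording.save() -- n_jobs=1 with 1 s chunks
# is the configuration that works fine on my local computer.
job_kwargs = dict(
    n_jobs=1,              # number of worker processes
    chunk_duration="1s",   # duration of each chunk written per task
    progress_bar=True,     # show a progress bar while saving
)
# recording.save(folder="preprocessed", **job_kwargs)  # needs spikeinterface + data
```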

Plus, when I run the code on my local computer, saving the recording is not that long (about 3 min for a 20 min recording) with n_jobs = 1.
However, when I run the same code on the same data on a cluster with n_jobs = 28, the process takes ages (only 2-3 iterations per second). I verified that the 28 CPUs are available. I tried changing the chunk_duration parameter from 1s to 10s, but that often causes the following error:

  File "/network/lustre/iss01/apps/lang/anaconda/3/2021.11-2/lib/python3.9/concurrent/futures/process.py", line 681, in submit
    raise BrokenProcessPool(self._broken)
concurrent.futures.process.BrokenProcessPool: A child process terminated abruptly, the process pool is not usable anymore
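My rough understanding is that a larger chunk_duration multiplies the memory each worker holds at once, so with many jobs a cluster memory limit could kill workers and surface as BrokenProcessPool; here is a back-of-the-envelope sketch (the sampling rate, channel count, and dtype are made-up example values, not my actual recording):

```python
def chunk_bytes(chunk_duration_s, sampling_frequency, n_channels, dtype_size=2):
    """Approximate in-memory size of one chunk of raw traces, in bytes."""
    n_samples = int(chunk_duration_s * sampling_frequency)
    return n_samples * n_channels * dtype_size

# Hypothetical recording: 30 kHz, 384 channels, int16 samples (2 bytes)
fs, n_ch = 30_000, 384

one_s = chunk_bytes(1, fs, n_ch)    # ~23 MB per chunk
ten_s = chunk_bytes(10, fs, n_ch)   # ~230 MB per chunk

# With n_jobs = 28 and 10 s chunks, the pool can hold several GB in flight,
# which may exceed a cluster job's memory allocation.
peak = 28 * ten_s
print(one_s, ten_s, peak)
```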

I may be missing something about how to use this function and how to choose the optimal job_kwargs. If somebody could help me, it would be really appreciated 👍

Thank you,

Anthony


Labels: question (General question regarding SI)
