Description
More and more methods now rely on torch (motion estimation, some peak detectors), and I have a working torch implementation of the SVD convolution used in the matching engines (wobble, circus-omp) that is faster. However, while trying to push that into main, I ran into some problems that I think are worth looking into.
The problem is that it would be good to have some mechanism, at the global level of spikeinterface, that would allow us to configure torch (whether it should be used at all, and on which device). This is particularly important because when using torch with a GPU device, one needs to spawn processes, while forking would still be fine if the device is cpu. Currently, for example in the peak detectors (I think this was done by alessio), the mp_context is hardcoded for some nodes, while it could (should?) be chosen depending on the torch context (whether we are using it, and possibly how).
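To make the fork/spawn point concrete, here is a minimal sketch of what such a device-aware choice could look like. The helper name `choose_mp_context` is hypothetical, not an existing spikeinterface function:

```python
def choose_mp_context(use_torch: bool, device: str):
    """Hypothetical helper: pick a multiprocessing start method from the torch setup.

    CUDA contexts cannot safely be shared across forked processes, so a GPU
    device forces "spawn"; otherwise we return None and let the platform
    default (fork on Linux) apply.
    """
    if use_torch and device.startswith("cuda"):
        return "spawn"
    return None
```

Nodes could then derive their `mp_context` from the global torch configuration instead of hardcoding it.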
So why not, similarly to the global job_kwargs in core/globals, introduce a global torch_kwargs dictionary (with appropriate set/get_global_torch_kwargs methods) holding keys such as {"use_torch": bool, "device": str}?
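A rough sketch of what that could look like, modeled on the existing job_kwargs globals in spikeinterface.core (all names here are assumptions, not existing API):

```python
# Hypothetical module-level torch configuration, analogous to the
# global job_kwargs in core/globals.
global_torch_kwargs = {"use_torch": False, "device": "cpu"}

def set_global_torch_kwargs(**torch_kwargs):
    """Update the global torch configuration, rejecting unknown keys."""
    for key in torch_kwargs:
        if key not in ("use_torch", "device"):
            raise ValueError(f"Unknown torch kwarg: {key}")
    global_torch_kwargs.update(torch_kwargs)

def get_global_torch_kwargs():
    """Return a copy of the current global torch configuration."""
    return dict(global_torch_kwargs)
```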
If we had such a dict, we could then add helper functions like split_torch_kwargs() (similar to split_job_kwargs()) to simplify the signatures of all functions that might rely on torch:
```python
method_kwargs, job_kwargs = split_job_kwargs(kwargs)
method_kwargs, torch_kwargs = split_torch_kwargs(method_kwargs)
```
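For illustration, split_torch_kwargs could mirror the split_job_kwargs pattern, pulling the torch-related keys out of a mixed kwargs dict and filling in missing ones from defaults (this is a sketch, not a proposed final implementation):

```python
# Keys that would belong to the hypothetical torch_kwargs dict.
_torch_keys = ("use_torch", "device")

def split_torch_kwargs(kwargs, default_torch_kwargs=None):
    """Split a mixed kwargs dict into (method_kwargs, torch_kwargs).

    Torch-related keys are extracted; any missing ones are filled from the
    defaults (in practice these would come from get_global_torch_kwargs()).
    """
    if default_torch_kwargs is None:
        default_torch_kwargs = {"use_torch": False, "device": "cpu"}
    method_kwargs = {k: v for k, v in kwargs.items() if k not in _torch_keys}
    torch_kwargs = dict(default_torch_kwargs)
    torch_kwargs.update({k: v for k, v in kwargs.items() if k in _torch_keys})
    return method_kwargs, torch_kwargs
```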
What do you think?