
SpykingCircus2 consuming lots of memory #2202

Closed
@rory-bedford


Hi there,

I'm trying to get SpykingCircus2 to run on a 30 minute tetrode recording. My problem is that it seems to use an awful lot of memory, so it keeps crashing out.

I'm using spikeinterface 0.99.1, hdbscan 0.8.33, and numba 0.58.1, with python 3.8.18.

Here's my recording information:

BinaryFolderRecording: 4 channels - 48.0kHz - 1 segments - 87,078,300 samples 1,814.13s (30.24 minutes) - int16 dtype - 664.35 MiB
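As a sanity check, the numbers in that summary are internally consistent (pure arithmetic, no spikeinterface needed):

```python
samples = 87_078_300
channels = 4
fs = 48_000.0          # sampling rate in Hz
bytes_per_sample = 2   # int16

duration_s = samples / fs
size_mib = samples * channels * bytes_per_sample / 1024**2

print(f"{duration_s:.2f} s, {size_mib:.2f} MiB")  # 1814.13 s, 664.35 MiB
```

So the raw data itself is well under a gigabyte; the ~80 GB must be allocated by the sorter, not the recording.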

I'm running on a CPU cluster with 4 cores, and am updating params['job_kwargs']['n_jobs'] to reflect this. Otherwise I'm using the default parameters.
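For reference, this is roughly the parameter update I'm doing (a minimal sketch; in my actual script the full default dict comes from spikeinterface, only the relevant key is shown here with a placeholder default):

```python
# Placeholder for the sorter's default params dict; in practice this
# comes from spikeinterface's default SpykingCircus2 parameters.
params = {"job_kwargs": {"n_jobs": 1}}

# Match the 4 cores allocated on the cluster node.
params["job_kwargs"]["n_jobs"] = 4

print(params["job_kwargs"]["n_jobs"])  # 4
```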

Here's my resource usage:

State: OUT_OF_MEMORY (exit code 0)
Nodes: 1
Cores per node: 4
CPU Utilized: 00:05:13
CPU Efficiency: 32.20% of 00:16:12 core-walltime
Job Wall-clock time: 00:04:03
Memory Utilized: 79.42 GB
Memory Efficiency: 99.27% of 80.00 GB

Compare this with MountainSort5, run on exactly the same recording with default parameters:

State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 4
CPU Utilized: 00:00:43
CPU Efficiency: 41.35% of 00:01:44 core-walltime
Job Wall-clock time: 00:00:26
Memory Utilized: 1.19 MB
Memory Efficiency: 0.00% of 30.00 GB

As you can see, the memory requirements are completely different.

Does anyone know why this happens and what I can do to fix it? We'd really like to get SpykingCircus2 working.

On spikeinterface 0.98.2 the problem was even worse - it kept requesting memory to allocate an array which was 26 PiB! (I had to look up those units!).

Any help is much appreciated!

Labels

performance (Performance issues/improvements), sorters (Related to sorters module)
