
Issue saving large preprocessed recording to binary file #1922

Closed
@florgf88


Hello! I'm a new user of SpikeInterface, currently setting up a pipeline for processing Neuropixels 1.0 data. I'm experiencing an issue with saving the preprocessed recording (phase-shift correction, high-pass filtering, and common referencing applied to the raw data) before running Kilosort. I didn't have a problem with a smaller dataset, but with this 2.5-hour recording the process seems stuck: it doesn't give an error message, and the operation is not complete after several hours.

Here is the code:

import spikeinterface.extractors as se
import spikeinterface.preprocessing as spre
import spikeinterface.sorters as ss

# Load the AP stream and build the preprocessing chain (lazy, not yet computed)
ap_raw = se.read_spikeglx(folder_path=base_folder, stream_id='imec0.ap')
ap_process = spre.phase_shift(ap_raw)
ap_process = spre.highpass_filter(ap_process)
ap_process = spre.common_reference(ap_process, reference='global', operator='median')

# Save the preprocessed recording to a binary folder (this is the step that hangs)
job_kwargs = dict(n_jobs=-1, chunk_duration="1s", progress_bar=True)
recording_saved = ap_process.save(folder=base_folder / "preprocessed", verbose=True, **job_kwargs)

sorter_params = dict(delete_tmp_files=False, n_jobs=-1, chunk_duration="1s", progress_bar=True)

sorting_KS25 = ss.run_sorter('kilosort2_5', recording_saved,
                             output_folder=base_folder / 'results_KS25',
                             verbose=True, **sorter_params)
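For scale, here is a rough back-of-envelope of what that save call has to write, assuming the standard Neuropixels 1.0 AP band format (384 channels, 30 kHz sampling, int16 samples):

```python
# Approximate size of the preprocessed binary file, assuming Neuropixels 1.0
# AP band: 384 channels, 30 kHz sampling rate, int16 (2-byte) samples.
n_channels = 384
fs_hz = 30_000
duration_s = 2.5 * 3600          # 2.5-hour recording
bytes_per_sample = 2

total_bytes = int(n_channels * fs_hz * duration_s * bytes_per_sample)
total_gib = total_bytes / 1024**3
n_chunks = int(duration_s)       # chunk_duration="1s" -> one chunk per second

print(f"{total_gib:.1f} GiB in {n_chunks} chunks")
# -> 193.1 GiB in 9000 chunks
```

So the save has to stream roughly 190 GiB through the worker pool, which is why it takes noticeably longer than with the smaller dataset even when nothing is wrong.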

And here is a screenshot of the output folder:

[image: contents of the output folder]

I would appreciate any help or suggestions on how to send the data to Kilosort. Many thanks!

Metadata

Labels

concurrency (Related to parallel processing)
