
large total_memory when writing binary file leads to memory error if preprocessing step converts to float32 #2029

Closed
@JoeZiminski

Description


I am trying to choose a chunk size that uses a large percentage of the available system memory*. The determination of chunk_size in job_tools/ensure_chunk_size is based on recording.dtype. However, in some preprocessing steps (e.g. phase_shift; I am unsure about others) the data is temporarily converted to float32, which then leads to a memory error.
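
For illustration, here is a rough sketch (the names and numbers are hypothetical, not the actual job_tools code) of why sizing the chunk from recording.dtype under-budgets memory once a step casts to float32:

```python
import numpy as np

# Minimal sketch: a chunk size derived from total_memory and the recording
# dtype under-estimates the real footprint when a preprocessing step works
# in float32 internally.
num_channels = 384
recording_dtype = np.dtype("int16")      # dtype reported by recording.dtype
processing_dtype = np.dtype("float32")   # dtype used internally by e.g. phase_shift
total_memory_bytes = 32 * 1024**3        # e.g. "use 32 GB"

# chunk size chosen from the recording dtype (2 bytes per sample for int16)
chunk_size = total_memory_bytes // (num_channels * recording_dtype.itemsize)

# actual peak memory once the chunk is cast to float32 (4 bytes per sample)
peak_bytes = chunk_size * num_channels * processing_dtype.itemsize
print(f"requested budget: {total_memory_bytes / 1024**3:.1f} GB, "
      f"peak after float32 cast: {peak_bytes / 1024**3:.1f} GB")
# -> the peak is ~2x the requested budget, hence the memory error
```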

The only fix I can think of is to check the preprocessing steps on the recording and, if any of them work in float32, use that dtype to compute chunk_size (see the sketch below). Maybe there is another workaround?
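
Something along these lines is what I have in mind. This is only a sketch of the idea; `processing_dtypes` stands in for whatever introspection of the preprocessing steps would report and is not an existing SpikeInterface API:

```python
import numpy as np

def safe_chunk_size(total_memory_bytes, num_channels, recording_dtype, processing_dtypes=()):
    """Return a chunk size (in samples) that respects total_memory_bytes even
    if an intermediate step widens the dtype (e.g. int16 -> float32)."""
    # size the chunk from the widest dtype used anywhere in the chain,
    # not just recording.dtype
    itemsize = max([np.dtype(recording_dtype).itemsize]
                   + [np.dtype(d).itemsize for d in processing_dtypes])
    return total_memory_bytes // (num_channels * itemsize)

# e.g. an int16 recording where phase_shift works in float32
print(safe_chunk_size(32 * 1024**3, 384, "int16", processing_dtypes=["float32"]))
```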

*This is because, as I understand it, a larger chunk size is better for reducing batch edge effects. Please let me know if there are other issues I have not considered.

Labels: question (General question regarding SI)
