Description
I am trying to choose a chunk size that uses a large percentage of the available system memory*. The `chunk_size` computed by `ensure_chunk_size` in `job_tools` is based on `recording.dtype`. However, some preprocessing steps (e.g. `phase_shift`; I am unsure of others) temporarily convert the data to `float32`, and this then leads to a memory error.
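For concreteness, here is a minimal arithmetic sketch of the failure mode, assuming chunk size is budgeted as memory divided by bytes per frame from the recording's dtype (the numbers and variable names are hypothetical, not the actual `ensure_chunk_size` internals):

```python
import numpy as np

# Hypothetical example: an int16 recording with 384 channels and an
# 8 GB memory budget.
total_memory = 8e9
n_channels = 384

# chunk_size derived from recording.dtype (int16, 2 bytes per sample):
chunk_size = int(total_memory / (np.dtype("int16").itemsize * n_channels))

# If a step like phase_shift casts the chunk to float32 (4 bytes per
# sample), the same chunk transiently needs twice the budgeted memory:
peak_memory = chunk_size * np.dtype("float32").itemsize * n_channels
print(peak_memory / total_memory)  # ~2.0, i.e. double the intended budget
```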
The only fix I can think of is to check the preprocessing steps applied to the recording and, if any of them works in `float32`, use `float32` to compute `chunk_size`. Maybe there is another workaround?
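A minimal sketch of that idea, assuming the caller can supply the names of the steps they applied while building the preprocessing chain. `FLOAT32_INTERNAL_STEPS`, `effective_itemsize`, and `chunk_size_for_budget` are hypothetical helpers, not existing SpikeInterface API (`get_dtype()` and `get_num_channels()` are real recording methods):

```python
import numpy as np

# Hypothetical registry: steps believed to work in float32 internally even
# when their output dtype matches the input. phase_shift is the one
# reported here; any others would need to be confirmed.
FLOAT32_INTERNAL_STEPS = {"phase_shift"}

def effective_itemsize(recording, step_names):
    # step_names: names of the preprocessing steps applied to `recording`,
    # collected by the caller while building the chain. If any step works
    # in float32, budget chunks for float32 rather than recording.dtype.
    if FLOAT32_INTERNAL_STEPS & set(step_names):
        return np.dtype("float32").itemsize
    return np.dtype(recording.get_dtype()).itemsize

def chunk_size_for_budget(recording, total_memory_bytes, step_names, n_jobs=1):
    # Same dtype-based sizing as before, but with the worst-case itemsize.
    bytes_per_frame = effective_itemsize(recording, step_names) * recording.get_num_channels()
    return int(total_memory_bytes / (bytes_per_frame * n_jobs))
```

The resulting `chunk_size` could then be passed as a job kwarg (e.g. to `save()`) in place of `total_memory`, so the `float32` intermediate stays within the budget.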
*This is because, as I understand it, a larger chunk size reduces edge effects at chunk boundaries. Please let me know if there are other issues I have not considered.