Description
Hello, I'm using spikeinterface to convert my data from Open Ephys to a binary file for further analysis.
I found that the spikeinterface.core.write_binary_recording() function writes the raw traces directly to binary without applying the scaling factors, such as gain_to_uV and offset_to_uV, that can be read from the recording.
From the project's point of view it makes sense not to modify the raw recording data, and I completely understand why spikeinterface keeps the raw data as it is.
However, from a user's perspective, calling write_binary_recording() and not seeing a return_scaled option makes it feel as though the function applies the scaling automatically, especially since the sorters can be used naturally without considering whether the data is scaled or not. It confused me for a couple of days while I figured out why the converted binary data was "wrong". For now I'm working around it with the following snippet, which is not very elegant.
import spikeinterface as si
import spikeinterface.preprocessing as spre

# read the scaling factors stored on the recording
# (taking [0] assumes all channels share the same gain/offset)
gain_to_uV = recording.get_property('gain_to_uV')[0]
offset_to_uV = recording.get_property('offset_to_uV')[0]

# apply the scaling, then write the scaled traces to binary
recording = spre.scale(recording, gain=gain_to_uV, offset=offset_to_uV)
si.write_binary_recording(recording, file_paths=file_path, dtype=dtype, **job_kwargs)
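For context, the scaling applied above is just a per-sample affine transform (uV = raw * gain + offset). A minimal numpy sketch of what a return_scaled=True option would effectively do before writing; the function name and sample values here are illustrative, not part of the spikeinterface API:

```python
import numpy as np

def scale_to_uV(traces, gain_to_uV, offset_to_uV):
    """Convert raw integer traces to microvolts: uV = raw * gain + offset."""
    return traces.astype("float32") * gain_to_uV + offset_to_uV

# illustrative values: int16 samples with a typical 0.195 uV/bit gain
raw = np.array([[0, 100], [-100, 200]], dtype="int16")
scaled = scale_to_uV(raw, gain_to_uV=0.195, offset_to_uV=0.0)
# e.g. 100 * 0.195 + 0.0 = 19.5 uV
```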
For the sake of user experience, would you consider adding a return_scaled option to the write_binary_recording() function? If that's inconvenient for you, I'd be happy to learn how to submit a PR, since I'm new to GitHub.