How to use NEO format in SpikeInterface? #3448
I'm only using the recording right now, but I only saw that `NumpySorting` has a `from_neo_spiketrain` function.
Basically all of neo.rawio is incorporated under the hood. You do `import spikeinterface.extractors as se` and then `recording = se.read_xx()`, where the `xx` is the format you want, for example blackrock, intan, neuralynx, spikeglx, etc. See all formats here.
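As a concrete sketch of the pattern described above (the file name is hypothetical, and you would need the `spikeinterface` package plus a real Blackrock file for this to run):

```python
import spikeinterface.extractors as se

# each neo.rawio-backed format gets its own reader function
recording = se.read_blackrock("session1.ns5")  # hypothetical file path

# the Recording object is lazy: traces are only read when requested
traces = recording.get_traces(start_frame=0, end_frame=30_000)
```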
I've read about these functions in the documentation, but they seem to read the file into SpikeInterface formats. I want to use SpikeInterface with other Python packages like Elephant without converting and writing the files to disk, which could save me a lot of time and make things easier. So I just want to ask whether I can simply use functions to convert Neo-loaded files into SpikeInterface Extractors.
I think you'll need to explain the actual workflow you want. SpikeInterface Recording objects are also lazy when possible. If you want to sort data then it will need to be in a Recording object, and some sorters require files to be written to disk to be used (for example KS1-3, MS5), so you'll have to write files anyway. If you want to do spike train analysis, we haven't gotten around to writing a SpikeInterface -> Neo spiketrain function yet, but you could make it yourself by extracting the spike train information from a Sorting object.
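The "make it yourself" conversion boils down to turning spike sample indices into per-unit arrays of times in seconds, which Neo can then wrap. A numpy-only sketch (the spike frames, unit labels, and sampling rate are made-up data; the `neo.SpikeTrain` construction is left as a comment since it needs the `neo` package):

```python
import numpy as np

sampling_frequency = 30_000.0  # Hz, hypothetical

# hypothetical spike data as a sorter might report it:
spike_frames = np.array([150, 3_000, 45_000, 60_000])  # sample indices
spike_labels = np.array([0, 1, 0, 1])                  # unit ids per spike

# convert frames to seconds, then split per unit
spike_times_s = spike_frames / sampling_frequency
per_unit = {u: spike_times_s[spike_labels == u] for u in np.unique(spike_labels)}

# each entry could then become a Neo spike train, e.g. (not executed here):
# neo.SpikeTrain(per_unit[0], units="s", t_stop=spike_times_s.max())
```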
We are trying to explore possible ways to use different packages in the pipeline. We have Blackrock NSX files and want to read them into Python using Neo to do some preprocessing, and then do a small check using Elephant's visualization functions. After that, we want to put the data into SpikeInterface for further analysis. It would be convenient for us to use one Neo format for the whole process; since we saw that both SpikeInterface and Elephant are based on Neo, we thought we could use the Neo format directly. But it seems you don't have a function to convert between Neo and the SpikeInterface data format, so maybe I can use a numpy array to transfer between them?
SpikeInterface has a variety of preprocessing, so maybe you could do your preprocessing in SpikeInterface? Basically Neo has two formats/APIs. There's neo.io, which works well with Elephant, and then there is neo.rawio, which is basically numpy arrays (typically as memmaps, so you can work with giant files) and is what SpikeInterface uses. So if you work with neo.rawio then it probably makes more sense to just use SpikeInterface directly. If you are using the neo.io format then things are trickier. What preprocessing do you want to do? Maybe we could think of a way to still make this work. But my super simple way of thinking about this is: Elephant <-> neo.io, SpikeInterface <-> neo.rawio.
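The memmap point can be illustrated with plain numpy: the file stays on disk and only the slice you ask for is actually read, which is essentially how the rawio-style access serves traces from giant files. The file path, shape, and dtype here are made up for the demo:

```python
import numpy as np
import os
import tempfile

# create a fake raw binary file: 1000 frames x 4 channels of int16
traces = np.arange(4000, dtype=np.int16).reshape(1000, 4)
path = os.path.join(tempfile.mkdtemp(), "traces.bin")
traces.tofile(path)

# lazy view onto the file: nothing is loaded until sliced
mm = np.memmap(path, dtype=np.int16, mode="r", shape=(1000, 4))
chunk = np.asarray(mm[100:110, :2])  # only this chunk is read from disk
```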
Thanks for replying! I do want to use numpy, so any suggestion would be really appreciated! Right now we are using something else to synchronize the events and neural signals and to map the electrodes, so we can get the right order and format for further analysis. It should also work if we modify the pipeline, but I have to discuss that with my team members. So it's good to know if there's any other way to use just numpy arrays. Thanks again!
Do you guys have a better way to do this, then?
We need a better error for the case above; it should have surfaced way earlier. You should save to binary before running the sorters and most of the analysis. That's basically numpy, but on disk.
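"Save to binary" concretely means dumping the traces as raw bytes plus keeping the metadata (dtype, channel count) needed to read them back. A numpy-only sketch of that round trip, with made-up shapes:

```python
import numpy as np
import os
import tempfile

# hypothetical preprocessed traces: 500 frames x 8 channels, float32
traces = np.random.default_rng(0).normal(size=(500, 8)).astype(np.float32)

# write raw bytes; the file has no header, so dtype/shape must be tracked separately
path = os.path.join(tempfile.mkdtemp(), "recording.raw")
traces.tofile(path)

# read back lazily from disk without loading everything into RAM
reloaded = np.memmap(path, dtype=np.float32, mode="r", shape=(500, 8))
```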
The idea was to try to do this without writing to disk, but I can't think of any way to do that, so I thought maybe someone else would have a trick for it. If we are able to write to disk, then we are back to the original point that it's better to just use the SpikeInterface extractor to generate the correct recording format. I agree on the error though. I had to go through a simulated dataset to find it.
Back in the day, Sam wanted to explicitly prohibit that workflow in some places so I don't think there is a way around it. |
I'll ping @samuelgarcia to respond and if he agrees it is still prohibited we can close this then. |
I think we should find a way to use sorters on in-memory-only objects. Also, let's keep in mind that a pure numpy recording cannot have parallel access, so when n_jobs > 1 the numpy array is copied, which is worse than writing to disk and reading it back.
Hi! I'm trying to build a pipeline combining different Python packages, including SpikeInterface. I want to read the data using neo.rawio since it's supported by SpikeInterface. I wonder how I can get neo.rawio data into SpikeInterface? Also, after using SpikeInterface, how can I export the data back to a Neo format?
I saw someone discussing creating an 'export_to_neo' function in 2019, but it doesn't seem to be in the API or code.
Thanks!