
Use sphinx gallery machinery to directly build how-to file #3472

Open · wants to merge 3 commits into main
1 change: 1 addition & 0 deletions .gitignore
@@ -117,6 +117,7 @@ examples/tutorials/*.svg
 doc/_build/*
 doc/tutorials/*
 doc/sources/*
+doc/how_to_new/*
 *sg_execution_times.rst

 examples/getting_started/tmp_*
4 changes: 2 additions & 2 deletions doc/conf.py
@@ -120,8 +120,8 @@
 # for sphinx gallery plugin
 sphinx_gallery_conf = {
     'only_warn_on_example_error': True,
-    'examples_dirs': ['../examples/tutorials'],
-    'gallery_dirs': ['tutorials' ], # path where to save gallery generated examples
+    'examples_dirs': ['../examples/tutorials', '../examples/how_to_new'],
+    'gallery_dirs': ['tutorials', 'how_to_new'], # path where to save gallery generated examples
     'subsection_order': ExplicitOrder([
         '../examples/tutorials/core',
         '../examples/tutorials/extractors',
125 changes: 0 additions & 125 deletions doc/how_to/combine_recordings.rst

This file was deleted.

2 changes: 1 addition & 1 deletion doc/how_to/index.rst
@@ -10,8 +10,8 @@ Guides on how to solve specific, short problems in SpikeInterface. Learn how to.
     handle_drift
     analyze_neuropixels
     load_matlab_data
-    combine_recordings
     process_by_channel_group
     load_your_data_into_sorting
     benchmark_with_hybrid_recordings
     drift_with_lfp
+    ../how_to_new/plot_combine_recordings
6 changes: 6 additions & 0 deletions examples/how_to_new/README.rst
@@ -0,0 +1,6 @@
Unused
======

This file is required by Sphinx-Gallery to build the gallery pages. However, we do not
use the generated gallery index for the How To section, so this file's content is unused.
Instead, a custom .rst file in the How To folder points to the sphinx-gallery outputs.
92 changes: 92 additions & 0 deletions examples/how_to_new/plot_combine_recordings.py
@@ -0,0 +1,92 @@
"""
====================================
Combine recordings in SpikeInterface
====================================

In this tutorial, we will walk through combining multiple recording objects. Recordings are
sometimes split across multiple files, either because of acquisition-software settings (e.g.,
Intan software starts a new file every minute by default) or because the experimenter splits
the recording into multiple files for different experimental conditions. If the probe has not
been moved, however, it usually makes sense to combine these individual recording objects into
one recording object before sorting.

------------
Why Combine?
------------

Combining your data into a single recording allows you to have consistent labels (`unit_ids`) across the whole recording.

Spike sorters group the spikes within a recording into units. Thus, if multiple `Recording`
objects come from the exact same probe location within some tissue and were recorded
continuously in time, the underlying units are the same across the `Recordings`. But if we
sort each recording separately, the unit ids given by the sorter will not match between the
resulting `Sorting` objects, and extensive post-processing would be needed to figure out which
units are actually the same. By combining everything into one `Recording`, all spikes are
sorted into the same pool of units.

---------------------------------------
Combining recordings continuous in time
---------------------------------------

Some file formats (e.g., Intan) automatically start a new file every minute or every few
minutes (with a user-controllable setting). Other times an experimenter splits their
recording for experimental reasons. SpikeInterface provides two
tools for bringing together these files into one `Recording` object.
"""

# %%
# ------------------------
# Concatenating Recordings
# ------------------------

# First, let's cover concatenating recordings together. This will generate a mono-segment
# recording object. Here we simulate two recordings; in a real workflow these would be
# loaded from disk (e.g., a series of Intan files read with `se.read_intan`).

import spikeinterface as si  # this is only core
import spikeinterface.extractors as se  # only needed when reading real files from disk

recording_one, _ = si.generate_ground_truth_recording(durations=[25])
recording_two, _ = si.generate_ground_truth_recording(durations=[25])

print(recording_one)

print(recording_two)

# %%
# Next, we will concatenate these recordings together.

concatenated_recording = si.concatenate_recordings([recording_one, recording_two])

print(concatenated_recording)
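
# %%
# A quick check (a sketch, assuming the two 25-second recordings generated above):
# the concatenated object has a single segment whose duration is the sum of its parts.

print(concatenated_recording.get_num_segments())  # 1
print(concatenated_recording.get_total_duration())  # ~50 s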

# %%
# If we know that we will deal with a lot of files, we can work our way through a
# series of them relatively quickly. Here we simulate four recordings; in practice
# each one might be read from a separate file on disk.

list_of_recordings = []
for _ in range(4):
    rec, _ = si.generate_ground_truth_recording(durations=[25])
    list_of_recordings.append(rec)

recording = si.concatenate_recordings(list_of_recordings)
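
# %%
# A quick sanity check (a sketch, assuming the four 25-second recordings above):
# the concatenated recording contains the samples of all four parts.

total_samples = sum(rec.get_num_samples() for rec in list_of_recordings)
assert recording.get_num_samples() == total_samples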

# %%
# -----------------
# Append Recordings
# -----------------
#
# If you wish to keep each recording as a separate segment (e.g., baseline, stim,
# post-stim), you can use `append` instead of `concatenate`. This has the benefit of
# keeping different parts of the data separate, but it is important to note that not
# all sorters can handle multi-segment objects.

recording = si.append_recordings([recording_one, recording_two])

print(recording)
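
# %%
# Unlike concatenation, appending preserves the segment boundaries (a sketch, assuming
# the two recordings above): there are now two segments, and per-segment queries take
# a `segment_index` argument.

print(recording.get_num_segments())  # 2
print(recording.get_num_samples(segment_index=0))
print(recording.get_num_samples(segment_index=1))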

# %%
# --------
# Pitfalls
# --------
#
# It's important to remember that these operations are order-dependent: recordings are
# combined in the order they are passed. So:

recording_forward = si.concatenate_recordings([recording_one, recording_two])
recording_backward = si.concatenate_recordings([recording_two, recording_one])

# %%
# This matters because spike times are reported relative to the start of the combined
# recording, so the same spike will occur at a different time in `recording_forward`
# than in `recording_backward`.
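
# %%
# For example (a sketch using the recordings above), a sample at frame `f` within
# `recording_two` lands at frame `f + recording_one.get_num_samples()` in
# `recording_forward`, but at frame `f` in `recording_backward`.

offset = recording_one.get_num_samples()
f = 1000
print(f + offset)  # position of this sample in recording_forward
print(f)           # position of this sample in recording_backward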