First pass of deprecation removals for 0.103.0 #3993

Open · wants to merge 8 commits into base: main

2 changes: 1 addition & 1 deletion doc/get_started/quickstart.rst
@@ -336,7 +336,7 @@ Alternatively we can pass a full dictionary containing the parameters:

# parameters set by params dictionary
sorting_TDC_2 = ss.run_sorter(
-    sorter_name="tridesclous", recording=recording_preprocessed, output_folder="tdc_output2", **other_params
+    sorter_name="tridesclous", recording=recording_preprocessed, folder="tdc_output2", **other_params
)
print(sorting_TDC_2)

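Across the documentation the change is the same keyword rename, output_folder to folder. A minimal before/after sketch of a call site (sorter and folder names are illustrative, not taken from this PR):

    import spikeinterface.sorters as ss

    # up to 0.102 this spelling emitted a DeprecationWarning; it is removed in 0.103.0
    # sorting = ss.run_sorter(sorter_name="tridesclous", recording=recording, output_folder="tdc_output")

    # 0.103.0 onwards
    sorting = ss.run_sorter(sorter_name="tridesclous", recording=recording, folder="tdc_output")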
2 changes: 1 addition & 1 deletion doc/how_to/analyze_neuropixels.rst
@@ -567,7 +567,7 @@ In this example:
# run kilosort2.5 without drift correction
params_kilosort2_5 = {'do_correction': False}

-sorting = si.run_sorter('kilosort2_5', rec, output_folder=base_folder / 'kilosort2.5_output',
+sorting = si.run_sorter('kilosort2_5', rec, folder=base_folder / 'kilosort2.5_output',
                        docker_image=True, verbose=True, **params_kilosort2_5)

.. code:: ipython3
2 changes: 1 addition & 1 deletion doc/how_to/process_by_channel_group.rst
@@ -160,7 +160,7 @@ sorting objects in a dictionary for later use.
sorting = run_sorter(
    sorter_name='kilosort2',
    recording=split_preprocessed_recording,
-    output_folder=f"folder_KS2_group{group}"
+    folder=f"folder_KS2_group{group}"
)
sortings[group] = sorting

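For context, the how-to obtains split_preprocessed_recording one step earlier by splitting on the channel group property; a hedged sketch of that step (variable names assumed from the surrounding page):

    # one sub-recording per value of the "group" channel property,
    # returned as a dict {group_value: sub_recording}
    split_preprocessed_recordings = preprocessed_recording.split_by("group")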
34 changes: 17 additions & 17 deletions doc/modules/sorters.rst
@@ -55,15 +55,15 @@ to easily run spike sorters:
from spikeinterface.sorters import run_sorter

# run Tridesclous
-sorting_TDC = run_sorter(sorter_name="tridesclous", recording=recording, output_folder="/folder_TDC")
+sorting_TDC = run_sorter(sorter_name="tridesclous", recording=recording, folder="/folder_TDC")
# run Kilosort2.5
-sorting_KS2_5 = run_sorter(sorter_name="kilosort2_5", recording=recording, output_folder="/folder_KS2_5")
+sorting_KS2_5 = run_sorter(sorter_name="kilosort2_5", recording=recording, folder="/folder_KS2_5")
# run IronClust
-sorting_IC = run_sorter(sorter_name="ironclust", recording=recording, output_folder="/folder_IC")
+sorting_IC = run_sorter(sorter_name="ironclust", recording=recording, folder="/folder_IC")
# run pyKilosort
-sorting_pyKS = run_sorter(sorter_name="pykilosort", recording=recording, output_folder="/folder_pyKS")
+sorting_pyKS = run_sorter(sorter_name="pykilosort", recording=recording, folder="/folder_pyKS")
# run SpykingCircus
-sorting_SC = run_sorter(sorter_name="spykingcircus", recording=recording, output_folder="/folder_SC")
+sorting_SC = run_sorter(sorter_name="spykingcircus", recording=recording, folder="/folder_SC")


Then the output, which is a :py:class:`~spikeinterface.core.BaseSorting` object, can be easily
@@ -87,10 +87,10 @@ Spike-sorter-specific parameters can be controlled directly from the

.. code-block:: python

-sorting_TDC = run_sorter(sorter_name='tridesclous', recording=recording, output_folder="/folder_TDC",
+sorting_TDC = run_sorter(sorter_name='tridesclous', recording=recording, folder="/folder_TDC",
                         detect_threshold=8.)

-sorting_KS2_5 = run_sorter(sorter_name="kilosort2_5", recording=recording, output_folder="/folder_KS2_5"
+sorting_KS2_5 = run_sorter(sorter_name="kilosort2_5", recording=recording, folder="/folder_KS2_5",
                           do_correction=False, preclust_threshold=6, freq_min=200.)


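The sorter-specific keyword arguments accepted by each sorter can be listed up front. A hedged sketch using the sorters-module helper (assuming it is exported as below in the installed version):

    from spikeinterface.sorters import get_default_sorter_params

    # editable dict of defaults; pass (a subset of) it back to run_sorter
    params = get_default_sorter_params("kilosort2_5")
    print(params)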
@@ -193,7 +193,7 @@ The following code creates a test recording and runs a containerized spike sorter:

sorting = ss.run_sorter(sorter_name='kilosort3',
                        recording=test_recording,
-                       output_folder="kilosort3",
+                       folder="kilosort3",
                        singularity_image=True)

print(sorting)
@@ -208,7 +208,7 @@ To run in Docker instead of Singularity, use ``docker_image=True``.
.. code-block:: python

sorting = run_sorter(sorter_name='kilosort3', recording=test_recording,
-                    output_folder="/tmp/kilosort3", docker_image=True)
+                    folder="/tmp/kilosort3", docker_image=True)

To use a specific image, set either ``docker_image`` or ``singularity_image`` to a string,
e.g. ``singularity_image="spikeinterface/kilosort3-compiled-base:0.1.0"``.
@@ -217,7 +217,7 @@ e.g. ``singularity_image="spikeinterface/kilosort3-compiled-base:0.1.0"``.

sorting = run_sorter(sorter_name="kilosort3",
recording=test_recording,
output_folder="kilosort3",
folder="kilosort3",
singularity_image="spikeinterface/kilosort3-compiled-base:0.1.0")


@@ -301,10 +301,10 @@ an :code:`engine` that supports parallel processing (such as :code:`joblib` or :
another_recording = ...

job_list = [
-    {'sorter_name': 'tridesclous', 'recording': recording, 'output_folder': 'folder1', 'detect_threshold': 5.},
-    {'sorter_name': 'tridesclous', 'recording': another_recording, 'output_folder': 'folder2', 'detect_threshold': 5.},
-    {'sorter_name': 'herdingspikes', 'recording': recording, 'output_folder': 'folder3', 'clustering_bandwidth': 8., 'docker_image': True},
-    {'sorter_name': 'herdingspikes', 'recording': another_recording, 'output_folder': 'folder4', 'clustering_bandwidth': 8., 'docker_image': True},
+    {'sorter_name': 'tridesclous', 'recording': recording, 'folder': 'folder1', 'detect_threshold': 5.},
+    {'sorter_name': 'tridesclous', 'recording': another_recording, 'folder': 'folder2', 'detect_threshold': 5.},
+    {'sorter_name': 'herdingspikes', 'recording': recording, 'folder': 'folder3', 'clustering_bandwidth': 8., 'docker_image': True},
+    {'sorter_name': 'herdingspikes', 'recording': another_recording, 'folder': 'folder4', 'clustering_bandwidth': 8., 'docker_image': True},
]

# run in loop
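# a hedged sketch of the step this page takes next: execute the whole list
# (run_sorter_jobs is assumed importable from spikeinterface.sorters;
# engine="joblib" or "processpoolexecutor" would run the jobs in parallel)
sortings = run_sorter_jobs(job_list=job_list, engine="loop")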
@@ -380,7 +380,7 @@ In this example, we create a 16-channel recording with 4 tetrodes:
# here the result is a dict of sorting objects
sortings = {}
for group, sub_recording in recordings.items():
-    sorting = run_sorter(sorter_name='kilosort2', recording=recording, output_folder=f"folder_KS2_group{group}")
+    sorting = run_sorter(sorter_name='kilosort2', recording=sub_recording, folder=f"folder_KS2_group{group}")
sortings[group] = sorting

**Option 2 : Automatic splitting**
@@ -390,7 +390,7 @@
# here the result is one sorting that aggregates all sub sorting objects
aggregate_sorting = run_sorter_by_property(sorter_name='kilosort2', recording=recording_4_tetrodes,
                                           grouping_property='group',
-                                          working_folder='working_path')
+                                          folder='working_path')


Handling multi-segment recordings
@@ -546,7 +546,7 @@ From the user's perspective, they behave exactly like the external sorters:

.. code-block:: python

sorting = run_sorter(sorter_name="spykingcircus2", recording=recording, output_folder="/tmp/folder")
sorting = run_sorter(sorter_name="spykingcircus2", recording=recording, folder="/tmp/folder")


Contributing
2 changes: 1 addition & 1 deletion examples/tutorials/core/plot_1_recording_extractor.py
@@ -122,7 +122,7 @@
##############################################################################
# You can also get a recording with a subset of channels (i.e. a channel slice):

-recording4 = recording3.channel_slice(channel_ids=["a", "c", "e"])
+recording4 = recording3.select_channels(channel_ids=["a", "c", "e"])
print(recording4)
print(recording4.get_channel_ids())

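select_channels keeps the original channel ids; the renaming half of the removed channel_slice lives in rename_channels. A hedged sketch continuing the tutorial objects (the new ids are illustrative and must cover every channel of recording4):

    recording5 = recording4.rename_channels(new_channel_ids=["a2", "c2", "e2"])
    print(recording5.get_channel_ids())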
4 changes: 1 addition & 3 deletions examples/tutorials/qualitymetrics/plot_3_quality_metrics.py
@@ -9,12 +9,10 @@

import spikeinterface.core as si
import spikeinterface.extractors as se
-from spikeinterface.postprocessing import compute_principal_components
from spikeinterface.qualitymetrics import (
    compute_snrs,
    compute_firing_rates,
    compute_isi_violations,
-    calculate_pc_metrics,
    compute_quality_metrics,
)

@@ -70,7 +68,7 @@

##############################################################################
# Some metrics are based on the principal component scores, so the extension
-# need to be computed before. For instance:
+# must be computed before. For instance:

analyzer.compute("principal_components", n_components=3, mode="by_channel_global", whiten=True)

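With the extension computed, the PC-based metrics go through the same compute_quality_metrics entry point that replaces the removed calculate_pc_metrics import. A hedged sketch (the metric names are examples of PCA-based metrics, not an exhaustive list):

    metrics = compute_quality_metrics(analyzer, metric_names=["isolation_distance", "d_prime"])
    print(metrics)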
2 changes: 1 addition & 1 deletion installation_tips/check_your_install.py
@@ -21,7 +21,7 @@ def _run_one_sorter_and_analyzer(sorter_name):
job_kwargs = dict(n_jobs=-1, progress_bar=True, chunk_duration="1s")
import spikeinterface.full as si
recording = si.load_extractor('./toy_example_recording')
-sorting = si.run_sorter(sorter_name, recording, output_folder=f'./sorter_with_{sorter_name}', verbose=False)
+sorting = si.run_sorter(sorter_name, recording, folder=f'./sorter_with_{sorter_name}', verbose=False)

sorting_analyzer = si.create_sorting_analyzer(sorting, recording,
format="binary_folder", folder=f"./analyzer_with_{sorter_name}",
@@ -53,7 +53,7 @@ def run(self, **job_kwargs):
        sorting = run_sorter(
            sorter_name,
            recording,
-            output_folder=self.sorter_folder,
+            folder=self.sorter_folder,
            **sorter_params,
            delete_output_folder=False,
        )
36 changes: 0 additions & 36 deletions src/spikeinterface/core/baserecording.py
@@ -369,21 +369,6 @@ def get_traces(
        traces = traces.astype("float32", copy=False) * gains + offsets
        return traces

-    def has_scaled_traces(self) -> bool:
-        """Checks if the recording has scaled traces
-
-        Returns
-        -------
-        bool
-            True if the recording has scaled traces, False otherwise
-        """
-        warnings.warn(
-            "`has_scaled_traces` is deprecated and will be removed in 0.103.0. Use has_scaleable_traces() instead",
-            category=DeprecationWarning,
-            stacklevel=2,
-        )
-        return self.has_scaled()

    def get_time_info(self, segment_index=None) -> dict:
        """
        Retrieves the timing attributes for a given segment index. As with
@@ -725,17 +710,6 @@ def rename_channels(self, new_channel_ids: list | np.array | tuple) -> "BaseRecording":

        return ChannelSliceRecording(self, renamed_channel_ids=new_channel_ids)

-    def _channel_slice(self, channel_ids, renamed_channel_ids=None):
-        from .channelslice import ChannelSliceRecording
-
-        warnings.warn(
-            "Recording.channel_slice will be removed in version 0.103, use `select_channels` or `rename_channels` instead.",
-            DeprecationWarning,
-            stacklevel=2,
-        )
-        sub_recording = ChannelSliceRecording(self, channel_ids, renamed_channel_ids=renamed_channel_ids)
-        return sub_recording

    def _remove_channels(self, remove_channel_ids):
        from .channelslice import ChannelSliceRecording

@@ -878,8 +852,6 @@ def binary_compatible_with(
        time_axis=None,
        file_paths_length=None,
        file_offset=None,
-        file_suffix=None,
-        file_paths_lenght=None,
    ):
"""
Check is the recording is binary compatible with some constrain on
Expand All @@ -891,14 +863,6 @@ def binary_compatible_with(
* file_suffix
"""

-        # spelling typo need to fix
-        if file_paths_lenght is not None:
-            warnings.warn(
-                "`file_paths_lenght` is deprecated and will be removed in 0.103.0 please use `file_paths_length`"
-            )
-            if file_paths_length is None:
-                file_paths_length = file_paths_lenght

        if not self.is_binary_compatible():
            return False

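For downstream code that still calls the removed methods, the migration is mechanical; the replacements are the ones named in the deleted deprecation warnings. A hedged sketch (recording stands for any BaseRecording instance):

    # recording.has_scaled_traces()  ->  recording.has_scaleable_traces()
    # recording.channel_slice(ids)   ->  recording.select_channels(ids)
    # file_paths_lenght=...          ->  file_paths_length=...
    if recording.has_scaleable_traces():
        sub_recording = recording.select_channels(recording.get_channel_ids()[:4])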
45 changes: 0 additions & 45 deletions src/spikeinterface/core/baserecordingsnippets.py
@@ -51,14 +51,6 @@ def has_scaleable_traces(self) -> bool:
        else:
            return True

-    def has_scaled(self):
-        warn(
-            "`has_scaled` has been deprecated and will be removed in 0.103.0. Please use `has_scaleable_traces()`",
-            category=DeprecationWarning,
-            stacklevel=2,
-        )
-        return self.has_scaleable_traces()

    def has_probe(self) -> bool:
        return "contact_vector" in self.get_property_keys()

@@ -69,9 +61,6 @@ def is_filtered(self):
        # is_filtered is handled with an annotation
        return self._annotations.get("is_filtered", False)

-    def _channel_slice(self, channel_ids, renamed_channel_ids=None):
-        raise NotImplementedError

    def set_probe(self, probe, group_mode="by_probe", in_place=False):
        """
        Attach a list of Probe objects to a recording.
@@ -234,21 +223,6 @@ def _set_probes(self, probe_or_probegroup, group_mode="by_probe", in_place=False):

        return sub_recording

-    def set_probes(self, probe_or_probegroup, group_mode="by_probe", in_place=False):
-
-        warning_msg = (
-            "`set_probes` is now a private function and the public function will be "
-            "removed in 0.103.0. Please use `set_probe` or `set_probegroup` instead"
-        )
-
-        warn(warning_msg, category=DeprecationWarning, stacklevel=2)
-
-        sub_recording = self._set_probes(
-            probe_or_probegroup=probe_or_probegroup, group_mode=group_mode, in_place=in_place
-        )
-
-        return sub_recording

    def get_probe(self):
        probes = self.get_probes()
        assert len(probes) == 1, "there are several probe use .get_probes() or get_probegroup()"
@@ -441,25 +415,6 @@ def planarize(self, axes: str = "xy"):

        return recording2d

-    # utils
-    def channel_slice(self, channel_ids, renamed_channel_ids=None):
-        """
-        Returns a new object with sliced channels.
-
-        Parameters
-        ----------
-        channel_ids : np.array or list
-            The list of channels to keep
-        renamed_channel_ids : np.array or list, default: None
-            A list of renamed channels
-
-        Returns
-        -------
-        BaseRecordingSnippets
-            The object with sliced channels
-        """
-        return self._channel_slice(channel_ids, renamed_channel_ids=renamed_channel_ids)

    def select_channels(self, channel_ids):
        """
        Returns a new object with sliced channels.
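The removed public set_probes forwarded to the private _set_probes; the supported entry points are now set_probe and set_probegroup. A hedged sketch using a probeinterface generator (probe geometry and wiring are illustrative):

    from probeinterface import generate_tetrode

    probe = generate_tetrode()
    probe.set_device_channel_indices(list(range(4)))
    recording_with_probe = recording.set_probe(probe, group_mode="by_probe")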
8 changes: 0 additions & 8 deletions src/spikeinterface/core/basesnippets.py
@@ -79,14 +79,6 @@ def is_aligned(self):
    def get_num_segments(self):
        return len(self._snippets_segments)

-    def has_scaled_snippets(self):
-        warn(
-            "`has_scaled_snippets` is deprecated and will be removed in version 0.103.0. Please use `has_scaleable_traces()` instead",
-            category=DeprecationWarning,
-            stacklevel=2,
-        )
-        return self.has_scaleable_traces()

    def get_frames(self, indices=None, segment_index: Union[int, None] = None):
        segment_index = self._check_segment_index(segment_index)
        spts = self._snippets_segments[segment_index]
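Snippets objects now share the recording-side name for the same check; a hedged one-line migration (snippets stands for any BaseSnippets instance):

    scaled_available = snippets.has_scaleable_traces()  # was: snippets.has_scaled_snippets()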