Use the spike sorting launcher

This example shows how to use the spike sorting launcher. The launcher lets you parameterize the sorter by name and run one or several sorters on one or multiple recordings.

import spikeinterface.extractors as se
import spikeinterface.sorters as ss

First, let’s create the usual toy example:

recording, sorting_true = se.toy_example(duration=10, seed=0, num_segments=1)
print(recording)
print(sorting_true)

Out:

NumpyRecording: 4 channels - 1 segments - 30.0kHz - 10.000s
NumpySorting: 10 units - 1 segments - 30.0kHz

Let's cache this recording to make it “dumpable”:

recording = recording.save(name='toy')
print(recording)

Out:

Use cache_folder=/tmp/spikeinterface_cache/tmp3m75ta7c/toy
write_binary_recording with n_jobs 1  chunk_size None
BinaryRecordingExtractor: 4 channels - 1 segments - 30.0kHz - 10.000s
  file_paths: ['/tmp/spikeinterface_cache/tmp3m75ta7c/toy/traces_cached_seg0.raw']

The launcher lets you call any spike sorter through the same functions: run_sorter and run_sorters. To run multiple sorters on the same recording extractor, or on a collection of recordings, use the run_sorters function.
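Before parameterizing the sorter name, it can be useful to check which sorters SpikeInterface knows about and which are actually installed in the current environment, using available_sorters and installed_sorters. A minimal sketch (guarded with a fallback so it also runs in an environment without spikeinterface):

```python
# Discover which sorters can be run in this environment.
try:
    import spikeinterface.sorters as ss
    all_sorters = ss.available_sorters()    # every sorter SpikeInterface wraps
    sorter_names = ss.installed_sorters()   # sorters actually installed locally
except ImportError:
    # spikeinterface itself is missing; fall back to empty lists
    all_sorters, sorter_names = [], []

print(all_sorters)
print(sorter_names)
```

Choosing the sorter name from installed_sorters() is a simple way to avoid launching a sorter whose backend is not present.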

Let’s first see how to run a single sorter, for example, HerdingSpikes:

# The sorter name can now be a parameter, e.g. chosen with a command line interface or a GUI
sorter_name = 'herdingspikes'
sorting_HS = ss.run_sorter(sorter_name=sorter_name, recording=recording, output_folder='my_sorter_output')
print(sorting_HS.get_unit_ids())

Out:

# Generating new position and neighbor files from data file
# Not Masking any Channels
# Sampling rate: 30000
# Localization On
# Number of recorded channels: 4
# Not subtracing mean
# Analysing frames: 300000; Seconds: 10.0
# Frames before spike in cutout: 9
# Frames after spike in cutout: 54
# tcuts: 39 84
# tInc: 100000
# Detection completed, time taken: 0:00:00.064607
# Time per frame: 0:00:00.000215
# Time per sample: 0:00:00.000054
Loaded 163 spikes.
Fitting dimensionality reduction using all spikes...
...projecting...
...done
Clustering...
Clustering 163 spikes...
number of seeds: 2
seeds/job: 2
using 2 cpus
[Parallel(n_jobs=2)]: Using backend LokyBackend with 2 concurrent workers.
[Parallel(n_jobs=2)]: Done   2 out of   2 | elapsed:    2.1s finished
Number of estimated units: 2
[0 1]

You can also run multiple sorters on the same recording:

recordings = {'toy' : recording }
sorter_list = ['herdingspikes', 'tridesclous']
sorting_output = ss.run_sorters(sorter_list, recordings, working_folder='tmp_some_sorters', mode_if_folder_exists='overwrite')

The ‘mode_if_folder_exists’ argument allows you to ‘overwrite’ the ‘working_folder’ (if it exists), ‘raise’ an Exception, or ‘keep’ the folder and skip the spike sorting run.
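The three modes behave roughly like the following plain-Python sketch (an illustration of the semantics only, not the library's actual implementation; the function name is made up):

```python
import shutil
from pathlib import Path

def handle_existing_folder(working_folder, mode):
    """Hypothetical sketch of the 'overwrite' / 'raise' / 'keep' semantics."""
    folder = Path(working_folder)
    if folder.exists():
        if mode == 'overwrite':
            shutil.rmtree(folder)          # discard any previous results
        elif mode == 'raise':
            raise FileExistsError(f"{folder} already exists")
        elif mode == 'keep':
            return False                   # reuse the folder, skip sorting
    return True                            # proceed with a fresh sorting run
```

‘keep’ is convenient for incrementally adding sorters to a comparison: already-computed results are left untouched and only missing runs are executed.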

The ‘sorting_output’ is a dictionary with (recording, sorter) name pairs as keys and the corresponding SortingExtractor objects as values. It can be accessed as follows:

for (rec_name, sorter_name), sorting in sorting_output.items():
    print(rec_name, sorter_name, ':', sorting.get_unit_ids())

With the same mechanism, you can run several spike sorters on many recordings simply by passing a list or dict of RecordingExtractor objects.
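For example, with two recordings and two sorters, run_sorters produces one result per (recording, sorter) pair. A plain-Python sketch of the resulting keys (the recording names here are hypothetical):

```python
from itertools import product

recording_names = ['rec0', 'rec1']              # hypothetical recording names
sorter_list = ['herdingspikes', 'tridesclous']

# run_sorters yields one SortingExtractor per (recording, sorter) pair:
expected_keys = list(product(recording_names, sorter_list))
for rec_name, sorter_name in expected_keys:
    print(rec_name, sorter_name)
```

So N recordings and M sorters result in N x M entries in ‘sorting_output’, which the loop shown above iterates over.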

Total running time of the script: ( 0 minutes 3.201 seconds)

Gallery generated by Sphinx-Gallery