Use the spike sorting launcher

This example shows how to use the spike sorting launcher. The launcher allows you to parameterize the sorter name and to run several sorters on one or more recordings.

import spikeinterface.extractors as se
import spikeinterface.sorters as ss

First, let’s create the usual toy example:

recording, sorting_true = se.example_datasets.toy_example(duration=10, seed=0)

The launcher enables you to call any spike sorter with the same functions: run_sorter and run_sorters. To run multiple sorters on the same recording extractor, or on a collection of recordings, the run_sorters function can be used.

Let’s first see how to run a single sorter, for example, Klusta:

# The sorter name can now be a parameter, e.g. chosen with a command line interface or a GUI
sorter_name = 'klusta'
sorting_KL = ss.run_sorter(sorter_name_or_class=sorter_name, recording=recording, output_folder='my_sorter_output')
print(sorting_KL.get_unit_ids())

Out:

RUNNING SHELL SCRIPT: /home/docs/checkouts/readthedocs.org/user_builds/spikeinterface/checkouts/0.13.0/examples/modules/sorters/my_sorter_output/run_klusta.sh
/home/docs/checkouts/readthedocs.org/user_builds/spikeinterface/checkouts/0.13.0/doc/sources/spikesorters/spikesorters/basesorter.py:158: ResourceWarning: unclosed file <_io.TextIOWrapper name=63 encoding='UTF-8'>
  self._run(recording, self.output_folders[i])
[0, 2, 3, 4, 5, 6]

This will launch the klusta sorter on the recording object.
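Before picking a sorter, it can be useful to check which sorters are installed in your environment and to inspect their default parameters. The short sketch below assumes the installed_sorters and get_default_params helpers exposed by the sorters module; the commented parameter override is illustrative only, not a recommended value.

# List the sorters installed in the current environment
# (assumes the installed_sorters helper is available in this version)
print(ss.installed_sorters())

# Inspect the default parameters of a sorter before running it
# (assumes get_default_params; parameter names vary per sorter)
default_params = ss.get_default_params('klusta')
print(default_params)

# Any of these parameters can be overridden as keyword arguments of run_sorter,
# e.g. (illustrative value only):
# sorting_KL = ss.run_sorter('klusta', recording, output_folder='my_sorter_output',
#                            threshold_strong_std_factor=5)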

You can also run multiple sorters on the same recording:

recording_list = [recording]
sorter_list = ['klusta', 'mountainsort4', 'tridesclous']
sorting_output = ss.run_sorters(sorter_list, recording_list, working_folder='tmp_some_sorters', mode='overwrite')

Out:

RUNNING SHELL SCRIPT: /home/docs/checkouts/readthedocs.org/user_builds/spikeinterface/checkouts/0.13.0/examples/modules/sorters/tmp_some_sorters/recording_0/klusta/run_klusta.sh
/home/docs/checkouts/readthedocs.org/user_builds/spikeinterface/checkouts/0.13.0/doc/sources/spikesorters/spikesorters/basesorter.py:158: ResourceWarning: unclosed file <_io.TextIOWrapper name=63 encoding='UTF-8'>
  self._run(recording, self.output_folders[i])
Warning! The recording is already filtered, but Mountainsort4 filter is enabled. You can disable filters by setting 'filter' parameter to False
{'chunksize': 3000,
 'clean_cluster': {'apply_auto_merge_cluster': True,
                   'apply_auto_split': True,
                   'apply_trash_low_extremum': True,
                   'apply_trash_not_aligned': True,
                   'apply_trash_small_cluster': True},
 'clean_peaks': {'alien_value_threshold': None, 'mode': 'extremum_amplitude'},
 'cluster_kargs': {'adjacency_radius_um': 0.0,
                   'high_adjacency_radius_um': 0.0,
                   'max_loop': 1000,
                   'min_cluster_size': 20},
 'cluster_method': 'pruningshears',
 'duration': 10.0,
 'extract_waveforms': {'wf_left_ms': -2.0, 'wf_right_ms': 3.0},
 'feature_kargs': {'n_components': 8},
 'feature_method': 'global_pca',
 'make_catalogue': {'inter_sample_oversampling': False,
                    'sparse_thresh_level2': 3,
                    'subsample_ratio': 'auto'},
 'memory_mode': 'memmap',
 'mode': 'dense',
 'n_jobs': -1,
 'n_spike_for_centroid': 350,
 'noise_snippet': {'nb_snippet': 300},
 'peak_detector': {'adjacency_radius_um': 200.0,
                   'engine': 'numpy',
                   'method': 'global',
                   'peak_sign': '-',
                   'peak_span_ms': 0.7,
                   'relative_threshold': 5,
                   'smooth_radius_um': None},
 'peak_sampler': {'mode': 'rand', 'nb_max': 20000, 'nb_max_by_channel': None},
 'preprocessor': {'common_ref_removal': False,
                  'engine': 'numpy',
                  'highpass_freq': 400.0,
                  'lostfront_chunksize': -1,
                  'lowpass_freq': 5000.0,
                  'smooth_size': 0},
 'sparse_threshold': 1.5}

The ‘mode’ argument allows you to ‘overwrite’ the ‘working_folder’ (if it exists), ‘raise’ an Exception, or ‘keep’ the folder and skip the spike sorting run.
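For example, re-running the same command with mode='keep' reuses the results already stored in the working folder instead of sorting again (a sketch of the same call, producing no new output):

# Re-running with mode='keep' skips sorters whose output already exists
# in 'tmp_some_sorters' and loads the existing results instead
sorting_output = ss.run_sorters(sorter_list, recording_list,
                                working_folder='tmp_some_sorters', mode='keep')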

The ‘sorting_output’ is a dictionary that has (recording, sorter) pairs as keys and the corresponding SortingExtractor objects as values. It can be accessed as follows:

for (rec, sorter), sort in sorting_output.items():
    print(rec, sorter, ':', sort.get_unit_ids())

Out:

recording_0 tridesclous : [0, 1, 2, 3, 4]
recording_0 mountainsort4 : [1, 2, 3, 4, 5, 6, 7, 8, 9]
recording_0 klusta : [0, 2, 3, 4, 5, 6]
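A single result can also be retrieved directly by its (recording, sorter) key. The key names below ('recording_0', 'klusta') follow the output above, but they may differ depending on how the recordings are named:

# Access one SortingExtractor by its (recording name, sorter name) key
sorting_KL_from_dict = sorting_output[('recording_0', 'klusta')]
print(sorting_KL_from_dict.get_unit_ids())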

With the same mechanism, you can run several spike sorters on many recordings simply by creating a list of RecordingExtractor objects (recording_list), as sketched below.
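As a minimal sketch (assuming a second toy recording generated with a different seed), the multi-recording case looks like this:

# Build a second toy recording with a different seed (illustrative only)
recording2, sorting_true2 = se.example_datasets.toy_example(duration=10, seed=1)

recording_list = [recording, recording2]
sorter_list = ['klusta', 'tridesclous']

# Each (recording, sorter) pair produces one entry in the output dictionary
sorting_outputs = ss.run_sorters(sorter_list, recording_list,
                                 working_folder='tmp_multi_recordings',
                                 mode='overwrite')
for (rec, sorter), sort in sorting_outputs.items():
    print(rec, sorter, ':', sort.get_unit_ids())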
