Sorters module

The sorters module is where spike sorting happens!

SI provides wrapper classes for many commonly used spike sorters (see Compatible Technology). All sorter classes inherit from the BaseSorter class, which provides the common tools for running spike sorters.

Sorter wrappers concept

Each spike sorter wrapper includes (a minimal skeleton sketch follows this list):

  • a list of default parameters

  • a list of parameter descriptions

  • a _setup_recording class function, which writes the files and metadata required by the sorter into the specified output_folder

  • a _run_from_folder class function, which launches the spike sorter from the output_folder

  • a _get_result_from_folder class function, which loads the SortingExtractor from the output_folder
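
For illustration, here is a minimal skeleton of such a wrapper. This is a sketch only: the class attribute and method names follow the list above, but the exact BaseSorter signatures are assumptions and may differ between SpikeInterface versions.

from spikeinterface.sorters import BaseSorter

class MySorter(BaseSorter):
    # hypothetical wrapper sketch; names and signatures are illustrative
    sorter_name = "mysorter"
    _default_params = {"detect_threshold": 5.0}
    _params_description = {"detect_threshold": "Spike detection threshold"}

    @classmethod
    def _setup_recording(cls, recording, output_folder, params, verbose):
        # write the recording data and metadata the sorter needs into output_folder
        ...

    @classmethod
    def _run_from_folder(cls, output_folder, params, verbose):
        # launch the actual spike sorter on the files in output_folder
        ...

    @classmethod
    def _get_result_from_folder(cls, output_folder):
        # parse the sorter output and return it as a SortingExtractor
        ...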

Example

The sorters module includes run() functions to easily run spike sorters:

import spikeinterface.sorters as ss

# recording is a RecordingExtractor object
sorting_TDC = ss.run_tridesclous(recording, output_folder="tridesclous_output")

# which is equivalent to
sorting_TDC = ss.run_sorter("tridesclous", recording, output_folder="tridesclous_output")
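
Both calls return a SortingExtractor, which you can inspect right away. Extra keyword arguments passed to run_sorter() are forwarded to the sorter as parameters; in this sketch, detect_threshold is an illustrative, sorter-specific parameter:

import spikeinterface.sorters as ss

# recording is a RecordingExtractor object
sorting_TDC = ss.run_sorter("tridesclous", recording,
                            output_folder="tridesclous_output",
                            detect_threshold=5)

# list the units found and the spike train (in frames) of the first unit
print(sorting_TDC.get_unit_ids())
print(sorting_TDC.get_unit_spike_train(sorting_TDC.get_unit_ids()[0]))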

Running sorters in docker

Some sorters are hard to install! To alleviate this headache, SI provides a built-in mechanism to run a spike sorting job in a docker container.

We maintain a set of sorter-specific Dockerfiles in the spikeinterface-dockerfiles repo, and most of the docker images are available on Docker Hub under the SpikeInterface organization.

Running spike sorting in a docker container only requires you to:

  1. have docker installed

  2. have the docker Python SDK installed (pip install docker)

Once both are installed, you can simply run the sorter in a specified docker image:

import spikeinterface.sorters as ss

# recording is a RecordingExtractor object
sorting_TDC = ss.run_tridesclous(recording,
                                 output_folder="tridesclous_output",
                                 docker_image="spikeinterface/tridesclous-base:1.6.1")
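
If you want to check your Docker setup before launching a job, the Python SDK can ping the daemon. This is a small sanity-check sketch using the docker package installed above:

import docker

# connect to the local Docker daemon and check that it responds
client = docker.from_env()
assert client.ping()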

Run several sorting jobs in parallel

The sorters module also includes tools to run several spike sorting jobs in parallel. This can be done with the run_sorters() function by specifying an engine that supports parallel processing (e.g. joblib or dask).

In this code example, 3 sorters are run on 2 recordings using 6 jobs:

import spikeinterface.sorters as ss

# recording1 and recording2 are RecordingExtractor objects
recording_dict = {"rec1": recording1, "rec2": recording2}

sorting_outputs = ss.run_sorters(sorter_list=["tridesclous", "herdingspikes", "ironclust"],
                                 recording_dict_or_list=recording_dict,
                                 working_folder="all_sorters",
                                 verbose=False,
                                 engine="joblib",
                                 engine_kwargs={'n_jobs': 6})

After the jobs are run, sorting_outputs is a dictionary with (rec_name, sorter_name) tuples as keys (e.g. ('rec1', 'tridesclous') in this example) and the corresponding SortingExtractor objects as values.
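
For example, you can loop over the results and count the units each sorter found on each recording (a minimal sketch, reusing the sorting_outputs dictionary from above):

for (rec_name, sorter_name), sorting in sorting_outputs.items():
    # each value is a SortingExtractor for one (recording, sorter) pair
    print(f"{rec_name} - {sorter_name}: {len(sorting.get_unit_ids())} units")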