Sorters module

The spikeinterface.sorters module is where spike sorting happens!

SpikeInterface provides wrapper classes for many commonly used spike sorters (see Compatible Technology). All sorter classes inherit from the BaseSorter class, which provides the common tools to run spike sorters.

Sorter wrappers concept

Each spike sorter wrapper includes:

  • a list of default parameters

  • a list of parameter descriptions

  • a _setup_recording class function, which parses the required files and metadata for each sorter into the specified output_folder

  • a _run_from_folder class function, which launches the spike sorter from the output_folder

  • a _get_result_from_folder class function, which loads the SortingExtractor from the output_folder
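The wrapper structure described above can be sketched as a plain Python class. This is an illustrative sketch only, not the real SpikeInterface implementation: the parameter names and method bodies are hypothetical placeholders, and only the method names follow the conventions listed above.

```python
# Hypothetical sketch of a sorter wrapper (illustrative only; the real
# wrappers inherit from BaseSorter and do much more).
class MySorter:
    sorter_name = "mysorter"

    # default parameters (names here are made up for illustration)
    _default_params = {"detect_threshold": 5.0, "freq_min": 300.0}

    # parameter descriptions
    _params_description = {
        "detect_threshold": "Detection threshold (in noise-level units)",
        "freq_min": "High-pass filter cutoff (Hz)",
    }

    @classmethod
    def _setup_recording(cls, recording, output_folder, params, verbose):
        # parse the required files and metadata into output_folder
        ...

    @classmethod
    def _run_from_folder(cls, output_folder, params, verbose):
        # launch the external spike sorter from output_folder
        ...

    @classmethod
    def _get_result_from_folder(cls, output_folder):
        # load and return a SortingExtractor from output_folder
        ...
```

Keeping the setup, run, and result-loading steps as separate class functions is what lets the run step be dispatched to a container or a remote machine while the folder layout stays the same.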

Example

The sorters module includes run_<sorter_name>() convenience functions to easily run spike sorters:

import spikeinterface.sorters as ss

# recording is a RecordingExtractor object
sorting_TDC = ss.run_tridesclous(recording, output_folder="tridesclous_output")

# which is equivalent to
sorting_TDC = ss.run_sorter("tridesclous", recording, output_folder="tridesclous_output")

Running sorters in Docker/Singularity containers

Some sorters are hard to install! To alleviate this headache, SI provides a built-in mechanism to run a spike sorting job in a docker or singularity container.

We are maintaining a set of sorter-specific docker files in the spikeinterface-dockerfiles repo and most of the docker images are available on Docker Hub from the SpikeInterface organization.

Singularity has an internal mechanism to convert Docker images to Singularity images, so the same images can be used with both backends.

Singularity is often preferred because it does not require root privileges to run a container, whereas Docker generally does (or requires membership in the docker group).

Running spike sorting in a Docker container only requires you to:

  1. have Docker installed

  2. have the Docker Python SDK installed (pip install docker)

or

  1. have Singularity installed

  2. have the Singularity Python client installed (pip install spython)
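A quick way to check which container backend is available is to test whether the corresponding Python packages can be imported (the module names match the pip installs above). This is a small hedged sketch, not part of the SpikeInterface API:

```python
import importlib.util

# Check which container backends are importable in this environment.
# "docker" is the Docker Python SDK; "spython" is the Singularity client.
have_docker = importlib.util.find_spec("docker") is not None
have_spython = importlib.util.find_spec("spython") is not None

print("Docker SDK available:", have_docker)
print("Singularity client available:", have_spython)
```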

With Docker installed, you can simply run the sorter in a specified Docker image:

import spikeinterface.sorters as ss

# recording is a RecordingExtractor object
sorting_TDC = ss.run_tridesclous(
    recording,
    output_folder="tridesclous_output",
    docker_image="spikeinterface/tridesclous-base:1.6.4",
)

And the same goes for Singularity:

import spikeinterface.sorters as ss

# recording is a RecordingExtractor object
sorting_TDC = ss.run_tridesclous(
    recording,
    output_folder="tridesclous_output",
    singularity_image="spikeinterface/tridesclous-base:1.6.4",
)

Run several sorting jobs in parallel

The spikeinterface.sorters module also includes tools to run several spike sorting jobs in parallel. This can be done with the spikeinterface.sorters.run_sorters() function by specifying an engine that supports parallel processing (e.g. joblib or dask).

In this code example, 3 sorters are run on 2 recordings using 6 jobs:

import spikeinterface.sorters as ss

# recording1 and recording2 are RecordingExtractor objects
recording_dict = {"rec1": recording1, "rec2": recording2}

sorting_outputs = ss.run_sorters(
    sorter_list=["tridesclous", "herdingspikes", "ironclust"],
    recording_dict_or_list=recording_dict,
    working_folder="all_sorters",
    verbose=False,
    engine="joblib",
    engine_kwargs={'n_jobs': 6},
)

After the jobs are run, sorting_outputs is a dictionary whose keys are (rec_name, sorter_name) tuples (e.g. ('rec1', 'tridesclous') in this example) and whose values are the corresponding SortingExtractor objects.
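The shape of that output dictionary can be sketched with plain Python. The values here are stand-in strings; in a real run they would be SortingExtractor objects:

```python
from itertools import product

# Simulate the key structure of run_sorters() output: one entry per
# (recording, sorter) pair. Values are placeholders for illustration.
rec_names = ["rec1", "rec2"]
sorter_names = ["tridesclous", "herdingspikes", "ironclust"]

sorting_outputs = {
    (rec, srt): f"<SortingExtractor placeholder: {srt} on {rec}>"
    for rec, srt in product(rec_names, sorter_names)
}

# Typical access pattern: iterate over all results
for (rec_name, sorter_name), sorting in sorting_outputs.items():
    print(rec_name, sorter_name, sorting)
```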