Run spike sorting on concatenated recordings

In many experiments, multiple recordings are performed in sequence, for example a baseline recording followed by an intervention. In these cases, since the underlying spiking activity can be assumed to be the same (or at least very similar) across recordings, the recordings can be concatenated. This notebook shows how to concatenate the recordings before spike sorting and how to split the sorted output back based on the concatenation.

import spikeinterface.extractors as se
import spikeinterface.sorters as ss
import time

When performing an experiment with multiple consecutive recordings, it can be a good idea to concatenate the single recordings, as this can improve spike sorting performance and avoids the need to track neurons across the different recordings.

This can be done very easily in SpikeInterface using a combination of the MultiRecordingTimeExtractor and the SubSortingExtractor objects.

Let’s create a toy example with 4 channels (setting dumpable=True dumps the extractor to a file, which is required for parallel sorting):

recording_single, _ = se.example_datasets.toy_example(duration=10, num_channels=4, dumpable=True)

Let’s now assume that we have 4 recordings. In our case we will concatenate the recording_single 4 times. We first need to build a list of RecordingExtractor objects:

recordings_list = []
for i in range(4):
    recordings_list.append(recording_single)

We can now use the recordings_list to instantiate a MultiRecordingTimeExtractor, which concatenates the traces in time:

multirecording = se.MultiRecordingTimeExtractor(recordings=recordings_list)
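Under the hood, concatenating in time amounts to bookkeeping the frame offset at which each recording starts in the combined timeline. The following is a minimal sketch of that bookkeeping with plain NumPy; the sampling frequency and durations are illustrative assumptions, not values taken from the toy example above:

```python
import numpy as np

# Hypothetical setup: four recordings of 10 s each at 30 kHz
sampling_frequency = 30000
durations_s = [10, 10, 10, 10]
num_frames = [int(d * sampling_frequency) for d in durations_s]

# Each epoch starts in the concatenated timeline where the previous one ends
start_frames = np.cumsum([0] + num_frames[:-1])
end_frames = np.cumsum(num_frames)

for i, (start, end) in enumerate(zip(start_frames, end_frames)):
    print(f"epoch {i}: frames {start} to {end}")
```

The (start_frame, end_frame) pairs computed this way are exactly the kind of epoch information the MultiRecordingTimeExtractor keeps for each underlying recording.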

Since the MultiRecordingTimeExtractor is a RecordingExtractor, we can run spike sorting “normally”:

multisorting = ss.run_klusta(multirecording)

Out:

RUNNING SHELL SCRIPT: /home/docs/checkouts/readthedocs.org/user_builds/spikeinterface/checkouts/0.13.0/examples/modules/sorters/klusta_output/run_klusta.sh
/home/docs/checkouts/readthedocs.org/user_builds/spikeinterface/checkouts/0.13.0/doc/sources/spikesorters/spikesorters/basesorter.py:158: ResourceWarning: unclosed file <_io.TextIOWrapper name=63 encoding='UTF-8'>
  self._run(recording, self.output_folders[i])

The returned multisorting object is a normal SortingExtractor, but we know that its spike trains are concatenated in the same way as the recordings, so we have to split them back. We can do that using the epoch information stored in the MultiRecordingTimeExtractor:

sortings = []
for epoch in multisorting.get_epoch_names():
    info = multisorting.get_epoch_info(epoch)
    sorting_single = se.SubSortingExtractor(multisorting, start_frame=info['start_frame'], end_frame=info['end_frame'])
    sortings.append(sorting_single)

The SortingExtractor objects in the sortings list now contain the split spike trains. The nice thing about this approach is that the same unit_id refers to the same unit across all epochs!
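The splitting step can be sketched conceptually with plain NumPy: given the concatenated spike frames of one unit and the epoch boundaries, we keep the spikes that fall inside each epoch and re-reference them to the epoch start. All arrays below are illustrative, not taken from the sorting output above:

```python
import numpy as np

# Hypothetical concatenated spike train (in frames) for one unit
spike_frames = np.array([100, 250, 400, 650, 900])

# Hypothetical epoch boundaries in the concatenated timeline
epochs = [(0, 300), (300, 600), (600, 1000)]

split_trains = []
for start, end in epochs:
    in_epoch = spike_frames[(spike_frames >= start) & (spike_frames < end)]
    # Re-reference spike times to the start of the epoch
    split_trains.append(in_epoch - start)

print(split_trains)
```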

Total running time of the script: ( 0 minutes 5.406 seconds)
