Curation Tutorial

After spike sorting and computing quality metrics, you can automatically curate the spike sorting output by thresholding those metrics.

import spikeinterface as si
import spikeinterface.extractors as se
from spikeinterface.postprocessing import compute_principal_components
from spikeinterface.qualitymetrics import compute_quality_metrics

First, let's download a simulated dataset from the repo https://gin.g-node.org/NeuralEnsemble/ephy_testing_data.

Let’s imagine that the ground-truth sorting is in fact the output of a sorter.

local_path = si.download_dataset(remote_path='mearec/mearec_test_10s.h5')
recording, sorting = se.read_mearec(local_path)
print(recording)
print(sorting)
MEArecRecordingExtractor: 32 channels - 1 segments - 32.0kHz - 10.000s
  file_path: /home/docs/spikeinterface_datasets/ephy_testing_data/mearec/mearec_test_10s.h5
MEArecSortingExtractor: 10 units - 1 segments - 32.0kHz
  file_path: /home/docs/spikeinterface_datasets/ephy_testing_data/mearec/mearec_test_10s.h5
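
If you want a quick sanity check on what was loaded, the basic properties of both objects can be inspected directly (the calls below are standard extractor methods; the expected values simply mirror the printout above):

# Optional: inspect the loaded recording and sorting
print(recording.get_channel_ids())         # 32 channel ids
print(recording.get_sampling_frequency())  # 32000.0
print(sorting.get_unit_ids())              # 10 unit ids, e.g. '#0', '#1', ...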

Next, we extract waveforms and compute their PC scores:

folder = 'waveforms_mearec'
we = si.extract_waveforms(recording, sorting, folder,
                          load_if_exists=True,
                          ms_before=1, ms_after=2., max_spikes_per_unit=500,
                          n_jobs=1, chunk_size=30000)
print(we)

pc = compute_principal_components(we, load_if_exists=True,
                                  n_components=3, mode='by_channel_local')
print(pc)
WaveformExtractor: 32 channels - 10 units - 1 segments
  before:32 after:64 n_per_units:500
WaveformPrincipalComponent: 32 channels - 1 segments
  mode:by_channel_local n_components:3
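
If you want to look at the extracted data itself, the waveforms and PC projections of a unit can be retrieved as NumPy arrays (a short illustrative sketch; the shapes follow the parameters chosen above):

# Optional: inspect waveforms and projections for one unit
unit_id = sorting.get_unit_ids()[0]
wfs = we.get_waveforms(unit_id)             # shape (num_spikes, num_samples, num_channels)
projections = pc.get_projections(unit_id)   # shape (num_spikes, n_components, num_channels)
print(wfs.shape, projections.shape)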

Then we compute some quality metrics:

metrics = compute_quality_metrics(we, metric_names=['snr', 'isi_violation', 'nearest_neighbor'])
print(metrics)
          snr  isi_violations_ratio  ...  nn_hit_rate  nn_miss_rate
#0  23.739727                   0.0  ...     0.996226      0.003468
#1  25.599155                   0.0  ...     0.960000      0.001727
#2  13.819590                   0.0  ...     0.925581      0.003989
#3  21.852650                   0.0  ...     0.993333      0.000000
#4   7.467602                   0.0  ...     0.975000      0.000861
#5   7.465411                   0.0  ...     0.978378      0.001130
#6  20.910934                   0.0  ...     0.980392      0.000576
#7   7.456506                   0.0  ...     0.932727      0.006929
#8   8.052315                   0.0  ...     0.972165      0.018512
#9   8.990562                   0.0  ...     0.945736      0.009091

[10 rows x 6 columns]
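
Since the metrics are returned as a pandas DataFrame indexed by unit id, you can use any pandas tool on them, for example saving them to a CSV file (the file name here is just an example):

# Optional: save the metrics table for later inspection
metrics.to_csv('quality_metrics.csv')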

We can now threshold each quality metric and select units based on some rules.

The easiest and most intuitive way is to use boolean masking on the metrics dataframe:

keep_mask = (metrics['snr'] > 7.5) & (metrics['isi_violations_ratio'] < 0.05) & (metrics['nn_hit_rate'] > 0.90)
print(keep_mask)

keep_unit_ids = keep_mask[keep_mask].index.values
print(keep_unit_ids)
#0     True
#1     True
#2     True
#3     True
#4    False
#5    False
#6     True
#7    False
#8     True
#9     True
dtype: bool
['#0' '#1' '#2' '#3' '#6' '#8' '#9']
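
The same selection can also be written as a pandas query string, which keeps the rule in one readable expression (this is just an alternative formulation of the mask above):

# Equivalent selection using DataFrame.query
keep_unit_ids_query = metrics.query('snr > 7.5 and isi_violations_ratio < 0.05 and nn_hit_rate > 0.9').index.values
print(keep_unit_ids_query)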

Now let's create a sorting that contains only the curated units and save it, for example, to an NPZ file.

curated_sorting = sorting.select_units(keep_unit_ids)
print(curated_sorting)
se.NpzSortingExtractor.write_sorting(curated_sorting, 'curated_sorting.npz')
UnitsSelectionSorting: 7 units - 1 segments - 32.0kHz
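
If you later need the curated units again, the NPZ file can be read back with the corresponding extractor (a minimal sketch):

# Reload the curated sorting from disk
loaded_sorting = se.NpzSortingExtractor('curated_sorting.npz')
print(loaded_sorting)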
