Curation Tutorial

After spike sorting and computing quality metrics, you can automatically curate the spike sorting output using those metrics.

Import the necessary modules and functions from spikeinterface:

import spikeinterface.core as si
import spikeinterface.extractors as se

from spikeinterface.postprocessing import compute_principal_components
from spikeinterface.qualitymetrics import compute_quality_metrics

Let’s download a simulated dataset from the repo https://gin.g-node.org/NeuralEnsemble/ephy_testing_data.

Let’s imagine that the ground-truth sorting is in fact the output of a sorter.

local_path = si.download_dataset(remote_path="mearec/mearec_test_10s.h5")
recording, sorting = se.read_mearec(file_path=local_path)
print(recording)
print(sorting)
MEArecRecordingExtractor: 32 channels - 32.0kHz - 1 segments - 320,000 samples - 10.00s
                          float32 dtype - 39.06 MiB
  file_path: /home/docs/spikeinterface_datasets/ephy_testing_data/mearec/mearec_test_10s.h5
MEArecSortingExtractor: 10 units - 1 segments - 32.0kHz
  file_path: /home/docs/spikeinterface_datasets/ephy_testing_data/mearec/mearec_test_10s.h5

Create SortingAnalyzer

For this example, we need a SortingAnalyzer with a few extensions computed first:

analyzer = si.create_sorting_analyzer(sorting=sorting, recording=recording, format="memory")
analyzer.compute(["random_spikes", "waveforms", "templates", "noise_levels"])

analyzer.compute("principal_components", n_components=3, mode="by_channel_local")
print(analyzer)
estimate_sparsity: 100%|##########| 10/10 [00:00<00:00, 990.06it/s]
compute_waveforms: 100%|##########| 10/10 [00:00<00:00, 326.81it/s]
Fitting PCA: 100%|██████████| 10/10 [00:00<00:00, 22.78it/s]
Projecting waveforms: 100%|██████████| 10/10 [00:00<00:00, 199.68it/s]
SortingAnalyzer: 32 channels - 10 units - 1 segments - memory - sparse - has recording
Loaded 5 extensions: random_spikes, waveforms, templates, noise_levels, principal_components
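
Note that with format="memory" the analyzer lives in RAM only. If you want it to persist across sessions, you can create it on disk instead; a minimal sketch, where the folder name "analyzer_folder" is just an illustrative example:

# create the analyzer on disk instead of in memory (illustrative folder name)
analyzer_on_disk = si.create_sorting_analyzer(
    sorting=sorting, recording=recording,
    format="binary_folder", folder="analyzer_folder",
)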

Then we compute some quality metrics:

metrics = compute_quality_metrics(analyzer, metric_names=["snr", "isi_violation", "nearest_neighbor"])
print(metrics)
calculate_pc_metrics: 100%|██████████| 10/10 [00:00<00:00, 64.41it/s]
    isi_violations_ratio  isi_violations_count  ...  nn_hit_rate  nn_miss_rate
#0                   0.0                   0.0  ...     1.000000      0.001289
#1                   0.0                   0.0  ...     0.990000      0.000744
#2                   0.0                   0.0  ...     0.976744      0.005831
#3                   0.0                   0.0  ...     1.000000      0.000000
#4                   0.0                   0.0  ...     0.989583      0.001010
#5                   0.0                   0.0  ...     0.993243      0.002653
#6                   0.0                   0.0  ...     0.995098      0.000000
#7                   0.0                   0.0  ...     0.986486      0.010753
#8                   0.0                   0.0  ...     0.989744      0.001506
#9                   0.0                   0.0  ...     0.996124      0.003348

[10 rows x 5 columns]
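
If you are unsure which metric names can be passed to compute_quality_metrics, the qualitymetrics module exposes helpers to list them. A minimal sketch, assuming get_quality_metric_list and get_quality_pca_metric_list are available in your version of spikeinterface:

from spikeinterface.qualitymetrics import get_quality_metric_list, get_quality_pca_metric_list

# metrics computed from spike trains, templates and noise levels (e.g. "snr", "isi_violation")
print(get_quality_metric_list())
# metrics that additionally require the "principal_components" extension (e.g. "nearest_neighbor")
print(get_quality_pca_metric_list())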

We can now threshold each quality metric and select units based on some rules.

The easiest and most intuitive way is to use boolean masking on the metrics dataframe, and then to build a list of the unit ids that we want to keep:

keep_mask = (metrics["snr"] > 7.5) & (metrics["isi_violations_ratio"] < 0.2) & (metrics["nn_hit_rate"] > 0.90)
print(keep_mask)

# the ids whose mask entry is True are the units to keep
keep_unit_ids = keep_mask[keep_mask].index.values
keep_unit_ids = [unit_id for unit_id in keep_unit_ids]  # convert to a plain Python list
print(keep_unit_ids)
#0     True
#1     True
#2     True
#3     True
#4    False
#5    False
#6     True
#7    False
#8     True
#9     True
dtype: bool
['#0', '#1', '#2', '#3', '#6', '#8', '#9']
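
Equivalently, the same thresholds can be written as a single pandas query expression; a minimal sketch, where keep_unit_ids_alt is just an illustrative name:

# same selection written as a pandas query string
keep_unit_ids_alt = metrics.query("snr > 7.5 and isi_violations_ratio < 0.2 and nn_hit_rate > 0.90").index.tolist()
assert keep_unit_ids_alt == keep_unit_ids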

And now let’s create a sorting that contains only curated units and save it.

curated_sorting = sorting.select_units(keep_unit_ids)
print(curated_sorting)


curated_sorting.save(folder="curated_sorting")
UnitsSelectionSorting: 7 units - 1 segments - 32.0kHz

NumpyFolderSorting: 7 units - 1 segments - 32.0kHz
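
The saved folder can be loaded back in a later session; a minimal sketch, assuming the folder path used above:

# reload the curated sorting from disk
loaded_sorting = si.load_extractor("curated_sorting")
print(loaded_sorting)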

We can also save the analyzer with only these units:

clean_analyzer = analyzer.select_units(unit_ids=keep_unit_ids, format="zarr", folder="clean_analyzer")

print(clean_analyzer)
SortingAnalyzer: 32 channels - 7 units - 1 segments - zarr - has recording
Loaded 6 extensions: random_spikes, waveforms, templates, noise_levels, principal_components, quality_metrics
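
The zarr-backed analyzer can be reloaded in the same way in a later session. A minimal sketch, assuming spikeinterface appends a .zarr suffix to the folder name (check the actual path on disk):

# reload the curated analyzer from the zarr folder
clean_analyzer_loaded = si.load_sorting_analyzer("clean_analyzer.zarr")
print(clean_analyzer_loaded)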
