Quality Metrics Tutorial
After spike sorting, you might want to validate the 'goodness' of the sorted units. This can be done using the qualitymetrics submodule, which computes several quality metrics of the sorted units.
import spikeinterface.core as si
import spikeinterface.extractors as se
from spikeinterface.postprocessing import compute_principal_components
from spikeinterface.qualitymetrics import (
compute_snrs,
compute_firing_rates,
compute_isi_violations,
calculate_pc_metrics,
compute_quality_metrics,
)
First, let's download a simulated dataset from the repository 'https://gin.g-node.org/NeuralEnsemble/ephy_testing_data':
local_path = si.download_dataset(remote_path="mearec/mearec_test_10s.h5")
recording, sorting = se.read_mearec(local_path)
print(recording)
print(sorting)
MEArecRecordingExtractor: 32 channels - 32.0kHz - 1 segments - 320,000 samples - 10.00s
float32 dtype - 39.06 MiB
file_path: /home/docs/spikeinterface_datasets/ephy_testing_data/mearec/mearec_test_10s.h5
MEArecSortingExtractor: 10 units - 1 segments - 32.0kHz
file_path: /home/docs/spikeinterface_datasets/ephy_testing_data/mearec/mearec_test_10s.h5
Create SortingAnalyzer
For quality metrics we first need to create a SortingAnalyzer.
analyzer = si.create_sorting_analyzer(sorting=sorting, recording=recording, format="memory")
print(analyzer)
SortingAnalyzer: 32 channels - 10 units - 1 segments - memory - sparse - has recording
Loaded 0 extensions:
Depending on which metrics we want to compute, we first need to compute some necessary extensions (if an extension is missing, an error will be raised).
analyzer.compute("random_spikes", method="uniform", max_spikes_per_unit=600, seed=2205)
analyzer.compute("waveforms", ms_before=1.3, ms_after=2.6, n_jobs=2)
analyzer.compute("templates", operators=["average", "median", "std"])
analyzer.compute("noise_levels")
print(analyzer)
SortingAnalyzer: 32 channels - 10 units - 1 segments - memory - sparse - has recording
Loaded 4 extensions: random_spikes, waveforms, templates, noise_levels
The spikeinterface.qualitymetrics submodule has a set of functions that allow users to compute metrics in a compact and easy way. To compute a single metric, one can simply run one of the quality metric functions as shown below. Each function has a variety of adjustable parameters that can be tuned.
firing_rates = compute_firing_rates(analyzer)
print(firing_rates)
isi_violation_ratio, isi_violations_count = compute_isi_violations(analyzer)
print(isi_violation_ratio)
snrs = compute_snrs(analyzer)
print(snrs)
{'#0': 5.3, '#1': 5.0, '#2': 4.3, '#3': 3.0, '#4': 4.8, '#5': 3.7, '#6': 5.1, '#7': 11.1, '#8': 19.5, '#9': 12.9}
{'#0': 0.0, '#1': 0.0, '#2': 0.0, '#3': 0.0, '#4': 0.0, '#5': 0.0, '#6': 0.0, '#7': 0.0, '#8': 0.0, '#9': 0.0}
{'#0': 23.728917258891407, '#1': 25.425348860132083, '#2': 13.778890868978362, '#3': 21.708003644294745, '#4': 7.417237512582994, '#5': 7.401046626853713, '#6': 20.829798606527465, '#7': 7.327281800019207, '#8': 8.051580291350557, '#9': 8.975356132606246}
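To make the ISI-violation output above concrete: the metric is built on counting inter-spike intervals shorter than a refractory-period threshold. The following is a minimal, self-contained sketch of that core idea using a made-up spike train and a hypothetical 1.5 ms threshold (it is not the full rate-based formula SpikeInterface uses, which also normalizes by firing rate and duration):

```python
import numpy as np

# Hypothetical spike train (in seconds) containing one refractory-period violation
spike_times = np.array([0.010, 0.050, 0.0512, 0.120, 0.300])

isi_threshold_s = 0.0015  # 1.5 ms, an assumed refractory threshold for illustration

# Inter-spike intervals, and how many fall below the threshold
isis = np.diff(spike_times)
n_violations = int(np.sum(isis < isi_threshold_s))
print(n_violations)  # 1: the 0.050 -> 0.0512 pair is only 1.2 ms apart
```

A unit with many such violations likely contains spikes from more than one neuron, since a single neuron cannot fire twice within its refractory period.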
To compute more than one metric at once, we can use the compute_quality_metrics function and indicate which metrics we want to compute. This will return a pandas dataframe:
metrics = compute_quality_metrics(analyzer, metric_names=["firing_rate", "snr", "amplitude_cutoff"])
print(metrics)
/home/docs/checkouts/readthedocs.org/user_builds/spikeinterface/checkouts/latest/src/spikeinterface/qualitymetrics/misc_metrics.py:846: UserWarning: Some units have too few spikes : amplitude_cutoff is set to NaN
warnings.warn(f"Some units have too few spikes : amplitude_cutoff is set to NaN")
amplitude_cutoff firing_rate snr
#0 NaN 5.3 23.728917
#1 NaN 5.0 25.425349
#2 NaN 4.3 13.778891
#3 NaN 3.0 21.708004
#4 NaN 4.8 7.417238
#5 NaN 3.7 7.401047
#6 NaN 5.1 20.829799
#7 NaN 11.1 7.327282
#8 NaN 19.5 8.051580
#9 NaN 12.9 8.975356
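Because the result is a plain pandas DataFrame, standard pandas operations can be used to select units that pass quality thresholds. A small sketch, rebuilding part of the table printed above by hand and filtering it with entirely arbitrary example thresholds (the cutoffs are assumptions, not recommended values):

```python
import pandas as pd

# A few rows copied from the metrics table printed above
metrics = pd.DataFrame(
    {
        "firing_rate": [5.3, 5.0, 4.3, 3.0, 4.8],
        "snr": [23.73, 25.43, 13.78, 21.71, 7.42],
    },
    index=["#0", "#1", "#2", "#3", "#4"],
)

# Arbitrary illustrative thresholds -- tune them to your own dataset
good_units = metrics.query("snr > 10 and firing_rate > 4").index.tolist()
print(good_units)  # ['#0', '#1', '#2']
```

The same pattern applies directly to the DataFrame returned by compute_quality_metrics, which is a common starting point for automatic curation.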
Some metrics are based on the principal component scores, so the principal_components extension needs to be computed first. For instance:
analyzer.compute("principal_components", n_components=3, mode="by_channel_global", whiten=True)
metrics = compute_quality_metrics(
analyzer,
metric_names=[
"isolation_distance",
"d_prime",
],
)
print(metrics)
d_prime isolation_distance
#0 29.314868 1.162386e+17
#1 27.426990 2.250404e+04
#2 24.759109 NaN
#3 30.909958 8.475294e+17
#4 28.448732 1.168632e+17
#5 20.730188 3.083330e+17
#6 37.228953 7.458687e+17
#7 28.968748 8.878394e+03
#8 20.999118 5.827409e+03
#9 30.249498 7.100390e+03
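As intuition for the d_prime values above: d-prime measures how well two clusters are separated along a projection axis, as the distance between their means normalized by their pooled spread. A minimal one-dimensional sketch with synthetic data (the cluster means and spreads are made up; real pipelines compute this on an LDA projection of the PC scores):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical 1-D cluster projections, e.g. along a discriminant axis
a = rng.normal(loc=0.0, scale=1.0, size=500)
b = rng.normal(loc=4.0, scale=1.0, size=500)

# d-prime: mean separation divided by the pooled standard deviation;
# unit-variance clusters 4 apart give a value near 4
d_prime = abs(a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var()))
print(f"d' = {d_prime:.2f}")
```

Larger d-prime means the unit's spikes are better separated from its neighbors in feature space, so higher values indicate better isolation.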
Total running time of the script: (0 minutes 1.184 seconds)