Sorting objects
The BaseSorting is the base class for handling spike-sorted data. Here is how it works.

A SortingExtractor handles:

- spike train retrieval across segments
- dumping to / loading from a dict or JSON
- saving (caching)
import numpy as np
import spikeinterface.extractors as se
We will create a SortingExtractor object from scratch using numpy and the NumpySorting class.
Let’s define the properties of the dataset:
sampling_frequency = 30000.
duration = 20.
num_timepoints = int(sampling_frequency * duration)
num_units = 4
num_spikes = 1000
We generate some random events for 2 segments:
times0 = np.int_(np.sort(np.random.uniform(0, num_timepoints, num_spikes)))
labels0 = np.random.randint(1, num_units + 1, size=num_spikes)
times1 = np.int_(np.sort(np.random.uniform(0, num_timepoints, num_spikes)))
labels1 = np.random.randint(1, num_units + 1, size=num_spikes)
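To see what these two arrays encode, here is a pure-numpy sketch (with toy values, not the SortingExtractor API itself) of how a per-unit spike train relates to the times/labels pair: each unit's train is simply the subset of spike frames carrying that unit's label.

```python
import numpy as np

# Toy stand-in for a sorted dataset: sorted spike frames plus a unit label per spike
rng = np.random.default_rng(0)
num_timepoints, num_units, num_spikes = 600_000, 4, 1_000
times0 = np.int_(np.sort(rng.uniform(0, num_timepoints, num_spikes)))
labels0 = rng.integers(1, num_units + 1, size=num_spikes)

# Conceptually, unit 1's spike train is the subset of frames labeled 1;
# boolean masking preserves the sorted order of the frames
spike_train_unit1 = times0[labels0 == 1]
```

Every spike belongs to exactly one unit, so the per-unit trains partition the full spike list.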
And instantiate a NumpySorting object:
sorting = se.NumpySorting.from_times_labels([times0, times1], [labels0, labels1], sampling_frequency)
print(sorting)
We can now print properties that the SortingExtractor retrieves from the underlying sorted dataset.
print('Unit ids = {}'.format(sorting.get_unit_ids()))
st = sorting.get_unit_spike_train(unit_id=1, segment_index=0)
print('Num. events for unit 1 seg0 = {}'.format(len(st)))
st1 = sorting.get_unit_spike_train(unit_id=1, start_frame=0, end_frame=30000, segment_index=1)
print('Num. events for first second of unit 1 seg1 = {}'.format(len(st1)))
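Conceptually, the start_frame/end_frame arguments restrict the returned train to a frame window. A minimal numpy sketch of that filtering, on a toy spike train rather than the real extractor:

```python
import numpy as np

# Toy spike train in frames; at 30 kHz sampling, frame 30000 corresponds to 1 second
sampling_frequency = 30_000
st = np.sort(np.random.default_rng(1).integers(0, 20 * sampling_frequency, 1000))

# Keep only spikes falling inside the [start_frame, end_frame) window
start_frame, end_frame = 0, sampling_frequency
first_second = st[(st >= start_frame) & (st < end_frame)]
```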
Some extractors also implement a write function. We can, for example, save our newly created sorting object to the NPZ format (a simple format based on numpy used in spikeinterface):
file_path = 'my_sorting.npz'
se.NpzSortingExtractor.write_sorting(sorting, file_path)
We can now read it back with the proper extractor:
sorting2 = se.NpzSortingExtractor(file_path)
print(sorting2)
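The NPZ container itself is just numpy's np.savez archive of named arrays. A minimal round-trip sketch with plain numpy and toy arrays (the actual field names used by NpzSortingExtractor may differ):

```python
import os
import tempfile
import numpy as np

# Toy spike data; an NPZ sorting file stores arrays like these under named keys
times = np.array([10, 200, 3500])
labels = np.array([1, 2, 1])

# Write the arrays to a .npz archive and read them back
path = os.path.join(tempfile.mkdtemp(), 'toy_sorting.npz')
np.savez(path, times=times, labels=labels)
loaded = np.load(path)
```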
Unit properties are key-value pairs that we can store for any unit. We will now calculate unit firing rates and add them as properties to the SortingExtractor object:
firing_rates = []
for unit_id in sorting2.get_unit_ids():
    st = sorting2.get_unit_spike_train(unit_id=unit_id, segment_index=0)
    firing_rates.append(st.size / duration)
sorting2.set_property('firing_rate', firing_rates)
print(sorting2.get_property('firing_rate'))
You can also get a sorting with a subset of units. Properties are propagated to the new object:
sorting3 = sorting2.select_units(unit_ids=[1, 4])
print(sorting3)
print(sorting3.get_property('firing_rate'))
# which is equivalent to
from spikeinterface import UnitsSelectionSorting
sorting3 = UnitsSelectionSorting(sorting2, unit_ids=[1, 4])
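Conceptually, selecting units amounts to filtering spikes by label membership. A pure-numpy sketch of that idea with toy data (not the actual UnitsSelectionSorting implementation):

```python
import numpy as np

# Toy spikes: one frame and one unit label per spike
times = np.array([10, 20, 30, 40, 50])
labels = np.array([1, 2, 3, 4, 1])

# Keep only spikes whose label is in the selected unit ids
keep = np.isin(labels, [1, 4])
sub_times, sub_labels = times[keep], labels[keep]
```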
A sorting can be "dumped" (exported) to:

- a dict
- a JSON file
- a pickle file

The "dump" operation is lazy, i.e., the spike trains are not exported. Only the information about how to reconstruct the sorting is dumped:
from spikeinterface import load_extractor
from pprint import pprint
d = sorting2.to_dict()
pprint(d)
sorting2_loaded = load_extractor(d)
print(sorting2_loaded)
The dictionary can also be dumped directly to a JSON file on disk:
sorting2.dump('my_sorting.json')
sorting2_loaded = load_extractor('my_sorting.json')
print(sorting2_loaded)
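The lazy dump can be pictured as writing a small "recipe" (class name plus constructor kwargs) rather than the data itself. A stdlib-only toy sketch; the keys shown here are hypothetical, not the actual to_dict() schema:

```python
import json
import os
import tempfile

# Hypothetical minimal 'recipe' describing how to rebuild a sorting:
# class name and constructor kwargs, but no spike train arrays
recipe = {'class': 'NumpySorting', 'kwargs': {'sampling_frequency': 30000.0}}

# Dump the recipe to JSON and load it back
path = os.path.join(tempfile.mkdtemp(), 'sorting_recipe.json')
with open(path, 'w') as f:
    json.dump(recipe, f)
with open(path) as f:
    loaded = json.load(f)
```

The file stays tiny because only the description round-trips; the spikes are never written.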
IMPORTANT: the "dump" operation DOES NOT copy the spike trains to disk! If you wish to also store the spike trains in a compact way you need to use the save() function:
sorting2.save(folder='./my_sorting')
import os
pprint(os.listdir('./my_sorting'))
sorting2_cached = load_extractor('./my_sorting')
print(sorting2_cached)