API

Module spikeinterface.core

Contains the core classes:
  • Recording

  • Sorting

It also contains “core extractors” used for caching:
  • BinaryRecordingExtractor

  • NpzSortingExtractor

spikeinterface.core.load_extractor(file_or_folder_or_dict, base_folder=None)
Instantiate extractor from:
  • a dict

  • a json file

  • a pickle file

  • folder (after save)

Parameters
file_or_folder_or_dict: dictionary or folder or file (json, pickle)
Returns
extractor: Recording or Sorting

The loaded extractor object
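
A minimal usage sketch (the file and folder names are hypothetical):

>>> from spikeinterface.core import load_extractor
>>> # load from a folder previously created with extractor.save(folder=...)
>>> recording = load_extractor('my_recording_folder')
>>> # load from a json file previously created with extractor.dump_to_json(...)
>>> sorting = load_extractor('my_sorting.json')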

class spikeinterface.core.BaseRecording(sampling_frequency: float, channel_ids: List, dtype)

Abstract class representing a multichannel timeseries (or block of raw ephys traces). Internally handles a list of RecordingSegment objects.
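
A minimal sketch of common read-only calls, assuming recording is any BaseRecording instance (names are illustrative):

>>> fs = recording.get_sampling_frequency()
>>> num_chan = recording.get_num_channels()
>>> channel_ids = recording.get_channel_ids()
>>> # traces of the first second of segment 0, shape (num_samples, num_channels)
>>> traces = recording.get_traces(segment_index=0, start_frame=0, end_frame=int(fs))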

Attributes
channel_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

ids_to_indices(ids[, prefer_slice])

Transform a list of ids (channel_ids or unit_ids) into an array of indices. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Routes to save_to_folder() or save_to_mem().

save_to_folder([name, folder, dump_ext, verbose])

Save extractor to folder.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes() for the case of a single probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids])

Set property vector:

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

channel_slice

check_if_dumpable

clear_channel_groups

clear_channel_locations

frame_slice

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_total_duration

get_total_samples

get_traces

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_dummy_probe_from_locations

set_probegroup

split_by

class spikeinterface.core.BaseSorting(sampling_frequency: float, unit_ids: List)

Abstract class representing several segments, several units, and their associated spike trains.
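
A minimal sketch of common read-only calls, assuming sorting is any BaseSorting instance (names are illustrative):

>>> fs = sorting.get_sampling_frequency()
>>> unit_ids = sorting.get_unit_ids()
>>> # spike train (in frames) of one unit in segment 0
>>> spike_train = sorting.get_unit_spike_train(unit_id=unit_ids[0], segment_index=0)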

Attributes
unit_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_all_spike_trains([outputs])

Return all spike trains concatenated

get_annotation(key[, copy])

Get an annotation.

ids_to_indices(ids[, prefer_slice])

Transform a list of ids (channel_ids or unit_ids) into an array of indices. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Routes to save_to_folder() or save_to_mem().

save_to_folder([name, folder, dump_ext, verbose])

Save extractor to folder.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids])

Set property vector:

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_sorting_segment

annotate

check_if_dumpable

get_annotation_keys

get_num_segments

get_num_units

get_property

get_property_keys

get_sampling_frequency

get_unit_ids

get_unit_property

get_unit_spike_train

id_to_index

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

select_units

class spikeinterface.core.BaseEvent(channel_ids, structured_dtype)

Abstract class representing events.

Attributes
channel_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

ids_to_indices(ids[, prefer_slice])

Transform a list of ids (channel_ids or unit_ids) into an array of indices. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Routes to save_to_folder() or save_to_mem().

save_to_folder([name, folder, dump_ext, verbose])

Save extractor to folder.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids])

Set property vector:

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_event_segment

annotate

check_if_dumpable

get_annotation_keys

get_event_times

get_num_channels

get_num_segments

get_property

get_property_keys

id_to_index

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

class spikeinterface.core.BinaryRecordingExtractor(file_paths, sampling_frequency, num_chan, dtype, channel_ids=None, time_axis=0, file_offset=0, gain_to_uV=None, offset_to_uV=None, is_filtered=None)

RecordingExtractor for a binary format

Parameters
file_paths: str or Path or list

Path to the binary file

sampling_frequency: float

The sampling frequency

num_chan: int

Number of channels

dtype: str or dtype

The dtype of the binary file

time_axis: int

The axis of the time dimension (default 0: F order)

channel_ids: list (optional)

A list of channel ids

file_offset: int (optional)

Number of bytes in the file to offset by during memmap instantiation.

gain_to_uV: float or array-like (optional)

The gain to apply to the traces

offset_to_uV: float or array-like

The offset to apply to the traces

is_filtered: bool or None

If True, the recording is assumed to be filtered. If None, is_filtered is not set.

Returns
recording: BinaryRecordingExtractor

The recording Extractor
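
A minimal instantiation sketch (the file name, channel count, and dtype are hypothetical):

>>> from spikeinterface.core import BinaryRecordingExtractor
>>> recording = BinaryRecordingExtractor('traces.raw', sampling_frequency=30000.,
...                                      num_chan=32, dtype='int16')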

Attributes
channel_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

ids_to_indices(ids[, prefer_slice])

Transform a list of ids (channel_ids or unit_ids) into an array of indices. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Routes to save_to_folder() or save_to_mem().

save_to_folder([name, folder, dump_ext, verbose])

Save extractor to folder.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes() for the case of a single probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids])

Set property vector:

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

write_recording(recording, file_paths[, dtype])

Save the traces of a recording extractor in binary .dat format.

add_recording_segment

annotate

channel_slice

check_if_dumpable

clear_channel_groups

clear_channel_locations

frame_slice

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_total_duration

get_total_samples

get_traces

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_dummy_probe_from_locations

set_probegroup

split_by

spikeinterface.core.read_binary(*args, **kwargs)

RecordingExtractor for a binary format

Parameters
file_paths: str or Path or list

Path to the binary file

sampling_frequency: float

The sampling frequency

num_chan: int

Number of channels

dtype: str or dtype

The dtype of the binary file

time_axis: int

The axis of the time dimension (default 0: F order)

channel_ids: list (optional)

A list of channel ids

file_offset: int (optional)

Number of bytes in the file to offset by during memmap instantiation.

gain_to_uV: float or array-like (optional)

The gain to apply to the traces

offset_to_uV: float or array-like

The offset to apply to the traces

is_filtered: bool or None

If True, the recording is assumed to be filtered. If None, is_filtered is not set.

Returns
recording: BinaryRecordingExtractor

The recording Extractor

class spikeinterface.core.NpzSortingExtractor(file_path)

Dead simple and super light format based on the NumPy NPZ format. https://docs.scipy.org/doc/numpy/reference/generated/numpy.savez.html#numpy.savez

It is in fact an archive of several .npy files. All spikes are stored in a two-column manner: index + labels.

Attributes
unit_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_all_spike_trains([outputs])

Return all spike trains concatenated

get_annotation(key[, copy])

Get an annotation.

ids_to_indices(ids[, prefer_slice])

Transform a list of ids (channel_ids or unit_ids) into an array of indices. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Routes to save_to_folder() or save_to_mem().

save_to_folder([name, folder, dump_ext, verbose])

Save extractor to folder.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids])

Set property vector:

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_sorting_segment

annotate

check_if_dumpable

get_annotation_keys

get_num_segments

get_num_units

get_property

get_property_keys

get_sampling_frequency

get_unit_ids

get_unit_property

get_unit_spike_train

id_to_index

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

select_units

write_sorting

class spikeinterface.core.NumpyRecording(traces_list, sampling_frequency, channel_ids=None)

In-memory recording. Contrary to previous versions, this class does not handle npy files.

Parameters
traces_list: list of array or array (if mono segment)

The traces to instantiate a mono or multisegment Recording

sampling_frequency: float

The sampling frequency in Hz

channel_ids: list

An optional list of channel_ids. If None, linear channels are assumed
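
A minimal sketch wrapping an in-memory array (the array shape is hypothetical):

>>> import numpy as np
>>> from spikeinterface.core import NumpyRecording
>>> traces = np.zeros((30000, 32), dtype='float32')  # (num_samples, num_channels)
>>> recording = NumpyRecording(traces, sampling_frequency=30000.)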

Attributes
channel_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

ids_to_indices(ids[, prefer_slice])

Transform a list of ids (channel_ids or unit_ids) into an array of indices. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Routes to save_to_folder() or save_to_mem().

save_to_folder([name, folder, dump_ext, verbose])

Save extractor to folder.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes() for the case of a single probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids])

Set property vector:

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

channel_slice

check_if_dumpable

clear_channel_groups

clear_channel_locations

frame_slice

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_total_duration

get_total_samples

get_traces

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_dummy_probe_from_locations

set_probegroup

split_by

class spikeinterface.core.NumpySorting(sampling_frequency, unit_ids=[])
Attributes
unit_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

from_dict(units_dict_list, sampling_frequency)

Construct sorting extractor from a list of dict.

from_extractor(source_sorting)

Create a numpy sorting from another extractor

from_neo_spiketrain_list(neo_spiketrains, ...)

Construct a sorting with a neo spiketrain list.

from_times_labels(times_list, labels_list, ...)

Construct sorting extractor from:

get_all_spike_trains([outputs])

Return all spike trains concatenated

get_annotation(key[, copy])

Get an annotation.

ids_to_indices(ids[, prefer_slice])

Transform a list of ids (channel_ids or unit_ids) into an array of indices. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Routes to save_to_folder() or save_to_mem().

save_to_folder([name, folder, dump_ext, verbose])

Save extractor to folder.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids])

Set property vector:

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_sorting_segment

annotate

check_if_dumpable

get_annotation_keys

get_num_segments

get_num_units

get_property

get_property_keys

get_sampling_frequency

get_unit_ids

get_unit_property

get_unit_spike_train

id_to_index

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

select_units

class spikeinterface.core.ChannelSliceRecording(parent_recording, channel_ids=None, renamed_channel_ids=None)

Class to slice a Recording object based on channel_ids.

Do not use this class directly but use recording.channel_slice(…)
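
A minimal sketch of the recommended usage (the channel ids are hypothetical):

>>> sub_recording = recording.channel_slice(channel_ids=[0, 1, 2, 3])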

Attributes
channel_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

ids_to_indices(ids[, prefer_slice])

Transform a list of ids (channel_ids or unit_ids) into an array of indices. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Routes to save_to_folder() or save_to_mem().

save_to_folder([name, folder, dump_ext, verbose])

Save extractor to folder.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes() for the case of a single probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids])

Set property vector:

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

channel_slice

check_if_dumpable

clear_channel_groups

clear_channel_locations

frame_slice

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_total_duration

get_total_samples

get_traces

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_dummy_probe_from_locations

set_probegroup

split_by

class spikeinterface.core.UnitsSelectionSorting(parent_sorting, unit_ids=None, renamed_unit_ids=None)

Class that handles slicing of a Sorting object based on a list of unit_ids.

Do not use this class directly but use sorting.select_units(…)
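
A minimal sketch of the recommended usage (the unit ids are hypothetical):

>>> sub_sorting = sorting.select_units(unit_ids=[0, 2, 5])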

Attributes
unit_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_all_spike_trains([outputs])

Return all spike trains concatenated

get_annotation(key[, copy])

Get an annotation.

ids_to_indices(ids[, prefer_slice])

Transform a list of ids (channel_ids or unit_ids) into an array of indices. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Routes to save_to_folder() or save_to_mem().

save_to_folder([name, folder, dump_ext, verbose])

Save extractor to folder.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids])

Set property vector:

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_sorting_segment

annotate

check_if_dumpable

get_annotation_keys

get_num_segments

get_num_units

get_property

get_property_keys

get_sampling_frequency

get_unit_ids

get_unit_property

get_unit_spike_train

id_to_index

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

select_units

class spikeinterface.core.FrameSliceRecording(parent_recording, start_frame=None, end_frame=None)

Class to get a lazy frame slice. Works only with mono-segment recordings.

Do not use this class directly but use recording.frame_slice(…)
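
A minimal sketch of the recommended usage, keeping the first second of a 30 kHz recording:

>>> sub_recording = recording.frame_slice(start_frame=0, end_frame=30000)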

Attributes
channel_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

ids_to_indices(ids[, prefer_slice])

Transform a list of ids (channel_ids or unit_ids) into an array of indices. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Routes to save_to_folder() or save_to_mem().

save_to_folder([name, folder, dump_ext, verbose])

Save extractor to folder.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes() for the case of a single probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids])

Set property vector:

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

channel_slice

check_if_dumpable

clear_channel_groups

clear_channel_locations

frame_slice

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_total_duration

get_total_samples

get_traces

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_dummy_probe_from_locations

set_probegroup

split_by

spikeinterface.core.append_recordings(*args, **kwargs)
spikeinterface.core.concatenate_recordings(*args, **kwargs)
spikeinterface.core.append_sortings(*args, **kwargs)
spikeinterface.core.extract_waveforms(recording, sorting, folder, load_if_exists=False, precompute_template=('average',), ms_before=3.0, ms_after=4.0, max_spikes_per_unit=500, overwrite=False, return_scaled=True, dtype=None, **job_kwargs)

Extracts waveforms from paired Recording-Sorting objects. Waveforms are persistent on disk and cached in memory.

Parameters
recording: Recording

The recording object

sorting: Sorting

The sorting object

folder: str or Path

The folder where waveforms are cached

load_if_exists: bool

If True and waveforms have already been extracted in the specified folder, they are loaded and not recomputed.

precompute_template: None or list

Precompute average/std/median templates. If None, no templates are precomputed.

ms_before: float

Time in ms to cut before spike peak

ms_after: float

Time in ms to cut after spike peak

max_spikes_per_unit: int or None

Number of spikes per unit to extract waveforms from (default 500). Use None to extract waveforms for all spikes

overwrite: bool

If True and ‘folder’ exists, the folder is removed and waveforms are recomputed. Otherwise an error is raised.

return_scaled: bool

If True and recording has gain_to_uV/offset_to_uV properties, waveforms are converted to uV.

dtype: dtype or None

Dtype of the output waveforms. If None, the recording dtype is maintained.

**job_kwargs: keyword arguments for parallel processing:
  • chunk_size or chunk_memory, or total_memory
    • chunk_size: int

      number of samples per chunk

    • chunk_memory: str

Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

Returns
we: WaveformExtractor

The WaveformExtractor object
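
A minimal usage sketch (the folder name and job kwargs are hypothetical):

>>> from spikeinterface.core import extract_waveforms
>>> we = extract_waveforms(recording, sorting, 'waveforms_folder',
...                        ms_before=3., ms_after=4., max_spikes_per_unit=500,
...                        n_jobs=1, chunk_size=30000, progress_bar=True)
>>> wfs = we.get_waveforms(unit_id=sorting.get_unit_ids()[0])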

class spikeinterface.core.WaveformExtractor(recording, sorting, folder)

Class to extract waveforms from paired Recording-Sorting objects. Waveforms are persistent on disk and cached in memory.

Parameters
recording: Recording

The recording object

sorting: Sorting

The sorting object

folder: Path

The folder where waveforms are cached

Returns
we: WaveformExtractor

The WaveformExtractor object

Examples

>>> # Instantiate
>>> we = WaveformExtractor.create(recording, sorting, folder)
>>> # Compute
>>> we = we.set_params(...)
>>> we = we.run_extract_waveforms(...)
>>> # Retrieve
>>> waveforms = we.get_waveforms(unit_id)
>>> template = we.get_template(unit_id, mode='median')
>>> # Load  from folder (in another session)
>>> we = WaveformExtractor.load_from_folder(folder)
Attributes
nafter
nbefore
nsamples
return_scaled

Methods

get_all_templates([unit_ids, mode])

Return templates (average waveform) for multiple units.

get_sampled_indices(unit_id)

Return sampled spike indices of extracted waveforms

get_template(unit_id[, mode, sparsity])

Return template (average waveform).

get_template_segment(unit_id, segment_index)

Return template for the specified unit id computed from waveforms of a specific segment.

get_waveforms(unit_id[, with_index, sparsity])

Return waveforms for the specified unit id.

get_waveforms_segment(segment_index, unit_id)

Return waveforms from a specified segment and unit_id.

precompute_templates([modes])

Precompute all templates for different “modes”:

set_params([ms_before, ms_after, ...])

Set parameters for waveform extraction

create

load_from_folder

run_extract_waveforms

sample_spikes

spikeinterface.core.download_dataset(repo=None, remote_path=None, local_folder=None, update_if_exists=False)
spikeinterface.core.write_binary_recording(recording, file_paths=None, dtype=None, add_file_extension=True, verbose=False, byte_offset=0, **job_kwargs)

Save the traces of a recording extractor to one or several binary .dat files.

Note:

time_axis is always 0 (contrary to previous versions). To get time_axis=1 (which is a bad idea), use write_binary_recording_file_handle().

Parameters
recording: RecordingExtractor

The recording extractor object to be saved in .dat format

file_paths: str or Path or list

The path to the file.

dtype: dtype

Type of the saved data. Default float32.

add_file_extension: bool

If True (default), the ‘.raw’ file extension is added if the file name does not end in ‘raw’, ‘bin’, or ‘dat’

verbose: bool

If True, output is verbose (when chunks are used)

byte_offset: int

Offset in bytes (default 0) for the binary file (e.g. to write a header)

**job_kwargs: keyword arguments for parallel processing:
  • chunk_size or chunk_memory, or total_memory
    • chunk_size: int

      number of samples per chunk

    • chunk_memory: str

Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

spikeinterface.core.set_global_tmp_folder(folder)

Set the global temporary folder path.

spikeinterface.core.set_global_dataset_folder(folder)

Set the global dataset folder.

class spikeinterface.core.ChunkRecordingExecutor(recording, func, init_func, init_args, verbose=False, progress_bar=False, handle_returns=False, n_jobs=1, total_memory=None, chunk_size=None, chunk_memory=None, job_name='')

Core class for parallel processing to run a “function” over chunks on a recording.

It supports running a function:
  • in a loop with chunk processing (low RAM usage)

  • at once if chunk_size is None (high RAM usage)

  • in parallel with ProcessPoolExecutor (higher speed)

The initializer (‘init_func’) allows setting a global context, to avoid heavy serialization (for an example, see the implementation in core.WaveformExtractor).

Parameters
recording: RecordingExtractor

The recording to be processed

func: function

Function that runs on each chunk

init_func: function

Initializer function to set the global context (accessible by ‘func’)

init_args: tuple

Arguments for init_func

verbose: bool

If True, output is verbose

progress_bar: bool

If True, a progress bar is printed to monitor the progress of the process

handle_returns: bool

If True, the function can return values

n_jobs: int

Number of jobs to be used (default 1). Use -1 to use as many jobs as number of cores

total_memory: str

Total memory (RAM) to use (e.g. “1G”, “500M”)

chunk_memory: str

Memory per chunk (RAM) to use (e.g. “1G”, “500M”)

chunk_size: int or None

Size of each chunk in number of samples. If ‘total_memory’ or ‘chunk_memory’ are used, it is ignored.

job_name: str

Job name

Returns
res: list

If ‘handle_returns’ is True, the results for each chunk process

Methods

run()

Runs the defined jobs.
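
A minimal sketch of the func/init_func pattern (the exact worker signature shown here is an assumption based on the description above; check the core implementation before relying on it):

>>> from spikeinterface.core import ChunkRecordingExecutor
>>> def init_func(recording):
...     # build the per-worker context once, to avoid heavy serialization per chunk
...     return dict(recording=recording)
>>> def func(segment_index, start_frame, end_frame, worker_ctx):
...     # runs on each chunk; values are collected when handle_returns=True
...     traces = worker_ctx['recording'].get_traces(segment_index=segment_index,
...                                                 start_frame=start_frame,
...                                                 end_frame=end_frame)
...     return traces.mean()
>>> executor = ChunkRecordingExecutor(recording, func, init_func, (recording,),
...                                   handle_returns=True, n_jobs=1,
...                                   chunk_size=10000, job_name='chunk mean')
>>> results = executor.run()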

Module spikeinterface.extractors

spikeinterface.extractors.toy_example(duration=10, num_channels=4, num_units=10, sampling_frequency=30000.0, num_segments=2, average_peak_amplitude=-100, upsample_factor=13, dumpable=False, dump_folder=None, seed=None)

Creates toy recording and sorting extractors.

Parameters
duration: float (or list if multi segment)

Duration in s (default 10)

num_channels: int

Number of channels (default 4)

num_units: int

Number of units (default 10)

sampling_frequency: float

Sampling frequency (default 30000)

num_segments: int (default 2)

Number of segments.

dumpable: bool

If True, objects are dumped to file and become ‘dumpable’

dump_folder: str or Path

Path to dump folder (if None, ‘test’ is used)

seed: int

Seed for random initialization

Returns
recording: RecordingExtractor

The output recording extractor. If dumpable is False it’s a NumpyRecordingExtractor, otherwise it’s an MdaRecordingExtractor

sorting: SortingExtractor

The output sorting extractor. If dumpable is False it’s a NumpySorting, otherwise it’s an NpzSortingExtractor
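
A minimal usage sketch:

>>> from spikeinterface.extractors import toy_example
>>> recording, sorting = toy_example(duration=10, num_channels=4, num_units=10,
...                                  num_segments=2, seed=0)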

spikeinterface.extractors.read_bids_folder(folder_path)

This reads an entire BIDS folder and returns a list of recordings with their attached Probes.

These files are considered:
  • _channels.tsv

  • _contacts.tsv

  • _ephys.nwb

  • _probes.tsv

spikeinterface.extractors.read_mearec(file_path, locs_2d=True, use_natural_unit_ids=True)
Parameters
file_path: str or Path

Path to MEArec h5 file

locs_2d: bool

If True (default), locations are loaded in 2d. If False, 3d locations are loaded

use_natural_unit_ids: bool

If True, natural unit strings are loaded (e.g. #0, #1). If False, unit ids are int64

Returns
recording: MEArecRecordingExtractor

The recording extractor object

sorting: MEArecSortingExtractor

The sorting extractor object

spikeinterface.extractors.read_spikeglx(*args, **kwargs)

Class for reading data from a SpikeGLX system (NI-DAQ for Neuropixels probes). See https://billkarsh.github.io/SpikeGLX/

Based on neo.rawio.SpikeGLXRawIO

Contrary to older versions, this reader is folder-based. If the folder contains several streams (‘imec0.ap’, ‘nidq’, ‘imec0.lf’), the stream has to be specified with stream_id=.

Parameters
folder_path: str
stream_id: str or None

The stream to load, for instance ‘imec0.ap’, ‘nidq’, or ‘imec0.lf’
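
A minimal usage sketch (the folder name is hypothetical):

>>> from spikeinterface.extractors import read_spikeglx
>>> recording = read_spikeglx('path/to/spikeglx_folder', stream_id='imec0.ap')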

spikeinterface.extractors.read_openephys(folder_path, **kwargs)

Read ‘legacy’ or ‘binary’ Open Ephys formats

Parameters
folder_path: str or Path

Path to openephys folder

Returns
recording: OpenEphysLegacyRecordingExtractor or OpenEphysBinaryExtractor
spikeinterface.extractors.read_intan(*args, **kwargs)

Class for reading data from an Intan board. Supports rhd and rhs formats.

Based on neo.rawio.IntanRawIO

Parameters
file_path: str
stream_id: str or None
spikeinterface.extractors.read_neuroscope(*args, **kwargs)

Class for reading data from NeuroScope. Ref: http://neuroscope.sourceforge.net

Based on neo.rawio.NeuroScopeRawIO

Parameters
file_path: str

The xml file.

stream_id: str or None
spikeinterface.extractors.read_plexon(*args, **kwargs)

Class for reading plexon plx files.

Based on neo.rawio.PlexonRawIO

Parameters
file_path: str

The plx file.

stream_id: str or None
spikeinterface.extractors.read_neuralynx(*args, **kwargs)

Class for reading a Neuralynx folder

Based on neo.rawio.NeuralynxRawIO

Parameters
folder_path: str

Path to the Neuralynx folder.

stream_id: str or None
spikeinterface.extractors.read_blackrock(*args, **kwargs)

Class for reading BlackRock data

Based on neo.rawio.BlackrockRawIO

Parameters
file_path: str

The BlackRock file.

stream_id: str or None
spikeinterface.extractors.read_mcsraw(*args, **kwargs)

Class for reading data from “Raw” Multi Channel System (MCS) format. This format is NOT the native MCS format (*.mcd). This format is a raw format with an internal binary header exported by the “MC_DataTool binary conversion” with the option header selected.

Based on neo.rawio.RawMCSRawIO

Parameters
file_path: str

The raw MCS file.

stream_id: str or None
spikeinterface.extractors.read_kilosort(*args, **kwargs)

SortingExtractor for a Kilosort output folder

Parameters
folder_path: str or Path

Path to the output Phy folder (containing the params.py)

keep_good_only: bool

If True, only Kilosort-labeled ‘good’ units are returned

spikeinterface.extractors.read_spike2(*args, **kwargs)

Class for reading Spike2 smr files. smrx files are not supported by this extractor; prefer CedRecordingExtractor instead.

Based on neo.rawio.Spike2RawIO

Parameters
file_path: str

The smr file.

stream_id: str or None
spikeinterface.extractors.read_ced(*args, **kwargs)

Class for reading smr/smrx CED files.

Based on neo.rawio.CedRawIO / sonpy

Alternative to read_spike2, which does not handle smrx files

Parameters
file_path: str

The smr or smrx file.

stream_id: str or None
spikeinterface.extractors.read_maxwell(*args, **kwargs)

Class for reading data from a Maxwell device. It handles MaxOne (old and new formats) and MaxTwo.

Based on neo.rawio.MaxwellRawIO

Parameters
file_path: str

Path to maxwell h5 file

stream_id: str or None

For MaxTwo, when there are several wells at the same time, you need to specify stream_id=’well000’, ‘well0001’, …

rec_name: str or None

When the file contains several blocks (aka recordings) you need to specify the one you want to extract. (rec_name=’rec0000’)

spikeinterface.extractors.read_nix(*args, **kwargs)

Class for reading Nix file

Based on neo.rawio.NIXRawIO

Parameters
file_path: str
stream_id: str or None
spikeinterface.extractors.read_spikegadgets(*args, **kwargs)

Class for reading .rec files from SpikeGadgets.

Parameters
file_path: str

The rec file.

stream_id: str or None
spikeinterface.extractors.read_klusta(*args, **kwargs)
spikeinterface.extractors.read_hdsort(*args, **kwargs)
spikeinterface.extractors.read_waveclust(*args, **kwargs)
spikeinterface.extractors.read_yass(*args, **kwargs)
spikeinterface.extractors.read_combinato(*args, **kwargs)
spikeinterface.extractors.read_tridesclous(*args, **kwargs)
spikeinterface.extractors.read_spykingcircus(*args, **kwargs)
spikeinterface.extractors.read_herdingspikes(*args, **kwargs)
spikeinterface.extractors.read_mda_recording(folder_path, **kwargs)
spikeinterface.extractors.read_mda_sorting(file_path, **kwargs)
spikeinterface.extractors.read_shybrid_recording(file_path)
spikeinterface.extractors.read_shybrid_sorting(file_path, sampling_frequency, delimiter=',')
spikeinterface.extractors.read_alf_sorting(folder_path, sampling_frequency=30000)

Module spikeinterface.toolkit

toolkit.utils

spikeinterface.toolkit.get_random_data_chunks(recording, return_scaled=False, num_chunks_per_segment=20, chunk_size=10000, seed=0)

Extract random chunks across segments

This is used for instance in get_noise_levels() to estimate noise on traces.

Parameters
recording: BaseRecording

The recording to get random chunks from

return_scaled: bool

If True, returned chunks are scaled to uV

num_chunks_per_segment: int

Number of chunks per segment

chunk_size: int

Size of a chunk in number of frames

seed: int

Random seed

Returns
chunk_list: np.array

Array of concatenated chunks across segments

spikeinterface.toolkit.get_channel_distances(recording)

Distance between channel pairs

spikeinterface.toolkit.get_closest_channels(recording, channel_ids=None, num_channels=None)

Get closest channels + distances

Parameters
recording: RecordingExtractor

The recording extractor

channel_ids: list or int

List of channel ids for which to compute the nearest neighborhoods

num_channels: int, optional

Maximum number of neighboring channels to return

Returns
: array (2d)

closest channel indices in ascending order for each channel id given in input

: array (2d)

distance in ascending order for each channel id given in input

spikeinterface.toolkit.get_noise_levels(recording, return_scaled=True, **random_chunk_kwargs)

Estimate noise for each channel using MAD methods.

Internally, it samples some chunks across segments and then uses the MAD estimator (which is more robust than the STD).
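
A minimal usage sketch, assuming recording is a loaded recording object:

>>> from spikeinterface.toolkit import get_noise_levels
>>> noise_levels = get_noise_levels(recording, return_scaled=True)  # one value per channel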

Preprocessing

spikeinterface.toolkit.preprocessing.filter(recording, engine='scipy', **kwargs)
Generic filter class based on:
  • scipy.signal.iirfilter

  • scipy.signal.filtfilt or scipy.signal.sosfilt

BandpassFilterRecording is built on top of it.

Parameters
recording: Recording

The recording extractor to be filtered

band: float or list

If float, cutoff frequency in Hz for the ‘highpass’ filter type. If list, (low, high) band in Hz for the ‘bandpass’ filter type.

btype: str

Type of the filter (‘bandpass’, ‘highpass’)

margin_ms: float

Margin in ms on border to avoid border effect

dtype: dtype or None

The dtype of the returned traces. If None, the dtype of the parent recording is used

**filter_kwargs: keyword arguments for the filter:
  • filter_order: order

    The order of the filter

  • filter_mode: ‘sos’ or ‘ba’

    ‘sos’ (second-order sections) is more stable than ‘ba’ and is therefore preferred.

  • ftype: str

    Filter type for iirdesign (‘butter’ / ‘cheby1’ / … all possible of scipy.signal.iirdesign)

Returns
filter_recording: FilterRecording

The filtered recording extractor object

spikeinterface.toolkit.preprocessing.bandpass_filter(*args, **kwargs)

Bandpass filter of a recording

Parameters
recording: Recording

The recording extractor to be filtered

freq_min: float

The highpass cutoff frequency in Hz

freq_max: float

The lowpass cutoff frequency in Hz

margin_ms: float

Margin in ms on border to avoid border effect

dtype: dtype or None

The dtype of the returned traces. If None, the dtype of the parent recording is used

**filter_kwargs: keyword arguments for the filter:
  • filter_order: order

    The order of the filter

  • filter_mode: ‘sos’ or ‘ba’

    ‘sos’ (second-order sections) is more stable than ‘ba’ and is therefore preferred.

  • ftype: str

    Filter type for iirdesign (‘butter’ / ‘cheby1’ / … all possible of scipy.signal.iirdesign)

Returns
filter_recording: BandpassFilterRecording

The bandpass-filtered recording extractor object
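
A minimal usage sketch, assuming recording is a loaded recording object:

>>> from spikeinterface.toolkit.preprocessing import bandpass_filter
>>> recording_f = bandpass_filter(recording, freq_min=300., freq_max=6000.)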

spikeinterface.toolkit.preprocessing.notch_filter(*args, **kwargs)
Parameters
recording: RecordingExtractor

The recording extractor to be notch-filtered

freq: int or float

The target frequency in Hz of the notch filter

q: int

The quality factor of the notch filter

**filter_kwargs: keyword arguments for the filter:
  • filter_order: order

    The order of the filter

  • filter_mode: ‘sos’ or ‘ba’

    ‘sos’ (second-order sections) is more stable than ‘ba’ and is therefore preferred.

  • ftype: str

    Filter type for iirdesign (‘butter’ / ‘cheby1’ / … all possible of scipy.signal.iirdesign)

Returns
filter_recording: NotchFilterRecording

The notch-filtered recording extractor object

spikeinterface.toolkit.preprocessing.normalize_by_quantile(*args, **kwargs)

Rescale the traces from the given recording extractor with a scalar and an offset. First, the median and quantiles of the distribution are estimated. Then the distribution is rescaled and offset so that the distance between the quantiles (1st and 99th by default) matches the given scale, and the median is set to the given median.

Parameters
recording: RecordingExtractor

The recording extractor to be transformed

scalar: float

Scale for the output distribution

median: float

Median for the output distribution

q1: float (default 0.01)

Lower quantile used for measuring the scale

q2: float (default 0.99)

Upper quantile used for measuring the scale

seed: int

Random seed for reproducibility

Returns
rescaled_traces: NormalizeByQuantileRecording

The rescaled traces recording extractor object

spikeinterface.toolkit.preprocessing.scale(*args, **kwargs)

Scale traces from the given recording extractor with a scalar and an offset. New traces = traces*scalar + offset.

Parameters
recording: RecordingExtractor

The recording extractor to be transformed

scalar: float or array

Scalar for the traces of the recording extractor or array with scalars for each channel

offset: float or array

Offset for the traces of the recording extractor or array with offsets for each channel

Returns
transform_traces: ScaleRecording

The transformed traces recording extractor object

spikeinterface.toolkit.preprocessing.center(*args, **kwargs)
spikeinterface.toolkit.preprocessing.whiten(*args, **kwargs)

Whitens the recording extractor traces.

Parameters
recording: RecordingExtractor

The recording extractor to be whitened.

**random_chunk_kwargs
Returns
whitened_recording: WhitenRecording

The whitened recording extractor

spikeinterface.toolkit.preprocessing.rectify(*args, **kwargs)
spikeinterface.toolkit.preprocessing.blank_staturation(*args, **kwargs)

Find and remove parts of the signal with extreme values. Some arrays may produce these when amplifiers enter saturation, typically for short periods of time. To remove these artefacts, values below or above a threshold are set to the median signal value. The threshold is either estimated automatically, using the lower and upper 0.1 signal percentile with the largest deviation from the median, or specified. Use this function with caution, as it may clip uncontaminated signals. A warning is printed if the data range suggests no artefacts.

Parameters
recording: RecordingExtractor

The recording extractor to be transformed

TODO
Returns
rescaled_traces: BlankSaturationRecording

The filtered traces recording extractor object

spikeinterface.toolkit.preprocessing.clip(*args, **kwargs)

Limit the values of the data between a_min and a_max. Values exceeding the range will be set to the minimum or maximum, respectively.

Parameters
recording: RecordingExtractor

The recording extractor to be transformed

a_min: float or `None` (default `None`)

Minimum value. If None, clipping is not performed on lower interval edge.

a_max: float or `None` (default `None`)

Maximum value. If None, clipping is not performed on upper interval edge.

Returns
rescaled_traces: ClipTracesRecording

The clipped traces recording extractor object

spikeinterface.toolkit.preprocessing.common_reference(*args, **kwargs)

Re-references the recording extractor traces.

Parameters
recording: RecordingExtractor

The recording extractor to be re-referenced

reference: str ‘global’, ‘single’ or ‘local’

If ‘global’, CMR/CAR is used, either by group or across all channels. If ‘single’, the selected channel(s) is removed from all channels; the operator is not used in that case. If ‘local’, an average CMR/CAR is implemented using only the k nearest channels outside of a radius around each channel.

operator: str ‘median’ or ‘average’

If ‘median’, common median reference (CMR) is implemented (the median of the selected channels is removed for each timestamp). If ‘average’, common average reference (CAR) is implemented (the mean of the selected channels is removed for each timestamp).

groups: list

List of lists containing the channel ids for splitting the reference. The CMR, CAR, or referencing with respect to single channels are applied group-wise. However, this is not applied for the local CAR. It is useful when dealing with different channel groups, e.g. multiple tetrodes.

ref_channels: list or int

If no ‘groups’ are specified, all channels are referenced to ‘ref_channels’. If ‘groups’ is provided, then a list of channels to be applied to each group is expected. If ‘single’ reference, a list of one channel or an int is expected.

local_radius: tuple(int, int)

Used in the local CAR implementation as the selecting annulus (exclude radius, include radius)

dtype: str

dtype of the returned traces. If None, dtype is maintained

verbose: bool

If True, output is verbose

Returns
referenced_recording: CommonReferenceRecording

The re-referenced recording extractor object
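
A minimal usage sketch, assuming recording is a loaded (typically filtered) recording object:

>>> from spikeinterface.toolkit.preprocessing import common_reference
>>> recording_cmr = common_reference(recording, reference='global', operator='median')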

spikeinterface.toolkit.preprocessing.remove_artifacts(*args, **kwargs)

Removes stimulation artifacts from recording extractor traces. By default, artifact periods are zeroed-out (mode = ‘zeros’). This is only recommended for traces that are centered around zero (e.g. through a prior highpass filter); if this is not the case, linear and cubic interpolation modes are also available, controlled by the ‘mode’ input argument.

Parameters
recording: RecordingExtractor

The recording extractor to remove artifacts from

list_triggers: list of list

One list per segment of int with the stimulation trigger frames

ms_before: float

Time interval in ms to remove before the trigger events

ms_after: float

Time interval in ms to remove after the trigger events

mode: str

Determines what artifacts are replaced by. Can be one of the following:

  • ‘zeros’ (default): Artifacts are replaced by zeros.

  • ‘linear’: Replacements are obtained through linear interpolation between

    the trace before and after the artifact. If the trace starts or ends with an artifact period, the gap is filled with the closest available value before or after the artifact.

  • ‘cubic’: Cubic spline interpolation between the trace before and after

    the artifact, referenced to evenly spaced fit points before and after the artifact. This is an option that can be helpful if there are significant LFP effects around the time of the artifact, but visual inspection of fit behaviour with your chosen settings is recommended. The spacing of fit points is controlled by ‘fit_sample_spacing’, with greater spacing between points leading to a fit that is less sensitive to high-frequency fluctuations but at the cost of a less smooth continuation of the trace. If the trace starts or ends with an artifact, the gap is filled with the closest available value before or after the artifact.

fit_sample_spacing: float

Determines the spacing (in ms) of reference points for the cubic spline fit if mode = ‘cubic’. Default = 1ms. Note: The actual fit samples are the median of the 5 data points around the time of each sample point to avoid excessive influence from hyper-local fluctuations.

Returns
removed_recording: RemoveArtifactsRecording

The recording extractor after artifact removal

spikeinterface.toolkit.preprocessing.remove_bad_channels(*args, **kwargs)

Remove bad channels from the recording extractor given a threshold on the standard deviation.

Parameters
recording: RecordingExtractor

The recording extractor object

bad_threshold: float

If automatic is used, the threshold for the standard deviation over which channels are removed

**random_chunk_kwargs
Returns
remove_bad_channels_recording: RemoveBadChannelsRecording

The recording extractor without bad channels

Postprocessing

spikeinterface.toolkit.postprocessing.get_template_amplitudes(waveform_extractor, peak_sign='neg', mode='extremum')

Get amplitude per channel for each unit.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

mode: str

‘extremum’: max or min. ‘at_index’: take value at spike index

Returns
peak_values: dict

Dictionary with unit ids as keys and template amplitudes as values

spikeinterface.toolkit.postprocessing.get_template_extremum_channel(waveform_extractor, peak_sign='neg', outputs='id')

Compute the channel with the extremum peak for each unit.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

outputs: str
  • ‘id’: channel id

  • ‘index’: channel index

Returns
extremum_channels: dict

Dictionary with unit ids as keys and extremum channels (id or index based on ‘outputs’) as values

spikeinterface.toolkit.postprocessing.get_template_extremum_channel_peak_shift(waveform_extractor, peak_sign='neg')

In some situations spike sorters can return a spike index with a small shift relative to the waveform peak. This function estimates and returns these alignment shifts for the mean template. It is used internally by get_spike_amplitudes() to accurately retrieve the spike amplitudes.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

Returns
shifts: dict

Dictionary with unit ids as keys and shifts as values

spikeinterface.toolkit.postprocessing.get_template_extremum_amplitude(waveform_extractor, peak_sign='neg')

Computes amplitudes on the best channel.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

Returns
amplitudes: dict

Dictionary with unit ids as keys and amplitudes as values

spikeinterface.toolkit.postprocessing.get_template_channel_sparsity(waveform_extractor, method='best_channels', peak_sign='neg', outputs='id', num_channels=None, radius_um=None, threshold=5, by_property=None)

Get channel sparsity (subset of channels) for each template with several methods.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

method: str
  • “best_channels”: N best channels with the largest amplitude. Use the ‘num_channels’ argument to specify the

    number of channels.

  • “radius”: radius around the best channel. Use the ‘radius_um’ argument to specify the radius in um

  • “threshold”: thresholds based on template signal-to-noise ratio. Use the ‘threshold’ argument

    to specify the SNR threshold.

  • “by_property”: sparsity is given by a property of the recording and sorting (e.g. ‘group’).

    Use the ‘by_property’ argument to specify the property name.

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

outputs: str
  • ‘id’: channel id

  • ‘index’: channel index

num_channels: int

Number of channels for ‘best_channels’ method

radius_um: float

Radius in um for ‘radius’ method

threshold: float

Threshold in SNR for the ‘threshold’ method

by_property: object

Property name for ‘by_property’ method

Returns
sparsity: dict

Dictionary with unit ids as keys and sparse channel ids or indices (id or index based on ‘outputs’) as values

spikeinterface.toolkit.postprocessing.compute_unit_centers_of_mass(waveform_extractor, peak_sign='neg', num_channels=10)

Computes the center of mass (COM) of a unit based on the template amplitudes.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

num_channels: int

Number of channels used to compute COM

Returns
centers_of_mass: dict of np.array

Dictionary with unit ids as keys and centers of mass as values

spikeinterface.toolkit.postprocessing.calculate_template_metrics(waveform_extractor, feature_names=None, peak_sign='neg', **kwargs)

Compute template features like: peak_to_valley/peak_trough_ratio/half_width/repolarization_slope/recovery_slope

spikeinterface.toolkit.postprocessing.get_template_metric_names()
spikeinterface.toolkit.postprocessing.compute_principal_components(waveform_extractor, load_if_exists=False, n_components=5, mode='by_channel_local', whiten=True, dtype='float32')

Compute PC scores from waveform extractor.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

load_if_exists: bool

If True and pc scores are already in the waveform extractor folders, pc scores are loaded and not recomputed.

n_components: int

Number of components for PCA

mode: str
  • ‘by_channel_local’: a local PCA is fitted for each channel (projection by channel)

  • ‘by_channel_global’: a global PCA is fitted for all channels (projection by channel)

  • ‘concatenated’: channels are concatenated and a global PCA is fitted

whiten: bool

If True, waveforms are pre-whitened

dtype: dtype

Dtype of the pc scores (default float32)

Returns
pc: WaveformPrincipalComponent

The waveform principal component object

spikeinterface.toolkit.postprocessing.get_spike_amplitudes(waveform_extractor, peak_sign='neg', outputs='concatenated', return_scaled=True, **job_kwargs)

Computes the spike amplitudes from a WaveformExtractor.

  1. The waveform extractor is used to determine the max channel per unit.

  2. Then a “peak_shift” is estimated because for some sorters the spike index is not always at the peak.

  3. Amplitudes are extracted in chunks (parallel or not)

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor object

peak_sign: str
The sign to compute maximum channel:
  • ‘neg’

  • ‘pos’

  • ‘both’

return_scaled: bool

If True and recording has gain_to_uV/offset_to_uV properties, amplitudes are converted to uV.

outputs: str
How the output should be returned:
  • ‘concatenated’

  • ‘by_unit’

**job_kwargs: keyword arguments for parallel processing:
  • chunk_size or chunk_memory, or total_memory
    • chunk_size: int

      number of samples per chunk

    • chunk_memory: str

Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

Returns
amplitudes: np.array
The spike amplitudes.
  • If ‘concatenated’ all amplitudes for all spikes and all units are concatenated

  • If ‘by_unit’, amplitudes are returned as a list (for segments) of dictionaries (for units)
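
A minimal usage sketch, assuming we is a WaveformExtractor returned by extract_waveforms:

>>> from spikeinterface.toolkit.postprocessing import get_spike_amplitudes
>>> amplitudes = get_spike_amplitudes(we, peak_sign='neg', outputs='concatenated',
...                                   n_jobs=1, chunk_size=30000)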

spikeinterface.toolkit.postprocessing.compute_correlograms(sorting, window_ms=100.0, bin_ms=5.0, symmetrize=False)

Compute several cross-correlograms in one pass from several clusters.

This very elegant implementation is copied from the phy package written by Cyril Rossant. https://github.com/cortex-lab/phylib/blob/master/phylib/stats/ccg.py

Some slight modifications have been made to fit the spikeinterface data model, because spikeinterface handles several segments.

Adaptation: Samuel Garcia

Quality metrics

spikeinterface.toolkit.qualitymetrics.compute_quality_metrics(waveform_extractor, metric_names=None, waveform_principal_component=None, **kwargs)
Parameters
waveform_extractor
metric_names
waveform_principal_component
kwargs
spikeinterface.toolkit.qualitymetrics.get_quality_metric_list()

Module spikeinterface.sorters

spikeinterface.sorters.available_sorters()

Lists available sorters.

spikeinterface.sorters.installed_sorters()

Lists installed sorters.

spikeinterface.sorters.get_default_params(sorter_name_or_class)

Returns default parameters for the specified sorter.

Parameters
sorter_name_or_class: str or SorterClass

The sorter to retrieve default parameters from

Returns
default_params: dict

Dictionary with default params for the specified sorter

spikeinterface.sorters.print_sorter_versions()
spikeinterface.sorters.get_sorter_description(sorter_name_or_class)

Returns a brief description of the specified sorter.

Parameters
sorter_name_or_class: str or SorterClass

The sorter to retrieve description from

Returns
params_description: dict

Dictionary with parameter description

spikeinterface.sorters.run_sorter(sorter_name, recording, output_folder=None, remove_existing_folder=True, delete_output_folder=False, verbose=False, raise_error=True, docker_image=None, with_output=True, **sorter_params)
spikeinterface.sorters.run_sorters(sorter_list, recording_dict_or_list, working_folder, sorter_params={}, mode_if_folder_exists='raise', engine='loop', engine_kwargs={}, verbose=False, with_output=True, docker_images={})

This runs several sorters on several recordings, either in a simple nested loop or in parallel depending on the engine.

Note: engines based on the Python multiprocessing module do not allow nested subprocesses, so sorters that already use multiprocessing internally will fail with such an engine.

Parameters
sorter_list: list of str

List of sorter names.

recording_dict_or_list: dict or list

A dict of recordings. The keys will be used as the recording names. If a list is given, the names will be recording_0, recording_1, …

working_folder: str

The working directory.

sorter_params: dict of dict with sorter_name as key

This allows overriding the default params of each sorter.

mode_if_folder_exists: ‘raise’ or ‘overwrite’ or ‘keep’
The mode when the subfolder of recording/sorter already exists.
  • ‘raise’ : raise error if subfolder exists

  • ‘overwrite’ : delete and force recompute

  • ‘keep’ : do not compute again if the subfolder exists and the log is OK

engine: str

‘loop’, ‘joblib’, or ‘dask’

engine_kwargs: dict
This contains kwargs specific to the launcher engine:
  • ‘loop’ : no kwargs

  • ‘joblib’ : {‘n_jobs’: int} the number of processes

  • ‘dask’ : {‘client’: Client} the dask client for submitting tasks

verbose: bool

If True, the sorter output is verbose

with_output: bool

If True, the results are collected and returned.

docker_images: dict

A dictionary {sorter_name : docker_image} to specify if some sorters should run in docker images

run_sorter_kwargs: dict
This contains kwargs specific to the run_sorter function:
  • ‘raise_error’ : bool
  • ‘parallel’ : bool

  • ‘n_jobs’ : int

  • ‘joblib_backend’ : ‘loky’ / ‘multiprocessing’ / ‘threading’

Returns
results: dict

The output is a nested dict[(rec_name, sorter_name)] of SortingExtractor objects.
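
Example (a minimal sketch): recording0 and recording1 are hypothetical recording objects and the sorter names are only examples of installed sorters.

    import spikeinterface.sorters as ss

    recordings = {'rec0': recording0, 'rec1': recording1}
    results = ss.run_sorters(
        sorter_list=['tridesclous', 'herdingspikes'],
        recording_dict_or_list=recordings,
        working_folder='working_folder',
        mode_if_folder_exists='keep',
        engine='loop',
        verbose=True)

    # results is indexed by (rec_name, sorter_name)
    sorting_tdc_rec0 = results[('rec0', 'tridesclous')]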

Module spikeinterface.comparison

spikeinterface.comparison.compare_two_sorters(*args, **kwargs)

Compares two spike sorter outputs.

  • Spike trains are matched based on their agreement scores

  • Individual spikes are labelled as true positives (TP), false negatives (FN), false positives 1 (FP from spike train 1), false positives 2 (FP from spike train 2), misclassifications (CL)

It also gives access to the confusion matrix and to the agreement, false positive, and false negative fractions.

Parameters
sorting1: SortingExtractor

The first sorting for the comparison

sorting2: SortingExtractor

The second sorting for the comparison

sorting1_name: str

The name of sorter 1

sorting2_name:str

The name of sorter 2

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

verbose: bool

If True, output is verbose

Returns
sorting_comparison: SortingComparison

The SortingComparison object
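
Example (a minimal sketch): sorting_A and sorting_B are hypothetical outputs of two sorters.

    import spikeinterface.comparison as sc

    cmp = sc.compare_two_sorters(
        sorting_A, sorting_B,
        sorting1_name='sorterA', sorting2_name='sorterB',
        delta_time=0.4, match_score=0.5)

    # matched units and ordered agreement scores
    matching = cmp.get_matching()
    agreement = cmp.get_ordered_agreement_scores()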

spikeinterface.comparison.compare_multiple_sorters(*args, **kwargs)

Compares multiple spike sorter outputs.

  • Pair-wise comparisons are made

  • An agreement graph is built based on the agreement score

A consensus-based sorting extractor can be obtained with the get_agreement_sorting() method.

Parameters
sorting_list: list

List of sorting extractor objects to be compared

name_list: list

List of spike sorter names. If not given, sorters are named as ‘sorter0’, ‘sorter1’, ‘sorter2’, etc.

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

spiketrain_mode: str
Mode to extract agreement spike trains:
  • ‘union’: spike trains are the union between the spike trains of the best matching two sorters

  • ‘intersection’: spike trains are the intersection between the spike trains of the best matching two sorters

verbose: bool

if True, output is verbose

Returns
multi_sorting_comparison: MultiSortingComparison

MultiSortingComparison object with the multiple sorter comparison
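
Example (a minimal sketch): the three sorting objects are hypothetical, and the keyword of get_agreement_sorting() is assumed here.

    import spikeinterface.comparison as sc

    mcmp = sc.compare_multiple_sorters(
        sorting_list=[sorting_A, sorting_B, sorting_C],
        name_list=['A', 'B', 'C'],
        spiketrain_mode='union', verbose=True)

    # consensus sorting: keep units agreed upon by at least 2 sorters
    # (the 'minimum_agreement_count' keyword is assumed)
    agreement_sorting = mcmp.get_agreement_sorting(minimum_agreement_count=2)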

spikeinterface.comparison.compare_sorter_to_ground_truth(*args, **kwargs)

Compares a sorter to a ground truth.

This class can:
  • compute a “match” between gt_sorting and tested_sorting

  • compute optionally the score label (TP, FN, CL, FP) for each spike

  • count by unit of GT the total of each (TP, FN, CL, FP) into a Dataframe GroundTruthComparison.count

  • compute the confusion matrix .get_confusion_matrix()

  • compute some performance metric with several strategy based on the count score by unit

  • count well detected units

  • count false positive detected units

  • count redundant units

  • count overmerged units

  • summarize all of this

Parameters
gt_sorting: SortingExtractor

The first sorting for the comparison

tested_sorting: SortingExtractor

The second sorting for the comparison

gt_name: str

The name of sorter 1

tested_name:str

The name of sorter 2

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

redundant_score: float

Agreement score above which units are redundant (default 0.2)

overmerged_score: float

Agreement score above which units can be overmerged (default 0.2)

well_detected_score: float

Agreement score above which units are well detected (default 0.8)

exhaustive_gt: bool (default True)

Tells whether the ground truth is “exhaustive” or not, i.e. whether the GT contains all possible units. It enables more performance measurements. For instance, MEArec simulated datasets have exhaustive_gt=True.

match_mode: ‘hungarian’, or ‘best’

Which match is used for counting: ‘hungarian’ or ‘best match’.

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

compute_labels: bool

If True, labels are computed at instantiation (default False)

compute_misclassifications: bool

If True, misclassifications are computed at instantiation (default False)

verbose: bool

If True, output is verbose

Returns
sorting_comparison: SortingComparison

The SortingComparison object
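
Example (a minimal sketch): gt_sorting and tested_sorting are hypothetical sorting objects.

    import spikeinterface.comparison as sc

    gt_cmp = sc.compare_sorter_to_ground_truth(
        gt_sorting, tested_sorting, exhaustive_gt=True)

    # per-unit performance, then a printed summary
    perf = gt_cmp.get_performance(method='by_unit')
    gt_cmp.print_summary(well_detected_score=0.8)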

class spikeinterface.comparison.GroundTruthComparison(gt_sorting, tested_sorting, gt_name=None, tested_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, well_detected_score=0.8, redundant_score=0.2, overmerged_score=0.2, chance_score=0.1, exhaustive_gt=False, n_jobs=- 1, match_mode='hungarian', compute_labels=False, compute_misclassifications=False, verbose=False)

Compares a sorter to a ground truth.

This class can:
  • compute a “match” between gt_sorting and tested_sorting

  • compute optionally the score label (TP, FN, CL, FP) for each spike

  • count by unit of GT the total of each (TP, FN, CL, FP) into a Dataframe GroundTruthComparison.count

  • compute the confusion matrix .get_confusion_matrix()

  • compute some performance metric with several strategy based on the count score by unit

  • count well detected units

  • count false positive detected units

  • count redundant units

  • count overmerged units

  • summarize all of this

Parameters
gt_sorting: SortingExtractor

The first sorting for the comparison

tested_sorting: SortingExtractor

The second sorting for the comparison

gt_name: str

The name of sorter 1

tested_name:str

The name of sorter 2

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

redundant_score: float

Agreement score above which units are redundant (default 0.2)

overmerged_score: float

Agreement score above which units can be overmerged (default 0.2)

well_detected_score: float

Agreement score above which units are well detected (default 0.8)

exhaustive_gt: bool (default True)

Tells whether the ground truth is “exhaustive” or not, i.e. whether the GT contains all possible units. It enables more performance measurements. For instance, MEArec simulated datasets have exhaustive_gt=True.

match_mode: ‘hungarian’, or ‘best’

Which match is used for counting: ‘hungarian’ or ‘best match’.

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

compute_labels: bool

If True, labels are computed at instantiation (default False)

compute_misclassifications: bool

If True, misclassifications are computed at instantiation (default False)

verbose: bool

If True, output is verbose

Returns
sorting_comparison: SortingComparison

The SortingComparison object

Attributes
sorting1
sorting1_name
sorting2
sorting2_name

Methods

count_bad_units()

See get_bad_units

count_false_positive_units([redundant_score])

See get_false_positive_units().

count_overmerged_units([overmerged_score])

See get_overmerged_units().

count_redundant_units([redundant_score])

See get_redundant_units().

count_well_detected_units(well_detected_score)

Counts how many units are well detected.

get_bad_units()

Return units list of "bad units".

get_confusion_matrix()

Computes the confusion matrix.

get_false_positive_units([redundant_score])

Return units list of "false positive units" from tested_sorting.

get_overmerged_units([overmerged_score])

Return "overmerged units"

get_performance([method, output])

Get performance rate with several method:

get_redundant_units([redundant_score])

Return "redundant units"

get_well_detected_units([well_detected_score])

Return units list of "well detected units" from tested_sorting.

print_performance([method])

Print performance with the selected method

print_summary([well_detected_score, ...])

Print a global performance summary that depends on the context:

get_labels1

get_labels2

get_ordered_agreement_scores

count_bad_units()

See get_bad_units

count_false_positive_units(redundant_score=None)

See get_false_positive_units().

count_overmerged_units(overmerged_score=None)

See get_overmerged_units().

count_redundant_units(redundant_score=None)

See get_redundant_units().

count_well_detected_units(well_detected_score)

Counts how many units are well detected. kwargs are the same as for get_well_detected_units.

get_bad_units()

Return the list of “bad units”.

“Bad units” are defined as units in the tested sorting that are not in the best match list of GT units.

So it is the union of “false positive units” + “redundant units”.

Needs exhaustive_gt=True

get_confusion_matrix()

Computes the confusion matrix.

Returns
confusion_matrix: pandas.DataFrame

The confusion matrix

get_false_positive_units(redundant_score=None)

Return the list of “false positive units” from tested_sorting.

“False positive units” are defined as units in the tested sorting that are not matched at all to GT units.

Needs exhaustive_gt=True

Parameters
redundant_score: float (default 0.2)

The agreement score below which tested units are counted as “false positive” (and not “redundant”).

get_overmerged_units(overmerged_score=None)

Return “overmerged units”

“overmerged units” are defined as units in tested that match more than one GT unit with an agreement score larger than overmerged_score.

Parameters
overmerged_score: float (default 0.2)

Tested units with 2 or more agreement scores above ‘overmerged_score’ are counted as “overmerged”.

get_performance(method='by_unit', output='pandas')
Get performance rate with several method:
  • ‘raw_count’ : just render the raw count table

  • ‘by_unit’ : render performance as rates for each GT unit

  • ‘pooled_with_average’ : compute rates unit by unit and average them

Parameters
method: str

‘raw_count’, ‘by_unit’, or ‘pooled_with_average’

output: str

‘pandas’ or ‘dict’

Returns
perf: pandas dataframe/series (or dict)

dataframe/series (based on ‘output’) with performance entries

get_redundant_units(redundant_score=None)

Return “redundant units”

“Redundant units” are defined as units in the tested sorting that match a GT unit with a high agreement score but are not the best match. In other words, they correspond to GT units that are detected twice or more.

Parameters
redundant_score: float (default 0.2)

The agreement score above which tested units are counted as “redundant” (and not “false positive” ).

get_well_detected_units(well_detected_score=None)

Return the list of “well detected units” from tested_sorting.

“Well detected units” are defined as units in the tested sorting that are well matched to GT units.

Parameters
well_detected_score: float (default 0.8)

The agreement score above which tested units are counted as “well detected”.

print_performance(method='pooled_with_average')

Print performance with the selected method

print_summary(well_detected_score=None, redundant_score=None, overmerged_score=None)
Print a global performance summary that depends on the context:
  • exhaustive= True/False

  • how many gt units (one or several)

This summary mixes several performance metrics.

class spikeinterface.comparison.SymmetricSortingComparison(sorting1, sorting2, sorting1_name=None, sorting2_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, chance_score=0.1, n_jobs=- 1, verbose=False)

Compares two spike sorter outputs.

  • Spike trains are matched based on their agreement scores

  • Individual spikes are labelled as true positives (TP), false negatives (FN), false positives 1 (FP from spike train 1), false positives 2 (FP from spike train 2), misclassifications (CL)

It also gives access to the confusion matrix and to the agreement, false positive, and false negative fractions.

Parameters
sorting1: SortingExtractor

The first sorting for the comparison

sorting2: SortingExtractor

The second sorting for the comparison

sorting1_name: str

The name of sorter 1

sorting2_name:str

The name of sorter 2

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

verbose: bool

If True, output is verbose

Returns
sorting_comparison: SortingComparison

The SortingComparison object

Attributes
sorting1
sorting1_name
sorting2
sorting2_name

Methods

get_agreement_fraction

get_best_unit_match1

get_best_unit_match2

get_matching

get_matching_event_count

get_matching_unit_list1

get_matching_unit_list2

get_ordered_agreement_scores

get_agreement_fraction(unit1=None, unit2=None)
get_best_unit_match1(unit1)
get_best_unit_match2(unit2)
get_matching()
get_matching_event_count(unit1, unit2)
get_matching_unit_list1(unit1)
get_matching_unit_list2(unit2)
class spikeinterface.comparison.GroundTruthStudy(study_folder=None)

Methods

get_metrics([rec_name])

Load or compute unit metrics for a given recording.

get_templates(rec_name[, sorter_name, mode])

Get template for a given recording.

aggregate_count_units

aggregate_dataframes

aggregate_performance_by_unit

aggregate_run_times

compute_metrics

compute_waveforms

concat_all_snr

copy_sortings

create

get_ground_truth

get_recording

get_sorting

get_units_snr

get_waveform_extractor

run_comparisons

run_sorters

scan_folder

aggregate_count_units(well_detected_score=None, redundant_score=None, overmerged_score=None)
aggregate_dataframes(copy_into_folder=True, **karg_thresh)
aggregate_performance_by_unit()
aggregate_run_times()
compute_metrics(rec_name, metric_names=['snr'], ms_before=3.0, ms_after=4.0, max_spikes_per_unit=500, n_jobs=- 1, total_memory='1G')
compute_waveforms(rec_name, sorter_name=None, ms_before=3.0, ms_after=4.0, max_spikes_per_unit=500, n_jobs=- 1, total_memory='1G')
concat_all_snr()
copy_sortings()
classmethod create(study_folder, gt_dict, **job_kwargs)
get_ground_truth(rec_name=None)
get_metrics(rec_name=None, **metric_kwargs)

Load or compute unit metrics for a given recording.

get_recording(rec_name=None)
get_sorting(sort_name, rec_name=None)
get_templates(rec_name, sorter_name=None, mode='median')

Get template for a given recording.

If sorter_name=None, the templates are from the ground truth.

get_units_snr(rec_name=None, **metric_kwargs)
get_waveform_extractor(rec_name, sorter_name=None)
run_comparisons(exhaustive_gt=False, **kwargs)
run_sorters(sorter_list, mode_if_folder_exists='keep', **kwargs)
scan_folder()
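
Example (a minimal sketch): the recording/ground-truth pairs, the study folder, and the sorter name are illustrative.

    import spikeinterface.comparison as sc

    gt_dict = {'rec0': (recording0, gt_sorting0)}   # hypothetical data
    study = sc.GroundTruthStudy.create('study_folder', gt_dict)

    study.run_sorters(['tridesclous'], mode_if_folder_exists='keep')
    study.run_comparisons(exhaustive_gt=True)
    dataframes = study.aggregate_dataframes()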

Module spikeinterface.widgets

spikeinterface.widgets.plot_timeseries(*args, **kwargs)

Plots recording timeseries.

Parameters
recording: RecordingExtractor

The recording extractor object

segment_index: None or int

The segment index.

channel_ids: list

The channel ids to display.

order_channel_by_depth: boolean

Reorder channels by depth.

time_range: list

List with start time and end time

mode: ‘line’ or ‘map’ or ‘auto’
Three possible modes:
  • ‘line’ : classical for low channel count

  • ‘map’ : for high channel count use color heat map

  • ‘auto’ : switches automatically depending on the channel count (‘line’ for fewer than 32 channels, ‘map’ otherwise)

cmap: str (default ‘RdBu’)

matplotlib colormap used in mode ‘map’

show_channel_ids: bool

Set yticks with channel ids

color_groups: bool

If True groups are plotted with different colors

color: matplotlib color, default: None

The color used to draw the traces.

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: TimeseriesWidget

The output widget
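
Example (a minimal sketch): recording is a hypothetical recording object.

    import matplotlib.pyplot as plt
    import spikeinterface.widgets as sw

    # plot the first 5 seconds of a few channels, colored by group
    w = sw.plot_timeseries(recording, time_range=[0, 5],
                           channel_ids=recording.get_channel_ids()[:8],
                           color_groups=True)
    plt.show()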

spikeinterface.widgets.plot_rasters(*args, **kwargs)

Plots spike train rasters.

Parameters
sorting: SortingExtractor

The sorting extractor object

segment_index: None or int

The segment index.

unit_ids: list

List of unit ids

time_range: list

List with start time and end time

color: matplotlib color

The color to be used

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: RasterWidget

The output widget

spikeinterface.widgets.plot_probe_map(*args, **kwargs)

Plot the probe of a recording.

Parameters
recording: RecordingExtractor

The recording extractor object

channel_ids: list

The channel ids to display

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

**plot_probe_kwargs: keyword arguments for the probeinterface.plotting.plot_probe() function
Returns
W: ProbeMapWidget

The output widget

spikeinterface.widgets.plot_isi_distribution(*args, **kwargs)

Plots spike train ISI distribution.

Parameters
sorting: SortingExtractor

The sorting extractor object

unit_ids: list

List of unit ids

bins: int

Number of bins

window: float

Window size in s

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

axes: list of matplotlib axes

The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored

Returns
W: ISIDistributionWidget

The output widget

spikeinterface.widgets.plot_crosscorrelograms(*args, **kwargs)

Plots spike train cross-correlograms. The diagonal shows the auto-correlograms.

Parameters
sorting: SortingExtractor

The sorting extractor object

unit_ids: list

List of unit ids

bin_ms: float

Bin duration in ms

window_ms: float

Window duration in ms

symmetrize: bool (default False)

Make the CCG symmetric

spikeinterface.widgets.plot_autocorrelograms(*args, **kwargs)

Plots spike train auto-correlograms.

Parameters
sorting: SortingExtractor

The sorting extractor object

unit_ids: list

List of unit ids

bin_ms: float

Bin duration in ms

window_ms: float

Window duration in ms

symmetrize: bool (default False)

Make the CCG symmetric

spikeinterface.widgets.plot_drift_over_time(*args, **kwargs)

Plot “y” (=depth) (or “x”) drift over time. This uses peak detection on each channel and builds a histogram of peak activity over time bins.

Parameters
recording: RecordingExtractor

The recording extractor object

peaks: None or numpy array

Optionally can give already detected peaks to avoid multiple computation.

detect_peaks_kwargs: None or dict

If peaks is None, these are the kwargs for the detect_peaks function.

mode: str ‘heatmap’ or ‘scatter’

plot mode

probe_axis: 0 or 1

Axis of the probe 0=x 1=y

weight_with_amplitudes: bool (default False)

Peaks are weighted by amplitude

bin_duration_s: float (default 60.)

Bin duration in seconds

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: ProbeMapWidget

The output widget

spikeinterface.widgets.plot_peak_activity_map(*args, **kwargs)

Plots spike rate (estimated with detect_peaks()) as a 2D activity map.

Can be static (bin_duration_s=None) or animated (bin_duration_s=60.)

Parameters
recording: RecordingExtractor

The recording extractor object

peaks: None or numpy array

Optionally can give already detected peaks to avoid multiple computation.

detect_peaks_kwargs: None or dict

If peaks is None, these are the kwargs for the detect_peaks function.

weight_with_amplitudes: bool (default False)

Peaks are weighted by amplitude

bin_duration_s: None or float

If None, a static image is plotted. If not None, an animation is shown with one frame per time bin.

with_contact_color: bool (default True)

Plot rates with contact colors

with_interpolated_map: bool (default True)

Plot rates with interpolated map

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: ProbeMapWidget

The output widget

spikeinterface.widgets.plot_unit_waveforms(*args, **kwargs)

Plots unit waveforms.

Parameters
waveform_extractor: WaveformExtractor
channel_ids: list

The channel ids to display

unit_ids: list

List of unit ids.

plot_templates: bool

If True, templates are plotted over the waveforms

radius_um: None or float

If not None, all channels within a circle around the peak waveform will be displayed. Incompatible with max_channels

max_channels: None or int

If not None, only max_channels are displayed per unit. Incompatible with radius_um

set_title: bool

Create a plot title with the unit number if True.

plot_channels: bool

Plot channel locations below traces.

axis_equal: bool

Equal aspect ratio for x and y axes, to visualise the array geometry to scale

lw: float

Line width for the traces.

unit_colors: None or dict

A dict key is unit_id and value is any color format handled by matplotlib. If None, then the get_unit_colors() is internally used.

show_all_channels: bool

Show the whole probe if True, or only selected channels if False

ax: matplotlib axis

The axis to be used. If not given an axis is created

axes: list of matplotlib axes

The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored

spikeinterface.widgets.plot_unit_templates(*args, **kwargs)

Plots unit templates.

Parameters
waveform_extractor: WaveformExtractor
channel_ids: list

The channel ids to display

unit_ids: list

List of unit ids.

plot_templates: bool

If True, templates are plotted over the waveforms

radius_um: None or float

If not None, all channels within a circle around the peak waveform will be displayed. Incompatible with max_channels

max_channels: None or int

If not None, only max_channels are displayed per unit. Incompatible with radius_um

set_title: bool

Create a plot title with the unit number if True.

plot_channels: bool

Plot channel locations below traces.

axis_equal: bool

Equal aspect ratio for x and y axes, to visualise the array geometry to scale

lw: float

Line width for the traces.

unit_colors: None or dict

A dict key is unit_id and value is any color format handled by matplotlib. If None, then the get_unit_colors() is internally used.

show_all_channels: bool

Show the whole probe if True, or only selected channels if False

ax: matplotlib axis

The axis to be used. If not given an axis is created

axes: list of matplotlib axes

The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored

spikeinterface.widgets.plot_unit_waveform_density_map(*args, **kwargs)

Plots unit waveforms using heat map density.

Parameters
waveform_extractor: WaveformExtractor
channel_ids: list

The channel ids to display

unit_ids: list

List of unit ids.

plot_templates: bool

If True, templates are plotted over the waveforms

max_channels: None or int

If not None, only max_channels are displayed per unit. Incompatible with radius_um

radius_um: None or float

If not None, all channels within a circle around the peak waveform will be displayed. Incompatible with max_channels

unit_colors: None or dict

A dict key is unit_id and value is any color format handled by matplotlib. If None, then the get_unit_colors() is internally used.

same_axis: bool

If True, all densities are plotted on the same axis, and the displayed channels are the union of the channels of all units.

set_title: bool

Create a plot title with the unit number if True.

plot_channels: bool

Plot channel locations below traces, only used if channel_locs is True

spikeinterface.widgets.plot_amplitudes_timeseries(*args, **kwargs)

Plots waveform amplitudes over time.

Parameters
waveform_extractor: WaveformExtractor
amplitudes: None or pre computed amplitudes

If None, amplitudes are recomputed

peak_sign: ‘neg’, ‘pos’, ‘both’

In case of recomputing amplitudes.

Returns
W: AmplitudeDistributionWidget

The output widget

spikeinterface.widgets.plot_amplitudes_distribution(*args, **kwargs)

Plots waveform amplitudes distribution.

Parameters
waveform_extractor: WaveformExtractor
amplitudes: None or pre computed amplitudes

If None, amplitudes are recomputed

peak_sign: ‘neg’, ‘pos’, ‘both’

In case of recomputing amplitudes.

Returns
W: AmplitudeDistributionWidget

The output widget

spikeinterface.widgets.plot_principal_component(*args, **kwargs)

Plots principal component.

Parameters
waveform_extractor: WaveformExtractor
pc: None or WaveformPrincipalComponent

If None, the principal components are recomputed

spikeinterface.widgets.plot_unit_localization(*args, **kwargs)

Plot unit localisation on probe.

Parameters
waveform_extractor: WaveformExtractor

The WaveformExtractor object

peaks: None or numpy array

Optionally can give already detected peaks to avoid multiple computation.

unit_localisation: None or 2d array

If None then it is computed with ‘method’ option

method: str default ‘center_of_mass’

Method used to estimate unit localisation if ‘unit_localisation’ is None

method_kwargs: dict

Options for the method

unit_colors: None or dict

A dict key is unit_id and value is any color format handled by matplotlib. If None, then the get_unit_colors() is internally used.

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: ProbeMapWidget

The output widget

spikeinterface.widgets.plot_unit_probe_map(*args, **kwargs)

Plots unit map. Amplitude is color coded on probe contact.

Can be static (animated=False) or animated (animated=True)

Parameters
waveform_extractor: WaveformExtractor
unit_ids: list

List of unit ids.

channel_ids: list

The channel ids to display

animated: True/False

If True, an animation of amplitude over time is shown

spikeinterface.widgets.plot_units_depth_vs_amplitude(*args, **kwargs)
spikeinterface.widgets.plot_confusion_matrix(*args, **kwargs)

Plots sorting comparison confusion matrix.

Parameters
gt_comparison: GroundTruthComparison

The ground truth sorting comparison object

count_text: bool

If True counts are displayed as text

unit_ticks: bool

If True unit tick labels are displayed

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: ConfusionMatrixWidget

The output widget

spikeinterface.widgets.plot_agreement_matrix(*args, **kwargs)

Plots sorting comparison agreement matrix.

Parameters
sorting_comparison: GroundTruthComparison or SymmetricSortingComparison

The sorting comparison object. Symmetric or not.

ordered: bool

Order units by best agreement scores. This makes the agreement visible along the diagonal.

count_text: bool

If True counts are displayed as text

unit_ticks: bool

If True unit tick labels are displayed

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

spikeinterface.widgets.plot_multicomp_graph(*args, **kwargs)

Plots multi sorting comparison graph.

Parameters
multi_sorting_comparison: MultiSortingComparison

The multi sorting comparison object

draw_labels: bool

If True unit labels are shown

node_cmap: matplotlib colormap

The colormap to be used for the nodes (default ‘viridis’)

edge_cmap: matplotlib colormap

The colormap to be used for the edges (default ‘hot’)

alpha_edges: float

Alpha value for edges

colorbar: bool

If True a colorbar for the edges is plotted

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: MultiCompGraphWidget

The output widget

spikeinterface.widgets.plot_multicomp_agreement(*args, **kwargs)

Plots multi sorting comparison agreement as pie or bar plot.

Parameters
multi_sorting_comparison: MultiSortingComparison

The multi sorting comparison object

plot_type: str

‘pie’ or ‘bar’

cmap: matplotlib colormap

The colormap to be used for the nodes (default ‘Reds’)

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: MultiCompGraphWidget

The output widget

spikeinterface.widgets.plot_multicomp_agreement_by_sorter(*args, **kwargs)

Plots multi sorting comparison agreement as pie or bar plot.

Parameters
multi_sorting_comparison: MultiSortingComparison

The multi sorting comparison object

plot_type: str

‘pie’ or ‘bar’

cmap: matplotlib colormap

The colormap to be used for the nodes (default ‘Reds’)

axes: list of matplotlib axes

The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored.

show_legend: bool

Show the legend in the last axes (default True).

Returns
W: MultiCompGraphWidget

The output widget

spikeinterface.widgets.plot_comparison_collision_pair_by_pair(*args, **kwargs)

Plots CollisionGTComparison pair by pair.

Parameters
comp: CollisionGTComparison

The collision ground truth comparison object

unit_ids: list

List of considered units

nbins: int

Number of bins

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: MultiCompGraphWidget

The output widget

spikeinterface.widgets.plot_comparison_collision_by_similarity(*args, **kwargs)

Plots CollisionGTComparison pair by pair, ordered by cosine similarity

Parameters
comp: CollisionGTComparison

The collision ground truth comparison object

templates: array

template of units

metric: ‘cosine_similarity’

Metric used for ordering

unit_ids: list

List of considered units

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

spikeinterface.widgets.plot_sorting_performance(*args, **kwargs)

Plots sorting performance for each ground-truth unit.

Parameters
gt_sorting_comparison: GroundTruthComparison

The ground truth sorting comparison object

property_name: str

The property of the sorting extractor to use as x-axis (e.g. snr). If None, no property is used.

metric: str

The performance metric. ‘accuracy’ (default), ‘precision’, ‘recall’, ‘miss rate’, etc.

markersize: int

The size of the marker

marker: str

The matplotlib marker to use (default ‘.’)

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: SortingPerformanceWidget

The output widget

spikeinterface.widgets.plot_unit_summary(*args, **kwargs)

Plot a unit summary.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor object

unit_id: int or str

The unit id to plot the summary of

amplitudes: dict or None

Amplitudes ‘by_unit’ as returned by the st.postprocessing.get_spike_amplitudes(…, outputs=”by_unit”) function

unit_colors: list or None

Optional matplotlib color for the unit

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: UnitSummaryWidget

The output widget

Module spikeinterface.exporters

spikeinterface.exporters.export_to_phy(waveform_extractor, output_folder, compute_pc_features=True, compute_amplitudes=True, sparsity_dict=None, copy_binary=True, max_channels_per_template=16, remove_if_exists=False, peak_sign='neg', template_mode='median', dtype=None, verbose=True, **job_kwargs)

Exports a waveform extractor to the phy template-gui format.

Parameters
waveform_extractor: a WaveformExtractor or None

If a WaveformExtractor is provided, the computation is faster

output_folder: str

The output folder where the phy template-gui files are saved

compute_pc_features: bool

If True (default), pc features are computed

compute_amplitudes: bool

If True (default), waveforms amplitudes are computed

sparsity_dict: dict or None

If given, the dictionary should contain a sparsity method (e.g. “best_channels”) and optionally arguments associated with the method (e.g. “num_channels” for “best_channels” method). Other examples are:

  • by radius: sparsity_dict=dict(method=”radius”, radius_um=100)

  • by SNR threshold: sparsity_dict=dict(method=”threshold”, threshold=2)

  • by property: sparsity_dict=dict(method=”by_property”, by_property=”group”)

Default is sparsity_dict=dict(method=”best_channels”, num_channels=16) For more info, see the toolkit.get_template_channel_sparsity() function.

max_channels_per_template: int or None

Maximum channels per unit to return. If None, all channels are returned

copy_binary: bool

If True, the recording is copied and saved in the phy ‘output_folder’

remove_if_exists: bool

If True and ‘output_folder’ exists, it is removed and overwritten

peak_sign: ‘neg’, ‘pos’, ‘both’

Used by get_spike_amplitudes

template_mode: str

Parameter ‘mode’ to be given to WaveformExtractor.get_template()

dtype: dtype or None

Dtype to save binary data

verbose: bool

If True, output is verbose

**job_kwargs: keyword arguments for parallel processing:
  • chunk_size or chunk_memory, or total_memory
    • chunk_size: int

      number of samples per chunk

    • chunk_memory: str

Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed
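
Example (a minimal sketch): we is an existing WaveformExtractor; the output folder, sparsity and chunking values are illustrative.

    import spikeinterface.exporters as sexp

    sexp.export_to_phy(
        we, output_folder='phy_output',
        sparsity_dict=dict(method='radius', radius_um=75),
        compute_pc_features=True, compute_amplitudes=True,
        n_jobs=4, chunk_size=30000, progress_bar=True)

    # the result can then be opened with: phy template-gui phy_output/params.py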

spikeinterface.exporters.export_report(waveform_extractor, output_folder, remove_if_exists=False, format='png', metrics=None, amplitudes=None, **job_wargs)

Exports a SI spike sorting report. The report includes summary figures of the spike sorting output (e.g. amplitude distributions, unit localization and depth VS amplitude) as well as unit-specific reports, that include waveforms, templates, template maps, ISI distributions, and more.

Parameters
waveform_extractor: a WaveformExtractor or None

If a WaveformExtractor is provided, the computation is faster

output_folder: str

The output folder where the report files are saved

remove_if_exists: bool

If True and the output folder exists, it is removed

format: str

‘png’ (default) or ‘pdf’ or any format handled by matplotlib

metrics: pandas.DataFrame or None

Quality metrics to export to csv. If None, quality metrics are computed.

amplitudes: dict or None

Amplitudes ‘by_unit’ as returned by the st.postprocessing.get_spike_amplitudes(…, outputs=”by_unit”) function. If None, amplitudes are computed.

**job_kwargs: keyword arguments for parallel processing:
  • chunk_size or chunk_memory, or total_memory
    • chunk_size: int

      number of samples per chunk

    • chunk_memory: str

Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

Module spikeinterface.sortingcomponents

spikeinterface.sortingcomponents.detect_peaks(recording, method='by_channel', peak_sign='neg', detect_threshold=5, n_shifts=2, local_radius_um=100, noise_levels=None, random_chunk_kwargs={}, outputs='numpy_compact', **job_kwargs)

Peak detection ported from tridesclous into spikeinterface.

Peak detection based on threshold crossing in terms of k x MAD.

If the MAD is not provided, it is estimated from random snippets of the data.

Several methods:

  • ‘by_channel’ : peaks are detected on each channel independently

  • ‘locally_exclusive’ : within a given radius, only the best peak is kept, excluding peaks on neighboring channels

Parameters
recording: RecordingExtractor

The recording extractor object

method: str

‘by_channel’ or ‘locally_exclusive’

peak_sign: ‘neg’, ‘pos’, or ‘both’

Sign of the peak.

detect_threshold: float

Threshold in median absolute deviations (MAD) to detect peaks

n_shifts: int

Number of shifts to find peak. E.g. if n_shift is 2, a peak is detected (if detect_sign is ‘negative’) if a sample is below the threshold, the two samples before are higher than the sample, and the two samples after the sample are higher than the sample.

noise_levels: np.array

noise_levels can be provided externally if already computed.

random_chunk_kwargs: dict

A dict containing options to randomize chunks for get_noise_levels(). Only used if noise_levels is None

outputs: str ‘numpy_compact’ / ‘numpy_split’ / ‘sorting’

The type of the output. By default, ‘numpy_compact’ gives a single vector with a compound dtype.

job_kwargs: dict

Parameters for ChunkRecordingExecutor
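
Example (a minimal sketch): recording is a hypothetical recording object and the chunking values are illustrative.

    from spikeinterface.sortingcomponents import detect_peaks

    peaks = detect_peaks(recording, method='locally_exclusive',
                         peak_sign='neg', detect_threshold=5,
                         n_shifts=2, local_radius_um=100,
                         n_jobs=4, chunk_size=30000, progress_bar=True)

    # with the default 'numpy_compact' output, 'peaks' holds one entry per
    # detected peak (sample index, channel, amplitude, segment)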

spikeinterface.sortingcomponents.localize_peaks(recording, peaks, method='center_of_mass', local_radius_um=150, ms_before=0.3, ms_after=0.6, **job_kwargs)

Localize peaks (spikes) in 2D or 3D, depending on probe.ndim of the recording.

Parameters
recording: RecordingExtractor

The recording extractor object

peaks: numpy

Peak vector given by detect_peaks() in the ‘numpy_compact’ format.

method: str

Method to be used (‘center_of_mass’)

local_radius_um: float

Radius in micrometers defining the neighborhood of channels around the peak

ms_before: float

The left window before a peak, in milliseconds

ms_after: float

The right window after a peak, in milliseconds

**job_kwargs: keyword arguments for parallel processing:
  • chunk_size or chunk_memory, or total_memory
    • chunk_size: int

      number of samples per chunk

    • chunk_memory: str

Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

Returns
peak_locations: np.array

Array with estimated x-y location for each spike
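
Example (a minimal sketch): recording is a hypothetical recording object and peaks is the output of detect_peaks() shown above.

    from spikeinterface.sortingcomponents import detect_peaks, localize_peaks

    peaks = detect_peaks(recording, method='by_channel', detect_threshold=5)
    peak_locations = localize_peaks(recording, peaks,
                                    method='center_of_mass',
                                    local_radius_um=150,
                                    ms_before=0.3, ms_after=0.6,
                                    n_jobs=4, chunk_size=30000)

    # each entry of peak_locations holds the estimated x-y position of one peak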