API

spikeinterface.core

Contains the core classes:
  • Recording

  • Sorting

It also contains the “core extractors” used for caching:
  • BinaryRecordingExtractor

  • NpzSortingExtractor

spikeinterface.core.load_extractor(file_or_folder_or_dict, base_folder=None)
Instantiate extractor from:
  • a dict

  • a json file

  • a pickle file

  • folder (after save)

Parameters
file_or_folder_or_dict: dictionary or folder or file (json, pickle)
Returns
extractor: Recording or Sorting

The loaded extractor object
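
Example (a minimal sketch, assuming “my_folder” is the output of a previous recording.save(folder="my_folder") call):

>>> import spikeinterface.core as si
>>> # reload a recording previously saved to "my_folder"
>>> recording = si.load_extractor("my_folder")
>>> # or re-instantiate from an in-memory description
>>> same_recording = si.load_extractor(recording.to_dict())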

class spikeinterface.core.BaseRecording(sampling_frequency: float, channel_ids: List, dtype)

Abstract class representing a multichannel timeseries (or block of raw ephys traces). Internally handles a list of RecordingSegment objects.

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is a single probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

binary_compatible_with(dtype=None, time_axis=None, file_paths_lenght=None, file_offset=None, file_suffix=None)
Check if the recording is binary-compatible with some constraints on:
  • dtype

  • time_axis

  • len(file_paths)

  • file_offset

  • file_suffix

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_times(segment_index=None)

Get time vector for a recording segment.

If the segment has a time_vector, then it is returned. Otherwise a time_vector is constructed on the fly with sampling frequency. If t_start is defined and the time vector is constructed on the fly, the first time will be t_start. Otherwise it will start from 0.

get_traces(segment_index: Optional[int] = None, start_frame: Optional[int] = None, end_frame: Optional[int] = None, channel_ids: Optional[Iterable] = None, order: Optional[str] = None, return_scaled=False, cast_unsigned=False)

Returns traces from recording.

Parameters
segment_index: Union[int, None], optional

The segment index to get traces from. If recording is multi-segment, it is required, by default None

start_frame: Union[int, None], optional

The start frame. If None, 0 is used, by default None

end_frame: Union[int, None], optional

The end frame. If None, the number of samples in the segment is used, by default None

channel_ids: Union[Iterable, None], optional

The channel ids. If None, all channels are used, by default None

order: Union[str, None], optional

The order of the traces (“C” | “F”). If None, traces are returned as they are, by default None

return_scaled: bool, optional

If True and the recording has scaling (gain_to_uV and offset_to_uV properties), traces are scaled to uV, by default False

cast_unsigned: bool, optional

If True and the traces are unsigned, they are cast to integer and centered (an offset of (2**nbits) is subtracted), by default False

Returns
np.array

The traces (num_samples, num_channels)

Raises
ValueError

If return_scaled is True, but recording does not have scaled traces
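
Example (a sketch, assuming recording is a mono-segment recording with at least two channels):

>>> fs = recording.get_sampling_frequency()
>>> # first second of traces for the first two channels
>>> traces = recording.get_traces(start_frame=0, end_frame=int(fs),
...                               channel_ids=recording.channel_ids[:2])
>>> traces.shape  # (num_samples, num_channels)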

has_time_vector(segment_index=None)

Check if the segment of the recording has a time vector.

is_binary_compatible()

Report whether this recording is “binary” compatible. To be used before calling rec.get_binary_description().

Returns
is_binary_compatible: bool

set_times(times, segment_index=None, with_warning=True)

Set times for a recording segment.

class spikeinterface.core.BaseSorting(sampling_frequency: float, unit_ids: List)

Abstract class representing several segments, each containing several units and their spike trains.

Attributes
unit_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump extractor to json file.

dump_to_pickle([file_path, ...])

Dump extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_all_spike_trains([outputs])

Return all spike trains concatenated

get_annotation(key[, copy])

Get an annotation.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a registered recording segment.

get_total_num_spikes()

Get total number of spikes for each unit across segments.

has_time_vector([segment_index])

Check if the segment of the registered recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

remove_empty_units()

Removes units with empty spike trains

remove_units(remove_unit_ids)

Removes a subset of units

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_units(unit_ids[, renamed_unit_ids])

Selects a subset of units

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

to_spike_vector([extremum_channel_inds])

Construct a unique structured numpy vector concatenating all spikes with several fields: sample_ind, unit_index, segment_index.

add_sorting_segment

annotate

check_if_dumpable

delete_property

frame_slice

get_annotation_keys

get_num_segments

get_num_units

get_property

get_property_keys

get_sampling_frequency

get_unit_ids

get_unit_property

get_unit_spike_train

has_recording

id_to_index

load_from_folder

load_metadata_from_folder

register_recording

save_metadata_to_folder

save_to_memory

get_all_spike_trains(outputs='unit_id')

Return all spike trains concatenated

get_times(segment_index=None)

Get time vector for a registered recording segment.

If a recording is registered:
  • if the segment has a time_vector, then it is returned

  • if not, a time_vector is constructed on the fly with sampling frequency

If there is no registered recording it returns None

get_total_num_spikes()

Get total number of spikes for each unit across segments.

Returns
dict

Dictionary with unit_ids as keys and number of spikes as values

has_time_vector(segment_index=None)

Check if the segment of the registered recording has a time vector.

remove_empty_units()

Removes units with empty spike trains

Returns
BaseSorting

Sorting object with non-empty units

remove_units(remove_unit_ids)

Removes a subset of units

Parameters
remove_unit_ids: numpy.array or list

List of unit ids to remove

Returns
BaseSorting

Sorting object without removed units

select_units(unit_ids, renamed_unit_ids=None)

Selects a subset of units

Parameters
unit_ids: numpy.array or list

List of unit ids to keep

renamed_unit_ids: numpy.array or list, optional

If given, the kept unit ids are renamed, by default None

Returns
BaseSorting

Sorting object with selected units
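
Example (assuming sorting has at least three units):

>>> keep = sorting.unit_ids[:3]
>>> sub_sorting = sorting.select_units(keep)
>>> # optionally rename the kept units
>>> sub_sorting = sorting.select_units(keep, renamed_unit_ids=["a", "b", "c"])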

to_spike_vector(extremum_channel_inds=None)

Construct a unique structured numpy vector concatenating all spikes with several fields: sample_ind, unit_index, segment_index.

See also get_all_spike_trains()

Parameters
extremum_channel_inds: None or dict

If a dictionary mapping unit_id to channel_ind is given, an extra field ‘channel_ind’ is added. This can be convenient for computing spike positions after sorting.

This dict can be computed with get_template_extremum_channel(we, outputs=”index”)

Returns
spikes: np.array

Structured numpy array with fields (‘sample_ind’, ‘unit_index’, ‘segment_index’) for all spikes, or (‘sample_ind’, ‘unit_index’, ‘segment_index’, ‘channel_ind’) if extremum_channel_inds is given
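
Usage sketch:

>>> spikes = sorting.to_spike_vector()
>>> spikes["sample_ind"][:5]    # spike times in samples, across all segments
>>> spikes["unit_index"][:5]    # positions of each spike's unit in sorting.unit_ids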

class spikeinterface.core.BaseSortingSegment(t_start=None)

Abstract class representing several units and their spike trains within a segment.

Attributes
parent_extractor

Methods

get_unit_spike_train(unit_id[, start_frame, ...])

Get the spike train for a unit.

set_parent_extractor

get_unit_spike_train(unit_id, start_frame: Optional[int] = None, end_frame: Optional[int] = None) → ndarray

Get the spike train for a unit.

Parameters
unit_id
start_frame: int, optional
end_frame: int, optional
Returns
np.ndarray

class spikeinterface.core.BaseEvent(channel_ids, structured_dtype)

Abstract class representing events.

Parameters
channel_ids: list or np.array

The channel ids

structured_dtype: dtype or dict

The dtype of the events. If dict, each key is the channel_id and values must be the dtype of the channel (also structured). If dtype, each channel is assigned the same dtype. In case of structured dtypes, the “time” or “timestamp” field name must be present.

Attributes
channel_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump extractor to json file.

dump_to_pickle([file_path, ...])

Dump extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_event_times([channel_id, segment_index, ...])

Return events timestamps of a channel in seconds.

get_events([channel_id, segment_index, ...])

Return events of a channel in its native structured type.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_event_segment

annotate

check_if_dumpable

delete_property

get_annotation_keys

get_dtype

get_num_channels

get_num_segments

get_property

get_property_keys

id_to_index

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

class spikeinterface.core.BinaryRecordingExtractor(file_paths, sampling_frequency, num_chan, dtype, t_starts=None, channel_ids=None, time_axis=0, file_offset=0, gain_to_uV=None, offset_to_uV=None, is_filtered=None)

RecordingExtractor for a binary format

Parameters
file_paths: str or Path or list

Path to the binary file

sampling_frequency: float

The sampling frequency

num_chan: int

Number of channels

dtype: str or dtype

The dtype of the binary file

time_axis: int

The axis of the time dimension (default 0: F order)

t_starts: None or list of float

Times in seconds of the first sample for each segment

channel_ids: list (optional)

A list of channel ids

file_offset: int (optional)

Number of bytes in the file to offset by during memmap instantiation.

gain_to_uV: float or array-like (optional)

The gain to apply to the traces

offset_to_uV: float or array-like

The offset to apply to the traces

is_filtered: bool or None

If True, the recording is assumed to be filtered. If None, is_filtered is not set.

Returns
recording: BinaryRecordingExtractor

The recording Extractor
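
Example (a sketch, assuming “traces.raw” holds int16 samples for 32 channels in C order, i.e. time_axis=0):

>>> from spikeinterface.core import BinaryRecordingExtractor
>>> rec = BinaryRecordingExtractor("traces.raw", sampling_frequency=30000.,
...                                num_chan=32, dtype="int16")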

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is a single probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

write_recording(recording, file_paths[, dtype])

Save the traces of a recording extractor in binary .dat format.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

spikeinterface.core.read_binary(file_paths, sampling_frequency, num_chan, dtype, t_starts=None, channel_ids=None, time_axis=0, file_offset=0, gain_to_uV=None, offset_to_uV=None, is_filtered=None)

RecordingExtractor for a binary format

Parameters
file_paths: str or Path or list

Path to the binary file

sampling_frequency: float

The sampling frequency

num_chan: int

Number of channels

dtype: str or dtype

The dtype of the binary file

time_axis: int

The axis of the time dimension (default 0: F order)

t_starts: None or list of float

Times in seconds of the first sample for each segment

channel_ids: list (optional)

A list of channel ids

file_offset: int (optional)

Number of bytes in the file to offset by during memmap instantiation.

gain_to_uV: float or array-like (optional)

The gain to apply to the traces

offset_to_uV: float or array-like

The offset to apply to the traces

is_filtered: bool or None

If True, the recording is assumed to be filtered. If None, is_filtered is not set.

Returns
recording: BinaryRecordingExtractor

The recording Extractor

class spikeinterface.core.NpzSortingExtractor(file_path)

Dead simple and super light format based on the NPZ numpy format. https://docs.scipy.org/doc/numpy/reference/generated/numpy.savez.html#numpy.savez

It is in fact an archive of several .npy files. All spikes are stored in a two-column manner: index + labels.

Attributes
unit_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump extractor to json file.

dump_to_pickle([file_path, ...])

Dump extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_all_spike_trains([outputs])

Return all spike trains concatenated

get_annotation(key[, copy])

Get an annotation.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a registered recording segment.

get_total_num_spikes()

Get total number of spikes for each unit across segments.

has_time_vector([segment_index])

Check if the segment of the registered recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

remove_empty_units()

Removes units with empty spike trains

remove_units(remove_unit_ids)

Removes a subset of units

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_units(unit_ids[, renamed_unit_ids])

Selects a subset of units

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

to_spike_vector([extremum_channel_inds])

Construct a unique structured numpy vector concatenating all spikes with several fields: sample_ind, unit_index, segment_index.

add_sorting_segment

annotate

check_if_dumpable

delete_property

frame_slice

get_annotation_keys

get_num_segments

get_num_units

get_property

get_property_keys

get_sampling_frequency

get_unit_ids

get_unit_property

get_unit_spike_train

has_recording

id_to_index

load_from_folder

load_metadata_from_folder

register_recording

save_metadata_to_folder

save_to_memory

write_sorting

class spikeinterface.core.NumpyRecording(traces_list, sampling_frequency, t_starts=None, channel_ids=None)

In-memory recording. Contrary to the previous version, this class does not handle npy files.

Parameters
traces_list: list of array or array (if mono segment)

The traces to instantiate a mono or multisegment Recording

sampling_frequency: float

The sampling frequency in Hz

t_starts: None or list of float

Times in seconds of the first sample for each segment

channel_ids: list

An optional list of channel_ids. If None, linear channels are assumed
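
Example, wrapping a (num_samples, num_channels) array:

>>> import numpy as np
>>> from spikeinterface.core import NumpyRecording
>>> traces = np.random.randn(10000, 4).astype("float32")
>>> rec = NumpyRecording(traces, sampling_frequency=30000.)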

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is a single probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.core.NumpySorting(sampling_frequency, unit_ids=[])
Attributes
unit_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump extractor to json file.

dump_to_pickle([file_path, ...])

Dump extractor to a pickle file.

from_dict(units_dict_list, sampling_frequency)

Construct sorting extractor from a list of dict.

from_extractor(source_sorting)

Create a numpy sorting from another extractor

from_neo_spiketrain_list(neo_spiketrains, ...)

Construct a sorting with a neo spiketrain list.

from_peaks(peaks, sampling_frequency)

Construct a sorting from peaks returned by 'detect_peaks()' function.

from_times_labels(times_list, labels_list, ...)

Construct a sorting extractor from arrays of spike times (in frames) and spike labels (see the sketch after this list).

get_all_spike_trains([outputs])

Return all spike trains concatenated

get_annotation(key[, copy])

Get an annotation.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a registered recording segment.

get_total_num_spikes()

Get total number of spikes for each unit across segments.

has_time_vector([segment_index])

Check if the segment of the registered recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

remove_empty_units()

Removes units with empty spike trains

remove_units(remove_unit_ids)

Removes a subset of units

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_units(unit_ids[, renamed_unit_ids])

Selects a subset of units

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

to_spike_vector([extremum_channel_inds])

Construct a unique structured numpy vector concatenating all spikes with several fields: sample_ind, unit_index, segment_index.

add_sorting_segment

annotate

check_if_dumpable

delete_property

frame_slice

get_annotation_keys

get_num_segments

get_num_units

get_property

get_property_keys

get_sampling_frequency

get_unit_ids

get_unit_property

get_unit_spike_train

has_recording

id_to_index

load_from_folder

load_metadata_from_folder

register_recording

save_metadata_to_folder

save_to_memory
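
A sketch of building a NumpySorting with from_times_labels() (spike times are in frames; labels identify the unit of each spike):

>>> import numpy as np
>>> from spikeinterface.core import NumpySorting
>>> times = np.array([100, 250, 300, 420])   # spike frames
>>> labels = np.array([0, 1, 0, 1])          # unit id for each spike
>>> sorting = NumpySorting.from_times_labels(times, labels, sampling_frequency=30000.)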

class spikeinterface.core.ChannelSliceRecording(parent_recording, channel_ids=None, renamed_channel_ids=None)

Class to slice a Recording object based on channel_ids.

Do not use this class directly but use recording.channel_slice(…)
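
Typical usage through the parent recording (a sketch, assuming at least two channels):

>>> sub_rec = recording.channel_slice(channel_ids=recording.channel_ids[:2])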

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is a single probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.core.UnitsSelectionSorting(parent_sorting, unit_ids=None, renamed_unit_ids=None)

Class that handles slicing of a Sorting object based on a list of unit_ids.

Do not use this class directly but use sorting.select_units(…)

Attributes
unit_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump extractor to json file.

dump_to_pickle([file_path, ...])

Dump extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_all_spike_trains([outputs])

Return all spike trains concatenated

get_annotation(key[, copy])

Get an annotation.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a registered recording segment.

get_total_num_spikes()

Get total number of spikes for each unit across segments.

has_time_vector([segment_index])

Check if the segment of the registered recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

remove_empty_units()

Removes units with empty spike trains

remove_units(remove_unit_ids)

Removes a subset of units

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_units(unit_ids[, renamed_unit_ids])

Selects a subset of units

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

to_spike_vector([extremum_channel_inds])

Construct a unique structured numpy vector concatenating all spikes with several fields: sample_ind, unit_index, segment_index.

add_sorting_segment

annotate

check_if_dumpable

delete_property

frame_slice

get_annotation_keys

get_num_segments

get_num_units

get_property

get_property_keys

get_sampling_frequency

get_unit_ids

get_unit_property

get_unit_spike_train

has_recording

id_to_index

load_from_folder

load_metadata_from_folder

register_recording

save_metadata_to_folder

save_to_memory

class spikeinterface.core.FrameSliceRecording(parent_recording, start_frame=None, end_frame=None)

Class to get a lazy frame slice. Works only with mono-segment recordings.

Do not use this class directly but use recording.frame_slice(…)

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is a single probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

spikeinterface.core.append_recordings(recording_list, sampling_frequency_max_diff=0)

Takes as input a list of parent recordings each with multiple segments and returns a single multi-segment recording that “appends” all segments from all parent recordings.

For instance, given one recording with 2 segments and one recording with 3 segments, this function returns one recording with 5 segments.

Parameters
recording_list: list of BaseRecording

A list of recordings

sampling_frequency_max_diff: float

Maximum allowed difference of sampling frequencies across recordings (default 0)
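
Example (assuming rec_a and rec_b have the same channels and sampling frequency):

>>> from spikeinterface.core import append_recordings
>>> multi_rec = append_recordings([rec_a, rec_b])
>>> multi_rec.get_num_segments()  # sum of the parents' segment counts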

spikeinterface.core.concatenate_recordings(recording_list, ignore_times=True, sampling_frequency_max_diff=0)

Return a recording that “concatenates” all segments from all parent recordings into one recording with a single segment. The operation is lazy.

For instance, given one recording with 2 segments and one recording with 3 segments, this function returns one recording with one large segment made by concatenating the 5 segments.

Time information is lost upon concatenation. By default ignore_times is True. If it is False, you get an error unless:

  • all segments DO NOT have times, AND

  • all segments have t_start=None

Parameters
recording_list: list of BaseRecording

A list of recordings

ignore_times: bool

If True (default), time information (t_start, time_vector) is ignored when concatenating recordings.

sampling_frequency_max_diff: float

Maximum allowed difference of sampling frequencies across recordings (default 0)
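
Example (assuming rec_a and rec_b have the same channels and sampling frequency):

>>> from spikeinterface.core import concatenate_recordings
>>> one_seg_rec = concatenate_recordings([rec_a, rec_b])
>>> one_seg_rec.get_num_segments()  # always 1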

spikeinterface.core.append_sortings(sorting_list, sampling_frequency_max_diff=0)

Return a sorting that “appends” all segments from all sortings into one multi-segment sorting.

Parameters
sorting_list: list of BaseSorting

A list of sortings

sampling_frequency_max_diff: float

Maximum allowed difference of sampling frequencies across sortings (default 0)

spikeinterface.core.extract_waveforms(recording, sorting, folder=None, mode='folder', load_if_exists=False, precompute_template=('average',), ms_before=3.0, ms_after=4.0, max_spikes_per_unit=500, overwrite=False, return_scaled=True, dtype=None, use_relative_path=False, seed=None, **job_kwargs)

Extracts waveforms from paired Recording-Sorting objects. Waveforms are persistent on disk and cached in memory.

Parameters
recording: Recording

The recording object

sorting: Sorting

The sorting object

folder: str or Path or None

The folder where waveforms are cached

mode: str

“folder” (default) or “memory”. The “folder” argument must be specified in case of mode “folder”. If “memory” is used, the waveforms are stored in RAM. Use this option carefully!

load_if_exists: bool

If True and waveforms have already been extracted in the specified folder, they are loaded and not recomputed.

precompute_template: None or list

Precompute average/std/median templates. If None, nothing is precomputed.

ms_before: float

Time in ms to cut before spike peak

ms_after: float

Time in ms to cut after spike peak

max_spikes_per_unit: int or None

Number of spikes per unit to extract waveforms from (default 500). Use None to extract waveforms for all spikes

overwrite: bool

If True and ‘folder’ exists, the folder is removed and waveforms are recomputed. Otherwise an error is raised.

return_scaled: bool

If True and recording has gain_to_uV/offset_to_uV properties, waveforms are converted to uV.

dtype: dtype or None

Dtype of the output waveforms. If None, the recording dtype is maintained.

use_relative_path: bool

If True, the recording and sorting paths are relative to the waveforms folder. This allows portability of the waveform folder provided that the relative paths are the same, but forces all the data files to be in the same drive. Default is False.

seed: int or None

Random seed for spike selection

**job_kwargs: keyword arguments for parallel processing:
  • chunk_duration or chunk_size or chunk_memory or total_memory
    • chunk_size: int

      Number of samples per chunk

    • chunk_memory: str

      Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

    • chunk_duration: str or float or None

      Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

  • mp_context: str or None

    Context for multiprocessing. It can be None (default), “fork” or “spawn”. Note that “fork” is only available on UNIX systems

Returns
we: WaveformExtractor

The WaveformExtractor object
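
Typical usage (a sketch; recording and sorting are assumed to be a matching pair):

>>> import spikeinterface.core as si
>>> we = si.extract_waveforms(recording, sorting, folder="waveforms",
...                           ms_before=1.5, ms_after=2.5,
...                           n_jobs=4, chunk_duration="1s", progress_bar=True)
>>> template = we.get_template(sorting.unit_ids[0])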

class spikeinterface.core.WaveformExtractor(recording, sorting, folder=None, rec_attributes=None)

Class to extract waveforms from paired Recording-Sorting objects. Waveforms are persistent on disk and cached in memory.

Parameters
recording: Recording

The recording object

sorting: Sorting

The sorting object

folder: Path

The folder where waveforms are cached

rec_attributes: None or dict

When recording is None then a minimal dict with some attributes is needed.

Returns
we: WaveformExtractor

The WaveformExtractor object

Examples

>>> # Instantiate
>>> we = WaveformExtractor.create(recording, sorting, folder)
>>> # Compute
>>> we = we.set_params(...)
>>> we = we.run_extract_waveforms(...)
>>> # Retrieve
>>> waveforms = we.get_waveforms(unit_id)
>>> template = we.get_template(unit_id, mode='median')
>>> # Load from folder (in another session)
>>> we = WaveformExtractor.load_from_folder(folder)
Attributes
channel_ids
nafter
nbefore
nsamples
recording
return_scaled
sampling_frequency
unit_ids

Methods

delete_extension(extension_name)

Deletes an existing extension.

get_all_templates([unit_ids, mode])

Return templates (average waveform) for multiple units.

get_available_extension_names()

Return a list of loaded or available extension names either in memory or in persistent extension folders.

get_extension_class(extension_name)

Get extension class from name and check if registered.

get_sampled_indices(unit_id)

Return sampled spike indices of extracted waveforms

get_template(unit_id[, mode, sparsity])

Return template (average waveform).

get_template_segment(unit_id, segment_index)

Return template for the specified unit id computed from waveforms of a specific segment.

get_waveforms(unit_id[, with_index, cache, ...])

Return waveforms for the specified unit id.

get_waveforms_segment(segment_index, unit_id)

Return waveforms from a specified segment and unit_id.

is_extension(extension_name)

Check if the extension exists in memory or in the folder.

load_extension(extension_name)

Load an extension from its name.

precompute_templates([modes])

Precompute all templates for different "modes".

register_extension(extension_class)

This maintains a list of possible extensions that are available.

select_units(unit_ids[, new_folder, ...])

Filters units by creating a new waveform extractor object in a new folder.

set_params([ms_before, ms_after, ...])

Set parameters for waveform extraction

channel_ids_to_indices

create

get_channel_locations

get_num_channels

get_num_segments

get_probe

get_probegroup

load_from_folder

run_extract_waveforms

sample_spikes

set_params(ms_before=1.0, ms_after=2.0, max_spikes_per_unit=500, return_scaled=False, dtype=None)

Set parameters for waveform extraction

Parameters
ms_before: float

Cut out in ms before spike time

ms_after: float

Cut out in ms after spike time

max_spikes_per_unit: int

Maximum number of spikes to extract per unit

return_scaled: bool

If True and recording has gain_to_uV/offset_to_uV properties, waveforms are converted to uV.

dtype: np.dtype

The dtype of the computed waveforms

spikeinterface.core.download_dataset(repo=None, remote_path=None, local_folder=None, update_if_exists=False, unlock=False)

spikeinterface.core.write_binary_recording(recording, file_paths=None, dtype=None, add_file_extension=True, verbose=False, byte_offset=0, auto_cast_uint=True, **job_kwargs)

Save the traces of a recording extractor in binary .dat format (one file per segment).

Note:

time_axis is always 0 (contrary to the previous version). To get time_axis=1 (which is a bad idea) use write_binary_recording_file_handle()

Parameters
recording: RecordingExtractor

The recording extractor object to be saved in .dat format

file_paths: str or list

The path to the file.

dtype: dtype

Type of the saved data. Default float32.

add_file_extension: bool

If True (default), the ‘.raw’ file extension is added if the file name does not end in ‘raw’, ‘bin’, or ‘dat’

verbose: bool

If True, output is verbose (when chunks are used)

byte_offset: int

Offset in bytes (default 0) for the binary file (e.g. to leave room for a header)

auto_cast_uint: bool

If True (default), unsigned integers are automatically cast to int if the specified dtype is signed

**job_kwargs: keyword arguments for parallel processing:
  • chunk_duration or chunk_size or chunk_memory or total_memory
    • chunk_size: int

      Number of samples per chunk

    • chunk_memory: str

      Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

    • chunk_duration: str or float or None

      Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

  • mp_context: str or None

    Context for multiprocessing. It can be None (default), “fork” or “spawn”. Note that “fork” is only available on UNIX systems

spikeinterface.core.set_global_tmp_folder(folder)

Set the global temporary folder path.

spikeinterface.core.set_global_dataset_folder(folder)

Set the global dataset folder.

class spikeinterface.core.ChunkRecordingExecutor(recording, func, init_func, init_args, verbose=False, progress_bar=False, handle_returns=False, n_jobs=1, total_memory=None, chunk_size=None, chunk_memory=None, chunk_duration=None, mp_context=None, job_name='')

Core class for parallel processing to run a “function” over chunks on a recording.

It supports running a function:
  • in loop with chunk processing (low RAM usage)

  • at once if chunk_size is None (high RAM usage)

  • in parallel with ProcessPoolExecutor (higher speed)

The initializer (‘init_func’) allows setting a global context to avoid heavy serialization (for an example, see the implementation in core.WaveformExtractor).

Parameters
recording: RecordingExtractor

The recording to be processed

func: function

Function that runs on each chunk

init_func: function

Initializer function to set the global context (accessible by ‘func’)

init_args: tuple

Arguments for init_func

verbose: bool

If True, output is verbose

progress_bar: bool

If True, a progress bar is printed to monitor the progress of the process

handle_returns: bool

If True, the function can return values

n_jobs: int

Number of jobs to be used (default 1). Use -1 to use as many jobs as number of cores

total_memory: str

Total memory (RAM) to use (e.g. “1G”, “500M”)

chunk_memory: str

Memory per chunk (RAM) to use (e.g. “1G”, “500M”)

chunk_size: int or None

Size of each chunk in number of samples. If ‘total_memory’ or ‘chunk_memory’ are used, it is ignored.

chunk_duration: str or float or None

Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

mp_context: str or None

“fork” (default) or “spawn”. If None, the context is taken from recording.get_preferred_mp_context(). “fork” is only available on UNIX systems.

job_name: str

Job name

Returns
res: list

If ‘handle_returns’ is True, the results for each chunk process

Methods

run()

Runs the defined jobs.
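
A minimal sketch computing the per-chunk peak-to-peak amplitude (the func/init_func signatures follow the pattern used by spikeinterface's own chunked jobs; 'recording' is assumed to exist):

>>> from spikeinterface.core import ChunkRecordingExecutor
>>> def init_func(recording):
...     # build the worker context shared by 'func'
...     return {"recording": recording}
>>> def func(segment_index, start_frame, end_frame, worker_ctx):
...     traces = worker_ctx["recording"].get_traces(segment_index, start_frame, end_frame)
...     return traces.ptp(axis=0)  # peak-to-peak per channel for this chunk
>>> executor = ChunkRecordingExecutor(recording, func, init_func, (recording,),
...                                   handle_returns=True, chunk_duration="1s")
>>> results = executor.run()  # one entry per chunk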

spikeinterface.core.get_random_data_chunks(recording, return_scaled=False, num_chunks_per_segment=20, chunk_size=10000, concatenated=True, seed=0)

Extract random chunks across segments

This is used for instance in get_noise_levels() to estimate noise on traces.

Parameters
recording: BaseRecording

The recording to get random chunks from

return_scaled: bool

If True, returned chunks are scaled to uV

num_chunks_per_segment: int

Number of chunks per segment

chunk_size: int

Size of a chunk in number of frames

concatenated: bool (default True)

If True, chunks are concatenated along the time axis.

seed: int

Random seed

Returns
chunk_list: np.array

Array of concatenated chunks per segment

spikeinterface.core.get_channel_distances(recording)

Distance between channel pairs

spikeinterface.core.get_closest_channels(recording, channel_ids=None, num_channels=None)

Get closest channels + distances

Parameters
recording: RecordingExtractor

The recording extractor to get closest channels

channel_ids: list

List of channel ids for which to compute the nearest neighbors

num_channels: int, optional

Maximum number of neighborhood channels to return

Returns
closest_channels_inds: array (2d)

Closest channel indices in ascending order for each channel id given in input

dists: array (2d)

Distance in ascending order for each channel id given in input

spikeinterface.core.get_noise_levels(recording, return_scaled=True, **random_chunk_kwargs)

Estimate noise for each channel using MAD methods.

Internally it samples some chunks across segments and then uses the MAD estimator (more robust than STD).
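
Example (extra keyword arguments are forwarded to get_random_data_chunks()):

>>> from spikeinterface.core import get_noise_levels
>>> noise_levels = get_noise_levels(recording, return_scaled=False)
>>> noise_levels.shape  # (num_channels,)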

spikeinterface.core.get_chunk_with_margin(rec_segment, start_frame, end_frame, channel_indices, margin, add_zeros=False, window_on_margin=False, dtype=None)

Helper to get chunk with margin

spikeinterface.extractors

NEO-based

spikeinterface.extractors.read_alphaomega(folder_path, lsx_files=None, stream_id='RAW', stream_name=None, all_annotations=False)

Class for reading from AlphaRS and AlphaLab SnR boards.

Based on neo.rawio.AlphaOmegaRawIO

Parameters
folder_path: str or Path-like

The folder path to the AlphaOmega recordings.

lsx_files: list of strings or None, optional

A list of listing files that refer to the mpx files to load.

stream_id: {‘RAW’, ‘LFP’, ‘SPK’, ‘ACC’, ‘AI’, ‘UD’}, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_alphaomega_event(folder_path)

Class for reading events from AlphaOmega MPX file format

spikeinterface.extractors.read_axona(file_path, all_annotations=False)

Class for reading Axona RAW format.

Based on neo.rawio.AxonaRawIO

Parameters
file_path: str

The file path to load the recordings from.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_biocam(file_path, mea_pitch=None, electrode_width=None, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading data from a Biocam file from 3Brain.

Based on neo.rawio.BiocamRawIO

Parameters
file_path: str

The file path to load the recordings from.

mea_pitch: float, optional

The inter-electrode distance (pitch).

electrode_width: float, optional

Width of the electrodes in um.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool (default False)

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_blackrock(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading BlackRock data.

Based on neo.rawio.BlackrockRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_ced(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading smr/smrx CED files.

Based on neo.rawio.CedRawIO / sonpy

Alternative to read_spike2 which does not handle smrx

Parameters
file_path: str

The file path to the smr or smrx file.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

block_index: int, optional

If there are several blocks, specify the block index you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_intan(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading data from an Intan board. Supports rhd and rhs formats.

Based on neo.rawio.IntanRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_kilosort(folder_path, keep_good_only=False)

Load Kilosort format data as a sorting extractor.

Parameters
folder_path: str or Path

Path to the output Phy folder (containing the params.py).

exclude_cluster_groups: list or str, optional

Cluster groups to exclude (e.g. “noise” or [“noise”, “mua”]).

keep_good_only: bool, optional, default: False

Whether to only keep good units. If True, only Kilosort-labeled ‘good’ units are returned.

Returns
extractor: KiloSortSortingExtractor

The loaded data.
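
A minimal usage sketch (the folder path is a placeholder):

    import spikeinterface.extractors as se

    # hypothetical Kilosort/Phy output folder containing params.py
    sorting = se.read_kilosort("/data/kilosort_output", keep_good_only=False)
    print(sorting.get_unit_ids())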

spikeinterface.extractors.read_maxwell(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False, rec_name=None)

Class for reading data from Maxwell device. It handles MaxOne (old and new format) and MaxTwo.

Based on neo.rawio.MaxwellRawIO

Parameters
file_path: str

The file path to the maxwell h5 file.

stream_id: str, optional

If there are several streams, specify the stream id you want to load. For MaxTwo, when several wells are recorded at the same time, you need to specify stream_id=’well000’, ’well001’, etc.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

rec_name: str, optional

When the file contains several recordings you need to specify the one you want to extract (e.g. rec_name=’rec0000’).
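
A usage sketch for a hypothetical MaxTwo file with several wells and recordings (path, well and recording names are placeholders):

    import spikeinterface.extractors as se

    recording = se.read_maxwell(
        "/data/maxtwo_session.h5",  # hypothetical h5 file
        stream_id="well000",        # selects the well (MaxTwo)
        rec_name="rec0000",         # selects the recording inside the file
    )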

spikeinterface.extractors.read_mearec(file_path)

Read a MEArec file.

Parameters
file_path: str or Path

Path to MEArec h5 file

Returns
recording: MEArecRecordingExtractor

The recording extractor object

sorting: MEArecSortingExtractor

The sorting extractor object
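
Since both objects come from the same file, a typical call unpacks them together (the path is a placeholder):

    import spikeinterface.extractors as se

    recording, sorting = se.read_mearec("/data/mearec_recording.h5")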

spikeinterface.extractors.read_mcsraw(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading data from the “Raw” Multi Channel Systems (MCS) format. This format is NOT the native MCS format (*.mcd). It is a raw format with an internal binary header, exported by the “MC_DataTool binary conversion” tool with the header option selected.

Based on neo.rawio.RawMCSRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

block_index: int, optional

If there are several blocks, specify the block index you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_neuralynx(folder_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading a Neuralynx folder.

Based on neo.rawio.NeuralynxRawIO

Parameters
folder_path: str

The folder path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_neuroscope(file_path, stream_id=None, keep_mua_units=False, exclude_shanks=None, load_recording=True, load_sorting=False)

Read Neuroscope recording and sorting. This function assumes that all .res and .clu files are in the same folder as the .xml file.

Parameters
file_path: str

The xml file.

stream_id: str or None
keep_mua_units: bool, optional, default: False

Whether or not to return sorted spikes from multi-unit activity.

exclude_shanks: list

Optional. List of indices to ignore. The set of all possible indices is chosen by default, extracted as the final integer of all the .res.%i and .clu.%i pairs.

load_recording: bool

If True, the recording is loaded (default True)

load_sorting: bool

If True, the sorting is loaded (default False)
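
A usage sketch (the path is a placeholder), assuming that with both flags enabled the function returns the recording and sorting together:

    import spikeinterface.extractors as se

    # the .res/.clu files are expected next to this hypothetical .xml file
    recording, sorting = se.read_neuroscope(
        "/data/session/session.xml",
        load_recording=True,
        load_sorting=True,
    )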

spikeinterface.extractors.read_nix(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading a NIX file.

Based on neo.rawio.NIXRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

block_index: int, optional

If there are several blocks, specify the block index you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_openephys(folder_path, **kwargs)

Read ‘legacy’ or ‘binary’ Open Ephys formats.

Parameters
folder_path: str or Path

Path to openephys folder

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

block_index: int, optional

If there are several blocks (experiments), specify the block index you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

Returns
recording: OpenEphysLegacyRecordingExtractor or OpenEphysBinaryRecordingExtractor
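
A minimal sketch (the folder path is a placeholder); the keyword arguments documented above are forwarded to the underlying legacy or binary reader:

    import spikeinterface.extractors as se

    # hypothetical Open Ephys session folder
    recording = se.read_openephys("/data/openephys_session", block_index=0)
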
spikeinterface.extractors.read_openephys_event(folder_path, block_index=None)

Read Open Ephys events from ‘binary’ format.

Parameters
folder_path: str or Path

Path to openephys folder

block_index: int, optional

If there are several blocks (experiments), specify the block index you want to load.

Returns
event: OpenEphysBinaryEventExtractor
spikeinterface.extractors.read_plexon(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading Plexon plx files.

Based on neo.rawio.PlexonRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_spike2(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading Spike2 smr files. smrx files are not supported by this reader; prefer read_ced (CedRecordingExtractor) instead.

Based on neo.rawio.Spike2RawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_spikegadgets(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading *.rec files from SpikeGadgets.

Based on neo.rawio.SpikeGadgetsRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_spikeglx(folder_path, load_sync_channel=False, stream_id=None, stream_name=None, all_annotations=False)

Class for reading data saved by SpikeGLX software. See https://billkarsh.github.io/SpikeGLX/

Based on neo.rawio.SpikeGLXRawIO

Contrary to older versions, this reader is folder-based. If the folder contains several streams (‘imec0.ap’, ‘nidq’, ‘imec0.lf’), the one to load has to be specified with stream_id.

Parameters
folder_path: str

The folder path to load the recordings from.

load_sync_channel: bool, optional, default: False

Whether to load the last channel, which is used for synchronization. If True, the probe is not loaded, because the extra sync channel is not part of the probe layout.

stream_id: str, optional

If there are several streams, specify the stream id you want to load. For example, ‘imec0.ap’, ‘nidq’ or ‘imec0.lf’.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.
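
Because the reader is folder-based, each stream is loaded with a separate call (the folder path is a placeholder):

    import spikeinterface.extractors as se

    recording_ap = se.read_spikeglx("/data/spikeglx_session", stream_id="imec0.ap")
    recording_lf = se.read_spikeglx("/data/spikeglx_session", stream_id="imec0.lf")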

spikeinterface.extractors.read_tdt(folder_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading TDT folder.

Based on neo.rawio.TdtRawIO

Parameters
folder_path: str

The folder path to the tdt folder.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

Non-NEO-based

spikeinterface.extractors.read_alf_sorting(folder_path, sampling_frequency=30000)

Load ALF format data as a sorting extractor.

Parameters
folder_path: str or Path

Path to the ALF folder.

sampling_frequency: int, optional, default: 30000

The sampling frequency.

Returns
extractor: ALFSortingExtractor

The loaded data.

spikeinterface.extractors.read_bids(folder_path)

Load a BIDS folder of data into extractor objects.

The following files are considered:
  • _channels.tsv

  • _contacts.tsv

  • _ephys.nwb

  • _probes.tsv

Parameters
folder_path: str or Path

Path to the BIDS folder.

Returns
extractors: list of extractors

The loaded data, with attached Probes.

spikeinterface.extractors.read_cbin_ibl(folder_path, load_sync_channel=False)

Load IBL data as an extractor object.

IBL uses a custom format: compressed binary with SpikeGLX meta.

The format is like SpikeGLX (it has a meta file) but contains:
  • “cbin” file (instead of “bin”)

  • “ch” file used by mtscomp for compression info

Parameters
folder_path: str or Path

Path to ibl folder.

load_sync_channel: bool, optional, default: False

Whether to load the last (sync) channel. If False, the probe is loaded.

Returns
recording: CompressedBinaryIblExtractor

The loaded data.
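
A minimal sketch (the folder path is a placeholder):

    import spikeinterface.extractors as se

    # hypothetical IBL folder containing the .cbin, .ch and .meta files
    recording = se.read_cbin_ibl("/data/ibl_session", load_sync_channel=False)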

spikeinterface.extractors.read_combinato(folder_path, sampling_frequency=None, user='simple', det_sign='both', keep_good_only=True)

Load Combinato format data as a sorting extractor.

Parameters
folder_path: str or Path

Path to the Combinato folder.

sampling_frequency: int or None, optional, default: None

The sampling frequency.

user: str, optional, default: ‘simple’

The username that ran the sorting.

det_sign: {‘both’, ‘pos’, ‘neg’}, optional, default: ‘both’

Which sign was used for detection.

keep_good_only: bool, optional, default: True

Whether to only keep good units.

Returns
extractor: CombinatoSortingExtractor

The loaded data.

spikeinterface.extractors.read_hdsort(file_path, keep_good_only=True)

Load HDSort format data as a sorting extractor.

Parameters
file_path: str or Path

Path to HDSort mat file.

keep_good_only: bool, optional, default: True

Whether to only keep good units.

Returns
extractor: HDSortSortingExtractor

The loaded data.

spikeinterface.extractors.read_herdingspikes(file_path, load_unit_info=True)

Load HerdingSpikes format data as a sorting extractor.

Parameters
file_path: str or Path

Path to the HerdingSpikes file.

load_unit_info: bool, optional, default: True

Whether to load the unit info from the file.

Returns
extractor: HerdingSpikesSortingExtractor

The loaded data.

spikeinterface.extractors.read_klusta(file_or_folder_path, exclude_cluster_groups=None)

Load Klusta format data as a sorting extractor.

Parameters
file_or_folder_path: str or Path

Path to the Klusta file or folder.

exclude_cluster_groups: list or str, optional

Cluster groups to exclude (e.g. “noise” or [“noise”, “mua”]).

Returns
extractor: KlustaSortingExtractor

The loaded data.

spikeinterface.extractors.read_mcsh5(file_path, stream_id=0)

Load a MCS H5 file as a recording extractor.

Parameters
file_path: str or Path

The path to the MCS h5 file.

stream_id: int, optional, default: 0

The stream ID to load.

Returns
recording: MCSH5RecordingExtractor

The loaded data.

spikeinterface.extractors.read_mda_recording(folder_path, raw_fname='raw.mda', params_fname='params.json', geom_fname='geom.csv')

Load MDA format data as a recording extractor.

Parameters
folder_path: str or Path

Path to the MDA folder.

raw_fname: str

File name of raw file. Defaults to ‘raw.mda’.

params_fname: str

File name of params file. Defaults to ‘params.json’.

geom_fname: str

File name of geom file. Defaults to ‘geom.csv’.

Returns
extractor: MdaRecordingExtractor

The loaded data.

spikeinterface.extractors.read_mda_sorting(file_path, sampling_frequency)

Load MDA format data as a sorting extractor.

Parameters
file_path: str or Path

Path to the MDA file.

sampling_frequency: int

The sampling frequency.

Returns
extractor: MdaSortingExtractor

The loaded data.
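
A sketch combining both MDA readers (the paths and the firings file name are placeholders; the sorting reader needs an explicit sampling frequency):

    import spikeinterface.extractors as se

    recording = se.read_mda_recording("/data/mda_session")
    sorting = se.read_mda_sorting(
        "/data/mda_session/firings.mda",  # hypothetical spike file
        sampling_frequency=30000,
    )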

spikeinterface.extractors.read_nwb(file_path, load_recording=True, load_sorting=False, electrical_series_name=None)

Reads NWB file into SpikeInterface extractors.

Parameters
file_path: str or Path

Path to NWB file.

load_recording: bool, optional, default: True

If True, the recording object is loaded.

load_sorting: bool, optional, default: False

If True, the sorting object is loaded.

electrical_series_name: str, optional

The name of the ElectricalSeries (if multiple ElectricalSeries are present)

Returns
extractors: extractor or tuple

Single RecordingExtractor/SortingExtractor, or a tuple with both (depending on the ‘load_recording’ and ‘load_sorting’ arguments).
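
For example (the file path is a placeholder), requesting both objects returns them as a tuple:

    import spikeinterface.extractors as se

    recording, sorting = se.read_nwb(
        "/data/session.nwb",
        load_recording=True,
        load_sorting=True,
    )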

spikeinterface.extractors.read_phy(folder_path, exclude_cluster_groups=None)

Load Phy format data as a sorting extractor.

Parameters
folder_path: str or Path

Path to the output Phy folder (containing the params.py).

exclude_cluster_groups: list or str, optional

Cluster groups to exclude (e.g. “noise” or [“noise”, “mua”]).

Returns
extractor: PhySortingExtractor

The loaded data.
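
A minimal sketch (the folder path is a placeholder) that drops noise and MUA clusters:

    import spikeinterface.extractors as se

    sorting = se.read_phy("/data/phy_output", exclude_cluster_groups=["noise", "mua"])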

spikeinterface.extractors.read_shybrid_recording(file_path)

Load SHYBRID format data as a recording extractor.

Parameters
file_path: str or Path

Path to the SHYBRID file.

Returns
extractor: SHYBRIDRecordingExtractor

Loaded data.

spikeinterface.extractors.read_shybrid_sorting(file_path, sampling_frequency, delimiter=',')

Load SHYBRID format data as a sorting extractor.

Parameters
file_path: str or Path

Path to the SHYBRID file.

sampling_frequency: int

The sampling frequency.

delimiter: str, optional, default: ‘,’

The delimiter to use for loading the file.

Returns
extractor: SHYBRIDSortingExtractor

Loaded data.

spikeinterface.extractors.read_spykingcircus(folder_path)

Load SpykingCircus format data as a sorting extractor.

Parameters
folder_path: str or Path

Path to the SpykingCircus folder.

Returns
extractor: SpykingCircusSortingExtractor

Loaded data.

spikeinterface.extractors.toy_example(duration=10, num_channels=4, num_units=10, sampling_frequency=30000.0, num_segments=2, average_peak_amplitude=-100, upsample_factor=13, contact_spacing_um=40, num_columns=1, spike_times=None, spike_labels=None, score_detection=1, seed=None)

Creates toy recording and sorting extractors.

Parameters
duration: float (or list for multi segment)

Duration in seconds (default 10).

num_channels: int

Number of channels (default 4).

num_units: int

Number of units (default 10).

sampling_frequency: float

Sampling frequency (default 30000).

num_segments: int

Number of segments (default 2).

spike_times: ndarray (or list for multi segment)

Spike times in the recording.

spike_labels: ndarray (or list for multi segment)

Cluster label for each spike time (must be specified together with spike_times).

score_detection: int (between 0 and 1)

Generate the sorting based on a subset of spikes compared with the trace generation.

seed: int

Seed for random initialization.

Returns
recording: RecordingExtractor

The output recording extractor.

sorting: SortingExtractor

The output sorting extractor.
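
Because everything is generated in memory, no files are needed; a quick sketch:

    import spikeinterface.extractors as se

    recording, sorting = se.toy_example(
        duration=10,
        num_channels=4,
        num_units=10,
        num_segments=1,
        seed=0,
    )
    print(recording.get_num_channels(), sorting.get_num_units())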

spikeinterface.extractors.read_tridesclous(folder_path, chan_grp=None)

Load Tridesclous format data as a sorting extractor.

Parameters
folder_path: str or Path

Path to the Tridesclous folder.

chan_grp: list, optional

The channel group(s) to load.

Returns
extractor: TridesclousSortingExtractor

Loaded data.

spikeinterface.extractors.read_yass(folder_path)

Load YASS format data as a sorting extractor.

Parameters
folder_path: str or Path

Path to the YASS folder.

Returns
extractor: YassSortingExtractor

Loaded data.

Low-level classes

class spikeinterface.extractors.AlphaOmegaRecordingExtractor(folder_path, lsx_files=None, stream_id='RAW', stream_name=None, all_annotations=False)

Class for reading from AlphaRS and AlphaLab SnR boards.

Based on neo.rawio.AlphaOmegaRawIO

Parameters
folder_path: str or Path-like

The folder path to the AlphaOmega recordings.

lsx_files: list of strings or None, optional

A list of listing files that refer to the mpx files to load.

stream_id: {‘RAW’, ‘LFP’, ‘SPK’, ‘ACC’, ‘AI’, ‘UD’}, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate: * data * properties * features.

is_binary_compatible()

Inform whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. ‘group’).

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.AlphaOmegaEventExtractor(folder_path)

Class for reading events from the AlphaOmega MPX file format.

Attributes
channel_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_event_times([channel_id, segment_index, ...])

Return event timestamps of a channel in seconds.

get_events([channel_id, segment_index, ...])

Return events of a channel in its native structured type.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate: * data * properties * features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_event_segment

annotate

check_if_dumpable

delete_property

get_annotation_keys

get_dtype

get_num_channels

get_num_segments

get_property

get_property_keys

id_to_index

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

class spikeinterface.extractors.AxonaRecordingExtractor(file_path, all_annotations=False)

Class for reading Axona RAW format.

Based on neo.rawio.AxonaRawIO

Parameters
file_path: str

The file path to load the recordings from.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate: * data * properties * features.

is_binary_compatible()

Inform whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. ‘group’).

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.BiocamRecordingExtractor(file_path, mea_pitch=None, electrode_width=None, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading data from a Biocam file from 3Brain.

Based on neo.rawio.BiocamRawIO

Parameters
file_path: str

The file path to load the recordings from.

mea_pitch: float, optional

The inter-electrode distance (pitch) between electrodes.

electrode_width: float, optional

Width of the electrodes in um.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate: * data * properties * features.

is_binary_compatible()

Inform whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. ‘group’).

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.BlackrockRecordingExtractor(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading BlackRock data.

Based on neo.rawio.BlackrockRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate: * data * properties * features.

is_binary_compatible()

Inform whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. ‘group’).

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.BlackrockSortingExtractor(file_path, sampling_frequency=None)

Class for reading BlackRock spiking data.

Based on neo.rawio.BlackrockRawIO

Parameters
file_path: str

The file path to load the recordings from.

sampling_frequency: float, optional, default: None

The sampling frequency for the sorting extractor. When the signal data is available (.nsX files), those files will be used to extract the frequency automatically. Otherwise, the sampling frequency needs to be specified for this extractor to be initialized.

Attributes
unit_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_all_spike_trains([outputs])

Return all spike trains concatenated

get_annotation(key[, copy])

Get an annotation.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a registered recording segment.

get_total_num_spikes()

Get total number of spikes for each unit across segments.

has_time_vector([segment_index])

Check if the segment of the registered recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate: * data * properties * features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

remove_empty_units()

Removes units with empty spike trains

remove_units(remove_unit_ids)

Removes a subset of units

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_units(unit_ids[, renamed_unit_ids])

Selects a subset of units

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

to_spike_vector([extremum_channel_inds])

Construct a unique structured numpy vector concatenating all spikes with several fields: sample_ind, unit_index, segment_index.

add_sorting_segment

annotate

check_if_dumpable

delete_property

frame_slice

get_annotation_keys

get_num_segments

get_num_units

get_property

get_property_keys

get_sampling_frequency

get_unit_ids

get_unit_property

get_unit_spike_train

has_recording

id_to_index

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

register_recording

save_metadata_to_folder

save_to_memory
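
A usage sketch (the file path is a placeholder); the sampling frequency is passed explicitly in case it cannot be inferred from accompanying signal files:

    from spikeinterface.extractors import BlackrockSortingExtractor

    sorting = BlackrockSortingExtractor(
        "/data/session.nev",  # hypothetical Blackrock spike file
        sampling_frequency=30000.0,
    )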

class spikeinterface.extractors.CedRecordingExtractor(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading smr/smrx CED files.

Based on neo.rawio.CedRawIO / sonpy

Alternative to read_spike2, which does not handle smrx.

Parameters
file_path: str

The file path to the smr or smrx file.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

block_index: int, optional

If there are several blocks, specify the block index you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate: * data * properties * features.

is_binary_compatible()

Inform whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. ‘group’).

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.EDFRecordingExtractor(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading an EDF (European Data Format) file.

Based on neo.rawio.EDFRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load. For this neo reader streams are defined by their sampling frequency.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate: * data * properties * features.

is_binary_compatible()

Inform whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. ‘group’).

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.IntanRecordingExtractor(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading data from an Intan board. Supports rhd and rhs formats.

Based on neo.rawio.IntanRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate: * data * properties * features.

is_binary_compatible()

Inform whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. ‘group’).

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.MaxwellRecordingExtractor(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False, rec_name=None)

Class for reading data from Maxwell device. It handles MaxOne (old and new format) and MaxTwo.

Based on neo.rawio.MaxwellRawIO

Parameters
file_path: str

The file path to the maxwell h5 file.

stream_id: str, optional

If there are several streams, specify the stream id you want to load. For MaxTwo, when several wells are recorded at the same time, you need to specify stream_id=’well000’, ’well001’, etc.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

rec_name: str, optional

When the file contains several recordings you need to specify the one you want to extract (e.g. rec_name=’rec0000’).

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate: * data * properties * features.

is_binary_compatible()

Inform whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. ‘group’).

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.MaxwellEventExtractor(file_path)

Class for reading TTL events from Maxwell files.

Attributes
channel_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_event_times([channel_id, segment_index, ...])

Return event timestamps of a channel in seconds.

get_events([channel_id, segment_index, ...])

Return events of a channel in its native structured type.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate: * data * properties * features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_event_segment

annotate

check_if_dumpable

delete_property

get_annotation_keys

get_dtype

get_num_channels

get_num_segments

get_property

get_property_keys

id_to_index

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

class spikeinterface.extractors.MEArecRecordingExtractor(file_path, all_annotations=False)

Class for reading data from a MEArec simulated dataset.

Based on neo.rawio.MEArecRawIO

Parameters
file_path: str

The file path to load the recordings from.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.
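
A minimal usage sketch (illustrative only; the .h5 path is a placeholder):

from spikeinterface.extractors import MEArecRecordingExtractor

recording = MEArecRecordingExtractor("/path/to/mearec_recordings.h5")  # placeholder path
fs = recording.get_sampling_frequency()
# first second of traces, all channels, from the first segment
traces = recording.get_traces(segment_index=0, start_frame=0, end_frame=int(fs))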

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.MEArecSortingExtractor(file_path)
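
This extractor reads the simulated ground-truth spike trains from a MEArec file. A minimal sketch (illustrative only; placeholder path):

from spikeinterface.extractors import MEArecSortingExtractor

sorting = MEArecSortingExtractor("/path/to/mearec_recordings.h5")  # placeholder path
for unit_id in sorting.get_unit_ids():
    # spike times in samples for each ground-truth unit
    spike_train = sorting.get_unit_spike_train(unit_id=unit_id, segment_index=0)
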
Attributes
unit_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_all_spike_trains([outputs])

Return all spike trains concatenated

get_annotation(key[, copy])

Get an annotation.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a registered recording segment.

get_total_num_spikes()

Get total number of spikes for each unit across segments.

has_time_vector([segment_index])

Check if the segment of the registered recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

remove_empty_units()

Removes units with empty spike trains

remove_units(remove_unit_ids)

Removes a subset of units

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_units(unit_ids[, renamed_unit_ids])

Selects a subset of units

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

to_spike_vector([extremum_channel_inds])

Construct a unique structured numpy vector concatenating all spikes with several fields: sample_ind, unit_index, segment_index.

add_sorting_segment

annotate

check_if_dumpable

delete_property

frame_slice

get_annotation_keys

get_num_segments

get_num_units

get_property

get_property_keys

get_sampling_frequency

get_unit_ids

get_unit_property

get_unit_spike_train

has_recording

id_to_index

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

register_recording

save_metadata_to_folder

save_to_memory

class spikeinterface.extractors.MCSRawRecordingExtractor(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading data in the “Raw” Multi Channel Systems (MCS) format. This is NOT the native MCS format (*.mcd), but a raw format with an internal binary header, exported by the “MC_DataTool binary conversion” tool with the header option selected.

Based on neo.rawio.RawMCSRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

block_index: int, optional

If there are several blocks, specify the block index you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.
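
A minimal usage sketch (illustrative only; the file name is a placeholder, and stream_id is only needed when several streams are present):

from spikeinterface.extractors import MCSRawRecordingExtractor

recording = MCSRawRecordingExtractor("/path/to/mcs_exported_file.raw")  # placeholder path
print(recording.get_num_channels(), recording.get_sampling_frequency())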

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.NeuralynxRecordingExtractor(folder_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading a Neuralynx folder.

Based on neo.rawio.NeuralynxRawIO

Parameters
folder_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.
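
A minimal usage sketch (illustrative only; the folder path is a placeholder):

from spikeinterface.extractors import NeuralynxRecordingExtractor

recording = NeuralynxRecordingExtractor("/path/to/neuralynx_folder")  # placeholder path
channel_ids = recording.get_channel_ids()
duration = recording.get_total_duration()  # in seconds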

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.NeuralynxSortingExtractor(folder_path, sampling_frequency=None)

Class for reading spike data from a folder with Neuralynx spiking data (i.e. .nse and .ntt formats).

Based on neo.rawio.NeuralynxRawIO

Parameters
folder_path: str

The folder path to load the spiking data from.

sampling_frequency: float

The sampling frequency for the spiking channels. When the signal data is available (.ncs) those files will be used to extract the frequency. Otherwise, the sampling frequency needs to be specified for this extractor.
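
A minimal sketch (illustrative only; the folder path is a placeholder, and the sampling_frequency value is an arbitrary example, only required when no .ncs files are present):

from spikeinterface.extractors import NeuralynxSortingExtractor

sorting = NeuralynxSortingExtractor(
    "/path/to/neuralynx_folder",   # placeholder path
    sampling_frequency=32000.0,    # example value, needed if .ncs files are absent
)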

Attributes
unit_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_all_spike_trains([outputs])

Return all spike trains concatenated

get_annotation(key[, copy])

Get an annotation.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a registered recording segment.

get_total_num_spikes()

Get total number of spikes for each unit across segments.

has_time_vector([segment_index])

Check if the segment of the registered recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

remove_empty_units()

Removes units with empty spike trains

remove_units(remove_unit_ids)

Removes a subset of units

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_units(unit_ids[, renamed_unit_ids])

Selects a subset of units

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

to_spike_vector([extremum_channel_inds])

Construct a unique structured numpy vector concatenating all spikes with several fields: sample_ind, unit_index, segment_index.

add_sorting_segment

annotate

check_if_dumpable

delete_property

frame_slice

get_annotation_keys

get_num_segments

get_num_units

get_property

get_property_keys

get_sampling_frequency

get_unit_ids

get_unit_property

get_unit_spike_train

has_recording

id_to_index

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

register_recording

save_metadata_to_folder

save_to_memory

class spikeinterface.extractors.NeuroScopeRecordingExtractor(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading data from NeuroScope. Ref: http://neuroscope.sourceforge.net

Based on neo.rawio.NeuroScopeRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.
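
A minimal usage sketch (illustrative only; the .dat path is a placeholder):

from spikeinterface.extractors import NeuroScopeRecordingExtractor

recording = NeuroScopeRecordingExtractor("/path/to/session.dat")  # placeholder path
traces = recording.get_traces(segment_index=0, start_frame=0, end_frame=1000)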

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.NeuroScopeSortingExtractor(folder_path: Optional[Union[str, Path]] = None, resfile_path: Optional[Union[str, Path]] = None, clufile_path: Optional[Union[str, Path]] = None, keep_mua_units: bool = True, exclude_shanks: Optional[list] = None, xml_file_path: Optional[Union[str, Path]] = None)

Extracts spiking information from an arbitrary number of .res.%i and .clu.%i files in the general folder path.

The .res file is a text file containing a sorted list of spike times from all units, given in samples (integer ‘%i’). The .clu file has one more row than the .res file: its first row gives the total number of unique unit ids in the file (which may exclude 0 and 1 from this count), and each remaining row indicates which unit id the corresponding entry in the .res file belongs to. The group id is loaded as the unit property ‘group’.

In the original Neuroscope format:

Unit ID 0 is the cluster of unsorted spikes (noise). Unit ID 1 is a cluster of multi-unit spikes.

The function defaults to returning multi-unit activity as the first index, and ignoring unsorted noise. To return only the fully sorted units, set keep_mua_units=False.

The sorting extractor always returns unit IDs from 1, …, number of chosen clusters.

Parameters
folder_path: str, optional

Path to the collection of .res and .clu text files. Will auto-detect format.

resfile_path: PathType, optional

Path to a particular .res text file. If given, only the single .res file (and the respective .clu file) is loaded.

clufile_path: PathType, optional

Path to a particular .clu text file. If given, only the single .clu file (and the respective .res file) is loaded.

keep_mua_units: bool, optional, default: True

Whether or not to return sorted spikes from multi-unit activity.

exclude_shanks: list, optional

List of indices to ignore. The set of all possible indices is chosen by default, extracted as the final integer of all the .res.%i and .clu.%i pairs.

xml_file_path: PathType, optional

Path to the .xml file referenced by this sorting.
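
A minimal sketch (illustrative only; the folder path is a placeholder, and keep_mua_units=False drops the multi-unit cluster as described above):

from spikeinterface.extractors import NeuroScopeSortingExtractor

sorting = NeuroScopeSortingExtractor(
    folder_path="/path/to/session_folder",  # placeholder: contains .res.%i/.clu.%i pairs
    keep_mua_units=False,                    # discard multi-unit activity
)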

Attributes
unit_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_all_spike_trains([outputs])

Return all spike trains concatenated

get_annotation(key[, copy])

Get an annotation.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a registered recording segment.

get_total_num_spikes()

Get total number of spikes for each unit across segments.

has_time_vector([segment_index])

Check if the segment of the registered recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

remove_empty_units()

Removes units with empty spike trains

remove_units(remove_unit_ids)

Removes a subset of units

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_units(unit_ids[, renamed_unit_ids])

Selects a subset of units

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

to_spike_vector([extremum_channel_inds])

Construct a unique structured numpy vector concatenating all spikes with several fields: sample_ind, unit_index, segment_index.

add_sorting_segment

annotate

check_if_dumpable

delete_property

frame_slice

get_annotation_keys

get_num_segments

get_num_units

get_property

get_property_keys

get_sampling_frequency

get_unit_ids

get_unit_property

get_unit_spike_train

has_recording

id_to_index

load_from_folder

load_metadata_from_folder

register_recording

save_metadata_to_folder

save_to_memory

class spikeinterface.extractors.NixRecordingExtractor(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading a NIX file.

Based on neo.rawio.NIXRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

block_index: int, optional

If there are several blocks, specify the block index you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.
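
A minimal usage sketch (illustrative only; the .nix path is a placeholder):

from spikeinterface.extractors import NixRecordingExtractor

recording = NixRecordingExtractor("/path/to/data.nix")  # placeholder path
print(recording.get_num_segments(), recording.get_num_channels())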

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.OpenEphysLegacyRecordingExtractor(folder_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading data saved by the Open Ephys GUI.

This extractor works with the Open Ephys “legacy” format, which saves data using one file per continuous channel (.continuous files).

https://open-ephys.github.io/gui-docs/User-Manual/Recording-data/Open-Ephys-format.html

Based on neo.rawio.OpenEphysRawIO

Parameters
folder_path: str

The folder path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

block_index: int, optional

If there are several blocks (experiments), specify the block index you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.
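
A minimal usage sketch (illustrative only; the folder path is a placeholder):

from spikeinterface.extractors import OpenEphysLegacyRecordingExtractor

recording = OpenEphysLegacyRecordingExtractor("/path/to/open_ephys_legacy_folder")  # placeholder path
print(recording.get_num_channels(), recording.get_sampling_frequency())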

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.OpenEphysBinaryRecordingExtractor(folder_path, load_sync_channel=False, experiment_names=None, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading data saved by the Open Ephys GUI.

This extractor works with the Open Ephys “binary” format, which saves data using one file per continuous stream (.dat files).

https://open-ephys.github.io/gui-docs/User-Manual/Recording-data/Binary-format.html

Based on neo.rawio.OpenEphysBinaryRawIO

Parameters
folder_path: str

The folder path to load the recordings from.

load_sync_channel: bool, default: False

If False (default) and a SYNC channel is present (e.g. Neuropixels), it is not loaded. If True, the SYNC channel is loaded and can be accessed in the analog signals.

experiment_names: str, list, or None

If multiple experiments are available, this argument allows users to select one or more experiments. If None, all experiments are loaded as blocks. E.g. experiment_names=”experiment2”, experiment_names=[“experiment1”, “experiment2”]

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

block_index: int, optional

If there are several blocks (experiments), specify the block index you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.
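
A minimal sketch (illustrative only; the folder path is a placeholder, and the experiment name is an example matching the experiment_names parameter above):

from spikeinterface.extractors import OpenEphysBinaryRecordingExtractor

recording = OpenEphysBinaryRecordingExtractor(
    "/path/to/open_ephys_binary_folder",  # placeholder path
    experiment_names=["experiment1"],     # example: restrict loading to one experiment
)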

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.OpenEphysBinaryEventExtractor(folder_path, block_index=None)

Class for reading events saved by the Open Ephys GUI

This extractor works with the Open Ephys “binary” format, which saves data using one file per continuous stream.

https://open-ephys.github.io/gui-docs/User-Manual/Recording-data/Binary-format.html

Based on neo.rawio.OpenEphysBinaryRawIO

Parameters
folder_path: str
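
A minimal usage sketch (illustrative only; the folder path is a placeholder):

from spikeinterface.extractors import OpenEphysBinaryEventExtractor

event = OpenEphysBinaryEventExtractor("/path/to/open_ephys_binary_folder")  # placeholder path
events = event.get_events(channel_id=event.channel_ids[0], segment_index=0)
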
Attributes
channel_ids

Methods

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_event_times([channel_id, segment_index, ...])

Return event timestamps of a channel in seconds.

get_events([channel_id, segment_index, ...])

Return events of a channel in its native structured type.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_event_segment

annotate

check_if_dumpable

delete_property

get_annotation_keys

get_dtype

get_num_channels

get_num_segments

get_property

get_property_keys

id_to_index

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

class spikeinterface.extractors.PlexonRecordingExtractor(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading Plexon .plx files.

Based on neo.rawio.PlexonRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.
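
A minimal usage sketch (illustrative only; the .plx path is a placeholder):

from spikeinterface.extractors import PlexonRecordingExtractor

recording = PlexonRecordingExtractor("/path/to/data.plx")  # placeholder path
print(recording.get_num_channels(), recording.get_sampling_frequency())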

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.Spike2RecordingExtractor(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading Spike2 .smr files. .smrx files are not supported by this extractor; prefer CedRecordingExtractor instead.

Based on neo.rawio.Spike2RawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.
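
A minimal usage sketch (illustrative only; the .smr path is a placeholder):

from spikeinterface.extractors import Spike2RecordingExtractor

recording = Spike2RecordingExtractor("/path/to/data.smr")  # placeholder path
print(recording.get_num_channels(), recording.get_sampling_frequency())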

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.SpikeGadgetsRecordingExtractor(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading *.rec files from SpikeGadgets.

Based on neo.rawio.SpikeGadgetsRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.
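
A minimal usage sketch (illustrative only; the .rec path is a placeholder):

from spikeinterface.extractors import SpikeGadgetsRecordingExtractor

recording = SpikeGadgetsRecordingExtractor("/path/to/data.rec")  # placeholder path
print(recording.get_num_channels(), recording.get_sampling_frequency())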

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (i.e. channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Report whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.SpikeGLXRecordingExtractor(folder_path, load_sync_channel=False, stream_id=None, stream_name=None, all_annotations=False)

Class for reading data saved by SpikeGLX software. See https://billkarsh.github.io/SpikeGLX/

Based on neo.rawio.SpikeGLXRawIO

Contrary to older versions, this reader is folder-based. If the folder contains several streams (‘imec0.ap’, ‘nidq’, ‘imec0.lf’), the stream to load has to be specified with stream_id.

Parameters
folder_path: str

The folder path to load the recordings from.

load_sync_channel: bool, default: False

Whether to load the last channel, which is used for synchronization. If True, the probe is not loaded, because of the extra channel.

stream_id: str, optional

If there are several streams, specify the stream id you want to load. For example, ‘imec0.ap’, ‘nidq’, or ‘imec0.lf’.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.
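
A minimal sketch (illustrative only; the folder path is a placeholder, and the stream id follows the examples given for the stream_id parameter above):

from spikeinterface.extractors import SpikeGLXRecordingExtractor

recording = SpikeGLXRecordingExtractor(
    "/path/to/spikeglx_folder",  # placeholder path
    stream_id="imec0.ap",        # select the AP band of the first probe
)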

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible with some constraints on dtype, time_axis, etc.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Tells whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes for when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

class spikeinterface.extractors.TdtRecordingExtractor(folder_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading a TDT folder.

Based on neo.rawio.TdtRawIO

Parameters
folder_path: str

The folder path to the tdt folder.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary compatible, given some constraints.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate data, properties, and features.

is_binary_compatible()

Tells whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes for when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g. 'group').

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_blocks

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_streams

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

map_to_neo_kwargs

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup

spikeinterface.preprocessing

spikeinterface.preprocessing.bandpass_filter(recording, freq_min=300.0, freq_max=6000.0, margin_ms=5.0, dtype=None, **filter_kwargs)

Bandpass filter of a recording

Parameters
recording: Recording

The recording extractor to be filtered

freq_min: float

The highpass cutoff frequency in Hz

freq_max: float

The lowpass cutoff frequency in Hz

margin_ms: float

Margin in ms on border to avoid border effect

dtype: dtype or None

The dtype of the returned traces. If None, the dtype of the parent recording is used

Returns
filter_recording: BandpassFilterRecording

The bandpass-filtered recording extractor object
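A usage sketch (assuming recording is an existing Recording object):

>>> from spikeinterface.preprocessing import bandpass_filter
>>> # keep the 300-6000 Hz band typically used for spikes
>>> rec_filtered = bandpass_filter(recording, freq_min=300.0, freq_max=6000.0)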

spikeinterface.preprocessing.blank_staturation(recording, abs_threshold=None, quantile_threshold=None, direction='upper', fill_value=None, num_chunks_per_segment=50, chunk_size=500, seed=0)

Find and remove parts of the signal with extreme values. Some arrays may produce these when amplifiers enter saturation, typically for short periods of time. To remove these artefacts, values below or above a threshold are set to the median signal value. The threshold can either be estimated automatically (using the lower and upper 0.1 signal percentiles with the largest deviation from the median) or specified directly. Use this function with caution, as it may clip uncontaminated signals. A warning is printed if the data range suggests no artefacts.

Parameters
recording: RecordingExtractor

The recording extractor to be transformed

TODO
Returns
rescaled_traces: BlankSaturationRecording

The filtered traces recording extractor object

spikeinterface.preprocessing.center(recording, mode='median', dtype='float32', **random_chunk_kwargs)

Centers traces from the given recording extractor by removing the median/mean of each channel.

Parameters
recording: RecordingExtractor

The recording extractor to be centered

mode: str

‘median’ (default) | ‘mean’

dtype: str or np.dtype

The dtype of the output traces. Default “float32”

**random_chunk_kwargs: keyword arguments for `get_random_data_chunks()` function
Returns
centered_traces: ScaleRecording

The centered traces recording extractor object

spikeinterface.preprocessing.clip(recording, a_min=None, a_max=None)

Limit the values of the data between a_min and a_max. Values exceeding the range will be set to the minimum or maximum, respectively.

Parameters
recording: RecordingExtractor

The recording extractor to be transformed

a_min: float or `None` (default `None`)

Minimum value. If None, clipping is not performed on lower interval edge.

a_max: float or `None` (default `None`)

Maximum value. If None, clipping is not performed on upper interval edge.

Returns
rescaled_traces: ClipTracesRecording

The clipped traces recording extractor object

spikeinterface.preprocessing.common_reference(recording, reference='global', operator='median', groups=None, ref_channel_ids=None, local_radius=(30, 55), verbose=False)

Re-references the recording extractor traces.

Parameters
recording: RecordingExtractor

The recording extractor to be re-referenced

reference: str ‘global’, ‘single’ or ‘local’

If ‘global’, the CMR/CAR is applied either by groups or across all channels. If ‘single’, the selected channel(s) are subtracted from all channels; the operator is not used in that case. If ‘local’, an average CMR/CAR is computed per channel, using only the channels selected within an annulus around it (see local_radius)

operator: str ‘median’ or ‘average’
If ‘median’, common median reference (CMR) is implemented (the median of

the selected channels is removed for each timestamp).

If ‘average’, common average reference (CAR) is implemented (the mean of the selected channels is removed

for each timestamp).

groups: list

List of lists containing the channel ids for splitting the reference. The CMR, CAR, or referencing with respect to single channels are applied group-wise. However, this is not applied for the local CAR. It is useful when dealing with different channel groups, e.g. multiple tetrodes.

ref_channel_ids: list or int

If no ‘groups’ are specified, all channels are referenced to ‘ref_channel_ids’. If ‘groups’ is provided, then a list of channels to be applied to each group is expected. If ‘single’ reference, a list of one channel or an int is expected.

local_radius: tuple(int, int)

Used in the local CAR implementation as the selecting annulus (exclusion radius, inclusion radius)

verbose: bool

If True, output is verbose

Returns
referenced_recording: CommonReferenceRecording

The re-referenced recording extractor object
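A sketch of the two common usages (assuming recording exists; the group channel ids are illustrative):

>>> from spikeinterface.preprocessing import common_reference
>>> # global common median reference (CMR)
>>> rec_cmr = common_reference(recording, reference='global', operator='median')
>>> # group-wise referencing, e.g. per tetrode
>>> rec_grouped = common_reference(recording, reference='global', groups=[[0, 1, 2, 3], [4, 5, 6, 7]])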

spikeinterface.preprocessing.filter(recording, band=[300.0, 6000.0], btype='bandpass', filter_order=5, ftype='butter', filter_mode='sos', margin_ms=5.0, coeff=None, dtype=None)
Generic filter class based on:
  • scipy.signal.iirfilter

  • scipy.signal.filtfilt or scipy.signal.sosfilt

BandpassFilterRecording is built on top of it.

Parameters
recording: Recording

The recording extractor to be filtered

band: float or list

If float, cutoff frequency in Hz for ‘highpass’ filter type. If list, band (low, high) in Hz for ‘bandpass’ filter type

btype: str

Type of the filter (‘bandpass’, ‘highpass’)

margin_ms: float

Margin in ms on border to avoid border effect

filter_mode: str ‘sos’ or ‘ba’

Filter form of the filter coefficients: - second-order sections (default): ‘sos’ - numerator/denominator: ‘ba’

coeff: ndarray or None

Filter coefficients in the filter_mode form.

dtype: dtype or None

The dtype of the returned traces. If None, the dtype of the parent recording is used

Returns
filter_recording: FilterRecording

The filtered recording extractor object

spikeinterface.preprocessing.normalize_by_quantile(recording, scale=1.0, median=0.0, q1=0.01, q2=0.99, mode='by_channel', dtype='float32', **random_chunk_kwargs)

Rescale the traces from the given recording extractor with a scalar and an offset. First, the median and quantiles of the distribution are estimated. Then the distribution is rescaled and offset so that the distance between the quantiles (1st and 99th by default) equals the given scale, and the median equals the given median.

Parameters
recording: RecordingExtractor

The recording extractor to be transformed

scale: float

Scale for the output distribution

median: float

Median for the output distribution

q1: float (default 0.01)

Lower quantile used for measuring the scale

q2: float (default 0.99)

Upper quantile used for measuring the scale

seed: int

Random seed for reproducibility

dtype: str or np.dtype

The dtype of the output traces. Default “float32”

**random_chunk_kwargs: keyword arguments for `get_random_data_chunks()` function
Returns
rescaled_traces: NormalizeByQuantileRecording

The rescaled traces recording extractor object

spikeinterface.preprocessing.notch_filter(recording, freq=3000, q=30, margin_ms=5.0, dtype=None)
Parameters
recording: RecordingExtractor

The recording extractor to be notch-filtered

freq: int or float

The target frequency in Hz of the notch filter

q: int

The quality factor of the notch filter

Returns
filter_recording: NotchFilterRecording

The notch-filtered recording extractor object

spikeinterface.preprocessing.rectify(recording)
spikeinterface.preprocessing.remove_artifacts(recording, list_triggers, ms_before=0.5, ms_after=3.0, mode='zeros', fit_sample_spacing=1.0)

Removes stimulation artifacts from recording extractor traces. By default, artifact periods are zeroed-out (mode = ‘zeros’). This is only recommended for traces that are centered around zero (e.g. through a prior highpass filter); if this is not the case, linear and cubic interpolation modes are also available, controlled by the ‘mode’ input argument.

Parameters
recording: RecordingExtractor

The recording extractor to remove artifacts from

list_triggers: list of list

One list per segment of ints with the stimulation trigger frames

ms_before: float or None

Time interval in ms to remove before the trigger events. If None, then also ms_after must be None and a single sample is removed

ms_after: float or None

Time interval in ms to remove after the trigger events. If None, then also ms_before must be None and a single sample is removed

mode: str

Determines what artifacts are replaced by. Can be one of the following:

  • ‘zeros’ (default): Artifacts are replaced by zeros.

  • ‘linear’: Replacements are obtained through linear interpolation between

    the trace before and after the artifact. If the trace starts or ends with an artifact period, the gap is filled with the closest available value before or after the artifact.

  • ‘cubic’: Cubic spline interpolation between the trace before and after

    the artifact, referenced to evenly spaced fit points before and after the artifact. This is an option that can be helpful if there are significant LFP effects around the time of the artifact, but visual inspection of fit behaviour with your chosen settings is recommended. The spacing of fit points is controlled by ‘fit_sample_spacing’, with greater spacing between points leading to a fit that is less sensitive to high-frequency fluctuations but at the cost of a less smooth continuation of the trace. If the trace starts or ends with an artifact, the gap is filled with the closest available value before or after the artifact.

fit_sample_spacing: float

Determines the spacing (in ms) of reference points for the cubic spline fit if mode = ‘cubic’. Default = 1ms. Note: The actual fit samples are the median of the 5 data points around the time of each sample point to avoid excessive influence from hyper-local fluctuations.

Returns
removed_recording: RemoveArtifactsRecording

The recording extractor after artifact removal
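A sketch for a single-segment recording (the trigger frames are illustrative):

>>> from spikeinterface.preprocessing import remove_artifacts
>>> list_triggers = [[10000, 50000, 90000]]  # one list of trigger frames per segment
>>> rec_clean = remove_artifacts(recording, list_triggers, ms_before=0.5, ms_after=3.0, mode='zeros')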

spikeinterface.preprocessing.remove_bad_channels(recording, bad_threshold=5, **random_chunk_kwargs)

Remove bad channels from the recording extractor given a threshold on the standard deviation.

Parameters
recording: RecordingExtractor

The recording extractor object

bad_threshold: float

The threshold on the standard deviation above which channels are removed

**random_chunk_kwargs
Returns
remove_bad_channels_recording: RemoveBadChannelsRecording

The recording extractor without bad channels

spikeinterface.preprocessing.scale(recording, gain=1.0, offset=0.0, dtype='float32')

Scale traces from the given recording extractor with a scalar and an offset. New traces = traces * gain + offset.

Parameters
recording: RecordingExtractor

The recording extractor to be transformed

gain: float or array

Scalar for the traces of the recording extractor or array with scalars for each channel

offset: float or array

Offset for the traces of the recording extractor or array with offsets for each channel

dtype: str or np.dtype

The dtype of the output traces. Default “float32”

Returns
transform_traces: ScaleRecording

The transformed traces recording extractor object
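A sketch applying the traces * gain + offset transform (the gain value is illustrative, e.g. an ADC-to-microvolt factor):

>>> from spikeinterface.preprocessing import scale
>>> rec_scaled = scale(recording, gain=0.195, offset=0.0, dtype='float32')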

spikeinterface.preprocessing.whiten(recording, dtype='float32', **random_chunk_kwargs)

Whitens the recording extractor traces.

Parameters
recording: RecordingExtractor

The recording extractor to be whitened.

**random_chunk_kwargs
Returns
whitened_recording: WhitenRecording

The whitened recording extractor

spikeinterface.postprocessing

spikeinterface.postprocessing.get_template_amplitudes(waveform_extractor, peak_sign: str = 'neg', mode: str = 'extremum')

Get amplitude per channel for each unit.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

mode: str

‘extremum’: max or min; ‘at_index’: take value at spike index

Returns
peak_values: dict

Dictionary with unit ids as keys and template amplitudes as values

spikeinterface.postprocessing.get_template_extremum_channel(waveform_extractor, peak_sign: str = 'neg', mode: str = 'extremum', outputs: str = 'id')

Compute the channel with the extremum peak for each unit.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

mode: str

‘extremum’: max or min; ‘at_index’: take value at spike index

outputs: str
  • ‘id’: channel id

  • ‘index’: channel index

Returns
extremum_channels: dict

Dictionary with unit ids as keys and extremum channels (id or index based on ‘outputs’) as values

spikeinterface.postprocessing.get_template_extremum_channel_peak_shift(waveform_extractor, peak_sign: str = 'neg')

In some situations, spike sorters could return a spike index with a small shift relative to the waveform peak. This function estimates and returns these alignment shifts for the mean template. It is internally used by compute_spike_amplitudes() to accurately retrieve the spike amplitudes.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

Returns
shifts: dict

Dictionary with unit ids as keys and shifts as values

spikeinterface.postprocessing.get_template_extremum_amplitude(waveform_extractor, peak_sign: str = 'neg', mode: str = 'at_index')

Computes amplitudes on the best channel.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

mode: str

Where the amplitude is computed: ‘extremum’: max or min; ‘at_index’: take value at spike index

Returns
amplitudes: dict

Dictionary with unit ids as keys and amplitudes as values

spikeinterface.postprocessing.get_template_channel_sparsity(waveform_extractor, method='best_channels', peak_sign='neg', outputs='id', num_channels=None, radius_um=None, threshold=5, by_property=None)

Get channel sparsity (subset of channels) for each template with several methods.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

method: str
  • “best_channels”: N best channels with the largest amplitude. Use the ‘num_channels’ argument to specify the

    number of channels.

  • “radius”: radius around the best channel. Use the ‘radius_um’ argument to specify the radius in um

  • “threshold”: thresholds based on template signal-to-noise ratio. Use the ‘threshold’ argument

    to specify the SNR threshold.

  • “by_property”: sparsity is given by a property of the recording and sorting (e.g. ‘group’).

    Use the ‘by_property’ argument to specify the property name.

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

outputs: str
  • ‘id’: channel id

  • ‘index’: channel index

num_channels: int

Number of channels for ‘best_channels’ method

radius_um: float

Radius in um for ‘radius’ method

threshold: float

Threshold in SNR ‘threshold’ method

by_property: object

Property name for ‘by_property’ method

Returns
sparsity: dict

Dictionary with unit ids as keys and sparse channel ids or indices (id or index based on ‘outputs’) as values
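A sketch of two sparsity methods (assuming we is an existing WaveformExtractor):

>>> from spikeinterface.postprocessing import get_template_channel_sparsity
>>> # 4 best channels per unit
>>> sparsity_best = get_template_channel_sparsity(we, method='best_channels', num_channels=4)
>>> # all channels within 50 um of the extremum channel
>>> sparsity_radius = get_template_channel_sparsity(we, method='radius', radius_um=50)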

spikeinterface.postprocessing.localize_units(*args, **kwargs)
spikeinterface.postprocessing.get_template_metric_names()
spikeinterface.postprocessing.calculate_template_metrics(*args, **kwargs)
Compute template metrics including:
  • peak_to_valley

  • peak_trough_ratio

  • halfwidth

  • repolarization_slope

  • recovery_slope

Parameters
waveform_extractor: WaveformExtractor, optional

The waveform extractor used to compute template metrics

load_if_exists: bool, optional, default: False

Whether to load precomputed template metrics, if they already exist.

metric_names: list, optional

List of metrics to compute (see si.postprocessing.get_template_metric_names()), by default None

peak_sign: str, optional

“pos” | “neg”, by default ‘neg’

upsampling_factor: int, optional

Upsample factor, by default 10

sparsity: dict or None

Default is sparsity=None and the template metric is computed on the extremum channel only. If given, the dictionary should contain unit ids as keys and a channel id or a list of channel ids as values. For generating a sparsity dict, see the postprocessing.get_template_channel_sparsity() function.

window_slope_ms: float

Window in ms after the positive peak to compute slope, by default 0.7

Returns
template_metrics: pd.DataFrame

Dataframe with the computed template metrics. If ‘sparsity’ is None, the index is the unit_id. If ‘sparsity’ is given, the index is a multi-index (unit_id, channel_id)
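A sketch (assuming we is an existing WaveformExtractor):

>>> from spikeinterface.postprocessing import calculate_template_metrics, get_template_metric_names
>>> print(get_template_metric_names())
>>> tm = calculate_template_metrics(we)  # one row per unit_id by default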

spikeinterface.postprocessing.compute_principal_components(waveform_extractor, load_if_exists=False, n_components=5, mode='by_channel_local', sparsity=None, whiten=True, dtype='float32', n_jobs=1, progress_bar=False)

Compute PC scores from waveform extractor. The PCA projections are pre-computed only on the sampled waveforms available from the WaveformExtractor.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

load_if_exists: bool

If True and pc scores are already in the waveform extractor folders, pc scores are loaded and not recomputed.

n_components: int

Number of components of PCA - default 5

mode: str
  • ‘by_channel_local’: a local PCA is fitted for each channel (projection by channel)

  • ‘by_channel_global’: a global PCA is fitted for all channels (projection by channel)

  • ‘concatenated’: channels are concatenated and a global PCA is fitted

sparsity: dict or None

If given, a dictionary with a list/array of channel ids for each unit id

whiten: bool

If True, waveforms are pre-whitened

dtype: dtype

Dtype of the pc scores (default float32)

n_jobs: int

Number of jobs used to fit the PCA model (if mode is ‘by_channel_local’) - default 1

progress_bar: bool

If True, a progress bar is shown - default False

Returns
pc: WaveformPrincipalComponent

The waveform principal component object

Examples

>>> import spikeinterface as si
>>> from spikeinterface.postprocessing import compute_principal_components
>>> we = si.extract_waveforms(recording, sorting, folder='waveforms_mearec')
>>> pc = compute_principal_components(we, load_if_exists=True, n_components=3, mode='by_channel_local')
>>> # get pre-computed projections for unit_id=1
>>> projections = pc.get_projections(unit_id=1)
>>> # get all pre-computed projections and labels
>>> all_projections, all_labels = pc.get_all_projections()
>>> # retrieve fitted pca model(s)
>>> pca_model = pc.get_pca_model()
>>> # compute projections on new waveforms
>>> proj_new = pc.project_new(new_waveforms)
>>> # run for all spikes in the SortingExtractor
>>> pc.run_for_all_spikes(file_path="all_pca_projections.npy")
spikeinterface.postprocessing.compute_spike_amplitudes(waveform_extractor, load_if_exists=False, peak_sign='neg', return_scaled=True, outputs='concatenated', **job_kwargs)

Computes the spike amplitudes from a WaveformExtractor.

  1. The waveform extractor is used to determine the max channel per unit.

  2. Then a “peak_shift” is estimated because for some sorters the spike index is not always at the peak.

  3. Amplitudes are extracted in chunks (parallel or not)

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor object

load_if_exists: bool, optional, default: False

Whether to load precomputed spike amplitudes, if they already exist.

peak_sign: str
The sign to compute maximum channel:
  • ‘neg’

  • ‘pos’

  • ‘both’

return_scaled: bool

If True and recording has gain_to_uV/offset_to_uV properties, amplitudes are converted to uV.

outputs: str
How the output should be returned:
  • ‘concatenated’

  • ‘by_unit’

Returns
amplitudes: np.array or list of dict
The spike amplitudes.
  • If ‘concatenated’ all amplitudes for all spikes and all units are concatenated

  • If ‘by_unit’, amplitudes are returned as a list (for segments) of dictionaries (for units)
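A sketch (assuming we is an existing WaveformExtractor):

>>> from spikeinterface.postprocessing import compute_spike_amplitudes
>>> # list (one per segment) of dicts unit_id -> amplitude array
>>> amplitudes = compute_spike_amplitudes(we, peak_sign='neg', outputs='by_unit')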

spikeinterface.postprocessing.compute_correlograms(waveform_or_sorting_extractor, load_if_exists=False, window_ms: float = 100.0, bin_ms: float = 5.0, symmetrize=None, method: str = 'auto')

Compute auto and cross correlograms.

Parameters
waveform_or_sorting_extractor: WaveformExtractor or BaseSorting

If WaveformExtractor, the correlograms are saved as WaveformExtensions.

load_if_exists: bool, optional, default: False

Whether to load precomputed crosscorrelograms, if they already exist.

window_ms: float, optional

The window in ms, by default 100.0.

bin_ms: float, optional

The bin size in ms, by default 5.0.

symmetrize: None

Kept for backward compatibility. Always True now.

method: str, optional

“auto” | “numpy” | “numba”. If “auto” and numba is installed, numba is used, by default “auto”

Returns
ccgs: np.array

Correlograms with shape (num_units, num_units, num_bins). The diagonal of ccgs contains the auto correlograms. ccgs[A, B, :] is the mirror of ccgs[B, A, :], and ccgs[A, B, :] has to be read as the histogram of (spike times of A) - (spike times of B)

bins: np.array

The bin edges in ms
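A sketch (assuming sorting is an existing BaseSorting; indexing follows the shape described above):

>>> from spikeinterface.postprocessing import compute_correlograms
>>> ccgs, bins = compute_correlograms(sorting, window_ms=100.0, bin_ms=5.0)
>>> acg_first_unit = ccgs[0, 0, :]  # auto correlogram of the first unit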

spikeinterface.qualitymetrics

spikeinterface.qualitymetrics.compute_quality_metrics(waveform_extractor, load_if_exists=False, metric_names=None, sparsity=None, skip_pc_metrics=False, n_jobs=1, verbose=False, progress_bar=False, **params)

Compute quality metrics on waveform extractor.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor to compute metrics on.

load_if_exists: bool, optional, default: False

Whether to load precomputed quality metrics, if they already exist.

metric_names: list or None

List of quality metrics to compute.

sparsity: dict or None

If given, the sparse channel_ids for each unit in PCA metrics computation. This is used also to identify neighbor units and speed up computations. If None (default) all channels and all units are used for each unit.

skip_pc_metrics: bool

If True, PC metrics computation is skipped.

n_jobs: int

Number of jobs (used for PCA metrics)

verbose: bool

If True, output is verbose.

progress_bar: bool

If True, progress bar is shown.

**params

Keyword arguments for quality metrics.

Returns
metrics: pandas.DataFrame

Data frame with the computed metrics
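A sketch (assuming we is an existing WaveformExtractor; 'snr' is one of the available metric names):

>>> from spikeinterface.qualitymetrics import compute_quality_metrics, get_quality_metric_list
>>> print(get_quality_metric_list())
>>> metrics = compute_quality_metrics(we, metric_names=['snr'])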

spikeinterface.qualitymetrics.get_quality_metric_list()

Get a list of the available quality metrics.

spikeinterface.sorters

spikeinterface.sorters.available_sorters()

Lists available sorters.

spikeinterface.sorters.installed_sorters()

Lists installed sorters.

spikeinterface.sorters.get_default_params(sorter_name_or_class)
spikeinterface.sorters.print_sorter_versions()

Prints the versions of the installed sorters.

spikeinterface.sorters.get_sorter_description(sorter_name_or_class)

Returns a brief description for the specified sorter.

Parameters
sorter_name_or_class: str or SorterClass

The sorter to retrieve description from.

Returns
params_description: dict

Dictionary with parameter description.

spikeinterface.sorters.run_sorter(sorter_name: str, recording: BaseRecording, output_folder: Optional[str] = None, remove_existing_folder: bool = True, delete_output_folder: bool = False, verbose: bool = False, raise_error: bool = True, docker_image: Optional[Union[bool, str]] = False, singularity_image: Optional[Union[bool, str]] = False, with_output: bool = True, **sorter_params)

Generic function to run a sorter via function approach.

Parameters
sorter_name: str

The sorter name

recording: RecordingExtractor

The recording extractor to be spike sorted

output_folder: str or Path

Path to output folder

remove_existing_folder: bool

If True and the output_folder already exists, it is deleted.

delete_output_folder: bool

If True, output folder is deleted (default False)

verbose: bool

If True, output is verbose

raise_error: bool

If True, an error is raised if spike sorting fails (default). If False, the process continues and the error is logged in the log file.

docker_image: bool or str

If True, pull the default docker container for the sorter and run the sorter in that container using docker. Use a str to specify a non-default container. If that container is not local, it will be pulled from Docker Hub. If False, the sorter is run locally.

singularity_image: bool or str

If True, pull the default docker container for the sorter and run the sorter in that container using singularity. Use a str to specify a non-default container. If that container is not local it will be pulled from Docker Hub. If False, the sorter is run locally.

**sorter_params: keyword args

Spike sorter specific arguments (they can be retrieved with ‘get_default_params(sorter_name_or_class)’)

Returns
sorting_extractor: SortingExtractor

The spike sorted data

Examples

>>> sorting = run_sorter("tridesclous", recording)
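A containerized run might look like this (a sketch; 'kilosort2' is an illustrative sorter name, and the corresponding image must be local or pullable from Docker Hub):

>>> sorting_ks2 = run_sorter("kilosort2", recording, docker_image=True)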
spikeinterface.sorters.run_sorters(sorter_list, recording_dict_or_list, working_folder, sorter_params={}, mode_if_folder_exists='raise', engine='loop', engine_kwargs={}, verbose=False, with_output=True, docker_images={}, singularity_images={})

Run several sorters on several recordings.

Parameters
sorter_list: list of str

List of sorter names.

recording_dict_or_list: dict or list

If a dict of recordings, each key should be the name of the recording. If a list, the names are set to recording_0, recording_1, etc.

working_folder: str

The working directory.

sorter_params: dict of dict with sorter_name as key

This allows one to overwrite the default params per sorter.

mode_if_folder_exists: {‘raise’, ‘overwrite’, ‘keep’}
The mode when the subfolder of recording/sorter already exists.
  • ‘raise’ : raise error if subfolder exists

  • ‘overwrite’ : delete and force recompute

  • ‘keep’ : do not compute again if the subfolder exists and the log is OK

engine: {‘loop’, ‘joblib’, ‘dask’}

Which engine to use to run sorter.

engine_kwargs: dict
This contains kwargs specific to the launcher engine:
  • ‘loop’ : no kwargs

  • ‘joblib’ : {‘n_jobs’ : ...} the number of processes

  • ‘dask’ : {‘client’ : ...} the dask client for submitting tasks

verbose: bool

Controls sorter verboseness.

with_output: bool

Whether to return the output.

docker_images: dict

A dictionary {sorter_name : docker_image} to specify if some sorters should use docker images.

singularity_images: dict

A dictionary {sorter_name : singularity_image} to specify if some sorters should use singularity images

Returns
results: dict

The output is nested dict[(rec_name, sorter_name)] of SortingExtractor.
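A sketch (the sorter names, recording name, and folder are illustrative):

>>> from spikeinterface.sorters import run_sorters
>>> results = run_sorters(['tridesclous', 'spykingcircus'], {'rec0': recording}, working_folder='sorting_outputs')
>>> sorting_tdc = results[('rec0', 'tridesclous')]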

Low level

class spikeinterface.sorters.BaseSorter(recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Base Sorter object.

Attributes
SortingExtractor_Class
compiled_name

Methods

check_compiled()

Checks if the sorter is running inside an image with a MATLAB-compiled version

run([raise_error])

Main function kept for backward compatibility.

set_params(sorter_params)

Mimics the old API. This should not be used anymore but still works.

default_params

get_result

get_result_from_folder

get_sorter_version

initialize_folder

is_installed

params_description

run_from_folder

set_params_to_folder

setup_recording

use_gpu

spikeinterface.comparison

spikeinterface.comparison.compare_two_sorters(sorting1, sorting2, sorting1_name=None, sorting2_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, chance_score=0.1, n_jobs=-1, verbose=False)

Compares two spike sorter outputs.

  • Spike trains are matched based on their agreement scores

  • Individual spikes are labelled as true positives (TP), false negatives (FN), false positives 1 (FP from spike train 1), false positives 2 (FP from spike train 2), misclassifications (CL)

It also allows one to get the confusion matrix and the agreement, false positive, and false negative fractions.

Parameters
sorting1: SortingExtractor

The first sorting for the comparison

sorting2: SortingExtractor

The second sorting for the comparison

sorting1_name: str

The name of sorter 1

sorting2_name: str

The name of sorter 2

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

verbose: bool

If True, output is verbose

Returns
sorting_comparison: SortingComparison

The SortingComparison object

spikeinterface.comparison.compare_multiple_sorters(sorting_list, name_list=None, delta_time=0.4, match_score=0.5, chance_score=0.1, n_jobs=-1, spiketrain_mode='union', verbose=False, do_matching=True)

Compares multiple spike sorting outputs based on spike trains.

  • Pair-wise comparisons are made

  • An agreement graph is built based on the agreement score

It allows one to return a consensus-based sorting extractor with the get_agreement_sorting() method.

Parameters
sorting_list: list

List of sorting extractor objects to be compared

name_list: list

List of spike sorter names. If not given, sorters are named as ‘sorter0’, ‘sorter1’, ‘sorter2’, etc.

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

spiketrain_mode: str
Mode to extract agreement spike trains:
  • ‘union’: spike trains are the union between the spike trains of the best matching two sorters

  • ‘intersection’: spike trains are the intersection between the spike trains of the

    best matching two sorters

verbose: bool

if True, output is verbose

Returns
multi_sorting_comparison: MultiSortingComparison

MultiSortingComparison object with the multiple sorter comparison
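A sketch (assuming three existing sorting extractors):

>>> from spikeinterface.comparison import compare_multiple_sorters
>>> msc = compare_multiple_sorters([sorting1, sorting2, sorting3])
>>> # consensus units detected by at least 2 sorters
>>> agr_sorting = msc.get_agreement_sorting(minimum_agreement_count=2)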

spikeinterface.comparison.compare_sorter_to_ground_truth(gt_sorting, tested_sorting, gt_name=None, tested_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, well_detected_score=0.8, redundant_score=0.2, overmerged_score=0.2, chance_score=0.1, exhaustive_gt=False, n_jobs=-1, match_mode='hungarian', compute_labels=False, compute_misclassifications=False, verbose=False)

Compares a sorter to a ground truth.

This class can:
  • compute a “match” between gt_sorting and tested_sorting

  • compute optionally the score label (TP, FN, CL, FP) for each spike

  • count, for each unit of the GT, the total of each label (TP, FN, CL, FP) into a DataFrame GroundTruthComparison.count

  • compute the confusion matrix .get_confusion_matrix()

  • compute some performance metrics with several strategies based on the per-unit count scores

  • count well detected units

  • count false positive detected units

  • count redundant units

  • count overmerged units

  • summarize all this

Parameters
gt_sorting: SortingExtractor

The first sorting for the comparison

tested_sorting: SortingExtractor

The second sorting for the comparison

gt_name: str

The name of sorter 1

tested_name: str

The name of sorter 2

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

redundant_score: float

Agreement score above which units are redundant (default 0.2)

overmerged_score: float

Agreement score above which units can be overmerged (default 0.2)

well_detected_score: float

Agreement score above which units are well detected (default 0.8)

exhaustive_gt: bool (default False)

Tells if the ground truth is “exhaustive” or not. In other words, whether the GT contains all possible units. It allows more performance measurements. For instance, MEArec simulated datasets have exhaustive_gt=True

match_mode: ‘hungarian’, or ‘best’

Which match is used for counting: ‘hungarian’ or ‘best match’.

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

compute_labels: bool

If True, labels are computed at instantiation (default False)

compute_misclassifications: bool

If True, misclassifications are computed at instantiation (default False)

verbose: bool

If True, output is verbose

Returns
sorting_comparison: SortingComparison

The SortingComparison object
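A sketch (assuming gt_sorting and tested_sorting exist):

>>> from spikeinterface.comparison import compare_sorter_to_ground_truth
>>> cmp_gt = compare_sorter_to_ground_truth(gt_sorting, tested_sorting, exhaustive_gt=True)
>>> perf = cmp_gt.get_performance()
>>> cmp_gt.print_summary()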

spikeinterface.comparison.aggregate_performances_table(study_folder, exhaustive_gt=False, **karg_thresh)

Aggregate some results into dataframes to have a “study” overview of all recording X sorter combinations.

Tables are:
  • run_times: run times per recordingXsorter

  • perf_pooled_with_sum: see GroundTruthComparison.get_performance

  • perf_pooled_with_average: see GroundTruthComparison.get_performance

  • count_units: given some thresholds, count how many units are ‘well_detected’, ‘redundant’, ‘false_positive_units’, ‘bad’

Parameters
study_folder: str

The study folder.

karg_thresh: dict

Threshold parameters used for the “count_units” table.

Returns
dataframes: a dict of DataFrame

Returns several useful DataFrames to compare all results. Note that count_units depends on karg_thresh.

class spikeinterface.comparison.GroundTruthComparison(gt_sorting, tested_sorting, gt_name=None, tested_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, well_detected_score=0.8, redundant_score=0.2, overmerged_score=0.2, chance_score=0.1, exhaustive_gt=False, n_jobs=-1, match_mode='hungarian', compute_labels=False, compute_misclassifications=False, verbose=False)

Compares a sorter to a ground truth.

This class can:
  • compute a “match” between gt_sorting and tested_sorting

  • compute optionally the score label (TP, FN, CL, FP) for each spike

  • count, for each unit of the GT, the total of each label (TP, FN, CL, FP) into a DataFrame GroundTruthComparison.count

  • compute the confusion matrix .get_confusion_matrix()

  • compute some performance metrics with several strategies based on the per-unit count scores

  • count well detected units

  • count false positive detected units

  • count redundant units

  • count overmerged units

  • summarize all this

Parameters
gt_sorting: SortingExtractor

The first sorting for the comparison

tested_sorting: SortingExtractor

The second sorting for the comparison

gt_name: str

The name of sorter 1

tested_name: str

The name of sorter 2

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

redundant_score: float

Agreement score above which units are redundant (default 0.2)

overmerged_score: float

Agreement score above which units can be overmerged (default 0.2)

well_detected_score: float

Agreement score above which units are well detected (default 0.8)

exhaustive_gt: bool (default False)

Tells if the ground truth is “exhaustive” or not. In other words, whether the GT contains all possible units. It allows more performance measurements. For instance, MEArec simulated datasets have exhaustive_gt=True

match_mode: ‘hungarian’, or ‘best’

Which match is used for counting: ‘hungarian’ or ‘best match’.

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

compute_labels: bool

If True, labels are computed at instantiation (default False)

compute_misclassifications: bool

If True, misclassifications are computed at instantiation (default False)

verbose: bool

If True, output is verbose

Returns
sorting_comparison: SortingComparison

The SortingComparison object

Attributes
sorting1
sorting1_name
sorting2
sorting2_name

Methods

count_bad_units()

See get_bad_units

count_false_positive_units([redundant_score])

See get_false_positive_units().

count_overmerged_units([overmerged_score])

See get_overmerged_units().

count_redundant_units([redundant_score])

See get_redundant_units().

count_well_detected_units(well_detected_score)

Count how many well detected units.

get_bad_units()

Return units list of "bad units".

get_confusion_matrix()

Computes the confusion matrix.

get_false_positive_units([redundant_score])

Return units list of "false positive units" from tested_sorting.

get_overmerged_units([overmerged_score])

Return "overmerged units"

get_performance([method, output])

Get performance rates with several methods:

get_redundant_units([redundant_score])

Return "redundant units"

get_well_detected_units([well_detected_score])

Return units list of "well detected units" from tested_sorting.

print_performance([method])

Print performance with the selected method

print_summary([well_detected_score, ...])

Print a global performance summary that depends on the context:

get_labels1

get_labels2

get_ordered_agreement_scores

set_frames_and_frequency

count_bad_units()

See get_bad_units

count_false_positive_units(redundant_score=None)

See get_false_positive_units().

count_overmerged_units(overmerged_score=None)

See get_overmerged_units().

count_redundant_units(redundant_score=None)

See get_redundant_units().

count_well_detected_units(well_detected_score)

Count how many well detected units. kwargs are the same as get_well_detected_units.

get_bad_units()

Return units list of “bad units”.

“bad units” are defined as units in tested that are not in the best match list of GT units.

So it is the union of “false positive units” + “redundant units”.

Need exhaustive_gt=True

get_confusion_matrix()

Computes the confusion matrix.

Returns
confusion_matrix: pandas.DataFrame

The confusion matrix

get_false_positive_units(redundant_score=None)

Return units list of “false positive units” from tested_sorting.

“false positive units” are defined as units in tested that are not matched at all in GT units.

Need exhaustive_gt=True

Parameters
redundant_score: float (default 0.2)

The agreement score below which tested units are counted as “false positive” (and not “redundant”).

get_overmerged_units(overmerged_score=None)

Return “overmerged units”

“overmerged units” are defined as units in tested that match more than one GT unit with an agreement score larger than overmerged_score.

Parameters
overmerged_score: float (default 0.4)

Tested units with 2 or more agreement scores above ‘overmerged_score’ are counted as “overmerged”.

get_performance(method='by_unit', output='pandas')
Get performance rates with several methods:
  • ‘raw_count’ : just render the raw count table

  • ‘by_unit’ : render perf as rate unit by unit of the GT

  • ‘pooled_with_average’ : compute rate unit by unit and average

Parameters
method: str

‘by_unit’, or ‘pooled_with_average’

output: str

‘pandas’ or ‘dict’

Returns
perf: pandas dataframe/series (or dict)

dataframe/series (based on ‘output’) with performance entries

get_redundant_units(redundant_score=None)

Return “redundant units”

“redundant units” are defined as units in tested that match a GT unit with a high agreement score but are not the best match. In other words, they correspond to GT units detected twice or more.

Parameters
redundant_score: float (default 0.2)

The agreement score above which tested units are counted as “redundant” (and not “false positive” ).

get_well_detected_units(well_detected_score=None)

Return units list of “well detected units” from tested_sorting.

“well detected units” are defined as units in tested that are well matched to GT units.

Parameters
well_detected_score: float (default 0.8)

The agreement score above which tested units are counted as “well detected”.

print_performance(method='pooled_with_average')

Print performance with the selected method

print_summary(well_detected_score=None, redundant_score=None, overmerged_score=None)
Print a global performance summary that depends on the context:
  • exhaustive= True/False

  • how many gt units (one or several)

This summary mixes several performance metrics.

class spikeinterface.comparison.SymmetricSortingComparison(sorting1, sorting2, sorting1_name=None, sorting2_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, chance_score=0.1, n_jobs=-1, verbose=False)

Compares two spike sorter outputs.

  • Spike trains are matched based on their agreement scores

  • Individual spikes are labelled as true positives (TP), false negatives (FN), false positives 1 (FP from spike train 1), false positives 2 (FP from spike train 2), misclassifications (CL)

It also allows one to get the confusion matrix and the agreement, false positive, and false negative fractions.

Parameters
sorting1: SortingExtractor

The first sorting for the comparison

sorting2: SortingExtractor

The second sorting for the comparison

sorting1_name: str

The name of sorter 1

sorting2_name: str

The name of sorter 2

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

verbose: bool

If True, output is verbose

Returns
sorting_comparison: SortingComparison

The SortingComparison object

Attributes
sorting1
sorting1_name
sorting2
sorting2_name

Methods

get_agreement_fraction

get_best_unit_match1

get_best_unit_match2

get_matching

get_matching_event_count

get_matching_unit_list1

get_matching_unit_list2

get_ordered_agreement_scores

set_frames_and_frequency

get_agreement_fraction(unit1=None, unit2=None)
get_best_unit_match1(unit1)
get_best_unit_match2(unit2)
get_matching()
get_matching_event_count(unit1, unit2)
get_matching_unit_list1(unit1)
get_matching_unit_list2(unit2)
class spikeinterface.comparison.GroundTruthStudy(study_folder=None)

Methods

get_metrics([rec_name])

Load or compute units metrics for a given recording.

get_templates(rec_name[, sorter_name, mode])

Get template for a given recording.

aggregate_count_units

aggregate_dataframes

aggregate_performance_by_unit

aggregate_run_times

compute_metrics

compute_waveforms

concat_all_snr

copy_sortings

create

get_ground_truth

get_recording

get_sorting

get_units_snr

get_waveform_extractor

run_comparisons

run_sorters

scan_folder

aggregate_count_units(well_detected_score=None, redundant_score=None, overmerged_score=None)
aggregate_dataframes(copy_into_folder=True, **karg_thresh)
aggregate_performance_by_unit()
aggregate_run_times()
compute_metrics(rec_name, metric_names=['snr'], ms_before=3.0, ms_after=4.0, max_spikes_per_unit=500, n_jobs=-1, total_memory='1G')
compute_waveforms(rec_name, sorter_name=None, ms_before=3.0, ms_after=4.0, max_spikes_per_unit=500, n_jobs=-1, total_memory='1G')
concat_all_snr()
copy_sortings()
classmethod create(study_folder, gt_dict, **job_kwargs)
get_ground_truth(rec_name=None)
get_metrics(rec_name=None, **metric_kwargs)

Load or compute units metrics for a given recording.

get_recording(rec_name=None)
get_sorting(sort_name, rec_name=None)
get_templates(rec_name, sorter_name=None, mode='median')

Get template for a given recording.

If sorter_name=None, then templates are from the ground truth.

get_units_snr(rec_name=None, **metric_kwargs)
get_waveform_extractor(rec_name, sorter_name=None)
run_comparisons(exhaustive_gt=False, **kwargs)
run_sorters(sorter_list, mode_if_folder_exists='keep', remove_sorter_folders=False, **kwargs)
scan_folder()
class spikeinterface.comparison.MultiSortingComparison(sorting_list, name_list=None, delta_time=0.4, match_score=0.5, chance_score=0.1, n_jobs=-1, spiketrain_mode='union', verbose=False, do_matching=True)

Compares multiple spike sorting outputs based on spike trains.

  • Pair-wise comparisons are made

  • An agreement graph is built based on the agreement score

It allows one to return a consensus-based sorting extractor with the get_agreement_sorting() method.

Parameters
sorting_list: list

List of sorting extractor objects to be compared

name_list: list

List of spike sorter names. If not given, sorters are named as ‘sorter0’, ‘sorter1’, ‘sorter2’, etc.

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

spiketrain_mode: str
Mode to extract agreement spike trains:
  • ‘union’: spike trains are the union between the spike trains of the best matching two sorters

  • ‘intersection’: spike trains are the intersection between the spike trains of the

    best matching two sorters

verbose: bool

if True, output is verbose

Returns
multi_sorting_comparison: MultiSortingComparison

MultiSortingComparison object with the multiple sorter comparison

Attributes
units

Methods

compute_subgraphs()

Computes subgraphs of connected components. Returns sg_object_names (a list of sorter names for each node in the connected component subgraph) and sg_units (a list of unit ids for each node in the connected component subgraph).

get_agreement_sorting([...])

Returns AgreementSortingExtractor with units with a 'minimum_matching' agreement.

load_from_folder

save_to_folder

set_frames_and_frequency

get_agreement_sorting(minimum_agreement_count=1, minimum_agreement_count_only=False)

Returns AgreementSortingExtractor with units with a ‘minimum_matching’ agreement.

Parameters
minimum_agreement_count: int

Minimum number of matches among sorters to include a unit.

minimum_agreement_count_only: bool

If True, only units with agreement == ‘minimum_matching’ are included. If False, units with an agreement >= ‘minimum_matching’ are included.

Returns
agreement_sorting: AgreementSortingExtractor

The output AgreementSortingExtractor

spikeinterface.widgets

spikeinterface.widgets.plot_timeseries(recording, segment_index=None, channel_ids=None, order_channel_by_depth=False, time_range=None, mode='auto', return_scaled=False, cmap='RdBu_r', show_channel_ids=False, color_groups=False, color=None, clim=None, tile_size=512, seconds_per_row=0.2, with_colorbar=True, add_legend=True, backend=None, **backend_kwargs)

Plots recording timeseries.

Parameters
recording: RecordingExtractor or dict or list

The recording extractor object. If dict (or list), then it is a multi-layer display, for instance to compare several processing steps

segment_index: None or int

The segment index.

channel_ids: list

The channel ids to display.

order_channel_by_depth: boolean

Reorder channels by depth.

time_range: list

List with start time and end time

mode: ‘line’ or ‘map’ or ‘auto’
3 possible modes:
  • ‘line’ : classical for low channel count

  • ‘map’ : for high channel count use color heat map

  • ‘auto’ : switch automatically depending on the channel count (‘line’ below 32 channels, ‘map’ otherwise)

return_scaled: bool

If True and recording.has_scaled(), it plots the scaled traces. Default False

cmap: str default ‘RdBu_r’

matplotlib colormap used in mode ‘map’

show_channel_ids: bool

Set yticks with channel ids

color_groups: bool

If True groups are plotted with different colors

color: str default: None

The color used to draw the traces.

clim: None, tuple, or dict

When mode=’map’ this controls the color limits. If dict, keys should be the same as recording keys

with_colorbar: bool default True

When mode=’map’ add colorbar

tile_size: int

For sortingview backend, the size of each tile in the rendered image

seconds_per_row: float

For ‘map’ mode and sortingview backend, seconds to render in each row

Returns
W: TimeseriesWidget

The output widget
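A sketch (assuming recording exists):

>>> from spikeinterface.widgets import plot_timeseries
>>> # first 5 seconds of segment 0 as a color heat map
>>> w = plot_timeseries(recording, segment_index=0, time_range=[0, 5], mode='map')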

spikeinterface.widgets.plot_rasters(*args, **kwargs)

Plots spike train rasters.

Parameters
sorting: SortingExtractor

The sorting extractor object

segment_index: None or int

The segment index.

unit_ids: list

List of unit ids

time_range: list

List with start time and end time

color: matplotlib color

The color to be used

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: RasterWidget

The output widget

spikeinterface.widgets.plot_probe_map(*args, **kwargs)

Plot the probe of a recording.

Parameters
recording: RecordingExtractor

The recording extractor object

channel_ids: list

The channel ids to display

with_channel_ids: bool, default False

Add channel ids text on the probe

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

**plot_probe_kwargs: keyword arguments for probeinterface.plotting.plot_probe_group() function
Returns
W: ProbeMapWidget

The output widget

spikeinterface.widgets.plot_isi_distribution(*args, **kwargs)

Plots spike train ISI distribution.

Parameters
sorting: SortingExtractor

The sorting extractor object

unit_ids: list

List of unit ids

bins_ms: int

Bin size in ms

window_ms: float

Window size in ms

ncols: int

Number of maximum columns (default 5)

axes: list of matplotlib axes

The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored

Returns
W: ISIDistributionWidget

The output widget

spikeinterface.widgets.plot_crosscorrelograms(waveform_or_sorting_extractor: Union[WaveformExtractor, BaseSorting], unit_ids=None, window_ms=100.0, bin_ms=1.0, hide_unit_selector=False, unit_colors=None, backend=None, **backend_kwargs)

Plots unit cross correlograms.

Parameters
waveform_or_sorting_extractor: WaveformExtractor or BaseSorting

The object to compute/get crosscorrelograms from

unit_ids: list

List of unit ids.

window_ms: float

Window for CCGs in ms, by default 100 ms

bin_ms: float

Bin size in ms, by default 1 ms

hide_unit_selector: bool

For sortingview backend, if True the unit selector is not displayed

unit_colors: dict or None

Optional dict of colors for units.
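
A minimal sketch, assuming we is an existing WaveformExtractor (a Sorting object works too):

    import spikeinterface.widgets as sw

    # CCGs for three units, 1 ms bins over a 100 ms window
    w = sw.plot_crosscorrelograms(we, unit_ids=we.sorting.get_unit_ids()[:3],
                                  window_ms=100.0, bin_ms=1.0)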

spikeinterface.widgets.plot_autocorrelograms(*args, **kargs)

Plots unit autocorrelograms.

Parameters
waveform_or_sorting_extractor: WaveformExtractor or BaseSorting

The object to compute/get crosscorrelograms from

unit_ids: list

List of unit ids.

window_ms: float

Window for CCGs in ms, by default 100 ms

bin_ms: float

Bin size in ms, by default 1 ms

hide_unit_selector: bool

For sortingview backend, if True the unit selector is not displayed

unit_colors: dict or None

Optional dict of colors for units.

spikeinterface.widgets.plot_drift_over_time(*args, **kwargs)

Plot “y” (= depth) (or “x”) drift over time. This uses peak detection on each channel and makes a histogram of peak activity over time bins.

Parameters
recording: RecordingExtractor

The recording extractor object

peaks: None or numpy array

Optionally, already detected peaks can be given to avoid recomputation.

detect_peaks_kwargs: None or dict

If peaks is None, the kwargs for the detect_peaks() function.

mode: str ‘heatmap’ or ‘scatter’

plot mode

probe_axis: 0 or 1

Axis of the probe 0=x 1=y

weight_with_amplitudes: bool False by default

If True, peaks are weighted by amplitude

bin_duration_s: float (default 60.)

Bin duration in seconds

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: ProbeMapWidget

The output widget
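
A minimal sketch, assuming recording is an already-loaded Recording; peaks are detected internally when not provided:

    import spikeinterface.widgets as sw

    # Depth drift as a heatmap, 60 s time bins
    w = sw.plot_drift_over_time(recording, mode='heatmap', bin_duration_s=60.)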

spikeinterface.widgets.plot_peak_activity_map(*args, **kwargs)

Plots spike rate (estimated with detect_peaks()) as 2D activity map.

Can be static (bin_duration_s=None) or animated (bin_duration_s=60.)

Parameters
recording: RecordingExtractor

The recording extractor object.

peaks: None or numpy array

Optionally, already detected peaks can be given to avoid recomputation.

detect_peaks_kwargs: None or dict

If peaks is None, the kwargs for the detect_peaks() function.

weight_with_amplitudes: bool False by default

If True, peaks are weighted by amplitude

bin_duration_s: None or float

If None, a static image is shown. If not None, the plot is animated with one frame per bin.

with_contact_color: bool (default True)

Plot rates with contact colors

with_interpolated_map: bool (default True)

Plot rates with interpolated map

with_channel_ids: bool (default False)

Add channel ids text on the probe

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: ProbeMapWidget

The output widget

spikeinterface.widgets.plot_unit_waveforms(waveform_extractor: WaveformExtractor, channel_ids=None, unit_ids=None, plot_waveforms=True, plot_templates=True, plot_channels=False, unit_colors=None, sparsity=None, max_channels=None, radius_um=None, ncols=5, lw_waveforms=1, lw_templates=2, axis_equal=False, unit_selected_waveforms=None, max_spikes_per_unit=50, set_title=True, same_axis=False, x_offset_units=False, alpha_waveforms=0.5, alpha_templates=1, hide_unit_selector=False, plot_legend=True, backend=None, **backend_kwargs)

Plots unit waveforms.

Parameters
waveform_extractor: WaveformExtractor
channel_ids: list

The channel ids to display

unit_ids: list

List of unit ids.

plot_templates: bool

If True, templates are plotted over the waveforms

sparsity: dict or None

If given, the channel sparsity for each unit

radius_um: None or float

If not None, all channels within a circle around the peak waveform are displayed. Ignored if sparsity is provided. Incompatible with max_channels

max_channels: None or int

If not None, only max_channels are displayed per unit. Ignored if sparsity is provided. Incompatible with radius_um

set_title: bool

Create a plot title with the unit number if True.

plot_channels: bool

Plot channel locations below traces.

unit_selected_waveforms: None or dict

A dict whose keys are unit ids and whose values are the subsets of waveform indices that should be displayed (matplotlib backend)

max_spikes_per_unit: int or None

If given and unit_selected_waveforms is None, only max_spikes_per_unit randomly selected waveforms are displayed per unit, default 50 (matplotlib backend)

axis_equal: bool

Equal aspect ratio for x and y axis, to visualize the array geometry to scale.

lw_waveforms: float

Line width for the waveforms, default 1 (matplotlib backend)

lw_templates: float

Line width for the templates, default 2 (matplotlib backend)

unit_colors: None or dict

A dict whose keys are unit ids and whose values are any color format handled by matplotlib. If None, get_unit_colors() is used internally (matplotlib backend)

alpha_waveforms: float

Alpha value for waveforms, default 0.5 (matplotlib backend)

alpha_templates: float

Alpha value for templates, default 1 (matplotlib backend)

hide_unit_selector: bool

For sortingview backend, if True the unit selector is not displayed

same_axis: bool

If True, waveforms and templates are displayed on the same axis, default False (matplotlib backend)

x_offset_units: bool

In case same_axis is True, this parameter allows x-offsetting the waveforms of different units (recommended for a small number of units), default False (matplotlib backend)

plot_legend: bool (default True)

Display legend.
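
A minimal sketch, assuming we is an existing WaveformExtractor:

    import spikeinterface.widgets as sw

    # Waveforms and templates for two units, 50 random spikes per unit
    w = sw.plot_unit_waveforms(we, unit_ids=we.sorting.get_unit_ids()[:2],
                               plot_templates=True, max_spikes_per_unit=50)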

spikeinterface.widgets.plot_unit_templates(*args, **kargs)

Plots unit templates.

Parameters
waveform_extractor: WaveformExtractor
channel_ids: list

The channel ids to display

unit_ids: list

List of unit ids.

plot_templates: bool

If True, templates are plotted over the waveforms

sparsity: dict or None

If given, the channel sparsity for each unit

radius_um: None or float

If not None, all channels within a circle around the peak waveform are displayed. Ignored if sparsity is provided. Incompatible with max_channels

max_channels: None or int

If not None, only max_channels are displayed per unit. Ignored if sparsity is provided. Incompatible with radius_um

set_title: bool

Create a plot title with the unit number if True.

plot_channels: bool

Plot channel locations below traces.

unit_selected_waveforms: None or dict

A dict whose keys are unit ids and whose values are the subsets of waveform indices that should be displayed (matplotlib backend)

max_spikes_per_unit: int or None

If given and unit_selected_waveforms is None, only max_spikes_per_unit randomly selected waveforms are displayed per unit, default 50 (matplotlib backend)

axis_equal: bool

Equal aspect ratio for x and y axis, to visualize the array geometry to scale.

lw_waveforms: float

Line width for the waveforms, default 1 (matplotlib backend)

lw_templates: float

Line width for the templates, default 2 (matplotlib backend)

unit_colors: None or dict

A dict whose keys are unit ids and whose values are any color format handled by matplotlib. If None, get_unit_colors() is used internally (matplotlib backend)

alpha_waveforms: float

Alpha value for waveforms, default 0.5 (matplotlib backend)

alpha_templates: float

Alpha value for templates, default 1 (matplotlib backend)

hide_unit_selector: bool

For sortingview backend, if True the unit selector is not displayed

same_axis: bool

If True, waveforms and templates are displayed on the same axis, default False (matplotlib backend)

x_offset_units: bool

In case same_axis is True, this parameter allows x-offsetting the waveforms of different units (recommended for a small number of units), default False (matplotlib backend)

plot_legend: bool (default True)

Display legend.

spikeinterface.widgets.plot_principal_component(*args, **kwargs)

Plots principal component.

Parameters
waveform_extractor: WaveformExtractor
pc: None or WaveformPrincipalComponent

If None, the principal components are recomputed

spikeinterface.widgets.plot_unit_probe_map(*args, **kwargs)

Plots a unit map. Amplitude is color-coded on the probe contacts.

Can be static (animated=False) or animated (animated=True)

Parameters
waveform_extractor: WaveformExtractor
unit_ids: list

List of unit ids.

channel_ids: list

The channel ids to display

animated: bool

If True, animate the amplitude over time

with_channel_ids: bool (default False)

Add channel ids text on the probe

spikeinterface.widgets.plot_confusion_matrix(*args, **kwargs)

Plots sorting comparison confusion matrix.

Parameters
gt_comparison: GroundTruthComparison

The ground truth sorting comparison object

count_text: bool

If True counts are displayed as text

unit_ticks: bool

If True unit tick labels are displayed

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: ConfusionMatrixWidget

The output widget
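
A minimal sketch, assuming gt_sorting and tested_sorting are already-loaded Sorting objects:

    import spikeinterface.comparison as sc
    import spikeinterface.widgets as sw

    # Build a ground-truth comparison, then display its confusion matrix
    comp = sc.compare_sorter_to_ground_truth(gt_sorting, tested_sorting)
    w = sw.plot_confusion_matrix(comp, count_text=True)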

spikeinterface.widgets.plot_agreement_matrix(*args, **kwargs)

Plots sorting comparison agreement matrix.

Parameters
sorting_comparison: GroundTruthComparison or SymmetricSortingComparison

The sorting comparison object, symmetric or not.

ordered: bool

Order units by best agreement scores. This makes the agreement visible along the diagonal.

count_text: bool

If True counts are displayed as text

unit_ticks: bool

If True unit tick labels are displayed

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

spikeinterface.widgets.plot_multicomp_graph(*args, **kwargs)

Plots multi comparison graph.

Parameters
multi_comparison: BaseMultiComparison

The multi comparison object

draw_labels: bool

If True unit labels are shown

node_cmap: matplotlib colormap

The colormap to be used for the nodes (default ‘viridis’)

edge_cmap: matplotlib colormap

The colormap to be used for the edges (default ‘hot’)

alpha_edges: float

Alpha value for edges

colorbar: bool

If True a colorbar for the edges is plotted

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: MultiCompGraphWidget

The output widget

spikeinterface.widgets.plot_multicomp_agreement(*args, **kwargs)

Plots multi comparison agreement as pie or bar plot.

Parameters
multi_comparison: BaseMultiComparison

The multi comparison object

plot_type: str

‘pie’ or ‘bar’

cmap: matplotlib colormap

The colormap to be used for the nodes (default ‘Reds’)

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: MultiCompGraphWidget

The output widget

spikeinterface.widgets.plot_multicomp_agreement_by_sorter(*args, **kwargs)

Plots multi comparison agreement as pie or bar plot.

Parameters
multi_comparison: BaseMultiComparison

The multi comparison object

plot_type: str

‘pie’ or ‘bar’

cmap: matplotlib colormap

The colormap to be used for the nodes (default ‘Reds’)

axes: list of matplotlib axes

The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored.

show_legend: bool

Show the legend in the last axes (default True).

Returns
W: MultiCompGraphWidget

The output widget

spikeinterface.widgets.plot_comparison_collision_pair_by_pair(*args, **kwargs)

Plots CollisionGTComparison pair by pair.

Parameters
comp: CollisionGTComparison

The collision ground truth comparison object

unit_ids: list

List of considered units

nbins: int

Number of bins

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: MultiCompGraphWidget

The output widget

spikeinterface.widgets.plot_comparison_collision_by_similarity(*args, **kwargs)

Plots CollisionGTComparison pair by pair, ordered by cosine_similarity.

Parameters
comp: CollisionGTComparison

The collision ground truth comparison object

templates: array

Templates of the units

mode: ‘heatmap’ or ‘lines’

Show collision curves for every pair (‘heatmap’) or as lines averaged over pairs (‘lines’).

similarity_bins: array

if mode is ‘lines’, the bins used to average the pairs

cmap: string

colormap used to show averages if mode is ‘lines’

metric: ‘cosine_similarity’

metric for ordering

good_only: bool

If True, keep only the pairs with non-zero accuracy (found templates)

min_accuracy: float

If good_only is True, the minimum accuracy each unit must have, individually, to be considered in a putative pair

unit_ids: list

List of considered units

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

spikeinterface.widgets.plot_sorting_performance(*args, **kwargs)

Plots sorting performance for each ground-truth unit.

Parameters
gt_sorting_comparison: GroundTruthComparison

The ground truth sorting comparison object

property_name: str

The property of the sorting extractor to use as x-axis (e.g. snr). If None, no property is used.

metric: str

The performance metric. ‘accuracy’ (default), ‘precision’, ‘recall’, ‘miss rate’, etc.

markersize: int

The size of the marker

marker: str

The matplotlib marker to use (default ‘.’)

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: SortingPerformanceWidget

The output widget
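
A minimal sketch, assuming comp is a GroundTruthComparison and the tested sorting has an ‘snr’ property:

    import spikeinterface.widgets as sw

    # Accuracy of each ground-truth unit as a function of SNR
    w = sw.plot_sorting_performance(comp, property_name='snr', metric='accuracy')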

spikeinterface.widgets.plot_unit_summary(waveform_extractor, unit_id, unit_colors=None, backend=None, **backend_kwargs)

Plot a unit summary.

If amplitudes have already been computed, they are displayed.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor object

unit_id: int or str

The unit id to plot the summary of

unit_colors: dict or None

If given, a dictionary with unit ids as keys and colors as values
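
A minimal sketch, assuming we is an existing WaveformExtractor:

    import spikeinterface.widgets as sw

    # Summary figure for a single unit
    unit_id = we.sorting.get_unit_ids()[0]
    w = sw.plot_unit_summary(we, unit_id=unit_id)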

spikeinterface.exporters

spikeinterface.exporters.export_to_phy(waveform_extractor, output_folder, compute_pc_features=True, compute_amplitudes=True, sparsity_dict=None, copy_binary=True, max_channels_per_template=16, remove_if_exists=False, peak_sign='neg', template_mode='median', dtype=None, verbose=True, **job_kwargs)

Exports a waveform extractor to the phy template-gui format.

Parameters
waveform_extractor: a WaveformExtractor or None

The waveform extractor object. If a WaveformExtractor is provided, the computation is faster.

output_folder: str

The output folder where the phy template-gui files are saved

compute_pc_features: bool

If True (default), pc features are computed

compute_amplitudes: bool

If True (default), waveforms amplitudes are computed

sparsity_dict: dict or None

If given, the dictionary should contain a sparsity method (e.g. "best_channels") and optionally arguments associated with the method (e.g. "num_channels" for the "best_channels" method). Other examples are:

  • by radius: sparsity_dict=dict(method="radius", radius_um=100)

  • by SNR threshold: sparsity_dict=dict(method="threshold", threshold=2)

  • by property: sparsity_dict=dict(method="by_property", by_property="group")

Default is sparsity_dict=dict(method="best_channels", num_channels=16). For more info, see the postprocessing.get_template_channel_sparsity() function.

max_channels_per_template: int or None

Maximum channels per unit to return. If None, all channels are returned

copy_binary: bool

If True, the recording is copied and saved in the phy ‘output_folder’

remove_if_exists: bool

If True and ‘output_folder’ exists, it is removed and overwritten

peak_sign: ‘neg’, ‘pos’, ‘both’

Used by compute_spike_amplitudes

template_mode: str

Parameter ‘mode’ to be given to WaveformExtractor.get_template()

dtype: dtype or None

Dtype to save binary data

verbose: bool

If True, output is verbose

**job_kwargs: keyword arguments for parallel processing:
  • chunk_duration or chunk_size or chunk_memory or total_memory
    • chunk_size: int

      Number of samples per chunk

    • chunk_memory: str

      Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

    • chunk_duration: str or float or None

      Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

  • mp_context: str or None

    Context for multiprocessing. It can be None (default), “fork” or “spawn”. Note that “fork” is only available on UNIX systems
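
A minimal sketch, assuming we is an existing WaveformExtractor; the sparsity and job kwargs shown are illustrative:

    from spikeinterface.exporters import export_to_phy

    export_to_phy(we, output_folder='phy_export',
                  sparsity_dict=dict(method='best_channels', num_channels=16),
                  copy_binary=True,
                  n_jobs=4, chunk_duration='1s', progress_bar=True)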

spikeinterface.exporters.export_report(waveform_extractor, output_folder, remove_if_exists=False, format='png', show_figures=False, peak_sign='neg', force_computation=False, **job_kwargs)

Exports a SI spike sorting report. The report includes summary figures of the spike sorting output (e.g. amplitude distributions, unit localization and depth VS amplitude) as well as unit-specific reports, that include waveforms, templates, template maps, ISI distributions, and more.

Parameters
waveform_extractor: a WaveformExtractor or None

The waveform extractor object. If a WaveformExtractor is provided, the computation is faster.

output_folder: str

The output folder where the report files are saved

remove_if_exists: bool

If True and the output folder exists, it is removed

format: str

‘png’ (default) or ‘pdf’ or any format handled by matplotlib

peak_sign: ‘neg’ or ‘pos’

used to compute amplitudes and metrics

show_figures: bool

If True, figures are shown. If False (default), figures are closed after saving.

force_computation: bool default False

Whether or not to force heavy computations before exporting.

**job_kwargs: keyword arguments for parallel processing:
  • chunk_duration or chunk_size or chunk_memory or total_memory
    • chunk_size: int

      Number of samples per chunk

    • chunk_memory: str

      Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

    • chunk_duration: str or float or None

      Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

  • mp_context: str or None

    Context for multiprocessing. It can be None (default), “fork” or “spawn”. Note that “fork” is only available on UNIX systems
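
A minimal sketch, assuming we is an existing WaveformExtractor; the output folder name and job kwargs are illustrative:

    from spikeinterface.exporters import export_report

    export_report(we, output_folder='si_report', format='png',
                  n_jobs=4, chunk_duration='1s', progress_bar=True)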

spikeinterface.sortingcomponents

Peak Localization

Sorting components: peak localization.

spikeinterface.sortingcomponents.peak_localization.localize_peaks(recording, peaks, ms_before=1, ms_after=1, method='center_of_mass', method_kwargs={}, **job_kwargs)

Localize peaks (spikes) in 2D or 3D, depending on the method.

When a probe is 2D then:
  • X is axis 0 of the probe

  • Y is axis 1 of the probe

  • Z is orthogonal to the plane of the probe

Parameters
recording: RecordingExtractor

The recording extractor object.

peaks: array

Peaks array, as returned by detect_peaks() in “compact_numpy” way.

ms_before: float

The left window, before a peak, in milliseconds.

ms_after: float

The right window, after a peak, in milliseconds.

method: ‘center_of_mass’ or ‘monopolar_triangulation’

Method to use.

method_kwargs: dict of kwargs method
Keyword arguments for the chosen method:
‘center_of_mass’:
  • local_radius_um: float

    For channel sparsity.

‘monopolar_triangulation’:
  • local_radius_um: float

    For channel sparsity.

  • max_distance_um: float, default: 1000

    Boundary for distance estimation.

  • enforce_decrease: None or “radial”

    Whether and how to enforce spatial decrease of the PTP vectors.

**job_kwargs: keyword arguments for parallel processing:
  • chunk_duration or chunk_size or chunk_memory or total_memory
    • chunk_size: int

      Number of samples per chunk

    • chunk_memory: str

      Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

    • chunk_duration: str or float or None

      Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

  • mp_context: str or None

    Context for multiprocessing. It can be None (default), “fork” or “spawn”. Note that “fork” is only available on UNIX systems

Returns
peak_locations: ndarray

Array with estimated location for each spike. The dtype depends on the method. (‘x’, ‘y’) or (‘x’, ‘y’, ‘z’, ‘alpha’).
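
A minimal sketch, assuming recording is an already-loaded Recording and peaks was returned by detect_peaks() (documented below); the radius value is illustrative:

    from spikeinterface.sortingcomponents.peak_localization import localize_peaks

    peak_locations = localize_peaks(recording, peaks,
                                    method='monopolar_triangulation',
                                    method_kwargs=dict(local_radius_um=75.0),
                                    n_jobs=4, chunk_duration='1s', progress_bar=True)
    # Structured array with fields ('x', 'y', 'z', 'alpha') for this method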

Peak Detection

Sorting components: peak detection.

spikeinterface.sortingcomponents.peak_detection.detect_peaks(recording, method='by_channel', peak_sign='neg', detect_threshold=5, exclude_sweep_ms=0.1, local_radius_um=50, noise_levels=None, random_chunk_kwargs={}, pipeline_steps=None, outputs='numpy_compact', **job_kwargs)

Peak detection based on threshold crossing, in terms of k x MAD.

Parameters
recording: RecordingExtractor

The recording extractor object.

method: ‘by_channel’, ‘locally_exclusive’
Method to use. Options:
  • ‘by_channel’ : peaks are detected on each channel independently

  • ‘locally_exclusive’ : a single best peak is taken from a set of neighboring channels

peak_sign: ‘neg’, ‘pos’, ‘both’

Sign of the peak.

detect_threshold: float

Threshold, in median absolute deviations (MAD), to use to detect peaks.

exclude_sweep_ms: float or None

Time, in ms, during which the peak is isolated. This parameter is mutually exclusive with exclude_sweep_size. For example, if exclude_sweep_ms is 0.1, a peak is detected if a sample crosses the threshold and no larger peaks occur during the 0.1 ms preceding and following it.

local_radius_um: float

The radius to use for detection across local channels.

noise_levels: array, optional

Estimated noise levels to use, if already computed. If not provided, they are estimated from a random snippet of the data.

random_chunk_kwargs: dict, optional

A dict of options for randomizing the chunks passed to get_noise_levels(). Only used if noise_levels is None.

pipeline_steps: None or list[PeakPipelineStep]

Optional additional PeakPipelineStep objects to compute right after detection. This avoids reading the recording multiple times.

**job_kwargs: keyword arguments for parallel processing:
  • chunk_duration or chunk_size or chunk_memory or total_memory
    • chunk_size: int

      Number of samples per chunk

    • chunk_memory: str

      Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

    • chunk_duration: str or float or None

      Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

  • mp_context: str or None

    Context for multiprocessing. It can be None (default), “fork” or “spawn”. Note that “fork” is only available on UNIX systems

Returns
peaks: array

Detected peaks.

Notes

This peak detection was ported from tridesclous into spikeinterface.
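
A minimal sketch, assuming recording is an already-loaded Recording; threshold and job kwargs are illustrative:

    from spikeinterface.sortingcomponents.peak_detection import detect_peaks

    peaks = detect_peaks(recording, method='by_channel',
                         peak_sign='neg', detect_threshold=5, exclude_sweep_ms=0.1,
                         n_jobs=4, chunk_duration='1s', progress_bar=True)
    # method='locally_exclusive' additionally groups neighboring channels (requires numba)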

Motion Correction

class spikeinterface.sortingcomponents.motion_correction.CorrectMotionRecording(recording, motion, temporal_bins, spatial_bins, direction=1)

Recording that corrects motion on-the-fly, given a rigid or non-rigid motion vector estimation. Internally, for every time bin, an inverse-distance-weighted interpolation is applied to the original traces after reversing the motion. estimate_motion() must be called beforehand to obtain the motion vector. A usage sketch follows the methods list below.

Parameters
recording: Recording

The parent recording.

motion: np.array 2D

motion.shape[0] must equal temporal_bins.shape[0]. motion.shape[1] equals 1 for “rigid” motion, and equals spatial_bins.shape[0] for non-rigid motion.

temporal_bins: np.array

Temporal bins in seconds.

spatial_bins: None or np.array

Bins for non-rigid motion. If None, rigid motion is used

direction: int in (0, 1, 2)

Dimension of shift in channel_locations.

Returns
Corrected_recording: CorrectMotionRecording

Recording after motion correction

Attributes
channel_ids
dtype
sampling_frequency

Methods

binary_compatible_with([dtype, time_axis, ...])

Check if the recording is binary-compatible, with some constraints.

channel_slice(channel_ids[, renamed_channel_ids])

Returns a new object with sliced channels.

clone()

Clones an existing extractor into a new instance.

copy_metadata(other[, only_main, ids])

Copy annotations/properties/features to another extractor.

dump(file_path[, relative_to, folder_metadata])

Dumps extractor to json or pickle

dump_to_json([file_path, relative_to, ...])

Dump recording extractor to json file.

dump_to_pickle([file_path, ...])

Dump recording extractor to a pickle file.

frame_slice(start_frame, end_frame)

Returns a new object with sliced frames.

from_dict(d[, base_folder])

Instantiate extractor from dictionary

get_annotation(key[, copy])

Get an annotation.

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_preferred_mp_context()

Get the preferred context for multiprocessing.

get_times([segment_index])

Get time vector for a recording segment.

get_traces([segment_index, start_frame, ...])

Returns traces from recording.

has_time_vector([segment_index])

Check if the segment of the recording has a time vector.

ids_to_indices(ids[, prefer_slice])

Transform an ids list (aka channel_ids or unit_ids) into an indices array. Useful to manipulate: * data * properties * features.

is_binary_compatible()

Inform whether this recording is "binary" compatible.

load(file_path[, base_folder])

Load extractor from file path (.json or .pkl)

planarize([axes])

Returns a Recording with a 2D probe from one with a 3D probe

remove_channels(remove_channel_ids)

Returns a new object with removed channels.

save(**kwargs)

Save a SpikeInterface object.

save_to_folder([name, folder, verbose])

Save extractor to folder.

save_to_zarr([name, zarr_path, ...])

Save extractor to zarr.

select_segments(segment_indices)

Return a new object with the segments specified by 'segment_indices'.

set_annotation(annotation_key, value[, ...])

This function adds an entry to the annotations dictionary.

set_dummy_probe_from_locations(locations[, ...])

Sets a 'dummy' probe based on locations.

set_probe(probe[, group_mode, in_place])

Wrapper on top of set_probes when there is one unique probe.

set_probes(probe_or_probegroup[, ...])

Attach a Probe to a recording.

set_property(key, values[, ids, missing_value])

Set property vector for main ids.

set_times(times[, segment_index, with_warning])

Set times for a recording segment.

split_by([property, outputs])

Splits object based on a certain property (e.g.

to_dict([include_annotations, ...])

Make a nested serialized dictionary out of the extractor.

add_recording_segment

annotate

check_if_dumpable

clear_channel_groups

clear_channel_locations

delete_property

get_annotation_keys

get_channel_gains

get_channel_groups

get_channel_ids

get_channel_locations

get_channel_offsets

get_channel_property

get_dtype

get_num_channels

get_num_frames

get_num_samples

get_num_segments

get_probe

get_probegroup

get_probes

get_property

get_property_keys

get_sampling_frequency

get_total_duration

get_total_samples

has_3d_locations

has_scaled

has_scaled_traces

id_to_index

is_filtered

load_from_folder

load_metadata_from_folder

save_metadata_to_folder

save_to_memory

set_channel_gains

set_channel_groups

set_channel_locations

set_channel_offsets

set_probegroup
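
A minimal sketch, assuming motion, temporal_bins and spatial_bins were obtained beforehand from estimate_motion() (not documented here):

    from spikeinterface.sortingcomponents.motion_correction import CorrectMotionRecording

    rec_corrected = CorrectMotionRecording(recording, motion,
                                           temporal_bins, spatial_bins,
                                           direction=1)  # shift along axis 1 (depth)
    # Behaves like any Recording: traces are interpolated on the fly
    traces = rec_corrected.get_traces(segment_index=0, start_frame=0, end_frame=30000)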

Clustering

spikeinterface.sortingcomponents.clustering.find_cluster_from_peaks(recording, peaks, method='stupid', method_kwargs={}, extra_outputs=False, **job_kwargs)

Find clusters from peaks.

Parameters
recording: RecordingExtractor

The recording extractor object

peaks: array

Peaks array, as returned by detect_peaks().

method: str

Which method to use (‘stupid’ | ‘XXXX’)

method_kwargs: dict, optional

Keyword arguments for the chosen method

extra_outputs: bool

If True, debug information is also returned

Returns
labels: ndarray of int

List of possible cluster labels

peak_labels: array of int

peak_labels.shape[0] == peaks.shape[0]
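
A minimal sketch, assuming recording and peaks (from detect_peaks()) already exist; the method name follows the signature above:

    from spikeinterface.sortingcomponents.clustering import find_cluster_from_peaks

    labels, peak_labels = find_cluster_from_peaks(recording, peaks, method='stupid')
    # peak_labels has one entry per detected peak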

Template Matching

spikeinterface.sortingcomponents.matching.find_spikes_from_templates(recording, method='naive', method_kwargs={}, extra_outputs=False, **job_kwargs)

Find spikes in a recording from given templates.

Parameters
recording: RecordingExtractor

The recording extractor object

waveform_extractor: WaveformExtractor

The waveform extractor

method: str

Which method to use (‘naive’ | ‘tridesclous’ | ‘circus’)

method_kwargs: dict, optional

Keyword arguments for the chosen method

extra_outputs: bool

If True, method_kwargs is also returned

job_kwargs: dict

Parameters for ChunkRecordingExecutor

Returns
spikes: ndarray

Spikes found from templates.

method_kwargs:

Optionally returned, for debugging purposes.

Notes
Templates are represented as a WaveformExtractor, so statistics can be extracted.
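
A minimal sketch, assuming recording is an already-loaded Recording and we is the WaveformExtractor holding the templates; passing it through method_kwargs is an assumption based on the Notes above:

    from spikeinterface.sortingcomponents.matching import find_spikes_from_templates

    spikes = find_spikes_from_templates(recording, method='naive',
                                        method_kwargs=dict(waveform_extractor=we),
                                        n_jobs=4, chunk_duration='1s', progress_bar=True)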