API

spikeinterface.core

spikeinterface.core.load_extractor(file_or_folder_or_dict, base_folder=None)
Instantiate extractor from:
  • a dict

  • a json file

  • a pickle file

  • folder (after save)

Parameters
file_or_folder_or_dict: dictionary or folder or file (json, pickle)
Returns
extractor: Recording or Sorting

The loaded extractor object
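
A minimal usage sketch (the folder name is a placeholder for a folder previously created with recording.save()):

>>> from spikeinterface.core import load_extractor
>>> recording = load_extractor("my_saved_recording_folder")
>>> # the same call also accepts a dict, a .json file, or a .pkl file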

class spikeinterface.core.BaseRecording(sampling_frequency: float, channel_ids: List, dtype)

Abstract class representing several segments of a multichannel timeseries (or block of raw ephys traces). Internally handles a list of RecordingSegment objects.

add_recording_segment(recording_segment)

Adds a recording segment.

Parameters
recording_segment: BaseRecordingSegment

The recording segment to add

binary_compatible_with(dtype=None, time_axis=None, file_paths_lenght=None, file_offset=None, file_suffix=None)

Check if the recording is binary compatible with some constraints on:

  • dtype

  • time_axis

  • len(file_paths)

  • file_offset

  • file_suffix

get_binary_description()

When rec.is_binary_compatible() is True this returns a dictionary describing the binary format.

get_num_frames(segment_index=None)

Returns the number of samples for a segment.

Parameters
segment_index: int, optional

The segment index to retrieve the number of samples for. For multi-segment objects, it is required, by default None

Returns
int

The number of samples

get_num_samples(segment_index=None)

Returns the number of samples for a segment.

Parameters
segment_index: int, optional

The segment index to retrieve the number of samples for. For multi-segment objects, it is required, by default None

Returns
int

The number of samples

get_num_segments()

Returns the number of segments.

Returns
int

Number of segments in the recording

get_times(segment_index=None)

Get time vector for a recording segment.

If the segment has a time_vector, then it is returned. Otherwise a time_vector is constructed on the fly with sampling frequency. If t_start is defined and the time vector is constructed on the fly, the first time will be t_start. Otherwise it will start from 0.

Parameters
segment_index: int, optional

The segment index (required for multi-segment), by default None

Returns
np.array

The 1d times array

get_total_duration()

Returns the total duration in s

Returns
float

The duration in seconds

get_total_samples()

Returns the total number of samples

Returns
int

The total number of samples

get_traces(segment_index: Optional[int] = None, start_frame: Optional[int] = None, end_frame: Optional[int] = None, channel_ids: Optional[Iterable] = None, order: Optional[str] = None, return_scaled=False, cast_unsigned=False)

Returns traces from recording.

Parameters
segment_index: Union[int, None], optional

The segment index to get traces from. If recording is multi-segment, it is required, by default None

start_frame: Union[int, None], optional

The start frame. If None, 0 is used, by default None

end_frame: Union[int, None], optional

The end frame. If None, the number of samples in the segment is used, by default None

channel_ids: Union[Iterable, None], optional

The channel ids. If None, all channels are used, by default None

order: Union[str, None], optional

The order of the traces (“C” | “F”). If None, traces are returned as they are, by default None

return_scaled: bool, optional

If True and the recording has scaling (gain_to_uV and offset_to_uV properties), traces are scaled to uV, by default False

cast_unsigned: bool, optional

If True and the traces are unsigned, they are cast to integer and centered (an offset of (2**nbits) is subtracted), by default False

Returns
np.array

The traces (num_samples, num_channels)

Raises
ValueError

If return_scaled is True, but recording does not have scaled traces
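
A minimal sketch of a typical call (recording is assumed to be an existing BaseRecording; return_scaled=True requires gain_to_uV/offset_to_uV to be set):

>>> fs = recording.get_sampling_frequency()
>>> traces = recording.get_traces(segment_index=0, start_frame=0, end_frame=int(fs), return_scaled=True)
>>> traces.shape  # (num_samples, num_channels)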

has_scaled_traces()

Checks if the recording has scaled traces

Returns
bool

True if the recording has scaled traces, False otherwise

has_time_vector(segment_index=None)

Check if the segment of the recording has a time vector.

Parameters
segment_index: int, optional

The segment index (required for multi-segment), by default None

Returns
bool

True if the recording has time vectors, False otherwise

is_binary_compatible()

Tells whether this recording is “binary” compatible. To be used before calling rec.get_binary_description()

Returns
bool

True if the underlying recording is binary

set_times(times, segment_index=None, with_warning=True)

Set times for a recording segment.

Parameters
times: 1d np.array

The time vector

segment_index: int, optional

The segment index (required for multi-segment), by default None

with_warning: bool, optional

If True, a warning is printed, by default True

class spikeinterface.core.BaseSorting(sampling_frequency: float, unit_ids: List)

Abstract class representing several segments of several units and their spike trains.

get_all_spike_trains(outputs='unit_id')

Return all spike trains concatenated

get_times(segment_index=None)

Get time vector for a registered recording segment.

If a recording is registered:
  • if the segment has a time_vector, then it is returned

  • if not, a time_vector is constructed on the fly with sampling frequency

If there is no registered recording it returns None

get_total_num_spikes()

Get total number of spikes for each unit across segments.

Returns
dict

Dictionary with unit_ids as key and number of spikes as values

has_time_vector(segment_index=None)

Check if the segment of the registered recording has a time vector.

remove_empty_units()

Removes units with empty spike trains

Returns
BaseSorting

Sorting object with non-empty units

remove_units(remove_unit_ids)

Removes a subset of units

Parameters
remove_unit_ids: numpy.array or list

List of unit ids to remove

Returns
BaseSorting

Sorting object without removed units

select_units(unit_ids, renamed_unit_ids=None)

Selects a subset of units

Parameters
unit_ids: numpy.array or list

List of unit ids to keep

renamed_unit_ids: numpy.array or list, optional

If given, the kept unit ids are renamed, by default None

Returns
BaseSorting

Sorting object with selected units

to_spike_vector(extremum_channel_inds=None)

Construct a unique structured numpy vector concatenating all spikes with several fields: sample_ind, unit_index, segment_index.

See also get_all_spike_trains()

Parameters
extremum_channel_inds: None or dict

If a dictionary of unit_id to channel_ind is given, then an extra field ‘channel_ind’ is added. This can be convenient for computing spike positions after sorting.

This dict can be computed with get_template_extremum_channel(we, outputs=”index”)

Returns
spikes: np.array

Structured numpy array (‘sample_ind’, ‘unit_index’, ‘segment_index’) with all spikes, or (‘sample_ind’, ‘unit_index’, ‘segment_index’, ‘channel_ind’) if extremum_channel_inds is given
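
A minimal sketch (sorting and the WaveformExtractor we are assumed to already exist):

>>> from spikeinterface.core import get_template_extremum_channel
>>> extremum_channel_inds = get_template_extremum_channel(we, outputs="index")
>>> spikes = sorting.to_spike_vector(extremum_channel_inds=extremum_channel_inds)
>>> spikes["sample_ind"], spikes["unit_index"], spikes["channel_ind"]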

class spikeinterface.core.BaseSnippets(sampling_frequency: float, nbefore: Optional[int], snippet_len: int, channel_ids: List, dtype)

Abstract class representing several multichannel snippets.

class spikeinterface.core.BaseEvent(channel_ids, structured_dtype)

Abstract class representing events.

Parameters
channel_ids: list or np.array

The channel ids

structured_dtype: dtype or dict

The dtype of the events. If dict, each key is the channel_id and values must be the dtype of the channel (also structured). If dtype, each channel is assigned the same dtype. In case of structured dtypes, the “time” or “timestamp” field name must be present.

get_event_times(channel_id=None, segment_index=None, start_time=None, end_time=None)

Return events timestamps of a channel in seconds.

Parameters
channel_id: int or str, optional

The event channel id, by default None

segment_index: int, optional

The segment index, required for multi-segment objects, by default None

start_time: float, optional

The start time in seconds, by default None

end_time: float, optional

The end time in seconds, by default None

Returns
np.array

1d array of timestamps for the event channel

get_events(channel_id=None, segment_index=None, start_time=None, end_time=None)

Return events of a channel in its native structured type.

Parameters
channel_id: int or str, optional

The event channel id, by default None

segment_index: int, optional

The segment index, required for multi-segment objects, by default None

start_time: float, optional

The start time in seconds, by default None

end_time: float, optional

The end time in seconds, by default None

Returns
np.array

Structured np.array of dtype get_dtype(channel_id)

class spikeinterface.core.WaveformExtractor(recording, sorting, folder=None, rec_attributes=None, allow_unfiltered=False, sparsity=None)

Class to extract waveforms from paired Recording-Sorting objects. Waveforms are persistent on disk and cached in memory.

Parameters
recording: Recording

The recording object

sorting: Sorting

The sorting object

folder: Path

The folder where waveforms are cached

rec_attributes: None or dict

When recording is None then a minimal dict with some attributes is needed.

allow_unfiltered: bool

If True, an unfiltered recording is accepted. False by default.

Returns
we: WaveformExtractor

The WaveformExtractor object

Examples

>>> # Instantiate
>>> we = WaveformExtractor.create(recording, sorting, folder)
>>> # Compute
>>> we = we.set_params(...)
>>> we = we.run_extract_waveforms(...)
>>> # Retrieve
>>> waveforms = we.get_waveforms(unit_id)
>>> template = we.get_template(unit_id, mode='median')
>>> # Load  from folder (in another session)
>>> we = WaveformExtractor.load(folder)
delete_extension(extension_name)

Deletes an existing extension.

Parameters
extension_name: str

The extension name.

delete_waveforms()

Deletes waveforms folder.

get_all_templates(unit_ids=None, mode='average')

Return templates (average waveform) for multiple units.

Parameters
unit_ids: list or None

Unit ids to retrieve waveforms for

mode: str

‘average’ (default), ‘median’, or ‘std’

Returns
templates: np.array

The returned templates (num_units, num_samples, num_channels)

get_available_extension_names()

Return a list of loaded or available extension names either in memory or in persistent extension folders. Then instances can be loaded with we.load_extension(extension_name)

Important note: extension modules need to be loaded (and so registered) before this call, otherwise extensions will be ignored even if the folder exists.

Returns
extension_names_in_folder: list

A list of names of computed extensions in this folder

get_extension_class(extension_name)

Get extension class from name and check if registered.

Parameters
extension_name: str

The extension name.

Returns
ext_class:

The class of the extension.

get_sampled_indices(unit_id)

Return sampled spike indices of extracted waveforms

Parameters
unit_id: int or str

Unit id to retrieve indices for

Returns
sampled_indices: np.array

The sampled indices

get_template(unit_id, mode='average', sparsity=None)

Return template (average waveform).

Parameters
unit_id: int or str

Unit id to retrieve waveforms for

mode: str

‘average’ (default), ‘median’, or ‘std’ (standard deviation)

sparsity: ChannelSparsity, optional

Sparsity to apply to the waveforms (if WaveformExtractor is not sparse)

Returns
template: np.array

The returned template (num_samples, num_channels)

get_template_segment(unit_id, segment_index, mode='average', sparsity=None)

Return template for the specified unit id computed from waveforms of a specific segment.

Parameters
unit_id: int or str

Unit id to retrieve waveforms for

segment_index: int

The segment index to retrieve template from

mode: str

‘average’ (default), ‘median’, or ‘std’ (standard deviation)

sparsity: ChannelSparsity, optional

Sparsity to apply to the waveforms (if WaveformExtractor is not sparse).

Returns
template: np.array

The returned template (num_samples, num_channels)

get_waveforms(unit_id, with_index=False, cache=False, lazy=True, sparsity=None)

Return waveforms for the specified unit id.

Parameters
unit_id: int or str

Unit id to retrieve waveforms for

with_index: bool

If True, spike indices of extracted waveforms are returned (default False)

cache: bool

If True, waveforms are cached to the self._waveforms dictionary (default False)

lazy: bool

If True, waveforms are loaded as memmap objects (when format=”binary”) or Zarr datasets (when format=”zarr”). If False, waveforms are loaded as np.array objects (default True)

sparsity: ChannelSparsity, optional

Sparsity to apply to the waveforms (if WaveformExtractor is not sparse)

Returns
wfs: np.array

The returned waveform (num_spikes, num_samples, num_channels)

indices: np.array

If ‘with_index’ is True, the spike indices corresponding to the waveforms extracted

get_waveforms_segment(segment_index, unit_id, sparsity)

Return waveforms from a specified segment and unit_id.

Parameters
segment_index: int

The segment index to retrieve waveforms from

unit_id: int or str

Unit id to retrieve waveforms for

sparsity: ChannelSparsity, optional

Sparsity to apply to the waveforms (if WaveformExtractor is not sparse)

Returns
wfs: np.array

The returned waveform (num_spikes, num_samples, num_channels)

is_extension(extension_name)

Check if the extension exists in memory or in the folder.

Parameters
extension_name: str

The extension name.

Returns
exists: bool

Whether the extension exists or not

load_extension(extension_name)

Load an extension from its name. The module of the extension must be loaded and registered.

Parameters
extension_name: str

The extension name.

Returns
ext_instance:

The loaded instance of the extension

precompute_templates(modes=('average', 'std'))
Precompute all templates for the different “modes”:
  • average

  • std

  • median

The results are cached in memory as a 3d ndarray (nunits, nsamples, nchans) and also saved as npy files in the folder to avoid recomputation.

classmethod register_extension(extension_class)

This maintains a list of possible extensions that are available. It depends on the imported submodules (e.g. for postprocessing module).

For instance:

>>> import spikeinterface as si
>>> si.WaveformExtractor.extensions == []
>>> from spikeinterface.postprocessing import WaveformPrincipalComponent
>>> si.WaveformExtractor.extensions == [WaveformPrincipalComponent, ...]

save(folder, format='binary', use_relative_path=False, overwrite=False, sparsity=None, **kwargs)

Save WaveformExtractor object to disk.

Parameters
folder: str or Path

The output waveform folder

format: str, optional

“binary”, “zarr”, by default “binary”

overwrite: bool

If True and folder exists, it is deleted, by default False

use_relative_path: bool, optional

If True, the recording and sorting paths are relative to the waveforms folder. This allows portability of the waveform folder provided that the relative paths are the same, but forces all the data files to be in the same drive, by default False

sparsity: ChannelSparsity, optional

If given and WaveformExtractor is not sparse, it makes the returned WaveformExtractor sparse

select_units(unit_ids, new_folder=None, use_relative_path=False)

Filters units by creating a new waveform extractor object in a new folder.

Extensions are also updated to filter the selected unit ids.

Parameters
unit_ids: list or array

The unit ids to keep in the new WaveformExtractor object

new_folder: Path or None

The new folder where selected waveforms are copied

Returns
we: WaveformExtractor

The newly created waveform extractor with the selected units
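
A minimal sketch (we is an existing WaveformExtractor; the unit ids and folder name are placeholders):

>>> we_sub = we.select_units(unit_ids=[0, 1, 2], new_folder="waveforms_subset")
>>> we_sub.unit_ids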

set_params(ms_before=1.0, ms_after=2.0, max_spikes_per_unit=500, return_scaled=False, dtype=None)

Set parameters for waveform extraction

Parameters
ms_before: float

Cut out in ms before spike time

ms_after: float

Cut out in ms after spike time

max_spikes_per_unit: int

Maximum number of spikes to extract per unit

return_scaled: bool

If True and recording has gain_to_uV/offset_to_uV properties, waveforms are converted to uV.

dtype: np.dtype

The dtype of the computed waveforms

spikeinterface.core.extract_waveforms(recording, sorting, folder=None, mode='folder', precompute_template=('average',), ms_before=3.0, ms_after=4.0, max_spikes_per_unit=500, overwrite=False, return_scaled=True, dtype=None, sparse=False, num_spikes_for_sparsity=100, allow_unfiltered=False, use_relative_path=False, seed=None, load_if_exists=None, **kwargs)

Extracts waveforms from paired Recording-Sorting objects. Waveforms can be persistent on disk (mode=”folder”) or in-memory (mode=”memory”). By default, waveforms are extracted on a subset of the spikes (max_spikes_per_unit) and on all channels (dense). If the sparse parameter is set to True, a sparsity is estimated using a small number of spikes (num_spikes_for_sparsity) and waveforms are extracted and saved in sparse mode.

Parameters
recording: Recording

The recording object

sorting: Sorting

The sorting object

folder: str or Path or None

The folder where waveforms are cached

mode: str

“folder” (default) or “memory”. The “folder” argument must be specified in case of mode “folder”. If “memory” is used, the waveforms are stored in RAM. Use this option carefully!

precompute_template: None or list

Precompute average/std/median for templates. If None, no templates are precomputed.

ms_before: float

Time in ms to cut before spike peak

ms_after: float

Time in ms to cut after spike peak

max_spikes_per_unit: int or None

Number of spikes per unit to extract waveforms from (default 500). Use None to extract waveforms for all spikes

overwrite: bool

If True and ‘folder’ exists, the folder is removed and waveforms are recomputed. Otherwise an error is raised.

return_scaled: bool

If True and recording has gain_to_uV/offset_to_uV properties, waveforms are converted to uV.

dtype: dtype or None

Dtype of the output waveforms. If None, the recording dtype is maintained.

sparse: bool (default False)

If True, before extracting all waveforms the precompute_sparsity() function is run using a few spikes to get an estimate of dense templates and create a ChannelSparsity object. Then, the waveforms will be sparse at extraction time, which saves a lot of memory. When True, you must provide some kwargs for precompute_sparsity() to control the kind of sparsity you want to apply (by radius, by best channels, …).

num_spikes_for_sparsity: int (default 100)

The number of spikes to use to estimate sparsity (if sparse=True).

allow_unfiltered: bool

If True, an unfiltered recording is accepted. False by default.

use_relative_path: bool

If True, the recording and sorting paths are relative to the waveforms folder. This allows portability of the waveform folder provided that the relative paths are the same, but forces all the data files to be in the same drive. Default is False.

seed: int or None

Random seed for spike selection

load_if_exists: None or bool

If True and waveforms have already been extracted in the specified folder, they are loaded and not recomputed.

sparsity kwargs:
method: str
  • “best_channels”: N best channels with the largest amplitude. Use the ‘num_channels’ argument to specify the number of channels.

  • “radius”: radius around the best channel. Use the ‘radius_um’ argument to specify the radius in um.

  • “threshold”: thresholds based on template signal-to-noise ratio. Use the ‘threshold’ argument to specify the SNR threshold.

  • “by_property”: sparsity is given by a property of the recording and sorting (e.g. ‘group’). Use the ‘by_property’ argument to specify the property name.

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

num_channels: int

Number of channels for ‘best_channels’ method

radius_um: float

Radius in um for ‘radius’ method

threshold: float

Threshold in SNR ‘threshold’ method

by_property: object

Property name for ‘by_property’ method

job kwargs:
**job_kwargs: keyword arguments for parallel processing:
  • chunk_duration or chunk_size or chunk_memory or total_memory
    • chunk_size: int

      Number of samples per chunk

    • chunk_memory: str

      Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

    • chunk_duration: str or float or None

      Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

  • mp_context: str or None

    Context for multiprocessing. It can be None (default), “fork” or “spawn”. Note that “fork” is only available on UNIX systems

Returns
we: WaveformExtractor

The WaveformExtractor object

Examples

>>> import spikeinterface as si
>>> # Extract dense waveforms and save to disk
>>> we = si.extract_waveforms(recording, sorting, folder="waveforms")
>>> # Extract dense waveforms with parallel processing and save to disk
>>> job_kwargs = dict(n_jobs=8, chunk_duration="1s", progress_bar=True)
>>> we = si.extract_waveforms(recording, sorting, folder="waveforms", **job_kwargs)
>>> # Extract dense waveforms on all spikes
>>> we = si.extract_waveforms(recording, sorting, folder="waveforms-all", max_spikes_per_unit=None)
>>> # Extract dense waveforms in memory
>>> we = si.extract_waveforms(recording, sorting, folder=None, mode="memory")
>>> # Extract sparse waveforms (with radius-based sparsity of 50um) and save to disk
>>> we = si.extract_waveforms(recording, sorting, folder="waveforms-sparse", mode="folder",
>>>                           sparse=True, num_spikes_for_sparsity=100, method="radius", radius_um=50)
spikeinterface.core.load_waveforms(folder, with_recording=True, sorting=None)

Load a waveform extractor object from disk.

Parameters
folder: str or Path

The folder / zarr folder where the waveform extractor is stored

with_recording: bool, optional

If True, the recording is loaded, by default True

sorting: BaseSorting, optional

If passed, the sorting object associated to the waveform extractor, by default None

Returns
we: WaveformExtractor

The loaded waveform extractor
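
A minimal sketch (assuming waveforms were previously extracted to the "waveforms" folder):

>>> import spikeinterface as si
>>> we = si.load_waveforms("waveforms", with_recording=False)
>>> template = we.get_template(unit_id=we.unit_ids[0], mode="median")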

spikeinterface.core.compute_sparsity(waveform_extractor, method='radius', peak_sign='neg', num_channels=5, radius_um=100.0, threshold=5, by_property=None)

Get channel sparsity (subset of channels) for each template with several methods.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

method: str
  • “best_channels”: N best channels with the largest amplitude. Use the ‘num_channels’ argument to specify the number of channels.

  • “radius”: radius around the best channel. Use the ‘radius_um’ argument to specify the radius in um.

  • “threshold”: thresholds based on template signal-to-noise ratio. Use the ‘threshold’ argument to specify the SNR threshold.

  • “by_property”: sparsity is given by a property of the recording and sorting (e.g. ‘group’). Use the ‘by_property’ argument to specify the property name.

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

num_channels: int

Number of channels for ‘best_channels’ method

radius_um: float

Radius in um for ‘radius’ method

threshold: float

Threshold in SNR ‘threshold’ method

by_property: object

Property name for ‘by_property’ method

Returns
sparsity: ChannelSparsity

The estimated sparsity
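
A minimal sketch (we is an existing WaveformExtractor; the radius value is arbitrary):

>>> import spikeinterface as si
>>> sparsity = si.compute_sparsity(we, method="radius", radius_um=50.0)
>>> sparsity.unit_id_to_channel_ids  # dict: unit_id -> selected channel ids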

class spikeinterface.core.BinaryRecordingExtractor(file_paths, sampling_frequency, num_chan, dtype, t_starts=None, channel_ids=None, time_axis=0, file_offset=0, gain_to_uV=None, offset_to_uV=None, is_filtered=None)

RecordingExtractor for a binary format

Parameters
file_paths: str or Path or list

Path to the binary file

sampling_frequency: float

The sampling frequency

num_chan: int

Number of channels

dtype: str or dtype

The dtype of the binary file

time_axis: int

The axis of the time dimension (default 0: F order)

t_starts: None or list of float

Times in seconds of the first sample for each segment

channel_ids: list (optional)

A list of channel ids

file_offset: int (optional)

Number of bytes in the file to offset by during memmap instantiation.

gain_to_uV: float or array-like (optional)

The gain to apply to the traces

offset_to_uV: float or array-like

The offset to apply to the traces

is_filtered: bool or None

If True, the recording is assumed to be filtered. If None, is_filtered is not set.

Returns
recording: BinaryRecordingExtractor

The recording Extractor

class spikeinterface.core.ZarrRecordingExtractor(root_path: Union[Path, str], storage_options=None)

RecordingExtractor for a zarr format

Parameters
root_path: str or Path

Path to the zarr root file

storage_options: dict or None

Storage options for zarr store. E.g., if “s3://” or “gcs://” they can provide authentication methods, etc.

Returns
recording: ZarrRecordingExtractor

The recording Extractor

class spikeinterface.core.BinaryFolderRecording(folder_path)

BinaryFolderRecording is an internal format used in spikeinterface. It is a BinaryRecordingExtractor + metadata contained in a folder.

It is created with the function: recording.save(format=’binary’, folder=’/myfolder’)

Parameters
folder_path: str or Path
Returns
recording: BinaryFolderRecording

The recording

class spikeinterface.core.NpzFolderSorting(folder_path)

NpzFolderSorting is an internal format used in spikeinterface. It is a NpzSortingExtractor + metadata contained in a folder.

It is created with the function: sorting.save(folder=’/myfolder’)

Parameters
folder_path: str or Path
Returns
sorting: NpzFolderSorting

The sorting

class spikeinterface.core.NpyFolderSnippets(folder_path)

NpyFolderSnippets is an internal format used in spikeinterface. It is a NpySnippetsExtractor + metadata contained in a folder.

It is created with the function: snippets.save(format=’npy’, folder=’/myfolder’)

Parameters
folder_path: str or Path

The path to the folder

Returns
snippets: NpyFolderSnippets

The snippets

class spikeinterface.core.NumpyRecording(traces_list, sampling_frequency, t_starts=None, channel_ids=None)

In-memory recording. Contrary to previous versions, this class does not handle npy files.

Parameters
traces_list: list of array or array (if mono segment)

The traces to instantiate a mono or multisegment Recording

sampling_frequency: float

The sampling frequency in Hz

t_starts: None or list of float

Times in seconds of the first sample for each segment

channel_ids: list

An optional list of channel_ids. If None, linear channels are assumed

class spikeinterface.core.NumpySorting(sampling_frequency, unit_ids=[])
class spikeinterface.core.NumpySnippets(snippets_list, spikesframes_list, sampling_frequency, nbefore=None, channel_ids=None)

In-memory snippets. Contrary to previous versions, this class does not handle npy files.

Parameters
snippets_list: list of array or array (if mono segment)

The snippets to instantiate a mono or multisegment basesnippet

spikesframes_list: list of array or array (if mono segment)

Frame of each snippet

sampling_frequency: float

The sampling frequency in Hz

channel_ids: list

An optional list of channel_ids. If None, linear channels are assumed

class spikeinterface.core.AppendSegmentRecording(recording_list, sampling_frequency_max_diff=0)

Takes as input a list of parent recordings each with multiple segments and returns a single multi-segment recording that “appends” all segments from all parent recordings.

For instance, given one recording with 2 segments and one recording with 3 segments, this class will give one recording with 5 segments

Parameters
recording_list: list of BaseRecording

A list of recordings

sampling_frequency_max_diff: float

Maximum allowed difference of sampling frequencies across recordings (default 0)

class spikeinterface.core.ConcatenateSegmentRecording(recording_list, ignore_times=True, sampling_frequency_max_diff=0)

Return a recording that “concatenates” all segments from all parent recordings into one recording with a single segment. The operation is lazy.

For instance, given one recording with 2 segments and one recording with 3 segments, this class will give one recording with one large segment made by concatenating the 5 segments.

Time information is lost upon concatenation. By default ignore_times is True. If it is False, you get an error unless:

  • all segments DO NOT have times, AND

  • all segments have t_start=None

Parameters
recording_list: list of BaseRecording

A list of recordings

ignore_times: bool

If True (default), time information (t_start, time_vector) is ignored when concatenating recordings.

sampling_frequency_max_diff: float

Maximum allowed difference of sampling frequencies across recordings (default 0)
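
A minimal sketch (recording1 and recording2 are assumed to be existing recordings with matching channels and sampling frequency):

>>> from spikeinterface.core import ConcatenateSegmentRecording
>>> rec_concat = ConcatenateSegmentRecording([recording1, recording2])
>>> rec_concat.get_num_segments()  # always 1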

class spikeinterface.core.SelectSegmentRecording(recording: BaseRecording, segment_indices: Union[int, List[int]])

Return a new recording with a subset of segments from a multi-segment recording.

Parameters
recording: BaseRecording

The multi-segment recording

segment_indices: list of int

The segment indices to select

class spikeinterface.core.AppendSegmentSorting(sorting_list, sampling_frequency_max_diff=0)

Return a sorting that “appends” all segments from all sortings into one multi-segment sorting.

Parameters
sorting_list: list of BaseSorting

A list of sortings

sampling_frequency_max_diff: float

Maximum allowed difference of sampling frequencies across sortings (default 0)

class spikeinterface.core.SplitSegmentSorting(parent_sorting: BaseSorting, recording_or_recording_list=None)

Splits a sorting with a single segment into multiple segments based on the given list of recordings (must be in order).

Parameters
parent_sorting: BaseSorting

Sorting with a single segment (e.g. from sorting concatenated recording)

recording_or_recording_list: list of recordings, ConcatenateSegmentRecording, or None

If a list of recordings is given, the lengths of those recordings are used to split the sorting into smaller segments. If a ConcatenateSegmentRecording is given, its associated list of recordings is used to split the sorting into smaller segments. If None, the recording associated with the sorting is used (default None).

class spikeinterface.core.SelectSegmentSorting(sorting: BaseSorting, segment_indices: Union[int, List[int]])

Return a new sorting with a subset of segments from a multi-segment sorting.

Parameters
sorting: BaseSorting

The multi-segment sorting

segment_indices: list of int

The segment indices to select

spikeinterface.core.download_dataset(repo=None, remote_path=None, local_folder=None, update_if_exists=False, unlock=False)
spikeinterface.core.write_binary_recording(recording, file_paths=None, dtype=None, add_file_extension=True, verbose=False, byte_offset=0, auto_cast_uint=True, **job_kwargs)

Save the traces of a recording extractor in binary .dat format.

Note:

time_axis is always 0 (contrary to previous versions). To get time_axis=1 (which is a bad idea), use write_binary_recording_file_handle().

Parameters
recording: RecordingExtractor

The recording extractor object to be saved in .dat format

file_path: str

The path to the file.

dtype: dtype

Type of the saved data. Default float32.

add_file_extension: bool

If True (default), the ‘.raw’ file extension is added if the file name does not end in ‘raw’, ‘bin’, or ‘dat’

verbose: bool

If True, output is verbose (when chunks are used)

byte_offset: int

Offset in bytes (default 0) for the binary file (e.g. to write a header)

auto_cast_uint: bool

If True (default), unsigned integers are automatically cast to int if the specified dtype is signed

**job_kwargs: keyword arguments for parallel processing:
  • chunk_duration or chunk_size or chunk_memory or total_memory
    • chunk_size: int

      Number of samples per chunk

    • chunk_memory: str

      Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

    • chunk_duration: str or float or None

      Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

  • mp_context: str or None

    Context for multiprocessing. It can be None (default), “fork” or “spawn”. Note that “fork” is only available on UNIX systems

spikeinterface.core.set_global_tmp_folder(folder)

Set the global temporary folder path.

spikeinterface.core.set_global_dataset_folder(folder)

Set the global dataset folder.

spikeinterface.core.set_global_job_kwargs(**job_kwargs)

Set the global job kwargs.

Parameters
**job_kwargs: keyword arguments for parallel processing:
  • chunk_duration or chunk_size or chunk_memory or total_memory
    • chunk_size: int

      Number of samples per chunk

    • chunk_memory: str

      Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

    • chunk_duration: str or float or None

      Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

  • mp_context: str or None

    Context for multiprocessing. It can be None (default), “fork” or “spawn”. Note that “fork” is only available on UNIX systems
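
A minimal sketch setting global defaults used by parallel functions:

>>> import spikeinterface as si
>>> si.set_global_job_kwargs(n_jobs=8, chunk_duration="1s", progress_bar=True)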

spikeinterface.core.get_random_data_chunks(recording, return_scaled=False, num_chunks_per_segment=20, chunk_size=10000, concatenated=True, seed=0)

Extract random chunks across segments

This is used for instance in get_noise_levels() to estimate noise on traces.

Parameters
recording: BaseRecording

The recording to get random chunks from

return_scaled: bool

If True, returned chunks are scaled to uV

num_chunks_per_segment: int

Number of chunks per segment

chunk_size: int

Size of a chunk in number of frames

concatenated: bool (default True)

If True, chunks are concatenated along the time axis.

seed: int

Random seed

Returns
chunk_list: np.array

Array of concatenated chunks per segment
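
A minimal sketch (recording is an existing BaseRecording; the chunk settings are the defaults):

>>> import spikeinterface as si
>>> chunks = si.get_random_data_chunks(recording, num_chunks_per_segment=20, chunk_size=10000, concatenated=True)
>>> chunks.shape  # (total_concatenated_frames, num_channels)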

spikeinterface.core.get_channel_distances(recording)

Distance between channel pairs

spikeinterface.core.get_closest_channels(recording, channel_ids=None, num_channels=None)

Get closest channels + distances

Parameters
recording: RecordingExtractor

The recording extractor to get closest channels

channel_ids: list

List of channel ids for which to compute the nearest neighborhood

num_channels: int, optional

Maximum number of neighborhood channels to return

Returns
closest_channels_inds: array (2d)

Closest channel indices in ascending order for each channel id given in input

dists: array (2d)

Distance in ascending order for each channel id given in input

spikeinterface.core.get_noise_levels(recording, return_scaled=True, **random_chunk_kwargs)

Estimate noise for each channel using MAD methods.

Internally, it samples some chunks across segments and then uses the MAD estimator (more robust than the STD).
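
A minimal sketch (recording is an existing BaseRecording; extra keyword arguments are forwarded to get_random_data_chunks()):

>>> import spikeinterface as si
>>> noise_levels = si.get_noise_levels(recording, return_scaled=True, num_chunks_per_segment=20)
>>> noise_levels  # one value per channel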

spikeinterface.core.get_chunk_with_margin(rec_segment, start_frame, end_frame, channel_indices, margin, add_zeros=False, window_on_margin=False, dtype=None)

Helper to get chunk with margin

spikeinterface.core.order_channels_by_depth(recording, channel_ids=None, dimensions=('x', 'y'))

Order channels by depth, by first ordering the x-axis, and then the y-axis.

Parameters
recording: BaseRecording

The input recording

channel_ids: list/array or None

If given, a subset of channels to order locations for

dimensions: str or tuple

If str, it needs to be ‘x’, ‘y’, ‘z’. If tuple, it sorts the locations in two dimensions using lexsort. This approach is recommended since there is less ambiguity, by default (‘x’, ‘y’)

Returns
order_f: np.array

Array with sorted indices

order_r: np.array

Array with indices to revert sorting
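
A minimal sketch (recording is an existing BaseRecording with channel locations set):

>>> import spikeinterface as si
>>> order_f, order_r = si.order_channels_by_depth(recording, dimensions=("x", "y"))
>>> traces_by_depth = recording.get_traces(segment_index=0)[:, order_f]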

spikeinterface.core.get_template_amplitudes(waveform_extractor, peak_sign: str = 'neg', mode: str = 'extremum')

Get amplitude per channel for each unit.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

mode: str

‘extremum’: max or min; ‘at_index’: take value at spike index

Returns
peak_values: dict

Dictionary with unit ids as keys and template amplitudes as values

spikeinterface.core.get_template_extremum_channel(waveform_extractor, peak_sign: str = 'neg', mode: str = 'extremum', outputs: str = 'id')

Compute the channel with the extremum peak for each unit.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

mode: str

‘extremum’: max or min; ‘at_index’: take value at spike index

outputs: str
  • ‘id’: channel id

  • ‘index’: channel index

Returns
extremum_channels: dict

Dictionary with unit ids as keys and extremum channels (id or index based on ‘outputs’) as values
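
A minimal sketch (we is an existing WaveformExtractor):

>>> from spikeinterface.core import get_template_extremum_channel
>>> extremum_channels = get_template_extremum_channel(we, peak_sign="neg", outputs="id")
>>> extremum_channels  # {unit_id: channel_id, ...}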

spikeinterface.core.get_template_extremum_channel_peak_shift(waveform_extractor, peak_sign: str = 'neg')

In some situations spike sorters could return a spike index with a small shift relative to the waveform peak. This function estimates and returns these alignment shifts for the mean template. It is used internally by compute_spike_amplitudes() to accurately retrieve the spike amplitudes.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

Returns
shifts: dict

Dictionary with unit ids as keys and shifts as values

spikeinterface.core.get_template_extremum_amplitude(waveform_extractor, peak_sign: str = 'neg', mode: str = 'at_index')

Computes amplitudes on the best channel.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

peak_sign: str

Sign of the template to compute best channels (‘neg’, ‘pos’, ‘both’)

mode: str

Where the amplitude is computed. ‘extremum’: max or min; ‘at_index’: take value at spike index

Returns
amplitudes: dict

Dictionary with unit ids as keys and amplitudes as values

Low-level

class spikeinterface.core.BaseWaveformExtractorExtension(waveform_extractor)

This is the base class to extend the waveform extractor. It handles persistency to disk of any computation related to a waveform extractor.

For instance:
  • principal components

  • spike amplitudes

  • quality metrics

The design is done via a WaveformExtractor.register_extension(my_extension_class), so that only imported modules can be used as extension.

It also enables any custom computation on top of the waveform extractor to be implemented by the user.

An extension needs to inherit from this class and implement some abstract methods:
  • _reset

  • _set_params

  • _run

The subclass must also save to the self.extension_folder any file that needs to be reloaded when calling _load_extension_data

The subclass must also set an extension_name attribute which is not None by default.

class spikeinterface.core.ChannelSparsity(mask, unit_ids, channel_ids)

Handle channel sparsity for a set of units.

Internally, sparsity is stored as a boolean mask.

The ChannelSparsity object can also provide other sparsity representations:

  • ChannelSparsity.unit_id_to_channel_ids : unit_id to channel_ids

  • ChannelSparsity.unit_id_to_channel_indices : unit_id to channel_inds

By default it is constructed with a boolean array:

>>> sparsity = ChannelSparsity(mask, unit_ids, channel_ids)

But it can also be constructed from a dictionary:

>>> sparsity = ChannelSparsity.from_unit_id_to_channel_ids(unit_id_to_channel_ids, unit_ids, channel_ids)

Parameters
mask: np.array of bool

The sparsity mask (num_units, num_channels)

unit_ids: list or array

Unit ids vector or list

channel_ids: list or array

Channel ids vector or list

Examples

The class can also be used to construct/estimate the sparsity from a WaveformExtractor with several methods:

Using the N best channels (largest template amplitude):

>>> sparsity = ChannelSparsity.from_best_channels(we, num_channels, peak_sign='neg')

Using a neighborhood by radius:

>>> sparsity = ChannelSparsity.from_radius(we, radius_um, peak_sign='neg')

Using an SNR threshold:

>>> sparsity = ChannelSparsity.from_threshold(we, threshold, peak_sign='neg')

Using a recording/sorting property (e.g. ‘group’):

>>> sparsity = ChannelSparsity.from_property(we, by_property="group")

class spikeinterface.core.ChunkRecordingExecutor(recording, func, init_func, init_args, verbose=False, progress_bar=False, handle_returns=False, n_jobs=1, total_memory=None, chunk_size=None, chunk_memory=None, chunk_duration=None, mp_context=None, job_name='', max_threads_per_process=1)

Core class for parallel processing to run a “function” over chunks on a recording.

It supports running a function:
  • in loop with chunk processing (low RAM usage)

  • at once if chunk_size is None (high RAM usage)

  • in parallel with ProcessPoolExecutor (higher speed)

The initializer (‘init_func’) allows one to set a global context to avoid heavy serialization (for examples, see implementation in core.WaveformExtractor).

Parameters
recording: RecordingExtractor

The recording to be processed

func: function

Function that runs on each chunk

init_func: function

Initializer function to set the global context (accessible by ‘func’)

init_args: tuple

Arguments for init_func

verbose: bool

If True, output is verbose

progress_bar: bool

If True, a progress bar is printed to monitor the progress of the process

handle_returns: bool

If True, the function can return values

n_jobs: int

Number of jobs to be used (default 1). Use -1 to use as many jobs as number of cores

total_memory: str

Total memory (RAM) to use (e.g. “1G”, “500M”)

chunk_memory: str

Memory per chunk (RAM) to use (e.g. “1G”, “500M”)

chunk_size: int or None

Size of each chunk in number of samples. If ‘total_memory’ or ‘chunk_memory’ are used, it is ignored.

chunk_duration: str or float or None

Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

mp_context: str or None

“fork” (default) or “spawn”. If None, the context is taken from recording.get_preferred_mp_context(). “fork” is only available on UNIX systems.

job_name: str

Job name

max_threads_per_process: int or None

Limit the number of threads per process using the threadpoolctl module. This is used only when n_jobs>1. If None, no limit.

Returns
res: list

If ‘handle_returns’ is True, the results for each chunk process
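
A minimal sketch of the intended usage (recording is an existing BaseRecording; the chunk function and the per-chunk computation are placeholders, and the exact signatures should be checked against the implementation):

>>> from spikeinterface.core import ChunkRecordingExecutor
>>> def init_func(recording):
...     # build the worker context shared by every chunk call
...     return dict(recording=recording)
>>> def func(segment_index, start_frame, end_frame, worker_ctx):
...     traces = worker_ctx["recording"].get_traces(segment_index=segment_index,
...                                                 start_frame=start_frame, end_frame=end_frame)
...     return traces.mean()  # any per-chunk computation
>>> executor = ChunkRecordingExecutor(recording, func, init_func, (recording,),
...                                   handle_returns=True, n_jobs=1, chunk_duration="1s")
>>> results = executor.run()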

spikeinterface.extractors

NEO-based

spikeinterface.extractors.read_alphaomega(folder_path, lsx_files=None, stream_id='RAW', stream_name=None, all_annotations=False)

Class for reading from AlphaRS and AlphaLab SnR boards.

Based on neo.rawio.AlphaOmegaRawIO

Parameters
folder_path: str or Path-like

The folder path to the AlphaOmega recordings.

lsx_files: list of strings or None, optional

A list of listing files that refer to the mpx files to load.

stream_id: {‘RAW’, ‘LFP’, ‘SPK’, ‘ACC’, ‘AI’, ‘UD’}, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_alphaomega_event(folder_path)

Class for reading events from AlphaOmega MPX file format

spikeinterface.extractors.read_axona(file_path, all_annotations=False)

Class for reading Axona RAW format.

Based on neo.rawio.AxonaRawIO

Parameters
file_path: str

The file path to load the recordings from.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_biocam(file_path, mea_pitch=None, electrode_width=None, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading data from a Biocam file from 3Brain.

Based on neo.rawio.BiocamRawIO

Parameters
file_path: str

The file path to load the recordings from.

mea_pitch: float, optional

The inter-electrode distance (pitch) between electrodes.

electrode_width: float, optional

Width of the electrodes in um.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool (default False)

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_blackrock(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading BlackRock data.

Based on neo.rawio.BlackrockRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_ced(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading smr/smrw CED file.

Based on neo.rawio.CedRawIO / sonpy

Alternative to read_spike2 which does not handle smrx

Parameters
file_path: str

The file path to the smr or smrx file.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

block_index: int, optional

If there are several blocks, specify the block index you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_intan(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading data from an Intan board. Supports rhd and rhs format.

Based on neo.rawio.IntanRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_maxwell(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False, rec_name=None, install_maxwell_plugin=False)

Class for reading data from Maxwell device. It handles MaxOne (old and new format) and MaxTwo.

Based on neo.rawio.MaxwellRawIO

Parameters
file_path: str

The file path to the maxwell h5 file.

stream_id: str, optional

If there are several streams, specify the stream id you want to load. For MaxTwo when there are several wells at the same time you need to specify stream_id=’well000’ or ‘well0001’, etc.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

rec_name: str, optional

When the file contains several recordings you need to specify the one you want to extract. (rec_name=’rec0000’).

install_maxwell_plugin: bool, optional, default: False

If True, install the maxwell plugin for neo.

spikeinterface.extractors.read_mearec(file_path)

Read a MEArec file.

Parameters
file_path: str or Path

Path to MEArec h5 file

Returns
recording: MEArecRecordingExtractor

The recording extractor object

sorting: MEArecSortingExtractor

The sorting extractor object
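
A minimal sketch (the file name is a placeholder):

>>> import spikeinterface.extractors as se
>>> recording, sorting = se.read_mearec("mearec_recordings.h5")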

spikeinterface.extractors.read_mcsraw(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading data from “Raw” Multi Channel System (MCS) format. This format is NOT the native MCS format (.mcd). This format is a raw format with an internal binary header exported by the “MC_DataTool binary conversion” with the option header selected.

Based on neo.rawio.RawMCSRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

block_index: int, optional

If there are several blocks, specify the block index you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_neuralynx(folder_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading neuralynx folder

Based on neo.rawio.NeuralynxRawIO

Parameters
folder_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_neuralynx_sorting(folder_path, sampling_frequency=None)

Class for reading spike data from a folder with neuralynx spiking data (i.e .nse and .ntt formats).

Based on neo.rawio.NeuralynxRawIO

Parameters
folder_path: str

The file path to load the recordings from.

sampling_frequency: float

The sampling frequency for the spiking channels. When the signal data is available (.ncs) those files will be used to extract the frequency. Otherwise, the sampling frequency needs to be specified for this extractor.

spikeinterface.extractors.read_neuroscope(file_path, stream_id=None, keep_mua_units=False, exclude_shanks=None, load_recording=True, load_sorting=False)

Read neuroscope recording and sorting. This function assumes that all .res and .clu files are in the same folder as the .xml file.

Parameters
file_path: str

The xml file.

stream_id: str or None
keep_mua_units: bool

Optional. Whether or not to return sorted spikes from multi-unit activity. Defaults to True.

exclude_shanks: list

Optional. List of indices to ignore. The set of all possible indices is chosen by default, extracted as the final integer of all the .res.%i and .clu.%i pairs.

load_recording: bool

If True, the recording is loaded (default True)

load_sorting: bool

If True, the sorting is loaded (default False)

spikeinterface.extractors.read_nix(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading Nix file

Based on neo.rawio.NIXRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

block_index: int, optional

If there are several blocks, specify the block index you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_openephys(folder_path, **kwargs)

Read ‘legacy’ or ‘binary’ Open Ephys formats

Parameters
folder_path: str or Path

Path to openephys folder

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

block_index: int, optional

If there are several blocks (experiments), specify the block index you want to load.

all_annotations: bool (default False)

Load exhaustively all annotations from neo.

Returns
recording: OpenEphysLegacyRecordingExtractor or OpenEphysBinaryExtractor
spikeinterface.extractors.read_openephys_event(folder_path, block_index=None)

Read Open Ephys events from ‘binary’ format.

Parameters
folder_path: str or Path

Path to openephys folder

block_index: int, optional

If there are several blocks (experiments), specify the block index you want to load.

Returns
event: OpenEphysBinaryEventExtractor
spikeinterface.extractors.read_plexon(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading plexon plx files.

Based on neo.rawio.PlexonRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_plexon_sorting(file_path)

Class for reading plexon spiking data (.plx files).

Based on neo.rawio.PlexonRawIO

Parameters
file_path: str

The file path to load the recordings from.

spikeinterface.extractors.read_spike2(file_path, stream_id=None, stream_name=None, all_annotations=False)

Class for reading spike2 smr files. smrx files are not supported with this extractor; prefer CedRecordingExtractor instead.

Based on neo.rawio.Spike2RawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_spikegadgets(file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading rec files from spikegadgets.

Based on neo.rawio.SpikeGadgetsRawIO

Parameters
file_path: str

The file path to load the recordings from.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

spikeinterface.extractors.read_spikeglx(folder_path, load_sync_channel=False, stream_id=None, stream_name=None, all_annotations=False)

Class for reading data saved by SpikeGLX software. See https://billkarsh.github.io/SpikeGLX/

Based on neo.rawio.SpikeGLXRawIO

Contrary to older versions, this reader is folder-based. So if the folder contains several streams (‘imec0.ap’, ‘nidq’, ‘imec0.lf’), the stream has to be specified with ‘stream_id’.

Parameters
folder_path: str

The folder path to load the recordings from.

load_sync_channel: bool, default: False

Whether or not to load the last channel in the stream, which is typically used for synchronization. If True, then the probe is not loaded.

stream_id: str, optional

If there are several streams, specify the stream id you want to load. For example, ‘imec0.ap’ ‘nidq’ or ‘imec0.lf’.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.
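
A minimal sketch (the folder name is a placeholder; the stream id follows the convention described above):

>>> import spikeinterface.extractors as se
>>> recording = se.read_spikeglx("spikeglx_folder", stream_id="imec0.ap")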

spikeinterface.extractors.read_tdt(folder_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False)

Class for reading TDT folder.

Based on neo.rawio.TdTRawIO

Parameters
folder_path: str

The folder path to the tdt folder.

stream_id: str, optional

If there are several streams, specify the stream id you want to load.

stream_name: str, optional

If there are several streams, specify the stream name you want to load.

all_annotations: bool, optional, default: False

Load exhaustively all annotations from neo.

Non-NEO-based

spikeinterface.extractors.read_alf_sorting(folder_path, sampling_frequency=30000)

Load ALF format data as a sorting extractor.

Parameters
folder_path: str or Path

Path to the ALF folder.

sampling_frequency: int, optional, default: 30000

The sampling frequency.

Returns
extractor: ALFSortingExtractor

The loaded data.

spikeinterface.extractors.read_bids(folder_path)

Load a BIDS folder of data into extractor objects.

The following files are considered:

  • _channels.tsv

  • _contacts.tsv

  • _ephys.nwb

  • _probes.tsv

Parameters
folder_path: str or Path

Path to the BIDS folder.

Returns
extractors: list of extractors

The loaded data, with attached Probes.

spikeinterface.extractors.read_cbin_ibl(folder_path, load_sync_channel=False)

Load IBL data as an extractor object.

IBL has a custom format: compressed binary with SpikeGLX meta.

The format is like SpikeGLX (it has a meta file) but contains:

  • “cbin” file (instead of “bin”)

  • “ch” file used by mtscomp for compression info

Parameters
folder_path: str or Path

Path to ibl folder.

load_sync_channel: bool, optional, default: False

Whether or not to load the last (sync) channel. If not, the probe is loaded.

Returns
recording: CompressedBinaryIblExtractor

The loaded data.

spikeinterface.extractors.read_combinato(folder_path, sampling_frequency=None, user='simple', det_sign='both', keep_good_only=True)

Load Combinato format data as a sorting extractor.

Parameters
folder_path: str or Path

Path to the Combinato folder.

sampling_frequencyint, optional, default: 30000

The sampling frequency.

userstr

The username that ran the sorting. Defaults to ‘simple’.

det_sign{‘both’, ‘pos’, ‘neg’}

Which sign was used for detection.

keep_good_onlybool, optional, default: True

Whether to only keep good units.

Returns
extractorCombinatoSortingExtractor

The loaded data.

spikeinterface.extractors.read_ibl_streaming_recording(session: str, stream_name: str, load_sync_channel: bool = False, cache_folder: Optional[Union[str, Path]] = None, remove_cached: bool = True)

Stream IBL data as an extractor object.

Parameters
sessionstr

The session ID to extract recordings for. In ONE, this is sometimes referred to as the ‘eid’. When doing a session lookup such as

>>> from one.api import ONE
>>> one = ONE(base_url="https://openalyx.internationalbrainlab.org", password="international", silent=True)
>>> sessions = one.alyx.rest('sessions', 'list', tag='2022_Q2_IBL_et_al_RepeatedSite')

each returned value in sessions refers to it as the ‘id’.

stream_namestr

The name of the stream to load for the session. These can be retrieved from calling StreamingIblExtractor.get_stream_names(session=”<your session ID>”).

load_sync_channelbool, default: False

Load or not the last channel (sync). If not then the probe is loaded.

cache_folderstr, optional

The location to temporarily store chunks of data during streaming. The default uses the folder designated by ONE.alyx._par.CACHE_DIR / “cache”, which is typically the designated ‘Downloads’ folder on your operating system. As long as remove_cached is set to True, the only files that will persist in this folder are the metadata header files and the chunk of data being actively streamed and used in RAM.

remove_cachedbool, default: True

Whether or not to remove streamed data from the cache immediately after it is read. If you expect to reuse fetched data many times, and have the disk space available, it is recommended to set this to False.

Returns
recordingIblStreamingRecordingExtractor

The recording extractor which allows access to the traces.

spikeinterface.extractors.read_hdsort(file_path, keep_good_only=True)

Load HDSort format data as a sorting extractor.

Parameters
file_pathstr or Path

Path to HDSort mat file.

keep_good_onlybool, optional, default: True

Whether to only keep good units.

Returns
extractorHDSortSortingExtractor

The loaded data.

spikeinterface.extractors.read_herdingspikes(file_path, load_unit_info=True)

Load HerdingSpikes format data as a sorting extractor.

Parameters
file_pathstr or Path

Path to the HerdingSpikes file.

load_unit_infobool, optional, default: True

Whether to load the unit info from the file.

Returns
extractorHerdingSpikesSortingExtractor

The loaded data.

spikeinterface.extractors.read_kilosort(folder_path, keep_good_only=False)

Load Kilosort format data as a sorting extractor.

Parameters
folder_path: str or Path

Path to the output Phy folder (containing the params.py).

exclude_cluster_groups: list or str, optional

Cluster groups to exclude (e.g. “noise” or [“noise”, “mua”]).

keep_good_onlybool, optional, default: False

Whether to only keep good units. If True, only Kilosort-labeled ‘good’ units are returned.

Returns
extractorKiloSortSortingExtractor

The loaded data.

spikeinterface.extractors.read_klusta(file_or_folder_path, exclude_cluster_groups=None)

Load Klusta format data as a sorting extractor.

Parameters
file_or_folder_pathstr or Path

Path to the Klusta file or folder.

exclude_cluster_groups: list or str, optional

Cluster groups to exclude (e.g. “noise” or [“noise”, “mua”]).

Returns
extractorKlustaSortingExtractor

The loaded data.

spikeinterface.extractors.read_mcsh5(file_path, stream_id=0)

Load a MCS H5 file as a recording extractor.

Parameters
file_pathstr or Path

The path to the MCS h5 file.

stream_idint, optional, default: 0

The stream ID to load.

Returns
recordingMCSH5RecordingExtractor

The loaded data.

spikeinterface.extractors.read_mda_recording(folder_path, raw_fname='raw.mda', params_fname='params.json', geom_fname='geom.csv')

Load MDA format data as a recording extractor.

Parameters
folder_pathstr or Path

Path to the MDA folder.

raw_fname: str

File name of raw file. Defaults to ‘raw.mda’.

params_fname: str

File name of params file. Defaults to ‘params.json’.

geom_fname: str

File name of geom file. Defaults to ‘geom.csv’.

Returns
extractorMdaRecordingExtractor

The loaded data.

spikeinterface.extractors.read_mda_sorting(file_path, sampling_frequency)

Load MDA format data as a sorting extractor.

Parameters
file_pathstr or Path

Path to the MDA file.

sampling_frequencyint

The sampling frequency.

Returns
extractorMdaSortingExtractor

The loaded data.

spikeinterface.extractors.read_nwb(file_path, load_recording=True, load_sorting=False, electrical_series_name=None)

Reads NWB file into SpikeInterface extractors.

Parameters
file_path: str or Path

Path to NWB file.

load_recordingbool, optional, default: True

If True, the recording object is loaded.

load_sortingbool, optional, default: False

If True, the sorting object is loaded.

electrical_series_name: str, optional

The name of the ElectricalSeries (if multiple ElectricalSeries are present)

Returns
extractors: extractor or tuple

Single RecordingExtractor/SortingExtractor or tuple with both (depending on ‘load_recording’/’load_sorting’) arguments.
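
A minimal sketch (the file path is hypothetical); when both load_recording and load_sorting are True, a tuple is returned:

>>> import spikeinterface.extractors as se
>>> recording = se.read_nwb("/path/to/data.nwb")
>>> recording, sorting = se.read_nwb("/path/to/data.nwb", load_recording=True, load_sorting=True)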

spikeinterface.extractors.read_phy(folder_path, exclude_cluster_groups=None)

Load Phy format data as a sorting extractor.

Parameters
folder_path: str or Path

Path to the output Phy folder (containing the params.py).

exclude_cluster_groups: list or str, optional

Cluster groups to exclude (e.g. “noise” or [“noise”, “mua”]).

Returns
extractorPhySortingExtractor

The loaded data.

spikeinterface.extractors.read_shybrid_recording(file_path)

Load SHYBRID format data as a recording extractor.

Parameters
file_pathstr or Path

Path to the SHYBRID file.

Returns
extractorSHYBRIDRecordingExtractor

Loaded data.

spikeinterface.extractors.read_shybrid_sorting(file_path, sampling_frequency, delimiter=',')

Load SHYBRID format data as a sorting extractor.

Parameters
file_pathstr or Path

Path to the SHYBRID file.

sampling_frequencyint

The sampling frequency.

delimiterstr

The delimiter to use for loading the file.

Returns
extractorSHYBRIDSortingExtractor

Loaded data.

spikeinterface.extractors.read_spykingcircus(folder_path)

Load SpykingCircus format data as a sorting extractor.

Parameters
folder_pathstr or Path

Path to the SpykingCircus folder.

Returns
extractorSpykingCircusSortingExtractor

Loaded data.

spikeinterface.extractors.toy_example(duration=10, num_channels=4, num_units=10, sampling_frequency=30000.0, num_segments=2, average_peak_amplitude=-100, upsample_factor=13, contact_spacing_um=40, num_columns=1, spike_times=None, spike_labels=None, score_detection=1, firing_rate=3.0, seed=None)

Creates a toy recording and sorting extractors.

Parameters
duration: float (or list if multi segment)

Duration in seconds (default 10).

num_channels: int

Number of channels (default 4).

num_units: int

Number of units (default 10).

sampling_frequency: float

Sampling frequency (default 30000).

num_segments: int

Number of segments (default 2).

spike_times: ndarray (or list for multi segment)

Spike times in the recording.

spike_labels: ndarray (or list for multi segment)

Cluster label for each spike time (spike_times and spike_labels need to be specified together).

score_detection: int (between 0 and 1)

Generate the sorting based on a subset of the spikes used for the trace generation.

firing_rate: float

The firing rate for the units (in Hz).

seed: int

Seed for random initialization.

Returns
recording: RecordingExtractor

The output recording extractor.

sorting: SortingExtractor

The output sorting extractor.
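
A minimal sketch of generating a paired recording/sorting for testing (parameter values are arbitrary):

>>> import spikeinterface.extractors as se
>>> recording, sorting = se.toy_example(duration=[10.0, 15.0], num_channels=4, num_units=10, num_segments=2, seed=42)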

spikeinterface.extractors.read_tridesclous(folder_path, chan_grp=None)

Load Tridesclous format data as a sorting extractor.

Parameters
folder_pathstr or Path

Path to the Tridesclous folder.

chan_grplist, optional

The channel group(s) to load.

Returns
extractorTridesclousSortingExtractor

Loaded data.

spikeinterface.extractors.read_waveclus(file_path, keep_good_only=True)

Load WaveClus format data as a sorting extractor.

Parameters
file_pathstr or Path

Path to the WaveClus file.

keep_good_onlybool, optional, default: True

Whether to only keep good units.

Returns
extractorWaveClusSortingExtractor

Loaded data.

spikeinterface.extractors.read_yass(folder_path)

Load YASS format data as a sorting extractor.

Parameters
folder_pathstr or Path

Path to the YASS folder.

Returns
extractorYassSortingExtractor

Loaded data.

spikeinterface.preprocessing

spikeinterface.preprocessing.bandpass_filter(recording, freq_min=300.0, freq_max=6000.0, margin_ms=5.0, dtype=None, **filter_kwargs)

Bandpass filter of a recording

Parameters
recording: Recording

The recording extractor to be filtered

freq_min: float

The highpass cutoff frequency in Hz

freq_max: float

The lowpass cutoff frequency in Hz

margin_ms: float

Margin in ms on border to avoid border effect

dtype: dtype or None

The dtype of the returned traces. If None, the dtype of the parent recording is used

Returns
filter_recording: BandpassFilterRecording

The bandpass-filtered recording extractor object
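
A minimal sketch (recording is assumed to be an existing recording extractor); the returned object is lazy and filters traces on demand:

>>> import spikeinterface.preprocessing as spre
>>> recording_filtered = spre.bandpass_filter(recording, freq_min=300.0, freq_max=6000.0)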

spikeinterface.preprocessing.blank_staturation(recording, abs_threshold=None, quantile_threshold=None, direction='upper', fill_value=None, num_chunks_per_segment=50, chunk_size=500, seed=0)

Find and remove parts of the signal with extreme values. Some arrays may produce these when amplifiers enter saturation, typically for short periods of time. To remove these artefacts, values below or above a threshold are set to the median signal value. The threshold is either estimated automatically, using the lower and upper 0.1 signal percentile with the largest deviation from the median, or specified. Use this function with caution, as it may clip uncontaminated signals. A warning is printed if the data range suggests no artefacts.

Parameters
recording: RecordingExtractor

The recording extractor to be transformed

abs_threshold: float or None

The absolute value for considering that the signal is saturating

quantile_threshold: float or None

The value in [0, 1] used, if abs_threshold is None, to automatically set the abs_threshold given the data. Must be provided if abs_threshold is None

direction: string in [‘upper’, ‘lower’, ‘both’]

Only values higher than the detection threshold are set to fill_value (‘upper’), or only values lower than the detection threshold (‘lower’), or both (‘both’)

fill_value: float or None

The value to write instead of the saturating signal. If None, then the value is automatically computed as the median signal value

num_chunks_per_segment: int (default 50)

The number of chunks per segments to consider to estimate the threshold/fill_values

chunk_size: int (default 500)

The chunk size to estimate the threshold/fill_values

seed: int (default 0)

The seed to select the random chunks

Returns
rescaled_traces: BlankSaturationRecording

The filtered traces recording extractor object

spikeinterface.preprocessing.center(recording, mode='median', dtype='float32', **random_chunk_kwargs)

Centers traces from the given recording extractor by removing the median/mean of each channel.

Parameters
recording: RecordingExtractor

The recording extractor to be centered

mode: str

‘median’ (default) | ‘mean’

dtype: str or np.dtype

The dtype of the output traces. Default “float32”

**random_chunk_kwargs: keyword arguments for `get_random_data_chunks()` function
Returns
centered_traces: ScaleRecording

The centered traces recording extractor object

spikeinterface.preprocessing.clip(recording, a_min=None, a_max=None)

Limit the values of the data between a_min and a_max. Values exceeding the range will be set to the minimum or maximum, respectively.

Parameters
recording: RecordingExtractor

The recording extractor to be transformed

a_min: float or `None` (default `None`)

Minimum value. If None, clipping is not performed on lower interval edge.

a_max: float or `None` (default `None`)

Maximum value. If None, clipping is not performed on upper interval edge.

Returns
rescaled_traces: ClipTracesRecording

The clipped traces recording extractor object

spikeinterface.preprocessing.common_reference(recording, reference='global', operator='median', groups=None, ref_channel_ids=None, local_radius=(30, 55), verbose=False)

Re-references the recording extractor traces.

Parameters
recording: RecordingExtractor

The recording extractor to be re-referenced

reference: str ‘global’, ‘single’ or ‘local’

If ‘global’, CMR/CAR is applied either by groups or over all channels. If ‘single’, the selected channel(s) is subtracted from all channels; operator is not used in that case. If ‘local’, an average CMR/CAR is implemented using, for each channel, only the channels within the selecting annulus defined by local_radius around it

operator: str ‘median’ or ‘average’

If ‘median’, common median reference (CMR) is implemented (the median of the selected channels is removed for each timestamp). If ‘average’, common average reference (CAR) is implemented (the mean of the selected channels is removed for each timestamp).

groups: list

List of lists containing the channel ids for splitting the reference. The CMR, CAR, or referencing with respect to single channels are applied group-wise. However, this is not applied for the local CAR. It is useful when dealing with different channel groups, e.g. multiple tetrodes.

ref_channel_ids: list or int

If no ‘groups’ are specified, all channels are referenced to ‘ref_channel_ids’. If ‘groups’ is provided, then a list of channels to be applied to each group is expected. If ‘single’ reference, a list of one channel or an int is expected.

local_radius: tuple(int, int)

Use in the local CAR implementation as the selecting annulus (exclude radius, include radius)

verbose: bool

If True, output is verbose

Returns
referenced_recording: CommonReferenceRecording

The re-referenced recording extractor object
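
A minimal sketch (recording assumed to exist), showing a global common median reference and a group-wise variant with hypothetical channel-id groups:

>>> import spikeinterface.preprocessing as spre
>>> recording_cmr = spre.common_reference(recording, reference="global", operator="median")
>>> # group-wise CMR with hypothetical tetrode groups (channel ids 0-7)
>>> recording_cmr_groups = spre.common_reference(recording, reference="global", operator="median", groups=[[0, 1, 2, 3], [4, 5, 6, 7]])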

spikeinterface.preprocessing.correct_lsb(recording, num_chunks_per_segment=20, chunk_size=10000, seed=None, verbose=False)

Estimates the LSB of the recording and divides the traces by the LSB to ensure LSB = 1. Medians are also subtracted to avoid rounding errors.

Parameters
recordingRecordingExtractor

The recording extractor to be LSB-corrected.

num_chunks_per_segment: int

Number of chunks per segment for random chunk, by default 20

chunk_sizeint

Size of a chunk in number for random chunk, by default 10000

seedint

Random seed for random chunk, by default None

verbosebool

If True, estimate LSB value is printed, by default False

Returns
correct_lsb_recording: ScaleRecording

The recording extractor with corrected LSB

spikeinterface.preprocessing.detect_bad_channels(recording, method='coherence+psd', std_mad_threshold=5, psd_hf_threshold=0.02, dead_channel_threshold=-0.5, noisy_channel_threshold=1.0, outside_channel_threshold=-0.75, n_neighbors=11, nyquist_threshold=0.8, direction='y', chunk_duration_s=0.3, num_random_chunks=10, welch_window_ms=10.0, highpass_filter_cutoff=300, seed=None)

Perform bad channel detection. The recording is assumed to be filtered. If not, a highpass filter is applied on the fly.

Different methods are implemented:

  • std : threshold on channel standard deviations

    If the standard deviation of a channel is greater than std_mad_threshold times the median of all channels standard deviations, the channel is flagged as noisy

  • mad : same as std, but using median absolute deviations instead

  • coherence+psd : method developed by the International Brain Laboratory that detects bad channels of three types:

    • Dead channels are those with low similarity to the surrounding channels (n=`n_neighbors` median)

    • Noise channels are those with power at >80% Nyquist above the psd_hf_threshold (default 0.02 uV^2 / Hz) and a high coherence with “far away” channels

    • Out of brain channels are contiguous regions of channels dissimilar to the median of all channels at the top end of the probe (i.e. large channel number)

Parameters
recordingBaseRecording

The recording for which bad channels are detected

methodstr

The method to be used:

  • coherence+psd (default, developed by IBL)

  • mad

  • std

std_mad_threshold (mstd)float

(method std, mad) The standard deviation/mad multiplier threshold

psd_hf_threshold (coherence+psd)float

An absolute threshold (uV^2/Hz) used as a cutoff for noise channels. Channels with average power at >80% Nyquist larger than this threshold will be labeled as noise, by default 0.02

dead_channel_threshold (coherence+psd)float, optional

Threshold for channel coherence below which channels are labeled as dead, by default -0.5

noisy_channel_threshold (coherence+psd)float

Threshold for channel coherence above which channels are labeled as noisy (together with psd condition), by default 1

outside_channel_threshold (coherence+psd)float

Threshold for channel coherence above which channels at the edge of the recording are marked as outside of the brain, by default -0.75

n_neighbors (coherence+psd)int

Number of channel neighbors to compute median filter (needs to be odd), by default 11

nyquist_threshold (coherence+psd)float

Frequency with respect to Nyquist (Fn=1) above which the mean of the PSD is calculated and compared with psd_hf_threshold, by default 0.8

direction (coherence+psd): str

‘x’, ‘y’, ‘z’, the depth dimension, by default ‘y’

highpass_filter_cutofffloat

If the recording is not filtered, the cutoff frequency of the highpass filter, by default 300

chunk_duration_sfloat

Duration of each chunk, by default 0.3

num_random_chunksint

Number of random chunks, by default 10

welch_window_msfloat

Window size for the scipy.signal.welch that will be converted to nperseg, by default 10ms

seedint or None

The random seed to extract chunks, by default None

Returns
bad_channel_idsnp.array

The identified bad channel ids

channel_labelsnp.array of str
Channels labels depending on the method:
  • (coherence+psd) good/dead/noise/out

  • (std, mad) good/noise

Notes

For details refer to: International Brain Laboratory et al. (2022). Spike sorting pipeline for the International Brain Laboratory. https://www.internationalbrainlab.com/repro-ephys

Examples

>>> import spikeinterface.preprocessing as spre
>>> bad_channel_ids, channel_labels = spre.detect_bad_channels(recording, method="coherence+psd")
>>> # remove bad channels
>>> recording_clean = recording.remove_channels(bad_channel_ids)
spikeinterface.preprocessing.filter(recording, band=[300.0, 6000.0], btype='bandpass', filter_order=5, ftype='butter', filter_mode='sos', margin_ms=5.0, coeff=None, dtype=None)

Generic filter class based on:

  • scipy.signal.iirfilter

  • scipy.signal.filtfilt or scipy.signal.sosfilt

BandpassFilterRecording is built on top of it.

Parameters
recording: Recording

The recording extractor to be filtered

band: float or list

If float, cutoff frequency in Hz for ‘highpass’ filter type. If list, band (low, high) in Hz for ‘bandpass’ filter type.

btype: str

Type of the filter (‘bandpass’, ‘highpass’)

margin_ms: float

Margin in ms on border to avoid border effect

filter_mode: str ‘sos’ or ‘ba’

Filter form of the filter coefficients: ‘sos’ for second-order sections (default), or ‘ba’ for numerator/denominator.

coeff: ndarray or None

Filter coefficients in the filter_mode form.

dtype: dtype or None

The dtype of the returned traces. If None, the dtype of the parent recording is used

Returns
filter_recording: FilterRecording

The filtered recording extractor object

spikeinterface.preprocessing.highpass_spatial_filter(recording, n_channel_pad=None, n_channel_taper=5, direction='y', apply_agc=True, agc_window_length_s=0.01, highpass_butter_order=3, highpass_butter_wn=0.01)

Perform destriping with high-pass spatial filtering. Uses the kfilt() function of the International Brain Laboratory.

Median average filtering, by removing the median of signal across channels, assumes noise is constant across all channels. However, noise can exhibit low-frequency changes across nearby channels.

This is an alternative to median filtering across channels, in which the cut-band is extended from 0 to the 0.01 Nyquist corner frequency using a Butterworth filter. This allows removal of contaminating stripes that are not constant across channels.

Filtering is performed on the 0 axis (across channels), with optional padding (mirrored) and tapering (cosine taper) prior to applying the Butterworth filter.

Parameters
recordingBaseRecording

The parent recording

n_channel_padint

Number of channels to pad prior to filtering. Channels are padded with mirroring. If None, no padding is applied, by default None

n_channel_taperint

Number of channels to perform cosine tapering on prior to filtering. If None and n_channel_pad is set, n_channel_taper will be set to the number of padded channels. Otherwise, the passed value will be used, by default 5

directionstr

The direction in which the spatial filter is applied, by default “y”

apply_agcbool

If True, Automatic Gain Control is applied, by default True

agc_window_length_sfloat

Window in seconds to compute Hanning window for AGC, by default 0.01

highpass_butter_orderint

Order of spatial butterworth filter, by default 3

highpass_butter_wnfloat

Critical frequency (with respect to Nyquist) of spatial butterworth filter, by default 0.01

Returns
highpass_recordingHighpassSpatialFilterRecording

The recording with highpass spatial filtered traces

References

Details of the high-pass spatial filter function (written by Olivier Winter) used in the IBL pipeline can be found at: International Brain Laboratory et al. (2022). Spike sorting pipeline for the International Brain Laboratory. https://www.internationalbrainlab.com/repro-ephys

spikeinterface.preprocessing.interpolate_bad_channels(recording, bad_channel_ids, sigma_um=None, p=1.3, weights=None)

Interpolate the channels labeled as bad using linear interpolation. This is based on the distance (Gaussian kernel) from the bad channel, as determined from the x,y channel coordinates.

Details of the interpolation function (written by Olivier Winter) used in the IBL pipeline can be found at:

International Brain Laboratory et al. (2022). Spike sorting pipeline for the International Brain Laboratory. https://www.internationalbrainlab.com/repro-ephys

Parameters
recording: BaseRecording

The parent recording

bad_channel_idslist or 1d np.array

Channel ids of the bad channels to interpolate.

sigma_umfloat

Distance between sequential channels in um. If None, will use the most common distance between y-axis channels, by default None

pfloat

Exponent of the Gaussian kernel. Determines rate of decay for distance weightings, by default 1.3

weightsnp.array

The weights to give to bad_channel_ids at interpolation. If None, weights are automatically computed, by default None

Returns
interpolated_recording: InterpolateBadChannelsRecording

The recording object with interpolated bad channels
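
A sketch of the typical chain (recording assumed to exist): detect bad channels first, then interpolate them:

>>> import spikeinterface.preprocessing as spre
>>> bad_channel_ids, channel_labels = spre.detect_bad_channels(recording, method="coherence+psd")
>>> recording_interp = spre.interpolate_bad_channels(recording, bad_channel_ids)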

spikeinterface.preprocessing.normalize_by_quantile(recording, scale=1.0, median=0.0, q1=0.01, q2=0.99, mode='by_channel', dtype='float32', **random_chunk_kwargs)

Rescale the traces from the given recording extractor with a scalar and offset. First, the median and quantiles of the distribution are estimated. Then the distribution is rescaled and offset so that the distance between the quantiles (1st and 99th by default) matches the given scale, and the median is set to the given median.

Parameters
recording: RecordingExtractor

The recording extractor to be transformed

scale: float

Scale for the output distribution

median: float

Median for the output distribution

q1: float (default 0.01)

Lower quantile used for measuring the scale

q2: float (default 0.99)

Upper quantile used for measuring the scale

seed: int

Random seed for reproducibility

dtype: str or np.dtype

The dtype of the output traces. Default “float32”

**random_chunk_kwargs: keyword arguments for `get_random_data_chunks()` function
Returns
rescaled_traces: NormalizeByQuantileRecording

The rescaled traces recording extractor object

spikeinterface.preprocessing.notch_filter(recording, freq=3000, q=30, margin_ms=5.0, dtype=None)

Notch filter of a recording.

Parameters
recording: RecordingExtractor

The recording extractor to be notch-filtered

freq: int or float

The target frequency in Hz of the notch filter

q: int

The quality factor of the notch filter

Returns
filter_recording: NotchFilterRecording

The notch-filtered recording extractor object

spikeinterface.preprocessing.phase_shift(recording, margin_ms=40.0, inter_sample_shift=None, dtype=None)

This applies a phase shift to a recording to cancel the small sampling delays across channels for some recording systems.

This is particularly relevant for Neuropixels recordings.

This code is inspired by the IBL lib (https://github.com/int-brain-lab/ibllib/blob/master/ibllib/dsp/fourier.py) and also by SpikeGLX (https://billkarsh.github.io/SpikeGLX/help/dmx_vs_gbl/dmx_vs_gbl/).

Parameters
recording: Recording

The recording. It needs to have “inter_sample_shift” in its properties.

margin_ms: float (default 40)

Margin in ms for the computation. 40 ms ensures a very small error when doing chunk processing.

inter_sample_shift: None or numpy array

If “inter_sample_shift” is not in the recording properties, it can be provided externally.

Returns
filter_recording: PhaseShiftRecording

The phase shifted recording object

spikeinterface.preprocessing.rectify(recording)
spikeinterface.preprocessing.remove_artifacts(recording, list_triggers, ms_before=0.5, ms_after=3.0, mode='zeros', fit_sample_spacing=1.0, list_labels=None, artifacts=None, sparsity=None, scale_amplitude=False, time_jitter=0, waveforms_kwargs={'allow_unfiltered': True, 'mode': 'memory'})

Removes stimulation artifacts from recording extractor traces. By default, artifact periods are zeroed-out (mode = ‘zeros’). This is only recommended for traces that are centered around zero (e.g. through a prior highpass filter); if this is not the case, linear and cubic interpolation modes are also available, controlled by the ‘mode’ input argument. Note that several artifacts can be removed at once (potentially with distinct duration each), if labels are specified

Parameters
recording: RecordingExtractor

The recording extractor to remove artifacts from

list_triggers: list of lists/arrays

One list per segment of int with the stimulation trigger frames

ms_before: float or None

Time interval in ms to remove before the trigger events. If None, then also ms_after must be None and a single sample is removed

ms_after: float or None

Time interval in ms to remove after the trigger events. If None, then also ms_before must be None and a single sample is removed

list_labels: list of lists/arrays or None

One list per segment of labels with the stimulation labels for the given artefacts. Labels should be strings, for JSON serialization. Required for ‘median’ and ‘average’ modes.

mode: str

Determines what artifacts are replaced by. Can be one of the following:

  • ‘zeros’ (default): Artifacts are replaced by zeros.

  • ‘median’: The median over all artifacts is computed and subtracted for each occurrence of an artifact.

  • ‘average’: The mean over all artifacts is computed and subtracted for each occurrence of an artifact.

  • ‘linear’: Replacements are obtained through linear interpolation between the trace before and after the artifact. If the trace starts or ends with an artifact period, the gap is filled with the closest available value before or after the artifact.

  • ‘cubic’: Cubic spline interpolation between the trace before and after the artifact, referenced to evenly spaced fit points before and after the artifact. This is an option that can be helpful if there are significant LFP effects around the time of the artifact, but visual inspection of fit behaviour with your chosen settings is recommended. The spacing of fit points is controlled by ‘fit_sample_spacing’, with greater spacing between points leading to a fit that is less sensitive to high frequency fluctuations but at the cost of a less smooth continuation of the trace. If the trace starts or ends with an artifact, the gap is filled with the closest available value before or after the artifact.

fit_sample_spacing: float

Determines the spacing (in ms) of reference points for the cubic spline fit if mode = ‘cubic’. Default = 1ms. Note: The actual fit samples are the median of the 5 data points around the time of each sample point to avoid excessive influence from hyper-local fluctuations.

artifacts: dict

If provided (when mode is ‘median’ or ‘average’) then it must be a dict with keys that are the labels of the artifacts, and values the artifacts themselves, on all channels (and thus bypassing ms_before and ms_after)

sparsity: dict

If provided (when mode is ‘median’ or ‘average’) then it must be a dict with keys that are the labels of the artifacts, and values that are boolean mask of the channels where the artifacts should be considered (for subtraction/scaling)

scale_amplitude: bool (default False)

If True, then for mode ‘median’ or ‘average’ the amplitude of the template will be scaled at each occurrence to minimize the residuals

time_jitter: float (default 0)

If non-zero, then for mode ‘median’ or ‘average’, a time jitter in ms can be allowed to minimize the residuals

waveforms_kwargs: dict or None

The arguments passed to the WaveformExtractor object when extracting the artifacts, for mode ‘median’ or ‘average’. By default, the global job kwargs are used, in addition to {‘allow_unfiltered’ : True, ‘mode’:’memory’}. To estimate sparse artifact

Returns
removed_recording: RemoveArtifactsRecording

The recording extractor after artifact removal
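
A minimal sketch (recording assumed to exist; the trigger frames are hypothetical), zeroing 0.5 ms before and 3 ms after each trigger:

>>> import spikeinterface.preprocessing as spre
>>> list_triggers = [[10000, 50000, 90000]]  # one list of trigger frames per segment
>>> recording_clean = spre.remove_artifacts(recording, list_triggers, ms_before=0.5, ms_after=3.0, mode="zeros")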

spikeinterface.preprocessing.scale(recording, gain=1.0, offset=0.0, dtype='float32')

Scale traces from the given recording extractor with a scalar and offset. New traces = traces*scalar + offset.

Parameters
recording: RecordingExtractor

The recording extractor to be transformed

gain: float or array

Scalar for the traces of the recording extractor or array with scalars for each channel

offset: float or array

Offset for the traces of the recording extractor or array with offsets for each channel

dtype: str or np.dtype

The dtype of the output traces. Default “float32”

Returns
transform_traces: ScaleRecording

The transformed traces recording extractor object

spikeinterface.preprocessing.whiten(recording, dtype='float32', num_chunks_per_segment=20, chunk_size=10000, seed=None, W=None)

Whitens the recording extractor traces.

Parameters
recording: RecordingExtractor

The recording extractor to be whitened.

num_chunks_per_segment: int

Number of chunks per segment for random chunk, by default 20

chunk_sizeint

Size of a chunk in number for random chunk, by default 10000

seedint

Random seed for random chunk, by default None

W2d np.array

Pre-computed whitening matrix, by default None

Returns
whitened_recording: WhitenRecording

The whitened recording extractor
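
A minimal sketch (recording assumed to exist); the whitening matrix is estimated from random data chunks unless W is provided:

>>> import spikeinterface.preprocessing as spre
>>> recording_whitened = spre.whiten(recording, dtype="float32", num_chunks_per_segment=20, chunk_size=10000)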

spikeinterface.preprocessing.zero_channel_pad(parent_recording: BaseRecording, num_channels: int, channel_mapping: Optional[list] = None)

spikeinterface.postprocessing

spikeinterface.postprocessing.compute_noise_levels(waveform_extractor, load_if_exists=False, **params)

Computes the noise level associated to each recording channel.

This function wraps get_noise_levels(recording) to make the noise levels persistent on disk (folder or zarr) as a WaveformExtension. The noise levels do not depend on the unit list, only on the recording, but it is a convenient way to retrieve the noise levels directly in the WaveformExtractor.

Note that the noise levels can be scaled or not, depending on the return_scaled parameter of the WaveformExtractor.

Parameters
waveform_extractor: WaveformExtractor

A waveform extractor object.

num_chunks_per_segment: int (default 20)

Number of chunks to estimate the noise

chunk_size: int (default 10000)

Size of chunks in samples

seed: int (default None)

Optionally, a seed for reproducibility.

Returns
noise_levels: np.array

noise level vector.

spikeinterface.postprocessing.compute_template_metrics(waveform_extractor, load_if_exists=False, metric_names=None, peak_sign='neg', upsampling_factor=10, sparsity=None, window_slope_ms=0.7)
Compute template metrics including:
  • peak_to_valley

  • peak_trough_ratio

  • halfwidth

  • repolarization_slope

  • recovery_slope

Parameters
waveform_extractorWaveformExtractor, optional

The waveform extractor used to compute template metrics

load_if_existsbool, optional, default: False

Whether to load precomputed template metrics, if they already exist.

metric_nameslist, optional

List of metrics to compute (see si.postprocessing.get_template_metric_names()), by default None

peak_signstr, optional

“pos” | “neg”, by default ‘neg’

upsampling_factorint, optional

Upsample factor, by default 10

sparsity: dict or None

Default is sparsity=None and template metrics are computed on the extremum channel only. If given, the dictionary should contain unit ids as keys and a channel id or a list of channel ids as values. For generating a sparsity dict, see the postprocessing.compute_sparsity() function.

window_slope_ms: float

Window in ms after the positive peak to compute slope, by default 0.7

Returns
template_metricspd.DataFrame

Dataframe with the computed template metrics. If ‘sparsity’ is None, the index is the unit_id. If ‘sparsity’ is given, the index is a multi-index (unit_id, channel_id)
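
A minimal sketch (we is assumed to be an existing WaveformExtractor); the metric names are taken from the list above:

>>> import spikeinterface.postprocessing as spost
>>> template_metrics = spost.compute_template_metrics(we, metric_names=["peak_to_valley", "halfwidth"])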

spikeinterface.postprocessing.compute_principal_components(waveform_extractor, load_if_exists=False, n_components=5, mode='by_channel_local', sparsity=None, whiten=True, dtype='float32', **job_kwargs)

Compute PC scores from waveform extractor. The PCA projections are pre-computed only on the sampled waveforms available from the WaveformExtractor.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

load_if_exists: bool

If True and pc scores are already in the waveform extractor folders, pc scores are loaded and not recomputed.

n_components: int

Number of components for PCA - default 5

mode: str
  • ‘by_channel_local’: a local PCA is fitted for each channel (projection by channel)

  • ‘by_channel_global’: a global PCA is fitted for all channels (projection by channel)

  • ‘concatenated’: channels are concatenated and a global PCA is fitted

sparsity: ChannelSparsity or None

The sparsity to apply to waveforms. If waveform_extractor is already sparse, the default sparsity will be used.

whiten: bool

If True, waveforms are pre-whitened

dtype: dtype

Dtype of the pc scores (default float32)

n_jobs: int

Number of jobs used to fit the PCA model (if mode is ‘by_channel_local’) - default 1

progress_bar: bool

If True, a progress bar is shown - default False

Returns
pc: WaveformPrincipalComponent

The waveform principal component object

Examples

>>> we = si.extract_waveforms(recording, sorting, folder='waveforms')
>>> pc = st.compute_principal_components(we, n_components=3, mode='by_channel_local')
>>> # get pre-computed projections for unit_id=1
>>> projections = pc.get_projections(unit_id=1)
>>> # get all pre-computed projections and labels
>>> all_projections, all_labels = pc.get_all_projections()
>>> # retrieve fitted pca model(s)
>>> pca_model = pc.get_pca_model()
>>> # compute projections on new waveforms
>>> proj_new = pc.project_new(new_waveforms)
>>> # run for all spikes in the SortingExtractor
>>> pc.run_for_all_spikes(file_path="all_pca_projections.npy")
spikeinterface.postprocessing.compute_spike_amplitudes(waveform_extractor, load_if_exists=False, peak_sign='neg', return_scaled=True, outputs='concatenated', **job_kwargs)

Computes the spike amplitudes from a WaveformExtractor.

  1. The waveform extractor is used to determine the max channel per unit.

  2. Then a “peak_shift” is estimated because for some sorters the spike index is not always at the peak.

  3. Amplitudes are extracted in chunks (parallel or not)

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor object

load_if_existsbool, optional, default: False

Whether to load precomputed spike amplitudes, if they already exist.

peak_sign: str
The sign to compute maximum channel:
  • ‘neg’

  • ‘pos’

  • ‘both’

return_scaled: bool

If True and recording has gain_to_uV/offset_to_uV properties, amplitudes are converted to uV.

outputs: str
How the output should be returned:
  • ‘concatenated’

  • ‘by_unit’

Returns
amplitudes: np.array or list of dict
The spike amplitudes.
  • If ‘concatenated’ all amplitudes for all spikes and all units are concatenated

  • If ‘by_unit’, amplitudes are returned as a list (for segments) of dictionaries (for units)
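
A minimal sketch (we is assumed to be an existing WaveformExtractor):

>>> import spikeinterface.postprocessing as spost
>>> amplitudes = spost.compute_spike_amplitudes(we, peak_sign="neg", outputs="concatenated")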

spikeinterface.postprocessing.compute_unit_locations(waveform_extractor, load_if_exists=False, method='center_of_mass', outputs='numpy', **method_kwargs)

Localize units in 2D or 3D with several methods given the template.

Parameters
waveform_extractor: WaveformExtractor

A waveform extractor object.

load_if_existsbool, optional, default: False

Whether to load precomputed unit locations, if they already exist.

method: str

‘center_of_mass’ / ‘monopolar_triangulation’

outputs: str

‘numpy’ (default) / ‘by_unit’

method_kwargs:

Other kwargs depending on the method.

Returns
unit_locations: np.array

Unit locations with shape (num_units, 2), (num_units, 3), or (num_units, 4) (with alpha)

spikeinterface.postprocessing.compute_spike_locations(waveform_extractor, load_if_exists=False, ms_before=1.0, ms_after=1.5, method='center_of_mass', method_kwargs={}, outputs='concatenated', **job_kwargs)

Localize spikes in 2D or 3D with several methods given the template.

Parameters
waveform_extractorWaveformExtractor

A waveform extractor object.

load_if_existsbool, optional, default: False

Whether to load precomputed spike locations, if they already exist.

ms_beforefloat

The left window, before a peak, in milliseconds.

ms_afterfloat

The right window, after a peak, in milliseconds.

methodstr

‘center_of_mass’ / ‘monopolar_triangulation’

method_kwargsdict

Other kwargs depending on the method.

outputsstr

‘concatenated’ (default) / ‘by_unit’

Returns
spike_locations: np.array or list of dict
The spike locations.
  • If ‘concatenated’ all locations for all spikes and all units are concatenated

  • If ‘by_unit’, locations are returned as a list (for segments) of dictionaries (for units)

spikeinterface.postprocessing.compute_template_similarity(waveform_extractor, load_if_exists=False, method='cosine_similarity', waveform_extractor_other=None)

Compute similarity between templates with several methods.

Parameters
waveform_extractor: WaveformExtractor

A waveform extractor object

load_if_existsbool, optional, default: False

Whether to load precomputed similarity, if it already exists.

method: str

Method name (‘cosine_similarity’)

waveform_extractor_other: WaveformExtractor, optional

A second waveform extractor object

Returns
similarity: np.array

The similarity matrix

spikeinterface.postprocessing.compute_correlograms(waveform_or_sorting_extractor, load_if_exists=False, window_ms: float = 100.0, bin_ms: float = 5.0, method: str = 'auto')

Compute auto and cross correlograms.

Parameters
waveform_or_sorting_extractorWaveformExtractor or BaseSorting

If WaveformExtractor, the correlograms are saved as WaveformExtensions.

load_if_existsbool, optional, default: False

Whether to load precomputed crosscorrelograms, if they already exist.

window_msfloat, optional

The window in ms, by default 100.0.

bin_msfloat, optional

The bin size in ms, by default 5.0.

methodstr, optional

“auto” | “numpy” | “numba”. If “auto” and numba is installed, numba is used, by default “auto”

Returns
ccgsnp.array

Correlograms with shape (num_units, num_units, num_bins). The diagonal of ccgs contains the auto correlograms. ccgs[A, B, :] is the symmetric of ccgs[B, A, :] and has to be read as the histogram of spiketimesA - spiketimesB

binsnp.array

The bin edges in ms
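
A minimal sketch (sorting is assumed to be an existing BaseSorting; a WaveformExtractor can be passed instead to persist the result):

>>> import spikeinterface.postprocessing as spost
>>> ccgs, bins = spost.compute_correlograms(sorting, window_ms=100.0, bin_ms=5.0)
>>> acg_unit0 = ccgs[0, 0, :]  # auto-correlogram of the first unit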

spikeinterface.postprocessing.compute_isi_histograms(waveform_or_sorting_extractor, load_if_exists=False, window_ms: float = 50.0, bin_ms: float = 1.0, method: str = 'auto')

Compute ISI histograms.

Parameters
waveform_or_sorting_extractorWaveformExtractor or BaseSorting

If WaveformExtractor, the ISI histograms are saved as WaveformExtensions.

load_if_existsbool, optional, default: False

Whether to load precomputed ISI histograms, if they already exist.

window_msfloat, optional

The window in ms, by default 50.0.

bin_msfloat, optional

The bin size in ms, by default 1.0.

methodstr, optional

“auto” | “numpy” | “numba”. If “auto” and numba is installed, numba is used, by default “auto”

Returns
isi_histogramsnp.array

ISI histograms with shape (num_units, num_bins)

binsnp.array

The bin edges in ms

spikeinterface.postprocessing.get_template_metric_names()
spikeinterface.postprocessing.align_sorting(sorting, unit_peak_shifts)

Class to shift a unit (generally to align the template on the peak) given the shifts for each unit.

Parameters
sorting: BaseSorting

The sorting to align.

unit_peak_shifts: dict

Dictionary mapping the unit_id to the unit’s shift (in number of samples). A positive shift means the spike train is shifted back in time, while a negative shift means the spike train is shifted forward.

Returns
aligned_sorting: AlignSortingExtractor

The aligned sorting.

spikeinterface.qualitymetrics

spikeinterface.qualitymetrics.compute_quality_metrics(waveform_extractor, load_if_exists=False, metric_names=None, qm_params=None, peak_sign=None, seed=None, sparsity=None, skip_pc_metrics=False, verbose=False, **job_kwargs)

Compute quality metrics on waveform extractor.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor to compute metrics on.

load_if_existsbool, optional, default: False

Whether to load precomputed quality metrics, if they already exist.

metric_nameslist or None

List of quality metrics to compute.

qm_paramsdict or None

Dictionary with parameters for quality metrics calculation. Default parameters can be obtained with: si.qualitymetrics.get_default_qm_params()

sparsitydict or None

If given, the sparse channel_ids for each unit in PCA metrics computation. This is used also to identify neighbor units and speed up computations. If None (default) all channels and all units are used for each unit.

skip_pc_metricsbool

If True, PC metrics computation is skipped.

n_jobsint

Number of jobs (used for PCA metrics)

verbosebool

If True, output is verbose.

progress_barbool

If True, progress bar is shown.

Returns
metrics: pandas.DataFrame

Data frame with the computed metrics
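
A minimal sketch (we is assumed to be an existing WaveformExtractor); the metric names shown are a subset of those returned by get_quality_metric_list():

>>> import spikeinterface.qualitymetrics as sqm
>>> metrics = sqm.compute_quality_metrics(we, metric_names=["snr", "firing_rate", "presence_ratio"])
>>> metrics.head()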

spikeinterface.qualitymetrics.get_quality_metric_list()

Get a list of the available quality metrics.

spikeinterface.qualitymetrics.get_quality_pca_metric_list()

Get a list of the available PCA-based quality metrics.

spikeinterface.qualitymetrics.get_default_qm_params()

Return default dictionary of quality metrics parameters.

Returns
dict

Default qm parameters with metric name as key and parameter dictionary as values.

spikeinterface.sorters

spikeinterface.sorters.available_sorters()

Lists available sorters.

spikeinterface.sorters.installed_sorters()

Lists installed sorters.

spikeinterface.sorters.get_default_sorter_params(sorter_name_or_class)

Returns default parameters for the specified sorter.

Parameters
sorter_name_or_class: str or SorterClass

The sorter to retrieve default parameters from.

Returns
default_params: dict

Dictionary with default params for the specified sorter.

spikeinterface.sorters.get_sorter_params_description(sorter_name_or_class)

Returns a description of the parameters for the specified sorter.

Parameters
sorter_name_or_class: str or SorterClass

The sorter to retrieve parameters description from.

Returns
params_description: dict

Dictionary with parameter description

spikeinterface.sorters.print_sorter_versions()

Prints the versions of the installed sorters.

spikeinterface.sorters.get_sorter_description(sorter_name_or_class)

Returns a brief description for the specified sorter.

Parameters
sorter_name_or_class: str or SorterClass

The sorter to retrieve description from.

Returns
sorter_description: dict

Dictionary with the sorter description.

spikeinterface.sorters.run_sorter(sorter_name: str, recording: BaseRecording, output_folder: Optional[str] = None, remove_existing_folder: bool = True, delete_output_folder: bool = False, verbose: bool = False, raise_error: bool = True, docker_image: Optional[Union[bool, str]] = False, singularity_image: Optional[Union[bool, str]] = False, with_output: bool = True, **sorter_params)

Generic function to run a sorter via function approach.

Parameters
sorter_name: str

The sorter name

recording: RecordingExtractor

The recording extractor to be spike sorted

output_folder: str or Path

Path to output folder

remove_existing_folder: bool

If True and output_folder already exists, then it is deleted.

delete_output_folder: bool

If True, output folder is deleted (default False)

verbose: bool

If True, output is verbose

raise_error: bool

If True, an error is raised if spike sorting fails (default). If False, the process continues and the error is logged in the log file.

docker_image: bool or str

If True, pull the default docker container for the sorter and run the sorter in that container using docker. Use a str to specify a non-default container. If that container is not local it will be pulled from docker hub. If False, the sorter is run locally.

singularity_image: bool or str

If True, pull the default docker container for the sorter and run the sorter in that container using singularity. Use a str to specify a non-default container. If that container is not local it will be pulled from Docker Hub. If False, the sorter is run locally.

**sorter_params: keyword args

Spike sorter specific arguments (they can be retrieved with ‘get_default_params(sorter_name_or_class)’)

Returns
sortingextractor: SortingExtractor

The spike sorted data

Examples

>>> sorting = run_sorter("tridesclous", recording)
spikeinterface.sorters.run_sorters(sorter_list, recording_dict_or_list, working_folder, sorter_params={}, mode_if_folder_exists='raise', engine='loop', engine_kwargs={}, verbose=False, with_output=True, docker_images={}, singularity_images={})

Run several sorters on several recordings.

Parameters
sorter_list: list of str

List of sorter names.

recording_dict_or_list: dict or list

If a dict of recording, each key should be the name of the recording. If a list, the names should be recording_0, recording_1, etc.

working_folder: str

The working directory.

sorter_params: dict of dict with sorter_name as key

This allows overwriting the default params for each sorter.

mode_if_folder_exists: {‘raise’, ‘overwrite’, ‘keep’}
The mode when the subfolder of recording/sorter already exists.
  • ‘raise’ : raise error if subfolder exists

  • ‘overwrite’ : delete and force recompute

  • ‘keep’ : do not compute again if the subfolder exists and the log is OK

engine: {‘loop’, ‘joblib’, ‘dask’}

Which engine to use to run sorter.

engine_kwargs: dict
This contains kwargs specific to the launcher engine:
  • ‘loop’ : no kwargs

  • ‘joblib’ : {‘n_jobs’ : } number of processes

  • ‘dask’ : {‘client’:} the dask client for submitting task

verbose: bool

Controls sorter verboseness.

with_output: bool

Whether to return the output.

docker_images: dict

A dictionary {sorter_name : docker_image} to specify if some sorters should use docker images.

singularity_images: dict

A dictionary {sorter_name : singularity_image} to specify if some sorters should use singularity images

Returns
resultsdict

The output is nested dict[(rec_name, sorter_name)] of SortingExtractor.
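
A minimal sketch with hypothetical recordings and sorter names (the chosen sorters must be installed, or run through docker/singularity images):

>>> from spikeinterface.sorters import run_sorters
>>> recordings = {"rec0": recording0, "rec1": recording1}  # hypothetical recording extractors
>>> results = run_sorters(["tridesclous", "herdingspikes"], recordings, working_folder="all_sorters_output")
>>> sorting_rec0_tdc = results[("rec0", "tridesclous")]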

spikeinterface.sorters.run_sorter_by_property(sorter_name, recording, grouping_property, working_folder, mode_if_folder_exists='raise', engine='loop', engine_kwargs={}, verbose=False, docker_image=None, singularity_image=None, **sorter_params)

Generic function to run a sorter on a recording after splitting by a ‘grouping_property’ (e.g. ‘group’).

Internally, the function works as follows:
  • the recording is split based on the provided ‘grouping_property’ (using the ‘split_by’ function)

  • the ‘run_sorters’ function is run on the split recordings

  • sorting outputs are aggregated using the ‘aggregate_units’ function

  • the ‘grouping_property’ is added as a property to the SortingExtractor

Parameters
sorter_name: str

The sorter name

recording: BaseRecording

The recording to be sorted

grouping_property: object

Property to split by before sorting

working_folder: str

The working directory.

mode_if_folder_exists: {‘raise’, ‘overwrite’, ‘keep’}
The mode when the subfolder of recording/sorter already exists.
  • ‘raise’ : raise error if subfolder exists

  • ‘overwrite’ : delete and force recompute

  • ‘keep’ : do not compute again if the subfolder exists and the log is OK

engine: {‘loop’, ‘joblib’, ‘dask’}

Which engine to use to run sorter.

engine_kwargs: dict
This contains kwargs specific to the launcher engine:
  • ‘loop’ : no kwargs

  • ‘joblib’ : {‘n_jobs’ : } number of processes

  • ‘dask’ : {‘client’:} the dask client for submitting task

verbose: bool

If True, output is verbose (default False)

docker_image: None or str

If a str, run the sorter inside a container (docker) using the docker package.

**sorter_params: keyword args

Spike sorter specific arguments (they can be retrieved with ‘get_default_params(sorter_name_or_class)’)

Returns
sortingUnitsAggregationSorting

The aggregated SortingExtractor.

Examples

This example shows how to run spike sorting split by group using the ‘joblib’ backend with 4 jobs for parallel processing.

>>> sorting = si.run_sorter_by_property("tridesclous", recording, grouping_property="group",
                                        working_folder="sort_by_group", engine="joblib",
                                        engine_kwargs={"n_jobs": 4})

Low level

class spikeinterface.sorters.BaseSorter(recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Base Sorter object.

spikeinterface.comparison

spikeinterface.comparison.compare_two_sorters(sorting1, sorting2, sorting1_name=None, sorting2_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, chance_score=0.1, n_jobs=-1, verbose=False)

Compares two spike sorter outputs.

  • Spike trains are matched based on their agreement scores

  • Individual spikes are labelled as true positives (TP), false negatives (FN), false positives 1 (FP from spike train 1), false positives 2 (FP from spike train 2), misclassifications (CL)

It also allows one to get the confusion matrix and the agreement, false positive, and false negative fractions.

Parameters
sorting1: SortingExtractor

The first sorting for the comparison

sorting2: SortingExtractor

The second sorting for the comparison

sorting1_name: str

The name of sorter 1

sorting2_name:str

The name of sorter 2

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

verbose: bool

If True, output is verbose

Returns
sorting_comparison: SortingComparison

The SortingComparison object
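
A minimal sketch (sorting_HS and sorting_TDC are assumed to be existing sorting extractors from two different sorters):

>>> import spikeinterface.comparison as sc
>>> cmp = sc.compare_two_sorters(sorting_HS, sorting_TDC, sorting1_name="HS", sorting2_name="TDC")
>>> # matched unit pairs (hungarian assignment) in both directions
>>> match_12, match_21 = cmp.get_matching()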

spikeinterface.comparison.compare_multiple_sorters(sorting_list, name_list=None, delta_time=0.4, match_score=0.5, chance_score=0.1, n_jobs=-1, spiketrain_mode='union', verbose=False, do_matching=True)

Compares multiple spike sorting outputs based on spike trains.

  • Pair-wise comparisons are made

  • An agreement graph is built based on the agreement score

It allows to return a consensus-based sorting extractor with the get_agreement_sorting() method.

Parameters
sorting_list: list

List of sorting extractor objects to be compared

name_list: list

List of spike sorter names. If not given, sorters are named as ‘sorter0’, ‘sorter1’, ‘sorter2’, etc.

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

spiketrain_mode: str
Mode to extract agreement spike trains:
  • ‘union’: spike trains are the union between the spike trains of the best matching two sorters

  • ‘intersection’: spike trains are the intersection between the spike trains of the best matching two sorters

verbose: bool

if True, output is verbose

Returns
multi_sorting_comparison: MultiSortingComparison

MultiSortingComparison object with the multiple sorter comparison

spikeinterface.comparison.compare_sorter_to_ground_truth(gt_sorting, tested_sorting, gt_name=None, tested_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, well_detected_score=0.8, redundant_score=0.2, overmerged_score=0.2, chance_score=0.1, exhaustive_gt=False, n_jobs=-1, match_mode='hungarian', compute_labels=False, compute_misclassifications=False, verbose=False)

Compares a sorter to a ground truth.

This class can:
  • compute a “match” between gt_sorting and tested_sorting

  • compute optionally the score label (TP, FN, CL, FP) for each spike

  • count, for each GT unit, the total of each (TP, FN, CL, FP) into a DataFrame GroundTruthComparison.count

  • compute the confusion matrix .get_confusion_matrix()

  • compute some performance metrics with several strategies based on the count score by unit

  • count well detected units

  • count false positive detected units

  • count redundant units

  • count overmerged units

  • summarize all of this

Parameters
gt_sorting: SortingExtractor

The first sorting for the comparison

tested_sorting: SortingExtractor

The second sorting for the comparison

gt_name: str

The name of sorter 1

tested_name:str

The name of sorter 2

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

redundant_score: float

Agreement score above which units are redundant (default 0.2)

overmerged_score: float

Agreement score above which units can be overmerged (default 0.2)

well_detected_score: float

Agreement score above which units are well detected (default 0.8)

exhaustive_gt: bool (default False)

Tells if the ground truth is “exhaustive” or not, in other words whether the GT has all possible units. It allows more performance measurements. For instance, MEArec simulated datasets have exhaustive_gt=True

match_mode: ‘hungarian’, or ‘best’

The match mode used for counting: ‘hungarian’ or ‘best’ match.

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

compute_labels: bool

If True, labels are computed at instantiation (default False)

compute_misclassifications: bool

If True, misclassifications are computed at instantiation (default False)

verbose: bool

If True, output is verbose

Returns
sorting_comparison: SortingComparison

The SortingComparison object
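
A minimal sketch (gt_sorting and tested_sorting assumed to exist); get_performance() and get_confusion_matrix() are the methods mentioned above:

>>> import spikeinterface.comparison as sc
>>> cmp_gt = sc.compare_sorter_to_ground_truth(gt_sorting, tested_sorting, exhaustive_gt=True)
>>> perf = cmp_gt.get_performance()
>>> confusion = cmp_gt.get_confusion_matrix()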

spikeinterface.comparison.compare_templates(we1, we2, we1_name=None, we2_name=None, unit_ids1=None, unit_ids2=None, match_score=0.7, chance_score=0.3, similarity_method='cosine_similarity', sparsity_dict=None, verbose=False)

Compares units from different sessions based on template similarity

Parameters
we1WaveformExtractor

The first waveform extractor to get templates to compare

we2WaveformExtractor

The second waveform extractor to get templates to compare

unit_ids1list, optional

List of units from we1 to compare, by default None

unit_ids2list, optional

List of units from we2 to compare, by default None

similarity_methodstr, optional

Method for the similarity matrix, by default “cosine_similarity”

sparsity_dictdict, optional

Dictionary for sparsity, by default None

verbosebool, optional

If True, output is verbose, by default False

Returns
comparisonTemplateComparison

The output TemplateComparison object

spikeinterface.comparison.compare_multiple_templates(waveform_list, name_list=None, match_score=0.8, chance_score=0.3, verbose=False, similarity_method='cosine_similarity', sparsity_dict=None, do_matching=True)

Compares multiple waveform extractors using template similarity.

  • Pair-wise comparisons are made

  • An agreement graph is built based on the agreement score

Parameters
waveform_list: list

List of waveform extractor objects to be compared

name_list: list

List of session names. If not given, sorters are named as ‘sess0’, ‘sess1’, ‘sess2’, etc.

match_score: float

Minimum agreement score to match units (default 0.8)

chance_score: float

Minimum agreement score for a possible match (default 0.3)

verbose: bool

If True, output is verbose

Returns
multi_template_comparison: MultiTemplateComparison

MultiTemplateComparison object with the multiple template comparisons
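
For example, a minimal sketch (assuming we_list is a list of pre-computed WaveformExtractor objects, one per session):

    from spikeinterface.comparison import compare_multiple_templates

    # we_list is assumed to be a list of existing WaveformExtractor objects
    multi_tmp_cmp = compare_multiple_templates(we_list, name_list=['sess0', 'sess1', 'sess2'])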

spikeinterface.comparison.aggregate_performances_table(study_folder, exhaustive_gt=False, **karg_thresh)

Aggregate some results into DataFrames to get a “study” overview over all recording X sorter combinations.

Tables are:
  • run_times: run times per recording X sorter

  • perf_pooled_with_sum: see GroundTruthComparison.get_performance

  • perf_pooled_with_average: see GroundTruthComparison.get_performance

  • count_units: given some thresholds, count how many units are ‘well_detected’, ‘redundant’, ‘false_positive_units’, ‘bad’

Parameters
study_folder: str

The study folder.

karg_thresh: dict

Threshold parameters used for the “count_units” table.

Returns
dataframes: a dict of DataFrame

Returns several useful DataFrames to compare all results. Note that count_units depends on karg_thresh.

spikeinterface.comparison.create_hybrid_units_recording(parent_recording: BaseRecording, templates: ndarray, injected_sorting: Optional[BaseSorting] = None, nbefore: Optional[Union[List[int], int]] = None, firing_rate: float = 10, amplitude_factor: Optional[ndarray] = None, amplitude_std: float = 0.0, refractory_period_ms: float = 2.0, injected_sorting_folder: Optional[Union[str, Path]] = None)

Class for creating a hybrid recording where additional units are added to an existing recording.

Parameters
parent_recording: BaseRecording

Existing recording to add on top of.

templates: np.ndarray[n_units, n_samples, n_channels]

Array containing the templates to inject for all the units.

injected_sorting: BaseSorting | None:

The sorting for the injected units. If None, will be generated using the following parameters.

nbefore: list[int] | int | None

The sample index of the template center (peak) for each unit. If None, it defaults to the position of the highest peak.

firing_rate: float

The firing rate of the injected units (in Hz).

amplitude_factor: np.ndarray | None:

The amplitude factor for each spike. If None, will be generated as a gaussian centered at 1.0 and with an std of amplitude_std.

amplitude_std: float

The standard deviation of the amplitude (centered at 1.0).

refractory_period_ms: float

The refractory period of the injected spike train (in ms).

injected_sorting_folder: str | Path | None

If given, the injected sorting is saved to this folder. It must be specified if injected_sorting is None or not dumpable.

Returns
hybrid_units_recording: HybridUnitsRecording

The recording containing real and hybrid units.
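
A minimal sketch (assuming recording is an existing BaseRecording and templates is a (n_units, n_samples, n_channels) array, for example obtained from a WaveformExtractor):

    from spikeinterface.comparison import create_hybrid_units_recording

    # recording is an existing BaseRecording; templates has shape (n_units, n_samples, n_channels)
    hybrid_rec = create_hybrid_units_recording(
        recording, templates,
        firing_rate=5.0, amplitude_std=0.1,
        injected_sorting_folder='injected_sorting',
    )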

spikeinterface.comparison.create_hybrid_spikes_recording(wvf_extractor: Union[WaveformExtractor, Path], injected_sorting: Optional[BaseSorting] = None, unit_ids: Optional[List[int]] = None, max_injected_per_unit: int = 1000, injected_rate: float = 0.05, refractory_period_ms: float = 1.5, injected_sorting_folder: Optional[Union[str, Path]] = None) None

Class for creating a hybrid recording where additional spikes are added to already existing units.

Parameters
wvf_extractor: WaveformExtractor

The waveform extractor object of the existing recording.

injected_sorting: BaseSorting | None

Additional spikes to inject. If None, will generate it.

max_injected_per_unit: int

If injected_sorting=None, the max number of spikes per unit that is allowed to be injected.

unit_ids: list[int] | None

The unit_ids from the wvf_extractor to use for spike injection.

injected_rate: float

If injected_sorting=None, the max fraction of spikes per unit that is allowed to be injected.

refractory_period_ms: float

If injected_sorting=None, the injected spikes need to respect this refractory period.

injected_sorting_folder: str | Path | None

If given, the injected sorting is saved to this folder. It must be specified if injected_sorting is None or not dumpable.

Returns
hybrid_spikes_recording: HybridSpikesRecording:

The recording containing units with real and hybrid spikes.

class spikeinterface.comparison.GroundTruthComparison(gt_sorting, tested_sorting, gt_name=None, tested_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, well_detected_score=0.8, redundant_score=0.2, overmerged_score=0.2, chance_score=0.1, exhaustive_gt=False, n_jobs=-1, match_mode='hungarian', compute_labels=False, compute_misclassifications=False, verbose=False)

Compares a sorter to a ground truth.

This class can:
  • compute a “match” between gt_sorting and tested_sorting

  • optionally compute the score label (TP, FN, CL, FP) for each spike

  • count, for each GT unit, the total of each label (TP, FN, CL, FP) into a DataFrame GroundTruthComparison.count

  • compute the confusion matrix .get_confusion_matrix()

  • compute performance metrics with several strategies based on the per-unit count scores

  • count well detected units

  • count false positive detected units

  • count redundant units

  • count overmerged units

  • summarize all of this

Parameters
gt_sorting: SortingExtractor

The first sorting for the comparison

tested_sorting: SortingExtractor

The second sorting for the comparison

gt_name: str

The name of the ground-truth sorting

tested_name: str

The name of the tested sorting

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

redundant_score: float

Agreement score above which units are redundant (default 0.2)

overmerged_score: float

Agreement score above which units can be overmerged (default 0.2)

well_detected_score: float

Agreement score above which units are well detected (default 0.8)

exhaustive_gt: bool (default False)

Tells whether the ground truth is “exhaustive” or not, i.e. whether the GT contains all possible units. It allows more performance measurements. For instance, MEArec simulated datasets have exhaustive_gt=True

match_mode: ‘hungarian’, or ‘best’

The match mode used for counting: ‘hungarian’ or ‘best match’.

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

compute_labels: bool

If True, labels are computed at instantiation (default False)

compute_misclassifications: bool

If True, misclassifications are computed at instantiation (default False)

verbose: bool

If True, output is verbose

Returns
sorting_comparison: SortingComparison

The SortingComparison object

count_bad_units()

See get_bad_units().

count_false_positive_units(redundant_score=None)

See get_false_positive_units().

count_overmerged_units(overmerged_score=None)

See get_overmerged_units().

count_redundant_units(redundant_score=None)

See get_redundant_units().

count_well_detected_units(well_detected_score)

Count the number of well detected units. kwargs are the same as for get_well_detected_units().

get_bad_units()

Return the list of “bad units”.

“bad units” are defined as units in tested that are not in the best match list of GT units.

So it is the union of “false positive units” + “redundant units”.

Requires exhaustive_gt=True

get_confusion_matrix()

Computes the confusion matrix.

Returns
confusion_matrix: pandas.DataFrame

The confusion matrix

get_false_positive_units(redundant_score=None)

Return the list of “false positive units” from tested_sorting.

“false positive units” are defined as units in tested that are not matched at all in GT units.

Requires exhaustive_gt=True

Parameters
redundant_score: float (default 0.2)

The agreement score below which tested units are counted as “false positive” (and not “redundant”).

get_overmerged_units(overmerged_score=None)

Return “overmerged units”

“overmerged units” are defined as units in tested that match more than one GT unit with an agreement score larger than overmerged_score.

Parameters
overmerged_score: float (default 0.2)

Tested units with 2 or more agreement scores above ‘overmerged_score’ are counted as “overmerged”.

get_performance(method='by_unit', output='pandas')
Get performance rates with one of several methods:
  • ‘raw_count’ : just render the raw count table

  • ‘by_unit’ : render performance as rates, unit by unit over the GT units

  • ‘pooled_with_average’ : compute rates unit by unit and average them

Parameters
method: str

‘raw_count’, ‘by_unit’, or ‘pooled_with_average’

output: str

‘pandas’ or ‘dict’

Returns
perf: pandas dataframe/series (or dict)

dataframe/series (based on ‘output’) with performance entries

get_redundant_units(redundant_score=None)

Return “redundant units”

“redundant units” are defined as units in tested that match a GT unit with a high agreement score but are not the best match. In other words, GT units that are detected twice or more.

Parameters
redundant_score: float (default 0.2)

The agreement score above which tested units are counted as “redundant” (and not “false positive”).

get_well_detected_units(well_detected_score=None)

Return units list of “well detected units” from tested_sorting.

“well detected units” are defined as units in tested that are well matched to GT units.

Parameters
well_detected_score: float (default 0.8)

The agreement score above which tested units are counted as “well detected”.

print_performance(method='pooled_with_average')

Print performance with the selected method

print_summary(well_detected_score=None, redundant_score=None, overmerged_score=None)
Print a global performance summary that depends on the context:
  • exhaustive= True/False

  • how many gt units (one or several)

This summary mixes several performance metrics.
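
For example, a short sketch combining these methods (assuming cmp is a GroundTruthComparison built with exhaustive_gt=True):

    # cmp is assumed to be an existing GroundTruthComparison with exhaustive_gt=True
    cmp.print_performance(method='pooled_with_average')
    n_well = cmp.count_well_detected_units(well_detected_score=0.8)
    n_false_positive = cmp.count_false_positive_units()
    n_redundant = cmp.count_redundant_units()
    cmp.print_summary()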

class spikeinterface.comparison.SymmetricSortingComparison(sorting1, sorting2, sorting1_name=None, sorting2_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, chance_score=0.1, n_jobs=-1, verbose=False)

Compares two spike sorter outputs.

  • Spike trains are matched based on their agreement scores

  • Individual spikes are labelled as true positives (TP), false negatives (FN), false positives 1 (FP from spike train 1), false positives 2 (FP from spike train 2), misclassifications (CL)

It also allows one to get the confusion matrix, the agreement fraction, the false positive fraction, and the false negative fraction.

Parameters
sorting1: SortingExtractor

The first sorting for the comparison

sorting2: SortingExtractor

The second sorting for the comparison

sorting1_name: str

The name of sorter 1

sorting2_name: str

The name of sorter 2

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

verbose: bool

If True, output is verbose

Returns
sorting_comparison: SortingComparison

The SortingComparison object

get_agreement_fraction(unit1=None, unit2=None)
get_best_unit_match1(unit1)
get_best_unit_match2(unit2)
get_matching()
get_matching_event_count(unit1, unit2)
get_matching_unit_list1(unit1)
get_matching_unit_list2(unit2)
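
For example, a minimal sketch (assuming sorting_A and sorting_B are Sorting objects from two different sorters):

    from spikeinterface.comparison import SymmetricSortingComparison

    # sorting_A and sorting_B are assumed to be existing Sorting objects
    cmp_two = SymmetricSortingComparison(sorting_A, sorting_B,
                                         sorting1_name='A', sorting2_name='B')
    print(cmp_two.get_matching())
    agreement = cmp_two.get_agreement_fraction()
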
class spikeinterface.comparison.GroundTruthStudy(study_folder=None)
aggregate_count_units(well_detected_score=None, redundant_score=None, overmerged_score=None)
aggregate_dataframes(copy_into_folder=True, **karg_thresh)
aggregate_performance_by_unit()
aggregate_run_times()
compute_metrics(rec_name, metric_names=['snr'], ms_before=3.0, ms_after=4.0, max_spikes_per_unit=500, n_jobs=-1, total_memory='1G')
compute_waveforms(rec_name, sorter_name=None, ms_before=3.0, ms_after=4.0, max_spikes_per_unit=500, n_jobs=-1, total_memory='1G')
concat_all_snr()
copy_sortings()
classmethod create(study_folder, gt_dict, **job_kwargs)
get_ground_truth(rec_name=None)
get_metrics(rec_name=None, **metric_kwargs)

Load or compute units metrics for a given recording.

get_recording(rec_name=None)
get_sorting(sort_name, rec_name=None)
get_templates(rec_name, sorter_name=None, mode='median')

Get templates for a given recording.

If sorter_name=None, templates are from the ground truth.

get_units_snr(rec_name=None, **metric_kwargs)
get_waveform_extractor(rec_name, sorter_name=None)
run_comparisons(exhaustive_gt=False, **kwargs)
run_sorters(sorter_list, mode_if_folder_exists='keep', remove_sorter_folders=False, **kwargs)
scan_folder()
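
For example, a minimal study workflow sketch (assuming gt_dict maps recording names to (recording, gt_sorting) tuples and the listed sorter names are installed sorters; both are illustrative assumptions):

    from spikeinterface.comparison import GroundTruthStudy

    # gt_dict is assumed to map recording names to (recording, gt_sorting) tuples
    study = GroundTruthStudy.create('my_study_folder', gt_dict)
    study.run_sorters(['tridesclous', 'herdingspikes'])
    study.run_comparisons(exhaustive_gt=True)
    dataframes = study.aggregate_dataframes()
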
class spikeinterface.comparison.MultiSortingComparison(sorting_list, name_list=None, delta_time=0.4, match_score=0.5, chance_score=0.1, n_jobs=-1, spiketrain_mode='union', verbose=False, do_matching=True)

Compares multiple spike sorting outputs based on spike trains.

  • Pair-wise comparisons are made

  • An agreement graph is built based on the agreement score

It allows one to return a consensus-based sorting extractor with the get_agreement_sorting() method.

Parameters
sorting_list: list

List of sorting extractor objects to be compared

name_list: list

List of spike sorter names. If not given, sorters are named as ‘sorter0’, ‘sorter1’, ‘sorter2’, etc.

delta_time: float

Number of ms to consider coincident spikes (default 0.4 ms)

match_score: float

Minimum agreement score to match units (default 0.5)

chance_score: float

Minimum agreement score for a possible match (default 0.1)

n_jobs: int

Number of cores to use in parallel. Uses all available if -1

spiketrain_mode: str
Mode to extract agreement spike trains:
  • ‘union’: spike trains are the union between the spike trains of the best matching two sorters

  • ‘intersection’: spike trains are the intersection between the spike trains of the best matching two sorters

verbose: bool

If True, output is verbose

Returns
multi_sorting_comparison: MultiSortingComparison

MultiSortingComparison object with the multiple sorter comparison

get_agreement_sorting(minimum_agreement_count=1, minimum_agreement_count_only=False)

Returns an AgreementSortingExtractor with units that have a ‘minimum_matching’ agreement.

Parameters
minimum_agreement_count: int

Minimum number of matches among sorters to include a unit.

minimum_agreement_count_only: bool

If True, only units with agreement == ‘minimum_matching’ are included. If False, units with an agreement >= ‘minimum_matching’ are included

Returns
agreement_sorting: AgreementSortingExtractor

The output AgreementSortingExtractor
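
For example, a minimal consensus sketch (assuming sorting_list is a list of Sorting objects from different sorters):

    from spikeinterface.comparison import MultiSortingComparison

    # sorting_list is assumed to be a list of existing Sorting objects
    multi_cmp = MultiSortingComparison(sorting_list, name_list=['sorter0', 'sorter1', 'sorter2'])
    # keep only units detected by at least 2 sorters
    agreement_sorting = multi_cmp.get_agreement_sorting(minimum_agreement_count=2)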

class spikeinterface.comparison.CollisionGTComparison(gt_sorting, tested_sorting, collision_lag=2.0, nbins=11, **kwargs)

This class is an extension of GroundTruthComparison that focuses on benchmarking spikes in collision.

collision_lag: float

Collision lag in ms.

class spikeinterface.comparison.CorrelogramGTComparison(gt_sorting, tested_sorting, window_ms=100.0, bin_ms=1.0, well_detected_score=0.8, **kwargs)

This class is an extension of GroundTruthComparison that focuses on benchmarking correlogram reconstruction.

collision_lag: float

Collision lag in ms.

class spikeinterface.comparison.CollisionGTStudy(study_folder=None)
class spikeinterface.comparison.CorrelogramGTStudy(study_folder=None)

spikeinterface.widgets

spikeinterface.widgets.set_default_plotter_backend(backend)
spikeinterface.widgets.get_default_plotter_backend()

Return the default backend for spikeinterface widgets. The default backend is ‘matplotlib’ at init. It can be globally set with set_default_plotter_backend(backend)

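For example:

    from spikeinterface.widgets import set_default_plotter_backend, get_default_plotter_backend

    set_default_plotter_backend('ipywidgets')
    print(get_default_plotter_backend())  # 'ipywidgets'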

spikeinterface.widgets.plot_all_amplitudes_distributions(waveform_extractor: WaveformExtractor, unit_ids=None, unit_colors=None, backend=None, **backend_kwargs)

Plots distributions of amplitudes as violin plots for all or some units.

Parameters
waveform_extractor: WaveformExtractor

The input waveform extractor

unit_ids: list

List of unit ids.

unit_colors: None or dict

Dict of colors

backend: str

[‘matplotlib’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

spikeinterface.widgets.plot_amplitudes(waveform_extractor: WaveformExtractor, unit_ids=None, unit_colors=None, segment_index=None, max_spikes_per_unit=None, hide_unit_selector=False, plot_histograms=False, bins=None, plot_legend=True, backend=None, **backend_kwargs)

Plots spike amplitudes

Parameters
waveform_extractor: WaveformExtractor

The input waveform extractor

unit_ids: list

List of unit ids.

segment_index: int

The segment index (or None if mono-segment)

max_spikes_per_unit: int

Number of max spikes per unit to display. Use None for all spikes. Default None.

hide_unit_selectorbool

If True the unit selector is not displayed (sortingview backend)

plot_histogrambool

If True, a histogram of the amplitudes is plotted on the right axis (matplotlib backend)

binsint

If plot_histogram is True, the number of bins for the amplitude histogram. If None (default), this is automatically adjusted.

plot_legend: bool (default True)

Whether or not to plot the legend

backend: str

[‘matplotlib’, ‘sortingview’, ‘ipywidgets’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

sortingview:

  • generate_url: If True, the figurl URL is generated and printed. Default True

  • display: If True and in jupyter notebook/lab, the widget is displayed in the cell. Default True.

  • figlabel: The figurl figure label. Default None

  • height: The height of the sortingview View in jupyter. Default None

ipywidgets:

  • width_cm: Width of the figure in cm (default 10)

  • height_cm: Height of the figure in cm (default 6)

  • display: If True, widgets are immediately displayed
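
For example, a minimal sketch (assuming we is a WaveformExtractor for which spike amplitudes have already been computed, e.g. with the postprocessing module):

    from spikeinterface.postprocessing import compute_spike_amplitudes
    from spikeinterface.widgets import plot_amplitudes

    # we is assumed to be an existing WaveformExtractor
    compute_spike_amplitudes(we)  # amplitudes must be available before plotting
    plot_amplitudes(we, plot_histograms=True, backend='matplotlib', figsize=(10, 6))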

spikeinterface.widgets.plot_autocorrelograms(*args, **kargs)

Plots unit autocorrelograms.

Parameters
waveform_or_sorting_extractorWaveformExtractor or BaseSorting

The object to compute/get crosscorrelograms from

unit_ids: list

List of unit ids.

window_msfloat

Window for CCGs in ms, by default 100 ms

bin_msfloat

Bin size in ms, by default 1 ms

hide_unit_selectorbool

For sortingview backend, if True the unit selector is not displayed

unit_colors: dict or None

Optional dict of colors for units.

backend: str

[‘matplotlib’, ‘sortingview’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

sortingview:

  • generate_url: If True, the figurl URL is generated and printed. Default True

  • display: If True and in jupyter notebook/lab, the widget is displayed in the cell. Default True.

  • figlabel: The figurl figure label. Default None

  • height: The height of the sortingview View in jupyter. Default None

spikeinterface.widgets.plot_crosscorrelograms(waveform_or_sorting_extractor: Union[WaveformExtractor, BaseSorting], unit_ids=None, window_ms=100.0, bin_ms=1.0, hide_unit_selector=False, unit_colors=None, backend=None, **backend_kwargs)

Plots unit cross correlograms.

Parameters
waveform_or_sorting_extractorWaveformExtractor or BaseSorting

The object to compute/get crosscorrelograms from

unit_ids: list

List of unit ids.

window_msfloat

Window for CCGs in ms, by default 100 ms

bin_msfloat

Bin size in ms, by default 1 ms

hide_unit_selectorbool

For sortingview backend, if True the unit selector is not displayed

unit_colors: dict or None

Optional dict of colors for units.

backend: str

[‘matplotlib’, ‘sortingview’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

sortingview:

  • generate_url: If True, the figurl URL is generated and printed. Default True

  • display: If True and in jupyter notebook/lab, the widget is displayed in the cell. Default True.

  • figlabel: The figurl figure label. Default None

  • height: The height of the sortingview View in jupyter. Default None

spikeinterface.widgets.plot_quality_metrics(waveform_extractor: WaveformExtractor, unit_ids=None, include_metrics=None, skip_metrics=None, unit_colors=None, hide_unit_selector=False, backend=None, **backend_kwargs)

Plots quality metrics distributions.

Parameters
waveform_extractorWaveformExtractor

The object to compute/get quality metrics from

unit_ids: list

List of unit ids.

skip_metrics: list or None

If given, a list of quality metrics to skip

unit_colorsdict or None

If given, a dictionary with unit ids as keys and colors as values

hide_unit_selectorbool

For sortingview backend, if True the unit selector is not displayed

backend: str

[‘matplotlib’, ‘sortingview’, ‘ipywidgets’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

sortingview:

  • generate_url: If True, the figurl URL is generated and printed. Default True

  • display: If True and in jupyter notebook/lab, the widget is displayed in the cell. Default True.

  • figlabel: The figurl figure label. Default None

  • height: The height of the sortingview View in jupyter. Default None

ipywidgets:

  • width_cm: Width of the figure in cm (default 10)

  • height_cm: Height of the figure in cm (default 6)

  • display: If True, widgets are immediately displayed

spikeinterface.widgets.plot_sorting_summary(waveform_extractor: WaveformExtractor, unit_ids=None, sparsity=None, max_amplitudes_per_unit=None, curation=False, unit_table_properties=None, label_choices=None, backend=None, **backend_kwargs)

Plots spike sorting summary

Parameters
waveform_extractorWaveformExtractor

The waveform extractor object.

sparsityChannelSparsity or None

Optional ChannelSparsity to apply. If WaveformExtractor is already sparse, the argument is ignored

max_amplitudes_per_unitint or None

Maximum number of spikes per unit for plotting amplitudes, by default None (all spikes)

curationbool

If True, manual curation is enabled, by default False (sortingview backend)

unit_table_propertieslist or None

List of properties to be added to the unit table, by default None (sortingview backend)

backend: str

[‘sortingview’]

**backend_kwargs: kwargs

sortingview:

  • generate_url: If True, the figurl URL is generated and printed. Default True

  • display: If True and in jupyter notebook/lab, the widget is displayed in the cell. Default True.

  • figlabel: The figurl figure label. Default None

  • height: The height of the sortingview View in jupyter. Default None

spikeinterface.widgets.plot_spike_locations(waveform_extractor: WaveformExtractor, unit_ids=None, segment_index=None, max_spikes_per_unit=500, with_channel_ids=False, unit_colors=None, hide_unit_selector=False, plot_all_units=True, plot_legend=False, hide_axis=False, backend=None, **backend_kwargs)

Plots spike locations.

Parameters
waveform_extractorWaveformExtractor

The object to compute/get spike locations from

unit_ids: list

List of unit ids.

max_spikes_per_unit: int

Number of max spikes per unit to display. Use None for all spikes. Default 500.

with_channel_ids: bool False default

Add channel ids text on the probe

unit_colorsdict or None

If given, a dictionary with unit ids as keys and colors as values

hide_unit_selectorbool

For sortingview backend, if True the unit selector is not displayed

plot_all_unitsbool

If True, all units are plotted. The unselected ones (not in unit_ids), are plotted in grey. Default True (matplotlib backend)

plot_legendbool

If True, the legend is plotted. Default False (matplotlib backend)

hide_axisbool

If True, the axis is set to off. Default False (matplotlib backend)

backend: str

[‘matplotlib’, ‘sortingview’, ‘ipywidgets’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

sortingview:

  • generate_url: If True, the figurl URL is generated and printed. Default True

  • display: If True and in jupyter notebook/lab, the widget is displayed in the cell. Default True.

  • figlabel: The figurl figure label. Default None

  • height: The height of the sortingview View in jupyter. Default None

ipywidgets:

  • width_cm: Width of the figure in cm (default 10)

  • height_cm: Height of the figure in cm (default 6)

  • display: If True, widgets are immediately displayed

spikeinterface.widgets.plot_spikes_on_traces(waveform_extractor: WaveformExtractor, segment_index=None, channel_ids=None, unit_ids=None, order_channel_by_depth=False, time_range=None, unit_colors=None, sparsity=None, mode='auto', return_scaled=False, cmap='RdBu', show_channel_ids=False, color_groups=False, color=None, clim=None, tile_size=512, seconds_per_row=0.2, with_colorbar=True, backend=None, **backend_kwargs)

Plots unit spikes/waveforms over traces.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

channel_ids: list

The channel ids to display

unit_ids: list

List of unit ids.

plot_templates: bool

If True, templates are plotted over the waveforms

sparsityChannelSparsity or None

Optional ChannelSparsity to apply. If WaveformExtractor is already sparse, the argument is ignored

set_title: bool

Create a plot title with the unit number if True.

plot_channels: bool

Plot channel locations below traces.

unit_selected_waveforms: None or dict

A dict key is unit_id and value is the subset of waveform indices that should be displayed (matplotlib backend)

max_spikes_per_unit: int or None

If given and unit_selected_waveforms is None, only max_spikes_per_unit random waveforms are displayed per unit, default 50 (matplotlib backend)

axis_equal: bool

Equal aspect ratio for x and y axis, to visualize the array geometry to scale.

lw_waveforms: float

Line width for the waveforms, default 1 (matplotlib backend)

lw_templates: float

Line width for the templates, default 2 (matplotlib backend)

unit_colors: None or dict

A dict key is unit_id and value is any color format handled by matplotlib. If None, then the get_unit_colors() is internally used. (matplotlib backend)

alpha_waveforms: float

Alpha value for waveforms, default 0.5 (matplotlib backend)

alpha_templates: float

Alpha value for templates, default 1 (matplotlib backend)

same_axis: bool

If True, waveforms and templates are displayed on the same axis, default False (matplotlib backend)

x_offset_units: bool

In case same_axis is True, this parameter allows x-offsetting the waveforms for different units (recommended for a few units), default False (matplotlib backend)

backend: str

[‘matplotlib’, ‘ipywidgets’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

ipywidgets:

  • width_cm: Width of the figure in cm (default 10)

  • height_cm: Height of the figure in cm (default 6)

  • display: If True, widgets are immediately displayed

spikeinterface.widgets.plot_template_metrics(waveform_extractor: WaveformExtractor, unit_ids=None, include_metrics=None, skip_metrics=None, unit_colors=None, hide_unit_selector=False, backend=None, **backend_kwargs)

Plots template metrics distributions.

Parameters
waveform_extractorWaveformExtractor

The object to compute/get template metrics from

unit_ids: list

List of unit ids.

skip_metrics: list or None

If given, a list of quality metrics to skip

compute_kwargsdict or None

If given, dictionary with keyword arguments for “compute_template_metrics” function

unit_colorsdict or None

If given, a dictionary with unit ids as keys and colors as values

hide_unit_selectorbool

For sortingview backend, if True the unit selector is not displayed

backend: str

[‘matplotlib’, ‘sortingview’, ‘ipywidgets’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

sortingview:

  • generate_url: If True, the figurl URL is generated and printed. Default True

  • display: If True and in jupyter notebook/lab, the widget is displayed in the cell. Default True.

  • figlabel: The figurl figure label. Default None

  • height: The height of the sortingview View in jupyter. Default None

ipywidgets:

  • width_cm: Width of the figure in cm (default 10)

  • height_cm: Height of the figure in cm (default 6)

  • display: If True, widgets are immediately displayed

spikeinterface.widgets.plot_template_similarity(waveform_extractor: WaveformExtractor, unit_ids=None, cmap='viridis', display_diagonal_values=False, show_unit_ticks=False, show_colorbar=True, backend=None, **backend_kwargs)

Plots the template similarity matrix.

Parameters
waveform_extractorWaveformExtractor

The object to compute/get template similarity from

unit_idslist

List of unit ids.

display_diagonal_valuesbool

If False, the diagonal is displayed as zeros. If True, the similarity values (all 1s) are displayed. Default False

cmapMatplotlib colormap

The matplotlib colormap. Default ‘viridis’. (matplotlib backend)

show_unit_ticksbool

If True, ticks display unit ids. Default False. (matplotlib backend)

show_colorbarbool

If True, color bar is displayed. Default True. (matplotlib backend)

backend: str

[‘matplotlib’, ‘sortingview’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

sortingview:

  • generate_url: If True, the figurl URL is generated and printed. Default True

  • display: If True and in jupyter notebook/lab, the widget is displayed in the cell. Default True.

  • figlabel: The figurl figure label. Default None

  • height: The height of the sortingview View in jupyter. Default None

spikeinterface.widgets.plot_timeseries(recording, segment_index=None, channel_ids=None, order_channel_by_depth=False, time_range=None, mode='auto', return_scaled=False, cmap='RdBu_r', show_channel_ids=False, color_groups=False, color=None, clim=None, tile_size=1500, seconds_per_row=0.2, with_colorbar=True, add_legend=True, backend=None, **backend_kwargs)

Plots recording timeseries.

Parameters
recording: RecordingExtractor, dict, or list

The recording extractor object. If dict (or list) then it is a multi-layer display to compare, for example, different processing steps

segment_index: None or int

The segment index (required for multi-segment recordings)

channel_ids: list

The channel ids to display.

order_channel_by_depth: bool

Reorder channel by depth.

time_range: list

List with start time and end time

mode: str

Three possible modes:

  • ‘line’: classical for low channel count

  • ‘map’: for high channel count use color heat map

  • ‘auto’: auto switch depending the channel count (‘line’ if less than 64 channels, ‘map’ otherwise)

return_scaled: bool

If True and the recording has scaled traces, it plots the scaled traces, by default False

cmap: str

matplotlib colormap used in mode ‘map’, by default ‘RdBu_r’

show_channel_ids: bool

Set yticks with channel ids

color_groups: bool

If True groups are plotted with different colors, by default False

color: str

The color used to draw the traces, by default None

clim: None, tuple or dict

When mode is ‘map’, this argument controls color limits. If dict, keys should be the same as recording keys

with_colorbar: bool

When mode is ‘map’, a colorbar is added, by default True

tile_size: int

For sortingview backend, the size of each tile in the rendered image

seconds_per_row: float

For ‘map’ mode and sortingview backend, seconds to render in each row

Returns
W: TimeseriesWidget

The output widget

backend: str

[‘matplotlib’, ‘sortingview’, ‘ipywidgets’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

sortingview:

  • generate_url: If True, the figurl URL is generated and printed. Default True

  • display: If True and in jupyter notebook/lab, the widget is displayed in the cell. Default True.

  • figlabel: The figurl figure label. Default None

  • height: The height of the sortingview View in jupyter. Default None

ipywidgets:

  • width_cm: Width of the figure in cm (default 10)

  • height_cm: Height of the figure in cm (default 6)

  • display: If True, widgets are immediately displayed
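
For example, a minimal sketch (assuming recording is an existing RecordingExtractor):

    from spikeinterface.widgets import plot_timeseries

    # recording is assumed to be an existing RecordingExtractor
    plot_timeseries(recording, time_range=[0, 5], mode='map',
                    show_channel_ids=True, backend='matplotlib')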

spikeinterface.widgets.plot_unit_depths(waveform_extractor, unit_colors=None, depth_axis=1, peak_sign='neg', backend=None, **backend_kwargs)

Plot unit depths

Parameters
waveform_extractor: WaveformExtractor

The input waveform extractor

unit_colorsdict or None

If given, a dictionary with unit ids as keys and colors as values

depth_axis: int default 1

Which dimension of unit_locations is depth. 1 by default

peak_sign: str (neg/pos/both)

Sign of peak for amplitudes.

backend: str

[‘matplotlib’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

spikeinterface.widgets.plot_unit_locations(waveform_extractor: WaveformExtractor, unit_ids=None, with_channel_ids=False, unit_colors=None, hide_unit_selector=False, plot_all_units=True, plot_legend=False, hide_axis=False, backend=None, **backend_kwargs)

Plots unit locations.

Parameters
waveform_extractorWaveformExtractor

The object to compute/get unit locations from

unit_ids: list

List of unit ids.

with_channel_ids: bool False default

Add channel ids text on the probe

unit_colorsdict or None

If given, a dictionary with unit ids as keys and colors as values

hide_unit_selectorbool

If True, the unit selector is not displayed. Default False (sortingview backend)

plot_all_unitsbool

If True, all units are plotted. The unselected ones (not in unit_ids), are plotted in grey. Default True (matplotlib backend)

plot_legendbool

If True, the legend is plotted. Default False (matplotlib backend)

hide_axisbool

If True, the axis is set to off. Default False (matplotlib backend)

backend: str

[‘matplotlib’, ‘sortingview’, ‘ipywidgets’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

sortingview:

  • generate_url: If True, the figurl URL is generated and printed. Default True

  • display: If True and in jupyter notebook/lab, the widget is displayed in the cell. Default True.

  • figlabel: The figurl figure label. Default None

  • height: The height of the sortingview View in jupyter. Default None

ipywidgets:

  • width_cm: Width of the figure in cm (default 10)

  • height_cm: Height of the figure in cm (default 6)

  • display: If True, widgets are immediately displayed

spikeinterface.widgets.plot_unit_summary(waveform_extractor, unit_id, unit_colors=None, sparsity=None, radius_um=100, backend=None, **backend_kwargs)

Plot a unit summary.

If amplitudes are already computed, they are displayed.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor object

unit_id: int or str

The unit id to plot the summary of

unit_colorsdict or None

If given, a dictionary with unit ids as keys and colors as values

sparsityChannelSparsity or None

Optional ChannelSparsity to apply. If WaveformExtractor is already sparse, the argument is ignored

backend: str

[‘matplotlib’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

spikeinterface.widgets.plot_unit_templates(*args, **kargs)

Plots unit waveforms.

Parameters
waveform_extractor: WaveformExtractor
channel_ids: list

The channel ids to display

unit_ids: list

List of unit ids.

plot_templates: bool

If True, templates are plotted over the waveforms

sparsityChannelSparsity or None

Optional ChannelSparsity to apply. If WaveformExtractor is already sparse, the argument is ignored

set_title: bool

Create a plot title with the unit number if True.

plot_channels: bool

Plot channel locations below traces.

unit_selected_waveforms: None or dict

A dict key is unit_id and value is the subset of waveform indices that should be displayed (matplotlib backend)

max_spikes_per_unit: int or None

If given and unit_selected_waveforms is None, only max_spikes_per_unit random waveforms are displayed per unit, default 50 (matplotlib backend)

axis_equal: bool

Equal aspect ratio for x and y axis, to visualize the array geometry to scale.

lw_waveforms: float

Line width for the waveforms, default 1 (matplotlib backend)

lw_templates: float

Line width for the templates, default 2 (matplotlib backend)

unit_colors: None or dict

A dict key is unit_id and value is any color format handled by matplotlib. If None, then the get_unit_colors() is internally used. (matplotlib backend)

alpha_waveforms: float

Alpha value for waveforms, default 0.5 (matplotlib backend)

alpha_templates: float

Alpha value for templates, default 1 (matplotlib backend)

hide_unit_selectorbool

For sortingview backend, if True the unit selector is not displayed

same_axis: bool

If True, waveforms and templates are displayed on the same axis, default False (matplotlib backend)

x_offset_units: bool

In case same_axis is True, this parameter allows x-offsetting the waveforms for different units (recommended for a few units), default False (matplotlib backend)

plot_legend: bool (default True)

Display legend.

backend: str

[‘matplotlib’, ‘sortingview’, ‘ipywidgets’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

sortingview:

  • generate_url: If True, the figurl URL is generated and printed. Default True

  • display: If True and in jupyter notebook/lab, the widget is displayed in the cell. Default True.

  • figlabel: The figurl figure label. Default None

  • height: The height of the sortingview View in jupyter. Default None

ipywidgets:

  • width_cm: Width of the figure in cm (default 10)

  • height_cm: Height of the figure in cm (default 6)

  • display: If True, widgets are immediately displayed

spikeinterface.widgets.plot_unit_waveforms_density_map(waveform_extractor, channel_ids=None, unit_ids=None, sparsity=None, same_axis=False, use_max_channel=False, peak_sign='neg', unit_colors=None, backend=None, **backend_kwargs)

Plots unit waveforms using heat map density.

Parameters
waveform_extractor: WaveformExtractor
channel_ids: list

The channel ids to display

unit_ids: list

List of unit ids.

plot_templates: bool

If True, templates are plotted over the waveforms

sparsityChannelSparsity or None

Optional ChannelSparsity to apply. If WaveformExtractor is already sparse, the argument is ignored

use_max_channel: bool default False

Use only the max channel

peak_sign: str “neg”

Used to detect max channel only when use_max_channel=True

unit_colors: None or dict

A dict key is unit_id and value is any color format handled by matplotlib. If None, then the get_unit_colors() is internally used.

same_axis: bool

If True, all densities are plotted on the same axis, and the channels shown are the union of all channels across units.

set_title: bool

Create a plot title with the unit number if True.

plot_channels: bool

Plot channel locations below traces, only used if channel_locs is True

backend: str

[‘matplotlib’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

spikeinterface.widgets.plot_unit_waveforms(waveform_extractor: WaveformExtractor, channel_ids=None, unit_ids=None, plot_waveforms=True, plot_templates=True, plot_channels=False, unit_colors=None, sparsity=None, ncols=5, lw_waveforms=1, lw_templates=2, axis_equal=False, unit_selected_waveforms=None, max_spikes_per_unit=50, set_title=True, same_axis=False, x_offset_units=False, alpha_waveforms=0.5, alpha_templates=1, hide_unit_selector=False, plot_legend=True, backend=None, **backend_kwargs)

Plots unit waveforms.

Parameters
waveform_extractor: WaveformExtractor
channel_ids: list

The channel ids to display

unit_ids: list

List of unit ids.

plot_templates: bool

If True, templates are plotted over the waveforms

sparsityChannelSparsity or None

Optional ChannelSparsity to apply. If WaveformExtractor is already sparse, the argument is ignored

set_title: bool

Create a plot title with the unit number if True.

plot_channels: bool

Plot channel locations below traces.

unit_selected_waveforms: None or dict

A dict key is unit_id and value is the subset of waveform indices that should be displayed (matplotlib backend)

max_spikes_per_unit: int or None

If given and unit_selected_waveforms is None, only max_spikes_per_unit random waveforms are displayed per unit, default 50 (matplotlib backend)

axis_equal: bool

Equal aspect ratio for x and y axis, to visualize the array geometry to scale.

lw_waveforms: float

Line width for the waveforms, default 1 (matplotlib backend)

lw_templates: float

Line width for the templates, default 2 (matplotlib backend)

unit_colors: None or dict

A dict key is unit_id and value is any color format handled by matplotlib. If None, then the get_unit_colors() is internally used. (matplotlib backend)

alpha_waveforms: float

Alpha value for waveforms, default 0.5 (matplotlib backend)

alpha_templates: float

Alpha value for templates, default 1 (matplotlib backend)

hide_unit_selectorbool

For sortingview backend, if True the unit selector is not displayed

same_axis: bool

If True, waveforms and templates are displayed on the same axis, default False (matplotlib backend)

x_offset_units: bool

In case same_axis is True, this parameter allows x-offsetting the waveforms for different units (recommended for a few units), default False (matplotlib backend)

plot_legend: bool (default True)

Display legend.

backend: str

[‘matplotlib’, ‘ipywidgets’]

**backend_kwargs: kwargs

matplotlib:

  • figure: Matplotlib figure. When None, it is created. Default None

  • ax: Single matplotlib axis. When None, it is created. Default None

  • axes: Multiple matplotlib axes. When None, they are created. Default None

  • ncols: Number of columns to create in subplots. Default 5

  • figsize: Size of matplotlib figure. Default None

  • figtitle: The figure title. Default None

ipywidgets:

  • width_cm: Width of the figure in cm (default 10)

  • height_cm: Height of the figure in cm (default 6)

  • display: If True, widgets are immediately displayed
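
For example, a minimal sketch (assuming we is an existing WaveformExtractor):

    from spikeinterface.widgets import plot_unit_waveforms

    # we is assumed to be an existing WaveformExtractor
    plot_unit_waveforms(we, max_spikes_per_unit=30, plot_legend=False,
                        backend='matplotlib')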

Legacy widgets

These widgets are only available with the “matplotlib” backend

spikeinterface.widgets.plot_rasters(*args, **kwargs)

Plots spike train rasters.

Parameters
sorting: SortingExtractor

The sorting extractor object

segment_index: None or int

The segment index.

unit_ids: list

List of unit ids

time_range: list

List with start time and end time

color: matplotlib color

The color to be used

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: RasterWidget

The output widget

spikeinterface.widgets.plot_probe_map(*args, **kwargs)

Plot the probe of a recording.

Parameters
recording: RecordingExtractor

The recording extractor object

channel_ids: list

The channel ids to display

with_channel_ids: bool False default

Add channel ids text on the probe

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

**plot_probe_kwargs: keyword arguments for probeinterface.plotting.plot_probe_group() function
Returns
W: ProbeMapWidget

The output widget

spikeinterface.widgets.plot_isi_distribution(*args, **kwargs)

Plots spike train ISI distribution.

Parameters
sorting: SortingExtractor

The sorting extractor object

unit_ids: list

List of unit ids

bins_ms: int

Bin size in ms

window_ms: float

Window size in ms

ncols: int

Number of maximum columns (default 5)

axes: list of matplotlib axes

The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored

Returns
W: ISIDistributionWidget

The output widget

spikeinterface.widgets.plot_drift_over_time(*args, **kwargs)

Plot “y” (=depth) (or “x”) drift over time. This uses peak detection on channels and makes a histogram of peak activity over time bins.

Parameters
recording: RecordingExtractor

The recording extractor object

peaks: None or numpy array

Optionally, already detected peaks can be given to avoid recomputation.

detect_peaks_kwargs: None or dict

If peaks is None, these are the kwargs for the detect_peaks function.

mode: str ‘heatmap’ or ‘scatter’

plot mode

probe_axis: 0 or 1

Axis of the probe 0=x 1=y

weight_with_amplitudes: bool False by default

Peaks are weighted by amplitude

bin_duration_s: float (default 60.)

Bin duration in seconds

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: ProbeMapWidget

The output widget

spikeinterface.widgets.plot_peak_activity_map(*args, **kwargs)

Plots spike rate (estimated with detect_peaks()) as 2D activity map.

Can be static (bin_duration_s=None) or animated (bin_duration_s=60.)

Parameters
recording: RecordingExtractor

The recording extractor object.

peaks: None or numpy array

Optionally, already detected peaks can be given to avoid recomputation.

detect_peaks_kwargs: None or dict

If peaks is None, these are the kwargs for the detect_peaks function.

weight_with_amplitudes: bool False by default

Peaks are weighted by amplitude

bin_duration_s: None or float

If None, a static image is shown. If not None, it is an animation per bin.

with_contact_color: bool (default True)

Plot rates with contact colors

with_interpolated_map: bool (default True)

Plot rates with interpolated map

with_channel_ids: bool False default

Add channel ids text on the probe

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: ProbeMapWidget

The output widget

spikeinterface.widgets.plot_principal_component(*args, **kwargs)

Plots principal component.

Parameters
waveform_extractor: WaveformExtractor
pc: None or WaveformPrincipalComponent

If None, the principal components are recomputed

spikeinterface.widgets.plot_unit_probe_map(*args, **kwargs)

Plots unit map. Amplitude is color coded on probe contact.

Can be static (animated=False) or animated (animated=True)

Parameters
waveform_extractor: WaveformExtractor
unit_ids: list

List of unit ids.

channel_ids: list

The channel ids to display

animated: True/False

If True, amplitude is animated over time

with_channel_ids: bool False default

Add channel ids text on the probe

spikeinterface.widgets.plot_confusion_matrix(*args, **kwargs)

Plots sorting comparison confusion matrix.

Parameters
gt_comparison: GroundTruthComparison

The ground truth sorting comparison object

count_text: bool

If True counts are displayed as text

unit_ticks: bool

If True unit tick labels are displayed

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: ConfusionMatrixWidget

The output widget

spikeinterface.widgets.plot_agreement_matrix(*args, **kwargs)

Plots the sorting comparison agreement matrix.

Parameters
sorting_comparison: GroundTruthComparison or SymmetricSortingComparison

The sorting comparison object. Symmetric or not.

ordered: bool

Order units by best agreement scores. This makes the agreement visible on the diagonal.

count_text: bool

If True counts are displayed as text

unit_ticks: bool

If True unit tick labels are displayed

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

spikeinterface.widgets.plot_multicomp_graph(*args, **kwargs)

Plots multi comparison graph.

Parameters
multi_comparison: BaseMultiComparison

The multi comparison object

draw_labels: bool

If True unit labels are shown

node_cmap: matplotlib colormap

The colormap to be used for the nodes (default ‘viridis’)

edge_cmap: matplotlib colormap

The colormap to be used for the edges (default ‘hot’)

alpha_edges: float

Alpha value for edges

colorbar: bool

If True a colorbar for the edges is plotted

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: MultiCompGraphWidget

The output widget

spikeinterface.widgets.plot_multicomp_agreement(*args, **kwargs)

Plots multi comparison agreement as pie or bar plot.

Parameters
multi_comparison: BaseMultiComparison

The multi comparison object

plot_type: str

‘pie’ or ‘bar’

cmap: matplotlib colormap

The colormap to be used for the nodes (default ‘Reds’)

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: MultiCompGraphWidget

The output widget

spikeinterface.widgets.plot_multicomp_agreement_by_sorter(*args, **kwargs)

Plots multi comparison agreement as pie or bar plot.

Parameters
multi_comparison: BaseMultiComparison

The multi comparison object

plot_type: str

‘pie’ or ‘bar’

cmap: matplotlib colormap

The colormap to be used for the nodes (default ‘Reds’)

axes: list of matplotlib axes

The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored.

show_legend: bool

Show the legend in the last axes (default True).

Returns
W: MultiCompGraphWidget

The output widget

spikeinterface.widgets.plot_comparison_collision_pair_by_pair(*args, **kwargs)

Plots CollisionGTComparison pair by pair.

Parameters
comp: CollisionGTComparison

The collision ground truth comparison object

unit_ids: list

List of considered units

nbins: int

Number of bins

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: MultiCompGraphWidget

The output widget

spikeinterface.widgets.plot_comparison_collision_by_similarity(*args, **kwargs)

Plots CollisionGTComparison pair by pair, ordered by cosine_similarity

Parameters
comp: CollisionGTComparison

The collision ground truth comparison object

templates: array

Templates of units

mode: ‘heatmap’ or ‘lines’

To see collision curves for every pair (‘heatmap’) or as lines averaged over pairs (‘lines’)

similarity_bins: array

If mode is ‘lines’, the bins used to average the pairs

cmap: string

Colormap used to show averages if mode is ‘lines’

metric: ‘cosine_similarity’

Metric for ordering

good_only: bool (default True)

Keep only the pairs with a non-zero accuracy (found templates)

min_accuracy: float

If good only, the minimum accuracy every cell should have, individually, to be considered in a putative pair

unit_ids: list

List of considered units

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

spikeinterface.widgets.plot_sorting_performance(*args, **kwargs)

Plots sorting performance for each ground-truth unit.

Parameters
gt_sorting_comparison: GroundTruthComparison

The ground truth sorting comparison object

property_name: str

The property of the sorting extractor to use as x-axis (e.g. snr). If None, no property is used.

metric: str

The performance metric. ‘accuracy’ (default), ‘precision’, ‘recall’, ‘miss rate’, etc.

markersize: int

The size of the marker

marker: str

The matplotlib marker to use (default ‘.’)

figure: matplotlib figure

The figure to be used. If not given a figure is created

ax: matplotlib axis

The axis to be used. If not given an axis is created

Returns
W: SortingPerformanceWidget

The output widget

spikeinterface.exporters

spikeinterface.exporters.export_to_phy(waveform_extractor, output_folder, compute_pc_features=True, compute_amplitudes=True, sparsity=None, copy_binary=True, remove_if_exists=False, peak_sign='neg', template_mode='median', dtype=None, verbose=True, **job_kwargs)

Exports a waveform extractor to the phy template-gui format.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor object to export

output_folder: str

The output folder where the phy template-gui files are saved

compute_pc_features: bool

If True (default), pc features are computed

compute_amplitudes: bool

If True (default), waveforms amplitudes are computed

sparsity: ChannelSparsity or None

The sparsity object.

copy_binary: bool

If True, the recording is copied and saved in the phy ‘output_folder’

remove_if_exists: bool

If True and ‘output_folder’ exists, it is removed and overwritten

peak_sign: ‘neg’, ‘pos’, ‘both’

Used by compute_spike_amplitudes

template_mode: str

Parameter ‘mode’ to be given to WaveformExtractor.get_template()

dtype: dtype or None

Dtype to save binary data

verbose: bool

If True, output is verbose

**job_kwargs: keyword arguments for parallel processing:
  • chunk_duration or chunk_size or chunk_memory or total_memory
    • chunk_size: int

      Number of samples per chunk

    • chunk_memory: str

      Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

    • chunk_duration: str or float or None

      Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

  • mp_context: str or None

    Context for multiprocessing. It can be None (default), “fork” or “spawn”. Note that “fork” is only available on UNIX systems
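
A minimal sketch of a phy export, assuming a WaveformExtractor we has already been created with spikeinterface.extract_waveforms() (the output folder name and job_kwargs values are illustrative):

    # `we` is assumed to be an existing WaveformExtractor
    from spikeinterface.exporters import export_to_phy

    export_to_phy(we, output_folder='phy_output',
                  compute_pc_features=True, compute_amplitudes=True,
                  copy_binary=True, remove_if_exists=False,
                  n_jobs=4, chunk_duration='1s', progress_bar=True)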

spikeinterface.exporters.export_report(waveform_extractor, output_folder, remove_if_exists=False, format='png', show_figures=False, peak_sign='neg', force_computation=False, **job_kwargs)

Exports a SI spike sorting report. The report includes summary figures of the spike sorting output (e.g. amplitude distributions, unit localization and depth VS amplitude) as well as unit-specific reports, that include waveforms, templates, template maps, ISI distributions, and more.

Parameters
waveform_extractor: WaveformExtractor

The waveform extractor object used to generate the report

output_folder: str

The output folder where the report files are saved

remove_if_exists: bool

If True and the output folder exists, it is removed

format: str

‘png’ (default) or ‘pdf’ or any format handled by matplotlib

peak_sign: ‘neg’ or ‘pos’

used to compute amplitudes and metrics

show_figures: bool

If True, figures are shown. If False (default), figures are closed after saving.

force_computation: bool, default False

Whether to force the heavy computations required for the report before exporting.

**job_kwargs: keyword arguments for parallel processing:
  • chunk_duration or chunk_size or chunk_memory or total_memory
    • chunk_size: int

      Number of samples per chunk

    • chunk_memory: str

      Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

    • chunk_duration: str or float or None

      Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

  • mp_context: str or None

    Context for multiprocessing. It can be None (default), “fork” or “spawn”. Note that “fork” is only available on UNIX systems
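
A minimal sketch, again assuming an existing WaveformExtractor we (folder name is illustrative):

    # `we` is assumed to be an existing WaveformExtractor
    from spikeinterface.exporters import export_report

    export_report(we, output_folder='si_report', format='png',
                  remove_if_exists=True, n_jobs=4, chunk_duration='1s')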

spikeinterface.curation

class spikeinterface.curation.CurationSorting(parent_sorting, make_graph=False, properties_policy='keep')

Class that handles curation of a Sorting object.

Parameters
parent_sorting: BaseSorting

The sorting object to be curated

properties_policy: str

Policy used to propagate properties after split and merge operation. If ‘keep’ the properties will be passed to the new units (if the original units have the same value). If ‘remove’ the new units will have an empty value for all the properties. Default: ‘keep’

make_graph: bool

True to keep a networkx graph with the curation history

Returns
sorting: Sorting

The curated sorting object
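
A sketch of a possible curation workflow. The merge() method and the .sorting attribute used below are assumptions based on the class description and are not documented on this page; sorting is an existing Sorting object:

    # `sorting` is assumed to be an existing Sorting object
    from spikeinterface.curation import CurationSorting

    cs = CurationSorting(parent_sorting=sorting, make_graph=True,
                         properties_policy='keep')
    cs.merge([2, 7])            # assumed method: merge units 2 and 7
    clean_sorting = cs.sorting  # assumed attribute: the curated Sorting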

class spikeinterface.curation.MergeUnitsSorting(parent_sorting, units_to_merge, new_unit_ids=None, properties_policy='keep', delta_time_ms=0.4)

Class that handles several merges of units from a Sorting object based on a list of list of unit_ids.

Parameters
parent_sorting: BaseSorting

The sorting object

units_to_merge: list of lists

A list of lists for every merge group. Each element needs to have at least two elements (two units to merge), but it can also have more (merge multiple units at once).

new_unit_ids: None or list

New unit ids for the merged units. If given, it needs to have the same length as units_to_merge

properties_policy: str (‘keep’, ‘remove’)

Policy used to propagate properties. If ‘keep’, the properties will be passed to the new units (if the units_to_merge have the same value). If ‘remove’, the new units will have an empty value for all the properties. Default: ‘keep’

delta_time_ms: float or None

Window in ms to consider spikes as duplicated. If None, duplicated spikes are not checked.

Returns
sorting: Sorting

Sorting object with the selected units merged
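
A minimal sketch: merge units 1 and 2 into one new unit, and units 4, 5 and 6 into another (sorting and the unit ids are illustrative placeholders):

    # `sorting` is assumed to be an existing Sorting object
    from spikeinterface.curation import MergeUnitsSorting

    merged_sorting = MergeUnitsSorting(parent_sorting=sorting,
                                       units_to_merge=[[1, 2], [4, 5, 6]],
                                       properties_policy='keep')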

class spikeinterface.curation.SplitUnitSorting(parent_sorting, split_unit_id, indices_list, new_unit_ids=None, properties_policy='keep')

Class that handles splitting of a unit. It creates a new Sorting object linked to parent_sorting.

Parameters
parent_sorting: BaseSorting

The sorting object

split_unit_id: int

Unit id of the unit to split

indices_list: list

A list of arrays (one per segment) with the same length as the unit’s spike train, assigning each spike to one of the new units. Each array can contain more than two distinct values (e.g. for splitting into three or more units).

new_unit_ids: None or list

Unit ids of the new units to be created.

properties_policy: str

Policy used to propagate properties. If ‘keep’, the properties of the original unit will be passed to the new units. If ‘remove’, the new units will have an empty value for all the properties. Default: ‘keep’

Returns
sorting: Sorting

Sorting object with the selected unit split
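
A minimal sketch: split unit 3 of a mono-segment sorting into two new units based on a per-spike label array (the unit id and the random labels are purely illustrative):

    # `sorting` is assumed to be an existing mono-segment Sorting object
    import numpy as np
    from spikeinterface.curation import SplitUnitSorting

    n_spikes = len(sorting.get_unit_spike_train(unit_id=3))
    labels = np.random.randint(0, 2, size=n_spikes)   # 0 or 1 for each spike
    split_sorting = SplitUnitSorting(parent_sorting=sorting, split_unit_id=3,
                                     indices_list=[labels])  # one array per segment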

spikeinterface.curation.get_potential_auto_merge(waveform_extractor, minimum_spikes=1000, maximum_distance_um=150.0, peak_sign='neg', bin_ms=0.25, window_ms=100.0, corr_diff_thresh=0.16, template_diff_thresh=0.25, censored_period_ms=0.0, refractory_period_ms=1.0, sigma_smooth_ms=0.6, contamination_threshold=0.2, adaptative_window_threshold=0.5, num_channels=5, num_shift=5, firing_contamination_balance=1.5, extra_outputs=False, steps=None)

Algorithm to find and check potential merges between units.

This is taken from Lussac version 1, developed by Aurelien Wyngaard: https://github.com/BarbourLab/lussac/blob/v1.0.0/postprocessing/merge_units.py

The merges are proposed when the following criteria are met:

  • STEP 1: enough spikes are found in each unit for computing the correlogram (minimum_spikes)

  • STEP 2: each unit is not contaminated (by checking auto-correlogram - contamination_threshold)

  • STEP 3: estimated unit locations are close enough (maximum_distance_um)

  • STEP 4: the cross-correlograms of the two units are similar to each auto-correlogram (corr_diff_thresh)

  • STEP 5: the templates of the two units are similar (template_diff_thresh)

  • STEP 6: the unit “quality score” is increased after the merge.

The “quality score” factors in the increase in firing rate (f) due to the merge and a possible increase in contamination (C), weighted by a factor k (firing_contamination_balance).

\[Q = f(1 - (k + 1)C)\]
Parameters
waveform_extractor: WaveformExtractor

The waveform extractor

minimum_spikes: int

Minimum number of spikes for each unit to consider a potential merge. Enough spikes are needed to estimate the correlogram, by default 1000

maximum_distance_um: float

Maximum distance between unit locations for considering a merge, by default 150

peak_sign: “neg”/”pos”/”both”

Peak sign used to estimate the maximum channel of a template, by default “neg”

bin_ms: float

Bin size in ms used for computing the correlogram, by default 0.25

window_ms: float

Window size in ms used for computing the correlogram, by default 100

corr_diff_thresh: float

The threshold on the “correlogram distance metric” for considering a merge. It needs to be between 0 and 1, by default 0.16

template_diff_thresh: float

The threshold on the “template distance metric” for considering a merge. It needs to be between 0 and 1, by default 0.25

censored_period_ms: float

Used to compute the refractory period violations aka “contamination”, by default 0

refractory_period_ms: float

Used to compute the refractory period violations aka “contamination”, by default 1

sigma_smooth_ms: float

Parameter to smooth the correlogram estimation, by default 0.6

contamination_threshold: float

Threshold for not taking into account a unit when it is too contaminated, by default 0.2

adaptative_window_threshold: float

Parameter to detect the window size in correlogram estimation, by default 0.5

num_channels: int

Number of channels to use for template similarity computation, by default 5

num_shift: int

Number of shifts in samples to be explored for template similarity computation, by default 5

firing_contamination_balance: float

Parameter to control the balance between firing rate and contamination in computing unit “quality score”, by default 1.5

extra_outputs: bool

If True, an additional dictionary (outs) with processed data is returned, by default False

steps: None or list of str

Which steps to run (gives flexibility to run only some steps). If None, all steps are performed. Potential steps: ‘min_spikes’, ‘remove_contaminated’, ‘unit_positions’, ‘correlogram’, ‘template_similarity’, ‘check_increase_score’. Please check the steps explanations above.

Returns
potential_merges:

A list of 2-element tuples: the pairs of unit ids that could be merged.

outs:

Returned only when extra_outputs=True. A dictionary that contains data for debugging and plotting.
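
A minimal sketch: propose merges from an existing WaveformExtractor we and then apply them with MergeUnitsSorting (the threshold values simply repeat the defaults):

    # `we` is assumed to be an existing WaveformExtractor
    from spikeinterface.curation import get_potential_auto_merge, MergeUnitsSorting

    merges = get_potential_auto_merge(we, minimum_spikes=1000,
                                      corr_diff_thresh=0.16,
                                      template_diff_thresh=0.25)
    # `merges` is a list of (unit_id_1, unit_id_2) pairs
    clean_sorting = MergeUnitsSorting(we.sorting,
                                      units_to_merge=[list(pair) for pair in merges])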

spikeinterface.curation.find_redundant_units(sorting, delta_time: float = 0.4, agreement_threshold=0.2, duplicate_threshold=0.8)

Finds redundant or duplicate units by comparing the sorting output with itself.

Parameters
sorting: BaseSorting

The input sorting object

delta_time: float, optional

The time in ms to consider matching spikes, by default 0.4

agreement_threshold: float, optional

Threshold on the agreement scores to flag possible redundant/duplicate units, by default 0.2

duplicate_threshold: float, optional

Final threshold on the portion of coincident events over the number of spikes above which the unit is flagged as duplicate/redundant, by default 0.8

Returns
list

The list of duplicate units

list of 2-element lists

The list of duplicate pairs
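
A minimal sketch listing units that largely duplicate each other within one sorting (sorting is an existing Sorting object; parameter values repeat the defaults):

    # `sorting` is assumed to be an existing Sorting object
    from spikeinterface.curation import find_redundant_units

    redundant_unit_ids, redundant_pairs = find_redundant_units(
        sorting, delta_time=0.4, agreement_threshold=0.2, duplicate_threshold=0.8)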

spikeinterface.curation.remove_redundant_units(sorting_or_waveform_extractor, align=True, unit_peak_shifts=None, delta_time=0.4, agreement_threshold=0.2, duplicate_threshold=0.8, remove_strategy='minimum_shift', peak_sign='neg', extra_outputs=False)

Removes redundant or duplicate units by comparing the sorting output with itself.

When a redundant pair is found, there are several strategies to choose which unit is kept:

  • ‘minimum_shift’

  • ‘highest_amplitude’

  • ‘max_spikes’

Parameters
sorting_or_waveform_extractor: BaseSorting or WaveformExtractor

If WaveformExtractor, the spike trains can be optionally realigned using the peak shift in the template to improve the matching procedure. If BaseSorting, the spike trains are not aligned.

align: bool, optional

If True, spike trains are aligned (if a WaveformExtractor is used), by default True

delta_time: float, optional

The time in ms to consider matching spikes, by default 0.4

agreement_threshold: float, optional

Threshold on the agreement scores to flag possible redundant/duplicate units, by default 0.2

duplicate_threshold: float, optional

Final threshold on the portion of coincident events over the number of spikes above which the unit is removed, by default 0.8

remove_strategy: str

Which strategy to use to remove one of the two duplicated units:

  • ‘minimum_shift’: keep the unit with the best peak alignment (minimum shift).

    If the shifts are equal, ‘highest_amplitude’ is used

  • ‘highest_amplitude’: keep the unit with the highest amplitude on the unshifted maximum

  • ‘max_spikes’: keep the unit with the most spikes

peak_sign: str (‘neg’, ‘pos’, ‘both’)

Used when remove_strategy=’highest_amplitude’

extra_outputs: bool

If True, will return the redundant pairs.

Returns
BaseSorting

Sorting object without redundant units
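
A minimal sketch, passing a WaveformExtractor so that spike trains can be realigned before matching (we is assumed to exist):

    # `we` is assumed to be an existing WaveformExtractor
    from spikeinterface.curation import remove_redundant_units

    clean_sorting = remove_redundant_units(we, align=True,
                                           remove_strategy='minimum_shift')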

spikeinterface.curation.remove_duplicated_spikes(sorting: BaseSorting, censored_period_ms: float = 0.3, method: str = 'keep_first') None

Class to remove duplicated spikes from the spike trains. Spikes are considered duplicated if they are less than x ms apart, where x is the censored period.

Parameters
sorting: BaseSorting

The parent sorting.

censored_period_ms: float

The censored period to consider 2 spikes to be duplicated (in ms).

method: str in (“keep_first”, “keep_last”, “keep_first_iterative”, “keep_last_iterative”, “random”)

Method used to remove the duplicated spikes. If method = “random”, will randomly choose to remove the first or last spike. If method = “keep_first”, for each ISI violation, will remove the second spike. If method = “keep_last”, for each ISI violation, will remove the first spike. If method = “keep_first_iterative”, will iteratively keep the first spike and remove the following violations. If method = “keep_last_iterative”, does the same as “keep_first_iterative” but starting from the end. In the iterative methods, if there is a triplet A, B, C where (A, B) and (B, C) are within the censored period (but not (A, C)), then only B is removed. In the non-iterative methods, however, only one spike remains.

Returns
sorting_without_duplicated_spikes: Remove_DuplicatedSpikesSorting

The sorting without any duplicated spikes.
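
A minimal sketch: censor spikes closer than 0.3 ms within each unit, keeping the first spike of each violation (sorting is an existing Sorting object):

    # `sorting` is assumed to be an existing Sorting object
    from spikeinterface.curation import remove_duplicated_spikes

    clean_sorting = remove_duplicated_spikes(sorting, censored_period_ms=0.3,
                                             method='keep_first')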

spikeinterface.curation.apply_sortingview_curation(sorting, uri_or_json, exclude_labels=None, include_labels=None, skip_merge=False, verbose=False)

Apply curation from SortingView manual curation. First, merges (if present) are applied. Then labels are loaded and units are optionally filtered based on exclude_labels and include_labels.

Parameters
sorting: BaseSorting

The sorting object to be curated

uri_or_json: str or Path

The URI curation link from sortingview or the path to the curation json file

exclude_labels: list, optional

Optional list of labels to exclude (e.g. [“reject”, “noise”]). Mutually exclusive with include_labels, by default None

include_labels: list, optional

Optional list of labels to include (e.g. [“accept”]). Mutually exclusive with exclude_labels, by default None

skip_merge: bool, optional

If True, merges are not applied (only labels), by default False

verbose: bool, optional

If True, output is verbose, by default False

Returns
sorting_curated: BaseSorting

The curated sorting
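
A minimal sketch: apply a SortingView manual curation and keep only units labeled “accept” (the curation file name and label are illustrative; sorting is an existing Sorting object):

    # `sorting` is assumed to be an existing Sorting object
    from spikeinterface.curation import apply_sortingview_curation

    curated = apply_sortingview_curation(sorting, uri_or_json='curation.json',
                                         include_labels=['accept'])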

spikeinterface.sortingcomponents

Peak Localization

Sorting components: peak localization.

spikeinterface.sortingcomponents.peak_localization.localize_peaks(recording, peaks, method='center_of_mass', ms_before=0.3, ms_after=0.5, **kwargs)

Localize peaks (spikes) in 2D or 3D, depending on the method.

When a probe is 2D then:
  • X is axis 0 of the probe

  • Y is axis 1 of the probe

  • Z is orthogonal to the plane of the probe

Parameters
recording: RecordingExtractor

The recording extractor object.

peaks: array

Peaks array, as returned by detect_peaks() in “compact_numpy” way.

method: ‘center_of_mass’, ‘monopolar_triangulation’

Method to use.

arguments for method=’center_of_mass’
local_radius_um: float

Radius in um for channel sparsity.

arguments for method=’monopolar_triangulation’
local_radius_um: float

For channel sparsity.

max_distance_um: float, default: 1000

Boundary for distance estimation.

enforce_decrease: bool (default True)

Enforce that PTP amplitudes decrease with distance from the peak channel.

**job_kwargs: keyword arguments for parallel processing:
  • chunk_duration or chunk_size or chunk_memory or total_memory
    • chunk_size: int

      Number of samples per chunk

    • chunk_memory: str

      Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

    • chunk_duration: str or float or None

      Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

  • mp_context: str or None

    Context for multiprocessing. It can be None (default), “fork” or “spawn”. Note that “fork” is only available on UNIX systems

Returns
peak_locations: ndarray

Array with estimated location for each spike. The dtype depends on the method. (‘x’, ‘y’) or (‘x’, ‘y’, ‘z’, ‘alpha’).
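
A minimal sketch: localize peaks that were previously obtained with detect_peaks() (see the Peak Detection section below); recording is an existing Recording and the radius/job values are illustrative:

    # `recording` and `peaks` (from detect_peaks) are assumed to exist
    from spikeinterface.sortingcomponents.peak_localization import localize_peaks

    peak_locations = localize_peaks(recording, peaks,
                                    method='monopolar_triangulation',
                                    local_radius_um=75., max_distance_um=1000.,
                                    n_jobs=4, chunk_duration='1s', progress_bar=True)
    # structured array with fields ('x', 'y', 'z', 'alpha') for this method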

Peak Detection

Sorting components: peak detection.

spikeinterface.sortingcomponents.peak_detection.detect_peaks(recording, method='by_channel', pipeline_nodes=None, **kwargs)

Peak detection based on threshold crossing in terms of k x MAD.

In ‘by_channel’, peaks are detected in each channel independently. In ‘locally_exclusive’, a single best peak is taken from a set of neighboring channels.

Parameters
recording: RecordingExtractor

The recording extractor object.

pipeline_nodes: None or list[PipelineNode]

Optional additional PipelineNodes to compute just after detection. This avoids reading the recording multiple times.

method: ‘by_channel’, ‘locally_exclusive’

Method to use.

arguments for method=’by_channel’
peak_sign: ‘neg’, ‘pos’, ‘both’

Sign of the peak.

detect_threshold: float

Threshold, in median absolute deviations (MAD), to use to detect peaks.

exclude_sweep_ms: float or None

Time, in ms, during which the peak is isolated. Mutually exclusive with exclude_sweep_size. For example, if exclude_sweep_ms is 0.1, a peak is detected if a sample crosses the threshold and no larger peaks occur during the 0.1 ms preceding and following the peak.

noise_levels: array, optional

Estimated noise levels to use, if already computed. If not provided, they are estimated from a random snippet of the data.

random_chunk_kwargs: dict, optional

A dict that contains options to randomize chunks for get_noise_levels(). Only used if noise_levels is None.

arguments for method=’locally_exclusive’
peak_sign: ‘neg’, ‘pos’, ‘both’

Sign of the peak.

detect_threshold: float

Threshold, in median absolute deviations (MAD), to use to detect peaks.

exclude_sweep_ms: float or None

Time, in ms, during which the peak is isolated. Mutually exclusive with exclude_sweep_size. For example, if exclude_sweep_ms is 0.1, a peak is detected if a sample crosses the threshold and no larger peaks occur during the 0.1 ms preceding and following the peak.

noise_levels: array, optional

Estimated noise levels to use, if already computed. If not provided, they are estimated from a random snippet of the data.

random_chunk_kwargs: dict, optional

A dict that contains options to randomize chunks for get_noise_levels(). Only used if noise_levels is None.

local_radius_um: float

The radius to use to select neighbour channels for locally exclusive detection.

**job_kwargs: keyword arguments for parallel processing:
  • chunk_duration or chunk_size or chunk_memory or total_memory
    • chunk_size: int

      Number of samples per chunk

    • chunk_memory: str

      Memory usage for each job (e.g. ‘100M’, ‘1G’)

    • total_memory: str

      Total memory usage (e.g. ‘500M’, ‘2G’)

    • chunk_duration: str or float or None

      Chunk duration in s if float or with units if str (e.g. ‘1s’, ‘500ms’)

  • n_jobs: int

    Number of jobs to use. With -1 the number of jobs is the same as number of cores

  • progress_bar: bool

    If True, a progress bar is printed

  • mp_context: str or None

    Context for multiprocessing. It can be None (default), “fork” or “spawn”. Note that “fork” is only available on UNIX systems

Returns
peaks: array

Detected peaks.

Notes

This peak detection method was ported from tridesclous into spikeinterface.
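
A minimal sketch: detect negative peaks channel by channel at 5 x MAD on an existing (typically filtered) Recording; the threshold and job values are illustrative:

    # `recording` is assumed to be an existing, preprocessed Recording
    from spikeinterface.sortingcomponents.peak_detection import detect_peaks

    peaks = detect_peaks(recording, method='by_channel',
                         peak_sign='neg', detect_threshold=5, exclude_sweep_ms=0.1,
                         n_jobs=4, chunk_duration='1s', progress_bar=True)
    # `peaks` is a structured array with (at least) sample index, channel and amplitude fields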

Motion Correction

class spikeinterface.sortingcomponents.motion_correction.CorrectMotionRecording(recording, motion, temporal_bins, spatial_bins, direction=1, border_mode='remove_channels', spatial_interpolation_method='kriging', sigma_um=20.0, p=1, num_closest=3)

Recording that corrects motion on-the-fly given a motion vector estimation (rigid or non-rigid). This internally applies a spatial interpolation on the original traces after reversing the motion. estimate_motion() must be called before this to estimate the motion vector.

Parameters
recording: Recording

The parent recording.

motion: np.array 2D

The motion signal obtained with estimate_motion(). motion.shape[0] must correspond to temporal_bins.shape[0]; motion.shape[1] is 1 for “rigid” motion and spatial_bins.shape[0] for “non-rigid” motion.

temporal_bins: np.array

Temporal bins in seconds.

spatial_bins: None or np.array

Bins for non-rigid motion. If None, rigid motion is used

direction: int (0, 1, 2)

Dimension along which channel_locations are shifted (0 - x, 1 - y, 2 - z), by default 1

spatial_interpolation_method: str

‘kriging’ or ‘idw’ or ‘nearest’. See spikeinterface.preprocessing.get_spatial_interpolation_kernel() for more details. Choice of the method:

  • ‘kriging’ : the same one used in kilosort

  • ‘idw’ : inverse distance weighted

  • ‘nearest’ : use the nearest channel

sigma_um: float (default 20.)

Used in the ‘kriging’ formula

p: int (default 1)

Used in the ‘kriging’ formula

num_closest: int (default 3)

Number of closest channels used by ‘idw’ method for interpolation.

border_mode: str

Control how channels are handled at the border:

  • ‘remove_channels’: remove channels on the border, so the corrected recording has fewer channels

  • ‘force_extrapolate’: keep all channels and force extrapolation (can lead to strange signals)

  • ‘force_zeros’: keep all channels but set them to zero when outside (force_extrapolate=False)

Returns
corrected_recording: CorrectMotionRecording

Recording after motion correction
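
A minimal sketch: build a motion-corrected recording from a previously estimated motion vector. Here motion, temporal_bins and spatial_bins are assumed to come from estimate_motion() (as stated above); recording is an existing Recording:

    # `recording`, `motion`, `temporal_bins`, `spatial_bins` are assumed to exist
    from spikeinterface.sortingcomponents.motion_correction import CorrectMotionRecording

    rec_corrected = CorrectMotionRecording(recording, motion, temporal_bins, spatial_bins,
                                           direction=1, border_mode='remove_channels',
                                           spatial_interpolation_method='kriging')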

Clustering

spikeinterface.sortingcomponents.clustering.find_cluster_from_peaks(recording, peaks, method='stupid', method_kwargs={}, extra_outputs=False, **job_kwargs)

Find clusters from peaks.

Parameters
recording: RecordingExtractor

The recording extractor object

peaks: array

Peaks array, as returned by detect_peaks()

method: str

Which method to use (‘stupid’ | ‘XXXX’)

method_kwargs: dict, optional

Keyword arguments for the chosen method

extra_outputs: bool

If True, debug information is also returned

Returns
labels: ndarray of int

List of possible cluster labels

peak_labels: array of int

peak_labels.shape[0] == peaks.shape[0]
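
A minimal sketch using the default method; recording and peaks (from detect_peaks) are assumed to exist, and method-specific kwargs are left empty here:

    # `recording` and `peaks` are assumed to exist
    from spikeinterface.sortingcomponents.clustering import find_cluster_from_peaks

    labels, peak_labels = find_cluster_from_peaks(recording, peaks,
                                                  method='stupid', method_kwargs={})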

Template Matching

spikeinterface.sortingcomponents.matching.find_spikes_from_templates(recording, method='naive', method_kwargs={}, extra_outputs=False, **job_kwargs)

Find spikes in a recording, given a set of templates.

Parameters
recording: RecordingExtractor

The recording extractor object

waveform_extractor: WaveformExtractor

The waveform extractor

method: str

Which method to use (‘naive’ | ‘tridesclous’ | ‘circus’)

method_kwargs: dict, optional

Keyword arguments for the chosen method

extra_outputs: bool

If True, method_kwargs is also returned

job_kwargs: dict

Parameters for ChunkRecordingExecutor

Returns
spikes: ndarray

Spikes found from templates.

method_kwargs:

Optionally returned for debugging purposes (only if extra_outputs=True).

Notes

Templates are represented as WaveformExtractor so statistics can be extracted.
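
A minimal sketch with the ‘naive’ method. Passing the WaveformExtractor through method_kwargs is an assumption based on the parameter list above (the signature itself only exposes method_kwargs); recording and we are assumed to exist:

    # `recording` and the WaveformExtractor `we` are assumed to exist
    from spikeinterface.sortingcomponents.matching import find_spikes_from_templates

    spikes = find_spikes_from_templates(recording, method='naive',
                                        method_kwargs={'waveform_extractor': we},
                                        n_jobs=4, chunk_duration='1s', progress_bar=True)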