API

Module spikeextractors

class spikeextractors.baseextractor.BaseExtractor
add_epoch(epoch_name, start_frame, end_frame)

This function adds an epoch to your extractor that tracks a certain time period. It is stored in an internal dictionary of start and end frame tuples.

epoch_name: str
The name of the epoch to be added
start_frame: int
The start frame of the epoch to be added (inclusive)
end_frame: int
The end frame of the epoch to be added (exclusive). If set to None, the epoch will extend from start_frame to the end of the extractor
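The epoch bookkeeping described above (a dictionary of start/end frame tuples) can be sketched in plain Python. The class and attribute names below are illustrative stand-ins, not the library's internals:

```python
# Minimal sketch of the documented epoch semantics: epochs live in a dict
# mapping epoch_name -> (start_frame, end_frame), with end_frame=None
# meaning "until the end of the extractor".

class EpochBook:
    def __init__(self, num_frames):
        self._epochs = {}            # epoch_name -> (start_frame, end_frame)
        self._num_frames = num_frames

    def add_epoch(self, epoch_name, start_frame, end_frame):
        if end_frame is None:        # None extends to the last frame
            end_frame = self._num_frames
        self._epochs[epoch_name] = (int(start_frame), int(end_frame))

    def get_epoch_info(self, epoch_name):
        start, end = self._epochs[epoch_name]
        return {"start_frame": start, "end_frame": end}

    def get_epoch_names(self):
        return list(self._epochs)

    def remove_epoch(self, epoch_name):
        self._epochs.pop(epoch_name, None)

book = EpochBook(num_frames=30000)
book.add_epoch("stim", 1000, 2000)
book.add_epoch("tail", 25000, None)
print(book.get_epoch_info("tail"))  # {'start_frame': 25000, 'end_frame': 30000}
```

This mirrors the frame conventions used throughout the API: start_frame inclusive, end_frame exclusive.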
allocate_array(memmap, shape=None, dtype=None, name=None, array=None)

Allocates a memory or memmap array

memmap: bool
If True, a memmap array is created in the sorting temporary folder
shape: tuple
Shape of the array. If None array must be given
dtype: dtype
Dtype of the array. If None array must be given
name: str or None
Name (root) of the file (if memmap is True). If None, a random name is generated
array: np.array
If array is given, shape and dtype are initialized based on the array. If memmap is True, the array is then deleted to clear memory
arr: np.array or np.memmap
The allocated memory or memmap array
annotate(annotation_key, value, overwrite=False)

This function adds an entry to the annotations dictionary.

annotation_key: str
The name under which the annotation will be stored
value:
The data associated with the given property name. Could be many formats as specified by the user
overwrite: bool
If True and the annotation already exists, it is overwritten
copy_annotations(extractor)

Copy object properties from another extractor to the current extractor.

extractor: Extractor
The extractor from which the annotations will be copied
copy_epochs(extractor)

Copy epochs from another extractor.

extractor: BaseExtractor
The extractor from which the epochs will be copied
del_memmap_file(memmap_file)

Safely deletes instantiated memmap file.

memmap_file: str or Path
The memmap file to delete
dump_to_dict(relative_to=None)

Dumps recording to a dictionary. The dictionary can be used to re-initialize an extractor with spikeextractors.load_extractor_from_dict(dump_dict)

relative_to: str, Path, or None
If not None, file_paths are serialized relative to this path
dump_dict: dict
Serialized dictionary
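The dump/load round trip amounts to recording a class name plus the keyword arguments needed to rebuild the object. A hedged sketch of that pattern, with an illustrative registry and key names that are not the library's actual schema:

```python
# Toy illustration of dict-based (de)serialization: the dict stores which
# class to build and the kwargs to build it with. "class"/"kwargs" keys and
# the REGISTRY are assumptions for this sketch only.

REGISTRY = {}

class ToyExtractor:
    def __init__(self, file_path, sampling_frequency):
        self.file_path = file_path
        self.sampling_frequency = sampling_frequency

    def dump_to_dict(self):
        return {"class": "ToyExtractor",
                "kwargs": {"file_path": self.file_path,
                           "sampling_frequency": self.sampling_frequency}}

REGISTRY["ToyExtractor"] = ToyExtractor

def load_extractor_from_dict(d):
    # look the class up by name and rebuild it from its stored kwargs
    cls = REGISTRY[d["class"]]
    return cls(**d["kwargs"])

original = ToyExtractor("data.dat", 30000.0)
clone = load_extractor_from_dict(original.dump_to_dict())
```

dump_to_json and dump_to_pickle below serialize the same kind of dictionary to a file.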
dump_to_json(file_path=None, relative_to=None)

Dumps recording extractor to json file. The extractor can be re-loaded with spikeextractors.load_extractor_from_json(json_file)

file_path: str
Path of the json file
relative_to: str, Path, or None
If not None, file_paths are serialized relative to this path
dump_to_pickle(file_path=None, include_properties=True, include_features=True, relative_to=None)

Dumps recording extractor to a pickle file. The extractor can be re-loaded with spikeextractors.load_extractor_from_pickle(pkl_file)

file_path: str
Path of the pickle file
include_properties: bool
If True, all properties are dumped
include_features: bool
If True, all features are dumped
relative_to: str, Path, or None
If not None, file_paths are serialized relative to this path
get_annotation(annotation_name)

This function returns the data stored under the annotation name.

annotation_name: str
The name of an annotation stored by the Extractor
annotation_data
The data associated with the given property name. Could be many formats as specified by the user
get_annotation_keys()

This function returns a list of stored annotation keys

annotation_keys: list
List of stored annotation keys
get_epoch_info(epoch_name)

This function returns the start frame and end frame of the epoch in a dict.

epoch_name: str
The name of the epoch to be returned
epoch_info: dict
A dict containing the start frame and end frame of the epoch
get_epoch_names()

This function returns a list of all the epoch names in the extractor

epoch_names: list
List of epoch names in the recording extractor
get_tmp_folder()

Returns temporary folder associated to the extractor

temp_folder: Path
The temporary folder
static load_extractor_from_dict(d)

Instantiates extractor from dictionary

d: dictionary
Python dictionary
extractor: RecordingExtractor or SortingExtractor
The loaded extractor object
static load_extractor_from_json(json_file)

Instantiates extractor from json file

json_file: str or Path
Path to json file
extractor: RecordingExtractor or SortingExtractor
The loaded extractor object
static load_extractor_from_pickle(pkl_file)

Instantiates extractor from pickle file.

pkl_file: str or Path
Path to pickle file
extractor: RecordingExtractor or SortingExtractor
The loaded extractor object
make_serialized_dict(relative_to=None)

Makes a nested serialized dictionary out of the extractor. The dictionary can be used to re-initialize an extractor with spikeextractors.load_extractor_from_dict(dump_dict)

relative_to: str, Path, or None
If not None, file_paths are serialized relative to this path
dump_dict: dict
Serialized dictionary
remove_epoch(epoch_name)

This function removes an epoch from your extractor.

epoch_name: str
The name of the epoch to be removed
set_tmp_folder(folder)

Sets temporary folder of the extractor

folder: str or Path
The temporary folder
class spikeextractors.RecordingExtractor

A class that contains functions for extracting important information from recorded extracellular data. It is an abstract class so all functions with the @abstractmethod tag must be implemented for the initialization to work.

clear_channel_gains(channel_ids=None)

This function clears the gains of each channel specified by channel_ids

channel_ids: array-like or int
The channel ids (ints) for which the gains will be cleared. If None, all channel ids are assumed.
clear_channel_groups(channel_ids=None)

This function clears the group of each channel specified by channel_ids

channel_ids: array-like or int
The channel ids (ints) for which the groups will be cleared. If None, all channel ids are assumed.
clear_channel_locations(channel_ids=None)

This function clears the location of each channel specified by channel_ids.

channel_ids: array-like or int
The channel ids (ints) for which the locations will be cleared. If None, all channel ids are assumed.
clear_channel_offsets(channel_ids=None)

This function clears the offsets of each channel specified by channel_ids.

channel_ids: array-like or int
The channel ids (ints) for which the offsets will be cleared. If None, all channel ids are assumed.
clear_channel_property(channel_id, property_name)

This function clears the channel property for the given property.

channel_id: int
The id that specifies a channel in the recording
property_name: string
The name of the property to be cleared
clear_channels_property(property_name, channel_ids=None)

This function clears the channels’ properties for the given property.

property_name: string
The name of the property to be cleared
channel_ids: list
A list of ids that specifies a set of channels in the recording. If None all channels are cleared
copy_channel_properties(recording, channel_ids=None)

Copy channel properties from another recording extractor to the current recording extractor.

recording: RecordingExtractor
The recording extractor from which the properties will be copied
channel_ids: (array_like, (int, np.integer))
The list (or single value) of channel_ids for which the properties will be copied
copy_times(extractor)

This function copies times from another extractor.

extractor: BaseExtractor
The extractor from which the epochs will be copied
frame_to_time(frames)

This function converts user-inputted frame indexes to times with units of seconds.

frames: float or array-like
The frame or frames to be converted to times
times: float or array-like
The corresponding times in seconds
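When no explicit times vector has been set with set_times, the frame/time mapping reduces to division by the sampling frequency. A sketch of that default (with set_times, the extractor would instead look times up in the stored vector):

```python
import numpy as np

# Default frame <-> time conversion: time = frame / sampling_frequency.
# The sampling frequency here is an arbitrary example value.
sampling_frequency = 20000.0

def frame_to_time(frames):
    return np.asarray(frames) / sampling_frequency        # seconds

def time_to_frame(times):
    return (np.asarray(times) * sampling_frequency).astype(int)

one_second = frame_to_time(20000)   # frame 20000 at 20 kHz is 1.0 s
half_second_frame = time_to_frame(0.5)
```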
get_channel_gains(channel_ids=None)

This function returns the gain of each channel specified by channel_ids.

channel_ids: array_like
The channel ids (ints) for which the gains will be returned
gains: array_like
Returns a list of corresponding gains (floats) for the given channel_ids
get_channel_groups(channel_ids=None)

This function returns the group of each channel specified by channel_ids

channel_ids: array-like or int
The channel ids (ints) for which the groups will be returned
groups: array_like
Returns a list of corresponding groups (ints) for the given channel_ids
get_channel_ids()

Returns the list of channel ids. If not specified, the range from 0 to num_channels - 1 is returned.

channel_ids: list
Channel list
get_channel_locations(channel_ids=None, locations_2d=True)

This function returns the location of each channel specified by channel_ids

channel_ids: array-like or int
The channel ids (ints) for which the locations will be returned. If None, all channel ids are assumed.
locations_2d: bool
If True (default), first two dimensions are returned
locations: array_like
Returns a list of corresponding locations (floats) for the given channel_ids
get_channel_offsets(channel_ids=None)

This function returns the offset of each channel specified by channel_ids.

channel_ids: array_like
The channel ids (ints) for which the offsets will be returned
offsets: array_like
Returns a list of corresponding offsets for the given channel_ids
get_channel_property(channel_id, property_name)

This function returns the data stored under the property name from the given channel.

channel_id: int
The channel id for which the property will be returned
property_name: str
A property stored by the RecordingExtractor (location, etc.)
property_data
The data associated with the given property name. Could be many formats as specified by the user
get_channel_property_names(channel_id)

Get a list of property names for a given channel.

channel_id: int
The channel id for which the property names will be returned. If None (default), will return property names for all channels
property_names
The list of property names
get_dtype(return_scaled=True)

This function returns the traces dtype

return_scaled: bool
If False and the recording extractor has unscaled traces, it returns the dtype of unscaled traces. If True (default) it returns the dtype of the scaled traces
dtype: np.dtype
The dtype of the traces
get_epoch(epoch_name)

This function returns a SubRecordingExtractor which is a view to the given epoch

epoch_name: str
The name of the epoch to be returned
epoch_extractor: SubRecordingExtractor
A SubRecordingExtractor which is a view to the given epoch
get_num_channels()

This function returns the number of channels in the recording.

num_channels: int
Number of channels in the recording
get_num_frames()

This function returns the number of frames in the recording

num_frames: int
Number of frames in the recording (duration of recording)
get_sampling_frequency()

This function returns the sampling frequency in units of Hz.

fs: float
Sampling frequency of the recordings in Hz
get_shared_channel_property_names(channel_ids=None)

Get the intersection of channel property names for a given set of channels or for all channels if channel_ids is None.

channel_ids: array_like
The channel ids for which the shared property names will be returned. If None (default), will return shared property names for all channels
property_names
The list of shared property names
get_snippets(reference_frames, snippet_len, channel_ids=None, return_scaled=True)

This function returns data snippets from the given channels that are starting on the given frames and are the length of the given snippet lengths before and after.

reference_frames: array_like
A list or array of frames that will be used as the reference frame of each snippet.
snippet_len: int or tuple
If int, the snippet will be centered at the reference frame, with half the length before and half after. If tuple, it will return the first value of frames before and the second value of frames after the reference frame (allows for asymmetry).
channel_ids: array_like
A list or array of channel ids (ints) from which each trace will be extracted
return_scaled: bool
If True, snippets are returned after scaling (using gain/offset). If False, the raw traces are returned.
snippets: numpy.ndarray
Returns a list of the snippets as numpy arrays. The length of the list is len(reference_frames). Each array has dimensions (num_channels x snippet_len). Out-of-bounds cases are handled by filling the snippet with zeros.
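The snippet-length and zero-padding rules described above can be sketched with plain numpy. This is a toy stand-in for illustration, not the extractor implementation:

```python
import numpy as np

# Sketch of the documented snippet semantics: an int snippet_len is split
# into half-before/half-after, a tuple gives (before, after) explicitly,
# and out-of-bounds samples are zero-filled.
def get_snippets(traces, reference_frames, snippet_len):
    # traces: (num_channels, num_frames)
    if isinstance(snippet_len, tuple):
        before, after = snippet_len
    else:
        before = snippet_len // 2
        after = snippet_len - before
    num_channels, num_frames = traces.shape
    snippets = np.zeros((len(reference_frames), num_channels, before + after),
                        dtype=traces.dtype)
    for i, ref in enumerate(reference_frames):
        lo, hi = ref - before, ref + after
        src_lo, src_hi = max(lo, 0), min(hi, num_frames)   # clip to bounds
        snippets[i, :, src_lo - lo:src_hi - lo] = traces[:, src_lo:src_hi]
    return snippets

traces = np.arange(20, dtype=float).reshape(2, 10)   # 2 channels, 10 frames
snips = get_snippets(traces, [1, 5], snippet_len=4)
# snips[0] is zero-padded on the left because frame 1 - 2 is out of bounds
```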
get_sub_extractors_by_property(property_name, return_property_list=False)

Returns a list of SubRecordingExtractors from this RecordingExtractor based on the given property_name (e.g. group)

property_name: str
The property used to subdivide the extractor
return_property_list: bool
If True the property list is returned
sub_list: list
The list of subextractors to be returned

OR sub_list, prop_list

If return_property_list is True, the property list will be returned as well
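The grouping behavior amounts to bucketing channels by the value of the given property and producing one sub-extractor per distinct value. A sketch of that bucketing (the sub-extractors are represented here as plain id lists, for illustration only):

```python
from collections import defaultdict

# Example channel_id -> property mapping (e.g. the 'group' property).
channel_groups = {0: "A", 1: "A", 2: "B", 3: "B"}

def split_by_property(props):
    # One bucket of channel ids per distinct property value.
    buckets = defaultdict(list)
    for channel_id, value in props.items():
        buckets[value].append(channel_id)
    return dict(buckets)

sub_lists = split_by_property(channel_groups)
print(sub_lists)  # {'A': [0, 1], 'B': [2, 3]}
```

With return_property_list=True, the distinct property values (the bucket keys here) would be returned alongside the sub-extractors.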
get_traces(channel_ids=None, start_frame=None, end_frame=None, return_scaled=True)

This function extracts and returns a trace from the recorded data from the given channel ids and the given start and end frame. It will return traces from within four ranges:

[start_frame, start_frame+1, …, end_frame-1] if both start_frame and end_frame are given,
[start_frame, start_frame+1, …, final_recording_frame-1] if only start_frame is given,
[0, 1, …, end_frame-1] if only end_frame is given,
[0, 1, …, final_recording_frame-1] if neither is given.

Traces are returned in a 2D array that contains all of the traces from each channel with dimensions (num_channels x num_frames). In this implementation, start_frame is inclusive and end_frame is exclusive, conforming to numpy standards.

channel_ids: array_like
A list or 1D array of channel ids (ints) from which each trace will be extracted.
start_frame: int
The starting frame of the trace to be returned (inclusive).
end_frame: int
The ending frame of the trace to be returned (exclusive).
return_scaled: bool
If True, traces are returned after scaling (using gain/offset). If False, the raw traces are returned.
traces: numpy.ndarray
A 2D array that contains all of the traces from each channel. Dimensions are: (num_channels x num_frames)
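The defaulting rules above reduce to plain numpy slicing over a (num_channels, num_frames) array. A toy stand-in, not the extractor implementation:

```python
import numpy as np

# Sketch of get_traces defaulting: missing start_frame -> 0, missing
# end_frame -> num_frames, slice is [start_frame, end_frame).
def get_traces(traces, channel_ids=None, start_frame=None, end_frame=None):
    start = 0 if start_frame is None else start_frame
    end = traces.shape[1] if end_frame is None else end_frame
    rows = slice(None) if channel_ids is None else channel_ids
    return traces[rows, start:end]

traces = np.arange(12).reshape(3, 4)       # 3 channels, 4 frames
chunk = get_traces(traces, channel_ids=[0, 2], start_frame=1, end_frame=3)
print(chunk)  # frames 1 and 2 of channels 0 and 2
```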
get_ttl_events(start_frame=None, end_frame=None, channel_id=0)

Returns an array with frames of TTL signals. To be implemented in sub-classes

start_frame: int
The starting frame of the ttl to be returned (inclusive)
end_frame: int
The ending frame of the ttl to be returned (exclusive)
channel_id: int
The TTL channel id
ttl_frames: array-like
Frames of TTL signal for the specified channel
ttl_state: array-like
State of the transition: 1 - rising, -1 - falling
load_probe_file(probe_file, channel_map=None, channel_groups=None, verbose=False)

This function returns a SubRecordingExtractor that contains information from the given probe file (channel locations, groups, etc.) If a .prb file is given, then ‘location’ and ‘group’ information for each channel is added to the SubRecordingExtractor. If a .csv file is given, then it will only add ‘location’ to the SubRecordingExtractor.

probe_file: str
Path to probe file. Either .prb or .csv
channel_map : array-like
A list of channel IDs to set in the loaded file. Only used if the loaded file is a .csv.
channel_groups : array-like
A list of groups (ints) for the channel_ids to set in the loaded file. Only used if the loaded file is a .csv.
verbose: bool
If True, output is verbose
subrecording: SubRecordingExtractor
The extractor containing all of the probe information.
save_to_probe_file(probe_file, grouping_property=None, radius=None, graph=True, geometry=True, verbose=False)

Saves probe file from the channel information of this recording extractor.

probe_file: str
file name of .prb or .csv file to save probe information to
grouping_property: str (default None)
If grouping_property is a shared_channel_property, different groups are saved based on the property.
radius: float (default None)
Adjacency radius (used by some sorters). If None it is not saved to the probe file.
graph: bool
If True, the adjacency graph is saved (default=True)
geometry: bool
If True, the geometry is saved (default=True)
verbose: bool
If True, output is verbose
set_channel_gains(gains, channel_ids=None)

This function sets the gain key property of each specified channel id with the corresponding gain from the passed-in gains float/list.

gains: float/array_like
If a float, each channel will be assigned the corresponding gain. If a list, each channel will be given a gain from the list
channel_ids: array_like or None
The channel ids (ints) for which the groups will be specified. If None, all channel ids are assumed.
set_channel_groups(groups, channel_ids=None)

This function sets the group key property of each specified channel id with the corresponding group of the passed in groups list.

groups: array-like or int
A list of groups (ints) for the channel_ids
channel_ids: array_like or None
The channel ids (ints) for which the groups will be specified. If None, all channel ids are assumed.
set_channel_locations(locations, channel_ids=None)

This function sets the location key properties of each specified channel id with the corresponding locations of the passed in locations list.

locations: array_like
A list of corresponding locations (array_like) for the given channel_ids
channel_ids: array-like or int
The channel ids (ints) for which the locations will be specified. If None, all channel ids are assumed.
set_channel_offsets(offsets, channel_ids=None)

This function sets the offset key property of each specified channel id with the corresponding offset from the passed-in offsets float/list.

offsets: float/array_like
If a float, each channel will be assigned the corresponding offset. If a list, each channel will be given an offset from the list
channel_ids: array_like or None
The channel ids (ints) for which the groups will be specified. If None, all channel ids are assumed.
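Gains and offsets together define how raw (e.g. integer ADC) traces are converted to physical units when return_scaled=True. The per-channel formula sketched below (scaled = raw * gain + offset) is the usual convention, stated here as an assumption for illustration:

```python
import numpy as np

# Assumed scaling convention: scaled = raw * gain + offset, per channel.
raw = np.array([[100, 200],
                [300, 400]], dtype=np.int16)   # 2 channels, 2 frames
gains = np.array([0.1, 0.2])                   # e.g. microvolts per bit
offsets = np.array([0.0, -5.0])

# Broadcast channel-wise over the (num_channels, num_frames) array.
scaled = raw * gains[:, None] + offsets[:, None]
```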
set_channel_property(channel_id, property_name, value)

This function adds a property dataset to the given channel under the property name.

channel_id: int
The channel id for which the property will be added
property_name: str
A property stored by the RecordingExtractor (location, etc.)
value:
The data associated with the given property name. Could be many formats as specified by the user
set_times(times)

This function sets the recording times (in seconds) for each frame

times: array-like
The times in seconds for each frame
time_to_frame(times)

This function converts user-inputted times (in seconds) to frame indexes.

times: float or array-like
The times (in seconds) to be converted to frame indexes
frames: float or array-like
The corresponding frame indexes
static write_recording(recording, save_path)

This function writes out the recorded file of a given recording extractor to the file format of this current recording extractor. Allows for easy conversion between recording file formats. It is a static method so it can be used without instantiating this recording extractor.

recording: RecordingExtractor
A RecordingExtractor that can extract information from the recording file to be converted to the new format.
save_path: string
A path to where the converted recorded data will be saved, which may either be a file or a folder, depending on the format.
write_to_binary_dat_format(save_path, time_axis=0, dtype=None, chunk_size=None, chunk_mb=500, n_jobs=1, joblib_backend='loky', return_scaled=True, verbose=False)

Saves the traces of this recording extractor into binary .dat format.

save_path: str
The path to the file.
time_axis: 0 (default) or 1
If 0 then traces are transposed to ensure (nb_sample, nb_channel) in the file. If 1, the traces shape (nb_channel, nb_sample) is kept in the file.
dtype: dtype
Type of the saved data. Default float32
chunk_size: None or int
Size of each chunk in number of frames. If None (default) and ‘chunk_mb’ is given, the file is saved in chunks of ‘chunk_mb’ Mb (default 500Mb)
chunk_mb: None or int
Chunk size in Mb (default 500Mb)
n_jobs: int
Number of jobs to use (Default 1)
joblib_backend: str
Joblib backend for parallel processing (‘loky’, ‘threading’, ‘multiprocessing’)
return_scaled: bool
If True, traces are returned after scaling (using gain/offset). If False, the raw traces are returned
verbose: bool
If True, output is verbose (when chunks are used)
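Stripped of chunking and parallelism, the binary export amounts to casting to the target dtype, orienting the time axis, and writing raw samples. A simplified sketch (file naming and single-shot write are assumptions; the library writes in chunks):

```python
import os
import tempfile
import numpy as np

# (num_channels, num_frames) traces, as returned by get_traces.
traces = np.arange(12, dtype=np.float32).reshape(3, 4)

# time_axis=0: transpose so the file is laid out (num_frames, num_channels).
path = os.path.join(tempfile.mkdtemp(), "traces.dat")
traces.T.astype("float32").tofile(path)

# Reading it back requires knowing dtype and channel count out of band,
# which is why .dat files travel with metadata (e.g. a probe file).
back = np.fromfile(path, dtype="float32").reshape(4, 3).T
```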
write_to_h5_dataset_format(dataset_path, save_path=None, file_handle=None, time_axis=0, dtype=None, chunk_size=None, chunk_mb=500, verbose=False)

Saves the traces of a recording extractor in an h5 dataset.

dataset_path: str
Path to dataset in h5 file (e.g. ‘/dataset’)
save_path: str
The path to the file.
file_handle: file handle
The file handle to dump data. This can be used to append data to a header. If file_handle is given, the file is NOT closed after writing the binary data.
time_axis: 0 (default) or 1
If 0 then traces are transposed to ensure (nb_sample, nb_channel) in the file. If 1, the traces shape (nb_channel, nb_sample) is kept in the file.
dtype: dtype
Type of the saved data. Default float32.
chunk_size: None or int
Size of each chunk in number of frames. If None (default) and ‘chunk_mb’ is given, the file is saved in chunks of ‘chunk_mb’ Mb (default 500Mb)
chunk_mb: None or int
Chunk size in Mb (default 500Mb)
verbose: bool
If True, output is verbose (when chunks are used)
class spikeextractors.SortingExtractor

A class that contains functions for extracting important information from spike sorted data given a spike sorting software. It is an abstract class so all functions with the @abstractmethod tag must be implemented for the initialization to work.

clear_unit_property(unit_id, property_name)

This function clears the unit property for the given property.

unit_id: int
The id that specifies a unit in the sorting
property_name: string
The name of the property to be cleared
clear_unit_spike_features(unit_id, feature_name)

This function clears the unit spikes features for the given feature.

unit_id: int
The id that specifies a unit in the sorting
feature_name: string
The name of the feature to be cleared
clear_units_property(property_name, unit_ids=None)

This function clears the units’ properties for the given property.

property_name: string
The name of the property to be cleared
unit_ids: list
A list of ids that specifies a set of units in the sorting. If None, all units are cleared
clear_units_spike_features(feature_name, unit_ids=None)

This function clears the units’ spikes features for the given feature.

feature_name: string
The name of the feature to be cleared
unit_ids: list
A list of ids that specifies a set of units in the sorting. If None, all units are cleared
copy_times(extractor)

This function copies times from another extractor.

extractor: BaseExtractor
The extractor from which the epochs will be copied
copy_unit_properties(sorting, unit_ids=None)

Copy unit properties from another sorting extractor to the current sorting extractor.

sorting: SortingExtractor
The sorting extractor from which the properties will be copied
unit_ids: (array_like, (int, np.integer))
The list (or single value) of unit_ids for which the properties will be copied
copy_unit_spike_features(sorting, unit_ids=None)

Copy unit spike features from another sorting extractor to the current sorting extractor.

sorting: SortingExtractor
The sorting extractor from which the spike features will be copied
unit_ids: (array_like, (int, np.integer))
The list (or single value) of unit_ids for which the spike features will be copied
frame_to_time(frames)

This function converts user-inputted frame indexes to times with units of seconds.

frames: float or array-like
The frame or frames to be converted to times
times: float or array-like
The corresponding times in seconds
get_epoch(epoch_name)

This function returns a SubSortingExtractor which is a view to the given epoch.

epoch_name: str
The name of the epoch to be returned
epoch_extractor: SubSortingExtractor
A SubSortingExtractor which is a view to the given epoch
get_sampling_frequency()

It returns the sampling frequency.

sampling_frequency: float
The sampling frequency
get_shared_unit_property_names(unit_ids=None)

Get the intersection of unit property names for a given set of units or for all units if unit_ids is None.

unit_ids: array_like
The unit ids for which the shared property names will be returned. If None (default), will return shared property names for all units
property_names
The list of shared property names
get_shared_unit_spike_feature_names(unit_ids=None)

Get the intersection of unit feature names for a given set of units or for all units if unit_ids is None.

unit_ids: array_like
The unit ids for which the shared feature names will be returned. If None (default), will return shared feature names for all units
feature_names
The list of shared feature names
get_sub_extractors_by_property(property_name, return_property_list=False)

Returns a list of SubSortingExtractors from this SortingExtractor based on the given property_name (e.g. group)

property_name: str
The property used to subdivide the extractor
return_property_list: bool
If True the property list is returned
sub_list: list
The list of subextractors to be returned
get_unit_ids()

This function returns a list of ids (ints) for each unit in the sorted result.

unit_ids: array_like
A list of the unit ids in the sorted result (ints).
get_unit_property(unit_id, property_name)

This function returns the data stored under the property name given from the given unit.

unit_id: int
The unit id for which the property will be returned
property_name: str
The name of the property
value
The data associated with the given property name. Could be many formats as specified by the user
get_unit_property_names(unit_id)

Get a list of property names for a given unit.

unit_id: int
The unit id for which the property names will be returned
property_names
The list of property names
get_unit_spike_feature_names(unit_id)

This function returns the list of feature names for the given unit

unit_id: int
The unit id for which the feature names will be returned
feature_names
The list of feature names.
get_unit_spike_features(unit_id, feature_name, start_frame=None, end_frame=None)

This function extracts the specified spike features from the specified unit. It will return spike features from within four ranges:

[start_frame, start_frame+1, …, end_frame-1] if both start_frame and end_frame are given,
[start_frame, start_frame+1, …, final_unit_spike_frame-1] if only start_frame is given,
[0, 1, …, end_frame-1] if only end_frame is given,
[0, 1, …, final_unit_spike_frame-1] if neither is given.

Spike features are returned as an array_like of spike features. In this implementation, start_frame is inclusive and end_frame is exclusive, conforming to numpy standards.

unit_id: int
The id that specifies a unit in the recording
feature_name: string
The name of the feature to be returned
start_frame: int
The frame above which a spike frame is returned (inclusive)
end_frame: int
The frame below which a spike frame is returned (exclusive)
spike_features: numpy.ndarray
An array containing all the features for each spike in the specified unit given the range of start and end frames
get_unit_spike_train(unit_id, start_frame=None, end_frame=None)

This function extracts spike frames from the specified unit. It will return spike frames from within four ranges:

[start_frame, start_frame+1, …, end_frame-1] if both start_frame and end_frame are given,
[start_frame, start_frame+1, …, final_unit_spike_frame-1] if only start_frame is given,
[0, 1, …, end_frame-1] if only end_frame is given,
[0, 1, …, final_unit_spike_frame-1] if neither is given.

Spike frames are returned as an array_like of spike frames. In this implementation, start_frame is inclusive and end_frame is exclusive, conforming to numpy standards.

unit_id: int
The id that specifies a unit in the recording
start_frame: int
The frame above which a spike frame is returned (inclusive)
end_frame: int
The frame below which a spike frame is returned (exclusive)
spike_train: numpy.ndarray
A 1D array containing all the frames for each spike in the specified unit given the range of start and end frames
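The start/end filtering described above keeps spike frames f with start_frame <= f < end_frame, with missing bounds covering the whole train. A toy stand-in, not the library code:

```python
import numpy as np

# Sketch of spike-train windowing: start_frame inclusive, end_frame
# exclusive, None meaning unbounded on that side.
def get_unit_spike_train(spike_frames, start_frame=None, end_frame=None):
    frames = np.asarray(spike_frames)
    if start_frame is not None:
        frames = frames[frames >= start_frame]
    if end_frame is not None:
        frames = frames[frames < end_frame]
    return frames

train = get_unit_spike_train([10, 250, 900, 1500],
                             start_frame=200, end_frame=1000)
print(train)  # [250 900]
```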
get_units_property(*, unit_ids=None, property_name)

Returns a list of values stored under the property name corresponding to a list of units

unit_ids: list
The unit ids for which the property will be returned. Defaults to get_unit_ids()
property_name: str
The name of the property
values
The list of values
get_units_spike_train(unit_ids=None, start_frame=None, end_frame=None)

This function extracts spike frames from the specified units.

unit_ids: array_like
The unit ids from which to return spike trains. If None, all unit spike trains will be returned
start_frame: int
The frame above which a spike frame is returned (inclusive)
end_frame: int
The frame below which a spike frame is returned (exclusive)
spike_train: numpy.ndarray
A 2D array containing all the frames for each spike in the specified units given the range of start and end frames
get_unsorted_spike_train(start_frame=None, end_frame=None)

This function extracts spike frames from the unsorted events. It will return spike frames from within four ranges:

[start_frame, start_frame+1, …, end_frame-1] if both start_frame and end_frame are given,
[start_frame, start_frame+1, …, final_unit_spike_frame-1] if only start_frame is given,
[0, 1, …, end_frame-1] if only end_frame is given,
[0, 1, …, final_unit_spike_frame-1] if neither is given.

Spike frames are returned as an array_like of spike frames. In this implementation, start_frame is inclusive and end_frame is exclusive, conforming to numpy standards.

start_frame: int
The frame above which a spike frame is returned (inclusive)
end_frame: int
The frame below which a spike frame is returned (exclusive)
spike_train: numpy.ndarray
A 1D array containing all the frames for each spike in the unsorted events given the range of start and end frames
set_sampling_frequency(sampling_frequency)

It sets the sorting extractor sampling frequency.

sampling_frequency: float
The sampling frequency
set_times(times)

This function sets the sorting times to convert spike trains to seconds

times: array-like
The times in seconds for each frame
set_unit_property(unit_id, property_name, value)

This function adds a unit property data set under the given property name to the given unit.

unit_id: int
The unit id for which the property will be set
property_name: str
The name of the property to be stored
value
The data associated with the given property name. Could be many formats as specified by the user
set_unit_spike_features(unit_id, feature_name, value, indexes=None)

This function adds a unit features data set under the given feature name to the given unit.

unit_id: int
The unit id for which the features will be set
feature_name: str
The name of the feature to be stored
value: array_like
The data associated with the given feature name. Could be many formats as specified by the user.
indexes: array_like
The indices of the specified spikes (if the number of spike features is less than the length of the unit’s spike train). If None, it is assumed that value has the same length as the spike train.
set_units_property(*, unit_ids=None, property_name, values)

Sets unit property data for a list of units

unit_ids: list
The list of unit ids for which the property will be set. Defaults to get_unit_ids()
property_name: str
The name of the property
values: list
The list of values to be set
time_to_frame(times)

This function converts a user-inputted times (in seconds) to a frame indexes.

times: float or array-like
The times (in seconds) to be converted to frame indexes
frames: float or array-like
The corresponding frame indexes
static write_sorting(sorting, save_path)

This function writes out the spike sorted data file of a given sorting extractor to the file format of this current sorting extractor. Allows for easy conversion between spike sorting file formats. It is a static method so it can be used without instantiating this sorting extractor.

sorting: SortingExtractor
A SortingExtractor that can extract information from the sorted data file to be converted to the new format
save_path: string
A path to where the converted sorted data will be saved, which may either be a file or a folder, depending on the format
class spikeextractors.SubRecordingExtractor(parent_recording, *, channel_ids=None, renamed_channel_ids=None, start_frame=None, end_frame=None)
copy_channel_properties(recording, channel_ids=None)

Copy channel properties from another recording extractor to the current recording extractor.

recording: RecordingExtractor
The recording extractor from which the properties will be copied
channel_ids: (array_like, (int, np.integer))
The list (or single value) of channel_ids for which the properties will be copied
frame_to_time(frame)

This function converts user-inputted frame indexes to times with units of seconds.

frames: float or array-like
The frame or frames to be converted to times
times: float or array-like
The corresponding times in seconds
get_channel_ids()

Returns the list of channel ids. If not specified, the range from 0 to num_channels - 1 is returned.

channel_ids: list
Channel list
get_num_frames()

This function returns the number of frames in the recording

num_frames: int
Number of frames in the recording (duration of recording)
get_sampling_frequency()

This function returns the sampling frequency in units of Hz.

fs: float
Sampling frequency of the recordings in Hz
get_snippets(reference_frames, snippet_len, channel_ids=None, return_scaled=True)

This function returns data snippets from the given channels, each cut around a given reference frame with the given snippet length before and after.

reference_frames: array_like
A list or array of frames that will be used as the reference frame of each snippet.
snippet_len: int or tuple
If int, the snippet will be centered at the reference frame, with half of the length before and half after. If tuple, the first value is the number of frames before and the second value the number of frames after the reference frame (allows for asymmetry).
channel_ids: array_like
A list or array of channel ids (ints) from which each trace will be extracted
return_scaled: bool
If True, snippets are returned after scaling (using gain/offset). If False, the raw traces are returned.
snippets: numpy.ndarray
Returns a list of the snippets as numpy arrays. The length of the list is len(reference_frames). Each array has dimensions (num_channels x snippet_len). Out-of-bounds cases should be handled by filling in zeros in the snippet.
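The snippet semantics described above (centering, tuple-based asymmetric windows, zero-padding out of bounds) can be sketched with plain NumPy. The helper name and the toy traces are illustrative, not the library implementation:

```python
import numpy as np

def extract_snippets(traces, reference_frames, snippet_len):
    """Sketch of the documented snippet behavior (hypothetical helper).
    traces has shape (num_channels, num_frames)."""
    if isinstance(snippet_len, (tuple, list)):
        before, after = snippet_len
    else:
        # centered at the reference frame: half before, half after
        before = snippet_len // 2
        after = snippet_len - before
    n_channels, n_frames = traces.shape
    snippets = []
    for ref in reference_frames:
        snippet = np.zeros((n_channels, before + after), dtype=traces.dtype)
        lo, hi = ref - before, ref + after
        src_lo, src_hi = max(lo, 0), min(hi, n_frames)
        # out-of-bounds regions stay zero
        snippet[:, src_lo - lo:src_hi - lo] = traces[:, src_lo:src_hi]
        snippets.append(snippet)
    return snippets

traces = np.arange(20, dtype=float).reshape(2, 10)  # 2 channels, 10 frames
snips = extract_snippets(traces, reference_frames=[1, 5], snippet_len=(2, 2))
```

With a reference frame near the start of the recording, the first snippet is zero-padded on the left; the second falls entirely within the traces.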
get_traces(channel_ids=None, start_frame=None, end_frame=None, return_scaled=True)

This function extracts and returns traces from the recorded data for the given channel ids between the given start and end frames. The returned frame range is one of four cases:

[start_frame, start_frame+1, …, end_frame-1] if both start_frame and end_frame are given
[start_frame, start_frame+1, …, final_recording_frame-1] if only start_frame is given
[0, 1, …, end_frame-1] if only end_frame is given
[0, 1, …, final_recording_frame-1] if neither is given

Traces are returned in a 2D array that contains all of the traces from each channel, with dimensions (num_channels x num_frames). In this implementation, start_frame is inclusive and end_frame is exclusive, conforming to numpy standards.

channel_ids: array_like
A list or 1D array of channel ids (ints) from which each trace will be extracted.
start_frame: int
The starting frame of the trace to be returned (inclusive).
end_frame: int
The ending frame of the trace to be returned (exclusive).
return_scaled: bool
If True, traces are returned after scaling (using gain/offset). If False, the raw traces are returned.
traces: numpy.ndarray
A 2D array that contains all of the traces from each channel. Dimensions are: (num_channels x num_frames)
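The inclusive start_frame / exclusive end_frame convention is exactly NumPy's slicing convention, which a toy array makes concrete (an illustration of the semantics, not the extractor internals):

```python
import numpy as np

# toy traces: 3 channels x 8 frames
traces = np.arange(24).reshape(3, 8)

# start_frame inclusive, end_frame exclusive, as documented
start_frame, end_frame = 2, 5
chunk = traces[:, start_frame:end_frame]  # frames 2, 3, 4
```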
get_ttl_events(start_frame=None, end_frame=None, channel_id=0)

Returns an array with frames of TTL signals. To be implemented in sub-classes

start_frame: int
The starting frame of the ttl to be returned (inclusive)
end_frame: int
The ending frame of the ttl to be returned (exclusive)
channel_id: int
The TTL channel id
ttl_frames: array-like
Frames of TTL signal for the specified channel
ttl_state: array-like
State of the transition: 1 - rising, -1 - falling
time_to_frame(time)

This function converts user-inputted times (in seconds) to frame indexes.

times: float or array-like
The times (in seconds) to be converted to frame indexes
frames: float or array-like
The corresponding frame indexes
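In the default case, with no user-set time vector, the frame/time conversion reduces to multiplying or dividing by the sampling frequency. A minimal sketch of that default (the helper names and the 30 kHz rate are illustrative):

```python
import numpy as np

sampling_frequency = 30000.0  # Hz, illustrative value

def time_to_frame_simple(times, fs):
    # default mapping when no explicit time vector is set:
    # round(time * sampling_frequency)
    return np.round(np.asarray(times) * fs).astype(int)

def frame_to_time_simple(frames, fs):
    return np.asarray(frames) / fs

frames = time_to_frame_simple([0.0, 0.5, 1.0], sampling_frequency)
times = frame_to_time_simple(frames, sampling_frequency)
```

When set_times() has been called, extractors use the supplied per-frame time vector instead of this uniform mapping.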
class spikeextractors.SubSortingExtractor(parent_sorting, *, unit_ids=None, renamed_unit_ids=None, start_frame=None, end_frame=None)
copy_unit_properties(sorting, unit_ids=None)

Copy unit properties from another sorting extractor to the current sorting extractor.

sorting: SortingExtractor
The sorting extractor from which the properties will be copied
unit_ids: (array_like, (int, np.integer))
The list (or single value) of unit_ids for which the properties will be copied
copy_unit_spike_features(sorting, unit_ids=None, start_frame=None, end_frame=None)

Copy unit spike features from another sorting extractor to the current sorting extractor.

sorting: SortingExtractor
The sorting extractor from which the spike features will be copied
unit_ids: (array_like, (int, np.integer))
The list (or single value) of unit_ids for which the spike features will be copied
frame_to_time(frame)

This function converts user-inputted frame indexes to times with units of seconds.

frames: float or array-like
The frame or frames to be converted to times
times: float or array-like
The corresponding times in seconds
get_sampling_frequency()

It returns the sampling frequency.

sampling_frequency: float
The sampling frequency
get_unit_ids()

This function returns a list of ids (ints) for each unit in the sorted result.

unit_ids: array_like
A list of the unit ids in the sorted result (ints).
get_unit_spike_train(unit_id, start_frame=None, end_frame=None)

This function extracts spike frames from the specified unit. The returned frame range is one of four cases:

[start_frame, start_frame+1, …, end_frame-1] if both start_frame and end_frame are given
[start_frame, start_frame+1, …, final_unit_spike_frame-1] if only start_frame is given
[0, 1, …, end_frame-1] if only end_frame is given
[0, 1, …, final_unit_spike_frame-1] if neither is given

Spike frames are returned as an array_like of spike frames. In this implementation, start_frame is inclusive and end_frame is exclusive, conforming to numpy standards.

unit_id: int
The id that specifies a unit in the recording
start_frame: int
The frame above which a spike frame is returned (inclusive)
end_frame: int
The frame below which a spike frame is returned (exclusive)
spike_train: numpy.ndarray
An 1D array containing all the frames for each spike in the specified unit given the range of start and end frames
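The range filtering described above amounts to keeping spike frames in the half-open interval [start_frame, end_frame). A NumPy sketch of that behavior (the helper and spike train are illustrative):

```python
import numpy as np

spike_train = np.array([10, 250, 1000, 4999, 5000, 9000])

def slice_spike_train(spikes, start_frame=None, end_frame=None):
    # start_frame inclusive, end_frame exclusive, as documented
    if start_frame is None:
        start_frame = 0
    if end_frame is None:
        end_frame = np.inf
    return spikes[(spikes >= start_frame) & (spikes < end_frame)]

in_range = slice_spike_train(spike_train, start_frame=250, end_frame=5000)
```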
time_to_frame(time)

This function converts user-inputted times (in seconds) to frame indexes.

times: float or array-like
The times (in seconds) to be converted to frame indexes
frames: float or array-like
The corresponding frame indexes
class spikeextractors.MultiRecordingChannelExtractor(recordings, groups=None)
get_channel_ids()

Returns the list of channel ids. If not specified, the range from 0 to num_channels - 1 is returned.

channel_ids: list
Channel list
get_num_frames()

This function returns the number of frames in the recording

num_frames: int
Number of frames in the recording (duration of recording)
get_sampling_frequency()

This function returns the sampling frequency in units of Hz.

fs: float
Sampling frequency of the recordings in Hz
get_traces(channel_ids=None, start_frame=None, end_frame=None, return_scaled=True)

This function extracts and returns traces from the recorded data for the given channel ids between the given start and end frames. The returned frame range is one of four cases:

[start_frame, start_frame+1, …, end_frame-1] if both start_frame and end_frame are given
[start_frame, start_frame+1, …, final_recording_frame-1] if only start_frame is given
[0, 1, …, end_frame-1] if only end_frame is given
[0, 1, …, final_recording_frame-1] if neither is given

Traces are returned in a 2D array that contains all of the traces from each channel, with dimensions (num_channels x num_frames). In this implementation, start_frame is inclusive and end_frame is exclusive, conforming to numpy standards.

channel_ids: array_like
A list or 1D array of channel ids (ints) from which each trace will be extracted.
start_frame: int
The starting frame of the trace to be returned (inclusive).
end_frame: int
The ending frame of the trace to be returned (exclusive).
return_scaled: bool
If True, traces are returned after scaling (using gain/offset). If False, the raw traces are returned.
traces: numpy.ndarray
A 2D array that contains all of the traces from each channel. Dimensions are: (num_channels x num_frames)
class spikeextractors.MultiRecordingTimeExtractor(recordings, epoch_names=None)
frame_to_time(frame)

This function converts user-inputted frame indexes to times with units of seconds.

frames: float or array-like
The frame or frames to be converted to times
times: float or array-like
The corresponding times in seconds
get_channel_ids()

Returns the list of channel ids. If not specified, the range from 0 to num_channels - 1 is returned.

channel_ids: list
Channel list
get_num_frames()

This function returns the number of frames in the recording

num_frames: int
Number of frames in the recording (duration of recording)
get_sampling_frequency()

This function returns the sampling frequency in units of Hz.

fs: float
Sampling frequency of the recordings in Hz
get_traces(channel_ids=None, start_frame=None, end_frame=None, return_scaled=True)

This function extracts and returns traces from the recorded data for the given channel ids between the given start and end frames. The returned frame range is one of four cases:

[start_frame, start_frame+1, …, end_frame-1] if both start_frame and end_frame are given
[start_frame, start_frame+1, …, final_recording_frame-1] if only start_frame is given
[0, 1, …, end_frame-1] if only end_frame is given
[0, 1, …, final_recording_frame-1] if neither is given

Traces are returned in a 2D array that contains all of the traces from each channel, with dimensions (num_channels x num_frames). In this implementation, start_frame is inclusive and end_frame is exclusive, conforming to numpy standards.

channel_ids: array_like
A list or 1D array of channel ids (ints) from which each trace will be extracted.
start_frame: int
The starting frame of the trace to be returned (inclusive).
end_frame: int
The ending frame of the trace to be returned (exclusive).
return_scaled: bool
If True, traces are returned after scaling (using gain/offset). If False, the raw traces are returned.
traces: numpy.ndarray
A 2D array that contains all of the traces from each channel. Dimensions are: (num_channels x num_frames)
get_ttl_events(start_frame=None, end_frame=None, channel_id=0)

Returns an array with frames of TTL signals. To be implemented in sub-classes

start_frame: int
The starting frame of the ttl to be returned (inclusive)
end_frame: int
The ending frame of the ttl to be returned (exclusive)
channel_id: int
The TTL channel id
ttl_frames: array-like
Frames of TTL signal for the specified channel
ttl_state: array-like
State of the transition: 1 - rising, -1 - falling
time_to_frame(time)

This function converts user-inputted times (in seconds) to frame indexes.

times: float or array-like
The times (in seconds) to be converted to frame indexes
frames: float or array-like
The corresponding frame indexes
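Conceptually, MultiRecordingTimeExtractor stitches recordings end-to-end along the time axis, exposing each input recording as an epoch with a frame offset. That bookkeeping can be sketched with NumPy arrays standing in for the recordings (the variable names are illustrative):

```python
import numpy as np

# two toy "recordings", shape (num_channels, num_frames)
rec_a = np.ones((2, 100))
rec_b = 2 * np.ones((2, 50))

# concatenate in time, as MultiRecordingTimeExtractor does conceptually
combined = np.concatenate([rec_a, rec_b], axis=1)

# start frame of each epoch in the combined timeline
epoch_starts = np.cumsum([0, rec_a.shape[1]])
```

Frame 120 of the combined recording therefore falls inside the second epoch (120 - 100 = frame 20 of rec_b).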
class spikeextractors.MultiSortingExtractor(sortings)
clear_unit_property(unit_id, property_name)

This function clears the unit property for the given property.

unit_id: int
The id that specifies a unit in the sorting
property_name: string
The name of the property to be cleared
clear_unit_spike_features(unit_id, feature_name)

This function clears the unit spikes features for the given feature.

unit_id: int
The id that specifies a unit in the sorting
feature_name: string
The name of the feature to be cleared
get_sampling_frequency()

It returns the sampling frequency.

sampling_frequency: float
The sampling frequency
get_unit_ids()

This function returns a list of ids (ints) for each unit in the sorted result.

unit_ids: array_like
A list of the unit ids in the sorted result (ints).
get_unit_property(unit_id, property_name)

This function returns the data stored under the property name given from the given unit.

unit_id: int
The unit id for which the property will be returned
property_name: str
The name of the property
value
The data associated with the given property name. Could be many formats as specified by the user
get_unit_property_names(unit_id)

Get a list of property names for a given unit.

unit_id: int
The unit id for which the property names will be returned
property_names
The list of property names
get_unit_spike_feature_names(unit_id)

This function returns the list of feature names for the given unit

unit_id: int
The unit id for which the feature names will be returned
property_names
The list of feature names.
get_unit_spike_features(unit_id, feature_name, start_frame=None, end_frame=None)

This function extracts the specified spike features from the specified unit. The returned frame range is one of four cases:

[start_frame, start_frame+1, …, end_frame-1] if both start_frame and end_frame are given
[start_frame, start_frame+1, …, final_unit_spike_frame-1] if only start_frame is given
[0, 1, …, end_frame-1] if only end_frame is given
[0, 1, …, final_unit_spike_frame-1] if neither is given

Spike features are returned as an array_like of spike features. In this implementation, start_frame is inclusive and end_frame is exclusive, conforming to numpy standards.

unit_id: int
The id that specifies a unit in the recording
feature_name: string
The name of the feature to be returned
start_frame: int
The frame above which a spike frame is returned (inclusive)
end_frame: int
The frame below which a spike frame is returned (exclusive)
spike_features: numpy.ndarray
An array containing all the features for each spike in the specified unit given the range of start and end frames
get_unit_spike_train(unit_id, start_frame=None, end_frame=None)

This function extracts spike frames from the specified unit. The returned frame range is one of four cases:

[start_frame, start_frame+1, …, end_frame-1] if both start_frame and end_frame are given
[start_frame, start_frame+1, …, final_unit_spike_frame-1] if only start_frame is given
[0, 1, …, end_frame-1] if only end_frame is given
[0, 1, …, final_unit_spike_frame-1] if neither is given

Spike frames are returned as an array_like of spike frames. In this implementation, start_frame is inclusive and end_frame is exclusive, conforming to numpy standards.

unit_id: int
The id that specifies a unit in the recording
start_frame: int
The frame above which a spike frame is returned (inclusive)
end_frame: int
The frame below which a spike frame is returned (exclusive)
spike_train: numpy.ndarray
An 1D array containing all the frames for each spike in the specified unit given the range of start and end frames
set_sampling_frequency(sampling_frequency)

It sets the sorting extractor sampling frequency.

sampling_frequency: float
The sampling frequency
set_unit_property(unit_id, property_name, value)

This function adds a unit property data set under the given property name to the given unit.

unit_id: int
The unit id for which the property will be set
property_name: str
The name of the property to be stored
value
The data associated with the given property name. Could be many formats as specified by the user
set_unit_spike_features(unit_id, feature_name, value, indexes=None)

This function adds a unit features data set under the given features name to the given unit.

unit_id: int
The unit id for which the features will be set
feature_name: str
The name of the feature to be stored
value: array_like
The data associated with the given feature name. Could be many formats as specified by the user.
indexes: array_like
The indices of the specified spikes (if the number of spike features is less than the length of the unit’s spike train). If None, it is assumed that value has the same length as the spike train.
spikeextractors.load_extractor_from_dict(d)

Instantiates extractor from dictionary

d: dictionary
Python dictionary
extractor: RecordingExtractor or SortingExtractor
The loaded extractor object
spikeextractors.load_extractor_from_json(json_file)

Instantiates extractor from json file

json_file: str or Path
Path to json file
extractor: RecordingExtractor or SortingExtractor
The loaded extractor object
spikeextractors.load_extractor_from_pickle(pkl_file)

Instantiates extractor from pickle file

pkl_file: str or Path
Path to pickle file
extractor: RecordingExtractor or SortingExtractor
The loaded extractor object
spikeextractors.load_probe_file(recording, probe_file, channel_map=None, channel_groups=None, verbose=False)

This function returns a SubRecordingExtractor that contains information from the given probe file (channel locations, groups, etc.). If a .prb file is given, ‘location’ and ‘group’ information for each channel is added to the SubRecordingExtractor. If a .csv file is given, only ‘location’ is added.

recording: RecordingExtractor
The recording extractor to load channel information from.
probe_file: str
Path to probe file. Either .prb or .csv
channel_map : array-like
A list of channel IDs to set in the loaded file. Only used if the loaded file is a .csv.
channel_groups : array-like
A list of groups (ints) for the channel_ids to set in the loaded file. Only used if the loaded file is a .csv.
verbose: bool
If True, output is verbose
subrecording: SubRecordingExtractor
The extractor containing all of the probe information.
spikeextractors.save_to_probe_file(recording, probe_file, grouping_property=None, radius=None, graph=True, geometry=True, verbose=False)

Saves probe file from the channel information of the given recording extractor.

recording: RecordingExtractor
The recording extractor to save probe file from
probe_file: str
file name of .prb or .csv file to save probe information to
grouping_property: str (default None)
If grouping_property is a shared_channel_property, different groups are saved based on the property.
radius: float (default None)
Adjacency radius (used by some sorters). If None it is not saved to the probe file.
graph: bool
If True, the adjacency graph is saved (default=True)
geometry: bool
If True, the geometry is saved (default=True)
verbose: bool
If True, output is verbose
spikeextractors.write_to_binary_dat_format(recording, save_path=None, file_handle=None, time_axis=0, dtype=None, chunk_size=None, chunk_mb=500, n_jobs=1, joblib_backend='loky', return_scaled=True, verbose=False)

Saves the traces of a recording extractor in binary .dat format.

recording: RecordingExtractor
The recording extractor object to be saved in .dat format
save_path: str
The path to the file.
file_handle: file handle
The file handle to dump data to. This can be used to append data to a header. If file_handle is given, the file is NOT closed after writing the binary data.
time_axis: 0 (default) or 1
If 0 then traces are transposed to ensure (nb_sample, nb_channel) in the file. If 1, the traces shape (nb_channel, nb_sample) is kept in the file.
dtype: dtype
Type of the saved data. Default float32.
chunk_size: None or int
Size of each chunk in number of frames. If None (default) and ‘chunk_mb’ is given, the file is saved in chunks of ‘chunk_mb’ Mb (default 500Mb)
chunk_mb: None or int
Chunk size in Mb (default 500Mb)
n_jobs: int
Number of jobs to use (Default 1)
joblib_backend: str
Joblib backend for parallel processing (‘loky’, ‘threading’, ‘multiprocessing’)
return_scaled: bool
If True, traces are written after scaling (using gain/offset). If False, the raw traces are written
verbose: bool
If True, output is verbose (when chunks are used)
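The on-disk layout implied by time_axis can be sketched with plain NumPy: with time_axis=0 the traces are stored transposed, i.e. as (num_frames, num_channels) in flat row-major order, so reading the file back requires knowing the dtype and the channel count (this is an illustration of the layout, not the library's writer):

```python
import os
import tempfile

import numpy as np

# toy traces: 3 channels x 4 frames
traces = np.arange(12, dtype=np.float32).reshape(3, 4)

path = os.path.join(tempfile.mkdtemp(), "traces.dat")

# time_axis=0 layout: transpose to (num_frames, num_channels) before writing
traces.T.astype(np.float32).tofile(path)

# reading back requires the dtype and the number of channels
loaded = np.fromfile(path, dtype=np.float32).reshape(-1, 3).T
```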
spikeextractors.get_sub_extractors_by_property(extractor, property_name, return_property_list=False)

Returns a list of SubExtractors from the Extractor based on the given property_name (e.g. group)

extractor: RecordingExtractor or SortingExtractor
The extractor object to access SubRecordingExtractors from.
property_name: str
The property used to subdivide the extractor
return_property_list: bool
If True the property list is returned
sub_list: list
The list of subextractors to be returned.

OR sub_list, prop_list

If return_property_list is True, the property list will be returned as well.
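The grouping step that get_sub_extractors_by_property performs, e.g. on the ‘group’ property, is essentially a partition of the ids by property value. A pure-Python sketch with a hypothetical ‘group’ assignment:

```python
# hypothetical channel ids and 'group' property values
channel_ids = [0, 1, 2, 3, 4, 5]
groups = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2}

# partition ids by property value, one sub-list per sub-extractor
by_group = {}
for ch in channel_ids:
    by_group.setdefault(groups[ch], []).append(ch)

sub_channel_lists = [by_group[g] for g in sorted(by_group)]
property_list = sorted(by_group)
```

Each sub-list here corresponds to the channel ids of one SubExtractor; with return_property_list=True the sorted property values are returned alongside.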

Module spikeinterface.toolkit

Preprocessing

spiketoolkit.preprocessing.bandpass_filter(recording, freq_min=300, freq_max=6000, freq_wid=1000, filter_type='fft', order=3, chunk_size=30000, cache_chunks=False, dtype=None)

Performs a lazy filter on the recording extractor traces.

recording: RecordingExtractor
The recording extractor to be filtered.
freq_min: int or float
High-pass cutoff frequency.
freq_max: int or float
Low-pass cutoff frequency.
freq_wid: int or float
Width of the filter (when type is ‘fft’).
filter_type: str
‘fft’ or ‘butter’. The ‘fft’ filter uses a kernel in the frequency domain. The ‘butter’ filter uses scipy butter and filtfilt functions.
order: int
Order of the filter (if ‘butter’).
chunk_size: int
The chunk size to be used for the filtering.
cache_chunks: bool (default False).
If True then each chunk is cached in memory (in a dict)
dtype: dtype
The dtype of the traces
filter_recording: BandpassFilterRecording
The filtered recording extractor object
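The idea behind the ‘fft’ filter type can be illustrated with a minimal frequency-domain band-pass: zero out spectral content outside [freq_min, freq_max] and invert the transform. This sketch uses a hard cutoff, whereas the library's ‘fft’ mode uses a smooth kernel controlled by freq_wid:

```python
import numpy as np

def fft_bandpass(traces, freq_min, freq_max, fs):
    """Hard-cutoff frequency-domain band-pass (illustrative sketch)."""
    n = traces.shape[1]
    freqs = np.abs(np.fft.fftfreq(n, d=1.0 / fs))
    mask = (freqs >= freq_min) & (freqs <= freq_max)
    spectrum = np.fft.fft(traces, axis=1)
    return np.real(np.fft.ifft(spectrum * mask, axis=1))

fs = 1000.0
t = np.arange(1000) / fs
# 10 Hz component (to be removed) plus 100 Hz component (to be kept)
sig = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 100 * t)
filtered = fft_bandpass(sig[None, :], freq_min=50, freq_max=200, fs=fs)
```

After filtering, only the 100 Hz component survives.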
spiketoolkit.preprocessing.blank_saturation(recording, threshold=None, seed=0)

Find and remove parts of the signal with extreme values. Some arrays may produce these when amplifiers enter saturation, typically for short periods of time. To remove these artefacts, values below or above a threshold are set to the median signal value. The threshold can either be estimated automatically, using the lower and upper 0.1 signal percentiles with the largest deviation from the median, or specified directly. Use this function with caution, as it may clip uncontaminated signals. A warning is printed if the data range suggests no artefacts.

recording: RecordingExtractor
The recording extractor to be transformed
threshold: float or None (default None)
Threshold value (in absolute units) for saturation artifacts. If None, the threshold will be determined from the 0.1 signal percentile.
seed: int
Random seed for reproducibility
rescaled_traces: BlankSaturationRecording
The filtered traces recording extractor object
spiketoolkit.preprocessing.clip(recording, a_min=None, a_max=None)

Limit the values of the data between a_min and a_max. Values exceeding the range will be set to the minimum or maximum, respectively.

recording: RecordingExtractor
The recording extractor to be transformed
a_min: float or None (default None)
Minimum value. If None, clipping is not performed on lower interval edge.
a_max: float or None (default None)
Maximum value. If None, clipping is not performed on upper interval edge.
rescaled_traces: ClipTracesRecording
The clipped traces recording extractor object
spiketoolkit.preprocessing.normalize_by_quantile(recording, scale=1.0, median=0.0, q1=0.01, q2=0.99, seed=0)

Rescale the traces from the given recording extractor with a scale and offset. First, the median and quantiles of the distribution are estimated. Then the distribution is rescaled and offset so that the distance between the quantiles (1st and 99th by default) equals the given scale and the median equals the given median.

recording: RecordingExtractor
The recording extractor to be transformed
scale: float
Scale for the output distribution
median: float
Median for the output distribution
q1: float (default 0.01)
Lower quantile used for measuring the scale
q2: float (default 0.99)
Upper quantile used for measuring the scale
seed: int
Random seed for reproducibility
rescaled_traces: NormalizeByQuantileRecording
The rescaled traces recording extractor object
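The rescaling step described above is a simple affine transform once the median and quantiles are measured. A NumPy sketch under the default parameters (random traces are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
traces = rng.normal(loc=5.0, scale=2.0, size=(1, 10000))

scale, median_target, q1, q2 = 1.0, 0.0, 0.01, 0.99

med = np.median(traces)
lo, hi = np.quantile(traces, [q1, q2])

# the quantile distance is mapped to `scale`, the median to `median_target`
rescaled = (traces - med) * (scale / (hi - lo)) + median_target
```

After the transform, the median sits at median_target and the 1st-to-99th quantile distance equals scale.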
spiketoolkit.preprocessing.notch_filter(recording, freq=3000, q=30, chunk_size=30000, cache_chunks=False)

Performs a notch filter on the recording extractor traces using scipy iirnotch function.

recording: RecordingExtractor
The recording extractor to be notch-filtered.
freq: int or float
The target frequency of the notch filter.
q: int
The quality factor of the notch filter.
chunk_size: int
The chunk size to be used for the filtering.
cache_chunks: bool (default False).
If True then each chunk is cached in memory (in a dict)
filter_recording: NotchFilterRecording
The notch-filtered recording extractor object
spiketoolkit.preprocessing.rectify(recording)

Rectifies the recording extractor traces. It is useful, in combination with ‘resample’, to compute multi-unit activity (MUA).

recording: RecordingExtractor
The recording extractor object to be rectified
rectified_recording: RectifyRecording
The rectified recording extractor object
spiketoolkit.preprocessing.remove_artifacts(recording, triggers, ms_before=0.5, ms_after=3, mode='zeros', fit_sample_spacing=1.0)

Removes stimulation artifacts from recording extractor traces. By default, artifact periods are zeroed-out (mode = ‘zeros’). This is only recommended for traces that are centered around zero (e.g. through a prior highpass filter); if this is not the case, linear and cubic interpolation modes are also available, controlled by the ‘mode’ input argument.

recording: RecordingExtractor
The recording extractor to remove artifacts from
triggers: list
List of int with the stimulation trigger frames
ms_before: float
Time interval in ms to remove before the trigger events
ms_after: float
Time interval in ms to remove after the trigger events
mode: str

Determines what artifacts are replaced by. Can be one of the following:

  • ‘zeros’ (default): Artifacts are replaced by zeros.
  • ‘linear’: Replacements are obtained through linear interpolation between
    the trace before and after the artifact. If the trace starts or ends with an artifact period, the gap is filled with the closest available value before or after the artifact.
  • ‘cubic’: Cubic spline interpolation between the trace before and after
    the artifact, referenced to evenly spaced fit points before and after the artifact. This option can be helpful if there are significant LFP effects around the time of the artifact, but visual inspection of fit behaviour with your chosen settings is recommended. The spacing of fit points is controlled by ‘fit_sample_spacing’, with greater spacing between points leading to a fit that is less sensitive to high-frequency fluctuations but at the cost of a less smooth continuation of the trace. If the trace starts or ends with an artifact, the gap is filled with the closest available value before or after the artifact.
fit_sample_spacing: float
Determines the spacing (in ms) of reference points for the cubic spline fit if mode = ‘cubic’. Default = 1ms. Note: The actual fit samples are the median of the 5 data points around the time of each sample point to avoid excessive influence from hyper-local fluctuations.
removed_recording: RemoveArtifactsRecording
The recording extractor after artifact removal
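The default mode=‘zeros’ simply zeroes a window around each trigger. A NumPy sketch of that step, using frame counts directly in place of the ms_before/ms_after-to-frames conversion (the helper is hypothetical):

```python
import numpy as np

def zero_out_artifacts(traces, triggers, frames_before, frames_after):
    """Sketch of mode='zeros': zero a window around each trigger frame."""
    cleaned = traces.copy()
    n_frames = traces.shape[1]
    for trig in triggers:
        lo = max(trig - frames_before, 0)
        hi = min(trig + frames_after, n_frames)
        cleaned[:, lo:hi] = 0
    return cleaned

traces = np.ones((1, 100))
cleaned = zero_out_artifacts(traces, triggers=[50],
                             frames_before=5, frames_after=10)
```

The ‘linear’ and ‘cubic’ modes replace the same windows with interpolated values instead of zeros.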
spiketoolkit.preprocessing.remove_bad_channels(recording, bad_channel_ids=None, bad_threshold=2, seconds=10, verbose=False)

Remove bad channels from the recording extractor.

recording: RecordingExtractor
The recording extractor object
bad_channel_ids: list
List of bad channel ids (int). If None, automatic removal will be done based on standard deviation.
bad_threshold: float
If automatic is used, the threshold for the standard deviation over which channels are removed
seconds: float
If automatic is used, the number of seconds used to compute standard deviations
verbose: bool
If True, output is verbose
remove_bad_channels_recording: RemoveBadChannelsRecording
The recording extractor without bad channels
spiketoolkit.preprocessing.resample(recording, resample_rate)

Resamples the recording extractor traces. If the resampling rate is a multiple of the sampling rate, the faster scipy decimate function is used.

recording: RecordingExtractor
The recording extractor to be resampled
resample_rate: int or float
The resampling frequency
resampled_recording: ResampleRecording
The resampled recording extractor object
spiketoolkit.preprocessing.transform(recording, scalar=1, offset=0)

Transforms the traces from the given recording extractor with a scalar and offset. New traces = traces*scalar + offset.

recording: RecordingExtractor
The recording extractor to be transformed
scalar: float or array
Scalar for the traces of the recording extractor or array with scalars for each channel
offset: float or array
Offset for the traces of the recording extractor or array with offsets for each channel
transform_traces: TransformTracesRecording
The transformed traces recording extractor object
spiketoolkit.preprocessing.whiten(recording, chunk_size=30000, cache_chunks=False, seed=0)

Whitens the recording extractor traces.

recording: RecordingExtractor
The recording extractor to be whitened.
chunk_size: int
The chunk size to be used for the filtering.
cache_chunks: bool
If True, filtered traces are computed and cached all at once (default False).
seed: int
Random seed for reproducibility
whitened_recording: WhitenRecording
The whitened recording extractor
spiketoolkit.preprocessing.common_reference(recording, reference='median', groups=None, ref_channels=None, local_radius=(30, 55), dtype=None, verbose=False)

Re-references the recording extractor traces.

recording: RecordingExtractor
The recording extractor to be re-referenced
reference: str
‘median’, ‘average’, ‘single’, or ‘local’. If ‘median’, common median reference (CMR) is applied (the median of the selected channels is subtracted from each timestamp). If ‘average’, common average reference (CAR) is applied (the mean of the selected channels is subtracted from each timestamp). If ‘single’, the selected channel(s) are subtracted from all channels. If ‘local’, a local CAR is applied per channel, using only the nearest channels that lie within the annulus defined by local_radius.
groups: list
List of lists containing the channels for splitting the reference. The CMR, CAR, or referencing with respect to single channels are applied group-wise. However, this is not applied for the local CAR. It is useful when dealing with different channel groups, e.g. multiple tetrodes.
ref_channels: list or int
If no ‘groups’ are specified, all channels are referenced to ‘ref_channels’. If ‘groups’ is provided, then a list of channels to be applied to each group is expected. If ‘single’ reference, a list of one channel or an int is expected.
local_radius: tuple(int, int)
Use in the local CAR implementation as the selecting annulus (exclude radius, include radius)
dtype: str
dtype of the returned traces. If None, dtype is maintained
verbose: bool
If True, output is verbose
referenced_recording: CommonReferenceRecording
The re-referenced recording extractor object
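The default ‘median’ reference (CMR) subtracts the per-timestamp median across channels. A minimal NumPy sketch of that operation (the toy traces are illustrative):

```python
import numpy as np

def common_median_reference(traces):
    # CMR: subtract the median across channels at each timestamp
    return traces - np.median(traces, axis=0, keepdims=True)

traces = np.array([[1.0, 2.0, 3.0],
                   [2.0, 4.0, 6.0],
                   [3.0, 6.0, 9.0]])
referenced = common_median_reference(traces)
```

The middle channel equals the per-timestamp median here, so it is zeroed out; replacing np.median with np.mean gives the ‘average’ (CAR) variant.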

Postprocessing

spiketoolkit.postprocessing.get_unit_waveforms(recording, sorting, unit_ids=None, channel_ids=None, return_idxs=False, chunk_size=None, chunk_mb=500, **kwargs)

Computes the spike waveforms from a recording and sorting extractor. The recording is split in chunks (the size in Mb is set with the chunk_mb argument) and all waveforms are extracted for each chunk and then re-assembled. If multiple jobs are used (n_jobs > 1), more and smaller chunks are created and processed in parallel.

recording: RecordingExtractor
The recording extractor
sorting: SortingExtractor
The sorting extractor
unit_ids: list
List of unit ids to extract waveforms
channel_ids: list
List of channels ids to compute waveforms from
return_idxs: bool
If True, spike indexes and channel indexes are returned
chunk_size: int
Size of chunks in number of samples. If None, it is automatically calculated
chunk_mb: int
Size of chunks in Mb (default 500 Mb)
**kwargs: Keyword arguments

A dictionary with default values can be retrieved with: st.postprocessing.get_waveforms_params():

grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned.
n_jobs: int
Number of parallel jobs (default 1)
max_spikes_per_unit: int
The maximum number of spikes to extract per unit.
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
seed: int
Random seed for extracting random waveforms
save_property_or_features: bool
If True (default), waveforms are saved as features of the sorting extractor object
recompute_info: bool
If True, waveforms are recomputed (default False)
verbose: bool
If True, output is verbose
waveforms: list
List of np.array (n_spikes, n_channels, n_timepoints) containing extracted waveforms for each unit
spike_indexes: list
List of spike indexes for which waveforms are computed. Returned if ‘return_idxs’ is True
channel_indexes: list
List of max channel indexes
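The core of waveform extraction, cutting a window of ms_before/ms_after samples around each spike frame, can be sketched as follows. This is a minimal NumPy illustration; the real function additionally handles chunking, grouping, memmapping, and parallel jobs, and the function name here is illustrative:

```python
import numpy as np

def extract_waveforms(traces, spike_frames, fs, ms_before=1.0, ms_after=2.0):
    """Cut waveform snippets around spike times.

    traces: (n_channels, n_samples) array; spike_frames: sample indices.
    Returns an array of shape (n_spikes, n_channels, n_timepoints), skipping
    spikes whose window falls outside the recording.
    """
    n_before = int(ms_before * fs / 1000)
    n_after = int(ms_after * fs / 1000)
    n_samples = traces.shape[1]
    snippets = [
        traces[:, f - n_before:f + n_after]
        for f in spike_frames
        if f - n_before >= 0 and f + n_after <= n_samples
    ]
    return np.array(snippets)
```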
spiketoolkit.postprocessing.get_unit_templates(recording, sorting, unit_ids=None, channel_ids=None, mode='median', _waveforms=None, **kwargs)

Computes the spike templates from a recording and sorting extractor. If waveforms are not found as features, they are computed.

recording: RecordingExtractor
The recording extractor
sorting: SortingExtractor
The sorting extractor
unit_ids: list
List of unit ids to extract templates
channel_ids: list
List of channels ids to compute templates from
mode: str
Use ‘mean’ or ‘median’ to compute templates
_waveforms: list
Pre-computed waveforms to be used for computing templates
**kwargs: Keyword arguments

A dictionary with default values can be retrieved with: st.postprocessing.get_waveforms_params():

grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
seed: int
Random seed for extracting random waveforms
save_property_or_features: bool
If True (default), waveforms are saved as features of the sorting extractor object
recompute_info: bool
If True, waveforms are recomputed (default False)
verbose: bool
If True, output is verbose
templates: list
List of np.array (n_channels, n_timepoints) containing extracted templates for each unit
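Given the per-unit waveform arrays, the template computation itself is a single reduction over the spike axis, sketched below (an illustrative helper, not the spiketoolkit API):

```python
import numpy as np

def compute_template(waveforms, mode="median"):
    """Reduce (n_spikes, n_channels, n_timepoints) waveforms to a
    (n_channels, n_timepoints) template with the mean or median."""
    reduce = np.median if mode == "median" else np.mean
    return reduce(waveforms, axis=0)
```

The median is the default in get_unit_templates because it is robust to overlapping spikes and outlier waveforms.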
spiketoolkit.postprocessing.get_unit_amplitudes(recording, sorting, unit_ids=None, channel_ids=None, return_idxs=False, **kwargs)

Computes the spike amplitudes from a recording and sorting extractor. Amplitudes can be computed in absolute value (uV) or relative to the template amplitude.

recording: RecordingExtractor
The recording extractor
sorting: SortingExtractor
The sorting extractor
unit_ids: list
List of unit ids to extract maximum channels
channel_ids: list
List of channels ids to compute amplitudes from
return_idxs: bool
If True, spike indexes and channel indexes are returned
**kwargs: Keyword arguments

A dictionary with default values can be retrieved with: st.postprocessing.get_waveforms_params():

method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes.
peak: str
If maximum channel has to be found among negative peaks (‘neg’), positive (‘pos’) or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
seed: int
Random seed for extracting random waveforms
save_property_or_features: bool
If True (default), waveforms are saved as features of the sorting extractor object
recompute_info: bool
If True, waveforms are recomputed (default False)
n_jobs: int
Number of jobs for parallelization. Default is None (no parallelization)
joblib_backend: str
The backend for joblib. Default is ‘loky’
verbose: bool
If True, output is verbose
amplitudes: list
List of np.array containing extracted amplitudes for each unit
indexes: list
List of spike indexes for which amplitudes are computed. Returned if ‘return_idxs’ is True
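The ‘absolute’/‘relative’ and ‘peak’ options above can be illustrated with a toy NumPy version that reads each spike's amplitude at the template's peak location (the function name and exact peak-localization logic are illustrative, not the spiketoolkit implementation):

```python
import numpy as np

def compute_amplitudes(waveforms, template, method="absolute", peak="both"):
    """Per-spike amplitude at the template's peak channel/sample.

    waveforms: (n_spikes, n_channels, n_timepoints); template:
    (n_channels, n_timepoints). 'absolute' returns the raw value (e.g. in uV);
    'relative' divides by the template amplitude at the same location.
    """
    if peak == "neg":
        idx = np.unravel_index(np.argmin(template), template.shape)
    elif peak == "pos":
        idx = np.unravel_index(np.argmax(template), template.shape)
    else:  # 'both': largest absolute deflection
        idx = np.unravel_index(np.argmax(np.abs(template)), template.shape)
    amps = waveforms[:, idx[0], idx[1]]
    if method == "relative":
        amps = amps / template[idx]
    return amps
```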
spiketoolkit.postprocessing.get_unit_max_channels(recording, sorting, unit_ids=None, channel_ids=None, max_channels=1, peak='both', mode='median', **kwargs)

Computes the spike maximum channels from a recording and sorting extractor. If templates are not found as property, they are computed. If templates are computed by group, the max channels refer to the overall channel ids.

recording: RecordingExtractor
The recording extractor
sorting: SortingExtractor
The sorting extractor
unit_ids: list
List of unit ids to extract maximum channels
channel_ids: list
List of channels ids to compute max_channels from
max_channels: int
Number of max channels per unit to return (default=1)
mode: str
Use ‘mean’ or ‘median’ to compute templates
**kwargs: Keyword arguments

A dictionary with default values can be retrieved with: st.postprocessing.get_waveforms_params():

grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
seed: int
Random seed for extracting random waveforms
save_property_or_features: bool
If True (default), waveforms are saved as features of the sorting extractor object
recompute_info: bool
If True, waveforms are recomputed (default False)
verbose: bool
If True, output is verbose
max_channels: list
List of int containing extracted maximum channels for each unit
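Finding a unit's max channels amounts to ranking channels by the size of their template peak, sketched here in NumPy (illustrative helper, not the spiketoolkit API):

```python
import numpy as np

def get_max_channels(template, max_channels=1, peak="both"):
    """Return the `max_channels` channel indexes with the largest peak.

    template: (n_channels, n_timepoints). For 'both' the peak is the largest
    absolute deflection; 'neg'/'pos' use the minimum/maximum only.
    """
    if peak == "neg":
        scores = -template.min(axis=1)
    elif peak == "pos":
        scores = template.max(axis=1)
    else:
        scores = np.abs(template).max(axis=1)
    order = np.argsort(scores)[::-1]  # channels ranked by peak size
    return order[:max_channels]
```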
spiketoolkit.postprocessing.set_unit_properties_by_max_channel_properties(recording, sorting, property, unit_ids=None, peak='both', mode='median', verbose=False, **kwargs)

Extracts ‘property’ from the recording channel with the largest peak for each unit and saves it as a unit property.

recording: RecordingExtractor
The recording extractor
sorting: SortingExtractor
The sorting extractor
property: str
Property to compute
unit_ids: list
List of unit ids to extract maximum channels
peak: str
If maximum channel has to be found among negative peaks (‘neg’), positive (‘pos’) or both (‘both’ - default)
mode: str
Use ‘mean’ or ‘median’ to compute templates
verbose: bool
If True, output is verbose
**kwargs: Keyword arguments

A dictionary with default values can be retrieved with: st.postprocessing.get_waveforms_params():

grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
seed: int
Random seed for extracting random waveforms
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
spiketoolkit.postprocessing.compute_unit_pca_scores(recording, sorting, unit_ids=None, channel_ids=None, return_idxs=False, _waveforms=None, _spike_index_list=None, _channel_index_list=None, **kwargs)

Computes the PCA scores from the unit waveforms. If waveforms are not found as features, they are computed.

recording: RecordingExtractor
The recording extractor
sorting: SortingExtractor
The sorting extractor
unit_ids: list
List of unit ids to compute pca scores
channel_ids: list
List of channels ids to compute pca from
return_idxs: bool
If True, spike indexes and channel indexes are returned
_waveforms: list
Pre-computed waveforms (optional)
_spike_index_list: list
Pre-computed spike indexes for waveforms (optional)
_channel_index_list: list
Pre-computed channel indexes for waveforms (optional)
**kwargs: Keyword arguments

A dictionary with default values can be retrieved with: st.postprocessing.get_waveforms_params():

n_comp: int
Number of PCA components (default 3)
by_electrode: bool
If True, PCA scores are computed electrode-wise (channel by channel)
max_spikes_for_pca: int
The maximum number of spikes per unit to use to fit the PCA.
whiten: bool
If True, PCA is run with whiten equal True
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
seed: int
Random seed for extracting random waveforms
save_property_or_features: bool
If True (default), waveforms are saved as features of the sorting extractor object
recompute_info: bool
If True, waveforms are recomputed (default False)
verbose: bool
If True, output is verbose
pcs_scores: list
List of np.array containing extracted pca scores. If ‘by_electrode’ is False, the array has shape (n_spikes, n_comp) If ‘by_electrode’ is True, the array has shape (n_spikes, n_channels, n_comp)
indexes: list
List of spike indexes for which pca scores are computed. Returned if ‘return_idxs’ is True
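The essence of the computation, fitting one PCA basis on all spikes pooled across units and projecting each unit's flattened waveforms onto it, can be sketched with an SVD (a simplified illustration; the real function also supports by_electrode projection, grouping, and memmapping, and the function name is not part of the API):

```python
import numpy as np

def pca_scores(waveforms_list, n_comp=3, whiten=False):
    """Project flattened waveforms of every unit onto shared PCA components.

    waveforms_list: one (n_spikes, n_channels, n_timepoints) array per unit.
    Returns one (n_spikes, n_comp) score array per unit.
    """
    # Pool and flatten all spikes, then center
    X = np.concatenate([w.reshape(len(w), -1) for w in waveforms_list])
    X = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_comp].T
    if whiten:
        # Scale each component to unit (sample) standard deviation
        scores = scores / (S[:n_comp] / np.sqrt(len(X) - 1))
    split_at = np.cumsum([len(w) for w in waveforms_list])[:-1]
    return np.split(scores, split_at)
```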
spiketoolkit.postprocessing.export_to_phy(recording, sorting, output_folder, compute_pc_features=True, compute_amplitudes=True, max_channels_per_template=16, copy_binary=True, **kwargs)

Exports paired recording and sorting extractors to phy template-gui format.

recording: RecordingExtractor
The recording extractor
sorting: SortingExtractor
The sorting extractor
output_folder: str
The output folder where the phy template-gui files are saved
compute_pc_features: bool
If True (default), pc features are computed
compute_amplitudes: bool
If True (default), waveform amplitudes are computed
max_channels_per_template: int or None
Maximum channels per unit to return. If None, all channels are returned
copy_binary: bool
If True, the recording is copied and saved in the phy ‘output_folder’. If False and the ‘recording’ is a CacheRecordingExtractor or a BinDatRecordingExtractor, then a relative link to the file recording location is used. Otherwise, the recording is not copied and the recording path is set to ‘None’. (default True)
**kwargs: Keyword arguments

A dictionary with default values can be retrieved with: st.postprocessing.get_waveforms_params():

n_comp: int
Number of PCA components (default 3)
max_spikes_for_pca: int
The maximum number of spikes per unit to use to fit the PCA.
whiten: bool
If True, PCA is run with whiten equal True
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
n_jobs: int
Number of parallel jobs (default 1)
joblib_backend: str
The backend for joblib. Default is ‘loky’.
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes.
peak: str
If maximum channel has to be found among negative peaks (‘neg’), positive (‘pos’) or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
recompute_info: bool
If True, will always re-extract waveforms and templates.
save_property_or_features: bool
If True, will store all calculated features and properties
verbose: bool
If True, output is verbose
seed: int
Random seed for extracting random waveforms
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
filter_flag: bool
If False, will not display the warning on non-filtered recording. Default is True.
spiketoolkit.postprocessing.compute_unit_template_features(recording, sorting, unit_ids=None, channel_ids=None, feature_names=None, max_channels_per_features=1, recovery_slope_window=0.7, upsampling_factor=1, invert_waveforms=False, as_dataframe=False, **kwargs)

Use SpikeInterface/spikefeatures to compute features for the unit template.

These consist of a set of 1D features:
  • peak to valley (peak_to_valley), time between peak and valley
  • halfwidth (halfwidth), width of peak at half its amplitude
  • peak trough ratio (peak_trough_ratio), amplitude of peak over amplitude of trough
  • repolarization slope (repolarization_slope), slope between trough and return to base
  • recovery slope (recovery_slope), slope after peak towards baseline
And a set of 2D features (to be implemented):
  • unit_spread
  • propagation velocity

The metrics are computed on negative-going waveforms; if templates are saved as positive, pass the keyword ‘invert_waveforms’.

recording: RecordingExtractor
The recording extractor
sorting: SortingExtractor
The sorting extractor
unit_ids: list
List of unit ids to compute features
channel_ids: list
List of channels ids to compute templates on which features are computed
feature_names: list
List of feature names to be computed. If None, all features are computed
max_channels_per_features: int
Maximum number of channels to compute features on (default 1). If channel_ids is used, this parameter is ignored
upsampling_factor: int
Factor with which to upsample the template resolution (default 1)
invert_waveforms: bool
Invert templates before computing features (default False)
recovery_slope_window: float
Window after peak in ms wherein to compute recovery slope (default 0.7)
as_dataframe: bool
If True, output is returned as a pandas dataframe, otherwise as a dictionary
**kwargs: Keyword arguments

A dictionary with default values can be retrieved with: st.postprocessing.get_waveforms_params():

grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
seed: int
Random seed for extracting random waveforms
save_property_or_features: bool
If True (default), waveforms are saved as features of the sorting extractor object
recompute_info: bool
If True, waveforms are recomputed (default False)
verbose: bool
If True, output is verbose
features: dict or pandas.DataFrame
The computed features as a dictionary or a pandas.DataFrame (if as_dataframe is True)
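Two of the 1D features above can be sketched on a single-channel template in bare NumPy (the real spikefeatures implementation additionally upsamples the template and handles edge cases; function names here are illustrative):

```python
import numpy as np

def peak_to_valley(template_1d, fs):
    """Time in seconds from the trough to the following peak."""
    trough = np.argmin(template_1d)
    peak = trough + np.argmax(template_1d[trough:])
    return (peak - trough) / fs

def halfwidth(template_1d, fs):
    """Width of the trough at half its amplitude, in seconds."""
    trough = np.argmin(template_1d)
    half = template_1d[trough] * 0.5
    below = np.where(template_1d < half)[0]  # samples deeper than half-amplitude
    return (below[-1] - below[0]) / fs
```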

Validation

spiketoolkit.validation.compute_isolation_distances(sorting, recording, num_channels_to_compare=13, max_spikes_per_cluster=500, unit_ids=None, **kwargs)

Computes and returns the isolation distances in the sorted dataset.

sorting: SortingExtractor
The sorting result to be evaluated.
recording: RecordingExtractor
The given recording extractor from which to extract amplitudes
num_channels_to_compare: int
The number of channels to be used for the PC extraction and comparison
max_spikes_per_cluster: int
Max spikes to be used from each unit
unit_ids: list
List of unit ids to compute metric for. If not specified, all units are used
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
If maximum channel has to be found among negative peaks (‘neg’), positive (‘pos’) or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation
isolation_distances: np.ndarray
The isolation distances of the sorted units.
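Isolation distance is a Mahalanobis-based cluster-quality metric: for a unit with n spikes, it is the n-th smallest squared Mahalanobis distance (under the unit's own covariance) among the PC scores of all other spikes. A toy NumPy sketch, assuming the PC scores are already computed (the function name is illustrative):

```python
import numpy as np

def isolation_distance(pcs_target, pcs_other):
    """Isolation distance of one unit from its neighbors in PC space.

    pcs_target: (n, d) PC scores of the unit; pcs_other: (m, d) scores of
    all other spikes, with m >= n.
    """
    n = len(pcs_target)
    mean = pcs_target.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pcs_target, rowvar=False))
    diff = pcs_other - mean
    # Squared Mahalanobis distance of every other spike from the unit's center
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return np.sort(d2)[n - 1]
```

Larger values indicate a unit that is better separated from the surrounding spikes.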
spiketoolkit.validation.compute_isi_violations(sorting, duration_in_frames, isi_threshold=0.0015, min_isi=None, sampling_frequency=None, unit_ids=None, **kwargs)

Computes and returns the isi violations for the sorted dataset.

sorting: SortingExtractor
The sorting result to be evaluated.
duration_in_frames: int
Length of recording (in frames).
isi_threshold: float
The isi threshold for calculating isi violations
min_isi: float
The minimum expected isi value
sampling_frequency: float
The sampling frequency of the result. If None, will check to see if sampling frequency is in sorting extractor
unit_ids: list
List of unit ids to compute metric for. If not specified, all units are used
**kwargs: keyword arguments
Keyword arguments among the following:
save_property_or_features: bool
If True, the metric is saved as sorting property
verbose: bool
If True, will be verbose in metric computation
isi_violations: np.ndarray
The isi violations of the sorted units.
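The metric counts inter-spike intervals shorter than the refractory-period threshold and normalizes by the rate expected from chance coincidences. A simplified NumPy sketch following the common Hill-style normalization (the exact formula used by spiketoolkit may differ in details, and the function name is illustrative):

```python
import numpy as np

def isi_violation_rate(spike_frames, duration_in_frames, fs,
                       isi_threshold=0.0015, min_isi=0.0):
    """Ratio of the observed violation rate to the unit's firing rate."""
    spike_times = np.asarray(spike_frames) / fs
    isis = np.diff(np.sort(spike_times))
    n_violations = np.sum(isis < isi_threshold)
    n_spikes = len(spike_times)
    total_duration = duration_in_frames / fs
    # Time window in which a chance coincidence would count as a violation
    violation_time = 2 * n_spikes * (isi_threshold - min_isi)
    total_rate = n_spikes / total_duration
    return (n_violations / violation_time) / total_rate
```

A perfectly refractory unit scores 0; contaminated units score higher.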
spiketoolkit.validation.compute_snrs(sorting, recording, snr_mode='mad', snr_noise_duration=10.0, max_spikes_per_unit_for_snr=1000, template_mode='median', max_channel_peak='both', unit_ids=None, **kwargs)

Computes and returns the snrs in the sorted dataset.

sorting: SortingExtractor
The sorting result to be evaluated.
recording: RecordingExtractor
The given recording extractor from which to extract amplitudes
snr_mode: str
Mode to compute noise SNR (‘mad’ | ‘std’ - default ‘mad’)
snr_noise_duration: float
Number of seconds to compute noise level from (default 10.0)
max_spikes_per_unit_for_snr: int
Maximum number of spikes to compute templates from (default 1000)
template_mode: str
Use ‘mean’ or ‘median’ to compute templates
max_channel_peak: str
If maximum channel has to be found among negative peaks (‘neg’), positive (‘pos’) or both (‘both’ - default)
unit_ids: list
List of unit ids to compute metric for. If not specified, all units are used
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
If maximum channel has to be found among negative peaks (‘neg’), positive (‘pos’) or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation
snrs: np.ndarray
The snrs of the sorted units.
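The SNR of a unit is its peak-to-peak template amplitude on the best channel divided by the noise level on that channel, estimated with the MAD (robust) or the standard deviation. A NumPy sketch under those assumptions (illustrative names, not the spiketoolkit API):

```python
import numpy as np

def compute_snr(template, noise_traces, snr_mode="mad"):
    """SNR = peak-to-peak template amplitude / noise level on the best channel.

    template: (n_channels, n_timepoints); noise_traces: (n_channels, n_samples)
    stretch of recording used to estimate the noise level.
    """
    ptp = template.max(axis=1) - template.min(axis=1)
    best = np.argmax(ptp)
    if snr_mode == "mad":
        # MAD scaled to be consistent with the std of Gaussian noise
        med = np.median(noise_traces[best])
        noise = np.median(np.abs(noise_traces[best] - med)) / 0.6745
    else:
        noise = np.std(noise_traces[best])
    return ptp[best] / noise
```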
spiketoolkit.validation.compute_amplitude_cutoffs(sorting, recording, unit_ids=None, **kwargs)

Computes and returns the amplitude cutoffs for the sorted dataset.

sorting: SortingExtractor
The sorting result to be evaluated.
recording: RecordingExtractor
The given recording extractor from which to extract amplitudes
unit_ids: list
List of unit ids to compute metric for. If not specified, all units are used
**kwargs: keyword arguments
Keyword arguments among the following:
apply_filter: bool
If True, recording is bandpass-filtered.
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
save_property_or_features: bool
If True, it will save amplitudes in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes.
peak: str
If maximum channel has to be found among negative peaks (‘neg’), positive (‘pos’) or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation
amplitude_cutoffs: np.ndarray
The amplitude cutoffs of the sorted units.
spiketoolkit.validation.compute_d_primes(sorting, recording, num_channels_to_compare=13, max_spikes_per_cluster=500, unit_ids=None, **kwargs)

Computes and returns the d primes in the sorted dataset.

sorting: SortingExtractor
The sorting result to be evaluated
recording: RecordingExtractor
The given recording extractor from which to extract amplitudes
num_channels_to_compare: int
The number of channels to be used for the PC extraction and comparison
max_spikes_per_cluster: int
Max spikes to be used from each unit
unit_ids: list
List of unit ids to compute metric for. If not specified, all units are used
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
If maximum channel has to be found among negative peaks (‘neg’), positive (‘pos’) or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation
d_primes: np.ndarray
The d primes of the sorted units.
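d′ measures how separable two units' PC clouds are along a discriminant axis. A toy NumPy sketch in which the mean-difference direction stands in for the LDA axis used by the real metric (names and simplification are illustrative):

```python
import numpy as np

def d_prime(pcs_a, pcs_b):
    """Separation of two units' PC clouds in pooled-standard-deviation units."""
    direction = pcs_b.mean(axis=0) - pcs_a.mean(axis=0)
    direction = direction / np.linalg.norm(direction)
    # Project both clouds onto the separating direction
    proj_a = pcs_a @ direction
    proj_b = pcs_b @ direction
    pooled_sd = np.sqrt(0.5 * (proj_a.var() + proj_b.var()))
    return np.abs(proj_b.mean() - proj_a.mean()) / pooled_sd
```

Well-isolated unit pairs give large d′; overlapping clusters give values near 0.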
spiketoolkit.validation.compute_drift_metrics(sorting, recording, drift_metrics_interval_s=51, drift_metrics_min_spikes_per_interval=10, unit_ids=None, **kwargs)

Computes and returns the drift metrics in the sorted dataset.

sorting: SortingExtractor
The sorting result to be evaluated.
recording: RecordingExtractor
The given recording extractor from which to extract amplitudes
drift_metrics_interval_s: float
Time period for evaluating drift.
drift_metrics_min_spikes_per_interval: int
Minimum number of spikes for evaluating drift metrics per interval.
unit_ids: list
List of unit ids to compute metric for. If not specified, all units are used
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), amplitudes are absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
If maximum channel has to be found among negative peaks (‘neg’), positive (‘pos’) or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation
dm_metrics: np.ndarray
The drift metrics of the sorted units.
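The drift metrics split the recording into intervals of drift_metrics_interval_s seconds and skip any interval with fewer than drift_metrics_min_spikes_per_interval spikes. Below is a minimal NumPy sketch of that interval-binning step only (illustrative and hypothetical; the actual metric additionally tracks PC feature positions per interval):

```python
import numpy as np

# Sketch of the interval-binning step behind the drift metrics
# (illustrative only; the toolkit also uses PC feature positions per interval).
def bin_spikes_by_interval(spike_times_s, total_duration_s,
                           interval_s=51, min_spikes_per_interval=10):
    """Return, for each interval, the spike times in it, keeping only
    intervals that contain at least `min_spikes_per_interval` spikes."""
    spike_times_s = np.asarray(spike_times_s)
    n_intervals = int(np.ceil(total_duration_s / interval_s))
    kept = []
    for i in range(n_intervals):
        in_interval = spike_times_s[(spike_times_s >= i * interval_s) &
                                    (spike_times_s < (i + 1) * interval_s)]
        if len(in_interval) >= min_spikes_per_interval:
            kept.append(in_interval)
    return kept

# Example: 3 intervals of 51 s; the middle one is too sparse and is dropped.
times = np.concatenate([np.linspace(0, 50, 20),      # 20 spikes in interval 0
                        np.linspace(60, 100, 5),     # 5 spikes in interval 1
                        np.linspace(110, 150, 15)])  # 15 spikes in interval 2
kept = bin_spikes_by_interval(times, 153)
print(len(kept))  # 2 intervals survive the min-spike criterion
```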
spiketoolkit.validation.compute_firing_rates(sorting, duration_in_frames, sampling_frequency=None, unit_ids=None, **kwargs)

Computes and returns the firing rates for the sorted dataset.

sorting: SortingExtractor
The sorting result to be evaluated.
duration_in_frames: int
Length of recording (in frames).
sampling_frequency: float
The sampling frequency of the result. If None, will check to see if sampling frequency is in sorting extractor
unit_ids: list
List of unit ids to compute metric for. If not specified, all units are used
**kwargs: keyword arguments
Keyword arguments among the following:
save_property_or_features: bool
If True, the metric is saved as sorting property
verbose: bool
If True, will be verbose in metric computation
firing_rates: np.ndarray
The firing rates of the sorted units.
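The firing rate of a unit is its spike count divided by the recording duration in seconds, which is why both duration_in_frames and a sampling frequency are required. A minimal sketch of that relationship (per-unit spike counts here are hypothetical):

```python
import numpy as np

# A minimal sketch of the firing-rate computation: spikes per unit divided by
# the recording duration in seconds (duration_in_frames / sampling_frequency).
def firing_rates(spike_counts, duration_in_frames, sampling_frequency):
    duration_s = duration_in_frames / sampling_frequency
    return np.asarray(spike_counts) / duration_s

# Example: 30 000 frames at 30 kHz is a 1 s recording.
rates = firing_rates([10, 25, 0], duration_in_frames=30_000,
                     sampling_frequency=30_000.0)
print(rates)  # [10. 25.  0.]
```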
spiketoolkit.validation.compute_l_ratios(sorting, recording, num_channels_to_compare=13, max_spikes_per_cluster=500, unit_ids=None, **kwargs)

Computes and returns the l ratios in the sorted dataset.

sorting: SortingExtractor
The sorting result to be evaluated
recording: RecordingExtractor
The given recording extractor from which to extract amplitudes
num_channels_to_compare: int
The number of channels to be used for the PC extraction and comparison
max_spikes_per_cluster: int
Max spikes to be used from each unit
unit_ids: list
List of unit ids to compute metric for. If not specified, all units are used
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation
l_ratios: np.ndarray
The l ratios of the sorted units.
spiketoolkit.validation.compute_nn_metrics(sorting, recording, num_channels_to_compare=13, max_spikes_per_cluster=500, max_spikes_for_nn=10000, n_neighbors=4, unit_ids=None, **kwargs)

Computes and returns the nearest neighbor metrics in the sorted dataset.

sorting: SortingExtractor
The sorting result to be evaluated.
recording: RecordingExtractor
The given recording extractor from which to extract amplitudes
num_channels_to_compare: int
The number of channels to be used for the PC extraction and comparison
max_spikes_per_cluster: int
Max spikes to be used from each unit
max_spikes_for_nn: int
Max spikes to be used for nearest-neighbors calculation
n_neighbors: int
Number of neighbors to compare
unit_ids: list
List of unit ids to compute metric for. If not specified, all units are used
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation
nn_metrics: np.ndarray
The nearest neighbor metrics of the sorted units.
spiketoolkit.validation.compute_num_spikes(sorting, sampling_frequency=None, unit_ids=None, **kwargs)

Computes and returns the num spikes for the sorted dataset.

sorting: SortingExtractor
The sorting result to be evaluated
sampling_frequency: float
The sampling frequency of the result. If None, will check to see if sampling frequency is in sorting extractor
unit_ids: list
List of unit ids to compute metric for. If not specified, all units are used
**kwargs: keyword arguments
Keyword arguments among the following:
save_property_or_features: bool
If True, the metric is saved as sorting property
verbose: bool
If True, will be verbose in metric computation
num_spikes: np.ndarray
The number of spikes of the sorted units.
spiketoolkit.validation.compute_presence_ratios(sorting, duration_in_frames, sampling_frequency=None, unit_ids=None, **kwargs)

Computes and returns the presence ratios for the sorted dataset.

sorting: SortingExtractor
The sorting result to be evaluated.
duration_in_frames: int
Length of recording (in frames).
sampling_frequency: float
The sampling frequency of the result. If None, will check to see if sampling frequency is in sorting extractor
unit_ids: list
List of unit ids to compute metric for. If not specified, all units are used
**kwargs: keyword arguments
Keyword arguments among the following:
save_property_or_features: bool
If True, the metric is saved as sorting property
verbose: bool
If True, will be verbose in metric computation
presence_ratios: np.ndarray
The presence ratios of the sorted units.
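The presence ratio measures the fraction of the recording in which a unit fires: the recording is divided into equal time bins and the ratio is the fraction of bins that contain at least one spike. A hedged NumPy sketch (the bin count of 100 is an assumption for illustration, not the toolkit's fixed value):

```python
import numpy as np

# Hedged sketch of a presence ratio: the fraction of equal time bins that
# contain at least one spike (the bin count here, 100, is an assumption).
def presence_ratio(spike_frames, duration_in_frames, num_bins=100):
    edges = np.linspace(0, duration_in_frames, num_bins + 1)
    counts, _ = np.histogram(spike_frames, bins=edges)
    return np.mean(counts > 0)

# A unit firing only in the first half of the recording:
spikes = np.arange(0, 15_000, 100)          # spikes in frames [0, 15000)
ratio = presence_ratio(spikes, 30_000)
print(ratio)  # 0.5
```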
spiketoolkit.validation.compute_silhouette_scores(sorting, recording, max_spikes_for_silhouette=10000, unit_ids=None, **kwargs)

Computes and returns the silhouette scores in the sorted dataset.

sorting: SortingExtractor
The sorting result to be evaluated
recording: RecordingExtractor
The given recording extractor from which to extract amplitudes
max_spikes_for_silhouette: int
Max spikes to be used for silhouette metric
unit_ids: list
List of unit ids to compute metric for. If not specified, all units are used
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation
silhouette_scores: np.ndarray
The silhouette scores of the sorted units.
spiketoolkit.validation.compute_quality_metrics(sorting, recording=None, duration_in_frames=None, sampling_frequency=None, metric_names=None, unit_ids=None, as_dataframe=False, isi_threshold=0.0015, min_isi=None, snr_mode='mad', snr_noise_duration=10.0, max_spikes_per_unit_for_snr=1000, template_mode='median', max_channel_peak='both', max_spikes_per_unit_for_noise_overlap=1000, noise_overlap_num_features=10, noise_overlap_num_knn=6, drift_metrics_interval_s=51, drift_metrics_min_spikes_per_interval=10, max_spikes_for_silhouette=10000, num_channels_to_compare=13, max_spikes_per_cluster=500, max_spikes_for_nn=10000, n_neighbors=4, **kwargs)

Computes and returns all specified metrics for the sorted dataset.

sorting: SortingExtractor
The sorting result to be evaluated.
recording: RecordingExtractor
The given recording extractor from which to extract amplitudes
duration_in_frames: int
Length of recording (in frames).
sampling_frequency: float
The sampling frequency of the result. If None, will check to see if sampling frequency is in sorting extractor
metric_names: list
List of metric names to be computed
unit_ids: list
List of unit ids to compute metric for. If not specified, all units are used
as_dataframe: bool
If True, will return dataframe of metrics. If False, will return dictionary.
isi_threshold: float
The isi threshold for calculating isi violations
min_isi: float
The minimum expected isi value
snr_mode: str
Mode to compute noise SNR (‘mad’ | ‘std’ - default ‘mad’)
snr_noise_duration: float
Number of seconds to compute noise level from (default 10.0)
max_spikes_per_unit_for_snr: int
Maximum number of spikes to compute templates for SNR from (default 1000)
template_mode: str
Use ‘mean’ or ‘median’ to compute templates
max_channel_peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
max_spikes_per_unit_for_noise_overlap: int
Maximum number of spikes to compute templates for noise overlap from (default 1000)
noise_overlap_num_features: int
Number of features to use for PCA for noise overlap
noise_overlap_num_knn: int
Number of nearest neighbors for noise overlap
drift_metrics_interval_s: float
Time period for evaluating drift.
drift_metrics_min_spikes_per_interval: int
Minimum number of spikes for evaluating drift metrics per interval
max_spikes_for_silhouette: int
Max spikes to be used for silhouette metric
num_channels_to_compare: int
The number of channels to be used for the PC extraction and comparison
max_spikes_per_cluster: int
Max spikes to be used from each unit
max_spikes_for_nn: int
Max spikes to be used for nearest-neighbors calculation
n_neighbors: int
Number of neighbors to compare
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation
metrics: dictionary OR pandas.dataframe
Dictionary or pandas.dataframe of metrics.
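When as_dataframe=False the metrics come back as a dictionary mapping metric names to per-unit arrays; with as_dataframe=True the equivalent table is returned with units as rows and metrics as columns. A sketch of that equivalence using pandas and hypothetical metric values (the names and numbers below are for illustration only):

```python
import pandas as pd

# Hypothetical per-unit metric values, shaped like the dictionary output of
# compute_quality_metrics(..., as_dataframe=False):
metrics_dict = {
    "firing_rate":    [5.2, 0.8, 12.1],
    "presence_ratio": [0.99, 0.41, 1.00],
    "isi_violation":  [0.001, 0.2, 0.0],
}
unit_ids = [0, 1, 2]

# as_dataframe=True corresponds to this tabular view: units as rows,
# metrics as columns.
df = pd.DataFrame(metrics_dict, index=unit_ids)
print(df.shape)  # (3 units, 3 metrics)
```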

Curation

spiketoolkit.curation.threshold_amplitude_cutoffs(sorting, recording, threshold, threshold_sign, **kwargs)

Computes and thresholds the amplitude cutoffs in the sorted dataset with the given sign and value.

sorting: SortingExtractor
The sorting result to be evaluated
recording: RecordingExtractor
The given recording extractor
threshold: int or float
The threshold for the given metric
threshold_sign: str
If ‘less’, will threshold any metric less than the given threshold. If ‘less_or_equal’, will threshold any metric less than or equal to the given threshold. If ‘greater’, will threshold any metric greater than the given threshold. If ‘greater_or_equal’, will threshold any metric greater than or equal to the given threshold.
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit

threshold sorting extractor
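All curation functions share the same threshold_sign semantics: units whose metric value satisfies the comparison are removed from the returned sorting. A pure-Python sketch of that selection logic (illustrative only; the real functions return a curated SortingExtractor, and the cutoff values below are hypothetical):

```python
import operator

# The four threshold_sign modes map to standard comparison operators.
_SIGNS = {
    "less": operator.lt,
    "less_or_equal": operator.le,
    "greater": operator.gt,
    "greater_or_equal": operator.ge,
}

def kept_units(metric_per_unit, threshold, threshold_sign):
    """Return the unit ids whose metric does NOT cross the threshold."""
    cmp = _SIGNS[threshold_sign]
    return [uid for uid, value in metric_per_unit.items()
            if not cmp(value, threshold)]

# Hypothetical amplitude cutoffs for three units:
amplitude_cutoffs = {0: 0.01, 1: 0.30, 2: 0.05}
# Remove units with amplitude cutoff greater than 0.1:
print(kept_units(amplitude_cutoffs, 0.1, "greater"))  # [0, 2]
```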

spiketoolkit.curation.threshold_d_primes(sorting, recording, threshold, threshold_sign, num_channels_to_compare=13, max_spikes_per_cluster=500, **kwargs)

Computes and thresholds the d primes in the sorted dataset with the given sign and value.

sorting: SortingExtractor
The sorting result to be evaluated
recording: RecordingExtractor
The given recording extractor
threshold: int or float
The threshold for the given metric
threshold_sign: str
If ‘less’, will threshold any metric less than the given threshold. If ‘less_or_equal’, will threshold any metric less than or equal to the given threshold. If ‘greater’, will threshold any metric greater than the given threshold. If ‘greater_or_equal’, will threshold any metric greater than or equal to the given threshold.
num_channels_to_compare: int
The number of channels to be used for the PC extraction and comparison
max_spikes_per_cluster: int
Max spikes to be used from each unit
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation

threshold sorting extractor

spiketoolkit.curation.threshold_drift_metrics(sorting, recording, threshold, threshold_sign, metric_name='max_drift', drift_metrics_interval_s=51, drift_metrics_min_spikes_per_interval=10, **kwargs)

Computes and thresholds the specified drift metric for the sorted dataset with the given sign and value.

sorting: SortingExtractor
The sorting result to be evaluated.
recording: RecordingExtractor
The given recording extractor
threshold: int or float
The threshold for the given metric.
threshold_sign: str
If ‘less’, will threshold any metric less than the given threshold. If ‘less_or_equal’, will threshold any metric less than or equal to the given threshold. If ‘greater’, will threshold any metric greater than the given threshold. If ‘greater_or_equal’, will threshold any metric greater than or equal to the given threshold.
metric_name: str
The name of the drift metric to be thresholded (either “max_drift” or “cumulative_drift”).
drift_metrics_interval_s: float
Time period for evaluating drift.
drift_metrics_min_spikes_per_interval: int
Minimum number of spikes for evaluating drift metrics per interval.
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation

threshold sorting extractor

spiketoolkit.curation.threshold_firing_rates(sorting, threshold, threshold_sign, duration_in_frames, sampling_frequency=None, **kwargs)

Computes and thresholds the firing rates in the sorted dataset with the given sign and value.

sorting: SortingExtractor
The sorting result to be evaluated
threshold: int or float
The threshold for the given metric
threshold_sign: str
If ‘less’, will threshold any metric less than the given threshold. If ‘less_or_equal’, will threshold any metric less than or equal to the given threshold. If ‘greater’, will threshold any metric greater than the given threshold. If ‘greater_or_equal’, will threshold any metric greater than or equal to the given threshold.
duration_in_frames: int
Length of recording (in frames).
sampling_frequency: float
The sampling frequency of the result. If None, will check to see if sampling frequency is in sorting extractor
**kwargs: keyword arguments
Keyword arguments among the following:
save_property_or_features: bool
If True, the metric is saved as sorting property
verbose: bool
If True, will be verbose in metric computation

threshold sorting extractor

spiketoolkit.curation.threshold_isi_violations(sorting, threshold, threshold_sign, duration_in_frames, isi_threshold=0.0015, min_isi=None, sampling_frequency=None, **kwargs)

Computes and thresholds the isi violations in the sorted dataset with the given sign and value.

sorting: SortingExtractor
The sorting result to be evaluated.
threshold: int or float
The threshold for the given metric.
threshold_sign: str
If ‘less’, will threshold any metric less than the given threshold. If ‘less_or_equal’, will threshold any metric less than or equal to the given threshold. If ‘greater’, will threshold any metric greater than the given threshold. If ‘greater_or_equal’, will threshold any metric greater than or equal to the given threshold.
duration_in_frames: int
Length of recording (in frames).
isi_threshold: float
The isi threshold for calculating isi violations.
min_isi: float
The minimum expected isi value.
sampling_frequency: float
The sampling frequency of the result. If None, will check to see if sampling frequency is in sorting extractor.
**kwargs: keyword arguments
Keyword arguments among the following:
save_property_or_features: bool
If True, the metric is saved as sorting property
verbose: bool
If True, will be verbose in metric computation

threshold sorting extractor
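An ISI violation is an inter-spike interval shorter than isi_threshold (in seconds), i.e. a pair of spikes closer together than a plausible refractory period. A hedged NumPy sketch of the counting step (the published metric normalizes this count into a rate; only the raw count is shown here):

```python
import numpy as np

# Hedged sketch of the ISI-violation count: inter-spike intervals shorter
# than isi_threshold (in seconds) count as refractory-period violations.
def isi_violation_count(spike_frames, sampling_frequency, isi_threshold=0.0015):
    isis_s = np.diff(np.sort(spike_frames)) / sampling_frequency
    return int(np.sum(isis_s < isi_threshold))

# At 30 kHz, 0.0015 s is 45 frames; exactly one gap falls below that:
spikes = np.array([0, 30, 1000, 2000])   # first ISI = 30 frames = 1 ms
print(isi_violation_count(spikes, 30_000.0))  # 1
```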

spiketoolkit.curation.threshold_isolation_distances(sorting, recording, threshold, threshold_sign, num_channels_to_compare=13, max_spikes_per_cluster=500, **kwargs)

Computes and thresholds the isolation distances in the sorted dataset with the given sign and value.

sorting: SortingExtractor
The sorting result to be evaluated.
recording: RecordingExtractor
The given recording extractor
threshold: int or float
The threshold for the given metric.
threshold_sign: str
If ‘less’, will threshold any metric less than the given threshold. If ‘less_or_equal’, will threshold any metric less than or equal to the given threshold. If ‘greater’, will threshold any metric greater than the given threshold. If ‘greater_or_equal’, will threshold any metric greater than or equal to the given threshold.
num_channels_to_compare: int
The number of channels to be used for the PC extraction and comparison
max_spikes_per_cluster: int
Max spikes to be used from each unit
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default None)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation

threshold sorting extractor

spiketoolkit.curation.threshold_l_ratios(sorting, recording, threshold, threshold_sign, num_channels_to_compare=13, max_spikes_per_cluster=500, **kwargs)

Computes and thresholds the l ratios in the sorted dataset with the given sign and value.

sorting: SortingExtractor
The sorting result to be evaluated.
recording: RecordingExtractor
The given recording extractor
threshold: int or float
The threshold for the given metric.
threshold_sign: str
If ‘less’, will threshold any metric less than the given threshold. If ‘less_or_equal’, will threshold any metric less than or equal to the given threshold. If ‘greater’, will threshold any metric greater than the given threshold. If ‘greater_or_equal’, will threshold any metric greater than or equal to the given threshold.
num_channels_to_compare: int
The number of channels to be used for the PC extraction and comparison
max_spikes_per_cluster: int
Max spikes to be used from each unit
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum channels per waveforms to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation

threshold sorting extractor

spiketoolkit.curation.threshold_nn_metrics(sorting, recording, threshold, threshold_sign, metric_name='nn_hit_rate', num_channels_to_compare=13, max_spikes_per_cluster=500, max_spikes_for_nn=10000, n_neighbors=4, **kwargs)

Computes and thresholds the specified nearest neighbor metric for the sorted dataset with the given sign and value.

sorting: SortingExtractor
The sorting result to be evaluated.
recording: RecordingExtractor
The given recording extractor
threshold: int or float
The threshold for the given metric.
threshold_sign: str
If ‘less’, will threshold any metric less than the given threshold. If ‘less_or_equal’, will threshold any metric less than or equal to the given threshold. If ‘greater’, will threshold any metric greater than the given threshold. If ‘greater_or_equal’, will threshold any metric greater than or equal to the given threshold.
metric_name: str
The name of the nearest neighbor metric to be thresholded (either “nn_hit_rate” or “nn_miss_rate”).
num_channels_to_compare: int
The number of channels to be used for the PC extraction and comparison
max_spikes_per_cluster: int
Max spikes to be used from each unit
max_spikes_for_nn: int
Max spikes to be used for nearest-neighbors calculation.
n_neighbors: int
Number of neighbors to compare.
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum number of channels per waveform to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation

threshold_sorting: SortingExtractor
The sorting extractor with the units violating the threshold removed

spiketoolkit.curation.threshold_num_spikes(sorting, threshold, threshold_sign, sampling_frequency=None, **kwargs)

Computes and thresholds the number of spikes in the sorted dataset with the given sign and value.

sorting: SortingExtractor
The sorting result to be evaluated
threshold: int or float
The threshold for the given metric
threshold_sign: str
If ‘less’, will threshold any metric less than the given threshold. If ‘less_or_equal’, will threshold any metric less than or equal to the given threshold. If ‘greater’, will threshold any metric greater than the given threshold. If ‘greater_or_equal’, will threshold any metric greater than or equal to the given threshold.
sampling_frequency: float
The sampling frequency of the result. If None, will check to see if sampling frequency is in sorting extractor
**kwargs: keyword arguments
Keyword arguments among the following:
save_property_or_features: bool
If True, the metric is saved as sorting property
verbose: bool
If True, will be verbose in metric computation

threshold_sorting: SortingExtractor
The sorting extractor with the units violating the threshold removed
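
The threshold_sign semantics shared by all the threshold_* curation functions above can be sketched in plain Python. This is a minimal illustration, not spiketoolkit's implementation; the helper name and list-of-indices return convention are hypothetical:

```python
import operator

def units_to_remove(metric_values, threshold, threshold_sign):
    # Map each documented threshold_sign value to its comparison; units whose
    # metric satisfies the comparison against the threshold are removed.
    ops = {
        'less': operator.lt,
        'less_or_equal': operator.le,
        'greater': operator.gt,
        'greater_or_equal': operator.ge,
    }
    op = ops[threshold_sign]
    return [i for i, v in enumerate(metric_values) if op(v, threshold)]
```

For example, thresholding num_spikes with threshold=50 and threshold_sign='less' removes every unit with fewer than 50 spikes.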

spiketoolkit.curation.threshold_presence_ratios(sorting, threshold, threshold_sign, duration_in_frames, sampling_frequency=None, **kwargs)

Computes and thresholds the presence ratios in the sorted dataset with the given sign and value.

sorting: SortingExtractor
The sorting result to be evaluated
threshold: int or float
The threshold for the given metric
threshold_sign: str
If ‘less’, will threshold any metric less than the given threshold. If ‘less_or_equal’, will threshold any metric less than or equal to the given threshold. If ‘greater’, will threshold any metric greater than the given threshold. If ‘greater_or_equal’, will threshold any metric greater than or equal to the given threshold.
duration_in_frames: int
Length of recording (in frames).
sampling_frequency: float
The sampling frequency of the result. If None, will check to see if sampling frequency is in sorting extractor
**kwargs: keyword arguments
Keyword arguments among the following:
save_property_or_features: bool
If True, the metric is saved as sorting property
verbose: bool
If True, will be verbose in metric computation

threshold_sorting: SortingExtractor
The sorting extractor with the units violating the threshold removed

spiketoolkit.curation.threshold_silhouette_scores(sorting, recording, threshold, threshold_sign, max_spikes_for_silhouette=10000, **kwargs)

Computes and thresholds the silhouette scores in the sorted dataset with the given sign and value.

sorting: SortingExtractor
The sorting result to be evaluated
recording: RecordingExtractor
The given recording extractor
threshold: int or float
The threshold for the given metric
threshold_sign: str
If ‘less’, will threshold any metric less than the given threshold. If ‘less_or_equal’, will threshold any metric less than or equal to the given threshold. If ‘greater’, will threshold any metric greater than the given threshold. If ‘greater_or_equal’, will threshold any metric greater than or equal to the given threshold.
max_spikes_for_silhouette: int
Max spikes to be used for silhouette metric
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum number of channels per waveform to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation

threshold_sorting: SortingExtractor
The sorting extractor with the units violating the threshold removed

spiketoolkit.curation.threshold_snrs(sorting, recording, threshold, threshold_sign, snr_mode='mad', snr_noise_duration=10.0, max_spikes_per_unit_for_snr=1000, template_mode='median', max_channel_peak='both', **kwargs)

Computes and thresholds the SNRs in the sorted dataset with the given sign and value.

sorting: SortingExtractor
The sorting result to be evaluated.
recording: RecordingExtractor
The given recording extractor
threshold: int or float
The threshold for the given metric.
threshold_sign: str
If ‘less’, will threshold any metric less than the given threshold. If ‘less_or_equal’, will threshold any metric less than or equal to the given threshold. If ‘greater’, will threshold any metric greater than the given threshold. If ‘greater_or_equal’, will threshold any metric greater than or equal to the given threshold.
snr_mode: str
Mode to compute noise SNR (‘mad’ | ‘std’ - default ‘mad’)
snr_noise_duration: float
Number of seconds to compute noise level from (default 10.0)
max_spikes_per_unit_for_snr: int
Maximum number of spikes to compute templates from (default 1000)
template_mode: str
Use ‘mean’ or ‘median’ to compute templates
max_channel_peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
**kwargs: keyword arguments
Keyword arguments among the following:
method: str
If ‘absolute’ (default), absolute amplitudes in uV are returned. If ‘relative’, amplitudes are returned as ratios between waveform amplitudes and template amplitudes
peak: str
Whether the maximum channel should be found among negative peaks (‘neg’), positive peaks (‘pos’), or both (‘both’ - default)
frames_before: int
Frames before peak to compute amplitude
frames_after: int
Frames after peak to compute amplitude
apply_filter: bool
If True, recording is bandpass-filtered
freq_min: float
High-pass frequency for optional filter (default 300 Hz)
freq_max: float
Low-pass frequency for optional filter (default 6000 Hz)
grouping_property: str
Property to group channels. E.g. if the recording extractor has the ‘group’ property and ‘grouping_property’ is ‘group’, then waveforms are computed group-wise.
ms_before: float
Time period in ms to cut waveforms before the spike events
ms_after: float
Time period in ms to cut waveforms after the spike events
dtype: dtype
The numpy dtype of the waveforms
compute_property_from_recording: bool
If True and ‘grouping_property’ is given, the property of each unit is assigned as the corresponding property of the recording extractor channel on which the average waveform is the largest
max_channels_per_waveforms: int or None
Maximum number of channels per waveform to return. If None, all channels are returned
n_jobs: int
Number of parallel jobs (default 1)
memmap: bool
If True, waveforms are saved as memmap object (recommended for long recordings with many channels)
save_property_or_features: bool
If True, it will save features in the sorting extractor
recompute_info: bool
If True, waveforms are recomputed
max_spikes_per_unit: int
The maximum number of spikes to extract per unit
seed: int
Random seed for reproducibility
verbose: bool
If True, will be verbose in metric computation

threshold_sorting: SortingExtractor
The sorting extractor with the units violating the threshold removed

class spiketoolkit.curation.CurationSortingExtractor(parent_sorting, curation_steps=None)
exclude_units(unit_ids)

This function deletes roots from the curation tree according to the given unit_ids

unit_ids: list or int
The unit ids to be excluded
append_curation_step: bool
Appends the curation step to the object keyword arguments
get_unit_ids()

This function returns a list of ids (ints) for each unit in the sorted result.

unit_ids: array_like
A list of the unit ids in the sorted result (ints).
get_unit_spike_train(unit_id, start_frame=None, end_frame=None)

This function extracts spike frames from the specified unit. It will return spike frames from within one of four ranges:

  • [start_frame, start_frame+1, …, end_frame-1] if both start_frame and end_frame are given
  • [start_frame, start_frame+1, …, final_unit_spike_frame-1] if only start_frame is given
  • [0, 1, …, end_frame-1] if only end_frame is given
  • [0, 1, …, final_unit_spike_frame-1] if neither start_frame nor end_frame is given

Spike frames are returned in the form of an array_like of spike frames. In this implementation, start_frame is inclusive and end_frame is exclusive, conforming to numpy standards.

unit_id: int
The id that specifies a unit in the recording
start_frame: int
The frame above which a spike frame is returned (inclusive)
end_frame: int
The frame below which a spike frame is returned (exclusive)
spike_train: numpy.ndarray
An 1D array containing all the frames for each spike in the specified unit given the range of start and end frames
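
The inclusive/exclusive frame convention can be illustrated with a short sketch (a hypothetical helper, not the extractor's actual code):

```python
def slice_spike_train(spike_frames, start_frame=None, end_frame=None):
    # start_frame is inclusive, end_frame is exclusive (numpy convention);
    # None falls back to "from the beginning" / "to the last spike".
    start = 0 if start_frame is None else start_frame
    return [f for f in spike_frames
            if f >= start and (end_frame is None or f < end_frame)]
```
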
merge_units(unit_ids)

This function merges two roots from the curation tree according to the given unit_ids. It creates a new unit_id and root that has the merged roots as children.

unit_ids: list
The unit ids to be merged
new_root_id: int
The unit id of the new merged unit.
print_curation_tree(unit_id)

This function prints the current curation tree for the unit_id (roots are current unit ids).

unit_id: int
The unit id whose curation history will be printed.
split_unit(unit_id, indices)

This function splits a root from the curation tree according to the given unit_id and indices. It creates two new unit_ids and roots that have the split root as a child. This function splits the spike train of the root by the given indices.

unit_id: int
The unit id to be split
indices: list
The indices of the unit spike train at which the spike train will be split.
new_root_ids: tuple
A tuple of new unit ids after the split (integers).
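
The index-based split can be sketched as follows (a minimal illustration assuming the given indices select the spikes assigned to the first new unit; the helper name is hypothetical):

```python
def split_spike_train(spike_frames, indices):
    # Spikes at the given positions of the spike train go to the first
    # new unit; all remaining spikes go to the second.
    chosen = set(indices)
    first = [f for i, f in enumerate(spike_frames) if i in chosen]
    second = [f for i, f in enumerate(spike_frames) if i not in chosen]
    return first, second
```
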

Module spikeinterface.sorters

spikesorters.available_sorters()

Lists available sorters.

spikesorters.get_default_params(sorter_name_or_class)

Returns default parameters for the specified sorter.

sorter_name_or_class: str or SorterClass
The sorter to retrieve default parameters from
default_params: dict
Dictionary with default params for the specified sorter
spikesorters.run_sorter(sorter_name_or_class, recording, output_folder=None, delete_output_folder=False, grouping_property=None, parallel=False, verbose=False, raise_error=True, n_jobs=-1, joblib_backend='loky', **params)

Generic function to run a sorter via function approach.

Two usages with name or class:

by name:
>>> sorting = run_sorter('tridesclous', recording)
by class:
>>> sorting = run_sorter(TridesclousSorter, recording)
sorter_name_or_class: str or SorterClass
The sorter to use for spike sorting
recording: RecordingExtractor
The recording extractor to be spike sorted
output_folder: str or Path
Path to output folder
delete_output_folder: bool
If True, output folder is deleted (default False)
grouping_property: str
Splits spike sorting by ‘grouping_property’ (e.g. ‘groups’)
parallel: bool
If True and spike sorting is by ‘grouping_property’, spike sorting jobs are launched in parallel
verbose: bool
If True, output is verbose
raise_error: bool
If True, an error is raised if spike sorting fails (default). If False, the process continues and the error is logged in the log file.
n_jobs: int
Number of jobs when parallel=True (default=-1)
joblib_backend: str
joblib backend when parallel=True (default=’loky’)
**params: keyword args
Spike sorter specific arguments (they can be retrieved with ‘get_default_params(sorter_name_or_class)’)
sortingextractor: SortingExtractor
The spike sorted data
spikesorters.run_sorters(sorter_list, recording_dict_or_list, working_folder, sorter_params={}, grouping_property=None, mode='raise', engine=None, engine_kwargs={}, verbose=False, with_output=True, run_sorter_kwargs={})

Run several sorters on several recordings.

sorter_list: list of str
List of sorter names to run.
recording_dict_or_list: dict or list
A dict of recordings. The key will be the name of the recording. If a list is given then the name will be recording_0, recording_1, …
working_folder: str
The working directory. This must not exist before calling this function.
sorter_params: dict of dict with sorter_name as key
This allows one to overwrite the default params for each sorter.
grouping_property: str or None
The property of grouping given to sorters.
mode: ‘raise’ or ‘overwrite’ or ‘keep’
The mode when the subfolder of recording/sorter already exists.
  • ‘raise’ : raise error if subfolder exists
  • ‘overwrite’ : force recompute
  • ‘keep’ : do not compute again if subfolder exists and log is OK
engine: ‘loop’ or ‘multiprocessing’ or ‘dask’
Which approach to use to run the multiple sorters.
  • ‘loop’ : run sorters in a loop (serially)
  • ‘multiprocessing’ : use the Python multiprocessing library to run in parallel
  • ‘dask’ : use the Dask module to run in parallel
engine_kwargs: dict
This contains kwargs specific to the launcher engine:
  • ‘loop’ : no extra kwargs
  • ‘multiprocessing’ : {‘processes’: int} - the number of processes
  • ‘dask’ : {‘client’: client} - the dask client for submitting tasks
verbose: bool
Controls sorter verbosity.
with_output: bool
If True, the output is returned.
run_sorter_kwargs: dict
This contains kwargs specific to the run_sorter function:
  • ‘raise_error’ : bool
  • ‘parallel’ : bool
  • ‘n_jobs’ : int
  • ‘joblib_backend’ : ‘loky’ / ‘multiprocessing’ / ‘threading’
results : dict
The output is nested dict[(rec_name, sorter_name)] of SortingExtractor.

Using multiprocessing through this function does not allow for subprocesses, so sorters that already use multiprocessing internally will fail.
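
The nested output dict can be reorganized per recording with plain dict operations; the placeholder strings below stand in for real SortingExtractor objects, and all names are illustrative:

```python
# Hypothetical run_sorters-style output, keyed by (rec_name, sorter_name).
results = {
    ('recording_0', 'herdingspikes'): 'sorting_hs_0',
    ('recording_0', 'tridesclous'): 'sorting_tdc_0',
    ('recording_1', 'tridesclous'): 'sorting_tdc_1',
}

# Group the sortings by recording name for easier downstream comparison.
by_recording = {}
for (rec_name, sorter_name), sorting in results.items():
    by_recording.setdefault(rec_name, {})[sorter_name] = sorting
```
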

Module spikeinterface.comparison

spikecomparison.compare_two_sorters(sorting1, sorting2, sorting1_name=None, sorting2_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, chance_score=0.1, n_jobs=-1, verbose=False)

Compares two spike sorter outputs.

  • Spike trains are matched based on their agreement scores
  • Individual spikes are labelled as true positives (TP), false negatives (FN), false positives 1 (FP from spike train 1), false positives 2 (FP from spike train 2), misclassifications (CL)

It also allows one to get the confusion matrix and the agreement, false positive, and false negative fractions.

sorting1: SortingExtractor
The first sorting for the comparison
sorting2: SortingExtractor
The second sorting for the comparison
sorting1_name: str
The name of sorter 1
sorting2_name: str
The name of sorter 2
delta_time: float
Number of ms to consider coincident spikes (default 0.4 ms)
sampling_frequency: float
Optional sampling frequency in Hz when not included in sorting
match_score: float
Minimum agreement score to match units (default 0.5)
chance_score: float
Minimum agreement score for a possible match (default 0.1)
n_jobs: int
Number of cores to use in parallel. Uses all available if -1
verbose: bool
If True, output is verbose
sorting_comparison: SortingComparison
The SortingComparison object
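
A minimal sketch of the agreement score underlying the matching: coincident spikes (within a delta, in frames here) are counted and normalized by the union of events. This is an illustration, not spikecomparison's exact implementation:

```python
def agreement_score(st1, st2, delta_frames):
    """Fraction of matched events: matches / (n1 + n2 - matches).
    Both spike trains must be sorted; each spike matches at most once."""
    matched, j = 0, 0
    for t in st1:
        # advance past st2 spikes too early to match t
        while j < len(st2) and st2[j] < t - delta_frames:
            j += 1
        if j < len(st2) and abs(st2[j] - t) <= delta_frames:
            matched += 1
            j += 1
    return matched / (len(st1) + len(st2) - matched)
```

Units whose agreement score exceeds match_score (default 0.5) are considered matched, while scores above chance_score (default 0.1) mark possible matches.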
spikecomparison.compare_multiple_sorters(sorting_list, name_list=None, delta_time=0.4, match_score=0.5, chance_score=0.1, n_jobs=-1, spiketrain_mode='union', sampling_frequency=None, verbose=False)

Compares multiple spike sorter outputs.

  • Pair-wise comparisons are made
  • An agreement graph is built based on the agreement score

It allows one to return a consensus-based sorting extractor with the get_agreement_sorting() method.

sorting_list: list
List of sorting extractor objects to be compared
name_list: list
List of spike sorter names. If not given, sorters are named as ‘sorter0’, ‘sorter1’, ‘sorter2’, etc.
delta_time: float
Number of ms to consider coincident spikes (default 0.4 ms)
match_score: float
Minimum agreement score to match units (default 0.5)
chance_score: float
Minimum agreement score for a possible match (default 0.1)
n_jobs: int
Number of cores to use in parallel. Uses all available if -1
spiketrain_mode: str
Mode to extract agreement spike trains:
  • ‘union’: spike trains are the union between the spike trains of the best matching two sorters
  • ‘intersection’: spike trains are the intersection between the spike trains of the best matching two sorters
sampling_frequency: float
Sampling frequency (used if information is not in the sorting extractors)
verbose: bool
If True, output is verbose
multi_sorting_comparison: MultiSortingComparison
MultiSortingComparison object with the multiple sorter comparison
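
The two spiketrain_mode options can be sketched with set operations, assuming for simplicity that matched spikes land on identical frames (the real implementation matches spikes within delta_time):

```python
def consensus_spike_train(st1, st2, mode='union'):
    # 'union' keeps every spike seen by either sorter;
    # 'intersection' keeps only spikes both sorters agree on.
    s1, s2 = set(st1), set(st2)
    frames = s1 | s2 if mode == 'union' else s1 & s2
    return sorted(frames)
```
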
spikecomparison.compare_sorter_to_ground_truth(gt_sorting, tested_sorting, gt_name=None, tested_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, chance_score=0.1, well_detected_score=0.8, redundant_score=0.2, overmerged_score=0.2, exhaustive_gt=True, match_mode='hungarian', n_jobs=-1, compute_labels=False, compute_misclassifications=False, verbose=False)

Compares a sorter to a ground truth.

  • Spike trains are matched based on their agreement scores
  • Individual spikes are labelled as true positives (TP), false negatives (FN), false positives 1 (FP), misclassifications (CL)

It also allows one to compute the performance and the confusion matrix.

gt_sorting: SortingExtractor
The ground-truth sorting for the comparison
tested_sorting: SortingExtractor
The tested sorting for the comparison
gt_name: str
The name of the ground-truth sorting
tested_name: str
The name of the tested sorting
delta_time: float
Number of ms to consider coincident spikes (default 0.4 ms)
sampling_frequency: float
Optional sampling frequency in Hz when not included in sorting
match_score: float
Minimum agreement score to match units (default 0.5)
chance_score: float
Minimum agreement score for a possible match (default 0.1)
redundant_score: float
Agreement score above which units are redundant (default 0.2)
overmerged_score: float
Agreement score above which units can be overmerged (default 0.2)
well_detected_score: float
Agreement score above which units are well detected (default 0.8)
exhaustive_gt: bool (default True)
Tells whether the ground truth is “exhaustive” or not, i.e. whether the GT contains all possible units. It allows more performance measurements. For instance, MEArec simulated datasets have exhaustive_gt=True
match_mode: ‘hungarian’, or ‘best’
The matching used for counting: ‘hungarian’ or ‘best match’.
n_jobs: int
Number of cores to use in parallel. Uses all available if -1
compute_labels: bool
If True, labels are computed at instantiation (default False)
compute_misclassifications: bool
If True, misclassifications are computed at instantiation (default False)
verbose: bool
If True, output is verbose
sorting_comparison: SortingComparison
The SortingComparison object
class spikecomparison.GroundTruthComparison(gt_sorting, tested_sorting, gt_name=None, tested_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, well_detected_score=0.8, redundant_score=0.2, overmerged_score=0.2, chance_score=0.1, exhaustive_gt=False, n_jobs=-1, match_mode='hungarian', compute_labels=False, compute_misclassifications=False, verbose=False)

Class to compare a sorter to ground truth (GT)

This class can:
  • compute a “match” between gt_sorting and tested_sorting
  • compute the score label (TP, FN, CL, FP) for each spike
  • count, for each GT unit, the total of each (TP, FN, CL, FP) into a DataFrame GroundTruthComparison.count
  • compute the confusion matrix .get_confusion_matrix()
  • compute some performance metrics with several strategies based on the count score by unit
  • count well detected units
  • count false positive detected units
  • count redundant units
  • count overmerged units
  • summarize all of this
count_bad_units()

See get_bad_units

count_false_positive_units(redundant_score=None)

See get_false_positive_units().

count_overmerged_units(overmerged_score=None)

See get_overmerged_units().

count_redundant_units(redundant_score=None)

See get_redundant_units().

count_well_detected_units(well_detected_score)

Counts how many units are well detected. Keyword arguments are the same as get_well_detected_units.

get_bad_units()

Returns the list of “bad units”.

“bad units” are defined as units in tested that are not in the best match list of GT units.

So it is the union of “false positive units” + “redundant units”.

Requires exhaustive_gt=True

get_confusion_matrix()

Computes the confusion matrix.

confusion_matrix: pandas.DataFrame
The confusion matrix
get_false_positive_units(redundant_score=None)

Returns the list of “false positive units” from tested_sorting.

“false positive units” are defined as units in tested that are not matched at all in GT units.

Requires exhaustive_gt=True

redundant_score: float (default 0.2)
The agreement score below which tested units are counted as “false positive” (and not “redundant”).
get_overmerged_units(overmerged_score=None)

Returns “overmerged units”

“overmerged units” are defined as units in tested that match more than one GT unit with an agreement score larger than overmerged_score.

overmerged_score: float (default 0.2)
Tested units with 2 or more agreement scores above ‘overmerged_score’ are counted as “overmerged”.
get_performance(method='by_unit', output='pandas')
Get performance rates with several methods:
  • ‘raw_count’ : just render the raw count table
  • ‘by_unit’ : render perf as rates unit by unit of the GT
  • ‘pooled_with_average’ : compute rates unit by unit and average
method: str
‘raw_count’, ‘by_unit’, or ‘pooled_with_average’
output: str
‘pandas’ or ‘dict’
perf: pandas dataframe/series (or dict)
dataframe/series (based on ‘output’) with performance entries
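
The per-unit rates can be sketched from the spike-level counts (a minimal illustration; the exact metric names and formulas used by spikecomparison may differ):

```python
def performance_from_counts(tp, fn, fp):
    # Standard detection rates derived from true positives (TP),
    # false negatives (FN) and false positives (FP) for one GT unit.
    return {
        'accuracy': tp / (tp + fn + fp),
        'recall': tp / (tp + fn),       # fraction of GT spikes recovered
        'precision': tp / (tp + fp),    # fraction of detected spikes correct
    }
```
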
get_redundant_units(redundant_score=None)

Returns “redundant units”

“redundant units” are defined as units in tested that match a GT unit with a high agreement score but are not the best match. In other words, they correspond to GT units detected twice or more.

redundant_score: float (default 0.2)
The agreement score above which tested units are counted as “redundant” (and not “false positive”).
get_well_detected_units(well_detected_score=None)

Returns the list of “well detected units” from tested_sorting.

“well detected units” are defined as units in tested that are well matched to GT units.

well_detected_score: float (default 0.8)
The agreement score above which tested units are counted as “well detected”.
print_performance(method='pooled_with_average')

Print performance with the selected method

print_summary(well_detected_score=None, redundant_score=None, overmerged_score=None)
Print a global performance summary that depends on the context:
  • exhaustive= True/False
  • how many gt units (one or several)

This summary mixes several performance metrics.

class spikecomparison.SymmetricSortingComparison(sorting1, sorting2, sorting1_name=None, sorting2_name=None, delta_time=0.4, sampling_frequency=None, match_score=0.5, chance_score=0.1, n_jobs=-1, verbose=False)

Class for symmetric comparison of two sorters when no ground-truth assumption is made.

get_agreement_fraction(unit1=None, unit2=None)
get_best_unit_match1(unit1)
get_best_unit_match2(unit2)
get_mapped_sorting1()

Returns a MappedSortingExtractor for sorting 1.

The returned MappedSortingExtractor.get_unit_ids returns the unit_ids of sorting 1.

The returned MappedSortingExtractor.get_mapped_unit_ids returns the mapped unit_ids of sorting 2 to the units of sorting 1 (if units are not mapped they are labeled as -1).

The returned MappedSortingExtractor.get_unit_spikeTrains returns the spike trains of sorting 2 mapped to the unit_ids of sorting 1.

get_mapped_sorting2()

Returns a MappedSortingExtractor for sorting 2.

The returned MappedSortingExtractor.get_unit_ids returns the unit_ids of sorting 2.

The returned MappedSortingExtractor.get_mapped_unit_ids returns the mapped unit_ids of sorting 1 to the units of sorting 2 (if units are not mapped they are labeled as -1).

The returned MappedSortingExtractor.get_unit_spikeTrains returns the spike trains of sorting 1 mapped to the unit_ids of sorting 2.

get_matching_event_count(unit1, unit2)
get_matching_unit_list1(unit1)
get_matching_unit_list2(unit2)
class spikecomparison.GroundTruthStudy(study_folder=None)
aggregate_count_units(well_detected_score=None, redundant_score=None, overmerged_score=None)
aggregate_dataframes(copy_into_folder=True, **karg_thresh)
aggregate_performance_by_units()
aggregate_run_times()
concat_all_snr()
copy_sortings()
classmethod create(study_folder, gt_dict)
get_ground_truth(rec_name=None)
get_recording(rec_name=None)
get_sorting(sort_name, rec_name=None)
get_units_snr(rec_name=None, **snr_kargs)

Load or compute units SNR for a given recording.

run_comparisons(exhaustive_gt=False, **kwargs)
run_sorters(sorter_list, sorter_params={}, mode='keep', engine='loop', engine_kwargs={}, verbose=False, run_sorter_kwargs={'parallel': False})
scan_folder()

Module spikeinterface.widgets

spikewidgets.plot_timeseries(recording, channel_ids=None, trange=None, color_groups=False, color=None, figure=None, ax=None)

Plots recording timeseries.

recording: RecordingExtractor
The recording extractor object
channel_ids: list
The channel ids to display.
trange: list
List with start time and end time
color_groups: bool
If True groups are plotted with different colors
color: matplotlib color, default: None
The color used to draw the traces.
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
W: TimeseriesWidget
The output widget
spikewidgets.plot_electrode_geometry(recording, color='C0', label_color='r', figure=None, ax=None)

Plots electrode geometry.

recording: RecordingExtractor
The recording extractor object
color: matplotlib color
The color of the electrodes
label_color: matplotlib color
The color of the channel label when clicking
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
W: ElectrodeGeometryWidget
The output widget
spikewidgets.plot_spectrum(recording, channels=None, trange=None, freqrange=None, color_groups=False, color='steelblue', nfft=256, figure=None, ax=None)

Plots the power spectrum of the recording.

recording: RecordingExtractor
The recording extractor object
channels: list
The channels to show
trange: list
List with start time and end time
freqrange: list
List with start frequency and end frequency
color_groups: bool
If True groups are plotted with different colors
color: matplotlib color
The color to be used
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
W: SpectrumWidget
The output widget
spikewidgets.plot_spectrogram(recording, channel, trange=None, freqrange=None, cmap='viridis', nfft=256, figure=None, ax=None)

Plots the spectrogram of the specified channel.

recording: RecordingExtractor
The recording extractor object
channel: int
The channel to plot spectrogram of
trange: list
List with start time and end time
freqrange: list
List with start frequency and end frequency
cmap: matplotlib colormap
The colormap to be used
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
W: SpectrogramWidget
The output widget
spikewidgets.plot_activity_map(recording, channel_ids=None, trange=None, activity='rate', log=False, cmap='viridis', background='on', label_color='r', transpose=False, frame=False, colorbar=False, colorbar_bbox=None, colorbar_orientation='vertical', colorbar_width=0.02, ax=None, figure=None, **activity_kwargs)

Plots spike rate (estimated using simple threshold detector) as 2D activity map.

recording: RecordingExtractor
The recording extractor object
channel_ids: list
The channel ids to display
trange: list
List with start time and end time
activity: str
‘rate’ or ‘amplitude’. If ‘rate’ the channel spike rate is used. If ‘amplitude’ the spike amplitude is used
log: bool
If True, log scale is used
cmap: matplotlib colormap
The colormap to be used (default ‘viridis’)
background: bool
If True, a background is added in between electrodes
transpose: bool, optional, default: False
Swap x and y channel coordinates if True
frame: bool, optional, default: False
Draw a frame around the array if True
colorbar: bool
If True, a colorbar is displayed
colorbar_bbox: bbox
Bbox (x,y,w,h) in figure coordinates to plot colorbar
colorbar_orientation: str
‘vertical’ or ‘horizontal’
colorbar_width: float
Width of colorbar in figure coordinates (default 0.02)
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created

activity_kwargs: keyword arguments for st.postprocessing.compute_channel_spiking_activity()

W: ActivityMapWidget
The output widget
spikewidgets.plot_rasters(sorting, sampling_frequency=None, unit_ids=None, trange=None, color='k', figure=None, ax=None)

Plots spike train rasters.

sorting: SortingExtractor
The sorting extractor object
sampling_frequency: float
The sampling frequency (if not in the sorting extractor)
unit_ids: list
List of unit ids
trange: list
List with start time and end time
color: matplotlib color
The color to be used
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
W: RasterWidget
The output widget
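A raster plot is simply each unit's spike frames converted to seconds on a common time axis. A small sketch of that conversion (the frame values and unit ids below are made up for illustration):

```python
import numpy as np

# spike trains in frames, as a SortingExtractor would return them
fs = 30000.0
spike_frames = {0: np.array([3000, 18000, 45000]),
                1: np.array([9000, 60000])}

# convert each train to seconds: the x-axis of a raster plot
spike_times = {u: frames / fs for u, frames in spike_frames.items()}
```

Each spike_times[u] array is then drawn as a row of ticks, e.g. with matplotlib's eventplot.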
spikewidgets.plot_autocorrelograms(sorting, sampling_frequency=None, unit_ids=None, bin_size=2, window=50, figure=None, ax=None, axes=None)

Plots spike train auto-correlograms.

sorting: SortingExtractor
The sorting extractor object
sampling_frequency: float
The sampling frequency (if not in the sorting extractor)
unit_ids: list
List of unit ids
bin_size: float
Bin size in s
window: float
Window size in s
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
axes: list of matplotlib axes
The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored
W: AutoCorrelogramsWidget
The output widget
spikewidgets.plot_crosscorrelograms(sorting, sampling_frequency=None, unit_ids=None, bin_size=1, window=10, figure=None, ax=None, axes=None)

Plots spike train cross-correlograms.

sorting: SortingExtractor
The sorting extractor object
sampling_frequency: float
The sampling frequency (if not in the sorting extractor)
unit_ids: list
List of unit ids
bin_size: float
Bin size in s
window: float
Window size in s
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
axes: list of matplotlib axes
The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored
W: CrossCorrelogramsWidget
The output widget
spikewidgets.plot_isi_distribution(sorting, sampling_frequency=None, unit_ids=None, bins=10, window=1, figure=None, ax=None, axes=None)

Plots spike train ISI distribution.

sorting: SortingExtractor
The sorting extractor object
sampling_frequency: float
The sampling frequency (if not in the sorting extractor)
unit_ids: list
List of unit ids
bins: int
Number of bins
window: float
Window size in s
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
axes: list of matplotlib axes
The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored
W: ISIDistributionWidget
The output widget
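The ISI distribution is a histogram of the intervals between consecutive spikes, up to a maximum window. A numpy sketch (the isi_histogram helper is illustrative, not the library's implementation):

```python
import numpy as np

def isi_histogram(spike_times, bins, window):
    """Histogram of inter-spike intervals up to `window` seconds."""
    isis = np.diff(np.sort(spike_times))
    counts, edges = np.histogram(isis[isis <= window], bins=bins,
                                 range=(0, window))
    return counts, edges

# illustrative spike times in seconds
train = np.array([0.0, 0.012, 0.030, 0.031, 0.100])
counts, edges = isi_histogram(train, bins=10, window=0.1)
```

A pronounced count in the first bins near zero would indicate refractory-period violations.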
spikewidgets.plot_unit_waveforms(recording, sorting, channel_ids=None, unit_ids=None, channel_locs=True, radius=None, max_channels=None, plot_templates=True, show_all_channels=True, color='k', lw=2, axis_equal=False, plot_channels=False, set_title=True, figure=None, ax=None, axes=None, **waveforms_kwargs)

Plots unit waveforms.

recording: RecordingExtractor
The recording extractor object
sorting: SortingExtractor
The sorting extractor object
channel_ids: list
The channel ids to display
unit_ids: list
List of unit ids.
max_channels: int
Maximum number of largest channels to plot waveform
channel_locs: bool
If True, channel locations are used to display the waveforms. If False, waveforms are displayed in vertical order (default)
plot_templates: bool
If True, templates are plotted over the waveforms
radius: float
If not None, all channels within a circle around the peak waveform will be displayed. Ignores max_spikes_per_unit
set_title: bool
Create a plot title with the unit number if True.
plot_channels: bool
Plot channel locations below traces, only used if channel_locs is True
axis_equal: bool
Equal aspect ratio for x and y axis, to visualise the array geometry to scale
lw: float
Line width for the traces.
color: matplotlib color or list of colors
Color(s) of traces.
show_all_channels: bool
Show the whole probe if True, or only selected channels if False
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
axes: list of matplotlib axes
The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored

waveforms_kwargs: keyword arguments for st.postprocessing.get_unit_waveforms()

W: UnitWaveformsWidget
The output widget
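Under the hood, unit waveforms are snippets of the traces cut around each spike frame, and the template is their per-unit average. A numpy sketch (the extract_waveforms helper and cut-out sizes are illustrative, not the signature of st.postprocessing.get_unit_waveforms()):

```python
import numpy as np

def extract_waveforms(traces, spike_frames, n_before=30, n_after=30):
    """Cut snippets of shape (n_channels, n_before + n_after) around
    each spike frame; spikes too close to the edges are skipped."""
    n_frames = traces.shape[1]
    snippets = [traces[:, f - n_before:f + n_after]
                for f in spike_frames
                if f - n_before >= 0 and f + n_after <= n_frames]
    return np.stack(snippets)      # (n_spikes, n_channels, n_samples)

rng = np.random.default_rng(1)
traces = rng.normal(0, 1, (4, 10000))
wfs = extract_waveforms(traces, spike_frames=[100, 5000, 9990])
# the last spike is skipped: its cut-out would run past the end
template = wfs.mean(axis=0)        # per-unit average = the template
```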
spikewidgets.plot_unit_templates(recording, sorting, channel_ids=None, unit_ids=None, max_channels=None, channel_locs=True, radius=None, show_all_channels=True, color='k', lw=2, axis_equal=False, plot_channels=False, set_title=True, figure=None, ax=None, axes=None, **template_kwargs)

Plots unit templates.

recording: RecordingExtractor
The recording extractor object
sorting: SortingExtractor
The sorting extractor object
channel_ids: list
The channel ids to display
unit_ids: list
List of unit ids.
max_channels: int
Maximum number of largest channels to plot waveform
channel_locs: bool
If True, channel locations are used to display the waveforms. If False, waveforms are displayed in vertical order (default)
radius: float
If not None, all channels within a circle around the peak waveform will be displayed. Ignores max_spikes_per_unit
set_title: bool
Create a plot title with the unit number if True
plot_channels: bool
Plot channel locations below traces, only used if channel_locs is True
axis_equal: bool
Equal aspect ratio for x and y axis, to visualise the array geometry to scale
lw: float
Line width for the traces.
color: matplotlib color or list of colors
Color(s) of traces.
show_all_channels: bool
Show the whole probe if True, or only selected channels if False
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
axes: list of matplotlib axes
The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored

template_kwargs: keyword arguments for st.postprocessing.get_unit_templates()

W: UnitWaveformsWidget
The output widget
spikewidgets.plot_unit_template_maps(recording, sorting, channel_ids=None, unit_ids=None, peak='neg', log=False, ncols=10, background='on', cmap='viridis', label_color='r', figure=None, ax=None, axes=None, **templates_kwargs)

Plots unit template maps.

recording: RecordingExtractor
The recording extractor object
sorting: SortingExtractor
The sorting extractor object
channel_ids: list
The channel ids to display
unit_ids: list
List of unit ids.
peak: str
‘neg’, ‘pos’ or ‘both’
log: bool
If True, log scale is used
ncols: int
Number of columns if multiple units are displayed
background: str
‘on’ or ‘off’
cmap: matplotlib colormap
The colormap to be used (default ‘viridis’)
label_color: matplotlib color
Color to display channel name upon click
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
axes: list of matplotlib axes
The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored

templates_kwargs: keyword arguments for st.postprocessing.get_unit_templates()

W: ActivityMapWidget
The output widget
spikewidgets.plot_amplitudes_distribution(recording, sorting, unit_ids=None, max_spikes_per_unit=100, figure=None, ax=None, axes=None)

Plots waveform amplitudes distribution.

recording: RecordingExtractor
The recording extractor object
sorting: SortingExtractor
The sorting extractor object
unit_ids: list
List of unit ids
max_spikes_per_unit: int
Maximum number of spikes to display per unit
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
axes: list of matplotlib axes
The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored
W: AmplitudeDistributionWidget
The output widget
spikewidgets.plot_amplitudes_timeseries(recording, sorting, unit_ids=None, max_spikes_per_unit=100, figure=None, ax=None, axes=None)

Plots waveform amplitudes timeseries.

recording: RecordingExtractor
The recording extractor object
sorting: SortingExtractor
The sorting extractor object
unit_ids: list
List of unit ids
max_spikes_per_unit: int
Maximum number of spikes to display per unit.
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
axes: list of matplotlib axes
The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored
W: AmplitudeTimeseriesWidget
The output widget
spikewidgets.plot_confusion_matrix(gt_comparison, count_text=True, unit_ticks=True, ax=None, figure=None)

Plots sorting comparison confusion matrix.

gt_comparison: GroundTruthComparison
The ground truth sorting comparison object
count_text: bool
If True counts are displayed as text
unit_ticks: bool
If True unit tick labels are displayed
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
W: ConfusionMatrixWidget
The output widget
spikewidgets.plot_agreement_matrix(sorting_comparison, ordered=True, count_text=True, unit_ticks=True, ax=None, figure=None)

Plots sorting comparison agreement matrix.

sorting_comparison: GroundTruthComparison or SymmetricSortingComparison
The sorting comparison object, symmetric or not.
ordered: bool
If True, units are ordered by best agreement scores, so that agreement appears along the diagonal.
count_text: bool
If True counts are displayed as text
unit_ticks: bool
If True unit tick labels are displayed
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
W: AgreementMatrixWidget
The output widget
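The agreement score behind this matrix counts spikes of two units that coincide within a small tolerance, scored as n_match / (n1 + n2 - n_match). A sketch with a simple greedy matcher (the agreement_score helper and its matching rule are illustrative; the comparison classes use their own matching procedure):

```python
import numpy as np

def agreement_score(train1, train2, delta=0.4e-3):
    """Agreement between two spike trains (times in seconds): spikes
    match within +/- delta; score = n_match / (n1 + n2 - n_match)."""
    used = np.zeros(len(train2), dtype=bool)
    n_match = 0
    for t in train1:
        d = np.abs(train2 - t)
        d[used] = np.inf            # each spike matched at most once
        j = int(np.argmin(d))
        if d[j] <= delta:
            used[j] = True
            n_match += 1
    return n_match / (len(train1) + len(train2) - n_match)

# two matches out of 4 and 3 spikes: score = 2 / (4 + 3 - 2)
a = np.array([0.100, 0.200, 0.300, 0.400])
b = np.array([0.1001, 0.2002, 0.350])
score = agreement_score(a, b)
```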
spikewidgets.plot_sorting_performance(gt_sorting_comparison, property_name=None, metric='accuracy', markersize=10, marker='.', figure=None, ax=None)

Plots sorting performance for each ground-truth unit.

gt_sorting_comparison: GroundTruthComparison
The ground truth sorting comparison object
property_name: str
The property of the sorting extractor to use as x-axis (e.g. snr). If None, no property is used.
metric: str
The performance metric. ‘accuracy’ (default), ‘precision’, ‘recall’, ‘miss rate’, etc.
markersize: int
The size of the marker
marker: str
The matplotlib marker to use (default ‘.’)
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
W: SortingPerformanceWidget
The output widget
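Given the matched-spike counts for one ground-truth unit, the metrics plotted by this widget reduce to simple ratios. A sketch (the tp/fp/fn counts below are illustrative numbers, not library output):

```python
# true positives, false positives, and misses for one ground-truth unit
tp, fp, fn = 90, 5, 10

accuracy  = tp / (tp + fp + fn)    # fraction of all events matched
precision = tp / (tp + fp)         # fraction of sorted spikes that are real
recall    = tp / (tp + fn)         # fraction of true spikes recovered
miss_rate = fn / (tp + fn)
```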
spikewidgets.plot_multicomp_graph(multi_sorting_comparison, draw_labels=False, node_cmap='viridis', edge_cmap='hot_r', alpha_edges=0.7, colorbar=False, figure=None, ax=None)

Plots multi sorting comparison graph.

multi_sorting_comparison: MultiSortingComparison
The multi sorting comparison object
draw_labels: bool
If True unit labels are shown
node_cmap: matplotlib colormap
The colormap to be used for the nodes (default ‘viridis’)
edge_cmap: matplotlib colormap
The colormap to be used for the edges (default ‘hot_r’)
alpha_edges: float
Alpha value for edges
colorbar: bool
If True a colorbar for the edges is plotted
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
W: MultiCompGraphWidget
The output widget
spikewidgets.plot_multicomp_agreement(multi_sorting_comparison, plot_type='pie', cmap='YlOrRd', figure=None, ax=None)

Plots multi sorting comparison agreement as pie or bar plot.

multi_sorting_comparison: MultiSortingComparison
The multi sorting comparison object
plot_type: str
‘pie’ or ‘bar’
cmap: matplotlib colormap
The colormap to be used (default ‘YlOrRd’)
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
The axis to be used. If not given an axis is created
W: MultiCompGraphWidget
The output widget
spikewidgets.plot_multicomp_agreement_by_sorter(multi_sorting_comparison, plot_type='pie', cmap='YlOrRd', figure=None, ax=None, axes=None, show_legend=True)

Plots multi sorting comparison agreement as pie or bar plot.

multi_sorting_comparison: MultiSortingComparison
The multi sorting comparison object
plot_type: str
‘pie’ or ‘bar’
cmap: matplotlib colormap
The colormap to be used (default ‘YlOrRd’)
figure: matplotlib figure
The figure to be used. If not given a figure is created
ax: matplotlib axis
A single axis used to create a matplotlib gridspec for the individual plots. If None, an axis will be created.
axes: list of matplotlib axes
The axes to be used for the individual plots. If not given the required axes are created. If provided, the ax and figure parameters are ignored.
show_legend: bool
Show the legend in the last axes (default True).
W: MultiCompGraphWidget
The output widget