Getting started tutorial
In this introductory example, you will see how to use SpikeInterface to perform a full electrophysiology analysis. We will first load some simulated data, then perform some preprocessing, run a couple of spike sorting algorithms, inspect and validate the results, export to Phy, and compare the spike sorters.
import matplotlib.pyplot as plt
On its own, the spikeinterface module imports only the spikeinterface.core submodule, which is not useful for an end user
import spikeinterface
We need to import the different submodules one by one (preferred). There are several submodules:

- extractors: file IO
- preprocessing: preprocessing
- sorters: Python wrappers of spike sorters
- postprocessing: postprocessing
- qualitymetrics: quality metrics on units found by sorters
- comparison: comparison of spike sorting output
- widgets: visualization
import spikeinterface as si # import core only
import spikeinterface.extractors as se
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc
import spikeinterface.widgets as sw
We can also import all submodules at once. This internally imports core + extractors + preprocessing + sorters + postprocessing + qualitymetrics + comparison + widgets + exporters. This is useful for notebooks, but it is a heavier import because many more dependencies are imported internally (scipy/sklearn/networkx/matplotlib/h5py…).
import spikeinterface.full as si
First, let’s download a simulated dataset from the https://gin.g-node.org/NeuralEnsemble/ephy_testing_data repo.
Then we can open it. Note that a MEArec simulated file contains both a “recording” and a “sorting” object.
local_path = si.download_dataset(remote_path='mearec/mearec_test_10s.h5')
recording, sorting_true = se.read_mearec(local_path)
print(recording)
print(sorting_true)
[INFO] Cloning dataset to Dataset(/home/docs/spikeinterface_datasets/ephy_testing_data)
[INFO] Attempting to clone from https://gin.g-node.org/NeuralEnsemble/ephy_testing_data to /home/docs/spikeinterface_datasets/ephy_testing_data
[INFO] Start enumerating objects
[INFO] Start counting objects
[INFO] Start compressing objects
[INFO] Start receiving objects
[INFO] Start resolving deltas
[INFO] Completed clone attempts for Dataset(/home/docs/spikeinterface_datasets/ephy_testing_data)
install(ok): /home/docs/spikeinterface_datasets/ephy_testing_data (dataset)
get(ok): mearec/mearec_test_10s.h5 (file) [from origin...]
MEArecRecordingExtractor: 32 channels - 1 segments - 32.0kHz - 10.000s
file_path: /home/docs/spikeinterface_datasets/ephy_testing_data/mearec/mearec_test_10s.h5
MEArecSortingExtractor: 10 units - 1 segments - 32.0kHz
file_path: /home/docs/spikeinterface_datasets/ephy_testing_data/mearec/mearec_test_10s.h5
recording is a BaseRecording object, which extracts information about channel ids, channel locations (if present), the sampling frequency of the recording, and the extracellular traces. sorting_true is a BaseSorting object, which contains spike-sorting related information, including unit ids, spike trains, etc. Since the data are simulated, sorting_true has ground-truth information about the spiking activity of each unit.
Let’s use the spikeinterface.widgets module to visualize the traces and the raster plots.
w_ts = sw.plot_timeseries(recording, time_range=(0, 5))
w_rs = sw.plot_rasters(sorting_true, time_range=(0, 5))
This is how you retrieve info from a BaseRecording…
channel_ids = recording.get_channel_ids()
fs = recording.get_sampling_frequency()
num_chan = recording.get_num_channels()
num_seg = recording.get_num_segments()
print('Channel ids:', channel_ids)
print('Sampling frequency:', fs)
print('Number of channels:', num_chan)
print('Number of segments:', num_seg)
Channel ids: ['1' '2' '3' '4' '5' '6' '7' '8' '9' '10' '11' '12' '13' '14' '15' '16'
'17' '18' '19' '20' '21' '22' '23' '24' '25' '26' '27' '28' '29' '30'
'31' '32']
Sampling frequency: 32000.0
Number of channels: 32
Number of segments: 1
…and a BaseSorting
num_seg = sorting_true.get_num_segments()
unit_ids = sorting_true.get_unit_ids()
spike_train = sorting_true.get_unit_spike_train(unit_id=unit_ids[0])
print('Number of segments:', num_seg)
print('Unit ids:', unit_ids)
print('Spike train of first unit:', spike_train)
Number of segments: 1
Unit ids: ['#0' '#1' '#2' '#3' '#4' '#5' '#6' '#7' '#8' '#9']
Spike train of first unit: [ 5197 8413 13124 15420 15497 15668 16929 19607 55107 59060
60958 105193 105569 117082 119243 119326 122293 122877 132413 139498
147402 147682 148271 149857 165454 170569 174319 176237 183598 192278
201535 217193 219715 221226 222967 223897 225338 243206 243775 248754
253184 253308 265132 266197 266662 283149 284716 287592 304025 305286
310438 310775 318460]
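Note that the spike train above is expressed in sample frames, not seconds. As a quick sketch (with toy values, independent of the tutorial's dataset), dividing by the sampling frequency converts frames to seconds:

```python
import numpy as np

# Toy spike frames and the dataset's sampling frequency (32 kHz)
fs = 32000.0
spike_frames = np.array([5197, 8413, 13124])

# Convert sample frames to spike times in seconds
spike_times_s = spike_frames / fs
print(spike_times_s)  # first spike at ~0.162 s
```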
SpikeInterface internally uses ProbeInterface to handle Probe and ProbeGroup objects. Any probe in the probeinterface collection can be downloaded and set to a Recording object. In this case, the MEArec dataset already handles a Probe, and we don’t need to set one.
probe = recording.get_probe()
print(probe)
from probeinterface.plotting import plot_probe
plot_probe(probe)
Probe - 32ch - 1shanks
(<matplotlib.collections.PolyCollection object at 0x7f38a172d5e0>, <matplotlib.collections.PolyCollection object at 0x7f38a16e3ee0>)
Using the spikeinterface.preprocessing module, you can perform preprocessing on the recordings. Each preprocessing function also returns a BaseRecording, which makes it easy to build pipelines. Here, we filter the recording and apply a common median reference (CMR).
All these preprocessing steps are “lazy”: the computation is done on demand when we call
recording.get_traces(…) or when we save the object to disk.
recording_cmr = recording
recording_f = si.bandpass_filter(recording, freq_min=300, freq_max=6000)
print(recording_f)
recording_cmr = si.common_reference(recording_f, reference='global', operator='median')
print(recording_cmr)
# this computes and saves the recording after applying the preprocessing chain
recording_preprocessed = recording_cmr.save(format='binary')
print(recording_preprocessed)
BandpassFilterRecording: 32 channels - 1 segments - 32.0kHz - 10.000s
CommonReferenceRecording: 32 channels - 1 segments - 32.0kHz - 10.000s
Use cache_folder=/tmp/spikeinterface_cache/tmpx0b_n3xq/QQ1HY2PH
write_binary_recording with n_jobs = 1 and chunk_size = None
BinaryFolderRecording: 32 channels - 1 segments - 32.0kHz - 10.000s
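Conceptually, a global common median reference subtracts, at each time point, the median across all channels. A minimal NumPy sketch of this idea (toy data, not the lazy, chunked SpikeInterface implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy traces: 1000 samples x 32 channels, plus a shared common-mode offset
traces = rng.normal(size=(1000, 32)) + 5.0

# Subtract the across-channel median at each sample (what
# reference='global', operator='median' does conceptually)
cmr_traces = traces - np.median(traces, axis=1, keepdims=True)

# After CMR the per-sample median across channels is ~0
print(np.abs(np.median(cmr_traces, axis=1)).max())
```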
Now you are ready to spike sort using the spikeinterface.sorters
module!
Let’s first check which sorters are implemented and which are installed
print('Available sorters', ss.available_sorters())
print('Installed sorters', ss.installed_sorters())
Available sorters ['combinato', 'hdsort', 'herdingspikes', 'ironclust', 'kilosort', 'kilosort2', 'kilosort2_5', 'kilosort3', 'klusta', 'mountainsort4', 'pykilosort', 'spykingcircus', 'spykingcircus2', 'tridesclous', 'tridesclous2', 'waveclus', 'waveclus_snippets', 'yass']
Installed sorters ['herdingspikes', 'mountainsort4', 'spykingcircus2', 'tridesclous', 'tridesclous2']
ss.installed_sorters() lists the sorters installed on the machine. We can see that we have HerdingSpikes and Tridesclous installed.
Spike sorters come with a set of parameters that users can change.
The available parameters are dictionaries and can be accessed with:
print(ss.get_default_sorter_params('herdingspikes'))
print(ss.get_default_sorter_params('tridesclous'))
{'clustering_bandwidth': 5.5, 'clustering_alpha': 5.5, 'clustering_n_jobs': -1, 'clustering_bin_seeding': True, 'clustering_min_bin_freq': 16, 'clustering_subset': None, 'left_cutout_time': 0.3, 'right_cutout_time': 1.8, 'detect_threshold': 20, 'probe_masked_channels': [], 'probe_inner_radius': 70, 'probe_neighbor_radius': 90, 'probe_event_length': 0.26, 'probe_peak_jitter': 0.2, 't_inc': 100000, 'num_com_centers': 1, 'maa': 12, 'ahpthr': 11, 'out_file_name': 'HS2_detected', 'decay_filtering': False, 'save_all': False, 'amp_evaluation_time': 0.4, 'spk_evaluation_time': 1.0, 'pca_ncomponents': 2, 'pca_whiten': True, 'freq_min': 300.0, 'freq_max': 6000.0, 'filter': True, 'pre_scale': True, 'pre_scale_value': 20.0, 'filter_duplicates': True}
{'freq_min': 400.0, 'freq_max': 5000.0, 'detect_sign': -1, 'detect_threshold': 5, 'common_ref_removal': False, 'nested_params': None, 'n_jobs': 1, 'total_memory': None, 'chunk_size': None, 'chunk_memory': None, 'chunk_duration': '1s', 'progress_bar': True}
Let’s run herdingspikes and change one of the parameters, say, the detect_threshold:
sorting_HS = ss.run_herdingspikes(recording=recording_preprocessed, detect_threshold=4)
print(sorting_HS)
# Generating new position and neighbor files from data file
# Not Masking any Channels
# Sampling rate: 32000
# Localization On
# Number of recorded channels: 32
# Analysing frames: 320000; Seconds: 10.0
# Frames before spike in cutout: 10
# Frames after spike in cutout: 58
# tcuts: 42 90
# tInc: 100000
# Detection completed, time taken: 0:00:00.741980
# Time per frame: 0:00:00.002319
# Time per sample: 0:00:00.000072
Loaded 836 spikes.
Fitting dimensionality reduction using all spikes...
...projecting...
...done
Clustering...
Clustering 836 spikes...
number of seeds: 13
seeds/job: 7
using 2 cpus
[Parallel(n_jobs=2)]: Using backend LokyBackend with 2 concurrent workers.
[Parallel(n_jobs=2)]: Done 2 out of 2 | elapsed: 2.3s finished
Number of estimated units: 6
HerdingspikesSortingExtractor: 6 units - 1 segments - 32.0kHz
file_path: /home/docs/checkouts/readthedocs.org/user_builds/spikeinterface/checkouts/0.96.1/examples/getting_started/herdingspikes_output/HS2_sorted.hdf5
Alternatively, we can pass a full dictionary containing the parameters:
other_params = ss.get_default_sorter_params('herdingspikes')
other_params['detect_threshold'] = 5
# parameters set by params dictionary
sorting_HS_2 = ss.run_herdingspikes(recording=recording_preprocessed, output_folder="redringspikes_output2",
**other_params)
print(sorting_HS_2)
# Generating new position and neighbor files from data file
# Not Masking any Channels
# Sampling rate: 32000
# Localization On
# Number of recorded channels: 32
# Analysing frames: 320000; Seconds: 10.0
# Frames before spike in cutout: 10
# Frames after spike in cutout: 58
# tcuts: 42 90
# tInc: 100000
# Detection completed, time taken: 0:00:00.725700
# Time per frame: 0:00:00.002268
# Time per sample: 0:00:00.000071
Loaded 826 spikes.
Fitting dimensionality reduction using all spikes...
...projecting...
...done
Clustering...
Clustering 826 spikes...
number of seeds: 13
seeds/job: 7
using 2 cpus
[Parallel(n_jobs=2)]: Using backend LokyBackend with 2 concurrent workers.
[Parallel(n_jobs=2)]: Done 2 out of 2 | elapsed: 0.0s finished
Number of estimated units: 6
HerdingspikesSortingExtractor: 6 units - 1 segments - 32.0kHz
file_path: /home/docs/checkouts/readthedocs.org/user_builds/spikeinterface/checkouts/0.96.1/examples/getting_started/redringspikes_output2/HS2_sorted.hdf5
Let’s run tridesclous as well, with default parameters:
sorting_TDC = ss.run_tridesclous(recording=recording_preprocessed)
sorting_HS and sorting_TDC are BaseSorting objects. We can print the units found using:
print('Units found by herdingspikes:', sorting_HS.get_unit_ids())
print('Units found by tridesclous:', sorting_TDC.get_unit_ids())
Units found by herdingspikes: [0 1 2 3 4 5]
Units found by tridesclous: [0 1 2 3 4 5 6 7 8 9]
SpikeInterface provides an efficient way to extract waveform snippets from paired recording/sorting objects. The WaveformExtractor class samples some spikes (max_spikes_per_unit=500) for each cluster and stores them on disk. These per-cluster waveforms are helpful to compute the average waveform, or “template”, for each unit and then to compute, for example, quality metrics.
we_TDC = si.WaveformExtractor.create(recording_preprocessed, sorting_TDC, 'waveforms', remove_if_exists=True)
we_TDC.set_params(ms_before=3., ms_after=4., max_spikes_per_unit=500)
we_TDC.run_extract_waveforms(n_jobs=-1, chunk_size=30000)
print(we_TDC)
unit_id0 = sorting_TDC.unit_ids[0]
waveforms = we_TDC.get_waveforms(unit_id0)
print(waveforms.shape)
template = we_TDC.get_template(unit_id0)
print(template.shape)
WaveformExtractor: 32 channels - 10 units - 1 segments
before:96 after:128 n_per_units:500
(30, 224, 32)
(224, 32)
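The idea behind the shapes printed above can be sketched in plain NumPy: cut a window of nbefore + nafter samples around each spike frame, stack the snippets, and average them into a template. This is a toy illustration (made-up spike frames and channels), not the SpikeInterface implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 32000
n_samples, n_channels = fs * 10, 4
traces = rng.normal(scale=0.1, size=(n_samples, n_channels))

# Toy spikes: a fixed negative deflection on channel 1 at known frames
spike_frames = np.array([1000, 5000, 9000])
for f in spike_frames:
    traces[f, 1] -= 5.0

nbefore, nafter = 96, 128  # 3 ms and 4 ms at 32 kHz, as in set_params above
waveforms = np.stack(
    [traces[f - nbefore:f + nafter] for f in spike_frames]
)  # shape: (num_spikes, num_samples_per_waveform, num_channels)
template = waveforms.mean(axis=0)  # the unit's average waveform

print(waveforms.shape, template.shape)
```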
Once we have the WaveformExtractor object, we can post-process, validate, and curate the results. With the spikeinterface.postprocessing submodule, one can, for example, get waveforms, templates, maximum channels, and PCA scores, or export the data to Phy. Phy is a GUI for manual curation of the spike sorting output. To export to Phy, you can run:
from spikeinterface.exporters import export_to_phy
export_to_phy(we_TDC, './phy_folder_for_TDC',
compute_pc_features=False, compute_amplitudes=True)
write_binary_recording with n_jobs = 1 and chunk_size = None
Run:
phy template-gui /home/docs/checkouts/readthedocs.org/user_builds/spikeinterface/checkouts/0.96.1/examples/getting_started/phy_folder_for_TDC/params.py
Then you can run the template-gui with: phy template-gui phy_folder_for_TDC/params.py
and manually curate the results.
Quality metrics for the spike sorting output are very important to assess the spike sorting performance.
The spikeinterface.qualitymetrics module implements several quality metrics to assess the goodness of sorted units. Among those, for example, are the signal-to-noise ratio, the ISI violation ratio, the isolation distance, and many more.
These metrics are built on top of the WaveformExtractor class and return a dictionary with the unit ids as keys:
snrs = si.compute_snrs(we_TDC)
print(snrs)
isi_violations_ratio, isi_violations_rate, isi_violations_count = si.compute_isi_violations(we_TDC, isi_threshold_ms=1.5)
print(isi_violations_ratio)
print(isi_violations_rate)
print(isi_violations_count)
{0: 27.257294, 1: 24.215126, 2: 24.246317, 3: 27.081573, 4: 13.247756, 5: 9.540428, 6: 8.31927, 7: 8.689199, 8: 11.160343, 9: 8.443421}
{0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0, 7: 0.0, 8: 0.0, 9: 0.0}
{0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0, 7: 0.0, 8: 0.0, 9: 0.0}
{0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0}
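The core of the ISI-violation metric is simple: count inter-spike intervals shorter than the refractory-period threshold. A toy sketch (made-up spike frames, not the SpikeInterface implementation, which also normalizes these counts into a ratio and a rate):

```python
import numpy as np

fs = 32000.0
isi_threshold_ms = 1.5  # same threshold as passed above

# Toy spike train in sample frames; the last two spikes are ~0.5 ms apart
spike_frames = np.array([1000, 50000, 100000, 100016])

# Inter-spike intervals in milliseconds
isis_ms = np.diff(spike_frames) / fs * 1000.0
# An ISI below the refractory threshold counts as a violation
violations_count = int(np.sum(isis_ms < isi_threshold_ms))
print(violations_count)  # 1
```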
All these quality metrics can be computed in one shot and returned as a pandas.DataFrame:
metrics = si.compute_quality_metrics(we_TDC, metric_names=['snr', 'isi_violation', 'amplitude_cutoff'])
print(metrics)
snr isi_violations_ratio ... isi_violations_count amplitude_cutoff
0 27.257294 0.0 ... 0 0.008626
1 24.215126 0.0 ... 0 0.005074
2 24.246317 0.0 ... 0 0.144832
3 27.081573 0.0 ... 0 0.005176
4 13.247756 0.0 ... 0 0.006018
5 9.540428 0.0 ... 0 0.007188
6 8.319270 0.0 ... 0 0.005391
7 8.689199 0.0 ... 0 0.089688
8 11.160343 0.0 ... 0 0.002006
9 8.443421 0.0 ... 0 0.252680
[10 rows x 5 columns]
Quality metrics can also be used to automatically curate the spike sorting output. For example, you can select sorted units with an SNR above a certain threshold:
keep_mask = (metrics['snr'] > 7.5) & (metrics['isi_violations_rate'] < 0.01)
print(keep_mask)
keep_unit_ids = keep_mask[keep_mask].index.values
print(keep_unit_ids)
curated_sorting = sorting_TDC.select_units(keep_unit_ids)
print(curated_sorting)
0 True
1 True
2 True
3 True
4 True
5 True
6 True
7 True
8 True
9 True
dtype: bool
[0 1 2 3 4 5 6 7 8 9]
UnitsSelectionSorting: 10 units - 1 segments - 32.0kHz
The final part of this tutorial deals with comparing spike sorting outputs.
We can either (1) compare the spike sorting results with the ground-truth sorting sorting_true, (2) compare the outputs of two sorters (HerdingSpikes and Tridesclous), or (3) compare the outputs of multiple sorters:
comp_gt_TDC = sc.compare_sorter_to_ground_truth(gt_sorting=sorting_true, tested_sorting=sorting_TDC)
comp_TDC_HS = sc.compare_two_sorters(sorting1=sorting_TDC, sorting2=sorting_HS)
comp_multi = sc.compare_multiple_sorters(sorting_list=[sorting_TDC, sorting_HS],
name_list=['tdc', 'hs'])
When comparing with a ground-truth sorting extractor (1), you can get the sorting performance and plot a confusion matrix:
comp_gt_TDC.get_performance()
w_conf = sw.plot_confusion_matrix(comp_gt_TDC)
w_agr = sw.plot_agreement_matrix(comp_gt_TDC)
When comparing two sorters (2), we can see the matching of units between sorters. Units which are not matched have -1 as unit id:
comp_TDC_HS.hungarian_match_12
0 -1.0
1 -1.0
2 3.0
3 -1.0
4 -1.0
5 -1.0
6 -1.0
7 0.0
8 1.0
9 2.0
dtype: float64
or the reverse:
comp_TDC_HS.hungarian_match_21
0 7.0
1 8.0
2 9.0
3 2.0
4 -1.0
5 -1.0
dtype: float64
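The Hungarian match shown above is conceptually an optimal assignment on an agreement matrix (rows: units of sorter 1, columns: units of sorter 2), with poorly-agreeing pairs left unmatched. A toy sketch using SciPy's assignment solver — the agreement values and the 0.5 cutoff here are made up for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy agreement matrix: entry (i, j) is the fraction of spikes on which
# unit i of sorter 1 and unit j of sorter 2 agree
agreement = np.array([
    [0.95, 0.02, 0.00],
    [0.01, 0.03, 0.90],
    [0.00, 0.00, 0.05],
])

# The Hungarian algorithm finds the assignment maximizing total agreement
# (negate because linear_sum_assignment minimizes cost)
row_ind, col_ind = linear_sum_assignment(-agreement)

# Keep only matches above a minimum score; others get -1, as in the
# hungarian_match_12 table above
min_score = 0.5
match_12 = np.full(agreement.shape[0], -1)
for i, j in zip(row_ind, col_ind):
    if agreement[i, j] >= min_score:
        match_12[i] = j
print(match_12)  # [ 0  2 -1]: unit 0 -> 0, unit 1 -> 2, unit 2 unmatched
```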
When comparing multiple sorters (3), you can extract a SortingExtractor object with units in agreement between sorters. You can also plot a graph showing how the units are matched between the sorters.
sorting_agreement = comp_multi.get_agreement_sorting(minimum_agreement_count=2)
print('Units in agreement between TDC and HS:', sorting_agreement.get_unit_ids())
w_multi = sw.plot_multicomp_graph(comp_multi)
plt.show()
Units in agreement between TDC and HS: [2 7 8 9]
Total running time of the script: (5 minutes 14.748 seconds)