
Memory error on concatenating and slicing recording with OpenEphys #3576

Open
BovenE opened this issue Dec 11, 2024 · 3 comments
Labels
io Problems related to IO operations (memory, multiprocessing, etc.) question General question regarding SI

Comments


BovenE commented Dec 11, 2024

What is the best way to cut a multi-segment recording, slice it to keep only certain time windows across the recording, and concatenate those slices for sorting?
I have used frame_slice followed by concatenate_recordings, but the resulting recording does not complete spike sorting.

@alejoe91 alejoe91 added the question General question regarding SI label Dec 11, 2024
@alejoe91 (Member)

Can you share your script and the error you're getting?

@BovenE (Author) commented Dec 12, 2024

[Image: IMG-20241204-WA0030 — screenshot of the error]

Hi, this is my script and an image of the error:
import spikeinterface.full as si
print(f"SpikeInterface version: {si.__version__}")
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path
import os
import sys
import probeinterface as pi
from probeinterface.plotting import plot_probe, plot_probe_group
import spikeinterface.widgets as sw
import spikeinterface.extractors as se

sys.path.append('D:/Neuropixel/EB_analysis/EB_pipeline/src')

from preprocessing.preprocessing import preprocess

os.environ["OMP_NUM_THREADS"] = '1'

if __name__ == '__main__':

    # rec_folder, experiment, stream, probe_file, multishank_probe and the
    # trigger arrays are defined earlier in the pipeline
    raw_recording = si.read_openephys(rec_folder, experiment_names=experiment,
                                      stream_name=stream, load_sync_channel=True)  # , block_index=segment_idx
    pg = pi.ProbeGroup()
    rec_probe = pi.read_openephys(probe_file, stream_name=stream)
    pg.add_probe(rec_probe)

    # pi.write_prb(file="C:/probe_v2_zigzag.prb", probegroup=pg)

    # # to do: insert function to plot the probe
    # plot_probe(raw_rec_probe.get_probe(), with_contact_id=True)
    # # plt.xlim([200, 400])
    # plt.ylim([0, 200])

    # the way to find channel idx: raw_recording.channel_ids[idx_shank0]

    if multishank_probe:
        print('processing multishank probe')
        # raw_rec_probe = raw_recording.set_probe(rec_probe)
        raw_rec_probe = raw_recording.set_probegroup(pg, group_mode="by_shank")

        groups = raw_rec_probe.split_by(property="group")
    else:
        print('processing single shank probe')
        raw_rec_probe = raw_recording.set_probegroup(pg)

    # segmented_rec_1 = si.select_segment_recording(raw_rec_probe, segment_indices=0)
    segmented_rec_2 = si.select_segment_recording(raw_rec_probe, segment_indices=0)
    segmented_rec_4 = si.select_segment_recording(raw_rec_probe, segment_indices=1)
    segmented_rec_5 = si.select_segment_recording(raw_rec_probe, segment_indices=2)

    # segmented_rec_3 = si.select_segment_recording(raw_rec_probe, segment_indices=2)
    # segmented_rec_5 = si.select_segment_recording(raw_rec_probe, segment_indices=3)
    # segmented_rec_6 = si.select_segment_recording(raw_rec_probe, segment_indices=5)

    triggers_2_new = triggers_2 + segmented_rec_2.get_times()[-1]
    triggers_3_new = triggers_3 + segmented_rec_4.get_times()[-1]

    triggers = np.concatenate((triggers_1[:10], triggers_2[:10], triggers_3[:10]))

    concatenated_recording = si.concatenate_recordings([segmented_rec_2, segmented_rec_4, segmented_rec_5])  # , segmented_rec_5

delay = 3
sliced_recording1 = []
for trigger_i, t in enumerate(triggers):
    start_frame = int(t * 30000)
    end_frame = int(start_frame + (delay * 30000))

    slice_interval_rec_1 = concatenated_recording.frame_slice(start_frame, end_frame)
    sliced_recording1.append(slice_interval_rec_1)

testconc = si.concatenate_recordings(sliced_recording1)
# # cut_recording = motion_corrected.frame_slice(0, int(times_sub[-1] * 30000))

    
import spikeinterface.preprocessing as spre

# # rec_shift = spre.phase_shift(segmented_rec)

recs = testconc.split_by('group')  # segmented_rec.split_by('group')
if len(recs) == 3:
    print('NP v1.66')
    NP_shanks = 3
elif len(recs) == 4 or len(recs) == 2:
    print('NP v2')
    NP_shanks = 4


preprocessed = []

# split_by returns a dict keyed by group value, so index by key, not position
for group_key in recs:
    preprocessed.append(preprocess(recs[group_key], phase_shift=False, NP_shanks=NP_shanks))

aggr_rec = si.aggregate_channels(preprocessed)


sorter_params = si.get_default_sorter_params('kilosort4')
sorter_params['nblocks'] = 5
sorter_params['do_correction'] = True
sorter_params['skip_kilosort_preprocessing'] = True
sorter_params['nt'] = 91

sorter_params['dminx'] = 32
sorter_params['do_CAR'] = False
sorter_params['batch_size'] = 60000

sorter_params['Th_learned'] = 9

sorting_combined = si.run_sorter(
    sorter_name='kilosort4',
    recording=aggr_rec,
    output_folder=f"{base_folder}/{mouse}/{date}/{recording_id}/{sorting}/sorting_Sth_test000",
    **sorter_params,
)

@h-mayorquin (Collaborator)
Hi, you are getting a memory error. Can you clean up your script so we can test it?

@h-mayorquin h-mayorquin changed the title Slicing multisegment recording Memory error on concatenating and slicing recording with OpenEphys Dec 17, 2024
@h-mayorquin h-mayorquin added the io Problems related to IO operations (memory, multiprocessing, etc.) label Dec 17, 2024