A GUI that facilitates conducting experiments with multi-probe electrophysiology (Neuropixels), multichannel audio (Avisoft), and multi-camera video (Loopbio) acquisition. Developed for behavioral recording at the Princeton Neuroscience Institute, 2021-25 (Falkner/Murthy labs). Because of limitations in the proprietary acquisition software, recordings can only be performed on Windows. The data processing, analysis, and visualization branches of the GUI are platform-independent.
- Helvetica (download and install)
- Anaconda (and add it to PATH)
- git (and add it to PATH)
- ffmpeg (and add it to PATH)
- sox (and add it to PATH)
- sleap
- das
- vocalocator
- CoolTerm
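Several of the prerequisites above must be reachable on PATH. A quick way to verify this is a short Python check; this is a sketch, not part of the GUI, and the tool list below mirrors the PATH items above:

```python
import shutil

# Command-line tools the setup above expects on PATH
REQUIRED_TOOLS = ["git", "ffmpeg", "sox", "conda"]

def missing_tools(tools):
    """Return the subset of tools that cannot be found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    missing = missing_tools(REQUIRED_TOOLS)
    if missing:
        print("Missing from PATH:", ", ".join(missing))
    else:
        print("All required command-line tools found.")
```

If anything is reported missing, revisit the corresponding installation step before proceeding.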
Set up a new conda environment with Python 3.10 and give it any name, e.g., usv.
conda create --name usv python=3.10 -c conda-forge -y
Activate the virtual environment with:
conda activate usv
Install the GUI with the command below. Rerun the same command later to check for and install updates.
pip install git+https://github.com/bartulem/usv-playpen#egg=usv-playpen --use-pep517
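To confirm that the install (or a later update) succeeded inside the active environment, one option is to query the package metadata. A minimal sketch, assuming the distribution name matches the repository name usv-playpen:

```python
from importlib import metadata

def installed_version(dist_name):
    """Return the installed version of a distribution, or None if it is absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

if __name__ == "__main__":
    # Distribution name assumed from the repository URL above
    version = installed_version("usv-playpen")
    print(version if version else "usv-playpen is not installed in this environment")
```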
Add the python-motifapi package to your virtual environment:
pip install git+https://github.com/loopbio/python-motifapi.git#egg=motifapi --use-pep517
Load the environment with the appropriate name, e.g., usv, and run the GUI:
conda activate usv && usv-playpen
A user guide with detailed instructions is available here.