Hello, everyone! Our repository provides a ready-to-use development container environment for working with SpikeInterface, a unified Python framework for spike sorting and the related processing steps in the analysis of electrophysiological data. The container also includes Kilosort4, one of the most popular sorters available.
This solution ensures that all developers working on a project can rely on the same consistent environment, reducing the "it works on my machine" problem.
To enable the Windows Subsystem for Linux (WSL), you can follow the detailed guide provided by Microsoft here. Alternatively, the main steps are summarized below.
- Open PowerShell as Administrator and enter this command to enable the WSL:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
- Next, you must enable the Virtual Machine Platform feature to update to WSL 2. To do so, run:
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
- Restart your computer.
- Finally, open PowerShell as Administrator again, then update WSL and set WSL 2 as the default version by running:
wsl --update
wsl --set-default-version 2
If finer control over the resources used by WSL (RAM, CPUs, ...) is needed, you can create or edit a `.wslconfig` file, as specified in the official WSL documentation by Microsoft here.
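For instance, a minimal `.wslconfig` (placed in your Windows user folder, e.g., C:\Users\<username>\.wslconfig) could look like the following; the values are purely illustrative and should be adapted to your hardware:

[wsl2]
# Maximum amount of RAM assigned to the WSL 2 virtual machine
memory=16GB
# Number of logical processors assigned to the virtual machine
processors=8

Note that WSL must be restarted (e.g., with wsl --shutdown) for the changes to take effect.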
Install Podman, a tool to manage containers and images, by downloading it from its website. You can choose either Podman CLI or Podman Desktop.
Docker is another viable option. More details about it are provided below.
Note that Podman natively supports multi-tenant settings. However, to date Podman Desktop is not installed system-wide and cannot be executed by multiple users simultaneously; for this reason, we recommend Podman CLI over Podman Desktop. Nonetheless, even with Podman Desktop, `podman` commands can be launched flawlessly from the terminal.
To properly configure Podman Desktop, follow the configuration steps provided by the software. Alternatively, if you downloaded the CLI version, execute these commands in the shell:
podman machine init
podman machine start

By default, Podman limits the system resources available to its containers. If necessary, check out the Podman documentation for additional options to append to the `podman machine init` command; it is often a good idea to expand the default allocated memory.
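For example, the machine could be created with more generous resources than the defaults (the values below are purely illustrative and should be adapted to your hardware):

# Create the Podman machine with 4 CPUs, 8 GiB of RAM, and a 100 GB disk
podman machine init --cpus 4 --memory 8192 --disk-size 100

If the machine has already been created, the same resources can be adjusted on the stopped machine with podman machine set instead of recreating it.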
Several sorters made available by SpikeInterface require, or might benefit from, the presence of CUDA. The NVIDIA Container Toolkit enables building and running GPU-accelerated containers and should therefore be installed. You can follow this official guide. Otherwise, the main steps are reported below:
- Open PowerShell and enter the Podman WSL virtual machine:
wsl --distribution podman-machine-default
- Configure the NVIDIA Container Toolkit production repository:
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | \
  sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
- Install the NVIDIA Container Toolkit packages:
sudo yum install -y nvidia-container-toolkit
- To allow Podman to access NVIDIA devices in containers, NVIDIA recommends using the Container Device Interface (CDI). Generate the CDI specification file:
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
- Finally, run a sample CUDA container with Podman in a new PowerShell window to make sure that everything went smoothly:
podman run --rm --security-opt=label=disable --device=nvidia.com/gpu=all ubuntu nvidia-smi
Last but not least, to run this dev container we suggest installing Visual Studio Code. Once it is properly configured, you must install the Dev Containers extension as well.
To enable the integration between Podman and the Dev Containers extension, some of its settings need to be changed. Specifically, edit the following settings from VS Code:
{
"dev.containers.dockerPath": "podman",
"dev.containers.executeInWSL": false,
"dev.containers.mountWaylandSocket": false
}

You can find additional information regarding development inside a container here.
Before starting to code and run your scripts inside the containerized development environment provided here, press Ctrl + Shift + P to open the Command Palette and launch the Dev Containers: Reopen in Container command. This will build the dev container image (if not already built) and open a remote connection to the container, where SpikeInterface and Kilosort4 are available and ready to use.
Changing the dev container configuration files (e.g., devcontainer.json, Containerfile, ...) is strongly discouraged, to avoid errors or incompatibilities between the environment dependencies. Nevertheless, if you do edit those files, make sure to run the Dev Containers: Rebuild and Reopen in Container command to force a new image build.
This dev container is based on Ubuntu, a popular Linux distribution. As a result, .pl2 files (Plexon's newer file format) cannot be opened directly with the read_plexon2 method provided by SpikeInterface. This limitation arises because that function relies on a .dll library to handle .pl2 files, which only works on Windows. To work around this limitation, .pl2 files must be converted to binary .bin files before being loaded into SpikeInterface. You can then use the read_binary method to load the converted files.
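As a minimal sketch, assuming the .pl2 recording has already been exported to a raw int16 binary file whose sampling rate and channel count are known (the file name and values below are placeholders, and keyword names may differ slightly between SpikeInterface versions), the converted file can be loaded as follows:

import spikeinterface.extractors as se

# Load a raw binary file previously converted from .pl2 (placeholder parameters)
recording = se.read_binary(
    file_paths="converted_recording.bin",  # hypothetical path to the converted file
    sampling_frequency=30_000.0,           # sampling rate of the original recording (Hz)
    dtype="int16",                         # data type used when writing the binary file
    num_channels=64,                       # number of channels in the recording
)
print(recording)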
Popular sorters have been containerized by the SpikeInterface team and are available here. They can be used inside this dev container through Apptainer, i.e., with a nested level of containerization. However, with the current setup, containerized sorters do not support CUDA and cannot be run on the GPU. To partially address this issue, Kilosort4, likely the most popular spike sorter nowadays, is already packaged in the dev container and is compatible with CUDA.
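As a sketch of how Kilosort4 can then be launched natively inside the container (the recording object is assumed to come from a SpikeInterface extractor, e.g., the one loaded in the previous snippet, the output folder name is a placeholder, and argument names may vary slightly between SpikeInterface versions):

import spikeinterface.sorters as ss

# Run Kilosort4, installed natively in the dev container, on a SpikeInterface recording;
# the GPU is exploited through CUDA when available
sorting = ss.run_sorter(
    sorter_name="kilosort4",
    recording=recording,          # any SpikeInterface RecordingExtractor
    folder="kilosort4_output",    # placeholder folder for the sorting output
)
print(sorting)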
Short answer. Yes, you definitely can.
Long answer. We preferred Podman over Docker because it natively supports multi-tenancy. Nonetheless, Docker can be used as well with minor changes. First of all, the NVIDIA Container Toolkit should be configured specifically for Docker, as indicated in the official documentation here. Then, the devcontainer.json file should be updated to make it fully compliant with Docker. Specifically, the following X11 mount should be modified as:
"mounts": [
{
"source": "/run/desktop/mnt/host/wslg/.X11-unix",
"target": "/tmp/.X11-unix",
"type": "bind"
},
...
]

Additionally, the --userns=keep-id argument must be commented out:
"runArgs": [
"--gpus=all",
// "--userns=keep-id",
"--shm-size=0"
]

Finally, the Dev Containers extension settings must be edited as follows:
{
"dev.containers.dockerPath": "docker",
"dev.containers.executeInWSL": false,
"dev.containers.mountWaylandSocket": false
}

Cheers! You are now ready to run this dev container through Docker.
Absolutely! Network drives, often used to store raw or processed data, can definitely be mounted in development containers. That can be done by simply editing the devcontainer.json file as follows:
"mounts": [
...,
{
"source": "/mnt/m/your/custom/path...",
"target": "/your/target/path",
"type": "bind"
},
...
]

However, the previous instructions won't work unless the network drive already mapped in Windows (e.g., M:\) has first been mounted in the Podman WSL machine.
- Open a PowerShell window and log into the Podman distribution:
wsl --distribution podman-machine-default
- Create a new folder for the drive letter under `/mnt` if it does not already exist:
sudo mkdir /mnt/m
- If not present, install the `nano` text editor:
sudo yum install -y nano
- Then, open the `/etc/fstab` file with the command:
sudo nano /etc/fstab
- Edit the `/etc/fstab` file by adding the following line, then save and close it:
M: /mnt/m drvfs defaults 0 0
- Update the mounted drives by running:
sudo mount -a
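To quickly verify that the drive is reachable from within the Podman machine, you can list its contents (assuming the mapped drive is not empty):

ls /mnt/m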
The previous instructions mapped the Windows `M:\` network drive to `/mnt/m`. Remember to change the drive letters according to your needs.