This guide provides detailed instructions for using fMRIPrep Workbench via Docker containers.
```bash
# Pull the image
docker pull shawnschwartz/fmriprep-workbench:latest

# Start the container
./fmriprep-workbench start

# Launch the TUI
./fmriprep-workbench launch
```

The wrapper script automatically configures mount points and environment variables:

```bash
./fmriprep-workbench start
```

This mounts:

- Current directory → `/opt/fmriprep-workbench/workspace`
- `config.yaml` → `/data/config/config.yaml`
- `all-subjects.txt` → `/data/subjects/all-subjects.txt`
- `logs/` → `/data/logs`
- `~/.cache/templateflow` → `/data/cache/templateflow`
- `~/.cache/fmriprep` → `/data/cache/fmriprep`

Other wrapper commands:

```bash
./fmriprep-workbench stop
./fmriprep-workbench status
./fmriprep-workbench logs
./fmriprep-workbench launch
```

The `launch` command opens the interactive terminal user interface where you can:
- Select pipeline steps
- Configure parameters
- Submit jobs to SLURM (if available)
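Since SLURM submission only happens "if available", a hedged sketch of how such an availability check typically looks (illustrative only, not the Workbench's actual logic):

```shell
# Decide between SLURM submission and local execution by probing for
# the sbatch binary on PATH (illustrative only).
if command -v sbatch >/dev/null 2>&1; then
  mode="slurm"
else
  mode="local"
fi
echo "execution mode: $mode"
```

On a cluster login node this reports `slurm`; on a laptop it falls back to `local`.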
Execute individual pipeline steps:
```bash
# Step 1: FlyWheel download
./fmriprep-workbench exec ./01-run.sbatch <args>

# Step 2: DICOM conversion
./fmriprep-workbench exec ./02-run.sbatch <args>

# Step 3: Prep for fMRIPrep
./fmriprep-workbench exec ./03-run.sbatch

# And so on...
```

For direct interaction with the container:

```bash
./fmriprep-workbench shell
```

Once inside, you can run any command:

```bash
# Inside the container
cd /opt/fmriprep-workbench/workspace
./launch
ls -la
source load_config.sh
```

You can also manage the container with Docker Compose:

```bash
docker-compose up -d
docker-compose logs -f
docker-compose down
```

Create a `docker-compose.override.yml` file for custom settings:
```yaml
version: '3.8'
services:
  fmriprep-workbench:
    volumes:
      # Add custom mounts
      - /path/to/your/data:/data/study:rw
    environment:
      # Add custom environment variables
      - CUSTOM_VAR=value
    deploy:
      resources:
        limits:
          cpus: '16'
          memory: 32G
```

```bash
# Replace vX.Y.Z with the desired fMRIPrep Workbench version

# Interactive shell
singularity shell \
  --bind $(pwd):/workspace \
  --bind $HOME/.cache/templateflow:/cache/templateflow \
  --bind $HOME/.cache/fmriprep:/cache/fmriprep \
  fmriprep-workbench_vX.Y.Z.sif
```
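With several bind mounts the `--bind` list gets long; one way to keep it manageable (a sketch, reusing the cache paths shown above) is to assemble it from an array and join with commas:

```shell
# Build a comma-separated bind list from host:container pairs.
binds=(
  "$PWD:/workspace"
  "$HOME/.cache/templateflow:/cache/templateflow"
  "$HOME/.cache/fmriprep:/cache/fmriprep"
)
bind_list=$(IFS=','; echo "${binds[*]}")
echo "$bind_list"
# Then e.g.: singularity shell --bind "$bind_list" fmriprep-workbench_vX.Y.Z.sif
```

This keeps all mounts declared in one place instead of repeating `--bind` flags.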
```bash
# Execute command
singularity exec \
  --bind $(pwd):/workspace \
  fmriprep-workbench_vX.Y.Z.sif \
  /opt/fmriprep-workbench/launch
```

Submit pipeline steps as SLURM jobs:
```bash
#!/bin/bash
#SBATCH --job-name=fmriprep-workbench
#SBATCH --time=24:00:00
#SBATCH --mem=16G
#SBATCH --cpus-per-task=8

# Load Singularity module
module load singularity

# Set up bind mounts
export WORKDIR=$(pwd)
export SINGULARITY_BIND="${WORKDIR}:/workspace,${HOME}/.cache/templateflow:/cache/templateflow"

# Execute pipeline step
singularity exec \
  fmriprep-workbench_vX.Y.Z.sif \
  /opt/fmriprep-workbench/01-run.sbatch <args>
```

Pass environment variables to Singularity:

```bash
singularity exec \
  --env CUSTOM_VAR=value \
  --bind $(pwd):/workspace \
  fmriprep-workbench_vX.Y.Z.sif \
  /opt/fmriprep-workbench/launch
```

Manual Docker run command:
```bash
docker run -it --rm \
  --name fmriprep-workbench \
  --user "$(id -u):$(id -g)" \
  -v "$(pwd):/opt/fmriprep-workbench/workspace:rw" \
  -v "$(pwd)/config.yaml:/data/config/config.yaml:ro" \
  -v "$(pwd)/all-subjects.txt:/data/subjects/all-subjects.txt:ro" \
  -v "$(pwd)/logs:/data/logs:rw" \
  -v "${HOME}/.cache/templateflow:/data/cache/templateflow:rw" \
  -v "${HOME}/.cache/fmriprep:/data/cache/fmriprep:rw" \
  shawnschwartz/fmriprep-workbench:latest \
  /bin/bash
```

If you need GPU access (for future GPU-accelerated processing):

```bash
docker run -it --rm \
  --gpus all \
  --name fmriprep-workbench \
  -v "$(pwd):/opt/fmriprep-workbench/workspace:rw" \
  shawnschwartz/fmriprep-workbench:latest \
  /bin/bash
```

Run with a custom Docker network:

```bash
# Create network
docker network create fmriprep-network

# Run container on network
docker run -it --rm \
  --name fmriprep-workbench \
  --network fmriprep-network \
  -v "$(pwd):/opt/fmriprep-workbench/workspace:rw" \
  shawnschwartz/fmriprep-workbench:latest \
  /bin/bash
```

If you encounter permission errors:
```bash
# Check file ownership
ls -la

# Ensure container runs as your user
docker run --user "$(id -u):$(id -g)" ...
```

Verify mount points are correct:

```bash
# Inside container
ls -la /opt/fmriprep-workbench/workspace
ls -la /data/config
ls -la /data/subjects
```

Check Docker logs:

```bash
docker logs fmriprep-workbench
```

Verify Docker is running:

```bash
docker ps
docker info
```

Ensure paths exist before binding:

```bash
mkdir -p $HOME/.cache/templateflow
mkdir -p $HOME/.cache/fmriprep
mkdir -p logs
```

If image pull fails:

```bash
# Try with explicit registry
docker pull docker.io/shawnschwartz/fmriprep-workbench:latest

# Check Docker Hub status
curl -s https://status.docker.com/api/v2/status.json
```

- Keep data outside the container: always use volume mounts for data
- Use named volumes for caches: persist TemplateFlow and fMRIPrep caches
- Regular backups: back up configuration files and subject lists
- Resource limits: set appropriate CPU and memory limits in `docker-compose.yml`
- Cache directories: mount cache directories to avoid re-downloading templates
- Parallel processing: use SLURM array jobs for parallel subject processing
- Non-root user: always run as a non-root user (handled automatically by the wrapper)
- Read-only mounts: mount configuration files as read-only (`:ro`)
- Network isolation: use custom networks when running multiple containers
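The "parallel processing" tip above typically means a SLURM array job in which each task index selects one subject from `all-subjects.txt`. A hedged sketch of the index-to-subject mapping, simulating `SLURM_ARRAY_TASK_ID` (which SLURM sets automatically in a real array job):

```shell
# Simulate the subject lookup an array task would perform.
# In a real job, submit with: sbatch --array=1-$(wc -l < all-subjects.txt) ...
subjects_file=$(mktemp)
printf 'sub-01\nsub-02\nsub-03\n' > "$subjects_file"

SLURM_ARRAY_TASK_ID=2   # set by SLURM in a real array job
subject=$(sed -n "${SLURM_ARRAY_TASK_ID}p" "$subjects_file")
echo "task ${SLURM_ARRAY_TASK_ID} -> ${subject}"   # task 2 -> sub-02
```

Each array task then runs the pipeline step for its own `$subject`, so subjects process in parallel without a driver loop.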
```bash
# Start container
./fmriprep-workbench start

# Run preprocessing steps
./fmriprep-workbench exec ./01-run.sbatch <args>  # FlyWheel download
./fmriprep-workbench exec ./02-run.sbatch <args>  # DICOM conversion
./fmriprep-workbench exec ./03-run.sbatch         # Prep for fMRIPrep
./fmriprep-workbench exec ./04-run.sbatch         # QC metadata
./fmriprep-workbench exec ./05-run.sbatch         # QC volumes
./fmriprep-workbench exec ./07-run.sbatch         # fMRIPrep full

# Stop container
./fmriprep-workbench stop
```

```bash
# Start container
./fmriprep-workbench start

# Setup GLM model
./fmriprep-workbench exec ./10-fsl-glm/setup_glm.sh

# Run analyses
./fmriprep-workbench exec ./08-run.sbatch my-model  # Level 1
./fmriprep-workbench exec ./09-run.sbatch my-model  # Level 2
./fmriprep-workbench exec ./10-run.sbatch my-model  # Level 3

# Stop container
./fmriprep-workbench stop
```

```bash
# Open shell
./fmriprep-workbench shell

# Inside container - test configurations
cd /opt/fmriprep-workbench/workspace
source load_config.sh
echo "Testing configuration..."

# Make changes, test, repeat
exit
```
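The preprocessing sequence above can also be scripted; a hedged dry-run sketch that builds and prints the commands in order (step numbers taken from the quick reference; swap the `printf` for real execution when ready):

```shell
# Dry run: build the ordered list of preprocessing commands as listed
# in the quick reference, then print them for review.
cmds=()
for step in 01 02 03 04 05 07; do
  cmds+=("./fmriprep-workbench exec ./${step}-run.sbatch")
done
printf '%s\n' "${cmds[@]}"
```

Reviewing the printed plan first makes it easy to confirm step order before a long batch run.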