DiffuKnee implements a diffusion-model + U-Net pipeline for automated multi-class knee MRI segmentation.
It provides dataset preparation helpers, training/evaluation workflows, 2D & 3D inference scripts, and reproducibility tools (Docker/conda).
- Project name: DiffuKnee
- Task: Knee MRI multi-class segmentation (NIfTI `.nii`/`.nii.gz`)
- Core method: U-Net backbone + diffusion model for segmentation
## Table of contents

- Highlights
- Quickstart (TL;DR)
- Detailed setup & usage
- Configuration example
- Repository structure
- Good-to-have improvements
- Developer notes & tips
- Contributing, License & Contact
## Highlights

- Diffusion-guided segmentation for consistent, smooth mask generation.
- Supports both slice-level (2D) and volume-level (3D) MRI pipelines.
- Built-in dataset splitters and utilities to compute mean/std + class weights.
- Training loop with checkpointing, periodic sampling, and evaluation metrics (Dice/F1, IoU).
- Reproducible setup with Dockerfile and conda environment.yml.
## Quickstart (TL;DR)

```bash
# 1. Create environment
python -m venv .venv && source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
# 2. Create data split (example)
python data/split.py --raw-dirs /data/raw_images --mask-dirs /data/masks --save-dir data/splitted --test-size 0.2
# 3. Compute mean/std and class weights
python experiments/pretrain.py --adapt-dir data/splitted --results-save-dir results/params --num-classes 6
# 4. Train
python train.py
# or multi-GPU
accelerate launch train.py
# 5. Predict
python predict_2d.py --input /path/to/case.nii.gz --checkpoint results/checkpoints/best_checkpoint --output-dir results/predictions --output-name case001
python predict_3d.py --input /path/to/case.nii.gz --checkpoint results/checkpoints/best_checkpoint --output-dir results/predictions --output-name case001
# 6. Evaluate
python evaluate.py
```

## Detailed setup & usage

### Environment

- Recommended: Python 3.8+ with venv or conda
- Install dependencies:

```bash
pip install -r requirements.txt
```

### Data preparation

- Input: NIfTI (`.nii`, `.nii.gz`) knee MRI volumes and corresponding segmentation masks.
- Split into `train/`, `train_masks/`, `test/`, `test_masks/` using either:

```bash
python data/split.py --raw-dirs ... --mask-dirs ... --save-dir data/splitted
```

or, with a `paths.txt` file:

```bash
python data/split_from_paths.py --raw-dirs ... --mask-dirs ... --save-dir data/splitted --paths-file paths.txt
```
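To sanity-check a prepared case before training, you can load an image/mask pair and confirm the two are voxel-aligned. A minimal sketch assuming nibabel and hypothetical file names under `data/splitted/` (not a script shipped with the repository):

```python
import nibabel as nib
import numpy as np

# Hypothetical paths; substitute a real case from your split.
image = nib.load("data/splitted/train/case001.nii.gz")
mask = nib.load("data/splitted/train_masks/case001.nii.gz")

volume = image.get_fdata()                          # float intensities
labels = np.asarray(mask.dataobj).astype(np.int64)  # integer class labels

assert volume.shape == labels.shape, "image and mask must be voxel-aligned"
print(volume.shape, np.unique(labels))              # labels should be 0..num_classes-1
```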
### Dataset statistics

```bash
python experiments/pretrain.py --adapt-dir data/splitted --results-save-dir results/params --num-classes 6
```

Saves `mean_std.pt` and `class_weights.pt` under `results/params/`.
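The actual statistics computation lives in `experiments/pretrain.py`. As an illustration only (not the repository's implementation), inverse-frequency class weights for a 6-class label set could be derived like this:

```python
import torch

def inverse_frequency_weights(masks: torch.Tensor, num_classes: int = 6) -> torch.Tensor:
    """Per-class weights inversely proportional to voxel frequency."""
    counts = torch.bincount(masks.flatten(), minlength=num_classes).float()
    freqs = counts / counts.sum()
    weights = 1.0 / (freqs + 1e-8)                 # rare classes get larger weights
    return weights / weights.sum() * num_classes   # normalise so the mean weight is 1.0

# Toy example with a random label volume; pretrain.py persists its result with torch.save.
masks = torch.randint(0, 6, (4, 384, 384))
print(inverse_frequency_weights(masks))
```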
### Training

```bash
python train.py
```

or distributed:

```bash
accelerate launch train.py
```
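`train.py` is built on HuggingFace Accelerate, which is what makes the `accelerate launch` form work. The snippet below is only a generic sketch of that pattern with a placeholder model and data, not the repository's actual training loop:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks up the device/process setup from `accelerate launch`

# Placeholders standing in for the diffusion U-Net and MRI slice batches.
model = nn.Conv2d(1, 6, kernel_size=1)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loader = DataLoader(
    TensorDataset(torch.randn(8, 1, 64, 64), torch.randint(0, 6, (8, 64, 64))),
    batch_size=4,
)

# prepare() moves everything to the right device(s) and wraps the model for DDP when needed.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

criterion = nn.CrossEntropyLoss()
for images, targets in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    accelerator.backward(loss)  # replaces loss.backward() so gradients sync across processes
    optimizer.step()
```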
### Prediction

```bash
python predict_2d.py --input /path/to/case.nii.gz --checkpoint results/checkpoints/best_checkpoint --output-dir results/predictions --output-name case001
python predict_3d.py --input /path/to/case.nii.gz --checkpoint results/checkpoints/best_checkpoint --output-dir results/predictions --output-name case001
```

### Evaluation

```bash
python evaluate.py
```

Outputs Dice/F1 and IoU scores and also writes `eval.txt`.
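`evaluate.py` computes the reported scores; as a reference for what Dice/F1 and IoU mean here, per-class values on integer label maps can be computed as in this sketch (illustrative, not the repository's code):

```python
import torch

def dice_and_iou(pred: torch.Tensor, target: torch.Tensor, num_classes: int = 6, eps: float = 1e-8):
    """Per-class Dice (F1) and IoU for integer-labelled masks of identical shape."""
    dice, iou = [], []
    for c in range(num_classes):
        p, t = pred == c, target == c
        inter = (p & t).sum().float()
        dice.append(((2 * inter + eps) / (p.sum() + t.sum() + eps)).item())
        iou.append(((inter + eps) / ((p | t).sum() + eps)).item())
    return dice, iou

# Toy example with random 6-class masks.
pred = torch.randint(0, 6, (384, 384))
target = torch.randint(0, 6, (384, 384))
print(dice_and_iou(pred, target))
```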
## Configuration example

```yaml
train:
  lr: 5e-5
  epochs: 250
  batch_size: 8
  save_every: 5
  early_stopping_patience: 6
model:
  image_size: 384
  num_classes: 6
paths:
  results: ./results
  checkpoints: ./results/checkpoints
```
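If you keep these parameters in a `config.yaml` (see the improvement list below), a minimal loader, assuming PyYAML, could look like this; the repository does not necessarily ship one:

```python
import yaml  # PyYAML

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

# PyYAML reads "5e-5" as a string (no decimal point before the exponent), so cast explicitly.
lr = float(cfg["train"]["lr"])
image_size = cfg["model"]["image_size"]
print(lr, image_size, cfg["paths"]["checkpoints"])
```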
## Repository structure

```
DiffuKnee/
├─ data/ # dataset loaders + split helpers
├─ diffusion/ # diffusion model and schedules
├─ unet/ # U-Net backbone & smoothing utilities
├─ experiments/ # trainer + pretrain utilities
├─ results/ # sample outputs, params, examples
├─ utils/ # preprocessing, postprocessing, helper functions
├─ train.py # main training script
├─ predict_2d.py # 2D inference example
├─ predict_3d.py # 3D inference example
├─ evaluate.py # evaluation & metrics
├─ requirements.txt
├─ Dockerfile
├─ environment.yml
├─ LICENSE
└─ README.md
```
## Good-to-have improvements

- Add a sample dataset or download script for demo purposes.
- Provide `config.yaml` for all training/prediction parameters.
- Add GitHub badges (build, license, Python version).
- Set up CI/CD (GitHub Actions) for tests & linting.
## Developer notes & tips

- Recompute `mean_std.pt` & `class_weights.pt` if the dataset changes.
- Uses TorchIO for 3D augmentation (see the sketch after this list).
- Uses HuggingFace Accelerate for multi-GPU/distributed training.
- Optional smoothing and postprocessing in `utils/postprocessing.py`.
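For reference, a 3D augmentation pipeline in TorchIO typically looks like the sketch below; the transforms and shapes are illustrative, not the repository's exact configuration:

```python
import torch
import torchio as tio

# Illustrative transform stack for volume + label-map pairs.
transform = tio.Compose([
    tio.ZNormalization(),                        # intensity normalisation (images only)
    tio.RandomFlip(axes=(0,)),
    tio.RandomAffine(scales=(0.9, 1.1), degrees=10),
    tio.RandomNoise(std=0.01),
])

# Synthetic subject standing in for a knee MRI volume and its segmentation mask.
subject = tio.Subject(
    image=tio.ScalarImage(tensor=torch.randn(1, 32, 128, 128)),
    mask=tio.LabelMap(tensor=torch.randint(0, 6, (1, 32, 128, 128))),
)

augmented = transform(subject)  # spatial transforms stay aligned across image and mask
print(augmented.image.shape, augmented.mask.shape)
```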
## Contributing, License & Contact

- Licensed under MIT (see `LICENSE`).
- Contributions welcome! See `CONTRIBUTING.md`.
- For questions or collaborations, open an issue or pull request on GitHub.