
A PyTorch implementation of a diffusion-model + U-Net pipeline for automated multi-class segmentation of knee MRI scans. Includes dataset utilities, training/evaluation workflows, and 2D/3D inference scripts, with Docker and conda support for reproducibility.

meanderinghuman/DiffuKnee


DiffuKnee — Diffusion-based Knee MRI Segmentation


DiffuKnee implements a diffusion-model + U-Net pipeline for automated multi-class knee MRI segmentation.
It provides dataset preparation helpers, training/evaluation workflows, 2D & 3D inference scripts, and reproducibility tools (Docker/conda).


🚀 Quick links

  • Project name: DiffuKnee
  • Task: Knee MRI multi-class segmentation (NIfTI .nii / .nii.gz)
  • Core method: U-Net backbone + diffusion model for segmentation
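To make the core method concrete, here is a minimal, self-contained sketch of a DDPM-style reverse (sampling) loop producing a per-pixel class map. It is purely illustrative: the noise schedule, the `reverse_diffusion` helper, and the lambda standing in for the U-Net denoiser are all toy assumptions, not the repository's actual `diffusion/` implementation.

```python
import numpy as np

def reverse_diffusion(denoise_fn, shape, timesteps=50, seed=0):
    """Toy DDPM-style reverse process: start from Gaussian noise and
    iteratively denoise toward a segmentation logit map."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, timesteps)      # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(shape)                  # x_T ~ N(0, I)
    for t in reversed(range(timesteps)):
        eps = denoise_fn(x, t)                      # predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise        # posterior sample
    return x

# Placeholder "U-Net" denoiser: predicts the noise as a fraction of the input.
logits = reverse_diffusion(lambda x, t: 0.1 * x, shape=(6, 64, 64))
mask = logits.argmax(axis=0)                        # per-pixel class in [0, 6)
```

The real pipeline conditions the denoiser on the MRI volume; this sketch only shows the shape of the sampling loop.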

📚 Table of contents

  1. Highlights
  2. Quickstart (TL;DR)
  3. Detailed setup & usage
  4. Configuration example
  5. Repository structure
  6. Good-to-have improvements
  7. Developer notes & tips
  8. Contributing, License & Contact

✨ Highlights

  • Diffusion-guided segmentation for consistent, smooth mask generation.
  • Supports both slice-level (2D) and volume-level (3D) MRI pipelines.
  • Built-in dataset splitters and utilities to compute mean/std + class weights.
  • Training loop with checkpointing, periodic sampling, and evaluation metrics (Dice/F1, IoU).
  • Reproducible setup with Dockerfile and conda environment.yml.
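The slice-level (2D) pipeline mentioned above can be pictured as running a 2D model over each axial slice of a volume and restacking the outputs. The sketch below assumes a hypothetical `predict_volume_slicewise` helper with a dummy thresholding "model"; the repository's actual loaders and model calls will differ.

```python
import numpy as np

def predict_volume_slicewise(model_2d, volume):
    """Apply a 2D model to every axial slice of a (Z, H, W) volume and
    stack the per-slice masks back into a volume."""
    return np.stack([model_2d(volume[z]) for z in range(volume.shape[0])])

# Dummy per-slice "model": threshold intensities into a binary mask.
vol = np.random.rand(16, 64, 64)
mask = predict_volume_slicewise(lambda s: (s > 0.5).astype(np.uint8), vol)
```

A 3D pipeline instead feeds the whole volume (or patches of it) to a volumetric model, trading memory for cross-slice consistency.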

⚡ Quickstart (TL;DR)

# 1. Create environment
python -m venv .venv && source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt

# 2. Create data split (example)
python data/split.py \
  --raw-dirs /data/raw_images \
  --mask-dirs /data/masks \
  --save-dir data/splitted \
  --test-size 0.2

# 3. Compute mean/std and class weights
python experiments/pretrain.py \
  --adapt-dir data/splitted \
  --results-save-dir results/params \
  --num-classes 6

# 4. Train
python train.py
# or multi-GPU
accelerate launch train.py

# 5. Predict
python predict_2d.py \
  --input /path/to/case.nii.gz \
  --checkpoint results/checkpoints/best_checkpoint \
  --output-dir results/predictions \
  --output-name case001

python predict_3d.py \
  --input /path/to/case.nii.gz \
  --checkpoint results/checkpoints/best_checkpoint \
  --output-dir results/predictions \
  --output-name case001

# 6. Evaluate
python evaluate.py

🛠 Detailed setup & usage

Install

  • Recommended: Python 3.8+ with venv or conda
  • Install dependencies:
pip install -r requirements.txt

Prepare data

  • Input: NIfTI (.nii, .nii.gz) knee MRI volumes and corresponding segmentation masks.
  • Split into train/, train_masks/, test/, test_masks/ using either:
    python data/split.py --raw-dirs ... --mask-dirs ... --save-dir data/splitted
    or with a paths.txt:
    python data/split_from_paths.py --raw-dirs ... --mask-dirs ... --save-dir data/splitted --paths-file paths.txt
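Conceptually, the split step is a seeded shuffle of case IDs into train and test lists. The helper below (`split_cases` is a hypothetical name, and the exact shuffling/rounding behavior of `data/split.py` may differ) shows the idea:

```python
import random

def split_cases(case_ids, test_size=0.2, seed=42):
    """Shuffle case IDs deterministically and split into train/test lists."""
    ids = sorted(case_ids)                      # sort first for reproducibility
    random.Random(seed).shuffle(ids)
    n_test = max(1, int(round(len(ids) * test_size)))
    return ids[n_test:], ids[:n_test]           # (train, test)

train_ids, test_ids = split_cases([f"case{i:03d}" for i in range(10)])
```

Splitting by case (not by slice) avoids leaking slices of the same patient into both sets.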

Compute stats & class weights

python experiments/pretrain.py --adapt-dir data/splitted --results-save-dir results/params --num-classes 6

Saves mean_std.pt and class_weights.pt under results/params/.
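One plausible way such class weights are derived is inverse label frequency, normalized to mean 1 so the loss scale stays stable. This is a sketch under that assumption; the repository's exact formula lives in experiments/pretrain.py and may differ.

```python
import numpy as np

def class_weights(masks, num_classes=6):
    """Inverse-frequency class weights from integer label masks,
    normalized to mean 1 (illustrative scheme)."""
    counts = np.zeros(num_classes)
    for m in masks:
        counts += np.bincount(m.ravel(), minlength=num_classes)
    # Guard against absent classes with max(count, 1).
    weights = counts.sum() / (num_classes * np.maximum(counts, 1))
    return weights / weights.mean()

masks = [np.random.randint(0, 6, size=(64, 64)) for _ in range(4)]
w = class_weights(masks)
```

Rare classes (e.g. thin cartilage layers) get larger weights, countering the class imbalance typical of knee MRI masks.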

Train

python train.py

or distributed:

accelerate launch train.py

Inference (2D / 3D)

python predict_2d.py --input /path/to/case.nii.gz --checkpoint results/checkpoints/best_checkpoint --output-dir results/predictions --output-name case001
python predict_3d.py --input /path/to/case.nii.gz --checkpoint results/checkpoints/best_checkpoint --output-dir results/predictions --output-name case001

Evaluate

python evaluate.py

Outputs Dice/F1 and IoU scores and writes them to eval.txt.
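For reference, a common per-class formulation of these metrics for integer label maps looks like the sketch below. The `dice_iou` helper is illustrative; evaluate.py may differ in smoothing and averaging details.

```python
import numpy as np

def dice_iou(pred, target, num_classes=6, eps=1e-7):
    """Per-class Dice/F1 and IoU between two integer label maps."""
    dice, iou = [], []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        inter = np.logical_and(p, t).sum()
        dice.append((2 * inter + eps) / (p.sum() + t.sum() + eps))
        iou.append((inter + eps) / (np.logical_or(p, t).sum() + eps))
    return np.array(dice), np.array(iou)

pred = np.random.randint(0, 6, (64, 64))
dice, iou = dice_iou(pred, pred)   # identical maps: every score is 1
```

Note Dice and IoU are monotonically related (Dice = 2·IoU/(1+IoU)), so they rank predictions identically but Dice is numerically higher.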


🧾 Configuration

train:
  lr: 5e-5
  epochs: 250
  batch_size: 8
  save_every: 5
  early_stopping_patience: 6
model:
  image_size: 384
  num_classes: 6
paths:
  results: ./results
  checkpoints: ./results/checkpoints
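A typed Python mirror of the example config above can make these values easy to consume in scripts; the dataclass names here are illustrative, not part of the repository.

```python
from dataclasses import dataclass, field

@dataclass
class TrainConfig:
    lr: float = 5e-5
    epochs: int = 250
    batch_size: int = 8
    save_every: int = 5
    early_stopping_patience: int = 6

@dataclass
class ModelConfig:
    image_size: int = 384
    num_classes: int = 6

@dataclass
class Config:
    train: TrainConfig = field(default_factory=TrainConfig)
    model: ModelConfig = field(default_factory=ModelConfig)
    results_dir: str = "./results"
    checkpoints_dir: str = "./results/checkpoints"

cfg = Config()
```

If you load the YAML variant with PyYAML instead, beware that a bare `5e-5` is parsed as a string (PyYAML's float resolver wants a decimal point); write `5.0e-5` or cast with `float()`.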

📁 Repository structure

DiffuKnee/
├─ data/                # dataset loaders + split helpers
├─ diffusion/           # diffusion model and schedules
├─ unet/                # U-Net backbone & smoothing utilities
├─ experiments/         # trainer + pretrain utilities
├─ results/             # sample outputs, params, examples
├─ utils/               # preprocessing, postprocessing, helper functions
├─ train.py             # main training script
├─ predict_2d.py        # 2D inference example
├─ predict_3d.py        # 3D inference example
├─ evaluate.py          # evaluation & metrics
├─ requirements.txt
├─ Dockerfile
├─ environment.yml
├─ LICENSE
└─ README.md

✅ Good-to-have improvements

  • Add a sample dataset or download script for demo purposes.
  • Provide config.yaml for all training/prediction parameters.
  • Add GitHub badges (build, license, Python version).
  • Set up CI/CD (GitHub Actions) for tests & linting.

🧩 Developer notes & tips

  • Recompute mean_std.pt & class_weights.pt if dataset changes.
  • Uses TorchIO for 3D augmentation.
  • Uses HuggingFace Accelerate for multi-GPU/distributed training.
  • Optional smoothing and postprocessing in utils/postprocessing.py.

🤝 Contributing, License & Contact

  • Licensed under MIT (see LICENSE).
  • Contributions welcome! See CONTRIBUTING.md.
  • For questions or collaborations, open an issue or pull request on GitHub.
