
OBS-Diff: Accurate Pruning For Diffusion Models in One-Shot


Junhan Zhu1, Hesong Wang1,2, Mingluo Su1, Zefang Wang1,2, Huan Wang1*

1Westlake University, 2Zhejiang University
*Corresponding author: wanghuan [at] westlake [dot] edu [dot] cn

Figure: Qualitative comparison of unstructured pruning methods on the SD3-Medium model. We evaluate Magnitude, DSnoT, Wanda, and our method (OBS-Diff) at various sparsity levels (20%, 30%, 40%, and 50%) using the same prompt and negative prompt. All images are generated at a resolution of 512 x 512.

Abstract: Large-scale text-to-image diffusion models, while powerful, suffer from prohibitive computational cost. Existing one-shot network pruning methods can hardly be applied to them directly due to the iterative denoising nature of diffusion models. To bridge the gap, this paper presents OBS-Diff, a novel one-shot pruning framework that enables accurate and training-free compression of large-scale text-to-image diffusion models. Specifically, (i) OBS-Diff revitalizes the classic Optimal Brain Surgeon (OBS), adapting it to the complex architectures of modern diffusion models and supporting diverse pruning granularities, including unstructured, N:M semi-structured, and structured (MHA heads and FFN neurons) sparsity; (ii) to align the pruning criterion with the iterative dynamics of the diffusion process, we examine the problem from an error-accumulation perspective and propose a novel timestep-aware Hessian construction with a logarithmic-decrease weighting scheme, assigning greater importance to earlier timesteps to mitigate potential error accumulation; (iii) furthermore, a computationally efficient group-wise sequential pruning strategy is proposed to amortize the expensive calibration process. Extensive experiments show that OBS-Diff achieves state-of-the-art one-shot pruning for diffusion models, delivering inference acceleration with minimal degradation in visual quality.
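
For readers unfamiliar with the classic Optimal Brain Surgeon step that OBS-Diff builds on, the sketch below shows the textbook saliency and weight-compensation update: saliency L_q = w_q^2 / (2 [H^-1]_qq), and update delta_w = -(w_q / [H^-1]_qq) * H^-1 e_q. This is a minimal NumPy illustration of the standard formulation, not code from this repository, and all names are illustrative.

import numpy as np

def obs_prune_one_weight(w, Hinv):
    # Classic OBS: prune the lowest-saliency weight and compensate the rest.
    diag = np.diag(Hinv)                          # [H^-1]_qq for every q
    saliency = w ** 2 / (2.0 * diag)              # L_q = w_q^2 / (2 [H^-1]_qq)
    q = int(np.argmin(saliency))                  # cheapest weight to remove
    w_new = w - (w[q] / Hinv[q, q]) * Hinv[:, q]  # compensating update
    w_new[q] = 0.0                                # pruned weight is exactly zero
    return w_new, q

# Toy usage with a Hessian proxy H = X X^T / n built from random "activations".
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 64))
H = X @ X.T / 64 + 1e-3 * np.eye(8)               # small damping for stability
w = rng.standard_normal(8)
w_pruned, q = obs_prune_one_weight(w, np.linalg.inv(H))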

News!

[2025-10-09] We have released the core code and paper for OBS-Diff!

Framework

Figure: Illustration of the proposed OBS-Diff framework applied to the MMDiT architecture. Target modules are first partitioned into a predefined number of module packages and processed sequentially. For each package, hooks capture layer activations during a forward pass with a calibration dataset. This data, combined with weights from a dedicated timestep weighting scheme, is used to construct Hessian matrices. These matrices guide the Optimal Brain Surgeon (OBS) algorithm to simultaneously prune all layers within the current package before proceeding to the next.
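
As a rough illustration of the calibration loop described above, the sketch below shows how a forward hook can accumulate a timestep-weighted Hessian proxy H = sum_t w_t * X_t^T X_t from a linear layer's inputs. The hook mechanics are standard PyTorch; the logarithmic-decrease schedule shown is a hedged guess at the shape described in the paper, and all names are illustrative rather than taken from this repository.

import math
import torch

def timestep_weight(t: int, T: int) -> float:
    # One plausible logarithmic-decrease schedule: weight 1.0 at the first
    # (noisiest) sampling step, decaying toward 0 at the last. The exact
    # form used by OBS-Diff is defined in the paper.
    return math.log(T - t + 1) / math.log(T + 1)

class HessianHook:
    # Accumulates H = sum_t w_t * X_t^T X_t from one linear layer's inputs.
    def __init__(self, layer: torch.nn.Linear):
        self.H = torch.zeros(layer.in_features, layer.in_features)
        self.weight = 1.0  # set to timestep_weight(t, T) before each step
        self.handle = layer.register_forward_hook(self._capture)

    def _capture(self, module, inputs, output):
        x = inputs[0].detach().reshape(-1, inputs[0].shape[-1]).float().cpu()
        self.H += self.weight * (x.T @ x)         # (in_features, in_features)

    def remove(self):
        self.handle.remove()

During calibration, one would set hook.weight = timestep_weight(t, T) before each denoising step, run the forward pass on the calibration prompts, and then feed the accumulated H into the OBS update.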

Some Quantitative Results

Table: Quantitative comparison of unstructured pruning methods on text-to-image diffusion models. The best result per metric is highlighted in bold.

Table: Performance of semi-structured (2:4 sparsity pattern) pruning on the Stable Diffusion 3.5-Large model. Pruning is applied to the 3rd through 25th MMDiT blocks. The best result is shown in bold.

Table: Performance of structured pruning on the Stable Diffusion 3.5-Large model across various sparsity levels. The first and last transformer blocks were excluded from the pruning process. The TFLOPs metric represents the theoretical computational cost for a single forward pass of the entire transformer. For each sparsity group, the best result per metric is highlighted in bold.

Some Qualitative Results

Figure: Qualitative comparison of unstructured pruning methods on the SD3-Medium model. We evaluate Magnitude, DSnoT, Wanda, and our method (OBS-Diff) at various sparsity levels (20%, 30%, 40%, and 50%) using the same prompt and negative prompt. All images are generated at a resolution of 512 x 512.

Figure: Qualitative comparison of unstructured pruning methods on Flux 1.dev at 70% sparsity. Results from Magnitude, DSnoT, Wanda, and our proposed OBS-Diff are shown.

Figure: Qualitative comparison of structured pruning methods on the SD3.5-Large model at various sparsity levels (15%, 20%, 25%, and 30%). Results from the L1-norm baseline and our proposed OBS-Diff are shown.

Quick Start

1. Installation

First, clone our codebase:

git clone https://github.com/alrightlone/OBS-Diff.git
cd OBS-Diff

Then, install the dependencies:

pip install -r requirements.txt

You also need to download the model weights (SD3-Medium) from Hugging Face and the calibration dataset (GCC3M) from Conceptual Captions 12M.
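
For example, the SD3-Medium weights can be fetched with diffusers. The repository id below is the public Hugging Face repo for SD3-Medium; it is gated, so accept its license on Hugging Face and run huggingface-cli login first. The local save path is only an assumption and should match whatever the scripts expect.

import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
)
pipe.save_pretrained("./models/sd3-medium")  # hypothetical local path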

2. Usage

  • Unstructured Pruning For SD3-Medium
bash ./scripts/OBS_Diff_Unstructured.sh
  • N:M Semi-structured Pruning For SD3-Medium
bash ./scripts/OBS_Diff_Semi.sh
  • Structured Pruning For SD3-Medium
bash ./scripts/OBS_Diff_Structured.sh

Note: You need to update the paths to the models and the calibration dataset in the scripts and code.
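
As a point of reference for the semi-structured script above, the sketch below shows what an N:M (here 2:4) sparsity mask looks like: in every group of 4 consecutive weights along the input dimension, exactly 2 survive. The magnitude-based selection here is only for illustration; OBS-Diff chooses survivors with its Hessian-aware criterion.

import torch

def mask_2_4(w: torch.Tensor) -> torch.Tensor:
    # Keep the 2 largest-magnitude entries in each group of 4 weights.
    rows, cols = w.shape
    assert cols % 4 == 0, "2:4 sparsity needs the input dim divisible by 4"
    groups = w.abs().reshape(rows, cols // 4, 4)
    idx = groups.topk(2, dim=-1).indices        # indices of the 2 survivors
    mask = torch.zeros_like(groups)
    mask.scatter_(-1, idx, 1.0)                 # 1.0 marks kept weights
    return mask.reshape(rows, cols).bool()

w = torch.randn(16, 32)
w_sparse = w * mask_2_4(w)                      # every 4-group has 2 zeros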

Contact

If you have any questions, please contact us at zhujunhan@westlake.edu.cn.

Acknowledgments

We thank the following projects for their contributions to the development of OBS-Diff: SparseGPT, Wanda, DSnoT, EcoDiff, SlimGPT, DepGraph, Diff-Pruning.

Citation

If you find this work useful, please consider citing:

@article{zhu2025obs,
  title={OBS-Diff: Accurate Pruning For Diffusion Models in One-Shot},
  author={Zhu, Junhan and Wang, Hesong and Su, Mingluo and Wang, Zefang and Wang, Huan},
  journal={arXiv preprint arXiv:2510.06751},
  year={2025}
}

About

Official implementation of "OBS-Diff".
