Junhan Zhu1, Hesong Wang1,2, Mingluo Su1, Zefang Wang1,2, Huan Wang1*
1Westlake University, 2Zhejiang University
*Corresponding author: wanghuan [at] westlake [dot] edu [dot] cn
Qualitative comparison of unstructured pruning methods on the SD3-Medium model. We evaluate Magnitude, DSnoT, Wanda, and our method (OBS-Diff) at various sparsity levels (20%, 30%, 40%, and 50%) using the same prompt and negative prompt. All images are generated at a resolution of 512 x 512.
Abstract: Large-scale text-to-image diffusion models, while powerful, suffer from prohibitive computational cost. Existing one-shot network pruning methods can hardly be applied to them directly due to the iterative denoising nature of diffusion models. To bridge this gap, this paper presents OBS-Diff, a novel one-shot pruning framework that enables accurate and training-free compression of large-scale text-to-image diffusion models. Specifically, (i) OBS-Diff revitalizes the classic Optimal Brain Surgeon (OBS), adapting it to the complex architectures of modern diffusion models and supporting diverse pruning granularities, including unstructured, N:M semi-structured, and structured (MHA heads and FFN neurons) sparsity; (ii) to align the pruning criterion with the iterative dynamics of the diffusion process, we examine the problem from an error-accumulation perspective and propose a timestep-aware Hessian construction with a logarithmic-decrease weighting scheme, which assigns greater importance to earlier timesteps to mitigate potential error accumulation; (iii) furthermore, a computationally efficient group-wise sequential pruning strategy amortizes the expensive calibration process. Extensive experiments show that OBS-Diff achieves state-of-the-art one-shot pruning for diffusion models, delivering inference acceleration with minimal degradation in visual quality.
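To make the pruning rule concrete, here is a minimal PyTorch sketch of the classic OBS criterion together with one plausible form of the logarithmic-decrease timestep weighting. `log_decay_weight` and the fixed-`H_inv` shortcut are illustrative assumptions for readability, not the exact OBS-Diff implementation:

```python
import math
import torch

def log_decay_weight(step: int, num_steps: int) -> float:
    # Hypothetical logarithmic-decrease weight: earlier denoising steps
    # (small `step`) receive larger weight to curb error accumulation.
    return math.log(num_steps - step + 1.0)

def obs_prune_row(w: torch.Tensor, H_inv: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Prune one output row `w` (in_features,) with the classic OBS rule:
    repeatedly remove the weight with the smallest saliency w_q^2 / [H^-1]_qq
    and compensate the survivors by delta = -(w_q / [H^-1]_qq) * H^-1[:, q].
    `H_inv` is the (damped) inverse Hessian; exact OBS also updates it after
    each removal, but it is held fixed here to keep the sketch short."""
    w = w.clone()
    pruned = torch.zeros_like(w, dtype=torch.bool)
    for _ in range(int(sparsity * w.numel())):
        saliency = w ** 2 / torch.diag(H_inv)
        saliency[pruned] = float("inf")          # never re-select pruned weights
        q = torch.argmin(saliency)
        delta = -(w[q] / H_inv[q, q]) * H_inv[:, q]
        delta[pruned] = 0.0                      # keep pruned weights at zero
        w += delta                               # this also zeroes w[q]
        w[q] = 0.0
        pruned[q] = True
    return w
```

A naive per-weight loop like this is what SparseGPT-style solvers speed up with blocked updates; the sketch is only meant to show the saliency and compensation steps.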
[2025-10-09] We have released the core code and paper for OBS-Diff!
Illustration of the proposed OBS-Diff framework applied to the MMDiT architecture. Target modules are first partitioned into a predefined number of module packages and processed sequentially. For each package, hooks capture layer activations during a forward pass over a calibration dataset. These activations, combined with per-timestep weights from a dedicated weighting scheme, are used to construct Hessian matrices. The matrices guide the Optimal Brain Surgeon (OBS) algorithm to simultaneously prune all layers within the current package before proceeding to the next.
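The hook-based calibration pass described above can be sketched as follows, assuming PyTorch `nn.Linear` layers in the current package. The names `attach_hessian_hooks` and `ctx` are hypothetical; the sampling loop is expected to set `ctx["w"]` to the current timestep weight before each denoising step:

```python
import torch
import torch.nn as nn

def attach_hessian_hooks(package_layers: dict[str, nn.Linear],
                         hessians: dict[str, torch.Tensor],
                         ctx: dict):
    """Register forward hooks that accumulate H += w_t * X^T X for every
    Linear layer in the current module package, where X holds the layer's
    input activations and w_t is the weight of the current timestep."""
    handles = []
    for name, layer in package_layers.items():
        hessians.setdefault(
            name, torch.zeros(layer.in_features, layer.in_features))

        def hook(module, inputs, output, name=name):
            # Flatten batch/sequence dims: (tokens, in_features).
            x = inputs[0].reshape(-1, module.in_features).float()
            hessians[name] += ctx["w"] * (x.T @ x)

        handles.append(layer.register_forward_hook(hook))
    return handles  # call h.remove() on each handle after calibration
```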
Quantitative comparison of unstructured pruning methods on text-to-image diffusion models. The best result per metric is highlighted in bold.
Performance of semi-structured (2:4 sparsity pattern) pruning on the Stable Diffusion 3.5-Large model. Pruning is applied to the 3rd through 25th MMDiT blocks. The best result is shown in bold.
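As a reminder of what the 2:4 pattern means: at most two nonzeros are kept in every contiguous group of four weights along the input dimension. The toy helper below enforces the pattern with a magnitude criterion purely to keep the sketch short; OBS-Diff selects the surviving pair via its Hessian-based saliency instead:

```python
import torch

def apply_2_to_4(W: torch.Tensor) -> torch.Tensor:
    """Zero the 2 smallest-magnitude entries in each group of 4 along the
    last dimension of W (out_features, in_features); in_features must be
    divisible by 4."""
    out, inp = W.shape
    groups = W.reshape(out, inp // 4, 4)
    # Indices of the two smallest |w| per group of four.
    idx = groups.abs().topk(2, dim=-1, largest=False).indices
    mask = torch.ones_like(groups)
    mask.scatter_(-1, idx, 0.0)
    return (groups * mask).reshape(out, inp)

W = torch.randn(8, 16)
print((apply_2_to_4(W) != 0).sum(dim=-1))  # 8 nonzeros per row: 2 of every 4
```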
Performance of structured pruning on the Stable Diffusion 3.5-Large model across various sparsity levels. The first and last transformer blocks were excluded from the pruning process. The TFLOPs metric represents the theoretical computational cost for a single forward pass of the entire transformer. For each sparsity group, the best result per metric is highlighted in bold.
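Unlike masking, structured pruning physically shrinks the weight matrices, which is where the TFLOPs reduction comes from. Assuming a standard two-layer FFN of the form down(act(up(x))), removing hidden neurons deletes rows of the up projection and the matching columns of the down projection. `prune_ffn_neurons` below is a hypothetical helper; choosing which neurons to keep (OBS-Diff scores them with Hessian-based saliency) is outside the sketch:

```python
import torch
import torch.nn as nn

def prune_ffn_neurons(up: nn.Linear, down: nn.Linear, keep: torch.Tensor):
    """Return new Linear layers that retain only the hidden neurons whose
    indices appear in `keep` (a 1-D index tensor)."""
    new_up = nn.Linear(up.in_features, keep.numel(),
                       bias=up.bias is not None)
    new_down = nn.Linear(keep.numel(), down.out_features,
                         bias=down.bias is not None)
    with torch.no_grad():
        new_up.weight.copy_(up.weight[keep])         # rows = kept output neurons
        if up.bias is not None:
            new_up.bias.copy_(up.bias[keep])
        new_down.weight.copy_(down.weight[:, keep])  # columns = kept input neurons
        if down.bias is not None:
            new_down.bias.copy_(down.bias)
    return new_up, new_down
```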
Qualitative comparison of unstructured pruning methods on Flux 1.dev at 70% sparsity. Results from Magnitude, DSnoT, Wanda, and our proposed OBS-Diff are shown.
Qualitative comparison of structured pruning methods on the SD3.5-Large model at various sparsity levels (15%, 20%, 25%, and 30%). Results from the L1-norm baseline and our proposed OBS-Diff are shown.
First, clone our codebase:

```bash
git clone https://github.com/alrightlone/OBS-Diff.git
cd OBS-Diff
```

Then, install the dependencies:

```bash
pip install -r requirements.txt
```

You also need to download the model (SD3-Medium) from Hugging Face and the calibration dataset (GCC3M) from Conceptual Captions 12M.
- Unstructured Pruning for SD3-Medium

```bash
bash ./scripts/OBS_Diff_Unstructured.sh
```

- N:M Semi-structured Pruning for SD3-Medium

```bash
bash ./scripts/OBS_Diff_Semi.sh
```

- Structured Pruning for SD3-Medium

```bash
bash ./scripts/OBS_Diff_Structured.sh
```

Note: You need to change the paths to the model and the calibration dataset in the scripts and code.
If you have any questions, please contact us at zhujunhan@westlake.edu.cn.
We thank the following projects for their contributions to the development of OBS-Diff: SparseGPT, Wanda, DSnoT, EcoDiff, SlimGPT, DepGraph, Diff-Pruning.
If you find this work useful, please consider citing:
```bibtex
@article{zhu2025obs,
  title={OBS-Diff: Accurate Pruning For Diffusion Models in One-Shot},
  author={Zhu, Junhan and Wang, Hesong and Su, Mingluo and Wang, Zefang and Wang, Huan},
  journal={arXiv preprint arXiv:2510.06751},
  year={2025}
}
```