This is a local implementation of Deforum Stable Diffusion V0.5 that supports JSON settings files. It works with all Stable Diffusion models, including v1-5-pruned.ckpt.
Example animated videos: these example videos were generated using Deforum 0.5 and the Stable Diffusion 1.5 checkpoint (v1-5-pruned.ckpt). The settings for these examples are also available in the "examples" folder.
Videos generated using this script: watch these videos on YouTube; they were generated using Deforum Stable Diffusion V0.5. I built this script primarily to generate this kind of video.
This script is based on the Deforum Colab notebook (v0.5). I have tested it on Ubuntu 22.04 with an Nvidia RTX 3090 Ti.
You can use an Anaconda environment to run this on your local machine:
```
conda create --name dsdv0.5 python=3.8.5 -y
conda activate dsdv0.5
```
Then cd into the cloned folder and run the setup script:
```
python setup.py
```
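After setup, a quick sanity check can confirm that CUDA is visible to PyTorch before you start a long render. This is a minimal, optional sketch (not part of the repo); it only assumes PyTorch was installed by the setup step:

```python
# check_gpu.py -- optional sanity check after setup (not part of the repo)
import torch

# Confirm that PyTorch sees the GPU; Deforum runs are impractically slow on CPU.
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```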
Most of these files will be downloaded automatically during your first run. If not, you can download them manually (a quick pre-flight check is sketched after the list below):
- You need to get the `v1-5-pruned.ckpt` file and put it in the `./models` folder. It can be downloaded from HuggingFace.
- Additionally, you should put `dpt_large-midas-2f21e586.pt` in the `./models` folder as well; the download link is here.
- There is one more file, `AdaBins_nyu.pt`, which should be downloaded into the `./pretrained` folder; the download link is here.
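The exact paths above matter. The following minimal sketch (the file names and folders are taken from the list above; nothing else is assumed) can confirm everything is in place before the first run:

```python
# check_models.py -- optional pre-flight check for the required weight files
from pathlib import Path

REQUIRED = {
    Path("./models/v1-5-pruned.ckpt"): "Stable Diffusion 1.5 checkpoint",
    Path("./models/dpt_large-midas-2f21e586.pt"): "MiDaS depth model",
    Path("./pretrained/AdaBins_nyu.pt"): "AdaBins depth model",
}

# Report each file and abort if anything is missing.
missing = [path for path in REQUIRED if not path.is_file()]
for path, description in REQUIRED.items():
    status = "OK" if path.is_file() else "MISSING"
    print(f"[{status}] {path} ({description})")
if missing:
    raise SystemExit("Download the missing files before running run.py.")
```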
The run command should look like this:
```
python run.py --settings "./settings/animation_settings.json" --generate_video true
```
The output will be written to the ./output folder.
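If you want to render several settings files back to back, a small wrapper around the same command works. This is only an illustrative sketch, not part of the repo; the flags are exactly those shown above, and the ./settings glob is an assumption about where you keep your JSON files:

```python
# batch_run.py -- render several settings files one after another (illustrative only)
import subprocess
from pathlib import Path

# Any .json files placed in ./settings will be rendered in sorted order.
for settings_file in sorted(Path("./settings").glob("*.json")):
    print(f"Rendering {settings_file} ...")
    subprocess.run(
        ["python", "run.py", "--settings", str(settings_file), "--generate_video", "true"],
        check=True,
    )
```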
The required variables and prompts for Deforum Stable Diffusion are set in the JSON file found in the settings folder; I have also provided the settings used for the example videos in the "examples" folder.
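Before editing a settings file by hand, it can help to see which keys an example defines. The snippet below is a minimal sketch: it assumes nothing about the key names and simply loads whichever settings JSON you point it at (the path shown is the one from the run command above):

```python
# inspect_settings.py -- list the keys and values in a Deforum settings JSON
import json
from pathlib import Path

settings_path = Path("./settings/animation_settings.json")  # path from the run command above
settings = json.loads(settings_path.read_text())

# Print each top-level setting so you can see what the example defines.
for key, value in sorted(settings.items()):
    print(f"{key}: {value!r}")
```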
Thanks to
- Stable Diffusion by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer and the Stability.ai Team.
- K Diffusion by Katherine Crowson.
- Notebook by deforum
Enjoy!





