diff --git a/docs/taesd.md b/docs/taesd.md
index 5160b7934..a41c64d43 100644
--- a/docs/taesd.md
+++ b/docs/taesd.md
@@ -14,4 +14,26 @@ curl -L -O https://huggingface.co/madebyollin/taesd/resolve/main/diffusion_pytor
 
 ```bash
 sd-cli -m ../models/v1-5-pruned-emaonly.safetensors -p "a lovely cat" --taesd ../models/diffusion_pytorch_model.safetensors
-```
\ No newline at end of file
+```
+
+### Qwen-Image and Wan (TAEHV)
+
+sd.cpp also supports [TAEHV](https://github.com/madebyollin/taehv) (#937), which can be used with Qwen-Image and Wan.
+
+- For **Qwen-Image, Wan2.1, and Wan2.2-A14B**, download the Wan2.1 TAE [safetensors weights](https://github.com/madebyollin/taehv/blob/main/safetensors/taew2_1.safetensors)
+
+  Or via curl:
+
+  ```bash
+  curl -L -O https://github.com/madebyollin/taehv/raw/refs/heads/main/safetensors/taew2_1.safetensors
+  ```
+
+- For **Wan2.2-TI2V-5B**, use the Wan2.2 TAE [safetensors weights](https://github.com/madebyollin/taehv/blob/main/safetensors/taew2_2.safetensors)
+
+  Or via curl:
+
+  ```bash
+  curl -L -O https://github.com/madebyollin/taehv/raw/refs/heads/main/safetensors/taew2_2.safetensors
+  ```
+
+Then simply replace `--vae xxx.safetensors` with `--tae xxx.safetensors` in your commands. If you still run out of VRAM, add `--vae-conv-direct` to the command, though it may be slower.
diff --git a/docs/wan.md b/docs/wan.md
index ce15ba589..6f5749c8c 100644
--- a/docs/wan.md
+++ b/docs/wan.md
@@ -39,6 +39,9 @@
   - safetensors: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/split_files/vae/wan_2.1_vae.safetensors
 - wan_2.2_vae (for Wan2.2 TI2V 5B only)
   - safetensors: https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/vae/wan2.2_vae.safetensors
+
+  > The Wan VAE requires a lot of VRAM! If you do not have enough, try the TAE instead, though the results may be poorer. For TAE usage, please refer to [taesd](taesd.md).
+
 - Download umt5_xxl
   - safetensors: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/split_files/text_encoders/umt5_xxl_fp16.safetensors
   - gguf: https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main
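
For reference, here is a minimal sketch of the `--vae` → `--tae` swap that the new `docs/taesd.md` paragraph describes, assuming a Wan2.2-TI2V-5B run. Only `-m`, `-p`, `--vae`, `--tae`, and `--vae-conv-direct` are taken from the docs above; the model filename and paths are illustrative placeholders, not verified sd-cli invocations.

```bash
# Full Wan VAE decode (high VRAM usage); model path is an illustrative placeholder:
sd-cli -m ../models/wan2.2-ti2v-5b.safetensors -p "a lovely cat" \
  --vae ../models/wan2.2_vae.safetensors

# Same command with TAEHV swapped in (lower VRAM, possibly poorer results):
sd-cli -m ../models/wan2.2-ti2v-5b.safetensors -p "a lovely cat" \
  --tae ../models/taew2_2.safetensors

# If that still runs out of VRAM, add --vae-conv-direct (may be slower):
sd-cli -m ../models/wan2.2-ti2v-5b.safetensors -p "a lovely cat" \
  --tae ../models/taew2_2.safetensors --vae-conv-direct
```

For Qwen-Image, Wan2.1, or Wan2.2-A14B, substitute `taew2_1.safetensors` as listed in the docs above.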