Quantized GGUF versions of the TwinFlow Z-Image Turbo model, for stable-diffusion.cpp.
Converted from https://huggingface.co/azazeal2/TwinFlow-Z-Image-Turbo-repacked with stable-diffusion.cpp. Example conversion:

```shell
./sd-cli --mode convert --model TwinFlow_Z_Image_Turbo_exp_bf16.safetensors --tensor-type-rules "^context_refiner.*(attention\.(out|qkv)|feed_forward).*weight=q8_0,^(layers|noise_refiner).*(adaLN_modulation|attention\.(out|qkv)|feed_forward).*weight=q5_0" --output TwinFlow_Z_Image_Turbo_exp-Q5_0.gguf
```
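The `--tensor-type-rules` argument is a comma-separated list of `<tensor-name-regex>=<quant-type>` pairs: here the context refiner weights stay at q8_0 while the main transformer layers and noise refiner are quantized to q5_0. As an untested sketch, the other quantization levels in this repository can be produced the same way by swapping the quant type in the second rule, e.g. for Q4_0:

```shell
# Sketch only: same rules as the Q5_0 conversion above, with q5_0 swapped
# for q4_0 on the main layers; the context refiner is kept at q8_0.
./sd-cli --mode convert \
  --model TwinFlow_Z_Image_Turbo_exp_bf16.safetensors \
  --tensor-type-rules "^context_refiner.*(attention\.(out|qkv)|feed_forward).*weight=q8_0,^(layers|noise_refiner).*(adaLN_modulation|attention\.(out|qkv)|feed_forward).*weight=q4_0" \
  --output TwinFlow_Z_Image_Turbo_exp-Q4_0.gguf
```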
(Note: I didn't test these with ComfyUI; they may or may not work there!)
## Model Information
See the original model card at https://huggingface.co/inclusionAI/TwinFlow-Z-Image-Turbo
## Usage
You need at least release master-385-34a6fd4 of stable-diffusion.cpp. Also get the VAE and LLM models from https://huggingface.co/leejet/Z-Image-Turbo-GGUF .
Example command:

```shell
./sd-cli --diffusion-model TwinFlow_Z_Image_Turbo_exp-Q4_0.gguf --vae ae_bf16.safetensors --llm qwen_3_4b-Q8_0.gguf --cfg-scale 1 --steps 3 -p "an apple"
```
Parameters:

- `cfg-scale` `1`
- `sampling_method` `euler` (default), with `scheduler` `discrete` (default), `smoothstep`, or `sgm_uniform`, and 2-4 steps
- `sampling_method` `dpm2`, with `scheduler` `smoothstep` or `sgm_uniform`, and 2-3 steps
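Combining the parameters above with the example command, a `dpm2` run might look like the sketch below. The flag names `--sampling-method` and `--scheduler` are assumed from stable-diffusion.cpp's CLI conventions; check `./sd-cli --help` on your build before relying on them:

```shell
# Untested sketch: dpm2 sampler with the smoothstep scheduler and 3 steps.
# Flag names for sampler/scheduler selection are assumptions; verify
# against ./sd-cli --help for your release.
./sd-cli --diffusion-model TwinFlow_Z_Image_Turbo_exp-Q4_0.gguf \
  --vae ae_bf16.safetensors --llm qwen_3_4b-Q8_0.gguf \
  --cfg-scale 1 --sampling-method dpm2 --scheduler smoothstep \
  --steps 3 -p "an apple"
```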
For low-VRAM setups, you may follow the guide *How to Use Z‐Image on a GPU with Only 4GB VRAM*, changing the number of steps to 2-5.
## Credits
- TwinFlow Z-Image Turbo
- Z-Image Turbo by Tongyi-MAI
- Safetensors model: https://huggingface.co/azazeal2/TwinFlow-Z-Image-Turbo-repacked
- GGUF files quantized with mainline stable-diffusion.cpp
## License
Apache 2.0