A workflow developed while experimenting with Z-Image-Turbo, extending the ComfyUI base workflow with additional features. This repository includes pre-configured workflows for GGUF and SAFETENSORS formats.
- Style Selector: Choose from fourteen customizable image styles for experimentation.
- Sampler Selector: Easily pick between the two optimal samplers.
- Preconfigured workflows for each checkpoint format (GGUF / SafeTensors).
- Custom sigma values, subjectively adjusted.
- Generated images are saved in the "ZImage" folder, organized by date.
- Includes a trick to enable automatic CivitAI prompt detection.
The repository contains two workflow files:
- "amazing_zimage-GGUF.json": Recommended for GPUs with 12GB of VRAM or less.
- "amazing_zimage-SAFETENSORS.json": Based directly on the ComfyUI example.
You'll often come across discussions about the best file format for ComfyUI. Based on my experience, GGUF quantized models offer a better balance between compactness and maintaining good prompt response compared to SafeTensors versions. However, it's also true that ComfyUI has internal speed enhancements that work more effectively with SafeTensors, which might lead you to prefer larger SafeTensors files. The reality is that this depends on several factors: your ComfyUI version, PyTorch setup, CUDA configuration, GPU type, and available VRAM and RAM. To help you out, I've included links below to various checkpoint versions so you can determine what works best for your specific system.
These nodes can be installed via ComfyUI-Manager or downloaded from their respective repositories.
- rgthree: Required for both workflows.
- ComfyUI-GGUF: Required if you are using the workflow preconfigured for GGUF checkpoints.
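If you prefer not to use ComfyUI-Manager, a manual git-based install might look like the sketch below. The GitHub URLs are my assumption of the upstream projects for these nodes, and `COMFYUI_ROOT` is a placeholder for your install path; verify both before running.

```shell
# Manual install sketch for the required custom nodes.
# COMFYUI_ROOT and the repository URLs are assumptions -- verify them first.
COMFYUI_ROOT="${COMFYUI_ROOT:-$HOME/ComfyUI}"
NODES_DIR="$COMFYUI_ROOT/custom_nodes"
mkdir -p "$NODES_DIR"
cd "$NODES_DIR"

# rgthree: required for both workflows.
[ -d rgthree-comfy ] || git clone https://github.com/rgthree/rgthree-comfy.git

# ComfyUI-GGUF: only needed for the GGUF workflow.
[ -d ComfyUI-GGUF ] || git clone https://github.com/city96/ComfyUI-GGUF.git

# ComfyUI-GGUF has extra Python dependencies:
pip install -r ComfyUI-GGUF/requirements.txt 2>/dev/null || true

# Restart ComfyUI afterwards so the new nodes are picked up.
```

Restarting ComfyUI after the install is what makes the nodes appear in the node search.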
This is my recommended workflow.
Using Q5_K_S quants, you will likely achieve the best balance between file size and prompt response.
- z_image_turbo-Q5_K_S.gguf (5.19 GB)
  Local Directory: ComfyUI/models/diffusion_models/
- Qwen3-4B.i1-Q5_K_S.gguf (2.82 GB)
  Local Directory: ComfyUI/models/text_encoders/
- ae.safetensors (335 MB)
  Local Directory: ComfyUI/models/vae/
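The three downloads above go into standard ComfyUI model folders. A small shell sketch of the expected layout, assuming `COMFYUI_ROOT` points at your install (adjust the path as needed):

```shell
# Expected on-disk layout for the GGUF workflow's models.
# COMFYUI_ROOT is an assumption -- adjust to your install path.
COMFYUI_ROOT="${COMFYUI_ROOT:-$HOME/ComfyUI}"

mkdir -p "$COMFYUI_ROOT/models/diffusion_models" \
         "$COMFYUI_ROOT/models/text_encoders" \
         "$COMFYUI_ROOT/models/vae"

# Place the downloaded files like so:
#   models/diffusion_models/z_image_turbo-Q5_K_S.gguf
#   models/text_encoders/Qwen3-4B.i1-Q5_K_S.gguf
#   models/vae/ae.safetensors

# Quick sanity check that the folders exist:
for d in diffusion_models text_encoders vae; do
  [ -d "$COMFYUI_ROOT/models/$d" ] && echo "ok: models/$d"
done
```

The same three folders are reused by the SafeTensors workflow below, so you only need to set them up once.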
Based directly on the official ComfyUI example.
While it may require more than 12GB of VRAM to run smoothly, ComfyUI's optimizations may allow it to work well on your system.
- z_image_turbo_bf16.safetensors (12.3 GB)
  Local Directory: ComfyUI/models/diffusion_models/
- qwen_3_4b.safetensors (8.04 GB)
  Local Directory: ComfyUI/models/text_encoders/
- ae.safetensors (335 MB)
  Local Directory: ComfyUI/models/vae/
If neither of the two provided versions nor their associated checkpoints perform adequately on your system, you can find links to several alternative checkpoint files below. Feel free to experiment with these options to determine which works best for you.
- Z-Image-Turbo (GGUF Quantizations): This repository hosts various quantized versions of the z_image_turbo model (e.g., Q4_K_S, Q4_K_M, Q3_K_S). While some of these quantizations offer significantly reduced file sizes, this often comes at the expense of final output quality.
- Z-Image-Turbo (FP8 SafeTensors): Similar to the GGUF options, this repository provides two z_image_turbo models quantized to FP8 (8-bit floating point) in SafeTensors format. These can serve as replacements for the original SafeTensors model, but in my opinion they noticeably degrade quality.
- Qwen3-4B (Various GGUF Quantizations): This repository offers various quantized versions of the Qwen3-4B text encoder in GGUF format (e.g., Q2_K, Q3_K_M). Note: quantizations beginning with "IQ" might not work, as the GGUF node did not support them during my testing.
This project is released under the Unlicense.
See the "LICENSE" file for details.

