doogyhatts
Arch-Supremacy Member
- Joined
- Feb 13, 2018
- Messages
- 12,316
- Reaction score
- 3,362
4-5 mins per image for those using an 8GB VRAM GPU?
Once MagCache is ready for Flux2, it will help reduce the inference time.
That's quite good, considering it doesn't OOM.
17.7 seconds for the second generation and 22 seconds for the first generation. 3080 Ti here.
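The gap between the two runs is mostly the one-time model load. A quick back-of-envelope check, using only the timings quoted in this thread, shows what that works out to at steady state:

```python
# Back-of-envelope throughput check using the timings quoted in this thread.
# The first generation includes one-time model loading; later runs reflect
# steady-state generation speed.

def images_per_hour(seconds_per_image: float) -> float:
    """Steady-state throughput given per-image generation time."""
    return 3600.0 / seconds_per_image

first_run = 22.0   # seconds, includes model load (3080 Ti numbers above)
steady = 17.7      # seconds, second generation onward
load_overhead = first_run - steady  # one-time cost on the first run

print(f"~{images_per_hour(steady):.0f} images/hour at steady state")
print(f"one-time overhead on first run: {load_overhead:.1f} s")
```

By the same arithmetic, the 4-5 minutes per image reported for 8GB cards is roughly 15x slower than the 3080 Ti's steady-state time.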
Z-Image's diffusion model architecture is a slightly modified, larger version of the Lumina 2 architecture, trained from scratch.
https://comfyanonymous.github.io/ComfyUI_examples/z_image/

So now people are using Z-Image as a refiner for Flux1.
A new image model by Alibaba.
What is that? Is it online, or can I download and install it?
Hmm... I think it's still quite complex to set up... that's why I haven't tried ComfyUI until now, and I don't really have the interest to keep trying all these things either.
A new image model by Alibaba.
https://tongyi-mai.github.io/Z-Image-blog/
https://github.com/Tongyi-MAI/Z-Image
You need to download the model and use it inside either ComfyUI or Wan2GP.
Wan2GP will auto-download the model.
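For the Wan2GP route, a minimal setup sketch. The repo path (deepbeepmeep/Wan2GP), requirements file, and `wgp.py` entry point are assumptions based on the project's README, so verify against the current install instructions before running:

```shell
# Sketch of a Wan2GP setup -- repo and entry-point name assumed from the
# project's README (deepbeepmeep/Wan2GP); check the current instructions.
git clone https://github.com/deepbeepmeep/Wan2GP.git
cd Wan2GP
pip install -r requirements.txt   # ideally inside a fresh venv/conda env
python wgp.py                     # launches the web UI; models auto-download on first run
```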