Stable Diffusion thread

doogyhatts

Arch-Supremacy Member
Joined
Feb 13, 2018
Messages
12,316
Reaction score
3,362
That’s quite good, considering it doesn’t OOM.
Once MagCache is ready for Flux2, it will help reduce the inference time.

SVDQuant might take a while, as the model needs to be quantized to FP4.
A smaller model also reduces loading time and inference time.
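SVDQuant proper runs on the Nunchaku engine with its own pre-quantized checkpoints, so the snippet below is only a rough sketch of the general idea: 4-bit weight quantization shrinking the diffusion transformer so it loads faster and fits in less VRAM. It uses diffusers with bitsandbytes FP4 on FLUX.1-dev as a stand-in (the repo ID, step count, and prompt are placeholders, not the actual SVDQuant/Flux2 setup).

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# 4-bit (fp4) weight quantization via bitsandbytes -- a generic stand-in,
# not the SVDQuant/Nunchaku FP4 kernels discussed above.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Quantize only the big transformer; the text encoders and VAE stay as-is.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",   # placeholder repo ID
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # helps avoid OOM on smaller GPUs

image = pipe("a tabby cat on a rooftop at dusk", num_inference_steps=28).images[0]
image.save("test.png")
```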
 

jadedRomeo

Master Member
Joined
Jul 4, 2015
Messages
2,829
Reaction score
638
I personally gave up trying to run any models locally. It has become like Bitcoin mining was back then: too slow, even though it's good for bypassing censorship.
 

doogyhatts

Arch-Supremacy Member
Joined
Feb 13, 2018
Messages
12,316
Reaction score
3,362
Yay! I got the Z-Image Q8_0 GGUF to work with the Qwen3-4B-UD-Q8_K_XL GGUF.
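For anyone trying to reproduce this, here is a minimal sketch of fetching the two GGUF files into a ComfyUI install with huggingface_hub. The repo IDs and exact filenames are placeholders (point them at whichever community GGUF uploads you are actually using), and the folder names assume a recent ComfyUI layout (older installs use models/unet and models/clip instead). The GGUF files are then typically loaded through the ComfyUI-GGUF custom node pack.

```python
from pathlib import Path
from huggingface_hub import hf_hub_download

# Adjust to your ComfyUI install location (assumption).
COMFYUI = Path.home() / "ComfyUI"

# The repo_id / filename pairs below are placeholders -- substitute the actual
# community GGUF uploads of Z-Image and the Qwen3-4B text encoder.
downloads = [
    ("your-namespace/z-image-gguf", "z-image_Q8_0.gguf",
     COMFYUI / "models" / "diffusion_models"),
    ("your-namespace/Qwen3-4B-gguf", "Qwen3-4B-UD-Q8_K_XL.gguf",
     COMFYUI / "models" / "text_encoders"),
]

for repo_id, filename, target_dir in downloads:
    target_dir.mkdir(parents=True, exist_ok=True)
    path = hf_hub_download(repo_id=repo_id, filename=filename,
                           local_dir=str(target_dir))
    print("saved", path)
```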
 

BlackWing1977

Greater Supremacy Member
Joined
Mar 1, 2006
Messages
97,805
Reaction score
35,462
A new image model by Alibaba.
https://tongyi-mai.github.io/Z-Image-blog/
https://github.com/Tongyi-MAI/Z-Image

You need to download the model and use it in either ComfyUI or Wan2GP.
Wan2GP will auto-download the model.
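For the ComfyUI route, one way to grab the weights ahead of time is a snapshot download; the repo ID below is an assumption, so check the model card behind the links above for the real one. Wan2GP users can skip this, since it auto-downloads.

```python
from huggingface_hub import snapshot_download

# Repo ID is a placeholder -- use the one given on the Z-Image model card.
local_dir = snapshot_download(
    repo_id="Tongyi-MAI/Z-Image",   # assumption, verify on Hugging Face
    local_dir="./models/z-image",   # where to put the weights
)
print("weights downloaded to", local_dir)
```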
Hmm... I think it still looks quite complex to set up... that's why I still haven't tried ComfyUI, and I really don't have the interest to keep trying all these things either. :sick:

But I do think it will eventually become common enough that a more refined solution will be available.
 