Stable Diffusion thread

x1243x

How many times do they regenerate the prompt to get each frame right and consistent? That's what they don't tell you.
Doesn't matter. They have the good hardware and software, so the generation time is a lot faster.
 

BlackWing1977

I personally gave up trying to run any models locally. It has become like Bitcoin mining was back then: too slow, even though it's good for bypassing censors.
I still keep an old copy of Amuse installed that has the censor bypass... the later versions all have the censor built in...
 

AikiBoy

A bit of a sideline... this local YT channel is crazy. Views are in the millions.

Looked through some of the videos... I admit that beyond the graphics, I don't know what this channel is about. Lol
 

doogyhatts

A bit of a sideline... this local YT channel is crazy. Views are in the millions.

Looked through some of the videos... I admit that beyond the graphics, I don't know what this channel is about. Lol
The channel is non-monetised because the content is targeted at children.
 

doogyhatts

Wao, I searched a YouTube monetisation estimate site. It says each video is estimated to generate 75k USD. If monetised, the channel owner would huat big.
The owner of the channel can monetise it in the future, because the children it targeted will have all grown up to 18+.
But he needs to make new content that is not aimed at children.
 

V-FORTY

@doogyhatts have you tried using web tools for your video generation? Some web tools are quite impressive, able to combine scenes into fluid videos. Or is the quality of web tools too low?
 

doogyhatts

@doogyhatts have you tried using web tools for your video generation? Some web tools are quite impressive, able to combine scenes into fluid videos. Or is the quality of web tools too low?
I am using KlingAI right now. They have both a web-browser version and a desktop application.
I also used Sora2, but only for its multi-shot storytelling solution.

Sora2 outputs 352p resolution at the free tier, so I still have to use Z-Image-Turbo and Wan2.2 to upscale the keyframes.
After that, I use the images at 1080p resolution in KlingAI, with either the video-2.6 or the video-O1 model.
Additional shots can be made in Kling Lab, which has Nano-banana-pro. I use it to make my character sheets.

The overall quality has reached cinematic realism, depending on the model.
Luma's Ray3 is expensive, but that's because it's from the US.
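Roughly, the keyframe step could be scripted like this. It is only a minimal sketch under my own assumptions: the folder names are made up, the plain Lanczos resize is just a stand-in for the Z-Image-Turbo/Wan2.2 upscale, and no KlingAI or Sora2 API is called since I'm not assuming any particular interface for them.

```python
# Minimal sketch of the keyframe-upscaling step described above.
# The Lanczos resize is only a stand-in for the Z-Image-Turbo / Wan2.2 upscale,
# and no KlingAI or Sora2 API is assumed -- the 1080p files are uploaded manually.
from pathlib import Path
from PIL import Image

TARGET_RES = (1920, 1080)   # resolution of the keyframes fed into KlingAI

def naive_upscale(src: Path, dst: Path) -> None:
    """Stand-in upscale: plain Lanczos resize of a ~352p keyframe to 1080p."""
    img = Image.open(src).convert("RGB")
    img.resize(TARGET_RES, Image.LANCZOS).save(dst)

def prepare_keyframes(in_dir: Path, out_dir: Path) -> list[Path]:
    """Upscale every keyframe in in_dir and return the 1080p files."""
    out_dir.mkdir(parents=True, exist_ok=True)
    results = []
    for frame in sorted(in_dir.glob("*.png")):
        hires = out_dir / (frame.stem + "_1080p.png")
        naive_upscale(frame, hires)
        results.append(hires)
    return results

if __name__ == "__main__":
    # Hypothetical folder names, for illustration only.
    for f in prepare_keyframes(Path("sora2_keyframes"), Path("keyframes_1080p")):
        print("ready for KlingAI:", f)
```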
 

AikiBoy

I am using KlingAI right now. They have both a web-browser version and a desktop application.
I also used Sora2, but only for its multi-shot storytelling solution.

Sora2 outputs 352p resolution at the free tier, so I still have to use Z-Image-Turbo and Wan2.2 to upscale the keyframes.
After that, I use the images at 1080p resolution in KlingAI, with either the video-2.6 or the video-O1 model.
Additional shots can be made in Kling Lab, which has Nano-banana-pro. I use it to make my character sheets.

The overall quality has reached cinematic realism, depending on the model.
Luma's Ray3 is expensive, but that's because it's from the US.
Is Sora2 the old one or the new one?
 

doogyhatts

Open-source models can now do cinematic hard-cuts similar to Sora-2 (without the audio).
The creator used an RTX 4080 to do this.

 

doogyhatts

But it can only start with the first frame, right? Can't use middle or last frames.
Yes.
The input frame is likely a medium shot showing the upper body of the person and the table.
But notice that the table is slightly different later on.
Then again, I am not exactly sure what his input image is, but the whole process is more like reference-to-video.
So I wonder if it can take a large 2x2 image as input.
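For the 2x2 idea, something like this could assemble the grid before trying it as the reference input. It is just a Pillow sketch under my own assumptions, not anything from his actual workflow, and the file names and tile size are made up:

```python
# Sketch: tile four reference frames into one 2x2 grid image using Pillow.
# Illustrative only -- the file names and tile size are assumptions.
from PIL import Image

def make_2x2_grid(paths, tile_size=(960, 540), out_path="reference_2x2.png"):
    """Paste four frames into a 2x2 grid (result is twice the tile size each way)."""
    assert len(paths) == 4, "need exactly four reference frames"
    tw, th = tile_size
    grid = Image.new("RGB", (tw * 2, th * 2))
    for i, p in enumerate(paths):
        tile = Image.open(p).convert("RGB").resize((tw, th), Image.LANCZOS)
        grid.paste(tile, ((i % 2) * tw, (i // 2) * th))
    grid.save(out_path)
    return out_path

# Example: make_2x2_grid(["shot1.png", "shot2.png", "shot3.png", "shot4.png"])
```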

Btw, if there are new standards in AI short-film production, I will post them in the other thread.
This one is the new standard for doing WoW-based storytelling content using AI tools/models:
https://forums.hardwarezone.com.sg/...beedeo-sound-kong-chio.7130521/post-158172829
 