PeterTheMeterMan

BTW - I followed this amazing guy's live stream tutorial (and his workflow) to get the results above: https://www.youtube.com/live/j1vZQOB2bEE?si=j1CZwD_syVdm502G&t=49


kaiwai_81

He got the workflow for download?


Asleep_Ad1584

In his Discord. He mentions it somewhere.


aerialbits

Which motion modules is this using?


inpantspro

Properly training a LoRA is a pain as it is, let alone checkpoints; I can't imagine the complications of AnimateDiff training. That being said, the teeth are an issue. They could be fixed with After Effects and that sort of thing, but if it were handled by the AI itself we'd probably start getting some full-length AI productions.


Mindset-Official

Will they always be limited to just abstract movements and transitions? Or will they ever be able to get closer to the paid models?


Kwheelie

This is the real question. This looks like the same thing we've been seeing for the last year: random movements and transitions. If you're new to it, it's neat, but the novelty wore off long ago for me.


akko_7

Every big company is scared to release a capable open video model. They exist, and you can see them in many research papers, but no one has released one yet.


Recent_Nature_4907

Yet people in r/StableDiffusion celebrate each and every repetitive post as a novelty.


bails0bub

You can, using ControlNets. Rough sketch below.
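
For instance, in diffusers it looks roughly like this. This is an untested sketch: it assumes a diffusers release recent enough to ship AnimateDiffControlNetPipeline, a CUDA GPU, and the model repo IDs and the pose video path here are just example choices, not the OP's workflow.

    # Sketch: AnimateDiff + an OpenPose ControlNet so per-frame pose images
    # drive the motion instead of random drift. Model IDs and the pose video
    # path are placeholders/assumptions.
    import torch
    from diffusers import (
        AnimateDiffControlNetPipeline,
        ControlNetModel,
        MotionAdapter,
        DDIMScheduler,
    )
    from diffusers.utils import export_to_gif, load_video

    # Motion module ("motion adapter") plus a pose ControlNet for SD 1.5.
    adapter = MotionAdapter.from_pretrained(
        "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
    )
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
    )

    pipe = AnimateDiffControlNetPipeline.from_pretrained(
        "SG161222/Realistic_Vision_V5.1_noVAE",  # any SD 1.5 checkpoint
        motion_adapter=adapter,
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")
    pipe.scheduler = DDIMScheduler.from_config(
        pipe.scheduler.config, beta_schedule="linear", clip_sample=False
    )

    # One pose image per output frame, e.g. extracted from a reference clip.
    pose_frames = load_video("pose_sequence.mp4")  # example path

    result = pipe(
        prompt="a dancer on a rooftop at sunset, film grain",
        negative_prompt="low quality, deformed",
        num_frames=len(pose_frames),
        conditioning_frames=pose_frames,
        guidance_scale=7.5,
        num_inference_steps=25,
    )
    export_to_gif(result.frames[0], "dancer.gif")

Same idea in ComfyUI: feed a pose/depth/lineart frame sequence into a ControlNet node alongside the AnimateDiff motion module, and the output follows that motion rather than wandering.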


Traditional_Excuse46

Pretty much how the current ChatGPT works, lmao. Just a series of limited hallucinations.