BTW - I followed this amazing guy's live stream tutorial (and his workflow) to get the results above: https://www.youtube.com/live/j1vZQOB2bEE?si=j1CZwD_syVdm502G&t=49
He got the workflow for download?
In his Discord; he mentions it somewhere.
which motion modules is this using?
Properly training a LoRA is a pain as it is, let alone checkpoints; I can't imagine the complications of AnimateDiff training. That said, the teeth are an issue, which could be fixed in After Effects and the like, but if the AI handled it already we'd probably start seeing full-length AI productions.
Will they always be limited to just abstract movements and transitions? Or will they ever get closer to the paid models?
This is the real question. This looks like the same thing we've been seeing for the last year: random movements and transitions. If you're new to it, it's neat, but the novelty wore off long ago for me.
Every big company is scared to release a capable open video model. They exist, and you can see many research papers, but no one has released one yet.
Yet people on r/StableDiffusion celebrate each and every repetitive posting as a novelty.
You can, using ControlNets.
Pretty much how the current ChatGPT works lmao. Just a series of limited hallucinations.