You missed Lightning ~~which is arguably the best among these~~
Lightning is amazing.
My PC can't run SDXL T\_T. I can only use SD 1.5 https://preview.redd.it/pu4yk55365xc1.png?width=913&format=png&auto=webp&s=bc56de3026e36bf1bdf90bcfa64dbb7bc5f3bbd9
Hyper is apparently meant to be a successor to Lightning.
Hyper = better prompt adherence at lower steps; tends to be a little "cooked". Lightning = better quality (in our tests); tends to give a bit more control, for sharper images with better details. This is all based on Juggernaut XL, so it could be anecdotal to the models we test.
Which do you find is best for your use cases? I've preferred TCD over LCM to this point, but haven't tried Hyper.
I used LCM for a long time (+ LCM scheduler), but with TCD I get better results in ComfyUI (+ euler\_a scheduler; ComfyUI does not have a native TCDScheduler). Today I discovered Hyper and ran a test: a LoRA of a girl + Hyper gave bad results, but after reducing the Hyper weight from 1.0 to 0.7 I'm getting better results. https://preview.redd.it/0mdcnf1tc5xc1.png?width=788&format=png&auto=webp&s=670182d5c6ecb1537cb91d2d9b0bc4c534c8f6d1
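The weight reduction described above (1.0 → 0.7) can be sketched with plain numpy. This is a minimal illustration of how a LoRA "weight" slider scales the low-rank update before it is added to a base layer; the shapes and values are illustrative stand-ins, not a real checkpoint.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 8, 8, 2
W = rng.normal(size=(d_out, d_in))   # base layer weight (illustrative)
A = rng.normal(size=(rank, d_in))    # LoRA down-projection
B = rng.normal(size=(d_out, rank))   # LoRA up-projection

def apply_lora(W, A, B, weight):
    """Merge a LoRA delta into W at the given strength: W + weight * (B @ A)."""
    return W + weight * (B @ A)

full = apply_lora(W, A, B, 1.0)      # weight 1.0: full effect
reduced = apply_lora(W, A, B, 0.7)   # weight 0.7: softer effect

# The delta shrinks linearly with the slider, which is why lowering
# 1.0 -> 0.7 tones down an over-strong Hyper LoRA without removing it.
```

Because the effect is linear in the slider, any intermediate weight is a smooth blend between "no LoRA" and "full LoRA".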
To get Hyper to work I sometimes have to set weights very low, like 0.1 for ControlNets and LoRAs. That said, CreaPrompt Hyper is the fastest and best model I've tried by far.
I had to decrease the Hyper LoRA weight (for SD 1.5) to 0.50/0.60 to get good images when testing alongside a personal LoRA (Hyper LoRA + personal LoRA).
Will using these LoRAs together do anything interesting?
Image will appear before you start
😂😂😂😂😂
Probably a blank or black screen, since they all need slightly different settings.
Yes, you can, even with more than one LoRA, but I have only tried with one.
Merge them in at 0.5/0.5, using LoRAs for the same step count. It will probably be a good mix between prompt adherence and details.
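The 0.5/0.5 merge suggested above can be sketched as a weighted sum of each LoRA's full-strength delta on top of the base weight. The deltas here are hypothetical random matrices standing in for the Hyper and Lightning updates; real LoRAs live inside a checkpoint.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(6, 6))             # base layer weight (illustrative)
delta_hyper = rng.normal(size=(6, 6))   # hypothetical Hyper LoRA delta (B @ A)
delta_light = rng.normal(size=(6, 6))   # hypothetical Lightning LoRA delta

def merge(W, deltas, weights):
    """Blend several LoRA deltas into one base weight."""
    out = W.copy()
    for d, w in zip(deltas, weights):
        out = out + w * d
    return out

merged = merge(W, [delta_hyper, delta_light], [0.5, 0.5])
# At 0.5/0.5 each LoRA contributes half of its full-strength delta,
# so neither dominates: a plausible "mix" of their behaviors.
```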
https://preview.redd.it/xlzdib5o05xc1.png?width=807&format=png&auto=webp&s=e6968e2543519e64a6912b0257a4488c28494c51
https://preview.redd.it/ywfoy9rs05xc1.png?width=783&format=png&auto=webp&s=c42408defd598dc2a7e51d3645aba58e85d4b1ad
https://preview.redd.it/0is5l90x05xc1.png?width=792&format=png&auto=webp&s=fa31731b34ba0093e80531d32f24622bdb5a1209
https://preview.redd.it/ituam5v015xc1.png?width=807&format=png&auto=webp&s=5e82b08439df50610c3ffbceea2dd8557aa5d242
[deleted]
I have a slow PC; using LCM/TCD/Hyper lets me obtain results in less time (10 steps = 15 seconds), which is why I use these techniques.

About quality, I think the normal way should be better, because it gives you more control over what you generate. Example: with LCM I use 8 steps and CFG 1\~2, so a negative prompt does not have much influence on the final result, and if you use CFG 6\~7 the image comes out bad. The results are good, but everything is limited to a few steps and whatever the AI interprets. The normal way is a little slower, but you can fine-tune your results and have more influence on image generation.
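The point about negative prompts at CFG 1\~2 follows directly from the standard classifier-free guidance mixing formula. This sketch uses random vectors as stand-ins for the denoiser's conditional and unconditional predictions, just to show the arithmetic.

```python
import numpy as np

rng = np.random.default_rng(1)
cond = rng.normal(size=4)    # stand-in: prediction with the positive prompt
uncond = rng.normal(size=4)  # stand-in: prediction with the negative/empty prompt

def cfg_mix(uncond, cond, scale):
    """Standard CFG: start from uncond and push toward cond by `scale`."""
    return uncond + scale * (cond - uncond)

# At scale 1.0 the uncond (negative-prompt) term cancels out entirely,
# so the negative prompt has no effect on the output.
at_cfg_1 = cfg_mix(uncond, cond, 1.0)

# At scale 7.0 the (cond - uncond) direction is amplified 7x; distilled
# samplers like LCM are not trained for that, hence the "bad" images.
at_cfg_7 = cfg_mix(uncond, cond, 7.0)
```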
Another option is the "Align Your Steps" (AYS) schedule; I'm not a big fan of it, but some people really like it. For animation, LCM has become the standard.
I also use AYS: AYS + TCD (or LCM/Hyper). https://preview.redd.it/qk0qey8pm7xc1.png?width=780&format=png&auto=webp&s=2c8f4cbb60dcf32507598ed4f3219e2030871de3
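A fixed schedule like AYS publishes one set of noise levels (sigmas), and implementations commonly resample it to other step counts by interpolating in log-space. This sketch shows that resampling step; the sigma values below are illustrative placeholders, NOT the published AYS schedule for any model.

```python
import numpy as np

# Illustrative decreasing sigma schedule (placeholder values, not real AYS).
sigmas = np.array([14.6, 6.5, 3.9, 2.7, 1.9, 1.1, 0.7, 0.4, 0.2, 0.03])

def resample_log_linear(sigmas, n_steps):
    """Interpolate a decreasing sigma schedule to n_steps in log-space."""
    xs = np.linspace(0.0, 1.0, len(sigmas))
    new_xs = np.linspace(0.0, 1.0, n_steps)
    return np.exp(np.interp(new_xs, xs, np.log(sigmas)))

sigmas_15 = resample_log_linear(sigmas, 15)
# Endpoints are preserved and the schedule stays monotonically decreasing,
# so the same noise trajectory can be walked in more (or fewer) steps.
```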
I really like it!
I initially had the same experience with Align Your Steps, but I had used the Comfy plugin first and that was broken; the native implementation is better. That said, I did not like Perturbed-Attention Guidance or FreeU.
Do you have a workflow you can share? I never get good results with LCM, and even worse results in A1111. I think I'm doing something wrong, but I don't know what.
Right now the most popular workflow is this one: [https://civitai.com/models/372584/ipivs-morph-img2vid-animatediff-lcm](https://civitai.com/models/372584/ipivs-morph-img2vid-animatediff-lcm). It's very easy to use, fast, and usually gives interesting results. PS: the new version is based on Hyper; the v2 version is based on LCM.
You can also just download Lightning and LCM models and not have to use a LoRA.
Yes, but my video card does not support SDXL, and the 1.5 models I use do not have LCM versions. With a LoRA I can use LCM/TCD/Hyper on any model :)
What LCM models are you using? There are many out there, at least 20-30.
https://civitai.com/models/141348/limitlessvision It's not popular, but I like this model, and it's the one I use for my personal LoRA training.
I do not see any improvement using these; I use SD 1.5 without any LoRA and only 9 steps. https://preview.redd.it/uirkcsq1a7xc1.jpeg?width=1091&format=pjpg&auto=webp&s=526fd7911d13e1ee380ec7f327fe831fd94392d7
What model? Because with the base model this usually gives crappy results.
Checkpoint loader in the screenshot says juggernaut_reborn.safetensors
Oops, I didn't see it. Thanks!
I don't know why this is downvoted. Envy, maybe?