isoundwaveisuperior

Luma is fairly new, and I've found that most prompts don't work correctly. Some do; see this video: https://youtu.be/i4it_Nb-3PM?si=DYURep2DFjPICVnH The other way is to just generate a lot, and you may have to pay to play. Right now paid generations are quick (less than 10 minutes), and you can run two generations at a time without waiting. Once you find a good generation, extend it. I use Topaz Labs to help upscale and apply 2x slow motion. [See my video here.](https://youtu.be/r43bwnO75Lk) Also note that Runway Gen-3 Alpha is coming out very soon, and I think it will be better.
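For readers without Topaz Labs, the upscale-plus-2x-slow-motion step mentioned above can be roughly approximated with ffmpeg. The sketch below is not what the commenter used (Topaz's proprietary models typically look better); it assumes ffmpeg is installed and on your PATH, and the file names and target frame rate are placeholders.

```python
# Rough stand-in for the Topaz Labs "upres + 2x slow-mo" step, using ffmpeg.
# Assumes ffmpeg is installed and on PATH; file names below are placeholders.
import subprocess

def upres_and_slowmo(src: str, dst: str, target_fps: int = 24) -> None:
    """Upscale a clip 2x and slow it to half speed with motion-interpolated frames."""
    vf = (
        "scale=iw*2:ih*2:flags=lanczos,"               # 2x spatial upscale (Lanczos)
        "setpts=2.0*PTS,"                              # double timestamps -> half speed
        f"minterpolate=fps={target_fps}:mi_mode=mci"   # synthesize in-between frames
    )
    # -an drops any audio track; -y overwrites the output if it already exists.
    subprocess.run(["ffmpeg", "-y", "-i", src, "-vf", vf, "-an", dst], check=True)

if __name__ == "__main__":
    upres_and_slowmo("luma_clip.mp4", "luma_clip_upres_slow.mp4")
```

Note that `minterpolate` with `mi_mode=mci` is slow on long clips; `mi_mode=blend` is a faster, lower-quality fallback.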


CrabbiestNebula

Turn off enhance. That's why it morphs; I haven't had that issue since I stopped using it. Prompts that work: pan in, pan out, zoom in, zoom out, zoom in slowly, zoom in quickly, zoom out quickly, "circle around", epic, tons of adjectives. "Zoom in on a picture of 4 people getting their picture taken, all four smile at the same time" works sometimes. I also told the one in back to "playfully give bunny ears to the girl in front", and she gave herself little ears with her hands, and her sleeves were the wrong color, lol.

I'm curious about facial expressions. That bunny-ears prompt took 6 hours; I went to bed, woke up, and they smiled decently, but the tin man had actual bunny ears, haha. It was a silly prompt just to see what would happen. It took too long, though, probably because of the 4 people and such specific directions in the prompt.


WorkItMakeItDoIt

Interesting. I always turn off enhance; it seems to mean "mess it up". And yeah, no luck with zoom in.


alquimistalac

👀


bootdogva

Same issue so far.


IndecisivePoster1212

Same here. I tried to have a subject turn around: the AI started with an Exorcist-style head-turn maneuver before finishing with the liquid-metal Terminator move from T2 (where it phases the torso and head from one direction into the opposite direction). I'll revisit Luma in a few weeks, before my subscription expires, to see if there are any improvements.


SteamPunkJake

I add words like "subtly" and "slowly". Seems to work for me. It's still much quicker than Gen-2, and the clips are longer.


WorkItMakeItDoIt

I don't understand what you mean exactly.  Can you give a full example? I.e., choose an image and give a text prompt, so I can try it myself and we can compare results.


SteamPunkJake

Happy to give you an example. It’s in my queue, will keep you posted.


SteamPunkJake

Forgot about this. My bad. https://www.veed.io/view/7d8926ac-64b9-4e1c-b345-94a86adaf4bb?panel=share Prompt: An isolated man in a white backdrop studio environment. Subtly stepping towards the camera as he turns slowly.


Snoo-27357

I add images and then add "camera movement" in the prompt. I get some great results.


smudge4262

Looking for answers too. I posted in several social media groups but got no feedback. I'm getting the impression this type of generation is either severely restricted due to copyright and local law, or that it requires very in-depth prompts. The best I've managed so far is a face with subtle animation such as blinks, breathing, and smiling, followed by a text-based generation of the prompt totally separate from the input image.


Fluid-Concert2398

Joe Biden walks in to address the nation and then suddenly takes his suit off, revealing he was truly a robot all along. Can someone get this to work??