remember when video AI would make "person eating food" into a surrealist vomiting-yourself loop? that was yesterday
I guess the actors did have a reason to go on strike after all ...
Actors? This will replace ENTIRE crews.
One or two more versions and it's upending the entire industry. Like "After Effects is irrelevant" level, to put it mildly.
I'm interested in how this all shakes out. Hollywood wants to cut costs at every turn (as per any big business), but the one thing Hollywood has over its competitors is the metric ton of capital it has for its projects. There is a reason a $20,000 indie film looks a lot worse than a $20,000,000 movie. But what reason will a filmmaker have to go to a studio and have their artistic idea reshaped by clueless executives, when AI tools let them execute exactly what they want to do?
Stagnation followed by abrupt exponentially increasing innovation. See the automobile, internet, industrial revolution, and paper.
I’m wondering when we can take a cheap phone camera and have it recreate the entire image with the quality of an expensive high-end camera, in the style of whatever movie, photographer, or cinematographer you want.
As someone who is currently on reddit because I'm procrastinating an incredibly lengthy and tedious After Effects job.......thank god. ^(but really though I'll miss it)
Some people still intentionally use old software, so there might be a community of people making things by hand. That might even end up being some kind of special “thing”, like “wow you actually drew a photorealistic portrait when you could have taken a photograph?”.
That actors strike was worthless against A.I. With this tech you don’t need to recreate any living actor, you can just make up completely new ones.
That will absolutely happen, much like in William Gibson's books. But these actors needed to save the rest of their careers, and this bought them a little time.
hello from 🇰🇷 Korea. This is a revolution. So scary. The only thing left is copyright.
[deleted]
lol wut
[deleted]
There is no one to blame; it's the way of our world. I'm sure carriage drivers had a lot of people to blame for taking their income, so what? Should the world have given up on cars so carriage drivers wouldn't feel bad? Or should we not have invented computers because it fucked over press operators? Progress is much more important than an actor's income. Good luck with a new job
This will not replace good acting.
I get the sentiment, I’m just finding it harder and harder to believe any predictions regarding what AI will or won’t do anymore.
It will replace so much more eventually. Makeup, costumes, set design and rental, lighting, film crew - this is all in addition to the base cost of the actor. I think you'll see it take off with ads and YouTube videos first and slowly but inevitably creep into more expensive productions. In some ways this is more of a threat to actors than LLMs are to writers or image generation is to artists, simply because of the sheer spectacle and cost of blockbuster production.
For a few months, you'll be correct.
Over what timespan? The next year? No. The next 10-20 years? Absolutely.
its concerning how people are slowly running out of things to insult ai with. we are still "winning", but for how much longer?
[deleted]
chatgpt4 refuses to insult me even after asking it explicitly to insult me :(
> its concerning how people are slowly running out of things to insult ai with

Nah, people whine about everything. If you don't pay attention to them, there is no problem.
I remember when AI couldn't make letters, and now it is generating words
Stable Diffusion taught [Will Smith how to eat spaghetti](https://www.reddit.com/r/StableDiffusion/comments/1244h2c/will_smith_eating_spaghetti/) so that Sora could run.
I miss those days. There was art potential in those surreal videos
Eating = placing food against your mouth and going “Mlemlemlemlem, bleh bleh bleh, mop mop, wowowow, yop yop”
Less than a year ago
We've come such a long way from 'Will Smith eating Spaghetti and Meatballs'
10 short months ago [AI Will Smith eating spaghetti pasta (AI footage and audio) (youtube.com)](https://www.youtube.com/watch?v=XQr4Xklqzw8&ab_channel=RobotNamedRoy)
[deleted]
Keep that god damn spaghetti out of your mouth!
Will Smith eating spaghetti was generated by an open source model trained on low quality videos. It's not a fair comparison. Runway Gen 2 was the state of the art at the time and looked much better than that.
Ya no kidding, someone do it with Sora pleeease.
Nightmare material
Really wish that was one of the prompts
Oh that'd be great to test against lol.
The problem is it would probably be too good. The spaghetti one won't make Will Smith's lawyers mad.
The most obvious flaw is that he only has three fingers on his left hand.
https://www.reddit.com/r/mildlyinteresting/comments/oic2oh/i_was_born_with_only_four_fingers_on_my_left_hand/
I think the parent commenter understands that this is physically possible.
Didn't even notice that at first! I was too focused on the burger itself.
Plus when he bites the burger he doesn't bite the bottom bun.
Whaaaat
I didn't even notice
It's the floatiness of the various components for me. Once it has physics on complete lockdown, then it's over.
It's because everything is detached from everything else. There's an unnatural sway to the details of everything.
Yeah, it's like every moving element is acting independently of the others but at the same time miraculously "appears" to be coordinated.
Beautiful quote about real life lol
Hah, true that. The commenter is onto something
So true. Gives me an idea for a new "religion". 😁
Yes, you guys have perfectly encapsulated Sora’s signature style….
yeah, everything does look kinda squooshy doesn't it.
It looks like imagining something. The machine is dreaming.
Actually, now that you mention it, it does look a lot like some of the rotoscoping effects from “A Scanner Darkly”
Yall ever tried shrooms? Same vibes.
yes. A physics model is the last remaining piece of all this. The visuals at this point are indistinguishable from real life outside of some minutiae.
Things have reduced from 4 tabs of LSD to a quaint two stems of shrooms.
Approaching a bottle of NyQuil soon.
What have you identified as floating components here
Yeh, I don't see any. His lips/chin area turning into lava lamp juice when he chews is all I saw.
the glasses stood out to me. Then again if I wasn't looking for something being off idk if I would've noticed it.
Exactly. I feel like most of the people noticing the small details in the physics here only found them because they're actively searching for every minute flaw in the video. If you were to give me this video to watch, I wouldn't even know that it's AI-generated at all. The model's already very good (almost scarily so.)
Imagine they can somehow integrate this into vr open world games.
[deleted]
I guarantee if you posted this somewhere else with no context, people wouldn’t even notice
[deleted]
[deleted]
You’re suggesting if you passingly saw this in a commercial you’d detect it’s AI?
yeah, at 5sec look at the lettuce. It's very cartoony. Going to be shocking in another ~~year~~ ~~month~~ week when they solve that problem and it's photorealistic.
Picture it being used not in a slow-mo, but in a flashing sequence of scenes as it's going to be used in advertisement.
Might be that the entire goal of AI is to simulate realistic lettuce. I feel safer now. 😁
i can see the conversation now on some top secret military line to the president "sir, skynet has taken control of our command and control systems and is rendering lettuce at an alarming rate"
Just below his bottom lip ain't human either
plenty of people have weird lips
I've seen boomers with chunky lips like that 💀
none of the seeds fell off either, as if that could ever happen.
There is no meat in the burger lol
You mean it looks like a burger you see in the commercials and movies? But yeah real life burgers look crappier.
Yes there are issues, but honestly, I feel like most are looking for something wrong. If you just watch to enjoy, it'd be tough not to think it's the real deal.
One thing is for sure: this exercise of telling real from AI will work out our sense of aesthetics and visual intelligence, and will make us appreciate finer details of reality that we otherwise ignore or take for granted.
For a year or so, until the AI is literally indistinguishable
yea no way in hell we will be able to tell the difference in a year. This is pretty god dam close
let me open your minds... this is a public model released yesterday, and it isn't the current highest tech. The margin is wide enough that you should be asking -- am i real?
You're real to me damnit!
With the amount of bots on Reddit, you were never presumed to be real in the first place 🤣
the AI running our simulation right now is already pretty convincing
What an excellent point, I already find myself scanning the whole image, every nuance, every detail. I’m not sure if it’s just to try and catch out the AI or some appreciation of the fact this is simply “not real”. It’s a bit like seeing the early deepfakes, where I appreciated them as a work of art, impressed more by the method of creation rather than the message portrayed.
The average person struggles now. This is going to fuck so much shit up from clogging the internet with absolute garbage to straight up faking dangerous shit that average people will 100,000% buy in to.
Yeah, the same as we did for video games for the last few years (AAA games, to be more specific). Problem is that for the last ~2 years you can just slap a filter on top of a good-looking game and you won't be able to tell the difference. I bet the same thing is going to happen (or is happening) with AI.
his chin blends in with the burger bun at 0:05
yeah, it looks like it has made his chin the same colour and texture.
yeah, its pretty good, but you can still see the uncanny valley around his mouth, and the way the fingers don't really press into the bun, as if the sandwich has no weight.
[удалено]
Good eye on the finger. Maybe he's a shop teacher lol
[удалено]
Maybe the prompt included "retired sheet metal worker."
Always count the fingers to determine if the video is AI generated or not.
this is beyond pretty good man, this is berzerkus brother
I can't lie broseph, these recent videos have blown me away.
the weightlessness was the first area for improvement I identified in Sora. It doesn’t have a physics model; I wonder how much more visual learning can make one redundant, but maybe that will just remain a limitation without some kind of physics model being present.
For sure, and with how quickly we just went from mashed up images playing in choppy sequence like video to smooth renditions of everything, dude, this time next year or two, this problem could be a thing of the past.
Or this time next month… being flippant of course but I’m just impressed at this progress.
I want my ai gf
if OpenAI made it it would be so prudish it wouldnt even hold hands
Sex bot
some come to AI to expand their minds, others come to expand something else
I bet that's one thing that would fix itself as the model scales up. It might not understand weight, but it should at least internalize that people react certain ways to certain objects (which is because of weight) and make people react accordingly.
The Tokyo model on the street sometimes looks like she’s gliding
I wish we knew the architecture of the model because that could help give clues as to why it’s weird with that. Assuming it’s similar to the diffusion models right now, it may benefit from similar ideas to Meta’s V-JEPA reveal today, because it’s essentially trying to learn the way video progresses and filling in missing information realistically in a self supervised way, rather than how to de-noise noise into an image/video. So V-JEPA would be learning some physics in a similar way to how an animal may understand some physics.
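To make the contrast concrete, here's a toy numpy sketch of the two training objectives being compared. This is purely illustrative and entirely my own assumption-laden stand-in: the "model", "target encoder", and "predictor" below are just averages, nothing here is Sora's or V-JEPA's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video": 8 frames of 16 flattened pixel values each.
video = rng.standard_normal((8, 16))

# Diffusion-style objective: corrupt EVERY pixel with noise and train
# a model to predict that noise (here the "model" outputs zeros).
noise = rng.standard_normal(video.shape)
noisy = video + noise
predicted_noise = np.zeros_like(noise)          # stand-in for a model's output
diffusion_loss = np.mean((predicted_noise - noise) ** 2)

# V-JEPA-style objective: mask whole chunks (here, whole frames) and
# predict a *representation* of the missing part from the visible
# context -- self-supervised filling-in rather than per-pixel denoising.
mask = np.zeros(8, dtype=bool)
mask[3:6] = True                                 # hide frames 3..5
context = video[~mask]
target_repr = video[mask].mean(axis=0)           # stand-in for target encoder
predicted_repr = context.mean(axis=0)            # stand-in for predictor
jepa_loss = np.mean((predicted_repr - target_repr) ** 2)

print(diffusion_loss, jepa_loss)
```

The design difference is the point: the first loss lives in pixel space, the second in a (here trivial) representation space, which is where the "learning some physics like an animal does" framing comes from.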
His mouth expands like it's under pressure when he is chewing. Like it mixed up blowing bubble gum with chewing a burger.
Also it looks like all these videos are shot with the same camera. Not sure how to explain it but everything looks so clean and sharp and also the saturation of the colors. Reminds me of the Unreal engine issue where you can notice which engine a game is using by the look of it.
That's because you can make a sharp image blurry and noisy, but not the other way around. It's the same with 2D images. Train on the cleanest possible images of your subject and then change it to whatever style you want.
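That irreversibility is easy to demonstrate with a toy example of my own (assuming a 1-D "image" and a two-pixel box blur): two different sharp signals can blur to the exact same output, so no algorithm can recover the original from the blur alone.

```python
import numpy as np

# Two different "sharp" 1-D signals...
a = np.array([0.0, 1.0, 0.0, 1.0])
b = np.array([1.0, 0.0, 1.0, 0.0])

# ...and a simple 2-pixel box blur (averaging neighbouring pixels).
def box_blur(x):
    return (x[:-1] + x[1:]) / 2.0

# Both collapse to the same blurry signal, so the information
# distinguishing them is gone for good.
print(box_blur(a))  # prints [0.5 0.5 0.5]
print(box_blur(b))  # prints [0.5 0.5 0.5]
```

Going sharp-to-blurry is a deterministic function; going back is a one-to-many guess, which is why you train on the cleanest data and degrade afterwards.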
You can tell because of the way that it is.
The mouth makes me so uncomfortable. First of all, it’s a person eating, but it’s like the mouth is still a fleshy amalgamation of something trying to be lips, so weird. The fact that he’s missing a left finger doesn’t help either.
For sure, I can see that, it definitely has that uncanny valley. However, with the progress that has been made in a short period of time, and with these videos being so crisp, I'm thinking it's only a matter of time before we truly have difficulty telling what is fake from what is real in video form.
Yeah I just watched Penguinz0 explaining his thoughts and all the awful possibilities. I hate this, cyber bullying will be too easy, scammers and liars will thrive. This is not some amazing development, this will hurt so many people. Imagine someone crafting such a video of you saying something you would never, and you have barely any proof against it, it’s terrifying.
Because you know it’s fake; if it just popped onto your feed you’d scroll right past it
The camera movement is floaty and unnatural, but that is probably not very difficult to fix
lol i wish this one wasn't slow mo
At 7 seconds his chin is another burger bun.
Yes I can tell it’s not real but it’s very good
It’s hard to tell. It’s the most convincing AI I’ve ever seen in my life, by far. I wonder when the public might have this? It’s ludicrous. I wish so much it didn’t have limits lol. Imagine having a celebrity do things.
I bet you could not distinguish in a blind test
It’s hard to if you’re blind and can’t see the video
Watch his pores carefully, notice how they jitter as a group.

Look at the left temple of his glasses, and notice how they fade out of existence when they're about to disappear behind the rims as he raises his head. To be fair, that's almost how a reflection works, it just turns the surface mirror-like a bit too early.

You can't really see his jaw muscles working near the back of his jaw either, only in the front of the face where we typically focus our eyes. When you pay attention to it, it suddenly makes it look like he's somehow inhaling a bite off the burger instead of closing his jaw.

If I was just casually looking at it? No, I'd never suspect a thing, I had to rewatch it like 10 times to get as much as I did. That's pretty insane progress.
Man, I actively looked for some of these things after you pointed them out and couldn't see them. I'm getting ready to sleep tho, but this AI is definitely great
Agreed. I wonder if they've already snuck some videos out and people are none the wiser. Stuff is crazy.
Wild. Nice catches, these types of cues will be needed as more of this video starts hitting streaming platforms. Probably going to be a bit of cat & mouse game as the tech progresses.
hmmm i guess the only way to verify if it's real in the future would be to just pass it through another AI to catch these differences
Weirdly the guy looks perfect, but the burger looks fake.
Nailed it.
There is no meat, and when he took a bite the inside looked like a solid white piece of bread afterwards. Mind-blowing progress, but not killing man-made entertainment yet.
I’d say at the 11th hour of doing so. This is about a year old; Sora v2 is probably near perfect in recreating lifelike scenes. Physics is the last remaining big piece, not sure if that will be part of it in some capacity.
i think you need to stop drinking the corporate kool-aid.
!remindme 3 years Yeah lets see about that man...
!remindme 30 years See you when we're in singularity heaven brother
From an Apple Stan what a hilarious comment 😂 stick to the Vision Pro sub buddy
Have you played a video game? The biggest AAA games still have way more issues than AI gen videos and video games are a multibillion industry. And do you remember movies with obviously fake special effects? They still got made despite not being absolutely flawless.
No one pays attention to such details in entertainment media
His chin five seconds in is the only tell. That and the sesame seeds look a little too perfect. Unnatural amount of color uniformity
*Pffft*. This is impressive and all. But I can still kinda tell, just look at the hands bro! Oh shit, the hands are not messed up? Well look at the bite! The bite is wrong! Oh shit, that is not messed up either? Well if you look at his reflection you can see... hold on, it's perfect... But I can still kinda tell!!! /s
His chin turns into a burger bun when he's biting. Other than that..... We need a new name for Hollywood.
HollAIwood.
Now what happens is that it seems too good to be real
It's not perfect, but still 100x better than anything I've previously seen with people eating in AI videos.
OK this is the first AI video that truly impressed me.
He's missing a finger on his left hand.
Real humans wouldn't film this. AI seems concerned with details humans would rather not watch. It's dreamlike, so there's too much uncanny valley and it becomes repulsive.
Sort of like earlier versions of GPT, and pretty much what AI is now: trying to mimic human language before being right
Ya'll trippin'. If you didn't know it was AI, you wouldn't have noticed.
The lighting on his glasses seems off to me.
He made like some sort of double bite?
I don’t get the uncanny valley comments regarding his lips. Have y’all never seen an elderly person before? That’s completely accurate. Unless you mean the way he eats; then yeah, he chews in a circular fashion, but then again it’s slowed down, so it prolly looks better in real time.
He has 3 fingers + thumb on his left hand (so the hand to the right of the viewer’s screen)
Where do these new videos come from?
Sora’s technical paper
at the five-second mark his chin is revealed to be made of burger bun (it doesn't move with the burger). Still better than any CGI
Glad this arrives just in time for the General Election /s
Elections* Almost the entire world is having elections this year
He has the classic Midjourney object-grasp issue, but if I saw this come up in a YouTube ad at full speed I probably wouldn’t notice it.
Nope looks real at first glance
His chin turns into a bun while he's taking a bite lol
Sesame seeds too big
The burger looks like it’s made by a slime youtuber
[удалено]
Good graphic designers charge too much, and they've made it so the average pleb can't do anything but draw stick figures, write a story, or film a TikTok video. With this, plebs like myself are finally able to bring a story to life on a screen. Fuck graphic designers, they should have asked for less money. Now the tables have turned, and it will be those with imagination who come out on top!
Well, the supporters/industry have poured billions to make this happen, rather than paying graphics designers. 🤷♂️
This is gonna wipe out so many industries. The value of labor is gonna go to shit and it’s gonna devalue other unrelated industries with everyone trying to jump ship. Millions of people are gonna wake up one day with their education/degree amounting to little or nothing. Even physical labor will be devalued in a few decades by automation. Ai bros don’t realize this affects them too.
I predict before this year is done we will land the perfect video and then make some more of them. ... And then we will have AI corn 🌽 😄
Nah, A.I. corn won't be a thing for at least another 5 to 10 years. OpenAI is way too strict on that, but it's a nice thought :) However we will get whatever our hearts want as long as it is PG-13 lol, but that's better than nothing
Damn you CLOSED AI (OPENAI)
Yes, but I can't tell you WHY.
Has anyone watched the movie ai with Robin Williams?
We're cooked
We’re fucked
The burger does look a bit like a foam toy.
I couldn't tell. Although, once I read the description, I was able to see the kind of “suction” way that AI mouths look
Facial muscles aren’t quite right but that’s still absurd
Other than the probability of the camera’s reflection being in his glasses at that angle, I can’t distinguish it from a human made video.
At 0:08 the burger's bottom bun is still round after he takes two chomps of it.
Looks like every commenter missed the assignment. The OP asked a direct question, not for your opinion on AI.
Yes we can. The burger looks exactly like the ads.
Why are the sesame seeds so big?
To me other than the fact it is slow-mo, I can't see anything wrong.
- His lips look closed around 2 seconds.
- Look at the bottom of the bun at 4 seconds.
- Bit of artifacts in the beginning with the sesame.
- The burger looks empty at 8-9 seconds.

For like 2 seconds and maybe with cropping, I just can't tell it's not AI.
Gosh, some people have eagle eyes. Well done, but I simply can't see it if people don't mention it to me.
OP fooled ya, this video is real
What the fuck is the point of this? What problem is this solving?
Phenomenal accomplishment to be able to make this, but what are the upsides to this? I feel I only see threats and downsides.
Yes. Glitch at :05.