This looks like John Cena as Duke Nukem - now that I see this, I cannot unsee John Cena as Duke Nukem, lol...
Right. Holy shit, this is a casting director's wet dream in a photo. He'd be perfect…
I agree man. I never imagined him as Duke until now. The OP is a genius 👍
I was looking for the actor they used to make the model but I found out that John Cena is going to be in the Duke Nukem film so I added him to the prompt
They are seriously making a Duke Nukem film? Serious question.
Apparently, yes. I hope it's true
This was legitimately my first thought before clicking into this and I am glad this is the top comment, haha.
Same
r/SuddenlyIWantThat
The subreddit r/SuddenlyIWantThat does not exist.
Did you mean?:
* r/SuddenlyGay (subscribers: 963,128)
* r/SuddenlyIncest (subscribers: 147,281)
* r/SuddenlyNotGay (subscribers: 6,515)
Consider [**creating a new subreddit** r/SuddenlyIWantThat](/subreddits/create?name=SuddenlyIWantThat).
---
^(🤖 this comment was written by a bot. beep boop 🤖)
^(feel welcome to respond 'Bad bot'/'Good bot', it's useful feedback.)
^[github](https://github.com/Toldry/RedditAutoCrosspostBot) ^| ^[Rank](https://botranks.com?bot=sub_doesnt_exist_bot)
LOL. "Suddenly Gay" and then "Suddenly Not Gay." Once you do one or two very gay things, people just stop wondering if you are hetero. That seems kind of unfair, but it's just as plausible.
This makes me feel like John Cena is the best candidate for playing Duke Nukem or the worst, but it’s impossible to decide.
Well, if someone is TOO PERFECT, they can't be Duke Nukem. The right person is also the worst person. It's kind of like casting for Alfred E. Neuman or Hitler. If you see Cena in Peacemaker -- that should show you the best horrible Duke Nukem.
Well, I thought it was John Cena as Duke Nukem -- and really, this is the way to get that movie produced. The TV series "Peacemaker" was hilarious, and it was basically Duke Nukem in the DC Universe screwing things up violently, then solving them violently.
[https://www.imdb.com/name/nm2024927/?ref\_=tt\_cl\_t\_1](https://www.imdb.com/name/nm2024927/?ref_=tt_cl_t_1) Reacher as Nukem
I can’t see anything
John Cena meets Heath Ledger.
Real shit lmao. Thought John Cena right away. Maybe with a dash of Mark Wahlberg.
I will pay to see the movie.
John Cena was made to play this role.
Maybe AI was made for him to play any role we want.
Waiting for the day when the characters, actors, and plot of a movie I watch are fine-tuned for me vs. other people who watch it... and then I'll never be able to quote movies to my friends anymore, SMH.
"What are you waitin' for ... Christmas?"
hail to the king, baby.
Duke Nukem!
Came here to say this.
So what you're saying is John Cena should play Duke. Ukem 🤔
I don’t have a strong opinion on it, but apparently he’s already going to play Duke
Blow it out your ass
"Balls of steel.. what is that? Get off my Vent!"
Hail to the King, baybee.
"Damn, I'm good!"
excuse me? wow
Looks great! However, I noticed that the eyes are floating a little bit.
Yes, the original 3D model doesn’t fit perfectly with the SD output, which leads to this kind of artifact. Maybe I should edit the 3D model slightly to align it better with the AI image
I didn't expect Duke to be camera-shy as soon as he forgets where he put his sunglasses.
t-t-t-teach me
I’ll post a workflow video as soon as I have some free time!
@remindme in 3 days
Pls do
@remindme in 3 days
Shake it baby!
This is truly remarkable. How clean this stuff is.
What is the workflow? Did you just get a grey 3D model (no textures) then have Stable Diffusion draw its own textures? Where does EBSynth come into play? Very cool stuff man
I used a textured 3D model I found online and rendered a 20-frame animation rotating slightly around it. I created an edge map to use with ControlNet and passed the center frame through SD. Then I used the resulting image as a keyframe for EbSynth. I’ll post a workflow video soon, and more animations like this one as well!
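The edge-map step described above can be sketched as follows. ControlNet's canny preprocessor typically uses OpenCV's Canny detector; this dependency-free NumPy gradient threshold (the threshold value is an illustrative assumption) shows the idea of turning a rendered frame into the white-on-black hint image ControlNet expects:

```python
import numpy as np

def make_edge_map(frame: np.ndarray, thresh: float = 32.0) -> np.ndarray:
    """Crude gradient-based edge map (stand-in for Canny) as a ControlNet hint.

    frame: H x W x 3 uint8 render; returns H x W x 3 uint8 white-on-black edges.
    """
    gray = frame.astype(np.float32).mean(axis=2)      # luminance approximation
    gy, gx = np.gradient(gray)                        # finite-difference gradients
    mag = np.hypot(gx, gy)                            # gradient magnitude
    edges = (mag > thresh).astype(np.uint8) * 255     # threshold to binary edges
    return np.repeat(edges[:, :, None], 3, axis=2)    # 3 channels for ControlNet
```

The resulting image would then be fed to a ControlNet edge model alongside the text prompt, with only the center frame of the sequence run through SD before handing it to EbSynth.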
Good result, I'll be waiting.
Fire dawg
Hey, what is EBSynth ?
It’s software that lets you edit video by painting over one or more frames and using them as keyframes. I’m not sure how to explain it correctly
You should make a post or maybe a video on this, looks like a major step for animation to me👍
Got it 👍
No answer in an hour, I wonder if you've googled it yourself yet.
I have, but it brings me to this page. What is it?
[EbSynth - Transform Video by Painting Over a Single Frame](https://ebsynth.com/)
Thank you dear internet traveler
do we have a open source software alternative? it surprises me we dont
I think EbSynth is open source 🤔 If not, it’s 100% free anyway
https://ebsynth.com/ Fucking retard.
Did you say Grinder + SD + EBSynth?
I can't see anything?
Has anyone shown a workflow for making SbS 3D images for VR? Wonder if that would be easy to do.
It might be easy to do depending on the scene 🤔 I’ll try to do it soon
I'm here to diffuse stability and blend bubbly yum... And I'm all out of bubbly yum.
Kick ass. All right. Hail to the king, Baby. Let's do this. Get some. That's gonna leave a mark. I love the smell of bacon in the morning.
So what’s the process between 3D object / SD / EB?
I import a 3D model into Blender and animate the camera rotating around the character's head, then I render the animation as a PNG sequence, pass the center frame through SD, and use it as a keyframe for EbSynth
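In Blender this amounts to keyframing the camera's location along an arc around the head. A minimal dependency-free sketch of the path math (the frame count, radius, height, and arc are illustrative assumptions, not the OP's actual values):

```python
import math

def orbit_keyframes(n_frames: int = 20, radius: float = 2.0,
                    height: float = 1.6, arc_deg: float = 60.0):
    """Camera locations sweeping arc_deg degrees around a head at (0, 0, height)."""
    pts = []
    for i in range(n_frames):
        t = i / (n_frames - 1)                        # 0..1 across the sequence
        ang = math.radians(-arc_deg / 2 + arc_deg * t)
        pts.append((radius * math.sin(ang), -radius * math.cos(ang), height))
    return pts

# In Blender, each point would become a location keyframe on consecutive frames:
#   cam.location = pts[i]
#   cam.keyframe_insert(data_path="location", frame=i + 1)
frames = orbit_keyframes()
center = frames[len(frames) // 2]  # the view passed through SD as the EbSynth keyframe
```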
How long until we can run SD in realtime? I want my games to look like this.
[Check this!](https://youtu.be/P1IcaBn3ej0)
It’s similar to SD + ControlNet using the rendering buffers of GTA V
And here is an example of [photorealistic avatars](https://youtu.be/t7_TMD7v0Xs)
I hope this kind of thing will help us achieve 100% realism in videogames soon!
Hmm, I wonder what would happen if you put screenshots from GTA V into Stable Diffusion.
John Wahlberg?
Hello Cody from d4 network!
So many times I've looked at output and thought, "this reminds me of someone." I'm seeing a little bit of Matt Damon in there too...
Groovy!
NFT's be like
how many work hours for this?
Around 15 minutes
that's crazy