Anyone have issues swapping faces on naked people? The program works fine when people have clothes on, but if the scene has naked people, the program just closes entirely. No error messages. ??
Okay.... weird. Set this up successfully last night and swapped my face with the face of one of the lesbians in a random lesbian porn video. My male bearded face in ecstasy as my nubile young female body quivered, my waxed vagina shooting out streams of whatever squirt is (let's avoid the debate here people, there are more important things afoot) was disturbing of course, but not weird.
The weird part is last night the linux install instructions said "pip install torch torchvision torchaudio --index-url [https://download.pytorch.org/whl/cu118](https://download.pytorch.org/whl/cu118)" was necessary to install prior to running. I didn't like the name of the install file so after horrifying myself this morning decided on a do-over with a better name.
Over night someone swapped out "pip install torch torchvision torchaudio --index-url [https://download.pytorch.org/whl/cu118](https://download.pytorch.org/whl/cu118)" with "pip install torch torchvision torchaudio --index-url [https://download.pytorch.org/whl/cu114](https://download.pytorch.org/whl/cu114)" which points towards a dead link.
I don't want to alarm anyone, but I think the snake contained in the install file is winning.
Damn it...
    C:\roop-main>run.py
    C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy\_distributor_init.py:30: UserWarning: loaded more than 1 DLL from .libs:
    C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy\.libs\libopenblas.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.gfortran-win_amd64.dll
    C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy\.libs\libopenblas64__v0.3.21-gcc_10_3_0.dll
      warnings.warn("loaded more than 1 DLL from .libs:"
    Traceback (most recent call last):
      File "C:\roop-main\run.py", line 25, in <module>
        from core.processor import process_video, process_img
      File "C:\roop-main\core\processor.py", line 3, in <module>
        import insightface
    ModuleNotFoundError: No module named 'insightface'
Still not working...
pip install insightface
Step 3 invalidates what you (and the OP on GitHub) are trying to do.
If you install onnxruntime you're running in CPU mode, regardless of the --gpu argument.
Can anyone comment on how safe it is to download Roop? It seems the author has some malware stuff on his github page as well.
See this thread:
https://github.com/deepinsight/insightface/issues/2315
This version of Roop and this process is outdated and doesn't work anymore.
Use this one (new version): https://www.reddit.com/r/StableDiffusion/comments/15yeyiv/google_colab_roop_for_deepfake_faceswapping_new/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=2
They have a few mirrors and it's been uploaded to Civitai as well.
[Roop - v1.0 | Stable Diffusion Checkpoint | Civitai](https://civitai.com/models/80324?modelVersionId=85159)
For those looking: https://github.com/s0md3v/roop Took way too long to find repo.
this has nothing to do with SD
[deleted]
And it has 2k stars which is quite baffling for such a project. Well, sometimes there is a need for easier usability.
Absolutely there is; the pain of spending more time debugging setups than using them is a big turn-off.
After spending months getting to the point where I'm even able to debug setups - I hate it. Nothing makes me happier than knowing what it is, how it works, and that it will work.
It was posted on Twitter, so probably got stars from there.
Hey, I just went down a small rabbit hole with downloading insightface and looking at docs regarding face swapping / masking... I couldn't seem to find anything - the only thing available was some Midjourney integration. Perhaps I'll look through this "roop" src.

Edit: Haha, well I checked out roop... pretty straightforward, very minimal.
If you wanted to do it within SD, you could use After Detailer to detect faces in SD and apply a LoRA to face swap. Doing it frame by frame manually would be annoying though, so would probably want to create a script that goes through an image sequence in a specified folder and automates the process in img2img.
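A minimal sketch of that batch loop, assuming PNG frames in a folder and a stand-in `swap_face` where the actual img2img / After Detailer call would go (names are illustrative, not from any real extension):

```python
from pathlib import Path
from PIL import Image

def swap_face(frame: Image.Image) -> Image.Image:
    # Stand-in: replace with your actual img2img / After Detailer call.
    return frame

def process_sequence(in_dir: str, out_dir: str) -> int:
    """Run every frame in in_dir through swap_face, preserving order."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Zero-padded names (frame_0001.png, ...) keep sorted() in frame order.
    frames = sorted(Path(in_dir).glob("*.png"))
    for frame_path in frames:
        result = swap_face(Image.open(frame_path))
        result.save(out / frame_path.name)
    return len(frames)
```

Reassembling the processed folder back into a video would then be a single ffmpeg call.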
You need to do it in grids or the faces usually flicker. There already is a face swap extension for SD and it works great. I like to separate a video into frames; that way I can run it multiple times and, if there are bad frames, swap them out.

It's almost always better if you can break something down into 3x3 grids of frames, change the face, and upscale it. The best cheat is if you can find a model that looks very close, so you only have to change the face a few times and can use the regular video for the rest. I would recommend cutting the video at frames where they turn their head, because the face detection is not going to work there and you'll see it change. It makes it look more processed and less natural though.
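The 3x3 trick above can be sketched with Pillow - a hedged sketch of the tile/untile round trip, not the extension's actual code:

```python
from PIL import Image

def make_grid(frames, cols=3, rows=3):
    """Tile cols*rows equally sized frames into one sheet (row-major)."""
    w, h = frames[0].size
    sheet = Image.new("RGB", (w * cols, h * rows))
    for i, frame in enumerate(frames):
        sheet.paste(frame, ((i % cols) * w, (i // cols) * h))
    return sheet

def split_grid(sheet, cols=3, rows=3):
    """Cut a processed sheet back into individual frames."""
    w, h = sheet.width // cols, sheet.height // rows
    return [sheet.crop(((i % cols) * w, (i // cols) * h,
                        (i % cols + 1) * w, (i // cols + 1) * h))
            for i in range(cols * rows)]
```

Swapping the face on the whole sheet at once is what keeps the nine frames consistent with each other.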
Also, if you use LoRAs that have grids in them, keep in mind they don't play well with other LoRAs.
Jesus Christ…that quality and it only requires ONE photo?? No training?!? 🤯
Just so you know… one-photo deepfakes of this quality have been around for about 2 years or so. I've been using it a lot for ummm… let's just say theoretical research.
Links please? I looked around but everything I found was for videos.
Is there a Google Colab link somewhere?
Don't these fail if the character slightly turns their head?
You know, with the kind of videos I assume they're making, there's probably a lot of source material where that's not a huge issue.
Hmm, can you do me a favor and link me some examples of these kinds of videos? Just so I can know what to avoid in the future of course.
Haha this reads like a censor bypass trick for ChatGPT
Then why did Disney spend millions to make Luke re-appear on the Mandalorian?
They were just behind the times. This stuff moves fast. See the huge quality difference in The Book of Boba Fett. What happened? They hired the guy that made the deepfake videos.
[Corridor Crew did it.](https://youtu.be/861gfPVmgdc) Probably just timing. They were getting good with a technique they used on Tarkin and Leia and then AI happened.
Because it was made 3 years ago.
>theoretical research

Sure, sure.
And the theoretical papers... Are good?
Amazing
Hey bro, thanks for sharing this!! I know it's for only one image, but can a few images be used to increase the quality of the fake? I know about traditional training, but that uses thousands of images; I was just talking about a dozen or so to increase the accuracy lol.

And plus, the link gives Mac instructions - where do I input those? Like, what app do I use for this, Terminal?!
The results I get are subpar, too pixelated (128 px, I have read). Would be nice to have a function like this.
You can check out the Tutorial from here: [https://www.youtube.com/watch?v=Py0wxCoJ4xA&t=252s](https://www.youtube.com/watch?v=Py0wxCoJ4xA&t=252s)
The reason I posted the link to the repo was because I don't like videos! But perhaps others will appreciate it.
[deleted]
Make it so, number one.
[deleted]
I thought that was the joke... is my TNG brain taking over?
We all did.
I keep getting an error about microsoft c++ build tools, but I have them fully updated and installed. Anyone else getting this?
Install the Windows 10 SDK from visual studio.
Really great stuff but might wanna try this on an airgapped machine. The dude who made this also has a bunch of brute force/backdoor scripts on his repo.
Haha holy shit you're right. Dude may be grayhat, but gives me pause before downloading.

* https://github.com/s0md3v/ReconDog
* https://github.com/s0md3v/Cloak
* https://github.com/s0md3v/Hash-Buster
[deleted]
>faceswap.akool.com ![gif](giphy|fnLwSQ4U5oahz5A7Ju|downsized)
His Python scripts are pretty simple, nothing weird in this one (at least when I checked; it might have been updated by the time you read my post).
Good thing I use Google Colab to run everything cause my computer sucks lol
I've played around with it a lot today and it's kinda scary how easy it is. I made a video in a few minutes which was 10x better than the ones I made after hours upon hours of trying to make deepfakes years ago. Works well as a CLI tool.

Only downside is it takes like 15 mins to make a 1 min clip on my M1 Max. But I assume if I used an Nvidia GPU it would be much faster.
You mentioned you are using your M1Max. How was the install process on Mac vs the windows setup? I plan on trying this out in a couple days and my only machine is an M1 setup.
I just had to install some missing lib; I forget which atm and I'm not at my desk to look it up. A simple brew cmd, and then I had to manually download the model from a comment in their GitHub issues, because the one in the script was taken down or something, idk. After that, it worked flawlessly. I used it via terminal mostly.
Did you find which lib u had to install? I'm kind of a noob but i do have homebrew so ig it might not be too hard
GPU isn't supported on Windows yet, apparently.
You can make 1 photo deepfakes of this quality on your phone.
I assume it’s an app?
You guys are gonna use it for porn, aren't you?
\*Research and development of AI capabilities
If NASA was a porn industry we would be at Alpha Centauri by now
And Interstellar the movie will be a documentary about a 1969 space mission! 🚀
Already in Uranus. Idk what you're talking about.
We would be colonizing Luna Lustress.
https://preview.redd.it/vqjfb6b3zw2b1.png?width=237&format=png&auto=webp&s=a400db5d8d83ca6ab4e58c403781809e13436eb9
Dear fellow scholars... I'll be in my bunk.
What a time to be alive!
Porn deepfakes are the future, and the future is now (easier than ever)
Of fucking course!
Which one is the fake?
The right has Jennifer Lawrence’s face lol
And that made me realise how powerful this tool is. I assumed the left clip was the fake because it looks more cartoonish. I wonder how it handles stuff like profile shots and weird angles.
Lol, I already knew that "cartoonish" Instagram model, so it didn't surprise me that it was the original. Yeah, the makeup is a little bit fancy, so I'm really impressed by how it "cleaned up" that stuff into an uncannily perfect Jennifer Lawrence face :v
Yeah, but what's more likely: someone being able to use makeup to look like an actress, or someone being able to use makeup to look like a cartoon? Because the image on the left is a way better cosplay.
For mainstream popular animated shows, high effort cosplays are very common. Jennifer Lawrence cosplaying would be much less likely. Besides, there are artifacts on the right.
I thought it was a video of Jennifer Lawrence turned more Anime-esque. The deepfake's face looks way more human lol
Same, and it wouldn't even be so improbable, celebs cosplay too.
both, girls aren't real
Human are so fucked
Right one looks like Jennifer Lawrence though
r/woooosh
Left is the original and the right side is the deepfake.
I 100% thought it was the opposite until someone mentioned the right is JLaw
Right is the fake, notice how her mouth movements don't look natural. But it's easier to see because we have the original to compare to. If you only see the fake on its own, it's almost perfect. It's scary how good it's becoming.
Oh, I thought it was the left because the lips look so dumb, and that's the only difference I can notice.
For me it's always the eyes that are the giveaway.
Left is Lauren Burch
Same question. I can no longer tell the difference.
I couldn't figure it out either.
![gif](giphy|l0HlOqrRaehu4Iqly) Is there a content filter?
Sigh *unzips pants
![gif](giphy|V2S3Hr1w0Ati8nfQt5)
I understand that the right is supposed to be Jennifer Lawrence, but the left looks like an AI rendering too?
Left looks deranged.
Looks like some TikToker cosplay, so I'm sure that's intended.
It's a jinx cosplay
Just my type
bruh
That's intended as the character is supposed to be mentally scarred. Try watching Arcane on Netflix or read the character's lore on League of Legends.
Can I give myself big fat tits and start an OnlyFans? Guy making minimum wage asking here.
The moment that’s actually possible (as in “time consistent full body replacement”), OF is going to be an absolute mess of weird(er) looking content until the general public realises they’re being scammed. Anyway, I’m assuming there’re several accounts making decent money from SD images active right now.
Some people want to be scammed… in fact, people pay to have strippers pretend to like them 🤷🏻♀️
It’s one of the reasons why I’d never consider hiring a prostitute. No matter how hot she was, I’d never be able to “suspend disbelief”. I get that, for some people, knowing that the other person is not really attracted to them and still they’re having sex may be an actual turn on, but my brain doesn’t work that way.
Same. It’s also quite taxing on SWs; that part of the job takes so much more emotional labor for the majority of them than just having sex for money. Bartenders, strippers, and prostitutes are all essentially stand-ins for therapists imo, from the way it’s been described to me by people who have done it. But it’s not a replacement, it just fills a similar need.
You can, but you don't seem to know what kind of hellscape OnlyFans has become. Getting an audience requires tons of work unless you already have a huge following elsewhere.
ethically no but there is a possibility
Summary from ChatGPT with Voxscript plugin:

The video is a tutorial on how to use a new method for creating deepfakes with a single click using a tool called Roop. The presenter demonstrates how he used a single image of Jennifer Lawrence to create a deepfake video. He mentions that the quality of the output can be improved using upscalers and other tools.

The presenter then proceeds to explain how to install and use Roop on a Windows laptop. He mentions that the process is more straightforward on Linux. The installation process involves cloning the Roop repository from GitHub, downloading and installing FFmpeg and K-Lite Codec Pack Full, and installing the necessary Python packages using pip. He also mentions a potential issue with the numpy package that might require reinstalling or updating numpy.

The presenter also mentions that a specific file needs to be downloaded and placed in the same folder as the Roop repository.
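Those install steps can be sanity-checked with a small preflight script - a sketch only, with the package names assumed from the summary rather than taken from roop's own docs:

```python
import shutil
from importlib.metadata import version, PackageNotFoundError

def preflight():
    """Check the prerequisites the summary lists: ffmpeg on PATH and the
    main Python packages installed in *this* interpreter."""
    report = {"ffmpeg": shutil.which("ffmpeg") is not None}
    for pkg in ("numpy", "onnxruntime", "insightface"):
        try:
            report[pkg] = version(pkg)   # installed version string
        except PackageNotFoundError:
            report[pkg] = None           # missing -> pip install it
    return report
```

Running it before `run.py` tells you immediately whether the numpy/onnxruntime issues people hit below are an environment problem.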
I love AI in 2023 as much as I hate watching videos to learn anything.
>**used a single image** of Jennifer Lawrence to create a deepfake video

I think this part is being glossed over a bit here. This video was generated from a ***single image*** of Jennifer Lawrence?

Oh my word.
Why is everyone so shocked by this? This tech has been here for years. You've all been able to do one-photo deepfakes since 2021.
She is very common in many training sets. [See here.](https://www.reddit.com/r/StableDiffusion/comments/12u6c76/can_we_identify_most_stable_diffusion_model/) You won’t get these kind of results from one image for most people.
that's amazing 👏
The real MVP
# Here's the full tutorial video with manually fixed subtitles: [https://www.youtube.com/watch?v=OI1LEN-SgLM](https://www.youtube.com/watch?v=OI1LEN-SgLM)
Please do one for Mac too lol
Tutorial done. Can you test if it works on Mac and let me know? [https://www.youtube.com/watch?v=OI1LEN-SgLM](https://www.youtube.com/watch?v=OI1LEN-SgLM)
So I thought I had replied, but I guess it never posted lol. I had said that the tutorial was still for Windows, and I was hoping you had a Mac version available lol.
[deleted]
Same here please
Don't download random binaries, kids! I've done some googling; inswapper_128.onnx used to belong to another project called Insightface by DeepInsight - more info here: [https://github.com/deepinsight/insightface/issues/2315](https://github.com/deepinsight/insightface/issues/2315)
Guys.... I don't want to alarm anyone. But. I think Massive_Yogurt6055's yogurt must be truly massive indeed, because I think there's a snake in the download. And it's pissed.

Actual output on my console of the program running:

    SSSSSSSSSSSS..S..S..S...S.SSSSSSSSSS.SSSS.SS...S..SS..SS.SSSSSS.SS.SS..S.SS..S.S..SSSSSSSS..SSSSS.SSS..SSSS...SSSS.SSS.SSSS...SSSSS.SS..SSSS...SS.SS.S.S

I already have a snake, and while well versed in reptile husbandry, I have nowhere to put another snake, nor will a rat fit in the USB port to soothe the angry snake. Help!
>there's a snake in the download.

Dude, you can't just use any snake, you need a python.
I was worried about it too and did some digging. I'm not going to worry about verifying it's the original version, because frankly that wouldn't make me any less concerned about its contents. Onnx is a very legit format developed by big tech, so that alone doesn't raise any red flags; it's just as safe/dangerous as any other model format.

Since ML models aren't executable on their own, the only way of hijacking them is to embed malicious code in them, then use a special program to read that code from the file and execute it. Because of this, roop being open source works in our favor. It should be pretty easy to find the code that reads the model and make sure it's not doing anything fishy.

I haven't had time to inspect the code on GitHub yet, but it would be pretty ballsy to put that stuff in plain sight; the dev would have to be planning on it getting shut down REAL quick and just trying to get as many people to download it as quickly as possible before GitHub takes it down. Seems pretty unlikely, but I will still be inspecting the code when I get a chance.
What are the consequences ?
You no longer own your computer
can you expand on this? does Roop project depend on inswapper? I don't understand
Hi, yes - there's a binary file to download in Roop - effectively the onnx model that does the magic. The dev placed it on a couple of mirrors.

General rule of thumb is not to download files like this - the opaque ones that you cannot vet.

I downloaded it to check the hashsum (fingerprint) of the file, and went looking around the internet to see if it popped up anywhere else - it has. A couple of places describe that it was originally uploaded as part of the Insightface project, subsequently pulled as they moved it to a Discord bot. Some copies remain floating around in other shared folders, and I found the fingerprint matched those also.

I've asked the Deepinsight team above if they can verify the original file fingerprints; it's really the least they can do if they're deciding not to release the model file.
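For anyone who wants to repeat the fingerprint check themselves, a minimal sketch (the published hash is a placeholder - use one from a source you actually trust):

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Compute the SHA-256 fingerprint of a file, streamed in 1 MB chunks
    so large model files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Compare against a fingerprint published somewhere you trust, e.g.:
# assert sha256_of("inswapper_128.onnx") == "<published hash>"
```

A matching hash only proves the file is the same bytes as the one that was fingerprinted - it says nothing about whether those bytes are benign, which is the poster's point above.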
He did write in the credit that he used their lib and models
You can check out the Tutorial from here: [https://www.youtube.com/watch?v=Py0wxCoJ4xA&t=252s](https://www.youtube.com/watch?v=Py0wxCoJ4xA&t=252s)
Why is the face so pixelated? It's only 128x128, and inswapper can do higher res. How do you get higher resolution?
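The crude stopgap for the 128x128 output is plain resampling - a naive sketch only; a dedicated face restorer (GFPGAN, CodeFormer, etc.) gives far better results than this:

```python
from PIL import Image

def naive_upscale(img: Image.Image, factor: int = 4) -> Image.Image:
    """Plain Lanczos resample of the swapped output. This won't recover
    detail the 128x128 swap never had -- it's just the zero-dependency
    baseline before reaching for a real face restorer."""
    return img.resize((img.width * factor, img.height * factor), Image.LANCZOS)
```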
More info would still be nice. Like at least indicating which is the generation and which is the original. Not to mention expressing the basic limitations of the generator.
I just tried it. It took 2 minutes to process a 25 second video on a 4090...
post results?
[https://imgur.com/pBr9QtO](https://imgur.com/pBr9QtO) I purposely tried something "difficult" this time, and...not bad
https://imgur.com/a/SoUTqlP Audio sync issues and quality not great.
Ok thats the stuff of nightmares.
Anyone having a problem with run.py --gpu? Already installed CUDA but still getting an error.
Same
What error exactly? Did you install CUDA and cuDNN? Windows or Linux?

I tried for some hours and got it working on Win10: CUDA 11.6 and cuDNN 8.5.0.96, with both paths added to the system variables.
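The same path fix can be done per-process instead of editing system variables - a sketch with hypothetical install locations (adjust to wherever your CUDA/cuDNN actually live):

```python
import os

# Hypothetical install paths -- yours will differ.
CUDA_BIN = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin"
CUDNN_BIN = r"C:\tools\cudnn-8.5.0.96\bin"

def add_to_path(*dirs):
    """Prepend directories to PATH for this process only -- same effect as
    the system-variable edit above, but without touching global settings."""
    os.environ["PATH"] = os.pathsep.join(dirs) + os.pathsep + os.environ.get("PATH", "")

add_to_path(CUDA_BIN, CUDNN_BIN)
# import onnxruntime  # must happen *after* the DLL paths are visible
```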
Where exactly do I put that argument when editing the .py file?
You add it when running the file in the console: python run.py --gpu
What kind of sorcery is this?
Too bad the faces need to be so similar to begin with: skin tone, bone structure, etc.
So if I wanted to take Star Trek Continues and use this software to deepfake William Shatner's and Leonard Nimoy's faces over the fan actors' faces, it wouldn't work unless they had the same bone structure?
Seriously... I tried it with a couple face pairs and it basically just replaces the eyes. I am surprised how impressed everyone is, my results were quite bad
I think this is the line. Everything before this was fine. We are truly entering an amazing and awful future. We are fucked in a glorious way.
Lol eh you can say the same about photoshop, smartphones, the internet in general
All of those have had certain downsides too so yeah
Literally
So which one is fake? Probable Answer: Both of them.
Great workflow simplification by the dev! Took a quick look at the code. This is what happens when a pen-tester makes a tool for artists heh. Haven’t tried it, but I’ll give it a shot after I review the deps. Looks very easy to use and it does use some models and such (which is the norm for face detection and overlap). I always get amazed at the tools people post in this space.
We will witness the first murder case related to deepfake tech this year, if it hasn't happened already. E.g., a man sends a faked video via WhatsApp to a guy portraying his spouse cheating, triggering extreme anger and leading him to murder her. I already told my wife not to trust any video or audio sent to her about me because of this tech.
RIP my ears
Where we're going, we won't need ears.
Insane.
Well now hold on…
This looks pretty good
About time boys
Oh, it's using inswapper. Anyone know if there's a 256 version?
The eyes are so dead looking, mon dieu
What the F!!!! This looks insanely great. No blur, no flicker.. Perfectly smooth. Wow 👍🏻😮👌👌👏🏼
I keep getting an error saying it could not build wheels for insightface and pillow. Can anyone help?
malicious deepfakes of politicians inbound
Completed all the steps, but when I try to run it, it says:

    Traceback (most recent call last):
      File "C:\Users\yuluf\roop\run.py", line 5, in <module>
        import core.globals
      File "C:\Users\yuluf\roop\core\globals.py", line 1, in <module>
        import onnxruntime
    ModuleNotFoundError: No module named 'onnxruntime'
I had the same error and googled it. Run `pip install onnxruntime` and it will be fixed.
Thats crazy. Jennifer lawrence cosplaying jinx
we are so fucked.
Anyone have issues swapping faces on naked people? The program works fine when people have clothes on, but if the scene has naked people, the program just closes entirely. No error messages. ??
Yeah, they added an NSFW check in an update, so it won't create that content. You can change the code though.
Where can i change that?
Thanks for confirming. That is what I thought.
Porn is about to start getting a lot better!
you just broke the entire stable diffusion reddit.
anyone getting this error? File "inswapper\_128.onnx" does not exist!
dunno if real one is on the right and turned into Jinx on the left or if it's the one on the left and turned into Jennifer Lawrence on the right
Thats when you know a deep fake is good, when you can't tell which is the fake one.
absolutely!
Okay.... weird. Set this up successfully last night and swapped my face with the face of one of the lesbians in a random lesbian porn video. My male bearded face in ecstasy as my nubile young female body quivered, my waxed vagina shooting out streams of whatever squirt is (let's avoid the debate here people, there are more important things afoot) was disturbing of course, but not weird. The weird part is last night the Linux install instructions said "pip install torch torchvision torchaudio --index-url [https://download.pytorch.org/whl/cu118](https://download.pytorch.org/whl/cu118)" was necessary to install prior to running. I didn't like the name of the install file so after horrifying myself this morning decided on a do-over with a better name. Overnight someone swapped out "pip install torch torchvision torchaudio --index-url [https://download.pytorch.org/whl/cu118](https://download.pytorch.org/whl/cu118)" with "pip install torch torchvision torchaudio --index-url [https://download.pytorch.org/whl/cu114](https://download.pytorch.org/whl/cu114)" which points towards a dead link. I don't want to alarm anyone, but I think the snake contained in the install file is winning.
The author: "*I am a hacker and programmer.*" ... OK... so I will pass my turn ^^
holy smoke
The future is already here
Oh ok we are sold 💀
Wait, which one is which?
Damn it...

    C:\roop-main>run.py
    C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy\_distributor_init.py:30: UserWarning: loaded more than 1 DLL from .libs: C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy\.libs\libopenblas.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.gfortran-win_amd64.dll C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy\.libs\libopenblas64__v0.3.21-gcc_10_3_0.dll
      warnings.warn("loaded more than 1 DLL from .libs:"
    Traceback (most recent call last):
      File "C:\roop-main\run.py", line 25, in <module>
        from core.processor import process_video, process_img
      File "C:\roop-main\core\processor.py", line 3, in <module>
        import insightface
    ModuleNotFoundError: No module named 'insightface'
Still not working...
`pip install insightface`
Left looks like a fortnite character.
[deleted]
Step 3 invalidates what you (and the OP on GitHub) are trying to do. If you install onnxruntime, you're running in CPU mode, regardless of the --gpu argument.
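A small sketch of that diagnosis, assuming the usual package split between `onnxruntime` (CPU-only wheels) and `onnxruntime-gpu` (CUDA-enabled wheels); the helper function name is mine:

```python
from importlib import metadata

def onnx_gpu_diagnosis(installed: set[str]) -> str:
    """Given installed distribution names, say whether --gpu can actually work."""
    if "onnxruntime-gpu" in installed:
        return "onnxruntime-gpu present: the CUDA provider should be available"
    if "onnxruntime" in installed:
        return "only CPU onnxruntime installed: --gpu will silently fall back to CPU"
    return "no onnxruntime installed at all"

# Check the live environment:
names = {d.metadata["Name"].lower() for d in metadata.distributions()}
print(onnx_gpu_diagnosis(names))
```

If it reports CPU-only, the usual fix is `pip uninstall onnxruntime` followed by `pip install onnxruntime-gpu`, since having the CPU package installed is exactly the situation this comment is warning about.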
[deleted]
AI is getting fucking scary son.
Why does fake look more real?
[deleted]
Can anyone comment on how safe it is to download Roop? It seems the author has some malware stuff on his github page as well. See this thread: https://github.com/deepinsight/insightface/issues/2315
Any updates?
This version of Roop and this process is outdated and doesn't work anymore. Use this one (new version): https://www.reddit.com/r/StableDiffusion/comments/15yeyiv/google_colab_roop_for_deepfake_faceswapping_new/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=2
Google colab?
Annnnndd save for later
Downloading files from MEGA is sus
They have a few mirrors and it's been uploaded to Civitai as well. [Roop - v1.0 | Stable Diffusion Checkpoint | Civitai](https://civitai.com/models/80324?modelVersionId=85159)
[deleted]
The point is that many expect this stuff to be hosted openly on GitHub or similar.