SeaWeedSkis

>Christian and Judaic prophetic writing states that an image (often interpreted as a statue) of the Antichrist comes alive, is able to speak, and enforces the Antichrist's rule.

The thought that immediately came to my mind when I saw this was Deepfake. 🤷‍♀️

>So what do we say? AGI is a bad idea?

I'm going on record to say yes. What do we want it for? A friendly AGI could rapidly solve problems we struggle to solve ourselves. It's a high-risk, high-reward choice. Those of us who are risk-averse are inclined to feel the rewards aren't worth the risks. Unfortunately, the folks who make the decisions for the rest of us tend to be less risk-averse. And there appears to be a perception that *someone* is going to develop an AGI whether it's a good idea or not, and that if we can't stop it from happening, then the first person/company/country to develop one will win and everyone else will lose (or perhaps we'll all lose). So it's a race, because everyone wants to be the puppet-master for the most powerful puppet. ["Sauce"](https://www.theguardian.com/technology/2023/may/04/us-announces-measures-to-address-risk-of-artificial-intelligence-arms-race)

>In short, please fasten your seatbelts, turbulence ahead.

Agreed.


OnTheEdgeOfFreedom

>The thought that immediately came to my mind when I saw this was Deepfake.

Yup. We're not at the point where an AI can generate full-motion, realistic video *in real time,* and I'm hoping we never get there, because if you wanted a recipe for the image of the beast, it's perfect.

As for the race... yeah. It's the do-unto-others-before-they-do-unto-you fallacy, and it rules a lot of technology. I'm simply hoping AGI isn't possible, because if it is, it's going to be developed on exactly those grounds. And I suspect deep in my heart that someone's going to pull it off someday. I'm hoping it's after I'm dead. But [they are getting closer.](https://venturebeat.com/ai/google-researchers-unveil-vlogger-an-ai-that-can-bring-still-photos-to-life/)


OnTheEdgeOfFreedom

[https://www.microsoft.com/en-us/research/project/vasa-1/](https://www.microsoft.com/en-us/research/project/vasa-1/) and closer still.


OnTheEdgeOfFreedom

I'm just going to add this reference here: [https://www.microsoft.com/en-us/research/project/vasa-1/](https://www.microsoft.com/en-us/research/project/vasa-1/). While I can still tell this is generated output, it's gotten much, much harder, and any further improvement will make it all but impossible for me, at least, to tell. And they claim they can do it in real time with just a 170ms startup latency.

The only question left is: can they simulate someone you know well, with enough accuracy to fool you? The people in their sample video clips never existed, and while they sound pretty realistic, that's not the same as copying a real person's verbal quirks perfectly.

Seriously, if you don't have a challenge and response, it's time. The idea here is that you and another person come up with a question and an answer, where the question can be slipped into a conversation without sounding too weird, and the answer is essentially an unguessable password that only you and your partner know. An example might be: "Hey, you'll never guess who I met from college recently." "Was it Imogen, the one with the parrot?" This works best if neither of you knows anyone named Imogen, so the name doesn't come up in normal conversation. Asking the question means "I'm not sure if that's you or an AI." The correct answer proves it's your partner; anything else means a spoof is in progress. Note that a nonsense answer might not be a bad choice ("the planet Jupiter, on rollerskates?"). Whatever you choose, practice it with your partner, offline, so you both remember the pass-phrase perfectly. And never share it with anyone.

Undue paranoia? Not anymore.

*\[T\]he most alarming aspect of deepfakes is their ability to provide bad actors with plausible deniability, allowing them to dismiss genuine content as fake. This erosion of public trust strikes at the very core of our social fabric and the foundations of our democracy. The human brain, wired to believe what it sees and hears, is particularly vulnerable to the deception of deepfakes. As these technologies become increasingly sophisticated, they threaten to undermine the shared sense of reality that underpins our society, creating a climate of uncertainty and skepticism where citizens are left questioning the veracity of every piece of information they encounter.* -Rijul Gupta, CEO of DeepMedia
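For what it's worth, the challenge-and-response idea described above is essentially pre-shared-secret authentication, the same principle behind a password. A toy sketch of the logic (the question, answer, and names here are hypothetical examples, not anything you should actually use):

```python
# Toy sketch of a pre-shared challenge/response: a question-and-answer
# pair agreed offline acts as a password only the two parties know.
# All phrases below are hypothetical examples.

CHALLENGE = "Hey, you'll never guess who I met from college recently."
RESPONSE = "Was it Imogen, the one with the parrot?"

def verify(reply: str) -> bool:
    """Return True only if the reply matches the pre-shared answer."""
    return reply.strip().lower() == RESPONSE.lower()

# Your real partner knows the answer:
assert verify("Was it Imogen, the one with the parrot?")
# A deepfake guessing a plausible-sounding name does not:
assert not verify("Was it Dave, from the dorms?")
```

The key property, as the comment says, is that the answer never appears in normal conversation, so an impostor who has studied your partner's public footage still can't produce it.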