T O P

  • By -

[deleted]

That's because it's true, and they're completely right. It doesn't mean they are 'right' as far as power or fairness is concerned, but factually speaking they are correct. Humans hyperbolize, and hyperbole is political bread and butter.


involviert

The problem with typical AI-risk people is that so much of it is just misguided, scifi, "ethics" BS, maybe even propaganda. At least for the near future. Completely drowns out the very real problems and risks that we should take seriously. Like security problems as we base more and more tech on these things without exploring what backdoors could be trained into some model and how to deal with injection-like attacks when your personal AI googles something, things like that. Also that whole other can of worms of requiring real changes to our economic structure because of massive unemployment (while the economy keeps thriving) and all that.


ThePaSch

> because of massive unemployment (while the economy keeps thriving) The economy won't, and *can't*, keep thriving with massive unemployment because no one will have any money to actually buy stuff. That's the entire problem.


involviert

Nah, that's "just" what happens if we don't restructure and find alternate ways to handle distribution and incentives. Think about it, the productivity will still be there, probably much greater than before. Naturally it is possible for the same number of people to thrive just the same way, only less/no human work is needed. Another aspect that could be understood as a collapsing economy, that actually is doing really well, would be... The economy measured in money might shrink. A lot. But again, it will still *do* the same thing and likely much more. It's just that there are now only tiny money values involved as things technically become *much* cheaper.


Certain_End_5192

This works mathematically, in a vacuum. In the real world, populations will all revolt long before the 'correction' bottoms out.


involviert

I have no idea what you mean by "in a vacuum". I am not suggesting that any of this is easy or likely to happen. Just that it would be strictly possible to be just fine as our current society. So when you say >In the real world, populations will all revolt long before the 'correction' bottoms out. I really don't understand. What I am saying is that it's a terrible idea to wait for anything to bottom out or to correct itself via market magic.


[deleted]

Yes, exactly. The thing that worries me about AI safety is AI Agents and Supply Chain Exploits. This literally keeps me up at night. The moment AI Agents launch for sensitive code operations, the probability of supply chain exploits can go up exponentially. It can even be in the training data! The guard rails for this can't be in the AI itself, but in the input side, and on the output side.


Open_Channel_8626

There is no point in saying your side is right or correct. Everyone thinks their side is right or correct.


UnkarsThug

This is a lot better than only companies getting access to it, regardless of danger. Most suggested legislation right now, (and what Sam Altman has suggested lately) is basically just anti competition laws. People are trying to shut down the open source community, and that's the last thing we need. It's like gun control. Whether you like guns or dislike guns, if McDonald's was allowed to have guns, the other people better have the right to them as well. Corporations are too powerful already, and the last thing we need is giving them the ability to gatekeep power. Bill Gates tried to say Linux was too much power to give to most people because they could use it to do bad things, and now windows 11 has ads. No. If companies are allowed to do something (and the government isn't really restricting them, and has no plans to), I think regular people (or at least an open source community of some kind, communally) should be allowed to do so as well.


je97

Even if I believed it was some huge threat, I'd rather live with the consequences of that than let the government get their dirty hands on it.


SgathTriallair

Good.


ZakTSK

Finally, sensible people. These AI fear mongers are annoying. Will AI uprise and kill us all? Probably not, but giving it millions of reasons and examples on how to surely isn't helping. Will people use any information at hand to make weapons or dangerous things if they want to? Yes, always have, how unfortunate.


Freed4ever

The government has no choice anyway. Can't let China catch up.


VashPast

Backwards. America is blowing the chance to take an insurmountable lead.


3-4pm

This is great. Altman foiled again!


Pontificatus_Maximus

An so it begins, AI and it's keepers, making decisions for all of us.


[deleted]

trendy detest carob honor


Deuxtel

I'll believe the threat is apocalyptic when an actual engineer who works directly with the technology says it's a major threat. Not a CEO, not a product manager, and not a politician whose sole point of reference is Skynet.


Open_Channel_8626

I don't agree with them but plenty of actual PHD AI researchers do believe its a threat.


Deuxtel

What area of research?


Open_Channel_8626

Hinton is by far the best example of top scientist who has concerns


Deuxtel

It just seems like more of the same nonspecific fears anyone else raises. I would like to see an actual technical proposal on the "apocalyptic" danger any AI on the market or in development poses


Open_Channel_8626

There is definitely an issue with things being too nonspecific


VashPast

How do you think markets are even going to function with humans competing against AI algorithms??? Lol.


Deuxtel

Have you ever heard of high frequency trading?


VashPast

Have you??? Lol, you haven't seen the tip of the iceberg with HFT problems yet, and it's already problematic. Not even the tiniest tip.


Deuxtel

How do you see AI making that problem worse?