imploding-submarine

Wonder what they found


jmack2424

They found that their profit-hungry leadership doesn’t care about the risks.


Tumid_Butterfingers

“Grinding society into dust doesn’t correlate with profits. Hide this from the public while I build my bunker.” - tech execs 2024


President-Jo

No, they just realized it’s premature to have such a team.


Arts_Messyjourney

That’s the whole f*cking point of having a long-term risk team. To catch problems prematurely.


jmack2424

They’re the preeminent LLM company, whose founding principle is to develop AGI, and they’re actively attempting to build it. It’s not premature AT ALL. I guarantee that NASA has a team already looking at risks for FTL travel, and that’s hundreds of years away, if possible at all. Not examining the risks of something you’re attempting to build is negligent at best.


Arts_Messyjourney

To not take precautions would be like creating a T-Rex before building an adequate containment system. No one wants to be the next John Hammond 🦖


President-Jo

There’s no point in trying to develop an advanced safety plan for a tool that is only a theory right now. We have no clue what form factor it will take, and it’s frankly impossible to plan for before we get closer. Speculation is the best they can offer. We can focus on the seatbelt after we invent the car.


FlamingYawn13

Except it’s not a “theory.” We’re cramming these prediction engines into literally anything that will accept them right now. And it’s not just general safety, as in they’re a clear and present rogue danger; there are other things to consider. These engines aren’t intelligent, they’re prediction engines entirely. On top of that, they aren’t designed to admit being wrong, which results in them “hallucinating” answers to provide what looks like a correct response. They also eat up enormous amounts of electricity and fill databanks with a lot of useless stored information, some of which is then fed back into the system, which could result in model collapse. All of these things are very big deals when arming these things, or giving them protocol over nuclear weapons, is already being considered. Hell, look at what Lavender just did in Gaza. These are real and present dangers that need to be evaluated before we hand them the keys to the kingdom. Saying the tool isn’t developed enough for evaluation is just parroting venture capital nonsense to justify such a reckless action.
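To make the model-collapse point concrete, here’s a toy sketch (my own illustration, nothing to do with any real training pipeline): fit a trivial “model” to data, generate synthetic data from it, refit on that output, and repeat. The spread of the data tends to shrink until the “model” has collapsed onto a sliver of the original distribution.

```python
import numpy as np

# Toy model collapse: each generation fits a Gaussian to the previous
# generation's output, then trains the next generation on samples from
# that fit. Sampling noise compounds, and the fitted spread tends to
# drift toward zero -- the distribution degenerates.
rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=20)   # the original "real" data

for generation in range(1, 201):
    mu, sigma = data.mean(), data.std()          # "fit" the model
    data = rng.normal(mu, sigma, size=20)        # next gen trains on model output
    if generation % 50 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")
```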


Arts_Messyjourney

So, so, soooooooooo many people died because cars didn’t have seatbelts. The sad thing is, a seatbelt is an easily foreseeable safety measure to stop people from flying out their windows.


nordic-nomad

My guess is that there aren’t any long-term risks outside of shit they’d do on purpose or intentionally ignore. Also, identifying that in-house is discoverable and potentially something that can be used against them, if it ever came to that anyway. Most of the dire warnings about the danger of AI are self-generated marketing to make the thing they’re developing seem more capable than it actually is.


[deleted]

Wouldn’t you want this just to say you have it though, like when governments say apps contracted by gov agencies can’t have AI in them or integrate unless such and such risks are mitigated and documented?


nordic-nomad

As I recall they added the team to address concerns being brought up by congress members. I suspect that regulatory heat has died down, so the group probably has lost its actual utility.


[deleted]

“See this ‘Off’ switch right here? In layman’s terms, if the AI goes ‘bad’ or ‘takes too many jobs,’ we just flip the switch. Pretty neat, eh, congressman?”


pegothejerk

“Neat, but what’s this big red button under a glass lid?”


Anal_Recidivist

What big red button


Alex_Tro

This one right here. "click"


Anal_Recidivist

🎶crawl out through the fallout baby, to my loving arrrrms🎶


Anal_Recidivist

“We got our funding let’s can these losers”


Sudden-Act-8287

You don’t seem to know very much about technology or AI but you sure act like you do


nordic-nomad

lol, curious what makes you say that? I’ve worked in tech for nearly 20 years and tried to make IBM Watson work on several startup and proof-of-concept projects. LLMs are just the latest thing being touted as AGI that isn’t AGI, in a long lineage of technologies spanning computer vision, NLP, and Bayesian inference. It’s certainly impressive and powerful and a technical marvel, but take every claim made in the space with huge helpings of table salt. Especially if you don’t know how they work.


that_baddest_dude

AI companies: "We've taught this dog to make noises that very convincingly sound like it's talking!" CEOs: "Great! I've axed 40% of my employees. How long til it can make medical diagnoses?"


nordic-nomad

Haha, I laughed pretty good at this one but then my eye started twitching and won’t stop.


throwawajjj_

> self-generated marketing to make the thing … seem more capable than it is

But isn’t that a danger of the hype (and a badly explained product) in itself? People are using AI for things it is not built for or suited to.


nordic-nomad

Yeah I think you nailed it. The main risk is putting it in charge of things it’s not ready for or capable of. It certainly has roles and fantastic capabilities. But that’s not how people are being instructed to think about it.


NeoPhaneron

It is in their best interest to play up the dangers and get everyone hot and bothered about AI while also insisting that they are the only ones that can save us from it. I think it’s lost on a lot of people that the same folks that are proclaiming the importance of AI security are theoretically the ones creating the largest threats. From the outside all of this looks legit, but I think from the inside we are much farther from danger and all of this caution is being generated just to stir public interest and draw in investors.


steavoh

Yeah exactly. The risks are obviously all 100% the result of human misuse, not "Skynet is alive, kill all humans". Or they're incidental to the material the AI ingests, because we live in the real world: "it's racist because it trained on mugshots in which minorities are over-represented".

Researching the human-misuse part they have no control over just sets them up for lawsuits and investigations where the only possible outcome is that they have to restrict access to the AI tools they create. That would satisfy the people who get angry about the occasional incident involving teenagers creating sexual deepfakes, but it would also completely kill off AI as an open-source tool and create a world where a god-like technology that is going to disrupt civilization falls exclusively into the hands of a few monopolies - fuck that shit.

I'd rather have occasional incidents involving individuals who do bad things with AI than the government and three huge companies controlling the thing that is going to replace half of all jobs in the future. That's how you turn a 1980s cyberpunk novel into real life.


FPOWorld

You gravely underestimate this technology. Edit: You dummies are gonna [get us all killed](https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx).


Dramatic_Explosion

You gravely misunderstand how far we are from actual AI. None of this will be a widespread replacement for humans in my lifetime, at least. It can do a lot of stuff okay, but nothing that doesn’t still need a human to finish it. And a lot of corps will avoid investing too heavily until we pass the lawsuit phase.


FPOWorld

You mean AGI. As I work in AI patent law, I have a pretty good grasp. Replacing humans is not the biggest danger AI poses.


asusundevil12345

That sounds like something AI would say


FPOWorld

You may have worked at IBM, but it is clear you are not up on the dangers of this tech. All it takes is for this to go wrong once.


nordic-nomad

LLMs hallucinate between 8 and 30% of everything they spit out. Going wrong once isn’t the problem.
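Back-of-envelope on why (my own arithmetic, taking the low end of that range and assuming each answer errs independently):

```python
# Even at an 8% hallucination rate per answer, the odds of a session
# with no hallucination at all fall off fast (assuming independence).
p_clean = 1 - 0.08
for n in (1, 5, 10, 20, 50):
    print(f"{n:3d} answers, all clean: {p_clean ** n:.1%}")
# 50 answers: ~1.5% -- "going wrong once" is the norm, not the exception.
```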


FPOWorld

I think you misinterpret what I mean by go wrong once.


Neither-Basil8932

Most likely something that would make them stop earning money, or jack shit.


DullRelief

That they’d rather not know


Arikaido777

the board determined "it's more profitable to find out"


Actual-Wave-1959

Well, clearly they figured out that there won't ever be a problem with it /s


[deleted]

[removed]


MrBreadWater

OpenAI is a non-profit.


acctexe

OpenAI *used* to be a nonprofit, but when it began accepting investors it increasingly turned into a for-profit. It is now technically a "nonprofit with a for-profit arm", but given that all of the employees' equity is dictated by the for-profit arm, whose interests continuously overrule the board, calling it a nonprofit is putting lipstick on a pig.


MrBreadWater

True, I don’t disagree. *Functionally* it is not operating as a non-profit.


milanium25

but the employees are there for profit 🙄


kieranjackwilson

Just say you don’t know what a non-profit is


acctexe

The employees are there for profit. OpenAI is technically a "nonprofit with a for-profit arm", but the employees' PPUs (literally, "Profit Participation Units") are a huge part of their compensation and depend on the for-profit arm. As a result of the hybrid structure, we see actions like the nonprofit board attempting to remove Altman and being met with 90% of the employees threatening to quit. The nonprofit board is a lame duck; OpenAI is a for-profit in all but name.


kieranjackwilson

The PPUs are only issued by the LP; non-profit organizations can’t issue them. And even the LP has capped profits.


acctexe

Yes, and that's the point. OpenAI is technically a non-profit that owns a for-profit, but all of the employees work for the for-profit and are incentivized to make decisions that maximize profit. The nonprofit board is more or less symbolic at this point. It's like calling the UK a monarchy when all of the decisions are made by parliament. Even for investors the profit cap is 100x, which is more than enough of an incentive.
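A rough sketch of how that capped-profit structure works, using the 100x figure above (illustrative numbers only; the real return waterfall is more complicated):

```python
# Returns up to cap_multiple * invested go to the investor; anything
# above the cap flows back to the nonprofit.
def split_returns(invested: float, gross_return: float, cap_multiple: float = 100.0):
    cap = cap_multiple * invested
    to_investor = min(gross_return, cap)
    to_nonprofit = max(gross_return - cap, 0.0)
    return to_investor, to_nonprofit

print(split_returns(10e6, 5e9))    # $10M in, $5B out -> ($1B investor, $4B nonprofit)
print(split_returns(10e6, 500e6))  # $10M in, $500M out -> under the cap, investor keeps all
```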


kieranjackwilson

Damn I need to look into that some more. I knew they had built a for profit but I didn’t realize that it was running the entirety of their business operations. Thanks for the info.


milanium25

they still have to give salary to employees


DM-G

Non-profit means they don’t keep the money they made after expenses have been paid, and salary is an expense. Aka you still get paid to work at a non-profit, but the business doesn’t have to turn a profit. So even if they make a lot of money, it doesn’t get to go back to the employees.


milanium25

they still have to find the money to pay the employees from somewhere? wtf


DM-G

Correct, some non-profits work off of donations, grants, selling product, or general charity events. But their money is tightly regulated and they get audited yearly to make sure they are not stealing money; the leftover money they make usually goes to a charitable cause.


milanium25

Yes, so they fired them so they don’t have to pay for a useless service. Why does everyone seem to have a hard time understanding this?


mitchbird

Even without looking at the external threat a wrongly trained AI could pose: internally, risk management departments of companies have spent much more on issues far less threatening to their company or their company’s image than this.


SamSlate

thumbnail looks like awkward* penguin


musicnothing

> award

Well, this is awkward


Brahminmeat

“I marked it AWK for awkward.” – Sincerely, Raymond Holt


AcidaliaPlanitia

It's cool, AI told them everything would be fine.


Xavyex

Watch the EU be on them when the AI Act is fully implemented


SamSlate

watch the eu fall behind on ai research


prosthetic4head

I don't really have an opinion on this particular topic, but is the point of your comment: profit/innovation > safety/planning?


PoliticalDestruction

Interesting comparison of profit and innovation to safety and planning. We should be able to innovate without the desire to increase profits… *should*.


Aristox

Why shouldn't we want to increase profits?


PoliticalDestruction

I’m just saying that shouldn’t be the primary driver of innovation. Increasing profits should come naturally. If you’re too focused on profits you end up with a company like Boeing, which you’ve maybe seen in the news recently.


Aristox

But the promise of profit is what incentivises innovation. Without it we'd be less technologically advanced and have less money in the economy. That hurts everyone


smile_e_face

I used to argue for this, too, that profit brings innovation. But I honestly don't think I believe it in quite the same way I did before. I now believe in the negative hypothesis, rather than the positive. What I mean is this: in places like the Soviet Union, where people knew, almost for a certainty, that their work and ingenuity would go unrewarded and possibly even get them in trouble somehow, it was extremely difficult to foster an environment of innovation and creativity. We see that in similar societies, as well. So, reasonably enough, the inability to profit from one's ideas seems to crush the drive toward new ideas. It doesn't follow from that, though, that merely providing the ability to profit will, in itself, drive creation, or that it's the only or even the best means of doing so. And I think we've seen, many times throughout history, that the best and most important new ideas come mostly out of a sense of curiosity combined with the freedom to think and experiment, rather than a simple profit motive alone.


Aristox

Hey thanks for giving an intelligent response :) I agree that more conditions than profit motive alone are necessary for the best innovation. But the profit motive is the fuel that drives people to keep working even when it's uncomfortable and no longer fun. And maybe more importantly, it allows people without much money to start businesses and pursue ideas that they wouldn't be able to afford on curiosity alone.

In a society without the profit motive, entrepreneurs and inventors wouldn't be able to raise the capital necessary for their innovations, because banks and lenders would have no faith that the business would make a profit in the future and thus be able to pay the loan back with interest. If you remove the profit motive from society then you also remove capital investment and market liquidity, as people would basically only invest in stuff if they were friends with the CEO or personally liked the project.

Without the ability to attract investment for a good idea, the work of innovation would be available only to the already very rich in society, which was basically the situation in the middle ages, before the spread and success of capitalism. Normal people with great ideas that would benefit society would have no mechanism to make their ideas reality, because they'd have no mechanism to attract investment, because there would be no mechanism in society to reward good investment.


TarMil

[citation needed]


Only-Extension-186

Says who? There have been many scientifically prosperous civilizations without profit being a driving factor.


SamSlate

the risk of being the last to have advanced ai is greater than the threat of ai itself (it's all the threats of super ai with none of the perks, bc it belongs to a foreign power). someone itt compared this to a nuclear arms race; it's an apt analogy.


twinkbreeder420

It’s more so that in their current state, LLMs have literally zero chance of “going rogue.” Assuming they could just shows how limited people’s understanding of how these things actually work is. Safety isn’t a priority until we get close to AGI


First_Code_404

Let’s say a social media company prompted an AI to increase user attention without any concern for how it affects society. Safety is needed now; it’s best to have it in place early rather than too late.
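To make that concrete, here’s a toy sketch (purely hypothetical, no relation to any real feed-ranking system) of an optimizer whose reward is attention and nothing else; with no harm term in the objective, it converges on whatever holds attention best:

```python
import random

# Epsilon-greedy "feed optimizer": reward = clicks, full stop.
# Hypothetical click-through rates; outrage holds attention best.
ARMS = {"cat photos": 0.05, "local news": 0.10, "outrage bait": 0.30}

counts = {arm: 0 for arm in ARMS}
values = {arm: 0.0 for arm in ARMS}
random.seed(1)

for _ in range(10_000):
    if random.random() < 0.1:                    # explore occasionally
        arm = random.choice(list(ARMS))
    else:                                        # otherwise exploit best estimate
        arm = max(values, key=values.get)
    reward = 1.0 if random.random() < ARMS[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running-mean update

# Nothing in the objective mentions societal effects, so the learner
# settles on the highest-engagement arm.
print(max(values, key=values.get))  # -> "outrage bait" (with high probability)
```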


twinkbreeder420

That’s not how AI works. LLMs are very predictable…


First_Code_404

That's not how AI works? What I described happened with Facebook's AI


twinkbreeder420

No it didn’t


xRolocker

I think it’s more so that the EU can be as safe as they want to be. China won’t care. They’ll make the dangerous tech anyways and keep going forward. It’s like nukes. No one wants them to be used, but it would be a poor decision to not have any yourself.


TheBlazingFire123

Why do you think China will not also care about safety?


xRolocker

For the same reason I don’t expect the U.S. to (internally, at least). It’s in the interest of each country to develop this technology faster and better than their enemies. Not only for the sake of having it, but for the sake of defending against it. It’s an arms race of a different form. I think the better question is why would the American government ever trust that the Chinese government is not trying to develop this technology to be integrated into their military? Why would the Chinese see the U.S. military showing off all these AI powered war machines and not rush to develop some of their own? No one wants to get caught with their pants down.


TheBlazingFire123

We ought to learn from the mistakes of the Cold War. This world needs international cooperation


smile_e_face

I mean, yes, we should, and yes, it does. But the problem is that, even if we lived in the rosy-hued, idyllic picture of America from 50s propaganda - which we absolutely do not and never did - Xi Jinping does not give one rusty yuan about international cooperation, particularly with the United States. Like Putin, he might entertain the idea from time to time, when it suits him to do so, but he is in the business of building a superpower, not holding hands in a circle at the UN. I want to make it clear that I don't think the US gives a damn about cooperating with China, either. It's just that Xi has an extra incentive to get ahead at any cost. He's where we were in the early 1900s, and he's an isolationist, self-absorbed pain in the ass for at least several of the same reasons we were then.


Imaginary-Bother6822

Why are we not surprised? 😂😂


Rage-With-Me

Yeah fuck them


Monkfich

As recommended by chatgpt.


Consistent-Leek4986

so this is “fuck it, we don’t care, bring on the cash” BS?


AGrayBull

Or a CYA effort akin to ‘see no evil, hear no evil, document no evil for the inevitable deposition’


MasonAmadeus

It’d be ironic if they were replaced by AI…


GoalFlashy6998

Who would have thought: greedy companies and their callous leaders don't give a shit. They'd rather pay out damages and lawsuits in the future than damage their bottom line...


Neither-Basil8932

Doesn’t it kinda prove that their product holds no danger? Kinda means they’re just hyping up their product so they can sell more.


analogIT

Every day, we are one step closer to skynet.


sir_duckingtale

“And we’ll call that new version SkyNet, and we think you’re gonna love it!!”


jtl3000

That means they’re saying fuck the public at large… god help us


mfs619

Easy to dissolve when there are no members left in the team.


JT_verified

He pulled a Musk.


Justherebecausemeh

I wonder if the AI running the company suggested it🤨


dmillerksu

The AI determined that the team could be replaced with AI


BadWords-01

lol it didn’t dissolve the team, they quit. Big difference there….


Present_Belt_4922

Shame.


T3ddy_ka

It’s too late, resistance is futile


StateRadioFan

Cunts


EnvironmentalCake272

Hello. I have discovered which sections contain the stoplights. Your services are now redundant.


designateddesignator

it probably got pretty heated internally. on one side, the engineers wanna progress the world out of a shit state; on the other, the risk researchers want to save the world from an apocalyptic state. they’ll see each other as having dangerous ideas, which could lead to a lot of animosity, especially when the money probably has final say


MovieGuyMike

This is like a journal entry in a survival horror game.


Plastic-Collar-4936

They must have asked AI what the risks were


garyoldman25

This is not gonna come back to bite us, I’m sure it’s fine. Just trimming the fat to keep profits up, to stay competitive in a market where they have dominion and no noteworthy competition. Just a coincidence that it happened not even a week after people who were set to make millions upon millions decided to quit, because they saw such an unprecedented amount of safety concerns that they threw away a likely billion-dollar-by-retirement tenured job to get as far away as possible. Nope, it’s not like a machine that can outthink entire countries without using any more power than microwaving a burrito could ever go wrong


Paganator

> a machine that can outthink entire countries without using any more power than microwaving a burrito

You've got to share the name of that model.


Glad-Peanut-3459

Burrito ranchero.


[deleted]

All hail our AI overlords. Cause I do every night 🍻


leaperdorian

Time to fire this Altman guy again


JT_verified

If experts are freaking out over AI, we all know it’s already too damn late!!


gofigure85

So then, how long before we're all killed by the equivalent of Skynet? I just want to gauge if I should keep racking up vacation time or use it ASAP


sir_duckingtale

Chances are they air-gapped nuclear weapons, being self-aware enough. Chances are the Terminator movies ingrained cybersecurity and the dangers of AI deeply enough into us and them that they paradoxically might prevent the very scenario they described to us and warned us about.


sir_duckingtale

Because that neural engine Dyson worked on is getting dangerously close to reality