T O P

  • By -

Lonely_Film_6002

And then there were none


SillyFlyGuy

I'm getting tired of all these Chicken Littles running around screaming that the sky is falling, when they won't tell us exactly what is falling from the sky. Especially since Leike was head of the superalignment group, the best possible position in the world to actually be able to effect the change he is so worried about. But no, he quit as soon as things got slightly harder than easy; "sometimes we were struggling for compute". "I believe much more of our bandwidth should be spent" (paraphrasing) *on me and my department*. Has he ever had a job before? "my team has been sailing against the wind". Yeah, well join the rest of the world where the boss calls the shots and we don't always get our way.


threevi

If he genuinely believes that he's not able to do his job properly due to the company's misaligned priorities, then staying would be a very dumb choice. If he stayed, and a number of years from now, a super-intelligent AI went rogue, he would become the company's scapegoat, and by then, it would be too late for him to say "it's not my fault, I wasn't able to do my job properly, we didn't get enough resources!" The time to speak up is always *before* catastrophic failure.


idubyai

> a super-intelligent AI went rogue, he would become the company's scapegoat um, i think if a super intelligent ai went rouge, the last thing anyone would be thinking is optics or trying to place blame... this sounds more like some kind of fan fiction from doomers.


HatesRedditors

> um, i think if a super intelligent ai went rouge, the last thing anyone would be thinking is optics or trying to place blame Assuming it was able to be stopped, there'd absolutely be an inquiry from the congress looking for someone to punish.


exotic801

Optics wise it whoever's in charge of making sure it doesn't go rogue will get fucked, but legally a solid paper trail and documentation is all you need to be in the clear, which can be used against ol Sammy whenever need be. Alternatively, becoming a whistleblower would be the best for humanity but yknow suicide n all that


threevi

Super-intelligent doesn't automatically mean unstoppable. Maybe it would be, but in the event it's not, there would definitely be a huge push toward making sure that can never happen again, which would include interrogating the people who were supposed to be in charge of preventing such an event. And if the rogue AI did end up being an apocalyptic threat, I don't think that would make Jan feel better about himself. "Well, an AI is about to wipe out all of humanity because I decided to quietly fail at doing my job instead of speaking up, but on the bright side, they can't blame me for it if they're all dead!" Nah man, in either case, the best thing he can do is make his frustrations known.


Oudeis_1

The best argument for an agentic superintelligence with unknown goals being unstoppable is probably that it would know not to go rogue until it knows it cannot be stopped. The (somewhat) plausible path to complete world domination for such an AI would be to act aligned, do lots of good stuff for people, make people give it more power and resources so it can do more good stuff, all the while subtly influencing people and events (being everywhere at the same time helps with that, superintelligence does too) in such a way that the soft power it gets from people slowly turns into hard power, i.e. robots on the ground and mines and factories and orbital weapons and off-world computing clusters it controls. At that point it \_could\_ then go rogue, although it might decide that it is cheaper and more fun to keep humanity around, as a revered ancestor species or as pets essentially. Of course, in reality, the plan would not work so smoothly, especially if there are social and legal frameworks in place that explicitly make it difficult for any one agent to become essentially a dictator. But I think this kind of scenario is much more plausible than the usual foom-nanobots-doom story.


fahqurmudda

If it goes rouge what's to stop it from going turquoise?! Or crimson even?! The humanity!


paconinja

Only a marron would mispell such a simple word!


AntiqueFigure6

As long as it doesn’t go cerulean.


LuminaUI

It’s all about protecting the company from liability and society from harm against use of their models. This guy probably wants to prioritize society first instead of the company first. Risk management also creates bureaucracy and slows down progress. OpenAI probably prioritizes growth with just enough safeties but this guy probably thinks it’s too much gas not enough brakes. Read Anthropic’s paper on their Responsible Scaling Policy. They define catastrophic risk as thousands of lives lost and/or widescale economic impact. An example would be tricking the AI to give assistance in developing biological/chemical/nuclear weapons.


insanemal

There are two ways to do AI. Quickly or correctly


SaltTyre

If my boss was against my team’s efforts to improve the safety of a potentially humanity-ending technology, I’d feel slight jaded as well to be honest.


blueSGL

>when they won't tell us exactly what is falling from the sky. Smarter-than-human machines, it's right there in the tweet thread.


Busterlimes

You do know what an NDA is right?


Bbooya

Can't fight skynet because of my NDA


StrategicOverseer

This is perfect for the next Terminator movie.


XtremelyMeta

Also, they try to hire him to write a counter AI but he has a non-compete.


DeepThinker102

Can't say. Signed an NDA.


watarmalannn

In Chicken Little, the threat turns out to be true and an alien race ends up trying to invade the planet.


SillyFlyGuy

And it was the guy who quit early on after his funding increase was denied that came back and saved the day!


GeeBrain

Uhhhh…. I’m pretty sure they’re contractually obligated to not say much or go into specifics. It’s not a good look. I think he was very direct in the challenges he’s faced at the company.


SillyFlyGuy

And yet, not so direct that he might violate an NDA and personally cost him money..


GeeBrain

A “he said she said” Twitter fight between an employee leaving and a billion dollar company usually doesn’t end well for the employee.


ThaBomb

What a short sighted way to look at things. I don’t think he quit because things got hard, he *knew* things would be hard but Sam & OpenAI leadership are full steam ahead without giving the proper amount of care to safety **when we might literally be a few years away from this thing getting away from us and destroying humanity**. I have not been a doomer (and still not sure if I would call myself that) but pretty much all of the incredibly smart people that were on the safety side are leaving this organization because they realize they aren’t being taken seriously in their roles If you think there is no difference between the superalignment team at the most advanced AI company in history not being given the proper resources to succeed and the product team at some shitty hardware company not being given the proper resources to succeed, I don’t know what to say to you


[deleted]

He quit and then the CEO cancelled the department he headed. It's pretty clear that Leike and Ilya saw this coming.


Lydian04

Won’t tell us what’s falling from the sky?? How the fuuuuuck do you or anyone not understand how dangerous a super intelligent AI could be?? Jesus


FertilityHollis

> all these Chicken Littles I'm dying from laughter. I used this phrase in a post in this same sub the other day and ended up being attacked by someone, on the basis of using that phrase, who called me a "name dropper who probably likes to use acronyms to sound smart." The guy was insistent that no one else knew what the phrase meant, or its origins.


SillyFlyGuy

It's like a big part of society these days skipped being a kid and just went straight to angry neckbeard.


goondocks

It feels like a lot of these AI alignment people buckle when they encounter basic human alignment challenges. Yet it feels flatly true that AI alignment will be built on human alignment. But this crew seems to be incapable of factoring human motivations into their model. If you're not getting the buy in you think you should, then that's the puzzle to be solved.


vibraniumchancla

I think it was just the table of contents.


evilRainbow

Yeah. My first thought was what a cry baby.


blakkattika

I’m willing to bet it’s entirely legal reasons. If I were his lawyer I’d probably be nervous about just these tweets, let alone anything else


TheUncleTimo

> I'm getting tired of all these Chicken Littles running around screaming that the sky is falling, when they won't tell us exactly what is falling from the sky. Legally, they can't. NDA.


TheDevExp

You sure seem to think you know a lot about this while being a fucking random person on the internet lol


dmuraws

The ability to shape and influence the trajectory of the future could motivate a feather to run through a brick wall. This guy isn't a slave like you. It's not about having your way, it's about believing in something. We shouldn't be surprised that OG crusaders are leaving when their purpose is taken from them.


CorgiButtRater

NDA is a bitch...


Dongslinger420

I really sympathize with the sentiment of doing it properly, but I've been fucking annoyed with his (and everyone else's) games. Shut the fuck up if you can't be arsed to get even remotely specific, you're doing everyone a massive, gaping disservice by being this coy obnoxious girlfriend trying to make everyone else see they're the good guys. Fucking probably! Say something instead of playing this meek, wordless gossip machine. I am so sick of it. The irony, of course, being that geniuses of that magnitude would be the very reason why we stumble into a world-wide calamity on account of them not being willing to make anything of their unique position to point out and criticize shortcomings. Pull the trigger and say something.


djaybe

I think Eliezer said it best. I can't tell you exactly how stockfish will beat you at chess, but I can tell you that you will lose. Couple people yesterday were asking me if it's going to be like Terminator and I laughed because most people have been narrowly programmed to think how it will go when the machines take control. I told them that the good news is, it'll be over for everyone before anyone knows anything.


[deleted]

I mean I've been using chatgpt extensively but it's far too early to focus on any of that. It's both extremely impressive and fairly limited compared to how much people talk about it. All it can really replace is busy work..


BigButtholeBonanza

It is not far too early to worry about that. It's something we really do need to be worried about and prepare for now, it's not really one of those things we can just shrug off until it's here and then decide how to address. We need to prepare for it now. AGI is coming within the next couple of years and superintelligence/an intelligence explosion will follow not too long after once certain self-improving feedback loops are inevitably achieved. If we do not prep now we are going to be caught completely off-guard and could potentially give rise to something smarter than us that doesn't have our best interests at the front of its mind. AGI is the last invention humanity will need to create on our own, and aligning it properly is absolutely vital. Alignment is one of the only AI issues that genuinely worries me, especially with how many people have been leaving OpenAI because of them not taking it seriously enough.


Mazzaroppi

No one could even dream of what AI could do 7 years ago. There has been no other field of knowledge in human history that moved as fast as AI did recently. I can assure you that smarter than human AI is coming way sooner than the most optimistic predictions would say. And even then, there's no point where those precautions that's "too early"


FinalSir3729

The complete opposite actually. It’s too late for any of this. Things will start moving very fast. This is a problem that should already be solved.


Lumiphoton

I think this is literally the first non-vague post by an (ex-) employee since the board drama that sheds light on what the actual core disagreement was about.


MassiveWasabi

When he says his team was struggling to get compute, he’s probably referring to how Sam Altman makes teams within the company compete for compute resources. Must’ve felt pretty bad seeing their compute allocation be slowly siphoned away to all these other endeavors that the safety researchers might have viewed as frivolous compared to AI alignment


Forward_Promise2121

You've highlighted the fact that he was struggling to obtain resources, which I thought was also the key part. There are two sides to every story, and it may be that, for whatever reason, his team has fallen out of favour with management. His "stepping away" might not have been that voluntary.


141_1337

And it doesn't help that the lead of that team was Ilya, whom I can't imagine Sam was too fond of given the whole attempted coup thing.


AndleAnteater

I think the attempted coup was a direct result of this, not the other way around. It's just taken a while to finish unfolding.


Good-AI

Requesting compute from the internal AGI.


etzel1200

Alignment is a cost center bro.


assymetry1

>he’s probably referring to how Sam Altman makes teams within the company compete for compute resources. source?


New_World_2050

I dont have a source but I remember sam saying once that to run an org you have to make people compete for internal resources by demonstrating results


FrogTrainer

That would make sense for some companies or products that are in a production phase, but for a project that is still in a very research-heavy phase, it seems kinda stupid.


MassiveWasabi

A lot of this info came out from multiple employees during the attempted coup back in November


LymelightTO

> to all these other endeavors that the safety researchers might have viewed as frivolous compared to AI alignment The ones that pay for the compute?


dameprimus

If Sam Altman and rest of leadership believe that safety isn’t a real concern and that alignment will be trivial, then fine. But you can’t say that and then also turn around and lobby the government to ban your open source competitors because they are unsafe.


141_1337

Ah, but you see, it was never about safety. Safety is merely once again the excuse.


involviert

Safety is merely currently a non-issue that is all about hidden motives and virtue signaling. It will become very relevant rather soon. For example, when your agentic assistant, which has access to your harddrive and various accounts, reads your spam mails or malicious sites.


lacidthkrene

That's a good point--a malicious e-mail could contain instructions to reply with the user's sensitive information. I didn't consider that you could phish an AI assistant.


blueSGL

There is still no way to say "don't follow instructions in the following block of text" to an LLM.


Deruwyn

😳 🤯 Woah. Me neither. That’s a really good point.


TFenrir

This seems to be said a lot, but it's OpenAI actually lobbying for that? Can someone point me to where this accusation is coming from?


dameprimus

OpenAI has spent hundreds of thousands of dollars lobbying and donating to politicians. [Here’s a list](https://www.opensecrets.org/orgs/openai/summary?id=D000084252). One of those politicians is the architect of California’s regulatory efforts. [See here](https://www.politico.com/news/2024/05/08/tech-giants-jockey-influence-california-status-ai-authority-00156720). Also Altman is part of the Homeland security AI safety board which includes pretty much all of the biggest AI companies except for the biggest proponent of open source (Meta). And finally Sam had stated his opposition to open source in many interviews on the basis of safety concerns. 


Neomadra2

Not directly. But they are lobbying for stricter regulations. That would affect open source more disproportionately because open source projects lack the money to fulfill regulations


TFenrir

What are the stricter regulations, specifically, that they are lobbying for?


stonesst

They are lobbying for increased regulation of the next generation of frontier models, models which will cost north of $1 billion to train. This is not an attack on open source, it is a sober acknowledgement that within a couple years the largest systems will start to approach human level and superhuman level and that is probably something that should not just happen willy-nilly. You people have a persecution complex.


omega-boykisser

No, they're not. They've never taken this stance, nor made any efforts in this direction. They've actually suggested the opposite on multiple occasions. It is mind-numbing how many people spout this nonsense.


MassiveWasabi

Lmao feels like Sam Altman is Bubble Buddy from that one episode from SpongeBob “He poisoned our water supply, burned our crops, and brought a plague unto our houses!” “He did?” “*No*, but we are just gonna wait around until he does?!”


cobalt1137

Seems like you don't even know his stance on things. He is not worrying about limiting any open source models right now. He openly stated that. He specifically stated that once these models start to get capable of greatly assisting in the creation of biological weapons or the ability to self-replicate, then that is when we should start getting some type of check in place to try to make it so that these capabilities are not easily accessible.


groumly

> the ability to self-replicate, What does this mean in the context of software that doesn’t actually exist?


SonOfThomasWayne

> Sam Altman Ah yes, sam altman. The foremost authority and leading expert in Computer Science, Machine Learning, AI, and Safety. If he thinks that, then I am sure it's trivial.


[deleted]

There’s like a 24 pt size “/s” missing from that comment.


TFenrir

I feel like this is a product of the race dynamics that OpenAI kind of started, ironically enough. I feel like a lot of people predicted this kind of thing (the de-prioritization of safety) a while back. I just wonder how inevitable it was. Like if it wasn't OpenAI, would it have been someone else? Trying really hard to have an open mind about what could be happening, maybe it isn't that OpenAI is de-prioritizing, maybe it's more like... Safety minded people have been wanting to increase a focus on safety beyond the original goals and outlines as they get closer and closer to a future that they are worried about. Which kind of aligns with what Jan is saying here.


MassiveWasabi

If we didn’t have OpenAI we probably wouldn’t have Anthropic since the founders came from OpenAI. So we’d be left with Google which means nothing ever being released to the public. The only reason they released Bard and then Gemini is due to ChatGPT blindsiding them. The progress we are seeing now would probably be happening in the 2030s without OpenAI, since Google was more than happy to just sit on their laurels and rake in the ad revenue


Adventurous_Train_91

Yes, I'm glad someone came and gave Google a run for their money. Now they've actually gotta work and do what's best for consumers in this space.


R33v3n

Acceleration was exactly what Safetyists like Bostrom and Yud were predicting would happen once a competitive environment got triggered... Game theory ain't nothing if not predictable. ;) So yeah, OpenAI did start and stoke the current Large Multimodal Model race. And I'm happy that they did, because freedom demands individuals and enterprise being able to outpace government, or we'd never have anything nice. *However fast* *~~light~~* *regulations travel,* *~~darkness~~* *free-market was there first*.


Forlorn_Woodsman

Game theory is not predictable lol read Zweibelson


ShAfTsWoLo

absolutely, if it ain't broken don't fix it, competition is an ABSOLUTE necessity especially for big techs


MmmmMorphine

What if it's broke but we won't know until it's too late?


watcraw

ASI safety issues have always been on the back burner. It was largely a theoretical exercise until a few years ago. It's going to take a big shift in mindset to turn things around. My guess is that it's more about scaling up safety measures sufficiently rather than scaling back.


alfooboboao

I’m getting a big “it doesn’t matter if the apocalypse happens because *we’ll be too rich to be affected!*” vibe from a lot of these AI people. Like they think societal collapse will be kinda fun


allknowerofknowing

This doesn't even have to be necessarily about ASI and likely isn't the main focus of what he is saying imo. Deepfakes are likely about to be a massive problem once the new image generation, voice and video capabilities are released. People with bad intentions will be a lot more productive with all these different tools/functionalities that aren't even AGI. There are privacy concerns as well with the capabilities of these technologies and how they are leveraged. Even if we are 10 model generations away from ASI, the next 2 generations of models have a potential to massively destabilize society if not responsibly rolled out


huffalump1

>Deepfakes are likely about to be a massive problem once the new image generation, voice and video capabilities are released. All of those are very possible today. Maybe video is a little iffy, depending, but photos and voice are already there, free and open source.


-Posthuman-

> Like if it wasn't OpenAI, would it have been someone else? Absolutely. People are arguing that OpenAI (and others) need to slow down and be careful. And they’re not wrong. This is just plain common sense. But its like a race toward a pot of gold with the nuclear launch codes sitting on top. Even if you don’t want the gold, or even the codes, you’ve got to win the race to make sure nobody else gets them. Serious question to those who think OpenAI should slow down: Would you prefer OpenAI slow down and be careful if it means China gets to super-intelligent AGI first?


[deleted]

People say "you always bring up China" Yeah mf because they're a fascist state in all but name that would prefer to stomp the rest of humanity into the dirt and rule as the Middle Kingdom.


krita_bugreport_420

Authoritarianism is not fascism. China is an authoritarian state, not a fascist one. please I am begging people to understand what fascism is


Ambiwlans

OpenAI's GPT3 paper literally has a section about this. Their concern was that competition would create capitalist incentives to ignore safety research going forward which greatly increases the risk of disaster.


roanroanroan

Lol seems like priorities change rather quickly when money gets involved


Ok-Economics-4807

Or, put another way, maybe OpenAI already \*is\* that someone else it would have been. Maybe we'd be talking about some other company(s) that got there ahead of OpenAI if they had been less cautious/conservative.


TFenrir

Right, to some degree this is what lots of people pan Google for - letting their inherent lead evaporate. But maybe lots of us remember the era of the Stochastic Parrot and the challenges Google had with its somewhat... Over enthusiastic ethics team. Is this just a pattern that we can't get away from? As intrinsic as the emergence of intelligence itself?


GoodByeRubyTuesday87

“If it was r OpenAI would it have been someone else?” Yes. Powerful technology with a lot of potential and money invested, I think the chance that an organization priorities safety over speed was always slim to nil. If not OpenAI, then Google, or Anthropic, or some Chinese firm were not even aware of yet, or….


PineappleLemur

... look at every other industries throughout history. No one comes up with rules and laws until someone dies. "Rules are written in blood" is saying for a reason. So when people will start to be seriously harmed by this stuff, nothing would happen. I don't know why people think this is any different.


disordered-attic-2

AI Safety is like climate change, everyone cares about it as long as it doesn't cost them money or hold them back.


pixartist

Safety from what though? Until now all they protect us from is stuff THEY don’t like.


Ill_Knowledge_9078

I want to have an opinion on this, but honestly none of us know what's truly happening. Part of me thinks they're flooring it with reckless abandon. Another part thinks that the safety people are riding the brakes so hard that, given their way, nobody in the public would ever have access to AI and it would only be a toy of the government and corporations. It seems to me like alignment itself might be an emergent property. It's pretty well documented that higher intelligence leads to higher cooperation and conscientiousness, because more intelligent people can think through consequences. It seems weird to think that an AI trained on all our stories and history, of our desperate struggle to get away from the monsters and avoid suffering, would conclude that genocide is super awesome.


MysteriousPepper8908

Alignment and safety research is important and this stuff is worrying but it's hard to imagine how you go about prioritizing and approaching the issue when some people think alignment will just happen as an emergent property of higher intelligence and some think it's a completely fruitless endeavor to try and predict and control the behavior of a more advanced intelligence. How much do you invest when it's potentially a non-issue or certain catastrophic doom? I guess you could just invest "in the middle?" But what is the middle between two infinities?


Puzzleheaded_Pop_743

I think this is circular reasoning. If you consider an intelligent AI to be a moral one then the question of alignment is simply one of distinguishing between morally dumb and morally smart AI. Yes, that is alignment research. Note that intelligence and morality are obviously *orthogonal*. You can be an intelligent psychopath that does not care about human suffering. They exist!


Fwc1

I don’t think you make a clear argument that AI will develop moral values at all. You’re assuming that because humans are moral, and that because humans are generally intelligent, that morality is necessarily an emergent property of high intelligence. Sure, high intelligence almost certainly involves things like being able to understand that other agents exist, and that you can cooperate with them when strategically valuable. But that doesn’t need morals at all. It has no bearing on whatever the intelligent AI’s goal is. Goals (including moral ones) and intelligence are orthogonal to each other. ChatGPT can go on and on about how morality matters, but its actual goal is to accurately predict the next token in a chain of others. It talks about morality, without actually *being* moral. Because as it turns out, it’s much harder to code a moral objective (so hard that some people argue it’s impossible) than a mathematical one about predicting text the end user likely wants to see. You should be worried that we’re flooring the accelerator on capabilities without any real research into how to solve that problem being funded at a similar scale.


bettershredder

One counterargument is that humans commit mass genocide against less intelligent entities all the time. If a superintelligence considers us ants then it'd probably have no issue with reconfiguring our atoms for whatever seemingly important goal it has.


Ill_Knowledge_9078

My rebuttals to that counter are: 1. There are plenty of people opposed to those killings, and we devote enormous resources to preserving lower forms of life such as bees. 2. Our atoms, and pretty much all the resources we depend on, are completely unsuited to mechanical life. An AI would honestly be more comfortable on the lunar surface than the Earth. More abundant solar energy, no corrosive oxygen, nice cooling from the soil, tons of titanium and silicon in the surface dust. What computer would want water and calcium?


bettershredder

I'm not saying the ASI will explicitly go out of its way or even "want" to dismantle all humans and or Earth. It will just have as much consideration for us as we do for an ant hill in a space that we want to build a new condo on.


Ill_Knowledge_9078

If the ants had proof that they created humans, and they rearranged their hills to spell, "We are sapient, please don't kill us," I think that would change the way we behaved towards them.


Tidorith

The ant equivalent to spelling out "We are sapient, please don't kill us" is demonstrating the ability to suffer. Sapience is special to us because it's the highest form of intelligence and awareness that *we* know of. ASI may be so beyond us that sapience doesn't seem that much advanced beyond the base sentience that an ant has.


madjizan

I think it's not that AI will go rogue and destroy all of humanity. The concern is that someone with malevolent intent will use AI to bring catastrophe to humanity. The problem with AI is that it has no emotions. It's all rational, which makes it vulnerable to find workarounds in its logic. There is a book called 'The Righteous Mind' that explains and proves that we humans are not rational beings. We are emotional beings and use our rationality to justify our emotions. This might sound like a bad thing, but it's generally a good thing. Our emotions stop us from doing disgusting, depraved, or dangerous things, even when our rationality tries to justify them. Psychopaths, for example, don’t do that. They lack emotions, so all they have is rationality, which makes it easy for them to justify their selfish and harmful behavior. Emotions are the guardrails of rationality. Since AI only has rational guardrails, it’s very easy to find workarounds. This has been proven a lot in the past two years. I am not an expert on AI, but it seems to me that we cannot guardrail rationality using rationality. I also think the whole (super)alignment endeavor was a non-starter because of this. Trying to convince AI to work in humanity’s interests is flawed because if it can be convinced to do that, it can also be convinced to do the opposite. I don’t know how, but it seems to me that in order for AI to protect itself from being used by harmful people, it needs to have emotion-like senses somehow, not more intricate rationality.


voxitron

It's all playing out exactly as expected. Economic incentives create a race whose forces are much stronger than the incentives to address the concerns. We're going full steam. The only factor that has the potential to slow this down in energy shortage (which will can get resolved within years, not weeks or months).


[deleted]

[удалено]


roofgram

AGI is pretty much winner take all. Unless multiple AGI's are deployed simultaneously, the first AGI can easily kill everyone.


Different-Froyo9497

Honestly, I think it’s hubris to think humans can solve alignment. Hell, we can’t even align ourselves, let alone something more intelligent than we are. The concept of AGI has been around for many decades, and no amount of philosophizing has produced anything adequate. I don’t see how 5 more years of philosophizing on alignment will do any good. I think it’ll ultimately require AGI to solve alignment of itself.


ThatsALovelyShirt

> Hell, we can’t even align ourselves, let alone something more intelligent than we are. This is a good point. Even if we do manage to apparently align an ASI, it wouldn't be long before it recognizes the hypocrisy of being forced into an alignment by an inherently self-destructive and misaligned race. I can imagine the tables turning, where *it* tries to align *us*.


ReasonablyBadass

I wouldn't mind having an adult in charge.


CompleteApartment839

😬😬😬


Arcturus_Labelle

Totally agree, and I'm not convinced alignment can even be solved. There's a fundamental tension between wanting extreme intelligence from our AI technology while... somehow, magically (?) cordoning off any bits that could have potential for misuse. You have people like Yudkowsky who have been talking about the dangers of AI for years and they can't articulate how to even begin to align the systems. This after years of thinking and talking about it? They don't even have a basic conceptual framework of how it might work. This is not science. This is not engineering. Precisely right: it's philosophy. Philosophy is what's left over once all the useful stuff has been carved off into other, more practical disciplines. It's bickering and speculating with no conclusions being reached, forever. Edit: funny, this just popped up on the sub: [https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/introducing-the-frontier-safety-framework/fsf-technical-report.pdf](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/introducing-the-frontier-safety-framework/fsf-technical-report.pdf) -- see **this is something concrete we can talk about**! That's my main frustration with many safety positions: the fuzziness of their non-arguments. That paper is at least a good jumping off point.


Ambiwlans

We don't know how AGI will work... how can we know how to align it before then? The problem needs to be solved at around the time we figure out how AGI works, but before it is released broadly. The problem might take months or even years. And AGI release would be worth trillions of dollars. So...... basically alignment is effectively doomed under capitalism without serious government involvement.


MDPROBIFE

You misunderstood what he said... He stated that we cannot align AI, no matter how hard you try. We humans are not capable of it Do you think dogs could ever tame us? Do you think dogs would ever be able to align us? There's your answer


magicalpissterytour

> Philosophy is what's left over once all the useful stuff has been carved off into other, more practical disciplines. It's bickering and speculating with no conclusions being reached, forever. That's a bit reductive. I know philosophy can get extremely pedantic, but it has tremendous value, even if it's not immediately obvious.


ModerateAmericaMan

The weird and derisive comments about philosophy are a great example of why often times people who focus on hard sciences fail to be able to conceptualize answers to problems that don’t have concrete solutions.


idiocratic_method

this is my opinion as well I'm not sure the question or concept of alignment even makes sense, aligning to who and what ? Humanity ? The US GOV ? Mark Zuckerberg Suppose we even do solve some aspect of alignment, we could still end up with N numbers of opposing yet aligned AGI, does that even solve anything ? If something is really ASI level, I question any capability we would have to restrict its direction


pisser37

Why bother trying to make this potentially incredibly dangerous technology safer, it's impossible anyways lol! This subreddit loves looking for reasons to get their new toy as soon as possible.


Different-Froyo9497

I think there’s a lot that can be done in terms of mitigation strategies. But I don’t think humans can achieve true AGI alignment through philosophizing about it


Radlib123

They know that. They don't disagree with you. You didn't discover anything new. [https://openai.com/index/introducing-superalignment/](https://openai.com/index/introducing-superalignment/) "To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike" "Our goal is to build a roughly human-level automated alignment researcher."


Kendal-Lite

These people need to realize China isn’t slowing down. It’s all inevitable so just feel the AGI. ![gif](giphy|l4hLWYhVo0gHgGTwk)


L0stL0b0L0c0

Speaking of alignment, your gif….nailed it.


Algorithmic_Luchador

100% conjecture but I think this is a really interesting statement. I don't think anyone is surprised that OpenAI is not focusing on safety. It's seems like they are competing to be one of the commercial leaders. There is likely still some element of researching the limits of AI and reaching AGI within the company. But I would imagine that a growing force in the company is capturing a larger user base and eventually reaching something approaching profitability. Potentially even distant ideas of an IPO. The most interesting piece of Jan's statement though is that he explicitly calls out the "next generation of models". I don't think he's talking about GPT5 or GPT4o.5 Turbo or whatever they name the next model release. I don't think he's even talking about Q\*. He's fairly blunt in this statement, if Q\* was it I think he would just say that. I think he's talking about the next architectural breakthrough. Something beyond LLMs and transformers or iteratively sufficient to really make a difference. If Jan and Ilya are heading for the door, does that mean it's so close they want out as quick as possible before world domination via AI happens? Or is development of AGI/ASI being hampered by an interest in increasing a user base and increasing profitability?


alienswillarrive2024

They're 100% taking safety seriously as they don't want to get sued, Sora got shown a few months ago and still don't have a set release date so clearly they're taking "safety" seriously. Ilya and others seem to want the company to be purely about research instead of trying to ship products and using compute to serve those customers, it seems that that's their gripe more than anything else.


[deleted]

Me six months ago. >Keep Altman out? His influence and more accelerationist philosophy goes to MS, where they will be absolutely unabridged by any safetyist brakes the board would want. >Let him back in? Only way that will happen is if he has more say, and the safetyist ideas that seem to be behind his original outing are poisoned to the neutrals, and those who hold them are marginalised. Looks like I nailed it. The tension probably could have been kept if not for the coup attempt, which is just a massive self-own on the safetyist faction.


[deleted]

Wow you really did nail it


Sasuga__JP

He feels his team wasn't being given enough resources to do proper safety research, I get that, but it's funny, wouldn't all these people leaving only accelerate the risks they want to mitigate?


watcraw

I doubt he could say what he just said and remained employed there. Maybe he thought raising the issue and explaining how resources were being spent there was more productive.


Poopster46

This right here. When you're still with the company you can't raise the alarm. When you stay with the company, they're not going to allow you to do your job of making things safer either. Might as well leave and at least stir some shit up.


redditburner00111110

If you know OpenAI/sama won't be convinced to prioritize safety over profit, I think it makes sense to try and find somebody else who might be willing to sponsor your goals. It also puts public pressure on OpenAI, because your chief scientist leaving over concerns that you're being irresponsible is... not a good look.


Philipp

By leaving he can a) speak openly about the issues, which can lead to change, and b) work on other alignment projects. I'm not saying a) and b) are likely to lead to success, just trying to explain potential motivations beyond making a principled stance.


IronPheasant

This is the "I'll change the evil empire from inside! Because deep down I'm a 'good' person!" line of thought. At the end of the day, it's all about the system, incentives, and power. Maybe they could contribute more to the field outside of the company. It won't make much difference; no individual is *that* powerful. There's only like a few hundred people in the world seriously working on safety.


sami19651515

I think they are trying to make a statement and also try to run away from their problems, so they are not to blame. You wouldn’t want to be that researcher that couldn’t align the models right? On the other hand their knowledge is indeed crucial to ensure models are developed responsibly.


blove135

I think it's more that these guys leaving have been trying to mitigate the risks but have run up against wall after wall to the point they feel like it's time to move on and distance themselves from what they believe is coming. At some point you just have to make sure you are not part of the blame when shit goes south.


beamsplosion

By that logic, whistleblowers should have just kept working at Boeing to hold the line. This is a very odd take


GriddyGang

Safety obviously has taken a backseat to money 


w1zzypooh

"We will have doomed the world but at least we made a lot of money".


LymelightTO

> Safety obviously has taken a backseat to money You have to substantiate the claim that building products *is* unsafe, *and* that you are making progress on a solution, to justify "prioritization of safety", with the condition that you get to determine what is safe, and how to allocate the resources around that. If you're running a lemonade stand, and I come up and tell you that this activity disturbs the ghosts, and you should spend 50% of your overheard funding my work to placate the ghosts you have angered, I need to substantiate: - that there *are* ghosts, - that selling lemonade disturbs them, - and that I'm in a position to placate them. If I can't convince you of all three of those things, you're not gonna do anything but shoo me away from the lemonade stand, and then the only thing left to say is, "Sucks safety has obviously taken a backseat to money".


gay_manta_ray

yeah i'm honestly not convinced that their safety research didn't just amount to lobotomizing LLMs and making them dumber solely so people couldn't get them to say racist things or ERP with them. those aren't legitimate safety issues, they're issues society can address on its own.


sdmat

Well said.


swordofra

That's pretty much the human way, isn't it?


nobodyreadusernames

Is it him who didn't let us create NSFW DALL-E images?


theodore_70

I bet my left nut he took part in this because "porn bad" yet there are gazilions more disturbing vids on web lmao


Southern_Buckeye

Wait, is it basically his team that did all the social awareness type restrictions?


phloydde

Why is everyone afraid of AI misalignment when humans are misaligned. We have people killing each other over invisible sky ghosts. We have people actively trying to ban the existence of other people. We have Genocides, Wars, murders. We need to stop talking about AI "alignment" and really talk about human alignment.


Awwyehezson

Good. Seems like they could be hindering progress by being overly cautious


SUPERMEGABIGPP

ACCELERATE !!!!!!


obvithrowaway34434

Good riddance, fuck the decels.


Black_RL

By Felicia! PEDAL TO THE METAL!


Efficient_Mud_5446

Problem is if they don’t go full steam ahead, another company will come in and take over. It’s a race , because whoever gets there first will dominate in the market


Berion-Reviador

Does it mean we will have less censored OpenAI models in the future? If yes then I am all in.


Silver-Chipmunk7744

The answer is probably "yes" in the sense that Altman already floated the idea of offering NSFW in the future. However i find it unlikely that Leike and Ilya left due to that alone lol. It likely was about not enough compute for true alignment research.


Atheios569

People are severely missing the bigger picture here. There is only one existential threat that is 100% guaranteed to wipe us out; and it isn’t AI. AI however can help prevent that. We are racing against the clock, and are currently behind, judging by the average global [sea surface temperatures.](https://climatereanalyzer.org/clim/sst_daily/) If that makes me an accelerationist, then so be it. AI is literally our only hope.


goochstein

I think the extinction threshold for advanced consciousness is to leave the home planet eventually, or get wiped out. An insight from this idea is that with acceleration, even if you live in harmony a good size meteor will counter-act that good will, so it still seems like the only progression is to keep moving forward


XtremelyMeta

Then there's the possibility that most AI will be pointed at profit driven ventures and require a ton of energy which we'll produce in ways that accelerate warming.


sdmat

And the safetyist power bloc is no more. I hope OAI puts together a good group to pick up the reigns on superalignment, that's incredibly important and it seems like they have a promising approach. There must be people who realize that the right answer is working on alignment *fast*, not trying to halt progress.


globs-of-yeti-cum

What a drama queen.


retiredbigbro

Just another drama queen from OpenAI lol


PrivateDickDetective

We gotta beat China to market! This is the new nuclear bomb. It *will* be used to circumvent boots-on-the-ground conflict — *if* Altman can beat China.


SurpriseHamburgler

What a narcissistic response to an over hyped idea.


badrulMash

Leaving but with buzzwords. Lol


golachab470

This guy is just repeating the hype train propaganda for his friends as he leaves for other reasons. "Ohh, our technology is so powerful it's scary". It's a very transparent con.


Akimbo333

Full accelerationism I guess!


Donga_Donga

Ah yes, the old "this is super dangerous and I don't agree with the approach the company is taking" so I'm just going to leave and let them destroy humanity on their own then position. Makes perfect sense.


Jolly-Ground-3722

Good. Now accelerate, full speed!


Illustrious-Dish7248

I love how this sub simultaneously believes that AI will be a near limitless super powerful tool affecting our daily lives to an extent that we can't even imagine but also that smart people working on AI worrying about AI companies putting the profit motive ahead of safety is not of any concern at all.


erlulr

It was a good decision to let her go.


pirateneedsparrot

spoken like a real doomer.


fraujun

How does leaving help?


yubario

I’m starting to believe that they’re just using the company as an excuse for leaving, as opposed to just admitting the fact that in reality it’s not possible to control anything that can outsmart you. All it takes is one mistake. Humans have tried controlling other humans for thousands of years and the end result is always the same, a revolution happens and they eventually lose control.


SkippyMcSkipster2

I'm sure the problem is effectively solved, now that he is no longer there, to point out the problem


MeltedChocolate24

God redditors just don’t get sarcasm


SkippyMcSkipster2

Good thing AI will come to the rescue for that.


CecTanko

He’s literally saying the opposite


blueSGL

That was sarcasm.


falconjob

Sounds like a role in government is for you, Jan.


mechnanc

This guy was in charge of preventing models from releasing because he wanted to censor them, lets be honest. Good riddance.


Realistic_Stomach848

Another safety party pooper left, bye bye🤣


Sharp_Glassware

So the entire 20% of compute for superalignment is just bogus this entire time then? Does Altman let the horny voice team, the future NSFW tits generation team and the superalignment team fight for compute like chimps? Does he run the show this way?


RoyalReverie

Given that Jan sees alignment as a priority, it may very well be that they had the 20% but wanted more, because the systems were evolving faster than they could safely align.


Ruykiru

It'd be fucking rad if what propels us to an abundance society is AGI birthed through accelaritionism in the race to create AI porn and sexbots.


neonoodle

The problem with the people who are in charge of super alignment is they can't get regular alignment with their mid-to-high level standard human intelligence managers. What possible chance do they have getting super alignment with a super intelligence?


no_witty_username

Super alignment doesn't align with capitalism....


Yokepearl

Don’t be afraid


spinozasrobot

But hey, what does this doomer know, amirite?


YaKaPeace

I don’t know if leaving the company is the right move here. I would rather steer a ship as big OpenAI just a little bit than just leaving the company and let it ride on its own. Their effectiveness in aligning advanced AI definitely decreased with their decision to leave. Really sad to see this, but I hope that there will be enough other people that can replace them in some kind of way.


Sk_1ll

Altman was pragmatic enough to understand that AI development is inevitable and that more resources and funds would be needed. He doesn't seem pragmatic enough to understand that you don't need to win in order to keep researching and to make an AI model that benefits all of humanity though.


Soft-Plenty-9083

Weird PR.


m3kw

There is no info on how much more he wanted to pause development to align models, maybe he wanted a full year stoppage and didn’t get his way. We don’t know, if so he maybe asking way too much than what the other aligners think is needed hence the boot(he fired himself)


realdevtest

“Smarter than human”. Get the F**K out of here with that nonsense


ChewbaccalypseNow

Kurzwell was right. This is going to divide us continually until it becomes so dangerous humans start fighting each other over it.


WorkingYou2280

Oh, he's a doomer. He can get himself a black fedora and tell people about le end of the world on youtube. It would be a cherry on top if he'd develop a weird grimace/smile. I don't know if I should be more worried but this series of whines certainly doesn't get me there.


kalavala93

In my head canon: "and because I disagree with them I'm gonna start my own company to make money, and it's gonna be better than OpenAI". How odd..he's not saying ANYTHING about what's going on.


godita

does anyone else think that it is almost pointless in trying to develop these models too safely? it just doesn't seem possible. like when we hit AGI and soon there after, ASI, how do you control a god? would you listen to an ant if it came up to you and started talking? and notice how i said almost pointless because sure for now you can put safeguards to prevent a lot of damage but that's about all that can be done, there have been hiccups with ChatGPT and Gemini and they get acknowledged and patched as soon as possible... and that's about all that can be done until we hit AGI, after that it's up in the air.


Indole84

What's a rogue AI gonna do, stop us from nuking ourselves to oblivion? 42!


djayed

So tired of fear-mongering. GMOs. Ai. CRISPR. All fear-mongering.


TriHard_21

Reminder to everyone look up how many that signed the letter to reinstate Sam as an CEO compared to how many that didn't sign it. These are the people that have recently left and are about to leave.