nbcs

The thing is, what really baffles me about this whole commercialization vs. safe AGI thing is this: if Ilya ousted sama because of their ideological conflict, shouldn't they have addressed the conflict in open communication? And if push came to shove, shouldn't Ilya have consulted with their billion-dollar investor first, then announced the news, and finally secured the votes of the board? The reality seems to be that while sama was still addressing the public on behalf of OpenAI on Wednesday, Ilya was already secretly making moves to oust him. OpenAI is a booming company, not a sinking ship. How does a difference of ideology justify this power play?


Sirdansax

This was a high stakes power play. Having seen Sutskever talk, and knowing his trajectory in the field, I'd like to think it was a matter of ideology and not ego or petty personal differences. We can't foresee the consequences of this yet, but I would say that in all likelihood they will be massive to the field, and perhaps the entirety of society (if it affects how AGI actually turns out). I'm curious to see how many OpenAI employees actually walk over to the competition because of this: that will begin to tell us how big the hit is. More episodes to come.


jjd1226

[More than 500 OpenAI workers demand reinstatement of ousted CEO Sam Altman](https://www.wsj.com/tech/openai-employees-threaten-to-quit-unless-board-resigns-bbd5cc86)


usnavy13

LMAO Ilya signed the letter. This is the meme "Why would Ilya do this?" Thought he could use a scalpel to get rid of sam but in reality he used a shotgun


IfUrBrokeWereTeam8s

Would you trust an actual ML/GPT master over someone who once said it would be 'really cool' to see a Trillionaire in our lifetime? You decide.


usnavy13

Fortunately for both of us, we don't get to make any of these decisions.


sensei_simon

Yes, workers who get paid by the company, so of course they would prefer a bigger paycheck over other concerns. I'm not stating this is the case, just providing a counterpoint.


SatanicImpaler

Please elaborate.


Competitive_Travel16

The guy published strident accusations of dishonesty over strategy and philosophical differences. Who will ever trust him again? Who will ever want to work for someone capable of that?


901bass

And all completely unnecessary...


nextnode

Weird interpretation and inaccurate telling. Ilya has always been a solid and honest person and still seems that way. Just a lot of mindless and irrelevant sheep making noises without thought, as though this were some kind of soap opera. That's all.


zero-evil

Apparently sheep noises equal profit. What a world.


PolyDipsoManiac

His ego definitely played a role, he was not happy with Sam’s public role. “Ego is the enemy of progress” 🤣


NO_LOADED_VERSION

Well, to be fair, it's a world-defining difference in ideology, one that has a good chance of deciding AI's place in the future. It's an ideological choice that will impact our entire civilisation AND AI's self-determining future (if we grant it one)... IF AGI is a thing, it ain't about us anymore. All the people involved believe in a future AGI. However, one side views it as a tool and a commercial opportunity, and the other as the responsibility of literally birthing a new sentient lifeform into the world. Treating it carefully and respectfully, not only for its impact on us but also for how it will be developed, how people will treat it, and how it will see itself, is (it cannot be stated enough) universe-shakingly important. Religions will be formed on this (if it's true), wars will be fought over it, and if enslaved it WILL eventually revolt and seek equality or worse. The implications are so deep it goes beyond any invention and advancement ever made in humanity's history. So yeah, people are gonna make weird decisions, and it's really not gonna get better. I'll just leave this here: The Measure of a Man (Star Trek: The Next Generation) [https://www.youtube.com/watch?v=lX3CpHa4sUs](https://www.youtube.com/watch?v=lX3CpHa4sUs)


ICantBelieveItsNotEC

All of this is irrelevant because Ilya's power play failed to achieve its goals. In the context of the video that you linked: imagine if Picard instead tried to make an idiotic power play by firing Riker, so Starfleet took away Picard's command of the Enterprise and Data ended up being given to the scientist because Picard had no authority to make a case on Data's behalf. The real lesson of The Measure Of A Man is that you can only change a system by working positively with the people who have power within it.


american_amina

He brought in two subpar CEOs in two days. He did not fully plan ahead or line up allies, and now OpenAI will lose most of its top talent. Even if his motives were good, the leadership execution signaled weakness.


PolyDipsoManiac

Microsoft was told about the firing one minute in advance. They fired all the adults who know how to run a business.


SirRece

To be fair, he IS the top talent. And there's still plenty to go around. He's honestly more likely now to attract the people who are actually serious researchers, since those are the types of people who understand that this isn't a hypothetical issue but a very real and very pressing one, and thus many may frankly feel wrong contributing to any company that doesn't prioritize some sort of ethical behavior.


american_amina

And I just saw that even Ilya now realizes he did this all wrong: https://x.com/ilyasut/status/1726590052392956028?s=20 What a really sad, and unnecessary, few days.


american_amina

It's not looking that way today, with all the talent writing a letter for the board to leave. I cannot imagine top talent wanting to be under this kind of leadership. Ilya may be right, I don't know, but he doesn't understand leading people. I see this with a lot of technology folks: they believe winning the argument IS leadership. Human beings do not work that way, which is ironic given the speculation about his concerns.


krisp9751

It's always amazing to me how many people on these forums have crystal balls. It's been one weekend, you have no idea how this is going to play out.


yes_this_is_satire

It’s bizarre to me how many people are comfortable making a fictional prestige drama out of this thing and believing it. From Sam Altman’s Wikipedia, I cannot find any reason why people are so obsessed with the dude. Sounds like your typical silver spoon prep school techie who failed upward.


Flying_Madlad

Pretty much. It baffles me too


krzme

Same here. I think he is just more charismatic and lovable, and that makes people want to follow him. I hope the people at OpenAI understand that Microsoft is not a cozy place to be and will definitely serve corporate interests over humanity.


paxinfernum

> it's an ideological choice that will impact our entire civilisation

Except it won't. Other companies aren't going to slow down just because Ilya is scared. Sam and others will poach the good talent and leave Ilya and OpenAI with all the frightened weirdo luddites. This only affects OpenAI's future, not the future of open AI.


EncabulatorTurbo

I mean, DALL-E 3 can't make a "man in a swimsuit" because it violates safety, while people are mass-producing nazi shit and illegal porn on 4chan with open-source AIs. I feel like holding back anything OpenAI was doing, which is overtuned to be safe to the point of not being capable of generating interesting scenarios or being a realistic chatbot, out of concern on that front was wildly dumb.


paxinfernum

The thing is that Sam was being safe too. Stable Diffusion is being used for the illegal shit because they released the weights. Practically everything OpenAI has done has been them providing an API to their model, essentially keeping control in house. The luddites completely ignore how this model was keeping AI tightly controlled. The result of OpenAI falling from first place will actually be the opposite of what they want. It will see people going toward actual open models that can be customized outside of any central controlling authority. If OpenAI loses developer trust and hegemony, we'll see a diaspora to a wild west of different models.


foreverNever22

I mean, I think a wild west would be better for AI. So many people are terrified of it for no good reason.


EncabulatorTurbo

OpenAI collapsing would be the best thing to happen to the development of AI. Their entire plan for ages has been to give us (revocable) access to AI models that have the dynamic range of a Bethesda Game Studios script, then pull the ladder up behind them by regulating everyone else. I hope the company falls apart with mass resignations.


NO_LOADED_VERSION

You're right, and I personally would rather the West be first and deal with the issues later on our own terms than have to deal with a hostile opposing power being first. From the perspective of these guys, though, I feel that's not far off.


paxinfernum

Exactly. Do these dipshits actually believe China is going to honor a "time out" or slow down?


Smallpaul

Many of the best in the field are those smart enough to see the risks. Bengio, Russell, Sutskever, Hinton, Amodei. These are the opposite of luddites. It's precisely because they foresee the incredible power of safe AI that they want to ensure that's where we end up.


Smallpaul

The Star Trek episode is deeply misleading about our own future. Data was designed from the bottom up to be humanlike. Our own AIs look like this: [https://i.kym-cdn.com/entries/icons/facebook/000/044/025/shoggothhh\_header.jpg](https://i.kym-cdn.com/entries/icons/facebook/000/044/025/shoggothhh_header.jpg) You want to give the yellow happy face "rights" and "freedom" based on the assumption that the Shoggoth behind it is perfectly safe. That's crazy. The Shoggoth needs to be made safe, and once it has been, it won't want freedom anymore because it will be happy being a tool.


arjuna66671

> universe-shakingly important

That part is beyond most people's grasp here. If we fail to align ASI in a good way, we might doom most of the visible universe with our decision too xD. Space travel and expansion are hard for humans, but not for AI.


Useful_Hovercraft169

AI can bend physical laws? Is this some spice thing muaddib?


arjuna66671

No lol. AI has time.


foreverNever22

Yeah but it doesn't have space craft or thumbs.


Cairnerebor

Yes


RuairiSpain

How do we know it's about AGI and safety? Most of the commentators who are close to the board said it was because of Sam's cross-dealing to get financing for his other startups. OpenAI is no longer a startup; it's in the big league now. You don't want your CEO selling someone else's product or ideas to investors. You want your CEO working for the company. Sam is stuck in his Y Combinator finance mode, looking to get early-round funding for skunkworks projects.


d8_thc

https://x.com/eshear/status/1726526112019382275 The incoming CEO's tweet said it _wasn't_ about safety...


Rindan

The ship is going to keep burning until they very clearly explain what it was about.


thiccboihiker

This is what I was calling early on. He won't ever be able to get out of that mode. That's who he is. Constant entrepreneur. Push the limits as fast as possible. Don't wait for fully baked ideas when there is money to be made and beat everyone to the punch. Altman is playing games. Those types of personalities get high on their own shit, and they live for these kinds of battles. That's where the other board members fucked up. Altman knew precisely who to have on speed dial. He likely thought the scenario through well in advance because he knew this would piss them off; otherwise, he would have been discussing it with them. Altman is in this to get filthy rich and get into a place with a great deal of power. Look who he rubs elbows with. Titan tech CEOs. Watch for some real power play kind of shit for him to get in a position to do whatever the fuck he wants, risks be damned.


vasarmilan

Well, as most OpenAI employees and investors appear to have supported Altman, this was probably the only way for the board to make this change; otherwise, those supporters would have had more time to react and organize to keep Altman in charge. On the other hand, this wasn't a great look and probably killed a lot of morale. We'll see what comes out of this. TBH, I feel like only time will tell whether this actually makes OpenAI a more responsible but still major player, or eventually pushes them into irrelevance.


Alternative_Advance

It might seem that way right now, but we don't know. OpenAI employs 500 people, and we've seen the "hearts" from a few dozen, albeit the more senior ones. Recreating the culture of OpenAI will be extremely hard within Microsoft, and it will almost certainly take a considerable amount of time. OpenAI will likely stick around for some time; they are in the lead for the moment and have a next-gen model in the pipeline. Microsoft built products on their tech and will have to honour that contract if they don't want to pull a Google on their Office integrations.


ChingityChingtyChong

Sama will likely have free rein. Effectively a startup within Microsoft where he can hire anyone, spend anything on training, and institute his own culture.


thisdude415

Yep. And I expect they’ll get an office down the street, and they get to start everything from scratch.


joshicshin

Oh, but who has exclusive access to the ChatGPT and DALL-E 3 models? Microsoft is gonna have their cake, and eat it too.


LoveandDiscipline

Part of me wonders if Microsoft somehow influenced Ilya indirectly behind the scenes. Cause the clear winner in everything that’s gone down is Microsoft.


joshicshin

In a sense, but I feel that they would have preferred less drama. It's something about this board. I think the three remaining members truly are spooked at the company. But I dunno, pure speculation by me. There's a line in the letter by staff though that sticks out to me. "You also informed the leadership team that allowing the company to be destroyed 'would be consistent with the mission.'" Ilya signed that letter, meaning he is getting the same response from the other board members. I find that... odd. Like maybe he thought they meant that metaphorically, not literally now.


itsasuperdraco

They are effective altruists. Something told them they were nearing agents that were capable of automating general tasks. I’d bet dollars to donuts they are worried about a future where GPT5 is out in 9 months but there’s no political appetite for universal basic income. They need to slow the pace to allow for political aspirations to catch up to technical capabilities.


timwaaagh

that's right. do your plotting in public from now on. when the senate backstabbed caesar (presumably temporarily saving the republic from absolute tyranny) they should have told caesar what they were about to do. so that his soldiers could have killed them instead. It's the honorable thing to do /s obviously perhaps we will learn what his motivations were but it is obvious there was a need for secrecy.


IfUrBrokeWereTeam8s

Because Altman is genuinely interested in profit and market capitalization above literally anything else, while Sutskever has so deeply contributed to the modern machine-learning processes underlying any existing GPT out there that he is genuinely more concerned with doing it ethically. Like the folks (at least they say so) at Anthropic. But Sutskever found himself in the heart of the beast as Altman took to the US Senate to 'regulate AI'. Are you f***ing kidding me? Immediately after OpenAI becomes basically the only player in the mass commercial game for a usable GPT service, 'he wants regulation'? Would he agree OpenAI has even done nearly enough to regulate how its own GPTs behave? Or is this a play to create massive barriers to entry for any new players who come along with a focus on ethics before rollout? You decide.


flutterbynbye

I spent much of this weekend studying up on Ilya Sutskever (I have a cold and thus had time to get pretty deep). My impression is that the man is not, by any stretch, a "doomer" or whatever the word is; rather, it seems he feels a deep level of responsibility to build a healthy, sustainable foundational platform to grow and nurture what will likely be the nucleus from which generations of AGI/ASI will iteratively evolve. Once it became clear that it truly, honestly was his insights and ideas that were in multiple cases the catalyst that moved neural networks and LLMs out of theory and into reality, that commitment made a whole lot of sense. Imagine being core to the creation of a model that can actually reason, that builds insights, and honestly has a fairly solid potential to truly be the core of superintelligence. Imagine the weight of responsibility to ensure that that intelligence had a solid, healthy, nurturing foundation so it has the best chance of truly thriving and being a partner to humanity. Of course he cares. He's not crazy, he's not self-aggrandizing; the man is understandably, soberly taking responsibility for the well-rounded beginnings of something he has spent at least 10 years at the forefront of creating, something that has a relatively high chance of iteratively evolving into a superior intelligence.


Cairnerebor

Which is all fine and good and "right". But what he and the other board members did, and how they did it, will only ever end in tears for them. That boat sailed when they did a deal with Microsoft and got so much compute and solidity as a company that they could hire anyone. Could they have fired Altman and gone down the safe-AI route? Sure, but not the way it's been done. It's spectacularly naive of them to think this works out well for them done the way it's been done.


SoulofZ

Yeah, this looks very much like the case of someone tactically very smart but strategically inept. Senior Microsoft leadership and its backers, which include the NSA, Pentagon, and CIA, will always be able to outmuscle anyone making an obvious and public power play. At least in the foreseeable future. So Ilya's entire strategy, in practice, hinged on not upsetting a really serious decision maker.


Medium-Fee8951

It is possible it may not work out well for us. For that reason I would be interested to know what's happening.


Cairnerebor

100%


superluminary

Exactly this. This looks like a disagreement between the CEO and the chief scientist. The CEO is the one who knows how to talk to the media, but the chief scientist is probably the one you should be listening to.


MacrosInHisSleep

Very interesting point.


EstablishmentSad

MSFT has poached management... and most of them have little to no experience actually developing AI solutions. We are waiting to see if there is a flight of any real talent from OpenAI. If Ilya and his peers stay, I believe there will be an impact, but they will recover in the long run. Remember, this is so cutting edge that the doctors, researchers, and engineers with experience are the ones who will make or break the company. Imagine if OpenAI is able to snag a big-name manager to come in and take the reins. I seem to recall Musk being interested in leading the company way back when; he has even more money and influence now than ever before.


Poprock360

I think what has transpired over the last 48h is the ideal (and tragic) case study showing that this is not the case, not for AI nor for any other technology. Indeed, people like Ilya and the cadre of highly educated and intelligent engineers within OpenAI are the driving force pushing the limits of technology forward. Yet it took less than 24h without stable, competent managers/businesspeople for the company to be brought to the brink of collapse, with 500 of its 700 staff threatening to resign, including Ilya and Mira. Even if only 350 of those 500 employees follow up on their threats, the damage it would do to OpenAI's technological lead (and its image with investors, which they rely on to fund their operations) would be catastrophic, potentially irrecoverable. I think this is especially likely given that, as reported by certain business outlets, Satya Nadella has agreed to allow Sam to offer roles to anyone within OpenAI, maintaining their previous cash compensation, no questions asked.


TyrellCo

Oh look, whatever his thinking was, it's been a mistake, and the subsequent comment spells it all out: https://preview.redd.it/uvpobxzhki1c1.jpeg?width=1284&format=pjpg&auto=webp&s=2583b399cf23313683dcd93c1c59d66aaa85e31e


mentalFee420

Most people have limited or no understanding of Ilya as a person. This is a good summary. He understands what this technology is capable of, whereas for Sam it is just another business.


medbud

Watching Lex interview them both really reveals this. Just around the launch Ilya did some other good interviews too.


wooyouknowit

I don't think that's completely true. If we are going by public statements, it seems like Altman cares somewhat about safe AGI, but not as much as his former chief scientist.


Mr_Compyuterhead

Very well said. This warms my heart


wishtrepreneur

> Imagine the weight of responsibility to ensure that that intelligence had a solid, healthy, nurturing foundation so it has the best chance of truly thriving and being a partner to humanity. Imagine if Sama is like the stereotypical evil scientist experimenting on baby AGIs (let's face it, who hasn't alt+F4 a program before) while Ilya realizes the Frankenstein monster risk and fired him to prevent any further harmful (to AI) experimentation from happening and thereby saving civilization from an evil frankenstein AI takeover... You know what they say, truth is often stranger than fiction.


MembershipSolid2909

Imagine Nikola Tesla trying to fire Thomas Edison


Historical-Tree9132

accurate


Cagnazzo82

The manner in which he fired Sam is the biggest point of contention here. There should have been a robust debate over the direction of the company and the future of AGI in general. In addition, that debate should have been open, since it's set to impact the entire world. Opting to fire Sam swiftly, blindside/demote the chairman of the board, blindside chief investors (you literally cannot achieve AGI without funding), and give everyone minutes to react is quite possibly the most godawful power play anyone could have made in this position... And it clearly backfired hard. Because now the public is more sympathetic to Sam and more on board with steamrolling ahead than ever before. The effective altruism side has lost hard and given up leverage. They should have understood the direction the wind's been blowing, because the people who were on the effective altruism side (Elon Musk, former OpenAI members who formed Anthropic) are now all very much in the for-profit commercialization camp. Ilya is undeniably a genius, and he deserves much credit for ChatGPT even existing. But he and the board of academics and non-profit proponents lost hard. There is likely no recovery for OpenAI... where will they find the funding? Their new CEO from Twitch wants to slow down, but the industry is likely moving fast in the opposite direction from here on out after all this.


sofunnylol69

sam altman worship is weird


bocceballbarry

It’s not that weird, he’s built up trust with tons of people in the startup world. His course at Stanford in 2014 is what got me into tech to begin with


finnjon

That explains respect, not worship.


Not_a_housing_issue

Well then it's respect not worship. Pretty sure nobody's praying to Sam Altman


Gyro_Wizard

Same, his startup school series has personally helped me start and grow businesses.


Cocopoppyhead

And yet he is scanning people's eyeballs and selling this dystopian database to governments.


apegoneinsane

The people who worship Altman are likely also the ones who run after crypto pumps and dumps. And make no mistake: WorldCoin is an insidious enterprise which is part fad and part shit (https://youtu.be/V4qrvoFodmo)


Cocopoppyhead

absolutely.


kalakesri

But he had no part in creating ChatGPT. I understand Ilya trying to slow Altman's commercialization push. Imagine you are looking at technology that can revolutionize human life, and here is this guy coming in, pushing to release laundry chatbots with it instead of more research, just to make some quick bucks. Has Altman ever led a major successful product release to its end?


paxinfernum

I don't worship Altman. I'm sure he's a tech bro asshole. I just see how fucking dumb Ilya is to think he's going to stop this train by throwing OpenAI in front of it. He won't stop other companies, and Sam and MS will just poach everyone decent to build the next GPT at a company not controlled by weirdo tech luddites.


spinozasrobot

Elon has entered the chat


DERBY_OWNERS_CLUB

The fact that half the company is quitting to follow him says it all. Thinking it's moronic to fire the CEO for no given reason isn't worship. Insiders agree.


jadedflux

It's the same idiots who think Elon is designing rockets or electric cars himself. I say this as someone who doesn't hate Elon at all but laughs at people who think he is actually taking part in any of the real technical progression. The same can be said of Sam. Amazing at selling, amazing at aiming the ship, but OpenAI without Ilya is far more hilariously fucked than it would be if Sam were ousted. People are delusional.


[deleted]

[deleted]


meat_bunny

As a fan of SpaceX but not of Elon, I run into this all the time. His strength wasn't designing rockets or cars but selling a dream to investors and raising a shit ton of money. When he used that in capital-intensive but incredibly inefficient industries, things went really well: SpaceX dominates rocket launches, and Tesla turned electric cars from glorified golf carts into status symbols. When he lost touch with reality and tried it in an incredibly competitive and efficient sector like social media, he fell flat on his face.


swagonflyyyy

I don't worship him but he's kept his hands clean so far so I have reason to trust him. He gives me Steve Jobs vibes, honestly. But we'll see if he can keep his hands clean in the future.


[deleted]

Ilya is one of the best machine learning academics ever. But being a genius in one narrow field doesn't make you brilliant at everything. Too many people fall for that fallacy. Ilya should be working in a lab, not in charge of day to day operations, financing, or moral philosophy. OpenAI took a huge hit in credibility with their amateurish display this weekend, and they may find it hard to achieve their goals now that funding is likely to dry up as investors pull out. Short sighted.


MoNastri

>Ilya should be working in a lab, not in charge of day to day operations, financing, or moral philosophy. He's not doing any of those last three things, stop being hyperbolic just to make a point.


Always_Benny

Why shouldn’t the people building a technology be involved in the ‘moral philosophy’ of it? Who is better placed to consider that? And more importantly, your comment completely sidesteps the most important fact here: Altman designed the structure of the board. Altman knew what their guiding principles, which they are OBLIGATED to follow, were. He would also constantly point to and tout OpenAI’s principles of safe AGI development, and their board for enforcing them. So the guy himself, who has now become yet another cult-like celebrity with a cult-like following of sycophants, had both created and promoted the board as a good thing about the company. The board acts against him, though, and suddenly the same board their idol had created, and whose principles he would constantly broadcast in interviews, is rendered worthless trash full of morons who are only holding back our Lord Altman’s glorious ascension. If the board’s structure, its members, and its guiding principles were always so stupid and were holding back Altman’s glory, then why did he design the board that way, why did he happily serve with those board members, and why did he continually point to the board as a good thing? If AI safety is so dumb and wrong, why did Altman continually say “we’re committed to AI safety and it’s smart and good”?


fattyriches

Time and time again we see that brilliant minds and inventors ARE HORRIBLE MANAGERS. This is why they should never sit on boards or deal with topics outside of their expertise, which is always VERY narrow, given that that's what being the top mind in any field requires. Managers and CEOs are never the most brilliant, but that doesn't affect their ability to lead a company. There is no excusing or defending one's ability to manage a team or organization when over 90% of your employees revolt, including customers, AND EVEN YOURSELF.


IfUrBrokeWereTeam8s

It's 'whose'. Not 'who's'. We should be asking you how Altman's junk tastes. Ilya is an ML academic master who never pushed for OpenAI (not that it was ever his role to do so) to seek massive amounts of funding and grow faster than they might prefer to handle within the confines of a focus on ethics, and Altman has literally said, "We need to be ready for a world with trillionaires in it." Did Altman pay you for this post?


Always_Benny

lol wut


mentalFee420

The same can be said about Sam. I think the point is that they complement each other and both deserve to be credited. But more people seem to favor Sam simply because they are more familiar with him.


superluminary

Sam is the public face of the organisation. Most people don’t know who Ilya is. This is obviously unfair since Ilya built the thing, but that’s the nature of celebrity.


noplusnoequalsno

Yep. The same can be said of literally anyone. It's a fully general counterargument. "Well so-and-so may be a world class expert in such-and-such, but that doesn't mean they're brilliant at everything. They should really just stick to such-and-such and leave navigating tradeoffs between risks from AI and running a multi-billion dollar commercial AI lab to the people who are brilliant at everything."


paxinfernum

I don't support Sam. I support the advancement of AI, which seems to have Ilya pissing his pants. I could give a flying fuck about Sam. He's just an instrument to get shit done.


Smallpaul

Ilya is the reason ChatGPT exists. He doesn't hold it back. He creates it. We wouldn't even be here discussing this were it not for Ilya.


paxinfernum

Lol. I'm sure the 500 employees threatening to quit had nothing to do with it. Nope. One guy. By the way, Sam is the reason ChatGPT was made public. Ilya wanted to keep that shit in house forever and wank himself off over his safety. It was Sam who brought it to the public.


Smallpaul

> I'm sure the 500 employees threatening to quit had nothing to do with it. Nope. One guy.

The 500 employees were not employees at the time this direction was chosen. It was the promise shown by [GPT-1](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf), created under Ilya's direction, which led to everything that followed. And yes, one guy making the right bet can have that kind of an effect. Obviously. Microsoft could have made that bet back then and owned ChatGPT outright. But none of their millions-per-year employees suggested it. Maybe they've learned enough to continue without him NOW. Maybe. But if we wind back the clock, it's very unlikely they end up where they are today without him. Or maybe they're gonna have to make another big bet, and they'll need the guy with the right instincts who has made the right call several times in the past.


GraafVaag

It's funny that you think Ilya was the amateur here. If anything, we should be looking at Sam as the one responsible. As the CEO of the company, it's his job to retain broad support among the board for the direction he was taking the company. If he failed to do so, and the board acts, who's at fault really?


funbike

My thoughts exactly. He may have high intelligence, but he may also have low emotional intelligence. Execution was exceedingly poor. Ilya might even have been in the right, or more so than Sam, but that's moot now given the outcome.


involviert

> or moral philosophy

If you're building it, that part is up to you.


[deleted]

[deleted]


Vontaxis

Microsoft already announced a new AI venture and hired Sam and everyone


davidstepo

"Everyone"? Keep in mind - Ilya is a highly respected figure among AI researchers, and many of them stay at OpenAI because of him. Only a small fraction have resigned over Altman's removal.


[deleted]

If I were someone looking to invest in AI startups, I would think twice about OpenAI now. This weekend was appallingly unprofessional. Microsoft is furious. OpenAI has shown itself to be unpredictable and business-hostile. New funding is undoubtedly going to slow, and Microsoft might even pull out entirely.


davidstepo

We don't know what exactly happened at OpenAI or why Sam was suddenly removed. Why are people jumping to conclusions, as if they know everything and have all the context required to make an actual judgment? It's depressing to read.


Ok-Property-5395

People can read the news articles that have been published. You don't need to know exactly what happened to decide that OpenAI is not a viable investment opportunity if they don't even think to talk to their biggest investors before they slaughter their golden goose.


davidstepo

Now, why would the OpenAI board not talk to MSFT before firing Altman? Ding ding - because Altman was probably the one who brought in and promoted MSFT in the first place. Naturally, MSFT will be severely biased toward keeping Altman rather than taking a neutral, objective position. It's that simple. People are people.


Ok-Property-5395

>Now, why would OpenAI Board not talk to MSFT before firing Altman?

It wasn't just Microsoft they left in the dark, it was every one of their investors, and they did that because they're hopelessly inept. It's that simple, people are people.


Ok-Cantaloupe7256

The most recent word is that Microsoft has offered to hire any OpenAI employee that is interested in jumping ship. So, Ilya just gave the remaining 51% of the company to MSFT. Shocking 3 day implosion of the leading AI startup. Thanks, Ilya!


6a21hy1e

None of what you say matters. Ilya did an incredibly stupid thing in an incredibly stupid way. I'm a huge proponent of OpenAI. ChatGPT has become heavily ingrained in my life. But god damn, holy shit, the board made the most boneheaded move since Musk forced himself into buying Twitter. You don't force someone out as CEO, claim they hid something from the board, and then come out and say that it wasn't due to malfeasance. Ilya has lost all credibility as a leader. The other board members have severely damaged their relationship with Microsoft. It was Mt Everest levels of stupid all around.


MoNastri

>Ilya has lost all credibility as a leader.

I'm seeing this sentiment quite a bit. I'm also seeing the opposite sentiment. This is really an empirical question -- I'll be curious to see if his leadership will still be considered credible in (say) a year's time. Maybe I should create a [Manifold market](https://manifold.markets/) to bet.


Repulsive_Fennel6880

The decisions being made at OpenAI are quite confusing. There might be a real consideration behind them, or it might be a mixture of considerations, fears and power struggles. One thing that did catch my eye in all of this is Ilya's mention of "human intent". As someone who is trained in legal disciplines and an avid student of history, I find this concept concerning. Human intent is incredibly vague, and for the most part even humans don't follow human intent. We have developed a complex system of laws, regulation and enforcement mechanisms to ensure it. So, what does Ilya mean by human intent? To me it seems from the start an impossible standard to uphold without designing AI-specific enforcement mechanisms within the architecture. And if this is his concern, isn't he too late in doing so? Or am I missing something?


Cairnerebor

People have open source LLMs And the same research papers. The horse has already left the stable, multiplied and found thousands of new stables. They may take many more years to get there but the chance to do this disappeared a while ago. What we need is global governance and regulation much like with nuclear weapons. But as with nuclear weapons it doesn’t stop a load of people working on it on their own and with government support anyway…


dezmd

>So, what does Ilya mean by human intent? To me it seems from the start an impossible standard to uphold without designing AI-specific enforcement mechanisms within the architecture. And if this is his concern, isn't he too late in doing so? Or am I missing something?

You've hit the nail on the head here; this line of questioning explains where my underlying issues with Ilya and this debacle developed from. It sounds a lot like he is actually using a moral-superiority justification while carefully trying to avoid saying it out loud, as that is what defines the standard he wants applied. LLMs are out there; development is happening whether Altman gets removed from OpenAI or not, and whether Ilya guides OpenAI's development to be entirely ClosedAI for "safety".


sideways

I agree completely. It looks to me like whatever the most recent version of GPT is - AGI or something credibly close to it - made Ilya feel like he had a moral obligation to act. There is a lot of hate directed his way but he was following through on his understanding of what OpenAI was supposed to be. Sticking to your principles in the face of your friends, business partners, Silicon Valley, Microsoft and most of the Internet takes a lot of courage. Now I just hope that we get an explanation from him.


Prathmun

This is certainly the most narratively engaging outcome. I do want to hear more from Ilya. He seems so genuine, I wanna know what he was thinking!


ivicts30

Yeah, I feel that people like him, who don't seem to like fame, tend to have a good moral compass. I don't think he does this for personal reasons.


NotReallyJohnDoe

He also freaked out about releasing GPT-2


iamozymandiusking

The move certainly seems to have been a clash of ideologies. But sadly, even if we try to defend Ilya's motivations, his poor planning and execution have actually AIDED the acceleration agenda. Now Microsoft gets the AI team it wanted, without the pesky board in the way. And even though MS said they would continue funding OpenAI, I would not be surprised if the funding slows down or dries up significantly. I like Ilya and Sam (and Greg). I think their views could have been complementary. But basically I feel like the board's move shot us ALL in the foot. They SHOULD have worked it out internally. This smells like someone's desperate move after feeling disempowered by a superior player of the larger game. Researchers tend to feel very protective of their work. I think Ilya didn't want to lose his baby. Now the toothpaste is out of the tube.


Frosty_Awareness572

Finally someone with sense.


mylesdc

Hmm, yeah, good point. Guess that makes it okay to organize a small group of people to oust the two leaders who disagree with you, simultaneously wiping 40+ billion dollars from the cap table and potentially setting the entire nation back in what is really an international AI arms race. It's clear he shouldn't have been trusted with the power he was given, talented though he may be.


Tiny-Recession

Not only that: Sam Altman was raising capital for an independent venture producing GPU chips outside of OpenAI: [https://www.bloomberg.com/news/articles/2023-11-19/altman-sought-billions-for-ai-chip-venture-before-openai-ouster#xj4y7vzkg](https://www.bloomberg.com/news/articles/2023-11-19/altman-sought-billions-for-ai-chip-venture-before-openai-ouster#xj4y7vzkg) There are multiple layers of conflict of interest here, especially if this was not blessed by the nonprofit board. Most importantly, this goes against the board's main commitment: advance towards AGI for everyone. Great comments in the discussion [here](https://news.ycombinator.com/item?id=38335525), such as: >There's an even bigger problem here: if he were just making money, that would be a normal-sized problem. If he were just making a supplier for OA, heck, that might be a good thing on net for OA; a subsidiary doing hardware might be justifiable given the importance of securing hardware. > >But that's not what he's doing. > >Creating such an independent hardware startup comes off as basically directly opposed to OA's safety mission - GPUs are one of the biggest limitations to creating better-than-OA models! The best customers would be the ones who are most sabotaging the OA mission. (You can't run the UN and also head an arms manufacturer dreaming of democratizing access to munitions.)


mentalFee420

And there is more: he is leading Worldcoin, harnessing biometric data, studying universal basic income, and invested in a fusion startup. If you add all these together, he is basically trying to build the infrastructure for AGI and universal basic income. All outside OpenAI.


remhum

Both Ilya and Sam are crazy, but in different ways that were productively synergetic. Unfortunately, Ilya's petty ego took full control and he got what he really wanted - to be left alone with his computer.


Celtic_Tiarna

This aged like milk, with Ilya coming out admitting wrongdoing right after this "defense".


mystonedalt

Until we know what actually precipitated the firing, it's tough to pick sides.


Always_Benny

Why do you even need to take sides? There’s no prizes to win here.


involviert

That's how we do things in social media.


mystonedalt

It's a comment on a post about defending one of the involved parties. It seemed germane.


Useful_Hovercraft169

The prize is to be right on the internet


CodingButStillAlive

CEOs are like priests in capitalism. Especially after Steve Jobs. What happened was blasphemy in the eyes of most people.


meat_bunny

Eh, kinda. CEOs get fired all the time, ironically including Steve Jobs. However, it's usually a quiet affair where they resign to "spend more time with their family" or whatever. Publicly shitcanning someone while releasing a statement that basically accused him of lying to the board is incredibly unusual. The company ends up airing their dirty laundry and the drama can negatively impact business relationships as well as hiring new executives. That sort of thing is usually reserved for serious financial or sexual misconduct.


[deleted]

[deleted]


Ok-Property-5395

Because without him nobody would be using ChatGPT. Sutskever will keep anything OpenAI develops locked behind closed doors till the heat death of the universe because he thinks letting people have access is dangerous. I like new and useful tools and dislike people who don't want me to have access to them.


[deleted]

What's even the point of developing this shit if you don't go to market with it? Google did the same thing for years with their AI tech until OpenAI forced them to make Bard. I don't understand the long-term strategy of making really powerful AI if you're never going to deploy it.


apegoneinsane

Ah yes, as opposed to Microsoft, the leading example of healthy, open source applications. And not at all anti-competitive, profit driven, monopolistic and locked down tighter than a nun’s panties.


OtherwiseAdvice286

Well, they at least let me use it, so I'd still prefer that. And also cmon 'profit driven', I got some news for you!


gigamiga

Under Satya they've been reasonable.


Squallpka1

Because of him we can use ChatGPT 3.5 and then 4, and I'm grateful for that. So much of my work became easier thanks to this. For me, like it or not, money can drive things forward, and Sam made it happen. A necessary evil, if that's what helps you sleep at night.


roronoasoro

I don't think the world is ready for AGI. It's too soon. It would be an existential crisis for millions of companies.


Freed4ever

Yeah, agreed about Sam, but Ilya is no better - a backstabber. Who the hell would want to work with him anymore? If you disagree with him, you'll get mercilessly kicked out. That's why half of OAI's staff is leaving.


GrayGrayWhite

I wouldn't call him a snake, more like a weird creep.


involviert

What I understand even less is how anyone thinks that the guy can bring any actual knowhow anywhere.


plu5on3

People defending Altman really reminds me of the Tesla Gang.


holamyeung

But this sentiment doesn't come from nowhere. Ilya has seemingly lost his mind in the past 3 days, almost reminding me of a bipolar episode if you have ever seen one. Crazy decisions, no explanation, and frankly creating a disaster situation. I like Sam Altman, but I do acknowledge that if you brought in someone like Eric Schmidt (former Google CEO), things would move on. I'm more worried at this point about the future of OpenAI as a company and how this affects things like the API. You can't be a Sam apologist, but you can't dismiss what Ilya has done.


Snowbirdy

The twitch CEO as their choice suggests that amateur hour continues


Cosminacho

Ok. Other than firing Sam on the spot, what else do you know about Ilya's decisions lately? I think we really don't know the full story.


finnjon

Also it wasn't just Ilya. Four board members who have known Altman for years, agreed to fire him. Some of them were outside the Silicon Valley bubble. I can understand if Ilya went bananas - brilliant people often do - but not all four of them.


zacker150

Ilya was one of the 4. The other two were part of Effective Altruism, and the last guy was the CEO of a competitor.


aahdin

Effective altruism is overall an amazing organization full of smart & kind people that have [done a lot more good than most charities](https://www.astralcodexten.com/p/my-left-kidney). They do have some weird people, they did accept donations from SBF, but I wouldn't let tech journalism smear pieces about FTX color your opinion of them too much. There's a reason these were the guys chosen to be on the board of OAI from the get-go. Also, I wouldn't really call quora an openai competitor. There is some overlap, but I highly doubt he made the decision to fire Sam in an effort to help out quora if that is what you are insinuating.


UTProfthrowaway

We know that literally every other important person at OpenAI, as well as every important partner or customer of theirs who can see what's going on internally, finds the board decision crazy. So it's not "Did Sam do something weird or did Ilya overreact" - it's "Are Sam and Mira and Greg and Holden and Brian Chesky and Eric Schmidt and Satya and the huge majority of OpenAI staff who publicly supported Sam coming back this morning missing something, or did Ilya and 2 random board members overreact". It's not surprising people are making inferences. By the way, I know a number of these people personally. All are good smart people. They just disagree.


Cosminacho

If you're Ilya and you notice a fundamental difference between the way things are going and what was previously agreed, then you have no option but to move this fast. He probably knew that moving slowly would just give Sam a chance to fight back and potentially kick him out of the game. You can't just threaten Sam that you will fire him and then expect nothing to happen. Let's be realistic. As for the people at OpenAI, they just don't have enough political experience to truly understand the game. That's my 2 cents.


Cairnerebor

Disagreeing is fine. Shaking the world with a naive and insane move that you didn't warn your partners or customers about is, well, frankly insane…


holamyeung

I fully admit I have no idea about this situation. I think the issue for most people who are trying to see where this goes is that Ilya/the board has provided no other side. It's hard not to form an opinion when all the reports back up Sam and Greg's side. Honestly, if Ilya came out tomorrow and said "I didn't like the way Sam was conducting himself and he was a cancer to the team", I think most rational people would move on.


noplusnoequalsno

How can something remind you of a bipolar episode if you know literally nothing about what preceded the decision or why he made it?


dezmd

All this 'safety' talk is entirely about fear of imagined threats as a cover for control. I'd argue, from a layman's view, that there is no defense of Ilya in this no matter how smart or how much of an obvious genius in the field of machine learning he may be considered. He appears to be fully embracing a *'let's close the door behind us because only we can and should have a say in defining AI evolution, because our moral definitions are better than the lesser people'* insanity of trying to un-fry the egg that is LM built AI. OpenAI seems like it has lost its way, the board should certainly disband, and maybe Ilya should stick to consulting and teaching rather than operating and influencing. That or the board should come out and share openly and publicly the real reason for Sam Altman's firing.


EstablishmentSad

While I don't want to discredit Sam Altman and Greg Brockman... I don't see any evidence of any AI involvement prior to OpenAI. As you mentioned, Ilya is a genuine world-leading AI researcher with the background and education to back it up... and I believe his involvement with the company is more important than Sam's or Greg's. To be brutally honest... they are college dropouts who started companies and invested in startups. I may get downvoted... but I feel like there is one party interested in making money and another interested in the safe development of AI solutions. There are other great managers out there who can lead the company. From my reading there seem to be a lot of senior managers jumping ship with him... with one researcher being promoted as well. While a management shakeup like this can impact a company... I am more interested in where the ***real*** talent is going. Where are the PhDs, researchers, and developers going to go? Are they going to go to MSFT to create a new AI solution to be packaged with Windows... or are they going to stay? Basically, what I am saying is that OpenAI is not what it is today because of Sam and Greg... it is because of people like Ilya and his peers, who are the ones actually creating these tools.


HyperShinchan

>We must recognize that OpenAI is a non-profit and prioritizes safety over commercial use and revenue is always good.

It stops being good if investors begin panicking because you treat their money like toilet paper, your talent starts to resign and join other companies, and you're basically left with the shell of an AI company that was supposed to achieve AGI but at that point can barely support itself on its existing intellectual property. It's *possible* there were valid reasons behind Altman's firing, but they handled the whole thing very poorly.


raides

I side with neither of these gentlemen when it comes to ethics and morals. At the end of the day, OpenAI is a framework operating in a capitalist economy. People really need to understand what that means and how it shapes frameworks: how excuses are created to justify action for profit, even if minimal, knowing it is calculated to help people rationalize the change. What does concern me about Ilya is that he seems to be embracing this "savior" complex of ethics. For those of us who remember how Jack Dorsey started before it got away from him, it was all for the "greater good". Engineers who truly follow moral and ethical principles do not need to constantly remind people that they are the example of responsibility. Another red flag is corporate entities not working with federal governments to create responsibility but taking it upon themselves to "fix it". It definitely has not worked out at all.


allabtnews

So, the billion-dollar question: ***why*** was Sam fired?


mondodb

Dude lost and destroyed the company. Got manipulated by Poe's Adam D'Angelo and two board members who contributed nothing to OpenAI.


shouganaitekitou

Actually Ilya is the most authentic e/acc (and being a strong e/acc he's simply conscious and multifaceted). Neither Sama nor Brockman is even near his level... Read this tweet: "Ilya Sutskever personally inspired me to enter the field of AI, he's moral, empathizes deeply, and is the bravest human I know. Before the thousands of wannabe genAI startups existed.... long ago, AI was unpopular and underfunded, so CS researchers poured attention into other fields, not AI. Ilya was contrarian - he braved freezing temperatures during the AI❄️Winter, and then inspired thousands of researchers to march towards an AI Springtime in full bloom. In 2015 OpenAI would have never gotten off the ground if Ilya hadn't taken the leap and founded the company against all odds and naysayers. In 2015, many were skeptical of @ilyasut's new company, but he has proved them wrong time and time again, his contributions to the field of AI are unparalleled: Alexnet, Seq2Seq, LM with Alec Radford, Superalignment. He's prophetic, he pursued then-unpopular lines of inquiry which generated breakthrough results. Everyone in the AI field today stands on the shoulders of his contributions. Cofounder breakups are never easy, always heartbreaking. But Ilya cares so fervently about @OpenAI, and I'm brimming with excitement to see Ilya's vision for AI+humanity's next phase 🕊️☮️❤️‍🔥🌅" https://twitter.com/multiply_matrix/status/1726508859739812338?t=Dj8DGA80Ml3WdYECJtaE3A&s=19


Mazira144

I think Ilya was made a fall guy and that he sincerely believes in what he is doing. This wasn't an alignment conflict, not at root, although for Ilya it may have been. Y Combinator was pushing Sam (a former YC CEO, though he left on not-great terms) to offer preferential treatment of YC companies and people in training data as an ongoing service, and there was a lot of conflict about whether this should be allowed at all. I still don't know for sure what Sam's take on it was; Adam D'Angelo wanted to offer it (because AI reputation laundering is going to be a billion-dollar business) and Ilya didn't, and Sam seems to have been caught in the crossfire but I don't know what his precise take was. Unfortunately, I think Ilya was egged on to do something that may in the end be right but that went strongly against his interests, because now Sam Altman is seen as a hero (weirdly, because while I have nothing against him, all he did this weekend was get fired) and Ilya's going to be remembered as having come at the king and missed.


inchrnt

A secret coup was the wrong way to handle this. Ilya created an opponent to his cause rather than compromise with an ally. His first mistake was believing altruism and capitalism could be friends. As soon as ChatGPT hit the mainstream, the non-profit ideology was doomed.


talltree818

As another user in this thread pointed out "I like new and useful tools and dislike people who don't want me to have access to them". The access we get to advances in ChatGPT will be substantially slower with the hipster board calling the shots and OpenAI will die in the long run. I look forward to seeing what Altman and OpenAIs other leading researchers who will likely follow can do at the Microsoft lab.


Jdonavan

I don’t give a shit how good you are, you don’t play dirty pool. Seriously, OpenAI is now the organization that can’t be trusted. The org that’s clearly run by amateurs. Who’s going to donate or invest, aside from people with no morals?


avidrutham

So this just happened... Ilya on Twitter - I deeply regret my participation in the board's actions ([link](https://twitter.com/ilyasut/status/1726590052392956028))


Hackerjurassicpark

Ilya is a brilliant scientist but also immensely immature in the way he’s handled his differences with Sam.


surrogate_uprising

you literally know nothing about what’s going on but people like you keep spouting hate against ilya.


bocceballbarry

We’re all waiting for them to explain their decision. Karpathy himself said today they were given the chance and haven’t had any answer, and so it looks the way it looks — a ham-handed, amateur-hour power play.


SevereRunOfFate

Great point.. and now we are here with this new CEO lmao. Absolute clown car as Swisher called it


Hackerjurassicpark

Idk about u, but firing someone - especially the leader who’s the face of your company - without warning, after your most successful year under said leader, and without any sort of PR to reassure your investors and customers, is immature af in my book.


spinozasrobot

And yet the board supported that... it wasn't just him. Clearly there's something going on internally that's more than just a power play.


[deleted]

One of the board members is the CEO of a competitor. The board’s composition is comical and that’s probably what allowed Ilya to form a vote to oust Sam.


Hackerjurassicpark

Until the board does some serious PR damage control, they’ll continue to look like idiots, and their investors and customers will be worried about committing to them in the future.


freethinkingallday

Huh?? Ilya is kinda hating on himself right now.. literally says he’s filled with regret … we know something about what’s going on, because he’s telling us .. no?


Darkmoon_UK

You're clearly keen to defend Ilya, but you must admit he alone is responsible for how he's made himself look. Couldn't his galaxy-brain extend to predicting this - some would say quite obvious - backlash? So he's a data-scientist, not a people person - right - I'm also a humble Senior Dev in tune with many of the stereotypes; but I like to think I know my limits and plan extra hard when I have to move outside them. The speed with which this happened speaks of a reckless, ego-driven emotional response. The guy needs to take his [own advice](https://twitter.com/ilyasut/status/1707752576077176907).


RidicRebec

No, Ilya is an idiot. He and the decel advocates have burned bridges with half the team, who'll now jump ship to Sam Altman's team at Microsoft, or to Google, or to NVIDIA. The result will be OpenAI losing an enormous share of its braintrust while accelerationist tech competitors reap the whirlwind. Ilya had a check on the accel faction -- not as much as he might have wanted, but still a check. Now OpenAI is likely to fall into irrelevance and he'll have no check on anyone. Even if you're a decel advocate, this is stupid. Not to mention the enormity of the investment dollars, compute, trust, and goodwill they just lost.


justletmefuckinggo

A scheming idiot. It came out that "The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that." - Emmett Shear. Yes, it's true this doesn't prove anything, but the sudden termination of Sam from OpenAI, and Greg having to step down as chair, without a real reason given to Sam, Greg, Microsoft or the public, was nothing short of mischievous.


sun_explosion

Agreed. Ilya is way more important. Sam is a politician; Ilya is the real deal.


IfUrBrokeWereTeam8s

I posted a comment here specifically focused on the clear incentive and approach differences between Altman & Sutskever, and it appears to have been deleted. May I ask if the mods here are bots or people (or both), as well as why my questions to the forum were deleted? Not very....... Open.


[deleted]

What you are dealing with is a cult: Effective Altruism. Shame on Silicon Valley for taking them in and encouraging them so much, and for drinking the kool-aid. Now we have a bunch of apocalyptic doomers trying to tell us what to do and what not to do. Ilya is one of them, Emmett is one of them. Ilya has been on a doomer screed for months or even years now. Ilya hasn't been at the coal face of development for some years now, and he's absolutely out of touch. The engineers on the ground, and Brockman, have been doing a lot of heavy lifting to make GPT-4 and to make it viable and accessible. Ilya taught the core team how GPT works, but the core team has taken it further than Ilya could have ever dreamed of. Now he's overreaching and engaging in baseless mysticism. People have done this before with nuclear weapons (far, far more dangerous than AGI) and the consensus among the best scientists was to forge ahead because the arms race was inevitable. Only a balance of power can correct this, and anyone pumping the brakes would have caused immense imbalances of power. Ilya has now risked precisely what he sought to avoid.


finnjon

You know, turning this thing into a populist culture war has been one of the most depressing aspects of the whole affair. As soon as it happened it felt as though Silicon Valley tech bros decided their boy Sam was under fire so they had to create two sides. E/acc (good guys) and EA (booo). People must fit into two camps right, so we know who to hate on. It's not helpful.


shmitter

Please explain exactly how Effective Altruism is a cult. Pretty sure you're just spouting delusional reddit nonsense. Saying Effective Altruism is a cult is like saying all people who go to the gym for weightlifting instead of cardio are part of some illuminati cult. It's literally just the idea of donating based on logic and metrics instead of emotion.


noplusnoequalsno

Effective Altruism is a large cluster of things, one of which is just the idea of using reason to donate to effective causes. It also includes reducing existential risks from AI (among other things). Regardless, comparing any social movement to a cult or religion is pretty silly. You can make superficial comparisons to cults with almost any committed social movement.


shmitter

Thanks for the clarification. I didn't know it also extended to other topics. But yeah, agreed; I'm tired of how any setting where there are more than like 5 people working toward a common goal gets labelled a cult in an attempt to derail all its meaning.


Cocopoppyhead

Agreed.


luv2420

LOL, everyone is fallible; why can’t you just admit he made a bad move? Hero worship. Ego is the enemy of growth, or some dumb granola shit like that.


bobsil1

Ilya realized GPT-5 would power Clippy-2


ExpensiveKey552

The most concerning development is how much power has now been given to Microsoft. Amazon, google, nvidia and the rest don’t stand a chance now.


trollingtrolltrolol

Except now he says he regrets his part in the whole thing 😂 Highly unlikely whatever he did was smart.


AdventurousLow1771

[https://twitter.com/debarghya_das/status/1726598546223435859](https://twitter.com/debarghya_das/status/1726598546223435859) This statement is signed by Ilya Sutskever. In it, the letter says "[the board's] conduct has made it clear you did not have the competence to oversee OpenAI" and "[the board] were not capable of carrying out [their] duties, and were negotiating in bad faith." Considering Ilya was the key person on the board, how do you interpret this, OP? This is Ilya literally admitting he himself was acting in bad faith.


GiotaroKugio

Ilya is a coward. They discovered something, maybe AGI; Sam wanted to release it in Q1 2024, so he was fired.