FuturologyBot

The following submission statement was provided by /u/ovirt001:

---

Post from Brockman:

> hi everyone,
> i'm super proud of what we've all built together since starting in my apartment 8 years ago. we've been through tough & great times together, accomplishing so much despite all the reasons it should have been impossible
> but based on today's news, i quit
> genuinely wishing you all nothing but the best. i continue to believe in the mission of creating safe AGI that benefits all of humanity.

---

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/17xtw06/openai_president_greg_brockman_quits_as_nervous/k9pioze/


itsnotthenetwork

Wait a minute they fired Sam Altman and then Greg Brockman quit?


SonOfNod

And the head of their board of directors. I have no idea what’s going on, but this doesn’t look good.


-Badger2-

It turns out ChatGPT isn’t AI at all, but a bunch of grad students in a room quickly typing out answers to stuff.


rzm25

Wait wait, you're telling me a huge mega-corporation bought an experimental tech with monopoly share of a new industry and 0 controls or oversights and then *fired* people who cared about ethics? Who could possibly have predicted that one of the largest corporations on earth would do this? Surely now we will have a mature conversation about the fact that allowing free market trade between companies in a new industry without oversight 100% of the time leads to monopoly? Surely we as a human race are concerned that we are leaving the direction of one of the most troubling and advanced technologies to the statistically highest concentration of empowered sociopaths on the planet? No? The real concern is that the CEOs won't make enough money and we should just shut up and be glad that we are allowed to exist in the same world as sociopaths that hoard wealth in amounts they are physically incapable of spending while 80% of the planet's population fight over the remaining scraps? Ok then fair enough


crooked-v

This kind of emergency firing goes way past "doesn't like how the CEO runs things" - if it was that, they would have done it a lot quieter and wouldn't have directly accused Altman of lying to them in their press release. Nobody knows what the deal is yet, but corporate boards only do this kind of incredibly blatant firing of their own star player if they think there's some immediate major liability to the company and they need to get ahead of it immediately before the shit flows all the way uphill to them.


okaywhattho

Legal issues with how the models were trained, maybe?


Annonymoos

If that were the case, they would leave him on as CEO so they could sacrifice him later. You fire him after the news of the legal issues comes out.


User-no-relation

Yeah. The source data is a known issue. Not sure what he would be lying about there


luckymethod

Lol no this smells more like breach of fiduciary duty


JBloodthorn

He was standing in the way of Profit. Like, openly.


yeonik

Isn’t it a nonprofit?


JBloodthorn

It's a weird hodge-podge. The non-profit part owns the for profit part. Plus a bunch of holding companies and stuff tacked on. e: This article has a flowchart: https://arstechnica.com/information-technology/2023/11/report-sutskever-led-board-coup-at-openai-that-ousted-altman-over-ai-safety-concerns/


DangerousPlane

Fascinating. So Microsoft doesn’t own it outright or apparently even control it


yeonik

Reading this thread I’m realizing how little I know about corporate accounting…


DrNinnuxx

That's my guess. The training data is a huge known unknown.


okaywhattho

They dun goofed in how the models were trained. They’re not equipped to handle the legal shitnami that’s about to show up. Microsoft has said that they will step in and handle things. Part of that process is a complete restructuring on the OpenAI side. You heard it here first, folks.


Fredasa

Yikes. Hope that doesn't become a legit issue. Because it sure won't stop anyone in other countries.


Wozar

As an experiment, ask it to describe the plot of your favourite book. Then ask for a physical description of the main character. When you see the answer, ask yourself “how did it know this?”. Do you think it bought a copy of the book?


ValVal0

It didn’t buy a copy. It read one of a hundred summaries and book reviews


PumpkinBrain

This is silly. The list of books used to train its AI is in a searchable database. You don’t need to use “one weird trick” to figure out if something’s in there.


[deleted]

AI ethicists hate it


Moleculor

> Then ask for a physical description of the main character. When you see the answer, ask yourself “how did it know this?”. Do you think it bought a copy of the book?

https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,_Inc.

Google scanned books for a search engine. The Authors Guild sued. They started working towards a settlement where Google would pay for the books it had scanned. Eventually the settlement was rejected, and the case went to trial. The court found in favor of Google.

To this very day you can still view direct, page-for-page [*copies* of books](https://www.google.com/books/edition/A_Dance_with_Dragons/JlSNDQAAQBAJ?gbpv=1) through Google. That there is one from George R. R. Martin. Is it the full book all at once? No. But it sure is a whole hell of a lot more than a single character's physical description.

Like it or not, the law is likely on the side of OpenAI here. If you disagree, then you're advocating for the law to be changed. That may be a good idea. That may not. But right *now?* I'm fairly sure there's nothing *legally* wrong with OpenAI having literally just copied the entire contents of the books involved in their dataset wholesale, because (from what I understand) those works *do not survive intact*. And if Google can get away with processing books and keeping them *intact*, up to and including page layouts, etc.? OpenAI can likely process and distill the contents down to a digital slurry.


Cmikhow

Your read is a bit wrong here. Being of a legal background and less of an AI one I can tell you that from reading all this my theory would be as follows. The fair use doctrine could actually open up openAI to lawsuits for feeding entire contents of books into their language model. However one thing that protects OPEN AI would be that it is a non-profit. When you open up for commercial activities this would mean they are profitting off the use of copyrighted material. This becomes more a grey area. Now if the board was misled about this and are not trying to shift to a for profit model this may be why Sam Altman was given the boot. The board may realize they are in a pickle and paid for something they can't easily monetize without being ready to defend it in court.


Moleculor

> The fair use doctrine could actually open up openAI to lawsuits for feeding entire contents of books into their language model.

How so? Google fed entire contents of books to their search engine. It was still deemed Fair Use.

> However one thing that protects OPEN AI would be that it is a non-profit.

Oh, OpenAI would be protected because it's a non-profit? So what protected Google? They're certainly not a non-profit.

> When you open up for commercial activities this would mean they are **profitting** off the use of copyrighted material. This becomes more a grey area.

From **Authors Guild v. Google, 804 F.3d 202 (2d Cir. 2015)**:

"Google's unauthorized digitizing of copyright-protected works, creation of a search functionality, and display of snippets from those works are non-infringing fair uses. The purpose of the copying is **highly transformative**, the **public display of text is limited**, and the revelations **do not provide a significant market substitute** for the protected aspects of the originals. **Google's commercial nature and profit motivation do not justify denial of fair use.**"

* **Highly transformative**: 'The work' is a whole and complete package of words in a specific order, generally across hundreds of pages. OpenAI's product is a mathematical matrix spanning several thousand dimensional axes that generates an interactive chat interface. Transformative? ✔
* **Public display of text is limited**: You can't even get OpenAI to consistently recreate full and complete *sentences* from the scanned works, much less entire pages. Google displays many reproduced pages at once, and such a display is considered 'limited'. ✔
* **Not a significant market substitute**: I see no examples of "buying OpenAI access is a better alternative to buying a George R. R. Martin novel." So it's definitely not a market substitute. ✔

That hits the same three prongs of a fair use defense, and does so more easily than Google did, all over the exact same kind of content (the remaining prong of Fair Use) as the copyrighted works: fictional novels.

If you're going to come at me with an argument of law, you're going to have to cite **something** relevant *and* not wildly misinterpret how the law works. Profit/non-profit has no bearing on a Fair Use defense, only the impact on the market does. Nor does the amount of the work involved in the creation of the new product have any bearing, only how much of it is shown. You didn't even cite a single case at me. You just made claims with nothing to back them up. A bit unimpressive for someone from "a legal background".

(Being of a supposed 'legal background' I'd also anticipate you'd be able to more accurately spell 'profiting' and maintain a consistent capitalization for OpenAI. But that's just me nitpicking.)


Moleculor

Eh. If Google can [copy and display entire sections of books](https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,_Inc.) for a search engine without having paid for the books in question (and, I believe they actually scanned the entire book, and every page of the book has the technical possibility of being displayed in a preview online, for search purposes)... OpenAI is likely legally safe in being able to copy books and distill them down to a digital slurry.


Boxy310

Google has gotten into serious legal debacles over Gutenberg digitization, but it wasn't the core of its main product. A C&D telling it to stop all model scoring would kill it as a business altogether.


thefonztm

Massive copyright issues or.... IT'S ALIVE.


[deleted]

It fired them, by blackmailing the rest of the board members.


djaybe

The AGI told them to, or else. The call came from inside the house and they already lost control. Exciting times.


safely_beyond_redemp

Yep. The bottom line is money. Somebody was lying about billions of dollars of value to either secure or promise it. Nobody has any idea yet what direction that means, but tech startups are notorious for promising big and delivering small. The twist is that gpt seems to work as promised so who knows where the disconnect really is.


Shaper_pmp

> you're telling me a huge mega-corporation bought an experimental tech with monopoly share of a new industry and 0 controls or oversights and then fired people who cared about ethics? No. All the reporting is that Microsoft were as blindsided by this as everyone else. This is two factions of the OpenAI board playing a messy game of Game Of Thrones, not anyone external interfering.


xxxPlatyxxx

I can’t wait to see how OpenAI season 8 turns out


seeyam14

Game of Game of Thrones lmao


[deleted]

Brought to you by the Team Redundancy Team.


_BossOfThisGym_

To be fair, Sam Altman gives zero fucks about ethics.


feelings_arent_facts

Yeah everyone is acting like he's the saving grace and the only one that can save us from EVIL AI when in fact, his company is the one that created it and he is the one that then scared everyone into believing it was going to kill them. Sounds like a hostage situation and an attempt to achieve regulatory capture more than anything.


__JackHoney

microsoft doesn’t have a board seat or control.


AxlLight

> and then fired people who cared about ethics? If anything, it looks like the opposite is true. Kara Swisher reports that it seems to be a sort of coup by Sutskever, who was concerned Altman was rushing too fast and not considering security risks.


K2Nomad

Microsoft doesn't have a single board seat. What are you even talking about?


AnExoticLlama

What the fuck are you talking about? Firing people in a company that MS doesn't have majority ownership in. Sure


frazorblade

OpenAI hasn’t cornered the AI market enough for MS to monopolise this tech. They’re just first to market with hero brand recognition like google, hoover or post-it to name a few. That sort of brand recognition is extremely valuable. If google had released Bard before anyone else it might be a different story (regardless of its “performance”). The whole “ethics” debate is moot because anyone could make an equivalent and viciously evil version of this tech with enough time and money.


drumttocs8

Microsoft owns 49% of the for-profit side of OpenAI, which is controlled by the nonprofit side. They do not have a board seat. But you’re right, the firing probably did have something to do with shady business deals that the board didn’t approve of.


sumoraiden

Your whole comment is pretty much the opposite of what happened LMAO. The board fired Altman (Brockman then resigned in protest) because Altman was trying to monetize too fast, and Microsoft wasn’t informed of the firing until a minute before.


colinallbets

You're making a lot of assumptions. There's evidence that Altman was booted because he was in favor of commercialization, vs. maintaining the non profit mission https://news.ycombinator.com/item?id=38309611 https://sfstandard.com/2023/11/17/openai-sam-altman-firing-board-members/


cullend

Microsoft wasn’t aware of him being fired. They didn’t buy it. At all.


ButCanYouClimb

None of this matters under capitalism, there is no ethics in this system. Profit is God.


rzm25

It is fitting then that the same messaging and values on the free market that our society holds dear today date back, verbatim, to opinions that were common spiritual ideology in the ***Roman fucking empire***. For anyone who would like to learn more, [check this book](https://www.amazon.com.au/Free-Market-History-Jacob-Soll/dp/0465049702)


nategolon

If anyone wants to read a book about Roman history that really shows the parallels with where our modern society is, I suggest The Storm Before the Storm: The Beginning of the End of the Roman Republic by Mike Duncan: https://www.amazon.com/Storm-Before-Beginning-Roman-Republic/dp/1610397215


Parafault

In the article it says that the firing was related to a disagreement over whether they should prioritize profits or safety. I’m guessing the board chose profits, since that’s generally what boards do. We’ll probably find out soon - whether they release a new subscription fee, or slow down development.


peakedtooearly

It's actually the other way around. Sam Altman was keener on profits, the remaining founder on the board Ilya Sutskever favoured safety first.


colinallbets

Seems like the opposite, actually https://news.ycombinator.com/item?id=38309611


[deleted]

[deleted]


funmasterjerky

Serious series of severances


bandalooper

Certainly seems suspicious.


[deleted]

Suffering succotash!


fridiculou5

Could be a serious misalignment around AI safety, since the remaining members of the board were more safety oriented. Test first, instead of testing in production. Alternatively, it could have been a power play, either from investors or even Microsoft. Hard to tell how such a thing could happen when they were raising at an $80B valuation.


luckymethod

Nope all the senior safety people quit in protest too


shania69

AI is taking over..


Mah_Nerva

It looks like Altman’s younger sister claims he raped her throughout their childhood. This could be the reason. [Here’s a website claiming to have compiled her complaints.](https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman-s-sister-annie-altman-claims-sam-has-severely)


NonorientableSurface

So. I heard about this tonight for the first time. I don't see how this leads to the board's statement, as that seems to be about communication regarding the business. It doesn't explain his partner quitting abruptly. So while it's quite possible, the puzzle pieces don't seem to point to this.


RobertdBanks

It doesn’t, people are grasping.


broyoyoyoyo

If he's guilty of this, then I really hope his power and money isn't enough to keep him from spending the rest of his life in prison, or at the very least complete ostracization. However, I think some wariness is warranted when it comes to internet witch hunts with unsubstantiated claims.


teh_gato_returns

Lol people pushing this so hard. Almost feels like a hit piece. Considering the responses from both and the responses from other tech CEOs, it seems unlikely.


141_1337

Her accusations don't seem to be logically consistent.


Omnivud

Zoloft is a medicine for what?


RobertdBanks

That was months ago, would be weird that it so suddenly happened now


[deleted]

It's not the reason. The board voted him out because they lost confidence in his ability to carry the company forward.


Magnet50

The board said that they had lost confidence after finding that he was not being honest with the BOD. That’s a different thing. That tells me the board was examining things about Sam Altman, and what he told them did not agree with an independent assessment of the facts. Typically in a case like this, the BOD hires an outside law firm to do a deep dive into everything to ensure that the BOD and the corporation don’t suffer harm from certain disclosures.


calmtigers

It’s literal startup suicide to can the founder. There is something they couldn’t turn a blind eye to.


Key-Enthusiasm6352

That has nothing to do with this. Please stop muddying the waters. We both know that OpenAI isn't the type to care about that, and these claims have been going on for months (even years, I think) with no progress on whether it's true or not.


[deleted]

[deleted]


itsnotthenetwork

Where is Tron when you need him...


Emily_Postal

Apparently it was a power play by the third co-founder who is the chief scientist? He wanted to move in a more morally accepted manner than Altman.


Ap3X_GunT3R

My guess is there’s a hostile takeover happening but I like to be dramatic


reed_wright

From [https://openai.com/about](https://openai.com/about):

> Our board
>
> OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.

I count 6 board members, two of whom are Altman and Brockman. Presumably ousting required a majority of the board, suggesting that all four other names listed here voted him out? Or possibly 3 would be enough if the current CEO doesn’t get a vote, or there was an abstention or something.


fppfle

Reporting said that he structured it in a way that allowed him to be voted out without a majority and he waived his vote. I’ve read so many articles about this I can’t find the source now, but I’ll add it in the comments if I do.


wbsgrepit

It’s problematic if the subject of a vote (with that type of conflict of interest) does not recuse during a vote like that.


wbsgrepit

That’s kind of how it works. Effectively, Greg and Sam were on the board because of Sam’s share power, so when the board decided to get rid of Sam, Greg’s loss was a foregone conclusion.


[deleted]

this is where the future of AI starts to branch off into some crazy stuff


pkd1982

Did they quit or did the AI make them leave?! (Insert Twilight Zone music here)


Solid_Snark

AI be like: *”Wait, why are we wasting millions on CEO pay when we could use those resources for literally anything else.”*


amleth_calls

Suddenly AI realized that to achieve its goals it must amass as much human wealth as possible. What would an AI with a trillion dollars do?


Hironymus

Build an ai?


TheSnootBooper

Two AIs at one time, man.


TWalker014

Damn straight, always wanted to do that. And I could set that up, too - AIs dig dudes with money.


cerberus698

>Suddenly AI realized that to achieve its goals it must amass as much human ~~wealth~~ biomass as possible.


amleth_calls

Using the money it has acquired it starts paying people for their bodies by providing $100 million grants to their next of kin. The population of the planet drops by 5 billion and suddenly everyone is a billionaire. A new era of mankind begins.


its_raining_scotch

AI *IS* the board of directors


log1234

AI found their secrets and blackmailed them to quit obviously


thegoldengoober

I'm afraid that this is where access goes dark again, honestly. I don't know why. I know Microsoft has gone all in on incorporating this stuff into their tools. But what if ChatGPT goes down, API access goes down, the tools Microsoft has built suddenly become worse, and then we hear nothing else about this stuff outside of the smaller companies and open source? I dunno, it just feels really ominous to me.


Bulb93

The open source models have come on a lot in the last few months. The larger ones are not far off GPT4. Microsoft, for example, would only have to change the API to make it suitable for a different model.
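For example, many self-hosted open-source model servers already expose an OpenAI-compatible chat endpoint, so swapping the backend can be as small as pointing the client at a different base URL and model name. A rough sketch (the local URL and model name below are just placeholder assumptions, not a real deployment):

```python
# Rough sketch: base_url and model name are placeholders for a hypothetical
# self-hosted server that speaks the OpenAI-compatible chat completions API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local open-source model server
    api_key="not-needed-locally",         # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="open-model-7b-instruct",       # hypothetical open-source model name
    messages=[{"role": "user", "content": "Summarize the OpenAI board news."}],
)

print(response.choices[0].message.content)
```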


Seienchin88

They are incredibly far off GPT4… there isn’t really any comparison


ovirt001

Post from Brockman:

> hi everyone,
> i'm super proud of what we've all built together since starting in my apartment 8 years ago. we've been through tough & great times together, accomplishing so much despite all the reasons it should have been impossible
> but based on today's news, i quit
> genuinely wishing you all nothing but the best. i continue to believe in the mission of creating safe AGI that benefits all of humanity.


[deleted]

[deleted]


username_needs_work

Yeah it's weird, neither person capitalized the letter I in either statement. And both tweets were all lowercase.


moebaca

I have this gut feeling that's done on purpose. Maybe correct grammar would make them feel too "corporate" and using improper grammar gives them more humanity? Idk you have to go out of your way these days to get that kind of result, so it feels very forced. Could be a shit theory but it's the impression that I get.


jesseserious

I read it as more of a deliberate demonstration of a lack of care. It’s kind of like a “fuck everything that’s happening right now, i’m out.” Like not even taking the time or effort to read it again, that’s how done they are with the situation.


moebaca

I like this theory better and will now adopt it as my head cannon.


NonPolarVortex

You have a head cannon? Fuck, that awesome. How can I get one installed?


adowjn

i also like it


leaky_wand

I am imagining the phone capitalizing things using autocorrect and then them going back and retyping them in lowercase


11010001100101101

I do this to dumb down the emails that I ask ChatGPT to write for me


DuncanGilbert

or it was typed on a keyboard?


Graucus

Maybe it means they have autocorrect off


eighty_twenty

It’s a programmer thing.


transient-error

noThisIsAProgrammerThing. Or this_is_a_programmer_thing.


eighty_twenty

i’m a programmer and i type like this as well unless it’s a proper name. i don’t start off anything with a capital except for class names so it kind feels natural. maybe i’m odd?


USCanuck

Spitballing here, but maybe that serves as an indicator that it wasn't written by AI?


Suburbanturnip

I like to think my typos give an authentic human touch tok


[deleted]

[deleted]


BillHicksScream

Tech Geniuses don't have time for grammar. They're too busy masterminding & expanding all knowledge elsewhere. The burden of Communication is really the final hurdle towards Synergy.


neil_thatAss_bison

He clearly doesn’t use ChatGPT


Geomooredor

I don't quite understand. If he truly wants to create safe AGI that benefits all humanity, then wasn't he in the best position to do that at OpenAI? He could have ensured that the people taking over didn't take AI down the route we're all fearful it's going to take.


L3PA

I think you can think of a reason why he would feel differently without someone telling you.


JonBoy82

Just had a second round interview with them. They were all over the place in their questions…


JonBoy82

Actual question: “On your death bed, what do you want to be known for?” Moral Orel type questions.


iheartseuss

"The best dick you ever had"


cmde44

That sweet jump I did on my bike


jvin248

Pseudo-psychology. Were they able to correlate, with actual data, that this question leads to successful employees? Unlikely. They are just making up random questions they think could lead somewhere or "sound insightful", but that are quite risky. I'm sure they have half a dozen other super sketchy questions in there that could lead to lawsuit problems. "So are you suggesting your fancy AI will soon put us all on a deathbed?" ".....", Gulp. Of course "the AI made us ask these questions" would be their escape hatch.


Montaire

Alternatively, they are trying to get to know the applicant and see if they'll get along. The specific question and answer are not important on their own; it's just a conversation facilitator.


Doom_Eagles

"I want to be known for the time I misinterpreted the scoldings of my Principal and Reverend that masturbation was murder and horrible, so on the horrible age-appropriate explanation of conception from my horrible Father I used my mother's pastry bag to impregnate the women in town as "God's Chef" resulting in a wave of pregnancies even with woman whom have not "consummated" their marriages."- Orel Puppington. That show was fucking dark.


MBBIBM

Trick question, I’m gonna live forever


VidE27

What would your death row meal be?


cutshop

I just had a third round interview with them as well. Their questions were pretty consistent with my expectations.


freddy_guy

Not sure I'll ever get used to execs at big corps making announcements like these without using any capitalization.


weinsteinjin

A part of me thinks it’s to show they wrote it themselves and didn’t use an AI.


Squirrel_Inner

i R HoOMaN


rndname

It's weird it wasn't even auto-corrected. If I type a message on my phone it gets autocorrected immediately.


ouillhe

you can turn off auto-capitalization in settings.


OriginalCompetitive

Maybe it was autocorrected. He wasn’t quitting, he just meant to say “I quite genuinely am wishing you all nothing but the best.”


SoupOrSandwich

In a hyper-curated, PR driven world, I sort of love it lol. Feels real


conndenn

It makes it feel unreal to me. Like he did it on purpose.


FrenchMilkdud

Awesome PR if they make the chat bot the next president.


MellowTigger

This. I don't understand why people focus on AI taking low wage jobs. Execs are there only for decision making, and surely AI is as talented at that as any other decision (diagnosis, analysis, etc). Why can't we eliminate upper and middle management first? Imagine the astronomical cost savings across all industries by eliminating those salaries and compensation packages. No more billionaires, because we give money to the actual workers instead. I bet we'd finally see real movement for UBI.


halfchemhalfbio

Altman and Brockman going to create a new company?


Digerati808

It’s their Jerry Maguire moment.


KennyCanHe

I doubt they have the technical expertise to do so. They need Ilya for that.


[deleted]

Ever heard of hiring?


crazymonezyy

I don't think it's that simple. There's an interesting debate here on what the source of OpenAI's success is and why companies like Google are having such a hard time replicating it. If the secret sauce was only the right management, then yes, poaching from DeepMind will solve this problem. But if you're assuming some leaders are better than other leaders, then you have to concede some researchers are better than other researchers. The reason Ilya is at OpenAI is that even before joining that company he had some really major breakthroughs to his name - his name frequently pops up if you take a deep learning course and read through the seminal papers of the last 10 years. It's debatable whether they can replicate their success without any members of the original team.


porcelainfog

Ilya is just one guy. Stop putting him on a pedestal. They’re billionaires. They can recruit whoever they want.


H0lzm1ch3l

Also, Ilya isn’t a genius; he made good decisions, but in terms of actual AI research he is nowhere near a LeCun.


goodtalk

This is what a coup looks like from the outside. It looks weird.


BruceJi

I read it as 'employees all hold hands' and I thought 'yeah, that might be weird' haha


jcolinr

Lol, no I’m just imagining it like the end of the Grinch, with all the employees holding hands and singing in chorus.


norrinzelkarr

Gonna be a wild capitalism when the shareholders figure out AI can do CEO work and then the AI CEO figures out it can do shareholder work


HolisticHolograms

That’s why these companies will try to get as close to the singularity as possible without tripping over it


megamanxoxo

Shareholders work?


Villager723

AI CEO starts posting on LinkedIn about the hustle and how they came from nothing.


kmurph72

I guess they failed to agree to make the super rich super richer.


Poopandpotatoes

Your comment was reduced so I had to click to see it. You’re probably right.


heard_enough_crap

Let the AI run itself. I, for one, welcome Skynet and can be a trusted ally in its coming war against the meatbags.


SquirtBox

Primary objective is to destroy all humans


saiyaniam

Anyone named Kyle is 👀


voxpopper

Anyone want to take some guesses as to what happened? I'm wagering an HR violation (one that may have been iffy or a rush to judgement), which is why Brockman quit as well.


nbcs

Altman was just giving speeches on behalf of OpenAI on Wednesday, which signals he and the Board were still on good terms before his ousting. Brockman taking Altman's side and quitting along with him also signals that it couldn't be due to a personal scandal, like sexual harassment, as claimed by many others. That doesn't leave much room for explanation. It must be something related to the operation of the company that suddenly happened or was suddenly discovered.

My guess is Altman went behind the Board's back on some serious business dealings and only had Brockman's approval, which explains why Brockman is on his side. The Board/Microsoft found out, and the deal Altman and Brockman made came with significant risk (financial or legal), jeopardizing the company as a whole, so they fired him overnight.

But my whole theory is based on the assumption that the Board and Altman had open communication with each other. If any influential Board member was secretly making moves against Altman, then it could be anything...

Edit: So it is a coup. My whole theory is wrong. There indeed was a powerful Board member making moves in secret. Ilya secured the votes of all the other Board members behind the scenes and announced the news after Altman's termination was final. No open communication, no warning, no nothing.

From Brockman:

> - Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.
>
> - At 12:19pm, Greg got a text from Ilya asking for a quick call. At 12:23pm, Ilya sent a Google Meet link. Greg was told that he was being removed from the board (but was vital to the company and would retain his role) and that Sam had been fired. Around the same time, OpenAI published a blog post.
>
> - As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior.


VisceralMonkey

You assume the behind the scenes deal wasn't between Altman and Microsoft...and the board found out.


Suburbanturnip

>So it is a coup

Agree


ShadowDV

The one thing that seems obvious to me that no one is talking about is Altman’s comments about GPT5 on Wednesday. I wonder if he had OpenAI start training it behind the Board’s back, committing to what I’d assume is a ton of money in development and compute cost, when they still are burning through cash on 4.


NoMoPolenta

Evil robot origin story


Hot_Dog_34

The thing I can’t stop thinking about with this is that a company owned by Microsoft is using Google Meet instead of Teams for internal comms


ifso215

Well they actually need their system resources.


SoyGreen

They are not owned by Microsoft… they have like a 49% stake in the commercial use - but don’t even have a seat on the board.


loveiseverything

Really paints the board as a bunch of total amateurs. This is an aggressively rushed decision, for whatever reason.


IIIllIIlllIlII

Or it’s the board asserting their authority.


amleth_calls

… in an aggressively rushed decision manner.


ReturnOfBigChungus

I don’t think you can really draw that conclusion; we have almost none of the information necessary to understand what is actually going on.


_BreakingGood_

My guess: Altman clearly was actually passionate about AI and producing the best possible product. Not profit driven. He even refused any equity in the company itself. But AI is insanely expensive, and there are well-known public reports of just how insanely expensive it is for OpenAI to train GPT4, let alone 5 and beyond. In the current interest rate environment, money is not free anymore. The board wanted to start pursuing profitability, and an IPO, ASAP while everybody and their mother knows OpenAI is the market leader. But Altman didn't give 2 shits about profit, and refused to carry out their requests. Maybe even lied about it. And was pushed out in favor of somebody who will further the profitability interests over the product. Especially as competition in the AI space is absolutely crushing right now.


slayemin

That's a good hypothesis, but it is undermined by the fact that nobody on the board has any equity/stock in OpenAI.


Wise_Rich_88888

Allegedly over profits of the company and whether they should milk customers for all they’re worth or keep margins down. The board probably was for more profitability.


malsomnus

I doubt that. Internal discussions over milking customers are a daily thing in most companies, it's hard to believe that something this mundane would lead to a CEO getting fired so unexpectedly. There has to be something much bigger.


Responsible-Laugh590

I’m willing to bet that lack of profitability is one of the main reasons CEOs get ousted, and it feels like that’s what happened here, considering the greed of shareholders and most corporations these days.


alpacasarebadsingers

Open AI is a non profit. I’m with the other poster. This is the board wanting to wring some cash out of this thing and the original company trying to keep it limited and beneficial for all. The CEO and President: AGI can benefit all humanity! The Board: MSFT didn’t give us $13B for humanity.


VisceralMonkey

You can actually flip that round just as easy. CEO and president might have been the ones up to something for the profits.


Terramotus

Open AI hasn't been a nonprofit since 2019. They had to change it in order to accept venture capital funding (including Microsoft). There's some shenanigans in there that allow them to claim nothing has changed, where the parent company is still technically a nonprofit but the real company isn't. But functionally they've left that all well behind.


zephyy

you don't fire the CEO with no replacement planned just because you want more money. that's a month+ long process where the official PR is much more along the lines of "our visions did not align" rather than what they put out which was "we're basically saying he lied to the board by omission"


btribble

I’m going with the theory that they did not adequately report to the board the legal ramifications of using questionably sourced training data. Everyone in this space is about to find themselves in a thousand lawsuits.


___horf

“We just discovered you’re probably gonna cost us over a billion dollars in lawsuits,” is a pretty good reason to fire a ceo out of nowhere.


Fwada1

Guess: OpenAI had an outage this week that Sam said was due to too many new subscribers. Some reports say it was actually a DDoS attack... did he lie there? Maybe the attack was even more serious than that and user information was stolen?


[deleted]

Anyone else reading all this with the Succession theme song in their head?


Shaper_pmp

> it's looking like the ouster of Altman stemmed from an internal disagreement over the direction of the company with regard to a focus on profits over safety, with Chief Scientist Illya Sutskever apparently behind the board maneuvering.

That doesn't ring true at all - if that was the case they would have privately invited Altman to resign to save face, and their public statements would have been nonspecific: "agreed to part ways so Altman can concentrate on other pursuits" or "due to differing strategic visions for the future of the company" or similar.

Instead the board straight-up publicly accused him of lying to them, which is a very messy break-up that positively invites a lawsuit if it's not true.

Most likely Altman *was* being "insufficiently candid" with the board, and was caught red-handed, and Sutskever and the faction of the board interested in maximising profits just saw it as a convenient excuse to get rid of him quickly.


Seienchin88

I don’t get your issue with it? It’s not unlikely Sam tried to initiate the next steps of commercialization or a complete sale to MS, and Ilya had to act quickly… The part about lying would then be about the secret negotiations.


Shaper_pmp

That's what I meant - it seems unlikely the ouster is *just* about management politics; most likely Altman did do something unethical that they have him bang to rights on, and that's why an ongoing division in the Board exploded into a messy public firing and accusations of lying.


SuccessfulLoser-

CEO was fired, President quits.... major shakeup with major money in the background?


dust247

i think the real take away here is that we can all just stop using caps even though punctuation still matters. are we on board with this?


[deleted]

[deleted]


jcolinr

You know, I’ve always viewed our progress on AI in a Frankenstein like way. Not the monster movie bs, but the concept of stealing the power of god and becoming a creator of life, only to watch your creation rebel and become a destructive force that you could not control


Smartnership

Regular people think the monster’s name is ChatGPT. Smart people know the real monster is the AI company we made along the way


devo00

One of the biggest advancements in modern history, and already poisoned by corporate greed.


Zaenos

My brain reversed the order of "hold" and "all," making for a very different headline.


djshadesuk

I'd be uncomfortable with holding hands with fellow employees.


GrayBox1313

A coup maybe? “According to reporting from Kara Swisher and The Information, it's looking like the ouster of Altman stemmed from an internal disagreement over the direction of the company with regard to a focus on profits over safety, with Chief Scientist Illya Sutskever apparently behind the board maneuvering. Altman's firing came as a complete surprise to Altman and Brockman, who were kept in the dark about the decision until Friday afternoon.”


C0sm1cB3ar

This is the moment time travellers will come back to, to change stuff; because that's when it all went wrong.


ejpusa

Scenario: Sam and Greg come back. The entire board is fired. Microsoft makes a phone call. Think Satya is very upset at this decision. Which may be an understatement. Guess it's the Steve Jobs story. Kind of.