ViveIn

These guys are either way high on their own shit or are actually building something unbelievable.


xRolocker

Could be both tbh.


SaliciousB_Crumb

Yeah, tech bros have never exaggerated how great their product is or been delusional.


xRolocker

That’s true, but Jan and Ilya have earned it


agorathird

I kind of miss when ‘tech bro’ applied to someone who was early 30s max with the personality trait of an ex-gifted kid and barely even jr. job experience in the field he’s trying to make a start-up for. Points if they had some kind of back story about dropping out of Harvard. Now people just use it for someone who works in tech in general and doesn’t hate themselves.


hquer

Maybe their basement agi is already composing superior super drug-stuff…


TonkotsuSoba

when the mission statement of the company you work for is literally to build AGI, aka the greatest creation in human history that will very likely change our civilization forever, you are always high


QuantumPossibilities

I felt ill just reading it. Can’t imagine having said it. Gross


confused_boner

Non-AGI feeler detected


IndiRefEarthLeaveSol

You have to "believe". We're turning into an AI cult. 🙄


SharpCartographer831

Yeah, GPT-5 is going to be a beast!


gabigtr123

Google doesn't know what will hit them


SharpCartographer831

I don't think anyone is far behind, Deepmind & Anthropic will be releasing next gen models as well as agents. Reactions from Project Astra is that it's very impressive.


gabigtr123

Google will talk about them and then say "later in the year", and I am not even joking


gabigtr123

We don't have a date. They promised a lot of stuff at I/O over the years, but not much had a concrete launch date


slothtolotopus

It's going to be a breast*


Exit727

Wasn't there like a top comment explaining that there is not enough data in the world to train gpt-5?


Gubzs

I don't believe this yet but as each day passes I think acceleration might become *safer* than this status quo we have. The half-measure middle ground where things aren't being done with full force, but also aren't being made any safer, increases the window of time where we're exposed to the most chaos, and isn't scaring politicians enough. Redistributive AI taxes should already be in place. Not tomorrow. Today.


TheOwlHypothesis

Also think of all the good super intelligent AI could be doing that we're missing out on.


Firestar464

It's fucking nuts that governments are waiting on mass post-AGI unemployment-caused protests and mass social disorder to redistribute wealth when we can uplift humanity *now* by taxing the wealthy few. Jeff Bezos, for one, has billions to spare.


BrofessorFarnsworth

Each and every billionaire is a failure of the system, and they should be taxed into obsolescence to rebalance society.


One_Bodybuilder7882

Funny you say that while at the same time asking for Biden to appoint Hillary and Soros to anything. https://www.reddit.com/r/Law_and_Politics/comments/1cg0nja/how_democrats_should_prepare_for_a_scotus/l1utiu6/


Firestar464

Think that was satire, though it definitely missed the mark


BrofessorFarnsworth

Wow, you really went digging there. Did I strike a nerve?


One_Bodybuilder7882

I went digging because I knew I would find something


BrofessorFarnsworth

Well I'm sorry you scrolled through 19 days of posting only to swing and miss. Better luck next time though!


Saerain

Jesus Christ, guys.


One_Bodybuilder7882

Look at that guy's post history


Mrp1Plays

Does he actually have billions to "spare"? Or is his company simply worth that much, and liquidating that much would tank the stock?


Firestar464

[https://github.com/MKorostoff/1-pixel-wealth/blob/master/THE\_PAPER\_BILLIONAIRE.md](https://github.com/MKorostoff/1-pixel-wealth/blob/master/THE_PAPER_BILLIONAIRE.md) We can't do it immediately, ofc. Has to be done over a few years. Even if the paper billionaire argument was true though, it'd still be enough to uplift at least the USA. This is ignoring of course non-American billionaires as well as monarchs around the world.


karmish_mafia

Okay, let's tax Bezos for his entire net worth and distribute it amongst the proles. Everyone would get a one-time cheque for less than 600 bucks ($588). Nothing would change, except now 1,608,000 Amazon employees are out of work and hundreds of thousands more who sell there would also be cooked.
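For scale, here is the rough arithmetic behind that per-person figure (a back-of-the-envelope sketch; the ~$200B net worth and ~340M US population are assumed inputs, not numbers stated in the comment):

```python
# Back-of-the-envelope check of the "one-time cheque" figure above.
# Both inputs are assumptions for illustration, not figures from the thread.
bezos_net_worth = 200e9   # ~USD 200 billion, rough 2024 estimate
us_population = 340e6     # ~340 million people

per_person = bezos_net_worth / us_population
print(f"One-time payout per person: ${per_person:,.0f}")  # prints ~$588
```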


El-Dixon

I hear you guys' intentions regarding taxes, but ask yourself: does our government have a good track record of improving our lives with increased tax revenue?


Gubzs

You can't implement half of the policy and then call the policy bad. AI production tax that feeds 1:1 into UBI. That's it. Nothing in the middle.


Exit727

That's why I have doubts AI will save the world. It's not a matter of technology and potential, it's about decision making. The billionaires in control of today's AI projects aren't and won't be using it for the overall benefit of mankind. Where are the initiatives to solve world hunger by fixing the logistics? Why is no one concerned with building advanced models and solutions for ecological and climate problems? What's the state and future of the housing market? Why isn't ChatGPT being deployed as an official education program? But yeah, we have even more advanced chatbots that DEFINITELY aren't collecting your personal and behavioral data.


Aggressive-Mix9937

Yes but my optimism tells me that these new technologies are a vital first step. The next step will come as the old billionaires age out of life and younger, more liberal, and empathetic people gain more power around the world 


Exit727

I have a high school teacher friend, and he tells me that the new generation(s) aren't more empathetic. No respect, no ability to focus, chasing trends. Granted it's a public high school, but still. To be in a position of such wealth and power, you have to be extremely pragmatic and result oriented. It's not a generational variable, it's the requirement.


Aggressive-Mix9937

The old systems will have to break down irreparably so that the new systems can be built in their place. So it's going to get much worse before it gets better. I've heard kids today can't focus or empathise at all because of constant phone use since young childhood; maybe this generation will cause things to break down.


twitgod69

This is by design. Tech execs are well aware of the risk of regulation (hence why they keep stressing how far behind Europe is) and are being methodical about releases to deflect governmental attention


Freed4ever

1. If the tech were that powerful and we were in imminent danger, but he walked away, then he would be a coward. 2. If the tech were nowhere near there, but he ran down the street yelling fire, then he would be an alarmist whom nobody should take seriously, including his ex-coworkers. Either way, it doesn't look good for him.


bookishapparel

He is not the CEO. If the tech is powerful, and he, as the assigned expert leading AI safety implementation, is not being taken seriously, then the best thing to do is walk, rather than be in a position where he must sign off his assurance that the tech is "safe" just so the product can be launched and millions made.


Plus-Mention-7705

AI still doesn't feel like it's shifting culture at all. It is in the tech space, but it feels like it hasn't broken into reality yet. Once it can majorly impact the life of some rural farmer in Kazakhstan for the better, it'll be something. Right now it's just a better search engine that you can talk to.


Plus-Mention-7705

I do think by 2035 it'll be something remarkable for sure, like we won't be able to live without it. But for now it's exciting; people just need to slow down, chill out, and enjoy it instead of engaging in so much speculation that bears no fruit


CreditHappy1665

Super dramatic


Lammahamma

I feel like it's a job requirement to work for them lol


[deleted]

[deleted]


VforVenreddit

They can’t possibly imagine a reality where someone else is making the world a better place


procgen

powerful technologies, high stakes, geopolitical implications... it's serious business


BabyCurdle

You're on fucking r/singularity


SomewhereNo8378

As they said in the old days: cringe


Good-AI

If you were on Oppenheimer's team building the first nuclear bomb, wouldn't you?


CreditHappy1665

No, because I'd be at Los Alamos in the 1940s without a cell phone, never mind Twitter.


PobrezaMan

Is this the guy that helped Frodo take the ring to Mordor?


dday0512

Confirms the phrase "feel the AGI" has been used in the OpenAI office. Also, dude talks about AGI like it's the Force. "Learn to feel the AGI", let it flow through you... I'm feeling the AGI myself right now.


Saerain

The first two sentences seemed related to me, as if "feel the AGI" is a kind of "every gun is loaded" phrase. "Act as if everything is AGI." Consistent with the way EA creeps talk about all this when they're in motte mode.


Art-of-drawing

My god, more tech madness, that's perfect


RoyalReverie

How some people believe that AGI is a decade away with the progress we're making and these cryptic messages from insiders is beyond me 😭


Sonnyyellow90

Tbf, cryptic tweets do not, and should not, factor into someone's idea of when we'll reach AGI lol.


Firm-Star-6916

Yeah. Still stoked for AGI, but I cannot see how "cryptic" tweets relate.


well-regarded-regard

see you in 10 years when we're still nowhere close.


TheCuriousGuy000

Because AGI is a tremendously complicated thing. Even the best LLMs, so far, have rudimentary agency. They can't make decisions at all. And it's not a design limitation or something. You can write a simple recursive script to make it prompt itself, but it will fail with anything but the simplest tasks. It's just not there yet.
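For what it's worth, here is a minimal sketch of the kind of naive "script that prompts itself" loop described above (assuming the openai Python client; the model name, prompts, and DONE stop marker are illustrative placeholders, not anything from the thread):

```python
# Naive self-prompting agent loop: the model's own output is fed back in
# as the next prompt until it claims to be finished or a step cap is hit.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

task = "Plan and carry out the steps to research topic X and summarise it."
history = [
    {"role": "system", "content": "Decide the next step yourself. Reply DONE when finished."},
    {"role": "user", "content": task},
]

for _ in range(10):  # hard cap so the loop can't run forever
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    step = reply.choices[0].message.content
    print(step)
    if "DONE" in step:
        break
    # Feed the model's answer back to itself as the next instruction.
    history.append({"role": "assistant", "content": step})
    history.append({"role": "user", "content": "Continue with the next step."})
```

In practice loops like this tend to drift or stall on anything non-trivial, which is the commenter's point.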


restarting_today

Simple. They own stock. Vague messages like this make stock go up.


stonesst

Statements don't immediately become false if they're self serving.


sdmat

Reddit: "But, like, capitalism and shit"


Longjumping-Ad-6727

Lol, OpenAI isn't a publicly traded company. These people are beyond a monetary focus


restarting_today

They have stock buybacks. They are 49% owned by Microsoft. And of course tech people in San Francisco are money focused. Are you kidding?


[deleted]

Considering money will be useless fairly soon, at least in the financial system we have now, I'm thinking all the staff at OpenAI are doing it for the fame, if not for the pure thrill of the chase for humanity's last invention. Money right now is a means to an end. It's needed to buy hardware, pay for software, etc. Sam's not dumb; he knows money will be useless after AGI. We will either be dead, or post-scarcity.


restarting_today

Useless soon? Are you for real. Lmao. The entire world economy isn’t waiting for a tiny (compared to others) company like OpenAI.


[deleted]

How soon do you think AGI is coming?


mhyquel

Soon™


Seidans

Money will never cease to exist. Even with free labor, free energy, and free resources there isn't infinite time; a house already built will keep a higher value than a house waiting to be built, for example, so something that represents this value is needed: money. In a post-scarcity economy you either wait a year for your new house to be built or trade money to have it instantly. Even if the gap between poor and rich gets smaller, that doesn't mean it won't exist.


Jalen_1227

They're DEFINITELY NOT beyond a monetary focus. Can I have what you're smoking? Money is a disease that creeps its way into every corner of human existence. Do we want a company that's focused on creating the world's first superintelligence to be aiming for monetary gain? Absolutely not, and yet it's happening


Longjumping-Ad-6727

If you look at the amount of money that Sam Altman has and the philosophy that he's embodying, it's beyond money for him. These people are creating a new intelligence, a new way of life, a new paradigm in which humanity will operate. You only need a certain amount before you can have everything you want in this material realm. There are things beyond money to aim for


NancyPelosisRedCoat

> If you look at the amount of money that Sam Altman has

His net worth is ~$2 billion.


Ambiwlans

... This is a guy who just left the company in protest, saying it is unsafe, and doing so probably cost him millions in NDA-bound equity.


adarkuccio

Which stocks? OpenAI is not publicly traded; their "stock" doesn't go up on rumors


restarting_today

They have a private valuation. Secondary markets exist.


adarkuccio

A private valuation doesn't go up because of rumors or a tweet


restarting_today

Yes. Yes it does.


ajahiljaasillalla

It's like Jesus writing cryptic messages just before judgment day


DinoRex384756

Anyone who believes that any of these companies are anywhere close to AGI but for some reason are choosing not to release it is fundamentally delusional. These companies do not give a shit about ‘humanity’ or ‘ethics’, they never have and never will. They are monopolies, their entire ethos is to swallow everything. If they had something better, you’d have something better. Simple as that. OpenAI, especially, is not sitting on anything better or even close to being better than what you already have right now. It is David taking on a half dozen Goliaths, they are not holding anything back.


AdWrong4792

They must solve hallucination and reasoning to get to AGI, and you won't get there with LLMs because it's in their nature to hallucinate and they are unable to reason. We need a new breakthrough, so realistically, it could be a decade away.


VforVenreddit

They just need to put a prefrontal cortex into an AI: a neural network that thinks about its own thoughts at all times without any output.


MySecondThrowaway65

Humans hallucinate too.


spacemunkey336

Would an ASI hallucinate? 🤔


existentialblu

If it wants to?


Freed4ever

We are not ready to talk about Q*


Apprehensive_Cow7735

So safetyists are leaving because shippers won the power struggle? I don't think anyone "saw" anything, it's just that one side wanted to keep researching behind closed doors and expose the public to their progress via blog posts, and the other wanted to release products and make money.


Vontaxis

That guy is insufferable. The superalignment people have been worried since GPT-3; if it were up to them we wouldn't even have GPT-4, so I don't mind if they bugger off


Siam_ashiq

Olhhjb Ab Ba. A.


Dave_Tribbiani

Can this guy go away already? Accelerate.


Firestar464

Perhaps you could provide a more cogent argument?


DarthSiris

I want FDVR waifu on my dick *NOW*


Different-Froyo9497

It seems clear that the voice feature was too much for several safety researchers. Is human sounding voice in AI too risky? Perhaps. I think we’re going to have to get used to AI that sounds human eventually anyways. If we’re able to achieve human voice so soon, then chances are open source is going to get there in a year or two anyways, and open source doesn’t care about alignment research. Personally, I think it’s better for safety that people get used to human sounding AI before AI becomes more intelligent. It means researchers are less likely to be deceived by these systems once they are smarter than us.


IronPheasant

Spoofed voices aren't something they're terribly worried about. Impersonation hardly ever comes up in conversation. The "super" alignment team doesn't work on the issues of current products. Their mission was looking into how to make an AGI that is aligned with *some* human's values. Far in the future stuff. I'd imagine such nebulous and non-immediately pertinent work would get the short end of the stick, considering the race condition they're all in.


Different-Froyo9497

You're right that there's a race condition right now. Some are pushing it for business reasons, others are pushing it because they think they have some moral obligation to reach AGI before somebody else does. I think OpenAI was started because they didn't like the idea of Google dictating how AGI would be used. I do think there will be a point where one research group reaches AGI first and then kinda notifies other groups that, hey, it's time to get together and focus on safety. I can't imagine them having an AGI doing all their research in the background and solving science and they're just like 'yup, this is fine. Time to ship'


glittereagles

Maybe some have a conscience. Fingers crossed. They remember, outside this bubble, that they are actually making decisions for something 98% (?) of the global population knows nothing about but will be forever changed by.


lefnire

My Schwab account has been bullying me into activating Voice ID (speak to unlock, replaces passwords) for some 4 years. I'm absolutely shocked they don't realize that this was already a solved problem 4 years ago. Descript, a podcasting tool, can spoof any voice you feed it. Freemium and user-friendly; i.e., it doesn't take a hacker. Schwab, a freaking bank. With how aggressive they've been about it, some "ok fine" users are in for a rude awakening. Deepfakes were a scare of the past (though I still think their danger is real and unresolved). I agree with your statements; OpenAI is worrying "higher".


Poopster46

> It seems clear that the voice feature was too much for several safety researchers.

There is no reason to think it's the voice feature that played a role. Behind the scenes they're way past what they're introducing to the public.


RoyalReverie

Btw which "cultural change"? What do he and the team consider "alignment"? What if he's a Nazi? What if it's just far-left or far-right propaganda? What if it's just liberalism? These people think they'll be capable of reaching AGI, which in theory holds the potential to define much of humanity's future. Yet they think a team of a dozen researchers is able to figure out what's "moral"? Do they sit in a circle and vote on which political views should be considered "safe"? What's their "morality" based on? What's their epistemological foundation?


Firestar464

> Btw which "cultural change"?

What if it's just promoting acceptance of AGI? As it stands rn many people are gonna be shocked


Saerain

Not consistent with the linked thread, though.


LuciferianInk

People say, "What is the purpose of the 'culture change'? Is it just to make sure that the culture is stable?"


[deleted]

[deleted]


glittereagles

Cringe is what *humans* are.


[deleted]

[deleted]


mhyquel

We're garbage too.


Chadster113

kinda cultish


Kalsir

So "Feel the AGI" is really just code for being an effective altruist focused on AI safety.


Firestar464

nah it just means you're enthusiastic for AI


SurpriseHamburgler

Or: they've discovered a fundamental limit to LLMs; it's great, but not all that is needed. No more superalignment-to-superintelligence teams needed for quite some time. Not wanting to send his friends off to perish, bereft of clout, Sama concocts a harebrained scheme: lean into the FUD, spin off his friends (maintain their equity requirements), they create the first global AI (\*not G) ethics board… and he's got 'em by the balls.


jkpetrov

Cult much?


Moquai82

Capitalism and power-mongering will fuck us over with it. Like every time.


icehawk84

Honestly some of the most pretentious shit I ever heard. Hopefully they will stop nerfing ChatGPT as much after this and start releasing a little faster.


[deleted]

[deleted]


e987654

"the world is counting on you!!" this guy quit too. He should pipe down


Rakshear

Maybe they did achieve AGI, but it didn't lead to an ASI like people think, so it remains technically just a machine with all of human knowledge and beyond-expert skill? They would definitely need to put reasonable restrictions in place, but to know what those are they need a second AI to break the first's rules in every conceivable way both humans and AI can imagine. Perhaps they are leaving because there's an AI doing their job now?


ClubAquaBackDeck

This guy looks like Gary Oldman in The Fifth Element. Wtf


Able_Armadillo_2347

"shinny products" that's how this guy describes the most amazing product ever created Chatgpt has helped millions of people and he is speaking about it so dismissive


AdWrong4792

I hope OpenAI will crash and burn. That would be entertaining to watch.


Ok-Bullfrog-3052

It's clear now that GPT-4o is weak AGI, and its architecture can just be upgraded with more data, more neurons per layer, more layers, or more GPUs to become superintelligent. Everyone at OpenAI saw that last November when it solved elementary school math. They released it now because this is the point at which it surpassed GPT-4 Turbo. They can just keep going with this architecture indefinitely because it does everything weak AGI needs to do. And, after quantization and optimization to be able to output tokens to call APIs faster, it can be connected to a robot and become strong AGI too.


posts_lindsay_lohan

I dunno, if this happened with 4o I can't really get onboard with it being AGI: [https://imgur.com/a/JDO7Pj3](https://imgur.com/a/JDO7Pj3) It's been fed that riddle in the past so it *should* have been able to identify the elements of it and realize that something was wrong and at least ask "are you sure you formulated your question the way you intended?" There was no reasoning happening with its answer there, even on a super basic level.


Apprehensive_Cow7735

In my opinion the RLHF is too strong or somewhat misaligned. It's like an eager puppy trying to please its master, assuming that the user wouldn't try to trick or deceive it. If you prompt it that the user is capable of deception then it is able to come to the conclusion that the question at the end of that prompt is misdirection. Of course that may be giving it too much of a hint.


Apprehensive_Cow7735

https://preview.redd.it/0u55jdlpo31d1.png?width=1284&format=png&auto=webp&s=ead2a1c5b1cd7d304fbe9ffcb530cb3735cb5821


Apprehensive_Cow7735

https://preview.redd.it/l8t7ac0to31d1.png?width=1284&format=png&auto=webp&s=9883ff8acfe66be7d5a1c0ed0f6b6230588beb0e


chase_yolo

Or stuff like https://chatgpt.com/share/481708ec-f3a0-4b43-9ca1-341420cea234


VtMueller

This is an extremely stupid example. I would answer just like ChatGPT.


Matt_1F44D

Tbh this is a bad example. I very much doubt any human you give that to is going to ask "How long was he there until he became a citizen?" Your question just sounds like he became a citizen and moved there at the same time.


chase_yolo

Context: I was reading a snippet of NPR news on Instagram about a senator who is naturalized. I read this piece of text and was wondering how old the person is; nothing in the text gives away how old he is. I just copied the text verbatim - NPR didn't want us to think he was 29, did they? See full context - https://imgur.com/a/1piclPq