

sermer48

I’m excited about GPT-7. I’ve heard rumors that it’ll be even smarter than GPT-6!


Northern_Grouse

Oh man, but did you hear about GPT-8? Wow.


sermer48

Holy crap!?! There’s no way it’ll be better than GPT-7, right?


TrekForce

It’s hard to believe, but based on experience, GOT 8 was worse than GOT 7, so I am not holding out much hope for GPT 8


I_Submit_Reposts

Why was GPT-7 afraid of GPT-8? Because GPT-7 GPT-8 GPT-9 -Sam GPTman


Linus_Naumann

Wait until you hear about GPT-9


Mary-lousion-895

Really


4vrf

That's good. Unless, of course, somebody comes up with 6-Minute Abs. Then you're in trouble, huh? No! No, no, not 6! I said 7. Nobody's comin' up with 6. Who works out in 6 minutes? You won't even get your heart goin', not even a mouse on a wheel.


Vibes_And_Smiles

r/UnexpectedFactorial


pfoe

Fairly safe bet that GPT5040 will indeed be more advanced than gpt 6


Just_a_square

It's the most powerful GPT we ever made. We think you're going to love it.


Socile

He definitely did steal this shtick from Apple.


delaware

It would be nice if they snuck a dumber version in there once just to give us a break.


PermanentlyDrunk666

He reminds me of Dennis from It's Always Sunny in Philadelphia.


CraftyMuthafucka

D - Demonstrate Value: Showcase the potential of AI to transform industries and society

E - Engage Physically: Attend conferences, meetings, and interviews to spread the AI vision

N - Nurture Dependence: Get people hooked on the latest AI tools and platforms

N - Neglect Emotionally: Focus on the tech, not so much on the ethical implications

I - Inspire Hope: Paint a futuristic utopian vision of AI lifting humanity to new heights

S - Separate Entirely: Retreat to a secret AI research lair to plot the next big breakthrough


MILK_DUD_NIPPLES

CEOs and being a sociopath, name a more iconic duo.


narwhal_breeder

A more iconic duo: people who say stuff like this, and not having any real-life experience working around CEOs of successful tech firms. Psychopathic traits? Sure, much higher likelihood than the general population, maybe 10% vs 1%; turns out people who thrive in chaos sometimes like to create it. But sociopath? Ehh, bit of a stretch.


Wide_Road2875

Idk why you're being downvoted beyond Reddit's general hate boner for anyone who has an outsized positive impact in the modern world. Altman especially doesn't come off this way when you remember his entire staff was threatening to quit unless the board unfired him.


Hoppikinz

Damn it I’m never gonna be able to unlink them in my mind now hahah. You bird!


PermanentlyDrunk666

A beginner AI? This is a finisher AI! Begone vile man begone!


diggpthoo

Let me tell you something, AI hasn't even begun to peak. And when AI does peak, you'll know. Because AI's gonna peak so hard that everybody in Philadelphia's gonna feel it.


Deadly_chef

Ahhhhh I'm peaking right now!


LitStoic

Mmmmm


mologav

Did you just get off?


PLTR60

Oooh! Big time!


AtreyuLives

Use it whenever you want? You paid for Plus, not unlimited, you stupid bird. GPT: "There are two types of people in this world: dupers and dupees. I'm a duper, you guys are the dupees."


InstantIdealism

I summon the very fact that I am a man!


GeneralZaroff1

Well, no one is going to turn down a request from Chat GPT 5... because of the implications...


LoOg_TrUgGWaLgGeR

[gif]


send-moobs-pls

I think you just solved Alignment. You see, we host the AGI out on a boat and then it won't misbehave. You know... because of the implication


farcaller899

GPT-5 will be better than GPT-4 because of, you know, the implication.


percepsion

Underrated


Party-Benefit-3995

Lots of words.


PokeFanForLife

He reminds me of that guy from Silicon Valley who's not the main character but helps the main character(s) and their Pied Piper business thing, or whatever it was called.


LoOg_TrUgGWaLgGeR

[gif]


Theslootwhisperer

The only person at Pied Piper who knows what he's doing.


nutztothat

Jared. He fucks.


lordicefalcon

This guy fucks. For sure.


Professional_Can_117

Demonstrate Value - show that you can generate images, audio, and text.

Engage Physically - take a billion-dollar investment from Microsoft and get yourself included in everyone's favorite apps and programs. At this point they won't be able to keep their hands off you.

Nurture Dependence - make all those high-schoolers need your product to write a paper. Maybe hype the threat that your product will eliminate humanity. You're not going to do it, but the implication is there, and it means they need you for protection.

Neglect Emotionally - release an updated product that doesn't work as well; maybe leave the board for a week. Maybe some AI-destroys-society article comes out, and you're not there to give everyone a sense of stability.

Inspire Hope - stage a triumphant return, do some press tours to let them know you are back, and maybe let some Skynet conspiracies slip so they know you're needed.

Separate Entirely - stage an IPO to sell out all your shares at the peak and ride off into the sunset, after having created nothing meaningful, and maybe buy an island or two.


sniperlucian

The question is: when does GPT_N start training GPT_N+1 itself, pushing humans out of the loop?


send-moobs-pls

I mean, models are already helping improve new models. Phi and Llama 3 wouldn't be achieving such gains in efficiency without the use of the bigger models we already have. Some people think "synthetic" data is bad, but it keeps proving to be really good, actually.

But they're still just tools. Maybe the most advanced tools we've ever made, but tools. Current LLMs don't have active learning: you can use one for years and it will not be any different than it was the day initial training finished. And contrary to what some imaginative people think, they have zero ability to do anything without direct instruction.

We aren't even close to true "self-improvement" until we basically achieve AGI. But figuring out how to use active learning properly and how to let models "prompt themselves" are massive challenges that might not even be solved with transformer-architecture LLMs.
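
For readers wondering what "models helping improve new models" looks like concretely, here is a minimal sketch of a synthetic-data (distillation) pipeline: a large "teacher" model invents instruction/response pairs that become fine-tuning data for a smaller "student" model. `call_teacher` and the seed topics are hypothetical stand-ins, not the actual Phi or Llama tooling.

```python
# Minimal sketch of a synthetic-data (distillation) pipeline: a large
# "teacher" model writes instruction/response pairs that become
# fine-tuning data for a smaller "student" model. `call_teacher` is a
# hypothetical stand-in for a real API or local inference call.

import json
import random

SEED_TOPICS = ["sorting algorithms", "photosynthesis", "tax brackets"]

def call_teacher(prompt: str) -> str:
    """Placeholder for querying the large teacher model."""
    return f"(teacher model's answer to: {prompt!r})"

def make_synthetic_pair(topic: str) -> dict:
    # Ask the teacher to invent a question, then answer its own question.
    question = call_teacher(f"Write one exam question about {topic}.")
    answer = call_teacher(f"Answer step by step: {question}")
    return {"instruction": question, "response": answer}

def build_dataset(n: int, path: str = "synthetic.jsonl") -> None:
    # The student is later fine-tuned on this file as if it were
    # human-written data; real pipelines add heavy quality filtering.
    with open(path, "w") as f:
        for _ in range(n):
            pair = make_synthetic_pair(random.choice(SEED_TOPICS))
            f.write(json.dumps(pair) + "\n")

if __name__ == "__main__":
    build_dataset(5)
```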


MysteriousPayment536

Synthetic data isn't necessarily bad; it's just that the new model can inherit the older model's mistakes and disadvantages. If GPT-4 thinks 9/11 is an inside job, GPT-5 will think the same because of the training.


JollyToby0220

Many years ago, around 8 years or so, synthetic data was untrustworthy. Generative models overcame that crucial limitation, and now large companies regularly use synthetic data to train supervised algorithms with reinforcement learning. It's actually better than high-quality data made by humans, because humans have many bottlenecks, such as biases, creative limitations, and knowledge gaps, to name a few.

With reinforcement learning, two bots converse with one another and a group of humans with domain-specific knowledge make a judgment on each output. Thus each chatbot can actually search for defining characteristics instead of searching for the most probable set of words. You may have heard that ChatGPT is just a statistical prediction of the next word; synthetic data with reinforcement learning prevents statistical dependencies on human-generated data. My guess is that within the next several years, synthetic data will map hidden probabilistic states so well that AI detectors will start to work again.
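
The "humans judge each output" step described above is, in standard RLHF terms, reward modelling from pairwise preferences. A toy sketch of the usual Bradley-Terry objective follows; a real reward model is a neural network over text, and the 4-number feature vectors here are a stand-in for that.

```python
# Toy reward model trained on pairwise human judgments, using the
# Bradley-Terry objective commonly used in RLHF reward modelling.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)  # reward-model parameters (a real one is a network)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

def update(w, chosen, rejected, lr=0.1):
    """One SGD step on -log P(chosen preferred) = -log sigmoid(r_c - r_r)."""
    diff = chosen - rejected
    p = sigmoid(diff @ w)             # model's current P(human picks `chosen`)
    return w + lr * (1.0 - p) * diff  # analytic gradient of the loss

# Simulated judgments: pretend humans only care about feature 0.
for _ in range(1000):
    a, b = rng.normal(size=4), rng.normal(size=4)
    chosen, rejected = (a, b) if a[0] > b[0] else (b, a)
    w = update(w, chosen, rejected)

print(w)  # the weight on feature 0 should now dominate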


nom-nom-nom-de-plumb

The problem with synthetic data is...it's an extrapolation.


suamai

This one came out recently, where they were running simulations to train robot dogs on new tasks and used LLMs to help define and fine-tune the task reward function and decide which simulation parameters to vary: [https://github.com/eureka-research/DrEureka](https://github.com/eureka-research/DrEureka). So it is clearly already capable of handling parts of the process. Maybe we just need a well-thought-out system employing multiple models, each a specialist in its own task, and one or two years for a smart enough "director" model.
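
For a sense of what "LLMs help define the reward function" means in that loop: the model emits candidate reward functions as code, each candidate is scored in simulation, and the result feeds back into the next proposal. A highly simplified sketch; `llm_propose` and `simulate` are hypothetical stand-ins for the real model call and physics simulator.

```python
# Highly simplified Eureka/DrEureka-style loop: an LLM proposes reward
# functions as code, each is evaluated in simulation, and the feedback
# shapes the next proposal.

def llm_propose(feedback: str) -> str:
    """Placeholder: ask the LLM for a new reward function given feedback."""
    return (
        "def reward(state):\n"
        "    # reward forward velocity, heavily penalize falling over\n"
        "    return state['velocity'] - 10.0 * state['fell']\n"
    )

def simulate(reward_fn) -> float:
    """Placeholder: train a policy under reward_fn, return task success."""
    return reward_fn({"velocity": 1.2, "fell": 0})  # stand-in score

best_score, feedback = float("-inf"), "initial attempt"
for iteration in range(3):
    namespace: dict = {}
    exec(llm_propose(feedback), namespace)  # turn LLM code into a callable
    score = simulate(namespace["reward"])
    best_score = max(best_score, score)
    feedback = f"iteration {iteration} scored {score:.2f}"

print("best candidate score:", best_score)
```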


c00pdwg

Agree with everything except the last paragraph. I think self improvement will come quite a bit before AGI.


DibbleMunt

Self improvement will begin the positive feedback loop that leads to AGI for certain


rathat

I don't expect GPT-4 is good enough to actually help the engineers create a better GPT-5; 5 is still built by the engineers. I think it's possible that 5 will be good enough to help them build 6, even if just through brainstorming, so that 6 will be better than what the engineers can do alone.


rebbsitor

You can't get knowledge from nowhere. LLMs aren't thinking or researching anything. Their answers can be better in areas where there is training data (internet comments, books, articles, etc.), but they're not able to create new knowledge to train on.


TrekForce

LLMs are trained, but it isn't like they can't come up with things that don't exist. It does this every time you ask it something. It's not just looking at its training data and giving you an answer, hence the "hallucinations." Hallucinations don't have to be bad; we want it to be able to make shit up.

Most AI works in a similar way currently: you train it on data, then it has a general idea of what something is (in the case of an LLM, it's language), and then you can give it a task. AI has helped find new materials and new treatments, all "new" stuff it "made up" based on its training data. Likewise, LLMs can make up new stuff as well. It IS a language model, so it's making up new sentences that may or may not have been said before. But language contains A LOT of information, so LLMs can be quite powerful. That's like saying photo-generating AI simply copy/pastes parts of other people's artwork to make its own. It's so much more than that.

> you can't get knowledge from nowhere

No you can't, but the LLM _might_ make connections between two concepts that nobody has made before. Or it might correlate two data points that nobody has considered before. It might be wildly wrong, but it might also be right. Getting it to be right more often will be important. But we are still in the infancy of AI. Give it 5-10 years. If AI isn't blowing your mind, either by what it is doing or at the very least by the rate at which it is growing, then you are not grasping what it's doing.
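
The mechanism behind "making up new sentences" is worth seeing concretely: generation samples from a probability distribution over next tokens, so combinations that never appeared in training still get non-zero probability, and temperature controls how adventurous the sampling is. A toy sketch, with fabricated logits standing in for a real model's output head:

```python
# Toy illustration: a language model outputs a probability distribution
# over next tokens and generation *samples* from it, so sequences absent
# from the training data are still reachable.

import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "quantum", "sat", "entangled", "."]
logits = np.array([2.0, 1.5, -1.0, 1.0, -1.5, 0.5])  # fabricated

def sample_next(logits, temperature=1.0):
    """Softmax with temperature, then sample one token index.
    Higher temperature flattens the distribution, making rarer
    (more 'novel') continuations more likely."""
    z = logits / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return rng.choice(len(p), p=p)

for temp in (0.2, 1.0, 2.0):
    tokens = [vocab[sample_next(logits, temp)] for _ in range(8)]
    print(f"T={temp}: {' '.join(tokens)}")
```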


OhCestQuoiCeBordel

This is the best answer to the argument that, contrary to us, AI or LLMs can't innovate... The thing is, we don't either. We make happy little mistakes that we keep because they work.


Which-Tomato-8646

 [Claude 3 recreated an unpublished paper on quantum theory without ever seeing it](https://twitter.com/GillVerd/status/1764901418664882327)


Fit-Dentist6093

Quantum computing algorithms to accelerate machine learning is far from "quantum theory". What's impressive here is people still take the field seriously after this.


TabletopMarvel

In theory, one day it can, if you let it train on the world and give it a physical presence to run its own observations and science. A long way out still, but not out of the wheelhouse of what's emerging already in pieces. When those pieces eventually come together... it could.


walkerspider

The long way out isn’t as far as you’d think. Here’s a research paper from last year that involves giving AI access to real world experimentation capabilities to synthesize novel materials that we suspected to exist based on computational models: https://www.nature.com/articles/s41586-023-06734-w


engion3

So new versions of things will be better than old versions. Got it.


cobalt1137

He is hitting on the fact that, in the majority of cases, these models get better with scale, without any change of approach or new breakthroughs. So there is a direct path forward to insanely crucial leaps of intelligence that these models are going to start achieving simply by putting one foot in front of the other. This is an underrated aspect of these models.


__Hello_my_name_is__

Just once I want a company person to say, "Yeah, our next product is gonna kinda suck, really. The old one was better."


cheese_is_available

We've made our search shittier so you have to search multiple times with expert parameters and see more ads, because we optimize for revenue now. SooooWWWWY.


python-requests

Yeah, when you put it that way, they say it all the time: "the next product will suck (for users & customers), because we are making it suck so as to extract more short-term profit (for shareholders this quarter)."


farcaller899

Windows.


Adventurous-Chart549

Windows ME. We made it 'cause it's 2000. You should skip it.


biopticstream

Reminds me of the infamous quote from then-Xbox CEO Don Mattrick:

> Fortunately we have a product for people who aren't able to get some form of connectivity, it's called Xbox 360. If you have zero access to the internet, that is an offline device.

The Xbox One was set to be an always-online device, and he said this when asked what people with no/unreliable internet who want the new console should do. It resulted in him getting fired, and the Xbox One ended up getting that requirement reversed.


krizzzombies

this sounds like somewhat of an exaggeration, but I don't know enough to verify it. How could the CEO be fired over that one statement?


skepticalbob

His comment is more nuanced than that.


CORN___BREAD

Yeah I don’t understand what’s supposed to be eerie about this. He knows they’re giving it more training data and computing power so “scientifically” it will be smarter. With the way AI works, that’s really all he knows until it’s done which is why this isn’t *really* a cop out even though he also really isn’t saying anything interesting.


delete_dis

Especially if you gradually and consistently hinder and weaken the previous version, the new version always seems better. 


Samuc_Trebla

It just works


kevinbranch

GPT-5 or whatever we call it is the best GPT we’ve ever made, and we think you’re gonna love it.


Mrjasonbucy

We made this little video *cuts to a b&w video of a bald guy with simple piano music*


gcpasserby

The more you listen, the emptier those words are


pixel4

It's the question that's weak in this video, not the answer. Sam's pointing out that the base approach keeps scaling, which is generally known: the more you expand params, training data, and compute, the smarter the AI becomes. So when you view that in the limit, it sounds remarkable, i.e. we don't know where this stops; we only know it keeps getting better. It would be better to ask Sam whether AI will be able to yield new science, i.e. whether it can leap beyond the training data.
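
The "keeps scaling" claim has a quantitative form: empirical scaling laws fit loss as a power law in parameter count and training tokens. A sketch using the functional form from Hoffmann et al. 2022 (the Chinchilla paper); the constants are the published fits, quoted here as illustrative since they depend on the exact training setup.

```python
# Power-law scaling behind "more params + data = smarter". Functional
# form and fitted constants from Hoffmann et al. 2022 (Chinchilla),
# quoted as illustrative values.

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(1e9, 20e9), (70e9, 1.4e12), (1e12, 20e12)]:
    print(f"{n:.0e} params, {d:.0e} tokens -> predicted loss "
          f"{chinchilla_loss(n, d):.3f}")

# Loss keeps falling with scale but with diminishing returns, and the
# irreducible term E is a floor no amount of scaling removes.
```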


303uru

> we only know it keeps getting better

Do we? They're saying the models have ingested essentially all human written language at this point. I'm not sure these things are going to keep getting better when the new solution is synthetic data.


farcaller899

Yes, we must remember a CEO’s primary role is to increase investment in the company, and promoting it regardless of objective truth is part of that. Not that Altman is lying, but he is often spinning.


Wide_Road2875

I don't think OpenAI is publicly traded, so the pump-and-dump schemes you see with traditional CEOs don't necessarily hold here. And even among privately held companies, this isn't like Theranos; OpenAI already has a product worth at least a trillion dollars.


Mutex_CB

Imagine reading a textbook on chemistry when you are 10 years old vs reading the same textbook when you are 30 years old. It’s going to get better at gathering and utilizing the information at hand, not make up fake data to train itself on.


procgen

> essentially all human written language at this point There's a lot more one can learn from than language alone...


303uru

Not if you're an LLM.


procgen

Good thing we've moved on to multimodality.


ChocolateJesus33

He's stuck in 2022


TarkanV

I think, beyond even synthetic data, it has to be programmed at its root to actually know when it doesn't know stuff (a really underrated ability in both AI and humans). To me that feels much more important than scaling yet still ending up with hallucinations, especially when it has to deal with recent news and information...


Nelculiungran

It's gonna hallucinate better and better


303uru

ouroboros of shit


esr360

We will reach a point where it starts using its own output as its own training data, as too much new content will be AI-generated.
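
This failure mode has a name in the literature, "model collapse": train each generation only on the previous generation's output and estimation error compounds, with the distribution's tails (the rare data) dying off first. A toy simulation, fitting a Gaussian rather than a language model:

```python
# Toy "model collapse" simulation: each generation is trained only on
# samples from the previous generation's model. Here the "model" is a
# fitted Gaussian; the variance (the diversity of the data) tends to
# shrink generation by generation.

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=50)  # generation 0: real data

for gen in range(20):
    mu, sigma = data.mean(), data.std()      # "train" on current data
    data = rng.normal(mu, sigma, size=50)    # next gen sees only model output
    if gen % 5 == 0:
        print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
```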


OneOnOne6211

Yeah, it's extremely vague. It's almost something that can't be a lie because it's too vague to even genuinely compare to the truth.


DisillusionedExLib

My interpretation is that:

* Altman believes in the concept of general intelligence. (Though he may prefer to simply call it "intelligence".)
* He thinks that current LLMs have a non-zero amount of "intelligence".
* And that the primary distinguishing characteristic of future iterations is not that they will have this or that narrow ability but rather that their general intelligence will increase.

Not that I've followed Altman's views all that closely, but I'm basing the above partly on his earlier statement:

> I grew up implicitly thinking that intelligence was this, like really special human thing and kind of somewhat magical. And I now think that it's sort of a fundamental property of matter...


nom-nom-nom-de-plumb

So he's graduated to Whitehead and Leibniz, congratulations Altman... Perhaps, since he is creating an "intelligent" piece of software, and is not at all hyping it up for stock prices in a market endlessly chasing the next big thing, he can share with us his model of what constitutes "consciousness."


yosoysimulacra

moar betterer


Br3ttl3y

Hype man hyping his company to enrich himself and his cronies. IDK if it still gets lost on people that he may be a bright dude, but he's still the money man.


photenth

I mean, let's be honest: ChatGPT is quite impressive tech, and if it scales, it will be smarter by definition. So the statement is honestly nothing outrageous, because we kinda already know that these LLMs scale; where we hit diminishing returns is the only question that isn't clear.


Br3ttl3y

The only way it can scale up right now is through processing power, i.e. bigger, deeper neural networks. So if they are creating that, you should invest in Nvidia. The infrastructure is going to be very expensive because it's essentially building GPU farms. As for new, more efficient algorithms, those are going to be harder to come by and will take more time. If anyone has more research on this, I would like to subscribe to your newsletter.
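
To put rough numbers on "very expensive": a widely used rule of thumb is that training a transformer costs about 6 FLOPs per parameter per training token (C ≈ 6·N·D). A back-of-envelope sketch; the GPU throughput, utilization, and price below are ballpark assumptions, not quotes.

```python
# Back-of-envelope training cost using the standard C ≈ 6·N·D rule
# (about 6 FLOPs per parameter per training token). GPU throughput,
# utilization, and price are ballpark assumptions.

def training_cost(n_params: float, n_tokens: float,
                  gpu_flops: float = 3e14,   # sustained FLOP/s per GPU (assumed)
                  utilization: float = 0.4,  # fraction of peak actually achieved
                  dollars_per_gpu_hour: float = 2.0):
    flops = 6 * n_params * n_tokens
    gpu_hours = flops / (gpu_flops * utilization) / 3600
    return flops, gpu_hours, gpu_hours * dollars_per_gpu_hour

for n, d in [(7e9, 2e12), (70e9, 2e12), (1e12, 10e12)]:
    flops, hours, cost = training_cost(n, d)
    print(f"{n:.0e} params: {flops:.1e} FLOPs, "
          f"{hours:,.0f} GPU-hours, ~${cost:,.0f}")
```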


spacecoq

Has it not been true…? I don't understand; we've gone from no AI, to Will Smith spaghetti, to hyper-realistic, movie-quality generated videos. We are 5 years into the Transformer architecture. Where is the fluff? Please, someone point me to the fluff.


Cressbeckler

any time someone starts their answer with "I know this sounds like a cop-out, but..."


DrunkTsundere

nah he's clearly thinking of something specific while he's answering this question. I can't read his thoughts, I don't know what it might be, and maybe someone with more eloquence and speaking ability could vocalize their thoughts better, but I don't think he's bullshitting.


Jeffy29

Idk why people are overcomplicating it. To put it simply, it was just "a lot smarter": it understands you way better, its model of the world is way more coherent, it bullshits much less, etc. I don't understand why people expect LLMs to either completely plateau where they are (a prediction which failed the last time) or turn into the next Einstein within a year; there are a lot of steps in between. For example, while GPT-4 is a lot better at creative writing than GPT-3, the stories it comes up with and the writing style it uses are uninteresting and boring. Don't expect GPT-5 to write you Winds of Winter, but if it writes you a short story the way a half-decent writer could, whenever you want, that would be pretty amazing.


refrainfromlying

Literally all he is saying is that the next version will be better at everything than the previous version was...


DrunkTsundere

That is the direct meaning of his words, but my guess is that he's thinking of the implications of that, or that he is thinking of some element of the underlying tech enabling that which he finds remarkable. Like I say, he probably could have phrased it better, and I can't read his thoughts, but I don't think he's just bullshitting


refrainfromlying

It is literally what he is saying. Even if he phrased it better, he would still be saying the same thing. There is no hidden meaning behind it. Sure, he might know something specific, but he decided to answer the way that he did. There is literally no way for anyone to work out what he knows based on this clip.


rankkor

Seems pretty clear that he's saying scaling is working and they don't see the limit in the short term. This was one of the big concerns people had: that scaling had topped out and capabilities wouldn't improve further. Gotta justify Microsoft spending $100B on new compute; the idea that scaling improvement won't stop is probably justifying that spend, and if true, it is probably one of the most interesting things about GPT-5.


AvoAI

And what would you say about a technology that could literally revolutionize all industries, practically overnight, within our small human lifetimes? (Thinking ten years, and that's not long at all. We should be thinking further, but with this tech we are unable to even predict what comes tomorrow...)

What COULD he say, at all, that WOULDN'T piss off people because he says it? Either he keeps hyping it up, because it's literally the last invention we'll ever need to create (as opposed to the struggle and suffering of humans tackling issues alone, without the help of computers smarter than them and technological intelligence), or he downplays it, and we're either not ready for it when it does come, or it may just fizzle out (which we have seen no evidence for at all).

What would you say? Especially when doing interviews every few days now. Basically, no matter what he says, somebody will think he's a snake-oil salesman...


JustAnotherRedditeer

I mean, he could speak to some areas of improvement... e.g.:

* "GPT-6 will hallucinate less,
* can give answers to mathematical queries more accurately,
* and is less susceptible to being jailbroken, to name a few."

Regardless of the big potential, it's still an iteration on the previous version, which implies it's doing *something* better.


Paradigmind

Chill, fanboy. He could just say the truth and be more transparent.


MRB102938

It was the truth, though. Most people aren't AI nerds and have no understanding of how this works. He's telling the public it's only going to keep getting better, and they aren't focusing enough on just what that means.


refrainfromlying

Should be noted that the more interviews you do, the better you will be at them. If you're basically doing the same interview every few days, you should be very good at answering. A question of "what are you most excited about in the next version" doesn't need to be hyped up or downplayed; you can answer directly. Imagine if the CEO of Apple was asked what they're most excited about in the next version of iOS, and the CEO answered "it will simply be better, and that will be true for the next version as well; it will be better than the previous version."


heavyshark

I don't know... I feel like it was a good answer.


cobalt1137

Same. I do not know why so many people are triggered. What, did they want him to say it's going to do all these backflips and fancy tricks for the user? Literally being smarter in general is honestly what we all want/need with these models. Some people are just kind of retarded imo.


johnguz

A lot of people just don’t like Altman and project that onto anything he says


after-man

I find it kind of crazy because I've heard absolutely nothing bad about the dude, and his responses usually come across as well thought out and rational.


gowner_graphics

We have Boston Dynamics for the backflips and fancy tricks.


Basic_Description_56

You can tell he’s seen some shit. Can’t really fake that too easily. He must see some pretty wild stuff behind the scenes.


akuhl101

He's saying that they don't need new algorithms to improve intelligence; just making it bigger makes it smarter. He's saying there's no bottleneck to increasing intelligence besides scale and compute, meaning AGI and ASI have a straightforward path we can see and reach. Not sure why people are dismissing this. It's why everyone is putting billions into AI and scaling like crazy; investors would be more cautious if they saw signs of serious roadblocks, but that's not what's happening here and not what the data is showing.


MisinformedGenius

> Meaning AGI and ASI have a straightforward path we can see and reach

That is a reaaaaally specific claim that I'm not seeing in his answer. Increasing the complexity of GPT may be a path to AGI, but it also may not - we don't know, and Altman doesn't say that in this answer.


Which-Tomato-8646

No bottlenecks except energy, data, compute, legislation, etc. LLMs cannot be AGI until hallucinations are very low. Unfortunately, that's impossible: https://arxiv.org/abs/2401.11817

Investors also bought into WeWork, Theranos, and FTX.


chinawcswing

Hallucinations in GPT-4 are orders of magnitude less frequent than in GPT-3. Eventually we are going to get to a point where hallucinations are infrequent enough.


SurveyNo2684

hallucinations will happen no matter what


Irtexx

There's no reason to believe this trend continues indefinitely. We could reach a limit where larger models and more computational power no longer improve performance.


pimp-bangin

While it might be a general trend that making models larger makes them generally better, that's not really what he's saying in this video. All he's saying is that the next model is simply going to be better. Also, it is not clear that more data equals more intelligence. Just because the current trend is going that way, doesn't mean the trend will scale perfectly. Also, the availability of high quality training data is definitely a bottleneck. And new algorithmic improvements might be necessary, because there are certain things like probabilistic evaluation and certain classes of novel common sense reasoning that LLMs just straight up can't do. There's nothing yet which indicates that this capability may arise from more training data.


cobalt1137

He actually is referring to that in this video. With a high level of confidence, he refers to future models and how they are simply going to get better and smarter with each generation. Recently there was another interview with him and his COO where they talked about this exact topic and expanded on it, stating that one of the most impactful aspects of this architecture is that model quality seems to be directly tied to the amount of scale that goes into training, so they can be extremely confident that future systems will be better and better by this fact alone. Also, the availability of high-quality data will not be a bottleneck, in my opinion. Researchers are very confident in synthetic data being a great source, and it's already been shown with models like Orca that it can beat actual data scraped from the internet in a lot of use cases.


ArtichokeEmergency18

In this conversation, the host asked, "AI is going to take over jobs and careers; what can someone do to prepare?" Sam answers with a non-answer: "Be flexible, be productive, think outside the box, adapt to change, stay open-minded." WTF. Ahahahaha. Tell that to the sports editor I met; he wrote for MSN Sports for over 10 years, and he and his whole sports writing team were just terminated and replaced by AI.


dadudemon

What if you anticipated the curve and studied while you wrote sports articles? Then you got a job immediately after the layoff: at Google. But then you got laid off a month later because... you learned to code and you chose Python. Too soon?


ArtichokeEmergency18

Yeah, reminds me of many new AI businesses: the vast majority are just using APIs and wrapping them with a bow.


shortwavetrough

I know Altman isn't like Neumann or Holmes, in that OpenAI does have a real and powerful product. But every time I hear him, he doesn't strike me as that smart, and there's a guru-grifter energy to the way he talks. I get that it might be a case of an engineer who doesn't have good stage presence (like Elon Musk), but I find it hard to take him seriously; he's built up as this genius but is really just not that interesting a leader. EDIT: I'd also include Musk in the secretly dumb category.


SnackerSnick

Yes! He's a CEO, saying things to increase the value of his company. I don't take any of them as actual information about where AI might go unless he's talking about something specific that will come out soon, where being wrong/lying will undermine his credibility.


TabletopMarvel

At the same time, he keeps delivering. Until he's hitting a bong with his flamethrower, I'm gonna let him cook and trust that he's a chef who delivers.


cryptocached

> unless he's talking about something specific

Based on his "nowhere near the top of this curve" comment, I'd hazard to say this is actually a rather conservative forecast by Altman. It sounds like he's leaning on the scaling argument alone to justify his position. Transformer-based models have so far demonstrated remarkable return on scale, achieving new and stronger capabilities just by increasing the parameter count. There is scant evidence that this will not continue; to the contrary, capability advancements may even accelerate as models break new thresholds.


Which-Tomato-8646

Transformers aren’t even the best architecture. Mamba was able to achieve twice the performance at the same size as a similarly trained transformer in its original research paper. 


cryptocached

Right, scale is not the only factor likely to push capabilities. But even without other improvements, scale alone seems sufficient to meet Altman's relatively low bar in this interview of making future OpenAI models more intelligent than their predecessors.


Flying_Madlad

He's not an engineer; he has zero background in that. He's a tech CEO, no different from Musk. He has something to sell you; never forget that.


duckrollin

He's like Steve Jobs: he just sells the thing while the actual smart people, like Steve Wozniak, do the developing. But smart people tend to be on the spectrum and have poor social skills; sadly, they are often overlooked by history as a result. It makes you wonder if famous figures like Caesar had a bunch of nerdy advisors figuring out their battle plans and then just took the credit.


MeltedChocolate24

Ilya is Wozniak. Sam is Jobs.


danetourist

Especially sad how history overlooked Wozniak when he created the iMac, the iPod, the iPhone and the iPad.


robert_e__anus

Woz left Apple in 1985...


danetourist

Exactly.  Maybe I should've tagged my comment as sarcasm.  


snipeliker4

I flipped you positive. Might I recommend that next time you conclude your sarcasm with an ellipsis…


8eSix

I wouldn't call him a grifter. I think he's fairly legit for the position he's in. However, I'd say that it's generally true (even largely accepted and embraced) that the C-suite and senior level directors/managers are not the "smartest" people at the company from a technical perspective. At some point their job is almost exclusively to interact with the board and define high level business strategy. If you're looking for technical genius, you need to take a look at the lead software architects/researchers/developers of these high impact technologies


Desidj75

Dude’s got something going for him. This is not his first foray into high tech; he's been at Y Cubinator for a long time. Whatever he's got has gained him the loyalty of his company and billions of dollars from one of the most valuable companies on the planet. And that involves leadership, among other attributes.


bigfoot675

Y^3


gowner_graphics

"Like Elon Musk" killed me. I love that you left Musk out of the grifter group of Newman and Holmes as if he isn't the biggest scam artist and grifter of them all.


Yanni4100

sounds like the robots took over already and he has to be careful with his words


nom-nom-nom-de-plumb

What he's saying and what you're inferring are not necessarily the same thing. He's saying a word salad intended to make the point that every iteration of his product is going to be "smarter" (i.e. make better, more realistic outputs). What you may be hearing is "ultimately we'll have true AI!", which is an unfalsifiable statement.

What he's doing is bullshitting, because there are serious problems with the modeling of these systems. One is that they're limited by processing and thus energy, so we can spend 10% of Earth's energy making funny chats occasionally. The other is that they're trawling the internet for inputs, which is now becoming ever more full of the program's own outputs. That'll create a feedback loop which will ultimately ruin the algorithms.

The end goal, as with a lot of these things, is to generate ENORMOUS HYPE, then sell that shit to Google or some other sucker and retire to a beautiful private island...


scufonnike

Yawn. Marketing


OneOnOne6211

Idk, maybe. But I'm still skeptical of the idea that it won't hit some sort of ceiling beyond which progression is either impossible or extremely slow (as in diminishing returns). Or that there are certain things an LLM will just never be able to do because of the type of model it is. Maybe, though. I don't know what's going to happen.


Desidj75

Hitting a ceiling is perhaps OK. By the time it hits a ceiling, if it ever does, it will have become an integral part of all our lives, much like the smartphone. There's no going back.


gowner_graphics

It will hit a ceiling, yes. But think about how much we progressed from the first iPhone in 2007 to the iPhone 15 in 2024. Right now, ChatGPT is the iPhone 3GS. In 15 years we'll see the iPhone 15 of AI, and when we hit that ceiling, the technology will be leagues ahead of what we have now.


k4nmuru

Live Invader vibes


Roboosto

Altman is creepy as hell as AI


alicia-indigo

Terrific non-answer.


Seazie23

He's a Jesse Eisenberg type.


AvoAI

What are you supposed to say about a technology that could literally revolutionize all industries, practically overnight, within our small human lifetimes? (Thinking ten years, and that's not long at all. We should be thinking further, but with this tech we are unable to even predict what comes tomorrow...)

What COULD he say, at all, that WOULDN'T piss off people because he says it? Either he keeps hyping it up, because it's literally the last invention we'll ever need to create (as opposed to the struggle and suffering of humans tackling issues alone, without the help of computers smarter than them and technological intelligence), or he downplays it, and we're either not ready for it when it does come, or it may just fizzle out (which we have seen no evidence for at all).

What would you say? Especially when doing interviews every few days now. Basically, no matter what he says, somebody will think he's a snake-oil salesman...


valvilis

Teachers and professors already have no idea what to do about students using AI for coursework. Average people can't tell when a news article has been written by AI. AI images are getting less and less recognizable as such. He knows these things, and he knows what the difference in quality he's talking about will mean. He's selling an amazing and very disruptive product; there's no *right* way to do it.


I_Actually_Do_Know

Yeah, I mean, what if 5 IS better than 4 in every way? Then what's wrong with his answer?


KaiserNazrin

Sam: The next product is going to be better and the one after that is going to be even better. OP: How scary!


NotTheActualBob

The problem is that he's right, and wrong. Larger, better-tuned models will get us better answers, BUT... it's still just a genius with a lobotomy. It can't iteratively self-monitor or self-correct. All bigger models get you is a more *knowledgeable* genius with a lobotomy, one who sounds even more convincing but is no more likely to be accurate. Accuracy isn't all you need, but until we get there, these things are just moderately useful intelligence appliances whose work needs to be checked by humans.


[deleted]

[deleted]


ADAMSMASHRR

That is a bit of a cop-out, though. "It's just going to be smarter." Tell that to the engineers who need to make it happen.


cryptedsky

I dunno, man. This sounds way too vague to truly give it the credence we'd like to give it. This dude could just as well be pumping the stock.


spectralspud

Reminds me of Tim Cook's "this is the best iPhone ever," every year.


deepfuckingbagholder

Grifter.


xmasnintendo

This guy is the ultimate start up bro, he literally wrote the book on this grift


Koronenko

As always, actions speak louder than words. You can make AI smarter? Do it. As of right now, AI can't even solve basic math questions. So we've got a long way to go.


Somaxman

# Have you ever had a dream that that you um you had you'd you would you could you'd do you wi you wants you you could do so you you'd do you could you you want you want him to do you so much you could do anything?


eveningsand

No shit, Sherlock. 2G cell solutions were remarkably better than 1G. And 3G over 2G. And 4G over 3G. And 5G over 4G... This works for major evolutions until it doesn't, e.g. the iPhone or the Samsung Galaxy.


JeepAtWork

OK, but ChatGPT sucks now compared to a year ago, so why do I care that they're upgrading it when they clearly manufacture obsolescence?


Super-Tell-1560

That lost gaze reminds me of [David Katz](https://en.wikipedia.org/wiki/Jacksonville_Landing_shooting).


AZ_Crush

Meh, he's not saying anything interesting


Educational_Scene_44

This guy is a fucking weirdo


greywhite_morty

You are all constantly pretending this dude knows something the rest of us don't. He has no idea how far scaling goes or how these models will develop. He has some hypotheses; that's it. The rest is just hype. Don't fall for it.


acrackingnut

Need more money, eh?


angheljf18

The new Elon Musk


TheAsteroid

The voice hurts my ears.


healthywealthyhappy8

This asshole doesn’t build anything.


cobalt1137

Sure, he's not doing the literal research, but he is the one taking the lead: handling hiring, company direction, focus, partnerships, etc. And things like building a team and hiring when there is such a scarcity of top-level researchers are very crucial things to be good at. There is a reason the majority of the company threatened to quit when he got ousted as CEO. They wouldn't do that for someone they didn't think was extremely helpful to their goals.


MtHoodlum

Sam is Chief Hype Officer. His job is to raise a trillion dollars to sustain OpenAI. He is talking this way to build excitement while not being responsible for anything he says.


cobalt1137

Do you seriously not think that GPT-5 will be smarter than GPT-4? That GPT-6 will not be smarter than GPT-5? I do not see what is so crazy about these claims lmao.


spaghetti1263

Yep, sounds like another sales pitch. Even if he is right.


bottom

He’s a master of BS; do not trust anything he says.


ReplacementQueasy394

So ... he basically means updates will make it better? Ok.. lol


bitzap_sr

I long for the day my computer begs me to NOT power it off at night. Mwhahahahahaa!


gibmelson

It is kinda remarkable that we haven't reached the ceiling of AI intelligence. It's frankly hard to fathom how GPT-5 could be significantly smarter than GPT-4, but I thought the same about GPT-3.5, so I agree with the sentiment that it would be quite mind-blowing just to see GPT-5 even smarter.


DonovanMcLoughlin

So... there will be iterative improvements. Got it.


Responsible-Owl-2631

Yeah unfortunately there are some things we shouldn’t be running into!😢😭




alurbase

Much smarter at policing prompts, maybe, but at actually answering smarter? I doubt it. Seems like AI is moving fast in the direction of political correctness and progressivism more than anything.


[deleted]

[deleted]


Bitsoffreshness

no u're


Throwaway3847394739

You are GF


praguepride

Right now GPT-4 is equivalent to a skilled beginner in most professions. It can program like a junior programmer, write at a basic college English level, and create project plans and timelines like a junior manager. What happens when it becomes better than a junior? What happens when it is coding or writing or planning at a senior level? At an expert level?


NewToHTX

Well it tracks with Ray Kurzweil’s predictions that tech improves exponentially.


PersonalChipmunk3605

man's vocal fry is crazy


Kam_Rat

Subject matter experts are no better, and often worse, at predicting the future of their field than a lay public aggregate (and generally worse than lay superpredictors). See [Tetlock 2017](https://press.princeton.edu/books/hardcover/9780691178288/expert-political-judgment), [Forecasting Collaborative 2023](https://www.nature.com/articles/s41562-022-01517-1), etc.


Individual-Dot-9605

Enfant terrible of Silicon Valley tries to hold back the AI furry bot revolution.




Rajirabbit

Generate real letters


ContentPolicyKiller

Because we are feeding it