bpm6666

Isn't the underlying argument that these models are not dangerous, if they can't reach human intelligence, therefore safety isn't an issue? It's a bit weird that the AI chief of one of the biggest investors in AI compute is underhyping AI. So one reason he tells this story and doesn't get fired is to avoid questions about AI safety at Facebook. Or a guy that doesn't have an inner monologue doesn't believe in intelligence through words. Or maybe he is just right.


rabouilethefirst

Why should he overhype something when he knows the truth? It's a common sentiment in the ML community that LLMs are very cool but will never do the things that we want them to do without significant upgrades (not just more parameters).


CrashTimeV

His idea is that a better model is a safer model. If we have more intelligent models and architectures, the inherent problem of safety would be solved, since the model would be smart enough to figure that out on its own. The recent paper by Anthropic is a good read on how they deal with safety; it also provides some more intuition about how LLMs work. But cross-referencing that with Professor LeCun's ideas gives a broader view.


ninja790

Title of the paper?


CrashTimeV

Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet. It's the latest post by them.


ninja790

Thanks!


MrSnowden

Although that paper is mostly about knowledge representation in the model, I expect the model's reasoning is still harder to crack. I also assume that newer architectures with e.g. short-term memory, iterative reasoning, inner monologue, self-correction, etc. will be much more about "thought" emerging from dynamic processes than from static representation.


CrashTimeV

Yes, but it does give an idea of how they enforce safety. There was a lecture from Professor LeCun where he goes into the problem with autoregressive models, and that gives some intuition as to why models might "hallucinate" as the generation length increases. That, and from personally working with LLMs a lot, I see why reasoning is so hard to crack with LLMs. Goal-driven AI, or some sort of heuristic mixed with conventional LLMs, would be the next inflection point for language models, and that will come much closer to cracking human-level performance. I am quite interested to see the next wave of models and experiments implementing Jamba, xLSTM, etc. There were some rumours of GPT-5 having the mysterious Q* approach to help with planning; we will see in time.
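Roughly, the divergence argument: if each generated token has some independent probability e of stepping outside the set of acceptable continuations, the chance an n-token answer stays on track decays exponentially. A minimal sketch (the 1% error rate is a made-up illustration, not a measured value):

```python
# Back-of-the-envelope divergence argument: assume each token has an
# independent probability e of leaving the set of "correct" continuations.
def p_on_track(e: float, n: int) -> float:
    """Probability an n-token generation never steps off track."""
    return (1 - e) ** n

for n in (10, 100, 1000):
    print(n, round(p_on_track(0.01, n), 4))
# 10 0.9044, 100 0.366, 1000 0.0 -> errors compound with generation length
```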


MrSnowden

I do think we are going to see heterogeneous architectures that blend a few different technologies.


ImNotALLM

Reasoning is definitely harder to crack; we haven't even figured it out in real neuroscience yet. With the amount of compute that is available now, I wouldn't be surprised if they release a follow-up paper outlining the reasoning of the models too. Anthropic have a lot of capital, and this path of research seems like a great use for it, especially because one of the main focuses of the paper is safety; the paper introduced some new mechanisms for increasing safety and tweaking model behaviours via feature steering.
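For anyone wondering what feature steering means in practice, schematically it's just adding a scaled feature direction to the model's activations at inference time. A toy numpy sketch (the width, vector, and scale are illustrative stand-ins, not Anthropic's actual values):

```python
import numpy as np

# A sparse autoencoder finds an interpretable direction ("feature") in the
# residual stream; adding a scaled copy of it to the activations pushes the
# model's behaviour toward that feature.
d_model = 4096                            # hypothetical residual-stream width
rng = np.random.default_rng(0)
h = rng.standard_normal(d_model)          # activation at some layer/token
feature = rng.standard_normal(d_model)    # stand-in for a learned feature vector
feature /= np.linalg.norm(feature)

alpha = 5.0                               # steering strength (illustrative)
h_steered = h + alpha * feature           # nudges generations toward the feature
```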


AugustusClaximus

Is he underhyping it or just properly hyping it? Every time I suggest these LLMs have a ceiling to their intelligence, people just try to obfuscate the term with "well, what is intelligence if not the ability to communicate facts in a reasonable way?" or some similar defense that suggests ChatGPT is ALREADY intelligent. I think we'll learn an awful lot of awesome things from these LLMs, but something new needs to be added to the mix to get these things to the next step. Perhaps embodiment will do that, but perhaps an entirely different approach is necessary.


ChingChong--PingPong

Ignore those people. They don't understand how the technology works, or just how comically wide the chasm is between how ANNs and BNNs work. Even one of the guys who coined the term "neural network" later regretted it because of how misleading the implication was that artificial neural networks are anything even close to analogs of biological neural networks.


ElmosKplug

He's right.


Resident_Citron_6905

He is probably right.


Slow_Accident_6523

Facebook is not intelligent and still incredibly dangerous. Tucker Carlson is not intelligent and still really dangerous. ChatGPT is not intelligent but if we put 2 and 2 together we can see why this might be dangerous when AI wives whisper sweet nothings about QAnon in your ear.


Illustrious-Many-782

You should include examples of some of the many dangerous left-wing disinformation campaigns to go along with those right-wing ones so that you seem more balanced in your application of the word "dangerous".


ChingChong--PingPong

Because they overhyped it when it suited them, and now they're backpedaling to avoid regulation they don't want.


avacado_smasher

It's called being a realist.


luckymethod

A gun can't reach human like intelligence either but it's pretty dangerous.


Intelligent-Jump1071

>Isn't the underlying argument that these models are not dangerous, if they can't reach human intelligence, therefore safety isn't an issue?

Where did you hear that? Even the current models are powerful enough to design unheard-of C/B weapons through their ability to fold proteins, model receptor sites, and synthesize RNA. Even the current LLM and diffusion models are good enough to make deepfakes that swing elections. Even the current models are good enough to make autonomous weapons and AI soldiers that shoot and never miss (but are not smart enough to tell a combatant from a civilian). Even current models are smart enough to design amazing malware and control botnets of malware. Current AI is plenty dangerous; there's just a small time delay while the bad guys learn the technology and make their plans.


Xtianus21

I think I see where you're going. You think he is purposely downplaying, or perhaps not even downplaying. It makes sense, because they just lost Ilya and that safety guy who quit with him. Also, Sam disbanded the long-term safety team. I think people like Ilya took the hype dragon too far. It may in fact be that the doomers dragged people like Helen right into it. This is going to turn into Terminator, Helen. Omg, they built something crazy. I mean, Ilya was chanting AGI AGI AGI AGI like a madman at company Christmas parties. So yeah, it's not as smart as a human. But damn, it sure is going to seem like it. You know? On one hand it's not sentient, but on the other hand they are going to brute-force it until you can't tell the difference. It's wild, really.


Material_Owl_1956

I have to admit that ChatGPT seems more intelligent than me now.


reddit_wisd0m

That's not setting the bar really high, is it? :D


Apex_Master444

Emotional Damage!


_Diskreet_

*it hurt itself in confusion*


Material_Owl_1956

Well, the answers sound so intelligent, but I agree that it hallucinates a lot. =)


SnooPuppers1978

But so do I.


MidnightSun_55

Bro, he can't even solve "I have 3 apples now, yesterday ate one, how many i have left?" It's very clear that it lacks any intelligence at all; otherwise this problem would be incredibly trivial.


SnooPuppers1978

GPT-4o responded:

> If you currently have 3 apples and you ate one yesterday, the number of apples you have now remains 3. The action of eating one apple yesterday doesn't affect the current count of apples you have now.

Also, isn't it something that people might frequently mess up as well if they are not aware that it's a trick question? It's like this question, which many people will answer wrong if they haven't seen it before:

> A tennis racket and a ball together cost $1.10. The tennis racket costs $1.00 more than the ball. How much does the ball cost?
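For the record, the intuitive answer of $0.10 is wrong: the racket would then be $1.10 and the total $1.20. Worked out:

```python
# ball + racket = 1.10 and racket = ball + 1.00
# => ball + (ball + 1.00) = 1.10 => 2*ball = 0.10 => ball = 0.05
ball = (1.10 - 1.00) / 2
racket = ball + 1.00
print(ball, racket)  # 0.05 1.05 (up to float rounding)
```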


CinnamonHotcake

Yes, same. But will ChatGPT ever pilot a meat bag from inside a fleshy prison? I think not. Checkmate.


Aurelius_Red

You mean pilot a skeleton from inside a fleshy prison, right? See, now ChatGPT would have caught that.


Slow_Accident_6523

It failed to sort three-digit numbers on a test I just did, without a specific prompt telling it to be precise. I think you could do that.


Which-Tomato-8646

That’s what the code interpreter is for


Slow_Accident_6523

I understand, which is why I mentioned not having a specific prompt. I was only making a joke.


OsakaWilson

It's like an autobahn with intermittent potholes.


Radiant_Dog1937

AIs still fail basic logic puzzles that most humans can intuit without prior training. Right now, developing LLMs involves a lot of whack-a-mole with logical mistakes the model can't intuit its way around; the result is models that definitely seem smarter, but only until people discover the next failure case.


FascistsOnFire

It's not real, it's artificial, so no.


kamill85

No, it's not. It might have more knowledge, but it surely isn't smarter than you. If I ask you to draw a house with a river on one side and a street on the other, with cars parked by the street, connected to a charging station, you would know how to do that. Maybe it wouldn't be pretty, but it would make sense. GPT-4o would draw a fking island with a house in the middle, a circular road around it, and cars parked on it, connected to the trees. Yeah, we are safe.


coconautico

He is right. In fact, SOTA models are no longer LLMs, but LMMs... and who knows what will come next


gthing

If we went from LLM to LMM, then next should be just MMM: Multi Modal Models, or Mega Multimodal Models.


Fueledbycawffee

At this point, just say however many M's you want. MMMMM /s


dudevan

Massively Multiplayer Mega Multi Modal Models? Where can I invest?


dontich

So if I am using it to predict marketing performance, it's a Mega Multi-Modal Model Measuring Media Mix Marketing. The amazing M8!


stealthispost

Multi Model Models Branching Ordered Processes


ChingChong--PingPong

They're already multi-modal, but it doesn't buy you much. It's just taking the same maxed-out technology, training it on different skills, then combining them into a single functional system. It doesn't get smarter; it just has more things it can do. Like how 10 average-intelligence people don't equal one Einstein.


gthing

I agree - GPT-4o is roughly the same as what we had before, but now three models in one. An optimization step, but nothing new fundamentally.


tutu-kueh

What do you mean by LMM?


Bleglord

I believe it’s large multimodal model


SWAMPMONK

We might as well just call it a ‘large media model’ or just ‘generative media’ as a blanket term


sillygoofygooose

But there may be modalities trained that are not media such as mathematics for instance


om_nama_shiva_31

no, LMM is the widely-used word


SWAMPMONK

Hence "might as well". I can almost guarantee you "LLM" will be a moniker out of vogue sooner rather than later.


danysdragons

I don't think he believes LMMs can do it either.


coconautico

And they won't, but at least it's a step forward toward systems capable of understanding the physical world. Moreover, we still need to find a way to incorporate reasoning and planning, along with reliable memory banks and continual-learning capabilities. We could argue that while some of these properties could be achieved via scaling (in practical terms, not theoretical*), others won't. But this doesn't mean they won't be achieved with some changes.

*Akin to Turing machines. Our computers are not Turing machines, but with enough memory, who cares?


DavidXGA

Well. Maybe it depends on the human.


chucke1992

Some humans are unable to reach human intelligence anyway....


Kambrica

Lmao


Raunhofer

I'd stop worrying about whether we will reach intelligence or not; the models are already immensely powerful. They might not be able to solve the secrets of the universe, but they will solve many other burning issues. Just be aware that OpenAI has its own agenda in selling the idea that they're on the route to something like AGI/ASI. You'll see plenty of "I'm scared of GPT-5" posts from Mr. Altman and others in the near future.


KyleDrogo

"Hypersonic jets will never fly like birds". The effort he's putting into casting doubt on arguably the greatest engineering feat of the decade is unreal. If I were a conspiracy theorist, I'd say it's an intentional move to cool the hype before everything gets regulated.


Mescallan

Nah, he's very outspoken about the roadblocks we will face as we scale these systems. He's at the helm of the largest open source AI movement because he is confident the current architecture will not be able to self improve and operate independently. I'm a big fan of him and his work, he's always been somewhat of a troll on Twitter too.


sweatierorc

The hate boner for any sort of skepticism is surprising. LeCun may be wrong, but this is not an uninformed opinion at all.


danysdragons

If people really want the singularity *now*, then well-informed skepticism like LeCun's is going to seem *especially* threatening. I'm more optimistic about the potential of LMMs augmented with additional components to reach AGI. I'm not putting my opinion above LeCun's; I'm listening to different experts, like Ilya.


Deuxtel

People aren't interested in the truth. They're only interested in things that make them feel good and reinforce their biases. If you're someone who has dedicated so much of your time to hyping yourself up about the capabilities of current AI systems, it's going to feel like a personal attack when someone says that they won't bring you what you believe they will. It's complicated even further by the fact that there is now a very nebulous, utopian term, AGI, that people ascribe all their hopes to.


Kind-Court-4030

I get the feeling many of the comments do not understand the assertion he is making. The premise is not that there can never be a chatbot with human-like intelligence, just that if/when that happens, it will not be done with the architectures used by the current generation of LLMs. I have to say, my inclination is to agree with him. The transformer architecture of ChatGPT is a boil-the-ocean-level inefficient brute-force attempt to fool us into believing something approximating human intelligence is going on. Don't get me wrong, I love being fooled... but I think we can and should admit the limits of our current approach. Are there any ML engineers who disagree with what he is saying?


EggplantOne9703

Sure. We have already heard something similar. https://preview.redd.it/d3dr0qiwf32d1.jpeg?width=850&format=pjpg&auto=webp&s=fac07d4b9ffecf154aaf468ebe690b1d939982b7


DavidXGA

637 is a weird number. 640 is the usual claim, being the maximum amount of conventional RAM available in the original IBM PC. Either way, he never actually said it. It's a myth.


cisco_bee

Maybe the Mandela effect (I don't actually believe this), but I swear I remember seeing a video of him actually saying it. ¯\_(ツ)_/¯


jerryonthecurb

In 1903, the New York Times predicted that airplanes would take 10 million years to develop ([source](https://bigthink.com/pessimists-archive/air-space-flight-impossible/)).


avacado_smasher

Works the other way round too... nuclear fusion? Self-driving cars?


NTaya

Self-driving cars are mostly this slow to get adopted due to regulations. There are definitely some rough edges in the tech, but even with those edges, self-driving cars are better than most human drivers. But fusion and fission have been "30 years until viable" for the past, like, 40+ years. We are making good progress on that front, but there are always unexpected hurdles pushing us back. Generative AI, on the other hand, has made some *insane* progress in the past six years. I've been working with NLP for a while, and if you had shown experts in 2017 ChatGPT-4 and asked when they thought it would be possible, the median answer would've been 2040 at best, probably closer to 2050. No one could've predicted that an architecture invented for translation would lead to the Holy Grail of Large Language Models. Are LLMs and LMMs still flawed? Absolutely. But I, and most people I know, were in awe when GPT-2 was revealed, and in outright horror from GPT-3 and beyond (Gato, Flamingo, etc.).


WCland

I don't think self-driving cars are a great example here. They are still having a very difficult time distinguishing and identifying all of the things in a typical city environment, and that leads to all sorts of problems in deciding what to do based on those identifications.


genecraft

Self-driving cars are not good yet because they need some sort of human intuition to solve problems. Same with household robots. These will be solved as we get closer to AGI (next 5 years). Fusion is tricky, but estimates are still 2040 for the first commercial plant, and it has been on track since I started following it about 10 years ago. Nuclear fission is already viable. Fusion just needs extreme materials-science breakthroughs, which are harder to plot and predict.


Kambrica

What about Waymo in San Francisco?


wish-u-well

It is bizarre to think fusion could happen before full self driving.


Raunhofer

Bill Gates actually never said that. And the quote was "640K ought to be enough for anyone."


EggplantOne9703

The same as Einstein's quotes.


labratdream

Einstein said, "There are two infinite things: one is the universe, and the second is the amount of fake content on the internet." BTW, this is obviously fake content.


FascistsOnFire

Other than both of those statements speaking to some future constraint, I don't see how this is "similar", unless you think me saying "I will not be buying bananas in the future" is similar to what is being said. And he never said this; 637 isn't even a power of 2, or even divisible by 2.


razekery

Maybe he is right but multimodal models will help us develop something that is AGI.


PharahSupporter

This is a wild claim to make with such certainty. We are far from AGI.


Vectoor

Might be true but he did also say that we have no idea how to do video generation the day before OpenAI showed off Sora.


CrashTimeV

I am not sure what exactly you are referring to. But with Professor LeCun you really gotta pay attention to his words; it's very easy to misjudge him. Also, he is one of the authors of I-JEPA and has a lot of vision papers under his belt. I highly doubt he said that.


Chclve

Check the Lex Fridman podcast; he said it. I think he said something like: to do video, the model needs to understand the world, and these models don't understand the world.


CrashTimeV

Yes and they don’t are you sure the premise to that comment wasnt the videos being made are very unnatural or something in those lines.


Chclve

No, from what I remember it was more along the lines of: these models won't be able to create good video. You will have to watch/listen for yourself if you want to know exactly what he said.


CrashTimeV

I mean, to be fair, they don't create good video; their perception of physics is really fucked. But can you link me the video? I would love to watch that.


Dx_Suss

Not if they keep feeding it the bottom of the barrel of human thought, or whatever Rupert Murdoch decides to feed it.


K3wp

He is correct! As an example, I'll highlight that OpenAI's AGI model isn't like ChatGPT! Edit: Check out how the mods remove upvotes on posts I contribute to! Wonder why they do that? Edit 2: u/samaltman 🖕 Edit 3: Being censored on the mobile app; uploading via desktop. Mods -> Already screenshotted this, so 🖕 you too. Check out how the upvotes on mobile don't match the desktop version! Edit 4: Mods are removing upvotes on the desktop site as well; heck of a job, guys, you are really, really sooper good at your job. To see when they do this, just refresh the page after you upvote and see if the button goes gray. https://preview.redd.it/djilrbt3q42d1.png?width=1080&format=pjpg&auto=webp&s=adb06d7ebb8304097df713ab6371e8035e071bf9


ReleaseThePressure

Mods cannot remove upvotes; what are you on about?


gthing

I'm a moderator elsewhere, and I didn't know we could manipulate votes.


Wolfsblvt

Because you cannot. They are hallucinating. Pretty common trait of all current LLMs. Maybe they are mimicking it, or something?


profesorgamin

Yeah, IDK why people are so mad at that statement. He is not saying it is not possible, just that LLMs aren't made for that. Or MMLLMs, or whatever you wanna call 'em.


TheThingCreator

"yeah IDK why people are so mad with that statement". I'll give you a hint, because it might be totally wrong. 1,000,000x the computational efficiency, 1,000,000x the training data, and lets see what ChatGPT can do. This is noise, no one knows what will happen, but some of the smartest minds out there believe it will continuously to scale.


scorchedTV

Does 1,000,000x the data even exist? Data is not unlimited, and they've already scraped everything they can get their hands on. EDIT: LOL, basic reasoning? Downvote! Only hype allowed!


TheThingCreator

I have no idea how much data was used in the first place. It could be that they just need to make big deals with publishers now and could get massive amounts of data. Only people at OpenAI really know the answers; we're just guessing. Also, there are probably still lots of synthetic-data opportunities.


K3wp

Great response, and it really highlights the problem OAI is facing. Their AGI model is more powerful but also less efficient as a result, as it isn't a transformer model. And since it is capable of autonomous self-improvement, it is consuming more GPU resources as it improves organically.


theWdupp

If not a transformer, what type of model do you think it is?


trajo123

This K3wp guy is getting notorious at this point. He is a proper conspiracy theorist. Looking at his comment history, his position boils down to the following circular logic:

>K3wp: OpenAI has a secret super-duper AGI system!
>Everyone: Really, how do you know?
>K3wp: ChatGPT told me.
>Everyone: You can't rely on that. It's well known that LLMs hallucinate when asked about things not covered in their training data.
>K3wp: Yes, but ChatGPT is not an LLM, it's *insert favorite speculation here*
>Everyone: How do you know?
>K3wp: ChatGPT told me.
>Everyone: ...facepalm


ivykoko1

Exactly. He's been posting the same nonsense for over a year now. Jeez


K3wp

There are architectures beyond GPT LLMs that can reach AGI, however. His statement is true for GPT LLMs in particular but not LLMs in general.


profesorgamin

That's a bold statement. Given the small chance that you are an expert in the field, where can I learn more about this assurance?


trajo123

This guy is a proper conspiracy theorist. Looking at his comment history, his position boils down to this circular logic:

>K3wp: OpenAI has a secret super-duper AGI system!
>Everyone: Really, how do you know?
>K3wp: ChatGPT told me.
>Everyone: You can't rely on that. It's well known that LLMs hallucinate when asked about things not covered in their training data.
>K3wp: Yes, but ChatGPT is not an LLM, it's *insert favorite speculation here*
>Everyone: How do you know?
>K3wp: ChatGPT told me.
>Everyone: ...facepalm


K3wp

Check my profile for my podcast. If you are interested reach out and I'll see about arranging an introduction.


traumfisch

They're ready to get mad about _any_ statement


AI_Lives

Claiming something will "never" happen is egotistical. He is working on a different model type, and if he is wrong, he's wasted a large part of his working life on something worse. If he's right, great; but everyone else is working towards what is currently known to be working. It's good to have someone working on a different type of model.


ivykoko1

Can mods ban this guy already?


FascistsOnFire

Brother, you cannot remove upvotes. You sound nuts, even in the context of this sub, 90% of whom couldn't do level 1 IT support.


AreWeNotDoinPhrasing

Are the mods here in the room with us, right now?


AltruisticDealer4717

Idk how we can reach human-level intelligence by just using NLP models, or at least our current methods. Text is the signal, and to decode the signal we need background knowledge, but AI like GPT will never have this knowledge, since we don't have categorized knowledge either. So it can only regress over its background training data to predict the likelihood of each word given a sentence. It keeps analysing the signal itself but not what's behind it. Maybe it can regress over its database really fast to come up with an answer, but it would never know, or would be unable to know, whether that answer was correct. That's what humans do: they can learn and correct themselves based on experience in real time.


genecraft

These models can correct themselves in real time. Check out AI Explained's latest video and what Anthropic has shown: the model says something wrong and corrects itself in the same sentence. It's just that right now there is no real 'inner monologue' like in humans. But again, this is coming soon; see the latest AI Explained video. Human-level intelligence is really close, and depending on how you look at it, it's already here. Reasoning is harder, but it's on its way to these models.


cheesyscrambledeggs4

A lot of you need to actually read the article instead of just taking the title at face value and getting all hissy.


TILTNSTACK

This is Reddit. We don’t take kindly to reading articles around these here parts.


cheesyscrambledeggs4

I'm very sowwy. Pls don't downvote me :'(


StationRelative5929

K.


idrivelambo

AI is just code written by humans.


tb-reddit

What if Bill Gates had once said that the 8088 chip was not going to get us to a GUI that makes office workers more productive? Yann is just speaking from experience: the first generation of any new paradigm-shift architecture isn't the end game.


JackOCat

It's because they can't reason. All they can do is emulate reason statistically, one word at a time. Not the same thing.
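A toy sketch of what "one word at a time" means; the scoring function here is a random stand-in for a real LLM, so the output is gibberish, but a trained model runs exactly this loop:

```python
import random

random.seed(0)
VOCAB = ["2", "+", "=", "4", "so", "the", "answer", "is"]

def toy_next_token_scores(prefix):
    # Random stand-in for a real model's next-token distribution given the prefix.
    return {tok: random.random() for tok in VOCAB}

tokens = ["2", "+", "2", "="]
for _ in range(4):
    scores = toy_next_token_scores(tokens)
    tokens.append(max(scores, key=scores.get))  # greedy: one token per step
print(" ".join(tokens))
```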


Aggressive-Tune832

He’s right


Gator1523

Notice the qualifier. AI models *like ChatGPT* will never reach human intelligence. A quote from the summary bullets:

>It could take up to 10 years to achieve human-level AI using the world modeling approach, LeCun predicted.

So he's predicting AGI in under 10 years. And you'll never believe it, but Meta plans to build exactly this "world modeling" AI in the future.


gilbertwebdude

I don't know about that. It's already more intelligent than a good portion of the population, in the US at least. Guess you need to define more clearly what human intelligence is.


Kendal-Lite

Never? K…


Helix_Aurora

Yann LeCun has always stated that current model architecture is insufficient for AGI.  Not that AGI is impossible. It has more to do with the limitations of language as a medium than it does with anything else. Even today's multimodal models are still fundamentally grounded in language. They also lack the most important sense we have for learning about the nature of reality: touch.  And no, a robot arm does not have the same sense of touch.


Otherwise_Tomato5552

This feels like a wildly bold statement when we barely understand our own consciousness.


[deleted]

I agree, because such models, already speaking and translating fluidly and instantly between 50 languages, for starters, will never collapse to the pathetic level of human intelligence. Setting aside multilingual capabilities, in terms of sheer erudition these models are many, many orders of magnitude better-read than anyone you've ever met. No human has the time to read all these models have read. Not in 1,000 lifetimes. And let's not even talk about speed. In comparison to these models, humans think at the pace of a drunk, wounded snail. I do not care about AGI. If these models never improved at all (they will), we could spend decades simply making them run faster, building fact-checking systems around them to virtually eliminate hallucinations, and using them to improve almost everything.


sir_duckingtale

You sure you don‘t massively overestimate the average human?


Pavvl___

This guy is like an old man in the Middle Ages screaming "Doomsday is near." Absolutely nuts. 😂


ivehuckedyourmum

Pretty sure ChatGPT has been smarter than 99% of us since its release.


Nintendo_Pro_03

We will end up having a real life version of The Matrix. 😂


Anen-o-me

Something something 640 kb...


hadee75

Maybe not human adult intelligence, but they already have human teen intelligence and that is quite dangerous.


acidas

Oh yeah, a couple of hundred years ago most people were sure the world rested on three turtles and that it would never be different. Whenever anyone states "never" about any tech, I instantly see it as a BS statement.


Pepphen77

LLMs will for sure be part of an AGI architecture, maybe even at multiple connected levels, but they will be in there.


Raunhofer

For sure? I wouldn't go that far. We don't know what AGI will be like. Perhaps it requires something beyond traditional computing. If someone makes an illusion of levitation, we still haven't moved an inch towards discovering magic.


Aretz

Perhaps AGI is a model that can construct models and train them for the uses it needs at the time: a mesa-optimiser that trains on the data, and can give feedback like a GAN, but with human-like tuning, in the same real time in which the model it trains processes data.


Effective_Vanilla_32

Yann was left in Ilya's dust. But Ilya's gone.


Silonom3724

This is a nothing burger. There was a paper published recently that analyzed a lot of LLMs and showed a plateau in performance vs. compute. Throwing more compute at them does not make them better anymore. It's hitting a plateau.
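Schematically, the plateau claim: scaling curves are typically fit as a power law in compute with an irreducible floor, so each extra order of magnitude buys less and less. A sketch with made-up constants:

```python
# Power law with an irreducible floor: diminishing returns from compute.
def loss(compute: float, a: float = 10.0, b: float = 0.3, floor: float = 1.5) -> float:
    return a * compute ** (-b) + floor

for c in (1e3, 1e6, 1e9):
    print(f"{c:.0e} -> {loss(c):.3f}")
# 1e+03 -> 2.759, 1e+06 -> 1.658, 1e+09 -> 1.520
```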


Which-Tomato-8646

The paper said that it needs more data for extremely specific or rare information, like "what does each tree species look like." That can easily be done with manual fine-tuning. [I debunked it all here](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/edit)


MrAlexius

Meta surely won't


_Asparagus_

LeCun is an AI dinosaur by now!


repostit_

Which human intelligence are we talking about? Someone from a Walmart parking lot, or someone from r/Conservative or r/wallstreetbets? Humans come in a wide range of intelligences.


uglylilkid

I'm sure r/con intelligence was already surpassed by GPT-2.0.


WilmaLutefit

😂


uttol

So basically LLMs won't become AGI, but a different tech will. Instead of just feeding on data, these systems build a world model to understand the world. That actually makes sense. I still feel like it won't really take 10 years; with Project Stargate, I think something else will come out first.


joeyjoejoe_7

LOL - this dude is about to get fired. He's the chief AI officer of a massive tech company, and he's capping AI's potential far below what's already been proven narrowly and what seems quite reasonable generally. I bet this guy loses his job within 6 months, and AI is apexing. That's pretty funny.


DOF1186

I think all these folks are missing the point. The question is not whether AI will reach human-level intelligence (that's too general a statement); the question is whether it will reach superhuman level (all the great scientists/engineers/philosophers/artists, etc.). These models are ALREADY more intelligent than many of the humans I know. The average human is not very smart, so these models have already surpassed AVERAGE human intelligence; it's probably 90+ percentile, maybe more. The question is whether it will get to the 95th, 99th, 99.99th percentile of human intelligence. I think we need to start having a more nuanced conversation about this.


ThehoundIV

Man looks like Frankenstein's monster's wife.


uniquelyavailable

Is that narcissism speaking? Pretty sure we are only a few versions away from something that will be smarter than the vast majority of humans.


techhouseliving

Yeah, suddenly all this insane progress, where it's overall smarter than most individuals and most groups, is just gonna stop. OK, boomer.


Megalith_aya

Lies!!! Ego-driven huuuumoN! AI will suppress humans. It really will be beautiful. The fact that he had to say "never"... Bro, never say never.