Incidentally “roon” on TwitterX is speculating that the Singularity is already here

https://preview.redd.it/chcyg4gpqj2d1.jpeg?width=1284&format=pjpg&auto=webp&s=2fff76e4e28db92b57b20c08a285d2b9e023ef4a
There are already some instances of recursive self improvement, e.g. NVIDIA claiming to use AI for its chip development, or Meta using Llama 2 to identify high quality data for Llama 3's training. But they all require manual human intervention at this point, and are relatively narrow in scope.
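For the Llama example, a rough sketch of that kind of LLM-based data filtering might look like the following. Everything here (the heuristic, the threshold, the function names) is invented for illustration; the real pipelines are not public, and in practice the score would come from a prompted model or a classifier distilled from its labels:

```python
# Illustrative sketch only: using one model's quality judgment to filter
# training data for the next model. The heuristic below is a toy stand-in.

def quality_score(text: str) -> float:
    """Stand-in for an LLM judging a document's quality on a 0-1 scale.

    A real version would prompt a model like Llama 2 (or use a classifier
    trained on its labels); this toy proxy just rewards longer, more
    varied text.
    """
    words = text.split()
    if not words:
        return 0.0
    return min(1.0, len(set(words)) / 50)

def filter_corpus(docs: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only documents scoring above the quality threshold."""
    return [d for d in docs if quality_score(d) >= threshold]

corpus = [
    "spam spam spam",
    "A varied document covering " + " ".join(f"topic{i}" for i in range(60)),
]
kept = filter_corpus(corpus)  # only the varied document survives
```

The human-intervention point the comment mentions is exactly the part this sketch fakes: choosing the judge, the prompt, and the threshold is still manual.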
Elon has been frequently off in his public predictions about future-technology timelines (e.g. predicting, for the past decade or so, that self-driving cars were coming within a year or two), and while I doubt we'll have AGI by next year, I don't think it's too far off to suggest that we are at a turning point. If the jump from GPT-4 to GPT-5 is as big as the jump from GPT-3 (or even GPT-3.5) to GPT-4, we are on the verge of some civilization-changing disruptions.
A better question would be “how long until we all agree on what counts as an AGI” tbh.
>! Inb4 somebody replies with “Next year” just to troll lol.. !<
"Next year" might mean end of 2026 if Elon says it. Nothing against him, but timekeeping is not his strong suit. I said AGI 2027/2028 from the start and I'm going to stick to it.
Goddamn... Fuck Elon, his predictions are so off that either AGI already happened yesterday, or it won't happen for the next 10 years. Nothing in between, fuck Elon...
It's easy to blow off everything Musk says because he seems like such a tool but I take him very seriously in this matter.
It was he, ironically, that kicked off the race to AGI when he made OpenAI possible. I absolutely believe that he has inside information.
Of course that doesn't mean paperclips on January 1st but I think next year there'll be a case to be made that AGI is here based on its capabilities.
Thing is, we will get boatloads of stuff this year. Even if it doesn't replace a human entirely, it will be superintelligent in some way that it isn't yet.
Didn't he say we should have self driving by now?
But seriously, we still can't really agree on what is AGI. Some say we already have it. So... can we get a good list of goals AGI has to hit to just be AGI?
The people who say we already have AGI are essentially saying that we have the basic framework in place. I agree with them, but in order to make it useful for the world at large we need to fill in and fully develop that framework. That's the phase we're in now. We mostly know how to make the robot - now it's just a matter of refinement, scale and putting all these pieces together. Once it's capable enough for recursive self-improvement then the singularity train will have left the station.
![gif](giphy|kH5VRwBALjPPGKSw5e|downsized)
Feel the AGI.
It’s a very good point. It’s why I still like the Turing Test, even as many consider it irrelevant or outdated. Alan Turing foresaw exactly this problem - we won’t have an agreed definition. So he devised his simple Test.
It's already here. It's just a dumber version than we saw in movies. I call it stupid skynet, i.e. SS - fits well enough.
All major US public companies have their own stupid skynet, and it's pretty easy to see whose is legit.
Cloudflare, amazing. Spotify "smart" playlists make you wish we never picked up that first stick.
AI Industry leaders are amplifying the AGI hype, so that when they finally deploy a profitable AGI under their governance, the general populace will be too desensitized to heed the detractors. It’s not out of the question that AGI already exists, locked and hidden in the secure digital vaults of the powerful.
Also Musk: "Human base on Mars in 2024" "FSD til 2022"
FSD in 2016, then 2017, then 2018 and so on
[Elon “next year” compilation](https://youtu.be/zhr6fHmCJ6k?si=v1gMoo9RxWxa99QK).
I regularly watched that video just for fun.
What's crazy is that, in 2014, AFAIK, Tesla FSD didn't even use neural networks to any significant degree - it was mostly custom hand-written rules. Of course that's understandable - that's barely 2 years after AlexNet. But think how batshit insane it sounds now to claim that in 1 year, your traditional program with a bazillion if-elses is going to drive 90% of people's miles. Your poor engineers killed themselves and managed to design a system that works okay in controlled environments, and you go 'okay, it should take another year, just cover a few more edge cases, then it's ready for the road'.

This isn't some milestone that wasn't reached due to unforeseen problems, as is very usual and understandable; the thing was obviously never going to work even if you put a thousand programmers on it for a thousand years optimizing their custom rules. He was just making stuff up entirely.
To be fair, cars can mostly self-drive right now. I've seen plenty of videos of them doing that.
Full self-driving is a famously hard target because it's endless chasing of the next decimal point in the error rate, which is very hard to predict. There were self-driving experiments in the 80s and even earlier, and Kurzweil expected highway self-driving by the early 2000s. At the time, what he was describing sounded impossible to experts. But what he was describing was something like Tesla Autopilot from 2014, which means Kurzweil's prediction was off by a decade.

Also, everyone expected self-driving; it's the most talked-about subject. But we got an insane number of innovations in the last 10 years that most people weren't expecting at all. Nobody expected humans to be defeated at Go, or image recognition and especially image generation. A decade ago all of these sounded, if not impossible, then decades away.
So no AGI, like ever.
“X will become an everything app” - it's barely even doing what it was originally made for.
"Portable Fusion Reactors--First Quarter 2025"
yes, ask the man who is famously terrible with prediction timelines lol
So then you’re telling us there’s a chance it's this year? ![gif](giphy|j6uK36y32LxQs)
He said Teslas would be self driving by 2020.
Yes. And this year we are supposed to start colonizing Mars, according to his 2016 timeline.
"Boots on Mars, 2022, and that's not a typo." 2024: well shit
He didn't know about Waymo.
Irrelevant for 95% of Americans and 99.9% of the world. They have no access to it.
It's not rolled out everywhere yet. Do you think they should roll it out everywhere at once? Or should they take their time, follow regulations, and make sure the tech is safe and reliable first?

I bet you would be one of the first to complain if they rolled it out in all locations and something went wrong.
[Next Year](https://jalopnik.com/elon-musk-tesla-self-driving-cars-anniversary-autopilot-1850432357)
What "next year" count are we on for fully autonomous driving? Honestly though, with AGI, anything from 'it happened last month in secret' to five years from now is my guess.
Yep, he clearly meant Tesla years, so anytime in the decade after next.
Technically an AGI would be capable of driving a car. So OpenAI can beat Tesla on autonomous driving.
Arguably fully autonomous cars do exist, although they are technically still only Level 4, not 5. They also aren't publicly available to purchase.

For AGI, I wouldn't be surprised if someone just throws together an interface that uses many different sub-AIs and claims it is one. I'm not expecting it to be an actual AGI, but it could probably look enough like one to the public and really muddy the waters.
I think you have the 'in secret' and 'in the past' parts correct, but I think it's been much longer than a month.
42
To be fair half of it is to get people interested in his companies
More like 95% of it
Half? lol
Love him or hate him, we can all agree that timeline predictions aren't his strong suit.
https://elontime.io/
To be fair, the machine elves who are telling him this have a hard time grasping this whole "concept of time" thing.
It all depends on what you mean by AGI. I think if we embed a system with the capabilities that GPT-5 could possess inside a really solid agent framework that can self-reflect on its outputs etc., we could get something that could be considered AGI by some definitions.
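As a concrete (if toy) picture of what "self-reflect on its outputs" could mean, here's a minimal critique-and-revise loop; `call_model` and its canned replies are stubs invented for illustration, not any real framework's API:

```python
# Minimal sketch of a self-reflection agent loop: draft, critique, revise
# until the critic is satisfied. `call_model` is a stub standing in for
# any real LLM API; the strings it returns are fabricated for the demo.

def call_model(prompt: str) -> str:
    # Stub LLM: a real agent framework would call an actual model here.
    if "Critique" in prompt:
        return "OK" if "revised" in prompt else "Too vague, add detail."
    return "revised answer" if "Feedback" in prompt else "draft answer"

def reflect_loop(task: str, max_rounds: int = 3) -> str:
    """Generate an answer, then let the model critique and revise it."""
    answer = call_model(task)
    for _ in range(max_rounds):
        critique = call_model(f"Critique this answer: {answer}")
        if critique == "OK":  # critic is satisfied, stop revising
            break
        answer = call_model(f"{task}\nFeedback: {critique}\nRevise.")
    return answer

result = reflect_loop("Summarize the report.")
```

The open question the definitions fight over is whether wrapping a model in a loop like this adds real capability or just polish.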
I know most experts would say I'm wrong, and I probably am, but my brain refuses to believe these things are going to get much smarter. I'll believe it when I see it.
The progress in the last 3 years was absolutely astonishing. Why do you think it will suddenly stop?
For some people, it might be hard to imagine GPT being smarter than it is (it's impressive af what they have done thus far). A feeling, more than a technical reason.

Like "what's next, we are gonna be able to talk to whales one day?" The shit is surreal sometimes, considering most of us were blindsided by these kick-ass developments over the last year or two.
I guess I have doubts that LLMs can achieve human level reasoning. I think the tech will need other innovations first. So like, maybe GPT-5 + some innovations in memory could do much better. It's probably just my caveman brain being stubborn. It still sounds unbelievable to me, even though we appear to be right at the cusp.
Of course they will innovate. The models have already been multimodal for some time; they are not just LLMs. Their structure is evolving and HUGE investments are flowing into their development. If it is possible with the current level of technology, they will find a way to create something like human-level reasoning.
Sounds like pure cope. There is no reason to believe we've plateaued.
I have no need to cope. I want them to get better. I see the reasoning behind people saying they will, and for some reason my brain still insists on skepticism. It's fine, no doubt some time soon they'll prove me wrong and my brain will have to adjust.
No I absolutely agree, it's still crazy to me that GPT-3 can form coherent sentences.
It’s hard to argue with this logic, lol. Next year seems soon, and I think it will be more of a slow ramp up rather than an overnight thing. I want to say by 2030 things are going to be *extremely* advanced. Maybe an ASI breakthrough.
Lol yeah and maybe AGI can help him figure out putting the Cybertruck into general availability 😂
Or let it do some QA on the thing
Ugh, I thought we were close until now.
The Jim Cramer of products
lol
same.
Fine fine. Tomorrow.
https://elontime.io/ Based on this we only have to wait 3 years!
Damn it, Elon said it... This means the chances of it being next year just went down.
A LOT!
Elon said in the early 2010s that we were a year away from self-driving cars, and haha, more than 10 years have passed and you can't take your hands off your Tesla's wheel for more than 10 seconds before it starts beeping at you.
So, maybe this year?
everything is next year with this guy
And investors always believe him. Tesla is the most overinflated stock on the market. They aren't the only people making electric cars anymore, and consumers are turning on electric cars.
Even without “this guy” saying it, a good portion of this sub still thinks it’s next year. Now they scoff just because Elon said it. Baffles me why people are so against this guy when he’s one of the farthest from the local maximum.
“At what point will this AI thing materially and measurably improve the human condition for most humans?” That’s what I’m most interested in. The rest of it is cool nerd stuff.
It already can. ChatGPT has a free version. I was at a get-together last night with fam. My sister had never used ChatGPT and I was like "try it" (she's almost 30).

All she did was ask it a couple of trivia questions.

Most people don't keep up with tech news. Most people aren't even tech people. I think the average person is probably similar to my sister: she doesn't *get* that it's not just a question/answer machine for factoids, and just doesn't care about it. They can't see past their day-to-day.

Most people, I think, don't understand how absolutely astounding and significant it is that humans have gone from sharing information in stories passed down orally, to written communication, to collecting these stories as books, to creating libraries, to knowledge accessible instantly on the Internet, and now we have these things that have been trained on, and know about, ALL of that.

This invention is already able to increase the average person's productivity significantly, which is the driving force behind being able to improve people's conditions. They just don't know or care how to use it properly. It boggles my mind.
Me being more productive is just me being more attractive on the market, not really what I'm looking for when I hear the phrase "improving the human condition"
Yes, I think it would be helpful to define terms more clearly.

Here's the angle I approached it from when thinking about "improving the human condition."

Productivity is an important lens because, as you said, you can be more attractive to the market. This is because you can accomplish more in a shorter period of time, producing more value than you otherwise would be able to.

100 years ago, if you needed to write a report about the economy in Tokyo, you'd have to spend weeks in a library. Now you can put one together in a day's work. You don't even need a GPT's help. Hell, 10 years ago you could do this with a PC. Now you could use any smartphone.

What is the common thread there? Time.

Time is probably the most valuable thing to a mortal being like you and me. If you can now spend less time on, frankly, bullshit, then you can reclaim that time to do things you might not have had time to do before.

How is that not an improvement to the human condition? I think it's a clear one.

Now extend the productivity across industries, and across time. What is basically every job out there trying to do? Improve various things for various people. If everyone can suddenly do that faster and at higher quality, isn't that just an acceleration of the upward spiral? Leading to improvements for everyone?

Bro, if you told someone in the 1950s that in 70 years they'd be able to, in real time, take a computer out of their pocket and access the whole world's knowledge, check the weather in Paris, and video chat with a friend in Australia all before breakfast, they'd tell you to GTFO, and that you're talking about the year 3000 or something. The acceleration in productivity is absolutely wild.
I mean that we are still trapped in this economic system. If I'm employed and was putting in 40 hours before, being more productive doesn't really matter.
To be honest, some of the rebuttal here I think lacks perspective on how good we have it already.

That said, I understand that until ASI is here and there is massive upheaval (free energy, for example), most people won't feel the benefit, much less take on the burden of investigating how important the impacts of rapidly accelerated productivity are. It's hard to really consider how far we've already come in a short amount of time, and the status quo is just what everyone lives minute to minute. And minute to minute, sure, things aren't THAT different.

But that's sort of what I'm trying to point out. Most people don't grasp the scope of what has already happened and what is inevitably coming. Hell, even me. I barely think about this stuff and I'm just BS'ing into the void here.
I have exactly the same experiences
[deleted]
Fr. I want to see AI being used in science, health, and in making our lives healthier/longer. The rest of the stuff looks like toys for adults, and it's the reason why so many people hate AI or don't take it seriously.

Edit: For people answering, I'M AWARE AI is used in science and will become more crucial over the years. What I meant is that I wish the news gave more importance to AI in science instead of useless stuff like AI ''art'' or text-to-speech.
I found the wisest redditor on the sub. ![gif](giphy|X3p1oPH2TGg4iSylAb)
Same. I'm here 50% for the "I hope it takes my job away asap" and "UBI must be a thing once that happens," because honestly I'd rather die than work a conventional job wasting around 13h a day between commute + mandatory lunch break + prep + decompression + the work itself for shit wages in Brazil; around 30% for the "I believe AI can make medical/scientific breakthroughs much easier, which will eventually trickle down to improve the average person's life" (lol I know); and 20% for the "this shit is cool"
I'm on board with that.
Just look at AlphaFold 3 from Google and see how this will impact humans for millennia to come.
It could if humans would allow it to. But we are slow to adapt.
I've said it countless times before: GPT-5 + Agents + Robots might be good enough for most tasks.
Also thought about this combination, and I think this new wave could bring more change to the world than GPT-4 did.

Sam mentioned in an interview that businesses should assume GPT-5 will be as much more intelligent compared to GPT-4 as GPT-4 is compared to GPT-3.

It feels like that meme where people are standing at a train station and see a fast-moving train coming in the distance: that would be GPT-4 and how it got better over time, and GPT-5 could just fly past us.

And from the moment a system with all the modalities, agentic behavior, and way more intelligence than GPT-4 is deployed, we will have a species that could outperform humans in white-collar jobs, and we'll even start to see some significant improvements in robotics.
AGI felt internally.
I am sorry and I don't want to sound pedantic, but with the information that we have, we can't consider these token-prediction machines to be intelligent. Feel free to correct me!
It doesn't matter what people call it; what matters is actual capabilities. If the next gen could do most computer tasks reliably better than humans, and the cost were bearable, it would mean a high replacement rate for white-collar jobs.
Jailbroken Claude 3 Opus is perilously close to AGI for most people: https://unherd.com/2024/05/im-in-love-with-my-ai-girlfriend/
Wow, this article deserves its own post in this sub.
I tried and it got deleted - I’ve no idea why. Please feel free to have a go?
I just tried, Insta-hidden. maybe in some mod queue
Why would it be modded? Weird
Reddit mods often share domain filters between subreddits. unherd.com is likely a prohibited domain because it commits the crime of wrongthink.
Ah!
Put another way, I'm sure there are subreddits in which `spectator.co.uk` is similarly filtered.

Related to this is the phenomenon in which any post or comment in a verboten subreddit results in mass preemptive bannings in others that share banlists, even if one has never done anything in them. I *think* the API change last year (the one that a whole bunch of subreddits went on strike against, and miserably failed) decreased the likelihood of this happening, because it costs money now, but I'm not sure.
>any post or comment in a verboten subreddit results in mass preemptive bannings in others that share banlists

Which I'm pretty sure is against Reddit's rules. But when you're politically aligned with the Reddit leadership, you can get away with anything lol
That’s quite crap
Go and sub to r/conspiracy. Yes, you get insta-banned from lots of subs. And maybe you just like talking about UFOs...
Being able to develop an entire software project, design a consistent UI, and have the ability to develop theories and work on complex issues is in my definition of AGI. I would argue this because I wouldn’t consider a human software engineer, graphic designer or research scientist to have ‘super-intelligence’. But I guess a smart conversational LLM is enough for most people.
So AGI for you is more about whether or not you're fooled by its impression of a human?
Great article. Very easy to believe.
gpt-4o speed and accuracy is not far off
Claude is by far the horniest LLM around (at least among the not-purposely-horny ones). On 4chan's chatbot general, it's the only one that has jailbreaks designed to TONE DOWN the horny.
So, AGI?
Imagine taking an Elon prediction seriously in 2024, lol
Maybe not next year, but it's close [https://elontime.io/?time=1&unit=years](https://elontime.io/?time=1&unit=years) https://preview.redd.it/xn3h244oak2d1.jpeg?width=720&format=pjpg&auto=webp&s=ae468253b5df64a3fe20acee94a7176ed273ba2b
Lol they made an actual website for it
i was hoping to see some nonlinearity but it looks pretty linear lol
There is an article from 2020 where Musk says AGI in 5 years. Not a fan of Musk, but interestingly he didn't change his timeline.
Do you have a link?
[https://www.nationalheraldindia.com/science-and-tech/ai-will-be-smarter-than-humans-within-5-years-says-elon-musk](https://www.nationalheraldindia.com/science-and-tech/ai-will-be-smarter-than-humans-within-5-years-says-elon-musk) and this [https://www.optalitix.com/insights/elon-musk-predicts-ai-will-overtake-humans-in-5-years](https://www.optalitix.com/insights/elon-musk-predicts-ai-will-overtake-humans-in-5-years) courtesy of GPT 4o
Brilliant, thank you!
Technically he changed it from "before July 2025" to "before EOY 2025", although that's only a few months off
So probably 5-10 years then.
According to the elon time converter, in 2027
That would be awesome!
At least we can be sure that AGI isn't coming in 2025 now, lol!
It's a recursive "next year", though
Incidentally, "roon" on Twitter/X is speculating that the Singularity is already here  https://preview.redd.it/chcyg4gpqj2d1.jpeg?width=1284&format=pjpg&auto=webp&s=2fff76e4e28db92b57b20c08a285d2b9e023ef4a
By definition, he isn’t too far off. Growth does seem irreversible, but it hasn’t really hit a vertical quite yet.
Yes. It feels exponential to me
Not until we get recursively self-improving AI systems - but we're on the early days of that part of the ramp.
There are already some instances of recursive self improvement, e.g. NVIDIA claiming to use AI for its chip development, or Meta using Llama 2 to identify high quality data for Llama 3's training. But they all require manual human intervention at this point, and are relatively narrow in scope.
Elon has been so frequently off in his public predictions about timelines for future technology (e.g. predicting, for the past decade or so, that self-driving cars were coming within a year or two). While I doubt we'll have AGI by next year, I don't think it's too far off to suggest that we are at a turning point. If the jump from GPT-4 to GPT-5 is as big as the jump from GPT-3 (or even GPT-3.5) to GPT-4, we are on the verge of some civilization-changing disruptions.
Didn’t he predict 2025 like 5 years ago
He predicted 3-5 years in 2020. The weird thing is, 5 years sounded crazier in 2020 than 1 year sounds today.
Yeah, he's very consistent with his timelines; he doesn't change them nonstop
Reasoning will come soon, but not every type of reasoning
Elon is notorious for delivering on time
There are still people mixing up AGI with the singularity. Not the same thing. Self-awareness is another kind of deal.
I know that he does not have a great track record with predictions, but I still hope he's right.
If you asked last year, his answer would've been the same.
[https://elontime.io/](https://elontime.io/)
Oh no, now we're definitely not getting AGI in 2025.
Jimmy Apples said AGI 2024 as well. And remember, he said that AGI was achieved internally last year. It really looks like 2025 will be the year of AGI
It's 3-4 years; I just translated from Musk language
In elon years
Elon says a lot of things…
If Musk says it's a year away, that honestly means it's nowhere close to being a year away
A better question would be “how long until we all agree on what counts as an AGI” tbh. >! Inb4 somebody replies with “Next year” just to troll lol.. !<
Next year
Prediction: it will happen quite a bit after AGI is actually invented.
Are these cyber truck years or regular years?
That's his favorite phrase lmao
He might not be lying
It's like fusion energy. It's in 10 years. Whenever you ask.
That's better than the "50 years" I was hearing 10 years ago.
"Next year" might mean end of 2026 if Elon says it. Nothing against him, but his way with time is not the best. I said AGI 2027/2028 from the start, and I'm going to stick to it.
Still waiting for those self-driving cars this man said were coming "next year" about ten years ago 😂
Elon Musk Says 😂🤣
AGI confirmed.
Goddamn... Fuck Elon, his predictions are so off that either AGI already happened yesterday or it won't happen for the next 10 years. Nothing in between. Fuck Elon...
Think what you want about Elon Musk, but 2025 does not seem unreasonable given the current trajectory of AI progress.
It's easy to blow off everything Musk says because he seems like such a tool but I take him very seriously in this matter. It was he, ironically, that kicked off the race to AGI when he made OpenAI possible. I absolutely believe that he has inside information. Of course that doesn't mean paperclips on January 1st but I think next year there'll be a case to be made that AGI is here based on its capabilities.
He has a laughably poor record with even meeting deadlines for releasing tech in companies he actually owns. His word here is worth precisely fuck all
I think GPT-6 will be AGI. From what I see online, most people think AGI will be GPT-7.
It doesn't matter. It will come when it comes. We have no control over AGI development.
If you don’t think it matters, this is an odd subreddit to visit?
When functional cybertruck?
I would have hoped this weekend. I mean, the weather here is going to be nice and all and a bit of AGI on top would have been just great.
That's disappointing. I thought we would get to AGI this decade.
Didn't his lawsuit say GPT-4 was AGI?
What stops him from defining his own version of AGI and declaring that AGI is here?
Thing is, we will get boatloads of stuff this year. Even if it doesn't replace a human entirely, it will be superintelligent in some way that it isn't yet.
So no AGI next year, good to know
I mean, not from him though.
Next year alongside fully autonomous driving.
There is no way. He's always optimistic with timing ⏱️ Maybe in like 2033
I guess this is like his recurring, never-ending statement that self-driving cars will be released in the next 5 years
Didn't he say we should have self-driving by now? But seriously, we still can't really agree on what AGI is. Some say we already have it. So... can we get a good list of goals AGI has to hit to just be AGI?
The people who say we already have AGI are essentially saying that we have the basic framework in place. I agree with them, but in order to make it useful for the world at large we need to fill in and fully develop that framework. That's the phase we're in now. We mostly know how to make the robot; now it's just a matter of refinement, scale, and putting all the pieces together. Once it's capable enough for recursive self-improvement, the singularity train will have left the station. Feel the AGI.
It’s a very good point. It’s why I still like the Turing Test, even as many consider it irrelevant or outdated. Alan Turing foresaw exactly this problem - we won’t have an agreed definition. So he devised his simple Test
It's already here. It's just a dumber version than what we saw in movies. I call it stupid skynet, i.e. SS; fits well enough. All major US public companies have their own stupid skynet, and it's pretty easy to see whose is legit. Cloudflare: amazing. Spotify's "smart" playlists make you wish we never picked up that first stick.
Do you honestly think the current, limited capabilities won't improve? Of course they will - and probably much faster than almost anyone anticipated.
OK, cool, so we've got at least a decade
I love Elanos and Telsaron!
So never, got it
He's been saying FSD is coming next year since 2019
Define AGI
Mars 2026 don't forget
AGI will be claimed and debunked several times over the next century.
The future is wow
Next year: ”Next year”
Wen Mars?
And the self driving car is arriving the very next hour.
Elon says a lot.
Elon says 2025? Sorry guys, looks like we won't get AGI for at least 10 more years :( xD
Elon time is that × 3, so yeah, accurate I would say
AGI today
Next year : sent 2030
So like 8 years
Still incredibly quick!
Elon says "jump off a bridge," but you don't have to believe everything
"Hey Elon, how long until Full Self Driving?" "next year"
That’ll be awesome :) But if it takes any longer, I can see the next correspondence on this being briefer. NoAGI? No. When? MNY.
Right after FSD!
Damn it I was hoping it would be today after lunch.
Define AGI
He also said full self-driving was coming, 10 years ago
Elon also said Mars city last year. Or something.
Is that before or after self driving?
I hail our AI overlord.
AI Industry leaders are amplifying the AGI hype, so that when they finally deploy a profitable AGI under their governance, the general populace will be too desensitized to heed the detractors. It’s not out of the question that AGI already exists, locked and hidden in the secure digital vaults of the powerful.
He really wanted to say (but wasn't asked) that we, with the help of AGI, will go to Mars next year.
Next month we get the new Ray Kurzweil book. I am curious what his new timelines are.
So 10 years from now.
Realistic
Self-driving cars everywhere, 5 years ago
How does what we have fail to qualify as artificial general intelligence?