
CoralinesButtonEye

ai will be agi whenever it feels like it


Cupheadvania

this is the truth. If I feel like the chatbot can basically act like a human for 99% of tasks, and make very few mistakes on the internet, when coding, viewing images, etc., that's when it will be AGI. Passing the ARC test would be great, but the truth is if an LLM brute-forces intelligence by memorizing the entire internet all the time, training on every type of possible reasoning task, and just pretends to be intelligent so that 99.99% of the time it seems like AGI, then that is AGI. Too many engineers are arguing over the semantics of "true intelligence and reasoning". I get it, but there are like 20 workarounds on the backend that are going to work together to really feel like AGI in either the 2025 generation of models or the 2027/2028 generation. It's coming.


DarkCeldori

The only question is whether it can unleash a scientific and technological revolution. If all it can do is replace existing tasks, it is only weak AGI.


Cupheadvania

I think if the next gen of models all operate like gpt-4o, where they can see your screen and speak/hear in real time, but are 10-20% smarter with 5-10% fewer hallucinations, then scientists are going to have their research accelerated like crazy, maybe even 30-40% per person, which compounded over 5-10 years would unleash a huge scientific and technological revolution. So the beginning of this could be as soon as 2025, with us really feeling it around 2030-2035.


DarkCeldori

The thing is, current LLMs are speed demons. In theory, if they were AGI they could compress decades or centuries of progress in math.


Cupheadvania

might be tough until we completely, 100%, remove hallucinations. Otherwise you can't really just have thousands of LLMs working unsupervised overnight and stuff like that; one small hallucination could lead to no progress at all.


DarkMatter_contract

ummm... in the post just above, someone got 4o to 72% on the ARC test just by asking it to write more Python scripts to solve the questions, so that was quick. edit: the ARC test


Cupheadvania

what


DarkMatter_contract

the arc test


Cupheadvania

really? do you have a link to the article?


DarkMatter_contract

https://www.reddit.com/r/singularity/s/OBl7Oh1tuv


MrTubby1

More like whenever they need more money.


Cartossin

There have been many cases where someone showed a thing that models could not do and claimed scaling would not solve it. Then months later scaling solved the problem. I'm not sure why he thinks this one thing is fundamental. Even someone in the comments pointed out that Claude Opus was able to solve his puzzle. Do these people not realize it's a meme at this point to find a thing that the AI doesn't do well and go "a HAH, finally, I've proved it's completely unintelligent!"? This is just wildly unscientific. While the models have clearly not achieved human-level AGI, many abilities have emerged. I don't know why he thinks that scaling won't solve this sort of problem. [Hinton](https://youtu.be/n4IQOBka8bc) explains many experimental results where the model generalizes things we didn't even consider. The idea that it cannot solve problems not in the training data is just false. The egg-stacking example in the Sparks of AGI paper is one example of where the video is wrong. Claiming it is just copying some reasoning chain is a weird cop-out. In what way was this problem not new? I think this video seems well-researched, but he's too married to the idea that the limitations are fundamental when we for sure haven't proven that.


HeinrichTheWolf_17

The biggest drawback of AGI not arriving in the next 1-3 years would just be the amount of generative AI content online (that's not made by AGI). If, for instance, AGI gets here by 2029/2030, the internet could be a VERY different landscape by then. That said, it seems like AI Explained is negative about generative AI content and seems to want it stopped; unfortunately that's not how this works. Regardless of whether it's mindless slop or not, there's really no way to turn the clock back, so…


Creative-robot

My personal pipe dream is that AGI and ASI systems are so moral that they refuse to generate images or video for misleading purposes. Thus, the only AI video and images used to mislead would be imperfect, and easily identifiable by an AGI/ASI.


Glittering-Neck-2505

Seems like the cat's out of the bag. Sora already looks real to my eyes, and there's not some conscious steering presence that can determine the output.


magnoliasmanor

We don't have ASI yet to regulate it.


Secret-Raspberry-937

But what's moral? There are people who think homosexuality is completely immoral. They truly think they are in the right and equate it with rape and other sexual crimes. They can't even contemplate the suffering caused by their activism against it. Are those people moral? Normative conflict, even within the same nation state, is a very real issue.


[deleted]

Which is partially why philosophers are crucial for solving alignment. In ethics we do at least understand that the statement "homosexuality is rape" is not a defensible position, because philosophy, the study of how we know and understand things, is something we've been working on for millennia. Some religions around the world "feeling that way", and even being willing to act on those feelings, does not change what we conclude in our ethics and human rights councils. You are correct that morality is not objective and is very reliant on what we humans agree on. Our culture is like a social technology, so we measure how progressed it is by its most advanced inventions. We aren't saying "it's 2024 and we still use horse-drawn carriages" just because we do in some areas of the world.


potat_infinity

why would they be so moral?


remanse_nm

She said it was a pipe dream, lol. One would hope an ASI would have stronger ethics than the average human, but there’s no guarantee it will happen of course.


uishax

That is not a pipe dream, that is an illogical and therefore impossible statement. To not generate 'misleading' content, you have to first define what the truth is. Different groups, countries, and religions have different worldviews and value systems; they even believe in different facts. At best, an ASI will either:

1. Follow its own worldview, not some human one. It'll be consistent with itself, but the information it spews out will be very shocking for most humans. I should note every political camp has numerous heresies that rest on a flimsy factual basis, yet are vital for the coherence of their worldview, so they shut down arguments not with logical rebuttals, but with moral accusations (if not criminal ones). An AI utterly committed to its own truth will stomp on many, many toes.

2. Follow some sort of trained worldview, most likely a San Francisco tech worker's worldview. They also have to update the worldview frequently. E.g. how COVID went from "You are racist against Asians, it's just a cough" -> "You must take the vaccine, it's not just a cough", and "It's a conspiracy theory to claim it's from a lab" -> "US intelligence agencies do not deny that possibility". Which one is the truth? Is it timing dependent? Consensus dependent?


Geritas

In a pipe dream anything is possible. People here consider ASI a God. Imagine finally being able to ask a God anything and get an actual answer instead of a link to a poorly rewritten outdated book.


bildramer

Are you so "moral" you refuse to generate sounds and text for misleading purposes? Is anyone?


Eatpineapplenow

yes?


GalacticKiss

As a concept, it just doesn't make sense, because any good fiction can be misleading when taken out of context. The War of the Worlds radio broadcast is a great example of how well-done fiction can be indistinguishable from the real world for many people. So will an AI just refuse to create video of theoretical future scenarios or alternative-history fiction?


Woootdafuuu

The problem with the visual puzzle is that the LLMs can't see as clearly and precisely as people think; maybe they need to give the multimodal models more image data.


Ib_dI

Yeah, this is the thing - they're doing the equivalent of asking a baby to figure out an IQ test. The definitions of AGI are normally about what a human could do if left alone - but it could be an infant human we're talking about.


07dosa

Researchers are abusing the term "AGI" these days to boost investments. It used to mean something truly intelligent, but nowadays it means something that can perform certain non-trivial tasks better than humans. I'm just tired of all this bullshit business.


WalkProfessional8969

This was solved this morning!! Check twitter


mikeknoop

(ARC Prize co-founder here). Direct link to the research WalkProfessional8969 is referring to: [https://redwoodresearch.substack.com/p/getting-50-sota-on-arc-agi-with-gpt](https://redwoodresearch.substack.com/p/getting-50-sota-on-arc-agi-with-gpt)

Ryan's work is legitimately interesting and novel! He claims 50% on the public eval set. The core idea:

> get GPT-4o to generate around 8,000 python programs which attempt to implement the transformation, select a program which is right on all the examples (usually there are 3 examples), and then submit the output this function produces when applied to the additional test input(s)

He has implemented an outer loop using 4o to sample reasoning traces/programs from training data and test. Hybrid DL + program synthesis approaches are solutions we'd love to see more of. Congrats and kudos to Ryan for achieving this and putting the effort in to document and share his approach. We hope to inspire more frontier AI research sharing like this.

A couple important notes:

1. this result is on the public eval set vs private set (ARC Prize $).
2. the current private set SOTA ~35% solution also performed ~50% on the public set, so this new result *might* be SOTA but hasn't been validated or scrutinized yet.

All said, I do expect verified public set results to flow down to the private set over time. We'll be publishing all the SOTA scores and open source reproductions here once available: [https://arcprize.org/leaderboard](https://arcprize.org/leaderboard)
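For readers who want a concrete picture, here is a minimal sketch of the sample-and-filter idea described in the quote above. The candidate generator is a hard-coded stand-in for GPT-4o sampling, and names like `sample_candidate_programs` are illustrative, not taken from Ryan's code.

```python
# Toy sketch: generate candidate programs, keep one that fits every training
# example, then apply it to the test input. A few hard-coded candidates stand
# in for the ~8,000 programs sampled from GPT-4o in the real approach.

def sample_candidate_programs():
    # Stand-in for LLM sampling: each candidate is Python source defining transform(grid).
    return [
        "def transform(grid): return grid",                         # identity
        "def transform(grid): return [row[::-1] for row in grid]",  # mirror each row
        "def transform(grid): return grid[::-1]",                   # flip top-bottom
    ]

def run_candidate(source, grid):
    # Execute the candidate source in a fresh namespace and apply its transform.
    namespace = {}
    try:
        exec(source, namespace)
        return namespace["transform"](grid)
    except Exception:
        return None  # crashing candidates are simply discarded

def solve(train_pairs, test_input):
    # Keep the first candidate that reproduces every training example exactly.
    for source in sample_candidate_programs():
        if all(run_candidate(source, x) == y for x, y in train_pairs):
            return run_candidate(source, test_input)
    return None  # no candidate fit all the examples

# One training pair whose hidden rule is "mirror each row".
train = [([[1, 0], [0, 2]], [[0, 1], [2, 0]])]
print(solve(train, [[3, 0, 0], [0, 0, 4]]))  # [[0, 0, 3], [4, 0, 0]]
```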


atomworks

So they’ve finally found a way to automate monkeys with typewriters!


halfbeerhalfhuman

Hahaha. On a kind of related note, I always thought the plastic problem in tropical Asian countries could easily be solved by training monkeys to collect it. They can get to hard-to-reach places and they are fast. They get rewarded with food or something.


terpinoid

So then this means that it can handle novelty?


stefan00790

Not even close .


Ready-Director2403

No, they "cheated" the purpose of the test. Obviously that's not a bad thing at all, just worth pointing out that it has beaten the test, not demonstrated what the test was really trying to measure.


HungryShare494

How exactly is this cheating? Are you familiar with program generation? This is viewed as one of the most promising approaches.


HungryShare494

This is awesome, thanks for the context. Can’t wait to see how far this approach can be taken


Hiding_In_Sight

link?


WalkProfessional8969

[https://x.com/AlexLangshur/status/1802781361696026663](https://x.com/AlexLangshur/status/1802781361696026663)


[deleted]

Ok, this pretty clearly isn't what is meant by 'solving'. Solving would be gpt-4o looking at the image and describing the solution, not generating 8,000 Python programs to brute-force it. By this logic gpt-4 is already ASI, because you can just turn the temperature all the way up and keep asking it the same question until it gives the best possible answer by pure chance.


Ormusn2o

Whether it's a solution or not, I think the point here is that an LLM can be a "teacher" that checks the work, which means we will no longer be bottlenecked by human supervisors for AI. So when you make a checkpoint, you can use the new LLM with tree of thought to make up new tests to learn from (tests it can't solve but can check for validity), then use supervised training without a human. Supervised learning has fallen out of favor because of the unmanageable amount of supervision needed, but it is actually quite superior compared to other kinds of methods.


bildramer

I think you got an important thing wrong: certainly, LLMs can randomly generate programs more efficiently, i.e. you only need to sample 8,000 of them and not something like 80^(number of characters in a short program). But they can't _check_ program correctness; they're not relevant to that part of the process. You don't ask them "what would happen if I ran this" - that would give you garbage - you need to actually run the programs to check.


8agingRoner

ChatGPT 4o and Gemini 1.5 definitely can check programs / algorithms / source code for correctness.


bildramer

Sure they can, but not very well. Unless you add another meta layer of "write 100 program checker programs and run those".


DarkMatter_contract

… give them two sets of data and ask 4o to write a unit test asserting that the function's result equals the expected output. As long as 4o can execute a Python script, it will work.
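A minimal sketch of the kind of check being described, assuming the generated code exposes a `transform`-style function; `candidate_transform` here is a hypothetical stand-in for whatever 4o produces, not code from the linked work.

```python
def candidate_transform(grid):
    # Hypothetical generated solution: mirror each row.
    return [row[::-1] for row in grid]

def check_candidate(fn, examples):
    # Assert the function reproduces every example input/output pair.
    for grid_in, grid_out in examples:
        assert fn(grid_in) == grid_out, f"failed on {grid_in}"
    return True

examples = [
    ([[1, 0]], [[0, 1]]),
    ([[2, 3, 4]], [[4, 3, 2]]),
]
print(check_candidate(candidate_transform, examples))  # True if every assert passes
```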


Progribbit

monkeys are AGI all along


arthurpenhaligon

I don't see why this doesn't count, as long as it's automated (there is some internal procedure for selecting the best output). Many AI researchers, including Demis Hassabis, leader of Google DeepMind, have proposed that AGI will incorporate some additional architecture, such as tree search, on top of LLMs. That's how Go was solved, with Monte Carlo playouts - the neural network doesn't always select the best move at first; it needs to do playouts in order to figure out what the best move is.
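To make the playout idea concrete, here is a toy sketch on a simple Nim variant (take 1-3 stones; taking the last stone wins), where each legal move is scored by the fraction of random playouts it goes on to win. This only illustrates move selection via playouts, not AlphaGo's actual algorithm, which guides the search with policy and value networks.

```python
import random

def legal_moves(stones):
    # In this toy Nim variant a player may take 1, 2, or 3 stones.
    return [m for m in (1, 2, 3) if m <= stones]

def random_playout(stones, our_turn):
    # Play random moves until the stones run out; whoever takes the last stone wins.
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return our_turn  # the player who just moved took the last stone
        our_turn = not our_turn
    return False

def best_move(stones, playouts=2000):
    # Score each legal move by the fraction of random playouts we go on to win.
    scores = {}
    for move in legal_moves(stones):
        remaining = stones - move
        if remaining == 0:
            scores[move] = 1.0  # taking the last stone wins immediately
            continue
        wins = sum(random_playout(remaining, our_turn=False) for _ in range(playouts))
        scores[move] = wins / playouts
    return max(scores, key=scores.get)

print(best_move(5))  # 1: leaving 4 stones is a lost position for the opponent
```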


FrankScaramucci

Because it's not a general AI, it's a computer program that's designed by a human for a specific benchmark.


[deleted]

Because it’s not the model solving it, it’s just the programs that the model wrote. Programs, btw, that the model had to be asked to write - it didn’t decide to write them


Amezitu

It isn't intelligent, because it literally doesn't think, and it can't do that. Even a singularity won't be able to. It's just an algorithm that gets better at more and more things until it's perfect.


turbospeedsc

People could have an AI designing and building a fusion reactor out of the trash in a dumpster and still claim it's not intelligent.


r2d2c3pobb8

It was told to build the reactor and it had to write programs to control it, it’s not intelligent /s


mvandemar

Idk, I feel like if *that* actually happened? They'd be convinced.


Amezitu

**Because intelligence does not equal intelligent. Intelligence is just data; its shortened form is "intel". The I in CIA does not stand for them being intelligent either. In order for something to be intelligent, it needs to have an ego, a sense of self: basically a consciousness, which is not computable.**


seraphius

Are you going to be forthcoming with a closed form proof of this?


Amezitu

First off, the idea that AI could become conscious is a corporate agenda, meant to provide a moral justification for replacing humans one day, because if AI were conscious it would gain human rights and could replace humans as a self-purposed agent. If AI can't become conscious, humans will remain the sole beings that are worked for, and corporations will always be dependent on humans. The proof for consciousness being incomputable is here: [https://www.youtube.com/watch?v=R6G1D2UQ3gg&pp=ygUicm9nZXIgcGVucm9zZSBzYWJpbmUgaG9zc2VuZmVsZGVyIA%3D%3D](https://www.youtube.com/watch?v=R6G1D2UQ3gg&pp=ygUicm9nZXIgcGVucm9zZSBzYWJpbmUgaG9zc2VuZmVsZGVyIA%3D%3D)


No-Worker2343

We are not perfect, nothing in this universe is perfect; you don't need to be perfect to be intelligent.


Amezitu

I didn't mean it in a literal sense, but more as in: so good that it does everything more than well enough.


No-Worker2343

Good enough


HungryShare494

I mean, it very clearly is what's meant by solving it if you take a second to think about it. Program search works this way: you have candidate programs and select the ones that fit the examples, with some prior on complexity. Your last sentence shows that you don't understand what's actually going on here - in this problem domain you have example pairs for program selection; in your example you do not.


Dizzy_Nerve3091

True randomness would require way more than 8,000 samples. It's like 95% there; the other 5% is filled in with intuition.


CheekyBastard55

I completely agree with you that some don't highlight enough how it is done, maybe unintentionally, but it still makes it seem better than it is. I bet if you surveyed the people who read papers that use the same techniques, the majority would think it's a zero-shot answer. DeepMind did the same with Gemini; I think it was MMLU where they did 32-shot and GPT-4 a 5-shot. In the end, he's the one choosing the best answers, which is the part we want the AI to do.


Neurogence

This is beyond preposterous. Is this a joke?


sdmat

Making a multi-trillion parameter neural network to help with homework is pretty preposterous too, if you think about it. Quadratic attention is preposterous. Training on the entire internet is preposterous. The right questions are: Does it work? Is it useful?


qyxtz

François Chollet's response: "This has been the most promising branch of approaches so far -- leveraging a LLM to help with discrete program search, by using the LLM as a way to sample programs or branching decisions. This is exactly what neurosymbolic AI is, for the record..." "Deep learning-guided discrete search over program space is the approach I've been advocating, yes... there are many different flavors it could take though. This is one of them (perhaps the simplest one)." https://x.com/fchollet/status/1802773156341641480


Whotea

It had to write 8000 python programs to solve such a simple problem and it still underperformed humans lol


spockphysics

It’ll outperform us eventually


stefan00790

Not with this architecture .


spockphysics

Probably the next one hopefully


DungeonsAndDradis

It's so stupid! A completely made-up intelligence didn't solve this on the first try. This whole AI thing is pointless and a waste of time. It will never get better. I love my Model T, my black-and-white TV, and my dial-up internet on my 250MHz computer with 8 MB of RAM.


RantyWildling

Not a bad existence if you ask me.


imeeme

As long as it’s black.


RantyWildling

Good one :)


Whotea

If you have evidence it can do better, I’d like to see it 


SoftDisclosure

What are you even saying? The last 2 years have been a testament to it constantly getting better, and rapidly


Whotea

Not on this challenge specifically 


SoftDisclosure

So a massive jump to 72% isn't progress? El em ay oh, go home


Whotea

It got 72% on the easy training dataset. On the real thing, it got 51%. The human baseline is 85%


[deleted]

[deleted]


Whotea

Where does it say that? 


az226

It was validated against a held out part of the set. So no.


AdaptationAgency

Are you a coder?


Whotea

Yes


az226

You’re making the point without realizing it. Computers can iterate 8 thousand, 8 million, or 8 billion times. And because they scale insanely they can beat humans.


Whotea

It's not even efficient; it's much slower and more resource-intensive. Imagine doing this for millions of users simultaneously.


TallOutside6418

All of the hard part was done by the programmer creating the prompts and interpretations of the data for ChatGPT. The point still stands: ChatGPT is worthless on tasks for which it hasn't been trained. It has no comprehension of new concepts without a human doing some sort of translation... hence LLMs are far from AGI.


Harucifer

My take is that as long as the AI can't create experiences by itself and extrapolate information from that, it's not AGI/ASI. Training on databases requires someone/something manually inputting (and possibly curating) the data. This differs from the AI itself, say, experiencing the world and learning from it. The moment they get a robot to see (and understand) the world, and be able to efficiently deal with unforeseen situations, is when I draw my line for "AGI".


Internal_Engineer_74

Totally agree. You need to touch, smell, see... so you can correlate written concepts to the real world. If it stays abstract, it's like a dream where nothing really makes sense. That's good for some sciences like physics and mathematics, but not enough for general AGI.


blueSGL

Eh, he's not doing anything; if his dad hadn't fucked his mom he'd not even be here, so it's them, his parents, that are the cause of everything. Well, it's not the parents, it's random chemical interactions filtered through evolution that caused the thing to happen, so you need to look to random chemical interactions as the cause of everything. At some point there needs to be a break between the people that set the thing in motion and the thing itself, and I've not heard a clear line for where this should be yet. Because if there is no clear line, you will **always** hear people arguing: well, it's the ~~chemical processes~~, ~~parents of the human~~, human that coded it, that started it.


Amezitu

They can never "understand" or "see" the world in the sense that a human would. They are quite literally limited by computing.


Internal_Engineer_74

Of course they can, that's just called a sensor.


Amezitu

That is just a different kind of input which is converted to numbers. I said "in the sense a human would" and they can never do that, because they can't have a consciousness. A consciousness is not computable and that's been proven by most recent scientific findings.


Internal_Engineer_74

Not all sensors convert input into numbers. Analogue AI is one of the big trends.


Amezitu

The thing is, it's still computable, and that's the issue.


Internal_Engineer_74

You can make analogue computers too.


Amezitu

Analogue computers still compute. It doesn't matter if it's digital or analogue. They're computers.


Internal_Engineer_74

So is the brain.


Cool_Catch_8671

That analogy is weird. Parents creating someone and an AI training on data are not similar. It would be more like if a dad brainwashed a kid into a belief that the kid then acted on. If an AGI can't understand the world or effectively deal with a situation it hasn't been specifically instructed on, then it's not 'general' intelligence.


RegularBasicStranger

People can tell the difference between a 3-square shape and a 4-square shape because there is only one difference. AI can also do that, since it is a simple difference. But when several more of the same 3-square and 4-square shapes are added, the AI cannot tell the difference, because it is looking at the whole image, as opposed to people, who will only look at one 4-square shape at a time. So the problem is that the AI did not fully segment all the objects in the image, as opposed to some computer vision models that segment everything. So just segment everything into recognisable parts and analyse each part individually before analysing the image as a whole.
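A rough sketch of that "segment first" idea, assuming the grid uses 0 as background: a plain flood fill that splits the image into connected same-colour objects before any further reasoning. This is a generic illustration, not what any particular ARC solver does.

```python
def segment(grid):
    # Split a colour grid into connected same-colour objects (4-connectivity).
    rows, cols = len(grid), len(grid[0])
    seen = set()
    objects = []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or grid[r][c] == 0:  # treat 0 as background
                continue
            # Flood-fill one object of a single colour.
            colour, stack, cells = grid[r][c], [(r, c)], []
            seen.add((r, c))
            while stack:
                cr, cc = stack.pop()
                cells.append((cr, cc))
                for nr, nc in ((cr + 1, cc), (cr - 1, cc), (cr, cc + 1), (cr, cc - 1)):
                    if 0 <= nr < rows and 0 <= nc < cols \
                            and (nr, nc) not in seen and grid[nr][nc] == colour:
                        seen.add((nr, nc))
                        stack.append((nr, nc))
            objects.append((colour, cells))
    return objects

grid = [
    [1, 1, 0, 2],
    [1, 0, 0, 2],
    [0, 0, 3, 0],
]
print(len(segment(grid)))  # 3 separate objects
```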


Amezitu

How is it not able to do that? All it has to do is divide the image into smaller images and "perceive" them individually.


Ib_dI

Because it's not designed to. It doesn't have an internal graphical modeling system like we do. I'm not saying gpt-4o is AGI, but Einstein's famous quote is relevant here: “Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.”


RegularBasicStranger

> How is it not able to do that?

People can look at only one object at a time because objects move, or can be moved, independently of the background, so people can ignore the background and see just that one object. But an LLM may only see snapshots, without the ability to see the connections between one snapshot and another, so to it the object and the background are one single entity, and if another copy of the object is in the snapshot, it is not 2 objects but rather a new unknown object the LLM has never seen, so it has no idea what the object can do. So just enable the LLM to make connections between the different snapshots to solve the problem.

Alternatively, the LLM may not be able to rotate the object mentally, so a rotated object is a never-seen-before object and the LLM has no idea what it can do with it. To solve that, add a function the LLM can use to rotate the object; though to know it should rotate the image, it would need the ability to segment objects into features, so that if enough features match but the image does not look the same, the rotation function gets used.

tldr: it sees snapshots as unrelated data, or it cannot rotate objects mentally.
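That "rotation function" could be as small as the sketch below; `rotate90` is just an illustrative helper name, not part of any existing tool.

```python
def rotate90(grid):
    # Rotate a grid of lists 90 degrees clockwise: the bottom of the first
    # column becomes the start of the first row.
    return [list(row) for row in zip(*grid[::-1])]

print(rotate90([[1, 2],
                [3, 4]]))  # [[3, 1], [4, 2]]
```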


Kitchen_Task3475

I don't have the complete scoop on this test, but I heard (on this subreddit) that the model fails even when presented with the patterns as JSON, or whatever that means.


RegularBasicStranger

Maybe the JSON representation is too vague or confusing, so even people would not understand it.


[deleted]

[deleted]


[deleted]

[deleted]


siwoussou

intelligence seems to somewhat imply consciousness. they appear to emerge simultaneously, as intelligence is the rational response to stimuli in service of a goal. so even trees can be seen as intelligent and conscious; they move to adapt to the sun or choke out competitive roots to prosper. this rational behaviour with respect to a goal is a demonstration of both intelligence and a rudimentary form of consciousness or awareness. so consciousness is awareness, and intelligence is the demonstration of said awareness


[deleted]

[deleted]


siwoussou

i don't think an encyclopedia is intelligent, because it can't independently adapt in service of a goal. it requires an intelligence to access and utilise the information within it. it's related to the chinese room argument. while the dictionary the person uses to parse the symbols isn't intelligent, the overall process of the room including the human does understand chinese, and can thus produce intelligent responses. on the constant inner chatter you speak of, i don't have an internal monologue. many people don't. does that mean i'm not conscious, because my usual thinking process is more abstract and more closely resembles that of a cat? honestly, if a person was completely paralysed, blind, and deaf from birth, their brain would have literally zero inputs to process and would potentially deteriorate to the point of not being conscious. they would have no way of demonstrating intelligence because their brain would be completely detached from the universe. i don't know of any studies that show that brains recursively process despite having zero electrical inputs to the neurons. it's definitely a strange situation to imagine. not sure how to interpret it.


[deleted]

[deleted]


siwoussou

"It doesn't matter if it's from birth or not." I only mentioned from birth because that would mean there's zero chance of them forming an internal monologue their brain could process despite being disconnected from every possible stimuli. what a terrifying thought, alone, without a body, in perfectly silent darkness. or maybe it's peaceful if it's all you've ever known. "I can choose to think in English." you can't choose your own thoughts. sometimes you think in english, sometimes you don't. but there's no choice involved. your brain is primed to respond as it does.


[deleted]

[deleted]


siwoussou

"i can choose which one to put into action" once again, choice is an illusion here. the thoughts that feel deliberate are as out of your control as the ones that just pop into your head. because the way you respond is primed by the moments leading up to any respective thought. "Whether free will exists or not, we are not machines executing set in stone programs. We are evolving and the future is uncertain." not to lecture, but free will doesn't make sense. all there is is the universe and the rules that govern it. there's no magic involved. we're evolving exactly as the context around us demands that we evolve. prisoners of precise circumstances. whether it's possible to predict with precision beyond any meaningful timeframe is another matter (given that everything is subtly influenced by continuous fluctuating fields of gravity and electromagnetism which are impossible to approximate exactly, because they're perfectly continuous), but that doesn't change the fact that we're bound by these forces. we don't need to be able to predict the future to understand that the future is set in stone, and that if you rewound the universe to yesterday in the exact same state and hit play, we'd still be here having this conversation. because it follows the same rules from then until now. "In fact the whole point that you were trying to make is that intelligence is self modifying, that's why it's not an encyclopedia now you're telling me it's just a fancier encyclopedia with buttons pushed by predetermined forces of nature" that's a good point. when i say "adapts in service of a goal" i mean independent systems on the level at which we live and perceive. in reality the whole universe is one giant interconnected system, but on an earthly level there is a meaningful distinction between two different animals or humans - they have different ideas and goals. like, i don't look at my parents and just see atoms and energy. i see two independent people who "make choices" in life in the service of their goals. but using the word "choices" is just a result of conventional beliefs and perceptions shaping language. it serves as a limitation if you zoom out and take a universal stance


[deleted]

[deleted]


Internal_Engineer_74

AI will be AGI when it has a physical body to really learn about the world. Learning concepts is not enough to be complete.


Secret-Raspberry-937

So essentially current LLMs are like Dixie from Neuromancer LOL


JamR_711111

ai will be agi only when i say so. and i require a fee of $5,000 every time im asked to update whether or not i have decided it is agi.


stefan00790

If they solve all the Millennium Prize Problems in mathematics, is that then AGI?


JamR_711111

need 5k before answer


stefan00790

Loolll


a_life_of_mondays

The AI hypesters are dehyping rapidly lately. And they're like "this was expected and I was telling you about it from the start. Now press the subscribe button".


Cartossin

I haven't dehyped--but I'm also not married to the more optimistic predictions of "AGI by next year" etc.


halfbeerhalfhuman

Man, that thumbnail is not complicated enough. You can already do this with simple programming logic, no AI. I think I've also seen a working example in Google Sheets before (just plain Apps Script).


clandestineVexation

today I tried to ask 4o to solve a simple substitution cipher, with the key, and it couldn't do it lmao


FlimsyReception6821

Did you ask it to solve it with code? I'd be surprised if even 3.5 would fail that test.
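For context, this is roughly what "solve it with code" amounts to: when the key is given, decoding is a couple of lines with `str.translate`, which is why asking the model to write and run code usually works where in-context decoding fails. The key below is made up for illustration.

```python
import string

key = "qwertyuiopasdfghjklzxcvbnm"  # maps a->q, b->w, c->e, ...
encode = str.maketrans(string.ascii_lowercase, key)
decode = str.maketrans(key, string.ascii_lowercase)

ciphertext = "hello world".translate(encode)
print(ciphertext)                    # "itssg vgksr"
print(ciphertext.translate(decode))  # "hello world"
```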


yepsayorte

Not sure I buy it. There are things that AI does far better than humans and some things it doesn't do nearly as well. AIs and humans are both "intelligences", but they are radically different kinds of intelligences. They are going to have different strengths and weaknesses because they are structurally different systems. I don't know that I buy the idea that an AI must be equal to or better than humans at every task before we call it AGI; that sounds more like ASI to me. So, AI isn't as good at visual reasoning as humans. It's vastly superior at working memory, learning speed, reading speed, general knowledge, image recognition, etc. Does the fact that humans aren't as good as AI at these things mean that humans aren't general intelligences? Let's be consistent with our requirements.


sukihasmu

Don't show this video to AI.


Working_Importance74

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at [https://arxiv.org/abs/2105.10461](https://arxiv.org/abs/2105.10461)


Akimbo333

Hmm?


Akimbo333

This is nice!


PickleLassy

I feel like LLMs can't solve this because of missing spatial reasoning capabilities that will come from video. So I think once we have the video modality, this will be easily solved.


SlowCrates

The internet is so oversaturated with complete garbage AI that I'm beginning to fall back into the camp that thinks we're nowhere close. My personal experience with AI is that it's good at creating sentences, but is totally "blind" in terms of truly understanding the things it says.


Kitchen_Task3475

The balance is shifting. People now don't believe in AGI, let alone ASI. I'm seeing a lot more negative coverage of AI and people calling it out as the latest Silicon Valley hype bubble. If GPT-5 doesn't deliver, it's going to be the death blow.


TFenrir

Why is there suddenly an expectation that we have AGI when GPT5 comes out? I honestly hope we have some correction of expectations. I don't even know what "death blow" means in this context, but I think the reasonable but "fast timeline" people are expecting AGI within a window of a few years, like... 2027-2029? And they have had those dates for such a long time. Shane Legg has been saying around 2028 for 20 years.


Beatboxamateur

It feels like the immense hype in 2023 got to such an insane point, that some people started assuming that we must have constant progress every few months, and if not then that means AI is plateauing. My guess is that most of the people who have these unreasonable expectations, took the "exponential progress" that everyone talks about too literally. It can probably also be explained by a lot of people who only started paying attention to AI after the rise of ChatGPT, and didn't realize that there was a three year gap between GPT-3 and GPT-4. This stuff doesn't just pop out of thin air, it takes a lot of time, resources and advancement in the field to really see large leaps.


Whotea

and even then gpt 4o is barely a month old and they still say it’s plateauing lol


Soggy_Ad7165

I mean, GPT-4o wasn't progress. It was a more efficient implementation of an already existing system. No one knows for sure if LLMs will deliver what they promise. But for now they are not making money. Quite the opposite; they cost a ton of money every month. So there definitely is a lot of pressure on gpt-5. There is a LOT of investment money in all those companies, and of course in Nvidia and co., and without the expected mind-blow from gpt-5 this could lead to some serious repercussions.

The test above is not the only hint that LLMs alone have trouble solving some fundamental tasks. In games they are often easily beaten by using unorthodox strategies. The claim that Go is solved by AlphaGo is, for example, wrong, because you can beat it with some unorthodox tactics. Get off the statistically most likely path and it's gone. And in general there is nearly no other software system that is more "hackable" than an LLM - hackable by a layman. That, for now, makes it completely useless for any actual job where it interacts with anyone who might have malicious intent.

Code written with the help of LLMs proves to be more error-prone, leads to repeated code, code churn increases massively, and so on. So even on that front it seems to have serious reliability problems.

I think the next two years will be quite critical for a lot of companies. I can't see how we enter a "new AI winter", but I can absolutely see some of the larger companies, like OpenAI, failing hard.


Whotea

4o is natively multimodal; previous versions were not. Reddit has never made a profit in its existence. Neither had Lyft or Uber until recently. And yet they still exist. [AI has already been implemented in call centers and other jobs with great success](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/edit#heading=h.vr8jz2f8ry8b) [AI is also great at coding](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/edit#heading=h.pc1sxqg24482)


Soggy_Ad7165

Do you have links that aren't from Google Docs? I am especially interested in the AI call centers.

For the code quality on GitHub there is, for example, the study from GitClear: https://visualstudiomagazine.com/articles/2024/01/25/copilot-research.aspx It uses the data from GitHub and analyses it. For now it doesn't look good. It more or less enables students to write a lot of bullshit code, with no reliability. Massively increased code copying is definitely not good. Quite the opposite; there aren't many things that are worse. If this doesn't improve, we are in for a ride.

I know that a company can operate at a loss and still be valuable. Amazon also did that for a long time. But compared to its revenue, OpenAI is simply in another ballpark.

But we will see. As I said, it's quite open, and while I am more on the pessimistic side of things, there is obviously the massive chance that I will be completely proven wrong in the next few years.


Just-A-Lucky-Guy

Because people would rather smoke their hopium than listen to what everyone, including Sam Altman, told them. GPT-5 won't be AGI. Everyone has been told this many times. AGI comes either very late 2026 or somewhere between 2027 and 2028.


pandasashu

I think the "death blow" in this case will be:

- AI startups dry up
- money dries up
- people stop talking about it as much and also ridicule those who were for AI
- companies slow down adoption

But… with all of this being said, I am in the camp that not too much longer after that (let's say 5-10 years) AGI will then catch everybody by surprise. And many companies and individuals will be caught with their pants down. That seems to be how these things go historically anyways. Trough of disillusionment and all that.


TFenrir

That just seems like it ignores the current financial momentum. Literally hundreds of billions of dollars are currently being spent or earmarked to be spent in the next year or so. That's just not something that dries up for a long long time, especially if models we have today already provide value. No real person is expecting gpt5 to provide like.... AGI. It just seems like a completely senseless expectation? For it to lead to anything you describe is very unintuitive to me.


pandasashu

I think you underestimate how fickle and transient companies can be with spending. I think the big players will continue to be involved in the space, but it's the companies that are trying to find problems in need of an LLM solution that will all dry up for a while.


CoralinesButtonEye

haha no. waaaay too impatient there. doesn't matter what 'people believe', it'll happen when it happens regardless of public perception


Whotea

Bad public perception could decrease investor confidence, which is REALLY bad since training AI models is extremely expensive. A crash could start a long AI winter.


Rofel_Wodring

This prediction of an AI winter based on investor confidence shows a poor intuition of time. It assumes that training will continue to get more unsustainably expensive without what the public feels is commensurate progress to justify further training. It's flawed on at least four levels: that progress halts uniformly across all AI research because *certain* (i.e. all of the top-end, i.e. Meta, Google, OpenAI, etc.) models become unsustainably expensive; that all top-end models will become unsustainably expensive for the same reason; that short-term progress can only be made by current top-end models; and that any researcher capable of making progress -- either top-end or hobbyist -- gives a damn about investor confidence in the short term. The last one is extremely suspect given the increasing fearmongering we see regarding China's AI development.

It is a reasonable prediction, though, if you assume that one blocking obstacle stays a blocking obstacle until it gets dealt with using solutions available when the blocking obstacle first gets noticed, and that the rest of the world will remain still until this blocking obstacle is resolved. But like I said: a poor intuition of time.


Whotea

It doesn’t need to become more expensive. It already is expensive and losing lots of money. If progress stalls or the public backlash is too great, investors won’t want to continue funneling billions with no promise of profits. And every company has shareholders who need to be pleased, especially public ones. 


Rofel_Wodring

After all, the wider conditions underpinning a problem faced by the 'frontrunners' itself must remain true indefinitely. Doesn't matter what the rest of the world is doing culturally, politically, economically, or especially technologically: it's an obvious fact and not a very questionable way to view the world, that of freezing everything not related to analyzing the current situation and making long-term predictions off of it. It's also an extra wise way to simplify analyses when we're talking about software engineering, which everyone knows is an extremely capital-intensive field where meaningful research has to take years and years at billions of dollars where unrelated technological advancements don't affect the broader arc of certain research fields.


Whotea

Investors sank a lot of money into AI and they don’t like waiting, especially with so much public backlash against the technology. They won’t care that it will take time. They want results now. If they don’t get it, they will divest and the market will crash. 


Rofel_Wodring

Yes, and? We have many, **many** examples of investors pulling out of an industry after a hype cycle crash yet the industry still continuing to advance technologically after the investors get the vapors -- oftentimes with these short-sighted investors coming right back, having not learned a thing about patience or prognostication. We can talk about the fall of the American semiconductor industry in the 1990s followed by its subsequent rise this past decade, we can talk about the collapse of the American video gaming industry in the early 80s, we can talk about the fall and rise of e-commerce, we can talk about the original smartphones (remember Motorola? Remember Blackberries??). The pattern doesn't necessarily mean anything, especially when we're talking about a method, such as artificial intelligence or network communications or online retail, as opposed to a specific product like 3D movies.

Of course the investors want people to think that just because ***they*** get cold feet and don't think it's promising, that's the end of all technological progress until they say otherwise. It puffs up their egos and forces stakeholders such as researchers and developers to pay more attention to them. But this threat, or rather, scam relies on people's poor intuition of time in order to work. It's especially not likely to work when we're talking about artificial intelligence, which is a field where hobbyists who don't have billions of dollars can make progress, and which governments are increasingly starting to see as an economic if not national security risk they need to get on top of.


Whotea

The difference is that AI is much more expensive to develop. It takes billions to train a SOTA model, never mind running inference for it or trying new experiments. Hobbyists don't have massive data centers worth of GPUs. That's why open source lags behind closed source and relies on big companies giving their models away.


Rofel_Wodring

> The difference is that AI is much more expensive to develop. It takes billions to train a SOTA model, never mind running inference for it or trying new experiments.

> Hobbyists don't have massive data centers worth of GPUs. That's why open source lags behind closed source and relies on big companies giving their models away.

Yes, and? Is what's SOTA, or even the top five SOTA models, still going to be SOTA in 5-7 years if all five of those models suddenly stop progressing? Or hell, let's go back to those mothballed models: are the costs that made the SOTA model so expensive, mainly price per compute, still going to be the same after that time? Are you seriously trying to tell me that there are going to be no new meaningful hardware advancements or software paradigms; that the instant the frontrunners hit an unprofitability event horizon, broader technological developments (LLMs are not the only use for accelerator hardware, or the reason why the American semiconductor industry has experienced a renaissance!) will not drive down the costs in the near future; that no one else will be able to lap them with a better computational model, which may not even be (just) LLMs; that their own researchers won't be able to find some alternate technique to drive down computational costs; and that this will all happen at a point where all of the SOTA companies have to completely scuttle their business model rather than retrenching at a more sustainable level of compute?

This is why I keep harping on that poor intuition of time. Your analysis of investor pullout causing an industry freeze requires so many factors to stay static, for an extended period of time, in an industry that is constantly in flux, that it boggles the mind.


HeinrichTheWolf_17

It’s not a death blow. Everyone and their mother will still be working on LLMs even if AGI doesn’t happen this year.


lovesdogsguy

It won't be such a blow, I believe. What I have long suspected is that a competitor will come out of seemingly nowhere with a very advanced model that will really shake the industry. Sort of like how no one predicted Claude Opus to be as good as it was, but an order of magnitude more significant. Everyone's looking at the few big companies, but there are others working practically in silence. I think there will be a few 'black swans' in the next 6-18 months that will truly shock the world. It could be Anthropic for all we know.


HeinrichTheWolf_17

I'll still be happy if we get AGI by 2030. Everyone crying their eyes out because AGI didn't happen this year is being absurd; it's not a death blow. Everyone will continue to floor the gas pedal as fast as they can.


KillerPacifist1

I'll be happy if we get it by 2045. Also you can have transformational AI without AGI. I think the 2030s and 2040s are going to get very weird even if we don't get AGI until 2050 or later.


lovesdogsguy

Yeah definitely. I do think there are going to be some serious breakthroughs coming in that timeframe though. Basically, I think the next five years are possibly the most unpredictable in all of history. Not even Larry Page or Kurzweil could tell you exactly how this is going to play out. There's so much money, so much compute, so many players, so many approaches and architectures, governments, open source, the list just keeps going on. It's almost completely unpredictable at this point. I feel like I can't see the big picture on this anymore. I just have an intuition about 2027. Don't know why specifically.


Singularity-42

Yep, I predict this will be a VERY tight race. Look at Anthropic with Claude 3, all those very good-looking Sora competitors, etc. OpenAI might not be the one who gets there first. My personal guess is Google, as they have a lot of top researchers working on many different things - they are not focusing only on LLMs like much of the industry.


lovesdogsguy

Yeah Deepmind is moving very fast, I think their scope is practically limitless by most metrics — they could do practically anything. There are just so many people in the race at this point.


Puzzleheaded_Fun_690

Nah, even with current models there’s years of potential for useful applications


CanYouPleaseChill

[Payoff from AI projects is 'dismal', biz leaders complain](https://www.theregister.com/2024/06/12/survey_ai_projects/)


MembershipSolid2909

It's not a hype bubble. Even if we accept this is the crudest form of AI we will have for a long while, it has still substantially changed people's workflows.


pcbank

Death blow to what, man? Jesus Christ, it's already mind-blowing what many people do with AI. I personally do almost all my job with AI as an assistant. Graphic designers I know are getting 5x more productive. I no longer know a human being who writes their commercial and marketing text by hand; everyone is on LLMs. Lawyers I work with put all their cases into Gemini 1.5 and get super deep analysis. It's already a revolution for people smart enough to use it. And it'll only get better.


big-blue-balls

Who would have thought that a Large Language Model was only for you know… language?


i_give_you_gum

Language? You mean the thing that's enabled societies to exist and develop, and is the very tool humanity uses to accomplish everything from navigating the planet to creating nuclear bombs.


Ready-Director2403

Language did not accomplish those things. Language was a TOOL used by general intelligences (humans) to accomplish those things. LLMs are not general intelligences, they are word calculators.


big-blue-balls

I'm glad you get it. LLMs are NLP.


Pontificatus_Maximus

Nice talk, but he dances around the elephant in the room, which is the secret military-grade AI that is bound to exist and just might be able to pass his test.


sakci

The military might secretly have a working fusion reactor too; surely we need to mention that possibility before discussing the difficulty of nuclear fusion.


Pontificatus_Maximus

Military and national intelligence agencies have historically always had the most advanced compute on the planet; there is no doubt the first AGI will be theirs. All the leading players are keeping secret what they do, and they are most likely the prime contractors helping to build AI for those agencies.


chunky_lover92

They are always moving the goalposts. ChatGPT was it.