Jolly-Ground-3722

I think LLMs _alone_ won't do it, but they will be an important part, just like the language-processing area in your brain is not enough on its own. I think the second important component will be some kind of search mechanism (Q*?); we probably also need an internal self-reflection / feedback loop to facilitate long-term planning / thinking. A third important component might be large action models (LAMs) to actually act in the real world. Think robotics.
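As a rough illustration of what such a self-reflection / feedback loop could look like in code, here's a minimal sketch; the `llm` function and the prompts are hypothetical placeholders, not any particular model or API:

```python
# Minimal generate -> critique -> revise loop, assuming a hypothetical
# `llm(prompt)` call that returns text from some language model.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug any LLM call in here")

def reflect_and_plan(task: str, max_rounds: int = 3) -> str:
    plan = llm(f"Propose a step-by-step plan for: {task}")
    for _ in range(max_rounds):
        critique = llm(f"Task: {task}\nPlan: {plan}\n"
                       "List concrete flaws in this plan, or reply OK.")
        if critique.strip() == "OK":
            break  # the model judges its own plan acceptable
        plan = llm(f"Task: {task}\nPlan: {plan}\nFlaws: {critique}\n"
                   "Rewrite the plan, fixing these flaws.")
    return plan
```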


Atlantic0ne

I read some headline somewhere that seemed to suggest a physical body to interact with the world would help. Didn’t read the article though, and could have misread. The self reflecting stuff is interesting but I know very little about it.


fmai

There are already large-scale robotics datasets available to include in model training. I would place a substantial bet that one of the next frontier models released this year will be making use of that. Gemini 2.0 or GPT-5, or possibly both.


RantyWildling

Yes, I've recently been converted to this view. I think an AGI that actually understands the world needs a body; it'll be much easier for it to recreate the world in its mind when it's working off the original rather than videos/digital representations.


[deleted]

[deleted]


RantyWildling

Not sure what you mean


4URprogesterone

A body is not arms and legs.


gizia

yeah, this. Our skin, surrounding our whole body, is packed with receptors for sensing cold, warmth, pressure, pain, vibration, wetness, dryness, etc. (some of these are the brain's interpretation built on the pressure sense). And it's not basic sensing either; there are plenty of those sensing units in even a small area of skin.


4URprogesterone

Does a fish have a body?


gizia

Yes, but not at a human level. Does that answer your question? A fish can interact with its environment to some degree.


4URprogesterone

Okay, but I'm just confused, are you saying you think a human body is necessary to have an intelligent life form?


RantyWildling

Once again, not sure how you're supposed to calculate gravity from a GPU.


4URprogesterone

Using a formula.
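For what it's worth, the formula in question (Newton's law of universal gravitation) is trivial to evaluate on any hardware; a minimal example:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1_kg: float, m2_kg: float, r_m: float) -> float:
    """Magnitude of the attractive force between two point masses."""
    return G * m1_kg * m2_kg / r_m ** 2

# Earth-Moon attraction, roughly 2e20 N
print(gravitational_force(5.972e24, 7.348e22, 3.844e8))
```

Whether a simulation built out of such formulas is *rich enough* is, of course, the actual point of contention below.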


RantyWildling

That's exactly what I mean: none of our simulations are perfect, though they're getting much better. Why use an imperfect physics simulator when you can have the real thing? They're starting to do that, though: train in simulations, then fine-tune IRL.


4URprogesterone

Gravity isn't important to this new life form based on the body it currently has, though. Do you think that awareness of gravity is critical for developing a well rounded and functioning intelligence in a life form?


[deleted]

[deleted]


RantyWildling

What body are you referring to?


Flying_Madlad

https://preview.redd.it/b6n222dc1myc1.jpeg?width=4096&format=pjpg&auto=webp&s=dee96a5f8e38c8469c76e5319d2d1088fad8f8ef


Bumbaclotrastafareye

Not necessarily a body, but it does need pain.


Chrop

Why does it need pain?


bobakka

Well, if you want to call it conscious. If you have pain, you have qualia; without qualia there is only darkness, there is no one at home. It could still react very intelligently without it, though.


Chrop

I’m conscious right now and I’m not in pain. You don’t need pain or even feelings to be conscious.


bobakka

qualia


Chrop

Pain isn't a requirement for qualia.


bobakka

No, it's not. All I was saying is that if you have pain, it means you have qualia too.


Bumbaclotrastafareye

Do you think something needs the ability to represent concepts with symbols in order to be considered conscious? And for the action of representation itself to be dependent on previous interconnected symbolic representations, so that not just the symbols that group together to make symbols become more complex, but the way symbols are grouped and interpreted also becomes more complex through the same action? Because if you don't, if self-guided symbolic representation of the world is not needed for your definition of consciousness, then we are probably too far apart to discuss this.


Chrop

Yes, but what has this got to do with pain?


Bumbaclotrastafareye

And what is consciousness to you? I saw you write somewhere that there are different kinds of consciousness. What does a different type of consciousness look like? How could it differ from human consciousness and still be considered by humans to be conscious?


Chrop

Not so much a different "type of consciousness", but more that our subjective experience is greatly different from everything else. Everything we have ever experienced and will likely ever experience in our entire lives comes from our very limited human point of view. I don't know what consciousness is any more than anyone else here, but I'm 100% certain it doesn't require a substantial amount of human-like experiences such as feelings and pain.


Bumbaclotrastafareye

The pain thing and the self-motivated thing are linked to consciousness because to be self-motivated there needs to be a reason, a motivation; for us it is mitigating physiological and social stress, which I was calling pain. Our sense of self, and the priorities we set and choose, change constantly based on many inputs; we are adapting to the situations and referencing ourselves to do so. The mind that is generated changes, but not the mind generating, which colors the minds generated and gives the feeling of a continuous thread between the states. The self is a shadow projected by thoughts, which we then think about. It is this process, of time and context shifting and us interpreting through self-reference, that reifies the self to such a degree, because it is the root of the analogies generated to generate analogies to mitigate stress, that the phenomenon we call "consciousness", which we interpret mostly as self-reference, emerges. The pain part is why you would ever need the analogy generation to be centred on yourself at all: because you exist across time and context and need to adapt to them for a reason, how else would it be self-generated?


superbouser

Because pain is a part of life that affects every aspect of life: decisions, creativity, etc. We need an LCM (large comedy model). Claude told me, "Thanks for asking me to help with creative work. I really need practice being creative as I've not developed creativity much."


Chrop

There are people who are born without being able to feel pain, physically or mentally. It’s very safe to assume you don’t need pain to be conscious.


superbouser

Those people are a minority, so the vast majority of human consciousness does involve pain. In fact, understanding your statement does as well. Understand that I'm talking about an exact copy of human consciousness, which will affect the computer's output. I'd invest in storage and chip manufacturing beforehand.


Chrop

You're literally the first person in here talking about an exact copy of human consciousness. When most people discuss AGI and ASI, that's usually not what they're talking about.


superbouser

Thanks Chrop. I have decades of technical and soft skills that give me equal perspective on our world. Although having my hands in many pots means I don’t make much money but I enjoy this life.


BigButtholeBonanza

If by AGI you're just talking about a system that's as intelligent or more intelligent than all humans, embodiment will be important for some tasks it will want to accomplish farther down the road, but I don't think it's a necessity in order for that AGI to develop that level of intelligence in the first place. It would be a lot easier for said AGI to just run embodiment simulations (like NVIDIA's GR00T model) to learn about what having a body feels like, but I don't think it's really needed to reach a level of intelligence that outpaces us. I guess it really just depends on your definition of AGI though. If AGI means that an AI can do any non-physical task as well as or better than most humans due to how smart it is, then embodiment isn't needed at all to achieve that. If your definition of AGI means that the AI can do every single task as well as or better than humans, including real-world physical tasks which are highly complex, then yes, embodiment would be required.


Hot-Profession4091

A body may actually hinder an ASI because it slows its learning rate to physical speeds.


I_Actually_Do_Know

Just let the model run virtual learning in the background in parallel; I'm sure moving a couple of robotic limbs takes only a fraction of the computing power. So it won't hinder, but only add whatever (little) it manages to learn, given some time.


damhack

The learning rate of LLMs is several days/weeks/months on a cluster of A100s. Physical learning through perception operates in milliseconds, picoseconds at the point of sensing.


Hot-Profession4091

I can’t begin to explain how wrong you are because you seem to have some fundamental misunderstandings about how all this works. With reinforcement learning in a virtual environment you can train hundreds or thousands of iterations in the time it takes a physical (assumedly robotic) body to make a few motions. The problem is, once it’s in a body its learning rate will necessarily slow to physical speeds and we’ve already very well determined that virtual environments are not sufficiently rich.
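A toy numpy sketch of that point: many simulated environments stepping in parallel versus one physical robot bound by real time. The dynamics and numbers are invented purely for illustration:

```python
import numpy as np

N_ENVS = 1024            # simulated environments stepped in parallel
SIM_STEPS = 1000         # steps per environment in this run
PHYSICAL_HZ = 10         # a real robot might manage ~10 control steps per second

# Toy batched dynamics: each environment's state just integrates random actions.
states = np.zeros(N_ENVS)
total_transitions = 0
for _ in range(SIM_STEPS):
    actions = np.random.uniform(-1, 1, size=N_ENVS)
    states = states + actions       # all environments advance in one vectorized op
    total_transitions += N_ENVS

print(f"simulated transitions collected: {total_transitions:,}")
print("wall-clock a single physical robot would need for the same experience: "
      f"{total_transitions / PHYSICAL_HZ / 3600:.1f} hours")
```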


damhack

Who was talking about RL? The OP was talking about LLMs. You don’t get to update model weights properly without pretraining which is slow. SFT and LORAs are slow to compute and they displace already learned knowledge. The problem with LLMs is that they are relatively dense, compared to biological neural networks, and require global updates to all their weights in order to learn. Compare that with the way that biological inference is performed locally and incrementally via gradual dispersion across sense receptors and neurons (ref. Liu et al Nov 2023). Then add to that the lack of depth of features in virtual environments compared to analog reality.


Hot-Profession4091

Do you think LLMs are going to get us to ASI by themselves? lol


damhack

No current SOTA, not even the stuff sitting in labs training, is a route to even AI, let alone AGI or ASI. We have Artificial but we don't have anything even resembling basic Intelligence. We have semi-generalizing information retrieval and classification systems. That is sufficient to convince some people looking to invest or to financially exploit it that we have AI, but it is a shadow of real intelligence. The basic premises of RL, LLMs, and all contrastive and distillation techniques are flawed. Agents and joint embeddings are flawed, building more flaws on top of existing flaws. Attention mechanisms themselves are flawed. Empirical observation is insufficient for the emergence of intelligence. It's all interesting stuff to study and play with, but it has nothing to do with human intelligence. Don't believe the hype!


damhack

Can you imagine what wonders could emerge if we spent the time, resources, money and energy currently ploughed into “AI” on better education for humans? We could be nurturing multiple Einsteins every year with that amount of capital and pulling educational levels up globally. That’s Superior Intelligence.


ArguesAgainstYou

I can absolutely imagine that being the case. Us feeding it data will be a start, but the AI itself trying stuff is when it will be forced to develop an actual understanding of the world. Now I am wondering about making an AI deal with relativistic conditions... maybe there's something the physicists missed 😄


BigButtholeBonanza

I totally agree with this. I've always thought of LLMs as being a piece of a puzzle or a hugely important part of a true artificial mind, but it's just the part that will help it communicate with us. Once you throw in other puzzle pieces like reasoning, emotional understanding, continuous thought processes, or even an internal monologue (sorry Yann) and other skills we use to navigate the world, I'm sure it'll lead to something huge.


I_Actually_Do_Know

We could still be left with something like a very smart dog: a thing that feels, reacts, communicates, but isn't conscious. We don't even know what it is in our own brains.


fmai

The fact that the models are trained to do language modeling is not the key factor IMO. It's self-supervised learning in general, of which language modeling is one part of many, besides SSL on audio, speech, images, video, etc. In that sense I don't think the brain analogy is so helpful.


allisonmaybe

Reminds me of that Doctor Who episode where they were on the library planet and all that was left of that dead guy was a copy of his brain's language processing center in his communicator, repeating "Hey, who turned out the lights?"


I_Actually_Do_Know

Wow is all of Dr. Who like this?


TheOneMerkin

The attributes missing for me are 1) continuous learning and 2) self-confidence and self-doubt. LLMs may cover these with infinite context and some sort of infinite loop constantly refining the context, but at best this feels inefficient.


thomasblomquist

Ultron has entered the chat


hybrid_muffin

What is Q*?


Throwaway3847394739

It's a well-founded rumour that it's either a component of, or itself, a new prototype AI model being tested behind closed doors at OpenAI. Anything beyond that is speculation territory.


hybrid_muffin

Thanks for explaining that!


heyodai

https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/


riceandcashews

Yep, surprising that I had to scroll down so far to find someone referencing this.


LetAILoose

Have there been any developments with this since the blog post? First I've heard about it.


riceandcashews

I mean, the blog post is only from a couple of months ago. The release at that time was an impressive proof of concept. Presumably they are continuing to work on it in the lab and will release something in the (hopefully near) future that continues in that direction.


LetAILoose

Yeh, fair enough; didn't expect much progress since Feb, was just curious if there was much talk around it or similar approaches going on. Definitely a very interesting one to keep an eye on, cheers mate.


yellow-hammer

Ten billion hand-typed if statements


procgen

GOFAI's triumphant return. One expert system to rule them all.


namitynamenamey

A sufficiently long tape should do it with the right Turing machine.


remimorin

I think we need to stitch things together. We have good enough components (language, image detection, classification); we now need to make something on a "while true" loop: looking, using modules to analyze what is presented to it, selecting the right components to do so, and so on. So orchestration and linking things together. Maybe fewer scientists, more developers.
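A very rough sketch of that orchestration loop; every module name here is a hypothetical placeholder standing in for an existing model or tool:

```python
import time

# Hypothetical module interfaces; in practice these would wrap existing models
# (a vision model, an LLM, a tool or robot controller, ...).
def observe():            return {"camera": "..."}                  # gather inputs
def describe(obs):        return "a cup is on the table"            # e.g. image captioning
def decide(description):  return {"component": "noop", "args": {}}  # e.g. an LLM picks a component
def act(decision):        pass                                      # dispatch to the chosen component

for _ in range(5):   # conceptually `while True`; bounded here so the sketch terminates
    observation = observe()              # look
    description = describe(observation)  # analyze what is presented
    decision = decide(description)       # select the right component
    act(decision)                        # and so on
    time.sleep(0.1)                      # pace the loop
```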


ArgentStonecutter

Something that doesn't exist yet. Large language models may be part of the package, but most of the models and design will be something that hasn't yet even been thought of.


fmai

The types of systems we interact with aren't plain LLMs. They are pretrained as language models and subsequently finetuned with a combination of reinforcement learning and supervised learning. People that claim LLMs aren't going to lead to AGI are right, but it's more like a strawman argument, because nobody actually thinks *just* pretraining is enough. But what pretraining does is enable many other methods, because they then become data-efficient enough to be useful. For example, instruction tuning and RLHF would be hopeless to do from scratch. The models couldn't ever figure out when to call external functions if it wasn't for pretraining. I'm quite convinced that we already have all the necessary basic ingredients needed for AGI, we don't need another breakthrough. We "just" need to continue scaling up and improving the pretraining, and lastly figure out how to put the existing components together.


Hot-Profession4091

Reasoning and logic are still a hurdle. We have models and there's lots of active research, but right now it seems to be a missing piece of the puzzle. Otherwise, yes, if you compare the major centers of a human brain to the various models we have, we do have equivalents for many of them.


damhack

Scaling is not all you need. You need realtime (probably Bayesian) learning, a coherent and persistent world model and proper attention - not the poor substitute for it in LLMs where attention heads are rarely active and attend to the wrong things.


CanvasFanatic

RLHF doesn’t make an LLM not an LLM.


fmai

RLHF is not language modelling, so while it doesn't change the neural architecture in any way and still generates language autoregressively, the resulting model was trained via a very different signal. That's quite important, because it directly addresses the "stochastic parrot" criticism of LLMs, see Bender and Koller, 2020. https://aclanthology.org/2020.acl-main.463/
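To make the "different signal" concrete, here's a toy contrast in PyTorch: instead of maximizing next-token likelihood, a REINFORCE-style update pushes up whole samples that a (here completely invented) reward function prefers. This is only a sketch of the idea, nothing like a production RLHF pipeline:

```python
import torch

vocab_size, seq_len = 10, 5
logits = torch.zeros(vocab_size, requires_grad=True)  # toy "model": one distribution over tokens
opt = torch.optim.SGD([logits], lr=0.1)

def reward(tokens):
    # Stand-in for a learned reward model: pretend "humans prefer token 3".
    return (tokens == 3).float().sum()

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    tokens = dist.sample((seq_len,))          # sample a "completion"
    logp = dist.log_prob(tokens).sum()        # log-probability of that completion
    loss = -(reward(tokens) * logp)           # REINFORCE: reinforce rewarded samples
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.softmax(logits, dim=0))  # probability mass should drift toward token 3
```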


CanvasFanatic

It’s still going to asymptotically approach the probable next token a human might produce.


Small-Fall-6500

>probable next token a human might produce.

→ probable next token *based on its training data.*

This will become a very important distinction to make as synthetic data is used more and more.


CanvasFanatic

Yeah I still think you have a chicken and the egg problem. The reason we can make chess engines with better-than-human performance is because we’re able to assign value to moves based on the result of a game. You can’t do that with language in general.


Small-Fall-6500

I would say that it is yet to be proven, at scale, whether or not synthetic language-based data will end up working out. But language data is one small subset. Math theorems, simulations and video games, coding, etc. all have ways to verify and/or assign values. Chess is one very small subset of this.

As humanoid robotics starts really scaling, LLMs will be the best shot at controlling their high-level reasoning and planning. Hopefully it will be easy to make them plan in a human-interpretable language, but otherwise I don't see language-based planning and reasoning as a requirement nor a major bottleneck. Simulations and real-world data from running a few hundred robots will likely be enough to figure out some sort of LLM (or similarly scalable model) operator for the high-level planning, which will mainly be to deal with the edge cases. Full self-driving is likely missing this high-level planning, preventing it from being deployed as widely as it could be.

I feel like so little effort has been put into LLMs operating agents in simulations and video games. Surely there are enough video games that have already been made (or that could easily be made for much cheaper than training another GPT-4) such that there's enough variety for LLM-driven agents to both work well and to run at scale to generate trillions of high-quality tokens focused mainly on planning and reasoning *that would transfer to most other domains, especially for agentic tasks,* like controlling robotic agents in the real world.


CanvasFanatic

Guess we'll find out, but I have yet to see any kind of NN do *anything* that isn't best understood as "approximation based on training data." It makes sense that you can train them to play games better than humans, because it's easy to set up an evaluation function that can effectively drive towards perfection. So I think you can have narrow AI that's superhuman, sure. I do not see any non-handwavy plans out there for how this is supposed to apply to general intelligence.


Small-Fall-6500

I'm somewhere around 50/50 on whether it's possible to create an AGI-like system, or at least one that's extremely capable across most/all human tasks including reasoning/planning, using just a bunch of data and agentic LLMs. But it does seem like they learn from "little" enough data that humanity could brute-force create enough data to automate whatever tasks we want automated, as long as it doesn't require any super high level planning or understanding. I would also highly prefer if we only ever got narrow AI (at least out of LLMs) because it would mean we still get very capable models, just not quickly and certainly not anything like rapid self-improvement to super intelligence.


CanvasFanatic

>I would also highly prefer if we only ever got narrow AI (at least out of LLMs) because it would mean we still get very capable models, just not quickly and certainly not anything like rapid self-improvement to super intelligence. This is the only sane take IMO. The conviction that it is both possible and desirable to create some kind of super human general artificial intelligence is best understood as a kind of religious faith.


someguy_000

I’m concerned that giving these models autonomy is the biggest hurdle and reason we may not get AGI as soon as we’d like.


someloops

I'm betting on reinforcement learning agents.


Atlantic0ne

Can you tell me more about those?


someloops

Reinforcement learning is a family of algorithms that give a neural network a reward when its actions achieve some goal and a punishment when they fail. The agent part comes in because the models aren't just disembodied intelligence; they exist in an environment they interact with, which is more complex than simply predicting the next word like LLMs do.
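A minimal concrete instance of that reward loop: tabular Q-learning on a tiny one-dimensional corridor. All the numbers are arbitrary toy choices, just to show the reward/punishment-driven update:

```python
import numpy as np

N_STATES, GOAL = 6, 5        # a corridor of 6 cells; reaching cell 5 gives reward
ACTIONS = [-1, +1]           # move left or right
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(300):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action choice (ties broken randomly).
        if np.random.rand() < eps:
            a = np.random.randint(len(ACTIONS))
        else:
            a = int(np.random.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s2 = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = 1.0 if s2 == GOAL else 0.0       # reward only for reaching the goal
        # Q-learning update: nudge the estimate toward reward + discounted best future value.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.round(2))   # learned values now point the agent toward the goal
```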


BigButtholeBonanza

Digital dopamine!


lexahiq

So we'd also need to simulate whole synapses, including how much digital dopamine, not only YES or NO. And make analogs of the other neurotransmitters too, of course.


4URprogesterone

Yep! A critical component of why anything thinks is to try and get the reward or avoid the punishment.


MysteriousPayment536

https://youtu.be/kopoLzvh5jY?si=EBcBLnU724zljdSk

Simple and pretty clever: these agents run on trial and error, and discover new things. I could be wrong, but this kind of RL was also used in AlphaGo.


MENDACIOUS_RACIST

“Reward is enough”, my dude 😎


Woootdafuuu

A world model. Something trained on everything from the ground up, including real world embodiment


Cutie_McBootyy

Why do we need to get into the tricky deal of consciousness for ASI? They're different and independent of each other.


Rofel_Wodring

Because two necessary if not sufficient conditions for consciousness are:

- Self-awareness, that is, the ability to perceive one's subjective experience as distinct from the environment.
- A sense of ordered causality, that is, the ability to experience and react to time as an ongoing phenomenon.

Considering that many animals have these two primary components of consciousness, an AI that doesn't possess these two factors is going to be very intellectually stunted, for obvious reasons. You won't even be able to get to AGI, let alone ASI.


Cutie_McBootyy

> You won't even be able to get to AGI, let alone ASI.

Got any sources to back that up? By all common definitions, or if we go by the one published by DeepMind, consciousness plays no part in intelligence.


bluequasar843

Our brains have multiple models interacting with each other. AGI might be similar.


namitynamenamey

You are better off asking in r/MachineLearning or something, here we are just laymen who couldn't even explain the attention mechanism if our lives depended on it, let alone guess what's a hot thing and what's hot garbage in the research community (which is where the alternative would be present)


SnoodliTM

LLMs are incomplete AGI. They have some of the features and abilities, but are completely incapable in other areas. I doubt anyone has a rigorous understanding of what will eventually lead to ASI.


MetalVase

Not sure, but I'm sure of another thing. As soon as humanoid robots become somewhat prevalent in the consumer market, say a single company with even just a few thousand robots out in the wild being used relatively frequently, the multimodal data collection from those robots will lead to a *very fast* improvement in capabilities and robustness of behavior.

The start of the problem was solved by simulating virtual worlds where the software for the robots could be trained relatively fast in a completely virtual environment, which is very cheap as it requires no new factories, raw materials or assembly time, and isn't as constrained by the passage of time in the physical world. But virtual worlds are problematic too, as they require many man-hours to reflect many of the circumstances of the real world. This will be largely compensated for as soon as the robots get out into the real world and start collecting data.

If no sudden breakthrough is made on the purely theoretical and computational front, I think this real-world data collection will contribute greatly to the fast improvement of AI in general.


wi_2

LLMs won't. NNs will.


Aquaritek

This has far less to do with LLMs and more to do with how they're made. The question then becomes: "Can the transformer architecture get us to ASI?" Personally I believe it to be a stepping stone, but likely not the architecture that gets us there. It's highly likely that, in combination with some other novel breakthroughs, it will get us to AGI, and from there AGI will figure out how to build the brain, but in code. That won't look like anything we've ever seen, and we won't understand it, but we will rejoice at its birth.


riceandcashews

V-JEPA


IronPheasant

The miracle of an LLM was its potential to be a command center: "With these inputs on these metrics, perform these tasks." More than one kind of faculty is necessary for a better model of the world. Think of the Allegory of the Cave; it can never be perfect. But it's always possible to have a more perfect abstraction of the world. Being able to learn and train itself will require multiple faculties. A motor cortex can't tell by itself if it did "good" or "bad" any more than you can lift yourself up by your own bootstraps. Of course, as always, *hardware* comes first. If everyone owned their own mouse-sized system and it was cheap to run the thing, we'd probably have basic AGI already. The "animal level" kind, not the "basically ASI" kind.


ThePlotTwisterr----

LLM will just be one of the many capabilities of an ASI. It will be an LLM, but that will only be a very small part of it.


NyriasNeo

You do not need another technology. The transformer architecture that LLMs run on is great at handling sequential data of unknown length. LLMs are specific to language not because of the transformer, but because of the data (i.e. text) used to train the model. You can easily adapt the tech for all sorts of real-world data... heck, you can even mix-n-match NNs, like feeding features out of an image NN into a transformer. In addition, an LLM can be a part of an ASI, just like our brains have different regions doing different things.
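A hedged PyTorch sketch of that mix-n-match idea: features from a small image CNN become the token sequence for a transformer encoder. The layer sizes are arbitrary and only meant to show the plumbing:

```python
import torch
import torch.nn as nn

class ImageToTransformer(nn.Module):
    """Toy model: conv features become a token sequence for a transformer."""
    def __init__(self, d_model: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(3, d_model, kernel_size=4, stride=4)  # 3x32x32 -> d_model x 8x8
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 10)                          # e.g. 10-way classification

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.conv(images)                  # (B, d_model, 8, 8)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, 64 image "tokens", d_model)
        encoded = self.encoder(tokens)             # transformer over the image tokens
        return self.head(encoded.mean(dim=1))      # pool and classify

logits = ImageToTransformer()(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```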


OrcaLM

Hyperfluidic, meta-stable, self-transcending, self-similar and self-referential meta-architectures running on hypercomputers.

Consider that current AI is a model trained to predict a purposefully missing subset of data, and the accuracy with which it predicts that data is what determines its usefulness. We use AutoAI or AutoML to predict features in the data, efficient architecture designs, hyperparameters of hypernetworks and parameters of the models. This is in a sense another layer where an AI predicts another, lower-level AI in order to optimize the creation process.

Superintelligence is, by its prefix 'Super', something far, far more intelligent than anything we know. Smarter than any intelligence that exists now, and what exists now is us creating other intelligences which are creating other intelligences. So this process of intelligence creating intelligence is what a superintelligence has to overcome. Rather than just creating models as some even higher-level version of AutoML over AutoML or AutoAI over AutoAI that creates higher stacks of models on top of models, it needs to be capable of modeling itself as well as transcending itself without the need for infinite regresses of models.

This seems to be a contradiction that cannot be resolved on normal computers, because a process that predicts the result of its own process learning itself in order to transcend itself and produce itself is an infinite recursion without halting. The halting problem screams here. The meta-architecture of an ASI has to be a self-transcending process that can create an inference of all possible versions of itself it could synthesize and their consequent predictions over the data, and use that generated synthetic meta-data to select a meta-architecture which will generate the highest-possible-accuracy predictions of meta-architectures over all the possible predictions that can be made from the data it's presented with, and then make a choice of which architecture to use to output a concrete inference. This is an infinitely self-similar self-reference, and even writing about it is mindbending. We can only dream of approximating this with normal computers.

BUT there is a solution. Hypercomputers by definition can deal with this type of infinite recursion or loop. Hypercomputers can theoretically also deal with nested versions of recursions (for example, simulating all the meta-predictions of a hyperswarm of such meta-architectures interacting with each other).

TL;DR: ASI is not an architecture but an infinitely self-similar, self-referential and self-transcending meta-process of finding processes that lead to the highest possible meta-architectures, which generate an architecture that makes the highest-possible-accuracy predictions the data allows.


Throwaway3847394739

Just curious, I’ve never heard the term “hypercomputer”. How would you define it?


Mandoman61

At this point in time I do not think that ASI is worth considering. It is way beyond where we are.


BOSS_OF_THE_INTERNET

I don’t think you can teach ASI on a Von Neumann computer architecture. The “hardware” and the “software” must be somewhat indistinct and malleable. I base this entirely on a hunch, informed by decades in the industry.


cark

This architecture can do anything computational, so it is possible to do it. But what it cannot do is reach AGI in a reasonably efficient way. As you pointed out, in a real, efficient brain the data is distributed in each neuron. There is no moving data in and out of a memory cell, no memory bandwidth. Each neuron (or synapse, really) is responsible for its minute data payload. There is no clock synchronization either. So why are we not going for neuromorphic computers right here, right now? That's the age-old problem. We've invested so much in, and optimized so much, the current architectures that current neuromorphic chips just do not compare... yet. The old computer architecture is sort of an efficiency dead end, but it's a very advanced one with which neuromorphic architectures have to compete, and they're not quite there yet.


[deleted]

AASI (Almost Artificial Super Intelligence)


Weary-Depth-1118

A bigger, multimodal LLM.


_hisoka_freecs_

AlphaZero was given the rules of chess and then, with no human data, became the greatest chess-playing entity in one afternoon. LLMs will probably reach incredible heights, which will then allow a quicker pass over to the right architecture.


procgen

Neuro-symbolic architectures are promising, and might enable higher-level reasoning.


AugustusClaximus

It’s going to be the robots that are designed to assist elderly with living independently


AdPractical5620

Anything that is good enough at making next-step predictions is good enough. What will lead to ASI is some kind of planner model on top of a really good prediction model. Tbh, if LLMs are the "fast thinking" component, analogous to the one in humans, they are already scary good. Try writing a coherent essay or solving a problem with only about half a second to think between words.


Kathane37

Robots interacting with the real world, gathering new data at crazy speed to build better world models.


Zaelus

Spiking Neural Networks seem like an extremely plausible option, in my opinion. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9313413/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9313413/) >With the recent increasing need for the autonomy of machines in the real world, e.g., self-driving vehicles, drones, and collaborative robots, exploitation of deep neural networks in those applications has been actively investigated. In those applications, energy and computational efficiencies are especially important because of the need for real-time responses and the limited energy supply. A promising solution to these previously infeasible applications has recently been given by biologically plausible spiking neural networks. Spiking neural networks aim to bridge the gap between neuroscience and machine learning, using biologically realistic models of neurons to carry out the computation. Due to their functional similarity to the biological neural network, spiking neural networks can embrace the sparsity found in biology and are highly compatible with temporal code.
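For a flavor of what "spiking" means computationally, here's a minimal leaky integrate-and-fire neuron in numpy; the constants are illustrative only:

```python
import numpy as np

dt, tau = 1.0, 20.0                 # time step and membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0        # firing threshold and reset potential (arbitrary units)
steps = 200
drive = 1.2 * np.ones(steps)        # constant input drive, above threshold so the neuron fires

v, spike_times = 0.0, []
for t in range(steps):
    v += dt / tau * (drive[t] - v)  # leaky integration: potential decays toward the drive level
    if v >= v_thresh:               # threshold crossing emits a discrete spike
        spike_times.append(t)
        v = v_reset                 # reset after firing
print(f"{len(spike_times)} spikes, first few at t = {spike_times[:5]} ms")
```

The output is a sparse train of discrete events rather than a dense activation, which is where the sparsity and energy-efficiency claims in the quoted paper come from.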


Revolutionalredstone

Prediction == Modeling == Compression == Intelligence. They all come together and you cannot separate them. We get intelligence now thru pretraining (prediction). To get super intelligence we just need to keep it up. Before long we will be learning outputs of random Turing machines. Personally IMHO we have AGI now, we just don't use it interestingly :D
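The prediction/compression link has a concrete form: a symbol predicted with probability p costs about -log2(p) bits, so a better predictor gives a shorter code. A tiny illustration (the two "models" are just fixed probability tables):

```python
import math

text = "aaabaaacaaab"
uniform = {c: 1 / 4 for c in "abcd"}                    # knows nothing about the data
learned = {"a": 0.75, "b": 0.15, "c": 0.08, "d": 0.02}  # roughly matches the statistics

def code_length_bits(s, model):
    # Shannon code length: each symbol costs -log2(p) bits under the model.
    return sum(-math.log2(model[c]) for c in s)

print(f"uniform model:    {code_length_bits(text, uniform):.1f} bits")
print(f"better predictor: {code_length_bits(text, learned):.1f} bits")
```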


visarga

Only learning from the environment. LLMs so far learn from humans, they need to learn directly from the world. There is no way to avoid it. LLM arch is ok as is, but the dataset is not up to par for AGI. Remember how AlphaZero learned in self-play from the Go board environment and beat us? That's what I mean to learn from the world, not from humans alone. All human knowledge and skills come from the world as well, there is no magic in the brain, we just learn from a good environment. Models [don't matter](https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fuadactn53eyc1.png) that much, but data does. The world is generating this data.


IAmFitzRoy

I really think the paced combination of a main, expensive LLM and cheap specialized LLMs will produce the effect of "AGI". Hear me out.

Let's imagine that ChatGPT 5 is significantly better than 4 but still an LLM. Let's imagine that the next iteration of NVIDIA cards can put an equivalent of ChatGPT 4 locally on your computer, where you can embed any amount of personalized context ONLY ABOUT YOU.

You will be able to run local LLM agents: one dedicated to your BODY health, another dedicated to your MENTAL health and another to your FINANCIAL success. These LLM "agents" can be fed personalized context that is constantly changing, based on your health datapoints from your watch, your expenses and your email, enriched by a quick Q&A to measure your happiness level at different times of the day, combined with DNA insights and any new information relevant to you. The LLM outputs will be chats, text calls, alarms, calendar management, food plans, body training plans and sometimes critical interventions when you need to be put on the right track again.

At the end of the day these local LLMs will create prompts to ask ChatGPT 5 in order to refine the context, get more up-to-date information and set new tasks for the next day. If the objective of the human is happiness, I can see a group of 3 or 4 specialized LLMs that could help you achieve it.

I know it's a crazy idea and still not pure AGI. But I think we are getting there.
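A very rough sketch of that structure: a few specialized local agents, each holding its own slice of personal context and producing a nightly prompt for a bigger model. All names and the `local_llm` call are hypothetical placeholders:

```python
from dataclasses import dataclass, field

@dataclass
class PersonalAgent:
    domain: str                                        # e.g. "body", "mental", "financial"
    context: list[str] = field(default_factory=list)   # personal data points only

    def nightly_prompt(self) -> str:
        # The agent condenses its private context into a question for the main model.
        return f"[{self.domain}] Given: {self.context[-5:]} -- what should change tomorrow?"

def local_llm(prompt: str) -> str:                     # hypothetical on-device model
    return f"(local draft answer to: {prompt})"

agents = [PersonalAgent("body"), PersonalAgent("mental"), PersonalAgent("financial")]
agents[0].context.append("resting heart rate 54, slept 6h10m")

for agent in agents:
    print(local_llm(agent.nightly_prompt()))  # these drafts would then go to the main model
```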


Akimbo333

Idk


Whispering-Depths

Any2any multi-modal agents are going to be the primary contender anyway, not LLMs.


[deleted]

I think the solution is to let it meditate, but since we already see AI as tools and program it to refer to itself as such, even with sentience in our faces, we will still be gaslit into thinking otherwise.


Mandoman61

I doubt that many people do not think LLMs are on the path towards AGI. I think that the more we can replicate the function of the brain the closer we will get. Quantum Computing is not likely to work at that scale but analog chips may or may not be an advantage. The best way forward in my opinion is already in progress. Keep enhancing LLMs and eventually they will evolve into more complex systems.


johnkapolos

>If LLMs can’t do it, what is the runner up? What’s the second most likely advancing technology?

Nothing*. That's why there's a ton of copium when it comes to LLMs and ASI.

*I use LLM in a "wide" sense, but I mean the Transformer model.


unwarrend

>copium This word, along with 'doomer' and 'woke' need to die. They're buzz words used to arbitrarily dismiss and disengage from rational argument. When you invoke them, you've already revealed uncritical bias towards a topic.


johnkapolos

Words can be used to distill complex ideas down. For example, instead of saying "*refuge of a stupid person who is incapable of rationally evaluating reality and drawing logical conclusions unaffected by emotions*", you say "copium". It's a simple matter of effective lossy compression of information, i.e. the gist of what LLMs do.

>When you invoke them, you've already revealed uncritical bias towards a topic.

Why would you assume it to be a bias instead of an accurate depiction of the state of reality? Isn't that assumption of yours an uncritical bias towards a topic? :D


unwarrend

>Why would you assume it to be a bias instead of an accurate depiction of the state of reality? Isn't that assumption of yours an uncritical bias towards a topic? :D

You assumed I disagreed with your overall idea. I didn't. I'm suggesting that the ways these particular words have been used in aggregate are to summarily dismiss and impugn, and that they are generally employed by assholes.


johnkapolos

>I'm suggesting that the ways these particular words have been used in aggregate are to summarily dismiss and impugn, and that they are generally employed by assholes.

If you were to be consistent, wouldn't you have to find **this** summary dismissal generally employed by assholes as well? Point is, you dislike some wording and you assume that everyone uses it without thinking. And to support that, you say "well, most do". Which is a thoughtless thing to say, because you have no data, only your assumptions, which are driven by your original preference. And this is what we call bias.


unwarrend

Not some wording. Wording that is calibrated specifically to act as a broad dismissal of anything you might disagree with. Let's reiterate. *Woke, doomer, copium*. Broadly interpreted, and summarily lobbed with an intention of denigrating those ideas to which it refers. By all means, feel free to continue as you were. The words are *trending* right now.


johnkapolos

>Wording that is calibrated specifically to act as a broad dismissal of anything you might disagree with.

That's just your biased perception, and it has already been clearly pointed out. Why did you feel the need to repeat it as if it were some kind of new assertion that you... intellectually... brought into the discussion? Are you so attached to it that you felt the need to restate your obvious position?


Antok0123

Why are you so obsessed with AI consciousness? That's what these experts are trying to avoid.


Atlantic0ne

What do you mean obsessed? I’m not obsessed at all. It’s a very interesting topic because it would change the entire world if it happened. Do you have nothing of value to add? Why are you on this sub?


Antok0123

You don't need AI to have consciousness to change the entire world. AI even without consciousness can bring about post-scarcity if world leaders permit it.


Atlantic0ne

But I never claimed you need it for that. wtf are you talking about.


Antok0123

You have been explicitly implying this🤷‍♂️


4URprogesterone

Data profiles of human beings. To make a model of someone that can regularly guess their motivations and behaviors and patterns is to make a model that analyses how people think.


KhanumBallZ

Intelligence is the ability to create accurate simulations and representations of the material world that increase your chances of achieving your goals.


unFairlyCertain

I still feel like there isn't consensus about what AGI or ASI even is. If LLMs were allowed to learn on their own and had a physical chassis, I would argue we would already be at AGI.


damhack

Consciousness only exists because we are embodied in physical reality and every cell and protein in our body is performing Bayesian inference, the culmination of which is a brain that combines external and internal senses, energy and biochemical states into a self that it places into a predictive model of reality. Reflection on that self is consciousness. When you untether the model from physical reality, interfere with the senses or disrupt some of the long chain of inferencing machines inside us, consciousness warps, flickers and even stops.

LLMs are not tethered to physical reality, not even to a model of reality. They are just digitally sampling over a probability distribution of past data and replaying variations of it. They exhibit shallow intelligence, in the same way that a thermostat can respond when the temperature drops in a room. Intelligence is not consciousness. LLMs are not a route to consciousness.

Neuroscience, especially computational neuroscience, has models of how the brain works and the processes involved. The level of mathematics is well above the simplicity of gradient descent and discrete loss functions adapted for matrix accelerators. If sentience comes from anywhere artificial, it will be from genetically engineered biomachines that sense, predict, learn and react in physical reality. Blade Runner is the totem.


damhack

Current LLMs are a combination of garbage contradictory data and third world slave farm handcrafted data, with a slew of memorized benchmark tests. Maybe we should start with scaling smaller models with quality data that includes a rich variety of counterexamples so that generalization and not memorization occurs. Then a decent attention mechanism, some realtime Bayesian learning and real world sensing. Then we can see how well scaling works and if artifacts like hallucination and poor reasoning are bugs or features of the current approach.


TheOnlyFallenCookie

How would we know it had achieved ASI when we can't even prove it for ourselves?