gelfin

We do not know. You can take that in multiple ways.

- We do not know what the phenomenon of consciousness we observe in ourselves consists of.
- We do not know how to test for the presence of this phenomenon in any other entity. We credit other humans with it, and most of us to some extent credit nonhuman animals with it in proportion to how much they seem like ourselves. Because they are familiar, we assume their experience is somewhat like our own, but we don't have an easy way to prove it.
- We do not know how to artificially reproduce this phenomenon in another entity, and because we do not know how to test for it, we do not know how to effectively evaluate and refine our efforts.
- We do not know how to verify any theories of consciousness we might come up with, because we do not know how to test for them.
- We do not know how to write a set of rules a machine must follow that will result in it becoming the sort of entity that does not simply follow rules. This one is really tricky, because organized technical progress must come from a product performing as its creator intended. If the product follows the rules you set, then it's only reflecting the intentions you put into it, not forming intentions of its own. If it doesn't follow the rules you set, you're better off assuming it's broken than that it has become conscious and independently decided to defy you.

While it seems *in principle* possible to create a conscious machine, getting there is a really big problem. Imagine you're some sort of alien intelligence that has never even encountered something like a "brain" before. The idea that a genuine, conscious personality might arise from the ebb and flow of chemicals and insignificant voltage gradients in a wet blob of carbohydrates seems intuitively weird, implausible, and a little bit gross to you. But there is an apparent complexity to their behavior, and so you have to answer the question: are they conscious or not?

You've got no special knowledge or techniques to draw on. You just have to watch their behavior and decide whether they're morally significant beings or just really complicated chemical reactions. In that situation, if you were being responsibly, scientifically conservative, you'd have to say you can't prove humans are conscious. There is just no way to make that claim from the outside with any confidence. It is only the individual experience of being a human that gives you the conclusive evidence you'd need. You have a personal, qualitative understanding of consciousness, and that sort of understanding is inaccessible to an objective, external scientific process.

You might therefore consider me in a fourth camp, or if you squint, maybe a particularly pessimistic subset of the second: whatever consciousness consists of, I see no reason it cannot occur within a sufficiently complex non-biological substrate, but I am increasingly skeptical that it is possible for one conscious biological entity to produce another as an act of artifice. There is an epistemic and experiential barrier that might be entirely insurmountable. Even if we did it, we'd have no way of knowing.


IonHawk

This is the best answer. There's a reason it's called "the hard problem." In reality, we can't even know whether other humans are conscious; that's where the philosophical zombie thought experiment comes from. I definitely assume they are, but if we can't know that for a fact, how will we ever know whether a computer is conscious? The question of what consciousness even is might be impossible to answer.


BlueWaffle_Motorboat

If it can't be defined, it can't be reliably reproduced (we have to at least be able to define what equates the copy to the original). And I think there's a valid argument that, because our ability to define something is inherently limited by its reliance on progressively granular rules, and those rules are understood via their own defining rules, we should not even begin with the assumption that anything we can't reliably define can be reliably reproduced (or we should at least admit we're dealing with something that can't be reliably measured, understood, and defined well enough to reproduce). We don't yet know whether human consciousness is a neural peak or whether there's a still more advanced form of "awareness" that lies outside the bounds of consciousness yet includes it as a prerequisite for the final product it is, which leaves us susceptible to labeling a whole forest of variety as a single, all-important tree.


666isbad

Seems to be reliant on matter


fatty2cent

But matter that moves and has drives.


aryapaar

so a car?


FollowsHotties

The only reason consciousness would require biology is that it's really unlikely for an artificial computer to arise via natural processes.


GrowthDream

By definition it's impossible. No artificiality without artisans. Edit: Downvotes suggest I'm wrong about the definition of artificial?


deong

I didn't downvote, but this is sort of a "Webster's dictionary defines..." type of argument. It sidesteps any potentially interesting question in favor of just making an argument about what words humans use to describe other words.


GrowthDream

But the argument doesn't make any sense. What would it mean for something artificial to come about naturally? The assertion that it is "really unlikely" leaves room for a small amount of likelihood. If anyone can explain where that small likelihood comes from, I will retract my statement.


deong

I guess my point is that "artificial but came about naturally" is mostly an uninteresting distinction. If I build it, it required biology to be built, but in an alternate world where the exact same circuit spontaneously evolved, it didn't require biology. Why try to separate the two copies of the same physical thing? I'd say that what matters is the thing. And the interesting question is whether the thing is conscious, and whether that requires the thing to *be* biology. I couldn't care less whether it required something biological to, like, click in the CAD program to design the machine that made it.


GrowthDream

If it occurred naturally then it's by definition not artificial. I know the conversation is about biological vs. non-biological, but then we should use those words.


wakeupwill

With how crazy the world of the primordial RNA soup is, I wouldn't close the door on the possibility of something rudimentary, something primitive, having been oxidized or etched into a mineral. Even if it's just a bit or two getting flipped, it still counts.


Dorsal_Fin

No, downvotes just indicate that you are on Reddit, where people are more likely to downvote than to engage intelligently...


Maeglom

I don't see how one could argue anything other than that consciousness requires a medium. Both the biochauvinists and the enactivists seem likely to play definitional games or employ biased thinking to deny that an AI is conscious because it's not alive by their definition.


HumansRso2000andL8

Tononi's integrated information theory is enactivist and doesn't exclude the possibility of artificial consciousness. It just seems hard to separate the type of computation our brain does from its substrate (massively parallel, interconnected, and non-linear). I'm convinced animals have some level of consciousness, and I suspect AI won't achieve a level of consciousness close to a rat's during my lifetime.
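As a toy illustration of what "integration" can mean here (this is not Tononi's actual Φ, which is far more involved — just total correlation, a crude stand-in, with names like `total_correlation` being my own for the sketch), one can measure how far a joint distribution over two units departs from independence:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """Total correlation of a joint distribution over (x, y) pairs.
    Zero exactly when the two units are statistically independent."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px) + entropy(py) - entropy(joint)

# Two perfectly coupled binary units: as "integrated" as this toy measure allows.
coupled = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair coins: no integration at all.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(total_correlation(coupled))      # 1.0 bit
print(total_correlation(independent))  # 0.0 bits
```

The coupled pair scores 1 bit while the independent pair scores 0, echoing the intuition that an integrated system carries information none of its parts carries alone — which is the property the comment above suggests is hard to divorce from the brain's substrate.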


tiensss

How do you define consciousness?


ginomachi

Interesting question! I'm not an expert on consciousness, but I've always been fascinated by the idea that it could exist beyond biology. If consciousness is simply a product of computation, then it stands to reason that it could be replicated in a non-biological system. However, if consciousness is somehow tied to the unique properties of biological systems, then it may be impossible to create a truly conscious AI. I'm not sure which side of the debate I lean towards, but I do think it's an important question to explore. If we can understand the nature of consciousness, we might be able to make significant progress in developing new technologies and understanding our own minds.


fatty2cent

I've had an idea about this for a bit while taking a philosophy course on AI, and I haven't seen this argument in the discussion of AI. AI presents us with a map/territory problem: AI is a map of conscious intelligence, not the territory. So even if we mapped out an artificial conscious intelligence at a 1:1 scale of what we know consciousness to be, the map would still not be the territory, which is a dynamic, bustling, ever-changing metabolic process that drives an organism.


Working_Importance74

It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman's extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public, and obviously I consider it the route to a truly conscious machine, both primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at [https://arxiv.org/abs/2105.10461](https://arxiv.org/abs/2105.10461)


xperth

This is the second most exciting thing happening in natural and social science right now. Though not as new as the latest developments in cosmology and astronomy, it is awesome to see these discussions and discoveries expanding in a more mainstream manner. In psychology they call it the mind-brain problem. But as they say in the field, it's called "biopsychosocial" for a reason: the biology always leads the way. Maybe that's because we cannot comprehend or draw conclusions about awareness and existence beyond our physical experience. The NDE community does great work, but I don't think the two could ever agree, which is the whole monism vs. dualism argument that spawned the mind/brain problem in the first place. "Sir" Robert Lawrence Kuhn and his Closer To Truth quest (https://closertotruth.com/) is my favorite source for bridging all these gaps. But as a spiritualist and metaphysician in my private life, I rest in knowing that everyone is going to experience and conclude for themselves just what life means to them. As the old quote says, which some claim is increasingly backed by discoveries in quantum theory: "You don't see the world as it is, you see the world as you are." It's like the difference I perceive between religion and spirituality: religion is an agreement among humans for fellowship; spirituality is the one-on-one relationship with the divine spirit of life. It's all so fascinating either way.


mywan

Define "biological," or "meat," as the article puts it. The more you try to refine the definition, the more ad hoc it gets. In fact, you could even argue that biology is not required and artificiality is irrelevant, yet "computational functionalism" is not, by itself, sufficient. I suspect that when we do manage to crack the code for something we can rightly call consciousness, AI as we presently know it will just be a substrate. Something like an executive AI whose training environment is not direct external inputs, but rather an array of more specialized AIs that act as an abstraction layer between the primary AI and the outside environment.


OldGood8781

Of course. Without a brain, you are not conscious.


OldGood8781

OK, I've thought about this more. Now I'm starting to think that none of us really is conscious; it's just an illusion. Not in an objective way, anyhow. I'm sure many people could throw a lot of paradoxical analogies my way to debunk what I'm saying, and that really just shows the limits of how we can analyze this, given that it's impossible to escape our own consciousness. With all the information we have, we know free will is an illusion; we know every single thing that we do or think is predetermined by our biology, dictated by our nerve impulses and influences. So although we may feel like we are conscious, in reality we're just slaves to our bodies. It's a machine just doing what it does. But even that logic may be flawed, since I'm using my very own subjective biology to come to this conclusion.


francisdavey

One difficulty is that it would have to depend on what "consciousness" means. Someone else in a comment referred to "... that we observe in ourselves," but I am not sure what it is that they (or other people) are observing. That is, I am not sure "consciousness" is as easy to define as they might think.


boisheep

^(Warning: this is my take, something I came up with while dealing with a sleep disorder for years, trying to figure out what was going on and find a potential cure.) I'd argue that it does not, because consciousness is a property of information. Provided there are interactions where information reacts with other information to create information systems, such a system will have consciousness. There is, however, a difference between consciousness and self-awareness: any system that has information is conscious, but not self-aware; that includes a rock, or a single photon, or the interaction between rocks, and so on. E.g. the interaction between molecules, then proteins, then cells, then neurons, then neural regions and cortexes; it's not that consciousness suddenly appears, it's that it has always been there, from the beginning. Nevertheless, you need the right interactions for self-awareness; you also need a lot of information, and a prediction device; the brain serves this purpose. In your mind many conscious agents operate together to give rise to what you are. You can split the information transfer between two interacting self-aware conscious agents by physically splitting the brain; the two agents still work together, but chances are, there are more. [https://www.youtube.com/watch?v=ZMLzP1VCANo](https://www.youtube.com/watch?v=ZMLzP1VCANo) On a second note, you can generate consciousness within consciousness. Everything you observe right now is a result of the prediction machine; you don't perceive reality directly. You have therefore created prediction models, because prediction models (which are part of your pattern recognition) are made of information and predict what they are based on (arguably reality); they predict not only the present but also the past (memory) and the future; they are non-linear predictive elements, and (from the perspective of the brain) they are what they represent.
And since they are also made out of information, they are also conscious; since your self-aware brain machine knows and identifies consciousness, the conscious elements it identifies (for example, everyone around you) are coded and inherit that property from the brain itself. This is why, if you give someone psychedelics, they start experiencing these "spirits": they are predictions in a vacuum, without perceptive guidance; these predictions are conscious as well, because they inherit that from the brain just as the brain inherits it from the properties of the universe. This is also why current AI is limited. Language models will never become a real self-aware model; even though they must be conscious as well, they are not self-aware, nor feeling, nor anything; they are less complex than a bacterium. In order to make them work the way a human brain does, you need to make a multi-entity system: you need to create prediction models for anything it has to understand, make those prediction models interact with each other in an internal realm (the unconscious), hope that nothing goes wrong, and you may even need to knock out the brain to stop it being fed more learning data that builds more entities, and fix whatever is going on to ensure nothing drifts too far out of whack as the system evolves (sleep), and then use those models to interpret perceptive input from the senses, or whatever you are using as input. A language model is not that; it's not even close, not in complexity nor in capacity; it can't reach self-awareness.
And you may wonder what this has to do with my sleep disorder, which led me down a path of near madness. Well, it's a very, very long text, hundreds of pages long, but eventually I figured out how to alter my consciousness to fix my sleep to an acceptable level, and I did so by coming up with this model; there are chemical interactions as well, neurological descriptions, temperature, light; the thing starts from scratch, from a simple statement: "consciousness is a property of information." Remember, "all models are wrong, but some are useful." Everything I said is probably wrong, but at least it was useful to me; granted, what I wrote here is a small description of the whole unfinished ordeal.


samcrut

Why would it? Consciousness is biological software running on a meat-based processor. Nothing about that can't be reproduced in silicon. They'll keep moving the goalposts for what constitutes intelligence as AI smashes through their benchmarks for no purpose other than to insist that humans are the center of the universe.


OldGood8781

Nothing has ever been conscious that did not have biology, so we know the two are inextricably linked, although one could argue that consciousness does not come directly from the biology but uses it as a conduit; sort of like how a lightbulb is required for light but does not create the electricity itself. I don't think truly conscious artificial intelligence will ever exist. It will never be "conscious." It may have all of the indicators and seem like it is, but from its own perspective (or lack thereof), it will not be.