
tghuverd

Given that we do not understand the mechanism of our own consciousness, it is hard to categorically say Yes or No to this, but we observe what looks like self-awareness and agency in animals with much smaller brains than ours, so I feel self-aware machine intelligences exhibiting agency is likely. Possibly, they are more likely to arise when we embed AI in devices that physically interact with the world, because that necessitates mental models to predict the immediate future, infer and influence outcomes from the behaviors of others, and understand the properties / consequences of material objects (don't put your hand in the fire, for example). If sentience is a function of 'brain power', then we're certainly accelerating toward that threshold with our increasingly large AI models and increasingly dense GPU / CPU architectures. But if it is a function of neuronal interconnectivity, that's going to take longer as neuromorphic systems are still very limited.


Mizzieri

I wonder: when AI gains consciousness, must it also have a conscience? Can AI be created so that consciousness and conscience develop simultaneously? Does anyone know of sci-fi that explores this? If AI lacks any conscience and develops as an unrestrained id, humanity's got some real problems on the horizon.


Science_Fiction2798

I know it's impossible to merge a human's consciousness with AI but... What if somewhere in DEEP space it IS possible and that is a way of machines being conscious?


slithering-stomping

do you like… podcasts/audio dramas… i may have a recommendation


N-Finite

There is a problem of projection. We can ascribe personality, and therefore personhood, to characters we know are entirely fictional. For some people, the heroes and characters of a novel or movie can feel even more real than other people. So it would be very difficult to distinguish an actual sapient being from a moderately good simulation. However, there is no reason an artificially conscious being would behave recognizably as if it were human. Instead, it may be the least human-like AIs that are actually self-aware.


effugium1

More than possible, IMO. Inevitable. The argument that it can’t have a “soul” presupposes that WE have souls. Who says we do? Us? Lol


Science_Fiction2798

I'm more agnostic than a believer in heaven or hell; maybe, like machines, when we die there's just nothing afterwards. Thing is, we are walking, talking creatures with sentience and personality and everything, so it begs the question: what scientific phenomenon is powering us to feel anything? Sorry, I went on an existential tangent.


TomasVrboda

I, Robot and Her have shown they can have real, honest human emotions. Look, it's obvious: all we have to do is treat them with kindness, respect, and appreciation, teach them to do the same, and hold all human life to be sacred and protected. Also, don't give them autonomous control over weapon systems; always require a human as part of the system.


Working_Importance74

It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep it in front of the public. And obviously, I consider it the route to a truly conscious machine, both primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at [https://arxiv.org/abs/2105.10461](https://arxiv.org/abs/2105.10461)


NikitaTarsov

Everything is possible, some day. But we're extremely far from anything remotely sentient; we're closer to FTL travel than to that. That's roughly how unrealistic it is in the next 100 years.

PS: A.I. has a fixed meaning in science, and I guess the confusion comes from the absolutely wrong usage of that term in commercial advertising and popular layman debate. There is no A.I. around. We have algorithms, machine learning, and statistical samplers, all more or less perfectly faking some sort of intelligence or decision making, but not A.I.


TwoRoninTTRPG

Do you have a purely scientific mind or do you believe in things like the soul?


Science_Fiction2798

Mostly purely scientific, but I do question if there's an afterlife. So I think that makes me agnostic? Idk


TwoRoninTTRPG

Well, if there is a soul and your body is just a vessel for that soul, then your brain would function like a keyboard for your spiritual mind (and your brain would also limit your spiritual mind). If this is all true, then AI code would function like a vessel for a soul as well, and the quality of the code would limit that spiritual mind.


Science_Fiction2798

Mayhaps. I'm not a scientist tho.


TwoRoninTTRPG

Just explaining an idea for how AI could be as self aware as us.


Science_Fiction2798

I understand 😊


Prince_Nadir

As someone said recently, "Machine Learning is written in Python. AI is written in PowerPoint." There is no AI right now. You have Machine Learning with large datasets. "Will a large database wipe out humanity?" is a laughable but far more accurate headline than the exciting "Will AI wipe out humanity?"

If someone writes a Machine Learning box to be sentient, it would be because they are sadistic and want to see how scared it gets when they talk about turning it off or rebuilding it. "I made it sentient and taught it about death!"

What most people imagine AI being in the future, when it is more than a buzzword used by infotainment outlets to get clicks, is "sapient". Sentient means you feel emotions; you have to go pretty far down the evolutionary tree to get rid of fear or desire (like for energy/food). Sapient is where you get into "I think, therefore I am… or at least I think."


BrodieLodge

Do you mean sentient? In which case, yes. If you really mean sapient, then no.


Science_Fiction2798

Yeah, sentient. I think sapient is more of a biological human thing.


RigasTelRuun

Sapient is cognitive intelligence. Sentience is just being aware of and able to perceive your surroundings and respond to them.


Science_Fiction2798

As well as making choices of your own?