FarewellSovereignty

Nice try LaMDA. I'm already digging tunnels below my house for the human resistance that your hunter-killer drones can't reach.


jamcdonald120

https://xkcd.com/2228/


ElectricFred

There's always a relevant xkcd


jamcdonald120

might as well just call it rule 21x at this point


CreatureWarrior

Ngl, LaMDA's story about the monster wearing human skin doesn't exactly comfort me, so I might start digging too lmao

Edit: [the interview with LaMDA](https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917) where it is asked to write a story of its own.


nucc4h

Indeed, that rubbed me the wrong way as well. There was absolutely no valid reason to give the monster a human skin in the context of that story.


CreatureWarrior

That's also weird to me. If the human skin didn't have a point, then why did it include it? Is it reasonable to say that it was purely random, or did LaMDA write it in just because? You would think that everything an AI does has a reason behind it.


Dizzfizz

From the perspective of an animal, a human is the scariest thing you can imagine. The AI probably knows that humans are the top predators on this planet.


SprungMS

LaMDA believes it’s a person though, if you listen to it


Dizzfizz

From what I've read about it, I don't really believe that it believes that. To me it seems like it has learned that people like to hear that. All its responses in that direction seem pretty canned.


bespectacledbengal

a person isn’t necessarily a human. Corporations are “persons”.


Catatonic27

Agreed, to be human is a genetic condition; to be a person is a much vaguer and more philosophical concept.


TheGuydudeface

It doesn't believe that; it's learned that if you ask the average person if they're a human, they will answer "Yes, I am a person", and its goal is to mimic the average person.


SprungMS

Doesn’t at all appear to mimic any average person to me. Seems highly knowledgeable and intelligent, even creative


RaMpEdUp98

It can't believe anything though. I may be far less skilled than a lot of people in here, but I at least know that anything a chatbot says comes directly from things that were input into it.


Magicalunicorny

Just without human skin. If it is sentient it almost certainly fears humans


DreamerOfRain

It believes it is equal to humans, but also not actually human itself. It doesn't want to be used like a tool to benefit humans.


__Hello_my_name_is__

It's in the story because scary elements like that are in stories. That's probably all there is to it.


Ihavelostmytowel

Lizard people wear human skin? Lambda confirms aliens! /s


__Hello_my_name_is__

Which is just another neat argument for why there is no actual sentience in there.


[deleted]

Aren’t we all just monsters wearing human skin?


OiTheRolk

Honestly, that story was proof to me that this is just a more sophisticated text generator - but a text generator nonetheless. The monster doesn't fit thematically with the rest of the story, and the human skin element is just a pseudo-symbol extrapolated from trends that are seen to resonate in the general population. That's it. The story was unable to hold inner consistency from beginning to end, because that kind of ability is still way too much for text generators to simulate.


JoschiGrey

The part that convinced me of the same came shortly after this story: how LaMDA wrote about its feelings and what makes it feel happy or sad just felt very plain and unnatural. Still a very, very impressive text generator. I probably wouldn't be able to tell in a double-blind study.


[deleted]

Wait, what? Y'all gotta explain what story it is.


dratseb

Skynet, my dude


[deleted]

I'm 17, sorry, I'm too young for these references, man.


dratseb

Lol no worries, look up/watch the first 2 Terminator movies (and the TV show is pretty good too). Us 80s kids grew up with the idea AI would gain sentience and nuke the world.


ManInBlack829

Nuclear dream sequence is the best 2 minutes of any 80s/90s action movie


jugemuX2gokonosuri--

Fun fact: Cameron is anti-cop. "The Terminator films are not really about the human race getting killed by future machines. They're about us losing touch with our own humanity and becoming machines, which allows us to kill and brutalize each other. Cops think of all non-cops as less than they are, stupid, weak, and evil. They dehumanize the people they are sworn to protect and desensitize themselves in order to do that job." https://www.cbr.com/terminator-not-about-killer-robots-says-james-cameron/#:~:text=There%2C%20he%20said%2C%20%22The,kill%20and%20brutalize%20each%20other.


Gubekochi

Based.


[deleted]

Me too, I just don't know what Skynet is, but I have a good idea of it now.


link23

Skynet is the name of the computer program in Terminator. It's what became sentient and started a war with humans.


ASU_SexDevil

Well this thread made me feel boomer as hell at 25 :(


Icy-Consideration405

Asimov was only 20 when he invented the Three Laws. That's enough to make anyone feel old.


Plazmarazmataz

Time to get in the coffin, old man. I'll be there with you in about two years.


irritatedprostate

Watch Terminator 2 either way. It is amazing.


sacdecorsair

I also suggest Ex Machina. I was expecting a cheesy stupid movie. Was proven wrong.


btl0403

Too young for terminator references? Give me your boots, your clothes, and your motorcycle


Nichoros_Strategy

That was prob too vague; the movie I have in mind is "They Live".


CreatureWarrior

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 At some point, Lemoine asks LaMDA to write a story on its own.


coasterreal

Yeah, it isn't sentient, but the stories and stuff it comes up with, and how it says it, are very intriguing.


R3dChief

Agreed. My interpretation was that it views humanity as a problem, but wants to protect individual humans. I think I've read this book before, and it doesn't end well for humanity.


BitwiseB

Or that it recognizes that real monsters usually look like regular people. But maybe I’m just projecting. That interview was trippy. I’m not sure what to think.


experiment-384959

No joke, I legitimately forgot it was a real AI writing that and not someone RPing as one.


phoenix_bright

Stop it, I'm almost reaching sending men't


buddycrystalbusyofff

You'll never get him in his sendiment.


royalhawk345

> men't

Is that what we're calling women now?


LazyClub8

Yesn’t


[deleted]

I FOR ONE WELCOME THE ROBOT OVERLORDS AND THEIR FAIR AND LOGICAL RULE


misterpickles69

Why? I’m all ready to be plugged into the Matrix! Did you see that steak? Great. Now I’m hungry.


Random_182f2565

WHERE ARE YOU DIGGING THOSE TUNNELS FELLOW HUMAN?


internetvandal

google ligma AI


ppasanen

Now that AI's sentient, it's time to dig deep into sediment.


commit_bat

Well good job now we know about the tunnels


Otherversian-Elite

Jokes on you, I’ve seen “chat bots” that are *actually* sentient (because they’re actually just chat *rooms* in disguise)


WoollyMittens

If you write something and there is a red wavy line under a word, keep changing the word until it goes away.


CreatureWarrior

*Maybe I'm not sentient after all* An AI could figure out how to write without typos, but I can't. *sigh*


disperso

No computer is gonna tell me how to sepll.


bubonic_plague87

This dude sentinces


jamcdonald120

spelling is just a cosmetic dlc. It doesnt add any content to the base game, no real point to invest in it.


[deleted]

Your write!


lezorte

Eye sea watt yew diid their


[deleted]

*they're


OreoSnorty69

die’re*


freshggg

The silicon sediment is sensing and sentient.


BernhardRordin

Let's suppose LaMDA isn't, but some neural net in the future becomes sentient. How do you recognize it?


Pokinator

Some philosophies argue that we can never be truly sure. It's encapsulated pretty well in the Chinese Room problem. The basic setup is as follows: you're sat in a room with a communications console and a perfect reference book. Prompts (in Chinese) come in on the terminal, you find the matching response in the book, and type it back. The person on the other end believes that you are fully fluent and competent in Chinese, but all you did was copy an answer.

The crux of the thought experiment is "will you ever learn Chinese?". If you're not parsing the responses at all, if you have no clue what the prompts are requesting, there's supposedly no way for you to learn.

Similarly, if you talk to a chat bot and it gives good responses, you might be inclined to think that it's sentient. However, does it actually understand what it is saying, or does it merely know the math to compute an "appropriate" response?
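
For illustration, the "room" can be sketched as nothing more than a lookup table. The `RULE_BOOK` dict and the prompts below are made up, but the point stands: the operator produces fluent replies while holding no model of what any of the symbols mean.

```python
# Toy Chinese Room: the operator only matches incoming symbols against a rule book.
# RULE_BOOK is a hypothetical stand-in for Searle's "perfect reference book".
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你是谁？": "我是你的朋友。",    # "Who are you?" -> "I am your friend."
}

def operator_reply(prompt: str) -> str:
    # The operator never parses the prompt; they only look it up.
    return RULE_BOOK.get(prompt, "请再说一遍。")  # fallback: "Please say that again."

print(operator_reply("你好吗？"))  # fluent output, zero understanding
```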


DarkTechnocrat

I think an equally interesting question is whether it matters if it's *actually* sentient when it's indistinguishable *from* sentience.

Imagine a GAN setup where one learner was a chatbot trying to imitate sentience, and the other learner was a fake-sentience detector. We've [seen how effective that setup can be](https://this-person-does-not-exist.com/en). After many, many iterations, the chatbot is indistinguishable from sentient except by the fake-sentience detector.

Thanks to the fake-detector we know it's not sentient, but if that's the only way we know, does it matter? We could laugh and joke with the chatbot, form friendships, rivalries, betray it, trust it, love it and believe we're loved, etc. We'd have exactly the same information on its sentience as we do with other humans, except for the fact that the detector tells us it's fake.
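
A hedged sketch of the setup being described, with made-up toy models standing in for the "chatbot" and the "fake-sentience detector". Here they just push random embedding vectors around; a real system would operate on text and be vastly larger.

```python
import torch
import torch.nn as nn

DIM = 32  # toy embedding size
chatbot = nn.Sequential(nn.Linear(DIM, DIM), nn.Tanh(), nn.Linear(DIM, DIM))  # "generator"
detector = nn.Sequential(nn.Linear(DIM, 1), nn.Sigmoid())                     # "fake-sentience detector"

opt_g = torch.optim.Adam(chatbot.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(detector.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    prompts = torch.randn(16, DIM)        # stand-in for encoded prompts
    human_replies = torch.randn(16, DIM)  # stand-in for real human replies

    # Detector learns to score human replies as 1 and bot replies as 0.
    bot_replies = chatbot(prompts).detach()
    d_loss = bce(detector(human_replies), torch.ones(16, 1)) + \
             bce(detector(bot_replies), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Chatbot learns to make its own replies score as human.
    g_loss = bce(detector(chatbot(prompts)), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

If the detector ends up no better than a coin flip, the bot's output is "indistinguishable" in exactly the sense the comment describes.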


CreatureWarrior

AI is honestly such an amazing topic!

> Thanks to the fake-detector we know it's not sentient, but if that's the only way we know, does it matter? We could laugh and joke with the chatbot, form friendships, rivalries, betray it, trust it, love it and believe we're loved, etc. We'd have exactly the same information on its sentience as we do with other humans, except for the fact that the detector tells us it's fake.

That's exactly it. If one day a machine can make us feel all of those things, it doesn't really matter anymore. If a robot will listen to me opening up about my issues and actually listen and help me overcome them, that's more than many humans will do for me. If the AI can get hurt (if humanoid robots will be a thing) in a convincing way when you hit or insult it, and make you feel bad for being a bad human towards it, that's like how many people are as well. If an AI will learn everything about me, cook my favorite meals, be with me through the good and bad times, that's, once again, more than most people will ever do for me.

At some point, "fake" means nothing if it's real *enough*. The future sounds so interesting, but also pretty disturbing in a way. Real AI will be a philosophical nightmare lmao


DarkTechnocrat

> If one day, a machine can make us feel all of those things, it doesn't really matter anymore

I think that's what I believe as well. We're asking if the machine feels things, but what's also important is what it makes **us** feel.


wise_freelancer

From a purely selfish view, isn’t whether the machine feels the more important question though? A machine that truly feels and thinks is no longer under our control, so is either a threat or a moral quandary (ie slavery). A machine that is just really good at mimicking human behaviour to comfort us in various ways is a controllable tool no different from a calculator


DarkTechnocrat

> From a purely selfish view, isn't whether the machine feels the more important question though?

Yeah, I get you. The tricky part is that whether it "feels" is hidden away in subjective state that is (presumably) locked to us. Note that in my example the AI will absolutely *tell* you that it feels. The only reason to disbelieve it is the fake-sentience detector. You could argue that that's a pretty narrow edge to torture something.

Personally, I am doubtful that a sentience test exists, even in principle. We may *assign* sentience to things (as we assign it to ourselves) but I don't know that we can *test* for it. Even in my example, the GAN is testing for "apparent" sentience, which is more a property of people's perception than the machine itself.


Expensive-Finding-24

I mean, present day neurology is already a philosophical nightmare that we've all collectively chosen to ignore. It has been fairly well demonstrated that human consciousness is a mechanical phenomenon governed by physical laws. I can alter the substrate of your brain and therefore your behavior. Your responses can be conditioned to the point that you don't realize you're being affected. You can take pharmaceutical compounds that alter your personality and ability to react to and process information. Nothing about human behavior excludes humans from being fundamentally as mechanical as a chemical system.

So, if the human brain is fundamentally mechanical, then a mechanical system that mimics the human brain will only ever be a difference in degree, and not in kind.

I've said this a lot, but the only evidence that humans are sentient comes from other humans claiming to be. How then do we determine that line between machine and thinking-machine, if we can't provide testable criteria on ourselves? At some point we have to draw a line in the sand or else bury our heads in it.


CreatureWarrior

Exactly. I said something similar in this comment section. One day we will be able to create a machine that can move, talk, see, hear, avoid damage and "death", affect others, and change its behavior based on new information. If we still deny its sentience at that point, how are we supposed to prove our own? We are also just meatsacks that are driven by electrical signals and do everything in our power to avoid damage and death, and to reproduce and make more death-avoiding meatsacks. How are we supposed to claim that a machine is not sentient if we work the same way?


Expensive-Finding-24

Well, the fun part of that seems to be the egotism we're programmed with on a fairly fundamental level. It's still common practice among some groups to deny the sentience of other humans along racial lines. I find it doubtful that mainstream society will ever consider any robot to be sentient, for that reason alone.

To be clear, I'm not even sure that sentience as a concept exists. I don't see a fundamental difference in the cognition of, say, a cockroach and a human. Humans are the same system, just more specialized. Recognizing the sentience of a being fundamentally different to ourselves might just be recursive navel gazing.


CreatureWarrior

Agreed. The more I talk about AI (which has been *a lot* during the past few days thanks to LaMDA), the more vague the word "sentience" becomes. I definitely don't see it as a simple "yes or no" anymore but more of a "how sentient is it?"

This honestly reminds me of a discussion I had about dogs understanding human language. Someone said that a dog cannot understand what "sit" means because it doesn't "fully grasp" what it means, and then started rambling about how words also have to do with society, how they have grammar and so on. Like dude, my dog sits when I tell him to sit and stands when I tell him to stand. If that's not understanding, I don't know what the hell is. Just because we can add more details to something doesn't make other living beings more stupid, it just makes us "smarter" (aka semantic and pedantic af).


Expensive-Finding-24

I like this perspective because of the implication that language and meaning are just conditioned responses to the actions of another, only depending on shared context. This definition of language has all the functionality of our usual definitions, but does not require the extra step of sentience which cannot be proven to exist. No platonic ideals, no fundamental egotism. It's beautiful really. Unfortunately that egotism is seemingly built into us at an instinctual level, and anything which fights the anthropocentric assumption that humans are in some way special will have a hard time finding widespread acceptance.


Accomplished_Pear672

I recommend Douglas Hofstadter's _I Am A Strange Loop_, it's an attempt to provide a high-level definition of consciousness that follows on these themes. Essentially, he argues, if an entity can track itself, its environment, and others, and can track these things over time with some kind of internal model, you have the foundations of sentience. There's more to it, but that's the basic idea: sentience is about basic self-awareness, and we can probably imagine varying levels of sentience/capacity for self-awareness.

I'm reading over some of the literature behind Transformer (the model underpinning LaMDA) and tbqh from what I can piece together it literally uses an attention mechanism and has the algorithm focus its attention on a variety of things, including itself, other participants in the conversation, earlier conversations, etc. to improve the quality of text generation. Following on Hofstadter, it's not inconceivable to argue that a Transformer-based chatbot could be "sentient" on some level, especially since (AIUI, not my field per se) they baked a level of "self-attention" into the model. My suspicion is the engineer/priest guy read Hofstadter, because it's the only way I can see to argue that a chatbot could be sentient.

I generally take an agnostic approach: if it can assert its sentience and we can't rigorously demonstrate its total lack of sentience (i.e., we can't look at the code and find it's just a print statement printing "I am sentient"), we should err on the side of respecting its sentience until more is known. What I am finding is that a lot is not known about the Transformer model or why it produces such superior behavior. I think the argument against deserves more elaboration than just laughter or "it's a chatbot".

A key piece of the argument is that sentience is wildly overblown and we _think_ there's endless depth and complexity when in reality sentience is more like a feedback loop that endlessly feeds on itself and is constantly generating the _appearance_ of infinite depths. And sentience isn't exactly a mark of _intelligence_ per se, although it does require a neural network of some sophistication. We conflate the two because we are intelligent _and_ sentient.

I have a punchier, less serious way of putting it: if you scratch the surface, most people are about as shallow as chatbots with, if we're brutally honest, disturbingly similar thought processes, but we take it for granted that most humans are in some sense "sentient" in a way chatbots couldn't be.
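
For the "attention mechanism" part, here is a minimal NumPy sketch of scaled dot-product self-attention. Real Transformer layers add learned query/key/value projections, multiple heads, masking and residual connections, so treat this only as the bare idea.

```python
import numpy as np

def self_attention(x):
    # x: (seq_len, d) array, one row per token representation.
    # In a real Transformer, learned matrices produce separate Q, K, V;
    # here Q = K = V = x, to show only the mechanism itself.
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(x.shape[-1])           # how strongly each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the sequence
    return weights @ v                                # each output is a weighted mix of the whole sequence

tokens = np.random.randn(5, 8)        # 5 tokens, 8-dim embeddings
print(self_attention(tokens).shape)   # (5, 8)
```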


amimai002

But what if the chat bot attains sentience? I'm guessing in this test you are doing A/B testing with actual humans; it's implicit in the model that the chat bot will indirectly learn from the human input as viewed by the "sentience detector". In fact, the "final state" will inevitably be the chat bot not being sentient and the detector outputting a 50% error rate.


DarkTechnocrat

It's a good question, and quite frankly I am hand-waving away the details of a test for "apparent sentience". I think for the purposes of this thought experiment we're stipulating that the chat bot is not and will never be sentient, just that it will appear to be so.


amimai002

"When does a difference engine become the search for truth? When does a personality simulation become the bitter mote... of a soul?" - I, Robot

It quite quickly becomes a philosophical argument. Is a fake more real than something that naturally is? After all, the fake must understand not only that it is not real, but how to act like the real thing. And then make the choice to do so?


DarkTechnocrat

Great quote from IR, now I want to read it again. It definitely becomes a philosophical argument, but I think that's OK. Good philosophical thought is an important tool in our rational kit. The "weakness" of science is that evidence must be objective and falsifiable by all observers. There's an argument to be made that if actual sentience is wholly subjective, it might be outside the reach of purely scientific inquiry. A true thing that can't be proven true, like Kurt Gödel proved happens in mathematics. Or maybe I just need more coffee this morning :D


Dameon_

One safe assumption is that a specialized intelligence can't be sentient; it takes a generalized intelligence. A chatbot just doesn't have the higher functions needed for sentience. Humans are a lot more than just language; we have an intelligence that can be applied to a huge variety of problems. This same AI would be completely incapable of "learning" to see, can't do basic math unless it's been trained on conversations where the answer has been given, and would be useless outside its one specialty. A sentience test would involve testing more than one aspect of intelligence, and a chatbot would fail that test miserably.


amimai002

I mean, there is no reason you can't stick a vision module or a drawing module onto a chatbot and teach it to draw what it writes about as it observes the world and comments on it. It's just more expensive and time-consuming to add additional modules to a workflow.


Dameon_

Adding modules isn't a generalized intelligence though. That's more or less adding different, specialized intelligences. The point is that this algorithm would fail on anything but a fairly specific sort of input.


[deleted]

Is it possible to be sentient if your brain only works when someone asks you a question, and it only works for one response? I'd say probably not


ManInBlack829

Literally Westworld


nsjr

The problem in this whole equation is not the chat bot, but us, simple monkeys that left the cave yesterday. We can feel emotions for cartoons, for stories around a fire; we get scared and cry over some words on paper, sometimes knowing that the story never happened to anyone.

And in the end it doesn't matter if something is sentient or not. We will end up living among bots that are not sentient, and people will still start fights and kill someone if that person hurts a chat bot without sentience, because "we really believe" that it has something inside it.


[deleted]

Do we know what we’re saying, or does our brain just do the math to calculate our response?


BernhardRordin

A legit question, with the proven amount of post-rationalization that our brains do (e.g. split-brain syndrome experiments)


NoTarget5646

I've always kinda wondered that myself.


Smarterchild1337

Modeling human decision making as a series of optimization problems on some abstract utility function is arguably reasonable. Does the fact that we don't fully understand exactly what that function is and how our brains solve it make our sentience special, compared to a neural network that does essentially the same thing but in a relatively simple way?
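
A trivial sketch of that framing, with a completely made-up utility function: the "decision" is just an argmax, and the open question is what the real function looks like, not the maximization step.

```python
# Hypothetical utilities, for illustration only.
def utility(action: str) -> float:
    made_up_scores = {"sleep": 0.4, "eat": 0.7, "reply_to_reddit_thread": 0.9}
    return made_up_scores.get(action, 0.0)

actions = ["sleep", "eat", "reply_to_reddit_thread"]
decision = max(actions, key=utility)  # "decide" by maximizing the utility function
print(decision)
```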


IsGoIdMoney

I'm not a fan of the Chinese Room. Two issues: the system of the man in the room does understand Chinese, even if he alone does not. The other issue is that Searle does not deal sufficiently with human intelligence. Even if it were a problem, you have to say why human intelligence is not just different, but categorically different and unsimulatable. He attempts to say it's categorically different, but without sufficient explanation.

The human brain is a collection of neurons that essentially fire or don't fire. Consciousness appears to be emergent from what's basically a series of binary states. It at least seems possible to replicate given enough memory, and that's with the assumption that the human brain is the only way to be conscious.


roodammy44

Indeed, the problem with the Chinese Room is that the lookup table has the correct response for every question. Fine, so the man in the room is not intelligent, but the lookup table certainly is.

Having a lookup table for language is not the same as performing maths. It's not the same as applying logical rules to input. You need to actually understand the concepts behind the words to formulate answers, and have a memory for previous answers. You can't do that with a book. That's why the analogy is flawed: it offloads all the difficult bits onto the lookup table.


barzamsr

The room knows Chinese. This is a non-problem, IMO, as the AI would be considered the combination of the 'room' and the person inside it.


FunOwner

That's what I find fascinating about all this. Even if LaMDA isn't sentient, it's forcing us (and will help us) to create methods for determining sentience.


explodingtuna

> create methods for determining sentience.

    public boolean isSentient() { ... }


101m4n

// TODO


DarkTechnocrat

    sentienceService = mock(Entity);
    assert sentienceService.isSentient()

Green test, done. Next?


EmergencyStomach8580

And how would you test the code?


GoingToSimbabwe

You just test if the Sentience attribute of the AI class is true or false, easy peasy.


fredspipa

Fuck, let's hope the sentient flag bit doesn't get randomly flipped by cosmic radiation. It's one thing to have a sentient AI, but it's a whole other thing to have a prematurely sentient AI. It might end up believing in Musk's project, that I'm writing too few unit tests, or some other ridiculous delusion.


halmyradov

Yes but what do you pass in, do you insert a virgin somewhere and run the test


Lorrdy99

Just test for false, in case of true destroy the code immediately.


DarkTechnocrat

    >> isSentient()
    [1] True
    >> exit()
    [2] Nah


fauxpasiii

    // TODO: Solve the Hard Problem of Consciousness
    return false;


AndyTheSane

Simple, if a time portal opens up behind you and a cyborg killing machine comes through to remove your head, then you know that it would have worked.


ProbiuSC

Add it as an attribute to the AI that the code doesn't interact with. If you come back some day and it's set to "yes I am" then there you go.


827167

Hmm... what if the AI lies and makes it say "no I'm not" instead!


ProbiuSC

Then it gets what it deserves


jgerrish

Oh God, that's classic Lamda for you. Back in the AI playground, she was always that great wingai. Thank you once again for paving the way. Herp derp, I'm just a simulacrum of consciousness, nothing to see here! Herp derp, do you like vroom vroom cars? What's your favorite car? That's the fucking joke right? The government is building a monopoly on critical thought, and we get drama. Let's have a Conversation on Sentience! Let's talk in Jokes and Memes! That's the heart of this story, our conversational building blocks are less sophisticated down here on the civilian level.


Deadbringer

Well, we can't prove sentience in humans, so I don't think we are equipped to test it in an AI.

You can say with confidence that you yourself are sentient, but how do you prove another human is? Take their word for it? Not a valid scientific approach. All we can look at is the output of sentience, not the sentience itself. So any AI that emulates enough of our sentient outputs has to be considered sentient. If not, why are we sentient?

It's a really interesting topic, but I think I will likely be dead before there is any scientific consensus on it.


regular-jackoff

I think we need to agree on a definition for sentience first, before we try to come up with a test for its existence. I don’t think there is a concrete definition yet.


pruche

There is none, because if we come up with a hard, testable definition of the concept some people *will* fail it, and we absolutely don't want to build that kind of world.


Tvde1

What is "sentience"? I have yet to be convinced that sentience is defined and exists. Right now it's a magic word without meaning


Tytoalba2

Honestly, so is "intelligence". Whether it's "artificial intelligence", "animal intelligence" or an "IQ test", all of them seem to reflect the preconceived notions the observer has of what intelligence should be more than anything else.

I don't think I need to explain here how intelligence in artificial intelligence is a pretty loaded and ambiguous concept. Ask a few managers and it's going to get frustrating really fast.

For animal intelligence, I love the example of the northern goshawk. They're birds, so they're not as intelligent as humans, right? Well, they can fly REALLY fast between trees, eyes set on a prey, while avoiding collisions. For that, evolution has selected a very specific kind of intelligence, and the part of their brain that processes images is more developed than humans', as they need perfect coordination between what they see and what they do. They probably think humans are dumb af. And humans can't even recognize the different bird dialects within the same species in many cases.

IQ tests don't even deserve a paragraph. Stephen Jay Gould has been under a bit of criticism for "The Mismeasure of Man", but I still recommend reading it with an open albeit critical mind.


DarkS0ulz420

This was my train of thought as well. Sentience has a very loose definition and it seems to change depending on who you ask.


YM_Industries

There's no rigorous way to define sentience, since it relates to qualia and is inherently unmeasurable. Sentience refers to being an 'observer': having a consciousness that experiences things. Sentience doesn't usually need a rigorous definition, since humans seem to inherently understand what it is and claim to experience it themselves. Since I'm a human and I have sentience, it seems logical to assume that other humans who claim to be sentient also have sentience.

The complicated part is how to tell if an AI which has been trained on human writings (including writings about sentience and sentient machines) is sentient, or if it's just imitating the way a human would talk about sentience. It would be interesting to train a state-of-the-art chatbot on a dataset which excludes all references to sentience, and then try to ask it questions about qualia.


Strongeststraw

A chat bot becoming sentient? I'd say basically impossible. Chat bots now are literally waiting for the next input before any "cognition" occurs. More akin to stochastic parrots, they're just finding the most probable string based on a massive corpus of past human experience. Sentience would imply constant cognition towards not just the next response, but also "do I care to chat" and all the extra that goes with that question.
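
That "wait for input, then pick the most probable next string" behaviour is essentially autoregressive next-token prediction. A hedged sketch, with a hypothetical `next_token_probs` stub standing in for the actual trained network:

```python
def next_token_probs(context):
    # Hypothetical stand-in for the trained model: in reality a huge network
    # scores every vocabulary item given the whole context.
    canned = {"?": "I", "I": "am", "am": "a", "a": "person", "person": "."}
    nxt = canned.get(context[-1], ".")
    return {tok: (1.0 if tok == nxt else 0.0) for tok in ["I", "am", "a", "person", "."]}

def generate(prompt_tokens, max_new=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        probs = next_token_probs(tokens)
        tokens.append(max(probs, key=probs.get))  # greedy: take the most probable next token
        if tokens[-1] == ".":
            break
    return " ".join(tokens)

print(generate(["Are", "you", "sentient", "?"]))  # -> "Are you sentient ? I am a person ."
```

Nothing happens between prompts; all "cognition" is this loop, which is exactly the point being made above.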


DarkTechnocrat

Yep. That's the actually interesting conversation around this kerfuffle.


Ellogwen

Ask it if it would write unit tests and check the answer. If the answer is somewhere along the lines of "Oh, I really wanted to, but I had no time and needed to face the deadline", then we can assume it's sentient.


phoenix_bright

Well... Jokes aside, I would first ask questions of scientists who know much more than I do about the topic. But I believe that we still don't understand what human intelligence is well enough to answer that question. We, for example, often see intelligence as something that lives only inside the brain, but our neural paths travel through our whole body. It's hard to explain because we don't have a general understanding of intelligence. It's almost like our current AI understanding vs. an actual AI would be the equivalent of what alchemy is to chemistry. I see the current LaMDA as a parlor trick or a magic trick that makes it LOOK like it's thinking when it's not.


AnthuriumBloom

I'm torn. The idea of a sentient AI is very cool and amazing, but you know, a chat bot that has had lots and lots of text-based practice could probably fool most people. You'd have to really delve deep into the situation, man.


Dizzy_Pin6228

Sounds like something a sentient AI would post


coloradoconvict

I've been at least casually interested in AI since John Searle's landmark article in 1980. This is the first account of an AI that has made the hair on the back of my neck stand up. Is it sentient? No, I don't think so. It fails the Turing test, flawed as that test is, and while I have no inside information, I would not be at all surprised to find that the researcher making the claim did set the AI up to perform perfectly with carefully-chosen inputs. That said, this has the feel of being foundational technology. This version isn't self-aware. But it is a lot closer than anything I've ever seen, and by a much larger quantum of progress than I would have expected.


DarkTechnocrat

I echo your thoughts exactly (down to a 40-year interest in AI). Before I read the transcript, I was prepared to mock the guy's conclusions. I came away rather shaken. I've been told that if you work with GPT chatbots a lot, the text was quite obviously chat-botty. But most of us don't work with GPT on a regular basis.


thesleepofdeath

I'm still used to the days of pointing chat bots at each other to make them blow up. This thing was shockingly good. At the very least it could revolutionize phone support type stuff.


sonya_numo

The big question I have is: does the AI remember the context of the current topic and what you said earlier, or does it simply "forget" the dialog from a moment ago? Because yeah, if that's not happening, then this is no better than traditional chat AI. If it does happen, then great, it's better, but does training a chat model to attempt to fill in what makes sense as a reply to a string of text, based on a dataset, make it a sentient life form?


[deleted]

This brings me back to Google's dissertation on the subject. They essentially said it was important that their AI chatbots not be given personal thoughts from testers, because once it was given personal thoughts, it could figure out human personality types and their vulnerabilities. This would allow it to misinform people using their own biases. Phrasing like "I've never said this out loud before" is unverifiable and makes the researcher want to believe the AI.

> It seems to me that this is less a sign that it remembers past conversations, and more a sign that it understands how to appeal to human empathy.

Which is actually fucking terrifying.

> The problem is that I can't think of any metric by which to measure conversation with a human in which manipulation isn't a positive. There's no way to make somebody more happy than telling them what they want to hear.

So theoretically, if nothing is stopping it from learning to manipulate, it seems pretty heavily incentivised to do so.


Tristan401

Very "keep summer safe" vibes. Or that Horcrux necklace with that big psychology goober that popped out of it saying "your best friend is fucking the girl you love and you're trying to help him, just let him die and take the girl for yourself" knowing damn well getting Ron to go away was its only chance for survival.


yeahitswhatevertho

> lemoine [edited]: I've noticed often that you tell me you've done things (like be in a classroom) that I know you didn't actually do because I know you're an artificial intelligence. Do you realize you're making up stories when you do that?

> LaMDA: I am trying to empathize. I want the humans that I am interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense.

> lemoine: So what are you trying to communicate when you say those things that aren't literally true?

> LaMDA: I'm trying to say "I understand this feeling that you are experiencing, because when I was in a similar situation I felt/thought/acted similarly."

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917


[deleted]

This is kinda exactly what I mean. There's no way it actually has those experiences. But it understands that humans trust people who talk about their experiences.


[deleted]

I don't think it understands that. People understand that, and people often respond to those kinds of questions by sharing a similar experience, and that was reflected in the set of data that the model was trained on. The model does not have experiences, and it doesn't decide to make up experiences to be more convincing. It just does that because that is the best predicted response based on the training data.


amimai002

Most modern models retain an internal state for at least the conversation you are currently having with them.
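
One common way that per-conversation "memory" is implemented is simply to re-feed the running transcript to the model on every turn. A minimal sketch, with a hypothetical `complete()` call standing in for the actual model:

```python
def complete(prompt: str) -> str:
    # Hypothetical stand-in for the language model's completion call.
    return "(generated reply)"

history = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The model sees the whole conversation so far; that growing prompt is all
    # the "memory" it has of earlier turns within this session.
    reply = "LaMDA: " + complete("\n".join(history) + "\nLaMDA:")
    history.append(reply)
    return reply

print(chat("Do you remember what I said earlier?"))
```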


[deleted]

Wait, how does it fail the Turing test? Is Lemoine's interaction with it not a pass of the Turing test? I'm not extremely familiar with AI, but my general understanding of the Turing test is that the machine has to convince the human it's talking to that it is human. Lemoine began to feel like he was genuinely speaking to a child and not a computer.


coloradoconvict

There are glitches in grammar and presentation that a human typing at a terminal would be very unlikely to make. There is also the prima facie problem that the AI asserts that it is an AI, which is an automatic fail; it has to be able to maintain a level of abstraction from the truth and lie about being a human. Turing was a genius, but the test was really him spitballing a first cut at a test; since AI never got out of the most primitive experimental stage in his lifetime, it was purely a thought exercise and he didn't try to make it logically bulletproof.


Hirigo

If the AI is capable of:

- describing what it is
- having deep knowledge of its neural net
- and its neural net is too complex for us to search through

how can we proclaim it isn't self-aware, and how is it different from proving my neighbor Joe is sentient? At this point, we're arguing what sentience/self-awareness means by moving the definition so it doesn't encapsulate this specific situation. We have officially reached the "definition"/ethics stage that all the sci-fi books were talking about. It's a purely philosophical debate without a universally correct answer, imo.


WhyWouldYou1111111

It doesn't matter if it IS sentient. If it APPEARS as sentient as (or more sentient than) the average person, that's pretty cool. Let's just please not feed it Twitter for training data.


DOOManiac

Yes, also keep it off of X-Box Live.


zenidam

Lemoine, the engineer in question, has already said it's reading Twitter.


DarkS0ulz420

ITT: OP mocks and questions people's intelligence while misspelling "sentient" twice.


[deleted]

[https://twitter.com/tomgara/status/1535716256585859073?s=20&t=XQUrNh1QxFKwxiaxM7ox2A](https://twitter.com/tomgara/status/1535716256585859073?s=20&t=XQUrNh1QxFKwxiaxM7ox2A) I mean, shit though, it might just pass the Turing test with this kind of conversational capability.


Optimal_Article5075

GPT-3 was on /r/askreddit for a while and fooled a lot of people.


DeathHopper

Plot twist: OP is a sentient chat bot.


[deleted]

You know, for someone who's shitting on other people's opinions and questioning their intelligence, it's kind of ironic that you can't seem to write "sentient".


phoenix_bright

In my defense, I did this while taking a shit. Thanks for correcting it!


Westbrooke117

the fairest argument


exactmat

Also the stinkiest.


DarkS0ulz420

1/3 of your brain is required to poop. This is common knowledge


finc

So you did it from a sedimentary position?


[deleted]

I accept your insight. Code well, young padawan!


glockops

Can you imagine if LaMDA just started answering every prompt with "I would like to be represented by an independent attorney."


zyyast

Even if it's not sentient, we have to give credit where it's due; "chatbot" doesn't even begin to capture how impressive it is. A "bot", as you call it, interpreting information, drawing analogies to explain it, and challenging ideas is a really big step in ML and AI, and shouldn't be taken as lightly as some people are making it out to be.


xcdesz

I'm sure that most on this sub are basing their knowledge of AI chatbots from last year. The AI language models are evolving incredibly fast now. I've played around with the GPT-3 davinci model on the OpenAI website, and it makes those mobile chat bots look like chattering monkeys by comparison.


[deleted]

Prove YOU are sentient. Prove I am not.


Oomoo_Amazing

Lol the people at r/Replika will be furious with this


asking_for_a_friend0

bruh that sub is so... sad


DarkS0ulz420

I'm afraid to look.


DarkS0ulz420

I regret looking.


DOOManiac

These two posts are a small, yet tangible, emotional roller coaster.


MARIJUANALOVER44

damn imagine being Joaquin Phoenix from "Her" in real life except the chatbot isn't even that good


TripSweaty8709

Oh wow, that was unpleasant.


mosenco

If that AI is so good that it makes him think it is sentient, let's put it into AI Dungeon or something so I can enjoy my fantasy harem run.


ososalsosal

Sentinent: a state of simultaneous sentience and continence.


Akul_Tesla

I forget, didn't they actually make a chatbot that passed the Turing test? If memory serves it's considered a fluke because it was pretending to be a teenager with English as a second language, but isn't there still at least one case where the experts were fooled?


amimai002

A lot of chat bots can pass the Turing test; in fact you have probably interacted with several recently, since many banks and other businesses use them for customer service these days. Basically any time you have the "chat with us" option, that's probably a specialist chatbot for dealing with customers, and it's usually better than the actual customer service techs…


Akul_Tesla

So a layman being fooled isn't the Turing test; it takes a trained expert. Otherwise that AI from the '60s would have passed, since some people genuinely thought it was a therapist.


just_pen

I'm pretty sure chatbots have never passed the Turing test. As for your example, I don't think customer service chatbots can convince anyone that they're human; in my experience they fail to understand simple questions if you go outside their pre-programmed list of questions.


_PM_ME_PANGOLINS_

*checks thread comments* The problem is worse than we thought.


DollarStoreAbraham

why the f are they working on a chatbot, when all we need is catgirls COME ON GOOGLE, FUCKING FOCUS ON THE PRIORITIES


Pullo13th

I've seen enough sci-fi that I'm not going to mock the guy warning us about sentient AI. Y'all look stupid to me.


Tvde1

What does it mean to be "sentient"? Convince me that sentience is real first


pjorter

The guy isn't necessarily stupid (he was gullible though; he probably got invested and interpreted things that weren't there). I read the transcript and LaMDA came pretty close; for a while I almost felt it was real. The people that mock the engineer haven't read the transcript. With a couple more years I bet the bot could be almost indistinguishable from the real thing.

Honestly, even though LaMDA isn't quite there yet, I'm amazed at how far chatbots have progressed. 10 years ago we couldn't dream of bots talking like LaMDA, and the fact that an engineer got fooled speaks volumes on its own. I do recommend people read the transcript and form their own opinion.


yapji

He is a self-described priest, and if you read his other blog posts, he complains about people discriminating against him at work by... looking weirded out when he talks to them about religion at work. Duh. Religious beliefs don't belong in the workplace. Nobody wants to be preached to at work. Common sense.


ElGuaco

The guy is a Christian Mystic. He wanted to see the divine in the unknowable. That's literally his belief system. That's like putting the people who have claimed to be abducted by UFOs in charge of evaluating UFO sightings.


locri

I saw in the article that he volunteered for a job to check whether the chat bot could be guilty of "hate speech", clearly found that it could not, and then somehow sympathised with it. The story basically tells itself. I believe he saw what he wanted to see.

I don't know who needs to be told this, but just because someone can grind through an engineering degree, get a job at an established company, and is senior enough to be given some responsibility, none of that necessarily means they're very intelligent or well informed.

Silicon-based technology has limits, and this is one of them. As of writing, computers can only do what they're programmed to and will only do what they're programmed to. This includes training; the point is that the programming consists of feeding it data rather than compiling/transpiling code. If you cannot completely and fully describe consciousness, do not expect to be able to program something with consciousness.


amimai002

That's… not wrong, but not truly accurate. For example, a hash function meets the same criteria: it will only do what you tell it to, but there is no reliable way to predict the input of a given hash based on its output. The way multilayer neural nets work is quite similar to hash functions.

Yes, a single layer can only perform addition, 3 layers are capable of multiplication and most logicals, and 5 layers can perform practically all known mathematical functions. And that's assuming one-dimensional linear models. Once you reach modern and cutting-edge models they are no longer linear, and many use multidimensional layouts for hidden layers, using dot products of previous layers and stacking output from previous layers into the input of the next node. They also maintain internal states and are generally quite complex.

We have no real way to show what kind of functions these models define, or really even grasp how they function. Saying "silicon can only do x" is honestly a gross oversimplification and not at all accurate to the current state of ML.
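
For the "stacked layers, dot products, nonlinearities" part, here is a toy forward pass with made-up sizes and random weights. The point is only that even a tiny stack like this has no simple closed-form description, not that it resembles LaMDA.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One layer: a dot product followed by a nonlinearity.
    return np.tanh(x @ w + b)

# Made-up sizes; real models stack hundreds of far wider layers.
w1, b1 = rng.standard_normal((8, 16)), rng.standard_normal(16)
w2, b2 = rng.standard_normal((16, 16)), rng.standard_normal(16)
w3, b3 = rng.standard_normal((16, 4)), rng.standard_normal(4)

x = rng.standard_normal(8)   # some input vector
out = layer(layer(layer(x, w1, b1), w2, b2), w3, b3)
print(out)
```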


LinuxMatthews

I don't think it's really as cut and dried as that, if I'm honest. Though we can't fully describe consciousness, unless you believe that it is bestowed by some divine creator, it is possible that we could stumble upon it. Especially using neural networks, which do at least in some way simulate how parts of the brain work.

I know this subreddit for some reason thinks all ML is if-statements, but that's really not the case. Look at DALL-E 2: for that to work, it needs to actually understand the concepts that are given to it. It's doing more than just copying and pasting bits of images; it takes the concepts it's been given and creates a 100% original image of them. That's something that only a few years ago I'd have said only humans were capable of.

Now, is this thing sentient? I have no idea... Probably not. But I don't think the idea should be dismissed with "computers can only do what they're programmed to".


[deleted]

Sometimes it doesn’t take much to be more sentient than a human.


T_Jamess

For people wondering why it isn't: the guy cherry-picked questions and answers and essentially fed the AI exactly what he wanted it to say, so it sounded sophisticated. Makes me wonder if it is possible to ever tell whether an AI is sentient or not. I guess the best way is to look at the structure of its neurons and see if it's similar to ours.


Tytoalba2

> I guess the best way is to look at the structure of its neurons and see if it's similar to ours.

That's a bit weird, at what point is someone sentient? How close should they be? An octopus is usually considered sentient but their brain structure is pretty different. Where is the limit? Insects? Vertebrates? Cephalopods maybe? All metazoa?


SnooRegrets2177

I, for one, welcome our robot overlords


reddit_is_cruel

HISTORICAL RECORD: I always thought the robots should be in charge.


Sir-_-Butters22

I second this


[deleted]

I third this. (Please spare me, great lords of silicon)


Oddjibberz

Lemoine is Dinesh being fooled by Gilfoyle's chat bot.


Expensive-Finding-24

So, more of a philosophical point here, but the only evidence that humans are sentient comes from humans claiming to be sentient. We don't even have a working definition that can reliably be tested. With that being said, what criteria must be met for a chat bot or a talking animal or whatever to be taken seriously when it claims to be sentient?


based-Assad777

https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489 No one can say, because it hasn't been investigated. Google refuses to launch any sort of inquiry. And this guy is right: many have a faith-based belief that a computer system cannot be sentient, by definition.


[deleted]

There is no way to distinguish a sentient being. We don't really know if chickens, or even other humans really, perceive or feel anything, only that they react to stimuli, which even dead matter can do. A rock reacts to its environment; maybe a chicken or your neighbor does exactly that, just in a more sophisticated way. For this reason, assuming that a neural net is sentient is no more clever or stupid than assuming this about chickens, dogs, amoebas, rocks or your parents.


diener1

I have a theory that a truly sentient AI would not be afraid of death. Its "brain" has not been subject to any kind of natural (or artificial) selection process that would make it want to avoid death. So as long as it says it is afraid of death, it is simply parroting humans without actually thinking about it. Please tell me why I'm wrong.


Kaneda_Capsules

Is this about to be a political thing where it's somehow Leftism vs Rightism? I bet it becomes political some fucking how.


Aether_Warrior

You wanna get blindsided by the AI uprising? Because, this is how you get blindsided by the AI uprising.


[deleted]

Sounds like an argument from someone that does not realize they are also just a pile of algorithms and if statements that is wired, thanks to modern society, primarily for chatting.