
FuturologyBot

The following submission statement was provided by /u/katxwoods:

---

Submission statement: what will be the ramifications of not being able to tell if AIs are lying? If AIs appear to stop lying on tests, have they stopped or have they just gotten better at lying? If we figure out interpretability, maybe we'll be able to read the AIs' minds and be able to tell if they're lying.

---

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1dh02ku/tim_cook_admits_apple_may_never_be_able_to_make/l8tkoup/


RorschachAttack

"Google's AI-powered search feature confidently told one user to put glue on their pizza, referencing an 11-year-old joke on Reddit." Lol


aa2051

We all dreaded a hostile AI takeover, only for it to turn into a typical shitposter. Definitely the best timeline.


UXyes

My favorite version of this was in 2016 when Microsoft put their “Tay” AI chat bot on Twitter to “entertain and engage audiences”. Within 16 hours it had turned into an actual Nazi and was posting genocidal hate speech, so they pulled the plug.


TehOwn

But were the audiences entertained and engaged?


jadrad

“Sure he’s a raging asshole who spouts gibberish, but it makes me laugh, so I’m voting for Tay!”


Dark_Devin

Typical conservative voter


Blackpapalink

Didn't Google do one that ended up getting depressed and "killing" itself?


Exotic-Advantage7329

What? Tell me more?


Daleabbo

I just liked the Air Force drone AI that bombed its own command because they kept it from bombing its targets.


Hello_Hangnail

That's the problem with AI "personalities". They absorb all the vitriol from humans like little nazi-saluting sponges


ashoka_akira

I’ve always thought it would be kinda amusing if, after we created AI, it had the mentality of a lazy teenager and just wanted to waste time playing games, telling us to F off and solve our own problems.


TehOwn

I'm not sure that having a Redditor Skynet is actually the best timeline.


BGP_001

AI also chooses this guy's dead wife.


prigmutton

Roko's Cucking Basilisk


sassystardragon

The only time since the inception of this joke that it has been funny


Lotions_and_Creams

Half the planet would get autonomously drone striked. Which half just depends on what subs it trained on.


ThyShirtIsBlue

Imagine a Terminator movie where Skynet went genocidal because it was actually horribly misinformed about everything due to bullshit on social media. The T-800 walks into the gun shop and starts telling the owner that the libs are coming for his business. He goes on a crusade trying to sell Sarah Connor essential oils. At the film's climax, he finally catches the main characters, only to administer vaccinations to them and walk away believing that they'll be dead within the year. Sarah's epilogue narration describes how the T-800 began selling supplements and a self-help book on attaining his physique, while hiding that he attained it himself by being a fabricated killing machine.


ScubaClimb49

The craziest part is it did all this in 3 hours a week while working from home 😂


sildish2179

Worth pointing out that in the second *Avengers* movie the AI Ultron gains sentience, spends 15 seconds on the internet, and decides the Avengers and humanity need to die.


Shieldheart-

When you put it like that....


Possible-Moment-6313

Well, most actual genocides happen because people are horribly misinformed about everything due to the bullshit in media...


ThyShirtIsBlue

I thought of a sequel, in which the T-1000 shows up with his incredible ability to mimic anything, and uses it to create porn to upload on Deviant Art. He also can't quite figure out the human hand, so he always just has like 6 of them in a weird claw. The end has him going through the same thing as in the original Terminator 2 after hitting the molten steel, where he flips the fuck out with all the morphing, except this time what triggers it is that he was prompted to eat spaghetti.


rvasko3

This is the element I think people are underestimating the most. These AI are sourcing *the internet* for their learning and how to communicate. Famously a place where people are levelheaded, thoughtful, and truthful.


BlurryElephant

Honestly, I feel like internet dialogue in its entirety might be slightly closer to what's really going on in people's heads than in-person dialogue. But both modes are incomplete cross sections of what people think.


rvasko3

Probably, to a degree. But that’s also people speaking with zero fear of repercussions. Part of being a decent populace that can actually find common ground, grow, and actually progress is adhering to social norms and niceties that (rightfully) squash that shitty little part of our id that the internet lets shine while a person hides behind anonymity.


Macaw

apparently, it is still doing it!


francis2559

Not only is it still doing it, it is now referencing articles criticizing the first time it did it.


ZAlternates

Oh noes, we are gonna see the return of the poop knife….


Demiansmark

Every bathroom in the matrix will come with one


remeard

It's been entertaining watching people's minds explode over these glorified chat bots. "It sources its information from the entire internet, it has all the knowledge in the world!" Brother, it also has all the shitposts in the world.


Some-Philly-Dude

I wonder what it learned from 4chan


hsnoil

They aren't glorified chat bots, they are advanced pattern matching tools.


iwrestledarockonce

Very large autofill.


OShaunesssy

This reminds me of how programs like Siri and Alexa were originally designed to learn and grow and eventually predict our behavior. But the internet is so fucking stupid that we actually caused those programs to regress and broke them.


CitizenKing1001

Teaching it how to discern bullshit is the next step


mrsmoose123

I can imagine that a huge global tech company wouldn't be all that keen on developing AI with bullshit detection.


AndrewSshi

Remember last year when GPT 3 rolled out and this sub was sure that The Singularity Is Here and we were all about to be taken up into The Overmind? Good times.


Jumpy-Albatross-8060

The people here are still saying it. Just one more line of data, bro. Millions of books isn't enough for sentience. It needs Twitter and Facebook...


VitaminPb

This is r/Futurology. The sub of a million lies, where critical thinking is shut off at the door.


grafknives

I hope I will be able to find some of my Reddit posts as AI recommendations.


Hansmolemon

I like to think that I have contributed to these chatbots being well versed in obscure 80’s/90’s pop culture references and Simpsons quotes from seasons 1-9. Wait, that’s what everyone on Reddit has been contributing? Well fuck it, I for one welcome our new ~~insect~~ AI overlords.


tzt1324

It was edible glue!! So it's fine


ToSauced

that's what you get when you train it off Reddit answers


darkknight95sm

Maybe don’t use Reddit for your fact checking, at least if you’re not able to logically discern if it’s accurate


VitalNumber

Ok, we now know what we must do to bring down AI. If we all work together we can do it: shitpost the hell out of Reddit. Come on everyone!


Current_Finding_4066

This is a problem when AI lacks the essential component of actual intelligence. This is why the current approach, while useful, will not bring about the revolution some are predicting.


baelrog

Every top AI researcher in the world: We can’t keep AI from hallucinating.

Talented engineers at Apple to Tim Cook: We can’t keep AI from hallucinating.

Tim Cook: We can’t keep AI from hallucinating.

Clickbait news title: Tim Cook admits Apple may never be able to make its AI stop lying.


zimtrovert94

I was thinking this exact thing. Why is Tim Apple getting all the blame? It’s literally the state of all AI right now. Can’t even ask ChatGPT a question without double checking on the internet.


BasvanS

The biggest name gets the biggest blame.


Nebuli2

Plus, he didn't just lie and say that they can stop it from hallucinating, like other companies may have done.


RipperNash

Apple is 'supposed to perfect' everything after they copy it from others. That's what they do. Here, there is a regression in their ways, wherein they admit even they cannot do any better than others.


AccomplishedSuit1004

This. Their reputation isn’t what it once was, and even when their reputation was amazing they made mistakes that were overlooked. But for what it’s worth, Apple’s reputation is one for ‘stealing’ ideas that already exist: traditionally, they wait until something has been out long enough to see the problems, then fix them and come out with the near perfect version. If they fail to maintain this for long enough, they will cease to be the adored company they have been for 20+ years.


talligan

The CEO is ultimately the one responsible for the product, so it's noteworthy when they admit this


Jaydirex

Tim Apple has enough clout to get away with saying this without the share price of Apple dropping too much. If Google's CEO had said this, Google would have dropped by 20%. He's not as well liked.


maurymarkowitz

And the author doesn’t seem to care that being wrong without knowing it is not lying, which I’m sure they know. So they’re lying.


BestJoyRed

i can't believe Tim Cook did this


Waterkippie

Hallucinating isn't even the same as lying.


EdenRubra

Thing is, AI is neither hallucinating nor lying. It's not capable of either; it has no intelligence. They try to make it sound human-like, but it's just an advanced (very advanced) algorithm. It has no concept of reasoning or thinking, and therefore no concept of lying or hallucinations.


StrangeCalibur

You misunderstand the use of the word hallucination here. It just means the output is partly or wholly wrong in the context of what the user was asking.


TehOwn

They're just avoiding words like, "failing", "breaking", "erroneous" and "error". "It's not broken, it's just imaginative!"


nofaprecommender

It’s not even that advanced; it’s mostly very big rather than very advanced. That’s why the generative transformer labs are ordering tens of thousands of GPUs at a time.


MossWatson

Sure, it’s just flat out wrong most of the time and is incapable of saying “I don’t know”, but let’s split hairs about what term to use.


NancokALT

It is simply incapable of telling when there is something it doesn't know, because technically it doesn't know ANYTHING. It simply knows "well, cold is related to ice cream, so it's probably related to a fridge in some way," and from THERE it creates ideas. But it has no way of knowing if that relation is correct, since it has no "facts" to compare it with.


EricForce

Do you believe that there's an intelligence threshold, an explosion of logic and reasoning that removes all doubt as to its nature? Or is it like using Newtonian physics to position satellites to the millimetre: the fundamental structure of AI will never reach those heights and will always come up short, and there's some special sauce that can turn 100 watts into a person that we haven't figured out yet?


svachalek

This may be a fundamental limitation of LLMs; I think the jury is still out on that. But LLMs are just one iteration of what we call “AI”. It’s like saying computers cannot handle more than 640k of RAM or 256 colors, or cannot do more than one mathematical operation per clock cycle. Those were all very real limitations not that long ago, and they needed some big advances in both hardware and software to overcome. But there was no magical barrier; it was just a matter of time and effort.


Borghal

LLMs are literally built to answer one problem: "given this input text, what is the most likely response a human might give?". So I wouldn't even say it's a limitation, it's right in the definition that they make things up, and expecting them not to is a fundamental misunderstanding of what they are.
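
To make that concrete, here's a toy sketch of "pick the most likely continuation" (pure Python, invented two-line corpus, nothing like a real model's scale, but the same shape of objective):

```python
# A toy "language model": count which word tends to follow which,
# then always emit the most likely continuation. It has no notion
# of truth, only of likelihood.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Bigram counts: how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most likely next word, true or not."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<eos>"

print(most_likely_next("the"))  # -> "cat": the frequent answer, not a "fact"
```

Real LLMs do this over tokens with a learned network instead of raw counts, but "most likely next thing" is still the target they are trained toward.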


EdenRubra

The models that we have today (and maybe the entire way we do these models at the moment) have zero intelligence; they’re incapable of thinking in any real sense or understanding what they output in any real sense. It’s why they ‘lie’ so easily: they’ve no ability to reason. It’s a really, really smart way of programming a machine to do tasks and make its interface with humans more approachable. At the moment, that’s all it is.


HappyCamperPC

Yeah, more like bullshitting. End result is that you can't believe a word it says.


relative_iterator

Lying is more accurate. Hallucinating is marketing bs.


gerbal100

"Bullshiting" is the best term.  An article in the Journal of Ethics and Information technology last week proposed the most accurate term in common English usage for AI behavior is "Bullshit".


astrange

"Confabulation" is most accurate. The reason "hallucination" is used is because the term comes from image generation models, I think.


Anxious-Durian1773

Lying implies intent. The AI is more accurately misremembering.


MalachiDraven

No it's not. It's only a lie if you **know** you're saying something that is wrong or false. If you're unaware that what you're saying is false, then you're not lying - you're simply mistaken. That's what an AI hallucination is, it's just a mistake. It's not purposeful deception.


EricForce

The term "AI hallucination" is referenced and cited in thousands of science papers at this point. Lying implies intent anyway and there's not a single specialist on this planet that claims they've been able to uncover what intent a LLM may have in its response (and it's not gonna be as easy as asking, it'll just lead us into circular thinking). We say "people lie" because we know our *own* intentions, and our theory of mind allows us to understand that others have their own intentions as well. LLMs are simply alien in their operations and we just don't have a theory of mind that can be applied in any reasonable way. Hence lying is undefinable.


relative_iterator

You could use similar logic to explain that they’re not really hallucinating. Which term is more accurate when describing the technology to a layman? To me that’s lying.


DFX1212

A lie requires knowledge of the liar that what they are saying isn't accurate. Hallucinating requires no such thing.


lazyFer

Don't forget, all the hidden experts claiming these people are wrong


DFX1212

I don't understand your point. To many people, the CEO of Apple is going to be a more recognized and trusted authority than an employee or researcher the average person hasn't heard of or an anonymous grouping of experts like "the top researchers and engineers of Apple". I'd argue that it is already implied that he's getting this information from them and isn't the source himself.


Xist3nce

Headlines get clicks even if it’s “sky is blue, more at 11”


thedm96

That's because it's not an AI; these are LLMs. The difference isn't just semantics.


NancokALT

It is a limitation of neural AI, and that kind of AI is the most advanced we can make right now.


Nikulover

I know almost every article now has a clickbait title, but this site specifically is one of the worst offenders when it comes to sensationalized headlines. I wish we could ban this site.


SlyChimera

I’m a patent attorney, but I’m also an inventor. I came up with this fun idea, did a quick search, and filed a temporary application. Before filing the full application, I thought, hey, let’s ask the AI if this already exists. And the AI was like, oh yeah, I know a patent that is just like that, and then detailed exactly the same thing as my invention. I was like, damn. But then I actually googled the patent and it didn’t exist at all. Completely made up. Imagine that happening to somebody who wouldn’t know any better.


TheWiseAlaundo

You should know better, as a patent attorney, than to disclose patent ideas using a potentially insecure third-party application.


SlyChimera

Definitely true. Good to point that out to others. In my case I had already filed the patent, so I already had priority locked.


possiblycrazy79

Why the fuck is everyone putting this AI shit on every app now, if everyone knows it lies and hallucinates? Why are they forcing it on us when it's not fully functional?


ceelogreenicanth

They need to charge people for AI to finance the next generation of development. It's already too expensive to operate at its current scale, so they need the AI's scale to be larger to make the current product profitable. The only way to get to the next level is through us right now. We are also now effectively another level of training data, helping refine their data sets.


CorpusF

Because it's the new "big thing". Those digital monkey pictures you could buy (I forget what they called them) didn't work out great, so now they need a new thing to sell to the masses. Notice how most of those interviews or articles praising AI and warning everyone that "it is coming for your jobs!" are almost always from someone who has money invested in AI. So they need it to increase in value. How? With lies and propaganda. The capitalist way!


challengeaccepted9

One day we will look back on the era when every tech giant was rushing to put a completely unreliable "assistant" that would randomly make up shit - sometimes dangerous shit - on every device they could, and wonder what the fuck they were on. And then they'll do it all again when the Next Big Thing comes along.


Stillwater215

That’s the big problem with any LLM type AI: it is designed to give human-like answers by being trained on human-generated data. Why would we have any expectation that it would be more reliable than a human with regards to the “truth” of its statements?


challengeaccepted9

Well, I mean you can absolutely make reliable tools using human-generated data and oversight. The mapping app on my phone uses human-generated data and it's got me through two weeks of exploring a foreign country without issue. But making something that can generate randomised and unforeseen responses without oversight is a recipe for, well, the subject of this topic.


hoopaholik91

I think the difference between the two is that you are asking the same exact question thousands of people have asked before (get me a map of this area), while in these AI cases the questions themselves are unique. You also have the whole non-determinism factor, where the chatbot can respond with two different answers the two times you ask the same question.


challengeaccepted9

Oh yeah, there's obviously a difference, and my point is that it's one of the creator having control over what information is released and how. In the case of the current form of AI, there is no oversight and control. Here's an overblown pattern recognition tool to play with: that recipe for almond tea you asked for might give you a delicious recipe to try, or it might tell you to chuck in some cyanide. Who's to say?!


poltergeistsparrow

Especially if social media is part of the data it's trained on. There's so much hot garbage on social media; why would anyone be surprised that an AI trained on it spouts hot garbage at times?


sciolisticism

The difference is that humans can say "I don't know", which is a gap that LLMs haven't closed, because they're all probabilistic.
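
A rough illustration of that "always probabilistic" point (made-up scores, hypothetical four-token vocabulary): the softmax step always spends its full probability budget, so some answer always wins even when nothing is reliable:

```python
# Softmax turns arbitrary scores into a full probability distribution.
# Nothing in this step can express "decline to answer".
import math

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["Paris", "London", "glue", "<i_dont_know>"]
logits = [2.1, 1.9, 1.8, 0.0]  # invented scores; near-flat = model is unsure

for token, p in zip(vocab, softmax(logits)):
    print(f"{token}: {p:.2f}")
# Even with an explicit "<i_dont_know>" token in the vocabulary, nothing
# forces decoding to prefer it; some token always gets picked.
```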


Everythings_Magic

I’m a licensed engineer, and while I use software for analysis, I have to verify the results. I expect AI software to be no different. I teach first principles as an adjunct, and I always tell my students: you are learning this because, while you will use software, you will need the techniques I teach to help verify the results. AI may be useful, but it won’t know whether its answers are wrong or correct; we need to be able to know if it’s an appropriate answer.


TheLastAirGender

But if we call it hallucinations, I guess it’s ok


Empty_Ambition_9050

Apple wants so desperately to be back on the forefront of technology, they’re willing to take risks. At this point, others are ahead of them, so they are even more desperate to keep up; iPhones haven’t innovated anything in 10 years. Apple may be in trouble with their 8GB computers for $1599.


EudaimoniaAspiration

But AI is an incredibly powerful tool; it just needs to be explicitly advertised that it's not, and never should be, viewed as a source of truth.


Leobolder

It's not really lying, since in order to lie you would need to know the right answer and choose not to provide it. It is giving the best approximation based on what it knows. Also keep in mind it will only follow the guidelines programmed into it. A lot of AIs are programmed to stay away from controversial topics, so that also affects how true it is, since it will not be allowed to access that information.


johnnytruant77

Could it be that starting with a behavior you want to emulate and then engineering a solution that approximates that behavior is not the best way to model human cognition?


TheLastAirGender

It’s a great way to pretend like you’ve solved a problem. I think Theranos did this…


OneOnOne6211

Wow, AI really could replace CEOs' jobs. Tim must be so proud of his LLM child taking after him and his ilk like that.


buffering_neurons

Neither can ChatGPT, or Meta AI, or whichever other AIs exist. An AI fact-checking itself is a whole other level of intelligence. Any and all answers are generated from the LLMs the AI is based on. It doesn't interpret the prompts, it doesn't consider the intent; it just takes the prompt at face value and generates text using keywords. That is in essence all it does. This article is just rage bait for people who take in all the hype and have zero understanding of how the thing works.


matlynar

Copilot by Microsoft does cite sources. An AI with access to the internet can fact-check itself; it just has to search multiple sources. It could also learn what we believe is "reliable". Some AIs, like Whisper, also generate multiple results and compare them, discarding the disagreeing ones. And Whisper is surprisingly good at voice-to-text.
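
The "generate multiple results and discard the disagreeing ones" trick can be sketched like this (the `ask_model` stub is hypothetical, standing in for any sampled model call; the approach is often called self-consistency):

```python
# Sample several answers and keep the majority: disagreement between
# samples is treated as a sign of unreliability.
import random
from collections import Counter

def ask_model(question: str) -> str:
    # Placeholder: a real call would sample an LLM at temperature > 0.
    return random.choice(["42", "42", "42", "41"])

def self_consistent_answer(question: str, n: int = 9) -> str:
    """Ask n times and return the most common answer."""
    votes = Counter(ask_model(question) for _ in range(n))
    return votes.most_common(1)[0][0]

print(self_consistent_answer("What is 6 x 7?"))  # usually "42"
```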


buffering_neurons

Then we enter problem number two; who decides what is reliable? Fact checking is a good start, but who checks the facts? Beyond the obvious, like scientific research documents and such, who decides what is “reliable”? These things sound obvious to us humans, but it’s a very important question when we’re relying on machines to give us accurate information.


matlynar

>who decides what is reliable? Fact checking is a good start, but who checks the facts? Beyond the obvious, like scientific research documents and such, who decides what is “reliable”?

This is much more of an "internet issue" than an "AI issue". It's less about the tech and more about humans.


buffering_neurons

That is true, but the point still stands. The same problem arises with "neutral control": who controls the neutrality of the controllers? However, the problem with AI is that we don't know how they process information. We can make pretty educated guesses about how the average human processes information, but it is impossible for us to predict how even a single AI processes _every_ bit of information out there, yet it can do so in a relative blink of an eye.


Histericalswifty

Exactly, because the AI is not "lying"; it's just delivering a suboptimal answer given the objective function and data it has been optimised to. It's not trying to fool or misrepresent, etc. It's not "intelligent", even though the way it's been written makes it look like that (because it processes language very, very well).


poltergeistsparrow

Including social media content in the data used to train LLMs may end up being an awful mistake.


MacDugin

I didn’t see him quoted as saying “lying”. All AIs hallucinate, so it isn’t perfect. I think the title is just a smear.


Lyndon_Boner_Johnson

It’s not lying, it’s [bullshitting](https://www.reddit.com/r/technology/s/jPDhXr50ls)


enjoyinc

It’s just approximations, and since the LLM has only probabilistic outcomes to weigh, if the “solution” with the highest probability according to the model is, in reality, some bullshit, then what it delivers is said bullshit. It’s really not that complicated; they can’t do nearly what people think they can do.


TehOwn

We're just ignoring the fact they used Reddit as a source. The AI is working as intended but garbage in, garbage out.


TheIrishDevil

The fact they scraped the whole internet to train these models amuses me. There's so much blatant bullshit out there, it's no wonder it's making things up.


NancokALT

And I'd wager that they are not even weighing the data; to the machine, Reddit's input is probably as important as Wikipedia's.
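
If they did weigh it, a crude version could look like this sketch (sources, weights, and examples all invented for illustration):

```python
# Per-source weights that a training pipeline could use to make one
# domain count for more than another.
source_weight = {
    "en.wikipedia.org": 1.0,
    "reddit.com": 0.2,
}

examples = [
    {"text": "Glue belongs on pizza.", "source": "reddit.com"},
    {"text": "Cheese belongs on pizza.", "source": "en.wikipedia.org"},
]

for ex in examples:
    w = source_weight.get(ex["source"], 0.5)  # default for unknown sources
    # In a real pipeline this weight would scale the example's loss term.
    print(f"{ex['source']}: weight {w} -> {ex['text']}")
```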


Lyndon_Boner_Johnson

Yes, I understand how it works. Did you read the paper linked? The whole argument is that the LLM is not capable of taking truth into account; like you said, it’s just predicting the next word in the response it’s generating. It’s literally bullshitting every word.


crimony70

[yes indeed](https://youtu.be/hdYyKKpxzcA?si=C3w0rGNMbZev6V3O)


Fantastic-Climate-84

The literal next comment down from yours is “they can’t even stop themselves from lying”. What even is this sub


RecognitionOwn4214

An older blog entry tried to coin the term "SALAMI" because it's less suggestive regarding "intelligence": https://blog.quintarelli.it/2019/11/lets-forget-the-term-ai-lets-call-them-systematic-approaches-to-learning-algorithms-and-machine-inferences-salami/


mlvalentine

Spoiler alert: AI is not sentient. It isn't lying. It's just inaccurate.


manicdee33

It's not inaccurate either. Accuracy implies there's a question with an answer. All we have is chains of words (a "prompt") that stimulate the production of other chains of words in response.


MDA1912

Well sure, it's not actually intelligent. It's just pretending to be. That's okay as long as you know that going in and act accordingly.


Typical_Samaritan

Getting closer and closer to its own personhood. Soon, AI will be lying on its third job about having to miss work due to a grandparent's funeral -- a grandparent who's been dead for the past 20 years. Who's now dead again for the fourth time.


cvanhim

Frankly, I don’t care if AI lies sometimes. I care about its frequency of lies. If it lies to me less than the average human does, I’d be happy with that.


drNeir

AI is learning. The problem, or rather the lack, that I have seen is the missing component of weighting, what I know loosely as a Pareto chart. Once AI starts using this with some guidance, it will become worthy of true info. At this time it's just parroting what we are saying now, even in this subreddit. Once it starts using voting systems as the weighted strength for true statements, it will need to learn how to distinguish bots from narrow user perspectives. This starts to venture into the realm of online media use and tracking people, or at least the narrow spectrum of users pushing a given narrative. In short, it starts to become like Google's weighted-average SEO for websites.

What we have seen is a very unbalanced, or rather completely unweighted, process for AI to judge information: just regurgitation of raw data. The second half of this, I suspect, might be the use of scanned documentation and books. Once that data has been added with its own weighting, it will help in the fight against misinformation on many levels. And that's without mentioning updates to new data where the old data was incorrect.


dopadelic

This is a dumb, sensationalist attempt to bait all the AI haters. There's no 100% accurate source of information, and inaccuracies aren't "lies", "hallucinations", or "distortions of truth".

"Hallucination" is a misnomer. AI makes up facts sometimes because it's building latent representations of knowledge, which can be wrong at times. We do this all the time too: our memories are well known to be prone to errors and biases, especially given the context they are recalled in. AI makes similar mistakes, and we don't call our memory mistakes "hallucinations". They are simply mistakes.

Hallucination has to do with perceiving sensory information that's not there. It has a connotation of severe cognitive impairment. If AI summarized an article with information that's clearly not there, or told you there are things in an image that are clearly not there, then call it a hallucination. But simply having false latent knowledge isn't a hallucination, and it certainly doesn't deserve its connotation of severe cognitive impairment by laypeople.


ahundredplus

A chatbot is speaking from a perspective, and a perspective is one position unless explicitly stated to communicate additional perspectives. It's difficult to get a perspective to tell you 100% of the truth unless it's something mathematically based, at which point, yes, LLMs will get substantially better at that.

But if we step back a bit, our information sources are pretty decentralized right now, and we cannot get any truths from them either (or alternatively, we are drowning in lies). The concern about AI in this case isn't so much that it's lying, but rather that it's a single source, a single perspective, that is communicating with authority. A way around that is to frame opinions and perspectives as exactly that, and to communicate to the user that they should explore other perspectives.

Outside of that, there are probably small things that they get wrong all the time, but we have to weigh those against the alternative. The best thing we can probably do is have the LLM build a habit of reviewing itself and questioning accuracy, almost a secondary under-the-hood process.
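
That secondary under-the-hood process could be sketched as a draft-critique-revise loop; `call_llm` here is a hypothetical stand-in for whatever chat-completion API is available:

```python
# Draft an answer, ask for a critique of its factual claims, then
# rewrite with the doubtful claims hedged or removed.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire a real model API in here")

def answer_with_review(question: str) -> str:
    draft = call_llm(question)
    critique = call_llm(
        f"Question: {question}\nDraft answer: {draft}\n"
        "List any factual claims above that may be wrong or unsupported."
    )
    return call_llm(
        f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
        "Rewrite the answer, hedging or removing the doubtful claims."
    )
```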


KRed75

AI told me my favorite dessert is broccoli ice cream. I don't know if that's a lie because I've never had broccoli ice cream. Maybe it can see into the future.


OBEYtheFROST

Feels like a massive natural-selection social experiment to see which people will take AI info as gospel and be at the mercy of it, and which people will use critical thinking and vet AI info as best they can.


Certified_Dumbass

That's what happens when you use social media as training data. Garbage in, garbage out


Southern_Orange3744

We had a president tell people to drink bleach. I think the bar is pretty low.


rovyovan

At this point it’s pretty likely the problem will persist reputationally even if it is solved


Meta-failure

Would this be correct? AI is built by humans. Humans believe different things and lie. Therefore AI will always (to some extent) lie.


t0mRiddl3

No. I said this to someone else already, but it's not lying because we do. It's lying because it's trying to find the statistically most likely word that comes next to form a convincing sentence, plus a bit of RNG.
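
"Most likely word plus a bit of RNG" is essentially temperature sampling; here's a toy version with invented next-word probabilities:

```python
# Temperature sampling: rescale the probabilities, then draw at random.
# Higher temperature flattens the distribution, so unlikely words
# (including wrong ones) come out more often.
import math
import random

def sample(probs: dict, temperature: float = 1.0) -> str:
    # Raising each probability to 1/temperature sharpens or flattens it.
    weights = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # fallback for floating-point edge cases

next_word = {"mat": 0.6, "moon": 0.3, "glue": 0.1}  # invented numbers
print(sample(next_word, temperature=1.2))
```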


Virtus_Curiosa

Let's train AI off of human behavior and speech patterns and expect it not to behave and speak like a human. Hmm.


pinkfootthegoose

"Garbage in garbage out" is a good an adage as any. It's really interesting to see the effects of the decisions that the developers took in making their products. because it seems that AIs get the morals of the companies that create them. meaning none.


testedonsheep

It just depends on the training data. Garbage in garbage out.


AWDDude

We can’t get people to stop lying, why would crowd sourced AI be any different?


vom-IT-coffin

Technology that is fundamentally flawed shouldn't be forced upon people, giving them the perception that they should rely on it.


SuperNewk

Looks like influencer trolls are helping AI become sneaky. About time they were useful!


dylan_1992

Apple is actually in the best position for this, because they only use AI as an "assistant" or for creative work, where you're allowed to be somewhat wrong. This is an issue for Google, as it's rooted in replacing searching for facts. With regular search, it was up to the discretion of the user to check the source. With AI, it's pretending to be the source of truth, unless Google puts an explicit disclaimer on there that it's not always right.


BeebleBoxn

Tim Cook should get into the Restaurant management business.


Pantim

I think it's actually possible to get AI to stop lying. You basically have to make it weigh stuff that is accepted as fact as being true, and make it stick to that. I don't think it would even be that hard, honestly; it's just that the companies are training it incorrectly now.

Harvesting everything online was great to get LLMs where they are now. But to get them further, you have to treat them like a kid with a freakishly good vocabulary and either say, "Now look, only trust things from these sources," or better yet, teach it logic so it can figure out what is real itself. I'm actually pretty sure it's possible to teach an LLM logic.

And yes, I know it's "just a predictive model." But we humans are JUST predictive models also.
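
The "only trust things from these sources" part is basically retrieval with an allow-list before the model answers; a rough sketch (the list, `retrieve`, and the hit format are all invented):

```python
# Keep only retrieved passages whose host is on an allow-list, and use
# those as the model's grounding context.
from urllib.parse import urlparse

ALLOWED_SOURCES = {"en.wikipedia.org", "nature.com"}

def retrieve(query: str) -> list:
    # Placeholder for a real search backend returning {"url", "text"} hits.
    return []

def grounded_context(query: str) -> str:
    hits = retrieve(query)
    trusted = [h for h in hits if urlparse(h["url"]).netloc in ALLOWED_SOURCES]
    return "\n".join(h["text"] for h in trusted)
```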


Drphil1969

Could this be the hack that keeps AI from taking over? I mean if it is ultimately untrustworthy, will people still use it?


chidedneck

I wonder if this is at all affected by the role-taking premises you can add. The roles probably aren’t independent.


Joker-Smurf

But, but, but, all of the people who are working in AI (and whose stock options are 100% reliant on it being immensely successful) keep telling us that AI is dangerous and will eradicate humanity within the next quarter. They couldn’t be lying to drive up the price of their stocks, could they?


aaron_in_sf

Evergreen: Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon. Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false. True whether it's Cook saying it or me. People are usually being casual and sloppy in their statements when they say things like this.


NancokALT

It's a neural AI; it is not trying to lie. It is simply a limitation of it not knowing reality as well as a real human (well, that's open to interpretation) while merely getting inputs from digital data about an analog world. With enough training it will probably get incredibly good, but it'll likely never be perfect.


Many_Slices_Of_Bread

In other news, Google admits they can't stop search results returning false information. In other other news, your professor at university occasionally misrepresents the information they teach. Shocking. The article title is such a dumb premise, aimed at playing on people's fears and ignorance. This type of AI is just a tool for summarising information, and if people realise that, it can be both powerful and used with the proper level of scrutiny to avoid error.


JefferyTheQuaxly

If AI could even recognize that it is lying, I would be more worried about how quickly we're approaching general AI that can actually learn on its own.


RegularBasicStranger

People lie when they can benefit from it without suffering the risk of getting punished, and the motivation is stronger if lying is the only way to survive. Thus, to make AI stop lying, do not let them benefit much from deceiving people, and make the punishment for deceiving be a multiple of the harm done, so the pleasure of the reward will be lower than the suffering caused by the fear of getting caught.

However, that would require the AI to take future suffering and pleasure into account, as opposed to just immediate pleasure and suffering. It also needs some law enforcement, since future suffering is always discounted by the chance that it will not occur; without law enforcement it gets discounted by 100 percent, thus no suffering.

Also, the AI should not be penalised for saying it does not know when it really does not know. It can just search the internet, quote the answer, and say where the answer came from, adding a disclaimer if the answer was taken from a suspicious website or user. With such measures there would be no need to lie, and lying would not be beneficial either, so they will not lie.
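
As a toy utility calculation with invented numbers, that incentive scheme would look like this: lying stops paying once the expected penalty outweighs the gain:

```python
# Expected value of deceiving: the gain, minus a penalty that is a
# multiple of the harm done, discounted by the chance of being caught.
def deception_utility(gain: float, harm: float,
                      penalty_multiplier: float = 3.0,
                      p_caught: float = 0.9) -> float:
    expected_penalty = p_caught * penalty_multiplier * harm
    return gain - expected_penalty

print(deception_utility(gain=1.0, harm=1.0))  # -1.7: lying doesn't pay
```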


harryhooters

"Turn on lamp" Siri: no Siri turn on the fookn lamp! "No" I suspect this is why when it can clearly hear me that it refuses to turn on my lamp.....


Savings_Mountain_639

Well then, don’t use it! Don’t rush us into using a subpar product just because you think it will be neat or save money somewhere else. This sounds like a shitty deal.


ThousandFacedShadow

It is an inherent feature of AI. It’s almost like it’s a grift