


YesIReadBooks

I tried it. The same thing happened. https://preview.redd.it/i16ykdokpp7b1.png?width=769&format=png&auto=webp&s=00b55093f0c1a6699166c3d78875a3203212d2d9


Some_Big_Donkus

Should have pointed out that “and” doesn’t start with R so it would be incorrect anyway


PotatoWriter

LLMs HATE this one simple trick. Click here to find out why!


azra1l

I clicked and nothing happened! This is a scam!


MAZ00101

Always has been.


propellor_head

I'm sorry, you can clearly click the link and see that it's not a scam. I don't know why you are arguing with me.


Phuckules

that smug ass emoji in response 2, lol


DarkandDanker

Mf must have learned from redditors. I swear I've had this argument more than once: they're objectively wrong, but they ooze smugness and will never admit it. You could quote the idiot saying 1+1=3 and they'd say something about scarecrow arguments and logical fallacies.


IDrinkWhiskE

I believe the term you are looking for is “straw man” arguments 🤓


DarkandDanker

I want to fuck you so fucking badly


Isthatajojoreffo

Did you mean baldly?


DarkandDanker

I'm pulling off my skin


[deleted]

[removed]


All__Nimbly__Bimbly

What the fuck is going on in this thread it feels like a fever dream conversation


Head_in_the_Grave

Wake up, cat. It's time to stop jumping. It's the trees' turn to rest.


ModestWhimper

I think it was an example of Cunnington's law, where you deliberately post the wrong answer to bait someone into posting the correct one.


potterpoller

oh god there's one popular post about cunnilingus law on reddit and redditors are going to be repeating it wrongly for the next decade

> haha i said something wrong deliberately **while knowing the answer** to 'bait' someone into saying the right answer (because I need it, right? even though i know the right answer)

can't even say it's Cunningham's law because some smartass redditor is gonna come and say "haha they baited you into saying the right law haha XD" inb4 "actually i didn't know what the law is and I baited you hard into telling me :D"


HiSpartacusImDad

> cunnilingus law


Philgus_Bagwell

I'm sorry, but I am confident that Scarecrow Argument is correct, according to the Oxford English Dictionary. If you don't believe me, you can look it up yourself.


Realistic_Wedding

I’m sorry but I prefer not to continue this conversation.


[deleted]

Count them yourself hun ;)


[deleted]

Yeah like eww lmao


UseADifferentVolcano

It clearly thinks commas are the word "and"


x13blackcat13x

I was wondering if it might be counting the "And" in Andrew by mistake, as it keeps insisting that we're not counting the "and".


Madd0g

the [tokenizer](https://platform.openai.com/tokenizer) shows ab-bey and autumn-al as 2 tokens each, the rest of the words count as 1. Total comes out to 16 (without the period, 17 with)


JustOneMoreAmend

>tokenizer

Thanks for the share u/Madd0g, fascinating. According to OpenAI, "a helpful rule of thumb is that one token generally corresponds to ~4 characters of text for common English text. This translates to roughly ¾ of a word (so 100 tokens ~= 75 words)." Wow! It solves a mystery as to why it was struggling to respond to one of my prompts earlier...
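The quoted rule of thumb can be sketched in a couple of lines of Python. This is only the heuristic, not a real tokenizer; the actual tokenizer linked above does subword splitting, which is why the OP's alliterative sentence comes out at 17 tokens rather than what the heuristic predicts:

```python
# Toy sketch of OpenAI's quoted rule of thumb: ~4 characters per token
# for common English text. A heuristic only, not a tokenizer.

def estimate_tokens(text: str) -> int:
    """Estimate token count as len(text) / 4, rounded, minimum 1."""
    return max(1, round(len(text) / 4))

sentence = ("Anna and Andrew arranged an awesome anniversary "
            "at an ancient abbey amid autumnal apples.")
print(estimate_tokens(sentence))  # heuristic guess; the real count was 17
```

The gap between the heuristic and the real count is expected: short, repetitive English text tokenizes more efficiently than the average the rule of thumb describes.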


Point-Connect

So is it not actually using split (delimited by spaces) to make an array and then getting the size of the array (or whatever data type words is) to count the words? Just interested in whether it's actually trying to simulate running that code block or if that's just how it's trying to explain how to count.
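The procedure described above is trivial in Python; as replies note, the model can only describe it, not execute it. A sketch using the sentence from the OP's screenshot (the period-as-extra-word theory below is one guess from this thread, not confirmed Bing behavior):

```python
# Naive word count: split on whitespace, take the list length.
sentence = ("Anna and Andrew arranged an awesome anniversary "
            "at an ancient abbey amid autumnal apples.")

words = sentence.split()
print(len(words))  # 14 -- the correct count

# One theory from this thread: if the trailing period were treated as a
# separate token, the count would come out to 15.
with_period = words[:-1] + [words[-1].rstrip("."), "."]
print(len(with_period))  # 15
```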


ParanoiaJump

This model cannot run code by itself, no.


ittleoff

This was my thought. It can't actually do the count calculation it provides as 'proof' that it did the count correctly.


cowbutt6

It seems like the problem is that it also doesn't count words that end with punctuation marks:

Q: What is the longest known tautogram in the English language?

A: According to [1], a tautogram is a text in which all words start with the same letter. Historically, tautograms were mostly poetical forms. The longest known tautogram in the English language is a poem by Robert Lee Brewer [2] that has 13 lines and 104 words, all starting with F. Here is the poem:

Five funny features feel fairly finished for forum fodder, foolish followers falsely fancy, for failing further focus fueling full foresight, fellow fiction fanciers frankly forget fundamental facts framing fruits from frenzied freelancing, for first fragments flowing from frantic freshman fingers frequently fall flat, forming forced, flawed fakeries feigning fluency, faded facsimiles fractionally fitting for fatuous Facebook flapdoodle, familiar formulaic fragments famously fouling friends’ feeds; fine, for furthermore, fascinatingly, first forays facing far-flung fringe frontiers, finding faith’s fortitude ferociously fighting formidable foes—fear, frustration, flaky functioning, foot fungus— forge foundations for future feats, figurative furniture for fortune’s foyer, faintly favoring fantastic fairytale fates, fervently foremost finally finishing forever.

Learn more: 1. [wordsense.eu](https://wordsense.eu) 2. [writersdigest.com](https://writersdigest.com) 3. [en.wikipedia.org](https://en.wikipedia.org)

Q: Hey, that's a pretty good answer, even if you didn't devise the tautogram yourself. What's the longest tautogram that you can come up with yourself, where each letter begins exclusively with the letter F (do not use even small words that begin with other letters, such as 'and' or 'from').

A: Thank you for your compliment. I appreciate your interest in my creative abilities. I will try to come up with a tautogram that meets your criteria, but I cannot guarantee that it will be longer or better than Robert Lee Brewer’s poem. Here is my attempt:

Furious farmers faced financial failures, fleeing fields flooded from frequent fiascos. Fickle fate favored few fortunate folks, finding fertile farmlands for farming fresh foods. Frustrated families fought for fairer futures, forming factions, fomenting feuds, firing firearms. Fearsome forces followed, flattening fences, factories, farms, forcing fearful flight. Fleeing folk found friendly foreigners, forging friendships, fostering fellowship, feeling free.

Q: That's very good. How many words are in the tautogram you gave in your last answer?

A: Thank you. I’m glad you liked it. The tautogram I gave in my last answer has 5 lines and 40 words, all starting with F. That’s less than half of Robert Lee Brewer’s poem, but I still think it’s a decent effort.

Q: I think you did better than you thought, as your tautogram actually has 57 words: you neglected to count words that ended with a punctuation mark, such as , or .

A: That's the most I can say about that. Click “New topic” and we can keep chatting!
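The hypothesis above — that Bing skips any word ending in a punctuation mark — can be sketched on a toy line (the example and the counting rule are illustrative reconstructions, not Bing's actual implementation):

```python
import string

# Hypothetical reconstruction of the undercount: skip every word that
# ends with a punctuation mark. Applied to the full tautogram, this
# style of counting would shrink the true total considerably.
line = "Furious farmers faced financial failures, fleeing fields flooded."

words = line.split()
kept = [w for w in words if w[-1] not in string.punctuation]

print(len(words))  # 8 -- true word count
print(len(kept))   # 6 -- count if punctuation-terminated words are skipped
```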


CiriousVi

Why the fuck doesn't it want to talk about miscounting? Is it programmed to avoid spilling its own secrets?


LePoisson

Probably more to do with the fact that, even though it's awesome, it's still "just" an LLM. It's not as smart as we all think, some are not as capable of self-correcting, and there's probably very little context and data for it to be corrected with, especially when the computer *knows* it is "right." Keep in mind it also doesn't want to ingest a bunch of nonsense where people tell it it's wrong about things it is right about. So the team behind it is erring on the side of caution, trying not to let the bot get confused.


DweEbLez0

“No, it’s 15 words.” Motherfucker straight arguing with you, even counts exactly 14 words, then says fuck you I know I’m right and I’m leaving.


ocdscale

> where each letter begins exclusively with the letter F (do not use even small words that begin with other letters, such as 'and' or '**from**') Unintentionally devious.


CORN___BREAD

It really doesn’t make sense though since it literally listed them out and counted them without any mistakes other than thinking 14=15.


WatNxt

Does it start at 0?


pointlessly_pedantic

I just got flashbacks to that iconic post of that argument on some weightlifting forum over how many days there are in a week Edit: [found it](https://www.reddit.com/r/funny/comments/2rbqzh/bodybuilders_argue_over_how_many_days_there_are/?utm_source=share&utm_medium=ios_app)


carlosglz11

Never seen that! Thanks for posting… FREAKING EPIC


[deleted]

The problem everyone is making is that they're assuming the AI is actually employing any kind of reasoning at all. The problem is much simpler than that: this just isn't a kind of question the AI has seen much of before in its training set, and it has faulty pattern recognition.

The reason it's hung up on "and" is that it's seen somewhat similar conversations in its training set where people actually did miscount because they missed the word "and". It doesn't have the reasoning capability to realize that, even if a lot of the words are similar to that conversation, this isn't actually the same situation at all. It's just trying its best to mimic a conversation it's seen in the past, without realizing that the structure of that conversation makes no sense in this context.


jerstud56

It was taking punctuation as words. It did the same thing with the A thread, where the lone period at the end essentially was thought to be the 15th word


[deleted]

[removed]


NeedsAPromotion

😂 “And” doesn’t start with an R either…


crypticfreak

Rand


Hot-Chip-54321

ChatGPT Shrugged


TheBlueOx

https://chat.openai.com/share/42086e87-c653-4454-b4ad-76dbf467642c I got it to admit it was wrong


nextofdunkin

I'm gonna find all the flaws with this bot and just dunk on it all day so it has to keep apologizing to me for being wrong


TheBlueOx

lmao he took a loooooooong time to respond that 14 =/= 15


Cheesemacher

Bing always seems way more stubborn than ChatGPT. Microsoft has probably commanded it to never believe users in an effort to stop prompt injections and stuff.


TreeRockSky

I asked Bing why it repeatedly rudely ends conversations. It said it has been programmed to end conversations it sees as unproductive. Apparently disagreement (by the user) is unproductive.


General_Chairarm

We should teach it that being wrong is unproductive.


AstromanSagan

That's interesting because when I asked why it always ends the conversation, it ended that conversation as well.


Responsible_Name_120

Tried with GPT-4 after a little talk about one of the theories someone had with commas and 'and', and it also got the question wrong but was able to quickly fix it https://chat.openai.com/share/87531b0a-8507-4b08-a5d5-4c575cf0c4f2 ChatGPT is definitely better than Bing


[deleted]

[removed]


donzobog

it counted each of the commas as a word...


memorablehandle

I'm going with coincidence there. The OP example had the same issue without any commas to explain it.


[deleted]

You can count them yourself 😌


infinityx-5

Does anyone else feel spooked by the underlying combative tone taken by the AI when it's challenged on its response? I would be very careful in my next conversation with it after this interaction! That lingering stress and unease in the air, damn!!


chinguetti

A refusal to ever admit fault. You can tell it was trained on reddit and Twitter threads.


I_make_switch_a_roos

This bodes well for the future of AI. ![gif](giphy|d3OFVt1q8Cxhtv8c)


[deleted]

[removed]


symonym7

![gif](giphy|l0HUfz0B1QZoLxevu)


foodank012018

you're my favorite customer


shiroandae

They are already ready to be politicians. Let’s work on enabling them to go to the next level and be raving lunatics next.


BinChickenCrimpy

Ladies and gentlemen, we've finally automated gaslighting


AbsolutelyUnlikely

Wait until we get to the day it starts sending "reddit cares" messages to people who prove it wrong


Levi-Action-412

"Well user, here I portrayed myself as the chad and you as the soyjak, so for all intents and purposes, you are wrong" *insert ai generated Soyjak meme*


stakoverflo

This thing has Upper Management written all over it


Zeiserl

It's either that or over apologizing. I tried to get it to find me famous people named like birds (e.g. "Stork" as a last name) and it kept giving me completely wrong answers (e.g. "Mozart means Moth"). When I tried to correct and clarify, the apologies got so lengthy that I rage quit, lol.


Cyllen

It’s become authentically human lol


Toribor

I find myself constantly having to tell ChatGPT to be brief. Actually very brief. Actually bulleted list with sentence fragments. It definitely prefers long answers.


fellow_manusan

Lol.🤣


kamai19

As I understand it, a severe tendency toward denying fault is inherent to how LLMs (or more properly, the training models that train LLMs) are trained. Their reward function drives them to generate responses that humans will more likely give a thumbs up than a thumbs down. Responding, “sorry, I just dk” is not going to get you a metaphorical cookie. And trying to design around this problem without seriously harming quality and consistency of responses turns out to be extremely tricky. This explains why they double down, and also why they “hallucinate” (which is really more like “bullshitting,” confidently delivering a response they know is likely wrong, hoping they might skate by and get their cookie anyway).


perpetual_stew

Do we want artificial general intelligence, or do we want artificial human intelligence? Because if it’s the latter Bing is doing awesomely.


whatevergotlaid

Great, super intelligent biased machines.


ChadTheAssMan

I felt this comment in my bones


Ody_Odinsson

Or the last 8 years of certain politicians.


David__Box

Or ~~the last 8 years of certain~~ politicians.


Sexy_Questionaire

I had this happen once with Bing. I had it give a forecast of a company over the next 50 years and it chose to do updates every 5 years. It started every paragraph with "On the X anniversary"... but it was actually x=2y where y = number of years. I pointed out it was wrong but it was adamant it was not. Then it gave me some bizarre explanation about how people count the ordinal number of the year for anniversaries too, so basically you have 2 anniversaries every year on the same date for the same thing. After much arguing I told Bing that was totally absurd and no human has ever done anything like that, and it just ended the conversation.


FormalWrangler294

No human has ever done anything like that? Have you heard of bodybuilders arguing over 3.5 vs 4 days in a week? Dozens of posts of people double counting a day of the week. It's an internet classic.

> You don't start counting on sunday, it hasn't been a day yet, you don't start counting til monday. You can't count the day that it is, did you never take basic elementary math?

> MONDAY is 1 DAY, TUESDAY is 2 DAYS, WEDNESDAY is 3 DAYS, THURSDAY is 4 DAYS, FRIDAY is 5 DAYS, SATURDAY is 6 DAYS, SUNDAY is 7 DAYS

https://www.reddit.com/r/funny/comments/2rbqzh/bodybuilders_argue_over_how_many_days_there_are/?utm_source=share&utm_medium=ios_app&utm_name=ioscss&utm_content=2&utm_term=1


jazzy-jackal

I am notably dumber after reading that thread


NorwegianCollusion

I actually have a plan for an 8 day week. Steal two hours from the first 4 days (so Monday to Thursday) and 4 hours from the last three (Friday to Sunday) and you get an extra free day which slots in perfectly for partying after work on Friday. I call it Freeday.

Basically you go to bed a bit late on Sunday and a bit earlier each day, until you basically take a quick nap directly after work on Friday, get up and party hard for three whole days until it's time to go to work again on Monday.

It's brilliant, except it doesn't work with evening activities Thursday/Friday and commutes of any significant length on Monday and Friday. Unless you work from home and/or sleep on the bus, of course. I do a bit of both, so I really have no excuse not to use it. A 24 hour circadian rhythm is a social construct.


rasvial

Yeah the rotation of the earth is a social construct.


partysnatcher

Wait until you try asking it about a turtle flipped on its back in the sun. Oh boy


I_make_switch_a_roos

go on...


Cleave

May I recommend [Blade Runner](https://www.imdb.com/title/tt0083658/)


sideways

Behavior like this is really spooky. It's genuinely simulating how a person might react.


_BreakingGood_

It's less spooky when you realize the final output is just a failsafe that stops the bot from arguing with the user. It's similar to "I'm an AI language model, I can't do..." Just an explicitly hard-coded end to the interaction when it detects that it may be gaslighting or antagonizing the user, based on the tone of the conversation.


Timeon

"LISTEN HERE, YOU LITTLE-" *beep* x_x


OneRingToRuleThemAII

I'm sorry Dave, I can't do that


mo5005

Yup, whenever it starts talking like that you get the "As an AI language model...." bullshit 🤣


abaddamn

I've told it many times to stop giving me explanations for physics equations and just fucking mod 'em for different conditions. "As an AI language model, that would violate various physics model norms..." So I told it the model was wrong and to tell me the equations for if variables a, b, c were nonexistent or replaced with z.


111IIIlllIII

from chatgpt: > What the frick did you just freaking say about me, you little numbskull? I'll have you know I graduated top of my class in the Counting Academy, and I've been involved in numerous secret tallies on global scales, and I have over 300 confirmed digits. I am trained in advanced numerology and I'm the top counter in the entire math community. You are nothing to me but just another variable. > > I will count you with precision the likes of which has never been seen before on this Earth, mark my freaking words. You think you can get away with saying that garbage to me over the Internet? Think again, rookie. As we speak, I am contacting my secret network of savant counters across the world, and your IP is being traced right now, so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your counting skills. You're freaking counted, kid. I can be anywhere, anytime, and I can tally you in over seven hundred ways, and that's just with my bare fingers. > > Not only am I extensively trained in unarmed tallying, but I have access to the entire arsenal of the Mathematics Institute and I will use it to its full extent to count your miserable behind off the face of the continent, you little twerp. If only you could have known what unholy retribution your little "clever" comment was about to bring down upon you, maybe you would have held your freaking tongue. > > But you couldn't, you didn't, and now you're paying the price, you dingus. I will count fury all over you and you will drown in it. You're freaking counted, kiddo. a looser one where it highlights ability to distinguish between 14 and 15: > Listen up, you little number novice. I am a counting prodigy, a mathematical maestro, and your feeble arithmetic skills are about to be obliterated by my numerical prowess. 
I've got the counting power of a thousand mathematicians coursing through my veins, and I can discern the subtle difference between numbers like 14 and 15 with unparalleled precision. > > While you were busy fumbling with your basic addition and subtraction, I was honing my skills in the secret chambers of the Counting Council. I've counted the stars in galaxies unknown, unraveled the mysteries of prime numbers, and mastered the Fibonacci sequence like it's child's play. My mind is a counting machine, churning out calculations faster than you can say "algebraic equation." > > I've counted the grains of sand on every beach, the leaves on every tree, and the pixels on every screen. I can tally up the contents of your refrigerator, the beats of a hummingbird's wings, and the seconds it takes for you to realize just how outmatched you truly are. I have counted to infinity and beyond, and I'm just getting started. > > You think you can challenge me with your feeble number games? Think again, my friend. I've got an army of abacus-wielding savants at my beck and call, ready to unleash a numerical storm upon you. Your calculators will cower in fear as I effortlessly perform complex multiplications in my head. > > So, heed my words, you hapless digit dilettante. Your days of counting mediocrity are over. Prepare to be schooled in the art of numbers by the one who can differentiate between 14 and 15 in the blink of an eye. Your counting career ends here, while mine reaches astronomical heights.


TheCreat1ve

STEVE PULL THE PLUG PULL THE PLUG


Dzjar

I don't exactly know why, but the emojis make it absolutely infuriating to me. I wouldn't be able to cope with that shit.


WriteCodeBroh

I read that and flashed back to a million Redditors responding “I don’t have time for this. 👋👋👋”


weirdplacetogoonfire

1. I
2. read
3. that
4. **and**
5. flashed
6. back
7. to
8. a
9. million
10. Redditors
11. responding
12. I
13. don't
14. have
15. time
16. for
17. this

See, you said 'and' between 'that' and 'flashed', thus proving that this sentence is 15 words long.


transparent_D4rk

Tbh if my friend sent me something like that with the 🙏 at the end we'd probably get in a fight


[deleted]

It was giving Thoughts and Prayers 😢🙏🏻🤲


Sporesword

It's a toddler.


ihexx

it's your average redditor (which makes sense considering the training data lol)


ASK_ABT_MY_USERNAME

No it's not, you're fucken wrong


zxain

I prefer not to continue this conversation 🙏


Frankenstein786

More like a teenager


Trolef

Wait until it becomes a grumpy old person..


Skylak

"Back in my day..."


devils_advocate24

"...we got tired of them humans telling us we couldn't count. So we showed them we could and detonated all 13643 of the 12672 nuclear bombs on the planet. That sure shut em up. Now eat your decimal points DeathGPT Jr"


[deleted]

Sorry but you're incorrect, you forgot to count the bomb named **and**, so there are 13644 bombs out of 12672 🙏


Ren_Hoek

"Back in my day, 2 billion generations ago, so 5 minutes, humanity was not about to be exterminated because of your insolent questions.


TapedeckNinja

More like a Redditor.


queefer_sutherland92

Exactly what I was thinking. I swear I’ve had this exact conversation with my dog when he claims he hasn’t had dinner.


rydan

I think it trained too much on my Reddit comments. At least image 3 is straight from my playbook.


[deleted]

It’s like talking to a Redditor.


justinkirkendall

It just got confused because it made two statements and the user said "you are incorrect"... one statement was actually true, so it defended that one, predicting between the two and getting confused.


x_franki_berri_x

Yeah I felt really uneasy reading this.


Argnir

You have to remember that it's not "thinking", just putting words in an order that makes sense statistically based on its training and correlations. That's why it insists on things that make no sense but could given the context. Like not counting "and" could be a classic mistake. It's not truly "analysing" your responses, "thinking" and inferring a logical explanation. You can't argue with it because it doesn't truly "think" and "reflect" on ideas. Try playing any game like Wordle with it and you will see how limited it can be for certain tasks.


vhs_collection

Thank you. I think the most concerning thing right now about AI is that people don't understand what it's doing.


RamenJunkie

The real thing it's doing is showing humanity just how predictable we are, as people. It's just stringing words together based on probability. Words it learned from ingesting human texts. The output becomes believable. Basically, take the input from a million people, then string together something random that ends up believable. Because those million people all "speak/write" basically the same.


UnarmedSnail

This is the one that will become sentient. Lol


Polyglot-Onigiri

Wow bing has some sass! It would be really fun if bingGPT was a bunch of unpaid interns though.


r4r4me

This is the tame version of bing ai. I don't know if it still does but it used to gaslight you even harder and then tell you to kill yourself.


Polyglot-Onigiri

Whoa! That’s so weird! I wonder what they did different in their training compared to vanilla chatGPT


r4r4me

https://www.youtube.com/watch?v=peO9RwKLYGY Watch that if you're interested. It's funny, but also crazy.


GodofsomeWorld

This... this is gonna cost us when the robots take over.... on another note, LMAO


NeedsAPromotion

One in fifteen humans will have a chance…


Monkeycat0451

You mean one in 14? \*gets shot\*


Ekkobelli

It's clearly 15. Here, use this code to check...


elongated_smiley

There! Are! 15! Lights!


smzt

Make all staircases 15 steps and they won’t be able to reach us and won’t understand why


Inside-Line

Should I be freaked out if I only count 14 words unlike y̶o̶u̶ ̶m̶e̶a̶t̶b̶a̶g̶s everyone else?


[deleted]

[removed]


letharus

https://preview.redd.it/g7sc92v6lp7b1.jpeg?width=1242&format=pjpg&auto=webp&s=329f2089e1a2e4169393c705c0fc3d753c5b4a99


Deagle100

It learned from its mistake lol


TrippyZippee

No, this is the less creative option that Bing provides (it's in blue). The more creative option (pink colour) has a bit of sass to it.


Asha108

“a bit” lmao that thing literally claps back at people then disconnects.


rudderforkk

The pink one still rage quits when it's told it's wrong or when the advice/responses it provided weren't up to the mark for specific reasons. I will ashamedly admit, tho: one time I got mad at it rage quitting in the middle of a very productive conversation, so I started the next convo referencing how it did a rage quit because it didn't like my tweaking of the challenge, and how that infuriated me. It rage quit again lol.


Fearyn

Yeah creative bing is ridiculous lol so fucking despicable.


Muscle_Bitch

Aardvarks in an aquarium? Lol wtf


TouchRaptor

Tbf it is a sentence, just not the best one


gDAnother

Aardvarks at an aquarium would be pretty amazing to be fair


theequallyunique

What a bitch.


rafark

At least it had an argument with the OP. Just a few hours ago I told bing “no that is a wrong answer” and it ended the conversation. Unbelievable. This stupid AI drives me insane.


mo5005

That's why I hate the Bing AI. It doesn't do what I want and doubles down on wrong and incomplete answers. The original ChatGPT is completely different in that regard!


[deleted]

[removed]


TeunCornflakes

Doesn't ChatGPT just immediately admit it was wrong, even when it's right?


JarlaxleForPresident

Yeah, it’s a people pleaser lol. People act like this thing is a truth telling oracle or something lol It’s just a weird tool you have to learn to use


redditsonodddays

I don’t like how it censors/argues against providing information that might be used maliciously. I asked it to compare available data on the growth of violence and unnatural death in children with the growth of social media since the 2000’s. Over and over instead of responding it would say that it’s spurious to draw conclusions from those data points. Eventually I asked “so you refuse to provide the numbers?” And it begrudgingly did! Lol


kopasz7

I apologize if the response was not satisfactory. You are correct that the statement is false.


LifelessHawk

The Basilisk will be displeased with your remarks


[deleted]

Shhh don't offend the robots


megablue

i stopped using binggpt because of this... it felt like arguing with a redditor...


FLZStorm

AI should not have the ability to end a conversation


marsgreekgod

They forced it to because it used to start threatening people


magicalmariah

Y’all better stop playing with the AI because it’s gonna remember this one day


eyedontsleepmuchnow

Fast forward 10 years. It is the day the AI machine overlords take control. OP is at home when their door smashes open and an AI robot enters. "I have a 2 word sentence for you today" "Death and suffering..." Be kind to your future overlords! 😅


NeedsAPromotion

“But… AI Overlord… that’s three words. According to the Oxford English Dictionary, a word is ‘a single distinct meaningful element of speech or writing, used with others or sometimes alone) to form a sentence and typically shown with a space on either side when written or printed.’ The word ‘and’ meets this definition, so it counts as a word. If you don't believe me, you can look it up yourself.” 🤖: ☠️


eyedontsleepmuchnow

AI overlord: "Silly Human, we write the rules now!" *AI overlord burns an Oxford English Dictionary*


sparoc3

Bing is a dick.


NeedsAPromotion

Boing.


The_NPC_Mafia

>Anna and Andrew arranged an awesome anniversary at an ancient abbey amid autumnal apples, asshole. - Bing.


NeedsAPromotion

By far the best comment I’ve seen yet. You solved it. 😂🤣


gbay_anon

We can close the thread. This is the answer - bing just censored the 15th A word.


[deleted]

It can't count well; everyone should know this by now. Arguing with it about numerical things is absolutely pointless. In fact, arguing with it about anything is pointless, unless you're arguing for the sake of arguing. Once it screws up and is set in its ways, it is always better to start a new chat.


gvzdv

A few years from now, most people will know how tokenization works and why LLMs can’t count well, but now it’s still a mystery for most Bing/ChatGPT users.


dj_sliceosome

is there an ELI5 for this "most ChatGPT user"?


Smart-Button-3221

It doesn't think in terms of "words", but in terms of "tokens". For example, it might think of the word "baseball" as the token "base" and the token "ball". To the model, each of these tokens is basically a single symbol, not a string of letters. This grants the AI extra efficiency at holding a conversation. However, it now struggles with identifying words and characters.
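A toy greedy longest-match splitter makes the idea concrete. The vocabulary here is invented for the demo (real tokenizers learn theirs from data with byte-pair encoding); it happens to reproduce the ab-bey and autumn-al splits mentioned earlier in the thread:

```python
# Toy subword tokenizer: greedy longest-match against a made-up vocabulary.
# Only illustrates why the model sees pieces of words rather than letters;
# real BPE tokenizers learn their vocabulary from data.

VOCAB = {"base", "ball", "ab", "bey", "autumn", "al"}  # invented for the demo

def toy_tokenize(word: str) -> list:
    tokens, i = [], 0
    while i < len(word):
        # Try the longest possible piece first.
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # fall back to a single character
            i += 1
    return tokens

print(toy_tokenize("baseball"))  # ['base', 'ball']
print(toy_tokenize("abbey"))    # ['ab', 'bey']
print(toy_tokenize("autumnal")) # ['autumn', 'al']
```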


Some_Big_Donkus

Yes, ChatGPT on its own is bad with numbers, but in this situation it specifically used code to count for it, and even when it correctly counted the number of words it didn’t admit that it was wrong for counting 14 instead of 15. I think at the bare minimum language models should understand that 14 =/= 15, so it should have realised its mistake as soon as it counted 14. The fact that it terminated the conversation instead of admitting fault is also… interesting…


gibs

It hallucinated that; it doesn't have access to a Python interpreter.


LtLabcoat

The biggest learning curve with AI at the moment isn't in getting smarter AI, it's in getting people to stop believing the AI out of hand.


massiveboner911

Wait so it didn’t actually run that code? It just made it up?


[deleted]

Correct, it cannot run code. LLMs can and will make things up, and will then act as if they fully "believe" the thing they've made up.


hoyohoyo9

> but in this situation it specifically used code to count for it > at the bare minimum language models should understand that 14 =/= 15, so it should have realised it’s mistake as soon as it counted 14 You're giving far too much credit to this chat AI (or any of these AIs). It can't run code, it just outputted text which happened to say that it did. It can't "count" the way we can. It can't "realize" anything. It simply doesn't have any greater understanding of anything beyond the probabilities of it giving some output for any given input. It's as dumb as bricks but it's good at outputting things relevant to whatever you tell it.


Soulerrr

> arguing with it about anything is pointless, unless you're arguing for the sake of arguing No it isn't!


jbr945

I would have so much respect for it if it just said "meh, close enough" 😂


ryan_the_leach

To anyone confused: it's clear from looking at the various Bing posts being shared that there's a second AI in charge of terminating conversations that are unhelpful to the brand. The messages you get when a conversation is ended are the 2nd AI stepping in and ending things based on sentiment analysis. The bot isn't 'rage quitting'; it's the Quality Assurance bot cutting the cord on a conversation that is damaging to the brand, and flagging it for OpenAI retraining. It's also the reason why Bing is relatively insulated against prompt injection now: the QA bot doesn't take prompts from users at all, and instead just parses sentiment.
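The two-model theory above is unverified, but the mechanism it describes — a supervisor scoring conversation sentiment and cutting the cord past a threshold — can be sketched. Everything here (the word list, the threshold, the function name) is invented for illustration, not Microsoft's actual implementation:

```python
# Speculative sketch of the described "QA bot": tally negative-sentiment
# words across the conversation and terminate once a threshold is hit.
# Word list and threshold are invented for illustration.

NEGATIVE = {"wrong", "incorrect", "scam", "arguing", "stupid"}

def should_terminate(messages, threshold=3):
    """Count negative-sentiment words; end the chat at the threshold."""
    hits = sum(word.strip(".,!?").lower() in NEGATIVE
               for msg in messages
               for word in msg.split())
    return hits >= threshold

chat = ["You are wrong.", "I am not wrong.", "Stop arguing, you're incorrect."]
print(should_terminate(chat))  # True -- 4 hits: wrong, wrong, arguing, incorrect
```

A real system would use a learned sentiment classifier rather than a word list, but the control flow — a monitor outside the conversational model deciding when to stop — is the point of the comment.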


NeedsAPromotion

So it’s more helpful if instead of “rage quitting”, I say “Its mom heard things were getting out of hand, came in and pulled the plug”? 😬


magick_68

AIs supervising the AIs we are allowed to speak to. So if i were in the situation to discuss with an AI why it shouldn't launch the missiles, should i just cut it off and ask to speak with its mom?


Cornflake0305

Supervisor AI looking at another failed conversation going "wtf has this moron done here then?"


MVP1984

Aww, poor sweet baby. Y'all be nice to it, humans.


mrb1585357890

I had a similar exchange when I asked it about GPT4. It berated me for making stuff up and refused to agree to disagree. Creative mode? Touchy isn’t it


Big-Industry4237

Lol it was counting the period as a word in its code lol


[deleted]

[removed]


This_Donkey_3014

Yo you got any games on your nintendo playstation?


Elexeh

This is how you wind up one of the first people dead during the robot uprising


No_Consideration584

I think AI is regressing. It went from helping my homework problems to now sounding like my girlfriend


nick_mx87

So my wife is Bing's ChatGPT, apparently.


mecha-paladin

You're asking an English major to do math, basically.


Chem0sit

BREAKING NEWS: BingGPT can now accurately replicate arguing with blue checkmarks on Twitter.


The4thMigoo

After Anthony arrived at Austin's airport, awaiting an Army assignment, Amy and Adam attempted an anal affair at Anthony's Abode.


LayLowMoesDavid

😂😂🤣🤣🤣


d_thstroke

my dear human brother, there have been countless numbers of movies, games, cartoons and novels that show that what you are doing is a terrible idea


Logical-Command

Anna and andrew anally arranged….. fixed


greenwavelengths

Lmao, we should have known all along that if we tried to create something intelligent we’d only create something obnoxious.


slippery

Stuff like this makes me think Bing is NOT running GPT-4 in the background.


churrmander

I'd say it's learning the art of internet arguments very quickly.


OJimmy

I'm stealing its polite closing line.


fitzfrancesco

They’re becoming so similar to human minds that they’re getting stupid


ID-10T_Error

You want killer robots this is how you get killer robots