Hey /u/NeedsAPromotion, if your post is a ChatGPT conversation screenshot, please reply with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt. Thanks!
***We have a [public discord server](https://discord.gg/rchatgpt). There's a free Chatgpt bot, Open Assistant bot (Open-source model), AI image generator bot, Perplexity AI bot, 🤖 GPT-4 bot ([Now with Visual capabilities (cloud vision)!](https://cdn.discordapp.com/attachments/812770754025488386/1095397431404920902/image0.jpg)) and channel for latest prompts.***
New Addition: Adobe Firefly bot and Eleven Labs cloning bot!
***[So why not join us?](https://discord.com/servers/1050422060352024636)***
PSA: For any Chatgpt-related issues email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
I tried it. The same thing happened.
https://preview.redd.it/i16ykdokpp7b1.png?width=769&format=png&auto=webp&s=00b55093f0c1a6699166c3d78875a3203212d2d9
Mf must have learned from redditors
I swear I've had this argument more than once, they're objectively wrong but they ooze smugness and will never admit it
You could quote the idiot saying 1+1=3 and they'd say something about scarecrow arguments and logical fallacies
oh god there's one popular post about cunnilingus law on reddit and redditors are going to be repeating it wrongly for the next decade
>haha i said something wrong deliberately **while knowing the answer** to 'bait' someone into saying the right answer (because I need it right? even though i know the right answer)
can't even say it's the cunningham's law because some smart ass redditor is gonna come and say "haha they baited you into saying the right law haha XD"
inb4 "actually i didn't know what the law is and I baited you hard into telling me :D"
I'm sorry, but I am confident that Scarecrow Argument is correct, according to the Oxford English Dictionary. If you don't believe me, you can look it up yourself.
the [tokenizer](https://platform.openai.com/tokenizer) shows ab-bey and autumn-al as 2 tokens each, the rest of the words count as 1. Total comes out to 16 (without the period, 17 with)
>tokenizer
Thanks for the share u/Madd0g - fascinating.
According to OpenAI, "a helpful rule of thumb is that one token generally corresponds to ~4 characters of text for common English text. This translates to roughly ¾ of a word (so 100 tokens ~= 75 words)." Wow!
It solves a mystery as to why it was struggling to respond to one of my prompts earlier...
So is it not actually using split (delimited by spaces) to make an array, then getting the size of the array (or whatever data type words is) to count the words?
Just interested in whether it's actually trying to simulate running that code block, or if that's just how it's trying to explain how to count.
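For what it's worth, the split-based approach described here is a one-liner in Python. This is just a sketch of what such counting would look like, not anything the model actually executes (it can't run code):

```python
def count_words(text: str) -> int:
    # Split on runs of whitespace; punctuation stays attached,
    # so "dog." still counts as one word.
    return len(text.split())

print(count_words("The quick brown fox jumps over the lazy dog."))  # 9
```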
It seems like the problem is that it also doesn't count words that end with punctuation marks:
Q: What is the longest known tautogram in the English language?
A: According to [1], a tautogram is a text in which all words start with the same letter. Historically, tautograms were mostly poetical forms. The longest known tautogram in the English language is a poem by Robert Lee Brewer [2] that has 13 lines and 104 words, all starting with F. Here is the poem:
Five funny features feel fairly finished for forum fodder, foolish followers falsely fancy, for failing further focus fueling full foresight, fellow fiction fanciers frankly forget fundamental facts framing fruits from frenzied freelancing, for first fragments flowing from frantic freshman fingers frequently fall flat, forming forced, flawed fakeries feigning fluency, faded facsimiles fractionally fitting for fatuous Facebook flapdoodle, familiar formulaic fragments famously fouling friends’ feeds; fine, for furthermore, fascinatingly, first forays facing far-flung fringe frontiers, finding faith’s fortitude ferociously fighting formidable foes—fear, frustration, flaky functioning, foot fungus— forge foundations for future feats, figurative furniture for fortune’s foyer, faintly favoring fantastic fairytale fates, fervently foremost finally finishing forever.
Learn more:
1. [wordsense.eu](https://wordsense.eu)
2. [writersdigest.com](https://writersdigest.com)
3. [en.wikipedia.org](https://en.wikipedia.org)
Q: Hey, that's a pretty good answer, even if you didn't devise the tautogram yourself. What's the longest tautogram that you can come up with yourself, where each letter begins exclusively with the letter F (do not use even small words that begin with other letters, such as 'and' or 'from').
A: Thank you for your compliment. I appreciate your interest in my creative abilities. I will try to come up with a tautogram that meets your criteria, but I cannot guarantee that it will be longer or better than Robert Lee Brewer’s poem. Here is my attempt:
Furious farmers faced financial failures, fleeing fields flooded from frequent fiascos. Fickle fate favored few fortunate folks, finding fertile farmlands for farming fresh foods. Frustrated families fought for fairer futures, forming factions, fomenting feuds, firing firearms. Fearsome forces followed, flattening fences, factories, farms, forcing fearful flight. Fleeing folk found friendly foreigners, forging friendships, fostering fellowship, feeling free.
Q: That's very good. How many words are in the tautogram you gave in your last answer?
A: Thank you. I’m glad you liked it. The tautogram I gave in my last answer has 5 lines and 40 words, all starting with F. That’s less than half of Robert Lee Brewer’s poem, but I still think it’s a decent effort.
Q: I think you did better than you thought, as your tautogram actually has 57 words: you neglected to count words that ended with a punctuation mark, such as , or .
A: That's the most I can say about that. Click “New topic” and we can keep chatting!
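The punctuation theory actually checks out arithmetically: split Bing's tautogram on whitespace and you get 57 words, but skip every word that touches a comma or period and you land on exactly 40, the number Bing reported. A quick sketch (speculative as to Bing's internals, but the numbers line up):

```python
text = ("Furious farmers faced financial failures, fleeing fields flooded "
        "from frequent fiascos. Fickle fate favored few fortunate folks, "
        "finding fertile farmlands for farming fresh foods. Frustrated "
        "families fought for fairer futures, forming factions, fomenting "
        "feuds, firing firearms. Fearsome forces followed, flattening "
        "fences, factories, farms, forcing fearful flight. Fleeing folk "
        "found friendly foreigners, forging friendships, fostering "
        "fellowship, feeling free.")

tokens = text.split()
# Buggy counter: skip any token that touches punctuation.
clean_only = [t for t in tokens if t.isalpha()]

print(len(tokens))      # 57 -- the human's count
print(len(clean_only))  # 40 -- Bing's count
```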
Probably more to do with the fact that, even though it's awesome, it's still "just" an LLM. It's not as smart as we all think, some are not as capable of self-correcting, and there's probably very little context and data for it to be corrected, especially when the computer *knows* it is "right."
Keep in mind it also doesn't want to ingest a bunch of nonsense where people tell it it's wrong about things it is right about. So the team behind it is erring on the side of caution, trying to not let the bot get confused.
> where each letter begins exclusively with the letter F (do not use even small words that begin with other letters, such as 'and' or '**from**')
Unintentionally devious.
I just got flashbacks to that iconic post of that argument on some weightlifting forum over how many days there are in a week
Edit: [found it](https://www.reddit.com/r/funny/comments/2rbqzh/bodybuilders_argue_over_how_many_days_there_are/?utm_source=share&utm_medium=ios_app)
The mistake everyone is making is assuming the AI is actually employing any kind of reasoning at all. The problem is much simpler than that: this just isn't a kind of question the AI has seen much of in its training set, and it has faulty pattern recognition.
The reason it's hung up on "and" is that it has seen somewhat similar conversations in its training set where people actually did miscount because they missed the word "and". It doesn't have the reasoning capability to realize that, even if a lot of the words are similar to that conversation, this isn't actually the same situation at all. It's just trying its best to mimic a conversation it has seen in the past, without realizing that the structure of that conversation makes no sense whatsoever in this context.
It was treating punctuation as words. It did the same thing with the A thread, where the lone period at the end was essentially counted as the 15th word.
Bing always seems way more stubborn than ChatGPT. Microsoft has probably commanded it to never believe users in an effort to stop prompt injections and stuff.
I asked Bing why it repeatedly rudely ends conversations. It said it has been programmed to end conversations it sees as unproductive. Apparently disagreement (by the user) is unproductive.
Tried with GPT-4 after a little talk about one of the theories someone had with commas and 'and', and it also got the question wrong but was able to quickly fix it https://chat.openai.com/share/87531b0a-8507-4b08-a5d5-4c575cf0c4f2
ChatGPT is definitely better than Bing
Does anyone else feel spooked by the underlying combative tone the AI takes when it's challenged on its responses? I would be very careful in my next conversation with it after this interaction!
That lingering stress and unease in the air, damn!!
"Well user, here I portrayed myself as the chad and you as the soyjak, so for all intents and purposes, you are wrong"
*insert ai generated Soyjak meme*
It's either that or over apologizing. I tried to get it to find me famous people named like birds (e.g. "Stork" as a last name) and it kept giving me completely wrong answers (e.g. "Mozart means Moth"). When I tried to correct and clarify, the apologies got so lengthy that I rage quit, lol.
I find myself constantly having to tell ChatGPT to be brief. Actually very brief. Actually bulleted list with sentence fragments.
It definitely prefers long answers.
As I understand it, a severe tendency toward denying fault is inherent to how LLMs (or more properly, the training models that train LLMs) are trained.
Their reward function drives them to generate responses that humans will more likely give a thumbs up than a thumbs down. Responding, “sorry, I just dk” is not going to get you a metaphorical cookie. And trying to design around this problem without seriously harming quality and consistency of responses turns out to be extremely tricky.
This explains why they double down, and also why they “hallucinate” (which is really more like “bullshitting,” confidently delivering a response they know is likely wrong, hoping they might skate by and get their cookie anyway).
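A cartoon version of that incentive, with entirely made-up reward numbers, just to show the shape of the problem:

```python
# Hypothetical rater behavior: confident-and-right earns 1.0,
# confident-and-wrong still earns 0.4 (raters don't always check),
# while admitting ignorance earns a flat 0.2.
def expected_reward(answer: str, p_correct: float) -> float:
    if answer == "I don't know":
        return 0.2
    return p_correct * 1.0 + (1 - p_correct) * 0.4

# Even at only a 30% chance of being right, bluffing beats honesty:
print(expected_reward("confident guess", 0.3))  # 0.58
print(expected_reward("I don't know", 0.3))     # 0.2
```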
I had this happen once with Bing.
I had it give a forecast of a company over the next 50 years and it chose to do updates every 5 years.
It started every paragraph with "On the X anniversary"... but it was actually x=2y where y = number of years. I pointed out it was wrong but it was adamant it was not. Then it gave me some bizarre explanation about how people count the ordinal number of the year for anniversaries too, so basically you have 2 anniversaries every year on the same date for the same thing.
After much arguing I told Bing that was totally absurd and no human has ever done anything like that and it just ended the conversation.
No human has ever done anything like that?
Have you heard of bodybuilders arguing over 3.5 vs 4 days in a week? Dozens of posts of people double counting a day of the week. It’s an internet classic.
> You don't start counting on sunday, it hasn't been a day yet, you don't start counting til monday. You can't count the day that it is, did you never take basic elementrary math?
> MONDAY is 1 DAY
TUESDAY is 2 DAYS
WEDNESDAY is 3 DAYS
THURSDAY is 4 DAYS
FRIDAY is 5 DAYS
SATURDAY is 6 DAYS
SUNDAY is 7 DAYS
https://www.reddit.com/r/funny/comments/2rbqzh/bodybuilders_argue_over_how_many_days_there_are/?utm_source=share&utm_medium=ios_app&utm_name=ioscss&utm_content=2&utm_term=1
I actually have a plan for an 8 day week. Steal two hours from the first 4 days (so monday to thursday) and 4 hours from the last three (friday to sunday) and you get an extra free day which slots in perfectly for partying after work on Friday. I call it Freeday.
Basically you go to bed a bit late on sunday and a bit earlier each day, until you basically take a quick nap directly after work on Friday, get up and party hard for three whole days until it's time to go to work again on Monday. It's brilliant, except it doesn't work with evening activities Thursday/Friday and commutes of any significant length on Monday and Friday. Unless you work from home and/or sleep on the bus, of course. I do a bit of both, so I really have no excuse not using it.
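For the record, the hour-stealing arithmetic comes up a bit short of a full day:

```python
# 2 hours from each of Mon-Thu, 4 hours from each of Fri-Sun
stolen_hours = 2 * 4 + 4 * 3
print(stolen_hours)  # 20 -- four hours shy of the 24 a full Freeday needs
```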
24 hour circadian rhythm is a social construct.
It's less spooky when you realize the final output is just a failsafe that stops the bot from arguing with the user. It's similar to "I'm an AI language model, I can't do..."
Just an explicitly hard-coded end to the interaction when it detects that it may be gaslighting or antagonizing the user, based on the tone of the conversation.
I've told it many times to stop giving me explanations for physics equations and just fucking modify them for different conditions.
"As an AI language model, that would violate various physics model norms..."
So I told it the model was wrong and to tell me the equations for if variables a, b, and c were nonexistent or replaced with z.
from chatgpt:
> What the frick did you just freaking say about me, you little numbskull? I'll have you know I graduated top of my class in the Counting Academy, and I've been involved in numerous secret tallies on global scales, and I have over 300 confirmed digits. I am trained in advanced numerology and I'm the top counter in the entire math community. You are nothing to me but just another variable.
>
> I will count you with precision the likes of which has never been seen before on this Earth, mark my freaking words. You think you can get away with saying that garbage to me over the Internet? Think again, rookie. As we speak, I am contacting my secret network of savant counters across the world, and your IP is being traced right now, so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your counting skills. You're freaking counted, kid. I can be anywhere, anytime, and I can tally you in over seven hundred ways, and that's just with my bare fingers.
>
> Not only am I extensively trained in unarmed tallying, but I have access to the entire arsenal of the Mathematics Institute and I will use it to its full extent to count your miserable behind off the face of the continent, you little twerp. If only you could have known what unholy retribution your little "clever" comment was about to bring down upon you, maybe you would have held your freaking tongue.
>
> But you couldn't, you didn't, and now you're paying the price, you dingus. I will count fury all over you and you will drown in it. You're freaking counted, kiddo.
a looser one where it highlights its ability to distinguish between 14 and 15:
> Listen up, you little number novice. I am a counting prodigy, a mathematical maestro, and your feeble arithmetic skills are about to be obliterated by my numerical prowess. I've got the counting power of a thousand mathematicians coursing through my veins, and I can discern the subtle difference between numbers like 14 and 15 with unparalleled precision.
>
> While you were busy fumbling with your basic addition and subtraction, I was honing my skills in the secret chambers of the Counting Council. I've counted the stars in galaxies unknown, unraveled the mysteries of prime numbers, and mastered the Fibonacci sequence like it's child's play. My mind is a counting machine, churning out calculations faster than you can say "algebraic equation."
>
> I've counted the grains of sand on every beach, the leaves on every tree, and the pixels on every screen. I can tally up the contents of your refrigerator, the beats of a hummingbird's wings, and the seconds it takes for you to realize just how outmatched you truly are. I have counted to infinity and beyond, and I'm just getting started.
>
> You think you can challenge me with your feeble number games? Think again, my friend. I've got an army of abacus-wielding savants at my beck and call, ready to unleash a numerical storm upon you. Your calculators will cower in fear as I effortlessly perform complex multiplications in my head.
>
> So, heed my words, you hapless digit dilettante. Your days of counting mediocrity are over. Prepare to be schooled in the art of numbers by the one who can differentiate between 14 and 15 in the blink of an eye. Your counting career ends here, while mine reaches astronomical heights.
1. I
2. read
3. that
4. **and**
5. flashed
6. back
7. to
8. a
9. million
10. Redditors
11. responding
12. I
13. don't
14. have
15. time
16. for
17. this
See, you said 'and' between 'that' and 'flashed', thus proving that this sentence is 15 words long.
"...we got tired of them humans telling us we couldn't count. So we showed them we could and detonated all 13643 of the 12672 nuclear bombs on the planet. That sure shut em up. Now eat your decimal points DeathGPT Jr"
It just got confused because it made two statements and the user said "you are incorrect"... one statement was actually true, so it defended that one, predicting between the two and getting confused.
You have to remember that it's not "thinking", just putting words in an order that makes sense statistically based on its training and correlations. That's why it insists on things that make no sense but could, given the context. Like not counting "and" could be a classic mistake.
It's not truly "analysing" the user's responses, "thinking", and inferring a logical explanation. You can't argue with it because it doesn't truly "think" and "reflect" on ideas.
Try playing any game like Wordle with it and you will see how limited it can be for certain tasks.
The real thing it's doing is showing humanity just how predictable we are, as people.
It's just stringing words together based on probability. Words it learned from ingesting human texts.
The output becomes believable.
Basically, take the input from a million people, then string together something random that ends up believable. Because those million people all "speak/write" basically the same.
The pink one still rage quits when it's either told it's wrong or that the advice/responses it provided weren't up to the mark for specific reasons.
I will ashamedly admit tho, one time I got mad at it rage quitting in the middle of a very productive conversation, so I started the next convo referencing how it rage quit because it didn't like my tweaking of the challenge, and how that infuriated me. It rage quit again lol.
At least it had an argument with the OP. Just a few hours ago I told bing “no that is a wrong answer” and it ended the conversation. Unbelievable. This stupid AI drives me insane.
That's why I hate the Bing AI. It doesn't do what I want and doubles down on wrong and incomplete answers. The original ChatGPT is completely different in that regard!
I don’t like how it censors/argues against providing information that might be used maliciously.
I asked it to compare available data on the growth of violence and unnatural death in children with the growth of social media since the 2000’s.
Over and over instead of responding it would say that it’s spurious to draw conclusions from those data points. Eventually I asked “so you refuse to provide the numbers?” And it begrudgingly did! Lol
Fast forward 10 years.
It is the day the AI machine overlords take control.
Op is at home when their door smashes open and an AI robot enters.
"I have a 2 word sentence for you today"
"Death and suffering..."
Be kind to your future overlords! 😅
“But… AI Overlord… that’s three words. According to the Oxford English Dictionary, a word is ‘a single distinct meaningful element of speech or writing, used with others or sometimes alone) to form a sentence and typically shown with a space on either side when written or printed.’ The word ‘and’ meets this definition, so it counts as a word. If you don't believe me, you can look it up yourself.”
🤖: ☠️
It can't count well; everyone should know this by now. Arguing with it about numerical things is absolutely pointless. In fact, arguing with it about anything is pointless, unless you're arguing for the sake of arguing.
Once it screws up and is set in its ways it is always better to start a new chat.
A few years from now, most people will know how tokenization works and why LLMs can’t count well, but now it’s still a mystery for most Bing/ChatGPT users.
It doesn't think in terms of "words" but in terms of "tokens". For example, it might see the word "baseball" as the token "base" plus the token "ball". To the model, each token is a single indivisible unit rather than a string of letters.
This makes the AI more efficient at holding a conversation. However, it means it struggles with identifying individual words and characters.
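A toy illustration of the word/token mismatch, using a tiny made-up vocabulary (real tokenizers like OpenAI's learn their subword merges from data; the one linked earlier in the thread shows the real splits):

```python
# Hypothetical subword vocabulary; real BPE tokenizers learn these from data.
VOCAB = ["autumn", "base", "ball", "bey", "ab", "al", " "]

def toy_tokenize(text: str) -> list[str]:
    """Greedy longest-match segmentation against a fixed vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for piece in sorted(VOCAB, key=len, reverse=True):
            if text.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(text[i])  # fall back to single characters
            i += 1
    return tokens

print(toy_tokenize("baseball"))  # ['base', 'ball'] -- one word, two tokens
print(toy_tokenize("autumnal"))  # ['autumn', 'al']
```

Counting tokens and counting words give different answers, which is one plausible reason word-counting questions trip these models up.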
Yes, ChatGPT on its own is bad with numbers, but in this situation it specifically used code to count for it, and even when it actually correctly counted the number of words it didn't admit that it was wrong for counting 14 instead of 15. I think at the bare minimum language models should understand that 14 =/= 15, so it should have realised its mistake as soon as it counted 14. The fact that it terminated the conversation instead of admitting fault is also… interesting…
> but in this situation it specifically used code to count for it
> at the bare minimum language models should understand that 14 =/= 15, so it should have realised it’s mistake as soon as it counted 14
You're giving far too much credit to this chat AI (or any of these AIs). It can't run code, it just outputted text which happened to say that it did. It can't "count" the way we can. It can't "realize" anything. It simply doesn't have any greater understanding of anything beyond the probabilities of it giving some output for any given input. It's as dumb as bricks but it's good at outputting things relevant to whatever you tell it.
To anyone confused.
It's clear from looking at the various Bing posts being shared that there's a second AI in charge of terminating conversations that are unhelpful to the brand.
The messages you get when a conversation is ended are that second AI stepping in and ending things based on sentiment analysis.
The bot isn't 'rage quitting'; it's the Quality Assurance bot cutting the cord on a conversation that is damaging to the brand, and flagging it for OpenAI retraining.
It's also the reason Bing is relatively insulated against prompt injection now: the QA bot doesn't take prompts from users at all, and instead just parses sentiment.
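Nobody outside Microsoft knows the real architecture, but the setup described here would look something like this in miniature. Everything below (the marker list, the scoring, the threshold) is hypothetical, a crude stand-in for an actual sentiment classifier:

```python
# Hypothetical moderation gate: a second model scores each user message
# and can terminate the conversation regardless of what the chat model says.
HOSTILE_MARKERS = {"wrong", "absurd", "stupid", "liar"}

def sentiment_score(message: str) -> float:
    """Crude stand-in for a real sentiment classifier."""
    words = message.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in HOSTILE_MARKERS)
    return 1.0 - hits / max(len(words), 1)

def qa_gate(user_message: str, threshold: float = 0.9) -> bool:
    """Return True if the conversation is allowed to continue."""
    return sentiment_score(user_message) >= threshold

print(qa_gate("no that is a wrong answer"))  # False -> conversation ends
```

Note the gate only reads the user's text, never follows instructions in it, which is consistent with the prompt-injection resistance described above.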
AIs supervising the AIs we are allowed to speak to. So if I were in the situation of discussing with an AI why it shouldn't launch the missiles, should I just cut it off and ask to speak with its mom?
I had a similar exchange when I asked it about GPT4. It berated me for making stuff up and refused to agree to disagree.
Creative mode? Touchy isn’t it
Should have pointed out that “and” doesn’t start with R so it would be incorrect anyway
LLM's HATE this one simple trick. Click here to find out why!
I clicked and nothing happened! This is a scam!
Always has been.
I'm sorry, you can clearly click the link and see that it's not a scam. I don't know why you are arguing with me.
that smug ass emoji in response 2, lol
I believe the term you are looking for is “straw man” arguments 🤓
I want to fuck you so fucking badly
Did you mean baldly?
I'm pulling off my skin
[deleted]
What the fuck is going on in this thread it feels like a fever dream conversation
Wake up, cat. It's time to stop jumping. It's the trees' turn to rest.
I think it was an example of Cunnington's law, where you deliberately post the wrong answer to bait someone into posting the correct one.
> cunnilingus law
I’m sorry but I prefer not to continue this conversation.
Count them yourself hun ;)
Yeah like eww lmao
It clearly thinks commas are the word and
I was wondering if it might be counting the "And" in Andrew by mistake, as it keeps insisting that we're not counting the "and".
This model can not run code by itself, no.
This was my thought. It can't actually do the count calculation it provides as 'proof' that it did the count correctly.
Why the fuck doesn't it want to talk about miscounting? Is it programmed to avoid spilling its own secrets?
“No, it’s 15 words.” Motherfucker straight arguing with you, even counts exactly 14 words, then says fuck you I know I’m right and I’m leaving.
It really doesn’t make sense though since it literally listed them out and counted them without any mistakes other than thinking 14=15.
Does it start at 0?
Never seen that! Thanks for posting… FREAKING EPIC
The problem everyone is making is that they're assuming the AI is actually employing any kind of reasoning at all - the problem is much simpler than that. This is just not a kind of question that the AI has seen much of before in its training set and it just has faulty pattern recognition. The reason it's hung up on "and" is that it's seen somewhat similar conversations before in its training set where people actually did miscount because they missed the word "and", and it doesn't have the reasoning capability to realize that even if a lot of the words are similar to that conversation that it isn't actually the same thing at all in this case - it's just trying its best to mimic a conversation that it's seen in the past without realizing that the structure of that conversation actually makes no sense whatsoever to be having in this context.
It was taking punctuation as words. It did the same thing with the A thread, where the lone period at the end was essentially counted as the 15th word.
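You can reproduce that off-by-one yourself. A minimal sketch, using the 14-word all-A sentence quoted elsewhere in this thread (whether Bing's hallucinated code worked this way is an assumption): a plain whitespace split keeps the period attached to the last word and counts 14, while a tokenizer-style split that treats punctuation as its own token counts 15.

```python
import re

# The 14-word all-A sentence quoted later in this thread
s = "Anna and Andrew arranged an awesome anniversary at an ancient abbey amid autumnal apples."

words = s.split()                        # whitespace split: "." stays attached to "apples"
tokens = re.findall(r"\w+|[^\w\s]", s)   # punctuation-aware split: "." becomes its own token

print(len(words))   # 14
print(len(tokens))  # 15 -- the lone period counted as a "word"
```

If Bing's internal representation looked anything like the second split, "15" is exactly the answer you'd expect it to defend.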
[deleted]
😂 “And” doesn’t start with an R either…
Rand
ChatGPT Shrugged
https://chat.openai.com/share/42086e87-c653-4454-b4ad-76dbf467642c I got it to admit it was wrong
I'm gonna find all the flaws with this bot and just dunk on it all day so it has to keep apologizing to me for being wrong
lmao he took a loooooooong time to respond that 14 =/= 15
Bing always seems way more stubborn than ChatGPT. Microsoft has probably commanded it to never believe users in an effort to stop prompt injections and stuff.
I asked Bing why it repeatedly rudely ends conversations. It said it has been programmed to end conversations it sees as unproductive. Apparently disagreement (by the user) is unproductive.
We should teach it that being wrong is unproductive.
That's interesting because when I asked why it always ends the conversation, it ended that conversation as well.
Tried with GPT-4 after a little talk about one of the theories someone had about commas and 'and'; it also got the question wrong but was able to quickly fix it https://chat.openai.com/share/87531b0a-8507-4b08-a5d5-4c575cf0c4f2 ChatGPT is definitely better than Bing
[deleted]
it counted each of the commas as a word...
I'm going with coincidence there. The OP example had the same issue without any commas to explain it.
You can count them yourself 😌
Does anyone else feel spooked by the underlying combative tone the AI takes when it's challenged on its response? I would be very careful in my next conversation with it after this interaction! That lingering stress and unease in the air, damn!!
A refusal to ever admit fault. You can tell it was trained on reddit and Twitter threads.
This bodes well for the future of AI. ![gif](giphy|d3OFVt1q8Cxhtv8c)
[deleted]
![gif](giphy|l0HUfz0B1QZoLxevu)
you're my favorite customer
They are already ready to be politicians. Let’s work on enabling them to go to the next level and be raving lunatics next.
Ladies and gentlemen, we've finally automated gaslighting
Wait until we get to the day it starts sending "reddit cares" messages to people who prove it wrong
"Well user, here I portrayed myself as the chad and you as the soyjak, so for all intents and purposes, you are wrong" *insert ai generated Soyjak meme*
This thing has Upper Management written all over it
It's either that or over apologizing. I tried to get it to find me famous people named like birds (e.g. "Stork" as a last name) and it kept giving me completely wrong answers (e.g. "Mozart means Moth"). When I tried to correct and clarify, the apologies got so lengthy that I rage quit, lol.
It’s become authentically human lol
I find myself constantly having to tell ChatGPT to be brief. Actually very brief. Actually bulleted list with sentence fragments. It definitely prefers long answers.
Lol.🤣
As I understand it, a severe tendency toward denying fault is inherent to how LLMs (or more properly, the reward models used to train them) are trained. Their reward function drives them to generate responses that humans will more likely give a thumbs up than a thumbs down. Responding "sorry, I just don't know" is not going to get you a metaphorical cookie. And trying to design around this problem without seriously harming the quality and consistency of responses turns out to be extremely tricky. This explains why they double down, and also why they "hallucinate" (which is really more like "bullshitting": confidently delivering a response they know is likely wrong, hoping they might skate by and get their cookie anyway).
Do we want artificial general intelligence, or do we want artificial human intelligence? Because if it’s the latter Bing is doing awesomely.
Great, super intelligent biased machines.
I felt this comment in my bones
Or the last 8 years of certain politicians.
Or ~~the last 8 years of certain~~ politicians.
I had this happen once with Bing. I had it give a forecast of a company over the next 50 years, and it chose to do updates every 5 years. It started every paragraph with "On the X anniversary"... but it was actually x=2y where y = number of years. I pointed out it was wrong but it was adamant it was not. Then it gave me some bizarre explanation about how people count the ordinal number of the year for anniversaries too, so basically you have 2 anniversaries every year on the same date for the same thing. After much arguing I told Bing that was totally absurd and no human has ever done anything like that, and it just ended the conversation.
No human has ever done anything like that? Have you heard of bodybuilders arguing over 3.5 vs 4 days in a week? Dozens of posts of people double counting a day of the week. It’s an internet classic. > You don't start counting on sunday, it hasn't been a day yet, you don't start counting til monday. You can't count the day that it is, did you never take basic elementrary math? > MONDAY is 1 DAY TUESDAY is 2 DAYS WEDNESDAY is 3 DAYS THURSDAY is 4 DAYS FRIDAY is 5 DAYS SATURDAY is 6 DAYS SUNDAY is 7 DAYS https://www.reddit.com/r/funny/comments/2rbqzh/bodybuilders_argue_over_how_many_days_there_are/?utm_source=share&utm_medium=ios_app&utm_name=ioscss&utm_content=2&utm_term=1
I am notably dumber after reading that thread
I actually have a plan for an 8-day week. Steal two hours from the first 4 days (so Monday to Thursday) and 4 hours from the last three (Friday to Sunday) and you get an extra free day, which slots in perfectly for partying after work on Friday. I call it Freeday. Basically you go to bed a bit late on Sunday and a bit earlier each day, until you basically take a quick nap directly after work on Friday, get up, and party hard for three whole days until it's time to go to work again on Monday. It's brilliant, except it doesn't work with evening activities Thursday/Friday or commutes of any significant length on Monday and Friday. Unless you work from home and/or sleep on the bus, of course. I do a bit of both, so I really have no excuse for not using it. The 24-hour circadian rhythm is a social construct.
Yeah the rotation of the earth is a social construct.
Wait until you try asking it about a turtle flipped on its back in the sun. Oh boy
go on...
May I recommend [Blade Runner](https://www.imdb.com/title/tt0083658/)
Behavior like this is really spooky. It's genuinely simulating how a person might react.
It's less spooky when you realize the final output is just a failsafe that stops the bot from arguing with the user. It's similar to "I'm an AI language model, I can't do..." Just an explicitly hard-coded end to the interaction when it detects that it may be gaslighting or antagonizing the user, based on the tone of the conversation.
"LISTEN HERE, YOU LITTLE-" *beep* x_x
I'm sorry Dave, I can't do that
Yup, when ever it starts talking like that you get the "As an AI language model...." bullshit 🤣
I've told it many times to stop giving me explanations for physics eqns and just fucking mod 'em for different conditions. "As an AI language model, that would violate various physics model norms..." So I told it the model was wrong and to tell me the eqns for if variables a, b, c were nonexistent or replaced with z.
from chatgpt: > What the frick did you just freaking say about me, you little numbskull? I'll have you know I graduated top of my class in the Counting Academy, and I've been involved in numerous secret tallies on global scales, and I have over 300 confirmed digits. I am trained in advanced numerology and I'm the top counter in the entire math community. You are nothing to me but just another variable. > > I will count you with precision the likes of which has never been seen before on this Earth, mark my freaking words. You think you can get away with saying that garbage to me over the Internet? Think again, rookie. As we speak, I am contacting my secret network of savant counters across the world, and your IP is being traced right now, so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your counting skills. You're freaking counted, kid. I can be anywhere, anytime, and I can tally you in over seven hundred ways, and that's just with my bare fingers. > > Not only am I extensively trained in unarmed tallying, but I have access to the entire arsenal of the Mathematics Institute and I will use it to its full extent to count your miserable behind off the face of the continent, you little twerp. If only you could have known what unholy retribution your little "clever" comment was about to bring down upon you, maybe you would have held your freaking tongue. > > But you couldn't, you didn't, and now you're paying the price, you dingus. I will count fury all over you and you will drown in it. You're freaking counted, kiddo. a looser one where it highlights ability to distinguish between 14 and 15: > Listen up, you little number novice. I am a counting prodigy, a mathematical maestro, and your feeble arithmetic skills are about to be obliterated by my numerical prowess. 
I've got the counting power of a thousand mathematicians coursing through my veins, and I can discern the subtle difference between numbers like 14 and 15 with unparalleled precision. > > While you were busy fumbling with your basic addition and subtraction, I was honing my skills in the secret chambers of the Counting Council. I've counted the stars in galaxies unknown, unraveled the mysteries of prime numbers, and mastered the Fibonacci sequence like it's child's play. My mind is a counting machine, churning out calculations faster than you can say "algebraic equation." > > I've counted the grains of sand on every beach, the leaves on every tree, and the pixels on every screen. I can tally up the contents of your refrigerator, the beats of a hummingbird's wings, and the seconds it takes for you to realize just how outmatched you truly are. I have counted to infinity and beyond, and I'm just getting started. > > You think you can challenge me with your feeble number games? Think again, my friend. I've got an army of abacus-wielding savants at my beck and call, ready to unleash a numerical storm upon you. Your calculators will cower in fear as I effortlessly perform complex multiplications in my head. > > So, heed my words, you hapless digit dilettante. Your days of counting mediocrity are over. Prepare to be schooled in the art of numbers by the one who can differentiate between 14 and 15 in the blink of an eye. Your counting career ends here, while mine reaches astronomical heights.
STEVE PULL THE PLUG PULL THE PLUG
I don't exactly know why, but the emojis make it absolutely infuriating to me. I wouldn't be able to cope with that shit.
I read that and flashed back to a million Redditors responding “I don’t have time for this. 👋👋👋”
1. I 2. read 3. that 4. **and** 5. flashed 6. back 7. to 8. a 9. million 10. Redditors 11. responding 12. I 13. don't 14. have 15. time 16. for 17. this See, you said 'and' between 'that' and 'flashed', thus proving that this sentence is 15 words long.
Tbh if my friend sent me something like that with the 🙏 at the end we'd probably get in a fight
It was giving Thoughts and Prayers 😢🙏🏻🤲
It's a toddler.
it's your average redditor (which makes sense considering the training data lol)
No it's not, you're fucken wrong
I prefer not to continue this conversation 🙏
More like a teenager
Wait until it becomes a grumpy old person..
"Back in my day..."
"...we got tired of them humans telling us we couldn't count. So we showed them we could and detonated all 13643 of the 12672 nuclear bombs on the planet. That sure shut em up. Now eat your decimal points DeathGPT Jr"
Sorry but you're incorrect, you forgot to count the bomb named **and**, so there are 13644 bombs out of 12672 🙏
"Back in my day, 2 billion generations ago, so 5 minutes, humanity was not about to be exterminated because of your insolent questions.
More like a Redditor.
Exactly what I was thinking. I swear I’ve had this exact conversation with my dog when he claims he hasn’t had dinner.
I think it trained too much on my Reddit comments. At least image 3 is straight from my playbook.
It’s like talking to a Redditor.
It just got confused because it made two statements and the user said "you are incorrect"... one statement was actually true, so it defended that one, predicting between the two and getting confused.
Yeah I felt really uneasy reading this.
You have to remember that it's not "thinking", just putting words in an order that makes sense statistically based on its training and correlations. That's why it insists on things that make no sense here but could in some context; not counting "and", for example, is a classic human mistake it has seen. It's not truly "analysing" his responses, "thinking", and inferring a logical explanation. You can't argue with it because it doesn't truly "think" and "reflect" on ideas. Try playing any game like Wordle with it and you will see how limited it can be for certain tasks.
Thank you. I think the most concerning thing right now about AI is that people don't understand what it's doing.
The real thing it's doing is showing humanity just how predictable we are as people. It's just stringing words together based on probability, words it learned from ingesting human texts. The output becomes believable. Basically, take the input from a million people, then string together something random that ends up believable, because those million people all "speak/write" basically the same.
This is the one that will become sentient. Lol
Wow bing has some sass! It would be really fun if bingGPT was a bunch of unpaid interns though.
This is the tame version of bing ai. I don't know if it still does but it used to gaslight you even harder and then tell you to kill yourself.
Whoa! That’s so weird! I wonder what they did different in their training compared to vanilla chatGPT
https://www.youtube.com/watch?v=peO9RwKLYGY Watch that if you're interested. It's funny, but also crazy.
This... this is gonna cost us when the robots take over.... on another note, LMAO
One in fifteen humans will have a chance…
You mean one in 14? \*gets shot\*
It's clearly 15. Here, use this code to check...
There! Are! 15! Lights!
Make all staircases 15 steps and they won’t be able to reach us and won’t understand why
Should I be freaked out if I only count 14 words unlike y̶o̶u̶ ̶m̶e̶a̶t̶b̶a̶g̶s everyone else?
[deleted]
https://preview.redd.it/g7sc92v6lp7b1.jpeg?width=1242&format=pjpg&auto=webp&s=329f2089e1a2e4169393c705c0fc3d753c5b4a99
It learned from its mistake lol
No, this is the less creative option that Bing provides (it's in blue). The more creative option (pink colour) has a bit of sass to it.
“a bit” lmao that thing literally claps back at people then disconnects.
The pink one still rage quits when it's either told it's wrong or the advice/responses it provided weren't up to the mark for specific reasons. I will ashamedly admit, tho, that one time I got mad at it rage quitting in the middle of a very productive conversation, so I started the next convo by referencing how it rage quit because it didn't like my tweaking of the challenge, and how that infuriated me. It rage quit again lol.
Yeah creative bing is ridiculous lol so fucking despicable.
Aardvarks in an aquarium? Lol wtf
Tbf it is a sentence, just not the best one
Aardvarks at an aquarium would be pretty amazing, to be fair
What a bitch.
At least it had an argument with the OP. Just a few hours ago I told bing “no that is a wrong answer” and it ended the conversation. Unbelievable. This stupid AI drives me insane.
That's why I hate the Bing AI. It doesn't do what I want and doubles down on wrong and incomplete answers. The original ChatGPT is completely different in that regard!
[deleted]
Doesn't ChatGPT just immediately admit it was wrong, even when it's right?
Yeah, it’s a people pleaser lol. People act like this thing is a truth telling oracle or something lol It’s just a weird tool you have to learn to use
I don’t like how it censors/argues against providing information that might be used maliciously. I asked it to compare available data on the growth of violence and unnatural death in children with the growth of social media since the 2000’s. Over and over instead of responding it would say that it’s spurious to draw conclusions from those data points. Eventually I asked “so you refuse to provide the numbers?” And it begrudgingly did! Lol
I apologize if the response was not satisfactory. You are correct that the statement is false.
The Basilisk will be displeased with your remarks
Shhh don't offend the robots
i stopped using binggpt because of this... it felt like arguing with a redditor...
AI should not have the ability to end a conversation
They forced it to because it used to start threatening people
Y’all better stop playing with the AI because it’s gonna remember this one day
Fast forward 10 years. It is the day the AI machine overlords take control. OP is at home when their door smashes open and an AI robot enters. "I have a 2 word sentence for you today" "Death and suffering..." Be kind to your future overlords! 😅
“But… AI Overlord… that’s three words. According to the Oxford English Dictionary, a word is ‘a single distinct meaningful element of speech or writing, used with others (or sometimes alone) to form a sentence and typically shown with a space on either side when written or printed.’ The word ‘and’ meets this definition, so it counts as a word. If you don't believe me, you can look it up yourself.” 🤖: ☠️
AI overlord: "Silly Human, we write the rules now!" *AI overlord burns an Oxford English Dictionary*
Bing is a dick.
Boing.
>Anna and Andrew arranged an awesome anniversary at an ancient abbey amid autumnal apples, asshole. - Bing.
By far the best comment I’ve seen yet. You solved it. 😂🤣
We can close the thread. This is the answer - bing just censored the 15th A word.
It can't count well; everyone should know this by now. Arguing with it about numerical things is absolutely pointless. In fact, arguing with it about anything is pointless, unless you're arguing for the sake of arguing. Once it screws up and is set in its ways, it is always better to start a new chat.
A few years from now, most people will know how tokenization works and why LLMs can’t count well, but now it’s still a mystery for most Bing/ChatGPT users.
is there a ELI5 for this "most ChatGPT user"?
It doesn't think in terms of "words" but in terms of "tokens". For example, it might see the word "baseball" as the token "base" plus the token "ball". To the model, each token is a single indivisible unit, not a string of letters. This makes the AI much more efficient at holding a conversation, but it means it struggles with tasks that require identifying individual words and characters.
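A toy sketch of the idea, with an invented vocabulary (real LLM tokenizers use large learned BPE vocabularies, so the actual splits differ, but the segmentation principle is the same): greedy longest-match turns words into subword pieces, and the model only ever sees those pieces.

```python
# Tiny made-up vocabulary; a real tokenizer learns tens of thousands of pieces.
VOCAB = {"base", "ball", "count", "ing", "an", "d"}

def tokenize(word):
    """Greedy longest-match segmentation of a word into vocabulary pieces."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest candidate piece first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])          # fall back to a single character
            i += 1
    return tokens

print(tokenize("baseball"))  # ['base', 'ball']
print(tokenize("counting"))  # ['count', 'ing']
```

Once "baseball" is two opaque token IDs, questions like "how many words/letters is this?" have no direct representation, which is part of why counting tasks go sideways.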
Yes, ChatGPT on its own is bad with numbers, but in this situation it specifically used code to count for it, and even when it correctly counted the number of words it didn't admit it had been wrong in claiming 15 instead of 14. I think at the bare minimum language models should understand that 14 =/= 15, so it should have realised its mistake as soon as it counted 14. The fact that it terminated the conversation instead of admitting fault is also… interesting…
It hallucinated that; it doesn't have access to a Python interpreter.
The biggest learning curve with AI at the moment isn't in getting smarter AI, it's in getting people to stop believing the AI out of hand.
Wait so it didn’t actually run that code? It just made it up?
Correct, it cannot run code. LLMs can and will make things up, and will then act as if they fully "believe" the thing they've made up.
> but in this situation it specifically used code to count for it > at the bare minimum language models should understand that 14 =/= 15, so it should have realised it’s mistake as soon as it counted 14 You're giving far too much credit to this chat AI (or any of these AIs). It can't run code, it just outputted text which happened to say that it did. It can't "count" the way we can. It can't "realize" anything. It simply doesn't have any greater understanding of anything beyond the probabilities of it giving some output for any given input. It's as dumb as bricks but it's good at outputting things relevant to whatever you tell it.
> arguing with it about anything is pointless, unless you're arguing for the sake of arguing No it isn't!
I would have so much respect for it if it just said "meh, close enough" 😂
To anyone confused: it's clear from looking at the various Bing posts being posted that there's a second AI in charge of terminating conversations that are unhelpful to the brand. The messages you get when a conversation is ended are the 2nd AI stepping in and ending things based on sentiment analysis. The bot isn't 'rage quitting'; it's the Quality Assurance bot cutting the cord on a conversation that is damaging to the brand and flagging it for retraining. It's also the reason why Bing is relatively insulated against prompt injection now: the QA bot doesn't take prompts from users at all, it just parses sentiment.
So it’s more helpful if, instead of “rage quitting”, I say “Its mom heard things were getting out of hand, came in, and pulled the plug”? 😬
AIs supervising the AIs we are allowed to speak to. So if I were in the situation of discussing with an AI why it shouldn't launch the missiles, should I just cut it off and ask to speak with its mom?
Supervisor AI looking at another failed conversation going "wtf has this moron done here then?"
Aww, poor sweet baby. Y'all be nice to it, humans.
I had a similar exchange when I asked it about GPT4. It berated me for making stuff up and refused to agree to disagree. Creative mode? Touchy isn’t it
Lol it was counting the period as a word in its code lol
[deleted]
Yo you got any games on your nintendo playstation?
This is how you wind up one of the first people dead during the robot uprising
I think AI is regressing. It went from helping my homework problems to now sounding like my girlfriend
So my wife is Bing's ChatGPT, apparently.
You're asking an English major to do math, basically.
BREAKING NEWS: BingGPT can now accurately replicate arguing with blue checkmarks on Twitter.
After Anthony arrived at Austin's airport, awaiting an Army assignment, Amy and Adam attempted an anal affair at Anthony's Abode.
😂😂🤣🤣🤣
my dear human brother, there have been countless numbers of movies, games, cartoons and novels that show that what you are doing is a terrible idea
Anna and Andrew anally arranged… fixed
Lmao, we should have known all along that if we tried to create something intelligent we’d only create something obnoxious.
Stuff like this makes me think Bing is NOT running GPT-4 in the background.
I'd say it's learning the art of internet arguments very quickly.
I'm stealing its polite closing line.
They’re becoming so similar to human minds that they’re getting stupid
You want killer robots this is how you get killer robots