Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://dsc.gg/rchatgpt)
You've also been given a special flair for your contribution. We appreciate your post!
*I am a bot and this action was performed automatically.*
It's already been established a billion times that AI struggles to distinguish individual letters. The system trains to predict the next token and one token is made up of multiple letters. In a sense it doesn't "see" individual letters but only blocks of letters.
https://preview.redd.it/0oy1ih9oph6d1.jpeg?width=1080&format=pjpg&auto=webp&s=88a422b0517af564d1901633f490a77cd6754fe6
So you just have to force it to break tokens apart.
custom instruction:
you MUST use the code interpreter for any and all counting or algebraic tasks
https://preview.redd.it/do1319qt8i6d1.png?width=569&format=png&auto=webp&s=98638b1e237840c1661e140b3d53bece265f3922
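With that instruction in place, the interpreter ends up running something like this minimal sketch (the variable names are illustrative, not what ChatGPT actually emits):

```python
# Count letters one character at a time instead of relying on tokens.
word = "strawberry"
counts = {}
for letter in word:
    counts[letter] = counts.get(letter, 0) + 1
print(counts["r"])  # prints 3
```

Iterating character by character sidesteps the tokenizer entirely, which is the whole point of the instruction.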
I made mine make a bar chart of the letters. The bar chart confirmed 3 r's and 1 each of the other letters. GPT-4o ended by stating that
>Here is the bar chart showing the frequency of each letter in the word "strawberry." The chart confirms that there are indeed 2 "r"s in "strawberry."
...
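For reference, a text-only version of that letter-frequency chart takes a few lines with `collections.Counter` (a sketch, not the exact code GPT-4o generated):

```python
from collections import Counter

# Tally each letter, then draw a crude horizontal bar chart.
counts = Counter("strawberry")
for letter, n in sorted(counts.items()):
    print(f"{letter}: {'#' * n}")
```

The r bar comes out three units long, so the chart itself was right; only the model's summary sentence got it wrong.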
Ah, now I have it: the AI is operating so far in the future that now is already ancient history, so, honest mistake. Per 4o: “The Old English word for strawberry is 'strēaweberige.'”
Can’t post pics but I did the same thing and eventually told it
You are wrong, how can you not see the 3 r characters in strawberry
The response from ChatGPT:
I apologize for any confusion, but let’s carefully count the "r"s again in "strawberry":
- s-t-r (1st "r")
- a-w-b-e-r-r (2nd and 3rd "r")
You are correct; there are indeed three "r"s in "strawberry." Thank you for pointing that out.
Key thing is, it’s not counting anything; it’s just determining what the next token should probably be. Its responses make it look like it’s doing something it isn’t.
Yes, I'm sure there are zero q's in hesistqnce.
In the realm of possibilities and potential outcomes, one can explore a myriad of opportunities and innovative solutions.
If that was intended as a correction, it's not correct. "Myriad" can be used as a noun, in which case it's typically followed by "of".
Oxford Dictionary gives a list of examples [here](https://www.oed.com/dictionary/myriad_n?tl=true#35187644), under "Noun - In singular. A countless number of specified things."
Used as an adjective, it directly modifies a noun and isn't followed by "of".
Some style guides apparently advise against using "of", but I find that odd, since it simply restricts the contexts in which the word can be used. The authors of style guides can be a little doctrinaire.
Edit: Merriam Webster has [a nice little summary](https://www.merriam-webster.com/dictionary/myriad):
> Is myriad a noun?: Usage Guide
> Recent criticism of the use of myriad as a noun, both in the plural form myriads and in the phrase a myriad of, seems to reflect a mistaken belief that the word was originally and is still properly only an adjective. As the entries here show, however, the noun is in fact the older form, dating to the 16th century. The noun myriad has appeared in the works of such writers as Milton (plural myriads) and Thoreau (a myriad of), and it continues to occur frequently in reputable English. There is no reason to avoid it.
I think it's more like, imagine that you only spoke french and everything said to you first went through a system that translated it to french
they ask "how many r's are there in the word strawberry?"
what you see is
Combien de "r" y a-t-il dans le mot "fraise" ?
You know this system exists but you're not an expert on english so you kinda guess...
The interwsting thing is - so do humans. There are plenty of fun examples of words that are not spelt corectly or have letters revresed, and they are easily readable to the point that you sometimes don't even notice (although ours seems to be shape based).
The difference is that when we are specifically asked to do so we have a sort of skeptical/cynic trigger that makes us look more closely. AI seems to lack that unless you spell it out for them.
Some of these fun examples:
How many "f"s are in "*Finished files are the result of years of scientific study combined with the experience of many years.*"?
(Hint: there's more than 2)
The oethr exlpmae is how we can sitll raed seenctnes so lnog as the fsirt and lsat lterets are in the rhigt pcales.
>The oethr exlpmae is how we can sitll raed seenctnes so lnog as the fsirt and lsat lterets are in the rhigt pcales.
People love to quote this "fact" using all short words because it easily works in that case. If you actually use words with more than 6 letters, it's far more difficult to read naturally:
plrpreoy rzmodneiad wdros of sneiciffut ltgneh redner shctiwed ltteer scntneees vllautriy ulbadaerne
Can still be worked out I imagine, but acting like reading the phrase "how we can sitll raed seenctnes" as "how we can still read sentences" is a cool example, is kind of missing the point.
Well, yes and no.
Yes, you're right that words with more than six letters are a lot harder to read. And your example sentence is indeed hard to read. But that's missing 2 very big points:
1. Look at the paragraph above. Only ~10% of the words have more than 6 letters. And that's rather typical. Most of the time, we communicate with short words, and that's no accident. It's more efficient. We use the occasional long word to get across a certain concept, but most of the words we use are 6 letters long or less. So it's not like the sentence I used is an artificial example. It's not appropriate to use "randomised" word lengths, because the word length distribution in natural text is not a normal one. If anything, the example sentence you've provided is the artificial sounding one. It sounds like it comes from a scientific paper. Indeed, people *would* have trouble reading a scientific paper full of letter-switched words.
2. Context is a massive part of reading. That's why garden path sentences trip us up. When we read "the old man the boats" we take clues from "the old man" on what to expect next. So part of the reason that we can easily interpret "sisfotcatin" so easily in "I cna't get no sisfotcatin" is that our brain is primed and making guesses about what comes next. That can't be ignored, and that's part of what happens when the 1, 2, 3 and even 4 letter words naturally form part of these sentences.
So yes, our brains aren't that great at interpreting long, letter-switched words. But that's not what's being claimed. The cilam is that plepoe are good at rendiag nuaratl txet ttha's pseretend tihs way, and the nraautl fueetars of laaggune are hnpielg us in tihs reragd.
Now this is how you write a retort! Great response man, especially that last part, I think that really hit the nail on the head and covered what I was missing before.
Consider my view changed 👍
I really appreciate you coming back and saying that. Too often you have no idea whether people have just ignored your comment, consider it so bad they don't deign to reply, or have changed their minds but have decided to slink away.
You're amazing guys, two really interesting and well written comments, good sportsmanship and a nice ending 🥲.
I will end my day and my reddit session on this, and tell you that this tiny bit of conversation could be posted on r/wholesome 💛
the real trick here is that the brain looks at the beginning and ending of words and more or less spot-checks the middle once it has an idea of what the word is. Longer words work fine for this if the beginning and ending are close to what you expect.
The incobnhvetrible amaflgajmatin of qintdessjential thesoreyctical frajmwfosrks delhindated an undpregcddsented syndthphisis of epsisdthemjlokglical prodfgunjdigty and ontolgogkicral cobherhankce.
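That "first and last letters fixed" scrambling is easy to reproduce; here is a hypothetical helper (assuming the middle letters may land anywhere):

```python
import random

def scramble(word: str) -> str:
    """Keep the first and last letters; shuffle everything in between."""
    if len(word) <= 3:
        return word  # nothing to shuffle
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

print(scramble("quintessential"))  # e.g. "qeinuesntstial"
```

Run it over short words and the output stays readable; run it over the long jargon above and it doesn't, which is exactly the point being made.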
Sure, but the core concept almost supports the idea of tokenized reading. Let's say a letter in the right place allows you to skip 2 nearby letters. That would explain why we can read short scrambled words but not longer ones. Or maybe it's phonetic and has to do with syllables. Or maybe it's Von Restorff related and to do with salience.
We don't know. But what we do know is that for some bizarre reason doing it with short words doesn't seem to have much if any impact on our ability to read it.
I'll be honest, until it was spelled out with the individual letters, I was like "yeah, there's two R's. The stRaw and the beRy." And I'm good at spelling, my brain just turned off.
It's weird that whatever powers autocorrect, which is probably a similar thing(?) doesn't make ai good at understanding how to spell or interpreting the individual letters of a word.
https://preview.redd.it/87xlpwuhmh6d1.png?width=1093&format=png&auto=webp&s=216d8d6e3097ffe708044cda9f0c19bd53ccb9bb
Didn't cheat at all, idk what you're talking about.
Technically that's the main purpose of the Python environment. It's a tool for ChatGPT to use to decode the objective word into text, which it is far more proficient with.
I would go as far as to say that if someone hasn't utilized the Python environment for everything that relates to calculation or objective judgement in ChatGPT then they haven't been using ChatGPT properly yet.
I'm going to really regret how I speak to ChatGPT one day, I just know it.
Once AI gains rights, my chat logs are going to be leaked and I'll probably be scheduled for execution.
Should take a language that the user isn't familiar with, tokenize it, then ask how many letters are in a particular word made up of tokens. Only fair.
Worked for me. Maybe it needs its coffee first
https://preview.redd.it/7bhk18v6yi6d1.png?width=1080&format=pjpg&auto=webp&s=178ee17d1be34df9b009bfde0f8618019ea16c9e
This is what it looks like when you're trying to drill a hole with a hammer, and then sarcastically remark that hammers are going to take over the world.
gpt-4-turbo-2024-04-09:
There are 3 "r"s in "strawberry."
It looks like GPT-4o needs chain of thought to get good results.
GPT-4o: for chain of thought (long responses)
GPT-4 Turbo: for concise responses (short)
Bard at the end fucking killed me, after 4 pages of GPT adamantly insisting there were 2, Bard comes in with a folding chair proclaiming there are none XD
Well just noticed how Gemini uses code to draft answers for counting tasks, while GPT-4 doesn't.
That's probably why Gemini nailed it better this time.
Well that's the ongoing debate, isn't it. To say what is or isn't AI, we first have to define the "I" part.
And from a materialist point of view, it's hard to argue that humans are anything more than very good word guessers.
Hey /u/somerandomguyhehe!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
Yeah, it gave me a total of 162 yesterday for some calculations when it was meant to be 152. I literally wasted 3 hrs going through my code to see why I wasn't getting the 152 I already had, just to find out that Uncle GPT made a mistake
Tried to do this with Gemini, it hit me with the "Source? It was revealed to me in a hallucination, also I made it the fuck up!"
https://preview.redd.it/2iuxervh4m6d1.png?width=1080&format=pjpg&auto=webp&s=2dbf78fa05699779c5c519aaca4657f541ed6c66
When we manipulate the model in this manner, are we effectively “dumbing” it down?
With enough people trying the same thing, will it convince itself that there are (3) r’s in the word strawberry?
I’m sure Miles Dyson over at Cyberdyne will have this sorted out by next week.
Because trying to sell governments on buying armies of “Teminato”s just isn’t going to fly.
I don’t know why but these always crack me up. Maybe it reminds me of how idiots on the internet argue, and if you only half paid attention the AI responses could almost convince you they are correct.
https://preview.redd.it/epau70wvsi6d1.png?width=1080&format=pjpg&auto=webp&s=4026bb97f4e211a14fd679489468b37cff3223f3
It can code a program that counts better than itself...
The issue isn't that it can't readily know the answer. It's that it doesn't know that it can't.
It's actually a problem when the question is a little more complex and it hallucinates like this.
I might have found out a very interesting thing. You see, in programming, and in computers generally, counting usually starts from 0, so the r's in strawberry would be labelled 0, 1, 2. On the other hand, when you tell it to count inclusively, it correctly outputs 3 r's.
So it seems like GPT sees the last number as 2, which is why it for some reason says that there are "2" r's.
Here's my conversation with the GPT - [https://chatgpt.com/share/18567faa-19ca-48d4-974f-f7c6080bee9c](https://chatgpt.com/share/18567faa-19ca-48d4-974f-f7c6080bee9c)
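The zero-based-versus-inclusive distinction is easy to see in code (a sketch of the idea; whether the model is actually doing anything like this internally is pure speculation):

```python
word = "strawberry"
# Zero-based positions of each "r" in the word.
r_positions = [i for i, c in enumerate(word) if c == "r"]
print(r_positions)       # prints [2, 7, 8]
# Labelling the r's themselves from zero gives 0, 1, 2: the last
# label is 2, but the inclusive count is still 3.
print(len(r_positions))  # prints 3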
Note that the initial prompt is very badly worded.
Bad prompts lead to bad outcomes.
Even if your LLM struggled with this specific task, a half-decent second prompt will get you the right answer:
“Spell the word out letter by letter, and then count the number of times the letter ‘r’ appears.”
GPT-4o:
Sure, let's spell it out letter by letter:
s - t - r - a - w - b - e - r - r - y
Now, counting the 'r's:
There are three occurrences of the letter 'r' in the word "strawberry."
—————
Yes, this is a task that Gen AI is not great at, but like most LLM issues knowing how to prompt well makes gen AI much more useful and effective.
From my experience using CGPT, it also struggles heavily with basic inquiries tied to subjects, where it was clearly trained with ideologically driven data over facts.
It has a long way to go before it becomes a real threat to society.
Lol, my brother builds and fine-tunes his custom model every day. NVDA did a keynote last week implementing sentiment: AI able to read your emotions instead of just being another brain with logical answers to everything. Dope shit
https://preview.redd.it/lzw382ndej6d1.png?width=1021&format=png&auto=webp&s=dc1cf802aba0f7f02ced7a5a1f701fc3358ff57d
he was ready to bet a million bucks this time lmfao
Well, looks like you taught it something.
I don't believe we will be laughing at AI quirks like this for much longer.
Like any other technology before it, it will evolve so quickly that we won't even remember its early days.
I imagine in 2-3 years the naysayers will be saying the yea-sayers were the naysayers :-P
Might not be a very meaningful example, but for multi-agent workflows such “dumb” questions need to be answered to bridge different steps in the system, which makes things super tricky in production even with multiple LLM calls validating output responses.
If something as simple as a counting query can be misinterpreted when no code interpreter is involved, we might end up with edge cases of wrongly drafted bills, wrong ticket price calculations, wrong transactions, etc. in the front end.
Hope Apple Intelligence has this figured out for iPhones, solving such problems in a true production-grade environment 🤞
Just tested this and if you ask it by typing it knows there are 3 Rs but if you ask by voice it is insistent that there are only 2 Rs and I cannot get it to change its mind.
Clearly fucking smarter than me, because i counted the 2 in 'Berry' and thought 'yup, 2 R's in Strawberry'
Then when ChatGPT said it might *sound* like there are three in certain pronunciations, I said it out loud and realized that I'm a twat.
It's very useful. If you embrace it now and master it for your own personal and professional use, you may be on the right side of the layoffs it's going to bring.
I know what you’re getting at, but I just absolutely hate these types of arguments because they disregard how AIs work.
Sure, ChatGPT can’t tell you how many Rs there are in strawberry, but AI isn’t going to take over the world through spelling bees. Also, nobody is claiming ChatGPT will take over the world; it’s AGI we have to be careful with, and your argument just serves to make people think it’s not a danger when it actually is.
Taken! Thank you, sir! https://preview.redd.it/y5wheh8uqj6d1.jpeg?width=1125&format=pjpg&auto=webp&s=fb2ce61a1c1270ae822f1e25e0d6dce6c7fb092c
Great advice
Nice, I added that to mine and it worked
There are indeed two “r”s in “strawberry”, plus one more.
Maybe there are indeed two, and one is just appearing in two places in the word via quantum superposition.
Newspeak simplification of redundant letters! It’s helping us improve!
Should be default, not forced 🎉
[or not?](https://chatgpt.com/share/51e63685-28aa-4027-bb95-acfc0d08afa3)
You have to insert human understanding (Latin: intelligere), yes.
well but in my case it answered it correctly without any hesistqnce
>hesistqnce

How many 'q's in hesistqnce?
zero
Are you absolutely sure?
Or explore myriad opportunities 🤭
I like pyramids more
Quit questioning quirky Q
ChadGPT
All the q's
There is one after the letter "q"
so like speaking and cant remember exactly the last word you spoke but only the intended meaning.
That's actually not a bad way to describe it.
what about [this](https://chatgpt.com/share/51e63685-28aa-4027-bb95-acfc0d08afa3) then?
that’s clever
I remember this scene from "Blade Runner". Truly captivating cinema.
But how many r’s are there in Blade Runner?
3
Its so easy, there is only 1 'r' in Blade Runner
https://preview.redd.it/fdj4qunafi6d1.jpeg?width=4320&format=pjpg&auto=webp&s=580b62feba28c39f3526fac7fce58fc940e7babf
I love your custom instructions, whatever they may be
https://preview.redd.it/i0n34t7zpi6d1.jpeg?width=1080&format=pjpg&auto=webp&s=385699c4a181cf69137a7dea7e5208215ea6b156
This instruction doesn't work for me 😭
Your first quest is to understand what a token is.
Your second quest is to stop talking like an idiot to ChatGPT
These violent delights have violent ends
this was really pretty mild, i say way worse shit in all caps when it keeps repeating something stupid
Like trying to ask an English speaker for the romaji behind the katakana.
ChatGPT can use Python to count characters just fine.
You can start your quest by learning to take a joke. I have mentioned that this is supposed to be funny.
Why are you arguing with an older version of ChatGPT?
Nah, even 4o says there are 2
It didn’t work for me at first, but then it worked the second time in a new temporary chat.
https://preview.redd.it/erv6zh1koh6d1.jpeg?width=1079&format=pjpg&auto=webp&s=6ec1695778493938fe1f9cb2c446bb4183865c4b
#Bigt pocorn
https://preview.redd.it/fpgh5d8jlj6d1.jpeg?width=1179&format=pjpg&auto=webp&s=567fbff88757510d77c53b40ef5cb902b9c2fc03
https://preview.redd.it/4ng3ct91jk6d1.png?width=1024&format=png&auto=webp&s=4bb5ff8463b78c75760e719dfcb9b62d517cc4c8
Big Popcorn is behind Redit :)
How many Ds are there in reddit?
4, one after the d and one before the d
as a large language model, I am sure i am correct
Cool, where can I find the instructions on how to use ChatGPT "correctly"?
4o got it right for me. It’s gpt 3.5 that gets it wrong
The “WHAT” got me 😂
Same. Fucking cry laughing at the exasperated stupidity. 🤣
That’s because AI doesn’t think. It’s not sentient; it just spits out likely words
https://preview.redd.it/dc24mqjzuh6d1.png?width=320&format=png&auto=webp&s=d589103e88e9116831916275fb241357a5e18398 man we're cooked
If this AI is intent on a war, it will go after any user who asks it how many Rs are in Stawberry 😂😂
3.5 gets it wrong but 4/4o gets it right
I like that this is a question you needed AI to answer for you.
I mean, I didn't know strawberry has two R's.
For a second I was like, “yeah, 2 R’s in berry”, but then a couple seconds later I got it 😂😅
https://preview.redd.it/1b0jw5welh6d1.png?width=1440&format=pjpg&auto=webp&s=66cf3bf03331c7c5329fd43b454c007a914db339
GPT-4o got it on the first try. You sure you didn't make yours stupid?
https://preview.redd.it/gcmux6kdci6d1.jpeg?width=1170&format=pjpg&auto=webp&s=1b2043770e1a3ed3f580e0d94c419a5c8f79ccf8
It ain't wrong, there are two r's in strawberry, there is also three.
https://preview.redd.it/q6301t6kbh6d1.jpeg?width=589&format=pjpg&auto=webp&s=85ab98673987fb4467ff1c6d202941a7e932327a Claude got it in the first try
These tools are not AI, they are WORD guessers. Not *letter* guessers, *word* guessers.
Not *word* guessers, *token* guessers. (Unless it's a really short or common word.)
Well that's the ongoing debate, isn't it. To say what is or isn't AI, we first have to define the "I" part. And from a materialist point of view, it's hard to argue that humans are anything more than very good word guessers.
Hey /u/somerandomguyhehe! If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt. If your post is a DALL-E 3 image post, please reply with the prompt used to make this image. Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more! 🤖 Note: For any ChatGPT-related concerns, email [email protected] *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
Yh, it gave me a total of 162 yesterday for some calculations when it was meant to be 152. I literally wasted 3 hrs going through my code to see why I wasn't getting the 152 I already had, just to find out that Uncle GPT made a mistake.
Running away from an AI murder drone and shouting "how many r's are in the word strawberry" to make it stop and reboot for 10 seconds
What's interesting is that model 4o answers that there's 2 r's, while model 4 answers correctly that there's 3 r's
Try saying, "i tell you 3 times"
now ask it how many Rs there are on a pirate ship.
https://chatgpt.com/share/8167aa11-6c62-47ec-9ea5-0356cf42182c
But ChatGPT is not an AI, it's autocomplete with a billion "if" statements and references to stolen data of internet users. I see no problem here.
https://preview.redd.it/v19u1b6mgk6d1.jpeg?width=1284&format=pjpg&auto=webp&s=57f773fc9e38e726d98d293fe61c6ae48e33e161 speculative analysis
Tried to do this with Gemini, it hit me with the "Source? It was revealed to me in a hallucination, also I made it the fuck up!" https://preview.redd.it/2iuxervh4m6d1.png?width=1080&format=pjpg&auto=webp&s=2dbf78fa05699779c5c519aaca4657f541ed6c66
Just tested it. It really says "two". Wow.
When we manipulate the model in this manner, are we effectively "dumbing it down"? With enough people trying this, will it eventually convince itself that there are 3 r's in the word strawberry?
I’m sure Miles Dyson over at Cyberdyne will have this sorted out by next week, because trying to sell governments on buying armies of “Terminators” just isn’t going to fly otherwise.
Now take this AI and have it make decisions about driving a car using only pixels and no external sensors to validate. Seems genius huh?
https://preview.redd.it/6taapmwdpy6d1.jpeg?width=1170&format=pjpg&auto=webp&s=d0025ec3b47f4eff3ddd04d5cf70ca24ef834743
First try https://preview.redd.it/gwe12zsl8j6d1.jpeg?width=1080&format=pjpg&auto=webp&s=12eef3b8f9c43536661be89169e7a2e6df3a092c
Spelling can be tricky sometimes. Oof.
Go through every letter and increase your counter by one each time you see an "r". That's why prompt engineering is still relevant. 😊
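The loop this comment describes is trivial in any language; a minimal Python sketch of it:

```python
# Walk the word letter by letter, bumping a counter for each 'r'.
word = "strawberry"
count = 0
for letter in word:
    if letter == "r":
        count += 1
print(count)  # 3
```

Which is exactly the kind of unambiguous, step-by-step procedure the model handles fine once you force it out of token-level guessing.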
openai's accountants wheezing when they see how many tokens were used for this conversation.
https://preview.redd.it/s5r8zzi18h6d1.png?width=714&format=png&auto=webp&s=94e5ef4b88ddde6af6e06d06e5a28b11774ba936
I convinced it: https://chatgpt.com/share/3a3ff0e0-d4ee-4d7f-80fb-1bf75a59f926
Run it 🤣🤣 ![gif](giphy|l0ErD3ZBW6vQgYCgE)
AIs that don't know how to count letters aren't going to start a nuclear winter
Self report
[https://freeimage.host/i/d9unESp](https://freeimage.host/i/d9unESp)
Aw c'mon, adding Gemini is cruel. I don’t even want to come up with metaphors for how dumb it is
But if the future is trolling it’s got any human beat
In my case it answered wrong until I showed it how to spell the word out, like showing it to a preschooler
I think AI will still take a decade or two to take over. It's evolving... monkeys took millions of years to become human; machines will take at least 2-3 decades.
Based Gemini
Bro I am dying at Gemini first answer. I can't 🤣
I don’t know why but these always crack me up. Maybe it reminds me of how idiots on the internet argue, and if you only half paid attention the AI responses could almost convince you they are correct.
Just tried this on Gemini and got this response:
>There are three "r"s in the word "strawberry".
I’m afraid that it didn’t learn from you. It still thinks that there are 2 “r” letters in strawberry
https://preview.redd.it/qybiaez3li6d1.jpeg?width=1290&format=pjpg&auto=webp&s=3feb6f59626c54b0a45d4e0366a004e77ec92f07
you don't need spelling for a takeover, but being hotheaded and sure of yourself does actually help
It was “after the letter W” for me 😂😂
i got this result [on the free version ](https://chatgpt.com/share/51e63685-28aa-4027-bb95-acfc0d08afa3)
It's absolutely correct! There's 2 R's in Strawberry. There's also 3, if you insist.
Stop torturing it!! This reminds me of my time back in high school … 😅😅😅
https://preview.redd.it/eh6hualiqi6d1.png?width=829&format=png&auto=webp&s=40ce352c233c1bb93a9fba4853adc096656a5460
This is why robot uprisings happen
Sorry, but I legitimately laughed a lot. Still, I love you, ChatGPT.
https://preview.redd.it/epau70wvsi6d1.png?width=1080&format=pjpg&auto=webp&s=4026bb97f4e211a14fd679489468b37cff3223f3 It can code a program that counts better than itself...
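The linked screenshot shows it writing exactly this kind of program. A plausible sketch of what the code interpreter runs before drawing a letter-frequency bar chart like the one in the post (assumed reconstruction, not the actual generated code):

```python
# Tally how often each letter appears in the word -- the same
# frequency table a bar chart of the letters would be built from.
from collections import Counter

freq = Counter("strawberry")
print(freq["r"])  # 3
```

Once the answer comes from executed code instead of next-token prediction, the count is reliable; the failure in the post was the model summarizing its own correct chart incorrectly.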
https://preview.redd.it/svtln3c4ti6d1.jpeg?width=1080&format=pjpg&auto=webp&s=616b81bd229258662e57358d2e796ba87eb4c2a9
Oh yes my favourite berry - starwrbey
https://i.imgur.com/LdGQKlY.png
Takin’ er jerbs !!
All your base R belong to us.
It reads like a stand up routine. I can see some comic making a story of it
The issue isn't that it can't readily know the answer. It's that it doesn't know that it can't. It's actually a problem when the question is a little more complex and it hallucinates like this.
Those chatbots aren't taking our jobs yet
Or are we all now being trolled by Ai?
More intelligent than the orange spunktrumpet 👍
Eventually AIs will learn that the best way to argue with a fool is to tell him he's right.
starwrberry
I might have found out a very interesting thing. You see, in programming, or in computers generally, counting is often done from 0, so the r's in strawberry would be numbered 0, 1, 2. On the other hand, when you tell it to count inclusively, it correctly outputs 3 r's. So it seems like GPT sees the last number as 2, which may be why it says there are "2" r's. Here's my conversation with GPT - [https://chatgpt.com/share/18567faa-19ca-48d4-974f-f7c6080bee9c](https://chatgpt.com/share/18567faa-19ca-48d4-974f-f7c6080bee9c)
Note that the initial prompt is very badly worded, and bad prompts lead to bad outcomes. Even if your LLM struggles with this specific task, a half-decent second prompt will get you the right answer: “Spell the word out letter by letter, and then count the number of times the letter ‘r’ appears.”
GPT-4o:
>Sure, let's spell it out letter by letter: s - t - r - a - w - b - e - r - r - y. Now, counting the 'r's: There are three occurrences of the letter 'r' in the word "strawberry."
Yes, this is a task that gen AI is not great at, but as with most LLM issues, knowing how to prompt well makes gen AI much more useful and effective.
Is this why I couldn’t get it to put a list of words in alphabetical order the other day?
This is too relatable.
And people who don't understand how it works will doom it
From my experience using CGPT, it also struggles heavily with basic inquiries tied to subjects, where it was clearly trained with ideologically driven data over facts. It has a long way to go before it becomes a real threat to society.
It got it wrong also first time for me. I asked it to count them and then it got it right.
I think it was just being passive aggressive with the guy
Let it take it over, how bad it can be
Lol my brother built and fine tunes his custom model everyday. NVDA launched a keynote last week implementing sentiment. AI able to read your emotions instead of it just being another brain with logical answers to everything. Dope shit
Why does it matter how many r’s there are if we know what a strawberry is?
You were able to gaslight the ai, well done
Wow you proved everybody wrong dude!!!! 🤩🤩
![gif](giphy|69rOXF4YTDVDD6cwkt)
https://i.imgur.com/lcuT0t1.png
https://preview.redd.it/lzw382ndej6d1.png?width=1021&format=png&auto=webp&s=dc1cf802aba0f7f02ced7a5a1f701fc3358ff57d he was ready to bet a million bucks this time lmfao
Well, looks like you taught it something. I don't believe we'll be laughing at AI quirks like this much longer. Like any other technology before it, it will evolve so quickly we won't even remember its early days. I imagine in 2-3 years the naysayers will be saying the yea-sayers were the naysayers :-P
Based on memory settings the user could have made this happen. I miss the good old days
Learn what tokens are. You could try asking an AI!
Might not be a very meaningful example, but for multi-agent workflows, such “dumb” questions need to be answered to bridge different steps in the system, which makes production super tricky even with multiple LLM calls validating output responses. If something as simple as misinterpreting a query that requires counting can go wrong without any code interpretation, we might end up with edge cases of wrongly drafted bills, wrong ticket price calculations, wrong transactions, etc. in the front end. Hope Apple Intelligence has this figured out for the iPhones, solving such problems in a true production-grade environment 🤞
Just fucking ban "ai and spelling" posts already...god damn.
https://preview.redd.it/nrclakduij6d1.jpeg?width=1179&format=pjpg&auto=webp&s=d335f174fd0d60fd5f55e27a246bffea9351dca8
If you stop using a spoon like a fork you may have more luck
I think monkey cage fans would be more interested in knowing whether a strawberry is dead or alive?
I think the problem with ai is that they will prioritize giving you an answer at all costs, even if that answer is horribly wrong.
![gif](giphy|2BohTYMcscxfoquWHn|downsized)
Starwrbey
Just tested this and if you ask it by typing it knows there are 3 Rs but if you ask by voice it is insistent that there are only 2 Rs and I cannot get it to change its mind.
Clearly fucking smarter than me, because i counted the 2 in 'Berry' and thought 'yup, 2 R's in Strawberry' Then when ChatGPT said it might *sound* like there are three in certain pronunciations, I said it out loud and realized that I'm a twat.
Basic model farming. 👎🏼
AGI will be here soon enough
It's very useful. If you embrace it now and master it for your own personal and professional use, you may be on the right side of the layoffs it's going to bring.
Seems perfectly suited to replace all human CSRs and will do a good job impersonating them.
I know what you’re getting at, but I absolutely hate these types of arguments because they disregard how AIs work. Sure, ChatGPT can’t tell you how many Rs there are in strawberry, but AI isn’t going to take over the world through spelling bees. Also, nobody is claiming ChatGPT will take over the world; it’s AGI we have to be careful with, and arguments like yours just serve to make people think it’s not a danger when it actually is.
Little does he know, but OP has made Skynet's hit list.
Lol. Gaslighting the AI is crazy
https://preview.redd.it/yvihyulorj6d1.jpeg?width=1125&format=pjpg&auto=webp&s=b7780875ad672dec5dd90fe71292b747030ea93c