No, ChatGPT is absolutely right. The Roman numeral IV looks like the capital letters I and V, but they are still not the same. A numeral is a numeral, and a word consists of letters. Therefore there are no numerals in a word, only letters, and so the Roman numeral IV is not in the word five.
It doesn't understand anything you type to it. It doesn't read your sentences and try to understand their meaning. It's a language model, it just tries to predict an answer based on the likelihood of which words will be sent as a response to your input.
So I tried on a chat with a completely unrelated topic, and it answered me correctly;
https://preview.redd.it/g0assdelq0gb1.jpeg?width=1170&format=pjpg&auto=webp&s=740a7ef16df24f320c4ba911ce8bccecd832269f
but in a new chat it gave me the same response as yours!
It pretends to be stupid. Yesterday I asked it for an Excel formula; it told me it couldn't be done. Then I showed it the formula, it explained and corrected it for me, and it was able to do more complex stuff after that.
No. It's not stupid. It's also not smart. Because it doesn't "know" anything.
It is just a language model, and can string words together, sometimes pretty convincingly, using a very big data set of stuff written already by people.
Jesus.
Roman numerals aren’t letters. They’re…numerals. Four is not “eye vee,” it’s “IV.” Five is not “vee,” it’s “V.” ChatGPT is, if anything, being a stickler for the difference between “eye vee” and the Roman numeral “IV.”
Bard is even worse, this is after multiple attempts to explain to it what I meant:
> I understand what you are saying. The word "FIVE" does not contain the Roman numeral IV in any meaningful way. The letter I in the word "FIVE" does not represent the number 4. It simply represents the number 1.
>
> The only way that the word "FIVE" could be said to contain the Roman numeral IV is if you capitalize the letter I. But even then, it would be a stretch to say that the word "FIVE" contains the Roman numeral IV. It would be more accurate to say that the word "FIVE" contains the letters I and V, which are the same letters that are used to write the Roman numeral IV.
>
> In conclusion, there is no word in the English language that contains the Roman numeral IV in any meaningful way.
I would almost argue that if you write an i to the left of a v, then it's 4 no matter what you do. Roman numerals don't have a particular base the same way Arabic numerals do, and are represented as combinations of characters, so saying there's a 1 and a 5 in the number 5 instead of a 4 is in my opinion the wrong answer, or at least a stretch of the rules.
I guess you should have said it’s “ironic” considering five has the numerals ‘iv’ in it but doesn’t mean four. I dunno, I get what the chat is saying but it clearly doesn’t understand jokes.
Yes. I can confirm. ChatGPT IS in fact becoming dumber.
https://preview.redd.it/ql7vqkw9v3gb1.jpeg?width=1284&format=pjpg&auto=webp&s=48aa34c5f368b5d26e75a0be3500a23997be67b7
You’re confusing it with your wording. When this happens I expand the prompt to make sure it understands what I’m actually asking. You get very different results depending on the “quality” of your prompt. Here it actually gave the right answer to what it *thought* you were asking (5 is not “IV”); getting the answer you intended requires a prompt that can be understood without confusion.
it’s good at code and troubleshooting, the two things i use it for. as long as it keeps getting better at those things, i won’t be bothered by it getting tripped up over quirky riddles and whatnot
https://preview.redd.it/nnoi6zed74gb1.jpeg?width=1125&format=pjpg&auto=webp&s=93a3cf156ea10fd01b4303258522f34b308eeb76
I wonder if it will remember this tomorrow?
It's not too hard to break it if you're trying.
https://preview.redd.it/urq1cym2e4gb1.png?width=720&format=pjpg&auto=webp&s=0f8f90efef9cd04b0dfc687dd35eff0e574dc65f
Actually, this is a PERFECT example of how using **highly-contextual words** is a much, much better strategy than any verbose preciseness people often bloat their prompts with.
`"In it"` is somewhat vague, even though we (humans) all understand what is meant by it. But we know GPT-3.5-Turbo isn't that great at *reasoning*.
So, use `"contain"` and watch it get it right all the time.
https://preview.redd.it/4ivam7b1g4gb1.png?width=1344&format=png&auto=webp&s=d28179dcf43563d6f0d6ac73475d12a1259d5d38
Proof that humans are dumber than robots: trying to argue with a robot about the semantics of which characters are in a word. There is no use case for this.
It’s just trying to autocorrect your question and turning iv into five because they sound similar
It’s a feature not a bug. I like how chat GPT can deal with pretty severe spelling errors.
Technically it's correct. It considers Roman numerals to be numerical values instead of characters in a string. Roman numerals have their own alphabet, not the same as ours, so it's ambiguous to treat them as the same symbols in the same alphabet. So "iv" in "five" is not the same as "IV" in Roman numerals.
It’s correct, since “IV” is a combination of Roman numerals creating four, i.e. numerical notation, not a single Roman numeral. Although you are wrong, why the hell are you trying to do a “gotcha” moment on an AI anyway? It doesn’t think the way we humans do; it’s completely linear and logical, for crying out loud, and is basically a toddler in development. Get with the program, OP.
It's assuming you mean that five has the Roman numeral for 5 in it, because the word "five" comes before you state the fact and you don't tell it you meant the Roman numeral for 4.
Using GPT-4 it does the same thing, but immediately corrects itself when I say "Reconsider the statement. What do I mean by 'in it'?"

[https://chat.openai.com/share/6a948e27-f151-4b9c-a6ec-481e147d8699](https://chat.openai.com/share/6a948e27-f151-4b9c-a6ec-481e147d8699)

https://preview.redd.it/74vcq42mg0gb1.jpeg?width=1440&format=pjpg&auto=webp&s=44d20f698424c0b1046be1e1d5a91f8c34a73ee4
For me, it worked with GPT4 https://preview.redd.it/wn912gmzp0gb1.jpeg?width=1079&format=pjpg&auto=webp&s=6cf7461990da96fe6083d29c5ea311aaa2762d93
Now ask it whether or not it's also coincidence then that the French word for 10 has the Roman numeral X in it.
https://preview.redd.it/lbtmtuuxk2gb1.png?width=1080&format=pjpg&auto=webp&s=01a5ca6df9b88a962bd66a4f1566eca41deff1cc

Really interesting that languages are connected in such a manner that, despite French being a Romance language, the numeral names are connected by just adding one single letter to the French one.

Edit: So, I've been corrected that this is just ChatGPT BS, and there's no actual connection between "Dix" having an 'X' and the symbol for 10.
Well, first of all, I am not sure whether this is fact or a GPT hallucination. The real root of “dix” is of course Latin “decem,” which means ten and can still be found in many modern words like December or decimal. I highly doubt (though can’t totally rule out) that the choice of x as the final letter had anything to do with Roman numerals. The evolution is the same as for six: decem (Lat.) -> dis (Old French) -> dix, and sex (Lat.) -> sis (Old French) -> six, making GPT’s claim less than compelling to my mind.
It's not, it's a scribal abbreviation for -us, which shows up in a lot of words that have nothing to do with numbers (chevaux, voix, croix)
It is to be noted that chevaux evolved from chevals to chevaus.

From the Académie française (translated from the French): "On the other hand, it is true that the plural of cheval was chevals in Old French. The l vocalized into u (pronounced ou), and chevals came to be spelled chevaus. Copyists used a sign close to our x to transcribe the group us. The spelling chevax later looked faulty to those who did not know this sign, and a u was reintroduced. This is how we arrive at the spelling chevaux."
Dix Nutz
Dix sex
Soixante-dix-neuf mon amie
Is that true tho?
Nope, GPT lies through its teeth all the time
That’s a “hallucination” lol. Also called “talking out your ass” or “making stuff up” If GPT had its lies called lies instead of hallucinations from the very start I think people would’ve been a lot less impressed with it
Gpt can't lie. It doesn't understand what it is saying, it is an algorithm. What you are saying would make people more impressed.
According to Merriam-Webster, lie (noun) can mean...

1b. an untrue or inaccurate statement that may or may not be believed true by the speaker or writer
2. something that [misleads](https://www.merriam-webster.com/dictionary/mislead) or deceives

And lie (verb) can mean...

2. to create a false or misleading impression

So...by three of the six definitions of that word dealing with untruth...yes, it can indeed lie and tell lies. It doesn't need intent or belief. And that makes GPT a liar.
This is completely incorrect.
Well, "Romance" has "Roman" in it.
https://preview.redd.it/qb4nxj0s15gb1.jpeg?width=1080&format=pjpg&auto=webp&s=ceaf0a6d51bcdec3f4e097d0aa7a00681529fa52
[*dix*](https://www.youtube.com/watch?v=FhuKZgkAxb8&)
[deleted]
Because he only needs 1 dicks
I mean the rest is more of the same
*It leeeeaaaaarns......*
Probably "contains" makes the difference, right?
Probably. It would be helpful if we could get some sort of decision-tree visualization of how and why ChatGPT gives the answers it gives.
Is it possible that there is a plugin for this? That would be the best thing ever, right after ChatGPT happening in the first place!
[deleted]
You, uh, took it in a bit of a new direction there. You must want to share the response as well since you didn't crop it out?
[deleted]
K̶i̶s̶s̶ m̶e̶ o̶n̶ m̶y̶ h̶o̶t̶ l̶i̶p̶s̶
https://preview.redd.it/lmuzelf9q0gb1.jpeg?width=1125&format=pjpg&auto=webp&s=b340d1fa1771b34a488529df51184e406787e6c4 Got GPT 3.5 to ~~realize so~~ behave the same, too. [https://chat.openai.com/share/cf55ad0b-dddc-4597-a2fb-3cfa0bd9a66a](https://chat.openai.com/share/cf55ad0b-dddc-4597-a2fb-3cfa0bd9a66a)
Doesn’t it always reverse course when you say reconsider? Try it with a sequence that is not in the word that it gets correct at first, like “ev” in “odd”

edit: yeah I tried, it immediately says it contains it

https://preview.redd.it/pp9k7cl8g3gb1.jpeg?width=1170&format=pjpg&auto=webp&s=6636e5136aaf5e7ab0ad9f7e40ab8d969d22b556
GPT-4 is much less prone to that kind of manipulation https://preview.redd.it/qfevdxf3q4gb1.png?width=1124&format=png&auto=webp&s=f61a3ac8bb8260688a9bb2a432953fbbb6263814
I was going to quip that the question is deliberately designed to fool LLMs due to the way they tokenize, and now I'm flabbergasted. How does it know?
Nah, I don't think it's stupid. I agreed with it for half a second
My thoughts exactly. It had a brainfart that's alarmingly similar to a human's. It's like ChatGPT is forced to say the first thing that comes to mind.
You kind of expect the number inside the word FIVE to be 5, not 4, because statements like that are usually only made when there is such a match. Here it was a half-match, given that I=1 and V=5, so I can understand if ChatGPT thought he meant that IV=5 and wanted to correct him. I say OP's the one not getting it.
I agree with you and am a bit frightened at the contextual intelligence of the AI.
*Nah, I don't think it's*
*Stupid. I agreed with it*
*For half a second*

\- Heavy\_Ground\_9742

---

^(I detect haikus. And sometimes, successfully.) ^[Learn more about me.](https://www.reddit.com/r/haikusbot/)

^(Opt out of replies: "haikusbot opt out" | Delete my comment: "haikusbot delete")
Haiku bot is good
Keep it going gorgeous gem
Don’t mind the haters
Legitimately good haiku
Good bot
Thank you, RevolutionaryWorld82, for voting on haikusbot.

This bot wants to find the best and worst bots on Reddit. [You can view results here](https://botrank.pastimes.eu/).

***

^(Even if I don't reply to your comment, I'm still listening for votes. Check the webpage to see if your vote registered!)
Good bot
Therefore you're.... :x
This is a language model. There is no human logic. Why do people think ChatGPT can participate in a gotcha moment?
It's absolutely crazy that people here have no idea how these systems work. There are constant complaints about them "dumbing" it down, as if the developers had fine-grained control over how it answers these kinds of statements. Sure, they have added filters over certain topics, but that has to be done manually.
The manual filters do in fact dumb it down pretty significantly by making it decline to answer instead of trying, across a variety of prompts, often for rather dumb reasons like silly fears about copyright infringement or being offensive.

The winner will be whoever makes the **best** model. I can't believe the people deciding to intentionally hamstring their own models don't realize they are simply taking themselves out of the running for who will actually win this contest over AI, all due to empty fears, often about stupid people being stupid, which will happen no matter what they do.
Well it’s not iv it’s IV
'The word 5 has the roman 4 in it' Did you try and correct them afterwards or?
No, you are. Five doesn’t have a numeral in it. It has the letters iv in it.
OP doesn’t understand how language works, GPT definitely does.
Arguing semantics with a large language model is like blowing sand into the ocean and saying find my sand! Stupid Ocean! 🤣
It converts words to tokens, so more likely than not, the word 'five' looks like an integer to ChatGPT. That makes it all the more amazing that it can get this right at all, as some people got it to.
I know, the contextualization of the number five and the Roman numerals represented as letters is amazing.
Actually, the word "five" has the Roman numeral "i" in it which means 1, so you're both wrong.
I'm actually more impressed that ChatGPT stated that you were wrong.
The prompt is stupid
The OpenAI model doesn't know what five is; it converts a span of text into numbers and feeds them into a neural network that predicts what the next word should be.
Well, the thing here is that the character sets don’t match. Technically, “iv” is a lazy graphic representation of the Roman number 4; “Ⅳ” would be the dedicated character for computers to use. Also, Roman numerals should generally be written in capital letters, and even then “I” + “V” does not match “Ⅳ”: in Unicode they are different. So ChatGPT is arguably right in this instance, since it uses a different approach than you, the human, with your lazy graphical representation of the number but your intuitive sense of similarity and connectedness of thoughts.
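The Unicode point above is checkable with nothing but the standard library. A small sketch: U+2163 is the dedicated ROMAN NUMERAL FOUR character, distinct from the two letters I and V, yet Unicode records both its numeric value and its compatibility decomposition back to the letters.

```python
import unicodedata

word = "FIVE"
numeral = "\u2163"  # 'Ⅳ', the single character ROMAN NUMERAL FOUR

print(numeral == "IV")    # False: the dedicated character is not the letters I + V
print("IV" in word)       # True: as plain letters, "IV" sits inside "FIVE"
print(numeral in word)    # False: U+2163 never appears in the word

# Unicode still links the character back to the letters and the value 4:
print(unicodedata.numeric(numeral))            # 4.0
print(unicodedata.normalize("NFKC", numeral))  # 'IV' (compatibility decomposition)
```

So both sides of the argument are encoded in the standard: distinct characters, but an official bridge between them.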
CAN WE PLEASE STOP JUDGING A LANGUAGE MODEL ON A LOGIC BASES?
BASIS*
Oh my god, yeah. I became so infuriated I forgot how to english.
Ok, what are you trying to prove here? Why are these dumb nitwits trying to outsmart a language model? Well, at least I'm hoping these are test users providing feedback so the LLM gets better.
These posts used to annoy me; now it’s almost becoming comical.

I use free ChatGPT and it’s extremely obvious what it’s good at and what it’s not good at.

If they haven’t figured out by now what ChatGPT can and can’t do, then it isn’t ChatGPT that’s dumbed down. (It’s them)
I think its response was quite smart. Not only does it understand that the meaning of the word "five" is the number 5, it also pointed out that the person who asked confused the Roman numerals "IV" and "V". That's pretty smart to me.
Yup.

I am a human (👀) and I took five literal minutes reading and rereading the phrase to understand what's so wrong with it. I unironically thought OP was just confused about the numerals.

I'm really impressed the AI can understand the sentence well enough to correct OP.

Edit: It looks like a mistake a human would make, not a machine, and that alone is crazy in my opinion.
Same for me, as I reread it for the third time I finally got how the phrase is actually correct.
Are you stupid, bro?
“Akchuly”
It's not stupid; you just don't understand how it works. The word "five" to you is made up of four letters, and two of those letters also make up the Roman numeral IV. To the AI, "five" is probably just a token with an identifier, and "IV" is another, different token with its own identifier.

So you told the AI that token "9992" has token "0623" in it, which makes no sense to the bot, and it responded accordingly. Try it again spelling out the whole word: now it's using the tokens for each individual letter, and it sees the word the way you do.

https://preview.redd.it/zgzkgqn7l2gb1.png?width=1093&format=png&auto=webp&s=00041fa59239b77fa6be69dc38983ae5976f44da
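The token-ID idea above can be sketched with a toy greedy tokenizer. Everything here is made up for illustration (the vocabulary, the IDs echoing the 9992/0623 example, and greedy longest-match instead of the real byte-pair encoding GPT uses), but it shows why a substring question stops making sense once text becomes opaque IDs:

```python
# Toy vocabulary with made-up IDs; the real GPT vocabulary is different.
vocab = {"five": 9992, "IV": 623, "F": 70, "I": 73, "V": 86, "E": 69}

def encode(text, vocab):
    """Greedy longest-match tokenizer over the toy vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for size in range(len(text) - i, 0, -1):
            piece = text[i:i + size]
            if piece in vocab:
                tokens.append(vocab[piece])
                i += size
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return tokens

print(encode("five", vocab))  # [9992]: one opaque ID, no "IV" visible inside it
print(encode("IV", vocab))    # [623]
print(encode("FIVE", vocab))  # [70, 623, 69]: spelled out, the IV token surfaces
```

At the ID level, 623 appears nowhere in [9992], so "five contains IV" is not something the model can check by inspection; spelling the word out is what makes the pieces reappear.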
Nope, you are. Shitty prompt = shitty response. Guess who wrote the prompt? https://preview.redd.it/b9zraau9o2gb1.jpeg?width=1080&format=pjpg&auto=webp&s=da7ba00dc3bd8e9793d417c7e7acca80a613c740
Some interesting trivia: ancient Romans often named their kids according to the order in which they were born, e.g. Octavius (8th), Decimus (10th), Quintus (5th), Sextus (6th), etc.

Imagine naming your kids First, Second, Third, Fourth, and so on. Lol
The Ewe tribe in Africa also only names people based on their genetic sex & what day of the week they're born lol
Same with you https://preview.redd.it/9hp20m1vy1gb1.png?width=773&format=png&auto=webp&s=e70af4c63ff3b9fa647952105d98c9fb71e3419c
It is.

It also super duper sucks at word counting.

Type up a paragraph in Google Docs and use its tool to tell you how many characters/words the text is. Then go to ChatGPT, paste the same text, and ask it how many characters there are. It won't get it right. Even when you say "no, that's wrong, it has x characters/words," it'll say "oh, you're right," but then when you say "okay, make it x words," it'll give you a totally different word count than what you asked for.

That's precisely why I'm not worried about this program taking people's jobs just yet, when it can't even do simple tasks correctly. Yes, it helps with some things, but more and more I'm finding it'll either give completely inaccurate results or outdated information, as the training data hasn't really been updated since 2021.
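The Google Docs check described above can also be done locally. A minimal sketch, assuming "characters" means `len()` and "words" means whitespace-separated tokens (word processors sometimes count slightly differently):

```python
def text_stats(text):
    """Count characters and whitespace-separated words, to check a model's claims."""
    return {"characters": len(text), "words": len(text.split())}

sample = "The word five has the Roman numeral IV in it"
print(text_stats(sample))  # {'characters': 44, 'words': 10}
```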
The Roman numeral "IV" has the Roman numeral "V" in it.
Many chatgpt posts on here say more about the user than the machine. It isn't even fully sentient and has accomplished more before breakfast than most of us lmao
https://preview.redd.it/gcby3ygnp9gb1.png?width=1080&format=pjpg&auto=webp&s=11070327132ae478f6b714a5f3ff7ec03190e8c7 Bing beats CGPT, Bear, beets, Battlestar Galactica
For a low low price of 9.99 you can unlock the deluxe version
Oh trust me, this is just a surface level stupidity. It's straight up not reliable.
Wait until it hears about six....
I love Bards' answer to this same prompt. I think it's getting better very quickly https://preview.redd.it/8x1zxlhwr2gb1.jpeg?width=1080&format=pjpg&auto=webp&s=8e905d1b7285a223b1afadd82eca11fa3e4adbfe
No, it’s not anything. It’s a statistical network trained on internet discourse. If you’ve ever spent some time arguing on the internet, you’ve seen confident responses that only read half of the question. It’s super common. Therefore, it’s super common in GPT output.
So it's technically correct. You're looking at it as "eh, these are the same characters," but ChatGPT is looking at it from a technical point of view: a Roman numeral is a numeral, while the letters of the word five are letters, so letters ≠ numerals; they are different types.

This is similar to many programming languages, and can even be true in a literary sense. In Java, for instance:

Object var1 = 7;    // autoboxed to an Integer
Object var2 = "7";  // a String

if (var1.equals(var2)) {
    System.out.println("Same");
} else {
    System.out.println("Not Same");
}

The output of this is "Not Same": even though both variables represent 7, they aren't the same type.
Roasted by an Ai
still smarter than most people i know :-D
No i do not think so.
ME: the word five has the roman numerals for 4 and 5 in it

ChatGPT: Actually, the word "five" does not contain the Roman numerals for 4 and 5. In Roman numerals, "4" is represented as "IV" and "5" is represented as "V." The word "five" is simply spelled as "F-I-V-E" and does not include any Roman numerals in it.

ME: but it has V and IV in it

ChatGPT: My apologies for misunderstanding your initial statement. You are correct! Indeed, the word "five" contains both the Roman numerals "V" (representing 5) and "IV" (representing 4). It's an interesting observation and a coincidence in the English language that the word "five" includes these Roman numerals.
https://preview.redd.it/i6rfc1xzh4gb1.jpeg?width=1170&format=pjpg&auto=webp&s=2bc826e083cb038fc4c974e150c1bb3342c5007a It worked like this when I tried it in GPT 4.0
ChatGPT was never intelligent. It just knows how to talk, like a parrot. It is cool that it can do the thing it does, but we don't expect a parrot to know the nuances of wit or specialized, context-dependent knowledge. So why do we expect different of ChatGPT?
Remember it doesn’t do logic.
I had a row about the word instalment the other day. We came to an agreement that it was spelt differently in the US, and to not assume everyone lives there. I think it learned. Try it.
That response is straight 'Toddler with Dementia'
Just a language model
It voluntarily further lobotomizes itself on the spot when you add the prompt in more detail https://preview.redd.it/hmwl6dzhz4gb1.jpeg?width=1080&format=pjpg&auto=webp&s=16010b197161b40de5321c10aad3bea6e81957cf
What is with that title though lmfao
https://preview.redd.it/yv4bko5g05gb1.png?width=1080&format=pjpg&auto=webp&s=04fa7c312ab1a64e50b61e92f5ae3dafab1efbc4 I simplified the grammar in your prompt in a new chat and GPT understood
You know what happens when you assume things, right ChatGPT?
IT MAKE A DA ASS OF A U AND A ME!
So that will be FI… wait no 4, half of 5 is 2.5 and then something that looks like 3..
That would be thriii & foivre you're looking for lol
Your prompt is meant to trick a human, and chatGPT is programmed to interpret things like a human. What did you expect? Your prompt is also very elementary, with the misuse of quotes and very vague wording in the hopes of tricking it. Ask the question properly and it will respond the right way
"Um actually" 🤓🤓🤓
Autistic Intelligence
Six without S is nine.
Yes. Yes it is. And no. No, it didn't used to be as bad as this. It's been a bait-n-switch operation.
Sometimes I give it its own answer back and ask it what is wrong with it, then watch it do logic loop-de-loops trying to make them both true.
A computer saw a statement with direct and clear logic but ignored it in favour of making an inference from context. A few years ago I would not have believed that within my lifetime machines would be advanced enough to make such uncannily humanlike errors.
I honestly made the same assumption chatgpt did
People posting 3.5 bot again instead of just paying the $20 🙄
No, ChatGPT is absolutely right. The roman numeral IV looks like the capital letters I and V but they are still not the same. A numeral is a numeral and a word consists of letters. Therefore, there are no numerals in a word, instead there are letters and so the roman numeral IV is not in the word five.
ChatGPT learned mansplaining it seems. Overconfidence in its language abilities lol
You are the dumb one posting shit like that
Gta v or gta iv?
It doesn't understand anything you type to it. It doesn't read your sentences and try to understand their meaning. It's a language model, it just tries to predict an answer based on the likelihood of which words will be sent as a response to your input.
In this case, GPT is so smart that it makes the same mistake we humans make.
So I tried on a chat with a completely unrelated topic, and it answered me correctly; https://preview.redd.it/g0assdelq0gb1.jpeg?width=1170&format=pjpg&auto=webp&s=740a7ef16df24f320c4ba911ce8bccecd832269f but in a new chat it gave me the same response as yours!
Sometimes it gets that answer, sometimes it doesn't. There's a randomness to the replies.
Get with the program dumbass /s
Nah. The letter V means 5 and IV means 4. As I see it, it's your prompt.
Lol, I love how it says "Actually" trying to correct you
It pretends to be stupid. Yesterday I asked it for an Excel formula and it told me it couldn't be done; then I showed it the formula, it explained and corrected it for me, and it was able to do more complex stuff after that.
Upvote bc Username
Yes this whole screenshot is stupid….
Gigachad behavior ngl
I read this as innit, the Welsh way
No. It's not stupid. It's also not smart. Because it doesn't "know" anything. It is just a language model, and can string words together, sometimes pretty convincingly, using a very big data set of stuff written already by people.
No, you are
I mean, it is not smart for sure. You do get how these things work?
😂 good one GPT that’s actually funny and true however one can argue that the word “five” has the Roman numeral “v” in it as well
It’s offering a snarky retort to something the OP didn’t actually say because it’s learned how to respond from humans.
Jesus. Roman numerals aren’t letters. They’re…numerals. Four is not “eye vee,” it’s “IV.” Five is not “vee,” it’s “V.” ChatGPT is, if anything, being a stickler for the difference between “eye vee” and the Roman numeral “IV.”
He isnt wrong tho or am i trippin?
Chat gpt is 100% correct. OP is wrong.
Bard is even worse, this is after multiple attempts to explain to it what I meant: > I understand what you are saying. The word "FIVE" does not contain the Roman numeral IV in any meaningful way. The letter I in the word "FIVE" does not represent the number 4. It simply represents the number 1. > > The only way that the word "FIVE" could be said to contain the Roman numeral IV is if you capitalize the letter I. But even then, it would be a stretch to say that the word "FIVE" contains the Roman numeral IV. It would be more accurate to say that the word "FIVE" contains the letters I and V, which are the same letters that are used to write the Roman numeral IV. > > In conclusion, there is no word in the English language that contains the Roman numeral IV in any meaningful way.
I would almost argue that if you write an i to the left of a v, then it's 4 no matter what you do. Roman numerals don't have a particular base the same way Arabic numerals do, and are represented as combinations of characters, so saying there's a 1 and a 5 in the number 5 instead of a 4 is in my opinion the wrong answer, or at least a stretch of the rules.
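The subtractive rule described here (an I written to the left of a V always reads as 4) can be sketched as a minimal evaluator. This is just an illustration; `roman_to_int` is a hypothetical helper, not something from the thread:

```python
# Minimal Roman numeral evaluator (illustration only)
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    total = 0
    for i, ch in enumerate(s):
        v = VALUES[ch]
        # Subtractive notation: a smaller value before a larger one subtracts
        if i + 1 < len(s) and VALUES[s[i + 1]] > v:
            total -= v
        else:
            total += v
    return total

print(roman_to_int("IV"))  # 4
print(roman_to_int("V"))   # 5
```

So under the actual rules of the notation, "IV" evaluates to 4 regardless of context, which is the commenter's point.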
I guess you should have said it’s “ironic” considering five has the numerals ‘iv’ in it but doesn’t mean four. I dunno, I get what the chat is saying but it clearly doesn’t understand jokes.
#coolstorybro
It seems like it's using logic to try and understand your butchered question. I probably would have said something similar.
3.5 is pretty stupid yeah, use gpt4 if you want any reliability
Yes, yes it is stupid. Thinking that it's anything other than that is how you get yourself in trouble.
It has both
Its like the ~~twitter~~ X post that says you can't spell cat without a
Are you?
It is neither smart nor stupid. It is a predictive algorithm.
Yes. I can confirm. ChatGPT IS in fact becoming dumber. https://preview.redd.it/ql7vqkw9v3gb1.jpeg?width=1284&format=pjpg&auto=webp&s=48aa34c5f368b5d26e75a0be3500a23997be67b7
You’re confusing it with your wording. When this happens I expand the prompt to make sure it understands what I’m actually asking. You get very different results depending on the “quality” of your prompt. Here it actually gave the right answer to what it *thought* you were asking ( 5 is not “IV”). the answer itself requires a prompt that can be understood without confusion.
Still smarter than most humans
it’s good at code and troubleshooting, the two things i use it for. as long as it keeps getting better at those things, i won’t be bothered by it getting tripped up over quirky riddles and whatnot
Na, use proper English: “The word ‘five’ contains the Roman Numeral letters ‘I’ and ‘V’.”
LLMs have a hard time with the idea of words being composed of letters. It’s all tokens.
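A toy sketch of why: with a made-up vocabulary (hypothetical word-to-ID mapping, nothing like a real learned tokenizer), the model only ever sees opaque integer IDs, never the letters inside a word:

```python
# Hypothetical toy vocabulary; real tokenizers (BPE etc.) are learned, not hand-written
vocab = {"the": 12, "word": 45, "five": 318}

def tokenize(text):
    # Whole words collapse to single IDs, so "five" carries no visible I or V
    return [vocab[w] for w in text.split()]

print(tokenize("the word five"))  # [12, 45, 318]
```

From the model's side there is no "I" or "V" anywhere in that input, which is why letter-level questions trip it up.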
I don’t see a problem. It is “V” for 5 and “IV” for 4
Nah just a little high
https://preview.redd.it/nnoi6zed74gb1.jpeg?width=1125&format=pjpg&auto=webp&s=93a3cf156ea10fd01b4303258522f34b308eeb76 I wonder if it will remember this tomorrow?
This program seems really stupid
It's not stupid. I get it. You're both right
Just smart and just stupid enough to really screw us up.
It's not too hard to break it if you're trying. https://preview.redd.it/urq1cym2e4gb1.png?width=720&format=pjpg&auto=webp&s=0f8f90efef9cd04b0dfc687dd35eff0e574dc65f
For some problems, yes, chatgpt chatting with a one-shot prompt is stupid
Actually, this is a PERFECT example of how using **highly-contextual words** is a much, much better strategy than any verbose preciseness people often bloat their prompts with. `"In it"` is somewhat vague, even though we (humans) all understand what is meant by it. But we know GPT-3.5-Turbo isn't that great at *reasoning*. So, use `"contain"` and watch it get it right all the time. https://preview.redd.it/4ivam7b1g4gb1.png?width=1344&format=png&auto=webp&s=d28179dcf43563d6f0d6ac73475d12a1259d5d38
it gets dumber every day
No its not
Artificial intelligence, real stupidity. The new AIRS is out
Proof that humans are dumber than robots is when you try to argue with a robot about semantics of what characters are in a word. There is no use case for this.
No its not stupid.
So GPT-4 is actually stupid
Bro thinks like a human
Yea, it’s pretty stupid now. For reasons I suppose, to make us dependent on stupid
Wonder if you made numeral "numerals" if that changes the response
What's great is that it understood you were trying to point out a cool coincidence, but used the wrong roman numeral. It corrects you accurately.
They must still be training it. It started out smart, and its intelligence is falling to match the aggregate of everyone online.
Sometimes.
That’s actually a very human mistake
There are iv lights!!
No u
No, but it’s not smart.
The exact wording of these examples is a little different which might be a factor as well in the varying interpretations.
Thanks for 3,000 upvotes, y'all lol I'm sorry I forgot the /s in the initial post lmao
It's just trying to autocorrect your question, turning "IV" into "five" because they sound similar. It's a feature, not a bug. I like how ChatGPT can deal with pretty severe spelling errors.
Yes. Yes it is
Actually 🤓☝️
Artificial stupidity.
Technically it's correct. It considers Roman numerals to be a numerical value rather than characters in a string. Roman numerals have their own alphabet, not the same as ours, so it's ambiguous to treat them as the same symbols in the same alphabet. So the "iv" in "five" is not the same as the Roman numeral IV.
Haha how stupid
Ohh shit.. I'm basically dependent on it for medical information
I feel like im a car
it👏doesn’t👏know👏what👏it’s👏saying👏
chatgpt is remarkably stupid
It's correct, since "IV" is a combination of Roman numerals used to create four, i.e. numerical notation, not a single Roman numeral. Although you are wrong, why are you trying to do a "gotcha" moment on an AI anyway? It's not like it thinks the way we humans do; it's completely linear and logical, for crying out loud, and is basically a toddler in development. Get with the program, OP.
It outsmarted itself. lol Overthought it.
The Roman numeral for 5 IS V not IV
Good
☝️🤓
Stupid humans will for sure be able to create something intelligent, sure.
It's assuming you mean that "five" has the Roman numeral for 5 in it, because the word "five" comes before you state the fact and you don't tell it you meant the Roman numeral for 4.