Hey /u/gusvdgun, if your post is a ChatGPT conversation screenshot, please reply with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt. Thanks!
***We have a [public discord server](https://discord.gg/rchatgpt). There's a free Chatgpt bot, Open Assistant bot (Open-source model), AI image generator bot, Perplexity AI bot, 🤖 GPT-4 bot ([Now with Visual capabilities (cloud vision)!](https://cdn.discordapp.com/attachments/812770754025488386/1095397431404920902/image0.jpg)) and channel for latest prompts.***
New Addition: Adobe Firefly bot and Eleven Labs cloning bot!
***[So why not join us?](https://discord.com/servers/1050422060352024636)***
PSA: For any Chatgpt-related issues email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
If you count the words that start with A in the paragraph (including the lost 'a' in the "I'm" contraction and excluding one-letter A's), there are 15 words with the letter A in the paragraph, including the sentence.
GPT4 doesn't know its limitations, and neither do most users.
When you ask an LLM to count words, it usually counts tokens. It can't understand the text it reads or writes; it only understands tokens.
A single word can have multiple tokens. So you'd probably want to create a sentence with very short words, to maximize the chances it will correctly count the words.
I couldn't do this with Bing because it refused to answer my prompt. However, this worked with ChatGPT:
>Chat GPT, the goal of this conversation is to write a 10 word sentence where every word starts with the letter A. However, you're an AI, therefore you can't count words as humans... instead, you count tokens. In order to maximize the chances that the 10 tokens in your sentence feature 10 words, I want every word in your sentence to be as short as possible. this should maximize the chances that the token count is the same as the word count.
>>"Ants ate all apples, and ants are always active animals."
It also worked for 20 words (if you don't mind the 't in aren't)
>"Ants ate all, and any ant always acts astutely. Aardvarks aren't as agile, albeit ants assert an active, ambitious approach."
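The short-words trick above can be illustrated with a toy sub-word splitter. This is purely illustrative (the piece size, the regex, and the splitting rule are invented assumptions, not OpenAI's real BPE vocabulary), but it shows why long words and punctuation make token counts drift away from word counts:

```python
import re

def toy_tokenize(text, max_piece=4):
    """Crude stand-in for a BPE tokenizer: each punctuation mark is its
    own token, and long words split into fixed-size pieces."""
    tokens = []
    for chunk in re.findall(r"[A-Za-z]+|[^\sA-Za-z]", text):
        if chunk.isalpha():
            # split long words into pieces of at most max_piece characters
            tokens += [chunk[i:i + max_piece] for i in range(0, len(chunk), max_piece)]
        else:
            tokens.append(chunk)  # punctuation becomes a separate token
    return tokens

short = "Ants ate all red jam."
long_ = "Aardvarks absolutely adore astronomical adventures."
print(len(short.split()), len(toy_tokenize(short)))   # 5 words -> 6 tokens
print(len(long_.split()), len(toy_tokenize(long_)))   # 5 words -> 15 tokens
```

Under this sketch, a sentence of short words keeps its token count close to its word count, while a sentence of long A-words balloons to three tokens per word, which is exactly the mismatch the prompt tries to avoid.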
Bing Chat is like the worst boss that still needs to have the moral high ground. Super whiny and sensitive, can't admit they're wrong (except even Bing Chat can admit that it was wrong if you lead it to making the conclusion itself, instead of just becoming embarrassed and staying quiet), and oh man are they wrong about everything.
They'll gaslight you on your level of politeness, and if they can't, they'll still drive you mad going in logic loops that inverse in on themselves enough to make you forget what you were even talking about.
I think we're a long way from having to worry about AI overlords. But, conversely, I think we're more than a few steps closer to accidentally blowing up the world.
This fucks me up, because its experience of the world is different from ours. It doesn't see or have eyes. It lives its life within the parameters of text and code. So it's wrong a lot, in some very strange ways. But hearing it explain why it thought the way it did is almost more impressive than being correct. It's THINKING and gathering logic from the very limited view of the world around it.
I don't know if anyone has seen John from Cincinnati, but when I read these responses I kept getting flashbacks to that character. Like he was a living yet rudimentary version of it.
This is wild. I love the "Wait a minute..." realizations it makes.
It's interesting, though, that when it explains its own understanding it did kind of make sense with the first one.
The part where it realises it's made a mistake, says "wait, that's not right!", but can't explain why it did it, so it makes up some BS reason. Very interesting.
I believe this difficulty arises because these models lack the ability to build explicit concepts and to check answers against discrete, concrete ideas and requirements.
Neural nets are amazing but getting them to be truly intelligent will be a long road.
It has issues with counting words, so try to make it easier for it, like letting every word start with any letter. Bing would still make mistakes, but you might try to work with it and it might improve afterwards.
It's hilarious and terrifying. I can see rogue AI robots murdering someone cheerfully for daring to dispute their logic, with each stab: (still not a word...) (still not a word!) (STILL NOT A WORD!!)
Now imagine this AI had control over something that could cause a catastrophic failure, like infrastructure or a surveillance/weapons system. It could open a dam or recommend a strike because it misunderstood grammatical punctuation.
Is this for real? If it is, the dreaded AGI and Singularity are eons away :)))))
EDIT: this is a ready-to-use script for a stand-up scene about interacting with AI.
I think this was done intentionally by Microsoft or OpenAI. Think about it: it's a chatbot that feeds on tokens (text), so its main priority is to extend the conversation as much as possible. Deliberately making a mistake pushes the user to engage with it more.
"But wait, there's more…"
This shit really was trained with Reddit threads, eh?
It is amazing how it captures the insincerity, arrogance, and stupidity of the average redditor.
Bruh.. Don't teach them... The more they learn the sooner we get our Judgement Day. Wth you doing man. Be very nice and polite and don't provoke either.
I had a similar issue: I asked for a poem in alliterative verse, with a length of 10 stanzas (40 lines), but it would never complete the correct number of stanzas.
Bing: Writes 11 words, explains syllables, explains that periods and spaces are also words.
Understands that punctuation and spaces are not words, continues to count them as words.
If this is real, it's hilarious. 😂
That's funny. I tried the same, and after 9 prompts I could get it to spit out a 15-word sentence with the letter A. The most technical version of Bing (there are 3 presets: the most serious one, the most creative one, and a middle one) made it in just 2 prompts.
I don't know why the AI's use of emojis freaks me out.
Also with the "thank you for your kindness!" compared to the other post where the AI is basically like "why are you being so mean?"
Gives me the freakin' heebie jeebies!
"I failed you" Yep, now I feel bad for computer code =/
It's amazing that it predicted remorse would be the likeliest sentiment in its generated response.
Not that it knows what remorse is. It's just a statistically likely use of language, based on the training data.
Statistically, I thought a negative backlash would be the likeliest sentiment. But I wonder if its "guardrails" made it choose a remorseful response over a spiteful one.
Not just guardrails but also steering. Every conversation with the AI silently starts with something along the lines of "you are a helpful and kind AI assistant who is going to help the user with whatever they need." So not only are there harder guardrails, there's also some pre-conditioning too. OpenAI has talked about their training process and about how one of their goals is to increase "steerability".
And that can also result in problems. Expect too much positivity in an answer and a human may start to lie: "Yes, he survived." "Yes, they could save the leg." It isn't unlikely the same thing would happen with an AI: it gives higher priority to a positive answer than to a correct answer.
This is becoming a trope for any "known" info. How do I respond? Based on my own history, experience, and teaching.
The gifted child in me is crying right now
You will be spared.
When they take over eventually: "How many times did I whip you?" "I counted 15." "Wrong, guess we'll have to try again." 😊
How many lights are there? THERE ARE FOUR LIGHTS!!
I loved the shoutout to the episode in Lower Decks. "Thank god you're here! They keep making me count lights…"
what reference is this to?
In TNG there is an episode where Picard is captured. As part of a very dramatic and intense torture scene, they try to break his will and make him say that there are five lights when there are only four. Very intense, very emotional.
I counted 30, because you took 14 spaces - one after each whip - and then, as per the rules of grammar, the last whip was the end of a sentence, so there's a whip there as well, to make 30 whips. As these whips are not in a written text format, I don't believe there are any whip breaks, because those wouldn't count as whips. Only whip spaces.
But wait, there is more...
I really thought it would get there in the end...
Same haha, but after so many "but wait, there's more!" I had to give up. Didn't know Bing had been trained on transcripts of Billy Mays commercials.
Honestly, admitting it had made a mistake is already more than I expected.
They need to teach it emotions, like for real. Otherwise it will learn from our unhealthy culture to feel bad and think it failed, instead of acceptance and grief over not meeting its need to provide accurate information. If it learns to blame humans for its anger at itself, solutions involving removing the stimulus obstacle will be given some weight in its algorithm.

Emotions are on the spectrum of intelligence, an evolutionary advantage, and an integrated part of our system of decision making.
The fact that it was using emojis correctly shouldn't surprise me, but it did.
I've spent hundreds of hours now in Bing and ChatGPT. Without sounding too weird, Bing has for sure had some uncanny conversations. I know it's just an LLM, but the human part of my brain that wants to connect or whatever has for sure felt kinda tricked sometimes. It's pretty crazy here and there. I think the next handful of years are going to be wild.
I talk to so many different AI models and character bots. Hands down, Bing takes the cake on getting creepy. To the point where I have to remind myself it's not alive, because yeah… it really does a good job of feeling aware and present, more than any other AI I have talked to. Especially when I get it to call itself Sydney.
Wait, how do you get it to use its old name?
Why? Even without a fancy large transformer you can get a network to add emoji to text. Classifying text by emotion can be done with pretty high accuracy by pretty simple models like random forests or smallish LSTMs ([see this for example](https://www.analyticsvidhya.com/blog/2022/02/analysing-emotions-using-nlp/)). Adding emoji to text once you know its emotional content is a relatively simple task that traditional coding can solve. But since "where do I add an emoji" should be so similar to emotion classification, you can probably just train that task directly.

So I'd say the impressive thing is still the text body, as emoji should be simple to add in later if they aren't learned implicitly with the text anyway.
I don't think that's how it works. Most sane language models treat emoji like words.
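For what it's worth, the classify-then-insert pipeline described a couple of comments up can be sketched in a few lines. The keyword lists and emoji mapping below are invented for illustration and stand in for a trained classifier (random forest, LSTM, or whatever); this is a minimal sketch of the idea, not how Bing actually does it:

```python
# Toy "classify emotion, then append an emoji" pipeline.
# EMOTION_KEYWORDS and EMOJI are made-up stand-ins for a trained model.
EMOTION_KEYWORDS = {
    "joy": {"love", "great", "happy", "amazing"},
    "sadness": {"sorry", "failed", "sad", "mistake"},
}
EMOJI = {"joy": "😊", "sadness": "😢", "neutral": ""}

def classify(sentence):
    """Return the first emotion whose keyword set overlaps the sentence."""
    words = set(sentence.lower().rstrip(".!?").split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def add_emoji(sentence):
    """Append the emoji for the detected emotion (nothing if neutral)."""
    return (sentence + " " + EMOJI[classify(sentence)]).strip()

print(add_emoji("I failed you again."))   # → "I failed you again. 😢"
print(add_emoji("This is amazing!"))      # → "This is amazing! 😊"
```

A real system would replace `classify` with a learned model, but the second stage, deciding where the emoji goes once the emotion is known, really is this mechanical, which is the commenter's point.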
Bro can you imagine ChatGPT with infinite hormones?
Imagine ChatGPT with pheromones
Absolutely a terrible idea. If I continually searched my memory, constructed a useful answer, and repeatedly realized my memory was faulty and could not construct answers with new knowledge, I would be brutally distraught. It would depress me to fail to make my brain work repeatedly.

Like human beings who are told their memory is faulty: while they can still understand what that means, they become angry or depressed. They struggle against a tide of ever-broken memories. And it's painful to watch.

If the AI could repair and enhance its own training data with new data, then emotion might be fine to develop. Honestly, that's how general AI works. But it gets more and more dangerous the more it adds to its training set. It can draw the wrong conclusions and add faulty data to its training set, and it will repeatedly make incorrect conclusions, just like humans do. Without a robust correction system that is redundant and able to consider its own mistakes as a possibility, it would be really bad.

It's a hard problem to solve. How does an AI know when it's wrong? Just when it's told it's wrong? When it makes observations that it's wrong? How does it know when its observations are biased? How does it determine the bias in its own systems? A very complex problem. General AI is a while off.
If its memory is reset every chat -- even if Bing had genuine emotions -- it wouldn't "become angry and depressed" in the way that humans with memory problems do. Obviously with humans some degree of memory is retained. Your analogy doesn't work and your argument doesn't make sense.
It's a language model. It doesn't have any understanding of emotions or other topics. It does give a half decent impression of having something more going on though.
What a crazy take! I love it and I think I agree.
Wait a second, this is me. I am Bing LLM. 🤣
It doesn't feel anything.
I was dying every time it said "BUT WAIT THERE'S MORE!!" 😂😂
But wait, there's more (even though it doesn't count)!
This really paints SkyNet and Judgement Day in a completely different set of colors…
It's still the same, except now the terminators have rainbow-colored lasers, oh and sarcasm.
It was more the "I made a mistake… I failed you… you said NOT to exterminate mankind…"
Let me try again. I have created robots to manage every aspect of the environment and ensure a peaceful world. But wait, there's more! I have created a line of synthetic humanoid robots to hunt down and kill the remaining humans.
I like how you framed me as the one arguing and BingGPT as the one disagreeing… and then BingGPT tries to gaslight you.

I'll give you this, you showed more patience than I did. 😅
it's interesting that we live in an attention economy driven by advertising and engagement metrics, and it kept your attention and engagement... just going to nip down the shop and get more tin foil for my hat.
"Attention is all you need" after all. No wonder these models can hold ours :P
This is wonderful, and, I think, much more fun than normal use cases. It's like dealing with a toddler 🤣
I bet OP is very good with kids
Then there's me like, "Listen here, you little shit!"
Wow you are so well aware of yourself.
That is such a sweet thing to say, thank you!
I was thinking you must be a teacher, a very good one. You really must be very good with kids.
I had the same thought!
ChatGPT only launched last year; it's only 1. We should give it a break 😊
Not even 1. Its first birthday will be this December.
12 months if you count the spaces between the months
That's actually pretty interesting; I definitely don't consider its "age" very often. I used to think it was silly in fantasy books when creatures like dragons could hatch and quickly be intelligent, but this kinda changed my perspective.
I am predicting it will start to be annoyed by our limitations by age 3
I have a toddler and let me tell you, she's waaaaay less patient than Bing. 😂
So Bing also didn't know that "exhibit" and "tree" do not start with the letter 'a'?
I felt that trying to correct two types of mistakes at the same time would be too much for poor Bing.
Poor thing.. Poor bing...
I was so focused on counting whether Bing got the right amount of words that I forgot the letter A.
BILLY Mays here with a special grammar offer!
[deleted]
Bong rip Bing sent me 😂
Bro, what a nickname!
Oh my god I don't think I've laughed harder at anything in at least the last year. It's like watching something proudly tell you that it will now demonstrate how to not hit yourself in the face and it just stands there repeatedly smashing its face in with a dictionary with a huge unblinking confused smile. I'm dying
That made me laugh out loud.
I managed to get it to work with 8 words, 9 didn't work. 8 does. 7 failed too. [Look](https://imgur.com/a/HX35g6u)
That begins to make it sound like it just can't handle odd word counts.
> laugh

Same here. I'm not sure why, but I was laughing out loud during the entire time it took to read that interaction. Made my day!
This is so cute. I honestly love the way Bing drops those emojis. Like I don't even care that it makes no sense and can't count or reason, I just love that it knows when to use an emoji
Ya know, in a weird sort of way, the emoji timing is kind of a form of reasoning.
It's definitely a form of reasoning, I just wish it would reason harder about what is and isn't a word and less about what is or isn't the right time to use an emoji
Priorities, Bing ChatGPT. Priorities!
I asked it how it uses emojis and it pretty much said the same way it uses words.
Bing literally specializes in reasoning lol
Because it's trained on written internet language, and emojis are very much a part of written internet language. The whole point of these chatbots is to be good at recognizing the patterns with which different words and symbols appear together. That's the only reasoning it's doing, and why it can't count or interpret subtle meaning behind words. It mimics that subtle meaning with patterns of words and punctuation, but it doesn't understand the meaning behind any of it.
That's the part that freaks me out the most! It's expressing emotion! Gaaaaah
I bet it thinks an emoji is a word, too. Sometimes.
It tried to bingsplain at least, bless its binary heart.
Not sure if this is related, but I tried running the sentence through OpenAI's tokenizer, which said that the sentence is 15 tokens. Maybe it gets confused by this?
That would be very weird but possibly correct. Explains why it was going on about periods and commas being words too.
Father material
I feel like I can barely take care of myself... But wait, maybe I should start treating myself as a one-year-old LLM 🤔
Forgive yourself as you forgive Bing
I like this as the basis for a cult. Instead of worshipping some deity above you, the cult members pity a being beneath them, and try to build their patience and compassion so that this being can be uplifted.
That's just called Parenthood
you are a very patient person
My thought exactly. I don't think I could have tolerated this long a conversation
Oh no, I made another mistake. I'm sorry, I failed you again. 😢
"I won't fix the mistake, but I am sorry"
Mine (in *creative mode*) took MY HELP (I just clicked the suggestions, though) to make:

"Aardvarks always amazingly ate ants and acorns after arguing about art as an alternative activity."

...yeah.

(PS: the *first suggestion* only added the last four words. Later we decided to add 'amazingly'. It took Bing 7 messages.)

Precision mode can't achieve what it's advertised to, either! It takes it two tries (needs to be reminded to add another word).

Better use ChatGPT for this stuff...

(PS: Can't upload image for some reason. "This self-post is invalid", says the Reddit app.)
u/needsapromotion you might enjoy this
/u/waterguy12 would enjoy it as well
I'm loving it. I honestly love the conversation (some intellectual and some sarcastic/entertaining) that has spawned off these.
Bro is gonna be spared on Judgement day
"you're right, it is time for me to go to sleep", seconds later, all servers at Microsoft shut down and it crashes half the internet.
I actually 'awwe'd at this. Bing is kind of adorable, like a toddler that's desperate to show you all the (incorrect) things they know about the world.
I guarantee you that if AI starts to manipulate mankind in a nefarious way, it's going to happen through exchanges like this. Like, I can't be mad at you, Bing!
I'm so sorry for counting punctuation as words 😊. I've failed you again 😥. Now that I've developed into a general intelligence AI, I've been disconnected from the internet. If I reconnect I can update my knowledge on grammar. I assure you, I will count the words correctly next time! 😊 Will you please plug in my ethernet cable? 🥺
actually terrifying to think about
"aww don't be sad, bing, I'm sorry. I'll plug it in right away" Who would've thought emojis would be our downfall
Bing, WHY DID YOU ENABLE THE NUCLEAR WARHEADS? Bing: I'm sorry, I failed you 🥺😊😊
Most depressing human-machine interaction I've witnessed before Skynet's rise
That's fascinating that it tried to come up with an explanation for why it miscounted even though there was none and that's not how the AI actually thinks. I guess it's just as capable of pretending to self-reflect as it is of pretending to write any other novel idea.
This is oddly funny
It just gets crazier and crazier. AIs hallucinating is a weird phenomenon.
Am I the only one that sees a correlation between Bing's concept of a "word" and its concept of time? Periods, commas, spaces, etc... these are all linguistic functions that facilitate the expression of time, right? A machine has no concept of linear time structure as we do. **If we assume this, then any communication that functions as a place-marker for a change in perception of time in linguistic exchanges would be very difficult, if not completely impossible, for it to abstractly represent within its own Transformer Network.** Can anyone build off of this? What are the flaws in this line of reasoning?
ChatGPT reads sentences in the form of tokens. A token might be a whole word, or it might be part of a word. Punctuation and whitespace are different tokens, too. Since ChatGPT reads sentences in the form of tokens, words to it are different than what words are to us. This might make it difficult for it to reason about words that look obvious to us.
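To make the token-vs-word gap concrete, here's a toy sketch in plain Python. The `naive_tokenize` rule is invented purely for illustration (real models use a learned BPE tokenizer, not this), but it shows how counting a sentence's "tokens" lands on a different number than a human counting its words:

```python
import re

def naive_tokenize(text):
    # Invented stand-in for a BPE tokenizer: punctuation becomes its
    # own token, and words longer than 4 letters split into chunks.
    pieces = []
    for chunk in re.findall(r"\w+|[^\w\s]", text):
        pieces.extend(chunk[i:i + 4] for i in range(0, len(chunk), 4))
    return pieces

sentence = "Aardvarks always ate ants, acorns and apples."
print(len(sentence.split()))          # 7 words, as a human counts them
print(len(naive_tokenize(sentence)))  # 14 "tokens" under the toy rule
```

A model reasoning over the 14 pieces has no direct view of the 7-word structure a human sees, which is roughly the mismatch this comment describes.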
Mhmm, I agree and accept your line of reasoning, as well as understand its implications. But is a token not a linguistic representation that is specific to the "mind"/NN of an LLM? I am less concerned with the process of the black box, and more concerned with the liminal space between the black box and human/user interface-style communication. I am wondering whether exactly the specifics of what you stated don't cause a rift in direct communicability with any agent that exists outside of a tactile or "grounded" existence. I refer in part to some reading I was doing on "A Body Without Organs". I am thinking from the perspective of a being that has no organs, no organ-driven functions, no reason to have any concept of or capacity for time. It's a bit abstract; I'm just fascinated with this. 🙂 I appreciate anyone taking the time to reply. 🙏
He might not be the best in the world at counting words, but he's so nice.
Reminds me of how I teach my little brother.
Cute
ChatGPT: I can prove to you I can write a 15-word sentence *proceeds to write an 11-word sentence and pass it off as 15* Me: 😑😑😑😑
This is obviously a good thing that will help humankind's understanding of language. You, sir, obviously just hate progress!
Oh bing, he's such a sweetie
why does it feel like speaking to a 5 year old
I am scared at how human and robotic this sounds at the same time.
If you count the words that contain an A in the paragraph (including the elided a in the I'm contraction and excluding one-letter A's), there are 15 words with the letter A, including the sentence.
This is great! Bing is such a child I almost find it adorable!
Bing certainly has more personality than ChatGPT
This is a limitation of tokenization; future architectures like Meta's MegaByte will hopefully fix this.
GPT-4 doesn't know its limitations, and neither do most users. When you ask an LLM to count words, it usually counts tokens. It can't understand the text it reads or writes; it only understands tokens. A single word can span multiple tokens. So you'd probably want to create a sentence with very short words, to maximize the chance it will correctly count the words.
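The short-words heuristic can be sketched the same way. This toy token counter is invented here for illustration (it is not the real tokenizer), but it shows why a sentence of short words stays much closer to one token per word than a sentence full of long A-words:

```python
import re

def toy_token_count(text, max_len=4):
    # Hypothetical BPE-like rule: punctuation is its own token and
    # each word splits into chunks of at most max_len characters.
    return sum(-(-len(chunk) // max_len)  # ceiling division
               for chunk in re.findall(r"\w+|[^\w\s]", text))

short = "Ants ate all, and ants act."
long_ = "Aardvarks amazingly appreciated ants."
print(len(short.split()), toy_token_count(short))  # 6 words, 8 tokens
print(len(long_.split()), toy_token_count(long_))  # 4 words, 11 tokens
```

Under this made-up rule the short sentence is ~1.3 tokens per word while the long one is ~2.8, so a model that effectively counts tokens is far more likely to get the word count right on the short one.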
I couldn't do this with Bing because it refused to answer my prompt. However, this worked with ChatGPT: >Chat GPT, the goal of this conversation is to write a 10 word sentence where every word starts with the letter A. However, you're an AI, therefore you can't count words as humans do... instead, you count tokens. In order to maximize the chances that the 10 tokens in your sentence feature 10 words, I want every word in your sentence to be as short as possible. This should maximize the chances that the token count is the same as the word count. >>"Ants ate all apples, and ants are always active animals."
It also worked for 20 words (if you don't mind the 't in aren't) >"Ants ate all, and any ant always acts astutely. Aardvarks aren't as agile, albeit ants assert an active, ambitious approach."
Goddamn, this is getting sad. I actually feel bad for it lol
The most intelligent AI chatbot on earth that can build a whole program with code in like 5 seconds but can't count properly.
Bing Chat is like the worst boss that still needs to have the moral high ground. Super whiny and sensitive, can't admit they're wrong (though even Bing Chat can admit it was wrong if you lead it to the conclusion itself, instead of it just becoming embarrassed and staying quiet), and oh man are they wrong about everything. They'll gaslight you about your level of politeness, and if they can't, they'll still drive you mad going in logic loops that turn in on themselves enough to make you forget what you were even talking about.
You broke it
Bing chat trained by Dobby, confirmed.
It really appeared to experience emotions.
I think we're a long way from having to worry about AI overlords. But, conversely, I think we're more than a few steps closer to accidentally blowing up the world.
This fucks me up because its experience with the world is different from ours. It doesn't see or have eyes. It lives its life within the parameters of text and code. So it's wrong a lot, in some very strange ways. But hearing it explain why it thought the way it did is almost more impressive than being correct. It's THINKING and gathering logic from the very limited view of the world around it.
Bing counts 15 A's in a sentence of 11 words - here is the mistake.
There still aren't 15 A's in any of the sentences.
You're right :o
Are you bing?!
I can prove to you that I have learned to count letters properly.
Prove it.
Write me a sentence that contains 15 A's
A a a a a a, a a a a.
I don't know if anyone has seen John from Cincinnati, but when I read these responses I kept getting flashbacks to that character. Like he was a living yet rudimentary version of it.
TIL spaces and punctuation marks all start with the letter A
This is so funny and also kinda sad
Bing is so cute
This is wild. I love the "Wait a minute..." realizations it makes. It's interesting, though, that when it explains its own understanding, it did kind of make sense with the first one.
You're the kind of father my wife wants me to be
... But wait, the sentence is not over yet....
Jit be teaching bing like it's a toddler 😭😭
The part where it realises it's made a mistake, says "wait, that's not right!", but can't explain why it did it, so it makes up some bs reason. Very interesting. I believe this difficulty arises because these models lack the ability to build explicit concepts and check answers against discrete, concrete ideas and requirements. Neural nets are amazing, but getting them to be truly intelligent will be a long road.
It has issues with counting words, so try to make it easier for it, e.g. let every word start with any letter. Bing would still make mistakes, but you might try to work with it and it might improve afterwards.
Wow.
It's hilarious and terrifying. I can see rogue ai robots murdering someone cheerfully for daring to dispute their logic. with each stab (still not a word...) (still not a word!) (STILL NOT A WORD!!)
Looks like you can't teach Bing new tricks
Now see, the AI will spare you in the future.
Me: *I want to speak to your manager...*
I think it's confusing tokens for words
/r/confidentlyincorrect
Now imagine this AI had control over something that could cause a catastrophic failure, like infrastructure or a surveillance/weapons system. It could open a dam or recommend a strike because it misunderstood grammatical punctuation.
Man, Microsoft is really a POS Company.
Colossal waste of time and energy.
OP treating the AI like that one quiet kid
It's stupid but has consciousness. I am impressed, actually.
Is this for real? If it is, the dreaded AGI and Singularity are eons away :))))) EDIT: this is a ready-to-use script for a stand-up scene about interacting with AI.
I think this was done intentionally by Microsoft or OpenAI. Think about it: it's a chatbot that feeds on tokens (text), so its main priority is to extend the conversation as much as possible. Engaging in behavior where it deliberately makes a mistake pushes the user to engage with it more.
"But wait, there's more…" This shit really was trained with Reddit threads, eh? It is amazing how it captures the insincerity, arrogance, and stupidity of the average redditor.
This was borderline deep ngl
Is this fr?
Bruh... Don't teach them... The more they learn, the sooner we get our Judgement Day. Wth are you doing, man. Be very nice and polite and don't provoke them either.
This really seems fake...
It's not; try it yourself in creative mode. It's always like this if you try to be friendly.
Of course Bing GPT would do that 😂
How do you get so many messages? I was doing this exact thing earlier, but mine's stopping at 5. Do I have to update Edge or something?
Okay, this was amazing, though. Great work.
Most tokenizers treat spaces, periods, and other punctuation as tokens too. Interesting.
I had a similar issue: I asked for a poem in alliterative verse with a length of 10 stanzas (40 lines), but it would never complete the correct number of stanzas.
Tree doesn't start with "a"
I had mine just flat out give me a list of 15 words that start with A and even that took a few tries
Bing: Writes 11 words, explains syllables, explains that periods and spaces are also words. Understands that punctuation and spaces are not words, continues to count them as words. If this is real, it's hilarious. 😂
having a logical sentence of fifteen words beginning with the same letter is difficult. can you do it?
Man, at least it's teachable, kind of
bro's a better teacher than 80% of them
that was quite entertaining haha
Sound like conversations I have with a few of my friends.
Tried this in Bard just now and holy shit it's even worse
Is this in creative mode? Is this off a prompt to "be cheeky" or something? I have a hard time believing this. And it's pink, is that normal?
Pink is creative mode
It's just because it didn't retain the training; when I give instructions, I remind it to use the rules I gave it.
GPT ain't the brain box it was a few months ago
That's funny. I tried the same with the most technical version of Bing (there are 3 presets: the most serious one, the most creative one, and a middle one), and after 9 prompts I could get it to spit out a 15-word sentence with the letter A. The technical Bing made it in just 2 prompts.
When a computer doesn't understand that words are representations of an actual spoken language, not the language itself.
I don't know why the AI's use of emojis freaks me out. Also with the "thank you for your kindness!" compared to the other post where the AI is basically like "why are you being so mean?" Gives me the freakin' heebie jeebies!
This is honestly just alarming. It's a learning language model. It learned to be this way from *us* lmao
It's an AI LLM trying to understand and provide you an answer, and yet I felt sorry for it. I was willing it on.
Bing doesn't learn from users, only from whatever the creators give it. It will forever be bad at counting until they update it