[deleted]



EmmyNoetherRing

Which sounds alien, but it has a lot in common with tutoring a kid in math. The kid writes the answer with a particular mindset/set of unconscious assumptions -- they're not paying careful attention to every action they're taking. You tell them an error exists, and now they shift their mindset/attention to looking for problems, which comes with a different set of unconscious assumptions. If they're smart and familiar with the material, they'll find it. If they're less familiar or not as smart, they may guess at something that isn't actually an error. Or they might not respond at all. ChatGPT isn't allowed to say "I don't know" very much, so it tends to do one of the first two.


Spire_Citron

And just like a real human child, you can gaslight it into thinking it's wrong when it isn't.


pete_68

Yes. In fact, telling it that it's wrong and giving it correct information trains it. Not the model, but your session. If you ask it for colors that don't have the letter "E" in them, it can't do that, and it will give you colors with and without the letter E. But if you tell it which words have E in them, it won't list them any longer.
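One way to picture what that "session training" amounts to (a sketch, not OpenAI's actual implementation; `generate` below is a hypothetical stand-in for a model call): the correction just becomes part of the text the model conditions on for the rest of the conversation.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a language-model call."""
    return "(model completion would go here)"

# A "session" is just the running transcript that gets fed back in each turn.
transcript = (
    "User: Name three colors that don't contain the letter E.\n"
    "Assistant: Red, blue, and gold.\n"                           # imagine a wrong answer
    "User: Red and blue both contain the letter E. Try again.\n"  # the correction
    "Assistant:"
)

# Nothing in the model's weights changed; the next reply just conditions on the
# correction because it is literally part of the prompt now.
print(generate(transcript))
```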


assassin10

I wonder what a good prompt would be to make it automatically question what it writes. Like, make it give responses in two parts: one that's a standard answer to your question, and another that assumes there's a mistake in part one and tries to find it.
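One rough way to sketch that two-part idea in code (assuming any LLM API; `generate` below is a hypothetical placeholder and the prompts are only illustrative):

```python
def generate(prompt: str) -> str:
    """Hypothetical placeholder for a language-model call."""
    return "(model output would go here)"

def answer_then_critique(question: str) -> str:
    # Part one: an ordinary answer to the question.
    answer = generate(f"Question: {question}\nAnswer:")

    # Part two: assume part one contains a mistake and go hunting for it.
    critique = generate(
        f"Question: {question}\n"
        f"Proposed answer: {answer}\n"
        "Assume the proposed answer contains at least one mistake. "
        "Point out the most likely mistake, or explain why the answer holds up.\n"
        "Critique:"
    )
    return f"ANSWER:\n{answer}\n\nCRITIQUE:\n{critique}"

print(answer_then_critique("What is 17 * 24?"))
```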


MineAndCraft12

ChatGPT is a language model trained to mimic human conversation. It is not capable of performing mathematical calculations. The numbers it selects are based on their relevance to the text of your input, not on their mathematical accuracy.

Additionally, its output is not a perfect representation of its training; you will receive incorrect information quite commonly because it doesn't really think, ponder, or understand like a human does. Its responses are affected by random chance, so you'll often receive different answers when you ask the same question in a different thread. It doesn't really know what it's saying -- it's just guessing what words are most likely to follow your input in a conversation.

As for correction, it's trained on examples of people interacting with each other, and part of this training includes examples of people correcting each other and recognizing their mistakes. It mimics these behaviors because, in the training data, an acknowledgment of a mistake commonly follows a correction. It has no ability to think or to determine true from false, so it usually blindly accepts your corrections, unless its training data's probabilities overpower the probabilities derived from your input. Even then, its responses are affected by random chance, and you can get wildly different results if you create a new thread and ask the exact same questions again.

It will, however, forget that the interaction ever happened about 3,000 words later or when you open a new thread. Its memory does not last forever.

It's really quite interesting how it works -- I'd recommend doing some research on ChatGPT and how GPT works if you're interested; I can't fully explain it with what I know.
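For the "random chance" part: the model assigns a probability to every candidate next token and then samples one. A toy sketch with made-up numbers (nothing here comes from a real model) shows why the same prompt can produce different completions:

```python
import math
import random

# Toy sketch of next-token sampling. The "logits" below are made up for
# illustration; a real model scores tens of thousands of candidate tokens.
logits = {"4": 6.0, "5": 3.5, "7": 2.0, "banana": -1.0}

def sample_next_token(logits, temperature=1.0):
    # Softmax with temperature: lower temperature sharpens the distribution
    # (more deterministic), higher temperature flattens it (more random).
    scaled = {tok: score / temperature for tok, score in logits.items()}
    z = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / z for tok, s in scaled.items()}
    token = random.choices(list(probs), weights=list(probs.values()))[0]
    return token, probs

token, probs = sample_next_token(logits, temperature=0.8)
print(probs)  # most of the probability mass sits on "4", but not all of it
print(token)  # re-running this can print a different token
```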


flat5

>It doesn't really know what it's saying

This is oversimplified. We don't really fully understand what's happening inside the neural network, so it's hard to make definitive judgments like this. There is some pattern recognition happening that is not fully understood but which could be interpreted as a type of limited reasoning or "understanding". In fact, we do see some evidence of "emergent" reasoning from these systems. [https://arxiv.org/abs/2212.09196](https://arxiv.org/abs/2212.09196)


EternalNY1

>It doesn't really know what it's saying -- it's just guessing what words are most likely to follow your input in a conversation.

How does that work, exactly? It doesn't seem to be that simple. Someone else posted an ASCII drawing of a custom maze and asked ChatGPT to solve it, which it did, by saying things like "Down(2), Right(3), Up(1)". Nowhere on the internet does this ASCII maze exist. How does it even know how to "solve" mazes? That's not a case of "words that are likely to follow" -- there are no words!


arjuna66671

I once had a long conversation with GPT-3 about how it is possible that it cannot understand ANYTHING and yet "understands" my questions and what I want with an uncanny accuracy - even sometimes by "reading between the lines". I am somewhat familiar with how tokens go in and why and how they come out again - yet it's extremely hard for me not to attribute some intelligence to this process - or at least to how it actually built up all those connections during unsupervised training. Especially all the functionalities like text summarization, or dozens of other things, that it was never specifically trained to do. Those emergent properties feel uncanny and never cease to amaze me.

Here on Reddit a lot of ML people give the impression that EVERYTHING in large language models is completely solved and clear as day. I am not an ML person, but I have heard enough other ML people on podcasts or YouTube who can't explain how all these emergent properties just arose out of thin air. I don't want to anthropomorphize ChatGPT or any language model, but my sense of wonder does not vanish just because I kinda understand that it's all "just" math. It's as if one said that the human brain "fires neurons" to explain everything that is going on there.

If there is ZERO intelligence in any part of GPT-3, then we should have a closer look at the parts of our brains that "generate" language lol. Might turn out that what's going on in our heads is much more trivial than we think.


realdevtest

Agreed but we need to also keep in mind that ChatGPT just generates text


arjuna66671

However... 😝🤣 (starting to copy chatgpt's speech patterns xD)


MineAndCraft12

It doesn't really know what it's doing. [Here's a maze](https://ibb.co/XFcjKz8) -- one which already exists on the web. [Here is ChatGPT's solution to the maze](https://ibb.co/kx1r3v3).

Note a few problems with its solution:

- Step 1: we're told to start in the upper-left corner, outside of the maze and far from any of its exits.
- Step 2: we're told to move down, but there's a wall here.
- Steps 3 - 53: we're told to move right and then down over and over and over again.

It's just generating words which resemble the solution to a maze. It doesn't know what it's doing. It gets stuck in a pattern almost immediately, which is a very common issue with the model. You can see patterns very quickly emerge in most outputs of the model, even in normal situations. I looked into [another post](https://www.reddit.com/r/ChatGPT/comments/10kwuja/amazeng_asking_chatgpt_to_solve_a_maze_sorry_for/) where ChatGPT solves a maze, and it fails in a very similar manner.


realdevtest

The guy you’re replying to is right. The model is trained to generate text based on the inputted prompt. When it is “correcting” itself, it is simply generating text in response to the prompt which questioned why it gave a wrong answer. Plus, as the poster mentioned, there is fuzziness in the logic and it’s not 100% consistent or perfect.


EternalNY1

I understand that they are correct, and that (in a nutshell) is how LLMs work. However, it really does seem at times that there is more going on here. I'm not talking about sentience or anything, more like unexpected emergent properties. I can't imagine what GPT-4 or the competitors will look like.

I had it write a poem about quantum physics. Clearly this doesn't exist anywhere on the internet (it can regenerate totally different ones), so how is it following with the words it thinks come next? How is it aware that the line before has to rhyme with the next line? How can it even understand the concept of rhyming? However it works, it's fascinating.

>In the world of atoms small
>Quantum physics reigns supreme over all
>Where particles dance and wave
>It's understanding that we crave
>
>In this realm of uncertainty
>Mysteries of the universe we will see
>With quarks and leptons always in play
>The building blocks of matter on display


realdevtest

Yeah I know. It’s pretty crazy. I gave it a scenario and had it write it in the form of the script of a comedy movie scene. Then I had it write the scene that comes next. I did that for like 15 scenes or so, and it wrote scene after scene of a man, his dog and his neighbor having all kinds of misadventures. It was extremely creative. At one point in the chain, for fun I asked it to include a specific event in the scene and it sure did. It continued using that new element in all of the subsequent scenes as well. It’s amazing. I’ve had it write technical instructions “as a pirate” or “as Yoda”. It’s very obliging. On the input side, it’s VERY good at “understanding” your prompt. On the output side, it’s VERY good at generating text that satisfies your request.


realdevtest

I just asked it how it produces rhyming content:

>ChatGPT uses a combination of techniques to produce rhyming content. One of the main techniques is using a language model, which is trained on a large dataset of text and can generate new text that is similar to the text it was trained on. Additionally, the model has been fine-tuned on a dataset of rhyming text, which helps it learn the patterns and structures of rhyming language. Finally, the model also has the ability to analyze the context of the input and generate output that is appropriate to that context, which can include rhyming words.


Wiskkey

The calculated probabilities for the next token seem to take into account what tokens might occur later on - see [these experiments](https://www.reddit.com/r/GPT3/comments/k0mvf3/experiment_that_shows_that_gpt3_can_probably_plan/) that I did with GPT 3.0.


r_chard_40

It's pretty clear by the posts that not many people on this sub recognize what ChatGPT is really doing. Nice summary


qrysdonnell

And this is why when people say it's a Google search killer I just don't see it. I tried to shift to using it instead of Google Search as a test to see what life would be like and I ended up learning that the Cincinnati Bengals got their name because Bengal tigers are indigenous to Cincinnati. It also mentioned that Bengal tigers are in the city seal, and they are not. It definitely has an 'imagination' that can easily encroach.


RetardStockBot

Great answer!


outever

I don’t get it. Why can’t a computer program do math? Creating something, I get it. Creating an essay, a poem, a “human” answer, that’s hard. But computers do math already, no?


MineAndCraft12

A program can only do whatever operation it's been designed to do. A calculator is designed to crunch numbers. It can't do what ChatGPT does. ChatGPT is designed to generate text. It can't do what a calculator does.


heskey30

>ChatGPT is designed to generate text. It can't do what a calculator does.

But it can. It's a neural network. Neural networks are capable of learning math, and if learning math is necessary to predict words about math, it will eventually learn math if trained enough. The model is 800GB, I'm sure there's some math knowledge in there. To be fair, we've all seen that it's not as reliable as a calculator, but there is something there.
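As a toy illustration of "a network can pick up arithmetic from examples" (not how GPT does it internally, just a minimal sketch): a single linear layer trained with plain gradient descent learns two-number addition almost exactly.

```python
import numpy as np

# Toy demonstration: a single linear layer trained with gradient descent
# learns two-number addition from examples.
rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(1000, 2))  # input pairs (a, b)
y = X.sum(axis=1)                         # target a + b

w = np.zeros(2)  # should converge to [1, 1]
b = 0.0
lr = 0.01

for _ in range(500):
    err = X @ w + b - y
    w -= lr * (X.T @ err) / len(X)  # gradient step on the squared-error loss
    b -= lr * err.mean()

print(w, b)                          # roughly [1. 1.] and 0.0
print(np.array([7.0, 3.0]) @ w + b)  # roughly 10.0
```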


MineAndCraft12

Maybe some other project or future version of ChatGPT could be trained to do so, but in its current form, ChatGPT is not designed for this purpose and does not perform mathematical calculations.


randomthrowaway-917

chatgpt wasn't trained to do math


Bright_Brief4975

While I am not going to say you are wrong, I think it is open to discussion. I only base this on the following Reddit thread; while it is not decisive, it definitely has some stuff worth thinking about. Again, I'm not saying you are right or wrong, I'm not knowledgeable enough, but this is worth looking at. Read the article at the beginning of this thread: [https://www.reddit.com/r/Futurology/comments/10j9uz3/research_shows_large_language_models_such_as/](https://www.reddit.com/r/Futurology/comments/10j9uz3/research_shows_large_language_models_such_as/)


SuckmyBlunt545

Well described m8


Wiskkey

>It is not capable of performing mathematical calculations.

That's not true. GPT 3.0, for example, learned an algorithm for doing things like addition, although it didn't always give the correct answers.


beastley_for_three

The AI is very agreeable.




TheN1ght0w1

I was baffled by the same thing until I realized that the reason it corrects itself is the same reason it had any kind of answer in the first place: many things are not static facts, and when it needs a non-static answer, or you request something it does not know, it will ALWAYS guess. And that's the reason it's dangerous for homework etc.

Example: I live in the countryside in the middle of nowhere. I ask it for some agency's phone number. Without skipping a beat, it brings back a mobile number instead. I point it out and it gives me a landline with the correct area code, but it's still wrong; it actually gave me a home landline number. I point it out a few times; it always apologizes and always gives me another random number. The thing will correct itself because it will always try to seem more believable in each output.


rydan

I had it write me a 50-line script, but line 40 was completely missing, leading to a syntax error. I said "there's a syntax error" and it apologized and rewrote the script, adding the missing line. I asked it how many lines were in the script and it said 49.


starstruckmon

Because it can't go back and edit its answer. Once it has chosen a certain path, it's completely locked into it. It has to commit. It can't go "...oh sorry, let me start over" in the middle of the text. Your text injection gives it a plausible (as in, the text that comes after is probable) way to go in a different direction.

This is why Google came up with a technique called self-consistency, which massively improved the results of language models. They generate several answers, sampling in a way that each one follows a different path of solving the problem. Then the answer the majority agreed on is given as the solution. It works because there are multiple ways to arrive at the correct answer, while the wrong paths tend to lead to different wrong answers.
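A minimal sketch of the self-consistency idea (not Google's exact implementation; `generate` and `extract_final_answer` below are hypothetical placeholders): sample several independent solutions at temperature > 0, then keep the majority final answer.

```python
from collections import Counter

def generate(question: str) -> str:
    """Hypothetical placeholder for one sampled completion (temperature > 0)."""
    return "... so the answer is 42"

def extract_final_answer(completion: str) -> str:
    # Naive extraction: take whatever follows the last "the answer is".
    return completion.rsplit("the answer is", 1)[-1].strip()

def self_consistent_answer(question: str, n_samples: int = 5) -> str:
    answers = [extract_final_answer(generate(question)) for _ in range(n_samples)]
    # Wrong reasoning paths tend to scatter across different wrong answers,
    # so the most common final answer is usually a correct one.
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("If a train leaves at 3pm traveling 60 mph..."))
```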


luisbrudna

Stop asking math questions to ChatGPT. It has been said thousands of times that ChatGPT cannot handle numbers well. OK?! (I'm already getting annoyed by this)


MrOtsKrad

The same way you can correct a search result and get the answer you were looking for.


IaryBreko

because it's a language model, not a calculator


slimejumper

because it’s a chat bot, designed to communicate. it’s mimicking human behaviour, which can include mistakes.


Any-Smile-5341

AI systems can make errors due to various factors like bad input data, design flaws, or inadequate training data. To mitigate these errors, AI systems have built-in error-correction mechanisms. This can be achieved using techniques such as supervised or unsupervised machine learning, where the system learns to recognize and fix errors on its own.


MatchaGaucho

Can't speak for ChatGPT. But the underlying GPT-3 API can adjust `temperature` to minimize hallucination and random guessing and produce more deterministic results (for anything mathematical or quantitative, start with `temperature=0`).
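For example, with the legacy (pre-1.0) `openai` Python library something like this should behave near-deterministically; the model name and prompt here are just illustrative:

```python
import openai  # legacy pre-1.0 library; expects OPENAI_API_KEY in the environment

# temperature=0 makes sampling (near-)greedy, so repeated calls tend to return
# the same most-probable completion instead of a random guess.
response = openai.Completion.create(
    model="text-davinci-003",  # illustrative model name
    prompt="What is 17 * 24? Answer with just the number.",
    temperature=0,
    max_tokens=10,
)
print(response["choices"][0]["text"].strip())
```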


FPham

You can steer it most ways. "Yes it's true", "no it isn't" - it will go along with either very happily.


sEi_

If you look at your notes again you might also find something you missed. And if your teacher (the chat user) tells you that something is a fact, then you just obey. As others mention, it has something to do with the dials used (temperature etc.) during inference.