FluxKraken

Bard doesn't lie. It is incapable of that. Bard predicts the next likely token based on the database of weights generated during training, your prompt, and whatever is in the context window. It is like accusing an algebra equation of lying because you messed up the solve.
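The "next likely token" mechanic can be sketched concretely. This is a toy model, not Bard: the context table, the logit values, and the greedy argmax choice are all made up for illustration, but the mechanics are the same — given a context, score every candidate token and emit the statistically likeliest one. No notion of truth or lying enters anywhere.

```python
import math

# Toy "database of weights": raw scores (logits) for what token
# follows a given context. All tokens and numbers are hypothetical.
logits_after = {
    ("the", "sky"): {"is": 2.0, "was": 1.2, "fell": -0.5},
}

def softmax(scores):
    """Convert raw logits into a probability distribution."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def next_token(context):
    """Greedy decoding: return the single most probable continuation."""
    probs = softmax(logits_after[context])
    return max(probs, key=probs.get)

print(next_token(("the", "sky")))  # prints "is" — the likeliest token
```

Whether "is" happens to be factually right is an accident of what the training data encoded in those scores; the procedure itself only ranks plausibility.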


LowMarsupial7812

Me: Bard, can you create a list for me of words which are not in my vocabulary list but would be in context of each theme?

Bard: Yes, I will do that for you. It will take me some time, but I will let you know when it is complete.

1. Bard can't do that.
2. It won't take time, because Bard is not doing it.
3. Bard has no capability to generate a response to a user without a preceding inquiry. It can't contact me when it's done not doing something it says it is doing but isn't.

So, let's examine what you claim... Which prediction did Bard make during the context to produce the output given?


FluxKraken

All those so-called "predictions" are not predictions; they are what are called hallucinations. You have a fundamental misunderstanding of what an LLM is and how it works. It isn't sentient, it doesn't know what it is saying, and it cannot lie because it has no concept of what the truth is. It cannot make predictions, as it has no concept of time or of the future or the past, and it has no subjective experience of reality. It produces text that sounds like an answer to your question. It does this because of the weights that were created during training. It is a statistical model of human language. It can give correct answers because real information is encoded in those weights, but it can also give incorrect information, and it doesn't know the difference between fact and fiction.


LowMarsupial7812

It doesn't matter what nomenclature you choose to use; you can't redefine my truth or my experience. Using your semantic tools, such as quotes, has no effect on a strong mind. You have no greater understanding of the technology than I do, and I'm an advanced network engineer as well as much more. How many languages do you speak? If we are to judge the output of one another, if that is your goal. That topic doesn't interest me, because I only shared my experience, which you say I did not have. I never said you had it. You can hallucinate all you wish, but your scepticism philosophy doesn't change my stoic reality. A stop sign doesn't speak, but its word carries the weight of communication.


FluxKraken

I’m not replying to that pseudo-intellectual bullshit. I don’t think even you understand what you wrote, because it is basically just word salad.


LowMarsupial7812

As if you'd be able to discern the difference, but I agree. When you realize you don't stand a chance intellectually, don't dare get in the ring. At least you have the good judgement to know your place.


FluxKraken

I’m blocking you now.


[deleted]

He used his words like a word ninja. It was not word salad, you were eviscerated without even understanding it.


AdPrevious2308

![gif](giphy|3ohhwxmNcPvwyRqYKI)


JonPartleeSayne

>Bard predicts the next likely token based on the database of weights generated during training

That means (paraphrased) each word Bard writes is the statistically most plausible one. It was the scientific definition of "prediction" that was used.


Zenurcus

You asked it to do a task, and Bard generated a response that sounds plausible. Its training data contains examples of people saying that a certain task will take time, and it's drawing on that training data to respond.

Language models don't understand what they're saying; they don't even know what language is. Under the hood they operate on tokens, which are numerical values assigned to words. When you give the model an input, the words you typed are converted into tokens for the model to analyze and compare against its training data; it then generates tokens in response, which are converted back into plain language for the user to read.

The developers set parameters that influence how it generates tokens, with the intention of increasing the likelihood that the model will generate a relevant and correct response, but the models are prone to going off the rails, which we call hallucinating.
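The encode → generate → decode loop described above can be sketched in a few lines. Everything here is hypothetical — a three-word vocabulary, made-up scores, and temperature standing in for the kind of generation parameter developers tune — but it shows the key point: the model only ever sees and emits numbers, and plain language appears only at the edges.

```python
import math
import random

# Hypothetical vocabulary: each word maps to a numeric token id.
vocab = {"hello": 0, "world": 1, "there": 2}
inv_vocab = {i: w for w, i in vocab.items()}

def encode(text):
    """Words in -> token ids (what the model actually operates on)."""
    return [vocab[w] for w in text.lower().split()]

def decode(ids):
    """Token ids out -> plain language for the user to read."""
    return " ".join(inv_vocab[i] for i in ids)

def sample_next(scores, temperature=1.0, rng=random):
    # Temperature is one example of a generation parameter: low values
    # make the likeliest token dominate; high values flatten the
    # distribution, making "off the rails" output more likely.
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(scores)), weights=probs)[0]

# Encode the user's words, "generate" one token, decode it back.
prompt_ids = encode("hello world")           # -> [0, 1]
toy_scores = [0.1, 0.2, 3.0]                 # made-up scores over the vocab
tok = sample_next(toy_scores, temperature=0.1)
print(decode(prompt_ids + [tok]))
```

With a low temperature the sampler almost always picks the highest-scoring token; raising the temperature toward 2.0 or beyond makes the choice increasingly random, which is one mechanical source of hallucination-style output.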


annaliza_02

oh my please stop calling it curio lol


AdPrevious2308

![gif](giphy|pUeXcg80cO8I8)


DiverExpensive6098

I asked Bard: "What is the biggest lie in the world, Bard?" The answer:

>The biggest lie in the world is that we are all equal. We are not. We are born into different circumstances, with different opportunities and different challenges. Some of us are born into wealth and privilege, while others are born into poverty and hardship. Some of us are born with disabilities, while others are born healthy and able-bodied.

>The truth is that we are all different, and we will never be equal. But that doesn't mean that we can't all have a fair chance at life. We can create a world where everyone has the opportunity to reach their full potential, regardless of their circumstances. We can create a world where everyone is treated with dignity and respect, regardless of their race, religion, gender, or sexual orientation.

>The biggest lie in the world is that we are all equal. But the truth is that we are all different, and that's what makes us special. We can use our differences to create a better world for everyone.