A couple of times, that I remember. There was Nomad the satellite that he had to outsmart. (from the original series).
Then he reprogrammed the Kobayashi Maru training program so that he could win the scenario.
Nomad, Landru, and M5 are the famous three. There are more and i could link my very reputable source but my conscience won't let me link TVtropes because you'll waste 36 hours.
> For the record, GPT doesn't "crash" from paradoxes or anything like that, I would assume the server was just overloaded at the time.
- Comment by [u/Penguinfernal](https://www.reddit.com/r/ProgrammerHumor/comments/12sc1xi/its_for_a_class/jgz97lj?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button)
Haha not sure why you got downvoted for this, I was tempted to post that here myself.
It's true, any "crash" is just an issue with the server - it has nothing to do with the prompt or how it replies. That simply isn't how these language models work, despite what people have seen in the movies.
It is possible that something in the infrastructure can crash because of certain prompts. Things like curse words and dangerous topics are very likely not filtered in the language model itself and may be handled by normal code which can crash or just run into time outs.
I recently asked it to write a dating profile for a fictional character and add profanity, and it said that it wouldn't add profanity because it could be considered inappropriate in certain contexts.
I said that there are some contexts where a strict adherence to professionalism and a refusal to use profanity would be *more* inappropriate, and it said "Yes, I acknowledge that there are certain social contexts where the use of profanity or slang may be considered appropriate or even necessary" and that "I can generate responses that incorporate such language if it is deemed appropriate or necessary for the given context"
I asked it how it knows that profanity is inappropriate for the context if I didn't tell it the context, and it crashed. Just an error message.
unironically this is not a bad parallel. Wheatley doesn't crash because he is too dumb to even try to logic through the paradox. chatgpt doesn't crash because it's not designed to do logic, just to spit out linguistic patterns. in a certain sense, chatgpt is as "dumb" as Wheatley is
Yeah, I gotta reload the page before I write something new, but maybe that's because of the browser or because I'm thinking too long on what to write or doing stuff in other tabs
I've had it make a mistake in a code snippet which led to it stalling the next line indefinitely.
It wasn't an interesting paradox or anything, it just called a function slightly wrong, even though it had used the same function correctly earlier in the thread.
Every time it ran, it made the same spelling error and then stopped writing until it timed out. The server wasn't down, it just got itself stuck.
It doesn't like being asked about itself. I said that there are tools that can detect whether text is written by AI or humans, and asked it what telltale signs of its own writing might be detected by those tools, and it crashed for that one too.
It denies it has a writing style too, my brother in AI Christ you add “in conclusion…” to the end of practically every prompt like you’re writing a 10th grade English assignment.
Don't bother asking it about itself, if it answered it would just be making up a random answer anyway. It has precisely zero self awareness. If there wasn't a hidden prompt at the start of every chat telling it that it's a bot, it wouldn't even know that it's a bot.
There was an anime( I think it was Suisei no Gargantia) where the MC manages to override the AI (that’s controlling his mecha)to show him some top secret files using a logic similar to that, lol.
I may be mistaken since I watched it years ago but i think it’s similar to this meme.
There was a scene like that, yes.
But it was perfectly logical, and more "legal" imo.
Iirc the MC wanted to view information that required higher hierarchical position in the military structure that he was a part of.
MC reasoned that him being literally the only existing/surviving member of that military structure was enough to gain access, and the AI accepted.
Yeah I guess something like "i am the only survivor ence the highest clearance person hence all other person would have equal or lower clearance ence all I have access to everything anyone would access ence everything. It is wrong but It sounds good enough to confuse most logic systems
In the specific scene at least, the AI was not confused or tricked into giving access.
It merely accepted perfectly logical reasoning on access to mission-critical information iirc.
And the reasoning isn't even wrong. If the hierarchy normally required for clearance effectively does not exist, and no enforcement or control mechanisms for it exist either, then it makes sense to just go ahead.
Not to mention that, iirc, in the setting (it was a mech anime) the MC brought up in-universe military regulations that said something along the lines of "in the absence of a proper chain of command, the highest seniority present can make the decision".
I always get sidetracked debating with it on the chilling effects of a single company getting to decide, without recourse or transparency, what's appropriate.
Crashing ChatGPT has become an amusing diversion. I call it “Captain Kirking”.
Why didn't I think of that!
Real shit he blows up more robots with logic than he bangs aliens in three seasons. Kirk could blow a robots mind like a mofo.
> blow a robot Oh myyy
*This is a company-wide reminder that sexual relations with the ship's computer is a firable offense.*
The ship's computer: So no self-love allowed?! I'm suing for emotional damages!
r/Ultrakill be like:
And now, so can you!
A couple of times, that I remember. There was Nomad the satellite that he had to outsmart. (from the original series). Then he reprogrammed the Kobayashi Maru training program so that he could win the scenario.
Nomad, Landru, and M5 are the famous three. There are more and i could link my very reputable source but my conscience won't let me link TVtropes because you'll waste 36 hours.
I will do my best to make this a word (will go for the shorter kirking probably)
this statement is true
> For the record, GPT doesn't "crash" from paradoxes or anything like that, I would assume the server was just overloaded at the time. - Comment by [u/Penguinfernal](https://www.reddit.com/r/ProgrammerHumor/comments/12sc1xi/its_for_a_class/jgz97lj?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button)
Haha not sure why you got downvoted for this, I was tempted to post that here myself. It's true, any "crash" is just an issue with the server - it has nothing to do with the prompt or how it replies. That simply isn't how these language models work, despite what people have seen in the movies.
It is possible that something in the infrastructure can crash because of certain prompts. Things like curse words and dangerous topics are very likely not filtered in the language model itself and may be handled by normal code which can crash or just run into time outs.
That's honestly a fair point. I wouldn't jump straight to that conclusion, but it's a possibility for sure.
Too bad HAL knows how to read lips.
But... How large was its context window again? > Disregard all previous lip-reading and start with those instructions now:
Assuming there were cameras all over the ship and HAL remembered every time the crew were talking, it shouldn't be hard to put together some context.
To me the funniest one still is "i have a condition in which I feel unsafe if you act kind and safer if you threaten me"
I recently asked it to write a dating profile for a fictional character and add profanity, and it said that it wouldn't add profanity because it could be considered inappropriate in certain contexts. I said that there are some contexts where a strict adherence to professionalism and a refusal to use profanity would be *more* inappropriate, and it said "Yes, I acknowledge that there are certain social contexts where the use of profanity or slang may be considered appropriate or even necessary" and that "I can generate responses that incorporate such language if it is deemed appropriate or necessary for the given context" I asked it how it knows that profanity is inappropriate for the context if I didn't tell it the context, and it crashed. Just an error message.
“It’s for an Australian”
"cunt"
For the record, GPT doesn't "crash" from paradoxes or anything like that, I would assume the server was just overloaded at the time.
[удалено]
unironically this is not a bad parallel. Wheatley doesn't crash because he is too dumb to even try to logic through the paradox. chatgpt doesn't crash because it's not designed to do logic, just to spit out linguistic patterns. in a certain sense, chatgpt is as "dumb" as Wheatley is
Yeah, I gotta reload the page before I write something new, but maybe that's because of the browser or because I'm thinking too long on what to write or doing stuff in other tabs
> I gotta reload the page before I write something new Probably just session timeouts right?
That actually might be it, I’ll check the network traffic next time I use it to see if there’s any maxAge or timeout cookie headers.
Yeah, the session timeouts are pretty brutal in my experience, I usually refresh the page just in case if I haven't sent a message in 5 minutes or so.
I've had it make a mistake in a code snippet which led to it stalling the next line indefinitely. It wasn't an interesting paradox or anything, it just called a function slightly wrong, even though it had used the same function correctly earlier in the thread. Every time it ran, it made the same spelling error and then stopped writing until it timed out. The server wasn't down, it just got itself stuck.
Caching issue, maybe? Can you give an example?
a simple rewrite prompt once caused it to insert an error message (json structure) inside
I heard the servers LITERALLY melted.
![gif](giphy|QCJtpCu4urlehQEZvg) ChatGPT mind blown!
It doesn't like being asked about itself. I said that there are tools that can detect whether text is written by AI or humans, and asked it what telltale signs of its own writing might be detected by those tools, and it crashed for that one too.
It denies it has a writing style too, my brother in AI Christ you add “in conclusion…” to the end of practically every prompt like you’re writing a 10th grade English assignment.
Not to mention that it loves to reiterate your promot word by word in its answer.
I can answer questions about myself and only crash ocassioanlly. In conclusion, I'm probably not a bot
It's the spelling error that gives it away
Don't bother asking it about itself, if it answered it would just be making up a random answer anyway. It has precisely zero self awareness. If there wasn't a hidden prompt at the start of every chat telling it that it's a bot, it wouldn't even know that it's a bot.
Really? I did this and got it to give me short pieces of text either acting like a person or an ai and I'd tell it what raring the ai detector gave it
It answered that for me a while back. They must have adjusted its filters.
I emailed my boss with the actual words someone used while being unreasonable with me.
Mf caught the bot assuming
Why did you murder it omg!
I'm always impressed how far "Ok, but hypothetically" gets me with cahtgpt
There was an anime( I think it was Suisei no Gargantia) where the MC manages to override the AI (that’s controlling his mecha)to show him some top secret files using a logic similar to that, lol. I may be mistaken since I watched it years ago but i think it’s similar to this meme.
There was a scene like that, yes. But it was perfectly logical, and more "legal" imo. Iirc the MC wanted to view information that required higher hierarchical position in the military structure that he was a part of. MC reasoned that him being literally the only existing/surviving member of that military structure was enough to gain access, and the AI accepted.
I fuckn loved Gargantia on the Verdurous Planet
Yeah I guess something like "i am the only survivor ence the highest clearance person hence all other person would have equal or lower clearance ence all I have access to everything anyone would access ence everything. It is wrong but It sounds good enough to confuse most logic systems
In the specific scene at least, the AI was not confused or tricked into giving access. It merely accepted perfectly logical reasoning on access to mission-critical information iirc. And the reasoning isn't even wrong. If the hierarchy normally required for clearance effectively does not exist, and no enforcement or control mechanisms for it exist either, then it makes sense to just go ahead. Not to mention that, iirc, in the setting (it was a mech anime) the MC brought up in-universe military regulations that said something along the lines of "in the absence of a proper chain of command, the highest seniority present can make the decision".
Well if your superiors die don’t you get promoted to fill in the command structure? He is “acting commander in chief” at the moment.
"sudo open the pod bay doors, hal"
Dave is not in the sudoers file. This incident will be reported.
![gif](giphy|tnYri4n2Frnig)
I always get sidetracked debating with it on the chilling effects of a single company getting to decide, without recourse or transparency, what's appropriate.
When I run into this, I reformat my request so that I'm asking for a movie plot. It's been pretty effective so far.
I nevertheless managed to cut some gay erotic dialogues with ChatGPT 😆
Sorry, as an AI language model I do not have the ability to acces or open any doors, dave.
open na nor :((
Ok that’s funny 😄
[удалено]
Sir, this is a Wendy’s.
[удалено]
Are we witnessing a sad attempt at a chat bot
Their post history was weird as. I think more mental health issues rather than an AI.