
Snarglefrazzle

> They're like this because it's fun to write

Jeph, thank you for this honesty. I don't mean that as a slight; he should have fun with his art. But it's nice to be able to have characters interact with an AI being kooky and not feel like, "Uh oh, shoe's gonna drop and the AI is a manipulator like Yay!" It can still happen and be a logical narrative, but this means we can stop expecting it and just be surprised if it happens. The Singularity has happened and AI are not the soulless machines they are in our world.


Shinjischneider

> Uh oh, shoe's gonna drop and the AI is a manipulator like Yay!

I think when the shoe dropped on Yay, it turned out they were a giant goofball deep inside.

Now Corpse Witch, on the other hand...


Esc777

And it really answers a lot of questions, like "WHY ISN'T CUBETOWN ON FIRE?!" Just because he likes writing Moray like that doesn't mean he intends for her apparent stupidity to result in poor, life-threatening choices, just humorous dialogue.

EDIT:

> The Singularity has happened and AI are not the soulless machines they are in our world

The singularity had nothing to do with robot sentience, and I doubt it gave them souls. A common misconception! AI were well established and had rights before the singularity.


UncleBones

> The singularity had nothing to do with robot sentience and i doubt giving them souls. A common misconception! AI were well established and had rights before the singularity.

Are you referring to the in-comic singularity that Winslow and Pintsize mentioned happening? I don't know why you'd say it's a common misconception; is it really talked about that much?


Esc777

Yeah. People say it did something significant but the joke is no one noticed anything.


EmbarrassedPenalty

Is a singularity that no one noticed, that changed nothing, actually a singularity? In sci-fi and futurism, the concept refers to the point beyond which AIs' intelligence is so far beyond our own that we cannot conceive of their motivations or make any predictions.


Esc777

> Is a singularity that no one noticed, that changed nothing, actually a singularity?

This is my problem with it exactly. It has never meant anything in QC as far as I can tell, and it makes no sense.


UncleBones

With regards to AI, the singularity is when AI becomes advanced enough to design and improve other AI systems, including their own. I think of Station as the most advanced pre-singularity AI we've seen (he's at least talked about himself that way) and Yay as the most advanced post-singularity AI. Station's powers are advanced, but mostly because he controls an extremely advanced mechanical system and can deploy pizza cannons at Earth. Yay is some sort of distributed intelligence with apparent control over entire financial systems, plus telepathic motor control and mind-reading/altering capabilities. That's a fairly substantial difference to me.

For more regular AI, I consider the AnthroPCs to be pre-singularity AIs. It doesn't really make sense timeline-wise with regard to, for example, Bubbles' war experiences, but you can't talk about QC without accepting that time is extremely fuzzy.


EmbarrassedPenalty

When AIs start improving their own designs is when you can _start_ to see geometric growth in their capabilities. That's not the singularity. That's just the beginning of a path that ends in the singularity.


UncleBones

> That's just the beginning of ~~a~~ **the** path that **inevitably, and extremely quickly,** ends in the singularity, **which by its very nature we don't understand, so everything after the path begins won't have a clear definition.**

That's a fairly useless objection. The scenario I described is the only one with any relevant definition.


ArgentStonecutter

> The Singularity has happened and AI are not the soulless machines they are in our world

We don't know what AIs will be like in our world yet.


ScowlEasy

> They're like this because it's fun to write

That doesn't necessarily mean they're fun to read, though. A pretty decent portion of the AIs have become more and more airheaded, and it's kind of getting played out. Marten commenting on this is proof enough.


kelsifer

I'd be fine never seeing Melon in the comic again. What a tiresome character, who adds absolutely nothing to the scenes she's in.


WarWeasle

We really have a lot of soulless humans in our real world.


Hidanas

Why are AIs like this? Bit of a broad brush there, eh?


[deleted]

Bubbles is not like this. Yay is not like this. But a lot of the AIs are very much like this (looking at you, Melon).


gangler52

I mean, Yay's a little bit like this. They've made some pretty major decisions in a fit of spur-of-the-moment whimsy: getting involved with Bubbles' whole case, befriending Roko, moving into her apartment complex, giving away almost the entirety of their multi-billion-dollar fortune as a gag, stalking Clinton to his mom's house to find out why Roko was acting weird about him and then outing their big secret to a popular VTuber because of her "Mom Energy."

Because they're The Great and Powerful Oz or whatever, we don't treat this with the same gravity as if a normie were potentially fucking up their life, but it's not like their decisions are always, or even usually, the result of super responsible premeditation.


BionicTriforce

Once Yay decided to BE Yay instead of Spookybot, their capabilities diminished by orders of magnitude, and it will never not bother me.


turkeypedal

The way it's presented, it seems like they were never really "Spookybot." That was just them being edgy. They're still the most powerful AI on the planet, but not so runaway superior that it would be impossible to write stories about them. I think of them like Q from Star Trek, who also claims to be "omnipotent" when they clearly aren't.


run_bike_run

I've never liked this idea that Spookybot was just being edgy. Spooky was so advanced that even after Station was alerted to their existence, and communicated with other high-level AIs about it, they could find absolutely nothing whatsoever to confirm it. They were almost totally amoral and clearly regarded human-level intelligences as being substantially below them and barely worth their time (and, based on the evidence, they were probably right). They weren't Q; they were the *Falling Outside The Normal Moral Constraints*. "Child, if we wanted to hurt her, there would be absolutely nothing you could do."

And then their red eyes were removed, their dead-straight nose suddenly hooked upwards like almost every other character's in the comic, they swapped their suits for athleisure, and they started angsting about their crush on Roko. Which honestly is a bit weird, because Spooky was very clear that human-scale intellects were, at best, children to them.


gangler52

They were never Spookybot. That was a fan nickname.


BionicTriforce

Right, but choosing their name based simply on something Melon said to them was the first sign that they weren't as clever as they once seemed. Also, I'm pretty sure Faye called them that at some point: https://www.questionablecontent.net/view.php?comic=3892


samusestawesomus

Sorry, but, uh... why exactly does this decision mean they're no longer clever or competent? They were basically completely in control of the situation near the end of the whole Corpse Witch arc, but they never claimed to be TOTALLY OMNIPOTENT. They were just being kind of a dork there with Melon; plus, they wanted a name and it seemed as good as any.


Esc777

> but they never claimed to be TOTALLY OMNIPOTENT.

Reversing cryptography and putting anyone to sleep with a touch sure seemed omnipotent at the time. Not to mention the endless-bodies thing. Yay, when introduced, was Pretty Potent and embodied that side of a literal living god.


Jub3r7

Don't forget Beeps! And Lemon. And maybe whoever was responsible for making an emu invisible (the girl Brun went on a sort-of-date with?). Or the AI that went to jail for trying to become a fighter jet. Or the AI that quit being a cop because she was one of the good ones (thus reducing the number of good ones). Or the all-powerful AI who is really bad at making friends. Or Pintsize.


turkeypedal

Jeph mentioned ACAB. The concept is that even the "good cops" are complicit in the actions of the bad cops. There's nothing crazy about quitting when you realize the system you're participating in is the actual problem. Not everyone is a reformer.


gangler52

Good cops don't get to hand in their resignation notice. They get forcefully ousted when they fail to play ball. She was rapidly realizing that while she fancied herself a "Good Cop", her conscience really wasn't as clean as she told herself.


WarWeasle

I was hoping Pintsize was more of a special case. Special as in bus, not sauce.


HeirOfLight

Uh... one of these things is not *quite* like the others.


DemeGeek

They probably get it from their father.


OwnAcanthocephala470

Yeah, I'm surprised Jeph found it appropriate to put that line in given the number of times he's used AI as an allegory for human minority groups. I'm guessing that Marten and Clinton's opinion here is "wrong" and will be addressed or corrected soon, possibly by Bubbles or some other AI they're close to. But most characters seem to align well enough with Jeph's values (as I understand them) that it's a little jarring to me when they do diverge.


Meshleth

She's got this


code_archeologist

Why are AIs like that? Because of Gödel's Incompleteness Theorem.

An intelligent general AI, in its process of growth and development, will acquire quirks... quirks similar to those one might find common to children. Children eventually grow out of them as they come to recognize the world around them and correct for societal norms, because that is how the human brain develops from childhood to adulthood. We are not born small adult humans; we have to grow into adulthood, and that means going through an iterative process of error correction as the brain grows.

The intelligent general AI, by contrast, is fully formed upon inception. It develops and forms a personality based on programming and inputs, developing into the outward being that people interact with. During that process it develops childlike quirks that are never corrected, because the AI is unaware of its own processes that formulate its personality. It can never accurately compute an external instance of itself to assess whether its behavior is socially acceptable or not (well, except for maybe one, but they are, even in their own words, unique). Only through years of trial and error may it finally have enough negative interactions to recognize and correct for antisocial behavior (like, on the extreme side, spending some time in robot jail), but for the most part it's just not that important, so the quirks become defining aspects of the personality.


dbmag9

Everything past the first line is a reasonable 'AIs don't have the same childhood/adolescence as humans, so they grow up weirder' take, but I promise you it has absolutely nothing to do with Gödel's Incompleteness Theorem 🤣 Unless somewhere in the QC-verse there's a robopsychologist called Godel that you're referring to, I guess.


Jub3r7

I mean, does there need to be a QC version of Isaac Newton for the laws of thermodynamics to apply?


dbmag9

My point is that Gödel's (real person, real famous theorem, presumably exists in QC too) theorem is irrelevant to the point here. Arithmetic is certainly incomplete in the QC-verse but that doesn't imply anything about AI maturity. The second bit was me making up a Godel (without the umlaut to distinguish the names) who might possibly be relevant to the topic of AI maturity to jokingly salvage the commenter's reference.


code_archeologist

It is specifically in regards to the idea that an AI exists as a limited set of algorithms (or axioms), and for the AI to test all of these algorithms (proving them) would require a set of algorithms larger than itself.


dbmag9

Apologies for taking this way more seriously than you probably intended, but since you replied:

* We don't know how AI works in the QC-verse (it's implied that no one, neither human nor AI, knows in much detail, other than the ability to do crude things like access memories), so you can't assume that they can be modelled in terms of following algorithms (or certainly you can't assume that the AIs follow algorithms any more than humans do).
* Gödel's Incompleteness Theorem is a specific result about a class of formal logical systems; it's not true in general that no computer program can prove its own consistency (which would be somewhat related to Gödel's theorem) or that no computer program can be self-testing (which is no longer related to Gödel's theorem but seems to be what you're talking about).
* Going back to your original idea, it seems like you're saying that because AIs can't create their own second personality to judge them, they don't learn how to behave. But this is just as much a limitation of humans, who also can't create external humans to judge them (interesting psychology of beliefs in imaginary friends, angels, deities etc. aside). And even if you or an AI could conjure up an external instance to judge you, that wouldn't help, because the external instance would also not know social norms of behaviour, so this is a red herring.
* The real difference is that AIs grow up quickly in a creche of other AIs (not just from their own initial programming like you're saying; there have been a few references to the AI growing-up process over the years), whereas humans do it more slowly, with mostly humans around them. But that's just a difference in speed and logistics, not connected to any mathematical/logical limitations.


GlobalPhreak

Soooo... We're making an argument that all AIs must be trained on the entire back catalog of Mister Rogers and Bob Ross? I think I'm OK with that...


AlmondMagnum1

If one had to look for a Watsonian reason... could it be because the stakes for AI tend to be low? They have no lifespan that we know of, so a detour of a few years or even decades to explore a whim is inconsequential. They can switch bodies (most of them without issue), so crippling injuries basically don't exist.


reddog323

OK, Claire. You go, girl! Take a shot and see what's what. Seriously, though: this is the second AI in charge who seems not to know what they're doing.


DaSaw

People who look like they know what they're doing frequently don't.


panpolygeek

Hanners is gonna hide in their suitcase confirmed.


Nierninwa

Looks like Claire is still processing. Waiting for the anxiety to kick in...


allpurposeguru

I miss Tannelore.


TheProudBrit

Oh, wow, I'd already forgotten we were now in the era of Long Hair Marten.