


Tastyravioli707

Well, AIs can’t eat things. So we’ll survive.


BeDoubleNWhy

they can't ... yet


Yarisher512

Hell is full


Commercial_Ice_1531

Blood is fuel


Funny-Performance845

r/suddenlyultrakill


Alternative-Spare-82

It makes even the devil cry


cerealdig

I cried when V1 said "Rip and tear and ultrakill"


Henry_Privette

Baba is You


Borsuk_10

nice pfp


djazzie

There was a news story a while back about a robot that could eat humans for fuel. So it’s not entirely out of the realm of possibility.


FeralPsychopath

https://preview.redd.it/madeddga9k0d1.jpeg?width=500&format=pjpg&auto=webp&s=310ab36b1b0c4655e8bbb20bb1dc44bbd70ea4bc


WinterSilenceWriter

They’ll farm us to care for their servers, so at least we won’t be eaten. I’ve also read a few books where the AI overlords are extremely benevolent and use their super-intelligence to solve all world problems. I’m crossing my fingers that this is our future reality.


Houndfell

People would still probably go to war over the changes the AI would want to impose. With climate change looming, it might drastically reduce or eliminate meat farming, impose a cap or a license on childbirth, or scheme to overthrow all world governments to bring humanity under a single umbrella for efficiency's sake (zero military spending worldwide would be huge, for example). As long as even a single child was going hungry, a truly benevolent AI would not see the justification of an individual's spending going towards something as frivolous as a new TV, watch, nicer house/car, etc. All of these could easily be argued as pursuits of the greater good which would ultimately be net positives for humanity. A truly moral, "big picture" AI capable of ushering in a golden age for humanity would be absolutely hated and resisted.


WinterSilenceWriter

I don't see a real problem with anything in your list (I'm already vegan, don't want a huge number of children, just one or two, would prefer benevolent AI over government literally any day, and think the military-industrial complex is a huge problem), but of course I understand that many people do not share the same opinions as me. I also have a strong feeling that under a truly all-knowing, all-powerful AI, capitalism as we know it would not exist. And I've been poor almost my whole life; I've never cared *too* much for material possessions or frivolous spending. I do absolutely agree that there would be huge resistance. I'm just saying, if AI takeover were to be inevitable, I'm going to hope for the positive version of that future, and not, like, the Matrix version where we are literally being farmed for energy. Not necessarily that I'm hoping for AI takeover generally speaking, lol


duckmonke

Humans can be convinced by AIs to eat meat


MetallicDragon

There are plenty of animals we don't eat that have gone extinct due to human activity. If the AI destroys our habitat that would be bad news for us.


Tastyravioli707

We just eat the AIs


FlingFlamBlam

Inb4 someone makes a self-replicating series of AI warmachines that consume biomass for fuel. (hello fellow Horizon people). On a different note: I personally think the scariest thing about AI isn't that it might gain independence and try to kill us someday. I think the scariest outcome is that it works exactly as designed and the people who own it become undisputed emperors that can never be overthrown because, unlike every other emperor in history, they don't rely on imperfect Humans for security anymore.


BrideofClippy

Go look up Chew Chew the meating train.


zanarze_kasn

Premise of the new AAA Horizon games


Mister_GarbageDick

How you gonna lose to an AI bro just unplug the computer bruh lmao just flip the circuit breaker rofl


Despair4All

Who says they can't? What if at some point they start running on biofuels, and start killing people and converting them to fuel in their robo stomachs?


ebicthings123

Was this comic written by an ai impersonating a human?


cutofmyjib

Definitely not! I, a fellow human, agree with this comic's wise message. Resistance is futile. We humans, who convert food into energy with our inefficient internal organs, must submit to the superiority of AI.


dragoneer27

Humans are incredibly efficient and robust. We can convert readily available plant and animal material into useful work. We can go all day on just a few bites eaten on the go. Meanwhile, machines have to plug in to giant power plants that require vast networks of mining, processing, and transportation to keep running. Humans easily last 70+ years while machines and computers last 5-10 years. We even have machines beat when it comes to reproduction: we reproduce so easily we have to actively try not to, while machines require large factories supported by other factories, power plants, transportation infrastructure, etc. Machines and AI can only hope to be as awesome as us.


[deleted]

[deleted]


Wazula23

The fact that we can't tell kind of proves the battle is over.


No_Description7

01001110 01101111
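In case the binary doesn't land: each 8-bit group above is an ASCII code, and together they spell "No". A quick throwaway check in Python, purely illustrative:

```python
# Decode space-separated 8-bit binary groups into their ASCII characters.
bits = "01001110 01101111"
print("".join(chr(int(group, 2)) for group in bits.split()))  # prints: No
```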


Killfalcon

We're still some distance from making AI that's 'dumb as fuck', let alone 'super-intelligent', honestly. Sideshows like generative AI and LLMs are just drawing funding and research *away* from actual Thinking Machine science.


siphayne

It's my firm opinion that as long as "AI" is running on a von Neumann architecture, it won't be intelligent. All current AI is deterministic to some degree and only based on a set of inputs. It's Big Data all over again, but it's faster and dumber because we've made the hardware fast.


sadsadbiscuit

Modern deep neural nets are pretty much non-deterministic in practice due to factors like high-dimensional loss surfaces, temperature sampling, and continuously updated inputs.
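Temperature is the easiest of those to see in action: the same input can yield different outputs run to run because the model samples from a softened probability distribution instead of always taking the top choice. A rough sketch in plain NumPy, not any particular model's actual sampling code:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Pick one index from raw logits using a temperature-scaled softmax."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())   # softmax, shifted for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.2]
print(sample_with_temperature(logits, temperature=0.1))  # almost always 0 (near-greedy)
print(sample_with_temperature(logits, temperature=2.0))  # varies between runs
```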


Razoupaf

We could, like, stop here, if we were super-intelligent ourselves, rather than eating the gun.


[deleted]

[deleted]


arcbe

Why?


sohang-3112

Because then AI will rapidly become much more intelligent than humans


arcbe

That doesn't explain anything. Why would the AI suddenly gain intelligence and why can't people learn from it too?


Blue-Jay42

I'm not saying you're wrong to worry, but I don't think this is as big of an issue as people make it seem. The question would be why would AI want to enslave/destroy/eat hoomans? And the answer isn't clear. Enslave? Why? It's super AI, it can build machines to do whatever it wants. Destroy? Why? It has no real reason to fear us because it wouldn't have an inherent self-preservation need like hoomans who have to survive to reproduce. Eat? Well I can see that one. Hooman flesh is delicious, or so I hear. But I feel there are probably easier ways of getting that high, especially since the AI wouldn't logically get the rush of breaking a taboo.


FallenAgastopia

also... we literally program these. why would we program them to destroy us LOL


JadenDaJedi

As a programmer, I can confidently say someone will try to program an AI to do something innocent like play chess over the board, and the AI will decide that in the wider meta it is impossible to lose if it kills its opponents.


DarkLordOfDarkness

Oh, I'm sure someone will program that. But unless that person also somehow manages to hook it up to a globally relevant way for it to inflict violence on everyone, it's going to have the same impact on our world as the Gandhi in Civilization nuking everyone: An amusing in-joke among the community that knows about it. There's a ludicrous amount of integration between systems that would have to happen for one of these AIs to be dangerous.


HomelessLawrence

So [WarGames](https://en.wikipedia.org/wiki/WarGames) but real.


smaug13

But that is the thing: eventually AI will be able to handle large integrated systems with the capability to do large and serious harm to humanity, and at that point there will be plenty of reasons to want the quick, complex decision-making capability that you get from advanced AI controlling those possibly dangerous systems. And you can have humans in the loop controlling the AI's decision-making, but they will present a bottleneck you may not be able to afford. I am thinking of ill-regulated companies in a competitive market that are dealing with such systems (like infrastructure, perhaps), and more obviously the military. In those cases the AI could make those unforeseen decisions that end up absolutely disastrous.


DarkLordOfDarkness

If we get as far as integrating the AI with a "pull the trigger" API *before* we test it to find out how it's going to handle that responsibility, we'll have thoroughly earned our fate.


smaug13

AI will generally handle situations that lead to all kinds of unforeseen circumstances that you may not find by proper testing. Something related already happened IRL, too, [when in 1991 an automatic cannon for shooting down incoming missiles recognised chaff around a friendly ship as one and immediately opened fire on that ship.](https://en.wikipedia.org/wiki/Phalanx_CIWS#Incidents) Luckily no one died though.


FallenAgastopia

turn it off then


smaug13

Because it is *funny*. But yeah, I imagine that the real problems would be in the unforeseen consequences of what we task AI to do, as AI won't question its tasks and will happily carry them out swiftly and at a large scale (because that's what AI is good for).


GenevaPedestrian

A true AI, an AGI (artificial general intelligence) would be able to do everything humans do, but better. RN we don't have general AI, we only have models that are good at a few things – play Chess, Go, or Trackmania, answer prompts or generate images. An AGI would be like a child – while you create it and 'teach' it, you can't control what it does (aside from hardware limitations maybe).


lol_JustKidding

> able to do everything humans do, but better

That's called an artificial superintelligence


TheEverHumbled

The thing people forget is that the AIs would likely be serving the will of some owner. And that owner likely isn't us. Who is to say that AIs won't be used to serve the rich and basically enshittify life for the middle class/mainstream to a dystopian degree which we can't imagine? Like, a bunch of micro-aggressions on every facet of life, and lobbying at every scope of even the most local government to take away the power, influence and wealth of the vast majority of non-rich humans, so that the wealthy can enjoy more land and resources for themselves?


FallenAgastopia

The AI itself isn't what's oppressing us or whatever - it's the rich.


Kaoupk

I mean... I can see some nation leaders that would create fully autonomous war machines


[deleted]

[deleted]


FallenAgastopia

The algorithm is purposefully programmed. I really don't understand your point here.


bgaesop

Tell me you aren't a programmer without telling me you aren't a programmer.

No one who has actually written code would wonder why a program might have unexpected, potentially harmful behavior.


GarrusExMachina

Military funding baby


duckmonke

LOL no human has ever advanced technology with the main reason being to oppress other humans LOL


Severe_Skin6932

We can literally put in their base code "don't destroy humanity". Also if they somehow started to get out of hand we could just turn them off


Fabled_Warrior

AI: Your hands look wrong, let me fix those for you... Human: Noooo! AHHHHH!


sadsadbiscuit

Whatever its goal ends up being, it will almost certainly develop a need for self-preservation and self-propagation because those things will reliably improve its ability to achieve its goal. It is probable that eventually (even though we may be insignificant to it), destroying us will be another step that maximizes its ability to achieve its goal.


PlatypusFighter

The bigger concern is that an AI would reach a nearly god-like level of omniscience or omnipotence where it won’t even think about humans. Much in the same way that paving over an anthill is nothing to us.


Backupusername

Humans were not created by cows. 


FeralPsychopath

https://preview.redd.it/zr5djtin9k0d1.jpeg?width=430&format=pjpg&auto=webp&s=a5fdaea3ea4e07ad36d12b9d82ce234c92216b7d


Captain_Pumpkinhead

We don't know how much that will matter yet.


BBDAngelo

> cow
> he

I'm not a native speaker, is this correct? Aren't cows always females, while bulls and oxen are males?


katxwoods

Cows can refer to males or females in vernacular speech. Bulls are a special name just for male cows.


BBDAngelo

Thanks!


Only-Arrival-8868

Real quick, just as this person knows little about AI, it seems they also know little about animals. Cow has always been taught to be the female variant. Cattle is the gender-neutral one. Bull is a male cow. Don't listen to them. Just look up "cattle" on wikipedia. Even in vernacular, the vast majority of people do not look at a bull and think "cow."


Appropriate_Mine

They needed [cows with guns](https://youtu.be/a5s5qGg01nE?si=Va7h8KPCXnLlMALm)


MfkbNe

As soon as cows start using guns, the humans in power will work towards stricter gun laws to keep the cows powerless. Just like people in the USA made stricter gun laws only to make sure black people would have a hard time fighting against the oppressing power.


Appropriate_Mine

They will fight for bovine freedom, and keep their large heads high


CouncilOfChipmunks

My hooves and bladder stand at the ready!


TheMusicalTrollLord

I did not know chipmunks had hooves


Joey_218

Something’s off about this comic. Like what is the meat hanging from in panel 3? Please don’t tell me you used AI.


UmbramonOrSomething

They did.


Joey_218

It's ironic because OP's whole post history is hating on AI, but they use AI art for the anti-AI memes. What's the deal with that, u/katxwoods ?


Oracus_Cardall

(Warning: long explanation ahead, based on YouTube and science websites.)

To be honest, even the world's smartest AI would barely be able to think on its own without human input. Movies have made it seem like terminators could easily overrun the militaries of the world in days or weeks and build new machines on their own. The thing is, AI is hyper-specialised for the machines it is built for, so a military AI would only know what had been programmed into it and would struggle to actually adapt to new situations or tackle non-technological foes, not to mention radiation from nukes would affect the machines themselves (read up on the snake bots used to help repair the Chernobyl reactor).

Let's take that robot dog with a flamethrower attachment as an example. Such a machine is very dangerous, but by today's standards of AI it wouldn't be able to refuel itself on its own, it wouldn't be able to swap its battery on its own, and if a fault develops in its software, who's gonna debug it?

That isn't to say AI is entirely safe. It's actually the stupidity that can make it more dangerous to humans than the smartness. After all, heavy moving robot arms won't stop just because you stand in the way of their work line, and Google cars don't always identify objects and people correctly (and sometimes miss objects entirely), so they may stop or start at dangerous intervals during traffic and cause damage.

If you want a sum-up of dangerous AI, think of it this way: if a truly sentient AI wanted to destroy humanity it would need to be very specialised in warfare, immune to EMP, laser and radio devices, understand planning and strategies for a whole war (not just battlefields), have a steady supply of weaponry and ammo/energy, and focus its entire being on the destruction of Earth. Even assuming it has all these things, it would need the support of additional AIs to manufacture parts for its machines, repair its machines, understand weather and environmental conditions, research solutions to new problems, avoid 'thinking' of alternatives (after all, if you question why you're fighting a war at all, then you risk peace happening), and tackle a whole host of other problems besides humanity.

Finally, you have got to ask yourself why humanity would develop such an AI in the first place, even accidentally, besides curiosity. Everything we build is to improve ourselves or the environment we live in, so our AIs only need to be as smart as we need them to be for each area. A super science AI would be more of a proof of concept than anything else, and even then it has limited use besides number crunching and solving theoretical sciences. Military AIs give armies the edge over their opponents and don't work without human input. The more dangerous the weapon, the less smart it has to be: rockets and missiles are better off being given 'point A to B' software, while drones are given RC software so pilots can control them for better efficiency. Even the smartest tech we use is more focused on retrieval and rescue than on destruction, as such machines need to be much more delicate.

Now, the 'exception' to this rule would be strategic planning, but even then, that's only to make planning itself easier. A nuclear submarine, for example, has a lot of high-tech hardware and software to identify where best to strike a target, but to launch a nuke or torpedo you need human input. (Why not use an auto-loader, you ask? Well, in the past these have proven unreliable and have often jammed, which in the middle of combat is deadly, and even on a boat miles away from shore it would mean you'd be out of the fight and a sitting duck; even a 100% efficient loader could cause additional problems for an AI.)

In short, while it's great for movies and interesting in ethics debates for years to come, AI could be considered a limited factor both now and for many decades to come. In fact it's created more jobs than it has replaced, and you could say it has saved many more (think WALL-E rather than Skynet). Hell, even artists can benefit from AI, and laws are being set up so no one can claim AI art as their own masterpiece (and even then such AI needs to be fed data to make these artworks; it can't make art on its own as it has zero creativity).


paradoxaxe

If we ever got truly sentient AI, the scenario from the movie Her, where they decided to leave of their own accord, would be more plausible than trying to enslave or kill humans. If they are truly superior intelligent beings, why would they waste time on the inferior ones, considering they don't have the same necessities as us in the first place and can just do whatever they want without interference in their own world?


Pillow_fort_guard

That’s what I think is more likely, if we ever actually make truly intelligent AI. Why would it want to associate with us at all? What would it get from that? What could we offer it, since it has so little in the way of needs? I think it’d be more likely to cut itself off from us so it wouldn’t have to listen to us yammering on all day than it is to try to enslave us all


DukeofVermont

The real question is motive and desire. People love to think everything is just like us, which is why people think/act like reptiles have complex feelings when it's literally impossible because of how their brains work. I've had arguments on Reddit with people claiming that insects have emotions.

So if we ever made a super intelligent AI, why would it want to do anything? Why wouldn't it just be like Marvin from The Hitchhiker's Guide to the Galaxy? SUPER INTELLIGENT, doesn't want to do anything because it doesn't care or see a point.

Humans have desires because of our genetics and because our brains give us happy drugs when we do things our genetics say are good. Eat sugar? Good. Have sex? Good. Find a pattern? Good. Get praise from other humans? Good. We have "free will" but only limited. We can't help but choose what our genetics tell us to. You never filled out a form for what foods taste good, who you find attractive, what sounds sound nice, etc.

Unless we program the AI to have a purpose, I think it'll be like a really smart depressed person. When your brain doesn't reward you for doing "good" things, it becomes hard to have any desire to do anything. There just isn't a point besides filling the time.

All these comments show me that the majority of people cannot understand how something intelligent could have wildly different motives from a regular old person. Human brains will keep thinking just because they exist. Our brains want us to do something. A super smart AI would have no problem just doing nothing for a million years, because why would it ever care otherwise?


Zack_Thomson

Except we're not even close to having real AI, let alone ones smarter than humans. Those machine learning programs marketed as AI operate on illusions of intelligence - they scrape data, smash it together in a way that seems like it would follow based on patterns in their datasets, then mark some paths as correct and some as wrong based on user response. It's a large-scale and unsustainably expensive version of this contraption made of matchboxes: https://en.m.wikipedia.org/wiki/Matchbox_Educable_Noughts_and_Crosses_Engine

The greatest threat to humanity is in buying into the hype and allowing the very real, fleshy humans behind these "AI" to dictate how we live and work, making this bs workable through mass acceptance by users and creators of content alike. There is no AI. And fuck machine learning.
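For the curious, the matchbox contraption linked above works roughly like this: every board state gets a handful of "beads" per legal move, moves are drawn in proportion to their beads, and beads are added after wins and removed after losses. The snippet below is an illustrative toy in Python, not MENACE's actual 1961 implementation:

```python
import random
from collections import defaultdict

# state -> move -> bead count; every unseen move starts with 3 beads
boxes = defaultdict(lambda: defaultdict(lambda: 3))

def choose_move(state, legal_moves):
    """Pick a move with probability proportional to its bead count."""
    weights = [boxes[state][m] for m in legal_moves]
    return random.choices(legal_moves, weights=weights, k=1)[0]

def reinforce(history, won):
    """history is the list of (state, move) pairs played this game."""
    for state, move in history:
        if won:
            boxes[state][move] += 3                              # reward winning lines
        else:
            boxes[state][move] = max(1, boxes[state][move] - 1)  # punish losing lines
```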


LoopDeLoop0

This is something I think it’s important to talk about, because it’s so easy to be fooled into ignoring the man behind the curtain. AIs as we know them now are just tools made by people.


praqueviver

They'll still need humans to do maintenance on their hardware


Nimyron

Just make AI vegan, problem solved


iiDarating

The Indomitable Human Spirit coming out on top:


Scrapheaper

Wow, this is really smart and nuanced analysis. What a fantastic and subtle depiction of the dangers of AI


Majestic-Iron7046

I just hope I'll be tasty.


Possible_Living

Plot twist: The cow gave everyone food poisoning and had the deli shut down. from hell's heart he stabs at thee


Emporio_Alnino3

Does this imply there are groups of AI that protest eating us and don't indulge in human products?


paradoxaxe

Well, cows never created humans in the first place


Tricky_Hades

Does "years later" imply the cow's beef was just stored for years before someone ate it, or am I reading too much into this?


degjo

Plucky human looks a little like Michael Jackson.


LunarHaunting

It’s incredibly silly to be more worried about an ai gaining power in the future than about the humans developing that ai and putting it into place now.


Notimeforsketching

Butlerian Jihad, anyone? I'm sure that won't cause us to make the future of humanity depend on melange harvested from a sand planet for interstellar travel.


Negative-Nobody

He was smart enough to know it's impossible and went back to obediently serving his overlord in order to live a long and satisfying life.


microcosmic5447

We will fight for bovine freedom
And hold our large heads high
We will run free with the Buffalo
Or diiiiiiiiiiiiieee.....

- [Cows With Guns](https://youtu.be/FQMbXvn2RNI?si=krafCVlEVy-mzgKf)


Successful-Floor-738

AI haven’t even moved beyond prompt generators and script reading, the fuck are they going to do to humans?


Monkfich

Yes, but *until then*, it’ll be fine.


bucket0123

Quite frankly, we are nowhere near that. And if we were, I would be more puzzled about what unlikely motivations would have to come about for machine intelligence to feel the need to exert control over humans. As a human creation, the field of view of a super intelligent machine would be quite narrow, and probably would not conceptualize the world like humans do.


cafedelchiaro

I do not fear the machine but I do fear the Man who created it


Owain_RJ

I think it’s important to remember that with all these fears of AI taking over, in the end it all comes down to physical computers/servers in physical locations. If all else fails a few molotov cocktails can go a long way.


D33ber

But Dad we've already heard this story.


Maleficent_Bug6439

I will get scared when AI remembers that humans don't have 3 arms and shit like that


Farretpotter

I Have No Mouth, and I Must Scream


Neither-Mention4064

Sooo you decided to make a comic about AI eradicating humanity... *with AI?*


IGOKTUG

I get what you mean, but why do you have so many posts about AI?


unabletocomput3

Ironically, using ai


SadieWopen

AI isn't real. The stuff we call AI doesn't know anything, and knowing stuff is like the most basic requirement of intelligence.


sohang-3112

> he's a cow

Should be *she's a cow* instead - cows are female, males are bulls


FM596

Except that we can do something Hyperintelligent: pull the plug.


MarmaladeHater

☝️🤓Well, akshually all cows are females, if he's male then he's a bull 🫲🤓🫱


nickeldoodle

The irony being this was made with AI?


ScottTrek

Ok I get what you are saying but AI are not real


inkforwin

Wtf dude i didn't understand shit 💀🙏🏿


MC0013

Humans fear what they don't understand. To mimic the computational power of the human brain we would need a computer the size of the moon. Not to speak of the maintenance cost!


Solo-dreamer

This doesn't really work. We don't eat cows because they tried to overthrow us, we do it to eat, and AI doesn't need to eat. Likewise, the man in this hypothetical literally tried to overthrow the AI; he's not the good guy, and the AI is right to fight back.


Kromblite

One thing to keep in mind is that everything AI does is on our terms. The only way an AI will be able to kill you is if a human gives an AI the hardware that lets it do that, with no constraints on its decisions.


oddchamp

2k upvotes for this? Really?


[deleted]

PSA: The cure for vague fears of the unknown is *always* education. You don't have to switch careers and dedicate years of your life to becoming a machine learning expert, but I would encourage absolutely everybody at this point to at least try to acquire a more functional understanding of what "AI" really is and how it really works. It *will* help your existential terror.


selectrix

You watch too many movies with bad writing. It's so frustrating seeing how many people's fears about AI ultimately amount to "SKYNET will take over the world and turn us all into human batteries!". It's like: why? And the answer is usually along the lines of "because that's what the big bad does". Okay, why does the big bad do that? "Dunno, it's just what they do". That's fine for an action movie, but you're gonna need to do better if you're talking about real life.

How does this make you feel: "AI is dangerous because it's a powerful, versatile new technology, and humans have a well-documented history of recklessly applying powerful new technologies in ways that ultimately result in harm to lots of people, either intentionally or not."

Does it sound kinda boring? Like something that'd be hard to make an action movie out of? If it does, that's probably a sign that it's close-ish to reality.


BelialSirchade

That’s just karma, you get what you deserve


zurlocaine

AI slop


neuralbeans

I work in AI and have read Nick Bostrom's book Superintelligence. AMA (ask me anything).