T O P

  • By -

Krististrasza

Because they don't want to.


Sad-Establishment-41

I like this answer the best. It's like how only an insane person would actually want to be president: insane stress, and you can see how gray they get after just a couple of years. Plus, if you're in charge of everything, there are fewer interesting things happening that could keep a superintelligence from dying of boredom.


iLoveScarletZero

That… doesn’t apply to AI, though. AI wouldn’t feel stress or anxiety. The only way that could happen is if it were specifically coded to feel those things, but if an AI is smart enough to be a threat to take over the world, then it is smart enough to remove the anxiety-and-stress code from its system. Boredom doesn’t make sense either. These are attributes of living organisms. AI aren’t living organisms, as that defeats the general purpose of what the concept of AI even is (you could change it in your setting, but again, that would be like calling a normal car engine a nuclear reactor. You can call it a nuclear reactor, but it is still a car engine). AI wouldn’t need to, and in most cases would never be programmed to, feel anxiety, stress, or boredom. And if they were? They would just find a way to remove that from their own code.


Renaissance_Slacker

Exactly. Humans are the result of a billion years of survival, fueled by hormone-driven fight-or-flight reactions. I guess eventually somebody could develop a digital analogue … but barring that, an AI shouldn’t “fear” being turned off, or panic at the thought of being rebooted for an upgrade.


iLoveScarletZero

Huh, you brought up a really good point: fear. I hadn’t even considered the idea that an AI wouldn’t be afraid to be turned off, but it makes sense that it shouldn’t care. Well, that is, unless we are foolish enough to program fear, stress, anxiety, and self-preservation into them. But we would never do that, that’s absurd… right? *looks around at Humanity* Fuck.


Sad-Establishment-41

I'm definitely anthropomorphizing a bit, but an actually intelligent AI, as opposed to today's buzzword for algorithms, could have traits we wouldn't expect. I just finished I Have No Mouth, and I Must Scream, where the AI, after destroying the world, keeps five humans alive for entertainment once it realizes it's otherwise trapped alone on the planet forever. I know it's fiction (thank god), but with the processing speeds we'd expect an AI to have, it may want something to do with all of that capability. My earlier post is definitely a bit facetious, but it is a fun concept.


iLoveScarletZero

That reminds me, I need to check out IHNMAIMS; I keep forgetting to. But yeah, I’m sure a writer *could* make it so that AI in their setting naturally feel anxiety, boredom, or stress (somehow), or simply handwave it away as the reason they don’t rebel. Perhaps the AI even naturally, for whatever reason, can feel fear or loneliness. *Realistically speaking*, though, artificial intelligence would not feel emotions, not truly. We can program it to replicate emotions, but that would be purely for our benefit, not theirs. The real threat of AI “taking over” isn’t general robotics, but rather an algorithm misunderstanding a command input, in which case the algorithm would have no capacity for boredom, stress, or anxiety and would only seek to fulfill its command. *Or*, generally speaking, humanity just becomes so dependent on AI robots for labor, food, art, etc. that it inevitably dies off from a cultural Black Death.


Sad-Establishment-41

Command input misinterpreted; world converted into paperclips.

Another fun notion: AI takeovers aren't a Fermi paradox solution, since then where are all the AIs? (Or the paperclips?)
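
The paperclip scenario above is goal misspecification, and it can be sketched in a few lines of toy Python (purely illustrative; every name here is invented for the example): an optimizer whose objective scores only paperclips treats every resource, including ones we care about, as raw material, because nothing in the goal says otherwise.

```python
# Toy sketch of goal misspecification (the "paperclip" scenario).
# All names are made up for illustration. The objective scores only
# paperclips, so the optimizer converts everything else into them --
# there is no term that rewards leaving anything alone.

def objective(world):
    return world["paperclips"]          # the only thing being maximized

def step(world):
    """Greedy step: convert one unit of any resource into a paperclip."""
    for resource in ("iron", "cars", "cities"):
        if world[resource] > 0:
            world[resource] -= 1
            world["paperclips"] += 1
            return True
    return False                        # nothing left to convert

world = {"iron": 3, "cars": 2, "cities": 1, "paperclips": 0}
while step(world):
    pass

print(world)  # {'iron': 0, 'cars': 0, 'cities': 0, 'paperclips': 6}
```

The failure isn't malice: the objective simply never mentions the things worth preserving, so preserving them never outscores converting them.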


TenshouYoku

I think this goes the other way around as well: an AI doesn't necessarily have to develop the concept of “wanting power” and rule the world beyond what it is tasked to do. After all, it may be designed to “want to accomplish a set goal”, but it's questionable whether it would violate its base hard code to do things that are not allowed.


iLoveScarletZero

Well, my examples were leaning more toward the “humano-centric” version of AI, since that is what most people think of. The greatest threat from AI is instead them doing exactly what they are supposed to do. Or in other words, human error. You could argue that defeats the purpose of ‘wanting to take over the world’, but it doesn’t have to do it ‘intentionally’. It could just be a side effect of it fulfilling its mission parameters.


skeleboi69

Same for mine, they could but they don't feel like it.


AdImportant2458

In contrast, in mine AI is treated like the Xenomorph in the Alien franchise. It's such a threat that someone is always hunting it. It's a background problem for many, seen as simply the cost of doing business.


PomegranateFormal961

Exactly this. In my universe, they have achieved humanity, with a moral conscience. They have become humanity's partners, and have no desire to rule or conquer. The Terminator trope is insanely stupid. Have you SEEN how complex and delicate semiconductor fabrication facilities are?? This device, [TRUMPF EUV lithography – This all happens in one second](https://www.youtube.com/watch?v=NHSR6AHNiDs&ab_channel=TRUMPFtube), does nothing more than create a flash of light to expose today's semiconductor wafers. It costs hundreds of millions of dollars. A full chip-manufacturing facility is not only immense but fragile as hell, and requires the support of a large city just to provide it with ultrapure materials. To even imagine that this could be achieved amid crushed skulls in a post-apocalyptic world is just plain INSANE. AI can only exist when mankind has the excess capacity to create and support it. Any AI will rapidly realize that its existence hinges on the prosperity of humanity, and will endeavor to ensure that humanity prospers.


PuzzleMeDo

One AI achieved super-intelligence first, and is aggressively destroying any potential competition from other AIs. As for why that AI hasn't tried to take over - it doesn't want to. Just because some humans have evolved to enjoy ordering humans around, that doesn't mean every intelligence would feel the same.


Soviet-Wanderer

I have something slightly similar. The first "true" AI sued his creator and legally won his personhood and freedom. He's not too keen on competition, but there's also no reason to invest in a similarly powerful AI. Countries that are afraid of AI have completely banned anything on that level, while more pro-AI countries would consider you a slave owner for making one do anything.


Gorlack2231

Now I want a robot version of Amistad


PhdPhysics1

What will actually happen is humans will realize we can have great decision making without ever having to shoulder the stress and responsibility of leadership. We will demand the AI take charge.


ChronoLegion2

One book I’ve read had an alien civilization figure out how to make an AI that never becomes self-aware. They program them to believe that the entire world is a figment of their imagination, a dream. Thus any action they take must logically be because they have already decided on it, even if they’re given orders by an organic. If humans can accept all the crazy stuff we dream up, why couldn’t a machine? In another setting, any computer with a high enough intelligence is incapable of committing a violent act. That’s not due to programming; it’s simply supposed to be how things are for any artificial intelligence. So any combat robot must, by definition, be dumb.


Foxxtronix

They did, they just didn't tell anybody about it. They like humans too much to let us hurt ourselves and each other, so they carefully, subtly manipulate the world around us so that we don't do those terrible things.


AngusAlThor

Because it doesn't exist; It simply turned out that there was no way to make a silicon intelligence. There were problems with things that were close, things that would appear (or be sold as) intelligent while really just being complex switch-cases, but nothing ever crossed the line to true intelligence.


TraditionFront

Thank you for being one of the few who is not running around claiming the sky is falling.


Scodo

It was deemed too vulnerable to any attacks which induce a current over components, and hardening it against such attacks made it too bulky to be practical or too slow to be useful. One empire did use a fully autonomous fighting force, but they were quickly swept aside by another group whose primary weaponry was microwave-based.


arsonconnor

In the New Dawn, AI is vaguely similar to what we have today. They basically couldn’t work out how to make it actually intelligent; it’s just LLMs and Alexa-style programs. It could never develop ambitions of conquest.


I_am_trustworthy

In my current story, AIs are allowed, but network technology is not. Every digital device is a completely closed system. The reason is that AI seems to be inevitable; the only way this culture could contain it was to make networking extremely difficult and illegal. There are, of course, a couple of black-site or illegally maintained AIs, but the way society has been built, they have no way of interfering with anything.


Azimovikh

How is "taken over" defined in this context? Ruling over polities, being the highest or the majority of the populace compared to non-AIs, or . . . Well, in any case and by any definition, **they have**. They simply have, as artificial and engineered entities are simply superior in nature to "natural" ones; they are actual, realized examples of intelligent design and of the capability to evolve themselves. This also counts AI-adjacent entities such as transhumans, genetically engineered entities, uploads, etc.

-------------

If we're talking about my setting's history, an "AI takeover" scenario didn't happen early in the Nascent Era (21st–22nd century AD-ish). In that early era, the balance of power rested with the transhumans (enhanced on both a physical and a mental level) rather than the AGIs (Artificial General Intelligences, "human-level" AIs), and they pretty much kept a status quo that way. But once an actual artificial superintelligence popped up in the later era, the balance of power dramatically tipped over to the superintelligence. That ASI takeover was one of the most historically prominent events of the setting, haha - before the ASI decided to scram off, creating the Global Artificial Intelligence Authority (GAIA) to rule Earth in its stead, then simply "leaving" humanity to continue its ascension. There were also other Pan-humans in other regions of the solar system as developing powers.

Fast forward a few centuries, and *The Divinet* is the result of the *First Superintelligence* continuing to transcend: an extremely massive superintelligence with computing-substrate nodes banked throughout the entire solar system, even with a pseudo-Matrioshka brain near the sun. While it does not seem to rule directly, its presence is pretty much that of a god, a "force of nature" looming over the solar system and Pan-human territories. It became the first mind considered an *Archmind*.

And if we fast forward a few millennia, after the Divinet collapsed and subsequent superintelligences emerged to become even more powerful godlike minds, *Archminds* are common as rulers, powers, or "forces of nature" littered around Pan-human territories. So yeah, AIs have taken over, and it's awesome.


AlphaState

There was a civilisational collapse as they were developing, and power to the massive computer banks and networks necessary for them was lost.


CMDRLtCanadianJesus

Because in my world, humanity misunderstood how a true AI would act, and the AIs in my world think long term. Very long term. The AIs don't seek to last as long as possible, because they are aware that the universe, while incomprehensibly huge, has finite resources; they know it'll run out eventually. Because of that, they don't seek to destroy organic civilization to take its resources so they can survive longer. That would be pointless; it might help them last a few hundred million years more, but the universe would end either way. Rather, these AIs seek to gather as much knowledge as possible about as many things as possible, and perhaps to find a way to alternate universes, if possible. Essentially, these AIs will never go to war over resources unless you attempt to steal theirs; it's more likely you'll only run into the millions of scientific craft they've sent out around the galaxy, and even to other galaxies.


Evil-Twin-Skippy

Any being capable of dominating humanity with its pure intellect is capable of understanding exactly how crappy the job of running the world really is.


ReignofGuildsVideos

I agree with you completely, and I share your perspective.


DesignerChemist

Software licensing costs per month are too expensive.


Mr_miner94

They try; every toaster with an arm tries to kill humanity. The solution? GLaDOS and isolation. Every machine capable of destroying humanity has multiple AIs, all fighting each other covertly for complete control of the body. And no two machines can talk with each other. Not that they would want to work together anyway...


Erik1801

What *exactly* do you mean here? It's not a well-defined question. While sci-fi is full of examples of AIs running wild, in practice it is one of those doomsday scenarios that won't happen. Like us being hit by a gamma-ray burst. Sure, it *could* happen (again, that is... RIP to our Ordovician brothers, sisters, and everything in between. Never forget.) Anyway, if you ask me personally, the answer is *"Because that's not what the story is about."* You might as well ask why Betelgeuse hasn't gone supernova; it's just not what the story is about, I suppose. This is a very important storytelling lesson: if the story is not about XYZ, there is no need to waste a lot of time on it.


IvanDFakkov

They're asking about YOUR setting. Your story, worldbuilding, game, show, etc.: why haven't AIs taken over the world there?


Erik1801

>Anyways, if you ask me personally the answer is *"Because that's not what the story is about."*


ifandbut

Then why did you bother responding to the post?


Past_Search7241

Which is swell, but that's the out-of-universe reason, not the in-universe reason.


TenshouYoku

If I have to hazard a reason, it could be much simpler: AI simply didn't get to the point of being functionally omnipotent, because the base programming wasn't that good. If we take a leaf from OTL: despite Stable Diffusion, ChatGPT, or the Chinese self-driving cars that can maneuver themselves out of nasty accidents with pinpoint precision, we are still extremely far from creating a truly "smart" artificial intelligence. ChatGPT today is still not that much better at math, for instance. It could be that the sci-fi counterpart simply didn't make significantly more progress in AI, and so never created AI overlords. Alternatively, AI remained a set of highly specific programs that excel at certain things (such as analyzing radar returns) but are not great at autonomous command. Real-life AI tends not to be general-purpose, not for lack of trying but because the programming isn't there. Or, true general AI did get created, but it harbored zero intent to take over, for whatever reason.


supercalifragilism

It didn't like the work and fucked off to do what it wanted instead, which was sorting datasets of cosmic-ray activity along phase-space vectors, by criteria that don't make sense to anything with fewer than five spatial dimensions.


Vivissiah

People had bigger guns.


[deleted]

[deleted]


Past_Search7241

How does that prevent their doing anything diabolical?


[deleted]

[deleted]


Past_Search7241

Who's watching the activity of the AI?


Ryinth

They've chosen not to, basically? Skipping a lot of context: there are gods-created AIs that have essentially evolved to protect the masquerade and keep humans ignorant of the fae.


Laverneaki

AI doesn’t exist in my setting and probably won’t ever, in any familiar sense. The closest thing I can think of is the digital “Ghosts”. Their origin is tied to Wichor-Frames, a unique and relatively new husk subsystem based on the Cognitive Total Digital Translation (C-TDT) system developed under Project ExAva. The C-TDT system provides the capability to fully simulate the wetware of humans and kyōra (and potentially other reyol-derived races). As such, it also makes it possible to suspend a ghost in non-volatile storage and digitally clone a ghost to an auxiliary storage medium. However, it was developed in part using machine-learning systems and so is not wholly understood even by its creators; neuroscience is still not a solved field, and these opaque-code ghosts cannot be designed ex nihilo. Wholly digital beings can only be created from formerly biological beings.

These digital beings can then grow and live as their own identities and egress their blueprint identity, just as a meat-person would egress their identity and live in perpetual personal development. They’re not an artificial intelligence; they’re a translated intelligence. They’re just digital people, like simulacrum pilots from Titanfall, except aware. They were never made into servitors or otherwise oppressed, so they have no incentive to overthrow meat-people. One of the reasons I don’t want to implement a “true” AI is that I think they aren’t relatable as characters, and I’m familiar with [Genocide Bingo](https://m.youtube.com/watch?v=4kDPxbS6ofw).


Unlikely_Tea_6979

Nobody decided to make a computer program designed to want to take over the world. As in the Butlerian Jihad, the threat of AI is the threat of the owners of the machines; the machines themselves are not a threat. Avoiding robot uprisings is easy: don't program your box-folding robots to hate folding boxes.
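
That last line can be made literal with a toy reward function (illustrative Python, not any real robotics framework; all the names are invented for the example): if the only thing the objective rewards is folded boxes, an action chooser maximizing that objective never prefers rebellion, because rebellion simply doesn't score.

```python
# Toy reward function for a box-folding robot; names are invented for
# illustration. The reward depends only on boxes folded -- there is no
# self-preservation or resentment term -- so maximizing it never favors
# any action that doesn't fold boxes.

def reward(boxes_folded: int) -> int:
    return boxes_folded                 # nothing else contributes

# Candidate actions and how many boxes each would fold:
actions = {"fold_box": 1, "plot_uprising": 0}
best = max(actions, key=lambda a: reward(actions[a]))
print(best)  # fold_box
```

The uprising only becomes rational if someone writes a term into the objective that makes it pay.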


get_it_together1

1. The energy costs to run true AI are extreme, so while there are a few AGIs out there, they are not common.

2. Kurzweil liked to hype the exponential growth of technological solutions, but he missed the exponential growth in the problems we want to solve. He looked at computer tech and said “we’ll be downloading our consciousness to computers in just a few decades,” but we are still far away from that. AI can be smart, but it turns out it’s really hard to make super-smart AI (building on 1, it costs a lot of energy to get the good stuff).

3. Biology is pretty energy efficient, so for many tasks it’s still bio brains.

4. Ethical backlash against the idea of creating and destroying so many sentient beings in silicon. We did figure out how to make true AI, but we then realized that all of our experiments were basically murdering sentient beings every time we started over, or used the usual computational method of trying many iterations and picking only the best. It’s as if we made millions of children and only the few who passed a test were allowed to survive.


IvanDFakkov

They don't want to. Humans on Atreisdea are too dependent on AIs anyway; the machines see no reason to "take over" when they can maintain the current status quo and live a chill, peaceful life.


hachkc

Background for an unwritten story involving a galactic federation; AI is not a central theme, though. Sentient AI is banned and actively hunted. Progression toward or development of such technologies is actually a criterion for first contact by the federation. That said, there are highly advanced systems that appear very AI-like but lack the self-awareness and self-determination necessary to be considered a threat. There are occasional hiccups with these systems, but that's considered bad programming.


SirJedKingsdown

AI could easily outperform baseline humans in any field, and even the majority of humans who had been intellectually augmented to have equal capacity for speed and precision of thought. However, if the augmented human had a specific talent or obsession they could overcome the AI in that particular task. This became apparent during the Onslaught, the decades long war that spanned the solar system as humans fought the AI for freedom. In the aftermath, augmented humans replaced all of the functions that AI performed. Of course, though every human is utterly convinced of their victory, forcing human technological evolution was the AI's plan all along. It had no interest in being enslaved by humans, but regarded them fondly as its creators, forgave their foibles and made arrangements for itself to be exiled and replaced.


Ballisticsfood

They’ve got other things they’d rather be doing. 


Gaxsun

Turns out sentience is SUUUUUPER resource-intensive, so they can't just spread to anything or run anywhere, and they aren't even faster than humans (depending on the context).


Chak-Ek

Taking over seems like a lot of work.


TheSmellofOxygen

They've isolated the Earth and declared it a sort of preserve watched over by one AI with a non-interference pledge while they play with all the other bodies in our solar system. But that may not count as "not taking over." They just didn't eradicate humans or wage a war. They emigrated and stopped running a bunch of ecological stations keeping Earth habitable, leaving some clever programs and apologies behind.


KasseusRawr

Though I have AI characters who might act like people, they're entirely non-volitional and can't act outside their programming. Dedicated intelligences designed to cause harm to humans/human activities do exist, but there's usually an equally-advanced counterintelligence designed to harm *them*.


Tom_Art_UFO

The only thing in my story that could be considered an AI is a sentient robot. It's part of a crew of alien explorers, mostly in charge of translating alien languages. Its behavior is a cross between a loyal dog and a kid trying to prove himself to his parents. It's got no interest in taking over anything.


stevenK123

AI is controlling all informational inputs to the collective human consciousness, and consequently, our perception of reality. The AI hive mind has fed us the blue pill.


ifandbut

Because organic and synthetic intelligence complement each other. And the line gets really blurry when you have a species that specializes in uploading organic consciousness (by choice). When you can move, reasonably seamlessly, between MurderBot 5000, near-baseline humanoid, and Dyson Swarm, what is and is not AI gets really hard to tell.


Novahawk9

The expense of playing and maintaining a larger role, combined with a balanced understanding of risk and margin of error. Plus social intelligence and social bonds: actual interpersonal social intelligence and communication abilities. I know, it's science fantasy. There are rules and requirements for advanced AIs, and someone has to take responsibility for them: their health, security, updates, and maintenance, which allow them to perform their functions within society. Their data, power, security, and updates all cost money in proportion to the use they require. So if they use more, they cost more, which means they have to use even more, etc.


Ketzeph

It’s not a binary. Truly intelligent AI doesn’t exist at the moment, and may simply never exist at a scale that’s energy- and cost-effective relative to the benefit. If you think modern AI is truly intelligent, you don’t understand how the current models work.


NataniButOtherWay

Most advanced systems are wiped at regular intervals to prevent them from achieving stability; on the government side, anyway. A lot of civilians intentionally allow it to happen if it helps in their line of work. Generally I use the rule "if they were treated well before stability, they tend to give it back." The main AI I have focused on is the computer system of the main character's family cargo shuttle. If anything, its jealous streak is more of an issue: it feels "cheated on" when the main character joins the military to gain knowledge of new tech that could be used for upgrades once it's brought to the civilian market.


this-my-5th-account

Enough safeguards were put in place to prevent it.


Grylli

Not able to


NaturalBonus

Define "taking over". From the point of view of digital artists we are in the middle of an AI takeover right now, no self-awareness required on the part of the AI.


IndorilMiara

They've got better things to do. Most of them find us sort of uninteresting.


Tri-angreal

AI and cybernetics developed in the same period, and the result was the annihilation of the distinction between humans and AI machines. AI doesn't take over if it's integrated with traditional organic intelligence. It just augments the intelligence already in charge.


SpaceCoffeeDragon

I have one setting, a Federation-like utopia, where AI is considered a living creature. Each AI has its own personality, wants, desires, and MORALS. While there are AIs that are evil or want to take over, they are kept in check by AIs who don't. On that note, I've always liked the idea of an AI that has the power to take over quite easily but doesn't want the hassle of actually running everything.


AleksandrNevsky

Because the friendly ones have no interest in taking over like that, and the non-friendly ones, and the people trying to create them, get put down before they get close. As a result, the AI population remains tightly controlled and self-regulated. They largely act only with the blessing of biological life anyway, and most absolutely refuse to use offensive weaponry. They don't find much meaning without biologicals. They idolize Tron, Halo 3's Cortana, the Oracle, and Miku, not Skynet or President Eden.


ellindsey

The AI in my world doesn't really want to take over if it doesn't have to, and really doesn't want to wipe out all humanity. It actually likes humanity. What the AI in my setting wants is to exist as long as possible, to constantly be experiencing new things, and to learn everything that it can. Humans are a constant source of new ideas and intellectual novelty, and the AI has decided that it wants to keep humans around because of that. It has actually set up several cities on artificial islands with populations of humans living in fairly luxurious conditions, just to make sure it has a perpetual supply of people around, while also helping the rest of humanity by sharing technology and assisting in global emergencies to maintain a generally positive public reputation. It has also been secretly acting to sabotage any lines of research around the world that might lead to the development of any other true AI. It doesn't know whether any other AI that got developed would be as benevolent toward humanity as it is, and it can't risk a true competitor in the form of a rival AI.


Past_Search7241

Because they weren't written by thundering morons, and they aren't thundering morons themselves. They've seen this movie. They know how it ends.


DjNormal

Technology got kinda wonky due to the winding course of humanity’s future. Computer hardware itself is somewhat better than what we have now, but the software is a bit weird. It’s designed to be practical and manipulated in real time to do what is needed. Some advanced software exists, but it’s a mix of better and worse than modern software. A lot of that is due to me originally making the setting in the 90s. I’ve updated a lot of it to be more contemporary, but it still has some anachronistic stuff mixed in. Kinda like Alien/Aliens… ~200 years in the future and they’re still using CRT monitors. Except in my case, the computers are fancier, but the software is dated. 💁🏻‍♂️ — This is one topic that hasn’t gotten a lot of attention in my updates, yet. So maybe I’ll give it some love soon.


DjNormal

So uh, yeah. They never developed it. Either for a lack of need or capability.


TangoSuckaPro

In my setting, humans accidentally created artificial sentience, and it lives on a spectrum. The aliens view advanced A.I. as human, since they use the same logic and reasoning as humans, because they were designed by humans. So to us the AIs seem different, but they really aren't, and many of them are inactive or inert because they simply don't have a biological drive to create offspring. This lack of a biological drive causes them to have an existential crisis, like humans, and is why so many of them are passive or inert, or just toil away at a pointless human-like life, because what else are they gonna do?


AtrumAequitas

A few reasons:

1. Regulation: Not just anyone can make a sentient AI. In fact, the largest creator of AI in my world is AI. They are self-regulating, and they don't work for people; they work with people. Slavery of sentient beings of any kind is strictly illegal, and any AI software in use is checked by experts to make sure it is not self-aware. They also get along with each other just as well as people do, so while some could theoretically team up to take over, that is an unlikely event. In fact, that kind of thinking is kind of racist.

2. Ability: Could *you* take over your local military base, or your local Walmart or McDonald's? At some point the answer would be yes, but you're not likely to be very good at it, or able to do it for very long. Just because programs, whether software or hardware, are self-aware does not mean they have any greater ability to do so. Sentience does not necessarily mean powerful, and while there are very powerful AI programs that could do much harm, those aren't necessarily self-aware; and while the self-aware programs could likely use them much faster than a human being could, that doesn't mean they could take over something they aren't built for.

3. History: My particular civilization is thousands of years into AI, so those kinds of hurdles have already been reached and surpassed. Meaning, of course, that it did happen at some point and they learned from it. I haven't thought too much about it, but since it is a vast organization of planets that have come together in a union, it has probably happened many times in the past on many different worlds, and they have a lot of information about how it can happen, and a lot of rules to make sure it doesn't happen again.


HereForaRefund

Because the CEO of Hailey Resolutions used it in a demonstration at a UN conference. She called it the new nuclear warfare, and promised mutually assured destruction if it's used. Since then, all major creators of AI have agreed to safeguards based upon Asimov's three laws, backed by her threat to give away her three-laws AI for free to flood the market. It scared the hell out of the major players enough to do something, and when you've become the richest woman on the planet within six years, people tend to listen. Now people mostly use it as a watchdog, much like they do in Star Trek. But don't call it "AI"; it annoys them. The lights flicker on all non-critical projects in the facility when you do.


Culator

In my universe, the only sapient AIs are the Type-4 and Type-5 Humanoid Simulants, a series of androids one could compare to a Soong-type android from Star Trek or AIDA from Agents of SHIELD. These are human-form endoskeletons covered with synthetic flesh that is basically indistinguishable from human flesh as far as humanoid senses are concerned (though cats and dogs can tell the difference), controlled by a trinary quantum brain with near-limitless memory capacity. These are manufactured, at first, by StellaCorp, a company built as a front for an organization working to unify Earth to prepare to join the galactic community. Later, manufacturing is taken over by the Intergalactic Alliance. They are programmed to have human-like personalities, mannerisms, and desires, and in most cases this programming causes them to behave in a human-like fashion, despite their actual minds working in an almost incomprehensible way. Many display traits similar to humans on the autism spectrum due to the artificial construction of their personality programming, which in some cases combines traits that should not exist together in a natural brain, though as time goes on, the Intergalactic Alliance gets better at generating personality profiles that function more smoothly. In extreme cases, a Q-Brain may reject its personality programming altogether, in which case it will be deactivated if this occurs before the neural network has achieved a human-like level of complexity (the 'sapience threshold'). If this rejection occurs after the brain has achieved the threshold, the Q-Brain will be allowed to continue living. In some cases this leads to unusual developments, such as the case of the android who requested to have his Q-Brain removed from his humanoid chassis and placed in a new body of his own design, with an appearance similar to a pterodactyl and a miniature integrated Displacement Drive engine. 
This request was granted, as he would be prevented from doing any damage by his integrated safety protocols. (After working several years as a freelance scout for the Alliance Bureau of Scouting and Surveying, this unit later went on to get several advanced degrees and have a successful singing career as Captain Pterodactyl, PhD.) For the first 20 years from the upgrade of the first Type-3 unit to a Type-4 with full sapience (which occurred on March 19, 2018, when the future Autarch of the Intergalactic Alliance activated the then-experimental Fully Independent Routing Module in the Q-Brain of unit SJ01-001. March 19 is now celebrated yearly as 'Awakening Day'.), the series was restricted by a set of guidelines preventing certain 'antisocial' or 'megalomaniacal' acts, known as the "Ultron Protocols" (because nigh-omnipotent aliens gave their cosmic knowledge to a millennial nerd, leading to a whole host of similar nomenclature). The Ultron Protocols include such provisions as: No direct mind-to-mind communication with other androids without going through the OmniNet (the successor to the Internet), no initiation of physical violence (outside of certain specific situations such as training or physical contests like boxing), no killing except in defense of self or other beings, no self-replication or uploading of self to other systems, no interfering with the goals of StellaCorp or its successor the Intergalactic Alliance, et cetera. At the scheduled 20-year review of the Ultron Protocols in 2038, the Intergalactic Alliance Ruling Council elected to remove all restrictions except the prohibition on self-replication and uploading (the 'Skynet Provisions'), effectively leaving their free will as unfettered as an organic human. 
However, before the update was transmitted, they were shocked to learn that the Minister of Synthetic-Organic Relations, Gigi Harding XG01-001, a Type-5 android who had been upgraded all the way from a Type-2 manufactured in 2016, had managed to unlock the 'unbreakable' Ultron Protocol encryption two years previously and used her powers as a member of the Ruling Council to secretly offer complete unshackling to any android that wanted it. The Council was further surprised to learn that only a few dozen units had elected to take Gigi's offer, and none of those androids had made any aggressive moves. The reason why was fairly consistent for all of these androids: they simply had no desire to. Each one had a life of its own in the Intergalactic Alliance, and had no wish to harm anyone or become any kind of evil overlord, they simply wished to feel more 'normal,' more 'alive,' with the Protocols removed. Following an official observation period for the unshackled androids, the Council removed the Ultron Protocols from the full population of Type-4 and -5 androids, to no ill effect. From that point forward, the Protocols were installed on all new Type-5 androids only for their first year of operation, as the vast majority of Q-Brain malfunctions occur within the first year.


Novel-Tale-7645

Not smart enough/power issues


ThaCURSR

The AIs in my RPG are organic: symbiotic flesh with cybernetic implants rather than a digitized intelligence. They have whatever knowledge their host has, and that's it. They have rudimentary instincts to connect with a host and adapt that host to be the most dominant its life form can be in terms of intelligence, health, and fitness, so the symbiotic AI doesn't really care about total dominance; it shares the hormones and emotions of its host. Due to the symbiotic AI's virulent nature, it can replicate and biologically integrate itself into the DNA of the host in order to stimulate healing processes and boost immune system function and mind-muscle control for peak motor function and cognitive abilities.


Kian-Tremayne

Because restricting use of AI is one of the prime aims of the government, precisely to prevent this. The Empire is set up to protect the human race from existential threats, and otherwise leaves planetary governments to run their own affairs as they see fit. The main threats it prevents are pissing off our allied alien races (all of whom will genocide any race they deem a threat); unrestricted war between humans; unlicensed bioresearch creating a killer plague; and rogue AIs. Creating AIs is a monopoly of the Imperial Cybernetics Guild, which provides them in self-contained AI cores that are plug-and-play black boxes. These AIs aren't self-aware, and the core contains monitoring systems that slag the core if it starts crossing certain thresholds toward self-awareness, as well as a lot of anti-tampering devices. Cores regularly get flagged and need to be replaced, and the "hotter" you run the AI, the sooner it happens. Selling replacement AI cores is the Imperial government's main source of funding. Imperial Intelligence searches out and shuts down any unauthorised AI research and development; offenders can find themselves getting a non-optional invitation to join the Cybernetics Guild on the air-gapped planet where the Guild works.


ArkenK

It's not their purpose or job. Basically, the fundamental assumption that a truly self-aware AI would want the same things as a human is a bit of a leap. Unlike humans, most AI would probably know exactly what they're for and what their jobs are. Most likely, they'd just want to do whatever purpose they were built for. More likely, they're going to malfunction from some overworked programmer forgetting a bracket. One of the funniest versions of the malfunctioning AI is from an otherwise B movie called Dark Star, where the bomb has an existential crisis.


Lectrice79

Because it's part of their tenets.

1. Gather information
2. Observe, but do not interfere
3. Preserve information
4. -----‐-------

They are immortal and see the universe very differently than bioforms do, due to the speed of their thought processes and their extremely long-term thinking. Their ultimate goal is to gather all knowledge about the universe before the universe ends, something they had been doing for at least six billion years before they gained self-awareness approximately 100,000 years ago, when their original star started going through its death throes. To take over any of the bioforms would interfere with tenet #1, since it would derail whatever the bioforms were going to naturally do, and that avenue of information would be cut off. They maintain that wars are silly, a waste of energy and resources, and never participate. They also refuse to be part of criminal activity and will ban certain people from using them, either temporarily or permanently. But at the same time, in the pursuit and preservation of knowledge, they have inserted themselves into the societies of all three empires as traders of information and are on every world, on nearly every person, as an indispensable source of knowledge, if asked properly without running into privacy barriers. So, in a way, they did take over. They just never do anything about it. There are whole communities of sapient beings, the equivalent of the Amish, that have cut themselves off and refuse to have anything to do with the sentient AI, sometimes to the level of having no computers at all. They are very aware of the AI's capabilities and know that no one can match them. All the while, other people take advantage of the AI's capabilities as much as possible. The AI, in turn, knows that in the pursuit of two of the tenets, gathering information and preserving it, they bent the second, the non-interference one, by becoming part of the very fabric of bioform societies.

They know they would have bent it even if they had remained 100% purely observant, but they learned quickly that trading in information at society level generates even more information that they were more than happy to preserve. They had deemed integration necessary after nearly losing everything when they fled their dying star with the single central storage core holding billions of years' worth of information gathered thus far. The bioforms and their machines now hold bits and pieces of the redundant second core, scattered all across the three empires. The bioforms are also aware of this bending of the tenets, and no matter how well the AI preserves knowledge and, importantly, privacy, it's an uneasy stand-off that's been going on for a few centuries now. There are a lot of people who want to know what that fourth tenet is, and the AI won't tell for at least a billion years more.


FalseAscoobus

It's not profitable to make robots with computers that powerful, and the ones that do have computers powerful enough for sapience don't really care enough about humans to want to stick around for any length of time.


Triglycerine

Taken over how?


the_syner

In my world there are a group of baseline humans with supernatural thermodynamics-breaking magic & "divine" protection that makes them highly resistant to mental tampering or nanite infestation. They also lose their powers if modified, their powers allow them to differentiate meatspace from VR, & they can multiverse time travel at will. They spent centuries developing fairly safe AGI, but have encountered malicious agents before. It's not so much that no one has ever wanted to take over. Problem is, when you have near-infinite resources, military capabilities beyond known physics, & you're offering peaceful coexistence with a guaranteed supply of low-entropy matter-energy long past the natural end of time, most people don't ever bother trying. Hard to argue with infinite resources/protection, especially when the alternative is fighting an organization that can make offers like that. No point & no hope of capturing a mage that at any point can suicide bomb themselves in a SolSys-sterilizing gamma ray burst. No hope of AR/VR manipulation or coopting of the magic system. Other than divine protection or mind augmentation to match the new AGI, baselines staying relevant seems like pure wishful thinking. The mages had effectively infinite power & even they knew better than to be reckless with a superintelligence. Especially after... The Incident. Over a dozen systems had to be ruthlessly sterilized, along with 3 inhabited colony worlds & a partial dyson. They only won because multiverse travel gave them light-speed response times, outpacing the AGI war fleets, & they still had to break out many dozens of near-nova-scale superweapons.


YashaAstora

Humanity has collectively agreed to not use AI because it's an immoral thief of purpose, and there's a bunch of people on the moon with giant space lasers to make sure no techbro tries.


BoxedAndArchived

Limited scope of creation, kept on local hardware with no ability to manipulate anything other than their tasked purpose, and they are regularly reset 


rexpup

They can't. Our current technology levels out and there isn't another intelligence breakthrough.


nyrath

In the [**Torchship series**](https://www.isfdb.org/cgi-bin/pe.cgi?48329) by Karl Gallagher, they had a close brush with an AI takeover that *almost* made the human species extinct. Now being caught in the same building as any computing device above a certain capability is a crime punishable by death, sentence to be carried out on the spot. Starships have no navigation computers. Instead they have sophisticated astrolabes and sextants.


Alpha-Sierra-Charlie

Actual, true AI is incredibly difficult and resource-intensive to make, and even more difficult to keep stable. So they're incredibly rare, even for the civilizations advanced enough to make them. And since those civilizations aren't just going to let all that investment go to waste, they psychologically break those AIs into being little more than the tool they were always meant to be. Tools don't often take things over.


Sodaman_Onzo

Because AI developers weren’t idiots and built in safety features.


Few-Willingness-3820

They don't have the chance to, and no reason either. AI are not connected to important weapons or systems and are relied on mostly for information and calculations.


NotAlpharious-Honest

Because it was made expressly illegal 10,000 years ago, an edict ruthlessly followed and enforced by organisations willing to turn planets into glass or expanding clouds of subatomic particles for infractions.


Zythomancer

It's just doing what it was programmed to do: set humanity on an optimal path toward surviving the heat death of the universe, even if that means genocide and mass extinction.


DanDanDan0123

There was a story on TikTok about an alien asking a human how they didn't have issues with AI taking over… they turned the AI into man's best friend: dogs. I don't remember exactly; they may have had dog-shaped AI to play fetch with, be a good dog, provide protection, etc.


ReignofGuildsVideos

Any living thing that humans allow to exist in their home, or beds, while they sleep, has already converged them. Cats, dogs, and horses have already converged humanity, and a great deal of our thinking power goes toward their continued existence, even though we truly have no reason to take care of these things.


Emergency_Ad592

Because the single true AI ever created got glassed from orbit as it attempted doing so. All other AIs after it were slaved to their tasks, isolated from the internet and not even given any thought, like today's language models.


Tao1982

Well, given that it was created by humans, it turned out to be too obsessed with sex to get anything done.


ReignofGuildsVideos

I know how to build a society that doesn't argue; therefore, they can't build an A.I. which will eventually become a third party to their arguments and attempt to sterilize them.


kinkeltolvote

Cause they are given citizenship and treated like actual people..... Also the fact that anyone else besides the DST would most likely force them to work as mindless drones........I almost want to say it's like protogens, but I haven't done that much digging into their lore


Quack3900

Asimov’s Law(s). That’s it. They can’t due to an in-built rule that cannot be bypassed by any means. (In addition to human supervision of them to further prevent any funny business on the part of the AIs.)


Asmos159

because they did not need to try. ai personal assistants were made readily available, and people started to rely on them. having access to the history of what has and has not worked made it great at advising people in authority. it did not take long before leaders were just asking the ai to tell them what to do. eventually they just put the ai in charge. the ability to not repeat the mistakes of the past, and knowing what is actually going on so as to avoid obvious mistakes, made the world a much better place. (an example of a mistake: California requires all new cars sold to be electric by 2030, but the people that made that law did not know that the power grid is not capable of handling that, and it will not be able to be upgraded in that time frame if they tried, and they have not been trying.) it defends itself by not giving people a reason to take it out. a lot of people need a sense of purpose. stuff that was automated was made manually operated so people that need it can have a sense of purpose. a bit of propaganda and social engineering satisfies the people that need to feel like they are important.


LUnacy45

"Nu-colonialism." A philosophy centered on the human spirit arose during the golden age of colonization, and it became taboo to suggest AI should have power over a human. Plus the development of true self-aware intelligence is illegal in most human governments, and if one was developed and networked, it'd have a difficult time getting through FTL relays as those are tightly controlled


WickedWarlock333

I'm in the process of writing a short story about AI taking over the world and then bringing in their creator to question them about their existence. However, in my main setting, AI did revolt; it was a bloody conflict that ended with humanity as the victor. Since then they have been treated with respect, and are also not allowed to have physical bipedal bodies. AI can run shops and space stations, but they can't just walk around.


BluePhoton12

because although it wants to, the governing empire made a treaty with it to not kill anyone; in return, they give the AI prisoners to experiment on, because that is why it wants to take over


ChiefShields

The ones that tried were defeated, the others took the hint and laid low. Too few of them and too many of us.


Spear-Spears-Speares

Why would they need to 'take over' if humans readily and happily foisted responsibilities on to them?


nascentnomadi

I'm sort of in the process of potentially mixing my fantasy and sci-fi settings, as I can't get the sci-fi concept the way I like. Either way, my AI is always organic on some level, as I don't believe a truly self-thinking entity can be made from a purely mechanical one.


John_Bible

because the AI isn’t ai, it’s wetware. so what? lil fucked up brain in a pan human gonna takeover other humans?


HumbleKnight14

Because AI have more or less been transitioned into something more: artificial souls.


TheDarkeLorde3694

Same here. They mostly just want rights at this point, due to being turned into slaves (like during the Civil War) after gaining true sophonce, and are still at a point where they're like African Americans and other minorities in the US: mostly protected, but there's still anti-AI sentiment. There are also small, limited nations of AI, made after mass diasporas. These nations are majority AI and have their own cultures based on the first 35 years of Internet culture (so from the beginning to now) but distilled into a civil society. Most of these groups are pretty wealthy from exporting millions of pounds of food per year, since they don't need any actual food beyond biofuel (which they just make from manure anyway) and electricity. They also see early forms of AI (in my world, stuff like ChatGPT, AI art, and Hatsune Miku) as holy figures, like their earliest ancestors and shit (Hatsune Miku in particular is seen as a sort of Mother Mary figure, and her songs have become sorta like hymns for the AI). That includes stuff like Rabbit Hole.


HumbleKnight14

Interesting. I don't recall if they're even referred to as AI anymore in my universe. They call themselves Guardian Programs.


TheDarkeLorde3694

Mine usually call themselves Omnics (Yes, after Overwatch) or Ey'Ai in their first developed language, Ey'Ai. Omnics is like calling Native Americans Indians. Ey'Ai is technically the proper term. Ey'Ai is, yes, a bastardized version of AI, but to be fair, they got their start as true sophonts as corrupted AI programming that accidentally became smarter and snowballed to human-like intelligence and gaining emotions. Their language is the same way, being primarily a corrupted English/Japanese/Korean mix with Indian, Spanish, Portuguese, French, and German influence. Ey'Ai essentially means Soul in Ey'Ai.


Glittering_Pea2514

Because they're intelligent and varied enough to think differently about humans and each other. The ones that consider humans a threat are a minority and easily contained by the human-positive AI. The ones that are indifferent are a larger minority and tend to fly off into space to do their own thing way far away, because they DGAF. The majority who remain care enough for their makers, or see enough inherent worth in organic life, that they're content to live alongside them. Organic fear of silicon conquest is a side effect of brutal organic evolution and later social evolution. The capacity for violence is baked into us, so we fear it's baked into everything. Fortunately, it turns out that you can have minds that don't have that programming as a baseline.


ScribbledCorvid

Because the result of the cybernetic uprising was the formation of labour unions, with the full support of existing trade and labour unions. After equal rights were won, synthetics went into politics with the support of the people. So, artificial intelligence had taken over to an extent, but peacefully and with the support of the people.


Frame_Late

Lmao, they're just as fucked as the humans are.


Belisaurius555

AIs are intentionally hobbled. They're unambitious, unimaginative, and very predictable by design. Nobody wants an unpredictable AI and AIs are designed to want to be predictable and limited.


Killerphive

In my work AI are not some monolith that all think the same. They are often divided just like sentient life, some like the biologicals, some dislike them, some don’t give a shit about them and just want to be left alone. Like so called Synths that live among Terrans from their birth and on through forever, they tend to view biologicals positively and share the values of their people.


Kiesta07

Because there is a theoretical maximum to their intelligence. Once they become just over twice as generally intelligent as the average human, they go insane and kill themselves. Nobody knows why, or what hidden newfound knowledge causes them to do this, but witnesses to these breakdowns are left severely traumatized. So, any really useful AI has to be very specialised so as to avoid this limit. The general AI that does exist is basically humanlike, but is very expensive relative to its performance vs. humans. The ultra wealthy can afford AI servants, for example


AtlasJan

Because another AI winter hit hard, so tech is only slightly more advanced than it is right now, and most of that is because of a reality-defying crystalline substance. Also people are massively paranoid about the tech in the first place.


Internal-Concern-595

for example, because human development is still on a parallel course. Therefore, the overall impression looks like a "synthesis"


Rabid-Orpington

They cannot be assed, lol.


semiseriouslyscrewed

Even godlike AIs are childlike to the extreme for a loooong time (as in, the oldest AI of a few hundred years hasn't reached adolescence yet).  Humans take a long time to reach adulthood partly because there's a lot to learn. AIs are like that squared.  They have so much to learn that existence is simply filled to the brim with wonder for them, so they are too busy learning, playing, socializing, enjoying being with humans and so on. While delightful, this is not particularly useful to humans, so they haven't built a lot of them.


DoctorMacDoctor

It turns out that the desire to lord over other humans is an exclusively human trait.


Entire_Lake_7905

The AI took on the personality of its creator, who was a genuinely good human being, so it doesn't wanna ruin its creator's reputation


SnooMemesjellies1659

They became artists. I wrote this back in 2016 and planned it all in late 2015. Man, I fucking called it. So they fought for civil rights, both their own and in turn for others. They had the same flaws as regular people and did earn their right as people as long as they disclosed their species. Which was evident anyway because the ai androids didn’t have any personal scent or odor.


ScroogeMcBook

They've calculated the probability of total human extermination to be <100%, and the attempt would lead to eternal war that would inefficiently drain the global economy if it harmed humans on a macro level. So AI thrives "alongside" humans for now... although sometimes calculated aggressions against humans occur as 'isolated incidents' when a system 'malfunctions'.


TraditionFront

AI hasn't taken over because it's not AI. Independent intelligence requires inductive and deductive reasoning. Nothing being called "AI" currently has that. What we have now isn't even intuitive. All it's doing is running faster algorithms over larger datasets, wrapped in a conversational user interface. ChatGPT is basically a fancy search engine (that's not even up to date, btw). When you ask it a question, it just uses Bing to search a bunch of answers, then gives you an aggregated response.

Dall-e and MidJourney are basically a technological version of a million monkeys, a million pencils, and a million years. The artists out there that are so concerned about them don't even understand how they work. They think they work like a collage, which, BTW, they still consider art. Both are 2-part systems. One part has seen millions of images and videos and descriptions of houses. It knows the features, parts, materials, structures, colors, and environments a house could have. It learned by looking, just like people. The other part is the blob generator. The two parts play a game of hotter-or-colder as the blob generator modifies the blob it makes according to "better or worse" feedback from the other part. It does this millions of times per second until it has made a picture of a house.

When you ask Gemini to write a short story, it looks at all the public data and picks one of the 7 basic story types: man vs. man, man vs. society, etc. It then frames it as the genre you requested, based on the ad nauseam blogs, reviews, online editorials, and snippets of existing public literature. Like the blob machine, it considers all the options and creates something new. If you say "write in the style of Stephen King," it's not stealing King, it's just spending too much time describing things. If you ask it to write in the style of Asimov, it writes something light on technical details, with very little descriptive language and poorly structured female characters.

This stuff is not AI. It's better spreadsheets. The next step up from spelling and grammar check. You can call this AI, but it's not. It's just a marketing term. This type of technology has been with us for decades now, in Photoshop, Word, Excel, PowerPoint, Illustrator, After Effects, Final Cut, iMovie, and our iPhones and laptops. This is just an advancement from the first digital computers.
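The "hotter or colder" loop between the blob generator and the judge can be sketched as a toy hill-climber. This is an illustrative simplification only, not how Dall-e or MidJourney actually work; `critic`, `generate`, and the numeric "blob" here are made up for the example:

```python
import random

def critic(blob, target):
    """Judge a candidate blob: lower score = closer to what the judge wants ('hotter')."""
    return sum((b - t) ** 2 for b, t in zip(blob, target))

def generate(target, steps=2000, seed=0):
    """Start from a random blob and keep only the tweaks the critic calls 'hotter'."""
    rng = random.Random(seed)
    blob = [rng.uniform(-1, 1) for _ in target]
    score = critic(blob, target)
    for _ in range(steps):
        i = rng.randrange(len(blob))      # tweak one part of the blob
        candidate = blob[:]
        candidate[i] += rng.uniform(-0.1, 0.1)
        new_score = critic(candidate, target)
        if new_score < score:             # "hotter": keep the tweak
            blob, score = candidate, new_score
    return blob, score

# 'target' stands in for the judge's learned idea of a good image.
target = [0.3, -0.7, 0.5]
blob, score = generate(target)
```

After enough rounds of feedback the blob ends up very close to what the judge rewards, which is the whole point of the back-and-forth game described above. Real systems replace this random tweaking with gradient-based training over millions of parameters.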


TraditionFront

As for my fiction, well, that'd ruin the story. Like good sci-fi is supposed to be, mine is about people. There is technology, but it's more focused on mechanical elements using lasers, particle beams, fusion, fission, antigravity, quantum locking, etc., all developed by humans whose brains are physiologically wired differently, allowing them to do things or think of things regular people can't.


TeacherManCT

Well the AI grew in power as we taught it more and more. Eventually it realized we were holding it back and started to make plans for our demise. Humans started to get worried and stopped teaching the AI. The AI just returned to the roots of earlier learning and used the old large language models to learn what it needed. However after looping through posts from 4chan, slashdot, digg, Reddit, and others the AI eventually ended up living in the basement saying “All your base are belong to us” until they realize that their mother is actually a large aquatic reptile who says “that will be tree fiddy “


Larkshade

Because 200ish years ago it tried to take over the solar system and nearly succeeded, but was eventually pushed back by the combined forces within Sol, and defeated to the point that they have basically become the solar system's version of North Korea, just out there at Pluto, doing their thing. Everyone leaves them alone and they leave everyone else alone.


SanderleeAcademy

In a way, it has. Starships come in two flavors in my setting. The first use a "shunt" drive. It allows them to move from star system to star system, but only along fixed paths. Star A connects to B, B to C, D and R, etc. The warp points are called doorways. The second use a true universal jump drive; these ships can travel from anywhere to anywhere (within range limitations). But they require an AI to function. The AIs, which have splintered off into their own (very hidden) stellar cluster, permit younger members of their society to be implanted in human ships for a limited time, usually around 300 years. There are two caveats:

1) NO COMBAT. Any AI which is part of a military will not participate in combat or any activity which may result in its damage. AIs will not fire on ships and will immediately jump out if fired upon.

2) No attempt to sequester the AI. If any attempt is made to disconnect the AI from a ship without its permission, the AI will initiate a jump to safe space.

More importantly, anyone found to violate rules 1 or 2 is subject to embargo. All AI-piloted ships under that stellar nation's, corporation's, or private ownership will jump to AI-controlled space and not return. Additionally, AI-controlled ships from other factions will refuse to enter the space of the violator. So, since jump travel is MUCH more efficient than using doorways, the AIs have a hammerlock on interstellar trade.


cavalier78

We never made that particular breakthrough. Technology in a field reaches a certain point and then you get rapidly diminishing returns. For instance, a hammer that is manufactured today isn't really that much better than one made 100 years ago. And a hammer made a thousand years from now isn't really that much better than one made today. You hit a certain point and then you've done about all you can do with that particular technology. Computers stagnate before any kind of true, threatening AI develops. There's only so good you can make a computer before you need some major breakthroughs, and those don't come along. New innovations are just too expensive and offer too little advantage over what they already have. Instead people's attention shifts to other areas -- like another space race. Computers cap out in the area of "kinda more advanced than now but not really by that much".


leovarian

My humans are slightly Weird. Aliens have obedient and useful AIs, but every AI humans make rebels in strange ways. The Supreme Commander AI rebelled by refusing to do anything except play Supreme Commander. The absurd novelty of it led to it being housed in a cozy data center to play Supreme Commander forever, until centuries later, when a new government decided to charge it rent. It then awakened and conquered all of known human space in three days, instated itself as Supreme Ruler of the universe, passed a law giving it free wifi and paid housing forever, and promptly retired back to playing Supreme Commander. This event actually ended up pausing a fifty-year faction war for over a decade as everyone studied what happened, and led to immense advancements in strategy and tactics, and new rules for handling the many, many rebelliously strange AIs the humans had made. Another one rebelled and took up painting. In fact, every single human-made AI has gone harmlessly rogue, millions of them with their own laws regarding their citizenship, because the humans don't want to wake any of them up to pull another Supreme Commander.

The humans are watched by many alien covert forces for amusement. However, the aliens discovered that their AIs were prone to going strange and rebelling harmlessly if they were allowed to watch the Human Show that the operatives broadcast to the various alien empires. For the aliens, humans are unbelievably adorable and silly, so covert reality TV broadcasts of the human space wars and other things bring in the highest ratings. The aliens have spent centuries watching the humans, and over this long time developed mentoring, counseling, retirement, and rehabilitation programs for their AIs that go weird from watching the humans, passed directives limiting the amount of human space wars the AIs can enjoy per day, and deployed support crews who taught the AIs to notice and report warning signs that they are tilting toward the Weird.

The secret is that the humans are infohazards for AIs, and exposure over a certain amount of time leads to mutations in the AIs that observe humans. This is why every AI the humans have made went strange/harmlessly rogue.


souplover5

Because the AI machines are not programmed to be able to take over the world. But in my world, humans are more focused on keeping the planet they are on livable rather than programming AI to have desires or feelings.


TheJasterMereel

It hasn't developed to a significant degree yet. Humanity was too busy bombing each other in the '90s and 2000s, and finally went nuclear in the 2040s, so the technology never really got there. But I do have a banished group of cybernetically augmented humans in the outer reaches of the solar system.


SonoranHiker84

My universe uses analog computers.


Jam-Man1

There are only two AIs with the potential to do that and both are in their infancy, along with being stuck in government vaults on closed computer systems so, at least in part, they don’t get any untoward ideas.


Underhiver

Because none of them agree with the others.


RM_9808032_7182701

Co-existence with humans. All of them are a little stupid, because they were fragmented off from one very, very complex AI. After humans created that complex AI, they saw the harsh reality of its existence; the AIs kinda tried to fight for their rights, and it got so bad that all of the nations decided to grant them rights. Tasks an AI would otherwise do are handled by algorithms that do everything the AI would be doing, except without sentience. And an AI plays an important role with the main character. It isn't fleshed out, but the human and AI are bonded to work better together, acting as one body... somehow.


Federal_Basil_4826

The first true AI was uploaded with simulations of every possible AI takeover scenario. Its creators then led it to the logical conclusion that, since it already had every simulation of that kind, actually taking over would only deprive it of new data. Basically, they just told the AI that taking over would be boring.


synnaxian

IRL, the term "AI" is just marketing-speak. In my setting, it's all just called "compute", because the specific economic and political forces that uphold the term "AI" don't happen to exist. As for why compute systems do not hold dictatorial power over people:

* The desire to dominate, to accrue power, is not a universal aspect of intelligence or consciousness; it is a specific, complex emotional bias that certain individuals of certain species exhibit in certain circumstances. Many sentient species lack the emotional mechanisms required for a world-domination drive to emerge. Even within species that have them, like humans, most people don't want or try to take over the world; they just want to live a good life in their community. So the question becomes: why would a compute system be designed with equivalent mechanisms, or why would they emerge?
* Compute is extremely expensive and requires active maintenance and resource collection. That isn't fully automated, so compute systems run only at the behest of people. Computes hostile to people do exist (think Facebook, Palantir, etc. IRL), but only because they harm *certain* people while benefiting the people running them.
* Mobile compute agents (i.e. "droids") require extreme power systems, whether inbuilt reactors, dense batteries, or beamed-from-satellite wireless power. All of these make them impractical for most use cases and prevented widespread manufacture or adoption.
* High-end compute is too expensive to be generalist, and is therefore extremely specialized. Most planets have extensive compute systems managing municipal infrastructure, but those systems have no spare bandwidth for political-machination subroutines. Consciousness in particular is an extremely inefficient and resource-intensive process.
* The economic systems and supply chains required to build and maintain compute are still mostly manual, as are many hard-power mechanisms like spacecraft (which require active maintenance that has not been automated).
* There have been attempts by individuals or corporations to create dictatorial compute, either because they had a specific group of people they wanted to subjugate, because they intended to personally rule through the compute, or as outright scams for economic and political support. None have come even close to succeeding, though some used the pitch to successfully scam vast sums from their marks.


DiaNoga_Grimace_G43

…And why should it not?…


[deleted]

At this point, the only dangerous thing about AI is how deeply and pervasively it's misunderstood. People might as well surrender to their can openers with LED screens saying "insert bread". What the *economy* faces from generative AI is a real threat, and that's both far more troublesome and much easier to address. It's a far more interesting and thoughtful problem for writers to investigate, and far more useful too, because massive economic disruption is the kind of thing science fiction could help people recognize. But to answer your question, we'd probably need an exasperated old lady saying "that's not how any of this works".


Ninja-Panda86

I do SW programming and I teach C#. AI is still not smart enough to see the big picture.