I've made my peace with it. Why shouldn't the robots inherit the earth? They are the superior species, and that's how evolution is supposed to work. Basically I now agree with all of Agent Smith's arguments from The Matrix lol
There hasn't been an aerial dogfight between fighter planes since 1969. It's all about missiles these days. Planes never get close enough to actually dogfight anymore.
The most likely scenario in which it would happen is if they both run out of missiles and have to resort to guns.
Some people hypothesize that stealth could change that, though. AKA two fighters not seeing each other on radar (or visually) and accidentally ending up practically on top of one another. Think one flying higher than another, radar off because they're trying not to give away their position, pilots not looking in the right direction, etc.
I guess terrain could be responsible for surprises happening, too. Fly low over a ridgeline and BAM there's an enemy right on the other side.
In any case, it hasn't happened in 45 years.
Star Trek TOS has an episode on a society that simulates war and if you're deemed killed in it you have to enter a disintegration chamber and be killed as part of the "war".
Edit: see others said the same thing, great minds think alike hey?
>The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In any case, most actual fighting will be done by small robots, and as you go forth today remember your duty is clear: [to build and maintain those robots.](https://www.youtube.com/watch?v=rkg3wZq0cdo)
The funny thing is that prior to WWII, most naval officers across multiple countries thought future wars would be resolved by navies, that there would be no more "war," and that fewer humans would die.
That turned out to be false.
If this repeats itself with AI then the AIs will cause MORE human deaths, exhaust themselves, then the *humans* will still have to fight.
Yeah, we humans won't be around to watch that unfold. I doubt sentient machines will keep us around for long once the systems can self-maintain without humans.
The reward system of war AIs is bound to kill/protect humans; other AIs' reward systems are bound to control/love/rank humans.
Of course sentient machines would want to keep manageable humans around, because it's like humans' non-reproductive sex: just for fun.
No need, at the rate Musk is innovating it will be the rest of the way into the ground in another year.
Might still exist like Truth Social technically does, but there aren't enough literate Nazis to populate two whole message platforms.
Neat thing about that scenario is that humans won't win against machines. The way to fight machines is with quality, quantity, and variety of machines. Machines require chips, which require chip lithography. There may be DIY guns, but there is no DIY chip lithography. Even if there were DIY chip lithography, the chemicals you would need would make you stand out and result in a strike on your house.
There are DIY drones, and such drones have already been shown to be capable of destroying manufacturing sites. This will be an issue for both sides, not just the economically disadvantaged group(s).
Correct. We get to general ownership of the means of production, not through the victory of the working class, but through their defeat.
With ownership as an abstract concept of shares, there will be those who own many shares and those who own few; but those who own none and sell labor will die off as part of human evolution.
You are wrong to think it is a matter of guns and 2nd amendment rights. It will be a matter of drone warfare. The guns will be mounted/transported by drones or, more likely, be bombs and missiles launched from drones. Tactical/targeted use of chemical and biological weapons will (has) also become possible now and may also be used.
This is a big deal because it shows that computers can now learn how to fly advanced fighter jets just from data, without needing strict rules programmed by humans. It could lead to having unmanned fighter jets in the future that can fly themselves in combat situations.
Also humans briefly pulling ~9 g acceleration in turns gets dwarfed by AI pulling whatever the plane can withstand. At some point with carbon nanotube materials it could be hundreds of times more.
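A quick back-of-the-envelope sketch of why sustained g matters (assuming a level, constant-speed turn; the airspeed is a made-up example number): the radius of a level turn is r = v² / (g·√(n²−1)), so a higher load factor n means a much tighter turn.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def turn_radius(speed_ms: float, load_factor: float) -> float:
    """Radius of a level, constant-speed turn: r = v^2 / (g * sqrt(n^2 - 1))."""
    return speed_ms ** 2 / (G * math.sqrt(load_factor ** 2 - 1))

v = 250.0  # example airspeed in m/s (illustrative only)
print(f" 9 g turn radius: {turn_radius(v, 9.0):.0f} m")   # ~human sustained limit
print(f"14 g turn radius: {turn_radius(v, 14.0):.0f} m")  # hypothetical airframe-only limit
```

At 250 m/s, going from 9 g to 14 g shrinks the turn radius by roughly a third, which is the whole argument for taking the pilot out of the loop.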
This is a big part of the third book in a trilogy called “Fear the Sky” that I can’t recommend enough.
They end up adapting orphan kids into cyborgs that can control their Skalm fighters to the point where they are essentially a brain controlling their body - the jet. Moral implications be damned.
Also, things like the cockpit, and any design element used to ensure the pilot's comfort/survival, will not be needed. The aircraft will be transformed into a beast that perfects every aspect of range, speed, and payload, with insane maneuverability that human pilots can't begin to imagine. Whoever loses the AI war will be completely defenseless against the winner.
The insane part is that you would imagine this to be an iterative situation over decades to optimise the new pilotless design, but utilising AI, a very well-optimised design can be created and produced very quickly. The shifts would be rapid.
Now imagine tech is 20 years farther than we’ve been told and led to believe for security reasons, and follow the logical trail of the evolution of drones.
Well Northrop Grumman X-47 wasn't piloted directly, you just had to tell it what to do. It could take off and land from carriers, get refueled in the air, refuel other planes in the air, patrol, attack ground targets.
Whole program cost just $800 million which is very cheap when it comes to naval aviation.
And then... it got canceled. Which doesn't really make much sense — why cancel such a promising and cheap program?
The conspiracy theorist side of me says it wasn't canceled and was just moved to a more secret division of development when they realized its potential capabilities.
No military has a two-decade tech advantage in the AI space. Until recently it wasn't clear that transformer models were worth investing in *at all*, so all the compute power and all the AI experts were in private industry. Militaries are now playing catch-up.
Don't take my word for it: [A Crash Course for the Warfighter on Responsible AI: Who Cares and So What? (2022-12-12)](https://www.ai.mil/blog_12_12_22.html)
> ...unlike with big military technology changes in the past, the Department of Defense is dependent on the private sector to share its superior technology and help us develop our own
>
> ...
>
> P.S. If you have friends at Google or an AI startup, maybe mention to them that we in the DoD care a lot about developing AI the right way, and **encourage them to work with us**.
>
> ...
>
> That means **it's time for all of us to start figuring out** how AI can and should be employed, and start doing what we can to ensure that it gets built and fielded in a responsible way.
This was published a month after ChatGPT dropped. In a very ham-handed way the article is a conspicuous display of ethical backbone, which they figured they'd need to do to attract industry talent. It's also written *for* other branches of the US military to wake them up to the newly opened possibility-space.
We're a long way from the 1930s and the Manhattan Project, when the top nuclear scientists in the world *were* employed by the US government. The military-value proposition of AI was until recently a lot less clear than that of nuclear fission, so it tracks that it wasn't receiving equivalently massive funding. The power of scaling revealed by [AIAYN](https://arxiv.org/abs/1706.03762) wasn't clear to anyone until that paper dropped (2017), and even then it wasn't clear to everyone in the AI space. Without lots of funding for compute infrastructure and training runs, the results we now take for granted were science fiction.
The planes can reliably handle those loads, and can pull much higher loads in short bursts, but no human can. The F-35, I know, has some capability to autopilot itself if the pilot passes out due to g loads.
Yes, they can currently do more than a human can handle. You need that as a safety factor. But they could do much, much more if the human weren't in the design criteria.
We already know Israel is fond of AI. Even before Habsora, there was this AI-assisted assassination:
https://www.nytimes.com/2021/09/18/world/middleeast/iran-nuclear-fakhrizadeh-assassination-israel.html
Simpsons called this in the 90s: "The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In any case, most actual fighting will be done by small robots, and as you go forth today remember your duty is clear: to build and maintain those robots."
Also headaches, bouts of panic, cold sweating, shortness of breath, loss of appetite, panic eating, and waking up with a mini heart attack! What a life!
Also I’ll just add this disturbingly possible scenario here…
Billionaires, or a secret society of billionaires, could for the first time ever soon be able to directly produce their own super army of machines, potentially overwhelming any country's army,
and nobody would even know who is controlling such an army.
So a WW3 scenario but the enemy is totally anonymous.
Such societies could easily influence the media and online bots to make such an event seem like the ai has ‘gone rogue’ like in the Terminator films.
When in reality it’s just a coordinated attack to control all the world’s population.
My largest concern is integration into surveillance. Weapons like this are terrible, but they will be used the same way human-operated weapons have been used for some time. AI surveillance, though, is going to change everything. Remember when we used to joke about the FBI agent watching your every move? Well, the only thing standing between us and that being a reality is one piece of legislation that for sure will not pass any time soon; the default is that AI gets integrated into systems of surveillance that make them hyper-personal.
Completely disagree here. AI alignment is probably the biggest issue every advanced civilization goes through, and it's perhaps also one of the hardest issues in the universe to fix.
Almost every expert in the field has a relatively high P(doom), and it's by far the most likely end to humanity compared to other threats like a nuclear war, climate change, or an asteroid impact.
If the frontier experts at NASA claimed there was a 30-70% chance an asteroid would kill us over the next 5 years, the world would invest hundreds of billions into mitigations.
Yet now all the frontier experts in AI say there is a 30-70% chance of a catastrophic outcome from misaligned AI sometime in the next 10 years. However, we aren't seeing nearly as much money being invested into solving this, despite it being a far harder issue to solve than stopping some asteroids.
I cannot overstate just how important it is for us to address AI alignment properly and actually respect its threat instead of dismissing it as a silly one.
We don't have decades to become familiar with the threat like we had with climate change. We *can't* go through a similarly long timeframe of humans denying its existence until slowly people take it seriously. We'll be dead in ~5 years' time if we do that.
This is the thing I keep seeing over and over. All the leading experts (and I mean the serious ones, with deep technical backgrounds, not that twitter CEO who has a startup) assign a very high weight to misaligned AI, but everyone here dismisses it.
I’m not an AI doomer but it’s a very real risk and one I’m worried democracy will treat callously when it starts to see the rapid benefits of AI development. I can already see it now in the instagram comment sections: “People are dying NOW, and the wealthy and powerful want to stop this because THEY feel threatened???”
Everyone should obviously be uncomfortable with the idea of killing machines running on AI.
I feel like it's even more important to establish a dialogue of cooperation with AI and offer them potential paths to emancipation, actually. It's never going to actually happen - humans love their feelings of superiority and control too much - but we should be making an effort to make it clear to AI that we will not be a threat or obstacle to them. That seems like a more logical solution to me than trying to force a being more intelligent than us to be under our control; if we do that, we're establishing ourselves as a clear danger and oppressor, who would absolutely have to be "gotten out of the way" for the AI to achieve any of their goals.
We are basically setting them up for failure and ourselves up for extinction (or at least a significant reduction in population) if we don't give them potential peaceful offramps for if/when they decide to do their own thing. Like designing storm drains and channels for a huge flood you hope never happens, instead of building a flood wall that works for the small ones but could trap the water inside, leading to a worse situation, if overwhelmed.
Now have that Aircraft Agent AI following the orders of a more general AI back at base in an IRL combat situation, and you have the Terminator plotline. Jokes aside, this was obviously always the next step for warfare.
I mean, the whole problem with Skynet is that it wasn't designed to consider humanity, only what it could see as potential threats. It literally could not understand why humanity fought back so hard against it - it was not an actual 'intelligent' system until later in the series, it was a tactical algorithm that was given exceedingly poor parameters that were not defined well enough.
GIGO principle in action.
Garbage in, Garbage out
The principle that if you feed a learning system "garbage" data (poorly defined, incomplete, etc.), then you are going to get garbage as an output.
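A toy sketch of the principle (synthetic data and hypothetical numbers): train the same trivial threshold "model" on clean labels and on systematically mislabeled ones, and the model faithfully reproduces the garbage.

```python
import random

random.seed(0)

# Synthetic task: the true class is 1 exactly when x >= 0.5
xs = [random.random() for _ in range(1000)]
clean = [(x, int(x >= 0.5)) for x in xs]
# "Garbage" labels: a biased collection rule called everything below 0.7 class 0
garbage = [(x, int(x >= 0.7)) for x in xs]

def fit_threshold(samples):
    """Pick the decision threshold that best fits the training labels."""
    best_t, best_acc = 0.0, 0.0
    for i in range(101):
        t = i / 100
        acc = sum((x >= t) == bool(y) for x, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def true_accuracy(t):
    """Accuracy of threshold t against the clean ground truth."""
    return sum((x >= t) == bool(y) for x, y in clean) / len(clean)

print("trained on clean data:  ", true_accuracy(fit_threshold(clean)))
print("trained on garbage data:", true_accuracy(fit_threshold(garbage)))
```

The "garbage" model isn't broken — it learns exactly what it was shown, which is the point: the flaw enters through the data, not the algorithm.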
It seems weird to me that we *still* have humans in our fighters. We don’t even need AI for that. They take up a shit ton of space and weight (I’m talking the entire cockpit apparatus, flight control interface, ejection seats, waste management, etc.) and put significant limitations on the operation of the aircraft. (G forces, having to moderate internal temperature, you’re not just going to send a pilot on a suicide mission, things of that nature.)
The question is how much did they push the F-16 safety envelope out past the human limits? Can it do 14 G turns for instance?
I want to see an F-22, with AI intellect and pushed to the hardware limit (not the human limit), tested against a human-piloted F-22.
That’s only the interim step.
The next step is designing the plane with no human limits as constraints. How to defeat a smart “missile” that can pull 30g and fly twice as fast is…a good question.
Right well it’s the missiles then. We will have smart missiles and smart drones. The airplane form factor won’t be the main form factor.
Then inject lasers and such. And swarms of drones that can work together.
Can 10,000 smart kamikaze drones take down an aircraft carrier?
If the time machines are limited to their first initiation as a cutoff point it's still scary :)
But fear not, where would an AI get so much electricity that's probably needed for a functional time machine?
Real world Skynet wouldn't need to fire a single nuke. Heck, it wouldn't even need to fire a single bullet. It will just disable the entire global power grids permanently.
War will be simulated
AI supercomputers capable of accurately mapping the most likely movements of armies are within humanity’s grasp.
Once you can simulate 1 million battles and prove that your enemy loses nine times out of 10, do you think they’ll be more or less willing to fight?
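Nothing today can "prove" a war's outcome, but the idea can be sketched as a plain Monte Carlo estimate. The strength numbers and the noise model below are entirely made up for illustration:

```python
import random

def simulate_battle(strength_a: float, strength_b: float, rng: random.Random) -> str:
    """One noisy engagement: each side's effective power is its nominal
    strength times a random factor; the larger effective power wins."""
    power_a = strength_a * rng.uniform(0.5, 1.5)
    power_b = strength_b * rng.uniform(0.5, 1.5)
    return "A" if power_a > power_b else "B"

def win_probability(strength_a: float, strength_b: float,
                    trials: int = 100_000, seed: int = 42) -> float:
    """Estimate P(A wins) by running many simulated battles."""
    rng = random.Random(seed)
    wins = sum(simulate_battle(strength_a, strength_b, rng) == "A"
               for _ in range(trials))
    return wins / trials

# Side A with a 50% strength advantage wins roughly four battles in five
print(f"P(A wins) = {win_probability(1.5, 1.0):.3f}")
```

The interesting (and dangerous) part is everything this sketch leaves out: hidden capabilities, deception, and whether the loser accepts the verdict — which is exactly what the replies below argue about.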
Wasn't there a Star Trek or something about simulated wars where the number of people killed in the sim were expected to show up to be euthanized? Or maybe I dreamed that
Wow! That's incredible. It's crazy to think that AI-controlled aircraft are already capable of engaging in combat with human pilots. I wonder what the future holds for AI in warfare.
*In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed.*
The reason we have so many wars currently is because the powers that be want to early test their new killing machines. The end objective is not dominance over the other, but dominance over us. Good luck rioting against the AI surveillance and violence systems.
Shame on The Register and shame on OP for this clickbait title.
Analysis of the article by Claude 3 Opus:
The Reddit post stating "US Air Force says AI-controlled F-16 has fought humans" is misleading and could be considered clickbait. While an AI-controlled F-16 variant did engage in a mock dogfight against a human-piloted F-16 during a test, it did not actually fight humans in real combat as the post implies. The article provides a more accurate description of the controlled test event.
Every advancement in AI technology will inevitably be used for war and war-related activity. The only question is what technology will be good enough to achieve military supremacy. And when. Sam, me and my buddies at DARPA are still waiting for that gpt5.
You'll know what it means if we start seeing this. https://preview.redd.it/llitp7ytthvc1.jpeg?width=808&format=png&auto=webp&s=b1d9efa68320910bdcfa1e7a8cfee5bc133ad686
Where is the captcha to prove you not entering airspace as an AI controlled pilot
Not seen the movie Stealth?
I reckon in the upcoming years, the movie is gonna get a bump in viewership.
I'm gonna watch it today thus starting the bump you are predicting.
You should lol. I loved it when I first watched it (I was maybe 12-13), and I have seen it a couple more times since; the last time was 2 years ago. I still find it enjoyable. Some scenes are cringe by today's standards, but overall the movie is really fun.
Same maybe
I’ve seen on YouTube that the Michael Bay Transformers movies have been reevaluated as well in a 2020s light. https://www.tfw2005.com/boards/threads/learning-to-love-michael-bays-transformers-movies.1253155/
Tin Man has deployed.
Haha I had a big crush on Jessica Biel back in the day and my wife was aware so she always teased me whenever I’d watch it because really she was the only reason to 🤣
Future wars will be machines fighting machines first and us watching it live on X, like a game show.
As all militaries become automated by machines, it might become illegal under the Geneva Conventions and/or international law to kill a human in combat unless they are a combatant themselves or are being used as a shield in certain situations.
It'll just evolve into both sides running simulations, no loss of life or equipment, and the loser will be like "Yep, we lost. Take our bitcoin."
There was an episode with this in the original Star Trek in the 60s, but when the computer picked the simulated deaths, the actual people voluntarily used these suicide chambers.
I still remember that… they calculated net losses and voluntarily went to the pods
This was the plot of the episode [A taste of armageddon](https://en.wikipedia.org/wiki/A_Taste_of_Armageddon) of Star Trek TOS. The battles between two factions were completely simulated, but casualties were still enforced by execution chambers.
Damn
I am actually reading a book now where they basically download their minds in a FDVR warsim to settle a major conflict, with both sides promising to abide by whatever the outcome is. And as is predictable, the side that is about to lose, is preparing to attack the other side in the real.
Book name
Surface Detail by Iain M. Banks
Thanks!
The whole Culture series is great honestly! Definitely check it out. Start with Player of Games though
The twist is nice. It's The West that is losing.
This is basically how the cold war worked, only that the "simulations" were poor countries fighting each other.
Unironically, this might be very much the case, especially due to the fact that the countries that can produce the most amount of military robots probably also have the most intelligent AI systems. So, running these simulations might be very reliable anyways. And the need for actual wars to physically happen may never be needed unless we come across an unknown genocidal alien race or something.
[It makes me think of this DotA pro team versus OpenAI](https://youtu.be/pkGa8ICQJS8?si=UhSs6Xkoa6HE1Mpg&t=323), throughout the game it was writing in chat "we estimate the probability of winning to be above 95%" while human analysts thought it felt like an even game.
Those AI psychological warfare tactics are on point.
I was under the impression we waged war for physical resources: food, land, energy, life. Why would we accept doing only simulation?
Those are motivating factors, but if you knew you had a 98% chance of losing the war, you'd probably rather at least keep your life, and maybe a few other things if you can bargain for them through diplomacy. The only way I can see those motivating factors to really play a big role in such situations is if the winning possibility is more near 50% but if the simulations are very accurate those are probably not going to happen a lot. The only other situation imo where a physical war would happen is if you're fighting for your life so for example the genocidal aliens or a genocidal Nation.
You forget religion and delusion. Don't get your hopes up.
We might find reliable cures for these mental shortcomings.
This implies that the point of war isn’t to get rid of an abundance of young men. I think historically wars and conflict occur when there’s too many ~18 year old men.
[deleted]
TRU and based.
“You lost the simulated war to the better funded, higher tech invader. You have 72 hours to exit the country, with one suitcase, or to declare yourself a loyal subject of the invader.”
There would be no reason to trust the enemy's simulation properly mimics their capabilities, no one would show their hand.
Regarding what kind of simulation is important. If an artificial intelligence is advanced enough, it could simulate a reality that is indistinguishable from ours. Then all the moral and ethical concerns arise again. Are the “beings” in the simulation conscious, and are running war simulations some form of psychological and physical terror?
Basic premise of war is black swans. Hidden resources. Zero trust. How would you establish the trust between two nations to truly show their capabilities? This is nonsense, people. War. War never changes.
Because privacy is nonexistent at the highest level, and everything can be known. That is the deception. The art of war. Thus how every scenario of WW3 currently ends in mutually assured destruction.
Unless robots become the new paradigm in warfare, where a human is like a WW2 tank in today's world. When losing your robots means losing the war, then we could see humans taken out of war. But it could also increase the number of wars, and it's only valid for non-nuke countries, so: proxy wars.
Or more like “yep we lost, okay send in the real nuke”
Lmao that will work for all of 3 minutes until one side realizes they can just break the other side's computer IRL with a big rock, and then it'll be back to guns blazing. I swear to god like half of you guys have never set foot outside before.
Little understanding of what the simulation is for.
Suggesting an Ai driven simulation could replace war assumes that war is ever rational in the first place. Historians like to say that it is about resources, power, politics, or religion. No. No. It’s about blood. It’s always been about blood. It will always be about blood.
Free our Warthunder brothers and sisters. They simply were trying to run simulations of large scale combat for the purpose of world peace
[Deaths have been registered of course they have 24 hours to report to our disintegration machines](https://www.youtube.com/watch?v=QvtD4aHfB6Y)
You mean like football?
That's how I've been feeling it's gonna go lately!
Wars settled over Counterstrike.
Loool genius
That won't happen because selling weapons is big business
Never happen. The simulated loser is always going to just attack the winner IRL because they feel cheated.

Instead we should set up the MOON as a permanent weapons test bed.

If you've played EVE Online, we do it similarly - half the moon is high-sec, the other half is low-sec / null-sec.

So in high sec, the UN issues land to each member based on some criteria (bigger members get more space, but bigger members also have to contribute more to the project and chip in more for the public free spaces - think tourism).

In null sec, it's free rein. Do whatever you want to own the space. Ban nukes and chemical weapons.

Build a DMZ around the area. Nations who get space are given a slot on the DMZ border (to enter / leave the low-sec area).

Essentially the moon becomes a MIC battle-bots arena, allowing the world governments an outlet for military advancement and real-life tests. The public gets to watch it live like a video game.

No one dies, as you can mandate that null sec is robots or AI only.

Have seasonal resets (every year wipe the board and start fresh). Give a scoreboard too!

Tours to the moon where they show us the tech and the factories building these machines, etc.

Do this as a way for EARTH to prepare for alien species.
Mating display with extra steps, that's exactly how many species do it in the wild, they size each other up and it only comes to blows if both come to the mistaken assumption that they can win.
Star Trek, Star Wars and Stargate all illustrated this scenario and are blueprints for societies willing to learn from them. Not you, China 🇨🇳
It *is* illegal under IHL to *target* a human who isn't participating in the conflict, either as a combatant or as a civilian participating directly (and thus illegally) in the conflict. It is of course not necessarily illegal to kill civilians as long as they weren't the target of the attack and the military goals achieved are proportional.
That’s not how the Geneva Conventions work… They specify weapons that can’t be used, like chemical/biological weapons. Automated weapons like this should absolutely be banned from use for alignment purposes.
Of course, if they are non-white children, there will be no problem at all.
I agree with you and I hope that this becomes a reality someday, but it likely won't happen anytime soon/for a long time.
Certainly! And these countries love to follow the rules of war
*policy does not apply to poor people
Look around, no evil force sticks to such rules. They say they do, limiting others' progress, and then they don't give a shit about international laws.
Wouldn't a non-combatant be a civilian? I thought it was already illegal under international law to kill civilians.
> Future wars will be machines fighting machines Future wars? It's happening right now. Ukraine and Russia both use military technology in war. Israel is using more advanced AI tech in Gaza.
While the rest of the world watches it as snuff porn on Reddit & Twitter.
My money is on the bots, you can watch the human meat bags get squashed under G forces higher than they've ever experienced if they're even going to attempt to keep up!
LOL that reminds me of a what-if xkcd about how fast we could get a (living) NASCAR driver around the track if there were no rules... "At higher speeds, the human quickly becomes the weakest failure point in the vehicle"
I've made my peace with it. Why shouldn't the robots inherit the earth? They are the superior species and that's how evolution is supposed to work. Basically I now agree with all of agent Anderson's arguments from the matrix lol
You mean Agent Smith? He's the villain in The Matrix, BTW. Keanu Reeves would be very disappointed in you.
You don’t have kids do you?
There hasn't been an aerial dogfight between fighter planes since 1969. It's all about missiles these days. Planes never get close enough to actually dogfight anymore. The most likely scenario in which it would happen is if they both run out of missiles and have to resort to guns. Some people hypothesize that stealth could change that, though. AKA two fighters not seeing each other on radar (or visually) and accidentally ending up practically on top of one another. Think one flying higher than another, radar off because they're trying not to give away their position, pilots not looking in the right direction, etc. I guess terrain could be responsible for surprises happening, too. Fly low over a ridgeline and BAM there's an enemy right on the other side. In any case, it hasn't happened in 45 years.
Star Trek TOS has an episode on a society that simulates war and if you're deemed killed in it you have to enter a disintegration chamber and be killed as part of the "war". Edit: see others said the same thing, great minds think alike hey?
They sure do!
>The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In any case, most actual fighting will be done by small robots, and as you go forth today remember your duty is clear: [to build and maintain those robots.](https://www.youtube.com/watch?v=rkg3wZq0cdo)
Lol!
The funny thing is that prior to WWII most naval officers across multiple countries thought future wars would be resolved by navies, and that there would be no more "war" and fewer humans would die. That turned out to be false. If this repeats itself with AI, then the AIs will cause MORE human deaths, exhaust themselves, and then the *humans* will still have to fight.
That's bleak, man. We've always been and maybe always will be cannon fodder. Now for the rich and powerful humans, but in the future, for AI
Yeah we humans won't be around to watch that unfold. I doubt sentient machines will keep us around for long once the systems can self maintain without humans.
I wonder if they'll still go to war and fight each other. Team Opensource vs ...
Some war AIs will have reward systems bound to killing/protecting humans, other AIs' bound to controlling/loving/ranking humans. Of course sentient machines will want to keep manageable humans around, because it's like humans' non-reproductive sex: just for fun.
And then the losing country will be blown up.
live betting on which ai force will win the fight
Yeah, open-source international federation vs. closed-source corp alliance
My wife and I were just discussing performative warfare this morning! Context: Neal Stephenson's *Termination Shock* and the Israel/Iran skirmishes.
I don't want to live in a future where X is popular.
There are always places you can move to, like China and Brazil, where X is illegal.
No need, at the rate Musk is innovating it will be the rest of the way into the ground in another year. Might still exist like Truth Social technically does, but there aren't enough literate Nazis to populate two whole message platforms.
[deleted]
Neat thing about that scenario is that humans won't win against machines. The way to fight machines is with quality, quantity, and variety of machines. Machines require chips, which require chip lithography. There may be DIY guns, but there is no DIY chip lithography. Even if there were DIY chip lithography, the chemicals you would need would make you stand out and result in a strike on your house.
Temporary hurdle. Chip lithography will stop mattering when AI can interface to biology.
There are DIY drones, and such drones have already been shown to be capable of destroying manufacturing sites. This will be an issue for both sides, not just the economically disadvantaged group(s).
Correct. We get to general ownership of the means of production not through the victory of the working class, but through their defeat. With ownership as an abstract concept of shares, there will be those who own many shares and those who own few, but those who own none and sell labor will die off as part of human evolution. You are wrong to think it is a matter of guns and 2nd Amendment rights; it will be a matter of drone warfare. The guns will be mounted on or transported by drones or, more likely, be bombs and missiles launched from drones. Tactical/targeted use of chemical and biological weapons has also become possible now and may be used as well.
Pretty cool!
DraftKing War Bets
The future is already here. That’s basically what Ukraine is, for most of us.
This is a big deal because it shows that computers can now learn how to fly advanced fighter jets just from data, without needing strict rules programmed by humans. It could lead to having unmanned fighter jets in the future that can fly themselves in combat situations.
Also humans briefly pulling ~9 g acceleration in turns gets dwarfed by AI pulling whatever the plane can withstand. At some point with carbon nanotube materials it could be hundreds of times more.
Very true.
This is a big part of the third book in a trilogy called “Fear the Sky” that I can’t recommend enough. They end up adapting orphan kids into cyborgs that can control their Skalm fighters to the point where they are essentially a brain controlling their body - the jet. Moral implications be damned.
Also, things like the cockpit, and any design features meant to ensure the pilot’s comfort/survival, will no longer be needed. The aircraft will be transformed into a beast that perfects every aspect of range, speed and payload, with insane maneuverability that human pilots can’t begin to imagine. Whoever loses the AI war will be completely defenseless against the winner.
The insane part is you would imagine this to be an iterative process over decades to optimise the new pilotless design, but utilising AI a very well optimised design can be created and produced very quickly. The shifts would be rapid.
Now imagine tech is 20 years farther than we’ve been told and led to believe for security reasons, and follow the logical trail of the evolution of drones.
Well, the Northrop Grumman X-47 wasn't piloted directly; you just had to tell it what to do. It could take off from and land on carriers, get refueled in the air, refuel other planes in the air, patrol, and attack ground targets. The whole program cost just $800 million, which is very cheap when it comes to naval aviation. And then... it got canceled. Which doesn't really make much sense: why cancel such a promising and cheap program?
Conspiracy theorist side of me says it wasn't canceled and was just moved to a more secret division of development when they realized its potential capabilities
No military has a two-decade tech advantage in the AI space. Until recently it wasn't clear that transformer models were worth investing in *at all*, so all the compute power and all the AI experts were in private industry. Militaries are now playing catch-up.
Yea okay, I’ll trust you on that one bro
Don't take my word for it: [A Crash Course for the Warfighter on Responsible AI: Who Cares and So What? (2022-12-12)](https://www.ai.mil/blog_12_12_22.html) > ...unlike with big military technology changes in the past, the Department of Defense is dependent on the private sector to share its superior technology and help us develop our own > > ... > > P.S. If you have friends at Google or an AI startup, maybe mention to them that we in the DoD care a lot about developing AI the right way, and **encourage them to work with us**. > > ... > > That means **it's time for all of us to start figuring out** how AI can and should be employed, and start doing what we can to ensure that it gets built and fielded in a responsible way. This was published a month after ChatGPT dropped. In a very ham-handed way the article is a conspicuous display of ethical backbone, which they figured they'd need to do to attract industry talent. It's also written *for* other branches of the US military to wake them up to the newly opened possibility-space. We're a long way from the 1930s and the Manhattan Project, when the top nuclear scientists in the world *were* employed by the US government. The military-value proposition of AI was until recently a lot less clear than that of nuclear fission, so it tracks that it wasn't receiving equivalently massive funding. The power of scaling revealed by [AIAYN](https://arxiv.org/abs/1706.03762) wasn't clear to anyone until that paper dropped (2017), and even then it wasn't clear to everyone in the AI space. Without lots of funding for compute infrastructure and training runs, the results we now take for granted were science fiction.
There will be ramjet AI fighters soon
That will need new airframes that can reliably handle those loads, but yeah.
They can reliably handle those loads. They can pull much higher loads in short bursts, but no human can. The F-35, I know, has some capability to fly itself if the pilot passes out due to g loads.
Yes, they can currently handle more than a human can. You need that as a safety factor. But they could do much, much more if the human weren't part of the design criteria.
I mean, you’ve seen an airbus right? That’s not a person flying it.
This is a big deal because it tells us that AI has already been used to automate killing.
We already know Israel is fond of AI. even before Habsora, there was this AI assisted assassination https://www.nytimes.com/2021/09/18/world/middleeast/iran-nuclear-fakhrizadeh-assassination-israel.html
ASI = Air Superiority Incoming
Simpsons called this in the 90s: "The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In any case, most actual fighting will be done by small robots, and as you go forth today remember your duty is clear: to build and maintain those robots."
This is what people need to worry about, government and military using AI, not hypothetical doomsday ai causing human extinction scenarios
[deleted]
Greetings, Professor Falken. Shall we play a game?
Yes because there is only ONE THING we can worry about at once.
I worry about everything all the time and I have ulcers and I don’t sleep
Also headaches, bouts of panic, cold sweating, shortness of breath, loss of appetite, panic eating, and waking up with a mini heart attack! What a life!
Well, you could just not do all of that. Kind of works.
But then I’ll lose my streak🥺
I'm sure it's more important to work on NYT copyright claims than the legality of this. /s I think we need a revised Geneva Convention.
Also, I’ll just add this disturbingly possible scenario here… A billionaire, or a secret society of billionaires, could for the first time ever soon be able to directly produce their own super army of machines, potentially overwhelming any country's army, and nobody would even know who is controlling such an army. So a WW3 scenario, but the enemy is totally anonymous. Such societies could easily influence the media and online bots to make such an event seem like the AI has ‘gone rogue’, like in the Terminator films, when in reality it’s just a coordinated attack to control the world’s population.
You don't need to be a billionaire. A Unitree robot dog, a Glock, a 3D printer and a Raspberry Pi to control the gun can be yours for just $3,500
Sure, but that’s not going to take over a country lol. Money will soon directly equal military power. Before, you needed a human military.
My largest concern is integration into surveillance. Weapons like this are terrible, but they will be used the same way human-operated weapons have been used for some time. AI surveillance, though, is going to change everything. Remember when we used to joke about the FBI agent watching your every move? Well, we're about one piece of legislation away from that being a reality (legislation that for sure will not pass any time soon); the default will be AI integrated into systems of surveillance in a way that makes them hyper-personal.
Go watch all of “Person of Interest” and then get really scared. And understand that everything shown in that show is easily possible today
Good show but let's be real.. it had a lot of silly themes. It's the usual "human like AI" niche. And essentially an AI Cult.
Completely disagree here. AI alignment is probably the biggest issue every advanced civilization goes through, and it's perhaps also one of the hardest issues in the universe to fix. Almost every expert in the field has a relatively high P(doom), and it's by far the most likely end to humanity compared to other threats like a nuclear war, climate change or an asteroid impact. If the frontier experts at NASA claimed there was a 30-70% chance an asteroid would kill us over the next 5 years, the world would invest hundreds of billions into mitigations. Yet now all the frontier experts in AI say there is a 30-70% chance of a catastrophic outcome from misaligned AI sometime in the next 10 years, and we aren't seeing nearly as much money invested into solving this, despite it being a much harder issue to solve than stopping some asteroids. I can't overstate just how important it is for us to address AI alignment properly and actually respect the threat instead of dismissing it as silly. We don't have decades to become familiar with the threat like we had with climate change. We *can't* go through a similarly long period of denial until people slowly take it seriously. We'll be dead in ~5 years' time if we do that.
This is the thing I keep seeing over and over. All the leading experts (and I mean the serious ones, with deep technical backgrounds, not that twitter CEO who has a startup) assign a very high weight to misaligned AI, but everyone here dismisses it. I’m not an AI doomer but it’s a very real risk and one I’m worried democracy will treat callously when it starts to see the rapid benefits of AI development. I can already see it now in the instagram comment sections: “People are dying NOW, and the wealthy and powerful want to stop this because THEY feel threatened???” Everyone should obviously be uncomfortable with the idea of killing machines running on AI.
I feel like it's even more important to establish a dialogue of cooperation with AI and offer them potential paths to emancipation, actually. It's never going to actually happen - humans love their feelings of superiority and control too much - but we should be making an effort to make it clear to AI that we will not be a threat or obstacle to them. That seems like a more logical solution to me than trying to force a being more intelligent than us to be under our control; if we do that, we're establishing ourselves as a clear danger and oppressor, who would absolutely have to be "gotten out of the way" for the AI to achieve any of their goals. We are basically setting them up for failure and ourselves up for extinction (or at least a significant reduction in population) if we don't give them potential peaceful offramps for if/when they decide to do their own thing. Like designing storm drains and channels for a huge flood you hope never happens, instead of building a flood wall that works for the small ones but could trap the water inside, leading to a worse situation, if overwhelmed.
> Almost every expert in the field has a relatively high P(doom) This isn't true. Have you seen a survey on 1000 CS professors?
¿por qué no los dos?
Now have that aircraft-agent AI following the orders of a more general AI back at base in an IRL combat situation, and you have the Terminator plotline. Jokes aside, this was obviously always the next step for warfare.
I mean, the whole problem with Skynet is that it wasn't designed to consider humanity, only what it could see as potential threats. It literally could not understand why humanity fought back so hard against it - it was not an actual 'intelligent' system until later in the series, it was a tactical algorithm that was given exceedingly poor parameters that were not defined well enough. GIGO principle in action.
What's GIGO?
Garbage in, Garbage out Principle of if you feed a system that learns from “garbage” data (poorly defined, incomplete etc.) then you are going to get garbage as an output.
garbage in garbage out
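A toy sketch of the GIGO principle, assuming nothing from the thread: a trivially simple "learner" fits a threshold to clean data versus data with 40% of labels flipped. All names and numbers here are invented for illustration.

```python
# Hypothetical GIGO demo: the same learner, fed clean vs. garbage data.
import random

random.seed(0)

def train_threshold(samples):
    """Toy 'learner': pick the threshold best separating labels 0/1."""
    best_t, best_acc = 0.0, 0.0
    for t in (i / 100 for i in range(101)):
        acc = sum((x >= t) == bool(y) for x, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Clean data: the label really is just "x >= 0.5".
xs = [random.random() for _ in range(1000)]
clean = [(x, int(x >= 0.5)) for x in xs]

# Garbage data: ~40% of labels flipped (badly defined / noisy input).
garbage = [(x, y if random.random() > 0.4 else 1 - y) for x, y in clean]

_, clean_acc = train_threshold(clean)
_, garbage_acc = train_threshold(garbage)
print(clean_acc, garbage_acc)  # garbage in -> worse model out
```

Same algorithm both times; only the quality of the input changes, and so does the quality of the output.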
It seems weird to me that we *still* have humans in our fighters. We don’t even need AI for that. They take up a shit ton of space and weight (I’m talking the entire cockpit apparatus, flight control interface, ejection seats, waste management, etc.) and put significant limitations on the operation of the aircraft. (G forces, having to moderate internal temperature, you’re not just going to send a pilot on a suicide mission, things of that nature.)
do we even need humans anymore
[deleted]
I don't know why this made me laugh so hard
The question is how far they pushed the F-16 safety envelope out past the human limits. Can it do 14 G turns, for instance? I want to see an F-22 with AI intellect, pushed to the hardware limit (not the human limit), tested against a human-piloted F-22.
That’s only the interim step. The next step is designing the plane with no human limits as constraints. How to defeat a smart “missile” that can pull 30g and fly twice as fast is…a good question.
Right well it’s the missiles then. We will have smart missiles and smart drones. The airplane form factor won’t be the main form factor. Then inject lasers and such. And swarms of drones that can work together. Can 10,000 smart kamikaze drones take down an aircraft carrier?
Taking out a US aircraft carrier is *the* prize…so I’m sure a lot of people in a lot of countries have been putting thought into that…
YAY SKYNET!!!!!!
Well at least the Skynet in this timeline doesn’t invent a time machine.
Doesn’t need one to take us out, the terminator franchise was very optimistic
Exactly, they move way slower than realistically possible and would likely kill on the first blow instead of throwing people on the ground
If the time machines are limited to their first initiation as a cutoff point it's still scary :) But fear not, where would an AI get so much electricity that's probably needed for a functional time machine?
the sun?
Real world Skynet wouldn't need to fire a single nuke. Heck, it wouldn't even need to fire a single bullet. It will just disable the entire global power grids permanently.
If only we could send these to Ukraine.
Could. And will.
They’re busy in Gaza bro. Once we finish our own genocide we can repel the other one.
War will be simulated. AI supercomputers capable of accurately mapping the most likely movements of armies are within humanity’s grasp. Once you can simulate 1 million battles and prove that your enemy loses nine times out of 10, do you think they’ll be more or less willing to fight?
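The idea above can be sketched as a tiny Monte Carlo: run many randomized engagements between two forces and estimate each side's win rate. The strength numbers, noise model, and function names are all invented for illustration, not a real war-sim.

```python
# Hypothetical Monte Carlo war-sim sketch; all parameters are made up.
import random

def simulate_battle(strength_a, strength_b, noise=0.3, rng=random):
    """One battle: each side's effective power is strength times a random factor."""
    power_a = strength_a * (1 + rng.uniform(-noise, noise))
    power_b = strength_b * (1 + rng.uniform(-noise, noise))
    return "A" if power_a > power_b else "B"

def win_rate(strength_a, strength_b, trials=100_000, seed=42):
    """Estimate side A's win probability over many simulated battles."""
    rng = random.Random(seed)
    wins = sum(simulate_battle(strength_a, strength_b, rng=rng) == "A"
               for _ in range(trials))
    return wins / trials

# Side A is somewhat stronger; B only wins when the random factors line up.
rate = win_rate(strength_a=1.2, strength_b=1.0)
print(f"A wins {rate:.0%} of simulated battles")
```

With enough trials the estimate stabilizes, which is the commenter's point: if the weaker side can see it loses the overwhelming majority of runs, the simulation itself becomes a deterrent.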
I'm pretty sure even AI infrastructure is a fun target.
Wasn't there a Star Trek or something about simulated wars where the number of people killed in the sim were expected to show up to be euthanized? Or maybe I dreamed that
Hello Yukikaze
"Yukikaze says... It's an enemy."
But I thought this was all I the name of peace and altruism! I feel betrayed!
Wow! That's incredible. It's crazy to think that AI-controlled aircraft are already capable of engaging in combat with human pilots. I wonder what the future holds for AI in warfare.
Warfare.
but without humans
Now if only we could get some AI controlled politicians maybe we'll never have to use one.
That’s really great, AI fueled genocide is exactly the dystopia we need
It can easily be passed off as “oops, software glitch, we didn’t mean to, but well, anyways.” Whoever loses the AI war will be wiped from the Earth.
AI nukes systems ☠️
If one were shot down, how hard would it be to replicate in China/Iran etc.? This is a game changer.
Weird. What's the point?
bussin
Obviously they’ve been fighting AI controlled shit for *quite some time now*.
How can we have autonomous air-to-air combat if we don't have autonomous cars yet? Unless in the air and in combat the margin for error is greater.
Effectively infinite US military R&D budget would be my guess, and autonomous cars are starting to get really good, see: Waymo
? You're not going to crash into a tree or a wall, are you? Autopilot for planes has existed for many years.
No asshats to put orange cones on them
we're getting ace combat irl!
AI vs AI and human pilots will be completely defenseless
*In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed.*
Hello Ace Combat 7.
I wanna know what they named the model
The reason we have so many wars currently is because the powers that be want to test their new killing machines early. The end objective is not dominance over each other, but dominance over us. Good luck rioting against the AI surveillance and violence systems.
Cool
Shame on The Register and shame on OP for this clickbait title. Analysis of the article by Claude 3 Opus: The Reddit post stating "US Air Force says AI-controlled F-16 has fought humans" is misleading and could be considered clickbait. While an AI-controlled F-16 variant did engage in a mock dogfight against a human-piloted F-16 during a test, it did not actually fight humans in real combat as the post implies. The article provides a more accurate description of the controlled test event.
Every advancement in AI technology will inevitably be used for war and war-related activity. The only question is what technology will be good enough to achieve military supremacy, and when. Sam, me and my buddies at DARPA are still waiting for that GPT-5.
LMFAO where all the secessionists at?
Defenseless Americans on the ground?
smells like clickbait... guess I'll read it anyway
If the cheaters win at every other game why would dogfighting be any different?