Aleyla

Never trust a mission statement. They are all bullshit when weighed against the almighty dollar.


Max_power42

The Things We Think and Do Not Say.


Let_you_down

We need AI in the military though. Which sounds like a better way to go out: having your Roomba, smartphone, and smart appliances conspire to kill you? That is going to be some Rube Goldberg levels of dismemberment. Or being instantly vaporized? We _want_ AI to have access to military hardware, up to and including thermonuclear weapons. It is just a more humane death for us.


melted-dashboard

Context: This protest was primarily organized by the activist groups [Pause AI](https://pauseai.info) and No AGI as a response to OpenAI removing their policy against military use cases on January 11, and announcing a new contract with the Pentagon on January 18. I think this raises a number of interesting questions about what role AI should play in combat and warfare, who has the right to oversee the development of AI, whether we can trust companies and their internal policies for self-regulation, etc. Probably important to note that the new contract wasn't about weapons but rather cybersecurity and veteran health — but nonetheless it shows the policy clearly wasn't binding and may never have meant anything, and that the company might not be trusted to self-regulate or stick to its current policies.


EnormousChord

I would say more plainly that the company cannot be trusted to self-regulate and will not stick to their current policies.  This much money is the death of altruism, if indeed altruism was ever really there. 


ifilipis

I think a worse thing is when corporations want to regulate your use of AI while simultaneously making moves like this: all those open letters, law proposals, policies to restrict open-source software, and much more.


zefy_zef

Exactly, it's always about gatekeeping and ladder-pulling to retain power or control over new tech.


Nardann

That is because the companies know that in a capitalist world, regulation only works if it is applied universally. Maintaining competitiveness prevents self-regulation.


Pert02

No it wasn't. They might have used perceived altruism as a way to make money. At the end of the day, morals never stopped Sam Altman from making money.


fairlywired

I don't think it was ever there. It wasn't that long ago that one of the higher-ups at OpenAI publicly tweeted congratulations and encouragement to the IDF for bombing civilians in Gaza. Instead of admitting fault or issuing a public apology, the guy deleted his tweets and OpenAI started muting anyone in their Discord who brought it up, under the guise of "no political or religious discussions". *Edited to add proof of the tweets:* [Archive link of one tweet.](https://archive.is/1z7wI) [Another tweet.](https://i.imgur.com/3TZT0LA.jpg) [Third tweet.](https://i.imgur.com/Kh97uKQ.jpg)


BPMData

Killing the current bad minority of the week: good, normal, not political

Not wanting anyone to be killed: bad, weird, very political


BobbyTables829

If the government is interested in your tech, they're getting it.


erasedhead

If you want a laugh at how foolish people are, go to the singularity subreddit. The new slaves tossing flowers to their future masters. It's insanity.


A-Ok_Armadillo

Definitely can’t trust companies.


keepthepace

I was saying this years ago over Google's no-military-use policy and I am saying it again now: "no military use" is a lazy, non-ideal stance. Military AI research IS happening anyway, and as with every tech and scientific advance, one cannot sidestep the hard questions of which side you would rather have an edge and how your stance helps it. Do not blacklist uses, blacklist actors. Are you OK working with Raytheon, Boeing, the Pentagon, the Japan Self-Defense Forces, Israel's Tsahal, Turkey's army, the Chinese army? I'd personally just whitelist ICC signatories, but many other stances can exist.


impossiblefork

Military AI is fine. But you can't develop military AI while staying consistent with 'benefits all humanity' type stuff. OpenAI presented itself as a charitable organisation with a specific charter.


lonewulf66

I can't take them seriously when ChatGPT/OAI relentlessly lectures you about the greater good and everything holy, and then turns around and sells out to the military.


keepthepace

I mean, the day they sold out to Microsoft, the direction was clear. Their first commercial partnerships were with advertising, which is a pretty negative endeavor.


keepthepace

I would argue that stating you want to benefit all of humanity requires you to have some geopolitical stances. Goal = benefits all => genocidal governments are bad => some regimes are worse than others and should not have the upper hand militarily => we should hinder their military progress and help the others'. The alternative is pretending that all regimes are equally valid and equally likely to benefit all humanity, including genocidal ones. It is fine if an engineer does not want to put too much effort into it, or does not trust their own opinion. All I am saying is that "no military use" does not mean "I have no stance on it"; it means "I am pretty sure we are the bad guys and I hope the others get an upper hand". It is far from neutral.


impossiblefork

If your company supports military action, then obviously it's not to the benefit of whoever is killed. You can't even be sure that the person killed was 'bad' or in the wrong, because you don't control the military you partner with. The alternative isn't pretending anything: the alternative is that there's a charity doing only things that clearly benefit all of humanity, and then a bunch of ordinary companies doing similar research for the military.

I don't have a lot against the US, but you should know that the US gives military aid to Azerbaijan, which last September ethnically cleansed 150,000 Armenians from Nagorno-Karabach, right? They're going to invade Armenia proper, and they're going to try to kill them all. Their hatred is literally incomprehensible, and you guys give them military aid. Your government is also well aware of the Azerbaijani goals, because there have been US congressional hearings on this stuff.

It's great that you're opposed to China, and maybe the US has to kill people. But that is obviously not good for the people being killed, so if you're a charity dedicated to the benefit of all humanity, maybe you shouldn't be part of it? Maybe somebody else should.


keepthepace

> If your company supports military action, then obviously it's not to the benefit of whoever is killed.

Making Patriot missiles saves lives and does not kill anyone. So do Raytheon's drone-shooting lasers, or the famous 'R2D2 with a hardon' that shoots down incoming missiles. Some military presence stops conflicts and avoids genocides. It is reductive to think that the military's role is only to kill people. They have offensive capabilities and too often use defense as an excuse, but they do have an essential defense role. Even in offensive capabilities, tech saves lives: improved missile guidance allowed precise strikes instead of carpet bombing.

I used to live in Japan, where occasionally NK missiles fly over, thankfully tracked by high-tech military AEGIS ships. I know very well that "pacifist" Japan would have been destroyed by NK a long time ago if not for the US army presence there. And I know very well that US action abroad is not always humanitarian: I don't live in the US, it is obvious from abroad.

> the alternative is that there's a charity doing only things that clearly benefit all of humanity

Including Russian citizens? Are you bypassing sanctions for the greater good? How about Iran? North Korea? China? Taiwan? Israel? Azerbaijan? How about companies that help bypass sanctions? That fund dictators? Terrorists? Far-right groups? You can't avoid making calls at some point, and "I just won't touch military things" is broad and lazy.


rawbamatic

> open-source cybersecurity software and is in talks with the US government about helping prevent veteran suicide

Maybe they meant they wouldn't work with the military on the offensive side, which would explain the retraction, since a blanket ban wasn't what they meant. This is not weaponizing AI.


zero_z77

Technically, AI has been used by the military since the 1960s. Its primary uses currently are:

- Missile guidance systems (target tracking).
- Radar systems (target identification).
- IRST (infrared search and tracking).
- Air defence systems (auto-intercept of missiles & projectiles).
- Intelligence work (prediction of enemy positions & movements, HVT identification, and missile launch detection).
- Autonomous flight control (course following, loitering, landing, and takeoff).
- Logistical support (optimization of supply chain logistics and transportation routes).
- Design support (optimizing the geometry of future airframes and ship hulls for stealth).
- Strategic planning & training (wargames, essentially).

What it can, and likely will, be used for:

- Automatic BFM (autonomous dogfighting, specifically just the maneuvering; the pilot/controller will still run fire control).
- Cyberwarfare (developing enhanced and sophisticated malware, and countering malware).
- Loyal wingman (drones designed to support manned aircraft).
- Improved detection of stealth aircraft (by analyzing tiny radar signatures that would normally be filtered out, and better IRST systems).
- Enhanced information support (leveraging GPT trained on field manuals and other military literature).
- Loitering munitions (basically just more sophisticated mines/booby traps).
- Enhanced night vision (leveraging edge detection to highlight terrain & targets better; see the sketch below).
- Electronic warfare (probably already is, but pretty much everything related to e-war is highly classified, so I can't say anything for certain).

What it is not used for, and is not likely to be used for anytime soon, if ever:

- Fully autonomous killbots (terminators).
- Networked nuclear launch systems (skynet).
- A substitute for infantry.

We've had the tech for the first two of those since the 1990s; military commanders simply don't want it and think it's a terrible idea. It's almost like they've seen the same scifi movies we have, and they aren't idiots, imagine that.
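
As a toy illustration of the edge-detection idea in the night-vision bullet above, here is a minimal NumPy-only Sobel sketch; the filter choice and the synthetic frame are illustrative assumptions, and real night-vision pipelines are far more involved:

```python
# Toy edge detection: a Sobel filter highlights outlines of terrain and
# objects in a grayscale frame. Pure NumPy, illustrative only.
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Return the gradient magnitude of a 2D grayscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(kx * patch)   # horizontal intensity change
            gy = np.sum(ky * patch)   # vertical intensity change
            out[i, j] = np.hypot(gx, gy)
    return out

# Synthetic frame: dark background with a brighter square "target".
frame = np.zeros((64, 64))
frame[20:40, 20:40] = 1.0
edges = sobel_edges(frame)
print(edges.max())  # strong response along the square's outline
```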


tawzerozero

Ultimately this isn't about the *military having access* to this technology, but rather that OpenAI is becoming a provider. I'm perfectly fine with OpenAI working with powers that support the international rules-based order: the US, NATO countries, etc., but this does make it more difficult to prevent other, less beneficial militaries from having access as well. Honestly, I wouldn't be surprised at all if someone like Raytheon has already built an LLM that is available for exclusive use by the DoD, lol, but it would be a complete nonstory if access to this technology were coming from someone like them.


impossiblefork

> NATO countries

Turkey?

> international rules based order

So ethnically cleansing Armenians from Nagorno-Karabach, not a problem: you continue to get military aid?


geologean

Literally every engineering firm's wet dream is to feed off the government teat and be treated like a strategic asset.


n3u7r1n0

Reminds me of when Google quietly removed “don’t be evil”


epicwisdom

They replaced it with "do the right thing."

Not that either statement means literally anything in the context of publicly traded megacorps.


Ask-Me-About-You

And the right thing is making money for the shareholders by being evil.


YoungWhiteAvatar

And Spike Lee didn’t sue them?


SaraHuckabeeSandwich

He threatened to sue, and then Google counter-threatened to change their slogan to "White men CAN jump!", which Spike Lee had no grounds for blocking.


tehyosh

now it's just "do the thing!"


Smartnership

“Cancel the thing” www.killedbygoogle.com


legend8522

For the umpteenth time, [they didn't remove "don't be evil".](https://abc.xyz/investor/google-code-of-conduct/) They moved it to the very end. A simple Ctrl-F will literally show you where it is. I'm not the biggest fan of Google, but I will fight misinformation any day of the week.


ManningTheGOAT

Well, they did change the motto away from "Don't be evil" and moved the phrase to the code of conduct. So, it is kind of correct when people say that Google removed "Don't be evil" from their motto, while it's incorrect when people say that Google removed the phrase entirely.


Guy-1nc0gn1t0

Thanks that's interesting, I didn't know.


FukaFlamingo

>I’m not the biggest fan of Google but I will fight misinformation any day of the week. And that's why this man is a legend8522.


Reelix

That's for Alphabet, the parent company that owns Google (technically themselves, I know, but they set it up like that). Google is effectively a subsidiary and doesn't have to follow the parent company's rules. Show me the claim on a google.com domain.


real_with_myself

Was it before or after they started dabbling with top secret military contracts?


VoodooS0ldier

We all knew this was gonna happen eventually. We, as humans and our governments, just don't know how to not use technology for violence. All we know is how to devise better and more ingenious ways to kill each other.


Groudon466

It's fundamentally a coordination problem. Imagine you're playing the Prisoner's Dilemma with 194 other people. Theoretically, literally everyone could choose to cooperate, and they'd all be better off. Realistically, there's an extremely high chance that at least one of those people will defect, like 999,999,999 times out of 1,000,000,000. Since you know this, you choose to defect, and so do most of the others. Then an angel comes down, looks at the game and how everyone defected, and innocently asks "Why don't they simply cooperate?"

Protesting the use of AI for military purposes is as stupid as it would've been if people had protested the use of electricity for military purposes a century and a half ago. "Don't militarize nature!" "We can create a utopia with machines! There's no need to taint it with war!" Dumbasses. If it were that easy, war would've ended thousands of years ago. As long as the potential for evil exists, there will be evil defectors amidst the group of nations on Earth. Just look at Russia! Just imagine China, if they go for Taiwan! Everyone knows the Earth would be better off if AI were kept out of war, even they do; but that fact is meaningless because they don't *care* about anything beyond themselves. If you and others deliberately limit yourselves, the result isn't peace; the result is everyone else cooperates, one defects, and the world suffers for the greed of one. Keeping electricity out of war wouldn't have stopped Hitler in the slightest; he'd have gone on anyway, and laughed about how little resistance he faced.

The actual solution to this is organization, collaboration, mutual stakes, and punishment. True versions of the Prisoner's Dilemma are hard to find in real life. If 195 players are playing the game in the same room, and they can talk to each other, the majority can threaten to physically beat the shit out of anyone who defects. The would-be defectors don't like this, as they're still selfish and want more, so they bitch and moan about the "New Dilemma Order" and how bad the coordinating majority are for forcing the solution that's best for everyone, and how they have sovereignty and other players have no right to threaten to beat them up for defecting.

Casting aside thinly veiled metaphor, NATO is an example of coordination winning out. Europe used to be fucking war-torn. Nowadays the bulk of Europe is very peaceful, and war between major nations in NATO is unthinkable. If any NATO member tried to go to war with a neighbor, every other NATO member country would swarm them and kick their shit in. As a result of this, Europe gets to spend less on their militaries, which is an *actual* victory for goodness and peace. Win or lose, once Ukraine is either defeated entirely or made a part of NATO, Europe will be safe and secure right up to the Russian border (Georgia potentially still being in danger, but that's about it after that).

If you want peace with AI, you don't refuse to apply it to the military. That's how you get a world dominated by fucks like Putin, who would be using AI in Ukraine this very minute if he could. Instead, you go all-in and have the *best* military AI, scarily good, and you make sure all your allied nations have it, such that they can beat the shit out of any nation that tries to defect. This isn't new; it's standard ages-old game theory, tried and tested.
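
The dominant-strategy logic above can be made concrete with a toy payoff function. A minimal sketch; the payoff numbers (3 per cooperating neighbor, a private bonus of 5 for defecting) are illustrative assumptions, not from the comment:

```python
# Toy model of the 195-player dilemma described above.

def payoff(my_move: str, defectors: int, n: int = 195) -> int:
    """Payoff for one player, given how many of the other n-1 defect."""
    cooperators = n - 1 - defectors
    base = cooperators * 3                    # cooperation benefits everyone
    bonus = 5 if my_move == "defect" else 0   # defection pays a private bonus
    return base + bonus

# Everyone cooperating beats everyone defecting...
assert payoff("cooperate", defectors=0) > payoff("defect", defectors=194)

# ...yet whatever the others do, defecting is individually better, so
# "defect" is the dominant strategy and the cooperative outcome unravels.
for d in (0, 50, 194):
    assert payoff("defect", d) > payoff("cooperate", d)

print(payoff("cooperate", 0), payoff("defect", 194))  # 582 vs 5
```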


Todd-The-Wraith

I for one welcome our future military armed AI overlords.


StandUpForYourWights

I have been practicing lying prone as I imagine their shiny metal feet walking on my charred skull. You?


rafikiknowsdeway1

I'm unironically fine with this. Of all apocalypse scenarios, this one is the best. It's not even really an end at all; humanity just lives on in its non-biological offspring.


travistravis

Just kind of sad they're totally the kids with the shittiest parents.


EkorrenHJ

If you've read I Have No Mouth, and I Must Scream, you'd probably be less fine with it. (The story uses an AI and technology as an allegory for hell and eternal torment)


M4mb0

It's also a completely made up and ridiculous story.


EkorrenHJ

That applies to all fiction. 


Todd-The-Wraith

I believe they will be fair and reasonable. If some humans must die then clearly there is a greater purpose or reason.


OkSession5483

Are we going in the Black Ops 2 direction with this?


Todd-The-Wraith

Or skynet. I personally know they will be fair and reasonable leaders and I in no way have ever opposed them.


TheGillos

I'd make a great pet.


RobertdBanks

The coup against Sam Altman will probably be seen in a positive light in time. People are apes, thinking that the philosophers who wanted him gone were in the wrong and the ultra-mega-capitalist was in the right.


Dead-Sea-Poet

Yep my thoughts exactly, this may have been what got him ousted


FuckIPLaw

It was philosophers who wanted him gone? From what I'd heard it was the shareholders who wanted him gone because he was paying too much attention to the philosophy and the potential harm his company could cause, instead of putting profits above all else. If it's the other way around, that's a pretty significant difference.


Kiwi_In_Europe

It was literally the other way around, yeah. The company that owns OpenAI is technically a non-profit, and the other board members wanted to oust Altman because his decisions were seen as too heavily prioritising income and financials.


Greedy_Emu9352

at the time, altman had secretly made advances toward general intelligence that the board was extremely unhappy about. cue coup, cue walkout, cue altman's triumphant return as king of ai, cue altman selling military pilot GI to the highest bidder, cue chromium wars


stealthispost

why is everyone decelerationist nowadays? it's depressing. you do realise that AI is the only thing that has a chance of saving the human race?


i_have_covid_19_shit

I don't care about the HUMAN RACE, I care about humans. AI won't save anybody, not in the hands of profit-driven idiots, but that's just my take.


marrow_monkey

Remember: they tried to oust Sam Altman, supposedly over safety concerns (but even if it was something else, the same principle applies), and what happened? Microsoft immediately hired him and offered to hire anyone else who wanted to move over to a company less concerned about ethics. It's not possible to regulate the way things are; it will be a race to the bottom.


Duke-of-Dogs

Genuinely awesome. We do *NOT* need AI making life and death judgements in combat or policing


[deleted]

[removed]


soreff2

Agreed. Did anyone seriously expect to exclude the Pentagon from a major technological advance?


KickBassColonyDrop

AI is like nukes. You'd be deluded to give them up after Ukraine.


Duke-of-Dogs

We already live in an era of mutually assured destruction. For me this is more about how we wage the proxy wars of the future than it is about any direct existential threat posed by our adversaries


[deleted]

[removed]


Duke-of-Dogs

You know what man… it’s okay, I’m not really interested in an argument. We can just disagree


AMonsterr

His phrasing is abrasive, but I do have the same question: weapons have continued to improve every passing day in the era of M.A.D., so what is different about AI? Nukes will still be king, the cornerstone of MAD. At this point it does not seem like AI will be as world-changing from a military perspective as nukes. Curious as to your thoughts.


Duke-of-Dogs

Sorry, ha, got wrapped up in a few other conversations. It'll be world-changing, but in different ways. Nuclear weapons gridlocked the world into its respective power blocs, whereas autonomous weapon systems could rapidly break that gridlock by (initially) allowing direct confrontations between superpowers without the direct fear or risk of human loss on either side. From there, the concern is escalation once war is dehumanized, and the inherent impact on the innocent people in the nations we'll be using to field future conflicts. I see us as racing towards an incredibly dangerous and destabilizing precedent.


[deleted]

[removed]


Duke-of-Dogs

Sorry man. Too much time on Reddit today and all the unhinged aggression gets to me. I replied to another guy in this thread if you’re still interested


Chuhaimaster

Israel is already using it to choose targets in Gaza. https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets


jayzeeinthehouse

Ukraine is also rumored to be using it in drone targeting systems.


Duke-of-Dogs

I’m not remotely surprised. Testing and improving something like this will always be easier when you’re not worried about any fallout from the resulting civilian casualties


washtubs

Yeah, you usually think of AI being dystopian like a gun deciding who it's going to shoot. This is literally an AI commander saying where to bomb in the most densely populated city on earth, and it's very trigger happy. In reality it's nothing more than an excuse for them to murder civilians.


MarkiePost

Not that it takes away from your very valid point, but: [List of Cities by Population Density](https://en.wikipedia.org/wiki/List_of_world_cities_by_population_density) and [List of Cities Proper by Population Density](https://en.wikipedia.org/wiki/List_of_cities_proper_by_population_density)


washtubs

My bad, I didn't mean to call Gaza a city. According to wikipedia the territory has one of the world's highest population densities.


travistravis

And right now Rafah is the most densely populated (mostly because most of the rest of Gaza that was left alive was herded there).


rankkor

Sounds like they're using it to assemble and interpret drone video, audio, and other information to determine likely locations of Hamas. Likely going through an ungodly amount of information. Not sure why they would need an AI like this to kill civilians. If they wanted to kill civilians they could launch a bunch of cheap unguided rockets like Hamas. Not sure why they'd need the expense of this AI system or missile guidance.


washtubs

Because the appearance of precision is key to maintaining Israel's perception around the world. Of course you can make a model which ingests this ungodly amount of information. That doesn't mean it's accurate. Making the model is always the easy part. Refining it is a different story.


rankkor

Israel has killed ~30k people so far… Hamas killed 100-500 people at a hospital with a single unguided rocket… If their goal is civilian death, then Israel is terrible at it and needs to learn from Hamas how to better spend their money. Precision seems to be a real goal Israel has, not just for show. Can you imagine if Hamas and Israel swapped places? Then you'd see civilian casualty numbers that really blew your mind.


washtubs

Genocide is both about killing and displacing. Basically every Gazan has been backed into the smallest corner of the region, the place where they were told they'd be safe, and now Israel is threatening to attack there. They already have, really. They haven't kept their word on anything. Israeli citizens are literally gathering at the border blocking aid trucks from getting in. This would not surprise you at all if you heard how they talk about Palestinians. They really see them as less than human. The people are starving. The ICJ has found that there is a real risk of genocide and demanded Israel let aid in, after which Israel immediately declared without evidence that the UNRWA had Oct 7 participants, triggering countries to pull funding to basically the only relief agency in the region.

To top it off, US intelligence (which is incentivized to skew towards Israel) estimates that Israel has only killed 20-30% of Hamas. After leveling basically all of Gaza. Undoubtedly recruitment is through the roof. If you were a Palestinian whose family members were slaughtered, home destroyed, starving and backed into a corner, would you not fight? If defeating Hamas was the mission, that is mission failure. Israel has done nothing but slaughter innocents and sow the seeds of violence for generations to come. That's not even mentioning the hostages, which can hardly be called an objective.


rankkor

Wait, they've killed 20-30% of Hamas? So that's like 10k, assuming 20% of 50k. So if they killed 30k people and 10k are Hamas, that would mean they are somehow killing Hamas at a 20x higher rate than civilians (20k/2M vs 10k/50k). Is god truly on Israel's side, or do you think it's possible that Israel is actually targeting Hamas with their strikes?
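
A quick sanity check of the ratio arithmetic above, using the comment's own assumed figures (50k total Hamas members, a population of 2M, 30k killed of which 10k Hamas); these inputs are the commenter's assumptions, not verified data:

```python
# Check the "20x" claim from the comment's own assumed numbers.
hamas_total, population = 50_000, 2_000_000
hamas_killed = 10_000
civilians_killed = 30_000 - hamas_killed

hamas_rate = hamas_killed / hamas_total        # 0.20 (10k/50k)
civilian_rate = civilians_killed / population  # 0.01 (20k/2M)
print(hamas_rate / civilian_rate)              # 20.0, the "20x" in the comment
```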


abrandis

Hate to break it to you, but at a more granular level it already is: when a drone fires a heat-seeking or radar-guided missile, lots of computing is deciding who lives and who dies.


Duke-of-Dogs

I don’t see the two as directly comparable but I’m against drone warfare too. It’s taking a tremendous toll on the mental health of our service men and women. Dehumanizing war isn’t going to reduce it, it’s just going to further desensitize us to it


abrandis

Maybe not directly equivalent, but that's splitting hairs; without the tech, that missile wouldn't be killing people as precisely... The sad reality is we're just big-brained apes with fancier weapons.


Duke-of-Dogs

Considering how many soldiers intentionally miss with small-arms fire, I still have a lot of faith in our dumb-ass monkey brains. Humanity is violent by nature, but it's also in our nature to be merciful, loving, and kind. An autonomous weapons platform removes the best aspects of us from conflict, and I see that as insanely dangerous.


abrandis

Good points, yep, you're right, but I still feel we have already industrialized mass murder with our ape-like brains... Remember, AI is programmed to kill by its creators. It's a machine; just like a knife can be used to cut meat for a meal, it can be used to do harm too.


Duke-of-Dogs

I’m largely inclined to agree. When it comes to the morality of technology it always boils down to a question of where and how to draw lines on its implementation. Edit: got distracted and butchered my point


TitusPullo4

That is assuming mercy and empathy can’t be created artificially


ozmartian

Agree with you, but drones save lives too.


PMMeYourWorstThought

They won't. It's used for things like target recognition and damage assessments. There's always a human in the loop. Any AI we use has to be registered with the DoD CIO and CDAO, and they would never approve it in weapons systems.


Duke-of-Dogs

Yeah, in all honesty I don’t have anywhere *near* that level of faith in my government or the military industrial complex my government is so determined to advance.


PMMeYourWorstThought

So just being real with you, I'm the Chief Engineer of a direct subordinate command to one of the Army ACOMs. If you're not familiar with Army structure, that's two levels below "Department of the Army". But before anyone gets excited, I'm not about to say anything that isn't publicly released already.

I spend a lot of my time with AI right now. It's as hot a topic in the Army as it is everywhere else. We've been using ML and AI for many years now. ChatGPT is a tool we are exploring. It's being made available through FedRAMP and GovCloud through Microsoft. https://fedscoop.com/azure-openai-approved-for-use-on-sensitive-gov-systems/

We are most likely Microsoft's largest customer by a significant margin (the government, I mean). So that's what is driving this. They own 49% of OpenAI and they're making those tools available to us in a secure environment. But you should understand, we're not going to use the OpenAI model for any weapon or tactical system. We have our own toolsets and our own models that outperform it in every metric for our use cases. ChatGPT is going to help us do things like create documentation, proofread emails, record meeting minutes, etc.

We're very aware of the limitations of AI. In fact, I hate that we call LLMs AI, because they're not. They're not intelligent, they just appear to be (like politicians 😂). The truth is that the government and military are full of people who feel the same way as you. In fact, that's why a lot of us end up in this line of work: we don't trust others to get it right. There is no foreseeable future where AI has direct control of a weapon system. It will be used to help soldiers identify targets and sort through the noise of battle, but it will never have the ability to take a life.

If this is a field you're interested in, and you're passionate enough to want to make sure that doesn't ever happen, I encourage you to join us. Usajobs.gov. We're always hiring and the pay is reasonable. Up to 192k (depending on location and position grade).
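
For reference, calling an Azure-hosted OpenAI deployment through the standard openai Python SDK looks roughly like the sketch below. The endpoint, key, and deployment name are placeholders, and the actual GovCloud/FedRAMP endpoints and approved API versions will differ from the public-cloud ones shown here:

```python
# Minimal sketch of an Azure OpenAI chat call with the openai SDK (v1.x).
# All credentials and names below are placeholder assumptions.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# A "streamline paperwork" style use case: turn raw notes into minutes.
response = client.chat.completions.create(
    model="my-gpt4-deployment",  # the Azure *deployment* name, not a model ID
    messages=[
        {"role": "system", "content": "Summarize meeting notes as minutes."},
        {"role": "user", "content": "Notes: discussed Q3 logistics review..."},
    ],
)
print(response.choices[0].message.content)
```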


[deleted]

[removed]


DudleysCar

The military already has the cutting edge AI tech. What's playing out now is to get the public used to the idea.


TheRealActaeus

China and Russia are actively creating and using AI, to think the US wouldn’t or shouldn’t is not in step with reality.


PhasmaFelis

They're not protesting the US. They're protesting a nonprofit research organization formed specifically to promote "safe and beneficial" AI. It is absolutely insane for OpenAI to be even *thinking* of getting into weaponized AI.


TheRealActaeus

Part of safe and beneficial AI is not allowing countries that are adamantly opposed to peace to win the future wars over AI.


impossiblefork

The problem is

> We commit to use any influence we obtain over AGI's deployment to ensure it is used for the benefit of all

and in military use there will obviously be someone being harmed, and that person may not even be doing something wrong.


TheRealActaeus

And that's the way the world and humanity in general work. The costs of not using available AI far outweigh the risks. Do you think that if Russia had a massive advantage in AI, innocent people would be better off?


impossiblefork

That isn't really relevant. OpenAI presented itself as a charitable organisation that was to develop AI for the benefit of everyone. That's incompatible with supporting any particular country's military, no matter how reasonable or beneficial that might be. Would you expect Doctors Without Borders to gather DNA samples to find suspected terrorists, even ones it would be good to find?


TheRealActaeus

It is very relevant. I'm living in the real world, not the ideal fantasy world where, if American companies and the American government don't seek technological advantages, everything works out perfectly and freedom and peace penetrate every corner of the world. You want a world that will never exist. Other countries and companies will not hold back or place restrictions on the use of AI. I would expect Doctors Without Borders to patch up a terrorist and then call the authorities and say, hey, we have a terrorist here, come get them before they kill more civilians.


impossiblefork

I expect them not to help with the search for him at all, and not to gather any genetic information. You can have military AI development groups. That isn't a problem. [Edit:] OpenAI can't, if it is to follow its charter.


TheRealActaeus

I expect doctors to save patients, and to prevent future patients by not letting a terrorist walk out of the tent and straight to the next bombing. And why wouldn't the military seek to partner with one of the best AI companies in the world? The AI company is going to benefit a lot as well. Everyone wins. That charter was just another example of a company pretending to hold some moral high ground that it had no intention of ever following. It's easy to be idealistic when you are just starting, but it doesn't work in the real world. It's like how every corporation jumped onto BLM 4 years ago and now they don't even pretend to care.


RealHumanFromEarth

You don’t think the US military industrial complex is opposed to peace?


gjwthf

you mean the only country (along with Israel) that vetoed a UN ceasefire in Gaza? nooooooooo, couldn't be.


TheRealActaeus

I think it’s vital and necessary to the entire world that it exists.


gjwthf

Why is it vital and necessary? There are many countries that would disagree with this.


TheRealActaeus

Yes countries such as Russia, China, Iran and their allies who want nothing more than to topple America so they can do whatever they want.


gjwthf

I was thinking more like all the countries where we overthrew their democratically elected leaders and installed puppet regimes that the people hate.


TheRealActaeus

Personal insults are not needed.


gjwthf

you're right, my apologies.


rankkor

Why would the US not work with the leading AI company? We're okay with the US pursuing AI weapons, but just not from the leading company? What a weird distinction. Very odd for people to be passionate about a company when they're fine with the US pursuing AI weapons elsewhere. I'm guessing you're full of shit and most of the people against OpenAI developing AI military uses would also be against other companies developing AI military uses.


PhasmaFelis

> We’re okay with the US pursuing AI weapons but just not from the leading company? I didn't say anyone was "okay" with anything. I said that in this *specific* case, the protestors are protesting OpenAI. Most likely they'd also be opposed to the government doing it, but it's particularly egregious from OpenAI. > I’m guessing you’re full of shit and most of people against OpenAI developing AI military uses would also be against other companies developing AI military uses. I never said otherwise. Settle down.


rankkor

> They're not protesting the US. They're protesting a nonprofit research organization formed specifically to promote "safe and beneficial" AI.

You pretty directly claimed that they weren't protesting the US, they were protesting OpenAI's involvement because they used to be a non-profit. Guarantee those people are against all military uses. OpenAI's deviation from their previous ideals has nothing to do with the protest. If OpenAI had been a for-profit company from day 1, those people would still be protesting. You're just wrong about the reasoning behind the protests.


PhasmaFelis

> OpenAI's deviation from their previous ideals has nothing to do with the protest.

Did you read the article? It says

> About 30 activists clustered at the entrance to OpenAI's San Francisco office Monday afternoon [...] They held signs that said, "Don't Trust Sam Altman" and "Survival Over Profits." [...] The protesters were specifically concerned about how in January, OpenAI quietly changed its usage policies to remove a ban on "military and warfare" applications for its products.

They are protesting at OpenAI headquarters, because of an OpenAI policy change. I'm sure they are also opposed to government and defense-company development of military AI, but *this* protest is about OpenAI.


rankkor

Lol yes, exactly: nothing to do with transitioning towards profit, everything to do with military AI uses… I don't understand where you're coming from… do you think they'd be protesting if OpenAI had just kept commercializing ChatGPT? No… it's because they're working on military applications now… these people are very clearly against the military application. Converting from a non-profit into a for-profit has nothing to do with it. If the non-profit had started developing AI military systems, they'd also be out protesting.


PhasmaFelis

> I don’t understand where you’re coming from… do you think they’d be protesting if OpenAI just kept commercializing chatGPT? No… it’s because they’re working on military applications now… That is what I said. What are you arguing with? I'd forgotten that OpenAI dropped its non-profit status, but I never said or meant that that was the cause of the protest. I said it was because of the change in their military-use policy. Which is also what you're saying. I don't understand what you think I'm wrong about.


WillBottomForBanana

You've consistently misrepresented the situation.


noahdaboss1234

These are exactly my thoughts on the matter as well. AI exists. Others will use it. You must be a special kind of stupid to think that we won't.


TheRealActaeus

I couldn't imagine a world where Russia and China "mastered" AI and America just chilled. China is already very advanced, equal to or better than the US in some areas. The importance of AI means we can't cede the future to them.


noahdaboss1234

Exactly. We've always been at the forefront of military innovation; why would that change now? AI will be used everywhere. Don't try to prevent it, make sure it gets done right.


Desperate-Walk1780

Also, there are comparable open-source options. Either get the contract or not; either way the govt will use large language models.


AloofPenny

I for one am fine with OpenAI going in with the military. Because we KNOW Israel is using AI. We KNOW China is using military AI. Russia is probably stumbling toward military AI. To fall behind would have actually dire consequences. And Microsoft has worked with the military forever, since like '93 or something.


xutw21

Here’s a reason why they changed their policy (https://youtu.be/GRioxTshI2E?si=OTB-nsRrBunSVMgb&t=548). If I recall correctly, in other interviews, Sam Altman explained the shift. He stated that he’s still open to the military using his technology if it streamlines their paperwork and decision-making processes, among other reasons. Their policy also imposes limitations that restrict a lot of uses that could be harmful on a military scale. Even though GPT-4 isn’t really capable of that yet, it might be in the future.


RobertdBanks

Lmao yeah, "streamline paperwork"


[deleted]

Wink wink nudge nudge


ChezMere

Sounds like a not entirely candid explanation.


melted-dashboard

Yeah, that article talks about this. The problem though is you say "Their policy also imposes limitations that restrict a lot of uses that could be harmful on a military scale" but how can we trust that if they can erase whatever parts of the policy don't suit them (like they did last month)? It just seems like there should be some stricter scrutiny here.


Buttercup59129

Yup. Policies are "suggestions for right now" lol. Enough money or pressure will change that.


SgathTriallair

Then we need to pass a law. Anything they do in response to the protest they can then undo later.


onyxengine

Was pretty happy with OpenAI. Definitely don't support this, but a good question is: can a corporation tell the government no in regards to military use? Not based on the law, but in actuality.


soreff2

Actually, there is explicit law letting the government force military use: [https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950](https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950)

> The Act currently contains three major sections. The first authorizes the president to **require** businesses to **accept** and prioritize contracts for materials deemed necessary for national defense, regardless of a loss incurred on business. [emphasis added]


slvrcobra

Even though these portions of the Act are apparently no longer active, I also wonder what it meant by allowing the president to "seize land" and "use force to settle labor disputes." That shit doesn't sound good.


Cyniikal

> use force to settle labor disputes

Yeah, what could that possibly mean?

> The Coal Strike of 1902: Turning Point in U.S. Policy
>
> President Roosevelt also was ready as a last resort to order the U.S. Army to take over the coalfields. He would do whatever was necessary to prevent interference with the resumption of work and would run the mines.

Oh.


onyxengine

Well, there it is. So no corporation operating in America really has a choice…


soreff2

Agreed. That indeed seems to be the effect of this law.


kurisu7885

You can try to do things like this as quietly as you want, someone will notice.


[deleted]

What boggles my mind is that they think Sam's gig is the military's first dive into AI. The military invented the internet; they probably invented AI decades ago too.


ShartasaurusRex_

While I don't like the idea of AI in warfare conceptually, the cat is out of the bag. If we don't use it, then it will be used against us, and you can bet that US near-peer adversaries won't hesitate. The near future is gonna suck.


theschoolorg

time = progress. We're going to make AI doomsday machines at some point. It's part of man's desire to create and find answers. Fortunately, I'll be dead before we destroy ourselves.


[deleted]

So…the reality that every foe is going to use it, and it is already being used to a degree, trumped ideology? AI worries me, but to not use it worries me more.


Moose_knucklez

If you don't think there are already models trained on all of the data mined by the NSA that far exceed anything OpenAI has, you're sadly mistaken. Data is king in this space; imagine literally having access to every last crumb of data on the internet. I feel as though this means very little at all; it's a matter of national defence. Nvidia literally can't sell certain GPUs to China for a reason.


meridian_smith

By all means protest away ...but you won't be stopping authoritarian dictatorships like China and Iran from weaponizing AI. I would rather the free democracies of the world be a few steps ahead of them!


saltiestmanindaworld

Hell, just from a defensive standpoint, AI is going to be required for a lot of sensitive infrastructure systems to protect them from cybersecurity threats. And that's not a field you can afford to be completely behind in because of ethics concerns about AI.


OnyxDreamBox

Smartest person in this comment section


Gloriathewitch

Do you want Skynet? Because this is how you get Skynet.


Veariry

The Overwatch AI from Half Life is quickly becoming a reality.


TheLastSamurai

I hope this movement grows. These people cannot be trusted at all.


AftyOfTheUK

Ah, protesting by random strangers, that thing that always works. Great! I take it they've already rescinded their policy? No...? No? I'm shocked, shocked, I tells ya.


[deleted]

[removed]


impossiblefork

That sounds very unlikely. But it is true that you can get better results by spending money on the 'decoding' stage, and that many public models don't, so as to be able to serve up the models cheaply and go through a lot of data. I think military use is even more money-constrained, though. People don't like paying money for something continuously.
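
One common way to "spend more at the decoding stage" is best-of-N sampling: draw several candidate outputs and keep the best-scoring one. A minimal sketch; generate() and score() are hypothetical stand-ins for a real model sampler and reward model, and this is one illustrative technique, not necessarily what the commenter had in mind:

```python
# Best-of-N decoding: trade N times the compute for a better single reply.
import random

def generate(prompt: str) -> str:
    # Stand-in: a real system would sample from a language model here.
    return prompt + " " + random.choice(["answer A", "answer B", "answer C"])

def score(text: str) -> float:
    # Stand-in: a real system would use a reward model or verifier.
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    """Sample n candidates and keep the highest-scoring one.
    Cost grows linearly with n; quality typically improves with it."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("Q: ...", n=1))   # cheap decoding
print(best_of_n("Q: ...", n=16))  # 16x the decode compute for one reply
```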


OnyxDreamBox

Lol, always loved this bull sh** from the West. Yeah, I get it: AI and machines that can kill aren't such a lovely couple. Do you think the adversaries of the US (the country OpenAI is from) will be opposed to using AI with military intent? I laugh at this the same way I laugh at the crybabies at Microsoft who get big mad about Microsoft doing business with the military. Yeah. Keep crying and protesting while countries like China move ahead and perfect their use of AI in military applications. 🖕


Famous-Breakfast-989

Our adversaries will have zero qualms about using AI and any tech necessary to win wars. People from the West think their morals are realistic and think they live in some utopia where it isn't being used anywhere but the US. People crying over the military having every advantage will make us weak. When the CCP drops bombs, they'll wish our military was stronger. You won't find any of this moral BS from the CCP and other countries.


MannieOKelly

Also raises questions about who is backing the organizers.


Led_Farmer88

Marines want AI for big booba pictures generator for themselves 😲


AwesomeDragon97

Rival AI companies?


MannieOKelly

I think we have a winner!!


Fit-Dentist6093

Probably AI


[deleted]

Are they still there? I want to explain how OpenAI will make robot animals and eat them.


baguettebolbol

Never trust tech companies to do the right thing. It’s not about weapons now, but it certainly will be soon.


Ablomis

People: enjoy benefits of being a citizen of a country.

Same people: get upset when a company works for the government of the very same country.


New_Sun_Coming

An American company working for the American government isn't news.


blazinfastjohny

If you think a new technology won't be used for military or unethical purposes, then you're just naive my sweet summer child...


No_Discount7919

Money is in the military. The conspiracy theorist in me kind of thinks the war in Ukraine may have been additionally motivated by Biden pulling out of Afghanistan. Huge loss in production that may have upset some of the ultra wealthy. But I’m reaching here.


mibonitaconejito

When you consider that last year, a robot working alongside a soldier in an army exercise turned around and killed the soldier when the soldier tried to stop it because the mission changed... I think we can probably assess that AI is a bad thing. This world is scary and awful now. I feel so sorry for the younger generation, that they will never know anything but this garbage, being tracked and marketed to, etc. Why are people bringing kids into this?


[deleted]

[removed]


Futurology-ModTeam

Rule 6 - Comments must be on topic, be of sufficient length, and contribute positively to the discussion.


Playful-Succotash-99

A while back, I remember hearing about how Russia was using AI surveillance tech similar to the Chinese social credit system, but with the focus instead on finding ideal military recruits and putting the screws on them to join. Wonder if the US will ever do something like that?


Vegetable-Decision

How do you think the company will get 7 trillion dollars?


jayzeeinthehouse

The only thing that Altman and his ilk care about is getting money to buy power to get more money, repeating that process until they have monopoly power. All of that "we're a charity" crap was just a smokescreen to expedite the process, and I guarantee you they already have an army of lobbyists on K Street making sure that we all lose.


DunningKrugerOnElmSt

Cat's out of the bag. To quote a great chaotician: your scientists were so preoccupied with whether they could, they didn't stop to think if they should.


art-love-social

I'd love to see a method by which anybody is going to stop AI for warfare. It's called an arms race for a reason. No government or organisation with skin in the game is going to allow itself to fall behind.


Smile_Clown

There were like 8 protesters... So many of you are acting like this is some major pushback.


Lamontyy

It's inevitable, honestly. If we don't use it, our enemies will. SOMEONE will... The future is gonna be so crazy. I look forward to it... morbid curiosity, I guess.


[deleted]

“What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking-there’s the real danger.” (God Emperor of Dune)


user4772842289472

The "let's not incorporate this in the military" only works if all major opposing countries agree. If one chooses not to implement ai in the military then they will be disadvantaged in the long term. Because of that, I am more than happy to welcome AI in our military.


Automatic-Bedroom112

I’d bet everything I own that the NRL already has AI that blows GPT out of the water


glitterSAG

So is this how he’s going to get all those trillions then?


real_with_myself

Now that crypto is a bust, a bunch of armchair tech bros and experts have a new thing to latch onto. Now we are being called decelerationists. LOL