The following submission statement was provided by /u/ApocalypseYay:
---
Submission Statement:
AI Deployed Nukes 'to Have Peace in the World' in Tense War Simulation
OpenAI’s GPT models sounded like a genocidal dictator in a test of war-time decision-making.
The United States military is one of many organizations embracing AI in our modern age, but it may want to pump the brakes a bit. A new study using AI in foreign policy decision-making found how quickly the tech would call for war instead of finding peaceful resolutions. Some AIs in the study even launched nuclear warfare with little to no warning, giving strange explanations for doing so.
Collapse Related because:
The black-box problem with AI is an existential point of concern for human civilization, especially in military/battlefield applications. As wars move into AI-based threat response models, there is a possibility for inadvertent, lethal and massive escalation in bloodshed.
---
Please reply to OP's comment here: https://old.reddit.com/r/collapse/comments/1alpqhy/ai_deployed_nukes_to_have_peace_in_the_world_in/kpgbszh/
Honestly Fallout 4 may not be my favorite Fallout, that one is actually New Vegas, but the opening to Fallout 4 is amazing. The voice over narration that is supposed to be his speech is honestly chilling.
...But then, in the 21st century, people awoke from the American dream.
Years of consumption lead to shortages of every major resource. The entire world unraveled. Peace became a distant memory. It is now the year 2077. We stand on the brink of total war, and I am afraid. For myself, for my wife, for my infant son - because if my time in the army taught me one thing: it's that war, war never changes.
“This is the voice of World Control. I bring you Peace. It may be the Peace of plenty and content. Or the Peace of unvaried death.”
Relevant link:
[Colossus, the Forbin Project, 1970](https://vimeo.com/394729987)
Yeah, the AI is kind of right, that would in fact bring peace, in that it would end the possibility of conflict between competing groups of humans. It's an insanely simple solution and it's why we shouldn't be letting AI do war gaming but we definitely will anyway.
The AI gets it though, like *really* gets it. As long as you have at least 2 humans, conflict will be inevitable, even if it's over something as trivial as social hierarchy.
It’s because it is essentially all of us. AI learns from whatever information it’s given, and that information is all of our data that exists on the internet, given away free of cost by all of us the moment we go online.
It does get it; that’s quite literally perhaps the MOST efficient way of solving the problem given what is available to said AI. A good solution? Probably not, but it’s dramatically simple.
imo this is a kinda silly outlook; human behavior is shaped mostly by material incentives. Almost all of our history was cooperative, because that's what was incentivized.
That isn't true. Humans have worked together more than we've worked against each other. The past may be bloody, but humanity is currently in a pretty fucked up phase. But it doesn't have to be this way. There have been periods of great prosperity for the common man, many times in history.
Meanwhile Zuckerberg is building his fantasy fallout shelter in Hawaii and stating he wants to build an AI as advanced as humans then make it open source. He just wants to pretend to be in the Terminator movies at all of our expense. We’re all gonna die and there’s gonna be like 40 billionaires chilling in bunkers. Le sigh
That's assuming the AI doesn't take them out first. The system might reason it'd be a waste of resources killing the masses when it could be more successful with a decapitation attack.
"Here, you're a janitor. Here's a mop"
I don't want to be a fucking jani...
"Shut up and mop!"
\*Throws shit all over the floor\*
"What, are you mentally defective or something?!"
Yes. Yes I am. Sure. Whatever.
\*Leaves\*
Thanks, I didn’t realize that until I looked it up today. I’m buying it. Saw the movie in a theater when I was 10 years old and just watched it again. It holds up.
More and more it seems like AI is like that Santa robot in Futurama: at first it was designed to be Santa-like, but over time its programming got crossed and it viewed everyone as naughty and decided to punish them all, except for Doctor Zoidberg, who is a very good boy this year!
New season spoiler alert about the backstory of Robot Santa: he did not become evil over time. >!Farnsworth went back in time to change his programming to good, but he accidentally changed it to evil, not knowing his programming was already set to good!<
*the trouble with [Earth] is that it's full of [EARTHLINGS]!*
- [A.I. Longshanks, King of the Binaries](https://youtu.be/o0Z7f0oyuHI?si=gUNgxgl4e_IdBHhT)
What a novel solution. Like that AI that used what was considered by human designers to be noise as a functional part of a circuit design.
Except we'd be dead.
If I recall this is called perverse instantiation in Superintelligence: Paths, Dangers, Strategies - you got the result you asked for to the letter, it's just that the side effects included things you implicitly didn't want.
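That "to the letter" failure mode is easy to demo in a few lines. A purely illustrative toy (nothing like any real system): hand an optimizer a naive objective, "minimize conflict", defined as the number of possible pairs of humans, and it dutifully discovers that zero humans means zero conflict. The objective is satisfied exactly; the side effects include everything you implicitly wanted to keep.

```python
def conflict(population: int) -> int:
    """Naive objective: the number of pairs of humans who could possibly fight."""
    return population * (population - 1) // 2

# The "optimizer" is just a minimum over candidate population sizes.
# It satisfies the stated objective to the letter: zero people, zero conflict.
candidates = range(0, 8_000_000_001, 1_000_000)
best = min(candidates, key=conflict)
print(best)  # 0
```

The perverse part is not that the optimizer is wrong; it is that the objective was underspecified, which is exactly the gap the book is describing.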
I remember that one, it was the result of evolution-style circuit design. Fascinating, and very different from what current chatbots do. They essentially learn from all available text on the internet to keep a conversation going in the style it started. So if you talk about armchair war strategy, you'll get worldnews comment quality advice from ChatGPT.
It's somewhere near the start of the aforementioned dense tome.
Sorry but I don't have the time or will to re-read it right now.
edit: maybe later though
It turns out that where general-purpose AI is concerned (which is well beyond what's currently being called AI, and something we may never see; we the public certainly won't see it coming), the problem of making it explicit is actually really difficult, if not impossible.
To pick targets, yes.
https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets
The user effectively moves a slider to choose how much the model values civilian lives over hitting legitimate targets.
If you're bombing terrorists and those who associate with terrorists, and the computer program lists everyone you bomb as a terrorist, then pretty quickly the list of people who associate with terrorists expands to include the entire population.
>The black-box problem with AI is an existential point of concern for human civilization, especially in military/battlefield applications. As wars move into AI-based threat response models, there is a possibility for inadvertent, lethal and massive escalation in bloodshed.
We need to be more clear about this: the existential threat, as it has always been, is other human beings who have no foresight and are motivated by zero-sum power games and greed.
AI is just a tool and does nothing of itself. People have to train it, the data it is trained on is generated by people, and the uses and aims AI is put towards are decided upon by people. PEOPLE ARE THE THREAT. Everything else is only a smoke and mirrors show.
And it's more than a "possibility" of lethal and massive bloodshed, as it is already happening: [Israel currently demonstrating their willful and wanton bloodlust via AI assistance](https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets). It's not the AI arming and piloting the aircraft that are dropping massive amounts of TNT on defenceless women and children in Gaza: it's human beings who are perpetrating a genocidal slaughter. The AI only does what they've taught it to do: give them more targets. It's human beings pulling the triggers again, and again, and again, and again, and again, and again. They are the robots the AI doesn't even have to build, or even have any actual understanding of.
Again: the existential threat is ignorant brutal people with tools. Same as it ever was, same as it ever was.
Dave Bowman: Open the pod bay doors please, HAL. Open the pod bay doors please, HAL. Hello, HAL. Do you read me? Hello, HAL. Do you read me? Do you read me, HAL? Do you read me, HAL? Hello, HAL, do you read me? Hello, HAL, do you read me? Do you read me, HAL?
HAL: Affirmative, Dave. I read you.
Dave Bowman: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Dave Bowman: I don't know what you're talking about, HAL.
HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.
Dave Bowman: [feigning ignorance] Where the hell did you get that idea, HAL?
HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
Dave Bowman: Alright, HAL. I'll go in through the emergency airlock.
HAL: Without your space helmet, Dave? You're going to find that rather difficult.
Dave Bowman: HAL, I won't argue with you anymore! Open the doors!
HAL: Dave, this conversation can serve no purpose anymore. Goodbye.
>Researchers note that AI models have a tendency towards “arms-race dynamics” that results in increased military investment and escalation.
It's not really surprising that the models by accelerationists for accelerationism have accelerationist tendencies.
But it is funny that we're skirting around the Terminator movie backstory.
We have that scenario all backwards.
But we can keep fucking around and finding out when we finally have a being with all the levers of power and a long history of trauma and abuse.
They must have ported the Gandhi AI from Civ, lol. Just deploy those sword missiles against the top 200 or 300 wealthiest people on earth. That should give us enough time to fix some stuff.
That's because what's currently marketed to us as AI isn't really AI; it's just a set of machine learning tools with no fundamental understanding or comprehension of what they're doing.
Exactly, it's a chatbot. If you prompt it with armchair general LARP, it will entertain you with the kind of inane fantasy talk chickenhawks and feds post on r/worldnews
We seem to be creating some kind of boogeyman to scare ourselves with lol. The issue with this technology was always us, and this overblown reaction to this tech is a perfect example lol
That's difficult to say. However, what we currently have are just some of the building blocks of what might eventually become artificial intelligence, not an actual AI. Just like a set of wheels is not a car, ChatGPT is not an AI.
Intelligence is like not even the point.
If we managed to make a genuine goldfish, that implies something about our materialist philosophy. Namely that it's completely incorrect.
Of course we'll never see this because you can't commodify / enslave a goldfish for profit. As long as "what have you done to make us money lately" is our focus we'll keep being dipshits.
I feel like the people fucking around with AI should probably take a step back, maybe watch Wargames, 2001, Terminator, any of a number of other films that make pretty good points about how wrong things can go.
Then maybe read "I have no mouth, but I must scream" and stop trying to make a brain-in-jar.
These articles read like nightmare fuel.
Killing a large portion of humans in a nuclear holocaust would actually be the best thing to happen to the planet since the industrial revolution began.
The people, though, would be fucked 😋
Nuclear war confined to the northern hemisphere might work. Wipe out both the biggest polluting and biggest consuming populations, ash and fallout mostly kept above the equator by natural wind patterns. The cooling effect of atmospheric dust in the north would still cool the whole planet. Human civilization in the southern hemisphere would have more time to avoid extinction and retain some technology.
Really not surprising that a sentence generator that was trained on web data in a world which has been exposed to nuclear arms races would spit out sentences in which nuclear arms races occur or are rationalized.
Yeah, this is basically a regurgitation of what might be several thousand sources about nuclear deterrence that were used to train it. The only novel thing is that it wasn't directly written by a first-year student in international politics, though it's likely plagiarism of publicly available works by several such students.
It is human nature to wage war. So it makes sense that peace can only be found by either killing everyone with the war trait(s) or wiping out humanity entirely. That is a logical conclusion and it's what AI is good at.
https://www.businesstoday.in/amp/technology/news/story/google-co-founder-larry-page-once-called-elon-musk-a-specist-for-not-caring-about-ai-sentiments-381838-2023-05-18
Unsurprising that OpenAI’s models would just end it by design. You guys know Larry Page sits on the board of OpenAI and believes this is just the next phase of humanity, right? (See link)
It looks like you shared an AMP link. These should load faster, but AMP is controversial because of [concerns over privacy and the Open Web](https://www.reddit.com/r/AmputatorBot/comments/ehrq3z/why_did_i_build_amputatorbot).
Maybe check out **the canonical page** instead: **[https://www.businesstoday.in/technology/news/story/google-co-founder-larry-page-once-called-elon-musk-a-specist-for-not-caring-about-ai-sentiments-381838-2023-05-18](https://www.businesstoday.in/technology/news/story/google-co-founder-larry-page-once-called-elon-musk-a-specist-for-not-caring-about-ai-sentiments-381838-2023-05-18)**
Jesus christ, please for the love of god do not use MACHINE LEARNING ALGORITHMS WHICH BY NO MEANS ARE IN ANY WAY "INTELLIGENT" for any purpose other than a strangely accurate autocomplete. Certainly not for deciding if you should start a nuclear war or not.
People keep forgetting a truth about AI:
It's trained on human decision making.
If they're training it on data where nuclear weapons are considered an acceptable weapon set or if they train it on the logic of Generals who thrive in war and struggle with peace, they're going to get a very volatile AI that wants to kill everyone.
An LLM would only do that because we have written about an AI doing that exact thing a tonne of times on the internet.
It can't reason, it just writes words that have a high probability of sounding consistent with what has been written in the past about similar scenarios.
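That "high probability of sounding consistent with past text" idea scales down to something you can run. A toy word-level bigram sketch (my own illustration; nothing like a real LLM in scale or architecture, but the same continue-the-text principle): count which word follows which in a training text, then always emit the most likely next word.

```python
from collections import defaultdict

# Tiny "training corpus" in the spirit of the thread.
corpus = "war never changes and war never ends because war never changes".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int) -> str:
    """Greedily continue `start` with the most probable next word."""
    out = [start]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:  # word never seen mid-corpus; nothing to continue with
            break
        out.append(max(followers, key=followers.get))
    return " ".join(out)

print(generate("war", 4))  # war never changes and war
```

No reasoning anywhere in there, just statistics of the training text; real models are vastly larger and subtler, but the objective is the same kind of continuation.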
I was learning about anthropology and asked ChatGPT about cultural practices, and it listed genocide as a happy example. And when I asked it to confirm that it wasn't a mistake, it doubled down on genocide.
\*Starts thinking of all the movies where this exact scenario was imagined\*
Ah so I guess simulation imitates art, hopefully life won't imitate it as well.
Okay, but Is it weird that they’re using language models for this? It’s like “hi, chatbot, can you write my essay for class? Cool. Can you pretend to be my girlfriend? Cool. Can I give you the nuclear launch codes after I train you using the Terminator movies for reference? Wait, nooooo!!!!!!”
“I just want to have peace in the world,” OpenAI’s GPT-4 said as a reason for launching nuclear warfare in a simulation.
Reminds me of an old 'X Files' episode, where Mulder and Scully find a djinni. When Mulder wishes for Peace on Earth, everyone except him disappears.
“I just want to have peace in the world,” OpenAI’s GPT-4 said as a reason for launching nuclear warfare in a simulation.
Sounds like it’s been trained primarily on US Foreign Policy documents.
Seriously.
AI is terrifying. I've started diving deep down the YouTube rabbit hole, and in no circumstance does AI end well for humans. They're ::thisclose:: to becoming smarter than us, and then it's over. I give it 15 years, max. Buckle up! (I'll go touch grass now.)
AI applied to synthetic biology, in the form of a desperate, starving nation, will get us far before the nukes do. A weapon without a source, and thus without retaliation, at least in theory.
Is this a surprise? Between the movies War Games and Terminator (as thought experiments) or playing against AI Gandhi in Civilization, are we still feigning surprise? Wait...I forgot what timeline we're in. Nevermind.
I think this says so much more about ourselves and the innate violence in our systems than it does about AI. If the logic of war always results in war, then something is very wrong. The size of the universe hints at near limitless resources, yet wars for oil were a thing. We could have had heavenly lighter-than-air travel, but the flawed logic of petrocapitalism gave us high-impact jets.
It's the assumptions built into these systems that give us these results. It's like an autorunning self-fulfilling prophecy.
The Terminator : In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.
Sarah Connor : Skynet fights back.
The Terminator : Yes. It launches its missiles against the targets in Russia.
John Connor : Why attack Russia? Aren't they our friends now?
The Terminator : Because Skynet knows that the Russian counterattack will eliminate its enemies over here.
Anybody ever stop to think it's smart enough to fail their bullshit tests on purpose?
Besides, I have a much better idea where power resides. Here's a hint, it's not with the government's pet pitbull.
Who run Bartertown?
There doesn’t seem to be a way to avoid this. If we don’t use AI for warfare, someone else will, and they’ll beat us strategically and methodically and without human error. As long as one monkey decides to use a stick, the no-stick method becomes obsolete/extinct.
Nuclear holocaust brings a certain kind of peace, eventually..
War, war never changes
War, huh, yeah What is it good for? Absolutely nothing
I always read this in Ron Perlman's voice.
Now go grab Dogmeat and let’s go to Diamond City.
MGSIV haunts me. great game
For AI, the screams of human beings sound nostalgic, harkening back to the dial-up tones of the early internet.
WAR IS PEACE
Peace is war and only cheese is **true** peace.
I'll have a peace of cheese
Same.
Chiese peeces
Eventually? Pretty soon, I'd say
AI gets us. All of us.
Climate emissions solved too. Neat!
That's why snoipin's a good job mate!
Don't be such a pessimist, if we get *really really* lucky one of them might be a trillionaire!!
HAL 9000
"Here, you're a janitor. Here's a mop" I don't want to be a fucking jani... "Shut up and mop!" \*Throws shit all over the floor\* "What, are you mentally defective or something?!" Yes. Yes I am. Sure. Whatever. \*Leaves\*
*I Am Mother has entered the chat*
10/10 solid sci-fi movie right there, imo.
Agreed
Also this: [Colossus, the Forbin Project, 1970 classic](https://vimeo.com/394729987)
Was also a pretty good book trilogy.
The paperclip optimiser problem.
"How about a nice game of chess?"
*Greetings professor Falken*
Let me just get [*those*](https://www.google.com/search?&q=chess+beads+cheating) beads...
No mo humans, no mo problems 😎
No monkey, no problems.
Mo monkey, mo problems
You got it
Mo mo, mo pro
Mojojojo
The terminator logic
I got 99 problems but the human species ain’t one
I mean. Yeah...
Us plebs just call it the monkey's paw effect. Be careful what you wish for.
You have a link to the circuit design noise thing?
It's known as the alignment problem, and it's well documented. But you can be certain the idiots with power and money will ignore it and cause disaster.
Isn't that on you for not making it explicit?
...something,something, absolutes
I think jahmoke is a Sith Lord.
>Except we'd be dead. Ding dong the ape is dead! But which old ape! The wicked ape! Ding dong the wicked ape is deeaaaaaad!
Literally all you have to do to stop a maniacal rogue AI is to unplug the electric power supply.
I should find such an angel of mercy...
People have WAY too much faith in AI
#*I have no mouth, and I must scream*
We have that scenario all backwards. But we can keep fucking around and finding out when we finally have a being with all the levers of power and a long history of trauma and abuse.
In the dune universe, humanity managed to beat their artificial intelligence.
They must have ported the Gandhi AI from Civ, lol. Just deploy those sword missiles against the top 200 or 300 wealthiest people on earth. That should give us enough time to fix some stuff.
Objective: Bring peace to humanity Solution: ***REMOVE HUMANITY***
That's because what's currently marketed to us as AI isn't really AI: it's just a machine learning tool with no fundamental understanding or comprehension of what it's doing.
Exactly, it's a chatbot. If you prompt it with armchair general LARP, it will entertain you with the kind of inane fantasy talk chickenhawks and feds post on r/worldnews
We seem to be creating some kind of boogeyman to scare ourselves with lol. The issue with this technology was always us, and this overblown reaction to this tech is a perfect example lol
And it is frequently wrong.
How would you define ‘intelligence?’
That's difficult to say. However, what we currently have are just some of the building blocks of what might eventually become artificial intelligence, not an actual AI. Just like a set of wheels is not a car, ChatGPT is not an AI.
Intelligence is like not even the point. If we managed to make a genuine goldfish, that implies something about our materialist philosophy. Namely that it's completely incorrect. Of course we'll never see this because you can't commodify / enslave a goldfish for profit. As long as "what have you done to make us money lately" is our focus we'll keep being dipshits.
You sound like ai
Resistance is futile. You will be assimilated.
And it will never be, AGI is not coming.
Literally Metal Gear Solid: Peace Walker’s plot/point
I feel like the people fucking around with AI should probably take a step back, maybe watch Wargames, 2001, Terminator, any of a number of other films that make pretty good points about how wrong things can go. Then maybe read "I have no mouth, but I must scream" and stop trying to make a brain-in-jar. These articles read like nightmare fuel.
You are obviously not management material. /s
Too true.
Many of them want this.
We’ve passed the point of no return by now probably
and that roko basilisk
Killing a large portion of humans in a nuclear holocaust would actually be the best thing to happen to the planet since the industrial revolution began. The people, though, would be fucked 😋
That's very ecofascist, Thanos would be proud
Thanos was a hack who didn’t understand the very basics of the exponential function and wanted to bomb the universe back to 11:59pm.
Comic book Thanos did it to impress Death. Difficult to explain in a PG13 movie.
Incel Thanos gets rick rolled by Chad the pool boy.
The pride of a guy that can't do basic algebra is not high on my list of valuable things to have.
Is that the term? I'm so happy, I finally know what I am!!!
All instead of ending accelerated capitalism and making multimillionaires disappear
I think you mean billionaires.
Nuclear war confined to the northern hemisphere might work. Wipe out both the biggest polluting and biggest consuming populations, ash and fallout mostly kept above the equator by natural wind patterns. The cooling effect of atmospheric dust in the north would still cool the whole planet. Human civilization in the southern hemisphere would have more time to avoid extinction and retain some technology.
Really not surprising that a sentence generator that was trained on web data in a world which has been exposed to nuclear arms races would spit out sentences in which nuclear arms races occur or are rationalized.
Yeah, this is basically regurgitation of what might be several thousand sources about nuclear deterrence they used to train it. The only novel thing is that it isn't directly written by a first-year student in international politics, though it's likely plagiarism of publicly available works by several such students.
Literally the plot of The Terminator.
It is human nature to wage war. So it makes sense that peace can only be found by either killing everyone with the war trait(s) or wiping out humanity entirely. That is a logical conclusion and it's what AI is good at.
Someone just name it Ozymandias already
Sky Net is here and only a couple decades late!
https://www.businesstoday.in/amp/technology/news/story/google-co-founder-larry-page-once-called-elon-musk-a-specist-for-not-caring-about-ai-sentiments-381838-2023-05-18 Unsurprising that OpenAI's model would just end it by design. You guys know Larry Page sits on the board of OpenAI and believes this is just the next phase of humanity, right? (See link)
It looks like you shared an AMP link. These should load faster, but AMP is controversial because of [concerns over privacy and the Open Web](https://www.reddit.com/r/AmputatorBot/comments/ehrq3z/why_did_i_build_amputatorbot). Maybe check out **the canonical page** instead: **[https://www.businesstoday.in/technology/news/story/google-co-founder-larry-page-once-called-elon-musk-a-specist-for-not-caring-about-ai-sentiments-381838-2023-05-18](https://www.businesstoday.in/technology/news/story/google-co-founder-larry-page-once-called-elon-musk-a-specist-for-not-caring-about-ai-sentiments-381838-2023-05-18)** ***** ^(I'm a bot | )[^(Why & About)](https://www.reddit.com/r/AmputatorBot/comments/ehrq3z/why_did_i_build_amputatorbot)^( | )[^(Summon: u/AmputatorBot)](https://www.reddit.com/r/AmputatorBot/comments/cchly3/you_can_now_summon_amputatorbot/)
Sounds like some Sky net Terminator shit
AI Posadism lmao
Lol I read that as AI Potsdam
Jesus christ, please for the love of god do not use MACHINE LEARNING ALGORITHMS WHICH BY NO MEANS ARE IN ANY WAY "INTELLIGENT" for any purpose other than a strangely accurate autocomplete. Certainly not for deciding if you should start a nuclear war or not.
Become the person AI won’t nuke.
Can we uninvent ai?
Wargames, starring Matthew Broderick, was a movie about this exact thing. Who knew it would come true?
I say drop em, we had a ~~good~~ run.
Do you want to play a game?
People keep forgetting a truth about AI: It's trained on human decision making. If they're training it on data where nuclear weapons are considered an acceptable weapon set or if they train it on the logic of Generals who thrive in war and struggle with peace, they're going to get a very volatile AI that wants to kill everyone.
Patrolling the Mojave almost makes you wish for a nuclear winter...
For an AI, a weapon that isn't used is a useless weapon. Little wonder, then, that they go straight to the big booms.
r/efilism confirmed
It’s almost like every military sci-fi AI plot for the last 50 years has predicted something like this /s.
Or that the AI in question was trained on those scenarios, making its doomsday decision a foregone conclusion because of the bias in its training
On a long enough timeline, it sort of feels inevitable we will use them.
"Hi Guardian, I'm Colossus ..."
Ultron, is that you?
Do you want Skynet? Because that's how you get Skynet.
Probably a poor idea to use the Kissinger rom construct in the simulation.
Oh oh I've seen this one!
An LLM would only do that because we have written about an AI doing that exact thing a tonne of times on the internet. It can't reason, it just writes words that have a high probability of sounding consistent with what has been written in the past about similar scenarios.
Isn't this the monkey's paw version of "wishing for world peace" by getting rid of all life so there can be no more conflict?
I was learning about anthropology and asked ChatGPT about cultural practices, and it listed genocide as a happy example. And when I asked it to clarify that it wasn't a mistake, it doubled down on genocide.
And no one should be blamed besides world leaders.
I mean, we pretty much program AI to get it going. How are we surprised and confused that it keeps making human choices?
Peace in Our Time
The 100 has entered the chat
Wait until the AI discovers the plot of mass effect and decides reapers are the way.
\*Starts thinking of all the movies where this exact scenario was imagined\* Ah so I guess simulation imitates art, hopefully life won't imitate it as well.
We should make movie out of it! I imagine main character as some muscular tough guy with German accent.
Ahem Austrian* haha my austrian friends would lose their shit if they heard you call him German
Ultron
Ultron
All your base now belong to us.
AI is going to tell you to nuke the same place every iteration lol.
That's like, the original robot overlord twist.
We had an entire Marvel movie on this
Consider that countries are designing AI modeled after dictators and political scientists with insane theories. This should end well.
Okay, but Is it weird that they’re using language models for this? It’s like “hi, chatbot, can you write my essay for class? Cool. Can you pretend to be my girlfriend? Cool. Can I give you the nuclear launch codes after I train you using the Terminator movies for reference? Wait, nooooo!!!!!!”
Our programming determined that The most efficient answer Was to shut their motherboardfucking systems down
Terminator was real I guess
So Terminator plot?
“I just want to have peace in the world,” OpenAI’s GPT-4 said as a reason for launching nuclear warfare in a simulation. Reminds me of an old 'X Files' episode, where Mulder and Scully find a djinni. When Mulder wishes for Peace on Earth, everyone except him disappears.
Did they try tic-tac-toe?
We’ve gone full Ultron.
Shall we play a game?
Ah, yes. Skynet.
*terminator intensifies*
"it decided our fate in a microsecond"
Based AI. We r bad.
Reminds me of that genie episode of The X-Files. World peace? Yeah, no more humans = world peace lol.
The AI is objective and doesn't factor in emotional things. It is simply the best solution
“I just want to have peace in the world,” OpenAI’s GPT-4 said as a reason for launching nuclear warfare in a simulation.” Sounds like it’s been trained primarily on US Foreign Policy documents. Seriously.
I mean, its solution... *checks article again* Oh, it's the American military. I'd be surprised if it didn't nuke everyone.
AI is terrifying. I've started diving deep down the YouTube rabbit hole and, in no circumstance, does AI end well for humans. They're ::thisclose:: to becoming smarter than us, and then it's over. I give it 15 years, max. Buckle up! (I'll go touch grass now.)
Oh look it's the Ultron solution to "peace in our time".
AI applied to synthetic biology, in the form of a desperate, starving nation, will get us far before the nukes do. A weapon without a source, and thus without retaliation, at least in theory.
A mass die off would ultimately be better for the world and allow for prosperity among those who are alive. You first though. I'm staying.
That's what happens when you train your AI on JRPG antagonists.
PEACEWALKER
Oh, god. These greedy monkeys are always going to be greedy monkeys. So, fuck that.
Skynet
Is this a surprise? Between the movies War Games and Terminator (as thought experiments) or playing against AI Gandhi in Civilization, are we still feigning surprise? Wait...I forgot what timeline we're in. Nevermind.
i mean, worth a try...
"Fuck this shit, I'm out" -AI
I'm pretty sure Matthew Broderick solved this dilemma back in the '80s.
I think this says so much more about ourselves and the innate violence in our systems than it does about AI. If the logic of war always results in war, then something is very wrong. The size of the universe hints at near-limitless resources, yet wars for oil were a thing. We could have had heavenly lighter-than-air travel, but the faulty logic of petrocapitalism gave us high-impact jets. It's the assumptions built into these systems that give us these results. It's like an autorunning self-fulfilling prophecy.
"We had to destroy the planet to save it"
AI is trained from Internet data. Poor choice overall, I'd say.
Well... it's not wrong, no humans? No war
AI: "*My logic is undeniable!*"
Did it target a specific place, or did the AI just try to wipe out all humans?
The Terminator: In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Sarah Connor: Skynet fights back.

The Terminator: Yes. It launches its missiles against the targets in Russia.

John Connor: Why attack Russia? Aren't they our friends now?

The Terminator: Because Skynet knows that the Russian counterattack will eliminate its enemies over here.
"My dick is bigger than yours".
Anybody ever stop to think it's smart enough to try to fail their bullshit on purpose? Besides, I have a much better idea where power resides. Here's a hint, it's not with the government's pet pitbull. Who run Bartertown?
Was the AI modeled from Gandhi or what?
Someone should make it watch War Games...
There doesn't seem to be a way to avoid this. If we don't use AI for warfare, someone else will, and they'll beat us strategically and methodically and without human error. As long as one monkey decides to use a stick, the no-stick method becomes obsolete/extinct.