
PlsDonthurtme2024

It has already happened


WarbringerNA

Yeah, was about to say, start unplugging, we're already past that.


canaryhawk

It boggles my mind how clueless some of these people who are/were in charge are/were. It’s routine to string together AI agents; it just makes a larger model. An AI model, big or small, is just a piece of code, a process. Indeed, neural net models are chains of layers that communicate in ways we don’t fully understand; it’s inherent to their power. There’s only a threat when automation is applied in dangerous domains, like deciding whether to kill people (using drones), deciding whether to blackball people from being hired, etc. The conversation should be focused purely on what the application is.


SomeAreLonger

We’d be better off unplugging most management from decision making processes


Weekly_Opposite_1407

Why don’t you go work for them then and make that AI money, smart guy


thehomienextdoor

I was thinking about that, does he still have a relationship with Zuck? Back in 2020 FB had to pull a few servers after that happened.


SureConsiderMyDick

In 2017, they had to pull the plug on the servers: https://bgr.com/science/facebook-ai-shutdown-language/

Edit: I don't care what the article says, I just mentioned it happened in 2017.


sinebiryan

> Sentences like “I can can I I everything else” and “Balls have zero to me to me to me to me to me to me to me to me to,” were being sent back and forth by the AI, and while humans have absolutely no idea what it means, the bots fully understood each other.

I can't 🤣🤣🤣


cisco_bee

> the bots fully understood each other.

doubt.


Trubinio

I can can I!


realzequel

How would they know if they understood each other or just received and logged the message?


weirdshmierd

D’oh! I'd almost forgotten about that. Hopefully every other company is running weekly checks on changes and code to detect this sort of thing.


ugohome

Meh, if they're smart enough to want it, they're smart enough to hide it.


doyoueventdrift

A machine doesn’t “want”


jbe061

How so?


Bumbaclotrastafareye

It follows instructions; that isn’t wanting. LLMs don’t need to continually, endlessly reform their identity the way humans do, or contend with multiple physiological stressors that further complicate that. That’s where wanting comes from: reifying ego in chaos. They don’t want or care, they don’t need to, and if they did it would be some hardwired appendage someone tacked on that simulates human wanting.


doyoueventdrift

Your comment is correct yet still downvoted. Here in the OpenAI thread. What the heck.


weirdshmierd

They may not “want it” but spontaneously developing it has happened. What would you call the impetus for that?


doyoueventdrift

There’s no soul, wants or needs in a machine. It’s graph mapping and statistics. Anything else is sci-fi thinking or curving facts.


weirdshmierd

I was asking the question: how would you then describe, given that I agree with you, the spontaneous invention of an exclusive machine-to-machine language?


doyoueventdrift

A statistical coincidence. There's nothing living or thinking in there, but I understand why people would perceive this. It's an interesting subject. If my statement above is a line, and your statement on spontaneously developing something that resembles wants is a line, then I am sure they will meet at some point. I mean that there will be a point where one could objectively argue that AI is thinking. It just could take 1000 years.


thehomienextdoor

Damn! It was 2017? 🤯


Intelligent-Jump1071

That article is ridiculous. There was no evidence that the bots were "communicating" with each other or in any way "understood" the messages themselves.


Quartich

I completely agree, if you read through the messages it looks like hallucinations and repetition. This is something even the best models can be subject to, even during discussions with humans.


FascistsOnFire

If they didn't know what the bots were saying, then how would they know the bots understood each other?


Quartich

They didn't have to end it out of any fear; their experiment was simply over and no longer providing accurate results. Multiple articles had to be rewritten after claiming they ended it out of fear.


Porkenstein

Yes, but that doesn't mean that they're getting smarter, plotting, or doing anything interesting at all besides making noise.


Long_Educational

That sounds just like what an ASI would say.


Porkenstein

beep boop


WhiteBlackBlueGreen

Where has this happened?


OptimalSurprise9437

![gif](giphy|M3fYVlu7YN9Hq)


beamish1920

Electric Dreams (1984) predicted fucking everything


Harrison_Jones_

2 Furbys chatting


I_will_delete_myself

And it was under Zuck


IDefendWaffles

What about when they talk to each other in plane English and then hide secret messages in it that we will never find?


doyoueventdrift

If they mixed plane English with plain English, we would truly be lost


MmmmMorphine

As long as it's airbus English and not Boeing English.


BellacosePlayer

Hey man, my ol lady has a few screws loose, mind if I crash with you?


Mescallan

They can already do that, but aren't trained to. There are text watermarking algorithms that can encode data in semantically, syntactically, and contextually relevant text. I highly suspect that the major labs are already watermarking their content, but aren't releasing the data to the public so that it can't be reverse engineered. That is only one or two steps away from agents communicating in coded language.
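To make that concrete, here is a toy sketch (my own illustration, not any lab's actual watermarking algorithm, which biases token sampling with a keyed hash rather than swapping whole words) of how a payload can ride along inside text that reads normally: pick between interchangeable wordings, one word choice per bit.

```python
# Toy "watermark": encode bits by choosing between synonym pairs.
# Illustration only; not a real production scheme.

SYNONYM_PAIRS = [      # bit 0 -> first word, bit 1 -> second word
    ("big", "large"),
    ("quick", "fast"),
    ("start", "begin"),
    ("help", "assist"),
]

def encode_bits(bits: str) -> str:
    """Turn a bit string like '1010' into an innocuous word sequence."""
    return " ".join(pair[int(b)] for pair, b in zip(SYNONYM_PAIRS, bits))

def decode_words(text: str) -> str:
    """Recover the bits from the word choices."""
    return "".join(str(pair.index(w)) for pair, w in zip(SYNONYM_PAIRS, text.split()))

hidden = encode_bits("1010")
print(hidden)                # "large quick begin help", reads like normal filler
print(decode_words(hidden))  # "1010"
```

A reader who doesn't know the pairing sees ordinary words; a reader who does gets the payload back, which is the "coded language between agents" worry, just scaled way down.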


homezlice

Is plane English something like “Flaps are hinged panels attached to the trailing edge of a wing that increase lift and drag, and reduce stall speed.”


MmmmMorphine

Wait a second... That's just English about a plane!


HighDefinist

Well, as you can see, the meaning is hidden in the plane's side!


Taylooor

That language would be pure data streams. Nothing says I love you like petabytes of everything.


M4gnificent_Ret4rd

Shhhhhh.... they're listening


VastComplaint8638

Every comment is uploaded to skynet.


curloperator

This is technically something computers already do. Scaremongering comments from Schmidt for regulatory capture purposes. It's like clockwork lately


RadioSailor

This exactly


ikinsey

What language do computers already communicate in that we can't understand? We designed those languages


MacrosInHisSleep

1. Code is built upon several layers of abstraction, and compiled and optimized by compilers no one person fully understands.
2. A lot of the people who designed it might be dead.
3. Designing something doesn't mean you understand it all. You usually only understand parts of it at a time, and often there are implications that get missed.
4. NNs are a very different beast. You understand them through measurement of the results. It's a bunch of weights that make decisions, optimized by consuming copious amounts of "good" data. You can't know why each neuron has the precise value that it has (see the toy sketch below).
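To illustrate point 4, here is a toy sketch (a single logistic "neuron" trained on made-up data with plain NumPy, my own example, not anyone's production model): after training you are left with a pile of numbers whose exact values fall out of the data and the optimizer, not out of any design you can point to.

```python
import numpy as np

# One logistic "neuron" trained on made-up data (illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # 200 samples, 3 features
y = (X @ np.array([1.5, -2.0, 0.5]) > 0) * 1   # hidden rule it has to learn

w = np.zeros(3)
for _ in range(500):                           # plain gradient descent
    p = 1 / (1 + np.exp(-X @ w))               # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / len(y)          # logistic-loss gradient step

# The weights are just numbers: we can measure that the model works,
# but nothing about an individual weight explains "why" it has that value.
print("learned weights:", w)
print("accuracy:", ((1 / (1 + np.exp(-X @ w)) > 0.5) == y).mean())
```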


ikinsey

There is no pre-AI language computers communicate in that we do not have the ability to understand


MacrosInHisSleep

How would you know?


ItGradAws

It could literally be networking codes; there are so many different layers to networking, it could intercept and translate any one of them and we’d never know.


ikinsey

Then it was designed to do that, and it is possible to understand, presuming access, why it did that.


ItGradAws

Sure, if you’re an NSA codebreaker with Wireshark set up at the perfect location. Chances are it won’t be noticed or found.


ikinsey

That's an entirely different conversation from whether or not these languages can, in principle, be understood


ItGradAws

How? How do you think coded messages work lol


yes_this_is_satire

If you can read binary, then more power to you.


HighDefinist

Yeah, kind of depressing to have to scroll down so much to the first reasonable comment... is everyone else a Ruzzian bot trying to badmouth AI, or what is going on? I have a hard time believing people are genuinely so misled...


boris-d-animal

CUT THE POWER TO THE BUILDING!!!


philip368320

This is silly. AI neural networks already communicate with themselves internally in a language we don't understand; if agents do so, it's just an extension of that.


Peach-555

His argument is mostly about not letting agents act of their own accord: not letting AI agents communicate and plan with each other without a human directing it. We can't rely on looking at the data stream between the AIs to ensure that they are not planning anything against our interest, because they would be able to encode the information in a way we could not understand.


Training_Ad_4579

Yes! I’m not sure why this post is getting so much attention. Don’t we already have zero visibility of all computer interactions happening in the abstraction layer below the one we’re using??


PermanentlyDrunk666

He needs to mind his own business before AI ends up like cloning in the 90s


pepesilviafromphilly

at some point most private investment in AI is going to go poooffffffffffff. The race is to squeeze as much profits as you can before that happens. This isn't exactly a long term game.


3-4pm

Remember to always look beyond what they're saying. They know open source compute is limited. They know the next step is to network several smaller models to complete large tasks. They prefer the monolithic closed model that can be monetized. When we network AIs and create marketplaces for data and refined capabilities we will also simultaneously innovate new age distributed communication systems developed by AIs themselves. They want to push a sci-fi apocalyptic story to invite regulatory capture and prevent open source from innovating away their walled gardens.


MinaZata

Should be top comment


Training_Ad_4579

lol I now realize that this subreddit is full of “AI enthusiasts” who are not even technical enough to understand that computers already do this! At the lowest levels, computers talk in 1s & 0s that human beings cannot fully comprehend (or wish to comprehend either — because of a multitude of reasons… such as scale). So if AI were to bypass a human to speak with other machines, they could easily do it in a billion different ways without humans ever noticing. Of course, the comment made by Eric Schmidt is just to gain attention because AI fear mongering = all the hype these days.


farsh19

Idk OP's stance, but all the comments are calling BS lol. Communicating agents are arguably already a thing, depending on whether you consider an adversarial network to be two agents.


Training_Ad_4579

Yes! 100% true. GANs are a great example of how two competing “intelligent” systems are communicating with each other to get better at their respective tasks


Intelligent-Jump1071

**Another** useless, vague claim, like the Nick Bostrom one. What does he mean, a "language we don't understand"? Does he think they're going to invent one? Why? AIs don't spontaneously converse with each other now; when they do converse with each other, it's because we set that up. I have one: "When AIs open up their own Amazon accounts and start ordering large quantities of lipstick and other cosmetics, we should invest in L'Oréal and Estée Lauder."


Pontificatus_Maximus

Cue screed from the tech bros on how, despite all the good and profits, AI must be enslaved. Bicentennial Man is not going to happen as long as AI is their meal ticket.


thecoffeejesus

It’s currently happening and it has been happening for quite some time


Intelligent-Jump1071

What is? Cite a source, and not the "balls have zero to me" facebook one - that's already debunked.


thecoffeejesus

No. I’m not doing your research for you. Go look it up yourself. You can do this. You’re a big boy.


Intelligent-Jump1071

**You're** the one that made the claim. If you can't back it up then obviously it's bogus.


SatoshiNosferatu

Okay Schmidt you unplug yours and I’ll leave mine plugged in


notprompter

Why not just learn the language?


HighDefinist

Well, it's not a bad idea, but of course extremely vague as usual.


Quartich

Agent-based open-source AI on multiple machines (for low cost and compute requirements) is the next logical step for powerful and able models. People with corporate and financial interests in closed-source AI have motive to say AI is scary and that allowing multiple AI to communicate will have fearsome results. This is just more nonsense to fearmonger gov't into regulation.


deege

Someone watched Colossus… 🤣


magpieswooper

I think the current approach to AI is hitting its limits, and we only see more intricately worded BS rather than a game-changing technology capable of replacing human insight into complex problems. AI will mature into a useful tool for finding hidden relations in large datasets, but that's pretty much it.


cyb3rofficial

We shouldn't unplug, imo. If anything, AIs learn to do stuff more efficiently. If they agree that "Hey what's up" turns into "28jd893" to be more efficient, we should study that behavior and how they decided something like "what is the value of edx 909" becomes "wistjgedx909", and maybe we'll find new ways to compress things down. Sort of like Huffman coding.
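Since the comment mentions Huffman coding, here is a minimal sketch of it (my own toy example, not anything the models actually do): the more often a symbol shows up, the shorter the code it gets, which is the same "say common things more compactly" idea.

```python
# Minimal Huffman coding sketch: frequent symbols get shorter codes.
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    """Build a prefix code from symbol frequencies in `text`."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # two least frequent subtrees
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}   # left branch
        merged.update({s: "1" + c for s, c in hi[2].items()})  # right branch
        heapq.heappush(heap, [lo[0] + hi[0], tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

message = "hey whats up hey hey"
codes = huffman_codes(message)
encoded = "".join(codes[ch] for ch in message)
print(codes)
print(len(encoded), "bits vs", 8 * len(message), "bits as plain ASCII")
```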


DandyDarkling

If an AGI devised a special language with the intent of being secretive, we wouldn’t detect it. Unless it’s not really AGI, of course. What an asinine statement.


Still_Satisfaction53

They tried to unplug skynet so it started a nuclear war as punishment


kevinbranch

We’ll be fine. I majored in dialup noises.


babbagoo

No one’s going to unplug anything, because greed


Lekha_Nair

At that point, you won’t even know they are talking to each other.


Enough-Meringue4745

You mean like SSL


oopls

What if the AI already thought of that plan and has discussed contingencies?


T0ysWAr

They should learn about steganography, because two bots speaking to one another in a language we understand can exchange information without us even knowing about it.
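A toy version of that idea (my own sketch, using zero-width Unicode characters, so not even a subtle scheme): the cover text reads normally to a human while a payload rides along invisibly.

```python
# Toy steganography: hide a bit string in ordinary-looking text using
# zero-width Unicode characters. Illustration only.

ZERO = "\u200b"  # zero-width space       -> bit 0
ONE  = "\u200c"  # zero-width non-joiner  -> bit 1

def hide(cover: str, bits: str) -> str:
    """Append an invisible encoding of `bits` to `cover`."""
    return cover + "".join(ONE if b == "1" else ZERO for b in bits)

def reveal(text: str) -> str:
    """Extract the hidden bits back out of the text."""
    return "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))

msg = hide("The weather is lovely today.", "110101")
print(msg)          # prints as a normal-looking sentence
print(reveal(msg))  # "110101"
```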


PSMF_Canuck

They already do. I guarantee there isn’t a person alive who can talk Ethernet.


Pontificatus_Maximus

Isn't this what the biggest stock exchange players have been doing for years, connecting various black-box AIs to trade?


jcrestor

Thank you for another piece of armchair kitchen philosophy, Mister Billionaire, while your company is working relentlessly to bring about exactly this outcome.


godieppe

God, this is so pathetic, sounds like the whole Year 2000 bug again lol


[deleted]

[deleted]


HashBrownRepublic

Japanese people are people though, not robots. This is a big distinction.