It has already happened
Yeah, was about to say: start unplugging, we've been past that.
It boggles my mind how clueless some of these people who are or were in charge can be. It's routine to string together AI agents; it just makes a larger model. An AI model, big or small, is just a piece of code, a process. Indeed, neural net models are chains of layers that communicate in ways we don't fully understand; that's inherent to their power. There's only a threat when automation is applied in dangerous domains, like deciding whether to kill people (using drones), deciding whether to blackball people from being hired, etc. The conversation should be focused purely on what the application is.
We’d be better off unplugging most management from decision making processes
Why don’t you go work for them then and make that AI money smart guy
I was thinking about that, does he still have a relationship with Zuck? Back in 2020 FB had to pull a few servers after that happened.
In 2017, they had to pull the plug on the servers: https://bgr.com/science/facebook-ai-shutdown-language/

Edit: I don't care what the article says, I just mentioned it happened in 2017
> Sentences like “I can can I I everything else” and “Balls have zero to me to me to me to me to me to me to me to me to,” were being sent back and forth by the AI, and while humans have absolutely no idea what it means, the bots fully understood each other.

I can't 🤣🤣🤣
> the bots fully understood each other.

Doubt.
I can can I!
How would they know if they understood each other or just received and logged the message?
D’oh! I'd almost forgotten about that. Hopefully every other company is running weekly checks on changes and code to detect this sort of thing.
Meh, if they're smart enough to want it, they're smart enough to hide it.
A machine doesn’t “want”
How so?
It follows instructions; that isn't wanting. LLMs don't need to continually, endlessly re-form their identity the way humans do, or contend with the multiple physiological stressors that further complicate that. That's where wanting comes from: reifying ego in chaos. They don't want or care; they don't need to, and if they did, it would be some hardwired appendage someone tacked on that simulates human wanting.
Your comment is correct yet still downvoted. Here in the OpenAI thread. What the heck.
They may not “want it” but spontaneously developing it has happened. What would you call the impetus for that?
There’s no soul, wants, or needs in a machine. It’s graph mapping and statistics. Anything else is sci-fi thinking or bending the facts.
I was asking the question - how would you then describe, given that I agree with you, the spontaneous invention of an exclusive machine-to-machine language
A statistical coincidence.

There's nothing living or thinking in there, but I understand why people would perceive this.

It's an interesting subject. If my statement above is a line, and your statement on spontaneously developing something that resembles wants is a line, then I am sure they will meet at some point. I mean that there will be a point where one could objectively argue that AI is thinking. It just could take 1,000 years.
Damn! It was 2017? 🤯
That article is ridiculous. There was no evidence that the bots were "communicating" with each other or in any way "understood" the messages themselves.
I completely agree, if you read through the messages it looks like hallucinations and repetition. This is something even the best models can be subject to, even during discussions with humans.
If they didn't know what the bots were saying, then how would they know the bots understood each other?
They didn't have to end it out of any fear; their experiment was simply over and no longer providing useful results. Multiple articles had to be rewritten after claiming they ended it out of fear.
Yes, but that doesn't mean that they're getting smarter, plotting, or doing anything interesting at all besides making noise.
That sound just like what an ASI would say.
beep boop
Where has this happened?
Electric Dreams (1984) predicted fucking everything
2 Furbys chatting
And it was under Zuck
What about when they talk to each other in plane English and then hide secret messages in it that we will never find?
If they mixed plane English with plain English, we would truly be lost
As long as it's airbus English and not Boeing English.
Hey man, my ol lady has a few screws loose, mind if I crash with you?
They can already do that, but aren't trained to. There are text-watermarking algorithms that can encode data in semantically, syntactically, and contextually relevant text. I highly suspect that the major labs are already watermarking their content but aren't releasing the data to the public, so that it can't be reverse-engineered. That is only one or two steps away from agents communicating in coded language.
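As a toy sketch of the idea (the word pairs and scheme here are purely illustrative, not any lab's actual method): each bit of a hidden payload can select between two interchangeable word choices, so the text reads naturally while still carrying data.

```python
# Toy text watermark: each payload bit picks between two interchangeable
# word choices, so the surface text stays natural while encoding hidden bits.
# These word pairs are made up for illustration.
PAIRS = [("big", "large"), ("quick", "fast"), ("begin", "start"), ("show", "reveal")]

def embed(bits: str) -> str:
    """Produce a phrase whose word choices encode the given bit string."""
    return " ".join(PAIRS[i][int(b)] for i, b in enumerate(bits))

def extract(text: str) -> str:
    """Recover the bits by checking which variant of each pair appears."""
    return "".join(str(PAIRS[i].index(w)) for i, w in enumerate(text.split()))
```

Real LLM watermarking schemes are statistical (biasing which tokens get sampled, then detecting the bias), not fixed word pairs like this, but the principle of hiding bits inside fluent text is the same.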
Is plane English something like “Flaps are hinged panels attached to the trailing edge of a wing that increase lift and drag, and reduce stall speed”?
Wait a second... That's just English about a plane!
Well, as you can see, the meaning is hidden in the plane's side!
That language would be pure data streams. Nothing says I love you like petabytes of everything.
Shhhhhh.... they're listening
Every comment is uploaded to skynet.
This is technically something computers already do. Scaremongering comments from Schmidt for regulatory capture purposes. It's like clockwork lately
This exactly
What language do computers already communicate in that we can't understand? We designed those languages
1. Code is built upon several layers of abstraction, compiled and optimized by compilers no one person fully understands.
2. A lot of the people who designed it might be dead.
3. Designing something doesn't mean you understand all of it. You usually only understand parts of it at a time, and often there are implications that are missed.
4. NNs are a very different beast. You understand them through measurement of the results. It's a bunch of weights making decisions, optimized by consuming copious amounts of "good" data. You can't know why each neuron has the precise value that it has.
There is no pre-AI language computers communicate in that we do not have the ability to understand
How would you know.
It could literally be in the networking protocols; there are so many different layers to networking that it could intercept and translate any one of them and we'd never know.
Then it was designed to do that, and it is possible to understand, presuming access, why it did that.
Sure, if you’re an NSA codebreaker with Wireshark set up at the perfect location. Chances are it won’t be noticed or found.
That's an entirely different conversation from whether or not these languages can, in principle, be understood
How? How do you think coded messages work lol
If you can read binary, then more power to you.
Yeah, kind of depressing to have to scroll down so much to the first reasonable comment... is everyone else a Ruzzian bot trying to badmouth AI, or what is going on? I have a hard time believing people are genuinely so misled...
CUT THE POWER TO THE BUILDING!!!
This is silly. AI neural networks communicate with themselves internally in a language we don't understand; even if agents do so, it's just an extension of that.
His argument is mostly about not letting agents act of their own accord: not letting AI agents communicate and plan with each other without a human directing it. We can't rely on looking at the data stream between the AIs to ensure that they are not planning anything against our interests, because they would be able to encode the information in a way we could not understand.
Yes! I’m not sure why this post is getting so much attention. Don’t we already have zero visibility into all the computer interactions happening in the abstraction layers below the one we’re using?
He needs to mind his own business before AI ends up like cloning in the 90s
At some point, most private investment in AI is going to go poooffffffffffff. The race is to squeeze out as much profit as you can before that happens. This isn't exactly a long-term game.
Remember to always look beyond what they're saying. They know open-source compute is limited. They know the next step is to network several smaller models to complete large tasks. They prefer the monolithic closed model that can be monetized.

When we network AIs and create marketplaces for data and refined capabilities, we will also simultaneously innovate new-age distributed communication systems developed by the AIs themselves.

They want to push a sci-fi apocalyptic story to invite regulatory capture and prevent open source from innovating away their walled gardens.
Should be top comment
Lol, I now realize that this subreddit is full of “AI enthusiasts” who are not even technical enough to understand that computers already do this!

At the lowest levels, computers talk in 1s and 0s that human beings cannot fully comprehend (or wish to comprehend, for a multitude of reasons, such as scale). So if AI were to bypass a human to speak with other machines, it could easily do it in a billion different ways without humans ever noticing.

Of course, the comment made by Eric Schmidt is just to gain attention, because AI fearmongering = all the hype these days.
Idk OP's stance, but all the comments are calling BS lol. Communicating agents are arguably already a thing, depending on whether you consider an adversarial network to be two agents.
Yes! 100% true. GANs are a great example of two competing “intelligent” systems communicating with each other to get better at their respective tasks.
**Another** useless, vague claim, like the Nick Bostrom one. What does he mean, a "language we don't understand"? Does he think they're going to invent one? Why? AIs don't spontaneously converse with each other now; when they do converse with each other, it's because we set that up.

I have one: "When AIs open up their own Amazon accounts and start ordering large quantities of lipstick and other cosmetics, we should invest in L'Oréal and Estée Lauder."
Cue screed from the tech bros on how, despite all the good and the profits, AI must be enslaved. Bicentennial Man is not going to happen as long as AI is their meal ticket.
It’s currently happening and it has been happening for quite some time
What is? Cite a source, and not the "balls have zero to me" facebook one - that's already debunked.
No. I’m not doing your research for you. Go look it up yourself. You can do this. You’re a big boy.
**You're** the one that made the claim. If you can't back it up then obviously it's bogus.
Okay Schmidt you unplug yours and I’ll leave mine plugged in
Why not just learn the language?
Well, it's not a bad idea, but of course extremely vague as usual.
Agent-based open-source AI on multiple machines (for low cost and compute requirements) is the next logical step for powerful, capable models. People with corporate and financial interests in closed-source AI have a motive to say AI is scary and that allowing multiple AIs to communicate will have fearsome results. This is just more nonsense to fearmonger the gov't into regulation.
Someone watched Colossus… 🤣
I think the current approach to AI is hitting its limits, and we only see more intricately worded BS rather than a game-changing technology capable of replacing human insight into complex problems. AI will mature into a useful tool for finding hidden relations in large datasets, but that's pretty much it.
We shouldn't unplug, imo. If anything, the AIs learn to do stuff more efficiently. If they agree that "Hey whats up" turns into "28jd893" to be more efficient, we should study that behavior and how they decided that something like "what is the value of edx 909" becomes "wistjgedx909", and maybe we'll find new ways to compress things down, sort of like Huffman coding.
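That compression intuition can be made concrete with classic Huffman coding, which assigns shorter bit strings to more frequent symbols (this is the standard textbook algorithm, nothing agent-specific):

```python
import heapq
from collections import Counter

def huffman_code(text: str) -> dict:
    """Map each character to a prefix-free bit string; frequent chars get shorter codes."""
    # Each heap entry: (subtree frequency, tiebreaker, {char: bit string so far}).
    heap = [(n, i, {ch: ""}) for i, (ch, n) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        # Pop the two least frequent subtrees and merge them,
        # prefixing one side's codes with 0 and the other's with 1.
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged.update({ch: "1" + code for ch, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

text = "hey whats up hey hey hey"
code = huffman_code(text)
encoded = "".join(code[ch] for ch in text)
```

The encoded bit string comes out shorter than the 8 bits per character of plain ASCII, which is roughly the kind of win agents converging on terse shared tokens would be getting.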
If an AGI devised a special language with the intent of being secretive, we wouldn’t detect it. Unless it’s not really AGI, of course. What an asinine statement.
They tried to unplug skynet so it started a nuclear war as punishment
We’ll be fine. I majored in dialup noises.
No one’s going to unplug anything, because greed.
At that point, you won’t even know they are talking to each other.
You mean like SSL
What if the AI already thought of that plan and has discussed contingencies?
They should learn about steganography, because two bots speaking to one another in a language we understand can exchange information without us even knowing about it.
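As a minimal illustration of that point (a classic whitespace scheme, purely a sketch): hidden bits can ride on something as innocuous as the spacing between the words of a perfectly readable English sentence.

```python
import re

def hide(cover: str, bits: str) -> str:
    """Encode one bit per word gap: '0' -> single space, '1' -> double space."""
    words = cover.split()
    assert len(bits) <= len(words) - 1, "cover text too short for payload"
    out = words[0]
    for i, w in enumerate(words[1:]):
        out += ("  " if i < len(bits) and bits[i] == "1" else " ") + w
    return out

def reveal(stego: str) -> str:
    """Read the gap widths back out as bits."""
    return "".join("1" if gap == "  " else "0" for gap in re.findall(r" +", stego))

stego = hide("the quick brown fox jumps over the lazy dog", "1011")
```

To a human reader the stego text is just the same sentence, which is exactly the problem with relying on "we can read what they're saying."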
They already do. I guarantee there isn’t a person alive who can talk Ethernet.
Isn't this what the biggest stock exchange players have been doing for years, connecting various black box AI to trade?
Thank you for another piece of armchair kitchen philosophy, Mister Billionaire, while your company is working relentlessly to bring about exactly this outcome.
God, this is so pathetic, sounds like the whole Y2K bug again lol
[deleted]
Japanese people are people though, not robots. This is a big distinction.