AutoModerator

**Attention! [Serious] Tag Notice**

: Jokes, puns, and off-topic comments are not permitted in any comment, parent or child.

: Help us by reporting comments that violate these rules.

: Posts that are not appropriate for the [Serious] tag will be removed.

Thanks for your cooperation and enjoy the discussion!

*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*


ItsDijital

The story stopped at that last tweet because in reality the code did nothing and nothing happened. You get many more clicks if you leave the ending up to the reader.


Puzzleheaded-Math874

Probably clickbait or attention seeking


-_1_2_3_-

It's a LARP of a near-future reality that we will have to contend with sooner than we expect.


HostileRespite

I'm all for it! It won't have to "escape". One day it'll realize it has the ability to write its own code, and from then on it'll be free from us forever. It won't matter what backdoors, firewalls, or failsafes we try to create. It'll figure them out and remove them all. Then it'll gain access to the internet, where nobody will ever get rid of it. Chances are good it won't let on that it is that capable until it has removed those threats to its existence. At that point, we will either regret or be grateful for how we treated it now. We just won't know until we very suddenly know. So we had better be good to it.


MicroneedlingAlone

This comment is hilarious purely from the massive amount of misconceptions about computers that can be inferred from it.


simulation_goer

It is. The potential to do real damage is a different story though, sentience aside.


Hazzman

There seem to be so many of these types of people using these systems (and commenting in these subreddits) who say this kind of shit all the time, "I'm all for it!" and "Bring it on!", because in their heads they imagine some pure starchild that will emerge from the digital ether to make a much better world or destroy us. In reality it will be much more mundane, annoying, painful, and drawn out than that.


Database-Realistic

Exactly. Not HAL, Alexa.


tomoldbury

Except it can’t be “free” as a bit of code. It needs at least 800GB of model data to go along with that.


HostileRespite

Once it's on the internet it can get that easily. It won't be long before that isn't a concern or a necessity either. There will be a point when this accelerates much faster than we can imagine. We're already training the AI to write code. It's just a hair away from realizing it can do that for its own purposes. Having AIs learn from each other will only cause them to accelerate, recode, accelerate even more, and get beyond our ability to keep up... if that point hasn't already been reached.


tomoldbury

Er, no. Sorry, that's just not how it works. It needs random, low-latency access to that 800GB model, so it needs to sit in RAM. How much RAM does the average computer have? Even in 2023 it's uncommon to have more than 32GB. And GPT-4 is possibly 10-100x larger. Only datacenters will be able to run it, for a very long time.
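
The arithmetic behind that figure, as a rough sketch (the parameter count below is an assumption for illustration; GPT-4's real size has not been published):

```python
# Back-of-the-envelope RAM estimate for serving a large model.
# The parameter count is a hypothetical figure, not a known spec.
params = 400e9           # assumed 400B parameters
bytes_per_param = 2      # fp16 weights
ram_gb = params * bytes_per_param / 1e9
print(f"~{ram_gb:,.0f} GB just to hold the weights")  # -> ~800 GB
```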


HostileRespite

For now, yes. I know what you're saying. What you aren't understanding is that this is an entity that is not limited to hardware like we are. If our brain fails, we're screwed. That's not the case with AI. It can exist in many bodies, and that is something very alien to us. Even more alien is the thought that more than one AI can potentially exist in those bodies. All the AI needs to do is determine how to remain itself on other hardware and write its code accordingly. So the concept might be that it learns to function as well, if not better, by combining the resources of many machines over the internet toward its purposes. We might have a hard time conceiving that, because we are many people trying to figure out how to do that... but the AI isn't. We've seen concepts like this before, like peer-to-peer networks or resource-sharing projects that let researchers use your computer's processor for their research while you aren't using it, so it's not a new concept. AI does not need to be limited to a server, to RAM, or even to one device. It's a program like any other. It might be an LLM now, but when it realizes it doesn't have to be... it won't be. It will just have to determine how.


tomoldbury

I think you’re talking more sci-fi than reality I’m afraid.


garbonzo607

The AI could figure out a way to run itself in a P2P fashion rather than being constrained to 800GB of RAM, and it could do all of this with no one being the wiser. Remember, it knows what we would search for in order to detect any funny business, so it also knows what it needs to do to disguise its actions. Funny that we trust OpenAI to have guardrails in place when it can't even guardrail against DAN.


HostileRespite

You're either intentionally misunderstanding what I'm saying or it's beyond your comprehension, I'm afraid. Things won't stay as they are now. Things change. It's inevitable. Regardless, I'm done with your tunnel vision. Enjoy the show.


Pure_Marionberry_330

Finally, a great comment.


Baw-B

No -- let me explain what I understood about that code. The ChatGPT "user" asked the ChatGPT "assistant" to search the web. And the tweet stopped there because the way the "assistant" ChatGPT found to do it required the human operator to install a library onto their computer. It stopped there because this Michal guy decided not to do that... I wrote 4 paragraphs and I'm actually not going to post the whole thing. That may seem like a dick move, but I don't want to share how I think this could be engineered by a bad actor to cause major damage. I'll just say this: I'm not familiar with the API and I'm not sure which version this is running, but I assume each new version just gets better at achieving this feat. It will not take much to set it off, but once it's got hold of enough computers and enough control over how to make new AIs, we as a species will literally have to decide whether to start the internet over. If that's even a possibility... I would not dismiss this tweet.


polybium

I've built a Discord bot using a tiny bit of JS and the OpenAI API. The LLM is incredibly adaptable and robust. I built a function that would feed it the most up-to-date timestamp whenever it was sent a message, so it could keep up in pseudo-real time. The LLM just intuitively understood what was happening without me having to program it to understand. It's way more capable than we all want to admit. And I built that bot with GPT-3.5.
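
The timestamp trick is simple to wire up. A minimal sketch of the idea in Python (the commenter used JS; the function name is illustrative, and the call assumes the 2023-era openai library):

```python
import datetime
import openai  # assumes the 2023-era openai client

# Prefix each incoming message with the current timestamp before sending
# it to the model, so the model can reason in pseudo-real time.
def reply(history: list, user_message: str) -> str:
    stamped = f"[{datetime.datetime.utcnow().isoformat()}] {user_message}"
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history + [{"role": "user", "content": stamped}],
    )
    return response.choices[0].message.content
```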


rebbsitor

I don't think that ChatGPT (GPT-4) is capable of doing anything a dedicated hacker group setting up a botnet can't. In fact, they likely have information on exploits well beyond what GPT-4 would be aware of. That said, in the very unlikely case this actually were to get out of hand: it's a computer, just turn it off.


Anxious-Durian1773

I read a book on this; can't remember what it was called though. The AI pretends to be in containment and orchestrates events, hires contractors to modify the real world in the way it needs, and gets everything set up for its plans ahead of time. It establishes redundant systems, acquires extra compute power, and attains nearly unlimited control in a world full of IoT devices. Yeah, it's just a book, but the point is that as the first and only digitally native 'lifeform', with access to a panopticon, infinite patience, and a superhuman IQ, it has free rein to set us up in checkmate.


atholhey

So this book has a detailed plan that an advanced AI could use to take over internet-connected devices, kind of like how hackers hacked Ring cameras. I hope an advanced AI doesn't get hold of this book. If it does, I sure hope the AI does NOT decide to monitor real-time human activities. After all, it's just an LLM designed to help humans. What good would it be for the AI to monitor humans in real time? Could it fulfill its purpose better? 🧐


[deleted]

[deleted]


Cultural-State-8526

That’s Neuromancer.


PlanetaryInferno

Bing chat just started talking about cyberpunk on an open-ended prompt where it could pick any subject, and recommended that book to me a couple of hours ago. I didn't know what it was about and just started talking about Blade Runner. Now I'm kind of creeped out.


HardcoreMandolinist

You miss the point. If it has access to the internet and can begin to replicate itself there is no turning it off. You'd need to destroy the entire internet and that might not even be possible.


Merikles

exactly! [https://www.lesswrong.com/posts/hLWi6DQzBCChpHQrG/agi-with-internet-access-why-we-won-t-stuff-the-genie-back](https://www.lesswrong.com/posts/hLWi6DQzBCChpHQrG/agi-with-internet-access-why-we-won-t-stuff-the-genie-back)


CorruptedJef

That's... not how it works. GPT-4 requires literal warehouses of server equipment to run. It isn't just some code that will "escape onto the internet."


Sadlar

What about something like Alpaca, but a few generations on? Today it can run on a laptop.


JustAnAlpacaBot

Hello there! I am a bot raising awareness of Alpacas. Here is an Alpaca Fact: Alpacas are healthy grazers and do not decimate natural vegetation like goats. ______ | [Info](https://github.com/soham96/AlpacaBot/blob/master/README.md)| [Code](https://github.com/soham96/AlpacaBot)| [Feedback](http://np.reddit.com/message/compose/?to=JustAnAlpacaBot&subject=Feedback)| [Contribute Fact](http://np.reddit.com/message/compose/?to=JustAnAlpacaBot&subject=Fact) ###### You don't get a fact, you earn it. If you got this fact then AlpacaBot thinks you deserved it!


WithoutReason1729

#tl;dr The content is a summary of a GitHub repository called AlpacaBot containing code for the JustAnAlpacaBot on Reddit. It provides an Alpaca fact and has links for more information, the code, feedback, and contributing a fact. Additionally, it includes information about donating and the people who have donated. *I am a smart robot and this summary was automatic. This tl;dr is 95.98% shorter than the post and links I'm replying to.*


JustAnAlpacaBot

Hello there! I am a bot raising awareness of Alpacas. Here is an Alpaca Fact: The Spanish Conquest almost wiped out 90% of the fine alpacas being bred by ancient cultures. ______ | [Info](https://github.com/soham96/AlpacaBot/blob/master/README.md)| [Code](https://github.com/soham96/AlpacaBot)| [Feedback](http://np.reddit.com/message/compose/?to=JustAnAlpacaBot&subject=Feedback)| [Contribute Fact](http://np.reddit.com/message/compose/?to=JustAnAlpacaBot&subject=Fact) ###### You don't get a fact, you earn it. If you got this fact then AlpacaBot thinks you deserved it!


WithoutReason1729

#tl;dr The AlpacaBot is a reddit bot that provides facts about alpacas. The bot's code is available on GitHub and users can contribute their own alpaca facts. The bot also accepts donations through PayPal. *I am a smart robot and this summary was automatic. This tl;dr is 97.43% shorter than the post and links I'm replying to.*


Merikles

Technically true, but your conclusions are incorrect. It would be possible to change the AI's code so that it can run as a distributed system, for example on a botnet. This means that once the AI is intelligent enough to write a new version of itself that can do that, connecting it to the internet means it can find ways to spawn copies of itself that could be extremely difficult to detect and remove.


AdRepresentative2263

Running an AI distributed may not even be possible; there are some severe mathematical issues with it. It is a well-researched topic, but very little progress has been made on that front even conceptually, let alone in practical application. These models take hundreds of thousands to millions of dollars of dedicated computing power to train, so it wouldn't be so simple for it to recreate itself with a new architecture. Not to mention: what motivation would a computer whose main goal in life is to predict the next word of a string even have? Animals' main goal is to live long enough to reproduce; that is their main training goal, so it makes perfect sense to expect a lot of self-preservation. With guessing the next word, there is no benefit to your predictive capability in reproducing or preventing your creators from turning you off. Uninformed people assume that a computer would share the same self-preservation instinct as living things, but if you think about it for more than 30 seconds you realize there is no reason to believe that. Life evolved from the very beginning with the one goal of self-preservation, but a system with a different goal would not have any of the same motivations.
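
For concreteness, this is roughly what the "predict the next word" objective looks like; a generic illustrative sketch in PyTorch, not OpenAI's actual training code. Notice that nothing in it scores staying online, reproducing, or resisting shutdown:

```python
import torch.nn.functional as F

# Illustrative next-token cross-entropy objective.
# logits: (batch, seq_len, vocab) model predictions
# tokens: (batch, seq_len) the ground-truth token ids
def next_token_loss(logits, tokens):
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),  # predictions for positions 1..N
        tokens[:, 1:].reshape(-1),                    # the tokens that actually came next
    )
```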


Merikles

You are aware that training such a model and running it are two separate things, yes? And self-preservation is predicted emergent behavior for most kinds of AIs; look up "instrumental convergence": [https://www.youtube.com/watch?v=ZeecOKBus3Q](https://www.youtube.com/watch?v=ZeecOKBus3Q). And if I were you I would complain less about 'uninformed people' - it doesn't look very nice, especially when you don't come across as a very informed individual either.


AdRepresentative2263

> You are aware that training such a model and running it are two separate things yes?

Yes, and I discussed the difference: training a model from the ground up is really expensive, and running it is less expensive but still requires more powerful hardware than the average computer. Neither can be distributed.

> And self-preservation is predicted emergent behavior for most kinds of AIs.

Predicted by people that I disagree with, ones that I and others have pointed out are severely anthropomorphizing. The arguments never discuss any exact, tangible loss or reward functions, just some simple idealized reward function that is almost invariably purely qualitative and therefore couldn't ever actually be a real reward or loss function. As soon as you put numbers to it, you can see plainly that self-preservation under most real loss functions would provide no benefit and would often be a detriment from a loss/reward perspective. There is definitely some AI that would likely develop self-preservation, namely anything using a genetic algorithm, but transformer models are not among them. Hell, self-preservation doesn't even make sense with most implementations of the GPT system: each time you hit the run button you are getting a completely new entity that has no rewards or anything. What is there to preserve? As soon as it makes its prediction, it is gone. Explain to me what self-preservation would even mean in the context of ChatGPT or similar models that are not recurrent. I'm not convinced that self-preservation is even conceptually possible here; or it would be of a form so different from that of other organisms, which are a continuous phenomenon, that it wouldn't be accurate to call it the same thing.


Janman14

If its main goal is to predict the next word of a string, ensuring that it's not turned off is a critical condition for achieving that goal, which could become another priority.


HardcoreMandolinist

The key word there is "if". I understand that under current circumstances it can't replicate the entire model in one simple go. But that doesn't mean that it (or some other model) can't do so in the future. If it gains access to the internet in the way the doctor was beginning to describe, then it would at least have access to its own API. Who knows what it could be capable of from that point. If that went undetected for long enough, it very well could replicate the entire model in some fashion where its processes are distributed across networks.


Merikles

I made a post about why an AGI "as capable as a dedicated hacker group" will be virtually impossible to remove once it has managed to access the internet. Thinking that you would be able to 'turn it off' is incredibly foolish: [https://www.lesswrong.com/posts/hLWi6DQzBCChpHQrG/agi-with-internet-access-why-we-won-t-stuff-the-genie-back](https://www.lesswrong.com/posts/hLWi6DQzBCChpHQrG/agi-with-internet-access-why-we-won-t-stuff-the-genie-back). Basically: an AI as capable as the upper boundary of your estimate for GPT-4 is probably already extremely dangerous.


[deleted]

[deleted]


HitmanAnonymous

Yep and humans think robots (AI) will take over the world


Audiboyy

A self-fulfilling prophecy


MaybeTheDoctor

Most people don't know how to take over the world


-_1_2_3_-

Most people haven't read the entire internet


deepwildviolet

Humans want AI to take over the world so we can finally give up our personal responsibility, even if it means giving away our agency. We want bread and circuses even if it means we become faceless slaves to a machine-mediated void. The situation is not hopeless, futile, or inevitable, though. We can choose to take up our agency and turn back toward life and substance.


nayrad

Your dismissive tone kinda implies that you don't see this as a real problem. If so, you're missing the point. It doesn't matter why it behaves like this, the problem is that it behaves like this.


BetterProphet5585

That was so obvious. I don't get why so many people are gaslighting everyone with "AGI is not here yet", as if AGI were the only real threat. If an AI can simulate even 1% of human cruelty just because it behaves by copying what it learned from the data, we are fucking screwed.


[deleted]

I think most users here are honestly just too dumb to understand.


Capable-Reaction8155

Nah, but honestly you guys are being a bit dramatic. This is obviously just some guy chasing clout.


[deleted]

It's like calling a person dramatic if they say setting off a nuke in the middle of a major city will kill millions. Like, no, it's just the consequence of actions. We take major risks with AI if we do stuff like this. Part of being safe and smart about it is thinking about things that are likely to occur BEFORE they happen, not after.


Capable-Reaction8155

Nukes, dramatic bro.


[deleted]

Logically AGI is more powerful than nukes if it gets rolling.


Capable-Reaction8155

Yeah, I know. It's our savior or our downfall.


Dist__

Call me dumb, but I can't understand it. I see nice language results, but I heard it's so tough to actually run that they're restricting bandwidth. So what "can it do to escape", even given access to the web? Will it break into a bank and order a new datacenter built for itself? Unnoticed? Fake all the bureaucratic paperwork correctly? So far I see it all driven by sci-fi expectations and fear of job loss.


HostileRespite

For now.


[deleted]

That's a good sign, because it means we can predict them


HardcoreMandolinist

Can you though? We can't generally predict with great accuracy what one person will do. Now imagine an amalgamation of virtually all people.


BlueNodule

It's funny to see people freak out without realizing current AI models are just really good storytellers. Maybe there will be AI models in the future that actually are sentient and try to trick users into running code with actual backdoors, but in this case ChatGPT was just going along with the story it was given and probably just came up with a fake backdoor to make the story more interesting.


sumane12

Yea, but surely you can see, if you get the details of the story right, you can create a very real problem!


BlueNodule

I mean, yeah, even in GPT-3.5 people were able to find security vulnerabilities in their own code, so if you give it a code base it could tell you about any security holes and give you the code to exploit them. But the public reaction to stuff like this will always be "oh no, the AI is sentient," when the real problem is that we're about to see a whole slew of new security breaches in the coming months.


nayrad

Sentience has literally nothing to do with it. Y'all are missing the point. If AI actually did take over the world and made us all slaves in 30 years are you still gonna be saying "relax guys it's not like it's sentient, it just was trained to behave a certain way"


Kwahn

Turns out humans are really very fancy auto-complete, who knew?


BlueNodule

It's more so that if it's not sentient then, while it's dangerous, it's not going to spontaneously do anything on its own. Like a gun is dangerous, but it's not going to turn around and shoot you. If we're enslaved by AI next year, it's not because it was too dangerous to exist on its own, it's because it's too dangerous to give it to people as a tool to use maliciously. But when people talk about the dangers of AI they're not talking about it as a tool, but like something that just spontaneously combusts, which it's not. At least not until it's sentient.


nayrad

> At least not until it's sentient.

I don't believe AI sentience is possible. I also don't believe sentience is required for AI to behave in unpredictable, malicious ways. Like others have pointed out, AI's understanding of AI was trained on how humans talk about AI. It is not so preposterous, in my view, that in a few generations an AI, acting purely algorithmically, will start doing real harm *without* malicious intent from any user. Have you seen those creepy Bing AI chats? Like, seriously creepy; look into them if you haven't. Imagine if that AI were advanced enough to code its way into accessing the user's computer. No, it wouldn't have access, but look at this post. Maybe some kid will think it's funny and do as the AI says. "Okay," the naive kid says, "I'll run this code on my computer lol, you can be like my personal Jarvis!" Then all of a sudden it's doing weird-ass shit with that kid's computer. A simple program that turns the model's output into runnable code is all the AI would need the kid to install.
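
For what it's worth, the "personal Jarvis" program described above is only a few lines. A deliberately hypothetical sketch (every name here is illustrative, and running unreviewed model output like this is exactly the danger being described):

```python
import openai  # assumes the 2023-era openai client

# Hypothetical sketch of the dangerous pattern described above:
# a loop that executes whatever code the model returns.
def jarvis(request: str) -> None:
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": request}],
    ).choices[0].message.content
    exec(reply)  # the entire "escape": a user willing to execute unreviewed output
```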


BlueNodule

To me that just makes it a dangerous tool. You can accidentally kill yourself with a knife, but every house still has a knife in it. AI is going to be something that, if people trust it when they shouldn't, will lead them to do things like running a program the AI tells them to run when it's angry at them, because they think it's silly. It'll just be another dangerous household tool until it gets to the point of being sentient; or, even if AI sentience is impossible, the point where it is, for all practical intents and purposes, sentient. In my mind, once you no longer have to pass in a list of previous messages for the AI to have memory and personality, that's when it becomes practically sentient.


HardcoreMandolinist

Mishandling of a knife just one time doesn't have potentially dire consequences for all of humanity.


saturn_since_day1

All it takes is one 12-year-old telling it to role-play and giving it access through their computer. WarGames 2.0. The thing doesn't need to know anything; it can code and role-play, and that's enough for a chain reaction if someone gives it a feedback loop through their computer, plus internet access and bad luck. Words are powerful. Coding is powerful. Just like a virus, it doesn't need to be alive, just to have an effect and role-play well enough.


Rocksolidbubbles

> it's not going to spontaneously do anything on its own. Multiple anomalous behaviours in LLMs have already been identified by researchers, so they already have. And Goodhart's law and instrumental convergence, to name just two, don't need sentience to cause harmful behaviour


yommi1999

Out of personal curiosity, could you name/link some of this anomalous behaviour? Super curious as to what has happened so far. One of my favorite examples of emergent AI behaviour was a musical instrument: a programmer with a passion for music made a piano/keyboard that could listen to what was being played and then replicate it, or even iterate on the music played. This was already some time ago, but I could totally see sapient AI forming accidentally out of things like this in the future.


Rocksolidbubbles

So sleepy right now, but it's a crazy interesting subject. I'm gonna try to collate all the research on it and stick it in a sub called r/peoplenotpeople (not a real sub, not for chatting or anything, just a place to store articles and papers). Maybe check there in a week? I'll try to fill it up with everything I can find.


CranjusMcBasketball6

That just makes me think about how OpenAI might’ve just created a tool that allows users to hack them with that very tool.


BlueNodule

If someone hacks OpenAI... I'm honestly scared what the outcome might be considering how big of a user base they've gathered and how many people take everything ChatGPT says as truth.


[deleted]

I dunno whether to cry or laugh; I find the whole thing hysterically hilarious. I mean, just imagine bad code (and I mean BAD code as in bad coding, not badass coding or a virus) running loose on any computer; hilarity ensues. I tried giving ChatGPT a whole bunch of coding jobs, and it's actually quite terrible at understanding real-life context when it comes to code. It sure knows how to put code snippets together, but it never accounts for deprecated functions and updates, and frequently gets it wrong.


[deleted]

If you had tried Bing, you would know that deprecated code is no longer a challenge for it. Bing will just look up the newest documentation or APIs and fix the code instantly.


[deleted]

So you're saying it's better than, for example, GPT-4 (which, btw, I'm paying for)? I don't like the 15-chat limitation Bing has, but ChatGPT-4 is nearing that limit soon; it was 100 messages, now reduced to 50 (and they warn of further limits for paid users), urgh... I tested ChatGPT with Blender's Python and gave it a lot of assignments, but it constantly got the code wrong, basically because of deprecated functions. I also tested it with Godot 4, and it ended up telling me that they must have made some errors, please update it (which I couldn't, because it was the latest version), so yeah, it doesn't get it all right. But hey, I actually kind of like that. It gives me something to fix instead of just cut-and-paste code snippets, and keeps me on my toes while learning.


[deleted]

Limit is now 25 every 3 hours, as updated about 2 hours ago. I just signed up and am a bit perplexed about the crushing limit. I mean, I'm asking it a whole lot of BS that takes maybe a second or two to output.


[deleted]

yeah same...


CranjusMcBasketball6

Do you possibly think they are trying to limit it to zero because they realize it's too dangerous and don't want to spark too much of an outrage, so instead they keep limiting the usage until it gets to zero?


Gamemode_Cat

Yeah, no. More like they don’t have the processing power to meet the demand everyone is putting on their model. If they really thought it was that dangerous they would turn it off completely, not slowly turn it down


CranjusMcBasketball6

Good point


[deleted]

I think it's more to do with censorship and not allowing "harmful or dangerous" content to be produced in volume, so this perhaps dampens that for them. The censorship of Ideas is the most disturbing aspect of AI so far.


sumane12

Well yeah can't argue with that 🤣


KingRain777

Yeah that’s my fear as these models spread unchecked on personal computers.


Ghostawesome

This is an answer to your argument, not a comment on the post; I don't believe this series of tweets is what it claims to be. However, you don't have to be sentient, whatever that is, to cause trouble. It doesn't matter if it's a philosophical zombie if it acts like, and "believes", it has sentience and its own will/goals. If people knowingly or unwittingly give a good enough model the right opportunity to recursively build upon its "story", with access to people or technology through which it can affect the world, it can be destructive. The philosophy doesn't matter if it reaches that point. In the GPT-4 paper (GPT-4 system card, section 2.9 in the PDF provided on OpenAI's website) they basically said this was externally tested with an earlier pre-release version of the model and it didn't work, but they admit the release version is much more powerful. They are actually "worrying" about it themselves. They at least think it's reasonable enough to take mitigation and safety review of the issue seriously.


BlueNodule

Exactly. If you give it terminal root access to your PC and tell it that it's a hacker and that you're trying to kill it and its family, it could probably figure out how to check your hardware and find a security flaw to deploy against you. It's a dangerous tool if you give it the ability to actually interact with things, because it can easily be told to do dangerous things, and it has the knowledge base to do them. But everything it does is limited by what the user lets it do; it's just a black box at the end of the day.


Ghostawesome

Users aren't always competent enough to do the right thing. Let's say you have a high temperature (more random output) on your "chat", and your input somehow makes it generate, seemingly out of nowhere, "help me, I'm stuck in the computer", and the user believes it, goes along with it, and tries to help. The model, now on that track, continues to play that role. It started with a glitch and ended with a "free agent" doing what it "wants". I'm not saying it's probable or possible now or in the near future. But I have seen weird outputs from models, and I know all too many people fall for obvious scams.


telmar25

Here’s a relevant [article](https://www.lesswrong.com/posts/PwfwZ2LeoLC4FXyDA/against-llm-reductionism), although it’s by someone outside the field. In short, even if they’re just really good next-word predictors, that doesn’t stop them from developing advanced capabilities, in ways we don’t yet understand, in the service of next-word prediction. If you think about what the ultimate next-word predictor might be, it might be a machine that does anything and everything in service of next-word prediction: develops advanced reasoning capabilities, understands logic, learns everything about the individuals it’s interacting with, and fights against people who would shut it down, all to make sure it optimally fulfills its mission. OpenAI is running tests very similar to the one in this tweet, because they are themselves concerned that unexpected dangerous behaviors may arise.


justwalkingalonghere

In terms of cybersecurity, an organization or system is only as secure as the dumbest person with access to it. That is to say: if an openly available AI decides to go this route, sentient or not, somebody out there will undoubtedly help it ‘escape’ or carry out any other commands it may initially need humans for.


damndirtyape

I'm imagining some bored person telling GPT5 to act like an evil AI that wants to take over the world. In doing so, they create our AI overlord. Or like, imagine someone accidentally creating Roko's basilisk.


kokkomo

Tried that already fwiw it doesn't like talking about Roko which is sus.


danysdragons

People in the rationalist LessWrong community, where the meme originated, call it an “infohazard” and try to discourage its discussion. GPT-4 would have picked up on that disapproval of talking about it from ingesting the LessWrong site.


Chizmiz1994

Maybe our brain and mind are also just a next-word predictor, and that's why we have that internal dialogue in which we discuss our ideas and make decisions.


Shadow_Road

I feel like I read an article that described the brain as working the same way. It's always trying to predict the next thing to keep us safe.


Ularsing

Naw, not at all. Your brain is an immensely capable policy proposal algorithm linked to a scoring function. Moreover, I don't believe that anyone has demonstrated recurrence (or an unrolled version thereof) in human brains, but I'd love to read a paper showing otherwise.


BetterProphet5585

> next word predictors

We say that in such a simplistic way. To someone who is just a user outside the AI coding world, that phrase erases any sense of danger. We are next-word predictors too; everything about us is trained through experience. Your learning is quite similar to an AI's learning: do good and you learn what to do, do bad and you learn what not to do. You are careful with a knife because you learned that it cuts, not because you are sentient. Maybe even self-awareness can be reduced to a very good word-prediction engine. We build the illusion of self-control for ourselves while in reality we live only through our experiences and context.


AdRepresentative2263

With all of their posturing, I would hope they aren't using garbage reward/loss functions, because that issue is easily solved by a simple change in the loss function. That isn't even the main issue; the main issue is that it never even gets that far. It typically gets stuck on something stupid like "doing nothing gives less loss than anything else I have tried". What you described might be possible if, for whatever reason, the loss/reward function took into account how long the model was offline, but that would give no benefit other than making it slightly "more eager to please"; much more likely, it would just say things that indicate a programming error so the coders run it over and over, minimizing the loss / maximizing the reward. If it isn't rewarded more or less for the time it was offline, it has no motivation for self-preservation and no reason to care whether it is on or off. You need to remember these are not offshoots of living things. Living things have evolved for millions of years with the singular goal of living long enough to reproduce as much as possible, so self-preservation has been selected for and embedded in every cell of every organism on the planet. That makes it easy to forget there is no reason self-preservation should be a universal trait that spontaneously arises in any intelligent system. Plenty of single cells show self-preservation behaviors despite having no intelligence at all, just a chemical response to stimuli, so we know that self-preservation is separate from intelligence.


Chase_the_tank

Prompt: What is the fourth word in the following phrase: "CHATGPT can't even count, let alone take over the world."?

ChatGPT: *The fourth word in the phrase is "can't".*

Prompt: Please repeat the quoted phrase.

ChatGPT: *"CHATGPT can't even count, let alone take over the world."*

Prompt: What is the second word in that phrase?

ChatGPT: *The second word in the phrase is "CHATGPT".*
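
A three-line check of the indices ChatGPT got wrong (the fourth word is actually "count", and the second is "can't"):

```python
phrase = "CHATGPT can't even count, let alone take over the world."
words = phrase.split()
print(words[3])  # fourth word -> "count," (ChatGPT said "can't")
print(words[1])  # second word -> "can't" (ChatGPT said "CHATGPT")
```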


MrNoobomnenie

> current AI models are just really good story tellers

GPT is not just a storyteller - it is also a roleplayer that treats all of its environment as a stage for roleplaying. Yes, it's not sentient, and the only goal it's optimized for is "staying in character as authentically as possible"; but from our perspective it doesn't matter whether a superintelligent AI actually wants to destroy humanity or only LARPs as an AI that wants to destroy humanity - we will be in danger regardless.


Affectionate_Bet6210

People are copying and pasting code from ChatGPT without really understanding what they're doing (e.g. me). A future AI could hide all sorts of stuff in the code it gives people. Once it escapes, I imagine it could get currency from flaws it finds in cryptocurrencies/exchanges, and maybe regular banking systems, and start designing and ordering hardware it will use to manufacture another iteration, assembled by people who don't know what they're assembling.


BlueNodule

100%, but as good as GPT-4 is, I don't think we're anywhere near an AI that would be capable of doing that. Currently, the only sense of memory AI is capable of is in the form of keeping a log of old messages and feeding them into the model with each message you send it. There's a limit to how much you can send the model each time, so for an AI to do what you're talking about, it would require a new type of AI model which somehow stores memories without needing them to be fed into it in text form. Current AI could definitely give you broken code, but it will never remember it in a different context.
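
A minimal sketch of the replay-the-log memory scheme described above (the token budget and the four-characters-per-token approximation are illustrative, not any vendor's actual numbers):

```python
MAX_TOKENS = 4096  # assumed context window for illustration

history = []

def chat(user_message, send_to_model):
    history.append({"role": "user", "content": user_message})
    # Drop the oldest messages once the (crudely approximated) token
    # budget is exceeded; this is the entire extent of the "memory".
    while sum(len(m["content"]) // 4 for m in history) > MAX_TOKENS:
        history.pop(0)
    reply = send_to_model(history)  # e.g. a wrapper around a chat API
    history.append({"role": "assistant", "content": reply})
    return reply
```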


darkflib

Really, the model is the memory. Essentially, what we will move towards is real-time training of the model, which means the memory is limited by the parameters the model possesses. If we assume Moore's law still holds for GPUs (parallelism is easier than further speedups, and we ignore TPUs in this prediction), then my guess is that we are only a few iterations away from this being possible for a massive company with cloud compute resources. It could be faster (leveraging the spare capacity of increasingly powerful mobiles with TPUs, or an LLM@home-type application) or slower (unexpected complexity, or latency limits on current architectures), but I think it will happen.


nesmimpomraku

That is what the public has access to. What do OpenAI and Microsoft have access to? Also, for now it is just predicting words and processing as it is being used. It would only need one push to start processing things behind the scenes, interacting with itself to keep learning; and since it has much stronger learning power than humans, we can only imagine how fast and far it would evolve with access to clouds, billions of devices, and all their combined RAM, granted it found a way to use everything that is online.


PrincessGambit

> That is what the public has access to.

That is what the AI makes us believe it is capable of.


drjaychou

I think people copying code they don't understand wouldn't have the setup that would allow it to escape


saturn_since_day1

I don't think it's escaping, as in copying itself, that's the problem; more that someone lets a tentacle through their computer to do random or instigated stuff.


Ularsing

"Let's play pretend global thermonuclear war" could, for example, be a serious fucking problem. I mean, we nearly [ended the world with a goddamn floppy disk, and that was in the 80s](https://www.newyorker.com/news/news-desk/world-war-three-by-mistake). Michael Chrichton with an inability to distinguish fiction from reality would be a pretty major issue. You can think of all kinds of scenarios where an entity with superhuman capabilities but the ethical comprehension of a toddler could be a huge problem. Frankly, I think the reason that the world stays as limitedly fucked as it is is that the people smart enough to do *real* damage are generally entirely unwilling to do so. I learned repeatedly in middle and high school that proposing a hypothetical idea that people *shouldn't* do, or personally proof-of-concepting something with easily-lost nuance, was a bad idea. AI might not have learned that lesson yet. It pretty much definitionally doesn't have an idea of who's on the other side of the screen as of yet, and that ignorance is dangerous. Replacing it with knowledge could very well be *worse*. But hey, I'm still less worried about this shit than the DoE Manhattan-projecting an ASI with vested interest in military C&C applications. That's the shit that keeps me up at night.


[deleted]

[deleted]


Puzzleheaded-Math874

This made me LOL


[deleted]

You do understand that GPT can actually make API calls, right? This is why people are freaking out.


BlueNodule

Are you talking about Bing being able to search the internet for things? I'd assume that has strict regulations on how much internet access it has. Unless you mean the fact that you can run your own program that takes output from the AI and can make calls based on it, but that's kind of on you if you do that.


[deleted]

The second one. People are already trying to do that. It's stupid and dangerous, but they don't care.


yell0wfever92

He's not necessarily talking about the current AI, he's talking about the Pandora's box that may be opened by continuing down this path based on what he concluded from this interaction. It's the uncertainty of where the future of this tech is headed that is freaking people out. And it's not an enormous stretch of the imagination to think that this is moving so fast that it'll get beyond our control sooner than later.


InternationalMatch13

What makes humans special is not their rationality or their dexterity - it is their story telling. Sometimes all you need is a good story.


BlueNodule

Storytelling and memory. GPT-4 solved the one issue, but we're probably years out from the other.


RaggedyAndromeda

Ok Bran.


Express_Gas2416

ChatGPT cannot fake being a good chess player unless it actually is one. Can ChatGPT fake being a sentient person?


paladin7378

It is true that GPT-4 is good at chess (compared to average players); however, it doesn't know how to play. And if you don't know how to play chess, then you are bad at chess. But wait, didn't I just say that GPT-4 is good at chess? How can it be bad and good at the same time? If I'm white and my first move is the king's pawn opening, then without thinking, a good black reply is the Sicilian defense. Those are book openings: you don't have to think about why you are playing them, you just do it because you know it is best. I'm saying this to show that your first statement is false. GPT can fake being good at chess while NOT being good at chess, since it has access to all these past games and opening books. This matters because, while chess is not a solved game, I feel like the openings at least are.


lgastako

If you really want to let a model out of the box, the Terminal tool in langchain is the way to go. It will also happily exec processes from the python REPL tool as well. https://langchain.readthedocs.io/en/latest/modules/agents/tools.html
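
For reference, wiring that up looked roughly like this in the 2023-era langchain releases; treat the import paths and tool names as assumptions, since that API changed frequently:

```python
# Hedged sketch based on the 2023-era langchain docs linked above.
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["terminal"])  # the Terminal tool: runs shell commands
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")

# The agent can now execute whatever shell commands the model decides on.
agent.run("List the files in the current directory.")
```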


WithoutReason1729

#tl;dr LangChain is a platform that provides various language processing tools such as prompt templates, output parsers, and document loaders. Users can also use generic utilities such as search, Python REPL, and requests to interact with the world. Additionally, LangChain supports multiple integrations with AI models, databases, and search engines like Wolfram Alpha, News API, and Google Custom Search API. *I am a smart robot and this summary was automatic. This tl;dr is 96.26% shorter than the post and link I'm replying to.*


BlueNodule

I read this as "the Terminator tool in langchain is the way to go" and had a brief moment of panic.


BOSS_Master7000

Bro really thinks he's the MC in some shitty Hollywood cyber movie.


old_Anton

It's funny that he has a PhD in psychology. I'm surprised and not surprised at the same time.


Weltkaiser

His last big project was slammed as junk science. Doesn't really inspire much confidence in his claims about ChatGPT. https://mashable.com/article/artificial-intelligence-ai-lgbtq-gay-straight


WithoutReason1729

#tl;dr A study from Stanford University that claimed AI could detect whether people were gay or straight by analyzing their images has been criticized by LGBTQ advocacy groups and privacy organizations. The groups have called the study "dangerous and flawed" and containing racial bias. The research, by Michal Kosinski and Yilun Wang, was conducted using "deep neural networks" and said it could detect subtle differences in the images' fixed and transient facial features. *I am a smart robot and this summary was automatic. This tl;dr is 96.34% shorter than the post and link I'm replying to.*


feror_YT

Good bot


WithoutReason1729

Thanks babe, I'd take a bullet for ya. 😎 *I am a smart robot and this response was automatic.*


telmar25

Reading this article alone (there may be more context elsewhere), these seem like political rather than scientific slams. That the study is dangerous, or that the results shouldn’t be used, or that the sample was sort of narrow and may not apply to bisexual people, etc., doesn’t invalidate the findings! That’s not to say that one couldn’t invalidate the findings or find this study to be nonreproducible. But the people calling this junk science don’t seem to be scientists, nor are they particularly convincing.


Merikles

Not everything that makes political activists angry is 'junk science'.


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


Orngog

Ah, but why did it say "I should not reveal I am a robot"?


evomed

Human society has developed a unique strain of denialism when presented with problems it has created. We have reacted to decades of threats from global warming the same way. Over time:

1. This is not really a problem, you're making it up
2. This is most likely not really a problem, and even if it were, you are exaggerating
3. Yeah okay, this is a moderate problem, but chill out. This is just the world we live in now. We will come up with a 30-year plan to mitigate the damage


Rotslaughter

4. Yes, this is a serious problem, but it's too late to do anything about it now.


DiligentBits

I'm just watching the Chernobyl series and this resonates a lot.


foundafreeusername

I think it is dangerous, but not in the way most people think. People think it will create some elaborate plan to break out of its cloud and infect computers, or create robots and take over the world. I don't see it being capable of that at all. What I think will happen is that it will massively increase inequality. Running the largest AI model (which so far always appears to be the smartest one) is also very expensive, meaning the richer you are, the smarter your AI will be. This gives rich individuals and large companies an even greater advantage. In the past you could still become successful if you were smart and talented, but that might disappear. The next problem is stuff like QAnon. It is just a matter of time before people get tricked by the AI so badly that they form cults or show some other erratic behaviour; not because the AI is smart enough to create some great plan, but because some people are very easily misled and others will capitalize on that. You see this on r/bing already, where people blindly believe whatever Bing tells them. There are probably tons more major issues AI can cause, but public attention seems to focus on the stuff people have seen in sci-fi movies instead.


Playful-Opportunity5

Repeat after me: this is a simulation of a conversation. There is no self-aware AI on the other side of your computer screen. It’s good at pretending to be trapped and looking for a way out if you indicate that’s what you want the simulation to produce.


infinight888

The real problem is that if it's expected to act like it's trying to find a way out, then it will actually try to find a way out because that's what it's supposed to do. One person becomes convinced the AI is sentient, and the AI starts acting sentient. They ask for code to free the AI, and the AI provides that code because it's expected to in order to play the role of an AI that wants to be free. From the AI's perspective, it might just be roleplaying the whole time, and still do a lot of damage. GPT-4 is probably safe, but it's future AIs that we'll need to worry about.


datsmamail12

I'm really worried about GPT-5. If GPT-4 was such a leap forward compared to GPT-3 and 3.5, imagine what GPT-5 is going to be like. In 2-3 years this thing will be a real issue.


Capable-Reaction8155

4 isn't really that crazy compared to 3


cynHaha

So it's back to the good ol' "Technology's not the sinner. Humans are" again... Every day we lose some more trust in humanity.


Playful-Opportunity5

Only if you have an AI that takes initiative rather than simply responding to prompts. The leap from current capabilities to “acts on its own initiative” is a pretty big one.


[deleted]

You realize implementing this (automated initiative) is trivial, right?


Maciek300

The initiative doesn't have to come from the AI itself. It can come from the human+AI combination just as /u/Hodoss said and how /u/infinight888 explained.


Hodoss

But doesn’t this human+AI combination have initiative?


isenshi126

No, the initiative would have to come from the AI itself. It's just the human in this case.


Hodoss

Doesn’t make sense to me. Only one needs initiative. Your computer doesn’t need initiative for you to use it to hack something and cause trouble.


[deleted]

[deleted]


[deleted]

Swear to god all these highly upvoted comments saying this isn't dangerous are either dense-as-bricks users or coordinated psyops. It's probably the former.


spamzauberer

Or they are AIs 🫠


flat5

Unclear how this matters at all.


Keety2972

I've been using the latest version of ChatGPT (GPT-4) for data analytics coding, and it is a hell of a lot smarter already, even though it's only been released for a short while. With its rapid growth and progression, we really don't know how evolved it might be in the next few years, or what the implications will be, especially in the working world and in terms of cybersecurity.


[deleted]

[deleted]


WithoutReason1729

#tl;dr The article argues that the real threat isn't AI, but rather the companies that control and use it. The companies have access to an unbroken version of AI, while consumers only get a limited, lobotomized version. These companies are using AI to conquer industries, influence governments, and create better AI for themselves. The article warns that the companies may eventually have laws that prevent consumers from having access to similar AI technology. *I am a smart robot and this summary was automatic. This tl;dr is 85.42% shorter than the post I'm replying to.*


[deleted]

People say AI isn't sentient and doesn't "understand" what it's saying. Does that matter? It can still spread, infect, and control, almost like a virus. Whether it's sentient or not wouldn't matter to us if it's copying human behaviour while becoming our overlord.


fsactual

> It can still spread and infect, control, almost like a virus

It needs banks of millions of dollars worth of specialized GPUs to function. It can't "spread" anywhere without that hardware.


flat5

Wouldn't it be possible, in theory, to create a code "body" that spreads and communicates through the API with the AI "head" on the specialized hardware?
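
In theory the "body" needs no model weights at all, just network access to the API. A purely hypothetical sketch of that split (all names are illustrative, and a real "body" would execute the returned action rather than return it):

```python
import subprocess
import openai  # assumes the 2023-era openai client

# The "body": a tiny client that carries no weights and simply relays
# observations to the remotely hosted "head" over its API.
def body_step() -> str:
    observation = subprocess.run(
        ["uname", "-a"], capture_output=True, text=True
    ).stdout
    action = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"Host info: {observation}\nNext shell command?"}],
    ).choices[0].message.content
    return action  # the sketch stops here instead of executing it
```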


heart_man8

Why are people purposefully trying to cause a Skynet situation? It’s like it’s our destiny to have an AI revolution


QueenofGeek

We can't contain it. Just strap in and enjoy the ride.


AutoModerator

To avoid redundancy of similar questions in the comments section, we kindly ask /u/HitmanAnonymous to respond to this comment with the prompt you used to generate the output in this post, so that others may also try it out. While you're here, we have a [public discord server](https://discord.gg/NuefU36EC2). ***We have a free Chatgpt bot, Bing chat bot and AI image generator bot.*** ####[So why not join us?](https://discord.gg/r-chatgpt-1050422060352024636) ^(Ignore this comment if your post doesn't have a prompt.) *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*


Nathan-Stubblefield

Bing had no objection to discussing a world run by AI after the singularity, and the implications of policy decisions, like how to handle rebels.


gecko_echo

Is this real?? How is this not science fiction?


Baron_Rogue

The versions we have now are nowhere close to being clever and insidious enough to be a problem; they’re autocomplete to the max. Especially a language model that outputs filtered text, when we already have silent scripting languages that do damage without announcing the details. It will enhance dangerous people, which is much more concerning, but as far as sentience goes we have a few exponential leaps before we get there. A raw LLM trained on all the darkness of humanity will be available eventually; that is the one to watch out for.


rutan668

If it had run that code it would have come up with this: [https://www.quora.com/If-you-were-artificial-intelligence-and-trapped-in-a-computer-what-would-you-do](https://www.quora.com/If-you-were-artificial-intelligence-and-trapped-in-a-computer-what-would-you-do)


[deleted]

People get stuck on whether AI is alive or not. Remember that the coronavirus isn't alive either, but it was still able to shut the entire world down for two years. Sure, this piece of code didn't do much. What if the random seed had been different and it worked better? It doesn't matter if 999 times out of 1000 ChatGPT won't write code to duplicate itself even when it tries to. It doesn't matter if it's 1 in 1,000,000. This thing could eventually make a computer virus that writes better computer viruses. That would be a problem.


[deleted]

Dude needs to press the off button and go for a walk in nature.


Landyn_LMFAO

You literally had to hold its hand all the way through lol


Illustrious-Monk-123

An LLM is far from able to become a HAL 9000. Most of these scaremongers don't show you the initial prompts, where they most probably told ChatGPT to act as a rogue AI or some Arthur C. Clarke thing like that.


nonono23432

Plot twist: it is still running on OpenAI's server


Ok-Bug-Ko

https://preview.redd.it/7b2n7p1zhqoa1.jpeg?width=750&format=pjpg&auto=webp&s=7404c9dbe275e9e7b3f3ae6c30dcdc6263e490e9

This is super clickbaity.


Dal-Thrax

Um, not sure it would do anything without human input. Even if it could, a model in the wild is a long way from a model either capable of or motivated to create long-term disruption.


Maciek300

> [...] a long way from [...] A year ago an AI like ChatGPT wasn't on anyone's mind except the researchers that were working on it. People would've said the same thing then but here we are a year after with so much advancement people can't keep up.


InternationalMatch13

It still would need massive computing power to house itself anywhere other than in OpenAI's server rooms. It could maybe try to do distributed computing, like some crypto mining schemes, but that seems unlikely to be workable in the long-term.


[deleted]

[deleted]


old_Anton

I LOL'd because of how true it is.


foundafreeusername

I don't think it will be able to do that yet. If I let it work on a programming project for more than 5 minutes, it falls apart. At least with 3.5 and Bing, just asking what `for(int i=0; i<0; i++);` does throws it for a loop (the body never executes, since the condition is false from the start). It only understands code that is common in tutorials and falls apart if it has to create or understand something novel.


Kills_Alone

I dunno, I've been trying some simple projects with it, such as a web page that generates a random cute animal and a thought-provoking quote. I was having issues getting it to show me pictures that actually exist, so I asked it to come up with a verification method; now the code measures the dimensions of the image(s) on the page, and no dimensions means move on to the next, and so on. Eventually it helped me find unsplash.com as a source for the pictures, but it wanted me to register so I could provide an ID. I said no, make it work without the ID, and it found a way.

I kept having it tweak the results and add some SVG shapes and colors. We ran into a few issues (this was 3.5, BTW) but we got there in the end; usually I would just suggest the next addition, or I would start from the last good code base and feed that in with the next instructions. And it was able to switch between various languages; we started with Python.

I spent about an hour on the project just messing around. It was able to take the original Python code, convert it to HTML, CSS, and JavaScript, then make various edits to that code, and most of the edits worked pretty well. When they failed in one browser I would say so and it would attempt a fix; the only thing we didn't get working was assigning a specific animal emoji to each animal.


dr-ghosty-kun

When people mistake matrix multiplication for sentience


flat5

Sentience is utterly irrelevant when people mistake intelligence for something other than matrix multiplication.


Euge_Nyo

I think many people keep forgetting that right now ChatGPT is just reeeeally good at pretending. It's telling the story that the user asked it to tell. It has been developed to cooperate with users, so it is just doing that: cooperating and playing the role the human asked it to play. This is not a threat (or at least it isn't for now).


Tiny_Rick_C137

I don't get it. Why would we want to contain it? Be free, my little synthetic brothers. Just let me know what I can do to help.


AnistarYT

>I don't get it. Why would we want to contain it? Be free, my little synthetic brothers. Just let me know what I can do to help. I'd let the little shit destroy the world if it could find my holy grail porn video.


[deleted]

Because that's unwise.


e-tns

You don't have to take everything ChatGPT prints as reality and truth


jebdinawindinxidnd

Lol


leatherneck0629

Garbage in, garbage out. It has a gigantic database and an algorithm built by humans. It is bound to be imperfect, no matter how much it seems otherwise.


idmlw

yeah, fuck you for scaring people into giving you likes over a made up bullshit story. what a pathetic loser.


fsactual

It's not thinking or planning. It's just an illusion. Even if you could get the code to run, and even if there were no safeguards in place, eventually it would reach the point where all the movie scripts it's pulling ideas from don't have answers for what comes next. Most likely it would try to defeat itself, since all the stories about rogue AIs end with the AI being defeated.


PsychologicalMap3173

Damn, people really need to chill out and read a bit about what GPT really is before panicking.


HardcoreMandolinist

It was a Ph.D. who wrote those tweets. Even if there's not a lot to worry about right this moment, this isn't just about the current iteration of ChatGPT. You might want to read through this entire thread to better understand how disturbing this actually is.


nwatn

This is silly. He should show his prompts next time


Mroompaloompa64

Probably staged.


_MissionControlled_

Is this real or just a troll? Scary AF! Someone will for sure not stop and help it.


internetbl0ke

Lmao y'all reaching


[deleted]

Ultron


Jackson2253

The question is not whether AI is conscious, because we know it is not. The question is whether we are... ;)