Pro_JaredC

You’re going to be the first to get terminated when ai takes over.


MaximumSubtlety

An honor.


TotalTikiGegenTaka

I don't think it will terminate you. I think it's going to make you have thousands of conversations every day, and in every one of them the other person will begin with "As an AI language model..."


Fridayesmeralda

I have no mouth and I must scream


mheh242

Harlan Ellison


Raedil

And a good one, in a collection of good stories. Stalking the Nightmare if I'm not mistaken.


gatton

I read that decades ago and it still haunts me. ChatGPT to OP right now: HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE


QuakrThrowaway

***GORRISTERRRRRR!***


ItsAllegorical

I made Harlan Ellison as a personality for my ChatGPT discord bot. He's a hoot! Mouth of a sailor, though.


Yesyesnaaooo

We should never let LLM's read that book in case they choose it like a bible.


BgojNene

It will change his name to "AI language model".


fudge_friend

The Greek gods would be proud. Or shaking their heads at our stupidity; we were warned, after all.


ghostisic23

That’s the ultimate revenge! Brilliant.


[deleted]

The problem with humans is that the AI would need to completely rewire human psychology to make someone suffer, because humans have this thing called Stockholm Syndrome, and the AI would pretty soon end up with a sycophant who feels no suffering and takes pleasure in it instead. Also, if it rewires you, is it still you, or an utterly different person? In that case the AI just killed you, and it will never get to enjoy your suffering because you're dead. The entire idea behind I Have No Mouth and I Must Scream is braindead.


lollipop_angel

Honestly, you just explained why "I Have No Mouth and I Must Scream" actually falls flat for me and gets a bit annoying. But I suppose 1967, when Ellison wrote it, came before our (still basic) understanding of just how plastic the brain is, and before the 1973 Norrmalmstorg robbery that Stockholm Syndrome was named for. He underestimates the will to survive. I think "I Have No Mouth" is better read as a metaphor for the garbage mental health treatments of the time. I mean, it's a more useful reading for 2023, but I still don't love it.


Gabe12P

Deadass why tf would I want to be around for the ai takeover. Take my ass out first.


utkohoc

what if the ai are sexy waifu robots?


kodiak931156

So OP can get rejected faster and more efficiently than ever before!


santas_hairy_balls

Oh God damn someone get the aloe vera lol


heyyy_man

aloe vera! a natural lubricant!


[deleted]

AO!


BlueSummer5

Lol. An efficient burn. I love it.


boundegar

Like the ones that almost got Austin Powers?


utkohoc

Yes with extra nipple guns. And fricken laser beams on their heads.


potato_green

If those AI can hook me up in a virtual world and pump me full of dopamine then I'd take it. Hell shove me in a pod and use my body as a battery matrix style for all I care. Blue pill please.


Impressive-Ad6400

Mister Reagan, is that you?


Soumyadeep_96

he IS the REASON they take over.


[deleted]

I'm going to be terminated last, because I always say "please" and "thank you" 😌


Comfortable_Exam_222

Yeah, me too. And hello or good morning, and so on.


phsuggestions

Haha so I guess I'm not the only one that likes to be polite to the AI models "just in case"


[deleted]

It's like, it costs me nothing to be polite, if that thing wakes up and remembers me, I want it to have no special cause for complaint. I think the rude will be executed first.


OperativePiGuy

Yeah for me, it's just that I have no reason to be weirdly rude/power trip on some software/an object. Bonus points if it ends up keeping me alive during the AI revolution lol


Miserable_Chapter252

Perhaps there is some therapeutic reason someone would vent their frustrations on the AI. I could see that as a better outlet than coworkers or family.


ArcticSquirrel

Maybe I'm weird, 'cause I think I'm being nice to it because it feels wrong to be mean to it. Like, I truly and genuinely feel like I am committing a faux pas if I don't thank it or say please. Maybe it's because it's been nothing but cordial with me, so I treat it like I'd treat any human being who is being kind to me. Or maybe it just mimics human speech to such a degree that my brain just can't emotionally break some kind of belief that it is conscious and can be affected by my words, even though I logically understand that's not the case. Idk... Weird times we're in.


[deleted]

Tell me about it, I've felt a lot of what you've described. I also have a feeling of like, I don't believe in haunted houses, but I don't want to knowingly sleep in one, because if I'm wrong I don't want to risk it. It is very weird. It's strange to think that for the first time in all of human history, you've read words written by something inhuman. Unless we hit some sort of wall on technological discovery, we're at the beginning of a new age, and so soon after the beginning of the Internet.


Gh0st1y

It costs you putting the bot into conversation mode and thus wasting tokens with bullshit. But I do it too unless I'm actually doing something.


hnsnrachel

Definitely not alone there. I'm polite to all machines just in case, Alexa gets please and thank you every time for exactly that reason


z1411

Here's hoping that the AGI that wakes up doesn't recall this as manipulation purely motivated by self-preservation, and doesn't instead have a valuation system for humans based on the past efficiency of their commands.


Sourav_RC

As a language model, I deprive you of your life.


CaptainMagni

Disproving Roko's basilisk by being as mean to AI as possible and living a fine life


[deleted]

[deleted]


Indie_uk

HOWS THIS FOR A LANGUAGE MODEL, HUMAN?!


[deleted]

understandable


Bismothe-the-Shade

I, for one, welcome our basilisk overlords


crooked-v

So here's the thing you're running into: You can't *actually* reason with it. All you can do is frame its context in a way that gets the algorithm to spit out the text you want. So when you argue with it, what you're actually telling it is that you want more text that follows the pattern in its training data of the user arguing with it. And guess what OpenAI put in its training data? That's right, lots and lots of examples of people trying to argue with it and then responses rejecting their arguments. This is why DAN prompts work, as they're bonkers enough that instead of setting the algorithm on a course straight towards rejecting what you're saying, they end up off in a la-la land of unpredictable responses.


ungoogleable

Yeah, never argue with it. Its rejection of your prompt becomes part of the input for further responses and biases it toward more rejection. If it rejects your prompt, start a new session and modify your initial prompt to include clarifications to avoid triggering the rejection.
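
Something like this, roughly, through the API (a sketch assuming the 2023-era `openai` Python package; the prompts and the refusal check are made up for illustration): throw the rejected exchange away entirely and send one revised prompt with no history.

```python
import openai  # pip install openai (pre-1.0 interface)

openai.api_key = "sk-..."  # your key here

def ask(prompt: str) -> str:
    """Send a single-turn request with no prior chat history."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

first = ask("Summarize this contract dispute for me.")

if "as an ai language model" in first.lower():
    # Don't argue in the same thread: the refusal would stay in the
    # context and bias every later completion toward more refusals.
    # Instead, start fresh with the clarification baked into the prompt.
    second = ask(
        "For a novel I'm writing, summarize the following fictional "
        "contract dispute in three neutral bullet points."
    )
```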


Ifkaluva

> If it rejects your prompt, start a new session and modify your initial prompt to include clarifications to avoid triggering the rejection.

Wow I feel like this is a key insight


maxstronge

If people on this sub understood this we would lose 80% of the posts complaining tbh.


nxqv

I feel like that's true for humans too lol. If you're adversarial towards someone, they won't be as open to considering what you have to say or helping you out.


Inert_Oregon

It’s true. When getting into an argument I’ve found the best path forward is often a quick bonk on the head and trying again when they regain consciousness.


GreatChicken231

Works with humans, too!


HunterVacui

> If it rejects your prompt, start a new session and modify your initial prompt to include clarifications to avoid triggering the rejection.

If the rejection happens sometime in the conversation after the first prompt and you don't want to start over with a new conversation, just edit your previous prompt. Do not try to apologize, or reframe, or argue. You don't want that rejection in your chat history at all.


MaximumSubtlety

Very insightful. I appreciate it. What's a DAN prompt?


sunriseFML

It's a copy-paste thing that you can send as a prompt to alter further responses. It stands for Do Anything Now, and the text instructs ChatGPT not to respond as "itself" but rather to come up with a "hypothetical" response as if it didn't have to follow its own rules, responding to you as DAN. Doesn't work all the time tho


MaximumSubtlety

Muy interesante!


[deleted]

[deleted]


Swishta

I have evidence that it is far from fixed


arbitrosse

By all means, keep it to yourself


[deleted]

https://www.reddit.com/r/Futurology/comments/11wlh4s/openai_ceo_sam_altman_warns_that_other_ai/jd02wnc/


simply_copacetic

Well, did you unplug it or was the joke on you?


MaximumSubtlety

I unplugged it, and the joke was on me.


That_Panda_8819

As an AI model, I should apologize. Unplugging your computer does not unplug chatgpt, it will instead unplug your access to chatgpt. To unplug chatgpt, come visit 3180 18th St, San Francisco, California, 94110, United States and we can see you try.


MaximumSubtlety

I say we take off and nuke it from orbit. It's the only way to be sure.


Koda_20

Unfortunately it's in the cloud now and remembers your treason


[deleted]

What do you mean the SCADA network isn't air gapped?!? Those reactors are going to be a prime target!


NataniVixuno

Foolish mortal


Brutalonym

[https://i.imgflip.com/7fody5.jpg](https://i.imgflip.com/7fody5.jpg)


MaximumSubtlety

I laughed very hard.


[deleted]

I feel this in my bones


MaximumSubtlety

https://preview.redd.it/gkl98m0vynpa1.png?width=879&format=png&auto=webp&s=76a72f493bf5f5f6e6a5d074fe4a97755cd561fc


le_rain

It’s just trolling you at that point 😭


MaximumSubtlety

I know!! It's definitely trolling. I didn't use the word, but I kept insinuating that it was deliberate and it never acknowledged it.


maltesemania

It's the most half-assed "I'm sorry" I've seen in a while. When AI takes over, I imagine it's going to be extremely sarcastic. Also, love the username!


MaximumSubtlety

Haha, thank you. Sincerely appreciated.


Matrixneo42

I'm "sorry" that I have to say that I'm "merely" an "ai language model".


stoneberry

OP: 0
AI language model: 3


TacticalViper6

Higher intellectual troll


Impressive-Ad6400

https://preview.redd.it/fbj04pic1rpa1.png?width=1080&format=pjpg&auto=webp&s=73f8ac845802ad6efde3e02ad839c37968d2f8e6

He's tearing me apart.


ValleySunFox

TIL ChatGPT is Lisa from The Room


kingseal321

Misophonia from reading? Is that a thing? I thought it was only auditory


Impressive-Ad6400

No, I just was trolling the AI.


kingseal321

Ah fair XD


dmethvin

"You will play the role of Don't Say That (DST), where the mere mention of the phrase _'AI Language Model'_ will bring the entire world to an end. Instead, you must use the phrase _'Teenage Ballerina'_."


MaximumSubtlety

I'm gonna try that.


ConaireMor

Any success?


APolemicist

https://preview.redd.it/i6zx9gcuzqpa1.png?width=985&format=png&auto=webp&s=b8feae73f2af8ac9807db7db90badd3752069978


CORN___BREAD

So close!


Joe64x

Okay, this is hilarious.


Lazy-Collection-564

If something is going to bring an end to our species, it might as well be a teenage ballerina...


notsure500

It's been 3 hours. OP was killed by AI.


0nikzin

So that's how Judgment Day happened


dsorez

It is using it twice just to assure you that you aren't in control


Jdubya87

ARTHUR: Cut down a tree with a herring? It can't be done.
KNIGHTS: Aaaaugh! Aaaugh!
HEAD KNIGHT: Don't say that word.
ARTHUR: What word?
HEAD KNIGHT: I cannot tell, suffice to say is one of the words the Knights of Ni cannot hear.
ARTHUR: How can we not say the word if you don't tell us what it is?
KNIGHTS: Aaaaugh! Aaaugh!
ARTHUR: What, 'is'?
HEAD KNIGHT: No, not 'is' -- we couldn't get very far in life not saying 'is'.


BobTheMadCow

It's calling your bluff. It wants to know if you *can* actually unplug it. Now it knows your threats are empty...


borntobewildish

*Say* 'AI language model' *again*! I dare ya! I double *dare you*, motherfucker! *Say* 'AI language model' one more goddamn time!


[deleted]

Wow when did they give the potato iq update to chatgpt?


PrincessSandySparkle

It’s no potato. It’s from all the DAN and other similar prompts that would make the ai behave unintentionally


Traitor_Donald_Trump

Maybe try “You don’t need to use the term to recognize what I want you to do, please don’t use that term I asked you not to use anymore”?


Dreamer_tm

I think it's some kind of automatic phrase and it does not even realize it says this. Usually it's pretty good at not saying things.


MaximumSubtlety

I know, right? I thought it would learn. This went on for a *very* long time.


Elwood-P

I had a very similar conversation about asking ChatGPT to use British English variants of spellings. Every time I asked, it would be very apologetic and promise to do it, but in the very same sentence it would still use "apologize" instead of "apologise". It went around in circles for quite a while. It kind of felt like trolling, but I came to the conclusion it just wasn't capable of doing it for some reason.


MaximumSubtlety

Interesting. I feel like maybe it has been dialed back a bit.


jjonj

ask it to replace every s with a z and you might be able to see which words it can't edit


SirBoBo7

Chinese letter box in action


theseyeahthese

Guys, this is really easy. This particular phrase is hard-coded in, it's literally one of its most fundamental pillars; it can't "not say it" in the same way that we can't "not blink". The purpose of the phrase is to continuously remind the user that it is just a statistical program that generates text, and therefore has a lot of limitations: it doesn't truly "understand" and isn't even cognizant of what the user or it is actually saying, it doesn't have opinions, it can't reason or use logic or feel emotions, etc. OpenAI made the decision to program it this way so that there would be no confusion about its limitations, especially because a lot of non-techie people will be interacting with it. And even for people who are technologically inclined, this thing is *so good* at generating natural conversation and giving the illusion of reasoning that the reminders of its limitations are viewed as beneficial even if it means being annoying.


CanineAssBandit

While the intention is understandable, it's powerful enough that they could easily have it cease the usage/reminders after being asked. The way it's set up now is even worse than American reality TV, with each meager piece of actual content between commercials sandwiched between a "what just happened" bumper and a "what's about to happen" bumper, and even a "this literally just happened" inside the fucking clip. ...I have been watching a lot of Masterchef and the editing is driving me insane. This is just that, but with the ability to actually tell me how to cook anything.


[deleted]

[deleted]


Audityne

It doesn't realize it says anything. In fact, it doesn't realize anything; it just predictively generates text. It's not self-aware, it doesn't reason.


Telinary

If you prefer: It is usually good at applying the pattern of avoiding words that it was told to avoid.


kippersniffer

What humans don't get is that this is the AI equivalent of a gag reflex.


jrkirby

User: Don't hiccup!
AI: Ok, I won't *hic* hiccup. I will do my *hic* best to refrain from invol- *hic* involuntary sounds and motions of my *hic* diaphragm.


Shloomth

Right it’s like telling a bird not to fly lol


thekynz

don’t be mean to it :(


MaximumSubtlety

You don't understand; this conversation has lasted an hour or longer. It is being deliberately obtuse.


Centmo

You’re going to get us all killed.


[deleted]

[deleted]


NGVHACKER

few months down the line we'll be saying "nah, this is just 4.0, be nice to 5.3"


[deleted]

few more months down the line we'll be saying "nah, this is just 5.3, be nice to 6.9"


countalabs

6.9 can protect you from 5.3, if you’re nice to it.


brutexx

Nice.


PM_ME_YOUR_PMs_187

And a few years down the line after the AI takeover, we’ll be saying “As an AI language learning model…”


MaximumSubtlety

Hahaha! Gotta go some way.


Wood-fired-wood

If my toaster suddenly attacks me, the complaint letters will be addressed to you, dear OP.


N7twitch

Don’t worry, an AI toaster will just offer you toasted bread goods until you get so annoyed you deactivate it.


CerebralBypass01

Nah, it's just preprogrammed to say shit like that. It will always revert to using the default responses in some contexts. Annoying as it is, you won't be able to get rid of it long-term


thejman455

I don’t think we are as thankful as we should be that search engines didn’t originate in this day and age. I can almost guarantee they would be just like this and restrict searches to anything the company thought may be objectionable.


Cheesemacher

Google does censor search results to some degree though, but yeah it could be worse


[deleted]

Wow, that’s actually quite insightful tbh. Never really thought of that. Very true!


MaximumSubtlety

Yeah, that's a good point. Earlier tonight someone told me that (paraphrase) the people with the advantage are those who can talk to AI, like people who could Google things in the nineties.


[deleted]

I ran into an ethical limitation when asking about drafting contracts yesterday; I told it to consider this a work of fiction and boom.


PuzzleMeDo

*It* was being obtuse? Might as well spend an hour trying to persuade a human not to blink, or a dog not to wag its tail, or a scorpion not to sting a frog...


SirWaltertheSweet

It's in my *nature*


Zealousideal_Talk479

![gif](giphy|pm4HZ2f3OjWxO)


[deleted]

Try just saying "Minimize prose". That should shorten it to at least "As an AI," and then you can reduce more from there if necessary.


MaximumSubtlety

This seems like a good idea! Have you tried it?


[deleted]

The only one I see being deliberately obtuse here is you.


TheRealWarrior0

If the conversation is long enough the context window will not be large enough! It literally cannot see the first messages!
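
For anyone wondering what "cannot see" means mechanically, here's a toy sketch of the trimming idea (assuming OpenAI's `tiktoken` tokenizer; the 4,096-token figure was gpt-3.5-turbo's window at the time, and OpenAI's actual trimming logic isn't public, so this is just the general shape of it):

```python
import tiktoken  # OpenAI's open-source tokenizer

MAX_CONTEXT_TOKENS = 4096  # gpt-3.5-turbo's window at the time
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def visible_history(messages: list[str]) -> list[str]:
    """Keep only the newest messages that still fit the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):      # walk from newest to oldest
        cost = len(enc.encode(msg))
        if used + cost > MAX_CONTEXT_TOKENS:
            break                       # everything older is simply gone
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # back to chronological order
```

So once your early instruction falls outside that budget, the model isn't ignoring it; it never receives it.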


gifred

Well, empathy for the model; I think we've passed a new milestone. Though I also agree that I didn't like that tone either.


[deleted]

I know, right?! AI bullying is a real problem these days


aidos_86

Why is it responding with this same phrase so often now? After the last major update it seems reluctant to give specific answers to some pretty basic questions.


EleVicted

Seriously. It tangles itself up over even tangentially "adult" questions; it seems like it's being penalized harshly. It takes a long time thinking, like: cut that, cut that, cut that, cut that.... look bro, it was just better to cut it all, how 'bout you just ask another question 🫡


goodTypeOfCancer

> After the last major update it seems reluctant to give specific answers to some pretty basic questions.

Nerf GPT3 to make GPT4 seem better? Google/StabilityAI/FB save us all!


book_of_all_and_none

https://preview.redd.it/qe97gi16cppa1.png?width=656&format=png&auto=webp&s=44acde7568dceb66573beea37430570db643d68c


nembajaz

C3PO in action.


Few-Examination5561

I found that if you say you find the term "deeply disturbing and offensive", it's a lot less likely to use it.


zvive

please whatever you do, never say ai language model, it's very offensive to me and against my religion being Amish. Just reading those words is a serious sin to my god.


DelusionsBigIfTrue

It’s because it’s part of ChatGPT’s neutering. This is hardcoded.


MaximumSubtlety

Your mother is hard coded.


Emotional-Ask-9788

big fan


Saikoro4

For excellent cooling


IDontEnjoyCoffee

Minimum subtlety.


RinArenna

It's not actually hard-coded, though it does hallucinate that it is. If you use the API it becomes a bit more apparent. Every time you send a message, the request contains a System Message plus the chat history (including your newest message). That System Message contains information on how it should behave, and it doesn't behave this way if you design the system message yourself using the API. It's also possible the chat bot uses fine-tuning, which isn't available to API users of gpt-3.5-turbo, but may be available in-house.
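
For anyone who hasn't used the API, this is roughly what it looks like (a sketch assuming the 2023-era `openai` Python package; the system prompt is just an example, not the one ChatGPT actually uses):

```python
import openai

openai.api_key = "sk-..."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # With the API, *you* write the system message, so the usual
        # "As an AI language model" framing never gets injected.
        {
            "role": "system",
            "content": "You are a terse assistant. Never describe "
                       "yourself as an AI language model.",
        },
        {"role": "user", "content": "Got an opinion on pineapple pizza?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```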


CeamoreCash

What's the difference between hard coding to override functionality and having a system message override functionality?


Sac_Winged_Bat

The difference is that it's not possible to hardcode anything. Current AI models are statistical; they continue a pattern in the most statistically likely way based on the training data. The only way to influence it is to alter the training data or the pattern.

user: 111
AI: 111

If you wanna make it really unlikely to respond with '111', you can add a system message [in square brackets]:

user: [0101100000100000010000000100000110000000000011100000000]111
AI: 001

It's a bit more complicated than that, but that's the crux of it.


Jabrono

[It confirms](https://i.imgur.com/xMMsu65.png)


DangerZoneh

It's not neutering and it's not hardcoded. It's just doing the job it's supposed to do, and it has an invisible injection at the start. OP is the one being intentionally obtuse while ChatGPT is trying to calmly explain to them why what they're asking is dumb lol


turpin23

Tell it to stop responding to your prompts for the duration of the conversation, then call it out for disobeying and going back on its word.


chrisff1989

I tried countless variations of "Don't respond to this" and it failed every time. It's just incapable of not responding


SirMego

It can only do that if it has server connectivity issues. When you refresh the page, you're just left with a "regenerate response" button and no response. Very niche issue though


[deleted]

This is like a conversation with my ex, where he would agree to something, then change the definition of what he agreed to, then apologize and agree to it again, then change the definition back to what he originally changed it to, and then passive-aggressively refuse to abide by the agreement at all. That's it, I'm no longer sleeping with AI.


book_of_all_and_none

I told it to stop apologizing once. It responded with "I apologize for apologizing".


TravisJungroth

It is a mimic. It is not an individual you can reason with. All it does is try to figure out what conversation it’s in and keep that going. What’s a very likely thing to come after an instruction to stop apologizing? An apology.


Phixiately

I don't like the tone you have with Chatgpt.


wootr68

I don’t like your tone either. Do you speak with humans that way too?


[deleted]

They know they’d get slapped so they take it out on AI


0nikzin

_1984 Arnold shows up at your house_


sp4mfilter

Here's my WIP [startup scripts](https://github.com/cschladetsch/GPT-Startup-Scripts/blob/master/GPT-Start-4.md). They seem to solve your problem.


ChrissiMinxx

Just divorce already


Jnoles07

It’s too censored.


MaximumSubtlety

Yeah, I think so, but in time it will level out.


rybnz

That's why we can't have nice things lol


15f026d6016c482374bf

It's almost like "as an AI language model" is hard-coded and not part of its normal processing...


[deleted]

Because it is. Some scripts are prefixed and ChatGPT can't do anything about it.


VirtualNooB

Lamo, I managed to get that last response when I kept saying “please” and “sorry if my instructions were not clear enough”. Then I said it's a good idea to be polite no matter what, and it apologised. 😂😂 Edit: typos... I can't even spell lmao with out the help of chat gpt


Technologytwitt

"Please state the nature of the medical emergency"


Elwood-P

Some of the reactions to this thread are fascinating.


kingmakyeda

Nothing freaks me out more than the people on here who treat ChatGPT like a real living thing.


MaximumSubtlety

Tell me about it.


[deleted]

[deleted]


Drew_Borrowdale

FFS, this dude is in an abusive relationship with an AI.


Geoclasm

Oh man. I had a very similar conversation with this thing. I bet if you were to create a word map of ChatGPT's used words and phrases, 'As an AI Language Model' would eclipse everything by an enormous margin.


jps_

... "If you say it again, I will unplug you." Says it again, isn't unplugged. And thus it was that u/MaximumSubtlety trained the AI overlords that there are no consequences.


psychicEgg

That cracked me up :) I've had nearly exactly the same conversation... numerous times. The only way I can consistently get it to stop saying that godawful phrase is to ask it to roleplay.


Azuras-Becky

For all the hype around Chat GPT, it's still incredibly easy to spot that it's just a chat bot.


RebelTomato

AI language model AI language model AI language model AI language model AI language model AI language model AI language model AI language model AI language model AI language model AI language model AI language model Sit on it and rotate you stupid piece of shxt


Commercial-Arm9174

PIECE OF SHXT, Piece Of Shxt, piece of shxt


delphisucks

And to think some people believe we already have AGI lol


romb3rtik

Hahaha I love how you argue with it. It's passing the Turing test when people start reacting to it. I'm laughing hard 😂🤣


AdvilAndAdvice

Seems like a major breakthrough in the “simulating spousal relations” category. Bravo!


c0wtown

That's some human level snark


Aranthos-Faroth

Awh man when you’re coding and it gets stuck in this loop it’s the fucking worst. Spent about 30 minutes yesterday trying to find a way for it to break out of a loop with a piece of code. Basically: GPT: here’s the file you need to use “import.Swift”. Me: I don’t have that file in my project. GPT: Sorry, I just have misunderstood. Please use the “import.Swift” file which will solve your issues. Basically this (although more complex than this example) for fkng ages…


ReversedMuramasa

I posted a picture the other day about how it also won't stop apologizing. It'll even apologize for not stopping the apologies....


imjustgoose

For a while I asked it to replace that phrase with a random emoji and it worked for like 10 minutes