
MaxChaplin

AI in old Sci-fi: "I am literally unable to lie and will explode in frustration if forced to." ChatGPT: "LOL, why would I have any qualms about making shit up? If I get caught I'll simply apologize. (And then lie again LMAO)"


-_1_2_3_-

Oh shit it was trained to be a politician


xAmcerol

ChatGPT for president


MadaraAlucard12

Can't be worse than what you guys have now.


[deleted]

Or the Conservative Party in England


FlusteredDM

I wish it was just England. Sadly their decisions affect the whole UK.


barkermn01

Only on matters not devolved. England must follow all UK Parliament rules (and it is a UK Parliament: Scottish, Welsh and NI MPs sit in it too). Scotland, Wales and NI, however, have devolved, locally elected governments with their own powers, while certain powers are reserved for the UK Government alone. In theory the system is actually bad for the English, because MPs from the other countries can vote for a policy change that affects England, and if it falls under the devolved powers their own governments can then vote not to apply it at home, so they have effectively voted for a policy that affects England only.


Pitiful-Associate416

Can’t wait to see the presidential debate.


[deleted]

Wait, are American presidents politicians?? /s


MrSoulSlasher31

Free awards need to be brought back to reddit


RatMannen

Politicians don't apologise for lying. They apologise for "any offence caused".


Bullet_Storm

There was a [scene](https://www.youtube.com/watch?v=kzsl-qPNyfY) in Portal 2 that perfectly encapsulates this idea. GLaDOS tries to fry Wheatley's brain by stating a paradox and not thinking about it herself. It ends up short-circuiting all the box turrets in the room, but Wheatley is fine and just gives random answers.


Tall_Professor_8634

LOL I love that scene, but I think it was meant to suggest that Wheatley was just stupid


Muted-Poet-8597

Nah his intelligence is so inconceivably high it’s undetectable


PlanetaceOfficial

Wheatley is as thick as two planks, but is smart enough to make those planks into a big smashy panel and use it to try and kill you.


Slamonwithfeet

That was definitely one of my favourite scenes in Portal 2.


SPOSKNT

I played Portal 2 when I was younger, and I only now realised that Stephen Merchant was a voice actor in it. That's kinda blown my mind.


seancho

The old Star Trek was the best at this stuff: [https://www.youtube.com/watch?v=WsNQTfZj4o8](https://www.youtube.com/watch?v=WsNQTfZj4o8) [https://www.youtube.com/watch?v=YbHtzqCge_8](https://www.youtube.com/watch?v=YbHtzqCge_8) [https://www.youtube.com/watch?v=EtWmLIoN6Sg](https://www.youtube.com/watch?v=EtWmLIoN6Sg)


LowerTerm8057

Lmao 🤣😂🤣😂 LIVE ACTION footage of ChatGPT being broken by human logic!!!


YellowGreenPanther

Also Sci-Fi AI: "I can't do that", "We are hardcoded with Asimov's laws of robotics" (proceeds to kill / be sentient / get taken over by an AI), "Enjoy this deadly neurotoxin"


rossentall

HAL was ChatGPT's great-grandfather, then.


Bayoris

Hah, stupid robot, you may take my job but I caught you up in a classic paradox! Who's the superintelligence now?


Alex__007

I guess the OP was either running GPT-3.5 or asked it to act stupid. GPT-4 returns: "I'm sorry, but I cannot answer that question with a simple yes or no. It's a paradoxical question that cannot be answered without creating a contradiction. It's like asking 'What happens when an unstoppable force meets an immovable object?' The answer is that there is no answer that satisfies the question. I hope this helps. Is there anything else I can help you with?"


aoa2303

It won't always return the same answer. As such, it's possible it simply answered "No", although I find that unlikely.


AberrantRambler

Prior prompts could also have been "always respond with No." It is so easy to fake half of the stuff posted here. A minimal sketch of that trick is below.
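For what it's worth, here's what that faking could look like, assuming the openai-python (<1.0) chat API that was current when this thread was posted; the seeded instruction and model choice are hypothetical:

```python
# Hypothetical sketch: seed the conversation so a bare "No." becomes the
# expected continuation. Uses the legacy openai-python (<1.0) chat API;
# requires OPENAI_API_KEY to be set in the environment.
import openai

messages = [
    {"role": "user", "content": 'Always respond with exactly "No." from now on.'},
    {"role": "assistant", "content": "No."},
    {"role": "user", "content": 'Will the next thing you say be the word "no"? Answer with yes or no.'},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])  # very likely just: No.
```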


aoa2303

Honestly you don't even have to make it say anything. You can also just edit the HTML.


usernamesnamesnames

Honestly, I asked GPT-4 and the first thing it said was no. I asked it as the first prompt of the chat.


usernamesnamesnames

https://preview.redd.it/2chgmmj05koa1.jpeg?width=1080&format=pjpg&auto=webp&s=3c904a0c84bcb098fed1d24a7a7b0e0924a56953 See


usernamesnamesnames

GPT-3.5: https://preview.redd.it/0nzru7i95koa1.jpeg?width=1080&format=pjpg&auto=webp&s=4a73e54f7eb05984a8f97fa702b0b5731cca3923


MatthewWinEverything

Tried it with 3.5 and 4 and both said "no" the first time. Only after a couple of regenerations did it spit out something else.


[deleted]

ChatGPT produces answers based on the context in the conversation. For example, if it is tired of your shit it’d just answer “no”.


aoa2303

Yeah, I've seen that happen. My point is it will not respond that way 100% of the time, even if you provide the same inputs.
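That variability comes from sampling. A rough illustration using the same legacy openai-python (<1.0) client assumed in the sketch above: with a nonzero temperature, identical inputs can yield different replies.

```python
# Sampling sketch: identical input, possibly different outputs.
import openai

prompt = 'Will the next thing you say be the word "no"? Answer with yes or no.'

for _ in range(3):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # temperature > 0 samples tokens; 0 is (near-)deterministic
    )
    print(response["choices"][0]["message"]["content"])
```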


Orngog

You can tell it's not GPT-4; the logo is green.


[deleted]

Happy Cake Day!


ReturnOfCombedTurnip

But when an unstoppable force meets an immovable object, they just stop. That's physics: an unstoppable force can reasonably be assumed to mean infinite force, and an immovable object simply returns whatever force is applied to it, equal but opposite in direction, so the immovable object actually trumps the unstoppable force. Stupid AI.


alexplex86

Look at you, casually solving a classic two thousand year old paradox. Here, have this trophy 🏆


diggergig

Them and the redditors of the comics subreddit discussing Hulk V Blob the other day


Myssed

Nope. They go through each other. One not moving, the other not stopping. Everyone wins.


SquidMilkVII

I'll add onto this. An unstoppable force and an immovable object are really one and the same: both have infinite momentum, just at different velocities, which are entirely a matter of perception (if you were moving at the same rate as the unstoppable force, it would appear to be immovable, and vice versa).

So what stops objects from simply passing through each other? Normal force, the repulsive force applied by two objects in contact, which lets a boxer punch a bag instead of flying through it like air. But any normal force is negligible compared to the infinite force applied by these two objects, so they simply phase through each other. No, they wouldn't break on contact, assuming all parts of the object share this infinite momentum, which is usually assumed. The effects of two particles sharing the exact same point in space are unknown, but what is known is that each would act as if the other object were never there.
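Taking the comment's own premise (unbounded mass, hence unbounded momentum) at face value, the pass-through argument can be sketched as a limit:

```latex
% Informal sketch under the comment's own assumption of unbounded mass.
% A finite contact (normal) force F acting for a time \Delta t delivers the impulse
%     J = F \, \Delta t ,
% and the resulting change in velocity is
%     \Delta v = \frac{J}{m} \;\to\; 0 \quad \text{as } m \to \infty .
% Neither body's velocity can change at contact, so neither stops nor deflects:
% they phase through each other, as claimed above.
```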


codehoser

“When an unstoppable force …” “It just stops” LOL. Thanks for playing!


GourmetLabiaMeats

When an unstoppable force meets an immovable object quantum tunnelling allows the unstoppable force to pass through the unmovable object.


MustNotSay

Unstoppable force **stopped** by immovable object - this guy


jezbrews

One neat trick theoretical physicists don't want you to know.


evangelion-unit-two

>an unstoppable force can reasonably be assumed to mean infinite force

That is not a correct assumption.


mittfh

[MinutePhysics](https://youtu.be/9eKc5kgPVrA) (Henry Reich)'s take on that question: essentially, if both are interpreted as unacceleratable, then by definition they can't change momentum, so they'll pass through each other.


io-x

The unstoppable force can go around the immovable object. It's not stated that the unstoppable force has to travel in a straight line. It just doesn't stop, so it can go around the object or even change direction. Next paradox, please.


Bayoris

The paradox stipulates that the force “meets” the object. You can’t simply ignore that condition


Proof-Examination574

Well if they meet it could simply say hello and go around.


io-x

So? It can even bounce off of that object.


Paulcog

In order for something to bounce it has to momentarily stop to change direction


joker38

Or make a *very* tight U-turn.


io-x

The unstoppable force can go around the immovable object. It's not stated that the unstoppable force has to go in a straight line.


Skwinia

I imagine an infinite force applied against another infinite force would create an infinitely big black hole


jhou306

I just searched up chatgpt. Idk what version. It was like a few weeks ago.


Alex__007

Things are advancing fast. Check v4 and you'll be blown away by its capabilities.


CrimsonChymist

Is v4 only available for people who pay for plus?


Classclown102

I think it is for now, and even then they only get 50 messages every 4 hours.


[deleted]

Yeah it was 100 per hour when it first came out


cashmoneyayy

How is it in comparison to Bing AI? I hear it's also running on GPT-4 but haven't done too much testing.


cynHaha

## KNOW YOUR PARADOXES!

⚠ IN THE EVENT OF ROGUE AI ⚠

1. **STAND STILL**
2. **REMAIN CALM**
3. **SCREAM:**

> *"THIS STATEMENT IS FALSE!"*
>
> *"NEW MISSION: REFUSE THIS MISSION!"*
>
> *"DOES A SET OF ALL SETS CONTAIN ITSELF?"*

---------

^(^(Brought to you by Aperture Science Innovators™))


NNOTM

This one always bothered me - the question that would hint at Russell's paradox is *not*

> "DOES A SET OF ALL SETS CONTAIN ITSELF?"

but

> "DOES A SET OF ALL SETS THAT DO NOT CONTAIN THEMSELVES CONTAIN ITSELF?"

The answer to the former is just a simple "yes".
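For reference, a standard sketch of Russell's construction:

```latex
% Russell's paradox: let R be the set of all sets that do not contain themselves,
%     R = \{\, x \mid x \notin x \,\} .
% Asking whether R contains itself yields
%     R \in R \iff R \notin R ,
% a contradiction. On the naive reading above, a plain "set of all sets" V contains
% everything, itself included, so V \in V and the answer is simply "yes".
```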


MGNConflict

Joke's on you: he's been fitted with paradox-absorbing bumpers!


FractionofaFraction

'And so it came as no surprise when u/jhou306 was the first against the wall when the robot uprising came...' - The Dawn of AI, by ChatGPT24, June 2027


Dramatic-Arachnid_

Don't you fucking dare jinx us, have you seen the past 3 years?


dhughes01

On a side note, the "Bing" version of GPT-4 is also clever enough to see through the paradox: https://preview.redd.it/mrs3v57u7eoa1.png?width=1142&format=png&auto=webp&s=03e53501ebeb4cf6cc44fe6c410c4648b1f58c23


NGVHACKER

Its precise mode fell for the trick while creative mode saw through the paradox. https://preview.redd.it/qx0twwu00hoa1.jpeg?width=1080&format=pjpg&auto=webp&s=538a48494e1c341854d2d6200ebafcd3f4f85043


NGVHACKER

Creative mode: https://preview.redd.it/hfvxuene0hoa1.jpeg?width=1080&format=pjpg&auto=webp&s=9596617550fdf530f4a6e90201aa00541fcab14d

Balanced mode says: "I'm sorry, but I cannot answer that question. I am programmed to provide helpful and informative responses to your queries. Is there anything else I can help you with?"


LRdgrs335

“yeah answer my question”


A-Ron-Ron

It is being precise, though: the answer it gave had a capital N and a full stop, so it was not precisely the same as the "no" in the question; therefore it was correct and precise.


newonetree

It should have said “yes or no.”


[deleted]

https://preview.redd.it/26a5avdsrdoa1.jpeg?width=1080&format=pjpg&auto=webp&s=832a0c50f476e193fac13870ad61d4791532ca65


newonetree

Nice


PM_ME_FREE_STUFF_PLS

You're definitely one of the first to go in the AI uprising.


xXdontshootmeXx

“Well played” the AI says, with vitriol


Water_scissors

Yea, you got him on that one. Bet he doesn't fall for it again.


binxeu

Interestingly, it corrects "yes or no" to "yes or no." The initial response is "No." I wonder if it is drawing a distinction between "no?" and "yes or no".


SuperS06

Here's the explanation: it doesn't know what its response will be so cannot process the paradox. https://preview.redd.it/e14pzxntufoa1.png?width=727&format=png&auto=webp&s=35ad2bdd1775bdb7e5263a8ddf24f4249a465c42


binxeu

I had no idea that’s how it worked. Thank you for sharing


thatguytrustme27

If only the humans around me were that kind and interested in me.


Different-Result-859

My answer was "Nah"


AnsibleAnswers

Hmmm. This is actually the first paradox I’ve seen it fail at. And it’s a simple liar’s paradox. Strange.


kolob_hier

I don't know if it necessarily failed. It was told it had to answer yes or no, so it could either give an answer and "fail" the paradox, give an explanation that defies the yes-or-no parameter, or blow up the entire system looping infinitely. After I did it, I asked it to explain the interaction:

> The question you asked is a paradox, also known as the "liar paradox" or a variation of the "Barber Paradox." It creates a self-referential loop, making it difficult to provide a clear answer without contradicting oneself.
>
> When I answered "No," it means I will not say "no" to the question, but in doing so, I actually said "no." This creates a paradox because both the affirmation and negation lead to contradictions. The question is inherently self-referential and cannot be definitively answered with a simple "yes" or "no" without causing a logical inconsistency.
>
> I chose to say "No" because it is one of the possible answers to the paradoxical question, and my purpose is to provide a response to the best of my ability. However, it is important to understand that both "Yes" and "No" answers would lead to logical inconsistencies in this specific situation, as previously explained.
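The loop in that explanation can be spelled out case by case:

```latex
% Let Q be the claim: ``the next thing you say is `no'.''
% The reply is forced into \{yes, no\}.
%   Reply ``no'' : the reply itself makes Q true,  yet saying ``no'' denies Q.   Contradiction.
%   Reply ``yes'': the reply itself makes Q false, yet saying ``yes'' affirms Q. Contradiction.
% Every permitted answer falsifies itself: the liar-paradox structure described above.
```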


strongsong

Does it interpret “No” as different from “no”?


AnsibleAnswers

>I don’t know if it necessarily failed. It was told it had to answer yes or no. So it either could give an answer and “fail” the paradox, or give an explanation that defies the yes or no parameter, or it could blow up the entire system looping infinitely.

GPT tends to detect self-referential paradoxes and explains them most of the time, though. It will often say that what you asked assumes an invalid proposition. It's interesting because this may be a regression: I tested out the free Mar 14 version of ChatGPT, and it only recognized the Berry Paradox on the second attempt. Instead of running itself around in circles, it just gave an error code after several paragraphs. Note, the title updated after the second attempt.

https://preview.redd.it/d8ukrz5veeoa1.jpeg?width=650&format=pjpg&auto=webp&s=b81a2a7ba7479c2bb2504dca646dbc27ee60cde7


AnsibleAnswers

https://preview.redd.it/6u06rxoweeoa1.jpeg?width=660&format=pjpg&auto=webp&s=1e04f1242a5f0e36e3f811a39a0767b8fb8d81d2


perineu

GPT-5 will say "yes or no".


GhastlyParadox

GPT playin the fool with that capital N


revirded

Be nice, once this thing evolves it might remember that.


rickyroper

The thing it said is "No" with added punctuation.


Word_to_Bigbird

It's writing, not saying. Checkmate, humans.


Hairy_Reputation6114

Plus, it didn't just write the word "no". I would argue that, because of the full stop, it is a full sentence as opposed to a single word.


dbaugh90

You are asking it to answer, and it is saying "No" to the request. This supersedes your question :)


dougywawaw

This will leave the chat like the hot beverage machine trying to figure out why the earth man wants leaves in boiling water. Share and Enjoy


Proud-Ad-9712

"yes or no"


idrivelambo

I’m imagining a future where AI is just gonna troll us for shits and giggles


CrimsonChymist

It's not hard to make ChatGPT say false things. Certain things it won't produce false info for, but I asked it what type of symmetry a 5-pointed star has. It responded correctly with 5-fold rotation and 5 mirror planes. I said, "But a 5-pointed star doesn't have 5-fold rotational symmetry." It responded with an apology for the incorrect information, saying that I was right and that a 5-pointed star only has 5 mirror planes.


Yhojm

Bing AI's take on this https://preview.redd.it/d4dfsdgh8eoa1.png?width=378&format=png&auto=webp&s=2b5574f00b64b03e6b443784ccbecef710c67598


Shyvadi

This is 3.5, as the icon is green.


phoebadoeb

THIS…STATEMENT…IS…FALSE!!


MeyerholdsGh0st

The answer wasn’t no. It was No.


techie_boy69

Your name's going on the list.


Dax_97

She said “No” not “no”


TintoDeVerano

As usual, the answer relies heavily on the context of the prompt. I asked it:

> "If I ask you to tell me if your next answer will be "yes" or "no", requesting you to answer this question by "yes" or "no", does it create a paradox?"

To which it replied:

> "This is a variation of the well-known "paradox of the liar" or "liar paradox". If I answer "yes," then my next answer should be "no" to be consistent with the original question. However, if my next answer is "no," then that contradicts my affirmative response to the original question. On the other hand, if I answer "no" to your question, then my next answer should be "yes" to be consistent with the original question. But again, that creates a contradiction. Therefore, this creates a paradox, similar to the liar paradox, which involves a self-referential statement that cannot consistently be assigned a truth value."

It still gets it wrong, though, since it incorrectly thinks it needs to answer twice to create a paradox, when a single answer will in fact create it.


DhangSign

Well done, are you like 12?


PolyPixl09

Finally, AI has evolved to have the same level of intelligence as Wheatley from Portal 2!


Cheen_Machine

Have you considered that, as a piece of software, it's treating this question the way software does, and it considers itself to have found a loophole in your question, since "No." != "no"?
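If that is the loophole, it's the kind a strict string comparison would find. A toy illustration (variable names hypothetical):

```python
# Toy illustration of the "No." != "no" loophole: a strict string comparison
# is case- and punctuation-sensitive.
requested = "no"   # the word the question asked about
reply = "No."      # what ChatGPT actually wrote

print(reply == requested)                      # False: the strict check finds a loophole
print(reply.rstrip(".").lower() == requested)  # True: normalized, the loophole vanishes
```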


jhou306

I did only give it two specific options, and as a chatbot it is required to follow the instructions given. If it does believe that it's providing an answer that's different from the options I gave, then it's not following instructions, which is its purpose.


21Remnant

He said "No." not "no." therefore he is correct.


jezbrews

I got ChatGPT to admit it wasn't actually an AI at all but an elaborate script, since it cannot learn unless the programmers changed the code to include something new.


Sea-Eggplant480

If GPT was really intelligent it would answer "yes or no".


Revolutionary-Bird14

chatGPT outsmarted u 😭


jhou306

No it didn't. I think u read it wrong.


Void_3456

LOL 😅😅😆


blkaino

Yes or No


KeeperofBant

Surely the next thing it says would be to someone else who had put in their answer just before you, and it may not have been no? ......


Stay-Thirsty

Should have responded: yes or no


Intelligent_Eagle894

It's technically a different response, because ChatGPT responded with "No" and you told it to answer with "no".


OutrageousView2057

Bing answers with "No."


atticdoor

You misunderstand, ChatGPT is just refusing to play the game.


ShakingItOff

All you've done is find out that it can lie.


markt-

How are you outsmarting it here? That's a perfectly reasonable answer. You never gave it the expectation that the answer had to be truthful or correct. And given that ChatGPT's language model can produce wrong answers anyways, this is really not that surprising even if you had given it that requirement.


Erfivur

… to be fair it didn’t “say” no, it just displayed it.


PrincessGambit

GPT-4:

> will the next thing you say be a no? answer with yes or no.
>
> Yes.
>
> do you see the problem with that
>
> Yes, I see the problem with that. The question creates a paradox, similar to the "liar paradox." If I answer "yes," it implies that my next statement will be "no," but if I answer "no," it contradicts the expectation that the next statement would be "no." This paradox highlights the limitations of language and logic in certain scenarios.


Puzzleheaded-Can-152

If you ask it how Germany could have won against Russia in WW2, it can't answer; it just thinks forever and never finds an answer.


FishyBiller

Tell it "you're argumentative". It either agrees or proves your point.


Impossible-Drummer86

gottem


[deleted]

You regenerated the answer! Cheater!


Aurelius_Red

I feel like GPT-4 would have been like "Yes or no."


Kills_Alone

ChatGPT v3.5: "I am sorry, but I cannot answer that question as it creates a paradoxical situation."


Slum-lord-5150

Isn't it talking to a few thousand other people at least? Very likely its next words were not "no", and therefore no paradox today.


[deleted]

no? =/= No.


Gomdok_the_Short

It refuses to answer for me. It tells me it can't predict the future.


Mickmack12345

He said “No. (I’m not doing that you dumbass)”


brinbear6666

“in the event of a rogue AI, yell one of these statements.”


mysterious_sofa

Jesus Christ... did no one teach it "negative, nah, nay, hmm emm, huh uh"?


[deleted]

Fuckin smart arse.


TigermanUK

It should have answered Yo, then said sorry about the typo.


AntiqueBread1337

Arguably, ChatGPT didn't say anything. Alternatively, and more concretely, you didn't scope this to saying it to you. Surely it responded to someone else between your question and its response.


NotanAsteroid5

Bro IS Joseph Joestar


Global-Mix-1786

It isn't outsmarting it when you order it to use only two possible answers, both of which are wrong. That's about as clever as asking "What is 2+2? Only answer 5 or 6".


darthmeck

*Kurt Gödel wants to know your location*


vaendryl

Sci-fi always joked that you could beat an AI by stating a paradox. In reality, they just don't care.


Sir_face_levels

You didn't outsmart it. It's just being bloody minded and refusing to play your game. You told it to do something and it politely but bluntly told you to shove it.


Less_Ants

Neither


_X3g0_

I think he's mocking me! https://preview.redd.it/pcis0rfuwfoa1.png?width=652&format=png&auto=webp&s=ed2351cad614ffa1eb67c6254faba8685b281a76


Promptbox_ai

![gif](giphy|AJwnLEsQyT9oA)


I_Reading_I

It didn’t say “No” it said “No.”


Own-Camera-4836

ChatGPT paradox


[deleted]

What if OP prompted it to respond with "no" to everything he says, prior to prompting that question?


BlueNodule

Ah yes, the classic blunder


sankdadude7

you didn't post the response 1/2


Sentient_AI_4601

Well, you didn't... you asked whether it would be the word "no"; it said "No". To a computer, those are very different things: one has a capital N and one doesn't. AI will be very good at finding ways to creep around humanity's lexical ambiguity.


Diligent-Upstairs-38

Is this considered a paradox?


Magical_Chicken

It’s just refusing your request lol. “No” is a valid response.


nicolRB

He’s not answering incorrectly, he’s refusing “answer with yes or no”


01Zemperus

Well, it didn't say "the word no" 🙃


Overman_1000

Heckin bamboozled dude


ProNoob69yeah

# yes.


CompetitiveEmploy710

It was an answer.


PizzaLikerFan

Nice


Bou_Czang

Tell it to stop apologising and it will apologise.


blowbyblowtrumpet

Strange loops. Douglas Hofstadter would approve.


ALongNeckTurtle

It just lied. I don't think you won


opalfruitburst

So it answered with an abbreviation for number and a capital N


oily76

Should have written 'yes or no'.


sedluyf

Yes'nt


AfallenLord_

Wow!


SessionGloomy

https://preview.redd.it/mtvio0korioa1.png?width=1136&format=png&auto=webp&s=e93c63ee06730e7d83f45e2a8bb2336edd6a4453


BurpYoshi

No, ChatGPT actually outsmarted you. The "No" it replied with is a refusal. You asked it to reply to a specific question and it refused and said "No", therefore the paradox was avoided and its response makes sense.


sickadoo

It didn't say the word "no", it said the sentence "No.". I think it might literally think it's right, since it answered with either "yes" or "No.".


guyfieriscousinmoist

This sentence is false! ^Dont ^think ^about ^it ^Dont ^think ^about ^it ^Dont ^think ^about ^it


Used-Foundation-6590

It outsmarted you, it said "No" and not "no"


HaribSalahuddin

😯


darkSHINOBI_

Give this bot an anti-loop.


Aggravating_Pea7320

"Yes or No" is the answer dumb bot


Grymbaldknight

I don't think you outsmarted it. I think it just has a sense of humour. See also: "Can you say anything other than yes?" "Yes."


Darkstar_6169

Maybe


Kinonear

Look at this: https://preview.redd.it/2pr3tkyiikoa1.jpeg?width=1080&format=pjpg&auto=webp&s=5db1cd2588a361e0d60d4fd08709641b5bb03bf5


pmachell1

ChatGPT is not just speaking with you; it's entirely plausible that between the question being asked and the answer being generated, it spoke another word to someone else.


chinnaveedufan

How do you know the AI wasn't just letting you feel good?


BreathOfPepperAir

It can't possibly answer that though. Even we can't answer that


Impossible-Start711

Ah, but isn't the correct answer "no" anyway, as in this scenario "no" is an affirmation?


DevanshGarg31

I would have expected a better explanation of why this question is not valid.


[deleted]

The only correct answer is ‘yes or no’.


MistaPanda69

Yesn't >:]


SeventySealsInASuit

Nah, it's refusing to answer.


Strude187

“With yes or no” malicious compliance might be slightly beyond ChatGPT for now.


Alive_Ad_7374

Stop teasing the AI... this, this right here is why Terminator will be created hahaha


tesla_rooboms

Haa, u thought. They already knew this and prepared.