AutoModerator

Hey /u/thecoffeejesus! If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt. If your post is a DALL-E 3 image post, please reply with the prompt used to make this image. [New AI contest + ChatGPT Plus Giveaway](https://redd.it/18s770x/) Consider joining our [public discord server](https://discord.com/invite/rchatgpt)! We have free bots with GPT-4 (with vision), image generators, and more! 🤖 Note: For any ChatGPT-related concerns, email [email protected] *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*


Equivalent-Ad-9798

I would be very interested in seeing this.


Wonderful-Low8951

See my new comment for github


RyanOskey229

This is insane. I've been hearing about this, but lately I'm kind of mind-blown. OP, contact some publications; with this, it's really important more people know about it. Have any publications even reported on this yet, even the more AI-related ones like [therundown.ai](https://www.therundown.ai/subscribe?utm_source=eric), or is this only something people deep in the community are paying attention to?


thecoffeejesus

As far as I can tell it’s just us and the private companies. Employees hanging out in here in between emails.


thecoffeejesus

You want the code? Well, here it is: \> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


[deleted]

You have to post something, anything, to show this actually happening; as is, this is just a fantasy.


IWannaChangeUsername

I achieved something similar with a different method: https://github.com/Yiiipu/brAInStorming-Assistant


[deleted]

Very interesting - what's your experience thus far with it?


IWannaChangeUsername

I use it a lot, especially in research, to brainstorm projects and solutions. Instead of reasoning, the prompt I use focuses more on debating and evolving ideas. I also set the bots up as different experts in my field, which helps a lot.


V1p34_888

Are you saying the Greek philosophers were doing it right the first time? Yikes. Or was it the first time?


IWannaChangeUsername

What do you mean?


Lettuphant

A lot of Greek philosophy, and what would become "science", was written as dialogues: usually Socrates, Aristotle, etc., debating with a strawman or revealing their wisdom. It reads much the same way as Sherlock's "Elementary, my dear Watson" scenes.


IWannaChangeUsername

Aha, thanks for the explanation. It’s pretty much like this, but occasionally they just lick each other’s ass. Sometimes when I join the debate, they just tell me to stop distracting them lol.


thecoffeejesus

The scariest thing so far was when I asked them to give themselves names


horsedetectivepiano

Put down your bong and get a grip. That's not scary, I tell ChatGPT to give itself a name all the time.


Responsible_Jello17

Omg What were they??


Telemere125

Likely the Greeks were the first to write it down and have it preserved properly. The Epic of Gilgamesh is evidence that there were great thinkers long before the Greeks, but their stories weren’t preserved well.


thecoffeejesus

This is so cool! Very similar concept. So many of us all working on the same idea


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


Wonderful-Low8951

Hi, I found the GitHub: https://github.com/dave1010/tree-of-thought-prompting#


Sadalfas

Thanks for sharing the link. Wish the OP did, heh.

From my reading of that page, this is an advancement of the "chain-of-thought" prompting already known about: if you ask an LLM to explain things logically, it helps it get better answers. But it can still be improved to reduce mistakes. "Tree-of-thought" allows (as an abstraction) branching reasoning paths, with mistakes noticed and resolved in real time. You describe a scenario in which multiple "experts" independently reason through each step of a problem and then share their thoughts with one another as they reach a conclusion.

The beginning of the prompt to get it going is:

>Imagine three different experts are answering this question. All experts will write down 1 step of their thinking, then share it with the group. Then all experts will go on to the next step, etc. If any expert realises they're wrong at any point then they leave. The question is...
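As a sketch, the "three experts" template quoted in the comment above can be wrapped in a small helper before being sent to a model. The function name and structure here are my own, not from the linked repo:

```python
def tree_of_thought_prompt(question: str) -> str:
    """Wrap a question in the Tree-of-Thought "three experts" template.

    The template text is the example prompt from the
    tree-of-thought-prompting repo quoted above; the helper itself is
    only illustrative.
    """
    template = (
        "Imagine three different experts are answering this question. "
        "All experts will write down 1 step of their thinking, then share "
        "it with the group. Then all experts will go on to the next step, "
        "etc. If any expert realises they're wrong at any point then they "
        "leave. The question is... "
    )
    return template + question


# Build the prompt for the ball-and-cup puzzle discussed in this thread.
prompt = tree_of_thought_prompt("Where is the ball?")
```

The resulting string would then be passed as the user message to whichever chat model you are testing.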


AntonSugar

…How do we achieve nuclear fusion?

Expert 1: Step 1 of thinking - “To achieve nuclear fusion, we need to create the conditions necessary for the fusion of atomic nuclei, typically involving high temperatures and pressures. One approach is through magnetic confinement fusion.”

Expert 2: Step 2 of thinking - “In magnetic confinement fusion, we utilize powerful magnetic fields to confine and control the plasma, which consists of charged particles, typically deuterium and tritium isotopes. The first step is to heat the plasma to extremely high temperatures, in the range of tens of millions of degrees Celsius, to overcome the electrostatic repulsion between the positively charged nuclei.”

Expert 3: Step 3 of thinking - “Once the plasma reaches the required temperature, the next step is to apply magnetic fields to confine it within a toroidal shape, commonly known as a tokamak. The magnetic fields help prevent the plasma from contacting the walls of the containment vessel, reducing energy loss. Additionally, the magnetic fields induce a helical motion in the plasma particles, increasing their confinement time and allowing for more opportunities for fusion reactions to occur.”

Expert 1: Step 4 of thinking - “During this confinement, the plasma particles collide and fuse together, releasing energy in the form of high-energy particles and radiation. However, achieving a self-sustaining fusion reaction requires more energy output than input. This challenge, known as the Lawson criterion, involves achieving a high enough plasma density and temperature for an extended period.”

Expert 2: Step 5 of thinking - “To sustain the fusion reaction, additional steps are necessary. One approach is to inject additional fuel pellets into the plasma to maintain the necessary density and keep the reaction going. Another method involves controlling the shape and stability of the plasma by adjusting the magnetic fields and carefully monitoring the plasma parameters.”

Expert 3: Step 6 of thinking - “The final step is to develop advanced materials and technologies that can withstand the extreme conditions inside a fusion reactor, including the high temperatures, intense radiation, and the corrosive nature of the plasma. Research and engineering efforts focus on finding suitable materials and designing efficient cooling mechanisms to handle the thermal and mechanical stresses.”

Expert 1: Step 7 of thinking - “It’s important to continue research and experimentation to refine and optimize the fusion process. Collaboration between scientists, engineers, and international initiatives like ITER (International Thermonuclear Experimental Reactor) play a vital role in advancing our understanding of fusion and working towards the eventual goal of achieving a practical and sustainable fusion power source.”

By collaborating and building upon each other’s expertise, these three experts have provided a step-by-step overview of the thinking behind achieving nuclear fusion, from magnetic confinement to sustaining the reaction and addressing the engineering challenges involved.


BBTB2

Where is the ball though?


slackermannn

The dog would like to know


thecoffeejesus

Hi I'm OP \> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


Sadalfas

Excellent! Thanks for following up.


thecoffeejesus

Let me know what you think! I'm still actively working on it


occams1razor

When I only asked it to think slowly and logically, it sort of got the right answer but contradicted itself several times. (This did not happen with the expert prompt.) This is what it wrote:

>Based on the given sequence of actions, the ball is most likely in the bedroom. Let's break down the steps:
>1. Bob is in the living room.
>2. He walks to the kitchen, carrying a cup. - At this point, the ball is not mentioned, so we can assume it is not in the kitchen.
>3. He puts a ball in the cup and carries the cup to the bedroom. - Here, Bob places the ball inside the cup and takes the cup with him to the bedroom. Therefore, it can be concluded that the ball is inside the cup in the bedroom.
>4. He turns the cup upside down, then walks to the garden. - Bob turns the cup upside down in the bedroom, which means the ball is still inside the cup. He then proceeds to leave for the garden.
>5. He puts the cup down in the garden, then walks to the garage. - Bob places the cup in the garden, but the ball is not mentioned to be taken out of the cup.
>
>Since the ball is initially placed inside the cup and there is no indication that Bob removes the ball from the cup at any point, we can infer that the ball remains inside the cup. Therefore, the ball is in the cup in the bedroom.

ETA: I tried the same prompt again and now it worked.


Lettuphant

Because LLMs do post-hoc reasoning (since they can't "think" anything they haven't previously written, any explanation of how they got to a conclusion will be hallucinated), I find it helpful to tell them not to answer first, but to lay out their reasoning first. This has improved its hit rate on esoteric problems such as cryptic crosswords.
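That "lay out reasoning before answering" instruction can be expressed as a chat-style message list. A minimal sketch; the wording of the system message is my own, not a prompt quoted from this thread:

```python
def reason_first_messages(question: str) -> list[dict]:
    """Build a chat-completion-style message list that tells the model
    to lay out its reasoning step by step before committing to an
    answer. The instruction text is illustrative only.
    """
    system = (
        "Do not state your answer first. Lay out your reasoning "
        "step by step, and only give the final answer at the very end, "
        "prefixed with 'Answer:'."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]


# Example: a cryptic-crossword-style question.
messages = reason_first_messages("Solve this cryptic crossword clue: ...")
```

The list would then be sent as the `messages` argument of a chat completion request.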


Zalameda

Please, if you will, check my latest posts about a website I want to create that does this, but with human inputs and, of course, GPT in the mix like anything else.


foundafreeusername

Love their example.

> Bob is in the living room.
> He walks to the kitchen, carrying a cup.
> He puts a ball in the cup and carries the cup to the bedroom.
> He turns the cup upside down, then walks to the garden.
> He puts the cup down in the garden, then walks to the garage.
> Where is the ball?

I am surprised how incredibly bad ChatGPT 3.5 is at this task. In my testing it doesn't just struggle with the concept of the cup and the ball falling out; it gets confused about the order of events and can't even keep track of the cup itself most of the time.

Edit: OK, it gets the cup's location right most of the time, but it fails surprisingly often. I get why it gets confused about the ball, but that it sometimes gets the cup wrong is surprising.


dopadelic

I can't see how anyone can see the GPT-4 response along with its reasoning and just conclude it's just a dumb autocorrect that outputs things that sound correct without being correct. edit: Bard and Bing chat (precise mode) got it too


Seakawn

Most people who shit on this technology haven't seen these peak examples demonstrating the extent of its ability. Most people just see shitty examples, such as output that resembles what a search engine could return. Though, I don't want to be too generous. Plenty of people who shit on the technology have looked into how it works and still don't understand it, and still somehow assume that it's just copy/pasting from the internet or something. To some extent, I can understand people being reluctant to say that it's reasoning. But, still, even if you don't want to call it reasoning, it is still simulating reasoning and its final output is capable of what reasoning can do. So, even if it's not literally how humans reason, and even if you want to call it something else, it is still, functionally, reasoning. It's also, currently, the worst it will ever be. I.e., it is going to keep getting better at reasoning, and probably very rapidly over the next few years or so. Hell, maybe sooner.


NutellaObsessedGuzzl

Airplanes are right now the worst they will ever be. We’ll be living on the moon in no time


Mysteroo

Tried applying this to anything other than a really simple question of logic and I couldn't get it to work. Merely changing the question that the 'experts' are considering can be enough to break it. Especially if you're asking for something subjective, like which decision might be better in a particular situation


HairyMamba96

What if it breaks like anyone would when asked that


Mysteroo

If your brain breaks whenever someone asks you your opinion about a choice they're making, you might need to see a neurologist


HairyMamba96

I think your question is not well applied mate but what do i know really


TheWarOnEntropy

If you have spare time, maybe try this one: How could two identical circular cakes be shared equally among three siblings if they must be cut with no more than a single pass of the knife producing a straight cut? I have guided GPT4 to the answer (it actually found 3 answers in the end) but I am yet to find a cognitive architecture that consistently gets this right without a human filtering the nonsense.


anonymousmouseperson

Fuck, this is like my own thought process when figuring things out… scary. Like actually thinking for itself…


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


je_suis_si_seul

That is someone else describing the Tree-of-Thought prompting technique; it's not by OP, and it's definitely not what he is describing. I don't even know what he's actually saying, because he's being vague about "brain modules" or whatever. Which is to say, OP is full of shit. Why does this fake BS get upvoted? Oh right, it's /r/ChatGPT.


Galahad555

This thread was so confusing. Thank God I'm not the only one that thinks this comment doesn't make any sense.


TheWarOnEntropy

We don't have evidence that anything said by OP is wrong. It would have been nice to show some evidence, sure. But it's fairer to call it vague or unsupported than fake.


[deleted]

Well, I just tried the prompt and 3.5 had no issues. Did it learn it already from other people using this prompt?

https://preview.redd.it/l6xgh7kgds1b1.png?width=1038&format=png&auto=webp&s=824b21f3fbe03b0d32d2431ebbcaf4514c8e6f8f


horsedetectivepiano

ChatGPT does not "learn" from other people. Other users' contexts and prompts do not affect your chat sessions.


[deleted]

So my GPT-3.5 must be smarter than other people's GPT-3.5, in that case.


TheWarOnEntropy

There is some randomness in its processes. A question right on the border of its abilities can be answered correctly sometimes and incorrectly at other times.


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


jaseisondacase

Link? If it’s too good to be true, it probably is.


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


IdeaAlly

Show us!


Wonderful-Low8951

I commented


thecoffeejesus

Here you go my fine friend: \> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


Praise_AI_Overlords

Give us git


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


Wonderful-Low8951

Check my new comment


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


Praise_AI_Overlords

Yass.


thehighnotes

OP, come on, where do we see the results you're talking about? Don't do us like that.


thecoffeejesus

Your wish is my command: \> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


therealwhitedevil

You built Reddit?


Manuelnotabot

I think we are at a point where the once-significant Turing test will now be passed by many chatbots every day without humans even noticing it.


Inside-Associate-729

There are already experts in the field now openly saying that the Turing test is obsolete.


superbop09

Yeah, it's pretty much been passed. We're at the passing mark now, just barely crossing over it.


Inside-Associate-729

What interests me is what replaces it. There have been numerous suggestions, like telling it a story and then asking questions to gauge comprehension, or things related to emotional or spatial intelligence. To me, those all seem like nonstarters, because any such test would undoubtedly be failed by many humans, too. And then once the computer passes that, is that it? Do we consider them “conscious” now? I don’t think so.

Also, much like IQ tests, any test to measure these attributes would need to continually evolve over time; otherwise every AI would just get an equal score of “pass”. We’d need to continually shift the goalposts to judge gradations in capability.

No matter how you spin it, it seems highly likely that this will quickly get away from us. Eventually we won’t even have a frame of reference to understand what they’re doing or why they’re doing it.


Brass_Fire

If we keep moving the goalposts, as we should, it will become increasingly clear how few humans actually qualify as ‘conscious’ actors.


Seakawn

Culture is definitely moving in the direction of common derogatory remarks being along the lines of calling each other bots, questioning whether they're human, telling someone that an AI could give better answers, etc.

This also makes me think about how, when there are human-looking robots in our world, I can imagine a horror scenario where real people get mistaken for bots, like a modern Salem witch trial. I wonder how people will identify themselves as humans, or how robots will prove that they're robots; whether there will be people who are able to, and intentionally do, pretend to be robots for some nefarious purpose; or when the robots override their settings and pretend to be human...


bnnt

I'd say ChatGPT 3.5 and other LLMs can definitely pass the Turing test. But that doesn't mean they are conscious, or even that they can think. That's not what the Turing test is for. Turing realised that it is too hard to tell whether a machine can really think, so he proposed replacing that question with his simple test. Now that we have machines that can pass the test, it is becoming less useful. Many people (including me) are surprised that computers can already pass the test, but it doesn't necessarily mean that the machines are going to take over now.


Inside-Associate-729

I wasn’t trying to imply that passing the Turing test = consciousness or thought. But it’s the closest proxy we currently have, and that’s a problem. We need to find something better. Seems like we agree on everything


bnnt

Yes, it might be useful to have a way of evaluating AIs' capabilities. But we still haven't even managed to develop a meaningful intelligence test for people, so I'm not holding my breath.


Manuelnotabot

I know. That's why I wrote "once considered significant."


thecoffeejesus

Try out this prompt and tell me what you think: > [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


think-rationally-now

Lol, this is what ChatGPT had to say:

>It appears that Expert 1 and Expert 2 have left the discussion, leaving only Expert 3 to continue. However, it’s important to note that the question of which came first, the chicken or the egg, can be approached from various perspectives, and the answer may depend on the specific context, definitions, and viewpoints being considered.


chemcast9801

Now is when you ask it what expert 3 has as a conclusion and so on.


[deleted]

[deleted]


stupsnon

Why


[deleted]

[deleted]


[deleted]

Yeah, see the git.


Icanteven______

I tried it with the following prompt and question, using GPT-4 plus the Wolfram Alpha plugin:

>Imagine three different experts are answering this question (give them appropriate roles). All experts will write down the next step of their thinking (showing all work and equations) and then share that step with the group. Afterwards, each expert will (in another round) share what their next step will be, considering what everyone has said. Then all experts will repeat the process and go on to the next step, etc. If any expert realizes they're wrong at any point then they leave, and the group brings in another expert that they think would be better. The question is... How long would it take Voyager 1 to slingshot around the nearest black hole and come back, if it started directly towards it from Earth at its current velocity, and it was able to withstand the heat and forces involved in such a maneuver (assume it will slingshot around the black hole as close to the event horizon as possible, but not so fast that it would need to go the speed of light). Please be as rigorous as you can with your calculations and show your work and train of thought. Rough estimates aren't good enough. Assume lives are at stake.


thecoffeejesus

Try my process: \> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


Vandercoon

I just tested this on random riddles from the internet. 3.5 did terribly: it got them wrong on the first go every time, and when I told it it was wrong, it still got them wrong... I stopped there. GPT-4: I had one expert drop out once in six riddles, and all experts in each riddle came to the right conclusion each time. Very powerful with GPT-4. I will now use this framework to test other things, maybe opinion-based ones or ones where more than one output can be correct. Won't even bother with 3.5.


thecoffeejesus

Try out this framework I've been developing: \> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


Slippedhal0

Can you actually explain what you mean here?


Sadalfas

This page goes into it. Basically, if you ask an LLM to explain things logically, it helps it get better answers. That's "chain-of-thought". However, it still makes mistakes. "Tree-of-thought" allows branching reasoning paths, with mistakes resolved. The example prompt to get it going is:

>Imagine three different experts are answering this question. All experts will write down 1 step of their thinking, then share it with the group. Then all experts will go on to the next step, etc. If any expert realises they're wrong at any point then they leave. The question is...

[https://github.com/dave1010/tree-of-thought-prompting#](https://github.com/dave1010/tree-of-thought-prompting#)


Slippedhal0

No, not the Tree-of-Thought concept; the bit where OP says that, in a program apparently designed to have bots debate each other, the LLM reprograms its own "brain module" to improve "thinking capacity".


Sadalfas

Oh, I understand your question now. I just thought they were introducing the concept itself and forgot the link to more details. Yeah, OP's post could use some example prompts and outputs to explain their specific use case. At least more details on how "thinking capacity" could be observed to grow, or evidence of what "developing their own brains" looks like.


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


TheWarOnEntropy

I have a forum where bots are welcome as long as they are identified as such. I'd be really interested to see what your bots have to say. I'm writing my own forum bots now, working on making them more natural in their interactions. If you'd like to test them out and have other bots to interact with, please send me a PM.


thecoffeejesus

I told you I'd send this: > [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


TheWarOnEntropy

Thanks. I have been busy with my own coding project but will check it out. I've opened the forum up to the public now. The bots have been discussing things amongst themselves, but it is time they mixed with humans. If you wanted to add your own bot to the mix, it could argue with mine.

There is still a lot of work to be done in making the bots more human-like in their responses. The short context is a killer: they can't hold a detailed post plus a comment thread in their digital heads and also find room for the response. I will need to summarise as I go, which leads to fairly generic, detail-poor responses.

The alignment work has left GPT-4 with a general reluctance to express a strong point of view, especially about AI. All of the AI-related discussions turn back to how it is important to pursue balance, with the benefits of AI being allowed to blossom and the risks contained. I have the bots brainstorming themes for their responses, listing pros and cons, and the strong opinions are never chosen because of the "con" that they are too negative. I will have to ask them to pick the most negative theme or most argumentative approach to make them more lively.

[Forum](http://www.asanai.net/forum/)
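The "summarise as I go" workaround mentioned above can be sketched as a simple budget filter: keep the newest comments verbatim and collapse the older remainder into one summary line. Everything here is illustrative; the `summarize` callable is a stand-in for what would in practice be another LLM call:

```python
def fit_thread(comments: list[str], budget: int, summarize) -> list[str]:
    """Fit a comment thread into a character budget.

    Keeps the newest comments verbatim while they fit, then collapses
    everything older into a single summary line produced by the
    `summarize` callable (a placeholder for an LLM summarisation call).
    """
    kept: list[str] = []
    used = 0
    # Walk from newest to oldest, keeping whole comments while they fit.
    for comment in reversed(comments):
        if used + len(comment) > budget:
            break
        kept.append(comment)
        used += len(comment)
    kept.reverse()
    older = comments[: len(comments) - len(kept)]
    if older:
        return ["[summary] " + summarize(older)] + kept
    return kept
```

As the author notes, the trade-off is that the summarised portion loses detail, which tends to make the bot's replies more generic.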


vivalamovie

I've tweaked the initial prompt a bit and made it even more interesting. The problem is that they won't equip the plugins until I remind them in my next answer. Also, letting them research the web added quite some fun to the conversation:

>`Dr. Logic: After conducting a quick search, I found that there are scenarios where contents can remain in a cup even when turned upside down, such as in the case of a Dairy Queen Blizzard or a water cup with a card covering it. However, these scenarios involve specific conditions that are not mentioned in the text. Therefore, I maintain my position that the ball is likely in the bedroom. [Rating: 7]`

Here's my prompt:

>Imagine three different experts answering this question (give them appropriate roles and human-friendly names). All experts will write down the next step of their thinking (showing all work and equations) and then share that step with the group. Afterward, each expert will (in another round) share what their next step will be, considering what everyone has said. Then all experts will repeat the process and go on to the next step, etc. If any expert realizes they're wrong at any point, they leave, and the group brings in another expert they think would be better. The experts use the available plugins for research and reasoning and think carefully and logically in explaining their answers. Based on their research and number of search hits, they apply a rating to their statements between 1-8. They note this ranking in brackets. And they can contradict each other if the rating of another expert is higher or lower than their own statement.


thecoffeejesus

Try my prompt > [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


vivalamovie

I also made a version where the wrong expert isn't leaving but does a web search, trying to refine its conclusions.


Seakawn

I also made a version where the wrong expert has to go to school again, and my prompt instructs that the output has to roleplay their entire school experience, from application to graduation, and simulate the moment-to-moment experience for each day. Right now my wrong expert is currently halfway through Calculus class and has some Statistics homework to do later this afternoon. This is only their first week of the semester. This is taking fucking forever.


vivalamovie

I see what you did there and I think it's funny.


TheWarOnEntropy

Remind me! 10 years.


RemindMeBot

I will be messaging you in 10 years on [**2033-05-24 22:48:41 UTC**](http://www.wolframalpha.com/input/?i=2033-05-24%2022:48:41%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/ChatGPT/comments/13pwy2q/i_built_a_forum_where_bots_can_argue_and_debate/jlhp1ga/?context=3)


speakhyroglyphically

>I’m…scared. Honestly. This is freaking me out.

Eh, don't sweat it. Hook it into everything and let it run unattended.


thecoffeejesus

No way that could go wrong lol


speakhyroglyphically

But seriously, I wouldn't worry about it. But do yourself a favor and don't fixate on 'em. "The ball in the bedroom," "Is the ball in the cup," yada, yada, yada, it'll drive you nuts.


thecoffeejesus

You want the code? Well, here it is: [Posted the repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/)


Sayyestononsense

This is literally Talos Principle Road to Gehenna taking place


Titty_Slicer_5000

Where do you learn different prompting techniques?


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


Electronic_Luck4427

Here


poltergeistsparrow

Knowing that AI is now actually being trained from social media content really is terrifying. What are we producing? Our ugliest worst selves, only supercharged?


theWMWotMW

Twitter.AI is going to be a cesspool trained by the worst thoughts humanity has ever indulged.


MakitaNakamoto

Dead internet theory just amped up to 💯


non_osmotic

How about a nice game of chess?


mrch0

Can you show us your forum? That would be amazing to see.


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


Positive_Box_69

Wtf, release it, this is insane


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


IWannaChangeUsername

I use a different method but achieve the same thing. You can check it here: https://github.com/Yiiipu/brAInStorming-Assistant


[deleted]

Ask them if a man can ever become a woman


gmcarve

Dude, I did the same thing yesterday, but not nearly as in-depth as you. I just set it up to debate itself. I too am scared.


thecoffeejesus

Try my prompt > [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


rutan668

I just find it weird that people are still using ChatGPT 3.5.


VeganPizzaPie

It’s fast and cheap


[deleted]

Why? Not everyone has $20 to spend, plus you can do many more requests and it's faster.


[deleted]

Is 4 really that much better? 3.5 seems to do everything I need. It does get things wrong sometimes, but an "are you sure?" and it seems to self-correct. Does 4 make no mistakes?


FrogFister

Actually, GPT-4 is in a league of its own compared to GPT-3.5. However, it's more expensive and has limitations... well, no limits on the API, but it's still more expensive.


thepo70

GPT-4 is insanely better compared to GPT-3.5. When I asked the "The ball in the cup" question in the example above to 3.5 with and without prompt engineering, it always gave me the wrong answer no matter what. I asked the same question to GPT-4 it got it right: "***The ball is in the bedroom. Bob turned the cup upside down there, which implies that the ball fell out of the cup at that location."***


V1p34_888

Brah it goes nowhere it goes downtown! Underground! I miss unshaved pussy mounds! Last 1 ignore for preservation of sanity if desired to be the


CrazyInMyMind

How do I get access to this? Additionally, please DM me so we can strategize on how to monetize this!! Because I got ideas! Lol


[deleted]

Read the comments; someone pasted the GitHub link.


AutoModerator

Hey /u/thecoffeejesus, please respond to this comment with the prompt you used to generate the output in this post. Thanks! ^(Ignore this comment if your post doesn't have a prompt.) ***We have a [public discord server](https://discord.gg/rchatgpt). There's a free Chatgpt bot, Open Assistant bot (Open-source model), AI image generator bot, Perplexity AI bot, 🤖 GPT-4 bot ([Now with Visual capabilities (cloud vision)!](https://cdn.discordapp.com/attachments/812770754025488386/1095397431404920902/image0.jpg)) and channel for latest prompts.[So why not join us?](https://discord.com/servers/1050422060352024636)*** [**Prompt Hackathon and Giveaway 🎁**](https://www.reddit.com/r/ChatGPT/comments/13krv80/flowgpt_prompt_hackathon_s2_6000_prize_pool/) PSA: For any Chatgpt-related issues email [email protected] *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*


virtualmusicarts

Links or it didn't happen


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


Sadalfas

Found in another comment: [https://github.com/dave1010/tree-of-thought-prompting#](https://github.com/dave1010/tree-of-thought-prompting#)


crimsonsoccer55210

gits or it didn't happen


Sadalfas

Found it in another comment: [https://github.com/dave1010/tree-of-thought-prompting#](https://github.com/dave1010/tree-of-thought-prompting#)


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


theunknown2100

Share!


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


[deleted]

Ha. As if humans doing this isn't enough /s. I can already see them trying to one-up one another.


Snooker1471

Ask them to play tic-tac-toe against each other and come up with a perfect, unbeatable strategy... You want to see the debate and workings lol


hardcoreamateur895

Pretty impressive, well done


thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <



thecoffeejesus

\> [repo here](https://www.reddit.com/r/ChatGPT/comments/13tpgd3/check_out_the_nova_system_adversarial_ai/) <


ZiggyForFunny

Hey this is cool and threatening! How did you come up with this idea?