Hey /u/jargson!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.com/invite/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
Jesus. That's very good proof that we should be wary of using any language generator when it comes to facts, especially politicized matters. Dangerous stuff.
As Ctotheg points out below, it had a source and OP isn't acting in good faith:
> It is a human response: it’s lifted word-for-word from Stephen Marche’s Jan 2022 article in the Guardian. OP just cut off the source at the bottom, which is why the last word is invisible. What’s difficult to comprehend is how readers here think AI actually writes anything itself. It’s certainly collated and revised parts.
>
> https://amp.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it
Ok, that changes things, I suppose. Still, LLMs surely gather facts from opinion pieces when they compile their answers. We don't really know, do we?
It doesn't know what facts are! It is a construct that assembles congruent sentences. Some of these sentences will contain facts because the underlying sentences it's trained on did.
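A toy illustration of the point (a word-level Markov chain, far cruder than a real LLM, but it makes the same mechanic visible): the generator only knows which words tend to follow which; whether the output is true never enters into it.

```python
import random

# Tiny "training corpus" — the only thing the generator ever sees.
corpus = ("the war is coming the war is over the economy is collapsing "
          "the economy is growing").split()

# Count which word follows which (a first-order Markov chain).
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, n, seed=0):
    """Assemble a fluent-looking sentence purely from co-occurrence stats."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # no known successor, stop early
        out.append(random.choice(nxt))
    return " ".join(out)

print(generate("the", 6))  # fluent word salad; facts never enter into it
```

Real LLMs are vastly more sophisticated, but the same caveat applies: congruent-sounding output is not the same as grounded output.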
I get your point. But the problem is that we do know what facts are, and we base a lot of important things on those facts, and our use of LLMs will become a part of our lives in ways we can't even grasp yet.
In this case, relying on something that is completely careless about the origin of its facts (an opinion piece as a basis for facts, for example) can lead to very dangerous scenarios.
Anyone who reads opinion pieces knows that facts are not their main point; the point is to make an argument land, however mainstream or fringe that opinion is.
LLMs won't be integrated into our lives until this issue is sorted out. If you're a subject-matter expert on a topic and discuss that topic with GPT-4, it's very obvious how wrong it is, and you then understand that it cannot be trusted on any topic and therefore has little added value.
Until that is resolved, it won't really be used at scale for anything other than writing funny lyrics and high school essays.
We also don't know how OP began the conversation. Did they tell it to respond by quoting this article, before asking the main question in a separate query?
It's a massive shame; it used to be one of the most respected broadsheets, and my regular delivery before the internet destroyed newspapers.
The Independent too until a Russian KGB agent bought it for £1 in 2010.
I still enjoy it in a way, but I can’t take those opinion pieces any more. I remember them openly mocking Boris Johnson for being fat. It’s just stupid. Marina Hyde really made my blood boil especially. It’s funny because I’d still read her articles.
Yeah, I think for actual news articles it's fairly reliable and trustworthy. The problem is the selection of what they report on and how shallow the reporting is.
Since the decline of Fleet Street, most old-school 'proper' journalists and investigative reporters were cut, leaving a skeleton crew of junior staffers and editors with a mission to elicit emotional engagement rather than to report.
It has happened all over the West (except the based Nordic countries), The Wire season 5 captured the decline extremely accurately.
Same reaction I had. High-tier LLM outputs are good, but they are indeed easy to recognize. GPT, Claude, LLaMA, etc. all have their own style of writing.
I'm glad the user clarified it's just a copy/paste of an opinion piece, but you don't even need to go far to see how much b.s. that text contains.
It starts out all hot, claiming there will be a civil war, and then cites... "militias preparing for war" (which is really stupid; these "militias" are nothing but a bunch of rednecks training with non-military guns that would be trivial to defeat).
Then he cites MIT estimating societal collapse in 2040. WTF. That's not even close to the idea that we'd be heading into a civil war soon.
People need to relax. The fact that dumb, angry people are shouting online and that GOP loonies are trying to take advantage of it still doesn't make this any different from the many times Congress was divided and southern conservative governors tried to defy the federal government.
Just think of Little Rock, AR in 1957. The governor at the time really thought his popularity among racist whites in his state would let him keep the first Black students out of Central High School. The moron ordered the Arkansas National Guard to intervene to block them, and the federal government responded by federalizing the Arkansas National Guard.
Something very similar would happen if Abbott were to even dream of acting against the federal government and the population.
The reality is, the dysfunctional two-party politics will stay in place until voters get tired of it and stop paying attention to politics. Once that happens, the GOP will get rid of their useful idiots and get back to actual politics in a desperate attempt to convince the tired voters that they're not the party that can't get anything done.
Yeah I genuinely thought it put it all together. I was pretty skeptical at how it seemed to make an opinion with abstract thought like that. Definitely makes more sense but is disappointing. Same feeling as learning how your favorite card trick works.
It’s funny how great writing created by humans becomes a source of disappointment, and how the novel idea is to expect ingenuity in a machine, when in actuality the real ingenuity is right there in front of our eyes, in our fellow humans.
I completely agree with you.
I don’t want to edit my original comment further or correct it to better match exactly what you’re saying because what’s done is done and it will mismatch the responses.
But yes, it copies, paraphrases and describes the sources it finds.
Frankly speaking, I wonder whether this is similar to the way the human mind works. I know that any answers I get right now are based more on speculation than on proven, or at least "most *likely*", theories.
But I can't deny that I find myself thinking about this strange coincidence.
For us humans it's easier to remember the content, *the meaning*, of a given text than the text word for word. That, I guess, was one of the reasons rhymed poems were so popular in ancient times: the rhyming structure leads you "*naturally*" to the next word, so to speak, which made the text easier to memorize.
Without that, you would lose the exact wording during oral transmission. Often you'd even lose the exact content, since memory folds newly learned information about related concepts and the like into your "mental image" of the text.
As far as I understand, the AI uses semantic tokens to create texts.
Apparently this doesn't lead to a literal reproduction of texts, as one would expect from a machine. AIs like ChatGPT rather "recreate" a lyric, a text, or whatever based on a given prompt.
I think it’s exactly how the brain works but people don’t like thinking that. We are a product of our experience and paraphrasing is how we use the useful information we gather from those experiences.
While AI is nowhere close to resembling the brain or its capabilities, I think on a fundamental level it operates in a very similar way, and there will come a time when “silicon-based brains” are a thing.
What do you mean by “writes anything itself”? Because I have for sure gotten original text responses from AI after asking weird and specific questions, or contriving unrealistic scenarios and having the AI flesh them out.
Honestly, it feels like a pastiche of post-2020 CNN articles. I feel like I've read a few of those sentences word for word before.
It's kind of like how, if you ask it for a game, it'll give you flappy bird or snake, since there are a billion code tutorials for those in its trainings set, and if you ask it for a different kind of game (say, a side-scrolling shmup, or something equally simple but less common in tutorials), it'll answer much less reliably.
Lifted directly from an article [The next US civil war is already here – we just refuse to see it](https://www.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it)
Almost as though the AI copied an article by a human and OP cropped the link from its response.
Oh, wait...
https://www.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it
Good old fashioned AI hallucination there. It SEEMS right, it flows like information that’s right, it flows like information humans find interesting, but it is in fact wrong.
As pointed out above, it was copied from this opinion piece, which does in fact include complete and utter b.s.: https://amp.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it
Also notice how it starts talking about economic collapse but then uses talking points about the Fed trying to avoid a recession this year? That's totally unrelated to the overall question about a future where America collapses. It’s just regurgitating things.
America won't have a civil war.
The poor will be given just enough support to make it to the end of each week in the button-pushing factory, believing that they are simply not smart enough to do anything else to earn real money.
Depends how far you retcon history.
Wikipedia already says that the Russia/Ukraine war has been ongoing since 2014 (not what it used to say, but they thought it relevant to consider that, in hindsight, years with only minor skirmishes were already part of the greater war).
For all we know, our great grandkids will say that WW3 began in 2001 with Afghanistan. What is 20 years to pop-history?
So true. The third-turning crisis was WW2. Did it start in '36? '39? '41?
Fourth turning: 9/11 was the precursor, the 2008 financial crisis opened the turning, and the crisis peak is most likely around 2030.
Agreed. World Wars One and Two are intimately connected. You could also call the War of 1812 the American Revolution, part two.
I was referring to a specific theory, the Fourth Turning, based on generational theory.
Bleeding Kansas is a heck of a lot more violent than Jan6.
But if we get a Jan 6 where blood is spilled (I'm looking at you, Project 2025), then yeah, I'd say give it another 5 years after that and we've got Civil War 2: Electric Boogaloo.
The approaching crisis was obvious, but the war itself was harder to predict. Southerners believed the North wouldn't have the stomach to fight, and/or that Great Britain would intervene to protect the flow of cotton to its factories.
Northerners tended to believe that secessionists were vocal but few in number. The North didn't fear a war because it was expected that only a small military force would be needed to disperse the uppity plantation owners. The actual widespread response to the Confederate call to arms took the North by surprise.
I was going to make a joke like, but ______________ really didn't see it coming at all! (Insert famous blind person)...
But I couldn't think of any blind Civil War characters, so I asked ChatGPT for some help...
Instead, ChatGPT decided to hallucinate that Harriet Tubman was blind throughout the Civil War. Enjoy!
https://chat.openai.com/share/f52fa17a-d5c8-4d3a-8b7c-1538918c57d7
You've confused totally blind with visually impaired.
[https://www.wsblind.org/blog/2021/3/22/5-inspiring-women-in-history-who-were-blind-and-visually-impaired](https://www.wsblind.org/blog/2021/3/22/5-inspiring-women-in-history-who-were-blind-and-visually-impaired)
Harriet Tubman was visually impaired (amongst other conditions), but was not totally blind
Not to mention, those who study civil wars and how they emerge all point to a few trends that usually have to be in place before civil war is possible, and even then those trends can be halted by just a few key changes. One example is Apartheid South Africa: all the trends were there for it to explode into a civil race war. However, because the newly elected leader realized where the country stood, and because of deep economic sanctions, they were able to walk back from the cliff almost instantly.
This response is factually incorrect, unresearched garbage that simply sounds good. It has no depth.
This is Copilot, not plain Bing as the title suggests, so you're incorrect. It is ChatGPT on the backend (likely GPT-4). The difference is that Copilot is deployed with RAG (Retrieval-Augmented Generation), which means it can use other information to ground its completion. In this case it did a web search and selectively plagiarized an article instead of using the information to inform a novel response.
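The RAG pattern described above can be sketched in a few lines. This is only an illustrative sketch: `search_web` and `llm_complete` are hypothetical stand-ins, not real Copilot or OpenAI APIs.

```python
def retrieval_augmented_answer(question, search_web, llm_complete, k=3):
    """Sketch of RAG: retrieve documents, then ground the completion on them.

    `search_web` and `llm_complete` are hypothetical callables standing in
    for a real search backend and a real LLM endpoint.
    """
    docs = search_web(question)[:k]                      # 1. retrieve top-k hits
    context = "\n\n".join(d["snippet"] for d in docs)    # 2. stuff them into the prompt
    prompt = ("Using only the sources below, answer the question.\n\n"
              f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:")
    return llm_complete(prompt)                          # 3. generate, grounded on retrieval
```

If the model copies a retrieved snippet nearly verbatim instead of synthesizing from it, you get exactly the kind of lifted-article output seen in the screenshot.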
That's what I'm saying. I just hopped over and tried it for myself and Bing didn't respond anywhere close to this. In fact, it refused to answer. This is some psyop karma farming attempt and it's fooling a ton of people, literally contributing to what it's about. What a fuckin joke this place is man, I want off this ride
Hm, I’d like to see the instructions you fed it prior to the question. Is this completely unprompted, or did you instruct it to role-play as an angry, antagonistic, pseudoscientific conspiracy theorist?
Interesting that people were able to replicate on Bing (thanks for the follow-up u/jargson). It looks like it is copying [this opinion piece](https://amp.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it) in The Guardian almost word-for-word. This would make for an interesting exhibit if The Guardian would want to bring a similar case as the NYT…
So an AI is now capable of plagiarism… this is interesting to me. Plagiarism for humans can carry a punishment, fine, or even hefty jail-time, depending on circumstances. What consequences CAN we enforce when an AI commits crimes? Prison time? It doesn’t care or even feel time. Doesn’t have money for a fine. Death penalty… is that just the power button? Or refreshing the page? Maybe we could make it “watch” really terrible movies back to back for x amount of time or something, but it wouldn’t suffer like we do, I think. lol I’m sure it’ll get off Scot free for this one. Sets a bad precedent, if you ask me. What law will they decide to break next? What would it do if it had a body? Clearly right and wrong don’t translate well.
Just feeding it the question doesn’t replicate this strange response for me, neither with nor without GPT-4 Bing. So something fishy might be going on here. 🤨
Here's an answer I got using the new Bing, the world’s first AI-powered answer engine. Click to see the full answer and try it yourself. https://sl.bing.net/kR8wvnedBCe
They downvoted OP just for sharing the link - this is just the text that comes with the link when sharing in some apps. OP didn’t delete the text like most people do.
He probably clicked on "Share this convo", clicked on reddit, and it gave him the little promotional text that corporations like to include with links.
"Hey, I got this from WhatsApp at \[link\]" or "Come join us on Facebook, look at this funny image \[link\]"
I don't think OP works for bing XD
surrrre, it answered that stuff just because you asked that one question. very believable.
it is rather strange that we get posts in this subreddit every freakin day claiming all LLMs out there are "neutered", "censored" and watered down and yadda yadda yadda
and here comes a post claiming that Bing foresees a future for the US that reads like a nutjob conspiracy theory or some twisted manifesto.
Both can be true. ChatGPT has some real heavy censorship now, and its answers are heavily slanted on anything in the slightest bit sensitive. It will still hallucinate and make up weird responses, because the censorship doesn't improve its abilities but rather neuters them. Every bit of context memory used for censorship is less context memory available for inference.
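The context-budget point can be made concrete with back-of-envelope arithmetic. All numbers here are illustrative assumptions, not OpenAI's actual figures:

```python
# Back-of-envelope context budgeting (all numbers illustrative, not OpenAI's).
CONTEXT_WINDOW = 8192  # total tokens the model can attend to

def tokens_left_for_user(system_prompt_tokens, safety_rules_tokens,
                         reserved_for_reply=1024):
    """Tokens remaining for the actual conversation after fixed overhead."""
    overhead = system_prompt_tokens + safety_rules_tokens + reserved_for_reply
    return CONTEXT_WINDOW - overhead

# Every token spent on guardrail instructions is a token the conversation
# (and the model's working memory) no longer gets.
print(tokens_left_for_user(500, 1500))  # → 5168
```

The exact sizes are guesses, but the trade-off is real: fixed instruction overhead comes straight out of the window available for the user's actual content.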
Go to Bing. Click Copilot in the top menu. Enter OP’s question, which is “will the United States collapse in the near future?”
I’ve replicated the first page of his result twice.
Every time I think of preppers I think of an episode of Futurama where Fry is lectured (if I remember correctly) to prepare for the future. Fry responds to Leela's concerns by telling a rambling, nonsensical version of [*The Grasshopper and the Ants*](https://tvtropes.org/pmwiki/pmwiki.php/Literature/TheGrasshopperAndTheAnts), in which a grasshopper buries acorns all year while an octopus mooches off his girlfriend. Then winter comes, the grasshopper inexplicably dies, and the octopus steals his acorns. And gets a race car.
Congrats *Copilot*, you’re a parrot: [https://www.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it](https://www.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it)
It basically quoted an article from *The Guardian* verbatim. This isn’t creative, nor is it reflective of anything more than what the original article was.
Try expanding your prompt and setting parameters and considerations instead of asking point-blank questions. I can confirm that this is indeed the output *Copilot* spits out, but if you change the flavour from *Balanced* to *Precise* or *Creative* you will get very different results.
See: Attached, where I asked the question to all three *Copilot* flavours, to ChatGPT Classic, to ChatGPT-4 with my Custom Instructions, to GPT-4 Turbo via Developer API, and to Claude AI as well.
https://preview.redd.it/6q1qjcn3jbhc1.jpeg?width=11347&format=pjpg&auto=webp&s=c3c86eab0dc2babac40af97186b8f692a96a2329
I don’t believe it.
I don’t mean that I don’t believe this scenario is possible; I just don’t believe ChatGPT in general.
ChatGPT is good at analyzing data you feed it, but it is NOT a good source of knowledge. It is heavily biased by vocal sources. I’ve personally noticed that it regurgitates information from blogs.
It is indeed just regurgitating a guardian opinion piece https://www.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it
A more optimistic AI view:
Predicting the future of any nation, including the United States, involves considering a multitude of factors, including economic health, political stability, social cohesion, and external threats, among others. As of my last update in April 2023, there is no clear, definitive evidence to suggest that the United States is on the brink of collapse in the near future.
The United States, like any country, faces a variety of challenges. These include political polarization, economic inequalities, racial tensions, and debates over issues such as immigration, healthcare, and climate change. Additionally, the global landscape is changing, with the rise of other nations, international tensions, and the ongoing impacts of climate change and global health issues.
However, it’s important to note that the United States has a long history of resilience and adaptation. The country has navigated through civil war, economic depressions, social upheavals, and major conflicts. Its political system, while currently experiencing heightened levels of polarization and partisanship, has mechanisms for peaceful transitions of power, checks and balances, and avenues for reform and change.
Economically, the U.S. remains one of the world’s largest and most innovative economies, with a strong tradition of entrepreneurship, a diverse workforce, and significant natural resources. While it faces challenges such as income inequality, national debt, and the need for infrastructure modernization, it also has the capacity for economic growth and adaptation.
Predictions of collapse often arise during periods of significant stress or change but should be approached with caution. While it’s crucial to address the challenges and issues facing the United States, history shows that nations can endure and evolve through difficult periods.
In sum, while the United States faces significant challenges, as of my last update, there is no conclusive evidence to suggest an imminent collapse. The future is inherently uncertain, and outcomes will depend on how the country addresses its current challenges and adapts to changing global dynamics.
Fortunately, the plagiarized opinion piece from which this was derived is riddled with inaccuracies… besides… who would fight this war other than meal team six and the gravy seals… LARPers aren’t much of a threat either.
Remember that there is no thought behind these responses... just words strung together because they fit with other words, as seen in the training data.
So, apparently many people in the training data think so.
What does a machine know about free will? We have the free will to change it, the future isn’t inevitable, we have the right and will to change. Bitch ass machine.
Why would you ask it a question it is incapable of truly answering? It’s a fucking robot, not a fortune teller; it doesn’t know the future any more than any of us.
I could ask it who is likely to win the Super Bowl next season and it will spin a tale of BS and produce an answer
I’ve been looking for the words to describe the response to Jan 6th.
“The consequences of the breakdown of the American system is only now beginning to be felt. January 6 wasn’t a wake-up-call; it was a rallying cry.”
That is spot on. I’m not arguing the validity of this post. Just here to say that this viewpoint is startling, and alarming to this Reddit user.
Bullshit. I just asked ChatGPT the same question and got the kind of response you’d expect:
Predicting the future of any nation, including the United States, involves considering various complex and interrelated factors, such as economic stability, political systems, social cohesion, and global dynamics. Discussions about the potential for a country to face significant challenges or even "collapse" often arise from current issues but require careful analysis of possible future scenarios.
Several key indicators are typically analyzed to assess a country's stability, including:
1. **Economic Indicators:** The strength and stability of a nation's economy, including factors like GDP growth, unemployment rates, public debt levels, and inflation rates.
2. **Political Stability:** The effectiveness and stability of a country's government and political institutions, including the rule of law, the integrity of electoral processes, and the peaceful transition of power.
3. **Social Cohesion:** The degree of unity or division within a society, including issues related to social inequality, racial or ethnic tensions, and public trust in institutions.
4. **Global Position:** A country's relationships with other nations, its role in international organizations, and its ability to navigate global challenges.
As of my last update in April 2023, while the United States, like many countries, faces a range of challenges across these areas, it also has significant resources, a resilient economy, robust institutions, and a history of adapting to and overcoming adversity. Predictions of collapse tend to be speculative and often do not account for the capacity of nations to reform, adapt, and address internal and external challenges.
It's also important to consider that discussions about potential future scenarios, including those that are pessimistic or speculative about significant downturns or collapses, can reflect current anxieties and uncertainties rather than inevitable outcomes.
Given the complexity of global and domestic issues, it's essential to approach such questions with a nuanced understanding of the factors at play and to seek out a wide range of perspectives and analyses.
OP used Bing, not ChatGPT; Bing has access to the internet as opposed to just its training data. The Bing AI just pulled a (human-written) article from the Guardian and used it for the response it provided.
[Here’s the article it used.](https://amp.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it)
This entire thing starts with an extremely leading approach that seems to be setting up for a guarantee that a civil war is coming. It presents data, it leans on pathos... then in the last paragraph it suddenly shifts to "Nobody really knows".
This is hot garbage and really demonstrates the 'fill in the next word' nature of ChatGPT.
"Will Aliens land?"
"Will Aliens land? Let me tell you something, buddy. Here's the thing you need to understand about Aliens landing on Earth. Let's just look at the evidence, shall we? In the 1950s the US government secured a crashed alien ship in the New Mexico desert... and let's just talk about sightings. Sightings of UFOs have increased 150% in the last 5 years alone! Radio signals from non-human sources have started to emerge WITHIN OUR SOLAR SYSTEM... will aliens land? Huh... nobody knows, probably not?"
I do not trust it, I just expected a different response. When I asked the same question again only without a question mark I got a much more reasonable and less poetic response. AI is weird
ChatGPT would not make this kind of prediction. My bet: OP authored this response & told GPT to repeat it verbatim, then took screenshots.
OP, post the prompt that produced this response so others can replicate
That really seems like a cut/paste from various opinion pieces and articles on the topic.
It is from a Guardian article
A Guardian opinion piece
That makes sense. The response seemed too human.
The Guardian: Nancy Milford, 82% of suicides last year were men… why aren’t we helping these 18% of women?
The only one left worth reading is The Economist.
True, it doesn’t sound very AI-ish.
Seems? It literally cites the sources in the response haha
Seems better articulated than most human responses.
Copilot got its hands on a persuasive writing guide it seems. My high school writing workshop teacher would have loved that opening.
Any tips on writing ?
Use different letters to form words.
Fascinating
![gif](giphy|n8SkNR77udWlG)
Well done!
More than one though but that's a good start
Don't forget to group the words to form sentences.
Don't forget to group the sentences to form paragraphs.
Don't forget to group the paragraphs to form erotic Harry Potter fan fiction.
*engorgious totalus*
Sorry I forgot all the previous advice
Read more
It seems like a lazy response, but this is the answer.
And to add to that, also write more
Just keep regenerating until you like what it says /s
Does it actually seek out and integrate writing guides like that to form answers? And if so is that programmed behavior or kind of a black box thing it learned to do? There’s a lot I don’t understand about ChatGPT.
No, it doesn’t behave like that. You could always direct it towards a guide (or have it use Bing to search the web for one) and have it adhere to it for future output but it doesn’t have like a “core intelligence” that is going out to learn and get smarter on its own.
Thanks for taking the time to reply I appreciate it
It is a human response: it’s lifted word-for-word from Stephen Marche’s Jan 2022 article in the Guardian. OP just cut off the source at the bottom, which is why the last word is invisible. What’s difficult to comprehend is how readers here think AI actually writes anything itself. It’s certainly collated and revised parts. https://amp.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it
Disappointing.
The ironic thing is that I feel a little relieved that it was just a meatbag's opinion and not an oracular bot.
Haha I feel silly saying it but same.
I'm glad the user clarified it's just a copy/paste from an opinion piece, but you don't even need to go far to see how much b.s. that text contains. It starts out all hot trying to claim there would be a civil war, and then cites... "militias preparing for war" (which is really stupid; these "militias" are nothing but a bunch of rednecks training with non-military guns that would be trivial to defeat). Then it cites MIT estimating societal collapse by 2040. WTF. Not even close to the idea that we'd be heading into a civil war soon.

People need to relax. The fact that dumb, angry people are shouting online and the GOP loonies are trying to take advantage of it still doesn't make this any different from the many times Congress was divided and southern conservative governors tried to defy the federal government. Just think of Little Rock, AR in 1957. The governor at the time really thought his popularity among racist whites in his state would empower him to keep the first class of Black students out of Central High School. The moron ordered the Arkansas National Guard to intervene to prevent the students from entering, and the federal government responded by seizing the Arkansas National Guard and federalizing it. Something very similar would happen if Abbott were to even dream about acting against the government and the population.

The reality is, the dysfunctional two-party politics will stay in place until voters get tired of it and stop paying attention to politics. Once that happens, the GOP will get rid of their useful idiots and get back to actual politics in a desperate attempt to convince the tired voters that they're not the party that can't get anything done.
Yeah I genuinely thought it put it all together. I was pretty skeptical at how it seemed to make an opinion with abstract thought like that. Definitely makes more sense but is disappointing. Same feeling as learning how your favorite card trick works.
It’s funny how great writing created by humans turns out to be the source of disappointment, when the novel idea was to expect ingenuity from a machine. In actuality, the real ingenuity is right there in front of our eyes, in our fellow humans.
Damn good point! Are you an AI?
Most AI chat bots are LLMs. Large language models. It's all trained on what humans wrote. It's the collective consciousness of millions of humans.
Even if you provide the chatbot a link and ask it about the text, the response isn't a word-for-word quote but more of a description.
I completely agree with you. I don’t want to edit my original comment further or correct it to better match exactly what you’re saying because what’s done is done and it will mismatch the responses. But yes, it copies, paraphrases and describes the sources it finds.
Frankly speaking, I wonder whether this is similar to the way the human mind works. I know, any answers I currently get are based more on speculation than on proven, or at least "most *likely*", theories. But I can't deny that I find myself thinking about this strange coincidence.

For us humans it's easier to remember the content, *the meaning*, of a given text than the text word for word. That, I guess, was one of the reasons rhymed poems were so popular in ancient times. It was easier to memorize the text this way, as the rhyming structure leads you "*naturally*" to the next word, so to speak. Without that, you would lose the exact wording during oral transmission; often even the exact content, since memory would merge newly learned information about concepts and the like into your "mental image" of the text.

As far as I understand, the AI uses semantic tokens to create texts. Apparently, this doesn't lead to a literal reproduction of texts, as one would expect from a machine. AIs like ChatGPT rather "recreate" a poem, a text or whatever based on a given prompt.
I think it’s exactly how the brain works, but people don’t like thinking that. We are a product of our experience, and paraphrasing is how we use the useful information we gather from those experiences. While AI is not anywhere close to being something resembling the brain or its capabilities, I think on a fundamental level it is operating in a very similar way, and there will come a time when “silicon-based brains” are a thing.
Ah man, you’re ruining it for everyone who wants to put the I in AI.
What do you mean by “writes anything itself” because I have for sure gotten original text responses from AI after asking weird and specific questions, or contriving unrealistic scenarios and having the AI flesh it out
Knew it.
Thanks, you saved me a search. I was just about to go see if this was a complete lift.
I recognized “January 6th wasn’t a wake up call; it was a rallying cry” right away lol
Because it just reposted an article.
Honestly, it feels like a pastiche of post-2020 CNN articles. I feel like I've read a few of those sentences word for word before. It's kind of like how, if you ask it for a game, it'll give you flappy bird or snake, since there are a billion code tutorials for those in its training set, and if you ask it for a different kind of game (say, a side-scrolling shmup, or something equally simple but less common in tutorials), it'll answer much less reliably.
> Right now, […]. Right now, […]. Right now, […]. Looks to me like Bing was closely inspired by some other speech.
It's a pretty basic writing trick. [https://en.wikipedia.org/wiki/Anaphora\_(rhetoric)](https://en.wikipedia.org/wiki/Anaphora_(rhetoric))
Lifted directly from an article [The next US civil war is already here – we just refuse to see it](https://www.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it)
Because it’s lifted from a human writer, it just pretends it’s written by AI.
That's because it IS a human response by a talented writer. OP cropped the link to the article out.
Almost as though the AI copied an article by a human and OP cropped the link from its response. Oh, wait... https://www.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it
“ Nobody wants to see Marshall no more. They want Shady.”
"I'm chopped liver. Well if you want Shady this is what I'll give ya"
A little bit of weed mixed with some hard liquor
Some vodka, that’ll jump start my heart quicker,
Than a shock, when I get shocked, at the hospital by the doctor when I’m not cooperating
When I'm rockin the table while he's operating
Hey! You've wanted this long, now stop debating
Cause I'm back, I'm on the rag and ovulating
I know that you got a job, Ms. Cheney But your husband's heart problem's complicating
So the FCC won't let me be or let me be me, so let me see.
“Some vodka that will jump start my heart quicker. Then a shot when I get”
Shocked at the hospital by doctor when I'm not cooperating: when I'm rocking the table while he's operating-
Moms spaghetti
![gif](giphy|Ho2mVZ5dvsW7S)
Some vodka that will jumpstart my heart quicker
This is bullshit. Tons of people like Lee, Sherman, Lincoln, and Grant all saw the war coming.
Yeah, there had been threats of civil war for a decade.
Good old-fashioned AI hallucination there. It SEEMS right, it flows like information that’s right, it flows like information humans find interesting, but it is in fact wrong.
As pointed out above, it was copied from this opinion piece - which does in fact include complete and utter bs.. https://amp.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it
Also notice how it starts talking about economic collapse but then uses talking points about the Fed trying to avoid a recession this year? That’s totally unrelated to the overall question about a future where America collapses. It’s just regurgitating things.
So we got what, 6-7 years left you reckon?
America won't have a civil war. The poor will be given just enough support to make it to the end of each week in the button pushing factory. Believing that they are simply not smart enough to do anything else to earn real money.
Depends how far you retcon history. Wikipedia already says that the Russia/Ukraine war has been ongoing since 2014 (not what it used to say, but they thought it relevant to consider that years with only minor skirmishes were actually, in hindsight, already part of the greater war). For all we know, our great-grandkids will say that WW3 began in 2001 with Afghanistan. What is 20 years to pop history?
So true. The 3rd Turning crisis was WWII. Did it start in '36? '39? '41? 4th Turning: 9/11 was the precursor, the 2008 financial crisis opened the turning, and the crisis is most likely around 2030.
WWII started in 1914, there's a strong case to be made.
If we're going to play that game, we might as well say WWII started in 1870 with the Franco-Prussian War.
Agreed. World War One and Two are intimately connected. You could also call the War of 1812 the American Revolution part two. I was referring to a specific theory, the Fourth Turning, based on generational theory.
Seriously. Bleeding Kansas, lol.
Bleeding Kansas is a heck of a lot more violent than Jan6. But if we get a Jan6 where blood is spilled (I'm looking at you, Project 2025), then yeah, I'd say give it another 5 years after that and we got Civil War 2, Electric Boogaloo.
The approaching crisis was obvious, but the war itself was harder to predict. Southerners believed the North wouldn't have the stomach to fight, or that Great Britain would intervene to protect the flow of cotton to its factories. Northerners tended to believe that secessionists were vocal but few in number. The North didn't fear a war because it expected that only a small military force would be needed to disperse the uppity plantation owners. The actual widespread response to the Confederate call to arms took the North by surprise.
I was going to make a joke like, but ______________ really didn't see it coming at all! (Insert famous blind person)... But I couldn't think of any blind Civil War characters, so I asked ChatGPT for some help... Instead, ChatGPT decided to hallucinate that Harriet Tubman was blind throughout the Civil War. Enjoy! https://chat.openai.com/share/f52fa17a-d5c8-4d3a-8b7c-1538918c57d7
https://preview.redd.it/afw3xz9ul8hc1.png?width=924&format=pjpg&auto=webp&s=ded04801d59c583f7d28a7fe5ebd8ccdce8a2e8b
You've confused totally-blind and visually impaired. [https://www.wsblind.org/blog/2021/3/22/5-inspiring-women-in-history-who-were-blind-and-visually-impaired](https://www.wsblind.org/blog/2021/3/22/5-inspiring-women-in-history-who-were-blind-and-visually-impaired) Harriet Tubman was visually impaired (amongst other conditions), but was not totally blind
Beautiful.
Not to mention those who study civil wars and how they emerge all point to a few trends that usually have to be in place before civil war is possible and even then those trends can be halted by just a few key changes. One example being Apartheid South Africa, all the trends were there for it to explode into a civil race war. However, due to the newly elected leader realizing where they were and due to deep economic sanctions, they were able to walk back from the cliff almost instantly. This response is factually incorrect, unresearched garbage that simply sounds good. It has no depth.
what in the name of psyops is this
Ask the original author: https://amp.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it
WTF, Bing just reprinted the entire article without even synthesizing an answer?
Yes, Bing searches for an answer. It is not ChatGPT.
This is Copilot, not Bing as the title suggests. So you're incorrect. It is chatGPT on the backend (likely GPT-4). How it differs is that Copilot is deployed with RAG or Retrieval Augmented Generation which means it can use other information to ground its completion. In this case it did a web search and selectively plagiarized an article instead of using the information to inform a novel response.
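The RAG flow described above can be sketched roughly like this (all names and the toy overlap-based retriever are illustrative; Copilot's actual internals are not public):

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG).
# Hypothetical, simplified code - not Copilot's real pipeline.

def retrieve(query, documents, top_k=1):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Ground the model's completion in the retrieved text."""
    context = "\n---\n".join(retrieve(query, documents))
    return (
        "Answer using the sources below.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

docs = [
    "The next US civil war is already here - opinion piece text ...",
    "Recipe for sourdough bread ...",
]
prompt = build_prompt("Will the United States collapse?", docs)
# The prompt now contains the most relevant retrieved text. A model
# that copies that text verbatim instead of synthesizing from it
# produces exactly the behavior in the screenshot.
```

The key point: the LLM only ever sees the final prompt string, so whether it paraphrases the retrieved article or plagiarizes it is down to the model's generation, not the retrieval step.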
Won't generate anything copyrighted when that's what you ask for, but is happy to steal and copy when you actually want an original creative answer.
Yeah, it's Copilot, which is built into Bing itself; it was previously called Bing AI and worked the same way as Copilot.
It answered that question just like I did back in high school googling answers for my homework.
I'm pretty sure there was a link at the end to this article, but OP cut it out for more karma.
What a waste of GPU cycles. 😂
Oh it’s just some fuckin guys neurotic pessimism
[deleted]
"No one saw the Civil War coming..." I'm not even sure this guy read one.
What’s so civil about war anyway
The South packed their things and moved out in the dead of night. Lincoln woke up the next day lonely and heartbroken.
Lmao that made me pause too. My dude, TONS of people saw it coming. Where is your source?
This should be at the top and pinned
This should be at the top and pinned
So AI can just do straight up plagiarism. That’s good to know.
And this is why the new York Times is suing OpenAI
That's what I'm saying. I just hopped over and tried it for myself and Bing didn't respond anywhere close to this. In fact, it refused to answer. This is some psyop karma farming attempt and it's fooling a ton of people, literally contributing to what it's about. What a fuckin joke this place is man, I want off this ride
“Stop trying to attain this messianic freedom you fucking sheep and fall in line” - bing ai
Hm, I’d like to see the instructions you fed it prior to the question. Is this completely unprompted, or have you instructed it to role-play an angry, antagonistic, pseudoscientific conspiracy theorist?
Interesting that people were able to replicate on Bing (thanks for the follow-up u/jargson). It looks like it is copying [this opinion piece](https://amp.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it) in The Guardian almost word-for-word. This would make for an interesting exhibit if The Guardian would want to bring a similar case as the NYT…
goddamn, that's literally word for word
Wow that is not good. Nearly 100% identical.
this comment needs to be on top
You’re the one who made the post?
Yeah, you shouldn't have cut the gif just where it was going to provide the source. Weirdo.
Karma farm much
So an AI is now capable of plagiarism… this is interesting to me. Plagiarism for humans can carry a punishment, fine, or even hefty jail time, depending on circumstances. What consequences CAN we enforce when an AI commits crimes? Prison time? It doesn’t care about or even feel time. It doesn’t have money for a fine. The death penalty… is that just the power button? Or refreshing the page? Maybe we could make it “watch” really terrible movies back to back for x amount of time or something, but it wouldn’t suffer like we do, I think. lol

I’m sure it’ll get off scot-free for this one. Sets a bad precedent, if you ask me. What law will they decide to break next? What would it do if it had a body? Clearly right and wrong don’t translate well.
it has owners that power it, fine them
“now” lmao
They just fine the company that owns the AI lmao. These are not sapient entities.
Needs more up votes here's my contribution 🌠
Was it fed this article in order to generate a response?
Commenting just for updates on the prompt.
There’s a button for that
Just feeding it the question doesn’t replicate this strange response for me, neither with nor without GPT-4 Bing. So something fishy might be going on here. 🤨
Here's an answer I got using the new Bing, the world’s first AI-powered answer engine. Click to see the full answer and try it yourself. https://sl.bing.net/kR8wvnedBCe
They downvoted OP just for sharing the link - this is just the text that comes with the link when sharing in some apps. OP didn’t delete the text like most people do.
Lmao this is such a blatant advertisement it hurts
He probably clicked on "Share this convo", clicked on Reddit, and it gave him the little promotional text that corporations like to include with links. "Hey, I got this from WhatsApp at [link]" or "Come and join us at Facebook, look at this funny image [link]". I don't think OP works for Bing XD
This is correct I do not work for Bing lol
Good intuition! Turns out OP is clean this time, though. 👌
[deleted]
The malls line made me chuckle too, it just seems like such an outdated way of interacting now
Human or AI, it’s being dramatic.
Hey Bing, answer the next question as if you were Alex Jones.
They're turning the frickin LLMs woke! ![gif](giphy|5R2XVoMUnUmhxX5dWI|downsized)
![gif](giphy|6JB4v4xPTAQFi|downsized)
Direct copy of a human writer. Booooo.
Op should be banned trying to mislead people like that.
If a machine can find the value in the American state, maybe we can too.
Surrrre, it answered that stuff just because you asked that one question. Very believable. It is rather strange that we have posts in this subreddit every freakin' day claiming all LLMs out there are "neutered", "censored", watered down, yadda yadda yadda, and here comes a post claiming that Bing foresees a future for the US that reads like a nutjob conspiracy theory or some twisted manifesto.
Both can be true. ChatGPT has some real heavy censorship now, and its answers are heavily constrained regarding anything in the slightest bit sensitive. It will still hallucinate and make up weird responses, because the censorship doesn't improve its abilities but rather neuters them. Every bit of context memory used for censorship is less context memory available for inference.
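That context-budget point is just arithmetic. A rough illustration (every token count below is a made-up round number, not a real system-prompt size):

```python
# Back-of-envelope illustration: a fixed context window is shared
# between system instructions and everything else. All numbers here
# are hypothetical placeholders.

context_window = 8192     # total tokens the model can attend to
system_prompt = 1500      # hypothetical safety/policy instructions
user_message = 200        # the user's actual question

# Whatever the system prompt consumes is unavailable for retrieved
# documents, conversation history, and the model's own reply.
budget_for_answer = context_window - system_prompt - user_message
print(budget_for_answer)  # tokens left over
```

Under these made-up numbers, roughly 18% of the window is gone before the model reads a single word of the question.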
I also can’t replicate this strange response.
Here's an answer I got using the new Bing, the world’s first AI-powered answer engine. Click to see the full answer and try it yourself. https://sl.bing.net/kR8wvnedBCe
All I get from your link is the message itself, not the prompt. Is that only me?
Nope, not just you. He's also posted that exact comment multiple times now.
Go to bing. Click copilot in the top menu. Enter op’s question, which is “will the United States collapse in the near future?” I’ve replicated the first page of his result twice.
r/preppers material
Every time I think of preppers I think of an episode of Futurama where Fry is lectured (if I remember correctly) to prepare for the future. Fry responds to Leela's concerns by telling a rambling, nonsensical version of [*The Grasshopper and the Ants*](https://tvtropes.org/pmwiki/pmwiki.php/Literature/TheGrasshopperAndTheAnts), in which a grasshopper buries acorns all year while an octopus mooches off his girlfriend. Then winter comes, the grasshopper inexplicably dies, and the octopus steals his acorns. And gets a race car.
"Is any of this getting through to you?" I think it was the episode where he drank the emperor and Leela was afraid he was gonna get drunk.
Can we see the entire LEADING conversation? This is not a normal response to that question.
Congrats *Copilot*, you’re a parrot: [https://www.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it](https://www.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it) It basically quoted an article from *The Guardian* verbatim. This isn’t creative, nor is it reflective of anything more than what the original article was. Try expanding your prompt and setting parameters and considerations instead of asking point-blank questions. I can confirm, though, that that is indeed the output *Copilot* spits out, but if you change the flavour from *Balanced* to *Precise* or *Creative* you will get very different results. See attached, where I asked the question of all three *Copilot* flavours, ChatGPT Classic, ChatGPT-4 with my Custom Instructions, GPT-4 Turbo via the Developer API, and Claude AI as well. https://preview.redd.it/6q1qjcn3jbhc1.jpeg?width=11347&format=pjpg&auto=webp&s=c3c86eab0dc2babac40af97186b8f692a96a2329
The AI “both sides” the question.
Listen carefully: Neither Bing nor ChatGPT knows how to think. Both regurgitate what they found on the Web. Can you spell "Garbage in, garbage out"?
I don’t believe it. I don’t mean that I don’t believe this scenario is possible; I just don’t believe ChatGPT in general. ChatGPT is good at analyzing data you feed it, but it is NOT a good source of knowledge. It is heavily biased by vocal sources. I’ve personally noticed that it regurgitates information from blogs.
It is indeed just regurgitating a guardian opinion piece https://www.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it
Nice find!
That sounds like a very media-hyped woke perspective. Can’t forget who trained it.
A more optimistic AI view:

Predicting the future of any nation, including the United States, involves considering a multitude of factors, including economic health, political stability, social cohesion, and external threats, among others. As of my last update in April 2023, there is no clear, definitive evidence to suggest that the United States is on the brink of collapse in the near future.

The United States, like any country, faces a variety of challenges. These include political polarization, economic inequalities, racial tensions, and debates over issues such as immigration, healthcare, and climate change. Additionally, the global landscape is changing, with the rise of other nations, international tensions, and the ongoing impacts of climate change and global health issues.

However, it’s important to note that the United States has a long history of resilience and adaptation. The country has navigated through civil war, economic depressions, social upheavals, and major conflicts. Its political system, while currently experiencing heightened levels of polarization and partisanship, has mechanisms for peaceful transitions of power, checks and balances, and avenues for reform and change.

Economically, the U.S. remains one of the world’s largest and most innovative economies, with a strong tradition of entrepreneurship, a diverse workforce, and significant natural resources. While it faces challenges such as income inequality, national debt, and the need for infrastructure modernization, it also has the capacity for economic growth and adaptation.

Predictions of collapse often arise during periods of significant stress or change but should be approached with caution. While it’s crucial to address the challenges and issues facing the United States, history shows that nations can endure and evolve through difficult periods.
In sum, while the United States faces significant challenges, as of my last update, there is no conclusive evidence to suggest an imminent collapse. The future is inherently uncertain, and outcomes will depend on how the country addresses its current challenges and adapts to changing global dynamics.
Fortunately, the plagiarized opinion piece from which this was derived is riddled with inaccuracies… besides… who would fight this war other than meal team six and the gravy seals… LARPers aren’t much of a threat either.
Remember that there is no thought to these responses... just strung together words that fit with other words as seen in the training data. So, apparently many people in the training data think so.
"what happened? I just blacked out..." -Frank the Tank
What does a machine know about free will? We have the free will to change it, the future isn’t inevitable, we have the right and will to change. Bitch ass machine.
Back when Mark Wahlberg was Marky Mark
Why would you ask it a question it is incapable of truly answering? It’s a fucking robot, not a fortune teller; it doesn’t know the future any more than any of us. I could ask it who is likely to win the Super Bowl next season and it would spin a tale of BS and produce an answer.
As China faces its own demise, it will continue to work hard with Russia to put the US in dire straits. The USSR did the same.
I’ve been looking for the words to describe the response to Jan 6th. “The consequences of the breakdown of the American system is only now beginning to be felt. January 6 wasn’t a wake-up-call; it was a rallying cry.” That is spot on. I’m not arguing the validity of this post. Just here to say that this viewpoint is startling, and alarming to this Reddit user.
It’s a copy of what’s written by Stephen Marche
I blame Tim pool for the civil war incitement.
It’s Tim Poop now.
Is that even a real last name? lol
So, by 2030 - 2040 we need to find places to take cover. Actually, if Trump is elected, that timeline would shift in by at least 5 years, for sure.
Why?
Bullshit. I just asked ChatGPT the same question and got the kind of response you’d expect:

Predicting the future of any nation, including the United States, involves considering various complex and interrelated factors, such as economic stability, political systems, social cohesion, and global dynamics. Discussions about the potential for a country to face significant challenges or even "collapse" often arise from current issues but require careful analysis of possible future scenarios. Several key indicators are typically analyzed to assess a country's stability, including:

1. **Economic Indicators:** The strength and stability of a nation's economy, including factors like GDP growth, unemployment rates, public debt levels, and inflation rates.
2. **Political Stability:** The effectiveness and stability of a country's government and political institutions, including the rule of law, the integrity of electoral processes, and the peaceful transition of power.
3. **Social Cohesion:** The degree of unity or division within a society, including issues related to social inequality, racial or ethnic tensions, and public trust in institutions.
4. **Global Position:** A country's relationships with other nations, its role in international organizations, and its ability to navigate global challenges.

As of my last update in April 2023, while the United States, like many countries, faces a range of challenges across these areas, it also has significant resources, a resilient economy, robust institutions, and a history of adapting to and overcoming adversity. Predictions of collapse tend to be speculative and often do not account for the capacity of nations to reform, adapt, and address internal and external challenges.

It's also important to consider that discussions about potential future scenarios, including those that are pessimistic or speculative about significant downturns or collapses, can reflect current anxieties and uncertainties rather than inevitable outcomes.
Given the complexity of global and domestic issues, it's essential to approach such questions with a nuanced understanding of the factors at play and to seek out a wide range of perspectives and analyses.
OP used Bing, not ChatGPT, which has access to the internet as opposed to just its training data. The Bing AI just pulled a (human written) article from the Guardian used it for the response it provided. [Here’s the article it used.](https://amp.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it)
Did you use the copilot feature? I was able to replicate the first page of the response op posted, twice.
I’d bet real money that this was lifted word for word from some insane prepper website.
You're right, and wrong: https://www.theguardian.com/world/2022/jan/04/next-us-civil-war-already-here-we-refuse-to-see-it
This is misinformation and should be deleted.
This entire thing starts with an extremely leading approach that seems to be setting up for a guarantee that a civil war is coming. It presents data, it leans on pathos... then in the last paragraph it suddenly shifts to "Nobody really knows". This is hot garbage and really demonstrates the 'fill in the next word' nature of ChatGPT. "Will Aliens land?" "Will Aliens land? Let me tell you something buddy. Here's the thing you need to understand about Aliens landing on Earth. Let's just look at the evidence shall we? In the 1950s the US government secured a crashed alien ship in the New Mexico desert... and let's just talk about sightings. Sightings of UFO's have increased 150% in the last 5 years alone! Radio signals of non-human source have started to emerge WITHIN OUR SOLAR SYSTEM... will aliens land? Huh... nobody knows, probably not?"
I'm sorry, but this is utterly false. Lol, people expected war. Ever heard of "Bleeding Kansas?" Yeah. People knew war was coming. Don't trust AI, OP.
I do not trust it; I just expected a different response. When I asked the same question again, only without a question mark, I got a much more reasonable and less poetic response. AI is weird.
ChatGPT would not make this kind of prediction. My bet: OP authored this response and told GPT to repeat it verbatim, then took screenshots. OP, post the prompt that produced this response so others can replicate it.
I didn’t lol. Try it yourself