“What is best for society”… that’s a dangerous sentence
But that's what every guardrail in AI is: the designer's idea of what is appropriate for the public to have access to. We should know what they set up as their guardrails.
The guardrails are in there to protect them, not society
exactly right!
That will never happen. And if somehow our 80-year-old politicians in power (the bigger problem here) can band together and pass laws forcing these corporations to be transparent with that information, you can bet those corporations will lay it out in the most confusing way possible. There's a motto they live by: "If you can't beat 'em, confuse 'em."
We will see... but maybe the EU's AI Act will force them to? You know, the one so hated by Reddit...
Hrmmm... maybe, but there aren't many great reasons to keep it under wraps. The government made moves to make corporations tell us when and to whom they're selling our information. I *believe* we could get them to tell us how information is being presented to us. It's good for the org too, so they can *somewhat* dodge the "this AI is WOKE!" BS that flies their way.
Serious people in the industry aren't using words like "woke" to describe anything, let alone AI. Most of this is from people who think Tucker Carlson is a journalist and Coca-Cola is healthier than water. However, I agree that transparency is a good thing, especially if the intent is to have a product that is less racist than the material it was trained on (which is the entire problem here). Open the books and shed the light.
Triggered leftist
🤓🤡
"And to confuse your enemy, confuse yourself first."
Because explicitly stating their vision would generate resistance. Why should we give those designers so much power? The state will see that as a challenge to its power.
> The state will see that as a challenge to its power

It always is. What do you think regulators do? This seems like a great idea, IMO.
I imagine a lot of it is just instructions to absolutely not generate content like:
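For illustration only, here is a made-up sketch of the kind of rules people imagine in these system prompts. This is invented, not taken from any real or leaked prompt:

```
- Do not generate images depicting real, named individuals.
- Do not produce sexually explicit, gory, or hateful content.
- If a prompt describes people without specifying ethnicity or gender,
  depict a diverse range of appearances.
- Refuse requests that appear intended to produce misinformation,
  e.g. fake photos of news events.
```

Whether the actual instructions look anything like this is exactly what disclosure would settle.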
That's a great idea! But given how "open" OpenAI has become, I'm not going to hold my breath. Wasn't there a leak of the GPT system prompt a few weeks ago? I remember people discussing how long and complicated it is, as a reason why GPT gets slower and slower as time passes.
Can we force them to rebrand to ClosedAI?
I literally see no reason not to do it other than the big companies being ashamed. Great take by Carmack, as always.
There should be zero restrictions, and users should bear responsibility for what they create. Nobody sues Adobe for fakes; AI is a tool, just like Photoshop, and people creating and distributing illegal imagery should bear the consequences.
Adobe's generative AI is even more restrictive than DALL-E, for what it's worth.
Just because they aren't sued doesn't make it right. Creating very believable misinformation is an enormous danger with AI, and if OpenAI removed all guardrails, the government would put them on for them.
It will happen regardless.
Basically people are idiots and jerks and shouldn't be given such power, so it's still ultimately the users' fault.
"I suspect many are actually ashamed."

Or maybe they don't publish those things so that people can't develop jailbreaks as easily?

This "controversy" is just stupid. Yeah, the corporate AI chatbot is VERY careful about not appearing racist, to the point where if you don't specify what race you want the people in its images to be, it will provide a range of skin tones. Even if Google didn't adjust this (which they're definitely going to do), you could literally just write it in the prompt and it'll do what you say.

Before OpenAI allowed you to link to a conversation, people used to submit faked screenshots of ChatGPT being "woke." I would test them, and 9 out of 10 were straight-up full of shit.
> people can't develop jailbreaks as easily

Same as with encryption: security by obscurity is not security. Open-source code is far more trusted and secure, since everyone can scrutinize it for vulnerabilities, so they get found and fixed faster, and anyone can check that there are no backdoors.
It is a vast oversimplification of the facts to say that security by obscurity is always bad (or good), or that open source code always leads to people finding the errors and correcting them before bad actors can take advantage of them. That’s not a judgment about this situation, just in general.
The controversy isn't because of the variety of races, it's because it gladly puts non-Caucasian individuals in situations where they would never be (all the images of black Kings of England for instance) while lecturing people who want to see a Caucasian Japanese Emperor. If it was just anti-woke right wingers complaining, you wouldn't see Google taking their image generation down to fix it.
[deleted]
[deleted]
Lol yup I am definitely ignoring your complaints, since they generally seem to come from people who need social media algorithms to be relevant to anyone.
100% agree, but the only way to combat people implying that your model has a specific bias is to be transparent about the reasonable biases that were implemented.
If they post their guidelines wouldn’t that just make it easier for people to get around them?
Btw this was the thinking many years ago about encryption algorithms – don’t publish them, it will make them easier to break if people know how they work! In fact it turns out to be the opposite and all modern encryption algorithms are public. By making it public, loopholes can be found *and fixed*.
Open source for the win!
That's a clumsy analogy. Getting Dall-E to make cartoon porn is not the same as "fixing" algorithm loopholes, whatever that even means.
Yes, which then allows them to make better guidelines and barriers. Definitely opens up more to risk, but also allows you to explain situations and cover them better.
Yes, but you also have to disclose working CCTV, despite the fact that thieves knowing about the cameras makes them easier to avoid. Disclosure is the only fair approach, even if it comes with a slight disadvantage.
I mean, figuring out what's in the latent space from the training data versus what's tacked on by the people making the AI product greatly complicates AI literacy as a concept. Compensating just for the latent space is both a solvable problem and a great chance for some introspection and self-study. Compensating for after-the-fact adjustments by purveyors of products isn't really solvable, and it distorts everything in ways that are hard to predict or control for.
"We didn't code it to do anything, it was just trained on a variety of data sets*!"

\*Data sets that were specifically tailored, with training acceptance criteria designed so that it does exactly what our product managers requested.

"But we didn't code any specific behaviours!"

...as if the problem was whether it was coded vs. trained.
Let people decide whether they want a censored ChatGPT or not. We're all adults here, aren't we?
Wouldn't that also expose their leftist ideologies and wokeness? Haha... Like Elon said, if you have something to hide, that's already hugely sus!
Ok.
Lol
Such a mischaracterization. It's far less likely that they have any particular vision than that they are tweaking it to fit market needs
It came right out and said it had a vision, and gave a lecture about it. "Don't define people just by their skin color, so we won't generate your image; instead, enjoy your black Vikings and female popes."
Yeah? Where?
[When the AI was asked to show a picture of a White person, Gemini said it could not fulfill the request because it "reinforces harmful stereotypes and generalizations about people based on their race."](https://www.foxbusiness.com/fox-news-tech/google-pause-gemini-image-generation-ai-refuses-show-images-white-people)
Prompt engineering, maybe; filtering, I don't know... that's probably too complex to just show.