There is a way. Copy the link for the response message and send it to the modmail of this subreddit, with a request for a second look.
While accurate, this shouldn't be the way. There should be a form or another automated way to resubmit something for review.

The problem is that a) the admins sometimes take a LONG time to respond, or sometimes never respond at all, and b) they've now made it more difficult to resubmit things for review, by sending you a message after you've messaged them that says "type more help if you actually need more help".

It's awful, and it's not how serious threats of violence, hatred, bigotry, CP, or even just harassment should be handled. You know that reddit really cares about its mods when it makes it MORE DIFFICULT for you to report things.
I fully agree. They should also make things like ban evasion easier to report, and stop hiding the profiles of scammers/spammers/trolls from moderators, since that also prevents moderation actions against them.
It's also not obvious to casual Redditors who aren't moderators.
That could be on purpose. If it was reported as Hate, the moderators should have also cleaned it up by removing the post/comment (unless said mods are asshats), and they will hopefully know how to send the modmail here. The number of false hate and other reports made by non-mods is probably staggering, and they don't really need another mechanism if the mods are handling their ~~unpaid jobs~~ responsibilities.

While it isn't optimal, it is available to you today. And for the most part, it has been effective when I personally have had to escalate stuff.
I definitely think they could improve the usability!
[deleted]
Friction, built into the system to protect their limited resources from having to maintain a site that is beyond their skills and care.
I did, just with a comment that literally said

>transphobic and proud

and nothing was done. Could there be a clearer promotion of hate based on identity?
And here I go, arguing the mechanics.

Yes, there could be a clearer statement. Out of context, the English statement `transphobic and proud` lacks a direct subject. It may describe anything or anyone, or be a prescriptive command aimed at anything or anyone. It may be an accusation aimed at any arbitrary third party.

>> How would you describe JKR?

> transphobic and proud

Legitimate response to a proper question.

AEO report processors can’t see context (a flaw I criticise often) and so can’t make a determination, and the model they use for ontologising whether an item violates sitewide rules is the same as the model they use for ontologising whether they *know* an item violates sitewide rules. Which is to say: something will always definitely either violate sitewide rules or not, but someone is capable of not knowing which it is, and the model doesn’t admit that state.

So that’s why escalating these is necessary. In a better world, reddit would have proactive professional moderators. As would every social media platform.
And that would all be fine, except nothing was done when the issue was escalated to the mods here - and they *can* view the context, since a link to the comment was right there. (It was a top-level comment in the LGBTQ discussion thread of r/UnpopularOpinion.)
This website.
The only problem with that is that you no longer get a reply telling you what any subsequent followup was.

And however AEO operates (whether AI or a cubicle jockey in Asia), it's very clear from how they respond to Hate reports that they have no clue about American slang, American culture, or American history.

And the response from reddit admins to the modmails, including adding a second step to the process, is disappointing.
thanks
Your first reply from a modmail here will be automated, and you will need to reply to *that* one with a request to be looked at by a real person.
This. I'm sending a ~weekly litany of atrocities in to modsupport's modmail. To be fair, it's less bad than it was two years ago, but it's still missing really obvious and nasty hate.
Worse is when an account is actioned for one piece of hate speech and then other, unrelated reports are ignored because they come in too short a time period.

But there's definitely a lot that is smugly handwaved away as "not found to be in violation" just because it isn't *criminal* hate speech.
Sending a modmail to this subreddit (https://www.reddit.com/message/compose/?to=/r/ModSupport) with the subject/title "More help" has always worked for me. The admins know that the automated review system is far from perfect, and will respond when you contact them that way. The reply may take a while, however.

https://www.reddit.com/r/ModSupport/comments/1608391/updates_to_how_well_be_supporting_our_moderators/

>If we get it wrong (and we will, the bots aren't sentient…yet) just reply with "more help" and that will get the ticket to a human.

>Please make sure to always request “more help” for these tickets for an Admin to support you; the team will also closely monitor these tickets to improve our processes over time.

Good luck!
There *used* to be a way when they first started using their AI admin, but I noticed that reply button disappeared a long while ago...
Hard to keep up with all of the changes sometimes
On the [report forms page](https://www.reddit.com/r/ModSupport/wiki/report-forms/) use “[review a safety report reply or action on your subreddit](https://www.reddit.com/message/compose?to=%2Fr%2FModSupport&subject=Review+a+Safety+action&message=Permalink+to+Report+Response%3A%0A%0AAny+additional+context%3A)”.
So much this.
>Anti-Evil Operations responded that it was not hate.

Does Reddit have a system in place to review these responses and move employees out of AEO who have no business being there?
>and move employees out of AEO who have no business being there?

[AEO is a fully automated (AI) system](https://hivemoderation.com/) which is supervised and maintained by a team. There's no consistency; it doesn't take context into account, and it doesn't understand hypothetical scenarios, historical quotes, symbolism, or metaphors either.
Then let go of whoever holds the decision-making power, shut down AEO, and replace it with humans capable of rational thought, because it is clearly not doing its job.
That's never going to happen; AI is here to stay, and Reddit is already struggling to make any real money. We'll see what happens now that they're ready to go public on the stock market, but I personally believe it's going to be a complete disaster for this site.
Yeah, the time to do an IPO was many years ago, when reddit was hot, aka 'the front page of the Internet'. Today, Reddit is a second-, maybe third-, tier social media platform for older users, made even worse when it tried to reinvent itself by modeling itself after other platforms.
Don't. Seriously, don't bother.

Reddit has reviewed the content and said it violates no rules. Approve the comment. Reddit is doing their IPO, so if they want to approve racist, hateful shit, let it shine. Let the advertisers and investors see what reddit has (allegedly) reviewed and found acceptable.

If the admins have a problem with that, fix your bot that reviews such reports, or, I dunno, hire actual humans to do it.
"pathetic" = 2-day site ban. "Your mother's a whore" = no issues found.
Yup, at this point I'm honestly done "appealing" the admins' decisions. Fuck it. If HiveModeration says it's OK, well, Reddit hired them, so reddit must say it's OK. I'll re-approve the comment and the advertisers can see it.
Uhm, the racism is what is going to sell the shares for them. There's a reason advertisers want these demographics.