With recent events highlighting the value of quality moderation, it got me thinking: how can we help you out?
What steps, considerations, encouragements or warnings would you give the userbase regarding best practices for using the report feature (or other interaction with mods)? Reporting less, reporting more, putting more detail in the form, or just leaving it blank?
I was thinking of maybe putting together a PSA-style infographic (à la “think before you post”) if the answers you give are poignant or largely unknown to the average user.
If you’re in doubt if some content violates the rules, report it, and let the mods decide if it’s okay or not okay. That is not abuse of the report function.
Include a short description of why you’re reporting some piece of content. Especially in larger comms, the mod queue can get really large. If reasonable, mention the rule being violated; a simple “r1 off topic” goes a long way.
Context is everything. If what a user said only sounds bad in a certain context, say it. If the user is clearly problematic due to their profile, say it.
You’re probably better off not interacting with the content that you’re reporting.
Don’t boss the other users around. It’s fine to be informative; it is not fine to act as a moderator when you are not one. If moderator intervention is necessary, report it.
Stop giving shitty mods a free pass. Honest mistakes happen; but if the mod in question is assumptive, disingenuous, trigger-happy, or eager to enable certain shitty types of user, spread the word about their comm being poorly moderated. And don’t interact directly with the comm. I think that at least here in the Fediverse we should demand higher standards from our mods.
You’re probably better off not interacting with the content that you’re reporting.
I’ll second this. Responding to the problematic content usually just leads to a lot more problematic content that the mods need to sort through.
Okay, already glad I made this post, as I have definitely erred in this regard several times!
I have definitely erred in this regard several times!
I think that everyone did this at least once, so don’t worry too much. Still, it’s less work for the mods if you don’t do it.
For sure. Responding is the natural thing to do. But, for me at least, I end up reading every response and deciding if I’m safe to just remove the first reported comment, or every child comment, or a mix.
Maybe some mods just delete the parent comment and all children to keep it simple, but I try to weigh each response fairly.
Also, depending on what it is, responding can just make you a target for harassment when/if that person comes back on a different account. Best to just avoid it and not make yourself known to them.
I’d add: don’t be butthurt if the mod does not agree.
I’ve done that before and had mods threaten to ban me if I reported anyone again. Ironically, it was someone that was harassing and being uncivil that I was reporting. Like WAY worse than most comments I’ve seen removed. And this was on a fairly popular community here that is still around.
It’s not always so easy to assume that mods are going to be fair. Reporting people comes with a risk.
If your description of the events is accurate: that’s a shitty mod, and a good example of what I wrote in the last paragraph. We should be denouncing this sort of crap, and avoiding comms where it happens.
It’s not always so easy to assume that mods are going to be fair. Reporting people comes with a risk.
I’m not assuming that the mods are going to be fair. I’m taking into account that shitty mods do you a favour when they out themselves, as they’re basically showing you which comms to avoid.
If you have that happen to you, I recommend reaching out via DM either to your instance’s admins or to the admins of the server their community is on. Give them the context of the report and also explain how that mod reacted. If a mod acts like that, they are very likely an abusive moderator, so make sure to let admins know.
So, the things I appreciate as an admin are:
a) Make sure you state whether it’s simply a matter of breaking community rules, or whether it needs an admin’s attention. As it stands now, admins see all reports made for all communities on our instances, which can be a lot. I leave reports for communities that I don’t moderate, but they then sit around until a moderator actions them. And the large incoming volume, plus the older ones hanging around, can make it hard to tell at a glance which ones require an admin’s eyes.
b) If it’s not immediately obvious from the reported post, tell us where to find the context we need to see why the post is a problem: “Look at the poster’s history”, “Will make sense if you read the previous post”, etc.
Read the rules for a community and report accordingly. People like to complain when stuff stays up, but nobody has bothered to report it…
One thing to be aware of, at least when I’ve tried it reports do not federate from Lemmy to kbin.
Also, not everything has Mastodon-style reports built in. Not sure about modern-day Friendica, but AFAIK, Hubzilla and (streams) don’t.
So if you’re on Mastodon and you report a Hubzilla user on their home hub, and nothing happens, that doesn’t mean moderation is being neglected to such a degree that Fediblocking the whole hub is justified. Nor does it mean that Hubzilla’s culture is that much different from Mastodon’s (although it is).
It simply means that Hubzilla doesn’t understand Mastodon reports.
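For context, Mastodon-style reports federate as ActivityPub “Flag” activities; software that never implemented “Flag” will usually just drop them silently. A rough sketch of what such an activity looks like (all IDs and URLs here are made up):

```python
# Rough sketch of the ActivityPub "Flag" activity that a Mastodon-style report
# produces when it federates to the remote server. Hosts that don't implement
# "Flag" (e.g. the Hubzilla example above) typically just ignore it.
# Every URL below is a placeholder, not a real endpoint.
flag_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://mastodon.example/reports/12345",
    "type": "Flag",
    "actor": "https://mastodon.example/actor",   # usually the instance actor, not the reporter
    "content": "r1 off topic, see the poster's history for context",
    "object": [
        "https://hub.example/channel/alice",      # the reported account
        "https://hub.example/item/67890",         # the reported post(s)
    ],
}
```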
Tangentially related: if you see something that needs to be addressed now, like CSAM or gore, notify an instance admin via Matrix. Matrix can send push notifications, so you’re more likely to get a prompt response. Some instances also have public Matrix chats you can use.
You can find the Matrix account info for Lemmy users by clicking the “Send Secure Message” button in a user’s profile.
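If you’d rather script that ping than use a graphical client, here’s a minimal sketch using the matrix-nio Python library; the homeserver, account, room ID and credentials are all placeholders:

```python
# Minimal sketch of pinging an admin room over Matrix with matrix-nio
# (pip install matrix-nio). Most people will just use a normal Matrix client;
# this only illustrates the idea. All identifiers below are placeholders.
import asyncio
from nio import AsyncClient

async def notify_admin() -> None:
    client = AsyncClient("https://matrix.example.org", "@me:matrix.example.org")
    await client.login("my-password")
    await client.room_send(
        room_id="!admin-room:matrix.example.org",
        message_type="m.room.message",
        content={
            "msgtype": "m.text",
            "body": "Urgent: CSAM reported on <community>, please remove ASAP.",
        },
    )
    await client.close()

asyncio.run(notify_admin())
```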
Still report it as well; reports send emails to the mods and the admins. Just make sure it’s identifiable at a glance: type “CSAM” or whatever 1-2 words make sense. You can add details afterwards to explain, but it needs to be obvious at a glance, and mods/admins can also route those reports to a special priority inbox to address them as fast as possible. Having those reports show up directly in Lemmy makes it quicker to take action, or to do bulk actions when there’s a lot of spam.
It’s also good to report it directly in the Lemmy admin chat on Matrix afterwards, because in the case of CSAM everyone wants to delete it from their instance ASAP, in case it takes time for the originating instance to delete it.
I’ve noticed a sharp increase in spam and I’ve been reporting each one simply as “spam”.
I then block the user
Many of these posts have dozens of downvotes.
Several go back months, which I discover when a new variant turns up.
I’m unsure if what I’m doing is helping or not, and as an ICT professional, I’m not sure why this obvious spam isn’t caught earlier.
It’s helpful. The issue is that moderation actions like mass removal of content don’t always federate in ways that are obvious, and it was worse with older versions of Lemmy.
Report it, and at least your instance admins can remove it from your instance.
I then block the user
Is this not a slightly selfish action? It solves the problem for you, but doesn’t make the community better for everyone. I feel like blocking users should be reserved for issues like harassment, not spam.
Blocking spam is not selfish, no.
I’m not sure I’d call it selfish. There are users with particularly distasteful opinions that I prefer to downvote when I see them expressed. If I blocked them, then I wouldn’t be able to help shape the community in a positive manner.
If I blocked them, then I wouldn’t be able to help shape the community in a positive manner.
Yeah, I feel like this is an important aspect that many users miss.
That’s fine to do once you’ve reported it: you’ve done your part, and there’s no value in still seeing the post; it’s going to get removed anyway.
But what about future posts from the same account? If I’ve blocked the account, I can’t report posts I don’t see.
Is this not a slightly selfish action? It solves the problem for you, but doesn’t make the community better for everyone. I feel like blocking users should be reserved for issues like harassment, not spam.
This is an aspect that I had not considered. Even thinking about it now leaves me unsure of the best way forward.
Specifically, whilst it’s a valid argument that blocking the user only solves this for me, and that not blocking would help me see whether the issue was dealt with, I feel that leaving the user free to roam across my screen impacts me directly. And if I’m not a moderator in a community, it’s not my place to second-guess their decision to leave such a user and post in place.
In other words, I’m stating to a moderator that I think that this post is spam and should be dealt with accordingly, but if you leave it alone, that’s your choice.
I moderate several communities outside of the fediverse and spam in my communities is a one-strike ban. That’s not what everyone does.
Having now thought this through again in more detail, I’m comfortable with blocking the user.
I don’t and won’t use blocking for users, basically for that reason: it doesn’t actually solve anything, it’s just pretending things are okay for you when they aren’t for everyone else. It’s probably part of the reason why reporting doesn’t happen as much as it should.
Even for harassment it is particularly useless, because even though you stop seeing them and getting notifications, they can continue replying in your threads, and they can even use that to turn people against you. If it at least prevented them from posting and commenting on your profile I’d use it in those situations, but as it stands it doesn’t, so it’s useless to me. If somebody is harassing me, the last thing I want to do is hide them while allowing them to keep harassing me; it gives them more opportunities to cause trouble and removes my opportunity to report them for that trouble.
Very well put.
With recent events highlighting the value of quality moderation
What happened?
Here’s the thread that explains it: https://lemmy.blahaj.zone/comment/8694944
Here are a few tips on reports:
-If you think it merits a report, report it.
-If you’re unsure whether it breaks a rule, DM a mod.
-Ideally, write the number of the rule broken in the report. If you’re typing past three sentences, it may contain unnecessary info.
-If someone is breaking a “don’t be mean” rule, the answer isn’t for you to break it too.
Here are a few non-report ways you can help:
-Set a positive tone.
-A lot of good posts/comments probably never get made because people think they’re too low effort or that no one is interested.
-Crosspost to related communities to help people discover them.
Report AND downvote. Reports only go to your instance and the originating instance whereas downvotes go everywhere. Highly downvoted content will get noticed by someone, eventually.
downvotes go everywhere
Unless you’re in kbin.social, which doesn’t federate downvotes.
That said, it’s still handy for other kbin users, and a good indication that it’s been reported already.
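To illustrate why downvotes tend to travel further than reports, here’s a rough, hedged sketch of the two activities as they typically federate from Lemmy; the exact fields vary by software and version, and every URL below is made up:

```python
# Rough sketch (illustrative only). A Lemmy downvote is a "Dislike" sent to the
# community, and the community's home instance then wraps it in an "Announce"
# to its followers, so every subscribed instance ends up seeing it:
announce = {
    "type": "Announce",
    "actor": "https://lemmy.example/c/somecommunity",
    "to": ["https://lemmy.example/c/somecommunity/followers"],
    "object": {
        "type": "Dislike",
        "actor": "https://other.example/u/someone",
        "object": "https://lemmy.example/post/123",
    },
}

# A report is a "Flag" kept locally and delivered only to the instance hosting
# the reported content, so third-party instances never see it:
flag = {
    "type": "Flag",
    "actor": "https://other.example/u/someone",
    "object": "https://lemmy.example/post/123",
    "summary": "spam",
}
```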