Obviously, the internet has always been a toxic place (the phrase “flame war” has been around for decades), but it seems to have gotten much worse over the last few years. I used to think that decentralization of the internet would fix the worst of it, but Lemmy seems to have gotten worse alongside the rest of internet culture, proving me wrong. How do we fix or improve this culture of toxicity?


My own thoughts on things that might change to improve matters:
I’m pretty interested in the prospects for something like “curated lists”, where people can publish ban lists or “upvote lists” that users can subscribe to if they decide that they like a particular list’s curation. Something that can leverage positive and negative recommendations more readily. My understanding is that Bluesky has something along those lines.
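As a sketch of how subscribing to curated lists might work on the client side (all the names and the override structure here are invented for illustration, not any actual Lemmy or Bluesky API):

```python
# Hypothetical sketch of "curated list" subscriptions: a user subscribes to
# published ban lists (the same shape would work for upvote/boost lists), and
# the client merges them into one effective filter. Curator names, user
# handles, and the override format are all invented for this example.

def effective_blocklist(subscribed_lists, personal_overrides):
    """Union the subscribed ban lists, then apply personal allow/deny overrides."""
    blocked = set()
    for curator, banned_users in subscribed_lists.items():
        blocked |= set(banned_users)
    # A personal override always wins over a curator's opinion.
    blocked |= personal_overrides.get("deny", set())
    blocked -= personal_overrides.get("allow", set())
    return blocked

lists = {
    "@moderation-collective": ["spammer@example.net", "troll@example.org"],
    "@friendly-curator": ["troll@example.org", "flamebait@example.com"],
}
overrides = {"allow": {"flamebait@example.com"}, "deny": {"scammer@example.net"}}
print(sorted(effective_blocklist(lists, overrides)))
# ['scammer@example.net', 'spammer@example.net', 'troll@example.org']
```

The appeal of this shape is that moderation labor gets shared without being centralized: you can unsubscribe from a curator whose judgment you stop trusting, and your personal overrides always take precedence.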
Reddit was originally intended to rely on voting to do per-user recommendation. Over the years, it drifted away from that; at the time I left, it still didn’t do it. I think that it’s probably also possible to create automated recommendations based on things like a user’s upvotes. I suppose that there’s some echo-chamber potential here, depending upon how one votes.
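A toy illustration of what vote-based per-user recommendation could look like (this is my own sketch of the idea, not anything Reddit or Lemmy actually implements): rank posts a user hasn’t seen by the votes of users whose past votes agreed with theirs.

```python
# Toy collaborative-filtering sketch over vote history. Votes are +1 (upvote)
# or -1 (downvote), keyed by post id. All data here is made up.

def agreement(votes_a: dict, votes_b: dict) -> float:
    """Mean agreement over posts both users voted on; 1.0 = always agree."""
    shared = set(votes_a) & set(votes_b)
    if not shared:
        return 0.0
    return sum(votes_a[p] * votes_b[p] for p in shared) / len(shared)

def recommend(me: dict, others: dict, candidates: list) -> list:
    """Rank candidate posts by agreement-weighted votes of other users."""
    scores = {}
    for post in candidates:
        if post in me:  # skip posts the user has already voted on
            continue
        scores[post] = sum(
            agreement(me, votes) * votes[post]
            for votes in others.values()
            if post in votes
        )
    return sorted(scores, key=scores.get, reverse=True)

me = {"p1": 1, "p2": -1}
others = {
    "alice": {"p1": 1, "p2": -1, "p3": 1},  # agrees with me on p1 and p2
    "bob": {"p1": -1, "p2": 1, "p4": 1},    # disagrees with me on both
}
print(recommend(me, others, ["p3", "p4"]))  # ['p3', 'p4']
```

The echo-chamber risk mentioned above falls straight out of this scheme: content boosted by like-voting users is exactly content you already tend to agree with.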
I see a lot of people being negative on the Threadiverse, people who often sound depressed or something, but not really people fighting with each other that much. There are people who could be nicer, but in terms of interpersonal fighting, I don’t see that much. That being said, I do avoid some instances.
Beehaw.org has a relatively-restrictive moderation policy. That’s not what I personally prefer, but I will say that it has a fairly-upbeat set of discussions on its communities compared to most instances. It defederated with lemmy.world, but has not with lemmy.today (my home instance) and a number of others, so if you’re specifically on the hunt for more-positive conversation, you might investigate it.
My own personal belief is that making votes public has reduced the amount of “I disagree with you, so I downvote” stuff. It’s also possible that there are other factors going on, but I think that after lemvotes.org in particular became widely available, the amount of what I’d call disagreement-downvoting in discussions on controversial topics declined on here. There have been some instances that disallow downvotes entirely (beehaw.org is an example of an instance that does this).
From a moderation standpoint, there are some policies from Reddit subreddits that I think were generally successful. /r/Europe had a pretty hard “do not edit article titles” rule. This went further than I personally would have gone, as I think that adding context to a title can sometimes be useful, but it avoided a lot of issues where people would insert their personal positions into post submissions rather than into a top-level comment. I think that some form of that can be a useful convention.
On a directly-opposing note: I think that a lot of articles are clickbait (and some are ragebait, and the latter tends to drive unpleasantness). I’ve seen various proposals to try to let users submit alternate article titles and have those be voted on, or something like that. Maybe it’d be a good idea to let users submit alternate titles and have mods pick from them. Reddit didn’t do that, but maybe things along those lines could be successfully done.
In general, I don’t think that Reddit got many things wrong. One thing I think it did get wrong was to change how blocking worked at one point from “I ignore all comments from a user” to “that user cannot respond to me”. The Threadiverse software packages presently work like “old Reddit”. I think that that’s a good idea. On Reddit, this change to how blocking worked resulted in a lot of people posting inflammatory content, then blocking the other user so that they couldn’t respond, so it’d look like the other user had conceded the point. Then the other user — now infuriated — would go start responding to other comments in a thread pointing out that this first user had blocked them. That never ended well.
We do have automated tools for detecting tone, like sentiment analysis. This sometimes gets used to do things like identify users getting upset in automated calls and direct them to a human. It might be possible to automatically flag potential flamewars for moderators, to reduce the time until they get noticed.
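A minimal sketch of what flagging might look like. A real deployment would use a trained sentiment or toxicity model; this toy version just counts hostile keywords (an invented list) and flags a thread once replies stay heated for a couple of comments in a row.

```python
# Crude flamewar-flagging heuristic, purely illustrative. The marker list and
# threshold are invented; a sentiment model would replace hostility() in
# anything real.

HOSTILE_MARKERS = {"idiot", "moron", "shut up", "liar"}  # illustrative only

def hostility(comment: str) -> int:
    """Count hostile markers in a comment (crude stand-in for a sentiment model)."""
    text = comment.lower()
    return sum(text.count(marker) for marker in HOSTILE_MARKERS)

def flag_for_moderators(thread: list, threshold: int = 2) -> bool:
    """Flag a thread when enough consecutive comments score as hostile."""
    streak = 0
    for comment in thread:
        streak = streak + 1 if hostility(comment) > 0 else 0
        if streak >= threshold:
            return True
    return False
```

Requiring a streak rather than flagging single comments is a design choice to cut false positives: one heated comment is common and usually fizzles, while several in a row is the pattern moderators would want to see early.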