- cross-posted to:
- technology@lemmy.world
It’s the logical end point of a particular philosophy of the internet in which cyberspace is treated as a frontier with minimal oversight. History offers a pretty clear pattern here: any ungoverned commons eventually gets overrun by bad actors. The spam bots and trolls are a product of the selection pressures inherent in such environments.
The libertarian cyber-utopian dream assumed that perfect freedom would lead to perfect discourse. What it ignored was that anonymity doesn’t just liberate the noble dissident; it also liberates the grifter, the propagandist, and every other source of toxicity. What you get in the end is a marketplace of attention-grabbing performances and adversarial manipulation, a problem now supercharged by scale and automation. The chaos of 4chan or the bot-filled replies on Reddit is the inevitable ecosystem that grows in the nutrient-rich petri dish of total laissez-faire.
We can now directly contrast the Western approach with the Chinese model that the West has vilified and refused to engage with seriously. While the Dark Forest theory predicts a frantic retreat to private bunkers, China built an accountable town square from the outset, creating a system where the economic and legal incentives align toward maintaining order. The result is a network whose primary social spaces are far less susceptible to the botpocalypse and the existential distrust the article describes.
I’m sure people will immediately scream about censorship and control, and that’s a valid debate. But viewed purely through the lens of the problem outlined in the article, the degradation of public digital space into an uninhabitable Dark Forest, the Chinese approach is simply pragmatic urban planning. The West chose to build a digital world with no regulations and no building codes, run by corporate landlords. Now people are acting surprised that it’s filled with trash, scams, and bots, and that the only thing left to do is for everyone to hide in their own private clubs. China’s model suggests that perhaps you can have a functional public square if you establish basic rules of conduct. It’s not a perfect model, but it solved the core problem of the forest growing dark.
It’ll be interesting to see how Discord enshittifies.
It’s the default destination for the niche-interest “cozy web,” and they could go down several paths.
It already has. Any time I boot it up it asks me to sign up for premium this or that.
Seems believable. I’m curious: how do Lemmy instances protect themselves from AI slop and bots?
Manual labor, the Communist Party of China pays us to keep Lemmy free of bots and revisionists.

Alt text: You guys get paid?
Manual moderation, and there are some moderation bots that can detect spam.
In my imagination, some sort of referral/voucher system might work. A invites B, B invites C. C turns out to suck. Ban C, discredit B heavily and discredit A lightly. Enough discredit and you get banned or can’t invite more people.
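A minimal sketch of how that vouching chain might be tracked, purely as an illustration of the idea; the `InviteGraph` class, the penalty weights, and the thresholds are all invented here, not anything any instance actually runs:

```python
from collections import defaultdict

class InviteGraph:
    BAN_THRESHOLD = 3.0      # assumed: enough accumulated discredit gets you banned
    INVITE_THRESHOLD = 1.0   # assumed: above this you can no longer vouch for anyone

    def __init__(self):
        self.inviter = {}                    # member -> who vouched for them
        self.discredit = defaultdict(float)  # member -> accumulated discredit
        self.banned = set()

    def invite(self, inviter, invitee):
        if inviter in self.banned or self.discredit[inviter] > self.INVITE_THRESHOLD:
            raise PermissionError(f"{inviter} may not invite new members")
        self.inviter[invitee] = inviter

    def ban(self, member, heavy=1.0, light=0.25):
        """Ban a member and push discredit up the invite chain."""
        self.banned.add(member)
        parent = self.inviter.get(member)
        if parent is not None:
            self.discredit[parent] += heavy           # the direct voucher takes the heavy hit
            grandparent = self.inviter.get(parent)
            if grandparent is not None:
                self.discredit[grandparent] += light  # one step removed takes a light hit
        # anyone who has collected too much discredit is banned in turn
        for m, score in list(self.discredit.items()):
            if score >= self.BAN_THRESHOLD and m not in self.banned:
                self.banned.add(m)
```

In the A/B/C example, banning C adds 1.0 discredit to B and 0.25 to A; the hard part in practice would be tuning the weights and adding decay so one bad invite doesn’t nuke an otherwise honest chain.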
Apart from the Fediverse not being that interesting a target for now, the first line of defence for most instances is manually approved sign-ups, as far as I can tell.
When the Fediverse grows, I think that weeding out accounts that post slop will be the “easy” part; the hardest part will be to identify the silent bot accounts that do nothing but upvote.
I vaguely remember kbin allowing you to see who upvoted a particular post, so it might not be too difficult.
Tough to differentiate bots that only vote from human lurkers who only vote.
Yeah, you’d need some graph analysis. Bots will all simultaneously upvote certain things, and over time a pattern should emerge.
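As a rough illustration of what that analysis could look like (the data layout, the Jaccard overlap measure, and the 0.9 threshold are all assumptions, not anything Lemmy actually ships):

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap of two vote histories: |intersection| / |union|."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def suspicious_pairs(votes: dict[str, set[str]], threshold: float = 0.9, min_votes: int = 20):
    """Find account pairs whose upvote histories are nearly identical.

    votes maps an account name to the set of post IDs it has upvoted.
    Human lurkers who only vote rarely share near-identical histories,
    so very high overlap across many votes hints at coordination.
    """
    flagged = []
    for (u, u_votes), (v, v_votes) in combinations(votes.items(), 2):
        if len(u_votes) >= min_votes and len(v_votes) >= min_votes:
            if jaccard(u_votes, v_votes) >= threshold:
                flagged.append((u, v))
    return flagged
```

Timing would tighten it further: bucketing votes by hour and comparing those sequences would separate accounts that vote on the same things from accounts that vote on the same things at the same moment.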
They don’t, but they’re an uninteresting target for now.