When the Digg beta launched, we immediately noticed posts from SEO spammers noting that Digg still carried meaningful Google link authority. Within hours, we got a taste of what we’d only heard rumors about. The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn’t appreciate the scale, sophistication, or speed at which they’d find us. We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can’t trust that the votes, the comments, and the engagement you’re seeing are real, you’ve lost the foundation a community platform is built on.
This isn’t just a Digg problem. It’s an internet problem. But it hit us harder because trust is the product.
It’s a social media problem. It’s going to be hard to offer pseudonymity and cheap, freely created accounts while also countering bots that spam the system to manipulate it. The model worked well in an era before very human-like bots were easy to produce.
It might be possible to build webs of trust with pseudonyms. You can make a new pseudonym, but its influence and visibility get tied to, for example, whether the users or curators you trust (and the people they trust) vouch for it, so the pseudonym carries little weight until it acquires reputation. I do not think a single global trust “score” will work, because you can always build bot webs of trust that vouch for each other.
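A rough sketch of what I mean, assuming a toy model where accounts vouch for each other and a pseudonym’s weight is computed per viewer, rooted in the accounts that viewer already trusts (all names and numbers here are made up for illustration):

```python
# Toy per-viewer web-of-trust weighting: a pseudonym's weight comes from
# endorsements by accounts the viewer already trusts, attenuated per hop.
# There is deliberately no global score; a bot ring vouching for itself
# gets no weight unless the viewer's own web reaches into it.

from collections import defaultdict

# endorsements[a] = set of accounts that a vouches for (illustrative data)
endorsements = {
    "alice": {"bob", "carol"},
    "bob": {"new_pseudonym"},
    "carol": {"bob"},
    "new_pseudonym": set(),
}

def trust_weight(viewer_roots, endorsements, decay=0.5, max_hops=3):
    """Breadth-first trust propagation from the viewer's directly trusted
    accounts. Weight is multiplied by `decay` at each hop, so a brand-new
    pseudonym with no vouches from the viewer's web scores ~0."""
    weight = defaultdict(float)
    frontier = {root: 1.0 for root in viewer_roots}
    for _ in range(max_hops):
        next_frontier = defaultdict(float)
        for account, w in frontier.items():
            for endorsed in endorsements.get(account, ()):
                gained = w * decay
                if gained > weight[endorsed]:
                    weight[endorsed] = gained
                    next_frontier[endorsed] = gained
        frontier = next_frontier
    return dict(weight)

print(trust_weight({"alice"}, endorsements))
# {'bob': 0.5, 'carol': 0.5, 'new_pseudonym': 0.25}
```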
Unfortunately, the tools to unmask pseudonyms are also getting better, and throwing away pseudonyms occasionally or using more of them is one of the reasonable counters to unmasking, and that doesn’t play well with relying more on reputation.
I’m beginning to think that, as annoying for users and as difficult for building a userbase as it may be, the answer might ultimately have to be for future social sites to charge people for use in some way, whether to create accounts, as a subscription, or just for the ability to post/comment/vote. If it’s no longer feasible to keep bots out, and there’s financial gain in using them, then they’re going to get used. At that point, to deter them, it has to be more expensive to run a bot than that bot can be expected to bring in through its contribution to an advertising or manipulation campaign. On the bright side, I guess it might lead to a shift away from advertising everywhere. Either you charge people and therefore don’t need ads, or you don’t, and most of your ads end up being “seen” by bots, which advertisers probably don’t want to spend money to reach anyway.
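A back-of-the-envelope version of that break-even argument, with entirely invented numbers just to show the shape of it:

```python
# Toy break-even check: a paid account only deters bots if the fee
# exceeds the expected revenue a bot account earns for its operator.
# Every number below is invented purely for illustration.

def bot_is_profitable(account_fee, revenue_per_account, detection_rate):
    """A bot farm keeps paying for accounts as long as the expected payout
    per account (discounted by how often it gets banned before earning
    anything) beats the fee."""
    expected_payout = revenue_per_account * (1 - detection_rate)
    return expected_payout > account_fee

# A $1 signup fee vs. a bot that earns ~$3 per account from a spam campaign
# and gets caught before paying off ~40% of the time:
print(bot_is_profitable(account_fee=1.00,
                        revenue_per_account=3.00,
                        detection_rate=0.40))  # True -> the fee is too low
```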