Marketers and their bots have been using reddit to hype up brands. No wonder Reddit feels like shit these days.
God, I just hate the way these people fucking talk. Everything is a bulleted list and sentence fragments.
100% written by an LLM. They always use this tone and it’s infuriating.
Like the world’s worst haiku.
That’s one of the signs of LLM output: take any idea and have it flesh it out into a short article. It’ll bullet point the crap out of it
Yeah, but even when it’s not an LLM, they type like this now
Monkey see, monkey do…
This doesn’t explain why Reddit has decided that I need multiple Hindi sub referrals every day.
Years and years of data regarding me, zero Hindi, zero Indian, zero interest, AND YET here’s another suggested Hindi sub! Fantastic work.
But the Jesus ads sealed the deal, adios Redditto
apologies for my off topic ramble, but I feel better.
people have been doing this the sweaty way for a decade
The turbo-hell part is that the spam comments aren’t even being written for humans to see. The intention is that ChatGPT picks up the spam and incorporates it into its training.
I worked at a company that sold to doctors and the marketing team was spending most of their effort on this kind of thing. They said that nowadays when doctors want to know “what should I buy to solve X?” or “which is better A or B?” they ask ChatGPT and take its answer as factual. They said that they were very successful in generating blog articles for OpenAI to train on so that our product would be the preferred answer.
My god. Somehow I hadn’t thought of doctors using LLMs to make decisions like that. But of course at least some do.
Oof. Haven’t met a lot of doctors huh? Check out some of their subreddits
Considering that LLM content that makes it into training content makes the trained LLMs worse… is this adversarial?
You mean more shit? Because it was already shit.
I have no idea what any of that means, and I’m happy with that.
Basically they figured out a way to train AI to recognize Reddit threads going viral (and/or predict which ones will), figure out which of those will also rate highly in Google results and tend to be used as sources by the biggest LLMs, and post in those threads about whatever you want to generate attention for. So: an overcomplicated way of automating advertising. Optimized posting to convince LLMs to talk about whatever you want to advertise.
I’ve always said this kind of SEO was inevitable; Google is at fault for letting the search-optimized result and the best result for what the user is actually asking stop being the same result. We’re now going to see one of two things: LLMs selling whatever this tactic gets used on, or essentially a sort of adblock being built into LLM training and search APIs to keep it from working, to make LLMs less likely to fall for native advertising/astroturfing.
Means bozos are making even searching exclusively by Reddit useless, because they’re making the post get to the first page through writing SEO + ads for their own shit on it.
Not quite. They’re making posts on reddit that few if any humans will ever read, targeting rising threads and planting comments before AI reads them. Then when someone asks AI a related question, it regurgitates the planted comment rather than established facts.
So it’s not SEO on humans searching reddit, more like SEO in the AI domain.
Shitification.
It means shitification.
I don’t understand the point of this. Like, you figure out how to increase traffic on certain posts/comments, and there’s somehow a push for this? Do they get money somehow? These people always use terminology and CEO buzzwords as if it’s some big business level people are aspiring to reach, but what’s the actual point? Why would I care if my post/comment exploded? Was I just using Reddit wrong?
You mention your product in the reply and hope that some poor sap doesn’t realize it’s astroturfing and thinks they’re finding a really glowing honest review from a totally organic real person who recommended a thing they found that actually works
It’s not just about random people reading the comment, but specifically LLMs that use Reddit as a source, because becoming the chatbots’ go-to answer when people ask “what lawnmower should I buy” is increasingly more valuable than paying for a Google search ad.
So it’s just a grift. Makes sense, they always use grift-style buzzwords. I was about to comment on the ridiculousness of building a business solely on manipulation, but then I thought about it a bit more haha. Thanks for the explanation.
It’s a grift, but it’s extra steps. It’s not about affecting the experience on reddit, but for AI users. They use reddit to plant answers, which AI then trains on and regurgitates later.
Eventually the reddit thread would probably balance out, and incorrect information should get downvoted and replaced by corrections from people who know better. However AI might not account for this and could still spit out the planted information. It’s this delicate manipulation that this LinkedIn Lunatic is bragging about here.
building a business solely on manipulation
See also: all of advertising and marketing
Haha yep, my exact train of thought while typing
Even worse: They are hoping that LLMs in training don’t realize that it’s an ad
This is why I just never talk about brand-name products online, I don’t want to seem like an advertising shill account (I’m just here to get into heated political arguments and shitpost)
I’ll recommend things to people I know IRL, but very rarely will I do it online
deleted by creator
I don’t post there anymore but the only time it felt like I was talking to real people was on my small state sub. That’s the only reason I even lurk. Since I’m banned from Facebook it’s the only place to get the tea.
lol how did you get banned from Facebook? I haven’t posted there in years so I don’t know what’s going on there. Are they handing out bans like Reddit now?
The counter to this is a web of trust. You break the trust, you’re out of the web, and nodes connected to you are also out (for a period). And you need two members to vouch for you to get in.
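The scheme above can be sketched in a few lines. This is a hypothetical design (the class, the two-voucher threshold, and the cooldown length are all assumptions for illustration, not any existing Fediverse feature): admission requires two vouches from members in good standing, and a breach ejects the offender and temporarily suspends everyone who vouched for them.

```python
# Minimal sketch of a vouch-based web of trust (hypothetical design).
# A candidate needs two vouchers to join; a trust breach removes the
# offender and suspends their vouchers for a cooldown period.

class WebOfTrust:
    REQUIRED_VOUCHES = 2
    SUSPENSION_ROUNDS = 3  # arbitrary cooldown length, for illustration

    def __init__(self):
        self.members = set()
        self.vouchers = {}    # member -> set of members who vouched for them
        self.suspended = {}   # member -> cooldown rounds remaining

    def bootstrap(self, member):
        """Seed the web with a founding member (no vouches needed)."""
        self.members.add(member)
        self.vouchers[member] = set()

    def join(self, candidate, sponsors):
        """Admit a candidate only if enough unsuspended members vouch."""
        valid = {s for s in sponsors
                 if s in self.members and s not in self.suspended}
        if len(valid) < self.REQUIRED_VOUCHES:
            return False
        self.members.add(candidate)
        self.vouchers[candidate] = valid
        return True

    def report_breach(self, offender):
        """Eject the offender; suspend the nodes that vouched for them."""
        if offender not in self.members:
            return
        self.members.discard(offender)
        for sponsor in self.vouchers.pop(offender, set()):
            self.suspended[sponsor] = self.SUSPENSION_ROUNDS

    def tick(self):
        """Advance one round; expired suspensions are lifted."""
        for m in list(self.suspended):
            self.suspended[m] -= 1
            if self.suspended[m] <= 0:
                del self.suspended[m]
```

The interesting property is the contagion: vouching for a spam account costs you your own standing for a while, so members have a reason to vouch carefully.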
It’ll happen in the Fediverse too.
For it to happen in the Fediverse AI would have to be training on the Fediverse.
That’s what this post is about. Using reddit to plant comments that AI trains on, and subsequently getting AI to spit out your answer to questions it’s asked.
As such this can happen anywhere where AI is being trained. The issue is with how AI is training, not with how websites it trains on are being operated.
I’m looking for a network and/or internet with strong authentication which is open for unique human users only. Sure, bots could still use someone’s credentials but at least their scale & impact would be limited.
strong authentication which is open for unique human users only
Unless you completely ditch anonymity, this can only turn into a state-captured propaganda platform. Whoever controls access/auth will have the keys to the content.
If you’ve any suggestion on how to implement that, then it’s a million-dollar idea.
The “I’m a human” test that only takes a few seconds and then lets you do what you like for an hour was always vulnerable to ‘auth farms’. Pay some poor bastards in the third world a pittance to pass the test a thousand times an hour, let the bots run wild. And the bots have gained the ability to pass the tests themselves, at least by boiling the oceans in some datacentre while the VC money holds out.
Finding the people running the bots, fitting them with some very heavy boots and then seeing if they can swim in the deep ocean is probably needlessly cruel, but I’d be up for tarring and feathering a few. Once the videos got out, the rest might think harder about their life choices…
People downvote me when I say it. That’s all cope. We’re not wrong; if and when this goes mainstream, it’ll attract the same bad actors just as heavily.
Of course, there are surely already a few here testing the waters.
I mean, there’s nothing preventing them from doing the same thing here. But if we could get a more even split of users between instances, it would arguably be harder for them to pull the same thing, because a) the admins can intervene and ban those accounts, since the admins are not corporate slaves, unless they are, in which case b) other instances can just ban the instance that is letting corporations go wild. We’ve already seen that level of “moderation” with Lemmygrad being ostracized from the wider Lemmy/Piefed ecosystem. It wouldn’t work with disproportionately large instances, because defederating lemmy.world would be a massive hit on users’ feeds, and the higher user count would make it harder to moderate against these actions.
It’s going to require more work from mods and admins, but I imagine we’ll fare better than Reddit. After all, Reddit has an incentive to support this kind of behavior.
There’s no algorithm to be played in the fediverse. The reward is too low for all the work of making a post visible, and it won’t carry to the next post, essentially starting all over again.
I think there are some differences that make the fediverse more resilient to this. For example, the absence of cumulative account karma keeps out the Reddit-style karma farming. The ability to ban whole instances also makes it easier to kick out bad actors. Instance admins could also implement their own rules, like switching to an invite-based system to reduce bot spam. Also, it seems to me that Reddit is actively encouraging this kind of behaviour to inflate its user statistics, while a fediverse server admin has no incentive to tolerate this kind of spam.
karma is meaningless to seo outside of account restrictions. the people doing this as a job aren’t doing it for imaginary internet points
it doesn’t matter what individual instances do as long as the largest ones have open signups
We are all rats. When this ship sinks, we will float to the next, or (decide to) drop off.
All things considered, how much would actually be lost?
The alternative being…
This comment made me realise that the internet could have been born, lived, and died, within my lifetime.
I consider the internet dead at this point. I hang out on the outer edge, in niche low-population places like the fediverse that the worst humans mostly ignore, because spamming here isn’t profitable and manipulating politics doesn’t gain much for their efforts at the moment.
Like phone calls, and texting, bad actors ruin everything that they touch.
Even snail mail.
It infuriates me to no end that almost every day I get new plasticized paper ads in my snail mail. Like, yeah, keep destroying the planet in hopes of selling more junk to destroy the planet with. And my city made it mandatory! They are not allowed to skip a house.
Idk their new album sounds pretty good
Good actors too, it’s the nature of capitalism.
Squeak
And when that happens, we move instances.
I wonder if we could make only the sign-up page of Lemmy and Piefed public to the internet, and the rest only accessible through login and verification of being actually bloody human? Could use anti-scraping measures…
If the idea of a healthy Fediverse requires people moving instances whenever one finds themselves close to bottom-feeders and opportunistic parasites, we already lost.
I see your point, though for me it’s not so much the requirement of moving as the ease of doing so.
With traditional social media, you’d need to move entirely to another platform, where you might not even be able to enjoy similar content. With Lemmy & PieFed, you can move and still enjoy similar content.
Lemmy also has an admin setting like that. Additionally there will be private, federated communities available in version 1.0.
the rest only accessible through login and verification
Yes. If you can’t fight the death of the www, embrace it! Help making it happen!
/s
I don’t need one of those stupid ID verifications. Something else should be that instead, but what, I do not know. Whatever helps counter AI scraping and preserves anonymity.
That’s the Discord model.
Fediverse needs to have a layer which traps AI in a never-ending maze.
That’s the job of the web server, not of the application that runs on it.
There is already software you can get that feeds a never-ending maze of text to AI scrapers, some of which is AI generated and/or designed to poison LLM training. The problem is that these still use up a ton of bandwidth.
A never-ending maze would mean the scrapers just hammer our servers forever. Better to lead them into a honeypot and automatically ban their IP. Like PieFed does.
So just find scrapers and bot farm owners IRL and burn down their houses, easy
What about a maze that adds a few hundred ms to the response time with each request, so the load gets less the longer it’s trapped?
I haven’t tried to make something like that. I think it’d be hard to do that without also exhausting our resources too.
Ah, that makes sense
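The escalating-delay idea from a few comments up can at least be sketched. Everything here is an assumption for illustration (the step size, the cap, the per-IP counter); the point is that an `asyncio.sleep` holds the scraper’s connection open without blocking a worker thread on our side, which is what keeps the cost asymmetric.

```python
# Sketch of an escalating tarpit (assumed design, not any project's
# actual implementation): each repeated request from the same client
# waits a few hundred ms longer than the last. asyncio.sleep keeps
# the waiting cheap for us -- the connection stays open, but no
# worker thread is tied up while the clock runs.

import asyncio
from collections import defaultdict

STEP_MS = 300      # extra delay added per request (arbitrary)
CAP_MS = 30_000    # upper bound so the delay doesn't grow forever

hits = defaultdict(int)  # client IP -> request count seen so far

def delay_for(client_ip: str) -> float:
    """Record one request and return this client's delay in seconds."""
    hits[client_ip] += 1
    return min(hits[client_ip] * STEP_MS, CAP_MS) / 1000

async def handle(client_ip: str) -> str:
    """Serve a page after the client's current penalty delay."""
    await asyncio.sleep(delay_for(client_ip))
    return "page content"
```

Even so, the concern above still applies: every delayed connection is still an open socket and some memory on the server, so a cap (and eventually a ban) is needed rather than a truly never-ending maze.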
A never-ending maze would mean the scrapers just hammer our servers forever.
Is that how tarpitting works? I didn’t know.
There are a lot of strategies. afaik a tar pit tries to waste the attacker’s resources by delaying our responses to their traffic? A honey pot tries to funnel bot traffic towards a place which only bots would go to. Once they go there you know they’re a bot and they can be banned.
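The honeypot half of that can be sketched very simply. The trap path here is hypothetical, and this is the general technique rather than any project’s exact code: the URL is disallowed in robots.txt and only reachable via a link hidden from human visitors, so any client that requests it outs itself as a bot and gets its IP banned.

```python
# Sketch of the honeypot-then-ban technique (hypothetical paths).
# The trap URL is listed as Disallow in robots.txt and hidden from
# human visitors, so only misbehaving crawlers ever request it.

TRAP_PATH = "/honeypot/do-not-follow"   # hypothetical trap URL
banned_ips = set()

def serve(client_ip: str, path: str) -> int:
    """Return an HTTP status code for the request."""
    if client_ip in banned_ips:
        return 403                      # already identified as a bot
    if path == TRAP_PATH:
        banned_ips.add(client_ip)       # stepping on the trap = ban
        return 403
    return 200
```

In practice the ban would go into a firewall or reverse-proxy rule rather than an in-memory set, but the detection logic is this simple.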
Sadly that only works for scrapers, content engaging bots are immune to it.
How would that layer distinguish AI from non-AI?
Yes PieFed has a setting for that. It makes scrapers give up pretty fast but ruins the experience for people without an account so I only use it on really bad days.
That’s how it’s been on my mbin instance (fedia.io) for a while now.
Most people […] write […] comments […] and hope AI picks them up
Really quite sad if there’s even one person out there doing that.
This is also as much of a grift as any SEO that claims to have cracked the code of getting to the top of results. Even if they have figured out something reproducible, it will get fixed. If someone can manipulate a search engine into providing results different from what it would otherwise do, that’s a bug they will fix.
That “most people” part kinda reminds me of this: https://xkcd.com/2501/
Will they, though? Search engine results lately seem very much like a decisive victory for the SEO slop.
only if u use google. Try Qwant.
Or Kagi
Or DDG.
DDG is bing and definitely has the same problems.
If someone can manipulate a search engine to provide results different to what it would otherwise do, that’s a bug they will fix
But there are more people manipulating the results, than people fixing the bugs.
But not in novel ways. Lots of exploiters using the same bug. Fix it once and they’re all fixed.
SEO is a grift.
Wtf, it literally never crossed my mind to use a forum like this. So fucking dumb. It’s like everyone is scrambling for a couple percentage points over the next guy.
Line must go up, always & forever.
People have been doing it for years. At one point, it amounted to a sizable percentage of Reddit’s posts and comments. Just SEO spam to their own profile and a few subreddits, then sleep your 600+ accounts for 3 days.
I knew most of it was spam. I just figured it was propaganda from various governments
It’s likely this is designed with a plan to push advertising or self-promotion.
E.g.: step one is done: figure out how to both find threads early and get your content picked up as a good answer regularly and consistently. Step two: start inserting “first hand” recommendations, or even just mentions of products and services.
I’ve already seen webpages with the most esoteric or niche product/service recommendations (like some random Indian consultancy with 2 people listed in it and no other significant web footprint) pop up in first-page web results. It’s another AI deathblow to the utility of search engines.
It’s all “traffic”, “new users”, and “engagement”. I’m sure Spez is over the moon telling his handlers about all the growth.
Compared to the organic content here it is unbearable on reddit now.