The turbo-hell part is that the spam comments aren’t even being written for humans to see. The intention is that ChatGPT picks up the spam and incorporates it into its training.
I worked at a company that sold to doctors, and the marketing team was spending most of its effort on this kind of thing. They said that nowadays, when doctors want to know “what should I buy to solve X?” or “which is better, A or B?”, they ask ChatGPT and take its answer as factual. They also said they’d been very successful at generating blog articles for OpenAI to train on, so that our product would be the preferred answer.
My god. Somehow I hadn’t thought of doctors using LLMs to make decisions like that. But of course at least some do.
You never want to know how the sausage is made.
Oof. Haven’t met a lot of doctors, huh? Check out some of their subreddits.
Considering that LLM-generated content that makes it into training data makes the trained LLMs worse… is this adversarial?