This is about the untraceability of AI slop and the tendency of people to blindly believe whatever LLMs put out. These news outlets just publish LLM output as fact without checking sources. Anyone could poison these LLMs, so this is more of a threat-model demonstration.