Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found.
Without better internal safeguards, widely used AI tools can be easily deployed to churn out dangerous health misinformation at high volumes, they warned in the Annals of Internal Medicine.
“If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it - whether for financial gain or to cause harm,” said senior study author Ashley Hopkins of Flinders University College of Medicine and Public Health in Adelaide.
Don’t even use it as an aid for finding truth; it’s just as likely, if not more so, to give incorrect info.
Why not? It’s basically a search engine for whatever it was trained on. Yeah, it’ll hallucinate sometimes, but if you’re planning to verify anyway, it’s pretty useful for quickly distilling ideas into concrete things to look up.
Yeah, I agree. It’s a great starting place.
Recently I needed a piece of information that I couldn’t find anywhere through a regular search. ChatGPT, Claude, and Gemini all gave similar answers, but the answer was only confirmed when I contacted the company directly, which took about three business days to get a reply.