Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”
“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”
This makes sense. However, doctors aren’t perfect either, and one thing a properly trained AI should excel at is helping doctors make rare diagnoses or determine additional testing for certain conditions. I don’t think it’s quite there yet, but it’s probably close to being a tool a well-trained doc could use as an adjunct to traditional inquiry. It's certainly not something end users should be fiddling with with any sort of trust, though. Much of doctor decision-making happens based on experience, and experience biases toward common diagnoses. That usually works out because, well, statistics, but it does lead to misdiagnosis of rare disorders. An AI should be more objective about these.
Even if AI works correctly, I don’t see responsible use of it happening, though. I’ve already seen nightmarish vertical video footage of doctors checking ChatGPT for answers…