Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • rudyharrelson@lemmy.radio · 4 hours ago

    People always say this on stories about “obvious” findings, but it’s important to have verifiable studies to cite in arguments for policy, law, etc. It’s kinda sad that it’s needed, but formal investigations are a big step up from just saying, “I’m pretty sure this technology is bullshit.”

    I don’t need a formal study to tell me that drinking 12 cans of soda a day is bad for my health. But a study that’s been replicated by multiple independent groups makes it way easier to argue to a committee.

    • Knot@lemmy.zip · 4 hours ago

      I get that this thread started from a joke, but I think it’s also important to note that no matter how obvious some things may seem to some people, the exact opposite will seem obvious to many others. Without evidence like this study, both groups are really just stating their opinions.

      It’s also why formal investigations are required. And whenever policies and laws are made based on verifiable studies rather than people’s hunches, it’s not sad, it’s a good thing!

    • irate944@piefed.social · 4 hours ago

      Yeah you’re right, I was just making a joke.

      But it does create some silly situations, like you said.

        • IratePirate@feddit.org · 2 hours ago

          A critical, yet respectful and understanding exchange between two individuals on the interwebz? Boy, maybe not all is lost…

    • Telorand@reddthat.com · 4 hours ago

      The thing that frustrates me about these studies is that they all continue to come to the same conclusions. AI has already been studied in mental health settings, and it’s always performed horribly (except for very specific uses with professional oversight and intervention).

      I agree that the studies are necessary to inform policy, but at what point are lawmakers going to actually lay down the law and say, “AI clearly doesn’t belong here until you can prove otherwise”? It feels like they’re hemming and hawing in the vain hope that it will live up to the hype.

    • BillyClark@piefed.social · 3 hours ago

      > it’s important to have verifiable studies to cite in arguments for policy, law, etc.

      It’s also important to have for its own merit. Sometimes, people have strong intuitions about “obvious” things, and they’re completely wrong. Without science studying things, it’s “obvious” that the sun goes around the Earth, for example.

      > I don’t need a formal study to tell me that drinking 12 cans of soda a day is bad for my health.

      Without those studies, you cannot actually know whether it’s bad for your health. You can assume it is; you can believe it is. These aren’t bad assumptions or harmful beliefs, by the way. But you simply cannot know without testing.

    • Eager Eagle@lemmy.world · 4 hours ago

      Also, it’s useful to know how, when, or why something happens. I could make a useless chatbot that is “right” most of the time just by having it only ever tell people to seek medical help.