• geekwithsoul@piefed.social · 6 hours ago

    It’s more than “AI isn’t perfect”; it’s that AI isn’t even good. Moderation, and even summaries, require more than prediction - they require understanding, which AI doesn’t have. It’s all hallucination; it’s just that, through sheer dumb luck and the hoovering up of so much ill-gotten data, the hallucinations sometimes happen to be correct.

    • Yezzey@lemmy.ca (OP) · 6 hours ago

      Dismissing it all as “hallucinations” is like saying all cars are useless because they can crash. No tool is flawless, but imperfect doesn’t mean worthless.

      • geekwithsoul@piefed.social · 5 hours ago

        Nice strawman, but it doesn’t apply. A car can mechanically fail, resulting in a crash, or a human can operate it in a way that causes a crash. It can’t crash on its own; driven and maintained correctly, it won’t crash.

        An AI, on the other hand, can give answers but never actually “knows” whether they’re correct or true. Sometimes the answers will be correct because you got lucky, but nothing in any current LLM can tell fact from fiction. It’s all down to how it’s trained and what it’s trained on, and even when drawing from “real” sources, it can mix things up when combining them. Suggest you read https://medium.com/analytics-matters/generative-ai-its-all-a-hallucination-6b8798445044
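
        To make that concrete, here’s a toy sketch in Python (nothing like a real LLM, just an illustration of the principle): generation is sampling the statistically likely next word from the training data, and no step anywhere checks whether the output is true.

        ```python
        # Toy bigram "language model" -- purely illustrative, not any real LLM.
        # It learns which word tends to follow which, then generates by sampling.
        import random
        from collections import defaultdict

        corpus = "the sky is blue . the sky is green . the grass is green .".split()

        # "Training": count what follows each word. Truth never enters into it.
        follows = defaultdict(list)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev].append(nxt)

        def generate(start, length=4):
            word, out = start, [start]
            for _ in range(length):
                word = random.choice(follows[word])  # pick by frequency, not by fact
                out.append(word)
            return " ".join(out)

        print(generate("the"))  # may print "the sky is green ." -- fluent, unchecked
        ```

        When it happens to say “the sky is blue”, that’s the luck of the draw from its training data, not knowledge. Scale that up by billions of parameters and it’s the same problem dressed up in better grammar.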

        The only way a car would be like an AI is if it occasionally drove you to the right place, and you didn’t mind that the other nine times out of ten it drove you to the wrong place, took the least efficient route, and/or cut across lawns, fields, and sidewalks. Oh, and the car assembles itself from other people’s cars and steals their gas.