• Spaniard@lemmy.world
    4 hours ago

    Today I asked an AI to tell me which phone providers were available, sorted by price and offers, and it got things wrong throughout. When I pointed this out, the AI corrected most of it, but for some reason it also removed some entries that were accurate.

    It would have been quicker to do it myself instead of asking the AI. Oh, and it also didn’t list all the companies.

    Maybe those companies have better AI that makes no mistakes, but I doubt it. I think the LLMs will keep lying, and no one has time to check whether they are correct.

      • Spaniard@lemmy.world
        2 hours ago

        How come it ended up giving me the right answer then, albeit after removing some previously correct ones? (It dropped a few companies for some reason.)

        Anyway, that was a small and easy-to-check piece of misinformation. But if they have over three decades of online information about me, there’s no way a person is going to confirm the LLM didn’t bullshit its way to an answer just to satisfy the human.

        • madmantis24@lemmy.wtf
          2 hours ago

          These models aren’t going to produce accurate information about the people they investigate, and it won’t even matter whether it’s accurate. What “matters” is that their reports will add new layers to the facade of legitimacy around whatever story the authorities using them want to construct.