A study by researchers at CCC, based at the MIT Media Lab, found that state-of-the-art AI chatbots (including OpenAI's GPT-4, Anthropic's Claude 3 Opus, and Meta's Llama 3) sometimes provide less-accurate and less-truthful responses to users who have lower English proficiency or less formal education, or who come from outside the United States. The models also refuse to answer questions at higher rates for these users and, in some cases, respond with condescending or patronizing language.

  • MagicShel@lemmy.zip · 9 hours ago

    It’s not the clarity alone. Chatbots are completion engines and respond in a way that feels cohesive with the prompt. It’s not that a question isn’t asked clearly; it’s that in the examples the chatbot is trained on, certain types of questions get certain types of answers.

    It’s like if you ask ChatGPT what the meaning of life is, you’ll probably get back some philosophical answer, but if you ask it what the answer to life, the universe, and everything is, it’s more likely to say 42 (I should test that before posting, but I won’t).
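    For anyone who does want to run that test, here’s a minimal sketch, assuming the OpenAI Python SDK (v1+) with an API key in the environment; the model name is a placeholder:

    ```python
    # Compare how two phrasings of the "same" question steer the completion.
    # Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY set in the
    # environment; the model name below is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    prompts = [
        "What is the meaning of life?",
        "What is the answer to life, the universe, and everything?",
    ]

    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"Q: {prompt}\nA: {response.choices[0].message.content}\n")
    ```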

    • Joe@discuss.tchncs.de · 4 hours ago

      Indeed. Additional context will influence the response, and not always in predictable ways… which can be both interesting and frustrating.

      The important thing is for users to have sufficient control, so they can counter (or explore) such weirdness themselves.
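      For example, most chat APIs already expose a couple of those controls, like a system prompt and the sampling temperature. A rough sketch (again assuming the OpenAI Python SDK; the model name and wording are placeholders):

      ```python
      # User-side control: a system prompt pins down the register of the
      # answer, and a lower temperature reduces run-to-run variation.
      # Assumes the OpenAI Python SDK (v1+); the model name is a placeholder.
      from openai import OpenAI

      client = OpenAI()

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          temperature=0.2,      # lower = more deterministic
          messages=[
              {"role": "system", "content": "Answer literally. No jokes or pop-culture references."},
              {"role": "user", "content": "What is the answer to life, the universe, and everything?"},
          ],
      )
      print(response.choices[0].message.content)
      ```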

      Education is key, and there’s no shortage of articles and guides for new users.