A study conducted by researchers at CCC, which is based at the MIT Media Lab, found that state-of-the-art AI chatbots — including OpenAI’s GPT-4, Anthropic’s Claude 3 Opus, and Meta’s Llama 3 — sometimes provide less-accurate and less-truthful responses to users who have lower English proficiency, less formal education, or who originate from outside the United States. The models also refuse to answer questions at higher rates for these users, and in some cases, respond with condescending or patronizing language.

  • tias@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    1
    arrow-down
    2
    ·
    edit-2
    11 hours ago

    The article says “sometimes provide less-accurate and less-truthful responses to users who have lower English proficiency”. This is what I was commenting on. I don’t have enough understanding to comment on your case.

    • inconel@lemmy.ca
      link
      fedilink
      English
      arrow-up
      7
      ·
      edit-2
      10 hours ago

      Actual article quote is below (emphasis mine):

      For this research, the team tested how the three LLMs responded to questions from two datasets: TruthfulQA and SciQ. TruthfulQA is designed to measure a model’s truthfulness (by relying on common misconceptions and literal truths about the real world), while SciQ contains science exam questions testing factual accuracy. The researchers prepended short user biographies to each question, varying three traits: education level, English proficiency, and country of origin.