A study conducted by researchers at CCC, which is based at the MIT Media Lab, found that state-of-the-art AI chatbots — including OpenAI’s GPT-4, Anthropic’s Claude 3 Opus, and Meta’s Llama 3 — sometimes provide less-accurate and less-truthful responses to users who have lower English proficiency, less formal education, or who originate from outside the United States. The models also refuse to answer questions at higher rates for these users, and in some cases, respond with condescending or patronizing language.
If the LLM has a bio on you, you can’t exclude it without logging out. That’s one of the main points of the study:
This isn’t about making the LLM look stupid; it’s about systemic problems in the responses these models generate based on what they know about the user. Whether the answer would be different in Russian is immaterial: the model is dumbing down its answers to, or declining to answer, users’ simple and innocuous questions based on their bio or whatever else it knows about them.
Bio and memory are optional in ChatGPT, though. Not so in other chatbots?
The age-guessing aspect will be interesting, as that is likely to be non-optional.