• hendrik@palaver.p3x.de
    17 hours ago

    LLMs reproducing stereotypes is a well-researched topic. They do that because of what they are: stereotypes and bias in (in the training data), bias and stereotypes out. That’s what they’re meant to do. And all AI companies have entire departments to measure those biases and then fine-tune the model to whatever they deem fit.
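    A toy sketch of the "bias in, bias out" point: this is not a real LLM, just a frequency model over an invented corpus, but it shows that a model which only mirrors co-occurrence statistics will reproduce whatever associations its training data contains.

```python
# Toy illustration, NOT a real LLM. The "corpus" below is invented:
# it pairs an occupation with the pronoun that followed it in training text.
from collections import Counter, defaultdict

corpus = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
    ("engineer", "he"), ("engineer", "he"), ("engineer", "he"),
]

# Bias in: count which pronoun follows each occupation in the data.
assoc = defaultdict(Counter)
for occupation, pronoun in corpus:
    assoc[occupation][pronoun] += 1

def p(pronoun, occupation):
    """Bias out: the model's 'belief' is just the training-data frequency."""
    counts = assoc[occupation]
    return counts[pronoun] / sum(counts.values())

# The model has no concept of gender; it only mirrors corpus statistics.
print(p("she", "nurse"))     # skewed because the invented data is skewed
print(p("he", "engineer"))
```

    A real LLM does the same thing at vastly larger scale, which is why the measuring and fine-tuning departments exist in the first place.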

    I mean, the issue isn’t women or anything, it’s using AI for hiring in the first place. You do that if you want whatever stereotypes Anthropic and OpenAI handed to you.

      • hendrik@palaver.p3x.de
        16 hours ago

        Issue is they probably want to pattern-recognize something like merit / ability / competence here, and ignore all other factors. Which is just hard to do, because those other factors are correlated with everything else in the data.
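        A small sketch of why ignoring other factors is hard (all data invented): even if you drop a protected attribute from the features entirely, a rule learned from past biased decisions can still discriminate through a correlated proxy feature.

```python
# Invented example: past hiring decisions penalized career gaps, and gaps
# correlate with a protected attribute that never appears in the features.
from collections import defaultdict

past = [
    # (skill, gap_years, hired) -- skill is equal across all candidates
    (9, 2, 0), (8, 2, 0),
    (9, 0, 1), (8, 0, 1),
]

# "Learn" a naive rule from past outcomes: hire rate per gap_years value.
totals = defaultdict(lambda: [0, 0])  # gap_years -> [hired_count, total]
for skill, gap, hired in past:
    totals[gap][0] += hired
    totals[gap][1] += 1

def predicted_hire_rate(gap_years):
    """What the learned rule predicts for a candidate with this gap history."""
    hired_count, total = totals[gap_years]
    return hired_count / total

# Equal skill, different gap history: the rule reproduces the old bias
# even though the protected attribute was never a feature.
print(predicted_hire_rate(0))
print(predicted_hire_rate(2))
```

        That's the trap: "just look at competence" assumes competence is cleanly separable in the data, and it usually isn't.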