Moderation is work. Trolls traumatize. Humans powertrip. All this could be resolved via AI.

  • okr765A
    link
    fedilink
    English
    arrow-up
    1
    ·
    2 days ago

    The AI used doesn’t necessarily have to be an LLM. A simple model for determining the “safety” of a comment wouldn’t be vulnerable to prompt injection.