• affenlehrer@feddit.org · 8 hours ago

    Also, the LLM is just predicting the next token, not selecting it. And it isn’t limited to the assistant role: if you (mis)configure the inference engine accordingly, it will happily predict user tokens or any other token (tool calls, etc.).
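    A minimal sketch of that point, with a toy vocabulary and made-up logits (the role tokens, the `LOGITS` table, and the stop-token behavior are illustrative assumptions, not any real engine’s API): the model only produces a probability distribution over *all* tokens, role markers included, and it’s the inference engine’s stop-token config that keeps it in the assistant role.

    ```python
    import math

    # Toy vocabulary that includes chat-template role tokens. To the model
    # these are ordinary tokens; nothing in the prediction step treats the
    # assistant role specially.
    VOCAB = ["<|assistant|>", "Hi", "<|end|>", "<|user|>", "Thanks"]

    # Hypothetical next-token logits keyed by the last token seen
    # (stand-in for a real model's forward pass).
    LOGITS = {
        "<|assistant|>": [0.1, 4.0, 0.2, 0.1, 0.1],  # most likely: "Hi"
        "Hi":            [0.1, 0.2, 5.0, 0.3, 0.1],  # most likely: "<|end|>"
        "<|end|>":       [0.2, 0.1, 0.1, 4.5, 0.2],  # most likely: "<|user|>"
        "<|user|>":      [0.1, 0.3, 0.2, 0.1, 4.0],  # most likely: "Thanks"
    }

    def softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]

    def greedy_decode(start, steps, stop_tokens=()):
        """Predict tokens greedily; the model itself never stops -- only
        the engine's stop_tokens config does."""
        out = [start]
        for _ in range(steps):
            probs = softmax(LOGITS[out[-1]])
            tok = VOCAB[probs.index(max(probs))]
            if tok in stop_tokens:
                break
            out.append(tok)
        return out

    # Properly configured engine: stop at "<|end|>", so the output stays
    # an assistant turn.
    print(greedy_decode("<|assistant|>", 4, stop_tokens=("<|end|>",)))
    # → ['<|assistant|>', 'Hi']

    # Same model, no stop token: it carries straight on into predicting
    # a user turn.
    print(greedy_decode("<|assistant|>", 4))
    # → ['<|assistant|>', 'Hi', '<|end|>', '<|user|>', 'Thanks']
    ```

    The “selection” of an assistant-only reply lives entirely in the engine’s stop/template config, not in the model’s prediction step.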