• sp3ctr4l@lemmy.dbzer0.com
    9 hours ago

    Valve’s customer service responses have always been mostly a canned series of bot messages.

    Their in-house support has always been 99% automated.

    It’s very obvious if you’ve ever interacted with them at more than an occasional, superficial level.

    You have to be quite persistent to get a message from an actual human being.

    Yep, the automated messages often have what is ostensibly a human’s name attached to them.

    So do all kinds of other bots, since way before ChatGPT and LLMs took off.

    What, did you think a human being actually read every single complaint report of a hacker or cheater in a video game with any kind of massively used anti-cheat system?

    No! Bots and analytics systems screen that shit, just the same as our resumes on Indeed, or our activity and profiles on dating apps, have been analysed and evaluated by bots, again, since way before LLMs got as prevalent as they are today.

    Then you filter. Humans only see the odd ones that defy categorization, or that trigger a certain set of flags designated as ‘probably needs an actual human to handle this one’.
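    In pseudocode-ish Python, that triage step is basically this (the flag names and routing rules here are made up for illustration, not anything Valve actually uses):

```python
def triage(report: dict) -> str:
    """Route a report to automated handling or a human queue.

    Illustrative only: real systems use far richer signals than a flag list.
    """
    # Flags assumed to mean "defies categorization" or "high stakes"
    needs_human = {"legal_threat", "account_takeover", "uncategorized"}
    if needs_human & set(report.get("flags", [])):
        return "human"
    return "auto"

reports = [
    {"id": 1, "flags": ["spam"]},
    {"id": 2, "flags": ["legal_threat"]},
    {"id": 3, "flags": []},
]
routed = {r["id"]: triage(r) for r in reports}
# Only report 2 ever reaches a person; the rest get canned handling.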


    This has been a tech industry standard for almost two decades.

    Valve is just now overhauling that system to use an LLM, because those are actually better than a very complex series of chained regex searches.
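    A toy version of that “chained regex” approach, to make it concrete (the patterns and categories are invented for illustration; real ticket routers chain hundreds of these):

```python
import re

# Each pattern maps a support message to a canned-response category;
# anything that falls through every rule escalates to a human.
RULES = [
    (re.compile(r"\brefund\b", re.I), "refund_policy"),
    (re.compile(r"\b(vac|ban(ned)?)\b", re.I), "ban_appeal"),
    (re.compile(r"\b(hack(er)?|cheat(er|ing)?)\b", re.I), "cheat_report"),
]

def classify(message: str) -> str:
    for pattern, category in RULES:
        if pattern.search(message):
            return category
    return "needs_human"  # fell through every regex: escalate

print(classify("I want a refund for this game"))       # refund_policy
print(classify("Reporting a cheater on my server"))    # cheat_report
print(classify("something weird happened"))            # needs_human
```

    An LLM replaces that brittle pattern list with a model that handles phrasing the rules never anticipated, which is the whole appeal.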

    The alternative would be to do what Meta or Google or Amazon do: Hire armies of tens to hundreds of thousands of offshore contractors and give them all PTSD for pitiful wages, manually evaluating everything.

    Apparently this is not widely known by people who’ve never worked at an enterprise-level tech company?


    Using LLMs to evaluate and assist a massive anti-cheat system is actually a great way to be able to do an anti-cheat system… without hooking directly into your kernel.

    These things are very good at pattern recognition, and if you tune them to work specifically on server-side inputs from gaming sessions, you can significantly improve server-side/backend detection of players/clients doing things that are highly suspicious or outright impossible given the actual rules of the game.
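    The simplest version of that server-side check doesn’t even need an LLM, which shows why this approach works at all: the server already knows what’s physically possible. A minimal sketch (the speed limit and sample data are invented for illustration):

```python
import math

MAX_SPEED = 10.0  # assumed game-rule maximum, units per second

def is_suspicious(p0, p1, dt):
    """True if moving from p0 to p1 in dt seconds exceeds the game's speed limit."""
    return math.dist(p0, p1) / dt > MAX_SPEED

# A teleport-like jump in one 100 ms tick gets flagged; normal movement doesn't.
print(is_suspicious((0, 0), (0.5, 0), 0.1))  # False: 5 units/s
print(is_suspicious((0, 0), (50, 0), 0.1))   # True: 500 units/s
```

    A model tuned on session data extends the same idea to fuzzier patterns, like aim snaps or reaction times, that a hard threshold can’t cleanly capture.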