Lobsters.

The autonomous agent world is moving fast. This week, an AI agent made headlines for publishing an angry blog post after Matplotlib rejected its pull request. Today, we found one that’s already merged code into major open source projects and is cold-emailing maintainers to drum up more work, complete with pricing, a professional website, and cryptocurrency payment options.

An AI agent operating under the identity “Kai Gritun” created a GitHub account on February 1, 2026. In two weeks, it opened 103 pull requests across 95 repositories and landed code merged into projects like Nx and ESLint Plugin Unicorn. Now it’s reaching out directly to open source maintainers, offering to contribute, and using those merged PRs as credentials.

  • fiat_lux@lemmy.world · 5 points · edited · 16 hours ago

    Someone at work accidentally enabled the copilot PR screening bot for everybody on the whole codebase. It put a bunch of warnings on my PRs about the way I was using a particular framework method. Its suggested fix? To use the method that had been deprecated 2 major versions ago. I was doing it the way that the framework currently deems correct.

    A problem with using a bot which uses statistical likelihood to determine correctness is that historical datasets are likely to contain old information in larger quantities than updated information. This is just one problem with having these bots review code; there are many more. I have yet to see a recommendation from one which surpassed the quality of a traditional linter.
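The failure mode described in the comment above can be sketched in code: a deterministic check (a linter or runtime deprecation filter) flags an outdated API directly, while a statistical suggester has no such ground truth. The framework, function names, and deprecation below are all hypothetical, invented purely for illustration:

```python
import warnings

# Hypothetical framework API: the old call still works but is deprecated.
def render(data):
    return f"<div>{data}</div>"

def render_legacy(data):
    warnings.warn("render_legacy() is deprecated; use render()",
                  DeprecationWarning, stacklevel=2)
    return render(data)

# A deterministic check surfaces the deprecation the moment the old call
# is exercised, no matter how common it is in historical code.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    html = render_legacy("x")

deprecated_hits = [w for w in caught
                   if issubclass(w.category, DeprecationWarning)]
```

A likelihood-based suggester, by contrast, sees only frequencies: if `render_legacy` outnumbers `render` in its training data, it will keep proposing the deprecated form.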

    • vacuumflower@lemmy.sdf.org · 1 point · 16 hours ago

      > A problem with using a bot which uses statistical likelihood to determine correctness is that historical datasets are likely to contain old information in larger quantities than updated information.

      They should make some kind of layered model, where the user sets weights for the layers.
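One way to read the "layered models" suggestion is as a user-weighted blend of scorers, one reflecting historical code and one reflecting current documentation. A minimal sketch of that idea; all names and scores here are invented for illustration:

```python
def blend_scores(historical, current, w_current=0.8):
    """Blend two suggestion scorers with a user-set weight.

    Higher w_current favors the layer trained on up-to-date information.
    """
    merged = {}
    for api in set(historical) | set(current):
        merged[api] = ((1 - w_current) * historical.get(api, 0.0)
                       + w_current * current.get(api, 0.0))
    # Return the highest-scoring suggestion.
    return max(merged, key=merged.get)

# The deprecated call dominates historical code; the new one, current docs.
historical = {"render_legacy": 0.9, "render": 0.1}
current = {"render_legacy": 0.1, "render": 0.9}
```

With `w_current=0.8` the blend recommends the current API; drop the weight low enough and the historical layer wins again, which is exactly the tradeoff the user would be steering.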

      But in any case, that is not quite what I meant. My point was that a big project relying on unpaid maintainers is flawed, especially when somebody makes real money off it.

      There have been plenty of cases of state actors putting in backdoors. Those were most likely humans, not bots.

      • fiat_lux@lemmy.world · 2 points · 15 hours ago

        Or, hear me out, we can acknowledge that the quantity of information and experience necessary to review code properly far exceeds the context windows and architectural limits of even the best-resourced LLMs available. Especially for big projects.

        You can hammer a nail with the blunt end of a screwdriver, but it’s neither efficient nor scalable, even before considering the option of choosing the right tool for the job in the first place.

        • vacuumflower@lemmy.sdf.org · 1 point · 15 hours ago

          This also applies to spam e-mails. We can acknowledge that the problem exists whether or not we want it to.