At KeePassXC, we use AI for two main purposes:

  1. As an additional pair of “eyes” in code reviews. In this capacity, AI summarises the changes (the least helpful part) and points out implementation errors a human reviewer may have missed. AI reviews don’t replace maintainer code review, nor do they relieve maintainers of their due diligence. AI code reviews complement our existing CI pipelines, which perform unit tests, memory checks, and static code analysis (CodeQL). As such, they are a net benefit and make KeePassXC strictly safer. Some examples of AI reviews in action: example 1, example 2.
  2. For drafting pull requests that solve simple, focused issues or add boilerplate code and test cases. Unfortunately, some people got the impression that KeePassXC was now being vibe coded. This is wrong. We do not vibe code, and no unreviewed AI code makes it into the code base. Full stop. We have used the Copilot agent to draft pull requests, which are then tweaked in follow-up commits and reviewed by a maintainer, openly for anyone to see, with the same scrutiny as any other submission. Good pull requests are merged (example), bad pull requests are rejected (example).
  • palordrolap@fedia.io · 19 hours ago

    Using AI to find errors that can then be independently verified sounds reasonable.

    The danger would be in assuming that it will find all errors, or that an AI once-over would be “good enough”. This is, after all, what most rich AI proponents are really interested in: a fully AI-driven process with as few costly humans as possible.

    The lesser dangers would be 1) the potential for the human using the tool to lose or weaken their own ability to find bugs without external help, and 2) the AI flagging something that isn’t a bug, and the human “fixing” it without fully understanding that it wasn’t wrong in the first place.