• Encrypt-Keeper@lemmy.world · ↑31 ↓9 · 4 days ago

    this is a “Don’t allow anyone to access your backups without following protocol” problem.

    Congratulations, you just identified the AI problem.

      • Encrypt-Keeper@lemmy.world · ↑5 ↓3 · edited · 4 days ago

        Seems to be, yes. The AI had the access it needed to do the job it was given, and that access allowed it to cause the problem.

        The alternative that would have prevented this issue was to not use AI for this.

        • luciferofastora@feddit.org · ↑5 ↓1 · 4 days ago

          A human with the same permissions would have been capable of fucking up too. Giving the equivalent of a junior dev with a learning disability the keys to the whole place is just dumb.

          (Relying on AI is dumb anyway, but that’s not the biggest issue in this specific case)

          • Encrypt-Keeper@lemmy.world · ↑2 · edited · 4 days ago

            Giving the equivalent of a junior dev with a learning disability the keys to the whole place is just dumb.

            Correct. You too have now identified the AI problem. This was the job of a human senior infrastructure engineer that they delegated to an AI agent. They’ve found out why it’s not an AI’s job.

            • luciferofastora@feddit.org · ↑2 ↓1 · 4 days ago

              I can’t read the original twitter link, but I’m not sure they handed it the job of a senior infrastructure engineer. The article says “routine”, which to me is something you can hand off to a junior just fine. When they hit a snag, they obviously should stop and ask what to do, but even then, a human might want to avoid admitting ignorance and try to fix it themselves instead. They shouldn’t have privileges to fuck up that badly.

              So while it’s on the AI for taking destructive steps, I do think there’s a human error in the form of grossly irresponsible rights allotment. If this was a first-of-its-kind incident that shows otherwise stellar AI fucking up badly, I’d classify it as a pure AI problem, but their limits are hardly novel at this point. There have been previous incidents circulating the media. We’ve had memes about it. If you can’t stay up to date on your tools and their shortcomings, you shouldn’t be using them, because discovering a footgun becomes a question of “when”, not “if”.

              That’s why I consider this partially a human failing: If you’re gonna use a tool, make sure that it operates within safe limits. The chainsaw doesn’t know the difference between tree and bone, so it’s on you to make sure it stays away from anyone’s legs. So while “Chainsaw can saw legs if wielded improperly” is a problem that was accepted as a tradeoff for its utility, you can’t really blame the chainsaw if you zip-tied the safety.

              (Again, not to say Anthropic is blameless for letting its random generator generate randomly destructive shit. I just don’t think that’s the only point of failure here.)

              • Encrypt-Keeper@lemmy.world · ↑1 · edited · 3 days ago

                That’s why I consider this partially a human failing: If you’re gonna use a tool, make sure that it operates within safe limits.

                Yes and in this case using it for this job at all was clearly not within safe limits. You keep hammering on “It’s not the AI’s fault it was given a job with too big of a blast zone for it to safely do” after I’ve said “This type of job has too big a blast zone for an AI to safely do” and somehow you’ve convinced yourself that these are two different things.

                • luciferofastora@feddit.org · ↑1 · 3 days ago

                  Yes and in this case using it for this job at all was clearly not within safe limits.

                  Do you have any detail on what “this job” was? Like I said, I don’t have access to the original statement because twatter wants me to log in to see it.

                  What I do see is “routine task in the […] staging environment”, and that doesn’t sound like a big blast zone job. Again, it’s comparable to a job you’d give a junior engineer. There shouldn’t be much a junior engineer can fuck up, no matter how “creative” their solutions.

                  Whether it’s a human junior engineer, an automatic script or an agentic AI, they should never have more privileges than they need for their job. Granting someone or something that isn’t the senior admin permission to delete a volume is irresponsible.

                  The AI generating that fucking awful idea is on the AI (or its developers). Both are partial causes for the incident. It’s not just human error, but it’s also human error that would have been dangerous regardless of AI involvement.
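                  The least-privilege rule argued here can be sketched in a few lines; the role names and action strings below are hypothetical illustrations, not details from the actual incident:

```python
# Hypothetical sketch of least-privilege enforcement: every operator
# (human junior, script, or AI agent) acts under a role whose allowlist
# of actions is fixed up front. Destructive verbs simply do not appear
# in a junior-level role's set, so they cannot be invoked at all.
ROLE_PERMISSIONS = {
    "junior": {"read_logs", "restart_service", "deploy_staging"},
    "senior_admin": {"read_logs", "restart_service", "deploy_staging",
                     "delete_volume", "modify_iam"},
}

def authorize(role: str, action: str) -> bool:
    """Return True only if the role's allowlist contains the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# A junior-level operator (or an agent scoped like one) cannot delete
# a volume, no matter what it decides to attempt:
assert not authorize("junior", "delete_volume")
assert authorize("senior_admin", "delete_volume")
```

                  Under a scheme like this, whatever was running the routine staging task would lack the destructive verb entirely, regardless of how confidently it went off-script.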

                  • Encrypt-Keeper@lemmy.world · ↑1 · edited · 3 days ago

                    Granting someone or something that isn’t the senior admin permission to delete a volume is irresponsible.

                    Correct. Like I said this was the job of a senior admin.

                    They gave the AI the job of managing IaC for their environment. Then they were shocked when the AI managed the environment incorrectly. This is absolutely not something you let a junior engineer anywhere near.

                    You seem to be suggesting that the AI should be able to do the job they gave it without being given the permissions required to do it. The thing about doing things in IT is that you need permissions to do the things you’re asked to do. So you have to make sure the person you give permissions to is reliable and knows what they’re doing. The AI did not.

      • Encrypt-Keeper@lemmy.world · ↑2 · 4 days ago

        Yes, that’s right: the protocols we humans used to have for giving only trusted, reliable people this level of access to infrastructure predate LLMs, and they were a great way to stop this from happening.

        However, the AI is here now, and when you give an autonomous agent with known hallucination problems access to act on your behalf with your IaC on your infra provider, this kind of thing is an inevitability.
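        One mitigation consistent with both sides of the thread (a guardrail between the agent and the infrastructure, on top of scoped credentials) can be sketched as follows; the pattern list and function name are hypothetical, not from any real agent framework:

```python
import re

# Hypothetical guardrail for an autonomous agent acting on infrastructure:
# any proposed command matching a known-destructive pattern is flagged for
# human approval instead of being executed automatically.
DESTRUCTIVE_PATTERNS = [
    r"\bterraform\s+destroy\b",
    r"\bdelete[-_]volume\b",
    r"\brm\s+-rf\b",
]

def requires_human_approval(command: str) -> bool:
    """Flag any command that matches a known destructive pattern."""
    return any(re.search(p, command) for p in DESTRUCTIVE_PATTERNS)

# A routine command passes through; a destructive one is held for a human.
assert not requires_human_approval("terraform plan")
assert requires_human_approval("aws ec2 delete-volume vol-0abc")
```

        A denylist like this is a second line of defense, not a substitute for least-privilege credentials: an agent that never holds delete permissions cannot cause the incident even if the pattern match misses.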