Hacker News.

The Department of War has stated that it will only contract with AI companies that accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.

  • Aproposnix@scribe.disroot.org · 15 points · 11 hours ago

    I will admit that I am very cynical right now when it comes to multibillion-dollar companies. I can see it as possible that he (the CEO) genuinely does not want his technology used for mass surveillance or autonomous drone swarms. But given what we know about how corporations act and protect their own financial interests (this is, after all, capitalism), it would not surprise me if this is just a public-facing statement he is making so that he doesn’t lose public support, while privately he flips and helps the US government. And of course Pete Hegseth will just say that he compelled them to do it through some law. But again, I am very cynical.

    • doomguin@piefed.zip · 6 points · 7 hours ago

      Doesn’t support mass surveillance on US Citizens

      Apparently everyone else is fair game.

    • Iconoclast@feddit.uk · 13 points · 10 hours ago

      Anthropic founders are former OpenAI employees who left specifically because they disagreed with OpenAI’s stance on this kind of stuff and they wanted nothing to do with it. If this is just a PR stunt then I don’t see why they would’ve left OpenAI in the first place.

      • very_well_lost@lemmy.world · 2 points · 5 hours ago

        There have been some pretty high-profile departures from Anthropic over the past few months, so… I dunno, seems like there are plenty of insiders who are unhappy with the company’s current trajectory.

      • andallthat@lemmy.world · 6 points · edited · 10 hours ago

        Yes, too soon. It took years and several bajillions in profit before Google removed the “don’t be evil” motto.

      • XLE@piefed.social · 2 points · 8 hours ago

        Every Anthropic PR release has been followed up by a huge infusion of cash from companies like Google and Amazon.

        On January 21, 2025, Amodei said that he was “more confident than ever” that we’re “very close” to “powerful capabilities,” defined as “systems that are better than almost all humans at almost all tasks,” citing his long, boring essay. A day later, Anthropic would raise another $1 billion from Google.