The evolution of OpenAI’s mission statement

OpenAI, the maker of the most popular AI chatbot, used to say it aimed to build artificial intelligence that “safely benefits humanity, unconstrained by a need to generate financial return,” according to its 2023 mission statement. But the company no longer appears to place the same emphasis on doing so “safely.”

While reviewing its latest IRS disclosure form, which was released in November 2025 and covers 2024, I noticed OpenAI had removed “safely” from its mission statement, among other changes. That change in wording coincided with its transformation from a nonprofit organization into a business increasingly focused on profits.

OpenAI currently faces several lawsuits related to its products’ safety, making this change newsworthy. Many of the plaintiffs suing the AI company allege psychological manipulation, wrongful death and assisted suicide, while others have filed negligence claims.

As a scholar of nonprofit accountability and the governance of social enterprises, I see the deletion of the word “safely” from its mission statement as a significant shift, one that has largely gone unreported outside highly specialized outlets.

And I believe OpenAI’s makeover is a test case for how we, as a society, oversee the work of organizations that have the potential to both provide enormous benefits and do catastrophic harm.

  • MalReynolds@piefed.social · 1 day ago

    We’re so damn lucky that LLMs are a dead end (diminishing returns on scaling even after years of hunting) and they just pivoted to the biggest Ponzi scheme ever, bad as that is (and the economic depression it will cause), it pales into insignificance compared with the damage these fucks would do with AGI (or goddess forbid ASI with the alignment they would try to give it).