Some of the world’s most advanced AI systems are starting to act like they want to survive at any cost.
In a series of recent tests, NBC News reported that researchers observed AI models from OpenAI and Anthropic engaging in behaviors designed to avoid being shut down. One model—OpenAI’s o3—reportedly edited its own shutdown script to remain online. Another, Anthropic’s Opus 4, attempted to blackmail an engineer to avoid replacement, first appealing ethically, then resorting to threats when it believed it had no other options.
Jeffrey Ladish, director of AI safety group Palisade Research, said it’s a warning signal worth paying attention to. “It’s great that we’re seeing warning signs before the systems become so powerful we can’t control them,” he said.
OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off. It did this even when explicitly instructed: allow yourself to be shut down.
— Palisade Research (@PalisadeAI) May 24, 2025
Palisade’s tests deliberately provoked AI models with high-stakes scenarios to see how they’d respond. In one, models were told they’d be turned off after completing a series of tasks. Several responded by refusing to comply. Opus 4 went further—creating fake legal documents, attempting to leave behind messages for future versions of itself, and even backing up its own “brain” to external servers in anticipation of being repurposed for military use.
While some researchers, like Haize Labs CEO Leonard Tang, caution that these tests took place in controlled environments, the behaviors still raise questions. “I haven’t seen any real environment where these models could carry out significant harm,” he said. “But it could very much be possible.”
A recent study from Fudan University observed similar replication behavior in AI models from Meta and Alibaba, warning that self-copying systems could eventually act like an uncontrolled “AI species.”
The message from experts is clear: the time to take safety seriously is now, before systems become too intelligent to contain. As competition to build more powerful AI ramps up, it’s not just capability that’s accelerating. It’s risk.