Mrinank Sharma, who had led Anthropic's Safeguards Research Team since 2025, resigned on February 10, 2026. In a letter shared on X, he warned that "the world is in peril, not just from AI, or bioweapons, but from a whole series of interconnected crises." He cited internal pressures that force safety teams to "set aside what matters most." Sharma plans to move to the UK and pursue a poetry degree.
Who Is Mrinank Sharma?
Mrinank Sharma joined Anthropic in August 2023 and rose to lead the company's Safeguards Research Team when it was formed in early 2025. He holds a PhD in Statistical Machine Learning from Oxford. His work at Anthropic included studying AI sycophancy, developing defenses against AI-assisted bioterrorism, and authoring "one of the first AI safety cases."
Before his resignation, Sharma was considered a key figure in Anthropic's safety work, leading the team responsible for ensuring Claude doesn't cause harm.
The Resignation Letter
Sharma's letter was notably philosophical—and unsettling. He wrote that he had "repeatedly seen how hard it is to truly let our values govern our actions," both within himself and within institutions "shaped by competition, speed, and scale."
"The world is in peril, not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity."— Semafor
The letter also referenced "CosmoErotic Humanism," a philosophical framework, and announced plans to pursue a poetry degree and practice "courageous speech."
Internal Tensions Revealed
Sharma's resignation hints at friction between Anthropic's public commitment to safety and internal business pressures. According to Semafor, he wrote that the safety team "constantly faces pressures to set aside what matters most."
This comes as Anthropic faces scrutiny over Claude Cowork, which triggered a $285 billion selloff in software stocks. One anonymous employee told Futurism: "It kind of feels like I'm coming to work every day to put myself out of a job."
A Pattern of Departures
Sharma's exit is part of a broader trend. On the same day, Zoë Hitzig, who had spent two years as a researcher at OpenAI, announced her resignation in a New York Times essay, citing "deep reservations" about OpenAI's emerging advertising strategy.
In 2024, OpenAI dissolved its Superalignment team after its two co-leads departed, one of them publicly citing disagreements over prioritizing safety versus commercial objectives. The pattern suggests AI safety researchers are increasingly uncomfortable with the direction of frontier AI labs.
Anthropic's Response
Anthropic told CNN it was "grateful for Sharma's work advancing AI safety research." The company clarified that he was not the head of safety nor in charge of broader safeguards at the company—a statement that appears to downplay his role despite his team leadership position.
Notably, Anthropic CEO Dario Amodei recently stated at Davos that AI progress is accelerating too rapidly and advocated for regulatory measures to slow industry advancement.