OpenAI Sounds Alarm on Rapid AI Growth, Taps Ex-Anthropic Expert to Lead Safety Efforts
- Editorial Team

OpenAI — one of the world’s most influential artificial intelligence companies and the maker of ChatGPT — has signaled growing concern about how fast AI technology is advancing, and taken a major step to strengthen its internal safeguards by hiring a high-profile safety expert. On February 4, 2026, OpenAI announced that Dylan Scandinaro, a former technical team member at rival AI lab Anthropic, has joined the company as its new Head of Preparedness, a role focused on identifying and mitigating serious risks from increasingly powerful AI systems.
A New Safety Leadership Role at a Critical Time
The appointment of a Head of Preparedness underscores not just OpenAI’s rapid technological progress, but also how seriously the company is now taking the broader implications of its work. CEO Sam Altman publicly welcomed Scandinaro’s arrival in a post on X (formerly Twitter), noting that the pace at which advanced AI models are evolving — including those developed internally — demands “commensurate safeguards” to ensure their benefits can be realized without causing undue harm.
This role, which sits within OpenAI’s Safety Systems team, was first advertised in late 2025. Altman has previously described the job as challenging and even “stressful,” highlighting the weight of responsibility that comes with leading efforts to understand, test for, and mitigate severe risks that next-generation AI could pose.
In practical terms, Scandinaro is expected to work on preparing for potential dangers from AI that go well beyond today’s use cases — including risks associated with AI agents that might act autonomously in complex environments, and the possibility that highly capable systems could be misused in ways that harm cybersecurity, economic stability, or social wellbeing.
Why AI Safety Matters More Than Ever
OpenAI’s move comes amid a growing global conversation about how quickly artificial intelligence is advancing. In the past few years, large language models and general-purpose AI systems have gone from tools that help write emails or create art to technologies capable of automating complex reasoning, generating software code, and transforming entire industries. This evolution has sparked tremendous excitement — but it has also raised legitimate concerns among researchers, policymakers, and even AI company leaders about how to control or govern these systems responsibly.
Some AI experts argue that we may be closer than ever to Artificial General Intelligence (AGI) — an AI that can perform any intellectual task that a human can — though such a level of capability remains theoretical. Regardless, the risk profile of AI systems has shifted. They are no longer just productivity tools; their outputs and behaviors can now have real-world consequences that are harder to predict, from cybersecurity vulnerabilities to ethical dilemmas around misinformation or economic displacement.
On the public stage, figures outside OpenAI have also been voicing grave concerns about the pace and impacts of AI innovation. For instance, Anthropic’s CEO, Dario Amodei, has warned that AI could upend employment patterns in the near future, potentially affecting up to half of all entry-level white-collar jobs within a few years due to automation — a shift that could be “unusually painful” for workers and the broader economy.
Beyond economic impacts, deep concerns persist about how AI could be misused for harmful ends, whether by compromising cybersecurity, enabling biological threats, or amplifying political and social manipulation. These multifaceted challenges have led some prominent AI safety teams and researchers elsewhere to emphasize the need for robust governance frameworks, collaboration between governments and industry, and proactive risk management.
Scandinaro’s Background and the Stakes at OpenAI
Dylan Scandinaro brings to OpenAI extensive experience in AI safety and risk analysis. Prior to joining OpenAI, he worked on technical safety issues at Anthropic — a lab founded by former OpenAI researchers with a mission focused explicitly on ethical and safe AI development. Scandinaro has also held roles at major research and technology firms including Google DeepMind and Palantir, giving him a wide perspective on both the promise and peril of rapidly scaling AI systems.
The compensation for the position reflects both the expertise required and the strategic importance of the job. According to the public job listing, Scandinaro is expected to earn a base salary of at least $550,000 per year (roughly ₹4.16 crore) — one of the higher salary ranges for safety-oriented roles in the AI industry.
OpenAI’s safety credentials have faced scrutiny in recent years as the company has pursued increasingly powerful models. Several researchers have publicly departed the company or criticized its approach to safety, arguing that risk and alignment considerations were not always prioritized as strongly as technological scaling and commercial deployment. The creation of the Head of Preparedness role — and the choice to fill it with a respected voice in the safety community — can be seen as a strategic effort to bolster internal risk management and demonstrate commitment to broader responsible AI goals.
The Bigger Picture: Industry-Wide Risk Management
OpenAI’s announcement is part of a broader trend in the AI field: as AI models become more capable, both companies and regulators are grappling with how to balance innovation with precaution. Industry researchers and academic experts alike have proposed frameworks, safety protocols, and even international agreements designed to prevent unsafe or premature deployment of highly autonomous AI systems.
But despite these efforts, effective governance remains elusive. Critics point out that current internal safety frameworks — while a step forward — may not fully address extreme or emergent risks without external oversight or more robust regulation. The technical complexity of these models only deepens the challenge, underscoring why leadership roles like OpenAI’s Head of Preparedness are increasingly seen as essential.
In hiring Dylan Scandinaro, OpenAI appears to be acknowledging both the extraordinary opportunity that AI represents and the extraordinary risks that come with it. As the technology evolves, how companies like OpenAI manage safety — and how the world responds — may well shape the future of AI and its role in society for decades to come.