AI Firms Deemed 'Unprepared' for Human-Level Intelligence Risks, Urgent Report Reveals

A new report finds that major AI firms lack the safety planning needed to responsibly develop human-level AI.

Key Points

  • Leading AI companies are 'fundamentally unprepared' for AGI risks, according to an FLI report.
  • No firm assessed scored higher than a D for existential safety planning.
  • Anthropic received the highest overall grade with a C+, followed by OpenAI at C and Google DeepMind at C-.
  • Experts liken the urgency of safety protocols in AI development to nuclear safety measures.

A stark warning has emerged regarding the readiness of major AI companies to safely develop human-level artificial intelligence, commonly known as artificial general intelligence (AGI). A new report by the Future of Life Institute (FLI) finds that these companies are 'fundamentally unprepared' to mitigate the dangers associated with their ambitious AI projects. The report, published on July 17, 2025, evaluates seven leading AI developers, including Google DeepMind, OpenAI, and Anthropic, on their safety planning for the existential risks posed by AGI.

According to the findings, none of the assessed firms scored higher than a D in the index's existential safety planning category. Anthropic led the overall rankings with a C+; OpenAI and Google DeepMind followed with a C and a C-, respectively. These low grades point to a worrying pattern: companies pressing forward without comprehensive, coherent safety measures for managing the potentially catastrophic implications of AGI.

Max Tegmark, a co-founder of FLI and a well-respected figure in AI research, expressed profound concern over these results, stating, "The situation is akin to constructing a nuclear power plant without a safety plan." He stressed the urgency of closing these gaps, noting that AI capabilities are advancing at an unprecedented pace and that some researchers expect AGI to arrive within the next decade.

The report questions whether AI developers can meet their ethical responsibilities while racing to innovate, and urges them to prioritize robust safety frameworks as high-risk technology converges with rapid development. With AI systems increasingly capable of human-like decision-making, it argues, concrete precautions are needed to prevent uncontrolled AGI development from producing dire outcomes.

As the conversation around AI safety intensifies, stakeholders across the industry are being urged to re-evaluate their approaches to risk management, marking a critical moment for the future of AI and its regulation.