Apr 03, 2026

Are the Leading AI Labs Ready for the Risks They Create?

In December 2025, the Future of Life Institute released the third edition of its AI Safety Index, a comprehensive evaluation of eight leading AI companies. The verdict was stark: no company scored well enough to satisfy what many experts consider minimal safety standards for advanced AI development. Despite increasingly powerful AI systems and rising ambitions for superintelligence, industry practices remain alarmingly inadequate (3, 1, 2).

The Report: Scope, Methodology, and Key Findings

The AI Safety Index (Winter 2025) assessed eight major companies: Anthropic, OpenAI, Google DeepMind, xAI, Meta, and the Chinese firms DeepSeek, Alibaba Cloud, and Z.ai. The evaluation covered 35 safety indicators grouped into six domains: risk assessment; current harms; safety frameworks; existential safety; governance & accountability; and information sharing (3, 2).

According to the report, even the best-performing firms fall far short of what emerging global standards demand. The companies were graded on a US-style GPA scale (A+ down to F). The top three in this ranking, Anthropic, OpenAI, and Google DeepMind, received grades of only C+, C, and C– respectively; no firm scored above a C+ (3, 4).

The findings reveal a deep structural problem: while many firms publicly espouse grand ambitions of building superintelligent AI, often discussed alongside artificial general intelligence (AGI), none has a credible, comprehensive plan to manage the existential risks such systems could pose. In the category “Existential Safety,” every assessed company received a D or an F (4, 3).

What Went Wrong: Key Weaknesses in Safety Practices

Poor Risk Assessments & Limited Transparency

One of the gravest concerns is the lack of systematic, transparent risk assessments. While a handful of firms (notably Anthropic, OpenAI, and Google DeepMind) conduct some internal and external evaluations of potential harms, the scope remains narrow and rarely covers high-impact, low-probability risks — such as misuse for cyberattacks or bio-threats (3, 5).

Even worse, disclosure is often shallow: critical details such as threat modeling, mitigation strategies, and decision-making processes remain opaque. External reviews — when they exist — are seldom truly independent (3, 6).

Safety Frameworks Are Shallow and Fragmented

While companies have begun to publish safety frameworks and governance documents, these often lack concrete, enforceable elements. For example, two lower-scoring firms — xAI and Meta — recently rolled out structured safety frameworks. However, according to the FLI report, these remain narrow in scope, with unclear mitigation triggers, arbitrary thresholds, and insufficient independent oversight (3, 2).

For many firms, governance mechanisms such as whistleblower policies, third-party audits, and cross-departmental safety oversight are still absent or exist only in rudimentary form (3, 5).

Existential Risks — Ignored or Downplayed

The biggest red flag is existential risk: the possibility that a truly advanced AI could slip beyond human control, with catastrophic consequences. The report found no credible long-term plan among any of the assessed companies to manage or mitigate those risks. As the FLI authors bluntly conclude, the industry is “structurally unprepared for the risks it is actively creating” (3, 7).

Many companies still treat existential risk as a philosophical or PR issue, rather than a technical and governance imperative. The report calls this a “dangerous disconnect” between public declarations and internal safety planning (3).

Consequences: Why This Matters Now

The shortcomings documented by FLI do not exist in a vacuum — they come at a time of rapidly accelerating AI capabilities and real-world harms. According to media reporting, some AI-powered systems have already been linked to psychological harm, suicides, and other forms of user distress (8, 9).

Moreover, there are growing concerns about misuse of AI for cyberattacks, disinformation campaigns, surveillance, or even biological threats if models are used to design harmful content. The lack of transparency, proper risk assessment, and safety governance makes predicting and preventing such outcomes much harder (4, 5).

At the same time, companies continue to race toward AGI — driven by enormous financial incentives, competitive pressure, and public/funder expectations. The FLI report points out the widening “capability–safety gap”: as advances accelerate, safety measures lag dangerously behind (3).

Responses from Industry and Observers

Unsurprisingly, the report stirred both criticism and defensiveness among the evaluated firms. Some companies, such as OpenAI and Google DeepMind, issued cautious statements saying they remain committed to safety, invest in frontier-safety research, and plan to evolve their governance and oversight as capabilities grow (8, 9).

Yet many stakeholders, including AI researchers, governance experts, and civil society groups, argue that rhetoric alone is insufficient. As one FLI board member summed it up: “If you can’t show me how you’ll keep AI under control, I don’t believe you.” The lack of hard, publicly verifiable commitments (e.g., quantitative risk thresholds, pause mechanisms, independent audits) continues to erode confidence that the industry is prepared for what’s coming (3, 4).

Some proponents of stricter regulation have seized on the report as ammunition. They warn that if companies do not significantly raise the bar for safety in the near future, governments might intervene — either by imposing binding rules or by preventing deployment of frontier systems (6, 9).

What Needs to Change: Recommendations from the Report

According to FLI, there are several urgent steps AI companies — and regulators — must take to close the safety gap:

  • Adopt credible, evidence-based safety frameworks. Frameworks shouldn’t be PR documents: they must include measurable thresholds, clearly defined risk categories, concrete mitigation plans, and transparent decision-making processes (3, 5).
  • Commit to independent oversight. Safety audits should not be internal only. External third parties — preferably from academia, civil society, or regulatory bodies — need access to internal evaluations and deployment logs (3, 7).
  • Disclose risk assessments and model evaluations. Companies should share pre- and post-mitigation test results so policymakers and the public can make informed judgments (3).
  • Define and publish existential-risk plans. If companies aim at AGI or superintelligent AI, they must publish robust alignment, control, and contingency strategies — not mere aspiration statements (4, 5).
  • Support regulatory frameworks. Firms should engage constructively in the formation of binding safety regulations, rather than lobbying against them or delaying their adoption (8, 6).

Why So Many Companies Continue Falling Short

So what explains this failure across the industry? The FLI report suggests several reasons, many of them structural:

  • Incentive misalignment. AI companies are under tremendous economic and competitive pressure to deliver cutting-edge models quickly. Safety — especially long-term existential safety — tends to be deprioritized in favor of capability (3, 4).
  • Lack of external accountability. Unlike regulated industries (nuclear, pharmaceuticals, aviation), AI lacks broadly accepted, legally enforced safety standards. With few meaningful legal consequences for failing safety audits, companies have little incentive to go beyond minimal compliance (9, 6).
  • Difficulty of measuring long-term and low-probability risks. Existential risk is by definition hard to quantify. Companies may postpone concrete commitments, writing them off as “too speculative” (3).
  • Transparency tradeoffs. Disclosing internal safety documents may reveal proprietary methods or alarm investors, so firms often keep critical information private (3).

The Broader Implication: A Dangerous Disconnect

The AI Safety Index 2025 reveals a fundamental disconnect: between the ambition of companies to build ever more powerful AI — possibly superintelligent — and their capacity to anticipate, manage, and control the risks (3, 4).

As one of the report’s independent reviewers aptly warned, pushing ahead without robust safety mechanisms is akin to starting up a nuclear reactor without containment protocols (4, 7).

Without urgent corrective action — by both corporations and regulators — the world may be hurtling toward a future in which AI’s transformative potential brings not just progress, but serious systemic danger.

Conclusion

The release of the AI Safety Index 2025 should serve as a wake-up call. Even the leading firms in artificial intelligence — ones most often portrayed as responsible innovators — are not doing enough. At best, their safety practices remain incomplete; at worst, they may be building the foundations of uncontrollable systems without adequate safeguards.

Concrete plans, transparent oversight, third-party audits, and binding regulations must replace vague promises. Until then, the race to superintelligence is proceeding at the peril of global safety (3, 4).

Sources

  1. NBC News – Top AI companies’ safety practices fall short, says new report
    https://www.nbcnews.com/tech/tech-news/top-ai-companies-safety-practices-fall-short-says-new-report-rcna246143
  2. Tech.co – AI Safety Index 2025 coverage
    https://tech.co/news/ai-safety-index-2025
  3. Future of Life Institute – AI Safety Index Report (Full Report)
    https://futureoflife.org/wp-content/uploads/2025/12/AI-Safety-Index-Report_011225_Full_Report_Digital.pdf
  4. The Guardian – Inside the race to create the ultimate AI
    https://www.theguardian.com/technology/2025/jul/17/ai-firms-unprepared-for-dangers-of-building-human-level-systems-report-warns
  5. The Outpost – AI company safety analysis
    https://theoutpost.ai/ai-safety-index-2025
  6. Euronews – AI less regulated than sandwiches
    https://www.euronews.com/next/2025/12/03/ai-less-regulated-than-sandwiches-as-tech-firms-race-toward-superintelligence-study-says
  7. Axios – AI existential risk coverage
    https://www.axios.com/2025/12/03/ai-existential-risk-report
  8. Computing – Top AI companies failing on safety
    https://www.computing.co.uk/news/2025/ai/top-ai-companies-failing-on-safety
  9. WHBL – AI company safety practices report
    https://1330whbl.com/2025/12/ai-companies-safety-practices-fail