New report finds that safety practices of top AI companies are inadequate

A newly released analysis warns that as artificial intelligence systems grow more advanced, many leading developers are not keeping pace with essential safety practices. The Winter 2025 AI Safety Index, which evaluates eight major AI companies, states that their current strategies lack concrete safeguards, independent oversight and credible long-term risk-management planning required for such powerful technologies.

According to Sabina Nong, an AI safety investigator at the Future of Life Institute (FLI), the findings show a clear divide in how companies approach responsible development. She noted that three organizations, Anthropic, OpenAI and Google DeepMind, lead the field, while five others trail behind.

The lower-ranked group includes xAI, Meta and three prominent Chinese developers: Z.ai, DeepSeek and Alibaba Cloud. Chinese AI models have gained traction in Silicon Valley due to rapid capability improvements and their largely open-source nature.

Anthropic received the highest grade in the index with a C+, while Alibaba Cloud placed last with a D-. The report assessed 35 safety criteria across six categories, including risk-assessment procedures, transparency commitments, whistleblower protections and support for safety-focused research. An independent panel of eight experts evaluated each company's performance against these standards.

FLI President Max Tegmark said the results demonstrate that the industry is advancing toward high-risk technologies without adequate safeguards, exacerbated by a lack of meaningful regulation. He argued that weak oversight allows companies to push forward despite poor safety scores.

The report urges AI developers to improve transparency around internal safety processes, rely more on independent evaluators, strengthen protections against harmful model behavior and reduce aggressive lobbying efforts. Nong reiterated concerns about future systems surpassing human intelligence, saying companies are not prepared for the existential risks such systems may pose.

The release of the index coincides with several ambitious AI model launches. Google's Gemini 3, announced in late November, achieved record-setting results on numerous benchmark tests. Days later, China's DeepSeek introduced a model that appears to match Gemini 3 in several areas. Despite these advances, the report places DeepSeek second-to-last in overall safety, noting that it does not publish safety frameworks or whistleblowing policies, requirements now mandated for AI companies operating in California.

The report concludes that trailing companies consistently miss foundational elements such as structured governance, formal safety frameworks and comprehensive risk evaluations. Tegmark emphasized that rapidly advancing developers can no longer justify sidelining safety now that they have reached the technological frontier.

The authors warn that AI capabilities are accelerating far faster than safety initiatives, creating a widening gap that leaves the industry ill-equipped for the dangers its technologies may introduce.

Author: Natalie Monroe
