AI company efforts ranked in safety report card for protecting humanity

A recent assessment questions whether artificial intelligence companies are adequately protecting humans from AI-related risks. The findings suggest that the industry has significant gaps in safety practices as AI becomes increasingly embedded in daily life, from mental health chatbots to potential cyberattack tools.

The AI Safety Index, released Wednesday by the Silicon Valley nonprofit Future of Life Institute, aims to guide AI development toward safer practices and reduce existential risks to humanity. The report highlights that current incentives for AI firms do not prioritize human safety.

"This industry is largely unregulated in the U.S., creating a competitive environment where safety is often neglected," explained Max Tegmark, president of the institute and MIT professor.

The highest grades awarded in the report were only C+, received by OpenAI, developer of ChatGPT, and Anthropic, known for its Claude chatbot. Google DeepMind received a C, while Meta and Elon Musk's xAI were given a D. Chinese companies Z.ai and DeepSeek also earned a D, and Alibaba Cloud scored the lowest with a D-.

Grades were based on 35 indicators across six categories, including existential safety, risk assessment, and transparency. Evaluators considered public documentation and company responses to surveys, with scoring conducted by a panel of eight AI experts, including academics and organizational leaders. All companies scored below average in existential safety, which examines internal monitoring, control mechanisms, and long-term safety planning.

The report stated, "While companies pursue artificial general intelligence and superintelligence, none have shown a credible plan to prevent catastrophic misuse or loss of control."

OpenAI emphasized its commitment to safety: "We invest in frontier safety research, implement safeguards, rigorously test models internally and with independent experts, and share safety frameworks to help establish industry standards."

Google DeepMind similarly stated that it follows a "science-led approach to AI safety," with a Frontier Safety Framework to identify and mitigate risks from advanced models.

The report criticized xAI and Meta for lacking monitoring and control commitments despite having risk frameworks. DeepSeek, Z.ai, and Alibaba Cloud also did not provide public documentation on existential safety strategies. Several companies did not respond to comment requests.

Tegmark warned that insufficient regulation could enable misuse of AI for bioweapons, manipulation, or political destabilization, but noted that establishing binding safety standards could address these risks effectively.

Government oversight efforts exist but face resistance from tech lobbying groups concerned about slowing innovation. California's SB 53, signed by Gov. Gavin Newsom, requires AI companies to report safety protocols and incidents such as cyberattacks. Tegmark called it progress but emphasized the need for broader measures.

Analyst Rob Enderle described the AI Safety Index as an innovative approach but expressed concerns about the effectiveness and enforceability of U.S. regulations.

Author: Natalie Monroe