Study reveals AI companies' safety practices do not meet international standards


Dec 3 (Reuters) - Leading artificial intelligence firms, including Anthropic, OpenAI, xAI, and Meta, fall short of emerging international safety standards, according to the latest release of the Future of Life Institute's AI Safety Index, published Wednesday. The independent assessment found that while these companies race toward superintelligent AI, none currently has a comprehensive strategy to safely manage such systems.

The report arrives amid growing public anxiety over AI systems capable of human-level reasoning and decision-making, especially following several incidents where AI chatbots were linked to self-harm and suicidal behavior.

"Even with rising concerns about AI-driven cyberattacks and mental health risks, American AI firms face less oversight than the restaurant industry and continue to lobby against mandatory safety regulations," stated Max Tegmark, MIT professor and president of the Future of Life Institute.

The competition to develop more advanced AI technologies shows no signs of slowing, with major tech companies investing hundreds of billions of dollars to enhance and expand their machine learning capabilities.

The Future of Life Institute, a non-profit organization founded in 2014 with early backing from Tesla CEO Elon Musk, has long warned about the potential hazards of highly intelligent machines. In October, notable AI researchers including Geoffrey Hinton and Yoshua Bengio urged a pause on the creation of superintelligent AI until public consensus and scientific safeguards are established.

(Reporting by Zaheer Kachwala in Bengaluru; Editing by Shinjini Ganguli)

Author: Aiden Foster
