As artificial intelligence evolves, the need for oversight is clear. Most AI labs openly support regulation and provide access to frontier models for independent evaluation prior to release.
The world is using AI to solve all kinds of problems, but without proper oversight, AI can just as easily create new ones.
The Future of Life Institute has developed report cards for various AI labs, including OpenAI, Meta, Anthropic, and Elon Musk's xAI; its AI Safety Index is an independent review that assesses 42 indicators of “responsible behavior.”
The report gave each company a letter grade based on these indicators, and Meta, which focuses on open-source AI models through its Llama family, earned an F.
Future of Life Institute ratings follow the US GPA scale, running from A+, A, A-, B+ and so on down to F.
A panel of experts drawn from academia and think tanks was assembled to examine how AI companies operate, and the initial results are alarming.
Looking at Anthropic, Google DeepMind, Meta, OpenAI, xAI, and Zhipu AI, the report found “significant gaps in safety measures and a serious need to improve accountability.”
According to this first report card, Meta scored the lowest (with xAI not far behind), while Anthropic topped the list but still received only a C grade.
All flagship models were found to be “vulnerable to adversarial attacks,” as well as insecure and potentially at risk of escaping human control.
Perhaps most damningly, the report states that “reviewers consistently highlighted how, in the absence of independent oversight, companies were unable to resist profit-driven incentives to cut corners on security.”
While Anthropic's current governance structure and OpenAI's initial governance structure were highlighted as promising, experts called for independent verification of risk assessments and compliance with safety frameworks across all companies.
In short, this is exactly the kind of oversight and accountability the burgeoning AI industry needs before it is too late.