OpenAI Launches Independent Safety and Security Committee That Can Stop Releases of Its Models

OpenAI Launches Safety and Security Committee

OpenAI announced in a blog post that it is transforming its Safety and Security Committee into an “independent advisory committee” with the authority to delay the release of its models over safety concerns. The committee’s recommendation came after its 90-day review of OpenAI’s safety and security processes and safeguards.

The committee, chaired by Zico Kolter and including Adam D’Angelo, Paul Nakasone, and Nicole Seligman, will “receive briefings from company leadership on safety assessments of major model releases and, with the full board, exercise oversight of model releases, including the authority to delay a release until safety concerns are addressed,” according to OpenAI. The full OpenAI board will also receive “regular briefings” on “safety and security issues.”

It should be noted that the members of OpenAI's safety committee also sit on the company's broader board of directors, so it's not entirely clear how independent the committee actually is, or how that independence is structured. (CEO Sam Altman was previously on the committee, but is no longer.)

By creating an independent safety board, OpenAI appears to be taking an approach somewhat similar to Meta’s Oversight Board, which reviews some of Meta’s content policy decisions and can issue rulings the company has committed to follow. None of the Oversight Board’s members sit on Meta’s board of directors. OpenAI says its Safety and Security Committee’s review has also helped “provide additional opportunities for collaboration and information sharing to advance the security of the AI industry.” The company adds that it will look for “more ways to share and explain our work on safety” and “more opportunities to independently test our systems.”

But I don't quite understand why OpenAI needs to launch a safety and security committee in the first place. What exactly does it want to protect users from? Or, put another way, what risk is serious enough that it prevents users from accessing these models and justifies a dedicated safety committee? A lot of questions are running through my mind that I don't have answers to, and they worry me. What do you think, dear reader?

Read also: Hello Wonder creates an AI-powered browser for kids

Source

