OpenAI Creates Independent Panel to Oversee AI Safety

OpenAI announced that the Safety and Security Committee it created in May, amid controversy over the safety of the artificial intelligence it is developing, will become an independent oversight committee of the board of directors.

The committee will be chaired by Zico Kolter, director of the machine learning department at Carnegie Mellon University’s School of Computer Science. It will also include Adam D’Angelo, an OpenAI board member and co-founder of Quora; Paul Nakasone, former head of the National Security Agency; and Nicole Seligman, former executive vice president at Sony.

The committee will oversee “the safety and security processes that guide the development and deployment of OpenAI models,” according to the company’s statement.

The committee recently completed a 90-day review of OpenAI's safety and security practices and made recommendations to the board; OpenAI has decided to publish the committee's findings in a public blog post.

The committee's five main recommendations were to establish independent governance for safety and security, strengthen security measures, be transparent about OpenAI's work, collaborate with external organizations, and unify the company's safety frameworks.

OpenAI recently launched o1, a new model in beta focused on reasoning and solving difficult problems. The company noted that the committee “reviewed the safety and security criteria that OpenAI used to evaluate the o1 model’s readiness for launch.”

The company added that the committee will work with the board of directors to oversee model launches, with the authority to delay a release if safety concerns are not addressed.

OpenAI has faced intense criticism over concerns that its rapid growth could outpace its ability to operate safely. According to press reports, roughly half of the OpenAI researchers working on the safety of artificial general intelligence (AGI) and artificial superintelligence (ASI) have recently left the company, likely due to disagreements over how to manage the risks associated with developing these technologies.

The resignations of Ilya Sutskever and Jan Leike were particularly significant: both were senior scientists at the company and had led the “superalignment” team, which focused on the safety of future AI systems and which the company disbanded after their departures.

Earlier this year, a group of current and former OpenAI employees published a letter expressing concerns about the lack of oversight of the company’s products and the lack of protection for whistleblowers.

The company's decision to create an independent committee to oversee AI safety comes as OpenAI pursues a funding round that could value the company at more than $150 billion, with major companies such as Microsoft, Nvidia, and Apple reportedly likely to join the investment.
