Saturday, September 21, 2024 - 12:07 am

OpenAI creates new independent committee that will block dangerous models

The explosion that ChatGPT has caused around the world, including in Spain, has also had an impact on the company itself, which has been harshly criticized for leaving behind its origins as a non-profit organization focused on research. The past year has been eventful for OpenAI’s management: the dismissal and return of Sam Altman as CEO, the departure of founding members and Microsoft giving up its observer seat on the board of directors, among other changes. Now, less than four months after forming its previous safety committee, the company is creating a new one.

Scandals such as the launch of GPT-4o with a voice very similar to that of actress Scarlett Johansson, along with rumors of a possibly dangerous artificial superintelligence, have put this AI giant, whose tools are used by 200 million people every week, on the ropes. With the new committee, the company promises to be more transparent about its work, just as it seeks to raise its valuation to $150 billion through a new funding round.

The company behind ChatGPT announced on its blog that it is creating a “Board Oversight Committee” that will be independent and will have the power to delay the release of new models and tools for safety reasons. The move comes after the company’s Safety and Security Committee conducted a 90-day review of safety-related processes and made a series of recommendations to the full board.

Sam Altman, CEO of OpenAI, arriving at a forum in the US Senate last September.

Reuters

It is worth remembering that this Safety and Security Committee was created at the end of May. It was led by Altman himself, the company’s CEO, together with three advisors: Bret Taylor, Adam D’Angelo and Nicole Seligman. Its only task was this 90-day review, which produced recommendations such as collaborating with external organizations and “establishing independent governance for safety and security”.

The Safety and Security Committee thus becomes a committee chaired by Zico Kolter, director of the Machine Learning Department at Carnegie Mellon University’s School of Computer Science. Also on it are Adam D’Angelo, co-founder and CEO of Quora; Paul Nakasone, a retired U.S. Army general; and Nicole Seligman, former executive vice president and general counsel of Sony Corporation.

Logo of OpenAI, the company co-founded by Sam Altman.

Unsplash

The announcement states that the new committee will be independent and will oversee the board, although its members also sit on the company’s board of directors. Current CEO Sam Altman no longer sits on the committee. The company also says that, following the committee’s recommendations, it is seeking “more ways to share and explain our work on security” and “more possibilities for independent testing of our systems.”

“We are committed to continually improving our approach to launching high-performing and safe models,” the statement said. In late August, OpenAI and Anthropic agreed to allow the US government to evaluate their leading AI models before they are made public.

This measure is similar to the Oversight Board created by Meta, Mark Zuckerberg’s company, to review its content decisions. Meta owns some of the most relevant social networks of the moment, such as Instagram and the WhatsApp messaging application, and also develops generative artificial intelligence.
