"The future will be harder than the past": OpenAI's Altman and Brock respond to senior resignation

"The future will be harder than the past": OpenAI's Altman and Brock respond to senior resignation

This week, Jan Leike, co-leader of the "superalignment" team that oversees safety issues at OpenAI, resigned. In a post on the X platform, formerly known as Twitter, the safety leader explained his reasons for leaving OpenAI, including that he had been at odds with the company's leadership over "core priorities" for so long that they had reached a "breaking point."

The next day, OpenAI CEO Sam Altman and president and co-founder Greg Brockman responded to Leike's assertion that the company isn't focusing on safety.

Among other things, Leike said that OpenAI’s “safety culture and processes have taken a backseat to other products” in recent years and that his team had difficulty getting the resources to do its safety work.

“It’s long past time for us to take seriously the implications of AGI (artificial general intelligence),” Leike wrote. “We must prioritize doing everything we can to prepare for them.”

Altman first responded on Friday with a repost of Leike’s thread, saying Leike was right and that OpenAI “has a lot more to do” and is “committed to doing so.” He promised a longer post would follow.

On Saturday, Brockman posted a joint response from himself and Altman on X:

After thanking Leike for his work, Brockman and Altman said they had received some questions following Leike's resignation. They shared three points, the first of which was that OpenAI has raised awareness of AGI "so that the world can be better prepared for it."

“We have repeatedly demonstrated the incredible possibilities of scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls became popular; and helped pioneer the science of assessing the catastrophic risks of AI systems,” they wrote.

The second point is that they have been laying the foundations for the safe deployment of these technologies, citing the work done by employees to "bring GPT-4 to the world in a safe way." The two claim that since then (OpenAI released GPT-4 in March 2023), the company has "continuously improved model behavior and abuse monitoring based on lessons learned from deployment."

The third point? “The future will be harder than the past,” they wrote. Brockman and Altman explained that OpenAI needs to continually improve its safety work as it releases new models, and cited the company’s Preparedness Framework as a way to help achieve that goal. According to a page on OpenAI’s website, the framework anticipates “catastrophic risks” that may arise and seeks to mitigate them.

Brockman and Altman then went on to discuss a future where OpenAI’s models will be more engaged with the world and interact with more people. They see this as a beneficial thing and believe it can be done safely — “but it requires a lot of groundwork.” As such, the company may delay its release timeline so that the models “reach [its] safety standards.”

“We know we can’t imagine every possible future scenario,” they said. “So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities.”

The leaders said OpenAI will continue to research and collaborate with governments and stakeholders on safety issues.

“There is no proven playbook for how to navigate the path to AGI. We think that empirical understanding can help inform the way forward,” they concluded. “We believe there are both significant benefits to be gained and serious risks to be mitigated; we take our role here very seriously and carefully weigh feedback on our actions.”

Leike's resignation and remarks are further complicated by the fact that OpenAI chief scientist Ilya Sutskever also resigned this week. "#WhatDidIlyaSee" became a trending topic on X, marking speculation about what OpenAI's top leaders knew. Judging from the negative reactions to Brockman and Altman's statement today, it has not dispelled any of that speculation.

As of now, the company is moving forward with its next model, GPT-4o, which features voice capabilities.
