OpenAI executive Jan Leike resigns, criticizing the company for no longer prioritizing safety

Following the departure of co-founder Ilya Sutskever, another OpenAI executive, Jan Leike, announced on the X platform that he had left the company last week.


Jan Leike was reportedly co-lead of OpenAI's Superalignment team. He said that in recent years OpenAI has neglected its internal safety culture and processes, insisting instead on shipping "eye-catching" products at high speed.

OpenAI established the Superalignment team in July 2023 with the mission of ensuring that AI systems with "superintelligence" that are "smarter than humans" follow human intent. At the time, OpenAI pledged to dedicate 20% of its computing power over the following four years to the safety of its AI models. According to Bloomberg, OpenAI has now disbanded the Superalignment team.

Leike said he joined OpenAI because he "believed OpenAI was the best place in the world to conduct AI safety research." However, he said, OpenAI's current leadership has largely neglected model safety, placing its core priorities on profitability and acquiring computing resources.

As of press time, OpenAI's Greg Brockman and Sam Altman have jointly responded to Leike's remarks, saying they have "raised awareness of AI risks and will continue to strengthen safety work to match the stakes of each new model." A translation of their statement follows:

We are extremely grateful for everything Jan has done for OpenAI, and we know he will continue to contribute to our mission externally. In light of some of the questions raised by his departure, we wanted to explain our thinking about our overall strategy.

First, we have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it. We have repeatedly demonstrated the enormous possibilities of scaling up deep learning and analyzed their implications; called for international governance of AGI (before such calls became popular); and done pioneering work in the science of assessing AI systems for catastrophic risks.

Second, we are laying the foundation for the safe deployment of increasingly powerful systems. Making new technology safe for the first time is not easy. For example, our teams did a great deal of work to bring GPT-4 to the world safely, and we have since continued to improve model behavior and abuse detection in response to lessons learned from deployment.

Third, the future will be harder than the past. We need to continually improve our safety work to match the risks of each new model. Last year, we adopted a preparedness framework to systematize our approach.

Now is a good time to talk about how we see the future.

As models continue to improve in capabilities, we expect them to become more deeply integrated with the world. Users will increasingly interact with systems composed of multiple multimodal models and tools that can take actions on their behalf, rather than just conversing with a single model via textual input and output.

We think these systems will be very beneficial and helpful to people, and that they can be delivered safely, but doing so requires a great deal of foundational work. This includes being thoughtful about what they are connected to during training, solving hard problems like scalable oversight, and other new kinds of safety work. As we build in this direction, we are not sure when we will meet our safety standards for release, and we think it is acceptable if this delays release timelines.

We know that we can't foresee every possible future scenario. Therefore, we need very tight feedback loops, rigorous testing, careful consideration at every step, world-class security, and a harmonious integration of safety and capabilities. We will continue to conduct safety research across different time scales. We also continue to work with governments and many stakeholders on safety.

There is no handbook to guide the path toward AGI. We believe empirical understanding can help guide the way forward. We believe in realizing the huge potential gains while working to mitigate the serious risks; we take our role very seriously and carefully weigh feedback on our actions.

—Sam and Greg
