However, a recent report reveals a worrying trend at OpenAI: nearly half of the researchers who once focused on the long-term risks of superintelligent AI have left the company.
According to Fortune, Daniel Kokotajlo, a former governance researcher at OpenAI, said that in the past few months almost half of the members of OpenAI's AGI safety team have left, raising concerns about whether the company is neglecting AI safety.
AGI safety researchers are primarily responsible for ensuring that future AGI systems do not pose an existential threat to humanity. However, as OpenAI increasingly focuses on products and commercialization, these departures mean the company's safety research team is steadily shrinking.
Kokotajlo pointed out that since 2024, OpenAI's AGI safety team has shrunk from about 30 people to about 16. He believes this was not a coordinated exodus, but rather that individuals gradually lost confidence and left.
An OpenAI spokesperson said the company is proud to provide the most capable and safest artificial intelligence systems and believes in its scientific approach to addressing risks.
Earlier this year, OpenAI co-founder and chief scientist Ilya Sutskever announced his resignation from the company, and the "Superalignment" team he led, which was responsible for long-term safety research, was also disbanded.