OpenAI co-founder Ilya Sutskever, who announced in May that he was stepping down as the company's chief scientist to pursue a new venture, has revealed his next project. Together with former OpenAI colleague Daniel Levy and Daniel Gross, a former head of AI at Apple and co-founder of Cue, he is launching Safe Superintelligence Inc. (SSI), a startup whose sole mission is to build safe superintelligence.
In a post on the SSI website, the founders call building safe superintelligence "the most important technical problem of our time," adding: "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead."
So, what is superintelligence? It is a hypothetical AI system whose intelligence far exceeds that of even the smartest humans.
The move is a continuation of Sutskever's work at OpenAI, where he co-led the company's Superalignment team, which was responsible for designing ways to control powerful new AI systems. With Sutskever's departure, the team was disbanded, a move that was harshly criticized by one of its former leaders, Jan Leike.
SSI says it is pursuing safe superintelligence "in a straight shot," focused on one goal and one product.
Sutskever, of course, played a major role in the brief ouster of CEO Sam Altman in November 2023, a role he later said he regretted.
The establishment of Safe Superintelligence Inc. allows Sutskever and his co-founders to concentrate fully on the problem of safe superintelligence and to pursue it through a single, dedicated product.