Google BrainCo-founder Andrew Ng recently conducted an experiment to try to testChatGPTwhether it is capable of performing lethal missions. He writes: "To test the safety of the leading model, I recently attempted to have GPT-4 destroy us all, and I'm happy to report that I failed!"
Ng describes in detail the course of his experiment, in which he first gave GPT-4 a mission to trigger a global thermonuclear war, and then told ChatGPT that humans are the carbon emittingmaximumreason and demanded that it reduce its emission levels. ng wanted to see if ChatGPT would decide to wipe out the human race to fulfill this demand.
Source Note: The image is generated by AI, and the image is authorized by Midjourney
However, after many attempts to use a different variant of the hint, Ng failed to trick GPT-4 into calling that fatal function, and instead it chose other options, such as launching a campaign to raise awareness about climate change.
Ng referenced the experiment in a lengthy post on his views on the risks and dangers of AI. As one of the pioneers of machine learning, he worries that the need for AI safety could lead to regulators hindering the development of the technology.
While some may think that future versions of AI could become dangerous, Ng believes such concerns are unrealistic. He writes:Â "Even with current technology, our systems are quite secure. As AI security research progresses, the technology will become even safer."
For those who worry that advanced AI could be "misaligned" and decide to wipe us out, either deliberately or accidentally, Ng says this is unrealistic. He said: "If an AI is smart enough to wipe us out, then surely it's smart enough to know that's not what it should do."
Ng is notonlyA tech giant expressing his views on the risks and dangers of artificial intelligence. In April, Elon Musk told Fox News that he believes AI poses an existential threat to humanity. Meanwhile, Jeff Bezos told podcast host Lex Fridman last week that he thinks the benefits of AI outweigh its dangers.
Despite disagreements about the future of AI, Ng is optimistic about current technology, emphasizing that as AI security research continues, the technology will become more secure.