In a recent column, Gary Marcus, a prominent AI researcher and longtime critic of the field, sharply criticized OpenAI CEO Sam Altman, arguing that Altman cannot be trusted and that the development of generative AI is heading in the wrong direction. Marcus pointed out that although Altman frequently stresses the importance of AI safety in public, his actual actions are inconsistent with those words, which calls his integrity into question.
In the article, Marcus elaborated on his concerns about current AI technology, particularly the potential safety risks of tools like ChatGPT. He argues that Altman's overly optimistic attitude toward AI is out of step with reality, and that the safety of these technologies may never be guaranteed. He stressed that the rapid pace of development stands in sharp contrast to the neglect of its potential dangers, fueling growing public anxiety.
In addition, Marcus noted that current AI research and development lacks sufficient oversight and transparency, with the result that many technologies and applications are pushed to market without adequate review. Such practices not only put users at risk but may also have a profound impact on society. Marcus called on AI developers to take greater responsibility and to consider the social consequences of their technology, rather than pursuing only technological innovation and commercial gain.
Marcus's views have sparked widespread discussion, prompting many to reflect on the future and direction of AI technology. He believes that only through comprehensive review and oversight of AI can its safety and reliability be ensured and its positive contribution to society be realized.