Meta Chief Scientist Yann LeCun: Even in the most optimistic case, AGI is at least five to six years away.

Meta Chief Scientist and Turing Award winner Yann LeCun shared his views on artificial general intelligence (AGI) on the 29th in an episode of the "Into the Impossible" podcast.
He said that the negative impact of AI is currently being over-amplified, since its capabilities remain very limited. "Even in the most optimistic scenario, realizing AGI will take at least five to six years." Society today is broadly anxious about AI, with some even claiming it "may lead to the end of humanity"; LeCun believes such views ignore the actual state of AI development and its potential positive impact.

He noted that AI's ability to understand and manipulate the physical world is still very limited: current systems are trained mainly on text data, lack the ability to visually understand the physical world, and cannot interact naturally with their environment the way a human or an animal can. "For example, a 10-year-old child or a cat can rely on 'intuitive physics' to figure out how to interact with the physical world, such as planning the trajectory of a jump or understanding the motion of an object. Current AI systems do not yet have these capabilities."

The industry already holds many differing views on AI and even AGI; some are listed below:

  • Microsoft AI CEO Mustafa Suleyman: Current hardware cannot deliver AGI; it may become possible within the next two to five generations of hardware, but the probability of success within two years is not high. Given that each generation of hardware currently takes 18 to 24 months to develop, five generations could mean ten years. He also thinks the industry's current focus on AGI is somewhat misplaced: "Instead of obsessing over the Singularity or superintelligence, I'm more focused on building AI systems that actually help humans. These AIs should work for users and be part of their team, rather than chasing unattainable theoretical goals."
  • OpenAI CEO Altman: By 2025, we may see systems with artificial general intelligence (AGI) capabilities for the first time. Such systems could perform complex tasks like humans do, and even use multiple tools to solve problems. "Maybe in 2025, we'll see some AGI systems where people will marvel, 'Wow! That was beyond my wildest dreams.'"
  • 2024 Nobel laureate in physics and "Godfather of AI" Geoffrey Hinton: "Over the years, I've come to see AI more and more as an advanced intelligence that could surpass humanity. Because of this, we need to think seriously about what it would mean for human society if AI possessed intelligence beyond ours and even the ability to control us. This is not science fiction, but a tangible risk." In a BBC program later this month, he said there is a 10% to 20% probability that AI will lead to the extinction of the human race in the next thirty years, and warned that the technology is changing "much faster than expected".