Meta Chief Scientist Yann LeCun believes that AI superintelligence will not arrive soon and is skeptical about quantum computing

At an event celebrating the 10th anniversary of Meta's foundational AI research team, Yann LeCun, the company's chief scientist and a deep learning pioneer, voiced his concerns about current AI. LeCun argues that existing AI systems are still decades away from any level of self-awareness, or from the common sense needed to push their capabilities beyond merely summarizing large amounts of text in creative ways.

LeCun's views contrast with those of Nvidia CEO Jen-Hsun Huang, who recently said that AI will be 'quite competitive' with humans in less than five years, outperforming them on many intellectually demanding tasks.

LeCun mentioned at the event that he knows Jen-Hsun Huang and that the Nvidia CEO has benefited from the AI boom. "There's an AI war going on, and he's providing the weapons."

Speaking about technologists trying to develop artificial general intelligence (AGI), LeCun said, "[If] you think AGI has arrived, then you have to buy more GPUs." As long as companies like OpenAI continue to pursue AGI, they will need more NVIDIA chips.

LeCun argues that society is more likely to get 'cat-level' or 'dog-level' AI years before human-level AI. He also said that the tech industry's current focus on language models and text data is not enough to create the kind of advanced, human-like AI systems that researchers have dreamed about for decades.

"Text is a very poor source of information," LeCun says. He explains that the amount of text used to train a modern language model could take 20,000 years for humans to read."Train a system to use the equivalent of 20,000 years of reading material and they still won't understand that if A is the same as B, then B is also the same as A."

"With this kind of training, they remain ignorant of many basic things in the world," LeCun said.
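To make LeCun's point concrete: "is the same as" is a symmetric relation, so any system that represents the fact explicitly can derive the reversed statement mechanically. The toy Python sketch below is purely illustrative and is not drawn from LeCun's remarks or from any Meta system; it simply shows the trivial inference he says text-trained models fail to make.

def symmetric_closure(facts):
    # Given pairs (a, b) meaning "a is the same as b", add the reversed pairs,
    # since "is the same as" is symmetric: (A, B) implies (B, A).
    closed = set(facts)
    closed.update((b, a) for a, b in facts)
    return closed

print(symmetric_closure({("A", "B")}))  # {('A', 'B'), ('B', 'A')}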

As a result, LeCun and other Meta AI executives have been delving into ways to customize the so-called Transformer models used to create apps such as ChatGPT to work with all kinds of data, including audio, image, and video information. These AI systems could discover billions of hidden correlations between these different kinds of data, and could therefore exhibit even more impressive capabilities.
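For illustration, the PyTorch sketch below shows the general idea behind such a multimodal Transformer: project each modality into a shared embedding space, concatenate the resulting token sequences, and let self-attention pick up cross-modal correlations. All names, dimensions, and layer sizes here are assumptions made for the example only and do not describe Meta's actual models.

import torch
import torch.nn as nn

class ToyMultimodalTransformer(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=2,
                 text_vocab=32000, audio_dim=128, image_dim=768):
        super().__init__()
        # Per-modality encoders that map raw features into the shared d_model space.
        self.text_embed = nn.Embedding(text_vocab, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.image_proj = nn.Linear(image_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, text_ids, audio_feats, image_feats):
        # Each modality becomes a sequence of d_model-sized tokens.
        tokens = torch.cat([
            self.text_embed(text_ids),     # (batch, text_len, d_model)
            self.audio_proj(audio_feats),  # (batch, audio_len, d_model)
            self.image_proj(image_feats),  # (batch, image_len, d_model)
        ], dim=1)
        # Self-attention over the combined sequence can relate, say, a spoken
        # word to an image patch -- the "hidden correlations" described above.
        return self.encoder(tokens)

# Example with random stand-in data: 16 text tokens, 8 audio frames, 4 image patches.
model = ToyMultimodalTransformer()
out = model(torch.randint(0, 32000, (1, 16)),
            torch.randn(1, 8, 128),
            torch.randn(1, 4, 768))
print(out.shape)  # torch.Size([1, 28, 256])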

Some of Meta's research includes software that can help people learn to play tennis better while wearing the company's Project Aria augmented reality glasses, which incorporate digital graphics into the real world. Executives showed a demo in which a person wearing the AR glasses was able to see visual cues while playing tennis that taught them how to properly grip a tennis racket and swing their arm with perfect posture. The type of AI model needed to deliver this digital tennis assistant would need to incorporate 3D visual data as well as text and audio in case the digital assistant needed to speak.

These so-called multimodal AI systems represent the next frontier, but they're not cheap to develop. As companies like Meta and Google's parent company Alphabet work on more advanced AI models, NVIDIA could gain an added advantage, especially if no other competition emerges.

The Future of AI Hardware

NVIDIA has been one of the biggest beneficiaries of the AI boom, and its expensive graphics processing units have become the standard tool for training large language models. Meta relies on 16,000 NVIDIA A100 GPUs to train its Llama AI models.

CNBC asked if the tech industry needs more hardware providers as Meta and other researchers continue to develop these kinds of complex AI models.

"It's not required, but it would be nice to have," LeCun said, adding that GPU technology is still the gold standard in AI.

However, he said, future computer chips may not be called GPUs.

"You can expect to see new chips coming out that are not graphics processing units, they're just neural network, deep learning gas pedals," LeCun said.

LeCun is also somewhat skeptical of quantum computing, an area in which tech giants like Microsoft, IBM, and Google have invested heavily. Many researchers outside of Meta believe that quantum computers could accelerate progress in data-intensive fields such as drug discovery because they can perform many calculations at once using so-called quantum bits, or qubits, rather than the ordinary binary bits used in modern computing.

But LeCun is skeptical.

"You can solve more problems more efficiently with traditional computers," LeCun says.

"Quantum computing is a fascinating scientific topic," LeCun says. The "practical relevance and the possibility of building quantum computers that are actually useful" is less clear.

Mike Schroepfer, a senior Meta researcher and the company's former chief technology officer, concurred, saying he evaluates quantum technology every few years and believes that a useful quantum machine "may come along at some point, but it's on too long a time horizon to be relevant to what we're doing."

"The reason we started an AI lab ten years ago is that it was clear that the technology would be commercialized in the next few years," Schroepfer said.
