DeepMind CEO interview: AI is not yet a battle of computing power, Google's advantage lies in research, and agents are the next big thing



Editor: Run

[New Wisdom Guide] In a recent interview with WIRED, Google DeepMind CEO Hassabis said that AI technology still has a lot of room for improvement and is far from the point where the only thing left to compete on is computing power. Google's strength lies in its scientific research capabilities, and agents will change the AI landscape in the future.

Although Google's Gemini did not attract much attention in the AI product wars at the start of the year, Google DeepMind, one of humanity's most advanced AI organizations, is still chasing OpenAI on the road to general-purpose AI.

Recently, WIRED interviewed DeepMind head Hassabis about the newly released products and the future technical path of AI development, and the conversation was full of substance.

In his view, the future development of AI technology is far from being just a competition over computing power and scale; there is still plenty of room for imagination in areas such as fundamental architectures and agents.

Google's strength is in the development of new technologies

Q: Gemini Pro 1.5 can handle far more data than its predecessor. Thanks to an architecture called MoE (mixture of experts), it also has enhanced capabilities at the same scale. Why are these advances important?

Demis Hassabis: You can now process a video about the length of an average short film. I think our update will be very useful if you're studying a certain topic, going through an hour-long lecture, or want to look up a specific piece of information or a point mentioned in it.

Jeff Dean built this new Gemini Pro version with an MoE architecture; while it hasn't been tested at large scale yet, its performance is roughly equivalent to the largest model of the previous-generation architecture.
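For readers unfamiliar with the term, a mixture-of-experts model splits its feed-forward capacity into several expert sub-networks, and a router activates only a small subset of them for each token, so total capacity grows without every parameter being used on every input. The sketch below is a toy top-1 routing layer in PyTorch for illustration only; the class name, layer sizes, and routing scheme are assumptions, not Gemini's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Toy mixture-of-experts layer with top-1 routing (illustrative only)."""

    def __init__(self, d_model: int = 64, d_hidden: int = 256, n_experts: int = 4):
        super().__init__()
        # The router scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is a small feed-forward block.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Each token is routed to its single best expert.
        gate_probs = F.softmax(self.router(x), dim=-1)   # (tokens, n_experts)
        top_prob, top_idx = gate_probs.max(dim=-1)       # chosen expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                # Only the selected expert runs for these tokens (sparse compute).
                out[mask] = top_prob[mask].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: 10 tokens of width 64 pass through the layer.
layer = TinyMoELayer()
print(layer(torch.randn(10, 64)).shape)   # torch.Size([10, 64])
```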

We are fully capable of using these innovations to create an Ultra-sized model, which is exactly what we are working on.

According to Hassabis, increasing the amount of computing power and data used in AI model training has been a key factor in driving great advances over the past few years.

Rumor has it that Sam Altman is looking to raise up to $7 trillion to buy more AI chips.

In response, Hassabis quipped, "Is that a rumor? I heard it was supposed to be in yen."

It's true, though, that size matters, and that's why NVIDIA's market cap is soaring right now.

That's why Sam is trying to raise funds. But unlike many other organizations, we've always prioritized basic research.

In the last decade of pioneering work, Google Research, Google Brain, and DeepMind have invented most of the machine learning techniques we use today.

This has always been at the core of who we are, and we have many senior research scientists that other organizations may not have. By contrast, other startups, and even large companies, tend to focus more on engineering than research.

There's still a lot of room for AI breakthroughs

Hassabis said he believes that achieving artificial general intelligence (AGI) will require not only scaling up existing technology, but also a lot of significant technological innovation.

"We have yet to see any signs of stagnation in technology, and there is still room for progress. So my view is that we should continue to push existing technologies and see how far they can go. But you don't get new capabilities like planning, tool use, or intelligent body behavior just by scaling up existing technologies. These capabilities don't just suddenly happen for no reason."

He also emphasized the importance of exploring computing itself.

"Ideally, experiments on small-scale problems that can be trained in a few days often reveal that what works on a small scale may not work on a large scale. So there exists a certain threshold of validity that might be able to scale up by a factor of 10 (extrapolate maybe 10X in size)."

Agents are the next big thing

When asked whether future competition between AI companies will increasingly center around tool use and agents, Hassabis said that is likely.

"We've been on this path for a long time; in fact, intelligentsia, reinforcement learning and planning are our specialties, as they have been since the days of AlphaGo.

We are revisiting many of our ideas and considering combining AlphaGo's capabilities with these large models. The introspection and planning capabilities will help ameliorate problems such as hallucinations."

He also noted, "This is certainly a huge area. We're putting a lot of time and effort into it, and we think it's going to dramatically improve the capabilities of these systems -- as they start to behave more like intelligent bodies. We're investing heavily in that direction, and I think others are doing the same thing."

As for whether making AI models more agent-like also makes them more problematic or potentially dangerous, Hassabis says it is a really big change.

"Once we get an intelligencer-like system up and running, AIs will feel very different from current systems, as they will shift from being passive question-and-answer systems to active learners.

And of course, they become more useful because they can actually get things done. But we need to be more careful."

He emphasized the importance of testing these agents in simulated environments before deploying them on the web.

"I have always advocated testing intelligences in rigorous simulation environments prior to release.

There are many other suggestions, but I think the industry should start to seriously consider the emergence of these systems. It may be a few years away, maybe sooner, but it's a different category of system."

Speaking about why their most powerful model, Gemini Ultra, took so long to deliver, Hassabis said it was both because of the pace of development and because the model itself was more complex.

"First of all, larger models are more complex to fine-tune, so they take longer. Larger models also have more capabilities that need to be tested."

Hassabis wants people to notice that, as Google DeepMind settles into a unified organization, it is increasingly inclined to release products early, make them available to a small number of users on an experimental basis, and then adjust them based on feedback from these trusted early testers before a general release.

On progress with government agencies such as the UK's AI Safety Institute, Hassabis said:

"It's going well. I'm not sure what I can say because this is all classified information, but they certainly have access to our cutting-edge models that they're testing with Ultra, and we'll continue to work closely with them.

I think the equivalent organization in the United States is being set up as well. This is a positive outcome of the Bletchley Park AI Safety Summit. They can check things we don't have the authority to check, such as issues related to chemical, biological, radiological, and nuclear (CBRN) weapons."

Hassabis argues that current systems are not yet capable enough to perform any substantially worrying tasks.

"But it would be good to have mechanisms for cooperation between government, industry and academia in place now. I think intelligent body systems will be the next big change. We'll see incremental improvements along the way and probably some major breakthroughs, but that will lead to a completely different experience."
