CPU, GPU, NPU: which one is the protagonist of the "AI PC"?

As we all know, the "AI PC" is one of the hottest topics in the consumer electronics industry right now. Consumers who don't know much about the technical details but are drawn to the concept tend to believe that an "AI PC" can intelligently handle operations they aren't skilled at, or lighten the load of their daily work.

But for users like us, who both have high expectations for the technology and are relatively familiar with it, a common question is: why is the "AI PC" only being promoted now, when the hardware behind it appeared years ago?

  • How early is the "AI PC"? In fact, it appeared seven years ago

Setting aside professional supercomputers, when did the term "AI PC" begin to make sense for individual consumers?

From the CPU's perspective, the answer is 2019. That year, Intel first introduced the "DL Boost" instruction set in its 10th-generation Core-X processors (such as the i9-10980XE) to accelerate low-precision (INT8/INT16) operations. It was later brought to the lower-end 10th-generation Core mobile parts and the entire 11th-generation Core lineup, letting them process deep learning and AI workloads at theoretically double the previous efficiency.
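At the heart of DL Boost is the AVX-512 VNNI instruction extension, which fuses the multiply and accumulate steps of a low-precision dot product into a single instruction (for INT16 inputs, one VPDPWSSD replaces the older VPMADDWD + VPADDD pair). Purely as an illustration of the semantics, not Intel's implementation, one lane of that operation looks roughly like this in Python:

```python
# Per-lane semantics of VNNI's VPDPWSSD instruction (illustrative sketch):
# two INT16 multiplies and an INT32 accumulate, fused into a single step.

def vpdpwssd_lane(acc: int, a: tuple[int, int], b: tuple[int, int]) -> int:
    """Multiply two pairs of int16 values, add both products to an int32 accumulator."""
    return acc + a[0] * b[0] + a[1] * b[1]

# One accumulation step of an INT16 dot product:
acc = 0
acc = vpdpwssd_lane(acc, (1000, -2000), (3, 4))
print(acc)  # 3000 - 8000 = -5000
```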

From the perspective of graphics cards, the consumer market welcomed its first "AI graphics card" with built-in Tensor Cores, the NVIDIA TITAN V, as far back as the end of 2017. According to public technical information, it integrates 640 first-generation Tensor Core units, delivering 119 TFLOPS of AI acceleration in FP16 mode.

Interestingly, if you are sensitive to numbers, you may have noticed that this seven-year-old graphics card offers AI computing power more than ten times the levels claimed by many of the latest "AI PCs".
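The 119 TFLOPS figure is easy to sanity-check from public specifications: each first-generation Tensor Core performs 64 FP16 fused multiply-adds (128 floating-point operations) per clock, and the TITAN V boosts to roughly 1455 MHz. The NPU figure in the comparison below is an assumption, using a ~10 TOPS unit as typical of the first wave of "AI PC" chips:

```python
# Back-of-the-envelope check of the TITAN V's FP16 tensor throughput.
tensor_cores = 640           # first-generation Tensor Cores
fma_per_core_per_clock = 64  # FP16 fused multiply-adds per core per clock
ops_per_fma = 2              # one multiply + one add
boost_clock_hz = 1.455e9     # approximate boost clock

tflops = tensor_cores * fma_per_core_per_clock * ops_per_fma * boost_clock_hz / 1e12
print(f"TITAN V FP16 tensor throughput: ~{tflops:.0f} TFLOPS")  # ~119

# Versus a ~10 TOPS NPU (an assumed, typical early "AI PC" figure):
print(f"Ratio: ~{tflops / 10:.0f}x")  # ~12x
```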

  • The NPU's AI performance is not high, so why use it?

So why is this the case? In our actual experience, it mainly comes down to three things.

First, there is the issue of energy efficiency. Modern CPUs and GPUs do both have a certain amount of AI acceleration capability, and the AI computing power of GPUs in particular is remarkable. However, the CPU's efficiency at AI computation is not high, and the GPU's heavy power draw while computing AI cannot be ignored on devices such as laptops. The NPU, which processes AI faster than a CPU while consuming less power than a GPU, therefore has natural value in terms of power saving.

Some friends may ask: if a device with higher computing power finishes the work in a shorter time, doesn't that also save power? Indeed it can. The problem is that on today's "AI PC", AI tasks are not necessarily high-compute workloads such as video super-resolution or generative image processing; they may equally be things like AI voice assistants and AI-based performance scheduling, which must stay resident in the background and respond at any moment with the lowest possible latency.

Obviously, a laptop cannot keep its CPU or GPU in a high-power state around the clock just to satisfy the need to "keep AI applications alive at all times".
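A rough energy comparison makes this trade-off concrete. The numbers below are illustrative assumptions, not measurements: a resident assistant needs one small inference burst per second, the GPU finishes each burst quickly at high power, and the NPU takes longer at very low power:

```python
# Illustrative race-to-idle comparison for a resident background AI task.
# All figures are assumptions chosen for the arithmetic, not measurements.
bursts_per_hour = 3600                   # one small inference burst per second

gpu_power_w, gpu_burst_s = 80.0, 0.005   # fast but power-hungry
npu_power_w, npu_burst_s = 2.0, 0.050    # slower but frugal

gpu_wh = gpu_power_w * gpu_burst_s * bursts_per_hour / 3600
npu_wh = npu_power_w * npu_burst_s * bursts_per_hour / 3600
print(f"GPU: {gpu_wh:.2f} Wh per hour, NPU: {npu_wh:.2f} Wh per hour")
# GPU: 0.40 Wh, NPU: 0.10 Wh -- and the GPU figure ignores the extra cost
# of repeatedly waking a discrete GPU out of its idle state.
```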

AMD's GPU-based on-device large-model AI chat function demands a lot of graphics card performance

Moreover, looking at actual usage scenarios, a large number of commonly used PC programs and games are not "AI"-based, which means that if the CPU or GPU is constantly diverted to AI computation, it is effectively a cut to the computer's performance for everything else. Obviously, apart from people who specialize in AI-related work, most users would not want to see that.

  • "Computing power integration" is the future, but it is not easy to do it well

Of course, even granting the NPU's advantages of high energy efficiency, low power consumption, always-on availability, and zero impact on CPU and GPU performance, some of you will surely have had the same thought we did: why can't the CPU, GPU, and NPU all take part in AI computing at the same time, flexibly balancing "high performance" against "high energy efficiency"?

In theory, this is certainly achievable. According to publicly available information, Qualcomm's upcoming Snapdragon X Elite platform lets the CPU, GPU, and NPU all participate in AI computing, with a "heterogeneous collaboration" design that automatically distributes different code types and task loads among them, as the sketch below illustrates.
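To be clear, what follows is not Qualcomm's implementation, just a hypothetical sketch of the idea: a dispatcher that routes each AI task to the CPU, GPU, or NPU according to its compute demand and whether it must stay resident in the background:

```python
# Hypothetical heterogeneous-dispatch sketch -- a conceptual illustration,
# not Qualcomm's (or any vendor's) actual scheduler.
from dataclasses import dataclass

@dataclass
class AITask:
    name: str
    tops_needed: float   # rough compute demand
    background: bool     # resident, latency-sensitive background task?

def pick_unit(task: AITask, npu_tops: float = 10.0) -> str:
    """Route a task to the most suitable compute unit."""
    if task.background:
        return "NPU"     # always-on, lowest power
    if task.tops_needed > npu_tops:
        return "GPU"     # bursty, compute-heavy work
    return "CPU"         # small one-off jobs

tasks = [
    AITask("voice assistant", 0.5, background=True),
    AITask("video super-resolution", 60.0, background=False),
    AITask("photo tagging", 2.0, background=False),
]
for t in tasks:
    print(f"{t.name} -> {pick_unit(t)}")
```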

However, this also offers a glimpse of why other "AI PC" hardware solutions struggle to achieve collaborative computing across different processing units. To put it bluntly, the problem lies in product lineups where every unit "fights on its own".

The latest NVIDIA driver interface provides a series of AI acceleration functions based on RTX graphics cards

For example, in the GPU field, NVIDIA's RTX series, AMD's RX 7000 series, and Intel's ARC series of discrete graphics cards all contain independent AI computing units. However, NVIDIA does not make consumer-grade PC CPUs, so you can see that it gives no consideration to AI-computing coordination with the CPU at all. Instead, it keeps updating graphics-card-based AI video super-resolution, AI color enhancement, AI audio noise reduction, and even AI voice chat features, as if to suggest that "an AI PC only needs graphics card computing power".

By comparison, Intel is in a more awkward position. Although Intel's ARC discrete graphics cards contain XMX matrix computing units, the current generation of ARC integrated graphics built into its CPUs has dropped this design. As a result, today's MTL (Meteor Lake) CPUs actually have only one independent AI computing unit, the built-in NPU. Moreover, even when paired with an ARC discrete graphics card, the AI computing power of the integrated and discrete GPUs cannot be "stacked".

AMD's CPUs and GPUs now both have AI units, but the architectures and software are not interchangeable

AMD, meanwhile, uses the mature XDNA architecture from its enterprise compute cards as the NPU in its CPUs, which in theory should make software adaptation easier. Yet for some reason, AMD appears to have adopted a different AI-unit design in its RDNA3 discrete graphics architecture, and it has still not worked out an AI-based game super-resolution feature. What's more, many of the GPU AI use cases it has demonstrated so far used only the GPU's own floating-point computing power, which means higher power consumption than relying on the GPU's built-in AI units, and makes "computing AI while gaming" all the more impossible.

  • If chip manufacturers are not united, downstream brands can only find ways to "save themselves"

Having said all that, is there a way to solve these problems? There is. Intel and AMD, for their part, surely hope to fix this "unified computing power" problem through architectural revisions in future product lines. As for NVIDIA, although it has no consumer-grade x86 CPU line, it clearly cannot be ruled out that it will enter the Windows on ARM ecosystem with its own ARM CPUs in the future.

Of course, all of the above amounts to punting the problem to future architectures and next-generation hardware platforms. So for consumers about to buy a machine now, or even those who have been using graphics cards or CPUs with built-in AI units for years, is there really no way forward?

Not necessarily. On the one hand, as the operating-system vendor, Microsoft certainly does not want to see this kind of "AI PC standard split", so it has done real work at the driver and API level to integrate the AI computing power of different hardware architectures. A typical example: whether the work runs on a graphics card's floating-point units, an NPU, or the AI units built into the graphics card, under Windows they all come under the unified scheduling of the DirectML API, achieving a certain degree of computing-power "integration".
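In practice, applications usually reach DirectML through a runtime such as ONNX Runtime rather than calling the API directly. The sketch below assumes the onnxruntime-directml package and a placeholder model.onnx file; the provider list is standard ONNX Runtime usage, with DirectML picking up whatever DirectX 12-capable accelerator is present and the CPU serving as fallback:

```python
# Minimal sketch: running an ONNX model through DirectML on Windows.
# Assumes `pip install onnxruntime-directml` and a placeholder "model.onnx".
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",                # hypothetical model file
    providers=[
        "DmlExecutionProvider",  # DirectML: any DirectX 12-capable accelerator
        "CPUExecutionProvider",  # fallback when no accelerator is available
    ],
)

inp = session.get_inputs()[0]
# Substitute 1 for any dynamic dimensions in the model's input shape.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
x = np.zeros(shape, dtype=np.float32)
outputs = session.run(None, {inp.name: x})
print("Output shapes:", [o.shape for o in outputs])
```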

On the other hand, PC manufacturers are making efforts of their own beyond Microsoft's, trying to pool CPU, GPU, and NPU computing power through self-developed AI middleware layers, or integrating GPU-acceleration APIs more deeply into the system stack to improve AI efficiency. These approaches are meaningful too, but they are tied to specific PC brands and sometimes even specific product lines; their results may be excellent, yet they will not necessarily drive the industry as a whole forward.
