Meta AI's new quantized Llama 3.2: 2x faster, 56% smaller, runs on your phone

Recently, Meta AI introduced new quantized Llama 3.2 models, available in 1B and 3B versions, that can be fine-tuned, distilled, and deployed on a wide range of devices.


In the past, while models like Llama 3 achieved remarkable success in natural language understanding and generation, their sheer size and high computational requirements made them difficult for many organizations to use. Long training times, high energy consumption, and reliance on expensive hardware have widened the gap between tech giants and smaller organizations.

One of the features of Llama 3.2 is support for multilingual text and image processing. The 1B and 3B models are quantized to reduce model size by an average of 56% and memory usage by 41%, while achieving 2-3x speedups, making them well suited for running on mobile devices and in edge computing environments.
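As a rough back-of-the-envelope check of what a 56% size reduction means in practice, the sketch below assumes a 16-bit (2 bytes per parameter) unquantized baseline; that baseline is our assumption for illustration, not a figure from the article.

```python
# Rough footprint estimate for the quantized 1B and 3B models.
# Assumption (not from the article): the unquantized baseline stores
# weights as 16-bit floats, i.e. 2 bytes per parameter.
PARAMS = {"Llama-3.2-1B": 1e9, "Llama-3.2-3B": 3e9}
BYTES_PER_PARAM_FP16 = 2
SIZE_REDUCTION = 0.56  # average reduction reported for the quantized models

for name, n_params in PARAMS.items():
    baseline_gb = n_params * BYTES_PER_PARAM_FP16 / 1e9
    quantized_gb = baseline_gb * (1 - SIZE_REDUCTION)
    print(f"{name}: ~{baseline_gb:.1f} GB -> ~{quantized_gb:.1f} GB")
```

Under that assumption, the 3B model drops from roughly 6 GB to under 3 GB of weights, which is what makes phone-class hardware plausible.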

Specifically, these models use 8-bit and 4-bit quantization strategies to reduce the precision of weights and activations from the original 32-bit floating-point representation, dramatically lowering memory and compute requirements. This means the quantized Llama 3.2 models can run on ordinary consumer GPUs, or even CPUs, with little to no loss in performance.
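To illustrate the basic idea, here is a minimal sketch of symmetric 8-bit weight quantization. This is a generic textbook scheme for illustration only, not the specific recipe Meta applied to Llama 3.2; the point is how storing 1-byte integers plus a scale factor replaces 4-byte floats.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor 8-bit quantization: 4-byte floats -> 1-byte ints."""
    scale = np.abs(weights).max() / 127.0              # map the largest weight to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights when they are needed at inference time."""
    return q.astype(np.float32) * scale

weights = np.random.randn(4096, 4096).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8(weights)

print(f"float32: {weights.nbytes / 1e6:.1f} MB, int8: {q.nbytes / 1e6:.1f} MB")
print(f"max reconstruction error: {np.abs(dequantize(q, scale) - weights).max():.4f}")
```

Going from 8-bit to 4-bit storage halves the footprint again, at the cost of coarser precision, which is why careful quantization schemes are needed to keep accuracy close to the original model.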

Users can now run a variety of smart applications directly on their phones, such as summarizing a discussion in real time or invoking a calendar tool, thanks to these lightweight models.
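As a sketch of what "invoking a calendar tool" could look like in an on-device app, the snippet below routes a model's structured output to a local calendar function. The JSON format, function names, and the hard-coded model reply are hypothetical placeholders standing in for whatever runtime and tool-call convention an application actually uses.

```python
import json
from datetime import datetime

def add_calendar_event(title: str, start: str) -> str:
    """Hypothetical local 'calendar tool' that the app exposes to the model."""
    when = datetime.fromisoformat(start)
    return f"Added '{title}' on {when:%Y-%m-%d %H:%M}"

TOOLS = {"add_calendar_event": add_calendar_event}

def handle_model_output(raw: str) -> str:
    """Dispatch a JSON tool call emitted by the model, or return plain text."""
    try:
        call = json.loads(raw)
        fn = TOOLS[call["tool"]]
        return fn(**call["arguments"])
    except (json.JSONDecodeError, KeyError, TypeError):
        return raw  # not a tool call, treat it as a normal reply

# Stand-in for a real generation call to a quantized on-device model.
model_reply = ('{"tool": "add_calendar_event", '
               '"arguments": {"title": "Team sync", "start": "2024-11-04T15:00"}}')
print(handle_model_output(model_reply))
```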


Meta AI is also working with industry-leading partners such as Qualcomm and MediaTek to deploy these models on Arm CPU-based systems-on-chip, ensuring they can be used efficiently across a wide range of devices. Early tests show that quantized Llama 3.2 achieves 95% of the Llama 3 model's performance on major natural language processing benchmarks while reducing memory usage by nearly 60%. This is significant for enterprises and researchers looking to adopt AI without investing in costly infrastructure.
