The 2024 Mobile World Congress has kicked off, and 5G and AI remain the hottest topics at this year's show. At its MWC launch event today, Qualcomm unveiled the new Qualcomm AI Hub, a central resource where developers can obtain everything they need to build AI applications on Snapdragon and other Qualcomm platforms.
Specifically, Qualcomm AI Hub provides developers with a library of fully optimized AI models, covering both traditional and generative AI, that can be deployed on Snapdragon and Qualcomm platforms. A developer selects the model the application needs and the framework used to develop the application, then specifies the target platform, such as a specific phone model or a specific Qualcomm platform. Qualcomm AI Hub then delivers a model optimized for that application and platform, which the developer can fetch and integrate into the application with just a few lines of code.
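To make that workflow concrete, here is a minimal sketch modeled on Qualcomm's publicly documented qai-hub Python client (installed with pip install qai-hub). The function names, argument names, and device identifier below follow the client's published quickstart but may differ from the current API, so treat this as illustrative rather than authoritative.

```python
import qai_hub as hub
import torch
from torchvision.models import mobilenet_v2

# Step 1: choose a model and the framework it was built in (PyTorch here),
# then trace it so it can be submitted for compilation.
torch_model = mobilenet_v2(weights="DEFAULT").eval()
input_shape = (1, 3, 224, 224)
traced_model = torch.jit.trace(torch_model, torch.rand(input_shape))

# Step 2: choose the target platform, e.g. a specific phone model.
device = hub.Device("Samsung Galaxy S23")

# Step 3: submit a compile job; AI Hub returns a model optimized for
# that device, ready to drop into an application.
compile_job = hub.submit_compile_job(
    model=traced_model,
    device=device,
    input_specs={"image": input_shape},
)

# Download the optimized model for integration into the application.
optimized_model = compile_job.get_target_model()
optimized_model.download("mobilenet_v2_optimized.tflite")
```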
Qualcomm AI Hub will support more than 75 AI models, including both traditional and generative AI models. With these optimizations, developers will be able to run AI inference up to four times faster.
Beyond the speed gains, the optimized models also consume less memory bandwidth and storage space, which translates into higher energy efficiency and longer battery life.
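Continuing the sketch above, here is how such latency and memory figures would be checked in practice: the client can submit a profile job that runs the compiled model on a real hosted device and reports measured timings and memory use. The method names are again an assumption based on the documented client.

```python
# Profile the compiled model from the sketch above on real hardware.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),  # from the compile sketch above
    device=device,
)
profile = profile_job.download_profile()  # timings and memory statistics
```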
These optimized models will be available on Qualcomm AI Hub, Hugging Face, and GitHub, allowing developers to easily integrate them into their workflows.
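For the Hugging Face route, the standard huggingface_hub client is enough to pull down a published model package. The repository id below is illustrative; the actual listings live under the "qualcomm" organization on Hugging Face.

```python
# Fetch a pre-optimized model package published on Hugging Face using
# the standard huggingface_hub client. The repo id is illustrative.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="qualcomm/MobileNet-v2")
print(f"Model assets downloaded to {local_dir}")
```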
In addition to the new AI Hub, Qualcomm demonstrated the world's first large multimodal model (LMM) running on an Android phone, one powered by the Snapdragon 8 Gen 3. In the demonstration, Qualcomm showed an LMM with more than 7 billion parameters that accepts text, voice, and image input and can hold multi-turn conversations about that input.
Qualcomm also brought another multimodal AI demonstration, this one on a Windows PC powered by the new Snapdragon X Elite platform: the world's first audio-reasoning large multimodal model running on a Windows PC. It can recognize birdsong, music, or different sounds around the home, and it can converse about what it hears to assist the user.
For example, the multimodal large language model can identify the type and style of music a user plays, offer background on the music's history along with recommendations for similar music, or adjust the music playing around the user through conversation.
These models are optimized for strong performance and energy efficiency and run entirely on device, which improves privacy, reliability, and personalization while reducing cost.
In addition, Qualcomm demonstrated its first LoRA (Low-Rank Adaptation) model running on an Android phone. LoRA adjusts or customizes what a model generates without changing the underlying model: a very small adapter, only about 2% of the model's size and therefore easy to download, is enough to customize the behavior of the entire generative AI model.
For example, in the demonstration the model created high-quality custom images tailored to different personal or artistic preferences. Qualcomm said the technique is not limited to image generation; it also applies to other kinds of generative AI models, such as large language models, making it an efficient way to deliver personalized generative AI.
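To make the adapter idea concrete, here is a minimal, generic PyTorch sketch of LoRA as described in the original paper, not Qualcomm's on-device implementation: the frozen base weight stays untouched while a small pair of low-rank matrices adds a trainable correction to its output. The layer sizes and rank below are arbitrary, chosen only to show how small the adapter is relative to the base weights.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Linear layer with a frozen base weight plus a trainable
    low-rank LoRA adapter: y = base(x) + scale * x A^T B^T."""

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen base layer: its weights are never updated.
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)
        # Trainable low-rank factors. B starts at zero so the adapter
        # initially contributes nothing and behavior starts from the
        # unmodified base model.
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank delta.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(4096, 4096, rank=8)
adapter = layer.lora_a.numel() + layer.lora_b.numel()
base = layer.base.weight.numel()
print(f"adapter size: {100 * adapter / base:.2f}% of the base weight")
# Prints ~0.39% for these sizes; the actual ratio (e.g. the ~2% figure
# cited above) depends on the rank and on which layers get adapters.
```

Because only the two small factors change, a personalized style can ship as a download of just those factors, which is what makes per-user customization of a large generative model practical on a phone.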