Google releases Japanese-language version of its Gemma AI model: just 2 billion parameters, small enough to run on mobile devices!

At the recently held Gemma Developer Day in Tokyo, Google officially launched a new Japanese-language version of its Gemma AI model. The model's performance is comparable to GPT-3.5, yet it has only 2 billion parameters, making it small enough to run on mobile devices.


The Gemma models in this release excel at Japanese processing while retaining their English capabilities. This matters especially for small models, which are prone to "catastrophic forgetting" when fine-tuned on a new language: newly learned knowledge overwrites previously learned information. Gemma overcame this challenge, demonstrating strong processing capabilities in both languages.

What's more, Google has immediately released the model's weights, training materials, and examples through platforms such as Kaggle and Hugging Face to help developers get started faster. This means developers can easily run the model locally, which opens up more possibilities, especially in edge-computing applications.

To encourage more international developers, Google has also launched a contest called "Unlocking Global Communication with Gemma" with $150,000 in prizes. This program is designed to help developers adapt Gemma models to local languages. Currently, there are already projects underway in Arabic, Vietnamese and Zulu. In India, developers are working on the "Navarasa" project, which plans to optimize the model to support 12 Indian languages, while another team is working on fine-tuning it to support Korean dialects.

The Gemma 2 family of models was introduced to achieve higher performance with fewer parameters. Compared with similar models from other companies such as Meta, Gemma 2 performs on par, and in some cases the 2-billion-parameter Gemma 2 is able to outperform models with 70 billion parameters, such as LLaMA-2.

Developers and researchers can access Gemma-2-2B models and other Gemma models through free programs at Hugging Face, Google AI Studio, and Google Colab, in addition to finding them in the Vertex AI Model Garden.
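As a minimal sketch of what local use could look like: instruction-tuned Gemma checkpoints expect a turn-based chat format delimited by `<start_of_turn>`/`<end_of_turn>` control tokens. The snippet below builds such a prompt for a Japanese question; the commented-out generation step assumes the `transformers` library and access to the gated `google/gemma-2-2b-it` weights on Hugging Face (an illustrative setup, not prescribed by the article).

```python
# Sketch: preparing a prompt for an instruction-tuned Gemma checkpoint.

def format_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's turn-based chat format.

    Instruction-tuned Gemma models mark conversation turns with the
    <start_of_turn> and <end_of_turn> control tokens, ending with an
    open "model" turn for the model to complete.
    """
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("日本の首都はどこですか？")
print(prompt)

# With the weights downloaded, generation would look roughly like
# (assumes `transformers` is installed and the gated checkpoint is accessible):
#   from transformers import pipeline
#   generator = pipeline("text-generation", model="google/gemma-2-2b-it")
#   print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```

In practice, `tokenizer.apply_chat_template` in `transformers` produces this same format automatically; the manual version above just makes the structure visible.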

Official Portal: https://aistudio.google.com/app/prompts/new_chat?model=gemma-2-2b-it

Hugging Face: https://huggingface.co/google

Google Colab: https://ai.google.dev/gemma/docs/keras_inference?hl=de
