Stability AI releases 3 billion parameter language model StableLM Zephyr 3B, which is smaller, faster and more resource-efficient

Stability AI is best known for Stable Diffusion, its text-to-image artificial intelligence model, but that is no longer the company's entire business.

The newly released StableLM Zephyr 3B from Stability AI is a 3-billion-parameter large language model optimized for chat scenarios, including text generation, summarization, and content personalization. It is a smaller, optimized version of the StableLM text-generation model that Stability AI first released in April of this year.

The promise of StableLM Zephyr 3B is that, being smaller than the 7B StableLM model, it brings a range of benefits: it can be deployed on a wider range of hardware, consumes fewer resources, and still delivers fast responses. The model is optimized specifically for question-answering and instruction-following tasks.
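As a rough illustration of what an instruction-following chat model like this looks like in practice, the sketch below runs it through the Hugging Face transformers library using a chat template. The model ID stabilityai/stablelm-zephyr-3b and the trust_remote_code flag are assumptions based on how Stability AI typically publishes checkpoints, not details confirmed in this article.

```python
# Minimal sketch: chatting with StableLM Zephyr 3B via Hugging Face transformers.
# Assumptions (not confirmed by the article): the checkpoint is published as
# "stabilityai/stablelm-zephyr-3b" and ships with a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-zephyr-3b"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # 3B parameters fit comfortably on one consumer GPU
    device_map="auto",
    trust_remote_code=True,       # may be needed if the architecture ships custom code
)

# Build an instruction-following prompt with the model's own chat template.
messages = [{"role": "user", "content": "Summarize why smaller LLMs are easier to deploy."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```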


“StableLM was trained for longer and on higher-quality data than previous models, and it is able to match LLaMA v2 7B in basic performance despite being only 40% of its size,” said Emad Mostaque, CEO of Stability AI.

StableLM Zephyr 3B is not a completely new model but an extension of Stability AI's existing StableLM 3B-4E1T model. Its design was inspired by Hugging Face's Zephyr 7B model, which was developed under the open-source MIT license and is designed to act as an assistant. Zephyr uses a training method called Direct Preference Optimization (DPO), and StableLM Zephyr now benefits from the same method.

Mostaque explained that Direct Preference Optimization (DPO) is an alternative to the reinforcement-learning approach used in earlier models to align them with human preferences. DPO has typically been applied to larger 7-billion-parameter models; StableLM Zephyr is one of the first to use the technique at the smaller 3-billion-parameter size.
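To make the idea concrete: the core of DPO is a simple classification-style loss over preference pairs. The policy is pushed to raise the log-probability of the preferred answer relative to a frozen reference model, with no separate reward model or RL loop. The sketch below is a minimal, generic PyTorch rendering of the published DPO loss (Rafailov et al., 2023), not Stability AI's actual training code; all tensor names are illustrative.

```python
# Minimal sketch of the Direct Preference Optimization (DPO) loss.
# Not Stability AI's training code; inputs are illustrative per-sequence log-probs.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Each argument is a tensor of summed log-probabilities for a batch.

    `chosen` is the human-preferred response, `rejected` the dispreferred one;
    the `ref_*` values come from a frozen copy of the model before tuning.
    """
    # Log-ratios measure how far the policy has moved from the reference.
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp

    # Logistic loss on the margin between chosen and rejected; beta controls
    # how far the policy is allowed to drift from the reference model.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy usage with made-up log-probabilities for a batch of two preference pairs:
loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-14.0, -9.0]),
                torch.tensor([-12.5, -9.8]), torch.tensor([-13.5, -9.2]))
print(loss)
```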

Stability AI used the UltraFeedback dataset from the OpenBMB research group for the DPO stage. UltraFeedback contains over 64,000 prompts and 256,000 responses. The combination of DPO, the smaller size, and an optimized training dataset gives StableLM Zephyr strong results on the metrics Stability AI reported: in the MT Bench evaluation, for example, StableLM Zephyr 3B outperformed larger models including Meta's Llama-2-70b-chat and Anthropic's Claude-V1.
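For readers who want to inspect preference data of this kind, a dataset shaped like UltraFeedback can be pulled with the Hugging Face datasets library: each prompt comes with several rated completions, and DPO pairs the best- and worst-rated ones as chosen/rejected examples. The Hub ID openbmb/UltraFeedback and the field names below are assumptions about the public release, not details given in this article.

```python
# Hedged sketch: peeking at the UltraFeedback preference data.
# Assumptions: the dataset is published as "openbmb/UltraFeedback" and each
# record carries a prompt plus multiple rated completions (field names may differ).
from datasets import load_dataset

ds = load_dataset("openbmb/UltraFeedback", split="train")
example = ds[0]

print(example["instruction"])          # the prompt (assumed field name)
for completion in example["completions"]:
    # Each completion is assumed to carry a response text and a quality score;
    # the highest- and lowest-scored responses become the chosen/rejected pair.
    print(completion.get("response", "")[:80], completion.get("overall_score"))
```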

StableLM Zephyr 3B is one of a series of new models Stability AI has launched in recent months as the startup continues to push the boundaries of its capabilities and tools. Even as the company branches into new areas, it has not neglected its text-to-image roots: last week it released SDXL Turbo, a faster version of its flagship SDXL text-to-image Stable Diffusion model.

Mostaque also made it clear that there are more innovations to come from Stability AI. “We believe that small, open, well-performing models tuned to a user’s own data will outperform larger general-purpose models,” he said. “With the future full release of our new StableLM model, we look forward to further democratizing generative language models.”
