With the news of Llama 3.1 going open source still ringing in our ears, OpenAI has announced that, effective immediately, developers can use 2 million training tokens per day for free to fine-tune its model, through September 23. This generous offer to developers is a bold boost to the advancement of AI technology.
The arrival of GPT-4o mini has excited countless developers. It ties GPT-4o for first place on the LMSYS Chatbot Arena leaderboard, delivering strong performance at only 1/20 the price of GPT-4o. This move by OpenAI is undoubtedly a major boon to the AI field.
Developers who received the announcement email eagerly spread the word, urging each other to grab such a big bargain as soon as possible. OpenAI announced that from July 23 to September 23, developers can use 2 million training tokens free of charge every day; usage beyond that is billed at US$3 per million tokens.
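To make the promotional terms concrete, here is a minimal sketch of the daily training cost under the pricing just described (2 million free training tokens per day, US$3 per million tokens beyond that). The function name is illustrative, not part of any OpenAI API:

```python
# Promotional pricing described above: 2,000,000 training tokens free per day;
# overage billed at US$3 per million tokens.
FREE_TOKENS_PER_DAY = 2_000_000
OVERAGE_USD_PER_MILLION = 3.0

def daily_training_cost(tokens_used: int) -> float:
    """Return the USD cost of one day's fine-tuning token usage."""
    billable = max(0, tokens_used - FREE_TOKENS_PER_DAY)
    return billable / 1_000_000 * OVERAGE_USD_PER_MILLION

print(daily_training_cost(1_500_000))  # fully covered by the free allowance -> 0.0
print(daily_training_cost(3_000_000))  # 1M tokens over the allowance -> 3.0
```

In other words, a developer who stays under 2 million training tokens per day pays nothing during the promotional window.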
More affordable: GPT-4o mini’s input token fee is 90% lower than GPT-3.5 Turbo’s, and its output token fee is 80% lower. Even after the free period ends, GPT-4o mini’s training cost is half that of GPT-3.5 Turbo.
Longer context: GPT-4o mini’s training context length is 65k tokens, 4 times that of GPT-3.5 Turbo, and its inference context length is 128k tokens, 8 times that of GPT-3.5 Turbo.
Smarter and more capable: GPT-4o mini is smarter than GPT-3.5 Turbo and supports vision (although fine-tuning is currently limited to text).
GPT-4o mini fine-tuning is open to enterprise customers and to Tier 4 and Tier 5 developers, with access gradually expanding to all usage tiers over time. OpenAI has released a fine-tuning guide to help developers get started quickly.
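The fine-tuning guide expects training data as a JSONL file in which each line is a JSON object with a "messages" list of chat turns. A minimal sketch of preparing such a file follows; the file name and example contents are illustrative placeholders:

```python
import json

# One JSON object per line, each with a "messages" list of
# system/user/assistant turns, per OpenAI's chat fine-tuning format.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris."},
    ]},
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")

# Basic sanity check: every line parses, and each example ends with an
# assistant turn, since fine-tuning learns to reproduce assistant responses.
with open("training_data.jsonl", encoding="utf-8") as f:
    for line in f:
        msgs = json.loads(line)["messages"]
        assert msgs and msgs[-1]["role"] == "assistant"
```

Once the file is uploaded through the Files API, a job can be created with the official Python SDK via `client.fine_tuning.jobs.create(...)`, passing the uploaded file ID and a GPT-4o mini model identifier; consult the linked guide for the exact model names and parameters.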
Some netizens are not so optimistic, believing this is simply OpenAI’s way of collecting data to train and improve its AI models. Others believe the success of GPT-4o mini is substantial evidence that AI has become smart enough to even fool us.
The release of GPT-4o mini and the free fine-tuning policy will undoubtedly promote the further development and popularization of AI technology. For developers, this is a once-in-a-lifetime opportunity to build more powerful applications at a lower cost. And for AI technology itself, does this mean a new milestone? Let us wait and see.
Fine-tuning documentation: https://platform.openai.com/docs/guides/fine-tuning/fine-tuning-integrations