iFLYTEK announced that the iFlytek Spark API has officially released its long-context version, the Spark Pro-128K large model, priced from as low as 0.21 yuan per 10,000 tokens.
A conversation between a user and a large model is typically held in what amounts to short-term memory: once the conversation grows beyond the model's context window, the earliest content may be forgotten by the model.
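As a conceptual illustration of this "forgetting" (not iFlytek's actual mechanism), the sketch below drops the oldest messages once a token budget is exceeded; the token count is a crude word-count stand-in, whereas real APIs use proper tokenizers:

```python
# Conceptual sketch: conversation history beyond a model's context
# window gets dropped. Token counting is approximated by word count.

def truncate_history(messages, max_tokens):
    """Keep the most recent messages that fit within max_tokens."""
    kept = []
    total = 0
    # Walk backwards from the newest message toward the oldest.
    for msg in reversed(messages):
        cost = len(msg.split())  # crude token estimate
        if total + cost > max_tokens:
            break  # everything older than this point is "forgotten"
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = [
    "user: my name is Li Lei",
    "assistant: nice to meet you, Li Lei",
    "user: summarize this report for me",
]
print(truncate_history(history, max_tokens=10))
```

A 128K-token context simply raises `max_tokens` far enough that long documents and extended conversations fit without this truncation.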
Unlike traditional text-processing models, long-text models offer more accurate text understanding and generation and stronger cross-domain transfer. Because they can take in and produce more information at once, they suit tasks such as complex conversations, long-form content creation, and detailed data analysis, extending the range of problems a model can solve.
On June 27, iFlytek Spark V4.0 was released with fully upgraded long-text capabilities. It also introduced the industry's first content-traceability feature to address hallucination in question answering over long documents: after Spark answers a question, it explains why it answered that way and which passages it referenced. Users who lack time to read the full text and are concerned about an answer's credibility need only verify its cited sources.
Now Spark Pro-128K, the Spark model supporting the longest context, is open to developers via API at 0.21~0.30 yuan per 10,000 tokens, and individual users can claim 2 million tokens of free usage.
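Using the prices quoted above, a back-of-the-envelope cost estimate is straightforward. The sketch below is illustrative only; in particular, the assumption that the free quota is deducted before billing is mine, not iFlytek's documented billing logic:

```python
# Estimate API cost from the figures in the article:
# 0.21-0.30 yuan per 10,000 tokens, 2,000,000 free tokens for
# individual users. Free-quota deduction order is an assumption.

FREE_TOKENS = 2_000_000
PRICE_PER_10K = (0.21, 0.30)  # yuan; lower and upper bound of the range

def estimate_cost(total_tokens, free_remaining=FREE_TOKENS):
    """Return (low, high) estimated cost in yuan after the free quota."""
    billable = max(0, total_tokens - free_remaining)
    return tuple(round(p * billable / 10_000, 2) for p in PRICE_PER_10K)

# 5M tokens leaves 3M billable: 300 * 0.21 to 300 * 0.30 yuan.
print(estimate_cost(5_000_000))
```

So consuming 5 million tokens would cost roughly 63 to 90 yuan at these rates, while usage within the 2-million-token quota would be free.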