Mistral AI's Codestral Model Gets 25.01 Update: Support for Over 80 Programming Languages, Context Length Increased to 256,000 Tokens

Recently, Mistral AI announced version 25.01 of its Codestral programming model, emphasizing that the release delivers major improvements in context-length handling and code-completion efficiency.

Specifically, Codestral 25.01 raises the model's supported context length to 256,000 tokens, which Mistral AI claims allows it to handle large projects and complex code-generation needs effectively. In addition, the new version supports more than 80 programming languages, including major ones such as Python, Java, and JavaScript, and can generate accurate code in scenarios such as SQL and Bash. Tests show the model achieves an average accuracy of 71.4% on HumanEval across languages.

Mistral AI claims that Codestral 25.01 set several benchmark records on fill-in-the-middle (FIM) tasks, notably achieving an average Pass@1 rate of 95.3% on FIM tests, demonstrating the model's strength at generating single lines of code.
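Fill-in-the-middle means the model is given the code before and after a gap and asked to generate only the missing middle. As a rough illustration of how such a request could be assembled (the field names, default model identifier, and helper function below are assumptions for illustration, not details from Mistral's documentation or this article):

```python
import json

def build_fim_request(prefix: str, suffix: str, model: str = "codestral-latest") -> str:
    """Assemble a hypothetical fill-in-the-middle request body.

    The model sees the code before ("prompt") and after ("suffix") the gap
    and is expected to generate only the missing middle portion.
    All field names and the model identifier are illustrative assumptions.
    """
    payload = {
        "model": model,
        "prompt": prefix,   # code before the hole
        "suffix": suffix,   # code after the hole
        "max_tokens": 64,
    }
    return json.dumps(payload)

# Example: ask the model to fill in the body of a small function.
body = build_fim_request(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))\n",
)
```

In a single-line completion benchmark like the one cited above, the model would be scored on whether the generated middle (here, `a + b`) makes the surrounding program correct on the first attempt (Pass@1).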
