Meta launches LLM Compiler code optimization model, which can be used with other AI to improve code generation/compilation capabilities

Two days ago, Meta released a product called "LLM Compiler", which is built on Meta's existing Code Llama and focuses on code optimization. The model has been published on Hugging Face in two versions, with 7 billion and 13 billion parameters, and is available for both academic and commercial use.


Meta believes that although the industry's major language models have demonstrated outstanding capabilities on a wide range of programming tasks, they still have room for improvement in code optimization. The newly launched LLM Compiler is a pre-trained model designed specifically for code optimization tasks: it can emulate the compiler to optimize code, or "convert optimized code back to the original language."

LLM Compiler was trained on a massive corpus of 546 billion tokens of LLVM-IR and assembly code, and is said to achieve 77% of the "code optimization potential". Developers are free to combine the model with other AI models to improve the quality of generated code.
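As a rough illustration of how a developer might try the released checkpoints, the sketch below loads one of the models from Hugging Face and prompts it to emulate compiler optimization on a small LLVM-IR function. The repo id "facebook/llm-compiler-7b" and the prompt wording are assumptions based on the article's description, not Meta's official example; consult the model card for the exact prompt format.

```python
# Minimal sketch (assumptions noted above): load an LLM Compiler checkpoint
# and ask it to optimize a small LLVM-IR function for code size.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "facebook/llm-compiler-7b"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example LLVM-IR input; the prompt text is hypothetical.
llvm_ir = """
define i32 @square(i32 %x) {
entry:
  %mul = mul nsw i32 %x, %x
  ret i32 %mul
}
"""
prompt = f"Optimize the following LLVM-IR for code size:\n{llvm_ir}"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In practice the output would itself be LLVM-IR (or assembly), which a developer could feed into a conventional toolchain or pair with a code-generation model, as the article suggests.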
