JetBrains released a new AI coding assistant on Wednesday local time. The assistant takes information from the developer's integrated development environment (IDE) and feeds it to AI software to provide coding suggestions, code refactoring, and documentation support. The development tool company claims its AI assistant is the first vendor-neutral product of its kind because it uses multiple large language models (LLMs) rather than relying on a single AI platform.
“We call ourselves neutral because you’re not locked into a large language model,” said Jodie Burchell, data science developer advocate at JetBrains. “It also means you’re not locked into a large language model that might become outdated and might not be able to keep up.”
The foundation of the platform includes LLMs from OpenAI and Google, as well as some smaller models created by JetBrains itself. Different models are used for different tasks. Code generation, for example, might be straightforward and therefore easier to solve than a question like, “Why did this bug happen?”, which requires the more nuanced language understanding that larger models provide, Burchell said. JetBrains’ AI service automatically handles which model gets which query. By using this service architecture, JetBrains can connect new models without requiring users to change AI providers.
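The idea of routing each query to a model sized for the task can be illustrated with a minimal sketch. JetBrains has not published its routing logic, so the model names and the simple task-type heuristic below are illustrative assumptions only:

```python
# Hypothetical sketch of task-based model routing, as described above.
# Model names and the routing heuristic are assumptions for illustration,
# not JetBrains' actual implementation.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    provider: str

SMALL_MODEL = Model("jetbrains-small", "JetBrains")  # cheap, fast
LARGE_MODEL = Model("gpt-4", "OpenAI")               # nuanced reasoning

def route(task_type: str) -> Model:
    """Pick a model by task complexity. Because the service owns this
    decision, new models can be swapped in without users changing providers."""
    simple_tasks = {"code_completion", "comment_generation"}
    return SMALL_MODEL if task_type in simple_tasks else LARGE_MODEL

print(route("code_completion").name)  # jetbrains-small
print(route("explain_bug").name)      # gpt-4
```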
“If you have a simpler task, you can use a smaller [model], maybe one of our internal large language models,” Burchell said. “If you need to do something really complex, you’re going to have to use GPT.”
The third-party LLM companies don’t store the data, although the information is shared with them to create prompts for the underlying large language models, she said.
“Our agreement with the vendors is that they cannot store the data. They cannot use it for anything other than generating model outputs and reviewing it — so it’s basically a security check,” Burchell said. “We would never work with third-party vendors whose agreements let them use the data for anything other than that.”
Developers aren’t particularly keen on handing over all their coding to AI assistants, a recent JetBrains survey found. Only 17% of respondents said they would be willing to delegate code creation to an AI assistant. When it comes to helping out, however, 56% of respondents said they would let an AI assistant write code comments and documentation if given the chance.
So far, the service is only available to paying customers because of the high cost of using the large language models under the hood, Burchell said. The plan is to roll it out to other products, but for now the AI assistant is available as a subscription add-on for most JetBrains IDEs, including IntelliJ IDEA, PyCharm, PhpStorm, and ReSharper. Exceptions are the community-licensed IDEs and users with educational, startup, or open source licenses.
One thing JetBrains and other AI assistant vendors cannot currently offer is an on-premises version of such a solution, relying on internal models and running on a company’s local servers. Some companies are interested in this approach because of its security advantages.
“Obviously, that’s going to give you the most in terms of security,” Burchell said. “It’s a trade-off, but it’s a possibility we are actively exploring.”
The reason serving models locally is difficult is simply that these AI assistants run on very large neural networks.
“What that means is that building a model is obviously one thing — [it] takes a long time — but actually running the model and getting answers in real time is a major engineering problem,” she explained.
"The model has to take in a lot of context, such as a piece of code that's run through the model to predict what comes next," she explained. "It then runs the new prediction along with the previous information through the model to predict the next part of the sequence. It does this continuously, in real time, while doing all the network and security checks."
“Behind the scenes, what needs to happen from an engineering perspective is pretty amazing,” Burchell said. “It requires GPUs, and it requires using GPUs in a cost-effective way. So if you have enough compute power, it’s not necessarily an insurmountable problem. But being able to do this in a way that is not overly expensive is difficult.”
An interesting counterweight to this problem is distillation. As a model evolves, the neural network needed to run it can get smaller over time because it is able to provide the same performance with fewer parameters, she said.
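Knowledge distillation in the generic sense works by training a small “student” network to match the output distribution of a large “teacher.” The sketch below shows the standard distillation loss; it describes the general technique, not JetBrains’ internal models:

```python
import numpy as np

# Generic knowledge-distillation sketch (standard technique, not
# JetBrains-specific): a small "student" is trained to match the
# softened output distribution of a large "teacher".
def softmax(logits, temperature=1.0):
    z = np.asarray(logits) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between softened teacher and student distributions.
    The student minimizes this to mimic the teacher's predictions."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))

teacher = [4.0, 1.0, 0.5]
print(distillation_loss(teacher, [3.5, 1.2, 0.4]))  # small: distributions match
print(distillation_loss(teacher, [0.1, 3.0, 2.0]))  # large: distributions differ
```

Once trained, the smaller student can serve predictions with far less compute, which is why distillation eases the GPU cost problem described above.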
Vendors are actively working to address these LLM serving challenges, and that is one reason JetBrains partners with third-party vendors: doing so helps keep costs down and enables JetBrains to offer an affordable product, Burchell said. The company is also exploring open source alternatives.
Some developers have expressed concerns about AI systems being bundled into their IDEs, Burchell said. To accommodate them, JetBrains has introduced the ability to disable the AI features.