Tabnine Chat Allows Users to Select LLM
April 02, 2024

Tabnine announced the ability for users to select the underlying large language model (LLM) that powers its software development chat tool, Tabnine Chat.

Engineering teams can now select from a catalog of models, choosing the one best suited to their situation, and can switch between them at will.

“Engineering teams should have the freedom to choose the most suitable model for their use case without the need for multiple AI assistants or vendor switches. They should be able to choose based on each model’s specific performance, privacy policies, and the code it was trained on,” said Eran Yahav, co-founder and CTO of Tabnine. “To date, this has not been possible with the best-of-breed AI coding assistants — but we are changing that. Tabnine eliminates any worry about missing out on AI innovations, and makes it simple to use new models as they become available.”

Announced in June 2023 and now generally available, Tabnine Chat is the enterprise-grade, code-centric chat application that allows developers to interact with Tabnine AI models using natural language. Today’s update enables users to further adapt Tabnine Chat to the way they work and make it an integral part of the software development lifecycle (SDLC). It gives users the control and flexibility to select the LLM that best fits their needs and lets them take advantage of the capabilities of newer LLMs without being locked into a specific model.

Tabnine also provides transparency into the performance, privacy, and protection characteristics of each available model, making it easier to decide which model is right for each specific use case, project, or team. Developers can now choose any of the following models in Tabnine Chat and switch at any time:

- Tabnine Protected: Tabnine’s original model, designed to deliver high performance without the risks of intellectual property violations or exposing your code and data to others.

- Tabnine + Mistral: Tabnine’s newest offering, built to deliver the highest class of performance while still maintaining complete privacy.

- GPT-3.5 Turbo and GPT-4 Turbo: The industry’s most popular LLMs, proven to deliver the highest levels of performance for teams willing to share their data externally.
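To illustrate the idea of a switchable model catalog, the sketch below shows one minimal way such an abstraction could look. This is a hypothetical illustration, not Tabnine's actual API: the model names mirror the catalog above, and the handlers are stubs standing in for real provider calls.

```python
from dataclasses import dataclass
from typing import Callable, Dict

def _stub(model_name: str) -> Callable[[str], str]:
    """Stand-in for a real provider client; labels replies with the model used."""
    return lambda prompt: f"[{model_name}] response to: {prompt}"

# Hypothetical catalog mirroring the options described above.
MODEL_CATALOG: Dict[str, Callable[[str], str]] = {
    "tabnine-protected": _stub("tabnine-protected"),
    "tabnine-mistral": _stub("tabnine-mistral"),
    "gpt-3.5-turbo": _stub("gpt-3.5-turbo"),
    "gpt-4-turbo": _stub("gpt-4-turbo"),
}

@dataclass
class ChatSession:
    """A chat session whose backing model can be switched at any time."""
    model: str = "tabnine-protected"

    def switch_model(self, model: str) -> None:
        # Reject models not in the catalog rather than failing later.
        if model not in MODEL_CATALOG:
            raise ValueError(f"Unknown model: {model}")
        self.model = model

    def ask(self, prompt: str) -> str:
        # Dispatch to whichever backend is currently selected.
        return MODEL_CATALOG[self.model](prompt)
```

Because the session only holds a catalog key, switching models mid-conversation is a one-line change rather than a vendor migration.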

Switchable models offer engineering teams the combined advantages of LLMs' comprehensive understanding of programming languages and software development methods, coupled with Tabnine’s extensive efforts in crafting a developer experience and AI agents tailored to fit seamlessly into the SDLC. Tabnine employs advanced proprietary techniques — including prompt engineering, contextual awareness of local and global code bases, and task-specific fine-tuning within the SDLC — to maximize the performance, relevance, and quality of LLM output. By uniquely applying these methods to each LLM, Tabnine delivers an AI coding assistant optimized for each model. Regardless of the model selected, engineering teams get the full capability of Tabnine — including code generation, code explanations, generation of documentation, AI-created tests — and Tabnine is always highly personalized to each engineering team through both local and full codebase awareness.
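The contextual-awareness technique mentioned above generally amounts to assembling the user's request together with local and workspace code context into a single model-agnostic prompt. The function below is a simplified sketch of that idea under assumed section names and a character budget; it does not reflect Tabnine's proprietary implementation.

```python
def build_prompt(user_query: str, current_file: str,
                 workspace_snippets: list[str], max_chars: int = 4000) -> str:
    """Assemble a model-agnostic prompt from the query plus code context.

    Sections are ordered by priority so that trimming from the end
    preserves the user request first, then local file context.
    (Illustrative sketch; section names and budget are assumptions.)
    """
    prompt = "\n\n".join([
        "### User request\n" + user_query,
        "### Current file\n" + current_file,
        "### Related workspace code\n" + "\n\n".join(workspace_snippets),
    ])
    # Trim from the end so the highest-priority section survives.
    return prompt[:max_chars]
```

A real system would count tokens rather than characters and rank snippets by relevance, but the ordering-then-truncation pattern is the core of fitting codebase context into a fixed model window.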

Furthermore, with ongoing support for popular integrated development environments (IDEs) and integrations with common development tools, Tabnine ensures compatibility within existing engineering ecosystems.
