Tabnine Chat Allows Users to Select LLM
April 02, 2024

Tabnine announced the ability for users to select the underlying large language model (LLM) that powers its software development chat tool, Tabnine Chat.

Engineering teams can now select from a catalog of models to use the one best suited to their situation, and can switch between them at will.

“Engineering teams should have the freedom to choose the most suitable model for their use case without the need for multiple AI assistants or vendor switches. They should be able to choose based on each model’s specific performance, privacy policies, and the code it was trained on,” said Eran Yahav, co-founder and CTO of Tabnine. “To date, this has not been possible with the best-of-breed AI coding assistants — but we are changing that. Tabnine eliminates any worry about missing out on AI innovations, and makes it simple to use new models as they become available.”

Announced in June 2023 and now generally available, Tabnine Chat is the enterprise-grade, code-centric chat application that allows developers to interact with Tabnine AI models using natural language. Today’s update enables users to further adapt Tabnine Chat to the way they work and make it an integral part of the software development lifecycle (SDLC). It gives users the control and flexibility to select the LLMs that best fit their needs, allowing them to take advantage of the capabilities of newer LLMs without being locked into a specific model.

Tabnine also provides transparency into each available model’s performance, privacy, and protection characteristics — making it easier to decide which model is right for each specific use case, project, or team. Developers can now choose any of the following models within Tabnine Chat and switch between them at any time:

- Tabnine Protected: Tabnine’s original model, designed to deliver high performance without the risks of intellectual property violations or exposing your code and data to others.

- Tabnine + Mistral: Tabnine’s newest offering, built to deliver the highest class of performance while still maintaining complete privacy.

- GPT-3.5 Turbo and GPT-4 Turbo: The industry’s most popular LLMs, proven to deliver the highest levels of performance for teams willing to share their data externally.

Switchable models offer engineering teams the combined advantages of LLMs' comprehensive understanding of programming languages and software development methods, coupled with Tabnine’s extensive efforts in crafting a developer experience and AI agents tailored to fit seamlessly into the SDLC. Tabnine employs advanced proprietary techniques — including prompt engineering, contextual awareness of local and global code bases, and task-specific fine-tuning within the SDLC — to maximize the performance, relevance, and quality of LLM output. By uniquely applying these methods to each LLM, Tabnine delivers an AI coding assistant optimized for each model. Regardless of the model selected, engineering teams get the full capability of Tabnine — including code generation, code explanations, generation of documentation, AI-created tests — and Tabnine is always highly personalized to each engineering team through both local and full codebase awareness.

Furthermore, with ongoing support for popular integrated development environments (IDEs) and integrations with common development tools, Tabnine ensures compatibility within existing engineering ecosystems.
