Progress announced the Q4 2024 release of its award-winning Progress® Telerik® and Progress® Kendo UI® component libraries.
Tabnine announced the ability for users to select the underlying large language model (LLM) that powers its software development chat tool, Tabnine Chat.
Engineering teams can now select from a catalog of models to use the one best suited to their situation, and can switch between them at will.
“Engineering teams should have the freedom to choose the most suitable model for their use case without the need for multiple AI assistants or vendor switches. They should be able to choose based on each model’s specific performance, privacy policies, and the code it was trained on,” said Eran Yahav, co-founder and CTO of Tabnine. “To date, this has not been possible with the best-of-breed AI coding assistants — but we are changing that. Tabnine eliminates any worry about missing out on AI innovations, and makes it simple to use new models as they become available.”
Announced in June 2023 and now generally available, Tabnine Chat is the enterprise-grade, code-centric chat application that allows developers to interact with Tabnine AI models using natural language. Today’s update enables users to further adapt Tabnine Chat to the way they work and make it an integral part of the software development lifecycle (SDLC). It gives users the control and flexibility to select the LLMs that best fit their needs and lets them take advantage of the capabilities of newer LLMs without getting locked into a specific model.
Tabnine also provides transparency into the performance, privacy, and protection characteristics of each available model, making it easier to decide which model is right for each specific use case, project, or team. Developers can now choose any of the following models within Tabnine Chat and switch at any time:
- Tabnine Protected: Tabnine’s original model, designed to deliver high performance without the risks of intellectual property violations or exposing your code and data to others.
- Tabnine + Mistral: Tabnine’s newest offering, built to deliver the highest class of performance while still maintaining complete privacy.
- GPT-3.5 Turbo and GPT-4 Turbo: The industry’s most popular LLMs, proven to deliver the highest levels of performance for teams willing to share their data externally.
Switchable models give engineering teams the combined advantages of each LLM’s comprehensive understanding of programming languages and software development methods and of Tabnine’s extensive work in crafting a developer experience and AI agents tailored to fit seamlessly into the SDLC. Tabnine employs advanced proprietary techniques, including prompt engineering, contextual awareness of local and global codebases, and task-specific fine-tuning within the SDLC, to maximize the performance, relevance, and quality of LLM output. By applying these methods to each LLM individually, Tabnine delivers an AI coding assistant optimized for whichever model is selected. Regardless of the model chosen, engineering teams get the full capability of Tabnine, including code generation, code explanations, documentation generation, and AI-created tests, and Tabnine remains highly personalized to each engineering team through both local and full codebase awareness.
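As a rough illustration of this kind of model-switchable design, the sketch below shows how a chat layer might apply the same context-enrichment step to every request and then route the enriched prompt to whichever backend a team has selected. All names here (ChatRouter, ModelBackend, build_prompt) and the stubbed backends are hypothetical and do not reflect Tabnine’s actual API or implementation.

```python
# Hypothetical sketch of a model-switchable chat layer. NOT Tabnine's actual
# API or implementation; all names and the stub backends are illustrative only.
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Dict, List


class ModelBackend(Enum):
    """The switchable backends mentioned in the announcement."""
    TABNINE_PROTECTED = "tabnine-protected"
    TABNINE_MISTRAL = "tabnine-mistral"
    GPT_35_TURBO = "gpt-3.5-turbo"
    GPT_4_TURBO = "gpt-4-turbo"


@dataclass
class CodeContext:
    """Local codebase context attached to every request."""
    open_file_snippet: str
    related_symbols: List[str]


def build_prompt(question: str, ctx: CodeContext) -> str:
    # The same prompt-engineering/context layer is applied regardless of
    # which backend the team has selected.
    related = ", ".join(ctx.related_symbols) or "none"
    return (
        "You are a code-centric assistant.\n"
        f"Relevant symbols: {related}\n"
        f"Current file excerpt:\n{ctx.open_file_snippet}\n"
        f"Question: {question}"
    )


class ChatRouter:
    """Routes an enriched prompt to whichever backend is currently selected."""

    def __init__(self, backends: Dict[ModelBackend, Callable[[str], str]]):
        self._backends = backends
        self._selected = ModelBackend.TABNINE_PROTECTED

    def switch_model(self, backend: ModelBackend) -> None:
        # Teams can switch at any time; later requests use the new model.
        self._selected = backend

    def ask(self, question: str, ctx: CodeContext) -> str:
        prompt = build_prompt(question, ctx)
        return self._backends[self._selected](prompt)


if __name__ == "__main__":
    # Stub callables stand in for the real model endpoints.
    router = ChatRouter({
        backend: (lambda prompt, name=backend.value:
                  f"[{name}] answer to: {prompt.splitlines()[-1]}")
        for backend in ModelBackend
    })
    ctx = CodeContext(open_file_snippet="def add(a, b): return a + b",
                      related_symbols=["add"])
    print(router.ask("Write a unit test for add().", ctx))
    router.switch_model(ModelBackend.GPT_4_TURBO)
    print(router.ask("Explain what add() does.", ctx))
```

The point of the pattern is that switching models changes only the routing target, while the prompt engineering and codebase context stay constant for every backend.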
Furthermore, with ongoing support for popular integrated development environments (IDEs) and integrations with common development tools, Tabnine ensures compatibility within existing engineering ecosystems.
Industry News
Check Point® Software Technologies Ltd. has been recognized as a Leader and Fast Mover in the latest GigaOm Radar Report for Cloud-Native Application Protection Platforms (CNAPPs).
Spectro Cloud, provider of the award-winning Palette Edge™ Kubernetes management platform, announced a new integrated edge-in-a-box solution featuring the Hewlett Packard Enterprise (HPE) ProLiant DL145 Gen11 server to help organizations deploy, secure, and manage demanding applications across diverse edge locations.
Red Hat announced the availability of Red Hat JBoss Enterprise Application Platform (JBoss EAP) 8 on Microsoft Azure.
Launchable by CloudBees is now available on AWS Marketplace, a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on Amazon Web Services (AWS).
Kong closed $175 million in up-round Series E financing, with a mix of primary and secondary transactions at a $2 billion valuation.
Tricentis announced that GTCR, a private equity firm, has signed a definitive agreement to invest $1.33 billion in the company, valuing the enterprise at $4.5 billion and further fueling Tricentis for future growth and innovation.
Check Point® Software Technologies Ltd. announced the new Check Point Quantum Firewall Software R82 (R82) and additional innovations for the Infinity Platform.
Sonatype and OpenText are partnering to offer a single integrated solution that combines open-source and custom code security, making finding and fixing vulnerabilities faster than ever.
Red Hat announced an extended collaboration with Microsoft to streamline and scale artificial intelligence (AI) and generative AI (gen AI) deployments in the cloud.
Endor Labs announced that Microsoft has natively integrated its advanced software composition analysis (SCA) capabilities within Microsoft Defender for Cloud, a Cloud-Native Application Protection Platform (CNAPP).
Progress announced new powerful capabilities and enhancements in the latest release of Progress® Sitefinity®.
Red Hat announced the general availability of Red Hat Enterprise Linux 9.5, the latest version of the enterprise Linux platform.
Securiti announced a new solution: Security for AI Copilots in SaaS apps.
Spectro Cloud completed a $75 million Series C funding round led by Growth Equity at Goldman Sachs Alternatives with participation from existing Spectro Cloud investors.