For cloud-native organizations, the cost of running Kubernetes at scale with a large number of users quickly becomes untenable. Monitoring costs, either through public cloud providers or with external tools such as Kubecost, is the first step to identifying the biggest cost drivers and areas for improvement. Setting sensible resource limits with Resource Quotas and Limit Ranges, and enabling horizontal and vertical autoscaling, can also help reduce costs and inform an optimization strategy.
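As a rough illustration of that first step, the sketch below uses the official Kubernetes Python client to apply a ResourceQuota and a LimitRange to a tenant namespace; the namespace name and the specific CPU and memory values are hypothetical placeholders to adapt to your environment.

```python
# Rough sketch: cap a tenant namespace with a ResourceQuota and give containers
# default requests/limits with a LimitRange. Assumes the official `kubernetes`
# Python client and a hypothetical "team-a" namespace; adjust values to fit.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

namespace = "team-a"  # hypothetical tenant namespace

# Cap the total CPU and memory the namespace can request and consume.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "10",
            "requests.memory": "20Gi",
            "limits.cpu": "20",
            "limits.memory": "40Gi",
        }
    ),
)
core.create_namespaced_resource_quota(namespace=namespace, body=quota)

# Give every container sane defaults so unbounded pods cannot hog a node.
limits = client.V1LimitRange(
    metadata=client.V1ObjectMeta(name="team-a-defaults"),
    spec=client.V1LimitRangeSpec(
        limits=[
            client.V1LimitRangeItem(
                type="Container",
                default_request={"cpu": "100m", "memory": "128Mi"},
                default={"cpu": "500m", "memory": "512Mi"},
            )
        ]
    ),
)
core.create_namespaced_limit_range(namespace=namespace, body=limits)
```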
However, these traditional methods are not enough given today's complex distributed systems, with many organizations spinning up huge numbers of underutilized clusters. To truly reduce Kubernetes costs and simplify management in the long term, teams should consider a new approach: multi-tenancy with virtual Kubernetes clusters.
Reducing the Number of Clusters
Implementing multi-tenancy helps cut costs because the Kubernetes control plane and compute resources can be shared by several users or applications, which also reduces the management burden. Many organizations deploy far more clusters than they need, sometimes one per developer, and stand to save significantly by moving to a multi-tenant architecture.
Reducing the number of clusters improves resource utilization and reduces redundancy, as API servers, etcd instances, and other control plane components are no longer duplicated unnecessarily but shared by workloads in the same cluster. Multi-tenancy also reduces the cluster management fees charged by public cloud providers. When running many small clusters, management fees of roughly $70 per month per cluster quickly add up.
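To put rough numbers on that, here is a back-of-the-envelope comparison assuming the roughly $0.10 per cluster-hour management fee that major managed Kubernetes services charge; exact rates vary by provider and tier.

```python
# Back-of-the-envelope control plane fee comparison. Assumes the common managed
# Kubernetes rate of roughly $0.10 per cluster-hour (about $73/month); actual
# rates vary by provider and tier.
HOURLY_FEE = 0.10        # USD per cluster per hour
HOURS_PER_MONTH = 730

def monthly_management_fee(clusters: int) -> float:
    return clusters * HOURLY_FEE * HOURS_PER_MONTH

many_small = monthly_management_fee(25)  # one cluster per team
single_host = monthly_management_fee(1)  # one host cluster running virtual clusters
print(f"25 clusters: ${many_small:,.0f}/mo vs. 1 host cluster: ${single_host:,.0f}/mo")
# 25 clusters: $1,825/mo vs. 1 host cluster: $73/mo
```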
In traditional multi-tenant architectures, engineers might receive self-service namespaces on a shared cluster. Because namespaces offer limited functionality and weak isolation from one another, opting for virtual clusters instead preserves all the benefits of "real" clusters in a more efficient, more secure multi-tenant setup. Virtual clusters are fully functional Kubernetes clusters running within an underlying host cluster. Unlike namespaces, virtual clusters have their own Kubernetes control planes and storage backends. Only core resources like pods and services are shared with the physical cluster, while everything else, such as statefulsets, deployments, and webhooks, exists only in the virtual cluster.
Virtual clusters thus solve the "noisy neighbor" problem: they provide better workload isolation than namespaces, and developers can configure their virtual clusters independently, tailoring them to their specific requirements. Because configuration changes and new installations happen inside the virtual clusters themselves, the underlying host cluster can stay simple, with only the basic components, which improves stability and reduces the chance of errors. While virtual clusters may not completely replace the need for separate regular clusters, implementing multi-tenancy with virtual clusters makes it possible to greatly reduce the number of real clusters needed to operate at scale.
The Case for Virtual Clusters to Reduce Cost
Virtual clusters are an exciting new alternative to both namespaces and separate clusters: cheaper and easier to deploy than regular clusters, with much better isolation than namespaces. Crucially, shifting to virtual clusters is a simple process that in most cases will not disrupt development workflows. For example, a large organization with developers distributed across 25 teams might provision 25 separate Kubernetes clusters for development and testing. To switch to virtual clusters, it would instead create a single Kubernetes cluster and deploy 25 virtual clusters within it. From the developers' viewpoint, nothing changes: teams can use all the necessary services within their virtual clusters, deploying their own resources like Prometheus and Istio without affecting the host cluster.
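A minimal sketch of that switch, assuming the open source vcluster CLI and a kubeconfig already pointing at the shared host cluster; the team names, namespace prefix, and --connect=false flag are illustrative and may need adjusting for your vcluster version:

```python
# Sketch: provision 25 virtual clusters on one shared host cluster instead of
# 25 standalone clusters. Assumes the open source vcluster CLI is installed and
# the current kubeconfig context points at the host cluster; team names and the
# --connect=false flag are illustrative and may differ by vcluster version.
import subprocess

teams = [f"team-{i:02d}" for i in range(1, 26)]  # hypothetical team names

for team in teams:
    namespace = f"vcluster-{team}"
    # Each virtual cluster lives in its own namespace on the shared host.
    subprocess.run(
        ["vcluster", "create", team, "--namespace", namespace, "--connect=false"],
        check=True,
    )
    print(f"Provisioned virtual cluster '{team}' in namespace '{namespace}'")

# A developer on team-01 would then work against it with: vcluster connect team-01
```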
Further, since virtual clusters and their workloads are themselves pods in the host cluster, teams can take full advantage of the Kubernetes scheduler. If a team is not using a virtual cluster for a period of time, no pods sit in the host cluster consuming resources, and the improved node utilization drives down costs. Automating the process of scaling down unused resources can also eliminate the costs created by idle virtual clusters. This "sleep mode" means the environment is stored and can be spun up quickly once a developer needs it again. Developers can implement a sleep mode via scripts or with tools that have the functionality built in.
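A simple version of such a script might look like the following, using the official Kubernetes Python client to scale an idle tenant's Deployments and StatefulSets to zero; the namespace name is hypothetical, and a production version would record the original replica counts before scaling down.

```python
# Sketch of a "sleep mode" script: scale every Deployment and StatefulSet in an
# idle tenant's namespace to zero replicas, and back up when the team returns.
# Assumes the official `kubernetes` Python client; the namespace is hypothetical,
# and a production version would record the original replica counts first.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def set_namespace_replicas(namespace: str, replicas: int) -> None:
    scale_patch = {"spec": {"replicas": replicas}}
    for deploy in apps.list_namespaced_deployment(namespace).items:
        apps.patch_namespaced_deployment_scale(
            deploy.metadata.name, namespace, scale_patch
        )
    for sts in apps.list_namespaced_stateful_set(namespace).items:
        apps.patch_namespaced_stateful_set_scale(
            sts.metadata.name, namespace, scale_patch
        )

set_namespace_replicas("vcluster-team-01", 0)  # put the idle environment to sleep
# set_namespace_replicas("vcluster-team-01", 1)  # wake it up again later
```

Some virtual cluster tooling also ships pause and resume commands that automate the same pattern, so a custom script is not always necessary.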
Another key benefit is that infrastructure teams can centralize services like ingress controllers, service meshes, and logging tools, installing them just once in the host cluster and letting all virtual clusters share them. When organizations trust their tenants, such as internal teams, CI/CD pipelines, and even select customers, replacing underutilized clusters with virtual ones can significantly cut infrastructure and operational costs.
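For instance, a shared ingress controller only needs to be installed once on the host cluster. The sketch below assumes Helm and the upstream ingress-nginx chart; any centrally managed controller would serve the same purpose.

```python
# Sketch: install a shared ingress controller once on the host cluster so that
# workloads from every virtual cluster can be exposed through it. Assumes Helm
# and the upstream ingress-nginx chart; swap in whatever controller your
# platform standardizes on.
import subprocess

subprocess.run(
    ["helm", "repo", "add", "ingress-nginx",
     "https://kubernetes.github.io/ingress-nginx"],
    check=True,
)
subprocess.run(
    ["helm", "install", "ingress-nginx", "ingress-nginx/ingress-nginx",
     "--namespace", "ingress-nginx", "--create-namespace"],
    check=True,
)
```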
Future-Proofing Systems with Virtual Cluster Multi-Tenancy
Traditional Kubernetes cost management techniques, like autoscaling and monitoring tools, are a good first step toward reining in runaway cloud spend tied to Kubernetes. But as companies rush to deploy artificial intelligence workloads, the associated complexity and resource demands will quickly render typical Kubernetes setups unmanageable and prohibitively expensive. Making the shift to virtual clusters now provides the same levels of security and functionality while drastically reducing the operational and financial burden, since organizations will need far fewer clusters. A virtualized, multi-tenant Kubernetes architecture is well-positioned to scale to the demands of modern applications.