Optimizing Kubernetes Costs with Multi-Tenancy and Virtual Clusters
October 16, 2024

Cliff Malmborg
Loft Labs

The cost of running Kubernetes at scale with a large number of users quickly becomes untenable for cloud-native organizations. Monitoring costs, either via public cloud providers or with external tools such as Kubecost, is the first step to identifying important cost drivers and areas of improvement. Setting efficient resource limits with Resource Quotas and Limit Ranges, and enabling horizontal and vertical autoscaling, can also help reduce costs and inform optimization strategy.
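
As a minimal illustration of those guardrails, the sketch below uses the official Kubernetes Python client to apply a ResourceQuota and a LimitRange to a team namespace; the namespace name and resource values are illustrative, not recommendations.

```python
# Minimal sketch: apply a ResourceQuota and a LimitRange to a team namespace
# using the official Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster
core = client.CoreV1Api()
namespace = "team-a"  # hypothetical namespace

# Cap the total CPU/memory the namespace can request and consume.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota"),
    spec=client.V1ResourceQuotaSpec(hard={
        "requests.cpu": "4", "requests.memory": "8Gi",
        "limits.cpu": "8", "limits.memory": "16Gi",
    }),
)
core.create_namespaced_resource_quota(namespace, quota)

# Give containers default requests/limits so unbounded pods don't hog a node.
limit_range = client.V1LimitRange(
    metadata=client.V1ObjectMeta(name="team-a-limits"),
    spec=client.V1LimitRangeSpec(limits=[client.V1LimitRangeItem(
        type="Container",
        default_request={"cpu": "100m", "memory": "128Mi"},
        default={"cpu": "500m", "memory": "512Mi"},
    )]),
)
core.create_namespaced_limit_range(namespace, limit_range)
```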

However, these traditional methods fall short in today's complex distributed systems, where many organizations spin up huge numbers of underutilized clusters. To truly reduce Kubernetes costs and simplify management over the long term, teams should consider a new approach: multi-tenancy with virtual Kubernetes clusters.

Reducing the Number of Clusters

Implementing multi-tenancy helps cut costs because the Kubernetes control plane and computing resources can be shared by several users or applications, which also reduces the management burden. Many organizations deploy too many clusters, even one for every developer, and stand to save significantly by relying on a multi-tenant architecture.

Reducing the number of clusters improves resource utilization and reduces redundancy: API servers, etcd instances, and other control plane components are no longer duplicated unnecessarily but shared by workloads in the same cluster. Multi-tenancy also reduces the cluster management fees charged by public cloud providers. At roughly $70 per cluster per month, those fees add up quickly when running many small clusters; a fleet of 100 clusters incurs about $7,000 a month in management fees alone, before any compute costs.

In traditional multi-tenant architectures, engineers might receive self-service namespaces on a shared cluster. Namespaces, however, offer limited utility and weak isolation; opting for virtual clusters instead preserves all the benefits of "real" clusters in a more efficient, more secure multi-tenant setup. Virtual clusters are fully functional Kubernetes clusters running within an underlying host cluster. Unlike namespaces, virtual clusters have their own Kubernetes control planes and storage backends. Only core resources like pods and services are shared with the physical cluster, while all others, such as statefulsets, deployments, and webhooks, exist only in the virtual cluster.
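
To make that split concrete, the sketch below uses the Kubernetes Python client to query a host cluster and one of its virtual clusters side by side; the kubeconfig paths and the namespace are hypothetical, and the virtual cluster's kubeconfig would come from whichever tool provisioned it.

```python
# Minimal sketch: contrast what the host cluster sees with what a tenant sees.
from kubernetes import client, config

# Hypothetical kubeconfig files for the host cluster and the virtual cluster.
host_client = config.new_client_from_config(config_file="host-kubeconfig.yaml")
vc_client = config.new_client_from_config(config_file="vcluster-kubeconfig.yaml")

# In the host cluster, tenant workloads surface only as pods (and services)
# inside the namespace that hosts the virtual cluster.
host_pods = client.CoreV1Api(host_client).list_namespaced_pod("team-a")
print("host sees pods:", [p.metadata.name for p in host_pods.items])

# Higher-level objects such as deployments live only in the virtual cluster's
# own control plane and storage backend, not in the host cluster's etcd.
vc_deps = client.AppsV1Api(vc_client).list_deployment_for_all_namespaces()
print("virtual cluster sees deployments:", [d.metadata.name for d in vc_deps.items])
```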

Virtual clusters thus solve the "noisy neighbor" problem: they provide better workload isolation than namespaces, and developers can configure each virtual cluster independently to fit their specific requirements. Because configuration changes and new installations happen inside the virtual clusters themselves, the underlying host cluster can remain simple, with only the basic components, which improves stability and reduces the chance of errors. While virtual clusters may not completely replace the need for separate regular clusters, implementing multi-tenancy with virtual clusters makes it possible to greatly reduce the number of real clusters needed to operate at scale.

The Case for Virtual Clusters to Reduce Cost

Virtual clusters are an exciting new alternative to both namespaces and separate clusters: cheaper and easier to deploy than regular clusters, with much better isolation than namespaces. Crucially, shifting to virtual clusters is a simple process that in most cases will not disrupt development workflows. For example, a large organization with developers distributed across 25 teams might provision 25 separate Kubernetes clusters for development and testing. To switch to virtual clusters, it would instead create a single Kubernetes cluster and deploy 25 virtual clusters within it. From the developers' viewpoint, nothing changes: teams can use all the necessary services within their virtual clusters, deploying their own resources such as Prometheus and Istio without affecting the host cluster.
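
The article does not prescribe a specific implementation; as one hedged example, the sketch below assumes the open-source vcluster CLI (maintained by Loft Labs) is installed and the current kubeconfig points at the single host cluster, then provisions one virtual cluster per team.

```python
# Rough sketch: provision 25 virtual clusters, one per team, in a single host
# cluster, assuming the open-source vcluster CLI is installed and configured.
import subprocess

teams = [f"team-{i:02d}" for i in range(1, 26)]  # 25 hypothetical team names

for team in teams:
    # Each virtual cluster gets its own namespace in the host cluster;
    # `vcluster create` deploys the virtual control plane into it.
    subprocess.run(["vcluster", "create", team, "--namespace", team], check=True)
```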

Further, since virtual clusters and their workloads run as pods in the host cluster, teams can take full advantage of the Kubernetes scheduler. If a team is not using its virtual cluster for a period of time, no pods sit scheduled in the host cluster consuming resources; overall, the improved node utilization drives down costs. Automating the scale-down of unused resources can also eliminate the costs created by idle virtual clusters. With this "sleep mode," the environment is stored and can be spun up quickly once a developer needs it again. Developers can implement sleep mode via scripts, as sketched below, or with tools that offer it as built-in functionality.
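
As a minimal sketch of such a script, the snippet below scales a virtual cluster's control plane down to zero replicas with the Kubernetes Python client. It assumes the control plane runs as a StatefulSet in its host namespace (as some virtual cluster implementations do by default); the names are hypothetical, and a production version would also clean up any workload pods the virtual cluster has synced into the host.

```python
# Rough sketch of a "sleep mode" script: scale the virtual cluster's control
# plane StatefulSet to zero replicas while it is idle, and back up on demand.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

NAMESPACE = "team-a"             # host namespace holding the virtual cluster
STATEFULSET = "team-a-vcluster"  # hypothetical control-plane StatefulSet name


def set_replicas(replicas: int) -> None:
    """Scale the virtual cluster control plane up (1) or down (0)."""
    apps.patch_namespaced_stateful_set_scale(
        name=STATEFULSET,
        namespace=NAMESPACE,
        body={"spec": {"replicas": replicas}},
    )


set_replicas(0)  # e.g. run from a nightly cron job to put the environment to sleep
# set_replicas(1)  # run again in the morning, or on demand, to wake it up
```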

Another key benefit is that infrastructure teams can centralize services like ingress controllers, service meshes, and logging tools, installing them just once in the host cluster and letting all virtual clusters share access to them. When organizations trust their tenants, such as internal teams, CI/CD pipelines, and even select customers, replacing underutilized clusters with virtual ones can significantly cut infrastructure and operational costs.

Future-Proofing Systems with Virtual Cluster Multi-Tenancy

Traditional Kubernetes cost management techniques, like autoscaling and monitoring tools, are a good first step toward reining in runaway cloud spend. But as companies rush to deploy artificial intelligence workloads, the associated complexity and resource demands will quickly render typical Kubernetes setups unmanageable and prohibitively expensive. Making the shift to virtual clusters now provides the same levels of security and functionality while drastically reducing the operational and financial burden, since organizations will need far fewer clusters. A virtualized, multi-tenant Kubernetes architecture is well positioned to scale to the demands of modern applications.

Cliff Malmborg is Director of Product Marketing at Loft Labs