Optimizing Kubernetes Costs with Multi-Tenancy and Virtual Clusters
October 16, 2024

Cliff Malmborg
Loft Labs

The cost of running Kubernetes at scale with a large number of users quickly becomes untenable for cloud-native organizations. Monitoring costs, either via public cloud providers or with external tools such as Kubecost, is the first step to identifying important cost drivers and areas of improvement. Setting efficient resource limits with Resource Quotas and Limit Ranges, and enabling horizontal and vertical autoscaling, can also help reduce costs and inform optimization strategy.
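As an illustration, resource limits like those described above are typically set per namespace with a ResourceQuota and a LimitRange. The names and values below are hypothetical, chosen only to show the shape of the objects:

```yaml
# Hypothetical quota for a team namespace: caps the total CPU and memory
# that all pods in the namespace may request or be limited to.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
---
# Default requests and limits applied to containers that do not set their own,
# so no workload in the namespace can run unbounded.
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-defaults
  namespace: team-a
spec:
  limits:
    - type: Container
      default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
```

Applied together, these keep any single tenant from consuming more than its share of the cluster, which is what makes the per-tenant cost data from monitoring tools actionable.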

However, these traditional methods are not enough given today's complex distributed systems, with many organizations spinning up huge numbers of underutilized clusters. To truly reduce Kubernetes costs and simplify management in the long term, teams should consider a new approach: multi-tenancy with virtual Kubernetes clusters.

Reducing the Number of Clusters

Implementing multi-tenancy cuts costs because the Kubernetes control plane and computing resources are shared by several users or applications, which also reduces the management burden. Many organizations deploy far too many clusters, sometimes one per developer, and stand to save significantly by moving to a multi-tenant architecture.

Reducing the number of clusters improves resource utilization and reduces redundancies, as API servers, etcd instances, and other components of the control plane are no longer duplicated unnecessarily but shared by workloads in the same cluster. Multi-tenancy also reduces the cluster management fees charged by public cloud providers; at roughly $70 per cluster per month, these fees add up quickly when running many small clusters.

In traditional multi-tenant architectures, engineers might receive self-service namespaces on a shared cluster. But namespaces offer limited utility and poor isolation from one another; virtual clusters instead preserve all the benefits of "real" clusters in a more efficient, secure multi-tenant setup. Virtual clusters are fully functional Kubernetes clusters running within an underlying host cluster. Unlike namespaces, virtual clusters have their own Kubernetes control planes and storage backends. Only core resources like pods and services are shared with the physical cluster, while all others, such as StatefulSets, Deployments, and webhooks, exist only in the virtual cluster.

Virtual clusters thus solve the "noisy neighbor" problem: they provide better workload isolation than namespaces, and developers can configure their virtual cluster independently, tailoring it to their specific requirements. Because configurations and new installations are carried out in the virtual clusters themselves, the underlying host cluster can remain simple, with only the basic components, which improves stability and reduces the chance of errors. While virtual clusters may not completely replace the need for separate regular clusters, implementing multi-tenancy with virtual clusters makes it possible to greatly reduce the number of real clusters needed to operate at scale.

The Case for Virtual Clusters to Reduce Cost

Virtual clusters are an exciting alternative to both namespaces and separate clusters: cheaper and easier to deploy than regular clusters, with much better isolation than namespaces. Crucially, shifting to virtual clusters is a simple process that in most cases will not disrupt development workflows. For example, a large organization with developers distributed across 25 teams might provision 25 separate Kubernetes clusters for development and testing. To switch to virtual clusters, it would instead create a single Kubernetes cluster and deploy 25 virtual clusters within it. From the developers' viewpoint, nothing changes: teams can use all the necessary services within their virtual clusters, deploying their own resources like Prometheus and Istio without affecting the host cluster.
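With Loft Labs' open-source vcluster CLI, provisioning virtual clusters like this is a short loop. A sketch, assuming kubectl is already pointed at the shared host cluster (team names are placeholders):

```shell
# Create one virtual cluster per team inside the single host cluster.
for team in team-1 team-2 team-3; do   # ...and so on, up to team-25
  vcluster create "$team" --namespace "$team"
done

# A developer then connects to their own virtual cluster and works as usual;
# kubectl now talks to the virtual control plane, not the host's.
vcluster connect team-1
```

Each `vcluster create` deploys a small control plane as pods in the host cluster, so adding a 26th "cluster" is a one-line operation rather than a new cloud provisioning project.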

Further, since virtual clusters and their workloads are themselves pods in the host cluster, teams can take full advantage of the Kubernetes scheduler. If a team is not using its virtual cluster for a period of time, no pods for it are scheduled in the host cluster consuming resources, and the improved node utilization drives down costs overall. Automating the scale-down of unused resources can also eliminate the cost of idle virtual clusters. With this "sleep mode," the environment's state is stored and can be spun up again quickly once a developer needs it. Teams can implement sleep mode via scripts or with tools that offer it as built-in functionality.
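A minimal script-based sleep mode can be as simple as a CronJob that scales a virtual cluster's control plane to zero outside working hours. This sketch assumes the virtual cluster was deployed as a StatefulSet named "team-1" in the "team-1" namespace and that a ServiceAccount with permission to scale StatefulSets exists (RBAC omitted for brevity); a matching morning job, or the developer, scales it back up:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: sleep-team-1
  namespace: team-1
spec:
  schedule: "0 19 * * 1-5"   # 7 p.m. on weekdays
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: vcluster-sleeper   # hypothetical, needs scale rights
          restartPolicy: Never
          containers:
            - name: scale-down
              image: bitnami/kubectl:latest
              command:
                - kubectl
                - scale
                - statefulset
                - team-1
                - --replicas=0
                - -n
                - team-1
```

Because the virtual cluster's state lives in its own storage backend, scaling the control plane back to one replica restores the environment exactly as the developer left it.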

Another key benefit is that infrastructure teams can centralize services like ingress controllers, service meshes, and logging tools, installing them just once in the host cluster and letting all virtual clusters share access. When organizations have trust in their tenants, like internal teams, CI/CD pipelines, and even select customers, replacing underutilized clusters with virtual ones can significantly cut down infrastructure and operational costs.
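For example, with a single NGINX ingress controller installed once in the host cluster (and ingress syncing enabled for the virtual clusters), a tenant exposes an app with an ordinary Ingress object; the hostname and service name below are hypothetical:

```yaml
# Created inside a virtual cluster; served by the host cluster's
# shared NGINX ingress controller, which no tenant has to install.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-1-app
spec:
  ingressClassName: nginx
  rules:
    - host: team-1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```

One controller, one load balancer, and one wildcard DNS entry can then serve every tenant, instead of each underutilized cluster paying for its own.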

Future-Proofing Systems with Virtual Cluster Multi-Tenancy

Traditional Kubernetes cost management techniques, like autoscaling and monitoring tools, are a good first step to reducing runaway cloud spend tied to Kubernetes. But as companies rush to deploy artificial intelligence workloads, the associated complexity and resource demands will quickly render typical Kubernetes setups unmanageable and prohibitively expensive. Making the shift to virtual clusters now will provide the same levels of security and functionality, but will drastically reduce the operational and financial burden as organizations will need far fewer clusters. A virtualized, multi-tenant Kubernetes architecture is well-positioned to scale to the demands of modern applications.

Cliff Malmborg is Director of Product Marketing at Loft Labs
