6 Kubernetes Pain Points and How to Solve Them - Part 2
March 06, 2018

Kamesh Pemmaraju
ZeroStack

With more than 30 Kubernetes solutions in the marketplace, it's tempting to think Kubernetes and the vendor ecosystem have solved the problem of operationalizing containers at scale. Far from it. There are six major pain points that companies experience when they try to deploy and run Kubernetes in their complex environments, along with best practices they can use to address them.

Start with 6 Kubernetes Pain Points and How to Solve Them - Part 1

Pain Point 3 - Development teams are often distributed in multiple sites and geographies

Companies do not build a single huge Kubernetes cluster for all of their development teams spread around the world. Building such a cluster in one location has disaster recovery (DR) implications, not to mention latency and country-specific data regulation challenges. Typically, companies want to build separate local clusters based on location, application type, data locality requirements, and the need for separate development, test, and production environments. In this situation, a single pane of glass for management becomes crucial for operational efficiency and for simplifying the deployment and upgrade of these clusters. Strict isolation and role-based access control (RBAC) are often security requirements as well.

IT administrators should implement a central way to manage diverse infrastructure across multiple sites, with the ability to deploy and manage multiple Kubernetes clusters within those sites. Access rights to each of these environments should be governed by strict business unit (BU)-level and project-level RBAC and security controls.
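To make that concrete, here is a minimal sketch of what consistently applied, project-level RBAC across several regional clusters could look like, using the official Kubernetes Python client. The cluster context names, namespace, and identity-provider group are hypothetical placeholders, and the namespace is assumed to already exist in each cluster.

# Sketch: grant a development team edit-style access to its own namespace
# in each regional cluster. Context names, namespace, and group are
# hypothetical placeholders; adapt to your own kubeconfig and org structure.
from kubernetes import client, config

CONTEXTS = ["us-east-dev", "eu-west-dev"]   # hypothetical cluster contexts
NAMESPACE = "team-payments"                 # hypothetical project namespace
TEAM_GROUP = "bu-payments-developers"       # hypothetical IdP group

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "team-edit", "namespace": NAMESPACE},
    "rules": [{
        "apiGroups": ["", "apps"],
        "resources": ["pods", "services", "configmaps", "deployments"],
        "verbs": ["get", "list", "watch", "create", "update", "patch", "delete"],
    }],
}

binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "team-edit-binding", "namespace": NAMESPACE},
    "subjects": [{"kind": "Group", "name": TEAM_GROUP,
                  "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "Role", "name": "team-edit",
                "apiGroup": "rbac.authorization.k8s.io"},
}

for ctx in CONTEXTS:
    # Load credentials for one cluster at a time from the local kubeconfig.
    api_client = config.new_client_from_config(context=ctx)
    rbac = client.RbacAuthorizationV1Api(api_client)
    rbac.create_namespaced_role(namespace=NAMESPACE, body=role)
    rbac.create_namespaced_role_binding(namespace=NAMESPACE, body=binding)
    print(f"Applied RBAC for {TEAM_GROUP} in {ctx}/{NAMESPACE}")

The same loop-over-contexts pattern is what a central management plane automates for you; the point is that each team's access is scoped to its own namespace in every cluster, rather than granted cluster-wide.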

Pain Point 4 - Container Orchestration is just one part of running cloud-native applications and infrastructure operations

Developing, deploying, and operating large-scale enterprise cloud-native applications requires more than just container orchestration. For example, IT operations teams still need to set up firewalls, load balancers, DNS services, and possibly databases, to name a few. They still need to manage infrastructure operations such as physical host maintenance and the addition, removal, and replacement of disks and hosts. They still need to do capacity planning, and they still need to monitor the utilization, allocation, and performance of compute, storage, and networking. Kubernetes does not help with any of this.

The IT team should have full manageability for all the underlying infrastructure that runs Kubernetes. IT operations teams should be provided with all the intelligence they need to optimize sizing, perform predictive capacity planning, and implement seamless failure management.
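As a rough illustration of the kind of visibility this requires, the sketch below uses the Kubernetes Python client to compare the CPU and memory requested by pods against the allocatable capacity of the cluster's nodes. It assumes a working kubeconfig, simplifies quantity parsing, and is only a starting point for sizing conversations, not a substitute for a real capacity-planning or monitoring tool.

# Sketch: compare requested vs. allocatable capacity across a cluster.
# Assumes a working kubeconfig; quantity parsing is deliberately simplified.
from kubernetes import client, config

def cpu_cores(q):
    """Convert a CPU quantity ('250m' or '4') to cores."""
    return float(q[:-1]) / 1000 if q.endswith("m") else float(q)

def mem_gib(q):
    """Convert a memory quantity ('16Gi', '512Mi', '1048576Ki') to GiB."""
    units = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return float(q[:-2]) * factor / 2**30
    return float(q) / 2**30  # plain bytes

config.load_kube_config()
v1 = client.CoreV1Api()

alloc_cpu = alloc_mem = 0.0
for node in v1.list_node().items:
    alloc_cpu += cpu_cores(node.status.allocatable["cpu"])
    alloc_mem += mem_gib(node.status.allocatable["memory"])

req_cpu = req_mem = 0.0
for pod in v1.list_pod_for_all_namespaces().items:
    for c in pod.spec.containers:
        requests = (c.resources.requests or {}) if c.resources else {}
        req_cpu += cpu_cores(requests.get("cpu", "0"))
        req_mem += mem_gib(requests.get("memory", "0"))

print(f"CPU:    {req_cpu:.1f} requested of {alloc_cpu:.1f} allocatable cores")
print(f"Memory: {req_mem:.1f} requested of {alloc_mem:.1f} allocatable GiB")

Even a crude ratio like this, tracked per cluster and per site over time, is what feeds predictive capacity planning; the underlying host, disk, and network layers still need their own tooling outside Kubernetes.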

Pain Point 5 - Enterprises have policy-driven security and customization requirements

Enterprises have policies requiring the use of specifically hardened and approved gold images of operating systems. These images often need security configurations, databases, and other management tools installed before they can be used. Running them on a public cloud may not be allowed, or they may perform poorly there.

The solution is to enable an on-premises data center image store where enterprises can create customized gold images. Using fine-grained RBAC, the IT team can share these images selectively with various development teams around the world, based on the local security, regulatory, and performance requirements. The local Kubernetes deployments are then carried out using these gold images to provide the underlying infrastructure to run containers.

Pain Point 6 - Enterprises need a DR strategy for container applications

Critical applications and their associated data need to be protected from natural disasters, regardless of whether those applications are container-based. None of the existing solutions provides an out-of-the-box disaster recovery feature for critical Kubernetes applications, leaving customers to cobble together their own DR strategy.

As part of a platform's multi-site capabilities, IT teams should be able to perform remote data replication and disaster recovery between geographically separated sites. This protects the persistent data and databases used by the Kubernetes clusters. In addition, the underlying VMs that run the Kubernetes clusters can be brought up at another site in an active-passive failover scenario.
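To illustrate the active-passive idea, below is a deliberately simplified sketch of a failover watcher that could run at the secondary site: it polls a health endpoint on the primary cluster's API server and, after several consecutive failures, scales up a standby Deployment in the secondary cluster. The primary URL, context name, namespace, and Deployment name are hypothetical, the health check assumes the endpoint is reachable with a trusted certificate, and a real DR setup would also need data replication, DNS failover, and fencing logic that this sketch omits.

# Sketch: minimal active-passive failover watcher for the secondary site.
# Primary URL, context name, namespace, and deployment name are hypothetical.
import time
import urllib.request

from kubernetes import client, config

PRIMARY_HEALTH_URL = "https://primary.example.com:6443/readyz"  # hypothetical
SECONDARY_CONTEXT = "dr-site"                                   # hypothetical
NAMESPACE, DEPLOYMENT = "prod", "orders-api"                    # hypothetical
FAILURES_BEFORE_FAILOVER = 5

def primary_healthy():
    # Treat any error (timeout, TLS, connection refused) as an unhealthy probe.
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

def scale_up_standby(replicas=3):
    # Bring the standby copy of the workload up in the secondary cluster.
    api_client = config.new_client_from_config(context=SECONDARY_CONTEXT)
    apps = client.AppsV1Api(api_client)
    apps.patch_namespaced_deployment_scale(
        DEPLOYMENT, NAMESPACE, {"spec": {"replicas": replicas}})
    print(f"Failover: scaled {DEPLOYMENT} to {replicas} replicas in {SECONDARY_CONTEXT}")

failures = 0
while True:
    failures = 0 if primary_healthy() else failures + 1
    if failures >= FAILURES_BEFORE_FAILOVER:
        scale_up_standby()
        break
    time.sleep(30)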

Kubernetes has been a godsend for developers of cloud-native applications, but IT teams can be left scrambling to provision and manage the Kubernetes infrastructure. As we have seen, however, these pain points can be managed.

Kamesh Pemmaraju is VP of Product at ZeroStack