AI and Kubernetes Challenges: 93% of Enterprise Platform Teams Struggle with Complexity and Costs
September 25, 2024

Haseeb Budhani
Rafay Systems

As artificial intelligence (AI) and generative AI (GenAI) reshape the enterprise landscape, organizations face implementation hurdles that echo the challenges of early cloud adoption. A new survey of more than 2,000 platform engineering, architecture, cloud engineering, developer, DevOps and site reliability engineering (SRE) professionals reveals that while AI's potential is widely recognized, operationalizing these technologies remains difficult.

Conducted by Rafay Systems, the research study, The Pulse of Enterprise Platform Teams: Cloud, Kubernetes and AI, shows that 93% of platform teams face persistent challenges. Top issues include managing Kubernetes complexity, keeping Kubernetes and cloud costs low and boosting developer productivity. In response, organizations are turning to platform teams and emphasizing automation to navigate these complexities.

AI Implementation Complexity

Engineering teams are grappling with AI-related challenges as applications become more sophisticated. Almost all respondents with machine learning operations (MLOps) implementations (95%) reported difficulties in experimenting with and deploying AI apps, while 94% struggled with GenAI app experimentation and deployment.

These obstacles may stem from a lack of mature operational frameworks: only 17% of organizations report having adequate MLOps implementations, and just 16% report the same for large language model operations (LLMOps). This gap between the desire to leverage AI technologies and actual operational readiness limits engineering teams' ability to develop, deliver and scale AI-powered applications quickly.

MLOps and LLMOps Challenges

To address AI operational challenges, organizations are prioritizing key capabilities, including:

1. Pre-configured environments for developing and testing generative AI applications

2. Automatic allocation of AI workloads to appropriate GPU resources

3. Pre-built MLOps pipelines

4. GPU virtualization and sharing

5. Dynamic GPU matchmaking

These priorities reflect the need for specialized infrastructure and tooling to support AI development and deployment. The focus on GPU-related capabilities highlights the resource-intensive nature of AI workloads and the importance of optimizing hardware utilization.
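The survey does not describe how these capabilities are implemented, but "dynamic GPU matchmaking" generally means routing each AI workload to the smallest (and cheapest) GPU pool that satisfies its resource needs, rather than parking every job on the largest accelerator available. A minimal illustrative sketch of that idea follows; the pool names, memory sizes and costs are hypothetical examples, not survey data:

```python
# Illustrative sketch of "dynamic GPU matchmaking": route each AI workload
# to the cheapest GPU pool whose cards satisfy its memory requirement.
# Pool names, memory sizes and relative costs below are made-up examples.

GPU_POOLS = [
    # (pool name, GPU memory in GiB, relative cost per hour)
    ("t4-pool", 16, 1.0),
    ("a10g-pool", 24, 2.0),
    ("a100-pool", 80, 8.0),
]

def match_gpu_pool(required_mem_gib: float) -> str:
    """Return the cheapest pool whose GPUs fit the workload, or raise."""
    candidates = [p for p in GPU_POOLS if p[1] >= required_mem_gib]
    if not candidates:
        raise ValueError(f"no pool offers {required_mem_gib} GiB of GPU memory")
    return min(candidates, key=lambda p: p[2])[0]

# A 7B-parameter model in fp16 needs roughly 14 GiB for weights alone,
# so it fits the 16 GiB pool; a workload needing 40 GiB must move up.
print(match_gpu_pool(14))   # t4-pool
print(match_gpu_pool(40))   # a100-pool
```

In a real platform this decision would typically be enforced by the scheduler (for example via Kubernetes node selectors and GPU resource requests), but the matchmaking logic itself reduces to a fit-and-cost comparison like the one above.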

Platform Teams in AI Adoption

Enterprises recognize the role of platform teams in advancing AI adoption. Half (50%) of respondents emphasized the importance of security for MLOps and LLMOps workflows, while 49% highlighted model deployment automation as a key responsibility. An additional 45% pointed to data pipeline management as an area where platform teams can contribute to AI success.

The survey reveals an emphasis on automation and self-service capabilities to enhance developer productivity and accelerate AI adoption. Nearly half (47%) of respondents are focusing on automating cluster provisioning, while 44% aim to provide self-service experiences for developers.
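Self-service experiences of the kind respondents describe are commonly built on templated provisioning: a developer requests an environment, and platform automation generates standardized objects (a namespace, quotas, labels) instead of hand-crafting them per team. A hedged sketch of that pattern, assuming Kubernetes as the substrate; the quota values and label names are illustrative, not from the survey:

```python
# Sketch of self-service environment provisioning: generate standardized
# Kubernetes Namespace and ResourceQuota manifests from a team's request.
# Quota defaults and label keys are illustrative assumptions.

def provision_request(team: str, env: str, cpu: str = "8", memory: str = "32Gi"):
    """Return the manifests a platform automation pipeline would apply."""
    name = f"{team}-{env}"
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": name, "labels": {"team": team, "env": env}},
    }
    quota = {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": f"{name}-quota", "namespace": name},
        "spec": {"hard": {"requests.cpu": cpu, "requests.memory": memory}},
    }
    return [namespace, quota]

for manifest in provision_request("ml-platform", "dev"):
    print(manifest["kind"], manifest["metadata"]["name"])
```

Because every environment comes from the same template, cost controls and security standards are applied uniformly, which is exactly the standardization-plus-automation combination the survey respondents prioritize.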

A vast majority (83%) of respondents believe that pre-configured AI workspaces with built-in MLOps and LLMOps tooling could save teams more than 10% of their time each month. This finding underscores the role platform teams play in providing efficient, productive AI development environments.

Kubernetes and Infrastructure Challenges

The study also revealed challenges related to Kubernetes complexity:

■ 45% of respondents cited managing cost visibility and controlling Kubernetes and cloud infrastructure costs as a top challenge.

■ 38% highlighted the complexity of keeping up with Kubernetes cluster lifecycle management using multiple, disparate tools.

■ 38% pointed to the establishment and upkeep of enterprise-wide standardization as a hurdle.

Nearly one-third (31%) of organizations state that the total cost of ownership for Kubernetes is higher than budgeted for or anticipated. Looking ahead, 60% report that reducing and optimizing costs associated with Kubernetes infrastructure remains a top management initiative for the coming year.

Automation for AI Success

To address AI implementation challenges, organizations are turning to automation and self-service capabilities: 44% of respondents advocate standardizing and automating infrastructure, while another 44% are focusing on automating Kubernetes cluster lifecycle management. More than a third (37%) highlighted the importance of reducing the cognitive load on developer teams.

Navigating the Future: Automation and Platform Teams Drive AI Success

As organizations work to maintain their competitive edge and navigate the AI landscape, adopting automated approaches to these implementation challenges is essential. The survey results depict an enterprise landscape that's embracing AI and GenAI technologies while dealing with the practical challenges of implementation. By prioritizing automation and self-service, and by leveraging the expertise of platform teams, organizations can build resilient, scalable AI architectures that drive business success.

As AI continues to evolve, the ability to integrate these technologies while managing complexity and costs will likely differentiate successful enterprises. Those that can navigate the implementation hurdles and create efficient, scalable AI infrastructures will be positioned to leverage the potential of AI and GenAI in driving innovation and business growth. By investing in automation, empowering platform teams and prioritizing developer productivity, enterprises can create the foundation necessary for successful AI implementation and unlock its transformative potential.

Haseeb Budhani is CEO of Rafay Systems.
