As artificial intelligence (AI) and generative AI (GenAI) reshape the enterprise landscape, organizations face implementation hurdles that echo the challenges of early cloud adoption. A new survey of over 2,000 platform engineering, architecture, cloud engineering, developer, DevOps and site reliability engineering (SRE) professionals reveals that while AI's potential is widely recognized, operationalizing these technologies remains difficult.
Conducted by Rafay Systems, the research study, The Pulse of Enterprise Platform Teams: Cloud, Kubernetes and AI, shows that 93% of platform teams face persistent challenges. Top issues include managing Kubernetes complexity, keeping Kubernetes and cloud costs in check, and boosting developer productivity. In response, organizations are turning to platform teams and emphasizing automation to navigate these complexities.
AI Implementation Complexity
Engineering teams are grappling with AI-related challenges as applications become more sophisticated. Almost all respondents with machine learning operations (MLOps) implementations (95%) reported difficulties in experimenting with and deploying AI apps, while 94% struggled with GenAI app experimentation and deployment.
These obstacles likely stem from a lack of mature operational frameworks: only 17% of organizations report adequate MLOps implementations, and just 16% say the same of their large language model operations (LLMOps). This gap between the desire to leverage AI technologies and operational readiness limits engineering teams' ability to develop, deliver and scale AI-powered applications quickly.
MLOps and LLMOps Challenges
To address AI operational challenges, organizations are prioritizing key capabilities, including:
1. Pre-configured environments for developing and testing generative AI applications
2. Automatic allocation of AI workloads to appropriate GPU resources
3. Pre-built MLOps pipelines
4. GPU virtualization and sharing
5. Dynamic GPU matchmaking
These priorities reflect the need for specialized infrastructure and tooling to support AI development and deployment. The focus on GPU-related capabilities highlights the resource-intensive nature of AI workloads and the importance of optimizing hardware utilization.
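In Kubernetes environments, the GPU allocation priorities above typically come down to resource requests plus node selection. As a rough illustration (not from the survey), here is a minimal Python sketch that builds a pod spec requesting GPUs; the accelerator label key, GPU type and image names are illustrative assumptions:

```python
def gpu_pod_spec(name, image, gpu_count=1, gpu_type="nvidia-tesla-t4"):
    """Build a Kubernetes pod spec that requests GPUs and pins the
    workload to nodes carrying a matching accelerator label.
    Label keys and values here are illustrative assumptions."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            # Schedule only onto nodes advertising the desired GPU type.
            "nodeSelector": {"cloud.google.com/gke-accelerator": gpu_type},
            "containers": [{
                "name": name,
                "image": image,
                # Device plugins expose GPUs as an extended resource;
                # requesting it is what triggers GPU-aware scheduling.
                "resources": {"limits": {"nvidia.com/gpu": str(gpu_count)}},
            }],
        },
    }

spec = gpu_pod_spec("llm-inference", "inference-server:latest", gpu_count=2)
```

"Automatic allocation" and "dynamic matchmaking" platforms essentially generate and tune specs like this on the developer's behalf, so teams never hand-write the scheduling details.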
Platform Teams in AI Adoption
Enterprises recognize the role of platform teams in advancing AI adoption. Half (50%) of respondents emphasized the importance of security for MLOps and LLMOps workflows, while 49% highlighted model deployment automation as a key responsibility. An additional 45% pointed to data pipeline management as an area where platform teams can contribute to AI success.
The survey reveals an emphasis on automation and self-service capabilities to enhance developer productivity and accelerate AI adoption. Nearly half (47%) of respondents are focusing on automating cluster provisioning, while 44% aim to provide self-service experiences for developers.
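A self-service provisioning layer of the kind respondents describe often reduces to a small, validated request that the platform expands into a fully standardized cluster spec. The tier names, node counts and guardrail labels in this sketch are assumptions for illustration, not details from the survey:

```python
# Assumed size tiers a platform team might standardize on.
ALLOWED_SIZES = {"small": 3, "medium": 6, "large": 12}  # node counts

def cluster_request(team, size="small", region="us-east-1"):
    """Expand a developer's minimal self-service request into the full
    cluster spec, with the platform team's guardrails baked in."""
    if size not in ALLOWED_SIZES:
        raise ValueError(f"size must be one of {sorted(ALLOWED_SIZES)}")
    return {
        "name": f"{team}-{region}-{size}",
        "nodes": ALLOWED_SIZES[size],
        "region": region,
        # Defaults developers get for free instead of configuring by hand:
        "labels": {"owner": team, "managed-by": "platform"},
        "addons": ["monitoring", "cost-metering"],
    }
```

The point of the design is cognitive-load reduction: developers choose from a short menu, while the platform team controls everything else.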
A vast majority (83%) of respondents believe that pre-configured AI workspaces with built-in MLOps and LLMOps tooling could save teams more than 10% of their time each month. This data highlights the role platform teams play in ensuring efficient, productive AI development environments.
Kubernetes and Infrastructure Challenges
The study also revealed challenges related to Kubernetes complexity:
■ 45% of respondents cited managing cost visibility and controlling Kubernetes and cloud infrastructure costs as a top challenge.
■ 38% highlighted the complexity of keeping up with Kubernetes cluster lifecycle management using multiple, disparate tools.
■ 38% pointed to the establishment and upkeep of enterprise-wide standardization as a hurdle.
Nearly one-third (31%) of organizations state that the total cost of ownership for Kubernetes is higher than budgeted for or anticipated. Looking ahead, 60% report that reducing and optimizing costs associated with Kubernetes infrastructure remains a top management initiative for the coming year.
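The cost-visibility problem respondents cite usually starts with showback: splitting a shared cluster's bill across teams. One common, if simplistic, model allocates cost in proportion to each namespace's resource requests. A minimal sketch, with made-up figures purely for illustration:

```python
def chargeback(cluster_cost, requests_by_namespace):
    """Split a cluster's monthly bill across namespaces in proportion
    to their CPU requests (a simple proportional showback model)."""
    total = sum(requests_by_namespace.values())
    return {
        ns: round(cluster_cost * req / total, 2)
        for ns, req in requests_by_namespace.items()
    }

# Hypothetical $10,000/month cluster; requests in CPU cores.
bill = chargeback(10_000.0, {"ml-training": 60, "web": 30, "batch": 10})
```

Real cost tooling refines this with actual usage, GPU and memory pricing, and idle-capacity attribution, but the proportional split above is the core idea behind most Kubernetes showback reports.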
Automation for AI Success
To address AI implementation challenges, organizations are turning to automation and self-service capabilities. Forty-four percent of respondents advocate for standardizing and automating infrastructure, while an equal share (44%) are focusing on automating Kubernetes cluster lifecycle management. Over a third (37%) highlighted the importance of reducing cognitive load on developer teams.
Navigating the Future: Automation and Platform Teams Drive AI Success
As organizations work to maintain their competitive edge and navigate the AI landscape, adopting automated approaches to address implementation challenges is essential. The survey results depict an enterprise landscape that's embracing AI and GenAI technologies while grappling with the practical challenges of implementation. By prioritizing automation and self-service, and by leveraging the expertise of platform teams, organizations can build resilient, scalable AI architectures that drive business success.
As AI continues to evolve, the ability to integrate these technologies while managing complexity and costs will likely differentiate successful enterprises. Those that can navigate the implementation hurdles and create efficient, scalable AI infrastructures will be positioned to leverage the potential of AI and GenAI in driving innovation and business growth. By investing in automation, empowering platform teams and prioritizing developer productivity, enterprises can create the foundation necessary for successful AI implementation and unlock its transformative potential.
Industry News
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, has announced significant momentum around cloud native training and certifications with the addition of three new project-centric certifications and a series of new Platform Engineering-specific certifications.
Red Hat announced the latest version of Red Hat OpenShift AI, its artificial intelligence (AI) and machine learning (ML) platform built on Red Hat OpenShift that enables enterprises to create and deliver AI-enabled applications at scale across the hybrid cloud.
Salesforce announced agentic lifecycle management tools to automate Agentforce testing, prototype agents in secure Sandbox environments, and transparently manage usage at scale.
OpenText™ unveiled Cloud Editions (CE) 24.4, presenting a suite of transformative advancements in Business Cloud, AI, and Technology to empower the future of AI-driven knowledge work.
Red Hat announced new capabilities and enhancements for Red Hat Developer Hub, Red Hat’s enterprise-grade developer portal based on the Backstage project.
Pegasystems announced the availability of new AI-driven legacy discovery capabilities in Pega GenAI Blueprint™ to accelerate the daunting task of modernizing legacy systems that hold organizations back.
Tricentis launched enhanced cloud capabilities for its flagship solution, Tricentis Tosca, bringing enterprise-ready end-to-end test automation to the cloud.
Rafay Systems announced new platform advancements that help enterprises and GPU cloud providers deliver developer-friendly consumption workflows for GPU infrastructure.
Apiiro introduced Code-to-Runtime, a new capability using Apiiro’s deep code analysis (DCA) technology to map software architecture and trace all types of software components including APIs, open source software (OSS), and containers to code owners while enriching it with business impact.
Zesty announced the launch of Kompass, its automated Kubernetes optimization platform.
MacStadium announced the launch of Orka Engine, the latest addition to its Orka product line.
Elastic announced its AI ecosystem to help enterprise developers accelerate building and deploying their Retrieval Augmented Generation (RAG) applications.
Red Hat introduced new capabilities and enhancements for Red Hat OpenShift, a hybrid cloud application platform powered by Kubernetes, as well as the technology preview of Red Hat OpenShift Lightspeed.
Traefik Labs announced API Sandbox as a Service to streamline and accelerate mock API development, and Traefik Proxy v3.2.