Broadcom announced the general availability of VMware Tanzu Platform 10, which establishes a new layer of abstraction across Cloud Foundry infrastructure foundations, making it easier, faster, and less expensive to bring new applications, including GenAI applications, to production.
As artificial intelligence (AI) and generative AI (GenAI) reshape the enterprise landscape, organizations face implementation hurdles that echo the challenges of early cloud adoption. A new survey of over 2,000 platform engineering, architecture, cloud engineering, developer, DevOps and site reliability engineering (SRE) professionals reveals that while AI's potential is widely recognized, operationalizing these technologies remains challenging.
Conducted by Rafay Systems, the research study, The Pulse of Enterprise Platform Teams: Cloud, Kubernetes and AI, shows that 93% of platform teams face persistent challenges. Top issues include managing Kubernetes complexity, controlling Kubernetes and cloud costs, and boosting developer productivity. In response, organizations are turning to platform teams and emphasizing automation to navigate these complexities.
AI Implementation Complexity
Engineering teams are grappling with AI-related challenges as applications become more sophisticated. Almost all respondents with machine learning operations (MLOps) implementations (95%) reported difficulties in experimenting with and deploying AI apps, while 94% struggled with GenAI app experimentation and deployment.
These obstacles likely stem from a lack of mature operational frameworks: only 17% of organizations report adequate MLOps implementation, and just 16% report the same for large language model operations (LLMOps). This gap between the desire to leverage AI technologies and operational readiness limits engineering teams' ability to develop, deliver and scale AI-powered applications quickly.
MLOps and LLMOps Challenges
To address AI operational challenges, organizations are prioritizing key capabilities including:
1. Pre-configured environments for developing and testing generative AI applications
2. Automatic allocation of AI workloads to appropriate GPU resources
3. Pre-built MLOps pipelines
4. GPU virtualization and sharing
5. Dynamic GPU matchmaking
These priorities reflect the need for specialized infrastructure and tooling to support AI development and deployment. The focus on GPU-related capabilities highlights the resource-intensive nature of AI workloads and the importance of optimizing hardware utilization.
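The "dynamic GPU matchmaking" and automatic workload-allocation capabilities above can be illustrated with a small scheduling sketch: given a workload's GPU memory requirement and a pool of free devices, pick the smallest device that fits (best-fit), keeping large GPUs free for large jobs. All class names, device names and sizes below are illustrative assumptions, not details from the survey.

```python
# Illustrative best-fit "GPU matchmaking" sketch (hypothetical; not from the survey).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Gpu:
    name: str
    mem_gb: int        # total device memory in GB
    allocated: bool = False

def match_gpu(required_mem_gb: int, pool: list[Gpu]) -> Optional[Gpu]:
    """Best-fit: choose the smallest free GPU that satisfies the request,
    so larger devices stay available for heavier workloads."""
    candidates = [g for g in pool if not g.allocated and g.mem_gb >= required_mem_gb]
    if not candidates:
        return None
    best = min(candidates, key=lambda g: g.mem_gb)
    best.allocated = True
    return best

pool = [Gpu("a100-80g", 80), Gpu("l4-24g", 24), Gpu("t4-16g", 16)]
print(match_gpu(20, pool).name)   # the small job lands on the 24 GB card
print(match_gpu(70, pool).name)   # the large job still fits on the 80 GB card
```

Production schedulers (e.g., Kubernetes device plugins) of course weigh far more than memory, but the best-fit idea is the same: match each workload to the cheapest resource that meets its needs.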
Platform Teams in AI Adoption
Enterprises recognize the role of platform teams in advancing AI adoption. Half (50%) of respondents emphasized the importance of security for MLOps and LLMOps workflows, while 49% highlighted model deployment automation as a key responsibility. An additional 45% pointed to data pipeline management as an area where platform teams can contribute to AI success.
The survey reveals an emphasis on automation and self-service capabilities to enhance developer productivity and accelerate AI adoption. Nearly half (47%) of respondents are focusing on automating cluster provisioning, while 44% aim to provide self-service experiences for developers.
A vast majority (83%) of respondents believe that pre-configured AI workspaces with built-in MLOps and LLMOps tooling could save teams over 10% of time monthly. This data highlights the role platform teams play in ensuring efficient, productive AI development environments.
Kubernetes and Infrastructure Challenges
The study also revealed challenges related to Kubernetes complexity:
■ 45% of respondents cited managing cost visibility and controlling Kubernetes and cloud infrastructure costs as a top challenge.
■ 38% highlighted the complexity of keeping up with Kubernetes cluster lifecycle management using multiple, disparate tools.
■ 38% pointed to the establishment and upkeep of enterprise-wide standardization as a hurdle.
Nearly one-third (31%) of organizations say the total cost of ownership for Kubernetes is higher than budgeted or anticipated. Looking ahead, 60% report that reducing and optimizing Kubernetes infrastructure costs remains a top management initiative for the coming year.
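The cost-visibility problem cited above often starts with something simple: rolling requested CPU and memory up into a per-team estimate. A minimal sketch, assuming placeholder unit prices (the rates and workload figures here are hypothetical, not real cloud pricing or survey data):

```python
# Hypothetical per-namespace Kubernetes cost roll-up (illustrative rates only).
from collections import defaultdict

CPU_RATE = 0.03   # assumed $ per vCPU-hour (placeholder)
MEM_RATE = 0.004  # assumed $ per GiB-hour  (placeholder)

def monthly_cost(workloads, hours=730):
    """Aggregate requested CPU/memory into a per-namespace monthly estimate.
    Each workload is a tuple: (namespace, vcpus_requested, mem_gib_requested)."""
    totals = defaultdict(float)
    for namespace, vcpus, mem_gib in workloads:
        totals[namespace] += (vcpus * CPU_RATE + mem_gib * MEM_RATE) * hours
    return dict(totals)

workloads = [("ml-training", 16, 64), ("web", 2, 4)]
for ns, cost in monthly_cost(workloads).items():
    print(f"{ns}: ${cost:,.2f}/month")
```

Even a crude request-based estimate like this makes over-provisioned namespaces visible; real chargeback tooling refines it with actual usage, spot pricing and GPU hours.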
Automation for AI Success
To address AI implementation challenges, organizations are turning to automation and self-service capabilities. Some 44% of respondents advocate standardizing and automating infrastructure, while another 44% are focusing on automating Kubernetes cluster lifecycle management. Over a third (37%) highlighted the importance of reducing cognitive load on developer teams.
Navigating the Future: Automation and Platform Teams Drive AI Success
As organizations work to maintain their competitive edge in the AI landscape, adopting automated approaches to these implementation challenges is essential. The survey results depict an enterprise landscape that's embracing AI and GenAI technologies while dealing with the practical challenges of implementation. By prioritizing automation and self-service and leveraging the expertise of platform teams, organizations can build resilient, scalable AI architectures that drive business success.
As AI continues to evolve, the ability to integrate these technologies while managing complexity and costs will likely differentiate successful enterprises. Those that can navigate the implementation hurdles and create efficient, scalable AI infrastructures will be positioned to leverage the potential of AI and GenAI in driving innovation and business growth. By investing in automation, empowering platform teams and prioritizing developer productivity, enterprises can create the foundation necessary for successful AI implementation and unlock its transformative potential.
Industry News
Tricentis announced the expansion of its test management and analytics platform, Tricentis qTest, with the launch of Tricentis qTest Copilot.
Redgate is introducing two new machine learning (ML) and artificial intelligence (AI) powered capabilities in its test data management and database monitoring solutions.
Upbound announced significant advancements to its platform, targeting enterprises building self-service cloud environments for their developers and machine learning engineers.
Edera announced the availability of Am I Isolated, an open source container security benchmark that probes users' runtime environments and tests for container isolation.
Progress announced 10 years of partnership with emt Distribution, a leading cybersecurity distributor in the Middle East and Africa.
Port announced $35 million in Series B funding, bringing its total funding to $58M to date.
Parasoft has taken another step toward strategically integrating AI and ML quality enhancements where development teams need them most, such as using natural language for troubleshooting or checking code in real time.
MuleSoft announced the general availability of full lifecycle AsyncAPI support, enabling organizations to power AI agents with real-time data through seamless integration with event-driven architectures (EDAs).
Numecent announced that it has expanded its Microsoft collaboration with the launch of Cloudpager's new integration with App attach in Azure Virtual Desktop.
Progress announced the completion of the acquisition of ShareFile, a business unit of Cloud Software Group, providing a SaaS-native, AI-powered, document-centric collaboration platform, focusing on industry segments including business and professional services, financial services, industrial and healthcare.
Incredibuild announced the acquisition of Garden, a provider of DevOps pipeline acceleration solutions.
The Open Source Security Foundation (OpenSSF) announced an expansion of its free course “Developing Secure Software” (LFD121).
Redgate announced that its core solutions are listed in Amazon Web Services (AWS) Marketplace.
LambdaTest introduced a suite of new features to its AI-powered Test Manager, designed to simplify and enhance the test management experience for software development and QA teams.