As artificial intelligence (AI) and generative AI (GenAI) reshape the enterprise landscape, organizations face implementation hurdles that echo the challenges of early cloud adoption. A new survey of over 2,000 platform engineering, architecture, cloud engineering, developer, DevOps and site reliability engineering (SRE) professionals reveals that while AI's potential is widely recognized, operationalizing these technologies remains challenging.
Conducted by Rafay Systems, the research study, The Pulse of Enterprise Platform Teams: Cloud, Kubernetes and AI, shows that 93% of platform teams face persistent challenges. Top issues include managing Kubernetes complexity, keeping Kubernetes and cloud costs low and boosting developer productivity. In response, organizations are turning to platform teams and emphasizing automation to navigate these complexities.
AI Implementation Complexity
Engineering teams are grappling with AI-related challenges as applications become more sophisticated. Almost all respondents with machine learning operations (MLOps) implementations (95%) reported difficulties in experimenting with and deploying AI apps, while 94% struggled with GenAI app experimentation and deployment.
These obstacles likely stem from a lack of mature operational frameworks: only 17% of organizations report adequate MLOps implementation, and only 16% report the same for large language model operations (LLMOps). This gap between the desire to leverage AI technologies and actual operational readiness limits engineering teams' ability to develop, deliver and scale AI-powered applications quickly.
MLOps and LLMOps Challenges
To address AI operational challenges, organizations are prioritizing key capabilities including:
1. Pre-configured environments for developing and testing generative AI applications
2. Automatic allocation of AI workloads to appropriate GPU resources
3. Pre-built MLOps pipelines
4. GPU virtualization and sharing
5. Dynamic GPU matchmaking
These priorities reflect the need for specialized infrastructure and tooling to support AI development and deployment. The focus on GPU-related capabilities highlights the resource-intensive nature of AI workloads and the importance of optimizing hardware utilization.
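To make the "dynamic GPU matchmaking" and "automatic allocation" items above concrete, the sketch below shows one simple way such a scheduler could work: a greedy best-fit placement that assigns each workload to the GPU with the least free memory that still fits it, reducing fragmentation. This is an illustrative sketch, not how any particular vendor implements it; all class and function names (`Gpu`, `Workload`, `match_workloads`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Gpu:
    name: str
    free_mem_gb: float  # memory currently available on this device

@dataclass
class Workload:
    name: str
    mem_gb: float  # estimated GPU memory the job needs

def match_workloads(workloads, gpus):
    """Greedy best-fit matchmaking: place the largest jobs first, and
    put each job on the GPU with the least remaining memory that can
    still hold it, so big GPUs stay free for big jobs."""
    placements = {}
    for w in sorted(workloads, key=lambda w: w.mem_gb, reverse=True):
        candidates = [g for g in gpus if g.free_mem_gb >= w.mem_gb]
        if not candidates:
            placements[w.name] = None  # no GPU can host this job
            continue
        best = min(candidates, key=lambda g: g.free_mem_gb)
        best.free_mem_gb -= w.mem_gb  # reserve the memory
        placements[w.name] = best.name
    return placements
```

Real schedulers also weigh GPU model, interconnect and sharing/virtualization (e.g., partitioning one physical GPU across jobs), but the core matchmaking loop has this shape.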
Platform Teams in AI Adoption
Enterprises recognize the role of platform teams in advancing AI adoption. Half (50%) of respondents emphasized the importance of security for MLOps and LLMOps workflows, while 49% highlighted model deployment automation as a key responsibility. An additional 45% pointed to data pipeline management as an area where platform teams can contribute to AI success.
The survey reveals an emphasis on automation and self-service capabilities to enhance developer productivity and accelerate AI adoption. Nearly half (47%) of respondents are focusing on automating cluster provisioning, while 44% aim to provide self-service experiences for developers.
A vast majority (83%) of respondents believe that pre-configured AI workspaces with built-in MLOps and LLMOps tooling could save teams over 10% of time monthly. This data highlights the role platform teams play in ensuring efficient, productive AI development environments.
Kubernetes and Infrastructure Challenges
The study also revealed challenges related to Kubernetes complexity:
■ 45% of respondents cited managing cost visibility and controlling Kubernetes and cloud infrastructure costs as a top challenge.
■ 38% highlighted the complexity of keeping up with Kubernetes cluster lifecycle management using multiple, disparate tools.
■ 38% pointed to the establishment and upkeep of enterprise-wide standardization as a hurdle.
Nearly one-third (31%) of organizations state that the total cost of ownership for Kubernetes is higher than budgeted for or anticipated. Looking ahead, 60% report that reducing and optimizing costs associated with Kubernetes infrastructure remains a top management initiative for the coming year.
Automation for AI Success
To address AI implementation challenges, organizations are turning to automation and self-service capabilities. Forty-four percent of respondents advocate standardizing and automating infrastructure, while another 44% are focusing on automating Kubernetes cluster lifecycle management. Over a third (37%) highlighted the importance of reducing cognitive load on developer teams.
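Automating cluster lifecycle management typically means reconciling a declared desired state against what actually exists. The minimal sketch below illustrates that pattern: diff desired cluster specs against actual state and emit create/upgrade/delete actions for a pipeline to execute. The data shapes and function name (`reconcile`) are assumptions for illustration, not any specific tool's API.

```python
def reconcile(desired, actual):
    """Compare desired cluster specs against actual state and return
    the actions an automation pipeline would take, as
    (action, cluster_name, target_version) tuples."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec["version"]))
        elif actual[name]["version"] != spec["version"]:
            actions.append(("upgrade", name, spec["version"]))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))  # drift: retire it
    return sorted(actions)
```

Running the same reconciliation on a schedule is what turns lifecycle management from a set of manual, tool-by-tool tasks into a standardized, self-service workflow.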
Navigating the Future: Automation and Platform Teams Drive AI Success
As organizations work to maintain their competitive edge and navigate the AI landscape, adopting automated approaches to these implementation challenges will be essential. The survey results depict an enterprise landscape that's embracing AI and GenAI technologies while dealing with the practical challenges of implementation. By prioritizing automation and self-service and by leveraging the expertise of platform teams, organizations can build resilient, scalable AI architectures that drive business success.
As AI continues to evolve, the ability to integrate these technologies while managing complexity and costs will likely differentiate successful enterprises. Those that can navigate the implementation hurdles and create efficient, scalable AI infrastructures will be positioned to leverage the potential of AI and GenAI in driving innovation and business growth. By investing in automation, empowering platform teams and prioritizing developer productivity, enterprises can create the foundation necessary for successful AI implementation and unlock its transformative potential.
Industry News
Kurrent announced availability of public internet access on its managed service, Kurrent Cloud, streamlining the connectivity process and empowering developers with ease of use.
MacStadium highlighted its major enterprise partnerships and technical innovations over the past year. This momentum underscores MacStadium’s commitment to innovation, customer success and leadership in the Apple enterprise ecosystem as the company prepares for continued expansion in the coming months.
Traefik Labs announced the integration of its Traefik Proxy with the Nutanix Kubernetes Platform® (NKP) solution.
Perforce Software announced the launch of AI Validation, a new capability within its Perfecto continuous testing platform for web and mobile applications.
Mirantis announced the launch of Rockoon, an open-source project that simplifies OpenStack management on Kubernetes.
Endor Labs announced a new feature, AI Model Discovery, enabling organizations to discover the AI models already in use across their applications, and to set and enforce security policies over which models are permitted.
Qt Group is launching Qt AI Assistant, an experimental tool for streamlining cross-platform user interface (UI) development.
Sonatype announced its integration with Buy with AWS, a new feature now available through AWS Marketplace.
Endor Labs, Aikido Security, Arnica, Amplify, Kodem, Legit, Mobb and Orca Security have launched Opengrep to ensure static code analysis remains truly open, accessible and innovative for everyone.
Progress announced the launch of Progress Data Cloud, a managed Data Platform as a Service designed to simplify enterprise data and artificial intelligence (AI) operations in the cloud.
Sonar announced the release of its latest Long-Term Active (LTA) version, SonarQube Server 2025 Release 1 (2025.1).
Idera announced the launch of Sembi, a multi-brand entity created to unify its premier software quality and security solutions under a single umbrella.
Postman announced the Postman AI Agent Builder, a suite empowering developers to quickly design, test, and deploy intelligent agents by combining LLMs, APIs, and workflows into a unified solution.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the graduation of CubeFS.