DevOps teams are pressured to keep their builds running smoothly … but we all know that's hard to achieve. CI optimization is a constant battle against expectations and resources. From the sheer number of automated tests running in the pipeline, to flaky tests, to simply not enough CPU, DevOps teams are rarely 100% satisfied with their build process.
A build that used to finish in 10 minutes suddenly takes 45 minutes once automated tests enter the picture. And if failed tests are set to "retry," that 45-minute build can easily double in time!
So enter AI to save the day! We can't get away with a blog post without mentioning AI these days, can we? Apparently not. Because it really does help DevOps teams with their CI optimization.
The pressure to keep the pipeline moving is fierce, so keeping workflows running optimally is always top of mind for any DevOps leader. And although there is an abundance of tooling on the market, few tools exist that help optimize workflows to achieve a streamlined build process.
Here are the areas we see AI helping DevOps leaders with CI optimization.
How to Optimize a Pipeline Running a Lot of Automated Tests
When automated tests come into the pipeline, the historic answer has always been: add CPU and run those tests in parallel; that should fix the problem. Before the team knows it, DevOps is reciting Scotty's famous words from Star Trek: "Captain, I'm giving her all she's got!"
Parallel threads clamp down on test run times initially; however, teams quickly hit a maximum thread count where it's either cost-prohibitive to add more CPU or parallelization is maxed out and no longer reduces build times.
What's the next option?
This is where AI has come fully into the picture. There are promising developments in the AI risk-based testing space that are helping DevOps teams with their CI optimization efforts. Leveraging AI, DevOps can set the CI pipeline to auto-select and execute only the automated tests impacted by recent developer changes, on a per-commit basis.
Rather than running 100% of the test suite across the maximum number of parallel threads, the CI pipeline executes a smart subset, such as 20% of tests, while still catching the regressions. The pipeline therefore doesn't require as many resources to find regressions and can do so much faster and more efficiently.
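To make the idea concrete, here is a minimal sketch in Python of how a CI step might narrow a suite down to the tests touched by a commit. The impact_map linking source files to tests is a hypothetical placeholder; a real risk-based testing tool would learn and maintain that mapping from historical data rather than hard-coding it.

```
import subprocess

def changed_files(base: str = "origin/main") -> list[str]:
    """List the files changed between the base branch and HEAD using git."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def select_tests(impact_map: dict[str, set[str]], base: str = "origin/main") -> set[str]:
    """Return only the tests mapped to the files changed in this commit.

    impact_map is a hypothetical mapping of source file -> tests that
    exercise it, standing in for what an AI risk-based tool would infer.
    """
    selected: set[str] = set()
    for path in changed_files(base):
        selected |= impact_map.get(path, set())
    return selected
```

The selected subset can then be handed straight to the existing test runner, so the rest of the pipeline stays unchanged.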
Example for CI Optimization
An E2E test suite with 1,000 tests running through a CI pipeline in 10 parallel threads takes 60 minutes to complete. 60 minutes is a long time. So how can the team get that 60 minutes down further without throwing more hardware at the issue?
Inserting an AI risk-based testing tool into the pipeline allows it to auto-select and execute only the tests relevant to the developer's changes, thereby reducing the number of tests that need to be executed.
In this example, if Smart Test Selection is set at 10%, it would prioritize the top 100 E2E tests for execution based on the recent developer commits. Execution time would drop from 60 minutes down to approximately 6 minutes.
By adding an AI risk-based testing tool into the CI pipeline, the team still has the full 1,000-test suite to draw from, while AI picks the top 100 tests for a given change on a per-PR basis, saving time and resources. A worthy option to consider over simply throwing more hardware at the issue.
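As a quick back-of-the-envelope check of those numbers, here is a tiny estimator. The per-test time and thread count are assumptions lifted from the example above, and real suites rarely parallelize this evenly, so treat it as illustration only.

```
def estimated_wall_time(total_tests: int, selected_pct: float,
                        threads: int, minutes_per_test: float) -> float:
    """Rough wall-clock estimate: selected tests spread evenly across threads."""
    selected = int(total_tests * selected_pct)
    return (selected * minutes_per_test) / threads

# Full suite: 1,000 tests on 10 threads at 0.6 min/test -> 60.0 minutes
print(estimated_wall_time(1_000, 1.0, threads=10, minutes_per_test=0.6))
# Smart selection at 10%: ~100 tests -> 6.0 minutes
print(estimated_wall_time(1_000, 0.1, threads=10, minutes_per_test=0.6))
```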
How to Optimize a Pipeline with Flaky Tests
Flaky tests are the bane of DevOps and QA teams alike. These tests are a constant distraction, from breaking builds to chasing false positives. A test that fails despite no underlying code change is simply a test that will cause someone to waste time.
There are a few avenues DevOps leaders can pursue to reduce flaky test distraction, such as retries. For example, if a test fails, retry it 3 to 5 times to check whether it fails consistently enough to warrant raising a real defect.
The downside to this methodology, once again, is that it prolongs the CI build process!
A 10-minute build with 5 minutes' worth of automated tests and a high degree of flakiness can stretch to 25 minutes in some cases. Retries are good for helping determine flakiness, but there is a tradeoff: time and resources.
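The retry mechanics themselves are simple; the cost is entirely in wall-clock time. Here is a minimal Python sketch of the pattern (test runners such as pytest offer this out of the box via plugins like pytest-rerunfailures, so this is purely illustrative):

```
from typing import Callable

def run_with_retries(test: Callable[[], bool], retries: int = 3) -> bool:
    """Re-run a failing test up to `retries` extra times.

    Each retry adds the test's full duration back onto the build, which is
    how a flaky suite can stretch a 10-minute build toward 25 minutes.
    """
    for _ in range(retries + 1):
        if test():
            return True   # passed on some attempt: earlier failures look flaky
    return False          # failed every attempt: worth raising a real defect
```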
What's the next option?
Giving clean feedback to developers is hard. AI can do this while blocking out flaky distraction: it can recognize trends in test failures and pick up whether a test failed due to a real defect or not.
AI models can allow DevOps teams to fail CI builds only on real failures and avoid the flaky noise inherent in so many test suites. Even if there's a degree of flakiness in every run, builds can pass or fail based only on real defects, thereby giving clean signals to developers.
How Does AI Know Which Failures Are Flaky?
The AI trains on historical data, learning about the tests and defects, and builds a model that can determine whether a test failed on a real defect or is a false positive.
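As a toy illustration of the idea, and not any particular vendor's model, a simple classifier can be trained on per-failure features pulled from CI history. The features, the training data, and the is_real_defect helper below are all invented for the sketch; it assumes scikit-learn is available.

```
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-failure features gathered from CI history:
# [historical flake rate of the test,
#  1 if related code changed in this commit,
#  number of other failures in the same area]
X_history = np.array([
    [0.40, 0, 0],   # frequently flaky, unrelated change, isolated failure
    [0.02, 1, 3],   # stable test, related change, correlated failures
    [0.35, 0, 1],
    [0.01, 1, 2],
])
y_history = np.array([0, 1, 0, 1])  # 0 = flaky/false positive, 1 = real defect

model = LogisticRegression().fit(X_history, y_history)

def is_real_defect(flake_rate: float, code_changed: int, related_failures: int) -> bool:
    """Classify a new failure as a real defect (True) or flaky noise (False)."""
    return bool(model.predict([[flake_rate, code_changed, related_failures]])[0])
```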
For example
A UI test suite with 1,000 tests has 5% inherent flakiness. After every run, the team must review 50 test failures that are flaky, a major distraction that may cause developers to ignore the results (which is a very bad thing).
When the AI model is trained on this test suite and no real defects are introduced, the build can be set to pass even with 5% flakiness. Only when the AI model determines a failure to be a real defect will the build fail.
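The gating logic that sits on top of such a model can be very small. Continuing the earlier sketch (reusing the hypothetical is_real_defect helper and invented failure records), it might look like this:

```
def should_fail_build(failures: list[dict]) -> bool:
    """Fail the build only if at least one failure is classified as a real defect.

    `failures` is a hypothetical list of failure records, each carrying the
    same features the model was trained on; flaky-looking failures are ignored.
    """
    return any(
        is_real_defect(f["flake_rate"], f["code_changed"], f["related_failures"])
        for f in failures
    )

# 50 flaky-looking failures out of 1,000 tests, no real defects -> build passes
```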
Tooling such as this, which is highly configurable to team preference, can be leveraged to cut down on distraction, keep builds running smoothly, and optimize CI pipelines.
Conclusion
The advent of CI pipelines has accelerated team output to deliver faster than ever. However, as more weight is put on these pipelines, optimization strategies need to be put in place to maintain speed and efficiency.
Automated tests and flaky tests can burden the CI build process; however, new AI technologies, such as AI-powered risk-based testing tools, exist to counteract these CI bottlenecks and keep workflows streamlined and efficient.