In today's rapid DevOps iteration cycles, too many organizations push software and services out the door without being able to properly test for the bugs that only surface under production traffic. The result can be unanticipated downtime, a serious risk that can take the whole service down. And no one wants that. So, what can be done?
The Perils of Buggy Code
The average cost of downtime is $5,600 a minute
Downtime is expensive, both financially and to the brand. Gartner has estimated that the average cost of downtime is $5,600 a minute, well over $300,000 an hour. To see what this looks like in the real world, consider the major Microsoft Azure outage of November 2018, caused by issues introduced as part of a code update. The outage lasted 14 hours and affected customers throughout Europe and beyond. As organizations migrate from legacy systems to microservice environments in the cloud, outages and downtime pose a growing and serious problem.
The quality-testing tools in use today don't let developers know how a new software version will perform in production, or whether it will even work there. The Cloudbleed bug is an example of this problem: in February 2017, a Google researcher discovered a serious vulnerability in Cloudflare's service that traced back to a coding error introduced in a software upgrade months earlier.
Beyond these immediate impacts, flaws can lead to serious security issues later. Heartbleed, a vulnerability disclosed in 2014 and stemming from a programming mistake in the OpenSSL library, exposed large numbers of private keys and other sensitive information to the internet, enabling the theft of data that SSL/TLS encryption should have protected.
The Need to Test with Production Traffic
For today's increasingly frequent and fast development cycles, the way QA testing is typically done is no longer sufficient. Traditionally, DevOps teams haven't been able to test the production version and an upgrade candidate side by side. The QA testing used by many organizations consists of simulated test suites, which can't capture the myriad ways customers actually use the software. Just because upgraded code works under one set of testing parameters doesn't mean it will work in the unpredictable world of production usage.
In the Cloudflare incident, the error went entirely unnoticed by end-users for an extended period, and the flaw produced no system errors in the logs. Just as QA testing alone isn't sufficient, relying on system logs and user reports limits what can be detected.
Fixing bugs post-release ... estimated to be 5X as expensive as fixing them during design
Fixing bugs post-release gets pricey: it's estimated to be five times as expensive as fixing them during design, and it can lead to even costlier development delays. Giving software teams a way to identify potential bugs and security concerns prior to release can alleviate those delays. Clearly, testing with production traffic earlier in the development process saves time, money and pain. Software and DevOps teams need a way to test quickly and accurately how new releases will perform with real (not just simulated) customer traffic, while maintaining the highest standards.
If teams can evaluate release versions side by side, they can quickly locate any differences or defects. They also gain real insight into network performance while verifying the stability of upgrades and patches in a working environment. Doing this efficiently significantly reduces the likelihood of releasing software that later needs to be rolled back. Rollbacks are expensive, as the Microsoft Azure incident showed.
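One way to get those side-by-side results is to mirror production traffic: every request is served by the stable version as usual, while a copy is replayed against the upgrade candidate and the two responses are compared out of band. The Go sketch below illustrates the idea; the backend addresses, port and byte-for-byte comparison are illustrative assumptions, not any particular vendor's implementation.

```go
// Minimal traffic-shadowing proxy: end-users are always served by the stable
// version, while each request is also replayed against a candidate version
// and the two responses are compared. Backend URLs and port are assumptions.
package main

import (
	"bytes"
	"io"
	"log"
	"net/http"
	"time"
)

const (
	stableURL    = "http://stable.internal:8080"    // assumed current release
	candidateURL = "http://candidate.internal:8080" // assumed upgrade candidate
)

var client = &http.Client{Timeout: 5 * time.Second}

// replay sends a copy of the original request to the given backend.
func replay(base, method, uri string, header http.Header, body []byte) (*http.Response, error) {
	req, err := http.NewRequest(method, base+uri, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header = header
	return client.Do(req)
}

func handler(w http.ResponseWriter, r *http.Request) {
	body, _ := io.ReadAll(r.Body) // buffer the body so it can be sent twice
	r.Body.Close()
	method, uri, header := r.Method, r.URL.RequestURI(), r.Header.Clone()

	// Serve the end-user from the stable version, exactly as before.
	// (Response headers are omitted here for brevity.)
	stableResp, err := replay(stableURL, method, uri, header, body)
	if err != nil {
		http.Error(w, "upstream error", http.StatusBadGateway)
		return
	}
	stableBody, _ := io.ReadAll(stableResp.Body)
	stableResp.Body.Close()
	w.WriteHeader(stableResp.StatusCode)
	w.Write(stableBody)

	// Shadow the same request to the candidate; end-users never see this.
	go func() {
		candResp, err := replay(candidateURL, method, uri, header, body)
		if err != nil {
			log.Printf("candidate unreachable for %s %s: %v", method, uri, err)
			return
		}
		candBody, _ := io.ReadAll(candResp.Body)
		candResp.Body.Close()
		if candResp.StatusCode != stableResp.StatusCode || !bytes.Equal(candBody, stableBody) {
			log.Printf("DIVERGENCE %s %s: stable=%d candidate=%d",
				method, uri, stableResp.StatusCode, candResp.StatusCode)
		}
	}()
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```

Because the candidate's response is never returned to the user, a defective upgrade shows up as logged divergences rather than as customer-facing failures.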
Teams sometimes stage rollouts, which means running multiple software versions in production: a small percentage of users gets the new version while most stay on the current one. Unfortunately, this approach to testing with production traffic is cumbersome to manage, costly and still vulnerable to rollbacks. The other problem with rolling deployments is that while failures can be caught early in the process, they are, by design, only caught after they've affected end-users.
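For comparison, a staged rollout is usually implemented as weighted routing keyed on something stable, so a fixed slice of users lands on the new version while everyone else stays put. A minimal sketch, assuming a hypothetical X-User-ID header and the same placeholder backends as above:

```go
// Minimal staged-rollout router: hash a stable user identifier and send a
// fixed percentage of users to the canary version. The header name, rollout
// percentage and backend URLs are illustrative assumptions.
package main

import (
	"hash/fnv"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

const canaryPercent = 10 // assumed rollout fraction

func main() {
	stable, _ := url.Parse("http://stable.internal:8080")    // assumed current release
	canary, _ := url.Parse("http://candidate.internal:8080") // assumed new version

	stableProxy := httputil.NewSingleHostReverseProxy(stable)
	canaryProxy := httputil.NewSingleHostReverseProxy(canary)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		h := fnv.New32a()
		h.Write([]byte(r.Header.Get("X-User-ID"))) // hypothetical user key
		if h.Sum32()%100 < canaryPercent {
			canaryProxy.ServeHTTP(w, r) // ~10% of users see the new version
		} else {
			stableProxy.ServeHTTP(w, r) // everyone else stays on the current one
		}
	})

	log.Fatal(http.ListenAndServe(":9090", nil))
}
```

Hashing a stable identifier keeps each user pinned to one version across requests, but the sketch also makes the drawback concrete: any failure on the canary is observed only after it has reached that slice of real users.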
Issues Remain
Important questions arise at this point. How do you know whether the new software is causing the "failures"? How many "failures" will the business tolerate before rolling back the software, given that it never sees side-by-side results from the same customer? Those failures disrupt the end-user experience, which ultimately affects business operations and company reputation. And a staged slice may not be a large enough sample to gauge how the new release will behave across the entire customer population.
Another persistent issue is cost. Even with only 10% of customers on the new version, if downtime costs more than $300,000 an hour, a failure affecting that 10% of users can still cost more than $30,000 an hour. The impact is reduced, of course, but it's still significant, and that's before counting the uncertainty of when to roll back.
A Better Way
Gone are the days when standard QA testing sufficed. Instead, DevOps teams can test in production and evaluate release versions side by side, reducing the risk of bugs that comes with today's rapid development cycles. This approach helps organizations release products that are secure and high-quality while avoiding expensive rollbacks and staged deployments.
Industry News
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the graduation of CubeFS.
BrowserStack and Bitrise announced a strategic partnership to revolutionize mobile app quality assurance.
Mendix, a Siemens business, announced the general availability of Mendix 10.18.
Red Hat announced the general availability of Red Hat OpenShift Virtualization Engine, a new edition of Red Hat OpenShift that provides a dedicated way for organizations to access the proven virtualization functionality already available within Red Hat OpenShift.
Contrast Security announced the release of Application Vulnerability Monitoring (AVM), a new capability of Application Detection and Response (ADR).
Red Hat announced the general availability of Red Hat Connectivity Link, a hybrid multicloud application connectivity solution that provides a modern approach to connecting disparate applications and infrastructure.
Appfire announced 7pace Timetracker for Jira is live in the Atlassian Marketplace.
SmartBear announced the availability of SmartBear API Hub featuring HaloAI, an advanced AI-driven capability being introduced across SmartBear's product portfolio, and SmartBear Insight Hub.
Azul announced that the integrated risk management practices for its OpenJDK solutions fully support the stability, resilience and integrity requirements in meeting the European Union’s Digital Operational Resilience Act (DORA) provisions.
OpsVerse announced a significantly enhanced DevOps copilot, Aiden 2.0.
Progress received multiple awards from prestigious organizations for its inclusive workplace, culture and focus on corporate social responsibility (CSR).
Red Hat has completed its acquisition of Neural Magic, a provider of software and algorithms that accelerate generative AI (gen AI) inference workloads.
Code Intelligence announced the launch of Spark, an AI test agent that autonomously identifies bugs in unknown code without human interaction.