In today's rapid DevOps iteration cycle, too many organizations push their software and services out without properly testing for bugs that only show up under production traffic. The result can be unanticipated downtime, and in the worst case a single bug can take down the whole service. No one wants that. So, what can be done?
The Perils of Buggy Code
The average cost of downtime is $5,600 a minute
Downtime is expensive, both financially and to the brand. Gartner has estimated that the average cost of downtime is $5,600 a minute, which works out to well over $300,000 an hour. As a real-world example, Microsoft Azure suffered a major outage in November 2018 caused by issues introduced as part of a code update. The outage lasted 14 hours and affected customers throughout Europe and beyond. As organizations migrate from legacy systems to microservices environments in the cloud, outages and downtime pose a growing and serious problem.
The quality-testing tools in use today don't let developers know how a new software version will perform in production, or whether it will even work there. The Cloudbleed bug is an example of this problem: a coding error in a software upgrade from security vendor Cloudflare introduced a serious vulnerability that leaked sensitive data for months before a Google researcher discovered it in February 2017.
In addition to the immediate impacts mentioned above, flaws can lead to serious security issues later. Heartbleed, a vulnerability disclosed in 2014 that stemmed from a programming mistake in the OpenSSL library, left large numbers of private keys and other sensitive information exposed to the internet, enabling theft of data that would otherwise have been protected by SSL/TLS encryption.
The Need to Test with Production Traffic
For today's increasingly frequent and fast development cycles, the way QA testing is typically done is no longer sufficient. Traditionally, DevOps teams haven't been able to test the production version and an upgrade candidate side by side. The QA testing used by many organizations consists of simulated test suites, which may not give comprehensive insight into the myriad ways customers actually use the software. Just because upgraded code works under one set of testing parameters doesn't mean it will work in the unpredictable world of production usage.
In the case of the Cloudflare incident, the error went entirely unnoticed by end users for an extended period, and the flaw produced no system errors in the logs. Just as QA testing alone isn't sufficient, relying on system logs and user reports limits what can be detected.
Fixing bugs post-release ... estimated to be 5X as expensive as fixing them during design
Fixing bugs post-release gets pricey. It's estimated to be five times as expensive as fixing them during design, and it can lead to even costlier development delays. Giving software teams a way to identify potential bugs and security concerns prior to release can alleviate those delays. Clearly, testing with production traffic earlier in the development process can save time, money and pain. Software and DevOps teams need a way to test quickly and accurately how new releases will perform with real (not just simulated) customer traffic while maintaining the highest standards.
If teams have the capability to evaluate release versions side by side, they can quickly locate any differences or defects. They can also gain real insight into network performance while verifying the stability of upgrades and patches in a working environment. Doing this efficiently significantly reduces the likelihood of releasing software that later needs to be rolled back, and rollbacks are expensive, as the Microsoft Azure incident showed.
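To make this concrete, here is a minimal sketch, in Python, of what side-by-side evaluation with production traffic can look like: each sampled request is sent to both the current production version and a release candidate, and any divergence in status, body or latency is flagged. The service URLs, the /orders endpoint and the comparison rules are hypothetical placeholders; in practice, mirroring is usually done at the proxy or service-mesh layer, only the production response is returned to the user, and the candidate's response is evaluated out of band.

# Minimal sketch (hypothetical endpoints): mirror a sampled request to the
# production version and a release candidate, then compare their responses.
import requests

PRODUCTION_URL = "http://orders-v1.internal:8080"  # assumed current release
CANDIDATE_URL = "http://orders-v2.internal:8080"   # assumed upgrade candidate


def mirror_and_compare(path: str, params: dict) -> dict:
    """Send the same request to both versions and report any divergence."""
    prod = requests.get(f"{PRODUCTION_URL}{path}", params=params, timeout=5)
    cand = requests.get(f"{CANDIDATE_URL}{path}", params=params, timeout=5)
    return {
        "path": path,
        "status_match": prod.status_code == cand.status_code,
        "body_match": prod.text == cand.text,
        "prod_latency_ms": round(prod.elapsed.total_seconds() * 1000, 1),
        "cand_latency_ms": round(cand.elapsed.total_seconds() * 1000, 1),
    }


if __name__ == "__main__":
    # Replay a small sample of production traffic against both versions.
    sampled_requests = [("/orders", {"customer": "42"}), ("/orders", {"customer": "7"})]
    for path, params in sampled_requests:
        report = mirror_and_compare(path, params)
        if report["status_match"] and report["body_match"]:
            print("match:", report)
        else:
            print("DIVERGENCE:", report)  # the candidate behaves differently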
Teams sometimes stage rollouts, which requires running multiple software versions in production: a small percentage of users is put on the new version while most users stay on the status quo. Unfortunately, this approach to testing with production traffic is cumbersome to manage, costly and still vulnerable to rollbacks. The other problem with these rolling deployments is that while failures can be caught early in the process, they are, by design, only caught after they've affected end users.
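By way of contrast, the sketch below shows the core of such a staged rollout: each user is deterministically assigned to either the current version or the new one, with an assumed 10% canary share and made-up version labels. Hashing the user ID keeps a given user on the same version across requests, but any defect in the candidate still surfaces only after that 10% of users has been exposed to it.

# Minimal sketch of a staged (canary) rollout. The 10% share and the
# version labels are illustrative assumptions, not a real deployment.
import hashlib

CANARY_PERCENT = 10  # share of users routed to the new release


def choose_version(user_id: str) -> str:
    """Deterministically pin a user to the canary or the stable release."""
    # Hash the user ID so the same user always lands on the same version.
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return "v2-candidate" if bucket < CANARY_PERCENT else "v1-stable"


if __name__ == "__main__":
    for user in (f"user-{i}" for i in range(10)):
        print(user, "->", choose_version(user))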
Issues Remain
Important questions arise at this point. How do you know whether the new software is actually causing the "failures"? And since the business never observes side-by-side results from the same customer, how many "failures" will it tolerate before recalling or rolling back the software? Each of those failures disrupts the end-user experience, which ultimately affects business operations and company reputation. Staging may also not provide a large enough sample to gauge the efficacy of the new release against the entire customer population.
Another persistent issue is cost. Even if only 10% of customers are staged onto the new version, with downtime costing more than $300,000 an hour, a failure affecting that 10% of users could still cost more than $30,000 per hour. The impact is reduced, of course, but it's still significant, and that's before counting the uncertainty of when to roll back.
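As a back-of-the-envelope illustration of that exposure, the short calculation below simply multiplies out the figures already cited: Gartner's average per-minute downtime cost and an assumed 10% canary share.

# Rough exposure estimate for a failure during a staged rollout, using the
# figures cited above; this is an illustration, not a forecast.
COST_PER_MINUTE = 5_600   # Gartner's average cost of downtime, USD/minute
CANARY_SHARE = 0.10       # assumed fraction of users on the new version

cost_per_hour = COST_PER_MINUTE * 60            # about $336,000 per hour
canary_exposure = cost_per_hour * CANARY_SHARE  # about $33,600 per hour

print(f"Full outage:  ${cost_per_hour:,.0f}/hour")
print(f"10% canary:   ${canary_exposure:,.0f}/hour")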
A Better Way
Gone are the days when standard QA testing sufficed. DevOps teams now have the option of testing with production traffic and evaluating release versions side by side, reducing the bug risk that comes with today's rapid development cycles. This approach helps organizations release secure, high-quality products while avoiding expensive rollbacks and staged deployments.
Industry News
Check Point® Software Technologies Ltd. has been recognized as a Leader and Fast Mover in the latest GigaOm Radar Report for Cloud-Native Application Protection Platforms (CNAPPs).
Spectro Cloud, provider of the award-winning Palette Edge™ Kubernetes management platform, announced a new integrated edge-in-a-box solution featuring the Hewlett Packard Enterprise (HPE) ProLiant DL145 Gen11 server to help organizations deploy, secure, and manage demanding applications for diverse edge locations.
Red Hat announced the availability of Red Hat JBoss Enterprise Application Platform (JBoss EAP) 8 on Microsoft Azure.
Launchable by CloudBees is now available on AWS Marketplace, a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on Amazon Web Services (AWS).
Kong closed $175 million in up-round Series E financing, with a mix of primary and secondary transactions at a $2 billion valuation.
Tricentis announced that GTCR, a private equity firm, has signed a definitive agreement to invest $1.33 billion in the company, valuing the enterprise at $4.5 billion and further fueling Tricentis for future growth and innovation.
Check Point® Software Technologies Ltd. announced the new Check Point Quantum Firewall Software R82 (R82) and additional innovations for the Infinity Platform.
Sonatype and OpenText are partnering to offer a single integrated solution that combines open-source and custom code security, making finding and fixing vulnerabilities faster than ever.
Red Hat announced an extended collaboration with Microsoft to streamline and scale artificial intelligence (AI) and generative AI (gen AI) deployments in the cloud.
Endor Labs announced that Microsoft has natively integrated its advanced SCA capabilities within Microsoft Defender for Cloud, a Cloud-Native Application Protection Platform (CNAPP).
Progress announced new powerful capabilities and enhancements in the latest release of Progress® Sitefinity®.
Red Hat announced the general availability of Red Hat Enterprise Linux 9.5, the latest version of the enterprise Linux platform.
Securiti announced a new solution: Security for AI Copilots in SaaS apps.
Spectro Cloud completed a $75 million Series C funding round led by Growth Equity at Goldman Sachs Alternatives with participation from existing Spectro Cloud investors.