It seems the term DevOps is popping up everywhere these days. Everyone is talking about it, and nearly every IT manager seems to be moving their organization to DevOps in some way, shape or form over the next few months or years. The rapid ascension of cloud services has driven significant change in businesses, and the move to DevOps is one of the outcomes. Combined with the broader adoption of agile methodologies, the rise of the cloud made new processes and new ways of thinking the logical outcome.
The first question to ask is: what is DevOps? Answers range from a philosophy encompassing collaboration, communication, process and integration to a new type of technical team built around the "full stack developer". The truth is likely a bit more pragmatic at the enterprise level. Organizations need to move more quickly. They need to take advantage of new technologies, processes and solutions, and in some cases find new people, to deliver on a new vision.
However, the biggest question enterprises need to ask going in is: "What is the expected outcome?" The answer that comes up most often is velocity. Large enterprises have become encumbered by the weight of their organizations, governance and processes, without the outcome necessarily being better products. Smaller, more nimble organizations are outpacing the behemoths, and something has to give. DevOps intends to encompass those changes across development, infrastructure and process, or so the story goes.
There are cloud companies offering complete DevOps infrastructure in the cloud, including development, testing, staging and deployment. All, in theory, nearly at the click of a mouse (or tap of a mobile device). Yet process isn't that simple to change. Nearly every solution on the market today is missing major pieces, the most glaring of which is how to do rapid, parallel, unified testing in a DevOps world.
Testing has often been the lowest priority in the world of software development. That's not because it isn't important or valued, but because it is difficult, the tools and platforms available are time-consuming to implement, and it is the area that most often gets the compressed end of the schedule. Talk to large enterprises and see how many of them have outsourced much of their QA to the lowest-cost offshore firm they can find.
A New Paradigm Needs to be a Complete Paradigm
The last two years have seen deeper and more sophisticated attacks on software and companies. People's data has been stolen and published. IP has been lost to foreign countries, and lives have been ruined. The cost has run into the billions. US News & World Report estimates that hackers are costing consumers and companies between $375 billion and $575 billion annually, and those numbers are expected to rise. With the rapid proliferation of applications and devices, the risks associated with hacking grow greater on nearly a daily basis.
Yet not only is testing rarely discussed in conversations about DevOps, unified testing is ignored completely, mostly because it has not been widely available outside of the major Internet companies that paid the price to roll their own platforms. Unified testing is critical to successful DevOps, and if anything, the move to greater velocity in a world of increased risk should elevate the discussion about what testing should really mean in a world of agile development, DevOps and continuous integration. Some CIOs will proudly tell you that they are delivering 2-3 releases to the public a day. The real question for those CIOs is whether they are confident in the quality of what they have released.
Unified Testing Needs to be Integrated Unified Testing
Ultimately, two types of organizations will succeed in the future at limiting the hacks and buggy releases they put out to the world. Some will build testing into their environments from scratch. That path demands great commitment and great cost, and only makes sense for companies like Google. Organizations with hundreds or thousands of internal and external facing applications, however, need a commercially available solution. The continuous integration toolset continues to evolve: solutions such as Docker and Drone are bringing a DevOps mindset to cloud-based deployments, and Drone makes it easier for organizations to automate builds and surface bugs earlier. But when it comes to actual testing, the paradigm has essentially remained unchanged for the last 20 years: developers writing test code based on use cases.
Often, these test developers sit on a different continent from the application developers, and many separate, siloed teams write test code for unit and functional tests apart from the teams working on load and performance tests. Security testing, the need for which grows daily, is often relegated to generalized checks for SQL injection and other hacks, run late in the game before major releases rather than at every build. In truth, great companies will (and do) run unit, functional, performance, load, DDoS, app penetration security, compatibility, database and other tests at every build. None are relegated to a back seat or, worse, a "CYA" once-in-a-while test. Great quality means hundreds of tests in parallel at every build, as sketched below. And that means writing once, with no silos, and embracing this change to attain true DevOps productivity, scale and quality.
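As a rough illustration of what "every category in parallel at every build" can look like, here is a minimal sketch in Python using only the standard library. The suite names, directory layout and pytest commands are illustrative assumptions, not any particular vendor's API; the point is that no category waits in a silo behind another.

    # Hypothetical sketch: fan one build out to every test category at once.
    # Suite names and commands are illustrative stand-ins, not a vendor API.
    from concurrent.futures import ThreadPoolExecutor, as_completed
    import subprocess

    SUITES = {
        "unit":       ["pytest", "tests/unit"],
        "functional": ["pytest", "tests/functional"],
        "security":   ["pytest", "tests/security"],   # e.g. SQL-injection checks
        "load":       ["pytest", "tests/load"],
        "database":   ["pytest", "tests/database"],
    }

    def run_suite(name, cmd):
        """Run one suite as a subprocess and report whether it passed."""
        result = subprocess.run(cmd, capture_output=True)
        return name, result.returncode == 0

    def run_all_suites():
        """Launch every category in parallel; collect pass/fail per category."""
        results = {}
        with ThreadPoolExecutor(max_workers=len(SUITES)) as pool:
            futures = [pool.submit(run_suite, n, c) for n, c in SUITES.items()]
            for future in as_completed(futures):
                name, passed = future.result()
                results[name] = passed
        return results

    if __name__ == "__main__":
        for name, passed in sorted(run_all_suites().items()):
            print(f"{name}: {'PASS' if passed else 'FAIL'}")

In practice each entry would invoke a real runner for that category, but the shape is the same: one definition of the test matrix, executed together on every build rather than parceled out to separate teams on separate schedules.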
The focus needs to be on velocity with quality. CIOs need confidence that every build has the right level of unit, load, security, database, browser and performance testing, ultimately feeding into synthetic APM in production. Most tools on the market today attempt to address only one area, mimicking the silos that corporate culture built over decades. Offerings that only do load testing for Web-based applications, for example, tie a team back to familiar silos and produce no increase in quality or productivity. Truly agile organizations require a single platform that does not require scripting for most use cases, can run all the different types of tests, from functional to security to performance, and can integrate with a continuous integration environment, gating each build on the results (see the sketch below). Only then will we see enterprises become truly agile, with quality and security integral to what is being delivered.
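Continuing the hypothetical sketch above, the CI integration can be as simple as a gate that refuses to promote a build unless every category passed. The gate_build() helper and the run_all_suites() function it consumes are assumptions carried over from the earlier sketch, not a real product interface.

    # Hypothetical build gate: block promotion unless every category passed.
    # Assumes the run_all_suites() helper sketched above; the exit code
    # is what tells the CI server whether the pipeline may proceed.
    import sys

    def gate_build(results):
        """Exit non-zero (failing the CI job) if any test category failed."""
        failed = sorted(name for name, passed in results.items() if not passed)
        if failed:
            print("Build blocked; failing categories: " + ", ".join(failed))
            sys.exit(1)
        print("All categories passed; build may be promoted.")

In a CI environment such as Drone, a step like this would run after the parallel suites, and its exit code alone would decide whether the release candidate moves forward, which is what "velocity with quality" means in concrete terms.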
Tony Rems is CTO at Appvance.