DevOps is not really about the tools. DevOps is about people and processes as much as – if not more than – tools. Without cultural and process changes, technology alone cannot enable DevOps success. Several of the top experts in the DevOps arena made this very clear while DEVOPSdigest was compiling this list. That being said, a variety of technologies can be critical to supporting the people and processes that drive DevOps.
To develop this list, DEVOPSdigest asked experts from across the industry for their recommendations on a key technology required for DevOps. According to the many experts who have contributed their opinions to this massive 5-part list, the DevOps toolkit includes a wide range of both traditional and cutting-edge technologies. The purpose of this list is not to finalize a technology checklist for DevOps, but rather to explore how many different types of tools can impact, and enable, your DevOps initiative.
Looking at the many ways experts define DevOps, it is no surprise that many of the technologies on the list of must-have DevOps tools are designed to support those defining aspects of DevOps: collaboration, breaking down silos, bringing Dev and Ops together, agile development, continuous delivery and automation, to name a few.
Part 1 of the list covers performance management, monitoring and analytics.
1. APPLICATION PERFORMANCE MANAGEMENT (APM)
There are clearly so many tools vital to DevOps advancement, but Application Performance Management is the one that stands out today as it has become so highly ingrained as the primary vehicle by which practitioners aggregate and share critical data. APM has a tremendous halo effect on the maturation of DevOps in general, serving as the de facto measuring stick for applications and process improvement, as well as a practical sounding board for experimentation. At the end of the day, organizations are employing a wide range of metrics to gauge various aspects of DevOps progress, but APM tools supply the most critical view – how this work is translating directly into end user interactions.
Aruna Ravichandran
VP, DevOps Product and Solutions Marketing, CA Technologies
Using an Application Performance Management (APM) tool in a consistent manner to cover all environments across the SDLC (i.e. Dev, Test, QA, and Prod) will help facilitate an amplified feedback loop for application delivery. APM has the potential to lay the foundation for shifting your development timeline left to improve time to market, fostering smoother code deployments and minimizing anomalies in production.
Larry Dragich
Director of Customer Experience Management at the Auto Club Group and Founder of the APM Strategies Group on LinkedIn.
Read Larry Dragich's latest blog on APMdigest: End User Experience - Perceptions of Performance
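To illustrate the consistency Dragich describes, here is a minimal sketch in Python. Every name in it (record_metric, APP_ENV, the traced decorator) is hypothetical, a stand-in for whatever your APM agent provides; the point is that the same measurement runs identically in Dev, Test, QA, and Prod, tagged by environment, so the feedback loop compares like with like.

```python
# Hypothetical sketch: one timing decorator used unchanged in every
# environment, so APM data from Dev, Test, QA, and Prod is comparable.
import os
import time
from functools import wraps

APP_ENV = os.environ.get("APP_ENV", "dev")  # dev, test, qa, or prod

def record_metric(name: str, value_ms: float, env: str) -> None:
    # Stand-in for shipping a data point to your APM backend.
    print(f"metric={name} env={env} duration_ms={value_ms:.1f}")

def traced(func):
    """Record wall-clock latency for each call, tagged with the environment."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            record_metric(f"{func.__name__}.latency", elapsed_ms, APP_ENV)
    return wrapper

@traced
def checkout(order_id: str) -> str:
    time.sleep(0.05)  # simulated work
    return f"order {order_id} processed"

checkout("42")
```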
Enterprise production-focused Application Performance Monitoring (APM) products are essential for giving IT dev, ops, and business teams real-time visibility into how applications are performing and supporting the business. APM is essential to the DevOps feedback cycle, allowing IT operations to uncover information, such as capacity and application usage, so that architects and developers can design and build better quality applications. On top of this, a good APM solution should promote collaboration between business, IT dev, and ops teams, especially during emerging app issues, so that business impact can be avoided.
John Rakowski
Director of Technology Strategy, AppDynamics
Of all the must-have tools, APM is the one most often forgotten, yet it is the one that gives developers insight into the behavior of their code in production and the ability to detect anomalies and defects and fix them as soon as possible. That’s the tool that brings the most bang for your DevOps bucks. DevOps without real Ops is not sufficient. Furthermore, if Dev and Ops share the same operational data, that will reduce finger-pointing and enable faster and more effective troubleshooting, resulting in a better user experience for your customers.
Daniel Schrijver
Senior Principal Product Marketing Director, Oracle
A DevOps culture has a lot to do with trust and transparency using actionable metrics throughout the entire application lifecycle. At a minimum, an enterprise needs a core understanding of user interactions, transactions and overall digital performance. Core APM tools can give you the ability to synthetically and natively exercise performance while proactively uncovering problems that ensure users have the optimal experience and remain engaged. In an ideal world, an enterprise should have a comprehensive yet consistent suite of tools that include complete user monitoring, allowing the enterprise to focus on quality metrics and also identify performance issues as early in the application lifecycle as possible.
Brett Hofer
Global DevOps Practice Lead, Dynatrace
2. MONITORING
While DevOps is most often associated with automation and continuous delivery/integration tools, I believe the single most important tool that organizations need to properly adopt and use to make a transformation to DevOps is a monitoring system. You cannot improve what you can't measure. Implementing key metrics across the business to help recognize areas that are most in need of improvement is the key to identifying the bottlenecks that prevent DevOps adoption. If the metrics show that certain workflows are inefficient because of bloated processes or interaction between multiple groups, then those workflows need to be reviewed and changed. Insight into software development, deployment pipelines, and business process efficiency provides a complete picture of the areas in need of improvement. Once the problematic areas are identified, other tools can be plugged in where needed to improve and streamline the delivery pipeline.
Leon Fayer
VP, OmniTI
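To make Fayer's point concrete, here is a minimal sketch of the kind of measurement he describes: given timestamped events for one change moving through the pipeline, compute per-stage durations and flag the bottleneck. The stage names and event format are invented for illustration.

```python
# Hypothetical sketch: derive lead time and the bottleneck stage from
# timestamped pipeline events ("you cannot improve what you can't measure").
from datetime import datetime

events = [  # (stage, start, end) for one change moving through the pipeline
    ("commit-to-build", "2016-01-11T09:00", "2016-01-11T09:20"),
    ("build-to-test",   "2016-01-11T09:20", "2016-01-11T12:05"),
    ("test-to-deploy",  "2016-01-11T12:05", "2016-01-11T12:30"),
]

def minutes(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 60

durations = {stage: minutes(s, e) for stage, s, e in events}
lead_time = sum(durations.values())
bottleneck = max(durations, key=durations.get)
print(f"lead time: {lead_time:.0f} min; "
      f"bottleneck: {bottleneck} ({durations[bottleneck]:.0f} min)")
```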
Hands down, one of the most important tools for DevOps success, if not the most important, is end-to-end monitoring with automation. The DevOps process requires everything to be monitored and much of what that monitoring entails will need to be automated. Visibility across the application stack and into everything that drives performance is critical for the speed and collaboration that is the primary goal of a DevOps strategy. The impact of every change should be known. And to move faster, alerts, remediation and more should be automated.
Gerardo Dada
VP, Product Marketing and Strategy, SolarWinds
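A minimal sketch of what "alerts, remediation and more should be automated" can look like in practice. The metrics query and the alert channel are hypothetical stand-ins, and the canned remediation (a service restart) is just one example; a real deployment would wire this into the monitoring system's own alert-action mechanism.

```python
# Hypothetical sketch: threshold alert with an automated remediation hook.
import subprocess

ERROR_RATE_THRESHOLD = 0.05  # alert when more than 5% of requests fail

def current_error_rate() -> float:
    return 0.08  # stand-in for a query against your monitoring backend

def alert(message: str) -> None:
    print(f"ALERT: {message}")  # stand-in for email/chat/incident notification

def remediate(service: str) -> None:
    # Example automated remediation: restart the unhealthy service.
    # Requires appropriate privileges on a systemd host.
    subprocess.run(["systemctl", "restart", service], check=False)

rate = current_error_rate()
if rate > ERROR_RATE_THRESHOLD:
    alert(f"error rate {rate:.0%} exceeds {ERROR_RATE_THRESHOLD:.0%}")
    remediate("checkout-service")
```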
Monitoring tools that can integrate easily into your stack are critical for enabling DevOps. With microservices architecture, there are hundreds, if not thousands, of pieces to the DevOps puzzle that only become more complicated if you can't quickly and easily get visibility into the health of those services. Monitoring every layer of your infrastructure is critical, but in order to reduce complexity, those monitoring tools must be able to work together to show the bigger picture — from your servers to your API endpoints — while still allowing you to isolate problems down to the microscopic level.
Ashley Waxman
Marketing Lead, Runscope
DevOps exists in order for IT to be more responsive to the requirements of the business. The business wants an infinite number of enhancements implemented every minute. Your entire tool chain must therefore operate at the clock rate of your DevOps initiative. This places a new and special burden upon your DevOps monitoring tools. Many great new monitoring tools have been created to address the new requirements of Agile Development, DevOps, Containerized Micro-Services and highly distributed applications. New monitoring tools exist at the application performance layer, the virtualized network layer, the software defined infrastructure layer, and the virtualized storage layer. New monitoring tools exist across data types that are collected, with some focusing upon metrics and others focusing upon logs. This explosion of new requirements and the explosion of new monitoring tools to meet these requirements lead to the need to integrate these streams of data into forms easily consumable and useful to IT Operations and other constituencies.
Bernd Harzog
CEO, OpsDataStore
The important thing to remember about DevOps projects is that they end and are turned over to IT operations. Make this turnover fast and easy by planning for integration with data center monitoring from the beginning. Your most important tool is the one that lets you move onto the next project!
Kent Erickson
Alliance Strategist, Zenoss
3. END USER EXPERIENCE MONITORING
The parts of DevOps that turn the tide and start exposing production data to developers are also increasingly deployed, but the processes around them are not. For example, tools that expose the actual end user experience in production need to become more transparent to engineering departments instead of just operations. Even more, many such tools provide value to the business side as well, so a successful deployment in the user experience monitoring domain would satisfy even more stakeholders.
Ivo Mägi
Co-founder and Head of Product, Plumbr
4. SYNTHETIC MONITORING
DevOps implies that Dev and Ops need to communicate well. Using application/API-driven synthetic monitoring will always give you the yardstick to measure your success.
Sven Hammar
Founder and CEO, Apica
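Hammar's yardstick can be as simple as a scheduled script that exercises an API the way a client would and records pass/fail plus latency. The endpoint URL below is a placeholder; a real check would run on a schedule from multiple locations and ship the result to a monitoring backend.

```python
# Hypothetical sketch: a single synthetic check against an API endpoint.
import time
import urllib.request

CHECK_URL = "https://example.com/api/health"  # placeholder endpoint

def synthetic_check(url: str, timeout: float = 5.0) -> dict:
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False  # treat any network or HTTP failure as a failed check
    latency_ms = (time.perf_counter() - start) * 1000
    return {"url": url, "ok": ok, "latency_ms": round(latency_ms, 1)}

print(synthetic_check(CHECK_URL))
```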
The DevOps toolbox is absolutely jam-packed, but one tool that cannot be overlooked is synthetic performance monitoring – as a complement to real user measurement (RUM). Going beyond providing a view of the user experience, performance monitoring tools must also be able to exactly pinpoint the source of bottlenecks – ideally before they impact a large number of users. This gives DevOps teams the opportunity to find and fix problems accurately and expeditiously, both before and during production. Given increased IT complexity both within the data center and across the Internet, finding the source of performance problems – whether for internal enterprise applications, or customer-facing web applications – has the potential to grow harder and more time-consuming. Synthetic performance monitoring data's ability to swiftly and accurately identify problem sources before they affect the digital user experience is the only way to reconcile two competing demands – growing user performance expectations, and faster and more frequent software roll-outs.
Dennis Callaghan
Director of Industry Innovation, Catchpoint
5. INFRASTRUCTURE MANAGEMENT
If you are stranded on a desert island (but with a strong and reliable Internet connection) you still need to ensure your infrastructure is performing and your users are happy with their experience. What’s needed is a solid and extensible Digital Infrastructure Management Platform that can collect data from every layer of your stack, analyze what’s normal, what’s not, and visualize the impact of anomalous behavior. This will allow you to catch issues that can affect your operations before they truly impact your business.
Vess Bakalov
Co-Founder and CTO, SevOne
Traditional operational tools for data centers are generally geared towards configuration management and monitoring, but they offer no visibility into encapsulated traffic for Infrastructure-as-a-Service clouds. From our own experience in DevOps, and by working with operators ourselves, we've seen firsthand the unmet need for analytic and end-to-end operational tools for network management. End-to-end operational tools that facilitate provisioning and orchestration are key enablers for organizations pursuing a DevOps transformation from traditional IT service management.
Adam Johnson
VP of Business, Midokura
6. INCIDENT MANAGEMENT
Organizations must understand that tools are only one part of the answer. They must have the people, processes, and tools in place in order to successfully implement a DevOps environment. There are a number of helpful tools in the DevOps ecosystem. You want to think along the lines of productivity, repeatability, and safety when considering tools best suited to facilitate a DevOps mindset. In the end, you want there to be direct paths in place from an engineer to any given environment for delivery of code, issue resolution (triage, notify, fix, and learn), and maintenance, and one way to do this is with streamlined incident management solutions. Being in a position to detect quickly and fix quickly is also key to having a successful DevOps environment in your organization.
Tim Armandpour
VP of Engineering, PagerDuty
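As a rough illustration of the "triage, notify, fix, and learn" path Armandpour describes, here is a sketch that classifies an alert and posts it to an incident webhook. The webhook URL, payload shape, and severity rule are all hypothetical; a real team would route this through its incident management service's own API.

```python
# Hypothetical sketch: triage an alert, then notify via an incident webhook.
import json
import urllib.request

INCIDENT_WEBHOOK = "https://example.com/hooks/incidents"  # placeholder

def triage(alert: dict) -> str:
    # Simple severity rule: customer-facing issues page a human immediately.
    return "sev1" if alert.get("customer_facing") else "sev3"

def notify(alert: dict, severity: str) -> None:
    body = json.dumps({"summary": alert["summary"],
                       "severity": severity}).encode()
    req = urllib.request.Request(INCIDENT_WEBHOOK, data=body,
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
    except Exception as exc:
        # Fall back so a broken notification path never swallows the incident.
        print(f"notification failed ({exc}); escalating manually")

alert = {"summary": "checkout latency above SLO", "customer_facing": True}
notify(alert, triage(alert))
```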
7. ANALYTICS
DevOps teams need tools that go beyond continuous release and deploy. They need tools that provide continuous analytics in order to measure and analyze application activities against business objectives. While the focus is often on continuous release and deploy, that is not always possible in some firms due to regulatory concerns. However, the need is there for continuous monitoring, tracking and analytics. First, use monitoring to gather end-user experience data as well as infrastructure and application data. Then, track and stitch transactions together to show a timeline of what happened. Finally, create shared metrics that enable the analysis to be compared to both technical and business objectives.
Charley Rich
VP Product Management and Marketing, Nastel Technologies
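Rich's "track and stitch transactions together" step can be illustrated in a few lines: group events from different tiers by a shared transaction ID and order them into a timeline. The event records below are invented for the example; in practice they would come from logs or tracing data carrying a correlation ID.

```python
# Hypothetical sketch: stitch cross-tier events into per-transaction timelines.
from collections import defaultdict

events = [
    {"txn": "abc123", "ts": "12:00:01.020", "tier": "web", "msg": "request received"},
    {"txn": "abc123", "ts": "12:00:01.150", "tier": "app", "msg": "order validated"},
    {"txn": "abc123", "ts": "12:00:01.900", "tier": "db",  "msg": "slow query (740 ms)"},
    {"txn": "def456", "ts": "12:00:02.010", "tier": "web", "msg": "request received"},
]

timelines = defaultdict(list)
for e in events:
    timelines[e["txn"]].append(e)

for txn, steps in timelines.items():
    print(f"transaction {txn}:")
    for step in sorted(steps, key=lambda s: s["ts"]):
        print(f"  {step['ts']} [{step['tier']}] {step['msg']}")
```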
Application-centric analytics - Recent developments by many leading APM providers have focused on how application performance, usage, and business data can be correlated to analyze whether applications are driving desired commercial outcomes. This form of application-centric analytics is critical to DevOps as there is no point in delivering new updates or features at speed if they are not providing value to users or the business. Application analytics allows DevOps professionals and the business to understand quickly how to tailor applications in order to optimize user experience and overall application quality.
John Rakowski
Director of Technology Strategy, AppDynamics
In any company running DevOps, the critical tool is the data analytics platform - a central place where the most important machine data is stored, analyzed and presented. Combining multiple data sources from servers, devices and other DevOps tools is crucial in an ever-changing world. Being able to act on insights will determine whether a company wins or loses.
Coen Meerbeek
Online Performance Consultant and Founder of Blue Factory Internet
Historically the focus for DevOps has been on deployment automation - pushing a change rapidly into production. But what happens if the change, automatic or manual, causes undesired impact? You can't always just roll back the change if it's incorrect. Today's IT Operations Analytics (ITOA) tools automatically analyze all actual changes and their impact across the entire IT environment, together with release and deployment data, for key operational insights. ITOA technologies help to predict stability issues caused by change early, and link changes to incidents for root cause analysis when incidents do happen. I believe that we will continue to see further expansion of ITOA and its integration into DevOps platforms. This will enable DevOps to implement truly agile, rapid and stable processes, automated end-to-end.
Sasha Gilenson
CEO, Evolven
As more and more companies embrace the method of constantly developing, releasing and updating software, there is an even greater opportunity for errors to occur. If your team is pushing out several software deployments a day, regardless of how good your testing and quality control is, it can quickly become impossible to know exactly how well everything works together. The first release may no longer work perfectly when combined in an environment with the sixth release. With tight schedules, limited staff and limited budgets, it is important to embrace machine learning-based analytics as a solution for quickly finding any operational errors and bringing them to the team’s attention so they can be repaired. Machine learning analytics can do what humans cannot: monitor all operational metrics in near real-time and look for anomalies that indicate a current or impending problem. By continuously learning your unique environment, including what constitutes normal and what does not, machine learning-based systems use that information to be even smarter about detecting future anomalies and problems, helping to make for a smoother and more successful DevOps process.
Mike Paquette
VP of Products, Prelert
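Stripped to its simplest statistical form, the anomaly detection Paquette describes looks something like the sketch below: learn what "normal" means from recent samples and flag points that deviate sharply. Production systems use far richer models than a rolling mean and standard deviation; this illustrates the principle only, on invented latency data.

```python
# Hypothetical sketch: flag metric values that deviate sharply from the
# recent baseline (a rolling z-score, the simplest form of "learning normal").
from statistics import mean, stdev

def anomalies(series, window=20, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the
    mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            flagged.append((i, series[i]))
    return flagged

latencies = [100 + (i % 5) for i in range(40)] + [450]  # normal, then a spike
print(anomalies(latencies))  # [(40, 450)]
```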
Business objectives and the usefulness of DevOps technologies change throughout the application lifecycle. During the production phase, the most important objective is business assurance, and the most effective technology to accomplish this is traffic-based analytics. Assuring the delivery of business services requires data coherence: continuously converting large volumes of traffic-based data into structured metadata that is optimized for real-time analytics platforms. The generated metadata delivers actionable insights for business agility, risk mitigation, and service assurance. Traffic-based intelligence is the foundation for a solution that effectively pinpoints the root cause of performance problems, reduces the Mean-Time-To-Knowledge (MTTK) by 80% or more, and substantially reduces OpEx by proactively monitoring and managing the entire service delivery chain in a cost-effective manner.
Ron Lifton
Senior Enterprise Solutions Marketing Manager, NetScout
8. MANAGER OF MANAGERS
The DevOps agile development model extends to its tools, and we've seen a huge proliferation of tools introduced to improve some aspect of monitoring. While each tool solves a specific problem, the proliferation has inadvertently fostered silos of expertise, domain-specific views and massive data volumes generated in various formats. As application count and architectural complexity increases, the must-have tool to scale production support is an analytics-driven Manager of Managers (MoM). It has to ingest all of this operational event data (application to infrastructure) and apply machine learning to automate the noise reduction and alert correlation. This gives DevOps teams earlier warning of unfolding issues, better collaboration, visibility into root cause – ultimately reducing the impact of production outages and incidents.
Rob Markovich
CMO, Moogsoft
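The noise-reduction step Markovich describes can be approximated, in its simplest rule-based form, by deduplicating a flood of raw alerts into one correlated incident per service and time window. Real MoM products replace the fixed window below with machine learning over event content and timing; the alert records are invented for the example.

```python
# Hypothetical sketch: correlate raw alerts into incidents per service
# and time window, the rule-based core of MoM-style noise reduction.
from collections import defaultdict

raw_alerts = [
    {"service": "payments", "ts": 1000, "msg": "high CPU"},
    {"service": "payments", "ts": 1004, "msg": "high latency"},
    {"service": "payments", "ts": 1007, "msg": "error rate spike"},
    {"service": "search",   "ts": 1300, "msg": "disk nearly full"},
]

WINDOW_SECONDS = 60  # alerts for one service within this window correlate

incidents = defaultdict(list)
for a in sorted(raw_alerts, key=lambda a: a["ts"]):
    key = (a["service"], a["ts"] // WINDOW_SECONDS)
    incidents[key].append(a)

for (service, _), group in incidents.items():
    symptoms = ", ".join(a["msg"] for a in group)
    print(f"incident on {service}: {len(group)} alerts correlated ({symptoms})")
```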
Read 30 Must-Have Tools to Support DevOps - Part 2, covering automation and continuous integration.