Continuous Improvement: The By-Product of Monitoring
September 07, 2016

Jason Hand
VictorOps

We all remember the game from our childhood where one person whispers a phrase to the person directly next to them, who in turn shares the phrase with the following person in line. This continues through a group of people until it makes its way back to the original source.

The point of this exercise was primarily to demonstrate how easily information becomes corrupted along a lengthy path. Minor and major alterations occur naturally, and sometimes intentionally, as details and facts are diluted through indirect communication with the original source.

Another observation is that the time it takes for information to return to the originating source varies greatly and increases with each new point through which the information must pass. In short, the volume and frequency of errors in the data increase with the length of the path and the time the information takes to travel it.

This concept can be applied to feedback loops, which are used in nearly every industry. Most IT professionals understand the importance of having the right monitoring and metrics in place to give them a pulse on infrastructure, code base and facilities. With a focus on uptime and availability, extra attention is put toward efforts to identify a problem before end users do.
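As a rough illustration of that kind of proactive check, the sketch below polls a health endpoint and flags slow or failed responses before end users notice. It is a minimal sketch only: the endpoint URL, latency threshold and print-based "alerting" are assumptions, and a real check would report into your monitoring or on-call tooling.

```python
# Minimal synthetic availability check (illustrative only).
# The endpoint URL, latency threshold and print-based alerting are assumptions;
# a real check would feed results into monitoring or on-call tooling.
import time
import urllib.error
import urllib.request

HEALTH_URL = "https://example.com/health"   # hypothetical health endpoint
LATENCY_THRESHOLD_S = 0.5                   # assumed acceptable response time

def check_once(url: str = HEALTH_URL) -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5):
            latency = time.monotonic() - start
        if latency > LATENCY_THRESHOLD_S:
            print(f"WARN: {url} responded in {latency:.2f}s (slower than threshold)")
        else:
            print(f"OK: {url} responded in {latency:.2f}s")
    except urllib.error.HTTPError as exc:
        # The server answered, but with an error status.
        print(f"ALERT: {url} returned HTTP {exc.code}")
    except (urllib.error.URLError, TimeoutError) as exc:
        # No usable answer at all: DNS failure, refused connection, timeout, etc.
        print(f"ALERT: {url} unreachable: {exc}")

if __name__ == "__main__":
    check_once()
```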

Unfortunately, with availability as the highest priority, monitoring and metrics are typically used by IT teams to constantly firefight issues. They are rarely used to experiment or innovate so that the teams can improve upon their own processes and tooling.

In an industry where failure is unavoidable, learning and innovating through feedback loops is your best course of action. Instead of focusing on increasing the time until your next failure, you should focus on decreasing the time it takes for your systems to recover following a failure.
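As a rough sketch of what that shift in focus looks like in numbers, the snippet below computes mean time to repair (MTTR) alongside mean time between failures (MTBF) from a handful of made-up incident timestamps. The data and function names are purely illustrative; real records would come from your alerting or incident-management tooling.

```python
# Illustrative MTTR vs. MTBF calculation over hypothetical incident records.
from datetime import datetime, timedelta

# (detected, resolved) pairs for a few made-up incidents
incidents = [
    (datetime(2016, 8, 1, 9, 15), datetime(2016, 8, 1, 9, 47)),
    (datetime(2016, 8, 9, 22, 3), datetime(2016, 8, 9, 23, 30)),
    (datetime(2016, 8, 20, 14, 0), datetime(2016, 8, 20, 14, 21)),
]

def mean_time_to_repair(records) -> timedelta:
    """Average time from detection to resolution (MTTR)."""
    total = sum(((resolved - detected) for detected, resolved in records), timedelta())
    return total / len(records)

def mean_time_between_failures(records) -> timedelta:
    """Average gap between the starts of consecutive incidents (MTBF)."""
    starts = sorted(detected for detected, _ in records)
    gaps = [later - earlier for earlier, later in zip(starts, starts[1:])]
    return sum(gaps, timedelta()) / len(gaps)

print("MTTR:", mean_time_to_repair(incidents))
print("MTBF:", mean_time_between_failures(incidents))
```

Tracking MTTR over time is one concrete way to tell whether the feedback loop is actually shortening, independent of how often failures occur.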

Continuous Improvement

Agile and DevOps principles teach us that removing friction in our processes and communications is a critical component to success in modern software delivery. Shortening feedback loops allows for quicker responses to situations, as well as a reduction in opportunities for errors in data.

Companies that have found a competitive advantage know the secret of shortened feedback loops very well. They have adopted the principles of Agile and effective DevOps practices not only within their IT teams but throughout the organization. It's part of an ongoing effort towards continuous improvement.

Today's best practices quickly become outdated as new processes and tools become available and mature. Embracing the feedback loop allows us to respond, learn and improve, which in turn allows us to innovate our own products and services.

No More Waterfalls

Waterfall planning and delivery methods where software releases take place in long cycles are no longer acceptable. The demands of competition and innovation require much shorter cycles for every phase of the process. The goal of the waterfall approach is to structure everything so that the schedule, scope, and resources can be determined upfront.

Unfortunately, this approach means companies can't respond as quickly. When the needs of customers or the landscape of markets inevitably changes, IT teams aren't equipped to receive that feedback and immediately apply it to new decisions and choices. There is no way to self-correct other than by throwing out an immense amount of planning and work only to start from scratch.

Human Feedback Through a Systems Thinking Lens

Feedback doesn't take place only within systems. Verbal and non-verbal communication between co-workers, partners and customers is another form of feedback. Taking a step back and looking at that feedback through a systems lens is a far more accurate method of evaluation.

There are three main questions to ask in order to accomplish this:

1. Are differences between the giver and receiver creating friction for the feedback?

2. Is the feedback partly related to the differing roles between giver and receiver as it relates to the common system?

3. Are processes, policies, physical environment, or other factors within the system reinforcing problems with the feedback?

Examining feedback in this manner allows for a deeper understanding of the information flowing to and from the human inputs and outputs. By allowing ourselves to view feedback through a Systems Thinking model, we can begin to look for patterns, understand the feedback loop with more accuracy and identify contributing factors to both failure and success.

Learning and Innovation

The inevitability of failure has a unique ability to absolve us from the effort of trying to engineer failure out of systems. Because of this, we now design for failure, optimize for a reduction in Time-To-Repair and build in feedback loops that prevent us from aimlessly hunting for a root cause of a disruption. From that, we can use divergent thinking to guide our decisions and choices on what to do next to improve the reliability and resilience of our systems. The by-product of all of that is a highly available system built, maintained and continuously improved upon by high-performing IT teams.

As builders and maintainers of complex systems, we must make a concerted effort to shorten feedback loops. Once the focus is on repairing systems faster, you can create space to explore, experiment and develop new ways to provide bleeding-edge products and services. The by-product of a highly reliable and resilient system is a highly available system.

Once the focus is shifted from simply maintaining systems to improving them, value is increased across many fronts. As a result, the IT department provides greater value to the business, and the business provides greater value to the end user.

Jason Hand is a DevOps Evangelist at VictorOps.
