DevOps emerged as a philosophy for bridging the gap between development and operations silos, each focused on different priorities and using different processes and tools. The point of DevOps has been to let developers create high-quality, production-ready software through agile development techniques, and to let Ops monitor and manage application release processes with low-risk change management. Ultimately, everyone is trying to achieve the same thing, creating business value through software, but the two sides need to get better at working together.
This article focuses on some key challenges in the DevOps approach, challenges that leave operations wading through overwhelming amounts of operational data, and on how new analytics-based tools can extract meaningful information from that data, closing the gap and putting development and operations back in sync.
Let’s take a look at the big issues weighing down today’s organizations.
DevOps Challenges: A Big Data Problem?
Complex architectures lead to complex deployments. IT environments are becoming more and more complex, requiring data centers to support more technologies and devices, at faster rates than ever before.
Applications are proliferating, and the interdependence between applications is multiplying as new functions are integrated or added to existing ones, making it increasingly difficult to manage and control the sprawl and delivery of all of these business services. As the volume, frequency, and complexity of releases increase, so can errors and the likelihood that applications will be deployed incorrectly or fail.
As Forrester recently reported in Turn Big Data Inward With IT Analytics: “With each passing day, the problem of complexity gets worse. More complex systems present more elements to manage and more data, so growing complexity exacerbates an already difficult problem. Time is now the enemy because complexity is growing exponentially and inexorably.”
One of the main drivers for adopting DevOps is change: the nature of IT application and service delivery has changed, with many changes now happening simultaneously. Some years back, application updates might have happened once a month, followed by a few weeks of stabilizing the application in production, going back and forth between operations and development. This was considered a necessary evil. Now that organizations need to react instantly to changing business requirements, continuous integration and agile development practices push out many more changes every day.
So where 10 changes a day was once considered difficult, staying on top of hundreds per day has become practically impossible. Propelled by agile development, frequent releases quickly turn into deployment bottlenecks.
Most organizations do not have a single authority that owns end-to-end environments for application management. Typically, applications run on different physical and virtual systems that communicate across networks, which in turn may include internal and external segments with limited visibility.
So, now it’s time that we recognize these challenges for what they really are: a “Big Data” problem. As Forrester Research has declared: “All of this processing may seem a lot like the ‘Big Data’ movement that is currently so hot. There is good reason to recognize this relationship. It is indeed a Big Data issue.”
DevOps Challenges across Application Lifecycle Management (ALM)
This Big Data problem is glaringly evident as applications are moved through multiple complex environments, advancing through the application lifecycle. Since development focuses on quickly delivering application changes through parallel and agile methodologies, Ops needs to ensure that the applications work as a whole. Gaps occur throughout this flow.
Development and test teams miss the infrastructure perspective, lacking a comprehensive view of how applications ultimately perform in the production environment. This gap means that dev, test, pre-prod, and prod environments are often inconsistent.
Lacking configuration expertise, pre-prod teams may delay deployment. Infrastructure configuration can also differ significantly between pre-production and production, so that some components in production are missing or differ from those tested in pre-production. The result: an application that passed testing fails once deployed to production.
It’s difficult to validate the success of releases, and automation complicates the process. An automated application deployment can run perfectly while the target environment’s configuration settings remain unverified.
Stabilizing releases takes time. In a performance-testing environment, the application or software deployment tool handles the rollout, and the team follows up by checking the deployment. In practice, this often means an individual makes particular configuration changes directly in the production environment to realize a performance improvement. Changes going directly into production create gaps with the pre-prod environments.
Furthermore, visibility into unauthorized changes is limited. When many changes accumulate in production and someone remembers to add only some of them back to the deployment tool before redeploying, the release won’t work. Why? Additional changes were overlooked in the redeployment.
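To make the gap concrete, here is a minimal sketch, using invented configuration data rather than any specific tool’s format, of how an analytics tool might diff what the deployment tool believes it deployed against what is actually running in production:

```python
# Minimal sketch: compare the deployment tool's record of a release
# (expected state) against the configuration actually observed in
# production, surfacing unauthorized or overlooked changes (drift).
# The keys and values below are hypothetical examples.

def find_drift(expected: dict, actual: dict) -> dict:
    """Return keys that were added, removed, or changed in production."""
    added = {k: actual[k] for k in actual.keys() - expected.keys()}
    removed = {k: expected[k] for k in expected.keys() - actual.keys()}
    changed = {k: (expected[k], actual[k])
               for k in expected.keys() & actual.keys()
               if expected[k] != actual[k]}
    return {"added": added, "removed": removed, "changed": changed}

# What the deployment tool deployed
expected = {"heap_size": "2g", "pool_size": "50", "log_level": "INFO"}
# What a scan of the production host actually found
actual = {"heap_size": "4g", "pool_size": "50", "timeout": "30s"}

drift = find_drift(expected, actual)
print(drift)
# "heap_size" was hot-fixed in production, "timeout" was added by hand,
# and "log_level" never made it out -- exactly the gaps described above.
```

A real configuration analytics tool would of course collect this state automatically across many hosts and layers; the point of the sketch is only that the comparison itself is mechanical once both sides are captured.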
The Power of IT Analytics
Using mathematical algorithms and other innovations, IT Analytics tools churn through these immense volumes of data, extracting meaningful information from a sea of raw change and configuration data.
IT Analytics tools can help IT Operations ensure control:
- Statistical pattern analytics infer relationships where explicit links are weak or missing, statistically comparing performance patterns to identify common behaviors and, through them, implicit relationships.
- Textual pattern analytics sift through streams of textual data, such as logs, to find patterns that can be used to identify conditions and behaviors overlooked by more traditional numerical collection technologies.
- Configuration analytics dynamically capture all change and configuration information across IT environments, analyzing configurations to detect what has changed since the system last worked correctly, verifying change consistency between environments, spotting discrepancies from the desired configuration (drift), and tracking configuration changes over time.
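The statistical approach in the first bullet can be sketched in a few lines: if two components’ performance metrics move together over time, an analytics tool can infer an implicit relationship even when no explicit dependency is recorded. The metric values and the 0.9 threshold below are invented for illustration.

```python
# Sketch of statistical pattern analytics: compare performance patterns
# of components and treat strong correlation as evidence of an implicit
# relationship. All sample data here is hypothetical.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Response-time samples for three services over the same intervals
app_db    = [12, 15, 30, 28, 14, 35]
web_front = [110, 130, 260, 250, 120, 300]   # tracks the database closely
batch_job = [40, 38, 41, 39, 42, 40]         # unrelated workload

THRESHOLD = 0.9  # hypothetical cutoff for flagging an implicit link
print(pearson(app_db, web_front) > THRESHOLD)   # True: likely related
print(pearson(app_db, batch_job) > THRESHOLD)   # False: likely unrelated
```

Production tools use far more robust statistics than a single correlation coefficient, but the principle is the same: common behavior over time implies a relationship worth surfacing.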
By analyzing detailed changes and validating them across IT environment layers over the entire path, including deployment, IT Analytics enables IT Ops to address critical questions like:
- What are the changes made in infrastructure to accommodate application changes?
- Is the production environment where the changes are deployed consistent with pre-production?
- What happens to changes made directly in production and operations, and how do they get reflected back into pre-production?
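The textual pattern analytics described earlier can be sketched just as simply: mask the variable fields of each log line into a template, then flag the rare templates, which often mark exactly the overlooked conditions these questions point to. The log lines below are invented examples.

```python
# Sketch of textual pattern analytics: collapse variable fields (numbers,
# hex IDs) in log lines into templates, then surface rare templates that
# may signal overlooked conditions. The log lines are hypothetical.
import re
from collections import Counter

def template(line: str) -> str:
    """Replace hex and numeric tokens with placeholders."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    return re.sub(r"\d+", "<NUM>", line)

logs = [
    "request 4411 served in 12 ms",
    "request 4412 served in 9 ms",
    "request 4413 served in 11 ms",
    "worker 7 restarted at address 0x7fba21",
]

counts = Counter(template(line) for line in logs)
rare = [t for t, c in counts.items() if c == 1]
print(rare)   # the one-off restart pattern stands out
```

Real log analytics engines mine templates from millions of lines across many formats, but even this toy version shows how patterns invisible to numeric metric collection can be pulled out of raw text.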
As the complexity of the underlying infrastructure and operations grows, operations teams that don’t apply IT Analytics can find themselves almost continually performing repetitive, time-consuming tasks to close these gaps.
ABOUT Sasha Gilenson
Sasha Gilenson is CEO of Evolven Software, the innovator in IT Operations Analytics. Prior to Evolven, Gilenson spent 13 years at Mercury Interactive, participating in establishing Mercury’s SaaS and BTO strategy. He studied at the London Business School and has more than 15 years of experience in IT operations. You can reach him on LinkedIn or follow his tweets at @sgilenson.