The Mainframe is Here to Stay: 5 Take-Aways for Mainframe DevOps
May 03, 2018

Chris O'Malley
Compuware

Forrester Research recently conducted a survey of 160 mainframe users across the globe and found that mainframe workloads are increasing, driven by trends including blockchain, modern analytics and more mobile activity hitting the platform. 57 percent of these enterprises currently run more than half of their business-critical applications on the platform, with this number expected to increase to 64 percent by next year. No surprise there, as the mainframe's security, reliability, performance, scalability and efficiency have consistently proven unbeatable for modern transactional applications.

However, these enterprises have replaced only 37 percent of the mainframe workforce lost over the past five years. The prospect of increased workloads, combined with shrinking mainframe skillsets, has huge implications for mainframe DevOps. The only way for organizations to close this skills gap is to optimize developer productivity. Drilling down a level further, what does this all mean for mainframe DevOps?

1. DevOps teams must view and treat the mainframe as a first-class digital citizen

DevOps teams are obsessed with establishing and measuring Key Performance Indicators (KPIs) to continually improve outcomes. These KPIs concern quality (minimizing the number of code defects that make it into production), efficiency (time spent developing) and velocity (the number of software products or features that can be rolled out in a given amount of time).
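To make the three KPIs concrete, here is a minimal sketch of how a team might compute them from release data. The data shape (per-release records with `defects_in_prod`, `dev_hours`, and `features` fields) is hypothetical, used purely for illustration:

```python
# Illustrative sketch: the three DevOps KPIs named above, computed from
# a hypothetical list of release records.
releases = [
    {"defects_in_prod": 2, "dev_hours": 120, "features": 5},
    {"defects_in_prod": 0, "dev_hours": 80,  "features": 3},
    {"defects_in_prod": 1, "dev_hours": 100, "features": 4},
]

total_features = sum(r["features"] for r in releases)

# Quality: code defects that escaped into production, per feature shipped.
quality = sum(r["defects_in_prod"] for r in releases) / total_features

# Efficiency: developer time spent per feature shipped.
efficiency = sum(r["dev_hours"] for r in releases) / total_features

# Velocity: features shipped per release cycle.
velocity = total_features / len(releases)
```

The point is not the specific formulas (teams define these differently) but that each KPI reduces to a number that can be tracked over time.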

While common among non-mainframe teams, KPIs can be foreign to mainframe teams, despite how vital mainframe processing is to the customer experience: 72 percent of firms noted their customer-facing applications are completely or very reliant on mainframe processing.

While firms recognize the importance of quality, velocity and efficiency, significant percentages (27, 28 and 39 percent respectively) are not measuring them. This is a cause for concern: the reduction of mainframe-specific developer expertise poses a serious threat to quality, velocity and efficiency, yet management has no means to quantify the risks.

2. Teams must honestly assess developer behavior on the platform and proactively identify areas for improvement

Having mainframe development KPIs in place and consistently measuring progress against them is a great first step, but it's not enough. Organizations remain heavily dependent on mainframe applications, and DevOps teams can't afford to hypothesize what changes may or may not move the needle on KPIs — it is much better to rely on real empirical evidence.

New approaches now leverage machine learning applied to real behavioral data. This enables teams to make smart, high-impact decisions that support continuous DevOps improvements.

3. Teams must integrate the mainframe into virtually every aspect of the DevOps toolchain

A DevOps toolchain refers to the set or combination of tools aiding in the delivery, development and management of applications created in a DevOps environment. These toolchains support greater productivity as developers work across an end-to-end application; however, mainframe code — which supports the vital transaction-processing component of most applications — is often excluded. This slows down the entire effort and dilutes the positive impact of such tools on other application components.

Mainframe code must be fully incorporated in these toolchains across the entire delivery pipeline including source code management, code coverage, unit testing, deployment and more.

4. Mainframe workloads require a "cost-aware" approach

Cost optimization becomes a key consideration as the mainframe takes on bigger workloads. Many organizations are unfamiliar with exactly how mainframe licensing costs (MLCs) are determined and don't make sufficient attempts to manage them, which can drive up costs unnecessarily.

MLCs are determined by a metric known as the peak four-hour rolling average MSU (million service units) value across all logical partitions (LPARs). In simple terms, an MSU represents an amount of processing work. Peak values can be kept to a minimum by diligently tuning each application to reduce its individual consumption of mainframe resources, and by spreading out the timing of application workloads to avoid collective utilization spikes, which keeps the rolling average lower.
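A simplified sketch of the rolling-average idea (real sub-capacity reporting samples MSUs at finer intervals and aggregates across LPARs; hourly samples are assumed here only for illustration):

```python
def peak_4hr_rolling_avg(msu_hourly):
    """Peak four-hour rolling average over a list of hourly MSU samples.

    Simplified illustration: real MLC reporting uses finer-grained
    samples aggregated across all LPARs.
    """
    window = 4
    return max(
        sum(msu_hourly[i:i + window]) / window
        for i in range(len(msu_hourly) - window + 1)
    )

# The same total daily work, scheduled two ways:
concentrated = [40] * 8 + [160] * 4 + [40] * 12   # batch jobs bunched together
spread_out   = [60] * 24                          # workload evened out
```

Both schedules consume 1,440 MSU-hours in total, but the concentrated one peaks at a 160 MSU rolling average while the spread-out one peaks at 60, illustrating why staggering batch workloads lowers the billable peak.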

New techniques provide visually intuitive insight into how batch jobs are initiated and executed, as well as the impact of these jobs on MLCs. This means non-mainframe experts can manage costs as readily as seasoned platform experts once did.

5. An ongoing emphasis must be placed on recruiting and cultivating top computer science talent for the mainframe

It's a polyglot world, and millennials fluent and skilled in working on the mainframe will have a distinct advantage. Working on the mainframe also gives newer developers an opportunity to contribute to some of the most exciting, cutting-edge software products being created today. We recently met with a class of young computer science grads; far from dismissing the notion of cultivating mainframe expertise, they showed palpable excitement and enthusiasm for learning more.

The mainframe can be an extremely valuable asset, giving DevOps teams — and the organizations they work for — the distinct advantage of being both big and fast. Heavier workloads combined with less mainframe talent will certainly present challenges, though these are not insurmountable. The five take-aways described here are an excellent way to amplify the mainframe's intrinsic strengths and make the most of mainframe resources in a DevOps world.
