DevOps on the Mainframe
October 26, 2015

Chris O'Malley
Compuware

Most discussions of DevOps assume that the "dev" is being done exclusively in programming languages of recent vintage and that the "ops" are occurring exclusively on distributed or cloud platforms.

There are, however, at least three compelling reasons to have a DevOps discussion that focuses on the mainframe.

Reason #1: Necessity

Much, if not most, of the world's most economically valuable code still runs on the mainframe in languages such as COBOL, PL/I, and Assembler. A lot of people fail to acknowledge this reality — but as we eagerly hail rides and order pizzas on our smartphones, global banks and other major corporations are executing billions of transactions worth trillions of dollars using their so-called "legacy" systems.

These systems are not going anywhere. No matter how often and how loudly myopic industry pundits may predict the demise of the mainframe, the empirically verifiable truth is instead that mainframe owners have no plans to jettison the platform. Most, in fact, see their mainframe workloads growing as their businesses grow and as they add new logic to their systems of record.

Plus, the mainframe platform itself has evolved dramatically in recent years — despite lack of attention from the trade press. IBM's z13 is the most powerful, reliable, scalable, and secure computing platform on the planet. It also runs Linux and Java. And despite misconceptions to the contrary, its incremental costs for additional workloads are far less than those for distributed and cloud environments.

It simply doesn't make sense to leave such a massive volume of high-value application logic running on such a powerful platform out of the DevOps discussion. If it is worthwhile to apply DevOps best practices to the code that lets us "like" our cousin's neighbor's classmate's baby pictures, it is reasonable to conclude that there may be equal or greater value in applying those same practices to the code that empowers international trade and currency exchange.

Reason #2: Uniqueness

There is nothing inherently unique about applying DevOps best practices to the mainframe. Code is code. So the only inherent difference between managing the lifecycle of a COBOL app and the lifecycle of a Java app is the programming syntax — which is cognitively trivial.

There are, however, significant conditional differences that make DevOps on the mainframe a unique challenge. For one thing, COBOL programs are typically long, involved and not very well documented. Because of their longevity, these applications have also undergone a lot of modification and become deeply intertwined with each other. This makes code-parsing, runtime analysis, and visual mapping of inter-application dependencies much more important in the mainframe environment than they usually are in Java/C++/etc. environments.
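
To make the dependency-mapping point concrete, here is a minimal, hypothetical sketch (in Python, since the article implies no particular tooling): it approximates inter-program dependencies by scanning COBOL source files for static CALL statements. The directory name "cobol-src" and the ".cbl" extension are assumptions, and real mainframe analysis tools go much further, resolving dynamic calls, CICS transactions, copybooks, and runtime behavior.

```python
# Illustrative sketch only: approximate inter-program dependencies by scanning
# COBOL sources for static CALL statements. This is a toy approximation of what
# dedicated code-parsing and dependency-mapping tools do at much greater depth.
import re
from pathlib import Path
from collections import defaultdict

CALL_PATTERN = re.compile(r"\bCALL\s+'([A-Z0-9$#@-]+)'", re.IGNORECASE)

def build_call_graph(source_dir: str) -> dict[str, set[str]]:
    """Map each program (file stem) to the programs it statically CALLs."""
    graph = defaultdict(set)
    for src in Path(source_dir).glob("*.cbl"):  # assumed layout and extension
        text = src.read_text(errors="ignore")
        for callee in CALL_PATTERN.findall(text):
            graph[src.stem.upper()].add(callee.upper())
    return graph

if __name__ == "__main__":
    for caller, callees in sorted(build_call_graph("cobol-src").items()):
        print(f"{caller} -> {', '.join(sorted(callees))}")
```

Even this crude scan makes the scale problem obvious: run it against thousands of decades-old programs and the resulting graph is far too dense to reason about by hand, which is why automated analysis and visual mapping matter so much more here than in a typical Java or C++ code base.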

For another, the tools that mainframe development teams have historically used, and that they still use in 9 out of 10 cases, are very different from those used by today's more freshly tooled, Java-centric development teams. So inclusion of pre-Java application logic in the broader enterprise DevOps environment will usually require substantive, smart re-tooling of mainframe code management.
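
As a rough illustration of what that re-tooling aims at, the sketch below shows how a modern CI/CD pipeline might drive a mainframe code-management step through an HTTP call instead of a manual, terminal-based procedure. The endpoint, payload fields, and environment variables are hypothetical placeholders, not the API of any specific product.

```python
# Illustrative sketch only: a pipeline stage that asks a (hypothetical)
# mainframe SCM service to promote a change set, so mainframe promotion
# becomes just another automated step in the enterprise toolchain.
import os
import sys
import requests

SCM_API = os.environ.get("MAINFRAME_SCM_API", "https://scm.example.com/api")  # hypothetical endpoint
TOKEN = os.environ["MAINFRAME_SCM_TOKEN"]  # credential injected by the CI system

def promote(assignment_id: str, level: str) -> None:
    """Request promotion of a mainframe change set to the given level."""
    resp = requests.post(
        f"{SCM_API}/assignments/{assignment_id}/promote",
        json={"targetLevel": level},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=60,
    )
    resp.raise_for_status()
    print(f"Promoted {assignment_id} to {level}")

if __name__ == "__main__":
    # e.g. invoked from a pipeline stage: python promote.py ASSIGN-1234 QA
    promote(sys.argv[1], sys.argv[2])
```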

Finally, the cultural shift to DevOps, Agile, and continuous delivery can initially be a much greater one for mainframe shops that have been focused for decades (with good reason) on application stability and hyper-rigorous change management rather than on efficient scrumming and rapid response. This cultural shift places special demands on IT leadership, above and beyond the other process and technology changes required for best-practices DevOps.

None of these are insurmountable obstacles to bringing DevOps to the mainframe (or, perhaps more precisely, bringing the mainframe to DevOps). But they do represent a unique set of near-term challenges that require their own discussion, strategy, actions, tools, and leadership.

Reason #3: Significant business upside

The third and most compelling reason to give DevOps on the mainframe its own dedicated focus is that a business gains tremendous advantages when it can more adaptively and efficiently re-align its COBOL code base with the ever-changing dictates of the world's increasingly technology-centric markets.

That code base, after all, is the digital DNA of the business. It defines how the business is operated, measured, and managed. So no business with a substantial mainframe environment can successfully compete in today's fast-moving markets if that environment remains slow and unresponsive.

Conversely, a mainframe-centric company that does manage to bring DevOps to the mainframe will be able to out-maneuver mainframe-centric competitors who fail to do likewise.

This is especially true as mainframe applications increasingly act as back-ends for customer-facing mobile apps and customer analytics. Companies that can adaptively update their mainframe code will have a distinct advantage when it comes to customer engagement, because they will be able to deliver better mobile apps and get more relevant analytic results.

The advantages of the DevOps-enabled mainframe, though, go well beyond more adaptive COBOL code. The mainframe platform is the most cost-effective place to host any application logic that has to be fast, scalable, reliable, and secure. So IT organizations creating new workloads can reap massive economic advantages from running those workloads on the mainframe.

But they won't run those workloads on the mainframe if they can't easily modify and extend those applications as circumstances require. DevOps-enablement of the mainframe is therefore a prerequisite for taking advantage of the mainframe's superior technical performance and economics.

There's also a fourth compelling reason for elevating the DevOps-on-mainframe discussion: Forward-thinking IT organizations are already successfully doing DevOps on the mainframe — and reaping the considerable associated rewards. So the mainframe DevOps discussion is not just theoretical. It is also practical and actionable. And it starts delivering ROI quickly.

So if you have a mainframe and have been leaving it out of your DevOps initiatives, stop. You are robbing your business of a real source of significant competitive advantage.

And if you don't have a mainframe, pay attention anyway. It may be worth getting one — or offering your talents to a company that does. Mainframes have been around for a long time, and they will be around for a long time to come.

Who knows? The mainframe may even outlast the on-premises x86 commodity server infrastructure that was once touted to replace it but that is not aging nearly as well, and that may therefore wind up expiring well before the mainframe ever does.

Chris O'Malley is CEO of Compuware.
