DevOps on the Mainframe
October 26, 2015

Chris O'Malley

Most discussions of DevOps assume that the "dev" is being done exclusively in programming languages of recent vintage and that the "ops" are occurring exclusively on distributed or cloud platforms.

There are, however, at least three compelling reasons to have a DevOps discussion that focuses on the mainframe.

Reason #1: Necessity

Much, if not most, of the world's most economically valuable code still runs on the mainframe in languages such as COBOL, PL/I, and Assembler. A lot of people fail to acknowledge this reality — but as we eagerly hail rides and order pizzas on our smartphones, global banks and other major corporations are executing billions of transactions worth trillions of dollars using their so-called "legacy" systems.

These systems are not going anywhere. No matter how often and how loudly myopic industry pundits may predict the demise of the mainframe, the empirically verifiable truth is instead that mainframe owners have no plans to jettison the platform. Most, in fact, see their mainframe workloads growing as their businesses grow and as they add new logic to their systems of record.

Plus, the mainframe platform itself has evolved dramatically in recent years — despite lack of attention from the trade press. IBM's z13 is the most powerful, reliable, scalable, and secure computing platform on the planet. It also runs Linux and Java. And despite misconceptions to the contrary, its incremental costs for additional workloads are far less than those for distributed and cloud environments.

It simply doesn't make sense to leave such a massive volume of high-value application logic running on such a powerful platform out of the DevOps discussion. If it is worthwhile to apply DevOps best practices to the code that lets us "like" our cousin's neighbor's classmate's baby pictures, it is reasonable to conclude that there may be equal or greater value in applying those same practices to the code that empowers international trade and currency exchange.

Reason #2: Uniqueness

There is nothing inherently unique about applying DevOps best practices to the mainframe. Code is code. So the only inherent difference between managing the lifecycle of a COBOL app and the lifecycle of a Java app is the programming syntax — which is cognitively trivial.

There are, however, significant conditional differences that make DevOps on the mainframe a unique challenge. For one thing, COBOL programs are typically long, involved, and poorly documented. Because of their longevity, these applications have also undergone a lot of modification and become deeply intertwined with each other. This makes code-parsing, runtime analysis, and visual mapping of inter-application dependencies much more important in the mainframe environment than they usually are in Java/C++/etc. environments.
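Even a lightweight form of that dependency mapping can be automated. As a minimal sketch — the program names and the regex are purely illustrative, not taken from any particular product — static CALL statements in COBOL sources can be scanned to build an inter-program dependency map:

```python
import re
from collections import defaultdict

# Matches static calls such as:  CALL 'PAYCALC' USING WS-REC.
CALL_PATTERN = re.compile(r"\bCALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)

def map_dependencies(sources):
    """Build {program: set of called programs} from COBOL source text."""
    deps = defaultdict(set)
    for program, text in sources.items():
        for match in CALL_PATTERN.finditer(text):
            deps[program].add(match.group(1).upper())
    return dict(deps)

# Two tiny, hypothetical programs for illustration.
sources = {
    "BILLING": "PROCEDURE DIVISION.\n    CALL 'PAYCALC' USING WS-REC.\n    CALL 'AUDITLOG'.",
    "PAYCALC": "PROCEDURE DIVISION.\n    CALL 'AUDITLOG'.",
}

print(map_dependencies(sources))
```

Note the limits of the static approach: a dynamic call such as `CALL WS-PROGRAM-NAME` resolves its target only at runtime and would be invisible to this scan — which is precisely why runtime analysis matters alongside code parsing.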

For another, the tools that mainframe development teams have historically used — and that they still use in 9 out of 10 cases — are very unlike those being used by today's more freshly tooled Java-centric development teams. So including pre-Java application logic in the broader enterprise DevOps environment will usually require a substantial, intelligent re-tooling of mainframe code management.

Finally, the cultural shift to DevOps, Agile, and continuous delivery can initially be a much greater one for mainframe shops that have been focused for decades (with good reason) on application stability and hyper-rigorous change management rather than on efficient scrumming and rapid response. This cultural shift places special demands on IT leadership — above and beyond the other process and technology change required for best-practices DevOps.

None of these are insurmountable obstacles to bringing DevOps to the mainframe (or, perhaps more precisely, bringing the mainframe to DevOps). But they do represent a unique set of near-term challenges that require their own discussion, strategy, actions, tools, and leadership.

Reason #3: Significant business upside

The third and most compelling reason to give DevOps on the mainframe its own dedicated focus is that a business gains tremendous advantages when it can more adaptively and efficiently re-align its COBOL code base with the ever-changing dictates of the world's increasingly technology-centric markets.

That code base, after all, is the digital DNA of the business. It defines how the business is operated, measured, and managed. So no business with a substantial mainframe environment can successfully compete in today's fast-moving markets if that environment remains slow and unresponsive.

Conversely, a mainframe-centric company that does manage to bring DevOps to the mainframe will be able to out-maneuver mainframe-centric competitors who fail to do likewise.

This is especially true as mainframe applications increasingly act as back-ends for customer-facing mobile apps and customer analytics. Companies that can adaptively update their mainframe code will have a distinct advantage when it comes to customer engagement, because they will be able to deliver better mobile apps and get more relevant analytic results.

The advantages of the DevOps-enabled mainframe, though, go well beyond more adaptive COBOL code. The mainframe platform is the most cost-effective place to host any application logic that has to be fast, scalable, reliable, and secure. So IT organizations creating new workloads can reap massive economic advantages from running those workloads on the mainframe.

But they won't run those workloads on the mainframe if they can't easily modify and extend those applications as circumstances require. DevOps-enablement of the mainframe is therefore a prerequisite for taking advantage of the mainframe's superior technical performance and economics.

There's also a fourth compelling reason for elevating the DevOps-on-mainframe discussion: Forward-thinking IT organizations are already successfully doing DevOps on the mainframe — and reaping the considerable associated rewards. So the mainframe DevOps discussion is not just theoretical. It is also practical and actionable. And it starts delivering ROI quickly.

So if you have a mainframe and have been leaving it out of your DevOps initiatives, stop. You are robbing your business of a real source of significant competitive advantage.

And if you don't have a mainframe, pay attention anyway. It may be worth getting one — or offering your talents to a company that does. Mainframes have been around for a long time, and they will be around for a long time to come.

Who knows? Mainframes may even outlast the on-premise x86 commodity server infrastructure that was once touted to replace them, but that is not aging nearly as well — and that may therefore wind up expiring well before the mainframe ever does.

Chris O'Malley is CEO of Compuware.
