Getting the Mainframe Up to DevOps Speed
January 23, 2017

Chris O'Malley
Compuware

Until recently, many IT leaders still believed they could allow their mainframe environments to languish in two-code-drops-a-year waterfall mode, while they embraced DevOps and Agile across their distributed and cloud environments.

This so-called "Bimodal IT" strategy has proven to be dangerously flawed. The fact is, if your business has a mainframe, that's probably where your most important applications and data live. As such, there's no way your business can remain competitive unless you can quickly adapt your use of those applications and data to keep pace with rapidly and relentlessly evolving market demands.

That's especially true given the fact that your customer-facing mobile and web systems of engagement almost universally leverage your back-end mainframe systems of record.

So how do you actually get your mainframe environment up to speed? Given that your existing mainframe dev/test processes and tools are deeply entrenched, how can you integrate the platform into a truly nimble and unified cross-platform enterprise DevOps environment?

Different organizations will take different approaches to this challenge. But here are three principles to bear in mind as you go about the difficult but ultimately extremely rewarding work of bringing your mainframe into the DevOps fold:

1. Transform the developer workspace

Most mainframe dev, test and ops work is still performed in "green screen" TSO/ISPF environments that require specialized knowledge, constrain productivity, and are extremely off-putting to the kind of skilled, ambitious programmers who are the lifeblood of Agile and DevOps transformation. It is therefore essential to migrate to more modern, graphical tools within a preferred DevOps toolchain that empower staff at all experience levels to perform mainframe tasks in much the same manner as they do other non-mainframe work.

Also, mainframe applications are typically large, complex and poorly documented. These attributes are a major impediment to mainframe transformation — and they tend to make enterprise IT highly dependent on the personal/tribal knowledge of senior mainframe staff.

To overcome the skills-and-knowledge gap, it's not enough to just make mainframe workspaces more graphical. You also need tools that enable new participants in mainframe DevOps to quickly and easily "read" existing application logic, program interdependencies, and data structures.
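To make the idea of machine-readable program interdependencies concrete, here is a toy sketch in Python that scans COBOL source text for static CALL statements and builds a simple dependency map. The program names and source snippets are invented for illustration, and real analysis tools go far beyond this — resolving dynamic calls, copybooks, JCL and data structures that plain text scanning cannot:

```python
import re
from collections import defaultdict

# Toy sketch only: map static CALL statements in COBOL source to a
# program-dependency table. Production analysis tools also resolve
# dynamic calls, copybooks and JCL, which text scanning cannot.
CALL_PATTERN = re.compile(r"CALL\s+'([A-Z0-9#@$-]+)'", re.IGNORECASE)

def call_graph(sources):
    """sources: dict mapping program name -> COBOL source text."""
    graph = defaultdict(list)
    for program, text in sources.items():
        for callee in CALL_PATTERN.findall(text):
            graph[program].append(callee.upper())
    return dict(graph)

# Invented example programs:
sources = {
    "ORDERMGR": "PROCEDURE DIVISION.\n    CALL 'PRICING' USING WS-ORDER.\n    CALL 'AUDITLOG' USING WS-REC.",
    "PRICING":  "PROCEDURE DIVISION.\n    CALL 'TAXCALC' USING WS-ITEM.",
}
print(call_graph(sources))
# {'ORDERMGR': ['PRICING', 'AUDITLOG'], 'PRICING': ['TAXCALC']}
```

Even this crude version shows why such visualization matters: a newcomer can see at a glance which programs a change might ripple into, without relying on the tribal knowledge of senior staff.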

Recent innovations in mainframe workspace technology can also give developers on-the-fly feedback on any new bugs and quality issues they inject into their code. By investing in these tools, IT can empower even mainframe-inexperienced developers to quickly produce quality work that fits within the daily requirements of an Agile process. In addition, the latest mainframe development dashboard solutions enable managers to track defects, program complexity and technical debt so they can better pinpoint issues requiring additional coaching or training.

2. Remodel mainframe processes

Once you've built a better working environment for the mainframe, you can start to aggressively shift your process from a traditional waterfall model with large sets of requirements and long project timelines to a more incremental model that allows teams to quickly collaborate on so-called "user stories" and "epics." By estimating the size of these stories and assigning each an appropriate priority, your teams can start engaging in scrums that allow them to iterate quickly toward their goals.
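The estimate-and-prioritize step above can be sketched in a few lines of Python. This is purely illustrative — the story names, point estimates and priority values are invented — but it shows the basic selection logic a team applies when pulling stories into a sprint:

```python
# Illustrative only: rank a backlog by business priority, then prefer
# smaller stories within a priority so the team can iterate quickly.
# All story data below is invented for the example.
stories = [
    {"story": "Expose CICS order inquiry as a REST service", "points": 8, "priority": 1},
    {"story": "Refactor batch pricing job for restartability", "points": 5, "priority": 2},
    {"story": "Add unit tests to PRICING module", "points": 3, "priority": 1},
]

def sprint_order(backlog):
    # Sort by (priority, size): highest-priority first, smallest first within it.
    return sorted(backlog, key=lambda s: (s["priority"], s["points"]))

for s in sprint_order(stories):
    print(s["priority"], s["points"], s["story"])
```

The point is not the code but the discipline it encodes: small, sized, prioritized units of work replace the monolithic requirements documents of waterfall.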

The move from large-scale waterfall projects to Agile scrumming represents a significant change in work culture for most mainframe teams. Training in Agile process and work culture is therefore a must. You may also want to build your initial Agile mainframe team by choosing select mainframe developers and pairing them with Agile-experienced developers from other platforms to work collaboratively on user stories and epics.

You'll obviously also need the right enabling technologies for this shift. Key requirements include Agile project management software that supports Agile methodology — as well as Agile-enabled Source Code Management (SCM). The latter is especially pivotal since traditional mainframe SCM environments are inherently designed for waterfall development, and are thus incapable of providing essential Agile capabilities — such as parallel development work on user stories.

When engaged in this re-tooling, it is generally wiser to leverage best-in-class tools rather than fall into a monolithic approach that requires all SDLC activities to be performed within a single vendor's solution set. That's because best-in-class tools allow you to avoid vendor lock-in while taking advantage of the latest innovations in Agile management.

3. Integrate mainframe workflows into the cross-platform enterprise DevOps toolchain

The target state of mainframe transformation is ultimately a de-siloed enterprise DevOps environment where the mainframe is "just another platform" — albeit an especially scalable, reliable, high-performing, cost-efficient and secure one — that can be quickly and appropriately modified as needed to meet the needs of the business by whoever is available to do so.

This requires integration between mainframe and distributed tools (typically via REST APIs) so that DevOps teams have a single point-of-control for all changes across z/OS, Windows, Unix and other platforms. An effective cross-platform toolchain will also provide cross-platform impact analysis — so your developers can see how the code they're working on in one tier of an application (e.g. a mobile app server) may potentially affect another application tier (e.g. a DB2 database).
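As a concrete sketch of that REST-based integration, the snippet below shows how a cross-platform pipeline might promote a mainframe changeset through an HTTP API. The host, endpoint path and payload fields here are entirely hypothetical — the real interface depends on your vendor's API — but the shape of the interaction is representative:

```python
import json
import urllib.request

# Hypothetical sketch: driving a mainframe change promotion via a REST API
# exposed by mainframe-side tooling. The endpoint and payload fields are
# invented for illustration; consult your vendor's API reference.

def build_promotion_request(host, changeset_id, target_level):
    # Assemble the URL and JSON body for promoting a changeset (e.g. to "QA").
    url = f"https://{host}/api/v1/promotions"
    body = {"changeset": changeset_id, "target": target_level}
    return url, body

def promote_changeset(host, changeset_id, target_level, token):
    # POST the promotion request and return the decoded JSON response.
    url, body = build_promotion_request(host, changeset_id, target_level)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the call is just HTTP, the same pipeline orchestrator that deploys your Windows and Unix components can trigger it — which is exactly what gives DevOps teams a single point of control across platforms.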

The de-siloing of your mainframe can also lead to unified IT service management (ITSM) for both mainframe and non-mainframe applications. This unified ITSM model is especially valuable for companies with large numbers of multi-tier applications that are critical to their financial performance.

Of course, it takes budget, hard work and strong leadership to turn these principles into in-the-trenches realities. It is important for mainframe users to align themselves with partners that don't just pay lip service to bringing the mainframe into the DevOps fold, but are committed to doing what it takes to get it done, and have experience doing it within their own organizations. Getting the mainframe up to DevOps speed is possible — and is being done by enterprises that recognize the need to complement their existing advantages of scale with new advantages of speed. The alternative of "bimodal," "two-speed," or "multi-speed" IT is simply untenable.

Every company with a mainframe therefore needs to get with the mainframe DevOps program. Now.

Chris O'Malley is CEO of Compuware.
