Getting Bimodal IT Right: Challenges and Choices - Part 2
September 28, 2016

Akhil Sahai
Perspica

Nobody said transitioning to a more dynamic and continuous process would be easy. However, failure, fear and skepticism should not give people license to remain stuck in their legacy systems or rush headlong into change. Let's examine what this term "bimodal IT" actually means, why it makes sense in some cases and how to ease the pain of transition.

Start with Getting Bimodal IT Right: Challenges and Choices - Part 1

Managing the Transition

The reason Gartner brought the idea of bimodal IT to light was to create breathing space so that organizations could transform and innovate without crashing and burning. Agile, for instance, was created to enable a faster, more responsive process than waterfall practices can offer. However, switching to continuous delivery and integration too quickly could prove disastrous for certain systems, as some changes carry more inherent risk than others. Following are key elements to consider during the transition to ensure that applications continue to run at optimal levels.

Whichever mode applications are running in, they are doing so on increasingly complex and dynamic infrastructures, with underlying resources constantly changing to meet those applications' performance requirements. You need visibility into all your data — including performance data, logs and topology — and the ability to visualize every layer of your application infrastructure stack in one place at any point in time. This allows you to identify the root cause of an outage or performance degradation, past or present. Such tools can also help you understand the impact of a software release on operations in the continuous delivery and integration mode (Mode 2). Without them, conducting definitive post-mortem analysis is a costly, manual and confusing process — if it can be done at all.

IT is becoming increasingly complex and dynamic as it undergoes this transformation, and a big data problem is brewing within it. Relying solely on traditional IT monitoring tools that trigger numerous alarms makes the job of IT operations teams even more difficult. Making intelligent decisions from raw data in real time while sifting through a sea of alarms and telemetry poses a major challenge. AI — especially machine learning — is well suited to take all that data and generate the operational intelligence needed to distinguish critical, service-impacting events from false positives that do not require an operator's immediate attention. As IT transitions, you need operations intelligence that can handle both modes of operation.
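The article does not prescribe a specific algorithm, but the idea of separating service-impacting events from false positives can be illustrated with a minimal statistical baseline. The sketch below (names and thresholds are illustrative, not from the source) scores each incoming alert against its metric's history using a median/MAD deviation, which is less easily skewed by alarm storms than a mean/stddev baseline:

```python
from statistics import median

def mad_score(history, value):
    """Robust anomaly score: deviation from the historical median,
    scaled by the median absolute deviation (MAD)."""
    med = median(history)
    mad = median(abs(x - med) for x in history) or 1e-9  # avoid divide-by-zero
    return abs(value - med) / mad

def classify_alerts(history, alerts, threshold=5.0):
    """Split metric alerts into service-impacting candidates and probable
    false positives, based on how far each reading sits outside that
    metric's own baseline. `history` maps metric name -> past samples;
    `alerts` is a list of (metric, current_value) pairs."""
    critical, noise = [], []
    for metric, value in alerts:
        score = mad_score(history[metric], value)
        (critical if score >= threshold else noise).append(metric)
    return critical, noise
```

A real operational-intelligence product would learn baselines per metric, per time of day and per topology node, but even this toy version shows the core move: rank events by how anomalous they are, rather than treating every threshold crossing as equally urgent.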

Predicting issues before they become problems is the key to preventing outages. A companion problem to the one above is that traditional monitoring tools trigger alerts only after a problem has already occurred. Look for solutions that incorporate predictive analytics to alert you to anomalous trends or potentially dangerous issues before they impact your application.
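The source does not specify how such predictive analytics work; one common, simple approach is trend extrapolation: fit a line to recent samples of a metric and warn when the trend is projected to cross a limit. The sketch below is an illustrative stand-in for that idea (the function name and parameters are hypothetical, not a real product API):

```python
def forecast_breach(samples, limit, horizon):
    """Fit a least-squares trend line to samples taken at unit intervals
    and return how many intervals until the trend crosses `limit`, or
    None if no breach is projected within `horizon` intervals."""
    n = len(samples)
    if n < 2:
        return None  # need at least two points to estimate a trend
    x_mean = (n - 1) / 2
    y_mean = sum(samples) / n
    denom = sum((x - x_mean) ** 2 for x in range(n))
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in enumerate(samples)) / denom
    intercept = y_mean - slope * x_mean
    for step in range(1, horizon + 1):
        if intercept + slope * (n - 1 + step) >= limit:
            return step
    return None
```

For example, a disk-usage metric climbing roughly 5 points per interval from 50 toward a limit of 90 would be flagged several intervals before the limit is actually reached — the window in which an operator can act before users notice.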

To ease and manage this transition, automated solutions that analyze and provide insight into ever-changing applications and infrastructure topologies are essential. Equipping users with the ability to replay and analyze past incidents and to pinpoint performance degradation and root cause, while cutting out the noise and preventing future costly outages and downtime, is important to facilitate the transition. This operational intelligence connects enterprise DevOps and TechOps teams, giving them what they need to quickly address issues as they arise.

Eyes on the Prize

There's no one solution that works for every organization when it comes to digital transformation. IT operations teams must take a long, hard look at which aspects can proceed to Mode 2 and which need to remain in Mode 1 for the time being. Transition times will vary, and some organizations will arrive fully at Mode 2 faster than others — but that's fine; it's not a race. Bimodal IT was never intended as a permanent fix, but as a step in the right direction toward agile and dynamic IT. By providing visibility into systems and activities while keeping the business running without disruption, IT operations analytics can play a significant role in a successful, efficient transition.

Akhil Sahai, Ph.D., is VP Product Management at Perspica.
