Getting Bimodal IT Right: Challenges and Choices - Part 2
September 28, 2016

Akhil Sahai
Perspica

Nobody said transitioning to a more dynamic and continuous process would be easy. However, failure, fear and skepticism should not give people license to remain stuck in their legacy systems or rush headlong into change. Let's examine what this term "bimodal IT" actually means, why it makes sense in some cases and how to ease the pain of transition.

Start with Getting Bimodal IT Right: Challenges and Choices - Part 1

Managing the Transition

The reason Gartner brought the idea of bimodal IT to light was to create breathing space so that organizations could transform and innovate without crashing and burning. The reason that Agile was created, for instance, was to enable a faster, more responsive process than waterfall practices can offer. However, switching to continuous delivery and integration mode too quickly could prove disastrous for certain systems, as some changes carry more inherent risk than others. The following are key elements to consider during the transition to ensure that applications continue to run at optimal levels.

Whichever mode apps are running in, they are doing so on infrastructures that are more complex and dynamic than ever, with underlying resources constantly changing to meet these applications' performance requirements. You need visibility into all your data — including performance data, logs and topology — and the ability to visualize all layers of your application infrastructure stack in one place at any point in time. This allows you to identify the root cause of an outage or performance degradation in the past or the present. These tools can also provide the capability to understand the impact of a software release on operations in the Continuous Delivery and Integration mode (Mode 2). In the absence of such tools, conducting definitive post-mortem analysis is a costly, manual and confusing process — if it can be done at all.
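One way to picture this kind of cross-layer correlation is to join anomalies against recent change events (deploys, config pushes) by timestamp. The sketch below is illustrative only — the field names and the fixed lookback window are assumptions, not any particular product's API:

```python
def events_near(anomaly_ts, change_events, window=300):
    """Return change events that landed within `window` seconds
    before an anomaly — candidate root causes when replaying an
    incident across the application and infrastructure layers.

    change_events: list of dicts with illustrative keys
    "ts" (epoch seconds) and "desc" (what changed).
    """
    return [e for e in change_events
            if 0 <= anomaly_ts - e["ts"] <= window]

# Example: an anomaly at t=1000 surfaces the deploy at t=950
# and the config push at t=800, but not older or later changes.
changes = [
    {"ts": 600, "desc": "scale-out"},
    {"ts": 800, "desc": "config push"},
    {"ts": 950, "desc": "app deploy v2.3"},
    {"ts": 1100, "desc": "unrelated later change"},
]
suspects = events_near(1000, changes)
```

A real implementation would draw both streams from a telemetry store and weight candidates by topological proximity, but the core join-by-time idea is the same.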

IT is becoming increasingly complex and dynamic as it undergoes this transformation. There is a big data problem brewing in IT. Relying solely on traditional IT monitoring tools that trigger numerous alarms makes the job of IT operations teams even more difficult. Sifting through the sea of alarms and telemetry data while making intelligent decisions in real time poses a major challenge for IT operations teams. AI — especially machine learning — is well suited to take all the data and generate the necessary operational intelligence to distinguish critical, service-impacting events from false positives that do not require the immediate attention of an operator. As IT transitions, you need IT operations intelligence that can handle both modes of operation.
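To make the idea concrete, here is a minimal sketch of statistical alarm filtering: a simple z-score test against a metric's recent baseline stands in for a trained machine-learning model, suppressing alerts that fall within normal variation. The function name and threshold are assumptions for illustration:

```python
import statistics

def is_service_impacting(metric_history, current_value, threshold=3.0):
    """Flag a reading only if it deviates strongly from the metric's
    recent baseline — a stand-in for the anomaly models real
    operational-intelligence platforms train per metric."""
    mean = statistics.fmean(metric_history)
    stdev = statistics.stdev(metric_history)
    if stdev == 0:
        return current_value != mean
    z_score = abs(current_value - mean) / stdev
    return z_score > threshold

# A reading inside normal variation is suppressed; a spike is surfaced.
latency_ms = [100, 102, 98, 101, 99, 100, 103, 97]
quiet = is_service_impacting(latency_ms, 101)   # within baseline
noisy = is_service_impacting(latency_ms, 250)   # large deviation
```

Production systems use far richer models (seasonality, multivariate correlation), but the principle is the same: score each event against learned normal behavior instead of static thresholds.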

Predicting issues before they become problems is the key to preventing outages. A companion problem to the one above is that traditional monitoring tools trigger alerts only after a problem has already occurred. Look for solutions that incorporate predictive analytics to alert you to anomalous trends or potentially dangerous issues before they impact your application.
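The simplest form of such predictive alerting is trend extrapolation: fit a line to recent samples of a metric (disk usage, queue depth) and estimate when it will breach a limit, so the alert fires before the outage. This sketch uses an ordinary least-squares slope; the function name and units are illustrative:

```python
def intervals_until_breach(samples, limit):
    """Fit a least-squares line to equally spaced samples and
    estimate how many sampling intervals remain until `limit`
    is crossed. Returns None when the trend is flat or falling."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    slope = num / den
    if slope <= 0:
        return None  # not trending toward the limit
    intercept = mean_y - slope * mean_x
    # Project forward from the most recent sample.
    return (limit - intercept) / slope - (n - 1)

# Disk usage climbing 10 GB per interval toward a 100 GB limit:
eta = intervals_until_breach([10, 20, 30, 40], limit=100)
```

An operations team would alert when the projected breach falls inside a planning horizon — say, "disk full within 6 intervals" — rather than waiting for the threshold itself to trip.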

To ease and manage this transition, automated solutions that analyze and provide insight into ever-changing applications and infrastructure topologies are essential. Equipping users with the ability to replay and analyze past incidents and to pinpoint performance degradation and root cause, while cutting out the noise and preventing future costly outages and downtime, is important to facilitate the transition. This operational intelligence connects enterprise DevOps and TechOps teams, giving them what they need to quickly address issues as they arise.

Eyes on the Prize

There's no one solution that will work for every organization when it comes to digital transformation. IT operations teams must take a long, hard look at which aspects can proceed to Mode 2 and which need to remain in Mode 1 for the time being. Transition times will vary, and some organizations will arrive fully at Mode 2 faster than others, but that's fine – it's not a race. Bimodal IT was never intended to be a permanent fix but a step in the right direction toward agile and dynamic IT. By providing visibility into systems and activities while keeping the business functioning without disruption, IT operations analytics can play a significant role in a successful, efficient transition.

Akhil Sahai, Ph.D., is VP Product Management at Perspica.
