The "Big DevOps" Bottleneck
October 11, 2016

Barry Phillips
Panzura

DevOps discussions typically center around process, culture, and technology. But if you work for a global financial institution or a high-end game developer, you probably wish someone would talk about scale.

In fact, the differences between DevOps and "Big DevOps" are non-trivial. Two scale-related attributes in particular make Big DevOps susceptible to bottlenecks that organizations working at smaller scale are far less likely to encounter:

1. The massive size of the application codebases

2. The need to distribute those massive codebases across multiple globally dispersed dev and test teams

Together, these two attributes can result in some pretty serious process bottlenecks that impede an organization's digital agility and seriously undermine its ability to compete in today's fast-moving markets.

That's why anyone leading a Big DevOps enterprise has to solve this codebase distribution problem.

Unfavorable Code-to-WAN Ratios

The primary cause of Big DevOps code distribution bottlenecks is the network. Enterprise WAN connections are just too narrow to accommodate massive codebases. Even if you spend lots of money on additional bandwidth and network acceleration, the ratio between your bits of code and your bits-per-second of network will invariably result in unacceptably slow transfers. The result is software delivery that keeps getting delayed with every distribution of every large code artifact.
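To put that ratio in concrete terms, here is a rough back-of-envelope calculation (the codebase size and bandwidth below are illustrative assumptions, not figures from any particular shop):

    # Rough transfer-time estimate for pushing a full codebase over a WAN link.
    # Assumed numbers for illustration: 500 GB of code and build artifacts,
    # 200 Mbps of usable bandwidth.
    codebase_gb = 500
    wan_mbps = 200

    bits_to_move = codebase_gb * 8 * 1000**3        # gigabytes -> bits
    seconds = bits_to_move / (wan_mbps * 1000**2)   # bits / (bits per second)

    print(f"~{seconds / 3600:.1f} hours per full transfer")  # ~5.6 hours

At those numbers, every full distribution ties up the link for the better part of a workday, and that is before retries, contention with other traffic, or fanning the same payload out to multiple sites.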

Of course, code distribution bottlenecks aren't the only cause of DevOps delays. Large, complex software projects can fall behind schedule for many reasons. But the code distribution bottleneck adds a chronic impediment that makes it impossible to ever make up lost time. So, as the saying goes, the hurrier you go, the behinder you get.

By itself, the cloud cannot address this Big DevOps bottleneck. Your teams simply can't work on codebases hosted in the cloud. They have to work locally. So the cloud presents two problems. First, all your teams all over the world have to keep downloading and uploading massive files. Second, you have to make sure everything everybody does everywhere stays in sync.

That's why Big DevOps requires an entirely different approach to codebase distribution.

For Global DevOps, Try Global Dedupe

Big DevOps will ultimately require you to adopt a hybrid hub-and-spoke model that lets you maintain a "gold copy" of your codebase in the cloud — while giving everyone everywhere a local copy that gets continuously updated with any changes to the current build. This model eliminates network-related bottlenecks while allowing your geographically dispersed teams to collaborate without tripping over each other's work.
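Setting any particular product aside, the mechanics of global dedupe can be sketched in a few lines: split files into chunks, hash them, and ship only the chunks the cloud hub has not already seen. The sketch below is a minimal, hypothetical illustration in Python; names like sync_to_hub and the commented-out upload_chunk call are assumptions for the example, not a real API:

    import hashlib

    CHUNK_SIZE = 4 * 1024 * 1024  # assumed 4 MB fixed-size chunks, for illustration only

    def chunk_hashes(path):
        """Yield (offset, digest, data) for each fixed-size chunk of a local file."""
        with open(path, "rb") as f:
            offset = 0
            while chunk := f.read(CHUNK_SIZE):
                yield offset, hashlib.sha256(chunk).hexdigest(), chunk
                offset += len(chunk)

    def sync_to_hub(path, hub_has):
        """Send only the chunks the cloud "gold copy" does not already have.

        hub_has: set of chunk digests already stored at the hub (hypothetical).
        Returns the (offset, digest) pairs that actually crossed the WAN.
        """
        sent = []
        for offset, digest, chunk in chunk_hashes(path):
            if digest not in hub_has:
                # upload_chunk(digest, chunk)  # hypothetical transport call
                hub_has.add(digest)
                sent.append((offset, digest))
        return sent

After the first full seed, each spoke moves only the deltas of the current build rather than the whole codebase, which is what takes the WAN out of the critical path.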

This hybrid cloud hub-and-spoke model has actually been used for years by CAD/CAM teams to address a very similar file distribution problem. And it does more than just eliminate process delays. It can also save you considerable sums of money — because you can spend less on your network and your local storage infrastructure.

Time, however, is the real enemy when it comes to digital deliverables. So if you're doing Big DevOps, take a hard look at your codebase distribution bottleneck. The future of your company may depend on it.

Barry Phillips is CMO of Panzura.
