Cloud Cuckoo Land: When Cloud Customers Get Locked In
April 01, 2016

Sven Dummer
Loggly

Back in the day when the word “cloud” was only used for those things in the sky, software typically ran on local machines sitting in local datacenters. Often, this software was licensed from a vendor on a per-seat basis. Vendors tried their best to make customers renew their license agreements, and vendor lock-in was a common phenomenon.

Such lock-in could be caused by a variety of factors: sometimes it was created deliberately by vendors who made migration to competing products unnecessarily hard; sometimes it was caused by customers naïvely betting too much of their software stack or infrastructure on a single vendor's products.

Over the past decade, two factors have made it significantly easier to escape that legacy lock-in model. First, free/open-source software has reached a level of maturity and adoption in almost every area of the industry, offering more and more viable alternatives to commercial products.

Secondly, the commoditization of cloud computing removed the need for companies to run their own datacenter.

A Storm's A-Brewin'

Cloud providers often claim to offer the perfect escape path from legacy vendor lock-in by providing modular solutions in which customers only pay for what they use, instead of being tied into complex, long-term, per-seat licensing agreements.

However, most cloud providers (in particular the big players) have extended their portfolio over the past few years, offering a broad variety of services that go beyond computing and storage. They provide everything from load balancing and DNS, to messaging, monitoring, log management, analytics, databases, and much more.

It's very tempting for customers to replace more and more components running in their own colos with these in-cloud solutions. Why? Because they are typically easy to set up, and they eliminate the costs of hardware ownership and datacenter footprint.

It's also an easy purchase because a contract with the cloud provider is already in place, and adding (or removing) a service is not much more than a mouse click — no need to negotiate with a sales rep over complicated long-term license agreements, as was common with many legacy, on-premise solutions (and no need for the provider to resort to fear-based sales tactics and related nuisances).

I Can See Clearly Now …

With so many advantages, it is easy to be blind to (or willingly ignore) the fact that these services have huge lock-in potential — and many of these services probably exist for exactly that reason. The more of them a customer uses, the more difficult it becomes to move their application stack to another cloud provider, to their own datacenter, or to a hybrid solution.

Whenever companies decide to move services to the cloud, it is worth thoroughly reviewing the architecture specifically from the perspective of potential vendor lock-in. It may also be worth choosing solutions that live and function outside of a specific vendor's cloud, even if that is less convenient and perhaps more expensive. It's not unlikely that today's convenience and savings will turn into tomorrow's nightmare and cost explosion. There is no such thing as a harmless, one-vendor dependency — not even in the most comfortable and fluffiest cloud.
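One practical way to keep that kind of architecture review honest is to make sure application code never talks to a provider-specific API directly, but only to a thin interface you own. The sketch below is a minimal, hypothetical illustration of the idea (the names BlobStore and LocalBlobStore are invented for this example, not taken from any real library): the application depends on a two-method storage interface, and a provider-backed implementation could be swapped in behind it without touching the callers.

```python
# Hypothetical sketch: hide a cloud service behind a small interface you own,
# so swapping providers means swapping one class, not rewriting the app.
from abc import ABC, abstractmethod
from pathlib import Path


class BlobStore(ABC):
    """The storage abstraction the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class LocalBlobStore(BlobStore):
    """One concrete backend: plain files on disk. A cloud-backed backend
    (e.g. one wrapping a provider's object-storage SDK) would implement
    the same two methods behind the same interface."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        (self.root / key).write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()


def archive_report(store: BlobStore, name: str, content: bytes) -> None:
    """Application code depends only on BlobStore, never on a vendor API."""
    store.put(name, content)
```

The abstraction isn't free — it costs a little indirection and you forgo some provider-specific features — but it turns a potential re-architecture into a localized backend swap, which is exactly the trade-off the paragraph above argues for.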

Sven Dummer is Senior Director of Product Marketing at Loggly.
