We Don't Need Another Hero
May 27, 2016

Dave Josephsen
Librato

Someone recently asked me this question: "What's the first thing that comes to your mind when you hear the word 'DevOps'?"

A loaded question, I agree, and of course I lied and made up something about the "first way." I mean really, if you want a possibly embarrassing answer to a loaded question, you should confront me with it face to face, in a public place. If the asker of that question had done so, I would have had to answer honestly that the first thing I think about when I hear the word DevOps is Brent from The Phoenix Project.

If you haven't read it: Brent is probably you. That person who knows how all the stuff actually works, and whom everyone depends on to fix things when they go sideways. Brent is a hero, and DevOps abhors heroes. They constrain the productivity of the system overall and create single points of failure. They enable broken systems to limp along instead of allowing them to fail and be replaced. So in the book, Brent is basically a huge organizational problem.

I was deeply hurt by this plot device on my first reading. In fact, if I hadn't been trapped on a plane with nothing else to read, I probably wouldn't have finished The Phoenix Project because of it (which would totally have been my loss). DevOps is an emergent property of failure in classical IT management, and it's just hard to wrap your head around the fact that sometimes failure needs to happen before we can pick up the pieces and make something great from them.

Further, the DevOps end-game is hard to visualize, because in many ways it's orthogonal to what preceded it. Many organizations just can't get there until they've failed. Getting there isn't easy either way, so we talk a lot about the improvement kata and "getting there," and why you'd want to be there in the first place. What it looks like day-to-day once you've arrived is something you almost never hear about.

What does that mean? Well, to understand that, I need to digress a little into what we make for a living, which happens to be time-series databases.

One big problem with designing time-series databases is that read and write patterns are very different from each other. Typically we want to optimize for writes, because we write a lot more than we read, but reads also need to be pretty efficient or no one will want to use the UI. One way we minimize read latency is to use a rotating row key. That is, we simply preface the name of a given metric with a key that changes every so often, which in turn forces the DB to create a new row to store the data. Breaking the data up like this helps us keep the rows down to a predictable size, and allows us to parallelize queries over large datasets (i.e., we can read out a bunch of chunks of predictably sized data at the same time rather than one huge chunk).
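
To make that concrete, here's a minimal sketch of the idea in Python. The 6-hour interval matches the example later in this piece, but the key format and function name are my own illustration, not our actual schema:

```python
# A minimal sketch of a rotating row key, assuming 6-hour intervals and Unix
# timestamps. The "bucket:metric" key format is illustrative only.
ROW_INTERVAL = 6 * 60 * 60  # rotate the row key every 6 hours

def row_key(metric_name: str, timestamp: int) -> str:
    """Prefix the metric name with the start of the interval it falls in."""
    bucket = timestamp - (timestamp % ROW_INTERVAL)  # floor to interval start
    return f"{bucket}:{metric_name}"

# Two measurements an hour apart land in the same row...
print(row_key("metric:foo", 1464307200))  # -> "1464307200:metric:foo"
print(row_key("metric:foo", 1464310800))  # -> "1464307200:metric:foo"
# ...while one seven hours later forces the database to start a new row.
print(row_key("metric:foo", 1464332400))  # -> "1464328800:metric:foo"
```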

Predictable is something of a key word there, because really what row keys buy you is a heap of different kinds of very important predictability. With row keys, we know our rows will always be the size of a measurement times the number of measurements in a row-key interval. From there we can extrapolate all kinds of things, like how much data we'll need to put on the wire for client reads, how much storage and processing power our data tier will require, and so on. The point is, we can make really important decisions by relying on the predictability that row keys provide.
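
As a back-of-the-envelope example (the measurement size and reporting interval below are made-up numbers, not ours), that arithmetic looks like this:

```python
# Hypothetical row-size math: measurement size x measurements per interval.
MEASUREMENT_BYTES = 16           # assumed size of one stored measurement
RESOLUTION_SECONDS = 10          # assumed reporting interval per metric
ROW_INTERVAL_SECONDS = 6 * 3600  # the 6-hour row-key rotation

measurements_per_row = ROW_INTERVAL_SECONDS // RESOLUTION_SECONDS  # 2160
bytes_per_row = measurements_per_row * MEASUREMENT_BYTES           # 34560

print(f"{measurements_per_row} measurements/row, ~{bytes_per_row} bytes/row")
```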

But here's the rub. We're a multi-tenant system. In other words, we have users, and users have the power to name things. Uh-oh. If users can put an arbitrarily sized label on a data point, then we no longer have a predictably sized data structure. That's OK; we can isolate that variable by creating a predictably sized UID which maps back to the end user's human-readable label.

So when you use a row key to buy predictable quantities in the data tier, you often have to pay some taxes in the form of lookup tables, or indexes if you prefer. In this example we're going to need two indexes: one to keep track of where in our storage tier a given 6-hour block of measurements is stored (because our rows are named after rotating numbers now, so we need some way to actually find the right data when a user asks for 60 minutes of metric:foo), and another to map user-generated, variable-length names to hashes or some other unique identifier (so we know what to even search for in the first index when a user asks for 60 minutes of metric:foo).
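
Here's a toy sketch of those two taxes side by side. The structures, the SHA-1-derived UID, and the storage-node strings are all illustrative assumptions; real indexes would live in a datastore, not in Python dicts:

```python
import hashlib

ROW_INTERVAL = 6 * 60 * 60

name_to_uid = {}     # index 2: variable-length user names -> fixed-size UIDs
rowkey_to_node = {}  # index 1: (uid, interval start) -> storage location

def uid_for(name: str) -> str:
    """Map an arbitrary user-chosen metric name to a predictably sized UID."""
    uid = hashlib.sha1(name.encode()).hexdigest()[:16]
    name_to_uid[name] = uid
    return uid

def record_row(uid: str, timestamp: int, node: str) -> None:
    """Remember which storage node holds the row for this 6-hour block."""
    bucket = timestamp - (timestamp % ROW_INTERVAL)
    rowkey_to_node[(uid, bucket)] = node

# Reading "60 minutes of metric:foo" then means: resolve the name via index 2,
# find the covering 6-hour block(s) via index 1, and fetch those rows.
uid = uid_for("metric:foo")
record_row(uid, 1464307200, "storage-node-3")
print(rowkey_to_node[(uid, 1464307200)])  # -> "storage-node-3"
```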

Ok, now we know pretty much everything we need to re-examine the problem. Ben, the ops guy in that conversation, is tracking what he considers to be a resource allocation problem. The symptom Ben is reacting to here is high CPU utilization.

Dave Josephsen is the Developer Evangelist for Librato.
