3 DevOps Lessons Learned While Scaling
July 15, 2016

Adam Serediuk
xMatters

Throughout my 15 years in operations I've noticed the same dilemma pop up within most organizations that practice traditional software development – many companies have gotten into the habit of "triage" development. They react to problems defensively, and because the strategy is reactive, triage development occupies most of the ops team's time.

When the ops team is constantly putting out fires, they have no time, people, or tools left for building the actual product. Make no mistake: the operational component is just as important as the product itself. Recognizing the cost of constant fire-fighting, many teams take a more proactive approach and try to predict breakdowns.

Failures are inevitable. However, by implementing a few best practices, operations professionals can take back control of their output and build more resilient systems. Here are three lessons I've learned that have allowed my teams to spend more time building up, rather than hunkering down:

1. Automate the B.S.

We've already established that spending your time putting out fires is a sure-fire way to guarantee you'll never be productive. So, should you automate your system completely and eliminate the need for human intervention in incidents? Maybe, but that's an awfully lofty goal to start. You may not need total automation.

The key is to find the right areas to automate. This should be determined strategically, but there will also be an element of trial and error. If you put automation in the wrong place or discover a newer, better way of doing things, don't be afraid to throw code out.
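
As a minimal sketch of the kind of repetitive toil worth automating first, consider a recurring disk-usage check that pages someone today. The threshold and the `check_disk` helper below are hypothetical, not from any particular tool:

```python
# Hypothetical sketch: automate a recurring disk-usage check instead of
# waiting for the pager. The threshold is illustrative only.
import shutil

WARN_THRESHOLD = 0.80  # flag a filesystem once it is 80% full


def check_disk(path="/"):
    """Return (used_fraction, over_threshold) for a mount point."""
    usage = shutil.disk_usage(path)
    used = usage.used / usage.total
    return used, used >= WARN_THRESHOLD


if __name__ == "__main__":
    used, alert = check_disk("/")
    status = "ALERT" if alert else "ok"
    print(f"/ is {used:.0%} full [{status}]")
```

A script like this is also easy to throw away: if you later move the check into a proper monitoring platform, nothing else depends on it.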

Some people think that constant change causes lines and logic to become muddled. I happen to believe the opposite. Constant change can eliminate problems. Ops teams should not be intimidated by change. Embracing change on a regular basis will make it less scary, and you'll see fewer fires as a result.

This must be balanced with automated testing and QA of Ops code and infrastructure. The same software development approach to unit testing and test plans can and should be applied to Ops, to eliminate regression and enable confidence in change.
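
To make that concrete, here is a hedged sketch of unit-testing automation code the same way you would product code. The `render_limits` helper is an invented example (a systemd-style limits drop-in), not from any specific tool:

```python
# Hypothetical sketch: treat Ops automation like product code and unit test it.
# render_limits is an illustrative helper, not part of any real framework.


def render_limits(service, max_open_files):
    """Render a systemd-style drop-in raising a service's open-file limit."""
    if max_open_files <= 0:
        raise ValueError("max_open_files must be positive")
    return f"# limits for {service}\n[Service]\nLimitNOFILE={max_open_files}\n"


def test_render_limits_sets_nofile():
    out = render_limits("api", 65536)
    assert "LimitNOFILE=65536" in out


def test_render_limits_rejects_bad_input():
    try:
        render_limits("api", 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for a non-positive limit")
```

With tests like these in place, throwing out and rewriting automation code stops being scary: the tests catch regressions before the change reaches an environment.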

2. Go Beyond IT Automation

Repeat after me: Deploying an enterprise IT automation platform is not the same as adopting DevOps. Developers, systems administrators and operations professionals use these platforms to manage the continuous integration/delivery pipeline that defines agile software development and manage system environments. While IT automation platforms are important for DevOps practitioners, they are in no way the foundation of the model.

Give equal focus to the process – the build, test, release, deploy and monitoring lifecycle – so you can iterate quickly on changes. I've seen far too many DevOps teams focus only on their automation code without giving adequate attention to the software development process and how this code fits into the larger picture. This means having ongoing conversations across all of your teams: QA, Development and Ops alike.

By ensuring conversations are ongoing, DevOps teams can deploy IT automation without making the situation more complicated for themselves. The team should be able to understand each deployment framework or tool selected to run automation code and where it fits into the big picture. This may mean team meetings and regular messaging, but, hey, communication is what DevOps is all about.

3. Reset Your Definition of Done

As we've seen, constant change is essential for avoiding problems. That's why startups are passing over traditional development for soft releases and continuous, everyday delivery. It's also more stimulating for the team when every day is different, and releasing small, incremental changes is safer than large monolithic releases. Recognizing the changing tide of software development, the industry has developed deployment tools that have unit and integration testing baked into them.

Thanks to these new tools, IT professionals are able to complete tasks to a fuller extent. Not only do they build, they also test and launch. With this power comes responsibility; there's no excuse for leaving anything short of "done." This is where product owners can enable and support the process, by giving equal importance to uptime, continuous delivery and testing as part of story planning.
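
The expanded definition of "done" above can be sketched as a simple release gate: a change only counts as done once it has built, passed its tests, and shipped. This is an illustrative skeleton under assumed stage functions, not a real pipeline:

```python
# Hypothetical sketch of a release gate: "done" means built, tested, AND
# launched. The stage functions below are placeholders for real tooling.


def run_pipeline(stages):
    """Run (name, stage) pairs in order; stop at the first failure."""
    for name, stage in stages:
        if not stage():
            return f"failed at {name}"
    return "done"


# Illustrative stages; real ones would invoke your build/test/deploy tools.
stages = [
    ("build", lambda: True),
    ("test", lambda: True),
    ("deploy", lambda: True),
]

print(run_pipeline(stages))
```

The point of the gate is that nothing short-circuits it: a change that builds but was never deployed, or deployed but never tested, never reports "done".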

One of my past roles was Lead Operations Engineer at a mobile and online gaming company, where my team and I built the software and cloud infrastructure. Recently, I spoke to my former boss and he noted how reliably the code we had built performed in the two years since I left. The secret was developing a cohesive system, a complete package, through continuous iteration. Investing in this process allowed us to build a product that adhered to the new definition of "done", which made it good enough to last.

Which brings us to what might be the most important lesson of all: software code should not exist separately from infrastructure code. The reason is simple – infrastructure without software is pointless and software can't exist without infrastructure.

Conway's Law states that "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations." In short, if teams don't talk to each other, parts won't talk to each other. By the same token, if a product is built to work as a whole, it will work as a whole. So, why not put the pieces together?

Adam Serediuk is Director of Operations at xMatters.
