Fail Forward, Fail Fast - Part 2
August 24, 2017

Mike Cuppett
Author of "DevOps, DBAs, and DBaaS"

Being able to deploy distinct code elements quickly, combined with the ability to roll forward to the next release version or back to the previous one, facilitates moving forward even on failure. A small program unit minimizes the production impact of a failure: perhaps only a few people experience the problem, rather than the large set of application users affected when a big code deployment goes wrong.
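
As a rough sketch of that idea (the script names and version numbers here are hypothetical, not from the book), a small deployment wrapper can push one unit, check its health, and restore the previous version automatically if the check fails:

    import subprocess
    import sys

    def deploy(version: str) -> None:
        # './deploy_unit.sh' is a placeholder for whatever tooling actually ships the unit.
        subprocess.run(["./deploy_unit.sh", version], check=True)

    def healthy() -> bool:
        # Placeholder smoke test; in practice this hits a real endpoint or test suite.
        return subprocess.run(["./smoke_test.sh"]).returncode == 0

    def release(next_version: str, previous_version: str) -> None:
        deploy(next_version)
        if healthy():
            print(f"{next_version} is live; any fix ships forward in the next small release.")
        else:
            # Small blast radius: only this one unit rolls back, not the whole application.
            print(f"{next_version} failed its checks; restoring {previous_version}.")
            deploy(previous_version)
            sys.exit(1)

    if __name__ == "__main__":
        release("2.4.1", "2.4.0")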

Start with Fail Forward, Fail Fast - Part 1

Continuous Integration, Continuous Testing

Besides implementing small code segments, there are two additional reasons why fail forward has proven successful: continuous integration and continuous testing.

For the DBAs you mentor, that means shifting direction from isolated islands of specific tasks to direct inclusion in the code-producing effort. Code, schema changes, and even job-scheduling tasks have to assimilate into the software code process, including the way DBA code is built, tested, version controlled, and packaged for release. Server clones, each built from the same script, eliminate platform variability, making application systems more resilient. For this reason, all software has to be managed without variability from start to finish. The only exception is new or modified code requested by the business or customers.
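
One minimal way to picture this (the directory, table, and database engine below are illustrative assumptions; any migration tool follows the same pattern) is a small runner that applies numbered SQL files from version control and records what has been applied, so schema changes flow through build and test exactly like application code:

    import sqlite3                      # stand-in engine; the idea applies to any RDBMS
    from pathlib import Path

    MIGRATIONS = Path("db/migrations")  # schema changes live in version control with the code

    def applied_versions(conn):
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version TEXT PRIMARY KEY)")
        return {row[0] for row in conn.execute("SELECT version FROM schema_version")}

    def migrate(conn):
        done = applied_versions(conn)
        for script in sorted(MIGRATIONS.glob("*.sql")):   # e.g. 001_create_orders.sql
            version = script.stem
            if version in done:
                continue
            conn.executescript(script.read_text())        # the DBA-authored change
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
            conn.commit()
            print(f"applied {version}")

    if __name__ == "__main__":
        # The same runner executes in every environment the pipeline builds,
        # so database changes are built, tested, and packaged like application code.
        migrate(sqlite3.connect("app.db"))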

The continuous flow of code into production may initially disorient DBAs because the release and post-release support model has been a brutalizing cultural norm for decades. The pattern looks like this: deployment night means pulling an all-nighter, then getting a little sleep before being called back into the office because the business is about to implode on itself (a total distortion of reality) if the problem is not fixed promptly.

After hours of troubleshooting, someone discovers that the C++ library was not updated on the production system, causing the updated code to run incorrectly against the older library files. In this case, the production system obviously was a huge variable: it required a separate compiler upgrade that was missed as the release progressed. Variability burns you nearly every time.

When the production system has to remain in place, the best move is to clone the nonprod environments from the production server. Once the first nonprod server is built, the process can be automated to manage additional server builds.
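
A simple parity check can confirm that the clones really do match production; this sketch assumes SSH access and an RPM-based build, both of which are illustrative choices rather than anything prescribed here:

    import subprocess

    SERVERS = ["prod-db-01", "qa-db-01", "dev-db-01"]      # illustrative hostnames

    def package_versions(host):
        # 'rpm -qa' assumes an RPM-based build; swap in dpkg -l, pip freeze,
        # or a config-management query as appropriate.
        out = subprocess.run(["ssh", host, "rpm", "-qa"],
                             capture_output=True, text=True, check=True)
        return set(out.stdout.splitlines())

    def report_drift():
        baseline_host, *others = SERVERS
        baseline = package_versions(baseline_host)
        for host in others:
            drift = baseline.symmetric_difference(package_versions(host))
            status = "matches" if not drift else f"drifts by {len(drift)} package(s) from"
            print(f"{host} {status} {baseline_host}")

    if __name__ == "__main__":
        report_drift()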

When something like an upgrade to the C++ libraries is needed, test for backward compatibility; if successful, upgrade production, clone production, and start the nonprod builds. When older code fails (perhaps due to deprecated commands or libraries) and forces the upgrade to be folded into a larger release of all the code that must be modified for the new libraries, very stringent change management processes must be adhered to. This scenario is becoming rarer because agile development and database management tools have been built to overcome these legacy challenges.
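
Conceptually, the backward-compatibility gate can be as simple as this sketch: provision a throwaway environment with the proposed library, run the existing regression suite, and only then decide whether production gets the upgrade (the script names are placeholders, not real tooling):

    import subprocess
    import sys

    def compatible_with(new_library_version: str) -> bool:
        # Both commands are placeholders for the team's real provisioning and test tooling.
        subprocess.run(["./build_test_env.sh", new_library_version], check=True)
        return subprocess.run(["./run_regression_suite.sh"]).returncode == 0

    if __name__ == "__main__":
        version = sys.argv[1] if len(sys.argv) > 1 else "9.4.0"
        if compatible_with(version):
            print(f"{version} is backward compatible: upgrade production, clone it, "
                  "and rebuild the nonprod servers from the clone.")
        else:
            print(f"{version} breaks existing code; fold the upgrade into a larger, "
                  "change-managed release instead.")
            sys.exit(1)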

Tools of the Trade

Agile development and DevOps have not only changed how code is built, tested, released, and supported, and how teams collaborate to be successful; new suites of tools have also been built specifically to transform the SDLC. There is a movement away from waterfall project management: the serialized code progression that starts with development and then proceeds to testing, integration, quality assurance, and production.

New opportunities to create applications in weeks or even days have led to products being produced and then held for release until the company can be officially formed and readied for business operations. That reality did not seem possible a short 10 years ago.

Powerful tools have enabled businesses to move from "scrape together a little money, spend most of the money forming the company, start coding, go hungry, sleep in the car, beg for more money from family and friends, visit Mom and Dad to get laundry done and consume real food, and release version 1 in desperation, hoping to generate enough revenue to fix numerous bugs to be released as version 2" to an early-capture revenue model in which the application is built and readied to release and generate revenue, possibly even while the paperwork to form the company is underway.

Imagine releasing an application on the day the company comes into existence, possibly even recognizing revenue on day 1. Today, if the product is even conservatively successful, the continuously growing revenue stream allows the team to focus on new products instead of figuring out where the next meal is coming from. Tools empower possibilities.

Best time ever for software startups!

Working with tools comes easily to DBAs. Logically developing process flows that incorporate database administrative tasks accelerates the SDLC. The biggest challenge may be selecting which tools are needed from among the plethora of popular DevOps tools.
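
As a hypothetical example of such a process flow (the step commands are placeholders for whatever tools the team ultimately selects), database tasks can be chained so that any failing step stops the release and fails fast:

    import subprocess
    import sys

    # Each step is a small, scriptable DBA task; the commands are placeholders
    # for whatever tools the team ultimately selects.
    PIPELINE = [
        ("backup",  ["./backup_database.sh"]),
        ("migrate", ["python", "migrate.py"]),
        ("verify",  ["./run_data_checks.sh"]),
    ]

    def run_flow():
        for name, command in PIPELINE:
            print(f"step: {name}")
            if subprocess.run(command).returncode != 0:
                # Fail fast: stop the flow and surface the failing step.
                print(f"step '{name}' failed; halting the release flow")
                sys.exit(1)
        print("all database steps passed; hand off to the next pipeline stage")

    if __name__ == "__main__":
        run_flow()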

As DBAs progress through the stages necessary to transition (becoming educated and sharing knowledge, learning that small failures are part of the plan, morphing their tasks into the mainstream workflow, and becoming tool experts), DevOps teams become stronger through shared experiences, technical skills, improved collaboration, and, most importantly, trust.

This blog is an excerpt from Mike Cuppett's book: DevOps, DBAs, and DBaaS

Mike Cuppett is a Business Resiliency Architect for a Fortune 25 healthcare organization and the author of "DevOps, DBAs, and DBaaS: Managing Data Platforms to Support Continuous Integration."
