Fail Forward, Fail Fast - Part 2
August 24, 2017

Mike Cuppett
Author of "DevOps, DBAs, and DBaaS"

Being able to deploy distinct code elements quickly, matched with the ability to deploy the next release version or the previous version, facilitates moving forward even on failure. The small program unit minimizes the production impact of a failure: maybe only a few people experience the problem, instead of the large set of application users affected when a large code deployment goes wrong.
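As a concrete illustration (not taken from the book), a release step such as the minimal Python sketch below tracks the current and previous versions so a failed deployment can be rolled forward or back in a single move. The state file name and the activate_release() helper are hypothetical placeholders.

```python
import json
from pathlib import Path

STATE_FILE = Path("release_state.json")  # hypothetical state store


def load_state():
    """Read the current/previous release pointers, defaulting to none."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"current": None, "previous": None}


def activate_release(version):
    # Placeholder: repoint a symlink, container tag, or load balancer at `version`.
    print(f"activating release {version}")


def deploy(version):
    """Deploy a small, distinct release unit and remember what it replaced."""
    state = load_state()
    state["previous"], state["current"] = state["current"], version
    activate_release(version)
    STATE_FILE.write_text(json.dumps(state))


def roll_back():
    """Fail fast: re-deploy the previous version in a single step."""
    previous = load_state()["previous"]
    if previous is None:
        raise RuntimeError("no previous release recorded")
    deploy(previous)


if __name__ == "__main__":
    deploy("1.4.1")
    deploy("1.4.2")
    roll_back()  # 1.4.1 is active again; the team fixes forward in the next small release
```

Because each release unit is small, either direction (forward to the next fix or back to the prior version) carries little risk.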

Start with Fail Forward, Fail Fast - Part 1

Continuous Integration, Continuous Testing

Besides implementing small code segments, there are two additional reasons why fail forward has proven successful: continuous integration and testing.

For DBAs whom you mentor, that means shifting direction from isolated islands of specific tasks to direct inclusion in the code-producing effort. Code, schema changes, and even job-scheduling tasks have to assimilate into the software development process, including the way DBA code is built, tested, version controlled, and packaged for release. Server clones, each built from the same script, eliminate platform variability, making application systems more resilient. For the same reason, all software has to be managed without variability from start to finish; the only exception is new or modified code requested by the business or customers.
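One way to picture that assimilation, offered purely as an illustrative sketch rather than anything prescribed in the book, is to treat schema changes as version-controlled migration scripts that the same pipeline applies in every environment. The directory layout and schema_version table below are assumptions for the example.

```python
import sqlite3
from pathlib import Path

# Version-controlled SQL files, e.g. 001_create_orders.sql, 002_add_index.sql
MIGRATIONS_DIR = Path("db/migrations")


def apply_pending_migrations(conn):
    """Apply any migration scripts not yet recorded in this database."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}

    for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if script.stem in applied:
            continue  # this environment already has the change
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_version (version) VALUES (?)", (script.stem,))
        conn.commit()
        print(f"applied {script.name}")


if __name__ == "__main__":
    # The same script runs in every environment the pipeline builds,
    # so dev, test, and production stay on identical schema versions.
    apply_pending_migrations(sqlite3.connect("app.db"))
```

The point is not the specific tool: database changes ride the same build, test, and release path as application code, so no environment drifts.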

The continuous flow of code into production may initially disorient DBAs, because the release and postrelease support model has been a brutalizing cultural norm for decades. It goes like this: deployment night means pulling an all-nighter, getting a little sleep, and then being called back into the office because the business is supposedly about to implode on itself (a total distortion of reality) if the problem is not fixed promptly.

After hours of troubleshooting, someone discovers that the C++ library was not updated on the production system, causing the updated code to run incorrectly against the older library files. In this case, the production system obviously was a huge variable: upgrading the compiler and its libraries was separate work that was missed as the release progressed. Variability burns you nearly every time.

When the production system has to remain in place, the best move is to clone the nonprod environments from the production server. Once the first nonprod server is built, the process can be automated to manage additional server builds.

When something like an upgrade to the C++ libraries is needed, test for backward compatibility; if the test succeeds, upgrade production, clone production, and start the nonprod builds. When older code fails (perhaps because of deprecated commands or libraries) and forces the upgrade to be bundled into a larger release of all the code that must be modified for the new libraries, very stringent change management processes must be followed. This scenario is becoming rarer because agile development and database management tools have been built to overcome these legacy challenges.
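The backward-compatibility gate can be as simple as the following sketch, which is an illustration rather than the author's procedure: run the existing regression suite against the upgraded libraries in a throwaway environment and proceed only when it passes. The pytest command is an assumed example of whatever suite the pipeline already runs.

```python
import subprocess
import sys


def backward_compatible(test_cmd):
    """Run the existing regression suite against the upgraded libraries."""
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout)
        print(result.stderr, file=sys.stderr)
    return result.returncode == 0


if __name__ == "__main__":
    # Assumed example: the same test suite the CI pipeline already executes.
    if backward_compatible(["pytest", "tests/", "-q"]):
        print("compatible: upgrade production, clone it, and rebuild nonprod")
    else:
        print("incompatible: fold the upgrade into a larger, change-managed release")
        sys.exit(1)
```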

Tools of the Trade

Agile development and DevOps have not only changed how code is built, tested, released, and supported, and how teams collaborate to be successful; new suites of tools have also been built specifically to transform the SDLC. There is a movement away from waterfall project management, in which code progresses serially from development through testing, integration, quality assurance, and production.

New opportunities to create applications in weeks or even days have led to products being produced and then held for release until the company can be officially formed and readied for business operations. That reality did not seem possible just 10 years ago.

Powerful tools have enabled businesses to move from "scrape together a little money, spend most of the money forming the company, start coding, go hungry, sleep in the car, beg for more money from family and friends, visit Mom and Dad to get laundry done and consume real food, and release version 1 in desperation, hoping to generate enough revenue to fix numerous bugs to be released as version 2" to an early-capture revenue model in which the application is built and readied to release and generate revenue, possibly even while the paperwork to form the company is underway.

Imagine releasing an application on the day the company comes into existence, possibly even recognizing revenue on day 1. Today, if the product is even modestly successful, the continuously growing revenue stream allows focus to shift toward new products instead of figuring out where the next meal comes from. Tools empower possibilities.

Best time ever for software startups!

Working with tools comes easily for DBAs. Logically developing process flows to incorporate database administrative tasks accelerates the SDLC. The biggest challenge may be selecting which tools are needed from among the plethora of popular DevOps tools.

As DBAs progress through the stages necessary to transition, become educated and share knowledge, learn that small failures are part of the plan, morph their tasks into the mainstream workflow, and become tool experts, DevOps teams grow stronger through shared experiences, technical skills, improved collaboration, and (most important) trust.

This blog is an excerpt from Mike Cuppett's book: DevOps, DBAs, and DBaaS

Mike Cuppett is a Business Resiliency Architect for a Fortune 25 healthcare organization and the author of "DevOps, DBAs, and DBaaS: Managing Data Platforms to Support Continuous Integration."