Why Choosing the Right Data Path Can Make or Break DevOps Projects - Part 1
October 21, 2019

Jonathan Parnell
Insight Enterprises

You often hear that data is the new oil. This valuable, ever-changing commodity has begun to play a starring role in many cloud-native applications. Yet, according to a number of DevOps teams, data issues continue to plague their efforts to continuously integrate, test and deploy frequent software releases. More specifically, issues with persistent data (and its underlying database engine) often appear to be the culprit.

Your organization might be pursuing emerging IoT applications that incorporate and analyze sensor data from multiple sources. Or, your DevOps team may be trying to develop applications that extract further actionable insight about customers. Whatever the use case, there's no doubt that back-end database architectures have become increasingly important to the success of such projects. Yet, in many cases, those database systems appear to have difficulty just keeping up with the pace of today's DevOps pipelines.

According to one report on the state of database deployments in 2019, 46% of DevOps teams on an accelerated release schedule (with weekly or, even, daily releases) found it "extremely or very difficult" to speed their database release cycles accordingly. In a related Redgate report on the "2019 State of Database DevOps," 20% of respondents cited slow development and release cycles as one of the biggest drawbacks to "traditional siloed database development practices." Another 23% saw a higher risk of deployment failure and extra downtime when database changes were introduced in a traditional database environment.

Getting to the Heart of the Data Problem

Are such database issues the result of choosing the wrong underlying database technology, such as an RDBMS, NoSQL or even a NewSQL system? Possibly.

Are database issues also caused by poor (or non-existent) communication between database experts and their developer counterparts? Undoubtedly, this is true as well.

Would you be surprised to learn that both situations (wrong technology and poor communication) are often to blame? Also to blame is the urge to solve new data problems in the same old way that organizations approached their earlier, legacy use cases.

Ultimately, as with many things DevOps-related, the answer to the data problem is likely to require change along three separate fronts: people, process and technology. It will also require a fundamental shift in how such issues are approached at the start.

Starting with People and Process

Instead of worrying about database slowdowns, what if you rethought how DevOps teams approach the many facets of the underlying data layer from the start? This means:

1. From the start of a project, include database administrators (DBAs) as part of the cross-functional DevOps team. They will help promote healthy cross-communication. They will also positively influence the development of appropriate underlying data layers and infrastructure that support your emerging use cases.

(In larger organizations that manage many data sets, these may be specialized DBAs. In smaller companies, they may be full-stack engineers with deep expertise in database operations.)

2. Identify early the different data types, domains, boundaries and optimal cloud-native patterns associated with your data. Once these are established, the team gains a better understanding of the different approaches that may be needed to develop applications and data architectures for each new use case.

In effect, organizations need to make a concerted effort to "shift left" with their overall data architecture discussions. This shift allows more of the right questions about data to be asked, answered and incorporated early on in the design of the overall application.
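One way some teams make these early data discussions concrete is to capture their domain and storage decisions as code that lives in the same repository as the application, so the decisions get reviewed alongside application changes. The sketch below is a minimal, hypothetical illustration in Python; the domain names, owners and storage patterns are invented for the example and are not prescribed by any particular tool or by this article.

```python
from dataclasses import dataclass
from enum import Enum


class StoragePattern(Enum):
    """Cloud-native data patterns a team might choose per domain (illustrative)."""
    RELATIONAL = "relational"      # e.g., a managed RDBMS
    DOCUMENT = "document"          # e.g., a NoSQL document store
    TIME_SERIES = "time_series"    # e.g., sensor/IoT telemetry
    EVENT_STREAM = "event_stream"  # e.g., an append-only event log


@dataclass(frozen=True)
class DataDomain:
    """One bounded data domain and the decisions made about it up front."""
    name: str
    owner: str               # team (including its DBA) accountable for the domain
    data_types: tuple        # the kinds of data the domain holds
    storage: StoragePattern  # the pattern chosen for this use case
    shared_externally: bool  # whether other services may read its data directly


# Hypothetical registry for an IoT-style application; the entries would come
# from the team's own design sessions, not from this sketch.
DATA_DOMAINS = [
    DataDomain(
        name="sensor-telemetry",
        owner="platform-team",
        data_types=("device readings", "timestamps"),
        storage=StoragePattern.TIME_SERIES,
        shared_externally=False,
    ),
    DataDomain(
        name="customer-profile",
        owner="crm-team",
        data_types=("accounts", "preferences"),
        storage=StoragePattern.RELATIONAL,
        shared_externally=True,
    ),
]


def review_boundaries(domains):
    """Flag domains that expose their data store to other services, so the
    question is raised during design review rather than after deployment."""
    return [d.name for d in domains if d.shared_externally]


if __name__ == "__main__":
    print("Domains needing an explicit access contract:",
          review_boundaries(DATA_DOMAINS))
```

Because the registry is plain code, a change to a domain's boundary or storage pattern shows up in the same review as the application change that motivates it, which is one practical way to shift those questions left.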

Read Why Choosing the Right Data Path Can Make or Break DevOps Projects - Part 2

Jonathan Parnell is Senior Digital Transformation Architect at Insight, Cloud & Data Center Transformation