The 5 Longest Lead Times in Software Delivery - Part 2
March 09, 2017

Mark Levy
Micro Focus

Every enterprise IT organization is unique, with its own bottlenecks and constraints in its deployment pipelines. That said, some common problem areas typically produce the longest lead times in the software delivery process. Here are three more of the most common areas that generate the longest lead times.

Start with The 5 Longest Lead Times in Software Delivery - Part 1

3. Environment Management and Provisioning

The effective and efficient management of dev, test, and production environments is critical to a successful release deployment. The combination of increased business requests, a large number of applications, and multiple application infrastructures has exponentially increased the complexity of managing these environments. There is nothing more demoralizing to a dev team than having to wait for an environment to test a new feature. Lack of environment availability, or contention for shared environments, can create extremely long lead times, delay releases, and increase the cost of release deployments. Dev and test environments are also often misconfigured, or so different from production, that teams end up with production problems despite having passed preproduction testing.

Creating these environments is a highly repetitive task that should be documented, automated, and put under revision control. You need a process to schedule, manage, track, and control all of the environments in your deployment pipeline. Automated, self-service environment provisioning streamlines the process and reduces lead times, and the environments you create need to be as "production-like" as possible. Your developers will also be far more productive and happier. As you automate the provisioning of your environments, your MTTR (mean time to repair) will drop significantly, because you can replace an environment at a moment's notice and begin to move toward an immutable infrastructure.
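To make this concrete, here is a minimal sketch in Python of what a version-controlled environment definition and a self-service provisioning entry point might look like. The EnvironmentSpec fields and the provision() function are illustrative placeholders assumed for this example; in a real pipeline the provisioning step would call your infrastructure tooling of choice (Terraform, Ansible, a cloud SDK, and so on).

```python
# A minimal sketch, assuming a hypothetical in-house provisioning layer:
# a version-controlled environment definition plus an idempotent
# provisioning entry point. Not a real provisioning API.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EnvironmentSpec:
    name: str                      # e.g. "dev", "qa", "prod"
    app_version: str               # the build under test
    instance_count: int = 1
    config: dict = field(default_factory=dict)

def provision(spec: EnvironmentSpec) -> None:
    """Create (or recreate) an environment from its spec.

    Because the spec fully describes the environment, it can be torn
    down and rebuilt at a moment's notice -- the basis of immutable
    infrastructure. Here the "work" is simulated with print statements.
    """
    print(f"Provisioning '{spec.name}' with {spec.instance_count} "
          f"instance(s) of version {spec.app_version}")
    for key, value in spec.config.items():
        print(f"  setting {key}={value}")

if __name__ == "__main__":
    # The same spec format, checked into revision control, drives every
    # environment -- only the parameters differ, which keeps each one
    # as "production-like" as possible.
    qa = EnvironmentSpec(name="qa", app_version="1.4.2",
                         instance_count=2, config={"DB_POOL_SIZE": "10"})
    provision(qa)
```

The point of the sketch is the shape, not the details: the environment definition lives in revision control next to the code, and rebuilding an environment is as cheap as re-running the script.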

4. Manual Software Deployments

People should not move or deploy the "bits"; machines are far better and much more consistent at deploying applications than humans. You would be surprised at the number of organizations that still deploy their code manually. Automating manual deployment tasks is one of the first things you should look at. You can get a lot of quick wins with automation, and this approach can be delivered rapidly without major organizational changes. The initial effort to document and automate your deployment processes pays off once you start letting the machines perform the work. It is not uncommon for organizations to see deployment lead times reduced by over 90%.

Automate your code and configuration deployments with a single set of deployment processes across all environments, and ensure that they deploy from the same source. Deploying the same way everywhere is extremely efficient in both time and cost: because the same process is used, it gets tested more often, and any environment-specific issues are easier to identify. All preproduction deployments should be rehearsals for the final deployment into production. The more automated this process is, the more repeatable and reliable it will be, so when it's time to deploy to production, you will be ready. This translates into dramatically lower lead times and less downtime, and it keeps the business open so that it can make money.
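As an illustration, here is a minimal sketch in Python of a single deployment routine parameterized by target environment, so every deploy, dev through prod, exercises the same tested code path. The environment names, the artifact handling, and the digest check are assumptions for the example, not a prescribed implementation.

```python
# A minimal sketch: one deployment routine for all environments, so the
# process that deploys to prod is the same one rehearsed everywhere else.
# Environment names and artifact handling are hypothetical.
import hashlib
import pathlib

ENVIRONMENTS = ("dev", "qa", "staging", "prod")

def deploy(artifact: pathlib.Path, environment: str) -> None:
    if environment not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {environment}")
    # Verify we deploy the same bits everywhere by checking the
    # artifact's digest against the build that was promoted.
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    print(f"Deploying {artifact.name} (sha256={digest[:12]}...) to {environment}")
    # ... push the artifact, apply environment-specific config,
    # restart services -- the steps are identical for every target ...

if __name__ == "__main__":
    # Each preproduction deployment is a rehearsal for production:
    # identical process, identical artifact, different target.
    artifact = pathlib.Path("app-1.4.2.tar.gz")
    artifact.write_bytes(b"example build output")  # stand-in for a real build
    for env in ENVIRONMENTS:
        deploy(artifact, env)
```

Only the target changes between runs; the code path, and therefore the confidence you have in it, stays the same.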

5. Manual Software Testing

Once the environment is ready and the code is deployed, it's time to test that the code works as expected and doesn't break anything else. The problem is that most organizations today still test their code base manually. Manual software testing drives lead times up because the process is slow, error prone, and expensive to scale across large organizations. As the velocity of software delivery increases, you have to keep adding testers at an ever-increasing rate just to keep pace with the volume of changes, and manual testing still provides lower overall coverage. The time and expense of manual testing force organizations into a "batch and queue" mode, which slows the overall flow and dramatically increases lead times.

Automated testing is a prime area to focus on when you need to reduce lead times. It is less expensive, more reliable and repeatable, provides broader coverage, and is much faster. There is an initial cost to developing the automated test scripts, but much of it can be absorbed by shifting manual testers into "Test Development Engineer" roles focused on automated API-based testing. Over time, your manual testing costs and lead times will go down as your quality improves.
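For example, here is a minimal sketch of an automated API-based test written with pytest and the requests library. The BASE_URL and the endpoints are hypothetical stand-ins for your own service; in practice, tests like these would run automatically in the deployment pipeline on every build.

```python
# A minimal sketch of automated API-based testing with pytest and
# requests. BASE_URL and the endpoints are hypothetical -- point them
# at your own service. Run with: pytest test_api.py
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_health_endpoint_returns_ok():
    # A fast smoke test that runs on every deployment, not once per release.
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200

def test_unknown_route_returns_404():
    # Negative cases are cheap to add once the harness exists.
    response = requests.get(f"{BASE_URL}/no-such-route", timeout=5)
    assert response.status_code == 404
```

Because tests like these run in minutes on every change, the "batch and queue" pattern disappears: feedback arrives while the change is still fresh in the developer's mind.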

Summary

The velocity and complexity of software delivery continue to increase as businesses adapt to new economic conditions. Optimizing and automating your deployment pipelines will dramatically reduce your lead times and enable you to deliver software faster and with better quality. Delivering software faster means businesses can innovate and test new ideas more quickly, deliver features and bring on new revenue streams sooner, and stay agile enough to respond immediately to marketplace opportunities, events, and trends.

Mark Levy is Director of Strategy at Micro Focus
