Ensuring Efficient Test Environment Management is Part of a Healthy DevOps Diet
March 26, 2018

Bob Davis
Plutora

In 2011, Marc Andreessen, in his famous essay, ominously yet optimistically pointed out that "software is eating the world," and it's fair to say that it hasn't missed many meals since. Software is so ubiquitous throughout every business and industry that any organization with a crumb of dependence on information or communications must reimagine itself as a software company or risk being consumed by more efficient, capable and productive competition.

The software revolution that Andreessen predicted is in full swing and a new order is clearly settling into place. First, Borders and Barnes & Noble felt the impact of Amazon's online presence, and then Blockbuster ultimately met its end because of Netflix's appeal; music is in the hands of iTunes, Spotify and whatever Kanye and Jay-Z are up to; the most successful direct marketing platforms are Google, Facebook and Twitter; and Walmart, the world's largest brick-and-mortar retailer, has recently shown an aggressive appetite for ecommerce and online retail.

It's clear that whatever an organization or business is built to do, budgets are increasingly being invested in software programming tools and cloud-based services in order to compete in this new software-powered system.

Yet, with anything new comes challenges.

One place new challenges are particularly evident is in application development and the test environments that organizations rely on to maintain the continuous deployment cycle of software, and thereby hang onto their relevance. The size, complexity and importance of test environments are increasing along with the budgets needed to keep pace. Additionally, organizations are increasingly finding themselves managing environments that are on-premises, in the cloud and/or hosted by a third party – and often one environment can span all three. So, how can organizations efficiently manage their environments?

Managing test environments efficiently and consistently across the software test life cycle is a process that requires increased levels of automation and real-time collaboration in each phase of an application's lifecycle. It's a challenge, but the benefits of doing it are worth the effort.

Let's examine three key elements of the process that are indispensable when managing test environments.

1. End-to-End Visibility

Seamlessly running DevOps requires an end-to-end perspective while embracing continuous delivery. To be truly end-to-end, with visibility and constant feedback, a business needs to realize it's not about one single tool; it's a chain of solutions pulled together into a single platform that provides an overarching view. This enables a continuous integration and continuous delivery workflow with feedback, visibility and the ability to deliver services to customers quickly.

Having that end-to-end visibility gives both Dev and Ops teams a single place to monitor activities as they evolve across the entire portfolio of applications and the varying combinations of on-premises, cloud and third-party environments. Comprehensive end-to-end management that delivers test plans, spots defects, supports agile-based testing and provides precise reporting gives organizations a booking system to keep each environment in line and make sure it is operating as efficiently as possible.
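As an illustration of that overarching view, here is a minimal sketch in Python, using made-up environment names and fields rather than any particular tool's API, that pulls on-premises, cloud and third-party environments into one status report.

# A minimal sketch of an end-to-end environment view. The environment
# names and fields are hypothetical, not any vendor's data model.
from dataclasses import dataclass

@dataclass
class Environment:
    name: str
    hosting: str          # "on-premises", "cloud" or "third-party"
    current_release: str
    status: str           # e.g. "available", "in-use", "down"

def portfolio_view(environments):
    """Group environments by hosting type so Dev and Ops share one picture."""
    view = {}
    for env in environments:
        view.setdefault(env.hosting, []).append(
            f"{env.name}: {env.status} ({env.current_release})"
        )
    return view

envs = [
    Environment("SIT-1", "on-premises", "release-4.2", "in-use"),
    Environment("UAT-1", "cloud", "release-4.1", "available"),
    Environment("PERF-1", "third-party", "release-4.2", "down"),
]

for hosting, entries in portfolio_view(envs).items():
    print(hosting)
    for entry in entries:
        print("  " + entry)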

2. Improved Quality of Work

If a business has misconfigured environments, it risks creating false positives. The common problem with managing environments is tracking them. When unsound environments are used for testing, the project slows down and work gets pushed through the pipeline without proper assurances. The end goal should be maintaining quality, so making sure the right test environment is available and utilized whenever new code arrives is essential for efficiency and accuracy.

Efficient test environment management also eliminates the complications of misconfiguration and helps ensure the quality of tests and, ultimately, the quality of the end product. Quality is maintained by automatically reaching into source code control systems to make changes available to test teams, which can then apply them to accurate test cases and confirm complete test coverage.
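To make the misconfiguration point concrete, here is a minimal sketch, with hypothetical configuration keys and values, of the kind of check that confirms an environment matches what the build under test expects before any tests run, so a bad configuration cannot turn into a false positive.

# A minimal sketch of pre-test environment validation. The expected
# configuration keys and values below are made up for illustration.
EXPECTED = {
    "app_version": "4.2.0",
    "db_schema": "v17",
    "feature_flags": "checkout_v2",
}

def validate_environment(actual, expected=EXPECTED):
    """Return a list of mismatches; an empty list means testing can proceed."""
    problems = []
    for key, want in expected.items():
        got = actual.get(key)
        if got != want:
            problems.append(f"{key}: expected {want!r}, found {got!r}")
    return problems

issues = validate_environment({"app_version": "4.2.0", "db_schema": "v16"})
if issues:
    print("Environment not ready for testing:")
    for issue in issues:
        print("  - " + issue)
else:
    print("Environment matches the build under test.")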

Quality can be assured when software is tested properly and testing and development teams work together seamlessly. Efficient test environments help remove the dependence on spreadsheets and improve quality and predictability.

Simply put, an efficient platform closes the loop, streamlines communication across the company and ensures a higher quality of work.

3. Streamline Deployment for Quicker Time to Market

Efficient environment management provides real-time visibility across the enterprise portfolio and establishes a single source of truth to align teams and identify and resolve resource conflicts. Using an interactive environment map, and the functionality of a continuous delivery pipeline, enterprises can create a structure to converge fast-moving continuous delivery activities, helping delivery teams to streamline the progression of code through each phase of a release.

Large enterprises increasingly find that releases are tightly coupled, causing more projects to approach the testing stage in parallel, which in turn creates more environment conflicts and possible delays if a release gets derailed. Some businesses have thousands of test environments made up of a combination of on-premises, cloud and third-party infrastructure. Release managers can't possibly track the status of the delivery pipeline manually, and it can be incredibly difficult to spot risk. What's needed is visibility, governance and the ability to manage version control to bring efficiencies into the pipeline.
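As a concrete illustration, the following minimal sketch, using invented environments, release names and dates, shows the kind of booking-conflict detection a single source of truth makes possible before two parallel releases collide.

# A minimal sketch of booking-conflict detection. Environments, releases
# and dates are invented; each booking is (environment, release, start, end).
from datetime import date
from itertools import combinations

bookings = [
    ("UAT-1", "release-4.2", date(2018, 4, 2), date(2018, 4, 13)),
    ("UAT-1", "release-4.3", date(2018, 4, 9), date(2018, 4, 20)),
    ("PERF-1", "release-4.2", date(2018, 4, 2), date(2018, 4, 6)),
]

def conflicts(bookings):
    """Yield pairs of bookings that claim the same environment on overlapping dates."""
    for a, b in combinations(bookings, 2):
        same_environment = a[0] == b[0]
        dates_overlap = a[2] <= b[3] and b[2] <= a[3]
        if same_environment and dates_overlap:
            yield a, b

for a, b in conflicts(bookings):
    print(f"Conflict on {a[0]}: {a[1]} overlaps with {b[1]}")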

Test environments along the pipeline grow more complex as code moves closer to production. Tracking test execution rates and defect rates gives teams a better ability to assess schedule risk, streamline deployment and deliver a quicker time to market. It also gives release managers visibility and confidence that test coverage is complete and accurate.
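As a rough illustration of how those rates can feed a schedule-risk estimate, here is a deliberately simple model with purely illustrative numbers.

# A minimal schedule-risk sketch: tests still to run, how many run per day,
# and the share expected to fail and need one extra run after rework.
def days_to_finish(tests_remaining, tests_per_day, defect_rate):
    """Estimate remaining test days, counting one extra run per expected failure."""
    expected_failures = tests_remaining * defect_rate
    total_runs = tests_remaining + expected_failures
    return total_runs / tests_per_day

# 400 tests remaining, 50 executed per day, 10% expected to fail and be re-run.
print(round(days_to_finish(400, 50, 0.10), 1), "days of testing remain")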

No one should expect that taking these steps toward managing test environments will be easy; it's a complex, daunting task that requires detailed planning, a strong team and a supporting budget. But if we press forward with the knowledge that our world relies more than ever on software and that success depends on finding the most efficient ways to feed its growth, then we are on the right track.

According to Constellation Research, more than half of the companies that made up the Fortune 500 in 2000 are no longer on the list. Each has its own circumstances, but when the figure climbs above fifty percent, there is likely more at work than individual circumstance. We've witnessed a sea change in business models and in the tools that organizations rely upon most, and we need to be aware of that. The world is being consumed by software; now we must understand the best ways to manage this reality.

Bob Davis is CMO at Plutora
