Beginning Your Test Automation Journey
Part 1 of a three-part series on introducing and building test automation into your application development and deployment pipeline
April 01, 2019

Drew Horn
Applause

Many companies are embracing test automation in an effort to move more quickly and efficiently while saving on costs. Test automation continues to grow year over year. In fact, in 2017 KPMG found an 85 percent increase in test automation over a two-year period across all industry domains.

Automation is a tool that all companies should prioritize when operating in an Agile development environment. However, while one of the main goals of test automation is to move faster, maturing test automation within a deployment pipeline should be a progressive, iterative process. If you try to automate too much, or automate the wrong things, you can end up slowing down and even hurting development and testing efforts.

The success you have will hinge on how you set up your automation practice, so the more attention you pay to the early stages, the more benefits you will realize in the long run.

In your journey to mature test automation within your deployment pipeline, you will move through three distinct stages: beginner, intermediate, and expert. We will explore the beginner stage in part one of this series.

Taking the First Step

At its core, automation is all about doing things better, faster, and cheaper. It improves coverage by increasing the number of tests you can run and improves test quality through repeatability. However, you need to crawl before you can walk and eventually run, so, like most things, it is advisable to start small with your test automation efforts.

The beginner stage of test automation starts with assessing the maturity of your testing organization and defining a goal for where you want to go. You'll also need to develop a framework (or integrate an existing one) on which to run your unit, integration, and functional tests. You can assess your testing maturity by self-evaluating across five key criteria:

1. Team – What is the team makeup? Where are the expertise gaps? Are QA and Dev teams working in silos?

2. Technology – What does the deployment pipeline look like? How are automated tests triaged? How is automation integrated with a version control system (VCS), test case management (TCM) tool, or bug tracking system (BTS)?

3. Process – When are tests written? How are bugs triaged and tests updated? How fast are sprint cycles? How does feedback guide test strategy?

4. Reporting – What is the test coverage? What are the common devices used? How are test results viewed? How is a "go/no-go" decision made? What is the automation ROI?

5. Pain – What are the existing pain points? What bugs have been missed?

Once you’ve assessed your maturity and developed a framework, you will want to focus on one main thing: creating quick wins for your team. These early wins build confidence in your automation practice and get the full team invested.

Your development team can begin by checking in unit tests to immediately receive pass/fail feedback. After seeing these unit tests pass on a consistent basis, the team should then begin to prioritize developing a core set of functional smoke tests.
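To make this concrete, a first unit test can be just a few lines. Here is a minimal sketch using Python's pytest (one common framework choice; `apply_discount` is a hypothetical stand-in for your own application code):

```python
# test_pricing.py -- a minimal first unit test, run with `pytest`.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_reduces_price():
    assert apply_discount(100.0, 20) == 80.0

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(59.99, 0) == 59.99
```

Checking in tests like these alongside the code they cover gives the team that immediate pass/fail signal on every change.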

How often a set of smoke tests should be run initially will depend on your team's experience and bandwidth. A good rule of thumb is to start with a nightly smoke test that you can review each morning. This allows you to focus on code quality, test reliability, and process as opposed to performance and optimization. Eventually, though, you will want these automated tests up and running as part of your overarching build efforts so they are automatically triggered upon a successful build.
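One lightweight way to carve out a nightly smoke suite, assuming a pytest-based stack, is to tag high-value checks with a marker and schedule only that subset. The staging URL and health endpoint below are hypothetical placeholders for your own environment:

```python
# test_smoke.py -- a tagged functional smoke test (pytest + requests).
# Register the marker once in pytest.ini:
#   [pytest]
#   markers =
#       smoke: fast, high-value checks run nightly
import pytest
import requests

STAGING_URL = "https://staging.example.com"  # hypothetical environment

@pytest.mark.smoke
def test_service_is_up():
    response = requests.get(f"{STAGING_URL}/health", timeout=10)
    assert response.status_code == 200
```

A nightly cron entry or CI schedule then runs just that subset with `pytest -m smoke`, and the same command can later be wired to fire automatically on every successful build.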

Steady iteration, where success is built on success, will help you gain valuable experience with your automation. Once you see your early unit and smoke tests passing consistently, you can consider adding more smoke tests to your automation suite. In general, you will want to keep the runtime of your tests to 15 minutes or less. As you advance your automation practice, you will find additional ways to run more smoke tests in a 15-minute period, such as test parallelization. We'll cover these strategies in the next article.
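One way to keep that 15-minute budget honest, sketched below under the assumption of a pytest-based suite, is a small conftest.py hook that flags any run exceeding it:

```python
# conftest.py -- warn when the suite blows its runtime budget (sketch).
import time

RUNTIME_BUDGET_SECONDS = 15 * 60  # the 15-minute rule of thumb above
_session_start = 0.0

def pytest_sessionstart(session):
    global _session_start
    _session_start = time.monotonic()

def pytest_sessionfinish(session, exitstatus):
    elapsed = time.monotonic() - _session_start
    if elapsed > RUNTIME_BUDGET_SECONDS:
        print(f"\nWARNING: suite ran {elapsed:.0f}s, over the "
              f"{RUNTIME_BUDGET_SECONDS}s budget -- consider trimming "
              "tests or parallelizing (e.g., with pytest-xdist).")
```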

Though it’s not the most exciting part of test automation, setting up a documentation process is an important step in building a successful automation practice. The entire team should agree on a definition of "done" for the automation tasks written out during sprint planning. While developing and documenting your automation processes, you should also answer the following questions: When should tests first be run? How is the team notified when tests fail? Who is responsible for fixing failed tests? What is the process for triaging failures and logging bugs? Document the answers so everyone understands the protocol; otherwise, you can expect to run into problems as you scale up the speed and breadth of your practice.
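To make the notification question concrete, here is a minimal sketch of a pytest hook that posts each failure to a chat webhook. The webhook URL is a hypothetical placeholder; your team might instead rely on email or your CI server's built-in notifiers:

```python
# conftest.py -- post test failures to a team channel (sketch).
import requests

WEBHOOK_URL = "https://hooks.example.com/your-webhook-id"  # hypothetical

def pytest_runtest_logreport(report):
    # React only to failures in the test body itself ("call" phase).
    if report.when == "call" and report.failed:
        payload = {"text": f"Automation failure: {report.nodeid}"}
        try:
            requests.post(WEBHOOK_URL, json=payload, timeout=5)
        except requests.RequestException:
            pass  # a broken notifier should never fail the test run
```

Whatever mechanism you choose, the point is that the answer is written down and automated rather than left as tribal knowledge.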

How long you spend in the beginner stage of your automation journey will depend on a number of factors. In general, the hardest part of this stage is just getting the ball rolling. Try to set a goal of about eight weeks to stand up your initial test automation practice; this is a good pace that will allow you to keep momentum. When time constraints, lack of prior experience, or competing priorities are large factors, consider pulling in turn-key solutions to get up and running quickly. Remember that your strategy can always be adjusted later. It's much easier to make tweaks with an existing practice in place that can provide value to the team almost immediately.

Read Part 2: Building Confidence in Automation.

Drew Horn is Senior Director of Automation at Applause.
