June 26, 2014

Making Your Automated Tests Faster

By Ann Marie Fred.  See original post here.

One part of my job is helping other teams adopt DevOps in general, and continuous delivery in particular. But I have a problem: many of them have a suite of automated tests that run slowly; so slowly that they only run a build, and the tests that run in the build, about once per day. (Automated test run times of 8-24 hours are not uncommon.) There are several reasons why this is the case, including:

The artifacts that are produced from the build, and then copied over to the test servers, are very large (greater than 1 GB in size). Also, sometimes the artifacts are copied across continents.

Sometimes there are multiple versions of the build artifacts that must be copied to different test servers after the build. A typical product I deal with will support at least a dozen platforms; a few support around 100 different platforms, when you multiply the number of supported operating system versions times the number of different components (client, server, gateway, etc.) times 2 (for 32- and 64-bit hardware).

Often, the database(s) for the product must be loaded with a large amount of test data, which can take a long time to copy and load.

Many products have separate test teams writing test automation. Testers who are not developers tend to write tests that run through the UI, and those tests are usually slower than developers’ code-level unit tests.

Running builds and tests often, so developers know when they make a change that breaks something else, is a key goal of both continuous integration and continuous delivery. Ideally, a developer should get feedback on whether their code is “ok”, using a quick personal build and test run, within 5 minutes. Anything over 10 minutes is definitely too slow; the developer will probably move on to something else, make more changes, and forget exactly what was changed for that particular test.

Once the quick tests pass, the developer can run a full set of tests and then integrate the tested changes. Or, in cases where a full set of tests is extremely slow, the developer can integrate his or her code changes once the quick tests pass, and then let the daily build run the full set of tests.

I proposed an open space session on this topic at DevOps Days Rome 2012. We brainstormed ways to make automated tests run more quickly. We focused more on quick builds for personal tests, but most of these ideas would make the full set of tests faster too. Many thanks to the dozens of smart people who contributed their ideas. I don’t even have their names, but they know who they are. I’m sure we’ll use several of these ideas right away.

Fail quickly

Often, tests are run in a silly order - alphabetically, for example, or in the order that they were created. We can be much smarter about the order in which we run our tests.

Run a quick smoke test first

Run a quick smoke test to make sure that the infrastructure and services your tests will need are available. Can you ping the servers you need, connect to your databases, etc.? Ideally, this should only take a few seconds to a minute. If some critical part of the infrastructure is missing, there's no reason to waste time setting up the rest of the tests.

By the way, running smoke tests often on your production servers is a good idea too.
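As a rough sketch of such a pre-suite check, here is what a minimal smoke test could look like in Python; the hostnames and ports are placeholders for whatever servers, databases, and queues your tests actually depend on.

    # smoke_test.py -- verify required infrastructure is reachable before running the suite.
    # The hostnames and ports below are placeholders; substitute your own servers.
    import socket
    import sys

    SERVICES = [
        ("app-server.example.com", 443),      # application server
        ("db-server.example.com", 5432),      # database
        ("mq-server.example.com", 5672),      # message queue
    ]

    def smoke_test(timeout=3):
        failures = []
        for host, port in SERVICES:
            try:
                # A plain TCP connect is enough to prove the host and port are reachable.
                with socket.create_connection((host, port), timeout=timeout):
                    pass
            except OSError as err:
                failures.append(f"{host}:{port} unreachable ({err})")
        return failures

    if __name__ == "__main__":
        problems = smoke_test()
        for problem in problems:
            print("SMOKE TEST FAILED:", problem)
        sys.exit(1 if problems else 0)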

Run a small set of tests that fail often next

With experience, it's easy enough to identify a small set of tests that seem to fail more often than the others. Run those next. Ideally, this should only take a few minutes. If this takes more than 5 minutes, reduce the number of tests in this bucket.
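If you use pytest, for example, one lightweight way to do this is to tag those tests with a marker and sort them to the front of the collected test list; the marker name "fragile" below is just an example.

    # conftest.py -- run tests marked "fragile" (the ones that fail most often) before the rest.
    # Register the marker (e.g. in pytest.ini: "markers = fragile: tests that fail often").
    import pytest

    def pytest_collection_modifyitems(config, items):
        # list.sort() is stable, so the relative order within each group is preserved.
        items.sort(key=lambda item: 0 if item.get_closest_marker("fragile") else 1)

    # In a test module, mark the tests that have a history of failing:
    @pytest.mark.fragile
    def test_checkout_with_expired_session():
        ...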

Run tests that never fail later

If you have tests for code that has not changed in a long time, and those tests never fail, you can run them near the end of the test run.

Run slow tests last, or not at all

Run the slowest test buckets last, as long as earlier tests don't depend on them. As a side note, ideally you would run your tests in a different order now and then; sometimes a test won't fail unless it's run in the wrong (or is that the right?) order.
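With pytest, a small hook can shuffle the collected tests on selected runs; the SHUFFLE_TESTS environment variable below is an arbitrary name for this sketch, and plugins such as pytest-randomly offer the same idea with more control.

    # conftest.py -- occasionally shuffle test order to expose hidden inter-test dependencies.
    import os
    import random

    def pytest_collection_modifyitems(config, items):
        if os.environ.get("SHUFFLE_TESTS") == "1":
            seed = random.randrange(1_000_000)
            print(f"Shuffling test order with seed {seed}")   # print the seed so a failing order can be investigated
            random.Random(seed).shuffle(items)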

Run in parallel

Run test buckets in parallel. Sad but true: often, tests are run sequentially. Look for ways to run many tests in parallel, even if it means deploying many VMs to do the testing. When you can deploy many VMs cheaply and automatically, then you should take advantage of that to make the testing faster.

Use snapshots of databases or VMs to make it easier to run tests in parallel

You can configure VMs more quickly if the saved snapshot/image that you start with is close to the final system that you need. There's an art to putting enough of your configuration into the VM images to make your deployment fast, without getting into a state where you have far too many VM images to manage.

You can also create database backups with a lot of your test data pre-loaded, rather than relying on SQL statements or running code to populate the database. In some cases you may be able to leave a database (or several databases) up and running, and reuse the same data over and over again. In other cases you may be able to have a pool of databases that are ready for testing, assign one database from the pool to a test suite, and then revert the database to its original state after the test suite has executed.
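To make the first point concrete, here is a minimal sketch of fanning out independent test buckets from a driver script; the bucket commands are hypothetical, and each bucket must not share state with the others. (Within a single pytest run, the pytest-xdist plugin does something similar.)

    # run_buckets.py -- launch independent test buckets in parallel and report the results.
    import subprocess
    import sys
    from concurrent.futures import ThreadPoolExecutor

    BUCKETS = [
        ["pytest", "tests/api"],
        ["pytest", "tests/database"],
        ["pytest", "tests/reports"],
    ]

    def run_bucket(cmd):
        result = subprocess.run(cmd, capture_output=True, text=True)
        return " ".join(cmd), result.returncode

    with ThreadPoolExecutor(max_workers=len(BUCKETS)) as pool:
        results = list(pool.map(run_bucket, BUCKETS))

    for name, code in results:
        status = "passed" if code == 0 else f"failed (exit {code})"
        print(f"{name}: {status}")
    sys.exit(0 if all(code == 0 for _, code in results) else 1)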

Break up tests into smaller groups

Divide your application into components, and test the changed components. Most software can be divided into logically separate components, with clearly defined interfaces to, and dependencies on, other components. Once you do that, you can build and test the components separately before testing them together. If you only have to re-test your own component in personal builds, you won't have to wait as long for results.
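As a rough sketch of that idea, assuming a repository where each top-level directory is a component with its tests under tests/<component> (the layout and base branch here are assumptions):

    # run_changed.py -- run only the test directories for components touched by the current change.
    import subprocess
    import sys

    def changed_components(base="origin/main"):
        out = subprocess.run(
            ["git", "diff", "--name-only", base],
            capture_output=True, text=True, check=True,
        ).stdout
        # "client/session.py" -> "client"; deduplicate to get the set of touched components.
        return sorted({path.split("/")[0] for path in out.splitlines() if path.strip()})

    if __name__ == "__main__":
        failed = False
        for component in changed_components():
            result = subprocess.run(["pytest", f"tests/{component}"])
            failed = failed or result.returncode != 0
        sys.exit(1 if failed else 0)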

Automatically determine which tests to run when code changes

There are some very snazzy systems out there which can automatically decide what tests to run, based on the code that was changed. This is a topic that's too broad for this blog post.

Save time on I/O

Mock responses. Instead of contacting actual external services, in personal builds you can mock the services you're not trying to test. For example, you can record the expected response to each request, and play that back. This saves you the time spent in the request/response round trip, and if you are doing many round trips in your tests, that time can add up. There are many open source and commercial tools available to help with this. Green Hat is a great service simulation tool that was recently acquired by IBM.
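In Python, for instance, the standard library's unittest.mock can stand in for the external call and return a recorded response; the module and function names below are hypothetical.

    # A sketch of replacing a slow external service call with a canned, recorded response.
    from unittest import mock

    RECORDED_RESPONSE = {"sku": "ABC-123", "quantity": 3}

    def test_reorder_logic_without_the_network():
        # "myapp.inventory_client.get_stock_level" and "myapp.ordering" are placeholder names.
        with mock.patch("myapp.inventory_client.get_stock_level",
                        return_value=RECORDED_RESPONSE):
            from myapp.ordering import needs_reorder
            # No network round trip happens here; the mock answers instantly.
            assert needs_reorder("ABC-123", threshold=5) is True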

Use LXC (Linux Containers)

Sometimes you don't need to set up a new virtual machine for testing. LXC may be enough, and it's faster.

Move servers and data so they are close to one another

If you have to copy data across networks, or even across continents, take a long hard look and see if there's a way to avoid it. Even if this means spending money on hardware, software, and setup, it might be worth it.

Make your test infrastructure faster

This seems obvious, but sometimes we forget this option. Use faster hardware, or faster VMs. Do performance tuning on your test infrastructure, including the systems, storage, networks, and so on. For example, some companies use BitTorrent to transfer large files much more quickly.

Cache what you can

This is going to depend on your specific application, but you should cache what you can, without making your tests less useful or less accurate.
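For example, setup that every test needs can often be done once per test run instead of once per test. A pytest sketch, where the data file path is a placeholder for whatever slow setup your suite performs:

    # conftest.py -- load expensive shared test data once per session and reuse it everywhere.
    import json
    import pytest

    @pytest.fixture(scope="session")
    def reference_data():
        # Runs once for the whole test run; every test that requests this fixture shares the result.
        with open("tests/data/reference.json") as f:
            return json.load(f)

The usual caveat applies: tests must treat shared, cached data as read-only, or the cache starts making results less accurate.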

Remove slow tests

Set a maximum run time for each test, and enforce it. If an individual test takes a long time to run, it will slow down everything and everyone else. If a test is too slow, you can improve the test itself to make it faster. Or, it may be that the performance of that part of your application is too slow, in which case you should fix the performance of the application.

People have different ideas on how to enforce this. Some companies created a report of the slowest tests each day or week, and told people to fix them. Others put a timer on each test and automatically made the test fail after a certain amount of time. The idea is that the people writing the tests should feel the pain when their tests are too slow.
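The pytest-timeout plugin is one ready-made way to enforce such a budget; the decorator below sketches the same idea by hand with SIGALRM (Unix only), and the 5-second limit is arbitrary.

    # A sketch of a per-test time budget enforced with SIGALRM.
    import functools
    import signal

    def max_runtime(seconds):
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                def on_timeout(signum, frame):
                    raise TimeoutError(f"{fn.__name__} exceeded its {seconds}s budget")
                previous = signal.signal(signal.SIGALRM, on_timeout)
                signal.alarm(seconds)
                try:
                    return fn(*args, **kwargs)
                finally:
                    signal.alarm(0)                        # cancel any pending alarm
                    signal.signal(signal.SIGALRM, previous)
            return wrapper
        return decorator

    @max_runtime(5)
    def test_report_generation():
        ...   # fails with TimeoutError if it runs longer than 5 seconds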

Replace some UI tests with code-level tests

If a UI test is slow, and you can test the same thing by testing the API without the UI, then it's usually faster to replace the UI test with an API test.
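For instance, a flow that takes a browser-driven test a minute or more can often be checked in a second or two with a direct API call; the endpoint, payload, and expected fields below are made up for illustration.

    # A sketch of testing the same behaviour through the API instead of driving the UI.
    import requests

    def test_create_order_via_api():
        response = requests.post(
            "https://test.example.com/api/orders",
            json={"sku": "ABC-123", "quantity": 2},
            timeout=10,
        )
        assert response.status_code == 201
        assert response.json()["status"] == "pending"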

Replace some tests with monitoring

In some cases, it may be OK to release code that is not fully tested and rely on monitoring to catch any problems. In reality, no code is 100% tested. It's just a question of how much testing you need for your specific business environment.

A few people also suggested that your running components can report their health back to a monitoring server on a regular basis.

Note that it's easier for monitoring to help you find where and when problems were introduced if you have "deployment lines" in your metrics -- markers showing when changes were made, with a link to what those changes were.
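As a sketch of the health-reporting idea, with a made-up monitoring endpoint, service name, and payload:

    # A component sending a periodic heartbeat to a monitoring server (all names are placeholders).
    import threading
    import time
    import requests

    def report_health(interval_seconds=60):
        while True:
            try:
                requests.post(
                    "https://monitoring.example.com/health",
                    json={"service": "order-service", "status": "ok", "timestamp": time.time()},
                    timeout=5,
                )
            except requests.RequestException:
                pass   # a monitoring outage should never take the component down
            time.sleep(interval_seconds)

    # Run the heartbeat in the background so it doesn't block the component's real work.
    threading.Thread(target=report_health, daemon=True).start()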
