Published April 25, 2020
by Doug Klugh
Code & Test Optimization
Build your software and your tests in ways that minimize test execution time. Common causes of long-running tests include over-engineered test fixtures, asynchronous code, components with high latency, Test Overlap, and too many tests due to a tightly coupled architecture. Slow tests create bottlenecks in Continuous Integration, inhibit the rapid feedback that automated testing is meant to provide, and constrain the frequent code merges required for Trunk-Based Development.
Here are some solutions to these common causes for slow tests:
You will often experience high latency when the System Under Test (SUT) contains one or more components that are slow to respond. One common example is code that interacts with a database. Even fast-running database queries introduce latency. A single query may not extend test execution time much by itself, but when many tests across your suite initiate database queries, and those tests are repeated over and over, that latency adds up quickly. And if your tests write to the database... forget about it!
One solution to resolving high latency is to replace those slow components with test doubles. In the example of a component integrating with a database, a good type of test double is a Fake Object. A Fake could facilitate the use of an in-memory data structure as a substitute for the database — which would certainly execute much faster than an actual database.
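As a minimal sketch of that idea (the repository interface and all names below are invented for illustration, not taken from the article), a Fake backs the same interface as the database-bound component with a plain in-memory map:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical port that the SUT depends on; names are illustrative.
interface UserRepository {
    Optional<String> findName(int id);
    void save(int id, String name);
}

// Fake Object: a fully functional in-memory substitute for the real
// database-backed implementation, trading fidelity for speed.
class InMemoryUserRepository implements UserRepository {
    private final Map<Integer, String> rows = new HashMap<>();

    @Override
    public Optional<String> findName(int id) {
        return Optional.ofNullable(rows.get(id));
    }

    @Override
    public void save(int id, String name) {
        rows.put(id, name);
    }
}
```

Tests then construct the SUT with the `InMemoryUserRepository` instead of the real database-backed implementation, eliminating query latency entirely.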
Extended test execution often occurs when test methods build their own Fresh Fixtures for each test case. Building Shared Fixtures provides one solution to this problem, provided the fixtures are immutable. Reusing the same instance of a test fixture across multiple tests saves us from destroying and recreating the fixture for each test. Any objects that tests need to modify or delete should still be built by each test as a Fresh Fixture.
Using a Persistent Fixture allows some state (or resource) to persist from test to test. Sharing resources across a suite of tests can be accomplished in JUnit 4 using the @BeforeClass annotation to designate suite setups and the @AfterClass annotation to designate suite teardowns. This will reduce the amount of fixture setup performed by each test.
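The sketch below illustrates the shape of an immutable Shared Fixture (all names invented). The expensive setup runs once for the whole test class rather than once per test; in JUnit 4 that one-time setup and teardown would live in static methods annotated @BeforeClass and @AfterClass, but here a static initializer stands in so the example runs without the framework on the classpath:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Illustrative Shared Fixture: built once, read by every test.
class PriceTableFixture {
    static int setupCount = 0;               // counts how often setup ran
    static final Map<String, Integer> PRICES;

    static {
        setupCount++;                        // stand-in for expensive setup
        Map<String, Integer> m = new HashMap<>();
        m.put("apple", 3);
        m.put("banana", 1);
        PRICES = Collections.unmodifiableMap(m);  // immutable, so safe to share
    }

    // Two "tests" that read, but never modify, the shared fixture.
    static boolean applePriceIsCorrect()  { return PRICES.get("apple") == 3; }
    static boolean bananaPriceIsCorrect() { return PRICES.get("banana") == 1; }
}
```

Because the fixture is wrapped in `unmodifiableMap`, no test can mutate it, which is what makes sharing it across tests safe.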
Code that has an asynchronous interface is inherently difficult and time-consuming to test because those objects cannot be exercised with direct method calls. The test must first spawn a thread or process, then introduce explicit delays to ensure the executable is running before interacting with it. Not only does this add complexity to the tests, it also makes them run much longer than (ordinary) synchronous tests. And here's the kicker... you really won't know how long is long enough to ensure that thread or process is up and running. Because of this inherent variability, you will want to wait longer than is actually needed to keep your test runs consistent. Otherwise, your tests can fail for reasons completely unrelated to the SUT, giving you false alarms.
Even if you add a two-second delay within each test, those seconds add up very quickly over multiple tests. In contrast, consider that we can usually run hundreds of ordinary tests each second.
To solve this issue, we need to decouple the logic from the asynchronous access mechanism. The Humble Object design pattern provides an effective method for restructuring asynchronous code so it can be tested in a synchronous manner.
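A minimal sketch of that restructuring (all names invented): the decision logic lives in a plain synchronous class that tests can call directly, while the thread-handling wrapper, the Humble Object, contains no logic of its own and so needs little direct test coverage:

```java
// Testable core: pure, synchronous, callable directly from any test.
class AlertLogic {
    static String classify(double temperature) {
        return temperature > 100.0 ? "ALARM" : "OK";
    }
}

// Humble Object: owns the asynchronous machinery but only delegates,
// so there is (almost) nothing in it worth testing.
class AlertMonitor implements Runnable {
    private final double reading;

    AlertMonitor(double reading) {
        this.reading = reading;
    }

    @Override
    public void run() {
        // Delegate the interesting behavior to the synchronous core.
        System.out.println(AlertLogic.classify(reading));
    }

    void start() {
        new Thread(this).start();
    }
}
```

Tests hit `AlertLogic` directly with ordinary method calls, so no threads, delays, or guesswork about startup time are needed.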
Even a test suite with fast tests will take a long time to run if it contains a large number of tests. You may be testing a very large system that genuinely needs that many tests, or perhaps there is too much Test Overlap. Either way, it comes down to running too many tests too frequently.
Keep in mind, you do not have to run all the tests all of the time. But you should run them all on a regular cadence. For long-running test suites, a good practice is to create a subset, or Named Test Suite, with a suitable (risk-based) cross-section of tests. This subset can then be automated to run upon every code commit. The remaining tests can then be scheduled to run less often at a more convenient time — perhaps overnight.
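The selection mechanism can be sketched as follows (all test and suite names invented): each test is tagged with the named suites it belongs to, and the CI job selects only the "commit" subset, leaving the rest for a nightly run. Real frameworks express the same idea with JUnit 4 @Category or JUnit 5 @Tag plus build-tool filtering:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative Named Test Suite registry: tests mapped to the suites
// they belong to.
class NamedSuites {
    static final Map<String, List<String>> SUITES_BY_TEST = Map.of(
            "loginSmokeTest",     List.of("commit", "nightly"),
            "checkoutSmokeTest",  List.of("commit", "nightly"),
            "fullRegressionTest", List.of("nightly"));

    // Select the tests that belong to a given named suite.
    static List<String> select(String suite) {
        return SUITES_BY_TEST.entrySet().stream()
                .filter(e -> e.getValue().contains(suite))
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }
}
```

Here `select("commit")` yields only the fast, risk-based cross-section run on every commit, while `select("nightly")` covers the full set on a schedule.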
When testing large chunks of tightly coupled code, as is the case with Monolithic Architectures, you will often end up with many complex tests. A much better solution that promotes testability, as well as fast-running tests, is to encapsulate functionality within independent subsystems or components. This enables teams to develop and test those components independently of other parts of the system. Their tests will target smaller, isolated components, allowing them to concentrate on test performance and efficiency. It is generally easier to write fast tests against small portions of encapsulated code than against a large collection of tightly coupled code.