Posted on July 23, 2017
by Doug Klugh

Keeping your customers happy depends a lot on your team’s ability to deliver (and sustain) a high-quality product.
And to ensure high quality, you must effectively validate your software artifacts against the functional (and non-functional) requirements of your system.
In many of my classes, I often talk about testability being an essential quality of good software.
But what exactly do we mean by “testability”?
Simply put, testability is a quality that represents how well a software artifact supports testing. If defects are easily found through testing, that system is said to have high testability. A system with low testability requires an increased testing effort, demanding more time, resources, and money, which is obviously something to be avoided.
So how do you enhance software testability? A good place to start is with your requirements.
Testability of Requirements

How well do your requirements support testing of your software? Good requirements are a prerequisite to testability. They establish a clear, common, and coherent understanding of the expected validation results. Requirements drive the testing activities, and they must exhibit the following qualities to enhance testability:
A requirement is complete when it contains sufficient detail to drive the entire development process, including testing. However, we may choose not to write various requirements; this does not contribute to incompleteness, so long as those requirements are already known and well understood. The big issue for completeness is the unwritten requirements that we do need to write but don’t realize it. This often comes down to risk versus reward: how complete must the requirements be to drive testing activities at an acceptable risk level?
A requirement is concise when it addresses a single issue and contains just the necessary information, expressed in as few words as possible. A lack of conciseness can be caused by overly complex grammar, compound statements (multiple requirements in one), or embedded rationale, examples, or design, all of which will only slow down the testing process.
A requirement is consistent when it does not conflict with any other requirements. Conflicting requirements say different things about system behavior or qualities. Consistency is improved by referring to the original statement where needed instead of repeating statements.
A requirement is correct when it is error-free. An incorrect requirement may accurately express a false requirement, or inaccurately express a true requirement. These types of errors are typically discovered by Subject Matter Experts (SMEs), but may also be caught by other stakeholders. A requirement must be consistent with all source materials, and reviewed and approved by all appropriate stakeholders.
A requirement is known to be feasible through use in prior products, through analysis, or through prototyping. Feasibility is often overlooked (or at least under-emphasized) early in the project. By evaluating feasibility early, we get data on where the risk is on the project, whether it stems from lack of data, lack of experience, or well-demonstrated (and understood) technological risk. Putting off this evaluation just increases the likelihood of adversely affecting the project. You’ll find out when a requirement is infeasible eventually — why not find out early when there’s something you can do about it?
A requirement is necessary when it can be traced to a need expressed by a customer, end user, or other stakeholder. It may also be dictated by business strategy, roadmaps, or sustainability needs. Unnecessary requirements waste development resources and do little but create support costs after release. If you cannot trace your requirements to at least one of these sources, then they should not be in the specification. It is your responsibility to question the presence of requirements and features that do not come from these sources. There might be a good reason, but too often there isn't one.
A requirement is traceable if it is uniquely and persistently identified. Requirements should be traced to and from test cases, tests, and test results, enabling improved test coverage analysis. There are various automated tools (such as Visual Studio) that assign unchangeable (persistent) identifiers to each requirement and maintain traceability between elements.
A requirement is unambiguous when it is clear to the intended audience and possesses a single interpretation. Ambiguity can be reduced by defining terms, writing concisely, and avoiding weak words (easy, fast, etc.) and unbounded lists (such as, including, …). Diagrams, algorithms, use cases, tables, or other artifacts can also be used to reduce ambiguity where appropriate.
A requirement is verifiable if it can be proved that the requirement was correctly implemented. It is often unverifiable because it is ambiguous, cannot be decided, or is not worth the cost to verify. When possible, a requirement should be quantified using an appropriate scale of measure and avoid using relative terms such as big, small, fast, or slow. For example, a requirement such as “user response time must be fast” cannot be verified. You must quantify the requirement such as “user response time must be less than ten seconds.” And verification must be feasible in practice, not just in theory.
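As a sketch, a quantified requirement like “user response time must be less than ten seconds” maps directly to an automated check. The function and timing here are illustrative, not from the article:

```python
import time

# Hypothetical system under test; the name and simulated delay are assumptions.
def handle_user_request():
    time.sleep(0.05)  # stand-in for real work
    return "ok"

def test_response_time_requirement():
    # Requirement: "user response time must be less than ten seconds."
    start = time.monotonic()
    result = handle_user_request()
    elapsed = time.monotonic() - start
    assert result == "ok"
    assert elapsed < 10.0, f"response took {elapsed:.2f}s, requirement is < 10s"

test_response_time_requirement()
```

The unverifiable form (“must be fast”) offers no threshold to assert against; the quantified form does.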
Testability of Software Components

A software component is an entity composed of many classes which can be deployed as a library, such as a DLL, JAR, or gem. The approach we take in designing and writing the classes within our components will determine how well they support testing. We can enhance this testability by improving the following qualities:
A component is controllable if it is easy to manipulate the state of the component as required for testing. For example, a good developer will improve testability by constructing classes with methods that can be overridden by test-specific subclasses. This will allow them to modify or eliminate any behavior in that base class, making it easier to test.
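A minimal sketch of that subclass-and-override technique, with illustrative class names not taken from the article:

```python
# A class whose hard-to-control behavior is isolated in an overridable method.
class OrderProcessor:
    def process(self, order_total):
        rate = self.exchange_rate()
        if rate <= 0:
            raise ValueError("invalid exchange rate")
        return order_total * rate

    def exchange_rate(self):
        # In production this might call an external service.
        raise NotImplementedError

# Test-specific subclass: overrides the base-class behavior to control state.
class TestableOrderProcessor(OrderProcessor):
    def exchange_rate(self):
        return 2.0  # fixed value; eliminates the external dependency

processor = TestableOrderProcessor()
assert processor.process(10) == 20.0
```

Because `exchange_rate` is a separate method rather than inlined logic, the test can manipulate it without touching `process`.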
A component is independent if it can be developed and tested in isolation of other components. This enables tests of one component to be developed and executed concurrently with tests of other components. Different teams can work on separate components independently without interfering with each other. A component that is tightly coupled with other parts of the system will require more complex tests to validate functionality and verify quality.
A component is understandable if it is easy to comprehend what it does, either through documentation or through the code itself. A component cannot be effectively tested if it is not well understood.
Component Segregation Principles

There are several Component Segregation Principles that enhance our ability to develop, test, and deploy components independently of each other. Being able to organize components and manage dependencies goes a long way in enabling effective and efficient component and integration testing. Component cohesion principles are used to determine which classes should go into a component, while component coupling principles tell us how components should be related to each other. Together these principles help us determine the best component partitioning to further enhance testability.
Separation of Concerns

Separation of Concerns (SoC) is a design principle that encapsulates cohesive information and/or logic in a function, class, module, or component. By combining this with the Single Responsibility Principle (SRP), these components can be made to be highly testable.
For example, testability can be greatly enhanced by using SoC to encapsulate high-level policies in one component, while capturing implementation details in another component. If this separation is accomplished using Component Segregation Principles, these components can be maintained, compiled, and deployed independently of each other. Furthermore, they can be easily substituted for different implementations and hot deployed with zero downtime of your system.
Test Driven Development

Writing tests for code that has not yet been written is a sure way to write testable code. If done right, TDD will organically create dependency inversion boundaries which spawn loosely coupled code. Through the Dependency Inversion Principle (DIP), production code can be easily swapped out for test doubles, enabling tests to focus on specific units, components, or integration points.
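One way to sketch that swap, with hypothetical names: the high-level policy depends only on an abstraction, so a test can inject a double where production would inject the real implementation.

```python
from abc import ABC, abstractmethod

# The abstraction that inverts the dependency: Checkout knows nothing
# about any concrete payment service.
class Gateway(ABC):
    @abstractmethod
    def charge(self, amount): ...

class Checkout:
    def __init__(self, gateway: Gateway):
        self.gateway = gateway  # injected through the abstraction

    def purchase(self, amount):
        return self.gateway.charge(amount)

# Test double substituted across the boundary; no real service involved.
class GatewayDouble(Gateway):
    def charge(self, amount):
        return "charged"

assert Checkout(GatewayDouble()).purchase(25) == "charged"
```

The same `Checkout` code runs unmodified against a production gateway or the double, which is exactly what lets tests focus on one unit at a time.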
Understanding Test Doubles

In 2007, Gerard Meszaros defined different types of test objects that were used as substitutes for production objects during testing. These test objects provide an effective way to manage dependencies by representing production resources or devices.
Gerard categorized these test objects as Test Doubles (think stunt doubles). Under this generic term, he defined different types of test doubles as follows:
A dummy is the simplest form of a test double. It implements an interface where all the functions do absolutely nothing. And if they return a value, they return as close to null or zero as possible. A dummy is most often used as an argument to a function, where neither the test nor the function cares what happens to that argument. It simply serves as a placeholder where an object is required.
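A dummy might look like this (the logger and `add` function are illustrative):

```python
# A dummy implements the required interface but does nothing.
class DummyLogger:
    def log(self, message):
        pass  # no-op: neither the test nor the code under test cares

def add(a, b, logger):
    logger.log(f"adding {a} and {b}")
    return a + b

# The function requires a logger argument, but this test only cares
# about the sum, so the dummy is just a placeholder.
assert add(2, 3, DummyLogger()) == 5
```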
A stub is a dummy. Its functions do nothing, but they will return fixed values that are consistent with the needs of the tests; although usually not nulls or zeros. A stub is most often used when you want to direct execution of the code through certain pathways to be tested. Since stubs are usually reused by multiple tests, you will typically have far more tests than you will stubs.
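A sketch of a stub steering execution down the path under test (names are hypothetical):

```python
# A stub returns fixed values chosen to drive a specific code path.
class PremiumUserStub:
    def is_premium(self):
        return True  # canned answer; forces the discount branch

def price_for(user, base_price):
    if user.is_premium():
        return base_price * 0.9  # the pathway we want to test
    return base_price

assert price_for(PremiumUserStub(), 100) == 90.0
```

A second stub returning `False` would exercise the other branch, and both stubs could be shared across many tests.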
A spy is a stub, whose functions perform no external actions. Like stubs, it returns values that drive execution of the code through certain pathways to be tested. It also remembers certain facts about the way it was called so the tests can verify that those functions were called properly. A spy can be used to remember that a function was called, how many times it was called, how many arguments were passed in, or what those arguments were. The only thing a spy does is watch and remember.
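A spy, then, adds memory on top of a stub's canned answers. This sketch uses an illustrative mailer interface:

```python
# A spy returns canned values like a stub, but also records its calls.
class EmailSpy:
    def __init__(self):
        self.sent = []  # remembers every call and its arguments

    def send(self, address, body):
        self.sent.append((address, body))
        return True  # canned success, like a stub

def notify(users, mailer):
    for user in users:
        mailer.send(user, "welcome")

spy = EmailSpy()
notify(["a@example.com", "b@example.com"], spy)

# The test inspects what the spy watched and remembered.
assert len(spy.sent) == 2
assert spy.sent[0] == ("a@example.com", "welcome")
```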
A mock is a spy whose functions do nothing, return values that are useful to the tests, and remember interesting facts about the way it was called. It also knows what should happen. A mock sets up conditions that are to be tested and evaluates whether those conditions have been met. The test does not check what the mock spied on. It simply asks the mock if everything went as expected.
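The difference from a spy is that the expectation lives inside the double itself. A sketch, with illustrative names:

```python
# A mock knows what should happen and judges its own expectations.
class MailerMock:
    def __init__(self, expected_calls):
        self.expected_calls = expected_calls  # the condition to be met
        self.calls = 0

    def send(self, address, body):
        self.calls += 1  # spying behavior
        return True      # stubbed return value

    def verify(self):
        # The mock, not the test, evaluates whether expectations were met.
        return self.calls == self.expected_calls

def notify(users, mailer):
    for user in users:
        mailer.send(user, "welcome")

mock = MailerMock(expected_calls=2)
notify(["a@example.com", "b@example.com"], mock)
assert mock.verify()  # the test only asks: did everything go as expected?
```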
A fake is very different than other types of test doubles. It is often used as a simulator to replace (or act like) an external device or service. A fake typically has a lot of logic in it and can grow into a complex piece of software of its own. The more the system grows, the more the fake grows. And this can present additional maintenance challenges. For this reason alone, its use should be kept to a minimum.
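A common example of a fake is an in-memory stand-in for a database or external store; this sketch (names assumed, not from the article) has real working logic rather than canned answers:

```python
# A fake is a working simulator: here, an in-memory substitute for a
# persistent user store, with genuine save/find behavior.
class FakeUserStore:
    def __init__(self):
        self._rows = {}  # real logic, backed by a dict instead of a database

    def save(self, user_id, name):
        self._rows[user_id] = name

    def find(self, user_id):
        return self._rows.get(user_id)  # None when the user is absent

store = FakeUserStore()
store.save(1, "Ada")
assert store.find(1) == "Ada"
assert store.find(2) is None
```

Unlike a stub or spy, the fake's behavior must track the real system's semantics, which is why it tends to grow with the system and carries its own maintenance cost.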