Posted on July 23, 2017
by Doug Klugh

Keeping your customers happy depends a lot on your team’s ability to deliver (and sustain) a high-quality product.  And to ensure high quality, you must effectively validate your software artifacts against the functional (and non-functional) requirements of your system.  In many of my classes, I talk about testability as an essential quality of good software.  But what exactly do we mean by “testability”?

Simply put, testability is a quality that represents how well a software artifact supports testing.  If defects are easily found through testing, that system is said to have high testability.  A system with low testability requires an increased testing effort, demanding more time, resources, and money, which is obviously something to be avoided.

So how do you enhance software testability?  A good place to start is with your requirements.

Testability of Requirements

How well do your requirements support testing of your software?  Good requirements are a prerequisite to testability.  They establish a clear, common, and coherent understanding of the expected validation results.  Requirements drive the testing activities, and they must exhibit the following qualities to enhance testability:
Complete

A requirement is complete when it contains sufficient detail to drive the entire development process, including testing.  However, we may choose not to write various requirements; this does not contribute to incompleteness, so long as those requirements are already known and well understood.  The big issue for completeness is the unwritten requirements that we do need to write but don’t realize we need.  This often comes down to risk vs. reward.  How complete must the requirements be to drive testing activities at an acceptable risk level?

Concise

A requirement is concise when it addresses a single issue and contains just the necessary information, expressed in as few words as possible.  A lack of conciseness can be caused by overly complex grammar, compound statements (multiple requirements in one), or embedded rationale, examples, or design, all of which slow down the testing process.

Consistent

A requirement is consistent when it does not conflict with any other requirements.  Conflicting requirements say different things about system behavior or qualities.  Consistency is improved by referring to the original statement where needed instead of repeating statements.

Correct

A requirement is correct when it is error-free.  An incorrect requirement may accurately express a false requirement, or inaccurately express a true requirement.  These types of errors are typically discovered by Subject Matter Experts (SMEs), but may also be caught by other stakeholders.  A requirement must be consistent with all source materials, and reviewed and approved by all appropriate stakeholders.

Feasible

A requirement is known to be feasible through use in prior products, through analysis, or through prototyping.  Feasibility is often overlooked (or at least under-emphasized) early in the project.  By evaluating feasibility early, we get data on where the risk is on the project, whether it stems from lack of data, lack of experience, or well-demonstrated (and understood) technological risk.  Putting off this evaluation just increases the likelihood of adversely affecting the project.  You’ll find out when a requirement is infeasible eventually — why not find out early when there’s something you can do about it?

Necessary

A requirement is necessary when it can be traced to a need expressed by a customer, end user, or other stakeholder.  It may also be dictated by business strategy, roadmaps, or sustainability needs.  Unnecessary requirements waste development resources and do little but create support costs after release.  If you cannot trace your requirements to at least one of these sources, then they should not be in the specification.  It is your responsibility to question the presence of requirements and features that do not come from these sources.  There might be a good reason, but too often there isn’t one.

Traceable

A requirement is traceable if it is uniquely and persistently identified.  Requirements should be traced to and from test cases, tests, and test results, enabling improved test coverage analysis.  There are various automated tools (such as Visual Studio) that assign persistent, unchangeable identifiers to each requirement and maintain traceability between elements.

Unambiguous

A requirement is unambiguous when it is clear to the intended audience and possesses a single interpretation.  Ambiguity can be reduced by defining terms, writing concisely, and avoiding weak words (easy, fast, etc.) and unbounded lists (such as, including, …).  Diagrams, algorithms, use cases, tables, or other artifacts can also be used to reduce ambiguity where appropriate.

Verifiable

A requirement is verifiable if it can be proved that the requirement was correctly implemented.  A requirement is often unverifiable because it is ambiguous, cannot be decided, or is not worth the cost to verify.  When possible, a requirement should be quantified using an appropriate scale of measure, avoiding relative terms such as big, small, fast, or slow.  For example, a requirement such as “user response time must be fast” cannot be verified.  You must quantify the requirement, such as “user response time must be less than ten seconds.”  And verification must be feasible in practice, not just in theory.

Testability of Software Components

A software component is an entity composed of many classes which can be deployed as a library, such as a DLL, JAR, or gem.  The approach we take in designing and writing the classes within our components will determine how well they support testing.  We can enhance this testability by improving the following qualities:
Controllable

A component is controllable if it is easy to manipulate the state of the component as required for testing.  For example, a good developer will improve testability by constructing classes with methods that can be overridden by test-specific subclasses.  This allows a test to modify or eliminate any behavior in that base class, making it easier to test.
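To illustrate, here is a minimal sketch of the subclass-and-override technique; the names (OrderProcessor, PaymentGateway, and so on) are hypothetical.  The base class exposes an overridable seam, and a test-specific subclass replaces the external behavior so the test can put the component into whatever state it needs.

```java
// Hypothetical external dependency we want to bypass in tests.
class PaymentGateway {
    boolean authorize(double amount) {
        // Imagine a slow, external call here.
        return false;
    }
}

// Production class with an overridable seam.
class OrderProcessor {
    private final PaymentGateway gateway = new PaymentGateway();
    private boolean shipped = false;

    void process(double amount) {
        if (isPaymentApproved(amount)) {
            shipped = true;
        }
    }

    boolean isShipped() {
        return shipped;
    }

    // Protected seam: a test-specific subclass can override this to control behavior.
    protected boolean isPaymentApproved(double amount) {
        return gateway.authorize(amount);
    }
}

// Test-specific subclass forces the pathway under test without touching the gateway.
class TestableOrderProcessor extends OrderProcessor {
    @Override
    protected boolean isPaymentApproved(double amount) {
        return true;
    }
}

public class ControllabilityExample {
    public static void main(String[] args) {
        OrderProcessor processor = new TestableOrderProcessor();
        processor.process(42.00);
        System.out.println("Shipped: " + processor.isShipped());   // Shipped: true
    }
}
```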

Independent

A component is independent if it can be developed and tested in isolation of other components.  This enables tests of one component to be developed and executed concurrently with tests of other components.  Different teams can work on separate components independently without interfering with each other.  A component that is tightly coupled with other parts of the system will require more complex tests to validate functionality and verify quality.

Understandable

A component is understandable if it is easy to comprehend what it does, either through documentation or through the code itself.  A component cannot be effectively tested if it is not well understood.

Component Segregation Principles

There are several Component Segregation Principles that enhance our ability to develop, test, and deploy components independently of each other.  Being able to organize components and manage dependencies goes a long way in enabling effective and efficient component and integration testing.  Component cohesion principles are used to determine which classes should go into a component, while component coupling principles tell us how components should be related to each other.  Together these principles help us determine the best component partitioning to further enhance testability.

Separation of Concerns

Separation of Concerns (SoC) is a design principle that encapsulates cohesive information and/or logic in a function, class, module, or component.  By combining this with the Single Responsibility Principle (SRP), these components can be made to be highly testable.

For example, testability can be greatly enhanced by using SoC to encapsulate high-level policies in one component, while capturing implementation details in another component.  If this separation is accomplished using Component Segregation Principles, these components can be maintained, compiled, and deployed independently of each other.  Furthermore, different implementations can easily be substituted and hot deployed with zero downtime of your system.
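As a rough sketch (WelcomePolicy, MessageSender, and ConsoleSender are hypothetical names), the high-level policy below depends only on an abstraction it owns, while the implementation detail could live in a separately built and deployed component:

```java
// --- Policy component: the high-level rule plus the abstraction it owns ---
interface MessageSender {
    void send(String recipient, String body);
}

class WelcomePolicy {
    private final MessageSender sender;

    WelcomePolicy(MessageSender sender) {
        this.sender = sender;
    }

    void welcome(String newUser) {
        sender.send(newUser, "Welcome aboard, " + newUser + "!");
    }
}

// --- Detail component: an implementation that can be replaced independently ---
class ConsoleSender implements MessageSender {
    @Override
    public void send(String recipient, String body) {
        System.out.println("To " + recipient + ": " + body);
    }
}

public class SeparationExample {
    public static void main(String[] args) {
        new WelcomePolicy(new ConsoleSender()).welcome("Ada");
    }
}
```

Because WelcomePolicy knows nothing about ConsoleSender, tests of the policy never touch the detail, and either side can be rebuilt or redeployed on its own.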

Test Driven Development

Writing tests for code that has not yet been written is a sure way to write testable code.  If done right, TDD will organically create dependency inversion boundaries which spawn loosely coupled code.  Through the Dependency Inversion Principle (DIP), production code can be easily swapped out for test doubles, enabling tests to focus on specific units, components, or integration points.
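Here is a hand-rolled sketch (with hypothetical names) of what such a boundary looks like from the test’s side: the unit under test depends only on an interface, so the test hands it a double instead of the real device.

```java
// The inverted dependency: the unit under test sees only this abstraction.
interface Thermometer {
    double readCelsius();
}

// Unit under test, written against the abstraction rather than real hardware.
class FreezeAlarm {
    private final Thermometer thermometer;

    FreezeAlarm(Thermometer thermometer) {
        this.thermometer = thermometer;
    }

    boolean shouldAlert() {
        return thermometer.readCelsius() <= 0.0;
    }
}

public class FreezeAlarmTest {
    public static void main(String[] args) {
        // A test double stands in for the real sensor.
        Thermometer frozen = () -> -5.0;
        FreezeAlarm alarm = new FreezeAlarm(frozen);
        System.out.println(alarm.shouldAlert() ? "PASS" : "FAIL");
    }
}
```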

Understanding Test Doubles

In 2007, Gerard Meszaros defined different types of test objects that were used as substitutes for production objects during testing.  These test objects provide an effective way to manage dependencies by representing production resources or devices.

Gerard categorized these test objects as Test Doubles (think stunt doubles).  Under this generic term, he defined different types of test doubles as follows:
Dummy

A dummy is the simplest form of a test double.  It implements an interface where all the functions do absolutely nothing.  And if they return a value, they return as close to null or zero as possible.  A dummy is most often used as an argument to a function, where neither the test nor the function cares what happens to that argument.  It simply serves as a placeholder where an object is required.
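A minimal sketch, using a hypothetical Authorizer interface (repeated in the sketches that follow so each one stands alone):

```java
interface Authorizer {
    boolean authorize(String user, String password);
}

// The dummy does nothing; it exists only to fill a required parameter.
class DummyAuthorizer implements Authorizer {
    @Override
    public boolean authorize(String user, String password) {
        return false;   // "as close to zero as possible"; the test never relies on it
    }
}
```

A test that constructs a component requiring an Authorizer, but never exercises a login, would simply pass in the dummy.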

Stub

A stub is a dummy.  Its functions do nothing, but they will return fixed values that are consistent with the needs of the tests, although usually not nulls or zeros.  A stub is most often used when you want to direct execution of the code through certain pathways to be tested.  Since stubs are usually reused by multiple tests, you will typically have far more tests than stubs.
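A minimal stub sketch against the same hypothetical Authorizer interface:

```java
interface Authorizer {
    boolean authorize(String user, String password);
}

// The stub returns a fixed value chosen to steer execution down the path under test.
class AcceptingAuthorizerStub implements Authorizer {
    @Override
    public boolean authorize(String user, String password) {
        return true;   // always "logged in", so tests can exercise the authorized pathway
    }
}
```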

Spy

A spy is a stub whose functions perform no external actions.  Like stubs, it returns values that drive execution of the code through certain pathways to be tested.  It also remembers certain facts about the way it was called so the tests can verify that those functions were called properly.  A spy can be used to remember that a function was called, how many times it was called, how many arguments were passed in, or what those arguments were.  The only thing a spy does is watch and remember.
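A minimal spy sketch, again assuming the hypothetical Authorizer interface:

```java
interface Authorizer {
    boolean authorize(String user, String password);
}

// The spy behaves like a stub, but also records how it was called.
class AcceptingAuthorizerSpy implements Authorizer {
    int callCount = 0;
    String lastUser;

    @Override
    public boolean authorize(String user, String password) {
        callCount++;        // remember that, and how often, it was called
        lastUser = user;    // remember the argument for later verification
        return true;
    }
}
```

After exercising the code under test, the test asserts on callCount or lastUser to verify the calls were made properly.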

Mock

A mock is a spy whose functions do nothing, return values that are useful to the tests, and remember interesting facts about the way it was called.  It also knows what should happen.  A mock sets up conditions that are to be tested and evaluates whether those conditions have been met.  The test does not check what the mock spied on.  It simply asks the mock if everything went as expected.
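A minimal mock sketch, using the same hypothetical Authorizer interface; unlike the spy, the mock itself knows what “correct” looks like:

```java
interface Authorizer {
    boolean authorize(String user, String password);
}

// The mock spies on the call and also knows what should have happened.
class AcceptingAuthorizerMock implements Authorizer {
    private int callCount = 0;

    @Override
    public boolean authorize(String user, String password) {
        callCount++;
        return true;
    }

    // The test simply asks the mock whether everything went as expected.
    boolean verify() {
        return callCount == 1;   // expectation: authorize() was called exactly once
    }
}
```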

Fake

A fake is very different from the other types of test doubles.  It is often used as a simulator to replace (or act like) an external device or service.  A fake typically has a lot of logic in it and can grow into a complex piece of software of its own.  The more the system grows, the more the fake grows.  And this can present additional maintenance challenges.  For this reason alone, its use should be kept to a minimum.
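A minimal fake sketch, still assuming the hypothetical Authorizer interface; note that it contains real (if simplified) logic imitating the external service it stands in for:

```java
import java.util.HashMap;
import java.util.Map;

interface Authorizer {
    boolean authorize(String user, String password);
}

// The fake simulates the external authentication service with simplified logic.
class FakeAuthorizer implements Authorizer {
    private final Map<String, String> accounts = new HashMap<>();

    void addAccount(String user, String password) {
        accounts.put(user, password);
    }

    @Override
    public boolean authorize(String user, String password) {
        // Simplified stand-in for the real service's behavior.
        return password != null && password.equals(accounts.get(user));
    }
}
```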

Test doubles are often used in various types of tests.  For unit tests, dummies, stubs, and spies are adequate.  Using a fake for a unit test would be unnecessarily complex.  But simple fakes can be useful for integration tests.

Conclusion

Enhancing software testability will reduce the time and cost required to plan, write, automate, and execute test cases.  Your team will be more effective by increasing code coverage and catching more defects before going to production; be more efficient by requiring less time, fewer resources, and smaller test suites to test the same amount of code; and be more productive by developing tests that run faster, and provide greater maintainability and extensibility of your test suites.  Without changing a single process, these development practices will result in improved quality, lower costs, and faster time-to-market.
Tags:
application development architecture automated tests code component defect dummy fake mock quality requirement risk SDLC SOLID spy stub TDD test double testing

Doug Klugh

Doug is an experienced software development leader, engineer, and craftsman having delivered consumer and enterprise firmware/software solutions servicing more than one billion users through 20+ years of leadership.
