Posted on February 9, 2016
by Doug Klugh

As software development teams look for ways to improve their products and services, they often focus on reducing time to market, predicting release schedules more accurately, improving customer satisfaction, and raising overall software quality. Software "quality" can mean different things to different people, but it goes well beyond how well a product functions or how many bugs it contains.

When I refer to software quality, I am referring to the quality model defined by ISO/IEC 9126. This model covers everything from Functionality to Reliability to Maintainability: more than two dozen quality characteristics in all. To manage these characteristics, you must have well-defined metrics to drive your decision making. Otherwise, you're relying too much on luck when making decisions about DevOps planning.
As you begin analyzing defects, tracking the total number of defects provides perspective for many of the other metrics, such as the number of defects produced during a single iteration.
As many of us are aware, the earlier in the development lifecycle we discover a bug, the sooner and cheaper we can fix it. It is also critical to understand where in the lifecycle bugs are being introduced. If there is a flaw in the business model or in the solution or architectural design, we need to know, so we can fix recurring problems introduced during those development activities. Bugs are not introduced only during implementation; other artifacts can be flawed just as easily as the code.
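Tallying defects by the phase in which they originated (as opposed to where they were found) makes this visible. A minimal sketch, assuming hypothetical defect records exported from an issue tracker:

```python
from collections import Counter

# Hypothetical defect records; in practice these would come from your
# issue tracker. "introduced_in" records the lifecycle phase where the
# flaw originated, not the phase where it was detected.
defects = [
    {"id": 101, "introduced_in": "requirements"},
    {"id": 102, "introduced_in": "design"},
    {"id": 103, "introduced_in": "implementation"},
    {"id": 104, "introduced_in": "implementation"},
    {"id": 105, "introduced_in": "design"},
]

# Count how many defects originated in each lifecycle phase.
by_phase = Counter(d["introduced_in"] for d in defects)
for phase, count in by_phase.most_common():
    print(f"{phase}: {count}")
```

A recurring spike in, say, the design phase tells you the fix belongs in your design reviews, not in more testing.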
Features that incur a high number of bugs, especially over a short period of time, may indicate technical debt being introduced into that feature, whether intentionally or not. Either way, the team needs to be aware of it, and if that debt is not intended to be there, it should be addressed as early as possible.
To help ensure your confidence in the functionality and reliability of your software, you should know the code coverage for various types of tests. At the very least, you should know how many unit tests have been written for each thousand lines of code (KLOC). If this number is low, you're likely to miss detection of a large number of defects (until it's too late). And keep in mind that developing a lot of tests won't help if they only cover a small portion of your code. Although it is rarely achieved, your goal should be 100% code coverage. Anything less provides cracks (or perhaps giant doors) for the bugs to crawl through.
Obviously, regression tests are useful in identifying new and recurring bugs introduced within existing artifacts, but they are also useful in identifying fragile areas of code that might need to be refactored. If the same features or modules keep breaking during regression testing, it's time to root cause the issue.
Understanding the most common causes of defects enables you to proactively eliminate certain categories of defects. When investigating a defect, don't stop at knowing how to fix it. It's important to understand the underlying cause of the problem to prevent it from happening again or in a different portion of the code.
Of all the quality metrics you can compile, Cost of Poor Quality is probably the most difficult to quantify. The objective is to determine how much time it takes everyone involved (users, analysts, developers, testers, etc.) to fix a bug, along with the business (monetary) impact. This helps clarify the ROI of employing sound engineering practices to prevent software defects.
In my role as Process Improvement Leader at Intel, I built a development process for calculating the cost of poor quality within an Agile team. We were able to capture the time spent fixing bugs, but it was much more difficult to determine the monetary impact beyond the cost of the team members' time. If you're able to quantify this cost, it will be one of your most valuable metrics.
Code Quality

Optimizing resources is (almost) always a priority for any development manager, but measuring the productivity of a development team can be difficult. Those who know me are familiar with my commitment to building high-quality software through clean coding practices and SOLID engineering principles. If you're employing Agile practices, you should already have some basic metrics such as Velocity, WIP Count, Cycle Time, and Lead Time. But how do you measure how well code is written? While metrics alone cannot answer this question, there are trends that can give you a heads-up that your team may be starting to churn out bad code.
The total number of lines of code written tells you nothing about how well they're written. But if you examine the number of lines in your methods, you may be able to determine how well your team is adhering to the Single Responsibility Principle. For example, if you have methods with over 100 lines of code, you may want to examine them further to determine whether they have more than one responsibility (chances are they do). You may also want to scan the number of methods in a class, for the same reason. Set a limit of 10 to 20 methods and you will be alerted when classes may have been written or extended beyond a single responsibility.
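These heuristics are easy to automate. A minimal sketch using Python's standard `ast` module; the thresholds are the illustrative limits from the discussion above, not fixed rules:

```python
import ast

# Illustrative thresholds from the discussion above; tune to taste.
MAX_METHOD_LINES = 100
MAX_METHODS_PER_CLASS = 20

def srp_warnings(source: str) -> list:
    """Flag methods and classes that may exceed a single responsibility."""
    warnings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Line span of the function body, inclusive.
            length = node.end_lineno - node.lineno + 1
            if length > MAX_METHOD_LINES:
                warnings.append(f"{node.name}: {length} lines")
        elif isinstance(node, ast.ClassDef):
            methods = [n for n in node.body
                       if isinstance(n, (ast.FunctionDef,
                                         ast.AsyncFunctionDef))]
            if len(methods) > MAX_METHODS_PER_CLASS:
                warnings.append(f"class {node.name}: {len(methods)} methods")
    return warnings
```

Wiring a check like this into CI turns the limit into an early-warning signal rather than a code-review argument.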
Cyclomatic Complexity is a measure of the number of linearly independent paths through source code. One testing strategy, called Basis Path Testing, uses control flow graphs to exercise each of those paths. The analysis can be applied to solutions, modules, classes, or individual methods. For example, if your code contains many nested If statements or large Switch statements, this analysis will assign it a high complexity value. This metric can be useful in tracking down violations of the Open/Closed Principle.
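A simplified sketch of the idea: McCabe's metric works out to one plus the number of decision points in the code. This version counts only the common branch constructs; a production tool (radon, SonarQube, etc.) handles more cases:

```python
import ast

def cyclomatic_complexity(func_source: str) -> int:
    """Approximate McCabe complexity: one plus the number of
    decision points (branches) found in the source."""
    tree = ast.parse(func_source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.ExceptHandler, ast.IfExp)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' adds one path per extra operand.
            decisions += len(node.values) - 1
    return decisions + 1

nested = """
def classify(x):
    if x > 0:
        if x > 10:
            return "large"
        return "small"
    elif x < 0:
        return "negative"
    return "zero"
"""
print(cyclomatic_complexity(nested))  # prints 4
```

The nested and chained `if`/`elif` branches each add a path, which is exactly the pattern the metric is designed to surface.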