
Software Metrics: Trends Trump Goals

“I don’t set trends. I just find out what they are and exploit them.” – Dick Clark, New Year’s Rockin’ Eve software metric guru.

Management loves software metrics. They love to set goals and then measure how their employees are doing against those goals (system stability must be 95%, for example). Software metrics don’t have to be a bad thing, but unfortunately, they are often used inappropriately. A single software metric is a snapshot; without context, it means nothing. While we can all agree that a codebase with a system stability of 5% is significantly worse than one with a system stability of 95%, what about a codebase with 60% system stability versus one with 70%? It is easy to compare one number to another; it is harder to tell whether that number is meaningful in the context of the larger software system.

A single metric is also not very helpful for judging whether a codebase is “good” or “bad.” You need a combination of software metrics. You could look at System Cyclicality, Intercomponent Cyclicality, System Stability, and Coupling, for example, to get a better understanding of your codebase.
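To make one of these metrics concrete, here is a minimal sketch, not Lattix’s actual formula, that treats system stability as the complement of propagation cost: one minus the average fraction of components reachable from a randomly chosen component (in the spirit of MacCormack et al.; self-reachability is ignored for simplicity). The dependency graph and the equivalence to Lattix’s definition are assumptions.

```python
# A minimal sketch (not Lattix's actual formula): treat system stability as
# the complement of propagation cost, i.e., 1 minus the average fraction of
# components reachable from a component (self-reachability ignored).

def propagation_cost(deps: dict[str, set[str]]) -> float:
    """deps maps each component to the components it depends on."""
    nodes = set(deps) | {d for ds in deps.values() for d in ds}
    n = len(nodes)
    reachable_pairs = 0
    for start in nodes:
        seen: set[str] = set()
        stack = [start]
        while stack:  # depth-first search over everything `start` can reach
            for nxt in deps.get(stack.pop(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        reachable_pairs += len(seen)
    return reachable_pairs / (n * n)  # density of the transitive closure

def system_stability(deps: dict[str, set[str]]) -> float:
    return 1.0 - propagation_cost(deps)

deps = {"ui": {"core"}, "core": {"util"}, "util": set()}
print(f"system stability: {system_stability(deps):.0%}")  # -> 67%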

[Figure: system stability trend]

Then there is the question of the difference between a codebase with 94% system stability and one with 95%. If it requires a large amount of work to close that last point, is the final 1% really worth it? This is where trends come in. Trends are the true added value of software metrics. How do you prioritize what should be fixed in your code? Look at the trend, the evolution of that software metric over time.

The “magic insight” only comes from looking at a set of relevant software metrics and how they trend over time. This is why trends are more important than the actual goal. A trend shows whether a team is moving in the right direction and how fast, and it creates actionable insight for an organization. These data-driven insights into current performance trigger thinking about the deeper underlying forces at work in the development of the software. Anticipating and responding to trends means thinking through the scenarios a trend could bring about and how you would need to respond. The goal should be to make trends accelerate, decelerate, or reverse according to the needs of the project.
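Since the trend is the payoff, one simple way to operationalize it is to fit a slope to periodic metric snapshots. This is a minimal sketch; the sample history and the 0.1-point threshold are invented for illustration.

```python
# A minimal sketch of turning metric snapshots into a trend: fit an
# ordinary least-squares slope to the values. The sample history and
# the 0.1-point threshold are invented for illustration.

def trend_slope(values: list[float]) -> float:
    """Least-squares slope over snapshot index (needs >= 2 points)."""
    n = len(values)
    mean_x = (n - 1) / 2  # mean of indices 0..n-1
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var

stability_history = [61.0, 62.5, 63.0, 64.8, 66.1]  # % stability per release
slope = trend_slope(stability_history)
direction = "improving" if slope > 0.1 else "degrading" if slope < -0.1 else "flat"
print(f"{slope:+.2f} points/release -> {direction}")  # -> +1.25 points/release
```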

Trends also encourage experimentation. What happens if we adopt pair programming or switch to GitHub? Over a long project, trends can be motivating: you focus on moving in the right direction instead of on the large gap between where you are now and the end of the project.

Learn how Lattix is tracking the stability of a software system, or check out all of our metrics in the Lattix web demo.

Cyclicality and Bugs

Metrics have an obvious charm. If we could measure the quality of a system then we could track it and act as soon as it starts to degrade. But can we even hope to come up with a metric that works across something as complex as the software that runs the Mars Rover or something as simple as the software that plays the game of Tic-Tac-Toe?

Remember that the metric we seek is not likely to be an individual measure such as file size, the number of paths within a method, or even the count of bugs filed against each component. Useful as these individual metrics may be, what we want is a predictor of overall system quality: a metric that, if monitored, will help us manage quality as the system evolves.

Indeed there are a number of system metrics to consider. They are graph-theoretic, and they come under names such as Cohesion, Coupling, Cyclicality, Normalized Cumulative Dependency, Propagation Cost, and Stability. But how do we know if they are good predictors of system quality? Research is beginning to catch up: new research shows a clear correlation between the cyclicality of your code and how buggy it is.
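To make cyclicality concrete, one plausible formulation (an assumption here, not necessarily the definition used by Lattix or by the researchers) is the fraction of components that participate in a dependency cycle, which strongly connected components reveal directly:

```python
# A sketch of one plausible cyclicality measure: the fraction of components
# that participate in a dependency cycle, found with Tarjan's strongly
# connected components algorithm. (Self-loops are ignored for simplicity.)

def strongly_connected_components(deps):
    nodes = set(deps) | {d for ds in deps.values() for d in ds}
    index, low, on_stack = {}, {}, set()
    stack, sccs = [], []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in deps.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:  # v roots an SCC; pop its members
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in nodes:
        if v not in index:
            visit(v)
    return sccs

def cyclicality(deps):
    nodes = set(deps) | {d for ds in deps.values() for d in ds}
    in_cycle = sum(len(s) for s in strongly_connected_components(deps) if len(s) > 1)
    return in_cycle / len(nodes)

deps = {"a": {"b"}, "b": {"c"}, "c": {"a"}, "d": {"a"}}
print(f"cyclicality: {cyclicality(deps):.0%}")  # a, b, c form a cycle -> 75%
```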

Cyclicality Matters

Interestingly, the evidence comes not from some dyed-in-the-wool computer science guru but from astute observers whose work is rooted in business and management: a trio of business school professors collaborating across the Atlantic Ocean, Manuel Sosa, Tyson Browning, and Jürgen Mihm. They are skilled at statistics and expert at teasing apart and verifying correlations.

And their conclusion is: Cyclicality of your code has a bearing on how buggy it is.

This conclusion may not surprise many software engineers; and yet, it is a big deal because we now have large-scale empirical evidence that demonstrates it. The researchers examined more than a hundred releases of various open source projects and conclude that the cyclicality of code and the presence of bugs in it are correlated. Their research goes deeper into the nature of cyclicality as well: the size of the cycle, the centrality of a component within the cycle, and the lack of encapsulation of the cycle all have an impact on quality. They also present interesting results about “hubs,” which are generally good until they are “overdone.”
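As a hypothetical way to act on these findings, one could triage components by the size of the cycle they sit in and by how connected they are within it. The scoring below is an illustrative heuristic, not the researchers’ model, and it reuses `strongly_connected_components` and `deps` from the sketch above.

```python
# A hypothetical triage heuristic inspired by these findings: score each
# component by the size of its cycle and its connectedness within it. The
# weighting is an illustrative assumption, not the researchers' model.
# Reuses `strongly_connected_components` and `deps` from the previous sketch.

def rank_cyclic_components(deps, sccs):
    scores = {}
    for scc in (s for s in sccs if len(s) > 1):
        members = set(scc)
        for v in scc:
            in_cycle_degree = sum(1 for w in deps.get(v, ()) if w in members)
            scores[v] = len(scc) * (1 + in_cycle_degree)  # bigger, denser cycles first
    return sorted(scores.items(), key=lambda kv: -kv[1])

sccs = strongly_connected_components(deps)
print(rank_cyclic_components(deps, sccs))  # each of a, b, c scores 6
```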

You can peruse a highly readable article that summarizes the results of the research. You can also delve deeper into articles [7] and [9] at this link for the original work.

And then there is the question of “why.” Why do bugs in code increase as cyclicality increases? The answer is not, nor is it likely to be, a mathematical theorem. Instead, it lies in how our brains function and how we think. I believe that cycles, particularly large cycles, make it harder for us to think about abstractions in a coherent way. This is also why architecture is so valuable: the systems we design and maintain are less prone to errors when we can think about them in ways that make them understandable and maintainable.

Postscript: There is additional research that has arrived at largely the same conclusion. It comes from MIT, in a doctoral thesis by Dan Sturtevant. Dan is a seasoned software engineer with a PhD in Systems Engineering, and his pioneering work examined cyclicality using techniques that go well beyond traditional static code analysis. His work suggests that not just the bugginess of code but even employee turnover may have something to do with large-scale cyclicality! Companies struggling with woes related to their software systems might consider giving him a call (Dan is at Harvard these days).