Failing Fast: Test Impact Analysis and Software Architecture

Wolfgang Platz, Founder and Chief Strategy Officer of Tricentis, recently wrote an article on test impact analysis and how it benefits end-to-end testing. This article builds upon those themes and applies them to unit testing and software architecture.

Failing fast is a philosophy that values extensive testing and incremental software development. Agile is all about failing fast. Continuous integration testing helps you fail fast by surfacing issues as soon as they are introduced, which is the best time to fix them. The result is more robust software, as Jim Shore and Martin Fowler put it in "Fail Fast":

“...’failing immediately and visibly’ sounds like it would make your software more fragile, but it actually makes it more robust. Bugs are easier to find and fix, so fewer go into production.”

The longer a bug goes undetected, the harder and costlier it is to fix. Bugs add expense and risk to projects; failing fast reduces debugging cost and improves quality.

The combination of digital transformation, DevOps, and agile software testing is changing developers’ expectations. They want immediate feedback on their code, but large code bases (especially legacy code bases) tend to have large unit test suites that can take hours (or days) to execute. This can lead to developers skipping unit testing or scaling it back to save time, and a slow CI pipeline drags down the productivity of the entire team. One solution is to figure out which modules can be built and tested in parallel or distributed across machines. A second solution is to figure out which tests need to be run and in what order. This is typically referred to as test impact analysis.

What is test impact analysis?

Test impact analysis is a change-based testing practice that rapidly finds issues in new or modified code. To speed up testing and “fail fast,” we suggest two things:

  1. Map every unit test to the code it exercises, and run only the tests mapped to code that has been added or modified. That way you never spend time on tests that cannot fail.
  2. Prioritize the selected unit tests by their likelihood of failure, so the suite fails fast and exposes bugs as early as possible.

One way to prioritize is based on the “importance” of a file or module. By importance, we mean how much impact that file will have on the rest of the software if it is changed. For this we use the system stability metric, which measures how sensitive the system is to change. When a change is made to the software, system stability tells you how much of the rest of the software will be affected. In software with a layered architecture, the lowest layers tend to have the lowest system stability because any change to them affects the layers above. Therefore, the lower layers need much more testing.

Lower stability means the software is harder to maintain because every change affects a greater portion of the system and, therefore, requires more testing and validation. This is why less stable software tends to break easily (fragile) when even small changes are made. High stability means there is less change impact, so changes are localized. We consider this robust software. Also, because there are fewer unexpected consequences when making a change, developers have an easier time understanding the software.
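To make the idea concrete, here is one illustrative formulation of a stability-style metric (an assumption for this sketch, not necessarily the exact formula a given tool uses): the average fraction of the system that is *not* affected when a single module changes. The three-layer dependency graph below is hypothetical:

```python
# DEPS[m] = the modules that m depends on. A change to a module
# potentially impacts every module that transitively depends on it.
DEPS = {
    "ui": {"logic"},
    "logic": {"data"},
    "data": set(),
}

def dependents(module, deps):
    """All modules that directly or transitively depend on `module`."""
    direct = {m for m, ds in deps.items() if module in ds}
    result = set(direct)
    for d in direct:
        result |= dependents(d, deps)
    return result

def impact(module, deps):
    """Fraction of the system affected if `module` changes (itself included)."""
    return (1 + len(dependents(module, deps))) / len(deps)

def system_stability(deps):
    """Average unaffected fraction across all modules (1.0 = fully stable)."""
    return 1 - sum(impact(m, deps) for m in deps) / len(deps)

print(impact("data", DEPS))  # → 1.0: changing the lowest layer touches everything
```

Note how the lowest layer, "data", has the highest impact (every module is a transitive dependent), which matches the observation above that lower layers are the least stable and need the most testing.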

Test impact analysis with system stability allows you to optimize your testing and fail fast. The feedback loop is tighter, so builds that are set to fail upon the first reported test case failure are reached as soon as possible, and working builds are delivered faster. By reducing the time spent running the unit test suite, developers are more likely to test. This reduces the number of defects reported later in the process or in the field, which means less time wasted on bug fixes and rework.

How to implement test impact analysis?

Test impact analysis can be run with any type of test, such as unit tests or integration tests. With this process, a subset of test cases is selected and executed in a particular order for each test run. Here are some steps to implement it:

  1. Use a static dependency analysis solution like Lattix Architect to identify which code has been added or modified since the last test run.
  2. Use an automated unit testing framework like Cantata to correlate unit tests with the new or modified code in the build.
  3. Eliminate from the test run any unit tests that are not affected by new or modified code.
  4. Sort the remaining tests by system stability so that the most “important” tests (i.e., those most likely to fail) are executed first.
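The four steps above can be sketched as a single planning function. The change set, test-to-file mapping, and per-file stability scores here are hypothetical placeholders; in a real pipeline they would come from tools such as those mentioned above:

```python
def plan_test_run(changed_files, test_map, stability):
    """Steps 1-4: keep only tests touching changed code, least stable first."""
    # Steps 1-3: intersect each test's covered files with the change set
    # and drop tests with no overlap.
    affected = {
        test: files & changed_files
        for test, files in test_map.items()
        if files & changed_files
    }
    # Step 4: lower stability means wider change impact, so run those first.
    return sorted(affected, key=lambda t: min(stability[f] for f in affected[t]))

# Hypothetical inputs: three tests, three files, per-file stability scores.
test_map = {"test_db": {"db.c"}, "test_api": {"api.c", "db.c"}, "test_ui": {"ui.c"}}
stability = {"db.c": 0.2, "api.c": 0.7, "ui.c": 0.9}

print(plan_test_run({"db.c", "ui.c"}, test_map, stability))
# → ['test_db', 'test_api', 'test_ui']
```

The tests touching the low-stability `db.c` run first, and any test whose files are untouched would be dropped entirely.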


Test impact analysis rapidly exposes defects in new and modified source code. When you add in prioritization using system stability, you are supercharging the process and making that feedback loop even tighter.

If you are interested in using test impact analysis in your testing practice, Lattix Architect can help by giving you the change sets (new and modified code) for each build, system stability (by module or file) and other impact analysis of your source code. Please contact us with any questions or to request an evaluation.