
Measure your Software Architectural Health

How do you measure the “architectural health” of a software project?

Since every software project is different, it is hard to come up with a single number that represents the architectural health of an entire project. Lattix Architect, therefore, provides a variety of architectural metrics. These metrics were chosen based on academic research on system architecture as well as the practical experience of the Lattix development team. We will look at one of these architecture metrics today: system stability.

Architectural metrics are different from traditional code metrics (e.g., bugs per line of code, cyclomatic complexity) that focus on understanding the quality and complexity of individual sections of code. Architectural metrics focus on the big picture: they examine the organizational structure of the system as a whole.

What is system stability?

System stability measures how sensitive the system is to change. When a change is made to the software, system stability will tell you how much of the rest of the software will be affected. In software with a layered architecture, the lowest layers tend to have the lowest system stability because any change to them affects the layers above. Therefore, the lower layers need much more testing.

Lower stability means the software is harder to maintain because every change affects a greater amount of the system and therefore requires more testing and validation. This is why less stable software tends to break easily (fragile) when even small changes are made. High stability means there is less change impact, so changes are localized. We consider this robust software. Also, because there are fewer unexpected consequences when making a change, developers have an easier time understanding the software.

How is it calculated?

System stability is measured by analyzing the impact of change for every element of the system; the overall stability number is the average of the stability of all the elements. For each element, its dependency information is examined, and the number of elements potentially affected when that element is changed is calculated. This is done through transitive closure.

If the system stability is 70%, this means that 30% of the elements on average are affected when any element is changed and 70% are unaffected. Stability is computed as a percentage of the size of the system, so it doesn’t necessarily change simply because the software project gets larger or smaller.
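As a rough illustration of the calculation (a minimal sketch, not Lattix's implementation), the following assumes a change to an element affects the element itself plus everything that transitively depends on it, then averages the unaffected fraction over all elements:

```python
def system_stability(deps):
    """Average percentage of elements unaffected by a change.

    deps maps each element to the set of elements it depends on.
    A change to an element is assumed to affect the element itself
    plus everything that transitively depends on it.
    """
    elems = set(deps) | {t for ts in deps.values() for t in ts}
    dependents = {e: set() for e in elems}
    for src, targets in deps.items():
        for dst in targets:
            dependents[dst].add(src)  # invert: who depends on dst?
    n = len(elems)
    unaffected_total = 0
    for e in elems:
        # transitive closure of dependents of e
        affected, stack = {e}, [e]
        while stack:
            for d in dependents[stack.pop()]:
                if d not in affected:
                    affected.add(d)
                    stack.append(d)
        unaffected_total += n - len(affected)
    return 100.0 * unaffected_total / (n * n)

# A hypothetical three-layer toy system: App -> Framework -> Util
layers = {"App": {"Framework"}, "Framework": {"Util"}, "Util": set()}
print(round(system_stability(layers), 1))  # 33.3
```

In this toy system a change to Util affects all three elements while a change to App affects only itself, so on average two thirds of the system is impacted by any change, which is exactly why the lower layers drag stability down.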

How can you use it? And why?

The architecture of a software system has a profound impact on its stability. Therefore, by monitoring system stability you can measure the quality of your architecture. System stability can also be used to focus your testing efforts. As an example, if we create a set of applications on a common framework (see picture below) and then change the framework, we will affect all of the applications.

Software Architecture

If we change just one application, the impact will be much lower. Therefore, it is essential to understand not just the stability of the entire system, but also where that stability comes from.

System Stability Breakdown

In the picture above, the total stability of Project is about 62%, but the Apps layer has a stability of 73%, while Frameworks and Util have lower stability. This means more components depend on the Frameworks and Util layers, so changes to them have a much larger impact and call for focused testing (i.e. changes to these layers are riskier).

There is also value in tracking the stability of the software over time.

System Stability Trend

We have talked in previous blog posts about architecture erosion and how that leads to software that is harder to maintain. The system stability metric gives you measurable and actionable insight into this phenomenon. If the stability goes down over time as in the picture above, there needs to be a corresponding increase in testing and verification.


System stability tends to decrease over time if not monitored. Every change to the software can erode the architecture and therefore the stability. This means the software becomes harder to maintain over time, resulting in longer testing cycles and reduced developer productivity.

Monitoring stability can help you maintain a clean and modular architecture. Lattix products make architecture management part of your continuous integration, keeping it easy and actionable throughout your development lifecycle.

Nudge Theory for Software Architecture

In 2012, the UK was increasingly worried about low pension savings rates among private sector workers. So the government forced employers to establish “automatic enrollment” for their pension plans.

Employees were automatically entered into their firm’s pension plan, and contributions were taken out of their paycheck each pay period, unless they formally asked to be removed.

The idea was that most people wanted to save for retirement but put off doing so out of fear that it would be hard or complicated. Auto enrollment made saving the default option, making it easier for employees to do what they already wanted to do and pushing up savings rates.

This is called Nudge Theory. It is about making it easier for people to make a certain decision. Richard Thaler and Cass Sunstein wrote in their book Nudge “By knowing how people think, we can make it easier for them to choose what is best for them, their families and society.”

Has “automatic enrollment” worked? Yes. Since it was introduced in the UK in 2012, active membership in private sector pension plans has jumped from 2.7 million to 7.7 million (as of 2016).

Nudge Theory Auto Enrollment

What does this behavior stuff have to do with software architecture? Maybe more than you think.

Problems with current software development

As large software projects evolve, the development effort involves many different programmers and can span years. This leads to a common phenomenon called architecture erosion: the source code’s actual architecture has diverged from the intended architecture. There are many reasons for this:

  • Programmers of large systems don’t see the high-level view of the system
  • New design decisions are introduced that change the intended architecture, but old code is not refactored because of cost restrictions. This means the old code doesn’t comply with the new architecture
  • With team turnover, tribal knowledge is lost, so new team members are less familiar with the architecture
  • During the maintenance phase, time and cost pressures force developers to take shortcuts, disregarding the intended architecture
  • The architecture was never properly communicated to the entire team

If architecture erosion is left unchecked maintainability goes down. Bad dependencies (shortcuts) make code brittle, hard to understand, and hard to maintain. The intended architecture was designed to achieve certain quality goals regarding reliability, security, modifiability, performance, portability, and interoperability. The more erosion that happens to the intended architecture, the more these qualities are negatively affected.

Can a Nudge help?

Software developers are not always inclined to worry about software architecture and maintainability. Rather than just hoping for the best and preparing for the worst, knowing how developers think and applying nudge theory could help persuade any development team to do what is best for the long term maintainability of the software.

The Nudge Unit (U.K.’s Behavioral Insights team) has developed a framework for influencing behavior. To encourage behavior, it must be Easy, Attractive, Social, and Timely (EAST). These four principles can be applied to a software architecture context.

1. Make It Easy

The effort required to monitor software architecture usually discourages developers from attempting it. Often they have to stop using one set of tools and pick up another, resulting in possible delivery delays. This typically leads to the software architecture being neglected for other priorities that are easier to monitor.

Nudging developers to do the right thing when it comes to software architecture has to be easy. This means serving software architecture insights immediately and in the context of their daily work. So if developers want to understand the architectural impact of their code changes on the software system, information should be presented as they perform each build, together with granular insights into which software changes are causing architecture problems.

2. Make It Attractive

According to the Nudge Unit, we are more likely to do something that our attention is drawn toward, including pleasing images, color, and personalization.

A nudge toward better software architecture involves drawing the developer’s attention to a place they most often frequent, e.g., email or a code dashboard. To make it attractive, send personalized architecture violations (impact) to each developer’s preferred communication platform.

3. Make It Social

The Nudge Unit states that we are all embedded in networks and those networks can positively shape our actions. This is true for development organizations as well. Teams typically operate in silos and rarely engage in a collaborative fashion to drive systematic improvements like software architecture.

Development leaders can foster networks to encourage collaborative behaviors to spread across development organizations. For example, they can provide a single collaboration app that integrates all architecture issues and metrics so cross-functional teams have one point where they react, decide, and solve problems together.

4. Make It Timely

Timing makes a big difference in development organizations. Information must be delivered in the context of a developer’s daily workflow. To encourage better behaviors, architecture monitoring must provide developers earlier guidance on architecture impacts - in their world, in their code, and in their terms. If this is done consistently, the behavioral mindset will shift from ignoring architecture problems to driving and sharing improvements in both architecture and design.


Insights from behavioral science can encourage development teams to make better choices for themselves, their colleagues, and their business. Lattix Architect can help by making architectural impact and insights available after each build. Architecture violations and metrics can also be made available in Lattix Web or other dashboards like SonarQube.

Design Rules to Manage Software Architecture

In Design Rules: The Power of Modularity, Carliss Y. Baldwin and Kim B. Clark argue that the computer hardware industry has grown so quickly because of modularity: building complex products by breaking functionality into smaller subsystems that are designed to work independently yet can be combined as building blocks of a whole product. The key to this modularity is the use of design rules that must be followed and that allow designers (and software developers) to creatively solve complex problems. Design rules are also key to the computer software industry.

What are design rules?

Design rules are a way to specify the allowed nature of the relationship between various subsystems. Design rules have two purposes:

  • Flag architectural errors that erode the architecture over time
  • Capture critical changes to the architecture that might necessitate further changes to the system as a whole or to how subsystems interact with each other

There are many benefits to design rules. Design rules are an easy way for the software architecture to be communicated to the entire development team. With clearly defined design rules, new developers can come up to speed quickly on how the software is supposed to work and how they should structure their code. When design rules are monitored, tight scheduling does not erode the architecture and, if it does, the consequences of time pressure can be tracked (architectural technical debt) and monitored.

Design rules make managing large, complex software systems easier because there are clear rules on how different elements can and cannot interact. Distributed teams (outsourcing, offshoring) can be counted on to produce higher quality code because they have rules to follow. Without design rules, it is impossible to manage the long-term health and maintainability of the software.

Consequences of not implementing design rules

Software architecture degrades over time with successive revisions. This is typically called architecture erosion. This happens because of the development team’s inability to communicate and enforce architectural intent in the software, i.e. not implementing design rules. Without clear rules, developers can and will change the software with unintended consequences.

Architecture erosion also leads to maintainability issues. Bad dependencies are introduced which leads to code that is hard to understand and change. This is typically referred to as brittle code. Some of the other consequences of a lack of design rules include lower reliability, less modularity, lower performance, and lower interoperability. Design rules give actionable insight into violations of the intended architecture that are a consequence of normal development.

How to implement design rules

The first step is finding an easy way to communicate the architecture to the entire team. Architecture diagrams communicate important aspects of the model. We recommend using a mixture of the dependency structure matrix (DSM, below left) and conceptual architecture diagrams (CAD, below right). The DSM is a simple, compact, and visual representation of a system or project in the form of a square matrix. It is a good way of understanding the entire software project in one view. DSMs are also a powerful way of setting and visualizing design rules, making it easy to pinpoint violations. The CAD is a good way of looking at smaller, more manageable subsystems because it is a simple diagram that is easily understood by managers, users, and business stakeholders.

Design Rules: Dependency Structure Matrix and CAD

Once you understand your architecture, you need a way to enforce it with build-time checking and reporting. Here’s a white paper on how Lattix does this: “The Lattix Approach: Design Rules to Manage Software Architecture”.

When you are creating design rules, the things that you want to enforce are:

  • Placement of UI, business, and data logic
  • Use of infrastructure or util modules
  • Design standards
  • Layered architecture
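To make the idea concrete, build-time rule checking can be sketched as an allowed-dependency map checked against extracted dependencies. This is a hypothetical toy checker (all module and layer names are invented for illustration), not the Lattix rule engine:

```python
# Hypothetical layering rules: each layer lists the layers it may use.
ALLOWED = {
    "ui": {"business"},
    "business": {"data", "util"},
    "data": {"util"},
    "util": set(),
}

def check_design_rules(deps, layer_of, allowed=ALLOWED):
    """Return (source, target) pairs that violate the layering rules."""
    violations = []
    for src, targets in deps.items():
        for dst in targets:
            s, d = layer_of[src], layer_of[dst]
            # dependencies within a layer are fine; cross-layer ones
            # must be explicitly allowed
            if s != d and d not in allowed.get(s, set()):
                violations.append((src, dst))
    return violations

deps = {
    "OrderForm": {"OrderService", "OrderTable"},  # UI reaching into data
    "OrderService": {"OrderTable"},
}
layer_of = {"OrderForm": "ui", "OrderService": "business", "OrderTable": "data"}
print(check_design_rules(deps, layer_of))  # [('OrderForm', 'OrderTable')]
```

Run in a CI job, a non-empty violation list fails the build, which is what keeps schedule pressure from quietly eroding the architecture.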

You need a process to evolve (update) the architecture when required. Sometimes you will be adjusting the architecture ahead of development and sometimes you will be changing the architecture during development as new information becomes available.


The goal of using design rules to manage software architecture is to keep the code clean and consistent. This will allow you to keep maintenance costs down over the entire lifecycle of the product. This is especially important because, as the product evolves, new team members will be introduced and new business requirements will be needed that were not thought of in the original architecture. With design rules, this product evolution can be handled efficiently. If you are interested in trying out the Lattix Approach to design rules, sign up for a trial.

Software Architecture and GDPR Compliance

The General Data Protection Regulation (GDPR) is an EU regulation on privacy protection that goes into effect in May 2018. GDPR applies not only to EU companies that process personal data on EU residents but also to companies not located in the EU. As Article 3 states, it is “applied to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union.”

Software architecture is an important part of GDPR compliance. An architectural model of the software gives you a complete view of everything connected to the personal data in your system. The GDPR defines personal data as any information that has the potential, alone or paired with other information, to identify a person. You need to preserve the identity of an individual across different names and properties and be able to trace them across the system and disparate data points as stated in Article 30. You have to record what you do with personal data and define which applications use it.

The Automated Decision Making section of the GDPR states that any system which undertakes automated individual decision-making, including profiling (Article 22), is now contestable by law. This includes automation components such as calculation engines, scoring systems, or other processing of personal data. You need to be able to trace the personal information through these systems and demonstrate compliance. Article 5 states “the controller shall be responsible for, and be able to demonstrate compliance…”

Compliance Steps for GDPR

As part of ensuring compliance for GDPR, you will need a good overview of the personal data involved.

  1. Identify all data that the GDPR considers personal data. Lattix Architect will give you this information with its member level expansion feature that allows you to see all of the variables associated with personal data.

    software architecture (see our video on Member Level Expansion)

  2. Once you have identified the personal data, you need to analyze its use. Lattix Architect understands all of the dependencies in your software system, so it will know all of the dependencies on the personal data. Now you will be able to model the data flow and show which applications, processes, etc. use the personal data.
  3. Once you have modeled the data flow, you will be able to demonstrate compliance with GDPR by using the Impact Analysis Report in Lattix Architect. This report tells you all the dependencies on selected elements (in this case variables) and can be exported to Excel, csv, or XML formats.

    software architecture (see our video on Impact Analysis)
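Steps 2 and 3 amount to a reverse reachability search over the dependency graph: starting from the flagged fields, collect everything that directly or transitively uses them. A minimal sketch of that idea (toy data, hypothetical names, not the Lattix report itself):

```python
def personal_data_impact(deps, personal_fields):
    """Everything that directly or transitively uses the flagged fields.

    deps maps each element (app, process, variable) to the set of
    elements it uses.
    """
    dependents = {}
    for src, targets in deps.items():
        for dst in targets:
            dependents.setdefault(dst, set()).add(src)  # invert edges
    impacted, stack = set(), list(personal_fields)
    while stack:
        for d in dependents.get(stack.pop(), ()):
            if d not in impacted:
                impacted.add(d)
                stack.append(d)
    return impacted

deps = {
    "BillingApp": {"Customer.email", "Customer.name"},
    "ReportJob": {"BillingApp"},
    "HealthCheck": set(),
}
print(sorted(personal_data_impact(deps, {"Customer.email"})))
# ['BillingApp', 'ReportJob']
```

The resulting list is exactly what you would export for an impact analysis report: every application and process that has to be accounted for when demonstrating compliance.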

GDPR compliance is something you need to regularly revisit. You must go through the above steps frequently to ensure you remain compliant. This becomes part of your governance framework.


Non-compliance with GDPR can result in large fines. Penalties, as outlined in Article 83, include “fines up to 20,000,000 EUR or in the case of an undertaking, up to 4% of the total worldwide annual turnover.” There is also personal damage that can be claimed by any individuals who are the data subjects, and there is personal liability for directors and senior managers. All of this makes it worthwhile for organizations to take these risks seriously.

Motivation for Software Architecture Refactoring

Refactoring is commonly applied to code, but refactoring can also be applied to other development artifacts like databases, UML models, and software architecture. Refactoring software architecture is particularly relevant because during development the architecture is constantly changing (sometimes for the worse; see our blog post on Architectural Erosion) and expanding. Software architecture refactoring should happen regularly during the development cycle.

We have talked in the past on how to perform architectural refactoring (see our blog post What is Architectural Refactoring?). In this blog post we talk about why you should.

Why refactor software architecture?

The ongoing success of a project is based in large part on the software architecture. Software architecture directly influences system qualities like modifiability, performance, security, availability, and reliability. If the architecture is poor, no amount of tuning or implementation tricks after the initial development will help significantly improve system qualities. You need to evaluate and refactor your software architecture early to know whether it will meet your requirements.

Software architecture evaluation and refactoring should be a standard activity in any development process because it is a way to reduce risk, it is relatively inexpensive, and it pays for itself in the reduction of costly errors or schedule delays. Architecture also influences things like schedules and budgets, performance goals, team structure, documentation, and testing and maintenance. As software teams grow and/or become more distributed, understanding software architecture becomes even more vital. If everyone on the team does not have a clear understanding of the architecture (what components depend on other components, etc.), defects start to develop in the code.

For example, if you were building a house, you would carefully examine and follow the blueprints before and during construction and make changes to them as new requirements are introduced. In construction, this extra time is worth it because it’s better and cheaper to find out the homeowner wanted an extra bathroom during design or construction than on moving day! The same is true for software development.

What is software architecture?

To properly refactor a software architecture, you need to understand what information is relevant. Software architecture is the structure of a system. It is made up of software components, their properties, and their dependencies. The architecture defines the modules, objects, processes, subsystems, and relationships (calls, uses, instantiations, depends on, etc.). Architecture defines what’s in a system and provides all the information you need to know how the system will meet its requirements. Software architecture fills the gap between requirements and design. To refactor the architecture, it has to be understandable and easily visible using Dependency Structure Matrix (DSM) and Conceptual Architecture Diagram (CAD) views.

software architecture

Finally, the architecture creates the requirements for the low-level designs.

When should you refactor software architecture?

In a typical project, you design and think about the architecture only at the beginning. But, as stated earlier, architecture refactoring can and should be applied at all stages of software development. As an example, in agile development architecture evaluation and refactoring should happen once per sprint.

Architecture refactoring is particularly helpful after implementation has been completed. This might happen when an organization inherits a legacy system or if you are put in charge of an existing application. Understanding and refactoring the architecture of a legacy system is useful because it gives you a complete view of the system and answers the question of whether the system can meet the requirements in terms of performance, security, quality, and maintainability.

What are the goals of software architecture refactoring?

“If you don’t know where you are going - any road will get you there” - Cheshire Cat

If you don’t know what your goals are or if the goals are too vague (“the system shall be highly modifiable,” “the system shall be secure from unauthorized break-in,” “the system shall exhibit acceptable performance”), then there can be a misunderstanding of what needs to be done during refactoring or when refactoring has been completed. The point is that system attributes are not absolute quantities, but exist in the context of specific goals.

Not all system attributes can be improved with software architecture refactoring. Usability is a good example: it has more to do with the user interface than the underlying architecture. However, if the user interface is its own module with limited dependencies, it can easily be swapped out for a different interface (e.g., a web UI instead of a desktop GUI).

System attributes that are determined by the architecture and can be improved with refactoring include:

  • Performance - This is how responsive the system is in certain workload conditions (as specified by the end users) or how many events it can process during a certain period of time.
  • Reliability or availability - This is the ability to keep the system up and running over time. The system needs to recover gracefully from failures or unexpected behavior.
  • Security - This is the ability to resist or defeat unauthorized usage and/or denial of service attacks while still providing the correct service to legitimate users.
  • Modifiability - This is how quickly new features and updates can be made to the system based on changing requirements or bugs found in the field or through testing.


The benefit of software architecture refactoring is uncovering problems earlier in the development cycle when they are cheaper and easier to fix. It produces a better architecture that helps with future development and maintainability. An iterative and consistent architectural refactoring process increases everyone’s confidence in the architecture and in the system as a whole.

Software architecture refactoring gives everyone a better understanding of the architecture. This can then easily be communicated to all interested parties including product management, other developers, QA, etc. Lattix Architect is a great companion for architectural refactoring and evaluation as it makes the visualization of the architecture easier and allows for quick what-if analysis of the architecture.

The Smell of Rotting Software

Jack Reeves introduced the concept that source code is the design and that programming is about designing software.1 As software grows, the design, or architecture, tends to grow large and complex, because software architecture is constantly evolving; this makes software maintenance difficult and error-prone. In this article, we will talk about symptoms of bad architecture and how to fix them.

Poor Software Architecture

According to Robert Martin2, there are seven symptoms of poor architecture.

  1. Rigidity: this means the system is hard to change. Every change forces other changes to be made. The more modules that must be changed, the more rigid the architecture. This slows down development as changes take longer than expected because the impact of a change cannot be forecast (impact analysis can help). System stability and average impact are good architecture metrics to monitor for rigidity. System stability measures the percentage of elements (on average) that would not be affected by a change to an element. Average impact for an element is the total number of elements that could be affected if a change is made to that element (the transitive closure of all elements that could be affected).
  2. Fragility: when a change is made to the system, bugs appear in places that have no relationship to the part that was changed. This leads to modules that get worse the more you try to fix them. In this case, these modules need to be redesigned or refactored. Cyclicality metrics can help find fragile modules. Cyclicality is useful in determining how many elements of a system are in cycles. See our blog post “Cyclicality and Bugs” for more information.
  3. Immobility: this is when a component cannot be easily extracted from a system, making it unable to be reused in other systems. If a module is found that would be useful in other systems, it cannot be used because the effort and risk are too great. This is becoming a significant problem as companies move to microservices and cloud-ready applications. A metric that is useful in this case is called coupling. Coupling is the degree of interdependence between software modules; a measure of how closely connected two routines or modules are and the strength of the relationship between modules.
  4. Viscosity: this is when the architecture of the software is hard to preserve. Doing the right thing is harder than doing the wrong thing (breaking the architecture). The software architecture should be created so it is easy to preserve the design.
  5. Needless complexity: the architecture contains infrastructure that adds no direct benefit. It is tempting to try to prepare for any contingency, but preparing for too many contingencies makes the software more complex and harder to understand. Architectures shouldn’t contain elements that aren’t currently useful. Cyclomatic complexity metrics can help diagnose this problem.
  6. Needless repetition: this is when an architecture contains code structures that are repeated, usually by cut and paste, that instead should be unified under a single abstraction. When there is redundant code in software, the job of changing the software becomes complex. If a defect is found in code that has been repeated, the fix has to be implemented in every repetition. However, each repetition might be slightly different.
  7. Opacity: this is when the source code is hard to read and understand. If source code is the design, this is source code that does not express its intent very well. In this case, a concerted effort to refactor code must be made so that future readers can understand it. Code reviews can help in this situation.
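The cyclicality metric mentioned under fragility can be approximated very simply: an element is in a cycle if it can reach itself by following dependencies. A toy sketch of that idea (self-reachability rather than a full strongly-connected-component analysis, with hypothetical element names):

```python
def cyclicality(deps):
    """Percentage of elements involved in a dependency cycle.

    deps maps each element to the set of elements it depends on;
    an element is cyclic if it can reach itself through the graph.
    """
    elems = set(deps) | {t for ts in deps.values() for t in ts}

    def reachable(start):
        # everything reachable from start via dependency edges
        seen, stack = set(), list(deps.get(start, ()))
        while stack:
            cur = stack.pop()
            if cur not in seen:
                seen.add(cur)
                stack.extend(deps.get(cur, ()))
        return seen

    cyclic = [e for e in elems if e in reachable(e)]
    return 100.0 * len(cyclic) / len(elems)

# A and B depend on each other; C only uses them
print(round(cyclicality({"A": {"B"}, "B": {"A"}, "C": {"A"}}), 1))  # 66.7
```

A rising cyclicality number is an early warning that modules are becoming mutually entangled and will get worse the more you try to fix them.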


While source code may be the design, trying to figure out the architecture from the source code can be a daunting experience. Using architectural analysis tools like Lattix Architect can help by visualizing the dependencies. This allows you to refactor the architecture, prevent future architectural erosion, and provide metrics like system stability, average impact, cyclicality, coupling, and cyclomatic complexity.

1. C++ Journal, “What is Software Design?”
2. Agile Software Development, Principles, Patterns, and Practices, Robert Martin

Architecture Erosion in Agile Development

Software architecture erosion refers to the gap between the planned and actual architecture of a software system as observed in its implementation.1

Architecture erosion is a common and recurring problem faced by agile development teams. Unfortunately, the process of solving this problem is usually ad hoc or very manual, without adequate visibility at the architecture level. One effective solution is the reflexion model technique. The technique is a lightweight way of comparing high-level architecture models with the actual source code implementation while also specifying and checking architectural constraints.

The diagram below is an example of the reflexion model technique.

Agile Architectural Analysis

Architecture erosion can result in lower quality, increased complexity, and harder-to-maintain software. As these changes happen, it becomes more and more difficult to understand the originally planned software architecture. This is particularly important in an agile environment where, according to the Agile Manifesto, working software is valued over comprehensive documentation and responding to change is valued over following a plan. In reality, this means that the architecture is evolving as the software is evolving. Therefore, software changes need special attention (architectural assessment) from software architects. If this does not happen, the architecture could erode or become overly complex. Uncontrolled growth of a software system can lead to architectural issues that are difficult and expensive to fix.

How to avoid architecture erosion

Architecture erosion can be avoided or corrected by continuously monitoring and improving the software. Continuous checking of the implemented architecture against the intended architecture is a good strategy for detecting software erosion. Once architectural issues have been found, refactoring should be used to fix them. In an agile environment, you should combine development activities with lightweight continuous architectural improvement to avoid or reverse architecture erosion. The process of continuous architectural improvement can be broken down into four steps:

  1. Architecture assessment
    1. Identify architectural smells and design problems
    2. Create a list of identified architectural issues
  2. Prioritization
    1. Decide the order in which the architectural issues will be tackled, starting with strategic design issues or high-importance requirements
  3. Selection
    1. Choose the appropriate refactoring pattern to fix the issue. If none exists, create your own.
  4. Test
    1. Make sure the behavior of the system did not change
    2. Update the architecture assessment to make sure you fixed the design problems and did not introduce new issues. Watch the Lattix Update Feature video for more information on this step.
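Parts of step 1, the architecture assessment, can be automated. One classic architectural smell to surface is a cyclic dependency between components, which couples them so tightly that none can be changed or tested in isolation. The sketch below finds such a cycle with a depth-first search over a component dependency graph; the component names and edges are invented for illustration.

```python
# Detect a cyclic dependency between components -- a classic architectural
# smell worth flagging during architecture assessment. Names are made up.
deps = {
    "orders": ["billing"],
    "billing": ["customers"],
    "customers": ["orders"],   # closes the cycle orders -> billing -> customers -> orders
    "reporting": ["orders"],
}

def find_cycle(graph):
    """Return one dependency cycle as a list of components, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {node: WHITE for node in graph}
    path = []

    def visit(node):
        color[node] = GRAY
        path.append(node)
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:
                # Back edge to a node on the current path: a cycle.
                return path[path.index(nxt):] + [nxt]
            if color.get(nxt, WHITE) == WHITE:
                found = visit(nxt)
                if found:
                    return found
        path.pop()
        color[node] = BLACK
        return None

    for node in graph:
        if color[node] == WHITE:
            found = visit(node)
            if found:
                return found
    return None

print(find_cycle(deps))
```

Each reported cycle becomes an entry in the list of identified architectural issues, ready for the prioritization step.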

This process is particularly useful in agile development. In a scrum environment, architecture refactoring should be integrated into sprints by setting aside time for refactoring both code and architecture. During the sprint, architects need to check the architecture, while testers and product owners should validate that the system still meets its requirements. Architecture refactoring should be done once per sprint, as opposed to code refactoring, which should be done daily. If it is done less often, fixing architectural issues takes more time and effort because more code changes accumulate on top of the design issues. If it is done more often, the architecture could change needlessly and add to software complexity. Architectural problems not solved in the current sprint should be saved and maintained in a backlog.


Architecture erosion can happen in any software project where architectural assessments are not part of the development process. Architectural refactoring makes sure wrong or inappropriate decisions can be detected and eliminated early. One of the principles of agile development is "maintain simplicity." Focus on simplicity in both the software being developed and in the development process. Whenever possible, actively work to eliminate complexity from the system. A clean architecture eliminates complexity from the software, while a lightweight tool that supports the reflexion model technique, such as Lattix Architect, makes the process of continuous architecture improvement simple.


Architectural Flaws: The Enemy Of Software Security

“Microsoft reports that more than 50% of the problems the company uncovered during its ongoing security push are architectural in nature. Cigital data shows a 60/40 split in favor of architectural flaws.”
- Gary McGraw

Nearly 40% of the 1,000 CWEs (Common Weakness Enumeration entries) are architectural flaws. Architectural design is an often overlooked aspect of secure software development. So much so that the IEEE established a Center for Secure Design and released the document “Avoiding the Top 10 Software Security Design Flaws”.

Static analysis is not enough

Static analysis of software source code is necessary but not sufficient. Architectural flaws are difficult to find via static analysis, and the complexity they add can obscure coding bugs that static analysis might otherwise have detected. Research from Rick Kazman at the Software Engineering Institute shows that you should focus on identifying design weaknesses in order to reduce the volume of software bugs. In identifying structures in the design and codebase that have a high likelihood of containing bugs, hidden dependencies, and structural design flaws, SEI has found that architectural flaws and security bugs are highly correlated (a 0.9 correlation). This is because defective files seldom exist alone in large-scale software systems. They are usually architecturally connected, and their architectural structures exhibit significant design flaws that can propagate bugs among many files.
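The idea that files are "architecturally connected" can be made concrete with a transitive-closure computation over the dependency graph: starting from a changed (or defective) file, follow dependency edges in reverse to find every file whose behavior could be affected. The file names and edges below are invented for illustration, not taken from any real codebase.

```python
from collections import deque

# Invented example: "a depends on b" is written depends_on["a"] = ["b"].
depends_on = {
    "parser.c": ["buffer.c"],
    "net.c": ["buffer.c"],
    "tls.c": ["net.c", "parser.c"],
    "buffer.c": [],
}

# Invert the graph: for each file, which files depend on it directly?
dependents = {f: [] for f in depends_on}
for f, targets in depends_on.items():
    for t in targets:
        dependents[t].append(f)

def impacted_by(changed, dependents):
    """Transitive closure of dependents: every file a change could reach."""
    seen, queue = set(), deque([changed])
    while queue:
        current = queue.popleft()
        for d in dependents.get(current, []):
            if d not in seen:
                seen.add(d)
                queue.append(d)
    return seen

# A change to the low-level buffer module ripples through most of the system.
print(sorted(impacted_by("buffer.c", dependents)))
```

Files with a large impact set are exactly the architecturally connected hotspots where a single defect can propagate widely, and they deserve the most review and testing attention.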

Example: HeartBleed

In his essay “How to Prevent the next HeartBleed,” David Wheeler wrote, “OpenSSL uses unnecessarily complex structures, which makes it harder for both humans and machines to review.” There should be a continuous effort to simplify the code; otherwise, just adding capabilities will slowly increase software complexity. The code should be refactored over time to make it simple and clear while new features are being added. The goal should be code that is “obviously right,” as opposed to code that is so complicated that “I can’t see any problems.”

As we stated above, this is a good example of static analysis techniques not being enough. The techniques that were supposed to find HeartBleed-like defects in OpenSSL were thwarted because the code was too complex. Code that is security-sensitive needs to be “as simple as possible.” Many security experts believe that using tools like Lattix Architect to detect especially complicated structures, and then simplifying those structures, is likely to produce more secure software. Simplifying code is a mindset: there needs to be a continuous effort to simplify (refactor) the code. If not, architectural erosion sets in as you add capabilities and slowly increase software complexity.

As David stated above, the goal should be code that is obviously right, as opposed to code that is so complicated that you can’t see any errors. I think Russ Cox said it best when talking about HeartBleed and complexity: “Try not to write clever code. Try to write well-organized code. Inevitably, you will write clever, poorly-organized code. If someone comes along asking questions about it, use it as a sign that perhaps the code is probably too clever or not well enough organized. Rewrite it to be simpler and easier to understand.”