
Technical Debt, Legacy Code and the Internet of Things

Bill McCaffrey, COO, Vector Software

Software used to be seen as something that could be written once and used many times without ever "breaking down." That illusion faded when problems began to appear, ultimately caused by continual development without the right quality-control processes in place, typically the result of intense business pressure to release new products.


These issues have left software applications carrying an enormous amount of technical debt, a metaphor for the latent defects introduced during system architecture, design or development. The accumulated liability created when organizations take these design and test shortcuts to meet short-term goals eventually makes software difficult to maintain. As technical debt increases, developers spend an ever-greater share of their time fixing bugs and struggling with fragile code, rather than building new features.

Many organizations are now finding that legacy software has a finite lifespan; when it ends, they are forced to decide whether to throw the code away and start again from scratch, or try to salvage it. In most cases, a substantial financial investment has been made in the code base, so there is tremendous pressure to reuse it.
The key to reducing technical debt is to refactor components over time, that is, to restructure an application's components without changing their external behavior. Developers are often hesitant to do so, however, for fear of breaking existing functionality. One of the biggest barriers to refactoring is the lack of tests that characterize the existing behavior of a component.
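The idea of restructuring a component without changing its external behavior can be made concrete with a small sketch. The function names and the discount rule below are purely illustrative, not taken from any real code base; the point is that a behavior-pinning assertion lets the tangled original and the cleaner rewrite be compared directly.

```python
# Hypothetical legacy function; names and logic are illustrative only.
def total_price(items):
    # Original tangled implementation: discount logic buried in the loop.
    total = 0.0
    for qty, unit_price in items:
        if qty >= 10:
            total += qty * unit_price * 0.9  # bulk discount
        else:
            total += qty * unit_price
    return total

def discounted_line_cost(qty, unit_price):
    """Extracted helper: the bulk-discount rule now lives in one place."""
    rate = 0.9 if qty >= 10 else 1.0
    return qty * unit_price * rate

def total_price_refactored(items):
    # Same external behavior, clearer structure.
    return sum(discounted_line_cost(q, p) for q, p in items)

# A characterization check pins down existing behavior across the change:
sample = [(2, 5.0), (12, 1.0)]
assert total_price(sample) == total_price_refactored(sample)
```

With such checks in place, the internal structure can be reworked freely; any divergence in external behavior fails the assertion immediately.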

This is a growing problem, as many deployed applications are built on legacy code bases that lack the necessary test cases. The problem is compounded when legacy code is deployed on a new platform or product: the lack of test artifacts means that technical debt continues to rise with no ability to "pay it off."
There is a massive quality gap that needs to be addressed, but many companies do not know where to start or do not have the necessary resources needed to address the problem.

Technical Debt Concerns Multiply in an IoT-Enabled World

The advent and growing prevalence of the Internet of Things (IoT) has made the problem of technical debt more acute. Previously, when systems were self-contained and had little connectivity, it was possible to keep technical debt relatively isolated. With IoT-enabled devices, however, not only is the sheer number of systems increasing, but technical debt is compounded: the technical sacrifices made in individual devices were not a problem on their own, but taken as a whole, the issue is much more apparent.

By definition, every IoT-enabled electronic device has network connectivity, which puts every manufacturer of electronic devices in the software business to some degree. This expands the scope of responsibility into new platforms and services, and introduces a demand for predictable behavior, especially when the safety of users or the environment is at risk. In a fiercely competitive industry such as IoT, however, the first-to-market advantage is huge, and developers are under intense pressure to release products quickly.

It has been proven time and again in software development that this thinking sacrifices quality for speed. That tradeoff can be dangerous for many IoT-enabled products, such as smart cars, medical devices and home safety systems, whose malfunction can put lives at risk.

When Consumer-Grade Becomes Safety-Critical

IoT has also driven a shift that places safety-critical requirements on a generation of software that previously had none. For example, as we move into the era of connected and autonomous cars, automatic emergency braking is controlled by software that powers cameras, radar, proximity sensors and more, all of which need to operate flawlessly to safely stop a vehicle if a driver is slow to respond. The embedded camera previously used for driver assistance (parking, for example) will now also be part of this safety-critical system. As these software-driven systems migrate from consumer-grade to safety-critical applications, faulty software has severe ramifications. Quality is no longer an option; it is a necessity.

Characterizing the Behavior of Software

A lack of sufficient tests typically means that a software application cannot be easily modified, since changes frequently break existing functionality. Consequently, when a developer modifies a unit of code and some existing capability breaks, the developer needs to determine whether the original software was written incorrectly, whether a requirement was never adequately captured, or whether the breakage was caused by the modification just introduced.

Baseline testing, also known as characterization testing, is useful for legacy code bases that have inadequate test cases. It is very unlikely that the owners of a deployed application without proper testing would ever go back to the beginning and build all of the required test cases. However, because the application has been in use for some time, the existing source code can serve as the basis for building test scenarios: automatic test case generation can quickly provide a baseline set of tests that capture and characterize existing application behavior.
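The mechanics of capturing a baseline can be sketched in a few lines. This is a simplified model of what commercial test-generation tools automate; the `parse_flag` unit and the input sets are hypothetical, and exceptions are recorded as behavior too, since a characterization test describes what the code does today rather than what it should do.

```python
def characterize(fn, input_sets):
    """Record current outputs for the given inputs: a baseline, not a proof of correctness."""
    baseline = {}
    for args in input_sets:
        try:
            baseline[args] = ("ok", fn(*args))
        except Exception as e:
            # Raising is also observable behavior worth pinning down.
            baseline[args] = ("raises", type(e).__name__)
    return baseline

def check_against_baseline(fn, baseline):
    """Re-run the unit and report any divergence from recorded behavior."""
    failures = []
    for args, expected in baseline.items():
        try:
            actual = ("ok", fn(*args))
        except Exception as e:
            actual = ("raises", type(e).__name__)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

# Hypothetical legacy unit under test:
def parse_flag(s):
    return s.strip().lower() in ("1", "true", "yes")

baseline = characterize(parse_flag, [("yes",), ("NO",), (" True ",), ("",)])
assert check_against_baseline(parse_flag, baseline) == []
```

After refactoring `parse_flag`, re-running `check_against_baseline` flags exactly the inputs whose behavior changed.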

While these tests do not prove correctness, they do capture what the application does today. This is extremely beneficial because it makes it possible to automatically construct a complete regression suite, allowing future changes, updates and modifications to be validated so that they do not break existing functionality. As a result, test completeness of legacy applications improves, and refactoring can be done with confidence that application behavior has not regressed.

Paying Off Technical Debt

Once the behavior of the software has been characterized through baseline testing, a developer can begin making updates and modifications to the code. To further automate continuous integration and testing, impact analysis in the form of change-based testing can be used to run only the set of test cases that demonstrate the effect of code changes on the integrity of the whole system. It is not uncommon for a company to take weeks to run all of its test cases, but with change-based testing a developer can make a code change and get feedback on its impact across the entire application within minutes. As a result, developers can make quick, incremental changes to the software, knowing they have the test cases needed to capture its existing behavior.
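The selection step behind change-based testing can be illustrated with a toy sketch. The test names, file names, and the hand-written coverage map below are hypothetical; real tools derive the test-to-code mapping automatically from coverage data, but the selection logic amounts to an intersection like this one.

```python
# Hypothetical mapping from each test to the code units it exercises.
# In practice this map is produced by coverage instrumentation.
test_coverage = {
    "test_login":    {"auth.c", "session.c"},
    "test_checkout": {"cart.c", "payment.c"},
    "test_profile":  {"auth.c", "profile.c"},
}

def impacted_tests(changed_files, coverage_map):
    """Select only the tests whose covered units intersect the change set."""
    changed = set(changed_files)
    return sorted(
        name for name, units in coverage_map.items()
        if units & changed
    )

# A change to auth.c triggers only the two tests that touch it:
assert impacted_tests(["auth.c"], test_coverage) == ["test_login", "test_profile"]
```

Running two tests instead of three is a trivial saving here, but the same intersection over thousands of tests is what turns a weeks-long full run into minutes of targeted feedback.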

If something does break, they can also analyze further to determine whether an error has been introduced, whether a capability that should exist has been removed, or whether there is a bug that should be addressed because it may have other ramifications.

Figure 1: Baseline testing formalizes what an application does today, which allows future changes to be validated to ensure that existing functionality is not broken. Change-based testing can be used to run only the minimum set of test cases needed to show the effect of changes.


In an Internet of Things-enabled world, a great amount of legacy code will find its way onto critical paths in new applications. Without proper software quality methods in place to ensure the integrity of this legacy code, the overall safety of the system may be compromised.  

Baseline testing can help reduce technical debt in existing code bases, and allow developers to refactor and enhance with confidence. This ultimately permits the owners of legacy applications to extract more value.
