Legacy System Modernization Pitfalls

Because legacy system modernization projects tend to be the exception rather than the norm, most people have a limited amount of experience with them. It’s not hard to imagine the result—many of the modernization-specific pitfalls are easy to overlook. Not only is the scale of the effort much larger than the typical maintenance endeavor, but the burden of managing the transition across people, processes, data, and system behavior has the potential to put unexpected strain on the project. Let’s look at the most common pitfalls we encounter, and explore how to proactively mitigate their impact.

Pitfall #1 – Falling prey to fear

End users fearing change is natural. The tools they use to do their jobs are going away and being replaced with something completely unknown. Some will worry about how their workloads will be impacted if the new system doesn’t work the way they expect. Others may become nervous about not being able to adapt to the new system and being reprimanded or fired for lower productivity.

Unaddressed fear can severely undermine your project. While most users are effectively a captive audience, with no open-market alternative to the software you provide, their natural concerns (and even their unfounded complaints) can erode the willingness of management and executives to support big changes. Many businesses rely on people with specific areas of expertise: If your Chief of Mechanical Operations threatens to quit rather than use something substantially different, how easy will they be to replace, and will that affect how much support the project receives from above?

The fix

Addressing fears is generally straightforward. Make sure you involve everyone directly affected in project planning. In-person user interviews are a great way to uncover current pain points and to learn which features and behaviors are critical to preserve. Be sure to communicate how the project goals align with users’ daily work and how the new system will make their lives easier.

Holding all these conversations can feel time-consuming, but it’s vital that the people who will actually use the new system feel they are part of the process. You can implement features proficiently without involving others, but you can’t steer the project’s acceptance if you aren’t helping shape how it is perceived.

Pitfall #2 – Forgetting to set expectations

Software systems can limit the flexibility of how the business operates. Retiring a legacy system for a modern one suddenly opens the door for changes that weren’t previously possible…but this can be dangerous. Stakeholders across the spectrum (end users, support staff, business managers, technical folks, and overseeing executives) will all have ideas about what to change. Some opportunities will be low effort and high value, such as publishing alerts when processing exceptions occur instead of requiring users to monitor for failures themselves. Other opportunities, like enabling all workers to adjust any information on an order in progress, will have a huge impact on multiple departments or affect downstream work processes. Proposals for high-impact changes can quickly expand the scope and cost of a project, jeopardizing the entire timeline and budget.

The fix

When navigating opportunities to redesign business processes, you absolutely must have a solid set of shared project goals to reference when evaluating each proposal. These project goals should be worked out at the beginning of the project, and ideally have executive buy-in to help steer the effort. Carefully evaluate and clearly communicate what changes the project can or cannot support. Doing this requires understanding how top-level business goals, departmental goals, and individual goals all align, so you can be sure your modernization project is delivering the most value across the organization.

Pitfall #3 – Underestimating the data migration effort

This is especially easy to do when the plan is to reuse the same schema on a newer platform, because it’s tempting to overlook all the activities that still have to happen. Does your legacy database make direct queries into other systems that have to be untangled? Those queries will fail when moving to a new server, and it will take time to rewrite or separate them. Has the schema been put into an automated migration tool yet? It can take days just to figure out the right order to run the table creations for anything more than a handful of tables. Moving between dissimilar technologies becomes a mini-project in its own right: mapping fields, writing an extract process, making it repeatable for multiple test deployments, and performance tuning all have to happen.
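To make the ordering problem concrete, here’s a minimal sketch of sorting tables into a safe creation order. The `foreign_keys` map is a hypothetical structure you’d build from the schema’s metadata; the sketch isn’t tied to any particular migration tool.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dependency map built from the schema's metadata:
# each table lists the tables its foreign keys reference.
foreign_keys = {
    "customers":   set(),
    "products":    set(),
    "orders":      {"customers", "products"},
    "order_notes": {"orders", "customers"},
}

# Referenced tables must exist first, so a topological sort yields a
# creation order that satisfies every foreign key.
creation_order = list(TopologicalSorter(foreign_keys).static_order())
print(creation_order)  # e.g. ['customers', 'products', 'orders', 'order_notes']
```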

The consequence of underscoping the data migration is usually a delayed system launch. Most businesses can’t shift to a new system without their previous data. While some features of the new system can be deprioritized to “nice-to-haves” to make up for schedule slip, that luxury doesn’t exist for the data migration.

The fix

The best way to quickly get an understanding of the data migration effort is to start doing it right away in a test environment. Plan a strategy, and then execute it to verify assumptions and uncover problems. You may choose to work with a subset of the data at first, just to cover the breadth of the system—better to uncover problems in five different resources in one day than spend one day exporting a single large resource. After you inventory all the issues, you may be able to work through them with a parallel “divide and conquer” approach. Lastly, you should do at least one complete run of the migration process before the cutover. Go-live is NOT the time to find translation errors or figure out how long the process will take!
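As one way to make that breadth-first pass repeatable, here is a rough sketch that copies a small slice of every table on each test run. It assumes the legacy and target schemas match and that the table list is already in dependency order; the connection objects, parameter style, and `LIMIT` syntax are placeholders for whatever your platforms actually use.

```python
ROW_LIMIT = 500  # small slice of each table: enough to exercise breadth, quick to rerun

def migrate_subset(legacy_conn, target_conn, tables):
    """Copy a sample of every table so each one is exercised on every test run.

    `tables` is assumed to be ordered parents-first (see the topological sort above).
    """
    # Clear out the previous test run, children first, so foreign keys don't complain.
    for table in reversed(tables):
        target_conn.execute(f"DELETE FROM {table}")

    for table in tables:
        rows = legacy_conn.execute(f"SELECT * FROM {table} LIMIT {ROW_LIMIT}").fetchall()
        if not rows:
            continue
        placeholders = ",".join("?" * len(rows[0]))
        target_conn.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
        print(f"{table}: copied {len(rows)} rows")
    target_conn.commit()
```

Note that a naive row limit can orphan child rows whose parents weren’t sampled; for foreign-key-heavy schemas, sample the parent tables first and select child rows by those keys.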

Pitfall #4 – Failing to completely understand the old system’s behavior

As you replace screens or other functionality, it’s possible to miss replicating or replacing a silent behavior that the users aren’t aware of. That simple form with only two input fields for a start and stop date and a Go button might be hiding any number of notifications to other forgotten but vital systems. Did anyone read that crufty old code to make sure nothing’s lurking in it? As a participant in many data migration efforts, I’ve run into lots of these unexpected behaviors: data being read from one place but updated in another; important-looking input actually being discarded because it was no longer needed; unrelated but critical operations being silently embedded with mundane ones.
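As a contrived illustration (the function names and endpoint below are invented), the handler behind that two-field form might look something like this:

```python
import urllib.request

def save_report_range(start_date: str, stop_date: str) -> None:
    # The behavior everyone knows about: persist the selected date range.
    print(f"range saved: {start_date}..{stop_date}")

def on_go_clicked(start_date: str, stop_date: str) -> None:
    """Handler behind a 'simple' two-field form with a Go button."""
    save_report_range(start_date, stop_date)
    # The behavior nobody remembers: a downstream scheduling system quietly
    # depends on being pinged whenever the range changes. Drop this call and
    # the new screen looks correct while silently breaking that system.
    urllib.request.urlopen(
        f"http://scheduler.internal.example/refresh?start={start_date}&stop={stop_date}"
    )
```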

The risk of missing hidden behavior is usually proportional to how long it’s been since the code was last regularly maintained and how much staff turnover has happened since it was first written. It’s also, unfortunately, a risk you can’t afford to ignore: nobody wants to discover that the employee payroll process silently depended on that neon-colored popup window announcing company holidays, which users said was too annoying and the migration team decided to remove.

The fix

Mitigating this risk is simple, but time-consuming: Someone needs to carefully read the source code for every screen you’re replacing and document its behavior. This isn’t going to be a fun activity, and experience with the old system’s technology won’t look exciting on anybody’s resume, so choose the team members wisely. It should be someone with patience, because finding resources on the old technology may be difficult. It should also be someone with fortitude, because spending days going through and documenting old screens will be enervating. Try hard to compensate the team members involved in this long assessment with whatever motivates them personally (bonuses, time off, an exciting or coveted project afterwards). Other tactics include “sharing the pain” by rotating the responsibility across multiple people, or contracting the effort out.

You may no longer have the source code available to read. In that situation, you might be able to use a decompiler to reconstruct it. The output from most decompilers, however, is not very readable for humans: the original names of the concepts being manipulated are usually lost, and instead of reading “call GetFinancialInstitution function with AccountNumber variable,” you’re reading “call f1 function with a1 variable.” In many cases where the source code has been lost, the best option is to profile the old system while it runs to determine what interactions it triggers. Database profilers can capture queries and updates, and network proxies can reveal which other systems are being contacted. Looking through the information captured by profilers is time-intensive, so you may have to rate the risk of the screens in the old system and profile only the most important ones.
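Once you have a capture, even a crude script can turn it into an inventory. The sketch below assumes a hypothetical capture file with one SQL statement per line (however your profiler or proxy exports it) and simply counts which tables each screen touches; it’s a heuristic, not a SQL parser.

```python
import re
from collections import Counter

# Rough heuristic: pull the identifier that follows FROM/JOIN/INTO/UPDATE.
TABLE_PATTERN = re.compile(r"\b(?:FROM|JOIN|INTO|UPDATE)\s+([A-Za-z_][\w.]*)", re.IGNORECASE)

def summarize_capture(log_path: str) -> Counter:
    """Count how often each table appears in the captured statements."""
    tables = Counter()
    with open(log_path) as capture:
        for statement in capture:
            tables.update(name.lower() for name in TABLE_PATTERN.findall(statement))
    return tables

if __name__ == "__main__":
    # "legacy_screen_capture.sql" is a placeholder for your exported capture file.
    for table, hits in summarize_capture("legacy_screen_capture.sql").most_common():
        print(f"{table}: {hits}")
```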

Own your modernization journey

Modernization projects can be daunting. They involve not only all the usual challenges of product development, but also the added complexity of navigating existing people, processes, data, and system behaviors. These extra factors can be frustrating if you don’t plan ahead to manage them explicitly. While every new endeavor has its share of unknown surprises, you can head off the worst of them by keeping a pulse on the people involved, setting expectations early, driving the data migration deliberately, and comprehensively cataloging the old system’s behavior.
