System Upgrades and Data Migration: Reducing the risks of a “big bang” cutover.
Director – Sales | Inviga | Software Development and Implementation of AI and Integrated Business Solutions
Published: July 19, 2022
If you’re ever weighing up a system upgrade proposal from a vendor, look closely at their migration approach – a poorly thought-out one is a red flag and could sink your project.
Not so long ago we were asked to step into a project to upgrade a 20-year-old legacy system to a modern web-based solution. The project was in trouble, with key business requirements proving far more challenging than anyone had anticipated. On top of that, there were two decades’ worth of transactions and data that needed to be migrated to a cloud-based platform, with no loss or error.
The previous vendor had done what most would do and proposed what we call a “big bang” cutover, one where the data is migrated in one big step over the weekend before go-live. A few quick calculations, though, showed that at best it would take 50 to 100 hours to migrate the data – and this is the important part – per attempt! More likely it would take significantly longer, as the transfer was over the internet and would probably take a few attempts in the lead-up to iron out any data mapping issues.
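For context, the back-of-the-envelope arithmetic looks something like this. The database size and throughput below are illustrative assumptions, not the project’s actual figures:

```python
# Rough cutover-time estimate. Both numbers here are assumptions for
# illustration only (the article does not state the real figures).
db_size_gb = 500        # assumed size of 20 years of transaction data
throughput_mbps = 20    # assumed sustained upload speed over the internet

db_size_megabits = db_size_gb * 1024 * 8
seconds = db_size_megabits / throughput_mbps
hours = seconds / 3600
print(f"~{hours:.0f} hours per migration attempt")
```

Even with generous assumptions, each full attempt lands in the multi-day range – which is why a cutover that needs several attempts becomes untenable.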
We quickly realized that we needed a different approach, one that could survive a lot of change and easily accommodate multiple migration attempts as we tested and refined the data migration.
We concluded that the best solution would be to just continuously synchronize the target system data with the legacy system. This way we could deal with any migration issues well before go-live, and then automatically keep the target system’s data up to date as it changes in the source system.
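In its simplest form, continuous synchronization can be sketched as a polling loop that copies only rows changed since a watermark. This is a minimal sketch assuming every legacy row carries a numeric last-modified timestamp; the row shape and the transform step are illustrative, not the real schemas:

```python
# Minimal sketch of timestamp-based continuous synchronization.
# Assumes each source row has a numeric "updated_at" field.

def transform(row):
    # placeholder for the mapping/cleansing done in the sync layer
    return {"id": row["id"], "name": row["name"].strip()}

def sync_changes(source_rows, target, last_synced):
    """Copy rows modified since the last sync; return the new watermark."""
    newest = last_synced
    for row in source_rows:
        if row["updated_at"] > last_synced:
            target[row["id"]] = transform(row)
            newest = max(newest, row["updated_at"])
    return newest

# First pass copies everything; later passes only pick up changes.
source = [
    {"id": 1, "name": "Acme ", "updated_at": 100.0},
    {"id": 2, "name": "Globex", "updated_at": 120.0},
]
target = {}
watermark = sync_changes(source, target, last_synced=0.0)

source[0]["name"] = "Acme Pty Ltd"
source[0]["updated_at"] = 150.0   # the row changes in the legacy system
watermark = sync_changes(source, target, watermark)
```

Each polling cycle is cheap because it only touches changed rows, which is what makes repeated migration attempts practical.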
This idea had some enormous benefits in de-risking the system upgrade, including:
Making it easy to migrate incrementally
Because the data was continually synchronized, we didn’t need to move everyone over to the new platform in one go. We could gradually migrate groups of users to the new system rather than cut everyone over at the same time, helping to limit the impact of any go-live issues. If anything had gone drastically wrong we could quickly and easily cut everyone back over to the existing system with little to no disruption.
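The gradual cutover described above can be reduced to a routing decision. This sketch assumes a login or proxy layer consults a routing table to decide which system a user group sees; the group names and mechanism are hypothetical:

```python
# Sketch of group-by-group cutover driven by a routing table.
# Group names and the routing mechanism are illustrative assumptions.

routing = {"pilot_team": "new", "finance": "legacy", "operations": "legacy"}

def system_for(group):
    return routing.get(group, "legacy")   # unknown groups stay on legacy

def migrate_group(group):
    routing[group] = "new"     # no copy step needed: data is already in sync

def roll_back_group(group):
    routing[group] = "legacy"  # instant fallback if go-live issues surface

migrate_group("finance")
```

Because synchronization keeps both systems current, flipping a group forward or back is just a table update, with little to no disruption.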
Shortened testing cycle times
Testing and refining data migrations is pretty tricky when your data takes at least 50 hours to copy. Any small issue can force you to start again, causing your 50-hour data copy to balloon to 100 hours, 200 hours, or even more if you were unlucky and hit further issues. Synchronizing data meant we could simply fix any issues and resync the affected data in a matter of hours, rather than days or weeks. This meant that we could do many more testing cycles in the same amount of time, increasing our chances of a successful upgrade.
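The “resync only the affected data” step can be sketched as a targeted re-copy keyed by ID. The dict-based row stores and the fix (a buggy whitespace transform) are purely illustrative:

```python
# Sketch of a targeted resync: after fixing a mapping bug, re-copy only
# the affected rows instead of restarting a 50-hour full migration.
# Plain dicts stand in for the two databases here.

def resync(source, target, affected_ids, transform):
    for row_id in affected_ids:
        target[row_id] = transform(source[row_id])

source = {1: "  Acme  ", 2: "Globex", 3: " Initech"}
target = {1: "  Acme  ", 2: "Globex", 3: " Initech"}  # migrated with a buggy transform

resync(source, target, affected_ids={1, 3}, transform=str.strip)
```

The cost of a fix is now proportional to the number of affected rows, not the size of the whole database.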
Giving us an easier way to solve data quality issues
During testing, it quickly became clear that the legacy system data was not perfect. There were weird data structures, formatting issues, rogue characters, and so on that needed to be fixed before we could move over to the new platform. Because we chose a synchronization approach we could easily see and fix these issues through the synchronization solution instead of having to perform data fixes in the legacy system. This was a huge improvement over the traditional approach of having to fix the data at the source, which would have added huge risks and complexity to the migration.
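Fixing data quality issues in the sync layer might look something like the sketch below. The specific fixes (rogue control characters, an assumed DD/MM/YYYY legacy date format) are examples of the kinds of issues described above, not the project’s actual rules:

```python
import re

# Sketch of cleansing legacy data in the sync layer rather than at the
# source. Both fixes below are illustrative assumptions.

def clean(value):
    value = re.sub(r"[\x00-\x1f]", "", value)  # strip rogue control characters
    return value.strip()                       # drop stray padding

def normalise_date(raw):
    # assume the legacy system stored dates as DD/MM/YYYY;
    # the new platform wants ISO 8601 (YYYY-MM-DD)
    day, month, year = raw.split("/")
    return f"{year}-{month:0>2}-{day:0>2}"
```

Keeping these rules in the sync pipeline means every resync re-applies them automatically, and the legacy system is never touched.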
How did we do it?
First and foremost, we utilized an open-source data synchronization solution that allows systems with very different data schemas to be kept in sync with each other. This gave us a lot of flexibility to map and transform the legacy system’s data outside of the legacy system, while at the same time keeping the target system constantly up-to-date as data in the legacy system changed. This meant we could continuously test the migration and make changes as we went along.
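Mapping between two very different schemas is often expressed declaratively. This is a minimal sketch of that idea; the column names are invented for illustration and are not the project’s actual schemas:

```python
# Sketch of a declarative field mapping between a legacy schema and a
# new platform's schema, applied by the sync pipeline on every row.
# All column names here are hypothetical.

FIELD_MAP = {
    "CUST_NO":  "customer_id",
    "CUST_NM":  "customer_name",
    "ADDR_LN1": "address_line_1",
}

def map_row(legacy_row):
    return {new: legacy_row[old] for old, new in FIELD_MAP.items()}

mapped = map_row({"CUST_NO": 42, "CUST_NM": "Acme", "ADDR_LN1": "1 Main St"})
```

Keeping the mapping as data rather than code made it easy to adjust as testing surfaced new differences between the two systems.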
We also used the synchronization solution to gradually migrate groups of users from the legacy system to the new platform in a controlled manner. When we felt the data was tested enough, we simply asked groups of users to start using the new solution – there was no need for further data migrations as the data was automatically kept up to date with the legacy system. If any issues came to light after moving the group of users on, we could quickly fix them and resync the affected data, or in the worst case temporarily put the users back on the legacy system while we investigated.
In the end, we successfully upgraded the legacy system with minimal disruption and a lot less stress for everyone involved. If you’re planning a similar project, I strongly urge you to consider using a synchronization tool to keep your data in sync as you migrate it.
Not only will it save you time, but it will also minimize the risks!
To find out more about how we did it, please contact the author or get in touch by email at firstname.lastname@example.org
Inviga was founded in Brisbane, Australia in 2021 with a mission to offer leading-edge ideas and solutions to help Queensland businesses do more with less. Our team of creative IT experts are specialists in solving difficult IT challenges without the cost and frustration of working with big traditional IT vendors.