Three truths and a lie: Key things to know when moving your legacy environment to the cloud

05/10/2021 · Simon Mikellides

To modernise or not to modernise? That is the question many organisations running critical mainframe workloads find themselves having to face.

It’s clear that most CIOs and IT leaders are well aware of the existential threat of slow transformation, and the real-life horror stories about legacy mainframe catastrophes are no secret either. But digital transformation, like anything, has its challenges. The move from waterfall and siloed development operations to a DevOps-centric land of continuous integration is a difficult task, but escaping the confines and ancient hieroglyphs of the mainframe world is an entirely different hurdle.

It doesn’t have to be a Catch-22. The facts make it clear that retiring big iron sooner rather than later is the right move, but this doesn’t mean that organisations should be hasty about it. Transparency is everything, so if you find yourself sceptical about whether to migrate or not, here are three truths and a lie about migrating your mission-critical workloads to the cloud:


Truth #1: Not all workloads need to be migrated to cloud-native microservices

Cloud-native might be a logical design goal for newly developed cloud workloads. However, there are good reasons not to distil your complete legacy functionality down to an independent set of loosely coupled microservices. The good news is that you probably won’t need to.

Certain functionality might be better off staying as a monolith, both because of the complexity involved in architecting, managing, scaling and monitoring highly transactional, atomicity-dependent workloads, and because of their steady state (they are updated infrequently). Workloads with these characteristics don’t need a continuous delivery model supported by a dedicated team. Disposition strategies for monolithic workloads will depend on business requirements, and may include re-hosting “as-is” with little-to-no change, or refactoring to Java or C# and further optimising to leverage specific cloud capabilities such as increased elasticity and availability.

It’s critical not to hurry down one modernisation path right out of the gate. The best outcome will be achieved by taking a tailored approach, and by developing a strategic modernisation roadmap with specific goals for different applications based on individual requirements. Identify a few high value capabilities that you want to decouple, and then progress towards a cloud-native microservices architecture.


Truth #2: You can significantly reduce risk by completing a thorough assessment that combines a top-down and bottom-up analysis

Lines of business and other stakeholders should not be kept in silos. As such, members of the business teams need to be brought into any modernisation conversations from the beginning. This is required in order to proactively address cultural change issues associated with reorganising teams to support any new development and deployment model, as well as ensure they see the value in this future application state.

A top-down and bottom-up analysis should be executed in tandem. A top-down analysis, driven via workshops using proven approaches and techniques such as event storming and domain-driven design (DDD), allows the future state to be shaped by describing the business and how events flow through the system. Legacy functionality usage has most likely evolved over the years, so incorporating user experience in order to build specific service use cases is also critical.

The purpose of a bottom-up analysis is to provide a comprehensive picture of the application components and the interrelationships between them. The cost and complexity of any future modernisation effort can be dramatically reduced by isolating unused components, anticipating potential roadblocks and proactively focusing on areas that need particular attention. Using our own analysis tools here at Advanced, we’ve seen scope reductions of approximately 40 to 70 percent. A bottom-up analysis also exposes the legacy application design and the anatomy of the source code, which is critical for ensuring that the future-state architecture doesn’t inherit the very design weaknesses that cause you the greatest pain today.

Successfully planning for the different disposition strategies requires a form of structured code analysis to find the tightly coupled dependencies in your code. As such, it’s important to use specialised modernisation tooling that can act on the outcomes of the top-down analysis coupled with the bottom-up analysis.
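To make the idea of a bottom-up scan concrete, here is a deliberately simplified sketch of how a dependency scan over COBOL sources might work. It is not Advanced’s actual tooling; the regex handles only static calls of the form `CALL 'NAME'`, and the program names are hypothetical:

```python
import re
from collections import defaultdict

# Illustrative only: matches static COBOL calls such as  CALL 'PAYCALC'
CALL_PATTERN = re.compile(r"CALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)

def build_call_graph(sources):
    """Map each program name to the set of programs it calls.

    `sources` is a dict of {program_name: source_text}.
    """
    graph = defaultdict(set)
    for program, text in sources.items():
        for match in CALL_PATTERN.finditer(text):
            graph[program].add(match.group(1))
    return graph

def unused_programs(sources, entry_points):
    """Programs never reached from any entry point are candidates for
    exclusion, which is how scope reduction shrinks the migration effort."""
    graph = build_call_graph(sources)
    reachable, stack = set(entry_points), list(entry_points)
    while stack:
        current = stack.pop()
        for callee in graph.get(current, ()):
            if callee not in reachable:
                reachable.add(callee)
                stack.append(callee)
    return set(sources) - reachable
```

Real modernisation tooling goes far beyond this (dynamic calls, copybooks, JCL, data dependencies), but the principle is the same: build the dependency graph first, then reason about scope and coupling from it.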


Truth #3: Moving to the cloud is best approached as an incremental journey

A byproduct of any good assessment should be a strategic modernisation roadmap containing a multi-phased approach to achieving an ROI at each step of the migration. There are different levels of maturity to consider when moving through this incremental journey: cloud ready, cloud optimised and cloud native.

Whilst cloud native is the ideal destination for “born in the cloud” greenfield development projects, a logical first phase for monolithic applications is to automatically convert code into modern languages such as Java or C#. By removing the dependency on the mainframe, you can target a cloud-ready containerised environment that takes advantage of many of the benefits the cloud has to offer. In a cloud-optimised environment, workloads are optimised further to provide scalability at the container level, while a few high-value capabilities are replaced with new microservices functionality. Further optimisation efforts can continue at your own pace in order to incrementally move towards a cloud-native environment, accepting that you may never need to replace the entire monolith.
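One common way to realise this incremental replacement is the strangler fig pattern: a thin routing facade sends requests for already-migrated capabilities to new microservices, while everything else still falls through to the converted monolith. A minimal sketch, in which the capability names and handlers are purely hypothetical:

```python
# Strangler-fig routing sketch: requests for migrated capabilities go to
# new microservices; all other requests fall through to the monolith.

def monolith_handler(request):
    # Stand-in for the converted monolithic application.
    return f"monolith handled {request}"

def quotes_microservice(request):
    # Stand-in for one newly extracted high-value capability.
    return f"quotes service handled {request}"

# Hypothetical capability names; this table grows as capabilities migrate,
# and the monolith shrinks at whatever pace the business allows.
MIGRATED_ROUTES = {
    "quotes": quotes_microservice,
}

def route(capability, request):
    handler = MIGRATED_ROUTES.get(capability, monolith_handler)
    return handler(request)
```

The design choice here is that migration state lives in one routing table, so moving a capability out of the monolith is a one-line change rather than a big-bang cutover.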


Lie #1: You don’t need to worry about operations and infrastructure

In a typical modernisation project, around 40 percent of the effort is focused on application source code and data conversion, 40 to 50 percent is expended on testing, and the remaining 10 to 20 percent is spent designing, implementing and managing the target operations and hardware infrastructure. The target platform should never be an afterthought, and this is especially true when deploying to a cloud environment. The reality is that a legacy environment comes with well-established operational and infrastructure standards and processes. A major challenge you’ll face as you embrace a cloud-native microservices approach is the level of operational readiness and skilled resources needed to support the new continuous delivery processes. Building out the target environment as part of the incremental journey will give the team tasked with managing it time to adjust before any microservices-driven projects even begin.

Getting to the cloud at all is a good thing in itself, giving you quick access to new services, platforms and toolsets. Once you have a cloud-native infrastructure in place, you will be operating your business on a modern application foundation that supports vastly improved innovation, continuous improvement, faster development cycles, agility and flexibility, just as the Department for Work and Pensions recently experienced. If you want to learn more about their modernisation journey, the largest of its kind in Europe, be sure to sign up for our live discussion with Oracle on Tuesday 12th October at 10am. If you can't join us live, fear not: an on-demand version will also be available after the event.