Migrating from Monolith to Microservices: A Software Architect's Guide
Many organizations have large, complex monolithic applications powering key business functions. As these applications age, they become challenging to maintain, scale, and extend. Microservices architectures have emerged as a popular way to overcome these challenges by breaking a monolith into a suite of independently deployable services.
However, migrating from a monolith to microservices is complex. There are many technical and organizational considerations. In this blog post, I'll provide a practical, step-by-step guide to approaching such a migration project. I will cover:
Understanding service boundaries
Incrementally extracting services
Handling data and databases
Managing organizational changes
Mitigating risk
Let’s explore each of these topics in detail.
Understanding Service Boundaries
The first step is analyzing the monolith’s domain model and architecture to identify, map, and document the various problem areas and current capabilities. Look specifically for:
Module and component boundaries
Silos of functionality
Clean versus messy areas of code
Data models and storage schemes
Group associated features and data models into tentative service contexts and domains. High cohesion within services and loose coupling between services should drive this decomposition. Refine the decomposition through discussions across teams to arrive at the initial candidate microservices.
Document the most sensible technical boundaries for each microservice as well as the interdependencies between services. Avoid overly granular services as well as services that try to encapsulate too many functions or data models. Useful service-boundary and domain-analysis tools include Context Mapping, Simon Brown's C4 Model, and Domain-Driven Design.
Incrementally Extracting Services
Rather than a risky “big bang” rewrite, we can mitigate complexity by using the strangler migration pattern when transitioning from monolith to microservices. This pattern provides a gradual, low-risk approach.
The idea is to slowly strangle the legacy monolithic application by incrementally replacing pieces of its functionality with new microservices over time. We set up the microservices to interoperate with and ultimately replace portions of the monolith.
Concretely, here is how strangler migration works:
Build new microservices that perform a subset of the monolith’s original capabilities using modern languages, frameworks, and data stores.
Expose APIs from the new microservices that are as compatible as possible with the monolith’s existing interfaces.
Redirect a portion of client traffic from the monolith to the microservices using routing rules.
As more microservices come online, route an increasing percentage of requests away from the legacy application, which “strangles” it.
Eventually decommission the monolithic application after it is completely replaced by the new suite of microservices.
This technique allows the gradual transition of functionality at low risk. The old and new systems run in parallel during migration. If any part of the new architecture fails, we route traffic back to the monolith. This style of incremental adoption isolates risk and issues.
Strangler migration thereby gives us a pathway to realize the benefits of microservices without disrupting existing systems until the organization is ready. It mitigates the dangers of a “big bang” rewrite when transitioning away from monolithic architectures.
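The routing step at the heart of strangler migration can be sketched as a thin facade that splits incoming traffic between the legacy monolith and a new microservice. The sketch below is a minimal, non-production illustration: the order-lookup endpoint, the backend function names, and the 10% rollout fraction are all hypothetical, and real systems would typically do this at the load balancer or API gateway.

```python
import random

# Hypothetical backends; in practice these would be HTTP calls to
# the legacy monolith and to the new microservice, respectively.
def monolith_get_order(order_id):
    return {"order_id": order_id, "source": "monolith"}

def microservice_get_order(order_id):
    return {"order_id": order_id, "source": "orders-service"}

class StranglerFacade:
    """Routes a configurable fraction of requests to the new service."""

    def __init__(self, rollout_fraction=0.1):
        self.rollout_fraction = rollout_fraction

    def get_order(self, order_id):
        # Send a percentage of requests to the new microservice;
        # everything else still hits the monolith.
        if random.random() < self.rollout_fraction:
            return microservice_get_order(order_id)
        return monolith_get_order(order_id)

facade = StranglerFacade(rollout_fraction=0.1)
response = facade.get_order(42)
```

Raising `rollout_fraction` toward 1.0 gradually starves the monolith of traffic; dropping it back to 0.0 is the instant rollback path if the new service misbehaves.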
Handling Data and Databases
An immense challenge in any monolith-to-microservices migration is handling data persistence. Most legacy applications are powered by giant, shared databases, so database decomposition is necessary. But breaking down databases has major implications for foreign keys, transactions, data synchronization, reporting, and analytics.
Some strategic patterns for data migration help, including:
Maintaining legacy databases read-only as a reporting system of record during and after migration
Leveraging events and event sourcing to propagate data changes across services
Using Bounded Context Mapping to determine how data models break down across services
Employing sagas to maintain data consistency across services
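The saga pattern in the last bullet replaces a distributed transaction with a sequence of local transactions, each paired with a compensating action that undoes it if a later step fails. A minimal sketch follows; the order-placement steps are hypothetical, and a production saga would persist its progress and call real services.

```python
class SagaError(Exception):
    pass

def run_saga(steps):
    """Run (action, compensation) pairs in order; on any failure,
    run the compensations of completed steps in reverse order."""
    completed = []
    for action, compensation in steps:
        try:
            action()
            completed.append(compensation)
        except Exception:
            for undo in reversed(completed):
                undo()
            raise SagaError("saga aborted and compensated")

# Hypothetical order-placement saga spanning three services; the log
# stands in for the side effects of each local transaction.
log = []
steps = [
    (lambda: log.append("reserve-inventory"), lambda: log.append("release-inventory")),
    (lambda: log.append("charge-payment"),    lambda: log.append("refund-payment")),
    (lambda: log.append("create-shipment"),   lambda: log.append("cancel-shipment")),
]
run_saga(steps)
```

The trade-off is eventual rather than immediate consistency: between a failure and its compensation, other services may briefly observe intermediate state.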
There are also some new technologies that simplify cloud-native data management across microservices like distributed SQL engines and multi-model databases. The key is choosing the data access patterns that balance consistency, availability, correctness, and complexity based on the specific microservices being built.
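Event-driven data propagation, mentioned above, can be sketched with a tiny in-memory event bus: one service publishes a domain event, and other services update their own local copies of the data. This is a toy illustration only; the event name, services, and payload shape are hypothetical, and a real deployment would use a durable broker such as Kafka.

```python
# Minimal in-memory event bus illustrating event-driven data
# propagation between services (a stand-in for a real broker).
class EventBus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event_type, handler):
        self.subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers.get(event_type, []):
            handler(payload)

bus = EventBus()

# Hypothetical: the billing service keeps its own copy of customer
# emails, refreshed whenever the customer service announces a change.
billing_emails = {}
bus.subscribe(
    "CustomerEmailChanged",
    lambda event: billing_emails.update({event["customer_id"]: event["email"]}),
)

bus.publish("CustomerEmailChanged",
            {"customer_id": 7, "email": "new@example.com"})
```

Each service owning a local, eventually consistent copy of the data it needs is what lets the shared database finally be split apart.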
Managing Organizational Changes
Such a profound technical transformation requires meticulous organizational realignment. Development teams must reorganize around the new microservices, requiring new team structures, roles, processes, KPIs, and culture changes. Existing ops teams also need to develop automation skills to manage highly dynamic infrastructure.
Create separate teams accountable for each microservice, including developers, SREs, business analysts, and QA engineers. Make them autonomous and cross-functional within each microservice context. Implement culture initiatives as needed to foster autonomy, innovation, and productivity across teams.
IT leadership must champion the migration through extensive internal marketing campaigns. User experience designers should craft customer journey roadmaps based on the microservices available during intermediate migration states. Executives need to diligently oversee the significant coordination efforts across impacted teams.
Mitigating Risk
Lastly, putting extensive risk mitigation scaffolding in place is paramount before committing fully to such a disruptive endeavor. Techniques like canary deployments, extensive monitoring, circuit breakers, blue-green deployments, and chaos engineering tests help control risk. Maintain legacy systems until microservices reach feature and scale parity. Leverage containerization, immutable infrastructure, and Infrastructure as Code for production services to prevent configuration drift.
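Of the techniques listed, a circuit breaker is simple enough to sketch: a wrapper that stops calling a failing downstream service after a threshold of consecutive errors, failing fast instead of piling up timeouts. This is a minimal, non-production illustration; the threshold is arbitrary, and real breakers (e.g. Resilience4j or Envoy's outlier detection) also "half-open" after a cooldown to probe for recovery.

```python
class CircuitOpenError(Exception):
    pass

class CircuitBreaker:
    """Trips open after `max_failures` consecutive errors, shielding
    callers from a downstream service that keeps failing."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, fn, *args, **kwargs):
        if self.failures >= self.max_failures:
            # Open circuit: refuse the call instead of waiting on
            # a service we already believe is down.
            raise CircuitOpenError("circuit open; failing fast")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # any success resets the failure counter
        return result
```

During a migration this matters doubly: a breaker around each new microservice keeps a struggling extraction from dragging down the still-running monolith beside it.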
Architect a rollback plan even as services cut over, at least until all microservices are deployed in production and stable. Allocate appropriate migration timelines and milestone quality bars accounting for complexity, competing priorities, and other organizational constraints. Bring on skilled delivery partners if necessary to complement internal skills and capacity.
Conclusion
Migrating from monolithic architectures to microservices enables key benefits like scalability, feature velocity, and resilience. However, realizing these benefits necessitates meticulous technical and organizational realignment: incremental service extraction, careful data migration, organizational change management, and extensive risk-mitigation guardrails. With a holistic migration approach, experienced software architects can successfully guide even the most complex Java monoliths to the world of cloud-native microservices.