How traditional application architectures can benefit from microservices innovations

What we can already learn from microservices

The jury is still out on the best use cases for microservices, and on the depth to which the approach should be followed. While the dust settles, however, we can already learn from the innovations that microservices introduce and evaluate how we can benefit from them in our more traditional application architectures.
For this discussion I will use Martin Fowler's article on his site as the basis for the definition, or rather characterisation, of what a microservices architecture comprises. I will then investigate how we can apply those characteristics to traditional applications.
As a reference "traditional" application I will use a traditional mainframe CICS/Batch/COBOL/DB2 application, and see how we can apply microservices principles to this traditional application architecture (and where we sometimes already do).

Monolithic or bad practice architecture – a side step

In the microservices architecture overviews such as Fowler’s, a model monolithic application is described as the counterpart to the microservices architecture.
The “reference” monolithic application used as an illustration incorporates a very large number of bad design practices. A significant part of the design flaws in this reference monolith could have been addressed by applying existing best practices, such as better decoupling, better modular design and more. Even the original Netflix application that Meshenberg describes, which seems to have been one big Java war file comprising many jars, is an example of extraordinarily bad design.
In short, this reference monolith is a very badly designed application. The danger of using such an ultra-bad example is that the conclusion may be drawn that any application architecture that is not microservices carries the disadvantages of this monolithic architecture.
Fowler’s characteristics of microservices
  • Componentization via Services
  • Organized around Business Capabilities
  • Products not Projects
  • Smart endpoints and dumb pipes
  • Decentralized Governance
  • Decentralized Data Management
  • Infrastructure Automation
  • Design for failure
  • Evolutionary Design

Componentization via Services

With microservices, componentization (a component being a unit of software that is independently replaceable and upgradeable) is typically accomplished by breaking up the problem space into services, with the main aim of making them independently deployable.
For legacy Java applications, for example, this is a change: componentisation was mostly done by separating logic into packages, which ended up in separate jar files that had to be built together into a war or ear and deployed as a whole (as in the Netflix monolith discussed above).
In our reference application with COBOL we do not have this problem (rather the opposite: it is often problematic to deploy a full application when needed, such as when initiating a new test environment). Separate COBOL programs can be created for individual services, whether batch or online, and these can be deployed separately. They can even have multiple interfaces: a service interface and a local (optimised, cross-memory) interface, used as appropriate for the required non-functionals. Best practice tells us that some applications may call each other locally – we allow this to cater for operational efficiency – but other ‘foreign’ applications may not. Thus we have two formal interfaces to a service: a local, binary interface, and an external, logical interface (the latter XML/SOAP or REST).
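As a minimal sketch of this dual-interface idea, here is the same structure in Java, with hypothetical names (AccountService, AccountServiceRestFacade): one component, two formal entry points, the local one for friendly callers and the facade for foreign requesters.

```java
// Local, binary interface: 'friendly' callers in the same address space
// invoke this directly (the COBOL analogue is a cross-memory call).
interface AccountService {
    String openAccount(String customerId, String accountType);
}

// External, logical interface: 'foreign' requesters reach the same logic
// only through a service facade (SOAP/REST in the COBOL example).
class AccountServiceRestFacade {
    private final AccountService service;

    AccountServiceRestFacade(AccountService service) {
        this.service = service;
    }

    // Handle an external request; transport, message parsing and security
    // would sit in front of this in the real middleware.
    String handleOpenAccountRequest(String customerId, String accountType) {
        return service.openAccount(customerId, accountType);
    }
}
```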
We do need to put some guidelines in place to ensure we cut up the application modules in the right way, but that is a topic we'll address next.

Set up around business capabilities

As Fowler indicates, monolithic applications can be modularised around business capabilities too. In traditional application development organisations this is a best practice that many companies have applied already. The days of development departments and separate maintenance departments were gone long before agile and DevOps entered the arena.
The problem with this setup in traditional applications, however, and I would argue this holds for microservices as well, is that once legacy is created, knowledge of the legacy modules (whether services or COBOL programs) and the intricacies of their functioning resides only with the insiders who have worked on them.
So indeed, we organise around business capabilities. But we should make sure that knowledge about the services is managed.

Products, not projects

The key here is that the teams that developed a service remain responsible for running it. Applications are no longer thrown over the fence to a maintenance department, as was common organisational practice in the past.
Fowler argues that this is easier to do with the smaller granularity of services in microservices architectures.
But with our traditional application architecture, too, there is no technical limitation preventing application of this principle. In traditional application organisations this principle is carried out through the adoption of agile methods and DevOps approaches.

Smart endpoints and dumb pipes

This microservices principle is a reaction to the often-failed architecture concept of the ESB.
The idea is to have lightweight communication: either simple HTTP request/response, or lightweight messaging where an assured-delivery asynchronous protocol is needed.
To apply these principles we have two types of interfaces: local interfaces for ‘friendly’ callers (there is nothing smart between the caller and the called program), and SOAP- or REST-based services for ‘foreign’ requesters.
The only thing our middleware takes care of is a secure transport layer and conversion between technologies where needed (load balancing, firewalls, SSL termination/initiation, and so on).
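As a sketch of what a dumb pipe amounts to in practice, here is a plain HTTP request/response call in Java (the endpoint URL and payload are hypothetical): the request goes straight to the owning service, with no ESB routing or transformation logic in between.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DumbPipeClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // A plain request to a hypothetical account-opening endpoint.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://accounts.example.internal/account-opening"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"customerId\":\"C123\",\"accountType\":\"SAVINGS\"}"))
                .build();

        // The smartness (validation, business rules) lives in the endpoint,
        // not in the pipe; the pipe just transports.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```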

Decentralised governance

Traditional applications are historically under centralised control. Only culture, not technology, prevents adopting a decentralised governance model for traditional applications.
We can develop applications decentrally, using the tools of our choice, interacting through open, published interfaces. As long as we apply principles like loose coupling and cohesiveness, no technical construct stands in the way of applying this in the traditional application space.
Another question is whether we should want totally decentralised governance at all. There are benefits in cost and operational efficiency to be gained from centralised governance. Even at Netflix, a central team puts in place frameworks for all DevOps teams to use in their development efforts.
What we need centrally is the governance that allows the decentralised teams to work in the highly independent way we are aiming for here. But some rules and guidelines are needed to prevent the pieces from failing to work together, working against each other, or not working at all. These would include guidelines on the way we chop up the problem space, and on the way we then integrate the disparate pieces. In technical terms: service contracts (what services expect from each other), and how they interact with each other technically.
For our COBOL example: the Account Management team, responsible for the Account Opening service, agrees with service requesters what the Account_Opening service looks like. In technical terms, this service can then be called through an exposed web service interface – or, for a friendly application, through the local interface.
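A minimal sketch, in Java and with hypothetical field names, of what such an agreed contract could look like; the team publishes the contract, never the implementation behind it.

```java
// The published Account_Opening contract: the request and response shapes
// are what the requesters and the Account Management team agree on.
public interface AccountOpeningContract {

    record AccountOpeningRequest(String customerId, String accountType) {}

    record AccountOpeningResponse(String accountNumber, String status) {}

    AccountOpeningResponse openAccount(AccountOpeningRequest request);
}
```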
Another issue we want to address with some centralisation of governance is the lifecycle of services. In large organisations you often find that old services (or service versions) have to be maintained by a team because some consumers cannot be moved to a newer version of the service, and the service cannot be sunset altogether. This leads to a proliferation of services and an uncontrollable maintenance effort.

Decentralized data management

Decentralised data management is a topic where complex trade-offs need to be made between simplification of the programming model, data consistency and transactional assurance. Martin Fowler indicates in his article that the jury is still out.
Meshenberg, in his account of what is done at Netflix, likewise indicates that decentralised data is attractive, the main driver having been multi-regional partitioning; but for the Billing application, where money is involved, different solutions are used than for the core Netflix engine itself. Also, to “protect” the database, a seemingly complex layer of caching infrastructure was put in place at Netflix to achieve the required scalability and performance.
All this seems to indicate that the virtues of centralised data management should not be thrown overboard while the jury is still out on the use cases for distributed data models.
We are now a few years past the appearance of the article, and saying that the dust around use cases for the distributed data management model has still to settle is an understatement.

Infrastructure automation

The level of infrastructure automation that Netflix has achieved is definitely exceptional, and something traditional systems can learn from.
In traditional application environments this was never deeply invested in.
Technologies are now emerging that bring these traditional platforms to a state where they can benefit from (ultra-)fast provisioning of infrastructure.
The maturity of the technologies supporting this, however, is far behind that of the technologies available for microservices.
Suppliers and customers should be expected to invest significantly in this area.
In our mainframe example we see suppliers like IBM, CA and others invest in this space, but at the time of writing I would question whether it is enough.

Design for failure

Netflix actively runs a service that kills servers to test the resilience of applications – they call it Chaos Monkey.
I believe this is a great idea that should be embraced broadly for any business-critical application. Test automation has become mainstream now, but this approach of also test-automating resilience could be called revolutionary.
The concept is that robots are configured to go about killing processes in the application stack, to validate whether applications can survive such breakages.
I have never seen such an approach at the clients I have worked with, however. An area to explore and develop: disaster-tolerance test automation.
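To make the concept concrete, here is a toy sketch in Java of such a robot (not Netflix's Chaos Monkey itself; the target process names are hypothetical): periodically pick a random process from a candidate list, kill it, and observe whether the stack recovers.

```java
import java.util.List;
import java.util.Random;

public class ChaosRobot {
    private static final Random RANDOM = new Random();

    public static void main(String[] args) throws InterruptedException {
        // Hypothetical command names of the processes under test.
        List<String> targets = List.of("account-service", "billing-service");

        while (true) {
            String victim = targets.get(RANDOM.nextInt(targets.size()));
            ProcessHandle.allProcesses()
                    .filter(p -> p.info().command()
                            .map(cmd -> cmd.contains(victim)).orElse(false))
                    .findAny()
                    .ifPresent(p -> {
                        System.out.println("Killing " + victim + " (pid " + p.pid() + ")");
                        p.destroy(); // the resilience test: does the stack survive?
                    });
            Thread.sleep(60_000); // one breakage per minute
        }
    }
}
```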

Evolutionary design

I would rephrase this as evolutionary realisation, where realisation includes both the option to redo the internal structure of the service implementation (the design) and the ability to enhance the functionality of the service.
I am not sure whether this is Fowler's intent, but in his article he seems to talk only about the design: the internal structure of the service.
When we apply our decoupling rules well, this opens up possibilities for evolutionary design.
We can now hide our service implementations from our service consumers much better, allowing for an agile evolution of service functionality.
Combining this with an agile organisation of the DevOps teams (decentralised governance) provides an evolutionary realisation of our services and business functionality.
In our COBOL example, we have taken care of this by strictly living up to the decoupling rules described above.
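A minimal Java sketch of this idea, with hypothetical names: consumers depend only on the published contract, so the realisation behind it can be redesigned (or re-platformed) without the consumer noticing.

```java
public class EvolutionaryRealisationSketch {

    // The stable, published contract.
    interface AccountOpening {
        String openAccount(String customerId);
    }

    // Original realisation, e.g. a thin wrapper around a legacy program.
    static class LegacyAccountOpening implements AccountOpening {
        public String openAccount(String customerId) {
            return "ACC-LEGACY-" + customerId;
        }
    }

    // Evolved realisation with a redesigned internal structure; the
    // contract, and therefore the consumer, is unchanged.
    static class RedesignedAccountOpening implements AccountOpening {
        public String openAccount(String customerId) {
            return "ACC-V2-" + customerId;
        }
    }

    public static void main(String[] args) {
        AccountOpening service = new RedesignedAccountOpening(); // swap freely
        System.out.println(service.openAccount("C123"));
    }
}
```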

A resulting microservices architecture for traditional applications

Our traditional CICS/Batch/COBOL/DB2 architecture now has to adhere to a number of technical architecture principles.
  • Well-defined interactions: foreign interactions use web services (online) or file interchange (batch).
  • Embrace agile and DevOps ways of working and organising teams.
  • Each application is responsible for its own data: account opening data may only be accessed by account opening programs, and billing data only by billing services (see the sketch after this list).
  • Infrastructure provisioning is automated, and infrastructure therefore must be standardised.
  • Application provisioning (deployment) is automated. All application components must be standardised.
  • Organisation: DevOps teams, organised around business capabilities.
  • DevOps teams agree service contracts (not service implementations).
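A minimal sketch of the data-ownership principle referenced above, with hypothetical names: billing code never touches the account tables directly; it reaches account data only through the service published by the owning team.

```java
public class DataOwnershipSketch {

    // Published by the Account Management team; the tables behind this
    // interface are private to that team's programs.
    interface AccountQueryService {
        String accountStatus(String accountNumber);
    }

    static class BillingJob {
        private final AccountQueryService accounts;

        BillingJob(AccountQueryService accounts) {
            this.accounts = accounts;
        }

        void billAccount(String accountNumber) {
            // Allowed: asking the owning service.
            // Not allowed: SELECT ... FROM ACCOUNT_TABLE from billing code.
            if ("OPEN".equals(accounts.accountStatus(accountNumber))) {
                System.out.println("Billing account " + accountNumber);
            }
        }
    }

    public static void main(String[] args) {
        BillingJob job = new BillingJob(acct -> "OPEN"); // stubbed service
        job.billAccount("ACC-1");
    }
}
```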

Closing thoughts: Testing

There are two hefty aspects that are often overlooked in a microservices approach, both related to testing:
  1. How to set up test environments in such a distributed model, and how to ensure testing happens with the right versions of all these flexibly deployed services.
  2. Data management in the distributed data model: how to obtain a consistent test data set.
Food for thought.