The 80/20 principle

(Part of my collection of Universal Principles of Software System Design)

Also known as the Pareto principle.

A well-known formula, applicable to many areas. It is probably best known in the form: 20% of the effort produces 80% of the job to be done (or: the last 20% will take 80% of the effort).

In IT, the principle also applies to requirements versus functionality: 20% of the requirements determine 80% of the architecture. That 20% of the requirements are the important ones for the core construction of a solution.

Focusing on the important 20% of the requirements when determining the architecture shrinks the set of options you need to consider seriously and helps you prioritize them. It takes the experience of the architect to determine which of the requirements are the important ones.

Literature: The 80/20 Principle by Richard Koch.

An example – airline reservation system.

The first time I (unconsciously) applied the 80/20 rule was in my early days as an architect. I was working with a team of architects on application infrastructure for completely new web applications. A wide variety of applications was planned to run on this infrastructure, but it was not yet clear what the characteristics of all these applications were, and what this generic hosting solution would have to cater for.

So we decided to walk through the known use cases for the different applications.

We worked our way through the main use cases for four of the tens of applications. During the fourth we could not come up with any additional requirements for the application infrastructure. We realized that the rest of the applications would be variations of the ones we had already looked at. We had our 80% from looking at 20%.

GSE Architecture workgroup 25 September 2019 on Containers and Z

Somewhat dated news, but I wanted to share it anyway because I thought it was very interesting. On 25 September the GSE NL architecture workgroup gathered at Belastingdienst in Apeldoorn.

Frank van der Wal from IBM presented IBM’s Container strategy for Z, after the recent announcements around Z Container Extensions.

Mattijs Koper from Belastingdienst shared the advancements and views in Belastingdienst on containerization on Z.

Both talks ignited very interesting discussions on this important development.
Presentations attached: Belastingdienst, IBM_Containers_and_z

By the way… everyone is welcome to join the GSE NL Architecture workgroup. 
Please follow this link to register (GDPR-friendly): http://eepurl.com/bUu6cz – use the checkbox “Architect”. You will then receive updates and announcements from the workgroup.

A very short summary of replication solutions (for Db2 on z/OS)

Last week I gave a short presentation on my experience with replication solutions for Db2 on z/OS. The pictures and text are quite generic, so I thought it might be worthwhile sharing the main topics here.

The picture below summarizes the options visually:

  • Queue replication: synchronizes tables. The capture process reads the Db2 transaction log and puts the updates for tables for which a “subscription” is defined on a queue. On the apply side, the tool retrieves the updates from the queue and applies them to the target database.
  • SQL replication: also synchronizes tables. In this case the capture process stores the updates in an intermediate or staging table, from which the apply process takes the updates and applies them to the target tables.
  • Data Event Publishing: takes the updates to the tables for which a subscription is defined and produces a comma-delimited or XML message from them, which is put on a queue. The consumer of the message can be any user-defined program (a small consumer sketch follows this list).
  • Change Data Capture provides a flexible solution that can push data updates to multiple forms of target media, whether tables, messages or an ETL tool.
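
Since Data Event Publishing leaves the consumer entirely up to you, here is a minimal sketch of what such a consumer could look like, using plain JMS to read comma-delimited change messages from a queue. The connection factory setup, the queue name and the message layout are assumptions for illustration, not the actual Event Publishing configuration.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;

// Illustrative consumer of Data Event Publishing output.
// Assumes the publication was configured for comma-delimited messages and
// that a JMS ConnectionFactory for the queue manager is available (how you
// obtain it is environment-specific and not shown here).
public class ChangeMessageConsumer {

    public static void consume(ConnectionFactory factory, String queueName) throws Exception {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));

            // Drain the queue; stop when nothing arrives within 5 seconds.
            while (true) {
                Message message = consumer.receive(5000);
                if (!(message instanceof TextMessage)) {
                    break;
                }
                // Hypothetical layout: operation, table name, then the column values.
                String[] fields = ((TextMessage) message).getText().split(",");
                System.out.printf("operation=%s table=%s columns=%d%n",
                        fields[0], fields[1], fields.length - 2);
            }
        } finally {
            connection.close();
        }
    }
}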

What was functionally required by the team I was discussing with:

  • A lean operational database for the primary processes.
  • Ad-hoc and reporting queries on archive tables, where data can be kept over several years. The amount of data is relatively large: the solution should support tens to hundreds of millions of database updates per day, with a peak of tens of millions in an hour.

A solution should then provide these capabilities:

  • Replicate the operational database to an archive database.
  • Keep the archive data very current, demanding near-real-time synchronization.

So we focused on the Queue Replication solution.

The target DBMS for this solution can be any of a number of supported relational database management systems, such as Oracle, Db2 and SQL Server. Furthermore, our experience shows it supports high volumes in peak periods: millions of rows inserted or updated in a short period of time. Latency can remain within seconds, even in peak periods – this does require tuning of the solution, such as spreading messages over multiple queues. Lastly, for selected tables you can specify suppress deletes, which allows historical data to build up in the target.
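
To illustrate the effect of suppress deletes conceptually: the apply side simply does not propagate delete operations, so rows that are removed from the lean operational table remain available in the archive. A much-simplified JDBC sketch of that decision follows, against a hypothetical ARCHIVE.ORDERS table – the real Q Apply program handles this internally, this is only to show the idea.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Conceptual illustration only: the real Q Apply process does this internally.
// The ARCHIVE.ORDERS table and the (operation, orderId, payload) shape of a
// change record are assumptions made for this sketch.
public class SuppressDeleteApply {

    public static void apply(Connection archive, String operation,
                             long orderId, String payload) throws SQLException {
        if ("DELETE".equals(operation)) {
            // Suppress deletes: the row disappears from the lean operational
            // table, but the archive keeps it, so history builds up over time.
            return;
        }
        // INSERT or UPDATE: store the latest image of the row in the archive.
        String upsert =
            "MERGE INTO ARCHIVE.ORDERS t " +
            "USING (VALUES (CAST(? AS BIGINT), CAST(? AS VARCHAR(4000)))) AS s (ORDER_ID, PAYLOAD) " +
            "ON t.ORDER_ID = s.ORDER_ID " +
            "WHEN MATCHED THEN UPDATE SET PAYLOAD = s.PAYLOAD " +
            "WHEN NOT MATCHED THEN INSERT (ORDER_ID, PAYLOAD) VALUES (s.ORDER_ID, s.PAYLOAD)";
        try (PreparedStatement ps = archive.prepareStatement(upsert)) {
            ps.setLong(1, orderId);
            ps.setString(2, payload);
            ps.executeUpdate();
        }
    }
}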

There are a few concerns in the Queue Replication solution:

  • Data model changes will require coordinated changes in the source data model, the Queue Replication configuration and the target data model.
  • Very large transactions (programs not committing often enough) can be a problem for Queue Replication, and are a very bad programming practice anyway; a sketch of the usual remedy follows below.
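
On that last point, the usual remedy is to have programs commit at a regular interval instead of holding one enormous unit of work, so the capture process never has to buffer a huge transaction. A minimal JDBC sketch of that pattern – the table name and commit interval are made up for the example:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Illustrative only: commit in chunks so the replication capture process
// never has to handle one enormous unit of work.
public class ChunkedLoader {

    private static final int COMMIT_INTERVAL = 1000; // rows per unit of work (assumption)

    public static void load(Connection conn, List<String> values) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO APP.STAGING_TABLE (PAYLOAD) VALUES (?)")) {
            int inFlight = 0;
            for (String value : values) {
                ps.setString(1, value);
                ps.addBatch();
                if (++inFlight == COMMIT_INTERVAL) {
                    ps.executeBatch();
                    conn.commit();   // keep units of work small
                    inFlight = 0;
                }
            }
            if (inFlight > 0) {
                ps.executeBatch();
                conn.commit();
            }
        }
    }
}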

Hope this helps.

An approach to settling technical debt in your application portfolio

A short summary of some key aspects of an approach to fixing the technical debt in your legacy application portfolio.


Risks of old technology in your software portfolio typically are:

  • The development and operations teams have little or no knowledge of the old technologies and/or programming languages.
  • Program sources have not been compiled for decades; modern compilers cannot handle the old program sources without (significant) updates*.
  • Source code for runtime programs is missing, or the version of the source code is not in line with the version of the runtime. The old technology of static calls (every called program is included statically in the runtime module) makes things even more unreliable.
  • Programs use deprecated or undocumented low-level interfaces, making every technology upgrade a risky operation because it may break these interfaces.

A business case for a project to update your legacy applications is based on:

  • A quick assessment of the technical debt in your application portfolio, in technical terms (what technologies), and in volume (how many programs).
  • An assessment of the technical debt against the business criticality and application lifecycle of the applications involved.
  • An assessment of the technical knowledge gap in your teams in the area of the technical debt.

Then, how to approach a legacy renovation project:

Take an inventory of your legacy. With the inventory, make the business risk explicit for every application, in the context of the expected application lifecycle and the criticality of the application. Clean up everything that is not used. Migrate the applications that are strategic.

Make an inventory of the artefacts in your application portfolio:

  • Source code: which old-technology source programs do you have in your source code management tools.
  • Load modules: which load modules do you have in your runtime environment, and in which libraries do these reside.
  • Runtime usage: which load modules are actually used, and by which batch jobs or application servers. (A small sketch of cross-referencing these inventories follows below.)
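
A trivial way to turn these three inventories into actionable lists is to cross-reference them. The sketch below assumes each inventory has been exported to a plain text file with one name per line; the file names and the exports themselves are assumptions for illustration.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

// Cross-references three hypothetical inventory exports (one name per line):
//   sources.txt      - program names found in source code management
//   loadmodules.txt  - load modules found in the runtime libraries
//   used.txt         - load modules observed in actual batch/online usage
public class PortfolioInventory {

    public static void main(String[] args) throws IOException {
        Set<String> sources = read("sources.txt");
        Set<String> loadModules = read("loadmodules.txt");
        Set<String> used = read("used.txt");

        // Load modules with no matching source: a rebuild/recompile risk.
        Set<String> missingSource = new HashSet<>(loadModules);
        missingSource.removeAll(sources);

        // Load modules never observed in use: candidates for clean-up.
        Set<String> unused = new HashSet<>(loadModules);
        unused.removeAll(used);

        System.out.println("Load modules without source : " + missingSource.size());
        System.out.println("Load modules never used     : " + unused.size());
    }

    private static Set<String> read(String file) throws IOException {
        return new HashSet<>(Files.readAllLines(Path.of(file)));
    }
}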

Consult the business owners of the applications. You may find they do not even realize that they own the application, or that there is such a risk in their application. The application owner must then decide to invest in updating the application, to expedite its retirement, or to accept the risk. In highly regulated environments, and for business-critical applications in general, the risks described above are seldom acceptable.

Next, unclutter your application portfolio. Artefacts that are no longer used must be removed from the operational tools, throughout the entire CI/CD pipeline. It is fine to move things to some archive, but they must be physically removed from your source code management tools, your runtime libraries, your asset management tools, and any other supporting tool you may have.

Then, do the technical migration for the remaining applications. If the number of applications that must be updated is high, organisations often set up a “migration factory”. This team combines business and technical expertise, and develops tools and methodologies for the required technology migrations. Experience shows that more than 50% of the effort of such migrations will be in testing, and possibly more if test environments and test automation for the applications do not exist.

*Most compilers in the 1990s required modifications to the source programs to make them compilable. The runtime modules of the old compiler, however, kept functioning, and many sites chose not to invest in the recompilation and testing effort. Nowadays we accept that we have to modify our code when a new version of our compiler or runtime becomes available. For Java, for example, this has always been a pain, and it is accepted. For the mainframe, backwards compatibility has always been a strong principle, which has its advantages, but certainly also its disadvantages. The disadvantage of being an obstacle to technological progress, in other words the building up of technical debt, is often severely underestimated.

Billy Korando on J9 and ways to optimize Spring Boot applications

Last week I attended two refreshing talks by Billy Korando. Billy is a developer advocate working for IBM, and an expert in Java.

The first talk was on Eclipse OpenJ9, a “lightweight”, low-footprint open-source JVM implementation by IBM (J9 has nothing to do with Java 9). See more on this in the article on his blog.

The second talk was on optimizing Spring Boot applications. This talk can be viewed on Vimeo.

He runs a great blog.
