GSE Architecture workgroup 25 September 2019 on Containers and Z

Somewhat dated news, but I wanted to share it anyway because I found it very interesting. On 25 September the GSE NL architecture workgroup gathered at Belastingdienst in Apeldoorn.

Frank van der Wal from IBM presented IBM’s Container strategy for Z, after the recent announcements around Z Container Extensions.

Mattijs Koper from Belastingdienst shared the progress and views within Belastingdienst on containerization on Z.

Both talks ignited very interesting discussions on this important development.
Presentations attached: Belastingdienst, IBM_Containers_and_z

By the way… everyone is welcome to join the GSE NL Architecture workgroup. 
Please follow this link to register (GDPR-friendly): http://eepurl.com/bUu6cz – tick the checkbox “Architect”. You will then receive updates and announcements from the workgroup.

A very short summary of replication solutions (for Db2 on z/OS)

Last week I gave a short presentation on my experience with replication solutions for Db2 on z/OS. The pictures and text are quite generic, so I thought it might be worthwhile to share the main topics here.

The options can be summarized as follows:

  • Queue replication: synchronizes tables. The synchronization process on the capture side reads the Db2 transaction log and puts the updates for which a “subscription” is defined on a queue. On the apply side, the tool retrieves the updates from the queue and applies them to the target database.
  • SQL replication: also synchronizes tables. In this case the capture process stores the updates in an intermediate or staging table, from which the apply process takes the updates and applies them to the target tables.
  • Data Event Publishing: takes the updates to the tables for which a subscription is defined, produces a comma-delimited or XML message from them and puts that message on a queue. The consumer of the message can be any user-defined program (a small sketch of such a consumer follows this list).
  • Change Data Capture: a flexible solution that can push data updates to multiple forms of target media, whether tables, messages or an ETL tool.
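
To make the Data Event Publishing pattern a bit more concrete, here is a minimal sketch of a user-defined consumer that parses a comma-delimited change message taken from a queue and applies it to a target. The message layout, queue handling and the target-database calls are assumptions made up for this example, not the actual product format or API.

    # Minimal sketch of a user-defined consumer for Data Event Publishing.
    # Assumption: messages arrive as comma-delimited text in a hypothetical
    # layout "operation,table,key,column=value,...". The target_db object is a
    # stand-in for whatever database access layer you use.
    import csv
    import io

    def parse_change_message(message: str) -> dict:
        """Parse one comma-delimited change message into a dictionary."""
        row = next(csv.reader(io.StringIO(message)))
        operation, table, key = row[0], row[1], row[2]
        columns = dict(field.split("=", 1) for field in row[3:])
        return {"operation": operation, "table": table, "key": key, "columns": columns}

    def apply_change(change: dict, target_db) -> None:
        """Apply a parsed change to the target database (stand-in calls)."""
        if change["operation"] == "INSERT":
            target_db.insert(change["table"], change["columns"])
        elif change["operation"] == "UPDATE":
            target_db.update(change["table"], change["key"], change["columns"])
        elif change["operation"] == "DELETE":
            target_db.delete(change["table"], change["key"])

    # Example message a publisher might produce for a subscribed table:
    example = "UPDATE,CUSTOMER,42,NAME=Jansen,CITY=Apeldoorn"
    print(parse_change_message(example))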

What was functionally required by the team I was discussing with:

  • A lean operational database for the primary processes.
  • Ad-hoc and reporting queries on archival tables, where data in the archive tables can be kept for several years.
  • The amount of data is relatively large: the solution should support tens to hundreds of millions of database updates per day, with a peak of tens of millions in an hour.

A solution should then provide these capabilities:

  • Replicate the operational database to an archive database.
  • Keep the data very current, demanding near-real-time synchronization.

So we focused on the Queue Replication solution.

The Queue Replication solution has a number of strengths:

  • The target DBMS can be any of a number of supported relational database management systems, such as Oracle, Db2 and SQL Server.
  • Our experience shows it supports high volumes in peak periods: millions of rows inserted or updated in a short period of time.
  • Latency can remain within seconds, even in peak periods. This does require tuning of the solution, such as spreading the messages over multiple queues.
  • For selected tables you can specify “suppress deletes”, which allows historical data to be built up in the target.
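
As a purely conceptual illustration of the “spreading over queues” tuning idea, the sketch below assigns each subscribed table to one of several send queues, so that peak volumes are not funnelled through a single queue. The queue and table names are made up; in the real product the queue assignment is part of the replication configuration, not application code.

    # Illustrative only: distribute subscribed tables over a fixed set of send
    # queues so that peak volumes are spread rather than funnelled through one queue.
    # Queue and table names are hypothetical.
    from zlib import crc32

    SEND_QUEUES = ["REPL.SENDQ.1", "REPL.SENDQ.2", "REPL.SENDQ.3"]

    def queue_for_table(table_name: str) -> str:
        """Pick a send queue for a table, using a stable hash."""
        return SEND_QUEUES[crc32(table_name.encode()) % len(SEND_QUEUES)]

    for table in ["CUSTOMER", "ORDERS", "ORDER_LINE", "PAYMENTS"]:
        print(table, "->", queue_for_table(table))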

There are a few concerns in the Queue Replication solution:

  • Data model changes will require coordinated changes to the source data model, the Queue Replication configuration and the target data model.
  • Very large transactions (not committing often enough) may be a problem for Queue Replication, and are also a very bad programming practice; see the sketch after this list.
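
The usual remedy for the second concern is to commit in intervals rather than once at the end of an enormous unit of work. The sketch below shows the pattern; the connection is assumed to follow the Python DB-API (for example via an ODBC or Db2 driver), and the table and column names are placeholders.

    # Sketch: commit every COMMIT_INTERVAL rows so that the replication capture
    # process never has to handle one enormous transaction.
    # The connection object is a DB-API style stand-in; names are examples.
    COMMIT_INTERVAL = 1_000  # commit every 1,000 changed rows

    def archive_rows(connection, rows):
        """Insert rows into an archive table, committing every COMMIT_INTERVAL rows."""
        cursor = connection.cursor()
        for count, row in enumerate(rows, start=1):
            cursor.execute(
                "INSERT INTO ARCHIVE.ORDERS (ORDER_ID, STATUS) VALUES (?, ?)", row
            )
            if count % COMMIT_INTERVAL == 0:
                connection.commit()   # keep units of work small
        connection.commit()           # commit the remainder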

Hope this helps.

An approach to settling technical debt in your application portfolio

A short summary of some key aspects of an approach to fixing the technical debt in your legacy application portfolio.


Risks of old technology in your software portfolio typically are:

  • The development and operations teams have little or no knowledge of the old technologies and/or programming languages.
  • Program sources have not been compiled for decades; modern compilers cannot handle the old program sources without (significant) updates*.
  • Source code for runtime programs is missing, or the version of the source code is not in line with the version of the runtime. The old practice of static calls (that is, including every called program statically in the runtime module) makes things even more unreliable.
  • Programs use deprecated or undocumented low-level interfaces, making every technology upgrade a risky operation that may break these interfaces.

A business case for a project to update your legacy applications should include:

  • A quick assessment of the technical debt in your application portfolio, in technical terms (what technologies), and in volume (how many programs).
  • An assessment of the technical debt against the business criticality and application lifecycle of the applications involved.
  • An assessment of the technical knowledge gap in your teams in the area of the technical debt.

Then, how to approach a legacy renovation project:

Take an inventory of your legacy. With the inventory, for every application make explicit what the business risk is, in the context of the expected application lifecycle and the criticality of the application. Clean up everything that is not used. Migrate applications that are strategic.

Make an inventory of the artefacts in your application portfolio:

  • Source code: which old-technology source programs do you have in your source code management tools.
  • Load modules: which load modules do you have in your runtime environment, and in which libraries do they reside.
  • Runtime usage: which load modules are actually used, and by which batch jobs or application servers. A sketch of cross-referencing these inventories follows this list.
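
Here is a hedged sketch of how the three inventories can be cross-referenced to find clean-up candidates and risks. The CSV file names and the one-name-per-row layout are assumptions, standing in for whatever your source code management, library and usage-monitoring tools can export.

    # Sketch: cross-reference three inventories (source, load modules, runtime usage)
    # to list load modules that are never used, and used modules without matching source.
    # File names and layout are hypothetical stand-ins for your tools' exports.
    import csv

    def read_names(path: str) -> set:
        """Read one name per row from a CSV export."""
        with open(path, newline="") as f:
            return {row[0].strip().upper() for row in csv.reader(f) if row}

    source_members = read_names("source_inventory.csv")       # programs in SCM
    load_modules   = read_names("load_module_inventory.csv")  # modules in runtime libraries
    used_modules   = read_names("runtime_usage.csv")          # modules seen in actual use

    unused_modules = load_modules - used_modules
    missing_source = used_modules - source_members

    print(f"Load modules never seen in use (clean-up candidates): {sorted(unused_modules)}")
    print(f"Used modules without matching source (risk): {sorted(missing_source)}")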

Consult the business owners of the applications. You may find they do not even realize that they own the application, or that there is such a risk in their application. The application owner must then decide to invest in updating the application, expedite the retirement of the application, or accept the risk in the application. In highly regulated environments, and for business-critical applications in general, the risks described above are seldom acceptable.

Next, unclutter your application portfolio. Artefacts that are not used anymore must be removed from the operational tools, throughout the entire CI/CD pipeline. It is OK to move things to some archive, but they must be physically removed from your source code management tools, your runtime libraries, your asset management tools, and any other supporting tool you may have.

Then, do the technical migration for the remaining applications. If the number of applications that must be updated is high, you often see organisations set up a “migration factory”. This team is a combination of business and technical expertise that develops tools and methodologies for the required technology migrations. Note that experience shows more than 50% of the effort of such migrations will be in testing, and it may be more if test environments and test automation for the applications do not exist.

*Most compilers in the 1990s required modifications to the source programs to make them compilable. The runtime modules produced by the old compilers, however, kept working. Many sites chose not to invest in the recompilation and testing effort. Nowadays we accept that we have to modify our code when a new version of our compiler or runtime becomes available. For Java, for example, this has always been a pain in the neck, which is accepted. For the mainframe, backwards compatibility has always been a strong principle. This has its advantages, but certainly also its disadvantages. The disadvantage of being an obstacle to technological progress, or in other words the building up of technical debt, is often severely underestimated.

Go fix it while it ain’t broken yet

Reg Harbeck wrote an excellent article in Destination z, Overcoming IBM Z Inertia, in which he encourages IBM Z (mainframe) users to take action on modernizing their mainframe.

The path of action Harbeck describes is to task new mainframers (RePros) with finding and documenting what the current mainframe solutions in place are expected to do, and with working with the organisation to see what needs to be retired, replaced or contained.

In other words: task a new generation with the mainframe portfolio renewal, and do not leave this to the old generation, who are “surviving until they retire while rocking the boat as little as possible” (hard words from Harbeck, but it is time to get people into action).

In addition to the general approach Harbeck describes, I think it is important to secure senior management support at the highest possible level. Doing so prevents the priority of this program from being watered down too easily by day-to-day priorities, and ensures that perceived or real “impossibilities” and roadblocks are resolutely moved out of the way. So:

  • Make modernization a senior management priority. Separate it organizationally from the KSOR (Keep the Show On the Road) activities, to make sure modernization priorities compete as little as possible with KSOR activities.
  • Appoint a senior management and a senior technical executive with a strong Z affiliation to mentor, support and guide the young team from an organisational and strategic perspective.
  • Have a forward-thinking, strong and respected senior mainframe specialist support the team with education and coaching (not do it for them).

It will be an investment and, according to the “survivors”, it will never be as efficient as before, but one very important thing it will be: fit for the future.

AWS // Zowe?

Diving into AWS, I found the similarity of Zowe to the AWS management tooling striking. The AWS management tooling comprises three parts, as does Zowe, and the three Zowe functions are remarkably similar to their AWS counterparts.

  • AWS Management Console – a secure, easy-to-access, web-based portal. Zowe Application Framework – a web user interface (UI) that provides a virtual desktop containing a number of apps allowing access to z/OS function…
  • AWS CLI – a unified tool to manage your AWS services… Zowe CLI – a command-line interface that lets you interact with the mainframe.
  • AWS SDK – incorporate AWS services into your code. Zowe API Mediation Layer – enables a single point of access to mainframe APIs.

Logic or coincidence?
