Barry is an IT consultant specializing in enterprise integration. In this role, he advises companies and designs solutions that let different business units and platforms communicate with each other. Automating the communication between the various parts of the business allows a company to be more effective and more agile, at a reduced cost of operations.
With a long history in Enterprise Application Integration (EAI), Service-Oriented Architecture (SOA) and Business Process Management (BPM), he has plenty of experience and a rich palette for solution design. With a master's degree in computer science, he also has the technical background to go in-depth into the material.
Barry has worked for companies in many different domains, from the high-tech industry, the energy sector, fashion, and marketing and communication to government, each with its own focus and quirks.
About this blog
On this blog, I’m sharing my thoughts about the IT challenges, cases and trends that I encounter in my line of work as an integration specialist at Rubix.
Comments and opinions expressed are my own.
How the cloud forces you to plan ahead
With cloud migration in full effect, I notice that many of my customers do not have any process in place for capacity planning and cloud contract management. Cloud providers make it easy to scale up and down, but their pricing model encourages you to plan ahead. On-demand prices are higher than on-premise prices, and to reduce costs, long-term reservations are offered. These long-term reservations, however, are not governed yet. Here lies an important task for the architects within the company. What...read more
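To see why the pricing model rewards planning, consider a simple break-even calculation. The prices below are purely hypothetical placeholders, not any provider's actual rates:

```python
# Illustrative break-even calculation for a cloud reservation.
# ON_DEMAND_HOURLY and RESERVED_HOURLY are hypothetical prices;
# substitute your provider's actual rates.
ON_DEMAND_HOURLY = 0.10   # $/hour, pay-as-you-go
RESERVED_HOURLY = 0.06    # $/hour effective rate with a 1-year reservation

HOURS_PER_YEAR = 24 * 365

def yearly_cost(utilization: float) -> tuple:
    """Return (on_demand_cost, reserved_cost) for a given utilization (0..1).

    On-demand is billed only for the hours actually used; a reservation
    is paid for the full year regardless of actual use.
    """
    on_demand = ON_DEMAND_HOURLY * HOURS_PER_YEAR * utilization
    reserved = RESERVED_HOURLY * HOURS_PER_YEAR
    return on_demand, reserved

# The reservation pays off once the instance runs more than
# RESERVED_HOURLY / ON_DEMAND_HOURLY of the time.
break_even = RESERVED_HOURLY / ON_DEMAND_HOURLY
print(f"Reservation pays off above {break_even:.0%} utilization")
```

With these example rates, a workload that runs more than 60% of the time is cheaper on a reservation, which is exactly the kind of forecast that capacity planning should produce.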
Today we will continue our journey to build a fully operational CICD environment for home use. After setting up the artifact repository, we will add the orchestration. The Jenkins server will monitor the source repositories and launch our build jobs. We want our Jenkins server to be part of the Docker composition, so that we can easily start it. This post is part of a series about creating a continuous integration platform for home use: create an artifact repository, configure the artifact repository, secure the artifact repository...read more
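Making Jenkins part of the Docker composition could look something like the fragment below. This is a hypothetical sketch; the service name, ports and volume name are placeholders, not necessarily the ones used in the series:

```yaml
# Hypothetical docker-compose.yml fragment adding Jenkins to the composition.
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"     # web UI
      - "50000:50000"   # inbound agent port
    volumes:
      - jenkins_home:/var/jenkins_home   # persist jobs and configuration across restarts
volumes:
  jenkins_home:
```

Persisting `/var/jenkins_home` in a named volume is what makes the server safe to stop and start with the rest of the composition.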
In this blogpost, we will configure the Nexus repository that we introduced in the previous post. We will create a basic repository setup with three levels: a snapshot repository for our development artifacts that are only for testing, a releases repository for final artifacts that might go to a live environment, and a proxy repository that accesses external repositories in order to integrate them with our own artifacts. A virtual layer is put on top of these: the group repository. This allows us to use fallback rules: if the artifact is...read more
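From a build tool's perspective, the group repository is the only URL a developer needs. As a sketch, a Maven client could be pointed at it with a mirror entry like the one below; the URL and repository name are assumptions that depend on how your Nexus instance is configured:

```xml
<!-- Hypothetical ~/.m2/settings.xml fragment; adjust the URL and
     repository name to your own Nexus setup. -->
<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <!-- Route all requests through the group repository, which applies
           the fallback rules across snapshots, releases and the proxy. -->
      <mirrorOf>*</mirrorOf>
      <url>http://localhost:8081/repository/maven-public/</url>
    </mirror>
  </mirrors>
</settings>
```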
Often when I am working at home, I wish I had a CICD setup similar to the one at my customers. Developing code without a continuous integration platform feels like a big step back. Any self-respecting developer should use CICD, even at home. The only pain is the time needed to set up the applications, which can be significant the first time you do it. In the upcoming posts I will be creating a CICD setup for home use, so that you might go through the steps faster. I will explicitly not choose any development language or platform, as I will be...read more
In order to build our CICD platform, we will start with the creation of an artifact repository. The artifact repository can be used in various locations in the pipelines, has no dependencies itself, and as such it is a great starting point. This repository will hold all the binaries for our project: it will store and distribute the deliverables and all dependencies. This ensures that all developers use the same binaries, and that the exact same binary goes to production. It provides a central location for managing and securing the libraries...read more
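For a Maven project, storing and distributing deliverables through such a repository is typically wired up in the POM. The fragment below is a hedged example; the repository names and URL are placeholders for your own setup:

```xml
<!-- Hypothetical pom.xml fragment; repository ids and the Nexus URL
     are placeholders, not the series' exact configuration. -->
<distributionManagement>
  <snapshotRepository>
    <id>nexus-snapshots</id>
    <url>http://localhost:8081/repository/maven-snapshots/</url>
  </snapshotRepository>
  <repository>
    <id>nexus-releases</id>
    <url>http://localhost:8081/repository/maven-releases/</url>
  </repository>
</distributionManagement>
```

With this in place, `mvn deploy` publishes development builds to the snapshot repository and tagged builds to the releases repository, so the exact same binary that was tested is the one that goes to production.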
Creating a good Docker image is an art. There are no fixed rules that can be applied in every situation. Instead, we need to look at the pros and cons of every decision. We can however provide guidance.
Here are 7 golden rules for Docker images. By following these rules, you can improve the containers you build, making them more reusable, more efficient and more stable.
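As a taste of the kind of guidance involved, the sketch below illustrates a few widely accepted image practices (a pinned, minimal base image; a non-root user; copying only what the container needs; configurable runtime behavior). It is an illustration, not the 7 rules themselves:

```dockerfile
# Hypothetical Dockerfile sketch illustrating common image guidelines.
FROM eclipse-temurin:17-jre-alpine          # pin a specific, minimal base image

# Run as a non-root user for better isolation
RUN addgroup -S app && adduser -S app -G app
USER app

WORKDIR /opt/app
COPY --chown=app:app target/app.jar app.jar # copy only what the container needs

# Make runtime behavior configurable instead of baking it into the image
ENV JAVA_OPTS=""
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar app.jar"]
```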
Component testing is an important protection against regression errors. After every change to your component, you should test its public interfaces in isolation from the environment it runs in. In classic OTAP (the Dutch equivalent of DTAP: development, test, acceptance, production) setups, this can be a pain, but using Docker you can avoid many of the problems by creating a dedicated environment, just for the occasion. Our test strategy consists of just five simple steps for component testing, fully automated using a Jenkins build server: 1) Perform unit tests after compiling your code. During development,...read more
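The teaser only names the first step, so the pipeline below is a generic sketch of how such a strategy might be automated on Jenkins, not the author's exact five steps; stage names, profiles and image tags are placeholders:

```groovy
// Hypothetical declarative Jenkinsfile; every stage body is an assumption.
pipeline {
    agent any
    stages {
        stage('Unit test') {
            steps { sh 'mvn -B clean verify' }                              // compile and unit test
        }
        stage('Build image') {
            steps { sh 'docker build -t myapp:${BUILD_NUMBER} .' }          // package the component
        }
        stage('Start test environment') {
            steps { sh 'docker compose -f docker-compose.test.yml up -d' }  // dedicated, disposable environment
        }
        stage('Component test') {
            steps { sh 'mvn -B -Pcomponent-test verify' }                   // exercise the public interfaces
        }
    }
    post {
        always {
            sh 'docker compose -f docker-compose.test.yml down'             // always tear the environment down
        }
    }
}
```

The `post { always { ... } }` block ensures the disposable environment is removed even when a test stage fails, which is what keeps the setup repeatable.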