Posts Tagged "jenkins"


In this post, we will create a sample Java project to demonstrate our CICD workflow. We will create a git repository, configure the build tools, and add the pipeline script. At the end of this post, we will have a finished sample workflow, and our first artifacts will be in Nexus, ready to be installed.

This post is part of a series about creating a continuous integration platform for home use.

 

  Create an artifact repository

  Configure the artifact repository

  Secure the artifact repository

 Create the Jenkins master

 Add a Jenkins slave

 Creating a sample project.

Preparations for the sample workflow

Let’s start with setting up the tools. Go to the Jenkins configuration > Global Tool Configuration and find the Maven section at the bottom.

Click on the ‘Maven installations’ button to expand the section.

Enter the name ‘Maven_3_5’: we will select the tool later on in the pipeline using this name. We want automatic installation enabled, and we select the most recent version of the 3.5 branch. Selecting an explicit version ensures that you don’t get unexpectedly broken builds because the Maven team pushed a new release that breaks backward compatibility.

Save your config and return to the Jenkins configuration menu.

Once we start building, we will be downloading artifacts from and uploading artifacts to Nexus. For this, we’ll need credentials. In the menu on the left side of your screen, you’ll find the ‘Credentials’ item. Click it so that it expands to show ‘System’. Click ‘System’ and you will navigate to a screen with ‘Global credentials (unrestricted)’ in the center. Click that link to open the screen where you can ‘Add Credentials’.

Create a user account in Jenkins

Select a ‘username with password’ type of account, and provide your credentials. A good description will help you when you need to select the credentials from a long list.

Store your changes and return to the main screen.

Activating plugins

We will be using some plugins that are not installed by default. Go to the ‘Plugin Manager’ screen to install them.

Navigate to the ‘Available’ tab and select “Config File Provider” and “Pipeline Maven Integration”. Make sure to ‘Download now and install after restart’, so that the plugins are active.

Global settings for Maven

Go to ‘Managed Files’ > ‘Add a new config’ to create a global Maven configuration file.

A unique ID will be generated; you should not edit it. Enter the name ‘MyGlobalSettings’ and a comment. Make sure ‘Replace All’ is selected, so that the credentials in the settings file will be overwritten by Jenkins. We want the secure store in Jenkins to hold the credentials: a plain-text settings file like this one should not contain any sensitive information.

Since we have two server declarations in our settings file, we need to add two credential sets. Both instances will use the same credentials: select the nexus credentials we made earlier.

Finally, copy-paste the settings file below into the content box and ‘Submit’ your changes.
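The original file is not reproduced here, so below is a minimal sketch of what the global settings.xml can look like. The cache path, the nexus hostname and port, and the repository names maven-releases and maven-snapshots are assumptions based on the Nexus setup from earlier in this series; adjust them to your own installation.

<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <!-- Shared download cache on the slave, so builds reuse plugin and artifact downloads -->
  <localRepository>/home/jenkins/.m2/repository</localRepository>

  <!-- Placeholder credentials: Jenkins replaces these via the Server Credentials above -->
  <servers>
    <server>
      <id>maven-releases</id>
      <username>dummy</username>
      <password>dummy</password>
    </server>
    <server>
      <id>maven-snapshots</id>
      <username>dummy</username>
      <password>dummy</password>
    </server>
  </servers>

  <!-- Route all downloads through our own Nexus repository -->
  <mirrors>
    <mirror>
      <id>nexus</id>
      <name>Nexus public mirror</name>
      <url>http://nexus:8081/repository/maven-public/</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>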

This file defines some settings for all our maven builds:

  1. It defines a cache folder on the slave where all builds share their plugin and artifact downloads
  2. It defines credentials for two artifact repositories. The passwords here are not used; they are overwritten by Jenkins using the Server Credentials we entered above.
  3. We define a mirror site, so that all downloads pass through our own Nexus repository. Nexus will return our private artifacts or, when it doesn’t have the artifact, search Maven Central for a download and cache it. This behavior is specified in our Nexus setup.

User settings for Maven

The user settings file for Maven is the place where you should put project-specific setup. For me, the global setup is good enough, so my file is mostly empty.

Copy the following content into the file:
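A minimal sketch of such a user settings file; since all the real configuration lives in the global settings, an essentially empty file like this is enough:

<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <!-- Intentionally empty: put project-specific overrides here when you need them -->
</settings>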

There are no server entries in this file, so we don’t need to overrule Server Credentials by ‘Add’-ing them. Just save the config and continue.

Activate the global settings.xml

Go to Global Tool Configuration > Maven Configuration > Default Global Settings Provider and select the config file MyGlobalSettings.

 

Prepare a sample workflow project

In order to verify our setup, we will need something to build. I have chosen a very basic Java program to showcase the build.

Start by creating a git repository on your favorite host, and clone the git repo so that you can start working. Inside the root of the project, we call:

mvn archetype:generate -DgroupId=java_sample -DartifactId=sample -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

This creates a basic java program, with a single unit test. No need to go into an editor yet to write a program.

Add a pom in the root folder:
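A sketch of what this root pom can look like. The groupId, artifactId, version, plugin version and nexus URLs are assumptions, so align them with your own repository setup:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>java_sample</groupId>
  <artifactId>sample-parent</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>

  <!-- Build the generated sample project recursively -->
  <modules>
    <module>sample</module>
  </modules>

  <!-- Where Maven finds all dependencies: our Nexus public group -->
  <repositories>
    <repository>
      <id>maven-public</id>
      <url>http://nexus:8081/repository/maven-public/</url>
    </repository>
  </repositories>

  <!-- Where our own artifacts are uploaded -->
  <distributionManagement>
    <repository>
      <id>maven-releases</id>
      <url>http://nexus:8081/repository/maven-releases/</url>
    </repository>
    <snapshotRepository>
      <id>maven-snapshots</id>
      <url>http://nexus:8081/repository/maven-snapshots/</url>
    </snapshotRepository>
  </distributionManagement>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-release-plugin</artifactId>
        <version>2.5.3</version>
      </plugin>
    </plugins>
  </build>
</project>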

We define two important parts:

  1. In repositories, we define where Maven can find the dependencies it needs to build the project. All downloads will come from this repository. We use our public nexus repository, which searches our releases and snapshots and, if nothing is found there, tries to download from Maven Central.
  2. In distributionManagement, the destinations for our artifacts are defined. When we do a snapshot build, we upload to maven-snapshots in nexus, and when we do a release, the file is uploaded to maven-releases.

Furthermore, we define the folder sample as a module, so that it will get built recursively, and we define the maven-release-plugin. We are not going into the details of setting up the Java project, as that is outside the scope of this post.

Before continuing, you should validate your build locally. Try “mvn install” in your workdir and see what happens. If you get download errors, make sure that nexus resolves to your machine, for example by adding the line “127.0.0.1           nexus” to your hosts file.

Add the following Jenkinsfile:
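The original Jenkinsfile is not included here, so this is a minimal declarative sketch matching the description below. It assumes the withMaven step from the Pipeline Maven Integration plugin we installed earlier:

pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'slave-java-11' }
            steps {
                // The Maven tool and the user settings file both come from the Jenkins config
                withMaven(maven: 'Maven_3_5',
                          mavenSettingsConfig: 'd4b07913-04d5-48c5-9c0e-292b565f152e') {
                    sh 'mvn clean deploy'
                }
            }
        }
    }
}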

In the above sample, replace the ID d4b07913-04d5-48c5-9c0e-292b565f152e with the ID of your second settings file from the Jenkins Config File plugin.

The Jenkinsfile defines your pipeline. We start out by declaring that we will explicitly set the agent in every stage, and our pipeline makes use of the Maven 3.5 tool which we set up previously in the Jenkins tools config.

The build consists of stages: visually separate actions in the sequence that builds and deploys your program. Our pipeline has only one stage for now: the Build stage. For this stage, we need to run on a slave agent that has the label ‘slave-java-11’. A stage can have multiple steps, but we only need one: we use the Maven tool and a config file from the Config File Provider plugin in Jenkins to execute a single shell command, “mvn clean deploy”, which will build, unit-test and upload to Nexus.

You should now have the following structure:
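Roughly like this (the contents of the sample module were generated by the quickstart archetype):

    Jenkinsfile
    pom.xml
    sample/
        pom.xml
        src/main/java/java_sample/App.java
        src/test/java/java_sample/AppTest.java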

Commit and push.

Create your first Jenkins build

  • Create credentials for your Git user.
  • Create a new build job of type ‘Multibranch Pipeline’ for this git repo
  • Link it to your repo

 

Screenshot of a successful build in Jenkins pipeline

Select ‘Scan Multibranch Pipeline Now’ on the root of your project to let Jenkins find all branches and execute a build job for each new branch it finds.

 

I enabled Maven debugging (mvn -X clean deploy) to see which settings.xml files were used. The output shows that both the global settings and the user settings are picked up.

Your first build should now run successfully. This concludes our sample workflow.

 

In our next post, we will start checking the quality of the program using SonarQube.


In this post, we will create a simple Jenkins slave image, capable of compiling Java code. We will register it in the Jenkins master instance that we created during the previous blog post.

 

We will take a docker image that holds a Java environment. On top of that, we will deploy the Jenkins slave binary and configure it to connect to the master. Finally, we add the software needed to execute the jobs it is required to do.

 

Remember the microservice principle: make small containers that can do one job well, not one large container that can do everything.

This post is part of a series about creating a continuous integration platform for home use.

 

  Create an artifact repository

  Configure the artifact repository

  Secure the artifact repository

 Create the Jenkins master

 Add a Jenkins slave

 Creating a sample project.

Register the Jenkins slave

We need to register every new slave in Jenkins before it can connect. Go to ‘Manage Jenkins’, ‘Manage Nodes’ and add a ‘New Node’. Provide the default settings like in the picture below:

Save your configuration. You will get a confirmation screen below. It contains one critical piece of information: the secret key needed to connect the slave to Jenkins. Copy the secret key. We will need it in our slave image.

 

Building the Jenkins slave

Go to your docker-compose folder and create a new subfolder called ‘slave-java-11’. Copy the slave.jar file which you downloaded in the Jenkins master setup blog post into this folder. Create three files: a Dockerfile and two script files, startup.sh and wait-for-it.sh. If you are working on Windows, make sure the line endings of the script files are in Unix mode, or you will get errors at runtime. Copy-paste the content from below:

Dockerfile
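A sketch of what the Dockerfile can look like; the /opt/jenkins paths and the variable names other than jenkins_token are assumptions:

FROM openjdk:11-jdk

# Add the slave binary and our scripts, and make the scripts executable
COPY slave.jar startup.sh wait-for-it.sh /opt/jenkins/
RUN chmod +x /opt/jenkins/startup.sh /opt/jenkins/wait-for-it.sh

# Defaults, overridden later in docker-compose.yml
ENV jenkins_url=http://jenkins-master:8080 \
    jenkins_slave_name=slave-java-11 \
    jenkins_token=changeme

CMD ["/opt/jenkins/startup.sh"]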

You can replace the jenkins_token with the secret key you copied from the slave screen above, but it is not necessary; we will override it in the docker-compose file.

The Dockerfile defines a new image based upon the official openjdk image. This image gives us the build tools we are looking for, and conveniently also includes a Java runtime that allows us to execute slave.jar. It adds the files from our build folder, so that we can use them inside the image, and assigns execution rights to the scripts. Finally, it sets defaults for the environment variables. This is mostly for understanding the image, as we will override the values in docker-compose later on.

startup.sh
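A sketch of the startup script, assuming the paths from the Dockerfile above; slave.jar accepts the -jnlpUrl and -secret options to connect to the master:

#!/bin/sh
# Wait for both Jenkins ports before connecting, to avoid spamming the log
/opt/jenkins/wait-for-it.sh jenkins-master:8080 -t 120
/opt/jenkins/wait-for-it.sh jenkins-master:50000 -t 120

# Start the slave process
exec java -jar /opt/jenkins/slave.jar \
    -jnlpUrl "${jenkins_url}/computer/${jenkins_slave_name}/slave-agent.jnlp" \
    -secret "${jenkins_token}"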

The startup.sh script will be called upon execution of the docker image. We first wait until the two Jenkins ports are available, to avoid busy waiting with a lot of spam in the log file. When the ports are available, we start the slave process.

wait-for-it.sh

We use a wait script from GitHub (under the MIT License) to wait for the availability of a TCP port on the network before we start the slave.

The script above complements the depends_on statement in the docker-compose file. Depends_on only waits until the depended-on container has started, but that container may not yet be ready to receive connections at that point. It is better to wait until the ports are available, so that our connection attempts will at least reach the process.

Your folder structure should now look like this:
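    docker-compose.yml
    slave-java-11/
        Dockerfile
        slave.jar
        startup.sh
        wait-for-it.sh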

 

Edit the docker-compose.yml file and add a service for the slave:
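Something along these lines, indented under the existing services section; the environment variable names must match the ones in your Dockerfile:

  slave-java-11:
    build: ./slave-java-11
    depends_on:
      - jenkins-master
    environment:
      jenkins_url: http://jenkins-master:8080
      jenkins_slave_name: slave-java-11
      jenkins_token: 86f28fafeeb1f4500d546f1957df26718a14fbca244605ea5762da9ad2f721e8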

This is where we will paste the secret key we copied on the slave screen above. Replace the jenkins_token value 86f28fafeeb1f4500d546f1957df26718a14fbca244605ea5762da9ad2f721e8 with your copy.

 

Execute the command docker-compose up in the main folder to build the slave image and start the composition.

The slave should show up as an active node in Jenkins master.

Active Jenkins nodes: the master and one slave node.

 

This concludes this post. In the next post, I will go into the configuration of the slave node by creating a sample workflow.

 


Jenkins server



Today we will continue our journey to build a fully operational CICD environment for home use. After setting up the artifact repository, we will add the orchestration. The Jenkins server will monitor the source repositories and launch our build jobs. We want our Jenkins server to be part of the Docker composition, so that we can easily start it.

This post is part of a series about creating a continuous integration platform for home use.

 

  Create an artifact repository

  Configure the artifact repository

  Secure the artifact repository

 Create the Jenkins master

 Add a Jenkins slave

 Creating a sample project.

First, we need to define a volume. Jenkins stores data on disk, and you don’t want it to be lost when the docker container is removed. Add the volume in the docker-compose.yml volumes section, right after the nexus volume:
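For example (the volume names are assumptions; keep the nexus volume you already have):

volumes:
  nexus-data:
  jenkins-data: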

Next, we add Jenkins to the services section:
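A sketch of the service definition; the exact version tag is an assumption, so pick a recent LTS tag of the official jenkins/jenkins image:

  jenkins-master:
    image: jenkins/jenkins:2.138.1
    ports:
      - "8080:8080"    # administration page, reachable from the host
    expose:
      - "50000"        # slave agent port, internal network only
    volumes:
      - jenkins-data:/var/jenkins_home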

  • We use an explicit version of Jenkins. Backward-incompatible changes may happen if you do otherwise.
  • We open the port 8080 to the outside world. This port is used to host the administration page of Jenkins server.
  • We expose port 50000 in the internal network. This port will be used by the Jenkins slaves to connect to the master.
  • The volume is mounted at the location /var/jenkins_home, which is the predefined data location of this docker image.

 

Prepare our host system

We want to access our Jenkins server at the URL http://jenkins-master:8080. To make this work, we have to add it to our DNS, or we can simply add a mapping in the hosts file on our machine, which is perfectly acceptable for this local installation.

On Linux, edit the file /etc/hosts

On Windows, edit the file C:\Windows\System32\drivers\etc\hosts

Add the following line:
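127.0.0.1       jenkins-master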

This tells your computer that any traffic for jenkins-master will be routed to the loopback IP address.

 

Start Jenkins Server for the first time

On the command-line, enter the docker-compose up command. All three containers in the composition will be started.

Once the services have started, we need to search the log output for the following information:
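The block looks roughly like this (your generated key will differ, and is also written to a file inside the container):

*************************************************************

Jenkins initial setup is required. An admin user has been created
and a password generated.
Please use the following password to proceed to installation:

<your generated secret key>

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

*************************************************************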

The above lines will only show as long as the initial setup has not been performed yet. They contain a secret key with which we can create our admin user. Copy the key for later use.

 

Create the administrator account

Open your browser and go to the Jenkins interface at http://localhost:8080. You will see an unlock screen like this:

Jenkins unlock screen

Copy the key into the password field, and press Continue. You will be asked to select the plugins to install.

Select the suggested plugins; we can change them afterwards. You will see a screen showing the installation progress.

This may take a couple of minutes. After installing the pre-selected plugins, we will be asked to provide an Admin Account:

Create the user and press “Save”.

Set the Jenkins URL

Change the URL to http://jenkins-master:8080/ and select “Save and Finish”. This is important, because Jenkins slaves will be accessing the master using this URL.

Press ‘Start using Jenkins’ to complete the setup.

At this point you can log in to Jenkins, but if your browser screen remains blank, do a clean stop and start again using docker-compose stop and docker-compose start.

You should get a screen like this, indicating that the installation was successful.

Jenkins main page

 

We now have a Jenkins server ready to orchestrate jobs.

 

Go to the command line on your machine and execute the following command to download the slave.jar file. We will need this file to create slaves for Jenkins to execute jobs.
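Jenkins serves the slave binary itself, so something like this should work once the hosts entry from earlier is in place:

curl -O http://jenkins-master:8080/jnlpJars/slave.jar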

You could also use your browser to download the file. Keep the file for the next step: creating a Jenkins worker.

Configure the master node to only execute jobs that are intended to run there, so that it will not be clogged by jobs that should run on slaves. Go to Configure > Nodes > Master and set its usage to ‘Only build jobs with label expressions matching this node’.

We are now ready to add a Jenkins slave to our setup, which we will do in the next post.


Often when I am working at home, I wish I had a CICD setup similar to the one at my customers. Developing code without a continuous integration platform feels like a big step back. Any self-respecting developer should use CICD, even at home. The only pain is the time needed to set up the applications, which can be significant the first time you do it. In the upcoming posts I will create a CICD setup for home use, so that you can go through the steps faster.

I will explicitly not choose any development language or platform, as I will be using the setup for many different things. I dabble in many languages and platforms, so I want my environment to support them all. A small sample of what I support using this platform: Python, Django, Java, Angular, Tibco BW, docker.

Our continuous integration platform is built upon Docker, with Nexus as the artifact repository, Jenkins for build orchestration, and SonarQube for code quality.

 

The integration lifecycle

Setting up continuous integration is quite a project. A good setup is straightforward from an administration point of view, easy to use as a developer and, most importantly, stable. A continuous integration setup is not a static thing; it changes over time, just as fast as the IT world itself. Therefore we need a stable foundation on which we can build in the future.

A sample continuous integration and deployment cycle.

The docker infrastructure

To create this CI platform, we will be using Docker Compose. This allows us to re-create the composition independent of server availability, networks and admin permissions. All we need is a computer with sufficient disk and memory space, and sufficient permissions on that computer to install docker.

First, we will create an artifact repository to hold our build artifacts. It will contain the temporary artifacts created in the build phase, the docker images created in the packaging phase, and all supporting binaries.

Then we configure the artifact repository. We can create areas for different packaging systems: maven, pip, docker. We also need to consider the types of updates: do we allow overwriting an existing version, or do we force new version numbers?

Docker is quite strict in its security requirements. We will secure the repository, so that it is accessible without hacking or compromising docker’s security settings. We do this by adding a reverse proxy as the central entry point into our stack.

Next, the Jenkins master will be added to the stack, so that we have a director to control the build jobs.

Once we have Jenkins up, we can add a Jenkins slave to execute build jobs, and we will configure the slave to work with our repository by creating a sample project.

Finally, we add a SonarQube installation to validate the quality of the code.


Component testing is an important protection against regression errors. After every change to your component, you should test its public interfaces in isolation from the environment they run in. In classic OTAP setups this can be a pain, but using Docker you can avoid many of the problems by creating a dedicated environment, just for the occasion.

Our test strategy consists of just five simple steps for component testing, fully automated using a Jenkins build server:

1) Perform unit tests after compiling your code

During development, it’s important to get quick feedback on errors in your code. The IDE you use is the first layer, and the most direct protection: it protects against syntax errors. The second layer is the unit test. It should verify that your code has no erroneous constructions, like breaking on empty lists and other out-of-bounds exceptions. It should focus on the technical details of your implementation and test the constructions in your code, but take care you are not testing the libraries you use. Libraries have their own test suites, and duplicating those tests will not add any value.

 

If you are using a code-quality gateway, such as SonarQube, it should be invoked just after the unit tests. A gateway will improve the quality of the entire project by enforcing code standards and unit-test coverage, and by preventing architectural debt in your code. It reduces the burden of peer reviews by automating the bulk of the review work, leaving only the interesting work to the developers.

 

2) Create a docker image of your component, as usual

Once you have passed your unit tests, you should create a docker image of your component, ready for deployment in the environment. This is a candidate image for production, and it will not be changed anymore. Whenever it passes a test phase, it is promoted to the next environment. This means that our docker image needs to be configurable for different environments, but the executable inside, together with the internal structure of the docker image, must be final.

3) Create a docker image from the image of step 2, and add mocks and settings

The image created in step 2 is final, but Docker allows us to derive from an image and add extra components. We run an application server in our docker image, where the component is deployed. In the same application server, we can deploy our mock services. All external APIs used by our component are mocked using the same platform as the component itself. This is important, because when using Docker, you should have only one executable running per container, and this executable performs the role of both the component and the mocked services. It also simplifies things, because the developer needs only one skill set instead of two: the application platform doubles as the mock framework.

The configuration for our component is also added to this image, so that it connects to the mocked APIs out of the box instead of the external APIs. The docker image needs no further configuration, and is ready to respond to our test messages directly after spinning up.
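As a sketch, the derived test image can be as simple as a few lines on top of the final image from step 2; the image name, paths and mock artifacts below are hypothetical:

FROM registry.local/order-service:1.2.3

# Deploy the mock services next to the component in the same application server
COPY mocks/ /opt/appserver/deployments/

# Configuration that points the component at the mocked APIs
COPY test-config/application.properties /opt/appserver/config/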

4) Deploy the image of step 3, and run your component tests against your mocked component

Our component uses one dependency that is hard to mock: the database. This can be circumvented by creating a dedicated database per test. Again, Docker shows its strengths, as we can just spin up a Docker database image together with our component test image. This implies that our component must be able to create its own database structure, or that we have a database image with the predefined structure available. We use the former.

Now that our component is running together with its mocks and database dependencies, we can initiate the component test suite from Jenkins. All tests are run in isolation, on the just-created stand-alone environment, and the results are gathered.

The things we verify in the component test phase are functional, and can be written down using the following format: given that the mocks provide certain data, when I call the provided public API of my component, then I expect a certain result. For example: given that a customer X is returned from the customer mock service, when I call the order service to create an order for customer X, the result should be that an order is created.
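As an illustration in Java (the endpoint, port and payload are hypothetical; adapt them to your component’s API), such a scenario can be written as:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OrderScenarioTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Given: the customer mock in the test image returns customer 42

        // When: we call the public API of the order service to create an order
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://component-under-test:8080/orders"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"customerId\": 42}"))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Then: an order is created for that customer
        if (response.statusCode() != 201) {
            throw new AssertionError("Expected 201 Created, got " + response.statusCode());
        }
    }
}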

A good practice is to write down small scenarios of business events and bundle each scenario in a test case. Test cases should be independent of each other, so you shouldn’t use database data stored in one scenario to execute the next one. The only dependencies are between steps inside a scenario, where you can create something, read it, update it, and so on. This way you can choose the order of the scenarios, and perhaps limit your testing to one case when you try to reproduce an issue.

5) Proceed to deploy the image of step 2, and perform integration and system tests as usual

Once the component testing is successful, the component test environment is deleted, since it isn’t needed anymore; it is created anew before every test run.

We take the base image we created in step 2 and deploy it on our integration test environment.

 

Some points to take away

  • Our component is able to create its own database structure from scratch, so we can start with an empty database every time.
  • We use an application server to host both the component and the mock services
  • We build a mocked docker image on top of our production-ready image
  • Jenkins is used to create and destroy the docker environments
  • Docker compose can be used to create an environment, but specialized products such as Kubernetes or OpenShift make life for a developer much easier.
  • Component testing can seem expensive, but the longer the software lives, the more value is returned from component tests. Don’t skip out on the tests, but make implementing tests easier.
