BIT

Barry's IT Blog


How the cloud forces you to plan ahead

With the cloud migration in full effect, I notice that a lot of my customers do not have any process in place for capacity planning and cloud contract management.

Cloud providers make it easy to scale up and down, but their pricing model encourages you to plan ahead. On-demand prices are higher than on-premise prices, and to reduce cost, long-term reservations are offered. These long-term reservations, however, are often not governed at all. Here lies an important task for the architects within the company.

What are the concerns for cloud contract management?

Long term contracts are cheaper

The first concern is obvious. To reduce cost, we want the longest commitment, and we would like to pay upfront. This results in the lowest price, so it is very tempting to simply select this type of contract.

On demand resources are flexible

A resource, once committed, cannot be changed during the term of the contract. This means that if you hit a performance problem, or your business grows or shrinks significantly, you can no longer adjust the resource. Only with on-demand resources can you scale freely.

This is mostly an issue with databases and similar static, complex resources, where you can’t simply add a server to increase capacity. You will need to adjust the resource itself, so flexibility is important.

Cash flow is irregular

When you move to the cloud, you tend to need a lot of resources at once. This means that all long-term contracts will have identical start dates and, financially, the same due dates. If all contracts have similar payment conditions, you might end up paying the three-year contracts for your entire data center at the same moment. You will need the buffers to back that up, or you need to spread the contracts; otherwise you might go bankrupt.

Change management and road-maps may conflict with existing contracts

You plan to phase out a database, or to scrap that expensive server, but contract management just renewed the three-year contract on it? Guess you need to wait before you get the expense cuts. Replace that application server with an application that has a completely different footprint and requires another type of server? Now you pay for the new server and for the reservation on the old one.

Changes on your inventory must be aligned with contract management, or you’ll end up paying for machines you don’t need.

Conclusion

The examples show that it is important to plan ahead and align with contract management before committing to long-term contracts and before setting an architecture road-map. You need to make sure that the software contracts and hardware contracts align, and that they match the enterprise road-map. Furthermore, operations needs to stay aligned with contract management, to prevent lock-in when scaling or tuning of the environment is required.

A good process for alignment between architecture, operations and contract management is key to a successful long-term adoption of the cloud.

 

 

Read More

In this post, we will be creating a sample java project to demonstrate our CICD workflow. We will create a git repository, configure the build tools and add the pipeline script. At the end of this post, we will have a finished sample workflow, and our first artifacts will be in Nexus, ready to be installed.

This post is part of a series about creating a continuous integration platform for home use.

 

  Create an artifact repository

  Configure the artifact repository

  Secure the artifact repository

 Create the Jenkins master

 Add a Jenkins slave

 Creating a sample project.

Preparations for the sample workflow

Let’s start with setting up the tools. Go to the Jenkins configuration > Global Tool Configuration and find the Maven section at the bottom.

Click on the ‘Maven installations’ button to expand the section.

Enter the name ‘Maven_3_5’: we will select the tool later on in the pipeline using this name. We want automatic installation enabled, and we select the most recent version of the 3.5 branch. Selecting an explicit version makes sure that you don’t unexpectedly get broken builds because the Maven team pushed a new release that breaks backward compatibility.

Save your config and return to the Jenkins configuration menu.

Once we start building, we will be down- and uploading artifacts from Nexus. For this, we’ll need credentials. In the menu on the left side of your screen, you’ll find the ‘Credentials’ item. Click it so that it shows ‘System’. Click ‘System’ and you will navigate to a screen where you see ‘Global credentials (unrestricted)’ in the center of your screen. Click that link to open the screen where you can ‘Add Credentials’.

Create a user account in Jenkins

Select a ‘username with password’ type of account, and provide your credentials. A good description will help you when you need to select the credentials from a long list.

Store your changes and return to the main screen.

Activating plugins

We will be using some plugins that are not installed by default. Go to the ‘Plugin Manager’ screen to install them.

Navigate to the ‘Available’ tab and select “Config File Provider” and “Pipeline Maven Integration”. Make sure to ‘Download now and install after restart’, so that the plugin is active.

Global settings for Maven

Go to ‘Managed Files’ > ‘Add a new config’ to create a global Maven configuration file.

A unique ID will be generated; you should not edit it. Enter the name ‘MyGlobalSettings’ and a comment. Make sure Replace All is selected, so that the credentials in the settings file will be overwritten by Jenkins. We want the secure store in Jenkins to hold the credentials; plain-text settings files like this one should not contain any sensitive information.

Since we have two server declarations in our settings file, we need to add two credential sets. Both instances will use the same credentials: select the nexus credentials we made earlier.

Finally, copy-paste the settings file below into the content box and ‘Submit’ your changes.

<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" 
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository>/opt/m2/repository</localRepository>
  <servers>
    <server>
      <id>nexus-snapshots</id>
      <username>barry</username>
      <password>***</password>
    </server>
    <server>
      <id>nexus-releases</id>
      <username>barry</username>
      <password>***</password>
    </server>
  </servers>

  <mirrors>
    <mirror>
      <id>central</id>
      <name>central</name>
      <url>http://nexus:8081/repository/maven-public/</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>

  <pluginGroups/>
  <proxies/>
  <profiles/>
</settings>

This file defines some settings for all our maven builds:

  1. It defines a cache folder on the slave where all builds share their plugin and artifact downloads
  2. It defines credentials for two artifact repositories. The passwords here are not used, they are overwritten by Jenkins using the Server Credentials we entered in the screenshot above.
  3. We define a mirror site, so that all downloads will pass through our own Nexus repository. Nexus will return our private artifacts, or when it doesn’t have the artifact, it will search on maven central for a download, and it will cache it. This behavior is specified in our Nexus setup.

User settings for Maven

The user settings file for Maven is the place where you should put project-specific setup. For me, the global setup is good enough, so my file is mostly empty.

Copy the following content into the file:

<?xml version="1.0" encoding="UTF-8"?>
<!-- MySettings -->
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" 
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <profiles>
  </profiles>
</settings>

There are no server entries in this file, so we don’t need to overrule any Server Credentials by ‘Add’-ing them. Just save the config and continue.

Activate the global settings.xml

Go to Global Tool Configuration > Maven Configuration > Default Global Settings Provider and select the config file MyGlobalSettings.

 

Prepare a sample workflow project

In order to verify our setup, we will need something to build. I have chosen a very basic Java program to showcase the build.

Start by creating a git repository on your favorite host, and clone the git repo so that you can start working. Inside the root of the project, we call:

mvn archetype:generate -DgroupId=java_sample -DartifactId=sample -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

This creates a basic java program, with a single unit test. No need to go into an editor yet to write a program.

Add a pom.xml in the root folder:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>java_sample</groupId>
  <artifactId>main</artifactId>
  <packaging>pom</packaging>
  <version>1.0-SNAPSHOT</version>

  <name>sample</name>
  <url>http://maven.apache.org</url>

  <modules>
    <module>sample</module>
  </modules>

  <repositories>
    <repository>
      <id>maven-group</id>
      <url>http://nexus:8081/repository/maven-public/</url>
    </repository>
  </repositories>

  <distributionManagement>
    <snapshotRepository>
      <id>nexus-snapshots</id>
      <name>snapshots</name>
      <url>http://nexus:8081/repository/maven-snapshots/</url>
    </snapshotRepository>
    <repository>
      <id>nexus-releases</id>
      <name>releases</name>
      <url>http://nexus:8081/repository/maven-releases/</url>
    </repository>
  </distributionManagement>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-release-plugin</artifactId>
        <version>2.5</version>
      </plugin>
    </plugins>
  </build>
</project>

We define two important parts:

  1. In repositories we define where Maven can find any dependency it needs to build the project. All downloads will come from this repository. We use our public Nexus group repository, which searches our releases and snapshots and, if nothing is found there, tries to download from Maven Central.
  2. In distribution management, the destinations for our artifacts are defined. When we do a snapshot build, we upload to maven-snapshots in Nexus, and when we do a release, the file is uploaded to maven-releases.

Furthermore, we define the folder sample as a module, so that it will get built recursively, and we configure the maven-release-plugin. We are not going into the details of setting up the Java project, as that is outside the scope of this post.

Before continuing, you should validate your build locally. Try “mvn install” in your working directory and see what happens. If you get download errors, make sure that the hostname nexus resolves to your machine, for example by adding the line “127.0.0.1           nexus” to your hosts file.
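If you want to run that check in one go, a minimal sketch on Linux could look like this (the hosts entry only needs to be added once, and requires sudo):

# make sure the hostname nexus points to this machine
grep -q "nexus" /etc/hosts || echo "127.0.0.1           nexus" | sudo tee -a /etc/hosts

# build and install locally; downloads should now go through Nexus
mvn install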

Add the following Jenkinsfile:

pipeline {
    agent none
    tools {
        maven 'Maven_3_5'
    }

    stages {
        stage('Build') {
            agent {
                label 'slave-java-11'
            }

            steps {
                withMaven(
                    maven: 'Maven_3_5',
                    mavenSettingsConfig: 'd4b07913-04d5-48c5-9c0e-292b565f152e',
                    mavenLocalRepo: '/opt/m2/repository'
                ) {
                    sh 'mvn clean deploy'
                }
            }
        }
    }
}

In the above sample, replace the id d4b07913-04d5-48c5-9c0e-292b565f152e by the id of your second settings file from the Jenkins Config File plugin.

The Jenkinsfile defines your pipeline. We start out by declaring that we will explicitly set the agent in every stage, and that our pipeline makes use of the Maven 3.5 tool which we set up previously in the Jenkins tools config.

The build consists of stages: visually separate actions in the sequence needed to build and deploy your program. Our pipeline has only one stage for now: the Build stage. This stage needs to run on a slave agent that has the label ‘slave-java-11’. A stage can have multiple steps, but we only need one: using the Maven tool and a config file from the Config File Provider plugin in Jenkins, we execute a single shell command, “mvn clean deploy”, which will build, unit-test and upload to Nexus.

You should now have the following structure:

<repo>/
	sample/
		...
	Jenkinsfile
	pom.xml

Commit and push.
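From the command line, that could look roughly like this (the branch name is just an example):

git add pom.xml Jenkinsfile sample
git commit -m "Add sample project with Jenkinsfile"
git push origin develop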

Create your first Jenkins build

  • Create credentials for your Git user.
  • Create a new build job for this git repo: a multibranch pipeline
  • Link it to your repo

 

Screenshot of a successful build in Jenkins pipeline

Select ‘Scan Multibranch Pipeline Now’ on the root of your project to let Jenkins find all branches and execute a build job for each new branch it finds.

 

I enabled Maven debugging (mvn -X clean deploy) to see which settings.xml files were used. It shows both the global settings and the user settings:

[DEBUG] Reading global settings from /opt/workspace/java_sample_develop-TDM3SN5C526GKQJCW7Y5YCZ7F3QNYB4LBZ6FVYCHSEUMOS72ZOPQ@tmp/withMaven85e33fb7/globalSettings.xml
[DEBUG] Reading user settings from /opt/workspace/java_sample_develop-TDM3SN5C526GKQJCW7Y5YCZ7F3QNYB4LBZ6FVYCHSEUMOS72ZOPQ@tmp/withMaven85e33fb7/settings.xml

Your first build should now run successfully. This concludes our sample workflow.

 

In our next post, we will start checking the quality of the program using SonarQube.

Read More

In this post, we will create a simple Jenkins slave image, capable of compiling Java code. We will register it in the Jenkins master instance that we created during the previous blog post.

 

We will take a docker image that holds a Java environment. On top of that, we will deploy the Jenkins slave binary and configure it to connect to the master. Finally, we add the software needed to execute the jobs it is required to do.

 

Remember the microservice principle: make small containers that can do one job well, not one large container that can do everything.

This post is part of a series about creating a continuous integration platform for home use.

 

  Create an artifact repository

  Configure the artifact repository

  Secure the artifact repository

 Create the Jenkins master

 Add a Jenkins slave

 Creating a sample project.

Register the Jenkins slave

We need to register every new slave in Jenkins before it can connect. Go to ‘Manage Jenkins’, ‘Manage Nodes’ and add a ‘New Node’. Provide the default settings like in the picture below:

Save your configuration. You will get a confirmation screen below. It contains one critical piece of information: the secret key needed to connect the slave to Jenkins. Copy the secret key. We will need it in our slave image.

 

Building the Jenkins slave

Go to your docker-compose folder, and create a new subfolder called ‘slave-java-11’. Copy the slave.jar file which you downloaded in the Jenkins master setup blog post into this folder. Create three files: a Dockerfile and two script files, startup.sh and wait-for-it.sh. If you are working on Windows, make sure the line endings of the script files are in Unix mode, or you will get errors at runtime. Copy-paste the content from below:
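For reference, the preparation could be scripted roughly like this (assuming slave.jar was saved next to docker-compose.yml; the sed command normalizes Windows line endings):

cd cicd                      # your docker-compose folder
mkdir slave-java-11
cp slave.jar slave-java-11/
cd slave-java-11
touch Dockerfile startup.sh wait-for-it.sh

# after pasting the content below, convert line endings if you edited on Windows
sed -i 's/\r$//' startup.sh wait-for-it.sh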

Dockerfile

FROM openjdk:11

COPY slave.jar .
COPY wait-for-it.sh .
COPY startup.sh .
RUN chmod u+x startup.sh wait-for-it.sh

ENV JENKINS_MASTER_SERVER=jenkins-master
ENV JENKINS_MASTER_PORT=8080
ENV JENKINS_MASTER_JNLP_PORT=50000
ENV SLAVE_NAME=slave-java-11
ENV JENKINS_TOKEN=b26ad819e8d4f823302e1ea4abd724e488967130b7910ea7762c4579c80852ee

CMD ["sh", "./startup.sh"]

You can replace the JENKINS_TOKEN value with the secret key you copied from the slave screen above, but it is not necessary; we will override it in the docker-compose file.

The Dockerfile defines a new image based upon the official openjdk image. This image gives us the build tools we are looking for, and conveniently also includes a Java runtime that allows us to execute slave.jar. It adds the files from our build folder, so that we can use them inside the image, and assigns execution rights to the scripts. Finally, it sets defaults for the environment variables. These defaults are mostly there for understanding the image, as we will override the values in docker-compose later on.
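You can verify that the image builds before wiring it into the composition; a quick local check could look like this (the tag name is arbitrary):

cd slave-java-11
docker build -t slave-java-11-test .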

startup.sh

The startup.sh script will be called upon execution of the docker image. We first check if the two Jenkins ports are available, to avoid busy waiting with a lot of spam in the log file. When the ports are available, we start the slave process.

#!/bin/sh

bash ./wait-for-it.sh ${JENKINS_MASTER_SERVER}:${JENKINS_MASTER_PORT} -t 300
bash ./wait-for-it.sh ${JENKINS_MASTER_SERVER}:${JENKINS_MASTER_JNLP_PORT} -t 300

java -jar slave.jar -jnlpUrl http://${JENKINS_MASTER_SERVER}:${JENKINS_MASTER_PORT}/computer/${SLAVE_NAME}/slave-agent.jnlp -secret ${JENKINS_TOKEN} -workDir /opt

wait-for-it.sh

We use a wait script from github [under The MIT License] to wait for the availability of a tcp port on the network before we start the slave.

#!/usr/bin/env bash
#   Use this script to test if a given TCP host/port are available

cmdname=$(basename $0)

echoerr() { if [[ $QUIET -ne 1 ]]; then echo "$@" 1>&2; fi }

usage()
{
    cat << USAGE >&2
Usage:
    $cmdname host:port [-s] [-t timeout] [-- command args]
    -h HOST | --host=HOST       Host or IP under test
    -p PORT | --port=PORT       TCP port under test
                                Alternatively, you specify the host and port as host:port
    -s | --strict               Only execute subcommand if the test succeeds
    -q | --quiet                Don't output any status messages
    -t TIMEOUT | --timeout=TIMEOUT
                                Timeout in seconds, zero for no timeout
    -- COMMAND ARGS             Execute command with args after the test finishes
USAGE
    exit 1
}

wait_for()
{
    if [[ $TIMEOUT -gt 0 ]]; then
        echoerr "$cmdname: waiting $TIMEOUT seconds for $HOST:$PORT"
    else
        echoerr "$cmdname: waiting for $HOST:$PORT without a timeout"
    fi
    start_ts=$(date +%s)
    while :
    do
        if [[ $ISBUSY -eq 1 ]]; then
            nc -z $HOST $PORT
            result=$?
        else
            (echo > /dev/tcp/$HOST/$PORT) >/dev/null 2>&1
            result=$?
        fi
        if [[ $result -eq 0 ]]; then
            end_ts=$(date +%s)
            echoerr "$cmdname: $HOST:$PORT is available after $((end_ts - start_ts)) seconds"
            break
        fi
        sleep 1
    done
    return $result
}

wait_for_wrapper()
{
    # In order to support SIGINT during timeout: http://unix.stackexchange.com/a/57692
    if [[ $QUIET -eq 1 ]]; then
        timeout $BUSYTIMEFLAG $TIMEOUT $0 --quiet --child --host=$HOST --port=$PORT --timeout=$TIMEOUT &
    else
        timeout $BUSYTIMEFLAG $TIMEOUT $0 --child --host=$HOST --port=$PORT --timeout=$TIMEOUT &
    fi
    PID=$!
    trap "kill -INT -$PID" INT
    wait $PID
    RESULT=$?
    if [[ $RESULT -ne 0 ]]; then
        echoerr "$cmdname: timeout occurred after waiting $TIMEOUT seconds for $HOST:$PORT"
    fi
    return $RESULT
}

# process arguments
while [[ $# -gt 0 ]]
do
    case "$1" in
        *:* )
        hostport=(${1//:/ })
        HOST=${hostport[0]}
        PORT=${hostport[1]}
        shift 1
        ;;
        --child)
        CHILD=1
        shift 1
        ;;
        -q | --quiet)
        QUIET=1
        shift 1
        ;;
        -s | --strict)
        STRICT=1
        shift 1
        ;;
        -h)
        HOST="$2"
        if [[ $HOST == "" ]]; then break; fi
        shift 2
        ;;
        --host=*)
        HOST="${1#*=}"
        shift 1
        ;;
        -p)
        PORT="$2"
        if [[ $PORT == "" ]]; then break; fi
        shift 2
        ;;
        --port=*)
        PORT="${1#*=}"
        shift 1
        ;;
        -t)
        TIMEOUT="$2"
        if [[ $TIMEOUT == "" ]]; then break; fi
        shift 2
        ;;
        --timeout=*)
        TIMEOUT="${1#*=}"
        shift 1
        ;;
        --)
        shift
        CLI=("$@")
        break
        ;;
        --help)
        usage
        ;;
        *)
        echoerr "Unknown argument: $1"
        usage
        ;;
    esac
done

if [[ "$HOST" == "" || "$PORT" == "" ]]; then
    echoerr "Error: you need to provide a host and port to test."
    usage
fi

TIMEOUT=${TIMEOUT:-15}
STRICT=${STRICT:-0}
CHILD=${CHILD:-0}
QUIET=${QUIET:-0}

# check to see if timeout is from busybox
TIMEOUT_PATH=$(realpath $(which timeout))
if [[ $TIMEOUT_PATH =~ "busybox" ]]; then
        ISBUSY=1
        BUSYTIMEFLAG="-t"
else
        ISBUSY=0
        BUSYTIMEFLAG=""
fi

if [[ $CHILD -gt 0 ]]; then
    wait_for
    RESULT=$?
    exit $RESULT
else
    if [[ $TIMEOUT -gt 0 ]]; then
        wait_for_wrapper
        RESULT=$?
    else
        wait_for
        RESULT=$?
    fi
fi

if [[ $CLI != "" ]]; then
    if [[ $RESULT -ne 0 && $STRICT -eq 1 ]]; then
        echoerr "$cmdname: strict mode, refusing to execute subprocess"
        exit $RESULT
    fi
    exec "${CLI[@]}"
else
    exit $RESULT
fi

The script above goes beyond the depends_on statement in the docker-compose file. depends_on only waits until the dependency container has been started, but the depending container may come up before the dependency is actually ready to receive connections. It is better to wait until the ports are available, so that our connection attempts will at least reach the process.
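Standalone, the wait script is used like this; a small sketch using the hostname and port defaults from the Dockerfile above:

# block for at most 300 seconds until the Jenkins web port accepts connections,
# then run the command after --
bash ./wait-for-it.sh jenkins-master:8080 -t 300 -- echo "Jenkins master is reachable"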

Your folder structure should now look like this:

cicd/
  reverse/
    certs/
      ...
    Dockerfile
    https.conf
  slave-java-11/
    slave.jar
    Dockerfile
    startup.sh
    wait-for-it.sh
  docker-compose.yml

 

Edit the docker-compose.yml file and add a service for the slave:

  slave-java-11:
    build: slave-java-11
    environment:
      - JENKINS_MASTER_SERVER=jenkins-master
      - JENKINS_MASTER_PORT=8080
      - JENKINS_MASTER_JNLP_PORT=50000
      - JENKINS_TOKEN=86f28fafeeb1f4500d546f1957df26718a14fbca244605ea5762da9ad2f721e8
      - SLAVE_NAME=slave-java-11
    depends_on:
      - jenkins-master

This is where we paste the secret key we copied on the slave screen above. Replace the JENKINS_TOKEN value 86f28fafeeb1f4500d546f1957df26718a14fbca244605ea5762da9ad2f721e8 with your copy.

 

Execute the command docker-compose up in the main folder to build the slave image and start the composition.

The slave should show up as an active node in Jenkins master.
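If the node does not appear, the slave logs usually tell you why; a quick way to follow them (service name as defined in docker-compose.yml):

docker-compose logs -f slave-java-11
# look for a line indicating the agent connected to the master,
# or for errors about the JNLP port or the secret key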

Active Jenkins nodes: the master and one slave node.

 

This concludes this post. In the next post, I will go into the configuration of the slave node by creating a sample workflow.

 

Read More

Jenkins server


Posted on 25 Sep 2018 in CICD in docker

Today we will continue our journey to build a fully operational CICD environment for home use. After setting up the artifact repository, we will add the orchestration. The Jenkins server will monitor the source repositories and launch our build jobs. We want our Jenkins server to be part of the Docker composition, so that we can easily start it.

This post is part of a series about creating a continuous integration platform for home use.

 

  Create an artifact repository

  Configure the artifact repository

  Secure the artifact repository

 Create the Jenkins master

 Add a Jenkins slave

 Creating a sample project.

First, we need to define a volume. Jenkins stores data on disk, and you don’t want it to be lost when the Docker container is stopped. Add the volume in the docker-compose.yml volumes section, right after the nexus volume:

volumes:
  nexus-data:
  jenkins-data:

Next, we add Jenkins to the services section:

jenkins-master:
  image: jenkins/jenkins:2.129
  ports:
    - "8080:8080"
  expose:
    - "50000"
  volumes:
    - jenkins-data:/var/jenkins_home
  • We use an explicit version of jenkins. Backward incompatible changes may happen if you do otherwise.
  • We open the port 8080 to the outside world. This port is used to host the administration page of Jenkins server.
  • We expose port 50000 in the internal network. This port will be used by the Jenkins slaves to connect to the master.
  • The volume is mounted at the location /var/jenkins_home, which is the predefined data location of this docker image.
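Before starting anything, you can let docker-compose validate and print the merged configuration; this catches indentation mistakes in the YAML early:

docker-compose config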

 

Prepare our host system

We want to access our Jenkins server on the URL http://jenkins-master:8080. To make this work, we have to add it to our DNS, or we can simply add a mapping in the hosts file on our machine, which is perfectly acceptable for this local installation.

On linux, edit the file /etc/hosts

On windows, edit the file C:\Windows\System32\drivers\etc\hosts

Add the following line:

127.0.0.1           jenkins-master

This tells your computer that any traffic for jenkins-master will be routed towards the loopback IP address.
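On Linux, the mapping can also be appended from the command line; a small sketch (requires sudo):

echo "127.0.0.1           jenkins-master" | sudo tee -a /etc/hosts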

 

Start Jenkins Server for the first time

On the command-line, enter the docker-compose up command. All three containers in the composition will be started.

Once the services have started, we need to search the log output for the following lines:

jenkins-master_1  | *************************************************************
jenkins-master_1  | *************************************************************
jenkins-master_1  | *************************************************************
jenkins-master_1  |
jenkins-master_1  | Jenkins initial setup is required. An admin user has been created and a password generated.
jenkins-master_1  | Please use the following password to proceed to installation:
jenkins-master_1  |
jenkins-master_1  | 3d3a06eaeea14f6e96f053228902dd66
jenkins-master_1  |
jenkins-master_1  | This may also be found at: /var/jenkins_home/secrets/initialAdminPassword
jenkins-master_1  |
jenkins-master_1  | *************************************************************
jenkins-master_1  | *************************************************************
jenkins-master_1  | *************************************************************

The above lines will only show as long as the initial setup has not been performed yet. They contain a secret key with which we can create our admin user. Copy the key for later use.
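If the lines have already scrolled out of view, you can also read the password straight from the running container; for example (service name jenkins-master as defined above):

docker-compose exec jenkins-master cat /var/jenkins_home/secrets/initialAdminPassword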

 

Create the administrator account

Open your browser and go to the Jenkins interface at http://localhost:8080. You will see an unlock screen like this:

Jenkins unlock screen

Copy the key into the password field, and press Continue. You will be asked to select the plugins to install.

Select the suggested plugins; we can change them afterwards. You will see a progress screen showing the installation progress.

This may take a couple of minutes. After installing the pre-selected plugins, we will be asked to provide an Admin Account:

Create the user and press “Save”.

Set the Jenkins URL

Change the URL to http://jenkins-master:8080/ and select “Save and Finish”. This is important, because the Jenkins slaves will access the master using this URL.

Press ‘Start using Jenkins’ to complete the setup.

At this point you can log in to Jenkins, but if your browser-screen remains blank, do a clean stop and start again using docker-compose stop and docker-compose start.

You should get a screen like this, indicating that the installation was successful.

Jenkins main page

 

We now have a jenkins server ready to orchestrate jobs.

 

Go to the commandline on your machine and execute the following command to download the slave.jar file. We will need this file to create slaves for Jenkins to execute jobs.

wget http://jenkins-master:8080/jnlpJars/slave.jar

You could also use your browser to download the file. Keep the file for the next step: creating a jenkins worker.

Configure the master node to only execute jobs that are explicitly intended for it, so that it will not be clogged by jobs that should run on slaves. Go to Configure > Nodes > Master.

We are now ready to add a Jenkins slave to our setup, which we will do in the next post.

Read More

Secure repository


Posted on 24 Sep 2018 in CICD in docker

In the previous post, we introduced a Nexus repository and prepared it for use with docker. The individual repositories are present, and outbound communication has been established. However, we still can’t use the Nexus repository from docker. Docker is quite strict in its communication and requires a secure repository with encrypted connections. This means setting up an SSL-secured reverse-proxy to facilitate the communication.

This post is part of a series about creating a continuous integration platform for home use.

 

  Create an artifact repository

  Configure the artifact repository

  Secure the artifact repository

 Create the Jenkins master

 Add a Jenkins slave

 Creating a sample project.

Setup the secure repository proxy

We will start by creating a folder for the reverse proxy. This folder will hold everything needed to build a docker image specific to our needs: the configuration for the proxy, which will be Nginx, and the certificates. This is the quickest and easiest way to build an image, but it lacks some re-use potential. For now we will proceed with this simple setup, and we will use self-signed certificates.

In the demo folder, run the following commands.

mkdir reverse
cd reverse
mkdir certs

openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
  -x509 -days 365 -out certs/domain.crt

You will be asked to fill in some details like your organisation name. These can be entered as you like. The only important question is the FQDN. This is the name by which the user will access the docker repository. It can be an official domain name you own, like docker.mycompany.com, a domain name set up on your local network, a well-known IP address (not user friendly), or (like I am using for local development) you can choose a name like mydocker and add a mapping from mydocker to the correct IP address in the hosts file on every computer that uses the repository (this requires root permissions on the clients).

You will see something like this:

Generating a 4096 bit RSA private key
............................................................................................................++
......................................................................................................++
writing new private key to 'certs/domain.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:NL
State or Province Name (full name) [Some-State]:Noord Brabant
Locality Name (eg, city) []:Helmond
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Rubix
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:mydocker
Email Address []:barry@b*********cker.nl

You will now see two files in the certs folder: a domain.crt file containing your public certificate, and a domain.key file containing the private key. Make sure to keep the last one secret, and only use it on the reverse proxy.
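If you prefer to skip the interactive questions, openssl also accepts the subject on the command line; a minimal non-interactive variant (assuming mydocker as the FQDN) could look like this:

openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
  -x509 -days 365 -out certs/domain.crt \
  -subj "/CN=mydocker"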

 

Bonus: Generate the certificate using docker

If you are on windows, or just don’t wish to install openssl in order to generate one certificate, try using a docker image to create the certificate:

docker run -it centurylink/openssl sh
mkdir certs
openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt

While the container is running, open a new commandline. You can find the id with docker ps and copy the certificate out of it using docker cp <containerid>:/certs .

 

Configure Nginx

We now have finished the preparations and are ready to start configuring Nginx.

Create the file https.conf in the folder reverse, and start adding the following upstreams:

upstream docker-releases {
   server nexus:8082;
}
upstream docker-snapshots {
   server nexus:8083;
}
upstream docker-public {
   server nexus:8084;
}

Each upstream refers to a docker repository we configured in Nexus in the previous posts. An upstream is a destination to which Nginx can forward its requests. The reference is by hostname and port number. The hostname matches the name of the Nexus container in docker-compose.yml, while the port number matches the HTTP port we defined for each repository individually during the configuration of Nexus.

Next, we add a header field mapping that is required for the docker repository system.

map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
   '' 'registry/2.0';
}

Finally, we start adding the listeners for the inbound requests. The first listener will be on port 443, which is the default https port as well as the default docker registry port. This will allow us to use just mydocker as a destination, without specifying a port number.

server {
   listen 443 ssl http2;
   listen [::]:443 ssl http2;

   server_name mydocker mydocker.local;
   set $fqdn mydocker;

   ssl_certificate /etc/ssl/domain.crt;
   ssl_certificate_key /etc/ssl/domain.key;

   add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload" always;

   ssl_protocols TLSv1.1 TLSv1.2;
   ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
   ssl_prefer_server_ciphers on;
   ssl_session_cache shared:SSL:10m;

   client_max_body_size 0;
   chunked_transfer_encoding on;

   location /v2/ {
      # Do not allow connections from docker 1.5 and earlier
      # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
      if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
        return 404;
      }

      ## If $docker_distribution_api_version is empty, the header will not be added.
      ## See the map directive above where this variable is defined.
      add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;

      proxy_pass                          http://docker-public;
      proxy_set_header  Host              $http_host;   # required for docker client's sake
      proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
      proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
      proxy_set_header  X-Forwarded-Proto $scheme;
      proxy_read_timeout                  900;
   }
}

Let’s analyse the above configuration:

  • A server definition creates a listener for inbound requests on a given port
  • We tell Nginx to listen on port 443 for SSL-encrypted HTTP/2 requests, on both the IPv4 and IPv6 interfaces of this host.
  • Nginx should expect mydocker or mydocker.local as hostname. This is the hostname you’d type in a browser, before it is resolved to the IP address. It must match the FQDN of the certificate we created earlier.
  • The public certificate and private key are provided. We will have to add these files to the docker image later on.
  • A required docker header is added.
  • Not all SSL protocols are secure. Some are outdated. Some are not supported by docker. We list the protocols and ciphers we wish to use.
  • Docker transfers can be huge. You might want to transfer a 16G image. We remove the max-size limit on the request, so that the client is allowed to send this much data. This also means that we can’t buffer the entire request in memory, but have to use a chunked approach.
  • The docker repository API we use is version 2, so we expect the path to start with /v2/, which allows us to add a v1 or v3 with different settings if we ever need to.
  • We exclude old docker versions that don’t play nice.
  • We add the header mapping we defined at the very beginning of the https.conf file, right after the upstreams. The mapping is required because add_header by itself only allows fixed values.
  • Finally, we tell Nginx what to do with the incoming request. The request should be forwarded (proxied) to the upstream docker-public, which we defined at the very start. Nexus needs some extra headers again; this time they are related to the way a proxy server talks to the proxied server. They are used to forward information such as the protocol used between the client and the proxy, the IP address of the client, and so on. We also set a large read timeout, because storing large binaries might take some time.

What did we do?

  • We have forwarded the default docker port towards the Nexus docker-public repository. This is the group repository for our docker images, which means that when we pull an image from this default endpoint, the image will be retrieved from one of the following locations: docker-releases, docker-snapshots or docker-hub.

This endpoint allows us to find any docker image we created ourselves, or from the public docker-hub repository on the internet. We don’t need to know in which repository it is stored, all magic is handled by Nexus. Great.

The next step is storing docker images. We don’t want to use the generic port for this, but rather, we would like to specify what kind of image we are storing: is it a snapshot build created during development, or is it a candidate release build that might end up on production?

For this, we introduce two endpoints in Nginx, in a way similar to the default endpoint.

server {
   listen 8082 ssl http2;
   listen [::]:8082 ssl http2;

   server_name mydocker mydocker.local;
   set $fqdn mydocker;

   ssl_certificate /etc/ssl/domain.crt;
   ssl_certificate_key /etc/ssl/domain.key;

   add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload" always;

   ssl_protocols TLSv1.1 TLSv1.2;
   ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
   ssl_prefer_server_ciphers on;
   ssl_session_cache shared:SSL:10m;

   client_max_body_size 0;
   chunked_transfer_encoding on;

   location /v2/ {
      # Do not allow connections from docker 1.5 and earlier
      # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
      if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
        return 404;
      }

      ## If $docker_distribution_api_version is empty, the header will not be added.
      ## See the map directive above where this variable is defined.
      add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;

      proxy_pass                          http://docker-releases;
      proxy_set_header  Host              $http_host;   # required for docker client's sake
      proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
      proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
      proxy_set_header  X-Forwarded-Proto $scheme;
      proxy_read_timeout                  900;
   }
}

and

server {
   listen 8083 ssl http2;
   listen [::]:8083 ssl http2;

   server_name mydocker mydocker.local;
   set $fqdn mydocker;

   ssl_certificate /etc/ssl/domain.crt;
   ssl_certificate_key /etc/ssl/domain.key;

   add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload" always;

   ssl_protocols TLSv1.1 TLSv1.2;
   ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
   ssl_prefer_server_ciphers on;
   ssl_session_cache shared:SSL:10m;

   client_max_body_size 0;
   chunked_transfer_encoding on;

   location /v2/ {
      # Do not allow connections from docker 1.5 and earlier
      # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
      if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
        return 404;
      }

      ## If $docker_distribution_api_version is empty, the header will not be added.
      ## See the map directive above where this variable is defined.
      add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;

      proxy_pass                          http://docker-snapshots;
      proxy_set_header  Host              $http_host;   # required for docker client's sake
      proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
      proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
      proxy_set_header  X-Forwarded-Proto $scheme;
      proxy_read_timeout                  900;
   }
}

Note that the only differences are:

  • The listen port has changed for both ip4 and ip6
  • The proxy_pass destination has changed to the corresponding upstream.

This concludes our Nginx configuration. You can close the editor on https.conf. All we need to do now is to bundle the software, configuration and certificates in a docker image.

Bundling the package

In the reverse folder, we create a file Dockerfile and add the following content:

FROM nginx:1.15-alpine

COPY certs/* /etc/ssl/
COPY https.conf /etc/nginx/conf.d/

This specifies that we use the official Nginx distribution from docker-hub. We select a specific version to avoid update problems in the future. Our certificates are copied to the location we specified in the configuration file. Finally we copy the configuration file itself to the default location where Nginx expects it to be.

We have ended up with the following file-structure for the reverse proxy image:

/reverse
        /certs
              /domain.crt
              /domain.key
        /https.conf
        /Dockerfile

Validate your results by typing docker build . on the command-line inside the reverse folder. It should download the base image and add the required files.

Bring it together

We now have a reverse proxy configured to forward all traffic towards the Nexus repository. All we need is to put them together in a single docker-compose environment, so that they can communicate. Go to the root folder of your project and edit the docker-compose.yml file. We will add some lines, so that the result will be:

version: '2'

services:
  nexus:
    image: sonatype/nexus3:3.12.1
    volumes:
      - "nexus-data:/nexus-data"
    ports:
      - "8081:8081"
    expose:
      - "8082"
      - "8083"
      - "8084"
      - "8085"
      - "8086"
      - "8087"
      - "8088"
      - "8089"
  reverse-proxy:
    build: reverse
    ports:
      - "443:443"
      - "8082:8082"
      - "8083:8083"
      - "8084:8084"
      - "8085:8085"
      - "8086:8086"
      - "8087:8087"
      - "8088:8088"
      - "8089:8089"
      
volumes:
  nexus-data:

Everything we added is in the reverse-proxy service:

  • The docker image will be identified by the name reverse-proxy
  • It is not a downloaded image like nexus; instead it’s a locally built image that can be found in the folder reverse
  • It exposes a number of ports to the outside world, most specifically port 443, 8082 and 8083. The others are there for future use.

 

Running and testing

Now that we have both Nexus and Nginx in the docker-compose, it is time to start using it. Make sure your previous compose is stopped by typing docker-compose stop

Go to the main directory and build the composition by running docker-compose build (without the . that docker build . uses). You should see output like this:

nexus uses an image, skipping
Building reverse-proxy
Step 1/3 : FROM nginx:alpine
 ---> ba60b24dbad5
Step 2/3 : COPY https.conf /etc/nginx/conf.d/
 ---> 49d1e664e3f5
Step 3/3 : COPY certs/* /etc/ssl/
 ---> 1cc416dc1cd2
Successfully built 1cc416dc1cd2
Successfully tagged demo_reverse-proxy:latest

Now we are ready to run the composition for the first time. Run it with docker-compose up so that it creates missing volumes if needed. To run it afterwards, use docker-compose start instead.

Before we can login, we need to make sure we can find the host mydocker. As discussed before, it needs to be registered. The simplest way is to register the name on the local machine:

On linux, edit the file /etc/hosts

On windows, edit the file C:\Windows\System32\drivers\etc\hosts

Add the following line:

127.0.0.1           mydocker

This tells your computer that any traffic for mydocker will be routed towards the loopback IP address.

 

Now we can start testing. Try to log in to your repository by entering the following command

docker login mydocker

You will be asked for credentials. Provide the username and password for the user you created.

The command should end with the message “Login succeeded”

You are now logged on to the group repository that also contains the reference to docker-hub. Confirm this by doing a docker pull nginx

It should show:

Using default tag: latest
latest: Pulling from library/nginx
Digest: sha256:9fca103a62af6db7f188ac3376c60927db41f88b8d2354bf02d2290a672dc425
Status: Image is up to date for nginx:latest

Now try a docker push nginx. This should give you a denied message: you have no permissions to push to the nginx image on docker-hub.

Let’s store this image in our docker repository. Begin by logging in to our snapshot repo: enter docker login mydocker:8083 and provide the user credentials.

Tag and push the image:

docker tag nginx mydocker:8083/nginx:latest
docker push mydocker:8083/nginx:latest

You should see the layers being uploaded.

Verify the data in Nexus. 

It should show the nginx image you just uploaded:

You can find more details if you drill-down deeper.

 

Securing the admin interface

Now that we have secured the Docker interface, we can add the admin interface as well. Edit the https.conf file and add the admin port as an upstream at the start of the file.

upstream admin-page {
   server nexus:8081;
}

Scroll down to the server component for port 443. This server contains one location, for /v2/. What we want is to route traffic on /v2/ towards the Docker repository, and to route all other requests towards the admin pages. This works because no admin pages exist that use /v2/ as a prefix.

Below the location /v2/ we add a new location. Make sure it is still inside the server section for port 443.

location / {
   proxy_pass                          http://admin-page;
   proxy_set_header  Host              $http_host;
   proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
   proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
   proxy_set_header  X-Forwarded-Proto $scheme;
   proxy_read_timeout                  90;
}

Test the new endpoint by rebuilding and starting your docker-compose. Make sure you can log on to the admin page. You’ll most likely need to accept the untrusted certificate before you can continue.
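A quick way to rebuild and check the new endpoint from the command line (the -k flag tells curl to accept our self-signed certificate):

docker-compose build reverse-proxy
docker-compose up -d
curl -k -s -o /dev/null -w "%{http_code}\n" https://mydocker/
# a 200 response means Nginx is forwarding requests to the Nexus admin interface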

Finally, go to the docker-compose.yml and remove the following lines from the nexus configuration. The port will no longer be public.

    ports:
      - "8081:8081"

Instead we will open a private port. Inside the expose section of nexus add:

      - "8081"

Now our http admin port is no longer visible from the outside world, and we can only access it using the public https endpoint of the proxy.

 

Recap

We have introduced a reverse-proxy in order to create a secure repository. The proxy provides a https-secured Docker endpoint, which ensures that the transferred data is not intercepted. The data remains private and unmodified during transport.

In our case, we used a self-generated certificate for encryption. One important thing to note is that we didn’t have to install the certificate in docker. Docker accepts our certificate, even if it isn’t signed by a trusted certificate authority. Should future versions of Docker enforce trust of the certificate, you’ll need to add the public certificate to the certificate folder on every client, which can be found at

c:\Users\<username>\.docker\machine\certs
or
/home/<username>/.docker/machine/certs

Upcoming post

In the next post, we will deploy the Jenkins master in the docker-compose network and set it up as a work orchestration server.

Read More

In this blogpost, we will configure the Nexus repository that we introduced in the previous post. We will create a basic repository setup with three levels: snapshot repository for our development artifacts that are only for testing, a releases repository for final artifacts that might go to a live environment, and a proxy repository that can access external repositories in order to integrate them with our own artifacts.

A virtual layer will be put on top of these: the group repository. This allows us to use fallback rules: if the artifact is not in the first repo, we will search the second, and so on. The group repository can be used to pull all artifacts, while the snapshot and release repos are used to push artifacts.

We will also create the minimal users and permissions to access the system.

This post is part of a series about creating a continuous integration platform for home use.

 

  Create an artifact repository

  Configure the artifact repository

  Secure the artifact repository

 Create the Jenkins master

 Add a Jenkins slave

 Creating a sample project.

Take your browser to the Nexus login page at http://localhost:8081. Log in with the admin user: At the top-right side of the screen you find the Sign in button. Click it and enter the default credentials

name: admin
password: admin123

Now is a good time to change your credentials to something more secure: click on the admin button in the top bar, and select change password.

After changing your password, proceed to the Repository administration page by clicking on the cog icon in the top bar. You should see a navigation menu on the left with different areas for configuration.

Nexus configuration page

Use the left side navigation to go to the Repositories, it’s the second item from the top. It will show you some default repositories that are configured and ready to use. We will go through the details.

 

The repositories in Nexus

Name             Type    Format  Status
maven-central    proxy   maven2  Online – Ready to Connect
maven-public     group   maven2  Online
maven-releases   hosted  maven2  Online
maven-snapshots  hosted  maven2  Online
nuget-group      group   nuget   Online
nuget-hosted     hosted  nuget   Online
nuget.org-proxy  proxy   nuget   Online – Ready to Connect

  • The maven-central repository is of type proxy, which means that it doesn’t store data locally, but instead it forwards all requests to the maven-central repository on the internet.
  • The maven repositories use the format maven2, which means that artifacts are stored using the groupId, artifactId and version identifiers from the Maven build system.
  • The maven-releases and maven-snapshots repositories are hosted, which means that the files are stored and managed in this Nexus instance.
  • The maven-public repository is of type group. The group consists of the other three maven repositories we just discussed above. (the group members are not visible on this screen). When a group repository receives a request, it tries all the member repositories to find a match, so it aggregates multiple other repositories into a single location.
  • The NuGet repositories follow the same pattern, with a proxy to the internet, a hosted repo for the local data and a group to aggregate both locations into one. They use the NuGet format to identify the artifacts.
  • Apart from the identification of the artifacts, each format also has its own API used to store and retrieve binaries. By selecting the correct format, you enable that API.

Since we are storing docker images in our build, we will create four extra repositories:

  1. A proxy repository to access the master docker repository on the internet
  2. A releases repository where we store our final builds. This repo is write-once, read-many, so that we can’t accidentally overwrite a published artifact.
  3. A snapshots repository where we store our work in progress. This repo allows overwriting existing binaries, so that we can rebuild fast and often.
  4. A group repository to aggregate the three previous repositories in one place.

 

The proxy repository

Click on the “Create repository” button

Select docker (proxy)

Name your repo, for example “docker-hub”. It should be marked as Online.

Scroll down and enter the Proxy – Remote storage field. It should read https://hub.docker.com

Mark the checkbox “Use certificates stored in the Nexus truststore to connect to external systems” and click View certificate. It will show some certificate information like the screenshot shown here. Ensure that it is the certificate you expect to see, and press the Add button.

 

Leave the other options as-is, scroll down to the bottom of the page and press “Create repository” to finish.

You now have your first repository, which is a virtual read-only copy of docker hub.

 

The releases repository

Create a new repository. This time, we select type docker (hosted) and name it “docker-releases“.

In the section Repository connectors, we mark the HTTP checkbox and enter the number 8082 in the field behind the checkbox. This makes the repository available on port 8082 inside the docker container, which will allow us to connect to the repository later on.

Finally we scroll down to the section Hosted. The deployment policy is by default “Allow redeploy”. Since this is a releases repository, and we don’t want to overwrite existing artifacts, we have to select “Disable redeploy” here, so all artifacts become write-once.

Press “Create repository” to finish.

 

The snapshot repository

Create a new repository. Select type docker (hosted) and name it “docker-snapshots“.

In the section Repository connectors, we mark the http checkbox, and enter the number 8083 in the data field behind the checkbox.

Leave all other settings at default.

Press “Create repository” to finish.

 

Aggregating into a single repository

Create a new repository. Select type docker (group) and name it “docker-public“.

In the section Repository connectors, we mark the http checkbox, and enter the number 8084 in the data field behind the checkbox.

Scroll down to the bottom and add the other three docker repositories to the group. The order is important here: Nexus will try to find artifacts by trying the repositories from top to bottom. At the top should be docker-releases, then docker-snapshots and finally docker-hub. Add all three and make sure the order is correct.

Press “Create repository” to finish.

 

Summary

We have now added four repositories, as shown in the table below

Name              Type    Format  Status                     Purpose
docker-hub        proxy   docker  Online – Ready to Connect  Proxy towards docker.io so that we can use public docker images as if they were part of our own repository
docker-public     group   docker  Online                     One central access point for pulling docker images, regardless of the physical repository where they are stored
docker-releases   hosted  docker  Online                     A repository for our final builds. These docker images are protected from accidental overwriting
docker-snapshots  hosted  docker  Online                     A repository for our development builds. These docker images can be pushed repeatedly, providing ease of use during development

 

Security in Nexus

Before we can access the repositories, we will have to set up some permissions. We will start by creating a role for docker.

 

Creating the docker role

Navigate to the Roles section, create a new role, and enter the role id “nx-docker” and the role name “Docker user“.

Add the following privileges:

  • nx-repository-admin-docker-docker-hub.*
  • nx-repository-admin-docker-docker-public.*
  • nx-repository-admin-docker-docker-releases.*
  • nx-repository-admin-docker-docker-snapshots.*
  • nx-repository-view-docker-*-*

This will grant rights for all four docker repositories to all users that have this role. Press “Create role” to finalize the role.

Next we can add a user.

 

Create a local user

Navigate to the Users section via the menu on the left side and select create local user. Provide the information for the user you wish to use. Make sure that:

  1. Status is Active
  2. Roles Granted contains “Docker user”

Press “Create local user” to finish.

 

Activate the docker realm

As a final step, we want to use the docker login system. Therefore we need to activate the docker security realm. In the left side menu, navigate to Realms. It will show the security realms. Add Docker Bearer Token Realm to the active realms and press Save.

 

Conclusion

This concludes the setup of Nexus itself, however we still can’t access Nexus with our tooling. Only the admin interface is exposed. The next blog will guide us through the setup of the reverse proxy, so that we can have a secure connection into the repositories.

Read More

Often when I am working at home, I wish I had a CICD setup similar to the one at my customers. Developing code without a continuous integration platform feels like a big step back. Any self-respecting developer should use CICD, even at home. The only pain is the time needed to set up the applications, which can be significant the first time you do it. In the upcoming posts I will be creating a CICD setup for home use, so that you can go through the steps faster.

I will explicitly not choose any development language or platform, as I will be using it for many different things. I dabble around with many languages, so I want my environment to be able to support them all. A small sample of the languages and platforms I support with this platform: Python, Django, Java, Angular, Tibco BW, docker.

Our Continuous Integration platform is built upon the components described below.

 

The integration lifecycle

Setting up continuous integration is quite a project. A good setup is straightforward from an administration point of view, easy to use as a developer and, most important, stable. A continuous integration setup is not a static thing; it changes over time, just as fast as the IT world itself. Therefore we need a stable basis, a good foundation on which we can build in the future.

A sample continuous integration and deployment cycle.

The docker infrastructure

To create this CI platform, we will be using docker-compose. This allows us to re-create the composition independently of server availability, networks and admin permissions. All we need is a computer with sufficient disk and memory space, and sufficient permissions on that computer to install docker.

First we will create an artifact repository to hold our build artifacts. It will contain the temporary artifacts created during the build phase, the docker images created during the packaging phase, and all supporting binaries.

We then have to configure the artifact repository. We can create areas for different packaging systems: maven, pip, docker. We also need to consider the types of updates: do we allow overwrite actions on an existing version, or do we force new version numbers?

Docker is quite strict in its security requirements. We will secure the repository, so that it is accessible without hacking or compromising the security settings of docker. We do this by adding a reverse proxy as the central entry point into our stack.

Next, the Jenkins master will be added to the stack, so that we have a director to control the build jobs.

Once we have Jenkins up, we can add a Jenkins slave to execute the build jobs.

We will configure the slave to work with our repository by creating a sample project.

Finally, we add a SonarQube installation to validate the quality of the code.

Read More

In order to build our CICD platform, we will start with the creation of an artifact repository. The artifact repository can be used in various locations in the pipelines, has no dependencies itself, and as such it is a great starting point.

This repository will hold all the binaries for our project: it will store and distribute the deliverables and all dependencies. This ensures that all developers use the same binaries, and that the exact same binary goes to production. It provides a central location for managing and securing the libraries that are used in the tools. Furthermore it can host build plugins, docker images and tools that are required by the CICD platform itself.

We opt for Nexus, since the open-source community version provides support for docker. Competitors like Artifactory provide the same, but only in the commercial version. Since this project is for home use, we choose to go cheap.

This post is part of a series about creating a continuous integration platform for home use.

 

  Create an artifact repository

  Configure the artifact repository

  Secure the artifact repository

  Create the Jenkins master

  Add a Jenkins slave

  Creating a sample project

Workspace setup

Let's start off and create a folder to hold our environment.

mkdir demo
cd demo

It is a good idea to put this folder under version control right now. It will contain critical configuration for your project, and you will want to have proper version management in place for the files in this folder. We will keep version control out of scope for this walkthrough, but it is good to remind yourself to commit your changes regularly, so that you can quickly reproduce the system if needed.
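If you want that safety net right away, a minimal sketch looks like this (assuming git is installed; the commit message is only an example):

git init
# ...and after every meaningful change to the workspace:
git add -A
git commit -m "describe the change"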

Ironically, the CICD workspace repository is the only repository that won’t have a build pipeline attached.

The artifact repository service

Next, we will create a simple docker-compose.yml file to start our repository. Start your editor and create the file.

version: '2'

services:
  nexus:
    image: sonatype/nexus3:3.12.1
    volumes:
      - "nexus-data:/nexus-data"
    ports:
      - "8081:8081"
    expose:
      - "8082"
      - "8083"
      - "8084"
      - "8085"
      - "8086"
      - "8087"
      - "8088"
      - "8089"
      - "5000"

volumes:
  nexus-data:

What this does

  • We specify that we use version 2 of the compose file format. It is a bit outdated, but I use it for backward compatibility. You might be able to move on to version 3, but we won’t be using any of the new features yet.
  • We define a service, a docker container called nexus. The name is not only something to recognize the container by, it is also used as a network identifier inside docker. Each service can be considered a small virtual machine, and the networking between these services is done through the docker network layer, which uses the name for routing.
    • The container is based upon the official sonatype/nexus3 image, which will be downloaded from docker-hub.
    • Always use explicit versioning. It may be tempting to use latest or stable when you select an image, but this may result in your application failing to start when you enter docker-compose up next time, due to breaking changes in the newer image that somebody else pushed to docker-hub.
    • A volume called nexus-data will be mounted at location /nexus-data inside the docker container. The program running inside the container will be able to store its data on the volume.
    • The port 8081 inside the container will be accessible on port 8081 from the outside world, aka your computer and anybody else on the local network. This allows us to use a browser on port 8081 to administrate the running nexus. You could select any available port for the first (external) port number, but the second number must match the configuration of nexus, which by default listens on 8081.
    • We expose a range of ports afterwards. These ports are not visible to the outside world, but when we add more docker containers to this compose file, the new containers can communicate with nexus on the exposed ports. The ports definition is for ports that need to be accessed from outside the docker-compose stack; the exposed ports are only accessible inside the same docker-compose stack.
  • Finally we declare the volume. This is a data storage location that survives container re-creation. All data inside a docker container is normally lost when the container is removed; a new container will be created with a clean filesystem as defined in the build process. To let the container persist data, it needs an external data store, and for file systems this is done through a volume.

That is it. We can now start our first container and use the browser to configure it.
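Before you do, you can optionally let docker-compose validate the file; it prints the fully resolved configuration, or an error when the YAML is malformed:

docker-compose config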

Testing

Start the container by typing the following command on the command-line:

docker-compose up

The up command tells docker-compose to pull the base images from the internet, to do the initial setup for the containers and volumes, and to start the services.
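For this first run we stay in the foreground so we can watch the log output. Once everything works, you may prefer to run the stack in the background and follow the logs on demand:

docker-compose up -d
docker-compose logs -f nexus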

You will see log output on the terminal about downloading the image layers from docker-hub, followed by lines like these:

Creating network "demo_default" with the default driver
Creating volume "demo_nexus-data" with default driver
Creating demo_nexus_1 ...
Creating demo_nexus_1 ... done
Attaching to demo_nexus_1
nexus_1  | 2017-11-09 09:01:08,901+0000 WARN  [FelixStartLevel] *SYSTEM uk.org.lidalia.sysoutslf4j.context.SysOutOverSLF4JInitialiser - Your logging framework class org.ops4j.pax.logging.slf4j.Slf4jLogger is not known - if it needs access to the standard println methods on the console you will need to register it by calling registerLoggingSystemPackage
nexus_1  | 2017-11-09 09:01:08,909+0000 INFO  [FelixStartLevel] *SYSTEM uk.org.lidalia.sysoutslf4j.context.SysOutOverSLF4J - Package org.ops4j.pax.logging.slf4j registered; all classes within it or subpackages of it will be allowed to print to System.out and System.err

Let it run for a minute; Nexus takes some time to set up its data store on first use.
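If you prefer a scriptable check over refreshing the browser, a small sketch (assuming curl is available) that waits until the welcome page answers:

# keep polling port 8081 until Nexus responds
until curl -sf -o /dev/null http://localhost:8081/; do
  sleep 5
done
echo "Nexus is up"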

Start up your browser and go to the Nexus web page at http://localhost:8081/. It should show you the welcome page.

Welcome Page for Nexus


Conclusion

This concludes part 1 of our walkthrough. We now have a running Nexus instance. You can stop it using ctrl-c in the terminal where docker runs, or with the command docker-compose stop in the configuration folder. It can be restarted by typing docker-compose start (not docker-compose up) in the folder where the docker-compose.yml file is stored. Use the command docker ps to see if your instance is running:

> docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                             NAMES
2ff5f21afccc        sonatype/nexus3     "bin/nexus run"     18 minutes ago      Up 18 minutes       5000/tcp, 8082-8089/tcp, 0.0.0.0:8081->8081/tcp   demo_nexus_1
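For reference, the lifecycle commands mentioned above, all run from the folder that contains docker-compose.yml:

docker-compose stop     # stop the containers; data on the volume is kept
docker-compose start    # start the existing containers again
docker ps               # check what is currently running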

It might seem a bit overkill to create a compose file for just one docker image, but it will become clear in the next blogpost. Nexus by itself doesn’t provide the secure access that docker requires to use it as a repository. We will need to add a reverse proxy, like Nginx, to hold the SSL certificates and to encrypt the communication.

In the next blogpost, we will setup the Nexus repository and prepare it for use with docker.

Read More
