Posts Tagged "container"


In this post, we will create a simple Jenkins slave image, capable of compiling Java code. We will register it in the Jenkins master instance that we created during the previous blog post.

 

We will start from a docker image that provides a Java environment. On top of that, we will deploy the Jenkins slave binary and configure it to connect to the master. Finally, we add the software the slave needs to execute its jobs.

 

Remember the microservice principle: make small containers that can do one job well, not one large container that can do everything.

This post is part of a series about creating a continuous integration platform for home use.

 

  Create an artifact repository

  Configure the artifact repository

  Secure the artifact repository

  Create the Jenkins master

  Add a Jenkins slave

  Creating a sample project

Register the Jenkins slave

We need to register every new slave in Jenkins before it can connect. Go to ‘Manage Jenkins’, ‘Manage Nodes’ and add a ‘New Node’. Provide the default settings like in the picture below:

Save your configuration. You will see a confirmation screen like the one below. It contains one critical piece of information: the secret key needed to connect the slave to Jenkins. Copy the secret key; we will need it in our slave image.
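If you prefer to script this step instead of clicking through the UI, the Jenkins CLI can create the node from an XML definition. The sketch below is an assumption rather than part of the original setup: the node name and remote directory match this series, the credentials are placeholders, and the exact XML schema can differ between Jenkins versions. You still need to copy the secret from the node's page afterwards.

# Download the CLI from the running master and create the node from an XML definition
wget http://jenkins-master:8080/jnlpJars/jenkins-cli.jar

cat > node.xml <<'EOF'
<slave>
  <name>slave-java-11</name>
  <remoteFS>/opt</remoteFS>
  <numExecutors>1</numExecutors>
  <mode>NORMAL</mode>
  <retentionStrategy class="hudson.slaves.RetentionStrategy$Always"/>
  <launcher class="hudson.slaves.JNLPLauncher"/>
  <label>java-11</label>
</slave>
EOF

java -jar jenkins-cli.jar -s http://jenkins-master:8080 -auth admin:yourpassword \
  create-node slave-java-11 < node.xml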

 

Building the Jenkins slave

Go to your docker-compose folder and create a new subfolder called ‘slave-java-11’. Copy the slave.jar file that you downloaded in the Jenkins master setup blog post into this folder. Create three files: a Dockerfile and two script files, startup.sh and wait-for-it.sh. If you are working on Windows, make sure the line endings of the script files are in Unix mode, or you will get errors at runtime. Copy-paste the content from below:

Dockerfile

# Base image: provides the JDK build tools and the Java runtime needed to run slave.jar
FROM openjdk:11

COPY slave.jar .
COPY wait-for-it.sh .
COPY startup.sh .
RUN chmod u+x startup.sh wait-for-it.sh

# Defaults for documentation purposes; all of these are overridden in docker-compose.yml
ENV JENKINS_MASTER_SERVER=jenkins-master
ENV JENKINS_MASTER_PORT=8080
ENV JENKINS_MASTER_JNLP_PORT=50000
ENV SLAVE_NAME=slave-java-11
ENV JENKINS_TOKEN=b26ad819e8d4f823302e1ea4abd724e488967130b7910ea7762c4579c80852ee

CMD ["sh", "./startup.sh"]

You can replace the JENKINS_TOKEN value with the secret key you copied from the slave screen above, but this is not necessary: we will override it in the docker-compose file anyway.

The Dockerfile defines a new image based on the official openjdk image. This image gives us the build tools we are looking for, and conveniently also includes a Java runtime that allows us to execute slave.jar. It adds the files from our build folder so that we can use them inside the image, and assigns the correct execution rights to the scripts. Finally, it sets defaults for the environment variables. These are mostly there to document the image, as we will override the values in docker-compose later on.
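If you want to try the image on its own before wiring it into the composition, you can build and run it manually. A rough sketch, run from the cicd folder; the network name cicd_default is an assumption based on the folder name (check docker network ls), and the master must already be running:

docker build -t slave-java-11 ./slave-java-11

docker run --rm --network cicd_default \
  -e JENKINS_TOKEN=<your-secret-key> \
  slave-java-11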

startup.sh

The startup.sh script is called when the container starts. We first wait until the two Jenkins ports are reachable, to avoid busy waiting and a lot of spam in the log file. Once the ports are available, we start the slave process.

#!/usr/bin/sh

# Wait (up to 300 seconds each) until the Jenkins web and JNLP ports accept connections
bash ./wait-for-it.sh ${JENKINS_MASTER_SERVER}:${JENKINS_MASTER_PORT} -t 300
bash ./wait-for-it.sh ${JENKINS_MASTER_SERVER}:${JENKINS_MASTER_JNLP_PORT} -t 300

# Connect this node to the master over JNLP, using /opt as the slave work directory
java -jar slave.jar -jnlpUrl http://${JENKINS_MASTER_SERVER}:${JENKINS_MASTER_PORT}/computer/${SLAVE_NAME}/slave-agent.jnlp -secret ${JENKINS_TOKEN} -workDir /opt

wait-for-it.sh

We use a wait script from GitHub (published under the MIT License) to wait for a TCP port to become available on the network before we start the slave.

#!/usr/bin/env bash
#   Use this script to test if a given TCP host/port are available

cmdname=$(basename $0)

echoerr() { if [[ $QUIET -ne 1 ]]; then echo "$@" 1>&2; fi }

usage()
{
    cat << USAGE >&2
Usage:
    $cmdname host:port [-s] [-t timeout] [-- command args]
    -h HOST | --host=HOST       Host or IP under test
    -p PORT | --port=PORT       TCP port under test
                                Alternatively, you specify the host and port as host:port
    -s | --strict               Only execute subcommand if the test succeeds
    -q | --quiet                Don't output any status messages
    -t TIMEOUT | --timeout=TIMEOUT
                                Timeout in seconds, zero for no timeout
    -- COMMAND ARGS             Execute command with args after the test finishes
USAGE
    exit 1
}

wait_for()
{
    if [[ $TIMEOUT -gt 0 ]]; then
        echoerr "$cmdname: waiting $TIMEOUT seconds for $HOST:$PORT"
    else
        echoerr "$cmdname: waiting for $HOST:$PORT without a timeout"
    fi
    start_ts=$(date +%s)
    while :
    do
        if [[ $ISBUSY -eq 1 ]]; then
            nc -z $HOST $PORT
            result=$?
        else
            (echo > /dev/tcp/$HOST/$PORT) >/dev/null 2>&1
            result=$?
        fi
        if [[ $result -eq 0 ]]; then
            end_ts=$(date +%s)
            echoerr "$cmdname: $HOST:$PORT is available after $((end_ts - start_ts)) seconds"
            break
        fi
        sleep 1
    done
    return $result
}

wait_for_wrapper()
{
    # In order to support SIGINT during timeout: http://unix.stackexchange.com/a/57692
    if [[ $QUIET -eq 1 ]]; then
        timeout $BUSYTIMEFLAG $TIMEOUT $0 --quiet --child --host=$HOST --port=$PORT --timeout=$TIMEOUT &
    else
        timeout $BUSYTIMEFLAG $TIMEOUT $0 --child --host=$HOST --port=$PORT --timeout=$TIMEOUT &
    fi
    PID=$!
    trap "kill -INT -$PID" INT
    wait $PID
    RESULT=$?
    if [[ $RESULT -ne 0 ]]; then
        echoerr "$cmdname: timeout occurred after waiting $TIMEOUT seconds for $HOST:$PORT"
    fi
    return $RESULT
}

# process arguments
while [[ $# -gt 0 ]]
do
    case "$1" in
        *:* )
        hostport=(${1//:/ })
        HOST=${hostport[0]}
        PORT=${hostport[1]}
        shift 1
        ;;
        --child)
        CHILD=1
        shift 1
        ;;
        -q | --quiet)
        QUIET=1
        shift 1
        ;;
        -s | --strict)
        STRICT=1
        shift 1
        ;;
        -h)
        HOST="$2"
        if [[ $HOST == "" ]]; then break; fi
        shift 2
        ;;
        --host=*)
        HOST="${1#*=}"
        shift 1
        ;;
        -p)
        PORT="$2"
        if [[ $PORT == "" ]]; then break; fi
        shift 2
        ;;
        --port=*)
        PORT="${1#*=}"
        shift 1
        ;;
        -t)
        TIMEOUT="$2"
        if [[ $TIMEOUT == "" ]]; then break; fi
        shift 2
        ;;
        --timeout=*)
        TIMEOUT="${1#*=}"
        shift 1
        ;;
        --)
        shift
        CLI=("$@")
        break
        ;;
        --help)
        usage
        ;;
        *)
        echoerr "Unknown argument: $1"
        usage
        ;;
    esac
done

if [[ "$HOST" == "" || "$PORT" == "" ]]; then
    echoerr "Error: you need to provide a host and port to test."
    usage
fi

TIMEOUT=${TIMEOUT:-15}
STRICT=${STRICT:-0}
CHILD=${CHILD:-0}
QUIET=${QUIET:-0}

# check to see if timeout is from busybox?
TIMEOUT_PATH=$(realpath $(which timeout))
if [[ $TIMEOUT_PATH =~ "busybox" ]]; then
        ISBUSY=1
        BUSYTIMEFLAG="-t"
else
        ISBUSY=0
        BUSYTIMEFLAG=""
fi

if [[ $CHILD -gt 0 ]]; then
    wait_for
    RESULT=$?
    exit $RESULT
else
    if [[ $TIMEOUT -gt 0 ]]; then
        wait_for_wrapper
        RESULT=$?
    else
        wait_for
        RESULT=$?
    fi
fi

if [[ $CLI != "" ]]; then
    if [[ $RESULT -ne 0 && $STRICT -eq 1 ]]; then
        echoerr "$cmdname: strict mode, refusing to execute subprocess"
        exit $RESULT
    fi
    exec "${CLI[@]}"
else
    exit $RESULT
fi

The script above complements the depends_on statement in the docker-compose file. depends_on only waits until the dependency container has been started, but the dependent container may come up before the other one is actually ready to accept connections. It is better to wait until the ports are available, so that our connection attempts will at least reach the process.
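As an aside, newer docker-compose versions can express this readiness check declaratively with a healthcheck on the master and a conditional depends_on. This is only a sketch and an assumption: support for condition: service_healthy depends on your compose file version, and it assumes curl is available inside the Jenkins image. The wait-for-it approach used in this post works regardless:

  jenkins-master:
    # ... existing configuration from the previous post ...
    healthcheck:
      test: ["CMD", "curl", "-fs", "http://localhost:8080/login"]
      interval: 15s
      timeout: 5s
      retries: 20

  slave-java-11:
    # ... existing configuration ...
    depends_on:
      jenkins-master:
        condition: service_healthy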

Your folder structure should now look like this:

cicd/
  reverse/
    certs/
      ...
    Dockerfile
    https.conf
  slave-java-11/
    slave.jar
    Dockerfile
    startup.sh
    wait-for-it.sh
  docker-compose.yml

 

Edit the docker-compose.yml file and add a service for the slave:

  slave-java-11:
    build: slave-java-11
    environment:
      - JENKINS_MASTER_SERVER=jenkins-master
      - JENKINS_MASTER_PORT=8080
      - JENKINS_MASTER_JNLP_PORT=50000
      - JENKINS_TOKEN=86f28fafeeb1f4500d546f1957df26718a14fbca244605ea5762da9ad2f721e8
      - SLAVE_NAME=slave-java-11
    depends_on:
      - jenkins-master

This is where we paste the secret key we copied from the slave screen above. Replace the JENKINS_TOKEN value 86f28fafeeb1f4500d546f1957df26718a14fbca244605ea5762da9ad2f721e8 with your own copy.

 

Execute the command docker-compose up in the main folder to build the slave image and start the composition.
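Note that docker-compose up only builds the image if it does not exist yet; if you later change the Dockerfile or one of the scripts, force a rebuild of the slave service first:

docker-compose build slave-java-11
docker-compose up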

The slave should show up as an active node in Jenkins master.

Active Jenkins Nodes: the master and one slave node

 

This concludes this post. In the next post, I will go into the configuration of the slave node by creating a sample workflow.

 


Jenkins server


Posted on 25 Sep 2018 in CICD in docker

Today we will continue our journey to build a fully operational CICD environment for home use. After setting up the artifact repository, we will add the orchestration. The Jenkins server will monitor the source repositories and launch our build jobs. We want our Jenkins server to be part of the Docker composition, so that we can easily start it.

This post is part of a series about creating a continuous integration platform for home use.

 

  Create an artifact repository

  Configure the artifact repository

  Secure the artifact repository

  Create the Jenkins master

  Add a Jenkins slave

  Creating a sample project

First, we need to define a volume. Jenkins stores its data on disk, and you don’t want that data to be lost when the docker container is removed or recreated. Add the volume in the docker-compose.yml volumes section, right after the nexus volume:

volumes:
  nexus-data:
  jenkins-data:

Next, we add Jenkins to the services section:

  jenkins-master:
    image: jenkins/jenkins:2.129
    ports:
      - "8080:8080"
    expose:
      - "50000"
    volumes:
      - jenkins-data:/var/jenkins_home
  • We pin an explicit version of Jenkins; otherwise a newer release with backward-incompatible changes could be pulled in unexpectedly.
  • We open the port 8080 to the outside world. This port is used to host the administration page of Jenkins server.
  • We expose port 50000 in the internal network. This port will be used by the Jenkins slaves to connect to the master.
  • The volume is mounted at the location /var/jenkins_home, which is the predefined data location of this docker image.

 

Prepare our host system

We want to access our Jenkins server on the URL http://jenkins-master:8080. To make this work, we have to add it to our DNS, or we can simply add a mapping in the hosts file on our machine, which is perfectly acceptable for this local installation.

On linux, edit the file /etc/hosts

On windows, edit the file C:\Windows\System32\drivers\etc\hosts

Add the following line:

127.0.0.1           jenkins-master

This tells your computer that any traffic for jenkins-master will be routed to the loopback IP address.
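To check that the mapping works, resolve the name from a terminal; it should report the loopback address:

ping jenkins-master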

 

Start Jenkins Server for the first time

On the command-line, enter the docker-compose up command. All three containers in the composition will be started.

Once the services have started, look in the log output for the following information:

jenkins-master_1  | *************************************************************
jenkins-master_1  | *************************************************************
jenkins-master_1  | *************************************************************
jenkins-master_1  |
jenkins-master_1  | Jenkins initial setup is required. An admin user has been created and a password generated.
jenkins-master_1  | Please use the following password to proceed to installation:
jenkins-master_1  |
jenkins-master_1  | 3d3a06eaeea14f6e96f053228902dd66
jenkins-master_1  |
jenkins-master_1  | This may also be found at: /var/jenkins_home/secrets/initialAdminPassword
jenkins-master_1  |
jenkins-master_1  | *************************************************************
jenkins-master_1  | *************************************************************
jenkins-master_1  | *************************************************************

The lines above only appear as long as the initial setup has not been performed. They contain the initial admin password with which we can create our admin user. Copy it for the next step.
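If the message has already scrolled out of view, you can also read the password directly from the running container, using the path mentioned in the log output:

docker-compose exec jenkins-master cat /var/jenkins_home/secrets/initialAdminPassword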

 

Create the administrator account

Open your browser and go to the Jenkins interface at http://localhost:8080. You will see an unlock screen like this:

Jenkins Unlock

Copy the key into the password field, and press Continue. You will be asked to select the plugins to install.

Select the suggested plugins; we can change them afterwards. You will see a screen showing the installation progress.

This may take a couple of minutes. After installing the pre-selected plugins, we will be asked to provide an Admin Account:

Create the user and press “Save”.

Set the Jenkins URL

Change the URL to http://jenkins-master:8080/ and select “Save and Finish”. This is important, because the Jenkins slaves will access the master using this URL.

Press ‘Start using Jenkins’ to complete the setup.

At this point you can log in to Jenkins, but if your browser screen remains blank, do a clean stop and start using docker-compose stop and docker-compose start.

You should get a screen like this, indicating that the installation was successful.

Jenkins main page

 

We now have a Jenkins server ready to orchestrate jobs.

 

Go to the command line on your machine and execute the following command to download the slave.jar file. We will need this file to create slaves that execute jobs for Jenkins.

wget http://jenkins-master:8080/jnlpJars/slave.jar

You could also use your browser to download the file. Keep the file for the next step: creating a Jenkins slave.

Configure the master node to only execute jobs that are explicitly intended for it, so that it does not get clogged with jobs that should run on slaves. Go to Configure > Nodes > Master and, for example, set its usage to ‘Only build jobs with label expressions matching this node’ or set its number of executors to 0.

We are now ready to add a Jenkins slave to our setup, which we will do in the next post.


7 golden rules for docker images

Creating a good Docker image is an art. There are no fixed rules that can be applied in every situation. Instead, we need to look at the pros and cons of every decision. We can however provide guidance.

Here are 7 golden rules for docker images. By following these rules, you can improve the containers you build, making them more reusable, more efficient and more stable.

Stateless

The prime requirement for all scalable containers is to never keep track of state. Every action should be executed in its own context, without the need to store long-term information anywhere inside the micro-service. This means that permanent storage, like databases, data files or caches, should not live inside the container. When no data lives inside the container, requests can be handled by any copy of the micro-service, so we can load-balance the requests or recycle malfunctioning containers.

Statelessness is hard to achieve. It requires that the software you are trying to encapsulate is written in a way that allows it. When done properly, the software lets you push the state to an external resource, such as a database. As a hint, you could put your data files in central folders that can be mounted as external shared volumes (watch out for file locks and concurrency on the files), make use of external shared caches, and so on.

If the software doesn’t allow a stateless runtime, you might be able to use clustering features of the software. This is not advised, because it puts extra requirements on the network setup of your runtime, but it can allow you to run load-balanced.

My request should be handled by any container, independent of previous requests.

Small

When your container travels through the development landscape towards production, it is downloaded and uploaded many times. Even when it has hit production, it will be copied and unpacked every time a new instance is started. Even though disk, memory and CPU might be cheap, the total sum can add up. Look at the following examples:

  • When a micro-service fails, we need a replacement instantly; any delay spent unpacking should be avoided.
  • After a disaster, all services might need to start at the same time, congesting the bandwidth needed to download and unpack the docker images and other resources.
  • Your artifact repository might hold hundreds of versions of your images. Even with good housekeeping rules, the disk space needed might grow beyond what is available.

What can you do about this?

  • Clean up your package repository cache after using it to install. Package managers like yum and apt download a copy of the version information of all available packages. You won’t need that inside your running container, so clean up after installing.
  • Make sure you clean up at the end of every line in your Dockerfile. Docker creates a filesystem snapshot after every line and stores the diff as a layer. Cleaning up on the next line will not reduce the image size; it will instead use more space. See the sketch below for the right way to do it.
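For example, on an apt based image you can install and clean up in the same RUN instruction, so the package cache never ends up in a layer (a generic sketch; curl is just a stand-in for whatever you need to install):

RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*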

Fast Startup

Load-balancing and high-availability depend on the ability to react to changing conditions in a timely manner. If a container fails, you wish to have it replaced now, not one hour later. When sales are peaking on your website, you wish to have extra capacity now, without any lead time, or you might miss some revenue.

Your micro-service should be able to start fast, and without dependencies on external systems. You don’t want your service to download extra packages from a repository at startup; instead, all the packages should be part of the docker image. A service should not have to register itself on a central server; other services should be able to connect to it using a well-known URL or name, such as an OpenShift service name. There should never be an external license server that needs to authorize your instance and that becomes a show-stopper when it is unavailable.


Configurable

Configuration is all about being able to change behavior without rebuilding the image

Think about how others will use your micro-service. What flexibility can we give them? Does your container need a server name and port to access an external database, or can we provide a full JNDI URL, which allows more fine-tuning? Try not to restrict the use of your container. Containers are for IT experts, not for end users, so give them a powerful interface. It will be used by competent people who are trying to make it work in a situation you might not have foreseen, so give them the tools.

There are multiple ways to inject configuration into your container.

  • Environment variables are the easiest to use. They are clear, easy to find and well understood. They are however immutable once the container has started. The process that runs inside the container gets a copy at startup.
  • Configuration files are a bit more complex. The format depends on your software, and they may be scattered all over your file system. However, good software packages are able to detect changes at runtime and reload the file. Also, files can be mounted, so multiple containers can use the same central configuration file, which makes the settings easier to maintain.

Other solutions, such as storing settings in a central database, are possible but not advisable in a micro-service landscape. If you are running a large OpenShift or Kubernetes environment, you want configuration changes to be visible, and hiding them in a database works against that.
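To make the two injection styles concrete, here is a hypothetical compose fragment; the service name, image and paths are made up for illustration:

  my-service:
    image: example/my-service:1.2.3
    environment:
      # immutable once the container has started
      - DB_URL=jdbc:postgresql://db:5432/app
    volumes:
      # can be shared between containers and re-read at runtime
      - ./config/app.conf:/etc/my-service/app.conf:ro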

Extendable

Where configuration is about changing the runtime behavior of the container within the bounds of the software, extensibility is about enhancing that behavior by building on top of the image.

Many container images on docker.io are built according to this principle. When you use an image of an application server such as glassfish, you can choose to start the container and upload your application modules to the running instance. This, however, is a painful process that needs to be repeated every time you start the container. Instead, you can build a new docker image on top of the application server image with your packages pre-installed. When you do so, you have extended the original micro-service with your code, creating a new micro-service.
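As a sketch of this pattern, here is what such an extension could look like, using the official tomcat image rather than glassfish and a hypothetical myapp.war, purely for illustration:

# Reusable application server image as the base
FROM tomcat:9.0

# Pre-install our application so it becomes part of the new image
COPY myapp.war /usr/local/tomcat/webapps/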

When you design your micro-service, think about how others can extend it. Can they add classes, libraries or other things that enhance the behavior? Maybe you should split your container into two parts: a reusable base image and a customized extension to that image for your single purpose.

Layered

Docker images are built in layers. A layer is basically a diff: we take the previous layer and apply a number of changes to it in order to arrive at a new situation. Each layer adds a piece of the final image, and each layer depends on the previous one.

We already mentioned that we should avoid adding unnecessary data to the layers, but even when we avoid that, we still need to look critically at the layer structure. When you create a docker image, you are focused on the end result: making it work. This is your primary goal. Once you have it working, you should review your Dockerfile, and see if you need to make changes.

Docker tries to be smart about the layer structure. Whenever you use a docker image, the layers are downloaded and cached locally. A cached layer can be re-used when 1) the chain of layers from the root layer up to this layer is exactly the same, and 2) the layer itself has not changed. As soon as you make a change to one layer, it invalidates that layer and all layers that come after it. All of these layers will need to be downloaded again, even though the previous version of the layer was cached and the layer itself has not changed.

When you look at the order of the layers, you should use the following guideline:

large before small, stable before volatile

When the order of two lines in your Dockerfile is not dictated by any dependency, consider the rule above. Ideally, the line that produces the largest layer should come first. Likewise, lines that are not expected to change in the next version should come before lines that will, such as lines referring to a specific version of a package. These two rules allow docker to use its cache more effectively, significantly reducing memory, disk space and bandwidth. As a bonus, the build time for images is also reduced whenever you change one of the volatile layers.
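A sketch of what this ordering looks like for a hypothetical Java application image:

# Large and stable: the base image changes rarely, so it comes first
FROM openjdk:11

# Third-party libraries change now and then
COPY libs/ /opt/app/libs/

# Our own artifact changes on every build, so it comes last
COPY application.jar /opt/app/

CMD ["java", "-jar", "/opt/app/application.jar"]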

Versioned

You should use explicit versioning, always.

Dockerfiles are code, just like any other language. You put them in source control and use a compiler (docker) to build them into an executable (the image). You want this process to be predictable and repeatable. You don’t want it to break suddenly when a dependency is updated. Imagine your container has been happily running in production for more than a year. A small change is requested and you agree to it. You take the Dockerfile out of source control, only to find out that it no longer builds. Now you are left with an investigation that keeps you from meeting your deadline.

What are the pieces you need to version?

  • The docker image in the FROM header. This is quite obvious.
  • The software package you encapsulate in the container, still a no-brainer.
  • The libraries and packages used by the software.
  • And finally, the tools you install with apt, yum etc. to prepare the docker image.

This last item is often forgotten. If you use a packaging system from a distribution, these packages also change: their behavior or interface may shift slightly, and newer versions might be incompatible with the old distribution you are using through your FROM image. Make sure you pin the versions of these tools, and that those versions remain available for download, for example by copying them to an artifact repository under your control.
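Pulling these points together, a hedged sketch of a fully pinned Dockerfile; the tags and version numbers are placeholders, so use ones that actually exist in your repositories:

# Pin the base image to an explicit tag
FROM openjdk:11.0.4-jdk

# Pin the tools you install as well (apt syntax: package=version),
# and ideally mirror them to an artifact repository under your control
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl=7.64.0-4 \
 && rm -rf /var/lib/apt/lists/*

# Pin the application artifact you package
COPY my-service-1.2.3.jar /opt/app/service.jar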

Conclusion

There are many things to take into account when we create a container image. This makes the creation of a good image an art in its own right. Good design might not be apparent at first: if the program inside the container runs correctly, who will complain? Only when an image is used extensively do the flaws become visible. By following the guidelines in this article, you can steer clear of many of the pitfalls.

