How to build multiple Docker containers from a Jenkinsfile?

I have three different Docker images (Wildfly, Postgres, Virtuoso), each with its own Dockerfile, and I need to build them from a Jenkinsfile.
The directory structure, with Docker as the root directory, is:
Docker -> build -> 1. wildfly 2. postgres 3. virtuoso
In my Jenkinsfile I build an image with the command below:
stage('Building test images') {
    sh 'docker build -t virtuoso -f $WORKSPACE/build/virtuoso/Dockerfile .'
}
But I am getting the error below:
Step 7/16 : COPY ./install $VIRT_HOME/install
COPY failed: stat /var/lib/docker/tmp/docker-builder636357036/install: no such file or directory
[Pipeline] }
For reference, below is my Dockerfile:
FROM virtuoso:latest
ENV var1 /opt/virtuoso-opensource
ENV VIRT_db /opt/virtuoso-opensource/var/lib/virtuoso/db
ENV RUN_CONFIG=/opt/virtuoso-opensource/install/config
RUN export PATH=$PATH:/opt/virtuoso-opensource/bin
RUN mkdir $var1/install
COPY ./install $var1/install
WORKDIR $VIRT_db
CMD ["/opt/virtuoso-opensource/bin/init.sh"]
The workspace is /home/jenkins/Docker, and my guess is that I am running the docker build command from the $WORKSPACE directory while it should run from the virtuoso directory.
My question is: how do I build these images from the Jenkinsfile?
Thanks in advance.

The easiest way to solve this is to enter the proper folder in the script before executing the docker build command. COPY paths in a Dockerfile are resolved relative to the build context (the final . argument), not relative to the Dockerfile's location, which is why COPY ./install fails when the build runs from $WORKSPACE. For example:
stage('Building test images') {
    steps {
        sh '''
            cd $WORKSPACE/build/virtuoso
            docker build -t virtuoso .
        '''
    }
}
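
Alternatively, keep a single-line step and pass the directory itself as the build context; a minimal sketch assuming the same layout as in the question:

stage('Building test images') {
    steps {
        // The last argument is the build context; COPY paths in the Dockerfile
        // resolve against it, and <context>/Dockerfile is used by default
        sh 'docker build -t virtuoso $WORKSPACE/build/virtuoso'
    }
}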

Below is the answer:
stage('Build images') {
    echo "workspace directory is ${workspace}"
    dir("$workspace/build/virtuoso") {
        sh 'docker build -t virtuoso -f $WORKSPACE/build/virtuoso/Dockerfile .'
    }
    dir("$workspace/build/wildfly") {
        sh 'docker build -t wildfly -f $WORKSPACE/build/wildfly/Dockerfile .'
    }
    dir("$workspace/build/postgres") {
        sh 'docker build -t postgres -f $WORKSPACE/build/postgres/Dockerfile .'
    }
}
Thanks for helping me out.
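
Since the three stanzas above differ only in the image name, the same idea can be written as a loop; a minimal scripted-pipeline sketch:

stage('Build images') {
    // Build each image from its own directory so COPY paths resolve correctly
    for (img in ['virtuoso', 'wildfly', 'postgres']) {
        dir("${workspace}/build/${img}") {
            sh "docker build -t ${img} ."
        }
    }
}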

Related

Docker Multi Stage Build access Test Reports in Jenkins when Tests Fail

I am doing a multi-stage build in Docker to separate the app from the tests. At some point in my Dockerfile I run:
RUN pytest --junit=test-reports/junit.xml
In my Jenkinsfile, respectively, I do:
def app = docker.build("app:${BUILD_NUMBER}", "--target test .")
app.inside {
    sh "cp -r /app/test-reports test-reports"
}
junit 'test-reports/junit.xml'
Now if my tests fail, the build fails, which is good. But the rest of the stage is not executed, i.e. I don't have access to the test-reports folder. How can I manage that?
I resolved a similar task by using an always block after the build stage.
Please check whether the code below helps.
always {
    script {
        sh '''#!/bin/bash
            docker create -it --name test_report app:${BUILD_NUMBER} /bin/bash
            docker cp test_report:/app/test-reports ./test-reports
            docker rm -f test_report
        '''
    }
    junit 'test-reports/junit.xml'
}
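
Note that in a declarative pipeline, always has to sit inside a post block (of the stage or of the whole pipeline); a minimal sketch of the wrapping:

post {
    always {
        // the script block and junit step shown above go here
    }
}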

Copy build artifacts from inside Docker to host

This is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo '####################################################'
                echo 'Building Docker container'
                echo '####################################################'
                script {
                    sh 'docker build -t my-gcc:1.0 .'
                }
            }
        }
        stage('Run') {
            steps {
                echo '##########################################################'
                echo 'Running Docker Image'
                echo '##########################################################'
                script {
                    sh 'docker run --privileged -i my-gcc:1.0'
                    sh 'docker cp my-gcc:1.0:/usr/src/myCppProject/build/*.hex .'
                }
            }
        }
        stage('Program') {
            steps {
                echo '#######################################################'
                echo 'Programming target'
                echo '#######################################################'
                script {
                    sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
                }
            }
        }
    }
}
The Docker image is built and then run; after this I would like to extract the hex file from the container to the Jenkins working directory so that I can flash it to the board.
But when I try to copy the file I get this error:
+ docker cp my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex .
Error: No such container:path: my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex
I tried to access other folders in the container and copy their content, but I always get the same error, so it seems that I cannot access any folder or file in the container.
What am I doing wrong?
Regards
Martin
Jenkins has some standard support for Docker; this is described in Using Docker with Pipeline in the Jenkins documentation. In particular, Jenkins knows how to use a Docker image that contains just tools, combined with the project's workspace directory. I'd use that support instead of trying to script docker cp.
That might look roughly like so:
pipeline {
    agent none
    stages {
        stage('Build') {
            // Jenkins will run `docker build` for you
            agent { dockerfile { args '--privileged' } }
            steps {
                // The current working directory is bind-mounted into the container;
                // the image's `ENTRYPOINT`/`CMD` is ignored.
                // Copy the file out of the container:
                sh "cp /usr/src/myCppProject/build/*.hex ."
            }
        }
        stage('Program') {
            agent any // so not in Docker
            steps {
                sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
            }
        }
    }
}
If you use this approach, also consider whether the main build sequence should run via Jenkins pipeline steps, via an sh invocation that runs a shell script, or via a Makefile, and whether a Dockerfile is actually the right tool. It might make sense to build a Docker image out of your customized compiler, but then use the Jenkins pipeline support to build the image for the target board rather than trying to do it all in a Dockerfile.
In the invocation you show, you can't directly docker cp a file out of an image; docker cp needs a container. When you start the container, use docker run --name to give it a name, then docker cp from that container name. Note that docker cp does not expand glob patterns like *.hex, so copy the whole build directory instead:
sh 'docker run --name builder ... my-gcc:1.0'
sh 'docker cp builder:/usr/src/myCppProject/build ./build'
sh 'docker rm builder'

Build and Run Docker Container in Jenkins

I need to run a Docker container in Jenkins so that installed libraries like pycodestyle are available in the following steps.
I successfully built the Docker container (see the Dockerfile for details).
How do I access the container so that I can use it in the next step? (Please look for the >> << markers in the Build step below.)
Thanks
stage('Build') {
    // Install python libraries from requirements.txt (check the Dockerfile for more detail)
    sh "docker login -u '${DOCKER_USR}' -p '${DOCKER_PSW}' ${DOCKER_REGISTRY}"
    sh "docker build \
        --tag '${DOCKER_REGISTRY}/${DOCKER_TAG}:latest' \
        --build-arg HTTPS_PROXY=${PIP_PROXY} ."
    >> sh "docker run -ti ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest sh" <<
}
stage('Linting') {
    sh '''
        awd=$(pwd)
        echo '===== Linting START ====='
        for file in $(find . -name '*.py'); do
            filename=$(basename $file)
            if [[ ${file:(-3)} == ".py" ]] && [[ $filename = *"test"* ]] ; then
                echo "perform PEP8 lint (python pylint blah) for $filename"
                cd $awd && cd $(dirname "${file}") && pycodestyle "${filename}"
            fi
        done
        echo '===== Linting END ====='
    '''
}
You need to mount the workspace of your Jenkins job (containing your Python project) as a volume into your container (see the docker run -v option) and then run the "next step" build step inside this container. You can do this by providing a shell script as part of your project's source code which performs the "next step", or by writing this script in a previous build stage.
It would be something like this:
sh "chmod +x build.sh"
sh "docker run -v $WORKSPACE:/workspace ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest /workspace/build.sh"
build.sh is an executable script that is part of your project's workspace and performs the "next step".
$WORKSPACE is the folder used by your Jenkins job (normally /var/jenkins_home/jobs/<job name>/workspace); it is provided by Jenkins as a build variable.
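
For illustration, a hypothetical build.sh matching the linting step from the question might look like this (the pycodestyle call and the test-file filter come from the question; everything else is an assumption):

#!/bin/bash
# Hypothetical build.sh: lint every test file in the mounted workspace
set -e
cd /workspace
for file in $(find . -name '*test*.py'); do
    echo "perform PEP8 lint for $file"
    pycodestyle "$file"
done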
Please note: This solution requires that the Docker daemon is running on the same host as Jenkins! Otherwise the workspace will not be available to your container.
Another solution would be to run Jenkins itself as a Docker container, so you can easily share the Jenkins home/workspaces with the containers you run within your build jobs, as described here:
Running Jenkins tests in Docker containers build from dockerfile in codebase

docker volume issue inside jenkins steps

Inside a Jenkins stage's steps, I am trying to build an image, run the container with a volume, and then stash a file in order to unstash it afterwards.
But unfortunately it doesn't create the volume and doesn't stash.
Here is the Jenkins code:
stage('Android') {
    agent {
        label buildLabel()
    }
    steps {
        checkout scm
        sh '''
            mkdir -p `pwd`/build_target
            docker build -t android_build -f docker/Dockerfile.android .
            docker run --rm -v `pwd`/build_target:/home/gradle/reactapp/android/app/build/outputs/apk/ android_build
            ls -la `pwd`/build_target/*
        '''
        stash includes: 'build_target/app-release.apk', name: 'apk'
        androidApkUpload apkFilesPattern: '**/app-release.apk', googleCredentialsId: 'jenkins_apk_upload', trackName: 'internal'
    }
}
My solution would be to configure an environment variable under Global properties on http://jenkins-server/configure.
In the build script, I can use that variable to get the shared path on the host.
On every agent host, I mount the same NFS path to it:
mount -t nfs 10.6.188.1:/root /root/pacotest1
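
A minimal sketch of how the steps from the question might use such a variable (the name SHARED_DIR is hypothetical; the commands follow the question's script):

sh '''
    mkdir -p ${SHARED_DIR}/build_target
    docker build -t android_build -f docker/Dockerfile.android .
    docker run --rm -v ${SHARED_DIR}/build_target:/home/gradle/reactapp/android/app/build/outputs/apk/ android_build
    ls -la ${SHARED_DIR}/build_target/*
'''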

Docker in Docker - volumes not working: Full of files in 1st level container, empty in 2nd tier

I am running Docker in Docker (specifically to run Jenkins, which then runs Docker builder containers to build project images, and then runs these together with the test containers).
This is how the jenkins image is built and started:
docker build --tag bb/ci-jenkins .
mkdir $PWD/volumes/
docker run -d --network=host \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
-v $PWD/volumes/jenkins_home:/var/jenkins_home \
--name ci-jenkins bb/ci-jenkins
Jenkins works fine. But then there is a Jenkinsfile based job, which runs this:
docker run -i --rm -v /var/jenkins_home/workspace/forkMV_jenkins-VOLTRON-3057-KQXKVJNXOU4DGSUG3P27IR3QEDHJ6K7HPDEZYN7W6HCOTCH3QO3Q:/tmp/build collab/collab-services-api-mvn-builder:2a074614 mvn -B -T 2C install
And this ends up with an error:
The goal you specified requires a project to execute but there is no POM in this directory (/tmp/build).
When I docker exec -it into the container with sh, /tmp/build is empty. But when I am in the Jenkins container, the path /var/jenkins_home/...QO3Q/ exists and contains the workspace with all the files checked out and prepared.
So I wonder: how can Docker happily mount the volume and then it's empty?
What's even more confusing, this setup works for my colleague on Mac.
I am on Linux, Ubuntu 17.10, Docker latest.
After some research, calming down and thinking, I realized that Docker-in-Docker is not really so much "-in-" as it is rather "Docker-next-to-Docker".
The trick to make a container able to run another container is sharing /var/run/docker.sock through a volume: -v /var/run/docker.sock:/var/run/docker.sock
And then the docker client in the container actually calls Docker on the host.
The volume source path (left of the :) does not refer to the middle container, but to the host filesystem!
After realizing that, the fix is to make the paths to the Jenkins workspace directory the same in the host filesystem and the Jenkins (middle) container:
docker run -d --network=host \
...
-v /var/jenkins_home:/var/jenkins_home
And voilà! It works. (I created a symlink instead of moving the directory; that seems to work too.)
It is a bit complicated if you're looking at a colleague's Mac, because Docker is implemented a bit differently there: it runs in an Alpine Linux based VM but pretends not to. (Not 100% sure about that.) On Windows, I read that the paths have another layer of abstraction: a mapping from C:/somewhere/... to a Linux-like path.
I hope I'll save someone hours of figuring out :)
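
Concretely, the Jenkins startup from the question would change so that the workspace path is identical on the host and inside the Jenkins container (a sketch based on the commands above):

docker run -d --network=host \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/bin/docker:/usr/bin/docker \
    -v /var/jenkins_home:/var/jenkins_home \
    --name ci-jenkins bb/ci-jenkins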
Alternative Solution with Docker cp
I was facing the same problem of mounting volumes from a build that runs in a Docker container running on a Jenkins server in Kubernetes. As we use Docker-in-Docker (dind), I couldn't mount the volume in either of the ways proposed here. I'm still not sure what the reason is, but I found an alternative: use docker cp to copy the build artifacts.
Multi-stage Docker Image for Tests
I'm using the following Dockerfile stage for unit and integration tests.
#
# Build stage for building the jar
#
FROM maven:3.2.5-jdk-8 as builder
MAINTAINER marcello.desales#gmail.com
# Copy only what is needed to pull the dependencies from the registry
ADD ./pom.xml /opt/build/pom.xml
# As some entries in pom.xml refer to the settings, keep it the same
ADD ./settings.xml /opt/build/settings.xml
WORKDIR /opt/build/
# Prepare by downloading dependencies
RUN mvn -s settings.xml -B -e -C -T 1C org.apache.maven.plugins:maven-dependency-plugin:3.0.2:go-offline
# Run the full packaging after copying the source
ADD ./src /opt/build/src
RUN mvn -s settings.xml install -P embedded -Dmaven.test.skip=true -B -e -o -T 1C verify
# Building only this stage can be done with the --target builder switch
# 1. Build: docker build -t config-builder --target builder .
# When running this first-stage image, only the unit tests run by default.
# Override them by removing the "!" to run the integration tests:
# 2. docker run --rm -ti config-builder mvn -s settings.xml -Dtest="*IT,*IntegrationTest" test
CMD mvn -s settings.xml -Dtest="!*IT,!*IntegrationTest" -P jacoco test
Jenkins Pipeline for Tests
My Jenkins pipeline has a stage for running the tests (unit + integration) in parallel.
I build the test image in one stage and then run the tests in parallel.
I use docker cp to copy the build artifacts from inside the test container, which can be started after running the tests in a named container.
Alternatively, you can use Jenkins stash to carry the test results to a post stage.
At this point, I solved the problem with docker run --name tests-SHA, followed by docker start tests-SHA and docker cp tests-SHA:/path ., where . is the current workspace directory; this gives a result similar to a Docker volume mounted at the current directory.
stage('Build Test Image') {
    steps {
        script {
            currentBuild.displayName = "Test Image"
            currentBuild.description = "Building the docker image for running the test cases"
        }
        echo "Building docker image for tests from build stage ${env.GIT_COMMIT}"
        sh "docker build -t tests:${env.GIT_COMMIT} -f ${paas.build.docker.dockerfile.runtime} --target builder ."
    }
}
stage('Tests Execution') {
    parallel {
        stage('Execute Unit Tests') {
            steps {
                script {
                    currentBuild.displayName = "Unit Tests"
                    currentBuild.description = "Running the unit test cases"
                }
                sh "docker run --name tests-${env.GIT_COMMIT} tests:${env.GIT_COMMIT}"
                sh "docker start tests-${env.GIT_COMMIT}"
                sh "docker cp tests-${env.GIT_COMMIT}:/opt/build/target ."
                // https://jenkins.io/doc/book/pipeline/jenkinsfile/#advanced-scripted-pipeline#using-multiple-agents
                stash includes: '**/target/*', name: 'build'
            }
        }
        stage('Execute Integration Tests') {
            when {
                expression { paas.integrationEnabled == true }
            }
            steps {
                script {
                    currentBuild.displayName = "Integration Tests"
                    currentBuild.description = "Running the integration test cases"
                }
                sh "docker run --rm tests:${env.GIT_COMMIT} mvn -s settings.xml -Dtest=\"*IT,*IntegrationTest\" -P jacoco test"
            }
        }
    }
}
A better approach is to use the Jenkins Docker plugin and let it do all the mounting for you; just add -v /var/run/docker.sock:/var/run/docker.sock to the arguments of its inside function.
E.g.:
docker.build("bb/ci-jenkins")
docker.image("bb/ci-jenkins").inside('-v /var/run/docker.sock:/var/run/docker.sock') {
    ...
}
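
Put together, a minimal scripted-pipeline sketch of this approach (the image name comes from the question; the inner sh step is illustrative):

node {
    checkout scm
    // docker.build both builds the image and returns a handle to it
    def image = docker.build('bb/ci-jenkins')
    // Steps inside run in the container; the workspace is mounted automatically
    image.inside('-v /var/run/docker.sock:/var/run/docker.sock') {
        sh 'docker version' // the inner docker client talks to the host daemon
    }
}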
