Running a Docker container from a Jenkins pipeline and capturing the output

In a Jenkinsfile, how can I run a Docker container and capture its output, without resorting to sh instructions?
This is how I build my container image during the Build stage:
dockerImage = docker.build(env.DOCKER_COORDS, "--build-arg BUILD_VERSION=${env.BUILD_VERSION} .")
And this is how I push the image in a later Publish stage:
withDockerRegistry(credentialsId: env.DOCKER_REG_CREDS_ID) {
    dockerImage.push('latest')
}
In between the Build and Publish stages, in a Test stage, I would like to validate that the output from the container, when passing a --version argument, is equal to ${env.BUILD_VERSION}.
How can I run the container and capture the output, without having to use sh "docker ..." instructions again?

I am new to this, but according to the documentation of the Docker Pipeline plugin, you can use Image.run([args]); in your case that would be dockerImage.run().
To pass environment variables:
dockerImage.run('-e your_variable=X')
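Note that run() returns a Container object rather than the process output, so to answer the version-check part of the question you still need to read the output back somehow. A sketch (the two-argument run(args, command) form and the Container.id / Container.stop() members are from the plugin docs; verify against your plugin version), which still shells out to docker logs because the plugin does not expose the container's stdout directly:
def container = dockerImage.run('', '--version')  // no extra run args; '--version' is passed to the image's entrypoint
def output = sh(returnStdout: true, script: "docker logs ${container.id}").trim()
container.stop()  // stops and removes the container
if (output != env.BUILD_VERSION) {
    error "Expected version ${env.BUILD_VERSION}, got: ${output}"
}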

Normally you can run a container and capture the output using the sh step (on a Linux agent; use bat for Windows). For example, to retrieve the Python version from the official python image:
def version = sh(returnStdout: true, script: 'docker run python python --version').trim()
Then you can compare the output of the command with ${env.BUILD_VERSION} (the .trim() strips the trailing newline so the comparison is exact). This shouldn't be too different for your Docker image.
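Tying this back to the question, the Test stage could look like the following sketch (it assumes the image tagged as env.DOCKER_COORDS prints its version when passed --version):
stage('Test') {
    steps {
        script {
            def output = sh(returnStdout: true, script: "docker run --rm ${env.DOCKER_COORDS} --version").trim()
            if (output != env.BUILD_VERSION) {
                error "Version mismatch: expected ${env.BUILD_VERSION}, got ${output}"
            }
        }
    }
}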

Related

Jenkins Docker Pipeline Plugin: Get image building logs

Is there a way to extract the build logs when building a Docker image in a Jenkins pipeline using the "Docker Pipeline Plugin" method docker.build()?
I want to be able to save these logs somewhere other than the Jenkins console output.
I am able to get the build logs of other shell commands (such as mvn install) very easily by using the sh() method and saving the console output into a variable:
BUILD_JAR_LOGS = sh(
    script: 'mvn install',
    returnStdout: true
)
I know that I could use the same method with docker and do something like this:
BUILD_DOCKER_IMAGE_LOGS = sh(
    script: 'docker build --network host .',
    returnStdout: true
)
However, I'd prefer to continue using the Docker Pipeline Plugin methods to achieve the actual image building.
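One option that keeps docker.build is the tee step (a sketch, assuming the tee step from the Pipeline: Basic Steps plugin is available on your controller; the image name is illustrative). It duplicates the console output of the nested steps into a workspace file, which you can then read back:
tee('docker-build.log') {
    def image = docker.build("myapp:${env.BUILD_NUMBER}")
}
BUILD_DOCKER_IMAGE_LOGS = readFile 'docker-build.log'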

Docker Multi Stage Build access Test Reports in Jenkins when Tests Fail

I am doing a multi stage build in docker to separate the app from testing. Now at some point in my Dockerfile I run:
RUN pytest --junitxml=test-reports/junit.xml
And in my Jenkinsfile I do:
def app = docker.build("app:${BUILD_NUMBER}", "--target test .")
app.inside {
    sh "cp -r /app/test-reports test-reports"
}
junit 'test-reports/junit.xml'
Now if my tests fail, the build fails, which is good. But the rest of the stage is not executed, i.e. I don't have access to the test-reports folder. How can I manage that?
I resolved a similar task by using an always block after the build stage.
Please check if the code below helps.
post {
    always {
        script {
            sh '''#!/bin/bash
                docker create -it --name test_report app:${BUILD_NUMBER} /bin/bash
                docker cp test_report:/app/test-reports ./test-reports
                docker rm -f test_report
            '''
        }
        junit 'test-reports/junit.xml'
    }
}
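An alternative, if you can move the pytest invocation out of the Dockerfile's RUN step and into the pipeline, is to run the tests via Image.inside and publish the report in a finally block, so the report survives a test failure. A sketch, assuming the test target of the image contains the app and pytest under /app as in the question:
// build the test target only; the tests themselves run below, not in a RUN step
def app = docker.build("app:${BUILD_NUMBER}", "--target test .")
try {
    app.inside {
        // the job workspace is mounted at $WORKSPACE inside the container,
        // so writing the report there makes it visible to the junit step
        sh 'cd /app && pytest --junitxml="$WORKSPACE/test-reports/junit.xml"'
    }
} finally {
    junit 'test-reports/junit.xml'  // runs even when pytest fails the sh step
}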

How to pass Jenkins build parameter as arguments to dockerfile in declarative pipeline

I have a declarative Jenkins pipeline. I am trying to pass Jenkins build parameters, like jira_id and repository name, along with credentials, to the Dockerfile as arguments.
However, on execution the Dockerfile does not receive any of the arguments passed below.
May I please know the correct way of doing it?
Jenkinsfile Stage
stage('Create Image'){
    steps{
        script{
            withCredentials([usernamePassword(credentialsId: 'admin', passwordVariable: 'admin_pw', usernameVariable: 'admin_user')]){
                dir("${WORKSPACE}/jenkins/"){
                    def repo = docker.build("latest", "--build-arg admin_user=$admin_user --build-arg admin_pw=$admin_pw --build-arg jira_id=$jira_id --build-arg repository_name=$repository_name --no-cache .")
                }
            }
        }
    }
}
Dockerfile
FROM centos:8
RUN echo "$repository_name, $jira_id, $admin_user"
There is a difference between using ARG and ENV.
ARG values can be set during the image build with --build-arg, and you can't access them anymore once the image is built.
ENV values are accessible during the build, and afterwards once the container runs. You can set ENV values in your Dockerfile.
The Jenkinsfile stage is correct.
Changes in the Dockerfile (each ENV needs a matching ARG declaration, otherwise the --build-arg value never reaches the build):
FROM centos:8
ARG jira_id
ARG repository_name
ARG admin_user
ENV jira_id=$jira_id
ENV repository_name=$repository_name
ENV admin_user=$admin_user
RUN echo "$repository_name, $jira_id, $admin_user"
Found the solution. In the Dockerfile, I have to receive all the variables using ARG:
ARG jira_id
While the above answers are correct, they fail if you need dynamic naming in the first line of the Dockerfile, i.e. if you have multiple repos to pull from for the same build, it could look like this:
FROM $corp_repo/centos:8
To make it work, I used sed together with an environment variable defined outside the Dockerfile, in the Jenkins stage. Note the FROM prefix, since sed's 1c command replaces the entire first line:
env.CORP_REPO = "FROM ${params.corp_repo}/centos:latest"
My Jenkins stage looks something like this:
stage('Build docker') {
    daArtifactoryHelper.artifactoryDockerLogin(dockerArtifactoryAddress, artifactoryCredentialId)
    env.CORP_REPO = "FROM ${params.corp_repo}/centos:latest"
    sh 'sed -i "1c\$CORP_REPO" Dockerfile'
    sh "cat Dockerfile"
    sh "docker build -t ${fullImageName}:${version} ."
    sh "docker push ${fullImageName}:${version}"
}
Here, sh 'sed -i "1c\$CORP_REPO" Dockerfile' replaces the first line of the Dockerfile with the dynamic repo name set via the environment variable.
You can play around with it.
Again, this works for any line number, as you are defining the values outside your Dockerfile.
Cheers!
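As an aside, Docker 17.05+ also supports ARG before FROM, which avoids rewriting the Dockerfile with sed altogether:
ARG corp_repo
FROM $corp_repo/centos:8
You would then pass the value from the pipeline, e.g. sh "docker build --build-arg corp_repo=${params.corp_repo} -t ${fullImageName}:${version} ." (reusing the names from the stage above).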

Run commands inside Docker container without mounting project directory

My Jenkins pipeline uses the docker-workflow plugin. It builds a Docker image and tags it app. The build step fetches some dependencies and bakes them into the image along with my app.
I want to run two commands inside a container based on that image. The commands should be executed in the built environment, with access to the dependencies. I tried using Image.inside, but it seems to fail because inside mounts the project directory over the working directory, and so the dependencies aren't available.
docker.image("app").inside {
sh './run prepare-tests'
sh './run tests'
}
I tried using docker.script.withDockerContainer too, but the commands don't seem to run inside the container. The same seems to be true for Image.withRun. With that I could at least specify a command, but it seems I'd have to specify both commands in one statement. Also, withRun doesn't seem to fail the build if the command doesn't exit cleanly.
docker
    .image("app")
    .withRun('', 'bash -c "./app prepare-tests && ./app tests"') { container ->
        sh "exit \$(docker wait ${container.id})"
    }
Is there a way to use Image.inside without mounting the project directory? Or is there a more elegant way of doing this?
The docker DSL, like docker.image().inside {}, mounts the Jenkins job workspace directory into the container and makes it the working directory, which overrides the WORKDIR set in the Dockerfile.
You can verify that in the Jenkins console output.
1) cd to the image's WORKDIR first
docker.image("app").inside {
sh '''
cd <WORKDIR of image specifyed in Dockerfile>
./run prepare-tests
./run tests
'''
}
2) Run the container via sh rather than via the docker DSL; unlike withRun, the sh step also fails the build when the command exits with a non-zero code:
sh '''
    docker run -i app bash -c "./run prepare-tests && ./run tests"
'''

Cannot execute a shell script with docker command from Jenkins Pipeline step

I am using Jenkins version 2.121.1 with Pipeline on macOS High Sierra.
I have a shell script called build_docker_image.sh that builds a Docker image using the following command:
docker build -t test_api:1 -f test-dockerfile .
test-dockerfile is a Dockerfile and contains the instructions to build the image.
From the CLI the whole setup works!
However, when I run it from the Jenkins server Pipeline context, it fails at the above line with the error: docker: command not found
The step that triggers it from the Jenkins server is simple. It calls the script:
stage('Build-Docker-Image') {
    steps {
        sh '/path/to/build_docker_image.sh'
    }
}
In the Jenkinsfile, I made sure $PATH includes the path to Docker.
The issue was that I was appending the path to the docker binary itself, /Applications/Docker.app/Contents/Resources/bin/docker, instead of its directory, /Applications/Docker.app/Contents/Resources/bin.
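For reference, withEnv's PATH+IDENTIFIER syntax prepends a directory to PATH without overwriting it; a minimal sketch using the Docker Desktop location from this question:
stage('Build-Docker-Image') {
    steps {
        // PATH+DOCKER prepends the directory to the existing PATH for the nested steps
        withEnv(['PATH+DOCKER=/Applications/Docker.app/Contents/Resources/bin']) {
            sh '/path/to/build_docker_image.sh'
        }
    }
}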
