Jenkins Docker Pipeline Plugin: Get image building logs

Is there a way to extract the build logs when building a Docker image in a Jenkins pipeline using the Docker Pipeline plugin's docker.build()?
I want to be able to save these logs somewhere other than the Jenkins console output.
I am able to get the build logs of other shell commands (such as mvn install) very easily by using the sh() step and saving the console output into a variable:
BUILD_JAR_LOGS = sh(
    script: 'mvn install',
    returnStdout: true
)
I know that I could use the same method with docker and do something like this:
BUILD_DOCKER_IMAGE_LOGS = sh(
    script: 'docker build --network=host .',
    returnStdout: true
)
However, I'd prefer to continue using the Docker Pipeline Plugin methods to achieve the actual image building.
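docker.build() does not itself return the build log. One workaround (a sketch, not an official plugin API; the image name, the log filename, and the 2>&1 redirect are assumptions here) is to run the build via sh to capture the output, then hand the already-built image back to the plugin with docker.image() so later stages can keep using the plugin methods:

```groovy
// Sketch: capture the build log ourselves, then keep using the
// Docker Pipeline plugin object for later stages (push, inside, ...).
def imageName = "myapp:${env.BUILD_NUMBER}"   // hypothetical tag

// BuildKit writes its progress to stderr, hence the 2>&1.
BUILD_DOCKER_IMAGE_LOGS = sh(
    script: "docker build -t ${imageName} . 2>&1",
    returnStdout: true
)
writeFile file: 'docker-build.log', text: BUILD_DOCKER_IMAGE_LOGS
archiveArtifacts artifacts: 'docker-build.log'

// docker.image() wraps an image that already exists locally.
def dockerImage = docker.image(imageName)
```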

Related

Docker Multi Stage Build access Test Reports in Jenkins when Tests Fail

I am doing a multi-stage build in Docker to separate the app from testing. At some point in my Dockerfile I run:
RUN pytest --junitxml=test-reports/junit.xml
In my Jenkinsfile, correspondingly, I do:
def app = docker.build("app:${BUILD_NUMBER}", "--target test .")
app.inside {
    sh "cp -r /app/test-reports test-reports"
}
junit 'test-reports/junit.xml'
Now if my tests fail, the build fails, which is good. But the rest of the stage is not executed, i.e. I don't have access to the test-reports folder. How can I manage that?
I resolved a similar task by using a post { always { ... } } block after the build stage.
Please check if the code below can help.
post {
    always {
        script {
            sh '''#!/bin/bash
            # Recreate a container from the built image so the reports
            # can be copied out even when the tests failed the build.
            docker create -it --name test_report app:${BUILD_NUMBER} /bin/bash
            docker cp test_report:/app/test-reports ./test-reports
            docker rm -f test_report
            '''
        }
        junit 'test-reports/junit.xml'
    }
}

How to pass Jenkins build parameter as arguments to dockerfile in declarative pipeline

I have a declarative Jenkins pipeline. I am trying to pass Jenkins build parameters such as jira_id and repository_name, along with credentials, to the Dockerfile as build arguments.
However, on execution the Dockerfile does not receive any of the arguments passed below.
May I please know the correct way of doing it?
Jenkinsfile Stage
stage('Create Image') {
    steps {
        script {
            withCredentials([usernamePassword(credentialsId: 'admin', passwordVariable: 'admin_pw', usernameVariable: 'admin_user')]) {
                dir("${WORKSPACE}/jenkins/") {
                    def repo = docker.build("latest", "--build-arg admin_user=$admin_user --build-arg admin_pw=$admin_pw --build-arg jira_id=$jira_id --build-arg repository_name=$repository_name --no-cache .")
                }
            }
        }
    }
}
Dockerfile
FROM centos:8
RUN echo "$repository_name, $jira_id, $admin_user"
There is a difference between using ARG and ENV.
ARG values can be set during the image build with --build-arg, and you can't access them anymore once the image is built.
ENV values are accessible during the build and afterwards, once the container runs; you can set ENV values in your Dockerfile. Note that a --build-arg is only picked up if the Dockerfile declares a matching ARG.
The Jenkinsfile stage is correct.
Changes in Dockerfile
FROM centos:8
ARG jira_id
ARG repository_name
ARG admin_user
ENV jira_id=$jira_id
ENV repository_name=$repository_name
ENV admin_user=$admin_user
RUN echo "$repository_name, $jira_id, $admin_user"
Found the solution. In the Dockerfile, I have to receive each of the variables using ARG, for example:
ARG jira_id
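Putting that together, a minimal Dockerfile that receives all the build args from the stage above could look like this sketch (the echo line is only there to show that the values arrive at build time):

```dockerfile
FROM centos:8
# Each build arg must be declared with ARG before it can be read;
# the names match the --build-arg keys in the Jenkinsfile.
ARG jira_id
ARG repository_name
ARG admin_user
ARG admin_pw
RUN echo "$repository_name, $jira_id, $admin_user"
```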
While the above answers are correct, they fail if you need dynamic naming in the first line of the Dockerfile, i.e. if you have to pull from multiple repos for the same build it could look like this:
FROM $corp_repo/centos:8
To make it work, I used sed together with an environment variable defined outside the Dockerfile, in my Jenkins stage, like this:
env.CORP_REPO = "FROM ${params.corp_repo}/centos:latest"
My jenkins stage looks something like this:
stage('Build docker') {
    daArtifactoryHelper.artifactoryDockerLogin(dockerArtifactoryAddress, artifactoryCredentialId)
    env.CORP_REPO = "FROM ${params.corp_repo}/centos:latest"
    sh 'sed -i "1c\$CORP_REPO" Dockerfile'
    sh "cat Dockerfile"
    sh "docker build -t ${fullImageName}:${version} ."
    sh "docker push ${fullImageName}:${version}"
}
Here, sh 'sed -i "1c\$CORP_REPO" Dockerfile' replaces the first line of the Dockerfile with the dynamic FROM line set via env.
You can play around with it. Again, this works for any line number, as you are defining the value outside the scope of your Dockerfile.
Cheers!
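The 1c replacement can be exercised outside Jenkins. A small shell sketch (the registry name is made up) showing what the sed line does to the Dockerfile:

```shell
# Start from a Dockerfile whose first line is a placeholder FROM.
printf 'FROM placeholder/centos:8\nRUN echo hello\n' > Dockerfile

# In the pipeline this value comes from params.corp_repo.
CORP_REPO="FROM registry.example.com/centos:latest"

# GNU sed's `1c` command replaces line 1 with the given text.
sed -i "1c\\$CORP_REPO" Dockerfile

head -n1 Dockerfile   # the FROM line is now the dynamic one
```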

Running a Docker container from a Jenkins pipeline, and capture the output

In a Jenkinsfile, how can I run a Docker container and capture the output, without reverting to sh instructions?
This is how I build my container image during the Build stage:
dockerImage = docker.build(env.DOCKER_COORDS, "--build-arg BUILD_VERSION=${env.BUILD_VERSION} .")
And this is how I push the image in a later Publish stage:
withDockerRegistry(credentialsId: env.DOCKER_REG_CREDS_ID) {
    dockerImage.push('latest')
}
In between the Build and Publish stages, in a Test stage, I would like to validate that the output from the container, when passing a --version argument, is equal to ${env.BUILD_VERSION}.
How can I run the container and capture the output, without having to use sh "docker ..." instructions again?
I am new to this, but according to the documentation of the Docker Pipeline plugin you can use Image.run([args]); in your case it would be dockerImage.run().
To pass environment variables:
dockerImage.run(['-e your_variable=X'])
Normally you can run a container and capture the output using the sh step (on a Linux agent, and bat for Windows). For example, to retrieve the Python version of the official python Docker image, it would look like this:
def version = sh label: '', returnStdout: true, script: 'docker run python python --version'
Then you can just compare the output of the command with ${env.BUILD_VERSION}. This shouldn't be too different for your Docker image.
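Combining that with the original question, a Test stage between Build and Publish could look like this sketch (whether the image prints its version when passed --version depends on the image's entrypoint, which is an assumption here):

```groovy
stage('Test') {
    steps {
        script {
            // Run the freshly built image once and capture its stdout.
            def output = sh(
                script: "docker run --rm ${env.DOCKER_COORDS} --version",
                returnStdout: true
            ).trim()
            if (output != env.BUILD_VERSION) {
                error "Expected ${env.BUILD_VERSION} but container reported '${output}'"
            }
        }
    }
}
```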

Cannot execute a shell script with docker command from Jenkins Pipeline step

I am using Jenkins version 2.121.1 with Pipeline, on macOS High Sierra.
I have a shell script called build_docker_image.sh that builds a Docker image using the following command:
docker build -t test_api:1 -f test-dockerfile .
test-dockerfile is a Dockerfile containing the instructions to build the image.
From the CLI the whole setup works.
However, when I run it from the Jenkins server pipeline context, it fails at the above line with the error: docker: command not found.
The step that triggers from Jenkins server is simple. Call the script:
stage('Build-Docker-Image') {
    steps {
        sh '/path/to/build_docker_image.sh'
    }
}
In the Jenkinsfile I made sure $PATH included the path to Docker.
The issue was that I was appending the actual docker binary, /Applications/Docker.app/Contents/Resources/bin/docker, instead of its directory, /Applications/Docker.app/Contents/Resources/bin.
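The distinction matters because entries in $PATH must be directories containing executables, not the executables themselves. A tiny illustration with a stub binary (the paths here are made up):

```shell
# Create a directory containing a fake `docker` executable.
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho docker-stub\n' > /tmp/demo-bin/docker
chmod +x /tmp/demo-bin/docker

# Wrong: PATH="$PATH:/tmp/demo-bin/docker" would add the binary, not a directory.
# Right: add the directory that holds it.
PATH="/tmp/demo-bin:$PATH"
command -v docker
```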

Jenkins pipeline files present in sh blocks not present in later steps

Here is my stage block
stage('Test') {
    steps {
        echo "Testing ${env.JOB_NAME}:${env.BUILD_ID} on ${env.JENKINS_URL}.."
        sh """
            docker run -v /tmp/work/report:/report ${env.REPO}:${env.BUILD_ID} ./manage.py jenkins --enable-coverage --output-dir=/report
            ls /work/report
            cat /work/report/*.xml
        """
        archiveArtifacts artifacts: '/work/report/*.xml'
        // junit '/work/report/*.xml'
    }
}
The files are present in the 'sh' block, as the output of the ls and cat show. However, in the next step 'archiveArtifacts' and (if I enable it) junit, the files are not found. What am I missing?
You are mounting the local folder /tmp/work/report to /report inside the container, but then you are testing /work/report.
Also make sure to test it outside the pipeline by doing the docker run manually: when the container exits, anything written inside it (rather than to the mounted volume) is gone.
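A corrected sketch of the stage, assuming the report should end up under the job workspace (the report path is an assumption):

```groovy
stage('Test') {
    steps {
        // Mount a workspace folder so the report survives the container,
        // then read it back from the same relative path on the agent.
        sh """
            mkdir -p report
            docker run -v \$WORKSPACE/report:/report ${env.REPO}:${env.BUILD_ID} ./manage.py jenkins --enable-coverage --output-dir=/report
        """
        archiveArtifacts artifacts: 'report/*.xml'
        junit 'report/*.xml'
    }
}
```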
