Cannot execute a shell script with docker command from Jenkins Pipeline step

I am using Jenkins version 2.121.1 with Pipeline on macOS High Sierra.
I have a shell script called build_docker_image.sh that builds a Docker image using the following command:
docker build -t test_api:1 -f test-dockerfile .
test-dockerfile is a Dockerfile with the instructions for building the image.
From the CLI, the whole setup works.
However, when I run it from the Jenkins server's Pipeline context, it fails at the line above with the error: docker: command not found
The step that triggers it from the Jenkins server is simple; it just calls the script:
stage ('Build-Docker-Image') {
    steps {
        sh '/path/to/build_docker_image.sh'
    }
}
In the Jenkinsfile, I made sure $PATH includes the path to Docker.

The issue was that I was appending the docker binary itself, /Applications/Docker.app/Contents/Resources/bin/docker, to PATH instead of its containing directory, /Applications/Docker.app/Contents/Resources/bin.
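For reference, a minimal sketch of how the corrected stage could look, using withEnv to prepend the directory (not the binary) to PATH; the stage and script names are the ones from the question:

stage ('Build-Docker-Image') {
    steps {
        // prepend the directory that contains the docker binary, not the binary itself
        withEnv(['PATH+DOCKER=/Applications/Docker.app/Contents/Resources/bin']) {
            sh '/path/to/build_docker_image.sh'
        }
    }
}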

Related

Jenkins Docker Pipeline Plugin: Get image building logs

Is there a way to extract the build logs when building a Docker image in a Jenkins pipeline with the Docker Pipeline Plugin's docker.build() method?
I want to be able to save these logs somewhere other than the Jenkins console output.
I am able to get the build logs of other shell commands (such as mvn install) very easily by using the sh() method and saving the console output into a variable:
BUILD_JAR_LOGS = sh(
    script: 'mvn install',
    returnStdout: true
)
I know that I could use the same method with docker and do something like this:
BUILD_DOCKER_IMAGE_LOGS = sh(
    script: 'docker build --network=host .',
    returnStdout: true
)
However, I'd prefer to continue using the Docker Pipeline Plugin methods to achieve the actual image building.

Running a Docker container from a Jenkins pipeline, and capture the output

In a Jenkinsfile, how can I run a Docker container and capture the output, without reverting to sh instructions?
This is how I build my container image during the Build stage:
dockerImage = docker.build(env.DOCKER_COORDS, "--build-arg BUILD_VERSION=${env.BUILD_VERSION} .")
And this is how I push the image in a later Publish stage:
withDockerRegistry(credentialsId: env.DOCKER_REG_CREDS_ID) {
dockerImage.push('latest')
}
In between the Build and Publish stages, in a Test stage, I would like to validate that the output from the container, when passing a --version argument, is equal to ${env.BUILD_VERSION}.
How can I run the container and capture the output, without having to use sh "docker ..." instructions again?
I am new to this, but according to the documentation of the Docker Pipeline plugin you can use Image.run([args]); in your case that would be dockerImage.run().
To pass environment variables:
dockerImage.run('-e your_variable=X')
Normally you can run and capture the output using the sh method (on a Linux agent; use bat on Windows). For example, to retrieve the Python version from the official python image, it would look like this:
def version = sh label: '', returnStdout: true, script: 'docker run python python --version'
You can then compare the output of the command with ${env.BUILD_VERSION}. This shouldn't be too different for your own Docker image.
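Putting that together, a sketch of a possible Test stage; it assumes the image's entrypoint prints the version when passed --version, as described in the question:

stage('Test') {
    steps {
        script {
            // run the freshly built image once and capture whatever it prints
            def reported = sh(
                script: "docker run --rm ${env.DOCKER_COORDS} --version",
                returnStdout: true
            ).trim()
            if (reported != env.BUILD_VERSION) {
                error "Expected version ${env.BUILD_VERSION} but the container reported: ${reported}"
            }
        }
    }
}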

Run commands inside Docker container without mounting project directory

My Jenkins pipeline uses the docker-workflow plugin. It builds a Docker image and tags it app. The build step fetches some dependencies and bakes them into the image along with my app.
I want to run two commands inside a container based on that image. The commands should be executed in the built environment, with access to the dependencies. I tried using Image.inside, but it seems to fail because inside mounts the project directory over the working directory (?), so the dependencies aren't available.
docker.image("app").inside {
sh './run prepare-tests'
sh './run tests'
}
I tried using docker.script.withDockerContainer too, but the commands don't seem to run inside the container. The same seems to be true for Image.withRun. With withRun I can at least specify a command, but it seems I would have to specify both commands in one statement. Also, withRun doesn't seem to fail the build if the command doesn't exit cleanly.
docker
    .image("app")
    .withRun('', 'bash -c "./app prepare-tests && ./app tests"') { container ->
        sh "exit \$(docker wait ${container.id})"
    }
Is there a way to use Image.inside without mounting the project directory? Or is there a more elegant way of doing this?
The docker DSL, like docker.image().inside() {}, mounts the Jenkins job workspace directory into the container and makes it the working directory, which overrides the WORKDIR set in the Dockerfile.
You can verify that from the Jenkins console output.
1) cd to the image's WORKDIR first
docker.image("app").inside {
sh '''
cd <WORKDIR of image specifyed in Dockerfile>
./run prepare-tests
./run tests
'''
}
2) Run the container with sh, rather than via the docker DSL
sh '''
docker run -i app bash -c "./run prepare-tests && ./run tests"
'''

Jenkins docker container simply hangs and never executes steps

I'm trying to run a Python image in Jenkins to perform a series of unit tests with pytest, but I'm getting some strange behavior with Docker.
My Jenkinsfile pipeline is
agent {
    docker { image 'python:3.6-jessie' }
}
stages {
    stage('Run tests') {
        steps {
            withCredentials([
                string(credentialsId: 'a-secret', variable: 'A_SECRET')
            ]) {
                sh label: "Install dependencies", script: 'pip install -r requirements.txt'
                sh label: 'Execute tests', script: "pytest mytests.py"
            }
        }
    }
}
However, when I run the pipeline, Docker appears to be executing a very long instruction (with significantly more -e environment variables than I defined as credentials?), followed by cat.
The build then simply hangs and never finishes:
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 996:994
-w /var/lib/jenkins/workspace/myproject
-v /var/lib/jenkins/workspace/myproject:/var/lib/jenkins/workspace/myproject:rw,z
-v /var/lib/jenkins/workspace/myproject@tmp:/var/lib/jenkins/workspace/myproject@tmp:rw,z
-e ******** -e ******** python:3.6-jessie cat
When I SSH into my instance and run docker ps, I see
CONTAINER ID   IMAGE               COMMAND   CREATED              STATUS              PORTS   NAMES
240d00459d92   python:3.6-jessie   "cat"     About a minute ago   Up About a minute           kind_wright
Why is Jenkins running cat? Why does Jenkins say I am not running inside a container, when it has clearly created a container for me? And most importantly, why are my pip install -r requirements and other steps not executing?
I finally figured this out. If you have empty global environment variables in your Jenkins configuration, you end up with a malformed docker run command, because Jenkins writes the command, with your empty environment variable, as docker run -e some_env_var=some_value -e = ...
This causes the container to simply hang.
A telltale sign that this is happening is you'll get the error message:
invalid argument "=" for "-e, --env" flag: invalid environment variable: =
This is initially difficult to diagnose since Jenkins (rightfully) hides your actual credentials with ***, so the empty environment strings do not show up as empty.
You need to check your Jenkins global configuration (Manage Jenkins > Configure System > Global properties > Environment variables) and make sure you don't have any empty environment variables accidentally defined.
If any exist, delete them and rerun.
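For a quick sanity check from a job itself, here is a small sketch (plain shell, run in a stage on the agent before any docker-based step) that lists any injected environment variables whose value is empty:

sh '''
    # print every environment variable that is defined but has an empty value
    env | grep -E '^[A-Za-z_][A-Za-z0-9_]*=$' || echo "no empty environment variables found"
'''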

Using Docker for Windows in Jenkins Declarative Pipeline

I am setting up a CI workflow with Jenkins declarative pipeline and Docker-for-Windows agents through Dockerfile.
Note: It is unfortunately currently not a solution to use a Linux-based docker daemon, since I need to run Windows binaries.
Setup: Jenkins master runs on Linux 16.04 through Docker. Jenkins build agent is
Windows 10 Enterprise 1709 (16299.551)
Docker-for-Windows 17.12.0-ce
Docker 18.x gave me headaches when trying to use Windows Containers, so I rolled back to 17.x. I still had some issues running it with Jenkins, with nohup not being on the path, but that was solved by adding the Git binaries to the Windows search path (another reference). I suspect my current issue may be related.
Code: I am trying to initialize a Jenkinsfile and run a simple hello-world printout within it.
/Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Docker Test') {
            agent {
                dockerfile {
                    filename 'Dockerfile'
                    label 'windocker'
                }
            }
            steps {
                println 'Hello, World!'
            }
        }
    }
}
/Dockerfile
FROM python:3.7-windowsservercore
RUN python -m pip install --upgrade pip
Basically, this should be a clean image that simply prints "Hello, World!". But it fails on Jenkins!
Output from the log:
[C:\jenkins\workspace\dockerfilecd4c215a] Running shell script
+ docker build -t cbe5e0bb1fa45f7ec37a2b15566f84aa9bd08f5d -f Dockerfile .
Sending build context to Docker daemon 337.4kB
Step 1/2 : FROM python:3.7-windowsservercore
---> 340689b75c39
Step 2/2 : RUN python -m pip install --upgrade pip
---> Using cache
---> a93f446a877f
Successfully built a93f446a877f
Successfully tagged cbe5e0bb1fa45f7ec37a2b15566f84aa9bd08f5d:latest
[C:\jenkins\workspace\dockerfilecd4c215a] Running shell script
+ docker inspect -f . cbe5e0bb1fa45f7ec37a2b15566f84aa9bd08f5d
.
Cannot run program "id": CreateProcess error=2, The system cannot find the file specified
The issue is that Windows is not supported at the moment: the plugin calls the Linux id command to get the current user id.
There is an open pull request and JIRA ticket for Jenkins to support the Docker pipeline on Windows:
https://issues.jenkins-ci.org/browse/JENKINS-36776
https://github.com/jenkinsci/docker-workflow-plugin/pull/148
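Until that support lands, a common workaround (a sketch, not something the plugin provides) is to skip the dockerfile agent on Windows and drive Docker from plain bat steps on the labelled node; the hello-python image tag below is just illustrative:

stage('Docker Test') {
    agent { label 'windocker' }
    steps {
        // build the image from the same Dockerfile using the Windows docker CLI
        bat 'docker build -t hello-python -f Dockerfile .'
        // run the container once; it prints Hello, World! from inside the Windows container
        bat 'docker run --rm hello-python python -c "print(\'Hello, World!\')"'
    }
}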

Resources