Docker agent volume mount in Jenkins pipeline not working as expected

The following snippet with a volume mount creates the Maven dependencies under $JENKINS_HOME/workspace/<project-name>/? (a literal question-mark directory) instead of under $HOME/.m2/.
Note that settings.xml mirrors to our internal repository, and the instructions on how to mount were taken directly from jenkins.io.
Does anyone have a clue why this is happening?
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /tmp/jenkins/.m2:/root/.m2:rw,z'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install -s settings.xml'
            }
        }
    }
}
This is not as simple as using Docker standalone. I have created the /var/jenkins/.m2 directory on the Jenkins slave where the build runs, ensured the new directory has 775 permissions (although that may not be required), and also changed the owner to match the owner of "/var/opt/slave/workspace/pipeline_test" (a path taken from the logs below).
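In shell terms, that preparation amounts to roughly the following (the host path is taken from the pipeline args above; treat the exact paths as assumptions):
mkdir -p /tmp/jenkins/.m2
chmod 775 /tmp/jenkins/.m2
# match the owner of the workspace directory
chown --reference=/var/opt/slave/workspace/pipeline_test /tmp/jenkins/.m2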
$ docker login -u dcr-login -p ******** https://nexus.corp.zenmonics.com:8449
Login Succeeded
[Pipeline] {
[Pipeline] sh
+ docker inspect -f . nexus.corp.zenmonics.com:8449/maven:3-alpine
.
[Pipeline] withDockerContainer
cucj1sb3 does not seem to be running inside a container
$ docker run -t -d -u 1002:1002 -v /tmp/jenkins/.m2:/root/.m2:rw,z -w
/var/opt/slave/workspace/pipeline_test -v /var/opt/slave/workspace/pipeline_test:/var/opt/slave/workspace/pipeline_test:rw,z -v /var/opt/slave/workspace/pipeline_test#tmp:/var/opt/slave/workspace/pipeline_test#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** nexus.corp.zenmonics.com:8449/maven:3-alpine cat
$ docker top c7282468dbb6952aadbe4bb495757e7047122b179c81516645ba23759b78c366 -eo pid,comm
This statement about the official Maven image on Docker Hub (https://hub.docker.com/_/maven) makes me suspect the image changes the local repository location, which would explain why the volume mount is not being used:
To create a pre-packaged repository, create a pom.xml with the
dependencies you need and use this in your Dockerfile.
/usr/share/maven/ref/settings-docker.xml is a settings file that
changes the local repository to /usr/share/maven/ref/repository, but
you can use your own settings file as long as it uses
/usr/share/maven/ref/repository as local repo.

The documentation at https://jenkins.io/doc/book/pipeline/docker/ is misleading and a waste of time when it comes to volume mounting.
When the Docker container is created, it is created with user 1002 and group 1002. User 1002 has no access to /root/.m2 and only has access to the working directory injected into the container.
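A quick way to confirm this from inside the pipeline (a debugging sketch, not part of the fix):
// prints the container user; /root/.m2, if present, is owned by root
sh 'id && ls -ld /root/.m2 || true'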
Dockerfile
FROM maven:3-alpine
# pre-seed the reference repository, owned by the UID/GID Jenkins runs the container as
COPY --chown=1002:1002 repository/ /usr/share/maven/ref/repository/
RUN chmod -R 775 /usr/share/maven/ref/repository
COPY settings.xml /usr/share/maven/ref/
settings.xml
<localRepository>/usr/share/maven/ref/repository</localRepository>
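For completeness, that element sits directly under the settings root; a minimal sketch of the full file (mirror and server entries for the internal repository omitted):
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
    <localRepository>/usr/share/maven/ref/repository</localRepository>
</settings>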
Docker commands
docker build -t <server>:<port>/<image-name>:<image-tag> .
docker push <server>:<port>/<image-name>:<image-tag>
docker volume create maven-repo
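Note that the named volume is created on the Docker daemon of whatever node runs the build, so it must exist there; a quick check on the agent:
docker volume inspect maven-repo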
Jenkinsfile
pipeline {
    agent {
        docker {
            label '<slave-label-here>'
            image '<image-name>:<image-tag>'
            registryUrl 'https://<server>:<port>'
            registryCredentialsId '<jenkins-credentials-for-docker-login>'
            args '-v maven-repo:/usr/share/maven/ref/repository/'
        }
    }
    parameters {
        booleanParam(name: 'SONAR', defaultValue: false, description: 'Select this option to run SONAR Analysis')
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install -s /usr/share/maven/ref/settings.xml -f pom.xml'
            }
        }
    }
}

As @masseyb mentions in the comments, Jenkins treats $HOME as the current build context.
There are two workarounds:
a) use a Jenkins plugin to set environment variables
You can use the EnvInject Plugin to set environment variables in Jenkins (a plugin-free sketch follows after the example below).
b) specify an absolute path instead of $HOME/.m2
You can specify an absolute path for .m2, e.g.:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /home/samir-shail/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B'
            }
        }
    }
}
Note: please check that Jenkins has access to your $HOME/.m2/ directory.
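Regarding workaround (a): declarative pipelines can also set variables through the built-in environment block, without a plugin (a minimal sketch; the variable name is hypothetical, and EnvInject itself is configured through the Jenkins UI):
pipeline {
    agent any
    environment {
        // hypothetical variable pointing at a host-side Maven cache
        M2_CACHE = '/tmp/jenkins/.m2'
    }
    stages {
        stage('Build') {
            steps {
                // the shell, not Groovy, expands $M2_CACHE here
                sh 'echo "Maven cache lives at $M2_CACHE"'
            }
        }
    }
}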

Related

How to build and run a Docker image in a scripted Jenkinsfile pipeline?

I'm trying to build and run a Docker image on a specific build node from a scripted Jenkinsfile. Switching to declarative syntax is something I would rather avoid.
My code is quite close to the example from the documentation. The image builds as expected, but running the container fails: Jenkins complains that the physical machine of the node is not running inside a container, and the echo and make commands from the innermost block, which I would expect to run inside the container, are not executed and do not appear in the log.
As far as I understand, Jenkins considers containers to be build nodes in their own right, and nesting of node statements is not allowed. At the same time, a node is required to build and run the Docker image.
What am I missing to build and run the image? As I'm quite new to Jenkins as well as to Docker, any hints or recommendations are appreciated.
The code:
node('BuildMachine1') {
    withEnv(envList) {
        dir("/some/path") {
            docker.build("build-image:${env.BUILD_ID}", "-f ${env.WORKSPACE}/build/Dockerfile .").inside {
                echo "Echo from Docker"
                sh script: 'make'
            }
        }
    }
}
The log:
Successfully built 8c57cad188ed
Successfully tagged build-image:79
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
+ docker inspect -f . build-image:79
.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] withDockerContainer
BuildMachine1 does not seem to be running inside a container
$ docker run -t -d -u 1004:1005 -w /data/Jenkins_Node/workspace/myFeature/buildaarch64Release -v /data/Jenkins_Node/workspace/myFeature/buildaarch64Release:/data/Jenkins_Node/workspace/myFeature/buildaarch64Release:rw,z -v /data/Jenkins_Node/workspace/myFeature/buildaarch64Release#tmp:/data/Jenkins_Node/workspace/myFeature/buildaarch64Release#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** build-image:87 cat
$ docker top 2242078968bc1ee5ddfd08c73a2e1551eda36c2595f0e4c9fb6e9b3b0af15b8b -eo pid,comm
[Pipeline] // withDockerContainer
It looks like the entrypoint of the container was configured in a way that worked for manual usage in a terminal, but not inside the Jenkins pipeline.
It was set as
ENTRYPOINT ["/usr/bin/env", "bash"]
After changing it to
ENTRYPOINT [ "/bin/bash", "-l", "-c" ]
the resulting container is used by the Jenkinsfile as intended.
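A sketch of why the first form breaks under Jenkins, given the plugin's behavior of appending cat as the container command (visible in the docker run ... cat line of the log above):
# ENTRYPOINT ["/usr/bin/env", "bash"] + command "cat" runs:
#   /usr/bin/env bash cat   -> bash looks for a script file named "cat" and exits immediately
# ENTRYPOINT ["/bin/bash", "-l", "-c"] + command "cat" runs:
#   /bin/bash -l -c cat     -> executes cat, which blocks on stdin and keeps the container alive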

Jenkins Pipeline NPM Integration creates empty package.json

We're running a dedicated build server on Debian 10. Jenkins is installed on there as well as Docker.
The Jenkins user of course owns the jenkins folder and is part of the Docker group. So all rights should be set.
There are already a couple pipelines building Docker images just fine.
Now we're trying to set up a new pipeline to build a web front-end. It's using the "Pipeline NPM Integration" plugin.
The pipeline looks like this
pipeline {
    environment {
        applicationVersion = '0'
    }
    agent any
    stages {
        stage('Clone Git') {
            steps {
                git url: '<githuburl>',
                    credentialsId: '<githubcreds>',
                    branch: 'develop'
            }
        }
        stage('Build module (npm)') {
            agent {
                docker {
                    image 'node:15.2-buster-slim'
                }
            }
            steps {
                withNPM(npmrcConfig:'af7db72c-3235-4827-8eb4-69819a44e612') {
                    sh 'npm install'
                    sh 'npm run build:dev'
                }
            }
        }
    }
}
The pipeline always fails at the npm build with the following output
Running on Jenkins in /var/lib/jenkins/workspace/test#2
+ docker inspect -f . node:15.2-buster-slim
.
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 108:114 -w /var/lib/jenkins/workspace/test#2 -v /var/lib/jenkins/workspace/test#2:/var/lib/jenkins/workspace/test#2:rw,z -v /var/lib/jenkins/workspace/test#2#tmp:/var/lib/jenkins/workspace/test#2#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** node:15.2-buster-slim cat
$ docker top c227e7fe6de64d5d7046f5565f109e90ecdd9ece1caf4203e6a6f810cc72f40c -eo pid,comm
Using settings config with name npmrc
A workscape local .npmrc already exists and will be overwrriten for the build.
Writing .npmrc file: /var/lib/jenkins/workspace/test#2/.npmrc
+ npm install
up to date in 178ms
found 0 vulnerabilities
+ npm run build:dev
npm ERR! missing script: build:dev
$ docker stop --time=1 c227e7fe6de64d5d7046f5565f109e90ecdd9ece1caf4203e6a6f810cc72f40c
$ docker rm -f c227e7fe6de64d5d7046f5565f109e90ecdd9ece1caf4203e6a6f810cc72f40c
Removed some lines here to keep it short.
After a couple of hours I found out that the package.json, while present, is actually empty!
cat /var/lib/jenkins/workspace/test#2/package.json
{}
I have no idea why the plugin creates an empty copy of the file.
Is there any way to get more output from the plugin about what it's doing and what could possibly cause this?

Using docker agent inside a docker-swarm slave that is running inside Docker as well

We have a Jenkins file that looks as follow
pipeline {
    agent {
        node {
            label 'slave-test'
        }
    }
    stages {
        stage('test docker run') {
            agent {
                docker {
                    image 'node:14.4.0-slim'
                    args '-u root -v /var/run/docker.sock:/var/run/docker.sock'
                    reuseNode true
                }
            }
            steps {
                sh 'PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true npm ci'
                sh 'npm run test:ci'
                sh 'npm run patternlab:build'
            }
        }
    }
}
The node labelled slave-test is a docker-swarm client running as a Docker image based on debian-buster. Inside this slave we want to start the image node:14.4.0-slim to run some tests and package some frontend stuff.
We use reuseNode true to use the same workspace as the agent at the beginning of the pipeline. But Jenkins tells us:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test docker run)
[Pipeline] getContext
[Pipeline] isUnix
[Pipeline] sh
13:24:07 + docker inspect -f . node:14.4.0-slim
13:24:07 .
[Pipeline] withDockerContainer
13:24:07 hofladen-slave01-20d7912d seems to be running inside container 23d34522985b2e7ec99327337cd2b20bee22018562886c9930a4ba777cda11ca
13:24:07 but /home/****/workspace/ttern-library_feature_BWEBHM-262#2 could not be found among [/var/run/docker.sock]
13:24:07 but /home/****/workspace/ttern-library_feature_BWEBHM-262#2#tmp could not be found among [/var/run/docker.sock]
13:24:07 $ docker run -t -d -u 10000:10000 -u root -v /var/run/docker.sock:/var/run/docker.sock -w /home/****/workspace/ttern-library_feature_BWEBHM-262#2 -v /home/****/workspace/ttern-library_feature_BWEBHM-262#2:/home/****/workspace/ttern-library_feature_BWEBHM-262#2:rw,z -v /home/****/workspace/ttern-library_feature_BWEBHM-262#2#tmp:/home/****/workspace/ttern-library_feature_BWEBHM-262#2#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** node:14.4.0-slim cat
13:24:08 $ docker top 0759f74d1c2676d68a32edab9775b2ca3c518fa2e4e673af856a87e9da514683 -eo pid,comm
[Pipeline] {
[Pipeline] sh
13:29:15 process apparently never started in /home/****/workspace/ttern-library_feature_BWEBHM-262#2#tmp/durable-504ce105
13:29:15 (running Jenkins temporarily with -Dorg.****ci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
13:29:15 $ docker stop --time=1 0759f74d1c2676d68a32edab9775b2ca3c518fa2e4e673af856a87e9da514683
13:29:17 $ docker rm -f 0759f74d1c2676d68a32edab9775b2ca3c518fa2e4e673af856a87e9da514683
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Cancel running builds if exist)
Stage "Cancel running builds if exist" skipped due to earlier failure(s)
We need to run these commands all in the same Jenkins workspace in order to perform the later steps.
Does anybody have an idea how to achieve this? We know that the pipeline runs fine when the agent is not itself running inside a container.
Fixed by starting the Jenkins slave as a container with a volume to share data, and allowing access to /var/run/docker.sock:
#!/bin/bash
volume_name=myfinevolume-slave01-workspace

# named volume shared between the agent container and the build containers it spawns
docker volume create ${volume_name}

docker run -d --name jenkins-agent-for-myfineproject \
    -v /var/run/docker.sock:/var/run/docker.sock:rw \
    --mount source=${volume_name},target=/home/jenkins/workspace \
    --memory=8G \
    --memory-swap=16G \
    --restart unless-stopped \
    myfineregistry.domain.lala/acme/jenkins-swarm-client:3.17.0_buster
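With the agent started this way, the workspace path shows up among the agent container's mounts, which is what the plugin was failing to find in the log above ("could not be found among [/var/run/docker.sock]"); a quick verification sketch:
docker inspect -f '{{ range .Mounts }}{{ .Destination }} {{ end }}' jenkins-agent-for-myfineproject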

Jenkins pipeline alpine agent "apk update ERROR: Unable to lock database: Permission denied"

I'm using an Alpine Docker image as a Jenkins pipeline agent, but I keep getting a permission denied error while running apk update or apk add <package>. I'm seeing a similar error for Ubuntu images while running apt update or apt install.
Here's my Jenkinsfile:
pipeline {
    agent none
    stages {
        stage('Initialization') {
            agent any
            steps {
                checkout scm
            }
        }
        stage('Git Clone') {
            agent { docker { image 'alpine:3.12.0' } }
            steps {
                sh '''
                    apk update;
                    apk add --no-cache git;
                    apk add --no-cache openssh;
                    git -v;
                '''
            }
        }
    }
}
and here's the Jenkins output:
+ docker inspect -f . alpine:3.12.0
WARNING: Error loading config file: /root/.docker/config.json: stat /root/.docker/config.json: permission denied
.
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 1001:0 -w "/opt/bitnami/jenkins/jenkins_home/workspace/Deploy Glosfy Frontend" -v "/opt/bitnami/jenkins/jenkins_home/workspace/Deploy Glosfy Frontend:/opt/bitnami/jenkins/jenkins_home/workspace/Deploy Glosfy Frontend:rw,z" -v "/opt/bitnami/jenkins/jenkins_home/workspace/Deploy Glosfy Frontend#tmp:/opt/bitnami/jenkins/jenkins_home/workspace/Deploy Glosfy Frontend#tmp:rw,z" -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** alpine:3.12.0 cat
$ docker top 166c9ace17a4eb6aef0af0bbc04902ee4a358212be7f029550fb39a921e305aa -eo pid,comm
[Pipeline] {
[Pipeline] sh
+ apk update
ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database: Permission denied
[Pipeline] }
$ docker stop --time=1 166c9ace17a4eb6aef0af0bbc04902ee4a358212be7f029550fb39a921e305aa
$ docker rm -f 166c9ace17a4eb6aef0af0bbc04902ee4a358212be7f029550fb39a921e305aa
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] End of Pipeline
ERROR: script returned exit code 99
Finished: FAILURE
Can someone help me figure out the issue?
Please modify the docker block in the Jenkins pipeline like this:
docker {
    image 'alpine:3.12.0'
    args '-u root:root'
}
I believe the problem is that Jenkins is running the container with a non-root user, hence the Permission denied error.
Try changing your pipeline like so:
agent {
    docker {
        image 'alpine:3.12.0'
        args '-u root'
    }
}
See this answer.
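An alternative that avoids running as root at all is to bake the packages into a custom image and use that as the agent image instead (a minimal sketch, not from the answers above):
FROM alpine:3.12.0
# preinstall what the pipeline needs so no apk calls happen at build time
RUN apk add --no-cache git openssh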

Jenkins docker container always adds cat command

I am creating a Jenkins pipeline to run Terraform in a Docker container.
Here is my pipeline script:
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:full'
            args '--entrypoint=/bin/bash'
        }
    }
    stages {
        stage('execute') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}
When running this pipeline on Jenkins, I get the below error.
$ docker run -t -d -u 995:993 --entrypoint=/bin/bash -w /var/lib/jenkins/workspace/terraform -v /var/lib/jenkins/workspace/terraform:/var/lib/jenkins/workspace/terraform:rw,z -v /var/lib/jenkins/workspace/terraform#tmp:/var/lib/jenkins/workspace/terraform#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** hashicorp/terraform:full cat
$ docker top a0b801d657d0fffdfa95c387564128b130ab1d28569ad59bd0151c8b7faf6ffd -eo pid,comm
java.io.IOException: Failed to run top 'a0b801d657d0fffdfa95c387564128b130ab1d28569ad59bd0151c8b7faf6ffd'. Error: Error response from daemon: Container a0b801d657d0fffdfa95c387564128b130ab1d28569ad59bd0151c8b7faf6ffd is not running
It seems like Jenkins adds a cat command when running the image hashicorp/terraform:full.
Note that I have overridden the entrypoint to /bin/bash using --entrypoint=/bin/bash, since hashicorp/terraform:full already has an entrypoint defined.
I had to set the ENTRYPOINT to empty to disable the entrypoint defined in the Terraform image. I also think the light image is sufficient for just executing terraform.
I got it working with the following script:
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            args '--entrypoint='
        }
    }
    stages {
        stage('execute') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}
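A sketch of what happens in each case, given that the plugin appends cat as the container command (see the docker run ... cat line in the question's log):
# args '--entrypoint=/bin/bash' + appended "cat" runs:
#   /bin/bash cat   -> bash tries to read a script file named "cat", fails, and the container exits
# args '--entrypoint=' (empty) + appended "cat" runs:
#   cat             -> blocks on stdin and keeps the container alive for the build steps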
This seems to be the default behavior of the docker-workflow plugin in Jenkins.
[FIXED JENKINS-41316] Switch 'inside' back to CMD, detect if entrypoint was badly designed #116 https://github.com/jenkinsci/docker-workflow-plugin/pull/116
"we run whatever the process the image specifies (even sh -c)"
Their rationale is:
"That will break in most images, since for this purpose we need to start a container, pause it while we exec some stuff, and then stop it, without having to guess what its main command might run and when it might exit on its own. That is why we cat (I also have considered sleep infinity or some POSIX-compliant variant)."
https://issues.jenkins-ci.org/browse/JENKINS-39748
The code is here: https://github.com/jenkinsci/docker-workflow-plugin/blob/50ad50bad2ee14eb73d1ae3ef1058b8ad76c9e5d/src/main/java/org/jenkinsci/plugins/docker/workflow/WithContainerStep.java#L184
They want the container to be /* expected to hang until killed */.
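In effect, the plugin drives the container roughly like this (a simplified sketch of the command sequence visible in the build logs above):
docker run -t -d <image> cat          # cat blocks on stdin, so the container hangs until killed
docker exec <container> sh -c '...'   # each pipeline step is exec'ed into the waiting container
docker stop --time=1 <container>      # the container is torn down when the block ends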
Original answer:
Would you try running without the -d option (which means run in the background)?
docker run -it --entrypoint=/bin/bash hashicorp/terraform:full
Then you can enter the container and run whatever you want.
Take nginx as an example:
docker run -it --entrypoint=/bin/bash nginx
root@e4dc1d08de1d:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@e4dc1d08de1d:/# cat /var/log/
apt/ btmp dpkg.log faillog lastlog nginx/ wtmp
root@e4dc1d08de1d:/# cat /var/log/faillog
root@e4dc1d08de1d:/#
In my case the entrypoint caused some trouble, so I needed to overwrite it by passing an empty entrypoint argument to the inside method, like here:
pipeline {
    agent {
        label 'some_label'
    }
    stages {
        stage('execute') {
            steps {
                script {
                    img = docker.build("docker_image_name:docker_image_tag")
                    img.inside('--entrypoint= -e NODE_ENV=test') {
                        sh 'npm install --dev'
                        sh 'npm run test'
                    }
                }
            }
        }
    }
}
This example is like @S.Spieker's comment but with different syntax.
Remark: npm commands like npm test can differ between Node.js projects, so you will need to get the relevant commands from the developer.
If this example is still not working for you, you probably need to change your Docker image's entrypoint, as here: https://github.com/SonarSource/sonar-scanner-cli-docker/pull/31/files
You can learn more about the docker-workflow plugin here: https://docs.cloudbees.com/docs/admin-resources/latest/plugins/docker-workflow
Some more examples: Jenkins: How to use JUnit plugin when Maven builds occur within Docker container
