Jenkins Pipeline NPM Integration creates empty package.json - docker

We're running a dedicated build server on Debian 10 with both Jenkins and Docker installed on it.
The Jenkins user owns the jenkins folder and is a member of the docker group, so the permissions should be in order.
A couple of pipelines already build Docker images just fine.
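(A quick way to double-check those rights on the server, for what it's worth; the exact commands are my suggestion, not from the original setup:
id jenkins                 # 'docker' should appear in the group list
ls -ld /var/lib/jenkins    # should be owned by the jenkins user
)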
Now we're trying to set up a new pipeline to build a web front-end. It uses the "Pipeline NPM Integration" plugin.
The pipeline looks like this:
pipeline {
    environment {
        applicationVersion = '0'
    }
    agent any
    stages {
        stage('Clone Git') {
            steps {
                git url: '<githuburl>',
                    credentialsId: '<githubcreds>',
                    branch: 'develop'
            }
        }
        stage('Build module (npm)') {
            agent {
                docker {
                    image 'node:15.2-buster-slim'
                }
            }
            steps {
                withNPM(npmrcConfig: 'af7db72c-3235-4827-8eb4-69819a44e612') {
                    sh 'npm install'
                    sh 'npm run build:dev'
                }
            }
        }
    }
}
The pipeline always fails at the npm build with the following output:
Running on Jenkins in /var/lib/jenkins/workspace/test#2
+ docker inspect -f . node:15.2-buster-slim
.
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 108:114 -w /var/lib/jenkins/workspace/test#2 -v /var/lib/jenkins/workspace/test#2:/var/lib/jenkins/workspace/test#2:rw,z -v /var/lib/jenkins/workspace/test#2#tmp:/var/lib/jenkins/workspace/test#2#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** node:15.2-buster-slim cat
$ docker top c227e7fe6de64d5d7046f5565f109e90ecdd9ece1caf4203e6a6f810cc72f40c -eo pid,comm
Using settings config with name npmrc
A workscape local .npmrc already exists and will be overwrriten for the build.
Writing .npmrc file: /var/lib/jenkins/workspace/test#2/.npmrc
+ npm install
up to date in 178ms
found 0 vulnerabilities
+ npm run build:dev
npm ERR! missing script: build:dev
$ docker stop --time=1 c227e7fe6de64d5d7046f5565f109e90ecdd9ece1caf4203e6a6f810cc72f40c
$ docker rm -f c227e7fe6de64d5d7046f5565f109e90ecdd9ece1caf4203e6a6f810cc72f40c
Removed some lines here to keep it short.
After a couple of hours I found out that the package.json, while present, is actually empty!
cat /var/lib/jenkins/workspace/test#2/package.json
{}
I have no idea why the plugin creates an empty copy of the file.
Is there any way to get more output from the plugin about what it's doing and what could possibly cause this?
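One way to narrow this down (a sketch; the diagnostic steps and the verbose flag are my additions, not features of the plugin) is to dump the workspace contents before and inside withNPM and to raise npm's log level. It may also be worth checking whether the stage-level Docker agent got a fresh workspace (note the #2 suffix in the path) rather than the one the Git clone ran in; reuseNode true pins it to the same workspace. A modified build stage:
stage('Build module (npm)') {
    agent {
        docker {
            image 'node:15.2-buster-slim'
            reuseNode true   // reuse the workspace of the top-level agent
        }
    }
    steps {
        sh 'ls -la && cat package.json'   // what is in the workspace before the plugin runs?
        withNPM(npmrcConfig: 'af7db72c-3235-4827-8eb4-69819a44e612') {
            sh 'cat .npmrc && cat package.json'   // what did the plugin write?
            sh 'npm install --loglevel verbose'
            sh 'npm run build:dev'
        }
    }
}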

Related

How to build and run a Docker image in a scripted Jenkinsfile pipeline?

I'm trying to build and run a Docker image on a specific build node from a scripted Jenkinsfile. Switching to declarative syntax is something I would rather avoid.
My code is quite close to the example from the documentation. The image builds as expected, but running the container fails: Jenkins complains that the physical machine of the node is not running inside a container, and the echo and make commands from the innermost block, which I would expect to run inside the container, are not executed and do not appear in the log.
As far as I understand, Jenkins considers containers to be build nodes of their own, and nesting of node statements is not allowed. At the same time, a node is required to build and run the Docker image.
What am I missing to build and run the image? As I'm quite new to Jenkins as well as to Docker, any hints or recommendations are appreciated.
The code:
node('BuildMachine1') {
    withEnv(envList) {
        dir("/some/path") {
            docker.build("build-image:${env.BUILD_ID}", "-f ${env.WORKSPACE}/build/Dockerfile .").inside {
                echo "Echo from Docker"
                sh script: 'make'
            }
        }
    }
}
The log:
Successfully built 8c57cad188ed
Successfully tagged build-image:79
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
+ docker inspect -f . build-image:79
.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] withDockerContainer
BuildMachine1 does not seem to be running inside a container
$ docker run -t -d -u 1004:1005 -w /data/Jenkins_Node/workspace/myFeature/buildaarch64Release -v /data/Jenkins_Node/workspace/myFeature/buildaarch64Release:/data/Jenkins_Node/workspace/myFeature/buildaarch64Release:rw,z -v /data/Jenkins_Node/workspace/myFeature/buildaarch64Release#tmp:/data/Jenkins_Node/workspace/myFeature/buildaarch64Release#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** build-image:87 cat
$ docker top 2242078968bc1ee5ddfd08c73a2e1551eda36c2595f0e4c9fb6e9b3b0af15b8b -eo pid,comm
[Pipeline] // withDockerContainer
Looks like the ENTRYPOINT of the container was configured in a way that worked for manual usage in a terminal but not inside the Jenkins pipeline.
It was set as
ENTRYPOINT ["/usr/bin/env", "bash"]
After changing it to
ENTRYPOINT [ "/bin/bash", "-l", "-c" ]
the resulting container is used by the Jenkinsfile as intended. This makes sense given how the pipeline starts the container: as the log shows, Jenkins runs docker run ... <image> cat and expects cat to keep the container alive. With the first entrypoint that turns into bash cat, i.e. bash trying to execute a script file named cat, which exits immediately; with bash -l -c it becomes bash -l -c cat, which actually runs cat.
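An alternative that avoids rebuilding the image (an assumption on my part, relying on docker-workflow's inside(String args, Closure) overload; not part of the original answer) is to blank the entrypoint when entering the container:
docker.build("build-image:${env.BUILD_ID}", "-f ${env.WORKSPACE}/build/Dockerfile .").inside("--entrypoint=''") {
    // with an empty entrypoint, Jenkins' implicit 'cat' runs as the container command again
    echo "Echo from Docker"
    sh script: 'make'
}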

Using docker agent inside a docker-swarm slave that is running inside Docker as well

We have a Jenkinsfile that looks as follows:
pipeline {
    agent {
        node {
            label 'slave-test'
        }
    }
    stages {
        stage('test docker run') {
            agent {
                docker {
                    image 'node:14.4.0-slim'
                    args '-u root -v /var/run/docker.sock:/var/run/docker.sock'
                    reuseNode true
                }
            }
            steps {
                sh 'PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true npm ci'
                sh 'npm run test:ci'
                sh 'npm run patternlab:build'
            }
        }
    }
}
The node labelled slave-test is a docker-swarm client running as a Docker image based on debian-buster. Inside this slave we want to start the image node:14.4.0-slim to run some tests and package some front-end stuff.
We use reuseNode true to use the same workspace as the agent declared at the beginning of the pipeline. But Jenkins tells us:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test docker run)
[Pipeline] getContext
[Pipeline] isUnix
[Pipeline] sh
13:24:07 + docker inspect -f . node:14.4.0-slim
13:24:07 .
[Pipeline] withDockerContainer
13:24:07 hofladen-slave01-20d7912d seems to be running inside container 23d34522985b2e7ec99327337cd2b20bee22018562886c9930a4ba777cda11ca
13:24:07 but /home/****/workspace/ttern-library_feature_BWEBHM-262#2 could not be found among [/var/run/docker.sock]
13:24:07 but /home/****/workspace/ttern-library_feature_BWEBHM-262#2#tmp could not be found among [/var/run/docker.sock]
13:24:07 $ docker run -t -d -u 10000:10000 -u root -v /var/run/docker.sock:/var/run/docker.sock -w /home/****/workspace/ttern-library_feature_BWEBHM-262#2 -v /home/****/workspace/ttern-library_feature_BWEBHM-262#2:/home/****/workspace/ttern-library_feature_BWEBHM-262#2:rw,z -v /home/****/workspace/ttern-library_feature_BWEBHM-262#2#tmp:/home/****/workspace/ttern-library_feature_BWEBHM-262#2#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** node:14.4.0-slim cat
13:24:08 $ docker top 0759f74d1c2676d68a32edab9775b2ca3c518fa2e4e673af856a87e9da514683 -eo pid,comm
[Pipeline] {
[Pipeline] sh
13:29:15 process apparently never started in /home/****/workspace/ttern-library_feature_BWEBHM-262#2#tmp/durable-504ce105
13:29:15 (running Jenkins temporarily with -Dorg.****ci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
13:29:15 $ docker stop --time=1 0759f74d1c2676d68a32edab9775b2ca3c518fa2e4e673af856a87e9da514683
13:29:17 $ docker rm -f 0759f74d1c2676d68a32edab9775b2ca3c518fa2e4e673af856a87e9da514683
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Cancel running builds if exist)
Stage "Cancel running builds if exist" skipped due to earlier failure(s)
We need to run these commands all in the same Jenkins workspace in order to perform the later steps.
Does anybody have an idea how to achieve this? We know that the pipeline runs fine when the agent is a standalone machine rather than a container.
Fixed by starting the Jenkins slave as a container with a volume to share data and with access to /var/run/docker.sock. Since the agent starts the node:14.4.0-slim container as a sibling through the host's Docker socket, the bind-mount source paths are resolved on the Docker host; putting the workspace on a named volume makes it visible there (which is exactly what the "could not be found among [/var/run/docker.sock]" lines were complaining about):
#!/bin/bash
volume_name=myfinevolume-slave01-workspace
docker volume create ${volume_name}
docker run -d --name jenkins-agent-for-myfineproject \
    -v /var/run/docker.sock:/var/run/docker.sock:rw \
    --mount source=${volume_name},target=/home/jenkins/workspace \
    --memory=8G \
    --memory-swap=16G \
    --restart unless-stopped \
    myfineregistry.domain.lala/acme/jenkins-swarm-client:3.17.0_buster
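As a quick sanity check (my addition, not part of the original fix), you can confirm from the host side that the workspace actually lands on the shared volume:
docker run --rm --mount source=myfinevolume-slave01-workspace,target=/ws alpine ls -la /ws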

Docker agent volume mount in Jenkins pipeline not working as expected

The following snippet with a volume mount creates the Maven dependencies under $JENKINS_HOME/workspace/<project-name>/? (question mark) instead of under $HOME/.m2/.
Note that settings.xml mirrors to our internal repository, and the instructions on how to mount were taken directly from jenkins.io.
Does anyone have any clue why this is happening?
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /tmp/jenkins/.m2:/root/.m2:rw,z'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install -s settings.xml'
            }
        }
    }
}
This is not as simple as using Docker standalone. I have created the /var/jenkins/.m2 directory on the Jenkins slave where the build runs, ensured the new directory has 775 permissions (although that may not be required), and changed the owner to match that of /var/opt/slave/workspace/pipeline_test (a path taken from the logs below).
$ docker login -u dcr-login -p ******** https://nexus.corp.zenmonics.com:8449
Login Succeeded
[Pipeline] {
[Pipeline] sh
+ docker inspect -f . nexus.corp.zenmonics.com:8449/maven:3-alpine
.
[Pipeline] withDockerContainer
cucj1sb3 does not seem to be running inside a container
$ docker run -t -d -u 1002:1002 -v /tmp/jenkins/.m2:/root/.m2:rw,z -w
/var/opt/slave/workspace/pipeline_test -v /var/opt/slave/workspace/pipeline_test:/var/opt/slave/workspace/pipeline_test:rw,z -v /var/opt/slave/workspace/pipeline_test#tmp:/var/opt/slave/workspace/pipeline_test#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** nexus.corp.zenmonics.com:8449/maven:3-alpine cat
$ docker top c7282468dbb6952aadbe4bb495757e7047122b179c81516645ba23759b78c366 -eo pid,comm
This statement about the official Maven image on Docker Hub (https://hub.docker.com/_/maven) makes me suspect the local repository location is overridden inside the image:
To create a pre-packaged repository, create a pom.xml with the
dependencies you need and use this in your Dockerfile.
/usr/share/maven/ref/settings-docker.xml is a settings file that
changes the local repository to /usr/share/maven/ref/repository, but
you can use your own settings file as long as it uses
/usr/share/maven/ref/repository as local repo.
The documentation at https://jenkins.io/doc/book/pipeline/docker/ is misleading and a waste of time when it comes to volume mounting.
When the Docker container is created, it runs with user 1002 and group 1002. That user doesn't have access to /root/.m2 and only has access to the working directory injected into the container.
Dockerfile
FROM maven:3-alpine
COPY --chown=1002:1002 repository/ /usr/share/maven/ref/repository/
RUN chmod -R 775 /usr/share/maven/ref/repository
COPY settings.xml /usr/share/maven/ref/
settings.xml
<localRepository>/usr/share/maven/ref/repository</localRepository>
Docker command
docker build -t <server>:<port>/<image-name>:<image-tag> .
docker push <server>:<port>/<image-name>:<image-tag>
docker volume create maven-repo
Jenkinsfile
pipeline {
    agent {
        docker {
            label '<slave-label-here>'
            image '<image-name>:<image-tag>'
            registryUrl 'https://<server>:<port>'
            registryCredentialsId '<jenkins-credentials-for-docker-login>'
            args '-v maven-repo:/usr/share/maven/ref/repository/'
        }
    }
    parameters {
        booleanParam(name: 'SONAR', defaultValue: false, description: 'Select this option to run SONAR Analysis')
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install -s /usr/share/maven/ref/settings.xml -f pom.xml'
            }
        }
    }
}
As @masseyb mentions in the comments, Jenkins treats $HOME as the current build context.
And there are two workarounds:
a) use a Jenkins plugin to set environment variables
You can use the Envinject Plugin to set environment variables in Jenkins.
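A sketch of the same idea expressed directly in the pipeline rather than through the plugin (the mount path and the -Dmaven.repo.local property are my assumptions, not part of the original answer):
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            // mount to a path the non-root build user can write to
            args '-v /tmp/jenkins/.m2:/tmp/jenkins/.m2:rw,z'
        }
    }
    stages {
        stage('Build') {
            steps {
                // point the local repo at the mount instead of the inaccessible /root/.m2
                sh 'mvn -B clean install -s settings.xml -Dmaven.repo.local=/tmp/jenkins/.m2/repository'
            }
        }
    }
}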
b) specify an absolute path instead of $HOME/.m2
You can specify an absolute path for .m2, e.g.:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /home/samir-shail/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean install'
            }
        }
    }
}
Note: please check that Jenkins has access to your $HOME/.m2/ directory.

How do I run jenkins as a single master agent with a docker pipeline?

I am trying to run a single node Jenkins with:
Jenkins master as a docker container
Jenkins jobs running in the same container (single node)
Jenkins docker pipeline tasks, e.g. https://jenkins.io/doc/pipeline/tour/hello-world/
I am using the jenkins/jenkins:lts image. The issues I found and resolved so far were:
a missing docker binary (added either by extending the image or by bind-mounting it)
a missing docker plugin that is not part of the standard recommended plugins (added via the dashboard)
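For context, a minimal sketch of the kind of invocation used to start the master (the volume name, port mappings, and binary bind-mount are assumptions based on the description above, not taken from the original post):
docker run -d --name jenkins \
    -p 8080:8080 -p 50000:50000 \
    -v jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$(which docker)":/usr/bin/docker \
    jenkins/jenkins:lts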
It is able to run docker, but I still get the "Jenkins does not seem to be running inside a container" message, and the script will not execute.
The failure logs are here:
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 1000:1000 -w "/var/jenkins_home/jobs/Test Pipeline/branches/master/workspace" -v "/var/jenkins_home/jobs/Test Pipeline/branches/master/workspace:/var/jenkins_home/jobs/Test Pipeline/branches/master/workspace:rw,z" -v "/var/jenkins_home/jobs/Test Pipeline/branches/master/workspace#tmp:/var/jenkins_home/jobs/Test Pipeline/branches/master/workspace#tmp:rw,z" -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat node:6.3
[Pipeline] {
[Pipeline] stage
[Pipeline] { (build)
[Pipeline] sh
[workspace] Running shell script
sh: 1: cannot create /var/jenkins_home/jobs/Test Pipeline/branches/master/workspace#tmp/durable-3f202d99/jenkins-log.txt: Directory nonexistent
sh: 1: cannot create /var/jenkins_home/jobs/Test Pipeline/branches/master/workspace#tmp/durable-3f202d99/jenkins-result.txt: Directory nonexistent
Why does it not believe it is running in a container? Is it trying to run the agent worker locally and failing?

Jenkinsfile docker agent step dies after 1 second

I have a very simple Jenkinsfile as seen below.
def workspace
node {
    workspace = sh(returnStdout: true, script: 'pwd').trim()
}
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker {
                    image 'composer'
                    args "-v /var/lib/jenkins/.composer/auth.json:/.composer/auth.json -v $workspace:/app"
                }
            }
            steps {
                sh 'php -v'
                sh 'composer install --no-interaction --working-dir=$WORKSPACE/backend'
            }
        }
    }
}
I've gotten to the point where this works entirely as intended (e.g. it mounts volumes as expected, moves things around, pulls the image, and actually runs composer install), with one minor exception...
Immediately following the docker run, it gets into the shell steps, runs sh 'composer install...', and dies after 1 second, going into the docker stop --time 1 ... and docker rm ... steps immediately after.
I have no idea if this is coming from Composer doing something odd, or if there's some configurable timeout I'm completely unaware of.
Has anyone dealt with this before?
Edit:
Here is more information:
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 997:995 -v /var/lib/jenkins/.composer/auth.json:/.composer/auth.json -v [...] -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat composer
[Pipeline] {
[Pipeline] sh
[workspace] Running shell script
+ php -v
PHP 7.1.9 (cli) (built: Sep 15 2017 00:07:01) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.1.0, Copyright (c) 1998-2017 Zend Technologies
[Pipeline] sh
[workspace] Running shell script
+ composer install --no-interaction --working-dir=/app/backend --no-progress --no-ansi
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Package operations: 29 installs, 0 updates, 0 removals
- Installing laravel/tinker (v1.0.2): Downloading[Pipeline] }
$ docker stop --time=1 ee693aaa7cdde41b714fdc91dbc1b05ac07fe2be7904ab1ed528fb0a3f771047
$ docker rm -f ee693aaa7cdde41b714fdc91dbc1b05ac07fe2be7904ab1ed528fb0a3f771047
[Pipeline] // withDockerContainer
[Pipeline] }
and from an earlier job
Installing dependencies (including require-dev) from lock file
Package operations: 55 installs, 0 updates, 0 removals
- Installing symfony/finder (v3.3.6): Downloading (connecting...)[Pipeline] }
It's working, as can be seen, but the return code at the end is:
GitHub has been notified of this commit’s build result
ERROR: script returned exit code -1
Finished: FAILURE
Edit 2:
Got this to work even more simply; see the gist for more info:
https://gist.github.com/chuckyz/6b78b19a6a5ea418afa16cc58020096e
It's a bug in Jenkins, so until it is marked as fixed I'm just using sh steps with docker run ... in them, manually:
https://issues.jenkins-ci.org/browse/JENKINS-35370
e.g.:
sh 'docker run -v $WORKSPACE:/app composer install'
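A slightly fuller version of the same workaround, carrying over the mounts and flags from the pipeline above (the --rm flag is my addition):
sh 'docker run --rm -v /var/lib/jenkins/.composer/auth.json:/.composer/auth.json -v "$WORKSPACE":/app composer install --no-interaction --working-dir=/app/backend'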
I experienced this and resolved it by upgrading durable-task-plugin from 1.13 to 1.16.
The 1.16 changelog contains:
Using a new system for determining whether sh step processes are still alive, which should solve various robustness issues.
