How do I keep the local docker image made by Jenkins?

Jenkins with the Docker Pipeline plugin is checking out my git repository and installing it:
pipeline {
    agent {
        docker {
            image 'node:6-alpine'
            args '-p 8989:8989'
        }
    }
    stages {
        stage("install") {
            steps {
                sh 'npm install -production'
            }
        }
    }
}
But then it's cleaning up and deleting the image:
$ docker stop --time=1 4b3220d100fae5d903db600992e91fb1ac391f1226b2aee01c6a92c3f0ff009c
$ docker rm -f 4b3220d100fae5d903db600992e91fb1ac391f1226b2aee01c6a92c3f0ff009c
All of the many deployment examples online are for publishing to a registry.
How do I just stop it from being cleaned up? How do I name and save the image locally?

You may want to use the dockerfile agent instead of the docker one. It uses a Dockerfile that is part of your project to build a local Docker image. All image layers get cached, so the next time you run the build it won't spend time re-building the image. It is also useful to put commands like npm install -production inside the Dockerfile, so those dependencies are downloaded and installed only once.
Take a look at the following example.
Dockerfile:
FROM node:6-alpine
RUN npm install -production
Jenkinsfile:
pipeline {
    agent {
        dockerfile {
            filename "Dockerfile"
            args '-p 8989:8989'
            additionalBuildArgs "-t my-custom-node:latest"
        }
    }
    stages {
        stage("Test") {
            steps {
                sh "npm --version" // this command gets executed inside the container
            }
        }
    }
}
The output:
Running on Jenkins in /home/wololock/.jenkins/workspace/pipeline-dockerfile
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Agent Setup)
[Pipeline] isUnix
[Pipeline] readFile
[Pipeline] sh
+ docker build -t a3f11e979e510758f10ac738e7e4e5c1160db2eb -f Dockerfile .
Sending build context to Docker daemon 2.048 kB
Step 1/2 : FROM node:6-alpine
Trying to pull repository docker.io/library/node ...
sha256:17258206fc9256633c7100006b1cfdf25b129b6a40b8e5d37c175026482c84e3: Pulling from docker.io/library/node
bdf0201b3a05: Pulling fs layer
e9fa13fdf0f5: Pulling fs layer
ccc877228d8f: Pulling fs layer
ccc877228d8f: Verifying Checksum
ccc877228d8f: Download complete
bdf0201b3a05: Verifying Checksum
bdf0201b3a05: Download complete
bdf0201b3a05: Pull complete
e9fa13fdf0f5: Verifying Checksum
e9fa13fdf0f5: Download complete
e9fa13fdf0f5: Pull complete
ccc877228d8f: Pull complete
Digest: sha256:17258206fc9256633c7100006b1cfdf25b129b6a40b8e5d37c175026482c84e3
Status: Downloaded newer image for docker.io/node:6-alpine
---> dfc29bfa7d41
Step 2/2 : RUN npm install -production
---> Running in e058ab280807
npm WARN enoent ENOENT: no such file or directory, open '/package.json'
npm WARN !invalid#1 No description
npm WARN !invalid#1 No repository field.
npm WARN !invalid#1 No README data
npm WARN !invalid#1 No license field.
 ---> d685094800d9
Removing intermediate container e058ab280807
Successfully built d685094800d9
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // stage
[Pipeline] sh
+ docker inspect -f . a3f11e979e510758f10ac738e7e4e5c1160db2eb
.
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 1000:1000 -w /home/wololock/.jenkins/workspace/pipeline-dockerfile -v /home/wololock/.jenkins/workspace/pipeline-dockerfile:/home/wololock/.jenkins/workspace/pipeline-dockerfile:rw,z -v /home/wololock/.jenkins/workspace/pipeline-dockerfile#tmp:/home/wololock/.jenkins/workspace/pipeline-dockerfile#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** a3f11e979e510758f10ac738e7e4e5c1160db2eb cat
$ docker top 84ca2e4a30487f114de81dbd60e53219a8cfa3959fb5b05d2c908f872bfe790c -eo pid,comm
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
+ npm --version
3.10.10
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 84ca2e4a30487f114de81dbd60e53219a8cfa3959fb5b05d2c908f872bfe790c
$ docker rm -f 84ca2e4a30487f114de81dbd60e53219a8cfa3959fb5b05d2c908f872bfe790c
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
In this example I used the additionalBuildArgs option to add an extra tag. Now, when I run docker images on my host, I can see the image as:
$ docker images
REPOSITORY        TAG        IMAGE ID        CREATED              SIZE
my-custom-node    latest     d685094800d9    About a minute ago   56.1 MB
docker.io/node    6-alpine   dfc29bfa7d41    6 months ago         56.1 MB
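As an aside, the npm WARN enoent lines in the build output above appear because the example Dockerfile runs npm install with no package.json in the image. For a real project you would typically copy the manifest in first; a minimal sketch (assuming package.json sits at the repository root, and using npm's long-form --production flag):
FROM node:6-alpine
WORKDIR /app
# Copying the manifest first lets Docker cache the install layer until package.json changes
COPY package.json .
RUN npm install --production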
This is a good first step to start with. If at some point you find building the Docker image too time-consuming, you can consider building it in a separate pipeline and pushing it to a private registry. But because Docker caches all layers, I wouldn't think about a remote Docker registry at this stage. Use a local Dockerfile, experiment with your build environment, and then see whether publishing the Docker image would be any improvement for you.
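If you do later decide to publish, a separate scripted pipeline for building and pushing could look roughly like this sketch (the registry URL and the 'registry-credentials' ID are placeholders; docker.build and docker.withRegistry come from the Docker Pipeline plugin):
node {
    checkout scm
    // Build the image from the project's Dockerfile and tag it with the build number
    def image = docker.build("my-custom-node:${env.BUILD_NUMBER}")
    docker.withRegistry('https://registry.example.com', 'registry-credentials') {
        image.push()
        image.push('latest')
    }
}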
And by the way, Jenkins Pipeline stops and removes the Docker container when the Jenkins job terminates. Your current workspace is mounted into the container and used as its working directory, so any files created inside the workspace from within the container are persisted.
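For example (a hypothetical extra stage): a file written by a container step lands in the mounted workspace and can still be archived after the container is gone:
stage("Package") {
    steps {
        sh 'npm pack' // runs inside the container, writes the .tgz into the mounted workspace
        archiveArtifacts artifacts: '*.tgz' // the file is still there once the container is removed
    }
}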
To read more about Jenkins Pipeline agents, go here - https://jenkins.io/doc/book/pipeline/syntax/#agent

Related

Jenkins does not run docker container as agent

I am new to Jenkins. Sorry if my question is basic.
I am trying to use Jenkins inside a container using the following command:
docker run --name jenkins --privileged -u root -d -p 8080:8080 -p 50000:50000 -v /var/run/docker.sock:/var/run/docker.sock:Z -v $(which docker):/usr/bin/docker -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
After initializing Jenkins through its web page on localhost:8080, I installed the Docker Pipeline and Docker plugins and restarted the container. Then I created a multibranch pipeline that pulls code from my repositories. The Jenkinsfile in my repo:
pipeline {
    agent {
        docker {
            image 'python'
        }
    }
    stages {
        stage('build') {
            steps {
                sh 'python --version'
            }
        }
    }
}
However, at first it didn't work due to some permission problems with Docker. I noticed that the Docker socket inside the container had a uid and gid of nobody, so I changed the socket's permissions on my host machine to 666, after which the permission problem was gone. But it still doesn't run the python --version command inside the container. I also tried adding --entrypoint=, --entrypoint=/bin/bash, -it, and -u root as args inside the Jenkinsfile, but none of them worked. Here are the Jenkins logs:
Started by user sobhan
07:20:29 Connecting to https://api.github.com using sobhansaf/******
Obtained Jenkinsfile from 2bd22bad5bfca8114cd9d98cc4f56b3915dd012e
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/mypipe_main
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Checkout SCM)
[Pipeline] checkout
The recommended git tool is: NONE
using credential cd26ef85-7736-498b-9629-813934e651fb
> git rev-parse --resolve-git-dir /var/jenkins_home/workspace/mypipe_main/.git # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.com/sobhansaf/circlecitest.git # timeout=10
Fetching without tags
Fetching upstream changes from https://github.com/sobhansaf/circlecitest.git
> git --version # timeout=10
> git --version # 'git version 2.30.2'
using GIT_ASKPASS to set credentials
> git fetch --no-tags --force --progress -- https://github.com/sobhansaf/circlecitest.git +refs/heads/main:refs/remotes/origin/main # timeout=10
Checking out Revision 2bd22bad5bfca8114cd9d98cc4f56b3915dd012e (main)
> git config core.sparsecheckout # timeout=10
> git checkout -f 2bd22bad5bfca8114cd9d98cc4f56b3915dd012e # timeout=10
Commit message: "Update Jenkinsfile"
> git rev-list --no-walk eb8007c2dd9171cf306a04b75cdc48b9cb0da32e # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
+ docker inspect -f . python
.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] withDockerContainer
Jenkins seems to be running inside container ccd4d26ae72e22da420697318327033c77e8ff720034a8573a384d6ba442ca4b
but /var/jenkins_home/workspace/mypipe_main could not be found among []
but /var/jenkins_home/workspace/mypipe_main#tmp could not be found among []
$ docker run -t -d -u 0:0 -w /var/jenkins_home/workspace/mypipe_main -v /var/jenkins_home/workspace/mypipe_main:/var/jenkins_home/workspace/mypipe_main:rw,z -v /var/jenkins_home/workspace/mypipe_main#tmp:/var/jenkins_home/workspace/mypipe_main#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** python cat
$ docker top 453918fd912510c869adad2cd3f9fb422108484c7af3ca02a91c42eea9b77b66 -eo pid,comm
[Pipeline] {
[Pipeline] stage
[Pipeline] { (build)
[Pipeline] sh
process apparently never started in /var/jenkins_home/workspace/mypipe_main#tmp/durable-ef6dc772
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 453918fd912510c869adad2cd3f9fb422108484c7af3ca02a91c42eea9b77b66
$ docker rm -f 453918fd912510c869adad2cd3f9fb422108484c7af3ca02a91c42eea9b77b66
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code -2
GitHub has been notified of this commit’s build result
Finished: FAILURE
It is also worth mentioning that whenever Jenkins creates a container, its command is always cat. Could you please help me figure out what is wrong?

How to fix "process apparently never started in ..." error in Jenkins pipeline?

I am getting the strange error below in my Jenkins pipeline
[Pipeline] withDockerContainer
acp-ci-ubuntu-test does not seem to be running inside a container
$ docker run -t -d -u 1002:1006 -u ubuntu --net=host -v /var/run/docker.sock:/var/run/docker.sock -v /home/ubuntu/.docker:/home/ubuntu/.docker -w /home/ubuntu/workspace/CD-acp-cassandra -v /home/ubuntu/workspace/CD-acp-cassandra:/home/ubuntu/workspace/CD-acp-cassandra:rw,z -v /home/ubuntu/workspace/CD-acp-cassandra#tmp:/home/ubuntu/workspace/CD-acp-cassandra#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** quay.io/arubadevops/acp-build:ut-build cat
$ docker top 83d04d0a3a3f9785bdde3932f55dee36c079147eb655c1ee9d14f5b542f8fb44 -eo pid,comm
[Pipeline] {
[Pipeline] sh
process apparently never started in /home/ubuntu/workspace/CD-acp-cassandra#tmp/durable-70b242d1
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
$ docker stop --time=1 83d04d0a3a3f9785bdde3932f55dee36c079147eb655c1ee9d14f5b542f8fb44
$ docker rm -f 83d04d0a3a3f9785bdde3932f55dee36c079147eb655c1ee9d14f5b542f8fb44
[Pipeline] // withDockerContainer
The corresponding stage in the Jenkins pipeline is:
stage("Build docker containers & coreupdate packages") {
    agent {
        docker {
            image "quay.io/arubadevops/acp-build:ut-build"
            label "acp-ci-ubuntu"
            args "-u ubuntu --net=host -v /var/run/docker.sock:/var/run/docker.sock -v $HOME/.docker:/home/ubuntu/.docker"
        }
    }
    steps {
        script {
            try {
                sh "export CI_BUILD_NUMBER=${currentBuild.number}; cd docker; ./build.sh; cd ../test; ./build.sh;"
                ciBuildStatus="PASSED"
            } catch (err) {
                ciBuildStatus="FAILED"
            }
        }
    }
}
What could be the reasons why the process is not getting started within the docker container? Any pointers on how to debug further are also helpful.
This error means the Jenkins process is stuck on some command.
Some suggestions:
Upgrade all of your plugins and re-try.
Make sure you have the right number of executors and that jobs aren't stuck in the queue.
If you're pulling the image (not a local one), try adding alwaysPull true (on the line after image).
When using agent inside a stage, remove the outer agent. See: JENKINS-63449.
Set org.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true in Jenkins's Script Console to debug (see the snippet after this list).
When the process is stuck, SSH to the Jenkins VM and run docker ps to see which command is running.
Run docker ps -a to see the latest failed runs. In my case it tried to run cat next to the custom CMD command set by the container (e.g. ansible-playbook cat), which was an invalid command. The cat command is used by design. To change the entrypoint, see JENKINS-51307.
If your container is still running, you can log in to it with docker exec -it -u0 $(docker ps -ql) bash and run ps wuax to see what it's doing.
Try removing some global variables (could be a bug); see: parallel jobs not starting with docker workflow.
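For the LAUNCH_DIAGNOSTICS suggestion above, the Script Console accepts plain Groovy, so enabling it amounts to setting the plugin's static field (effective until the next Jenkins restart):
// Manage Jenkins -> Script Console
org.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS = true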
The issue is caused by some breaking changes introduced in the Jenkins durable-task plugin v1.31.
Source:
https://issues.jenkins-ci.org/browse/JENKINS-59907 and
https://github.com/jenkinsci/durable-task-plugin/blob/master/CHANGELOG.md
Solution:
Upgrading the Jenkins durable-task plugin to v1.33 resolved the issue for us.
I had this same problem, and in my case it was related to the -u <user> arg passed to the agent. In the end, changing my pipeline to use -u root fixed the problem.
In the original post, notice that -u ubuntu was used to run the container:
docker run -t -d -u 1002:1006 -u ubuntu ... -e ******** quay.io/arubadevops/acp-build:ut-build cat
I was also using a custom user, one I had added when building the Docker image.
agent {
    docker {
        image "app:latest"
        args "-u someuser"
        alwaysPull false
        reuseNode true
    }
}
steps {
    sh '''
        # DO STUFF
    '''
}
Starting the container locally using the same Jenkins commands works OK:
docker run -t -d -u 1000:1000 -u someuser app:image cat
docker top <hash> -eo pid,comm
docker exec -it <hash> ls # DO STUFF
But in Jenkins, it fails with the same "process never started.." error:
$ docker run -t -d -u 1000:1000 -u someuser app:image cat
$ docker top <hash> -eo pid,comm
[Pipeline] {
[Pipeline] unstash
[Pipeline] sh
process apparently never started in /home/jenkins/agent/workspace/branch#tmp/durable-f5dfbb1c
For some reason, changing it to -u root worked.
agent {
    docker {
        image "app:latest"
        args "-u root" # <=-----------
        alwaysPull false
        reuseNode true
    }
}
If you have upgraded the durable-task plugin to 1.33 or later and it still won't work, check whether there's an empty environment variable configured in your pipeline or stored in the Jenkins configuration, and remove it.
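As a hypothetical illustration, an empty entry like SOME_VAR below is the kind of variable to look for and remove:
environment {
    SOME_VAR = '' // an empty value like this can reportedly break the sh step
}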
In addition to kenorb's answer:
Check permissions inside the container you are running in and the Jenkins directory on the build host.
I am running custom Docker containers, and after several hours of debugging I found the cause by exec-ing into the running container, running echo "$(ps waux)", and executing those sh -c commands one by one: Jenkins couldn't create the log file inside the container due to a mismatch in UID and GID.
If you are running Jenkins inside of Docker and using a DinD container for Jenkins running Docker jobs, make sure you mount your Jenkins data volume to /var/jenkins_home in the service providing the Docker daemon. The log creation is actually being attempted by the daemon, which means the daemon container needs access to the volume with the workspace that is being operated on.
Example snippet for docker-compose.yml:
services:
  dind:
    container_name: dind-for-jenkins
    privileged: true
    image: docker:stable-dind
    volumes:
      - 'jenkins-data:/var/jenkins_home'
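A fuller sketch with the Jenkins service attached to the same named volume might look like this (the service names, the jenkins-data volume, and the plain-TCP daemon socket are assumptions):
services:
  dind:
    container_name: dind-for-jenkins
    privileged: true
    image: docker:stable-dind
    environment:
      - DOCKER_TLS_CERTDIR= # assume a plain-TCP daemon on 2375, no TLS
    volumes:
      - 'jenkins-data:/var/jenkins_home'
  jenkins:
    image: jenkins/jenkins:lts
    environment:
      - DOCKER_HOST=tcp://dind:2375 # point the Jenkins Docker client at the dind daemon
    volumes:
      - 'jenkins-data:/var/jenkins_home'
volumes:
  jenkins-data: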
This has eaten my life! I tried every imaginable solution from at least 10 SO posts, and in the end it was because my pipeline had spaces in its name. :|
So I changed "let's try scripting" to "scripts_try" and it just worked.
I was building a Jenkins job which runs within a Docker container and ran into this same error. The version of the durable-task plugin was v1.35, so that was not the issue. My issue was that my job was trying to run a chmod -R 755 *.sh command, and the active user within the container did not have sufficient permissions to execute chmod against those files. I would have expected Jenkins to fail the job here, but launching the container with an ID that did have permission to run the chmod command got past this error.

Jenkins: dockerfile agent commands not run in container

This is my Jenkinsfile:
pipeline {
    agent {
        dockerfile true
    }
    stages {
        stage('Run tests') {
            steps {
                sh 'pwd'
                sh 'vendor/bin/phpunit'
            }
        }
    }
}
I'm running Jenkins, and although I am able to build the image successfully, "Run tests" is run outside of the new container, on the host. This is not good; I want the command to run from within the new container built with the help of the dockerfile agent.
I know that the shell command is run on the host, because I've already debugged with sh 'pwd', which printed /var/jenkins_home/workspace/youtube-delete-tracker_jenkins.
Here is the end of the output in the console for the Jenkins job:
Step 18/18 : RUN chmod a+rw database/
---> Using cache
---> 0fedd44ea512
Successfully built 0fedd44ea512
Successfully tagged e74bf5ee4aa59afc2c4524c57a81bdff8a341140:latest
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // stage
[Pipeline] sh
+ docker inspect -f . e74bf5ee4aa59afc2c4524c57a81bdff8a341140
.
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 112:116 -w /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow -v /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow:/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow:rw,z -v /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow#tmp:/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** e74bf5ee4aa59afc2c4524c57a81bdff8a341140 cat
$ docker top 64bbdf257492046835d7cfc17413fb2d78c858234aa5936d7427721f0038742b -eo pid,comm
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Run tests)
[Pipeline] sh
+ pwd
/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow
[Pipeline] sh
+ vendor/bin/phpunit
/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow#tmp/durable-4049973d/script.sh: 1: /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow#tmp/durable-4049973d/script.sh: vendor/bin/phpunit: not found
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 64bbdf257492046835d7cfc17413fb2d78c858234aa5936d7427721f0038742b
$ docker rm -f 64bbdf257492046835d7cfc17413fb2d78c858234aa5936d7427721f0038742b
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
As you can see, pwd gives me a path on the host (the Jenkins job folder), and vendor/bin/phpunit was not found (it should be there in the container, because the package manager successfully installed it, as per the docker build output that I didn't include).
So how can I get the sh commands running from within the container? Or, alternatively, how do I get the image tag generated by the dockerfile agent, so that I could manually run docker run from outside to start the new container?
INFO: The issue doesn't seem to be specific to Declarative Pipelines, because I just tried the scripted style as well and got the same pwd, pointing at the Jenkins workspace: https://github.com/amcsi/youtube-delete-tracker/blob/4bf584a358c9fecf02bc239469355a2db5816905/Jenkinsfile.groovy#L6
INFO 2: At first I thought this was a Jenkins-within-Docker issue, and I wrote my question as such... but it turned out I was getting the same issue if I ran Jenkins on my host rather than within a container.
INFO 3: Versions...
Jenkins ver. 2.150.1
Docker version 18.09.0, build 4d60db4
Ubuntu 18.04.1 LTS
EDIT:
Jenkins mounts its local workspace into the Docker container and cds into it automatically. Therefore, the WORKDIR you set in the Dockerfile gets overridden. In your console output, the docker run command shows this fact:
docker run -t -d -u 112:116 -w /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow -v /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow:/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow:rw,z
From the docker run man page (man docker-run):

-w, --workdir=""
    Working directory inside the container.
    The default working directory for running binaries within a container is the root directory (/). The developer can set a different default with the Dockerfile WORKDIR instruction. The operator can override the working directory by using the -w option.

-v|--volume[=[[HOST-DIR:]CONTAINER-DIR[:OPTIONS]]]
    Create a bind mount.
    If you specify -v /HOST-DIR:/CONTAINER-DIR, Docker bind mounts /HOST-DIR in the host to /CONTAINER-DIR in the Docker container. If 'HOST-DIR' is omitted, Docker automatically creates the new volume on the host. The OPTIONS are a comma-delimited list.
So, you need to manually cd into your own $WORKDIR before executing the said commands. By the way, you might also want to create a symbolic link (ln -s) from the Jenkins volume mount dir to your own $WORKDIR; this allows you to browse the workspace via the Jenkins UI, etc.
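In this question's terms that could look like the following sketch (/app standing in for whatever WORKDIR the project's Dockerfile sets):
stage('Run tests') {
    steps {
        // the workspace is still mounted, but we run from the image's own WORKDIR
        sh 'cd /app && vendor/bin/phpunit'
    }
}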
Finally, to reiterate for clarity: Jenkins runs all of its build stages inside a Docker container if you specify the docker agent at the top.
Old answer:
Jenkins runs all of its build stages inside a Docker container if you specify the docker agent at the top. You can verify which executor and agent the build is running on using environment variables.
So, first find an environment variable that is specific to your agent. I use the NODE_NAME env var for this purpose. You can find all available environment variables in your Jenkins instance via http://localhost:8080/env-vars.html (replace the host name to match your instance).
...
stages {
    stage('Run tests') {
        steps {
            echo "I'm executing in node: ${env.NODE_NAME}"
            sh 'vendor/bin/phpunit'
        }
    }
}
...
A note regarding "vendor/bin/phpunit: not found": this seems to be due to a typo, if I'm not mistaken.

Jenkinsfile docker agent step dies after 1 second

I have a very simple Jenkinsfile as seen below.
def workspace
node {
    workspace = sh(returnStdout: true, script: 'pwd').trim()
}

pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker {
                    image 'composer'
                    args "-v /var/lib/jenkins/.composer/auth.json:/.composer/auth.json -v $workspace:/app"
                }
            }
            steps {
                sh 'php -v'
                sh 'composer install --no-interaction --working-dir=$WORKSPACE/backend'
            }
        }
    }
}
I've gotten to the point where this works entirely as intended (e.g.: mounts volumes as expected, moves things around, pulls image, actually runs composer install), with one minor exception...
Immediately following the docker run, it gets into the shell steps, runs sh 'composer install...', and dies after 1 second, going straight into the docker stop --time=1 ... and docker rm ... steps.
I have no idea if this is coming from Composer doing something odd, or if there's some configurable timeout I'm completely unaware of.
Has anyone dealt with this before?
Edit:
Here is more information:
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 997:995 -v /var/lib/jenkins/.composer/auth.json:/.composer/auth.json -v [...] -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat composer
[Pipeline] {
[Pipeline] sh
[workspace] Running shell script
+ php -v
PHP 7.1.9 (cli) (built: Sep 15 2017 00:07:01) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.1.0, Copyright (c) 1998-2017 Zend Technologies
[Pipeline] sh
[workspace] Running shell script
+ composer install --no-interaction --working-dir=/app/backend --no-progress --no-ansi
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Package operations: 29 installs, 0 updates, 0 removals
- Installing laravel/tinker (v1.0.2): Downloading[Pipeline] }
$ docker stop --time=1 ee693aaa7cdde41b714fdc91dbc1b05ac07fe2be7904ab1ed528fb0a3f771047
$ docker rm -f ee693aaa7cdde41b714fdc91dbc1b05ac07fe2be7904ab1ed528fb0a3f771047
[Pipeline] // withDockerContainer
[Pipeline] }
and from an earlier job
Installing dependencies (including require-dev) from lock file
Package operations: 55 installs, 0 updates, 0 removals
- Installing symfony/finder (v3.3.6): Downloading (connecting...)[Pipeline] }
It's working, as can be seen, but the return code at the end is:
GitHub has been notified of this commit’s build result
ERROR: script returned exit code -1
Finished: FAILURE
Edit 2:
Got this to work even simpler, see gist for more info:
https://gist.github.com/chuckyz/6b78b19a6a5ea418afa16cc58020096e
It's a bug in Jenkins, so until this is marked as fixed I'm just using sh steps with docker run ... in them, manually.
https://issues.jenkins-ci.org/browse/JENKINS-35370
e.g.:
sh 'docker run -v $WORKSPACE:/app composer install'
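An expanded version of that manual workaround might look like this sketch (the flags roughly mirror what the docker agent would otherwise set up; the composer image's entrypoint turns the trailing arguments into a composer command):
sh '''
    docker run --rm \
        -v "$WORKSPACE":/app \
        -w /app \
        composer install --no-interaction
'''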
I experienced this and resolved it by upgrading durable-task-plugin from 1.13 to 1.16.
The 1.16 changelog contains:
Using a new system for determining whether sh step processes are still alive, which should solve various robustness issues.

Unable to run sh steps inside docker agent: script.sh Not Found

When trying to run the following declarative pipeline:
pipeline {
    agent { docker 'alpine' }
    stages {
        stage('Test') {
            steps {
                sh('printenv')
            }
        }
    }
}
I get the error:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Agent Setup)
[Pipeline] sh
[TmpTest] Running shell script
+ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
Digest: sha256:1072e499f3f655a032e88542330cf75b02e7bdf673278f701d7ba61629ee3ebe
Status: Image is up to date for alpine:latest
[Pipeline] }
[Pipeline] // stage
[Pipeline] sh
[TmpTest] Running shell script
+ docker inspect -f . alpine
.
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 107:113 -w /var/lib/jenkins/workspace/TmpTest -v /var/lib/jenkins/workspace/TmpTest:/var/lib/jenkins/workspace/TmpTest:rw,z -v /var/lib/jenkins/workspace/TmpTest#tmp:/var/lib/jenkins/workspace/TmpTest#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat alpine
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Provision Server)
[Pipeline] sh
[TmpTest] Running shell script
sh: /var/lib/jenkins/workspace/TmpTest#tmp/durable-1abfbc69/script.sh: not found
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 db551b51404ba6305f68f9086320634eeea3d515be134e5e55b51c3c9f1eb568
$ docker rm -f db551b51404ba6305f68f9086320634eeea3d515be134e5e55b51c3c9f1eb568
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
When monitoring the pipeline's #tmp directory while it's running, I can see script.sh created for a short period. I am unable to tell whether it has just been created, or has already been deleted, at the moment the pipeline tries to execute it in the running container.
Some system details:
Jenkins running as a single-node system which has Docker installed.
Jenkins v2.60.1 (all plugins fully updated)
docker --version
Docker version 17.06.0-ce, build 02c1d87
I have the same setup (single Jenkins 2.73.1 host on an EC2 instance, not inside a container, with Docker 17.09.0-ce) and the same behavior, with both declarative and scripted pipelines.
It tries to run the script on the host itself if you specify
sh 'sh ./yourscript.sh'
or
sh './yourscript.sh'
instead of sh 'script.sh'
