How do I run multiple commands within a Docker container in Jenkins?

I have a somewhat complicated build-and-test process that I would like to run within a Docker container. Since the software under test fires up some xterm windows, I'm running the X server and VNC within the container using supervisord to start them. Then, once the container is running with supervisord as the ENTRYPOINT, I can manually kick off the build-and-test process using docker exec.
I'm having trouble nailing down the approach to do this using Jenkins and the Docker Pipeline plugin. There seem to be some limitations with the current version of the plugin that I can't sufficiently work around. Here's the basic idea:
node {
    stage('Fetch installer') {
        // SCM commands go here
    }
    def image = docker.image('metatest')
    image.inside() {
        stage('Run installer') {
            sh "./installer.sh"
        }
        stage('Run tests') {
            sh "/opt/custom/run_tests.sh"
        }
    }
}
This approach is broken, however, because the image.inside() method overrides both the default user (which it sets to Jenkins' user ID) and the ENTRYPOINT. I need to be able to start the Docker container using the default root user because the X server processes must be root. However, I need to run the installation and tests within the container as the Jenkins non-root user so that Jenkins can manage the output materials.
For the record, image.inside() sets up a number of very useful parameters when starting the Docker container:
docker run -t -d -u 113:123 \
-w "/var/lib/jenkins/workspace/TestProj" \
-v "/var/lib/jenkins/workspace/TestProj:/var/lib/jenkins/workspace/TestProj:rw" \
-v "/var/lib/jenkins/workspace/TestProj#tmp:/var/lib/jenkins/workspace/TestProj#tmp:rw" \
-e ******** -e ******** -e ******** -e ******** -e ******** \
-e ******** -e ******** -e ******** -e ******** -e ******** \
-e ******** -e ******** -e ******** -e ******** -e ******** \
-e ******** -e ******** -e ******** \
--entrypoint cat \
metatest
The setup of the Jenkins uid:gid pair, the working directory, the volumes to share from the host, and the Jenkins environment variables (masked here) are all very useful and annoying to reproduce manually in Jenkins. The ENTRYPOINT override is not as helpful in this case.
Unfortunately the implementation of the Docker Pipeline command image.inside() does not allow me to separate the setup of a container from the execution of stages within the running container that it creates. There are other commands for part of that process, but they seem to be missing the actual execution step AND they don't include a lot of the useful parameter setup.
In other words, I'd like to do something like this:
node {
    stage('Fetch installer') {
        // SCM commands go here
    }
    def image = docker.image('metatest')
    def container = image.run('-u 0:0')
    container.inside() {
        stage('Run installer') {
            sh "./installer.sh"
        }
        stage('Run tests') {
            sh "/opt/custom/run_tests.sh"
        }
    }
    stage('Check output') {
        // run a script on the Jenkins host
    }
    container.inside() {
        stage('Final script step') {
            sh "./another_script.py"
        }
    }
    container.stop()
    container.remove()
}
The key differences are:
image.run() sets up the same uid:gid pair for Jenkins (which I then override), plus all the workspace, volumes, and environment variables, but NOT the ENTRYPOINT override.
container.inside() exists and allows me to perform only the execution step inside a running container.
container.inside() sets up all the same parameters as image.run() does in step 1.
container.stop() only stops the container, while remove() actually removes it. No need for them to be combined.
With this approach I can have separate commands executed inside the container, then run some additional commands in Jenkins, return to the container, stop and restart the container, save an image of the container before removing it, etc. All of those things are more difficult with the current plugin.
I've put in a request to Jenkins to add this separation, but I need an approach without that. Any ideas?
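One workaround sketch, not a plugin feature: use image.run(), which keeps the image's ENTRYPOINT but does not add the workspace mount or environment variables for you, and drive each step through docker exec from sh steps. The uid:gid 1000:1000 below is an illustrative assumption; substitute your Jenkins user's IDs.
node {
    stage('Fetch installer') {
        // SCM commands go here
    }
    def image = docker.image('metatest')
    // Start as root so supervisord/X can run; mount the workspace by hand since run() won't.
    def container = image.run("-u 0:0 -v ${env.WORKSPACE}:${env.WORKSPACE} -w ${env.WORKSPACE}")
    try {
        stage('Run installer') {
            // Execute build steps as the (assumed) Jenkins uid:gid so Jenkins can manage the output.
            sh "docker exec -u 1000:1000 ${container.id} ./installer.sh"
        }
        stage('Run tests') {
            sh "docker exec -u 1000:1000 ${container.id} /opt/custom/run_tests.sh"
        }
        stage('Check output') {
            // run a script on the Jenkins host
        }
    } finally {
        // In the current plugin, stop() both stops and removes the container.
        container.stop()
    }
}
This is more manual than the wished-for container.inside(), but it keeps the container's ENTRYPOINT intact while still letting Jenkins run the individual steps.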

Related

Jenkins Docker Pipeline Plugin not working when run inside a Jenkins Slave Docker Container

Here's the situation:
I have a Docker container running Jenkins. I've mounted the Docker socket into my container so that I can perform docker commands inside my Jenkins container. Everything is set up correctly, and I have a Jenkins slave that is itself an Ubuntu Docker container.
I'd like to run a command inside a separate docker container created by the Jenkinsfile. The Docker Pipeline plugin promises to offer a solution to this common requirement.
Here's my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Test') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
    }
}
Running the above where the slave Jenkins node is not inside a Docker container works as expected. It prints:
mvn --version
Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 2017-04-03T19:39:06Z)
Maven home: /usr/share/maven
Java version: 1.8.0_131, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8-openjdk/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "5.4.8-gentoo", arch: "amd64", family: "unix"
It also works when I run the above on the Jenkins Master (which is itself a docker container).
However, when the slave is running (as an Ubuntu Docker container), I get the following error. Fix-Jenkins-Local is the name of the GitHub repo (branch) containing the Jenkinsfile:
Slave 1 seems to be running inside container 5cf4397c3abff105818b2ab92e275fa8265bd491170f47723a8e2cef9b308a3b
but /tmp/jenkins_workspace/workspace/Fix-Jenkins-Local#2 could not be found among []
but /tmp/jenkins_workspace/workspace/Fix-Jenkins-Local#2#tmp could not be found among []
$ docker run -t -d -u 1000:1000 -w /tmp/jenkins_workspace/workspace/Fix-Jenkins-Local#2 -v /tmp/jenkins_workspace/workspace/Fix-Jenkins-Local#2:/tmp/jenkins_workspace/workspace/Fix-Jenkins-Local#2:rw,z -v /tmp/jenkins_workspace/workspace/Fix-Jenkins-Local#2#tmp:/tmp/jenkins_workspace/workspace/Fix-Jenkins-Local#2#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** maven:3-alpine cat
$ docker top 977b4035f431b617eab29e6b8292bc66fc71e3fed82af4788d89d6f9b803c4c7 -eo pid,comm
[Pipeline] {
[Pipeline] sh
process apparently never started in /tmp/jenkins_workspace/workspace/Fix-Jenkins-Local#2#tmp/durable-63950950
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
Anyone have any experience attempting to run docker pipeline plugin inside a jenkins docker container? Any thoughts much appreciated.
I've also tried another approach without using the Docker Pipeline plugin here.
Reading another SO question here, it appears you have to be mindful of the following:
For inside to work, the Docker server and the Jenkins agent must use the same filesystem, so that the workspace can be mounted. The easiest way to ensure this is for the Docker server to be running on localhost (the same computer as the agent). Currently neither the Jenkins plugin nor the Docker CLI will automatically detect the case that the server is running remotely; When Jenkins can detect that the agent is itself running inside a Docker container, it will automatically pass the --volumes-from argument to the inside container, ensuring that it can share a workspace with the agent.
Two screenshots (not reproduced here) show the job running successfully on the Jenkins master (a Docker container) and unsuccessfully on the slave (also a Docker container).

How to fix "process apparently never started in ..." error in Jenkins pipeline?

I am getting the strange error below in my Jenkins pipeline
[Pipeline] withDockerContainer
acp-ci-ubuntu-test does not seem to be running inside a container
$ docker run -t -d -u 1002:1006 -u ubuntu --net=host -v /var/run/docker.sock:/var/run/docker.sock -v /home/ubuntu/.docker:/home/ubuntu/.docker -w /home/ubuntu/workspace/CD-acp-cassandra -v /home/ubuntu/workspace/CD-acp-cassandra:/home/ubuntu/workspace/CD-acp-cassandra:rw,z -v /home/ubuntu/workspace/CD-acp-cassandra#tmp:/home/ubuntu/workspace/CD-acp-cassandra#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** quay.io/arubadevops/acp-build:ut-build cat
$ docker top 83d04d0a3a3f9785bdde3932f55dee36c079147eb655c1ee9d14f5b542f8fb44 -eo pid,comm
[Pipeline] {
[Pipeline] sh
process apparently never started in /home/ubuntu/workspace/CD-acp-cassandra#tmp/durable-70b242d1
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
$ docker stop --time=1 83d04d0a3a3f9785bdde3932f55dee36c079147eb655c1ee9d14f5b542f8fb44
$ docker rm -f 83d04d0a3a3f9785bdde3932f55dee36c079147eb655c1ee9d14f5b542f8fb44
[Pipeline] // withDockerContainer
The corresponding stage in Jenkins pipeline is
stage("Build docker containers & coreupdate packages") {
agent {
docker {
image "quay.io/arubadevops/acp-build:ut-build"
label "acp-ci-ubuntu"
args "-u ubuntu --net=host -v /var/run/docker.sock:/var/run/docker.sock -v $HOME/.docker:/home/ubuntu/.docker"
}
}
steps {
script {
try {
sh "export CI_BUILD_NUMBER=${currentBuild.number}; cd docker; ./build.sh; cd ../test; ./build.sh;"
ciBuildStatus="PASSED"
} catch (err) {
ciBuildStatus="FAILED"
}
}
}
}
What could be the reasons why the process is not getting started within the docker container? Any pointers on how to debug further are also helpful.
This error means the Jenkins process is stuck on some command.
Some suggestions:
Upgrade all of your plugins and re-try.
Make sure you have the right number of executors and that jobs aren't stuck in the queue.
If you're pulling the image (not using a local one), try adding alwaysPull true (on the line after image).
When using agent inside a stage, remove the outer agent. See: JENKINS-63449.
Set org.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true in Jenkins's Script Console to debug (see the sketch after this list).
When the process is stuck, SSH to the Jenkins VM and run docker ps to see which command is running.
Run docker ps -a to see the latest failed runs. In my case it tried to run cat next to the custom CMD command set by the container (e.g. ansible-playbook cat), which was an invalid command. The cat command is used by design. To change the entrypoint, please read JENKINS-51307.
If your container is still running, you can log in to it with docker exec -it -u0 $(docker ps -ql) bash and run ps wuax to see what it's doing.
Try removing some global variables (could be a bug), see: parallel jobs not starting with docker workflow.
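A sketch of the Script Console approach from the list above, assuming your durable-task plugin version still exposes LAUNCH_DIAGNOSTICS as a public static flag:
// Run in Manage Jenkins » Script Console; this toggles the same flag as starting Jenkins with
// -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true, so subsequent
// sh steps log extra detail about why the wrapped process never started.
org.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS = true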
The issue is caused by some breaking changes introduced in the Jenkins durable-task plugin v1.31.
Source:
https://issues.jenkins-ci.org/browse/JENKINS-59907 and
https://github.com/jenkinsci/durable-task-plugin/blob/master/CHANGELOG.md
Solution:
Upgrading the Jenkins durable-task plugin to v1.33 resolved the issue for us.
I had this same problem and in my case, it was related to the -u <user> arg passed to the agent. In the end, changing my pipeline to use -u root fixed the problem.
In the original post, I notice a -u ubuntu was used to run the container:
docker run -t -d -u 1002:1006 -u ubuntu ... -e ******** quay.io/arubadevops/acp-build:ut-build cat
I was also using a custom user, one I had added when building the Docker image.
agent {
    docker {
        image "app:latest"
        args "-u someuser"
        alwaysPull false
        reuseNode true
    }
}
steps {
    sh '''
        # DO STUFF
    '''
}
Starting the container locally using the same Jenkins commands works OK:
docker run -t -d -u 1000:1000 -u someuser app:image cat
docker top <hash> -eo pid,comm
docker exec -it <hash> ls # DO STUFF
But in Jenkins, it fails with the same "process never started.." error:
$ docker run -t -d -u 1000:1000 -u someuser app:image cat
$ docker top <hash> -eo pid,comm
[Pipeline] {
[Pipeline] unstash
[Pipeline] sh
process apparently never started in /home/jenkins/agent/workspace/branch#tmp/durable-f5dfbb1c
For some reason, changing it to -u root worked.
agent {
    docker {
        image "app:latest"
        args "-u root"    // <=-----------
        alwaysPull false
        reuseNode true
    }
}
If you have upgraded the durable-task plugin to 1.33 or later and it still won't work, check whether an empty environment variable is configured in your pipeline or stored in the Jenkins configuration (dashed), and remove it.
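Purely as a hypothetical illustration (the variable name is made up), an empty entry like this in a Declarative pipeline is the kind of thing to look for and remove:
pipeline {
    agent any
    environment {
        // A value that resolves to an empty string - whether defined here or as an
        // empty global property under Manage Jenkins - is what to hunt for when
        // troubleshooting this error.
        SOME_VAR = ''
    }
    stages {
        stage('Build') {
            steps {
                sh 'echo building'
            }
        }
    }
}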
In addition to kenorb's answer:
Check permissions inside the container you are running in and the Jenkins directory on the build host.
I am running custom Docker containers and, after several hours of debugging, I found the cause by exec-ing into the running container, running echo "$(ps waux)", and executing the same sh -c commands Jenkins was trying to execute, one by one: Jenkins couldn't create the log file inside the container due to a mismatch in UID and GID.
If you are running Jenkins inside of Docker and using a DinD container for Jenkins running Docker jobs, make sure you mount your Jenkins data volume to /var/jenkins_home in the service providing the Docker daemon. The log creation is actually being attempted by the daemon, which means the daemon container needs access to the volume with the workspace that is being operated on.
Example snippet for docker-compose.yml:
services:
  dind:
    container_name: dind-for-jenkins
    privileged: true
    image: docker:stable-dind
    volumes:
      - 'jenkins-data:/var/jenkins_home'
This has eaten my life! I tried every imaginable solution on at least 10 SO posts, and in the end it was because my pipeline had spaces in its name. :|
So I changed "let's try scripting" to "scripts_try" and it just worked.
I was building a Jenkins job which runs within a Docker container and ran into this same error. The version of the durable-task plugin was v1.35, so that was not the issue. My issue was that my job was trying to run a chmod -R 755 *.sh command, and the active user within the container did not have sufficient permissions to execute chmod against those files. I would have expected Jenkins to fail the job here, but launching the container using an ID which did have permission to run the chmod command got past this error.

Docker agent volume mount in Jenkins pipeline not working as expected

The following snippet with a volume mount creates the Maven dependencies under $JENKINS_HOME/workspace/<project-name>/? (a literal question-mark directory) instead of under $HOME/.m2/.
Note that settings.xml mirrors to our internal repository, and the instructions on how to mount have been taken directly from jenkins.io.
Does anyone have any clue why this is happening?
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /tmp/jenkins/.m2:/root/.m2:rw,z'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install -s settings.xml'
            }
        }
    }
}
This is not as simple as using Docker standalone. I created the /var/jenkins/.m2 directory on the Jenkins slave where the build would be running, ensured the new directory has 775 permissions (although that may not be required), and also changed the owner to match that of /var/opt/slave/workspace/pipeline_test (I got this path from the logs below):
$ docker login -u dcr-login -p ******** https://nexus.corp.zenmonics.com:8449
Login Succeeded
[Pipeline] {
[Pipeline] sh
+ docker inspect -f . nexus.corp.zenmonics.com:8449/maven:3-alpine
.
[Pipeline] withDockerContainer
cucj1sb3 does not seem to be running inside a container
$ docker run -t -d -u 1002:1002 -v /tmp/jenkins/.m2:/root/.m2:rw,z -w
/var/opt/slave/workspace/pipeline_test -v /var/opt/slave/workspace/pipeline_test:/var/opt/slave/workspace/pipeline_test:rw,z -v /var/opt/slave/workspace/pipeline_test#tmp:/var/opt/slave/workspace/pipeline_test#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** nexus.corp.zenmonics.com:8449/maven:3-alpine cat
$ docker top c7282468dbb6952aadbe4bb495757e7047122b179c81516645ba23759b78c366 -eo pid,comm
This statement on the official Maven image page at Docker Hub (https://hub.docker.com/_/maven) makes me think the volume mount approach needs updating:
To create a pre-packaged repository, create a pom.xml with the dependencies you need and use this in your Dockerfile. /usr/share/maven/ref/settings-docker.xml is a settings file that changes the local repository to /usr/share/maven/ref/repository, but you can use your own settings file as long as it uses /usr/share/maven/ref/repository as local repo.
The documentation at https://jenkins.io/doc/book/pipeline/docker/ is misleading and a waste of time when it comes to volume mounting.
When the Docker container is created, it is created with user 1002 and group 1002. User 1002 doesn't have access to /root/.m2 and only has access to the working directory injected into the container.
Dockerfile
FROM maven:3-alpine
COPY --chown=1002:1002 repository/ /usr/share/maven/ref/repository/
RUN chmod -R 775 /usr/share/maven/ref/repository
COPY settings.xml /usr/share/maven/ref/
Settings.xml
<localRepository>/usr/share/maven/ref/repository</localRepository>
Docker command
docker build -t <server>:<port>/<image-name>:<image-tag> .
docker push <server>:<port>/<image-name>:<image-tag>
docker volume create maven-repo
Jenkinsfile
pipeline {
    agent {
        docker {
            label '<slave-label-here>'
            image '<image-name>:<image-tag>'
            registryUrl 'https://<server>:<port>'
            registryCredentialsId '<jenkins-credentials-for-docker-login>'
            args '-v maven-repo:/usr/share/maven/ref/repository/'
        }
    }
    parameters {
        booleanParam(name: 'SONAR', defaultValue: false, description: 'Select this option to run SONAR Analysis')
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install -s /usr/share/maven/ref/settings.xml -f pom.xml'
            }
        }
    }
}
As @masseyb mentions in the comments, Jenkins treats $HOME as the current build context.
There are two workarounds:
a) Use a Jenkins plugin to set the environment variable
You can use the Envinject Plugin to set environment variables in Jenkins.
b) Specify an absolute path instead of $HOME/.m2
You can specify an absolute path for .m2, e.g.:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /home/samir-shail/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B'
            }
        }
    }
}
Note: please check that Jenkins has access to your $HOME/.m2/ directory.

Jenkins docker container always adds cat command

I am creating a Jenkins pipeline for running Terraform in a Docker container.
Here is my pipeline script.
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:full'
            args '--entrypoint=/bin/bash'
        }
    }
    stages {
        stage('execute') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}
When running this pipeline on Jenkins, I get the below error.
$ docker run -t -d -u 995:993 --entrypoint=/bin/bash -w /var/lib/jenkins/workspace/terraform -v /var/lib/jenkins/workspace/terraform:/var/lib/jenkins/workspace/terraform:rw,z -v /var/lib/jenkins/workspace/terraform#tmp:/var/lib/jenkins/workspace/terraform#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** hashicorp/terraform:full cat
$ docker top a0b801d657d0fffdfa95c387564128b130ab1d28569ad59bd0151c8b7faf6ffd -eo pid,comm
java.io.IOException: Failed to run top 'a0b801d657d0fffdfa95c387564128b130ab1d28569ad59bd0151c8b7faf6ffd'. Error: Error response from daemon: Container a0b801d657d0fffdfa95c387564128b130ab1d28569ad59bd0151c8b7faf6ffd is not running
It seems like Jenkins adds a cat command when running the image hashicorp/terraform:full.
Note that, I have overridden the entrypoint to /bin/bash using --entrypoint=/bin/bash since hashicorp/terraform:full already has an entrypoint defined.
I had to set the ENTRYPOINT to empty to disable the entrypoint definition from the Terraform image, and I think the light image is sufficient for just executing terraform.
I got it working with the following script:
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            args '--entrypoint='
        }
    }
    stages {
        stage('execute') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}
This is the default behavior of the docker-workflow plugin in Jenkins.
[FIXED JENKINS-41316] Switch 'inside' back to CMD, detect if entrypoint was badly designed #116 https://github.com/jenkinsci/docker-workflow-plugin/pull/116
we run whatever the process the image specifies (even sh -c)
Their purpose is
That will break in most images, since for this purpose we need to start a container, pause it while we exec some stuff, and then stop it, without having to guess what its “main command” might run and when it might exit on its own. That is why we cat (I also have considered sleep infinity or some POSIX-compliant variant).
https://issues.jenkins-ci.org/browse/JENKINS-39748
code is here: https://github.com/jenkinsci/docker-workflow-plugin/blob/50ad50bad2ee14eb73d1ae3ef1058b8ad76c9e5d/src/main/java/org/jenkinsci/plugins/docker/workflow/WithContainerStep.java#L184
They want the container to be /* expected to hang until killed */.
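For illustration only, a rough sketch (not the plugin's actual implementation) of what that behavior amounts to in plain docker commands, using hashicorp/terraform:light as the example image:
node {
    def img = 'hashicorp/terraform:light'   // example image
    // 1. Start a long-lived container whose main process is just `cat`, so it hangs
    //    until killed; the ENTRYPOINT is cleared so that `cat` really is the main process.
    def cid = sh(returnStdout: true,
                 script: "docker run -t -d --entrypoint='' ${img} cat").trim()
    try {
        // 2. Each sh step is then executed inside that running container.
        sh "docker exec ${cid} terraform --version"
    } finally {
        // 3. Tear the container down when the block ends.
        sh "docker stop --time=1 ${cid}"
        sh "docker rm -f ${cid}"
    }
}
This mirrors the docker run ... cat, docker top, and docker stop/docker rm sequence visible in the console logs above.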
Original answer:
Try running without the -d option (which means run in the background):
docker run -it --entrypoint=/bin/bash hashicorp/terraform:full
Then you can enter the container to run whatever you want.
Take nginx as an example:
docker run -it --entrypoint=/bin/bash nginx
root@e4dc1d08de1d:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@e4dc1d08de1d:/# cat /var/log/
apt/ btmp dpkg.log faillog lastlog nginx/ wtmp
root@e4dc1d08de1d:/# cat /var/log/faillog
root@e4dc1d08de1d:/#
In my case the entrypoint caused some trouble for me, so I needed to overwrite it by passing an empty entrypoint argument to the inside method, like here:
pipeline {
    agent {
        label 'some_label'
    }
    stages {
        stage('execute') {
            steps {
                script {
                    img = docker.build("docker_image_name:docker_image_tag")
                    img.inside('--entrypoint= -e NODE_ENV=test') {
                        sh 'npm install --dev'
                        sh 'npm run test'
                    }
                }
            }
        }
    }
}
This example is like S.Spieker's comment but with different syntax.
Remark: npm commands like npm test can differ between Node.js projects, so you will need to get the relevant commands from the developer.
If this example is still not working for you, you probably need to change your Docker image entrypoint, as done here: https://github.com/SonarSource/sonar-scanner-cli-docker/pull/31/files
You can learn more about the docker-workflow plugin here: https://docs.cloudbees.com/docs/admin-resources/latest/plugins/docker-workflow
Some more examples: Jenkins: How to use JUnit plugin when Maven builds occur within Docker container

Jenkins: dockerfile agent commands not run in container

This is my Jenkinsfile:
pipeline {
    agent {
        dockerfile true
    }
    stages {
        stage('Run tests') {
            steps {
                sh 'pwd'
                sh 'vendor/bin/phpunit'
            }
        }
    }
}
I'm running Jenkins, and although I am able to build the image successfully, "Run tests" is run outside of the new container, on the host. This is not good; I want the command to run from within the new container built with the help of the dockerfile agent.
I know that the shell command runs on the host because I've already tried debugging with sh 'pwd', which gave me /var/jenkins_home/workspace/youtube-delete-tracker_jenkins.
Here is the end of the output in the console for the Jenkins job:
Step 18/18 : RUN chmod a+rw database/
---> Using cache
---> 0fedd44ea512
Successfully built 0fedd44ea512
Successfully tagged e74bf5ee4aa59afc2c4524c57a81bdff8a341140:latest
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // stage
[Pipeline] sh
+ docker inspect -f . e74bf5ee4aa59afc2c4524c57a81bdff8a341140
.
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 112:116 -w /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow -v /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow:/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow:rw,z -v /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow#tmp:/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** e74bf5ee4aa59afc2c4524c57a81bdff8a341140 cat
$ docker top 64bbdf257492046835d7cfc17413fb2d78c858234aa5936d7427721f0038742b -eo pid,comm
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Run tests)
[Pipeline] sh
+ pwd
/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow
[Pipeline] sh
+ vendor/bin/phpunit
/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow#tmp/durable-4049973d/script.sh: 1: /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow#tmp/durable-4049973d/script.sh: vendor/bin/phpunit: not found
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 64bbdf257492046835d7cfc17413fb2d78c858234aa5936d7427721f0038742b
$ docker rm -f 64bbdf257492046835d7cfc17413fb2d78c858234aa5936d7427721f0038742b
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
As you can see, pwd gives me a path on the host (the Jenkins job folder), and vendor/bin/phpunit was not found (which should be there in the container, because the package manager successfully built it as per the docker build output that I didn't include).
So how can I get the sh commands running from within the container? Or alternatively, how do I get the image tag generated by the dockerfile agent so that I could manually run docker run against the new image myself?
INFO: The issue doesn't seem to have to do with the Declarative Pipelines, because I just tried doing it Imperative style and also get the same pwd of the Jenkins container: https://github.com/amcsi/youtube-delete-tracker/blob/4bf584a358c9fecf02bc239469355a2db5816905/Jenkinsfile.groovy#L6
INFO 2: At first I thought this was an Jenkins-within-Docker issue, and I wrote my question as such... but it turned out I was getting the same issue if I ran Jenkins on my host rather than within a container.
INFO 3: Versions...
Jenkins ver. 2.150.1
Docker version 18.09.0, build 4d60db4
Ubuntu 18.04.1 LTS
EDIT:
Jenkins mounts its local workspace into the Docker container and cds into it automatically. Therefore, the WORKDIR you set in the Dockerfile gets overridden. In your console output, the docker run command shows this fact:
docker run -t -d -u 112:116 -w /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow -v /var/jenkins_home/workspace/ube-delete-tracker_stackoverflow:/var/jenkins_home/workspace/ube-delete-tracker_stackoverflow:rw,z
From docker run man page (man docker run):
-w, --workdir=""
    Working directory inside the container
    The default working directory for running binaries within a container is the root directory (/).
    The developer can set a different default with the Dockerfile WORKDIR instruction.
    The operator can override the working directory by using the -w option.
-v|--volume[=[[HOST-DIR:]CONTAINER-DIR[:OPTIONS]]]
    Create a bind mount.
    If you specify -v /HOST-DIR:/CONTAINER-DIR, Docker bind mounts /HOST-DIR in the host to /CONTAINER-DIR in the Docker container.
    If 'HOST-DIR' is omitted, Docker automatically creates the new volume on the host.
    The OPTIONS are a comma delimited list and can be:
So, you need to manually cd into your own $WORKDIR before executing the said commands. By the way, you might also want to create a symbolic link (ln -s) from the Jenkins volume mount dir to your own $WORKDIR; this lets you browse the workspace via the Jenkins UI, etc.
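A minimal sketch of that advice, assuming (purely for illustration) that the Dockerfile sets WORKDIR /app and installs the PHP dependencies there:
pipeline {
    agent {
        dockerfile true
    }
    stages {
        stage('Run tests') {
            steps {
                // The sh step starts in the mounted Jenkins workspace, so switch to the
                // image's own working directory (assumed to be /app) where vendor/ exists.
                sh 'cd /app && vendor/bin/phpunit'
            }
        }
    }
}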
Finally, to reiterate for clarity: Jenkins runs all its build stages inside a Docker container if you specify the Docker agent at the top.
Old answer:
Jenkins runs all its build stages inside a Docker container if you specify the Docker agent at the top. You can verify which executor and slave the build is running on using environment variables.
So, first find an environment variable that is specific to your slave. I use the NODE_NAME env var for this purpose. You can find all available environment variables in your Jenkins instance via http://localhost:8080/env-vars.html (replace the host name to match your instance).
...
stages {
    stage('Run tests') {
        steps {
            echo "I'm executing in node: ${env.NODE_NAME}"
            sh 'vendor/bin/phpunit'
        }
    }
}
...
A note regarding "vendor/bin/phpunit was not found": this seems to be due to a typo, if I'm not mistaken.
