I am creating a Jenkins pipeline that runs Terraform in a Docker container.
Here is my pipeline script:
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:full'
            args '--entrypoint=/bin/bash'
        }
    }
    stages {
        stage('execute') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}
When I run this pipeline on Jenkins, I get the error below.
$ docker run -t -d -u 995:993 --entrypoint=/bin/bash -w /var/lib/jenkins/workspace/terraform -v /var/lib/jenkins/workspace/terraform:/var/lib/jenkins/workspace/terraform:rw,z -v /var/lib/jenkins/workspace/terraform#tmp:/var/lib/jenkins/workspace/terraform#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** hashicorp/terraform:full cat
$ docker top a0b801d657d0fffdfa95c387564128b130ab1d28569ad59bd0151c8b7faf6ffd -eo pid,comm
java.io.IOException: Failed to run top 'a0b801d657d0fffdfa95c387564128b130ab1d28569ad59bd0151c8b7faf6ffd'. Error: Error response from daemon: Container a0b801d657d0fffdfa95c387564128b130ab1d28569ad59bd0151c8b7faf6ffd is not running
It looks like Jenkins appends a cat command when it runs the hashicorp/terraform:full image.
Note that I have overridden the entrypoint to /bin/bash with --entrypoint=/bin/bash, since hashicorp/terraform:full already defines an entrypoint.
I had to set the ENTRYPOINT to empty to disable the entrypoint defined in the Terraform image. I also think the light image is sufficient for just executing Terraform.
I got it working with the following script:
pipeline {
    agent {
        docker {
            image 'hashicorp/terraform:light'
            args '--entrypoint='
        }
    }
    stages {
        stage('execute') {
            steps {
                sh 'terraform --version'
            }
        }
    }
}
This seems to be the default behavior of the docker-workflow plugin in Jenkins.
[FIXED JENKINS-41316] Switch 'inside' back to CMD, detect if entrypoint was badly designed #116 https://github.com/jenkinsci/docker-workflow-plugin/pull/116
"we run whatever the process the image specifies (even sh -c)"
Their purpose is:
That will break in most images, since for this purpose we need to start a container, pause it while we exec some stuff, and then stop it, without having to guess what its “main command” might run and when it might exit on its own. That is why we cat (I also have considered sleep infinity or some POSIX-compliant variant).
https://issues.jenkins-ci.org/browse/JENKINS-39748
The code is here: https://github.com/jenkinsci/docker-workflow-plugin/blob/50ad50bad2ee14eb73d1ae3ef1058b8ad76c9e5d/src/main/java/org/jenkinsci/plugins/docker/workflow/WithContainerStep.java#L184
They want the container to be /* expected to hang until killed */, as the comment in that code puts it.
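To make that concrete, here is a rough scripted-pipeline sketch of what the plugin effectively does under the hood (simplified, with hashicorp/terraform:light as an example image; the real logic lives in WithContainerStep.java linked above):
node {
    // example image for illustration; the empty --entrypoint= override lets cat become the main process
    def image = 'hashicorp/terraform:light'
    // start a long-lived container whose main process is cat, so it hangs until killed
    def cid = sh(script: "docker run -t -d --entrypoint= ${image} cat", returnStdout: true).trim()
    try {
        // each sh step inside the block is effectively an exec into that paused container
        sh "docker exec ${cid} terraform --version"
    } finally {
        // then the container is stopped and removed, just like in the build log above
        sh "docker stop --time=1 ${cid}"
        sh "docker rm -f ${cid}"
    }
}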
Original answer:
Would you try running without the -d option (which runs the container in the background)?
docker run -it --entrypoint=/bin/bash hashicorp/terraform:full
Then you can enter the container and run whatever you want.
Take nginx as an example:
docker run -it --entrypoint=/bin/bash nginx
root@e4dc1d08de1d:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@e4dc1d08de1d:/# cat /var/log/
apt/ btmp dpkg.log faillog lastlog nginx/ wtmp
root@e4dc1d08de1d:/# cat /var/log/faillog
root@e4dc1d08de1d:/#
In my case the entrypoint caused some trouble, so I needed to override it by passing an empty entrypoint argument to the inside method, like here:
pipeline {
    agent {
        label 'some_label'
    }
    stages {
        stage('execute') {
            steps {
                script {
                    img = docker.build("docker_image_name:docker_image_tag")
                    img.inside('--entrypoint= -e NODE_ENV=test') {
                        sh 'npm install --dev'
                        sh 'npm run test'
                    }
                }
            }
        }
    }
}
This example is like @S.Spieker's comment but with different syntax.
Remark: npm commands like npm test can differ between Node.js projects, so you will need to get the relevant commands from the developer.
If this example is still not working for you, you probably need to change your Docker image's entrypoint, as done here: https://github.com/SonarSource/sonar-scanner-cli-docker/pull/31/files
You can learn more about the docker-workflow plugin here: https://docs.cloudbees.com/docs/admin-resources/latest/plugins/docker-workflow
Some more examples: Jenkins: How to use JUnit plugin when Maven builds occur within Docker container
We're running a dedicated build server on Debian 10. Jenkins is installed on it, as well as Docker.
The Jenkins user of course owns the jenkins folder and is part of the docker group, so all permissions should be in place.
There are already a couple of pipelines building Docker images just fine.
Now we're trying to set up a new pipeline to build a web front-end. It's using the "Pipeline NPM Integration" plugin.
The pipeline looks like this:
pipeline {
    environment {
        applicationVersion = '0'
    }
    agent any
    stages {
        stage('Clone Git') {
            steps {
                git url: '<githuburl>',
                    credentialsId: '<githubcreds>',
                    branch: 'develop'
            }
        }
        stage('Build module (npm)') {
            agent {
                docker {
                    image 'node:15.2-buster-slim'
                }
            }
            steps {
                withNPM(npmrcConfig:'af7db72c-3235-4827-8eb4-69819a44e612') {
                    sh 'npm install'
                    sh 'npm run build:dev'
                }
            }
        }
    }
}
The pipeline always fails at the npm build with the following output
Running on Jenkins in /var/lib/jenkins/workspace/test#2
+ docker inspect -f . node:15.2-buster-slim
.
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 108:114 -w /var/lib/jenkins/workspace/test#2 -v /var/lib/jenkins/workspace/test#2:/var/lib/jenkins/workspace/test#2:rw,z -v /var/lib/jenkins/workspace/test#2#tmp:/var/lib/jenkins/workspace/test#2#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** node:15.2-buster-slim cat
$ docker top c227e7fe6de64d5d7046f5565f109e90ecdd9ece1caf4203e6a6f810cc72f40c -eo pid,comm
Using settings config with name npmrc
A workscape local .npmrc already exists and will be overwrriten for the build.
Writing .npmrc file: /var/lib/jenkins/workspace/test#2/.npmrc
+ npm install
up to date in 178ms
found 0 vulnerabilities
+ npm run build:dev
npm ERR! missing script: build:dev
$ docker stop --time=1 c227e7fe6de64d5d7046f5565f109e90ecdd9ece1caf4203e6a6f810cc72f40c
$ docker rm -f c227e7fe6de64d5d7046f5565f109e90ecdd9ece1caf4203e6a6f810cc72f40c
(Removed some lines here to keep it short.)
After a couple of hours I found out that package.json, while present, is actually empty!
cat /var/lib/jenkins/workspace/test#2/package.json
{}
I have no idea why the plugin creates an empty copy of the file.
Is there any way to get more output from the plugin about what it's doing and what could possibly cause this?
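One way to get more visibility (a debugging sketch, not a fix; it reuses the image and npmrc config ID from the pipeline above) is to print what the container actually sees right before npm runs, so you can tell whether the checked-out package.json made it into the mounted workspace:
stage('Debug workspace (npm)') {
    agent {
        docker {
            image 'node:15.2-buster-slim'
        }
    }
    steps {
        withNPM(npmrcConfig: 'af7db72c-3235-4827-8eb4-69819a44e612') {
            sh 'pwd && ls -la'     // which workspace is actually mounted (note the suffixed workspace path in the log)
            sh 'cat package.json'  // is this the real file or the empty {} copy?
            sh 'cat .npmrc'        // what the plugin wrote
        }
    }
}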
I have the pipeline below, which runs the application container alongside a MySQL container to run tests.
stage('Test - To check MYSQL connect') {
    def dockerfile = 'Dockerfile.test'
    def mysql = docker.image('mysql:5.6').run('-e MYSQL_ALLOW_EMPTY_PASSWORD=yes')
    docker.build("rds-test", "-f ${dockerfile} .")
    def rds_test_image = docker.image('rds-test')
    rds_test_image.inside("--link ${mysql.id}:mysql ") {
        sh 'echo "Inside Container"'
    }
}
And I am stuck with the error below.
Successfully tagged rds-test:latest
[Pipeline] isUnix
[Pipeline] sh
+ docker inspect -f . rds-test
.
[Pipeline] withDockerContainer
Jenkins seems to be running inside container d4e0934157d5eb6a9edadef31413d0da44e0e3eaacbb1719fc8d47fbf0a60a2b
$ docker run -t -d -u 1000:1000 --link d14340adbef9c95483d0369857dd000edf1b986e9df452b8faaf907fe9e89bf2:mysql -w /var/jenkins_home/workspace/test-jenkinsfile-s3-rds-backup --volumes-from d4e0934157d5eb6a9edadef31413d0da44e0e3eaacbb1719fc8d47fbf0a60a2b -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** rds-test cat
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.io.IOException: Failed to run image 'rds-test'. Error: docker: Error response from daemon: Cannot link to a non running container: /sharp_sanderson AS /fervent_lewin/mysql.
Just in case you want to look at the rds-test Dockerfile: https://github.com/epynic/rds-mysql-s3-backup/tree/feature
The id of the running container will not be captured in the return of the run method, but rather is stored in the temporary lambda variable of the withRun block. To leverage this capability, you can modify your code as follows:
stage('Test - To check MYSQL connect') {
    def dockerfile = 'Dockerfile.test'
    docker.build("rds-test", "-f ${dockerfile} .")
    def rds_test_image = docker.image('rds-test')
    docker.image('mysql:5.6').withRun('-e MYSQL_ALLOW_EMPTY_PASSWORD=yes') { container ->
        rds_test_image.inside("--link ${container.id}:mysql") {
            sh 'echo "Inside Container"'
        }
    }
}
As you can see above, running your second container within the code block of the other container's withRun makes the container id accessible through the id member of the temporary lambda variable initialized in the block (named container here for convenience).
Note that you can also do a slight code cleanup by assigning rds_test_image to the return value of docker.build("rds-test", "-f ${dockerfile} .") instead of adding another line that assigns it to the return of docker.image('rds-test'). The new code would also be more stable.
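For reference, a sketch of that cleanup, with everything else unchanged:
stage('Test - To check MYSQL connect') {
    def dockerfile = 'Dockerfile.test'
    // docker.build() already returns an image object, so the extra docker.image('rds-test') lookup can be dropped
    def rds_test_image = docker.build("rds-test", "-f ${dockerfile} .")
    docker.image('mysql:5.6').withRun('-e MYSQL_ALLOW_EMPTY_PASSWORD=yes') { container ->
        rds_test_image.inside("--link ${container.id}:mysql") {
            sh 'echo "Inside Container"'
        }
    }
}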
The above failure happened because the MySQL container was not yet running before --link. With Matt Schuchard's suggestion, I have updated the answer:
stage('Test - To check MYSQL connect') {
    def dockerfile = 'Dockerfile.test'
    docker.build("rds-test:latest", "-f ${dockerfile} .")
    def rds_test_image = docker.image('rds-test:latest')
    docker.image('mysql:5.6').withRun('-e MYSQL_ROOT_PASSWORD=admin --name=mysql_server -p 3306:3306') { container ->
        docker.image('mysql:5.6').inside("--link ${container.id}:mysql") {
            /* Wait until the mysql service is up */
            sh 'while ! mysqladmin ping -hmysql --silent; do sleep 1; done'
        }
        rds_test_image.inside("--link ${container.id}:mysql -e MYSQL_HOST=mysql -e MYSQL_PWD=admin -e USER=root ") {
            sh 'bash scripts/test_script.sh'
        }
    }
}
I am getting the strange error below in my Jenkins pipeline
[Pipeline] withDockerContainer
acp-ci-ubuntu-test does not seem to be running inside a container
$ docker run -t -d -u 1002:1006 -u ubuntu --net=host -v /var/run/docker.sock:/var/run/docker.sock -v /home/ubuntu/.docker:/home/ubuntu/.docker -w /home/ubuntu/workspace/CD-acp-cassandra -v /home/ubuntu/workspace/CD-acp-cassandra:/home/ubuntu/workspace/CD-acp-cassandra:rw,z -v /home/ubuntu/workspace/CD-acp-cassandra#tmp:/home/ubuntu/workspace/CD-acp-cassandra#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** quay.io/arubadevops/acp-build:ut-build cat
$ docker top 83d04d0a3a3f9785bdde3932f55dee36c079147eb655c1ee9d14f5b542f8fb44 -eo pid,comm
[Pipeline] {
[Pipeline] sh
process apparently never started in /home/ubuntu/workspace/CD-acp-cassandra#tmp/durable-70b242d1
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
$ docker stop --time=1 83d04d0a3a3f9785bdde3932f55dee36c079147eb655c1ee9d14f5b542f8fb44
$ docker rm -f 83d04d0a3a3f9785bdde3932f55dee36c079147eb655c1ee9d14f5b542f8fb44
[Pipeline] // withDockerContainer
The corresponding stage in the Jenkins pipeline is:
stage("Build docker containers & coreupdate packages") {
agent {
docker {
image "quay.io/arubadevops/acp-build:ut-build"
label "acp-ci-ubuntu"
args "-u ubuntu --net=host -v /var/run/docker.sock:/var/run/docker.sock -v $HOME/.docker:/home/ubuntu/.docker"
}
}
steps {
script {
try {
sh "export CI_BUILD_NUMBER=${currentBuild.number}; cd docker; ./build.sh; cd ../test; ./build.sh;"
ciBuildStatus="PASSED"
} catch (err) {
ciBuildStatus="FAILED"
}
}
}
}
What could be the reasons the process is not getting started within the Docker container? Any pointers on how to debug further would also be helpful.
This error means the Jenkins process is stuck on some command.
Some suggestions:
Upgrade all of your plugins and re-try.
Make sure you've the right number of executors and jobs aren't stuck in the queue.
If you're pulling the image (not using a local one), try adding alwaysPull true (on the line right after image).
When using agent inside stage, remove the outer agent. See: JENKINS-63449.
To debug, set org.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true in Jenkins's Script Console (see the sketch after this list).
When the process is stuck, SSH to Jenkins VM and run docker ps to see which command is running.
Run docker ps -a to see the latest failed runs. In my case it tried to run cat appended to the custom CMD set by the container (e.g. ansible-playbook cat), which was an invalid command. The cat command is used by design. To change the entrypoint, please read JENKINS-51307.
If your container is still running, you can log in to it with docker exec -it -u0 $(docker ps -ql) bash and run ps wuax to see what it's doing.
Try removing some global variables (could be a bug), see: parallel jobs not starting with docker workflow.
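For the Script Console suggestion above, a minimal sketch (if the field is not writable in your plugin version, start Jenkins with the corresponding -D system property instead, as the error message itself suggests):
// Manage Jenkins -> Script Console
// Equivalent to starting Jenkins with
// -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true
org.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS = true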
The issue is caused by some breaking changes introduced in the Jenkins durable-task plugin v1.31.
Source:
https://issues.jenkins-ci.org/browse/JENKINS-59907 and
https://github.com/jenkinsci/durable-task-plugin/blob/master/CHANGELOG.md
Solution:
Upgrading the Jenkins durable-task plugin to v1.33 resolved the issue for us.
I had this same problem and in my case, it was related to the -u <user> arg passed to the agent. In the end, changing my pipeline to use -u root fixed the problem.
In the original post, I notice a -u ubuntu was used to run the container:
docker run -t -d -u 1002:1006 -u ubuntu ... -e ******** quay.io/arubadevops/acp-build:ut-build cat
I was also using a custom user, one I've added when building the Docker image.
agent {
    docker {
        image "app:latest"
        args "-u someuser"
        alwaysPull false
        reuseNode true
    }
}
steps {
    sh '''
        # DO STUFF
    '''
}
Starting the container locally using the same Jenkins commands works OK:
docker run -t -d -u 1000:1000 -u someuser app:image cat
docker top <hash> -eo pid,comm
docker exec -it <hash> ls # DO STUFF
But in Jenkins, it fails with the same "process never started.." error:
$ docker run -t -d -u 1000:1000 -u someuser app:image cat
$ docker top <hash> -eo pid,comm
[Pipeline] {
[Pipeline] unstash
[Pipeline] sh
process apparently never started in /home/jenkins/agent/workspace/branch#tmp/durable-f5dfbb1c
For some reason, changing it to -u root worked.
agent {
    docker {
        image "app:latest"
        args "-u root" // <=-----------
        alwaysPull false
        reuseNode true
    }
}
If you have upgraded the durable-task plugin to 1.33 or later and it still doesn't work, check whether there is an empty environment variable configured in your pipeline or stored in the Jenkins configuration, and remove it.
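For illustration only (the variable name is hypothetical, and in the original report the offending entry may just as well live in the global Jenkins configuration rather than the Jenkinsfile), this is the kind of empty environment entry to look for:
pipeline {
    agent any
    environment {
        EXTRA_DOCKER_ARGS = ''   // an empty value like this is the kind of entry to hunt down and remove
    }
    stages {
        stage('noop') {
            steps {
                sh 'true'
            }
        }
    }
}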
In addition to kenorb's answer:
Check permissions inside the container you are running in and the Jenkins directory on the build host.
I am running custom Docker containers, and after several hours of debugging I found the cause by executing what Jenkins was trying to execute inside the running container (exec into the container, run echo "$(ps waux)", and execute those sh -c commands one by one). Jenkins couldn't create the log file inside the container due to a mismatch in UID and GID.
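A small diagnostic sketch along those lines, with a placeholder image name: compare the UID/GID the containerized step runs as with the numeric owner of the mounted workspace, which is where durable-task tries to write its log files:
node {
    docker.image('your-custom-image:latest').inside {
        sh 'id'                                           // the user the step actually runs as (-u uid:gid from the host)
        sh 'ls -ldn "$WORKSPACE" "$WORKSPACE@tmp" || true' // numeric owner of the mounted workspace dirs
    }
}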
If you are running Jenkins inside Docker and using a DinD container for Jenkins's Docker jobs, make sure you mount your Jenkins data volume to /var/jenkins_home in the service providing the Docker daemon. The log creation is actually attempted by the daemon, which means the daemon container needs access to the volume with the workspace that is being operated on.
Example snippet for docker-compose.yml:
services:
  dind:
    container_name: dind-for-jenkins
    privileged: true
    image: docker:stable-dind
    volumes:
      - 'jenkins-data:/var/jenkins_home'
This has eaten my life! I tried every imaginable solution from at least 10 SO posts, and in the end it was because my pipeline had spaces in its name. :|
So I changed "let's try scripting" to "scripts_try" and it just worked.
I was building a Jenkins job which runs within a Docker container and ran into this same error. The version of the durable-task plugin was at v1.35, so that was not the issue. My issue was that my job was trying to run a chmod -R 755 *.sh command, and the active user within the container did not have sufficient permissions to execute chmod against those files. I would have expected Jenkins to fail the job here, but launching the container with an ID that did have permission to run the chmod command got past this error.
The following snippet with a volume mount creates the Maven dependencies under $JENKINS_HOME/workspace/<project-name>/? (question mark) instead of under $HOME/.m2/.
Note that settings.xml mirrors to our internal repository, and the instructions on how to mount were taken directly from jenkins.io.
Does anyone have any clue why this is happening?
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /tmp/jenkins/.m2:/root/.m2:rw,z'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install -s settings.xml'
            }
        }
    }
}
This is not as simple as using Docker standalone. I created the /var/jenkins/.m2 directory on the Jenkins slave where the build runs, ensured the new directory has 775 permissions (although that may not be required), and also changed the owner to match that of /var/opt/slave/workspace/pipeline_test (I got this path from the logs below).
$ docker login -u dcr-login -p ******** https:// nexus.corp.zenmonics.com:8449
Login Succeeded
[Pipeline] {
[Pipeline] sh
+ docker inspect -f . nexus.corp.zenmonics.com:8449/maven:3-alpine
.
[Pipeline] withDockerContainer
cucj1sb3 does not seem to be running inside a container
$ docker run -t -d -u 1002:1002 -v /tmp/jenkins/.m2:/root/.m2:rw,z -w
/var/opt/slave/workspace/pipeline_test -v /var/opt/slave/workspace/pipeline_test:/var/opt/slave/workspace/pipeline_test:rw,z -v /var/opt/slave/workspace/pipeline_test#tmp:/var/opt/slave/workspace/pipeline_test#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** nexus.corp.zenmonics.com:8449/maven:3-alpine cat
$ docker top c7282468dbb6952aadbe4bb495757e7047122b179c81516645ba23759b78c366 -eo pid,comm
This statement about the official Maven image on Docker Hub (https://hub.docker.com/_/maven) makes me think the volume mount needs updating:
To create a pre-packaged repository, create a pom.xml with the dependencies you need and use this in your Dockerfile. /usr/share/maven/ref/settings-docker.xml is a settings file that changes the local repository to /usr/share/maven/ref/repository, but you can use your own settings file as long as it uses /usr/share/maven/ref/repository as local repo.
The documentation at https://jenkins.io/doc/book/pipeline/docker/ is misleading and a waste of time when it comes to volume mounting.
When the Docker container is created, it is created with user 1002 and group 1002. User 1002 doesn't have access to /root/.m2 and only has access to the working directory injected into the container.
Dockerfile
FROM maven:3-alpine
COPY --chown=1002:1002 repository/ /usr/share/maven/ref/repository/
RUN chmod -R 775 /usr/share/maven/ref/repository
COPY settings.xml /usr/share/maven/ref/
Settings.xml
<localRepository>/usr/share/maven/ref/repository</localRepository>
Docker command
docker build -t <server>:<port>/<image-name>:<image-tag> .
docker push <server>:<port>/<image-name>:<image-tag>
docker volume create maven-repo
Jenkinsfile
pipeline {
    agent {
        docker {
            label '<slave-label-here>'
            image '<image-name>:<image-tag>'
            registryUrl 'https://<server>:<port>'
            registryCredentialsId '<jenkins-credentials-for-docker-login>'
            args '-v maven-repo:/usr/share/maven/ref/repository/'
        }
    }
    parameters {
        booleanParam(name: 'SONAR', defaultValue: false, description: 'Select this option to run SONAR Analysis')
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install -s /usr/share/maven/ref/settings.xml -f pom.xml'
            }
        }
    }
}
As @masseyb mentions in the comments, Jenkins treats $HOME as the current build context.
There are two workarounds:
a) Use a Jenkins plugin to set the environment variable
You can use the Envinject Plugin to set environment variables in Jenkins.
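If you prefer to keep it in the Jenkinsfile rather than installing EnvInject, a sketch of the same idea (with a hypothetical m2Cache variable) is to resolve the path on the agent first and interpolate it into the docker args yourself, the same trick the composer Jenkinsfile further below uses for its workspace path:
def m2Cache
node {
    // resolve the agent's real home directory up front instead of relying on $HOME later
    m2Cache = sh(returnStdout: true, script: 'echo "$HOME/.m2"').trim()
}

pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args "-v ${m2Cache}:/root/.m2"
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean install'
            }
        }
    }
}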
b) Specify an absolute path instead of $HOME/.m2
You can specify an absolute path for .m2, e.g.:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /home/samir-shail/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B'
            }
        }
    }
}
Note: please check that Jenkins has access to your $HOME/.m2/ directory.
I have a very simple Jenkinsfile as seen below.
def workspace
node {
    workspace = sh(returnStdout: true, script: 'pwd').trim()
}

pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker {
                    image 'composer'
                    args "-v /var/lib/jenkins/.composer/auth.json:/.composer/auth.json -v $workspace:/app"
                }
            }
            steps {
                sh 'php -v'
                sh 'composer install --no-interaction --working-dir=$WORKSPACE/backend'
            }
        }
    }
}
I've gotten to the point where this works entirely as intended (e.g.: mounts volumes as expected, moves things around, pulls image, actually runs composer install), with one minor exception...
Immediately following the docker run, it gets into the shell steps, runs sh 'composer install...', and dies after 1 second, going into the docker stop --time 1 ... and docker rm ... steps immediately after.
I have no idea if this is coming from Composer doing something odd, or if there's some configurable timeout I'm completely unaware of.
Has anyone dealt with this before?
Edit:
Here is more information:
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 997:995 -v /var/lib/jenkins/.composer/auth.json:/.composer/auth.json -v [...] -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat composer
[Pipeline] {
[Pipeline] sh
[workspace] Running shell script
+ php -v
PHP 7.1.9 (cli) (built: Sep 15 2017 00:07:01) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.1.0, Copyright (c) 1998-2017 Zend Technologies
[Pipeline] sh
[workspace] Running shell script
+ composer install --no-interaction --working-dir=/app/backend --no-progress --no-ansi
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Package operations: 29 installs, 0 updates, 0 removals
- Installing laravel/tinker (v1.0.2): Downloading[Pipeline] }
$ docker stop --time=1 ee693aaa7cdde41b714fdc91dbc1b05ac07fe2be7904ab1ed528fb0a3f771047
$ docker rm -f ee693aaa7cdde41b714fdc91dbc1b05ac07fe2be7904ab1ed528fb0a3f771047
[Pipeline] // withDockerContainer
[Pipeline] }
and from an earlier job
Installing dependencies (including require-dev) from lock file
Package operations: 55 installs, 0 updates, 0 removals
- Installing symfony/finder (v3.3.6): Downloading (connecting...)[Pipeline] }
It's working as can be seen, but the return code at the end is....
GitHub has been notified of this commit’s build result
ERROR: script returned exit code -1
Finished: FAILURE
Edit 2:
I got this to work in an even simpler way; see the gist for more info:
https://gist.github.com/chuckyz/6b78b19a6a5ea418afa16cc58020096e
It's a bug in Jenkins, so until this is marked as fixed I'm just using sh steps with docker run ... in them, manually.
https://issues.jenkins-ci.org/browse/JENKINS-35370
e.g.:
sh 'docker run -v $WORKSPACE:/app composer install'
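A slightly fuller sketch of that manual workaround, assuming the backend/ layout from the Jenkinsfile above; --rm removes the container afterwards:
node {
    stage('Back-end') {
        // assumes a multibranch/SCM-based job; skip if the workspace is already checked out
        checkout scm
        // run composer via plain docker run instead of the docker { } agent
        sh 'docker run --rm -v "$WORKSPACE":/app composer install --no-interaction --working-dir=/app/backend'
    }
}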
I experienced this and resolved it by upgrading durable-task-plugin from 1.13 to 1.16.
The 1.16 changelog contains:
Using a new system for determining whether sh step processes are still alive, which should solve various robustness issues.