sh step in dockerfile agent hangs when running under rootless Podman - Jenkins

I am trying to use the dockerfile agent with (rootless) Podman (yum install podman-docker), but the sh step that should run commands in the container hangs.
FROM registry.access.redhat.com/ubi8/python-36:1-164
COPY requirements.txt .
RUN pip install -r requirements.txt

pipeline {
    agent {
        dockerfile true
    }
    stages {
        stage('stage') {
            steps {
                sh "echo hello"
            }
        }
    }
}
Jenkins then reports (after a long hang between the "sh" line and the "process apparently never started" message):
[Pipeline] { (Generate CryptoStore dist zip)
[Pipeline] sh
process apparently never started in /var/lib/jenkins/workspace/--%<--#tmp/durable-5572a21e
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
After setting LAUNCH_DIAGNOSTICS, it reports:
sh: /var/lib/jenkins/workspace/--%<--#2#tmp/durable-baac9648/jenkins-log.txt: Permission denied
sh: /var/lib/jenkins/workspace/--%<--#2#tmp/durable-baac9648/jenkins-result.txt.tmp: Permission denied
touch: cannot touch '/var/lib/jenkins/workspace/--%<--#2#tmp/durable-baac9648/jenkins-log.txt': Permission denied
mv: cannot stat '/var/lib/jenkins/workspace/--%<--#2#tmp/durable-baac9648/jenkins-result.txt.tmp': No such file or directory
touch: cannot touch '/var/lib/jenkins/workspace/--%<--#2#tmp/durable-baac9648/jenkins-log.txt': Permission denied
[...]
I can see that Jenkins starts the container with a -u option corresponding to the user the agent that starts the container runs as, but Podman mounts the volumes as root.
How can I fix or work around that? The plugin does not seem to have an option to override the user. Adding a custom -u option to args does not help either: the docker run command Jenkins then shows simply contains two -u options, and the first one (the Jenkins one) seems to win...

Looking into how to change the user for the volume mount, I discovered this troubleshooting entry: Passed-in devices or files can't be accessed in rootless container (UID/GID mapping problem),
which describes some workarounds but also contains this hint:
A side-note: Using --userns=keep-id can sometimes be an alternative
solution, but it forces the regular user's host UID to be mapped to
the same UID inside the container so it provides less flexibility than
using --uidmap and --gidmap.
As Jenkins takes that flexibility away from us anyway, I added args "--userns=keep-id" to my dockerfile options and it works fine now. :)
pipeline {
    agent {
        dockerfile {
            filename 'Containerfile'
            // Jenkins sets the user in the container to the same one running it.
            // With (rootless) Podman as docker, this breaks the -v volume mounts
            // because the user in the container is mapped to a different one on the host.
            // This option disables that mapping, so the UID inside and outside match again.
            args "--userns=keep-id"
        }
    }
    stages {
        stage('Generate CryptoStore dist zip') {
            steps {
                sh "echo hello"
            }
        }
    }
}

Related

How to pass docker run arguments in Jenkins?

I am trying to set up my Jenkins pipeline using this docker image. It needs to be executed as follows:
docker run --rm \
    -v $PROJECT_DIR:/input \
    -v $PROJECT_DIR:/output \
    -e PLATFORM_ID=$PLATFORM_ID \
    particle/buildpack-particle-firmware:$VERSION
The implementation in my Jenkins pipeline looks like this:
stage('build firmware') {
    agent {
        docker {
            image 'particle/buildpack-particle-firmware:4.0.2-tracker'
            args '-v application:/input -v application:/output -e PLATFORM_ID=26 particle/buildpack-particle-firmware:4.0.2-tracker'
        }
    }
    steps {
        archiveArtifacts artifacts: 'application/target/*.bin', fingerprint: true, onlyIfSuccessful: true
    }
}
Executing this on my PC system works just fine.
Upon executing the Jenkins pipeline, I am eventually getting this error:
java.io.IOException: Failed to run image 'particle/buildpack-particle-firmware:4.0.2-tracker'. Error: docker: Error response from daemon: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: exec: "-w": executable file not found in $PATH: unknown.
I read through the documentation of Jenkins + Docker, but I couldn't find out how to use such an image. All the guides usually explain how to run a docker image and execute shell commands.
If I understand it correctly, this Dockerfile is the layout of said docker image.
How do I get around this issue and call a docker container with run arguments?
The agent mode is intended for running Jenkins build steps inside a container; in your example, that means running the archiveArtifacts step instead of the thing the container normally does. You can imagine putting an image that only contains a build tool, like golang or one of the Java images, in the agent { docker { image } } line, and Jenkins will inject several docker command-line options so that it runs against the workspace tree.
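As a minimal sketch of that intended usage (the golang image and build command here are illustrative, not from the question):
pipeline {
    agent {
        // any image that contains your build tool
        docker { image 'golang:1.21' }
    }
    stages {
        stage('build') {
            steps {
                // Jenkins mounts the workspace into the container,
                // so this runs against the checked-out source tree
                sh 'go build ./...'
            }
        }
    }
}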
The Jenkins Docker interface may not have a built-in way to wait for a container to complete. Instead, you can launch a "sidecar" container, then run docker wait from outside it to wait for it to finish. This would roughly look like:
stage('build firmware') {
    steps {
        script {
            docker.image('particle/buildpack-particle-firmware:4.0.2-tracker')
                .withRun('-v application:/input -v application:/output -e PLATFORM_ID=26') { c ->
                    sh "docker wait ${c.id}"
                }
        }
        archiveArtifacts artifacts: 'application/target/*.bin', fingerprint: true, onlyIfSuccessful: true
    }
}
In the end, it is up to Jenkins how the docker run command is executed and which entrypoint is taken. Unfortunately, I can't change the settings of the Jenkins Server so I had to find a workaround.
The solution for me is similar to my initial approach and looks like this:
agent {
    docker {
        image 'particle/buildpack-hal'
    }
}
environment {
    APPDIR="$WORKSPACE/tracker-edge"
    DEVICE_OS_PATH="$WORKSPACE/device-os"
    PLATFORM_ID="26"
}
steps {
    sh 'make sanitize -s'
}
One guess is that invoking the docker container in the expected way simply doesn't work on my Jenkins server; it has to be run as an agent, with the shell commands executed from within.

Jenkins declarative pipeline problem when running docker-in-docker

I just encountered a problem when running a Jenkins declarative pipeline on a Jenkins server that itself runs inside Docker, with access to the host's docker.sock.
The structure of the pipeline is rather simple:
pipeline {
    agent {
        docker { image 'gradle:jdk11' }
    }
    stages {
        stage('Checkout') {
            steps {
                // ...
            }
        }
        stage('Assemble public API documentation') {
            environment {
                // ...
            }
            steps {
                // ...
            }
        }
        stage('Generate documentation') {
            steps {
                // ...
            }
        }
        stage('Upload documentation to Firebase') {
            agent {
                docker {
                    image 'node:12'
                    reuseNode false
                }
            }
            steps {
                // ...
            }
        }
    }
}
The idea is to run three stages in the first container, and then create a new container for the final stage.
The following is printed when entering the last stage:
[Pipeline] stage
[Pipeline] { (Upload documentation to Firebase)
[Pipeline] getContext
[Pipeline] isUnix
[Pipeline] sh
+ docker inspect -f . node:12
/var/jenkins_home/workspace/publish_public_api_doc#tmp/durable-bc4d65d1/script.sh: 1: /var/jenkins_home/workspace/publish_public_api_doc#tmp/durable-bc4d65d1/script.sh: docker: not found
[Pipeline] isUnix
[Pipeline] sh
+ docker pull node:12
/var/jenkins_home/workspace/publish_public_api_doc#tmp/durable-297d223a/script.sh: 1: /var/jenkins_home/workspace/publish_public_api_doc#tmp/durable-297d223a/script.sh: docker: not found
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 367647f97c9eed52bf85c13c2bc2203bb7194adac803d37cab0e0d0435325efa
$ docker rm -f 367647f97c9eed52bf85c13c2bc2203bb7194adac803d37cab0e0d0435325efa
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
I don't really understand what is happening here.
In order to debug this, I logged in to that machine, and ran the docker command from the host, as well as from inside the running Jenkins container, and it was working.
The way this is set up is that the Docker client is installed in the image, i.e. the binary itself is not shared into the container.
Since the docker command is "not found", the only explanation that I have is that the docker command to start the agent for the final stage is not executed in the "top-level" Jenkins container, but in the JDK one, which does not have the docker executable inside.
This, however, would seem unexpected, if not a bug.
I'd be thankful if anyone could shed some light on this.
Jenkins pipeline agents/nodes
Your pipeline specifies an agent at the top-most level. The pipeline executes all commands on that agent (or within a Docker container, in your scenario) until another agent is specified. When a new agent is specified, the top-level agent connects to it via some protocol, and the new agent executes all pipeline stages/steps within that agent's scope. Once out of scope, the connection to the new agent is closed and the top-level agent once again executes all commands.
What's causing the error?
The fourth stage attempts to change the execution context to a new agent. The current agent, the gradle:jdk11 container, will execute the steps to connect to this new agent. As the new agent is a docker container, the gradle:jdk11 container will attempt to use the docker command itself to spin up the new container.
As you suspected, there is no docker binary/service within this container.
Why is this the expected behaviour?
Assume that the top-level agent is a different physical machine connected via TCP or SSH, rather than a docker container. This machine would need to have all the tools installed on it for compiling, generating docs, running unit tests, etc. E.g. it wouldn't use the doxygen binary installed on the Jenkins master, as it should provide this itself (throwing errors if doxygen doesn't exist in the $PATH). Likewise, this machine would need docker installed to spin up the container in the fourth stage.
How can I get my pipeline working?
You could create your own custom docker image inheriting from gradle:jdk11 and share the host system's Docker socket, as sketched below. This would allow your custom image to spin up the docker image required in the fourth stage. You would use agent { docker { image 'my-custom-img' } } at the global scope.
Alternatively you could use the master agent (or other physical machines) at a global scope and have each stage spin up its own container. Each stage would have a clean working environment, so you'd need to use stash/unstash or a mounted volume to share src/docs between stages.
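A minimal sketch of such a custom image for the first approach (assuming gradle:jdk11 is Debian-based and that the distribution's docker.io package provides the client; adapt both to your setup):
# hypothetical Dockerfile for a custom agent image
FROM gradle:jdk11
# install only the Docker CLI; the daemon stays on the host
RUN apt-get update \
    && apt-get install -y --no-install-recommends docker.io \
    && rm -rf /var/lib/apt/lists/*
You would then share the host's socket into the agent, e.g. with args '-v /var/run/docker.sock:/var/run/docker.sock' in the agent { docker { ... } } block.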

Jenkins: Connect to a Docker container from a stage that is run with an agent (another Docker container)

I am in the process of reworking a pipeline to use the Declarative Pipeline approach, so that I can use Docker images for each stage.
At the moment I have the following working code which performs integration tests connecting to a DB which is run in a Docker container.
node {
    // checkout, build, test stages...
    stage('Integration Tests') {
        docker.image('mongo:3.4').withRun(' -p 27017:27017') { c ->
            sh "./gradlew integrationTest"
        }
    }
}
Now with Declarative Pipelines the same code would look something like this:
pipeline {
    agent none
    stages {
        // checkout, build, test stages...
        stage('Integration Test') {
            agent { docker { image 'openjdk:11.0.4-jdk-stretch' } }
            steps {
                script {
                    docker.image('mongo:3.4').withRun(' -p 27017:27017') { c ->
                        sh "./gradlew integrationTest"
                    }
                }
            }
        }
    }
}
Problem: The stage now runs inside a Docker container, and calling docker.image() leads to a docker: not found error in the stage (it looks for docker inside the openjdk image that is now in use).
Question: How to start a DB container and connect to it from a stage in Declarative Pipelines?
What you are essentially trying to use is DinD (Docker-in-Docker).
You are using a Jenkins agent that is itself created by Docker: agent { docker { image 'openjdk:11.0.4-jdk-stretch' } }.
Once that container is running, you are trying to execute a docker command inside it. The error docker: not found is valid, as there is no Docker CLI installed in that image. You need to update the Dockerfile / create a custom image based on openjdk:11.0.4-jdk-stretch with the Docker client installed.
Once the client is installed, you need to volume-mount /var/run/docker.sock so that it talks to the host Docker daemon via the socket.
The user should be root or a privileged user to avoid permission-denied issues.
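A sketch of that setup, assuming a hypothetical custom image docker-enabled-jdk that adds the Docker CLI on top of openjdk:11.0.4-jdk-stretch:
agent {
    docker {
        // hypothetical image: openjdk:11.0.4-jdk-stretch plus the Docker CLI
        image 'docker-enabled-jdk:11'
        // share the host's socket so docker commands in the container
        // talk to the host daemon
        args '-v /var/run/docker.sock:/var/run/docker.sock'
    }
}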
So if I get this correctly, your tests need two things:
Java environment
DB connection
In this case, have you tried a different approach like Docker-in-Docker (DinD)?
You can have a custom image that uses docker:dind as a base image and contains your Java environment, use it in the agent section, and the rest of the pipeline steps will then be able to use the docker command as you expected.
In your example you are trying to run a container inside openjdk:11.0.4-jdk-stretch. If this image has no Docker installed, you will not be able to execute docker; and installing one there means running Docker inside Docker, which you should not do casually.
So it depends on what you want.
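A rough sketch of such a dind-based image (docker:dind is Alpine-based; the openjdk11 package name is an assumption to verify against your Alpine release):
# hypothetical image combining docker:dind with a Java environment
FROM docker:dind
# add a JDK next to the Docker daemon
RUN apk add --no-cache openjdk11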
Using multiple containers:
In this case you can combine multiple docker images, but they do not depend on each other:
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:7-alpine' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
Running "sidecar" containers:
This example shows how to use two containers simultaneously, which are able to interact with each other:
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
Please refer to the official documentation: https://jenkins.io/doc/book/pipeline/docker/
I hope it helps.
I have had a similar problem, where I wanted to be able to use an off-the-shelf Maven Docker image to run my builds in, while also being able to build a Docker image containing the application.
I accomplished this by first starting the Maven container in which the build is to run, giving it access to the host's Docker endpoint.
Partial example:
docker run -v /var/run/docker.sock:/var/run/docker.sock maven:3.6.1-jdk-11
Then, inside the build-container, I download the Docker binaries and set the Docker host:
export DOCKER_HOST=unix:///var/run/docker.sock
wget -nv https://download.docker.com/linux/static/stable/x86_64/docker-19.03.2.tgz
tar -xvzf docker-*.tgz
cp docker/docker /usr/local/bin
Now I can run the docker command inside my build-container.
As a (for me positive) side effect, any Docker image built inside a container in one step of the build will be available to subsequent steps that also run in containers, since the images are retained on the host.

How can I build Docker images on Jenkins Pipeline, without changing permissions on the underlying Jenkins VM?

I want to use Jenkins Pipeline to build, push, and deploy my Docker image.
I get this:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock
Other questions on StackOverflow suggest sudo usermod -a -G docker jenkins, then restart Jenkins, but I do not have access to the machine running Jenkins -- and anyway, it seems strange that Jenkins Pipeline, which is built all around Docker, cannot run a basic Docker command.
How can I build my Docker image?
pipeline {
    agent any
    stages {
        stage('deploy') {
            agent {
                docker {
                    image 'google/cloud-sdk:latest'
                    args '-v /var/run/docker.sock:/var/run/docker.sock'
                }
            }
            steps {
                script {
                    docker.build "gcr.io/myporject/mydockerimage:1"
                }
            }
        }
    }
}
The pipeline definition shown tries to execute the docker build inside a docker container (google/cloud-sdk:latest). Instead, you should do the following, provided the jenkins user on the host has permission to execute docker commands on the host.
pipeline {
    agent any
    stages {
        stage('deploy') {
            steps {
                script {
                    docker.build "gcr.io/myporject/mydockerimage:1"
                }
            }
        }
    }
}
There is nothing strange about Jenkins being unable to execute docker commands without proper permissions when Jenkins and Docker are installed and configured separately on the machine.

Use git in jenkins pipeline with docker agent

When I try to run the pipeline below, it fails with this error:
Cloning into '/go/src/github.com/gorilla/websocket'...
fatal: unable to look up current user in the passwd file: no such user
package github.com/gorilla/websocket: exit status 128
As far as I understand, the issue is that Jenkins starts the container with the user ID of the jenkins user, to keep the file system permissions right (docker run -t -d -u 108:113 ...), but how can I use git then?
pipeline {
    agent none
    stages {
        ...
        stage('Build Code') {
            agent {
                docker {
                    image 'xxx.de/go_build_container'
                    args '-v=$WORKSPACE:/go/src/bitbucket.org/xxx/service_donation'
                }
            }
            environment {
                HOME = "."
            }
            steps {
                sh 'cd /go/src/bitbucket.org/xxx/service_donation && go get github.com/gorilla/websocket'
            }
        }
    }
}
A lot of programs won't work when running with a user ID that does not exist in /etc/passwd, and git is one of those programs.
You must specify a correct -u argument for your docker container, that is, one that exists in your xxx.de/go_build_container image; root (or 0:0) is one of them.
So put something like args '-v=$WORKSPACE:/go/src/bitbucket.org/xxx/service_donation -u 0:0' and it will work.
Then you will face another problem: the files created in your volume will belong to the UID used in the container, so you may need to add some chown calls if you want to reuse those files later in your pipeline.
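A sketch combining both points (UID/GID 108:113 comes from the question's docker run line; the chown step is only an illustration):
agent {
    docker {
        image 'xxx.de/go_build_container'
        // run as root inside the container so git can resolve its user
        args '-v=$WORKSPACE:/go/src/bitbucket.org/xxx/service_donation -u 0:0'
    }
}
steps {
    sh 'cd /go/src/bitbucket.org/xxx/service_donation && go get github.com/gorilla/websocket'
    // hand files back to the Jenkins user (108:113) for later stages
    sh 'chown -R 108:113 "$WORKSPACE"'
}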
Another option I found is mounting /etc/passwd into the container.
agent {
    docker {
        image 'xxxx'
        args '-v=/etc/passwd:/etc/passwd'
    }
}
I'm not sure if that has any other problems, but it seems to work and you don't get the wrong-permission issue.
