How to pass docker run arguments in Jenkins?

I am trying to set up my Jenkins pipeline using this docker image, which needs to be invoked as follows:
docker run --rm \
-v $PROJECT_DIR:/input \
-v $PROJECT_DIR:/output \
-e PLATFORM_ID=$PLATFORM_ID \
particle/buildpack-particle-firmware:$VERSION
The implementation in my Jenkins pipeline looks like this:
stage('build firmware') {
    agent {
        docker {
            image 'particle/buildpack-particle-firmware:4.0.2-tracker'
            args '-v application:/input -v application:/output -e PLATFORM_ID=26 particle/buildpack-particle-firmware:4.0.2-tracker'
        }
    }
    steps {
        archiveArtifacts artifacts: 'application/target/*.bin', fingerprint: true, onlyIfSuccessful: true
    }
}
Executing this on my PC works just fine.
When I run the Jenkins pipeline, however, I eventually get this error:
java.io.IOException: Failed to run image 'particle/buildpack-particle-firmware:4.0.2-tracker'. Error: docker: Error response from daemon: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: exec: "-w": executable file not found in $PATH: unknown.
I read through the Jenkins + Docker documentation, but I couldn't find out how to use such an image; the guides usually only explain how to run a docker image and execute shell commands in it.
If I understand it correctly, this Dockerfile is the source of the said docker image.
How do I get around this issue and call a docker container with run arguments?

The agent mode is intended for running Jenkins build steps inside a container; in your example, it runs the archiveArtifacts step instead of the thing the container normally does. You can imagine using an image that contains only a build tool, like golang or one of the Java images, in the agent { docker { image } } line, and Jenkins will inject several docker command-line options so that the container runs against the workspace tree.
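For illustration, a minimal agent-mode stage might look like this (a sketch; the golang image and build command are placeholder assumptions, not from the question):

```groovy
stage('build') {
    agent {
        // placeholder: any image containing just the build tool you need
        docker { image 'golang:1.21' }
    }
    steps {
        // Jenkins mounts the workspace into the container; this step runs inside it
        sh 'go build ./...'
    }
}
```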
The Jenkins Docker interface may not have a built-in way to wait for a container to complete. Instead, you can launch it as a "sidecar" container, then run docker wait from outside the container to wait for it to finish. That would look roughly like this:
stage('build firmware') {
    steps {
        script {
            docker
                .image('particle/buildpack-particle-firmware:4.0.2-tracker')
                .withRun('-v application:/input -v application:/output -e PLATFORM_ID=26') { c ->
                    sh "docker wait ${c.id}"
                }
        }
        archiveArtifacts artifacts: 'application/target/*.bin', fingerprint: true, onlyIfSuccessful: true
    }
}

In the end, it is up to Jenkins how the docker run command is executed and which entrypoint is used. Unfortunately, I can't change the settings of the Jenkins server, so I had to find a workaround.
The solution for me is similar to my initial approach and looks like this:
agent {
    docker {
        image 'particle/buildpack-hal'
    }
}
environment {
    APPDIR = "$WORKSPACE/tracker-edge"
    DEVICE_OS_PATH = "$WORKSPACE/device-os"
    PLATFORM_ID = "26"
}
steps {
    sh 'make sanitize -s'
}
My guess is that invoking the docker container the intended way simply doesn't work on my Jenkins server; the container has to be started as an agent, with the shell commands executed from within it.

Related

sh step in dockerfile agent running under rootless podman hangs

I am trying to use the dockerfile agent with (rootless) Podman (yum install podman-docker), but the sh step that should run commands in the container hangs.
FROM registry.access.redhat.com/ubi8/python-36:1-164
COPY requirements.txt .
RUN pip install -r requirements.txt
pipeline {
    agent {
        dockerfile true
    }
    stages {
        stage('stage') {
            steps {
                sh 'echo hello'
            }
        }
    }
}
Jenkins then reports (after hanging for quite a while between the "sh" line and the "process apparently never started" message):
[Pipeline] { (Generate CryptoStore dist zip)
[Pipeline] sh
process apparently never started in /var/lib/jenkins/workspace/--%<--#tmp/durable-5572a21e
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
With LAUNCH_DIAGNOSTICS set, it reports:
sh: /var/lib/jenkins/workspace/--%<--#2#tmp/durable-baac9648/jenkins-log.txt: Permission denied
sh: /var/lib/jenkins/workspace/--%<--#2#tmp/durable-baac9648/jenkins-result.txt.tmp: Permission denied
touch: cannot touch '/var/lib/jenkins/workspace/--%<--#2#tmp/durable-baac9648/jenkins-log.txt': Permission denied
mv: cannot stat '/var/lib/jenkins/workspace/--%<--#2#tmp/durable-baac9648/jenkins-result.txt.tmp': No such file or directory
touch: cannot touch '/var/lib/jenkins/workspace/--%<--#2#tmp/durable-baac9648/jenkins-log.txt': Permission denied
[...]
I see that Jenkins starts the container with a -u option corresponding to the user the agent that starts the container runs as, but podman mounts the volumes as root.
How can I fix or work around that? The plugin does not seem to have an option to override the user, and adding a custom -u option to args does not help: the docker run command Jenkins shows then simply contains two -u options, and the first one (from Jenkins) seems to be used...
Looking into how to change the user for the volume mount I discovered this troubleshooting info: Passed-in devices or files can't be accessed in rootless container (UID/GID mapping problem)
which describes some workaround but also contains this hint:
A side-note: Using --userns=keep-id can sometimes be an alternative
solution, but it forces the regular user's host UID to be mapped to
the same UID inside the container so it provides less flexibility than
using --uidmap and --gidmap.
As Jenkins takes that flexibility away from us anyway, I added args "--userns=keep-id" to my dockerfile options, and it works fine now. :)
pipeline {
    agent {
        dockerfile {
            filename 'Containerfile'
            // Jenkins sets the user in the container to the same one running it.
            // Using (rootless) podman as docker, this breaks the -v volume mounts,
            // because the user in the container is mapped to a different one on the host.
            // This option disables that mapping, so the uid inside and outside match again.
            args "--userns=keep-id"
        }
    }
    stages {
        stage('Generate CryptoStore dist zip') {
            steps {
                sh 'echo hello'
            }
        }
    }
}

Using a docker command in a Jenkinsfile gives me inconsistent results (works sometimes, not found other times)

pipeline {
    agent any
    stages {
        stage('BuildImage') {
            steps {
                withCredentials([string(credentialsId: 'docker_pw', variable: 'DOCKER_PW')]) {
                    sh '''
                        docker login -u ... -p ${DOCKER_PW} <dockerhub>
                        docker -v
                    '''
                }
            }
        }
...
I am building a Jenkins pipeline using a Jenkinsfile. I am trying to build a docker image in the Jenkinsfile and push it to Docker Hub.
This works sometimes, but other times it fails with the message line 2: docker: command not found.
This doesn't make sense to me, because it works sometimes.
Do I have to use a different agent or something?
This may be due to the job trying to run on agents where docker is not installed. The best solution is to use labels: add a label to the agents where docker is installed, which helps identify what each agent can be used for, and then specify it in the pipeline with agent { label 'docker' }.
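A sketch of what that could look like (assuming a label named 'docker' has already been assigned to the agents that have docker installed):

```groovy
pipeline {
    // only schedule this job on nodes carrying the 'docker' label
    agent { label 'docker' }
    stages {
        stage('BuildImage') {
            steps {
                sh 'docker -v'
            }
        }
    }
}
```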

Robot Framework DatafileError when launching docker container through Jenkins

When launching a pipeline using Jenkins with the following syntax:
stage('Verify test') {
    agent {
        docker { image 'python_image:latest' }
    }
    steps {
        sh 'robot RobotFramework/test.robot'
    }
    post {
        always {
            archiveArtifacts 'log.html'
            archiveArtifacts 'report.html'
            archiveArtifacts 'output.xml'
            junit 'output.xml'
        }
    }
}
I get the following error:
connect to UUT device | FAIL |
DatafileError: Failed to load the datafile '/opt/app-root/lib/python3.6/site-packages/genie/libs/sdk/genie_yamls/iosxr/trigger_datafile_xr.yaml'
It does work when I try the exact same command (robot RobotFramework/test.robot) in a new Docker container using the same image, or when I pause the container in the Jenkins pipeline and execute the exact same command in the running container.
Only when I create a virtualenv in the docker container do I get the exact same error, but I assume that is not happening when Jenkins runs the Docker container.
Fixed by adding #!/bin/bash as the first line of the script (the shebang has to be on a line of its own):
sh '''#!/bin/bash
robot RobotFramework/test.robot'''

Jenkins: Connect to a Docker container from a stage that is run with an agent (another Docker container)

I am in the process of reworking a pipeline to use Declarative Pipelines approach so that I will be able to use Docker images on each stage.
At the moment I have the following working code which performs integration tests connecting to a DB which is run in a Docker container.
node {
    // checkout, build, test stages...
    stage('Integration Tests') {
        docker.image('mongo:3.4').withRun('-p 27017:27017') { c ->
            sh "./gradlew integrationTest"
        }
    }
}
Now, with Declarative Pipelines, the same code would look something like this:
pipeline {
    agent none
    stages {
        // checkout, build, test stages...
        stage('Integration Test') {
            agent { docker { image 'openjdk:11.0.4-jdk-stretch' } }
            steps {
                script {
                    docker.image('mongo:3.4').withRun('-p 27017:27017') { c ->
                        sh "./gradlew integrationTest"
                    }
                }
            }
        }
    }
}
Problem: The stage is now run inside a Docker container, and calling docker.image() leads to a docker: not found error in the stage (it is looking for docker inside the openjdk image, which is now in use).
Question: How to start a DB container and connect to it from a stage in Declarative Pipelines?
What you are essentially trying to use is DinD (Docker-in-Docker).
You are using a Jenkins agent that is created from a Docker image: agent { docker { image 'openjdk:11.0.4-jdk-stretch' } }.
Once that container is running, you are trying to execute a docker command inside it. The error docker: not found is expected, as there is no docker CLI installed in the image. You need to update the Dockerfile / create a custom image based on openjdk:11.0.4-jdk-stretch with the docker CLI installed.
Once the CLI is installed, you need to volume-mount /var/run/docker.sock so that the CLI in the container talks to the host's docker daemon via the socket.
The user should be root or a privileged user to avoid permission-denied issues.
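In the declarative agent block, that could look roughly like this (a sketch; 'openjdk11-with-docker-cli' is a hypothetical custom image that adds the docker CLI on top of openjdk:11.0.4-jdk-stretch):

```groovy
agent {
    docker {
        image 'openjdk11-with-docker-cli'                    // hypothetical custom image with the docker CLI added
        args '-v /var/run/docker.sock:/var/run/docker.sock'  // let the CLI inside reach the host's daemon
    }
}
```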
So if I get this correctly, your tests need two things:
Java Environment
DB Connection
In this case, have you tried a different approach, like Docker-in-Docker (DinD)?
You can have a custom image that uses docker:dind as a base image and contains your Java environment, use it in the agent section, and the rest of the pipeline steps will then be able to use the docker command as you expected.
In your example you are trying to run a container inside openjdk:11.0.4-jdk-stretch. That image does not have docker installed, so you will not be able to execute docker; and even if it did, you would be running Docker inside Docker, which is generally discouraged.
So it depends on what you want.
Using multiple containers:
In this case you can combine multiple docker images, but they are independent of each other:
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:7-alpine' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
Running "sidecar" containers:
This example shows how to run two containers simultaneously and have them interact with each other:
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
Please refer to the official documentation -> https://jenkins.io/doc/book/pipeline/docker/
I hope it will help you.
I have had a similar problem, where I wanted to be able to use an off-the-shelf Maven Docker image to run my builds in, while also being able to build a Docker image containing the application.
I accomplished this by first starting the Maven container in which the build is run, giving it access to the host's Docker endpoint.
Partial example:
docker run -v /var/run/docker.sock:/var/run/docker.sock maven:3.6.1-jdk-11
Then, inside the build-container, I download the Docker binaries and set the Docker host:
export DOCKER_HOST=unix:///var/run/docker.sock
wget -nv https://download.docker.com/linux/static/stable/x86_64/docker-19.03.2.tgz
tar -xvzf docker-*.tgz
cp docker/docker /usr/local/bin
Now I can run the docker command inside my build-container.
As a (for me positive) side effect, any Docker image built inside a container in one step of the build is available to subsequent steps, also running in containers, since the images are retained on the host.

Jenkins pipeline is removing the container on the remote daemon after deployment; I want to keep it running

I am trying to build and deploy my code using a Jenkins pipeline, using a remote docker daemon for deployment.
Everything is working, but the Jenkins pipeline stops and removes all containers once the pipeline script ends. The server comes up for just 10 seconds; after that the container is stopped and removed.
stage {
    steps {
        script {
            docker.withServer('tcp://10.10.10.10:2375') {
                docker.withRegistry('https://registry.my.com/', 'jenkins-registry') {
                    docker.image('registry.my.com/image-my/my:latest').withRun('-p 9090:80 -i -t --name harpal') {
                        sh 'docker ps -a'
                    }
                }
            }
        }
    }
}
output
[Flights-Docker-POC] Running shell script
+ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6a4c5094a8d2 registry.my.com/image-my/my:latest "/usr/bin/supervisord" 6 hours ago Up Less than a second 0.0.0.0:9090->80/tcp harpal
[Pipeline] sh
[Flights-Docker-POC] Running shell script
+ docker stop 6a4c5094a8d22179b364ee2d3b97e998a2c13e8b136c55816c0d8f838c17248b
6a4c5094a8d22179b364ee2d3b97e998a2c13e8b136c55816c0d8f838c17248b
+ docker rm -f 6a4c5094a8d22179b364ee2d3b97e998a2c13e8b136c55816c0d8f838c17248b
6a4c5094a8d22179b364ee2d3b97e998a2c13e8b136c55816c0d8f838c17248b
Got the answer for it. It wasn't an issue related to the entrypoint in my image.
I was supposed to use the image.run() method instead of withRun(); withRun() internally calls run() and stops the container in the finally block of its implementation:
public <V> V withRun(String args = '', Closure<V> body) {
    docker.node {
        Container c = run(args)
        try {
            body.call(c)
        } finally {
            c.stop()
        }
    }
}
Btw, thank you guys for the help.
The script was supposed to look like this:
stage {
    steps {
        script {
            docker.withServer('tcp://10.10.10.10:2375') {
                docker.withRegistry('https://registry.my.com/', 'jenkins-registry') {
                    docker.image('registry.my.com/image-my/my:latest').run('-p 9090:80 -i -t --name harpal')
                }
            }
        }
    }
}
I don't believe there is a way to keep it alive using that Docker plugin Groovy class; it's intended to remove the container after the run.
If you're just trying to launch a Docker container from Jenkins and leave it running, use a shell step and run it detached:
sh 'docker run -d -p 9090:80 --name harpal registry.my.com/image-my/my:latest'
If you're trying to keep a container alive to debug it and look around in it, I usually add
sh 'sleep 30m'
Then go to the Docker machine and take a look around the container with
docker exec -it <ContainerID> bash
