I'm trying to use a pipeline build in OpenShift 3.9 where I need the docker CLI, but I can't figure out how to make the 'docker' command available in my pipeline.
I've tried the code below with a declarative pipeline, but I'm getting "docker: command not found":
pipeline {
    agent {
        docker { image 'node:7-alpine' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
The code was copied from here:
https://jenkins.io/doc/book/pipeline/docker/
I also tried the scripted version of it:
node {
    /* Requires the Docker Pipeline plugin to be installed */
    docker.image('node:7-alpine').inside {
        stage('Test') {
            sh 'node --version'
        }
    }
}
But I'm getting the same error: "docker: command not found".
The Docker Pipeline plugin is installed (version: 1.17).
OpenShift version: 3.9
Any suggestions? Thank you!
It seems that Docker is not installed on the node where your pipeline runs; you need to install it first. You cannot use the docker command right after installing the Docker Pipeline plugin without further setup:
By default, the Docker Pipeline plugin will communicate with a local Docker daemon, typically accessed through /var/run/docker.sock.
OpenShift provides Jenkins slave images of three types:
Maven supported
Nodejs supported
Base image
None of them has Docker installed, and believe me, adding it is not a good idea.
In OpenShift, Jenkins runs as a pod (a Docker container), and you want Docker available inside that container. So if you want to make Docker available, you have to (a sketch follows below):
Create a Jenkins slave image by extending the base image and adding Docker to it
Push it to the registry
Start using it!
But do the above only if you really, really want that; just a word of caution ;)
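A minimal sketch of that image build, assuming the CentOS 7 base slave image from the OpenShift 3.x era; the registry path and tag are placeholders for your own:

    # Hypothetical sketch: extend the OpenShift base slave image with the
    # Docker CLI, then push the result. Registry path and tag are placeholders.
    cat > Dockerfile <<'EOF'
    FROM openshift/jenkins-slave-base-centos7
    USER root
    # Install the Docker client; it still needs a daemon to talk to, e.g. the
    # host's /var/run/docker.sock mounted into the pod.
    RUN yum install -y docker && yum clean all
    USER 1001
    EOF
    docker build -t registry.example.com/ci/jenkins-slave-docker:latest .
    docker push registry.example.com/ci/jenkins-slave-docker:latest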
Related
pipeline {
    agent any
    stages {
        stage('BuildImage') {
            steps {
                withCredentials([string(credentialsId: 'docker_pw', variable: 'DOCKER_PW')]) {
                    sh '''
                        docker login -u ... -p ${DOCKER_PW} <dockerhub>
                        docker -v
                    '''
                }
            }
        }
        ...
I am building a Jenkins pipeline using a Jenkinsfile. I am trying to build a Docker image in the Jenkinsfile and push it to Docker Hub.
This works sometimes, but sometimes it just fails with the message line 2: docker: command not found.
This doesn't make sense to me because it works sometimes.
Do I have to use a different agent or something?
This may be due to the job trying to run on agents where Docker is not installed. The best solution would be to use labels: you can add a label to the agents where Docker is installed, which helps identify what each agent can be used for. You can then specify it in the pipeline with agent { label 'docker' }.
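For example, a minimal declarative sketch pinned to such agents (the 'docker' label and the image tag are assumptions; substitute your own):

    pipeline {
        // run only on agents that carry the (assumed) 'docker' label
        agent { label 'docker' }
        stages {
            stage('BuildImage') {
                steps {
                    // hypothetical image tag; docker is present because of the label
                    sh 'docker build -t my-image:latest .'
                }
            }
        }
    }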
I am in the process of reworking a pipeline to use the Declarative Pipeline approach so that I will be able to use Docker images on each stage.
At the moment I have the following working code, which performs integration tests against a DB running in a Docker container.
node {
    // checkout, build, test stages...
    stage('Integration Tests') {
        docker.image('mongo:3.4').withRun('-p 27017:27017') { c ->
            sh "./gradlew integrationTest"
        }
    }
}
Now with Declarative Pipelines, the same code would look something like this:
pipeline {
    agent none
    stages {
        // checkout, build, test stages...
        stage('Integration Test') {
            agent { docker { image 'openjdk:11.0.4-jdk-stretch' } }
            steps {
                script {
                    docker.image('mongo:3.4').withRun('-p 27017:27017') { c ->
                        sh "./gradlew integrationTest"
                    }
                }
            }
        }
    }
}
Problem: the stage now runs inside a Docker container, and calling docker.image() leads to a docker: not found error in the stage (it is looking for docker inside the openjdk image that is now used).
Question: how can I start a DB container and connect to it from a stage in a Declarative Pipeline?
What you are essentially trying to use is DinD (Docker-in-Docker).
You are using a Jenkins slave that is created via the Docker agent: agent { docker { image 'openjdk:11.0.4-jdk-stretch' } }
Once the container is running, you try to execute a docker command, and the error docker: not found is valid, as there is no Docker CLI installed in that image. You need to update the Dockerfile/create a custom image based on openjdk:11.0.4-jdk-stretch with the Docker client installed.
Once the client is installed, you need to volume-mount /var/run/docker.sock so that the client inside the container talks to the host's Docker daemon via the socket.
The user should be root or a privileged user to avoid permission denied issues.
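Putting that together, a sketch of the stage might look like the following. The custom image name is a placeholder for whatever you build and push yourself; the extra networking flag is an assumption so the tests can reach the published MongoDB port:

    pipeline {
        agent none
        stages {
            stage('Integration Test') {
                agent {
                    docker {
                        // placeholder: your openjdk:11.0.4-jdk-stretch image
                        // rebuilt with the Docker CLI added
                        image 'myregistry/openjdk11-docker:latest'
                        // expose the host daemon; -u root avoids permission
                        // denied on the socket; host networking lets the tests
                        // reach the published MongoDB port on localhost
                        args '-u root -v /var/run/docker.sock:/var/run/docker.sock --network host'
                    }
                }
                steps {
                    script {
                        docker.image('mongo:3.4').withRun('-p 27017:27017') { c ->
                            sh './gradlew integrationTest'
                        }
                    }
                }
            }
        }
    }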
So if I get this correctly, your tests need two things:
a Java environment
a DB connection
In this case, have you tried a different approach like Docker-in-Docker (DinD)?
You can have a custom image that uses docker:dind as a base image and contains your Java environment, and use it in the agent section; then the rest of the pipeline steps will be able to use the docker command as you expect.
In your example you are trying to run a container from inside openjdk:11.0.4-jdk-stretch. Since this image does not have Docker installed, you will not be able to execute docker; and even if it did, that would be running Docker inside Docker, which you generally should not.
So it depends on what you want.
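A hypothetical sketch of building such an agent image (the package name and tag are assumptions; docker:dind is Alpine-based, so a JDK can be added via apk):

    # Hypothetical sketch: docker:dind as the base, with a JDK added so the
    # pipeline can run both `docker` and Gradle/Java tests in one agent.
    cat > Dockerfile <<'EOF'
    FROM docker:dind
    RUN apk add --no-cache openjdk11
    EOF
    docker build -t myregistry/dind-java:latest .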
Using multiple containers:
In this case you can combine multiple Docker images, but they do not depend on each other:
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:7-alpine' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
Running "sidecar" containers:
This example show you to use two containers simultaneously, which will be able to interacts each others:
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
Please refer to the official documentation: https://jenkins.io/doc/book/pipeline/docker/
I hope it helps.
I have had a similar problem, where I wanted to be able to use an off-the-shelf Maven Docker image to run my builds in, while also being able to build a Docker image containing the application.
I accomplished this by starting the Maven container in which the build runs with access to the host's Docker endpoint.
Partial example:
docker run -v /var/run/docker.sock:/var/run/docker.sock maven:3.6.1-jdk-11
Then, inside the build-container, I download the Docker binaries and set the Docker host:
export DOCKER_HOST=unix:///var/run/docker.sock
wget -nv https://download.docker.com/linux/static/stable/x86_64/docker-19.03.2.tgz
tar -xvzf docker-*.tgz
cp docker/docker /usr/local/bin
Now I can run the docker command inside my build container.
As a (for me positive) side effect, any Docker image built inside a container in one step of the build will be available to subsequent steps, also running in containers, since the images are retained on the host.
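The same idea translates to a scripted Pipeline stage roughly like this (a sketch assuming the Docker Pipeline plugin; -u root avoids permission issues when copying into /usr/local/bin, and the final image tag is a placeholder):

    node {
        checkout scm
        // off-the-shelf Maven image with the host's Docker socket mounted in
        docker.image('maven:3.6.1-jdk-11').inside('-u root -v /var/run/docker.sock:/var/run/docker.sock') {
            sh '''
                export DOCKER_HOST=unix:///var/run/docker.sock
                wget -nv https://download.docker.com/linux/static/stable/x86_64/docker-19.03.2.tgz
                tar -xzf docker-*.tgz
                cp docker/docker /usr/local/bin
                docker build -t my-app:latest .   # hypothetical image tag
            '''
        }
    }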
I want to use Jenkins Pipeline to build, push, and deploy my Docker image.
I get this:
Got permission denied while trying to connect to the
Docker daemon socket at unix:///var/run/docker.sock
Other questions on StackOverflow suggest sudo usermod -a -G docker jenkins, then restart Jenkins, but I do not have access to the machine running Jenkins -- and anyway, it seems strange that Jenkins Pipeline, which is built all around Docker, cannot run a basic Docker command.
How can I build my Docker image?
pipeline {
    agent any
    stages {
        stage('deploy') {
            agent {
                docker {
                    image 'google/cloud-sdk:latest'
                    args '-v /var/run/docker.sock:/var/run/docker.sock'
                }
            }
            steps {
                script {
                    docker.build "gcr.io/myporject/mydockerimage:1"
                }
            }
        }
    }
}
The pipeline definition shown is trying to execute the docker build inside a Docker container (google/cloud-sdk:latest). Instead, you should do the following, given that the jenkins user on the host has permission to execute Docker commands on the host.
pipeline {
    agent any
    stages {
        stage('deploy') {
            steps {
                script {
                    docker.build "gcr.io/myporject/mydockerimage:1"
                }
            }
        }
    }
}
There is nothing strange about Jenkins being unable to execute Docker commands without proper permissions; Jenkins and Docker are installed and configured separately on the machine.
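For reference, the usual host-side fix mentioned in the question looks like this (it requires admin access to the machine running Jenkins, and assumes a systemd-managed Jenkins):

    # Add the jenkins user to the docker group so it can use /var/run/docker.sock,
    # then restart Jenkins so the new group membership takes effect.
    sudo usermod -a -G docker jenkins
    sudo systemctl restart jenkins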
I am new to Jenkins, and I am basically trying to build an image from a Dockerfile and get a green light after the image is built.
I keep running into this issue:
[nch-gettings-started_master-SHLPWPHFAAYXF7TNKZMDMDGWQ3SU5XIHKYETXMIETUSVZMON4MRA]
Running shell script
docker build -t my-image:latest .
/Users/Shared/Jenkins/Home/workspace/nch-gettings-started_master-SHLPWPHFAAYXF7TNKZMDMDGWQ3SU5XIHKYETXMIETUSVZMON4MRA#tmp/durable-a1f989d1/script.sh:
line 2: docker: command not found
script returned exit code 127
My pipeline-as-code is as follows:
node {
    stage('Clone repository') {
        checkout scm
    }
    stage('Build image') {
        def app = docker.build("my-image:my-tag")
    }
}
I have also tried:
pipeline {
    agent any
    stages {
        stage('clone repo') {
            steps {
                checkout scm
            }
        }
        stage('build image') {
            steps {
                docker.build("my-image:my-tag")
            }
        }
    }
}
I have already installed the Docker Pipeline plugin, and by the way, Jenkins is running on my localhost.
line 2: docker: command not found
That is your issue. Depending on where the job is running, you need to make sure your slave image/VM/machine has docker installed.
If you have jobs running on your master, make sure docker is installed there.
If you have jobs running in Kubernetes, make sure your slave image has docker installed.
EDIT:
Just saw that you're running on localhost. Make sure you have Docker installed there and it's in your $PATH.
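A quick way to verify both points from a terminal on that machine (or an sh step in the pipeline) is something like:

    # Check that the docker binary is on the PATH and the daemon is reachable
    which docker || echo "docker is not on PATH"
    docker version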
I have a Jenkins pipeline script which creates a Docker image and deploys it to Docker Hub. I have installed the Docker plugin, but it complains about "docker: command not found". I am not sure if I need to install Docker on the same machine or if something else needs to happen.
Yes, you have to install Docker on the slave machine that runs the pipeline script using the Docker plugin. I would suggest adding a label docker to the slaves that have Docker installed and then using the pipeline script as:
node('docker') {
    ...
}