I have installed Jenkins on my local machine, which runs macOS High Sierra and has Docker installed. I am trying to run a simple pipeline example that uses Docker. I have added the following lines to the pipeline:
pipeline {
    agent {
        docker 'node'
    }
    stages {
        stage("testing 123") {
            steps {
                sh 'node --version'
            }
        }
    }
}
Then, from the web GUI, I click "Build Now" and it fails. The console output shows the error docker: command not found. The complete error log is as follows:
Started by user Happycoder
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Running on Jenkins in /Users/Shared/Jenkins/Home/workspace/test
[Pipeline] {
[Pipeline] sh
[test] Running shell script
+ docker inspect -f . node
/Users/Shared/Jenkins/Home/workspace/test@tmp/durable-20ded4c0/script.sh: line 2: docker: command not found
[Pipeline] sh
[test] Running shell script
+ docker pull node
/Users/Shared/Jenkins/Home/workspace/test@tmp/durable-ebdc1549/script.sh: line 2: docker: command not found
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
Why is this happening? The Jenkins documentation only gives this syntax and doesn't mention anything else.
I think the following section is the source of the trouble:
agent {
    docker 'node'
}
This shorthand tries to launch a Docker container from the image named 'node', and it requires the docker client to be available on the agent's PATH.
If you want to test a declarative pipeline, try the explicit block syntax instead (shown here for building a Maven project):
agent {
    docker {
        image 'maven:3.5.0-jdk-8'
    }
}
FYI, you can find a lot of pipeline examples here.
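Adapting that explicit syntax to the 'node' image from the question, a minimal sketch might look like the following (the 'node:16' tag is an assumption, any published Node.js image tag works, and the docker client must still be on the Jenkins agent's PATH for the agent block to succeed):

```groovy
pipeline {
    agent {
        docker {
            // assumption: any published Node.js image tag works here
            image 'node:16'
        }
    }
    stages {
        stage('testing 123') {
            steps {
                // runs inside the container started from the image above
                sh 'node --version'
            }
        }
    }
}
```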
We have a containerized Jenkins pipeline, and for one of the stages we want some steps to be executed in the container and some on the Jenkins master (which in our case is Windows):
pipeline {
    agent {
        docker {
            label "<node-name>"
            image "<docker-image-path>"
        }
    }
    stages {
        stage('Testing') {
            steps {
                script {
                    // This part will be executed in the container
                    println "This below part will be executed on container"
                    sh '''
                        pwd
                        hostname -i
                    '''
                    // Now we want to execute the code below on the master, which is Windows
                    println "Now want to execute below code on master which is Windows"
                    node('master') {
                        bat 'dir'
                    }
                }
            }
        }
    }
}
The part intended for the container executes successfully, but the code intended for the Windows Jenkins master fails with:
Cannot run program "docker" (in directory "C:\Jenkins\workspace\TestDocker"): CreateProcess error=2, The system cannot find the file specified
EDIT
And when I have Docker installed on the Windows machine, the above error is not thrown, but the build just hangs there forever.
Could you please help me with how I can execute code on the node or in the container on demand?
I am trying to run the Python pipeline example.
I am able to start Jenkins in Docker and connect to Git, but I get a "docker: not found" error when I trigger the job:
+ docker inspect -f . python:3.5.1
/var/jenkins_home/workspace/example-project@tmp/durable-6edc6c68/script.sh: 1: /var/jenkins_home/workspace/example-project@tmp/durable-6edc6c68/script.sh: docker: not found
[Pipeline] isUnix
[Pipeline] sh
+ docker pull python:3.5.1
/var/jenkins_home/workspace/example-project@tmp/durable-bd8f56f3/script.sh: 1: /var/jenkins_home/workspace/example-project@tmp/durable-bd8f56f3/script.sh: docker: not found
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
I have also tried another solution I found online: setting Docker to be installed automatically under Global Tool Configuration:
However, my Jenkins running in a Docker container is unable to connect to the Docker daemon when I configure the Docker plugin.
On the other hand, another Jenkins instance of mine, installed on Windows 10, is able to connect to the Docker daemon. May I know what could be going wrong?
I just encountered a problem when running a Jenkins declarative pipeline on a Jenkins server that is itself running inside Docker, with access to the host's docker.sock.
The structure of the pipeline is rather simple:
pipeline {
    agent {
        docker { image 'gradle:jdk11' }
    }
    stages {
        stage('Checkout') {
            steps {
                // ...
            }
        }
        stage('Assemble public API documentation') {
            environment {
                // ...
            }
            steps {
                // ...
            }
        }
        stage('Generate documentation') {
            steps {
                // ...
            }
        }
        stage('Upload documentation to Firebase') {
            agent {
                docker {
                    image 'node:12'
                    reuseNode false
                }
            }
            steps {
                // ...
            }
        }
    }
}
The idea is to run three stages in the first container, and then create a new container for the final stage.
The following is printed when entering the last stage:
[Pipeline] stage
[Pipeline] { (Upload documentation to Firebase)
[Pipeline] getContext
[Pipeline] isUnix
[Pipeline] sh
+ docker inspect -f . node:12
/var/jenkins_home/workspace/publish_public_api_doc@tmp/durable-bc4d65d1/script.sh: 1: /var/jenkins_home/workspace/publish_public_api_doc@tmp/durable-bc4d65d1/script.sh: docker: not found
[Pipeline] isUnix
[Pipeline] sh
+ docker pull node:12
/var/jenkins_home/workspace/publish_public_api_doc@tmp/durable-297d223a/script.sh: 1: /var/jenkins_home/workspace/publish_public_api_doc@tmp/durable-297d223a/script.sh: docker: not found
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 367647f97c9eed52bf85c13c2bc2203bb7194adac803d37cab0e0d0435325efa
$ docker rm -f 367647f97c9eed52bf85c13c2bc2203bb7194adac803d37cab0e0d0435325efa
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
I don't really understand what is happening here.
In order to debug this, I logged in to that machine, and ran the docker command from the host, as well as from inside the running Jenkins container, and it was working.
The way this is set up, the Docker client is installed in the Jenkins image itself, i.e. the binary is not shared into the container from the host.
Since the docker command is "not found", the only explanation I have is that the docker command that starts the agent for the final stage is executed not in the "top-level" Jenkins container, but in the JDK one, which does not have the docker executable inside.
This, however, would seem unexpected, if not a bug.
I'd be thankful if anyone could shed some light on this.
Jenkins pipeline agents/nodes
Your pipeline specifies an agent to run on at the top-most level. The pipeline will execute all commands on that agent (or, in your scenario, within a Docker container) until another agent is specified. When a new agent is specified, the top-level agent connects to it via some protocol, and the new agent executes all pipeline stages/steps that are within that agent's scope. Once out of scope, the connection to the new agent is closed and the top-level agent once again executes all commands.
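That scoping can be sketched as a minimal pipeline (the agent labels here are hypothetical placeholders):

```groovy
pipeline {
    agent { label 'top-level' }          // top-level agent runs everything below...
    stages {
        stage('On the top-level agent') {
            steps { sh 'hostname' }
        }
        stage('On a different agent') {
            agent { label 'other' }      // ...except within this stage's scope
            steps { sh 'hostname' }      // executed on the 'other' agent
        }
    }
}
```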
What's causing the error?
The fourth stage attempts to change the execution context to a new agent. The current agent, the gradle:jdk11 container, executes the steps to connect to this new agent. As the new agent is a Docker container, the gradle:jdk11 container attempts to use the docker command itself to spin up the new container.
As you suspected, there is no docker binary/service within this container.
Why is this the expected behaviour?
Assume that the top-level agent were a different physical machine connected via TCP or SSH, rather than a Docker container. That machine would need all the tools installed on it for compiling, generating docs, running unit tests, etc. E.g. it wouldn't use the doxygen binary installed on the Jenkins master, as it must provide this itself (throwing errors if doxygen doesn't exist in the $PATH). Likewise, this machine would need docker installed to spin up the container in the fourth stage.
How can I get my pipeline working?
You could create your own custom Docker image inheriting from gradle:jdk11 and share the host system's Docker. This would allow your custom image to spin up the Docker image required in the fourth stage. You would use agent { docker { image 'my-custom-img' } } at a global scope.
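A sketch of that option's agent block ('my-custom-img' is a hypothetical image built FROM gradle:jdk11 with a docker client added; the socket path assumes a standard Linux host):

```groovy
agent {
    docker {
        // hypothetical custom image that includes a docker client
        image 'my-custom-img'
        // share the host's Docker daemon socket so docker commands
        // issued inside the container reach the host daemon
        args '-v /var/run/docker.sock:/var/run/docker.sock'
    }
}
```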
Alternatively you could use the master agent (or other physical machines) at a global scope and have each stage spin up its own container. Each stage would have a clean working environment, so you'd need to use stash/unstash or a mounted volume to share src/docs between stages.
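A minimal sketch of that alternative, with per-stage containers and stash/unstash to carry files between them (image tags, paths, and commands are illustrative):

```groovy
pipeline {
    agent none                           // each stage declares its own agent
    stages {
        stage('Build docs') {
            agent { docker { image 'gradle:jdk11' } }
            steps {
                sh 'gradle assemble'     // illustrative build step
                // save generated files before this container is torn down
                stash name: 'docs', includes: 'build/docs/**'
            }
        }
        stage('Upload docs') {
            agent { docker { image 'node:12' } }
            steps {
                unstash 'docs'           // restore files in the fresh workspace
                sh 'ls build/docs'       // illustrative use of the stashed files
            }
        }
    }
}
```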
When launching a pipeline using Jenkins with the following syntax:
stage('Verify test') {
    agent {
        docker { image 'python_image:latest' }
    }
    steps {
        sh 'robot RobotFramework/test.robot'
    }
    post {
        always {
            archiveArtifacts 'log.html'
            archiveArtifacts 'report.html'
            archiveArtifacts 'output.xml'
            junit 'output.xml'
        }
    }
}
I get the following error:
connect to UUT device | FAIL |
DatafileError: Failed to load the datafile '/opt/app-root/lib/python3.6/site-packages/genie/libs/sdk/genie_yamls/iosxr/trigger_datafile_xr.yaml'
It does work when I run the exact same command (robot RobotFramework/test.robot) in a new Docker container using the same image, or when I pause the container in the Jenkins pipeline and execute the same command in the running container.
Only when I create a virtualenv in the Docker container do I get the exact same error, but I assume that is not happening when Jenkins runs the Docker container.
Fixed by adding #!/bin/bash as the first line of the script. Note that the shebang must sit on a line of its own, with the command on the next line:
sh '''#!/bin/bash
robot RobotFramework/test.robot'''
I am new to Jenkins, and I am basically trying to build an image from a Dockerfile and get a green light after the image is built.
I keep running into the issue:
[nch-gettings-started_master-SHLPWPHFAAYXF7TNKZMDMDGWQ3SU5XIHKYETXMIETUSVZMON4MRA]
Running shell script
docker build -t my-image:latest .
/Users/Shared/Jenkins/Home/workspace/nch-gettings-started_master-SHLPWPHFAAYXF7TNKZMDMDGWQ3SU5XIHKYETXMIETUSVZMON4MRA@tmp/durable-a1f989d1/script.sh:
line 2: docker: command not found
script returned exit code 127
My pipeline-as-code is as follows:
node {
    stage('Clone repository') {
        checkout scm
    }
    stage('Build image') {
        def app = docker.build("my-image:my-tag")
    }
}
I have also tried:
pipeline {
    agent any
    stages {
        stage('clone repo') {
            steps {
                checkout scm
            }
        }
        stage('build image') {
            steps {
                script {
                    docker.build("my-image:my-tag")
                }
            }
        }
    }
}
I have already installed the Docker Pipeline plugin. By the way, Jenkins is running on my localhost.
line 2: docker: command not found
That is your issue. Depending on where the job runs, you need to make sure your slave image/VM/machine has Docker installed.
If you have jobs running on your master, make sure docker is installed there.
If you have jobs running in Kubernetes, make sure your slave image has docker installed.
EDIT:
Just saw that you're running on localhost. Make sure you have Docker installed there and that it's in your $PATH.
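A quick way to see what the Jenkins process itself can resolve, independent of your interactive shell's PATH, is a throwaway pipeline that just probes for the client (a sketch; agent any assumes the job lands on the machine you expect):

```groovy
pipeline {
    agent any
    stages {
        stage('probe for docker') {
            steps {
                // prints the resolved path of the docker client, or fails
                // with a clear message if it is not on this agent's PATH
                sh 'command -v docker || { echo "docker not on PATH for Jenkins"; exit 1; }'
            }
        }
    }
}
```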