Jenkins docker: command not found - docker

I have installed Jenkins on my local machine (macOS), and Docker as well.
I have created a Jenkinsfile which contains the code below:
pipeline {
    agent {
        docker { image 'python:2.7' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'python --version'
            }
        }
    }
}
Now I clicked on Build Now, which gave me an error like this:
+ docker inspect -f . python:2.7
/Users/PKD/.jenkins/workspace/gfffffgfg#tmp/durable-42c1e897/script.sh: line 1: docker:
command not found
[Pipeline] isUnix
[Pipeline] sh
+ docker pull python:2.7
/Users/PKD/.jenkins/workspace/gfffffgfg#tmp/durable-0ffec7d7/script.sh: line 1: docker:
command not found
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
I'm new to Jenkins and have tried to resolve this issue by googling, but didn't find anything helpful.
Can someone please help me resolve this issue?

The path to the docker binary is probably not in your PATH variable in the context that Jenkins is started in. Try executing docker by providing the full path to the executable; in my case it is /usr/local/bin/docker. This will be the case if Jenkins is started by launchctl directly and doesn't pick up your bash or zsh environment.
If you've started Jenkins in a Docker container, however, the reason for the docker executable not being found is different: there is no docker installed inside your Jenkins container. But I doubt this is the case.
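If you'd rather fix the PATH than hard-code the full path, here is a minimal scripted sketch, assuming docker lives in /usr/local/bin (the usual Docker Desktop location on macOS; check with "which docker"), using Jenkins' PATH+XYZ convention to prepend a directory to PATH:
node {
    // Prepend the assumed Docker Desktop binary directory to PATH for this block
    withEnv(['PATH+DOCKER=/usr/local/bin']) {
        sh 'docker version'
    }
}
Alternatively, add /usr/local/bin to PATH under Manage Jenkins » Configure System » Global properties » Environment variables so every job picks it up.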

Related

unable to run simple jenkins docker node build (home directories outside of /home are not currently supported)

I am using the very simple script below, as per the official docs (https://www.jenkins.io/doc/book/pipeline/docker/):
pipeline {
    agent {
        docker { image 'node:14-alpine' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
Simple as it is, it outputs the following:
22:58:45 [Pipeline] }
22:58:45 [Pipeline] // stage
22:58:45 [Pipeline] withEnv
22:58:45 [Pipeline] {
22:58:45 [Pipeline] isUnix
22:58:45 [Pipeline] sh
22:58:45 + docker inspect -f . node:14-alpine
22:58:46 Sorry, home directories outside of /home are not currently supported.
22:58:46 See https://forum.snapcraft.io/t/11209 for details.
22:58:46 [Pipeline] isUnix
22:58:46 [Pipeline] sh
22:58:46 + docker pull node:14-alpine
22:58:46 Sorry, home directories outside of /home are not currently supported.
22:58:46 See https://forum.snapcraft.io/t/11209 for details.
22:58:46 [Pipeline] }
22:58:46 [Pipeline] // withEnv
22:58:46 [Pipeline] }
22:58:46 [Pipeline] // node
22:58:46 [Pipeline] End of Pipeline
22:58:46 ERROR: script returned exit code 1
22:58:46 Finished: FAILURE
Not sure what I am doing wrong.
The hyperlink inside the message leads to a page that says:
Snapd does currently not support running snaps if the home directory of the user is outside of /home.
It says that for the docker command. I suspect you're trying to run the docker command as the jenkins user, whose default home directory is /var/lib/jenkins, which is outside /home.
If that's the case, there are several alternatives available:
Create a user on that computer with a home directory in /home and run a Jenkins agent as that user
Install docker on that computer using apt instead of snapd, following the Docker directions rather than the Ubuntu directions (see the sketch after this list)
Create a user on another computer with a home directory in /home and install docker there with snapd, then configure an agent to use that computer
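A rough sketch of the second option; the commands assume Ubuntu, and the get.docker.com convenience script is just one of the install routes Docker documents:
sudo snap remove docker                      # remove the snap-packaged docker
curl -fsSL https://get.docker.com | sudo sh  # install Docker Engine via Docker's convenience script
sudo usermod -aG docker jenkins              # let the jenkins user talk to the Docker daemon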
It's likely you are inheriting the HOME environment variable from Jenkins in some way. You can override that in your pipeline configuration. If you want the HOME from the worker node executing the docker build, you can mount env.HOME into /home/jenkins (or something like that) in the container.
Something like:
pipeline {
    agent {
        docker {
            image 'node:14-alpine'
            args '-v $HOME:/home/jenkins'
        }
    }
    ...
}

Proper way to run docker container using Jenkinsfile

In my Jenkinsfile, I have a step that runs a Docker image pulled from my Docker Hub.
stage('pull image and run') {
    steps {
        sh '''
            docker login -u <username> -p <password>
            docker run -d -p 9090:3000 <tag>
        '''
    }
}
This step is fine the first time I run the script. However, if I run it a second time, I get this error:
Login Succeeded
+ docker run -d -p 9090:3000 <tag>
669955464d74f9b5186b437b7127ca0a24f6ea366f3a903c673489bec741cf78
docker: Error response from daemon: driver failed programming external connectivity on endpoint distracted_driscoll (db16abd899cf0cbd4f26cf712b1eee4ace5b491e061e2e31795c2669296068eb): Bind for 0.0.0.0:9090 failed: port is already allocated.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 125
Finished: FAILURE
Obviously, port 9090 is not available, so the execution failed.
Question:
What is the correct way to upgrade an app inside a docker container?
I could stop the container before running docker run, but I can't find a proper way to do that in Jenkinsfile steps.
Any suggestion?
Thanks
Jenkins has really good Docker support for running your build inside a Docker container; a good example can be found here.
One declarative example to do a Maven build:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /tmp:/tmp'
            registryUrl 'https://myregistry.com/'
            registryCredentialsId 'myPredefinedCredentialsInJenkins'
        }
    }
    stages {
        stage("01") {
            steps {
                sh "mvn -v"
            }
        }
        stage("02") {
            steps {
                sh "mvn --help"
            }
        }
    }
}
In a scripted pipeline, it would be
node {
    docker.withRegistry('https://registry.example.com', 'credentials-id') {
        docker.image('node:14-alpine').inside("-v /tmp:/tmp") {
            stage('Test') {
                sh 'node --version'
            }
        }
    }
}
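Neither example above replaces a container that is already running, which was the original problem. A sketch for that part, assuming you give the container a fixed name (myapp here is a placeholder):
stage('pull image and run') {
    steps {
        sh '''
            docker login -u <username> -p <password>
            # remove any container left over from a previous run; "myapp" is an assumed name
            docker rm -f myapp || true
            docker run -d --name myapp -p 9090:3000 <tag>
        '''
    }
}
With a fixed --name, each build tears down the previous container (freeing port 9090) before starting the new one.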

Jenkins inside docker how to configure path for hp fortify sourceanalyzer

I am running my Jenkins instance inside Docker.
I am trying to run a Fortify scan as a post-build step.
I have the HPE Security Fortify Jenkins Plugin installed.
Now when I try to do something like
def call(String maven_version) {
    withMaven(maven: maven_version) {
        script {
            sh "sourceanalyzer -b %JOB_NAME% -jdk 1.7 -extdirs %WORKSPACE%/target/deps/libs/ %WORKSPACE%/target/deps/src/**/* -source target/%JOB_NAME%.fpr"
        }
    }
}
But I get
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Fortify Analysis)
[Pipeline] withMaven
[withMaven] Options: []
[withMaven] Available options:
[withMaven] using JDK installation provided by the build agent
[withMaven] using Maven installation 'Maven 3.3.9'
[Pipeline] {
[Pipeline] script
[Pipeline] {
[Pipeline] sh
[Running shell script
+ sourceanalyzer -b %JOB_NAME% -jdk 1.7 -extdirs %WORKSPACE%/target/deps/libs/ %WORKSPACE%/target/deps/src/**/* -source target/%JOB_NAME%.fpr
script.sh: sourceanalyzer: not found
I think all I need to do is create an environment variable for sourceanalyzer, but how do I see where that plugin is, since this is a Docker container and not really an operating system running? That's where the source of my confusion is.
It is not looking for an environment variable.
sourceanalyzer is an executable, and it's not available in the PATH.
Additionally, you can consider a Docker container as an operating system (multiple things and layers aggregated together before starting).
If you want to get into the running instance of your Jenkins image, launch the following command (ensure your container is running):
#>docker exec -it <container-id> sh
The container ID is available when you launch:
#>docker ps
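Once you have located the binary inside the container (for example with: find / -name sourceanalyzer), you can put its directory on the PATH for the step. A sketch, assuming a hypothetical install location of /opt/Fortify/bin; note also that %JOB_NAME% is Windows batch syntax, so in an sh step you would write ${JOB_NAME} instead:
def call(String maven_version) {
    withMaven(maven: maven_version) {
        // PATH+FORTIFY prepends the directory to PATH for this block;
        // /opt/Fortify/bin is an assumed location, adjust to your install
        withEnv(['PATH+FORTIFY=/opt/Fortify/bin']) {
            sh 'sourceanalyzer -b ${JOB_NAME} -jdk 1.7 -extdirs ${WORKSPACE}/target/deps/libs/ ${WORKSPACE}/target/deps/src/**/* -source target/${JOB_NAME}.fpr'
        }
    }
}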

Bind Volumes to Docker container in a pipeline job

So, I have this pipeline job that builds completely inside a Docker container. The Docker image used is pulled from a local repository before the build and has almost all the dependencies required to run my project.
The problem is I need a way to bind-mount volumes from the host into the container, so that I can perform some analysis using a tool that is available on my host system but not in the container.
Is there a way to do this from inside a Jenkinsfile (Pipeline script)?
I'm not fully clear if this is what you mean, but if it isn't, let me know and I'll try to figure it out.
What I understand by mounting from host to container is mounting the contents of the Jenkins workspace inside the container.
For example in this pipeline:
pipeline {
    agent { node { label 'xxx' } }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3', artifactNumToKeepStr: '1'))
    }
    stages {
        stage('add file') {
            steps {
                sh 'touch myfile.txt'
                sh 'ls'
            }
        }
        stage('Deploy') {
            agent {
                docker {
                    image 'lvthillo/aws-cli'
                    args '-v $WORKSPACE:/project'
                    reuseNode true
                }
            }
            steps {
                sh 'ls'
                sh 'aws --version'
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}
In the first stage I just add a file to the workspace, purely in Jenkins, nothing to do with Docker.
In the second stage I start a Docker container which contains the AWS CLI (this is not installed on our Jenkins slaves). We start the container and mount the workspace into the /project folder of the container. Now I can execute AWS CLI commands and I have access to the text file. In a later stage (not in this pipeline) you could use the file again in a different container or on the Jenkins slave itself.
Output:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (add file)
[Pipeline] sh
[test] Running shell script
+ touch myfile.txt
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] getContext
[Pipeline] sh
[test] Running shell script
+ docker inspect -f . lvthillo/aws-cli
.
[Pipeline] withDockerContainer
FJ Arch Slave 7 does not seem to be running inside a container
$ docker run -t -d -u 201:201 -v $WORKSPACE:/project -w ... lvthillo/aws-cli cat
$ docker top xx -eo pid,comm
[Pipeline] {
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] sh
[test] Running shell script
+ aws --version
aws-cli/1.14.57 Python/2.7.14 Linux/4.9.78-1-lts botocore/1.9.10
[Pipeline] }
$ docker stop --time=1 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
$ docker rm -f 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] cleanWs
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] done
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
In your case you can mount your data into the container, perform the work there, and in a later stage do your analysis on the Jenkins slave itself (without Docker).
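For the analysis part, a sketch of an extra stage that runs on the agent itself, outside any container (my-analysis-tool is a placeholder for whatever host tool you need):
stage('Analyze on host') {
    steps {
        // no docker agent here, so this runs on the Jenkins slave itself
        // and can use tools installed on the host
        sh 'my-analysis-tool myfile.txt'
    }
}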
If you are on Linux, run the following command:
docker run -it --rm -v /local_dir:/image_root_dir/mount_dir image_name
Here is some detail:
-it: interactive terminal
--rm: remove the container after you exit it
-v: mount your local directory into the container as a volume
Since the mount will 'cover' (shadow) the directory already present in your image, you should always mount into a new directory under your image's root directory.
Visit Use bind mounts to get more information.
PS:
Run
sudo -s
and type the password before you run docker; that saves you a lot of time, since you don't have to put sudo in front of docker every time.
PS2:
Suppose you have an image with a long name and the image ID is 5ed6274db6ce; you can refer to it with just the first three digits of the ID, or more:
docker run [options] 5ed
If more images share the same first three digits, use four or more.
For example, given the following two images:
REPOSITORY                      IMAGE ID
My_Image_with_very_long_name    5ed6274db6ce
My_Image_with_very_long_name2   5edc819f315e
you can simply run
docker run [options] 5ed6
to run the image My_Image_with_very_long_name.

Jenkins Docker Pipelining inside Docker

I'm following along with this tutorial:
https://www.linkedin.com/pulse/building-docker-pipeline-cloudbees-jenkins-jay-johnson
I'm running Jenkins on Docker 17:
docker run -d -p 8080:8080 -p 50000:50000 --name jenkins jenkins
I followed the instructions and replaced Jay's credentials with my own: I added my creds to Global and then renamed the creds in the pipeline script. When I attempt the build, though, I'm getting the following error:
Proceeding
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
Wrote authentication to /var/jenkins_home/.dockercfg
[Pipeline] {
[Pipeline] stage (Building)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Building
Proceeding
[Pipeline] sh
[alfred-master] Running shell script
+ docker build -t jayjohnson/django-slack-sphinx:testing django
/var/jenkins_home/workspace/alfred-master#tmp/durable-713ce0d7/script.sh: 2: /var/jenkins_home/workspace/alfred-master#tmp/durable-713ce0d7/script.sh: docker: not found
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
I'm assuming this is looking for the docker binary.
How can I build a docker image from a repo from inside a Docker container?
The issue is here:
/var/jenkins_home/workspace/alfred-master#tmp/durable-713ce0d7/script.sh: 2: /var/jenkins_home/workspace/alfred-master#tmp/durable-713ce0d7/script.sh: docker: not found
I'm assuming your build is running on the master instance, which is just a basic installation of Jenkins - no extra tools.
You'll want to run an agent (slave) and connect it to your master; this agent should have Docker installed, and then you will be able to run those commands.
You can either set this up yourself, or use an open source option. Currently in my own setup I'm using this image, which has everything I need (well, personally, I've forked it and added some of my own tools as well).
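A common alternative, not covered in that tutorial, is to let the Jenkins container drive the host's Docker daemon by mounting the Docker socket. A sketch; note that the image also needs a docker CLI inside it (installed in the image, or bind-mounted from the host if its shared libraries are compatible):
docker run -d -p 8080:8080 -p 50000:50000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --name jenkins jenkins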
