So I'm very new to Jenkins and I'm trying to use it to automatically build my Docker image.
I'm using a freestyle project.
Under the build steps I added an Execute shell step
and added "docker images" (to see if Docker worked).
This produced the following error:
command:
docker images
output:
/var/folders/ym/d71xv1gx4fq16slmbtkmwr680000gn/T/jenkins8066052183358063134.sh: line 2: docker: command not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
However, if I issue the following command, it works:
/usr/local/bin/docker images
Question:
How do I set up the PATH variable for Docker so that I don't have to specify the full path to the docker binary?
I would suggest checking what the job's PATH variable is. Add echo $PATH at the top of your Execute shell script, run the job again, and check in the console output whether /usr/local/bin is in the PATH. If it is not, you should modify the PATH in the global Jenkins configuration: Jenkins -> Manage Jenkins -> Configure System -> Global Properties, check Environment Variables, add a PATH variable, and make sure it contains /usr/local/bin (together with all the other paths). For testing purposes, you can run export PATH=$PATH:/usr/local/bin at the top of your shell script to see if the docker command runs, as in the sketch below.
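A minimal sketch of what the Execute shell step could contain while debugging, assuming /usr/local/bin is the directory missing from the job's PATH:
# Print the PATH the job actually sees
echo $PATH
# Temporary workaround for testing only; prefer setting PATH under Global Properties
export PATH=$PATH:/usr/local/bin
docker images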
This is what worked for me:
# enable the extras repository
sudo yum-config-manager --enable rhui-REGION-rhel-server-extras
# install Docker
sudo yum -y install docker-ce
# start Docker
sudo systemctl start docker
# test that Docker is installed and working
sudo docker run hello-world
# enable Docker to start on boot
sudo systemctl enable docker.service
With these commands, the above error no longer occurred.
Related
I want to run the docker build command in the Jenkins shell prompt.
I have already installed Docker and added the jenkins user to the docker group.
But when I run the docker build command it shows a permission denied error, and when I use a sudo prefix it asks for a password with the -S argument.
I am running all commands on the Jenkins master; earlier I used another node server, not the master.
So what is the best way to resolve this?
I'm a beginner with Docker as well as with TeamCity. I set up a pipeline that builds a Docker image and wanted to configure it to run the container after a successful build. I tried to use a Docker build step, but the advice I found was to use a command-line step with the executable parameter, plus some approach involving the Docker socket. I searched the internet and YouTube and did not find any clear examples of starting a container after a build. I saw some examples of launching with agents, but I did not understand what was written there either. Please give an example of running Docker as a step in the pipeline on Linux.
I solved a similar requirement on Jenkins by doing the following.
Add a shell file (e.g. run.sh) to your project. In it, put the docker run command that you would use from the command line, adding > /dev/null 2>&1 & at the end so that the process runs in the background and its output streams go to /dev/null.
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag > /dev/null 2>&1 &
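So run.sh would contain little more than that one line (the container name, password, and image tag are the same placeholders as above):
#!/bin/sh
# Start the container detached and in the background, discarding its output
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag > /dev/null 2>&1 &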
Then in your Jenkins (or TeamCity) script, add a sh step to run this file:
steps {
    dir('whatever-dir-run.sh-is-in') {
        sh "JENKINS_NODE_COOKIE=dontKillMe sh run.sh"
    }
}
Note: if JENKINS_NODE_COOKIE has an equivalent in TeamCity, use that.
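Putting it together, a minimal declarative pipeline sketch might look like this (the stage name and directory are placeholders, and it assumes run.sh is checked into your repository):
pipeline {
    agent any
    stages {
        stage('Run container') {
            steps {
                dir('whatever-dir-run.sh-is-in') {
                    // dontKillMe stops Jenkins from reaping the background process when the build finishes
                    sh "JENKINS_NODE_COOKIE=dontKillMe sh run.sh"
                }
            }
        }
    }
}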
I have been working through the Docker Book and I am now learning about CI. I tried to run this script within the Execute shell step of my build:
# Build the image to be used for this job.
IMAGE=$(sudo docker build . | tail -1 | awk '{ print $NF }')
# Build the directory to be mounted into Docker.
MNT="$WORKSPACE/.."
# Execute the build inside Docker.
CONTAINER=$(sudo docker run -d -v $MNT:/opt/project/ $IMAGE /bin/bash -c 'cd /opt/project/workspace; rake spec')
# Attach to the container so that we can see the output.
sudo docker attach $CONTAINER
# Get its exit code as soon as the container stops.
RC=$(sudo docker wait $CONTAINER)
# Delete the container we've just used.
sudo docker rm $CONTAINER
# Exit with the same value as that with which the process exited.
exit $RC
Running this script ends in the build failing. It shows these two errors:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
and
sudo docker run -d -v /private/var/jenkins_home/jobs/${Docker_test_job}/workspace/..:/opt/project/ /bin/bash -c cd /opt/project/workspace; rake spec
docker: invalid reference format.
See 'docker run --help'.
+ CONTAINER=
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Finished: FAILURE
I don't understand how to fix it, as I've been following the instructions in the book. I tried using $PWD to fix the issue, but that didn't work either.
Actually, the jenkins user does not have permission to run the docker command. To fix this, add the jenkins user to the docker group:
sudo usermod -aG docker jenkins
Then restart your Jenkins server so the new group membership takes effect.
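On a systemd-based host that is typically something like the following (the service name may differ on your system):
# Restart Jenkins so the jenkins user picks up its new docker group membership
sudo systemctl restart jenkins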
Please be aware that there is a warning: "The docker group grants privileges equivalent to the root user." See the Docker documentation for details on how this impacts the security of your system.
I have written an image that bundles utils to run commands using several CLIs. I want to run this as an executable as follows:
docker run my_image cli command
where cli is my custom CLI and command is a command for that CLI.
When I build my image I have the following instruction in the Dockerfile:
ENV PATH="/cli/scripts:${PATH}"
The above works if I do not chain commands to the container. If I chain commands it stops working:
docker run my_image cli command && cli anothercommand
Command 'cli' not found, but can be installed with...
The first command works and the second fails.
So the logical conclusion is that cli is missing from the PATH. I tried to verify that with:
docker run my_image printenv PATH
This does output the container's PATH, and everything looks alright. So I tried to chain this command too:
docker run my_image printenv PATH && printenv PATH
And sure enough, this outputs first the container's PATH and then the PATH of my host system.
What is the reason for this? How do I work around it?
When you type a command into your shell, your local shell processes it first, before any command gets run. It sees (reformatted):
docker run my_image cli command \
&& \
cli anothercommand
That is, your host's shell picks up the &&, so the host first runs docker run and then runs cli anothercommand (if the container exited successfully).
You can tell the container to run a shell, and then the container's shell will handle things like command chaining, redirections, and environment variables:
docker run my_image sh -c 'cli command && cli anothercommand'
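The same host-first processing applies to variable expansion, which is worth keeping in mind when debugging PATH issues like this; single quotes keep the expansion inside the container (illustrative commands using the same image name as above):
# Double quotes: $PATH is expanded by the host shell before docker runs, so this prints the host's PATH
docker run my_image sh -c "echo $PATH"
# Single quotes: $PATH is expanded by the shell inside the container, so this prints the container's PATH
docker run my_image sh -c 'echo $PATH'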
If this is more than occasional use, also consider writing this into a shell script:
#!/bin/sh
set -e
cli command
cli anothercommand
COPY the script into your Docker image, and then you can docker run my_image cli_commands.sh or some such.
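A minimal sketch of the Dockerfile side, assuming the script is saved as cli_commands.sh next to the Dockerfile and that /cli/scripts is the directory the question already adds to PATH:
# Put the script somewhere on the image's PATH and make it executable
COPY cli_commands.sh /cli/scripts/cli_commands.sh
RUN chmod +x /cli/scripts/cli_commands.sh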
I'm working with a Jenkins install I've inherited. This install has the CloudBees Docker Custom Build Environment Plugin installed. We think this plugin gives us a nifty Build inside a Docker container checkbox in our build configuration. When we configure jobs with this option, it looks like (based on Jenkins console output) Jenkins runs the build with the following command
Docker container 548ab230991a452af639deed5eccb304026105804b38194d98250583bd7bb83q started to host the build
[WIP-Tests] $ docker exec -t 548ab230991a452af639deed5eccb304026105804b38194d98250583bd7bb83q /bin/sh -xe /tmp/hudson7939624860798651072.sh
However, we've found that this runs /bin/sh with a very limited set of environment variables, including a $PATH that doesn't even include /bin! So:
How does the CloudBees Docker Custom Build Environment Plugin set up its /bin/sh environment? Is this user-configurable via the UI (and if so, where)?
It looks like Jenkins is using docker exec, which I think means that it must have (invisibly) set up a container with a long-running process using docker run. Does anyone know how the CloudBees Docker Custom Build Environment Plugin invokes docker run, and whether this is user-manageable?
Considering this plugin is "up for adoption" and its source code shows very few recent commits, I would recommend the official JENKINS/Docker Pipeline Plugin instead.
But don't forget that any container has a default entrypoint set to /bin/sh:
ENTRYPOINT ["/bin/sh", "-c"]
Then:
The Docker container is run after SCM has been checked out into a slave workspace, then all later build commands are executed within the container thanks to docker exec, introduced in Docker 1.3.
That means the image you will be pulling or building/running must have a shell, to allow docker exec to execute a command in it.
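With the Docker Pipeline plugin, the equivalent of "build inside a Docker container" looks roughly like this in a Jenkinsfile (the image name and build command are placeholders):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    // Pull the image and run the build steps inside it;
                    // the image must contain a shell for sh steps to work
                    docker.image('maven:3-jdk-8').inside {
                        sh 'mvn -B verify'
                    }
                }
            }
        }
    }
}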