I have been working through the docker book and I am now learning about CI. I tried to run this script within the execute shell of my build:
# Build the image to be used for this job.
IMAGE=$(sudo docker build . | tail -1 | awk '{ print $NF }')
# Build the directory to be mounted into Docker.
MNT="$WORKSPACE/.."
# Execute the build inside Docker.
CONTAINER=$(sudo docker run -d -v $MNT:/opt/project/ $IMAGE /bin/bash -c 'cd /opt/project/workspace; rake spec')
# Attach to the container so that we can see the output.
sudo docker attach $CONTAINER
# Get its exit code as soon as the container stops.
RC=$(sudo docker wait $CONTAINER)
# Delete the container we've just used.
sudo docker rm $CONTAINER
# Exit with the same value as that with which the process exited.
exit $RC
Running this script ends with the build failing. It shows these two errors:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
and
sudo docker run -d -v /private/var/jenkins_home/jobs/${Docker_test_job}/workspace/..:/opt/project/ /bin/ bash -c cd /opt/project/workspace; rake spec
docker: invalid reference format.
See 'docker run --help'.
+ CONTAINER=
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Finished: FAILURE
I don't understand how to fix it, as I've been following the instructions in the book. I tried using $PWD to fix the issue, but that didn't work either.
The jenkins user does not have permission to run docker commands. To fix this, add the jenkins user to the docker group:
sudo usermod -aG docker jenkins
Then restart your jenkins server to refresh the group.
Note the warning from the Docker documentation: "The docker group grants privileges equivalent to the root user." See the documentation for details on how this impacts security in your system.
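A minimal sketch of the full sequence, assuming Jenkins runs as a systemd service (adjust for your init system):
sudo usermod -aG docker jenkins   # add the jenkins user to the docker group
sudo systemctl restart jenkins    # restart Jenkins so it picks up the new group
sudo -u jenkins docker ps         # verify: should list containers instead of the socket error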
We are running a Kubernetes cluster for building Jenkins jobs. For the pods we are using the odavid/jenkins-jnlp-slave JNLP docker image. I mounted /var/run/docker.sock into the pod container and added the jenkins user (uid=1000) to the docker group on the host systems.
When running a shell script job in Jenkins with e.g. docker ps, it fails with the error docker: not found.
$ /bin/sh -xe /tmp/jenkins6501091583256440803.sh
+ id
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
+ docker ps
/tmp/jenkins2079497433467634278.sh: 8: /tmp/jenkins2079497433467634278.sh: docker: not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
The interesting thing is that when connecting to the pod manually and executing docker commands directly in the container as the jenkins user, it works:
kubectl exec -it jenkins-worker-XXX -- /bin/bash
~$ su - jenkins
~$ id
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins),1000(jenkins)
~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
What is Jenkins doing differently in its job? Same user, same container; the only difference is that groups=1000(jenkins),1000(jenkins) lists 1000(jenkins) twice when connecting manually. What am I missing?
/var/run/docker.sock is just the host socket that allows the docker client to run docker commands from the container.
What you are missing is the docker client in your container.
Download the docker client manually, place it on a persistent volume, and ensure that the docker client is in the system path. Also, ensure that the docker client is executable.
This command will do it for you. You may have to get the right version of the docker client for your environment:
curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.03.1-ce.tgz &&
tar --strip-components=1 -xvzf docker-17.03.1-ce.tgz -C /usr/local/bin
You may even be able to install docker using the package manager for your image.
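For example, depending on the base image, something along these lines might work (a sketch; package names vary by distribution):
# Debian/Ubuntu-based images: installs the docker client (and engine)
apt-get update && apt-get install -y docker.io
# Alpine-based images: client only
apk add --no-cache docker-cli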
So I'm very new to Jenkins and I'm trying to use Jenkins to automatically build my docker image.
Using a freestyle project, under build step I added an execute shell with "docker images" (to see if docker worked), and I get the following error:
command:
docker images
output:
/var/folders/ym/d71xv1gx4fq16slmbtkmwr680000gn/T/jenkins8066052183358063134.sh: line 2: docker: command not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
However, if I issue the following command, it works:
/usr/local/bin/docker images
Question: how do I set up the PATH variable for docker so that I don't have to specify the full path to the docker binary?
I would suggest checking what the job's PATH variable is. In your execute shell script, add echo $PATH at the top, run the job again, and look in the console output for the result of that echo command to see whether /usr/local/bin is in the PATH. If not, you should probably modify the PATH in the global Jenkins configuration: Jenkins -> Manage Jenkins -> Configure System -> under Global Properties, check Environment Variables, add a PATH variable, and make sure it contains /usr/local/bin (together with all the other paths). For testing purposes, you can run export PATH=$PATH:/usr/local/bin at the top of your shell script to see if the docker command runs.
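For instance, a quick check at the top of the Execute Shell step might look like this (a sketch, assuming docker lives in /usr/local/bin as in the question):
echo "$PATH"                        # confirm whether /usr/local/bin is already listed
export PATH="$PATH:/usr/local/bin"  # temporary fix for this build only
docker images                       # should now resolve without the full path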
This is what worked for me:
# enables extras
sudo yum-config-manager --enable rhui-REGION-rhel-server-extras
# installs docker
sudo yum -y install docker-ce
# starts docker
sudo systemctl start docker
# test run to check that docker is installed
sudo docker run hello-world
# enables docker to start up on boot
sudo systemctl enable docker.service
With these commands, the above error didn't occur.
So I'm working with a slightly strange infrastructure: I have an OpenShift Container Platform that runs a Jenkins image from Docker, using the image openshift3/jenkins-2-rhel7.
I'm trying to run docker build . commands within a Jenkins pipeline and I'm getting a "Cannot connect to the Docker daemon" error. I don't understand why docker is installed on the machine yet not running, and I don't currently have access to the OpenShift server other than via the CLI and the console. Does anyone have recommendations on how to get the docker build . command to run successfully for Jenkins, either with or without utilizing slaves?
node("master"){
withEnv(["PATH=${tool 'docker'}/bin:${env.PATH}"]) {
docker.withRegistry( 'dockertest') {
git url: "https://github.com/mydockertag/example.git", credentialsId: 'dockertest'
stage "build"
sh "docker build -t mydockertag/example -f ./Dockerfile ."
stage "publish"
}
}
After running the build command, I get the following error:
+ docker build -t mydockertag/example -f ./Dockerfile .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the
docker daemon running?
There can be two reasons for the error "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?":
1. Docker is running, but the user executing the docker command does not have privileges to talk to /var/run/docker.sock. Try using sudo docker build. If you do not wish to use sudo every time, you can add your user to the docker group by following the post-installation steps here (https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user).
2. The docker daemon is not running at all. You will have to start the docker daemon manually.
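For the second case, a minimal sketch on a systemd-based host (assuming you have shell access to the machine that runs Docker):
sudo systemctl start docker    # start the daemon now
sudo systemctl enable docker   # optionally start it on every boot
docker info                    # verify the client can reach the daemon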
By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.
Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image:
RUN useradd -g root -G sudo -u 1001 user && \
    chown -R user:root /some/directory && \
    chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory
# Specify the user with UID
USER 1001
Refer to the section "Support Arbitrary User IDs" in the OpenShift guidelines.
I'm provisioning a Docker CentOS image with Packer and using bash scripts instead of a Dockerfile to configure the image (this seems to be the Packer way). What I can't seem to figure out is how to update the PATH variable so that my custom binaries can be executed like this:
docker run -i -t <container> my_binary
I have tried putting a .sh file in the /etc/profile.d/ folder and also writing directly to /etc/environment, but none of that seems to take effect.
I suspect it has something to do with which shell Docker uses when executing commands in a disposable container. I thought it was the Bourne shell, but as mentioned earlier, neither the /etc/profile.d/ nor the /etc/environment approach worked.
UPDATE:
As I understand now, it is not possible to change environment variables in a running container, for the reasons explained in @tgogos' answer. However, I don't believe this is an issue in my case, since after Packer is done provisioning the image, it commits it and uploads it to Docker Hub. A more accurate example would be as follows:
$ docker run -itd --name test centos:6
$ docker exec -it test /bin/bash
[root@006a9c3195b6 /]# echo 'echo SUCCESS' > /root/test.sh
[root@006a9c3195b6 /]# chmod +x /root/test.sh
[root@006a9c3195b6 /]# echo 'export PATH=/root:$PATH' > /etc/profile.d/my_settings.sh
[root@006a9c3195b6 /]# echo 'PATH=/root:$PATH' > /etc/environment
[root@006a9c3195b6 /]# exit
$ docker commit test test-image:1
$ docker exec -it test-image:1 test.sh
I expect to see SUCCESS printed, but instead I get:
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"test.sh\": executable file not found in $PATH": unknown
UPDATE 2
I have updated PATH in ~/.bashrc, which lets me execute the following:
$ docker run -it test-image:1 /bin/bash
[root@8f821c7b9b82 /]# test.sh
SUCCESS
However, running docker run -it test-image:1 test.sh still results in
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: ...
I can confirm that my image CMD is set to "/bin/bash". So can someone explain why running docker run -it test-image:1 test.sh doesn't source ~/.bashrc?
A few good points are mentioned at:
How to set an environment variable in a running docker container (also check the link to the relevant github issue).
and Docker - Updating Environment Variables of a Container
where @BMitch mentions:
Destroy your container and start a new one up with the new environment variable using docker run -e .... It's identical to changing an environment variable on a running process, you stop it and restart with a new value passed in.
and in the comments section, he adds:
Docker doesn't provide a way to modify an environment variable in a running container because the OS doesn't provide a way to modify an environment variable in a running process. You need to destroy and recreate.
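In this question's terms, that recreate approach would look roughly like the sketch below (the PATH value is only illustrative):
docker rm -f test   # destroy the old container
docker run --rm -e PATH=/root:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
    test-image:1 test.sh   # the PATH passed with -e is used to resolve test.sh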
update: (see the comments section)
You can use
docker commit --change "ENV PATH=your_new_path_here" test test-image:1
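After the commit, one way to confirm the new value was baked in (a sketch; the PATH above is a placeholder you would replace with a real value):
docker inspect -f '{{.Config.Env}}' test-image:1   # shows the image's ENV, including the new PATH
docker run --rm test-image:1 test.sh               # ENV baked into the image applies to docker run, unlike ~/.bashrc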
/etc/profile is only read by bash when invoked by a login shell.
For more information about which files are read by bash on startup see this article.
EDIT: If you change the last line in your example to
docker exec -it test bash -lc test.sh
it works as you expect.
First of all, I couldn't find the answer here on SO (this is the closest post).
I have EC2 running Ubuntu. First I installed Jenkins, and then Docker.
It's not "DonD".
My project has a Jenkinsfile, in which I'm running some docker commands.
It's supposed to use a docker container (gradle), share a volume, and build the project.
The final .war will be on the host file system.
The problem is that gradle inside the container can't write to the host's folder.
Here's my Jenkinsfile (one of countless tries):
#!/usr/bin/groovy
node {
    checkout scm
    stage 'Gradle'
    sh 'sudo docker run --rm -v "$PWD":/api -w /api gradle gradle clean build --stacktrace'
}
The important line from the stacktrace:
Caused by: org.gradle.api.UncheckedIOException: Failed to create parent directory '/api/.gradle' when creating directory '/api/.gradle/4.3/fileHashes'
Solved!
I started a container with the -v argument and logged into it:
$ docker run -it -v "$PWD":/api -w /api gradle bash
Then I tried to check the current user:
$ whoami
# gradle
So, the solution is to run the container as root:
sh 'docker run -u root --rm -v "$PWD":/api -w /api gradle gradle clean build'
This is root inside the container only; Jenkins itself doesn't need root access.