Jenkins Docker image, to use bind mounts or not? - docker

I am reading through this bit of the Jenkins Docker README, and there seems to be a section that contradicts itself, at least from my current understanding.
https://github.com/jenkinsci/docker/blob/master/README.md
It seems to me that it says NOT to use a bind mount, and then says that using a bind mount is highly recommended?
NOTE: Avoid using a bind mount from a folder on the host machine into /var/jenkins_home, as this might result in file permission
issues (the user used inside the container might not have rights to
the folder on the host machine). If you really need to bind mount
jenkins_home, ensure that the directory on the host is accessible by
the jenkins user inside the container (jenkins user - uid 1000) or use
-u some_other_user parameter with docker run.
docker run -d -v jenkins_home:/var/jenkins_home -p 8080:8080 -p
50000:50000 jenkins/jenkins:lts this will run Jenkins in detached mode
with port forwarding and volume added. You can access logs with
command 'docker logs CONTAINER_ID' in order to check first login
token. ID of container will be returned from output of command above.
Backing up data
If you bind mount in a volume - you can simply back up
that directory (which is jenkins_home) at any time.
This is highly recommended. Treat the jenkins_home directory as you would a database - in Docker you would generally put a database on
a volume.
Do you use bind mounts? Would you recommend them? Why or why not? The documentation seems to be ambiguous.

As commented, the syntax used is for a volume:
docker run -d -v jenkins_home:/var/jenkins_home --name jenkins ...
That defines a Docker volume named jenkins_home, which will be created in:
/var/lib/docker/volumes/jenkins_home.
The idea being that you can easily backup said volume:
$ mkdir ~/backup
$ docker run --rm --volumes-from jenkins -v ~/backup:/backup ubuntu bash -c "cd /var/jenkins_home && tar cvf /backup/jenkins_home.tar ."
And reload it to another Docker instance.
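Reloading it on another Docker instance is the reverse operation; a minimal sketch, assuming an empty jenkins_home volume on the target host:
# Create an empty volume on the target and unpack the backup into it
docker volume create jenkins_home
docker run --rm -v jenkins_home:/var/jenkins_home -v ~/backup:/backup \
  ubuntu bash -c "cd /var/jenkins_home && tar xvf /backup/jenkins_home.tar"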
This differs from a bind mount, which involves building a new Docker image, in order to be able to mount a local folder owned by your local user (instead of the default user defined in the official Jenkins image, 1000:1000):
FROM jenkins/jenkins:lts-jdk11
USER root
ENV JENKINS_HOME /var/lib/jenkins
ENV COPY_REFERENCE_FILE_LOG=/var/lib/jenkins/copy_reference_file.log
RUN groupmod -g <yourGid> jenkins
RUN usermod -u <yourUid> jenkins
RUN mkdir "${JENKINS_HOME}"
RUN usermod -d "${JENKINS_HOME}" jenkins
RUN chown jenkins:jenkins "${JENKINS_HOME}"
VOLUME /var/lib/jenkins
USER jenkins
Note that you have to declare a new volume (here /var/lib/jenkins), because, as seen in jenkinsci/docker issue 112, the official /var/jenkins_home path is already declared as a VOLUME in the official Jenkins image, and you cannot chown or chmod it.
The advantage of that approach would be to see the content of Jenkins home without having to use Docker.
You would run it with:
docker run -d -p 8080:8080 -p 50000:50000 \
--mount type=bind,source=/my/local/host/jenkins_home_dev1,target=/var/lib/jenkins \
--name myjenkins \
myjenkins:lts-jdk11-2.190.3
sleep 3
docker logs --follow --tail 10 myjenkins
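For completeness, the custom image referenced in that run command has to be built first from the Dockerfile above; a minimal sketch, assuming you substitute your own ids for the <yourUid>/<yourGid> placeholders:
id -u && id -g   # the values to substitute into the Dockerfile
docker build -t myjenkins:lts-jdk11-2.190.3 .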

Related

Docker volume is empty

When using the -v switch, the files from the container should be copied to the host volume, right? But it seems like the directory jenkins_home isn't created at all.
If I create the jenkins_home directory manually and then mount it, the directory is still empty.
I want to preserve the Jenkins configs so I can re-run the image later.
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:latest
If you docker run -v jenkins_home:... where the first half of the -v option has no slashes in it at all, that syntax creates a named Docker volume; it isn't a bind mount.
If you docker run -v "$PWD/jenkins_home:..." then that host directory is mounted over the corresponding container directory. At startup time, nothing is ever copied into the host directory; if the host directory is empty, that empty directory gets mounted into the container, hiding everything that was in the image.
If you use the docker run -v named-volume:... syntax, and the named volume is empty, then in this case only, and only the very first time the container is run, the contents of the image are copied into the named volume. This doesn't work for bind mounts, and it doesn't work if there is already data in the volume (perhaps from a previous docker run). It also does not work in other container environments such as Kubernetes. I do not recommend relying on this behavior.
Probably the easiest way to make this work is to launch a one-off container to export the contents of the image, and then use bind-mount syntax:
cd jenkins_home
# Run a throwaway container from the image (--rm cleans it up when done),
# with the working directory set to /var/jenkins_home (-w); write that
# directory's contents as a tar stream to stdout and unpack it on the host
docker run \
  --rm \
  -w /var/jenkins_home \
  jenkins/jenkins \
  tar cf - . \
| tar xf -
# Now launch the container as normal
docker run -d -p ... -v "$PWD:/var/jenkins_home" jenkins/jenkins
Figured it out.
Turned out that by default it creates the volume in /var/lib/docker/volumes/jenkins_home/ instead of in the current directory.
Also, I had tried docker volume create jenkins_home before running the docker image. So I'm not sure if it was the -v jenkins_home:/var/jenkins_home or the docker volume create that created the directory in /var/lib/docker/volumes/.
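For what it's worth, docker volume inspect confirms where Docker put a named volume:
docker volume inspect jenkins_home
# The "Mountpoint" field shows the host path, typically
# /var/lib/docker/volumes/jenkins_home/_data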

How to mount volume inside child docker created by parent docker sharing docker.sock

I am trying to create a wrapper container to build and run a set of containers using a docker-compose file I cannot modify. The docker-compose file mounts several volumes, but when starting the docker-compose from inside the wrapper container, the volumes are still mounted from the host, since the docker.sock is volume-mounted to be the host's docker.sock.
I would like to not have to use full docker-in-docker due to all the problems associated with it outlined in jpetazzo's article.
I would also like to avoid --volumes-from since I cannot edit the docker-compose file mentioned previously.
Is there a way to get this snippet to correctly use the parent docker's file instead of going to the host filesystem and mounting it from there?
FROM docker:latest
RUN mkdir -p /tmp/parent/ && echo "This is from the parent docker" > /tmp/parent/parent.txt
CMD docker run -v /tmp/parent/parent.txt:/root/parent.txt --rm ubuntu:18.04 bash -c "cat /root/parent.txt"
when run with a command akin to this:
docker build -t parent . && docker run --rm -v /var/run/docker.sock:/var/run/docker.sock parent
Make your paths the same on the host and inside of the docker image, e.g.
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-v /home/user:/home/user -w /home/user/project parent_image ...
By mounting the volume as /home/user in the same location inside the image, a command like docker-compose up with relative bind mounts will use the container path names when talking to the docker socket, which will match the paths on the host.
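Applied to the snippet above, a sketch of that idea: bind-mount a host directory into the wrapper at the same absolute path and create the file at runtime, so the path the inner docker run hands to the daemon actually exists on the host (the /tmp/parent path is illustrative):
FROM docker:latest
# Write the file at container start into a directory that is bind-mounted
# from the host at the identical path, so the sibling container can mount it
CMD echo "This is from the parent docker" > /tmp/parent/parent.txt && \
    docker run -v /tmp/parent/parent.txt:/root/parent.txt --rm ubuntu:18.04 \
    bash -c "cat /root/parent.txt"
Run with:
docker build -t parent . && docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /tmp/parent:/tmp/parent parent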

Exporting whole docker container for jenkins or just volume?

Create jenkins container and bind volume - jenkins-data
docker run --name myJenkins1 -p 8080:8080 -p 50000:50000 -v jenkins-data:/var/jenkins_home jenkins/jenkins:lts
make changes - update plugins, run builds etc
login to jenkins in browser etc
now export the whole container as a tar
docker export 2c8b996d3088 > jenkinsContainerAndVolume.tar
Since this includes the jenkins image, it seems quite large. I am going to need the jenkins image anyway, but wondered if there is a better practice or standard to save just the volume data?
The docker export command doesn't save the container's volumes.
To backup the named volume you could use tar like this:
docker run -v jenkins-data:/dbdata -v $(pwd):/backup ubuntu tar zcvf /backup/backup.tar.gz /dbdata
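Restoring that archive into a fresh volume is the symmetric operation; a sketch, assuming an empty jenkins-data volume on the target host (the archive stores its paths under dbdata/, so unpack from /):
docker run --rm -v jenkins-data:/dbdata -v $(pwd):/backup ubuntu \
  bash -c "cd / && tar xzvf /backup/backup.tar.gz"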
In case you need to migrate this container with all its volumes to another host I use this script:
https://github.com/ricardobranco777/docker-volumes.sh

Jenkins wrong volume permissions

I have a virtual machine hosting Oracle Linux where I've installed Docker and created containers using a docker-compose file. I placed the Jenkins volume under a shared folder, but when running docker-compose up I got the following error for Jenkins:
jenkins | touch: cannot touch ‘/var/jenkins_home/copy_reference_file.log’: Permission denied
jenkins | Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
jenkins exited with code 1
Here's the volumes declaration
volumes:
- "/media/sf_devops-workspaces/dev-tools/continuous-integration/jenkins:/var/jenkins_home"
The easy fix is to use the -u parameter. Keep in mind this will run as the root user (uid=0):
docker run -u 0 -d -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
As haschibaschi stated, your user in the container has a different userid:groupid than the user on the host.
The way around this is to start the container without the (problematic) volume mapping, then run bash in the container:
docker run -p 8080:8080 -p 50000:50000 -it jenkins /bin/bash
Once inside the container's shell run the id command and you'll get results like:
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
Exit the container, go to the folder you are trying to map and run:
chown -R 1000:1000 .
With the permissions now matching, you should be able to run the original docker command with the volume mapping.
The problem is that your user in the container has a different userid:groupid than the user on the host.
You have two possibilities:
You can ensure that the user in the container has the same userid:groupid as the user on the host that has access to the mounted volume. For this, adjust the user in the Dockerfile: create a user with the same userid:groupid and then switch to this user (see the sketch below). https://docs.docker.com/engine/reference/builder/#user
You can ensure that the user on the host has the same userid:groupid as the user in the container. For this, enter the container with docker exec -it <container-name> bash and look up the user id (id -u <username>) and group id (id -G <username>). Then change the permissions of the mounted volume to this userid:groupid.
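For the first possibility, a minimal Dockerfile sketch, assuming the host user's ids are 1001:1001 (substitute the output of id -u and id -g on your host):
FROM jenkins/jenkins:lts
USER root
# Re-map the existing jenkins user and group to the host user's ids
# (1001:1001 is illustrative)
RUN groupmod -g 1001 jenkins && usermod -u 1001 jenkins
USER jenkins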
You may be under SELinux. Running the container as privileged solved the issue for me:
sudo docker run --privileged -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
From https://docs.docker.com/engine/reference/commandline/run/#full-container-capabilities---privileged:
The --privileged flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do. This flag exists to allow special use-cases, like running Docker within Docker.
As an update of @Kiem's response: using $UID to ensure the container uses the same user id as the host, you can do this:
docker run -u $UID -d -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
I had a similar issue with Minikube/Kubernetes; I just added
securityContext:
  fsGroup: 1000
  runAsUser: 0
under deployment -> spec -> template -> spec
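In context, that lands in the Deployment manifest like this (abridged sketch; the container name and image are illustrative):
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 0
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts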
This error can be solved using the following commands.
Go to your Jenkins data mount path, /media, and fix the ownership:
cd /media
sudo chown -R ubuntu:ubuntu sf_devops-workspaces
Then restart the Jenkins docker container:
docker-compose restart jenkins
Had a similar issue on macOS. I had installed Jenkins using helm over Minikube/Kubernetes; after many attempts I fixed it by adding runAsUser: 0 (as root) in the values.yaml I use to deploy Jenkins:
master:
  usePodSecurityContext: true
  runAsUser: 0
  fsGroup: 0
Just be careful because that means that you will run all your commands as root.
Use this command:
$ chmod 757 /home/your-user/your-jenkins-data
First of all, you can verify your current user using the echo $USER command, and after that you can declare who the user is in the Dockerfile, as in the sketch below (in my case the user is root).
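A minimal sketch of that Dockerfile declaration (root here, per this answer):
FROM jenkins/jenkins:lts
# Run the container process as root (uid 0) so it can write to the mount
USER root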
I had the same issue; it got resolved after disabling SELinux.
Disabling SELinux is not recommended, though, so install a custom semodule and enable it instead.
That works; only changing the permissions won't work on CentOS 7.

Starting Jenkins in Docker Container

I want to run Jenkins in a Docker Container on Centos7.
I saw the official documentation of Jenkins:
First, pull the official jenkins image from Docker repository.
docker pull jenkins
Next, run a container using this image and map the data directory from the container to the host; e.g. in the example below, /var/jenkins_home from the container is mapped to the jenkins/ directory under the current path on the host. Jenkins' port 8080 is also exposed to the host as 49001.
docker run -d -p 49001:8080 -v $PWD/jenkins:/var/jenkins_home -t jenkins
But when I try to run the docker container I get the following error:
/usr/local/bin/jenkins.sh: line 25: /var/jenkins_home/copy_reference_file.log: Permission denied
Can someone tell me how to fix this problem?
The official Jenkins Docker image documentation says regarding volumes:
docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
This will store the jenkins data in /your/home on the host. Ensure that /your/home is accessible by the jenkins user in container (jenkins user - uid 1000) or use -u some_other_user parameter with docker run.
This information is also found in the Dockerfile.
So all you need to do is ensure that the directory $PWD/jenkins is owned by UID 1000:
mkdir jenkins
chown 1000 jenkins
docker run -d -p 49001:8080 -v $PWD/jenkins:/var/jenkins_home -t jenkins
The newest Jenkins documentation says to use Docker volumes.
Docker is a bit tricky here: with the -v option, a full path as the source means a bind mount, while a bare name means a volume.
docker run -d -p 49001:8080 -v jenkins-data:/var/jenkins_home -t jenkins
This command will create a docker volume named "jenkins-data" and you will no longer see the error.
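The distinction, side by side:
# Named volume: no slash in the source; Docker manages the storage
docker run -d -p 49001:8080 -v jenkins-data:/var/jenkins_home -t jenkins
# Bind mount: absolute path in the source; you manage the directory and its permissions
docker run -d -p 49001:8080 -v "$PWD/jenkins:/var/jenkins_home" -t jenkins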
Link to manage volumes:
https://docs.docker.com/storage/volumes/
