I'm transitioning my current Jenkins server to Docker. Following the guide on GitHub, https://github.com/jenkinsci/docker, I was able to successfully launch Jenkins with the command:
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
I'm not sure how to view/access the data in my container/volume through a file explorer. Is it only accessible through docker inspect? The guide on GitHub says I should avoid using a bind mount from a folder on the host machine into /var/jenkins_home. Is there another way to view and access my Jenkins jobs?
As you can see in the Jenkins CI Dockerfile source code, /var/jenkins_home is declared as a VOLUME.
That means it can be mounted on the host. Your command mounts a named docker volume to it, but you could also mount a path on your host.
For example:
docker run -p 8080:8080 -p 50000:50000 -v ~/jenkins_home:/var/jenkins_home jenkins/jenkins:lts
On Windows hosts, you might have to create the directory first.
You can change ~/jenkins_home to whatever suits your host environment, as long as it is a folder that you can easily navigate and inspect.
You can also still use the web interface, available on the port that you map on the host.
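Since the question also asks whether the data is only reachable via docker inspect: with a named volume you can locate the backing directory on the host (on Linux hosts; with Docker Desktop the path lives inside the VM), assuming your volume is called jenkins_home:

docker volume inspect jenkins_home
# the "Mountpoint" field shows where the data lives,
# typically /var/lib/docker/volumes/jenkins_home/_data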
If you want to see the data on the local host file system, you can use a bind mount instead of a volume; it will sync all the data from the jenkins_home folder to your local host file system. For example:
docker run -p 8080:8080 \
  --name jenkins \
  --mount type=bind,source="$(pwd)"/jenkins_home,target=/var/jenkins_home \
  jenkins/jenkins
For more clarification on bind mounts and volumes, please follow this link:
https://docs.docker.com/storage/bind-mounts/
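A quick sanity check once the container is up (same paths as in the example above; the jobs folder only appears after Jenkins finishes initializing):

ls "$(pwd)"/jenkins_home/jobs
# your job definitions should show up here and stay in sync with the container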
I am using Docker for Windows, and wanted to mount a host directory with files I would want to use in RStudio in a container for a bioconductor image. To mount the host directory I have used
docker run -d -v //c/Users/myR:/home/rstudio/myR2 -e PASSWORD=password -p 8787:8787 bioconductor/bioconductor_docker:devel
When I open the RStudio interface in the web browser I can see that the directory myR2 is created, but it is empty. I have read that I should first share the host directory from Settings > Share folders, but I do not see this option in the Docker version I use (4.5.1). Any help? Thanks!
I am playing around with docker and ran into an issue when mounting docker volumes with --mount instead of -v. It appears to me that the error popping up is not valid, but probably I am missing a small detail here.
The path to which I want to bind the created volume in the container is seen as not absolute in the --mount scenario.
I am running Docker on a Windows 10 machine.
I pulled the jenkins/jenkins:lts image and want to spin up 2 containers that use the same configuration. As said before I use this just to play around with docker, and am exploring how the volume system works.
What I did is create a docker volume that is used to share the configuration.
docker volume create jenkins_cfg
Then I tried to run 2 containers. The first container started with:
docker run -d -p 8081:8080 --name jenkins2 -v jenkins_cfg:/var/jenkins_home jenkins/jenkins:lts
Which works fine.
The second container started with:
docker run -d -p 8085:8080 --name jenkin5 --mount source=jenkins_cfg,target=var/jenkins_home jenkins/jenkins:lts
This results in the error
"C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: invalid mount config for type "volume": invalid mount path: 'var/jenkins_home' mount path must be absolute.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'."
Also /var/jenkins_home does not work properly.
Since -v accepts the same target folder, I would assume that this folder would also work in the target option of --mount. Probably I am overlooking something here...
I figured out that the target folder should be preceded by //
so the docker command would look like:
docker run -d -p 8085:8080 --name jenkin5 --mount source=jenkins_cfg,target=//var/jenkins_home jenkins/jenkins:lts
Still no clue why // has to be added; maybe someone can clarify that one.
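The likely culprit is Git Bash (MSYS) path conversion: it rewrites POSIX-style paths such as /var/jenkins_home into Windows paths before docker ever sees them, and a leading // suppresses that rewrite. If Git Bash is your shell, a sketch of an alternative workaround (assuming Git for Windows) is to disable the conversion for the command:

MSYS_NO_PATHCONV=1 docker run -d -p 8085:8080 --name jenkin5 \
  --mount source=jenkins_cfg,target=/var/jenkins_home jenkins/jenkins:lts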
Bind mounts mount a part of the host's physical disk into the container, while volumes are managed by Docker: you can't browse a volume independently of a container, but a bind mount can be accessed directly on the host.
Your bind mount source should be an absolute path on your host.
Hope this helps your cause.
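To make the distinction concrete, a minimal sketch (the names and paths are illustrative):

# named volume: managed by docker, browse it through a container
docker volume create mydata
docker run --rm -v mydata:/data alpine ls /data
# bind mount: an absolute host path you can open directly in your file explorer
docker run --rm -v /home/me/mydata:/data alpine ls /data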
I have Jenkins running inside a container and the project source code on GitHub.
I need to run the project in a container on the same host as Jenkins, but not as docker-in-docker; I want to run them as sibling containers.
My pipeline looks like this:
pull the source from GitHub
build the project image
run the project container
What I do right now is use the host's docker socket from the Jenkins container:
/var/run/docker.sock:/var/run/docker.sock
The problem comes when the Jenkins container mounts the volume with the source code from /var/jenkins_home/workspace/BRANCH_NAME into the project container:
volumes:
- ./servers/identity/app:/srv/app
I am getting an empty folder "/srv/app" in the project container.
My best guess is that docker tries to mount it from the host and not from the Jenkins container.
So the question is: how can I explicitly set the container from which I mount the volume?
I got the same issue when using a Jenkins docker container to run another container.
Scenario 1 - Running a container inside the Jenkins docker container
This is not a recommended way; the explanation goes here. If you still need to use this approach, then this problem is not a problem.
Scenario 2 - Running the docker client inside the Jenkins container
Suppose we need to run another container (ContainerA) from inside the Jenkins docker container; the docker pipeline plugin will use --volumes-from to mount the Jenkins container's volume into ContainerA.
If you try to use --volume or -v to map a specific directory in the Jenkins container to ContainerA, you will get unexpected behavior.
That's because --volume or -v tries to map directories from the host to ContainerA, rather than from directories inside the Jenkins container. If the directories are not found on the host, you will get an empty dir inside ContainerA.
In short, we cannot map a specific directory from containerA to containerB; we can only mount the whole set of volumes from containerA into containerB, and volume aliases are not supported.
Solution
If your Jenkins is running with a host volume (a bind mount), you can map the host directories into the target container.
Otherwise, you can access the files inside the newly created container at the same location they have in the Jenkins container.
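A minimal sketch of that last pattern (the container name jenkins and the image my-build-image are assumptions for illustration):

# reuse all of the Jenkins container's volumes in the sibling container,
# then read the workspace at the same path it has inside Jenkins
docker run --rm --volumes-from jenkins my-build-image \
  ls /var/jenkins_home/workspace/BRANCH_NAME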
Try:
docker run -d --volumes-from <ContainerID> <YourImage>
where <ContainerID> is the ID of the container you want to mount data from.
You can also create a volume with:
docker volume create <volname>
and assign it to both containers:
volumes:
- <volname>:/srv/app
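For example, with plain docker run (the volume and image names here are illustrative):

docker volume create shared_app
docker run -d --name app1 -v shared_app:/srv/app my-project-image
docker run -d --name app2 -v shared_app:/srv/app my-project-image
# both containers now see the same /srv/app contents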
Sharing the socket between the host and Jenkins was my problem, because "/var/jenkins_home" is most likely a volume for the Jenkins container.
My solution was installing docker inside a systemd container without sharing the socket.
docker run -d --name jenkins \
--restart=unless-stopped \
--privileged \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v jenkins-vol:/var/lib/jenkins \
--tmpfs /run \
--tmpfs /run/lock \
ubuntu:16.04 /sbin/init
Then install Jenkins, Docker and Docker Compose on it.
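For example, a rough sketch of those follow-up steps (assuming the Ubuntu 16.04 container from the command above):

docker exec -it jenkins bash
# inside the container: install Docker via the convenience script,
# then install Jenkins and Docker Compose per their official docs
apt-get update && apt-get install -y curl
curl -fsSL https://get.docker.com | sh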
In Docker I have installed Jenkins successfully. When I create a new job and would like to execute a sh file from my workspace, what is the best way to add a file to my workspace with Docker? I started my container with this:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
You could copy a file from your file system to the container with a simple command from your terminal.
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
https://docs.docker.com/engine/reference/commandline/cp/
example:
docker cp /yourpath/yourfile <containerId>:/var/jenkins_home
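docker cp also works in the other direction, e.g. to pull your job configurations out of the container for inspection:

docker cp <containerId>:/var/jenkins_home/jobs ./jobs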
It depends a bit on the planned lifecycle of your Jenkins container. If it is just used temporarily and it does no harm if the data is gone, docker cp as NickGnd suggested will do the trick.
But since the working data of Jenkins (job configs, system configs, and workspaces) only lives inside the container, all of it will be gone once the container is removed. So if you plan to have a longer-running Jenkins environment, you might want to persist the data outside of the container so that it survives recreating the container, launching new container versions, and so on. This can be done with the option --volume /path/on/host:/path/in/container or its short form -v on docker run.
There is also the option --volumes-from, which you can use to keep the data in one "data container" and mount it into your Jenkins container.
For further information on this, please have a look at the Docker volumes documentation.
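A rough sketch of that data-container pattern (the container names are illustrative):

# create a container that only holds the volume
docker create -v /var/jenkins_home --name jenkins-data jenkins/jenkins:lts
# run Jenkins with the volumes of the data container
docker run -d -p 8080:8080 -p 50000:50000 --volumes-from jenkins-data --name myjenkins jenkins/jenkins:lts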
I have a dockerized web application that I'm running in an HA setup. I have a cron job that runs dockup every midnight to back up my important information stored in other containers. Now I would like to back up and aggregate the logs from my web application too. The problem is: how do I do that? If I use the VOLUME key in the Dockerfile to expose /logs to the host machine, wouldn't there be a collision, since there would be two /logs directories on the dockup container?
I have checked dockup. It does not have a /logs directory. Seems it uses /var/logs for log output.
$ docker run -it --name dockup borja/dockup bash
Otherwise, yes, it would be a problem, because the volume would be mounted under the mentioned name and the current container's processes would also log to that folder. Not good.
Use a logging container like fluentd. In this tutorial it also offers writing to S3 buckets, like dockup. The tutorial can be found here.
Tweak your container, e.g. with symbolic links, to log or relay the logs to a different volume.
Access the logs not through the containers but through native docker, and copy them to S3 yourself, or run dockup on your locally mounted log file.
$ docker logs container/name > logfile.log
$ docker run --rm \
--env-file env.txt \
-v $(pwd)/logfile.log:/customlogs/logfile.txt \
--name dockup borja/dockup
Now you can use the folder /customlogs/ as your backup path inside env.txt.
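For reference, a sketch of what env.txt might contain; the variable names below follow the borja/dockup README as I recall it, so double-check them against the image's documentation:

# env.txt (values are placeholders)
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
S3_BUCKET_NAME=my-backup-bucket
BACKUP_NAME=weblogs
PATHS_TO_BACKUP=/customlogs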