CloudBees Docker Plugin - "?" Folder - Jenkins

I'm using CloudBees Docker Plugin 1.9 together with Jenkins 2.25 to build my project within Docker containers.
Jenkins itself is also running under Docker 1.12.2 on Ubuntu 14.04.
The JENKINS_HOME directory is mounted as a volume, so every job, workspace, etc. is available under the user "ubuntu" on the host system.
When running a job with the CloudBees Docker Plugin, it creates a "?" folder in the workspace containing various hidden directories (e.g. .oracle_jre_usage, .m2, .gradle).
Can anybody explain what part/plugin of the Jenkins job creates this folder, and why it is named "?"?

I encountered a similar issue when mounting a source folder into a Maven container as the WORKDIR for the build.
The JRE seems to take WORKDIR/$(id -un) as the home directory (${user.home} in the settings) and creates those folders there.
The '?' is probably the result of failing to resolve the host's UID inside the container, since I ran the container as the host user with docker run --rm -u $(id -u):$(id -g) ....
I was able to modify apache-maven/conf/settings.xml to change the path of .m2 and persist the cache on another host mount. However, due to this issue, .oracle_jre_usage will always be created to log the timestamp.
The solution was probably to not set WORKDIR to the workspace, so that ${user.home} points to /?/, which is removed along with the container.
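A quick way to see this unresolved-UID behaviour in action (a hedged sketch; maven:3-jdk-8 is just an example image, and the /etc/passwd mount is a common workaround rather than something from the original answer):
# With the host's UID but no matching passwd entry in the container,
# name lookups fail, which is what degrades into the "?" folder name;
# this typically errors with something like: cannot find name for user ID <uid>
docker run --rm -u $(id -u):$(id -g) maven:3-jdk-8 id -un
# One common workaround: mount the host's passwd file read-only
# so the UID resolves to a real name and home directory
docker run --rm -u $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro maven:3-jdk-8 id -un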


Docker - Volumes inside volumes?

I'm trying to start the Docker Jenkins container (jenkins/jenkins:2.218-jdk11) in such a way that if you create a new Jenkins container, all plugins are already installed and you can skip the installation.
For this I have a pre-populated /var/jenkins_home (where all of Jenkins' data is located) that Docker uses on startup, which works. This home directory doesn't need to be persistent and is only needed for the first startup, so it can be copied into the container.
Inside /var/jenkins_home there are two directories which must be persistent. I have already tried, with my own Dockerfile, to overwrite /var/jenkins_home with my own version, which didn't work (maybe a wrong configuration?). Do you have any tips on how I could implement such a concept?
You can mount volumes for just the directories you want to persist:
$ docker run -v /path/to/persistence:/var/jenkins_home/dir_persisted jenkins-image:2.218
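If the goal (as in the question) is to bake the pre-installed home into the image itself, one possible sketch relies on the official image's /usr/share/jenkins/ref mechanism, which seeds /var/jenkins_home on first startup; the directory names jobs and secrets below are placeholders for whichever two directories must persist:
FROM jenkins/jenkins:2.218-jdk11
# Content under /usr/share/jenkins/ref/ is copied into /var/jenkins_home
# by the official image's entrypoint on first startup
COPY --chown=jenkins:jenkins jenkins_home/ /usr/share/jenkins/ref/
Then mount volumes only for the directories that need to survive container recreation:
docker run -v jenkins_jobs:/var/jenkins_home/jobs -v jenkins_secrets:/var/jenkins_home/secrets my-jenkins:2.218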

How to move Jenkins to another server/container without installing jenkins

I want to move Jenkins from one server to another server/container (Docker).
This is not about Jenkins data migration.
Jenkins needs to be migrated without installing Jenkins on the target machine. I mean, I have to copy all the installation files required for running the Jenkins service.
I know there are some entries in the /bin/ folders.
Other than that, I have moved almost all the Jenkins configuration-related files
/usr/share/jenkins
/var/lib/jenkins
/etc/init.d/jenkins
from the source machine to the target machine, but when starting the Jenkins service, Jenkins fails to start.
Is there any way to do this?
Thanks in advance for your help.
Normally it is not good practice to move an executable file from one machine to another, as it is not only the executable that needs to move; there is also configuration related to the executable stored on the underlying machine.
You can check this post: portability-of-an-executable-to-another-linux-machine
The best way is to make an image, something similar to an Amazon AMI: a read-only filesystem image that includes an operating system (e.g., Linux, Unix, or Windows) and any additional software required to deliver a service or a portion of it, and restore that image on the target machine.
In the case of a container, you do not install anything; just install Docker on the target machine and run the command below.
docker run --name myjenkins -p 8080:8080 -p 50000:50000 jenkins:alpine
As containers are already portable, build the image once and run it anywhere.
Bonus with containers:
jenkins:alpine is the official Jenkins image; it is just 160 MB, and the underlying base image is just 5 MB.
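If the target machine has no access to a registry, the image can also be moved as a tarball with the standard save/load commands (a sketch; the tarball file name is arbitrary):
docker save jenkins:alpine -o jenkins-alpine.tar
# copy jenkins-alpine.tar to the target machine, then:
docker load -i jenkins-alpine.tar
docker run --name myjenkins -p 8080:8080 -p 50000:50000 jenkins:alpine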

'docker cp' from docker container to jenkins workspace during build

Is there a way I can copy files from a Docker container to the Jenkins workspace while tests are running, i.e. not as a pre- or post-build step?
Currently Docker runs on a server within the organisation, and when I kick off a Jenkins job (a Maven project), it runs the tests within the above container.
During the tests, files are downloaded, and I would like to be able to access those files in the Jenkins workspace during execution. So I tried the following as part of my code:
docker cp [containerName]:/home/seluser/Downloads /var/jenkins_home/jobs/[jobName]/workspace
But the files don't get copied over to the workspace. I have also tried doing this locally, i.e. getting the files copied to a directory on my laptop:
docker cp [containerName]:/home/seluser/Downloads /Users/[myUsername]/testDownloads
and it worked. Is there something I'm missing regarding how to do this for the Jenkins workspace?
Try adding /. so that the contents of the Downloads directory are copied, rather than the directory itself:
docker cp [containerName]:/home/seluser/Downloads/. /var/jenkins_home/jobs/[jobName]/workspace/
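As an "Execute shell" build step, this might look like the sketch below, using Jenkins' built-in $WORKSPACE variable instead of a hard-coded job path (the container name is a placeholder):
# copy the contents of Downloads into the current job's workspace
docker cp [containerName]:/home/seluser/Downloads/. "$WORKSPACE/"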

Can I move a Docker container that includes Jenkins setups to another server?

I have a Jenkins setup in a Docker container on my local computer.
Can I move it to a company's CI server and reuse the job items?
I tried this on my local computer:
docker commit
docker push
On the CI server:
docker pull
docker run
However, when I ran Jenkins on the CI server, Jenkins was initialized from scratch.
How can I carry over all the configurations and job items using Docker?
As described in the docs for the commit command:
The commit operation will not include any data contained in volumes mounted inside the container.
The Jenkins home is mounted as a volume, so when you commit the container, the Jenkins home won't be committed. Therefore, none of the job configuration that is currently on the running local container will be part of the committed image.
Your problem reduces to how to migrate the jenkins_home volume from your machine to another machine. This problem is solved, and you can find the solution here.
I would suggest, however, a better and more scalable approach, specifically for Jenkins. The problem with the first approach is that quite a bit of manual intervention is needed whenever you want to start a similar Jenkins instance on a new machine.
The solution is as follows:
Commit the container that is currently running.
Copy the job configuration out of the container using: docker cp [containerName]:/var/jenkins_home/jobs ./jobs. This copies the job config from the running container onto your machine. Remember to clean the build folders.
Create a Dockerfile that inherits from the committed image and copies the job config under the jenkins_home.
Push the image, and you should have an image that you can pull that has all the jobs configured correctly.
The Dockerfile will look something like this:
FROM <committed-image>
COPY jobs/* /var/jenkins_home/jobs/
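The surrounding steps might look like the following sketch (container and image names such as myjenkins and myregistry/... are placeholders, not from the original answer):
# 1. Commit the running container to an image
docker commit myjenkins myregistry/jenkins-base:1
# 2. Copy the job configuration out of the running container
docker cp myjenkins:/var/jenkins_home/jobs ./jobs
# 3. Build the Dockerfile above and push the result
docker build -t myregistry/jenkins-with-jobs:1 .
docker push myregistry/jenkins-with-jobs:1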
You need to check how the Jenkins image (hub.docker.com/r/jenkins/jenkins/) was launched on your local computer: if it was mounting a local volume, that volume should include the JENKINS_HOME with all the job configurations and plugins.
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
You need to export that volume too, not just the image.
See for instance "Docker & Jenkins: Data that Persists", using a data volume container that you can then export/import.
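One common way to export such a volume is the tar-through-a-helper-container pattern (a sketch, assuming the volume is named jenkins_home; names and paths are placeholders):
# On the source machine: archive the volume's contents
docker run --rm -v jenkins_home:/var/jenkins_home -v "$(pwd)":/backup alpine tar czf /backup/jenkins_home.tar.gz -C /var/jenkins_home .
# Copy jenkins_home.tar.gz to the target machine, then restore it:
docker run --rm -v jenkins_home:/var/jenkins_home -v "$(pwd)":/backup alpine tar xzf /backup/jenkins_home.tar.gz -C /var/jenkins_home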

Jenkins store workspace outside docker container

So I have a Jenkins master-slave setup, where the master spins up a Docker container (on the slave VM), builds the job inside that container, and then destroys the container when it's done. This is all done via Jenkins' Docker plugin.
Everything is running smoothly; the only problem is that after the job is done (a failed job), I cannot view the workspace (because the container is gone). I get the following error:
I've tried attaching a volume from the host (the slave VM) to the container so the files are also stored outside (which works because, as shown below, I can see the files on the host), and then tried mapping it to the master VM:
Here are my settings for that particular Docker image template:
Any help is greatly appreciated!
EDIT: I've managed to successfully get the workspace stored on the host. However, when the build is done I still get the same error (Error: no workspace). I have no idea how to make Jenkins look for the files on the host rather than in the container.
I faced the same issue, and the post above helped me figure out what was wrong in my environment configuration. But it took me a while to understand the logic of this sentence:
Ok, so the way I've solved this problem was to mount a dir on the container from the slave docker container, then using NFS (instructions are shown below) I've mounted that slave docker container onto the Jenkins master.
So I decided to clarify it and write some more explanatory examples...
Here is my working environment:
Jenkins master, running on an Ubuntu 16.04 server, IP: 192.168.1.111
Jenkins slave build server, running on an Ubuntu 16.04 server, IP: 192.168.1.112
Docker-enabled build server (for spinning up Docker containers), running on an Ubuntu 16.04 server, IP: 192.168.1.114
Problem statement: the "Workspace" is not available in the Jenkins interface when running a project in a Docker container.
The goal: be able to browse the "Workspace" in the Jenkins interface as well as on the Jenkins master/slave and Docker host servers.
Well, my problem started with a different issue, which led me to find this post and figure out what was wrong in my environment configuration...
Let's assume you have a working Jenkins/Docker environment with a properly configured Jenkins Docker Plugin. For the first run I did not configure anything under the "Container settings..." option of the Jenkins Docker Plugin. Everything went smoothly: the job completed successfully, and obviously I was not able to browse the job workspace, since the Jenkins Docker Plugin by design destroys the Docker container after the job finishes.
So far so good... I needed to save the Docker workspace in order to be able to review files or fix issues when a job failed. To do so, I mapped host/path on the host to container/path in the container using "Volumes" in the "Container settings..." option of the Jenkins Docker Plugin:
I ran the same job again, and it failed with the following error message in Jenkins:
After spending some time learning how the Jenkins Docker Plugin works, I figured out that the reason for the error above is wrong permissions on the Docker host server (192.168.1.114) on the automatically created "workspace" folder:
So, from here we have to give the "other" group write permission on this folder. Setting the jenkins@192.168.1.114 user as the owner of the workspace folder is not enough, since we need the jenkins@192.168.1.111 user to be able to create sub-folders under the workspace folder on the 192.168.1.114 server. (In my case I have a jenkins user on the Jenkins master server, 192.168.1.111, as well as a jenkins user on the Docker host server, 192.168.1.114.)
ssh jenkins@192.168.1.114
cd /home/jenkins
sudo chmod o+w workspace
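You can confirm the resulting mode with ls -ld; assuming the folder started with the usual 755 permissions, the "other" block should now show write access (something like drwxr-xrwx):
ls -ld /home/jenkins/workspace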
Now everything works properly again: Jenkins spins up the Docker container, and while the container is running, the workspace is available in the Jenkins interface:
But it disappears when the job is finished...
Some may say there is no problem here, since all the files from the container are now saved under the workspace directory on the Docker host server (we mapped the folders in the Jenkins Docker Plugin settings)... and that is right! All the files are here:
/home/jenkins/workspace/"JobName"/ on Docker Host Server (192.168.1.114)
But under some circumstances, people want to be able to browse the job workspace directly from the Jenkins interface...
So, from here I followed the link from Fadi's post on how to set up NFS shares.
Reminder, the goal is: to be able to browse the Docker jobs' workspace directly from the Jenkins interface...
What I did on the Docker host server (192.168.1.114):
1. sudo apt-get install nfs-kernel-server nfs-common
2. sudo nano /etc/exports
# Share docker slave containers workspace with Jenkins master
/home/jenkins/workspace 192.168.1.111(rw,sync,no_subtree_check,no_root_squash)
3. sudo exportfs -ra
4. sudo /etc/init.d/nfs-kernel-server restart
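You can check that the export took effect directly on the Docker host; exportfs with no share arguments lists the currently active exports:
sudo exportfs -v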
This allows the Docker host server's (192.168.1.114) /home/jenkins/workspace folder to be mounted on the Jenkins master server (192.168.1.111).
On Jenkins Master server:
1. sudo apt-get install nfs-client nfs-common
2. sudo mount -o soft,intr,rsize=8192,wsize=8192 192.168.1.114:/home/jenkins/workspace/ /home/jenkins/workspace/<JobName>/
Now the 192.168.1.114:/home/jenkins/workspace folder is mounted and visible under the /home/jenkins/workspace/"JobName"/ folder on the Jenkins master.
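If you want this mount to survive reboots, an equivalent /etc/fstab entry on the master might look like this (same options as the manual mount above; <JobName> remains a placeholder):
192.168.1.114:/home/jenkins/workspace /home/jenkins/workspace/<JobName> nfs soft,intr,rsize=8192,wsize=8192 0 0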
So far so good... I ran the job again and faced the same behavior: while the Docker container is still running, users can browse the workspace from the Jenkins interface, but when the job finishes I get the same error: "... no workspace". Even though I can now browse the job files on the Jenkins master server itself, it is still not what was desired...
BTW, if you need to unmount the workspace directory on the Jenkins master server, use the following command:
sudo umount -f -l /home/jenkins/workspace/<mountpoint>
Read more about NFS:
How to configure an NFS server and mount NFS shares on Ubuntu 14.10
The workaround for this issue is to install the Multijob Plugin in Jenkins and add a new job that uses the Multijob Plugin options:
In my case I also moved all Docker-related jobs to run on the slave build server (192.168.1.112). So on this server I installed the NFS-related stuff exactly as on the Jenkins master server, and also added some configuration on the Docker host server (192.168.1.114):
ssh jenkins@192.168.1.114
sudo nano /etc/exports
# Share docker slave containers workspace with build-server
/home/jenkins/workspace 192.168.1.112(rw,sync,no_subtree_check,no_root_squash)
Additionally, on the Jenkins slave server (192.168.1.112) I ran the following:
1. sudo apt-get install nfs-client nfs-common
2. sudo mount -o soft,intr,rsize=8192,wsize=8192 192.168.1.114:/home/jenkins/workspace/ /home/jenkins/workspace/<JobName>/
After the above configuration was done, I ran a new job in Jenkins and finally got what I wanted: I can use the Workspace option directly from the Jenkins interface.
Sorry for the long post... I hope it was helpful for you.
Ok, so the way I've solved this problem was to mount a dir on the container from the slave docker container, then using NFS (instructions are shown below) I've mounted that slave docker container onto the Jenkins master.
So my config looks like this:
I followed this answer to mount the dir over NFS:
https://superuser.com/questions/300662/how-to-mount-a-folder-from-a-linux-machine-on-another-linux-machine/300703#300703
One small note: the IP address provided in that answer (which you will have to put in /etc/exports) is the local machine's (or in my case, the Jenkins master's) IP address.
I hope this answer helps you out!
