'docker cp' from docker container to jenkins workspace during build

Is there a way I can copy files from a docker container to the Jenkins workspace while tests are running, i.e. not as a pre- or post-build step?
Currently docker is running on a server within the organisation, and when I kick off a Jenkins job (a Maven project), it runs tests within the above container.
During the tests, files are downloaded, and I would like to be able to access those files in the Jenkins workspace during execution. So I tried the following as part of my code:
docker cp [containerName]:/home/seluser/Downloads /var/jenkins_home/jobs/[jobName]/workspace
But the files don't get copied over to the workspace. I have also tried doing this locally, i.e. copying the files to a directory on my laptop:
docker cp [containerName]:/home/seluser/Downloads /Users/[myUsername]/testDownloads
and it worked. Is there something I'm missing regarding how to do this for the Jenkins workspace?

Try adding /. to the source path, so the directory's contents are copied rather than the directory itself:
docker cp [containerName]:/home/seluser/Downloads/. /var/jenkins_home/jobs/[jobName]/workspace/
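In a freestyle job this can go in an "Execute shell" build step; Jenkins exports $WORKSPACE for the running build, so there is no need to hard-code the /var/jenkins_home/jobs/[jobName]/workspace path. A minimal sketch, with a placeholder container name:

```shell
# Sketch of an "Execute shell" build step; CONTAINER is a placeholder name.
CONTAINER=selenium-node
DEST="$WORKSPACE/downloads"
mkdir -p "$DEST"
# The trailing /. copies the directory's contents, not the directory itself.
docker cp "$CONTAINER":/home/seluser/Downloads/. "$DEST/"
```

Note that this only works if the Jenkins process can reach the docker daemon that runs the container (same host, or a mounted docker socket).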

Related

Jenkins Auto Deployment

Is there any way to make a script in Jenkins such that, when it is called or triggered, Jenkins automatically takes the last WAR of my project and deploys it to a specific container?
This differs from "Deploy to container", which is called after the build as a post-step.
Yes, it is possible: you can just cp from your build directory to your deployment directory.
In my case, when I run my container I bind a volume mount with the -v parameter; when the Jenkins job builds, it generates the package and I move it to the mounted docker volume.
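A minimal post-build sketch of that approach; the artifact name and the deploy directory (assumed to be the host path that is bind-mounted into the container with -v) are hypothetical:

```shell
# Hypothetical paths: adjust to your job layout and your container's -v mount.
WAR="$WORKSPACE/target/myapp.war"   # artifact produced by the Maven build
DEPLOY_DIR=/srv/deploy/webapps      # host dir mounted into the container via -v
cp "$WAR" "$DEPLOY_DIR/"
```

Because the directory is bind-mounted, the container sees the new WAR immediately; whether it is hot-deployed depends on the application server inside the container.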

Nmake can't locate files within a docker mount

I have a docker container I use to compile a project, build a setup and export it out of the container. For this I mount the checked-out sources (using $(Build.SourcesDirectory):C:/git/ in the volumes section of the TFS docker run task) and an output folder into 2 different folders. Now my project contains a submodule which is also correctly checked out; all the files are there. However, when my script executes nmake I get the following error:
Cannot find file: \ContainerMappedDirectories\347DEF6A-D43B-48C0-A5DF-CE228E5A10FD\src\Submodule\Submodule.pro
where the mapped container path corresponds to C:/git/ inside the Windows docker container (running on a Windows host). I was able to start the docker container with an interactive PowerShell, mount the folder, and find out the following:
All the files are there in the container.
When doing docker cp project/ container:C:/test/ and running my build script it finds all the files and compiles successfully.
When copying the mounted project within docker with PowerShell and starting the build script, it also works.
So it seems nmake has trouble traversing a mounted volume within docker. Any idea how to fix this? I'd rather avoid copying the project into the container, because that takes quite some time compared to simply mounting the checked-out project.
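There is no accepted fix here, but the copy-based workaround the question already confirms working can be scripted; the container name is a placeholder, and the paths are the ones from the question:

```shell
# Workaround sketch: copy the sources into the container's own filesystem
# and build there, instead of building from the mounted path.
CONTAINER=buildct                        # placeholder container name
docker cp project/ "$CONTAINER":C:/test/
docker exec "$CONTAINER" cmd /c "cd C:\test\project && nmake"
```

The extra copy costs time, as noted, but it sidesteps whatever path resolution nmake fails at inside the mapped directory.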

How do I checkout a repository inside a Docker container when using a Jenkinsfile?

I've currently managed to set up a Jenkins file to run my PHP analysis inside a container. Here's the gist of it:
node {
    stage "Prepare environment"
    checkout scm
    def environment = docker.build 'platforms-base'
    environment.inside {
        stage "Update Dependencies"
        sh "composer install || true"
    }
}
When this runs, it seems that the repo is checked out onto the host machine, and that directory is mounted into the docker container at the /app directory. (I haven't found any configuration for this, so I'm not sure where it is defined.)
In the same container, I have Magento2 installed in the /var/magento2 directory. I need to run some tests inside the scope of the Magento2 codebase. This means that I need to checkout the current branch into a particular directory in the container at /var/magento2/vendor/myorg/mypackage/ but the checked out repo seems to be at /app.
How do I checkout a repo in a certain place inside the container?
/app will be mounted inside your container because you (or the Dockerfile you're using) mount the current directory as /app.
Now, your code is already checked out just fine by Jenkins, so I wouldn't do a separate extra checkout from inside the docker container. Jenkins has just checked out the correct branch/master/tag for you with the proper credentials, so there is no need to do it yourself.
You've got three basic choices:
Mount the current directory not as /app, but as /var/magento2/vendor/myorg/mypackage/
Mount the current directory as both /app and /var/magento2/vendor/myorg/mypackage/. You could perhaps even do this with an extra command line option instead of modifying the Dockerfile. (This very much depends on your development setup).
Make /var/magento2/vendor/myorg/mypackage/ a symlink to /app, that will probably work, too.
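Option 2 can be sketched as a plain docker run; the image name comes from the Jenkinsfile above, and the final ls is only there to show the checked-out package is visible at the second mount point:

```shell
# Bind the checked-out workspace at both paths (a sketch; adjust as needed).
docker run --rm \
  -v "$WORKSPACE":/app \
  -v "$WORKSPACE":/var/magento2/vendor/myorg/mypackage \
  platforms-base \
  ls /var/magento2/vendor/myorg/mypackage
```

Bind-mounting the same host directory at two container paths is allowed; both paths see the same files, so the Magento tree picks up the package without a second checkout.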

Cloudbees Docker Plugin - "?" Folder

I'm using the CloudBees Docker Plugin 1.9 together with Jenkins 2.25 to build my project within Docker containers.
Jenkins itself is also running under Docker 1.12.2 on Ubuntu 14.04.
The JENKINS_HOME directory is mounted as a volume, so every job, workspace etc. is available under user "ubuntu" on the host system.
When running a job with the CloudBees Docker Plugin, it creates a "?" folder in the workspace containing various hidden directories (e.g. .oracle_jre_usage, .m2, .gradle etc.).
Can anybody explain what part / plugin of the Jenkins job creates this folder and why it is named "?"?
I encountered a similar issue when mounting a source folder into a Maven container as the WORKDIR for a build.
The JRE seems to take WORKDIR/$(id -un) as the home directory (${user.home} in the settings) and creates those folders.
The '?' is probably the result of failing to resolve the host's UID inside the container, which I passed in with docker run --rm -u $(id -u):$(id -g) ....
I was able to modify apache-maven/conf/settings.xml to change the path of .m2 so the cache persists on another host mount. However, due to this issue, .oracle_jre_usage will always be created and log the timestamp.
The solution was probably to not set WORKDIR to the workspace, so that ${user.home} will point to /?/, which will be removed with the container.
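Putting the answer's pieces together, a sketch of such a build invocation (image and goal are placeholders): run as the host UID:GID as described, but invoke the build by absolute path instead of setting WORKDIR to the mounted workspace, so the JRE's home-directory guess lands inside the throwaway container rather than in the mount:

```shell
# Run as the host UID:GID (per the answer above), but without WORKDIR on the
# mounted workspace, so the stray dot-folders stay inside the container.
# Image and Maven goal are placeholders.
docker run --rm \
  -u "$(id -u):$(id -g)" \
  -v "$WORKSPACE":/build \
  maven:3-jdk-8 \
  mvn -f /build/pom.xml package
```

Files written to /build are then owned by the host user, which also avoids root-owned leftovers in the Jenkins workspace.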

Jenkins store workspace outside docker container

So I have a Jenkins master-slave setup, where the master spins up a docker container (on the slave VM), builds the job inside that container, then destroys the container after it's done. This is all done via the Jenkins Docker plugin.
Everything is running smoothly; the only problem is that after the job is done (a failed job), I cannot view the workspace (because the container is gone). I get the following error:
I've tried attaching a "volume" from the host (slave VM) to the container to also store the files outside (which works because, as shown below, I can see files on the host) and then tried mapping it to the master VM:
Here's my settings for that particular docker image template:
Any help is greatly appreciated!
EDIT: I've managed to successfully get the workspace to store on the host. However, when the build is done I still get the same error ("Error: no workspace"). I have no idea how to make Jenkins look for the files that are on the host rather than in the container.
I faced the same issue, and the other answer here helped me figure out what was wrong in my environment configuration. But it took me a while to understand the logic of this sentence:
Ok, so the way I've solved this problem was to mount a dir on the
container from the slave docker container, then using NFS
(instructions are shown below) I've mounted that slave docker
container onto the Jenkins master.
So I decided to clarify it and write some more explanatory examples...
Here is my working environment:
Jenkins master running on Ubuntu 16.04 server, IP: 192.168.1.111
Jenkins Slave Build Server running on Ubuntu 16.04 server, IP: 192.168.1.112
Docker Enabled Build Server (for spinning up docker containers), running on Ubuntu 16.04 server, IP: 192.168.1.114.
Problem statement: the "Workspace" is not available in the Jenkins interface when running a project in a docker container.
The goal: be able to browse the "Workspace" in the Jenkins interface as well as on the Jenkins master/slave and docker host servers.
Well, my problem started with a different issue, which led me to find this post and figure out what was wrong in my environment configuration...
Let's assume you have a working Jenkins/Docker environment with a properly configured Jenkins Docker Plugin. For the first run I did not configure anything under the "Container settings..." option of the Jenkins Docker Plugin. Everything went smoothly: the job completed successfully, and obviously I was not able to browse the job workspace, since the Jenkins Docker Plugin by design destroys the docker container after finishing the job. So far so good... I needed to save the docker workspace in order to be able to review files or fix issues when a job failed. To do so, I mapped host/path on the host to container/path in the container using "Volumes" under the "Container settings..." option of the Jenkins Docker Plugin:
I ran the same job again, and it failed with the following error message in Jenkins:
After spending some time learning how the Jenkins Docker Plugin works, I figured out that the reason for the error above is wrong permissions on the Docker host server (192.168.1.114) on the automatically created "workspace" folder:
So, from here we have to give the "other" group write permission on this folder. Setting the jenkins@192.168.1.114 user as owner of the workspace folder is not enough, since we need the jenkins@192.168.1.111 user to be able to create sub-folders under the workspace folder on the 192.168.1.114 server. (In my case I have a jenkins user on the Jenkins master server, 192.168.1.111, as well as a jenkins user on the Docker host server, 192.168.1.114.)
ssh jenkins@192.168.1.114
cd /home/jenkins
sudo chmod o+w workspace
Now everything works properly again: Jenkins spins up the docker container and, while it is running, the Workspace is available in the Jenkins interface:
But it disappears when the job is finished...
Some might say there is no problem here, since all the files from the container are now saved under the workspace directory on the docker host server (we mapped the folders in the Jenkins Docker Plugin settings)... and this is right! All the files are here:
/home/jenkins/workspace/"JobName"/ on Docker Host Server (192.168.1.114)
But under some circumstances, people want to be able to browse the job workspace directly from the Jenkins interface...
So, from here I followed the link from Fadi's post on how to set up NFS shares.
Reminder, the goal is: be able to browse the docker jobs' workspace directly from the Jenkins interface...
What I did on docker host server (192.168.1.114):
1. sudo apt-get install nfs-kernel-server nfs-common
2. sudo nano /etc/exports
# Share docker slave containers workspace with Jenkins master
/home/jenkins/workspace 192.168.1.111(rw,sync,no_subtree_check,no_root_squash)
3. sudo exportfs -ra
4. sudo /etc/init.d/nfs-kernel-server restart
This will allow the Docker host server's (192.168.1.114) /home/jenkins/workspace folder to be mounted on the Jenkins master server (192.168.1.111).
On the Jenkins master server:
1. sudo apt-get install nfs-client nfs-common
2. sudo mount -o soft,intr,rsize=8192,wsize=8192 192.168.1.114:/home/jenkins/workspace/ /home/jenkins/workspace/<JobName>/
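A mount issued this way does not survive a reboot; an /etc/fstab entry on the Jenkins master (a sketch reusing the same addresses and mount options as above) makes it persistent:

```
# /etc/fstab on the Jenkins master (192.168.1.111)
192.168.1.114:/home/jenkins/workspace  /home/jenkins/workspace  nfs  soft,intr,rsize=8192,wsize=8192  0  0
```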
Now the 192.168.1.114:/home/jenkins/workspace folder is mounted and visible under the /home/jenkins/workspace/"JobName"/ folder on the Jenkins master.
So far so good... I ran the job again and faced the same behavior: while the docker container is still running, users can browse the workspace from the Jenkins interface, but when the job finished I got the same "... no workspace" error. Even though I can now browse the job files on the Jenkins master server itself, this is still not what was desired...
BTW, if you need to unmount the workspace directory on the Jenkins master server, use the following command:
sudo umount -f -l /home/jenkins/workspace/<<mountpoint>>
Read more about NFS:
How to configure an NFS server and mount NFS shares on Ubuntu 14.10
The workaround for this issue is to install the Multijob Plugin in Jenkins and add a new job that uses the Multijob Plugin options:
In my case I also moved all docker-related jobs to run on the Slave Build Server (192.168.1.112). So on this server I installed the NFS-related stuff, exactly as on the Jenkins master server, and added some configuration on the Docker host server (192.168.1.114):
ssh jenkins@192.168.1.114
sudo nano /etc/exports
# Share docker slave containers workspace with build-server
/home/jenkins/workspace 192.168.1.112(rw,sync,no_subtree_check,no_root_squash)
In addition, on the Jenkins slave server (192.168.1.112) I ran the following:
1. sudo apt-get install nfs-client nfs-common
2. sudo mount -o soft,intr,rsize=8192,wsize=8192 192.168.1.114:/home/jenkins/workspace/ /home/jenkins/workspace/<JobName>/
After the above configuration was done, I ran a new job in Jenkins and finally got what I wanted: I can use the Workspace option directly from the Jenkins interface.
Sorry for the long post... I hope it was helpful for you.
Ok, so the way I've solved this problem was to mount a dir on the container from the slave docker container, then using NFS (instructions are shown below) I've mounted that slave docker container onto the Jenkins master.
So my config looks like this:
I followed this answer to mount the dir over NFS:
https://superuser.com/questions/300662/how-to-mount-a-folder-from-a-linux-machine-on-another-linux-machine/300703#300703
One small note: the IP address provided in that answer (the one you will have to put in /etc/exports) is the local machine's (or in my case, the Jenkins master's) IP address.
I hope this answer helps you out!
