I'm running a CentOS 8 Linux container, and I want to continuously update a .jar file inside that container.
To update that .jar I want to set up a job in Jenkins, which runs on a separate host (not as a Docker container).
So basically, I think I want to create a job that would SSH from Jenkins to the Docker container and apply the changes to the .jar file. Is this possible?
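Something along these lines is what I have in mind; a very rough sketch (the host, user, container, and path names are all made up), assuming the Jenkins job can SSH to the host the container runs on:
node {
    stage('Update jar in container') {
        // Assumes an SSH key for the Docker host is already configured for the Jenkins user;
        // "app" is a made-up container name and the paths are examples.
        sh '''
            scp target/app.jar user@docker-host:/tmp/app.jar
            ssh user@docker-host "docker cp /tmp/app.jar app:/opt/app/app.jar && docker restart app"
        '''
    }
}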
Related
I currently run a Jenkins instance inside a Docker container. I've been doing some experimenting with Jenkins and its pipelines, and I managed to get my Maven app building successfully using a Jenkinsfile.
Right now I am trying to automatically deploy the app I built to a Docker container that's a sibling to the Jenkins container. I have already mounted /var/run/docker.sock, so I have access to the host's Docker daemon. But I can't seem to find any good information to guide me through the next part, which is modifying the Jenkinsfile to deploy my app in a new container.
How would I go about running my Maven app inside a sibling Docker container?
It might be more appropriate to spin up a slave node (a physical or virtual box) and then run something in there with docker.
If you check your instance URL https://jenkins.address/job/myjob/pipeline-syntax/
you will find more information about the commands you need for this.
Anyway, the best way to do this is to actually create a Dockerfile, copy the artifact into the image as a build step, push the image to a registry, and then deploy the freshly built image somewhere.
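As a sketch only (the image name, registry URL, and credentials id below are made up), the pipeline part of that could use the Docker Pipeline plugin's docker.build and docker.withRegistry steps:
stage('Build and push image') {
    // Assumes a Dockerfile in the repo that COPYs the built artifact (e.g. target/myapp.jar) into the image
    def image = docker.build("myorg/myapp:${env.BUILD_NUMBER}")
    docker.withRegistry('https://registry.example.com', 'registry-credentials-id') {
        image.push()
    }
}
Deploying is then just a matter of running that freshly pushed tag on the target host or orchestrator.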
I have a Jenkins setup in a docker container in my local computer.
Can I move it to a company's CI server and re-use job items?
This is what I tried on my local computer:
docker commit
docker push
On the CI server:
docker pull
docker run
However, when I ran Jenkins on the CI server, it came up freshly initialized.
How can I carry over all the configuration and job items using Docker?
As described in the docs for the commit command
The commit operation will not include any data contained in volumes
mounted inside the container.
The Jenkins home is mounted as a volume, so when you commit the container the Jenkins home won't be committed. Therefore all the job configuration that is currently in the running local container won't be part of the committed image.
Your problem reduces to how to migrate the jenkins_home volume from your machine to another machine. This problem is solved, and you can find the solution here.
However, I would suggest a better and more scalable approach, specifically for Jenkins. The problem with the first approach is that quite a bit of manual intervention is needed whenever you want to start a similar Jenkins instance on a new machine.
The solution is as follows:
Commit the container that is currently running
Copy the job configuration that is inside the container using the command docker cp <container>:/var/jenkins_home/jobs ./jobs. This copies the job config from the running container onto your machine. Remember to clean the build folders.
Create a Dockerfile that inherits from the committed image and copies the job config into the jenkins_home.
Push the image, and you should have an image that you can pull and that will have all the jobs configured correctly.
The Dockerfile will look something like:
FROM <committed-image>
COPY jobs/* /var/jenkins_home/jobs/
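Put together, the whole flow could look roughly like this (the container, image, and registry names are placeholders):
docker commit my-jenkins my-jenkins-base
docker cp my-jenkins:/var/jenkins_home/jobs ./jobs
rm -rf ./jobs/*/builds            # clean the build folders
docker build -t myregistry/my-jenkins:latest .
docker push myregistry/my-jenkins:latest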
You need to check how the Jenkins image (hub.docker.com/r/jenkins/jenkins/) was launched on your local computer: if it was mounting a local volume, that volume should include the JENKINS_HOME with all the job configurations and plugins.
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
You need to export that volume too, not just the image.
See for instance "Docker & Jenkins: Data that Persists", using a data volume container that you can then export/import.
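As a minimal sketch of that export/import (assuming the container was started with --name jenkins and the jenkins_home named volume from the run command above):
# On the local machine: archive the jenkins_home volume into a tarball
docker run --rm --volumes-from jenkins -v $(pwd):/backup busybox \
    tar czf /backup/jenkins_home.tar.gz /var/jenkins_home
# On the CI server: restore the archive into a fresh jenkins_home volume
docker run --rm -v jenkins_home:/var/jenkins_home -v $(pwd):/backup busybox \
    tar xzf /backup/jenkins_home.tar.gz -C /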
I'm having issues getting a Jenkins pipeline script to work that uses the Docker Pipeline plugin to run parts of the build within a Docker container. Both Jenkins server and slave run within Docker containers themselves.
Setup
Jenkins server running in a Docker container
Jenkins slave based on custom image (https://github.com/simulogics/protokube-jenkins-slave) running in a Docker container as well
Docker daemon container based on docker:1.12-dind image
Slave started like so: docker run --link=docker-daemon:docker --link=jenkins:master -d --name protokube-jenkins-slave -e EXTRA_PARAMS="-username xxx -password xxx -labels docker" simulogics/protokube-jenkins-slave
Basic Docker operations (pull, build and push images) are working just fine with this setup.
(Non-)Goals
I want the server to not have to know about Docker at all. This should be a characteristic of the slave/node.
I do not need dynamic allocation of slaves or ephemeral slaves. One slave started manually is quite enough for my purposes.
Ideally, I want to move away from my custom Docker image for the slave and instead use the inside function provided by the Docker pipeline plugin within a generic Docker slave.
Problem
This is a representative build step that's causing the issue:
image.inside {
    stage ('Install Ruby Dependencies') {
        sh "bundle install"
    }
}
This would cause an error like this in the log:
sh: 1: cannot create /workspace/repo_branch-K5EM5XEVEIPSV2SZZUR337V7FG4BZXHD4VORYFYISRWIO3N6U67Q#tmp/durable-98bb4c3d/pid: Directory nonexistent
Previously, this warning would show:
71f4de289962-5790bfcc seems to be running inside container 71f4de28996233340c2aed4212248f1e73281f1cd7282a54a36ceeac8c65ec0a
but /workspace/repo_branch-K5EM5XEVEIPSV2SZZUR337V7FG4BZXHD4VORYFYISRWIO3N6U67Q could not be found among []
Interestingly enough, exactly this problem is described in the CloudBees documentation for the plugin at https://go.cloudbees.com/docs/cloudbees-documentation/cje-user-guide/index.html#docker-workflow-sect-inside:
For inside to work, the Docker server and the Jenkins agent must use the same filesystem, so that the workspace can be mounted. The easiest way to ensure this is for the Docker server to be running on localhost (the same computer as the agent). Currently neither the Jenkins plugin nor the Docker CLI will automatically detect the case that the server is running remotely; a typical symptom would be errors from nested sh commands such as
cannot create /…#tmp/durable-…/pid: Directory nonexistent
or negative exit codes.
When Jenkins can detect that the agent is itself running inside a Docker container, it will automatically pass the --volumes-from argument to the inside container, ensuring that it can share a workspace with the agent.
Unfortunately, the detection described in the last paragraph doesn't seem to work.
Question
Since both my server and slave are running in Docker containers, what kind of volume mapping do I have to use to make it work?
I've seen variations of this issue, also with the agents powered by the kubernetes-plugin.
I think that for it to work the agent/jnlp container needs to share workspace with the build container.
By build container I am referring to the one that will run the bundle install command.
This could possibly work via withArgs.
The question is why would you want to do that? Most of the pipeline steps are being executed on master anyway and the actual build will run in the build container. What is the purpose of also using an agent?
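A rough sketch of that idea with the Docker Pipeline plugin, assuming the agent itself runs in a container whose hostname equals its container id (the Docker default), so the build container can share its workspace volume:
node('docker') {
    stage('Install Ruby Dependencies') {
        // Hostname inside a container defaults to the container id (assumption)
        def agentContainer = sh(returnStdout: true, script: 'hostname').trim()
        docker.image('ruby:2.6').inside("--volumes-from ${agentContainer}") {
            sh 'bundle install'
        }
    }
}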
Basic example of what I want my Jenkinsfile to do:
node {
    sh 'docker build -t foo/bar .'
}
It seems like I need to install docker onto the Jenkins slave image that's executing my Jenkinsfile. Is there an easy way of doing this? (That Jenkins slave image is itself a docker container)
Are my assumptions correct?
When running with Jenkins master/slaves, the Jenkinsfile is executed by a Jenkins slave
Jenkins plugins installed via the Manage Plugins section (e.g. the Docker Plugin, or Gcloud SDK plugin) are only installed on the Jenkins master, so I would need to manually build my Jenkins slave Docker image and install Docker on that image?
Since I also need access to the 'gcloud' command (I'm running Jenkins via Kubernetes Helm/Charts), I've been using the gcr.io/cloud-solutions-images/jenkins-k8s-slave image for my Jenkins slave.
Currently it errors out saying "docker: not found"
My assumption is that you want to run docker build inside the Jenkins slave (which is a Kubernetes pod, I assume created by the Kubernetes Jenkins Plugin).
To set the stage: when Kubernetes creates a pod that will act as a Jenkins slave, all commands that you execute inside the node will be executed inside that Kubernetes pod, inside one of its containers (by default there will only be one container, but more on this later).
So you are actually trying to run a Docker command inside a container based on gcr.io/cloud-solutions-images/jenkins-k8s-slave, which is most likely based on the official Jenkins JNLP Slave, which does not contain Docker!
From this point forward, there are two approaches that you can take:
Use a slightly modified image based on the JNLP slave that also contains the Docker client, and mount the Docker socket (/var/run/docker.sock) inside the container.
(You can find details on this approach here).
Here is an image that contains the Docker client and kubectl.
Here is a complete view of how to configure the Jenkins Plugin:
Note that you use a different image (you can create your own and add any binary you want there) and that you mount the Docker socket inside the container.
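A pipeline-style sketch of this first approach with the Kubernetes plugin (the slave image name is made up; the essential parts are the custom jnlp container and the mounted Docker socket):
podTemplate(label: 'docker-build', containers: [
        containerTemplate(name: 'jnlp',
                image: 'my-registry/jnlp-slave-with-docker:latest',   // hypothetical image with the Docker client baked in
                args: '${computer.jnlpmac} ${computer.name}')
    ],
    volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')]) {
    node('docker-build') {
        sh 'docker build -t foo/bar .'
    }
}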
The problem with the first approach is that you create a new image forked from the official JNLP slave and manually add the Docker client. This means that whenever Jenkins or Docker has updates, you need to manually update your image and your entire configuration, which is not that desirable.
Using the second approach you always use official images, and you use the JNLP slave to start other containers in the same pod.
Here is the full file from the image below
Here is the Jenkins Plugin documentation for doing this
As I said, the JNLP image will start a container that you specify in the same pod. Note that in order to use Docker from a container you still need to mount the Docker socket.
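A sketch of this second approach: the pod keeps the official jnlp container and adds an official docker client container with the socket mounted, and the build step switches into that container:
podTemplate(label: 'docker-build', containers: [
        containerTemplate(name: 'docker', image: 'docker:18.09', command: 'cat', ttyEnabled: true)
    ],
    volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')]) {
    node('docker-build') {
        container('docker') {
            sh 'docker build -t foo/bar .'
        }
    }
}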
These are the two ways I found to achieve building images inside a Jenkins JNLP slave running inside a container.
The example also shows how to push the image using credential bindings from Jenkins, and how to update a Kubernetes deployment as part of the build process.
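As a rough sketch of what those last steps can look like (the credentials id, registry, and deployment names here are made up, not taken from the linked example):
stage('Push and deploy') {
    withCredentials([usernamePassword(credentialsId: 'docker-registry-creds',
            usernameVariable: 'REG_USER', passwordVariable: 'REG_PASS')]) {
        sh 'docker login -u "$REG_USER" -p "$REG_PASS" my-registry.example.com'
        sh "docker push my-registry.example.com/myapp:${env.BUILD_NUMBER}"
    }
    sh "kubectl set image deployment/myapp myapp=my-registry.example.com/myapp:${env.BUILD_NUMBER}"
}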
Some more resources:
deploy Jenkins to Kubernetes as Helm chart, configure plugins to install
Thanks,
Radu M
So I have a Jenkins master-slave setup, where the master spins up a Docker container (on the slave VM) and builds the job inside that container, then destroys the container after it's done. This is all done via Jenkins' Docker plugin.
Everything is running smoothly; the only problem is that after the job is done (a failed job), I cannot view the workspace, because the container is gone. I get the following error:
I've tried attaching a volume from the host (slave VM) to the container so that the files are also stored outside the container (which works, because, as shown below, I can see the files on the host), and then tried mapping it to the master VM:
Here's my settings for that particular docker image template:
Any help is greatly appreciated!
EDIT: I've managed to successfully get the workspace to be stored on the host. However, when the build is done I still get the same error (Error: no workspace). I have no idea how to make Jenkins look for the files that are on the host rather than in the container.
I faced the same issue, and the post above helped me figure out what was wrong in my environment configuration. But it took a while for me to understand the logic of this sentence:
Ok, so the way I've solved this problem was to mount a dir on the container from the slave docker container, then using NFS (Instructions are shown below) I've mounted that slave docker container onto the Jenkins master.
So I decided to clarify it and write up a more detailed explanation with examples...
Here is my working environment:
Jenkins master running on Ubuntu 16.04 server, IP: 192.168.1.111
Jenkins Slave Build Server running on Ubuntu 16.04 server, IP: 192.168.1.112
Docker Enabled Build Server (for spinning up docker containers), running on Ubuntu 16.04 server, IP: 192.168.1.114.
Problem statement: the "Workspace" is not available in the Jenkins interface when running a project in a docker container.
The goal: be able to browse the "Workspace" in the Jenkins interface as well as on the Jenkins master/slave and docker host servers.
Well, my problem started with a different issue, which led me to find this post and figure out what was wrong in my environment configuration...
Let's assume you have a working Jenkins/Docker environment with a properly configured Jenkins Docker Plugin. For the first run I did not configure anything under the "Container settings..." option in the Jenkins Docker Plugin. Everything went smoothly - the job completed successfully, and obviously I was not able to browse the job workspace, since the Jenkins Docker Plugin by design destroys the docker container after finishing the job. So far so good... I need to save the docker workspace in order to be able to review files or fix issues when a job fails. To do so, I've mapped host/path on the host to container/path in the container using "Volumes" in the "Container settings..." option of the Jenkins Docker Plugin:
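For reference, the mapping in the "Volumes" field is a docker-style host:container binding, something like (paths are examples):
/home/jenkins/workspace:/home/jenkins/workspace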
I ran the same job again, and it failed with the following error message in Jenkins:
After spending some time learning how the Jenkins Docker Plugin works, I figured out that the reason for the error above is wrong permissions on the Docker host server (192.168.1.114) on the "workspace" folder, which is created automatically:
So, from here we have to give the "other" group write permission on this folder. Setting the jenkins@192.168.1.114 user as owner of the workspace folder will not be enough, since we need the jenkins@192.168.1.111 user to be able to create sub-folders under the workspace folder on the 192.168.1.114 server. (In my case I have a jenkins user on the Jenkins master server - 192.168.1.111 - as well as a jenkins user on the Docker host server - 192.168.1.114.)
ssh jenkins@192.168.1.114
cd /home/jenkins
sudo chmod o+w workspace
Now everything works properly again: Jenkins spins up the docker container, and while the container is running, the workspace is available in the Jenkins interface:
But it disappears when the job is finished...
Some may say there is no problem here, since all the files from the container are now saved under the workspace directory on the docker host server (we mapped the folders in the Jenkins Docker Plugin settings)... and this is right! All the files are here:
/home/jenkins/workspace/"JobName"/ on Docker Host Server (192.168.1.114)
But under some circumstances, people want to be able to browse the job workspace directly from the Jenkins interface...
So, from here I followed the link from Fadi's post on how to set up NFS shares.
As a reminder, the goal is: to be able to browse the docker jobs' workspace directly from the Jenkins interface...
What I did on docker host server (192.168.1.114):
1. sudo apt-get install nfs-kernel-server nfs-common
2. sudo nano /etc/exports
# Share docker slave containers workspace with Jenkins master
/home/jenkins/workspace 192.168.1.111(rw,sync,no_subtree_check,no_root_squash)
3. sudo exportfs -ra
4. sudo /etc/init.d/nfs-kernel-server restart
This will allow the /home/jenkins/workspace folder on the Docker host server (192.168.1.114) to be mounted on the Jenkins master server (192.168.1.111).
On Jenkins Master server:
1. sudo apt-get install nfs-client nfs-common
2. sudo mount -o soft,intr,rsize=8192,wsize=8192 192.168.1.114:/home/jenkins/workspace/ /home/jenkins/workspace/<JobName>/
Now the 192.168.1.114:/home/jenkins/workspace folder is mounted and visible under the /home/jenkins/workspace/"JobName"/ folder on the Jenkins master.
So far so good... I ran the job again and faced the same behavior: while docker is still running, users can browse the workspace from the Jenkins interface, but when the job finishes, I get the same error "... no workspace". Even though I can now browse the job files on the Jenkins master server itself, it is still not what was desired...
BTW, if you need to unmount the workspace directory on the Jenkins master server, use the following command:
sudo umount -f -l /home/jenkins/workspace/<<mountpoint>>
Read more about NFS:
How to configure an NFS server and mount NFS shares on Ubuntu 14.10
The workaround for this issue is installing the Multijob Plugin on Jenkins and adding a new job that uses the Multijob Plugin options:
In my case I also transferred all docker-related jobs to run on the Slave Build Server (192.168.1.112). So on this server I installed the NFS-related stuff, exactly as on the Jenkins master server, and also added some configuration on the Docker host server (192.168.1.114):
ssh jenkins@192.168.1.114
sudo nano /etc/exports
# Share docker slave containers workspace with build-server
/home/jenkins/workspace 192.168.1.112(rw,sync,no_subtree_check,no_root_squash)
In addition, on the Jenkins slave server (192.168.1.112) I ran the following:
1. sudo apt-get install nfs-client nfs-common
2. sudo mount -o soft,intr,rsize=8192,wsize=8192 192.168.1.114:/home/jenkins/workspace/ /home/jenkins/workspace/<JobName>/
After the above configuration was done, I ran a new job on Jenkins and finally got what I wanted: I can use the Workspace option directly from the Jenkins interface.
Sorry for the long post... I hope it was helpful for you.
Ok, so the way I've solved this problem was to mount a dir on the container from the slave docker container, then using NFS (Instructions are shown below) I've mounted that slave docker container onto the Jenkins master.
So my config looks like this:
I followed this answer to mount dir as NFS:
https://superuser.com/questions/300662/how-to-mount-a-folder-from-a-linux-machine-on-another-linux-machine/300703#300703
One small note is that the IP address provided in that answer (the one that you will have to put in /etc/exports) is the local machine's IP address (or in my case, the Jenkins master's IP address).
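For example, the line in /etc/exports on the machine whose folder is being shared would look something like this (with the Jenkins master's address in place of the placeholder):
/home/jenkins/workspace <jenkins-master-ip>(rw,sync,no_subtree_check)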
I hope this answer helps you out!