I am using the Kubernetes Jenkins plugin on an external master. The default disk size is limited to 10GB. Adding a PVC named jenkins-workspace mounts the disk, but it is created as root with 0755 permissions and does not give the jenkins user any access.
- Jenkins 2.104 master
- jenkinsci/jnlp slave
- kubernetes 1.7.4
- rhel 7.4
We have a customized jnlp slave, but I have even tried using the default that the plugin pulls in.
Can anybody point me to documentation or a related article that shows how to add privileges for the PVC mount, or how to dynamically add space after provisioning?
Our Jenkins master uses the Kubernetes cloud connection from Configure System, with a Kubernetes pod template that either points to the default jnlp image or uses a container template pointing to our customized jnlp slave in our local registry.
Cheers, Appreciate the help in advance
You cannot set the permissions of the mounted volumes (fsGroup) or the size of the PVC today. It is not implemented in https://github.com/jenkinsci/kubernetes-plugin/blob/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes/PersistentVolumeClaim.java, so it just uses the cluster defaults.
It will be possible using YAML once https://github.com/jenkinsci/kubernetes-plugin/pull/275 is implemented.
It is possible to change the permissions by using an init container: when defining the slave pod, include an init container that mounts the volume as root and changes its ownership to the jenkins user.
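For reference, here is a minimal sketch of that idea using the plugin's Pipeline DSL, assuming a plugin version where the yaml parameter (tracked in the PR above) is available. The busybox image, the 1000:1000 UID/GID and the /home/jenkins mount path are assumptions; adjust them to match your JNLP image and claim.

podTemplate(label: 'pvc-agent', yaml: """
apiVersion: v1
kind: Pod
spec:
  initContainers:
  # Runs as root, fixes ownership of the PVC mount before the agent starts.
  - name: fix-permissions
    image: busybox
    command: ['sh', '-c', 'chown -R 1000:1000 /home/jenkins']
    volumeMounts:
    - name: workspace
      mountPath: /home/jenkins
  containers:
  # Merged with the default jnlp container; only the extra mount is declared here.
  - name: jnlp
    volumeMounts:
    - name: workspace
      mountPath: /home/jenkins
  volumes:
  - name: workspace
    persistentVolumeClaim:
      claimName: jenkins-workspace
""") {
    node('pvc-agent') {
        // The jenkins user can now write to the PVC-backed directory.
        sh 'touch /home/jenkins/write-test && ls -la /home/jenkins'
    }
}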
While looking for a kubernetes equivalent of the docker-compose watchtower container, I stumbled upon renovate. It seems to be a universal tool to update docker tags, dependencies and more.
They also have an example of how to run the service itself inside Kubernetes, and I found this blog post on how to set Renovate up to check Kubernetes manifests for updates.
Now the puzzle piece that I'm missing is some super basic working example that updates a single pod's image tag, and then figuring out how to deploy that in a kubernetes cluster. I feel like there needs to be an example out there somewhere but I can't find it for the life of me.
To explain watchtower:
It monitors all containers running in a docker compose setup and pulls new versions of images once they are available, updating the containers in the process.
I found Keel, which looks like watchtower:
Kubernetes Operator to automate Helm, DaemonSet, StatefulSet & Deployment updates
Alternatively, there is Diun:
Docker Image Update Notifier is a CLI application written in Go and delivered as a single executable (and a Docker image) to receive notifications when a Docker image is updated on a Docker registry.
The Kubernetes provider allows you to analyze the pods of your Kubernetes cluster to extract images found and check for updates on the registry.
I think there is some confusion regarding what Renovate does.
Renovate updates files inside Git repositories, not objects on the Kubernetes API server.
The Kubernetes manager, which you are probably referencing, updates Kubernetes manifests, Helm charts and so on inside a Git repository.
I am using Kubernetes to spin up Jenkins slaves for my builds. I can get the plugin working without any issues.
Now I am trying to mount a volume using the plugin. After adding the volume information in the plugin, it's not even starting the container. I am not sure what is missing here.
In the Dockerfile, I have added this line:
VOLUME /home/myslave
In the pod template (under the Jenkins configuration) I have these volume settings:
Host path: /jenkins/slave
Mount path: /home/myslave
Thanks in advance.
You need to do three things to fix this issue (see the sketch after this list):
- Rename your container to jnlp in the Kubernetes plugin.
- Keep the JNLP agent configured correctly in your image's ENTRYPOINT.
- Leave "Command to run" and "Arguments to pass to the command" empty.
Basic example of what I want my Jenkinsfile to do:
node {
    sh 'docker build -t foo/bar .'
}
It seems like I need to install docker onto the Jenkins slave image that's executing my Jenkinsfile. Is there an easy way of doing this? (That Jenkins slave image is itself a docker container)
Are my assumptions correct?
When running with Jenkins master/slaves, the Jenkinsfile is executed by a Jenkins slave
Jenkins plugins installed via the Manage Plugins section (e.g. the Docker plugin or the Gcloud SDK plugin) are only installed on the Jenkins master, so I would need to manually build my Jenkins slave Docker image and install Docker on it?
Since I also need access to the 'gcloud' command (I'm running Jenkins via Kubernetes Helm/Charts), I've been using the gcr.io/cloud-solutions-images/jenkins-k8s-slave image for my Jenkins slave.
Currently it errors out saying "docker: not found"
My assumption is that you want to run docker build inside the Jenkins slave (which is a Kubernetes pod, presumably created by the Kubernetes Jenkins plugin).
To set the stage: when Kubernetes creates a pod that will act as a Jenkins slave, all commands that you execute inside the node block are executed inside that pod, in one of its containers (by default there is only one container, but more on this later).
So you are actually trying to run a Docker command inside a container based on gcr.io/cloud-solutions-images/jenkins-k8s-slave, which is most likely based on the official Jenkins JNLP slave and does not contain Docker!
From this point forward, there are two approaches that you can take:
use a slightly modified image based on the JNLP slave that also contains the Docker client and mount the Docker socket (/var/run/docker.sock) inside the container.
(You can find details on this approach here).
Here is an image that contains the Docker client and kubectl.
Here is a complete view of how to configure the Jenkins Plugin:
Note that you use a different image (you can create your own and add any binary you want there) and that you mount the Docker socket inside the container.
The problem with the first approach is that you create a new image forked from the official JNLP slave and manually add the Docker client. This means that whenever Jenkins or Docker has updates, you need to manually update your image and your entire configuration, which is not that desirable.
With the second approach you always use official images, and you use the JNLP slave to start other containers in the same pod.
Here is the full file from the image below
Here is the Jenkins Plugin documentation for doing this
As I said, the JNLP slave will start any container that you specify in the same pod. Note that in order to use Docker from a container you still need to mount the Docker socket.
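A minimal sketch of this second approach, assuming the Kubernetes plugin's Pipeline DSL: the default JNLP container runs the agent, an official docker image in the same pod provides the client, and the node's Docker socket is mounted so builds run on the host daemon.

podTemplate(label: 'docker-build',
    containers: [
        // Official docker client image, kept alive with 'cat' so steps can exec into it.
        containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true)
    ],
    volumes: [
        // Share the node's Docker daemon with the pod.
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')
    ]) {
    node('docker-build') {
        container('docker') {
            sh 'docker build -t foo/bar .'
        }
    }
}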
These are the two ways I found to achieve building images inside a Jenkins JNLP slave running inside a container.
The example also shows how to push the image using credential bindings from Jenkins, and how to update a Kubernetes deployment as part of the build process.
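As a hedged sketch of that push step, assuming a username/password credential with the hypothetical ID 'registry-creds' exists in Jenkins (credentials-binding plugin), registry.example.com stands in for your registry, and the step runs inside the docker container from the sketch above:

container('docker') {
    withCredentials([usernamePassword(credentialsId: 'registry-creds',
                                      usernameVariable: 'REG_USER',
                                      passwordVariable: 'REG_PASS')]) {
        // Log in, tag the locally built image for the target registry, then push it.
        sh 'docker login -u "$REG_USER" -p "$REG_PASS" registry.example.com'
        sh 'docker tag foo/bar registry.example.com/foo/bar'
        sh 'docker push registry.example.com/foo/bar'
    }
}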
Some more resources:
deploy Jenkins to Kubernetes as Helm chart, configure plugins to install
Thanks,
Radu M
I am a newbie with Docker, but I have read many guides. I am configuring a container running from a Jenkins base image with the Blue Ocean plugin. I started it with the docker run command, configured my proxy information, and added another plugin, the Kubernetes plugin, through the Jenkins Manage Plugins UI. Then I stopped the container and committed it to save that state, i.e. with the Kubernetes plugin and the proxy information I had already set. But when I run the new Docker image that I made with the docker commit command, I can't see any proxy information or the Kubernetes plugin; it is the same as the image I started from. Is there something I missed?
JENKINS_HOME is set to be a volume in the default Jenkins Docker image (which I'm assuming you're using). Volumes live outside of the Docker container layered filesystem. This means that any changes in those folders will not be persisted in subsequent image commits.
So I have a Jenkins master/slave setup where the master spins up a Docker container (on the slave VM), builds the job inside that container, then destroys the container when it's done. This is all done via the Jenkins Docker plugin.
Everything runs smoothly; the only problem is that after the job is done (a failed job) I cannot view the workspace, because the container is gone. I get the following error:
I've tried attaching a "volume" from the host (slave VM) to the container to store the files outside also (which works because, as shown below, I can see files on the host) and then tried mapping it to the master VM:
Here's my settings for that particular docker image template:
Any help is greatly appreciated!
EDIT: I've managed to successfully get the workspace to be stored on the host. However, when the build is done I still get the same error (Error: no workspace). I have no idea how to make Jenkins look for the files that are on the host rather than in the container.
I faced the same issue, and the post above helped me figure out what was wrong in my environment configuration. But it took me a while to understand the logic of this sentence:
Ok, so the way I've solved this problem was to mount a dir on the
container from the slave docker container, then using NFS
(Instructions are shown below) I've mounted the that slave docker
container onto Jenkins master.
So I decided to clarify it and write some more explanation examples...
Here is my working environment:
Jenkins master running on Ubuntu 16.04 server, IP: 192.168.1.111
Jenkins Slave Build Server running on Ubuntu 16.04 server, IP: 192.168.1.112
Docker Enabled Build Server (for spinning up docker containers), running on Ubuntu 16.04 server, IP: 192.168.1.114.
Problem statement: "Workspace" not available under Jenkins interface when running project on docker container.
The Goal: Be able to browse "Workspace" under Jenkins interface as well as on Jenkins master/slave and docker host servers.
Well, my problem started with a different issue, which led me to find this post and figure out what was wrong with my environment configuration...
Let's assume you have a working Jenkins/Docker environment with a properly configured Jenkins Docker Plugin. For the first run I did not configure anything under the "Container settings..." option in the Jenkins Docker Plugin. Everything went smoothly - the job completed successfully and, obviously, I was not able to browse the job workspace, since the Jenkins Docker Plugin by design destroys the docker container after finishing the job. So far so good... But I needed to save the docker workspace in order to be able to review files or fix issues when a job failed. To do so, I mapped host/path from the host to the container's container/path using "Volumes" in the "Container settings..." option of the Jenkins Docker Plugin:
I ran the same job again and it failed with the following error message in Jenkins:
After spending some time learning how the Jenkins Docker Plugin works, I figured out that the reason for the error above is wrong permissions on the Docker host server (192.168.1.114) on the "workspace" folder, which is created automatically:
So, from here we have to give the "other" group write permission on this folder. Setting the jenkins@192.168.1.114 user as the owner of the workspace folder is not enough, since we need the jenkins@192.168.1.111 user to be able to create sub-folders under the workspace folder on the 192.168.1.114 server. (In my case I have a jenkins user on the Jenkins master server, 192.168.1.111, as well as a jenkins user on the Docker host server, 192.168.1.114.)
To help explain what all the groupings and letters mean, take a look at this closeup of the mode in the above screenshot:
ssh jenkins@192.168.1.114
cd /home/jenkins
sudo chmod o+w workspace
Now everything works properly again: Jenkins spins up the docker container and, while the container is running, the workspace is available in the Jenkins interface:
But it disappears when the job is finished...
Some might say that there is no problem here, since all files from the container are now saved under the workspace directory on the docker host server (we mapped the folders under the Jenkins Docker Plugin settings)... and that is right! All the files are here:
/home/jenkins/workspace/"JobName"/ on Docker Host Server (192.168.1.114)
But under some circumstances, people want to be able to browse the job workspace directly from the Jenkins interface...
So, from here I followed the link from Fadi's post on how to set up NFS shares.
Reminder, the goal is: to be able to browse the docker job workspace directly from the Jenkins interface...
What I did on docker host server (192.168.1.114):
1. sudo apt-get install nfs-kernel-server nfs-common
2. sudo nano /etc/exports
# Share docker slave containers workspace with Jenkins master
/home/jenkins/workspace 192.168.1.111(rw,sync,no_subtree_check,no_root_squash)
3. sudo exportfs -ra
4. sudo /etc/init.d/nfs-kernel-server restart
This allows mounting the Docker host server's (192.168.1.114) /home/jenkins/workspace folder on the Jenkins master server (192.168.1.111).
On Jenkins Master server:
1. sudo apt-get install nfs-client nfs-common
2. sudo mount -o soft,intr,rsize=8192,wsize=8192 192.168.1.114:/home/jenkins/workspace/ /home/jenkins/workspace/<JobName>/
Now the 192.168.1.114:/home/jenkins/workspace folder is mounted and visible under the /home/jenkins/workspace/"JobName"/ folder on the Jenkins master.
So far so good... I ran the job again and faced the same behavior: while the docker container is still running, users can browse the workspace from the Jenkins interface, but once the job has finished, I get the same "... no workspace" error. Even though I can now browse the job files on the Jenkins master server itself, it is still not what was desired...
BTW, if you need to unmount the workspace directory on the Jenkins master server, use the following command:
sudo umount -f -l /home/jenkins/workspace/<<mountpoint>>
Read more about NFS:
How to configure an NFS server and mount NFS shares on Ubuntu 14.10
The workaround for this issue is to install the Multijob Plugin on Jenkins and add a new job that uses the Multijob Plugin options:
In my case I also moved all docker-related jobs to run on the Slave Build Server (192.168.1.112). So on this server I installed the NFS-related stuff, exactly as on the Jenkins master server, and also added some configuration on the Docker host server (192.168.1.114):
ssh jenkins@192.168.1.114
sudo nano /etc/exports
# Share docker slave containers workspace with build-server
/home/jenkins/workspace 192.168.1.112(rw,sync,no_subtree_check,no_root_squash)
Additionally, on the Jenkins slave server (192.168.1.112) I ran the following:
1. sudo apt-get install nfs-client nfs-common
2. sudo mount -o soft,intr,rsize=8192,wsize=8192 192.168.1.114:/home/jenkins/workspace/ /home/jenkins/workspace/<JobName>/
After the above configuration was done, I ran a new job on Jenkins and finally got what I wanted: I can use the Workspace option directly from the Jenkins interface.
Sorry for long post... I hope it was helpful for you.
Ok, so the way I've solved this problem was to mount a dir on the container from the slave docker container, then using NFS (Instructions are shown below) I've mounted that slave docker container onto the Jenkins master.
So my config looks like this:
I followed this answer to mount dir as NFS:
https://superuser.com/questions/300662/how-to-mount-a-folder-from-a-linux-machine-on-another-linux-machine/300703#300703
One small note: the IP address provided in that answer (the one you have to put in /etc/exports) is the IP address of the local machine (in my case, the Jenkins master).
I hope this answer helps you out!