I would like to set up a Kubernetes cluster as follows:
Kubernetes will be installed on top of VMs (depicted in pink in my diagram).
I am going to use StatefulSets or ReplicaSets to deploy the Jenkins master and Jenkins executors. I would like the workspace folder on the Jenkins master to always be in sync across all replicas, in the eventuality of losing any worker VM or server.
Can this be achieved using the internal mechanisms of ReplicaSets or StatefulSets, or is there another way of keeping the workspace in sync?
Thank you,
Albert
You can't just assume that a StatefulSet will do the job for you. You can configure an NFS server, point a PV to it, bind your PVC to this PV, and have your STS reference the PVC. So, basically:
STS -> PVC -> PV -> NFS Server
So, even if one worker node goes down, it won't impact the others.
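To make that chain concrete, here is a minimal sketch, assuming an NFS server at 10.0.0.5 exporting /srv/jenkins (the address, path, names, and sizes are all placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-workspace-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany          # NFS lets multiple nodes mount the same volume
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.5         # placeholder NFS server address
    path: /srv/jenkins       # placeholder exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-workspace-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""       # bind to the pre-created PV, not a dynamic provisioner
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
spec:
  serviceName: jenkins
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          volumeMounts:
            - name: workspace
              mountPath: /var/jenkins_home
      volumes:
        - name: workspace
          persistentVolumeClaim:
            claimName: jenkins-workspace-pvc

Because the data lives on the NFS export rather than on any one node's disk, the pod can be rescheduled to a surviving node and reattach the same workspace.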
I want to run Jenkins on k8s. We can achieve that with any standard Helm chart or our own manifest files. In this case, Jenkins (master only) will run inside a container (Pod).
Now I also want to have a pipeline job that uses a Docker agent, as described here.
I am getting confused about:
how and where this Docker container will be run (on the same node where Jenkins is running? and if that node's capacity is exhausted, it needs to run the Docker agent on a different node)
how Jenkins will authenticate to run containers on k8s nodes
I saw the Kubernetes plugin / Docker plugin. But those plugins create containers beforehand (or at least we need to set up a template, which decides how containers will start, which image will be used, and much more) and connect to Jenkins with the help of JNLP / SSH. I lose the flexibility to have any image as an agent in that case.
Going further, I would also like to build custom images on the fly with the help of a Dockerfile shipped along with the code. An example is available at the same link.
I believe this documentation answers all of your questions: https://devopscube.com/jenkins-build-agents-kubernetes/
With this method you are not losing your flexibility, because your Jenkins master is going to create a k8s pod on the fly. Yes, you additionally need JNLP authentication, but you can think of that as a sidecar container.
About your first question: if you use exactly that method, your Jenkins jobs are going to run under the Jenkins master, with the same Docker that your Jenkins master is using.
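To make the sidecar idea concrete, here is a rough sketch of the kind of pod the Kubernetes plugin spins up per build, with the jnlp container next to a build container (the labels and the build image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: agent           # placeholder label
spec:
  restartPolicy: Never       # agent pods are throwaway, one per build
  containers:
    - name: jnlp             # the agent container that authenticates back to the master
      image: jenkins/inbound-agent:latest
    - name: build            # your actual build environment, any image you like
      image: maven:3-jdk-11  # placeholder build image
      command: ["sleep"]
      args: ["infinity"]     # keep the container alive; pipeline steps exec into it

If you only define your build containers, the plugin normally injects the jnlp container for you, which is why it can be thought of as a sidecar.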
I have one global container registry.
I will have many k8s clusters in different cloud providers. For now I use GKE.
I want to have a local Docker registry cache in each k8s cluster. It reduces pull latency, and I will be safer if the global container registry has a short downtime.
It should work like this: when I deploy something on the k8s cluster, k8s starts pulling the image and goes via this proxy. If the proxy already has the image, it will serve it quickly; if not, it will pull it from the global container registry and then serve it.
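For context, one possible shape of such an in-cluster cache is the standard registry:2 image in pull-through mode; a rough sketch, where the upstream URL and names are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry-cache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry-cache
  template:
    metadata:
      labels:
        app: registry-cache
    spec:
      containers:
        - name: registry
          image: registry:2
          env:
            - name: REGISTRY_PROXY_REMOTEURL        # turns the registry into a pull-through cache
              value: https://registry-1.docker.io   # placeholder: the global registry
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: registry-cache
spec:
  selector:
    app: registry-cache
  ports:
    - port: 5000

The remaining problem is pointing the nodes' container runtime at this Service, which is exactly what I can't change permanently on GKE.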
I tried to set up https://hub.docker.com/r/rpardini/docker-registry-proxy
I ran it, but I can't configure the k8s cluster to use it as a proxy. The docs show how to do it, but that works when you have your own k8s clusters on your own servers and can change the dockerd or containerd service files. I have managed k8s in Google Cloud (GKE), so I can't easily make permanent changes to files on the nodes.
Do you have any ideas on how to achieve what I want?
My Kubernetes setup:
v1.16.2 on bare metal
1 master node: used for the Jenkins master + Docker registry
5 slave nodes: used for Jenkins JNLP slaves
I use the kubernetes-plugin to run slave Docker agents. All slave k8s nodes are labeled "jenkins=slave". When I use a nodeSelector ("jenkins=slave") for the podTemplate, Kubernetes always schedules the new pod on the same node, regardless of the number of started Jenkins jobs.
Please advise how I can configure Kubernetes or the kubernetes-plugin to schedule each next build round-robin (across all labeled nodes in the Kubernetes cluster).
Thank you.
This is generally handled by the inter-pod anti-affinity configuration https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity. You would set this in the pod template for your builder deployment. That said, it's more common to use the Kubernetes plugin for Jenkins, which runs each build as a temporary pod, rather than having long-lived JNLP builders.
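A sketch of what that anti-affinity could look like in the agent pod spec, assuming the node label from your setup and a placeholder pod label that the rule matches on:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: jenkins-agent               # placeholder label the anti-affinity rule matches on
spec:
  nodeSelector:
    jenkins: slave                   # the existing node label from the question
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: jenkins-agent   # prefer nodes not already running an agent pod
            topologyKey: kubernetes.io/hostname
  containers:
    - name: jnlp
      image: jenkins/inbound-agent:latest

preferredDuringSchedulingIgnoredDuringExecution spreads the pods out when possible but still schedules them when every labeled node is already busy; the required variant would refuse to schedule instead.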
I have set up a Kubernetes cluster and it is working fine.
As of now, the Kubernetes cluster deploys containers on the master as well. I don't want this to happen. Can anybody let me know how to prevent containers from being deployed on the master?
If you want no Pods to be scheduled on the master you will need to taint the master node:
kubectl taint nodes nameofmaster dedicated=master:NoSchedule
Read up on taints and tolerations to understand the consequences and how to schedule specific Pods on the now-tainted master node. To undo it, use kubectl taint nodes nameofmaster dedicated-
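For completeness, a Pod that should still land on the tainted master would carry a matching toleration; a sketch with a placeholder image:

apiVersion: v1
kind: Pod
metadata:
  name: on-master
spec:
  tolerations:
    - key: dedicated
      operator: Equal
      value: master
      effect: NoSchedule                     # matches the taint applied above
  nodeSelector:
    kubernetes.io/hostname: nameofmaster     # pin to the master node from the taint command
  containers:
    - name: app
      image: nginx                           # placeholder image

Note that the toleration only permits scheduling on the master; the nodeSelector is what actually steers the Pod there.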
We are using Jenkins and Docker for CI/CD. Our Jenkins is set up in master/slave style, where the slaves are distributed across different data centers. When a new build needs to happen, the Jenkins master identifies a slave in one of the DCs and spins up an ephemeral container, tearing it down once done.
Due to firewall limitations, we only have about 10 ports open for the slaves in some of the DCs, for example the port range 8000-8010. In general, Docker publishes containers on the Linux ephemeral port range of 32768 to 61000. The problem is that the Jenkins master cannot talk to the containers if the host port is bound outside 8000-8010. The Jenkins Docker plugin has a limitation where you cannot bind multiple ports (maybe I am wrong here). I would like to know if there is any way we can configure this on the Docker end or in the Jenkins Docker plugin.
After researching in many forums and talking to people: this is not possible, and it is not even recommended to try. The recommended way to overcome this issue is to move to Docker Swarm, where you have a single virtual Docker cloud that takes care of spinning up containers behind the scenes and keeping them ready for consumption even before the need arises. The configuration options are flexible.
Read more about Swarm here:
https://docs.docker.com/swarm/