Scale Jenkins-slave on Kubernetes

I configured Jenkins on a K8s cluster and set up a Jenkins build pipeline. When a build executes, it creates a jenkins-slave pod, and after the build the pod terminates.
The use case is basically: if all my Jenkins workers are busy, I want to auto-scale (increase the number of slaves), and when the load comes back down, I would like to reduce the slave count.
Is this possible, and how can I do it from K8s?
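For reference, the Kubernetes plugin already implements this pattern: it launches one agent pod per queued build, caps concurrency, and deletes idle pods, so the slave count follows the load automatically. A minimal sketch, assuming the plugin is configured through Jenkins Configuration as Code (the cap of 10 and the template names are illustrative):

```yaml
jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        serverUrl: "https://kubernetes.default"  # in-cluster API server
        containerCap: 10        # illustrative: at most 10 concurrent agent pods
        templates:
          - name: "jenkins-slave"
            label: "jenkins-slave"
            idleMinutes: 0      # tear the pod down as soon as the build finishes
            containers:
              - name: "jnlp"
                image: "jenkins/inbound-agent:latest"
```

With idleMinutes set to 0, the pod count falls back down by itself once the build queue drains.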

Jenkins on k8s and pipeline with docker agent

I want to run my Jenkins on K8s. We can achieve that with any standard Helm chart or our own manifest files. In this case, Jenkins (master only) will run inside a container (Pod).
Now I also want to have a pipeline job that uses a Docker agent, as described here.
I am getting confused about:
how and where this Docker container will be run (on the same node where Jenkins is running? And if that node's capacity is exhausted, the Docker agent needs to run on a different node)
how Jenkins will authenticate to run containers on K8s nodes
I saw the Kubernetes plugin / Docker plugin. But those plugins create containers beforehand (or at least we need to set up a template, which decides how containers will start, which image will be used, and more) and connect Jenkins with the help of JNLP/SSH. I lose the flexibility to have an arbitrary image as an agent in that case.
Going further, I would also like to build custom images on the fly from a Dockerfile shipped along with the code. An example is available in the same link.
I believe this documentation answers all of your questions: https://devopscube.com/jenkins-build-agents-kubernetes/
With this method you don't lose flexibility, because your Jenkins master creates a K8s pod on the fly. Yes, you additionally need JNLP authentication, but you can think of that as a sidecar container.
About your first question: if you use exactly that approach, your Jenkins jobs are going to run under the Jenkins master, with the same Docker that your Jenkins master is using.
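As a concrete illustration of that approach, here is a minimal sketch of the kind of pod the Kubernetes plugin creates on the fly; the jnlp container is the sidecar mentioned above, and the maven container is a hypothetical build environment you can swap for any image:

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: jnlp                  # sidecar agent that dials back to the Jenkins master
      image: jenkins/inbound-agent:latest
    - name: maven                 # hypothetical build container; any image works here
      image: maven:3-jdk-11
      command: ["sleep"]
      args: ["infinity"]          # keep the container alive so build steps can run inside it
```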

Jenkins on Kubernetes

I would like to set up a Kubernetes cluster as follows:
Kubernetes will be installed on top of the VMs depicted in pink.
I am going to use StatefulSets or ReplicaSets to deploy the Jenkins master and Jenkins executors. I would like the workspace folder on the Jenkins master to always be in sync across all replicas in the eventuality of losing any worker VM or server.
Can this be achieved using internal mechanisms of ReplicaSets or StatefulSets, or is there any other way of keeping the workspace in sync?
Thank you,
Albert
You can't just assume that a StatefulSet will do the job for you. You can configure an NFS server, point a PV to it, bind your PVC to this PV, and have your STS use that PVC. So, basically:
STS -> PVC -> PV -> NFS Server
So, even if one worker node goes down, it won't impact the others.
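A minimal sketch of that chain in manifest form, assuming an NFS export already exists (the server address and path below are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-workspace-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany              # NFS can be mounted read-write from multiple nodes
  nfs:
    server: 10.0.0.5             # placeholder NFS server address
    path: /exports/jenkins       # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-workspace-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # bind to the pre-provisioned PV above, not a dynamic class
  resources:
    requests:
      storage: 10Gi
```

The StatefulSet then mounts jenkins-workspace-pvc for the workspace, so the data survives the loss of any single worker node.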

How to handle "docker-in-docker" problem when using Jenkins inside K8S

I'm new to Kubernetes and need help with a somewhat complex question.
Background
Using Jenkins in GKE (Google Kubernetes Engine)
Want to use the jenkins-docker plugin to provide a specific test environment for each type of test
Don't want to bundle the Docker binary into the Jenkins image (because it is large)
Don't want docker-in-docker
More specifically, I don't want the Jenkins Pod to be a new Docker server
What I want
Each test environment can create a new pod in the GKE cluster, rather than creating containers inside the Jenkins Pod
P.S.
I have just read some articles; half of them are about how to use K8s to scale up Jenkins (using jenkins-slave + the jenkins-kubernetes plugin), and the other half are about how to use the Docker plugin in a dockerized Jenkins container on a bare-metal machine (you can use /var/run/docker.sock to communicate between the host and the Docker container), but I cannot find how to use the Docker plugin (to provide a specific environment) in a dockerized Jenkins container inside K8s.
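For what it's worth, the usual answer to exactly this gap is the kubernetes-plugin's pod templates: each test environment becomes its own pod in the cluster, so no Docker daemon ever runs inside the Jenkins pod. A minimal sketch (python-tests is a hypothetical test environment; the plugin injects its jnlp agent container automatically when you don't define one):

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: python-tests         # hypothetical test environment; one image per test type
      image: python:3.11
      command: ["sleep"]
      args: ["infinity"]         # stay alive so the job can run test steps inside
```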

Schedule jenkins slaves across all nodes in kubernetes cluster (round robin)

My Kubernetes setup:
v1.16.2 on bare metal
1 master node: used for Jenkins Master + Docker registry
5 slave nodes: used for Jenkins JNLP slaves
I use the kubernetes-plugin to run slave Docker agents. All slave K8s nodes are labeled "jenkins=slave". When I use a nodeSelector ("jenkins=slave") for the podTemplate, Kubernetes always schedules the new pod on the same node, regardless of the number of running Jenkins jobs.
Please advise how I can configure Kubernetes or the kubernetes-plugin to schedule each next build round-robin (across all labeled nodes in the cluster).
Thank you.
This is generally handled by the inter-pod anti-affinity configuration: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity. You would set this in the pod template for your builder deployment. That said, it's more common to use the Kubernetes plugin for Jenkins, which runs each build as a temporary pod, rather than having long-lived JNLP builders.
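A sketch of what that could look like in the agent pod template, reusing the jenkins=slave label from the question (preferred rather than required anti-affinity, so builds still schedule once every node already runs an agent):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: slave
spec:
  nodeSelector:
    jenkins: slave
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname   # spread agents across distinct nodes
            labelSelector:
              matchLabels:
                jenkins: slave
```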

Jenkins slave running in ECS cluster is slow to start containers

I'm using a Jenkins slave in an AWS ECS cluster. Every time I press build, the slave container takes 3 minutes to start. How can I speed this up?
