Jenkins master and slave installation in a CI/CD pipeline - Docker

I am trying to implement a CI/CD pipeline using Kubernetes and Jenkins. I am planning to use a Kubernetes HA cluster with 3 master and 5 worker machines/nodes.
I am now exploring implementation tutorials for the CI/CD pipeline, as well as how Jenkins is used with a Kubernetes HA cluster. While reading, I ran into a few points of confusion about Jenkins, which I am adding here.
1. I have a total of 8 VMs - 3 master and 5 worker machines/nodes (Kubernetes cluster). If I install Jenkins on one of the worker machines, is there any problem integrating it with the CI/CD pipeline for deployment?
2. I previously read the following link to understand the implementation:
https://dzone.com/articles/easily-automate-your-cicd-pipeline-with-jenkins-he
Is it mandatory to use a Jenkins master and slave? The tutorial shows that if kubectl, helm, and docker are installed, a Jenkins slave is not needed. What is the idea behind master and slave here?
3. If I install both the Jenkins master and slave on Kubernetes cluster worker machines/nodes, do the master and slave need to be installed on separate VMs? I am still confused about where to install Jenkins.
I have just started with CI/CD pipelines, Kubernetes, and Jenkins.

Jenkins has two parts. There's the master, which manages all the jobs, and the workers, which perform the jobs.
The Jenkins master supports many kinds of workers (slaves) via plugins - you can have standalone nodes, Docker-based slaves, Kubernetes-scheduled Docker slaves, etc.
Where you run the Jenkins master doesn't really matter very much; what is important is how you configure it to run your jobs.
Since you are on Kubernetes, I would suggest checking out the Kubernetes plugin for Jenkins. When you configure the master to use this plugin, it creates a new Kubernetes pod for each job, and that pod runs the Docker-based Jenkins slave image. The plugin watches the job queue, notices there is no slave available to run a job, starts the Jenkins slave Docker image (which registers itself with the master), runs the job, and then deletes the pod. So you do not need to create slave nodes directly in this setup.
When you are in a Kubernetes cluster with a container-based workflow, you don't need to worry about where to run the containers; let Kubernetes figure that out for you. Just use Helm to launch the Jenkins master, then connect to it and configure it to use Kubernetes slaves.
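As a rough illustration, here is a minimal scripted-pipeline sketch of what a job can look like once the Kubernetes plugin is pointed at your cluster; the container name and image are placeholder assumptions, not anything prescribed by the plugin.

    // Sketch only: assumes the Kubernetes cloud is already configured in Jenkins.
    podTemplate(containers: [
        // Build container; the plugin adds its own agent (jnlp) container to the pod automatically.
        containerTemplate(name: 'maven', image: 'maven:3-eclipse-temurin-17', command: 'sleep', args: 'infinity')
    ]) {
        node(POD_LABEL) {
            stage('Build') {
                container('maven') {
                    sh 'mvn -B clean package'
                }
            }
        }
    }

Each run of this job gets its own pod, which is deleted when the job finishes, so there is nothing to install on any particular worker VM.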

Related

Jenkins pipeline using docker on existing slaves

We have the following Jenkins setup:
Jenkins master
Jenkins Slave1
Jenkins Slave2
Jenkins Slave3
These are all virtual machines, and the slaves always exist; they don't spin up and down automatically.
Now we have builds that need a lot of tools (Maven, Python, AWS CLI, ...). We could install every tool on every slave and everything would work fine.
But we want to move to a Docker-based approach.
Nearly all the tutorials I've seen run the slaves themselves in Docker: they use an orchestration tool like Kubernetes, create slaves as containers, do their work, and delete the pod again.
We don't have the option to do this.
Question: Is it a decent approach to use an 'old' Jenkins setup with real VM slaves on which we use Docker?
What I'm thinking about is writing a pipeline where each stage uses a Docker container:
start build (it will choose a slave, e.g. Slave1)
pipeline will start
stage 1: spin up e.g. a Python container, git clone, and execute Python commands. Mount a volume to the workspace??
stage 2: spin up e.g. an AWS CLI container, mount the content of the workspace, and execute new commands, etc.
Can someone evaluate this approach?
This is a very good approach. In fact, the way to do it is documented in the Jenkins docs under the 'Using multiple containers' section.
In each stage you basically spin up a container with the necessary tools available, and you can use a volume to persist output from the stage into the workspace so that other stages can use it.
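For reference, a minimal declarative Jenkinsfile sketch of that per-stage container idea; the slave label, image tags, and commands below are placeholders, not taken from the question.

    pipeline {
        // Pin the whole run to one of the existing VM slaves (it needs Docker installed).
        agent { label 'Slave1' }
        stages {
            stage('Python work') {
                agent {
                    docker {
                        image 'python:3.11-slim'
                        reuseNode true   // reuse the slave's workspace so later stages see this stage's output
                    }
                }
                steps {
                    sh 'python --version'
                }
            }
            stage('Next tool') {
                agent {
                    docker {
                        image 'node:20-alpine'
                        reuseNode true
                    }
                }
                steps {
                    sh 'node --version'
                }
            }
        }
    }

Jenkins mounts the workspace into each container for you, so there is usually no need to manage volumes by hand; note that images which define their own ENTRYPOINT (the official aws-cli image, for example) may need extra arguments before they behave well as a Jenkins build container.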

Dockerizing Jenkins builds - slaves as containers or builds as containers?

I'm trying to figure out the best strategy for containerizing builds in a Jenkins CI/CD infrastructure using Docker. From what I see, I have 2 options:
(1) Use ephemeral slaves that get provisioned on-demand on Docker hosts using the Docker Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
Once the build completes the slave is disposed. As a consequence, only one build ever gets run on a single slave.
(2) Use static slaves (e.g. VMs) that run builds inside Docker containers using the CloudBees Docker Custom Build Environment Plugin: https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Custom+Build+Environment+Plugin As a consequence, multiple (isolated) builds can run on a single slave.
What are the main advantages/disadvantages of one approach over the other? When and why should I choose one over the other? This does not appear at all obvious to me.
I suspect builds are lighter weight than slaves, so for a CI/CD infrastructure orchestrating a large end-to-end pipeline with many jobs running, (2) would be more scalable - each Jenkins slave incurs at least 2 threads on the master node.
Edit
My preference is option 1 (ephemeral slaves) with the Docker plugin.
With this plugin, you declare your build images in the global Jenkins settings and assign labels to your Docker images.
In your job, you just have to use the relevant labels, and the Docker plugin will create the relevant slave in a new container.
With the Docker plugin, Jenkins will spin up a new slave in a few seconds. So even if you're using a pipeline with a lot of stages, it will work fine.
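A hedged sketch of the job side; 'docker-maven' is an assumed label, standing in for whatever label you assigned to the image template in the plugin configuration.

    pipeline {
        // The Docker plugin sees the label, starts a container slave from the matching image,
        // runs the build in it, and throws the container away afterwards.
        agent { label 'docker-maven' }
        stages {
            stage('Build') {
                steps {
                    sh 'mvn -B clean verify'
                }
            }
        }
    }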
This is what I'm going to implement at Forgerock (my company):
2 powerful bare metal machines (with SSD, 32 cores and 1 TB of RAM)
The Jenkins Docker plugin
Maven artifact caching using Artifactory (so we don't download the internet)
The Docker container will use a local Maven cache (so I'm sure not to use an old/odd Maven artifact)
I did a POC on a small bare metal machine and it works well :)
If you are using ephemeral slaves without Maven caching, performance can become a problem.
Regarding the Jenkins plugins, there is a new one developed by Nicolas De Loof: Docker Slaves plugin.
I have to try this new plugin.

How do I pass on my build file from Jenkins to Saltstack master

We have a Jenkins system to automate builds from GitHub, and now we are implementing a SaltStack system. So I need to integrate Jenkins with the Salt master so that it passes all the new builds to the master, which then sends them across the Salt clients (minions).
The SaltStack setup is in the AWS cloud, and the Jenkins machine is outside the cloud in a local setup.
You could enable the salt-api and use the following plugin: https://wiki.jenkins-ci.org/display/JENKINS/saltstack-plugin - then all of your Jenkins builds can execute states/orchestrations etc. against any minions on a per-job basis.
Another way of doing this is to have a minion running on the salt-master and install the Jenkins slave on the same box. Then restrict the Jenkins jobs to that slave and execute the commands as if you were at the command line. NOTE: this option requires a bit more configuration.

Multiple Jenkins Masters using a Shared Slave pool

I am trying to scale Jenkins for a large organization. Is there a way to have multiple Jenkins masters share a slave pool? For example, if I had 200 Jenkins Masters and I want them to share the same set of 50 Linux slaves.
That is, assuming each slave has only 1 executor: if Master A submits a job to the slave pool and it is running on Slave 1, and Master B then submits a job to the pool, it should run on one of the other free slaves, since Slave 1 is already occupied.
I know multiple masters could share a single slave if I configured the slave to have a new workspace and executor for each master. However, I want to be able to set the slave up once, instead of having a slave.jar running on the slave for each master.
Cloudbees Op Center appears to provide this functionality, but I am looking for a way to do this with the open source version. If not, how difficult do you think it would be to extend Jenkins to have this functionality? I have Java development experience and have done a little work with Jenkins plugin development.
Thanks,
As you've noted, it's not hard to share slaves between masters: just set up multiple workspaces and each master will install its own slave jar. The trick is sharing resources properly.
One such resource manager is Apache Mesos. A Jenkins Mesos plugin exists enabling the creation of slaves on a managed cluster.
This approach is very new, and eBay has blogged about how they've evolved their Jenkins setup to use Mesos:
eBay CI solution part 1
eBay CI solution part 2
Hope this helps.
There is a Gearman Plugin, developed by OpenStack, to handle sharing slaves between multiple masters.
If it were me, I'd set up all the masters with a cloud plugin for slaves. For example, you could install the Kubernetes plugin or the Nomad plugin and connect all masters up to the same Kubernetes or Nomad cluster. Nomad or Kubernetes would take care of resource management, and the masters would only be submitting jobs to a shared pool of resources. This concept can easily be applied to other cloud providers, like AWS, but IMHO if you just want to set up an on-prem pool of resources for your Jenkins masters from scratch, Nomad is the easiest option.

Jenkins Master/Slave configuration

I've been reading about Jenkins master/slave configurations but I still have some questions:
Is it the case that the slave Jenkins is not actually installed and started up the way the master Jenkins is? I assumed I would install one master Jenkins and one slave Jenkins in the same way, and then the master Jenkins would control the slave, e.g. through SSH. So I cannot view the slave Jenkins through a GUI?
The reason I have thought about adding a slave Jenkins on another VM is that the VM contains our application servers (many test environments). Deploying and starting/stopping application servers from the master Jenkins is a pain because the master Jenkins and the application servers are on different machines. Therefore, if I added a slave Jenkins to the machine where our application servers are, these would actually be deployed and started/stopped locally (by the slave Jenkins). I wonder if I have missed something, or if my presumptions are still valid.
In a standard Jenkins master/slave setup, Jenkins is only installed on the master. That is where you see the user interface and start/configure build jobs.
The slaves execute the jobs. There is no Jenkins installation here other than a small Java app that lets Jenkins communicate to/from the slave. Jenkins talks to these slaves through the slave.jar app, e.g. over SSH via the SSH Slaves Plugin, and can monitor whether the slave is running, etc.
So in your case, you can start jobs from the master that will execute on the application servers.
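As a small hedged sketch (the 'app-server' label and the deploy script are assumptions): a job pinned to the slave that lives on the application-server VM runs its steps locally on that machine.

    pipeline {
        // 'app-server' is the label given to the slave running on the application-server VM.
        agent { label 'app-server' }
        stages {
            stage('Deploy') {
                steps {
                    // Executes on the application-server machine via the slave agent.
                    sh './deploy.sh'   // hypothetical deploy/start/stop script
                }
            }
        }
    }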
The master/slave setup also allows you to host a whole bunch of different slaves, with different OSes, different hardware, etc. You can communicate job results (artifacts) from one slave to another via the Copy Artifacts Plugin.
There are also ways to duplicate the actual Jenkins master with load balancing in a heavy use scenario. That is not what you seem to be looking for.
