I've been reading about Jenkins master/slave configurations but I still have some questions:
Is it true that a slave Jenkins is not actually installed and started up the way the master Jenkins is? I assumed I would install one master Jenkins and another slave Jenkins in the same way, and that the master Jenkins would then control the slave, e.g. through SSH. Does that mean I cannot view the slave Jenkins through a GUI?
The reason I have thought about adding a slave Jenkins on another VM is that the VM contains our application servers (many test environments). Deploying and starting/stopping application servers from the master Jenkins is a pain because the master Jenkins and the application servers are on different machines. Therefore, if I added a slave Jenkins to the machine where our application servers are, they would actually be deployed and started/stopped locally (by the slave Jenkins). I wonder if I have missed something, or if my assumptions are valid.
In a standard Jenkins master/slave setup, Jenkins is only installed on the master. That is where you see the user interface and start/configure build jobs.
The slaves execute the jobs. There is no Jenkins installation on them other than a small Java app that lets Jenkins communicate to/from the slave. The master talks to these slaves through that slave.jar app, e.g. over SSH via the SSH Slaves Plugin, and can monitor whether the slave is running, etc.
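To make that concrete, here is a rough Python sketch (using paramiko; the real plugin does all of this in Java and manages the process for you) of what the SSH Slaves Plugin effectively does when it brings a slave online. The host, user, key, and paths are made up for illustration:

```python
# Rough illustration of what the SSH Slaves Plugin does when it brings a
# slave online: copy the small agent jar to the slave and launch it over SSH.
# Host, user, key and paths are hypothetical placeholders.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("app-server.example.com", username="jenkins",
            key_filename="/var/lib/jenkins/.ssh/id_rsa")

# Copy the agent jar (historically slave.jar, agent.jar in newer versions)
sftp = ssh.open_sftp()
sftp.put("/var/lib/jenkins/agent.jar", "/home/jenkins/agent.jar")
sftp.close()

# Launch the agent; the master keeps this channel open and uses the
# process's stdin/stdout as the communication channel to the slave.
stdin, stdout, stderr = ssh.exec_command("java -jar /home/jenkins/agent.jar")
```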
So in your case, you can start jobs from the master that will execute on the application servers.
The master/slave setup also allows you to host a whole bunch of different slaves, with different OSes, different hardware, etc. You can pass job results (artifacts) from one slave to another via the Copy Artifacts Plugin.
There are also ways to duplicate the actual Jenkins master with load balancing in a heavy use scenario. That is not what you seem to be looking for.
I am trying to implement a CI/CD pipeline using Kubernetes and Jenkins. I am planning to use a Kubernetes HA cluster with 3 master and 5 worker machines/nodes.
Now I am exploring implementation tutorials for the CI/CD pipeline, and also how Jenkins is used with a Kubernetes HA cluster. While reading, I ran into a few points of confusion about Jenkins, which I am adding here.
1. I have a total of 8 VMs - 3 master and 5 worker machines/nodes (the Kubernetes cluster). If I install Jenkins on any one of the worker machines, is there any problem integrating it into the CI/CD pipeline for deployment?
2. I previously read the following link to understand the implementation:
https://dzone.com/articles/easily-automate-your-cicd-pipeline-with-jenkins-he
Is it mandatory to use a Jenkins master and slave? The tutorial suggests that if kubectl, helm, and docker are installed, you don't need a Jenkins slave. What is the idea behind master and slave here?
3. If I install both the Jenkins master and slave on Kubernetes cluster worker machines/nodes, do the master and slave need to go on separate VMs? I am still confused about where to install Jenkins.
I have just started on CI/CD pipelines with Kubernetes and Jenkins.
Jenkins has two parts. There's the master, which manages all the jobs, and the workers, which perform the jobs.
The Jenkins master supports many kinds of workers (slaves) via plugins - you can have standalone nodes, Docker-based slaves, Kubernetes-scheduled Docker slaves, etc.
Where you run the Jenkins master doesn't really matter very much; what is important is how you configure it to run your jobs.
Since you are on Kubernetes, I would suggest checking out the Kubernetes plugin for Jenkins. When you configure the master to use this plugin, it will create a new Kubernetes pod for each job, and that pod will run the Docker-based Jenkins slave image. The way this works is that the plugin watches the job queue, notices there isn't a slave to run a queued job, starts the Jenkins slave Docker image (which registers itself with the master), lets it run the job, and then deletes it. So you do not need to directly create slave nodes in this setup.
When you are in a Kubernetes cluster in a container-based workflow, you don't need to worry about where to run the containers; let Kubernetes figure that out for you. Just use Helm to launch the Jenkins master, then connect to the Jenkins master and configure it to use Kubernetes slaves.
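If it helps to demystify what the plugin does, its behaviour is conceptually something like the Python sketch below, which polls the Jenkins build queue over the REST API and creates a throwaway agent pod per queued item. This is only an illustration: the real plugin is written in Java and handles labels, secrets, and cleanup for you, and the URL, credentials, namespace, image, and secret here are placeholders.

```python
# Conceptual sketch of the Kubernetes plugin's behaviour, not the real code:
# watch the Jenkins build queue and start a throwaway agent pod per queued job.
# URL, credentials, namespace, image and secret are placeholders.
import time
import requests
from kubernetes import client, config

JENKINS_URL = "http://jenkins.example.com"
AUTH = ("admin", "api-token")

config.load_kube_config()   # or load_incluster_config() when running in the cluster
core = client.CoreV1Api()

def agent_pod(name: str) -> client.V1Pod:
    """Build a pod spec that runs the stock inbound (JNLP) agent image."""
    container = client.V1Container(
        name="jnlp",
        image="jenkins/inbound-agent:latest",
        env=[client.V1EnvVar(name="JENKINS_URL", value=JENKINS_URL),
             client.V1EnvVar(name="JENKINS_AGENT_NAME", value=name),
             client.V1EnvVar(name="JENKINS_SECRET", value="<agent-secret>")],
    )
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name=name, labels={"jenkins": "agent"}),
        spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
    )

while True:
    queue = requests.get(f"{JENKINS_URL}/queue/api/json", auth=AUTH).json()
    for item in queue.get("items", []):
        pod_name = f"agent-{item['id']}"
        core.create_namespaced_pod(namespace="jenkins", body=agent_pod(pod_name))
        # The pod registers itself with the master, runs the job,
        # and the plugin deletes it afterwards.
    time.sleep(10)
```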
I am trying to understand Jenkins distributed builds. From what I have read, the Jenkins master uses its own JVM, and each agent/slave uses its own JVM.
My Jenkins master is running on a machine with a native Ubuntu 16.04 OS. Am I correct in assuming that each master and agent/slave will have its own JVM (as opposed to their own Linux virtual machines)?
Am I also correct in assuming that once Jenkins master is installed and running, the Jenkins master will orchestrate spinning up the additional JVMs required for the Jenkins agent / slaves?
Thank you for any help,
No, those agents/slaves should not run on the same system (i.e. VM) as the master. I guess they could, but there wouldn't be any benefit (the benefit usually being extra compute power).
If you want more simultaneous builds, add additional machines that have a JVM installed, where the Jenkins master will launch its slaves (aka agents).
Let's say you have a gitlab instance and it already uses Jenkins for all its CI builds via the gitlab Jenkins plugin, etc. The Jenkins setup has a modest collection of build slaves providing a variety of platforms, etc. and each slave is set up to run just one job at a time (i.e. a Jenkins job gets exclusive access to the build slave, which is important for reasons I won't go into here).
Now let's say you want to consider using gitlab's own native CI support, moving one or more projects over to gitlab instead of Jenkins. The gitlab CI would need to use the same set of build slaves, but it needs to play nice with Jenkins and the two need to cooperate so that if one runs a job on a particular slave, the other won't submit a job to that same slave until the first finishes. In effect, while Jenkins is running a job on a slave, gitlab should see that slave as unavailable and vice versa.
Does anyone have a working method for getting gitlab to tell Jenkins it is using a slave while it runs a CI job on it, and vice versa? The method doesn't have to be 100% bulletproof; it would potentially be okay if both gitlab and Jenkins ran a job on the same slave at the same time, as long as that is a rare event (i.e. race conditions could be tolerated if the frequency of occurrence is likely to be low).
Additional info:
Build slaves include Linux, Windows and Apple.
Docker is not used and would not be permitted at this time.
We have full admin access to everything, but changing code in gitlab or Jenkins themselves would be rejected. Adding scripts or plugins would be okay.
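For what it's worth, the roughest idea I have sketched so far is a small wrapper that a gitlab CI job would call before and after its work to mark the Jenkins node temporarily offline via the REST API. It assumes Jenkins' /computer/<node>/toggleOffline endpoint and an API token, it is only best-effort (there is still a race window between the check and the toggle), and I would happily replace it with something cleaner:

```python
# Best-effort wrapper a gitlab CI job could call to keep Jenkins off a slave
# while gitlab uses it. Hostname, node name and credentials are placeholders,
# and there is still a small race window between the check and the toggle.
import sys
import time
import requests

JENKINS = "http://jenkins.example.com"
AUTH = ("gitlab-bot", "api-token")

def node_state(node: str) -> dict:
    return requests.get(f"{JENKINS}/computer/{node}/api/json", auth=AUTH).json()

def toggle_offline(node: str, message: str) -> None:
    requests.post(f"{JENKINS}/computer/{node}/toggleOffline",
                  params={"offlineMessage": message}, auth=AUTH)

def claim(node: str) -> None:
    """Wait until Jenkins is idle on the node, then mark it offline."""
    while True:
        state = node_state(node)
        if state.get("idle") and not state.get("offline"):
            toggle_offline(node, "in use by gitlab CI")
            return
        time.sleep(15)

def release(node: str) -> None:
    """Bring the node back online once the gitlab job is done."""
    if node_state(node).get("offline"):
        toggle_offline(node, "released by gitlab CI")

if __name__ == "__main__":
    action, node = sys.argv[1], sys.argv[2]   # e.g. "claim build-slave-03"
    claim(node) if action == "claim" else release(node)
```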
I'm new to Jenkins, and I'd like to know if it is possible to have one Jenkins server deploy/update code on multiple web servers.
Currently, I have two web servers, which are deployed using Python Fabric.
Any good tutorials will be greatly welcomed.
One solution could be to declare your web servers as slave nodes.
First, give Jenkins credentials to your servers (login/password, SSH login + private key, or certificate). This can be configured in the "Manage Credentials" menu.
Then configure the slave nodes. Read the doc.
Then create a multi-configuration job (you first have to install the matrix-project plugin). This will allow you to send the same deployment instructions to both of your servers at once.
Since you are already using Fabric for deployment, I would suggest installing Fabric on the Jenkins master and having Jenkins kick off the Fabric commands to deploy to the remote servers. You could set up the hostnames or IPs of the remote servers as parameters to the build and just have shell commands that iterate over them and run the Fabric commands. You can take this a step further and have the same job deploy to dev/test/prod just by using a different set of hosts.
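As a rough sketch of that shell step (Fabric 2.x style; the host list, artifact path, and restart command are placeholders for your own setup), the Jenkins job could run something like:

```python
# Deploy script the Jenkins master could run from a shell build step, e.g.
# `python deploy.py`. DEPLOY_HOSTS would be a Jenkins build parameter such as
# "web1.example.com,web2.example.com"; paths and the restart command are
# placeholders for your own application.
import os
from fabric import Connection

hosts = [h for h in os.environ.get("DEPLOY_HOSTS", "").split(",") if h]
artifact = os.environ.get("ARTIFACT", "build/myapp.tar.gz")

for host in hosts:
    conn = Connection(host, user="deploy")
    conn.put(artifact, "/tmp/myapp.tar.gz")               # ship the build artifact
    conn.run("tar -xzf /tmp/myapp.tar.gz -C /srv/myapp")  # unpack in place
    conn.run("sudo systemctl restart myapp")              # restart the service
```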
I would not make the web servers slave nodes. Reserve slave nodes for build jobs. For example, if you need to build a Windows application, you will need a Windows Jenkins slave. If you have a problem with installing Fabric on your Jenkins master, you could create a slave node that is responsible for running Fabric deploys and force anything that runs a Fabric command to use that slave. I feel like this is overly complex, but if you have a ton of builds on your master, you might want to go this route.
I am trying to scale Jenkins for a large organization. Is there a way to have multiple Jenkins masters share a slave pool? For example, if I had 200 Jenkins masters and wanted them to share the same set of 50 Linux slaves.
That is, assuming each slave only has 1 executor, if Master A submits a job to the slave pool and it is running on Slave 1, if Master B submits a job to the slave pool, it would try to run on one of the other free slaves, since Slave 1 is already occupied.
I know multiple masters could share a single slave if I configured the slave to have a new workspace and executor for each master. However, I want to be able to set the slave up once, instead of having a slave.jar running on the slave for each master.
CloudBees Operations Center appears to provide this functionality, but I am looking for a way to do this with the open source version. If not, how difficult do you think it would be to extend Jenkins to have this functionality? I have Java development experience and have done a little work with Jenkins plugin development.
Thanks,
As you've noted, it's not hard to share slaves between masters: just set up multiple workspaces and each master will install its own slave jar. The trick is to share resources properly.
One such resource manager is Apache Mesos. A Jenkins Mesos plugin exists enabling the creation of slaves on a managed cluster.
This approach is very new, and eBay has blogged about how they evolved their Jenkins setup to use Mesos:
eBay CI solution part 1
eBay CI solution part 2
Hope this helps.
There is a Gearman Plugin, developed by OpenStack, to handle sharing slaves between multiple masters.
If it were me, I'd set up all the masters with a cloud plugin for slaves. For example, you could install the Kubernetes plugin or the Nomad plugin and connect all masters up to the same Kubernetes or Nomad cluster. Nomad or Kubernetes would take care of resource management, and the masters would only be submitting jobs to a shared pool of resources. This concept can easily be applied to other cloud providers, like AWS, but IMHO if you just want to set up an on-prem pool of resources for your Jenkins masters from scratch, Nomad is the easiest option.