Single Jenkins Computer/Node used by more than one Jenkins

I am curious whether it is possible to have one computer/node shared between two Jenkins instances.
I know that one can create identical node configurations on both Jenkins instances and just change the workspace, but I am aiming for this scenario:
Jenkins A with Node A
Jenkins B with Node A
Jenkins A starts job on Node A
Jenkins B puts Node A offline
Jenkins B starts job on Node A
Jenkins A puts Node A offline
Node A can only be used by one instance at a time, and there should be no parallel jobs running on it, regardless of the different workspaces.
I know that this can be achieved with a single Jenkins instance, but how can it be done when at least two of them have to be used?
NOTE: both Jenkins instances are located on different machines.

I believe the Jenkins Mesos plugin is what you need.
Use the following steps to install and configure it:
step1
step2
step3
step4
Of course, you have to install a Mesos cluster yourself before doing the above; see the Mesos official site. This may take some time if you are not familiar with Mesos.
And in step 4, you should set the slave CPUs equal to the number of CPUs on your real slave node.
The principle is:
Mesos first sends a resource offer to Jenkins Master A, and Jenkins Master A launches its tasks on slave node A. Since the resources are now used by Jenkins Master A, the offer is not sent to Jenkins Master B, so Jenkins Master B waits.
After Jenkins Master A finishes its task, the Mesos resource offer for slave node A is sent to Jenkins Master B, and then Jenkins Master B can start its task.
The slave can be used by both Jenkins Master A and Jenkins Master B, but never at the same time.
Hope it helps you.

Related

Prevent jobs from running on jenkins slave if a job of slave's own pipeline is running on it

I have a master Jenkins and a slave Jenkins. I have kept the slave Jenkins' number of build executors at 1. The slave Jenkins also has 1 pipeline (let's say pipeline A).
Let's suppose a job from the slave Jenkins' own pipeline is running right now (Job A). I schedule a job from the master Jenkins for the slave Jenkins (Job B).
I don't want Job B to run while Job A is running, as both jobs use shared resources.
Right now, Job B runs in parallel with Job A, which is causing Job A to fail.
How to do that?
Thanks!
Your implementation is a bit tricky since you are talking about 2 separate machines with 2 separate Jenkins instances. One option is to get rid of the Jenkins instance in the slave machine and move the Jenkins job that runs on it to the master machine. Then, you can schedule the job to use the resources of the slave machine while being managed by the master machine. If you do that, no further configuration will be needed since you have set the number of executors to 1.
If that is not possible, the other option is to find a way for them to communicate with each other that a build is running. Consider the third point of this answer. You can have a variable in a database somewhere, and when one job starts, it updates the variable. Before the second job starts, it has to poll the variable to see whether a job is already running. If yes, the build doesn't start; if no, the build starts and updates the variable.
Another less elegant solution is to simply have a text file in a location accessible to both machines and write the variable data into that instead of a database.
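For illustration, here is a minimal sketch of that shared-file idea as a Declarative Pipeline; the /mnt/shared path, the 'shared-node' label, and run-build.sh are hypothetical placeholders, and it assumes the shared location is mounted on whichever machine runs the build:

    // Hypothetical sketch: cross-instance mutex via a directory on shared storage.
    // mkdir is atomic, so it can serve as a crude lock between the two Jenkins instances.
    pipeline {
        agent { label 'shared-node' }          // hypothetical label for the shared machine
        stages {
            stage('Acquire lock') {
                steps {
                    sh '''
                        until mkdir /mnt/shared/jenkins-job.lock 2>/dev/null; do
                            echo "Another build holds the lock, waiting..."
                            sleep 30
                        done
                    '''
                }
            }
            stage('Build') {
                steps {
                    sh './run-build.sh'        // hypothetical build step
                }
            }
        }
        post {
            always {
                sh 'rmdir /mnt/shared/jenkins-job.lock || true'   // always release the lock
            }
        }
    }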
One way to do this is by using the Lockable Resources Plugin.
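As a rough sketch (and assuming the competing jobs run on the same Jenkins instance, since Lockable Resources locks are scoped to a single controller), the resource name, agent label, and build step below are placeholders; the resource itself is defined under Manage Jenkins > Configure System > Lockable Resources:

    // Sketch: serialize builds that touch the shared resource using the lock() step
    // from the Lockable Resources Plugin.
    pipeline {
        agent { label 'slave-node' }                  // hypothetical agent label
        stages {
            stage('Build') {
                steps {
                    lock(resource: 'shared-resource') {   // hypothetical resource name
                        sh './run-build.sh'               // hypothetical build step
                    }
                }
            }
        }
    }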

Is the Jenkins workspace on the master or the worker?

Who does the actual cloning of the project: the master or the agent node? If it is the master, then how does the agent node actually execute the job? If it is the agent node, how can we view the workspace in the browser?
When people ask "where is the workspace", the answer is usually a path, but I am more interested in where that path is: on the master or the agent node? Or maybe both?
Edit1
Aligned terminology to this: https://jenkins.io/doc/book/glossary/ in order to avoid confusion.
In a Jenkins setup, all the machines are considered nodes. The master node connects to one or more agent nodes. Executors can run on both the master and the agent nodes.
In my scenario, no executors run on the master. They are run only on the agent nodes.
The answer is: it depends!
First of all, although it is not good practice IMO, some installations let the master be an actual worker and run jobs. In this case, the workspace will be on the master.
If you configured the master not to accept jobs, there are still occasions when a workspace can be created on the master. A good example is when your job is a "Pipeline script from SCM". In this case, the master will create a workspace for the job, clone the target repo, read the pipeline, and start the needed jobs on whatever slave is targeted, creating a workspace there to run the actions themselves. If the pipeline targets multiple slaves, there will be a workspace on each of them.
In simple situations (e.g. a Maven or freestyle job), the workspace will only be on the targeted slave.
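A quick way to see this for yourself is a throwaway Pipeline that just prints its workspace path and host; the 'linux-agent' label is an assumption for whatever agent you target:

    // Minimal sketch: the workspace path printed here lives on the agent that runs the stage.
    pipeline {
        agent { label 'linux-agent' }          // hypothetical agent label
        stages {
            stage('Show workspace') {
                steps {
                    // Prints something like /home/<user>/jenkins/workspace/<job> plus the agent's hostname
                    sh 'echo "Workspace: $WORKSPACE on $(hostname)"'
                }
            }
        }
    }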
I needed to dig a bit deeper to understand this.
I ran a brand new instance of Jenkins and I attached a single agent node. I used SSH and I set the remote (agent) root directory to: /home/igorski/jenkins
As soon as I attached the node the remoting folder and remoting.jar showed up in that root directory.
I ran a basic Gradle Java pipeline job (Jenkinsfile in the project).
The workspace showed up on the slave. Not on the master.
From the Jenkins GUI I can access the workspace and see its contents.
The moment I kill the agent machine, I can no longer view the workspace in Jenkins.
My guess is that the remoting.jar somehow does a live sync.
I also ran a freestyle project and I can confirm the same. As soon as the agent is killed I can no longer open the Workspace and I get an error stack trace:
hudson.remoting.Channel$CallSiteStackTrace: Remote call to JenkoOne
This was much more obvious with the Pipeline job, though. There you get a link to the agent that you need to click in order to see the contents. As soon as the agent is gone, the link is disabled, and you know exactly on which agent the workspace is. With freestyle jobs, you just get a Workspace link; there is no indication of which agent it is on or whether the agent is accessible at the moment.
So, both Zeitounator and fabian were correct.

Jenkins - triggered builds on all Nodes

Currently, we have two machines. One has Jenkins installed and acts as the master, and the other one is a slave. The number of executors for both nodes is set to 1.
I am not exactly sure how Jenkins works behind the scenes, but currently when I trigger 2 build jobs simultaneously, they somehow run only on the slave node (with the other build job put in the queue). If I disconnect the slave and leave only the master, they run on the master (with the other build job again put in the queue).
How do I configure Jenkins so that it leverages all my available nodes (master and slave)? In other words, I would like all available nodes to consume the queue, not just one of them.
As I understand it, you need to enable the 'Execute concurrent builds if necessary' option in your job configuration, and then you will be able to run your job simultaneously on all available nodes.
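For Pipeline jobs this behaviour is the default; a rough sketch follows, where 'make build' is just a placeholder command:

    // Sketch: 'agent any' lets Jenkins pick whichever node (master or slave) has a free executor.
    // Concurrent builds are allowed for Pipeline jobs unless options { disableConcurrentBuilds() }
    // is added, so two triggers can run on two nodes at the same time.
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'make build'            // hypothetical build command
                }
            }
        }
    }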
In addition to the above answer: we can also restrict the job to a particular node on which it should run.
For example, a setup of 3 servers (2 Linux and 1 Windows):
1 Linux server acts as master
1 Linux server acts as a node
1 Windows server acts as a node
If you have a job that needs to run on the Windows node, you can go to the job configuration and restrict the job to run on that node using the node name or label.
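For a freestyle job that is the 'Restrict where this project can be run' field; for a Pipeline job, a rough equivalent is sketched below, where 'windows' stands for whatever label you gave that node and the build command is a placeholder:

    // Sketch: pin the job to the Windows node via its label.
    pipeline {
        agent { label 'windows' }              // hypothetical label assigned to the Windows node
        stages {
            stage('Build') {
                steps {
                    bat 'gradlew.bat build'    // hypothetical Windows build command
                }
            }
        }
    }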
Additionally, the number of executors defines how many builds a slave or master node can run in parallel across different jobs.
To run the same job concurrently, you need to check the 'Execute concurrent builds if necessary' option and assign a label that has more than one node in it.
Cheers,
Yash

Jenkins master and Slave installation on CI/CD pipeline

I am trying to implement a CI/CD pipeline using Kubernetes and Jenkins. I am planning to use a Kubernetes HA cluster with 3 master and 5 worker machines/nodes.
Now I am exploring implementation tutorials for the CI/CD pipeline, and also Jenkins usage with a Kubernetes HA cluster. While reading, I got a bit confused about Jenkins, so I am adding my questions here.
1. I have 8 VMs in total - 3 master and 5 worker machines/nodes (Kubernetes cluster). If I install Jenkins on any one of the worker machines, is there any problem integrating it with the CI/CD pipeline for deployment?
2. I previously read the following link to understand the implementation:
https://dzone.com/articles/easily-automate-your-cicd-pipeline-with-jenkins-he
Is it mandatory to use a Jenkins master and slave? The tutorial shows that if kubectl, helm, and docker are installed, there is no need to use a Jenkins slave. What is the idea behind master and slave here?
3. If I am installing both the Jenkins master and slave on Kubernetes cluster worker machines/nodes, do I need to install the master and slave on separate VMs? I am still confused about where to install Jenkins.
I have just started with CI/CD pipelines, Kubernetes, and Jenkins.
Jenkins has two parts. There's the master, which manages all the jobs, and the workers, which perform the jobs.
The Jenkins master supports many kinds of workers (slaves) via plugins - you can have standalone nodes, Docker-based slaves, Kubernetes-scheduled Docker slaves, etc.
Where you run the Jenkins master doesn't really matter very much; what is important is how you configure it to run your jobs.
Since you are on Kubernetes, I would suggest checking out the Kubernetes plugin for Jenkins. When you configure the master to use this plugin, it will create a new Kubernetes pod for each job, and this pod will run the Docker-based Jenkins slave image. The way this works is that the plugin watches for a job in the queue, notices there isn't a slave to run it, and starts the Jenkins slave Docker image, which registers itself with the master, runs the job, and then gets deleted. So you do not need to directly create slave nodes in this setup.
When you are in a Kubernetes cluster in a container based workflow, you don't need to worry about where to run the containers, let Kubernetes figure that out for you. Just use Helm to launch the Jenkins master, then connect to the Jenkins master and configure it to use Kubernetes slaves.
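As a rough sketch of what a job can look like with the Kubernetes plugin (the pod definition, image, and commands are assumptions, and it presumes a Kubernetes cloud is already configured in Jenkins):

    // Sketch: the Kubernetes plugin spins up this pod for the build, runs the steps
    // inside it, and tears the pod down afterwards - no permanent slave nodes needed.
    pipeline {
        agent {
            kubernetes {
                yaml '''
                    apiVersion: v1
                    kind: Pod
                    spec:
                      containers:
                      - name: maven
                        image: maven:3.8-openjdk-11    # hypothetical build image
                        command: ['sleep']
                        args: ['infinity']
                '''
            }
        }
        stages {
            stage('Build') {
                steps {
                    container('maven') {
                        sh 'mvn -version'              // hypothetical build step
                    }
                }
            }
        }
    }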

Jenkins master(s) as a slave of another master(s)

This may be a crazy idea, but I'm just throwing it out there.
Is it possible to have one Jenkins master's executors available as slaves (executors) for another Jenkins master?
I.e., let's say JenkinsMaster1 has 10 executors. It has a bunch of slaves (on various OSes with various numbers of executors per slave), but all of them are in use/running something.
There's another instance, JenkinsMaster2, with the same setup (a bunch of slaves with N executors each), but this one has some/a lot of free executors (on the master or its slaves).
The question is NOT why I can't just create a new slave for JenkinsMaster1 when I need a job configured on the JenkinsMaster1 instance to run (while every other executor on JenkinsMaster1/its slaves is in use), or why not add more executors to JenkinsMaster1's master/slaves, BUT how can I (is it even possible to) use JenkinsMaster2's executors (or its slaves, i.e. those owned by JenkinsMaster2) to run a job which is configured on JenkinsMaster1?
