Jenkins: How to choose an agent under the same label

Assume that Jenkins has three agents (Agent_1, Agent_2, Agent_3).
All of them are labeled Linux_Server.
Question:
When I select Linux_Server to run a job, by default which agent will Jenkins
choose to actually run the job for me?
Will Jenkins pick one completely at random? Choose a different agent each time? Choose the agent with the fewest running jobs? Or something else?

This is what their wiki says:
Some agents are faster, while others are slow. Some agents are closer
(network wise) to a master, others are far away. So doing a good build
distribution is a challenge. Currently, Jenkins employs the following
strategy:
If a project is configured to stick to one computer, that's always
honored.
Jenkins tries to build a project on the same computer that
it was previously built.
See this question for a few more sophisticated approaches; in particular, the Least Load Plugin might be interesting.
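For illustration, here is a minimal declarative Pipeline sketch (this assumes the Pipeline plugin is installed; the label and agent names come from the question above) that leaves the choice of agent under the shared label entirely to Jenkins:

    // Any agent carrying the Linux_Server label may be picked (Agent_1, Agent_2 or Agent_3).
    // Jenkins prefers the node this job last built on when it is idle; otherwise another
    // matching agent is used.
    pipeline {
        agent { label 'Linux_Server' }
        stages {
            stage('Build') {
                steps {
                    sh 'echo "Building on ${NODE_NAME}"'   // NODE_NAME shows which agent was chosen
                }
            }
        }
    }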

Why does Jenkins unnecessarily change to a different node sometimes?

Why is Jenkins changing node to do another build when it doesn't need to?
We have a Jenkins setup with 3 Mac and 3 Windows nodes, building freestyle projects. We do not use the master for builds. The projects are set to run on any of the 3 nodes suitable for their platform. We are using labels for this purpose.
Some of the time, when we do a build it will do the build on the same node as last time.
But sometimes, without any obvious pattern, it will change to a different node, even though the previously used node is available and not busy. This wastes a lot of time potentially as incremental builds on the same node are much faster than pulling and building everything from scratch.
Jenkins claims it should allocate jobs to the same node if possible, when multiple possibilities exist.
As a result, from the user’s point of view, it looks as if Jenkins tries to always use the same node for the same job, unless it’s not available, in which case it’ll build elsewhere. But as soon as the preferred node is available, the build comes back to it. Reference
Jenkins version is 2.289.2 presently.
The jobs are freestyle builds with shell script/command prompt steps.
Repositories are Git and Mercurial.
I think if you read the Restrict where this project can be run option, that might provide the answer to your question.
As its definition says: By default, builds of this project may be executed on any agents that are available and configured to accept new builds.
So if you don't specify an agent or restrict the build to a particular node, Jenkins will choose one at random from the agents that match.
This wastes a lot of time potentially as incremental builds on the
same node are much faster than pulling and building everything from
scratch.
To solve exactly this problem, Jenkins provides the Restrict where this project can be run option; the inline help for that field in the job configuration describes the behaviour in detail.
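In Pipeline jobs the same restriction can be expressed in the Jenkinsfile. A sketch, assuming a node named mac-build-01 (a hypothetical name used only for illustration; a node's own name works as a label expression):

    // Pin the job to one specific node so its incremental workspace is reused.
    pipeline {
        agent { label 'mac-build-01' }       // hypothetical node name
        stages {
            stage('Incremental build') {
                steps {
                    sh './build.sh'          // placeholder build script
                }
            }
        }
    }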

Execute Build Jobs/Pipelines not on Master but only on Build Agent

Following the Jenkins Best Practices, I want to prevent Build Jobs/Pipelines from being executed on my Jenkins Master.
To do so, I've installed the Job Restrictions Plugin, using it to configure the Master to run only some Maintenance Pipelines.
The problem is that Build Pipelines that are configured to run on specific Agents are now not executed anymore. I see that the Build Queue grows continuously and the Pipelines never run. I think this behaviour could be related to the Flyweight Executors of the Master.
So, the question is: how can I execute only a small subset of Maintenance Pipelines on the Master and, at the same time, execute Build Pipelines only on specific Agents?
You can configure the master node to only be used when explicitly named. Just click the master node, go to Configure, and change Use this node as much as possible to Only build jobs with label expressions matching this node.
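Once the master only accepts jobs whose label expression matches it, every other Pipeline should name an agent label explicitly so it never waits for the master. A sketch, with hypothetical labels:

    // Ordinary build pipelines target a build agent label.
    pipeline {
        agent { label 'linux-build' }        // hypothetical agent label
        stages {
            stage('Build') {
                steps { sh 'make' }          // placeholder build command
            }
        }
    }
    // A maintenance pipeline that really must run on the controller would instead use
    // agent { label 'master' }  (newer Jenkins versions label the controller 'built-in').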
I found the solution that perfectly fits with my needs, here.
To quickly sum up the solution, I was able to exclude all user builds from the Master and run on it only the Jobs/Pipelines of a specific Jenkins folder (IuA in my case) by configuring the Job Restrictions Plugin accordingly.
In order to better understand the logic behind this solution, I recommend you take a look at the link that I posted above.

Add more "executors" to an agent [duplicate]

I have a Windows VM that hosts a VSTS build agent. Due to the number and length of builds that are running, I would like to know whether multiple build agents can be hosted on one computer. That would allow a dedicated agent for slow builds and a dedicated agent for quick builds.
https://www.visualstudio.com/en-us/docs/build/admin/agents/v2-windows
Yes you can run multiple agents in a single VM.
Make two directories, say Agent1 and Agent2, extract the agent into each of them, and configure them with different names against your VSTS/TFS account.
It should work out of the box.
We run 4 agent jobs per machine concurrently with no issues. As mentioned above, it should work out of the box. Just make sure you clean up the directories; we have a script to do it every night.
Yes, this works, I did the following:
Created a PAT for agent installation needs
Downloaded agent binaries from the agent creation page
Unpacked the archive contents into 2 different directories ("c:\ado-build-agents\agent1" and "c:\ado-build-agents\agent2")
Ran "config.cmd" and followed configuration instructions, provided by it.
Updated pipelines to build the agent pool, which those agents reside in ("Default" in my case)
To test the setup - triggered all 15 pipelines, that I had. As the result I was able to see two pipelines running at the same time, while others were in the "Queued" state (according to my expectations).
I will be also testing out how resources are consumed by the agents to try to understand if I should deploy more agents on the build machine.

In what scenario do I need to use a slave node with Jenkins?

I'm new to Jenkins and Continuous Integration, and I noticed that it supports master / slave nodes. I really don't know what that means.
Can someone please tell me in what scenario I need a slave agent?
Here is a scenario:
Our main Jenkins master is running on a Windows machine (yes I know... I know...). We are doing iOS mobile development. There are some things that can only be done using Xcode (which only runs on macOS). I have a Jenkins slave running on that Mac that takes care of executing those tasks that can only run on a Mac.
Why not just set up a new Jenkins instance on that Mac? Because that job is tied together with other jobs (on the Master) in dependencies and the flow. Even promotions on those Xcode tasks are run on the Master.
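A sketch of how such a flow looks as a single Pipeline, with each stage pinned to the node type it needs (the labels windows and mac-xcode are hypothetical, standing for the Windows agents and the Mac slave described above):

    pipeline {
        agent none                                       // no global agent; each stage picks its own
        stages {
            stage('Unit tests') {
                agent { label 'windows' }
                steps { bat 'gradlew.bat test' }         // placeholder Windows build step
            }
            stage('iOS build') {
                agent { label 'mac-xcode' }              // only the Mac can run Xcode
                steps { sh 'xcodebuild -scheme MyApp build' }   // placeholder scheme name
            }
        }
    }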
Jenkins' Master / Slave architecture is used to manage distributed builds.
There are many different scenarios in which you might want to use a distributed build system. It all depends on your projects' load and dependencies.
Pretty much, the Master is what you're probably currently using and is responsible for scheduling builds, dispatching jobs to slaves, and monitoring the results, but it can also execute jobs itself. A slave is a Java executable that sits on a remote server waiting for instructions from the master (to execute builds).
To use this functionality in Jenkins, go to the "Manage Jenkins" screen and click on "Manage Nodes".
https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds
For a more in depth summary of distributed builds with Jenkins, as well as some scenarios where this system would be useful, and how to implement it, please read chapter 11: Distributed Builds of Jenkins: The Definitive Guide by John Ferguson Smart
http://wakaleo.com/books/jenkins-the-definitive-guide/download-jtdg-pdf
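In Pipeline terms the dispatching is explicit: a minimal scripted sketch (the label linux is hypothetical) in which the master schedules the job and the node() step hands the enclosed work to a matching agent:

    node('linux') {                              // hypothetical agent label
        stage('Build') {
            echo "Running on ${env.NODE_NAME}"   // everything inside node() executes on the agent
            sh 'make'                            // placeholder build command
        }
    }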

Build Pipeline Plugin & Manual Deployment With Parameter

Let's say I have this situation. I have three jobs. Job number one has two manually triggered downstream jobs (deploy to test and deploy to prod, for example).
I want the deployment jobs (test-job-2, test-job-3) to require a password before they are triggered. How can I solve this with Jenkins?
The only option right now supported by the Build Pipeline Plugin is to have a manually deployed downstream job. But this job starts right after you click on it. I would like to require the user to manually enter some parameters (password for example).
Is there some workaround? I was thinking of using the Promoted Builds Plugin. So the deployment jobs would run in a "dry run mode" - just checking that we have ssh access to the server and some other basic stuff. And then in order to deploy you will have to promote the build.
This approach isn't very nice though. Build pipeline and promoted builds plugins don't interact with each other very well.
This is not exactly what you want, but I guess it would somehow solve your problem.
View Job Filters
Using this feature in tandem with a security feature such as standard matrix-based security can help you create a view that will show different jobs depending on who is logged in.
I use different Jenkins servers to "complete the pipeline", using a Build Publisher job to publish the last part of the pipeline job to the other Jenkins. I then pick it up from there. Operations teams have access to the "prod" Jenkins system, and developers have access to the "dev" system.
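On Jenkins versions with the Pipeline plugin, a similar gate can be written directly in a Jenkinsfile using an input step that pauses the deploy stage until someone approves it and supplies a value. This is not the Build Pipeline plugin asked about above, just a sketch of an alternative; the names DEPLOY_PASS, build.sh and deploy.sh are illustrative only, and a real setup would verify credentials properly rather than collect a literal password:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { sh './build.sh' }            // placeholder build step
            }
            stage('Deploy to prod') {
                input {
                    message 'Deploy to production?'
                    parameters {
                        password(name: 'DEPLOY_PASS', defaultValue: '', description: 'Deployment password')
                    }
                }
                steps {
                    sh './deploy.sh'                 // runs only after the prompt is answered
                }
            }
        }
    }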
