I added 3 new slaves to my Jenkins master, so in total my master has 5 powerful slaves, all of the same instance type (same RAM/CPU, disks, and network).
My Jenkins master is version 2.78.
I am not using the "Restrict where this project can be run" option. However, Jenkins executes builds mostly on slave1 and slave2; it rarely executes jobs on slaves 3-5.
Is there a configuration option within Jenkins that prioritises the slaves?
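For illustration, one workaround (not a priority setting) is to steer particular jobs onto the idle slaves with a label expression; a minimal pipeline sketch, assuming the nodes are literally named slave3, slave4, and slave5:

    // Minimal sketch: force this build onto one of the idle slaves.
    // The node names slave3..slave5 are assumptions, not confirmed config.
    pipeline {
        agent { label 'slave3 || slave4 || slave5' }
        stages {
            stage('Build') {
                steps {
                    echo "Running on ${env.NODE_NAME}"
                }
            }
        }
    }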
Suppose that both projects are freestyle, or that one is freestyle and the other is a pipeline project.
How do we then configure them so that both jobs can run simultaneously (in parallel)?
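For the pipeline side, a minimal sketch of running two builds at the same time, assuming the nodes have at least two free executors between them (for two freestyle jobs, the equivalent is simply enough executors plus "Execute concurrent builds if necessary"):

    // Minimal sketch: two stages run in parallel, each grabbing
    // whatever executor is free. Stage names are illustrative.
    pipeline {
        agent none
        stages {
            stage('Run both in parallel') {
                parallel {
                    stage('Job A') {
                        agent any
                        steps { echo 'Job A running' }
                    }
                    stage('Job B') {
                        agent any
                        steps { echo 'Job B running' }
                    }
                }
            }
        }
    }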
I have three VMs which I use to deploy the develop, staging, and master branches of a project.
Let's say Jenkins is running on a VM named JEN,
the develop branch on a VM named DEV,
the staging branch on a VM named STAGE,
and the master branch on a VM named MASTER.
I have made three slave nodes (DEV, STAGE, MASTER) on Jenkins, and the three branches' Jenkinsfiles run on the corresponding VMs (DEV, STAGE, MASTER).
Another approach I am considering is:
Not making DEV, STAGE, and MASTER slave nodes; that is, we have only one Jenkins agent (JEN).
We would run the pipeline and its tests on JEN and use Ansible to deploy remotely to DEV, STAGE, and MASTER.
How would that compare with the first approach?
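For context, a minimal sketch of the first approach, assuming a multibranch pipeline and slave nodes labelled after the VMs (DEV, STAGE, MASTER); the branch-to-label mapping is an assumption:

    // Minimal sketch: map the branch being built to a node label.
    def deployLabel = [develop: 'DEV', staging: 'STAGE', master: 'MASTER'][env.BRANCH_NAME] ?: 'DEV'

    pipeline {
        agent { label "${deployLabel}" }
        stages {
            stage('Test and deploy') {
                steps {
                    echo "Building ${env.BRANCH_NAME} on ${env.NODE_NAME}"
                }
            }
        }
    }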
First, I believe it is Ansible, not ancible.
Second, the advantage of an Ansible deployment model is that it is agentless (as opposed to Jenkins, which needs an agent listener, agent.jar).
So if what you need to deploy is not the sources but deliverables, Ansible is better suited for that task, provided the target machines are accessible through SSH.
The Jenkins pipeline would simply do a tower_cli call to the right Ansible Job Template: that is what I have in my deployment platform.
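A minimal sketch of such a call, assuming tower-cli is installed and authenticated on the agent; the job template ID (42) is a placeholder:

    pipeline {
        agent { label 'JEN' }
        stages {
            stage('Deploy via Ansible Tower') {
                steps {
                    // Launch the matching Job Template and wait for it to finish.
                    sh 'tower-cli job launch --job-template=42 --monitor'
                }
            }
        }
    }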
I have tried every permutation I can find to pull a pre-existing variable from a specific Jenkins slave, and I cannot find a solution.
We have a git branch variable defined on each slave agent as the default branch for all builds initiated on that slave. This is to ensure that all DSL-scripted job config is tested on our dev machine before it is promoted to a higher Jenkins environment.
I have created a pipeline that builds all the components needed to stand up a new Jenkins (with all of our enterprise deployment pipelines created), and it needs access to that one specific variable to correctly build the jobs based on the Jenkins master/slave combo it is running on.
I need a way (in a Jenkinsfile) to access the variables that are configured on a particular Jenkins slave machine.
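A minimal sketch, assuming the variable is defined as a node-level environment variable (under the slave's "Node Properties"); the names dev-slave and DEFAULT_BRANCH are placeholders:

    // Node-level environment variables are injected into env for any
    // build that runs on that node, so they can be read directly.
    node('dev-slave') {
        def branch = env.DEFAULT_BRANCH
        echo "Default branch on ${env.NODE_NAME} is ${branch}"
    }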
I have one job and two slave nodes. "Workspace" is shown in the job overview, but it contains only "Workspace of job on slave2". I run two builds of this job in parallel (one build runs on slave1 and one on slave2).
I tried Jenkins 2.74 and 1.658. I use Windows 7 for both server and slave. I configured the Jenkins job to "Execute concurrent builds if necessary". The description says:
Each concurrently executed build occurs in its own build workspace, isolated from any other builds. By default, Jenkins appends "@" to the workspace directory name, e.g. "@2".
The separator "#" can be changed by setting the hudson.slaves.WorkspaceList Java system property when starting Jenkins. For example, "hudson.slaves.WorkspaceList=-" would change the separator to a hyphen.
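For illustration, such a property is typically passed on the Jenkins launch command (a sketch, assuming Jenkins is started directly from the war):

    java -Dhudson.slaves.WorkspaceList=- -jar jenkins.war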
I also use "Restrict where this project can be run" with: slave1||slave2
How can I display links to all workspaces on all configured slaves at the same time in the Jenkins web interface? I thought they would be shown as workspace@1 and so on.
I'm currently using Jenkins with clustered slaves.
There is a multi-configuration job (triggered by git hooks) which does nothing; its purpose is just to pull a git repository on all slaves. This repository stores all the scripts needed by other Jenkins jobs. Thanks to this, we can update the jobs' build scripts through git and ease the maintenance of our Jenkins instance (I'm the only one familiar with Jenkins, and so set up this mess for my team).
However, the scripts aren't pulled into the same place on each slave:
On two slaves, the repository is pulled into both /builds/workspace/<multi-configuration job's name>/label/<slave's name> and /builds/workspace/<multi-configuration job's name>.
On the last slave, the repository is only pulled into /builds/workspace/<multi-configuration job's name>/label/<slave's name>.
So, I have two questions:
Is it a good idea to use such a multi-configuration job to synchronize build scripts on all the slaves?
Why doesn't the multi-configuration job pull the repository into the same place on all slaves?
About the configuration:
The source code management and build triggers are configured to allow triggering from git hooks (Poll SCM is checked).
There is no build step.
Here is the configuration matrix:
(The master node is unchecked)
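For comparison, a minimal sketch of one alternative to the matrix job: a scripted pipeline that checks out the scripts repository on each slave in parallel. The slave names are placeholders, and checkout scm assumes the pipeline itself is loaded from that repository:

    // Slave names below are placeholders for the real node names.
    def slaves = ['slave1', 'slave2', 'slave3']
    def branches = [:]
    for (s in slaves) {
        def nodeName = s          // capture the loop variable for the closure
        branches[nodeName] = {
            node(nodeName) {
                // Pull the shared scripts repository into this slave's workspace.
                checkout scm
            }
        }
    }
    parallel branches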