Jenkins: trigger build on another slave

Consider this scenario -> I have 2 Jenkins slaves, Slave1 and Slave2, running jobs DeployJob1 and DeployJob2 respectively.
Here is my requirement -> whenever DeployJob1 finishes successfully, I want to trigger DeployJob2.
Now, the problem is that the two jobs are on 2 different slaves. Is there a plugin that can help with this?
Note: I have already tried the Parameterized Trigger Plugin, but that only helps when the second job is on the same slave.
Thanks in advance.

You should be able to achieve this with the Parameterized Remote Trigger Plugin by pointing it at your master and tying DeployJob2 to Slave2 in its configuration (if it isn't already).
https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Remote+Trigger+Plugin
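If you end up scripting this in a Pipeline, newer versions of that plugin also expose a triggerRemoteJob step. A minimal sketch, assuming a remote server entry named 'local-master' (a hypothetical name) has been configured under Manage Jenkins and that DeployJob2 is label-restricted to Slave2:

// Scripted Pipeline for DeployJob1 (sketch, not a drop-in configuration)
node('Slave1') {
    // ... build / deploy steps for DeployJob1 ...
    stage('Trigger DeployJob2') {
        // 'local-master' points back at this same Jenkins master;
        // the parameter name is only illustrative
        triggerRemoteJob remoteJenkinsName: 'local-master',
                         job: 'DeployJob2',
                         parameters: 'TRIGGERED_BY=DeployJob1'
    }
}

If both jobs live on the same master anyway, the core Pipeline build step (build job: 'DeployJob2'), with DeployJob2 label-restricted to Slave2, achieves the same effect without the extra plugin.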

Related

Is the Parameterized Remote Trigger plugin supported in Pipeline script (Groovy)?

I have two jobs, each running on a different host, and both hosts are masters. I want to trigger job B after job A successfully completes its build. How can I set this up with Groovy Pipelines? I cannot do it with a plugin option. Is the Parameterized Remote Trigger plugin supported in Pipeline? Please give a detailed answer.
Thanks in advance.

Can I call a Jenkins job from Udeploy?

I have a different kind of requirement wherein I want a Jenkins job to trigger automatically once an artifact is deployed to uDeploy. I know this is the reverse of what is usually done (a Jenkins job calling uDeploy).
I wanted to know if there is any way to do so?
We use curl to trigger the Jenkins job. In the uDeploy component configuration there is an option to "Run Process after a Version is Created"; hope this helps.

Jenkins Build Distribution among Slaves

I need some advice on how to control the way Jenkins slaves are used and how jobs are triggered.
Background / Constraints:
I have a sequence of 10 jobs that run one after another using the "Trigger parameterized build on other projects" option from the Parameterized Trigger Plugin.
Each build in this sequence must run on the same node (I do this using "Build on the same node", which is also configured in the parameterized build step and comes from the NodeLabel Plugin).
I have 5 slaves (the current number of executors per slave is 1, but I am open to suggestions here...).
Once a slave is occupied by a build sequence, no other job can run on it. When I had only 1 slave, I enforced this using "Block build when downstream project is building".
The way I configured which slave is chosen when the first job is triggered is one of the following (none of them solved my problem):
a. Using "Restrict where this project can be run" with a label that all relevant slaves carry.
b. Using "This build is parameterized" (Parameterized Trigger Plugin) and adding a "Node" parameter with the list of slaves the user can choose from.
What I want to achieve:
When a user triggers the build of the first job in the build sequence, that build should run on one of the idle slaves (I mean a slave that is doing nothing at the moment).
If there are no idle slaves, it should join the queue of one of them (it doesn't matter which).
Any suggestions how to solve it?
Thanks!
Try passing ${NODE_NAME} as a NodeLabel parameter in a post-build trigger to the downstream jobs. If that works, you may need to pass it to every job.
Try the NodeLabel Parameter plugin. You can make jobs run on whichever slave node is free at that moment.
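For reference, the same idea expressed with core Pipeline steps instead of the freestyle post-build trigger (a sketch; the label 'build-pool', the job name 'SequenceJob2' and the parameter 'TARGET_NODE' are hypothetical, and the downstream job must declare that parameter):

// Upstream job: note which node it landed on and hand it to the next job in the sequence
node('build-pool') {                 // label shared by the 5 slaves
    stage('Do the work') {
        echo "Running on ${env.NODE_NAME}"
    }
    stage('Trigger next job on the same node') {
        build job: 'SequenceJob2',
              parameters: [string(name: 'TARGET_NODE', value: env.NODE_NAME)]
    }
}

// Downstream job (SequenceJob2) then pins itself to the node it was given:
// node(params.TARGET_NODE) { ... }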

Jenkins not executing jobs (pending - waiting for next executor)

Jenkins won't execute any jobs. Having viewed this question, I have disabled all slave nodes but a simple job won't even run on the Master node.
What is wrong?
The Jenkins admin console can still be up even when the master node is offline. This can happen when Jenkins runs out of disk space.
To confirm, do the following (with thanks to geekride - jenkins-pending-waiting-for-next-available-executor):
go to Jenkins -> Manage Jenkins -> Manage Nodes
examine the "master" node to see if it is offline. It may be reporting that the master node is out of disk space.
go to Jenkins -> Manage Jenkins -> Manage Nodes
examine the "master" node (click on the configure icon).
In my case, the number of executors was set to 0.
Increasing it fixed the issue.
In my case, I had the following set in my Jenkinsfile:
node('node') {
    ...
}
There was no node called 'node', only master (the value had been left in there after following some basic tutorials). Changing the value to 'master' got the build working.
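In other words, the label has to name a node or label that actually exists. A minimal corrected sketch (assuming the controller is still labelled 'master', as in that setup):

node('master') {    // or any label that at least one existing node carries
    echo "Running on ${env.NODE_NAME}"
}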
In my case it was caused by the number of executors (I had 1) combined with running a Jenkins job (project) from a Pipeline (my pipeline script started another job in Jenkins). This caused a deadlock: my pipeline held the executor and was waiting for its job, but the job was waiting for a free executor.
The solution may be to increase the number of executors in Jenkins -> Manage Jenkins -> Manage Nodes -> Configure (icon on the required node).
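A sketch of the pattern that produces this deadlock (the job name is hypothetical):

// With only one executor on the node, this pipeline occupies it...
node {
    // ...and then queues DownstreamJob, which also needs an executor: deadlock.
    build job: 'DownstreamJob'
    // Workarounds: add executors, or fire-and-forget the downstream build:
    // build job: 'DownstreamJob', wait: false
}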
I ran into a similar problem because my master was set to "Leave this machine for tied jobs only." So, even though I disabled the Slave, Jenkins kept on bypassing the Master, looking for something else.
Go to Jenkins --> Manage Jenkins --> Manage Nodes, and click on the configure button of your master node (looks like a screwdriver and a wrench). Check the Usage and make sure it's on "Utilize this slave as much as possible."
In my case, I had just installed the "Authorize Project" plugin and incorrectly setup the strategy in "Manage Jenkins -> Configure Global Security -> Access Control for Builds" as "Run as anonymous". So 'anonymous' had no rights to execute the job.
Setting the first strategy as "Run as User who Triggered Build" unlocked the queued jobs.
I'm a little late to the game, but this may help others.
In my case my jenkins master has a shared external resource, which is allocated to jenkins jobs by the external-resource-dispatcher-plugin. Due to bug JENKINS-19439 in the plugin (which is in beta), I found that my resource had been locked by a previous job, but wasn't unlocked when that previous job was cancelled.
To find out if a resource is currently in the locked state, navigate to the affected jenkins node, Jenkins -> Manage Jenkins -> Manage Nodes -> master
You should see the current state of any external resources. If any are unexpectedly locked this may be the reason why jobs are waiting for an executor.
I couldn't find any details of how to manually resolve this issue.
Restarting jenkins didn't resolve the problem.
In the end I went with the brutal approach:
Remove the external resource
(see Jenkins -> Manage Jenkins -> Manage Nodes -> master -> configure)
Restart jenkins
Re-create the external resource
In my case, I noticed this behavior when the box was out of memory (RAM)
I went to Jenkins -> Manage Jenkins -> Manage Nodes and found an out of memory exception.
I just freed up some memory on the machine and the jobs started to go into the executors.
Short answer:
Kill all the jobs which are running on the master.
In my case there were 3 jobs hung on the master for more than 10 days which were unnoticed. We usually do not run any jobs directly on the master, everything is run on slaves. I killed these 3 jobs which were hung, automatically the executors on the slave started picking up jobs.
A point to note: even though we have 8 slaves, only 1 slave was in this affected state.
[EDIT] We found the answer to why only one slave was in this affected state.
When a Jenkins slave goes down, all the pending jobs automatically get transferred over to the master. All 3 hung jobs which I killed were from this slave, so it's likely a connection issue between the master and this particular slave.
For me, the solution below worked.
Jenkins --> Manage Jenkins --> Manage Nodes --> master --> configure -->
Node properties --> "Restrict jobs execution at node" was enabled and limited to specific users. I granted access to myself and then the job started to run.
If "Restrict jobs execution at node" is enabled, scheduled tasks cannot run.
In my case it is similar to @Michael Easter's: I got a problem in a job due to lack of disk space. I cleared some space and restarted Jenkins, but the problem still persisted.
The solution was to go to Jenkins -> Manage Jenkins -> Manage Nodes and just click the button to update the status.
In my case I had to set "Execute concurrent builds if necessary" in the job's General settings.
I had just added a stage with a docker agent. Since I only had one node, master, I had to tell the container to reuse the node from the earlier stages:
agent {
    docker {
        image 'bitnami/mongodb:latest'
        reuseNode true
    }
}
You can vote to have this be the default behavior for agents and prevent this kind of lock.
In case you have installed the Parameterized Trigger plugin, a job stuck waiting in the build queue might be hitting the known issue JENKINS-47792.
A workaround fix is to downgrade the Parameterized Trigger plugin to version 2.35.1.
Note that you may be required to downgrade dependencies, such as the git plugin, as well.
What worked for me: I finally noticed the Build Executor Status window on the left of the main Jenkins dashboard. I run a dev/test instance on my local system with 2 executors. Both were occupied by builds that were not actually running. Upon cancelling these two jobs, my third (pending) job was able to run.
For me I have to restart the executors manually. Click on "Dead" under "Build Executor Status" and push the restart button.
I ran into a similar problem because my master's "# of executors" setting (the maximum number of concurrent builds that Jenkins may perform on this agent) was too low.
Go to Jenkins --> Manage Jenkins --> Manage Nodes, and click on the configure button of your master node (increase the number of executors to run multiple jobs at a time).
You may have milestones set up and your job won't run until the previous job is complete.
In my environment (images jenkins:lts-jdk11, jnlp-agent-maven and jnlp-agent-maven:jdk11), after pulling an updated Jenkins image no connectivity problem was reported on the agents, but all jobs stayed blocked with the message:
pending—Waiting for next available executor on
So the solution for me was to stop the agents and pull both agent images again:
docker pull jenkins/jnlp-agent-maven (jdk8)
docker pull jenkins/jnlp-agent-maven:jdk11
When you hit this, just restart the Jenkins master; then it recovers.

CI with Jenkins: how to force building happen on slaves instead of master?

I am using Jenkins for CI. I have a master and two slaves; the master is running Jenkins, and I want only the slaves to do the actual building. Is there anywhere I can configure this? I know there is an 'executors' setting, and if I change it to 0 on the master, the master probably won't build anything, but is there a proper way to do this?
You can set where a job will be run using the "Restrict where this project can be run" option in your job.
This setting can be used together with tags (labels) you have added to your slaves.
For example, with two slaves that have the tag "Linux-buildserver", using that tag will distribute the job across those two slaves.
Setting the IP address as a tag in the job will make sure only that build server / slave is used.
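In a Declarative Pipeline the same restriction is expressed with an agent label (a sketch; 'Linux-buildserver' is the example tag from above):

pipeline {
    // Run only on nodes carrying the 'Linux-buildserver' label, never on the master
    agent { label 'Linux-buildserver' }
    stages {
        stage('Build') {
            steps {
                echo "Building on ${env.NODE_NAME}"
            }
        }
    }
}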
One of my first steps in setting up a new Jenkins master is to do what you mention in your question, set "executors" to zero in the master server config.
This prevents anything from ever building on master.
While configuring the node, there is an option, "Leave this machine for tied jobs only". If that option is selected, the slave will only be used by jobs that are restricted (tied) to run on it.
A better way to do this is to give the master a label and restrict what runs on the master; that way, each of your builds doesn't have to specify a label.
