How to get Jenkins to launch multiple instances of an AMI?

I'm using the EC2 plugin for Jenkins and having problems getting multiple instances to spin up. I have one AMI configured, and a job configured to use it as the build slave. The AMI is configured with 1 executor, and the job has a weight of 1. When I kick off a build, it spins up an instance of the AMI as expected, does everything I need it to do, then terminates the instance when it's done. The problem is that I would like to be able to kick off multiple concurrent builds of this job at once. I have selected "Execute concurrent builds if necessary" in the job config, but when I try to kick off a second build it sits at "pending" because the AMI is already being used by the first build.
When I kick off a second build, I would like it to spin up another instance of the AMI. I know I could copy the AMI and configure it in the EC2 plugin as a second build slave, but I only want to deal with managing one AMI. How can I accomplish this?

You can increase the number of executors on the slave machine so that concurrent jobs can run on the same instance. A second option is to set an idle termination time for your slave. If you set a 10-minute idle termination time, the job that would otherwise sit in a pending state gets picked up by the existing instance once it frees up; after the builds finish, the instance waits 10 minutes, and if no job is triggered in that window, the instance is terminated.
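If you want to check how many executors each node currently exposes, a quick read-only sketch for the script console (Manage Jenkins -> Script Console):

import jenkins.model.Jenkins

// Print every computer (built-in node and agents) with its executor count.
Jenkins.get().computers.each { c ->
    println "${c.name}: ${c.numExecutors} executor(s)"
}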

Make sure that the Instance Cap is more than 1; it's in the main configuration.
If not, please upload your configuration here so we can try to help.
Thanks, Mor

Related

Prevent jobs from running on jenkins slave if a job of slave's own pipeline is running on it

I have a master Jenkins and a slave Jenkins. I have set the slave Jenkins' number of build executors to 1. The slave Jenkins also has 1 pipeline (let's say pipeline A).
Suppose a job from the slave Jenkins' own pipeline is running right now (Job A), and I schedule a job from the master Jenkins for the slave Jenkins (Job B).
I don't want Job B to run while Job A is running, as both jobs use shared resources.
Right now, Job B runs in parallel with Job A, which is causing Job A to fail.
How do I do that?
Thanks!
Your implementation is a bit tricky since you are talking about 2 separate machines with 2 separate Jenkins instances. One option is to get rid of the Jenkins instance in the slave machine and move the Jenkins job that runs on it to the master machine. Then, you can schedule the job to use the resources of the slave machine while being managed by the master machine. If you do that, no further configuration will be needed since you have set the number of executors to 1.
If that is not possible, the other option is to find a way for them to communicate with each other that a build is running. Consider the third point of this answer. You can keep a variable in a database somewhere; when one job starts, it updates the variable. Before the second job starts, it polls the variable to see if a job is already running. If yes, the build doesn't start; if no, the build starts and updates the variable.
Another less elegant solution is to simply have a text file in a location accessible to both machines and write the variable data into that instead of a database.
One way to do this is by using the Lockable Resources Plugin.
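For example, in a pipeline job you would declare a resource under Manage Jenkins -> Configure System -> Lockable Resources and wrap the critical section in a lock step. A sketch (the resource name 'shared-resource' and the build command are placeholders):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Blocks here until 'shared-resource' is free, so the two jobs serialize.
                lock(resource: 'shared-resource') {
                    sh './build-with-shared-resources.sh'   // placeholder build step
                }
            }
        }
    }
}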

Can one configure Jenkins to fail a job if it takes too long to provision an agent?

I'm working with the Jenkins AWS EC2 plugin, which spawns EC2 nodes to execute Jenkins jobs. There are several cases where this plugin can hang indefinitely while waiting for a node to be provisioned. For example, if a project requires python but the EC2 image doesn't have python, Jenkins will spin up a node, fail to run the job, spin up another node, fail to run the job, spin up another node...
Meanwhile, the job hangs forever, Jenkins racks up an Amazon bill, and the console output looks like this:
[Pipeline] Start of Pipeline
[Pipeline] node
Still waiting to schedule task
‘Jenkins’ doesn’t have label ‘ec2worker’
Generally the solution is to just configure the EC2 cloud correctly in the first place, but that's easier said than done. It's easy to imagine, for instance, someone adding node.js as a project dependency without updating the EC2 image, and then Jenkins is off to the races, trying to bill an AWS high score...
Ideally I could configure the plugin to limit the number of provision attempts before quitting, but there isn't an option for this. There is an option to limit the total number of nodes provisioned, but since each node is terminated after it's deemed unsuitable, Jenkins only considers there to be one active node. I.e., the number of nodes oscillates between 0 and 1, as Jenkins creates a node, discards it, and then creates another.
So I'm looking for a workaround. Is there a way to configure Jenkins to fail a build in the provisioning step? Can I limit the time it takes to create a node without limiting the total time of the whole job?
Preferably this configuration would be system-wide. But if it has to get pushed to each project config file, I imagine it looking something like this:
pipeline {
    agent {
        timeout(5m) {
            label 'ec2worker'
        }
    }
}
Is there a Jenkins feature or plugin that does something like this?
In the end I couldn't find anything in Jenkins that did what I wanted -- though the Jenkins EC2 Plugin does seem to have an open ticket for this missing feature.
I solved the problem in AWS with a Lambda function. The Lambda is triggered whenever Jenkins destroys an instance, and from that event it calculates how long the instance was alive. If it wasn't alive long enough (less than the idle period the node normally waits for), then Jenkins must be hanging, and the Lambda kills the job.
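As a partial pipeline-side mitigation, one untested sketch is to switch to scripted syntax and wrap the node allocation itself in a timeout, so the wait for an agent counts against the limit (the label is from the question; the build step is a placeholder):

// Scripted pipeline: the timeout covers waiting for the 'ec2worker' agent,
// not just the steps that run once it is allocated.
timeout(time: 5, unit: 'MINUTES') {
    node('ec2worker') {
        sh './run-build.sh'   // placeholder build step
    }
}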

Is it possible to make Jenkins create workers from attached clouds faster?

I have an instance of Jenkins that uses the mesos plugin. Nearly all of my jobs get triggered via Mesos tasks. I would like to make worker generation a bit more aggressive.
The current issue is that, for the mesos plugin, I have all of the jobs marking the mesos tasks as one-time-usage slaves, and when a build is in progress on one of these slaves, Jenkins forces any queued jobs to wait for a potential executor on these slaves instead of spinning up new instances.
Based on the logs, it also seems like Jenkins has a timer that periodically checks whether any slaves should be spun up based on the number of queued jobs / excess workload. Is it possible to decrease the polling interval for that process?
From the Mesos Jenkins Plugin readme: over-provisioning flags
By default, Jenkins spawns slaves conservatively. Say there are 2 builds in the queue: it won't spawn 2 executors immediately. It will spawn one executor and wait for some time for the first executor to be freed before deciding to spawn the second. Jenkins makes sure every executor it spawns is utilized to the maximum. If you want to override this behavior and spawn an executor for each build in the queue immediately, without waiting, you can use these flags during Jenkins startup:
-Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
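For example, when starting Jenkins directly from the war file (packaged installs usually pass these through JAVA_OPTS or an equivalent service setting instead):

java -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -jar jenkins.war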

Initiate Jenkins job on slave X while it's running on slave Y

Let's say I have a Jenkins job that should run on a slave for several hours. Is there a way, while the job is running, to run it again but this time on a different slave, so the same job runs on 2 slaves in parallel?
Currently, when I try to do that, I get something like this:
(pending—Build # is already in progress)
You will need to check "Execute concurrent builds" in the job.
You will also have to install https://wiki.jenkins-ci.org/display/JENKINS/Throttle+Concurrent+Builds+Plugin
This will allow you to specify how many runs of your job can execute concurrently and how many runs per slave - so you won't have to limit the executors on your slaves to one, and that way you can always ensure there will be no 2 parallel runs on one slave.
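If the job is a pipeline rather than a freestyle job, the plugin also provides a throttle step for scripted pipeline. A sketch, assuming a throttle category named 'two-per-node' has been defined in the plugin's global configuration:

// Scripted pipeline: node blocks inside throttle() count against the
// 'two-per-node' category's global and per-node limits.
throttle(['two-per-node']) {
    node('mylabel') {
        sh './run-tests.sh'   // placeholder build step
    }
}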
Good luck!

Jenkins job is waiting for next available executor

My Jenkins job is a Matrix build that should run on build machines labeled AAA and BBB.
I have three build machines set up, each having label AAA and BBB.
However, when I start the build job, it does not execute. Instead, it goes to the "pending - Waiting for next available executor" state. Why doesn't my job execute?
Check the slave node configuration.
"Usage" field should be "Utilize this slave as much as possible" instead of "Leave this machine for tied jobs only".
Go to Manage Jenkins -> Configure System and increase the number of executors from 0 to 1.
Go to Nodes -> Configure -> # of executors and increase the number there.
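The same change can be scripted. A sketch for the script console that bumps the built-in node's executor count (assumes 1 is the count you want):

import jenkins.model.Jenkins

// Set the number of executors on the built-in node and persist the change.
def j = Jenkins.get()
j.setNumExecutors(1)
j.save()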
Try using the Elastic Axis plugin. After installing it, in the multi-configuration job you will find a new axis type called Elastic Axis. You just need to provide the label of the node, and the job is built on all the nodes with that label. There is a checkbox to skip nodes that are offline.
For me, 2 jobs were already in progress when I tried to execute a third one, and that's why I got "Jenkins job is waiting for next available executor" on the third job.
The first two jobs were triggered automatically (by my scripts), so I didn't realise they were running. After aborting those two jobs to run the third one, the error was resolved.
So, if you face this issue, check whether any other job is already running. If yes, aborting that job, or waiting until it completes, may resolve the issue.
I uninstalled Jenkins and deleted all .jenkins files. Then I reinstalled Jenkins, created the job, and built it successfully.
