Goal:
I would like to use the Amazon EC2 Plugin to add dynamic slaves to Jenkins based on the load.
Architecture:
Jenkins Master + 4 slaves + dynamic slaves (added as needed)
1st job runs on dynamic slave (no concurrent jobs) - label1 (ami-12345)
2nd job runs concurrently on dynamic slaves - label2 (ami-23314)
These two jobs use different AMIs and different labels.
Problem:
The first job is able to spin up an instance and run; everything looks good. When I run the 2nd job, Jenkins is also able to spin up an instance. However, when jobs are queued up it does not add new slaves, even though I set the instance cap to 4 for that AMI.
Jenkins v1.656
Amazon EC2 plugin v1.31
I tried reducing the number of executors on the master and running the job again, but no luck. I then changed to a slightly smaller EC2 instance size and increased the number of executors (in order to put more load on the slave). The job waited for a couple of minutes (~5 minutes) and then another slave was started.
Solution:
Your cluster has to stay overloaded for more than a couple of minutes before the EC2 plugin adds a new dynamic slave.
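If you want to check whether your queue really stays saturated long enough for the plugin's provisioning to kick in, a small monitoring sketch like the one below can help. It assumes the python-jenkins client; the URL, credentials, and the two-minute threshold are placeholders, not values from this setup.

```python
# Sketch: report how long items have been sitting in the Jenkins build queue.
# Assumes the python-jenkins package (pip install python-jenkins); the URL,
# credentials and the 120-second threshold are placeholder values.
import time
import jenkins

server = jenkins.Jenkins('http://jenkins.example.com:8080',
                         username='admin', password='api-token')

THRESHOLD_SECONDS = 120  # sustained load is needed before new slaves appear

now_ms = time.time() * 1000
for item in server.get_queue_info():
    waited = (now_ms - item['inQueueSince']) / 1000.0  # inQueueSince is in ms
    name = item.get('task', {}).get('name', '<unknown>')
    status = 'over threshold' if waited > THRESHOLD_SECONDS else 'still recent'
    print('%-40s queued for %5.0fs (%s): %s'
          % (name, waited, status, item.get('why', '')))
```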
Related
I'm new to Jenkins and I'm trying to run a job on multiple machines.
I achieved this by enabling the 'This project is parameterized' option and selecting the suitable node and value. This runs the same job on multiple machines in parallel, but only one instance on each machine.
Now my question is: I want the job to run on all executors on all machines.
Let's say machine A has 4 executors and machine B has 2 executors;
then the job should run 6 times in parallel instead of 2.
Is there a way to achieve this in Jenkins?
I'm not sure you can achieve this automatically - as far as I know, there are no plugins with that functionality.
The only workaround I can propose is to create an additional job that triggers your job 6 times (a rough sketch of such a trigger script is shown below). You also need to enable the 'Execute concurrent builds if necessary' option in your job. Then your job should execute 6 times in parallel (if not, also try the Heavy Job Plugin to specify the total number of executors the job should occupy).
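Here is the sketch, assuming you drive Jenkins through its remote API with the python-jenkins client; the URL, credentials, and job name are placeholders. It sums the executors of the agents and fires that many concurrent builds of the target job.

```python
# Sketch: trigger one build per available agent executor so the job runs on
# all executors at once (e.g. 4 on machine A + 2 on machine B = 6 builds).
# Assumes python-jenkins; the URL, credentials and job name are placeholders,
# and the target job must have 'Execute concurrent builds if necessary' enabled.
import jenkins

server = jenkins.Jenkins('http://jenkins.example.com:8080',
                         username='admin', password='api-token')

total_executors = 0
for node in server.get_nodes():
    if node['name'] in ('master', 'Built-In Node'):
        continue  # only count executors on the agent machines
    info = server.get_node_info(node['name'])
    total_executors += info.get('numExecutors', 0)

print('Triggering %d concurrent builds' % total_executors)
for _ in range(total_executors):
    server.build_job('my-parallel-job')  # hypothetical job name
```

The same loop could just as well be a freestyle "trigger" job that calls the downstream job the required number of times; the point is simply to queue as many builds as there are executors.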
I'm using the EC2 plugin for jenkins and having problems getting multiple instances to spin up. I have one AMI configured and a job configured to use it as the build slave. The AMI is configured to have 1 executor, and the job has a weight of 1. When I kick off a build, it spins up an instance of the AMI as expected and does everything I need it to do, then terminates the instance when it's done. The problem is I would like to be able to kick off multiple concurrent builds of this job at once. I have selected "enable multiple concurrent builds" in the job config, but when I try to kick off a second build it says "pending" because the AMI is already being used by the first build.
When I kick off a second build, I would like it to spin up another instance of the AMI. I know I could copy the AMI and configure it in the EC2 plugin as a second build slave, but I only want to deal with managing one AMI. How can I accomplish this?
You can increase the number of executors on the slave machine so that the concurrent jobs get executed. A second option is to set an idle termination time for your slave: with a 10-minute idle termination time, the build that went into a pending state can still be executed on the running instance; once the concurrent builds finish, the instance waits 10 minutes, and if no job is triggered in those 10 minutes, the instance is terminated.
Make sure that the Instance Cap is more than 1; it's in the main configuration.
If that doesn't help, please upload your configuration here so we can try to help.
Thanks, Mor
We have a Jenkins server with 8 executors and 20 jobs. 15 of those jobs take approximately 2 hours to finish while the remaining 5 take only 15 minutes. I would like to reserve 1 executor (or 2) to run those 5 small jobs only and restrict other jobs to run on the other executors. Note: I don't have any slaves, just 8 executors on master Jenkins process.
I'm new to Jenkins, so I just wonder: is there any way I can do that? Thank you.
As I understand it, Kiddo runs all 8 executors on the master. What you can do is add a new slave that runs on the master machine; let's call it slave-master. That is, you will have the master with 6 executors and its usage set to 'Utilize this slave as much as possible', and slave-master with its usage restricted to only the short builds. So on your server you will have two Jenkins processes running: one is the Jenkins master itself, and the other is the slave-master agent.
For info on how to connect slaves, go to https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds
Adding to @StephenKing's answer, you also have to specify the label name for each job when configuring it ('Restrict where this project can be run' in the job configuration).
I'm a bit late but I think it would be much easier to restrict how many concurrent "slow" jobs can run than trying to reserve executors. This is simple to do with the Lockable Resources plugin: https://wiki.jenkins.io/display/JENKINS/Lockable+Resources+Plugin
Simply add as many resources as the number of slow jobs you want to allow (6 or 7) and give them all the same label. Modify the job configurations to lock a resource (by label with quantity 1) before it can execute. If all the resources are already locked, then the job will wait until one is freed.
In the slave configuration, you can set the Usage mode to Only build jobs with label expressions matching this node.
Then, only jobs matching a given label (e.g. job-group-whatever) will be executed on this slave.
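If you prefer to script that node setup instead of clicking through the UI, here is a minimal sketch using the python-jenkins client; the connection details, node name, executor count, and remote directory are assumptions, and exclusive=True corresponds to the 'Only build jobs with label expressions matching this node' usage mode.

```python
# Sketch: create an agent whose usage is restricted to label-matched jobs.
# Assumes python-jenkins; the URL, credentials, node name, remote FS path and
# executor count are placeholders for your own values.
import jenkins

server = jenkins.Jenkins('http://jenkins.example.com:8080',
                         username='admin', password='api-token')

server.create_node(
    'short-jobs-agent',                  # hypothetical node name
    numExecutors=2,                      # executors reserved for the short jobs
    nodeDescription='Reserved for the 15-minute jobs',
    remoteFS='/var/jenkins/short-jobs',  # placeholder remote root directory
    labels='job-group-whatever',         # the label the short jobs will target
    exclusive=True,                      # 'Only build jobs with label expressions
                                         # matching this node'
    launcher=jenkins.LAUNCHER_JNLP)      # connect the agent via JNLP/agent.jar
```

The jobs that should land on this node then point 'Restrict where this project can be run' at the same label.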
I had the same issue. I installed multiple agents on the same slave machine and it works fine.
Each node's remote directory should be different.
The agents run as Windows services.
I am using the Jenkins EC2 Plugin to spin up AWS EC2 instances on demand to use as slaves for my build jobs. When I kick off multiple jobs at the same time, I want to run each job on its own EC2 instance.
But the default "Usage" setting is "Utilize this slave as much as possible". This means that Jenkins will not boot a new slave for a job unless all the build executors on the existing slave are currently in use.
How do I configure the EC2 plugin to spin up a new EC2 instance every time my job is kicked off, even if not all the executors are in use?
How about just having a single executor per slave? Then each build would have to run on a separate slave.
This may be a crazy idea, but I'm just throwing it out there.
Is it possible to make one Jenkins master's executors available as slaves (executors) to another Jenkins master?
I.e., let's say JenkinsMaster1 has 10 executors. It has a bunch of slaves (on various OSes, with various numbers of executors per slave), but all of them are busy running something.
There's another instance, JenkinsMaster2, with the same kind of setup (a bunch of slaves with N executors each), but this one has some/a lot of free executors (on the master or its slaves).
The question is NOT why I can't just create a new slave for JenkinsMaster1 when a job configured on JenkinsMaster1 needs to run (while every other executor on JenkinsMaster1 and its slaves is in use), nor why I don't simply add more executors to JenkinsMaster1's master/slaves, BUT whether and how it is even possible to use JenkinsMaster2's executors (or its slaves, i.e. those owned by JenkinsMaster2) to run a job that is configured on JenkinsMaster1.