We have multiple SCM jobs. Job A triggers Job B, and Job C triggers Job D. While Job C is running, if there is a check-in for Job A, then after Job C completes, instead of triggering Job D, Job A is triggered and Job D sits in the queue. Once Job A completes, Job D is triggered while Job B waits in the queue. Is this a bug? I would expect Job C to trigger Job D regardless of any SCM change in an upstream job. How do you solve this problem?
Yes, this is expected, assuming you only have one executor. First, some terminology:
Queue: where a build waits before it is executed, usually because all executors are busy.
Triggered: a build of a job is added to the queue.
Executor: takes the first build in the queue and builds it.
Now let's do a timeline:
C is triggered, adding build C#1 to the queue; the build starts right away since nothing is executing.
A is triggered and adds build A#1 to the queue, where it waits since the only executor is busy building C#1.
Build C#1 nears completion and at the end triggers a build of D, build D#1. Build D#1 is placed in the queue right after build A#1.
Once C#1 finishes, the executor takes the next item in the queue, which happens to be A#1 (since it has been there the longest).
A#1 finishes and adds B#1 to the queue, right after D#1.
D#1 is built, since it is now first in the queue.
Finally, B#1 is built.
So as you can see, the executor always takes the build that has been in the queue the longest. There are ways to alter this priority, for example with the Priority Sorter Plugin, which lets you set a higher priority for certain jobs; in your case you should give higher priority to jobs B and D.
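The timeline above can be sketched as a plain FIFO queue (a minimal simulation using the build names from this example, not actual Jenkins behavior or API):

```python
from collections import deque

# Simulated single-executor FIFO build queue, following the timeline above.
queue = deque()
executed = []

# C#1 is building when a check-in triggers A#1, which has to wait.
queue.append("A#1")
# C#1 finishes and triggers D#1, which lands in the queue behind A#1.
queue.append("D#1")

# The free executor always takes the build that has waited the longest.
executed.append(queue.popleft())  # A#1 builds next
# A#1 finishes and triggers B#1, which lands behind D#1.
queue.append("B#1")
executed.append(queue.popleft())  # D#1
executed.append(queue.popleft())  # B#1

print(executed)  # ['A#1', 'D#1', 'B#1'], the order the question describes
```

The surprising order falls straight out of FIFO: nothing about the upstream/downstream relationship is consulted when the executor picks its next build.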
I have come across two kinds of definitions for a serial queue while reading online.
1st version: a serial queue performs one task at a time.
2nd version: a serial queue executes tasks serially, so task 1 has to finish before task 2 starts.
Can you tell me which one is exactly right?
As Ratul Sharker said, both versions say the same thing.
1st version: serial queue performs one task at a time.
You can only have one task running, so your task has to finish before another one starts:
Task 1 starts
Task 1 ends
Task 2 starts
Task 2 ends
2nd version: serial queue executes tasks serially, so task 1 has to finish before task 2 starts.
Obviously, the result is the same as in the 1st version.
But! The 2nd version might be speaking of callbacks or other multithreading paradigms, where you could start more than one task, but task 2 will wait for task 1 to end.
In any case, two tasks are serial if one starts after the other ends; it's as simple as that.
From the Apple documentation:
Operations within a queue are organized according to their readiness, priority level, and interoperation dependencies, and are executed accordingly. If all of the queued operations have the same queuePriority and are ready to execute when they are put in the queue—that is, their isReady property returns true—they’re executed in the order in which they were submitted to the queue. Otherwise, the operation queue always executes the one with the highest priority relative to the other ready operations.
However, you should never rely on queue semantics to ensure a specific execution order of operations, because changes in the readiness of an operation can change the resulting execution order. Interoperation dependencies provide an absolute execution order for operations, even if those operations are located in different operation queues. An operation object is not considered ready to execute until all of its dependent operations have finished executing.
So the queued operations are intended to be executed serially, but that is never guaranteed. To ensure execution order, dependencies must be specified to get foolproof, exact behaviour.
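The "one task at a time, in submission order" behavior both versions describe can be illustrated outside GCD as well; here is a minimal Python sketch using a single-worker thread pool as a stand-in for a serial queue (the task names are made up for the example):

```python
from concurrent.futures import ThreadPoolExecutor

# A single-worker pool behaves like a serial queue: one task at a time,
# in submission order, so task 1 finishes before task 2 starts.
order = []

def task(name):
    order.append(name + " starts")
    order.append(name + " ends")

with ThreadPoolExecutor(max_workers=1) as serial_queue:
    serial_queue.submit(task, "Task 1")
    serial_queue.submit(task, "Task 2")

print(order)  # ['Task 1 starts', 'Task 1 ends', 'Task 2 starts', 'Task 2 ends']
```

Both "versions" of the definition produce this exact trace, which is why they are the same statement.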
We have a number of multi-jobs that run a parent job and multiple sub-jobs. The parent does some preprocessing, then kicks off the first sub-job.
Example:
Parent - checks out git repos and preps the code
Build the code
Unit Tests
Upload to HockeyApp
Since the parent is running the entire time the sub-jobs are running, the process starts out with one executor, picks up a second whenever a sub-job starts, drops it, and then picks it back up when the next one starts.
We have 4 nodes with 3 to 4 executors on each of them. We also don't have a networked drive, so the sub-job has to stay on the same node as the parent to avoid having to pass the entire workspace between jobs.
The problem is that if one job is running and holding two executors, then another gets kicked off, and another right after that, there's a chance they'll all end up on the same node and something like the following happens:
Node 1
Executor 1 - Parent1
Executor 2 - Child1
Executor 3 - Parent2
Executor 4 - Parent3
Now Parent2 and Parent3 just sit around waiting for a free executor. Eventually the child job of Parent1 ends, then Parent2 or Parent3 grabs the freed executor, and all of them end up fighting for it.
Is there a way to tell Jenkins to only kick off that parent on a node with at least 2 free executors? I know that if someone started enough jobs quickly enough we could still end up with the same issue, but it would significantly reduce the chance of it.
I think you can use the Heavy Job Plugin (https://wiki.jenkins.io/display/JENKINS/Heavy+Job+Plugin) and define, for each job, the number of executors it should occupy.
Our system uses Quartz.Net for scheduling and has multiple types of jobs (say: job type A, job type B, job type C). We want to avoid that certain types of jobs run concurrently:
scenario 1: jobs of type A cannot run concurrently with other jobs of type A.
scenario 2: jobs of type B cannot run concurrently with jobs of type C. (if this happens then we want the C job to 'wait' until the B job is finished)
I know I can use the DisallowConcurrentExecutionAttribute attribute to implement scenario 1. But I can't figure out how to implement scenario 2 using built-in Quartz.Net functionality.
I could limit the number of worker threads to 1, but that would kill all concurrency, which is undesired (A-jobs are allowed to run concurrently with B-jobs).
Of course I could program this logic inside the jobs, but preferably I don't want jobs of type B to know about jobs of type C.
You can create a separate scheduler for the B and C jobs and configure it to have one thread in its pool. From the SO answer "How can I set the number of threads in the Quartz.NET threadpool":
// Limit this scheduler's thread pool to a single thread so its jobs never overlap
var properties = new NameValueCollection { { "quartz.threadPool.threadCount", "1" } };
var schedulerFactory = new StdSchedulerFactory(properties);
var scheduler = schedulerFactory.GetScheduler();
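The same idea can be sketched with plain thread pools rather than Quartz.NET (the job names here are hypothetical): B and C jobs share a dedicated single-threaded pool, so no two of them can ever overlap, while A jobs use a separate pool and stay fully concurrent:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

bc_pool = ThreadPoolExecutor(max_workers=1)  # serializes all B and C jobs
a_pool = ThreadPoolExecutor(max_workers=4)   # A jobs remain fully concurrent

lock = threading.Lock()
running = 0      # B/C jobs currently executing
max_running = 0  # highest B/C overlap observed

def bc_job(name):
    global running, max_running
    with lock:
        running += 1
        max_running = max(max_running, running)
    time.sleep(0.01)  # simulated work
    with lock:
        running -= 1

for name in ("B1", "C1", "B2"):
    bc_pool.submit(bc_job, name)
for _ in range(2):
    a_pool.submit(time.sleep, 0.01)  # A jobs run in parallel on their own pool

bc_pool.shutdown(wait=True)
a_pool.shutdown(wait=True)
print(max_running)  # 1: the single-threaded pool never ran two B/C jobs together
```

This mirrors the separate-scheduler approach: the mutual exclusion lives in the pool configuration, so B jobs never need to know about C jobs.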
I want to run a job on different slaves, but in a predefined order.
I.e., I would like the job to run on the second machine only when all the executors of the first slave are busy.
I am trying to use the "Scoring Load Balancer" plugin to allocate scores to different slaves, i.e. if I have 3 nodes, NodeA, NodeB and NodeC, with preferences of 9, 5 and 1 respectively and 10 executors on each node.
The nodes have a common label "WindowSlave".
Also, I have defined a job, "ProjectX", with a project preference score of 10 and the label "WindowSlave" as the preferred label.
I had expected that if I run 100 concurrent builds for the "ProjectX" then the execution would happen in the order :
NodeA(10 builds) -> NodeB(10 builds) -> NodeC(10 builds) -> NodeA(10 builds) -> NodeB(10 builds)->... and so on.
From my observations it is still not clear whether the above scenario would always be achieved.
Also, it happens that a random slave starts behaving as the main slave and coordinates with the other slaves, such that all the build workspaces are created on that particular slave.
What am I missing here?
Using Jenkins or Hudson I would like to create a pipeline of builds with fork and join points, for example:
      job A
     /     \
job B       job C
  |           |
job D         |
     \       /
      job E
I would like to create arbitrary series-parallel graphs like this and leave Jenkins the scheduling freedom to execute B/D and C in parallel whenever a slave is available.
The Join Plugin immediately joins after B has executed. The Build Pipeline Plugin does not support fork/join points. Not sure if this is possible with the Throttle Concurrent Builds Plugin (or deprecated Locks & Latches Plugin); if so I could not figure out how. One solution could be to specify build dependencies with Apache Ivy and use the Ivy Plugin. However, my jobs are all Makefile C/C++/shell script jobs and I have no experience with Ivy to verify if this is possible.
What is the best way to specify parallel jobs and their dependencies in Jenkins?
There is a Build Flow plugin that meets this very need. It defines a DSL for specifying parallel jobs. Your example might be written like this:
build("job A")
parallel (
    {
        build("job B")
        build("job D")
    },
    {
        build("job C")
    }
)
build("job E")
I just found it and it is exactly what I was looking for.
There is one solution that might work for you. It requires that all builds start with a single Job and end with a definite series of jobs at the end of each chain; in your diagram, "job A" would be the starting job, jobs C and D would be the terminating jobs.
Have Job A create a fingerprinted file. Job A can then start multiple chains of builds, B/D and C in this example. Also on Job A, add a promotion via the Promotions Plugin, whose criteria is the successful completion of the successive jobs - in this case, C and D. As part of the promotion, include a trigger of the final job, in your case Job E. This can be done with the Parameterized Trigger Plugin. Then, make sure that each of the jobs you list in the promotion criteria also fingerprint the same file and get the same fingerprint; I use the Copy Artifact Plugin to ensure I get the exact same file each time.