I'm facing a problem and hope you'll be able to give me a hand :-)
The problem
I'm trying to write a pipeline like the one below:
parallel(
    "task A": { /* build some stuff and run verifications */ },
    "task B": { /* build more stuff; wait for task A to be finished; run tests */ },
    "task C": { /* build some more stuff; wait for task A to be finished; run tests */ }
)
My problem is I can't find a way to wait for the completion of task A.
Things I've tried
Store the result of the build
In "task A", I would run the job like this: job_a = build job: "Job_A"
Then in task B and C, I would use the attributes of "job_a".
Unfortunately this doesn't work, as I get an error because job_a is not defined (in the scope of tasks B and C). There might be a fork happening when using "parallel".
I also tried defining "job_a" before the parallel block and still assigning the job to it in "task A", but this did not work either: in task B and task C, job_a would only have the value that was first defined.
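For illustration, that second attempt looked roughly like this (a sketch, with a placeholder job name):
def job_a = null
parallel(
    "task A": { job_a = build job: "Job_A" },
    "task B": { /* job_a still holds its initial value here */ },
    "task C": { /* same problem here */ }
)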
Schedule task A outside the parallel block
I also tried scheduling the job directly before the parallel block.
I would get a job object and then directly run job.scheduleBuild2.
Here again no success.
Any idea how to do this?
The main reasons I would like to set up the pipeline this way are:
All these jobs run on slaves (most likely different).
If task A is finished, and the build of task B is finished, the tests should start. Even if the build of task C hasn't finished yet.
Same if task C finishes before task B.
I'd be very grateful if you have an idea how to implement this :-)
More generally, I'm also curious about how this all works behind the scenes.
Indeed, when running parallel, several processes or threads must be used. How does the master keep communicating with a slave during a build to update its status, etc.?
Thanks a lot :-D
I tried to find a solution to your problem, but I was only able to come up with something close to what you are asking for. As far as I am aware, parallel in Jenkinsfiles is currently implemented in a way that does not support communication between the different processes running in parallel. Each of your parallel tasks runs in its own sandbox and therefore cannot access information about the others directly.
One solution could be the following:
A, B and C are started in parallel
B or C finishes its first stage and now needs A to continue
Introduce a waiting stage into B and C
B and/or C poll the Jenkins remote API of A (http://jenkins/job/job.A/lastBuild/api/json) and look for the result entry
If result is null -> keep waiting; if result is SUCCESS -> continue; if result is FAILURE -> throw an exception, and so on
The obvious downside of this solution is that you have to implement that waiting stage yourself and make actual HTTP calls to get the JSON responses; a sketch follows.
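Something like this could work, assuming plain Groovy URL access is permitted by your script-security settings (the Jenkins URL and job name are placeholders):
waitUntil {
    // Poll the remote API until the last build of Job_A has a result.
    def text = new URL('http://jenkins/job/Job_A/lastBuild/api/json').text
    def result = new groovy.json.JsonSlurper().parseText(text).result
    if (result == 'FAILURE') {
        error 'Job_A failed'       // abort this branch
    }
    return result == 'SUCCESS'     // null means still running, so keep waiting
}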
Another solution could be:
Split B and C into two jobs each
Run the first parts of B and C in parallel with A
Run the second parts of B and C in parallel once the first parallel stage has finished
The downside here would be that it is slower than the setup you wish for in your question, but it would be considerably less effort to implement; a sketch of this approach follows.
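For instance (all job names are placeholders):
// Phase 1: A runs alongside the build halves of B and C.
parallel(
    "task A": { build job: 'Job_A' },
    "build B": { build job: 'Job_B_build' },
    "build C": { build job: 'Job_C_build' }
)
// Phase 2: starts only after A and both build halves have finished.
parallel(
    "test B": { build job: 'Job_B_tests' },
    "test C": { build job: 'Job_C_tests' }
)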
Related
TFS build allows you to specify conditions for running a task: reference.
The condition I would like to define is: a specific task [addressed by name or other mean] has failed.
This is similar to Only when a previous task has failed, but I want to specify which previous task that is.
Looking at the examples, I don't see any condition that addresses a specific task's outcome, only the entire build status.
Is it possible? Any workaround to achieve this?
It doesn't seem like there's an out-of-the-box solution for this requirement, but I can come up with an (ugly :)) workaround.
Suppose your specific task (the one whose status you examine) is called A. The goal is to call another build task (let's say B) only in case A fails.
You can do the following:
Define a custom build variable, call it task.A.status, and set it to success
Create another build task, e.g. C, and schedule it right after A; condition it to run only if A fails - there's a standard condition for that
The task C should do only one thing - set the task.A.status build variable to 'failure' (like this, if we are talking PowerShell: Write-Host "##vso[task.setvariable variable=task.A.status]failure")
Finally, the task B is scheduled sometime after C and is conditioned to run in case task.A.status equals failure, like this: eq(variables['task.A.status'], 'failure')
I might be incorrect in syntax details, but you should get the general idea. Hope it helps.
I have a pipeline script where I want to kick off parallel builds on two different build machines, and once it's all done, perform some post-run activity like unstashing and publishing test results, creating an archive from all of the binaries and libraries generated, etc.
It basically looks like this, where 'master' is a MacOS machine and we've got a separate machine for Windows builds:
// main run stuff
parallel (
    "mac" : {
        node ('master') {
            for (job in macJobs) {
                job.do()
            }
        }
    },
    "windows" : {
        node ('windowsMachine') {
            for (job in windowsJobs) {
                job.do()
            }
        }
    }
)

node('master') {
    // post-run stuff
}
If I kick off a single build with this script, then it completes with no problem.
But if a second build kicks off while the first is still working through the parallel block (i.e. it's polling SCM and someone did a push while the first build is still going), then the post-run block doesn't get executed until the second job's parallel block completes.
There's obviously a priority queue based on who gets to request the node first, but I'd like one complete script run to finish before Jenkins moves on to the next, so we don't end up with jobs piling up on the post-run block, which normally takes only a couple of seconds to complete...
How do I modify the script to do this? I've tried wrapping it all in a single stage block, but no luck there.
I might guess that part of the problem lies in your post-run stuff sharing the master node with one of your parallel tasks. Especially if your master node has only one or two executors, which would definitely put it at 100% load with more than one concurrent build.
If this sounds like it might be part of your problem, you can try giving your post-run stuff a dedicated node to guarantee availability independent of triggered builds. Or increase the executors available on your master node to guarantee that even if there are a couple of concurrent builds, there are still executors available for those post-runs.
Jenkins doesn't really care about the origin of a block to execute. So if you have two jobs running at the same time, and each uses the master node in two separate blocks, there is a real chance the first block of each job will execute together before either of their second blocks is reached. If your node has only two executors available, then you may even end up with a starved queue for that node; at the very least, an executor must become available before either of those second blocks can begin.
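As a sketch, the post-run work could get its own label so it never competes with the build branches (the 'post-run' label is an assumption; use whatever label you dedicate to it):
node('post-run') {
    // unstash, publish test results, archive binaries, etc.
    // No parallel branch ever occupies this executor, so the post-run
    // can start as soon as the parallel block finishes.
}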
Is it possible to set a --halt condition (or multiple --halt conditions?) such that all jobs will be halted if any of them finish, regardless of the exit code?
I want to monitor for an event (that I just triggered, separately, on a load balanced service). I can identify if the event passed or failed by viewing the logs, but I have to view logs on multiple servers at once. Perfect! Parallel it! I have an extra requirement though: I want to return success or failure based on the log result.
So I want to stop the parallel jobs if any of them detects the event (i.e. "--halt now"), but I don't know whether the detection will return zero or non-zero (that's the point: I'm trying to find out that information), so neither "--halt now,success=1" nor "--halt now,fail=1" is correct. I need to figure out a way to do something like "--halt now,any=1".
I had a look through the source and, well, my Perl kung-fu is inadequate to tackle this (and it looks like exitstatus is used in many different places in the source, so it's difficult for me to figure out whether this would be feasible or not).
Note that ,success=1 and ,fail=1 both work perfectly (given the corresponding exit status) but I don't know if it will be success or fail before I run parallel.
The GNU Parallel manpage says:
--halt now,done=1
exit when one of the jobs finishes. Kill running jobs.
Source: https://www.gnu.org/software/parallel/man.html (search for --halt - it's a big page)
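So for this use case, something like the following should halt all jobs as soon as the first one finishes, whatever its exit code (a sketch based on the command from your question; dosomething and the server list are placeholders):
parallel -Sserver{1..10} --halt now,done=1 dosomething {} ::: {1..100}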
If you (as a human) are viewing the logs, why not use Ctrl-C?
If you simply want all jobs to be killed when the first finishes, then append true to your command to force it to become a success:
parallel -Sserver{1..10} --halt now,success=1 dosomething {}\;true ::: {1..100}
The simple case where you just have one job depending on the completion of a set of other jobs is easy: either use a multijob or use the build flow plugin with parallel { ... }. The case I am trying to solve is more general, for example:
JobA depends on JobX and JobZ
JobB depends on JobY and JobZ
SuperJob depends on JobA and JobB
I want each of these jobs to trigger as soon as, and only when their prerequisites complete.
It would appear that neither the build flow plugin, nor the join plugin, nor the job DSL plugin has a good mechanism for this. I can, of course, just start all my jobs and have them poll Jenkins, but that would be quite ugly.
Another dead end is the "Upstream job trigger". I want to trigger off a specific build of a job, not just any run of an upstream job.
update
One answer mentions the multijob plugin. It can indeed be used to solve this problem, but the scheduling and total build time is almost always worst case. For example, assume this dependency graph, with the build times as indicated:
left1 (1m)      right1 (55m)
    |               |
left2 (50m)     right2 (2m)
    |_______________|
            |
           zip
With the multijob plugin, you get:
Phase 1:
    left1, right1    // done in 55m
Phase 2:
    left2, right2    // done in 50m
Phase 3:
    zip              // total time 105m
If I had a way to trigger the next job exactly when all prerequisites are done, then the total build time would be just 57m.
The answer here should explain how I can obtain that behavior, preferably without writing my own polling mechanism.
update 1 1/2
In the comments below, it was suggested that I group the left tasks and the right tasks into single subtasks. Yes, this can be done in this example, but it is very hard to do in general, and automatically. For example, assume there is an additional dependency: right2 depends on left1. With the build times given, the optimal build time should not change, since left1 is long done before right2 is launched; but without this knowledge, you can no longer lump left1 and left2 into the same group without running the risk of not having right1 available.
update 2
It looks like there is no ready made answer here. It seems I am going to have to code up a system groovy script myself. See my own answer to the question.
update 3
We ended up forking the multijob plugin and writing new logic within. I hope we can publish it as a new plugin after some cleanup...
Since you added the jenkins-workflow tag, I guess that using the Jenkins Workflow Plugin is OK for you, so perhaps this Workflow script fits your needs:
node {
    parallel left: {
        build 'left1'
        build 'left2'
    }, right: {
        build 'right1'
        build 'right2'
    },
    failFast: true
    build 'zip'
}
This workflow will trigger zip as soon as both parallel branches finish.
As far as I can tell, there is no ready-made answer to my problem, so I have to roll my own. The following system Groovy script works, but could obviously use some enhancements. Specifically, I really miss a nice, simple one-page build status overview...
This gist implements my solution, including proper handling of job cancellations: https://gist.github.com/cg-soft/0ac60a9720662a417cfa
You can use "Build other projects" as a Post-build Action in the configuration of one of your parent jobs, which would trigger the second parent job on a successful build of the first. When the second parent job completes, trigger your child job by the same method.
The Multijob plugin can be used to make a hierarchy of jobs.
First select Multijob Project when creating a new item, and then in the configuration you can add as many jobs as you want. You also need to specify a phase for each job.
Jenkins scenario details:
- Number of build executors (either on master/slave) in Jenkins: 3
- Upstream job: USJob; this job can run on any build executor
- Downstream job: DSJob; this job has a quiet period of 120 seconds and is tied to run on one particular build executor only
USJob has this in its build step: echo "Happy Birthday James", and it takes 5 seconds to complete.
DSJob has this in its build step: echo "James bond is dead", and it takes 5 seconds to complete.
Now, let's say we run USJob (the parent/upstream job) 5 times, which will call DSJob (the child/downstream job) 5 times as well. What I want is:
Jenkins should run USJob 5 times, thus calling the DSJob child job during each run.
Instead of running DSJob as soon as it's called from USJob, DSJob will sit idle in the queue for 120 seconds (i.e. its quiet period).
Now, if we look at this scenario, USJob will call DSJob 5 times, and DSJob will sit in the queue until its quiet period is met. Once the quiet period is over, Jenkins will start DSJob.
My question:
What I'm trying to find out is what setting/option I can set in DSJob (the child job) so that DSJob runs only once, no matter how many times it was called.
In other words: if James Bond/someone is dead once, he can't die again! ...got it! But someone can wish him Happy Birthday any number of times on his birthday.
-- THIS concept is similar to running a continuous integration (CI) build in "accumulate" fashion in TFS (Team Foundation Server, in the build definition's TRIGGER section): run the build as soon as there's a change in source control, BUT accumulate all source control changes made while the running CI build is in progress; once it completes, the next CI build picks up all the other changes made by developers in the meantime.
I agree, as this is one option, and I would have gone this way eventually. Thanks for sharing, Eldad. We basically wanted to avoid putting a file in the workspace, as we run jobs on any available slave on any machine and didn't want to create the file on a central NAS accessible to all machines/slaves. Also, I didn't want to make the child/downstream job check whether the parent/upstream job finished with status X or not before running.
The way I did it: set a quiet period of 120 seconds on DSJob, and call "DSJob" from USJob or any other parent of DSJob (you can choose to pass params or not, directly or via a property file). I found that it worked fine. When I scheduled multiple runs of USJob, the first occurrence of USJob called DSJob, which waited for 120 seconds (or however many seconds you set). Once the first USJob completed, the 2nd USJob started, finished, and called DSJob again; but this didn't put a new DSJob in the queue. Instead, it just bumped the remaining wait time for DSJob back up to the full quiet period, which was good. I also used the Build Blocker plugin, but only to make my point logically clear, as things work just as I want using the "quiet period" concept set on DSJob. Solved!
If I understand you correctly, you don't want 5 executions of DSJob to happen after 5 (quick) executions of USJob. You rather want DSJob to execute once, from the last trigger from USJob? Inspired by how the quiet period feature works for SCM?
If that is the case, then I faced the same problem. What I did was to write a Groovy script that executes as the last step in USJob. First, it gets a list of all downstream jobs (which would simply return DSJob in your case). You could skip this step if you don't need it to work dynamically. If any of these jobs are queued, the script removes them from the queue. USJob then triggers DSJob (again) as normal.
The end result is that DSJob will only trigger once if USJob has been triggered several times within the 120s time frame. Only the last trigger from USJob will have an effect.
I'm kind of a Groovy novice, so my script is a messy combination of other scripts I found around the web. I know there are way better ways to do this. I'll post the (messy) script if this is the solution you're looking for.
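In the meantime, here is a minimal sketch of the idea as a system Groovy build step (the job name and the exact API calls are my assumptions, not the original script):
import jenkins.model.Jenkins
// Cancel any queued builds of this job's downstream projects, so that only
// the last trigger within the quiet period actually takes effect.
def jenkins = Jenkins.instance
def upstream = jenkins.getItemByFullName('USJob')
upstream.downstreamProjects.each { downstream ->
    jenkins.queue.items.findAll { it.task.name == downstream.name }.each {
        jenkins.queue.cancel(it.task)
    }
}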
I suggest the following idea: don't link the US and DS jobs at all. Have the US job do its thing and finish. Have the DS job check something to decide whether to start at all.
To implement this, use the Script Trigger Plugin. I use it for a similar need and it works great! You have full control of the triggering, and with a well-written script you can apply ANY logic you want, giving you absolute control over triggering and flow.
Just a note - the script that evaluates whether to trigger does not have to be kept as an external file; it can also be written into the job's configuration. Groovy scripts are also supported; see the sketch below.
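For example, a Groovy evaluation script might look like this (a sketch under my assumptions about the job names; the script must return true for the trigger to fire):
import jenkins.model.Jenkins
// Trigger DSJob only when USJob has a successful build newer than DSJob's last run.
def us = Jenkins.instance.getItemByFullName('USJob')
def ds = Jenkins.instance.getItemByFullName('DSJob')
def lastSuccess = us?.lastSuccessfulBuild
def lastRun = ds?.lastBuild
return lastSuccess != null && (lastRun == null || lastSuccess.timeInMillis > lastRun.timeInMillis)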
Hope this helps.