Jenkins Scenario details
========================
- Number of build executors (on the master or slaves) in Jenkins: 3
- Upstream job: USJob; this job can run on any build executor.
- Downstream job: DSJob; this job has a quiet period of 120 seconds and is tied to run on one particular build executor only.
USJob has this in its build step: echo "Happy Birthday James"
and it takes 5 seconds to complete.
DSJob has this in its build step: echo "James bond is dead"
and it takes 5 seconds to complete.
Now, let's say we run USJob (the parent/upstream job) 5 times, which will call DSJob (the child/downstream job) 5 times as well. What I want is:
Jenkins should run USJob 5 times, and thus call the DSJob child job during each run.
Instead of running DSJob as soon as it's called from USJob, DSJob will sit idle in the queue for 120 seconds (its quiet period).
So in this scenario, USJob calls DSJob 5 times and DSJob sits in the queue until the quiet period has elapsed. Once the quiet period is over, Jenkins starts DSJob.
My question:
What setting/option can I set on DSJob (the child job) so that DSJob runs only once, no matter how many times it was called?
In other words: if James Bond/someone is dead once, he can't die again! ...got it? But someone can wish him Happy Birthday N number of times on his birthday.
-- This concept is similar to running a continuous integration (CI) build in "accumulate" fashion in TFS (Team Foundation Server, in the build definition's TRIGGER section): run the build as soon as there is a change in source control, BUT accumulate all further source-control changes while the running CI build is in progress; once that build completes, the next CI build picks up all the other changes committed by developers in the meantime.
I agree, this is one option and I would have gone this way eventually. Thanks for sharing, Eldad. We basically wanted to avoid dropping a file in the workspace, as we run jobs on any available slave on any machine and didn't want to create the file on a central NAS accessible to all machines/slaves. I also didn't want to make the child/downstream job check whether the parent/upstream job finished with status X before running.
The way I did it: set a quiet period of 120 seconds on DSJob, and call DSJob from USJob or any other parent of DSJob (you can choose to pass parameters or not, directly or via a property file). This worked fine. When I scheduled USJob multiple times, the first run of USJob called DSJob, which waited for 120 seconds (or however many seconds you set). Once the first USJob completed, the second USJob started, finished, and called DSJob again; this did not put a new DSJob in the queue, it just reset DSJob's remaining wait time back to the full X seconds, which was what I wanted. I also used the Build Blocker plugin, but only to make my point logically clear; things work the way I want using just the quiet period set on DSJob. Solved!
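For reference, a hypothetical Job DSL sketch of this setup (assumes the Job DSL plugin; the node label is an assumption, the rest is taken from the scenario above):

job('DSJob') {
    quietPeriod(120)        // sit in the queue for 120s; a repeated trigger just resets the wait
    label('dedicated-node') // assumed label tying DSJob to one particular node
    steps {
        shell('echo "James bond is dead"')
    }
}
job('USJob') {
    steps {
        shell('echo "Happy Birthday James"')
    }
    publishers {
        downstream('DSJob') // call DSJob after each USJob run
    }
}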
If I understand you correctly, you don't want 5 executions of DSJob to happen after 5 (quick) executions of USJob. You'd rather DSJob execute once, from the last trigger from USJob, inspired by how the quiet period feature works for SCM?
If that is the case, then I faced the same problem. What I did was write a Groovy script that executes as the last step in USJob. First, it gets a list of all downstream jobs (that would simply return DSJob in your case; you could skip this step if you don't need it to work dynamically). If any of these jobs are queued, the script removes them from the queue. USJob then triggers DSJob (again) as normal.
The end result is that DSJob will only trigger once even if USJob has triggered several times within the 120s time frame. Only the last trigger from USJob will have an effect.
I'm kind of a Groovy novice, so my script is a messy combination of other scripts I found around the web. I know there are far better ways to do this. I'll post the (messy) script if this is the solution you're looking for.
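In the meantime, a minimal system Groovy sketch of the idea might look like this (not my actual script; the job name is a placeholder, and it assumes a "system Groovy script" build step, which runs inside the Jenkins master JVM):

import jenkins.model.Jenkins

def jenkins = Jenkins.instance
def upstream = jenkins.getItemByFullName('USJob') // placeholder job name
def downstream = upstream.getDownstreamProjects() // direct downstream projects of USJob

// Cancel any queued builds of those downstream projects;
// USJob then triggers DSJob again as normal afterwards.
def queue = jenkins.queue
queue.items.each { item ->
    if (downstream.contains(item.task)) {
        queue.cancel(item)
    }
}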
I suggest the following idea: don't link the US and DS jobs at all. Have the US job do its thing and finish. Have the DS job check something to decide whether to start at all.
To implement this, use the Script Trigger Plugin. I use it for a similar need and it works great! With a well-written script you can apply ANY logic you want, giving you full control over triggering and flow.
Just a note: the evaluation script does not have to be kept as an external file. It can also be written into the job's configuration. Groovy scripts are also supported.
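For illustration, a Groovy evaluation script for the plugin might look roughly like this (the marker-file path is hypothetical, and the exact triggering convention depends on how the plugin is configured):

// Evaluated periodically by the ScriptTrigger plugin; a true result schedules a build.
def marker = new File('/shared/markers/usjob.done') // hypothetical marker left by the US job
if (marker.exists()) {
    marker.delete() // consume the marker so we trigger only once per US run
    return true
}
return false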
Hope this helps.
I have 2 multijobs. MJ1 only does a sync from Git. MJ2 does the other activities, like deployment, DB creation and test execution. Now I want the following:
i) MJ1 should be triggered every 3 hours if any code check-in has been done.
ii) If there is no code check-in on a particular day (or days), then both MJ1 and MJ2 should still run at least once on that day.
I'm facing a problem and hope you'll be able to give me a hand :-)
The problem
I'm trying to write a pipeline like the one below:
parallel(
"task A": { build some stuff and run verifications},
"task B": { build more stuff. Wait for task A to be finished. Run tests.},
"task C": { build some more stuff. Wait for task A to be finished. Run tests.}
)
My problem is I can't find a way to wait for the completion of task A.
Things I've tried
Store the result of the build
In "task A", I would run the job like this: job_a = build job: "Job_A"
Then in tasks B and C, I would use the attributes of "job_a".
Unfortunately this doesn't work: I get an error because job_a is not defined in the scope of tasks B and C. There is probably a fork happening when "parallel" is used.
I also tried defining "job_a" before the parallel block and still assigning the job to it in "task A", but that did not work either: in tasks B and C, job_a only ever held the value it was first given.
Schedule task A outside the parallel block
I also tried scheduling the job directly before the parallel block.
I would get a job object and then directly run job.scheduleBuild2.
Here again, no success.
Any idea how to do this?
The main reasons I would like to set up the pipeline this way are:
All these jobs run on slaves (most likely different ones).
If task A is finished and the build of task B is finished, the tests should start, even if the build of task C hasn't finished yet.
Same if task C finishes before task B.
I'd be very grateful if you have an idea how to implement this :-)
More generally, I'm also curious how this all works behind the scenes.
Indeed, when running in parallel, several processes or threads must be used. How does the master keep communicating with a slave during a build to update status, etc.?
Thanks a lot :-D
I tried to find a solution to your problem, but I was only able to come up with something close to what you are asking for. As far as I am aware, parallel in Jenkinsfiles is currently implemented in a way that does not support communication between the different processes running in parallel. Each of your parallel tasks runs in its own sandbox and therefore cannot access information about the others directly.
One solution could be the following:
A, B and C are started in parallel
B or C finishes its first stage and now needs A to continue
Introduce a waiting stage into B and C
B and/or C poll the Jenkins remote API of A (http://jenkins/job/job.A/lastBuild/api/json) and look for the result entry
If result is null -> keep waiting; if result is SUCCESS -> continue; if result is FAILURE -> throw an exception, and so on
The obvious downside of this solution is that you have to implement that stage yourself and make actual HTTP calls to get the JSON responses.
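A rough sketch of such a waiting stage in a scripted pipeline might look like this (the URL, the polling interval and the block-style stage syntax are assumptions; JSON parsing inside a Jenkinsfile may additionally need @NonCPS handling):

stage('Wait for A') {
    def result = null
    while (result == null) {
        sleep 15 // seconds between polls
        def json = new URL('http://jenkins/job/job.A/lastBuild/api/json').text
        result = new groovy.json.JsonSlurper().parseText(json).result // null while A is still running
    }
    if (result != 'SUCCESS') {
        error "Job A finished with status ${result}"
    }
}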
Another solution could be:
Split B and C into two jobs each
Run the first parts of B and C in parallel with A
Run the second parts of B and C in parallel once the first parallel stage has finished
The downside here is that it is slower than the setup you wish for in your question, but it would be considerably less effort to implement.
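As a sketch (job names are placeholders):

node {
    // First stage: A runs alongside the build halves of B and C.
    parallel(
        A:  { build 'jobA' },
        B1: { build 'jobB-build' },
        C1: { build 'jobC-build' }
    )
    // Second stage: the test halves start only after A, B1 and C1 have all finished.
    parallel(
        B2: { build 'jobB-test' },
        C2: { build 'jobC-test' }
    )
}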
I have a parent job which runs multiple times a day. I have a child job which has to run only once a day, and only if the latest status of the parent job is successful.
Can you please let me know different ways of doing this?
Regards
Jagdish
Use a post-build trigger:
Trigger your child build if the parent build succeeds.
Trigger one more child job from your child job to disable the first child job.
(You can easily disable or enable jobs using Groovy scripts.)
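For example, a system Groovy step along these lines could do the disabling (the job name is a placeholder):

import jenkins.model.Jenkins

// Disable the first child job so it cannot run again today;
// a later job (e.g. a nightly one) would call enable() to re-arm it.
def job = Jenkins.instance.getItemByFullName('ChildJob') // placeholder name
job.disable()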
I would implement a simple counter + flag file:
Write a counter to a file every time the parent job runs.
If the counter reaches N, create a flag file.
Use the Conditional BuildStep Plugin to check whether the flag file exists.
If it exists and the parent build is good, reset the counter, delete the flag file and trigger the child job.
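A minimal Groovy sketch of the counter part (paths and N are placeholders):

// Runs at the end of each parent build.
def counterFile = new File('/var/jenkins/state/parent.count') // placeholder path
def flagFile    = new File('/var/jenkins/state/parent.flag')  // checked by the Conditional BuildStep
int n = counterFile.exists() ? counterFile.text.trim().toInteger() : 0
n++
if (n >= 5) {                // N = 5 parent runs, as an example
    flagFile.createNewFile() // signals that the child job may be triggered
    n = 0                    // reset the counter
}
counterFile.text = n.toString()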
I hope this helps.
Thanks to you all for the ideas. I implemented this using a Python script which compares the current time with the time I want the job to run: if the current time is before the designated run time, it does not trigger the job; if it is within one minute after the run time, it executes. So if my designated run time is 7 PM, I check current >= 7:00 PM and current < 7:01 PM.
This way, even if the parent job runs every 5 minutes, the child job triggers only once, at 7 PM. Use the "Trigger builds remotely" option to enable remote triggering, and execute this Python script from the parent job.
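The poster used Python; a Groovy equivalent of the time-window check might look like this (the window and the trigger mechanism are assumptions):

import java.time.LocalTime

def now   = LocalTime.now()
def start = LocalTime.of(19, 0)  // 7:00 PM, the designated run time
def end   = start.plusMinutes(1) // 7:01 PM
if (!now.isBefore(start) && now.isBefore(end)) {
    println 'Within the window: trigger the child job (e.g. via its remote-trigger URL)'
} else {
    println 'Outside the window: skip'
}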
The simple case where you just have one job depending on the completion of a set of other jobs is easy: either use a multijob or use the build flow plugin with parallel { ... }. The case I am trying to solve is more general, for example:
JobA depends on JobX and JobZ
JobB depends on JobY and JobZ
SuperJob depends on JobA and JobB
I want each of these jobs to trigger as soon as, and only when, their prerequisites complete.
It would appear that neither the build flow plugin, nor the join plugin, nor the job DSL plugin has a good mechanism for this. I can, of course, just start all my jobs and have them poll Jenkins, but that would be quite ugly.
Another dead end is the "Upstream job trigger": I want to trigger off a specific build of a job, not just any run of an upstream job.
update
One answer mentions the multijob plugin. It can indeed be used to solve this problem, but the scheduling and total build time are almost always worst-case. For example, assume this dependency graph, with the build times as indicated:
left1 (1m)     right1 (55m)
  |               |
left2 (50m)    right2 (2m)
  |_______________|
          |
         zip
With the multijob plugin, you get:
Phase 1:
left1, right1 // done in 55m
Phase 2:
left2, right2 // done in 50m
Phase 3:
zip // total time 105m
If I had a way to trigger the next job exactly when all prerequisites are done, then the total build time would be just 57m.
The answer here should explain how I can obtain that behavior, preferably without writing my own polling mechanism.
update 1 1/2
In the comments below, it was suggested that I group the left tasks and the right tasks into a single subtask each. Yes, this can be done in this example, but it is very hard to do in general, and automatically. For example, assume there is an additional dependency: right2 depends on left1. With the build times given, the optimal build time should not change, since left1 is long done before right2 is launched; but without this knowledge, you can no longer lump left1 and left2 into the same group without running the risk of not having right1 available.
update 2
It looks like there is no ready-made answer here. It seems I am going to have to code up a system Groovy script myself. See my own answer to the question.
update 3
We ended up forking the multijob plugin and writing new logic within. I hope we can publish it as a new plugin after some cleanup...
Since you added the jenkins-workflow tag, I guess that using the Jenkins Workflow Plugin is OK for you, so perhaps this Workflow script fits your needs:
node {
    parallel left: {
        build 'left1'
        build 'left2'
    }, right: {
        build 'right1'
        build 'right2'
    },
    failFast: true
    build 'zip'
}
This workflow will trigger zip as soon as both parallel branches finish.
As far as I can tell, there is no published solution to my problem, so I have to roll my own. The following system Groovy script works, but could obviously use some enhancements. Specifically, I really miss a nice, simple one-page build status overview...
This gist implements my solution, including proper handling of job cancellations: https://gist.github.com/cg-soft/0ac60a9720662a417cfa
You can use "Build other projects" as a post-build action in the configuration of one of your parent jobs, which would trigger the second parent job on a successful build. When the second parent job completes, trigger your child job in the same way.
The Multijob plugin can be used to make a hierarchy of jobs.
First select "MultiJob Project" when creating a new item, and then in the configuration you can add as many jobs as you want. You also need to specify a phase for each job.
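As an illustration, a similar hierarchy could also be described with the Job DSL plugin's multiJob support (job names are placeholders):

multiJob('SuperJob') {
    steps {
        phase('First') {
            phaseJob('JobX')
            phaseJob('JobZ')
        }
        phase('Second') { // starts only after every job in phase 'First' has finished
            phaseJob('JobA')
        }
    }
}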
I am using Jenkins for continuous integration.
I configured a job which polls the SCM for changes. I have one executor. When there is more than one SCM change while the executor is already working, still only one build is added to the queue, whereas I want it to queue one build per change.
As a workaround I already tried making my job parameterized, but as long as polling does not set any parameters¹ (not even the default ones²), this does not help either.
Is there any way to get a new build in the job queue for each SCM change?
[1] https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Build
[2] I tried to combine this scenario with https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+Dynamic+Parameter+Plug-in
You can write a script with the Jenkins Adaptive Plugin to be triggered by SVN and create a new build regardless of what is currently running.
Another option would be to create two jobs: one that monitors SCM and one that runs the build. Every time there is an SCM change, have the first job add an instance of the second to the queue and complete immediately, so that it can continue to poll.
The described scenario is possible in Jenkins using a workaround that requires two pieces:
[JobA_trigger] One job which triggers another job 'externally', via curl or jenkins-cli.jar¹.
[JobA] The actual job, which has to be a parametrized one.
In my setup, JobA_trigger polls SCM periodically. If there is a change, JobA is triggered via curl and the current dateTime is submitted². This 'external' triggering is necessary in order to submit parameters to JobA.
# JobA_trigger "execute shell"
curl "${JENKINS_URL}job/JobA/buildWithParameters?SVN_REVISION=$(date +%Y-%m-%d)%20$(date +%H:%M:%S)"
# SVN_REVISION, example (decoded): "2012-11-07 12:56:50" ("%20" is a url-encoded space)
JobA itself is parametrized and accepts a string parameter "SVN_REVISION". Additionally, I had to change the SVN URL to
# Outer brackets are for the use of SVN revision dates³ - omit them when working with a revision number.
https://svn.someaddress.com/trunk#{${SVN_REVISION}}
Using this workaround, for each SCM change a new run of JobA is queued, with the related SVN revision/dateTime attached as a parameter; this defines the software state that is tested by that run.
¹ https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI
² I decided to have dateTime-based updates instead of revision-based ones, as I have svn:externals which would each be updated to HEAD if I worked revision-based.
³ http://svnbook.red-bean.com/en/1.7/svn.tour.revs.specifiers.html#svn.tour.revs.dates