I have a JobA which, once complete, triggers JobB, JobC, and JobD.
I am trying to have JobB start immediately, while JobC and JobD keep their build requests in the queue until 10 PM at night.
JobA is triggered at 12 PM. So right now, I am using the Quiet period option set to 5 hrs. It would be really nice if JobC and JobD could behave as described above.
Is this possible in Jenkins?
Thanks.
One way you could do it is by writing a small Groovy Postbuild script.
You would basically call scheduleBuild and supply a quiet period based on how many seconds remain until 10 PM.
Here is some untested pseudo code:
def duration = tenPM - now;
manager.build.project.scheduleBuild( duration.seconds, new Cause.UpstreamCause( build ), new ParametersAction(params));
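For reference, here is a slightly more concrete (still untested) sketch of the same idea. It assumes the Groovy Postbuild plugin's manager binding, a downstream job named JobC (replace with your own), and that the script is allowed to use the Jenkins API:
import hudson.model.Cause
import jenkins.model.Jenkins

// Quiet period in seconds from now until 10 PM today
// (assumes JobA always finishes before 10 PM)
def tenPM = Calendar.getInstance()
tenPM.set(Calendar.HOUR_OF_DAY, 22)
tenPM.set(Calendar.MINUTE, 0)
tenPM.set(Calendar.SECOND, 0)
int quietPeriodSeconds = ((tenPM.timeInMillis - System.currentTimeMillis()) / 1000) as int

// Put JobC in the queue now, but hold it there until 10 PM
def jobC = Jenkins.instance.getItemByFullName('JobC')
jobC.scheduleBuild(quietPeriodSeconds, new Cause.UpstreamCause(manager.build))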
EDIT: I use this plugin to schedule runs btw: https://plugins.jenkins.io/parameterized-scheduler/
I have set up a Slack notification which is automatically sent when the timer (scheduler) starts the execution.
In the slack channel, it would post something like this when the auto-build starts:
(/ci/master-15): Started by timer with parameters: {URL=https://shop.com, USERNAME=yyy, PASSWORD=xxs, SLACK-NOTIFY-CHANNEL=#JENKINS-REPORTS}
And when the run is complete, a message similar to this would be posted to the channel:
(/ci/master-15): The job is now complete. Here are the execution summary
Groovy:
BUILD_TRIGGERED_BY = currentBuild.getBuildCauses().shortDescription[0]
SEND_SLACK_NOTIF(BUILD_TRIGGERED_BY)
Now, if a human rebuilds one of these timer jobs after it fails, I would expect it to say "Started by andrea-hyong#gmail.com", but instead:
In the pipeline status it shows:
Started by timer with parameters: {URL=https://shop.com, USERNAME=yyy, PASSWORD=xxs, SLACK-NOTIFY-CHANNEL=#JENKINS-REPORTS}
Started by user andrea-hyong#gmail.com
Rebuilds build #14
In the slack message it shows:
Started by timer with parameters: {URL=https://shop.com, USERNAME=yyy, PASSWORD=xxs, SLACK-NOTIFY-CHANNEL=#JENKINS-REPORTS}
How can I make it send the username instead of timer?
The behavior you are seeing is a result of the plugins you are using. I assume the Rebuild plugin picks up both causes when rebuilding the job. Since the behavior is consistent when rebuilding, you can improve your Groovy code to something like this:
def causes = currentBuild.getBuildCauses().shortDescription
BUILD_TRIGGERED_BY = causes.size() > 1 ? causes[1] : causes[0]
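Alternatively, instead of relying on the ordering of the causes list, you can ask for the user cause explicitly and fall back to the first cause when there is none. This is an untested sketch using the same currentBuild API:
// UserIdCause is only present when a human started (or rebuilt) the run
def userCauses = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')
def allCauses = currentBuild.getBuildCauses()
BUILD_TRIGGERED_BY = userCauses ? userCauses[0].shortDescription : allCauses[0].shortDescription
SEND_SLACK_NOTIF(BUILD_TRIGGERED_BY)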
Right now, I have two sets of benchmarks, a short one and a long one. The short one runs on checkin for every branch. Which set to run is a parameter - SHORT or LONG. The long one always runs nightly on the dev branch. How can I trigger other branches to build and run the long benchmark if the branch was built successfully today?
If you want to run those long tests only at night, I find it easiest to just duplicate the job and modify it so it is triggered at night and has the additional checks added after the normal ones, i.e. your post-commit job does just the short test, while the nightly-triggered job does the short test first and then (if there are no errors) the long one.
I find that much easier to handle than the added complexity of chaining jobs on some condition, such as evaluating the time of day to skip some tests.
Example 1st job that runs after every commit
node() {
    stage('Build') {
        // Build
    }
    stage('Short Test') {
        // Short Test
    }
}
2nd job that is triggered nightly
node() {
    stage('Build') {
        // Build
    }
    stage('Short Test') {
        // Short Test, fail the build here when not successful
    }
    stage('Long Tests') {
        // Long Test, runs only when the short test was successful
    }
}
Edit
Here is a solution that gets it all into a single job; however, it adds a lot of complexity and makes some follow-up use cases harder to integrate, e.g. different notifications for the integration test branch, tracking of build durations, etc. I still find it more manageable to have it split into 2 jobs.
The following job must be configured to be triggered by a post-commit hook and a nightly timer. It runs the long test when:
the last build is younger than the configured threshold (you don't want it to trigger off the last nightly run),
the last run was successful (you don't want to run the long test for a broken build), and
it was triggered by said timer (you don't want to trigger it on a check-in).
def runLongTestMaxDiffMillis = 20000

// Keep the timestamps as longs; converting them to Integer would overflow
def lastRunDiff = (currentBuild.getStartTimeInMillis() - currentBuild.getPreviousBuild().getStartTimeInMillis())
def lastBuildTooOld = (lastRunDiff > runLongTestMaxDiffMillis)
// Returns the (possibly empty) list of timer causes; an empty list is falsy in Groovy
def isTriggeredByTimer = currentBuild.getBuildCauses('hudson.triggers.TimerTrigger$TimerTriggerCause')
def lastBuildSuccessful = (currentBuild.getPreviousBuild().getResult() == 'SUCCESS')
def runLongTest = (!lastBuildTooOld && isTriggeredByTimer && lastBuildSuccessful)

node() {
    if (runLongTest) {
        println 'Running long test'
    } else {
        println 'Skipping long test'
    }
}
You can create another pipeline that calls the parameterized pipeline with the LONG parameter, for example:
stage('long benchmark') {
    build job: 'your-benchmark-pipeline', parameters: [string(name: 'type', value: 'LONG')]
}
When you configure this new pipeline you can tick the Build after other projects are built checkbox in the Build Triggers section and choose which short benchmarks should trigger it once they complete successfully (the default behavior).
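If you prefer to keep that trigger in the pipeline definition itself, a Declarative Pipeline equivalent would be the upstream trigger. This is only a sketch; 'your-short-benchmark-job' is a placeholder for whichever job runs the short benchmarks:
pipeline {
    agent any
    triggers {
        // Fire when the upstream job finishes with SUCCESS
        upstream(upstreamProjects: 'your-short-benchmark-job', threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('long benchmark') {
            steps {
                build job: 'your-benchmark-pipeline', parameters: [string(name: 'type', value: 'LONG')]
            }
        }
    }
}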
You can use the Schedule Build Plugin to schedule a build of the long job when the short job succeeds.
The short job runs on every branch; when a build succeeds for a certain branch, it schedules a build of the long job (at night) with the branch as a parameter, so the long job will run on that particular branch.
I'm using Apache Beam on Dataflow through Python API to read data from Bigquery, process it, and dump it into Datastore sink.
Unfortunately, quite often the job just hangs indefinitely and I have to manually stop it. While the data gets written into Datastore and Redis, from the Dataflow graph I've noticed that it's only a couple of entries that get stuck and leave the job hanging.
As a result, when a job with fifteen 16-core machines is left running for 9 hours (normally, the job runs for 30 minutes), it leads to huge costs.
Maybe there is a way to set a timer that would stop a Dataflow job if it exceeds a time limit?
It would be great if you could create a customer support ticket so we could try to debug this with you.
Maybe there is a way to set a timer that would stop a Dataflow job if
it exceeds a time limit?
Unfortunately the answer is no, Dataflow does not have an automatic way to cancel a job after a certain time. However, it is possible to do this using the APIs: call wait_until_finish() with a timeout and then cancel() the pipeline.
You would do this like so:
p = beam.Pipeline(options=pipeline_options)
p | ... # Define your pipeline code
pipeline_result = p.run() # submits the job (non-blocking for DataflowRunner)
pipeline_result.wait_until_finish(duration=TIME_DURATION_IN_MS) # blocks for at most this long
pipeline_result.cancel() # If the pipeline has not finished, you can cancel it
To sum up, with the help of #ankitk's answer, this works for me (Python 2.7, SDK 2.14):
pipe = beam.Pipeline(options=pipeline_options)
... # main pipeline code
run = pipe.run() # submits the job (non-blocking)
run.wait_until_finish(duration=3600000) # wait at most one hour (duration is in ms)
run.cancel() # cancels if can be cancelled
Thus, if the job finishes successfully within the duration passed to wait_until_finish(), then cancel() will just print a warning ("already closed"); otherwise it will cancel the running job.
P.S. If you try to print the state of the job:
state = run.wait_until_finish(duration=3600000)
logging.info(state)
it will be RUNNING for a job that did not finish within wait_until_finish(), and DONE for a finished job.
Note: this technique will not work when running Beam from within a Flex Template Job...
The run.cancel() method doesn't work if you are writing a template, and I haven't seen any successful workaround for it...
We use a Jenkins pipeline for our builds and tests. After the build, we run automated tests on several measurement devices.
For a better overview about the needed testing time, I created a test stage which is periodically checking the status of the tests. When all tests are finished, the pipeline is done. I use the "waitUntil" implementation of Jenkins pipeline for this functionality.
My problem is: the pause between attempts gets longer after every try. That is quite a good idea in general. BUT: after a while, the pause between attempts grows to 16 hours and more. This value is far too high for my needs, because I want to know the needed test time exactly.
My question is: Does anyone know a way to change this behaviour of "waitUntil"?
I know I could use a "while" loop but I would prefer to solve this using "waitUntil".
stage ">>> Waiting for testruns"
waitUntil {
sleep(10)
return(checkIfTestsAreFinished())
}
New versions of Jenkins have capped this to never go over 15 seconds (see https://issues.jenkins-ci.org/browse/JENKINS-34554 ).
In waitUntil, if the block returns false, the waitUntil step waits a bit longer and tries again. "A bit longer" means a 0.25-second wait. If it needs to loop again, it multiplies that by a factor of 1.2 to get 0.3 seconds for the next wait cycle. On each succeeding cycle the last wait time is multiplied by 1.2 again, so the sequence goes 0.25, 0.3, 0.36, 0.43, 0.51, ... seconds, until it is capped at 15 seconds (as mentioned in the other answer, Jenkins has since fixed this).
If you are using an older Jenkins version, then a possible solution is to wrap waitUntil in a timeout:
timeout(time: 1, unit: 'HOURS') {
    // unit can be SECONDS, MINUTES, HOURS
    // pick your own timeout limit
    waitUntil {
        try {
            //sleep(10) // you don't need this
            return(checkIfTestsAreFinished())
        } catch (exception) {
            return false
        }
    } //waitUntil
} //timeout
Please note: this behavior is as of Jenkins LTS 2.235.3.
The Jenkins waitUntil step is not meant to launch something synchronously.
To know the required time, add a timestamp to the test output, parse it, and calculate the duration separately, for example as sketched below.
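If parsing timestamps from the test output is not practical, a simpler (approximate) alternative is to record a timestamp around the wait yourself. This sketch reuses your existing checkIfTestsAreFinished() helper; the 4-hour upper bound is just an arbitrary example:
def waitStart = System.currentTimeMillis()
timeout(time: 4, unit: 'HOURS') {
    waitUntil {
        return checkIfTestsAreFinished()
    }
}
// Elapsed time includes the waitUntil poll back-off, so treat it as an upper bound
def elapsedMinutes = (System.currentTimeMillis() - waitStart) / 60000
echo "Tests reported finished after ~${elapsedMinutes} minutes"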
I can't figure out why my job is not triggering nightly. In Jenkins I have 2 jobs set up, intended to function in the following way:
Job 1: DataCheck
Runs every hour and checks whether the data is consistent.
Job 2: MoveDataToProduction
Every day at 8 PM, moves data to production as long as the most recent DataCheck build has passed.
I'm using a BuildResultTrigger in MoveDataToProduction, with the job to monitor set to DataCheck and the build result set to SUCCESS. The schedule is H 20 * * *.
Now, I can see the BuildResultTrigger running a check at 8 PM, but even when the last run of the DataCheck job is a success, MoveDataToProduction does not run. Here's all I see:
Polling started on Nov 19, 2014 8:00:00 PM
Polling for the job MoveDataToProduction
Recording context. Check changes in next poll.
Polling complete. Took 0 ms.
No changes.
Will the SUCCESS of DataCheck only be picked up if it changed from a failure state, or something like that? Ideally, my DataCheck should never fail, but I still want MoveDataToProduction to trigger.