I can't figure out why my job is not triggering nightly. In Jenkins I have 2 jobs set up, intended to function in the following way:
Job 1: DataCheck
Runs every hour and checks whether the data is consistent.
Job 2: MoveDataToProduction
Every day at 8pm, moves data to production as long as the most recent DataCheck run has passed.
I'm using a BuildResultTrigger in MoveDataToProduction, with DataCheck as the job to monitor. The job build result to watch for is SUCCESS, and the schedule is H 20 * * *.
Now, I can see the BuildResultTrigger running a check at 8pm, but even when the last DataCheck run is a success, MoveDataToProduction does not run. Here's all I see:
Polling started on Nov 19, 2014 8:00:00 PM
Polling for the job MoveDataToProduction
Recording context. Check changes in next poll.
Polling complete. Took 0 ms.
No changes.
Will the SUCCESS of DataCheck only be picked up if it changed from a failure state or something? Ideally DataCheck should never fail, but I still want MoveDataToProduction to trigger.
EDIT: By the way, I use this plugin to schedule runs: https://plugins.jenkins.io/parameterized-scheduler/
I have set up a Slack notification which is automatically sent when the timer (scheduler) starts the execution.
In the Slack channel, it posts something like this when the auto-build starts:
(/ci/master-15): Started by timer with parameters: {URL=https://shop.com, USERNAME=yyy, PASSWORD=xxs, SLACK-NOTIFY-CHANNEL=#JENKINS-REPORTS}
And when the run is complete, a message similar to this is posted to the channel:
(/ci/master-15): The job is now complete. Here are the execution summary
Groovy:
// take the first build cause's short description, e.g. "Started by timer ..."
BUILD_TRIGGERED_BY = currentBuild.getBuildCauses().shortDescription[0]
SEND_SLACK_NOTIF(BUILD_TRIGGERED_BY)
Now, if a human rebuilds one of these timer jobs after it is unsuccessful, I would expect it to say "Started by andrea-hyong#gmail.com", but instead:
In the pipeline status it shows:
Started by timer with parameters: {URL=https://shop.com, USERNAME=yyy, PASSWORD=xxs, SLACK-NOTIFY-CHANNEL=#JENKINS-REPORTS}
Started by user andrea-hyong#gmail.com
Rebuilds build #14
In the Slack message it shows:
Started by timer with parameters: {URL=https://shop.com, USERNAME=yyy, PASSWORD=xxs, SLACK-NOTIFY-CHANNEL=#JENKINS-REPORTS}
How can I make it send the username instead of timer?
The behavior you are seeing is a result of the plugins you are using. I assume the rebuild plugin picks up both causes when rebuilding the job. Since the behavior is consistent when rebuilding, you can adjust your Groovy code to something like the snippet below.
def causes = currentBuild.getBuildCauses().shortDescription
BUILD_TRIGGERED_BY = causes.size() > 1 ? causes[1] : causes[0]
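If you prefer not to rely on the ordering of the causes, here is a hedged variant (a sketch only; SEND_SLACK_NOTIF is the helper from the question) that picks the user-triggered cause explicitly when a rebuild carries both causes:
def causes = currentBuild.getBuildCauses()
// prefer the "Started by user ..." cause; fall back to the first cause (the timer) otherwise
def userCause = causes.find { it._class?.toString()?.contains('UserIdCause') }
BUILD_TRIGGERED_BY = (userCause ?: causes[0]).shortDescription
SEND_SLACK_NOTIF(BUILD_TRIGGERED_BY)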
I'm using Apache Beam on Dataflow through the Python API to read data from BigQuery, process it, and dump it into a Datastore sink.
Unfortunately, quite often the job just hangs indefinitely and I have to stop it manually. Although the data does get written into Datastore and Redis, I've noticed from the Dataflow graph that only a couple of entries get stuck and leave the job hanging.
As a result, when a job with fifteen 16-core machines is left running for 9 hours (normally, the job runs for 30 minutes), it leads to huge costs.
Maybe there is a way to set a timer that would stop a Dataflow job if it exceeds a time limit?
It would be great if you could create a customer support ticket so we could try to debug this with you.
Maybe there is a way to set a timer that would stop a Dataflow job if it exceeds a time limit?
Unfortunately the answer is no: Dataflow does not have a built-in way to cancel a job after a certain time. However, it is possible to do this using the APIs, by calling wait_until_finish() with a timeout and then cancel() on the pipeline result.
You would do this like so:
import apache_beam as beam

p = beam.Pipeline(options=pipeline_options)
p | ... # Define your pipeline code
pipeline_result = p.run() # submits the job; on Dataflow it runs asynchronously
pipeline_result.wait_until_finish(duration=TIME_DURATION_IN_MS) # block for at most this long (ms)
pipeline_result.cancel() # If the pipeline has not finished, you can cancel it
To sum up, with the help of #ankitk's answer, this works for me (Python 2.7, SDK 2.14):
pipe = beam.Pipeline(options=pipeline_options)
... # main pipeline code
run = pipe.run() # submits the job; it runs asynchronously
run.wait_until_finish(duration=3600000) # block for up to 1 hour (value in ms)
run.cancel() # cancels the job if it can still be cancelled
Thus, if the job finished successfully within the duration passed to wait_until_finish(), then cancel() will just print a warning ("already closed"); otherwise it will close the running job.
P.S. if you try to print the state of a job
state = run.wait_until_finish(duration=3600000)
logging.info(state)
it will be RUNNING for a job that did not finish within wait_until_finish(), and DONE for a finished job.
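Building on that, a hedged sketch (assuming the Dataflow runner and that pipeline_options and the pipeline code exist as above) that only cancels when the job is still running after the timeout:
import logging
import apache_beam as beam
from apache_beam.runners.runner import PipelineState

pipe = beam.Pipeline(options=pipeline_options)
... # main pipeline code
result = pipe.run() # submit the job
result.wait_until_finish(duration=3600000) # wait at most 1 hour (ms)
if result.state == PipelineState.RUNNING:
    logging.warning("Job exceeded the time limit, cancelling (state: %s)", result.state)
    result.cancel()
else:
    logging.info("Job finished with state %s", result.state)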
Note: this technique will not work when running Beam from within a Flex Template Job...
The run.cancel() method doesn't work if you are writing a template, and I haven't seen any successful workaround for it...
We use a Jenkins pipeline for our builds and tests. After the build, we run automated tests on several measurement devices.
To get a better overview of the time the tests need, I created a test stage which periodically checks the status of the tests. When all tests are finished, the pipeline is done. I use the "waitUntil" step of the Jenkins pipeline for this functionality.
My problem is: the pause between attempts grows after every try. In itself this is quite a good idea. BUT: after a while, the pause between attempts grows to 16 hours and more. That is far too long for my needs, because I want to know the required test time exactly.
My question is: Does anyone know a way to change this behaviour of "waitUntil"?
I know I could use a "while" loop but I would prefer to solve this using "waitUntil".
stage ">>> Waiting for testruns"
waitUntil {
    sleep(10)
    return(checkIfTestsAreFinished())
}
New versions of Jenkins have capped this to never go over 15 seconds (see https://issues.jenkins-ci.org/browse/JENKINS-34554).
In waitUntil, if the processing in the block returns false, the waitUntil step waits a bit longer and tries again. "A bit longer" means a 0.25-second wait. If it needs to loop again, it multiplies that by a factor of 1.2 to get 0.3 seconds for the next wait cycle. On each succeeding cycle, the last wait time is again multiplied by 1.2 to get the next wait time. So the sequence goes 0.25, 0.3, 0.36, 0.43, 0.51, ... up to 15 seconds (as mentioned in one of the other answers, Jenkins has since capped it there).
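Purely as an illustration of that progression (not Jenkins' actual implementation), the wait before attempt n works out to:
// wait (in seconds) before attempt n: 0.25 * 1.2^(n-1), capped at 15 s in newer Jenkins versions
def waitBefore = { int n -> Math.min(0.25d * Math.pow(1.2d, n - 1), 15.0d) }
// waitBefore(1) == 0.25, waitBefore(2) ≈ 0.3, waitBefore(3) ≈ 0.36, ...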
If you are using an older Jenkins version, a possible solution is to wrap waitUntil in a timeout:
timeout(time: 1, unit: 'HOURS') {
    // you can express the time in SECONDS, MINUTES or HOURS
    // and pick whatever timeout limit suits you
    waitUntil {
        try {
            // sleep(10) // you don't need this
            return(checkIfTestsAreFinished())
        } catch (exception) {
            return false
        }
    } // waitUntil
} // timeout
Please note: this behavior was observed on Jenkins LTS 2.235.3.
The Jenkins waitUntil step is not meant to launch something synchronously.
To know the required time, you should add a timestamp to the test output, parse it, and calculate the duration separately.
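As a rough sketch (scripted pipeline assumed; checkIfTestsAreFinished() is the function from the question, and System.currentTimeMillis() may need script approval in a sandboxed pipeline), you could measure the elapsed time yourself:
def start = System.currentTimeMillis()
waitUntil {
    return checkIfTestsAreFinished()
}
def elapsedSec = (System.currentTimeMillis() - start) / 1000
echo "Tests finished after roughly ${elapsedSec} s (this still includes waitUntil's polling overhead)"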
So, in my collection I have about ten requests, with the last two being:
/Wait 10 seconds
/Check Complete
The first makes a call to Postman's echo service (delay by 10 seconds) and the second is the call to my system to check for the "complete" status. Now, if the status is not available yet, I wait another 10 s:
postman.setNextRequest("Wait 10 seconds");
The "complete" status on my system can appear in a minute or so. As one can see, this becomes an infinite loop if something goes wrong on the system side and the status never becomes complete. Is there a way in a Postman/newman test to fail the run if it has been going on for more than 2 minutes, for example?
Additionally, this will be executed in Jenkins from the command line, so I am not really looking at Postman settings or delays between requests in the runner.
You may have a look at the newman options here: https://www.npmjs.com/package/newman#newman-run-collection-file-source-options. The interesting option is --timeout-request: it will surely fulfill your need.
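An illustrative invocation (collection.json is a placeholder; the value is in milliseconds):
newman run collection.json --timeout-request 120000
newman also accepts a --timeout option that limits the entire collection run, which may map more directly onto the 2-minute overall limit described in the question.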
In Postman itself, you may test the responseTime. I recall that there is a snippet, in the panel on the right, which looks like this:
tests["Response time is less than 200ms"] = responseTime < 200;
which could help you, as the test fails if the response does not arrive within the requested time.
Alexandre
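If you need a cap on the whole polling loop rather than on a single response, one possible approach (a sketch only; the variable names and the response shape are assumptions, not part of the original collection) is to record a start time in an environment variable and stop re-queuing the request once a deadline has passed:
// In the Tests tab of the "Check Complete" request (names are illustrative)
const startedAt = Number(pm.environment.get("pollStartedAt")) || Date.now();
pm.environment.set("pollStartedAt", startedAt);

const status = pm.response.json().status;  // assumes the API returns a "status" field
if (status === "complete") {
    pm.environment.unset("pollStartedAt");  // done, clean up and let the run continue
} else if (Date.now() - startedAt > 2 * 60 * 1000) {
    pm.test("Status became complete within 2 minutes", function () {
        pm.expect.fail("Timed out waiting for completion");
    });
    pm.environment.unset("pollStartedAt");  // stop looping so the run can finish
} else {
    postman.setNextRequest("Wait 10 seconds");  // keep polling
}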
If you are going to be using a Jenkins pipeline, you can use the timeout step to make long-running jobs result in failure; here's one for 2 minutes.
timeout(time: 2, unit: 'MINUTES') {  // note: a bare timeout(120) would be 120 minutes, since the default unit is MINUTES
    node {
        sh 'newman command'
    }
}
Check out the "Pipeline Syntax" editor in Jenkins to generate your code block and to look for other useful functions.
I have a JobA which, once complete, triggers JobB, JobC, and JobD.
I am trying to have JobB start immediately, while JobC and JobD keep their build requests in the queue until 10 PM at night.
JobA is triggered at 12 PM, so right now I am using the Quiet period option set to 5 hrs. It would be really nice if JobC and JobD could behave as described above.
Is this possible in Jenkins?
Thanks.
One way you could do it is by writing a small Groovy Postbuild script.
You would basically call scheduleBuild and supply a quiet period based on how many more seconds there are until 10 PM.
Here is some untested pseudo code:
def duration = tenPM - now;
manager.build.project.scheduleBuild( duration.seconds, new Cause.UpstreamCause( build ), new ParametersAction(params));
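A slightly more concrete sketch (also untested; it assumes the Groovy Postbuild plugin, that params is the list of ParameterValue objects you want to pass along, and it may need script approvals):
// compute the number of seconds from now until 10 PM today
def tenPM = Calendar.instance
tenPM.set(Calendar.HOUR_OF_DAY, 22)
tenPM.set(Calendar.MINUTE, 0)
tenPM.set(Calendar.SECOND, 0)
int quietSeconds = Math.max(0, (int) ((tenPM.timeInMillis - System.currentTimeMillis()) / 1000))

// look up the downstream job (JobC here) and schedule it with that quiet period
def target = jenkins.model.Jenkins.instance.getItemByFullName('JobC')
target.scheduleBuild(quietSeconds,
    new hudson.model.Cause.UpstreamCause(manager.build),
    new hudson.model.ParametersAction(params))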