I have a job that can run concurrently. The tricky part is that the builds need an interval of at least 30 seconds between one another. So when Build1 starts, Build2 should wait 30 seconds before starting.
I already tried the quiet period, but it does not fit my needs (it only applies when a job is not triggered by Build Now or Build With Parameters).
Is there a way to enforce this kind of condition?
You can try enumerating the other builds of this job and sleeping until every other running build has been running for at least 30 seconds. See the example code in this answer.
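Roughly, that wait could look like the following in a scripted Pipeline. This is a minimal sketch, assuming 'Execute concurrent builds if necessary' is enabled and that script approval permits the rawBuild/getBuilds accesses; the 30-second threshold is the one from the question.

```groovy
// Sketch: delay this build until every other running build of the same job
// has been running for at least 30 seconds.
def THRESHOLD_MS = 30 * 1000L
def me = currentBuild.rawBuild              // needs script approval
def job = me.getParent()

while (true) {
    // age (ms) of the youngest other build that is still running
    def youngestAge = job.getBuilds()
        .findAll { it.isBuilding() && it.getNumber() != me.getNumber() }
        .collect { System.currentTimeMillis() - it.getStartTimeInMillis() }
        .min()
    if (youngestAge == null || youngestAge >= THRESHOLD_MS) {
        break                               // no other running build is younger than 30s
    }
    sleep time: ((THRESHOLD_MS - youngestAge) as int), unit: 'MILLISECONDS'
}

// ...actual build steps go here...
```

Note that this is still best-effort: two builds that start at almost the same moment can both pass the check, so treat it as a mitigation rather than a hard guarantee.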
I am building a project in Jenkins and want to launch tests right after it, wait until the tests are finished, and then run another job to analyze the results. The testing system is a closed system (I can't modify it), so to check whether the tests are finished I need to query the system every X seconds. One way to do that is to create a job that queries the system, but that job would occupy an executor slot (I could create 1000 slots, but that feels like a hack). Is there another way to make the job "sleep" while it waits for the next X seconds, so it doesn't occupy a slot while waiting for another process to finish?
You can trigger one Jenkins job from another. No need to make jobs sleep or do anything complicated like that. Look at upstream and downstream triggers using the Parameterized Trigger plugin.
https://plugins.jenkins.io/parameterized-trigger
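If you are on Pipeline rather than freestyle jobs, the equivalent of a downstream trigger is the build step. A minimal sketch (the job name 'analyze-results' and the parameter name are placeholders):

```groovy
// Sketch: trigger the analysis job as a downstream build, passing a parameter.
// wait: false returns immediately instead of blocking this build on the downstream one.
build job: 'analyze-results',
      parameters: [string(name: 'TESTED_BUILD', value: env.BUILD_NUMBER)],
      wait: false
```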
I am dealing with a build pipeline that has some very long wait times in between builds due to outside dependencies. I've found that you can indeed tell a build to sleep before it executes its build steps here. However, I was wondering if there is a limit to how long the sleep can last for. In some cases I'd like builds to wait for 24 hours between builds in the pipeline, inputting 24hrs as 86400 seconds is a little unsettling, but I suppose it's not that unreasonable.
There is no implicit limit within Jenkins. It will be limited by the reliability of your infrastructure and the like.
If you are using Jenkins Pipeline, ensure that the waits do not occur while consuming an executor (i.e. inside a node block).
It may be better (again, if using Pipeline) to use a timeout() block rather than an arbitrary sleep, so the build resumes as soon as it is ready.
https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#code-timeout-code-enforce-time-limit
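For the plain sleep approach, here is a minimal scripted-Pipeline sketch of keeping the wait outside any node block (the 24-hour figure is the one from the question; the build command is a placeholder):

```groovy
// Sketch: the long wait runs on the lightweight flyweight executor,
// so no regular executor slot is held during the 24 hours.
stage('Wait for external dependency') {
    sleep time: 24, unit: 'HOURS'
}

// Only take an executor once the actual work starts.
stage('Build') {
    node {
        checkout scm            // assumes the Pipeline is configured from SCM
        sh './build.sh'         // placeholder for the real build steps
    }
}
```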
I have a background job that runs every minute, and it has been working fine for the last few weeks. All of a sudden, when I logged in today, every job is failing instantly with "max concurrent job limit reached". I have tried deleting the job and waiting 15 minutes so that any currently running job can finish, but when I schedule the job again it just starts failing every time, as before. I don't understand why Parse thinks I am running a job when I am not.
Someone deleted my previous answer (not sure why). This is, and continues to be, a Parse bug. See this Google Groups thread where other people report the issue (https://groups.google.com/forum/#!msg/parse-developers/_TwlzCSosgk/KbDLHSRmBQAJ); there are several open Parse bug reports about this as well.
Let's say I have a repo where each push (build) starts 4 jobs (different environments/compilers etc.).
There is a time limit for builds - 50 minutes. Is it counted against the sum of the times of all jobs (as shown in the left panel), or is it independent for each job?
Example: 4 jobs, each taking 20 minutes - will it time out because it counts as 80 minutes, or will it be fine and count as 20 minutes (the time of the longest job)?
The Travis CI documentation is pretty clear about this. A build consists of one or many jobs. The limit is enforced for each job:
There is no timeout for a build; a build will run as long as all the jobs do as long as each job does not timeout.
For example, the current timeout for a job on travis-ci.org is 50 minutes (and at least one line printed to stdout/stderr per 10 minutes).
So in your example, four jobs of 20 minutes each are fine: each job is well under the 50-minute limit, even though the total shown in the panel is 80 minutes.
I have a job that kicks off on any commit. It takes 5-10 minutes to run.
But if (say) 4 or 5 git commits come in back-to-back, I don't want 4 or 5 jobs to run - just one job for the last commit. So basically, if there is a job of type "X" in the build queue, I don't want another job of type "X" added to the queue.
That should be the default behavior if you're using the SCM trigger, default job parameters, and don't check the 'Execute concurrent builds if necessary' option.
First job is going to queue and run immediately.
On source change, next job is going to queue and wait until first one is complete.
A third SCM change would detect the job already in the queue and not do anything.
When first job is done, next one will start - and will use whatever is in the SCM at the moment it starts (not the moment it was scheduled).
That behavior can be changed using parameters, concurrent builds, job throttling, etc. My knowledge there might also be outdated (Jenkins is evolving pretty fast).
On a side note: multiple builds are not necessarily a bad thing - they give you failure locality, which can help you identify the offending commit faster. It doesn't matter much for 10-minute builds, but if your build grows longer than that it can be a problem (with a large team, you can have a LOT of commits in 30 minutes).
Basically you just want to check for a new commit every 5 or 10 minutes? You can do that in the trigger configuration: poll source control every X minutes (cron syntax: */15 * * * * for every 15 minutes).
If you check for new commits every 15 minutes and your job only takes 10 minutes to run, there is no chance of another execution pending (unless someone triggers a "manual" build...).
To avoid the latter case, you may consider the Throttle Concurrent Builds plugin.
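If the job is a Pipeline rather than a freestyle job, the same setup can be written directly in the Jenkinsfile. A minimal declarative sketch (the polling interval matches the answer above; the build command is a placeholder):

```groovy
// Sketch: poll SCM on a schedule and forbid concurrent builds, so a burst of
// commits collapses into at most one queued build behind the running one.
pipeline {
    agent any
    options {
        disableConcurrentBuilds()       // queued requests for this job coalesce
    }
    triggers {
        pollSCM('H/15 * * * *')         // every 15 minutes; H spreads the load, */15 also works
    }
    stages {
        stage('Build') {
            steps {
                sh './build.sh'         // placeholder for the real build
            }
        }
    }
}
```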