Gerrit Trigger doesn't work when a build is running for that gerrit - jenkins

This is a weird one.
On Jenkins, with Gerrit Trigger, I can trigger my Jenkins job just fine using any of the configured triggers.
However, while that job is running, I cannot trigger the same job again from the same Gerrit.
I can trigger the same job from other Gerrits, or trigger other jobs from that Gerrit, but not the same job from the same Gerrit.
The funny thing is that the new build isn't even queued. The trigger is just completely ignored, and the build doesn't start even when the running job has finished.
Things I've checked: "Do not allow concurrent builds" is not checked (it indeed isn't), and the label has enough executors available (it does).
Any idea what I could be missing?
Thanks!

Related

How to avoid scheduling/starting multiple runs of a Jenkins job at the same time

We are moving our build system over from Hudson to Jenkins, and also to declarative pipelines in SCM. Alas, it looks like there are some hiccups. In Hudson, when a job was scheduled and waiting in the queue, no new builds of that project were scheduled, which makes perfect sense. In Jenkins, however, I observe that e.g. 5 instances of a job get started at the same time, triggered by various upstream or SCM change events. They have all even started, in a way: one of them is actually running on the build node and the rest are waiting with "Waiting for next available executor on (build node)". When the build node becomes available, they all dutifully start running in turn and all dutifully run through, most of them to no purpose at all as there are no further changes, and all of this takes a huge amount of time.
The declarative pipeline script in SCM starts with the agent declaration:
pipeline {
    agent {
        label 'BuildWin6'
    }
    ...
I guess the actual problem is that Jenkins starts running these jobs even though the specified build node is busy. Maybe it thinks I might have changed the Jenkinsfile in SCM and specified another build node to run the thing on? Anyway, how do I avoid this? This is probably something obvious, as googling does not reveal any similar complaints.
For the record, answering myself. It looks like the best solution is to define a separate trigger job which is itself triggered by SCM changes. It should do nothing else, only check out the needed svn repos (with depthOption: 'empty' for space and speed). The job needs to be bound to run on the same agent as the main job.
The main job is triggered only by this trigger job, not by SCM changes. Now if the main job is building for an hour, and there are 10 svn commits during that time, Jenkins will schedule 10 trigger job builds. They all wait in the queue while the agent is busy. When the agent becomes available, they all run quickly through and trigger the main job. The main job is triggered only once; for that, one must ensure its grace/quiet period is longer than the trigger job's run time.
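For illustration, here is a rough declarative Pipeline sketch of such a trigger job. The SVN URL, job name, label and quiet period are placeholders, and the same setup can of course be built with freestyle jobs instead:

pipeline {
    agent {
        label 'BuildWin6'
    }
    triggers {
        // Poll the SVN repo that the checkout step below registers.
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Sparse checkout') {
            steps {
                // depthOption: 'empty' checks out almost nothing, which is
                // enough for change detection and costs little space/time.
                checkout([$class: 'SubversionSCM',
                          locations: [[remote: 'https://svn.example.com/repo/trunk',   // placeholder
                                       depthOption: 'empty']]])
            }
        }
        stage('Trigger main job') {
            steps {
                // Fire the main job without waiting; the quiet period lets
                // Jenkins collapse a burst of queued trigger builds into one run.
                // (Alternatively, configure the quiet period on the main job itself.)
                build job: 'main-job', wait: false, quietPeriod: 900
            }
        }
    }
}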

Jenkins builds are being triggered without a trigger

I inherited a project set up to use Jenkins. I set the trigger to poll SCM every 5 minutes and disabled the pre-existing "build remotely" trigger, so now the only trigger set for the job is to poll SCM every 5 minutes. However, Jenkins builds the project every time the project repo is updated, and NOT on polling. The polling log specifically says that it won't build (because the author of the commit is in the excluded list), but the project builds anyway, unrelated to the polling. There are no other triggers set.
I didn't find any scripts anywhere that could explain this behaviour.

Jenkins: how do I automatically restart a triggered build

I have one Jenkins job that triggers another job via "Trigger/call builds on other projects."
The triggered downstream job sometimes fails due to environmental reasons. I'd like to be able to restart the triggered job multiple times until it passes.
More specifically, I have a job which does the following:
Triggers a downstream job to configure my test environment. This process is sensitive to environmental issues and may fail. I'd like this to restart multiple times over a period of about an hour or two until it succeeds.
Triggers another job to run tests in the configured environment. This should not restart multiple times, because any failure here should be inspected.
I've tried using Naginator for step 1 above (the configuration step). The triggered job does indeed re-run until it passes. Naginator looks so promising, but I'm disappointed to find that when the first execution of the job fails, the upstream job fails immediately, even though a subsequent rebuild of the triggered job passes. I need the upstream job to block until the downstream set of jobs passes (or ultimately fails) via Naginator.
Can someone help me understand what my options are to accomplish this? Can I configure things differently for the upstream job so that it works better with the Naginator-managed job? I'm not wedded to Naginator and am open to other plugins or options.
In case it's helpful, my organization is currently using Jenkins 1.609.3, which is a few years old. I'd consider upgrading if that leads to a solution.
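For what it's worth, on a Jenkins new enough to run Pipeline, one alternative to Naginator would be to let the upstream job itself retry the flaky downstream build. This is only a sketch under that assumption, and the job names are placeholders:

pipeline {
    agent any
    stages {
        stage('Configure test environment') {
            steps {
                // build() throws when the downstream job fails, so retry()
                // re-runs it; timeout() caps the whole effort at two hours.
                timeout(time: 2, unit: 'HOURS') {
                    retry(20) {
                        build job: 'configure-test-env'   // placeholder job name
                    }
                }
            }
        }
        stage('Run tests') {
            steps {
                // Run once; any failure here should surface for inspection.
                build job: 'run-tests'                    // placeholder job name
            }
        }
    }
}

This keeps the upstream run blocked until the configuration job has actually passed (or the retries are exhausted), which is the behavior the Naginator setup above was not providing.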

Triggering a remote job and blocking the actual job while the remote job is being built is not working in Jenkins

I have a Jenkins job that triggers a remote parameterized job. I have checked the box next to the "Block until the remote triggered projects finish their builds" option. Sometimes it works just fine, but occasionally the first job is not blocked while the triggered remote job is being built.
Check the following snippet from the log:
16:07:00 Blocking local job until remote job completes
16:07:00 Remote build started!
16:07:00 Remote build finished with status SUCCESS.
It seems that the remote job finished successfully in only 1 second, but in fact the remote job has a build time of approximately 10 minutes, and I checked that it started correctly and was still running when the calling job logged this and moved on.
Any idea on what is wrong with the blocking?
I think that what is finishing in 1 second is just the queuing and firing up of the remote job. Not sure if that's intended or a bug (you might want to ask about this on the plugin's JIRA page).
It seems, though, that you need to block your parent job yourself until the remote job has actually finished. You might be able to work out a polling scheme (say, check the status every 10 seconds or so) based on the environment variables that are available when you use the "Block until..." option, especially TRIGGERED_BUILD_RESULT_<project name> (see https://wiki.jenkins.io/display/JENKINS/Parameterized+Trigger+Plugin).
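One way to implement such a polling scheme is a small Groovy script (for example as a system Groovy build step after the trigger step) that watches the remote build's JSON API until it reports a result. This is only a sketch: the remote URL and job name are placeholders, authentication is left out, and lastBuild may not be the exact build you triggered, so ideally you would use the build URL/number reported when the remote job was fired:

import groovy.json.JsonSlurper

// Placeholder remote Jenkins URL and job name; add credentials as needed.
def remoteApi = 'https://remote-jenkins.example.com/job/remote-job/lastBuild/api/json'

def result = null
while (result == null) {
    Thread.sleep(10 * 1000)                          // check every 10 seconds
    def json = new JsonSlurper().parseText(new URL(remoteApi).text)
    // 'result' stays null while the build is still running.
    result = json.building ? null : json.result
}
if (result != 'SUCCESS') {
    throw new RuntimeException("Remote job finished with status ${result}")
}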

Jenkins Git SCM Polling stops when the job for which polling is happening is running

We have a job in a Jenkins environment which is triggered by changes found in the Git source code repository.
While that job is running, the Git polling log shows nothing; it only shows entries once the job has finished.
Note also that the "enable concurrent builds" option is not set, to make sure only one build runs at a time.
I would like to understand whether it is known Jenkins behavior to halt polling while the job is running, and whether this depends on the concurrent builds option being enabled or not.
I had a similar problem and discovered this: https://issues.jenkins-ci.org/browse/JENKINS-7423
It looks like it's related to polling requiring a workspace in order to perform the checkout. You can manually kick off new builds and they will pick up the SCM changes.