Trouble aborting pipeline jobs
We recently converted some of our jobs over to pipeline jobs (specifically, multibranch pipeline jobs), and since we did so, stopping builds has become much more problematic.
Back when they were freestyle jobs, the builds would immediately stop when we clicked the red X next to a build.
But when we do that in a pipeline job, often it won't stop. In the console output, we'll get something like this:
Aborted by [USERNAME]
Sending interrupt signal to process
Click here to forcibly terminate running steps
We have to click that link on the third line, and then it often leaves Mercurial repositories in a bad state.
Why did stopping builds work fine with freestyle jobs, but not with Pipeline jobs, and is there any way to get them working well with Pipeline jobs?
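Not an answer to the underlying question, but one partial mitigation is to bound long-running steps with a timeout, so a stuck step is eventually killed without someone having to click the terminate link. A minimal sketch (the stage name and the make build command are placeholders):

```groovy
// Sketch only: bounding a potentially stuck step with timeout().
// The stage name and the 'make build' command are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // If the step exceeds 30 minutes, Jenkins aborts it
                // without waiting for a manual forcible termination.
                timeout(time: 30, unit: 'MINUTES') {
                    sh 'make build'
                }
            }
        }
    }
}
```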
Related
We are moving our build system from Hudson to Jenkins, and also to declarative pipelines stored in SCM. Alas, it looks like there are some hiccups. In Hudson, when a job was scheduled and waiting in the queue, no new builds of that project were scheduled, which makes perfect sense. In Jenkins, however, I observe that e.g. 5 instances of a job are started at the same time, triggered by various upstream or SCM change events. They have all even kind of started: one of them is actually running on the build node, and the rest are waiting with "Waiting for next available executor on (build node)". When the build node becomes available, they all dutifully start running in turn and all dutifully run through, most of them to no purpose at all as there are no further changes, and this all takes a huge amount of time.
The declarative pipeline script in SCM starts with the node declaration:
pipeline {
    agent {
        label 'BuildWin6'
    }
    ...
}
I guess the actual problem is that Jenkins starts running these jobs even though the specified build node is busy. Maybe it thinks I might have changed the Jenkinsfile in SCM and specified another build node to run the thing on? Anyway, how do I avoid this? This is probably something obvious, as googling does not reveal any similar complaints.
For the record, answering myself. It looks like the best solution is to define a separate trigger job which is itself triggered by SCM changes. It should do nothing else, only check out the needed SVN repos (with depthOption: 'empty' for space and speed). The job needs to be bound to run on the same agent as the main job.
The main job is triggered only by the trigger job, not by SCM changes. Now if the main job is building for an hour and there are 10 SVN commits during that time, Jenkins will schedule 10 builds of the trigger job. They all wait in the queue because the agent is busy. When the agent becomes available, they all run through quickly and trigger the main job. The main job is triggered only once; for that, one must ensure its quiet period is larger than the trigger job's run time.
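The trigger job described above could be sketched roughly like this (the agent label matches the earlier snippet, but the polling schedule, repository URL, and main job name are all placeholders):

```groovy
// Sketch of the lightweight trigger job: runs on the same agent as
// the main job, does an empty-depth SVN checkout, triggers the main job.
pipeline {
    agent { label 'BuildWin6' }          // same agent as the main job
    triggers { pollSCM('H/5 * * * *') }  // assumed polling schedule
    stages {
        stage('Trigger') {
            steps {
                checkout([$class: 'SubversionSCM',
                          locations: [[remote: 'https://example.org/svn/repo', // placeholder URL
                                       depthOption: 'empty']]])  // empty checkout for speed
                build job: 'main-job', wait: false  // 'main-job' is a placeholder name
            }
        }
    }
}
```

With wait: false the trigger job finishes immediately, so queued trigger builds drain quickly once the agent frees up, and the main job's quiet period collapses them into a single build.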
I have a Jenkins pipeline that has 6 stages.
In 4 of the 6 stages the pipeline triggers Jenkins jobs using build job.
Both the pipeline and the job are set to allow concurrent parallel executions.
The pipeline and the jobs all run on a single node with the number of executors set to 10.
Everything works fine when I run 10 parallel concurrent pipelines.
But if I run more than 10 parallel pipelines, they all seem to go into a deadlock: none of them completes no matter how long you wait, and it seems they are all waiting on each other to complete.
If I kill the 11th execution then all the 10 start completing successfully.
My requirement is that if someone starts more concurrent builds of a pipeline than the number of executors on the node it runs on, then the first 10 should complete in parallel and the 11th onwards should wait and run in a second batch of 10, rather than everything going into a hung state.
Please help me understand whether this is a bug in the latest Jenkins version, and what the workaround is to keep the pipeline builds from hanging.
The issue can be that the master or the node runs out of CPU and/or memory.
You can also look at Jenkins master/slave node logs for exceptions.
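For reference, a minimal sketch of the setup the question describes (the label, stage, and job names are placeholders). Note that with wait: true, the pipeline keeps its own executor occupied while the downstream job waits for a second executor on the same node, which is consistent with the observed behaviour once all 10 executors are held by waiting pipelines:

```groovy
// Sketch of the questioner's pattern; names are placeholders.
pipeline {
    agent { label 'single-node' }   // the node with 10 executors
    stages {
        stage('Downstream') {
            steps {
                // Holds this pipeline's executor while blocking until
                // 'downstream-job' gets an executor of its own.
                build job: 'downstream-job', wait: true
            }
        }
    }
}
```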
I'm running a single-node Jenkins instance on a project where I want to clean up closed PRs intermittently. The system can keep up just fine with the average PR submission rate; however, when we re-run the scan repository against the project the system encounters a ton of parallel build congestion.
When we run scan repository, the multibranch pipeline fires off the separate stages, and those generally complete sequentially without issue and without hitting timeouts; however, the priority of declarative post actions seems lower than that of the other stages (no special plugins are installed that would cause that, AFAIK). The behavior exhibited is that parallel builds start running and impact the total run time for any one branch or pull request, such that all of the builds might show 60 or 90 minute build times instead of the usual 6-10 minutes, even when the only task remaining is filling out the checkstyle reports or whatever minor notification tasks there are.
Is there a way to dedicate a single executor thread on the master node to declarative post actions, so that one branch or PR can run end-to-end without getting stuck waiting for executors that have suddenly been picked up by a different PR or branch running the earlier (and computationally expensive) stages like linting code and running unit tests? The goal is to avoid ultimately hitting a timeout.
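For context, the shape in question looks roughly like this (stage contents and report paths are placeholders): the expensive work happens in the stages, while the post block carries only the lightweight reporting steps that end up waiting behind other branches.

```groovy
// Sketch of the declarative shape described; contents are placeholders.
pipeline {
    agent any
    stages {
        stage('Lint') { steps { sh 'make lint' } }   // expensive early stage
        stage('Test') { steps { sh 'make test' } }
    }
    post {
        always {
            // Lightweight reporting/notification steps.
            junit 'reports/**/*.xml'
        }
    }
}
```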
I have one Jenkins job that triggers another job via "Trigger/call builds on other projects."
The triggered downstream job sometimes fails due to environmental reasons. I'd like to be able to restart the triggered job multiple times until it passes.
More specifically, I have a job which does the following:
Triggers a downstream job to configure my test environment. This process is sensitive to environmental issues and may fail. I'd like this to restart multiple times over a period of about an hour or two until it succeeds.
Triggers another job to run tests in the configured environment. This should not restart multiple times, because any failure here should be inspected.
I've tried using Naginator for step 1 above (the configuration step). The triggered job does indeed re-run until it passes. Naginator looked so promising, but I'm disappointed to find that when the first execution of the job fails, the upstream job fails immediately, even though a subsequent Naginator rebuild of the triggered job passes. I need the upstream job to block until the downstream job either passes or finally fails under Naginator.
Can someone help me understand my options here? Can I configure the upstream job differently so that it plays better with the Naginator-managed job? I'm not wedded to Naginator and am open to other plugins or options.
In case it's helpful, my organization is currently using Jenkins 1.609.3, which is a few years old. I'd consider upgrading if that leads to a solution.
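For what it's worth, on a newer Jenkins with Pipeline, the retry-until-success behaviour described in step 1 can be expressed directly. A sketch (the job names, the retry budget, and the two-hour bound are assumptions taken from the question):

```groovy
// Sketch: retry the environment-setup job for up to ~2 hours,
// then run the test job exactly once. Job names are placeholders.
timeout(time: 2, unit: 'HOURS') {
    retry(10) {                                   // assumed retry budget
        build job: 'configure-test-environment'   // step 1: may fail transiently
    }
}
build job: 'run-tests'                            // step 2: no automatic retry
```

Because the build step throws on failure, retry re-runs it until it succeeds or the retry budget or timeout is exhausted, and the upstream build blocks the whole time, which is the behaviour Naginator doesn't provide here.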
I would like to build only one project at a time on the entire Jenkins server. I have tried the Throttle Concurrent Builds plugin and the Lockable Resources plugin, but with no luck, as the Lockable Resources plugin doesn't give me an option to lock in a pipeline job.
I have 3 Jenkins pipeline jobs (each job has a pipeline script):
JOB1
JOB2
JOB3
which share a common step at the beginning of the job (clearing content).
Running them one by one manually causes no problem as long as each job completes, but if JOB1 is building and JOB2 starts in between, it interrupts JOB1 and JOB1's build fails.
Even when I start jobs using the CLI, you never know which job might already be running. So I'm looking for a solution that blocks JOBY if JOBX is running (where X and Y can be 1, 2, or 3) and allows only one job to run on the entire Jenkins server. Like I said, the Throttle Concurrent Builds plugin gives a customization option only for the respective job, not across multiple jobs.
Can anyone suggest a solution to ensure that only one of these pipeline jobs runs at a time?
Install the Build Blocker Plugin.
In the configuration of JOBY, check "Block build if certain jobs are running".
Put the JOBX names in the Blocking Jobs text area, each job on a new line.
Note that you can also use a regex to define, in a single line, jobs having the same prefix but ending with different numbers.
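For example, assuming the jobs are literally named JOB1, JOB2, and JOB3, each job's Blocking Jobs field could contain a single pattern covering the other two (a sketch of the regex option mentioned above):

```
JOB[1-3]
```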