Just looking for input on this one, but is there any benefit to disabling or deleting jobs?
Does disabling a job force the slave/node to remove any workspaces for that job, or does disabling change how much the master has to work through?
TIA for any feedback.
Disabling a job is useful when it is a scheduled job, or one triggered by a hook.
A few cases where disabling a job can be of interest:
You know a third-party service is having issues and you don't want your job to partially execute and fail; you can disable it until the issue is resolved by the team in charge of that service.
Someone on your team wants to give a demo on a server that gets updated by Jenkins and would like the server not to be redeployed during the demo.
You are working on a job that isn't finished yet, something urgent comes up, and you want to disable the job until you can get back to it.
Finally, in my opinion, if you only have pipeline jobs generated by a Jenkinsfile for each project, you won't ever need to disable a job.
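For completeness, if you ever do need to flip the disabled flag on jobs in bulk, it can be scripted. A minimal sketch for the Jenkins Script Console, assuming a recent Jenkins where both freestyle and pipeline jobs expose setDisabled(boolean); 'my-folder/my-job' is a placeholder name:

    import jenkins.model.Jenkins

    // Look the job up by its full (folder-qualified) name.
    def job = Jenkins.instance.getItemByFullName('my-folder/my-job')
    job.setDisabled(true)   // pass false to re-enable
    job.save()              // persist the change to disk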
Actually, there is no real benefit to disabling or deleting jobs. So far I have seen only one: when you chain Jenkins jobs into a pipeline, disabling lets you test each upstream job without mistakenly triggering its downstream jobs.
Disabling a job doesn't remove any workspaces on the slave/node, and it doesn't influence how much the master has to work through.
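For what it's worth, if workspace disk usage on the nodes is the actual concern, cleanup has to be done explicitly. A minimal declarative sketch using the Workspace Cleanup plugin's cleanWs() step (assumes that plugin is installed; the build step is a placeholder):

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'make'   // placeholder build step
                }
            }
        }
        post {
            always {
                cleanWs()   // wipe this job's workspace on the node after every run
            }
        }
    }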
For internal reasons, one of my jobs is able to run concurrently, but new builds abort themselves if another build is already running (disabling concurrency doesn't help, since I don't want new jobs to be scheduled for execution once the current build is done).
However, this behaviour is detrimental to the job status preview (the colored ball next to the job name inside the job's parent folder). It often shows the status as "aborted", which is undesirable; I want the latest running build to be the source of the job status.
I tried deleting aborted builds from within their own execution, but that's unfortunately neither trivial nor stable, and thus not suitable for this situation. I could probably get a workaround running that deletes them from a separate job, but that's not ideal either.
Anyway, I'm now hoping that I can just tell Jenkins to ignore "aborted" builds in the calculation of the job preview. Unfortunately, I wasn't able to find a setting or plugin that allows me to do this.
Is this possible at all? Or do I need to find another way?
Would something like this help?
https://plugins.jenkins.io/build-blocker-plugin/
I haven't used it myself, but it supports blocking builds completely if a build is already running.
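Another hedged option, if you stick with the self-cancelling approach: in a scripted pipeline, end the redundant build early with a result of NOT_BUILT instead of ABORTED, so "aborted" never becomes the job's last status. Here anotherBuildRunning() is a hypothetical placeholder for whatever check you already use, and note the no-op builds still show grey, so this is a workaround rather than a fix:

    // anotherBuildRunning() is hypothetical -- substitute your existing check.
    if (anotherBuildRunning()) {
        currentBuild.result = 'NOT_BUILT'   // recorded as "not built", not "aborted"
        echo 'Another build is already running; skipping this one.'
        return                              // ends the pipeline without throwing
    }
    node {
        stage('Build') {
            echo 'real build steps go here'   // placeholder
        }
    }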
The scenario is as follows:
Initially we scheduled 3 jobs in Jenkins, named core, api-simulator and ui-simulator. core is the main build and is triggered once a week. api-simulator and ui-simulator are dependent on it: once core has executed, it triggers both of the other jobs. So we can say core is the parent of the other two.
Somehow, someone changed the build trigger rule for the core job and set it to run 3 times every hour. So it executed a large number of tasks for the core job, and meanwhile it triggered the other two jobs as well.
To stop all those executions, I disabled all 3 jobs.
But now we are facing an issue: when we manually click Build Now, it starts building, but every time it also generates the next task. If we keep it running, it will generate new scheduled builds one by one, continuously.
Even when I created a new job for the same repository and tried it, I faced the same issue.
screenshot of execution
Does anyone have an idea how to stop those auto-triggered scheduled tasks, or how to remove all of them?
It seems that you've configured your main job to trigger the downstream jobs, and at the same time, in your child jobs, you've configured the main job as an upstream trigger.
So re-check your configuration, and if that is the case, remove the upstream trigger from the child jobs.
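For illustration, in declarative pipeline terms the redundant half of that double wiring would look something like this in a child job's Jenkinsfile (job names taken from the question); if core already triggers the children explicitly, this triggers block is the part to remove:

    pipeline {
        agent any
        triggers {
            // Redundant if core already triggers this job explicitly:
            upstream(upstreamProjects: 'core',
                     threshold: hudson.model.Result.SUCCESS)
        }
        stages {
            stage('Simulate') {
                steps {
                    echo 'simulator steps go here'   // placeholder
                }
            }
        }
    }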
Thanks @biruk1230 for your answer.
It was resolved by a minor configuration change in Jenkins: I updated the "Branches to build" value to "master"; previously it was "*/master".
You can view the details at the reference link.
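For reference, here is what the intended setup looks like expressed in a declarative Jenkinsfile: a genuinely weekly cron trigger plus an exact branch name instead of the "*/master" wildcard. The repository URL and the day of the week are placeholders:

    pipeline {
        agent any
        triggers {
            cron('H H * * 0')   // once a week (Sunday, hash-spread time)
        }
        stages {
            stage('Checkout') {
                steps {
                    // An exact branch name avoids the ambiguity of the
                    // '*/master' wildcard specifier mentioned above.
                    git url: 'https://example.com/repo.git', branch: 'master'
                }
            }
        }
    }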
I have just set up a basic multibranch pipeline build job and have a feature branch with a Jenkinsfile present, which I am experimenting with.
I have the job configured to poll the SCM every 5 minutes and trigger a build if necessary.
I am finding, however, that when I manually start a build for my branch after pushing up a tweak to my Jenkinsfile, for example (as I don't want to wait for the next SCM poll interval), the branch re-indexing activity can still trigger another build.
See the image below for what I mean: build 7 is one I kicked off manually in Jenkins, so it picked up my commit, but then branch indexing kicked off build 8 even though there were no new changes on the branch.
Is there a way to prevent this from happening? Other than me being patient and waiting 5 minutes, of course!
Thanks
I experienced this too. I tried playing with the polling settings in the multibranch pipeline to see if they could fix it, but that only led to no builds being spawned at all in response to SCM changes.
So the behaviour you refer to was the best of a bad lot.
At the time of writing, I believe this behaviour is a bug with no current resolution.
I can't remember whether I raised a ticket, as I am in the Azure DevOps world now.
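One mitigation worth knowing about (hedged, as I haven't verified it against this exact scenario): the overrideIndexTriggers option from the multibranch plugin disables branch-indexing-triggered builds for a single branch's Jenkinsfile. The trade-off is that legitimate indexing-triggered builds are suppressed too, so you would be relying on polling or webhooks alone:

    pipeline {
        agent any
        options {
            // Tell branch indexing not to schedule builds for this branch;
            // SCM polling and webhooks still apply.
            overrideIndexTriggers(false)
        }
        stages {
            stage('Build') {
                steps {
                    echo 'build steps go here'   // placeholder
                }
            }
        }
    }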
Using Jenkins, I am running two builds (Linux + Windows) and one Doxygen job.
At the moment, I am using three separate SCM polling triggers pointing to the same source code.
How can I use a single trigger for all three jobs, given that I still want to get separate statuses?
For the record, the underlying SCM is Git.
Off the top of my head, some solutions which might do what you are looking for:
Instead of setting an SCM trigger, use a post-receive hook in your repository, which can notify Jenkins that there are new changes (see: https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin#GitPlugin-Pushnotificationfromrepository). This way Jenkins doesn't have to constantly poll the repository (multiple times for the different jobs), and the trigger is faster, since there is no waiting for the next poll: when there is a push, a build is started.
Use an extra job that does nothing else: it has the SCM polling trigger and starts all three original jobs (without waiting for any of them to finish); see the sketch after this list.
If the configuration is similar for all three jobs, you could consider creating a single project with a matrix configuration. Roughly, you would have a variable for the build type, with values like linux, windows and doxygen. When the job is triggered, it starts multiple builds with all the possible values; of course, you have to set up the job so that the current parameter changes the build process according to what needs to be done. Actually, I haven't had to use a matrix configuration yet, so my example may not be the best, but you can probably find lots of examples on the Jenkins wiki if you think this is a good direction.
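To make option 2 concrete, here is a minimal sketch of such a trigger-only job as a declarative pipeline; the repository URL and the three downstream job names are hypothetical placeholders:

    pipeline {
        agent any
        triggers {
            pollSCM('H/5 * * * *')   // the single polling trigger for everything
        }
        stages {
            stage('Checkout') {
                steps {
                    // pollSCM only polls what the job actually checks out,
                    // so the trigger job needs its own lightweight checkout.
                    git url: 'https://example.com/repo.git'   // placeholder URL
                }
            }
            stage('Fan out') {
                steps {
                    build job: 'linux-build',   wait: false   // hypothetical names
                    build job: 'windows-build', wait: false
                    build job: 'doxygen',       wait: false
                }
            }
        }
    }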
We have a three-layer multi-configuration job which at times fails, because some sub-job occasionally fails on some slaves.
We are looking at rebuilding the whole job across all slaves selected in the parent job, from the beginning, if any of the sub-jobs fail.
I have looked at the rebuild plugin, but I am also looking for a programmatic way of solving the problem; any guidance would help.
Try the Jenkins Remote Access API; it can do this.
https://wiki.jenkins-ci.org/display/JENKINS/Remote+access+API
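For example, a small Groovy sketch that schedules a new build of the parent job through that API; the URL, job name and credentials are placeholders, and a CSRF-protected instance may additionally require a crumb header:

    // Hypothetical values -- substitute your own instance, job and credentials.
    def jenkinsUrl = 'https://jenkins.example.com'
    def jobName    = 'parent-matrix-job'
    def auth       = 'user:apitoken'.bytes.encodeBase64().toString()

    // POSTing to /job/<name>/build schedules a new build (201 Created on success).
    def conn = new URL("${jenkinsUrl}/job/${jobName}/build").openConnection()
    conn.requestMethod = 'POST'
    conn.setRequestProperty('Authorization', "Basic ${auth}")
    println "Jenkins answered: ${conn.responseCode}"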