Prevent branch indexing from triggering builds when a manual build has already picked up changes - Jenkins

I have just set up a basic multibranch pipeline job and am experimenting with a feature branch that has a Jenkinsfile present.
I have the job configured to poll the SCM every 5 minutes and trigger a build if necessary.
I am finding, however, that when I manually start a build for my branch after pushing up a tweak to my Jenkinsfile (because I don't want to wait for the next SCM poll interval), the branch re-indexing activity can still trigger another build.
See the image below for what I mean: build 7 is one I kicked off manually on Jenkins, and it picked up my commit, but branch indexing then kicked off build 8 even though there were no new changes on the branch.
Is there a way to prevent this from happening? Other than me being patient and waiting 5 minutes, of course!
Thanks

I experienced this too. I tried playing with the polling settings on the multibranch pipeline to see whether that would fix it, but that only led to no builds being spawned at all in response to SCM changes.
So the behaviour you describe was the best of a bad lot.
At the time of writing, I believe this behaviour is a bug with no current resolution.
I can't remember whether I raised a ticket, as I am in the Azure DevOps world now.
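For anyone landing on this later: one workaround that gets shared for this situation (not part of the answer above, so treat it as a sketch) is to have the Jenkinsfile itself skip builds whose cause is branch indexing, via currentBuild.getBuildCauses. This assumes a scripted Jenkinsfile; in declarative pipelines the when { triggeredBy ... } condition is the analogous tool. Note it also skips builds where indexing picked up genuine new commits, so it only fits setups where webhooks, polling, or manual starts are the primary triggers.

// Place at the very top of a scripted Jenkinsfile, before any stages.
if (currentBuild.getBuildCauses('jenkins.branch.BranchIndexingCause')) {
    echo 'Triggered by branch indexing only; skipping to avoid a duplicate build.'
    currentBuild.result = 'NOT_BUILT'
    return   // ends the Pipeline script without running any stages
}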

Related

Using Multibranch Pipeline Jenkins job, is it possible to run branch indexing without re-running existing branch builds

I'm setting up a new Jenkins job using multibranch pipeline and I have noticed that when a branch is deleted, it only has a strikethrough and isn't actually removed on Jenkins. This is solved by re-running branch indexing. However, I cannot really use this as it will also cause every other branch to rebuild (a consequence of how the repository is updated). Is there some custom code or pipeline/script I can run to re-index without building?
I've already looked at various UI methods such as suppressing SCM triggers, but this also negates push events from Github which is something we want to use.
The deleted/merged branch's build will disappear after a period of time (<24 hours). It is not removed immediately, so that recently deleted/merged branches remain visible and you get a chance to review their prior build statuses. This is relatively harmless since the jobs for these branches are deactivated (read-only).
Note that the removal depends on the branch indexing job running at a regular interval, so if you have that disabled, the cleanup probably won't happen (I'm not sure the SCM webhook calls are enough).
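If you do want to force a re-index on demand rather than wait for the interval, a script console sketch along these lines is often shared (hedged: the job path is a placeholder, and API details can vary by Branch API version). Keep in mind the caveat from the question still applies: the scan will build any branch whose head has changed since the last index, though branches with unchanged heads are left alone.

// Run from Manage Jenkins -> Script Console.
// 'my-folder/my-repo' is a hypothetical full job name; replace with yours.
import jenkins.model.Jenkins
import jenkins.branch.MultiBranchProject

def project = Jenkins.get().getItemByFullName('my-folder/my-repo', MultiBranchProject)
if (project != null) {
    project.scheduleBuild()   // queues "Scan Repository Now" (branch indexing)
} else {
    println 'No multibranch project found at that path.'
}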

Can Jenkins job watch one repo for changes, to trigger a build on another repo?

Using Jenkins and pipelines:
I have one repo which drives deployments; changes to its Jenkinsfile should kick off a new build. But I need the Jenkins job that wraps it to watch another repo, the one representing the site (not its build process), and build when the site changes. The site is expected to change often, whereas the build process will hopefully soon be stable and not subject to much additional change.
Can anyone advise me on how to accomplish this?
Use the Jenkins build step.
In the repo experiencing the changes, have its Jenkins job kick off a build step (https://jenkins.io/doc/pipeline/steps/pipeline-build-step/) for the repo you want to build. You can configure the build step either to wait or not to wait for the result of the kicked-off job.
// 'jobName', 'env', and 'env_no' come from the surrounding script.
build(
    job: "org/${jobName}/${BRANCH_NAME}",
    parameters: [
        string(name: 'ENV', value: env),
        string(name: 'ENV_NO', value: env_no),
    ],
    propagate: false,   // don't mark this build failed if the downstream job fails
    wait: false         // fire and forget; don't block on the downstream result
)
Usually I'd recommend having the Jenkinsfile in the very same repository as your source code. Only that way do you get a combined history, so it'll be much easier to reproduce an older build from, let's say, one year ago.
However, if you still want to go for the separation: the git/checkout steps usually offer the possibility to add a hook in Jenkins so the job gets triggered automatically on changes.
If I understood your use case correctly, the Jenkinsfile will reach a stable state. If it's stable it won't change, and when there are no changes it won't trigger the job, right?
If that still is not enough, I think I would need more details on what you're trying to achieve and why.
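To make the multi-repo watching concrete, here is a minimal sketch of one way to do it (repo URL, branch name, and polling schedule are assumptions): because pollSCM polls every SCM the pipeline checked out on its last run, checking out the site repo inside the job means a change to the site alone will trigger a build.

// Hypothetical Jenkinsfile living in the deployment repo; the site repo
// URL and branch below are placeholders.
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')   // polls all SCMs checked out by this pipeline
    }
    stages {
        stage('Checkout site') {
            steps {
                dir('site') {
                    git url: 'https://example.com/org/site.git', branch: 'main'
                }
            }
        }
        stage('Build site') {
            steps {
                dir('site') {
                    sh './build.sh'   // placeholder for the actual site build
                }
            }
        }
    }
}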

Should branch indexing for Multibranch Pipeline jobs be triggered automatically by webhooks?

I've set up a number of Multibranch Pipeline jobs in Jenkins (running 2.46.2 LTS, Branch API 2.0.8, GitHub Branch Source 2.0.5, and Pipeline Multibranch 2.14) and have just noted that branch indexing -- and thus any cleanup of old branches -- does not appear to be triggered by the webhook calls from GitHub. It only appears to be triggered if someone manually clicks the "Scan Repository Now" link, or if the job configuration in Jenkins is re-saved. I'm using the timestamp shown in the "Scan Repository Log" page as an indication of when the branch indexing occurs.
It seems that new branches and changes to existing ones are being detected correctly and built, so the webhooks from source control (GitHub) are working, but I was surprised that this wasn't also triggering the branch indexing and thus the old-branch cleanup. I just can't tell from the documentation whether this is correct and expected behavior or whether something is wrong in my setup.
I note that the help text for the "Periodically if not otherwise run" setting says:
Some kinds of folders are reindexed automatically and immediately upon receipt of an external event. For example, a multi-branch project will recheck its SCM repository for new or removed or modified branches when it receives an SCM change notification. (Push notification may be configured as per the SCM plugin used for each respective branch source.) Such notifications can occasionally be unreliable, however, or Jenkins might not even be running to receive them. In some cases no immediate notification is even possible, for example because Jenkins is behind a firewall and can only poll an external system.
This trigger allows for a periodic fallback, but when necessary. If no indexing has been performed in the specified interval, then an indexing will be scheduled. For example, in the case of a multi-branch project, if the source control system is not configured for push notification, set a short interval (most people will pick between 15 minutes and 1 hour). If the source control system is configured for push notification, set an interval that corresponds to the maximum acceptable delay in the event of a lost push notification as the last commit of the day. (Subsequent commits should trigger indexing anyway and result in the commit being picked up, so most people will pick between 4 hours and 1 day.)
This certainly implies that indexing of a Multibranch Pipeline job should be re-triggered by branch events (e.g., pushes from GitHub via webhook), but the timestamp on my indexing log seems to belie that.
So, is what I'm observing the intended behavior? If so, and I want a regular cleanup of old branches, do I need to select the "Periodically if not otherwise run" checkbox under "Scan repository triggers"? Or is there something wrong with my setup, which is preventing it from working as intended?
According to the official documentation:
By default, Jenkins will not automatically re-index the repository for branch additions or deletions (unless using an Organization Folder), so it is often useful to configure a Multibranch Pipeline to periodically re-index in the configuration.
I depend on "Periodically if not otherwise run" for 1) cleanup of branches and 2) creation of container jobs for brand-new repos (I use "Bitbucket Team/Project", the Bitbucket version of "GitHub Organization", which basically creates a multibranch pipeline for every repo in your organization). I have "Periodically if not otherwise run" set to run once a day for each project.
It does seem like these things could work via webhook, but they do not in my experience.
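If you manage jobs as code, the daily fallback can be set up the same way. Here is a minimal Job DSL sketch, with the caveat that the owner, repo, and credentials ID are placeholders and the exact branch-source syntax depends on your Job DSL and branch-source plugin versions:

// Hypothetical seed-job DSL: a multibranch pipeline with a daily
// "Periodically if not otherwise run" trigger as a webhook fallback.
multibranchPipelineJob('my-repo') {
    branchSources {
        github {
            scanCredentialsId('github-credentials')   // placeholder credentials ID
            repoOwner('my-org')                       // placeholder organization
            repository('my-repo')                     // placeholder repository
        }
    }
    triggers {
        periodicFolderTrigger {
            interval('1d')   // re-index at most once a day; also prunes dead branches
        }
    }
    orphanedItemStrategy {
        discardOldItems {
            numToKeep(10)
        }
    }
}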

Jenkins Pipeline continue latest build at certain time

I have a Jenkins pipeline which runs per commit, does a build, and runs some sanity tests.
Then at 8pm I want the latest successful build to carry on and run more in-depth tests as part of the same pipeline.
I have looked at milestone and lock, but it seems that the first commit of the day would grab the lock, wait till 8pm, be "promoted", and only when that finished would the latest one run, which doesn't work for me.
I have looked at milestone combined with a user input to hold all the builds at the end of the first stage, but that would mean manually clicking a job at 8pm or having an external script do it.
I've also looked at checkpoint, and it doesn't appear to be able to do what I need either.
Can anyone suggest a Groovy method for a new build to supersede an old one, or a plugin that would work for me?
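For readers unfamiliar with the milestone behaviour the question leans on, here is a scripted-pipeline sketch of the park-then-milestone pattern (the hour, labels, and helper function are hypothetical, and the java.time calls may need script approval in a sandboxed Jenkins). It also inherits exactly the ordering problem described above: an older build that happens to wake up and pass the milestone before a newer one is not aborted.

// Each per-commit build runs its quick stages, parks until 20:00, then
// passes a milestone. When a newer build passes the milestone first,
// older builds still waiting on it are aborted.
def secondsUntil(int hour) {
    def now = java.time.LocalDateTime.now()
    def target = now.toLocalDate().atTime(hour, 0)
    if (!now.isBefore(target)) { target = target.plusDays(1) }
    return java.time.Duration.between(now, target).seconds as int
}

stage('Build and sanity tests') {
    node {
        // ... build and quick tests ...
    }
}
stage('Wait for the 8pm window') {
    sleep(time: secondsUntil(20), unit: 'SECONDS')   // lightweight; holds no executor
    milestone(label: 'nightly-gate')
}
stage('In-depth tests') {
    node {
        // ... long-running tests ...
    }
}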

In Jenkins, can I trigger a downstream job once a day

We've got a Jenkins setup where we do incremental builds on SCM change, validate and then if this works do a full build (from scratch). This basically works but we waste time doing full builds during the day that we don't normally use.
I know we could trigger full builds every night, but many of our branches won't change for a few days - and then we might get a rush of changes. Thus building every branch every night is wasteful too.
What I really want is some mechanism where we only do the full builds once (say at night) if there has been an SCM change and the incremental build and validate worked - there is no point auto-triggering full builds where the incremental build and validate failed. Actually just "the incremental build and validate worked" should suffice - as these normally just run on SCM change.
Any suggestions? Is there some Jenkins extension that would help with this?
To achieve what you've asked for, you can create a new job that is the same as your existing one, but have it poll the SCM only once a day, for a nightly build.
Set the schedule to something like this: H H(0-5) * * *.
In your original job, remove the post-build triggering of a full build.
That will give you pretty much what you've asked for, except the nightly build will do an incremental build and then a full build if the incremental one succeeded, rather than just checking the result of the last incremental build.
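As a concrete illustration, the suggested nightly clone could look like this as a declarative pipeline; the schedule is the one from this answer, while the build script and its flags are placeholders:

// Hypothetical Jenkinsfile for the nightly job. With pollSCM, the nightly
// run only happens when there was an actual SCM change, and the full-build
// stage is skipped automatically if the incremental stage fails.
pipeline {
    agent any
    triggers {
        pollSCM('H H(0-5) * * *')   // poll once, somewhere between 00:00 and 05:59
    }
    stages {
        stage('Incremental build and validate') {
            steps {
                sh './build.sh --incremental'   // placeholder command
            }
        }
        stage('Full build') {
            steps {
                sh './build.sh --full'          // placeholder command
            }
        }
    }
}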
BUT...
What is the cost of the 'waste' you are trying to avoid? How much does running a full build every night actually cost you? And wouldn't you be better off finding out when the full build is broken as soon as possible, i.e. during the day when it was broken rather than only the following morning?
