Let's say you have 5 different Jenkins jobs that are triggered on different conditions, and now you want to extend their use to other branches.
Simply duplicating the jobs would create a real maintenance mess, since the number of branches is large and constantly changing, but you still want to be able to edit the job templates.
The only thing that differs between the jobs is the source control branch against which you run them.
So it makes sense to run them as different jobs, but you still want to be able to reconfigure them in a single place.
For builds that do not need to be triggered by SCM changes, the easiest approach is a multi-configuration (matrix) build with a BRANCH axis that runs over your branch names.
For builds that are triggered by SCM changes, add a BRANCH parameter and write a post-commit hook that triggers the build(s) with BRANCH set appropriately. Alternatively, write short trigger jobs, one per branch, that poll the SCM and call your main job with the appropriate BRANCH parameter. The trigger jobs should be identical except for the default value of their BRANCH parameter, which is set to the branch name.
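A minimal sketch of such a hook, assuming a Git-style post-receive hook; the Jenkins URL, job name, and credentials are placeholders for your own setup:

```shell
#!/bin/sh
# Sketch of a server-side post-receive hook. The Jenkins URL, job name,
# and credentials below are placeholders for your own setup.
JENKINS_URL="https://jenkins.example.com"
JOB="my-main-job"

# Strip the refs/heads/ prefix to get the plain branch name.
ref_to_branch() {
  printf '%s\n' "$1" | sed 's|^refs/heads/||'
}

# Trigger the parameterized job for one branch (the curl call is commented
# out here so the sketch stays side-effect free).
trigger_branch() {
  branch=$(ref_to_branch "$1")
  # curl -X POST --user user:apitoken \
  #   "$JENKINS_URL/job/$JOB/buildWithParameters?BRANCH=$branch"
  echo "would trigger $JOB with BRANCH=$branch"
}

# In a real Git hook, refs arrive on stdin as "oldrev newrev refname":
# while read -r oldrev newrev ref; do trigger_branch "$ref"; done
trigger_branch "refs/heads/devel"
```

The buildWithParameters endpoint is part of Jenkins' remote access API; the job must have BRANCH declared as a build parameter for the value to be accepted.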
The biggest drawback is that you can't instantly distinguish the branches that fail from those that don't, but that's a small price to pay.
Chances are that sooner or later you'll need to differentiate between branches anyway. If the differences are relatively minor, you can use the Run Condition Plugin.
I know Jenkins free-style jobs well, but I'm new to pipelines.
I'm reworking some validation pipelines to add a manual step/pause before the final publication/distribution step.
I've read about the input directive, but it's not what we want.
We need to run the same pipeline several (10) times with different parameters, then manually check some or all of the runs and possibly update some reports, and only afterwards go on with the downstream operations (basically, officially publish the reports for the QA team and others).
I used to do such things easily with free-style jobs and manual promotions (restricted to authorized users). It's safe because all the build artifacts are saved at the end of the free-style job and can be post-processed later.
Is there a way to achieve such a thing in a pipeline (adding properties / promotions)?
A key point for us is that the same pipeline job will be run several times, so each run's artifacts should be stored in a different location/workspace.
I used the input directive, but it expects an interactive input.
Also, if I launch the same job again, I'm afraid it will reuse the same workspace.
I added a free-style job, triggered after the pipeline job, in which I retrieve the artifacts from the first job. The promotion does the job, but it's quite an ugly implementation.
I tried to add a promotion using properties + promotions, but I'm not sure it can do the same thing as a manual promotion in a free-style job. That's what I would like to do, to keep everything in the pipeline.
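For reference, the kind of thing I'm after would look roughly like this (a sketch only; the job names, the script names, and the copyArtifacts step from the Copy Artifact plugin are assumptions, not our actual setup):

```groovy
// Jenkinsfile of the "validation" job: each run archives into its own
// build record, so repeated runs never collide in the workspace.
pipeline {
    agent any
    stages {
        stage('Validate') {
            steps {
                sh './run-validation.sh'            // placeholder build step
                archiveArtifacts artifacts: 'reports/**'
            }
        }
    }
}

// Jenkinsfile of a separate "promotion" job, triggered manually by
// authorized users, which pulls the artifacts of a chosen run.
pipeline {
    agent any
    parameters {
        string(name: 'RUN', description: 'Build number of the run to publish')
    }
    stages {
        stage('Publish') {
            steps {
                // Requires the Copy Artifact plugin.
                copyArtifacts projectName: 'validation', selector: specific(params.RUN)
                sh './publish-reports.sh'           // placeholder publish step
            }
        }
    }
}
```

Since archived artifacts are stored per build number rather than per workspace, this would at least solve the storage concern, even if the promotion itself lives in a second job.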
I have searched for some time, but I did not find much about it.
I have read some things like
https://issues.jenkins.io/browse/JENKINS-36089
or
https://www.jenkins.io/projects/gsoc/2019/artifact-promotion-plugin-for-jenkins-pipeline/
which say it's not possible.
But I'm sure other people have the same need, so some solutions should exist.
I'm setting up a new Jenkins job using multibranch pipeline and I have noticed that when a branch is deleted, it only has a strikethrough and isn't actually removed on Jenkins. This is solved by re-running branch indexing. However, I cannot really use this as it will also cause every other branch to rebuild (a consequence of how the repository is updated). Is there some custom code or pipeline/script I can run to re-index without building?
I've already looked at various UI methods such as suppressing SCM triggers, but this also negates push events from Github which is something we want to use.
The build for a deleted/merged branch will disappear after a period of time (<24 hours). It is not removed immediately, so that recently deleted/merged branches stay visible and you get a chance to review their prior build statuses. This is relatively harmless, since the jobs for these branches are deactivated (read-only).
Note that the removal depends on the branch-indexing job running at a regular interval, so if you have indexing disabled, the cleanup probably won't happen (I'm not sure the SCM webhook calls are enough).
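If you need to force the cleanup without waiting, a scan can be scheduled from the script console (a sketch; the job name is a placeholder, and whether the scan alone avoids rebuilding other branches depends on your branch property and trigger settings):

```groovy
// Run from Manage Jenkins → Script Console.
import jenkins.model.Jenkins
import jenkins.branch.MultiBranchProject

def project = Jenkins.get().getItemByFullName('my-multibranch-job', MultiBranchProject)
// Scheduling a "build" on a multibranch project runs the branch-indexing
// scan, which is what prunes the deleted (struck-through) branches.
project.scheduleBuild()
```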
I've noticed that there seems to be a build queue limit of one in Jenkins. When I trigger a lot of builds, it seems to place at most one build in the build queue. Is there a way to remove this limit so that there can be more than one build in the queue?
This is intended behaviour:
Normally, your jobs will depend on some input (from SCM, or from some upstream jobs)
If your slave capacity is too low to keep up with each and every build, then you normally want to test/build/... only the very latest "item".
This is the default behaviour. Without it, there would be a risk of the build queue growing indefinitely.
On top of that, Jenkins does not track the properties of normal build requests -- they all look the same, and Jenkins can not (for example) separate different SCM states that existed at different triggering times.
This, however, is exactly the point that gives you a workaround: parameterize your jobs, and then trigger them with, for example, the "Trigger parameterized build on other projects" post-build action. Jenkins will then queue each build request individually, and inside your job you can use the parameter to find out what exactly has to be done.
Jenkins will squash queued parameterized builds that have identical parameter values (thanks to user "atline" for checking).
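As a sketch of the workaround (the server URL, job name, parameter name, and credentials are all placeholders), each request with a distinct parameter value gets its own queue item:

```shell
#!/bin/sh
# Queue one build per distinct parameter value; requests with identical
# values would still be squashed into a single queue item.
JENKINS_URL="https://jenkins.example.com"
JOB="my-parameterized-job"

# Build the remote-API trigger URL for one parameter value.
queue_url() {
  echo "$JENKINS_URL/job/$JOB/buildWithParameters?ITEM=$1"
}

for item in alpha beta gamma; do
  # curl -X POST --user user:apitoken "$(queue_url "$item")"
  echo "would queue: $(queue_url "$item")"
done
```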
Using Jenkins, I am running two builds (Linux + Windows) and one Doxygen job.
At the moment, I am using three separate SCM polling triggers pointing to the same source code.
How can I use a single trigger for all three jobs, provided that I still want to get separate statuses?
For the record, the underlying SCM is Git.
Off the top of my head, some solutions that might do what you are looking for:
Instead of setting an SCM trigger, use a post-receive hook in your repository that notifies Jenkins of new changes (see: https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin#GitPlugin-Pushnotificationfromrepository). This way Jenkins doesn't have to poll the repository constantly (multiple times, for the different jobs), and the trigger fires faster: instead of waiting for the next poll, a build starts as soon as there is a push.
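A minimal sketch of such a hook (both URLs are placeholders; the notifyCommit endpoint comes from the Git plugin linked above):

```shell
#!/bin/sh
# Sketch of a push-notification hook for the Git plugin's notifyCommit
# endpoint; both URLs are placeholders for your own servers.
JENKINS_URL="https://jenkins.example.com"
REPO_URL="ssh://git@scm.example.com/project.git"

# notifyCommit makes every job watching this repository poll it right away,
# so a single hook replaces the three separate polling triggers.
notify_url() {
  echo "$JENKINS_URL/git/notifyCommit?url=$1"
}

# curl --silent "$(notify_url "$REPO_URL")"
echo "would notify: $(notify_url "$REPO_URL")"
```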
Use an extra job that does nothing but hold the SCM polling trigger and start all three original jobs (without waiting for any of them to finish).
If the configuration is similar for all three jobs, you could consider creating a single project with a matrix configuration. Roughly, you would have a variable for the build type, with values like linux, windows, and doxygen; when the job is triggered, it starts one build per value. Of course, you would have to set up the job so that the current value changes the build process according to what needs to be done. I haven't actually had to use a matrix configuration yet, so my example may not be the best, but you can probably find lots of examples on the Jenkins wiki if you think this is a good direction.
I have a project that has 3-5 different mercurial branches going at all times. I want to schedule a weekly Jenkins test to run our tests on all relevant branches.
What I want, I think, is a parameterized build, with the branch name as the parameter, and then to have a list of branches, and once a week, run the parameterized build with each of the parameters in the list.
However, I see that you can't send parameters into a triggered build. I assume there is a plugin for this. Is Job Generator the correct plugin, or is there something better?
I should mention that currently we are doing this with multiple SCMs, with the body of the build running an sh loop that goes through each directory and runs the tests. This is really inefficient and a pain to maintain...
I can suggest one solution, though it could hardly be called elegant.
First, you need to create a multi-configuration project (aka matrix project).
In this project, declare one node (it can be the already existing master node)
and one axis (for example BRANCH; be careful not to collide with the environment variables Jenkins itself sets) with values corresponding to each branch (for example default, testing, devel, etc.).
Then add a build step to your project that checks the environment variable (the previously declared $BRANCH) to discover which branch the build was launched for (the idea is easiest to illustrate with bash).
Finally, you need to manually get the sources from the corresponding branch.
The following build steps can then be the same for all branches.
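The branch-checking step could be sketched like this (an illustration only; the axis name BRANCH and the branch list come from the example above, and the exact Mercurial commands depend on your job):

```shell
#!/bin/sh
# Matrix build step: decide which Mercurial branch to fetch based on the
# BRANCH axis value that Jenkins exports into the environment.
checkout_cmd() {
  case "$1" in
    default|testing|devel)
      echo "hg pull && hg update --clean $1" ;;
    *)
      echo "unknown branch: $1" >&2
      return 1 ;;
  esac
}

# In the real build step you would run: eval "$(checkout_cmd "$BRANCH")"
checkout_cmd devel
```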
This approach has a set of drawbacks:
1. You cannot trigger this project on repository changes (with the Mercurial plugin you can watch only one branch).
2. All subprojects will be rebuilt even if they have not changed.
3. It is appropriate only for statically defined branches.
4. It is not elegant.
But it has one advantage over a parameterized build:
1. The artifacts (and build logs) of each branch are stored in separate directories (because they are separate subprojects).