Jenkins: mutualize SCM polling

Using Jenkins, I am running two builds (Linux + Windows) and one Doxygen job.
At the moment, I am using three separate SCM polling triggers pointing to the same source code.
How can I use a single trigger for all three jobs, given that I still want separate statuses for each?
For the record: the underlying SCM is Git.

Off the top of my head, some solutions which might do what you are looking for:
Instead of setting an SCM trigger, use a post-receive hook in your repository that notifies Jenkins of new changes (see: https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin#GitPlugin-Pushnotificationfromrepository, which describes the Git plugin's /git/notifyCommit?url=<repository URL> endpoint). This way Jenkins doesn't have to poll the repository constantly (multiple times for the different jobs), and triggering is faster: instead of waiting for the next polling interval, a push starts the builds immediately.
Use an extra job that does nothing except hold the single SCM polling trigger and start all three original jobs, without waiting for any of them to finish (see the sketch after this list).
If the configuration is similar for all three jobs, you could consider creating a single project with a matrix configuration. Roughly, you define an axis for the build type with values like linux, windows, doxygen; when the job is triggered, it starts one build per value, and you set the job up so that the current value changes the build process according to what needs to be done. I haven't had to use a matrix configuration yet, so my example may not be the best, but you can probably find lots of examples on the Jenkins wiki if you think this is a good direction.
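A minimal pipeline sketch of the extra trigger job, assuming hypothetical downstream job names (linux-build, windows-build, doxygen-docs) and a placeholder repository URL:

    // Trigger job: a single SCM polling trigger that fans out to the three real jobs.
    pipeline {
        agent any
        triggers {
            pollSCM('H/5 * * * *') // one polling trigger replacing the three per-job ones
        }
        stages {
            stage('Register SCM') {
                steps {
                    git url: 'https://example.com/project.git' // placeholder repository
                }
            }
            stage('Fan out') {
                steps {
                    // wait: false only queues the jobs, so this trigger job finishes
                    // immediately and each downstream job keeps its own status
                    build job: 'linux-build',   wait: false
                    build job: 'windows-build', wait: false
                    build job: 'doxygen-docs',  wait: false
                }
            }
        }
    }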

Related

Add promotion after Jenkins pipeline

I know Jenkins free-style jobs well, but I'm new to pipelines.
I'm reworking some validation pipelines to add a manual step/pause before the final publication/diffusion step.
I've read about the input directive, but it's not what we want.
Our need is to be able to run the same pipeline several (10) times with different parameters and, afterwards, make a manual check on some or all of the runs, possibly updating some reports. Then we go on with the downstream operations (basically, officially publish the reports for the QA team or others).
I used to do such things easily with free-style jobs and manual promotions (restricted to authorized users). It's safe because all the build artifacts are saved at the end of the free-style job and can be post-processed later.
Is there a way to achieve such a thing in a pipeline (adding properties / promotions)?
A key point for us is that the same pipeline job will be run several times, so each run's artifacts should be stored in a different location/workspace.
I used the input directive, but it expects an interactive input.
If I launch the same job again, I'm afraid it will reuse the same workspace.
I added a free-style job triggered after the pipeline job; in this job, I retrieve the artifacts from the first job. The promotion does the job, but it's quite an ugly implementation.
I tried to add a promotion using properties + promotions, but I'm not sure I can do the same thing as a manual promotion in a free-style job. That's what I would like to do, to keep everything in the pipeline.
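For reference, a minimal sketch of that pipeline-side attempt - the parameter, scripts, report path, and submitter group are all assumptions. One mitigating detail: archived artifacts are stored per build number, so repeated runs don't overwrite each other even if they reuse a workspace:

    // Sketch: archive per-run artifacts, then gate publication on a manual approval.
    pipeline {
        agent any
        parameters {
            string(name: 'DATASET', defaultValue: 'set-01', description: 'validation input') // placeholder
        }
        stages {
            stage('Validate') {
                steps {
                    sh "./run_validation.sh ${params.DATASET}" // placeholder validation script
                    archiveArtifacts artifacts: 'reports/**'   // stored per build, not per workspace
                }
            }
            stage('Approve') {
                steps {
                    // pauses the run until an authorized user approves - this is
                    // the interactive behaviour the input directive imposes
                    input message: "Publish reports for ${params.DATASET}?",
                          submitter: 'qa-team' // placeholder group
                }
            }
            stage('Publish') {
                steps {
                    sh './publish_reports.sh' // placeholder publication step
                }
            }
        }
    }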
I have searched for some time, but I did not find much about it.
I have read some things like
https://issues.jenkins.io/browse/JENKINS-36089
or
https://www.jenkins.io/projects/gsoc/2019/artifact-promotion-plugin-for-jenkins-pipeline/
which say it's not possible.
But I'm sure some people have the same need, so some solutions should exist.

Best route to take for a Jenkins job with hundreds of sub jobs

At my organization, we have a few repositories containing ~500+ projects that need to be built to satisfy unit testing (really integration testing), and I am trying to think of a new way of approaching the situation.
Currently, the pipeline for building the projects is templatized and stored on our Bitbucket server. All the projects get built in parallel, so once the jobs are queued, they all go to the master node to do an SCM checkout of the pipeline.
This creates stress on the master node, and for some reason it is not able to utilize every available node and executor to its fullest potential. Conversely, if the pipeline is not stored in SCM, it does the complete opposite: it DOES use every possible node and any available executor on that node.
Is there something I am missing about the SCM checkout version that makes it different from storing the pipeline locally on Jenkins? I understand that you need to do an SCM poll, and I am assuming only the master can fetch the original Jenkinsfile.
I've tried:
Looking to see if I am potentially throttling the build, but I do not see anything.
Checking that "Disable concurrent builds" is not enabled within the pipeline.
Lightweight checkout, which seems to work with the Git plugin but not with the Bitbucket Server Integration plugin; however, Atlassian mentioned this will never be a feature, so this doesn't really matter.
I am trying to see if there is a possible way to change the infrastructure, since I don't have much of a choice in how certain programs are set up - they are very tightly coupled.
I could in theory just keep the pipeline locally on Jenkins and use that as a template rather than checking it into SCM; however, making changes locally to the template does not propagate to the sub-jobs that use it (I could implement this feature, but SCM already does it). Plus, having the template pipeline checked into Bitbucket allows better control, so I am trying to avoid that option.
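For the plain Git plugin case, the lightweight checkout mentioned above can be requested when generating the sub-jobs. A hypothetical Job DSL sketch, assuming a reasonably recent Job DSL plugin (job name, repository URL, and script path are placeholders):

    // Generate a pipeline job that fetches only the Jenkinsfile (lightweight
    // checkout) instead of cloning the whole repository just to read the script.
    pipelineJob('project-A-tests') { // placeholder job name
        definition {
            cpsScm {
                scm {
                    git {
                        remote {
                            url('https://bitbucket.example.com/scm/proj/repo.git') // placeholder
                        }
                        branch('master')
                    }
                }
                scriptPath('ci/Jenkinsfile') // placeholder path to the templated pipeline
                lightweight(true)
            }
        }
    }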

Jenkins Build Queue Limit

I've noticed that there seems to be a build queue limit of one in Jenkins. When I trigger a lot of builds, it seems to place at most one build in the build queue. Is there a way to remove this limit so there can be more than one build in the queue?
This is intended behaviour:
Normally, your jobs will depend on some input (from SCM, or from some upstream jobs).
If your slave capacity is too low to keep up with each and every build request, you'd normally want to test/build/... only the very latest state.
This is the default behaviour. Without it, there'd be a risk that the build queue grows indefinitely.
On top of that, Jenkins does not track the properties of plain build requests -- they all look the same, and Jenkins cannot (for example) distinguish different SCM states that existed at different triggering times.
This, however, is exactly the point that gives you a workaround: parameterize your jobs, and then use, for example, the "Trigger parameterized build on other projects" post-build action to trigger them. Jenkins will then queue each build request individually -- and inside your job, you can use the parameter to find out what exactly has to be done.
Note that Jenkins will still squash queued parameterized builds that have identical parameter values (thanks to user "atline" for checking).
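In pipeline terms, the workaround could look like this hypothetical sketch (job and parameter names are placeholders); because every request carries a distinct value, each one gets its own queue item:

    // Enqueue one build per SCM state; distinct parameter values keep the
    // queue from collapsing the requests into a single item.
    def commits = ['abc123', 'def456', '0909aa'] // placeholder SCM states
    for (c in commits) {
        build job: 'worker-job', // placeholder downstream job
              parameters: [string(name: 'COMMIT', value: c)],
              wait: false        // just enqueue; don't block this job
    }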

How to manage Jenkins jobs that need to run against several branches?

Let's say you have 5 different Jenkins jobs that are executed (triggered) on different conditions, but now you want to extend their usage to other branches.
Just duplicating the jobs would create a real maintenance mess, as the number of branches is big and changes constantly, and you still want to be able to edit the job templates.
The only difference between the jobs is the source control branch you run them against.
So it makes sense to run them as different jobs, but you still want to be able to reconfigure them in a single place.
For builds that do not need to be triggered by SCM changes, the easiest option is a multi-configuration (matrix) build with a BRANCH axis that runs over your branch names.
For builds that are triggered by SCM changes, add a BRANCH parameter and write a post-commit hook that triggers the build(s) with BRANCH set appropriately. Alternatively, write short trigger jobs - one per branch - that poll SCM and call your main job with the appropriate BRANCH parameter. The trigger jobs should be identical except for the BRANCH parameter, which is set to a branch name as its default value.
The biggest drawback is that you can't instantly distinguish the branches that fail from those that don't, but it's a small price to pay.
Chances are that sooner or later you'll need to differentiate between branches. If the differences are relatively minor, you can use the Run Condition Plugin.
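As a sketch of the BRANCH-parameter approach in pipeline form (the repository URL and default branch are placeholders):

    // Main job: one definition, reconfigured in a single place, run against
    // whichever branch the trigger job or hook passes in.
    pipeline {
        agent any
        parameters {
            string(name: 'BRANCH', defaultValue: 'main', description: 'branch to build')
        }
        stages {
            stage('Checkout') {
                steps {
                    git url: 'https://example.com/project.git', // placeholder
                        branch: params.BRANCH
                }
            }
            stage('Build') {
                steps {
                    sh './build.sh' // placeholder build step
                }
            }
        }
    }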

Jenkins - Running instances of single build concurrently

I'd like to be able to run several builds of the same Jenkins job simultaneously.
Example:
Build jenkins_job_1: calls an ant script with parameter 'A'
Build jenkins_job_1: calls an ant script with parameter 'B'
...repeated as necessary.
Each instance of the job runs simultaneously, rather than through a queue.
The reason I'd like to do this is to avoid having to create several jobs that are nearly identical, all of which would need to be maintained.
Is there a way to do this, or maybe another solution (i.e., dynamically create a job from a base job and remove it after it's finished)?
Jenkins has a checkbox: "Execute concurrent builds if necessary".
If you check this, Jenkins will start multiple builds of the job as needed.
This works together with the "This build is parameterized" checkbox: you would still trigger the builds, passing your A or B as parameters. You can use another job to trigger them, or you can do it manually via a script.
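For example, a minimal scripted snippet that queues both variants at once (the parameter name ANT_PARAM is a placeholder):

    // With "Execute concurrent builds if necessary" checked on jenkins_job_1,
    // these two requests can run side by side instead of queueing behind each other.
    ['A', 'B'].each { value ->
        build job: 'jenkins_job_1',
              parameters: [string(name: 'ANT_PARAM', value: value)], // placeholder name
              wait: false
    }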
You can select Build a multi-configuration project (matrix build) when you create the job. Then, under the job's configuration, you can define the Configuration Matrix, which lets you specify one or more parameters (axes) for different builds. Regarding running simultaneously, you should be able to run as many simultaneous builds as you have executors (with the appropriate label).
Unfortunately, the Jenkins wiki lacks documentation about this setup. There are a couple of previous SO questions, here and here, that might provide a little guidance. There was a "recent" blog post about setting up a multi-configuration job to perform builds on various platforms.
A newer (and better) solution is the Jenkins Job DSL Plugin.
We've been using it with great success. Our job configurations are now disposable... we can set up a huge stack of complicated jobs from some Groovy files and a couple of template jobs. It's great.
I'm liking it a lot more than the matrix builds, which were complicated and harder to understand.
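To give a flavour, a hypothetical seed-job script (the naming scheme and ant invocation are placeholders):

    // Seed job: stamps out one job per variant from a single template.
    // Re-running the seed job updates every generated job in one go.
    ['A', 'B'].each { variant ->
        job("ant-build-${variant}") { // placeholder naming scheme
            steps {
                shell("ant -Dparam=${variant} build") // placeholder ant invocation
            }
        }
    }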
There's nothing stopping you from doing this with the Jenkins pipeline DSL.
We have the same pipeline running in parallel in order to model combined loads for an application that exposes web services, provides a database to several external applications, receives data via several work queues and has a GUI front end. The business gives us non-functional requirements (NFRs) which our application must meet that guarantees its responsiveness even at busy times.
The different instances of the pipeline are run with different parameters. The first instance might be WS_Load, the second GUI_Load and the third Daily_Update_Load, modelling a large data queue that needs processing within a certain time-frame. More can be added depending on which combination of loads we're wanting to test.
Other answers have covered the checkboxes for concurrent builds, but I wanted to mention another issue: resource contention.
If your pipeline uses temporary files or stashes files between pipeline stages, concurrent instances can end up pulling the rug from under each other's feet. For example, one instance can overwrite a file while another instance expects to find the pre-overwritten version of the same stash. We use the following code to ensure stashes and temporary filenames are unique per concurrent instance:
def concurrentStash(stashName, String includes) {
    /* Make a stash unique to this pipeline and build
       that can be unstashed using concurrentUnstash(). */
    echo "Safe stashing $includes in ${concurrentSafeName(stashName)}..."
    stash name: concurrentSafeName(stashName), includes: includes
}

def concurrentSafeName(name) {
    /* Make a name or name component unique to this pipeline and build.
     * Guards against contention caused by two or more builds from the same
     * Jenkinsfile trying to:
     * - read/write/delete the same file
     * - stash/unstash under the same name
     */
    "${name}-${BUILD_NUMBER}-${JOB_NAME}"
}

def concurrentUnstash(stashName) {
    echo "Safe unstashing ${concurrentSafeName(stashName)}..."
    unstash name: concurrentSafeName(stashName)
}
We can then call concurrentStash(stashName, includes) and concurrentUnstash(stashName), and the concurrent instances will have no conflict.
If, say, the two pipelines both need to store stats, we can do something like this for filenames:
def statsDir = concurrentSafeName('stats')
and then the instances will each use a unique filename to store their output.
You can create a build and configure it with parameters. Click the This build is parameterized checkbox and add your desired param(s) in the Configuration of the build. You can then fire off simultaneous builds using different parameters.
Side note: The "Bulk Builder" in Jenkins might push it into a queue, but there's also a This bulk build is parameterized checkbox.
I had a pretty large build queue, and I performed the steps below to run jobs in parallel in Jenkins and reduce the number of jobs waiting in the queue:
For each job, navigate to Configure and select the checkbox "Execute concurrent builds if necessary".
Navigate to Manage Jenkins -> Configure System, look for "# of executors", and set the number of parallel executors you want (in my case it was set to 0 and I updated it to 2).
