Add promotion after Jenkins pipeline

I know Jenkins free-style jobs well, but I'm new to pipelines.
I'm reworking some validation pipelines to add a manual step/pause before the final publication/diffusion step.
I've read about the input directive, but it's not what we want.
Our need is to be able to run the same pipeline several (10) times with different parameters, then make a manual check on some or all of the runs, possibly updating some reports, and only then go on with downstream operations (basically, officially publish the reports for the QA team or others).
I used to do such things easily with free-style jobs and manual promotions (restricted to authorized users). It's safe because all the build artefacts are saved at the end of the free-style job and can be post-processed later.
Is there a way to achieve such a thing in a pipeline (adding properties / promotions)?
A key point for us is that the same pipeline job will be run several times, so each run's artefacts should be stored in a different location/workspace.
I used the input directive, but it expects an interactive input.
If I launch the same job again, I'm afraid it will use the same workspace.
I added a free-style job, triggered after the pipeline job, in which I retrieve the artefacts from the first job. The promotion does the job, but it's quite an ugly implementation.
I tried to add a promotion using properties + promotions, but I'm not sure I can do the same thing as a manual promotion on a free-style job. That's what I would like, to keep everything in the pipeline.
I have searched for some time, but I did not find much about it.
I have read some things like
https://issues.jenkins.io/browse/JENKINS-36089
or
https://www.jenkins.io/projects/gsoc/2019/artifact-promotion-plugin-for-jenkins-pipeline/
which say it's not possible.
But I'm sure some people have the same need, so some solutions should exist.

Related

Jenkins CI workflow with separate build and automated test both in source control

I am trying to improve our continuous integration process using Jenkins and our source control system (currently svn, but git soon).
Maybe I am thinking about this overly complicated, or maybe I have not yet seen the right hints.
The process I envisioned has three steps and associated roles:
one or more developers would do their job and ultimately submit the code changes for the actual software ("main software") as well as unit tests into source control (git, or something else). Jenkins shall build the software, run unit tests and perhaps some other steps (e.g. static code analysis). If none of this fails, the work of the developers is done. As part of the build, the build number is baked into the main software itself as part of the version number.
one or more test engineers will subsequently pick up the build and perform tests. Some of them may be manual, but most are desired to be automated/scripted tests. These shall ultimately be submitted into source control as well and be executed through the build server. However, this shall not trigger a new build of the main software (since nothing in it has changed). If none of this fails, the test engineers are done. Note that our automated tests currently take several hours to complete.
As a last step, a project manager authorizes release of the software, which executes whatever delivery/deployment steps are needed. Also, the source of the main software, the unit tests, the automated test scripts, the Jenkins build script - and ideally all build artifacts ("binaries") - are archived (tagged) in the source control system.
Ideally, developers are able to also manually trigger execution of the automated tests to "preview" the outcome of their build.
I have been unable to figure out how to do this with Jenkins and Git - or any other source control system.
Jenkins pipelines seem to assume that all steps are carried out in sequence automatically. They also seem to assume that committing code into source control always starts the process from the beginning (which I believe should not hold when the commit was "merely" automated test scripts). Triggering an unnecessary build of the main software really hurts our process, as it basically invalidates any manual testing and documentation, since it results in a new build number baked into the software.
If my approach is so uncommon, please direct me how to do this correctly. Otherwise I would appreciate pointers how to get this done (conceptually).
I will try to reply with some points. This is indeed a conceptual approach; there are a lot of details and different approaches too, and this is only one of them.
You need git :)
You need to set up a git branching strategy which will allow multiple developers to work simultaneously, pushing code and validating it against the static code analysis. I would suggest that you start with Git Flow; it is widely used and can be adapted to whatever reality you have - you do not need to use it in its pure state, so give some thought to how to adapt it. Fundamentally, it will allow each feature to be tested. Then, each developer can merge it into the develop branch - from this point on, you have your features validated and you can start to deploy and test.
Have a look at multibranch pipelines. This will allow you to test the several feature branches that you might have and to have different flows for the other branches - develop, release and master - depending on your deployment needs. So, when you have a merge on develop branch, you can trigger testing or just use it to run static code analysis.
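As a rough illustration, a single declarative Jenkinsfile in a multibranch pipeline can gate stages per branch with when directives (the shell scripts below are placeholders, not real commands):

pipeline {
    agent any
    stages {
        stage('Build and static analysis') {
            steps {
                sh './build.sh'           // runs for every branch
            }
        }
        stage('Automated tests') {
            when { branch 'develop' }     // only after merges to develop
            steps {
                sh './run-tests.sh'
            }
        }
        stage('Release') {
            when { branch 'release/*' }   // only on release branches
            steps {
                sh './release.sh'
            }
        }
    }
}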
To overcome the problem that you mention in your second point, there are ways to read your change sets in the pipeline, and in case the changes only touch testing scripts, you should not build your software - check here how to read changes, and here an example of how to read changes and also prevent your pipeline from building all the stages according to the changes being pushed to git.
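For instance, a helper along these lines can back such a condition (a sketch: the test-scripts/ directory is an assumed layout, and reading these getters may require script approval in a sandboxed pipeline):

def onlyTestScriptChanges() {
    // Collect every path touched by the commits that triggered this build.
    def paths = []
    for (changeSet in currentBuild.changeSets) {
        for (entry in changeSet.items) {
            paths.addAll(entry.affectedPaths)
        }
    }
    // True only when something changed and all of it lives under test-scripts/.
    return paths && paths.every { it.startsWith('test-scripts/') }
}

The software build stage can then use when { expression { !onlyTestScriptChanges() } } to skip itself for test-only commits.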
In case you still have manual testing taking place, pipelines are pausable, which means you can pause the pipeline and ask for approval to proceed. Before approving, testers do whatever they have to do, and whenever they are ready, they simply approve the build so it proceeds to the next steps.
Regarding deployment authorization, it is done the same way as in the last point, with approvals, but in this case you can specify which users/roles are allowed to approve that step.
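Both cases use the input step; a minimal sketch (the group name 'release-managers' and the deploy script are assumptions):

stage('Manual testing') {
    steps {
        // Pauses the pipeline until someone approves or aborts.
        input message: 'Manual tests finished - proceed?'
    }
}
stage('Deploy') {
    steps {
        // Only members of the given group may approve this gate.
        input message: 'Authorize the release?', submitter: 'release-managers'
        sh './deploy.sh'
    }
}

Note that while it waits, an input inside steps occupies the stage; if that matters, running the gate outside a node (agent none) avoids tying up an executor.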
Whatever you need to keep from your builds, Jenkins has an archive artifacts utility. Let me just note that ideally you would look into a proper artefact repository such as Nexus.
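In a pipeline that is the archiveArtifacts step, a one-liner (the path pattern here is an assumption):

archiveArtifacts artifacts: 'build/output/**', fingerprint: true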
To manually trigger a set of tests... you can have a manually triggered job on Jenkins, apart from your CI/CD pipeline, that will only execute the automated tests. You can even trigger this same job as one pipeline stage - how to trigger other jobs.
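Triggering it from a stage uses the build step; a sketch with an assumed job name and parameter:

stage('Automated tests') {
    steps {
        // wait: true blocks this pipeline until the downstream job finishes;
        // use wait: false to fire and forget.
        build job: 'nightly-automated-tests',
              parameters: [string(name: 'BUILD_TO_TEST', value: env.BUILD_NUMBER)],
              wait: true
    }
}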
Lastly, let me say that the branching strategy is the starting point.
Think about your big picture: decide what SDLC flows you need to have, and set up those flows in your multibranch pipeline. Different git branches will facilitate whatever flows you need within the same Jenkinsfile - your pipeline definition. It really depends on how many environments you have to deploy to and what kind of steps you need.

Is it possible to have a multi-config template in Jenkins?

I have a number of multi-config jobs and all have to run on the same machines, one after another.
For example:
Build on all platforms.
Do some automated testing.
Do some automated benchmarking.
These are all happening on the same machines, in that order, but they are different jobs.
The problem is that if I want to add another platform or remove one of them, I will have to do it for every single multi-config job. What I would like is to have a way of defining those platforms in one place and then have the jobs point to that template and run.
I am quite sure I'm not the first one to hit this problem and that there should be some plugin out there, but I haven't been able to find it.
So, is there any simple way of doing this?
We create template jobs in Jenkins which help us create the whole set of jobs required for a platform; we just pass the platform / component name as an input parameter to the template job. We use the Jobcopy Builder plugin: https://wiki.jenkins-ci.org/display/JENKINS/Jobcopy+Builder+plugin
For deleting the jobs we have another job where, again, the component name is the input parameter, and we use something similar to the answer given here: Is it possible to delete a hudson job programmatically via REST API?

Jenkins Mutualize SCM polling

Using Jenkins, I am running 2 builds (Linux+Windows) and one Doxygen job
At the moment, I am using 3 separate SCM polling triggers pointing to the same source code.
How can I use a single trigger for all three jobs, provided that I still want to get separate statuses?
For the record, the underlying SCM is Git.
Off the top of my head, some solutions which might do what you are looking for:
Instead of setting an SCM trigger, use a post-receive hook in your repository, which can send a signal for Jenkins that there are new changes (see: https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin#GitPlugin-Pushnotificationfromrepository). This way Jenkins doesn't have to constantly poll the repository (multiple times for the different jobs), and the trigger would be faster, since there is no waiting for the next polling, but when there is a push, a build will be started.
Use an extra job that does nothing else: it just has the SCM polling trigger and starts all three original jobs (without waiting for any of them to finish) - see the sketch after this list.
If the configuration is similar for all three jobs, you could consider creating a single project with a matrix configuration. Roughly what it does is let you have a variable for the build type, with values like linux, windows, doxygen. When the job is triggered, it starts multiple builds with all the possible values - of course you would have to set up the job in a way that the current parameter changes the build process according to what needs to be done. Actually I haven't had to use a matrix configuration yet, so my example may not be the best, but you can probably find lots of examples on the Jenkins wiki, if you think this is a good direction.
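For the extra-job option, a sketch written as a pipeline for illustration (the original jobs could stay freestyle; the job names and polling interval are assumptions, and pollSCM requires this job to point at the same repository):

pipeline {
    agent none
    triggers { pollSCM('H/5 * * * *') }
    stages {
        stage('Start downstream builds') {
            steps {
                // wait: false fires all three off without blocking, so each
                // job keeps its own separate status and history.
                build job: 'build-linux', wait: false
                build job: 'build-windows', wait: false
                build job: 'doxygen', wait: false
            }
        }
    }
}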

Problems with Jenkins Build pipeline and parallel jobs

Background
I am using Jenkins with the Build Pipeline plugin to build some fairly complicated projects that require multiple compilation steps:
Build source RPM.
Build binary RPM (this is performed twice, once for each platform).
Deploy to YUM repository.
My strategy for solving build requirements involves splitting the common work into parameterized jobs that can be reused across projects and branches, with each job representing one stage in the pipeline. Each stage is triggered with parameters, and build artifacts passed along to the next job in the pipeline. However, I'm having some trouble with this strategy, and could really use some tips on how to go about solving this problem in the most elegant and flexible way possible.
To be more specific, there are two common libraries, which are shared by other projects (but not all projects). The libraries are built differently from the dependent projects, but it should not be necessary to specify in Jenkins what the dependent projects are.
There are multiple branches, the master branch (rebuilt nightly), the develop branch (polled for changes), feature branches (also polled), and release branches (polled, but built for release). The branches are built the same way across multiple projects.
We create multiple repositories every month, and whilst it is feasible to expect a little setup for a new project, generally we want this to be as simple and automated as possible.
The Problems
I have many projects with multiple branches, and I do not wish to build all branches or even all projects in the same way. Because most of the build steps are similar I can turn these common steps into parameterized build jobs, and get each job to trigger the next in the chain, passing parameters and build artifacts along the chain. However, this falls apart if one of the steps needs to be skipped, because I don't know of a way to conditionally skip a build step. This implies I would need to copy the build jobs so that I can customise them for each pipeline, resulting in a very large number of build jobs. I could use a combination of plugins to create a job generator (eg. dsl flow, dsl job, etc), and hide as much as possible from the users, but what's the most elegant Jenkins solution to this? Are there any plugins, or examples that I might have missed? What's your experience of doing this?
Because step 2 can be split into two jobs that can be run in parallel, this introduces a complexity that is causing me problems with my pipeline. My first attempt would trigger a parameterized build job twice with different parameters, and then join the jobs afterwards using the join plugin, but it was beginning to look like it would be complicated to copy in the build artifacts from the two upstream jobs. This is significant, because I need the build artifacts from both jobs for stage 3. What's the most elegant solution to join parallel jobs and copy artifacts from them all? Are there any examples that I might have missed?
I need to combine test results generated from both of the jobs in stage 2, and copy them to the job that triggers the build. What's the best way to handle this?
I'm happy to read articles, presentations, technical articles, reference documentation, write scripts and whatever else necessary to make this work nicely, but I'm not a Jenkins expert. If anyone can give me some advice on these 3 problems then that would be helpful. Additionally, I would appreciate any constructive advice on how to get the best out of pipeline CI builds in Jenkins, if relevant.
For the first point, I wrote the Job Generator plugin precisely to address this use case. You can find more info on the wiki page of Job Generator.
There is also the same type of plugin with a different approach (job generator as a build step), it is called Jobcopy Builder.
The other approaches you mentioned require some kind of DSL and can be a good choice too.
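For point 2, it is worth noting that the modern pipeline parallel step plus stashes expresses the fan-out/join directly; a scripted sketch (the platform names and shell scripts are assumptions, and the nested node blocks each need a spare executor):

node {
    stage('Source RPM') {
        sh './build-srpm.sh'
        stash name: 'srpm', includes: 'SRPMS/*.rpm'
    }
    stage('Binary RPMs') {
        parallel(
            'platform-A': {
                node {
                    unstash 'srpm'
                    sh './build-rpm.sh platform-A'
                    stash name: 'rpm-A', includes: 'RPMS/*.rpm'
                }
            },
            'platform-B': {
                node {
                    unstash 'srpm'
                    sh './build-rpm.sh platform-B'
                    stash name: 'rpm-B', includes: 'RPMS/*.rpm'
                }
            }
        )
    }
    stage('Deploy to YUM') {
        // Join point: collect the artifacts from both parallel branches.
        unstash 'rpm-A'
        unstash 'rpm-B'
        sh './publish-to-yum.sh'
    }
}

The same stash/unstash pattern covers point 3: stash the test result files in each parallel branch and unstash them wherever they need to be combined.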

Jenkins - Running instances of single build concurrently

I'd like to be able to run several builds of the same Jenkins job simultaneously.
Example:
Build [jenkins_job_1]: calls an ant script with parameter 'A'
Build [jenkins_job_1]: calls an ant script with parameter 'B'
(repeat as necessary)
Each instance of the job runs simultaneously, rather than through a queue.
The reason I'd like to do this is to avoid having to create several jobs that are nearly identical, all of which would need to be maintained.
Is there a way to do this, or maybe another solution (i.e. dynamically create a job from a base job and remove it after it's finished)?
Jenkins has a check box: "Execute concurrent builds if necessary"
If you check this, then it'll start multiple builds for a job.
This works with the "This build is parameterized" checkbox.
You would still trigger the builds, passing your A or B as parameters. You can use another job to trigger them or you could do it manually via a script.
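Driven from a pipeline script, that could look like this (the parameter name is an assumption):

build job: 'jenkins_job_1', parameters: [string(name: 'ANT_PARAM', value: 'A')], wait: false
build job: 'jenkins_job_1', parameters: [string(name: 'ANT_PARAM', value: 'B')], wait: false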
You can select Build a Multi-configuration project (Matrix build) when you create the job. Then, under the job's configuration, you can define the Configuration Matrix which lets you specify one or more parameters (axes) for different builds. Regarding running simultaneously, you should be able to run as many simultaneous builds as you have executors (with the appropriate label).
Unfortunately, the Jenkins wiki lacks documentation about this setup. There are a couple previous SO questions, here and here, that might provide a little guidance. There was a "recent" blog post about setting up a multi-configuration job to perform builds on various platforms.
A newer (and better) solution is the Jenkins Job DSL Plugin.
We've been using it with great success. Our job configurations are now disposable... we can set up a huge stack of complicated jobs from some groovy files and a couple template jobs. It's great.
I'm liking it a lot more than the matrix builds, which were complicated and harder to understand.
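To give a flavour of what that looks like, a seed script can stamp out near-identical jobs from a list (a sketch; the job name pattern and ant invocation are assumptions):

['A', 'B'].each { value ->
    job("jenkins_job_1_${value}") {
        steps {
            shell("ant -Dparam=${value} build")
        }
    }
}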
There's nothing stopping you from doing this using the Jenkins pipeline DSL.
We have the same pipeline running in parallel in order to model combined loads for an application that exposes web services, provides a database to several external applications, receives data via several work queues and has a GUI front end. The business gives us non-functional requirements (NFRs) which our application must meet that guarantees its responsiveness even at busy times.
The different instances of the pipeline are run with different parameters. The first instance might be WS_Load, the second GUI_Load and the third Daily_Update_Load, modelling a large data queue that needs processing within a certain time-frame. More can be added depending on which combination of loads we're wanting to test.
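A minimal parameterized sketch of that setup (the load names come from above; the driver script is an assumption):

pipeline {
    agent any
    parameters {
        choice(name: 'LOAD_TYPE',
               choices: ['WS_Load', 'GUI_Load', 'Daily_Update_Load'],
               description: 'Which load profile this instance should model')
    }
    stages {
        stage('Generate load') {
            steps {
                sh "./generate-load.sh ${params.LOAD_TYPE}"
            }
        }
    }
}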
Other answers have talked about the checkboxes for concurrent builds, but I wanted to mention another issue: resource contention.
If your pipeline uses temporary files or stashes files between pipeline stages, the instances can end up pulling the rug from under each others' feet. For example you can end up overwriting a file in one concurrent instance while another instance expects to find the pre-overwritten version of the same stash. We use the following code to ensure stashes and temporary filenames are unique per concurrent instance:
def concurrentStash(stashName, String includes) {
    /* make a stash unique to this pipeline and build
       that can be unstashed using concurrentUnstash() */
    echo "Safe stashing $includes in ${concurrentSafeName(stashName)}..."
    stash name: concurrentSafeName(stashName), includes: includes
}

def concurrentSafeName(name) {
    /* make a name or name component unique to this pipeline and build
     * guards against contention caused by two or more builds from the same
     * Jenkinsfile trying to:
     * - read/write/delete the same file
     * - stash/unstash under the same name
     */
    "${name}-${BUILD_NUMBER}-${JOB_NAME}"
}

def concurrentUnstash(stashName) {
    echo "Safe unstashing ${concurrentSafeName(stashName)}..."
    unstash name: concurrentSafeName(stashName)
}
We can then use concurrentStash stashName and concurrentUnstash stashName and the concurrent instances will have no conflict.
If, say, the two pipelines both need to store stats, we can do something like this for filenames:
def statsDir = concurrentSafeName('stats')
and then the instances will each use a unique filename to store their output.
You can create a build and configure it with parameters. Click the This build is parameterized checkbox and add your desired param(s) in the Configuration of the build. You can then fire off simultaneous builds using different parameters.
Side note: The "Bulk Builder" in Jenkins might push it into a queue, but there's also a This bulk build is parameterized checkbox.
I had a pretty large build queue, and I performed the steps below to run jobs in parallel in Jenkins and reduce the number of jobs waiting in the queue:
For each job, navigate to its configuration and select the checkbox "Execute concurrent builds if necessary".
Navigate to Manage -> Configure System, look for "# of executors", and set the number of parallel executors you want (in my case it was set to 0 and I updated it to 2).
