Is it possible to configure Jenkins to always run using a predefined Jenkinsfile for all projects, rather than pulling a Jenkinsfile from the project repo? The goal here is to make sure that a certain set of stages are always being run. If we allow projects to define their own Jenkinsfile, they could theoretically just skip some required stages in their project (like unit testing).
I want to make sure this never happens, but simply telling everyone "don't remove these stages from your Jenkinsfile" seems a bit brittle.
Use a Shared Library and define all of the stages and steps there. Then the Jenkinsfile in the SCM for each application needs only to point to the shared library and run a method. You can also pre-define variables specific to each application in the Jenkinsfile before you call the shared library.
This won't FORCE the application to use your code--they can simply rewrite the entire Jenkinsfile--but at least they don't control the code you wrote, and they can't simply comment out stages or easily add their own.
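A minimal sketch of that layout, assuming a shared library exposing a hypothetical `standardPipeline` step defined in `vars/standardPipeline.groovy` (all names here are illustrative, not from the original answer):

```groovy
// vars/standardPipeline.groovy in the shared library repo (hypothetical name)
def call(Map config = [:]) {
    node {
        stage('Checkout') {
            checkout scm
        }
        stage('Build') {
            // per-application command passed in from the Jenkinsfile
            sh config.buildCommand ?: 'make build'
        }
        stage('Unit Test') {
            // the stage you want every project to run, defined centrally
            sh config.testCommand ?: 'make test'
        }
    }
}
```

The per-application Jenkinsfile then shrinks to little more than variables plus one call:

```groovy
// Jenkinsfile in the application repo
@Library('my-shared-library') _   // library name as configured in Jenkins
standardPipeline(buildCommand: 'mvn package', testCommand: 'mvn test')
```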
I'd like to define build steps in a generic way by reading info from a YAML/JSON file.
Use case: share information about build stages among different types of CI (i.e., I have lots of build stages and want to implement them for different CIs and for local usage, as described here).
The problem I face now with Jenkins is that this file won't be available before checking out the whole project which is done in the first stage, i.e. I don't yet have the information about the stages I want to define when I need it.
Is there a solution for this situation? Can I have selected files fetched alongside the main Jenkinsfile before running the actual stages?
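For reference, the kind of dynamic stage definition being asked about can be written in scripted pipeline syntax, which also illustrates the chicken-and-egg problem: the YAML can only be read after checkout. This is a sketch with a hypothetical stages.yaml; readYaml comes from the Pipeline Utility Steps plugin:

```groovy
node {
    stage('Checkout') {
        checkout scm   // stages.yaml only becomes available after this step
    }
    // Hypothetical stages.yaml:
    //   stages:
    //     - name: Build
    //       command: make build
    //     - name: Test
    //       command: make test
    def config = readYaml file: 'stages.yaml'
    config.stages.each { s ->
        stage(s.name) {
            sh s.command
        }
    }
}
```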
We have a release pipeline which runs many tests, and now we want to run each test as a different stage of the pipeline. The problem is that the number of tests differs for each use case, so the stages can't be fixed while designing the pipeline.
Is there a way to create stages at runtime (i.e., once a release has been created and is running)?
Actually, no, there is no way to handle this situation: stages cannot be created while a release is running.
Stages are the major divisions in your release pipeline: "run functional tests", "deploy to pre-production", and "deploy to production" are good examples of release stages.
A stage in a release pipeline consists of jobs and tasks. Running each test as a different stage of the pipeline is also not a recommended approach.
Instead, you can use some filters and multiple test tasks to split cases. Please also take a look at this blog which may be useful: Staged execution of tests in Azure DevOps Pipelines
No, this cannot happen, because a Release Pipeline is a "snapshot" of the pipeline and its tasks as they existed at that time. That means any associated Task Groups modified after a Release Pipeline is initiated will not change in the created Release Pipeline, because that "snapshot" has already been created and is running. This is actually a good thing: you don't want your pipeline to change while it is running and you are making changes. So, back to the problem. There are some workarounds, but you aren't going to like them:
Use the REST API and a Release Pipeline JSON template to dynamically create a Release Pipeline on the fly (along with its stages and the associated tasks within each stage). This is a little complex, but it can be done. You need to understand the relationships between elements within the JSON and the minimal required JSON elements to get this working, which will be on a try-until-it-works basis.
Use Stage Pre-Condition checks, but they might not be mature enough to check the conditions you are probably looking for. Even Gate checks using Azure Function Apps might help here, though the result is either FAIL or PASS. Still, take a look at Azure Function Apps, which can extend the pre-condition checks you might need to run--I don't know what you are using as conditions. I like Azure Function Apps!
I don't know how you are running your tests, but you can run PowerShell as tasks. Within a stage, before you run the test, run a PowerShell script that evaluates the use-case condition and sets a variable. Then, in the next step, which actually runs the test, set Custom Conditions on the test task that evaluate the variable set by the previous PowerShell step, to either run or skip the task. The downside is that the stage will show as "GREEN" rather than greyed out as if it were skipped via Pre-Deployment Conditions. You probably want the Release Pipeline to show whether the test was actually skipped, and whether it passed or failed.
I'm sure I'm missing another option, but these are just off the top of my head.
Let me know what you come up with.
I have 11 jobs running on the Jenkins master node, all of which have a very similar pipeline setup. For now, I have given each job its very own Jenkinsfile that specifies the stages within the job, and all of them build just fine. But wouldn't it be better to have a single repo with the files (preferably a single Jenkinsfile and some libraries) required to run all the jobs that share a similar pipeline structure, with the few per-job differences handled by a workaround?
If there is a way to accomplish this, please let me know.
Use a Shared Library to define common functionality. Your 11 Jenkinsfiles can then be as small as only a single call to the function implementing the pipeline.
Besides using a Shared Library, you can create a groovy file with common functionality and call its methods via load().
See the documentation and example. This is an easier approach, but as pipeline complexity grows in the future, it may impose some limitations.
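A minimal sketch of the load() approach, with hypothetical file and method names:

```groovy
// common.groovy, kept next to the Jenkinsfile (hypothetical name)
def buildApp() {
    sh 'make build'
}

def runTests() {
    sh 'make test'
}

return this   // required so the caller gets a handle to these methods
```

```groovy
// Jenkinsfile
node {
    checkout scm                        // common.groovy must exist in the workspace
    def common = load 'common.groovy'   // returns the script object from `return this`
    common.buildApp()
    common.runTests()
}
```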
I am trying to upgrade my current regression infrastructure to use the pipeline plugin, and I realize there are two methods: scripted pipeline and declarative pipeline. Going through multiple articles, I gather that declarative pipeline is more future-proof and more powerful, hence I am inclined to use it. But there seem to be the following restrictions, which I don't want in my setup:
The Jenkinsfile needs to be in the repository. I don't want to keep my Jenkinsfile in the code repository.
Since the Jenkinsfile needs to be in SCM, does that mean I cannot test any modification until I check it in to the repository?
Any details on the above will be very helpful.
Declarative pipelines are compiled down to scripted ones, so scripted pipelines will definitely not go away. But declarative ones are a bit easier to handle, so you should be fine.
You don't have to check a Jenkinsfile into VCS. You can also set up a job of type Pipeline and define the pipeline there. But this has the usual disadvantages, such as no version history.
When using multi-branch pipelines, i.e., where every branch containing a Jenkinsfile generates its own job, you just push your changed pipeline to a new branch and execute it there. Once it's done, you merge it.
This approach certainly increases feedback cycles a bit, but it just applies the same principles as when writing your software. For experimentation, just set up a Pipeline type job and play around. Afterwards, commit it to a branch, test it, review it, merge it.
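For that kind of experimentation, a minimal declarative pipeline pasted directly into a Pipeline job's definition might look like this (stage names and echo steps are placeholders):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'   // replace with real build steps
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'    // replace with real test steps
            }
        }
    }
}
```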
You can use the Pipeline Multibranch Defaults Plugin for that. It allows you to define the Jenkinsfile in the web UI (with the Config File Provider plugin) itself and then reference that file from a Multibranch Pipeline.
I have a fairly complicated Jenkins job that builds, unit tests and packages a web application. Depending on the situation, I would like to do different things once this job completes. I have not found a re-usable/maintainable way to do this. Is that really the case or am I missing something?
The options I would like to have once my complicated job completes:
Do nothing
Start my low-risk-change build pipeline:
copies my WAR file to my artifact repository
deploys to production
Start my high-risk-change build pipeline:
copies my WAR file to my artifact repository
deploys to test
run acceptance tests
deploy to production
I have not found an easy way to do this. The simplest, but least maintainable, approach would be to make three separate jobs, each of which kicks off a downstream build. This approach scares me for a few reasons, including the fact that changes would have to be made in three places instead of one. In addition, many of the downstream jobs are also nearly identical; the only difference is which downstream jobs they call. The proliferation of jobs seems like it would lead to an unmaintainable mess.
I have looked at using several approaches to keep this as one job, but none have worked so far:
Make the job a multi-configuration project (https://wiki.jenkins-ci.org/display/JENKINS/Building+a+matrix+project). This provides a way to inject the job with a parameter. I have not found a way to make the "build other projects" step respond to a parameter.
Use the Parameterized-Trigger plugin (https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin). This plugin lets you trigger downstream-jobs based on certain triggers. The triggers appear to be too restrictive though. They're all based on the state of the build, not arbitrary variables. I don't see any option provided here that would work for my use case.
Use the Flexible Publish plugin (https://wiki.jenkins-ci.org/display/JENKINS/Flexible+Publish+Plugin). This plugin has the opposite problem as the parameterized-trigger plugin. It has many useful conditions it can check, but it doesn't look like it can start building another project. Its actions are limited to publishing type activities.
Use Flexible Publish + Any Build Step plugin (https://wiki.jenkins-ci.org/display/JENKINS/Any+Build+Step+Plugin). The Any Build Step plugin allows making any build action available to the Flexible Publish plugin. While more actions were made available once this plugin was activated, those actions didn't include "build other projects."
Is there really not an easy way to do this? I'm surprised that I haven't found it, and even more surprised that I haven't really seen anyone else trying to do this. Am I doing something unusual? Is there something obvious that I am missing?
If I understood correctly, you should be able to do this by following these steps:
First Build Step:
Does the regular work. In your case: building, unit testing and packaging of the web application
Depending on the result, have it create a file with a specific name.
For example, if you want the low-risk-change pipeline to run afterwards, create a file low-risk.prop
Second Build Step:
Create a Trigger/call builds on other projects step from the Parameterized-Trigger plugin.
Enter the name of your low-risk job into the Projects to build field
Click on: Add Parameter
Choose: Parameters from properties File
Enter low-risk.prop into the Use properties from file Field
Enable Don't trigger if any files are missing
Third Build Step:
Check if a low-risk.prop file exists
Delete the File
Do the same for the high-risk job
Now you should have the following Setup:
if a file called low-risk.prop occurs during the first Build Step the low-risk job will be started
if a file called high-risk.prop occurs during the first Build Step the high-risk job will be started
if there's no .prop File nothing happens
And that's what you wanted to achieve, isn't it?
Have you looked at the Conditional Build Plugin? (https://wiki.jenkins.io/display/JENKINS/Conditional+BuildStep+Plugin)
I think it can do what you're looking for.
If you want a conditional post-build step, there is a plugin for that:
https://wiki.jenkins-ci.org/display/JENKINS/Post+build+task
It will search the console log for a RegEx you specify, and if found, will execute a custom script. You can configure fairly complex criteria, and you can configure multiple sets of criteria each executing different post build tasks.
It doesn't provide you with the usual "build step" actions, so you've got to write your own script there. You can trigger execution of the same job with different parameters, or of another job with some parameters, in standard ways that Jenkins supports (for example, using curl).
Yet another alternative is Jenkins text finder plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Text-finder+Plugin
This is a post-build step that allows you to forcibly mark a build as "unstable" if a RegEx is found in the console text (or even in some file in the workspace). So, in your build steps, depending on your conditions, echo a unique line into the console log, and then match that line with a RegEx. You can then use "Trigger parameterized builds" with the condition set to "unstable". This has the added benefit of visually marking the build differently (with a yellow ball); however, you only get one conditional option with this method, and from your OP it looks like you need two.
Try a combination of these two methods.
Do you use Ant for your builds?
If so, it's possible to do conditional building in Ant by having a set of environment variables your build scripts can use to build conditionally. In Jenkins, your build will then build all of the projects, but your actual build scripts will decide whether to build or just short-circuit.
I think the way to do it is to add an intermediate job that you put in the post-build step and pass to it all the parameters your downstream jobs could possibly need, and then within that job place conditional builds for the real downstream jobs.
The simplest approach I found is to trigger other jobs remotely, so that you can use the Conditional BuildStep Plugin, or any other plugin, to build other jobs conditionally.