Suppose I have two similar simple pipeline scripts, each of which basically look like this:
stage('only stage') {
    node {
        // first step
    }
    node {
        // second step
    }
}
Let's call the steps in my first script A and B, and the steps in my similar second script C and D.
The way I understand it from my testing, as I kick off builds for these scripts, Jenkins appears to queue only the first step from each script, and then queues the next step only when the first one finishes. As a result, it appears to favour the earlier steps of pipelines in general. Consider this example, which maps the actions I've taken => the resulting Jenkins queue of steps:
Kick off script 1 => [A]
Kick off script 2 => [A, C]
A starts => [C]
A finishes => [C, B]
C starts => [B]
...etc
My question is: at the last step I've depicted where C starts, is there a way to prioritize B to start instead, since it's a later step in a build (vs. C which is a first step)? My motivation is that I would rather have one build completely finished sooner, vs. several builds simultaneously in progress / partially finished.
It seems like this was possible with freestyle projects through the Parameterized Trigger plugin (details in this question), but I haven't been able to figure out a similar way to make this work with pipeline scripts. I've seen the Priority Sorter plugin, which has apparently recently added pipeline compatibility, but I couldn't work out how that pipeline compatibility actually works, as I was unable to find any examples or documentation on its usage. I'm also not sure hardcoding priority numbers into my steps would be an ideal solution anyway: in reality, the scripts are much more complex than the examples I provided, so I could see priority numbers quickly getting unwieldy and confusing. There are a few other "priority"-based plugins I found, notably the Accelerated Build Now plugin, but it hasn't been updated in years, so it doesn't have pipeline support.
So far the attitude I've seen is that in situations where queues form and prioritization of tasks is needed, you should "just add more slaves". This makes sense as a design decision, but unfortunately I'm working with limited resources and do need some queue management, as I can't just add more slaves to relieve the queue. Has anyone else solved this?
Related
I have a stage which checks that no hardcoded credentials are going into the build.
This is done using a custom Jenkins library.
Now, there are many pipelines to which I need to add this stage, so I figured out two solutions:
1. Make a Python script and manually figure out the text pattern in each/most of these Jenkinsfiles where I can add my stage.
2. Let Jenkins check whether the stage exists and, if not, fail the build, so that the developers can add the stage themselves.
The 2nd is the one I would like to go with, as it's quite scalable (so to say) and I don't have to deal with the unreliability of pattern searching to add the stage using Python; moreover, I have already tried out the 1st one.
This question is similar to this one: jenkins-making-a-build-fail-if-javadoc-is-missing
In that question, the asker wishes to fail the build based on a missing Javadoc string.
The suggested solution there is a plugin, but I don't want to add that complexity to this solution, because I would have to learn Jenkins plugin development, and seeing that it's in Java, it would take even more time (I am fluent in Python).
I have worked with, or rather struggled with, Groovy to make Jenkins libs, but I am ready to walk that path if need be.
Thanks.
Just thinking: you could introduce some kind of global variable in your flow which stores env.STAGE_NAME at the beginning of each stage (it needs just a one-liner in every stage). At the end of the flow, you validate all the stage names in the list and see if you are missing anything.
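A minimal sketch of that idea in a scripted pipeline (the stage names, the list variable, and the mandatory-stage check are illustrative assumptions, not a confirmed implementation):

// Hypothetical sketch: record each stage's name as the build runs,
// then validate the collected list at the end.
def seenStages = []

node {
    stage('Build') {
        seenStages << env.STAGE_NAME   // the one-liner added to every stage
        // ... build steps ...
    }
    stage('Credential Scan') {
        seenStages << env.STAGE_NAME
        // ... the custom library's credential check ...
    }
    stage('Validate stages') {
        seenStages << env.STAGE_NAME
        // fail the build if the mandatory stage never ran
        if (!seenStages.contains('Credential Scan')) {
            error "Mandatory 'Credential Scan' stage is missing from this pipeline."
        }
    }
}

This keeps the check inside the Jenkinsfile itself, so a pipeline that omits the stage fails visibly and the developer can add it.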
We have a release pipeline which runs many tests, and we now want to run each test as a different stage of the pipeline. The problem is that there is a different number of tests for each use case, so we can't fix the stages while designing the pipeline.
Is there a way to create stages at runtime (i.e., once a release has been created and is running)?
Actually, no, there is no way to handle this situation: stages cannot be created while a release is running.
Stages are the major divisions in your release pipeline: "run functional tests", "deploy to pre-production", and "deploy to production" are good examples of release stages.
A stage in a release pipeline consists of jobs and tasks. Running each test as a different stage of the pipeline is also not a recommended approach.
Instead, you can use some filters and multiple test tasks to split cases. Please also take a look at this blog which may be useful: Staged execution of tests in Azure DevOps Pipelines
No, this cannot happen, because a Release Pipeline is a "snapshot" of the pipeline and its tasks as they existed at that time. That means any associated Task Groups that get modified after a Release Pipeline is initiated will not get changed in the created Release Pipeline, because that "snapshot" has already been created and is running. This is actually a good thing, since you don't want your pipeline to change while it is running and you are making changes. So, back to the problem. There are some workarounds, but you aren't going to like them:
Use the REST API and a Release Pipeline JSON template to dynamically create a Release Pipeline on the fly (along with its stages and the associated tasks within each stage). This is a little complex, but it can be done. You need to understand the relationships between elements within the JSON and the minimal required JSON elements to get this working, which will be on a try-until-it-works basis.
Use Stage Pre-Condition checks, but they might not be mature enough to check the conditions you are probably looking for. Even Gate checks using Azure Function Apps might help here, although the result is either FAIL or PASS. Still, take a look at Azure Function Apps, which can extend the pre-condition checks you might need to run, since I have no idea what you are using as conditions. I like Azure Function Apps!
I don't know how you are running your tests, but you can run PowerShell as tasks. So within a stage (before you run the test) you can run a PowerShell script that evaluates the condition for the use case and sets a variable. Then, in the next step that actually runs the test, set a Custom Condition that evaluates the variable from the previous PowerShell step to either run the test task or skip it. The downside is that the stage will show as "GREEN" rather than greyed out as if it had been skipped via Pre-Deployment Conditions. You probably want the Release Pipeline to show whether the test was actually skipped, and whether it passed or failed.
I'm sure I'm missing another option, but these are just off the top of my head.
Let me know what you come up with.
Background of the problem
We are using Jenkins to build lots of projects, some of which depend on other projects.
As most of you know, Jenkins allows you to trigger another job if the build is stable (stability is an option that we want). There is another tool in Jenkins that allows you to "block build if certain jobs are running". Also, there is an option called "PrerequisitiesCheck".
Let's say project A triggers project B, and B triggers project C. For simplicity, let me write this configuration as A->B->C. Let's say there is another path like A->X->C. First problem: if A and B are built successfully, C is triggered even if X is being built at that moment. The solution is to use the "block build if certain jobs are running" option. Second problem: when A triggers B and X, and B fails, X nevertheless triggers C, and C fails because B had already failed. That is something we do not want. The solution (not an exact solution) is to use the "PrerequisitiesCheck" option. At least with that option the person responsible for project C can understand that the problem did not occur due to project C. Also, we have to use the trigger option to be able to link the A->B->C and A->X->C projects to each other.
Problem
The problem is simple: we do not want to use these three options (Trigger, PrerequisitiesCheck, Block build if certain jobs are running), because it is too much work, and this complex structure will most probably cause many problems (forgetting a link is the simplest one). Is there any tool that does all three of these at the same time? Do you know any plugin that enables us to solve this problem with only one linking?
The Multijob plugin will be of interest to you.
This is what the documentation says.
After installing this plugin you will be able to do the following:
When creating new Jenkins job you have an option to create MultiJob project.
This job can define in the Build section phases that contains one job or more.
All jobs belonging to one phase will be executed in parallel (if there are enough executors on the node)
All jobs in phase 2 will be executed only after jobs in phase 1 are completed etc.
Since A triggers both B and X, I would make them run in parallel (making them part of the same phase) and trigger C only when both are done.
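If you also manage job configuration with the Job DSL plugin, the phase layout described above could be sketched roughly like this (a hedged sketch: it assumes both the MultiJob and Job DSL plugins are installed, and the job names are the placeholders from the question):

// Hypothetical Job DSL seed script for the MultiJob layout.
multiJob('A-orchestrator') {
    steps {
        // B and X run in parallel because they share a phase
        phase('build-B-and-X') {
            phaseJob('B')
            phaseJob('X')
        }
        // this phase starts only after every job in the previous phase completes
        phase('build-C') {
            phaseJob('C')
        }
    }
}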
Maybe the Build Flow Plugin is what you are looking for:
Build Flow Plugin
There you can write a little script which triggers your existing jobs, for example:
parallel (
    // job 1, 2 and 3 will be scheduled in parallel.
    { build("job1") },
    { build("job2") },
    { build("job3") }
)
// job4 will be triggered after jobs 1, 2 and 3 complete
build("job4")
We have some builds that depend on each other in a kind of tree structure:
A
    AA
    AB
        ABA
    AC
B
    BA
    BB
        BBA
            BBAA
                BBAAA
            BBAB
C
...
Another build should be triggered once all of these builds have finished. Unfortunately, it is not possible to say which build will always finish last, so that one can't be used to trigger the following task.
Is there a way (maybe a plugin) to trigger a new build when every build in a list of builds has finished?
Thanks in advance!
Frank
Take a look at Join Plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Join+Plugin
This plugin allows a job to be run after all the immediate downstream
jobs have completed. In this way, the execution can branch out and
perform many steps in parallel, and then run a final aggregation step
just once after all the parallel work is finished.
Although this is an old question, you might consider restructuring your build pipeline completely using the Build Flow Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin
It will have the advantage of keeping your pipeline logic in one place.
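As a rough sketch of what that restructuring could look like for the tree above (hedged: the job names come from the question, most branches are elided, and the aggregate job name is a placeholder):

// Hypothetical Build Flow script: run the whole tree, then the final build.
parallel (
    {
        build("A")
        parallel (
            { build("AA") },
            { build("AB"); build("ABA") },
            { build("AC") }
        )
    },
    {
        build("B")
        // ... the BA/BB subtree, nested the same way ...
    },
    { build("C") }
)
// reached only after every branch above has finished
build("aggregate-build")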
I'd like to be able to run several builds of the same Jenkins job simultaneously.
Example:
Build [jenkins_job_1]: calls an Ant script with parameter 'A'
Build [jenkins_job_1]: calls an Ant script with parameter 'B'
repeat as necessary
each instance of the job runs simultaneously, rather than through a queue.
The reason I'd like to do this is to avoid having to create several jobs that are nearly identical, all of which would need to be maintained.
Is there a way to do this, or maybe another solution (i.e. dynamically create a job from a base job and remove it after it's finished)?
Jenkins has a check box: "Execute concurrent builds if necessary"
If you check this, then it'll start multiple builds for a job.
This works with the "This build is parameterized" checkbox.
You would still trigger the builds, passing your A or B as parameters. You can use another job to trigger them or you could do it manually via a script.
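For example, with today's Pipeline build step (a hedged sketch: the job name jenkins_job_1 and the parameter name ANT_PARAM are illustrative), a small trigger script could look like this:

// Hypothetical trigger pipeline: queue two concurrent runs of the same job.
// wait: false queues each run without blocking; both can execute at once
// if "Execute concurrent builds if necessary" is checked on the job.
build job: 'jenkins_job_1', parameters: [string(name: 'ANT_PARAM', value: 'A')], wait: false
build job: 'jenkins_job_1', parameters: [string(name: 'ANT_PARAM', value: 'B')], wait: false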
You can select Build a Multi-configuration project (Matrix build) when you create the job. Then, under the job's configuration, you can define the Configuration Matrix which lets you specify one or more parameters (axes) for different builds. Regarding running simultaneously, you should be able to run as many simultaneous builds as you have executors (with the appropriate label).
Unfortunately, the Jenkins wiki lacks documentation about this setup. There are a couple previous SO questions, here and here, that might provide a little guidance. There was a "recent" blog post about setting up a multi-configuration job to perform builds on various platforms.
A newer (and better) solution is the Jenkins Job DSL Plugin.
We've been using it with great success. Our job configurations are now disposable... we can set up a huge stack of complicated jobs from some groovy files and a couple template jobs. It's great.
I'm liking it a lot more than the matrix builds, which were complicated and harder to understand.
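To give a flavour of why the configurations become disposable, a seed script can stamp out the nearly-identical jobs in a loop. A minimal hypothetical sketch (the job names, parameter values, and Ant invocation are all assumptions):

// Hypothetical Job DSL seed script: one job per parameter value.
['A', 'B'].each { param ->
    job("ant-build-${param}") {
        steps {
            shell("ant -Dbuild.param=${param}")   // pass the value through to Ant
        }
    }
}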
There's nothing stopping you from doing this using the Jenkins pipeline DSL.
We have the same pipeline running in parallel in order to model combined loads for an application that exposes web services, provides a database to several external applications, receives data via several work queues and has a GUI front end. The business gives us non-functional requirements (NFRs) that our application must meet, guaranteeing its responsiveness even at busy times.
The different instances of the pipeline are run with different parameters. The first instance might be WS_Load, the second GUI_Load and the third Daily_Update_Load, modelling a large data queue that needs processing within a certain time-frame. More can be added depending on which combination of loads we're wanting to test.
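A minimal sketch of that shape, assuming a scripted pipeline (the parameter name LOAD_TYPE and the load names are taken from the description above; everything else is illustrative):

// Hypothetical parameterized load pipeline; each concurrent run picks a profile.
properties([
    parameters([
        choice(name: 'LOAD_TYPE',
               choices: ['WS_Load', 'GUI_Load', 'Daily_Update_Load'],
               description: 'Which load profile this run models')
    ])
])

node {
    stage('Apply load') {
        echo "Modelling load profile: ${params.LOAD_TYPE}"
        // ... drive the selected load against the application under test ...
    }
}

Note that on older Pipeline versions the choice parameter may expect a newline-separated string rather than a list for choices.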
Other answers have talked about the checkboxes for concurrent builds, but I wanted to mention another issue: resource contention.
If your pipeline uses temporary files or stashes files between pipeline stages, the instances can end up pulling the rug from under each other's feet. For example, one concurrent instance can overwrite a file while another instance expects to find the pre-overwrite version of the same stash. We use the following code to ensure stashes and temporary filenames are unique per concurrent instance:
def concurrentStash(stashName, String includes) {
    /* make a stash unique to this pipeline and build
       that can be unstashed using concurrentUnstash() */
    echo "Safe stashing $includes in ${concurrentSafeName(stashName)}..."
    stash name: concurrentSafeName(stashName), includes: includes
}

def concurrentSafeName(name) {
    /* make a name or name component unique to this pipeline and build
     * guards against contention caused by two or more builds from the same
     * Jenkinsfile trying to:
     * - read/write/delete the same file
     * - stash/unstash under the same name
     */
    "${name}-${BUILD_NUMBER}-${JOB_NAME}"
}

def concurrentUnstash(stashName) {
    echo "Safe unstashing ${concurrentSafeName(stashName)}..."
    unstash name: concurrentSafeName(stashName)
}
We can then use concurrentStash stashName and concurrentUnstash stashName and the concurrent instances will have no conflict.
If, say, the two pipelines both need to store stats, we can do something like this for filenames:
def statsDir = concurrentSafeName('stats')
and then the instances will each use a unique filename to store their output.
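For example, an illustrative use of the helpers across two nodes (the node labels and the include path are placeholders):

node('builder') {
    // ... compile ...
    concurrentStash('binaries', 'build/output/**')
}
node('tester') {
    concurrentUnstash('binaries')
    // ... run tests against the unstashed files ...
}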
You can create a build and configure it with parameters. Click the This build is parameterized checkbox and add your desired param(s) in the Configuration of the build. You can then fire off simultaneous builds using different parameters.
Side note: The "Bulk Builder" in Jenkins might push it into a queue, but there's also a This bulk build is parameterized checkbox.
I had a pretty large build queue, and I performed the steps below to run jobs in parallel in Jenkins and reduce the number of jobs waiting in the queue:
1. For each job, navigate to Configure and select the checkbox "Execute concurrent builds if necessary".
2. Navigate to Manage Jenkins -> Configure System, look for "# of executors", and set the number of parallel executors you want (in my case it was set to 0 and I updated it to 2).