I am trying to configure a Jenkins pipeline in the following fashion:
Build A, B and C non-blocking, as they don't depend on each other
(but block until A, B and C have all finished building)
Build D
I tried to configure two pipelines:
Pipeline 1: Build A, B and C non-blocking
Pipeline 2: Build D
But that did not work. A pipeline seems to report "Success" the moment a build starts, which is not what I need.
Ideally I would like to stay within the Jenkins UI instead of creating scripts to accomplish this.
Use the parallel syntax. Found here: https://jenkins.io/doc/book/pipeline/syntax/#parallel
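A minimal declarative sketch of that shape (the echo steps are placeholders for your real builds):

pipeline {
    agent any
    stages {
        stage('Build A, B and C') {
            parallel {
                stage('A') { steps { echo 'building A' } }
                stage('B') { steps { echo 'building B' } }
                stage('C') { steps { echo 'building C' } }
            }
        }
        // Runs only after all three parallel branches have finished.
        stage('Build D') {
            steps { echo 'building D' }
        }
    }
}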
I am developing software for an embedded device. The steps involved in building and verifying it all are complicated: creating the build environment (via containers), building the actual SD card image, running unit tests, automated tests on target hardware, license compliance checks and so on - details aren't important here.
Currently I have this in one long declarative Jenkinsfile as a multibranch pipeline (for all intents and purposes here, we're doing gitflow). In doing this I've hit a limit on the size of a Jenkinsfile (https://issues.jenkins.io/browse/JENKINS-37984) and can't actually get all the stages in that I want to.
Since the pipeline is too big, I broke it up into small pipeline jobs, with parameters to pass data/context between each part of the pipeline, and came up with something like this:
I've colour-coded the A and B artifacts as they're used a lot and the lines would make things messy. What this tries to show is an order of running things, where things in a column depend on artifacts created in the column to the left.
I'm struggling to discover how to do the "waiting" for multiple upstream jobs (for instance Job Foxtrot in the diagram) before starting another downstream job that depends on them.
I specifically do not want to turn each column in the diagram into a parallel group of things, because, for instance, Job Delta might take 2 minutes while Job Charlie takes 20 minutes. The exact duration of each job is variable and unpredictable, as some parameter combinations will mean building from scratch while others will reuse an existing artifact.
I think I need something like the join plugin (https://plugins.jenkins.io/join/), but for pipeline jobs (join only works on freestyle jobs and is quite aged).
The one approach I've explored is to have a "controller" job (maybe Job Alpha in the diagram?) that uses the build step (https://www.jenkins.io/doc/pipeline/steps/pipeline-build-step/) with the wait parameter set to false to trigger the downstream jobs in the correct order, with the correct parameters. It involves searching Jenkins.instance.getItems() to locate the Runs for the downstream projects which have an upstream cause matching the currently executing "controller" job. This means polling until the job appears and then polling until it completes. This feels like I'm "doing it wrong". Below is the source for this polling approach - be gentle, I'm new to Groovy!
Is this polling approach a good way? What problems could I encounter with this approach? Should I be using the ItemListener Jenkins ExtensionPoint and writing a plugin to do this sort of thing in a generic way? Is there another way I've not found?
I feel like I'm not "holding it right" when it comes to the overall pipeline design/architecture here.
Finally, after writing this, I notice that Jobs India, Juliet and Kilo could be collapsed into a single job, but I don't think that solves much.
@NonCPS
Integer getTriggeredBuildNumber(String project, String causeJobName, Integer causeBuildNumber) {
    // Find the downstream job/project by its full name.
    def job = Jenkins.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob.class).find { j -> j.getFullName() == project }
    // Find a build of that job that was caused by the current build.
    def build = job.getBuilds().find { b ->
        b.getCauses().findAll { it.class == hudson.model.Cause.UpstreamCause.class }.find { cause ->
            cause.getUpstreamProject() == causeJobName && cause.getUpstreamBuild() == causeBuildNumber
        } != null
    }
    if (build != null) {
        return build.getNumber()
    } else {
        // The triggered build hasn't appeared in the build list yet.
        return -1
    }
}
@NonCPS
Boolean isBuildComplete(String jobName, Integer buildNumber) {
    def job = Jenkins.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob.class).find { j -> j.getFullName() == jobName }
    if (job) {
        def build = job.getBuildByNumber(buildNumber)
        // A build is complete once it is no longer running and has a result.
        return !build.isBuilding() && build.getResult() != null
    } else {
        println "WARNING: job '${jobName}' not found."
        return false
    }
}
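For reference, the helpers above get driven from the controller pipeline roughly like this (a simplified sketch; 'Delta' stands in for one of the real downstream jobs):

// Trigger the downstream job without waiting for it.
build(job: 'Delta', wait: false)
def triggeredNumber = -1
// Poll until the triggered build appears in Delta's build list...
waitUntil {
    triggeredNumber = getTriggeredBuildNumber('Delta', env.JOB_NAME, env.BUILD_NUMBER as Integer)
    triggeredNumber != -1
}
// ...then poll until that build has finished.
waitUntil {
    isBuildComplete('Delta', triggeredNumber)
}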
We've hit "Code too large" too many times ourselves, and the way to cope with it is to refactor your pipeline so it stays well under the limit. The following may be used:
You can run a combination of scripted and declarative pipeline, so some stages at the beginning and/or at the end may be refactored out.
You can build some of the parallel stages dynamically; this code is not counted towards the limited code size (see the sketch below).
Lastly, the issue mentions transformation variables, and that can help too.
We used the combination of the above and have expanded our pipeline well beyond what it was when we first encountered the issue you're facing.
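To illustrate the dynamic option, here is a minimal scripted sketch (the branch names and the ./build.sh command are placeholders):

// Build the map of parallel branches in a loop instead of spelling out
// each stage; as noted above, dynamically generated stages don't count
// towards the code size limit the way literal pipeline code does.
def branches = [:]
['alpha', 'bravo', 'charlie'].each { name ->
    branches[name] = {
        stage("build-${name}") {
            sh "./build.sh ${name}"  // placeholder build command
        }
    }
}
parallel branches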
I'm trying to figure out what options I have when trying to build a good CI/CD pipeline for a monorepo.
I'd like to have something like this (this is only a pseudo-pipeline and not really what I'm using at the moment in my monorepo, or what I will have).
Explanation:
Pre: understand what I should build, test, etc.
Dynamically build a parallel step which will give me the capabilities explained below.
Foo: run the parallel step and comfortably wait :)
This is the only way I could think of to get these features:
* The build process can be shared among the P's, and I can generate some waitUntil statements to make this work, I guess...
* Every P is independent of the others; if, for example, one Ut of P2 fails, it doesn't affect the progress of the rest of the pipeline, or, if I want it to, it's only a failFast configuration.
* Every step along the way is again not related to the progress of the other P's, so when Ut finishes in any of the P's, its St starts immediately (though this might change according to some configuration I'll probably need).
The main problems with that are:
1. I lose the ability to restart single stages (since I can only restart top-level stages).
2. It requires me to do a lot more with scripted pipeline, and Blue Ocean support for that (which is kind of critical to me) looks questionable... it seems Blue Ocean is better supported within the scope of declarative pipeline.
NOTE: It probably looks like I could split every P into a separate Jenkins job, but that would cost me a lot of time in checkout and workspace preparation of the monorepo, and like I said the "build" step may be shared between the P's, so it's more efficient to do it like this.
I will appreciate any feedback or suggestions :)
There's no problem whatsoever with doing what you want with a declarative pipeline, since a stage can have a stages child. So:
pipeline {
    agent any
    stages {
        stage("Pre") {
            steps { echo "work out what to build and test" }
        }
        stage("Foo") {
            parallel {
                stage("P1") {
                    stages {
                        stage("P1-Build") { steps { echo "P1 build" } }
                        stage("P1-Ut")    { steps { echo "P1 unit tests" } }
                        stage("P1-St")    { steps { echo "P1 system tests" } }
                    }
                }
                stage("P2") {
                    stages {
                        stage("P2-Build") { steps { echo "P2 build" } }
                        stage("P2-Ut")    { steps { echo "P2 unit tests" } }
                    }
                }
                // etc. for P3 and P4
            }
        }
    }
}
Stages P1..P4 will run in parallel, but within each of them the Build, Ut and St stages will run sequentially.
You won't be able to restart separate stages but it's not a good feature anyway.
I am trying to implement a more complex combination filter for Jenkins using the Matrix Groovy Execution Strategy Plugin. See my previous question for more details. It seems to work otherwise but if the nodes where the label is set are offline, the matrix job hangs and does not put the rest of the matrix items into the job queue.
This is enough Groovy to cause the same effect in the plugin:
combinations.each {
    // Group the combinations by their cfg value; the plugin schedules
    // each group as a separate section.
    result[it.cfg] = result[it.cfg] ?: []
    result[it.cfg] << it
}
return [result, true]
If I set the execution strategy to "Classic", all the job labels go into the queue even if some nodes are offline. I have "Execute concurrent builds if necessary" enabled if that makes any difference.
Is there some setting I need to fix or is this a plugin issue?
That's because the classic strategy puts all the keystone jobs into the queue and then the others afterwards. This plugin schedules them in sections, and if the node is offline then those sections will wait, which is standard behaviour.
Note: I wrote the Matrix Groovy Execution Strategy Plugin.
You can force all the combinations to be submitted in one go by doing the following:
combinations.each {
    // Using a single fixed key puts every combination into one section,
    // so they are all submitted to the queue in one go.
    result["a"] = result["a"] ?: []
    result["a"] << it
}
return [result, true]
I am using a Jenkins pipeline script and when all nodes are offline, the builds keep on queuing up. How do I stop Jenkins from adding jobs to the queue while all slaves are offline?
pipeline {
    triggers {
        pollSCM('H/3 * * * 1-5')
    }
}
Is your agent's availability configured to 'Keep this agent online as much as possible'?
One way to tackle this situation is to run the script below on the master node and build your pipeline(s) only if at least one of the nodes is online. You can pass the online node name to your downstream job as a parameter.
def axis = []
// Collect the names of all nodes that are currently online.
for (slave in jenkins.model.Jenkins.instance.getNodes()) {
    if (slave.toComputer().isOnline()) {
        axis += slave.getDisplayName()
    }
}
return axis
Above script source: Jenkins: skip if node is offline
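Building on that, a minimal sketch of passing an online node to a downstream job from a pipeline script (the job and parameter names are hypothetical):

// 'axis' comes from the snippet above.
if (!axis.isEmpty()) {
    build(job: 'my-downstream-job',
          parameters: [string(name: 'NODE_NAME', value: axis[0])])
}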
Other links that may help are:
Monitor and restart your slave nodes - https://wiki.jenkins.io/display/JENKINS/Monitor+and+Restart+Offline+Slaves
I found this script handy in some situations:
https://github.com/jenkinsci/jenkins-scripts/blob/master/scriptler/clearBuildQueue.groovy
I'm not into pipeline jobs, but for regular freestyle jobs, this kind of queueing will only happen if your builds are parameterized. Separate builds are then needed to ensure that the project runs separately for each and every parameter value (it does not matter whether the value is actually different).
So, removing build parameters from your project might solve the problem.
I have 3 Jenkins jobs: smoke tests, critical path tests (part 1) and critical path tests (part 2).
Right now they start one after another. I need to create a build pipeline that depends on test results, taking into account the result of a single test (@Test annotation in TestNG) and ignoring the overall result of the test suite.
I want to get a configuration like this:
Smoke tests -> if a specified test passed, then run critical path tests part 1 and part 2 on different nodes
So, please tell me how to make a Jenkins job depend on only one test's result (not the whole suite)?
You can try to use one of the build log analysis plugins:
https://wiki.jenkins-ci.org/display/JENKINS/Text-finder+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Post+build+task
These scan the build output and downgrade the build result to failure when specific text is found.
Then, in the downstream item, check the option "Build after other projects are built" in the build triggers section, set the proper upstream item name, and set the proper trigger result.
I solved that task by using 2 Jenkins extensions:
https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin
Create a properties file from the test. The file contains a property that indicates the status of the test step (a minimal example follows these steps).
With the EnvInject Plugin, add a new step to the Jenkins job (the step must come after the test run) and inject the parameter value from the file created in the first step.
Create a build flow with the Build Flow Plugin.
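To illustrate the first two steps, the injected file can be as small as this (the file name is hypothetical; the property name is the one read by the script below):

# test-status.properties - written by the smoke test run
TestStepSuccessful=true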
Write a Groovy script:
smokeTest = build("Run_Smoke_Test")
// Read the test step status injected by the EnvInject Plugin.
def isTestStepSuccessful = smokeTest.environment.get("TestStepSuccessful")
if (isTestStepSuccessful != "false") {
    // Run both critical path branches in parallel on different nodes.
    parallel(
        {
            build("Run_Critical_Path_Part_1_Test")
            build("Run_Critical_Path_Part_3_Test")
        },
        {
            build("Run_Critical_Path_Part_2_Test")
        }
    )
}
build("Run_Critical_Path_Final_Test")