I have a Jenkinsfile that contains a few logical checks for a commit to GitHub; only after certain criteria are met will it trigger a downstream build job. The relevant parts of the Jenkinsfile are below:
script {
    if (... a bunch of conditions) {
        echo 'Building because of on-demand job!'
        build job: '/my/downstream/job',
              parameters: [gitParameter(name: 'BRANCH', value: env.BRANCH_NAME),
                           gitParameter(name: 'GIT_BRANCH', value: env.GIT_LOCAL_BRANCH)],
              wait: true, propagate: true
    } else {
        echo 'Skipping'
        currentBuild.result = 'NOT_BUILT'
    }
}
However, in the GitHub UI, any time a job is skipped it is rendered as a failure. For instance, when a commit is made that does not satisfy the conditions, Jenkins correctly skips the build, yet the GitHub commit history shows it as a failure.
I know this is somewhat trivial and literally only for appearances, but it is quite aggravating to see so many red Xs. Is the best solution just to switch currentBuild.result to SUCCESS? I am somewhat hesitant to do so, since it's not technically a success (nothing was built), but I don't see another way of getting GitHub to not mark it as failed.
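If you do go the SUCCESS route, a minimal sketch of the skip branch might look like this; setting a build description is optional (and the description text here is hypothetical), but it keeps the green check self-explanatory in the build history:

script {
    if (... a bunch of conditions) {
        // unchanged build logic from above
    } else {
        echo 'Skipping'
        // GitHub renders SUCCESS as a green check; NOT_BUILT shows up as failed
        currentBuild.result = 'SUCCESS'
        currentBuild.description = 'Skipped: trigger conditions not met'
    }
}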
I have a declarative pipeline, and I have two groups of tests:
Failures from the first group are actual errors, and they must stop the build.
Failures from the second group are only warnings: ideally I'd like the stage to fail, but the overall build status to remain successful.
Here is a simplified version of my attempt at the warnings group. It has two steps:
steps {
    bat label: "Compliance Test - Warnings",
        script: "${env.WARNTESTCMD} -report compliance_report.xml"
    catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
        junit testResults: 'compliance_report.xml',
              allowEmptyResults: true
    }
}
I've tried using catchError to get the result I want, but no dice: the build status is set to UNSTABLE, and I don't seem to have any say in that.
I guess the first thing I should ask is whether that's possible at all in Jenkins. Can I keep my test results in JUnit format so they are recognised by Jenkins as tests, or do I need to store them as artefacts rather than as official test results?
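One approach, assuming a JUnit plugin recent enough to have the skipMarkingBuildUnstable option: wrap the test command itself in catchError, so a non-zero exit fails the stage but not the build, and tell junit not to downgrade the build when it records failing tests. A sketch:

steps {
    // a non-zero exit code fails this stage but leaves the build result alone
    catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
        bat label: "Compliance Test - Warnings",
            script: "${env.WARNTESTCMD} -report compliance_report.xml"
    }
    // record the results without letting test failures mark the build UNSTABLE
    junit testResults: 'compliance_report.xml',
          allowEmptyResults: true,
          skipMarkingBuildUnstable: true
}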
I have the following code in my Jenkinsfile:
build job: 'project_1', propagate: false, wait: false
This builds a separate project, and it works great. However, I would like a link to the particular build that was scheduled. Unfortunately, the console output only links to the branch job under master:
[Pipeline] build (Building project_1)
Scheduling project: project_1 » master (link is here)
I believe that if I set propagate: true, wait: true it would link to the specific build, but I cannot do this due to constraints in the project. Is there a way around this to get a link to the particular build without setting propagate or wait to true?
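One possibility, assuming a Pipeline: Build Step plugin recent enough to support the waitForStart parameter: with it, the build step returns a RunWrapper as soon as the downstream build starts, without waiting for it to finish, and the wrapper carries the build's URL. A sketch:

script {
    // returns once the downstream build has started; does not wait for it to finish
    def run = build job: 'project_1', propagate: false, wait: false, waitForStart: true
    echo "Downstream build: ${run.absoluteUrl}"
}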
I am trying to set up various Jenkins pipelines whose last stage is always to run some acceptance tests. To cut a long story short, the acceptance tests and test data (much of which is shared) for all products are checked into the same repository, which is about 0.5 GB in size. It therefore seemed best to have a separate job for the acceptance tests and trigger it with a build step from each pipeline, passing the appropriate arguments to run the relevant tests. (It is also sometimes useful to rerun these tests without rebuilding the product.)
stage('AcceptanceTest') {
    steps {
        build job: 'run-tests', parameters: ..., wait: true
    }
}
So far I have seen that I can either:
- trigger the job as normal, but this uses an extra agent/executor; there doesn't seem to be a way to tell it to reuse the one from the build (main pipeline). Both pipelines start with agent { label 'master' }, but that seems to mean "allocate a new agent on a node matching master".
- trigger the job with the wait: false argument. This doesn't block an executor, but it means I can't report the results of the tests in the main pipeline; it gives the impression that the test stage has always succeeded.
Is there a better way?
I seem to have solved this by adding agent none at the top of my main pipeline and moving agent { label 'master' } into the build stage. I can then leave my 'AcceptanceTest' stage without an agent and define one in the 'run-tests' job as before. I was under the impression from the docs that if you put agents in stages then all stages needed to have one, but it seems not to be the case, which is lucky for this use case...
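A minimal sketch of that arrangement (the stage bodies are placeholders):

pipeline {
    agent none  // no pipeline-level executor is held for the whole run
    stages {
        stage('Build') {
            agent { label 'master' }  // executor allocated for this stage only
            steps {
                echo 'build the product here'
            }
        }
        stage('AcceptanceTest') {
            // no agent: the build step runs on a lightweight (flyweight) executor,
            // so no heavyweight executor is blocked while waiting for run-tests
            steps {
                build job: 'run-tests', wait: true
            }
        }
    }
}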
I don't think there's another way for declarative pipeline.
On the other hand, for scripted pipeline you could execute the build step outside of node {}; it then only holds the lightweight executor on master, releasing the one on the slave:
stage("some") {
build job: 'test'
node {
...
Related question: Jenkins - Trigger another pipeline job in same machine - without creating new "Executor"
I can call another Jenkins job using the build command. Is there a way I can tell another job to do a branch scan?
A multibranch pipeline job has a UI button, "Scan Repository Now". When you press this button, it does a checkout of the configured SCM repository, detects all the branches, and creates subjobs for each branch.
I have a multibranch pipeline job for which I have selected the "Suppress automatic SCM triggering" option because I only want it to run when I call it from another job. Because this option is selected, the multibranch pipeline doesn't automatically detect when new branches are added to the repository. (If I click "Scan Repository Now" in the UI it will detect them.)
Essentially I have a multibranch pipeline job and I want to call it from another multibranch pipeline job that uses the same git repository.
node {
    if (env.BRANCH_NAME == "the-branch-I-want" && other_criteria) {
        //scanScm "../my-other-multibranch-job" <--- scanScm is a fake command I made up
        build "../my-other-multibranch-job/${env.BRANCH_NAME}"
    }
}
I get an error on that build line, because the target multibranch pipeline job does not yet know that BRANCH_NAME exists. I need a way to trigger an SCM re-scan in the target job from this current job.
Similar to what you figured out yourself, I can contribute my optimization, which actually waits until the scan has finished (but is subject to Script Security approval):
// Helper functions to trigger branch indexing for a certain multibranch project.
// The permissions that this needs are pretty evil.. but there's currently no other choice
//
// Required permissions:
// - method jenkins.model.Jenkins getItemByFullName java.lang.String
// - staticMethod jenkins.model.Jenkins getInstance
//
// See:
// https://github.com/jenkinsci/pipeline-build-step-plugin/blob/3ff14391fe27c8ee9ccea9ba1977131fe3b26dbe/src/main/java/org/jenkinsci/plugins/workflow/support/steps/build/BuildTriggerStepExecution.java#L66
// https://stackoverflow.com/questions/41579229/triggering-branch-indexing-on-multibranch-pipelines-jenkins-git
void scanMultiBranchAndWaitForJob(String multibranchProject, String branch) {
    String job = "${multibranchProject}/${branch}"
    // the `build` step does not support waiting for branch indexing (ComputedFolder job type),
    // so we need some black magic to poll and wait until the expected job appears
    build job: multibranchProject, wait: false
    echo "Waiting for job '${job}' to appear..."
    while (Jenkins.instance.getItemByFullName(job) == null || Jenkins.instance.getItemByFullName(job).isDisabled()) {
        sleep 3
    }
}
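Hypothetical usage, mirroring the snippet from the question; note that getItemByFullName expects the job's full name as Jenkins sees it (e.g. 'my-folder/my-other-multibranch-job', a placeholder here), not a relative ../ path:

node {
    if (env.BRANCH_NAME == "the-branch-I-want" && other_criteria) {
        scanMultiBranchAndWaitForJob('my-folder/my-other-multibranch-job', env.BRANCH_NAME)
        build "my-folder/my-other-multibranch-job/${env.BRANCH_NAME}"
    }
}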
Ended up figuring this out shortly after posting the question. Calling build against the base multibranch pipeline job as opposed to a branch causes it to re-scan. The solution to my above snippet would have ended up looking something like...
node {
    if (env.BRANCH_NAME == "the-branch-I-want" && other_criteria) {
        build job: "../my-other-multibranch-job", wait: false, propagate: false // scan for branches
        sleep 2 // scanning takes time
        build "../my-other-multibranch-job/${env.BRANCH_NAME}"
    }
}
The wait: false is important because otherwise you get "ERROR: Waiting for non-job items is not supported". The multibranch "parent" job is closer to a folder than a job, but it's a folder that supports the build command, and it does so by scanning the SCM.
But solving this just led to another problem: with wait: false we have no way of knowing when the SCM scan has finished. If you have a large repository (or you're short on Jenkins agents), the branch won't get discovered until after the second build command has already failed because the branch doesn't exist yet. You could bump the sleep time even higher, but that doesn't scale.
Fortunately, it turns out that manually initiating the SCM scan isn't even needed if you have GitHub webhooks set up for your Jenkins. The branch will be discovered more or less instantly, so for my purposes this is solved another way. The reason I was running into it is that we don't have webhooks set up in our dev Jenkins, but once I move this code to prod it will work fine.
If you're trying to use Job DSL to set up multibranch jobs calling multibranch jobs and you don't have webhooks or something equivalent, the better path is probably to abandon multibranch for your second tier of jobs and use Job DSL to create folders and manage the branch jobs yourself.
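A minimal Job DSL sketch of that idea; every name and URL below is a placeholder, and in practice the branch list would come from wherever you track active branches:

// seed job script: one folder for the second tier, one pipeline job per branch
folder('acceptance-tests')

['main', 'release-1.x'].each { branchName ->
    pipelineJob("acceptance-tests/${branchName}") {
        definition {
            cpsScm {
                scm {
                    git {
                        remote { url('https://example.com/your/repo.git') }
                        branch(branchName)
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
    }
}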
I have a build flow which builds four jobs in sequence, e.g.:
build(Job 1)
build(Job 2)
build(Job 3)
build(Job 4)
I want to run Job 4 even if any of the previous jobs fail. How can I do that in the build flow?
You can set propagate to false; that will ensure your workflow continues even if a particular job fails:
build job: '<job_name>', propagate: false
For me, propagate: false didn't work, so I used ignore(FAILURE) instead in my BuildFlow to make sure that all the jobs in the flow execute even if there are failures. (Ref)
ignore(FAILURE) {
    build("JobToCall", Param1: "param1Val", Param2: "param2Val")
}
You can use the Jenkins Workflow plugin as follows:
try {
    build 'A'
} catch (e) {
    echo 'Build for job A failed'
}
try {
    build 'B'
} catch (e) {
    echo 'Build for job B failed'
}
You can extend this idiom to any number of jobs and any combination of success/failure flow you want (for example, adding build steps inside the catch blocks if you want to build some job in case another failed).
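For instance, a sketch that runs all four jobs from the question, keeps going past failures, and then surfaces them by marking the build UNSTABLE (the job names are taken from the example above):

def failed = []
for (name in ['Job 1', 'Job 2', 'Job 3', 'Job 4']) {
    try {
        build name
    } catch (e) {
        echo "Build for job ${name} failed"
        failed << name
    }
}
if (failed) {
    // finish the flow, but flag which downstream jobs failed
    currentBuild.result = 'UNSTABLE'
    echo "Failed jobs: ${failed.join(', ')}"
}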