How to use upstream triggers in a declarative Jenkinsfile

What is the correct usage of the upstream trigger in a declarative Jenkinsfile?
I'm trying to add dependency triggers, so that the pipeline is triggered after another project has built successfully.
The jenkinsci documentation on GitHub lists upstream events as possible pipeline triggers here.
My Jenkinsfile currently looks like this:
pipeline {
    agent {
        docker {
            ...
        }
    }
    options {
        timeout(time: 1, unit: 'HOURS')
        buildDiscarder(logRotator(numToKeepStr: '10'))
    }
    triggers {
        upstream 'project-name,other-project-name', hudson.model.Result.SUCCESS
    }
}
which leads to the following error:
WorkflowScript: 16: Arguments to "upstream" must be explicitly named. # line 16, column 9.
upstream 'project-name,other-project-name', hudson.model.Result.SUCCESS
^
Update
I changed the syntax of the upstream trigger according to the code snippet here. Now there is at least no syntax error anymore, but the trigger is still not working as intended.
triggers {
    upstream(
        upstreamProjects: 'project-name,other-project-name',
        threshold: hudson.model.Result.SUCCESS)
}
If I understand the documentation correctly, this pipeline should be triggered when one of the two declared jobs has completed successfully, right?

If both projects are in the same folder and Jenkins is aware of the downstream project's Jenkinsfile containing the code below (i.e. the downstream project has run at least once), this should work for you:
triggers {
    upstream(upstreamProjects: "../projectA/master", threshold: hudson.model.Result.SUCCESS)
}
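For context, a minimal sketch of what the downstream project's complete Jenkinsfile could look like with this trigger in place; the upstream job name projectA and the branch master are illustrative assumptions:
pipeline {
    agent any
    triggers {
        // relative path from this job's folder to the upstream job's master branch
        upstream(upstreamProjects: '../projectA/master', threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('build') {
            steps {
                echo 'Triggered by projectA'
            }
        }
    }
}
Keep in mind that the trigger is only registered once this Jenkinsfile has actually run at least once.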

I'm still relatively new to Jenkins and I have been struggling to get this to work, too. Specifically, I thought that just saving the pipeline code with the triggers directive in the web UI editor would connect the two jobs. It doesn't.
However, if one then manually runs the downstream job, the upstream... code in its pipeline appears to modify the job definition and configure the triggers, resulting in the same situation one would have if they had just set things up in the "build triggers" section of the job config form. In other words, the directive appears to be just a way to tell Jenkins to do the web UI config work for you, when/if the job gets run for some reason.
I spent hours on this and it could have been documented better. Also, the "declarative directive generator" in Jenkins for triggers gave me something like upstream threshold: 'FAILURE' which resulted in something like:
WorkflowScript: 5: Expecting "class hudson.model.Result" for parameter "threshold" but got "FAILURE" of type class java.lang.String instead # line 5, column 23.
To fix this I had to change the parameter to read upstream threshold: hudson.model.Result.FAILURE. Fair enough, but then the generator is broken. Not impressed.
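For reference, the corrected triggers block would look something like this (the upstream job name is a placeholder):
triggers {
    upstream(upstreamProjects: 'upstream-job-name', threshold: hudson.model.Result.FAILURE)
}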

Related

Is there a way to use "propagate=false" in a Jenkinsfile with declarative syntax directly for stage/step?

You can use propagate on a build job as described here:
https://jenkins.io/doc/pipeline/steps/pipeline-build-step/
So you can use something like this to prevent a failing step from failing the complete build:
build(job: 'example-job', propagate: false)
Is there a way to use this for a stage or a step? I know I can surround it with a try/catch, and that works almost as I want: it ignores the failing stage and resumes the rest of the build, but it does not display the stage as failed. For now I write all failing stages to a variable and output that in a later stage, but this is not ideal.
If I can't suppress propagation in a stage/step, is there maybe a way to use the build() call to do the same? Maybe if I move it to another pipeline and call that via build()?
Any help appreciated.
With catchError you can prevent a failing step from failing the complete build:
pipeline {
    agent any
    stages {
        stage('1') {
            steps {
                sh 'exit 0'
            }
        }
        stage('2') {
            steps {
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    sh 'exit 1'
                }
            }
        }
        stage('3') {
            steps {
                sh 'exit 0'
            }
        }
    }
}
In the example above, all stages will execute and the pipeline will be successful, but stage 2 will show as failed.
As you might have guessed, you can freely choose the buildResult and stageResult, in case you want it to be unstable or anything else. You can even fail the build and continue the execution of the pipeline.
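For example, to mark both the stage and the overall build as unstable instead, the catchError call from stage 2 could be changed like this:
catchError(buildResult: 'UNSTABLE', stageResult: 'UNSTABLE') {
    sh 'exit 1'
}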
Just make sure your Jenkins is up to date, since this is a fairly new feature.
There are currently lots of suggestions for the scripted syntax, but for the declarative syntax work is in progress to support this.
Track the progress of https://issues.jenkins-ci.org/browse/JENKINS-26522, which groups all of the pieces needed to achieve this. It has some interesting bits already marked as 'Resolved' (meaning a code change was made), such as https://issues.jenkins-ci.org/browse/JENKINS-49764 ("Allow define a custom status for pipeline stage"). Unfortunately, I cannot find references to any of the tickets involved in the Jenkins changelog, which would make sense since the parent ticket is not yet finished.
Of interest might also be https://issues.jenkins-ci.org/browse/JENKINS-45579, which was reopened due to an issue.
Admittedly, there are a confusing number of tickets tracking this work, but that is probably due to the fact that the functionality being implemented has a number of use-cases.
Another interesting ticket is "Individual Pipeline steps and stages/blocks should have Result statuses", for which I was able to find a related PR: https://github.com/jenkinsci/workflow-api-plugin/pull/63
It is worth noting that the declarative pipeline was always designed as being opinionated and as such was not meant to support everything possible with the scripted syntax. For more complicated workflows and use-cases where it does not serve your needs, scripted syntax may be the only (and recommended?) option.
For needs such as the one you've stated, if enough noise is made, the declarative pipeline will probably be modified in due course to support it.

Jenkins upstreamProjects not starting jobs

I created a pipeline job in Jenkins and want it to be triggered when another job ends.
I introduced the trigger into my pipeline this way:
pipeline {
    agent any
    triggers { upstream(upstreamProjects: "jobname") }
    ...
}
It does not start when the first job ends. I tried the build-trigger section in the web interface and it worked.
I wonder what I am missing to get it to work in the pipeline code.
I also tried adding "../folder/jobname" and "threshold: hudson.model.Result.SUCCESS".

Jenkins fails on building a downstream job

I'm trying to trigger a downstream job from my current job, like so:
pipeline {
    stages {
        stage('foo') {
            steps {
                build job: 'my-job', propagate: true, wait: true
            }
        }
    }
}
The purpose is to wait on the job result and fail or succeed according to that result. Jenkins always fails with the message Waiting for non-job items is not supported. The job mentioned above does not take any parameters and is defined like the rest of my jobs, using the multibranch pipeline plugin.
All I can think of is that this type of Jenkins item is not supported as a build step input, but that seems counterintuitive and would be a blocker for me. Can anyone confirm whether this is indeed the case?
If so, can anyone suggest any workarounds?
Thank you
I actually managed to fix this by paying more attention to the definition of the build step. Since all my downstream jobs are defined as multibranch pipeline jobs, their structure is folder-like, with each item in the folder representing a separate job. Thus the correct way to call the downstream jobs was not build job: 'my-job', propagate: true, wait: true, but rather build job: "my-job/my-branch-name", propagate: true, wait: true.
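Put back into the pipeline from the question, the fixed step might look like this, where my-branch-name stands in for whichever branch of the multibranch job you want to build:
pipeline {
    agent any
    stages {
        stage('foo') {
            steps {
                // a multibranch job is a folder; address the branch job inside it
                build job: 'my-job/my-branch-name', propagate: true, wait: true
            }
        }
    }
}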
Also, unrelated to the question but related to the issue at hand: make sure you always have at least one more executor free on the Jenkins machine, since the wait: true syntax will consume one executor for the waiting job and one for the job being waited on, and you can easily find yourself in a resource-starvation situation.
Hope this helps
This looks like JENKINS-45443, which includes the comment:
Pipeline has no support for the upstream/downstream job system, in part due to technical limitations, in part due to the fact that there is no static job configuration that would make this possible except by inspecting recent build metadata.
But it also offers a workaround:
As long as the solution is still ongoing, I include our workaround here. It is based on the rtp (Rich Text Publisher) plugin, which you need to have installed to make it work:
At the end of our Jenkinsfile, after triggering the job, we wait for it to finish. In that case, build() returns the object used to run the downstream job, and we get the info from it.
Warning: getAbsoluteUrl() function is a critical one. Use it at your own risk!
def startedBld = build(
    job: YOUR_DOWNSTREAM_JOB,
    wait: true, // VERY IMPORTANT, otherwise build() does not return the expected object
    propagate: true
)

// Publish the started build information in the build result
def text = '<h2>Downstream jobs</h2>Started job ' + startedBld.rawBuild.toString()
rtp(nullAction: '1', parserName: 'HTML', stableText: text)
This issue is part of JENKINS-29913, open for the past two years:
Currently DependencyGraph is limited to AbstractProject, making it impossible for Workflows to participate in upstream/downstream relationships (in cases where job chaining is required, for example due to security constraints).
It refers to the RFE (Request for Enhancement) JENKINS-37718, which is based on another (unanswered) Stack Overflow question.

RunListener and QueueListener not invoked in pipeline?

I'm trying to write a plugin that listens for node executions during a Jenkins pipeline. The pipeline will have some code like this:
stage('production deploy') {
    input 'enter change ticket #'
    ...
    node('prod') {
        // production deploy code here
    }
}
Either on allocation of the node, or before any tasks run on the node, I want to verify that a change management ticket has been approved. For freestyle jobs I could use QueueListener or RunListener, but neither of these is invoked when I run a pipeline.
I can't put this code in the pipeline script because anyone that can edit the pipeline script could remove the verification.
Are there any other listeners I could hook into before, or just after a node is allocated in a pipeline?
In my previous implementation for freestyle builds, I had overridden the setUpEnvironment method. I didn't realize this is not called in pipeline runs - makes sense. I then implemented onStarted in my RunListener and successfully broke into my code. Just confusion on my part.
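For anyone attempting the same thing, here is a minimal sketch of such a listener, written in Groovy; the class name and the ticket check are hypothetical:
import hudson.Extension
import hudson.model.Run
import hudson.model.TaskListener
import hudson.model.listeners.RunListener

@Extension
class ChangeTicketRunListener extends RunListener<Run<?, ?>> {
    @Override
    void onStarted(Run<?, ?> run, TaskListener listener) {
        // hypothetical: look up and verify the change management ticket here
        listener.logger.println("Run started: ${run.fullDisplayName}")
    }
}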

How can I set the job timeout for all jobs using the Jenkins DSL

I read How can I set the job timeout using the Jenkins DSL. That sets the timeout for one job. I want to set it for all jobs, and with slightly different settings: 150%, averaged over 10 jobs, with a max of 30 minutes.
According to the relevant job-dsl-plugin documentation I should use this syntax:
job('example-3') {
    wrappers {
        timeout {
            elastic(150, 10, 30)
            failBuild()
            writeDescription('Build failed due to timeout after {0} minutes')
        }
    }
}
I tested in http://job-dsl.herokuapp.com/ and this is the relevant XML part:
<buildWrappers>
  <hudson.plugins.build__timeout.BuildTimeoutWrapper>
    <strategy class='hudson.plugins.build_timeout.impl.ElasticTimeOutStrategy'>
      <timeoutPercentage>150</timeoutPercentage>
      <numberOfBuilds>10</numberOfBuilds>
      <timeoutMinutesElasticDefault>30</timeoutMinutesElasticDefault>
    </strategy>
    <operationList>
      <hudson.plugins.build__timeout.operations.FailOperation/>
      <hudson.plugins.build__timeout.operations.WriteDescriptionOperation>
        <description>Build failed due to timeout after {0} minutes</description>
      </hudson.plugins.build__timeout.operations.WriteDescriptionOperation>
    </operationList>
  </hudson.plugins.build__timeout.BuildTimeoutWrapper>
</buildWrappers>
I verified with a job I edited manually before, and the XML is correct. So I know that the Jenkins DSL syntax up to here is correct.
Now I want to apply this to all jobs. First I tried to list all the job names:
import jenkins.model.*

Jenkins.instance.items.findAll().each {
    println("Job: " + it.name)
}
This works too; all job names are printed to the console.
Now I want to plug it all together. This is the full code I use:
import jenkins.model.*

Jenkins.instance.items.findAll().each {
    job(it.name) {
        wrappers {
            timeout {
                elastic(150, 10, 30)
                failBuild()
                writeDescription('Build failed due to timeout after {0} minutes')
            }
        }
    }
}
When I push this code and Jenkins runs the DSL seed job, I get this error:
ERROR: Type of item "jobname" does not match existing type, item type can not be changed
What am I doing wrong here?
The Job-DSL plugin can only be used to maintain jobs that have been created by that plugin before. You're trying to modify the configuration of jobs that have been created in some other way -- this will not work.
For mass-modification of existing jobs (like, in your case, adding the timeout), the most straightforward way is to change the job's XML specification directly,
either by changing the config.xml file on disk, or
by using the REST or CLI API.
xmlstarlet is a powerful tool for performing such tasks directly at the shell level.
Alternatively, it is possible to perform the change via a Groovy script from the "Script Console" -- but for that you need some understanding of Jenkins' internal workings and data structures.
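For illustration, a rough Script Console sketch of that approach, which grafts the wrapper XML generated above into each freestyle job's config.xml. This is an untested sketch and assumes all target jobs are freestyle projects:
import javax.xml.transform.stream.StreamSource
import jenkins.model.Jenkins

// wrapper XML taken from the Job-DSL output above (WriteDescriptionOperation omitted for brevity)
def wrapperXml = '''<hudson.plugins.build__timeout.BuildTimeoutWrapper>
  <strategy class="hudson.plugins.build_timeout.impl.ElasticTimeOutStrategy">
    <timeoutPercentage>150</timeoutPercentage>
    <numberOfBuilds>10</numberOfBuilds>
    <timeoutMinutesElasticDefault>30</timeoutMinutesElasticDefault>
  </strategy>
  <operationList>
    <hudson.plugins.build__timeout.operations.FailOperation/>
  </operationList>
</hudson.plugins.build__timeout.BuildTimeoutWrapper>'''

Jenkins.instance.getAllItems(hudson.model.FreeStyleProject).each { job ->
    def root = new XmlParser().parseText(job.configFile.asString())
    def wrappers = root.buildWrappers[0]
    // only touch jobs that do not already have the timeout wrapper
    if (wrappers != null && !wrappers.'hudson.plugins.build__timeout.BuildTimeoutWrapper') {
        wrappers.append(new XmlParser().parseText(wrapperXml))
        def out = new StringWriter()
        new XmlNodePrinter(new PrintWriter(out)).print(root)
        job.updateByXml(new StreamSource(new StringReader(out.toString())))
        println("Updated: ${job.fullName}")
    }
}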
