Rebuild Jenkins Pipeline after specified Time if failed - jenkins

I am using Pipelines to build our projects in Jenkins.
If the build of a pipeline fails I'd like to automatically start a new build after a predefined period of time.
Thus it is not possible for me to use the retry(3) command in the Jenkinsfile because, the way I understand it, there would be no way to add a delay between attempts.
A sleep or something similar won't do either, because it would block an executor for the whole waiting period.
The Naginator Plugin seems to do exactly what I need but it doesn't seem to work with pipelines.
I tried implementing it in the Jenkinsfile like this:
post {
    always {
        echo '-------post called-------'
        retryBuild {
            rerunIfUnstable(true)
            retryLimit(2)
            fixedDelay(60)
        }
        echo '-------post finished-------'
    }
}
This does not throw any errors, and both echoes show up in the build log. However, it doesn't actually do anything.
Does anyone have any experience with a similar problem or is there potentially even a way to use Naginator (or other plugins) with Jenkins pipelines?
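For what it's worth, one possible workaround (not Naginator; a minimal sketch assuming the standard build step and a hypothetical RETRY_COUNT parameter) is to let a failed build re-queue itself with a quiet period, so the delay happens in the queue rather than on an executor:
pipeline {
    agent any
    parameters {
        // Hypothetical counter so retries eventually stop
        string(name: 'RETRY_COUNT', defaultValue: '0')
    }
    stages {
        stage('Build') {
            steps {
                sh './build.sh' // stand-in for the real build
            }
        }
    }
    post {
        failure {
            script {
                if ((params.RETRY_COUNT as int) < 2) {
                    // quietPeriod delays the queued build; wait: false returns
                    // immediately, so no executor is blocked while waiting
                    build job: env.JOB_NAME, wait: false, quietPeriod: 60,
                          parameters: [string(name: 'RETRY_COUNT',
                                              value: "${(params.RETRY_COUNT as int) + 1}")]
                }
            }
        }
    }
}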

Related

Jenkins Script Console vs Build Agent

I'm experiencing some odd behavior with a Jenkins build (the Jenkins project is a multi-branch pipeline with the Jenkinsfile provided by the source repository). The last step is to deploy the application, which involves replacing an artifact on a remote host and then restarting the process that runs it.
Everything works perfectly except for one problem: the service is no longer running after the build completes. I even added some debugging messages after the restart script to prove, via the build output, that it really was working. But for some reason, once the build exits, the service is down. I've done extensive testing to ensure Jenkins connects to the remote host as the correct user, has the right environment variables set, etc. Plus, the restart script output is very detailed in the first place; there would be no way to get that successful output if it hadn't actually worked. So I am assuming the process that runs the deploy steps on the remote host does something else after the build completes.
Here is where it gets weird: if I run the same exact deploy commands using the Script Console for the same exact remote host, it works. And the service isn't stopped after successfully starting up.
By "same exact" I mean the script is the same, but the DSL is different between the Script Console and the pipeline. For example, in the Script Console, I use
println "deployscript.sh <args>".execute().text
Whereas in the pipeline I use
pipeline {
    agent {
        node { label 'mynode' }
    }
    stages {
        /* other stages commented out for testing */
        stage('Deploy') {
            steps {
                script {
                    sh 'deployscript.sh <args>'
                }
            }
        }
    }
}
I also don't have any issues running the commands manually via SSH.
Does anyone know what is going on here? Is there a difference in how the Script Console vs. the Build Agent connects to the remote host? Does either of these processes run other commands? I understand that the SSH session is controlled by a Java process, but I don't know much else about the Jenkins implementation.
If anyone is curious about the application itself, it is a Progress Application Server for OpenEdge (PASOE) instance. The deploy process involves un-deploying the old WAR file, deploying the new one, and then stopping/starting the instance.
UPDATE:
I added a 60-second sleep to the end of the deploy script to give myself time to test the service before the Jenkins process ended. This was successful, so I am certain the service goes down at the moment the Jenkins build process exits. I am not sure if this is an issue with Jenkins owning a process, but again, the Script Console handles this fine...
Found the issue. It's buried away in some low-level Jenkins documentation, but by default Jenkins kills any processes spawned by a build when the build ends. This confirms that the build was indeed running correctly and that Jenkins was the culprit: the service was simply being killed after the build completed.
The fix is to set the BUILD_ID environment variable (JENKINS_NODE_COOKIE for pipelines, as in my situation) to "dontKillMe".
For example:
pipeline {
    agent { /* set agent */ }
    environment {
        JENKINS_NODE_COOKIE = 'dontKillMe'
    }
    stages { /* set build stages */ }
}
See here for more details: https://wiki.jenkins.io/display/JENKINS/ProcessTreeKiller
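As a narrower variant (a sketch, not from the original answer): the same cookie can be scoped to a single step with withEnv, so every other process spawned by the build is still cleaned up as usual:
stage('Deploy') {
    steps {
        // Only the process tree spawned inside this block survives the build
        withEnv(['JENKINS_NODE_COOKIE=dontKillMe']) {
            sh 'deployscript.sh <args>'
        }
    }
}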

Find artifacts in post-build actions

In a nutshell:
How can I access the location of the produced artifacts within a shell script started in a build or post-build action?
The longer story:
I'm trying to set up a Jenkins job to automate the building and propagation of Debian packages.
So far, I have already been successful in using the debian-pbuilder plugin to perform the build, such that Jenkins presents the final artifacts after successfully finishing the job:
mypackage_1+020200224114528.NOREV.4_all.deb
mypackage_1+020200224114528.NOREV.4_amd64.buildinfo
mypackage_1+020200224114528.NOREV.4_amd64.changes
mypackage_1+020200224114528.NOREV.4.dsc
mypackage_1+020200224114528.NOREV.4.tar.xz
Now I would like to also automate the deployment into the local reprepro repository, which would actually just require a simple shell script invocation that I've already put together.
My problem: I find no way to determine the artifact location for that deployment script to operate on. The debian-pbuilder plugin generates the artifacts in a temporary directory ($WORKSPACE/binaries.tmp15567690749093469649) whose name changes with every build.
Since the artifacts are listed properly in the finished job's status view, I would expect the artifact details to be provided to the script (e.g. via environment variables). But that is obviously not the case.
I've already searched extensively for a solution but didn't find anything helpful.
Or am I (still somewhat of a rookie in Jenkins) following a wrong approach here?
You can use archiveArtifacts. The binaries.tmp* directory is created in the workspace, so you can glob for it; but first clear the workspace with deleteDir() so leftovers from earlier builds don't match.
Pipeline example:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                deleteDir()
                ...
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'binaries*/**', fingerprint: true
        }
    }
}
You can also check https://plugins.jenkins.io/copyartifact/
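Building on that, the deployment step itself can resolve the changing directory name with a shell glob; a minimal sketch, where deploy-to-reprepro.sh stands in for the actual reprepro script:
stage('Deploy') {
    steps {
        // Resolve the per-build temp directory created by debian-pbuilder
        // and hand its .deb files to the (hypothetical) reprepro script
        sh '''
            DEB_DIR=$(echo "$WORKSPACE"/binaries.tmp*)
            ./deploy-to-reprepro.sh "$DEB_DIR"/*.deb
        '''
    }
}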

Notifications on jenkins job failures - with pipeline from scm

We have several Jenkins pipeline jobs set up as "pipeline from SCM" that check out a Jenkinsfile from GitHub and run it. There is sufficient try/catch-based error handling inside the Jenkinsfile to trap error conditions and notify the right channels. This blog post goes into quite a bit of depth about how to achieve this.
However, if there is an issue fetching the Jenkinsfile in the first place, the job fails silently. How does one generate notifications from general job launch failures, before the pipeline is even started?
A Jenkins SCM pipeline doesn't have any execution provision similar to catch/finally that is called when loading the Jenkinsfile fails, and I don't think there will be one in the future.
However, there is the Global Post Script plugin, which runs a Groovy script after every build of every job on Jenkins. You have to place that script in the $JENKINS_HOME/global-post-script/ directory.
Using this you can send notifications or emails to admins, based on the project that failed and/or the reason/exceptions for the failure.
Sample code that you can put in the script:
if ("$BUILD_RESULT" != 'SUCCESS') {
def job = hudson.model.Hudson.instance.getItem("$JOB_NAME")
def build = job.getBuild("$BUILD_NUMBER")
def exceptionsToHandle = ["java.io.FileNotFoundException","hudson.plugins.git.GitException"]
def foundExection = build
.getLog()
.split('\n')
.toList()
.stream()
.filter{ line ->
!line.trim().isEmpty() && !exceptionsToHandle.stream().filter{ex -> line.contains(ex)}.collect().isEmpty()
}
.collect()
.size() > 0;
println "do something with '$foundExection'"
}
You can validate your Jenkinsfile before pushing it to the repository:
Command-line Pipeline Linter
There are some IDE integrations as well.
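For reference, the command-line linter is just an HTTP endpoint on the Jenkins controller; the invocation looks roughly like this (substitute your own URL and credentials, and note that a CSRF crumb may additionally be required on some setups):
# Validate a local Jenkinsfile against a running Jenkins instance
curl --user "$USER:$API_TOKEN" -X POST \
     -F "jenkinsfile=<Jenkinsfile" \
     "$JENKINS_URL/pipeline-model-converter/validate"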
Apparently this is an open issue with Jenkins: https://issues.jenkins.io/browse/JENKINS-57946
I have decided not to use Yogesh's answer mentioned earlier. For me it is simpler to just copy the content of the Jenkinsfile directly into the Jenkins project instead of pointing Jenkins to the Git location of the Jenkinsfile. In addition, I still keep the Jenkinsfile in Git, but I make sure the copy in Git and the one in Jenkins stay identical.

Jenkins how to create pipeline manual step

Prior to Jenkins 2 I was using the Build Pipeline Plugin to build and manually deploy the application to a server.
Old configuration:
That worked great, but I want to use the new Jenkins pipeline, generated from a Groovy script (Jenkinsfile), to create the manual step.
So far I came up with the input Jenkins step.
The Jenkinsfile script used:
node {
    stage 'Checkout'
    // Get some code from repository
    stage 'Build'
    // Run the build
}
stage 'deployment'
input 'Do you approve deployment?'
node {
    // deploy things
}
But this waits for user input, noting that the build is not completed. I could add a timeout to the input, but that won't allow me to pick a build and deploy it later on.
How can I achieve the same/similar result for a manual step/trigger with the new Jenkins pipeline as before with the Build Pipeline Plugin?
This is a huge gap in the Jenkins Pipeline capabilities, IMO. It is definitely hard to provide, due to the fact that a pipeline is a single job. One solution might be to "archive" the workspace as an "artifact" (tar and archive **/* as 'workspace.tar.gz'), and then have another pipeline copy the artifact and untar it into its new workspace. This allows the second pipeline to pick up where the previous one left off. Of course, there is no way to guarantee that the second pipeline cannot be executed out of turn or more than once, which is too bad. The Delivery Pipeline Plugin really shines here: you execute a new pipeline right from the view, instead of from the first job. Anyway, not much of an answer, but it's the path I'm going to try.
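A minimal sketch of that archive/untar handoff (assumes the Copy Artifact plugin; 'build-pipeline' is a hypothetical name for the first job):
// First pipeline: snapshot the workspace as an artifact
node {
    stage('Build') {
        // ... build steps ...
        sh 'tar -czf workspace.tar.gz --exclude=workspace.tar.gz .'
        archiveArtifacts artifacts: 'workspace.tar.gz'
    }
}

// Second pipeline: pick up where the first left off
node {
    stage('Deploy') {
        copyArtifacts projectName: 'build-pipeline', selector: lastSuccessful()
        sh 'tar -xzf workspace.tar.gz'
        // ... deploy steps ...
    }
}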
EDIT: This plugin looks promising:
https://github.com/jenkinsci/external-workspace-manager-plugin/blob/master/doc/PIPELINE_EXAMPLES.md

Jenkins pipeline parallel not executing

I'm trying to test out the parallel functionality for a Jenkins pipeline job, but for some reason the individual build steps of the parallel job never get passed off to an executor and processed. Normal single-threaded pipeline jobs have no issue processing. I tried restarting the Jenkins server in case some resources were locked up, but it did not help.
The full script I'm trying to execute is:
def branches = [:]
branches["setup"] = { node("nsetup") {
    echo "hello world"
}}
parallel branches
I have only one node, the master, and it has 5 available executors. It is configured to "use as often as possible". I'm pretty new to Jenkins and setting up a server for the first time, so maybe there's something I missed in the configuration that isn't related to the job.
Does anybody have any suggestions?
And 2 minutes after I post I figure it out! Every time.
Turns out I just didn't have any idea how the "node" command really works. The parameter in the parentheses is a label expression: it tells Jenkins to run the block only on a node matching that label. Since no node carried the label "nsetup", the step waited indefinitely for a matching executor, while I had been using the parameter as if it were some random logging field. Oops!
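In other words, the fix is simply to drop the label (or use one that actually exists on a node); a corrected sketch:
def branches = [:]
// With no label argument, the block can run on any available executor
branches["setup"] = { node {
    echo "hello world"
}}
parallel branches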