Re-run Jenkins job on a different slave upon failure - jenkins

I have a job which is compatible with 2 slaves (configured in different locations). I often experience connectivity issues due to VPN session timeouts, so I am trying to figure out a way to automatically run the job on slave 2 if it fails on slave 1. Please let me know if there is any plugin or other way to accomplish this.

I think with a freestyle project it would be hard to implement your requirement.
Pipeline script
Check this if you don't know the plugin: How to create a pipeline script
According to this answer, the Pipeline Plugin allows you to write jobs that run on multiple slave nodes using labels:
node('linux') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    sh "make"
    step([$class: 'ArtifactArchiver', artifacts: 'build/program', fingerprint: true])
}
node('windows && amd64') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    sh "mytest.exe"
}
I created this simple pipeline script and it works (this example does not use labels, but you could add them):
def exitStatusInMasterNode = 'success'
node {
    echo 'Hello World in node master'
    echo 'status:' + exitStatusInMasterNode
    exitStatusInMasterNode = 'failure'
}
node {
    echo 'Hello World in node slave'
    echo 'master status:' + exitStatusInMasterNode
}
The exitStatusInMasterNode variable can be shared across the node blocks.
So if the build on slave 1 fails, you can set exitStatusInMasterNode to failure, and at the start of the slave 2 block you can check whether exitStatusInMasterNode is failure and, if so, run the same build on that slave.
Example:
def exitStatusInMasterNode = 'none'
node {
    try {
        echo 'Hello World in Slave-1'
        throw new Exception('Simulating an error')
        exitStatusInMasterNode = 'success'
    } catch (err) {
        echo err.message
        exitStatusInMasterNode = 'failure'
    }
}
node {
    if (exitStatusInMasterNode == 'success') {
        echo 'Job in slave 1 was success. Slave-2 will not be executed'
        currentBuild.result = 'SUCCESS'
        return
    }
    echo 'Re launch the build in Slave-2 due to failure on Slave-1'
    // exec simple tasks or stages
}
Log of the simulated error on slave 1
Running on Jenkins in .../multiple_nodes
Hello World in Slave-1
Simulating an error
Running on Jenkins in .../multiple_nodes
Re launch the build in Slave-2 due to failure on Slave-1
Finished: SUCCESS
Log when there is no error on slave 1 (comment out the line: throw new Exception)
Running on Jenkins in .../multiple_nodes
Hello World in Slave-1
Running on Jenkins in .../multiple_nodes
Job in slave 1 was success. Slave-2 will not be executed
Finished: SUCCESS
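A minimal sketch of the same idea with node labels, assuming your two agents carry the labels slave1 and slave2 (adjust the label names to your own setup):
def exitStatusInSlave1 = 'none'
node('slave1') {
    try {
        // run the real build steps here
        echo "Building on ${env.NODE_NAME}"
        exitStatusInSlave1 = 'success'
    } catch (err) {
        echo "Build on slave 1 failed: ${err}"
        exitStatusInSlave1 = 'failure'
    }
}
if (exitStatusInSlave1 == 'failure') {
    node('slave2') {
        echo "Re-running the build on ${env.NODE_NAME} because slave 1 failed"
        // repeat the same build steps here
    }
}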

Related

How to run a task if tests fail in Jenkins

I have a site in production, and I have a simple Playwright test that browses to the site and does some basic checks to make sure it's up.
I'd like to have this job run in Jenkins every 5 minutes, and if the tests fail I want to run a script that will restart the production server. If the tests pass, I don't want to do anything.
What's the easiest way of doing this?
I have the MultiJob plugin that I thought I could use, and have the restart triggered on the failed test step, but it doesn't seem to have the ability to trigger specifically on fail.
Something like the following will do the job for you. I'm assuming you have a second job that will take care of the restart.
pipeline {
    agent any
    triggers {
        cron('*/5 * * * *')
    }
    stages {
        stage("Run the Test") {
            steps {
                echo "Running the Test"
                // I'm returning exit code 1 so Jenkins will think this failed
                sh '''
                    echo "RUN SOMETHING"
                    exit 1
                '''
            }
        }
    }
    post {
        success {
            echo "Success: Do nothing"
        }
        failure {
            echo 'I failed :(, Execute restart Job'
            // Executing the restart Job.
            build job: 'RestartJob'
        }
    }
}
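If the restart job needs context, or you don't want this monitoring job to block on it, the build step also accepts wait and parameters arguments; a sketch (the REASON parameter name is purely illustrative):
build job: 'RestartJob',
    wait: false,
    parameters: [string(name: 'REASON', value: 'Playwright health check failed')]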

Setting the status of a build in Jenkins

I have a Jenkins job.
It is pretty simple: pull from git, and run the build.
The build is just one step:
Execute Windows batch command
In my use case, I will need to run some python scripts.
Some will fail, some others will not.
python a.py
python b.py
What determines the final status of the build?
It seems I can edit that by:
echo #STABLE > build.properties
but how are the STABLE/UNSTABLE statuses assigned if not specified by the user?
What happens if b.py raises an error and fails?
Jenkins treats a pipeline step as failed if a command returns a non-zero exit code.
Internally the build status is exposed as currentBuild.currentResult, which can have three values: SUCCESS, UNSTABLE, or FAILURE.
If you want to control the failure / success of your pipeline yourself, you can catch exceptions / non-zero exit codes and set the result manually via currentBuild.result (currentResult itself is read-only). Plugins also use this attribute to change the result of the pipeline.
For example:
stage('Run scripts') {
    steps {
        script {
            try {
                sh "exit 1" // non-zero exit code throws and fails the pipeline
                sh "exit 0" // a zero exit code would be marked as passed
                currentBuild.result = 'SUCCESS'
            } catch (Exception e) {
                currentBuild.result = 'FAILURE'
                // or currentBuild.result = 'UNSTABLE'
            }
        }
    }
}
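Applied to the question's two scripts (assuming a Windows agent, since the question uses a batch build step), a sketch that lets b.py fail without failing the whole build by downgrading the result to UNSTABLE with the catchError step:
stage('Run Python scripts') {
    steps {
        // a.py must succeed; a non-zero exit code fails the build
        bat 'python a.py'
        // if b.py exits non-zero, mark the build and stage UNSTABLE and keep going
        catchError(buildResult: 'UNSTABLE', stageResult: 'UNSTABLE') {
            bat 'python b.py'
        }
    }
}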

Not able to attach file from Slave machine and email using emailext in Jenkins

I have a master (Unix) and a slave machine (Windows).
I have created a Multibranch Pipeline project on the master; when it is triggered, all of the processing takes place on the slave. I am trying to send the HTML reports which are generated on the slave machine, but I get this exception:
ERROR: Error: No workspace found!
Sending email to: abhishek.gaur1#pb.com
[Pipeline] }
[Pipeline] // stage
[Pipeline] End of Pipeline
Finished: SUCCESS
I am using the below code in Jenkinsfile:
success {
    emailext attachmentsPattern: '**/overview-features.html',
        body: '${SCRIPT, template="groovy-html.template"}',
        mimeType: 'text/html',
        subject: 'Success Pipeline: ${currentBuild.fullDisplayName}',
        to: 'abhishek.gaur1#pb.com'
}
The file should be attached to the email and sent. Currently it shows ERROR:
Error: No workspace found!
From my tests it seems the agent none case has a problem in configurations where the workspace is not allocated on the master.
agent none lets you set agents per stage, but the post block doesn't allow setting an agent; with agent none it runs on the master without a workspace, from what I gathered.
So the only solution for a declarative pipeline in that case is to run the whole build on the agent with label Developer30; if your example is complete, that should be no problem.
pipeline {
    agent {
        label 'Developer30'
    }
    tools {
        maven 'MAVEN_HOME'
    }
    stages {
        stage('Compile Stage') {
            steps {
                bat 'mvn clean'
            }
        }
    }
    post {
        success {
            // emailext stuff
        }
    }
}
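The // emailext stuff placeholder is just the emailext call from the question, unchanged; because the whole pipeline now runs on the Developer30 agent, the workspace and the generated overview-features.html are available when the mail is sent:
post {
    success {
        emailext attachmentsPattern: '**/overview-features.html',
            body: '${SCRIPT, template="groovy-html.template"}',
            mimeType: 'text/html',
            subject: 'Success Pipeline: ${currentBuild.fullDisplayName}',
            to: 'abhishek.gaur1#pb.com'
    }
}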

Jenkins complicated build flow, is it possible?

I would like to have a Jenkins build flow that looks like this.
After the build is triggered, all slaves run the same job in parallel (a setup job).
If any slave fails this job, it should not continue on.
All the slaves that pass that job should then grab a job out of a pool of jobs that need to be completed, and once a slave completes a job it should go back and grab another job from the pool.
I have only started working with Jenkins a few weeks ago, and the way I have it set up now, each job picked up by a slave has to run the setup job first. This really slows down build times because I have about 30 jobs and the setup takes ~2 minutes.
I am using Jenkins as an automated testing platform and all the jobs in the job pool can run independently of each other. I have 5 slaves currently and ~30 jobs.
The following should do the trick:
def jobPool = new ArrayDeque()
jobPool.add({
echo "Doing stuff on ${env.NODE_NAME}"
});
jobPool.add({
echo "Doing other stuff on ${env.NODE_NAME}, a little slower"
sleep 4
});
jobPool.add({
echo "Doing more stuff on ${env.NODE_NAME}, even slower"
sleep 10
});
jobPool.add({
echo "Doing stuff quick on ${env.NODE_NAME}"
});
jobPool.add({
echo "Doing stuff quicker on ${env.NODE_NAME}"
});
def par = [:]
for (x in ["master", "urban"]) {
def nodeName = x; // needed due to variable scoping
par[nodeName] = {
node (nodeName) {
try {
echo "Doing setup on ${env.NODE_NAME}!"
// Do you're setup
echo "Done with setup"
} catch (Exception e) {
echo "Will not use this node as it failed setup!"
return;
}
while (true) {
// echo "${jobPool.size()}"
def subTask = jobPool.poll()
//echo "${jobPool.size()} ${subTask}"
if (subTask == null) {
break;
}
// Might wan't try catch around the next line if you wan't to continue if a job fails
subTask()
}
}
}
}
parallel par
if (!jobPool.isEmpty()) {
error "Not all tasks was done!"
}
Simply add your "job pool jobs" to the jobPool variable and modify the setup part; see the sketch below.
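For example, if the pooled work is really a set of existing downstream Jenkins jobs, each pool entry can simply trigger one of them (the job names here are hypothetical):
jobPool.add({
    // trigger an existing downstream job from the pool
    build job: 'integration-tests-suite-A'
})
jobPool.add({
    build job: 'integration-tests-suite-B'
})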
It seems like you want separate stages in the same job. This is made much easier with Jenkins 2 pipelines. There are some pictures here:
https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Stage+View+Plugin
The Groovy code ends up looking like this:
node {
    stage 'Checkout'
    svn 'https://svn.mycorp/trunk/'
    stage 'Build'
    sh 'make all'
    stage 'Test'
    sh 'make test'
}

Jenkins Pipeline - how to get values after parallel execution

Is it possible to save some values during a parallel execution and use these values during a final step?
In the following example, I would like to know which Jenkins slave is used during the parallel execution and use it during the final step.
node {
    stage 'Checkout'
    checkout([...])
    stash includes: '**', name: 'binary'

    stage 'Running simulation'
    parallel (
        "stream 1" : {
            node {
                unstash "binary"
                sh "echo \"\$(whoami)#\$(hostname):\$PWD\""
                // How to save the previous result
                // Run simulation on node first slave
                ...
            }
        },
        "stream 2" : {
            node {
                unstash "binary"
                sh "echo \"\$(whoami)#\$(hostname):\$PWD\""
                // How to save the previous result
                // Run simulation on node second slave
                ...
            }
        }
    )

    stage 'Gathering results files'
    // use the values of the slaves to retrieve some files.

    stage 'Generate report'
}
Thanks for your answer.
My bad, I was using version 2.3 of the Pipeline Nodes and Processes plugin. It works fine with version 2.5:
hostname = sh (returnStdout: true, script: 'hostname')
println hostname
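To carry those values out of the parallel branches, one option is a map declared outside the parallel block, written by each branch and read in the gathering stage; a sketch along those lines (trim() strips the trailing newline that returnStdout keeps):
def hostnames = [:]
parallel (
    "stream 1" : {
        node {
            // remember which slave ran this branch
            hostnames['stream 1'] = sh(returnStdout: true, script: 'hostname').trim()
            // run simulation on the first slave
        }
    },
    "stream 2" : {
        node {
            hostnames['stream 2'] = sh(returnStdout: true, script: 'hostname').trim()
            // run simulation on the second slave
        }
    }
)
stage 'Gathering results files'
echo "stream 1 ran on ${hostnames['stream 1']}, stream 2 ran on ${hostnames['stream 2']}"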
