Check for slave existence before running a Jenkins job

Is it possible to execute some sort of script in a Jenkins job, before it starts executing, to check whether the slave it is set to run on exists?
The goal is to avoid cluttering the Jenkins queue with jobs that are set to run on non-existent nodes.

See https://your.jenkins/computer/api/xml:
<computerSet>
  <computer>
    ...
    <offline>true</offline>
    ...
  </computer>
  <computer>
    ...
    <offline>true</offline>
    ...
  </computer>
  ...
</computerSet>
or https://your.jenkins/computer/api/json:
{
  "computer": [
    {
      ...
      "offline": true,
      ...
    },
    {
      ...
      "offline": true,
      ...
    },
    ...
  ]
}
See http://your.jenkins/computer/api:
You can also specify optional XPath to control the fragment you'd like to obtain (but see below). For example, ../api/xml?xpath=/*/*[0].
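To automate the check, a small Groovy sketch along these lines could query that JSON endpoint before a job is queued. The Jenkins URL, the node name and anonymous read access are assumptions here, not part of the original answer:
import groovy.json.JsonSlurper

// Assumed values - replace with your controller URL and the node the job is tied to.
def jenkinsUrl = 'https://your.jenkins'
def targetNode = 'my-build-slave'

def api = new URL("${jenkinsUrl}/computer/api/json")
def computers = new JsonSlurper().parse(api).computer

def node = computers.find { it.displayName == targetNode }
if (node == null) {
    println "Node ${targetNode} does not exist - skip queueing the job."
} else if (node.offline) {
    println "Node ${targetNode} is offline - skip queueing the job."
} else {
    println "Node ${targetNode} is online - safe to queue the job."
}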


Raise Abort in Jenkins Job from Batch script

I have a Jenkins job which does Git syncs and builds the source code.
I added and configured a "Post build task" step.
In the post-build task, I search the console output for the keyword "TIMEOUT:" (this part is done) and want to declare the job as failed and aborted if the keyword matches.
How can I mark the job as aborted from a batch script when the keyword matches? Something like echo ABORT?
It is easy if you just want to mark the build as FAILED: a plain exit 1 will do that.
Achieving an abort from the Post Build Task plugin is tricky; it is much easier with the Groovy Postbuild plugin.
The Groovy Postbuild plugin provides rich functions to help you, such as a log matcher:
def matcher = manager.getLogMatcher(".*Total time: (.*)\$")
if (matcher?.matches()) {
    manager.addShortText(matcher.group(1), "grey", "white", "0px", "white")
}
Abort function:
import hudson.model.Result

def executor = build.executor ?: build.oneOffExecutor
if (executor != null) {
    executor.interrupt(Result.ABORTED)
}
You can simply exit the flow and return the error code that you want:
echo "Timeout detected!"
exit 1
Jenkins should detect the error and mark the build as failed.
The error code must be between 1 and 255. You can choose whichever you want; just be aware that some codes are reserved:
http://tldp.org/LDP/abs/html/exitcodes.html#EXITCODESREF
You can also consider using the time-out plugin:
https://wiki.jenkins.io/display/JENKINS/Build-timeout+Plugin
Another option is to send a request to the build's /stop URL, which is exactly what happens when you manually abort a build:
echo "Timeout detected!"
curl yourjenkins/job_name/11/stop
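On a secured Jenkins instance that stop call has to be an authenticated POST (and, depending on your CSRF settings, may also need a crumb header). A minimal Groovy sketch of the same idea, where the Jenkins URL, job name, build number and user/API-token pair are all placeholders rather than values from the original answer:
// Hypothetical values - substitute your own Jenkins URL, job, build number and API token.
def stopUrl = new URL('https://yourjenkins/job/job_name/11/stop')
def conn = stopUrl.openConnection()
conn.requestMethod = 'POST'
def token = 'user:api-token'.bytes.encodeBase64().toString()
conn.setRequestProperty('Authorization', "Basic ${token}")
println "Stop request returned HTTP ${conn.responseCode}"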

Jenkins: env in pipeline from Jenkinsfile is null

Please help. env is null.
Jenkinsfile:
node {
    println "Your ENV:"
    println env.dump
    println "Your params:"
    println params.dump
}
Jenkins output:
[Pipeline] properties
[Pipeline] node
Running on foobarquux in c:\workspace\123abc
[Pipeline] {
[Pipeline] echo
Your ENV:
[Pipeline] echo
null
[Pipeline] echo
Your params:
[Pipeline] echo
null
I expect that my environment variables will not be null. I expect env.dump not to be null and to see something beyond Your ENV: when println env.dump executes.
After reading very helpful comments from @mkobit, I realized I needed parentheses for dump, and even with them Jenkins throws a security exception.
${WORKSPACE} only works if it is used on an agent (node)! Otherwise it comes out as null.
I have agent none at the top of my pipeline because I have a few input steps that I don't want to tie up heavyweight executors for. I was also setting an environment variable in the top-level environment {} block that used ${WORKSPACE}, and for the life of me I couldn't figure out why it was being set to null. Another thread mentioned that the workspace only exists on an agent, so I moved that definition into a step running on an agent, and lo and behold: when you set a variable with WORKSPACE while running on an agent, it all works as expected.
The takeaway is that if you use a top-level agent none, the environment block (and presumably other pre-stage blocks) does not run on an agent, so anything that relies on an agent will behave unexpectedly.
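A minimal declarative sketch of that fix, assuming a stage that runs on some labelled agent (the 'linux' label and the ARTIFACT_DIR variable are illustrative, not from the original answer):
pipeline {
    agent none                              // no executor tied up for the input steps
    stages {
        stage('Build') {
            agent { label 'linux' }         // assumption: a node labelled 'linux' exists
            environment {
                // WORKSPACE is only defined here, where the stage actually runs on an agent
                ARTIFACT_DIR = "${WORKSPACE}/artifacts"
            }
            steps {
                echo "Artifacts go to ${env.ARTIFACT_DIR}"
            }
        }
    }
}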
Groovy's optional parentheses require at least one argument, which is different from Ruby.
Method calls can omit the parentheses if there is at least one parameter and there is no ambiguity:
So, to call the dump() method you would do env.dump() or params.dump(). However, this method is not whitelisted, and you will get a security exception (if you are running in the sandbox or using any sort of Jenkins security) because it would print out all fields of the object.
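A sandbox-friendly way to get roughly the same information, sketched here assuming a Unix agent (use bat 'set' on Windows), is to list the environment with a shell step and iterate params directly:
node {
    echo "Your ENV:"
    sh 'printenv | sort'                       // bat 'set' on a Windows agent
    echo "Your params:"
    params.each { k, v -> echo "${k} = ${v}" }
}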
Thanks to StephenKing for pointing this out; I checked again with a fresh Jenkins instance. See the comments inside.
Assuming the job has two parameters [str1=val1, bool1=true]:
node {
    // Print the value of a job parameter named "str1"
    // output: val1
    println "${params.str1}"

    // Calling the dump() function to print all job parameters (keys/vals)
    // NOTE: calling this method should be approved by Jenkins admin
    // output: .... m=[str1:val1, bool1:true] ...
    println params.dump()

    // Same as the above.
    // output: .... m=[str1:val1, bool1:true] ...
    println "${params.dump()}"

    // SYNTAX ERROR, the '$' is not expected here by the parser
    //println ${params.dump()};

    // This appears in the question, but it seems like this is not
    // what the author meant. It tries to find a param named "dump"
    // which is not available
    // output: null
    println params.dump
}

Block a job from running if given node(s) with given label(s) are running other job(s)

In Jenkins, we can block job A while job B is running by using the Build Blocker plugin.
Similarly, I would like a job (for example, another_dumb_job) to NOT run, and instead wait in the queue, if there are any in-progress jobs running on any user-selected slave(s), until those slaves are free again.
For example: I don't want to run a job (which will delete a bunch of slaves, whether offline or online, using a downstream job or by calling some Groovy/Scriptler script) while any of those slaves have active/in-progress jobs running on them.
The end goal is to delete Jenkins slave nodes gracefully, i.e. a node/slave is marked OFFLINE first, any jobs already running on it are allowed to complete, and only then is the slave deleted.
For deleting all offline nodes, tweak the script below and run doDelete() only on slaves where isOffline() is true or isOnline() is false. If you want to delete all nodes (be careful), then don't use the following if statement:
if ( aSlave.name.indexOf(slaveStartsWith) == 0) {
I'm also ignoring one slave (in case you ALWAYS want to protect a particular slave from deletion); this can be enhanced to use a list of slaves to ignore.
Anyway, the following script will gracefully delete any Jenkins slave nodes whose names start with a given value (so that you have more control); it marks them offline right away but deletes them only after any job(s) running on them have completed. I thought I should share it here.
Using the Jenkins Scriptler plugin, you can import/upload/run this script: https://github.com/gigaaks/jenkins-scripts/blob/7eaf41348e886db108bad9a72f876c3827085418/scriptler/disableSlaveNodeStartsWith.groovy
/*** BEGIN META {
"name" : "Disable Jenkins Hudson slaves nodes gracefully for all slaves starting with a given value",
"comment" : "Disables Jenkins Hudson slave nodes gracefully - waits until running jobs are complete.",
"parameters" : [ 'slaveStartsWith'],
"core": "1.350",
"authors" : [
{ name : "GigaAKS" }, { name : "Arun Sangal" }
]
} END META**/
// This Scriptler script will mark Jenkins slave nodes offline for all slaves whose names start with a given value.
// It will wait for any slave nodes which are running any job(s) and then delete them.
// It requires only one parameter, named slaveStartsWith; the value can be passed as e.g. "swarm-".
import java.util.*
import jenkins.model.*
import hudson.model.*
import hudson.slaves.*

def atLeastOneSlaveRunning = true
def time = new Date().format("HH:mm MM/dd/yy z", TimeZone.getTimeZone("EST"))

while (atLeastOneSlaveRunning) {
    // First thing - reset the flag to false.
    atLeastOneSlaveRunning = false
    time = new Date().format("HH:mm MM/dd/yy z", TimeZone.getTimeZone("EST"))
    for (aSlave in hudson.model.Hudson.instance.slaves) {
        println "-- Time: " + time
        println ""
        // Don't do anything if the slave name is "ansible01"
        if (aSlave.name == "ansible01") {
            continue
        }
        if (aSlave.name.indexOf(slaveStartsWith) == 0) {
            println "Active slave: " + aSlave.name
            println('\tcomputer.isOnline: ' + aSlave.getComputer().isOnline())
            println('\tcomputer.countBusy: ' + aSlave.getComputer().countBusy())
            println ""
            if (aSlave.getComputer().isOnline()) {
                // Mark the node temporarily offline so it accepts no new builds
                aSlave.getComputer().setTemporarilyOffline(true, null)
                println('\tcomputer.isOnline: ' + aSlave.getComputer().isOnline())
                println ""
            }
            if (aSlave.getComputer().countBusy() == 0) {
                time = new Date().format("HH:mm MM/dd/yy z", TimeZone.getTimeZone("EST"))
                println("-- Shutting down node: " + aSlave.name + " at " + time)
                aSlave.getComputer().doDoDelete()
            } else {
                atLeastOneSlaveRunning = true
            }
        }
    }
    // Sleep 60 seconds before checking the busy slaves again
    if (atLeastOneSlaveRunning) {
        println ""
        println "------------------ sleeping 60 seconds -----------------"
        sleep(60 * 1000)
        println ""
    }
}
Now I can create a free-style Jenkins job, use a Scriptler script in a build action, and use the above script to gracefully delete slaves starting with a given name (a job parameter that gets passed to the Scriptler script).
If you are fast enough to catch the following error message, it means you ran the Scriptler script (as shown above) in a job that was restricted to run on a non-master node/slave machine. Scriptler scripts are SYSTEM Groovy scripts, i.e. they must run on the Jenkins master's JVM to access and tweak Jenkins resources. To fix the issue, create a job (restricted to run on the master, i.e. the Jenkins master JVM) that accepts one parameter for the Scriptler script, and call that job from the first job ("Trigger a project/job and block until the job is complete"):
21:42:43 Execution of script [disableSlaveNodesWithPattern.groovy] failed - java.lang.NullPointerException: Cannot get property 'slaves' on null objectorg.jenkinsci.plugins.scriptler.util.GroovyScript$ScriptlerExecutionException: java.lang.NullPointerException: Cannot get property 'slaves' on null object
21:42:43 at org.jenkinsci.plugins.scriptler.util.GroovyScript.call(GroovyScript.java:131)
21:42:43 at hudson.remoting.UserRequest.perform(UserRequest.java:118)
21:42:43 at hudson.remoting.UserRequest.perform(UserRequest.java:48)
21:42:43 at hudson.remoting.Request$2.run(Request.java:328)
21:42:43 at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
21:42:43 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
21:42:43 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
21:42:43 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
21:42:43 at java.lang.Thread.run(Thread.java:745)
21:42:43 Caused by: java.lang.NullPointerException: Cannot get property 'slaves' on null object
In other words:
If a Scriptler script build step runs in a job that is not restricted to the master Jenkins machine/JVM, you will get the above error. To solve it, create a job "disableSlaveNodesStartsWith", restrict it to run on the master (to be safe), have it call the Scriptler script, and pass the parameter through to the job/script.
Now, from the other job, call this job:
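The answer originally showed this step with freestyle-job screenshots. As a rough pipeline-flavoured equivalent (the job and parameter names are the ones used above; everything else is illustrative), the call would look something like:
// Trigger the master-only wrapper job and block until it finishes.
build job: 'disableSlaveNodesStartsWith',
      wait: true,
      parameters: [string(name: 'slaveStartsWith', value: 'swarm-')]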

How is it possible to pass a value from a child job to the parent in Jenkins?

I know it is possible to pass values from a parent to child jobs using the Multijob plugin.
Is it possible to pass variables from a child job back to the parent?
Yes, with a little work. If jobParent calls jobChild and you want variableChild1 (which you may have created in the jobChild job) to be visible in the jobParent job, then follow these simple steps.
In the child job, create a file with (variable=value) pairs for all the variables in it. Let's call it child or downstream_job or jobChild_envs.txt.
Once jobParent is done calling jobChild (presumably via a "Trigger another project" or "Build other projects" step), the next action is to use "Copy Artifact from another project/job" (the Copy Artifact plugin in Jenkins). PS: You need to tick the checkbox to FLATTEN the file (see the jobParent image below). https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
Using this plugin, you can get a file/folder from jobChild's workspace into jobParent's workspace at a defined/base workspace location.
In jobParent, you then inject the environment variables from that file (as a build step) using the EnvInject plugin:
https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin
At this point, if the jobChild job created a .txt file containing a variable, for example:
variableChild1=valueChild1
then it will be available/visible to the parent/upstream job jobParent.
See the images for more details and run the jobs at your end to see the output.
In pipeline builds, you can do this as follows. Let's say you want to save the child build's URL and pass it back to the parent pipeline.
In your child build...
// write out some data about the job
def jobData = [job_url: "${BUILD_URL}"]
def jobDataText = groovy.json.JsonOutput.toJson(jobData)
writeFile file: "jobDataChild.json", text: jobDataText, encoding: 'UTF-8'
// archive the artifacts
archiveArtifacts artifacts: "jobDataChild.json", onlyIfSuccessful: false
And you can retrieve this in the parent build...
step([$class: 'CopyArtifact', projectName: 'ChildJobName', filter: "jobDataChild.json", selector: [$class: 'LastCompletedBuildSelector']])
if (fileExists("jobDataChild.json")) {
    def jobData = readJSON file: "jobDataChild.json"
    def jobUrl = jobData.job_url
}
To add to this answer years later: the way I'm doing it is by having a Redis instance that pipelines can connect to and pass data back and forth through.
sh "redis-cli -u $redis_url ping" // server is up
def redis_key = "$BUILD_TAG" // BUILD_TAG is always unique
build job: "child", propagate: true, wait: true, parameters: [
string(name: "redis", value: "$redis_url;$redis_key"),
]
/******** in child job ***********/
def (redis_url, redis_key) = env.redis.tokenize(";")
sh"redis-cli -u $redis_url ping" // we are connected on url
// lpush adds to an array in redis
sh"""
redis-cli -u $redis_url lpush $redis_key "MY_DATA"
"""
/******* in parent job after waiting for child job *****/
def data_from_child = sh(script: "redis-cli --raw -u $redis_url LRANGE $redis_key 0 -1", returnStdout: true)
data_from_child == "MY_DATA"? println("👍") : error("wow did not expect this")
I kind of like this approach better than passing back and forth with files because it allows scaling up via multiple worker nodes and executing multiple jobs in parallel.

Getting the build status in post-build script

I would like to have a post-build hook or similar, so that I can get the same output as e.g. the IRC plugin, but hand it to a script.
I was able to get all the info except for the actual build status. This just doesn't work, neither as a "Post-build script", "Post-build task", "Parameterized Trigger", and so on.
It is possible with some very ugly workarounds, but I wanted to ask, in case someone has a nicer option ... short of writing my own plugin.
It works, as mentioned, with the Groovy Postbuild plugin, but without any extra quoting within the string that gets executed. So I had to put the actual functionality into a shell script that makes a call to curl, which in turn needs quoting for the POST parameters and so on.
def result = manager.build.result
def build_number = manager.build.number
def env = manager.build.getEnvironment(manager.listener)
def build_url = env['BUILD_URL']
def build_branch = env['SVN_BRANCH']
def short_branch = ( build_branch =~ /branches\//).replaceFirst("")
def host = env['NODE_NAME']
def svn_rev = env['SVN_REVISION']
def job_name = manager.build.project.getName()
"/usr/local/bin/skypeStagingNotify.sh Deployed ${short_branch} on ${host} - ${result} - ${build_url}".execute()
Use a Groovy script in a post-build step via the Groovy Postbuild plugin. You can then access Jenkins internals via the Jenkins Java API. The plugin provides the script with the variable manager, which can be used to access important parts of the API (see the Usage section in the plugin documentation).
For example, here's how you can execute a simple external Python script on Windows and output its result (as well as the build result) to build console:
def command = """cmd /c python -c "for i in range(1,5): print i" """
manager.listener.logger.println command.execute().text
def result = manager.build.result
manager.listener.logger.println "And the result is: ${result}"
For this I really like the Conditional Build Step plugin. It's very flexible, and you can choose which actions to take based on build failure or success. For instance, here's a case where I use a conditional build step to send a notification on build failure:
You can also use a conditional build step to set an environment variable or write to a log file that you use in subsequent "execute shell" steps. So, for instance, you might create a build with three steps: one step to compile code/run tests, another to set a STATUS="failed" environment variable, and then a third step which sends an email like "The build finished with a status: ${STATUS}".
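The steps above are freestyle build steps; as a rough sketch of the same idea in a pipeline job (the STATUS variable, the test command and the stage name are illustrative, and the email is replaced with a simple echo):
pipeline {
    agent any
    stages {
        stage('Build and test') {
            steps {
                script {
                    try {
                        sh './run-tests.sh'   // assumption: your compile/test command
                        env.STATUS = 'success'
                    } catch (err) {
                        env.STATUS = 'failed'
                        throw err             // keep the build marked as failed
                    }
                }
            }
        }
    }
    post {
        always {
            echo "The build finished with a status: ${env.STATUS}"
        }
    }
}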
Really easy solution, maybe not too elegant, but it works!
1: Catch each build result you want to handle (in this case SUCCESS).
2: Inject an environment variable set to the job status.
3: Do the same for any other status (in this case I catch everything from ABORTED to UNSTABLE).
4: Afterwards you can use the value for whatever you want; in this case I'm passing it to an ANT script. (Or you can load it directly from ANT as an environment variable...)
Hope this helps!
Groovy script solution:
Here I am using the Groovy script plugin to take the build status and set it in an environment variable, so that the environment variable can be used in post-build scripts via the Post Build Task plugin.
Groovy script:
import hudson.EnvVars
import hudson.model.Environment
def build = Thread.currentThread().executable
def result = manager.build.result.toString()
def vars = [BUILD_STATUS: result]
build.environments.add(0, Environment.create(new EnvVars(vars)))
Post-build script:
echo BUILD_STATUS="${BUILD_STATUS}"
Try Post Build Task plugin...
It lets you specify conditions based on the log output...
Basic solution (please don't laugh)
#!/bin/bash
STATUS='Not set'
if [ ! -z "$UPSTREAM_BUILD_DIR" ]; then
    ISFAIL=$(ls -l /var/lib/jenkins/jobs/$UPSTREAM_BUILD_DIR/builds | grep "lastFailedBuild\|lastUnsuccessfulBuild" | grep $UPSTREAM_BUILD_NR)
    ISSUCCESS=$(ls -l /var/lib/jenkins/jobs/$UPSTREAM_BUILD_DIR/builds | grep "lastSuccessfulBuild\|lastStableBuild" | grep $UPSTREAM_BUILD_NR)
    if [ ! -z "$ISFAIL" ]; then
        echo $ISFAIL
        STATUS='FAIL'
    elif [ ! -z "$ISSUCCESS" ]; then
        STATUS='SUCCESS'
    fi
fi
echo $STATUS
where
$UPSTREAM_BUILD_DIR=$JOB_NAME
$UPSTREAM_BUILD_NR=$BUILD_NUMBER
passed from the upstream build.
Of course, "/var/lib/jenkins/jobs/" depends on your Jenkins installation.
