I want to trigger/start a job (jobB) only if another job (jobA) has failed n times since its last success.
I looked at the Parameterized Trigger plugin, but for triggers you can only select "Failed"; you cannot make it fire only after a given failure count.
Thanks
Chris
Here is my Groovy script that solved the issue. I used the Groovy Postbuild plugin to execute the script on jobA. Thanks Ian W for your input.
import hudson.model.*
import jenkins.model.Jenkins
job_name = "jobA"
job_name_to_run = "jobB"
triggerThreshold = 2
last_succ_num = 0
last_job_num = 0
def currentBuild = Thread.currentThread().executable
def job = Hudson.instance.getJob(job_name)
def job_data = Jenkins.instance.getItemByFullName(job.fullName)
println 'Job: ' + job_data.fullName
if (job_data.getLastBuild()) {
    last_job_num = job_data.getLastBuild().getNumber()
}
println 'last_job_num: ' + last_job_num
if (job_data.getLastSuccessfulBuild()) {
    last_succ_num = job_data.getLastSuccessfulBuild().getNumber()
}
println 'last_succ_num: ' + last_succ_num
doRunJob = (last_job_num - last_succ_num >= triggerThreshold)
println 'do run job? ' + doRunJob
if (doRunJob) {
    def jobToRun = Hudson.instance.getJob(job_name_to_run)
    def cause = new Cause.UpstreamCause(currentBuild)
    def causeAction = new hudson.model.CauseAction(cause)
    Hudson.instance.queue.schedule(jobToRun, 0, causeAction)
}
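The trigger condition itself is just arithmetic on build numbers. A minimal sketch of that check in Python, with hypothetical build numbers (the `should_trigger` name is made up for illustration):

```python
def should_trigger(last_build, last_success, threshold=2):
    """True when at least `threshold` builds have run since the last success."""
    return last_build - last_success >= threshold

# e.g. last build #7, last success #5, threshold 2 -> trigger
print(should_trigger(7, 5))  # True
print(should_trigger(6, 5))  # False
```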
This is the output I got:
"Successfully created scratch org: oopopoooppooop, username: test+color#example.com"
when I run the following script:
echo "1. Creating Scratch Org"
def orgStatus = bat returnStdout: true, script: "sfdx force:org:create --definitionfile ${PROJECT_SCRATCH_PATH} --durationdays 30 --setalias ${SCRATCH_ORG_NAME} -v DevHub "
if (!orgStatus.contains("Successfully created scratch org")) {
    error "Scratch Org creation failed"
} else {
    echo orgStatus
}
Now I need to extract the scratch org ID and username from the output separately and store them.
You can use a regular expression:
def regEx = 'Successfully created scratch org: (.*?), username: (.*)'
def match = orgStatus =~ regEx
if (match) {
    println "ID: " + match[0][1]
    println "username: " + match[0][2]
}
Here the =~ operator applies the regular expression to the input and creates a matcher, stored in match; the capture groups are then available by index.
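The same extraction can be sketched in Python's re module, using the sample line from the output above:

```python
import re

# Sample line taken verbatim from the output shown earlier in the question.
output = "Successfully created scratch org: oopopoooppooop, username: test+color#example.com"

# Same pattern as the Groovy answer: lazy group for the ID, greedy for the username.
match = re.search(r'Successfully created scratch org: (.*?), username: (.*)', output)
if match:
    org_id, username = match.group(1), match.group(2)
    print(org_id)    # oopopoooppooop
    print(username)  # test+color#example.com
```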
I'm trying to add a parameter in a Jenkins Groovy shell step, and I wonder whether Groovy string interpolation can be nested, like this:
node {
    def A = 'C'
    def B = 'D'
    def CD = 'Value what I want'
    sh "echo ${${A}${B}}"
}
What I expected is this:
'Value what I want'
as if I had run:
sh "echo ${CD}"
But it fails with an error saying $ is not found among steps [...]
Is this not possible?
Like this?
import groovy.text.GStringTemplateEngine
// processes a string in "GString" format against the bindings
def postProcess(str, Map bindings) {
    new GStringTemplateEngine().createTemplate(str).make(bindings).toString()
}

node {
    def A = 'C'
    def B = 'D'
    def bindings = [
        CD: 'Value what I want'
    ]
    // so this builds the "template": echo ${CD}
    def template = "echo \${${"${A}${B}"}}"
    // post-process to get: echo Value what I want
    def command = postProcess(template, bindings)
    sh command
}
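The same build-the-template-text-then-substitute idea can be sketched with Python's string.Template; the variable names here just mirror the Groovy example:

```python
from string import Template

A, B = 'C', 'D'
bindings = {'CD': 'Value what I want'}

# First build the template text with the computed key -> 'echo ${CD}'
template = Template('echo ${%s}' % (A + B))

# Then substitute against the bindings, as GStringTemplateEngine does.
print(template.substitute(bindings))  # echo Value what I want
```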
Regarding the accepted answer: if you're putting the values in a map anyway, then you can just interpolate the key:
def A = 'C'
def B = 'D'
def bindings = [ CD: 'Value what I want' ]
assert bindings["${A}${B}"] == 'Value what I want'
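The same dynamic-key idea, sketched in Python: build the key from the two variables, then look it up in the map instead of nesting interpolation.

```python
A, B = 'C', 'D'
bindings = {'CD': 'Value what I want'}

# Concatenate the two values to form the key, then look it up.
value = bindings[A + B]
print(value)  # Value what I want
```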
${A}${B} is not valid Groovy syntax inside an interpolation.
Interpolation simply inserts the value of the expression between ${}.
Even if you change it to valid syntax and define a $ method, the result will not be what you want:
def nested = "${${A}+${B}}"
println nested
static $(Closure closure) { // define the $ method
    closure.call()
}
CD will be printed, i.e. the concatenation of the values of A and B, not the value of the variable CD.
I have a trigger job (upstream job), parameterized with an Active Choices (uno-choice) dynamic choice parameter, that triggers multiple jobs (downstream jobs).
Trigger job parameters:
I'm using this script, which triggers the selected jobs via Post-build Actions -> Groovy Postbuild:
// param list of jobs to execute, coming from the upstream job
def upstreamParam = "JOB_LIST_TRIGGER"
def upstreamParamFlag = "NEXT_RELEASE_TYPE"
def resolver = manager.build.buildVariableResolver
def JOBS_TO_EXECUTE = resolver.resolve(upstreamParam)
def FLAG = resolver.resolve(upstreamParamFlag)
def viewName = "test"
def jobsName = []
//only the NEXT RELEASE TYPE
def params = new hudson.model.StringParameterValue('NEXT_RELEASE_TYPE_BIS', FLAG)
// if no job selected in the parameter, will trigger all jobs in the view "test", otherwise will execute only selected jobs
if (JOBS_TO_EXECUTE == null || JOBS_TO_EXECUTE == '') {
    // retrieve job names from the Jenkins view
    hudson.model.Hudson.instance.getView(viewName).items.each() {
        jobsName.push(it.getDisplayName())
    }
    // launch all jobs in the view
    jobsName.each() {
        job = manager.hudson.getItem(it)
        cause = new hudson.model.Cause.UpstreamCause(manager.build)
        def paramsAction = new hudson.model.ParametersAction(params)
        causeAction = new hudson.model.CauseAction(cause)
        manager.hudson.queue.schedule(job, 0, causeAction, paramsAction)
    }
} else {
    // create a list of the jobs selected in the JOB_LIST_TRIGGER param
    hudson.model.Hudson.instance.getView(viewName).items.each() {
        if (JOBS_TO_EXECUTE.contains(it.getDisplayName())) {
            jobsName.push(it.getDisplayName())
        }
    }
    // launch only the jobs selected in the JOB_LIST_TRIGGER param
    jobsName.each() {
        job = manager.hudson.getItem(it)
        cause = new hudson.model.Cause.UpstreamCause(manager.build)
        def paramsAction = new hudson.model.ParametersAction(params)
        causeAction = new hudson.model.CauseAction(cause)
        manager.hudson.queue.schedule(job, 0, causeAction, paramsAction)
    }
}
In this code I only pass the NEXT_RELEASE_TYPE parameter (as NEXT_RELEASE_TYPE_BIS) to the downstream jobs.
Each downstream job (job1..job4) contains the step below to display the current job's parameters:
echo "------------------------------------------------"
echo "------------------------------------------------"
# the current param
echo $RELEASE_VERSION
echo "------------------------------------------------"
echo "------------------------------------------------"
# the current param
echo $NEXT_RELEASE_TYPE
echo "------------------------------------------------"
echo "------------------------------------------------"
# the upstream param
echo $NEXT_RELEASE_TYPE_BIS
echo "------------------------------------------------"
echo "------------------------------------------------"
# the current param
echo $NEXT_RELEASE
echo "------------------------------------------------"
echo "------------------------------------------------"
the output is:
+ echo ------------------------------------------------
------------------------------------------------
+ echo
+ echo ------------------------------------------------
------------------------------------------------
+ echo ------------------------------------------------
------------------------------------------------
+ echo
+ echo ------------------------------------------------
------------------------------------------------
+ echo ------------------------------------------------
------------------------------------------------
+ echo Major
Major
+ echo ------------------------------------------------
------------------------------------------------
+ echo ------------------------------------------------
------------------------------------------------
+ echo
+ echo ------------------------------------------------
------------------------------------------------
+ echo ------------------------------------------------
------------------------------------------------
The result prints only the upstream parameter, not the current parameters. So is there any way to get the downstream jobs' (job1..job4) current parameters after they are triggered by the trigger job?
We have a Windows-based SPSS server, say 10.20.30.40. We would like to kick off an SPSS Production job on it from another server, 10.20.30.50.
Can we kick off the job using a batch file?
1. Create an SPJ file in SPSS Production.
2. Make a .bat file to run the SPJ:
"C:\Program Files\IBM\SPSS\Statistics\21\stats.exe" -production "K:\Meinzer\Production\SPJ\DashBoardInsert.spj"
3. Create a 'Scheduled Task' in Windows to run the .bat file.
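The batch-file step can also be sketched in Python via subprocess; the paths are the ones from this answer, and the helper names (build_command, run_production_job) are made up for illustration:

```python
import subprocess

# Paths from the answer above; adjust for your own install and SPJ location.
STATS_EXE = r"C:\Program Files\IBM\SPSS\Statistics\21\stats.exe"
SPJ_FILE = r"K:\Meinzer\Production\SPJ\DashBoardInsert.spj"

def build_command(exe, spj):
    """Assemble the same command line the .bat file would run."""
    return [exe, "-production", spj]

def run_production_job(exe=STATS_EXE, spj=SPJ_FILE):
    """Run the production job; blocks until it finishes and returns the exit code."""
    return subprocess.call(build_command(exe, spj))
```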
The real issue is getting your output from the job. For that, I use Python.
I use syntax like this:
begin program.
AlertDays=4
Files=['k:\meinzer/Production\dashboarddatasets/aod_network_report.sps',
'k:\meinzer/Production\push/aod_network_reportpush.sps',
'k:\meinzer/Production\pushproduction/aod_network_reportpushP.sps']
end program.
insert file='k:/meinzer/production/ps/errorTestPickles.sps'.
to trigger this (the contents of errorTestPickles.sps):
*still needs error info passed.
set mprint=off /printback=on.
begin program.
#test files to observe - uncomment out 8 or 9
#Files=['k:\meinzer/Production\dashboarddatasets/test.sps']
#Files=['k:\meinzer/Production\dashboarddatasets/testfail.sps']
#Files=['k:\meinzer/Production\dashboarddatasets/clinfo.sps']
#Files=['k:\meinzer/Production\dashboarddatasets/CSOC_Consecutive_High_Level_Svcs.sps']
import shutil
import spss
import re, os, pickle
from datetime import datetime
def main(Files):
    """The parser and processor for Syntax Error Reporting."""
    try:
        for FilePath in Files:
            Start = datetime.now().replace(microsecond=0)
            DBname, init_Syntax = init_Vars(FilePath)
            cmds = init_cmds(init_Syntax)
            cmd = ''
            cmd2 = ''
            cmd3 = ''
            try:
                # keep the last three commands around so a failure report can show context
                for cmd in cmds:
                    cmd = cmd.replace('\r\n', '\n ')
                    cmd = cmd.replace('\t', ' ')
                    print cmd
                    spss.Submit(cmd)
                    cmd3 = cmd2
                    cmd2 = cmd
                # cmd, cmd2, cmd3 = run_cmds(cmd, cmd2, cmd3, cmds)
                Finish = datetime.now().replace(microsecond=0)
                spss_Output(DBname)
                SavedNewname = check_saved_new_name(DBname)
                if SavedNewname == 1:
                    send_result(DBname, 'Failure', Start, Finish, 0, AlertDays, cmd, cmd2, cmd3)
                    break
                if SavedNewname == 0:
                    send_result(DBname, 'Success', Start, Finish, 1, AlertDays)
            except Exception, e:
                Finish = datetime.now().replace(microsecond=0)
                errorLevel, errorMsg = get_spss_error(e)
                send_result(DBname, "Failure in code", Start, Finish, 0, AlertDays, cmd, cmd2, cmd3, errorLevel, errorMsg)
                spss_Output(DBname)
                break
    except IOError:
        print "can't open file or difficulty initializing commands from spss"
        send_result('Can not open File %s' % DBname, 'Failure', Start, Finish, 0, AlertDays)
        spss_Output(DBname)

def init_Vars(FilePath):
    # normalize the path and undo escape sequences introduced by backslash paths
    FilePath = FilePath.encode('string-escape')
    FilePath = FilePath.replace('\\', '/')
    FilePath = FilePath.replace('/x07', '/a')
    FilePath = FilePath.replace('//', '/')
    FilePath = FilePath.replace('/x08', '/b')
    FilePath = FilePath.replace('/x0b', '/v')
    FilePath = FilePath.replace('/x0c', '/v')
    print 'this is the file path..................... ' + FilePath
    DBname = os.path.split(os.path.normpath(FilePath))[-1]
    init_Syntax = FilePath
    OutputClose = "output close name=%s." % DBname
    OutputNew = "output new name=%s." % DBname
    spss.Submit(OutputClose)
    spss.Submit(OutputNew)
    return (DBname, init_Syntax)

def init_cmds(init_Syntax):
    # read the syntax file (stripping any UTF-8 BOM) and split it into SPSS commands
    with open(init_Syntax, 'rb') as f:
        BOM_UTF8 = "\xef\xbb\xbf"
        code = f.read().lstrip(BOM_UTF8)
    r = re.compile('(?<=\.)\s*?^\s*|\s*\Z|\A\s*', re.M)
    cmds = r.split(code)
    cmds = [cmdx.lstrip() for cmdx in cmds if not cmdx.startswith("*")]
    return cmds

def run_cmds(cmd, cmd2, cmd3, cmds):
    for cmd in cmds:
        cmd = cmd.replace('\r\n', '\n ')
        cmd = cmd.replace('\t', ' ')
        print cmd
        spss.Submit(cmd)
        cmd3 = cmd2
        cmd2 = cmd
    return (cmd, cmd2, cmd3)

def send_result(DBname, result, Start, Finish, status, AlertDays, cmd='', cmd2='', cmd3='', errorLevel='', errorMsg=''):
    """Prepend the run result to the error log and the CSV status file."""
    print result + ' was sent for ' + DBname
    FinishText = Finish.strftime("%m-%d-%y %H:%M")
    StartText = Start.strftime("%m-%d-%y %H:%M")
    Runtimex = str(Finish - Start)[0:7]
    error_result = """%s %s
Start Finish Runtime Hrs:Min:Sec
%s %s %s """ % (DBname, result, StartText, FinishText, Runtimex)
    error_result_email = """%s <br>
%s <br> Runtime %s <br>\n""" % (result, DBname, Runtimex)
    with open("k:/meinzer/production/output/Error Log.txt", "r+") as myfile:
        old = myfile.read()
        myfile.seek(0)
        if status == 1:
            myfile.write(error_result + "\n\n" + old)
        if status == 0:
            myfile.write(error_result + '\n' + 'This was the problem\n' + errorLevel + " " + errorMsg + '\n' + cmd3 + '\n' + cmd2 + '\n' + cmd + "\n\n" + old)
    with open("k:/meinzer/production/output/ErrorCSV.txt", "r+") as ErrorCSV:
        oldcsv = ErrorCSV.read()
        ErrorCSV.seek(0)
        ErrorCSV.write(DBname + ',' + str(status) + ',' + FinishText + ",0" + ',' + str(AlertDays) + "\n" + oldcsv)

def check_saved_new_name(DBname):
    """Check whether the output warns that a dataset was saved under a new name."""
    with open("k:/meinzer/production/output/text/" + DBname + ".txt", "r") as searchfile:
        SavedNewname = 'Warning # 5334' in searchfile.read()
    return SavedNewname

def get_spss_error(e):
    print 'Error', e
    errorLevel = str(spss.GetLastErrorLevel())
    errorMsg = spss.GetLastErrorMessage()
    return (errorLevel, errorMsg)

def spss_Output(DBname):
    """Export the accumulated output as text and save the .spv viewer file."""
    outputtext = "output export /text documentfile='k:/meinzer/production/output/text/%s.txt'." % DBname
    outputspv = "output save outfile='k:/meinzer/production/output/%s.spv'." % DBname
    spss.Submit(outputspv)
    spss.Submit(outputtext)
main(Files)
end program.