Consider the following code:
chan configure stdin -blocking false
while { true } {
    chan gets stdin
    if { [chan blocked stdin] } { break }
}
Here, at the last line of the loop, chan blocked stdin returns true in both cases: when there is no more input available at stdin, and when there is some input available at stdin but it doesn't end with a newline character. I need to distinguish these two cases.
How can I do that?
chan pending input stdin also returns 0 in both cases.
Here is the context where the above code is used.
proc prompt { } { puts -nonewline stdout "MyShell > "; flush stdout }
proc evaluate { } \
{
    chan configure stdin -blocking false
    while { true } {
        # with "stdin -blocking false", "read stdin" acts like "gets stdin"
        set part [chan read stdin 100]
        append command $part
        if { $part eq "" } { break }
    }
    chan configure stdin -blocking true
    # Here I want to test if there are pending characters at stdin,
    # and while so, I want to wait for a newline character.
    # It should be like this:
    # while { *the required test goes here* } {
    #     append command [chan gets stdin]\n
    # }
    while { ! [info complete $command] } {
        append command [chan gets stdin]\n
    }
    catch { uplevel #0 $command } got
    if { $got ne "" } {
        puts stderr $got
        flush stderr
    }
    prompt
}
chan event stdin readable evaluate
prompt
while { true } { update; after 100 }
Running an interactive command interpreter with the event loop is quite possible, and is described in some detail on the Tcler's Wiki. However, if you're really just running Tcl commands then you should consider using the commandloop command from the TclX package, as that takes care of the details for you.
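For illustration, a minimal sketch of the TclX route (assuming the TclX package is installed; the -prompt1/-prompt2 option names are from TclX's commandloop documentation — verify against your installed version):

```tcl
# minimal sketch, assuming TclX is installed;
# commandloop reads input, uses info complete to detect partial
# commands, prompts, and evaluates finished commands for you
package require Tclx
commandloop -prompt1 {return "MyShell > "} -prompt2 {return "> "}
```

This replaces the hand-rolled prompt/evaluate/event-loop machinery above in a few lines.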
I am trying to populate a global variable with the exit code and use it in the catch block to avoid certain exceptions. I am not able to store the exit code value in the variable in my Groovy Jenkinsfile. Should I take a different approach here? Please suggest.
Scenario: I want to make sure that if the docker command fails, it does not fail the build; but if the python command inside the sh""" block inside docker.image.inside fails, as shown below, only then should it fail the build. That's why I started looking for the exit code of the python script, to fail the build only when the python command fails and not otherwise.
def scriptStatus = 0
stages {
stage('Call Update') {
steps {
script {
try {
def image = docker.image("${repo}:${tag}")
image.pull()
image.inside("--env-file ${PWD}/cc.env -v /work/user.ssh") {
scriptStatus = sh(script: """
python -u /config/env-python/abc.py
""", returnStdout: true)
}
} catch(Exception e) {
echo "scriptStatus = ${scriptStatus}" // not showing any result
if (scriptStatus == 0){
currentBuild.result = 'SUCCESS'
} else {
currentBuild.result = 'FAILURE'
}
}
}
}
}
}
I have tried a couple of options here:
returnStatus: true -> but it doesn't make the result available outside the try block, hence I can't check whether the returned value is 0 or non-zero.
then I also tried exitCode=$? -> this also doesn't get stored in the global variable, so it can't be used in the catch block for the if/else condition.
Use returnStatus: true instead of returnStdout: true
scriptStatus = sh(script: """
python -u /config/env-python/abc.py
""", returnStatus: true)
returnStatus: returns the exit code of the script (0 on success, non-zero on failure).
returnStdout: returns the standard output of the script as a string.
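A side effect worth noting: with returnStatus: true the sh step no longer throws on a non-zero exit code, so the status check can move out of the catch block altogether. A minimal sketch (script path taken from the question):

```groovy
// sketch: returnStatus makes sh return the exit code instead of throwing
def scriptStatus = sh(script: 'python -u /config/env-python/abc.py', returnStatus: true)
if (scriptStatus == 0) {
    currentBuild.result = 'SUCCESS'
} else {
    currentBuild.result = 'FAILURE'
}
```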
I am trying to populate an environment variable with the exit code and use it in the catch block to avoid certain exceptions.
I am not able to store the exit code value in the env variable in the Groovy pipeline. Should I take a different approach here?
environment {
cmdStatus = 0
}
try{
image.inside("--env-file ${PWD}/creds.env -v /config/.ssh:/config/.ssh") {
sh """
python -u /config/env-python/abc.py -u ${update}
cmdStatus = \$?  # error: exit code value is not getting stored in the env variable
"""
}
}
catch(Exception e) {
if (env.cmdStatus == 0) {
echo 'Inside Success'
} else {
echo 'Inside Failure'
}
}
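The assignment in the sh block above is a plain shell variable assignment: it lives and dies inside that one shell process and never reaches Groovy or the Jenkins environment. A quick sketch of what the shell itself sees (plain sh outside Jenkins; `false` is a stand-in for the failing python command, and `set +e` is needed because Jenkins runs sh with -e):

```shell
set +e                      # allow the command to fail without aborting the script
false                       # stand-in for the failing python command
cmdStatus=$?                # exit code of the previous command
echo "cmdStatus=$cmdStatus" # prints cmdStatus=1
```

The value is printed correctly here, but only this shell process ever sees it; to get it back into the pipeline you need returnStatus on the sh step itself.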
Without Jenkins Pipeline, the Naginator Plugin allows restarting a specific build on failure using regular expressions.
I like the retry option in Jenkins Pipeline, but I am not sure if I can catch an error from the build in the catch block and do a retry.
Is there a way to do so?
E.g.: I have a Jenkins build which runs make. Now make fails with the error "pg_config.h missing". I want to catch this error and retry the build a couple of times.
How can I do the above? Also, is it possible to catch multiple errors, similar to the regular expressions in Naginator, using pipelines?
retry(3) {
try {
sh "${cmd} > cmdOutput.txt 2>&1"
sh "cat cmdOutput.txt"
} catch(FlowInterruptedException interruptEx) {
throw interruptEx
} catch(err) {
def cmdOutput = readFile('cmdOutput.txt').trim()
if (cmdOutput.contains("pg_config.h missing")) {
error "Command failed with error : ${err}. Retrying ...."
} else {
echo "Command failed with error other than `pg_config.h missing`"
}
}
}
I use the 'waitUntil' step and a counter to retry a shell command. I capture the output of the shell command so that I can run regex checks against the output and then continue or exit the loop.
// example pipeline
pipeline {
agent {
label ""
}
stages {
// stage('clone repo') {
// steps {
// git url: 'https://github.com/your-account/project.git'
// }
// }
// stage ('install') {
// steps {
// sh 'npm install'
// }
// }
stage('build') {
steps {
script {
// wrap with timeout so the job aborts if no activity
timeout(activity: true, time: 5, unit: 'MINUTES') {
// loop until the inner function returns true
waitUntil {
// setup or increment "count" counter and max value
count = (binding.hasVariable('count')) ? count + 1 : 1
countMax = 3
println "try: $count"
// Note: you must include the "|| true" after your command,
// so that the exit code always returns as 0. The "sh" command is
// actually running '/bin/sh -xe'. The '-e' option forces the script
// to exit on non-zero exit code. Prevent this by forcing a 0 exit code
// by adding "|| true"
// execute command and capture stdout
// Uncomment one of these 3 lines to test different conditions.
output = sh returnStdout: true, script: 'echo "Finished: SUCCESS" || true'
// output = sh returnStdout: true, script: 'echo "BUILD FAILED" || true'
// output = sh returnStdout: true, script: 'echo "something else happened" || true'
// show the output in the log
println output
// run different regex tests against the output to check the state of your build
buildOK = output ==~ /(?s).*Finished: SUCCESS.*/
buildERR = output ==~ /(?s).*BUILD FAILED.*/
// then check your conditions
if (buildOK) {
return true // success, so exit loop
} else if (buildERR) {
if (count >= countMax) {
// count exceeds threshold, so throw an error (exits pipeline)
error "Retried $count times. Giving up..."
}
// wait a bit before retrying
sleep time: 5, unit: 'SECONDS'
return false // repeat loop
} else {
// throw an error (exits pipeline)
error 'Unknown error - aborting build'
}
}
}
}
}
}
}
// post {
// always {
// cleanWs notFailBuild: true
// }
// }
}
Here is the content of my Jenkinsfile :
node {
// prints only the first element 'a'
[ 'a', 'b', 'c' ].each {
echo it
}
}
When executing the job in Jenkins (with the Pipeline plugin), only the first item in the list is printed.
Can someone explain this strange behavior to me? Is it a bug, or is it just me not understanding the Groovy syntax?
Edit: the for (i in items) form works as expected:
node {
// prints 'a', 'b' and 'c'
for (i in [ 'a', 'b', 'c' ]) {
echo i
}
}
The accepted answer here states that it's a known bug, and uses a workaround that didn't work for me, so I'll offer an update with what I've found lately.
Despite the resolution of JENKINS-26481 (fairly recent, as of this writing) many people may be stuck with an older version of Jenkins where the fix is not available. For-loop iteration over a literal list might work sometimes but related issues like JENKINS-46749 and JENKINS-46747 seem to continue to bedevil many users. Also, depending on the exact context in your Jenkinsfile, possibly echo will work whereas sh fails, and things might fail silently or they might crash the build with serialization failures.
If you dislike surprises (skipped loops and silent failures) and if you want your Jenkinsfiles to be the most portable across multiple versions of Jenkins, the main idea seems to be that you should always use classic counters in your for-loops and ignore other groovy features.
This gist is the best reference I've seen and spells out many cases that you'd think should work the same but have surprisingly different behaviour. It's a good starting place to establish sanity checks and debug your setup, regardless of what kind of iteration you're looking at and regardless of whether you're trying to use @NonCPS, do your iteration directly inside node{}, or call a separate function.
Again, I take no credit for the work itself but I'm embedding the gist of iteration test cases below for posterity:
abcs = ['a', 'b', 'c']
node('master') {
stage('Test 1: loop of echo statements') {
echo_all(abcs)
}
stage('Test 2: loop of sh commands') {
loop_of_sh(abcs)
}
stage('Test 3: loop with preceding SH') {
loop_with_preceding_sh(abcs)
}
stage('Test 4: traditional for loop') {
traditional_int_for_loop(abcs)
}
}
@NonCPS // has to be @NonCPS or the build breaks on the call to .each
def echo_all(list) {
list.each { item ->
echo "Hello ${item}"
}
}
// outputs all items as expected
@NonCPS
def loop_of_sh(list) {
list.each { item ->
sh "echo Hello ${item}"
}
}
// outputs only the first item
@NonCPS
def loop_with_preceding_sh(list) {
sh "echo Going to echo a list"
list.each { item ->
sh "echo Hello ${item}"
}
}
// outputs only the "Going to echo a list" bit
//No NonCPS required
def traditional_int_for_loop(list) {
sh "echo Going to echo a list"
for (int i = 0; i < list.size(); i++) {
sh "echo Hello ${list[i]}"
}
}
// echoes everything as expected
Thanks to #batmat on #jenkins IRC channel for answering this question!
It's actually a known bug: JENKINS-26481.
A workaround for this issue is to expand all the commands into a flat text file as a Groovy script, then use the load step to load the file and execute it.
For example:
@NonCPS
def createScript(){
def cmd=""
for (i in [ 'a', 'b', 'c' ]) {
cmd = cmd + "echo '$i'\n"
}
writeFile file: 'steps.groovy', text: cmd
}
Then call the function like
createScript()
load 'steps.groovy'
Here is an example loop with curl, without @NonCPS:
#!/usr/bin/env groovy
node('master') {
stagesWithTry([
'https://google.com/',
'https://github.com',
'https://releases.hashicorp.com/',
'https://kubernetes-charts.storage.googleapis.com',
'https://gcsweb.istio.io/gcs/istio-release/releases'
])
stage ('AllInOneStage') {
stepsWithTry([
'https://google.com/',
'https://github.com',
'https://releases.hashicorp.com/',
'https://kubernetes-charts.storage.googleapis.com',
'https://gcsweb.istio.io/gcs/istio-release/releases'
])
}
}
//loop in one stage
def stepsWithTry(list){
for (int i = 0; i < list.size(); i++) {
try {
sh "curl --connect-timeout 15 -v -L ${list[i]}"
} catch (Exception e) {
echo "Stage failed, but we continue"
}
}
}
//loop in multiple stage
def stagesWithTry(list){
for (int i = 0; i < list.size(); i++) {
try {
stage(list[i]){
sh "curl --connect-timeout 15 -v -L ${list[i]}"
}
} catch (Exception e) {
echo "Stage failed, but we continue"
}
}
}
Within a Jenkinsfile pipeline script, how do you query the running job state to tell if it has been aborted?
Normally a FlowInterruptedException or AbortException (if a script was running) will be raised, but these can be caught and ignored. Also, scripts will not exit immediately if they have multiple statements.
I tried looking at currentBuild.result, but it doesn't seem to be set until the build has completed. Something in currentBuild.rawBuild, perhaps?
There is nothing that would automatically set the build status if the exception has been caught. If you want such exceptions to set a build status, but let the script continue, you can write for example
try {
somethingSlow()
} catch (InterruptedException x) {
currentBuild.result = 'ABORTED'
echo 'Ignoring abort attempt at this spot'
}
// proceed
You could implement a watchdog branch in a parallel step. It uses a global to keep track of the watchdog state, which could be dangerous; I don't know if accessing globals in 'parallel' is thread-safe. It even works if 'bat' ignores the termination and doesn't raise an exception at all.
Code:
runWithAbortCheck { abortState ->
// run all tests, print which failed
node ('windows') {
for (int i = 0; i < 5; i++) {
try {
bat "ping 127.0.0.1 -n ${10-i}"
} catch (e) {
echo "${i} FAIL"
currentBuild.result = "UNSTABLE"
// continue with remaining tests
}
abortCheck(abortState) // sometimes bat doesn't even raise an exception! so check here
}
}
}
def runWithAbortCheck(closure) {
def abortState = [complete:false, aborted:false]
parallel (
"_watchdog": {
try {
waitUntil { abortState.complete || abortState.aborted }
} catch (e) {
abortState.aborted = true
echo "caught: ${e}"
throw e
} finally {
abortState.complete = true
}
},
"work": {
try {
closure.call(abortState)
}
finally {
abortState.complete = true
}
},
"failFast": true
)
}
def _abortCheckInstant(abortState) {
if (abortState.aborted) {
echo "Job Aborted Detected"
throw new org.jenkinsci.plugins.workflow.steps.FlowInterruptedException(Result.ABORTED)
}
}
def abortCheck(abortState) {
_abortCheckInstant(abortState)
sleep time:500, unit:"MILLISECONDS"
_abortCheckInstant(abortState)
}