I am trying to place a timeout in the code below and skip the rest of the stages in Jenkins if the status is still "IN_PROGRESS" after 3 minutes.
script {
    def date = new Date()
    currtime = date.getTime()
    future_time = date.getTime() + 3 * 60000
    while (currtime < future_time) {
        date = new Date()
        currtime = date.getTime()
        Status = "IN_PROGRESS"
        if (Status == 'IN_PROGRESS') {
            echo 'Status is IN_PROGRESS'
        } else if (Status == 'SUCCEEDED') {
            break
        } else if (Status == 'FAILED') {
            echo "Status is FAILED"
            exit(1)
        } else {
            exit(1)
        }
    }
}
You can just use the timeout step inside your pipeline without any custom time calculation. It automatically works while the stage is in the "IN_PROGRESS" status. When the timeout is reached, all subsequent stages will be skipped and the job status will be "ABORTED".
pipeline {
    agent {
        label 'agent_name'
    }
    stages {
        stage('Stage_name') {
            steps {
                timeout(time: 3, unit: 'MINUTES') {
                    script {
                        // Your logic here ...
                    }
                }
            }
        }
        stage('...') { ... }
        stage('...') { ... }
    }
}
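If you still want to keep a polling loop inside that block (for example because the status comes from an external service), a minimal sketch could look like this; getStatus() is a hypothetical placeholder for however you obtain the status, and error() replaces the exit(1) calls, which are not a valid pipeline step:
timeout(time: 3, unit: 'MINUTES') {
    script {
        def status = getStatus() // hypothetical helper returning IN_PROGRESS / SUCCEEDED / FAILED
        while (status == 'IN_PROGRESS') {
            echo 'Status is IN_PROGRESS'
            sleep(time: 10, unit: 'SECONDS') // poll every 10 seconds
            status = getStatus()
        }
        if (status != 'SUCCEEDED') {
            error("Status is ${status}") // fails the build instead of exit(1)
        }
    }
}
If the 3 minutes run out while the loop is still polling, the timeout step aborts the block and the remaining stages are skipped, exactly as described above.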
Also, I guess your question is related to this one; you can find more examples there.
Related
1. The following code can be replicated locally.
2. How do I reuse the redundant code above? The code here may be a bit special: the code that needs to be reused includes stages. This is because an upstream build issues testing tasks to a downstream job (the code here), which is partially serialized in a producer-consumer model to control the degree of parallelism.
def fill_list() {
    def test_list = [] // or [].asSynchronized()
    for (int i = 0; i < 2; i++) {
        test_list.add(i.toString())
    }
    return test_list
}

pipeline {
    agent any
    stages {
        stage('Assign Tests') {
            steps {
                script {
                    test_list = fill_list()
                }
            }
        }
        stage('CTest Stage') {
            parallel {
                stage('stream_1') {
                    agent any
                    stages {
                        stage('installation') {
                            steps {
                                script {
                                    while (test_list.size() > 0) {
                                        def current_task = 0
                                        try {
                                            current_task = test_list.pop()
                                        } catch (err) {
                                            println('current stream ends')
                                            break
                                        }
                                        stage("stream_1_${current_task}") {
                                            script {
                                                sleep(0.5)
                                                println("ctest -R ${current_task} -N") // do test
                                                println("node executor num: ${env.EXECUTOR_NUMBER}")
                                            }
                                        }
                                    }
                                }
                            }
                        }
                        stage('clean') {
                            steps {
                                script {
                                    println('stream_1_workspace clean')
                                }
                            }
                        }
                    }
                }
                stage('stream_2') {
                    agent any
                    stages {
                        stage('installation') {
                            steps {
                                script {
                                    while (test_list.size() > 0) {
                                        def current_task = 0
                                        try {
                                            current_task = test_list.pop()
                                        } catch (err) {
                                            println('current stream ends')
                                            break
                                        }
                                        stage("stream_2_${current_task}") {
                                            script {
                                                sleep(0.5)
                                                println("ctest -R ${current_task} -N") // do test
                                                println("node executor num: ${env.EXECUTOR_NUMBER}")
                                            }
                                        }
                                    }
                                }
                            }
                        }
                        stage('clean') {
                            steps {
                                script {
                                    println('stream_2_workspace clean')
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
You could write a function that builds a map containing the reusable code and executes it in parallel. See also this answer: https://stackoverflow.com/a/53237378/10721630
def parallelExecution() {
    streams = [:]
    for (int count = 1; count <= 2; count++) {
        def temp = count
        streams["stream_${temp}"] = {
            node {
                stage('installation') {
                    while (test_list.size() > 0) {
                        def current_task = 0
                        try {
                            current_task = test_list.pop()
                        } catch (err) {
                            println('current stream ends')
                            break
                        }
                        stage("stream_${temp}_${current_task}") {
                            println("ctest -R ${current_task} -N") // do test
                            println("node executor num: ${env.EXECUTOR_NUMBER}")
                        }
                    }
                }
                stage('clean') {
                    println("stream_${temp}_workspace clean")
                }
            }
        }
    }
    parallel streams
}
In the pipeline script, call the new function:
...
stage('CTest Stage') {
    steps {
        script {
            parallelExecution()
        }
    }
}
...
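One detail worth keeping from the original post: both parallel branches pop from the same test_list, so it is safer to build it as a synchronized list, as the "// or [].asSynchronized()" comment in fill_list() already hints. A minimal sketch of that variant:
def fill_list() {
    // asSynchronized() wraps the list so concurrent pop() calls from the
    // parallel branches are serialized; the try/catch around pop() still
    // guards against popping an already empty list.
    def test_list = [].asSynchronized()
    for (int i = 0; i < 2; i++) {
        test_list.add(i.toString())
    }
    return test_list
}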
I'm trying to solve an issue where the connection between the master and the slave breaks, and I want the test to continue to the next bat command automatically.
The master and the slave are Windows machines.
I do not want the pipeline to get stuck at the bat command if an exception occurs.
The error log:
Cannot contact xxxx: hudson.remoting.ChannelClosedException: Channel
"hudson.remoting.Channel#280d9c94:JNLP4-connect connection from
some-ip:51478": Remote call on JNLP4-connect connection from
some-ip:51478 failed. The channel is closing down or has closed down
My pipeline code is:
pipeline {
    agent { node 'master' }
    stages {
        stage("Run my Code") {
            steps {
                script {
                    def suite_script = env.SCRIPT.tokenize('\n');
                    int intNum1 = 0;
                    int counter = 0;
                    suite_script.each { item ->
                        try {
                            def tmp_suite = item.trim().tokenize(',');
                            intNum1 += tmp_suite[2] as int
                        } catch (err) {
                            print err
                        }
                    }
                    suite_script.each { item ->
                        println item
                        // get 4 parameters:
                        // suite, script, amount, yes/no
                        def tmp_suite = item.trim().tokenize(',');
                        if (tmp_suite[3] == 'yes') {
                            int intNum = tmp_suite[2] as int
                            for (i = 0; i < intNum; i++) {
                                try {
                                    node("${SOME_MACHINE}") {
                                        counter++;
                                        bat "some code with parameters --value1 ${tmp_suite[0]} --value2 ${tmp_suite[1]}"
                                    }
                                } catch (err) {
                                    print err
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
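One way to keep the loop from hanging on a dead agent connection (a sketch, not from the original post) is to wrap the node block in a timeout so the surrounding try/catch receives an exception instead of waiting forever; the 30-minute value is an assumption and should match the expected duration of one bat run:
try {
    // Assumed timeout value; pick something longer than a normal bat run.
    timeout(time: 30, unit: 'MINUTES') {
        node("${SOME_MACHINE}") {
            counter++;
            bat "some code with parameters --value1 ${tmp_suite[0]} --value2 ${tmp_suite[1]}"
        }
    }
} catch (err) {
    // A ChannelClosedException or the timeout abort lands here,
    // so the for loop simply continues with the next iteration.
    print err
}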
I usually lock resources in my declarative pipeline with something like:
lock(resource: "MY_RESOURCE") {
    // do something
}
but now I have several different resources I could use. Is there a way to check if a resource is locked?
I would like to do something like:
myResources = ["RES1", "RES2", "RES3"]
hasResource = false
for (resource in myResources) {
    if (hasResource) {
        break
    }
    if (!isLocked(resource)) {
        hasResource = true
        lock(resource) {
            // do something
        }
    }
}
(Sorry if the syntax is wrong, I don't program in Groovy very often.)
According to the sources of the lock plugin, this should work:
import org.jenkins.plugins.lockableresources.LockableResourcesManager as LRM

def myResources = ["RES1", "RES2", "RES3"]
def notLocked = myResources.find { rName ->
    LRM.get().forName(rName).with { r -> !r.isLocked() && !r.isQueued() }
}
if (notLocked) {
    lock(notLocked) {
        // do something
    }
}
To check whether a particular resource is locked or not in Jenkins:
def manager = org.jenkins.plugins.lockableresources.LockableResourcesManager
def myResources = manager.get().fromName("test2")
// def myLabels = manager.get().allLabels // if you want to filter based on labels
def checkFree = myResources.find { r ->
    !r.isLocked() && !r.isQueued()
}
if (checkFree) {
    def myResource = checkFree.toString()
    println myResource + " is not locked"
} else {
    println "Specified resource is locked or queued"
}
Maybe not an optimal solution, but we managed to achieve this with the following approach:
waitUntil {
    lock(resource: 'RES1', skipIfLocked: true) {
        // do something with RES1
        return true // exit the waiting loop
    }
    lock(resource: 'RES2', skipIfLocked: true) {
        // do something with RES2
        return true // exit the waiting loop
    }
    lock(resource: 'RES3', skipIfLocked: true) {
        // do something with RES3
        return true // exit the waiting loop
    }
}
We did it this way due to the following error we received when we tried to use the accepted answer:
Scripts not permitted to use staticMethod org.jenkins.plugins.lockableresources.LockableResourcesManager get
I have a bunch of nodes serving the labels rhel6 and rhel7.
How do I execute myFunc() on any 2 rhel6 nodes and any 3 rhel7 nodes, in parallel?
def slaveList = ['rhel6', 'rhel6', 'rhel7', 'rhel7', 'rhel7']

def stageFunc(String slaveLabel) {
    return {
        // Run this stage on any available node serving slaveLabel
        agent { label "${slaveLabel}" } // Error shown here.
        stage {
            myFunc()
        }
    }
}

pipeline {
    agent any
    stages {
        stage('Start') {
            steps {
                script {
                    def stageMap = [:]
                    def i = 0
                    slaveList.each { s ->
                        stageMap[i] = stageFunc(s)
                        i++
                    }
                    parallel stageMap
                }
            }
        }
    }
}
Error shown:
java.lang.NoSuchMethodError: No such DSL method 'agent' found among steps [archive, ...
I haven't tested this yet, but it should work.
def slaveList = ['rhel6', 'rhel6', 'rhel7', 'rhel7', 'rhel7']

def stageFunc(stage_name, slaveLabel) {
    return {
        // Run this stage on any available node serving slaveLabel
        stage(stage_name) {
            node(slaveLabel) {
                myFunc()
            }
        }
    }
}

pipeline {
    agent any
    stages {
        stage('Start') {
            steps {
                script {
                    def stageMap = [:]
                    def i = 0
                    slaveList.each { s ->
                        stageMap[i] = stageFunc("Stage-${i}", s)
                        i++
                    }
                    parallel stageMap
                }
            }
        }
    }
}
I want to execute some stages in a loop. I have this Jenkinsfile:
pipeline {
    agent any
    tools {}
    parameters {}
    environment {}
    stages {
        stage('Execute') {
            steps {
                script {
                    for (int i = 0; i < hostnameMap.size; i++) {
                        hostname = hostnameMap[i]
                        echo 'Executing ' + hostname
                        stage('Backup previous build ' + hostname) {
                            backup(hostname, env.appHome)
                        }
                        stage('Deploy ' + hostname) {
                            when {
                                expression { env.BRANCH_NAME ==~ /(dev|master)/ }
                            }
                            steps {
                                script {
                                    deploy(hostname, env.appHome, env.appName)
                                }
                            }
                        }
                        stage('Restart ' + hostname) {
                            when {
                                expression { env.BRANCH_NAME ==~ /(dev|master)/ }
                            }
                            steps {
                                script {
                                    restart(hostname, env.appName, env.port)
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
But I got this error:
java.lang.NoSuchMethodError: No such DSL method 'when' found among steps
Separately, each of these stages works fine. Why do I get this error?
stage('Execute') {
    steps {
        script {
            for (int i = 0; i < hostnameMap.size; i++) {
                hostname = hostnameMap[i]
                echo 'Executing ' + hostname
                stage('Backup previous build ' + hostname) {
                    backup(hostname, env.appHome)
                }
                stage('Deploy ' + hostname) {
                    if (env.BRANCH_NAME ==~ /(dev|master)/) {
                        deploy(hostname, env.appHome, env.appName)
                    }
                }
                stage('Restart ' + hostname) {
                    if (env.BRANCH_NAME ==~ /(dev|master)/) {
                        restart(hostname, env.appName, env.port)
                    }
                }
            }
        }
    }
}
when is a directive used in the declarative pipeline definition; it won't work inside a script {} block. Use if instead, as shown above.