Force complete Jenkins job

I am running a Jenkins pipeline script.
From this pipeline we run another script that keeps producing output, so that process never ends.
Because of this my Jenkins job never completes, even though all the steps have actually finished, so I somehow want to mark the job complete after, say, 10 minutes.
Is there any way to complete a Jenkins job from the pipeline script after, say, 10 minutes?
Below is my pipeline script; runspbt.sh is the never-ending script.
pipeline {
    agent { label 'Executionmachine3089' }
    stages {
        stage('Run Script') {
            steps {
                bat "ssh rxx11pp@G0XXXX209 /home/rxx11pp/runspbt.sh"
            }
        }
    }
}

You can wait and then check for some success output (or perhaps a marker file in the workspace):
steps {
    // Wait 10 minutes for the remote script to do its work
    sh "sleep 600s"
    script {
        // testSuccessMethod is a placeholder for your own success check
        if (testSuccessMethod) {
            currentBuild.result = "SUCCESS"
        } else {
            currentBuild.result = "FAILURE"
        }
    }
}
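If the remote script is expected to drop a marker file into the workspace, a minimal sketch of that variant could look like the following (waitUntil and fileExists are standard Pipeline steps; the path markers/done.txt and the 10-minute limit are only assumptions for illustration):
steps {
    // Poll for up to 10 minutes for a marker file the script is assumed to write
    timeout(time: 10, unit: 'MINUTES') {
        waitUntil {
            // hypothetical marker path - adjust to whatever your script produces
            fileExists('markers/done.txt')
        }
    }
    script {
        currentBuild.result = 'SUCCESS'
    }
}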

You may want to use Linux's timeout command:
pipeline {
    agent { label 'Executionmachine3089' }
    stages {
        stage('Run Script') {
            steps {
                bat "ssh rxx11pp@G0XXXX209 timeout 90 /home/rxx11pp/runspbt.sh"
            }
        }
    }
}
This will start the script and signal it to terminate after 90 seconds.
Note that timeout is a Linux utility and not related to Jenkins itself.
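If you would rather enforce the limit from the Jenkins side instead, a rough sketch using the built-in timeout step could look like this (wrapped in catchError so the aborted step does not mark the build as failed; the 10-minute value just mirrors the question):
steps {
    catchError(buildResult: 'SUCCESS', stageResult: 'SUCCESS') {
        // Abort the never-ending ssh call after 10 minutes, but keep the build green
        timeout(time: 10, unit: 'MINUTES') {
            bat "ssh rxx11pp@G0XXXX209 /home/rxx11pp/runspbt.sh"
        }
    }
}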

Related

Trigger pipeline job from Jenkins pipeline script

I have two pipeline jobs, job A and job B. I need to trigger job B while job A is still running, because job A will not finish due to some API calls, so I need to start the next pipeline job, B.
How can we trigger another pipeline job from the Jenkinsfile?
All the parallel blocks a, b, and c need to run.
Below I have pasted the job A Jenkins script.
pipeline {
    agent any
    stages {
        stage('need to run parallelly') {
            steps {
                parallel(
                    a: {
                        dir('file path') {
                            bat """
                            npm install
                            """
                        }
                    },
                    b: {
                        dir('file path') {
                            bat """
                            npm start
                            """
                        }
                    },
                    c: {
                        build job: 'JOB_B'
                    }
                )
            }
        }
    }
}
You have an example here.
In your case, try:
pipeline {
    agent any
    stages {
        stage('need to run parallelly') {
            steps {
                script {
                    parallel(
                        a: {
                            dir('file path') {
                                bat """
                                npm install
                                """
                            }
                        },
                        b: {
                            dir('file path') {
                                bat """
                                npm start
                                """
                            }
                        },
                        "build": {
                            build job: 'JenkinsTest'
                        }
                    )
                }
            }
        }
    }
}
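If the goal is only to kick off job B without waiting for it (since job A never finishes), the build step also accepts a wait parameter; a hedged sketch of just that branch (JOB_B is the job name from the question):
"build": {
    // Trigger the downstream job asynchronously so this stage does not block on it
    build job: 'JOB_B', wait: false
}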

Jenkins using docker agent with environment declarative pipeline

I would like to install Maven and npm via a Docker agent using a Jenkins declarative pipeline, but when I use the script below, Jenkins throws the error shown below. It might be caused by using agent none, but how can I use a node with a Docker agent in a declarative pipeline?
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
I tried setting agent any, but this time I received the error "Still waiting to schedule task
Waiting for next available executor".
pipeline {
    agent none
    // environment {
    //     proxy = https://
    //     stable_revision = sh(script: 'curl -H "Authorization: Basic $base64encoded"
    // }
    stages {
        stage('Build') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
                echo "$apigeeUsername"
                echo "Stable Revision: ${env.stable_revision}"
            }
        }
        stage('Test') {
            agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }
            environment {
                HOME = '.'
            }
            steps {
                script {
                    try {
                        sh 'npm install'
                        sh 'node --version'
                        //sh 'npm test/unit/*.js'
                    } catch (e) {
                        throw e
                    }
                }
            }
        }
        // stage('Policy-Code Analysis') {
        //     steps {
        //         sh "npm install -g apigeelint"
        //         sh "apigelint -s wiservice_api_v1/apiproxy/ -f codeframe.js"
        //     }
        // }
        stage('Promotion') {
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
        stage('Deployment') {
            steps {
                sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
                //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
            }
        }
    }
}
Basically your error message tells you everything you need to know:
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
So what is the issue here? You use agent none for your pipeline, which means you do not specify an agent for all stages. An agent executes a specific stage; if a stage has no agent, it can't be executed, and that is your issue here.
The following two stages have no agent, which means there is no docker container, server, or anything else where they can be executed:
stage('Promotion') {
    steps {
        timeout(time: 2, unit: 'DAYS') {
            input 'Do you want to Approve?'
        }
    }
}
stage('Deployment') {
    steps {
        sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
        //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
    }
}
So you have to add agent { ... } to both stages separately, or use a global agent like the following and remove the agents from your stages:
pipeline {
    agent {
        docker { image 'maven:3-alpine' }
    } ...
For further information, see the guide to setting up master and agent machines, distributed Jenkins builds, or the official documentation.
I think you meant to use agent any instead of agent none, because each stage requires at least one agent (either declared at the top for the whole pipeline or per stage).
Also, I see some more issues.
Your Test stage specifies two images for the same agent:
agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }
although the stage executes only npm commands. I believe only one of the images will actually be downloaded.
To clarify a bit more on mkemmerz's answer: your Promotion stage is designed correctly. If you plan to have an input step in the pipeline, do not add an agent at the pipeline level, because input steps block the executor context. See https://jenkins.io/blog/2018/04/09/whats-in-declarative/
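A rough sketch combining both points, assuming the Test stage really only needs Node: keep agent none at the pipeline level, give the Test stage a single image, and leave the Promotion stage without an agent so the input step does not hold an executor (image tag and stage names taken from the question):
pipeline {
    agent none
    stages {
        stage('Test') {
            // Single image: this stage only runs npm/node commands
            agent { docker { image 'node:8.12.0' } }
            environment { HOME = '.' }
            steps {
                sh 'npm install'
                sh 'node --version'
            }
        }
        stage('Promotion') {
            // No agent here, so waiting for approval does not block an executor
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
    }
}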

How to run Jenkins scripted pipeline job on free slave?

I want to run a Jenkins scripted pipeline job on a specified slave at a moment when no other job is running on it.
After my job has started, no other jobs should run on this slave; they will have to wait until my job has finished.
All the tutorials I found let me run the job on the node when it is free, but did not protect me from other jobs being launched on that node.
Could you tell me how I can do this?
Because Pipeline has two syntaxes, there are two ways to achieve that. For scripted pipeline, see the second one.
Declarative
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'slave-node' }
            steps {
                echo 'Building..'
                sh '''
                '''
            }
        }
    }
    post {
        success {
            echo 'This will run only if successful'
        }
    }
}
Scripted
node('your-node') {
    try {
        stage 'Build'
        node('build-run-on-this-node') {
            sh ""
        }
    } catch (Exception e) {
        throw e
    }
}

Jenkins declarative pipeline with multiple slaves

I have a pipeline with multiple stages, some of them running in parallel. Up until now I had a single agent block indicating where the job should run.
pipeline {
    triggers { pollSCM '0 0 * * 0' }
    agent {
        dockerfile {
            label 'jenkins-slave'
            filename 'Dockerfile'
        }
    }
    stages {
        stage('1') {
            steps { sh "blah" }
        } // stage
    } // stages
} // pipeline
What I need to do now is run a new stage on a different slave, NOT in Docker.
I tried adding an agent statement for that stage, but it seems to try to run that stage within a Docker container on the second slave.
stage('test new slave') {
    agent { node { label 'e2e-aws' } }
    steps {
        sh "ifconfig"
    } // steps
} // stage
I get the following error message:
13:14:23 unknown flag: --workdir
13:14:23 See 'docker exec --help'.
I tried setting the agent to none for the pipeline and using an agent for every stage, and ran into two issues:
1. My post actions show an error.
2. The stages that contain parallel stages also had an error.
I can't find any examples that are similar to what I am doing.
You can use the node block to select a node to run a particular stage.
pipeline {
    agent any
    stages {
        stage('Init') {
            steps {
                node('master') {
                    echo "Run inside a MASTER"
                }
            }
        }
    }
}
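Applied to the stage from the question, a hedged sketch might allocate the e2e-aws node inside the steps block, so the command runs directly on that slave instead of inside the pipeline-level Docker agent (label taken from the question):
stage('test new slave') {
    steps {
        // Allocates a fresh workspace on the e2e-aws node, outside the dockerfile agent
        node('e2e-aws') {
            sh "ifconfig"
        }
    }
}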

Mark a stage in Jenkins Pipeline as eg "UNSTABLE" but proceed with future stages?

I'm going to use the Jenkins Pipeline plugin to test several binaries A, B, C on several nodes 1, 2, 3.
At the end of my test I would like to have the result of every single combination, so my pipeline must not abort when a single stage fails; it should proceed.
e.g.: A1 green, A2 green, A3 red, B1 green, B2 red, ..., C3 green
But when the first binary returns a non-zero value ("binary not working on the system"), its stage is marked as FAILURE and all following stages are skipped.
Is there a possibility in Jenkins Pipeline to mark a stage as "UNSTABLE" but proceed with running the other tests?
According to Continue Jenkins job after failed stage while marking stage as failed, I can't mark this step as failed. The solution given there, running the tasks in parallel, does not work for my setup. So is it possible to safely mark it as something else? Is it possible to manipulate the result of a stage?
The question How to continue past a failing stage in Jenkins declarative pipeline syntax intends to use a scripted pipeline. I would like to avoid that if it can be done another way.
pipeline {
    agent { label 'master' }
    stages {
        stage('A1') {
            agent { label 'Node1' }
            steps {
                sh 'binA'
            }
        }
        stage('A2') {
            agent { label 'Node1' }
            steps {
                sh 'binB' // If this bin fails, all following stages are skipped
            }
        }
        // ...
        stage('C3') {
            agent { label 'Node3' }
            steps {
                sh 'binC'
            }
        }
    }
}
Declarative Pipeline: Although using currentBuild.result = 'UNSTABLE' works in declarative pipelines too, Blue Ocean displays all stages as unstable irrespective of which stage failed.
To mark only specific stages as unstable, use the unstable(message: String) step (described here) within your stage and install/update the following plugins:
Pipeline: Basic Steps to 2.16 or newer
Pipeline: API Plugin to 2.34 or newer
Pipeline: Groovy to 2.70 or newer
Pipeline Graph Analysis to 1.10 or newer
Sample pipeline stage:
stage('Sign Code') {
    steps {
        script {
            try {
                pwd()
                sh "<YOUR SCRIPT HERE>"
            }
            catch (err) {
                unstable(message: "${STAGE_NAME} is unstable")
            }
        }
    }
}
Note: This also marks the overall build status as unstable.
There is now a more elegant solution that lets you set not only the stage result but also the build result. Using catchError, you can set any combination of stage and build result:
pipeline {
    agent any
    stages {
        stage('1') {
            steps {
                sh 'exit 0'
            }
        }
        stage('2') {
            steps {
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    sh "exit 1"
                }
            }
        }
        stage('3') {
            steps {
                sh 'exit 0'
            }
        }
    }
}
In the example above, all stages will execute, the pipeline will be successful, but stage 2 will show as failed.
As mentioned above, you can freely choose the buildResult and stageResult. You can even fail the build and continue the execution of the pipeline.
Just make sure your Jenkins is up to date, since this is a fairly new feature. (Pipeline: Basic Steps needs to be 2.18 or newer)
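For the original question, a hedged variant that marks both the failing stage and the overall build as UNSTABLE while still letting the remaining stages run:
catchError(buildResult: 'UNSTABLE', stageResult: 'UNSTABLE') {
    sh 'binA' // the binary under test from the question
}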
For scripted pipeline, you can use try .. catch blocks inside the stages and then set currentBuild.result = 'UNSTABLE' in the exception handler.
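A minimal sketch of that scripted approach, using the node and binary names from the question:
node('Node1') {
    stage('A1') {
        try {
            sh 'binA'
        } catch (err) {
            // Mark the build unstable but keep executing the remaining stages
            currentBuild.result = 'UNSTABLE'
        }
    }
    stage('A2') {
        sh 'binB'
    }
}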
