I am working with Jenkins pipelines and I have this code:
stages {
stage('Stage1') {
options {
timeout(time: 1, unit: "MINUTES")
}
steps {
script {
sh '''
#!/bin/bash
set -euxo pipefail
ssh user@server.com "
ssh -p 50 user@localhost '\
docker run --rm --name name\
-e user=...\
-e passwd=...\
-v /location:/location2\
-w location2\
server2.com:6000/my-x-y:1.1\
python script.py\
'\
"
'''
}
}
}
}
When the connection inside the script cannot be made, the job times out, but it still carries on and is still marked as successful.
I get this message:
17:10:53 Cancelling nested steps due to timeout
17:10:53 Sending interrupt signal to process
After that the job moves on to the next stage and the status is success.
So even though I am getting a timeout, the job is being marked as a success.
I'd like to send notifications when this stage is not properly executed (I already have a notification.sh script for it).
Is there any way I can get this job to be aborted when it hits the timeout?
Or any other way to work around this in order to warn users that this stage was not properly executed?
Try something like below.
try {
timeout (time: 10, unit: 'SECONDS') {
sh '''
#!/bin/bash
set -euxo pipefail
ssh user@server.com "
ssh -p 50 user@localhost '\
docker run --rm --name name\
-e user=...\
-e passwd=...\
-v /location:/location2\
-w location2\
server2.com:6000/my-x-y:1.1\
python script.py\
'\
"
'''
}
}
catch (error) {
echo "Error: $error"
def cause = error.getCauses()[0].getClass().toString()
if(cause.contains("ExceededTimeout")) { // If you want to handle timeout as a special case
echo "This was a Timeout"
// Do whatever you want
}
}
Full Sample Pipeline
pipeline {
agent any
stages {
stage('TimerTest') {
steps {
script {
try {
timeout (time: 10, unit: 'SECONDS') {
echo "In timer"
sleep 15
}
}
catch (error) {
echo "XXXX: $error"
def cause = error.getCauses()[0].getClass().toString()
println "$cause"
if(cause.contains("ExceededTimeout")) {
echo "This was a Timeout"
// Do whatever you want
}
}
}
}
}
}
}
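To tie this back to the original goal (abort the build on timeout and warn users), the catch block can call the existing notification.sh and then re-throw the interruption. A minimal sketch, assuming notification.sh lives in the workspace and accepts a message argument (both assumptions), with a placeholder script standing in for the nested ssh/docker command:
stage('Stage1') {
    steps {
        script {
            try {
                timeout(time: 1, unit: 'MINUTES') {
                    // placeholder for the nested ssh/docker call from the question
                    sh './long-running-ssh-command.sh'
                }
            } catch (err) {
                // warn users via the existing script, then re-throw so the
                // build actually stops instead of continuing as SUCCESS
                sh './notification.sh "Stage1 timed out or failed"'
                throw err
            }
        }
    }
}
Re-throwing the timeout interruption typically leaves the build marked ABORTED rather than SUCCESS; if you would rather see a hard FAILURE, replace the throw with error("Stage1 timed out").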
Related
Need help setting up a pipeline on Jenkins.
It is necessary to run tests and collect logs in parallel. I got that working, but now there is another problem: the log collection never finishes. Is there a method to stop one task after another task has completed?
stage('Smoke Run') {
steps {
parallel(
first: {
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '$PYTHON -m pytest --testit android_tv/tests/smoke_run/ --clean-alluredir --alluredir=/Users/jenkins/allure-report/android-tv'
}
},
second: {
sh "$ADB logcat -c"
sh "$ADB logcat -> ~/jenkins/workspace/Android_TV_Smoke_Run/android_tv/tests/smoke_run/logs_tv/log.log"
}
)
}
}
Found a solution:
steps {
script {
stop = false
parallel(
first: {
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
set +e
$PYTHON -m pytest --testit android_tv/tests/smoke_run/ --clean-alluredir --alluredir=/Users/jenkins/allure-report/android-tv
set -e
'''.stripIndent()
stop = true
}
},
second: {
while (!stop){
sleep 10
}
sh '''pgrep adb logcat | xargs kill'''
sh '''echo "Finish writing logs"'''
}
)
}
}
}
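A possible refinement of that solution (just a sketch, not tested against the job above): set the flag in a finally block so the log branch is released even if the test branch is interrupted, and let waitUntil do the polling. The run-smoke-tests.sh call is a hypothetical stand-in for the pytest command.
script {
    stop = false
    parallel(
        first: {
            try {
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    sh './run-smoke-tests.sh' // hypothetical stand-in for the pytest call
                }
            } finally {
                stop = true // always release the waiting log branch
            }
        },
        second: {
            // poll the shared flag instead of a hand-rolled while/sleep loop
            waitUntil { return stop }
            sh 'pgrep -f "adb logcat" | xargs kill || true'
            echo 'Finish writing logs'
        }
    )
}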
I have a Spring Boot application on a CentOS server and use a shell script to restart it.
Jenkins is run with: docker run -dp 8080:8080 --name jenkins jenkinsci/blueocean
start-service.sh
#!/bin/bash
sudo systemctl restart sb
In my Jenkinsfile I upload the jar file to the server and execute start-service.sh, but Jenkins doesn't seem to know whether my Java application restarted successfully or failed.
Jenkinsfile
pipeline {
agent none
stages {
stage('Build') {
agent {
docker {
image 'maven:3-alpine'
args '-v /root/.m2:/root/.m2'
}
}
steps {
sh 'mvn -B -DskipTests clean package'
sh 'mvn help:evaluate -Dexpression=project.name | grep "^[^\\[]" > project-name'
sh 'mvn help:evaluate -Dexpression=project.version | grep "^[^\\[]" > project-ver'
}
}
stage('Deploy') {
agent any
environment {
HOST = "${HEHU_HOST}"
USER = "yunwei"
DIR = "/www/java/sb-demo"
VERSION_FILE = "${DIR}/version"
CMD_SERVICE = "${DIR}/start-service.sh"
}
steps {
sshagent (credentials: ['hehu']) {
sh '''
name=$(cat project-name)
ver=$(cat project-ver)
jarFile=${name}-${ver}.jar
scp target/${jarFile} ${USER}@${HOST}:${DIR}/${jarFile}
scp project-ver ${USER}@${HOST}:${VERSION_FILE}
ssh -o StrictHostKeyChecking=no -l ${USER} ${HOST} -a ${CMD_SERVICE}
'''
}
}
}
}
}
I deliberately made the Java application go wrong, and systemctl restart does fail, but the Jenkins stage is still a success.
@SpringBootApplication
@RestController
public class Application {
public static void main(String[] args) {
throw new RuntimeException("Test error");
// SpringApplication.run(Application.class, args);
}
@GetMapping("/test")
String test() {
return "furukawa nagisa\n";
}
}
Trying Daniel Taub's solution, I get a syntax error.
pipeline {
agent none
stages {
stage('Hello') {
agent any
steps {
sshagent (credentials: ['hehu']) {
SH_SUCCESS = sh(
script: '''
ssh -o StrictHostKeyChecking=no -l yunwei ${HEHU_HOST} -a /www/java/sb-demo/start-service.sh
''',
returnStatus: true
) == 0
echo '${SH_SUCCESS}'
}
}
}
}
}
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 9: Expected a step @ line 9, column 21.
SH_SUCCESS = sh(
You can get the exit status code returned by sh and fail the build manually if it is an error status.
For checking your exit code, Jenkins supports it this way:
returnStatus (optional)
Normally, a script which exits with a nonzero status code will cause the step to fail with an exception. If this option is checked, the return value of the step will instead be the status code. You may then compare it to zero, for example.
script {
SH_SUCCESS = sh (
script: "your command",
returnStatus: true
) == 0
}
To manually fail the build you have a couple of options:
error('Fail my build!')
Or alternatively
currentBuild.result = 'FAILURE'
return
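Putting this together for the Deploy stage above: in Declarative Pipeline the assignment has to live inside a script {} block (which is what the "Expected a step" error is complaining about). A rough sketch under that assumption, with the ssh invocation simplified; since ssh propagates the exit code of the remote command, this only catches failures that start-service.sh itself reports with a non-zero exit:
stage('Deploy') {
    agent any
    steps {
        sshagent(credentials: ['hehu']) {
            script {
                // with returnStatus the sh step returns the exit code instead of throwing
                def rc = sh(
                    script: 'ssh -o StrictHostKeyChecking=no yunwei@${HEHU_HOST} /www/java/sb-demo/start-service.sh',
                    returnStatus: true
                )
                if (rc != 0) {
                    error "start-service.sh failed with exit code ${rc}"
                }
            }
        }
    }
}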
Without Jenkins Pipeline, the Naginator Plugin allows restarting a specific build on failure using regular expressions.
I like the retry option in Jenkins pipeline but I am not sure if I can catch an error from the build in the catch block and do a retry.
Is there a way to do so?
E.g.: I have a Jenkins build which runs make. Now make fails with an error: "pg_config.h missing". I want to catch this error and retry the build a couple of times.
How can I do the above? Also, is it possible to catch multiple errors, similar to the regular expressions in Naginator, using pipelines?
import org.jenkinsci.plugins.workflow.steps.FlowInterruptedException

retry(3) {
try {
// capture both stdout and stderr so the output can be inspected on failure
sh "${cmd} > cmdOutput.txt 2>&1"
sh "cat cmdOutput.txt"
} catch(FlowInterruptedException interruptEx) {
throw interruptEx
} catch(err) {
def cmdOutput = readFile('cmdOutput.txt').trim()
if (cmdOutput.contains("pg_config.h missing")) {
error "Command failed with error : ${err}. Retrying ...."
} else {
echo "Command failed with error other than `pg_config.h missing`"
}
}
}
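For the second part of the question (matching several error patterns, Naginator-style), the same idea extends to a list of regular expressions checked against the captured output. A sketch, assuming the same cmdOutput.txt capture as above; the second pattern is just a made-up example:
retry(3) {
    try {
        sh "${cmd} > cmdOutput.txt 2>&1"
        sh "cat cmdOutput.txt"
    } catch (err) {
        def cmdOutput = readFile('cmdOutput.txt').trim()
        // regex patterns that should trigger a retry
        def retryablePatterns = [/pg_config\.h missing/, /Connection timed out/]
        def shouldRetry = false
        for (p in retryablePatterns) {
            if (cmdOutput =~ p) {
                shouldRetry = true
            }
        }
        if (shouldRetry) {
            // error() throws again, so retry() re-runs the whole block
            error "Command failed with a retryable error: ${err}. Retrying ..."
        } else {
            echo 'Command failed with a non-retryable error'
        }
    }
}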
I use the 'waitUntil' step and a counter to retry a shell command. I capture the output of the shell command so that I can run regex checks against the output and then continue or exit the loop.
// example pipeline
pipeline {
agent {
label ""
}
stages {
// stage('clone repo') {
// steps {
// git url: 'https://github.com/your-account/project.git'
// }
// }
// stage ('install') {
// steps {
// sh 'npm install'
// }
// }
stage('build') {
steps {
script {
// wrap with timeout so the job aborts if no activity
timeout(activity: true, time: 5, unit: 'MINUTES') {
// loop until the inner function returns true
waitUntil {
// setup or increment "count" counter and max value
count = (binding.hasVariable('count')) ? count + 1 : 1
countMax = 3
println "try: $count"
// Note: you must include the "|| true" after your command,
// so that the exit code always returns as 0. The "sh" command is
// actually running '/bin/sh -xe'. The '-e' option forces the script
// to exit on non-zero exit code. Prevent this by forcing a 0 exit code
// by adding "|| true"
// execute command and capture stdout
// Uncomment one of these 3 lines to test different conditions.
output = sh returnStdout: true, script: 'echo "Finished: SUCCESS" || true'
// output = sh returnStdout: true, script: 'echo "BUILD FAILED" || true'
// output = sh returnStdout: true, script: 'echo "something else happened" || true'
// show the output in the log
println output
// run different regex tests against the output to check the state of your build
buildOK = output ==~ /(?s).*Finished: SUCCESS.*/
buildERR = output ==~ /(?s).*BUILD FAILED.*/
// then check your conditions
if (buildOK) {
return true // success, so exit loop
} else if (buildERR) {
if (count >= countMax) {
// count exceeds threshold, so throw an error (exits pipeline)
error "Retried $count times. Giving up..."
}
// wait a bit before retrying
sleep time: 5, unit: 'SECONDS'
return false // repeat loop
} else {
// throw an error (exits pipeline)
error 'Unknown error - aborting build'
}
}
}
}
}
}
}
// post {
// always {
// cleanWs notFailBuild: true
// }
// }
}
I configured Jenkins to work with the SonarQube scanner. The scans are working fine. The Jenkins pipeline is working and I don't see any issue in the Jenkins log.
SonarQube Scanner 3.0.3.778
Jenkins: 2.70
SonarQube Scanner for Jenkins plugin: 2.6.1
I use this code:
stage('SonarQube analysis') {
sh 'sed -ie "s|_PROJECT_|${PROJECT_CODE}|g" $WORKSPACE/_pipeline/sonar-project.properties'
// requires SonarQube Scanner 3.0+
def scannerHome = '/opt/sonar/bin/sonar-scanner';
withSonarQubeEnv('mscodeanalysis') {
sh "${scannerHome}/bin/sonar-scanner -Dproject.settings=$WORKSPACE/_pipeline/sonar-project.properties"
}
}
}
}
}
}
// No need to occupy a node
stage("Quality Gate"){
timeout(time: 15, unit: 'MINUTES') { // Just in case something goes wrong, pipeline will be killed after a timeout
def qg = waitForQualityGate() // Reuse taskId previously collected by withSonarQubeEnv
if (qg.status != 'OK') {
error "Pipeline aborted due to quality gate failure: ${qg.status}"
}
}
}
My problem comes from the Quality Gate. It never POSTs the JSON payload to Jenkins; I don't see a JSON entry inside the Jenkins log. But I know the connection between Jenkins and the SonarQube server is working, because I was able to send a POST using curl from the SonarQube VM.
Here the jenkins job output:
Timeout set to expire in 15 min
[Pipeline] {
[Pipeline] waitForQualityGate
Checking status of SonarQube task 'AV3irVJXpvBxXXNJYZkd' on server 'mscodeanalysis'
SonarQube task 'AV3irVJXpvBxXXNJYZkd' status is 'PENDING'
Cancelling nested steps due to timeout
Here is my payload that never reaches the Jenkins pipeline:
url: http://sonar-server:9000/api/ce/task?id=AV3irVJXpvBxXXNJYZkd
{"task":{"organization":"default-organization","id":"AV3irVJXpvBxXXNJYZkd","type":"REPORT","componentId":"AV3hrJeCfL_nrF2072FH","componentKey":"POOL-003","componentName":"POOL-003","componentQualifier":"TRK","analysisId":"AV3irVkZszLEB6PsCK9X","status":"SUCCESS","submittedAt":"2017-08-14T21:36:35+0000","submitterLogin":"jenkins","startedAt":"2017-08-14T21:36:37+0000","executedAt":"2017-08-14T21:36:38+0000","executionTimeMs":650,"logs":false,"hasScannerContext":true}}
I can't insert an image, but the quality gate is Passed and the analysis task is a success.
Let me know if I need to include more information.
Thank you
The issue could be that Jenkins is using HTTPS with a self-signed certificate. In that case the solution is:
Generate a truststore for SonarQube:
keytool -import -trustcacerts -alias jenkins-host-name -file cert.crt -keystore sonarqube.jks
keystore password: password
where cert.crt is the certificate used for SSL on Jenkins, and jenkins-host-name is the hostname of Jenkins in the Docker network (the one used in the webhook).
Add truststore to SonarQube Dockerfile:
FROM sonarqube
COPY sonarqube.jks /var/sonar_cert/
COPY sonar.properties /opt/sonarqube/conf/sonar.properties
Update sonar.properties
sonar.ce.javaAdditionalOpts=-Djavax.net.ssl.trustStore=/var/sonar_cert/sonarqube.jks -Djavax.net.ssl.trustStorePassword=password
Then, provided the webhook URL contains a correct user and password for Jenkins, everything should work.
Tried: Jenkins 2.107.2, SonarQube 7.1
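For reference, a minimal build-and-run sketch for that customised image (the image name and port mapping are assumptions):
# build the SonarQube image with the truststore and sonar.properties baked in
docker build -t sonarqube-with-truststore .
# run it; 9000 is SonarQube's default HTTP port
docker run -d --name sonarqube -p 9000:9000 sonarqube-with-truststore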
Here is a quick example of what we did to resolve this issue.
SonarQube randomly hangs in the "pending" state, and telling it to retry refreshes it. We set the timeout to 10 seconds in this example:
maxRetry = 200
for (int i = 0; i < maxRetry; i++) {
try {
timeout(time: 10, unit: 'SECONDS') {
waitForQualityGate()
}
} catch(Exception e) {
if (i == maxRetry-1) {
throw e
}
}
}
I was surprised to find that @Katone Vi's answer worked so well. Based on their answer, we added a quick exit on success and used the DSL for the original request:
stage('SonarQube') {
steps {
withSonarQubeEnv('SonarQube') {
sh """
${scannerHome}/bin/sonar-scanner -Dsonar.projectKey=XXX_${env.STAGE}_lambda
"""
}
script {
Integer waitSeconds = 10
Integer timeOutMinutes = 10
Integer maxRetry = (timeOutMinutes * 60) / waitSeconds as Integer
for (Integer i = 0; i < maxRetry; i++) {
try {
timeout(time: waitSeconds, unit: 'SECONDS') {
def qg = waitForQualityGate()
if (qg.status != 'OK') {
error "Sonar quality gate status: ${qg.status}"
} else {
i = maxRetry
}
}
} catch (Throwable e) {
if (i == maxRetry - 1) {
throw e
}
}
}
}
}
}
If you have configured SonarQube to use an HTTP(S) proxy, make sure that your Jenkins is either reachable through the proxy or configured as a "non-proxy host". This can be done with the http.nonProxyHosts property or the HTTP_NONPROXYHOSTS environment variable. See also the documentation for further information and syntax.
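For example, the relevant sonar.properties entries might look like the sketch below (host names are placeholders):
# outbound proxy used by SonarQube (example values)
http.proxyHost=proxy.example.com
http.proxyPort=3128
# hosts reached directly, bypassing the proxy, so webhook calls to Jenkins are not proxied
http.nonProxyHosts=localhost|127.*|jenkins.example.com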
If you are using a Jenkinsfile, this is a workaround:
Define credentials:
environment {
CRED = credentials('jenkins_user_pass')
}
then use:
stage("Quality Gate") {
steps {
script {
while(true){
sh "sleep 2"
def url="http://jenkinsURL/job/${env.JOB_NAME.replaceAll('/','/job/')}/lastBuild/consoleText";
def sonarId = sh script: "wget -qO- --content-on-error --no-proxy --auth-no-challenge --http-user=${CRED_USR} --http-password=${CRED_PSW} '${url}' | grep 'More about the report processing' | head -n1 ",returnStdout:true
sonarId = sonarId.substring(sonarId.indexOf("=")+1)
echo "sonarId ${sonarId}"
def sonarUrl = "http://jenkinsURL/sonar/api/ce/task?id=${sonarId}"
def sonarStatus = sh script: "wget -qO- '${sonarUrl}' --no-proxy --content-on-error | jq -r '.task' | jq -r '.status' ",returnStdout:true
echo "Sonar status ... ${sonarStatus}"
if(sonarStatus.trim() == "SUCCESS"){
echo "BREAK";
break;
}
if(sonarStatus.trim() == "FAILED"){
echo "FAILED"
currentBuild.result = 'FAILURE'
break;
}
}
}
}
}
I have faced a similar issue: the quality gate back-end activity on the Sonar server takes less than 20 seconds to complete its analysis, but the quality gate fail/success response from the Sonar webhook takes a long time to reach the Jenkins job, which gets stuck.
stage('Sonar:QG') {
steps {
sleep(10) /* Added 10 sec sleep that was suggested in a few places */
script{
timeout(time: 10, unit: 'MINUTES') {
def qg = waitForQualityGate abortPipeline: true
if (qg.status != 'OK') {
echo "Status: ${qg.status}"
error "Pipeline aborted due to quality gate failure: ${qg.status}"
}
}
}
}
}
Essentially, check the following:
Is the webhook configured in SonarQube? SonarQube -> Administration -> Webhooks
http://<jenkins-host>:<port>/sonarqube-webhook/
Using localhost in place of the IP, i.e. http://localhost:<port>/sonarqube-webhook/, solved the issue in my case.
I made a simpler solution, but it works the same way:
stage("Quality gate") {
steps {
retry(3){
waitForQualityGate abortPipeline: true
}
}
}
Adding a sh 'sleep 10' between stage('SonarQube analysis') and stage("Quality Gate") fixed the issue. Now the Jenkins job receives:
Checking status of SonarQube task 'AV3rHxhp3io6giaQF_OA' on server 'sonarserver'
SonarQube task 'AV3rHxhp3io6giaQF_OA' status is 'SUCCESS'
SonarQube task 'AV3rHxhp3io6giaQF_OA' completed. Quality gate is 'OK'
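In other words, roughly this shape, reusing the stages quoted in the question (scripted syntax; a sketch, assuming the surrounding node block from the original job):
stage('SonarQube analysis') {
    withSonarQubeEnv('mscodeanalysis') {
        sh "${scannerHome}/bin/sonar-scanner -Dproject.settings=$WORKSPACE/_pipeline/sonar-project.properties"
    }
}
// give SonarQube time to finish background processing and fire the webhook
sh 'sleep 10'
stage("Quality Gate") {
    timeout(time: 15, unit: 'MINUTES') {
        def qg = waitForQualityGate()
        if (qg.status != 'OK') {
            error "Pipeline aborted due to quality gate failure: ${qg.status}"
        }
    }
}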
I am currently starting to convert our builds into a Jenkins build pipeline. At a certain point it is necessary for us to wait for the startup of a web application within a docker container.
My idea was to use something like this:
timeout(120) {
waitUntil {
sh 'wget -q http://server:8080/app/welcome.jsf -O /dev/null'
}
}
Unfortunately this makes the pipeline build fail:
ERROR: script returned exit code 4
Is there any simple way to make this work?
Edit:
I managed to make it work using the following code, but the stage is still marked as failed (although the build continues and is marked green in the end).
timeout(120) {
waitUntil {
try {
sh 'wget -q http://server:8080/app/welcome.jsf -O /dev/null'
return true
} catch (exception) {
return false
}
}
}
They just released a new version of the Pipeline Nodes and Processes Plugin which adds support for returning the exit status.
This seems to do the job now:
timeout(5) {
waitUntil {
script {
def r = sh script: 'wget -q http://remotehost/welcome.jsf -O /dev/null', returnStatus: true
return (r == 0);
}
}
}
You can use wget options to achieve that:
waitUntil {
sh 'wget --retry-connrefused --tries=120 --waitretry=1 -q http://server:8080/app/welcome.jsf -O /dev/null'
}
120 tries, plus a 1-second wait between retries, retrying even in case of connection refused, may add up to slightly more than 120 seconds. To make sure it is limited to 120 s, you can use timeout from the shell:
waitUntil {
sh 'timeout 120 wget --retry-connrefused --tries=120 --waitretry=1 -q http://server:8080/app/welcome.jsf -O /dev/null'
}
If you don't have wget on the Jenkins node (e.g. in the default Docker image), you can also install and use the HTTP Request Plugin like this:
timeout(5) {
waitUntil {
script {
try {
def response = httpRequest 'http://server:8080/app/welcome.jsf'
return (response.status == 200)
}
catch (exception) {
return false
}
}
}
}