I have a Spring Boot application on a CentOS server and use a shell script to restart it.
Jenkins version: docker run -dp 8080:8080 --name jenkins jenkinsci/blueocean
start-service.sh
#!/bin/bash
sudo systemctl restart sb
In my Jenkinsfile I upload the jar file to the server and execute start-service.sh, but Jenkins doesn't seem to know whether my Java application restarted successfully or failed.
Jenkinsfile
pipeline {
agent none
stages {
stage('Build') {
agent {
docker {
image 'maven:3-alpine'
args '-v /root/.m2:/root/.m2'
}
}
steps {
sh 'mvn -B -DskipTests clean package'
sh 'mvn help:evaluate -Dexpression=project.name | grep "^[^\\[]" > project-name'
sh 'mvn help:evaluate -Dexpression=project.version | grep "^[^\\[]" > project-ver'
}
}
stage('Deploy') {
agent any
environment {
HOST = "${HEHU_HOST}"
USER = "yunwei"
DIR = "/www/java/sb-demo"
VERSION_FILE = "${DIR}/version"
CMD_SERVICE = "${DIR}/start-service.sh"
}
steps {
sshagent (credentials: ['hehu']) {
sh '''
name=$(cat project-name)
ver=$(cat project-ver)
jarFile=${name}-${ver}.jar
scp target/${jarFile} ${USER}@${HOST}:${DIR}/${jarFile}
scp project-ver ${USER}@${HOST}:${VERSION_FILE}
ssh -o StrictHostKeyChecking=no -l ${USER} ${HOST} -a ${CMD_SERVICE}
'''
}
}
}
}
}
I deliberately made the Java application fail: systemctl restart fails, but the Jenkins stage still succeeds.
@SpringBootApplication
@RestController
public class Application {
public static void main(String[] args) {
throw new RuntimeException("Test error");
// SpringApplication.run(Application.class, args);
}
@GetMapping("/test")
String test() {
return "furukawa nagisa\n";
}
}
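One reason Jenkins sees success is that systemctl restart returns 0 as soon as systemd has spawned the process; a crash during Spring Boot startup happens after that. A sketch of a start-service.sh that re-checks the unit state after a short delay (the unit name sb is from the question; the 5-second wait is an assumption):

```shell
#!/bin/bash
set -e
sudo systemctl restart sb
# systemctl restart can return 0 before the JVM finishes booting,
# so give a startup crash a moment to surface
sleep 5
# is-active exits non-zero if the unit has failed or stopped, which in turn
# fails the ssh step in the Jenkinsfile
sudo systemctl is-active --quiet sb
```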
Trying Daniel Taub's solution, I get a syntax error.
pipeline {
agent none
stages {
stage('Hello') {
agent any
steps {
sshagent (credentials: ['hehu']) {
SH_SUCCESS = sh(
script: '''
ssh -o StrictHostKeyChecking=no -l yunwei ${HEHU_HOST} -a /www/java/sb-demo/start-service.sh
''',
returnStatus: true
) == 0
echo '${SH_SUCCESS}'
}
}
}
}
}
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 9: Expected a step @ line 9, column 21.
SH_SUCCESS = sh(
You can get the exit status code returned by sh and fail the build manually if it is non-zero.
For checking your exit code, Jenkins supports it this way:
returnStatus (optional)
Normally, a script which exits with a nonzero status code will cause the step to fail with an exception. If this option is checked, the return value of the step will instead be the status code. You may then compare it to zero, for example.
script {
SH_SUCCESS = sh (
script: "your command",
returnStatus: true
) == 0
}
To manually fail the build you have a couple of options:
error('Fail my build!')
Or alternatively
currentBuild.result = 'FAILURE'
return
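Putting the pieces together for the deploy stage in the question, a sketch (the credential id, user and paths are taken from the question; note the assignment has to sit inside a script block, which is what the "Expected a step" compilation error was complaining about):

```groovy
stage('Deploy') {
    agent any
    steps {
        sshagent(credentials: ['hehu']) {
            script {
                // returnStatus hands us the remote exit code
                // instead of failing the step outright
                def rc = sh(
                    script: 'ssh -o StrictHostKeyChecking=no -l yunwei ${HEHU_HOST} /www/java/sb-demo/start-service.sh',
                    returnStatus: true
                )
                if (rc != 0) {
                    error("start-service.sh failed with exit code ${rc}")
                }
            }
        }
    }
}
```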
Related
I am working with Jenkins pipelines and I have this code:
stages {
stage('Stage1') {
options {
timeout(time: 1, unit: "MINUTES")
}
steps {
script {
sh'''
#!/bin/bash
set -eux pipefail
ssh user@server.com "
ssh -p 50 user@localhost'\
docker run --rm --name name\
-e user=...\
-e passwd=...\
-v /location:/location2\
-w location2\
server2.com:6000/my-x-y:1.1\
python script.py\
'\
"
'''
}
}
}
}
When the connection inside the script cannot be made, the job will time out, but it still goes on and is marked as succeeded.
I get this message:
17:10:53 Cancelling nested steps due to timeout
17:10:53 Sending interrupt signal to process
After that the jobs moves to the next stage and the status is success.
So even though I am getting timeout the job is being marked as success.
I'd like to send notifications when this stage is not properly executed (I already have a notification.sh script for it).
Is there any way I can get this job to be aborted when it hits the timeout?
Or any other way to go around this in order to warn users that this stage was not properly executed?
Try something like below.
try {
timeout (time: 10, unit: 'SECONDS') {
sh'''
#!/bin/bash
set -euxo pipefail
ssh user@server.com "
ssh -p 50 user@localhost'\
docker run --rm --name name\
-e user=...\
-e passwd=...\
-v /location:/location2\
-w location2\
server2.com:6000/my-x-y:1.1\
python script.py\
'\
"
'''
}
}
catch (error) {
echo "Error: $error"
def cause = error.getCauses()[0].getClass().toString()
if(cause.contains("ExceededTimeout")) { // If you want handle timeout as a special case
echo "This was a Timeout"
// Do whatever you want
}
}
Full Sample Pipeline
pipeline {
agent any
stages {
stage('TimerTest') {
steps {
script {
try {
timeout (time: 10, unit: 'SECONDS') {
echo "In timer"
sleep 15
}
}
catch (error) {
echo "XXXX: $error"
def cause = error.getCauses()[0].getClass().toString()
println "$cause"
if(cause.contains("ExceededTimeout")) {
echo "This was a Timeout"
// Do what ever you want
}
}
}
}
}
}
}
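As an aside, the line set -eux pipefail in the original script does not do what it looks like: pipefail is parsed as a positional parameter rather than an option, so the pipefail option stays off and a failing ssh early in a pipe can go unnoticed. The correct spelling is set -euxo pipefail. A minimal demonstration:

```shell
#!/bin/bash
# "set -eux pipefail" silently treats "pipefail" as a positional parameter
# instead of an option; only "set -o pipefail" (or "set -euxo pipefail")
# actually enables it.
set -eu pipefail
before=$(set -o | awk '$1 == "pipefail" {print $2}')
set -o pipefail
after=$(set -o | awk '$1 == "pipefail" {print $2}')
echo "pipefail before=$before after=$after"
```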
Am I able to somehow copy data from one stage for use in another?
For example, I have one stage where I want to clone my repo, and another where I run Kaniko, which will copy (via the Dockerfile) all the data into the container and build it.
How can I do this? Because:
stages are independent, so I'm not able to operate on the same data in both;
in Kaniko I'm not able to install Git to clone the repo there.
Thanks in advance.
Example of code :
pipeline {
agent none
stages {
stage('Clone repository') {
agent {
label 'builder'
}
steps {
sh 'git clone ssh://git@myrepo.com./repo.git'
sh 'cd repo'
}
}
stage('Build application') {
agent {
docker {
label 'builder'
image 'gcr.io/kaniko-project/executor:debug'
args '-u 0 --entrypoint=""'
}
}
steps {
sh '''#!/busybox/sh
/kaniko/executor -c `pwd` -f Dockerfile
'''
}
}
}
}
P.S. In the Dockerfile I use something like:
ADD . /
You can try to use stash:
stage('Clone repository') {
agent {
label 'builder'
}
steps {
sh 'git clone ssh://git@myrepo.com./repo.git'
script {
stash includes: 'repo/', name: 'myrepo'
}
}
}
stage('Build application') {
agent {
docker {
label 'builder'
image 'gcr.io/kaniko-project/executor:debug'
args '-u 0 --entrypoint=""'
}
}
steps {
script {
unstash 'myrepo'
}
sh '''#!/busybox/sh
/kaniko/executor -c `pwd` -f Dockerfile
'''
}
}
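One thing to watch: unstash recreates the files under repo/ in the new workspace, so the Kaniko context probably needs to point at that subdirectory rather than the workspace root (a sketch; --context and --dockerfile are the executor's long-form flags):

```groovy
steps {
    script {
        unstash 'myrepo'
    }
    sh '''#!/busybox/sh
    # the stash was created as repo/, so build from that subdirectory
    /kaniko/executor --context `pwd`/repo --dockerfile `pwd`/repo/Dockerfile
    '''
}
```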
I have a Docker-in-Docker setup that builds Docker images and is not on the same node as the Jenkins node. When I try to build using the Jenkins node I receive:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
To fix I can build a docker image using below within Jenkinsfile:
stage('Docker Build') {
agent any
steps {
script {
withDockerServer([uri: "tcp://10.44.10.8:2375"]) {
withDockerRegistry([credentialsId: 'docker', url: "https://index.docker.io/v1/"]) {
def image = docker.build("ron/reactive")
image.push()
}
}
}
}
}
This works as expected, I can use the above Jenkins pipeline config to build a Docker container.
I'm attempting to use the Docker server running at tcp://10.44.10.8:2375 to package a Java Maven project in a new container running on Docker. I've defined the pipeline build as:
pipeline {
agent any
stages {
stage('Maven package') {
agent {
docker {
image 'maven:3-alpine'
args '-v /root/.m2:/root/.m2'
}
}
stages {
stage('Build') {
steps {
sh 'mvn -B -DskipTests clean package'
}
}
}
}
}
}
And receive this message from Jenkins with no further output:
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Maven package)
[Pipeline] node
Still waiting to schedule task
‘Jenkins’ doesn’t have label ‘dockerserverlabel’
I've configured the Docker label in Jenkins (screenshot omitted) to match the 'Docker Build' settings from the Jenkinsfile above.
But it seems I've not included some other configuration in Jenkins and/or the Jenkinsfile to enable the Docker image to be built on tcp://10.44.10.8:2375?
I'm working through https://www.jenkins.io/doc/tutorials/build-a-java-app-with-maven/ which describes a pipeline for building a maven project on Docker:
pipeline {
agent {
docker {
image 'maven:3-alpine'
args '-v /root/.m2:/root/.m2'
}
}
stages {
stage('Build') {
steps {
sh 'mvn -B -DskipTests clean package'
}
}
}
}
But how to run the build on a separate Docker host is not described.
Can this Jenkins config:
stage('Docker Build') {
agent any
steps {
script {
withDockerServer([uri: "tcp://10.44.10.8:2375"]) {
withDockerRegistry([credentialsId: 'docker', url: "https://index.docker.io/v1/"]) {
def image = docker.build("ron/reactive")
image.push()
}
}
}
}
}
be used with
pipeline {
agent {
docker {
image 'maven:3-alpine'
args '-v /root/.m2:/root/.m2'
}
}
stages {
stage('Build') {
steps {
sh 'mvn -B -DskipTests clean package'
}
}
}
}
?
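One sketch of combining them: give the Maven stage's docker agent the label of the Jenkins node/cloud that talks to the remote daemon. The label dockerserverlabel is taken from the log above; whether it is actually attached to a working Docker cloud pointing at tcp://10.44.10.8:2375 is an assumption:

```groovy
pipeline {
    agent none
    stages {
        stage('Maven package') {
            agent {
                docker {
                    // must match a Jenkins node/cloud label that can reach
                    // the Docker daemon at tcp://10.44.10.8:2375
                    label 'dockerserverlabel'
                    image 'maven:3-alpine'
                    args '-v /root/.m2:/root/.m2'
                }
            }
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
    }
}
```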
Currently I have a pipeline working something like this:
pipeline {
agent {
docker {
label 'linux'
image 'java:8'
args '-v /home/tester/.gradle:/.gradle'
}
}
environment {
HOME = '/'
GRADLE_USER_HOME = '/.gradle'
GRADLE_PROPERTIES = credentials('gradle.properties')
}
stages {
stage('Build') {
steps {
sh 'cp ${GRADLE_PROPERTIES} ${GRADLE_USER_HOME}/'
sh './gradlew clean check'
}
}
}
}
The problem is, gradle.properties ends up in a well-known location on the host system for the duration of the build.
I know that Docker lets me 'mount' files from the host. So I'd like to do this instead:
agent {
docker {
label 'linux'
image 'java:8'
args '-v /home/tester/.gradle:/.gradle ' +
'-v ' + credentials('gradle.properties') +
':/.gradle/gradle.properties'
}
}
Unfortunately, this ends up running this:
$ docker run -t -d -u 1001:1001 -v /home/tester/.gradle:/.gradle -v @credentials(<anonymous>=gradle.properties):/.gradle/gradle.properties -w
Is there a way to have it expanded?
I couldn't find a way to make the environment work for docker args.
My workaround ended up being switching back to scripted pipeline:
agent {
label 'linux'
}
environment {
GRADLE_PROPERTIES = credentials('gradle.properties')
}
steps{
script {
docker.image('java:8').inside('-v /home/tester/.gradle:/.gradle ' +
"-v $GRADLE_PROPERTIES:/.gradle/gradle.properties") {
// run the build steps inside the container, e.g. sh './gradlew clean check'
}
}
}
How do I create a function def test() that does some steps after SSHing into an instance?
I have something like this:
#!/usr/bin/env groovy
def test() {
cd $testPath
mv test*.txt archiveFiles
sh "someScript.sh"
}
pipeline {
agent java
parameters {
string(
name: 'testPath',
defaultValue: '/home/ubuntu/testFiles',
description: 'file directory'
)
}
stages {
stage(test) {
steps{
script{
sh "ssh ubuntu@IP 'test()'"
}
}
}
}
}
I am trying to SSH into an instance and perform the steps in the function test() by calling it.
I am getting an error like this:
bash: -c: line 1: syntax error: unexpected end of file
ERROR: script returned exit code 1
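A Groovy function in the Jenkinsfile runs on the Jenkins agent, not on the remote machine, so sh "ssh ubuntu@IP 'test()'" asks the remote bash to run a function it has never seen. The remote steps have to be sent as shell commands over ssh instead. A sketch (the IP is a placeholder; user, paths and someScript.sh are taken from the question):

```groovy
#!/usr/bin/env groovy
// Runs the "test" steps on the remote host by sending them as one shell
// command over ssh.
def remoteTest(String host, String testPath) {
    sh """
        ssh -o StrictHostKeyChecking=no ubuntu@${host} '
            cd ${testPath} &&
            mv test*.txt archiveFiles &&
            ./someScript.sh
        '
    """
}

pipeline {
    agent any
    parameters {
        string(name: 'testPath', defaultValue: '/home/ubuntu/testFiles', description: 'file directory')
    }
    stages {
        stage('test') {
            steps {
                script {
                    remoteTest('203.0.113.10', params.testPath)  // placeholder IP
                }
            }
        }
    }
}
```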
We use the SSH plugin as follows:
steps {
timeout(time: 2, unit: 'MINUTES') {
sshagent(credentials: ['local-dev-ssh']) {
sh "ssh -p 8022 -l app ${ENVIRONMENT_HOST_NAME} './run-apps.sh ${SERVICE_NAME} ${DOCKER_IMAGE_TAG_PREFIX}-${env.BUILD_NUMBER}'"
}
}
}