Running ssh command on server from Jenkins - jenkins

I have a Jenkins stage as:
stage("Deploy on Server") {
    steps {
        script {
            sh 'sshpass -p "password" ssh -o "StrictHostKeyChecking=no" username@server "cd ../../to/app/path; sh redeploy.sh && exit;"'
        }
    }
}
and some scripts on my server (CentOS):
redeploy.sh:
declare -i result=0
...
sh restart.sh
result+=$?
echo "Step 6: result = " $result
# 7. if restart fails, restart /versions/*.jar with "sh restart-previous.sh"
if [ $result != "0" ]
then
    sh restart-previous.sh
    result+=$?
fi
echo "Deploy done. Final result: " $result
restart.sh:
nohup java -Xms8g -Xmx8g -jar app-name-1.0-allinone.jar &
Because I execute the redeploy.sh script from Jenkins, the problem is that it stays attached to the Jenkins console and logs all application events there, instead of creating a nohup file in the path where my app is deployed.
In some examples I found it recommended to use nohup directly in the ssh command, but I can't do that here: I need to execute a script with all the steps, not a single command, and nohup can't do that.
The exit command is ignored because the previous command never terminates.
Thanks

Finally, I found the solution. One problem was in restart.sh: the log file has to be specified explicitly on the command line. With that, nohup is ignored/unneeded, and the command becomes:
java -Xms8g -Xmx8g -jar app-name-1.0-allinone.jar < /dev/null >> logfile.log 2>&1 &
Another problem was with killing the previous jar process. Be very careful here: because the project name is used as a path in the Jenkins script, a new process is created for your user, and it will be killed accidentally when you want to stop your application:
def statusCode = sh returnStatus: true, script: 'sshpass -p "password" ssh -o "StrictHostKeyChecking=no" username@server "cd ../../to/app/path/app-folder; sh redeploy.sh;"'
if (statusCode != 0) {
    currentBuild.result = 'FAILURE'
    echo "FAILURE"
}
stop.sh:
if pgrep -u username -f app-name
then
    pkill -u username -f app-name
fi
# (app-name is a string: some words from the command used to launch the application)
Because app-folder in the Jenkins script and app-name in stop.sh are equal (or app-folder contains the app-name value), when you try to kill the app-name process you will accidentally kill the ssh connection as well. Jenkins then gets a 255 status code, even though redeploy.sh completes successfully on the server, because it runs independently.
The solution is simple, but hard to discover: make sure the pattern you search for matches only and exactly the process id of your application.
Finally, stop.sh must be as:
if pgrep -u username -f my-app-v1.0.jar
then
    pkill -u username -f my-app-v1.0.jar
fi
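The underlying behaviour is that pgrep/pkill -f match against the entire command line, so any process whose arguments contain the pattern (including the ssh command carrying the folder name) is a candidate. A minimal local sketch of this, using a hypothetical sleep process as a stand-in application:

```shell
# Start a stand-in "application" in the background.
sleep 12345 &
app_pid=$!

# -f matches the full command line, so a pattern that appears only in
# our process's arguments finds exactly that process.
found=$(pgrep -u "$(id -un)" -f "sleep 12345")
echo "$found"

kill "$app_pid"
```

The same mechanism explains the 255 status: a pattern like "app-name" also appears in the ssh command line, so pkill terminates the connection itself.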

Related

SSH into machine, check to see if file exists, then fail Jenkins Execute shell if it does

I am trying to SSH into a machine from a Jenkins Execute shell step, check whether a file exists on the machine, and fail the shell if it does. I have the following code, but I can't seem to get Jenkins to recognize that the output of echo is "yes" or "no".
Please let me know what you think...
sshpass -p ${ServerNodePw} ssh -T -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ${ServerNodeUser}@${ServerNodeIP} << EOSSH
(ls /Volumes/ServerHD/Users/username/Desktop/JenkinsVMbuild.lock && echo yes) || echo no
2>/dev/null 1>/dev/null
echo "$?"
if [ "$?" = yes ]
then
    echo "File found"
    exit 1
    currentBuild.result = 'FAILURE'
else
    echo "File not found"
fi
EOSSH
The value of $? is the exit status of the last executed command (or pipeline). In your case, it would always expand to 0.
You can use command substitution instead if you want to store the echoed string in a variable.
You will also need to properly escape (or quote) the script in your here document. Otherwise $var will be substituted by the caller before passing the script to ssh.
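Putting those points together, the remote part of the check could be restructured like this (a sketch; the lock-file path is the asker's). When embedded in the here document, the delimiter should also be quoted (<< 'EOSSH') so the expansion happens on the remote side:

```shell
# Capture the echoed string via command substitution; testing "$?"
# against "yes" can never succeed, because $? is a numeric exit status.
lockfile=/Volumes/ServerHD/Users/username/Desktop/JenkinsVMbuild.lock
found=$( (ls "$lockfile" >/dev/null 2>&1 && echo yes) || echo no )
if [ "$found" = yes ]
then
    echo "File found"
    exit 1
else
    echo "File not found"
fi
```

The exit 1 then propagates through ssh back to Jenkins and fails the build step, so the Groovy-style currentBuild.result line inside the shell script is not needed.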

How to run a shell script in the background in Jenkins

I have the command below, but it is not working; I see the process get created and then killed automatically:
BUILD_ID=dontKillMe nohup /Folder1/job1.sh > /Folder2/Job1.log 2>&1 &
Jenkins Output:
[ssh-agent] Using credentials user1 (private key for user1)
[job1] $ /bin/sh -xe /tmp/jenkins19668363456535073453.sh
+ BUILD_ID=dontKillMe
+ nohup /Folder1/job1.sh
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 8765 killed;
[ssh-agent] Stopped.
Finished: SUCCESS
I have run such a command without nohup, so you may try:
sh """
    BUILD_ID=dontKillMe /Folder1/job1.sh > /Folder2/Job1.log 2>&1 &
"""
In my case, I did not need to redirect STDERR to STDOUT because the process I was running captured all errors and displayed on STDOUT directly.
We use daemonize for that. It properly starts the program in the background and does not kill it when the parent process (bash) exits.
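daemonize may not be installed on every agent; a rough equivalent using setsid from util-linux (a sketch using the question's paths, not the answerer's exact invocation) is:

```shell
# Run the job in its own session, with stdin detached and output sent
# to a log file, so it no longer belongs to the process tree Jenkins
# tears down when the build step finishes.
setsid /Folder1/job1.sh < /dev/null >> /Folder2/Job1.log 2>&1 &
```

Combined with BUILD_ID=dontKillMe, this keeps both the shell reaper and Jenkins' ProcessTreeKiller away from the job.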

CICD Pipeline Deployment stage unable to come out after successful deployment

Background: Spring Boot application deployment using a CI/CD declarative pipeline script
Issue:
The Spring Boot application jar launches successfully, and after some time we can even access the application health info from a browser, but the build job is unable to exit the deployment stage; it keeps spinning there continuously.
Action taken: we added timeout=120000 to the launch command, but the behaviour did not change.
Help: please help us make a clean exit after the deployment stage in the Jenkins CI/CD declarative pipeline.
We are ssh'ing in and executing our launch command. The code is like:
sshagent([sshAgent]) {
    sh "scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -v *.jar sudouser@${server}:/opt/project/tmp/application-demo.jar"
    sh "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null sudouser@${server} nohup '/opt/java/hotspot/8/64_bit/jdk1.8.0_141/bin/java -jar -Dspring.profiles.active=$profile -Dhttpport=8890 - /opt/project/tmp/application-demo.jar ' timeout=120000"
}
I need a clean exit from the Jenkins build after the deployment stage is successful.
You need to add '&' to start the process in the background.
Example:
nohup '/opt/java/hotspot/8/64_bit/jdk1.8.0_141/bin/java -jar -Dspring.profiles.active=$profile -Dhttpport=8890 - /opt/project/tmp/application-demo.jar' &
You can also add an 'if' condition that exits execution once 'started' appears in the log.
Example:
status(){
    timeout=$1
    process=$2
    log=$3
    string=$4
    if (timeout ${timeout} tail -f ${log} &) | grep "${string}" ; then
        echo "${process} started with success."
    else
        echo "${process} startup failed." 1>&2
        exit 1
    fi
}
start_app(){
    java -jar -Dspring.profiles.active=$profile -Dhttpport=8890 - /opt/project/tmp/application-demo.jar >> /tmp/log.log 2>&1 &
    status "60" "application-demo" "/tmp/log.log" "started"
}

How can I execute a shell script in my own Jenkins pipeline plugin?

My problem is that I want to execute a script inside my Jenkins pipeline plugin, and the 'perf script' command does not work.
My script is:
#! /bin/bash
if test $# -lt 2
then
    sudo perf record -F 99 -a -g -- sleep 20
    sudo perf script > info.perf
    echo "voila"
fi
exit 0
My Jenkins can execute sudo, so that is not the problem, and in my own Linux shell this script works perfectly.
How can I solve this?
I solved this by adding the -i option to the perf script command:
sudo perf record -F 99 -a -g -- sleep 20
sudo perf script -i perf.data > info.perf
echo "voila"
It seems Jenkins is not able to read perf.data without the -i option.
If the redirection does not work within the script, try and see if it works within the DSL Jenkinsfile.
You can call that script with the sh step, which supports returnStdout (JENKINS-26133):
res = sh(returnStdout: true, script: '/path/to/your/bash/script').trim()
You could process the result directly in res, bypassing the need for a file.

jenkins pipeline: multiline shell commands with pipe

I am trying to create a Jenkins pipeline where I need to execute multiple shell commands and use the result of one command in the next. I found that wrapping the commands in a pair of triple single quotes ''' can accomplish this. However, I am facing issues when using a pipe to feed the output of one command into another. For example:
stage('Test') {
    sh '''
        echo "Executing Tests"
        URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r '.public_url'`
        echo $URL
        RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r '.code'`
        echo $RESULT
    '''
}
The commands with pipes are not working properly. Here is the Jenkins console output:
+ echo Executing Tests
Executing Tests
+ curl -s http://localhost:4040/api/tunnels/command_line
+ jq -r .public_url
+ URL=null
+ echo null
null
+ curl -sPOST https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=null
I tried entering all these commands in the Jenkins snippet generator for pipeline, and it gave the following output:
sh ''' echo "Executing Tests"
URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r \'.public_url\'`
echo $URL
RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r \'.code\'`
echo $RESULT
'''
Notice the escaped single quotes in the commands jq -r \'.public_url\' and jq -r \'.code\'. Using the code this way solved the problem.
UPDATE: after a while even that started to give problems. Certain commands were executing prior to these; one of them was grunt serve and the other was ./ngrok http 9000. I added some delay after each of these commands, and that solved the problem for now.
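Instead of a fixed delay after grunt serve and ngrok, one option is to poll until the tunnel URL is actually available. A sketch (wait_for_url is a hypothetical helper; the endpoint is the one from the question):

```shell
# Poll a JSON endpoint until jq extracts a non-null .public_url,
# instead of relying on a fixed sleep. Gives up after $2 attempts.
wait_for_url() {
    endpoint=$1
    tries=${2:-30}
    i=0
    while [ "$i" -lt "$tries" ]; do
        url=$(curl -s "$endpoint" | jq -r '.public_url')
        if [ -n "$url" ] && [ "$url" != "null" ]; then
            echo "$url"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}
```

The pipeline script would then call URL=$(wait_for_url "http://localhost:4040/api/tunnels/command_line") before invoking the test suite, so the curl to the Ghost Inspector API never sees a null start URL.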
The following scenario shows a real example that may need multiline shell commands: say you are using a plugin like Publish Over SSH and you need to execute a set of commands on the destination host in a single SSH session:
stage ('Prepare destination host') {
    sh '''
    ssh -t -t user@host 'bash -s << 'ENDSSH'
    if [[ -d "/path/to/some/directory/" ]];
    then
        rm -f /path/to/some/directory/*.jar
    else
        sudo mkdir -p /path/to/some/directory/
        sudo chmod -R 755 /path/to/some/directory/
        sudo chown -R user:user /path/to/some/directory/
    fi
ENDSSH'
    '''
}
Special notes:
The last ENDSSH' should not have any characters before it, so it must be at the start of a new line.
Use ssh -t -t if you have sudo within the remote shell command.
I split the commands with &&
node {
    FOO = "world"
    stage('Preparation') { // for display purposes
        sh "ls -a && pwd && echo ${FOO}"
    }
}
The example outputs:
- ls -a (the files in your workspace)
- pwd (the workspace location)
- echo world
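One caveat with this pattern: ${FOO} in a double-quoted sh string is interpolated by Groovy before the shell ever runs, so the shell only sees the literal value. A sketch of the distinction (the withEnv usage is an assumption, not part of the original answer):

```groovy
node {
    FOO = "world"
    stage('Preparation') {
        // Groovy substitutes ${FOO} here; the shell never sees a variable.
        sh "ls -a && pwd && echo ${FOO}"
        // To let the shell itself expand the variable, pass it through
        // the environment and use single quotes.
        withEnv(["FOO=${FOO}"]) {
            sh 'echo $FOO'
        }
    }
}
```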
