I have the command below, but it is not working: I see that the process is created and then killed automatically.
BUILD_ID=dontKillMe nohup /Folder1/job1.sh > /Folder2/Job1.log 2>&1 &
Jenkins Output:
[ssh-agent] Using credentials user1 (private key for user1)
[job1] $ /bin/sh -xe /tmp/jenkins19668363456535073453.sh
+ BUILD_ID=dontKillMe
+ nohup /Folder1/job1.sh
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 8765 killed;
[ssh-agent] Stopped.
Finished: SUCCESS
I have run a similar command without the nohup, so you may try:
sh """
BUILD_ID=dontKillMe /Folder1/job1.sh > /Folder2/Job1.log 2>&1 &
"""
In my case, I did not need to redirect STDERR to STDOUT because the process I was running captured all errors and displayed them on STDOUT directly.
We use daemonize for that. It properly starts the program in the background and does not kill it when the parent process (bash) exits.
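A minimal sketch with the paths from the question (assuming the daemonize package is installed; per daemonize(1), -o and -e name the stdout and stderr log files):
# Start job1.sh fully detached from Jenkins; output goes to the given files
daemonize -o /Folder2/Job1.log -e /Folder2/Job1.err /Folder1/job1.sh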
Is there any reason why xvfb-run will not be executed as a Docker overridden command?
Having an image from this Dockerfile:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y xvfb
Built with:
docker build -f Dockerfile.xvfb -t xvfb-test .
If I execute a custom docker command with xvfb-run:
docker run xvfb-test bash -x /usr/bin/xvfb-run echo
It gets stuck and never ends.
But if I enter the image with docker run --rm -it xvfb-test bash and execute the same command, xvfb-run echo, it finishes immediately (meaning that the Xvfb server started and was able to execute the command).
This is an excerpt of xvfb-run script:
...
trap : USR1
(trap '' USR1; exec Xvfb ":$SERVERNUM" $XVFBARGS $LISTENTCP -auth $AUTHFILE >>"$ERRORFILE" 2>&1) &
XVFBPID=$!
wait || :
...
Executing with bash -x, we can see which line is the last one executed:
+ XAUTHORITY=/tmp/xvfb-run.YwmHlq/Xauthority
+ xauth source -
+ trap : USR1
+ XVFBPID=16
+ wait
+ trap '' USR1
+ exec Xvfb :99 -screen 0 1280x1024x24 -nolisten tcp -auth /tmp/xvfb-run.YwmHlq/Xauthority
From this link:
The docker --init option in the run command basically sets ENTRYPOINT to tini and passes the CMD to it or whatever you specify on the command line. Without init, CMD becomes PID 1; in this case, /bin/bash.
It looks like the command, running as PID 1 without the --init parameter, does not handle the USR1 signal correctly.
Looking at the xvfb-run script and where it gets stuck, it seems that the USR1 signal (which the Xvfb process sends; by X server convention, a server that inherits SIGUSR1 ignored signals its parent with SIGUSR1 once it is ready to accept connections) never gets propagated to the wait statement.
A way to force signal propagation is to add the --init flag to the docker run command.
From the documentation, that is exactly what it does:
Run an init inside the container that forwards signals and reaps processes
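Applied to the failing command from the question, that becomes:
docker run --init xvfb-test bash -x /usr/bin/xvfb-run echo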
I have a Jenkins stage as:
stage("Deploy on Server") {
steps {
script {
sh 'sshpass -p "password" ssh -o "StrictHostKeyChecking=no" username@server "cd ../../to/app/path; sh redeploy.sh && exit;"'
}
}
}
and some scripts on my server (CentOS):
redeploy.sh:
declare -i result=0
...
sh restart.sh
result+=$?
echo "Step 6: result = " $result
# 7. if restart fails, restart /versions/*.jar with "sh restart-previous.sh"
if [ $result != "0" ]
then
sh restart-previous.sh
result+=$?
fi
echo "Deploy done. Final result: " $result
restart.sh:
nohup java -Xms8g -Xmx8g -jar app-name-1.0-allinone.jar &
Because I execute the redeploy.sh script from Jenkins, the problem is that it clings to the Jenkins console and logs all application events there, instead of creating a nohup file in the path where my app is deployed.
In some examples I found it recommended to use nohup directly in the ssh command, but I can't do this because I need to execute a script (with all its steps; nohup can't do that), not a single command.
The exit command will be ignored because the previous command never terminates.
Thanks.
Finally, I found the solution. One problem was in restart.sh: the command must specify its log file explicitly and detach its standard streams from the ssh session, otherwise the connection stays open. So nohup is ignored/unused, and the command becomes:
java -Xms8g -Xmx8g -jar app-name-1.0-allinone.jar < /dev/null >> logfile.log 2>&1 &
Another problem was with killing the previous jar process. Be very careful: because the project name is used as a path in the Jenkins script, a new process containing that name is created for your user, and it can be accidentally killed when you want to stop your application:
def statusCode = sh returnStatus: true, script: 'sshpass -p "password" ssh -o "StrictHostKeyChecking=no" username@server "cd ../../to/app/path/app-folder; sh redeploy.sh;"'
if (statusCode != 0) {
currentBuild.result = 'FAILURE'
echo "FAILURE"
}
stop.sh:
if pgrep -u username -f app-name
then
pkill -u username -f app-name
fi
# (app-name is a string, some words from the running cmd that opened the application)
Because app-folder in the Jenkins script and app-name in stop.sh are equal (or app-folder even contains the app-name value), when you try to kill the app-name process you accidentally kill the ssh connection as well, and Jenkins gets a 255 status code, even though the redeploy.sh script on the server finishes successfully because it is executed independently.
The solution is simple, but hard to discover. You should make sure that the pattern you give the search command matches only the process id of your application.
Finally, stop.sh must be:
if pgrep -u username -f my-app-v1.0.jar
then
pkill -u username -f my-app-v1.0.jar
fi
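To illustrate the difference (pgrep -f matches its pattern against the full command line, not just the process name):
# broad pattern: may also match the ssh session whose command line contains app-folder
pgrep -u username -f app-name
# explicit pattern: matches only the java process running the jar
pgrep -u username -f my-app-v1.0.jar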
Background: Spring Boot application deployment using a CI/CD declarative pipeline script.
Issue:
The Spring Boot application jar file launches successfully. After some time we can even access the application health info from a browser, but the build job is unable to exit the deployment stage; it spins at this stage continuously.
Action taken: we added timeout=120000 to the launch command, but the behaviour did not change.
Help: please help us understand how we can make a clean exit after the deployment stage in a Jenkins CI/CD declarative pipeline.
We are ssh'ing in and executing our launch command. The code looks like:
sshagent([sshAgent]) {
    sh "scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -v *.jar sudouser@${server}:/opt/project/tmp/application-demo.jar"
    sh "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null sudouser@${server} nohup '/opt/java/hotspot/8/64_bit/jdk1.8.0_141/bin/java -jar -Dspring.profiles.active=$profile -Dhttpport=8890 /opt/project/tmp/application-demo.jar' timeout=120000"
}
I need a clean exit from the Jenkins build after the deployment stage is successful.
You need to add '&' to start the process in the background.
Example:
nohup /opt/java/hotspot/8/64_bit/jdk1.8.0_141/bin/java -jar -Dspring.profiles.active=$profile -Dhttpport=8890 /opt/project/tmp/application-demo.jar &
You can also add an 'if' condition that exits execution once 'started' appears in the log.
Example:
status(){
timeout=$1
process=$2
log=$3
string=$4
# Follow the log for up to ${timeout} seconds; grep exits as soon as the
# marker string appears, otherwise the timeout ends the wait and grep fails.
if (timeout ${timeout} tail -f ${log} &) | grep "${string}" ; then
echo "${process} started with success."
else
echo "${process} startup failed." 1>&2
exit 1
fi
}
start_app(){
java -jar -Dspring.profiles.active=$profile -Dhttpport=8890 /opt/project/tmp/application-demo.jar >> /tmp/log.log 2>&1 &
status "60" "application-demo" "/tmp/log.log" "started"
}
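As a hedged usage sketch (the file name deploy-functions.sh and its location are assumptions), the remote script invoked over ssh only needs to source the two functions above and call start_app; the ssh command then returns as soon as status() sees the marker string or times out:
. /opt/project/tmp/deploy-functions.sh  # hypothetical file holding status() and start_app()
start_app                               # backgrounds the jar, then gates on the log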
I would like to create a script for OpenWrt that changes some variables inside the Shadowsocks service every day. This is the script, but I don't know where to put it or how to arrange for it to be called every day or on every reboot of the router.
#!/bin/sh /etc/rc.common
restart=0
for i in `uci show shadowsocks | grep alias | sed -r 's/.*\[(.*)\].*/\1/'`
do
server=$(uci get shadowsocks.@servers[${i}].alias)
result=$(nslookup $server)
new_ip=$(echo "${result}" | tail -n +3 | awk -F" " '/^Address 1/{ print $3}')
if [ -n "$new_ip" ]; then
logger -t shadowsocks "nslookup $server -> $new_ip"
old_ip=$(uci get shadowsocks.@servers[${i}].server)
if [ "$old_ip" != "$new_ip" ]; then
logger -t shadowsocks "detect $server ip address change ($old_ip -> $new_ip)"
restart=1
uci set shadowsocks.@servers[${i}].server=${new_ip}
fi
else
logger -t shadowsocks "nslookup $server fail"
fi
done
if [ $restart -eq 1 ]; then
logger -t shadowsocks "restart for server ip address change"
uci commit shadowsocks
/etc/init.d/shadowsocks restart
fi
You can use the cron utility. Cron is a time-based job scheduler in Unix-like operating systems; it allows you to run jobs/programs/scripts at specified times.
OpenWrt comes with a cron system by default, provided by BusyBox.
Cron is not enabled by default, so your jobs won't be run. To activate cron in OpenWrt:
/etc/init.d/cron start
/etc/init.d/cron enable
Ref: https://oldwiki.archive.openwrt.org/doc/howto/cron
Now, considering your question: if you want to run the mentioned script every day,
edit the cron table using the crontab -e command and add the line below:
0 0 * * * sh /path/to/your/script.sh
This entry will run your script at 00:00 (midnight, every day). You can easily modify it to schedule your job at any other time. A good reference for generating cron job entries: https://crontab.guru/
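A few illustrative variations of the schedule field (minute hour day-of-month month day-of-week):
0 0 * * * sh /path/to/your/script.sh      # every day at midnight
0 */6 * * * sh /path/to/your/script.sh    # every 6 hours
30 5 * * 1 sh /path/to/your/script.sh     # Mondays at 05:30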
To see if crontab is working properly:
tail -f /var/log/syslog | grep CRON
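Note that on OpenWrt there is usually no /var/log/syslog; BusyBox crond logs to the ring buffer instead, so an OpenWrt-specific alternative is:
logread -f | grep crond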
Now coming to your second question, "run the script at every reboot of the router":
You can put your script in /etc/rc.local. This file is executed as a shell script on every boot by /etc/rc.d/S95done in OpenWrt. So just add sh /path/to/your/script.sh to /etc/rc.local, and make sure your script is executable and does its task properly.
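For illustration, /etc/rc.local on OpenWrt would end up looking like this (keep its final exit 0):
# Put your custom commands here that should be executed once
# the system init finished.
sh /path/to/your/script.sh
exit 0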
My problem is that I want to execute a script inside my Jenkins pipeline, and the 'perf script' command does not work.
My script is:
#! /bin/bash
if test $# -lt 2
then
sudo perf record -F 99 -a -g -- sleep 20
sudo perf script > info.perf
echo "voila"
fi
exit 0
My Jenkins user can execute sudo, so that is not the problem, and in my own Linux shell this script works perfectly.
How can I solve this?
I solved this by adding the -i option to the perf script command:
sudo perf record -F 99 -a -g -- sleep 20
sudo perf script -i perf.data > info.perf
echo "voila"
It seems Jenkins is not able to read perf.data without the -i option.
If the redirection does not work within the script, try and see if it works within the DSL Jenkinsfile.
If you call that script with the sh step, it supports returnStdout (JENKINS-26133):
res = sh(returnStdout: true, script: '/path/to/your/bash/script').trim()
You could process the result directly in res, bypassing the need for a file.