I have used Jenkins to build and deploy my artifacts to a server. After deploying the files I stop the service using a kill command:
kill -9 $(pgrep -f service-name)
Note that the service is killed, but the Jenkins job fails with status code -1, although this command works fine when I run it in a shell on the Linux server without Jenkins.
Why do I get the -1 exit status?
And how can I kill a process on the Linux server through a Jenkins job without failing the build?
Edit: below is the log that appears after adding -x to my script:
#!/bin/bash -x
pid=$(pgrep -f service-name); echo "killing $pid"; kill -9 $pid;
[SSH] executing...
killing 13664
16924
16932
[SSH] completed
[SSH] exit-status: -1
Build step 'Execute shell script on remote host using ssh' marked build as failure
Email was triggered for: Failure - Any
Edit: the output of the command ps -ef | grep service-name is:
ps -ef | grep service-name
[SSH] executing...
channel stopped
user 11786 11782 0 15:28 ? 00:00:00 bash -c ps -ef | grep service-name
user 11799 11786 0 15:28 ? 00:00:00 grep service-name
root 19981 11991 0 Aug15 pts/1 00:02:53 java -jar /root/service-name /spring.config.location=/root/service-name/application.properties
[SSH] completed
--- the output of the trial script:
#!/bin/bash -x
ps -ef | grep service-name
pgrep -f "java -jar /root/service-name --spring.config.location=/root/service-name/application.properties" | while read pid; do
ps -ef | grep $pid
kill -9 $pid
echo "kill command returns $?"
done
[SSH] executing...
channel stopped
root 56980 11991 37 11:03 pts/1 00:00:33 java -jar /root/service-name --spring.config.location=/root/service-name/application.properties
root 57070 57062 0 11:05 ? 00:00:00 bash -c #/bin/bash -x ps -ef | grep service-name pgrep -f "java -jar /root/service-name --spring.config.location=/root/service-name/application.properties" | while read pid; do ps -ef | grep $pid kill -9 $pid echo "kill command returns $?" done
root 57079 57070 0 11:05 ? 00:00:00 grep service-name
root 56980 11991 37 11:03 pts/1 00:00:33 java -jar /root/service-name --spring.config.location=/root/service-name/application.properties
root 57083 57081 0 11:05 ? 00:00:00 grep 56980
kill command returns 0
root 57070 57062 0 11:05 ? 00:00:00 bash -c #/bin/bash -x ps -ef | grep service-name pgrep -f "java -jar /root/service-name --spring.config.location=/root/service-name/application.properties" | while read pid; do ps -ef | grep $pid kill -9 $pid echo "kill command returns $?" done
root 57081 57070 0 11:05 ? 00:00:00 bash -c #/bin/bash -x ps -ef | grep service-name pgrep -f "java -jar /root/service-name --spring.config.location=/root/service-name/application.properties" | while read pid; do ps -ef | grep $pid kill -9 $pid echo "kill command returns $?" done
root 57085 57081 0 11:05 ? 00:00:00 grep 57070
kill command returns 0
root 57081 1 0 11:05 ? 00:00:00 bash -c #/bin/bash -x ps -ef | grep service-name pgrep -f "java -jar /root/service-name --spring.config.location=/root/service-name/application.properties" | while read pid; do ps -ef | grep $pid kill -9 $pid echo "kill command returns $?" done
root 57086 57081 0 11:05 ? 00:00:00 ps -ef
root 57087 57081 0 11:05 ? 00:00:00 grep 57081
[SSH] completed
[SSH] exit-status: -1
If you want to kill any process whose full command line matches service-name, you should change your script:
#!/bin/bash
pgrep -f service-name | while read pid; do
ps -ef | grep $pid # so you can see what you are going to kill
kill -9 $pid
done
The pgrep command returns a list of matching process IDs, one per line.
To get the PID list separated by spaces and call the kill command only once:
#!/bin/bash
kill -9 $(pgrep -f service-name -d " ")
To see which processes pgrep selects, use:
pgrep -a -f service-name
or
ps -ef | grep service-name
Use man pgrep to see all the options.
In your case the job is killed because pgrep also matches the job's own script, so you should use a more specific pattern together with the -x parameter:
#!/bin/bash
pgrep -xf "java -jar /root/service-name --spring.config.location=/root/service-name/application.properties" | while read pid; do
kill -9 $pid
done
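A further guard worth sketching (not part of the answer above; the dummy process and names here are illustrative): skip the script's own PID explicitly, and end the step with a zero status so a non-matching pgrep cannot mark the Jenkins build as failed.

```shell
#!/bin/bash
# Dummy stand-in for the service, renamed with a unique marker so the
# demo pattern cannot match anything else on the machine.
bash -c 'exec -a "$0" sleep 293' "service_demo_$$" &
svc=$!
self=$$
for pid in $(pgrep -f "service_demo_$$"); do
    # never kill the Jenkins job's own shell
    [ "$pid" = "$self" ] && continue
    kill -9 "$pid" 2>/dev/null || true
done
wait "$svc"; status=$?   # 137 = 128 + SIGKILL, i.e. the kill landed
# end with a zero status so the build step is not marked as failed
true
```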
Related
I'm using a shell script that tries to create a directory inside a running container, but it fails with a "binary file not found" error.
Here is an example script:
#!/bin/sh
set -x
CONTAINER_ID=`docker ps | grep postgres | awk '{print $1}'`
docker exec -it $CONTAINER_ID bash mkdir /backup
Try this:
#!/bin/sh
set -x
CONTAINER_ID=`docker ps | grep postgres | awk '{print $1}'`
docker exec -it $CONTAINER_ID sh -c "mkdir /backup"
The sh -c "mkdir /backup" should work.
If your Docker image has bash inside it, then try bash -c "mkdir /backup".
I tried it on my end and got the desired result.
$ sh script.sh
+ docker ps
+ awk '{print $1}'
+ grep inspiring_sinoussi
+ CONTAINER_ID=08a35fa3c040
+ docker exec -it 08a35fa3c040 sh -c 'mkdir /backup'
$ docker exec -it 08a35fa3c040 sh
/ # ls / | grep backup
backup
I have a job like this:
parameterized with ${GIT_URL} and ${REMOTE_IP}.
clone the code from the git URL and package my project as a jar
scp the jar file to the remote IP, then start it as a server.
I am using Publish Over SSH Plugin.
The problem is, I have to add every server to my job configuration.
So is it possible to execute a shell step with a parameterized remote IP, like this?
#!/bin/sh
scp ${APP_NAME}.jar root@${REMOTE_IP}:/root/${APP_NAME}.jar
ssh root@${REMOTE_IP}
cd /root
ps -ef | grep ${APP_NAME} | grep -v grep | awk '{print $2}' | xargs kill
nohup java -jar ${APP_NAME}.jar &
Yes. Use "$REMOTE_IP" to resolve it to the parameter value.
#!/bin/sh
scp ${APP_NAME}.jar root@"$REMOTE_IP":/root/${APP_NAME}.jar
ssh root@"$REMOTE_IP"
cd /root
ps -ef | grep ${APP_NAME} | grep -v grep | awk '{print $2}' | xargs kill
nohup java -jar ${APP_NAME}.jar &
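One caveat about the scripts above, shown here as a local sketch with plain sh standing in for ssh: lines written after a bare ssh command run on the local machine, not the remote one, because ssh only executes what is given on its command line or its stdin. Feeding the commands through a heredoc makes the boundary visible:

```shell
#!/bin/sh
# Commands inside the heredoc run in the child shell (the "remote"
# side in the ssh case); anything after it runs in the parent again.
sh <<'EOF'
cd /tmp
echo "child pwd: $(pwd)"
EOF
echo "back in the parent script"
```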
I solved this in another way.
#!/bin/sh
scp ${APP_NAME}.jar root@${REMOTE_IP}:/root/${APP_NAME}.jar
ssh root@${REMOTE_IP} "sh -s" -- < /opt/jenkins/my.sh ${REMOTE_IP} ${APP_NAME}
So my.sh is a local shell file that defines how to start the jar as a server, taking the IP as a parameter.
I found a video by Packt Publishing about setting up the Docker remote API.
In the video we are told to change the /etc/init/docker.conf file by adding "-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock" to DOCKER_OPTS=. Then we have to restart Docker for the changes to take effect.
However, after doing all that, I still can't curl localhost on that port. Doing so returns:
vagrant@vagrant-ubuntu-trusty-64:~$ curl localhost:4243/_ping
curl: (7) Failed to connect to localhost port 4243: Connection refused
I'm relatively new to Docker; if somebody could help me out here I'd be very grateful.
Edit:
docker.conf
description "Docker daemon"
start on (filesystem and net-device-up IFACE!=lo)
stop on runlevel [!2345]
limit nofile 524288 1048576
limit nproc 524288 1048576
respawn
kill timeout 20
pre-start script
# see also https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
if grep -v '^#' /etc/fstab | grep -q cgroup \
|| [ ! -e /proc/cgroups ] \
|| [ ! -d /sys/fs/cgroup ]; then
exit 0
fi
if ! mountpoint -q /sys/fs/cgroup; then
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
fi
(
cd /sys/fs/cgroup
for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
mkdir -p $sys
if ! mountpoint -q $sys; then
if ! mount -n -t cgroup -o $sys cgroup $sys; then
rmdir $sys || true
fi
fi
done
)
end script
script
# modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
DOCKER=/usr/bin/$UPSTART_JOB
DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock"
if [ -f /etc/default/$UPSTART_JOB ]; then
. /etc/default/$UPSTART_JOB
fi
exec "$DOCKER" daemon $DOCKER_OPTS
end script
# Don't emit "started" event until docker.sock is ready.
# See https://github.com/docker/docker/issues/6647
post-start script
DOCKER_OPTS=
if [ -f /etc/default/$UPSTART_JOB ]; then
"/etc/init/docker.conf" 60L, 1582C
EDIT2: Output of ps aux | grep docker
vagrant@vagrant-ubuntu-trusty-64:~$ ps aux | grep docker
root 858 0.2 4.2 401836 21504 ? Ssl 06:12 0:00 /usr/bin/docker daemon --insecure-registry 11.22.33.44:5000
vagrant 1694 0.0 0.1 10460 936 pts/0 S+ 06:15 0:00 grep --color=auto docker
The problem
The output of ps aux | grep docker shows that the options the daemon was started with do not match the ones in the docker.conf file, so another file is being used to start the Docker daemon service.
Solution
To solve this, track down the file that contains the option "--insecure-registry 11.22.33.44:5000" (it may be /etc/default/docker, /etc/init/docker.conf, /etc/systemd/system/docker.service, or somewhere else) and modify it with the needed options.
Then restart the daemon and you're good to go!
The docker client's docker ps command has a very useful flag, -l, which shows the container that was run most recently. However, all other docker commands require either a CONTAINER ID or a NAME.
Is there a nice trick that would allow calling:
docker logs -f -l
instead of:
docker logs -f random_name
You can use docker logs -f `docker ps -ql`
For the last container
docker ps -n 1
or variants such as
docker ps -qan 1
can be handy
After a while playing with the Docker tutorial, I created a small set of aliases:
alias docker_last="docker ps -l | tail -n +2 | awk '{ print \$(NF) }' | xargs docker $1"
alias docker_all="docker ps -a | tail -n +2 | awk '{ print \$(NF) }' | xargs docker $1"
alias docker_up="docker ps | tail -n +2 | awk '{ print \$(NF) }' | xargs docker $1"
alias docker_down="docker ps -a | tail -n +2 | grep -v Up | awk '{ print \$(NF) }' | xargs docker $1"
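A variant sketch (the function names are mine, mirroring the aliases): shell functions handle the argument more cleanly, since the trailing $1 in an alias is substituted when the alias is defined (where it is empty), not when it is invoked; the aliases above only work because the argument happens to be appended after expansion anyway.

```shell
# Same pipelines as the aliases, but as functions taking "$@" explicitly.
docker_last() { docker ps -l | tail -n +2 | awk '{print $NF}' | xargs docker "$@"; }
docker_all()  { docker ps -a | tail -n +2 | awk '{print $NF}' | xargs docker "$@"; }
docker_up()   { docker ps    | tail -n +2 | awk '{print $NF}' | xargs docker "$@"; }
docker_down() { docker ps -a | tail -n +2 | grep -v Up | awk '{print $NF}' | xargs docker "$@"; }
```

Usage is the same as with the aliases, e.g. docker_last logs or docker_down rm.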
These allow calling a command on the last, all, running, and stopped containers:
docker_last logs # Display logs from last created container
docker_down rm # Remove all stopped containers
docker_up stop # Stop all running containers
On my OS X development machine, sometimes I start a Rails server and, due to an error or mishap, I get the prompt back but the server is still running.
It happens often enough that I wrote a shell script to handle it...
~/bin/krr_kill_rails_processes.sh
#!/bin/bash
echo "Rails processes:"
ps aux | grep -ie rails | awk '{print}'
ps aux | grep -ie rails | awk '{print $2}' | xargs kill -9
It works, but it's messy...
$ krr_kill_rails_processes.sh
Rails processes:
jimpie 76575 0.0 0.0 2432768 632 s002 S+ 4:46PM 0:00.00 grep -ie rails
jimpie 76573 0.0 0.0 2433432 968 s002 S+ 4:46PM 0:00.00 sh /Users/jimpie/bin/krr_kill_rails_processes.sh
jimpie 76426 0.0 0.6 3140040 95144 s001 S+ 4:42PM 0:04.71 /Users/jimpie/.rvm/rubies/ruby-1.9.3-p327/bin/ruby script/rails s
kill: 76578: No such process
[1] 76573 killed krr_kill_rails_processes.sh
How can I improve it so that...
It doesn't find and kill itself.
It doesn't find and kill the grep command.
It doesn't emit that "No such process" error.
(Any other suggested improvements...)
In case it's relevant, here's the output when I start the Rails server...
$ bundle exec rails s
=> Booting Thin
=> Rails 3.2.9 application starting in development on http://0.0.0.0:3000
=> Call with -d to detach
=> Ctrl-C to shutdown server
>> Thin web server (v1.5.0 codename Knife)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:3000, CTRL+C to stop
If the intent is to kill only the server of one Rails project, you can use the process ID stored in the tmp/pids/server.pid file:
[ -f "<project-dir>/tmp/pids/server.pid" ] && kill -9 `cat "<project-dir>/tmp/pids/server.pid"`
If you want to keep your grep approach, you can use this trick to prevent the grep command from showing up in its own results:
ps aux | grep "[r]ails"
It doesn't find and kill the grep command.
Instead of grep -ie rails, you can use grep -ie "[r]ails":
#!/bin/bash
echo "Rails processes:"
ps aux | grep -ie "[r]ails" | awk '{print}'
ps aux | grep -ie "[r]ails" | awk '{print $2}' | xargs kill -9
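A minimal demonstration of why the bracket trick works: the regex [r]ails matches the text "rails", but it does not match the literal string "[r]ails" that appears in the grep process's own command line, so grep stops finding itself in the ps output.

```shell
# The regex matches the plain text...
echo "rails"   | grep -c "[r]ails"          # prints 1
# ...but not its own bracketed pattern string.
echo "[r]ails" | grep -c "[r]ails" || true  # prints 0
```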
If your OS has pkill, you can use pkill -9 -f rails to kill Rails; no script needed. (Note the signal and -f as separate flags; the BSD pkill on OS X rejects a combined -9f.)
For more details, see https://developer.apple.com/library/mac/documentation/Darwin/Reference/Manpages/man1/pkill.1.html
Instead of starting and killing Rails in the shell, I suggest you take a look at Pow. No configuration or maintenance required; all you need is to put a symbolic link inside the .pow folder, and you can access "your_project_name.dev" (no hosts-file change required).
Even better, there is a small GUI application available to manage it: Anvil.
Largely from http://soziotechnischemassnahmen.blogspot.com/2010/03/poor-mans-pgrep-on-mac-os-x.html
Try
ps -axo pid,command | awk '$NF == "script/rails" {print $1}' | xargs kill
Also, strongly agree with their suggestion of installing proctools and just doing pkill rails.
e.g.
> cat input.txt
76575 grep
76573 /Users jimpie/bin/krr_kill_rails_processes.sh
76426 /Users jimpie/.rvm/rubies/ruby-1.9.3-p327/bin/ruby script/rails
> awk '$NF == "script/rails" {print $1}' input.txt
76426
Thanks for the various suggestions.
Turns out my OS X came with pkill, but it isn't working for me...
$ ps aux | grep -e rails
jimpie 77530 0.0 0.7 3178332 122492 ?? S Sun05PM 0:35.54 /Users/jimpie/.rvm/rubies/ruby-1.9.3-p327/bin/ruby script/rails s
jimpie 83891 0.0 0.0 2432768 608 s000 R+ 1:23PM 0:00.00 grep -e rails
$ pkill rails
$ ps aux | grep -e rails
jimpie 77530 0.0 0.7 3178332 122492 ?? S Sun05PM 0:35.55 /Users/jimpie/.rvm/rubies/ruby-1.9.3-p327/bin/ruby script/rails s
jimpie 83906 0.0 0.0 2432768 624 s000 R+ 1:23PM 0:00.00 grep -e rails
$ pkill -9 rails
$ ps aux | grep -e rails
jimpie 77530 0.0 0.7 3178332 122492 ?? S Sun05PM 0:35.55 /Users/jimpie/.rvm/rubies/ruby-1.9.3-p327/bin/ruby script/rails s
jimpie 83923 0.0 0.0 2432768 612 s000 R+ 1:23PM 0:00.00 grep -e rails
$ pkill -9f rails
pkill: illegal option -- 9
usage: pkill [-signal] [-ILfilnovx] [-F pidfile] [-G gid]
[-P ppid] [-U uid] [-g pgrp]
[-t tty] [-u euid] pattern ...
For future reference: I eventually discovered grep's -v option and worked out a script that does what I want.
~/bin/krr_kill_rails_processes.sh
#!/bin/bash
echo "Killing Rails processes..."
ps aux | grep -ie rails | grep -v 'grep' | grep -v 'krr' | awk '{print}'
ps aux | grep -ie rails | grep -v 'grep' | grep -v 'krr' | awk '{print $2}' | xargs kill -9
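For reference, the pkill route that failed above does work once the flags are separated: by default pkill matches only the process name (here ruby, which is why plain pkill rails found nothing), while -f matches the full command line. A sketch with a dummy, uniquely named process standing in for the Rails server:

```shell
#!/bin/bash
# Dummy long-running process, renamed with a unique marker so the
# demo pattern cannot match anything else on the machine.
bash -c 'exec -a "$0" sleep 297' "rails_demo_$$" &
svc=$!
# -9 and -f must be separate flags for the BSD pkill on OS X;
# -f matches against the full command line, not just the name.
pkill -9 -f "rails_demo_$$" 2>/dev/null || true
wait "$svc"; status=$?   # 137 = 128 + SIGKILL
```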