How to exit my script with the sequence Ctrl+C when using timeout

Many thanks for any help.
When I run the command:
timeout 10 sh -c 'sleep 6; echo "Done"'
in my command line, I can abort the execution with the keyboard sequence Ctrl+C. However, when I encapsulate the command in a shell script:
#!/bin/bash
timeout 10 sh -c 'sleep 6; echo "Done"'
exit 0
the keyboard sequence Ctrl+C doesn't have the hoped-for effect, that is, aborting the shell script's execution. Instead, I have to wait 6 seconds before the shell script finishes.
Could you please advise me on how to achieve what I want: finishing the script as soon as the keyboard sequence is given?

The man page for timeout describes the --foreground option:
--foreground
When not running timeout directly from a shell prompt, allow COMMAND
to read from the TTY and receive TTY signals. In this mode, children of
COMMAND will not be timed out.
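Applied to the script from the question, the fix is a one-word change (a sketch; --foreground requires GNU coreutils timeout):

```shell
#!/bin/bash
# With --foreground, timeout leaves the command in the terminal's foreground
# process group, so the SIGINT generated by Ctrl+C reaches it and aborts the
# script immediately instead of after the 6-second sleep.
timeout --foreground 10 sh -c 'sleep 6; echo "Done"'
```

Note that timeout exits with status 124 when the command is actually timed out, which is useful if the script needs to distinguish a timeout from a normal exit.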

how to get the timestamps of a command execution in dockerfile

This is the normal way of doing it in a shell script:
starttime=$(date '+%d/%m/%Y %H:%M:%S')
#echo $starttime
# sleep for 5 seconds
sleep 5
# end time
endtime=$(date '+%d/%m/%Y %H:%M:%S')
#echo $endtime
STARTTIME=$(date -d "${starttime}" +%s)
ENDTIME=$(date -d "${endtime}" +%s)
RUNTIME=$((ENDTIME-STARTTIME))
echo "Seconds ${RUNTIME} in sec"
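The date -d round trip can be avoided by capturing epoch seconds directly with date +%s; a minimal sketch, where sleep 1 stands in for the real command being timed:

```shell
#!/bin/sh
# Capture start and end as seconds since the epoch, then subtract.
start=$(date +%s)
sleep 1            # stands in for the command being timed
end=$(date +%s)
echo "Seconds $((end - start)) in sec"
```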
I wanted the same thing in a Dockerfile: to get the timestamps before and after the execution of a command.
Could someone please help with this.
It is exactly the same. A RUN command runs an ordinary Bourne shell command line (wrapping it in sh -c). If you have this much scripting involved, you might consider writing it into a shell script, COPYing the script into your image, and then RUNning it.
If this is just for temporary diagnostics, and you don't need to calculate the time in seconds, you can just run date as is without the rest of the scripting.
RUN date; make; date # except this won't actually stop on failure
If you were especially motivated, you could take the script from the question, make it take a command as an argument, and write a wrapper script around it:
#!/bin/sh
starttime=$(date '+%d/%m/%Y %H:%M:%S')
sh -c "$@"
rc=$?
endtime=$(date '+%d/%m/%Y %H:%M:%S')
...
exit "$rc"
Then in your Dockerfile you can use the SHELL directive to make this wrapper run your RUN commands. You will rarely see them, but RUN commands written as JSON arrays bypass the SHELL and therefore bypass your script.
# must be executable and have a correct #!/bin/sh line
COPY timeit.sh /usr/local/bin
SHELL ["/usr/local/bin/timeit.sh"]
RUN make
RUN ["/bin/echo", "this will not be timed"]

How to run a shell script in the background in Jenkins

I have the command below, but it is not working; I see the process is created and then killed automatically:
BUILD_ID=dontKillMe nohup /Folder1/job1.sh > /Folder2/Job1.log 2>&1 &
Jenkins Output:
[ssh-agent] Using credentials user1 (private key for user1)
[job1] $ /bin/sh -xe /tmp/jenkins19668363456535073453.sh
+ BUILD_ID=dontKillMe
+ nohup /Folder1/job1.sh
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 8765 killed;
[ssh-agent] Stopped.
Finished: SUCCESS
I have run the command without nohup, so you may try:
sh """
BUILD_ID=dontKillMe /Folder1/job1.sh > /Folder2/Job1.log 2>&1 &
"""
In my case, I did not need to redirect STDERR to STDOUT because the process I was running captured all errors and displayed them on STDOUT directly.
We use daemonize for that. It properly starts the program in the background and does not kill it when the parent process (bash) exits.

Cron task inside container from host

I am trying to run a cron task inside a container from the host, but with no luck. From the host, I am adding the following line with crontab -e:
* * * * * docker exec -it sample_container bash -c 'touch /selected/directory/temp$(date +%H-%M)'
But this is not working. Interestingly, when I run the command independently, outside crontab, it executes successfully. Can anyone explain what I am missing here?
Note: when debugging such problems with cron, you should look for errors in your local system mail, or redirect them to your real mailbox by adding MAILTO=yourmail@yourdomain.com at the top of your crontab file.
There are two problems with your crontab command.
TL;DR: the fixed cron expression:
* * * * * docker exec sample_container bash -c 'touch /selected/directory/temp$(date +\%H-\%M)'
% has a special meaning in crontab
From man -s 5 crontab
Percent-signs (%) in the command, unless escaped with backslash (\),
will be changed into newline characters, and all data after the
first % will be sent to the command as standard input.
So you will need to escape those % signs in your date format string
Cron does not allocate a tty
Cron does not allocate a tty, whereas you are trying to use one when executing your command (the -t option to docker exec). The command will therefore fail with the error: the input device is not a TTY.
You do not need to go interactive (-i) or to allocate a tty for this command to do its job, so drop both options when launching it from cron.

Delayed jobs stop after some time

I have an application which relies heavily on delayed jobs, so I have set up two servers: one serves the application (an m3.medium EC2 instance) while the other runs my delayed jobs (a t2.micro EC2 instance). I have created start and stop scripts for the delayed jobs. This is where I am facing issues. The delayed jobs run smoothly, but the problem is that they stop automatically after some time, so every time they stop I have to start them again manually. I have no clue whatsoever why they stop in the middle of processing a job.
So basically I have two questions:
What can I do so that the jobs don't stop, or if they do they start automatically immediately/or after some time?
How can I make them start automatically on instance reboot/start?
I have looked at many similar questions, but none seem to help.
Any advice appreciated.
Edit 1:
My start/stop script for the delayed jobs.
set -e
# drop privs if necessary
if [ "$(id -u)" == "0" ]; then
exec su $(stat -c %U $(dirname $(readlink -f $0))/../config/environment.rb) -c "/bin/bash $0 $@"
exit -1;
fi
# switch to app root
cd $(dirname $(readlink -f $0))/..
# set up config
if [ -e "config/GEM_HOME" ]; then
export GEM_HOME=$(cat config/GEM_HOME)
fi
#export GEM_HOME=/path/to/gem/home
export RAILS_ENV=production
# run delayed jobs
exec script/delayed_job $#
# Following an article, I have tried adding the following code to restart on crash.
# restarting the service
respawn
#Give up if restart occurs 10 times in 90 seconds.
respawn limit 10 90
It seems you might be having a memory issue which is killing it. You can try Monit to automatically restart the job if it is killed.
ref: http://railscasts.com/episodes/375-monit
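For reference, a minimal Monit stanza for delayed_job might look like the following (a sketch only; the /path/to/app paths and the pidfile location are assumptions based on delayed_job's default layout, not taken from the question):

```
check process delayed_job with pidfile /path/to/app/tmp/pids/delayed_job.pid
  start program = "/path/to/app/script/delayed_job start"
  stop program  = "/path/to/app/script/delayed_job stop"
```

Monit then polls the pidfile on its regular cycle and reruns the start program whenever the process disappears.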
Alternative:
You may also use sidekiq instead of delayed job

Return from docker-compose up in Jenkins

I have a base image with JBoss copied onto it. JBoss is started with a script and takes around 2 minutes.
In my Dockerfile I have created a command:
CMD start_deploy.sh && tail -F server.log
I do the tail to keep the container alive; otherwise "docker-compose up" exits when the script finishes and the container stops.
The problem is that when I do "docker-compose up" through Jenkins, the build doesn't finish because of the tail, and I can't start the next build.
If I do "docker-compose up -d", then the next build starts too early and executes tests against the container, which hasn't started yet.
Is there a way to return from docker-compose up when the server has started completely?
Whenever you have chained commands or piped commands (|), it is easier to either:
wrap them in a script and use that script in your CMD directive:
CMD myscript
or wrap them in an sh -c command:
sh -c 'start_deploy.sh && tail -F server.log'
(but that last one depends on the ENTRYPOINT of the image; a default ENTRYPOINT should allow this CMD to work).
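One common workaround (a sketch, not from the question) is to start the stack detached and poll until the server answers before running the tests. The helper below is generic; the curl URL in the comment is an assumption about where JBoss listens, not something stated in the question:

```shell
#!/bin/sh
# wait_for N CMD...: retry CMD up to N times, one second apart,
# succeeding as soon as CMD does. Example use in a Jenkins build step:
#   docker-compose up -d
#   wait_for 120 curl -fs http://localhost:8080/   # assumed health URL
wait_for() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```

With this, the Jenkins build only proceeds once the server actually responds, instead of racing against the container startup.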
