I use a scheduler (Rufus scheduler) to launch a process called "ar_sendmail" (from ARmailer) every minute. The process should NOT be launched when such a process is already running, so as not to eat up memory.
How do I check to see if this process is already running? What goes after the unless below?
scheduler = Rufus::Scheduler.start_new

scheduler.every '1m' do
  unless #[what goes here?]
    fork { exec "ar_sendmail -o" }
    Process.wait
  end
end
unless `ps aux | grep ar_sendmai[l]` != ""
unless `pgrep -f ar_sendmail`.split("\n") != [Process.pid.to_s]
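For instance, a minimal sketch of how such a check might sit inside the scheduler block (this assumes nothing else on the machine, including this script's own command line, matches ar_sendmail):

scheduler.every '1m' do
  # pgrep prints matching pids, one per line; empty output means nothing is running
  if `pgrep -f ar_sendmail`.strip.empty?
    fork { exec "ar_sendmail -o" }
    Process.wait
  end
end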
This looks neater, I think, and uses a built-in Ruby module. Send a 0 kill signal (i.e. don't actually kill anything) to check whether the process exists:
# Check if a process is running
def running?(pid)
  Process.kill(0, pid)
  true
rescue Errno::ESRCH
  false
rescue Errno::EPERM
  true
end
Slightly amended from Quick dev tips. You might not want to rescue Errno::EPERM, which means "it's running, but you're not allowed to kill it".
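For example, running? could be wired into the scheduler with a pid file; a sketch, where the pid file path is only an assumption:

PID_FILE = "/tmp/ar_sendmail.pid"

scheduler.every '1m' do
  pid = File.exist?(PID_FILE) ? File.read(PID_FILE).to_i : nil
  unless pid && running?(pid)
    pid = fork { exec "ar_sendmail -o" }
    File.write(PID_FILE, pid.to_s)  # remember the child's pid for the next tick
    Process.wait(pid)
  end
end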
Related
I have a Rails application that runs background jobs using the Resque adapter. I have noticed that once every couple of days my workers disappear (they just stop), my jobs get stuck in the queue, and I have to restart the workers every time they stop.
I check using ps -e -o pid,command | grep [r]esque and launch workers in the background using
(RAILS_ENV=production PIDFILE=./resque.pid BACKGROUND=yes bundle exec rake resque:workers QUEUE='*' COUNT='12') 2>&1 | tee -a log/resque.log.
Then I stopped redis-server using /etc/init.d/redis-server stop and again checked the worker processes. They disappeared.
This suggests that the worker processes stop because the Redis server restarts for some reason.
Is there a Rails/Ruby solution to this problem? What comes to mind is writing a simple Ruby script that watches the worker processes at some interval, say every 5 seconds, and restarts them if they have stopped.
UPDATE:
I don't want to use tools such as Monit, God, eye, etc. They are not reliable: then I would need to watch them too, something like installing God to manage the Resque workers, then installing Monit to watch God, and so on.
UPDATE 2:
This is what I am using and it really works. I manually stopped redis-server and then started it again, and this script successfully relaunched the workers.
require 'logger'

module Watch
  # True when no Resque worker processes show up in ps
  def self.workers_dead?
    processes = `ps -e -o pid,command | grep [r]esque`
    return true if processes.empty?
    false
  end

  # Poll every time_interval seconds and restart the workers when they are gone
  def self.check(time_interval)
    logger = Logger.new('watch.log', 'daily')
    logger.info("Starting watch")
    while(true) do
      if workers_dead?
        logger.warn("Workers are dead")
        restart_workers(logger)
      end
      sleep(time_interval)
    end
  end

  def self.restart_workers(logger)
    logger.info("Restarting workers...")
    `cd /var/www/agts-api && (RAILS_ENV=production PIDFILE=./resque.pid BACKGROUND=yes rake resque:workers QUEUE='*' COUNT='12') 2>&1 | tee -a log/resque.log`
  end
end

Process.daemon(true, true)
pid_file = File.expand_path("#{__FILE__}.pid")  # pid file next to this script
File.open(pid_file, 'w') { |f| f.write Process.pid }
Watch.check 10
You can use process monitoring tools such as monit, god, eye, etc. These tools can check the Resque PID and memory usage at the interval you specify. You also have the option to restart the background processes if memory usage exceeds the limit you set. Personally, I use the eye gem.
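For illustration only, a minimal god config sketch (eye uses a similar Ruby DSL); the app path, rake task, log location, and memory limit here are assumptions to adapt:

# resque.god -- run with: god -c resque.god
God.watch do |w|
  w.name  = "resque"
  w.dir   = "/var/www/agts-api"                       # assumed app path
  w.env   = { "RAILS_ENV" => "production" }
  w.start = "bundle exec rake resque:work QUEUE=*"    # assumed worker task
  w.log   = "/var/www/agts-api/log/resque.log"
  w.keepalive(:memory_max => 300.megabytes)           # restart if memory exceeds the limit
end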
You could do it much more simply. Start Resque in the foreground; when it exits, start it again. No pid files, no monitoring, no sleep.
require 'logger'

class Restarter
  def initialize(cmd:, logger: Logger.new(STDOUT))
    @cmd = cmd
    @logger = logger
  end

  def start
    loop do
      @logger.info("Starting #{@cmd}")
      system(@cmd)
      @logger.warn("Process exited: #{@cmd}")
    end
  end
end

restarter = Restarter.new(
  cmd: "cd /var/www/agts-api && (RAILS_ENV=production rake resque:workers QUEUE='*' COUNT='12') 2>&1 | tee -a log/resque.log",
  logger: Logger.new('watch.log', 'daily')
)

restarter.start
Let's say we have a test job:
class TestJob < ActiveJob::Base
  queue_as :default

  def perform(video)
    video.process
  end
end
After that we run bundle exec sidekiq start, and in a new terminal window run, in the rails console:
Video.first.pending!; TestJob.perform_later(Video.first)
The job is running, ffmpeg is in the background, everything is fine. But following the official sidekiq wiki docs, I try:
require 'sidekiq/api'
Sidekiq::Queue.new
=> #<Sidekiq::Queue:0x00000006406978 @name="default", @rname="queue:default">
Sidekiq::Queue.new.each {|job| puts job}
=> nil
Sidekiq::Queue.new.size
=> 0
ss = Sidekiq::ScheduledSet.new
=> #<Sidekiq::ScheduledSet:0x000000064a4330 @_size=0, @name="schedule">
ss.size
=> 0
Why are there no jobs? The job runs successfully (I can also see it in the first window where sidekiq was started), but I can't see or delete it from the rails console.
I am using Ubuntu 14, if that helps.
UPDATE:
It looks like
ps = Sidekiq::ProcessSet.new; ps.each(&:quiet!)
works, but it doesn't stop the ffmpeg process that this part of the code spawns inside the job:
cmd = "ffmpeg -i #{input_file.shellescape} #{options} -threads 0 -y #{self.path + outfile}"
pid = spawn(cmd, :out => output_file, :err => output_file)
Process.wait(pid)
How do I stop it?
If a job is processing, it's not enqueued anymore.
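A job that is currently being worked on shows up in Sidekiq::Workers rather than in Sidekiq::Queue; a quick sketch (the exact keys inside work can vary between Sidekiq versions):

require 'sidekiq/api'

# Each entry is [process_id, thread_id, work]; work describes the job being run right now.
Sidekiq::Workers.new.each do |process_id, thread_id, work|
  puts "#{process_id} #{thread_id}: queue=#{work['queue']} payload=#{work['payload']}"
end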
I use the sidekiq web UI and various queues for my jobs. If I realize that a particular job is failing, I can clear out the queue of subsequent jobs of the same class.
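The same can be done from the console with Sidekiq's API; a sketch, with the queue name and job class as placeholders (for ActiveJob jobs the klass shown may be the ActiveJob wrapper class):

require 'sidekiq/api'

queue = Sidekiq::Queue.new("default")
queue.each { |job| job.delete if job.klass == "TestJob" }  # drop enqueued jobs of one class
# queue.clear                                              # or drop everything in the queue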
I want to run a binary through Ruby for a limited time. In my case it's airodump-ng, with the full command:
airodump-ng -c <mychan> mon0 --essid "my_wlan" --write capture
For those who don't know airodump-ng: normally it starts and never terminates on its own; it runs forever unless the user stops it by pressing Ctrl + C. This is not a problem at the bash prompt, but executing it through Ruby causes serious trouble. Is there a way to limit the time a binary is run by the system method?
Try the timeout library:
require 'timeout'

begin
  Timeout.timeout(30) do
    system('airodump-ng -c <mychan> mon0 --essid "my_wlan" --write capture')
  end
rescue Timeout::Error
  # timeout
end
You could use the Ruby spawn method. From the Ruby docs:
This method is similar to #system but it doesn’t wait for the command to finish.
Something like this:
# Start airodump
pid = spawn('airodump-ng -c <mychan> mon0 --essid "my_wlan" --write capture')
# Wait a little while...
sleep 60
# Then stop airodump (similar to pressing CTRL-C)
Process.kill("INT", pid)
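The two answers can also be combined so the process is only killed when it hasn't already exited on its own; a sketch (the 60-second limit is arbitrary):

require 'timeout'

pid = spawn('airodump-ng -c <mychan> mon0 --essid "my_wlan" --write capture')

begin
  # Give airodump-ng up to 60 seconds to finish by itself
  Timeout.timeout(60) { Process.wait(pid) }
rescue Timeout::Error
  Process.kill("INT", pid)  # equivalent to pressing CTRL-C
  Process.wait(pid)         # reap the child once it has stopped
end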
I start a script simple.rb with ruby simple.rb > log.txt &. I want it to run indefinitely. It runs for a while, but then pidof ruby does not return anything. The script stops running, and there is no error code or exit message in the log file. What happened? Do Ruby loops end eventually? I can restart the script from an endless bash loop when it ends, but I'm curious why it ends, and how I can find out when it doesn't emit an exit code or message.
def main_loop
  puts "Doing stuff.."
end

while true
  main_loop
  sleep 5.seconds
end
The #seconds is unnecessary and may be what is messing up your code: #sleep takes a plain number (Float or Integer, see http://apidock.com/ruby/Kernel/sleep), and 5.seconds is an ActiveSupport extension that raises NoMethodError in plain Ruby. Note also that errors go to STDERR, which your > log.txt redirection doesn't capture, so a crash would leave nothing in the log. Finally, set $stdout.sync = true so output is flushed to log.txt as it is written:
$stdout.sync = true
def main_loop
  puts "Doing stuff.."
end

while true
  main_loop
  sleep 5
end
I'm using Capistrano to run a remote task. My task looks like this:
task :my_task do
  run "my_command"
end
My problem is that if my_command has an exit status != 0, Capistrano considers it failed and exits. How can I make Capistrano keep going when the exit status is not 0? I've changed my_command to my_command;echo and it works, but it feels like a hack.
The simplest way is to just append true to the end of your command.
task :my_task do
  run "my_command"
end
Becomes
task :my_task do
  run "my_command; true"
end
For Capistrano 3, you can (as suggested here) use the following:
execute "some_command.sh", raise_on_non_zero_exit: false
The grep command exits non-zero based on what it finds. In the use case where you care about the output but don't mind if it's empty, you can discard the exit status silently:
run %Q{bash -c 'grep #{escaped_grep_command_args} ; true' }
Normally, I think the first solution is just fine -- I'd make it document itself, though:
cmd = "my_command with_args escaped_correctly"
run %Q{bash -c '#{cmd} || echo "Failed: [#{cmd}] -- ignoring."'}
You'll need to patch the Capistrano code if you want it to do different things with the exit codes; it's hard-coded to raise an exception if the exit status is not zero.
Here's the relevant portion of lib/capistrano/command.rb. The line that starts with if (failed... is the important one. Basically it says if there are any nonzero return values, raise an error.
# Processes the command in parallel on all specified hosts. If the command
# fails (non-zero return code) on any of the hosts, this will raise a
# Capistrano::CommandError.
def process!
  loop do
    break unless process_iteration { @channels.any? { |ch| !ch[:closed] } }
  end

  logger.trace "command finished" if logger

  if (failed = @channels.select { |ch| ch[:status] != 0 }).any?
    commands = failed.inject({}) { |map, ch| (map[ch[:command]] ||= []) << ch[:server]; map }
    message = commands.map { |command, list| "#{command.inspect} on #{list.join(',')}" }.join("; ")
    error = CommandError.new("failed: #{message}")
    error.hosts = commands.values.flatten
    raise error
  end

  self
end
I find this the easiest option:
run "my_command || :"
Note: : is the shell no-op command, so the exit code will simply be ignored.
I just redirect STDERR and STDOUT to /dev/null, so your
run "my_command"
becomes
run "my_command > /dev/null 2> /dev/null"
This works pretty well for standard unix tools, where, say, cp or ln could fail but you don't want to halt deployment on such a failure.
I'm not sure which version added this, but I like handling this problem by using raise_on_non_zero_exit:
namespace :invoke do
  task :cleanup_workspace do
    on release_roles(:app), in: :parallel do
      execute 'sudo /etc/cron.daily/cleanup_workspace', raise_on_non_zero_exit: false
    end
  end
end
Here is where that feature is implemented in the gem.
https://github.com/capistrano/sshkit/blob/4cfddde6a643520986ed0f66f21d1357e0cd458b/lib/sshkit/command.rb#L94