Ruby infinite while true ends silently - ruby-on-rails

I start a script simple.rb with ruby simple.rb > log.txt &. I want it to run infinitely. It runs for a while, but pidof ruby does not return anything. The script stops running, and there is no error code or exit msg in the log file. What happened? Do ruby loops end eventually? I can restart the ruby script when it ends from a bash endless loop, but I'm curious as to why this script ends, and how I can find out if it doesn't emit an exit code/msg.
def main_loop
  puts "Doing stuff.."
end

while true
  main_loop
  sleep 5.seconds
end

The #seconds is unnecessary and is almost certainly what's killing your script: Integer#seconds comes from ActiveSupport, so in plain Ruby 5.seconds raises NoMethodError and the loop dies on its first iteration. #sleep just takes a number (float or integer). See http://apidock.com/ruby/Kernel/sleep . Also, the error message goes to stderr, which > log.txt doesn't capture; start the script with ruby simple.rb > log.txt 2>&1 & and you'll see it. Setting $stdout.sync = true additionally makes the output show up in the log immediately instead of sitting in the buffer:
$stdout.sync = true

def main_loop
  puts "Doing stuff.."
end

while true
  main_loop
  sleep 5
end
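If you're outside Rails, you can confirm that Integer#seconds is the culprit (ActiveSupport defines it; plain Ruby doesn't). A quick check, not tied to the script above:

```ruby
# In plain Ruby (no ActiveSupport loaded), Integer#seconds doesn't exist,
# so the original `sleep 5.seconds` dies with NoMethodError at once.
error = begin
          sleep 5.seconds
          nil
        rescue NoMethodError => e
          e
        end
```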


ruby pipes, IO and stderr redirect

I am looking to have a ruby program (a rake task) observe an output from another rake task. The output writer outputs to stderr. I'd like to read those lines. I'm having difficulty setting it up. If I have a writer (stdout_writer.rb) that constantly prints something:
#!/usr/bin/env ruby
puts 'writing...'
while true
  $stdout.puts '~'
  sleep 1
end
and a file that reads it and echoes (stdin_reader.rb):
#!/usr/bin/env ruby
puts 'reading...'
while input = ARGF.gets
  puts input
  input.each_line do |line|
    begin
      $stdout.puts "got line #{line}"
    rescue Errno::EPIPE
      exit(74)
    end
  end
end
and I'm trying to have them work together, nothing happens:
$ ./stdout_writer.rb 2>&1 | ./stdin_reader.rb
$ ./stdout_writer.rb | ./stdin_reader.rb
nothing... although if I just echo into stdin_reader.rb, I get what I expect:
piousbox#e7440:~/projects/opera_events/sendgrid-example-operaevent$ echo "ok true" | ./stdin_reader.rb
reading...
ok true
got line ok true
piousbox#e7440:~/projects/opera_events/sendgrid-example-operaevent$
so how would I set up a script that gets stderr piped into it, so that it can read it line by line? Additional info: this will be an Ubuntu upstart service, script1.rb | script2.rb, where script1 sends a message and script2 verifies that the message was sent by script1.
The issue is that stdout_writer's STDOUT is block-buffered when it's attached to a pipe, so stdin_reader never gets anything until the buffer fills or stdout_writer exits; since stdout_writer runs infinitely, that never happens. I tested this by changing while true to 5.times do. If you do that, and wait 5 seconds, the result of ./stdout_writer.rb | ./stdin_reader.rb is
reading...
writing...
got line writing...
~
got line ~
~
got line ~
~
got line ~
~
got line ~
~
got line ~
This isn't an issue with your code itself, but with the way stdio buffering works when STDOUT is a pipe rather than a terminal.
Also, I don't think I've ever learned as much as I learned researching this question. Thank you for the fun exercise.
The output from stdout_writer.rb is being buffered by Ruby, so the reader process doesn’t see it. If you wait long enough, you should see the result appear in chunks.
You can turn buffering off and get the result you’re expecting by setting sync to true on $stdout at the start of stdout_writer.rb:
$stdout.sync = true
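The effect is easy to demonstrate in a single process; here IO.pipe stands in for the shell's | (a sketch, not the original scripts):

```ruby
# With sync = true, each write is flushed straight into the pipe, so the
# reading end sees it immediately; without it, small writes can sit in
# Ruby's internal buffer until the writer exits or the buffer fills.
reader, writer = IO.pipe
writer.sync = true
writer.puts '~'
line = reader.gets  # returns right away because the write was flushed
writer.close
reader.close
```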

How can I execute a binary through ruby for a limited time?

I want to run a binary through ruby for a limited time. In my case it's airodump-ng, with the full command of:
airodump-ng -c <mychan> mon0 --essid "my_wlan" --write capture
For those who don't know airodump-ng: normally it starts and never terminates by itself; it runs forever unless the user stops it by pressing Ctrl + C. That isn't a problem at the bash prompt, but executing it through ruby causes serious trouble. Is there a way to limit the time a binary is run by the system method?
Try the timeout library:
require 'timeout'

begin
  Timeout.timeout(30) do
    system('airodump-ng -c <mychan> mon0 --essid "my_wlan" --write capture')
  end
rescue Timeout::Error
  # timed out; note the child process started by system is not killed
  # automatically, so you still have to stop it yourself
end
You could use the Ruby spawn method. From the Ruby docs:
This method is similar to #system but it doesn’t wait for the command to finish.
Something like this:
# Start airodump
pid = spawn('airodump-ng -c <mychan> mon0 --essid "my_wlan" --write capture')
# Wait a little while...
sleep 60
# Then stop airodump (similar to pressing CTRL-C)
Process.kill("INT", pid)
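The two answers can be combined so the time limit applies, but the wait returns early if the process exits on its own. A sketch, with `sleep` standing in for airodump-ng:

```ruby
require 'timeout'

# Run cmd for at most limit seconds. If it is still alive when the limit
# expires, send it INT (like pressing Ctrl-C) and reap it to avoid a zombie.
def run_with_limit(cmd, limit)
  pid = spawn(cmd)
  Timeout.timeout(limit) { Process.wait(pid) }
  :finished
rescue Timeout::Error
  Process.kill('INT', pid)
  Process.wait(pid)
  :killed
end
```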

How to check for a running process with Ruby?

I use a scheduler (Rufus scheduler) to launch a process called "ar_sendmail" (from ARmailer), every minute.
The process should NOT be launched when there is already such a process running in order not to eat up memory.
How do I check to see if this process is already running? What goes after the unless below?
scheduler = Rufus::Scheduler.start_new
scheduler.every '1m' do
  unless #[what goes here?]
    fork { exec "ar_sendmail -o" }
    Process.wait
  end
end
unless `ps aux | grep ar_sendmai[l]` != ""
unless `pgrep -f ar_sendmail`.split("\n") != [Process.pid.to_s]
This looks neater, I think, and uses a built-in Ruby module. Send a 0 kill signal (i.e. don't actually kill anything):
# Check if a process is running
def running?(pid)
  Process.kill(0, pid)
  true
rescue Errno::ESRCH
  false
rescue Errno::EPERM
  true
end
Slightly amended from Quick dev tips. You might not want to rescue EPERM, which means "it's running, but you're not allowed to kill it".

unix at command pass variable to shell script?

I'm trying to setup a simple timer that gets started from a Rails Application. This timer should wait out its duration and then start a shell script that will start up ./script/runner and complete the initial request. I need script/runner because I need access to ActiveRecord.
Here's my test lines in Rails
output = `at #{(Time.now + 60).strftime("%H:%M")} < #{Rails.root}/lib/parking_timer.sh STRING_VARIABLE`
return render :text => output
Then my parking_timer.sh looks like this
#!/bin/sh
~/PATH_TO_APP/script/runner -e development ~/PATH_TO_APP/lib/ParkingTimer.rb $1
echo "All Done"
Finally, ParkingTimer.rb reads the passed variable with
ARGV.each do |a|
  puts "Argument: #{a}"
end
The problem is that the Unix command "at" doesn't seem to like variables and only wants to deal with filenames. I get one of two errors depending on how I position the quotes.
If I put quotes around the right hand side like so
... "~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE"
I get,
-bash: ~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE: No such file or directory
If I leave the quotes out, I get,
at: garbled time
This is all happening on a Mac OS 10.6 box running Rails 2.3 & Ruby 1.8.6
I've already messed around w/ BackgrounDrb, and decided its a total PITA. I need to be able to cancel the job at any time before it is due.
After playing around with irb a bit, here's what I found.
The backtick operator invokes the shell after ruby has done any interpretation necessary. For my test case, the strace output looked something like this:
execve("/bin/sh", ["sh", "-c", "echo at 12:57 < /etc/fstab"], [/* 67 vars */]) = 0
Since we know what it's doing, let's take a look at how your command will be executed:
/bin/sh -c "at 12:57 < RAILS_ROOT/lib/parking_timer.sh STRING_VARIABLE"
That looks very odd. Do you really want to pipe parking_timer.sh, the script, as input into the at command?
What you probably ultimately want is something like this:
/bin/sh -c "RAILS_ROOT/lib/parking_timer.sh STRING_VARIABLE | at 12:57"
Thus, the output of the parking_timer.sh command will become the input to the at command.
So, try the following:
output = `#{Rails.root}/lib/parking_timer.sh STRING_VARIABLE | at #{(Time.now + 60).strftime("%H:%M")}`
return render :text => output
You can always use strace or truss to see what's happening. For example:
strace -o strace.out -f -ff -p $IRB_PID
Then grep '^exec' strace.out* to see where the command is being executed.
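Since at reads the commands to run from its stdin, another option is to pipe the command line itself (script plus argument) into at, rather than redirecting the script file. A sketch that only builds the shell line (Shellwords is stdlib; the paths are the question's placeholders):

```ruby
require 'shellwords'

# Build a shell line that schedules `script arg` via at. The command arrives
# on at's stdin, so the argument survives instead of confusing at's parser.
def schedule_with_at(script, arg, time_str)
  command = "#{script} #{Shellwords.escape(arg)}"
  "echo #{Shellwords.escape(command)} | at #{time_str}"
end
```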

Capistrano & Bash: ignore command exit status

I'm using Capistrano run a remote task. My task looks like this:
task :my_task do
  run "my_command"
end
My problem is that if my_command has an exit status != 0, then Capistrano considers it failed and exits. How can I make Capistrano keep going when the exit status is not 0? I've changed my_command to my_command;echo and it works, but it feels like a hack.
The simplest way is to just append true to the end of your command.
task :my_task do
  run "my_command"
end

Becomes
task :my_task do
  run "my_command; true"
end
For Capistrano 3, you can (as suggested here) use the following:
execute "some_command.sh", raise_on_non_zero_exit: false
The grep command exits non-zero based on what it finds. In the use case where you care about the output but don't mind if it's empty, you can discard the exit status silently:
run %Q{bash -c 'grep #{escaped_grep_command_args} ; true' }
Normally, I think the first solution is just fine; I'd make it document itself, though:
cmd = "my_command with_args escaped_correctly"
run %Q{bash -c '#{cmd} || echo "Failed: [#{cmd}] -- ignoring."'}
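Outside Capistrano, the same pattern can be checked with backticks (the command here is a placeholder for "my_command with_args escaped_correctly"):

```ruby
# The || echo branch both masks the failure (echo exits 0) and leaves a
# breadcrumb in the output saying which command was ignored.
cmd = 'false'  # placeholder failing command
output = `bash -c '#{cmd} || echo "Failed: [#{cmd}] -- ignoring."'`
```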
You'll need to patch the Capistrano code if you want it to do different things with the exit codes; it's hard-coded to raise an exception if the exit status is not zero.
Here's the relevant portion of lib/capistrano/command.rb. The line that starts with if (failed is the important one. Basically it says: if there are any nonzero return values, raise an error.
# Processes the command in parallel on all specified hosts. If the command
# fails (non-zero return code) on any of the hosts, this will raise a
# Capistrano::CommandError.
def process!
  loop do
    break unless process_iteration { @channels.any? { |ch| !ch[:closed] } }
  end
  logger.trace "command finished" if logger
  if (failed = @channels.select { |ch| ch[:status] != 0 }).any?
    commands = failed.inject({}) { |map, ch| (map[ch[:command]] ||= []) << ch[:server]; map }
    message = commands.map { |command, list| "#{command.inspect} on #{list.join(',')}" }.join("; ")
    error = CommandError.new("failed: #{message}")
    error.hosts = commands.values.flatten
    raise error
  end
  self
end
I find this the easiest option:
run "my_command || :"
Notice: : is the shell's no-op command, so the exit code will simply be ignored.
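That's easy to verify from Ruby, since system returns true only for a zero exit status:

```ruby
# `:` is the shell's no-op builtin and always exits 0, so `cmd || :`
# (like `cmd ; true`) masks a failing command's exit status.
plain  = system('false')        # failing status visible to the caller
masked = system('false || :')   # status masked by the no-op
```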
I just redirect STDERR and STDOUT to /dev/null and force a zero status, so your
run "my_command"
becomes
run "my_command > /dev/null 2>&1 || true"
(redirecting the output alone doesn't change the exit status; the || true is what keeps Capistrano going). This works pretty well for standard unix tools, where, say, cp or ln could fail, but you don't want to halt deployment on such a failure.
I'm not sure which version added this, but I like handling this problem by using raise_on_non_zero_exit:
namespace :invoke do
  task :cleanup_workspace do
    on release_roles(:app), in: :parallel do
      execute 'sudo /etc/cron.daily/cleanup_workspace', raise_on_non_zero_exit: false
    end
  end
end
Here is where that feature is implemented in the gem.
https://github.com/capistrano/sshkit/blob/4cfddde6a643520986ed0f66f21d1357e0cd458b/lib/sshkit/command.rb#L94
