How to run a non-blocking command from Ruby? - ruby-on-rails

- User goes to page A to create a new multiplayer game.
- The script in page A generates a unique ID for the game, and creates a worker for it. Something like: rails runner GameWorker.new(:game_id => game_id).start_game
- The script in page A redirects the user to page B, where he can see the newly created game, and others can join.
- The worker should stay alive until the end of the game.
What would be the proper way to run the command that starts the worker? It must be non-blocking and should ideally redirect output to a log file, in case something goes wrong.
I'm using Rails 3, if it matters.
UPDATE
I'm going to rephrase my question: how can I run a Linux command from within Ruby without waiting for the command to end? I mean the equivalent of &>> in the shell. In PHP, for instance, &>> works fine and I don't need to use any special PHP function, but in Ruby the command seems to get captured by backticks, so the script waits for it to end and grabs the output.

I HIGHLY recommend not running a process per game. If you want a non-blocking game that is not turn-based, then you probably want to look at EventMachine, or something like https://github.com/celluloid/celluloid-io
With either, you'll be creating threads that you'll process at future points in time.
But -- if you do want to just fire off a process in Ruby, here you go, from "How to fire and forget a subprocess?":
pid = Process.fork
if pid.nil?
  # In child
  exec "whatever --take-very-long"
else
  # In parent
  Process.detach(pid)
end
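If you're on Ruby 1.9+, Process.spawn covers both halves of the question (fire-and-forget plus log redirection) in one call. A minimal sketch; the log path is illustrative and game_id is assumed from the question:

# spawn returns the child's PID immediately instead of waiting for it
pid = Process.spawn(
  "rails runner 'GameWorker.new(:game_id => #{game_id}).start_game'",
  [:out, :err] => ["log/game_worker.log", "a"]  # append stdout and stderr to a log file
)
Process.detach(pid)  # reap the child in the background so it never becomes a zombie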

Related

Monit's second "Does not exist" overrides first one

I have a process which I am monitoring using Monit. If the process dies for some reason, I want to send a Slack notification using a shell script and also restart it. This behaviour, though, does not work with the "does not exist" directive: only the last one is executed and the previous one is ignored. For example, with the code below:
check process xyz with pidfile /var/run/xyz.pid
  start program = "/etc/init.d/xyz start" with timeout 60 seconds
  stop program = "/etc/init.d/xyz stop"
  if does not exist then restart
  if does not exist then exec "/opt/somescript.sh"
It executes the script but does not restart. From the documentation it also looks like this is how it is meant to behave. Is there any other way to get this working? Documentation reference (not exactly clear, but it resembles the actual behaviour):
If not defined, it defaults to a restart action.
You can override the default action with the following statement:
I believe Monit doesn't allow you to have the same statement twice; you would have to put the restart logic inside your somescript.sh.
My guess is that the default action is already to restart the process, as per the documentation, and you are overriding that with an exec action.
The cleaner way is to add the restart command inside your somescript.sh.
If you don't want to do that, you can also combine the two actions in one, like this:
if does not exist then exec "/etc/init.d/xyz restart && /opt/somescript.sh"

Ruby: How to create a daemon process that will spawn multiple workers

I have a script called 'worker.rb'. When run, this script will perform processing for a while (an hour, let's say) and then die.
I need another script which will be responsible for spawning the worker script above. Let's call it 'runner.rb'. 'runner.rb' will be called with an argument dictating how many workers it is allowed to spawn.
I'd like runner.rb (e.g. 'ruby runner.rb 5') to do the following:
- Query the database for specific values (e.g. got 100 values)
- Spawn 5 instances of 'worker.rb' (passing the first 5 values respectively)
- Keep checking for any of the instances of 'worker.rb' spawned above to finish and then call 'worker.rb' again with the 6th value from the database and continue this process indefinitely.
I'm using the Daemons gem but am lost as to the best way to go about this. The 'runner' script should definitely be daemonized -- but should the worker also be daemonized?
How should 'runner' go about checking if 'worker' has finished or not? Can this be done using a PID stored in a file?
I used the Daemons gem before, but somehow it didn't do well at keeping the number of child processes constant. So I wrote another one, called light_daemon. You can have light_daemon prefork a certain number of worker processes; if one of the workers dies for any reason, light_daemon spawns a new one to replace it. If your worker process has a memory-leak problem, you can have the worker exit voluntarily before it grows too big, and the parent process will keep the number of worker processes constant. I used it on the production site of one of my projects and it worked pretty well.
The following is an example daemon using the light-daemon gem.
require 'rubygems'
require 'light_daemon'

class Client
  def initialize
    @count = 0
  end

  def call
    `echo "process: #{Process.pid}" >> /tmp/light-daemon.txt`
    sleep 3
    @count += 1
    @count < 100
  end
end

LightDaemon::Daemon.start(Client.new, :children => 2, :pid_file => "/tmp/light-daemon.pid")
In this daemon, the worker process dies after its "call" method has been invoked 100 times; then a new worker process is spawned and the cycle continues.
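If you'd rather skip the gem entirely, the respawn loop the question describes can be built from plain fork and Process.wait. A rough sketch; fetch_values_from_database and worker.rb are placeholders for your own query and script:

max_workers = Integer(ARGV[0] || 5)
queue = fetch_values_from_database  # hypothetical: returns e.g. 100 values from your DB query

spawn_worker = lambda do |value|
  fork { exec("ruby", "worker.rb", value.to_s) }  # one child process per value
end

max_workers.times { spawn_worker.call(queue.shift) unless queue.empty? }

until queue.empty?
  Process.wait                    # block until any child worker exits
  spawn_worker.call(queue.shift)  # hand the next value to a fresh worker
end
Process.waitall                   # let the last workers finish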

Permanent daemon for querying a web resource

I have a Rails 3 application and have looked around the internet for daemons, but didn't find the right one for me.
I want a daemon which permanently fetches data (exchange rates) from a web resource and saves it to the database,
like:
while true
  Model.update_attribute(:course, Net::HTTP.get(URI("asdasd")))
end
I've only seen cron-like jobs, but they only run at fixed intervals... I want it to run permanently, each fetch starting as soon as the previous query ends...
Do you understand what I mean?
The gem light-daemon I wrote should work very well in your case.
http://rubygems.org/gems/light-daemon
You can write your code in a class which has a perform method, use a queue system like Resque, and at application startup enqueue the job with Resque.enqueue(Updater).
Obviously the job won't end until the application is stopped. Personally I don't like that, but if this is the requirement, it works.
For this reason, if you need to execute other tasks you should configure more than one worker process, and optionally more than one queue.
If you can relax your requirements and find a trigger for the update mechanism, the same approach still works; you only have to remove the while true loop.
Sample class needed:
class Updater
  @queue = :endless_queue

  def self.perform
    while true
      Model.update_attribute(:course, Net::HTTP.get(URI("asdasd")))
    end
  end
end
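Then kicking the job off at startup is one line (a sketch; where you put it -- an initializer, a rake task, etc. -- is up to you):

# e.g. in config/initializers/start_updater.rb
Resque.enqueue(Updater)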
Finally I found a cool solution for my problem:
I use the god gem -> http://god.rubyforge.org/
with a bash script (link) for starting/stopping a simple rake task (with an infinite loop in it).
Now it works fine, and I even have some monitoring with god running that ensures the rake task keeps running.
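For reference, a minimal god watch for such a rake task might look like this (a sketch; the task name and path are illustrative, not taken from the post above):

# updater.god
God.watch do |w|
  w.name  = "course-updater"
  w.start = "rake -f /path/to/app/Rakefile update_courses"
  w.keepalive  # restart the task whenever it dies
end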

Cucumber step to pause and hand control over to the user

I'm having trouble debugging Cucumber steps due to unique conditions of the testing environment. I wish there were a step that could pause a Selenium test and let me take over.
E.g.
Scenario: I want to take over here
  Given: A bunch of steps have already run
  When: I'm stuck on an error
  Then: I want to take control of the mouse
At that point I could interact with the application exactly as if I had done all the previous steps myself after running rails server -e test
Does such a step exist, or is there a way to make it happen?
You can integrate ruby-debug into your Cucumber tests. Nathaniel Ritmeyer has directions here and here which worked for me. You essentially require ruby-debug, start the debugger in your environment file, and then put a breakpoint wherever you want to see what's going on. You can both interact with the browser/application and see the values of your Ruby variables in the test. (I'm not sure whether it'll let you see the variables in your Rails application itself -- I'm not testing against a Rails app to check that.)
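A minimal sketch of that setup, assuming the classic ruby-debug gem (the file paths and step text are illustrative):

# features/support/env.rb
require 'ruby-debug'
Debugger.start

# then, inside whichever step definition you want to take over from:
When /^I want to poke around$/ do
  debugger  # drops into an interactive console while the Selenium browser stays open
end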
I came up with the idea to dump the database. It doesn't let you continue work from the same page, but if you have the app running during the test, you can immediately act on the current state of things in another browser (not the one controlled by Selenium).
Here is the step:
When /I want to take control/i do
  exec "mysqldump -u root --password=* test > #{Rails.root}/support/snapshot.sql"
end
Because the step calls exec, the Cucumber process is replaced and DatabaseCleaner never gets the chance to truncate the tables, so it's actually irrelevant that the command is a database dump. You don't have to import the SQL to use the app in its current state, but it's there if you need it.
My teammate has done this using Selenium, Firebug and a hook (@selenium_with_firebug).
Everything he learned came from this blogpost:
http://www.allenwei.cn/tips-add-firebug-extension-to-capybara/
Add the step
  And show me the page
where you want to interact with it:
Scenario: I want to take over here
  Given: A bunch of steps have already run
  When: I'm stuck on an error
  Then show me the page
Use http://www.natontesting.com/2009/11/09/debugging-cucumber-tests-with-ruby-debug/
Big thank you to @Reed G. Law for the idea of dumping the database. Loading it into development then allowed me to determine exactly why my Cucumber feature was not affecting database state as I had expected. Here's my minor tweak to his suggestion:
When /Dump the database/i do
  `MYSQL_PWD=password mysqldump -u root my_test > #{Rails.root}/snapshot.sql`
  # To replicate state in development run:
  # `MYSQL_PWD=password mysql -u root my_development < snapshot.sql`
end
You can also use the following in features/support/debugging.rb to let you step through the feature one step at a time:
# `STEP=1 cucumber` to pause after each step
AfterStep do |scenario|
  next unless ENV['STEP']
  unless defined?(@counter)
    puts "Stepping through #{scenario.title}"
    @counter = 0
  end
  @counter += 1
  print "At step ##{@counter} of #{scenario.steps.count}. Press Return to execute..."
  STDIN.getc
end

Ruby on Rails: How to run things in the background?

When a new resource is created and it needs to do some lengthy processing before the resource is ready, how do I send that processing away into the background where it won't hold up the current request or other traffic to my web-app?
in my model:
class User < ActiveRecord::Base
  after_save :background_check

  protected

  def background_check
    # check through a list of 10000000000001 mil different
    # databases that takes approx one hour :)
    if check_for_record_in_www(self.username)
      # code that is run after the 1 hour process is finished.
      update_attribute(:has_record, true)
    end
  end
end
You should definitely check out the following Railscasts:
http://railscasts.com/episodes/127-rake-in-background
http://railscasts.com/episodes/128-starling-and-workling
http://railscasts.com/episodes/129-custom-daemon
http://railscasts.com/episodes/366-sidekiq
They explain how to run background processes in Rails in every possible way (with or without a queue ...)
I've just been experimenting with the 'delayed_job' gem because it works with the Heroku hosting platform and it was ridiculously easy to set up!
Add the gem to your Gemfile, then run: bundle install, rails g delayed_job, rake db:migrate
Then start a queue handler with:
RAILS_ENV=production script/delayed_job start
Where you have a method call which is your lengthy process, e.g.
company.send_mail_to_all_users
you change it to:
company.delay.send_mail_to_all_users
Check the full docs on github: https://github.com/collectiveidea/delayed_job
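Applied to the model from the original question, that looks roughly like this (a sketch; it assumes has_record is a boolean column and guards against the worker's own save re-enqueueing the job):

class User < ActiveRecord::Base
  after_save :enqueue_background_check

  def background_check
    # runs inside the delayed_job worker, not the web request
    update_attribute(:has_record, true) if check_for_record_in_www(username)
  end

  private

  def enqueue_background_check
    # skip re-enqueueing once the worker has already set the flag
    delay.background_check unless has_record?
  end
end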
Start a separate process, which is probably most easily done with system, prepending a 'nohup' and appending an '&' to the end of the command you pass it. (Make sure the command is just one string argument, not a list of arguments.)
There are several reasons you want to do it this way, rather than, say, trying to use threads:
Ruby's threads can be a bit tricky when it comes to doing I/O; you have to take care that some things you do don't cause the entire process to block.
If you run a program with a different name, it's easily identifiable in 'ps', so you don't accidentally think it's a FastCGI back-end gone wild and kill it.
Really, the process you start should be "daemonized"; see the Daemonize class for help.
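A sketch of that nohup approach; the script name and log path are illustrative. Note the single string argument, so the shell interprets the redirection and the trailing '&':

# returns immediately; output is appended to the log, errors included
system("nohup script/runner check_for_record_in_www.rb #{username} >> log/background.log 2>&1 &")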
You ideally want to use an existing background-job server rather than writing your own. These will typically let you submit a job and get back a unique key; you can then use the key to periodically query the job server for the status of your job without blocking your web app. Here is a nice roundup of the various options out there.
I like to use BackgrounDRb; it's nice because it allows you to communicate with it during long processes, so you can have status updates in your Rails app.
I think spawn is a great way to fork your process, do some processing in the background, and show the user just some confirmation that this processing has started.
What about:
def background_check
  exec("script/runner check_for_record_in_www.rb #{self.username}") if fork.nil?
end
The program "check_for_record_in_www.rb" will then run in another process and will have access to ActiveRecord, being able to access the database.
