Calling Models from Config Files - ruby-on-rails

I am setting up a Ruby scheduler, namely rufus-scheduler, and there is code I need to put in an initializer inside the config folder to perform a task every minute or so. From within that initializer I am trying to call a method on one of my models. My code looks like:
scheduler.every("1m") do
  puts("HELLO #{Time.now}")
  ModelName.methodname("WHAT ARE YOU DOING")
end
This somehow doesn't perform the necessary operation in the model. I'm also not sure whether this is the right way to do things, i.e. calling a model from inside a config file. Is there a better place to put this code, such as in the model itself, or is calling a model from a config file perfectly good practice? I looked around for material on how the different kinds of files in a Rails project are meant to be used, but couldn't find anything definitive. Any help or guidance is appreciated.

If you want to access models from stand-alone tasks, the best way is to use the rails runner wrapper. For example, you'd call your script as:
rails runner call_model.rb
This loads the Rails environment and then executes your script, eliminating the need to do that yourself. Models on their own will not work, since they lack the Rails context.
If that's not sufficient, you may need to load the Rails environment more directly by requiring config/environment.rb from your rufus-scheduler script.
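If you go that route, a stand-alone scheduler script might look roughly like this (a sketch only: the file layout, ModelName.methodname, and the exact API all depend on your app and your rufus-scheduler version):

# scheduler_runner.rb, placed in the Rails application root -- illustrative sketch
require File.expand_path('../config/environment', __FILE__)  # boots the full Rails app
require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every('1m') do
  # models are available here because config/environment.rb has been loaded
  ModelName.methodname("WHAT ARE YOU DOING")
end

scheduler.join  # keep the process alive so the jobs keep firing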

It sounds like you actually want a real scheduled action of some sort. Config files are for configuration, not for working code like this.
There are plenty of ways to run scheduled tasks in Rails.
Google "rails daemons" or "rails scheduled tasks" to start you off.
Here's a good list of scheduled-task best practices using cron:
A cron job for rails: best practices?
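If you end up going the cron route, the whenever gem gives you a Ruby DSL over crontab. A minimal sketch (the interval and the model call are placeholders taken from the question):

# config/schedule.rb (whenever gem) -- illustrative sketch
every 1.minute do
  runner "ModelName.methodname('WHAT ARE YOU DOING')"
end

Running whenever --update-crontab then writes the corresponding cron entries for you.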

Related

How do I associate a Ruby script with an object and execute it on demand (in Rails)?

I'm interested in uploading a script (file) via my Rails application and then being able to execute this on demand.
I imagine the first step is to handle the file upload so that I have a model from which I can reference the file, e.g. Model.script. That part makes sense to me.
From there, my plan is to expose a route/controller action which would execute the script, but I'm not sure how to handle the actual execution inside that method. For example, if I had:
class Model
  def run_script
    # logic to run self.script
  end
end
How would I execute the associated file/code given that it is a Ruby file? Note that the script does not need to run in the context of the Rails application; it just needs to run.
You can use eval, which will run a Ruby script, but be very, very careful!
For a detailed explanation see "Code is data, data is code".
Example:
eval(@model_instance.script)
eval is very dangerous. If you give a user the ability to upload and run an unchecked script, it can create huge problems. For example, someone could upload a script like this:
User.delete_all
This will delete all users from the users table using a SQL query without invoking any Active Record callbacks.
You can use Ruby's taint method to add some additional safety, but it is not 100% foolproof.
For a detailed example, see "Locking Ruby in the Safe".
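Since the question notes the script does not need the Rails context, another option is to write the uploaded code to a temp file and execute it in a separate Ruby process rather than eval'ing it in-process. This is still not a sandbox, but it at least keeps the script out of your application's memory. A rough sketch, assuming Model#script returns the script body as a string:

require 'open3'
require 'tempfile'

class Model
  # Sketch: run the uploaded script in a separate Ruby process.
  def run_script
    Tempfile.create(['uploaded', '.rb']) do |file|
      file.write(script)   # script is assumed to return the uploaded code as a string
      file.flush
      stdout, stderr, status = Open3.capture3('ruby', file.path)
      { out: stdout, err: stderr, success: status.success? }
    end
  end
end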

Rails: whenever + delayed_job to optimize rake tasks?

I use whenever to call rake tasks throughout the day, but each task launches a new Rails environment. How can I run tasks throughout the day without relaunching Rails for each job?
Here's what I came up with; I'd love to get some feedback on it:
Refactor each rake task to instead be a method within the appropriate Model.
Use the delayed_job gem to assign low priority and ensure these methods run asynchronously.
Instruct whenever to call each Model.method instead of calling the rake task.
Does this solution make sense? Will it help avoid launching a new Rails environment for each job, or is there a better way to do this?
--
Running Rails 3
You could certainly look into enqueuing delayed_jobs via cron, then having one long-running delayed_job worker.
You could then use whenever to help you create the delayed_job enqueuing methods. It's probably easiest to have whenever's cron output call a small wrapper script which loads active_record and delayed_job directly rather than your whole Rails stack. http://snippets.aktagon.com/snippets/257-How-to-use-ActiveRecord-without-Rails
You might also like to look into clockwork.rb, which is a long-running process that would do the same thing you're using cron for (enqueuing delayed_jobs): http://rubydoc.info/gems/clockwork/0.2.3/frames
You could also just try using a requeuing strategy in your delayed_jobs: https://gist.github.com/704047
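For reference, a clockwork process that enqueues delayed_jobs might look something like this (a sketch only; the schedule and the job object are made-up examples, and the clock.rb conventions vary between clockwork versions):

# clock.rb -- illustrative sketch of clockwork enqueuing delayed_jobs
require File.expand_path('../config/environment', __FILE__)  # load Rails so Delayed::Job is available
require 'clockwork'

# Made-up job object; the delayed_job worker calls #perform on it later.
ReportJob = Struct.new(:report_date) do
  def perform
    puts "generating report for #{report_date}"
  end
end

module Clockwork
  every(10.minutes, 'reports.generate') do
    Delayed::Job.enqueue(ReportJob.new(Date.today))
  end
end

You then keep one long-running clockwork process (started with clockwork clock.rb) and one delayed_job worker, instead of booting Rails from cron every time.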
There are lots of good solutions to this problem; the one I eventually ended up integrating is this:
Moved my rake code to the appropriate models
Added controller/routing code for calling the model methods from the browser
Configured cron jobs using the whenever gem to run the command 'curl mywebsite.com/model#method'
I tried giving delayed_job a go but didn't like the idea of running another Rails instance. My methods are not too server-intensive, and the above solution lets me use the already-running Rails environment.
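In whenever's schedule.rb that third step might look something like this (a sketch; the interval and URL are illustrative and must match whatever route you exposed for the model method):

# config/schedule.rb -- illustrative sketch of the curl approach
every 1.hour do
  command "curl http://mywebsite.com/some_controller/some_action"
end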
Comment out this line in schedule.rb:
require File.expand_path(File.dirname(__FILE__) + "/environment")
Instead, load only the Ruby files that are actually required, in your case the models.
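Along the lines of the ActiveRecord-without-Rails article linked above, such a lightweight loader might look roughly like this (a sketch; the paths, environment name, and model file are assumptions):

# config/lightweight_environment.rb -- sketch: ActiveRecord plus selected models, no full Rails stack
require 'active_record'
require 'yaml'

db_config = YAML.load_file(File.expand_path('../database.yml', __FILE__))
ActiveRecord::Base.establish_connection(db_config['production'])

# Require only the model files the scheduled jobs actually need
require File.expand_path('../../app/models/my_model', __FILE__)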

Using RabbitMQ with Rails, how do I create the endless-loop process?

In a Rails web app, if I write messages to a queue like RabbitMQ, how will clients be notified when a producer sends a message to the queue?
I'm guessing I have to create a separate process that runs in the background to respond to messages, correct? I.e. this code is outside the scope of the web application.
If that's the case, is it possible to re-use the models/libs that are already in the Rails application, or do I have to copy that code in two places?
It looks like your application requires what's usually called a background or worker process. This is a fairly common requirement for any moderately complex web application.
I'm guessing I have to create a separate process that runs in the background to respond to messages, correct?
Yes - you're right about this. Whilst it's perfectly possible to use threads to handle the background tasks (in your case, reading and processing messages from RabbitMQ), the standard and recommended route for a Rails application is to run a separate background process.
If this is the case, is it possible to re-use the models/libs that are in the rails application already?
Absolutely. The simplest possible way to get this working is to use Rails' built-in runner command.
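For example, a one-off call could be as simple as (MyModel and do_something are placeholder names):
rails runner 'MyModel.all.each { |m| m.do_something }'
runner also accepts a path to a script file, so a long-running consumer script can be started the same way.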
Another option is to create a Ruby script which loads up your Rails application. For example, you could create the file my_script.rb in the root of your project, which might look something like this:
# Load my application:
require File.join(File.dirname(__FILE__), 'config/environment.rb')
# Now you can access your Rails environment as normal:
MyModel.all.each { |x| x.do_something }
If your needs become more complex, or you find that you need to run more than one background process to keep up with the volume of data you need to process, you might want to look at one of the many available libraries and frameworks which can help with this.
Once you've created your background process, you'll need a way to run it continuously when you deploy it to your production server. Whilst it's possible to use libraries like daemons, as suggested by ctcherry, I would recommend using a dedicated tool like upstart (if deploying to Ubuntu) or runit. A good summary of the most popular options is available here.
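As an illustration, a long-running consumer built on the bunny gem might look roughly like this (a sketch; the queue name and MyModel.handle_message are assumptions):

# worker.rb -- illustrative sketch of a RabbitMQ consumer that reuses the Rails models
require File.expand_path('../config/environment', __FILE__)  # load the Rails app
require 'bunny'

connection = Bunny.new   # connects to amqp://guest:guest@localhost:5672 by default
connection.start

channel = connection.create_channel
queue   = channel.queue('my_app.events', durable: true)

# block: true keeps the process alive -- this is the "endless loop"
queue.subscribe(block: true, manual_ack: true) do |delivery_info, _properties, body|
  MyModel.handle_message(body)              # placeholder for your real processing
  channel.ack(delivery_info.delivery_tag)
end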
You are correct, you do need a background process. You can keep the code for that process in the lib folder of the Rails project if you like; I have done that before without issue, and it keeps related code together, which is nice.
I used this library to create my long-running process; it was quite simple:
http://daemons.rubyforge.org/
In order to re-use models from your Rails application you can require the config/environment.rb file to get everything loaded. (Set RAILS_ENV as an environment variable first to select the correct environment.) From that point the script behaves as though you were inside a rails console session.
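For completeness, wrapping such a worker with the daemons library might look roughly like this (a sketch; the loop body and environment name are placeholders):

# worker_control.rb -- illustrative sketch using the daemons gem
require 'rubygems'
require 'daemons'

rails_root = File.expand_path(File.dirname(__FILE__))   # resolve before daemonizing changes the cwd

Daemons.run_proc('my_worker') do
  ENV['RAILS_ENV'] ||= 'production'
  require File.join(rails_root, 'config', 'environment')

  loop do
    # placeholder: consume queue messages or do periodic work here
    sleep 60
  end
end

You would then control it with ruby worker_control.rb start, stop, and run.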

Rake task for scraping with rails

I'm starting to write scrapers to get data from different websites. I built the first scraper in a rake file and am now starting to write a second rake file to get data from a second site. For now, I am writing a scraper specific to each site I'm interested in (not trying to build a generic scraper).
I have 3 questions:
Is writing rake tasks a good choice for me? Are there alternatives I should consider?
How can I add functions/methods to my rake files? (Sorry, very silly question, but I can't figure out how to structure my code, so for now it's just 500 lines of uninterrupted code in one long method.) For instance, I'd like a get_description(section) method that returns the description from the page. The method could be different depending on which site I'm scraping.
How can I test my task with RSpec? I'd like to give it a link and make sure the output of my task matches what I expect to get.
Thanks for your help!
As a general principle, rake tasks should be very minimal. Delegate the actual behavior to real classes. These classes can then be easily tested.
Example:
task :scrape do
  Scraper.scrape!
end

class Scraper
  def self.scrape!
    # do something
  end
end

describe Scraper do
  # your tests
end
You could, as @brad indicated, use thor, which has a regular class structure by itself, so in theory it should be easier to test the tasks themselves. I haven't done that though.
You can define methods in rake, but I don't know where they end up. You shouldn't do that, so don't bother. Keep task bodies minimal and write normal code to do the dirty work.
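To address the RSpec question concretely: once the logic lives in a class like Scraper, you can spec it directly against canned HTML rather than a live site. A rough sketch (get_description and the fixture markup are made-up examples):

# spec/scraper_spec.rb -- illustrative sketch
require 'spec_helper'
require 'nokogiri'

describe Scraper do
  describe '.get_description' do
    it 'returns the description text from a page section' do
      html    = '<div class="description">A widget</div>'   # canned HTML instead of a live request
      section = Nokogiri::HTML(html).at_css('.description')
      expect(Scraper.get_description(section)).to eq('A widget')
    end
  end
end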
Sure, rake is fine if you want to use it. You can also check out thor, which uses more standard Ruby class syntax rather than the DSL rake provides.
Rake is just another Ruby library, so you can include whatever you like in there. As such, you can write your own library and load it in your rake file. Check out how Bundler does it, for instance: they've defined their own classes and then created tasks inside of them. It uses thor, by the way, which from what I can gather somehow proxies those tasks on to rake; I haven't looked through it thoroughly, though, so I could be wrong.
If you're defining things in your own library, just use RSpec as you normally would for any other project, then hook that library into rake or thor by whatever means, and you're off to the races.

Rails best practice question: Where should one put shared code and how will it be loaded?

The Rails books and web pages I've been following have all stuck to very simple projects for the sake of providing complete examples. I'm moving away from the small project app and into the realm of non-browser clients, and I need to decide where to put code that is shared by all involved parties.
The non-browser client is a script that runs on any machine which can connect to the database. Browser clients write commands into the database, which the script examines and decides what to do. Upon completion, the script then writes its result back. The script is not started by the RoR server, but has access to its directory structure.
Where would be the best place for shared code to live, and how would the RoR loader handle it? The code in question doesn't really belong in a model, otherwise I'd drop it in there and be done with it.
I'd put the shared code in the Rails project's /lib directory and consider making it a custom Rake task.
It really depends on how much you use this shared code. If you use it everywhere, then throw it in the lib folder (as has already been stated here). If you are only using it in a few places, you might want to consider making a plugin out of it and loading it only in the places that use it. It's nice to only load what you need (one of the reasons I'm loving Merb).
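If the code ends up in /lib, one way to wire it up (a sketch; shared_logic.rb is an example file name) is to add lib to the Rails autoload paths and require the same file directly from the stand-alone script:

# config/application.rb -- make lib/ autoloadable inside Rails
config.autoload_paths += %W(#{config.root}/lib)

# stand-alone script -- load the same shared code without the autoloader
require File.expand_path('lib/shared_logic', File.dirname(__FILE__))  # assumes the script sits in the project root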
