delayed_job - performs out-of-date code? - ruby-on-rails

I'm using delayed_job (I tried both tobi's and collective_idea's forks) on site5.com shared hosting, with Passenger as the Rails environment.
I managed to get jobs running.
However, the plugin seems to ignore any changes to a job class's source code after the first run.
I have restarted the server on every change (touch tmp/restart.txt), but it still ignores them.
Example:
file: lib/xx_job.rb

class XxJob
  def perform
    Rails.logger.info "XX START"
    TempTest.delete_all
    i = 0
    10.times {
      i += 1
      TempTest.create(:name => "XXX")
      sleep(1)
    }
    Rails.logger.info "XX END"
  end
end
In a simple controller I call:
Delayed::Job.enqueue(XxJob.new)
Conclusions I have gathered:
If I rename xx_job.rb to xx_job1.rb - error in the controller
If I rename class XxJob to class XxJob1 - error in the controller
If I delete all of the perform method's content - the old code is still executed
New .rb file with a class and a perform method, enqueue that class - works perfectly
If I change something in that new file's perform and run the job again - the old code is executed
Between every change I restarted the server.
It seems like Passenger or something else is caching the classes.
How can I delete this cache? Is it stored on the server somewhere? (I hope I have access to it from the shared hosting.)
Thanks!

If you run delayed_job workers daemonized, you need to restart them to reload the code. Also keep in mind that each worker loads its own instance of Rails.
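For example, with collective_idea's daemon script the workers can be restarted like this (a sketch; the script and its options depend on the delayed_job version you have installed):

RAILS_ENV=production script/delayed_job stop
RAILS_ENV=production script/delayed_job start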

Eventually I figured it out - several workers were running in the background, and each of them had caught a job and had its own cache.
I didn't know how to kill them, so I changed the table's name for several seconds. That killed them :)
Then I used https://github.com/tobi/delayed_job/wiki/Running-Delayed::Worker-as-a-daemon to start the worker, and it works great.

Related

Setting up counter cache scale

My counter cache is locking the row under heavy load, so I found the wanelo/counter-cache gem, which seems perfect for my problem, but I can't set it up. It must be something really simple, but I can't see it.
https://github.com/wanelo/counter-cache
I want it to use my already working delayed jobs and Redis.
In my config file:

Counter::Cache.configure do |c|
  c.default_worker_adapter = here??? DelayedJob ??
  c.recalculation_delay = 5.hours
  c.redis_pool = Redis.new
  c.counting_data_store = Counter::Cache::Redis
end
If I leave out the c.default_worker_adapter line, execution fails with:
undefined method 'enqueue' for nil:NilClass
Any idea what's going on? What should I put in the worker adapter? Nothing seems to work.
Thank you for your time
default_worker_adapter is the name of the class that will handle your updates. An example is given on the gem's GitHub page. For instance, if you're using Sidekiq, you would make a Sidekiq worker class and name it whatever you want. On the GitHub page this class is called CounterWorker, and you can copy it exactly as it's given, though you can use whatever delayed job framework you want. From then on, any counter_cache_on definitions on your models will use that class to make the counter updates.
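Going by the error above, the configured adapter must respond to enqueue. Here is a delayed_job-flavoured sketch of such a class (the enqueue/perform contract and the Counter::Cache::Redis call are assumptions modeled on the Sidekiq example from the gem's README - verify them against the gem before relying on this):

# Hypothetical delayed_job-backed adapter for wanelo/counter-cache
class CounterWorker < Struct.new(:args)
  # the gem appears to call enqueue on the configured adapter
  # (inferred from the "undefined method 'enqueue'" error above)
  def self.enqueue(delay, *args)
    Delayed::Job.enqueue(new(args), run_at: delay.to_i.seconds.from_now)
  end

  # delayed_job calls perform on the dequeued payload object
  def perform
    Counter::Cache::Redis.update_counters_cache(*args) # assumed entry point
  end
end

Then in the config: c.default_worker_adapter = CounterWorker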

How to disable class cache for part of Rails application

I am developing a Rails app for network automation. Part of the app consists of logic to run operations, and part is the operations themselves. An operation is simply a Ruby class that performs several commands on a network device (router, switch, etc.).
Right now, operations are simply part of the Rails app repo. But in order to make the development process more agile, I would like to decouple the app and the operations. I would have two repos - one for the app and one for the operations. App deploys would follow the standard procedure, but the operations would sync every time something is pushed to master. And, more importantly, I don't want to restart the app after an operations repo update.
So my question is:
How do I exclude several classes (or namespaces) from being cached in a production Rails app - meaning that every time I call such a class, the file is re-read from disk? What are the potential dangers of doing so?
Some code example:
# Example operation - I would like to add or modify such classes without restarting the app
class FooOperation < BaseOperation
  def perform(host)
    conn = new_connection(host) # method from BaseOperation
    result = conn.execute("foo")
    if result =~ /Error/
      # retry; it's a known bug in device foo
      conn.execute("foo")
    else
      conn.exit
      return success # method from BaseOperation
    end
  end
end

# somewhere in the admin panel I would do:
o = Operations.create(name: "Foo", class_name: "FooOperation")
o.id # => 123 # for the next example

# Ruby worker which actually runs an operation
class OperationWorker
  def perform(operation_id, host)
    operation = Operation.find(operation_id)
    # here, every time I load this, I want Ruby to search for the
    # implementation on the filesystem, never a cache
    klass = operation.class_name.constantize
    klass.new.perform(host)
  end
end
I think you have quite a misunderstanding about how Ruby code loading and interpretation works!
The fact that Rails reloads classes at development time is kind of a "hack" to let you iterate on the code while the server has already loaded, parsed, and executed parts of your application.
In order to do so, it has to implement quite some magic to unload your code and reload parts of it on change.
So if you want up-to-date code when executing an "operation", you are probably best off spawning a new process. That guarantees that your new code is read and parsed properly, and executed from a blank state.
Another thing you can do is use load instead of require, because load actually re-reads the source on subsequent calls. Keep in mind, though, that subsequent calls to load just add to the code already in the Ruby VM, so you need to make sure that every change is compatible with the already loaded code.
That could be circumvented by some clever instance_eval tricks, but I'm not sure that is what you want...
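A minimal sketch of the load-based approach, applied to the worker from the question (the operations directory and the file-naming convention are assumptions):

class OperationWorker
  # assumed location of the synced operations repo checkout
  OPERATIONS_DIR = Rails.root.join("operations")

  def perform(operation_id, host)
    operation = Operation.find(operation_id)
    # load, unlike require, re-reads the file on every call, so the
    # latest pushed code is picked up without an app restart
    load OPERATIONS_DIR.join("#{operation.class_name.underscore}.rb")
    operation.class_name.constantize.new.perform(host)
  end
end

Note the caveat above: load redefines methods in place, so a method deleted from the file will linger in memory until the process restarts.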

Resque job not actually backgrounding

It is instead taking up my processor, and then effectively timing out.
I have this in my model:

after_save :handle_file

def handle_file
  Resque.enqueue UnpackFileOnS3, parent.id
end
It hits this mark, and then the entire app waits for it to set up and upload the files as prescribed inside my job. Then it predictably times out, because the upload takes a while.
This occurs in my console as well. If I run:
Resque.enqueue UnpackFileOnS3, 4
then instead of enqueueing it, it locks up my console as it tries to run the entire job. I think that normally the console would just enqueue it for a worker via Redis...
Why isn't this actually happening in the background? I assume that if it were, the timeouts would not occur.
My guess is that you are running Resque in inline mode. In this mode queueing is disabled. Check your configs for this kind of code:

Resque.inline = ENV['RAILS_ENV'] == "cucumber"
# or whatever; the important part is the inline option
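If you do find such a line, scoping it to the environments where synchronous execution is actually wanted keeps real queueing everywhere else (a sketch; the initializer path is hypothetical):

# config/initializers/resque.rb
# run jobs inline only in test-like environments
Resque.inline = Rails.env.test? || ENV['RAILS_ENV'] == 'cucumber'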

Permanent daemon for querying a web resource

I have a Rails 3 application and looked around on the internet for daemons, but didn't find the right one for me.
I want a daemon which permanently fetches data (exchange rates) from a web resource and saves it to the database, like:

while true
  Model.update_attribute(:course, Net::HTTP.get(URI("http://asdasd/"))) # pseudocode
end

I've only seen cron-like jobs, but they only run at specific times... I want it to run permanently, with each iteration starting as soon as the previous query finishes...
Do you understand what I mean?
The light-daemon gem, which I wrote, should work very well in your case.
http://rubygems.org/gems/light-daemon
You can write your code in a class which has a perform method, use a queue system like Resque, and at application startup enqueue the job with Resque.enqueue(Updater).
Obviously the job won't end until the application is stopped. Personally I don't like that, but if this is the requirement...
For this reason, if you need to execute other tasks, you should configure more than one worker process and, optionally, more than one queue.
If you can change your requirements and find a trigger for the update mechanism, the same approach still works; you only have to remove the while true loop.
Sample class needed:

require 'net/http'

class Updater
  @queue = :endless_queue # Resque reads the target queue from @queue

  def self.perform
    while true
      Model.update_attribute(:course, Net::HTTP.get(URI("http://asdasd/"))) # placeholder URL from the question
    end
  end
end
Finally I found a cool solution for my problem:
I use the god gem -> http://god.rubyforge.org/
with a bash script (link) for starting/stopping a simple rake task (with an infinite loop in it).
Now it works fine, and I even have some monitoring with god running that ensures the rake task runs OK.
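For reference, a minimal god watch along those lines (a sketch; the app path, task name, and watch name are hypothetical):

# fetcher.god - keep the looping rake task alive
God.watch do |w|
  w.name  = "course-fetcher"
  w.start = "cd /path/to/app && bundle exec rake fetch:courses RAILS_ENV=production"
  w.keepalive # restart the process if it dies
end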

Ruby on Rails: How to run things in the background?

When a new resource is created and it needs to do some lengthy processing before the resource is ready, how do I send that processing away into the background where it won't hold up the current request or other traffic to my web-app?
In my model:

class User < ActiveRecord::Base
  after_save :background_check

  protected

  def background_check
    # check through a list of 10000000000001 mil different
    # databases, which takes approx. one hour :)
    if check_for_record_in_www(self.username)
      # code that is run after the 1-hour process is finished
      update_attribute(:has_record, true) # update_attribute takes a name and a value
    end
  end
end
You should definitely check out the following Railscasts:
http://railscasts.com/episodes/127-rake-in-background
http://railscasts.com/episodes/128-starling-and-workling
http://railscasts.com/episodes/129-custom-daemon
http://railscasts.com/episodes/366-sidekiq
They explain how to run background processes in Rails in every possible way (with or without a queue ...)
I've just been experimenting with the delayed_job gem because it works with the Heroku hosting platform, and it was ridiculously easy to set up!!
Add the gem to your Gemfile, bundle install, rails g delayed_job, rake db:migrate.
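Spelled out, that setup looks like this (on newer delayed_job versions the ActiveRecord backend lives in a separate gem and the generator is namespaced, so check the README linked below for your version):

# Gemfile
gem 'delayed_job_active_record' # or gem 'delayed_job' on older versions

# then, from the app root:
bundle install
rails g delayed_job:active_record # plain `rails g delayed_job` on older versions
rake db:migrate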
Then start a queue handler with:
RAILS_ENV=production script/delayed_job start
Where you have a method call that is your lengthy process, e.g.
company.send_mail_to_all_users
change it to:
company.delay.send_mail_to_all_users
Check the full docs on github: https://github.com/collectiveidea/delayed_job
Start a separate process, which is probably most easily done with system, prepending 'nohup' and appending an '&' to the command you pass it; make sure the command is a single string argument, not a list of arguments (see the sketch after this list).
There are several reasons you want to do it this way rather than, say, trying to use threads:
Ruby's threads can be a bit tricky when it comes to doing I/O; you have to take care that some things you do don't cause the entire process to block.
If you run the program under a different name, it's easily identifiable in ps, so you don't accidentally mistake it for a FastCGI back-end gone wild and kill it.
Really, the process you start should be "daemonized"; see the Daemonize class for help.
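A sketch of that approach, reusing the script from the answer further down (the log path is hypothetical):

# a single string argument makes system use the shell, so nohup and
# & are interpreted; the child then outlives the current request
system("nohup script/runner check_for_record_in_www.rb #{self.username} >> log/background.log 2>&1 &")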
Ideally you want to use an existing background job server rather than writing your own. These typically let you submit a job and get back a unique key; you can then use the key to periodically query the job server for your job's status without blocking your web app. Here is a nice roundup of the various options out there.
I like to use backgroundrb; it's nice because it allows you to communicate with the job during long processes, so you can have status updates in your Rails app.
I think spawn is a great way to fork your process, do some processing in the background, and show the user just a confirmation that this processing has started.
What about:

def background_check
  # fork returns nil in the child, so only the child process execs
  exec("script/runner check_for_record_in_www.rb #{self.username}") if fork.nil?
end

The program check_for_record_in_www.rb will then run in another process, and it will have access to ActiveRecord and thus the database.
