Rails spring wisper listener method caching

It turns out that Spring caches my Wisper listener method (I'm writing a fairly simple engine).
Example:
app/models/myengine/my_class.rb
class Myengine::MyClass
  include Wisper::Publisher

  def something
    # some logic
    publish(:after_something, self)
  end
end
config/initializers/wisper.rb
Wisper.subscribe(Myengine::MyObserver.new)
app/observers/myengine/my_observer.rb
class Myengine::MyObserver
  def after_something(my_class_instance)
    # any change here requires a manual Spring restart in order to be reflected in tests
    another_method
  end

  def another_method
    # changes here, or in any other method of this class, work fine with Spring and are instantly visible in tests
    return true
  end
end
By Spring restart I mean manually running the spring stop command, which is really annoying.
What is more mysterious, I can change the another_method return value to false and the tests fail, which is expected; but when I change the after_something method body to, say, return false, it has no effect on the tests (as if the body of after_something were somehow cached).
It is not a high-priority problem, because this strange behaviour is only visible inside the listener method body and is easy to work around by moving all logic to another method in the class. Still, it can be confusing (especially at the beginning, when I didn't know the exact cause).

The problem is probably caused by the fact that when you subscribe a listener globally, the listener object remains in memory pointing to the class it was originally constructed from, even if that class has been reloaded in the meantime.
Try this in config/initializers/wisper.rb:
Rails.application.config.to_prepare do
  Wisper.clear if Rails.env.development?
  Wisper.subscribe(Myengine::MyObserver.new)
end
to_prepare runs the block before every request in the development environment, but only once in production, as usual. Therefore, provided your listener does not maintain any state, it should work as expected.
Wisper.clear is needed to remove the listeners subscribed earlier before we re-subscribe a new instance built from the reloaded class. Be aware that #clear removes all subscribers, so if you have code like the above in more than one engine, only the last engine to be loaded will have its listeners subscribed.
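If several engines need to subscribe their own listeners, one way to avoid this "last engine wins" problem is to collect the listener classes in a single place and re-subscribe them all in one to_prepare block. A minimal sketch, assuming a hand-maintained LISTENER_CLASSES list (the second observer is a made-up name):
# config/initializers/wisper.rb in the host app
# LISTENER_CLASSES and Otherengine::OtherObserver are assumed names
LISTENER_CLASSES = [Myengine::MyObserver, Otherengine::OtherObserver]

Rails.application.config.to_prepare do
  Wisper.clear if Rails.env.development?
  LISTENER_CLASSES.each { |listener_class| Wisper.subscribe(listener_class.new) }
end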

Related

Rails, how to know if a particular request is still running

How can I detect if a particular request is still active?
For example I have this request uuid:
# my_controller.rb
def my_action
  request.uuid # -> ABC1233
end
From another request, how can I know if the request with uuid ABC1233 is still working?
For the curious:
Following beanstalk directives, I am running cron jobs using URL requests.
I don't want to start the next iteration if the previous one is still running. I cannot just rely on an init/end flag updated by the request, because the request sometimes dies before it finishes.
With normal cron tasks I was managing this properly using the PID of the process.
But I don't think I can use the PID any more, because processes in a web server can be reused across different requests.
I don't think Rails (or, more correctly, Rack) has support for this, since (to the best of my knowledge) each Rails request doesn't know about any other request. You could try to get access to all running threads (and even processes), but such an implementation (if even possible) seems ugly to me.
How about implementing it yourself?
class ApplicationController < ActionController::Base
  before_filter :register_request
  after_filter :unregister_request

  def register_request
    # store a marker under the request uuid (Redis SET needs a value)
    $redis.set(request.uuid, 1)
  end

  def unregister_request
    # Redis has no "unset"; DEL removes the key
    $redis.del(request.uuid)
  end
end
You'll still need to figure out what to do with exceptions, since after_filters are skipped when one is raised (perhaps move this whole code into a middleware: in the before phase it writes the uuid to Redis, and in the after phase it removes the key). There are plenty of other ways to achieve this, and you can obviously substitute Redis with your persistence layer of choice.
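A rough sketch of that middleware idea, assuming the same global $redis connection and that the request UUID is the key you want to track; the ensure clause also covers the exception case that the after_filter misses:
# lib/middleware/request_tracker.rb -- hypothetical name and location
class RequestTracker
  def initialize(app)
    @app = app
  end

  def call(env)
    request = ActionDispatch::Request.new(env)
    $redis.set(request.uuid, 1) # mark the request as running
    @app.call(env)
  ensure
    $redis.del(request.uuid) if request # removed even if an exception was raised
  end
end
It would then be registered with something like config.middleware.use RequestTracker in config/application.rb.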
In the end I went back to my previous approach based on PIDs.
I implemented something like this:
# The Main Process
module MyProcess
  def self.run_forked
    Process.fork do
      run # run the long process in the child
    end
    Process.wait
  end

  def self.run
    RedisClient.set Process.pid # store the PID
    # ... my long process code is here
  end

  def self.still_alive?(pid)
    # signal 0 sends no signal; it only checks whether the process exists
    !!Process.kill(0, pid) rescue false
  end
end

# In one thread I can do:
MyProcess.run_forked

# In another thread I can do:
pid = RedisClient.get
MyProcess.still_alive?(pid) # -> true if the process is still running
I can call this code from a Rails request, and even though the request process may be reused, the child process is not, so I can monitor the child's PID to see whether the Ruby process is still running.

ActiveJob could not find record in `test` environment

This issue only exists in the test environment. Everything runs fine in the development environment.
I am facing a strange issue after recently upgrading to Rails 5.0.0.1 from Rails 4.2.7.1. Everything was working fine before this upgrade.
In one of my models, I use ActiveJob to perform a task.
# webhook_invocation.rb
def schedule_invocation
  WebhookRequestJob.perform_later(id)
end

def init
  remember_webhook    # No DB changes
  init_errors_context # No DB changes
  flow_step_invocation.implementation = self
  flow_step_invocation.save!
  return unless calculate_expressions # No DB changes
  calculated! # An AASM event, with no callbacks
  schedule_invocation
end
and in WebhookRequestJob#perform, I retrieve the object using the ID supplied
# webhook_request_job.rb
def perform(webhook_invocation_id)
  invocation = WebhookInvocation.find_by(id: webhook_invocation_id)
  invocation.run_request
end
The problem is that #perform cannot find the record (invocation becomes nil). I even tried putting p WebhookInvocation.all as its first line, but all it prints is an empty collection. On the other hand, if I put p WebhookInvocation.all in the #schedule_invocation method, it properly prints all the WebhookInvocation objects.
No exception is raised, and there are no warnings either.
Edit 1:
I even tried passing the object directly to #perform_later, i.e. WebhookRequestJob.perform_later(self), but the object received in #perform is nil.
Edit 2:
I noticed some messages like "Creating scope :fail. Overwriting existing method FlowStepInvocation.fail", caused by using AASM. I eliminated them by using create_scopes: false, but that still didn't solve the problem.
My guess from the info you supplied is that you are calling the schedule_invocation method in an after_save or after_create callback. Since the callback runs before COMMIT, ActiveJob might start processing the job before the object is actually persisted. In that case your record will not show up in the database when the job is processed, and you will get an empty collection or nil.
To fix this, change your callback to after_commit to make sure the COMMIT has happened before you queue the job.
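For example, assuming the job is currently enqueued from an after_save or after_create callback, the change would look roughly like this (hypothetical callback wiring):
# webhook_invocation.rb
# enqueue only after the INSERT has actually been committed
after_commit :schedule_invocation, on: :create

def schedule_invocation
  WebhookRequestJob.perform_later(id)
end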
It turns out that config.active_job.queue_adapter defaulted to :inline before Rails 5, but defaults to :async in Rails 5.
This made the specs fail (I don't know why). To resolve it, I put the following line in my config/environments/test.rb:
config.active_job.queue_adapter = :inline
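An alternative, if you prefer not to run every job inline, is to keep the :test adapter in the test environment and execute enqueued jobs explicitly with ActiveJob's test helpers; a hedged sketch (the spec wiring is assumed):
# config/environments/test.rb
config.active_job.queue_adapter = :test

# in the spec, with ActiveJob::TestHelper included:
include ActiveJob::TestHelper

perform_enqueued_jobs do
  invocation.init # WebhookRequestJob is performed inside this block
end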

How to disable class cache for part of Rails application

I am developing a Rails app for network automation. Part of the app is the logic that runs operations; the other part is the operations themselves. An operation is simply a Ruby class that runs several commands on a network device (router, switch, etc.).
Right now, operations are simply part of the Rails app repo. But to make the development process more agile, I would like to decouple the app and the operations. I would have two repos: one for the app and one for the operations. The app deploy would follow the standard procedure, but the operations would sync every time something is pushed to master. More importantly, I don't want to restart the app after an operations repo update.
So my question is:
How do I exclude certain classes (or namespaces) from being cached in a production Rails app, so that every time I call such a class its file is re-read from disk? What are the potential dangers of doing so?
Some code example:
# Example operation - I would like to add or modify such classes without restarting the app
class FooOperation < BaseOperation
  def perform(host)
    conn = new_connection(host) # method from BaseOperation
    result = conn.execute("foo")
    if result =~ /Error/
      # retry, it's a known bug in device foo
      conn.execute("foo")
    else
      conn.exit
      return success # method from BaseOperation
    end
  end
end
# somewhere in the admin panel I would do:
o = Operation.create(name: "Foo", class_name: "FooOperation")
o.id # => 123 # used in the next example
# Ruby worker which actually runs an operation
class OperationWorker
  def perform(operation_id, host)
    operation = Operation.find(operation_id)
    # here, every time I load this, I want Ruby to look up the implementation on the filesystem, never cache it
    klass = operation.class_name.constantize
    klass.new.perform(host)
  end
end
I think you have quite a misunderstanding about how Ruby code loading and interpretation works!
The fact that Rails reloads classes in development is kind of a "hack" to let you iterate on the code while the server has already loaded, parsed and executed parts of your application.
To do so, it has to implement quite a bit of magic to unload your code and reload parts of it on change.
So if you want up-to-date code when executing an "operation", you are probably best off spawning a new process. That guarantees your new code is read and parsed properly and executed with a blank state.
Another thing you can do is use load instead of require, because load actually re-reads the source on subsequent calls. Keep in mind that subsequent calls to load just add to the code already present in the Ruby VM, so you need to make sure that every change is compatible with the already loaded code.
This could be circumvented with some clever instance_eval tricks, but I'm not sure that is what you want...
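For example, a hedged sketch of the load-based approach in the worker, assuming the operation classes live in one file per class under a dedicated directory (the operations/ path and the file naming are assumptions):
# Hypothetical: operation sources live outside the autoload paths,
# e.g. in <app root>/operations/foo_operation.rb
class OperationWorker
  OPERATIONS_DIR = Rails.root.join("operations") # assumed location

  def perform(operation_id, host)
    operation = Operation.find(operation_id)
    # `load` re-reads and re-evaluates the file on every call,
    # unlike `require`, which loads it only once
    load OPERATIONS_DIR.join("#{operation.class_name.underscore}.rb").to_s
    klass = operation.class_name.constantize
    klass.new.perform(host)
  end
end
As noted above, load only adds definitions on top of what is already in the VM, so removing a method from the file will not remove it from the running process.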

Wrapping Sidekiq's perform method to add timezone awareness

I have a Rails application with a dynamically configured time zone. It is stored in a database table along with other options, and the Rails application itself is configured to UTC (the default).
I've made the application itself aware of the time zone with a simple around filter using Time.use_zone(..., &block).
I would like to do something similar for my Sidekiq workers. Some of them process data where the time zone matters, so they need it. I don't see any filtering options in Sidekiq itself, no callbacks or before/after hooks I can tap into. My current solution is to prepend a module, like so:
module TimeZoneAwareWorker
  def perform(*args)
    Time.use_zone(Options.time_zone) do
      super
    end
  end
end
and mixed in:
class MyWorker
  include Sidekiq::Worker
  prepend TimeZoneAwareWorker

  ...
end
This works fine for simple workers, but breaks down if the prepend occurs in the same class as the include Sidekiq::Worker. If the worker is subclassed, the hierarchy doesn't work out for the prepended perform to wrap the implementation.
Is there a better way? Ultimately it seems what I really want is a foolproof method of wrapping a single method with another method, and yielding the wrapped implementation.
I know my other option is monkeypatching before/after/around type callbacks into Sidekiq's implementation, but I'd like to only go there if forced.
Sidekiq has its own middleware solution:
Sidekiq has a similar notion of middleware to Rack: these are small bits of code that can implement functionality. Sidekiq breaks middleware into client-side and server-side.
Client-side middleware runs before the pushing of the job to Redis and allows you to modify/stop the job before it gets pushed. Client middleware may receive the class argument as a Class object or a String containing the name of the class.
Server-side middleware runs 'around' job processing. Sidekiq's retry feature is implemented as a simple middleware.
You can easily create your own middleware agent to add the timezone awareness code.
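For example, a minimal server-side middleware sketch; Options.time_zone is the same assumed source of the zone as in the question, and on recent Sidekiq versions (6.5+) the class should also include Sidekiq::ServerMiddleware:
# config/initializers/sidekiq.rb
class TimeZoneAwareMiddleware
  def call(_worker, _job, _queue)
    # wrap every job execution in the configured zone
    Time.use_zone(Options.time_zone) { yield }
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add TimeZoneAwareMiddleware
  end
end
This leaves the worker classes untouched, so it also covers subclassed workers where the prepend approach breaks down.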

Delayed job: How to reload the payload classes during every call in Development mode

I am running a Delayed Job worker. Whenever I invoke the foo method, the worker prints "Hello".
class User
  def foo
    puts "Hello"
  end

  handle_asynchronously :foo
end
If I make changes to the foo method, I have to restart the worker for the changes to take effect. In development mode this becomes quite tiresome.
I am trying to find a way to reload the payload class (in this case the User class) for every request. I tried monkey-patching the DelayedJob library to invoke require_dependency before the payload method invocation.
module Delayed::Backend::Base
  def payload_object_with_reload
    if Rails.env.development? and @payload_object_with_reload.nil?
      require_dependency(File.join(Rails.root, "app", "models", "user.rb"))
    end
    @payload_object_with_reload ||= payload_object_without_reload
  end
  alias_method_chain :payload_object, :reload
end
This approach doesn't work, as the classes registered using require_dependency need to be reloaded before the invocation, and I haven't figured out how to do that. I spent some time reading the dispatcher code to figure out how Rails reloads classes for every request, but I wasn't able to locate the reloading code.
Has anybody tried this before? How would you advise me to proceed? Or do you have any pointers for locating the Rails class reload code?
I managed to find a solution. I used the ActiveSupport::Dependencies.clear method to clear the loaded classes.
Add a file called config/initializers/delayed_job.rb:
Delayed::Worker.backend = :active_record

if Rails.env.development?
  module Delayed::Backend::Base
    def payload_object_with_reload
      if @payload_object_with_reload.nil?
        ActiveSupport::Dependencies.clear
      end
      @payload_object_with_reload ||= payload_object_without_reload
    end
    alias_method_chain :payload_object, :reload
  end
end
As of version 4.0.6, DelayedJob reloads automatically if Rails.application.config.cache_classes is set to false:
In development mode, if you are using Rails 3.1+, your application code will automatically reload every 100 jobs or when the queue finishes. You no longer need to restart Delayed Job every time you update your code in development.
This looks like it solves your problem without the alias_method hackery:
https://github.com/Viximo/delayed_job-rails_reloader
