I have an object which, for reasons of data scaling, needs to call Object.where(x: y).delete_all. Using destroy_all is too time consuming.
However, as a result of that, I want to enforce that no dev accidentally registers an after_destroy callback or even a dependent: :destroy relationship, because both will be ignored during the delete_all process.
What would be the best way in RSpec to test that after_destroy a model receives NO callbacks?
I'd like to achieve something along these lines:
it "should not have any registered after_destroy callbacks" do
o = MyObject.new
o.destroy
expect(o).to_not have_received('*')
end
Possible?
I think that approach is doomed: many methods internal to Active Record are called during a call to destroy, and you'd have to sort those out from your own methods, or from ones defined by callbacks (the bad methods won't necessarily have an obvious name, e.g. if they use the block form of before_destroy/after_destroy).
You can however directly inspect the set of callbacks:
MyObject._destroy_callbacks
and check whether it is empty.
You can check what options have been set on associations more explicitly:
MyObject.reflect_on_all_associations.any? {|reflection| reflection.options[:dependent] == :destroy}
but these are implemented using callbacks, so they should show up in _destroy_callbacks anyway.
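Putting the two checks together, a spec along these lines should work (a minimal sketch; filtering on cb.kind == :after assumes you only care about after_destroy callbacks, and MyObject is just the placeholder model from the question):

require "spec_helper"

describe MyObject do
  it "has no after_destroy callbacks registered" do
    after_callbacks = MyObject._destroy_callbacks.select { |cb| cb.kind == :after }
    expect(after_callbacks).to be_empty
  end

  it "has no dependent: :destroy associations" do
    dependent = MyObject.reflect_on_all_associations.select do |reflection|
      reflection.options[:dependent] == :destroy
    end
    expect(dependent).to be_empty
  end
end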
With Observers officially removed from Rails 4.0, I'm curious what other developers are using in their place. (Other than using the extracted gem.) While Observers were certainly abused and could easily become unwieldy at times, there were many use-cases outside of just cache-clearing where they were beneficial.
Take, for example, an application that needs to track changes to a model. An Observer could easily watch for changes on Model A and record those changes with Model B in the database. If you wanted to watch for changes across several models, then a single observer could handle that.
In Rails 4, I'm curious what strategies other developers are using in place of Observers to recreate that functionality.
Personally, I'm leaning towards a sort of "fat controller" implementation, where these changes are tracked in each model's controller create/update/delete actions. While it bloats the behavior of each controller slightly, it does help with readability and understanding, as all the code is in one place. The downside is that there's now very similar code scattered throughout several controllers. Extracting that code into helper methods is an option, but you're still left with calls to those methods littered everywhere. Not the end of the world, but not quite in the spirit of "skinny controllers" either.
ActiveRecord callbacks are another possible option, though one I don't personally like as it tends to couple two different models too closely together in my opinion.
So in the Rails 4, no-Observers world, if you had to create a new record after another record was created/updated/destroyed, what design pattern would you use? Fat controllers, ActiveRecord callbacks, or something else entirely?
Thank you.
Take a look at Concerns
Create a folder in your models directory called concerns. Add a module there:
module MyConcernModule
  extend ActiveSupport::Concern

  included do
    after_save :do_something
  end

  def do_something
    ...
  end
end
Next, include that in the models you wish to run the after_save in:
class MyModel < ActiveRecord::Base
  include MyConcernModule
end
Depending on what you're doing, this might get you close without observers.
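For the change-tracking example from the question, a concern along these lines could cover it (a sketch; ChangeTracking, ChangeRecord, the polymorphic trackable association, and the serialized change_details column are all assumptions for illustration):

# app/models/concerns/change_tracking.rb
module ChangeTracking
  extend ActiveSupport::Concern

  included do
    after_update :record_changes
  end

  def record_changes
    # ChangeRecord stands in for "Model B" from the question; trackable is
    # assumed to be a polymorphic association, change_details a serialized column.
    ChangeRecord.create!(:trackable => self, :change_details => changes)
  end
end

class Article < ActiveRecord::Base
  include ChangeTracking
end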
They are in a plugin (the rails-observers gem) now.
Can I also recommend an alternative which will give you controllers like:
class PostsController < ApplicationController
  def create
    @post = Post.new(params[:post])
    @post.subscribe(PusherListener.new)
    @post.subscribe(ActivityListener.new)
    @post.subscribe(StatisticsListener.new)
    @post.on(:create_post_successful) { |post| redirect_to post }
    @post.on(:create_post_failed) { |post| render :action => :new }
    @post.create
  end
end
My suggestion is to read James Golick's blog post at http://jamesgolick.com/2010/3/14/crazy-heretical-and-awesome-the-way-i-write-rails-apps.html (try to ignore how immodest the title sounds).
Back in the day it was all "fat model, skinny controller". Then the fat models became a giant headache, especially during testing. More recently the push has been for skinny models -- the idea being that each class should be handling one responsibility and a model's job is to persist your data to a database. So where does all my complex business logic end up? In business logic classes -- classes that represent transactions.
This approach can turn into a quagmire (giggity) when the logic starts getting complicated. The concept is sound though -- instead of triggering things implicitly with callbacks or observers that are hard to test and debug, trigger things explicitly in a class that layers logic on top of your model.
Using Active Record callbacks simply flips the direction of your coupling. For instance, if you have ModelA and a CacheObserver observing ModelA Rails 3 style, you can remove CacheObserver with no issue. Now say instead that A has to manually invoke the CacheObserver after save, which would be the Rails 4 approach. You've simply moved your dependency, so you can safely remove A but not CacheObserver.
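A minimal sketch of that flipped dependency (CacheObserver and its clear_for method are hypothetical names, not anything from Rails or the answers above):

class ModelA < ActiveRecord::Base
  # Rails 4 style: the model explicitly invokes its "observer",
  # so ModelA now depends on CacheObserver rather than the other way around.
  after_save :expire_cache

  private

  def expire_cache
    CacheObserver.clear_for(self) # hypothetical cache-expiry class
  end
end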
Now, from my ivory tower I prefer the observer to be dependent on the model it's observing. Do I care enough to clutter up my controllers? For me, the answer is no.
Presumably you've put some thought into why you want/need the observer, and thus creating a model dependent upon its observer is not a terrible tragedy.
I also have a (reasonably grounded, I think) distaste for any sort of observer being dependent on a controller action. Suddenly you have to inject your observer in any controller action (or another model) that may update the model you want observed. If you can guarantee your app will only ever modify instances via create/update controller actions, more power to you, but that's not an assumption I would make about a rails application (consider nested forms, model business logic updating associations, etc.)
Wisper is a great solution. My personal preference for callbacks is that they're fired by the models but the events are only listened to when a request comes in, i.e. I don't want callbacks fired while I'm setting up models in tests etc., but I do want them fired whenever controllers are involved. This is really easy to set up with Wisper because you can tell it to only listen to events inside a block.
class ApplicationController < ActionController::Base
  around_filter :register_event_listeners

  def register_event_listeners(&around_listener_block)
    Wisper.with_listeners(UserListener.new) do
      around_listener_block.call
    end
  end
end
class User < ActiveRecord::Base
  include Wisper::Publisher

  after_create { |user| publish(:user_registered, user) }
end
class UserListener
  def user_registered(user)
    Analytics.track("user:registered", user.analytics)
  end
end
In some cases I simply use Active Support Instrumentation
ActiveSupport::Notifications.instrument "my.custom.event", this: :data do
  # do your stuff here
end

ActiveSupport::Notifications.subscribe "my.custom.event" do |*args|
  data = args.extract_options! # {:this=>:data}
end
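For the observer-replacement use case, you might fire the event from a model callback and subscribe in an initializer. A sketch, where the event name, the TicketMailer, and the initializer path are illustrative assumptions:

# app/models/ticket.rb
class Ticket < ActiveRecord::Base
  after_create :instrument_created

  private

  def instrument_created
    ActiveSupport::Notifications.instrument("ticket.created", :ticket_id => id)
  end
end

# config/initializers/event_subscribers.rb
ActiveSupport::Notifications.subscribe("ticket.created") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  # hypothetical mailer; event.payload is the hash passed to instrument
  TicketMailer.new_ticket_notification(event.payload[:ticket_id]).deliver
end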
My alternative to Rails 3 Observers is a manual implementation which utilizes a callback defined within the model yet manages to (as agmin states in his answer above) "flip the dependency...coupling".
My objects inherit from a base class which provides for registering observers:
class Party411BaseModel < ActiveRecord::Base
  self.abstract_class = true

  class_attribute :observers

  def self.add_observer(observer)
    observers << observer
    logger.debug("Observer #{observer.name} added to #{self.name}")
  end

  def notify_observers(obj, event_name, *args)
    observers && observers.each do |observer|
      if observer.respond_to?(event_name)
        begin
          observer.public_send(event_name, obj, *args)
        rescue Exception => e
          logger.error("Error notifying observer #{observer.name}")
          logger.error e.message
          logger.error e.backtrace.join("\n")
        end
      end
    end
  end
end
(Granted, in the spirit of composition over inheritance, the above code could be placed in a module and mixed into each model.)
An initializer registers observers:
User.add_observer(NotificationSender)
User.add_observer(ProfilePictureCreator)
Each model can then define its own observable events, beyond the basic ActiveRecord callbacks. For instance, my User model exposes 2 events:
class User < Party411BaseModel
  self.observers ||= []

  # Named so it doesn't shadow the base class's notify_observers(obj, event_name, *args)
  after_commit :notify_observers_of_creation, :on => :create

  def signed_up_via_lunchwalla
    self.account_source == ACCOUNT_SOURCES['LunchWalla']
  end

  def notify_observers_of_creation
    notify_observers(self, :new_user_created)
    notify_observers(self, :new_lunchwalla_user_created) if self.signed_up_via_lunchwalla
  end
end
Any observer that wishes to receive notifications for those events merely needs to (1) register with the model that exposes the event and (2) have a method whose name matches the event. As one might expect, multiple observers can register for the same event, and (in reference to the 2nd paragraph of the original question) an observer can watch for events across several models.
The NotificationSender and ProfilePictureCreator observer classes below define methods for the events exposed by various models:
class NotificationSender
  # Class-level methods, since the initializer registers the classes themselves
  def self.new_user_created(user)
    ...
  end

  def self.new_invitation_created(invitation)
    ...
  end

  def self.new_event_created(event)
    ...
  end
end

class ProfilePictureCreator
  def self.new_lunchwalla_user_created(user)
    ...
  end

  def self.new_twitter_user_created(user)
    ...
  end
end
One caveat is that the names of all events exposed across all the models must be unique.
I think the issue with Observers being deprecated is not that observers were bad in and of themselves, but that they were being abused.
I would caution against adding too much logic to your callbacks, or simply moving code around to simulate the behavior of an observer, when there is already a sound solution to this problem: the Observer pattern.
If it makes sense to use observers, then by all means use observers. Just understand that you will need to make sure your observer logic follows sound coding practices, for example SOLID.
The observer gem is available on RubyGems if you want to add it back to your project:
https://github.com/rails/rails-observers
Also see this brief thread; while not a fully comprehensive discussion, I think the basic argument is valid:
https://github.com/rails/rails-observers/issues/2
You could try https://github.com/TiagoCardoso1983/association_observers . It is not yet tested against Rails 4 (which hasn't launched yet) and needs some more collaboration, but you can check whether it does the trick for you.
How about using a PORO instead?
The logic behind this is that your 'extra actions on save' are likely to be business logic, which I like to keep separate from both AR models (which should be as simple as possible) and controllers (which are bothersome to test properly).
class LoggedUpdater
  def self.save!(record)
    record.save!
    # log the change here
  end
end
And simply call it as such:
LoggedUpdater.save!(user)
You could even expand on it by injecting extra post-save action objects:
LoggedUpdater.save!(user, [EmailLogger.new, MongoLogger.new])
And to give an example of the 'extras' (you might want to spiffy them up a bit, though):
class EmailLogger
  def call(msg)
    # send email with msg
  end
end
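A sketch of how the listener-accepting version of LoggedUpdater might look (the two-argument save! signature and the call interface on the loggers are assumptions, not part of the original snippet):

class LoggedUpdater
  # Save the record, then hand it to each injected post-save action.
  def self.save!(record, listeners = [])
    record.save!
    listeners.each { |listener| listener.call(record) }
  end
end

LoggedUpdater.save!(user, [EmailLogger.new, MongoLogger.new])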
If you like this approach, I recommend a read of Bryan Helmkamp's 7 Patterns blog post.
EDIT: I should also mention that the above solution allows for adding transaction logic as well when needed. E.g. with ActiveRecord and a supported database:
class LoggedUpdater
  def self.save!(records)
    ActiveRecord::Base.transaction do
      records.each(&:save!)
      # log the changes here
    end
  end
end
It's worth mentioning that the Observable module from the Ruby standard library cannot be used in ActiveRecord-like objects, since its instance methods changed? and changed will clash with the ones from ActiveModel::Dirty.
Bug report for Rails 2.3.2
I had the same problem! I found a solution: ActiveModel::Dirty, which lets you track your model's changes!
include ActiveModel::Dirty

before_save :notify_categories, :if => :data_changed?

def notify_categories
  self.categories.map!{ |c| c.update_results(self.data) }
end
http://api.rubyonrails.org/classes/ActiveModel/Dirty.html
When should I save my models in Rails? And who should be responsible for calling save: the model itself, or the caller?
Let's say I have (public) methods like update_points, update_level, etc. in my User model. There are two options:
The model/method is responsible for calling save. So each method will just call self.save.
The caller is responsible for calling save. So each method only updates the attributes but the caller calls user.save when it's done with the user.
The tradeoffs are fairly obvious:
In option #1 the model is guaranteed to save, but we call save multiple times per transaction.
In option #2 we call save only once per transaction, but the caller has to make sure to call save. For example team.leader.update_points would require me to call team.leader.save which is somewhat non-intuitive. This can get even more complicated if I have multiple methods operating on the same model object.
Adding more specific info as per request:
update_level looks at how many points the user has and updates the level of the user. The function also makes a call to the Facebook API to notify it that the user has achieved a new level, so I might potentially execute it as a background job.
My favorite way of implementing stuff like this is using attr_accessors and model hooks. Here is an example:
class User < ActiveRecord::Base
  attr_accessor :pts

  after_validation :adjust_points, :on => :update

  def adjust_points
    if self.pts
      self.points = self.pts / 3 # Put whatever code here that needs to be executed
    end
  end
end
Then in your controller you do something like this:
def update
  User.find(params[:id]).update_attributes!(:pts => params[:points])
end
We are creating a system in Ruby on Rails and we want to be able to offer our users a bit of control about notifications and actions that can take place when some pre-defined trigger occurs. In addition, we plan on iterating through imported data and allowing our users to configure some actions and triggers based on that data.
Let me give you a few examples to better clarify:
Trigger                                  Action
------------------------------------------------------------------------
New Ticket is Created                    User receives an e-mail
New Ticket Parsed for Keyword 'evil'     Ticket gets auto-assigned to a particular group
User Missed 3 Meetings                   A ticket is automatically created
Ideally, we would like some of the triggers to be configurable. For instance, the last example would possibly let you configure how many meetings were missed before the action took place.
I was wondering what patterns might help me in doing this event/callback situation in Ruby on Rails. Also, the triggers and actions may be configurable, but they will be predefined; so, should they be hard coded or stored in the database?
Any thoughts would be greatly appreciated. Thanks!
Update 1: After looking at it, I noticed that the badges system on SO is somewhat similar: based on these criteria, I want to do this action. It's slightly different, but I want to be able to easily add new criteria and actions and present them to the users. Any thoughts relating to this?
I think that what you are looking for is Observers.
In your examples, Observers could handle the first and the third example (but not the second one, since an Observer only observes the object rather than interacting with it, even though interacting is technically possible).
Some code to show what I mean:
class TicketObserver < ActiveRecord::Observer
  def after_create(ticket)
    UserMailer.deliver_new_ticket_notification
  end
end

class UserObserver < ActiveRecord::Observer
  def after_update(user)
    # Ticket.create rather than Ticket.new, so the ticket is actually persisted
    Ticket.create if user.recently_missed_a_meeting and user.missed_meetings > 3
  end
end
And then add the observers to environment.rb
config.active_record.observers = :user_observer, :ticket_observer
Of course you will have to fill in the logic for missed_meetings, but there is one detail to mention.
Since after_update will trigger every time the user is updated, the recently_missed_a_meeting attribute is useful. I usually follow the thinking of restful-authentication and have an instance variable that is set to true every time I want to trigger that behavior. That can be done in a callback or in some custom logic, depending on how you track the meetings.
And for the second example, I would put it in a before_update callback, perhaps having the keywords in a lookup table to let users update which words should trigger the move to a specific group.
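A sketch of what that lookup-table approach might look like (TicketKeyword, its group association, and the description column are assumptions for illustration; shown on before_save so it also covers newly created tickets):

class Ticket < ActiveRecord::Base
  before_save :assign_group_from_keywords

  private

  def assign_group_from_keywords
    # Hypothetical lookup table: each TicketKeyword row maps a word to a group.
    TicketKeyword.find_each do |keyword|
      if description.to_s.include?(keyword.word)
        self.group = keyword.group
        break
      end
    end
  end
end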
You should look at the "callback" methods in Rails.
For docs, see: Callbacks
Your first rule would be implemented via the after_create method.
If you want them to be configurable, I would suggest using a model / table to store the possible actions and doing a lookup within the callback.
If this is high volume, be sure to consider caching the configuration since it would end up doing a db lookup on each callback.
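A sketch of that configurable-lookup idea (the TriggerAction model, its columns, the TicketMailer, and the cache key are assumptions for illustration):

class Ticket < ActiveRecord::Base
  after_create :run_configured_actions

  private

  def run_configured_actions
    # Cache the configured actions so every callback doesn't hit the database.
    actions = Rails.cache.fetch("trigger_actions/ticket_created", :expires_in => 5.minutes) do
      TriggerAction.where(:trigger => "ticket_created").to_a
    end

    actions.each do |action|
      TicketMailer.new_ticket_notification(self).deliver if action.kind == "email"
    end
  end
end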
Maybe something like a state machine can help. Try the AASM gem for Rails.
I have an ActiveRecord model with a status column. When the model is saved with a status change, I need to write to a history file the change of status and who was responsible for the change. I was thinking an after_save callback would work great, but I can't use the status_changed? dynamic method to determine whether the history write needs to execute. I don't want to write to the history if the model is saved but the status wasn't changed. My only thought on handling it right now is to use an instance variable flag to determine whether the after_save should execute. Any ideas?
This may have changed since the question was posted, but the after_save callback should have the *_changed? dynamic methods available and set correctly:
class Order
  after_save :handle_status_changed, :if => :status_changed?
end
or
class Order
  after_save :handle_status_changed

  def handle_status_changed
    return unless status_changed?
    ...
  end
end
Works correctly for me w/ Rails 2.3.2.
Use a before_save callback instead. Then you have access to both the new and old status values. Callbacks are wrapped in a transaction, so if the save fails or is canceled by another callback, the history write will be rolled back as well.
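For example, a sketch (StatusHistory and the changed_by accessor are hypothetical, standing in for however you record who made the change):

class Order < ActiveRecord::Base
  # Assumed to be set by the controller before saving.
  attr_accessor :changed_by

  before_save :record_status_change, :if => :status_changed?

  private

  def record_status_change
    # status_was holds the old value; status holds the new one.
    StatusHistory.create!(:order => self, :from => status_was, :to => status, :changed_by => changed_by)
  end
end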
I see two solutions:
Like you said: add a variable flag and run the callback when it is set.
Run save_history after updating your record.
Example:
old_status = @record.status
if @record.update_attributes(params[:record])
  save_history_here if old_status != @record.status
  flash[:notice] = "Successful!"
  ...
else
  ...
end
Has anyone not heard of database triggers?
If you write an on_update database trigger on the database server, then every time a record gets updated, it will create a historical copy of the previous record's values in the associated audit table.
This is one of the main things I despise about Rails. It spends so much time trying to do everything for the developer that it fools developers into thinking that they have to follow such vulgar courses of action as writing specialized rails methods to do what the freaking database server already is fully capable of doing all by itself.
shakes head at Rails once again