When should I save my models in Rails, and who should be responsible for calling save: the model itself, or the caller?
Let's say I have (public) methods like update_points, update_level, etc. in my User model. There are two options:
The model/method is responsible for calling save. So each method will just call self.save.
The caller is responsible for calling save. So each method only updates the attributes but the caller calls user.save when it's done with the user.
The tradeoffs are fairly obvious:
In option #1 the model is guaranteed to save, but we call save multiple times per transaction.
In option #2 we call save only once per transaction, but the caller has to make sure to call save. For example, team.leader.update_points would require me to call team.leader.save, which is somewhat non-intuitive. This can get even more complicated if I have multiple methods operating on the same model object (see the sketch below).
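For illustration, a minimal sketch of both options (the points arithmetic is made up):

# Option 1: each method persists its own change
def update_points
  self.points += 10 # made-up arithmetic
  save
end

# Option 2: the method only mutates; the caller saves once at the end
def update_points
  self.points += 10
end

# Option 2 caller:
user = team.leader
user.update_points
user.update_level
user.save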
Adding more specific info as per request:
update_level looks at how many points the user has and updates the user's level. The method also makes a call to the Facebook API to notify it that the user has achieved a new level, so I might potentially execute it as a background job.
My favorite way of implementing stuff like this is using attr_accessor and model hooks. Here is an example:
class User < ActiveRecord::Base
  attr_accessor :pts

  after_validation :adjust_points, :on => :update

  def adjust_points
    if self.pts
      # pts arrives as a String from params, so convert it first;
      # put whatever code here that needs to be executed
      self.points = self.pts.to_i / 3
    end
  end
end
Then in your controller you do something like this:
def update
  User.find(params[:id]).update_attributes!(:pts => params[:points])
end
Let's put a bit of context on this question. Given an e-commerce application in Ruby on Rails, let's deal with two models for this example: User and CreditCard.
My User is in the system after registration, no issue there.
CreditCard is a model with the credit card information (yes, I know about PCI compliance, but that's not the point here).
In the CreditCard model, I include an after_validation callback that validates the credit card against the bank.
Let me put some simple code here.
models/user.rb
class User < ActiveRecord::Base
  enum :status, [:active, :banned]

  has_one :credit_card
end
models/credit_card.rb
class CreditCard < ActiveRecord::Base
  belongs_to :user

  after_validation :validate_at_bank

  def validate_at_bank
    result = Bank.validate(info) # using active_merchant, for example
    unless result.success?
      errors.add(:credit_card, "Bank doesn't validate")
      user.banned!
    end
  end
end
controllers/credit_cards_controller.rb
class CreditCardsController < ApplicationController
  def create
    @credit_card = CreditCard.new(credit_card_params) # from strong parameters
    if @credit_card.save
      render # success
    else
      render # failure
    end
  end
end
What's causing me issues
It looks like Rails opens a transaction in ActiveRecord when I'm doing a new. At this point nothing is sent to the database.
When the bank rejects the credit card, I want to ban the user. I do this by calling banned!. Now I realised this update goes into the same transaction. I can see the update, but since the save doesn't go through, everything is rolled back for both models. The credit card is not saved (that is good), and the user is not saved (this is not good, since I want to ban him).
I tried to add a transaction wrapper, but this only adds a database checkpoint. I could create a delayed job for the ban, but that seems like overkill to me. I could use an after_rollback callback, but I'm not sure that's the right way. I'm a bit surprised I never hit this scenario before, leading me to believe that my pattern is not correct, or that the point where I make this call is incorrect.
After much review and more digging I came up with 3 ways to handle this situation. Depending on the needs you have, one of them should be good for you.
Separate thread and new database connection
Calling the validate function explicitly before the save
Sending the task to be performed to a Delayed Job
Separate thread
The following answer shows how to do this operation.
https://stackoverflow.com/a/20743433/552443
This will work, but again, it's not that nice and simple.
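In rough form, based on that answer, it looks like this: the new thread checks out its own connection, so the ban commits even though the surrounding transaction rolls back.

def validate_at_bank
  result = Bank.validate(info)
  unless result.success?
    errors.add(:credit_card, "Bank doesn't validate")
    # run the ban on a separate thread and connection,
    # outside the current transaction
    Thread.new do
      ActiveRecord::Base.connection_pool.with_connection do
        user.banned!
      end
    end.join
  end
end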
Call valid? before the save
This is a very quick solution. The issue is that a new developer could erase the valid? line, thinking that the .save will do the job properly.
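A sketch of what that looks like in the controller (names taken from the question's code):

def create
  @credit_card = CreditCard.new(credit_card_params)
  # valid? runs the bank check (and the ban) outside of save's transaction
  if @credit_card.valid? && @credit_card.save
    render # success
  else
    render # failure
  end
end

Note that save runs the validations (and the bank call) a second time; calling save(validate: false) after a successful valid? would avoid that.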
Calling Delayed Job
This could be any ActiveJob provider. You can send the task of banning the user to a separate job. Depending on your setup this is quite clean, but not everybody needs DelayedJob.
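For example, with ActiveJob (the job class name is made up):

class BanUserJob < ActiveJob::Base
  queue_as :default

  def perform(user_id)
    User.find(user_id).banned!
  end
end

# in validate_at_bank, instead of calling user.banned! directly:
BanUserJob.perform_later(user.id)

One caveat: if your queue backend stores jobs in the same database (as Delayed Job does), the enqueue itself can be swallowed by the rolled-back transaction, so enqueue from an after_rollback callback or use a non-database backend.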
If you see anything else, please add it as a new solution or in the comments.
I have an object which, for reasons of data scaling, needs to call Object.where(x: y).delete_all. Using destroy_all is too time-consuming.
However, as a result of that, I want to enforce that no dev accidentally registers an after_destroy callback or even a dependent: :destroy relationship, because both will be silently ignored by the delete_all process.
What would be the best way in RSpec to test that after_destroy a model receives NO callbacks?
I'd like to achieve something along these lines:
it "should not have any registered after_destroy callbacks" do
  o = MyObject.new
  o.destroy
  expect(o).to_not have_received('*')
end
Possible?
I think that approach is doomed: many methods internal to Active Record are called during a call to destroy, and you'd have to sort those out from your own methods, or ones defined by callbacks (the bad methods won't necessarily have an obvious name, e.g. if they use the block form of before/after destroy).
You can however directly inspect the set of callbacks:
MyObject._destroy_callbacks
and check whether it is empty.
You can check what options have been set on associations more explicitly:
MyObject.reflect_on_all_associations.any? { |reflection| reflection.options[:dependent] == :destroy }
but these are implemented using callbacks, so they should show up in _destroy_callbacks anyway.
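Wrapped up as a spec, both checks might look like this (model name from the question):

it "has no destroy callbacks or dependent: :destroy associations" do
  expect(MyObject._destroy_callbacks).to be_empty
  dependent_destroy = MyObject.reflect_on_all_associations.any? do |reflection|
    reflection.options[:dependent] == :destroy
  end
  expect(dependent_destroy).to eq(false)
end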
I'm building a flow whereby a user can administer an event, specifically doing the following:
Register attendees
Attach photos
Attach fitness information
Each of these currently happens in a separate controller, and they can happen in any order.
Having completed all three, I'd then like to generate an email out to all attendees with links to the photos, etc.
I'm having trouble finding the best approach to check against the three conditions listed above. Currently I'm approaching it by creating a service called GenerateEmailsToAttendees with a method .try. This checks the conditions and, if all are met, generates the emails, e.g.:
class GenerateEmailsToAttendees
  def try(event)
    if event.has_some_fitness_activities? and event.has_some_attendees? and event.has_some_photos?
      event.attendances.each do |attendance|
        attendance.notify_user_about_write_up
      end
    end
  end
end
The problem now is that I have this GenerateEmailsToAttendees scattered across three controllers (AttendeesController#register, PhotosController#attach and FitnessInfoController#attach). I also run the risk of duplicating the notifications to the users.
Is there a better way? Could I use an observer to watch for the three conditions being met?
I can provide more information on the model structure if it's useful.
Thanks!
How about moving your observer to a cron job? I.e., remove it from all three controllers, put it in a rake task, and schedule it to run every week/day/hour etc. on all events that have met the conditions. You should probably set a flag on the event once the email has been generated so you don't spam the same user twice. I understand this might not be realtime, but it'll definitely solve your problem. I would recommend using https://github.com/javan/whenever for managing your cron jobs.
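A sketch of that setup (the task name and the notified flag are assumptions; the condition methods come from the question):

# lib/tasks/attendee_emails.rake
namespace :events do
  desc "Email attendees of events that have met all three conditions"
  task notify_attendees: :environment do
    Event.where(notified: false).find_each do |event|
      next unless event.has_some_fitness_activities? &&
                  event.has_some_attendees? &&
                  event.has_some_photos?
      event.attendances.each(&:notify_user_about_write_up)
      event.update!(notified: true) # flag it so we never email twice
    end
  end
end

# config/schedule.rb (whenever gem)
every 1.hour do
  rake "events:notify_attendees"
end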
I would put this into an after_save callback: then Rails will just take care of it automatically. You will probably need some system to ensure that this only happens once. I would do something like this:
add a new boolean field to track whether the event has all of the required "stuff" done in order to send out the email, eg "published"
when the various things that can make an event "published" happen, call a method in the Event model which tests if the event is ready to be published and currently NOT published: if it is, then update the model to be published and send the email.
e.g. (I'm guessing at your join table names here):
# app/models/event_attendance.rb
after_create :is_event_publishable?

def is_event_publishable?
  self.event.publishable?
end

# app/models/event_fitness_activity.rb
after_create :is_event_publishable?

def is_event_publishable?
  self.event.publishable?
end

# app/models/event_photo.rb
after_create :is_event_publishable?

def is_event_publishable?
  self.event.publishable?
end

# app/models/event.rb
def publishable?
  if !self.published && self.fitness_activities.size > 0 && self.attendances.size > 0 && self.photos.size > 0
    # mark the event published first so the emails only go out once
    self.update_attribute(:published, true)
    self.attendances.each do |attendance|
      attendance.notify_user_about_write_up
    end
  end
end
Now you don't need anything to do with this at all in your controllers. Generally I'm in favour of keeping controllers as absolutely standard as possible.
Yes, you can create an observer that watches multiple models with a single after_save callback, using something like this (the observer class name is your choice):
class AuditObserver < ActiveRecord::Observer
  observe :account, :balance

  def after_save(record)
    # make your checks here
  end
end
To preserve data integrity, I need to prevent some models from being modified after certain events. For example, a product shouldn't be allowed to be written off after it has been sold.
I've always implemented this in the controller, like so (pseudo-ish code):
class ProductsController < ApplicationController
  before_filter :require_product_not_sold, :only => [:write_off]

  private

  def require_product_not_sold
    if @product.sold?
      redirect_to @product, :error => "You can't write off a product that has been sold"
    end
  end
end
It just struck me that I could also do this in the model. Something like this:
class Product < ActiveRecord::Base
  before_update :require_product_not_sold

  private

  def require_product_not_sold
    if self.written_off_changed?
      # Add an error, fail validation, etc. Prevent the model from saving.
    end
  end
end
Also consider that there may be several different events that require that a product has not been sold to take place.
I like the controller approach: you can set meaningful flash messages rather than adding validation errors. But it feels like this code should be in the model (e.g. if I wanted to use the model outside of my Rails app).
Am I doing it wrong?
What are the advantages of handling this in my model?
What are the disadvantages of handling this in my model?
If I handle it in the model, should I really be using validates rather than a callback? What's the cleanest way to handle it?
Thanks for your ideas :)
It seems like you already have this one covered, based on your question. Ideally a model should know how to guard its state, as data objects are typically designed with portability in mind (even when they'll never be used that way).
But in this case you want to prevent an action before the user even has access to the model. Using a model validation in this case means you're too late and the user has already gone farther than he should by having access to and attempting to write off a product which should never have been accessible based on its sold status.
So I guess the ideal answer is "both." The model "should" know how to protect itself, as a backup and in case it's ever used externally.
However in the real world we have different priorities and constraints, so I'd suggest implementing the change you listed if feasible, or saving it for the next project if not.
As far as using a model callback versus a validation goes, I think that's a trickier question, but I'll go with a validation, because you'd likely want to present a message to the user, and validation is built for exactly that use (I'd consider this more of a friendly and expected user error than a hostile or security-related one, which you might handle differently).
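As a sketch, the validation version of your model code could look like this (attribute and predicate names taken from your question):

class Product < ActiveRecord::Base
  validate :cannot_write_off_sold_product, :on => :update

  private

  def cannot_write_off_sold_product
    # refuse the write-off if the product has already been sold
    if written_off_changed? && sold?
      errors.add(:base, "You can't write off a product that has been sold")
    end
  end
end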
Is that along the lines of what you've been considering?
I have an ActiveRecord model with a status column. When the model is saved with a status change, I need to write the change of status, and who was responsible for it, to a history file. I was thinking an after_save callback would work great, but I can't use the status_changed? dynamic method to determine whether the history write needs to execute. I don't want to write to the history if the model is saved but the status wasn't changed. My only thought on handling it right now is to use an instance variable flag to determine whether the after_save should execute. Any ideas?
This may have changed since the question was posted, but the after_save callback should have the *_changed? dynamic methods available and set correctly:
class Order
  after_save :handle_status_changed, :if => :status_changed?
end
or
class Order
  after_save :handle_status_changed

  def handle_status_changed
    return unless status_changed?
    ...
  end
end
Works correctly for me with Rails 2.3.2.
Use a before_save callback instead. Then you have access to both the new and old status values. Callbacks are wrapped in a transaction, so if the save fails or is canceled by another callback, the history write will be rolled back as well.
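A sketch of that approach (the StatusHistory model is made up):

class Order < ActiveRecord::Base
  before_save :record_status_change

  private

  def record_status_change
    return unless status_changed?
    # status_was holds the old value, status the new one
    StatusHistory.create!(:order => self, :from_status => status_was, :to_status => status)
  end
end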
I see two solutions:
Like you said: add a variable flag and run the callback when it is set.
Run save_history after updating your record.
Example:
old_status = @record.status
if @record.update_attributes(params[:record])
  save_history_here if old_status != @record.status
  flash[:notice] = "Successful!"
  ...
else
  ...
end
Has anyone not heard of database triggers?
If you write an on-update database trigger on the database server, then every time a record gets updated, it will create a historical copy of the previous record's values in the associated audit table.
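For concreteness, a trigger along those lines could even be created from a Rails migration (PostgreSQL syntax; the table and trigger names are made up):

class AddOrdersAuditTrigger < ActiveRecord::Migration
  def up
    execute <<-SQL
      CREATE TABLE orders_audit (LIKE orders);

      CREATE FUNCTION audit_order_update() RETURNS trigger AS $$
      BEGIN
        -- keep a copy of the previous values on every update
        INSERT INTO orders_audit SELECT OLD.*;
        RETURN NEW;
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER orders_audit_on_update
        BEFORE UPDATE ON orders
        FOR EACH ROW EXECUTE PROCEDURE audit_order_update();
    SQL
  end
end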
This is one of the main things I despise about Rails. It spends so much time trying to do everything for the developer that it fools developers into thinking they have to follow such vulgar courses of action as writing specialized Rails methods to do what the freaking database server is already fully capable of doing all by itself.
shakes head at Rails once again