I think I'm being dense here because I keep getting a stack too deep error...
I have a Child and a Parent model, related to each other. I want 2 things to happen:
if you try to update the Child, you cannot update its status_id to 1 unless it has a Parent association
if you create a Parent and then attach it to the Child, then the Child's status_id should be auto-set to 1.
Here's how the Parent association gets added:
parent = Parent.new
if parent.save
  child.update_attributes(parent_id: parent.id)
end
I have these callbacks on the Child model:
validate :mark_complete
after_update :set_complete

# this validation is here because there is another way to update the Child model's attributes
def mark_complete
  if self.status_id == 1 && self.parent.blank?
    errors[:base] << "status cannot be set to 1 without a parent"
  end
end

def set_complete
  if self.parent.present?
    self.update_attribute(:status_id, 1)
  end
end
The code above is actually not that efficient, because it makes 2 db hits when ideally it would be 1, all done at once. But I find it too brain-draining to figure out why... I'm not sure why it's not even working, so I can't even begin to think about making this a single db transaction.
EXAMPLE
Hopefully this helps clarify. Imagine a Charge model and an Item model. Each Item has a Charge. The Item also has an attribute paid. Two things:
If you update the Item, you cannot set paid to true until the Item has been associated with a Charge object.
If you link a Charge object to an Item by updating the charge_id attribute on the Item, then the code should save you time and auto-set paid to true.
There's a lot that I find confusing here, but it seems to me that you call :set_complete after_update, and within set_complete you are updating attributes, so you have a perpetual loop there. There might be other loops that I can't see, but that one stands out to me.
One way to avoid a circularly recursive situation like this is to provide a flag, as a parameter or otherwise, that will stop the loop from continuing.
In this case (though I am not sure about the case entirely), I think you could provide a flag indicating the origin of the call. If the origin of the update is a charge being attached, then pass a flag that stops the check from happening, or modify it to keep the loop from happening. Perhaps a secondary set of logic is in order for such a case?
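For example, here's a rough sketch of the flag idea against the Child model from the question (the flag name is made up, and this is only one way to break the loop, not necessarily the best):

class Child < ActiveRecord::Base
  # virtual attribute, not a database column
  attr_accessor :skip_set_complete

  after_update :set_complete, :unless => :skip_set_complete

  def set_complete
    if parent.present?
      self.skip_set_complete = true     # stop the next update from re-entering this callback
      update_attribute(:status_id, 1)
    end
  end
end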
I faced a stack level too deep problem some time back when working with ActiveRecord callbacks.
In my case the problem was with update_attribute: after the update goes through, the callback (set_complete in your case) is called again, in which update_attribute is triggered again, and this repeats endlessly.
I got around that by using update_column instead, which does not trigger any callbacks or validations; however, setting a flag is what was advised more often online.
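For instance, a rough sketch of what set_complete from the question could look like with update_column (available from Rails 3.1):

def set_complete
  if parent.present?
    # update_column writes this one column straight to the database,
    # skipping callbacks and validations, so after_update does not fire again
    update_column(:status_id, 1)
  end
end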
At this point I do not have an answer for reducing your database write operations, and will add to this answer if I can think of anything.
Hope this helps
So a situation came up at work and I wanted to discuss it here because we could not get to an agreement between us:
We have two models, Order and Passport, which are related in a way that an Order has_one passport and a passport has_many orders. Whenever an order is completed, its associated passport must be 'locked', that is, turned into read-only (that information was already used to clear customs, so it can't be changed afterwards). We want to enforce that rule in the Passport model and we've thought of the following options:
Creating a validation. CONS: There will be records yielding valid? => false when technically the record is fine (although it can't be saved). For example, if other records have a validates_associated :passport on them, that could be a problem.
Overriding the readonly? method. CONS: This will raise an exception when trying to update that record, although you would expect that calling a save method won't ever raise one.
Creating a before_save callback. This has two flavors: either raise an exception (which is pretty much like the readonly? option) or add an #error and return false to stop the callback chain. CONS: Adding validation errors from outside a proper validation can be considered a bad practice. Also, you might find yourself calling valid? and getting true and then call save and get false.
This situation made us think a lot about the relationship between validations and Rails. What exactly does it mean for a record to be valid? Does it imply that saving will work?
I would like to hear your opinions to learn about this scenario. Maybe the best approach is none of the three! Thanks!
What about marking the record as read-only by using the readonly! instance method? See the API.
You could do it in an after_initialize callback (overriding initialize won't work here, because ActiveRecord does not call it for records loaded from the database), like:
class Passport < ActiveRecord::Base
  after_initialize :lock_if_used

  def lock_if_used
    readonly! if orders.count > 0 # or similar
  end
end
I think there is an extra alternative. What you describe dictates that the Passport model can have several different states. I would consider using a state machine to describe the relevant order status for the passport.
eg:
open
pending
locked
other_update_actions ...
With that in mind, all relevant order actions will trigger an event on the passport model and its state.
If it is possible to integrate the update actions with certain events, then you could handle the read-only part in a more elegant way (as an incompatible state transition).
As an extra check you can always keep an ugly validator as a last resort, to prevent the model from being updated without going through the state machine.
You can check the aasm gem for this.
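A rough sketch of what that might look like with aasm (assuming a state string column on passports; the states, event names, and validator are illustrative):

class Passport < ActiveRecord::Base
  include AASM

  has_many :orders

  aasm :column => :state do
    state :open, :initial => true
    state :pending
    state :locked

    event :lock do
      transitions :from => [:open, :pending], :to => :locked
    end
  end

  # the "ugly validator as a last resort": reject updates once locked
  validate :not_already_locked, :on => :update

  def not_already_locked
    errors.add(:base, "passport is locked") if state_was == "locked"
  end
end

# when an order is completed:
# order.passport.lock!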
I seem to have a race condition in my Rails app. While deleting a user and all of the associated models that depend on it, new associated models are sometimes created by the user. User deletions can take a while if we're deleting a lot of content, so it makes sense that race conditions would exist here. This ends up creating models that point to a user that doesn't exist.
I've tried fixing this by creating a UserDeletion model, which acts as a sort of mutex lock. Before it starts deleting the user, it'll create a new UserDeletion record. When a user tries to create new content, it checks to make sure an associated UserDeletion record doesn't exist. After it's done, it deletes it.
This hasn't solved the problem, though, so I'm wondering how other people have handled similar issues with AR callbacks and race conditions.
First of all, when there is a lot of associated content, we moved to a manual delete process using SQL DELETE instead of Rails' destroy. (Though this may not work for you if you have carelessly introduced a lot of callback dependencies that do something after a record is destroyed.)
def custom_delete
  self.class.transaction do
    related_objects.delete_all
    related_objects_2.delete_all
    delete
  end
end
If you find yourself writing this all the time, you can simply wrap it inside a class method that accepts a list of related-object keys to delete.
class ActiveRecord::Base
  class << self
    def bulk_delete_related(*args)
      define_method "custom_delete" do
        ActiveRecord::Base.transaction do
          args.each do |field|
            send(field).delete_all
          end
          delete
        end
      end
    end
  end
end

class SomeModel < ActiveRecord::Base
  bulk_delete_related :related_objects, :related_objects2, :related_object
end
I inserted the class method directly inside the ActiveRecord::Base class, but you should probably extract it into a module. Also, this only speeds things up; it does not resolve the original problem.
Secondly, you can introduce FK constraints (we did that to ensure integrity, as we do a lot of custom SQL). It will work in such a way that the User won't be deleted as long as there are linked objects, though that might not be what you want. To increase the effectiveness of this solution you can always delegate user deletion to a background job, which will retry deleting the user until it actually can be deleted (no new objects dropped in).
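For example, a sketch of such a constraint added via raw SQL in a migration (the table and constraint names are made up; PostgreSQL syntax):

class AddUserForeignKeyToPosts < ActiveRecord::Migration
  def up
    # the database itself now refuses to delete a user that still has posts
    execute "ALTER TABLE posts ADD CONSTRAINT fk_posts_user_id FOREIGN KEY (user_id) REFERENCES users (id)"
  end

  def down
    execute "ALTER TABLE posts DROP CONSTRAINT fk_posts_user_id"
  end
end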
Alternatively you can go the other way around, as we did in some cases at my previous job. If it's more important to delete the user than to be sure there are no zombie records, use some sweeper process to clean up from time to time.
Finally, the truth is somewhere in the middle: apply constraints to relations that definitely need to be cleaned up before removing a user, and rely on the sweeper to remove less important ones that shouldn't interfere with user deletion.
The problem is not trivial, but it should be solvable to some extent.
I have 2 records of the same model, and I want to keep some of the data on these records in sync.
I was going to do a after_save callback (or maybe observer) to trigger updating the other record, but I am afraid this is going to cause an infinite loop of saves because the other record will cause a callback.
I read here that you can bypass callbacks on save, but these approaches seem hackish and not consistent between Rails 2 and 3 (we are moving to Rails 3 in a couple of months).
Is there a better option?
You can create an attr_accessor to use as a flag:
attr_accessor :dont_run_callback
after_save :my_callback
def my_callback
  MyModel.find(1).update_attributes(..., :dont_run_callback => true) unless dont_run_callback
end
something like that
You can use the update_columns method when updating the 2nd record based on updates to the 1st one, and vice versa; it writes directly to the database, so no callbacks fire on the other record.
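Roughly like this (the twin_id and shared_field names are invented for the example); since update_columns skips callbacks and validations, the other record's after_save never fires. Note that update_columns exists from Rails 4.0; on 3.1+ there is update_column for a single column.

class Record < ActiveRecord::Base
  after_save :sync_twin

  def sync_twin
    twin = Record.find(twin_id)                         # hypothetical column pointing at the paired record
    twin.update_columns(:shared_field => shared_field)  # no callbacks, no validations, no loop
  end
end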
I have an application where I would like to override the behavior of destroy for many of my models. The use case is that users may have a legitimate need to delete a particular record, but actually deleting the row from the database would destroy referential integrity that affects other related models. For example, a user of the system may want to delete a customer with whom they no longer do business, but transactions with that customer need to be maintained.
It seems I have at least two options:
Duplicate data into the necessary models, effectively denormalizing my data model so that deleted records won't affect related data.
Override the "destroy" behavior of ActiveRecord to do something like set a flag indicating the user "deleted" the record and use this flag to hide the record.
Am I missing a better way?
Option 1 seems like a horrible idea to me, though I'd love to hear arguments to the contrary.
Option 2 seems somewhat Rails-ish but I'm wondering the best way to handle it. Should I create my own parent class that inherits from ActiveRecord::Base, override the destroy method there, then inherit from that class in the models where I want this behavior? Should I also override finder behavior so records marked as deleted aren't returned by default?
If I did this, how would I handle dynamic finders? What about named scopes?
If you're not actually interested in seeing those records again, but only care that the children still exist when the parent is destroyed, the job is simple: add :dependent => :nullify to the has_many call to set references to the parent to NULL automatically upon destruction, and teach the view to deal with that reference being missing. However, this only works if you're okay with not ever seeing the row again, i.e. viewing those transactions shows "[NO LONGER EXISTS]" under company name.
If you do want to see that data again, then what you want has nothing to do with actually destroying records, since destroying a record means you will never need to refer to it again. Hiding seems to be the way to go.
Instead of overriding destroy, since you're not actually destroying the record, it seems significantly simpler to put your behavior in a hide method that triggers a flag, as you suggested.
From there, whenever you want to list these records and only include visible ones, one simple solution is to add a visible scope that excludes hidden records, and simply not use it when you want to find a specific hidden record again. Another path is to use default_scope to hide hidden records and Model.with_exclusive_scope { find(id) } to pull up a hidden record, but I'd recommend against that, since it could be a serious gotcha for an incoming developer, and it fundamentally changes what Model.all returns to something that doesn't reflect what the method call suggests.
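A minimal sketch of that approach (assuming a boolean hidden column defaulting to false; the names are only for illustration):

class Customer < ActiveRecord::Base
  scope :visible, -> { where(:hidden => false) }

  # "delete" from the user's point of view: just flip the flag
  def hide!
    update_attribute(:hidden, true)
  end
end

Customer.visible    # listings only show non-hidden records
Customer.find(42)   # a specific hidden record is still reachable when needed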
I understand the desire to make the controllers look like they're doing things the Rails way, but when you're not really doing things the Rails way, it's best to be explicit about it, especially when it's really not that much of a pain to do so.
I wrote a plugin for this exact purpose, called paranoia. I "borrowed" the idea from acts_as_paranoid and basically re-wrote AAP using much less code.
When you call destroy on a record, it doesn't actually delete it. Instead, it will set a deleted_at column in your database to the current time.
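Usage looks roughly like this (assuming a deleted_at datetime column has been added to the table):

class Client < ActiveRecord::Base
  acts_as_paranoid
end

client = Client.first
client.destroy   # sets deleted_at instead of removing the row
Client.all       # no longer includes the "destroyed" record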
The README on the GitHub page should be helpful for installation & usage. If it isn't, then let me know and I'll see if I can fix that for you.
I have a before_create filter that checks if people are posting too many comments.
If they are I want to flag their account.
class Comment < ActiveRecord::Base
  before_create :check_rate_limit

  def check_rate_limit
    comments_in_last_minute = self.user.comments.count(:conditions => ["comments.created_at > ?", 1.minute.ago])
    if comments_in_last_minute > 2
      user.update_attribute :status, "suspended"
      return false
    end
    true
  end
end
The before filter returns false to stop the comment from being created. The problem is that this triggers a ROLLBACK, which also undoes the change I made to the user model.
What's the correct pattern to accomplish this? Specifically: running a check each time an object is created and being able to edit another model if the check fails.
I think the best approach to rate limiting is queueing the requests and processing them at the maximal allowable rate.
The trigger to flag overuse then simply becomes a set number of requests in the queue.
It also has the advantage of not immediately impacting your database backend, as it moves the bottleneck in front of the database, into a more controllable queueing system. This allows the site to remain responsive even under "attack".
These queues can be as simple as a hashmap with a linked list, but it's better to use a thread-safe FIFO if one is available.
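A very rough sketch of the idea in plain Ruby (the constants, the flag_overuse call, and the per-user handling are placeholders; a real setup would use a proper background job queue):

require 'thread'

COMMENT_QUEUE    = Queue.new   # Ruby's Queue is a thread-safe FIFO
MAX_PENDING      = 100         # queue depth at which we flag overuse
COMMENTS_PER_SEC = 2           # maximal allowable write rate

# producer: enqueue instead of hitting the database directly
def enqueue_comment(attributes)
  COMMENT_QUEUE << attributes
  # flag_overuse is a placeholder for suspending the offending account;
  # a real version would track per-user counts rather than total queue depth
  flag_overuse(attributes[:user_id]) if COMMENT_QUEUE.size > MAX_PENDING
end

# consumer: a worker thread that drains the queue at the allowed rate
Thread.new do
  loop do
    Comment.create(COMMENT_QUEUE.pop)   # pop blocks until something is queued
    sleep 1.0 / COMMENTS_PER_SEC
  end
end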
This isn't an ideal answer, but for now I ended up just returning true even when the account was suspended. This way one more comment went through, but future ones did not.
For me it seems like returning false in the callback doesn't stop the record from being returned, even though it is not saved to the database, which is weird.