Rails stack level too deep in after_commit

I have a TreatmentEvent model. Here are the relevant parts:
class TreatmentEvent < ActiveRecord::Base
attr_accessible :taken #boolean
attr_accessible :reported_taken_at #DateTime
end
When I set the taken column, I want to set reported_taken_at if taken is true. So I tried an after_save callback like so:
def set_reported_taken_at
self.update_attribute(:reported_taken_at, Time.now) if taken?
end
I think update_attribute calls save, so that's causing the stack level too deep error. But using the after_commit callback is causing this to happen, too.
Is there a better way to conditionally update one column when another changes? This answer seems to imply you should be able to call update_attributes in an after_save.
Edit
This also happens when using update_attributes:
def set_reported_taken_at
self.update_attributes(reported_taken_at: Time.now) if self.taken?
end

As a note, stack level too deep generally means an infinite loop
--
In your case, the issue will almost certainly be caused by:
after_commit :set_reported_taken_at
def set_reported_taken_at
self.update_attribute(:reported_taken_at, Time.now) if taken?
end
--
The problem is that after_commit tries to save reported_taken_at even though you've just saved the record, so you end up going over the record again and again and again...
This is often known as a recursive loop. Recursion is used a lot in native development, but for request (HTTP) based apps it's bad, since it leads to never-ending processing of your request.
Fix
Your fix should be like this:
#model
before_save :set_reported_taken_at
def set_reported_taken_at
self.reported_taken_at = Time.now if taken? #-> assuming you have a "taken" method
end

Can't you use a before_save? You can see if the other field value has changed and if so update this field. That way you just have one DB call.
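A minimal sketch of that idea, using the taken and reported_taken_at columns from the question (taken_changed? comes from ActiveModel::Dirty):

before_save :set_reported_taken_at

def set_reported_taken_at
  # Only touch the timestamp when the flag actually changed during this save
  # and is now true, so unrelated saves leave reported_taken_at alone.
  self.reported_taken_at = Time.now if taken_changed? && taken?
end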

Related

Is destroying a persistent record in a `before_validation` callback bad code?

I want to prevent reports where distributed and checked are both nil or zero from being created.
I also want users to be able to delete existing reports by setting these values to either nil or zero in the form (reports are created and edited in batches and the UX leans towards setting the value to 0 or deleting the values from the form instead of a 'delete' button).
As a hobbyist, something about this feels like a bad idea:
class Report < ApplicationRecord
...
before_validation :prevent_meaningless_reports, if: -> { (distributed.nil? || distributed.zero?) && (checked.nil? || checked.zero?) }
...
...
private
def prevent_meaningless_reports
if new_record?
throw :abort
else #persisted?
self.destroy
end
end
end
It feels bad to be destroying a record during a before_* callback.
I'm risking asking a potentially opinion-based question because it seems like I might be violating some software principle that would tell me why this is a bad idea.
If this is acceptable behavior, is it better to do this in two blocks, one for before_create (for new records) and one for before_update (for persistent records)?
The issue you'll run into with this is when you try to update a meaningless_report that already exists in order to perform further operations on it.
E.g:
meaningless_report.update(attribute: some_value)
meaningless_report.some_other_value #meaningless_report is already removed from the db and frozen
If you try to update the meaningless_report object again, you'll run into a RuntimeError, because the destroyed record is frozen. So this is definitely not a good idea.
A better option is to use ActiveRecord validations: we validate reports before they are created to ensure they are not meaningless, and the same validation ensures existing records don't become meaningless during updates.
E.g:
class Report < ApplicationRecord
validate :validate_meaningful_reports
def not_meaningful_report?
(distributed.nil? || distributed.zero?) && (checked.nil? || checked.zero?)
end
private
def validate_meaningful_reports
if not_meaningful_report?
errors.add(:base, "Report violates validations")
end
end
end
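For instance, in a console session (a hypothetical check against the validation above):

report = Report.new(distributed: 0, checked: nil)
report.valid?        # => false
report.errors[:base] # => ["Report violates validations"]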
With this, we ensure that no meaningless report is created. To handle old meaningless reports, we can use a script or maybe a rake task for this. The script/task can look like this:
Report.unscoped.find_each { |r| r.destroy if r.not_meaningful_report? }
This is much safer.
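If you go the rake task route, a minimal sketch could look like this (the file path, namespace, and task name are assumptions):

# lib/tasks/reports.rake (hypothetical file name)
namespace :reports do
  desc "Destroy existing reports that fail the meaningfulness check"
  task destroy_meaningless: :environment do
    Report.unscoped.find_each do |report|
      report.destroy if report.not_meaningful_report?
    end
  end
end

You would then run it with rake reports:destroy_meaningless.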

How do I create a transaction out of multiple Rails save methods?

I'm using Rails 5. I have a model that looks like this
class CryptoIndexCurrency < ApplicationRecord
belongs_to :crypto_currency
end
I have a service method where I want to populate this table with records, which I do like so
CryptoIndexCurrency.delete_all
currencies.each do |currency|
cindex_currency = CryptoIndexCurrency.new({:crypto_currency => currency})
cindex_currency.save
end
The problem is that the above is not very transactional: if something fails after the first statement, the delete_all will have executed but nothing else will have. What is the proper way to create a transaction here, and, equally important, where do I place that code? I'd like to know the Rails convention here.
I think you can just do:
CryptoIndexCurrency.transaction do
CryptoIndexCurrency.delete_all
CryptoIndexCurrency.create(currencies.map{ |c| {crypto_currency: c} })
end
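One caveat with that snippet: create does not raise when a record fails validation, so the transaction would still commit with some records missing. If you want any failure to roll back the whole block, a sketch using create! (which raises ActiveRecord::RecordInvalid on invalid records) would be:

CryptoIndexCurrency.transaction do
  CryptoIndexCurrency.delete_all
  currencies.each do |currency|
    # create! raises on validation failure, which rolls back the surrounding transaction
    CryptoIndexCurrency.create!(crypto_currency: currency)
  end
end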
If you are using ActiveRecord you can use the built-in transaction mechanism. Otherwise, one way would be to make sure you validate all your data and only save when everything is valid. Take a look at validates_associated and the like.
That said, if your process is inherently non-validatable or non-deterministic (e.g. you call external APIs to validate a payment), then the best approach is to ensure you have cleanup methods that take care of failures.
If you have deterministic failures:
def new_currencies_valid?(currencies)
currencies.each do |currency|
return false unless currency.valid?(:create)
end
true
end
if new_currencies_valid?(new_currencies)
Currency.delete_all # See note
new_currencies.each(&:save)
end
A side note: unless you really understand what you are doing, I suggest calling destroy_all, which runs callbacks on deletion (such as destroying dependent: :destroy associations).

Rails. Update model attributes on save

I thought this would be an easy task, but I've gotten a little stuck on this issue:
I'd like to update one of the attributes of the model whenever it's saved, so I have a callback in the model:
after_save :calculate_and_save_budget_contingency
def calculate_and_save_budget_contingency
self.total_contingency = self.budget_contingency + self.risk_contingency
self.save
# => this doesn't work as well.... self.update_attribute :budget_contingency, (self.budget_accuracy * self.budget_estimate) / 1
end
And the webserver shoots back with the message ActiveRecord::StatementInvalid (SystemStackError: stack level too deep: INSERT INTO "versions"
Which basically tells me that there is an infinite loop: we save the model, after_save runs, and then we save the model again... which kicks off another round of saving the model.
I'm just stuck at this point on this model attribute calculation. If anyone has encountered this issue and has a nice, nifty Rails solution, please shoot me a message below. Thanks!
Change your code to the following:
before_save :calculate_and_save_budget_contingency
def calculate_and_save_budget_contingency
self.total_contingency = self.budget_contingency + self.risk_contingency
end
The reason is that if you call save in after_save, you end up in an infinite loop: a save calls the after_save callback, which calls save, which calls after_save, which...
In general it's wise to use after_save only for changing associated models, etc.
Try before_save or before_validation, but don't include the .save

Access previous value of association on record update

I have a "event" model that has many "invitations". Invitations are setup through checkboxes on the event form. When an event is updated, I wanted to compare the invitations before the update, to the invitations after the update. I want to do this as part of the validation for the event.
My problem is that I can't seem to access the old invitations in any model callback or validation. The transaction has already began at this point and since invitations are not an attribute of the event model, I can't use _was to get the old values.
I thought about trying to use a "after_initialize" callback to store this myself. These callbacks don't seem to respect the ":on" option though so I can't do this only :on :update. I don't want to run this every time a object is initialized.
Is there a better approach to this problem?
Here is the code in my update controller:
def update
params[:event][:invited_user_ids] ||= []
if @event.update_attributes(params[:event])
redirect_to @event
else
render action: "edit"
end
end
My primary goal is to make it so you can add users to an event but cannot remove users. I want to validate that the posted invited_user_ids contains all the users that are currently invited.
--Update
As a temporary solution I made use of the :before_remove option on the has_many association. I set it up so that it raises an ActiveRecord::Rollback exception, which prevents users from being uninvited. Not exactly what I want, because I can't display a validation error, but it does prevent it.
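A minimal sketch of that workaround (the invitations and invited_users association names are assumptions based on the description):

class Event < ActiveRecord::Base
  has_many :invitations
  # before_remove fires whenever a user is about to be removed from the collection,
  # e.g. when invited_user_ids= drops an existing invitee
  has_many :invited_users, through: :invitations, source: :user,
           before_remove: :prevent_uninvite

  private

  def prevent_uninvite(_user)
    # Raising here rolls back the surrounding save, so the invitee is kept
    raise ActiveRecord::Rollback
  end
end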
Thank you,
Corsen
Could you use ActiveModel::Dirty? Something like this:
class Event < ActiveRecord::Base
validate :no_invitees_removed
def no_invitees_removed
if invitees.changed? && (invitees - invitees_was).present?
# ... add an error or re-add the missing invitees
end
end
end
Edit: I didn't notice that the OP already discounted ActiveModel::Dirty since it doesn't work on associations. My bad.
Another possibility is overriding the invited_user_ids= method to append the existing user IDs to the given array:
class Event < ActiveRecord::Base
# ...
def invited_user_ids_with_guard=(ids)
self.invited_user_ids_without_guard = self.invited_user_ids.concat(ids).uniq
end
alias_method_chain :invited_user_ids=, :guard
end
This should still work for you since update_attributes ultimately calls the individual attribute= methods.
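As a hypothetical usage example, assuming the event currently has invited_user_ids == [1, 2]:

event.update_attributes(invited_user_ids: [2, 3])
event.invited_user_ids # => [1, 2, 3] -- existing invitees are kept, the new one is added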
Edit: #corsen asked in a comment why I used alias_method_chain instead of super in this example.
Calling super only works when you're overriding a method that's defined further up the inheritance chain. Mixing in a module or inheriting from another class provides a means to do this. That module or class doesn't directly "add" methods to the deriving class. Instead, it inserts itself in that class's inheritance chain. Then you can redefine methods in the deriving class without destroying the original definition of the methods (because they're still in the superclass/module).
In this case, invited_user_ids is not defined on any ancestor of Event. It's defined through metaprogramming directly on the Event class as a part of ActiveRecord. Calling super within invited_user_ids= would result in a NoMethodError because there is no superclass definition, and redefining the method loses its original definition. So alias_method_chain is really the simplest way to achieve super-like behavior in this situation.
Sometimes alias_method_chain is overkill and pollutes your namespace and makes it hard to follow a stack trace. But sometimes it's the best way to change the behavior of a method without losing the original behavior. You just need to understand the difference in order to know which is appropriate.

Is there a way to prevent serialized attributes in rails from getting updated even if there are not changes?

This is probably one of the things that all new users find out about Rails sooner or later. I just realized that Rails updates all fields marked with the serialize keyword, without checking whether anything inside them has really changed. In a way that is the sensible thing to do for a generic framework.
But is there a way to override this behavior? If I can keep track of whether the values in a serialized field have changed, is there a way to prevent it from being pushed in the update statement? I tried using update_attributes and limiting the hash to the fields of interest, but Rails still updates all the serialized fields.
Suggestions?
Here is a similar solution for Rails 3.1.3.
From: https://sites.google.com/site/wangsnotes/ruby/ror/z00---topics/fail-to-partial-update-with-serialized-data
Put the following code in config/initializers/
ActiveRecord::Base.class_eval do
class_attribute :no_serialize_update
self.no_serialize_update = false
end
ActiveRecord::AttributeMethods::Dirty.class_eval do
def update(*)
if partial_updates?
if self.no_serialize_update
super(changed)
else
super(changed | (attributes.keys & self.class.serialized_attributes.keys))
end
else
super
end
end
end
Yes, that was bugging me too. This is what I did for Rails 2.3.14 (or lower):
# config/initializers/nopupdateserialize.rb
module ActiveRecord
class Base
class_attribute :no_serialize_update
self.no_serialize_update = false
end
end
module ActiveRecord2
module Dirty
def self.included(receiver)
receiver.alias_method_chain :update, :dirty2
end
private
def update_with_dirty2
if partial_updates?
if self.no_serialize_update
update_without_dirty(changed)
else
update_without_dirty(changed | (attributes.keys & self.class.serialized_attributes.keys))
end
else
update_without_dirty
end
end
end
end
ActiveRecord::Base.send :include, ActiveRecord2::Dirty
Then in your controller use:
model_item.no_serialize_update = true
model_item.update_attributes(params[:model_item])
model_item.increment!(:hits)
model_item.update_attribute(:nonserializedfield, "update me")
etc.
Or define it in your model if you do not expect any changes to the serialized field once created (but update_attribute(:serialized_field, "update me") still works!):
class Model < ActiveRecord::Base
serialize :serialized_field
def no_serialize_update
true
end
end
I ran into this problem today and ended up hacking my own serializer together with a getter and setter. First I renamed the field to #{column}_raw and then used the following code in the model (for the media attribute in my case).
require 'json'
...
def media=(media)
self.media_raw = JSON.dump(media)
end
def media
JSON.parse(media_raw) if media_raw.present?
end
Now partial updates work great for me, and the field is only updated when the data is actually changed.
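A hypothetical usage sketch, assuming the model has a plain text column named media_raw and the record already has media stored:

record = SomeModel.find(1)         # SomeModel is a placeholder name
record.media = record.media        # round-trips to the same JSON string
record.save                        # partial update: media_raw is not written
record.media = { "views" => 10 }   # a different JSON string
record.save                        # media_raw is now included in the UPDATE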
The problem with Joris' answer is that it hooks into the alias_method_chain chain, disabling all the links added after it (such as update_with_callbacks, which explains the problem of callbacks not being triggered).
You may start with a chain like this
update -> update_with_foo -> update_with_bar -> update_with_baz
Notice that update_without_foo points to update_with_bar and update_without_bar to update_with_baz
Since you can't directly modify update_with_bar per the inner workings of alias_method_chain you might try to hook into the chain by adding a new link (bar2) and calling update_without_bar, so:
alias_method_chain :update, :bar2
Unfortunately, this will get you the following chain:
update -> update_with_bar2 -> update_with_baz
So update_with_foo is gone!
So, knowing that alias_method_chain won't let you redefine _with methods, my solution so far has been to redefine update_without_dirty and do the attribute selection there.
Not quite a solution but a good workaround in many cases for me was simply to move the serialized column(s) to an associated model - often this actually was a good fit semantically anyway.
There is also a discussion in https://github.com/rails/rails/issues/8328.
