Rails validations: update at the same time as creation - ruby-on-rails

I'm having some trouble figuring out how to order ActiveRecord writes so that my validations stay happy, and I'm not sure what to search for with this kind of problem.
The problem is this: before the request occurs, everything is valid; after the transformation completes, everything is valid again. But while the transformation is happening, since it touches more than one model instance, the database would pass through an invalid state if I updated each model one by one without taking both changes into account at the same time. I'd love some suggestions!
Background
I have a model called HelpRequest and another called HelperAssignment.
The rule is that a HelpRequest may have 0 or 1 active HelperAssignments. But if a Helper cannot complete the request, they may reassign it to another Helper, creating a new HelperAssignment. Since we need the history of assignments to a particular HelpRequest, there may be a number of HelperAssignments for a HelpRequest, but only one is active.
As a result, the HelperAssignment table has a few relevant attributes:
help_request_id: Refers to the HelpRequest corresponding to this assignment.
close_status: If this is set to reassigned, reassignment_id must be present.
reassignment_id: For a given help_request_id, only one may be nil (i.e. it is the current active assignment)
Problem
When a reassignment happens...
... if I create the new HelperAssignment first, it would break validations, because more than one active HelperAssignment would be present for the request :(
... if I update the old HelperAssignment first to have a close_status of reassigned, the new HelperAssignment wouldn't exist yet, so I couldn't get its ID, and therefore the validations would fail.
Is there an idiomatic way to do this transformation? I'd like to avoid a) disabling validations for this particular type of request, or b) adding an extra database state for "being in the process of reassigning". Looks like enforcing referential integrity in models can get a little tricky in Rails... thanks in advance!

Related

RAILS: Prevent creating a record twice if two requests are sent at the exact same time

I'm using an external API which, for whatever reason, posts every request twice at exactly the same time. This is out of my control.
This causes records to be created twice in my MySQL database.
I do have a validation in my model that checks whether a record already exists. This works fine if the two requests are sent one after the other, but not if they are sent at the same time.
The only thing I can think of is creating a job for each request that executes some random amount of time from now, and validating uniqueness in my model then. But I wonder if there is a better way of dealing with this?
So how do I deal with this issue?
What may help you is adding a uniqueness validation to your model and a unique constraint to the DB itself. The model validation won't be enough on its own; it's there to make your application "aware" of the restriction. What will actually prevent you from saving two duplicate records is the DB constraint.
So when two duplicate requests both pass your validation, the first will create a record and the second will try to do so too, but the MySQL adapter will raise ActiveRecord::RecordNotUnique, which you'll be able to handle.
That's actually nothing more than the common way to handle race conditions: add a unique index to the corresponding table and it will do the trick.
The Active Record uniqueness validation does not guarantee uniqueness at the database level.
The solution is straightforward to implement: you just need to enforce uniqueness at the database level as well as at the model level. You can create a database index and then require that the index be unique.
The documentation actually suggests the same:
The uniqueness helper validates that the attribute's value is unique right before the object gets saved. It does not create a uniqueness constraint in the database, so it may happen that two different database connections create two records with the same value for a column that you intend to be unique. To avoid that, you must create a unique index on that column in your database.

rails callback when save fails

In Rails 5 I've implemented a series of relationships that cause a chicken-and-egg problem when saving one complex model. (IDs are needed to relate objects, but don't exist until after they're saved.)
I'll need to create and save objects the hard way, but I need to clean up after myself if save fails, so I don't end up with a database full of empty objects.
From the model, how do I ensure my clean-up code runs if and only if a save fails? The standard list of callbacks doesn't seem to cover this case, unless I'm missing something.
Model callbacks are one of the most overused and misused features in Rails. They are great for adding simple hooks into the lifecycle of a model, but it is very hard to control when they fire (like in your tests, where they slow everything down) or to tap into the flow to add application logic.
If your callback ever affects more than the model defining it, that's a very good sign that you should reconsider using a callback.
In this case what you most likely want is a transaction:
begin
  A.transaction do
    a = A.create!(some_params)
    a.bs.create!(some_other_params)
  end
rescue ActiveRecord::RecordInvalid => invalid
  # handle the failure; invalid.record is the object that failed to save
end
This wraps the operations in a database transaction that is rolled back if either save fails, leaving the database untouched. Note that the rescue needs to sit outside the transaction block: rescuing ActiveRecord::RecordInvalid inside the block would swallow the exception before it reaches the transaction, so the rollback would never happen.
You can either inline this in the controller or wrap it in a service object.

Rails 4+ Best practices: Delete parent while keeping children

I want to keep child records and the hierarchy, even when the parent is deleted. I see two options:
1. Keep the existing parent and use a "deleted_at" field to indicate that the parent itself is inactive but the relationship still exists. This will lead to a number of effectively dead parent records being stored forever. Meh.
2. Assign all abandoned child records to a generic "collector" zombie parent record. I prefer this, but then you lose the history of the original source of the child record.
I don't have the Rails experience to see ahead as to which of these 2 would be the most advisable path to take, or maybe there's an altogether different solution.
SO is telling me this appears to be a subjective question and they may close it. I hope not, because I'm sure this is a question that others have as well.
It seems to me like you're basically asking about "soft delete" functionality. When I want this kind of behavior, I usually add an active attribute that defaults to true. I also add an active scope to the model so I can do something like Salon.active to conveniently get everything that's active.
So I guess my answer is that I'd do something like #1, which I would call a soft delete. Idea #2 seems pretty crazy to me.

Saving record fails due to uniqueness conflict with itself?

I have a procedure which receives two models: one which already exists, and another which holds new attributes that I want to merge into the first one.
Since other parts of the program hold references to the new model, I can't just operate on the existing one. Therefore I do the following:
def merge(new_model, existing_model)
  new_model.attributes = existing_model.attributes.merge(new_model.attributes)
  new_model.id = existing_model.id
end
Now when the new_model is saved, I get the uniqueness error (even though it's technically the same record). I also tried using the reload method, but that yields the same result.
Background:
The method above is run in a before_add callback on an association. I want to be able to call update on a model (with nested associations) without having to specify IDs of the nested models. This update is supposed to merge some associations, which is why I try to do the whole merge thing above.
You can't assign an existing record's id to a new model and then save it expecting that record to be updated: the id is the primary key, so saving the new object still tries to create a whole new record, and that's where the uniqueness validation error comes from. You'll need to think of some other design to accomplish what you are after. It may help to know that what you are trying to do sounds similar to a deep_dup, except that ActiveRecord doesn't define this method (but Hash does).

Which part of Rails should take care of updating chained model statuses?

I have 2 models:
ShippingClass, which defines a shipping fare and the destinations to which the fare is applicable
Shop, which has a state machine that determines whether a number of actions are allowed or not
A shop has_many shipping_classes. The user can add or delete shipping classes and, among other factors, whether at least one shipping_class exists has an impact on the shop state.
The bottom line is that any time a shipping class is added/deleted/modified I run an update_state method on the Shop model to keep the state up to date. What this method does is basically check how many shipping_classes are associated with the shop and adjust the shop status accordingly (e.g., to simplify: the shop state is active if there is at least 1 shipping_class assigned, otherwise it is inactive).
I was wondering if it is good practice to update the shop state from the controller. I am in fact evaluating the option of having the ShippingClass update the Shop upon save and destroy. While this may be more error-proof, as I wouldn't need to remember to update the Shop model every time I save a ShippingClass, it increases the coupling of the models.
Using callbacks to do this appears not to be an option, since callbacks are wrapped in a transaction. Therefore the parent model (Shop) cannot see the actual status of the associated models (ShippingClasses) before the transaction is completed.
EDIT
Another option, as pointed out below, is to place the model update in an observer. The advantage is that an observer is not wrapped in a transaction, so the Shop model should be able to check the associated ShippingClasses. The disadvantage is that, not being wrapped in a transaction, a failure to update the Shop model would desync the Shop status. This would still be better than placing the update in the controller, as it would be done once and for all.
Another option could be to override the save and destroy methods of ShippingClass and update the Shop model from there.
What is the best practice and why?
Thanks in advance
As you pointed out, keeping the logic in the model is best since it will then apply any time a controller attempts to modify/delete/add a ShippingClass. Within the model, I would look to use a callback on ShippingClass - have the ShippingClass update the state of any Shops that are affected as a result of the modify/delete/add operation.
