What is considered "best practice" or the general rule of thumb for when something should be wrapped in a transaction block?
Is it primarily just when you are going to be performing actions on a collection of things, and you want to rollback if something breaks? Something like:
class User < ActiveRecord::Base
  def mark_all_posts_as_read!
    transaction do
      posts.find_each { |p| p.update_attribute(:read, true) }
    end
  end
end
Are there other scenarios where it would be beneficial to perform things inside a transaction?
I'm not sure that qualifies as a great use for a transaction: generally, I would only use a transaction in a model if the state of one object depended on the state of another object. If either object's state is incorrect, then I don't want either to be committed.
The classic example, of course, is bank accounts. If you're transferring money from one account to another, you don't want to add it to the receiving account, save, and then debit it from the sending account. If any part of that goes wrong then money has just vanished, and you will have some pretty angry customers. Doing both parts in one transaction ensures that if an error occurs, neither will have committed anything to the database.
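For example, a transfer method might look like this (the Account model and its balance column are only illustrative, not something from the question):
class Account < ActiveRecord::Base
  # Moves `amount` from this account to `other`. If either update fails,
  # the whole transfer is rolled back and no money "vanishes".
  def transfer_to!(other, amount)
    transaction do
      update!(balance: balance - amount)
      other.update!(balance: other.balance + amount)
    end
  end
end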
The ActiveRecord Transaction Documentation does a surprisingly good job of discussing the how and why of using transactions... and there's always the Wikipedia article if you want more information as well.
Related
I use a transaction to save a Hotel model. Here is the code:
def init
  Hotel.transaction do
    @hotel.save!
    create_related_models
  end
end
I have a uniqueness validation on :name in Hotel. The validation doesn't work inside the transaction. What is the way to implement database-related validations inside a transaction?
An explanation of why the validation does not work:
When the user submits the form, the request takes about 10s. If he clicks the save button again during that request, he will save two hotels with the same name (which is the issue). This is because the first transaction hasn't finished when the second transaction starts, so when the second one starts there is no hotel in the database yet and the validation returns true.
You will need to either:
Add a database constraint to prevent this behaviour, and catch and handle any error (see the sketch after this list). The application cannot have visibility of uncommitted RDBMS transactions; only the database can do that.
Add a locking mechanism in the application, which will be difficult if you are running on multiple threads (Heroku dynos?).
Greatly reduce the time taken for the transaction.
Move the creation of the related models outside of the transaction, and provide a mechanism for manually deleting the Hotel record if a problem arises during execution.
Remove the uniqueness validation.
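For the database-constraint option mentioned above, a rough sketch (the migration name and the rescue handling are just one way to do it):
# Migration: enforce uniqueness at the database level.
class AddUniqueIndexToHotelsName < ActiveRecord::Migration
  def change
    add_index :hotels, :name, unique: true
  end
end

# In the code that saves the hotel, catch the constraint violation:
def init
  Hotel.transaction do
    @hotel.save!
    create_related_models
  end
rescue ActiveRecord::RecordNotUnique
  @hotel.errors.add(:name, "has already been taken")
  # handle the duplicate however fits your flow
end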
Let's put a bit of context on this question. Given an e-commerce application in Ruby on Rails, let's deal with two models as an example: User and CreditCard.
My User is in the system after registration, no issue there.
CreditCard is a model with the credit card information (yes, I know about PCI compliance, but that's not the point here).
In the CreditCard model, I include an after_validation callback that will validate the credit card against your bank.
Let me put some simple code here.
models/user.rb
class User < ActiveRecord::Base
  enum status: [:active, :banned]
  has_one :credit_card
end
models/credit_card.rb
class CreditCard < ActiveRecord::Base
  belongs_to :user
  after_validation :validate_at_bank

  def validate_at_bank
    result = Bank.validate(info) # using active_merchant, for example
    unless result.success
      errors.add(:credit_card, "Bank doesn't validate")
      user.banned!
    end
  end
end
controllers/credit_cards_controller.rb
class CreditCardsController < ApplicationController
  def create
    @credit_card = CreditCard.new(credit_card_params) # from Strong Parameters
    if @credit_card.save
      render # success
    else
      render # failure
    end
  end
end
What's causing me issues
It looks like Rails opens a transaction in ActiveRecord when I call new. At this point nothing is sent to the database.
When the bank rejects the credit card, I want to ban the user. I do this by calling banned!. Now I realise this update goes into the same transaction. I can see the update, but because the save doesn't go through, everything is rolled back for both models. The credit card is not saved (that is good), and the user is not saved (this is not good, since I want to ban him).
I tried adding a Transaction wrapper, but that only adds a database checkpoint. I could create a delayed job for the ban, but that seems like overkill. I could use an after_rollback callback, but I'm not sure that is the right way. I'm a bit surprised; I never hit this scenario before, which leads me to believe that my pattern is not correct, or that the point where I make this call is wrong.
After much review and more digging I came up with 3 ways to handle this situation. Depending on the needs you have, one of them should be good for you.
Separate thread and new database connection
Calling the validate function explicitly before the save
Sending the task to be performed to a Delayed Job
Separate thread
The following answer shows how to do this operation.
https://stackoverflow.com/a/20743433/552443
This will work, but again it's not that nice and simple.
Call valid? before the save
This is a very quick solution. The issue is that a new developer could remove the valid? line, thinking that .save will do the job properly on its own.
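A rough sketch of that, using the controller from the question (note that save re-runs the validations, so the bank would be called twice; this is just to show where valid? goes):
class CreditCardsController < ApplicationController
  def create
    @credit_card = CreditCard.new(credit_card_params)

    # valid? runs the after_validation callback outside of save's
    # transaction, so user.banned! is committed even if save never succeeds.
    if @credit_card.valid? && @credit_card.save
      render # success
    else
      render # failure
    end
  end
end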
Calling Delayed Job
This could be any ActiveJob provider. You can send the task of banning the user to a separate queue. Depending on your setup this is quite clean, but not everybody needs DelayedJob.
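A minimal ActiveJob-style sketch of that idea (the job class and the enqueue point are assumptions, not part of the original code):
# app/jobs/ban_user_job.rb
class BanUserJob < ActiveJob::Base
  queue_as :default

  def perform(user_id)
    User.find(user_id).banned!
  end
end

# In the CreditCard callback, enqueue instead of banning inline, so the
# ban is not rolled back together with the failed credit card save:
# BanUserJob.perform_later(user.id)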
If you see anything else, please add it as a new solution or in the comments.
So a situation came up at work and I wanted to discuss it here because we could not get to an agreement between us:
We have two models, Order and Passport, which are related in a way that an Order has_one passport and a passport has_many orders. Whenever an order is completed, its associated passport must be 'locked', that is, turned into read-only (that information was already used to clear customs, so it can't be changed afterwards). We want to enforce that rule in the Passport model and we've thought of the following options:
Creating a validation. CONS: There will be records yielding valid? => false when technically the record is fine (although it can't be saved). For example, if other records have a validates_associated :passport on them, that could be a problem.
Overriding the readonly? method. CONS: This will raise an exception when trying to update that record, although you would expect that calling a save method won't ever raise one.
Creating a before_save callback. This has two flavors: either raise an exception (which is pretty much like the readonly? option) or add an error and return false to stop the callback chain. CONS: Adding validation errors from outside a proper validation can be considered a bad practice. Also, you might find yourself calling valid? and getting true, and then calling save and getting false.
This situation made us think a lot about the relationship between validations and Rails. What exactly does it mean for a record to be valid? Does it imply that save will work?
I would like to listen to your opinions to learn about this scenario. Maybe the best approach is neither one of the three! Thanks!
What about marking this record as read-only by using the readonly! instance method? See the API.
You could do it in a constructor, like:
class Passport < ActiveRecord::Base
  def initialize(*args)
    super(*args)
    readonly! if orders.count > 0 # or similar
  end
end
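For what it's worth, a read-only record raises when you try to persist it, so callers have to expect that:
passport = Passport.find(passport_id)  # placeholder id
passport.readonly!
passport.number = "X123"               # `number` is just an example attribute
passport.save
# => raises ActiveRecord::ReadOnlyRecord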
I think there is an extra alternative. What you describe suggests that the Passport model can have several different states. I would consider using a state machine to describe the relevant order statuses for the passport.
e.g.:
open
pending
locked
other_update_actions ...
With that in mind, all relevant order actions will trigger an event on the passport model and its state.
If it is possible to tie the update actions to certain events, then you could handle the read-only part in a more elegant way (as an incompatible state transition).
As an extra check, you can always keep an ugly validator as a last resort to prevent the model from being updated outside the state machine.
You can check out the aasm gem for this.
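A minimal sketch of what that could look like with aasm (the state and event names, and the string state column, are assumptions):
class Passport < ActiveRecord::Base
  include AASM

  has_many :orders

  aasm column: :state do
    state :open, initial: true
    state :pending
    state :locked

    # Triggered from the order workflow when an order is completed.
    event :lock do
      transitions from: [:open, :pending], to: :locked
    end
  end
end
Completing an order would then call passport.lock!, and the rest of the app can check passport.locked? before allowing edits.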
I'm creating a customized "Buy Now" page that is a combination of the User, Address, Sale, SaleLine, and Payment models. To initiate the payment process, I need to specify the Sale ID in a callback. So my #new method looks something like this...
# new
def bitcoin
  require 'cgi'
  @payment_to_tender = get_cost
  @sale = Sale.create
  begin
    payment = create_new_payment_wallet(ENV["BITCOIN"], "http://www.example.com/a/b", @sale.id)
  end
end
So the key line in there is the middle one, where a new Sale record is created. This page doesn't require any kind of login (because it's technically the signup page). Will this be a problem? I think any time anyone, even a bot, navigates to the page, it will spawn yet another Sale record. Will that eventually catch up with me? Should I run a script nightly that deletes all orphan Sale records older than a day, or should I try a different algorithm?
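For reference, the nightly cleanup I have in mind would be roughly this (the "orphan" condition is only a guess at what would identify abandoned sales):
# lib/tasks/sales.rake
namespace :sales do
  desc "Delete Sale records that never got a user and are older than a day"
  task cleanup_orphans: :environment do
    Sale.where(user_id: nil)
        .where("created_at < ?", 1.day.ago)
        .destroy_all
  end
end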
Rails can handle as many models as required
Models are just .rb files which are loaded when you call ActiveRecord; they're not applications or anything super resource-intensive
However, what they represent is much more than just opening a file. The question you're really asking is "is my schema set up correctly?", which is a different ballgame
ActiveRecord Associations
Rails is "object-oriented", which means everything you do has to work around an object. This is typically an ActiveRecord object, which is made up of a database query & associated data
One of the biggest problems with Rails apps is inefficient use of the ActiveRecord Association structure. ActiveRecord Associations work by defining "relations" in your models, allowing you to call one piece of data & automatically have its related data attached to the object
The problem for most people is that ActiveRecord Associations pick up data they don't need, causing unnecessarily expensive database calls. This is where the problems arise, and is what you're trying to address
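For illustration (the model and association names here are only an example):
# N+1: one query for the sales, plus one query per sale for its user.
Sale.all.each { |sale| puts sale.user.email }

# Eager loading keeps it to two queries.
Sale.includes(:user).each { |sale| puts sale.user.email }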
Creating Independent Records
If you want to create one record along with another, you can use the after_create callback, like this:
#app/models/bitcoin.rb
class BitCoin < ActiveRecord::Base
  has_one :sale  # assumed association, so that after_create can call create_sale
  after_create :create_sale
end
This will actually create a Sale record for you, if the association is set up correctly
I have a requirement that certain attribute changes to records are not reflected in the user interface until those changes are approved. Further, if a change is made to an approved record, the user will be presented with the record as it exists before approval.
My first try...
was to go with a versioning plugin such as paper_trail, acts_as_audited, etc., and add an approved attribute to its version model. Doing so would not only give me the ability to 'roll back' through versions of the record, but also SHOULD allow me to differentiate between whether a version has been approved or not.
I have been working down this train of thought for a while now, and the problem I keep running into is on the user side. That is, how do I query for a collection of approved records? I could (and did try to) write some helper methods that get a collection of records and then loop over them to find an "approved" version of each record. My primary gripe with this is how quickly the number of database hits can grow. My next attempt was to do something as follows:
Version.
  where(:item_type => MyModel.name, :approved => true).
  group(:item_type).collect do |v|
    # like the 'reify' method of paper_trail
    v.some_method_that_converts_the_version_to_a_record
  end
So assuming that the some_method... call doesn't hit the database, we kind of end up with the data we're interested in. The main problem I ran into with this method is that I can't use this "finder" as a scope. That is, I can't append additional scopes to this lookup to narrow my results further. For example, my records may also have a cool scope that only shows records where :cool => true. Ideally, I would want to look up my records as MyModel.approved.cool, but here I guess I would have to get my collection of approved models and then loop over them for cool ones, which would result, at the very least, in having a bunch of records initialized in memory for no reason.
My next try...
involved creating a special type of "pending record" that basically holds "potential" changes to a record. So on the user end you would look up whatever you wanted as you normally would. Whenever a pending record is apply!(ed), it would simply make those changes to the actual record, and all's well... Except about 30 minutes into it I realized that it all breaks down if an "admin" wishes to go back and contribute more to his change before approving it. I guess my only options would be either to:
Force the admin to approve all changes before making additional ones (that won't go over well... nor should it).
Try to read the changes out of the "pending record" model and apply them to the existing record without saving. Something about this idea just doesn't quite sound "right".
I would love someone's input on this issue. I have been wrestling with it for some time, and I just can't seem to find the way that feels right. I like to live by the "if it's hard to get your head around it, you're probably doing it wrong" mantra.
And this is kicking my tail...
How about creating an association:
class MyModel < AR::Base
  belongs_to :my_model
  has_one :new_version, :class_name => "MyModel"
  # ...
end
When an edit is made, you basically clone the existing object into a new one. Associate the existing object with the new one, set a has_edits attribute on the existing object, and a pending_approval attribute on the new one.
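A rough sketch of that clone step (attribute names follow the description above; adapt to your schema):
class MyModel < AR::Base
  # ... associations from above ...

  def create_pending_edit!(new_attributes)
    draft = dup                      # copy of the current attributes, no id
    draft.assign_attributes(new_attributes)
    draft.pending_approval = true
    draft.my_model = self            # link the draft back to the original

    transaction do
      draft.save!
      update!(has_edits: true)
    end
    draft
  end
end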
How you treat the objects once the admin approves it depends on whether you have other associations that depend on the id of the original model.
In any case, you can reduce your queries to:
objects_pending_edits = MyModel.where("has_edits = true").all
then with any given one, you can access the new edits with obj.new_version. If you're really wanting to reduce database traffic, eager-load that association.
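For example, with the association names above, the eager-loaded version would be something like:
objects_pending_edits = MyModel.where(has_edits: true).includes(:new_version)

objects_pending_edits.each do |obj|
  obj.new_version  # already loaded, no extra query per object
end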