Changing Business Rule Validations Over Time in Rails

Several times I've been in a situation where we have a model with a business-driven validation such as:
class Order < ActiveRecord::Base
  validates_numericality_of :total, :greater_than => 5.0
  # Some more logic
end
At some point, the boss man decides that the new minimum order should be $10, so we update the validation to 10. However, this means that any existing orders with totals between $5 and $10 will no longer validate, and any logic where I call order.save will begin to fail (sometimes silently, which is unpleasant). I've run into this many times in a largish shipping Rails app, and haven't found a good solution yet. Some ideas:
1. Ensure that there are no pending "orders" that will be affected when the code change is rolled out.
2. Add an :if => Proc.new { |o| o.created_at.nil? or o.created_at > date_new_validation_is_effective } condition to the new validation, but I'm certain this quickly becomes unwieldy.
3. "Validate" business rules somewhere else, such as in the controllers where user-specified input is accepted, rather than as model validations. But this violates the Fat Model/Skinny Controller principle that has many proponents in Rails.
Is there a different approach for integrating this logic, or keeping a strategy like #2 manageable in the long run?

You could set this business logic validation to only run :on => :create. I'm assuming that you don't often edit/update the total of an order.
That would put it into effect for all orders going forward, while not affecting the validity of existing models in the system.
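For example, a minimal sketch of the validation from the question scoped to creation (assuming the new $10 minimum):
class Order < ActiveRecord::Base
  # Only check the new minimum when the order is first created,
  # so existing orders keep saving without tripping the rule.
  validates_numericality_of :total, :greater_than => 10.0, :on => :create
end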

You could add a version to the order record and make the validations version-specific.
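A rough sketch of what that could look like, assuming a hypothetical rules_version column on orders that records which rule set was in force when the order was created:
class Order < ActiveRecord::Base
  # Hypothetical mapping of rule-set version => minimum order total.
  MINIMUM_TOTALS = { 1 => 5.0, 2 => 10.0 }

  validate :total_meets_versioned_minimum

  private

  def total_meets_versioned_minimum
    minimum = MINIMUM_TOTALS.fetch(rules_version, 10.0)
    errors.add(:total, "must be greater than #{minimum}") unless total.to_f > minimum
  end
end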

Related

Validations in Rails

I have designed a very simple web application which associates authors, books and ratings.
In each of the respective models:
Author
  has_many :books
Book
  belongs_to :author
  has_many :reviews
Review
  belongs_to :book
Model attributes
Author : title, fname, lname, DOB
Book : ISBN, title, publish_date, pages
Review : rating(1-5), description
I am wondering: if I completely validate all of these attributes to my liking in the models, with one attribute for example being
  validates :ISBN, numericality: { only_integer: true }, length: { is: 13 }
do I need to worry about validations for data elsewhere?
I know that validations for the model run on the server side so there may need to be some validation on the client side (in JS). I am trying to ensure that there are no flaws when it comes to asserting data correctness.
As is so often the case: it depends.
In a simple Rails application, all models are updated through a request from a view to a controller, which fills the params into the model, tries to save it, and renders any validation errors back to the user.
In that scenario, all your code has to do is react to failed calls to #save, and you can sleep soundly knowing that everything in your database is how it is supposed to be.
In more complex applications, putting all the validation logic into your model might not work as well anymore: every call to #save has to run through all the validation logic, slowing things down, and different parts of your application might have different requirements for the input parameters.
In that scenario there are many ways to go about it. Form objects with validations specific to the forms they represent are a very common solution. These form models then distribute their input among one or more underlying ActiveRecord models.
But the Rails way is to take these one step at a time and avoid premature optimization. For the foreseeable future, putting your validation into your model will be enough to guarantee consistency.
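As an illustration of the form-object approach mentioned above, here is a minimal sketch (the class name and fields are hypothetical, loosely modelled on the Review model from the question):
class ReviewForm
  include ActiveModel::Model

  attr_accessor :rating, :description, :book_id

  # Validations that only matter for this particular form.
  validates :rating, inclusion: { in: 1..5 }
  validates :description, presence: true

  def save
    return false unless valid?
    Review.create!(rating: rating, description: description, book_id: book_id)
    true
  end
end
A controller would then build this object from params and call save, rendering any errors back just as it would for an ActiveRecord model.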
do I need to worry about validations for data elsewhere?
Yes you do.
Application-level validations are still prone to race conditions.
For things that should be unique, like ISBN numbers, database constraints are vital if uniqueness is to be guaranteed. Other areas where this can cause issues are when you have a limit on the count of an association.
While validations prevent most errors, they are not a replacement for database constraints when it comes to ensuring the correctness of data. Both are needed if correctness is important.
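For example, a minimal migration adding such a constraint (assuming the books table from the question stores the ISBN in an ISBN column):
class AddUniqueIndexToBooksIsbn < ActiveRecord::Migration
  def change
    # Enforce uniqueness at the database level as well, so two
    # concurrent requests cannot both insert the same ISBN.
    add_index :books, :ISBN, unique: true
  end
end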

What is the Rails Convention for validating field length?

With ActiveRecord models, I know you can validate the length of an input field like so
class User < ActiveRecord::Base
  validates :user_name, length: { maximum: 20 }
end
However, one of the design patterns in Rails recommends thin models. If you have a ton of validations, the above code might seem intimidating. I read there was another way you could do this.
You can simply set a limit at the database level, in an ActiveRecord migration, to accomplish the same task.
class CreateUsers < ActiveRecord::Migration
  def change
    create_table :users do |t|
      t.string :user_name, limit: 20
    end
  end
end
That accomplishes the exact same thing, only you don't even need the validation line in your User model.
What is the standard Rails convention regarding this?
Some people would argue that you have to have skinny controllers and skinny models. However, this can create several additional classes in your application.
Sometimes a fat model, if documented and laid out logically, can be easier to read. I will ignore 'best practices' if doing so makes the code easier to read, since I may not always be the only person touching that code. If the application scales to a point where multiple people are working in the same files, I will consider extracting things then as a refactor. However, this has rarely been the case.
While it is good to set limits on your database, you also want client-side validations so that no one has their data truncated with no feedback. For example (a really horrible example), if you were to limit a User's username to only six characters and I typed in kobaltz as my username, I would wonder why my username/password never works, because the database truncated it to kobalt. You will also run into issues where MySQL (or similar) throws database-level errors, which are annoying to fix and troubleshoot. You also have to consider that if you modify a production database and set limits where they did not exist before, you could end up corrupting your data.
Having a few validations in your model does not make it a 'fat' model in my opinion. It makes it easier to read. If you're not using an IDE like RubyMine and only using an editor, you do not have the luxury of Jump to Definition, which makes the abstractions in your model harder to follow.
If you use the second approach, you won't be able to get the error. It happens at the MySQL level and not at the model level, so ActiveRecord won't tell you the reason the user was not created or updated.
object.errors
will be empty.
Check this
http://apidock.com/rails/ActiveModel/Errors/full_messages
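A rough sketch of the difference (what happens with only a database limit depends on the database and, for MySQL, on its strict mode):
user = User.new(user_name: "a" * 30)

# With the model validation in place:
user.save                  # => false
user.errors.full_messages  # => ["User name is too long (maximum is 20 characters)"]

# With only the database limit, no validation runs, so user.errors stays
# empty; the INSERT either raises a database error or silently truncates
# the value, depending on the database configuration.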

How to properly enforce a conditional read-only record on Rails?

So a situation came up at work and I wanted to discuss it here because we could not get to an agreement between us:
We have two models, Order and Passport, which are related in a way that an Order has_one passport and a passport has_many orders. Whenever an order is completed, its associated passport must be 'locked', that is, turned into read-only (that information was already used to clear customs, so it can't be changed afterwards). We want to enforce that rule in the Passport model and we've thought of the following options:
1. Creating a validation. CONS: There will be records yielding valid? => false when technically the record is fine (although it can't be saved). For example, if other records have a validates_associated :passport on them, that could be a problem.
2. Overriding the readonly? method. CONS: This will raise an exception when trying to update the record, although you would expect that calling a save method won't ever raise one.
3. Creating a before_save callback. This has two flavors: either raise an exception (which is pretty much like the readonly? option) or add an #error and return false to stop the callback chain. CONS: Adding validation errors from outside a proper validation can be considered a bad practice. Also, you might find yourself calling valid? and getting true and then calling save and getting false.
This situation made us think a lot about the relationship between validations and Rails. What exactly does it mean for a record to be valid? Does it imply that the save will work?
I would like to listen to your opinions to learn about this scenario. Maybe the best approach is neither one of the three! Thanks!
What about marking the record as read-only by using the readonly! instance method? See the API docs.
You could do it when the record is instantiated, for example with an after_initialize callback (a plain initialize override would not run for records loaded from the database):
class Passport < ActiveRecord::Base
  after_initialize do
    readonly! if persisted? && orders.count > 0 # or a similar completed-order check
  end
end
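For what it's worth, a record flagged this way behaves roughly like this (the lookup and attribute name here are made up):
passport = Passport.find(some_id)
passport.readonly?              # => true, if it already has orders
passport.update(number: "X123") # raises ActiveRecord::ReadOnlyRecord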
I think there is an extra alternative. What you describe suggests that the Passport model can be in several different states. I would consider using a state machine to describe the relevant order status for the passport.
eg:
open
pending
locked
other_update_actions ...
With that in mind, all relevant order actions will trigger an event to the passport model and its state.
If it is possible to integrate the update actions to certain events then you could handle the readonly part in a more elegant way (incompatible state transition).
As an extra check you can always keep an ugly validator as a last resort to prevent the model from being updated without the state machine.
You can check the aasm gem for this.
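A rough sketch of what that might look like with aasm (the event and guard names are made up; the state names follow the list above):
class Passport < ActiveRecord::Base
  include AASM

  has_many :orders

  aasm column: 'state' do
    state :open, initial: true
    state :pending
    state :locked

    # Triggered when an associated order is completed.
    event :finalize do
      transitions from: [:open, :pending], to: :locked
    end
  end

  # Last-resort guard (the "ugly validator" mentioned above), here as a callback:
  # once locked, refuse any further changes except the locking transition itself.
  before_save :prevent_changes_when_locked

  private

  def prevent_changes_when_locked
    if locked? && !state_changed?
      errors.add(:base, "passport is locked")
      throw(:abort) # use `return false` on Rails < 5
    end
  end
end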

Uniqueness validation in database when validation has a condition

Using uniqueness validations in Rails is not safe when there are multiple processes unless the constraint is also enforced on the database (in my case a PostgreSQL database, so see this blog post).
In my case, the uniqueness validation is conditional: it should only be enforced if another attribute in the model becomes true. So I have
class Model < ActiveRecord::Base
  validates_uniqueness_of :text, if: :is_published?

  def is_published?
    self.is_published
  end
end
So the model has two attributes: is_published (a boolean) and text (a text attribute). text should be unique across all models of type Model if is_published is true.
Using a unique index (as suggested in the linked blog post) is too constraining because it would enforce the constraint regardless of the value of is_published.
Is anyone aware of a "conditional" index on a PostgreSQL database? Or another way to fix this?
Yes, use a partial UNIQUE index.
CREATE UNIQUE INDEX tbl_txt_is_published_idx ON tbl (text) WHERE is_published;
Example:
How to add a conditional unique index on PostgreSQL
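If you manage the schema through Rails migrations, the same partial index can be expressed there too (the table and migration names here are assumptions based on the question):
class AddPartialUniqueIndexOnText < ActiveRecord::Migration
  def change
    # Only rows where is_published is true take part in the unique index.
    add_index :models, :text, unique: true, where: "is_published"
  end
end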
I think that, given speed is not your main concern, you can achieve proper uniqueness validation without creating additional db indexes. The goal can be achieved at the application level. This is especially valuable if you want conditional uniqueness, as some dbs (e.g. versions of MySQL < 8) do not support partial indexes (or so-called filtered indexes).
My solution is based on the following assumption:
uniqueness check (validator) is run by Rails in the same transaction as save/destroy action that relies on it.
This assumption seems to be true: https://api.rubyonrails.org/classes/ActiveRecord/Transactions/ClassMethods.html
Both #save and #destroy come wrapped in a transaction that ensures that whatever you do in validations or callbacks will happen under its protected cover.
transaction calls can be nested. By default, this makes all database statements in the nested transaction block become part of the parent transaction.
Given that, you can use pessimistic locking (https://api.rubyonrails.org/classes/ActiveRecord/Locking/Pessimistic.html) to exclusively lock the records you want to evaluate for uniqueness in the validator. That will prevent another, simultaneously running validator - and anything that happens after it - from executing until the lock is released at the end of the transaction. That ensures atomicity of the validate-save pair and proper uniqueness enforcement.
In your code it would look like this:
class Model < ActiveRecord::Base
  validates :text, uniqueness: {
    conditions: -> { lock.where(is_published: true) }
  }
end
The only downside I can see is having db records locked for the whole validate-save process. That won't work well under heavy load, but then many applications don't work under such conditions anyway.

Validate an ActiveRecord model as if it has one hypothetical change

Let's say I have some validations that I only want to run if my record is in a specific state. This allows a draft record to be saved while incomplete, and the rest of the content can be filled in later.
validates_presence_of :intro, :codename, :body,
  if: lambda { |o| o.content_state == :review }
Now I want to know if this record's content can be considered complete, which will allow it to be moved to the review state. Validations provide a fantastic framework for applying constraints and requirements to model properties, and I want to leverage that.
So I have to take a draft content record, and then validate it as if it was in the review state, and if it comes up with any errors, the content is not yet complete.
But the only way that I have managed to do this is as follows:
def content_completable?
  old_content_state = content_state
  self.content_state = 'review'
  return valid?
ensure
  self.content_state = old_content_state
end
This feels pretty kludgy to me. It seems to work; however, I'm powering these states with an actual state machine, whose docs say manually assigning a state is not a good thing. But I have to, because there may not be a transition back.
I don't actually want to change the model at this point, even if it is valid. I only want to know whether the model would be valid if it were in a different state.
I was hoping ActiveRecord's #valid? method would accept a hash of attributes to override the current values on the model, but it doesn't appear to work that way.
Is there a better way to do what I'm doing?
You could clone the record, modify the state and call valid? on the cloned record.
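A minimal sketch of that idea; dup returns an unsaved copy, so the original record is left untouched:
def content_completable?
  candidate = dup
  candidate.content_state = 'review'
  # Note: uniqueness validations may behave differently on an unsaved copy.
  candidate.valid?
end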
You can set a virtual attribute and test that first:
attr_accessor :new_content_state

validates_presence_of :intro, :codename, :body,
  if: lambda { |o| (o.new_content_state || o.content_state) == :review }

def content_completable?
  self.new_content_state = :review # a symbol, so the comparison above matches
  return valid?
end
Naturally, you may still need to clean up the virtual state afterwards, but that depends on how long this model persists. It's also less intrusive, since you only use this attribute for a limited purpose.
A completely different approach would be to avoid using AR validators in the first place. They aren't quite designed for this purpose, and although it seems elegant at first glance, the abstraction is leaking through...
