With ActiveRecord models, I know you can validate the length of an input field like so
class User < ActiveRecord::Base
  validates :user_name, length: { maximum: 20 }
end
However, one of the design patterns in Rails recommends thin models. If you have a ton of validations, the above code might seem intimidating. I read there was another way you could do this.
You can simply use an ActiveRecord migration to accomplish the same task.
class CreateUsers < ActiveRecord::Migration
  def change
    create_table :users do |t|
      t.string :user_name, limit: 20
    end
  end
end
That accomplishes the exact same thing, only you don't even need the validates line in your User model.
What is the standard Rails convention regarding this?
Some people would argue that you should have skinny controllers and skinny models. However, this can create several additional classes in your application.
Sometimes a fat model, if documented and laid out logically, can be easier to read. I will ignore 'best practices' if doing so makes the code easier to read, as I may not always be the only person touching that code. If the application scales to a point where multiple people are working on the same files, I will consider extracting the logic as a refactor. However, this has rarely been the case.
While it is good to set limits on your database, you also want client-side validations to prevent someone's data being truncated with no feedback to them. For example (a really horrible example), if you were to limit the username of a User to only six characters and I typed in kobaltz as my username, I would wonder why my username/password never works, since the database truncated it to kobalt. You will also run into issues where MySQL (or similar) will throw database-level errors, which are annoying to fix and troubleshoot. You also have to consider that if you modify a database in production and set limits where they did not exist before, you could end up corrupting your data.
Having a few validations in your model does not make it a 'fat' model, in my opinion; it makes it easier to read. If you're only using an editor rather than an IDE like RubyMine, you don't have the luxury of Jump to Definition, which is what makes the abstraction of your model easy to follow.
If you use the second approach, you won't get an error back. The limit is enforced at the MySQL level, not the model level, so ActiveRecord won't tell you the reason the user wasn't created or updated.
object.errors
will be empty.
Check this: http://apidock.com/rails/ActiveModel/Errors/full_messages
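To make the difference concrete, here is a rough sketch (assuming the User model and 20-character limit from the question):

user = User.new(:user_name => "a" * 30)

# With the model-level validation:
user.save                  # => false
user.errors.full_messages  # => ["User name is too long (maximum is 20 characters)"]

# With only the database-level limit and no validation:
user.save                  # MySQL either silently truncates the value or raises
                           # ActiveRecord::StatementInvalid (in strict mode)
user.errors.full_messages  # => [] -- empty either way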
I have designed a very simple web application which associates authors, books and ratings.
In each of the respective models:

class Author < ActiveRecord::Base
  has_many :books
end

class Book < ActiveRecord::Base
  belongs_to :author
  has_many :reviews
end

class Review < ActiveRecord::Base
  belongs_to :book
end
Model attributes:

Author: title, fname, lname, DOB
Book: ISBN, title, publish_date, pages
Review: rating (1-5), description
I am wondering: if I completely validate all of these attributes to my liking in the models, for example with one attribute

validates :ISBN, numericality: { only_integer: true }, length: { is: 13 }
do I need to worry about validations for data elsewhere?
I know that model validations run on the server side, so there may need to be some validation on the client side (in JS) as well. I am trying to ensure that there are no flaws when it comes to asserting data correctness.
As is so often the case: it depends.
In a simple Rails application, all models are updated through a request from a view to a controller, which feeds the params into the model and tries to save it; any validation errors that occur are rendered back to the user.
In that scenario, all your code has to do is react to failed calls to #save, and you can sleep soundly knowing that everything in your database is how it is supposed to be.
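That pattern is just boilerplate Rails (a minimal sketch; the names follow the usual conventions, not the question):

def create
  @user = User.new(params[:user])
  if @user.save
    redirect_to @user
  else
    render :new # the view reads @user.errors to show what failed
  end
end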
In more complex applications, putting all the validation logic into your model might not work as well anymore: every call to #save has to run through all the validation logic, slowing things down, and different parts of your application might have different requirements for the input parameters.
In that scenario there are many ways to go about it. Form objects with validations specific to the forms they represent are a very common solution. These form models then distribute their input among one or more underlying ActiveRecord models.
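A minimal form-object sketch (the class and field names are assumptions; ActiveModel::Model is available from Rails 4):

class SignupForm
  include ActiveModel::Model

  attr_accessor :user_name, :email

  validates :user_name, length: { maximum: 20 }
  validates :email, presence: true

  def save
    return false unless valid?
    # distribute the validated input among the underlying AR models
    User.create!(user_name: user_name, email: email)
    true
  end
end

The controller talks only to SignupForm, and the ActiveRecord model itself stays thin.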
But the Rails way is to take these one step at a time and avoid premature optimization. For the foreseeable future, putting your validation into your model will be enough to guarantee consistency.
do I need to worry about validations for data elsewhere?
Yes you do.
Application-level validations are still prone to race conditions.
For things that should be unique, such as ISBN numbers, database constraints are vital if uniqueness is to be guaranteed. Other areas where this causes issues are when you have a limit on the count of an association.
While validations prevent most errors, they are not a replacement for database constraints in ensuring the correctness of data. Both are needed if correctness is important.
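For example, a uniqueness validation alone leaves a window between the check and the insert; pairing it with a unique index closes that window (a sketch using the ISBN field from the question):

class Book < ActiveRecord::Base
  # two concurrent requests can both pass this check before either row is inserted
  validates :ISBN, uniqueness: true
end

class AddUniqueIndexToBooks < ActiveRecord::Migration
  def change
    # the database-level guarantee: the second insert raises instead of succeeding
    add_index :books, :ISBN, unique: true
  end
end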
This may be a dumb/basic question, but I have a model Program with a business type of either "Work", "School", "Church", etc. Right now, it stores these in the database as the chars "w", "s", "c", etc. However, if I do that, I have to run a switch (a case statement in Ruby) to get the real name. I do this with a few other attributes.
Is it better to just store the exact value in the database instead of running this case statement? I guess it's a question of saving database space versus the time it takes to process the name (and I guess there is also an issue of code readability).
EDIT
I should clarify: these things only have 4 immutable values. There is an outside chance I may have to add one or two later, but it does not seem to make sense to create a new resource, as there are really no CRUD actions done on these attributes.
EDIT
Program.rb

attr_accessible :org_type
validates :org_type, :presence => true,
  :inclusion => %w[any c s c] # and have a case later
# OR :inclusion => %w[Any Corporation School Church]
If you are likely to add more later, just make it a resource. It gives you more advantages than disadvantages.
If you did this for performance reasons: first make it work with the resource, then fix the performance. In this case it looks like premature optimisation; there is no need not to use a resource here.
Also, you already expect that values will be added, which makes the case quite clear.
Though, if you give more info, there might be even better solutions. It depends on which fields you need for each type.
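For example, extracting the type into its own resource might look like this (a rough sketch; the OrgType name and columns are assumptions, not from the question):

class OrgType < ActiveRecord::Base
  has_many :programs
end

class Program < ActiveRecord::Base
  belongs_to :org_type
end

# program.org_type.name then returns "School", "Church", etc.
# directly, with no case statement needed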
I'm trying to understand why Rails chooses to include :limit, :null, :default and others in its migrations column options.
It's my understanding that Rails is opinionated against DB constraints, rather enforcing consistency and non-nullness (and many others) through ActiveRecord validations such as validates_presence_of and various callbacks such as before_save.
Assuming, hypothetically, that I fully subscribe to the "everything in the model" philosophy, shouldn't I avoid using the above-mentioned column options? What am I missing here?
Thanks!
Default values are useful to set at the db level; otherwise you're going to have code everywhere to ensure booleans are set to false by default instead of nil.
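For example (a sketch; the column name is made up):

class AddActiveToUsers < ActiveRecord::Migration
  def change
    # new rows get false automatically; without the default, an unset boolean is nil
    add_column :users, :active, :boolean, :default => false, :null => false
  end
end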
The data source I am working with is terrible. Some places where you would expect integers, you get "Three". In the phone number field, you may get "the phone # is xxx". Some fields are simply blank.
This is OK, as I'm parsing each field so "Three" will end up in my model as integer 3, phone numbers (and such) will be extracted via regex. Users of the service KNOW that the data is sketchy and incomplete, as it's an unfortunate fact of the way our data source is maintained and there's nothing we can do about it but step up our parsing game! As an aside, we are producing our own version of the data slowly as we parse more and more of the original data, but this poor source has to do for now.
So users select the data they wish to parse, and we do what we can, returning a partial/incorrect model. Now the final model that we want to store should be validated - there are certain fields that can't be null, certain strings must adhere to a format and so on.
The flow of the app is:

1. User tells the service which data to parse.
2. Service goes off and grabs the data, parses what it can, and returns a partial model with whatever data it could retrieve.
3. We display the data to the user, allowing them to make corrections and to fill in any mandatory fields for which no data was collected.
4. This user-corrected data is to be saved, and therefore validated.
5. If validation fails, show the data again for the user to make fixes; rinse and repeat.
What is the best way to go about having a model which starts off being potentially completely invalid or containing no data, but which needs to be validated eventually? The two ways I've thought of (and partially implemented) are:
1. Two models: a Data model, which has validations etc., and an UnconfirmedData model, which has no validations. The original data is put into an UnconfirmedData model until the user has made their corrections, at which point it is put into a Data model and validation is attempted.
2. One model with a "confirmed data" flag, with validation performed manually rather than through Rails' validations.
In practice I lean towards using two models, but I'm pretty new to Rails, so I thought there may be a nicer way to do this; Rails has a habit of surprising me like that :)
Must you save your data in between requests? If so, I would use your two-model format, but use Single Table Inheritance (STI) to keep things DRY.
The first model, the one responsible for the parsing and the rendering and the doing-the-best-it-can, shouldn't have any validations or restrictions on saving it. It should however have the type column in the migration so you can use the inheritance goodness. If you don't know what I'm talking about, read up on the wealth of information on STI, a good place to start would be a definitive guide.
The second model would be the one you use in the rest of the application: the strict model, the one with all the validations. Every time a user submits reworked and potentially valid data, your app tries to move your instance of the open model, created from the params, to an instance of the second model, and sees if it is valid. If it is, save it to the database; the type attribute will change, and everything will be wonderful. If it isn't valid, save the first instance, and return the second instance to the user so the validation error messages can be used.
class ArticleData < ActiveRecord::Base
  def parse_from_url(url)
    # parses some stuff from the data source
  end
end

class Article < ArticleData
  validates_presence_of :title, :body
  validates_length_of :title, :minimum => 20
  # ...
end
You'll need a pretty intense controller action to facilitate the above process, but it shouldn't be too difficult. In the rest of your application, make sure you run your queries on the Article model to only get back valid ones.
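A rough sketch of what that action might look like (the names and flow are assumptions, not from the question; error handling omitted):

def update
  data = ArticleData.find(params[:id])
  data.attributes = params[:article] # apply the user's corrections

  candidate = data.becomes(Article)  # view the same row as the strict subclass
  if candidate.valid?
    candidate.type = 'Article'       # promote the row to the validated class
    candidate.save
    redirect_to candidate
  else
    data.save                        # keep the partial data around
    render :edit                     # show candidate.errors to the user
  end
end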
Hope this helps!
Using one model should be straightforward enough. You'll need an attribute/method to determine whether the validations should be performed. You can pass :if => to bypass/enable them:
validates_presence_of :title, :if => :should_validate
should_validate can be a simple boolean attribute that returns false when the model instance is "provisional", or a more complicated method if necessary.
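Putting it together, a minimal sketch (the confirmed column is an assumption):

class Article < ActiveRecord::Base
  validates_presence_of :title, :body, :if => :should_validate

  def should_validate
    confirmed? # skip validations while the record is still provisional
  end
end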
Thanks for your time first. After all my searching on Google, GitHub, and here, I only got more confused by the big words (partition/shard/federate), so I figure I have to describe the specific problem I met and ask around.
My company's databases deal with massive numbers of users and orders, so we split databases and tables in various ways, some of which are described below:
way       | database and table name | sharded by (maybe it should be called "partitioned by"?)
YZ.X      | db_YZ.tb_X              | order serial number, last three digits
YYYYMMDD. | db_YYYYMMDD.tb          | date
YYYYMM.DD | db_YYYYMM.tb_DD         | date too
The basic concept is that databases and tables are separated according to a field (not necessarily the primary key), and there are too many databases and too many tables, so writing (or magically generating) one database.yml config for each database and one model for each table isn't possible, or at least not the best solution.
I looked into drnic's magic solutions, and DataFabric, and even the source code of ActiveRecord. Maybe I could use ERB to generate database.yml and do the database connection in an around filter, and maybe I could use named_scope to dynamically decide the table name for find, but update/create operations are bound to "self.class.quoted_table_name", so I couldn't easily get my problem solved that way. I could even generate one model for each table, since there are at most about 30 of them.
But this is just not DRY!
What I need is a clean solution like the following DSL:
class Order < ActiveRecord::Base
  shard_by :order_serialno do |key|
    [get_db_config_by(key), # some or all of the databases might share the same
                            # machine in a regular way, or can be configured by a
                            # hash of regexes; it can also be a constant
     get_db_name_by(key),
     get_tb_name_by(key),
    ]
  end
end
Can anybody enlighten me? Any help would be greatly appreciated!
Case two (where only the db name changes) is pretty easy to implement with DbCharmer. You need to create your own sharding method in DbCharmer that returns a connection parameters hash based on the key.
The other two cases are not supported right away, but could easily be added to your system:
You implement a sharding method that knows how to deal with database names in your sharded database; this gives you the ability to make shard_for(key) calls on your model to switch the db connection.
You add a method like this:
class MyModel < ActiveRecord::Base
  db_magic :sharded => { :sharded_connection => :my_sharding_method }

  def self.switch_shard(key)
    set_table_name(table_for_key(key)) # switch table (table_for_key is your own helper)
    shard_for(key)                     # switch connection
  end
end
Now you could use your model like this:
MyModel.switch_shard(key).first
MyModel.switch_shard(key).count
and, since the switch_shard method returns the result of the shard_for(key) call, you could use it like this:
m = MyModel.switch_shard(key) # Switch connection and get a connection proxy
m.first # Call any AR methods on the proxy
m.count
If you want that particular DSL, or something that matches the logic behind your legacy sharding, you are going to need to dig into ActiveRecord and write a gem to give you that kind of capability. All the existing solutions you mention were not necessarily written with your situation in mind. You may be able to bend any number of them to your will, but in the end you will probably have to write custom code to get what you are looking for.
Sounds like, in this case, you should consider not using SQL.
If the data sets are that big and can be expressed as key/value pairs (with a little de-normalization), you should look into CouchDB or other NoSQL solutions.
These solutions are fast, fully scalable, and REST-based, so they are easy to grow, back up, and replicate.
We all fall into solving all our problems with the same tool (believe me, I try to as well).
It would be much easier to switch to a NoSQL solution than to rewrite ActiveRecord.