I am sorry if a similar question has been asked already; I couldn't find anything identical.
Can someone tell me why before_save, especially a conditional one, is considered bad practice?
before_save :something, if: Proc.new { self.abc == 'hello' }
I understand why a validation sometimes fits much better; what I don't understand is why some people think callbacks are a bad thing to use, and insist that you write only validations and never make them conditional.
I personally think there can be a far larger problem: such a change can affect already existing records. So it seems fine to implement a conditional validator, or to provide an if option for before_save, when you plan to modify data only in a certain case. Why do some people think that's not okay? Could someone help me with that?
Thank you very much!
I think the only disadvantage of before_save or before_validation is when it is not understood or used properly:
It should be used only after carefully deliberating whether such callbacks are really meant to apply globally, to all records (consider also old records that are already stored in the DB).
If the callbacks are only applicable to certain specific records or conditions, then it is better not to pollute the model, and to implement the logic outside it instead.
If the callbacks change state, whether of the record itself or of other records, their names should say so explicitly, and the developer should make sure they have no inadvertent effects.
If the callbacks do not change state, they are immediately safe to use, since they guarantee immutability and idempotence.
The order of the callbacks is important; a limited understanding of what's happening may lead a developer, especially a new one, to write code with inadvertent effects that does not work all the time.
before_save and before_validation are different; a developer should understand that some callbacks are meant to be used as before_save, and some as before_validation.
Aside from these, I do not see any problem with before_save; it can help clean up and break down the logic of the code, and it allows reuse, especially if you have subclasses of the model.
Just my immediate thoughts on the matter...
Using callbacks is standard Rails practice! When used appropriately they are great DRY helpers for maintaining data integrity. Callbacks are heavily used for formatting user input (like removing spaces or dashes from a phone-number field), where responding with a validation error would just frustrate the user. Use validations to handle cases you can't predict, or data that would otherwise be unpredictable, and callbacks elsewhere (like downcasing an email before saving).
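A hedged sketch of the kind of formatting callbacks meant here (the model and column names are invented for illustration):

class User < ActiveRecord::Base
  before_save :normalize_phone, :downcase_email

  private

  # Strip anything that isn't a digit, e.g. "555-123 4567" => "5551234567"
  def normalize_phone
    self.phone = phone.gsub(/\D/, '') if phone.present?
  end

  def downcase_email
    self.email = email.downcase if email.present?
  end
end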
Related
What is the best way to check for parameter presence inside a controller action:
params.require(:data) # will raise an ActionController::ParameterMissing error if the parameter is not found in the request
params[:data].present? # check if something is in the parameter
Which should be the preferred way, and why?
The first way will throw an error, which will need to be handled. The second way can be used in a conditional. Both ways could be used to enable the controller action to return an error message that the parameter is missing.
It seems to me that you may have a slight misunderstanding in the purpose of params.require.
If your aim is simply to check for the existence of a certain parameter to organise the flow of code, then yes, always use .present?.
The core aim of .require and Strong Params is not to check whether a parameter exists, but to let you build a whitelist for your parameters, raising an exception when any parameter not specifically whitelisted attempts to pass, or when an essential parameter is missing; a very different purpose altogether. I suggest a look at the StrongParameters API.
The general aim of StrongParams and .require / .permit is to protect your database: they guard your data when mass-assigning to a model. As I am sure you are aware, in a production project your (or your customers') data is everything, and NOT using a whitelist for parameters is extremely irresponsible.
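The canonical shape is roughly this (the attribute names are illustrative):

class UsersController < ApplicationController
  def create
    @user = User.new(user_params)
    # ...
  end

  private

  # require(:user) raises ActionController::ParameterMissing if :user is absent;
  # permit whitelists which attributes may be mass-assigned.
  def user_params
    params.require(:user).permit(:name, :email)
  end
end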
EDIT
It seems my point may have been taken the wrong way due to the bulk of it talking about StrongParams, so let me explain a little more.
What I was hoping you would take from the explanation of strong params is that it is not a viable choice for a mere presence check. The other answers also recommend .present?, but their reasoning is not a solid foundation for why it is better.
Code readability is of course important, but it has nothing to do with handling errors; you can very easily write code that is readable and handles exceptions. Readability shouldn't even come into the equation, because it depends only on how well you write your exception-handling logic.
And as for not wanting errors on a production app: that is counterintuitive, because the whole purpose of throwing and handling exceptions is exactly that, to stop errors occurring on a production app, avoiding silent logic whose small unexpected bugs could cause problems down the line.
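For instance, a minimal sketch of centralised exception handling (the response shape is an assumption):

class ApplicationController < ActionController::Base
  # One handler covers every action that uses params.require,
  # so the readability of the individual actions is unaffected.
  rescue_from ActionController::ParameterMissing do |exception|
    render json: { error: exception.message }, status: :bad_request
  end
end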
The core principle as to why .present? is the obvious choice is in the question itself:
How to check for parameter presence
All code is written for a purpose: to solve a given problem, or to provide functionality suitable for a given need. So what should be considered is the purpose of .present? and the purpose of .require. The purpose of .present? is fairly simple: it checks for presence and returns a boolean either way (meaning you can combine it with other tests using ||, which is not possible with .require). It fulfils the needs of the question.
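For instance (the second parameter key is hypothetical):

# .present? composes with ordinary boolean logic; .require cannot,
# because it raises instead of returning false.
if params[:data].present? || params[:legacy_data].present?
  # handle whichever parameter was supplied
end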
Now let's take a look at .require, which is a feature of the ActionController::StrongParameters module. A quick look at the API states the purpose of the module:
It provides an interface for protecting attributes from end-user assignment.
Is this the purpose you are seeking? Of course, the question you asked is quite broad, and it's difficult to read the intent from it, but from the bare question "I want to check parameter presence" we can conclude that .require, or any StrongParameters feature, is not intended for this.
Whew, that was long; sorry for the mini essay. While the topics the other answers bring up are valid and should be considered (not just in this question but in all code), I want to bring to your attention that there is far more to consider than code readability or a reluctance to handle errors, both of which can be addressed easily in a variety of ways. Even small, simple decisions like this should be made with the core principles of programming in mind.
Long story short: I can hammer a nail in with the handle of a screwdriver, but that is not its purpose, and if I keep doing it I'm eventually gonna have a bad time.
I believe you answered your own question. If you prefer to handle exceptions, use require. If you like to use conditions instead, use present?.
For me the second way is preferred, because it reduces the chance that my app will crash in production (if I fail to handle exceptions properly).
It depends on whether you want to raise an error in regards to if a parameter is present or not.
I would personally say go for .present?, in the sense that you can always use it in a conditional, and it allows for better code readability, versus the messy, complicated business of handling errors.
Even if the parameter is not present, you can still render another page should you choose. IMO, in the specific case where you are simply looking for the presence of a parameter, I really don't see a reason to have the application throw an error rather than return false. Even the worst-case situation of a parameter not being present when using .present? can be handled appropriately by a simple if/else statement:
if params[:data].present?
  render("some_page")
else
  render("error_page")
end
What do you see as the pros and cons of using callbacks for domain logic? (I'm talking in the context of Rails and/or Ruby projects.)
To start the discussion, I wanted to mention this quote from the Mongoid page on callbacks:
Using callbacks for domain logic is a bad design practice, and can lead to
unexpected errors that are hard to debug when callbacks in the chain halt
execution. It is our recommendation to only use them for cross-cutting
concerns, like queueing up background jobs.
I would be interested to hear the argument or defense behind this claim. Is it intended to apply only to Mongo-backed applications? Or is it intended to apply across database technologies?
It would seem that The Ruby on Rails Guide to ActiveRecord Validations and Callbacks might disagree, at least when it comes to relational databases. Take this example:
class Order < ActiveRecord::Base
before_save :normalize_card_number, :if => :paid_with_card?
end
In my opinion, this is a perfect example of a simple callback that implements domain logic. It seems quick and effective. If I were to take the Mongoid advice, where would this logic go instead?
I really like using callbacks for small classes. I find it makes a class very readable, e.g. something like
before_save :ensure_values_are_calculated_correctly
before_save :down_case_titles
before_save :update_cache
It is immediately clear what is happening.
I even find this testable; I can test that the methods themselves work, and I can test each callback separately.
I strongly believe that callbacks in a class should only be used for aspects that belong to the class. If you want to trigger events on save, e.g. sending a mail if an object is in a certain state, or logging, I would use an Observer. This respects the single responsibility principle.
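For example, a rough sketch of such an observer (OrderMailer and confirmed? are assumptions; note that observers were extracted from Rails core into the rails-observers gem as of Rails 4):

class OrderObserver < ActiveRecord::Observer
  # The Order model stays unaware of mailing concerns.
  def after_save(order)
    OrderMailer.confirmation(order).deliver if order.confirmed?
  end
end

# Registered in config/application.rb:
#   config.active_record.observers = :order_observer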
Callbacks
The advantage of callbacks:
everything is in one place, so that makes it easy
very readable code
The disadvantage of callbacks:
since everything is in one place, it is easy to break the single responsibility principle
could make for heavy classes
what happens if one callback fails? Does the chain still continue? Hint: make sure your callbacks never fail, or otherwise set the state of the model to invalid (see the sketch below)
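A minimal illustration of a halting callback (the predicate is hypothetical; the mechanism is version-dependent: before Rails 5 a callback returning false halted the chain, from Rails 5 you throw :abort):

class Order < ActiveRecord::Base
  before_save :check_inventory

  private

  def check_inventory
    if out_of_stock?  # hypothetical predicate
      errors.add(:base, "out of stock")
      throw :abort    # Rails 5+; earlier versions halted when a callback returned false
    end
  end
end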
Observers
The advantage of Observers
very clean code, you could make several observers for the same class, each doing a different thing
execution of observers is not coupled
The disadvantage of observers
at first it could be weird how behaviour is triggered (look in the observer!)
Conclusion
So in short:
use callbacks for the simple, model-related stuff (calculated values, default values, validations)
use observers for more cross-cutting behaviour (e.g. sending mail, propagating state, ...)
And as always: all advice has to be taken with a grain of salt. But in my experience Observers scale really well (and are also little known).
Hope this helps.
EDIT: I have combined my answers, at the recommendation of some people here.
Summary
Based on some reading and thinking, I have come to some (tentative) statements of what I believe:
The statement "Using callbacks for domain logic is a bad design practice" is false, as written. It overstates the point. Callbacks can be good place for domain logic, used appropriately. The question should not be if domain model logic should go in callbacks, it is what kind of domain logic makes sense to go in.
The statement "Using callbacks for domain logic ... can lead to unexpected errors that are hard to debug when callbacks in the chain halt execution" is true.
Yes, callbacks can cause chain reactions that affect other objects. To the degree that this is not testable, this is a problem.
Yes, you should be able to test your business logic without having to save an object to the database.
If one object's callbacks get too bloated for your sensibilities, there are alternative designs to consider, including (a) observers or (b) helper classes. These can cleanly handle multi object operations.
The advice "to only use [callbacks] for cross-cutting concerns, like queueing up background jobs" is intriguing but overstated. (I reviewed cross-cutting concerns to see if I was perhaps overlooking something.)
I also want to share some of my reactions to blog posts I've read that talk about this issue:
Reactions to "ActiveRecord's Callbacks Ruined My Life"
Mathias Meyer's 2010 post, ActiveRecord's Callbacks Ruined My Life, offers one perspective. He writes:
Whenever I started adding validations and callbacks to a model in a Rails application [...] It just felt wrong. It felt like I'm adding code that shouldn't be there, that makes everything a lot more complicated, and turns explicit into implicit code.
I find this last claim "turns explicit into implicit code" to be, well, an unfair expectation. We're talking about Rails here, right?! So much of the value add is about Rails doing things "magically" e.g. without the developer having to do it explicitly. Doesn't it seem strange to enjoy the fruits of Rails and yet critique implicit code?
Code that is only being run depending on the persistence state of an object.
I agree that this sounds unsavory.
Code that is being hard to test, because you need to save an object to test parts of your business logic.
Yes, this makes testing slow and difficult.
So, in summary, I think Mathias adds some interesting fuel to the fire, though I don't find all of it compelling.
Reactions to "Crazy, Heretical, and Awesome: The Way I Write Rails Apps"
In James Golick's 2010 post, Crazy, Heretical, and Awesome: The Way I Write Rails Apps, he writes:
Also, coupling all of your business logic to your persistence objects can have weird side-effects. In our application, when something is created, an after_create callback generates an entry in the logs, which are used to produce the activity feed. What if I want to create an object without logging — say, in the console? I can't. Saving and logging are married forever and for all eternity.
Later, he gets to the root of it:
The solution is actually pretty simple. A simplified explanation of the problem is that we violated the Single Responsibility Principle. So, we're going to use standard object oriented techniques to separate the concerns of our model logic.
I really appreciate that he moderates his advice by telling you when it applies and when it does not:
The truth is that in a simple application, obese persistence objects might never hurt. It's when things get a little more complicated than CRUD operations that these things start to pile up and become pain points.
This question right here (Ignore the validation failures in rspec) is an excellent example of why not to put logic in your callbacks: testability.
Your code can have a tendency to develop many dependencies over time, where you start adding unless Rails.test? into your methods.
I recommend only keeping formatting logic in your before_validation callback, and moving things that touch multiple classes out into a Service object.
So in your case, I would move the normalize_card_number to a before_validation, and then you can validate that the card number is normalized.
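Roughly like this (the format check is illustrative, not a real card-number validation):

class Order < ActiveRecord::Base
  before_validation :normalize_card_number, if: :paid_with_card?

  # The validation can now assert the normalized form.
  validates_format_of :card_number, with: /\A\d{16}\z/, if: :paid_with_card?

  private

  def normalize_card_number
    self.card_number = card_number.to_s.gsub(/[\s-]/, "")
  end
end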
But if you needed to go off and create a PaymentProfile somewhere, I would do that in another service workflow object:
class CreatesCustomer
  def create(new_customer_object)
    return new_customer_object unless new_customer_object.valid?
    ActiveRecord::Base.transaction do
      new_customer_object.save!
      PaymentProfile.create!(new_customer_object)
    end
    new_customer_object
  end
end
You could then easily test specific conditions: what happens when the object is not valid, when the save doesn't happen, or when the payment gateway throws an exception.
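A hedged sketch of such tests (Customer and valid_attributes are hypothetical names):

describe CreatesCustomer do
  it "does not save an invalid customer" do
    customer = Customer.new  # assume a blank customer fails validation
    expect { CreatesCustomer.new.create(customer) }.not_to change(Customer, :count)
  end

  it "rolls the customer back when the payment profile fails" do
    customer = Customer.new(valid_attributes)
    allow(PaymentProfile).to receive(:create!).and_raise("gateway down")
    expect {
      CreatesCustomer.new.create(customer) rescue nil
    }.not_to change(Customer, :count)
  end
end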
In my opinion, the best scenario for using callbacks is when the method that triggers them has nothing to do with what's executed in the callback itself. For example, a good before_save :do_something should not execute code related to saving. It's more like how an Observer should work.
People tend to use callbacks only to DRY up their code. That's not bad in itself, but it can lead to complicated, hard-to-maintain code, because reading the save method does not tell you everything it does unless you notice that a callback is invoked. I think it is important to write explicit code (especially in Ruby and Rails, where so much magic happens).
Everything related to saving should be in the save method. If, for example, the callback ensures that the user is authenticated, which has no relation to saving, then it is a good callback scenario.
Avdi Grimm has some great examples in his book Objects on Rails.
You will find here and here why he does not choose the callback option, and how you can get rid of it simply by overriding the corresponding ActiveRecord method.
In your case you would end up with something like:
class Order < ActiveRecord::Base
  def save(*)
    normalize_card_number if paid_with_card?
    super
  end

  private

  def normalize_card_number
    # do something and assign self.card_number = "XXX"
  end
end
[UPDATE after your comment "this is still callback"]
When we speak of callbacks for domain logic, I understand ActiveRecord callbacks; please correct me if you think the Mongoid quote refers to something else. If there is a "callback design" somewhere, I did not find it.
I think ActiveRecord callbacks are, for the most (entire?) part, nothing more than syntactic sugar that you can get rid of as in my previous example.
First, I agree that these callback methods hide the logic behind them: someone unfamiliar with ActiveRecord will have to learn it to understand the code, whereas the version above is easily understandable and testable.
What could be worse with ActiveRecord callbacks is their "common usage", or the "decoupling feeling" they can produce. The callback version may seem nice at first, but as you add more callbacks it becomes harder to understand your code (in which order are they run? which one may stop the execution flow?) and to test it (your domain logic is coupled to ActiveRecord persistence logic).
When I read my example above, I feel bad about this code; it smells. You would probably not end up with this code if you were doing TDD/BDD, and if you forget about ActiveRecord, I think you would simply have written a card_number= writer method. I hope this example is good enough to keep you from reaching directly for the callback option, and to make you think about design first.
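That is, roughly (a sketch of the writer-method approach):

class Order < ActiveRecord::Base
  # Normalization happens at assignment time, with no tie to persistence.
  def card_number=(value)
    super(value.to_s.gsub(/[\s-]/, ''))
  end
end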
About the quote from Mongoid: I wonder why they advise against callbacks for domain logic yet recommend them for queueing background jobs. Queueing a background job can itself be part of the domain logic, and may sometimes be better designed with something other than a callback (say, an Observer).
Finally, there is some criticism of how ActiveRecord is used and implemented in Rails from an object-oriented design point of view; this answer contains good information about it, and you will easily find more. You may also want to check out the DataMapper design pattern / Ruby implementation project, which could be a replacement for ActiveRecord (how much better is debatable) without sharing its weaknesses.
I don't think the answer is all too complicated.
If you intend to build a system with deterministic behavior, callbacks that deal with data-related concerns such as normalization are OK; callbacks that deal with business logic such as sending confirmation emails are not.
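To make that distinction concrete, a minimal sketch (the method names are assumed):

class Order < ActiveRecord::Base
  # Data-related: fine. Saving an Order always yields the same normalized data.
  before_save :normalize_card_number, if: :paid_with_card?

  # Business logic: avoid. Now every save may send email, whether or not
  # the caller wanted that side effect.
  # after_create :send_confirmation_email
end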
OOP was popularized with emergent behavior as a best practice [1], and in my experience Rails seems to agree. Many people, including the person who introduced MVC, think this causes unnecessary pain for applications whose runtime behavior is deterministic and well known ahead of time.
If you agree with the practice of OO emergent behavior, then the Active Record pattern of coupling behavior to your data object graph isn't such a big deal. If (like me) you have seen and felt the pain of understanding, debugging, and modifying such emergent systems, you will want to do everything you can to make the behavior more deterministic.
Now, how does one design OO systems with the right balance of loose coupling and deterministic behavior? If you know the answer, write a book, I'll buy it! DCI, Domain-driven design, and more generally the GoF patterns are a start :-)
[1] http://www.artima.com/articles/dci_vision.html, "Where did we go wrong?". Not a primary source, but consistent with my general understanding and subjective experience of in-the-wild assumptions.
What's the more accepted Rails approach?
Validate that a foreign key exists on creation/update
or
"Damage control" when we want to use the non-existent foreign key?
Initial validation requires more resources on row creation/updating, and may even be redundant when I'm creating rows systematically in my code (i.e. not user-generated). On the other hand, it lets me write my business logic smoothly, without fear of running into a bad foreign key.
Damage control, conversely, allows for quick creation and updating, but of course requires more checks and recovery in my logic.
Can you think of any other pros and cons? Perhaps there are more alternatives than just these two doctrines.
How do experienced Rails developers handle this problem?
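To make the two doctrines concrete, here is roughly what I mean (the model names are hypothetical):

# Doctrine 1: validate up front (one extra query per save)
class Comment < ActiveRecord::Base
  belongs_to :post
  validates :post, presence: true  # loads the Post; fails if post_id points nowhere
end

# Doctrine 2: damage control at the point of use
def post_title(comment)
  comment.post ? comment.post.title : "(deleted)"
end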
If by validating ahead of time you mean doing an additional database request just to check the existence of a key, I would say don't. We use FKs all over and have almost never run into problems, especially not on create or update. If it does fail, that's probably a good thing: unlike a validation failure, there's little to be done about it. If you just tried to add an association to a no-longer-existing object, that seems like a pretty good reason for an error to me.
If you have particularly volatile entities, such that the record behind an FK might frequently be deleted between the time it is instantiated and the time you try to save, then maybe in that particular case it is worth it; as a general guide, though, I would not.
I also often use FKs between tables whose rows are removed with logical deletes (a la acts_as_paranoid: set a deleted_at flag rather than actually delete the row), which also eases the problem of failing FKs; I find it a very helpful strategy, at least in my app.
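A minimal sketch of the logical-delete idea (the real acts_as_paranoid does much more, e.g. preserving callbacks and offering unscoped finders):

class Post < ActiveRecord::Base
  # Ordinary finders skip soft-deleted rows.
  default_scope { where(deleted_at: nil) }

  # Flip a timestamp instead of issuing a DELETE.
  # NB: this sketch bypasses destroy callbacks.
  def destroy
    update_column(:deleted_at, Time.current)
  end
end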
Or, put differently, is there any reason not to use it on all of my models?
Some background: is_paranoid is a gem that makes calls to destroy set a deleted_at timestamp rather than deleting the row (and makes calls to find exclude rows with a non-null deleted_at).
I've found this feature so useful that I'm starting to include it in every model -- hard deleting records is just too scary. Is there any reason this is a bad thing? Should this feature be turned on by default in Rails?
Ruby is not for cowards who are scared of their own code!
In most cases you really want to delete the record completely. Consider a table that contains relationships between two other models; this is an obvious case in which you would not want to use deleted_at.
Another thing is that this approach to database design is kind of Ruby-ish. You will be forced to handle all this deleted_at machinery whenever you write queries more complex than mere finds, and you surely will once your application's DB grows large enough that you have to replace nice, shiny Ruby code with hacky SQL queries. You may then want to discard this column, but oops: you have already used deleted_at logic somewhere, and you'll have to rewrite larger pieces of your app. Gotcha.
Lastly, it actually seems natural for things to disappear upon deletion, and the whole point of modelling is that models express, in machine-readable terms, what's going on. By default you delete a record and it's gone forever. The only cases in which deleted_at feels natural are when a record is to be restored later, or when a similar record must not be confused with the original one (a Users table is the most likely place you'd want it). But in most models it's just paranoia.
What I'm trying to say is that the ability to restore deleted records should be an explicitly expressed intent, because it's not what people normally expect, and because there are cases where its implicit use is error-prone rather than merely a small overhead (unlike maintaining a created_at column).
Of course, there are a number of cases where you would like to undo the deletion of records (especially when accidental deletion of valuable data is costly). But to make use of that you'll have to modify your application, add forms, and so on, so adding just another line to your model class won't be a problem. And there are certainly other ways to store deleted data.
So IMHO it's an unnecessary feature for every model; it should be turned on only when needed, and only where this way of adding safety is applicable to the particular model. And that means: not by default.
(This post was influenced by railsninja's valuable remarks.)
@Pavel Shved
I'm sorry, but what? Ruby is not for cowards scared of code? That could be one of the most ridiculous things I have ever heard. Sure, in a join table you want to delete records, but in the join model of a has_many :through? Maybe not.
In business applications it often makes good sense not to hard-delete things; users make mistakes, A LOT.
A lot of your response, Pavel, is kind of drivel. There is no shame in using SQL where you need to, and I'm at a loss as to how using deleted_at causes this massive refactor.
@Horace: I don't think is_paranoid should be in core; not everyone needs it. It's a fantastic gem, though: I use it in my work apps and it works great. I'm also fairly certain it hasn't forced me to resort to SQL where I otherwise wouldn't need to, and I can't see a big refactor in my future due to its presence. Use it when you need it, but it should stay a gem.
Is it necessary to unit-test ActiveRecord validations, or are they well tested already and hence reliable enough?
Validations per se should be trustworthy, but you may want to check that the validation is present.
In other words, a good way to test something is as a black box, abstracting the tests from the implementation. For instance, you may have a test that checks that a Person model can't be saved without a name, without caring how the Person class performs that validation.
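For example, a minimal black-box version of that test (RSpec assumed):

describe Person do
  it "cannot be saved without a name" do
    person = Person.new(name: nil)
    expect(person.save).to be(false)
    expect(person.errors[:name]).to be_present
  end
end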
It should be sufficient to accept that libraries such as ActiveRecord are better-tested by the developers than they ever will be by you: for them it's a primary concern, for you it's at best tangential.
That's not to say there won't be bugs (I found a small one in the MS SQL Server adapter once, a long time ago), but the kind of test you're likely to implement is highly unlikely to expose them, as they're most likely edge cases. If you do find a bug, of course, it's very helpful to report it with a test case that exposes it!
I would only test ActiveRecord internals if I were seeking to better understand a particular aspect that the library implements, and I would not include those exploratory tests in an application project, since they're not really relevant to it.
In general, you should write tests for the code you write yourself; if you live, or try to live, in a TDD world, the tests should be written first. If your models have validation rules, you should almost certainly write tests to ensure the rules are present. In most cases the tests will be trivial, but they'll prove really useful if a line inadvertently gets deleted some time in the future...
As Mike wrote, at the very least you should test that the validation exists. It's just a bit of double-entry accounting (sanity check) that is easy enough to do.
Depending on the situation, you should also test that your model is valid or invalid under particular circumstances. For example, if a field requires a certain format, test example values that are valid and ones that aren't. It's much easier to see what this means by reading a few examples in your tests:
class Person < ActiveRecord::Base
  validates_format_of :email,
    :with => /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z/i
end
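The accompanying examples might then read like this (an RSpec sketch, assuming email is the only attribute being validated):

describe Person do
  it "accepts a conventional address" do
    expect(Person.new(email: "jane@example.com")).to be_valid
  end

  it "rejects an address without an @" do
    expect(Person.new(email: "jane.example.com")).not_to be_valid
  end

  it "rejects an address containing spaces" do
    expect(Person.new(email: "jane doe@example.com")).not_to be_valid
  end
end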
Yes, the validations are well-tested and reliable enough. But your correct use of the validations is what you want to verify.
As a side note, Ryan Bigg's blog post has_and_belongs_to_many double insert mentions someone encountering a bug in ActiveRecord (not validation related, though). As he points out, don't assume Rails can't possibly have a bug, because we know there are 900 open tickets for Rails.
But yes, the main reason you'd write a test is to check that your use of ActiveRecord is correct.