Advice on methods for custom validation - ruby-on-rails

I am using Ruby on Rails 3 and I would like to know whether my approach to validating a new record is good or not.
I don't use the common RoR validation system, so in my model I have custom validation methods like these:
def validates_user_name(user)
  ...
end

def validates_user_surname(user)
  ...
end
...
which I call from the controller like this:
def create
  ...
  @user.validates_user_name(params[:user])
  @user.validates_user_surname(params[:user])
  ...
end
Is this a good way to validate the creation of a new user? Will there be problems with hackers if I use this approach?

I think you're going to have a hard time convincing anyone that your custom validations are better than what's built into Rails, especially if the validation logic is similar.
If you still want control over when things happen, you should take advantage of the built-in callback hooks like before_create. There are lots of advantages to doing it this way, including automatic transaction rollback and decoupling. However, if what you're doing is already accomplished by Rails, it's not advisable to reinvent the wheel.
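For comparison, here is a minimal sketch of what those custom checks might look like with the built-in validation macros (the attribute names and rules are assumptions, since the original validation logic isn't shown):

class User < ActiveRecord::Base
  # Hypothetical rules standing in for validates_user_name / validates_user_surname
  validates :name,    :presence => true, :length => { :maximum => 50 }
  validates :surname, :presence => true, :length => { :maximum => 50 }
end

Rails then runs these automatically whenever you call @user.save or @user.valid?, so the controller no longer needs to invoke validation methods by hand.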

Related

Rails best practice - handling errors rails

I am new to Rails. I want to know the best way to handle input errors in Rails. Should I use :message in the validates_format_of method and then check the value of the hash in the views, or initialize the model with an ActiveModel::Errors.new and then use it in the views (exposing it on the model with attr_reader :errors), or is there some other way?
I use :message in the validates_format_of method and then check in the views. This is a universal practice.
The guide http://guides.rubyonrails.org/active_record_validations_callbacks.html#error_messages-and-error_messages_for
A good way to handle input errors is to have both client-side and server-side validation. You can rely on Rails validators on your models, and on the front end you can use the newer HTML5 form validation attributes, JavaScript, or a combination of both.
With regards to Rails validations specifically, I don't think you need to stray too far from the conventions. Obviously if you need a different strategy you can definitely incorporate it, but at that point I would definitely suggest adding tests (which is not something I typically do for Rails validations).
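As a minimal sketch of that convention (the attribute, format, and message here are assumptions for illustration):

class User < ActiveRecord::Base
  validates_format_of :email,
                      :with => /\A[^@\s]+@[^@\s]+\z/,
                      :message => "doesn't look like a valid email address"
end

In the view you can then iterate over @user.errors.full_messages (or use the error-message helpers covered in the guide linked above) to show the failures to the user.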

RoR - Don't destroy object, just flag as hidden

I have a simple model in RoR and I would like to keep everything people enter on the site. But I also want to be able to hide some content if the user clicks on "Remove".
So I added a boolean attribute to my model called "displayed".
I would like to know what the best-practice way of doing this would be.
I guess I have to change the controller to something like:
def destroy
  @point = Point.find(params[:id])
  @point.displayed = false
  @point.save
  respond_to do |format|
    format.html { redirect_to points_url }
    format.json { head :no_content }
  end
end
But I am not sure it is clean. What would be the best way to do it?
As you can guess, I am noobish with RoR, so chunks of code would be appreciated.
Thank you
Implement it yourself (rather than using a gem). It's much, much easier than it seems at first, and it's less complex than any of the gems out there that change the meaning of the destroy method, which is a bad idea, in my opinion.
I'm not saying that the gems themselves are complex - I'm saying that by changing the meaning of the destroy method you're changing the meaning of something that people in the Rails world take for granted: that when you call destroy, that record is going to go away, and that destroy may also be called on dependent objects if they are chained together via dependent: :destroy callbacks.
Changing the meaning of destroy is also bad because in the "convention over configuration" world, when you screw with conventions you're essentially breaking the "automagic-ness" of your Rails code. All that stuff you take for granted because you read a piece of Rails code and you know that certain assumptions generally apply - those go out the window. When you change those assumptions in ways that aren't obvious you're almost certain to introduce a bug down the line because of it.
Don't get me wrong, there's nothing better than actually reading the code for checking your assumptions, but it's also nice, as a community, to be able to talk about certain things and generally have their behavior act in a certain way.
Consider the following:
There's nothing in Rails that says you have to implement the destroy action in the controller, so don't. It's one of the standard actions, but it's not required.
Use the update action to set and clear an archived boolean attribute (or something similarly named)
I've used the acts_as_paranoid gem, and if you need to add any scopes to your models (other than the ones the gem provides) you're going to find yourself having to hack your way around it, turning off the default "hide archived records" scope, and when you run into that it almost immediately loses its value. Besides, that gem does almost nothing on its own, and you could easily write its functionality yourself (I mean barely more work than installing the gem itself), so there's really no benefit to using it from that perspective.
As previously stated, overriding the destroy method or action is a bad idea because it breaks the Rails (and ActiveRecord) convention as to what it means to call destroy on an object. Any gem that does this (acts_as_paranoid for example) is also breaking that convention, and you're going to wind up confusing yourself or someone else because destroy simply won't mean what it's supposed to mean. This adds confusion, not clarity to your code. Don't do this - you'll pay for it later.
If you want to use a soft-delete gem because you are protecting against some theoretical, future developer who might hork up your data...well, the best solution to that is not to hire or work with those people. People that inexperienced need mentorship, not a gem to prevent them from making mistakes.
If you really, absolutely, must prevent destroying a record of a given model (in addition to being able to simply archive it), then use the before_destroy callback and simply return false, which will prevent it from being destroyed at all unless an explicit call to delete is used (which isn't the same as destroy anyway). Also, having the callback in place makes it (a) really obvious why destroy doesn't work without changing its meaning, and (b) it's easy to write a test to make sure it's not destroyable. This means in the future, if you should accidentally remove that callback or do something else that makes that model destroyable, then a test will fail, alerting you to the situation.
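A minimal sketch of that guard (assuming Rails 3-era callback semantics, where returning false halts the chain; newer Rails versions use throw :abort instead):

class Point < ActiveRecord::Base
  before_destroy :prevent_destroy

  private

  def prevent_destroy
    errors.add(:base, "points cannot be destroyed, only archived")
    false # halting the callback chain makes destroy return false
  end
end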
For the archive method itself, something like this:
class Point < ActiveRecord::Base
  def archive
    update_attribute(:displayed, false)
  end
end
And then call @point.archive in the destroy action of your controller where you would normally call @point.destroy. You can also create a default_scope to hide archived points until you explicitly query for them; see the RoR guide on applying a default scope.
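A minimal sketch of that wiring, reusing the names from the question (the default_scope line uses Rails 3 syntax; Rails 4+ wants a lambda):

class PointsController < ApplicationController
  def destroy
    @point = Point.find(params[:id])
    @point.archive

    respond_to do |format|
      format.html { redirect_to points_url }
      format.json { head :no_content }
    end
  end
end

class Point < ActiveRecord::Base
  # Hide archived points from normal queries; Point.unscoped shows them all
  default_scope where(:displayed => true)
end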
Edit: Updated my answer as per normalocity & logan's comments below.
Look at the acts_as_archive gem. It will do soft deletes seamlessly.
Your solution is good, but you can use the acts_as_paranoid gem to manage that.
In this scenario, instead of adding a new boolean flag, it's better to add a deleted_at datetime column:
@point = Point.find(params[:id])
@point.touch(:deleted_at)
...
Then later
Point.where(deleted_at: nil) # these are NOT hidden
Point.where.not(deleted_at: nil) # these are hidden

accessing current_user in model; has to be a better way for logging and auth

I know the dogma says to not access current_user in a model but I don't fully agree with it. For example, I want to write a set of logging functions when an action happens via a rails callback. Or simply writing who wrote a change when an object can have multiple people write to it (not like a message which has a single owner). In many ways, I see current_user more as config for an application - in other words make this app respond to this user. I would rather have my logging via the model DSL rather than in the action where it seems REALLY out of place. What am I missing?
This idea seems rather inelegant Access current_user in model
as does this: http://rails-bestpractices.com/posts/47-fetch-current-user-in-models
thx
edit #1
So my question isn't whether there are gems that can do auditing / logging. I currently use paper_trail (although I'm moving away from it because I can do the same functionality in approx 10 lines of Ruby code); it is more about whether current_user should never be accessed in the model - I essentially want to REDUCE my controller code and push logic down into the models where it should be. Part of this might be due to the history of ActiveRecord, which is essentially a wrapper around database tables to which RoR has added a lot of functionality over the years.
You've given several examples that you'd like to accomplish, I'll go through the solution to each one separately:
I want to write a set of logging functions when an action happens via
a rails callback
Depending on how you want to log (DB vs writing to the logger). If you want to log to the DB, you should have a separate logging model which is given the appropriate information from the controller, or simply with a belongs_to :user type setup. If you want to write to the logger, you should create a method in your application controller which you can call from your create and update methods (or whatever other actions you wanted to have a callback on.)
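As a minimal sketch of the database-logging variant (the model, columns, and controller shown here are assumptions for illustration, not part of the original post):

class ActivityLog < ActiveRecord::Base
  belongs_to :user
  # assumed columns: user_id, action, record_type, record_id
end

class PostsController < ApplicationController
  def update
    @post = Post.find(params[:id])
    if @post.update_attributes(params[:post])
      # The controller knows current_user, so it hands it down explicitly
      ActivityLog.create(:user => current_user, :action => "update",
                         :record_type => "Post", :record_id => @post.id)
      redirect_to @post
    else
      render :edit
    end
  end
end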
Or simply writing who wrote a change when an object can have multiple people write to it
class Foo < ActiveRecord::Base
  belongs_to :edited_by, :class_name => "User"
end

class FooController < ApplicationController
  def update
    @foo = Foo.find(params[:id])
    @foo.attributes = params[:foo]
    @foo.edited_by = current_user
    @foo.save
  end
end
I think you're misunderstanding what the model in Rails does. Its scope is the database. The reason it can't access current_user is that the current user is not stored in the database; it is a session variable. This has absolutely nothing to do with the model, as it is something that cannot exist without a browser.
ActiveRecord::Base is not a class that is designed to work with the browser, it is something that works with the database and only the database. You are using the browser as an interface to that model, but that layer is what needs to access browser specific things such as session variables, as your model is extending a class that is literally incapable of doing so.
This is not a dogma or style choice. This is a fact of the limitations of the class your model is extending from. That means your options basically boil down to extending from something else, handling it in your controller layer, or passing it to the model from your controller layer. ActiveRecord will not do what you want in this case.
The two links you show (each describing, imho, the same approach) are very similar to an approach I still use. I store the current_user somewhere (indeed, thread context is the safest), and in an observer I can then create a kind of audit log of all changes to the watched models, and still log the user.
This is imho a really clean approach.
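A minimal sketch of the thread-local variant the linked posts describe (the filter and method names here are assumptions for illustration):

class ApplicationController < ActionController::Base
  before_filter :set_current_user

  private

  def set_current_user
    # Stash the user for the duration of this request so models/observers can read it
    User.current = current_user
  end
end

class User < ActiveRecord::Base
  def self.current=(user)
    Thread.current[:current_user] = user
  end

  def self.current
    Thread.current[:current_user]
  end
end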
An alternative method, which is more explicit, less clean but more MVC, is to let the controller create the audit log, effectively logging the actions of the users rather than the effects on the different models. This might also be useful, and on one website we did both. In a controller you know the current user and you know the action, but it is more verbose.
I believe your concerns are that somehow this proposed solution is not good enough, or not MVC enough, or ... what?
Another related question: How to create a full Audit log in Rails for every table?
Also check out the audited gem, which also solves this problem very cleanly.
Hope this helps.

Should I prevent record editing using a filter in my controller or a callback in my model?

To preserve data integrity, I need to prevent some models from being modified after certain events. For example, a product shouldn't be allowed to be written off after it has been sold.
I've always implemented this in the controller, like so (pseudo-ish code):
class ProductsController < ApplicationController
  before_filter :require_product_not_sold, :only => [ :write_off ]

  private

  def require_product_not_sold
    if @product.sold?
      redirect_to @product, :flash => { :error => "You can't write off a product that has been sold" }
    end
  end
end
It just struck me that I could also do this in the model. Something like this:
class Product < ActiveRecord::Base
  before_update :require_product_not_sold

  private

  def require_product_not_sold
    if self.written_off_changed?
      # Add an error, fail validation etc. Prevent the model from saving
    end
  end
end
Also consider that there may be several different events that require that a product has not been sold to take place.
I like the controller approach - you can set meaningful flash messages rather than adding validation errors. But it feels like this code should be in the model (eg if I wanted to use the model outside of my Rails app).
Am I doing it wrong?
What are the advantages of handling this in my model?
What are the disadvantages of handling this in my model?
If I handle it in the model, should I really be using validates rather than a callback? What's the cleanest way to handle it?
Thanks for your ideas :)
It seems like you already have this one covered, based on your question. Ideally a model should know how to guard its state, as data objects are typically designed with portability in mind (even when they'll never be used that way).
But in this case you want to prevent an action before the user even has access to the model. Using a model validation in this case means you're too late and the user has already gone farther than he should by having access to and attempting to write off a product which should never have been accessible based on its sold status.
So I guess the ideal answer is "both." The model "should" know how to protect itself, as a backup and in case it's ever used externally.
However in the real world we have different priorities and constraints, so I'd suggest implementing the change you listed if feasible, or saving it for the next project if not.
As far as using a model callback versus a validation, I think that's a trickier question but I'll go with a validation because you'd likely want to present a message to the user and validation is built for exactly that use (I'd consider this more of a friendly and expected user error than a hostile or security-related one which you might handle differently).
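A minimal sketch of that validation flavour (attribute names taken from the question's pseudocode; the message wording is an assumption):

class Product < ActiveRecord::Base
  validate :not_written_off_after_sale, :on => :update

  private

  def not_written_off_after_sale
    if sold? && written_off_changed?
      errors.add(:base, "You can't write off a product that has been sold")
    end
  end
end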
Is that along the lines of what you've been considering?

Rails: Cleaning up ugly controllers

I'm all for the skinnier controller, fatter model mindset.
I wanted to know how to go about:
How you identify things that should be moved to your model (assuming you're like me and you get lazy and need to refactor your controllers because you just shove code in there)
How you write and structure the creation of new elements in your controller. See following example.
Example
I had a relatively messy controller for a polymorphic "vote". I've cleaned it up pretty well, but I wanted to know if I could improve this action a little better:
def up
  vote = Vote.new
  vote.vote = true
  vote.voter = current_user
  vote.voteable = Recipe.find params[:id]
  vote.save
end
To me it's just a little ugly, and I probably should just use create instead of new, but I'm wondering if I'm driving down a deadly path here by using a non-standard action (concerning REST).
I'm working on switching it to new right now. But I definitely wanted to get the point of view of the community on this.
The key to this is Test-Driven Development. Once you make it a habit, the question of where to put code is answered for you 95% of the time. Here's why.
Unit testing (model testing in Rails) is the easiest place to test code. Model methods should be unit tested "black box" style - meaning you don't care what's inside the method, only that input X provides output Y. This will also cause you to write a greater number of smaller methods in your model, instead of very large methods. The easier it is to test, the better - and not just for testing's sake. Simpler methods are easier to override by other code, which is a big advantage of Ruby.
Controller (functional) tests, on the other hand, will find you caring more about what happens inside the action, since those methods aren't cut and dry input/output scenarios. Database calls happen, session variables are set, etc. Shoulda is a great test suite that automates a lot of this for you.
Finally, my advice is to look inside some of your favorite plugins to see how they're doing things. And if you're interested more in testing, I have an article about restful controller tests in Shoulda that might get you started.
In RESTful controllers, especially ones with a lot of actions, I sometimes create a before_filter to load the object:
before_filter :load_recipe, :only => %w(show edit update destroy up down)

private

def load_recipe
  @recipe = Recipe.find(params[:id])
end
In your case I might consider moving voting to the user model so you would have something like:
def up
  current_user.vote(@recipe)
end
And then in your model:
class User < ActiveRecord::Base
  has_many :votes

  def vote(object)
    votes.create(:vote => true, :voteable => object)
  end
end
The benefit of that is that you can easily test the behavior of voting in isolation, and it can be re-used if there are other places you might want to enable voting (voting as an implicit result of another action, voting via an API, mass-voting, etc).
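For example, a minimal sketch of an isolated test for that vote method (the fixture names are assumptions):

require 'test_helper'

class UserTest < ActiveSupport::TestCase
  test "voting creates a vote for the given object" do
    user   = users(:one)      # assumed fixture
    recipe = recipes(:cake)   # assumed fixture

    assert_difference "user.votes.count", 1 do
      user.vote(recipe)
    end
    assert_equal recipe, user.votes.last.voteable
  end
end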
