Are too many filters bad?

I started using the 'flog' and 'flay' gems to bring down code complexity and duplication. As a result, some of my controllers ended up with a lot of before and after filters. For example, even if one line of code was repeated in multiple methods of a controller, I shifted that code into a before_filter. Flog and flay say my code is optimized, but I was wondering whether it really is. Do so many filters slow down execution?

I don't necessarily think so, but I haven't tested it. One way to keep things efficient is to add conditions to your filters. For example: before_filter :store_image, :unless => :has_image?
This way store_image only runs when no image is present.
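For illustration, a minimal sketch of this kind of conditional filter (the controller, filter, and predicate names are made up):

class ImagesController < ApplicationController
  # The filter is skipped when the condition is already satisfied,
  # and only applies to the actions that need it.
  before_filter :store_image, :unless => :has_image?, :only => [:create, :update]

  private

  def store_image
    @image = Image.new  # e.g. build a placeholder record
  end

  def has_image?
    params[:image].present?
  end
end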

Related

Dry up Rails Active Record query conditions

In my Ruby on Rails project, I have a mailer that basically prepares a daily digest of things that happened in the system for a given user. In the mailer, I gather all the relevant records from the various models according to some common pattern (within a certain date range, not authored by this user, not flagged, etc.), with minor differences from model to model.
There are half a dozen models involved here (and counting), and most of them have unified column names for certain things (like the date of publishing, or whether an item is flagged by an admin). Hence, the 'where' calls that go into each query are mostly the same. There are minor differences in conditions, but at least 2 or 3 conditions are exactly the same. I can easily assume there will be even more shared conditions between models, since we are just starting the feature and haven't figured out the eventual shape of the data yet.
Right now I basically chain the 'where' calls on each model. It irritates me to have 6 lines of code so close to each other, spanning so far to the right of my editor, and yet so similar. I dread the idea that at some point we will have to change one of the 'core' conditions and have to touch that many lines of code at once.
What I'd love to do is to move a core set of conditions that goes into each query into some sort of Proc or whatever, then simply call it upon each model like a scope, and after that continue the 'where' chain with model-specific conditions. Much like a scope on each model.
What I am struggling with is how exactly to do that while keeping the code inside the mailer. I certainly know that I could declare a complex scope inside a concern, mix it into my models, and start each query with that scope. However, that way the logic moves away from the mailer into the uncharted territory of model concerns, and it complicates each model with a scope that is currently only needed by one little mailer in a huge system. Also, some of the queries require details from the User model, and I don't want each of my models to handle User.
I like the way scopes are defined in the Active Record models via lambdas (like scope :pending, -> { where(approved: [nil, false]) }), and was looking for a way to use similar syntax outside model class and inside my mailer method (possibly with a tap or something like that), but I haven't found any good examples of such an approach.
So, is it possible to achieve? Can I collect the core 'where' calls inside some variable in my mailer method and apply them to many models, while still being able to continue the where chain after that?
The beauty of Arel, the technology behind ActiveRecord query building, is that it's all completely composable, using ordinary Ruby.
Do I understand correctly that this is what you want to do?
def add_on_something(arel_scope)
  arel_scope.where("magic = true").where("something = 1")
end

add_on_something(User).where("more").order("whatever").limit(10)
add_on_something(Project.where("whatever")).order("something")
An ordinary Ruby method will do it; you don't need a special AR feature, because AR scopes are already composable.
You could do something like:
@report_a = default_scope(ModelA)
@report_b = default_scope(ModelB)

private

def default_scope(model)
  model.
    where(approved: [nil, false]).
    order(:created_at)
  # ...
end
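To address the lambda idea from the question more directly: since every model responds to the same relation API, you can also keep the shared conditions in a lambda inside the mailer and apply it to each model before chaining model-specific conditions. A rough sketch, with illustrative column names and assuming 'user' is available in the mailer method:

core = ->(scope) do
  scope.where(flagged: false).
        where("published_at >= ?", 1.day.ago).
        where("author_id != ?", user.id)
end

digest_articles = core.call(Article).where(category: "news")
digest_comments = core.call(Comment).order(:created_at).limit(50)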

Break away from a controller action

My question is whether there is a Rails equivalent to 'breaking' out of a controller action, such as:
def new
  if some_confirmation
    do_stuff
    break # I know this only breaks out of a loop, but I want it to break out of the action. Is this possible?
  end
  do_some_other_stuff_which_it_should_not_reach_after_breaking
end
This also raises the question of whether I'm doing something wrong and should rather be doing it with a before_filter.
One last question: is it better to enclose everything in an if-else statement in such method definitions (not just controller actions but normal methods too), or to do it the way I intend to above?
The answer is to use return. Controller actions are just methods and return is how you return early from a method. There's nothing wrong with doing this and you should feel free to do so as needed.
You can use before_filters (renamed to before_action in Rails 4), as you mention, but I recommend only doing this if every method in the controller requires the condition. Otherwise, you end up with a huge list of before_actions at the top that you have to keep in mind (or be surprised by) while reading the actions later, and it gets especially confusing when you have to track which ones apply to which actions!
Whether if-else statements are preferable is a matter of opinion, but, inspired by Avdi Grimm's excellent book Confident Ruby, I recommend doing as you've done here. What you've set up is essentially a guard clause: you take care of exiting the method early up front, and then get into the actual meat of the method. In Confident Ruby, Grimm talks about breaking methods up into logical parts that tell a coherent story and don't force readers to keep track of various states throughout the life of the method. An if-else statement tends to make readers track state, whereas guard clauses let you quickly identify the conditions under which you leave the method, then forget about them and focus on its actual purpose.
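For example, an early return used as a guard clause might look like this (the names here are placeholders):

def new
  # Guard clause: leave the action immediately if the precondition fails.
  return redirect_to root_path, alert: "Not allowed" unless some_confirmation

  # The rest of the action only runs once the guard has passed.
  do_stuff
end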

Clean controllers: preparing data for views

Say I have a controller with an action that renders a view. The view needs data to render. I know the following ways to prepare the data and send it to the view:
Using instance variables
class CitiesController < ApplicationController
  def index
    @cities = Cities.order(:name).limit(10)
  end
end
This is the default approach which can be found in Rails documentation, but it has some disadvantages:
It makes the action fat: it becomes responsible not only for controller logic but also for data preparation.
Views need to access this data through instance variables – those @-variables break the principle of least astonishment.
Using helper methods
class CitiesController < ApplicationController
  helper_method :cities

  def index
  end

  def cities
    @cities ||= Cities.order(:name).limit(10)
  end
end
That's the approach I prefer most. It keeps action methods clean, so I can implement controller logic there without mixing it with data preparation in one method. Also, there's no need to use mysterious instance variables in views, which keeps them isolated. However:
The data preparation is still in the controller. It becomes unreadable when there are a lot of these helper methods, especially when they relate to different actions/views.
Each helper method needs a unique name. I can't, say, have a method called products that returns different data for different actions (of course, I could do it in one method, but it would look ugly).
Using the facade pattern
Partially the problem is solved in this article: https://medium.com/p/d65b86cdb5b1
But I didn't like this approach because it introduces a @magic_facade_object in views.
Using inherited resources
It may look beautiful in examples, but in my opinion, when it comes to real code, the controller becomes a spaghetti monster very fast. The other thing is that a page usually needs not only the resource but also other data to render (sidebar blocks, etc.), and I still have to use another approach to prepare that. Combining different approaches makes the code even less readable. Finally, I don't like using the resource variable, because it makes it less clear what the view is about.
So, here is the question. How do you keep your controllers clean?
How do you keep your controllers clean?
By writing DRY code and sprinkling some gem magic around.
Having a look at your bullet points, I think I have a different opinion on most of the stuff.
@cities = Cities.order(:name).limit(10) is exactly what I think belongs in a Rails controller, and it does not violate the principle of least surprise; it's rather the opposite. Instance variables are the default way of passing variables from controllers to views, even though that is a pretty ugly thing to do. It's "the Rails way" (TM)!
decent_exposure takes away most of these concerns (see the sketch after this list).
Please stop applying old-school patterns to Rails or Ruby code. They're really only useful in large applications where you are struggling to stay sane with the amount of code inside a single controller method. Write clean code, test it thoroughly, and you will be fine 80% of the time.
Don't use "one size fits all" tools. Most often, you end up writing more configuration than you would need code to do it yourself, and everything also gets a lot more complex because of them.

How to improve performance of single-page application?

Introduction
I have a (mostly) single-page application built with BackboneJS and a Rails backend.
Because most of the interaction happens on one page of the webapp, when the user first visits the page I basically have to pull a ton of information out of the database in one large deeply joined query.
This is causing me some rather extreme load times on this one page.
NewRelic appears to be telling me that most of my problems are because of 457 individual fast method calls.
Now I've done all the eager loading I can do (I checked with the Bullet gem) and I still have a problem.
These method calls are most likely occurring in my Rabl serializer, which I use to serialize a bunch of JSON to embed into the page for initializing Backbone. You don't need to understand all of this, but suffice it to say it could add up to 457 method calls.
object @search

attributes :id, :name, :subscription_limit

# NOTE: Include a list of the members of this search.
child :searchers => :searchers do
  attributes :id, :name, :gravatar_icon
end

# Each search has many concepts (there could be over 100 of them).
child :concepts do |search|
  attributes :id, :title, :search_id, :created_at

  # The person who suggested each concept.
  child :suggester => :suggester do
    attributes :id, :name, :gravatar_icon
  end

  # Each concept has many suggestions (approx. 4 each).
  node :suggestions do |concept|
    # Here I'm scoping suggestions to only ones which meet certain conditions.
    partial "suggestions/show", object: concept.active_suggestions
  end

  # Add a boolean flag to tell if the concept is a favourite or not.
  node :favourite_id do |concept|
    # Another method call which occurs for each concept.
    concept.favourite_id_for(current_user)
  end
end

# Each search has subscriptions to certain services (approx. 4).
child :service_subscriptions do
  # This contains a few attributes and 2 fairly innocuous method calls.
  extends "service_subscriptions/show"
end
So it seems that I need to do something about this but I'm not sure what approach to take. Here is a list of potential ideas I have:
Performance Improvement Ideas
Dumb-Down the Interface
Maybe I can come up with ways to present information to the user which don't require the actual data to be present. I don't see why I should absolutely need to do this, though; other single-page apps such as Trello have incredibly complicated interfaces.
Concept Pagination
If I paginate concepts, it will reduce the amount of data being extracted from the database each time. It would produce an inferior user interface, though.
Caching
At the moment, refreshing the page just extracts the entire search out of the DB again. Perhaps I can cache parts of the app to reduce DB hits. This seems messy, though, because not much of the data I'm dealing with is static.
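As a very rough sketch of this idea (the cache key and the way the JSON is produced are assumptions, not the app's actual code), the serialized payload could be cached against the search's updated_at, so a refresh only rebuilds it when something has changed:

def show
  cache_key = ["search-payload", @search.id, @search.updated_at]
  @search_json = Rails.cache.fetch(cache_key) do
    # however the Rabl template is normally rendered to a string
    render_to_string(:template => "searches/show", :formats => [:json])
  end
end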
Multiple Requests
It is technically worse to serve the page without the JSON embedded in it, but perhaps the user will feel like things are happening faster if I load the page unpopulated and then fetch the data.
Indexes
I should make sure that I have indexes on all my foreign keys. I should also try to think about places where it would help to have indexes (such as favourites?) and add them.
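A sketch of the corresponding migration, with table and column names guessed from the question rather than taken from the real schema:

class AddForeignKeyIndexes < ActiveRecord::Migration
  def change
    add_index :concepts, :search_id
    add_index :suggestions, :concept_id
    add_index :favourites, [:concept_id, :user_id]
  end
end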
Move Method Calls into DB
Perhaps I can cache some of the results of the iteration I do in my view layer into the DB and just pull them out instead of computing them. Or I could sync things on write rather than on read.
Question
Does anyone have any suggestions as to what I should be spending my time on?
This is a hard question to answer without being able to see the actual user interface, but I would focus on loading only as much data as is required to display the initial interface. For example, if the user has to drill down to see some of the data you're presenting, then you can load that data on demand rather than loading it as part of the initial payload. You mention that a search can have as many as 100 "concepts"; maybe you don't need to fetch all of those concepts initially?
Bottom line, it doesn't sound like your issue is really on the client side -- it sounds like your server-side code is slowing things down, so I'd explore what you can do to fetch less data, or to defer the complex queries until they are definitely required.
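A minimal sketch of that on-demand approach (the route, action, and page size are hypothetical, and real code would reuse the Rabl partials rather than render json: directly):

class ConceptsController < ApplicationController
  PER_PAGE = 25

  # GET /searches/:search_id/concepts?page=0
  def index
    search   = Search.find(params[:search_id])
    concepts = search.concepts.order(:created_at).
                 limit(PER_PAGE).
                 offset(params[:page].to_i * PER_PAGE)
    render json: concepts
  end
end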
I'd recommend separating your JS code-base into modules that are dynamically loaded using an asset loader like RequireJS. This way you won't have so many XHRs firing at load time.
When a specific module is needed it can load and initialize at an appropriate time instead of every page load.
If you complicate your code a little, each module should be able to start and stop. So, if you have any polling occurring or complex code executing you can stop the module to increase performance and decrease the network load.

Rails 2 call from one model to another is slow

In Rails 2, I'm trying to optimize the performance of a web page that loads slowly.
I'm timing the execution time of statements in a model and finding that a surprising amount of the time is in a call from inside one model to another model, even though it appears there is no database access at all.
To be specific, let's say the slow model is Department, and I'm calculating Department.expenditures. The expenditures method needs to know whether the quarter has been closed, and that information is in a different model, Quarter.
The first time Department.expenditures calls Quarter.closed? there is a database access, and I can accept that. But I've done something to keep that result in memory inside the model method, so that future calls to Quarter.closed? involve no database access. The code inside Quarter.closed? now runs in around 4 microseconds, but simply invoking Quarter.closed? from inside Department.expenditures takes 400 microseconds, and with hundreds of departments, that adds up.
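A rough sketch of the kind of in-model memoization described here (column names are made up; note that a class-level cache like this lives for the whole process, so it only suits data that cannot change mid-request):

class Quarter < ActiveRecord::Base
  def self.closed?(quarter_id)
    @closed_cache ||= {}
    unless @closed_cache.key?(quarter_id)
      # Only the first call per quarter hits the database.
      @closed_cache[quarter_id] = exists?(:id => quarter_id, :closed => true)
    end
    @closed_cache[quarter_id]
  end
end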
I could cache the Quarter.closed value inside a global variable, but that seems hairy. Does anyone know what is going on or have a suggestion about a better practice?
I'm not 100% sure this applies to your problem, but with similar loading-time problems, eager loading solves the issue in many cases. You would do it like this:
Department.all(:include => :expenditures)
I'm a bit rusty on Rails 2 syntax. In Rails 3 you can specify nested includes in quite some detail, like this:
Category.includes(:posts => [{:comments => :guest}, :tags]).find(1)
I think (but am not sure) that the :include option in Rails 2 allowed similar syntax.
So maybe this would work:
Department.all(:include => [:expenditures => [:quarters]])
(This may need some experimentation with the combination of array/hash syntax.)
