Rails App Interface Architecture Design

I have a Rails application that serves as an interface to a hybrid of data sources. Most of the information I require is retrieved from a command-line program using XML-RPC. Aside from this, I require some additional bits of data which I have no option but to store in a database. For this reason, I am having trouble figuring out what would be the best way to design the application.
I have overridden self.all and self.find(id) such that they rely on calls to super and then "enrich" the returned objects by setting their instance variables to the appropriate data retrieved from the program via XML-RPC.
This all seems pretty convoluted though. For example, I imagine I have lost the ability to use the magic finders (find_by_x), and I don't know if anything else will break as a result of this.
My question is whether there is a more logical and sensible way of going about this, i.e. designing an application that depends on XML-RPC for most of its data, but also on some data stored in a database.
I did read about after_find. Using this callback, I can implement the "object enriching" process and have it run any time a record is found. However, my method of retrieving data for a single item differs from the one for retrieving data for all items. The way I do it for all item data (self.all) is far more efficient, but unfortunately not applicable to retrieving only one item's data (self.find). This would work well if there were a way to make the callback not apply to self.all calls.

In my experience, you shouldn't mess with ActiveRecord's finders - there is a lot of magic that they rely on.
after_find is a great direction to start with, but if you're having issues with batching, then what I'd recommend is twofold: use a caching layer, and use alias_method_chain to implement a version of #all that performs your batched XML-RPC find, caches the results, and then passes the call through to the unaliased original all. Your after_find would then check the cache for data first, and only perform the remote find on a cache miss. This would let you successfully batch data for all finds while utilizing the callback.
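A plain-Ruby sketch of that caching idea (XmlRpcClient and its batch_fetch/fetch methods are hypothetical stand-ins for your XML-RPC layer, not a real API):

```ruby
# Stand-in for the real XML-RPC layer; both methods are hypothetical.
module XmlRpcClient
  def self.batch_fetch(ids)
    ids.map { |id| [id, "remote-data-#{id}"] }.to_h  # one batched call
  end

  def self.fetch(id)
    "remote-data-#{id}"  # one single-record call
  end
end

class Item
  @rpc_cache = {}  # per-process here; scope it per request in a real app

  # Batched path: one XML-RPC round trip fills the cache for many records.
  def self.prefetch_remote_data(ids)
    XmlRpcClient.batch_fetch(ids).each { |id, data| @rpc_cache[id] = data }
  end

  # What an after_find-style hook would call: cache hit, or single fetch.
  def self.remote_data_for(id)
    @rpc_cache[id] ||= XmlRpcClient.fetch(id)
  end
end
```

After Item.prefetch_remote_data([1, 2, 3]), remote_data_for(2) is answered from the cache, while an id that was never prefetched falls back to a single XML-RPC call.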
That said, there is probably an easier way to do this. I would just use models that don't descend from ActiveRecord::Base, but rather, which descend from some XMLRPC base interface, and then have faux associations on them that point to AR instances with your database information. Thus, you might have something like:
class XmlRpcModelBase
  # ...
  def self.find(...)
  end

  def self.all(...)
  end

  def extra_data
    @extra_data ||= SomeActiveRecordModel.find(...)
  end
end

class Foo < XmlRpcModelBase
end
It's not ideal, and honestly, it's going to depend a lot on how much of this is read, and how much is read/write, but I would try to stay out of ActiveRecord's way where possible, and instead just bolt on the AR-related pieces as necessary.

Related

Rails - Force model to be created via factory method

I'm using Rails 4. I have a class, Cart, which needs to be accessed within my application.
I want it accessed using the factory pattern:
class CartFactory
  def self.obtain_cart_for_user(user)
    # ...
  end
end
I need this approach because sometimes, I want to return an existing cart and sometimes create a new one (based upon the age of the cart, its contents, whether the products in it are still available etc).
This is easy enough.
However, I also want to make sure some other future programmer doesn't instantiate a cart directly, or fetch one by any other means, including via model associations, such as:
Cart.new(...)
user.carts.new(...)
Cart.find(id)
Cart.find_by_attribute(blah: blah)
Cart.where(...).first
Is there any way to prevent that?
Well, it's possible to make the constructor private:
private_class_method :new
And of course, you can try making the ActiveRecord query methods (.find, .where etc.) private as well. But to me that sounds like a good way to end up with erratic behaviour. If you were to go this route, make sure your app is thoroughly tested first.
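A minimal sketch of that first route, with a plain-Ruby Cart standing in for the real ActiveRecord model:

```ruby
class Cart
  private_class_method :new  # direct Cart.new now raises NoMethodError

  # The factory is still allowed to call new, because the call happens
  # inside the class itself (implicit receiver).
  def self.obtain_cart_for_user(user)
    new(user)
  end

  attr_reader :user

  def initialize(user)
    @user = user
  end
end
```

Cart.obtain_cart_for_user("alice") works, while Cart.new("bob") raises NoMethodError. With ActiveRecord this is exactly where things get erratic: .find, .where and associations still instantiate records through other code paths.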
Another route would be for Cart not to extend ActiveRecord::Base (which I'm assuming it does), and instead include only the parts you need, like ActiveRecord::Persistence. If you are willing to dive in deep, check out the parts that are included in the source for ActiveRecord::Base.
Edit: Yet another option would be to make Cart itself private within a module that only exposes CartFactory. Ruby has no dedicated "private class" syntax, but Module#private_constant achieves the same effect, since classes are just regular objects assigned to constants. Again, no idea how well ActiveRecord would deal with that.
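A sketch of that module approach (Shop and the plain-Ruby Cart here are illustrative stand-ins):

```ruby
module Shop
  class Cart
    attr_reader :user

    def initialize(user)
      @user = user
    end
  end
  private_constant :Cart  # Shop::Cart can no longer be referenced outside

  class CartFactory
    def self.obtain_cart_for_user(user)
      Cart.new(user)  # inside the module, the constant is still visible
    end
  end
end
```

Shop::CartFactory.obtain_cart_for_user("alice") works, while referencing Shop::Cart from outside the module raises NameError.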
But lastly there is of course the question of whether you want to do this at all. In general, Ruby is not very good at protecting you from yourself. :) As expressed in the latter linked answer, documentation and trust go a long way.

Rails short term caching of complex query results

The special_item_id_list method is responsible for returning an array of ids. The query and logic is complicated enough that I only want to have to run it once per any page request, but I'll be utilizing that resulting array of ids in many different places. The idea is to be able to use the is_special? method or the special_items scope freely without worrying about incurring overhead each time they are used, so they rely on the special_item_id_list method to do the heavy lifting and caching.
I don't want the results of this query to persist between page loads, but I'd like the query ran only once per page load. I don't want to use a global variable and thought a class variable on the model might work, however it appears that the class variable does persist between page loads. I'm guessing the Item class is part of the Rails stack and stays in memory.
So where would be the preferred place for storing my id list so that it's rebuilt on each page load?
class Item < ActiveRecord::Base
  scope :special_items, lambda { where(:id => special_item_id_list) }

  def self.special_item_id_list
    @special_item_id_list ||= ... # some complicated queries
  end

  def is_special?
    self.class.special_item_id_list.include?(id)
  end
end
UPDATE: What about using Thread? I've done this before for tracking the current user and I think it could be applied here, but I wonder if there's another way? There's a StackOverflow conversation discussing threads for this, which also mentions the request_store gem as possibly a cleaner way of doing so.
This railscast covers what you're looking for. In short, you're going to want to do something like this:
after_commit :flush_cache

def self.cached_special_item_list
  Rails.cache.fetch("special_items") do
    special_item_id_list
  end
end

private

def flush_cache
  Rails.cache.delete("special_items")
end
At first I went with a form of Jonathan Bender's suggestion of utilizing Rails.cache (thanks John), but wasn't quite happy with how I was having to expire it. For lack of a better idea I thought it might be better to use Thread after all. I ultimately installed the request_store gem to store the query results. This keeps the data around for the duration I wanted (the lifetime of the request/response) and no longer, without any need for expiration.
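The request_store API is essentially a per-request hash behind RequestStore.store; a sketch with a minimal stand-in for the gem (the real gem clears the hash between requests via a Rack middleware):

```ruby
# Minimal stand-in for the request_store gem's RequestStore.store API.
module RequestStore
  def self.store
    Thread.current[:request_store] ||= {}
  end

  def self.clear!  # the gem's middleware does this after each request
    Thread.current[:request_store] = {}
  end
end

class Item
  def self.special_item_id_list
    # Memoized for the current request only, not the process lifetime.
    RequestStore.store[:special_item_ids] ||= begin
      [1, 2, 3]  # stands in for the complicated queries
    end
  end
end
```

Because the store is emptied at the end of every request, there is no explicit expiration to manage, which is exactly the lifetime the question asks for.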
Are you really sure this optimisation is necessary? Are you having performance issues because of it? Unless it's actually a problem I would not worry about it.
That said; you could create a new class, make special_item_id_list an instance method on that class, and then pass an instance of that class around to anything that needs that expensive-to-calculate data.
Or it might suffice to cache the data on instances of Item (possibly by making special_item_id_list an instance method), and not worry about different instances not being able to share the cache.

accessing current_user in model; has to be a better way for logging and auth

I know the dogma says to not access current_user in a model but I don't fully agree with it. For example, I want to write a set of logging functions when an action happens via a rails callback. Or simply writing who wrote a change when an object can have multiple people write to it (not like a message which has a single owner). In many ways, I see current_user more as config for an application - in other words make this app respond to this user. I would rather have my logging via the model DSL rather than in the action where it seems REALLY out of place. What am I missing?
This idea seems rather inelegant Access current_user in model
as does this: http://rails-bestpractices.com/posts/47-fetch-current-user-in-models
thx
edit #1
So my question isn't if there are gems that can do auditing / logging. I currently use paper_trail (although moving away from it because I can do same functionality in approx 10 lines of ruby code); it is more about whether current_user should never be accessed in the model - I essentially want to REDUCE my controller code and push down logic to models where it should be. Part of this might be due to the history of ActiveRecord which is essentially a wrapper around database tables for which RoR has added a lot of functionality over the years.
You've given several examples that you'd like to accomplish, I'll go through the solution to each one separately:
I want to write a set of logging functions when an action happens via
a rails callback
It depends on where you want to log (the DB vs. the logger). If you want to log to the DB, you should have a separate logging model which is given the appropriate information from the controller, or simply a belongs_to :user type setup. If you want to write to the logger, you should create a method in your ApplicationController which you can call from your create and update methods (or whatever other actions you want to have a callback on).
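A sketch of the "separate logging model" option, with a plain-Ruby stand-in for what would be an ActiveRecord AuditLog model (all names here are hypothetical):

```ruby
class AuditLog
  RECORDS = []  # stands in for the audit_logs table

  def self.create!(attrs)
    RECORDS << attrs
    attrs
  end
end

# In a controller action, the current user is passed in explicitly --
# which is what keeps the model free of session concerns:
def log_change(user_id, record, action)
  AuditLog.create!(user_id: user_id, action: action,
                   loggable: "#{record[:type]}##{record[:id]}")
end

log_change(42, { type: "Foo", id: 7 }, "update")
```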
Or simply writing who wrote a change when an object can have multiple people write to it
class Foo < ActiveRecord::Base
  belongs_to :edited_by, class_name: "User"
end

class FooController < ApplicationController
  def update
    @foo = Foo.find(params[:id])
    @foo.attributes = params[:foo]
    @foo.edited_by = current_user
    @foo.save
  end
end
I think you're misunderstanding what the model in Rails does. Its scope is the database. The reason it can't access current_user is that the current user is not stored in the database; it is a session variable. This has absolutely nothing to do with the model, as it is something that cannot exist without a browser.
ActiveRecord::Base is not a class that is designed to work with the browser, it is something that works with the database and only the database. You are using the browser as an interface to that model, but that layer is what needs to access browser specific things such as session variables, as your model is extending a class that is literally incapable of doing so.
This is not a dogma or style choice. This is a fact of the limitations of the class your model is extending from. That means your options basically boil down to extending from something else, handling it in your controller layer, or passing it to the model from your controller layer. ActiveRecord will not do what you want in this case.
The two links you show (which imho take the same approach) are very similar to an approach I still use. I store the current_user somewhere (indeed, thread-context is the safest), and in an observer I can then create a kind of audit log of all changes to the watched models, and still log the user.
This is imho a really clean approach.
An alternative method, which is more explicit, less clean but more MVC, is that you let the controller create the audit-log, effectively logging the actions of the users, and less the effects on different models. This might also be useful, and in one website we did both. In a controller you know the current-user, and you know the action, but it is more verbose.
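The thread-context idea boils down to a thread-local accessor that the controller sets per request and the model (or observer) reads; a plain-Ruby sketch (the Current module name is an assumption, not a Rails built-in):

```ruby
# Thread-local storage for the current user: set per request, read anywhere
# on the same thread.
module Current
  def self.user=(user)
    Thread.current[:current_user] = user
  end

  def self.user
    Thread.current[:current_user]
  end
end

# An around_action in ApplicationController would do roughly:
#   Current.user = current_user
#   yield
# ensure
#   Current.user = nil
Current.user = "alice"  # stands in for a real User record
Current.user            # readable from a model callback on this thread
```

Resetting in an ensure block matters: threads are reused between requests, so a stale user would otherwise leak into the next request served by that thread.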
I believe your concerns are that somehow this proposed solution is not good enough, or not MVC enough, or ... what?
Another related question: How to create a full Audit log in Rails for every table?
Also check out the audited gem, which solves this problem as well very cleanly.
Hope this helps.

Rails: Run code before ActiveRecord model retrieval

Is there a way to run code BEFORE model retrieval? I know about the after_find callback, but I need to run before. I'd also like it to run only ONCE per retrieval regardless of the number of records returned. Looking at the RoR source it seems the query is actually executed in exec_queries(or to_a) in ActiveRecord::Relation. Do/Should I override this method to add this hook?
And just in case I'm going about this all wrong, the reason I'm asking is I have an external REST API I am using to retrieve data, but it is too slow to retrieve after every page reload. I was originally using memcached, but I figured I could just use ActiveRecord to cache the data in a database so I can easily query the data and possibly join it with similar data from other REST APIs. I'd like to plug in a callback that would after a certain timeout duration, reload the data from REST before returning ActiveRecord results.
Basically I'm looking for the best way to centralize refreshing my database from another source (REST) instead of cluttering up my controllers or overriding every model accessor that I use (is there a way to override them all easily?). Perhaps the best solution lies here.
It appears all of the built in methods like all, find, first, and last call apply_finder_options (and then where), but the dynamically created finders (find_by_name, etc) call find_dynamic_match which eventually calls where. This is what lead me to the to_a method on the relation, since it is common and called when the query is actually executed, and not just when building a relation before a query is executed. However getting this low level in into Rails makes me uncomfortable.
It seems like my problem shouldn't be an uncommon one, so perhaps my approach is wrong?
FYI, I'm new to rails and ruby. Thanks!
I would strongly advise against using a low-level hook in place of explicit cache checking. ActiveRecord has its own caching mechanism, but if that isn't doing it for you and you need to build your own, use it explicitly before using ActiveRecord finders. Hooks like these can make it very confusing to determine what is happening and why, and are not a recommended practice. Here is an example using a proxy model:
class CacheProxy
  attr_accessor :klass

  def initialize(klass)
    @klass = klass
  end

  def method_missing(method_id, *arguments, &block)
    reload_if_necessary
    klass.send(method_id, *arguments, &block)
  end

  private

  def reload_if_necessary
    return unless needs_reload?
    # perform reload
  end

  def needs_reload?
    # determine if we need to reload the cache
  end
end

class ActiveRecord::Base
  def self.proxy
    CacheProxy.new(self)
  end
end
Now you can even do MyModel.proxy.find_by_first_name_and_last_name('John', 'Doe')

Rails Tableless Model

I'm creating a tableless Rails model, and am a bit stuck on how I should use it.
Basically I'm trying to create a little application using Feedzirra that scans a RSS feed every X seconds, and then sends me an email with only the updates.
I'm actually trying to use it as an activerecord model, and although I can get it to work, it doesn't seem to "hold" data as expected.
As an example, I have an initializer method that parses the feed for the first time. On the next requests, I would like to simply call the get_updates method, which according to feedzirra, is the existing object (created during the initialize) that gets updated with only the differences.
I'm finding it really hard to understand how this all works, as the object created on the initialize method doesn't seem to persist across all the methods on the model.
My code looks something like:
def initialize
  # feed parse here
end

def get_updates
  # feedzirra update, passing the feed object here
end
Not sure if this is the right way of doing it, but it all seems a bit confusing and not very clear. I could be over or under-doing here, but I'd like your opinion about this approach.
Thanks in advance
Using the singleton design pattern, it is possible to keep values in memory between requests in Ruby on Rails. Rails does not reload all objects on every request, so it is possible to keep an in-memory store.
With the following in config/initializers/xxx:
require 'singleton'

class PersistanceVariableStore
  include Singleton

  def set(val)
    @myvar = val
  end

  def get
    @myvar
  end
end
In a controller for example :
@r = PersistanceVariableStore.instance
@r.set(params[:set]) if params[:set]
Then in a view:
<%= @r.get %>
The value held by the singleton will persist between requests (unless running in CGI mode).
Not that I think this is a good idea...
The instance variable will not persist between requests since they are entirely different instances. You will likely want to store the feed data in a database so it can be saved between requests and updated after the next request.
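A sketch of that last suggestion: persist entries keyed by URL, and have the update step return only what hasn't been seen before (FeedEntry stands in for what would be an ActiveRecord model; all names are hypothetical):

```ruby
class FeedEntry
  STORE = {}  # stands in for the feed_entries table, keyed by entry URL

  # Returns only the entries not already persisted -- the "updates"
  # you'd send in the email.
  def self.add_new(entries)
    fresh = entries.reject { |e| STORE.key?(e[:url]) }
    fresh.each { |e| STORE[e[:url]] = e }
    fresh
  end
end

first_run  = FeedEntry.add_new([{ url: "a", title: "A" }])
second_run = FeedEntry.add_new([{ url: "a", title: "A" },
                                { url: "b", title: "B" }])
# second_run holds only the "b" entry
```

Because the dedup state lives in the database rather than in an object's instance variables, it survives between requests and between runs of the scanning job.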
