rails: how to link values in data migrations to class/model variables? - ruby-on-rails

My SQL DB contains the tables "jobs" and "job_categories".
"job_categories" associates job category strings (e.g. "Software Development") with an integer number (e.g. 7).
I need these associations saved into variables in my job controller for various query functions. How can I use rails to dynamically link changes to the job_categories table to variables in my jobs controller? I've worked with RoR for a few weeks now but am still a little fuzzy on how everything interacts. Thank you!

There's one big gotcha with what you're trying to do, but first I'll answer your question as asked.
Create class-level accessors in your JobsController, then write an Observer on the JobCategory class that makes the appropriate changes to the JobsController after save and destroy events.
class JobsController < ActionController::Base
  # Build a name => id hash so lookups match the observer below
  @@categories = JobCategory.find(:all).inject({}) { |h, c| h[c.name] = c.id; h }
  cattr_accessor :categories
  # ...
end
class JobCategoryObserver < ActiveRecord::Observer
  def after_save(category)
    JobsController.categories[category.name] = category.id
  end

  def after_destroy(category)
    JobsController.categories.delete(category.name)
  end
end
You'll need additional logic that removes the old name if you allow for name changes. The methods in ActiveRecord::Dirty will help with that.
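The rename bookkeeping itself is independent of Rails and can be sketched in plain Ruby. In the observer you would get the previous name from ActiveModel::Dirty (e.g. category.name_was); CategoryCache here is a hypothetical helper, not part of any library:

```ruby
# Hypothetical helper: keeps a name => id hash in sync across renames.
class CategoryCache
  def initialize
    @by_name = {}
  end

  # old_name comes from ActiveModel::Dirty (category.name_was); nil on create.
  def store(old_name, new_name, id)
    @by_name.delete(old_name) if old_name && old_name != new_name
    @by_name[new_name] = id
  end

  def [](name)
    @by_name[name]
  end
end

cache = CategoryCache.new
cache.store(nil, 'Software Development', 7)
cache.store('Software Development', 'Software Engineering', 7)
cache['Software Development'] # => nil, the stale key is gone
cache['Software Engineering'] # => 7
```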
So, the gotcha. The problem with an approach like this is that typically you have more than one process serving requests. You can make a change to the job_categories table, but that change only is updated in one process. The others are now stale.
Your job_categories table is likely to be small. If it's accessed with any frequency, it'll be cached in memory, either by the OS or the database server. If you query it enough, the results of that query may even be cached by the database. If you aren't querying it very often, then you shouldn't be bothering with trying to cache inside JobsController anyway.
If you absolutely must cache in memory, you're better off going with memcached. Then you get a single cache that all your Rails processes work against and no stale data.
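With memcached you'd point Rails.cache at it (config.cache_store = :mem_cache_store) and use its read-through fetch. The semantics can be sketched in plain Ruby; SimpleCache is a stand-in for illustration, not a real store:

```ruby
# Minimal sketch of read-through caching, the semantics Rails.cache.fetch provides.
class SimpleCache
  def initialize
    @store = {}
  end

  # Returns the cached value, computing and storing it via the block on a miss.
  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end

  def delete(key)
    @store.delete(key)
  end
end

cache = SimpleCache.new
calls = 0
cache.fetch('job_categories') { calls += 1; { 'Software Development' => 7 } }
cache.fetch('job_categories') { calls += 1; { 'Software Development' => 7 } }
calls # => 1, the second fetch hit the cache
```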

Related

Rails short term caching of complex query results

The special_item_id_list method is responsible for returning an array of ids. The query and logic are complicated enough that I only want to run them once per page request, but I'll be using the resulting array of ids in many different places. The idea is to be able to use the is_special? method or the special_items scope freely, without worrying about incurring overhead each time they are used, so they rely on the special_item_id_list method to do the heavy lifting and caching.
I don't want the results of this query to persist between page loads, but I'd like the query to run only once per page load. I don't want to use a global variable, and I thought a class variable on the model might work; however, it appears that the class variable does persist between page loads. I'm guessing the Item class is part of the Rails stack and stays in memory.
So where would be the preferred place for storing my id list so that it's rebuilt on each page load?
class Item < ActiveRecord::Base
  scope :special_items, lambda { where(:id => special_item_id_list) }

  def self.special_item_id_list
    @special_item_id_list ||= ... # some complicated queries
  end

  def is_special?
    self.class.special_item_id_list.include?(id)
  end
end
UPDATE: What about using Thread? I've done this before for tracking the current user and I think it could be applied here, but I wonder if there's another way. Here's a StackOverflow conversation discussing threads, which also mentions the request_store gem as possibly a cleaner way of doing so.
This railscast covers what you're looking for. In short, you're going to want to do something like this:
after_commit :flush_cache

def self.cached_special_item_list
  Rails.cache.fetch("special_items") do
    special_item_id_list
  end
end

private

def flush_cache
  Rails.cache.delete("special_items")
end
At first I went with a form of Jonathan Bender's suggestion of utilizing Rails.cache (thanks John), but wasn't quite happy with how I was having to expire it. For lack of a better idea I thought it might be better to use Thread after all. I ultimately installed the request_store gem to store the query results. This keeps the data around for the duration I wanted (the lifetime of the request/response) and no longer, without any need for expiration.
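The idea behind request_store is essentially a Thread.current slot plus middleware that clears it after every request. A minimal plain-Ruby sketch of that pattern (SpecialItems and compute_special_ids are hypothetical stand-ins for the real model and queries):

```ruby
# Per-thread memoization; request_store adds middleware that calls
# the equivalent of reset! at the end of each request.
class SpecialItems
  def self.ids
    Thread.current[:special_item_ids] ||= compute_special_ids
  end

  def self.reset!
    Thread.current[:special_item_ids] = nil
  end

  def self.compute_special_ids
    # stand-in for the complicated queries
    [1, 2, 3]
  end
end

SpecialItems.ids    # runs the computation
SpecialItems.ids    # memoized for the rest of this thread/request
SpecialItems.reset! # what the middleware does when the request ends
```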
Are you really sure this optimisation is necessary? Are you having performance issues because of it? Unless it's actually a problem I would not worry about it.
That said, you could create a new class, make special_item_id_list an instance method on that class, and then pass the class around to anything that needs to use that expensive-to-calculate data.
Or it might suffice to cache the data on instances of Item (possibly by making special_item_id_list an instance method), and not worry about different instances not being able to share the cache.

ActiveRecord Loading Relational Table As Enum Instances On Class Load

I have a relational database table that holds a product lookup. This table powers multiple systems, only one of which is a Rails app. In the Rails app, I want to use the product lookup as an ActiveRecord class with instance members with the product code - for example, the key code field is a 4-digit alphanumeric. It would be nice to be able to refer to instances by the code like this: ProductCode.01A3. I don't want to simply declare them in the Rails code, of course, because the DB is the system of record for multiple systems. Also, how would Ruby react to a non-existent product code? If ProductCode.ABCD doesn't exist, does it just silently return a nil, and I'd need nil checks everywhere? And then there's the issue of releasing a new ProductCode into production. Updating the table would require reloading the class instance variables.
Thoughts? Can this be done? Should this be done? I've searched for a library but maybe my Google-fu isn't that good.
Lookup tables are a great tool to reduce the number of database queries and I often use them in long worker processes. I wouldn't recommend using them in a webapp unless the data is accessed often, rarely changes, and is expensive to get.
My implementation for your problem would look like this:
class ProductCode
  @product_codes = {}

  class << self
    attr_accessor :product_codes
  end

  def self.get(code)
    @product_codes[code.to_s]
  end

  def self.cache_all
    # whatever you do here
    @product_codes = {'01A3' => 42}
  end
end

ProductCode.cache_all
ProductCode.get('01A3')
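On the question of non-existent codes silently returning nil: Hash#fetch lets the lookup fail loudly instead. A hypothetical strict variant of the cache (StrictProductCode is a name invented for this sketch):

```ruby
# Hypothetical variant of the lookup cache whose get raises on unknown codes
# instead of silently returning nil.
class StrictProductCode
  @product_codes = { '01A3' => 42 }

  class << self
    attr_accessor :product_codes
  end

  def self.get(code)
    product_codes.fetch(code.to_s) do
      raise KeyError, "unknown product code: #{code}"
    end
  end
end

StrictProductCode.get('01A3')   # => 42
# StrictProductCode.get('ZZZZ') # would raise KeyError
```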

Rails: How can I cache system table data

Say I have a system table 'categories' with 2 fixed records. The user will not be allowed to delete these 2 but may wish to add their own to extend their list. I need to be able to pull out the 'garden' category at certain times, e.g when creating a garden project.
A class attribute reader that returns the garden instance would do the job, but I would like to know how this can be improved with caching?
I believe memoization would only work per process which is almost pointless here. I would like it to be set once (perhaps the first time it's accessed or on app start-up) and just remain in cache for future use.
Example setup:
class Project < ActiveRecord::Base
belongs_to :category
end
class Category < SystemTable
  cattr_reader :garden

  def self.garden
    @@garden ||= self.find_by_name('garden')
  end
end
How about the approach shown below, which will retrieve the garden instance once when the Category class is loaded:
class Category < SystemTable
  GARDEN = self.find_by_name('garden')
end
Now whenever you need the garden category you can use Category::GARDEN.
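If a load-time constant is too rigid (a new deploy is needed to pick up changes), a memoized class method with a reset hook sits between the two approaches. A plain-Ruby sketch, where find_garden stands in for find_by_name('garden'):

```ruby
class Category
  # Memoized on the class; fetched at most once until reset.
  def self.garden
    @garden ||= find_garden
  end

  # Call this after the categories table changes to force a re-fetch.
  def self.reset_garden!
    @garden = nil
  end

  def self.find_garden
    # stand-in for self.find_by_name('garden')
    { name: 'garden' }
  end
end

Category.garden        # fetched once
Category.reset_garden! # next access re-fetches
```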
Interlock, a plugin that works with memcached, will automatically cache any instances fetched by a straight find using an id. Similarly, you can bypass Interlock and just cache the object in memcached manually. Class-level variables are a dirty and potentially bug-causing solution (and I'm not even sure they work). There's plenty of info on installing and using memcached/memcache-client on the web.

Ruby on Rails: Instantiate associated models with find_by_sql?

Apparently, include and select can't be used simultaneously on a Rails find query, and this has been repeatedly marked as wontfix:
http://dev.rubyonrails.org/ticket/7147
http://dev.rubyonrails.org/ticket/5371
This strikes me as very inconvenient, because the times I'd want to use include are exactly the same times I'd want to use select - when every bit of performance counts.
Is there any way to work around this and manually generate a combined include-with-select using find_by_sql, or any other method? The trouble is, I'm not aware of any way to emulate the functionality of include, where it instantiates models in memory to hold the included associated models, such that I can enter model1.associated_models and have it not hit the database again.
Have you considered creating a model for a database view? For example:
Create database view, with your complicated SQL query:
CREATE VIEW production_plan_items AS
  SELECT * FROM [...]
  INNER JOIN [...];
Create model for this view:
# app/view_model.rb
class ViewModel < ActiveRecord::Base
  self.abstract_class = true

  def readonly?
    true
  end

  def before_destroy
    raise ActiveRecord::ReadOnlyRecord
  end
end

# app/models/logical/production_plan_item.rb
module Logical
  class ProductionPlanItem < ::ViewModel
  end
end
Use as always, but remember that these records are READ ONLY!
Logical::ProductionPlanItem.where( ... )
If performance is still an issue in the future, you can quite easily convert the DB views to materialized views using triggers and stored procedures. This will give your application an enormous speed boost, and you don't have to change even one line of Rails code.
Enterprise Rails is highly recommended reading:
http://www.amazon.com/Enterprise-Rails-Dan-Chak/dp/0596515200/ref=sr_1_1?ie=UTF8&qid=1293140116&sr=8-1

From objects to tables using activerecord

I'm getting some objects from an external library and I need to store those objects in a database. Is there a way to create the tables and relationships starting from the objects, or I have to dig into them and create migrations and models by hand?
Thanks!
Roberto
Even if you could dynamically create tables on the fly like that (not saying that you can), I wouldn't want to do it. There is so much potential for error there.
I would create the migrations by hand and have the tables and fields pre-created and fill them in with rows as needed.
Note: This is a TERRIBLE hack and you'll be ironing out the bugs for years to come, but it is pretty easy:
This relies on ActiveSupport and ActiveRecord already being loaded.
Say you get a random object from a third-party library which has 2 instance variables; its class might look like this:
class Animal
  attr_accessor :name, :number_of_legs
end

a = SomeThirdPartyLibrary.get_animal
You can use reflection to figure out its name and columns:
table_name = a.class.to_s.tableize
column_names = a.instance_variables.map{ |n| n[1..-1] } # remove the @
column_types = a.instance_variables.map{ |n| a.instance_variable_get(n).class }.
               map{ |c| sql_type_for_class(c) } # go write sql_type_for_class please
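The sql_type_for_class helper is left as an exercise above; a minimal hypothetical version might map the common Ruby classes and fall back to :string:

```ruby
# Hypothetical mapping from Ruby value classes to migration column types.
# Extend for Date, BigDecimal, booleans, etc. as your objects require.
def sql_type_for_class(klass)
  {
    Integer => :integer,
    Float   => :float,
    String  => :string,
    Time    => :datetime
  }.fetch(klass, :string) # default to :string for unknown classes
end

sql_type_for_class(Integer) # => :integer
sql_type_for_class(Array)   # => :string (fallback)
```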
Then you can use ActiveRecord migrations to create your table, like this:
ActiveRecord::Migration.class_eval do
  create_table table_name do |t|
    column_names.zip(column_types).each do |colname, coltype|
      t.column colname, coltype
    end
  end
end
Then you can finally declare an activerecord class which will then interface with the just-created table.
# Note we declare a module so the new classes don't conflict with the existing ones
module GeneratedClasses; end
eval "class GeneratedClasses::#{a.class} < ActiveRecord::Base; end"
Presto!
Now you can do this:
a = GeneratedClasses::Animal.new
a.update_attributes whatever
a.save
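As an aside, the string eval can be avoided with Class.new plus const_set. Sketched here with a plain superclass so it runs standalone; in the hack above the generated class would inherit from ActiveRecord::Base instead:

```ruby
# Create GeneratedClasses::Animal without string eval.
module GeneratedClasses; end

klass = Class.new do
  attr_accessor :name, :number_of_legs
end

# const_set gives the anonymous class a proper name under the namespace.
GeneratedClasses.const_set(:Animal, klass)

a = GeneratedClasses::Animal.new
a.name = 'cat'
a.number_of_legs = 4
```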
PS: Don't do this!
Apart from being awful, if your rails app restarts it will lose all concept of the Generated Classes, so you'll need to devise some mechanism of persisting those too.
I have this exact situation. I have to read data external to the application, and the performance hit is so big that I store it locally. I have gone with a solution where I have, over time, developed a schema and migrations by hand that work with the data and allow me to persist it to the tables. I have developed a caching scheme that works for my data, and performance has increased significantly.
All that to say, I did everything by hand and I don't regret it. I can have confidence that my database is stable and that I am not re-creating db tables on the fly. Because of that, I have no concern about the stability of my application.
Depending on what you're trying to do with the objects, you can store objects directly into the database by serializing them.
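ActiveRecord's serialize :column does the object-to-text round trip for you (YAML by default). The core idea in plain Ruby, shown here with Marshal, where Animal is a stand-in for the external object:

```ruby
# Serialize an object to a string you could store in a text/blob column,
# then restore it when reading the row back.
Animal = Struct.new(:name, :number_of_legs)

a = Animal.new('dog', 4)
blob = Marshal.dump(a)        # what you'd write to the database
restored = Marshal.load(blob) # what you'd get back when reading the row

restored.name            # => "dog"
restored.number_of_legs  # => 4
```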
Try looking at some ORM solutions. Or store as XML.
