If we have a small table which contains relatively static data, is it possible to have Active Record load this in on startup of the app and never have to hit the database for this data?
Note that ideally I would like this data to be join-able from other models which have relationships to it.
An example might be a list of countries with their telephone number prefix - this list is unlikely to change, and if it did it would be changed by an admin. Other tables might have relationships with this (e.g. given a User who has a reference to the country, we might want to look up the country's telephone prefix).
I saw a similar question here, but it's 6 years old and refers to Rails 2, while I am using Rails 5 and maybe something has been introduced since then.
Preferred solutions would be:
Built-in Rails / ActiveRecord functionality to load a table once on startup, and if other records that have relationships with the cached table are subsequently loaded, link them to the cached objects automatically (i.e. manually caching MyModel.all somewhere is not sufficient, as relationships would still be loaded by querying the database).
Maintained library which does the above.
If neither are available, I suppose an alternative method would be to define the static dataset as an in-memory enum/hash or similar, persist the hash key on records which have a relationship to this data, and define methods on those models to look up the object in the hash using the key persisted in the database (a rough sketch is shown below). This seems quite manual though...
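For illustration, a minimal sketch of what option (3) might look like, assuming a hypothetical COUNTRIES hash and a country_code string column on users:

COUNTRIES = {
  "FR" => { name: "France",         phone_prefix: "+33" },
  "GB" => { name: "United Kingdom", phone_prefix: "+44" }
}.freeze

class User < ActiveRecord::Base
  # users.country_code is assumed to be a plain string column
  def country
    COUNTRIES[country_code]
  end

  def country_phone_prefix
    country && country[:phone_prefix]
  end
end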
[EDIT]
One other thing to consider with potential solutions: the manual solution (3) would also require custom controllers and routes for such data to be accessible over an API. Ideally, such data could be offered up via a RESTful API (read-only, just GET) using standard Rails mechanisms like scaffolding, without too much manual intervention.
I think you may be discounting the "easy" / "manual" approach too quickly.
Writing the data to a ruby hash / array isn't that bad an idea.
And if you want to use a CRUD scaffold, why not just use the standard Rails model / controller generator? Is it really so bad to store some static data in the database?
A third option would be to store your data in a file in some serialized format and then, when your app loads, read this and construct ActiveRecord objects. Let me show an example:
data.yml
---
- a: "1"
  b: "1"
- a: "2"
  b: "2"
This is a YAML file containing an array of hashes; you can construct such a file with:
require 'yaml'

File.open("data.yml", "w") do |f|
  data = [
    { "a" => "1", "b" => "1" },
    { "a" => "2", "b" => "2" }
  ]
  f.write(YAML.dump(data))
end
Then to load the data, you might create a file in config/initializers/ (everything in this directory is loaded automatically when Rails boots):
config/initializers/static_data.rb
require 'yaml'

# define a constant that can be used by the rest of the app
StaticData = YAML.load(File.read("data.yml")).map do |object|
  MyObjectClass.new(object)
end
To avoid having to write database migrations for MyObjectClass (when it's not actually being stored in the db) you can use attr_accessor definitions for your attributes:
class MyObjectClass < ActiveRecord::Base
  # say these are your two columns
  attr_accessor :a, :b
end
Just make sure not to call methods like save, delete, or update on this model (unless you monkeypatch those methods).
If you want to have REST / CRUD endpoints, you'd need to write them from scratch because the way to change data is different now.
You'd basically need to do any update in a three-step process (sketched after the list below):
load the data from YAML into a Ruby object list
change the Ruby object list
serialize everything to YAML and save it.
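A rough sketch of that cycle, reusing the data.yml file from above:

require 'yaml'

data = YAML.load(File.read("data.yml"))   # 1. load the whole file into a Ruby object list
data << { "a" => "3", "b" => "3" }        # 2. change the Ruby object list
File.write("data.yml", YAML.dump(data))   # 3. serialize everything and write it back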
So you can see you're not really doing incremental updates here. You could use JSON instead of YAML and you'd have the same problem. With Ruby's built-in storage system PStore you would be able to update objects on an individual basis, but using SQL for a production web app is a much better idea and will honestly make things simpler.
Moving beyond these "serialized data" options, there are key-value storage servers that store data in memory, such as Memcached and Redis.
But to go back to my earlier point, unless you have a good reason not to use SQL you're only making things more difficult.
It sounds like FrozenRecord would be a good match for what you are looking for.
Active Record-like interface for read only access to static data files of reasonable size.
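For example, a minimal sketch of how FrozenRecord might be wired up, based on the gem's documented YAML-backed approach (paths and attribute names here are illustrative):

# config/initializers/frozen_record.rb
FrozenRecord::Base.base_path = Rails.root.join("config/data")

# app/models/country.rb
class Country < FrozenRecord::Base
end

# config/data/countries.yml would then contain entries like:
# - id: 1
#   name: "France"
#   phone_prefix: "+33"

# giving an Active Record-like, read-only interface:
# Country.find(1)
# Country.find_by(name: "France").phone_prefix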
I have a Rails application and a JSON file which contains the data. Instead of creating a table for a model, I want to point my model at that JSON file and be able to treat that file like a table. Please suggest how.
If I understand you correctly, you want a model that uses a JSON file as its "backend DB" instead of a normal DB?
In order to get a Rails model to point to a JSON file, you would need to use a JSON DB adapter, and I'm not sure if there is one for Rails.
Rails uses what are called "adapters", where the same code:
Model.find(1)
will work on any DB (PostgreSQL, MySQL, SQLite3, etc.) because the Model.find() method has been implemented by each adapter.
This allows the interface for the developer to remain the same as long as the adapter implements it.
It avoids the problem where every DB creator implements a different interface, and now everyone has to learn those particular methods (convention for the win!).
All that said, I can't find a JSON DB adapter, so if you want that functionality you'll have to read a JSON file and search against it.
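A rough, hedged sketch of that read-and-search approach (the file path and keys are made up for illustration):

require 'json'

# load and parse the JSON file once
COUNTRIES = JSON.parse(File.read(Rails.root.join("db", "data", "countries.json")))

# then search against the parsed array in plain Ruby
def find_country_by_code(code)
  COUNTRIES.find { |country| country["code"] == code }
end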
However, if you're talking about using client-side storage with a JSON file, this isn't possible, because the client side only understands JavaScript and a Ruby model (class) lives on the backend server. They don't directly talk to each other. In that case, you'll have to implement a JavaScript model that maps to the JSON data.
MySQL as in-memory database
There is a way to use MySQL in-memory with Rails; you then load your data from the JSON file at the start and dump it back out at the end (or after commits).
In-memory DB adapter
https://github.com/maccman/supermodel exists but looks dead (4 years old). Maybe you find others?
Rolling it yourself with Nulldb
Use https://github.com/nulldb/nulldb to throw away all SQL statements and register some hooks (after_save etc.) to store the records in some hash. You then load that hash into memory at the start and dump it out to JSON later.
Separating concerns
My favourite approach, maybe too late if you have lots of working code already:
Separate your active-record code from your actual domain model. That means, if you today have a model class Stuff < ActiveRecord::Base, then separate that into class StuffAR < ActiveRecord::Base and class Stuff.
Stuff then contains an instance of StuffAR.
Using the proper ruby mechanisms (i.e., BasicObject::method_missing), you can delegate all calls from Stuff to StuffAR by default.
You now have complete control over your data and can do whatever with it.
(This is only one way to do it; depending on how dynamic/flexible you want to be, and whether you need a real DB part-time, you can do it differently, e.g. class Stuff < StuffAR at one extreme, or explicitly coded methods which call StuffAR instead of a generic method_missing. Stuff is a PORO (plain old Ruby object) now and you use StuffAR just for the DB contact.)
In this approach, be careful not to use Stuff like an AR object. I.e., do not use Stuff.where(name: 'xyz') from outside, but create domain methods for that (e.g., in this example, Stuff.find_by_name(...)).
Yes, this is coding overhead, but it does wonders to improve your code when your models become big and unwieldy after a time.
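A hedged sketch of that separation, with made-up class names and a name column purely for illustration:

class StuffAR < ActiveRecord::Base
  self.table_name = "stuffs"
end

class Stuff
  def initialize(record = StuffAR.new)
    @record = record
  end

  # domain method instead of exposing the AR query interface directly
  def self.find_by_name(name)
    new(StuffAR.find_by(name: name))
  end

  private

  # delegate everything else to the wrapped AR object by default
  def method_missing(name, *args, &block)
    @record.public_send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    @record.respond_to?(name, include_private) || super
  end
end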
Don't need AR at all?
If you do only want to use JSON ever, and never use a real DB, then do the same as before, just leave StuffAR out. It's just PORO then.
I think you have to import your JSON file into a database (e.g. SQLite3) to handle it as a table.
The other workaround would be:
Create a JSON importer for your model which reads the JSON and fills an array of users.
If you do that, you'll have to write the whole searching/ordering by yourself in plain ruby.
I don't know what your current circumstances are, but if you would like to change some data or add data, I suggest using a simple and lightweight database like SQLite3.
I want to save settings for my users, and some of them should be one out of a predefined list. I'm using https://github.com/ledermann/rails-settings at the moment.
The setting for e.g. weight_unit would be one of [:kg, :lb].
I don't really want to hardcode that stuff into controller or view code.
It's kind of a common functionality, so I was wondering: Did anyone come up with some way of abstracting that business into class constants or the database in a DRY fashion?
Usually, when I have to store some unimportant information which I don't need to query individually, I store it in a serialized column.
In your case you could create a new column in your users table (for example, call it "settings").
After that you add this to your User model:
serialize :settings, Hash
from this moment you can put whatever you like into settings, for example
user.settings = {:weight_unit => :kg, :other_setting1 => 'foo', :other_setting2 => 'bar'}
and after saving with user.save the serialized data will be stored in the settings column.
Rails also deserializes it, so after fetching a user's record, calling user.settings returns all saved settings for that user.
To get more information on serialize() refer to docs: http://api.rubyonrails.org/classes/ActiveRecord/AttributeMethods/Serialization/ClassMethods.html#method-i-serialize
UPDATE1
To ensure that settings are in the predefined list you can use validations on your user model.
UPDATE2
Usually, if there are some pre-defined values, it's a good habit to store them in a constant inside the related model; this way you have access to them both inside and outside the model. Acceptable values do not change per instance, so it makes sense to share them between all instances. An example is worth more than any words. Define this in your User model:
ALLOWED_SETTINGS = {:weight_unit => [:kg, :lb],
                    :eyes_color => [:green, :blue, :brown, :black],
                    :hair_length => [:short, :long]}
you can use it BOTH
outside the model itself, doing
User::ALLOWED_SETTINGS
inside your model (in validations, instance methods or wherever you want) using:
ALLOWED_SETTINGS
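Putting the two updates together, a hedged sketch of how the validation could look (the custom validation method is made up for illustration):

class User < ActiveRecord::Base
  ALLOWED_SETTINGS = {:weight_unit => [:kg, :lb],
                      :eyes_color => [:green, :blue, :brown, :black],
                      :hair_length => [:short, :long]}

  serialize :settings, Hash

  validate :settings_must_be_allowed

  private

  # check every stored key/value pair against ALLOWED_SETTINGS
  def settings_must_be_allowed
    (settings || {}).each do |key, value|
      allowed = ALLOWED_SETTINGS[key]
      if allowed.nil?
        errors.add(:settings, "#{key} is not a recognised setting")
      elsif !allowed.include?(value)
        errors.add(:settings, "#{key} must be one of #{allowed.inspect}")
      end
    end
  end
end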
Based on your question, it sounds like these are more configuration options that a particular user will choose from that may be quite static, rather than dynamic in nature in that the options can change over time. For example, I doubt you'll be adding various other weight_units other than :kg and :lb, but it's possible I'm misreading your question.
If I am reading this correctly, I would recommend (and have used) a yml file in the config/ directory for values such as this. The yml file is accessible app wide and all your "settings" could live in one file. These could then be loaded into your models as constants, and serialized as #SDp suggests. However, I tend to err on the side of caution, especially when thinking that perhaps these "common values" may want to be queried some day, so I would prefer to have each of these as a column on a table rather than a single serialized value. The overhead isn't that much more, and you would gain a lot of additional built-in benefits from Rails having them be individual columns.
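A minimal sketch of that config/ YAML idea, assuming a hypothetical config/settings.yml file:

# config/initializers/load_settings.rb
# load the allowed values once at boot into an app-wide constant
APP_SETTINGS = YAML.load_file(Rails.root.join("config", "settings.yml")).freeze

# config/settings.yml might look like:
# weight_units:
#   - kg
#   - lb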
That said, I have personally used hstore with Postgres with great success, doing just what you are describing. However, the reason I chose to use an hstore over individual columns was because I was storing multiple different demographics, in which all of the demographics could change over time (e.g. some keys could be added, and more importantly, some keys could be removed.) It sounds like in your case it's highly unlikely you'll be removing keys as these are basic traits, but again, I could be wrong.
TL;DR - I feel that unless you have a compelling reason (such as regularly adding and/or removing keys/settings), these should be individual columns on a database table. If you strongly feel these should be stored in the database serialized, and you're using Postgres, check out hstore.
If you are using PostgreSQL, I think you can look at hstore with Rails 4 plus this gem: https://github.com/devmynd/hstore_accessor
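A hedged sketch based on that gem's documented usage; the column and key names are illustrative, and it assumes a Postgres hstore column (e.g. add_column :users, :settings, :hstore):

class User < ActiveRecord::Base
  # typed accessors backed by the settings hstore column
  hstore_accessor :settings,
                  weight_unit: :string,
                  hair_length: :string
end

# user.weight_unit = "kg"
# user.save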
I have a standard rails application, that uses a mysql database through Active Record, with data loaded through a separate parsing process from a rather large XML file.
This was all well and good, but now I need to load data from an Oracle database, rather than the XML file.
I have no control how the database looks, and only really need a fraction of the data it contains (maybe one or two columns out of a few tables). As such, what I really want to do is make a call to the database, get data back, and put the data in the appropriate locations in my existing, Rails friendly mysql database.
How would I go about doing this? I've heard you can (on a model-by-model basis) specify different databases for Rails models to use, but that sounds like they use them in their entirety (that is, the database is Rails friendly). Can I make direct Oracle calls? Is there a process that makes this easier? Can Active Record itself handle this?
A toy example:
If I need to know color, price, and location for an Object, then normally I would parse a huge XML file to get this information. Now, with oracle, color, price, and location are all in different tables, indexed by some ID (there isn't actually an "Object" table). I want to pull all this information together into my Rails model.
Edit: Sounds like what I'd heard about was ActiveRecord's "establish_connection" method...and it does indeed seem to assume one model is mapped to one table in the target database, which isn't true in my case.
Edit Edit: Ah, looks like I might be wrong there. "establish_connection" might handle my situation just fine (I just have to get Oracle working in the first place, and then I'll know for sure... if anyone can help, the question is here).
You can create a connection to Oracle directly and then have ActiveRecord execute a raw SQL statement to query your tables (plural). Off the top of my head, something like this:
class OracleModel < ActiveRecord::Base
  establish_connection(:oracle_development)

  def self.get_objects
    self.find_by_sql("SELECT...")
  end
end
With this model you can do OracleModel.get_objects which will return a set of records whereby the columns specified in the SELECT SQL statement are attributes of each OracleModel. Obviously you can probably come up with a more meaningful model name than I have!
Create an entry named :oracle_development in your config/database.yml file with your Oracle database connection details.
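As a hedged sketch, such an entry might look like the following, assuming the activerecord-oracle_enhanced-adapter gem; all connection details shown are placeholders:

# config/database.yml (illustrative values only)
oracle_development:
  adapter: oracle_enhanced
  host: oracle.example.com
  port: 1521
  database: ORCL
  username: readonly_user
  password: secret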
This may not be exactly what you are looking for, but it seems to cover your situation pretty well: http://pullmonkey.com/2008/4/21/ruby-on-rails-multiple-database-connections/
It looks like you can make an arbitrarily-named database configuration in the database.yml file, and then have certain models connect to it like so:
class SomeModel < ActiveRecord::Base
  establish_connection :arbitrary_database
  # other stuff for your model
end
So, the solution would be to make ActiveRecord models for just the tables you want data from in this other database. Then, if you really want to get into some SQL, use ActiveRecord::Base.connection.execute(sql). If you need the results as actual ActiveRecord objects, do SomeModel.find_by_sql(sql).
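For instance, a rough sketch along those lines (table and column names are invented for illustration):

# raw result rows, in whatever structure the adapter returns
rows = ActiveRecord::Base.connection.execute("SELECT id, color FROM legacy_colors")

# or hydrated into SomeModel instances with the selected columns as attributes
colors = SomeModel.find_by_sql("SELECT id, color FROM legacy_colors")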
Hope this helps!
I don't have enough points to edit your question, but it sounds like what you really need is to have another "connection pool" available to the second DB -- I don't think Oracle itself will be a problem.
Then, you need to use these alternate connections to "simply" execute a custom query within the appropriate controller method.
If you only need to pull data from your Oracle database, and if you have any ability to add objects to a schema that can see the data you require...
I would simplify things by creating a view on the Oracle table that projects the data you require in a nice friendly shape for ActiveRecord.
This would mean maintaining code to two layers of the application, but I think the gain in clarity on the client-side would outweigh the cost.
You could also use the CREATE OR REPLACE VIEW Object AS SELECT tab1.*, tab2.* FROM tab1, tab2 syntax so the view returns every column in each table.
If you need to Insert or Update changes to your Rails model, then you need to read up on the restrictions for doing Updates through a view.
(Also, you may need to search on getting Oracle to work with Rails as you will potentially need to install the Oracle client software and additional Ruby modules).
Are you talking about a one-time data conversion or some permanent data exchange between your application and the Oracle database? I think you shouldn't involve Rails in this. You could just make a SQL query to the Oracle database, extract the data, and then insert it into the MySQL database.
I am developing a Ruby on Rails website and I have an "architectural" question : my application needs some parameters and I'm wondering where to store them.
In concrete terms, my application receives some requests which are evaluated and then sent. So, the Request model must have attributes concerning these treatments: a validation status and a sending status. For instance, the validation status can be "accepted", "rejected" or "waiting". The sending status can be "sent", "waiting", "error during sending" or stuff like that. I have to store those status code parameters somewhere, but I don't know what the best solution is.
I could create a model for each one and store them in the database (and have an Active Record model ValidationStatus, for instance), but wouldn't it be a bit excessive to create a database table/model for storing data like that?
I could also just use them in the code without "storing" them, or I could store them in a YAML file...
So, a simpler question: how do you deal with your application parameters in RoR?
There are lots of global configuration plugins, most of them revolve around the idea of loading a YAML file at some point. Check this page, this plugin and even this Railscast.
I put them in the database. I have a lot of these, and they are all pretty straightforward lists of strings. The tables are all the same - id, name, description.
I generate models for them rather than having an actual model file for each one. In app/models I have a file called active_record_enums.rb, which in your case would look something like this:
ACTIVE_RECORD_ENUMS = %w{
  ValidationStatus
  SendingStatus
}

ACTIVE_RECORD_ENUMS.each do |classname|
  eval "class #{classname} < ActiveRecord::Base; end"
  classname.constantize.class_eval do
    # Add useful methods - id_for(name) and value_for(id) are handy
  end
end
This file has to be required in a config file somewhere; other than that it's pretty straightforward.
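As a hedged usage sketch, assuming a validation_statuses table with the id/name/description columns described above and a Request record to update:

accepted = ValidationStatus.find_by_name("accepted")
request.validation_status_id = accepted.id
request.save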
(Having since viewed that Railscast mentioned above [episode 85], it looks a bit more like 'the Rails way' than the approach below.)
Another approach is to build on the existing configuration mechanism in Rails.
Lets presume there are two types of configs:
App wide configs common to dev/test/prod environments
Configs specific to envrionments dev/test/prod
For the first scenario, items in "RAILS_ROOT + '/config/environment.rb'" work. Just make sure that the names are capitalised so they are Ruby constants. A variation on this is to have a reference to another file in that environment.rb file ...
require RAILS_ROOT + '/config/appConfigCommon.rb'
and place relevant config items in that file. This has the advantage of being able to be referenced independently of Rails.
For scenario 2, a similar approach can be taken. Place items for development in "RAILS_ROOT + '/config/environments/development.rb'" or something like
require RAILS_ROOT + '/config/environments/appConfigDev.rb'
and place environment-specific items in that required file, making sure they start with caps. And follow the same pattern for test/prod (and others if need be).
The config items are directly accessible in views and controllers (not sure about models) by simply using the constant name.
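For instance, a minimal sketch of the kind of file being described (the constant names are illustrative):

# config/appConfigCommon.rb
# app-wide settings shared by dev/test/prod; constants must start with a capital
VALIDATION_STATUSES = %w[accepted rejected waiting]
SUPPORT_EMAIL = "support@example.com"

# these can then be referenced directly, e.g. VALIDATION_STATUSES.include?(status)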
I am not using Ruby, but I will tell you that I started out (in ASP.NET) placing lots of settings in a Web.Config file (similar to a YAML file). As time went on, though, the system evolved to the point where different instances needed different settings. So, almost all of them have migrated to the database. So... if you'll be deploying multiple instances of your site, I'd strongly recommend keeping settings in a table of your database (mine has just one record, with fields for various settings). If I had done this to start, I'd have saved a significant amount of time.
I tend to just make a string column for each, and use validates_inclusion_of to set what's acceptable for them.
class Request < ActiveRecord::Base
  validates_inclusion_of :validation_status, :in => ['accepted', 'rejected', 'waiting']
  validates_inclusion_of :sending_status, :in => ['sent', 'waiting', '...']
end
If you need to have things happen (i.e. emails sent) when the status changes, look into using the Acts As State Machine plugin to manage it.