I have a Ruby on Rails 4 app, using Devise, with a User model and a Deal model.
I am creating a user_deals table for a many-to-many relationship between User and Deal.
Here is the migration:
class CreateUserDeals < ActiveRecord::Migration
  def change
    create_table :user_deals do |t|
      t.belongs_to :user
      t.belongs_to :deal
      t.integer :nb_views
      t.timestamps
    end
  end
end
When a user loads a Deal (for example Deal id=4), I use a method called show.
controllers/deal.rb
# for the view of the Deal page
def show
end
In the view of this Deal id=4 page, I need to display the number of views (nb_views) that Devise's current_user has for the Deal they are currently on.
deal/show.html
here is the nb of views of user: <% current_user.#{deal_id}.nb_views%>
Let's say I have 10M+ user_deals rows. I wanted to know if I should use an index:
add_index :user_deals, :user_id
add_index :user_deals, :deal_id
or maybe
add_index :user_deals, [:user_id, :deal_id]
Indeed, in other situations I would have said yes, but here I don't know how Rails works behind the scenes. It feels as if Rails knows what to do without me needing to speed up the process, as if loading this view involved no SQL query (such as 'find the nb of views WHERE user_id = X AND deal_id = Y'): I'm only asking for the logged-in user (via Devise's current_user), and Rails already knows the deal_id since we are on that very deal's show page, so I just pass it as a parameter.
So do I need an index to speed it up or not?
Your question on indexes is a good one. Rails does generate SQL* to do its magic, so the normal rules for optimising databases apply.
The magic of Devise only extends to current_user. It fetches the user's details with a SQL query, which is efficient because the users table created by Devise has helpful indexes on it by default. But those aren't the indexes you'll need here.
Firstly, there's a neater, more idiomatic way to do what you're after:
class CreateUserDeals < ActiveRecord::Migration
  def change
    # table_name keeps the table as user_deals
    # (create_join_table would otherwise default to deals_users)
    create_join_table :users, :deals, table_name: :user_deals do |t|
      t.integer :nb_views
      t.index [:user_id, :deal_id]
      t.index [:deal_id, :user_id]
      t.timestamps
    end
  end
end
You'll notice that migration includes two indexes. If you never expect to list all users for a given deal, you won't need the second one. However, as @chiptuned says, indexing each foreign key is nearly always the right call. An index on an integer column costs little on writes but pays out big savings on reads. It's a very low-cost, defensive default.
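If you keep the user_deals table you already created, the indexes can go in a follow-up migration instead. A minimal sketch (the unique option assumes a user should have at most one row per deal):

class AddIndexesToUserDeals < ActiveRecord::Migration
  def change
    # One composite index serves lookups by user_id alone and by (user_id, deal_id) together
    add_index :user_deals, [:user_id, :deal_id], unique: true
    # A separate index for queries that start from the deal side
    add_index :user_deals, :deal_id
  end
end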
You'll have a better time, and things will feel clearer, if you put your data-fetching logic in the controller. Also, since you're showing a deal, it will feel natural to make the deal, rather than current_user, the centre of your data fetch.
You can actually do this query without the :through association, because it never needs to touch the users table. (You'll likely want the :through association for other circumstances, though.) A plain has_many :user_deals will do the job here.
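For reference, a sketch of the associations this implies (adjust to your app):

class User < ActiveRecord::Base
  has_many :user_deals
  has_many :deals, through: :user_deals
end

class Deal < ActiveRecord::Base
  has_many :user_deals
  has_many :users, through: :user_deals
end

class UserDeal < ActiveRecord::Base
  belongs_to :user
  belongs_to :deal
end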
To best take advantage of the database engine and do this in one query your controller can look like this:
def show
  @deal = Deal.includes(:user_deals)
              .joins(:user_deals)
              .where("user_deals.user_id = ?", current_user.id)
              .find(params["deal_id"])
end
Then in your view...
I can get info about the deal: <%= @deal.description %>
And thanks to the includes I can get the user's nb_views without a separate SQL query (user_deals is a collection, filtered here to the current user, so take its first row):
<%= @deal.user_deals.first.nb_views %>
* If you want to see what SQL Rails is magically generating, just put .to_sql on the end, e.g. sql_string = current_user.deals.to_sql or @deal.to_sql
Yes, you should use an index to speed up the querying of the user_deals records. Definitely at least on user_id, but probably the composite [:user_id, :deal_id] as you suggested.
As for why you don't see a SQL query...
First off, your code in the view appears to be incorrect. nb_views lives on the join record, so assuming your User class has has_many :user_deals, it should be something like:
here is the nb of views of user: <%= current_user.user_deals.find_by(deal_id: deal_id).nb_views %>
If you see the right number showing up for nb_views, then a query should be made when the view is rendered, unless current_user.user_deals was already loaded earlier in the processing or you've got some kind of caching going on.
If Rails seems "aware", there is some reason behind it which you should figure out. The expected base Rails behavior is to issue a SQL query there.
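If you want to avoid that per-render query, one sketch (assuming User has_many :user_deals and the show route passes params[:id]) is to fetch the join row once in the controller:

def show
  @deal = Deal.find(params[:id])
  # One indexed lookup; nil if the current user has no row for this deal
  @user_deal = current_user.user_deals.find_by(deal_id: @deal.id)
end

Then the view can simply render <%= @user_deal.nb_views if @user_deal %>.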
Isn't a cleaner way to set up these foreign-key columns something like this:
class CreateUserDeals < ActiveRecord::Migration
  def change
    create_table :user_deals do |t|
      t.references :user
      t.references :deal
      t.integer :nb_views
      t.timestamps
    end
  end
end
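Note that in Rails 4, t.references sets up the integer column but does not add an index by default; if indexing is the goal, ask for it explicitly:

create_table :user_deals do |t|
  t.references :user, index: true
  t.references :deal, index: true
  t.integer :nb_views
  t.timestamps
end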
I am developing on RoR 4, with Oracle, PostgreSQL and MSSQL as target databases.
I am building a hierarchy of 4 objects, for which I need to display parent-child relationships through the same query whatever level I start from. Not easy to figure out, but the key constraint is that no two objects may share the same ID.
The issue here is that Rails maintains a dedicated id sequence for each object, so duplicate IDs will appear for sure.
How can I create a sequence to fill a unique_id field which remains unique across all my data?
Thanks for your help,
Best regards,
Fred
I finally found this solution:
1 - Create a sequence to be used by each of the concerned objects:
class CreateGlobalSequence < ActiveRecord::Migration
  def change
    execute "CREATE SEQUENCE global_seq INCREMENT BY 1 START WITH 1000"
  end
end
2 - Declare this sequence to be used for the identity columns in each of the concerned models:
class BusinessProcess < ActiveRecord::Base
  self.sequence_name = "global_seq"
  ...
end

class BusinessRule < ActiveRecord::Base
  self.sequence_name = "global_seq"
  ...
end
and so on. It works fine.
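For illustration (the attribute names here are made up), ids drawn from the shared sequence interleave rather than collide:

bp  = BusinessProcess.create!(name: "Invoicing") # e.g. id 1000
br  = BusinessRule.create!(name: "Net 30")       # e.g. id 1001
bp2 = BusinessProcess.create!(name: "Shipping")  # e.g. id 1002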
Rails is great!
Thanks for your help, and best regards,
Fred
The id column of each table is a unique identifier for that table's records only; it has no impact on the id columns of other tables.
I don't know why you need this, but you can achieve it to some extent, like below:
class CreateSimpleModels < ActiveRecord::Migration
  def self.up
    create_table :simple_models do |t|
      t.string :xyz
      t.integer :unique_id
      t.timestamps
    end
    execute "CREATE SEQUENCE simple_models_unique_id_seq OWNED BY simple_models.unique_id INCREMENT BY 1 START WITH 100000"
  end

  def self.down
    drop_table :simple_models
    # OWNED BY means the sequence goes with the table; IF EXISTS keeps this safe
    execute "DROP SEQUENCE IF EXISTS simple_models_unique_id_seq"
  end
end
But note that once the other models' own id sequences pass 100000, their values can coincide with these unique_ids again.
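If you want the column to fill itself, a possible follow-up (PostgreSQL syntax; a sketch, not verified against the other target databases) is to attach the sequence as the column default:

class SetUniqueIdDefaultOnSimpleModels < ActiveRecord::Migration
  def up
    execute "ALTER TABLE simple_models ALTER COLUMN unique_id SET DEFAULT nextval('simple_models_unique_id_seq')"
  end

  def down
    execute "ALTER TABLE simple_models ALTER COLUMN unique_id DROP DEFAULT"
  end
end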
The default id column has the identity attribute, which is stored per table. If your models fit the bill for Single Table Inheritance, you'd be able to define a custom id attribute on the base class. In your case, since you said it's a hierarchy, that might be the way to go.
The harder way (STI is a bit much to digest, but very powerful) involves what I'm doing for a similar issue: a shared PAN (Private Account Number in this system) in a shared namespace.
class CreatePans < ActiveRecord::Migration
  def change
    create_table :pans do |t|
      t.string :PAN
      t.timestamps
    end
  end
end

class AddPanIdToCustomers < ActiveRecord::Migration
  def change
    add_column :customers, :pan_id, :integer
  end
end
The first migration adds the ID table, the second adds the foreign key to the customers table. You'll also need to add the relationships to the models: has_many :pans on Customer and belongs_to :customer on Pan. You can then refer to the identity by the :pan_id attribute (however you name it). It's a roundabout way of doing things, but in my case business requirements force it, hacky as it is.
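The models might then look like this (a minimal sketch):

class Pan < ActiveRecord::Base
  belongs_to :customer
end

class Customer < ActiveRecord::Base
  has_many :pans
end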
For example, in this migration I have a one-to-many relation: "Category has many Subcategories". If I don't add add_index :subcategories, :category_id, it works anyway.
class CreateSubcategories < ActiveRecord::Migration
  def change
    create_table :subcategories do |t|
      t.string :nombre
      t.string :descripcion
      t.integer :category_id
      t.timestamps
    end
    add_index :subcategories, :category_id
  end
end
To validate the foreign key I use this:
validates :category, presence: true
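For completeness, a minimal sketch of the two models this implies:

class Category < ActiveRecord::Base
  has_many :subcategories
end

class Subcategory < ActiveRecord::Base
  belongs_to :category
  validates :category, presence: true
end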
It's advisable to add an index on such a column, since presumably you will be performing many lookups across the two tables. In relational-database terms, category_id is a foreign key on the subcategories table which references the id column of the categories table. You'll find more information in the Wikipedia article on database indexes (quickest available reference).
Sure, you could skip creating the index for this column, but you'd pay a performance penalty eventually. It would work without the index, but presumably you also want an application that is good usability-wise, and usability includes performance. When your tables grow large, you'll start noticing that data retrieval involving joins across two or more tables (categories and subcategories in this case) gets relatively slow.
One can argue that there is a cost to maintaining an index, i.e. the DBMS has to perform extra writes. So it really comes down to your workload: whether you do more data retrieval or more data writes. If you have more reads, definitely go for the index; if you expect mostly writes and few reads, and you feel the application can live with that (less likely), then you could skip it.
Given that you're validating the category's presence, which itself performs a lookup, I would definitely add the index.
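If you want to see the effect for yourself, you can ask the database for its query plan before and after adding the index (PostgreSQL shown; syntax and output vary by database):

# From the rails console: a sequential scan before the index
# should become an index scan after it
plan = ActiveRecord::Base.connection.execute(
  "EXPLAIN SELECT * FROM subcategories WHERE category_id = 1"
)
plan.each { |row| puts row }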
I'm writing a migration to add a column to a table. The value of the column depends on the values of two other existing columns. What is the best/fastest way to do this?
Currently I have this, but I'm not sure it's the best way, since the groups table can be very large.
class AddColorToGroup < ActiveRecord::Migration
  def self.up
    add_column :groups, :color, :string
    Group.all.each do |g|
      c = if g.is_active && g.is_live
            "red"
          elsif g.is_active
            "green"
          else
            "orange"
          end
      g.update_attribute(:color, c)
    end
  end

  def self.down
  end
end
It's generally a bad idea to reference your models from your migrations like this. The problem is that the migrations run in order and change the database state as they go, but your models are not versioned at all. There's no guarantee that the model as it existed when the migration was written will still be compatible with the migration code in the future.
For example, if you change the behavior of the is_active or is_live attributes in the future, then this migration might break. This older migration is going to run first, against the new model code, and may fail. In your basic example here, it might not crop up, but this has burned me in deployment before when fields were added and validations couldn't run (I know your code is skipping validations, but in general this is a concern).
My favorite solution to this is to do all migrations of this sort using plain SQL. It looks like you've already considered that, so I'm going to assume you already know what to do there.
Another option, if you have some hairy business logic or just want the code to look more Railsy, is to include a basic version of the model as it exists when the migration is written in the migration file itself. For example, you could put this class in the migration file:
class Group < ActiveRecord::Base
end
In your case, that alone is probably sufficient to guarantee that the model will not break. Assuming is_active and is_live are boolean fields in the table at this time (and thus would be whenever this migration is run in the future), you won't need any more code at all. If you had more complex business logic, you could include it in this migration-specific version of the model.
You might even consider copying whole methods from your model into the migration version. If you do that, bear in mind that you shouldn't reference any external models or libraries in your app from there, either, if there's any chance that they will change in the future. This includes gems and even possibly some core Ruby/Rails classes, because API-breaking changes in gems are very common (I'm looking at you, Rails 3.0, 3.1, and 3.2!).
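Putting those pieces together, a sketch of how the migration-local model can drive the data update (assuming is_active and is_live are boolean columns, per the question):

class AddColorToGroup < ActiveRecord::Migration
  # Frozen copy: insulated from future changes to app/models/group.rb
  class Group < ActiveRecord::Base
    self.table_name = "groups"
  end

  def up
    add_column :groups, :color, :string
    Group.reset_column_information # make the new column visible to this copy
    Group.where(is_active: true, is_live: true).update_all(color: "red")
    Group.where(is_active: true, is_live: false).update_all(color: "green")
    Group.where(is_active: false).update_all(color: "orange")
  end

  def down
    remove_column :groups, :color
  end
end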
I would highly suggest doing three total queries instead; always leverage the database rather than looping over a bunch of items in an array. Something like this should work.
For the purposes of writing this, I'll assume is_active checks a field active where 1 is active. I'll assume live is the same as well.
Rails 3 approach
class AddColorToGroup < ActiveRecord::Migration
  def self.up
    add_column :groups, :color, :string
    Group.where(active: 1, live: 1).update_all(color: "red")
    Group.where(active: 1, live: 0).update_all(color: "green")
    Group.where(active: 0).update_all(color: "orange")
  end
end
Feel free to review the Rails documentation for update_all.
Rails 2.x approach
class AddColorToGroup < ActiveRecord::Migration
  def self.up
    add_column :groups, :color, :string
    Group.update_all("color = 'red'",    "active = 1 AND live = 1")
    Group.update_all("color = 'green'",  "active = 1 AND live = 0")
    Group.update_all("color = 'orange'", "active = 0")
  end
end
Rails 2 documentation
I would do this in an after_create or after_save callback in your ActiveRecord model:
class Group < ActiveRecord::Base
  attr_accessor :color
  after_create :add_color

  private

  def add_color
    self.color = ... # the color (wherever you get it from)
  end
end
or in the migration you'd probably have to do some SQL like this:
execute('update groups set color = <another column>')
Here is an example in the Rails guides:
http://guides.rubyonrails.org/migrations.html#using-the-up-down-methods
In a similar situation I ended up adding the column using add_column and then using direct SQL to update its value. I used direct SQL rather than the model, per Jim Stewart's answer, since then it doesn't depend on the current state of the model versus the current state of the table as determined by which migrations have run.
class AddColorToGroup < ActiveRecord::Migration
  def up
    add_column :groups, :color, :string
    execute "update groups set color = case when is_active and is_live then 'red' when is_active then 'green' else 'orange' end"
  end

  def down
    remove_column :groups, :color
  end
end
I'm attempting to design an achievement system in Ruby on Rails and have run into a snag with my design/code.
Attempting to use polymorphic associations:
class Achievement < ActiveRecord::Base
  belongs_to :achievable, :polymorphic => true
end

class WeightAchievement < ActiveRecord::Base
  has_one :achievement, :as => :achievable
end
Migrations:
class CreateAchievements < ActiveRecord::Migration
  ... # code
  create_table :achievements do |t|
    t.string :name
    t.text :description
    t.references :achievable, :polymorphic => true
    t.timestamps
  end

  create_table :weight_achievements do |t|
    t.integer :weight_required
    t.references :exercises, :null => false
    t.timestamps
  end
  ... # code
end
Then, when I run the following throw-away unit test, it fails, saying that the achievement is nil.
test "parent achievement exists" do
weightAchievement = WeightAchievement.find(1)
achievement = weightAchievement.achievement
assert_not_nil achievement
assert_equal 500, weightAchievement.weight_required
assert_equal achievement.name, "Brick House Baby!"
assert_equal achievement.description, "Squat 500 lbs"
end
And my fixtures:
achievements.yml:
BrickHouse:
  id: 1
  name: Brick House
  description: Squat 500 lbs
  achievable: BrickHouseCriteria (WeightAchievement)
weight_achievements.yml:
BrickHouseCriteria:
  id: 1
  weight_required: 500
  exercises_id: 1
Even though I can't get this to run, maybe in the grand scheme of things it's a bad design. What I'm attempting to do is have a single table with all the achievements and their base information (name and description). Using that table and polymorphic associations, I want to link to other tables that contain the criteria for completing each achievement; e.g. the weight_achievements table holds the weight required and an exercise id. A user's progress will then be stored in a UserProgress model, which links to the actual Achievement (as opposed to WeightAchievement).
The reason I need the criteria in separate tables is that the criteria vary wildly between different types of achievements and will be added dynamically afterwards, which is why I'm not creating a separate model for each achievement.
Does this even make sense? Should I just merge the Achievement table with each specific type of achievement like WeightAchievement (so the table is name, description, weight_required, exercise_id), and then, when a user queries the achievements, simply search all the achievement types in code (WeightAchievement, EnduranceAchievement, RepAchievement, etc.)?
The way achievement systems generally work is that there are a large number of various achievements that can be triggered, and there's a set of triggers that can be used to test wether or not an achievement should be triggered.
Using a polymorphic association is probably a bad idea, because loading in all the achievements to run through and test them could end up being a complicated exercise. There's also the fact that you'll have to figure out how to express the success or failure conditions in some kind of table, and in a lot of cases you might end up with a definition that does not map so neatly. You could end up with sixty different tables to represent all the different kinds of triggers, and that sounds like a nightmare to maintain.
An alternative approach would be to define your achievements in terms of name, value and so on, and have a constants table which acts as a key/value store.
Here's a sample migration:
create_table :achievements do |t|
  t.string :name
  t.integer :points
  t.text :proc
end

create_table :trigger_constants do |t|
  t.string :key
  t.integer :val
end

create_table :user_achievements do |t|
  t.integer :user_id
  t.integer :achievement_id
end
The achievements.proc column contains the Ruby code you evaluate to determine whether the achievement should be triggered. Typically this gets loaded in, wrapped, and ends up as a utility method you can call:
class Achievement < ActiveRecord::Base
  def proc
    @proc ||= eval("Proc.new { |user| #{read_attribute(:proc)} }")
  rescue
    nil # You might want to raise here, rescue in ApplicationController
  end

  def triggered_for_user?(user)
    # Double-negation returns true/false only, not nil
    proc and !!proc.call(user)
  rescue
    nil # You might want to raise here, rescue in ApplicationController
  end
end
The TriggerConstant class defines various parameters you can tweak:
class TriggerConstant < ActiveRecord::Base
  def self.[](key)
    # Make a direct SQL call here to avoid the overhead of a model
    # that will be immediately discarded anyway. You can use
    # ActiveSupport::Memoizable.memoize to cache this if desired.
    connection.select_value(sanitize_sql(["SELECT val FROM `#{table_name}` WHERE key=?", key.to_s]))
  end
end
Having the raw Ruby code in your DB means that it is easier to adjust the rules on the fly without having to redeploy the application, but this might make testing more difficult.
A sample proc might look like:
user.max_weight_lifted > TriggerConstant[:brickhouse_weight_required]
If you want to simplify your rules, you might create something that expands $brickhouse_weight_required into TriggerConstant[:brickhouse_weight_required] automatically. That would make the rules more readable for non-technical people.
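One way to do that expansion (a hypothetical helper, not part of the schema above):

# Rewrite $identifiers into TriggerConstant lookups before the eval
def expand_constants(code)
  code.gsub(/\$([a-z_]+)/) { "TriggerConstant[:#{$1}]" }
end

expand_constants("user.max_weight_lifted > $brickhouse_weight_required")
# => "user.max_weight_lifted > TriggerConstant[:brickhouse_weight_required]"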
To avoid putting the code in your DB, which some people may find to be in bad taste, you will have to define these procedures independently in some bulk procedure file, and pass in the various tuning parameters by some kind of definition. This approach would look like:
module TriggerConditions
  def max_weight_lifted(user, options)
    user.max_weight_lifted > options[:weight_required]
  end
end
Adjust the Achievement table so that it stores information on what options to pass in:
create_table :achievements do |t|
  t.string :name
  t.integer :points
  t.string :trigger_type
  t.text :trigger_options
end
In this case trigger_options is a mapping (a hash) that is stored serialized. An example might be:
{ :weight_required => :brickhouse_weight_required }
Combining all this, you get a somewhat simplified, less eval-happy outcome:
class Achievement < ActiveRecord::Base
  serialize :trigger_options

  # Import the conditions which are defined in a separate module
  # to avoid cluttering up this file.
  include TriggerConditions

  def triggered_for_user?(user)
    # Convert the options into actual values by swapping in
    # the equivalent values from TriggerConstant
    options = trigger_options.inject({}) do |h, (k, v)|
      h[k] = TriggerConstant[v]
      h
    end

    # Return the result of the evaluation with these options
    !!send(trigger_type, user, options)
  rescue
    nil # You might want to raise here, rescue in ApplicationController
  end
end
You'll often have to strobe through a whole pile of Achievement records to see if they've been achieved, unless you have a mapping table that defines, in loose terms, what kinds of records each trigger tests. A more robust implementation would let you define specific classes to observe for each Achievement, but this basic approach should at least serve as a foundation.
Let's say you have "lineitems" and you used to define a "make" off of a line_item.
Eventually you realize that a make should probably be on its own model, so you create a Make model.
You then want to remove the make column from the line_items table, but for every line_item with a make you want to find_or_create_by(line_item.make).
How would I effectively do this in a Rails migration? I'm pretty sure I can just run a simple find_or_create_by for each line_item, but I'm worried about rollback support, so I'm posting this here for any tips/advice/right direction.
Thanks!
I guess you should check that Make.count equals the number of unique makes in line_items before removing the column, and raise an error if it does not. As migrations are transactional (on databases that support transactional DDL), if it blows up the schema isn't changed and the migration isn't marked as executed. So you could do something like this:
class CreateMakesAndMigrateFromLineItems < ActiveRecord::Migration
  def self.up
    create_table :makes do |t|
      t.string :name
      …
      t.timestamps
    end

    makes = LineItem.all.collect(&:make).uniq
    makes.each { |make| Make.find_or_create_by_name make }
    Make.count == makes.length ? remove_column(:line_items, :make) : raise("Boom!")
  end

  def self.down
    # You'll want to put logic here to take you back to how things were before. Just in case!
    add_column :line_items, :make, :string
    drop_table :makes
  end
end
You can put regular Ruby code in your migration, so you can create the new table, run some code that moves the data from the old model into the new one, and then delete the column from the original table. This is even reversible, so your migration will still work in both directions.
So for your situation: create the Make table and add a make_id to line_items. Then, for each line item, find_or_create a Make from the line item's make column, setting the returned id as the new make_id on the line item. When you are done, remove the old make column from the line_items table.
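A sketch of what that could look like as one reversible migration (class and column names assumed from the question; Rails 3-era API):

class ExtractMakeFromLineItems < ActiveRecord::Migration
  # Migration-local copies so future model changes can't break this migration
  class Make < ActiveRecord::Base; end
  class LineItem < ActiveRecord::Base; end

  def up
    create_table :makes do |t|
      t.string :name
      t.timestamps
    end
    add_column :line_items, :make_id, :integer
    add_index :line_items, :make_id
    LineItem.reset_column_information

    LineItem.where("make IS NOT NULL").find_each do |item|
      make = Make.where(name: item.make).first_or_create
      item.update_column(:make_id, make.id)
    end

    remove_column :line_items, :make
  end

  def down
    add_column :line_items, :make, :string
    LineItem.reset_column_information

    LineItem.where("make_id IS NOT NULL").find_each do |item|
      make = Make.find_by_id(item.make_id)
      item.update_column(:make, make.name) if make
    end

    remove_column :line_items, :make_id
    drop_table :makes
  end
end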