I am developing on Rails 4, with Oracle, PostgreSQL and MSSQL as target databases.
I am building a hierarchy of 4 models, and I need to display parent-child relationships through the same query whatever level I start from. This is not easy to pull off, and the key constraint is that no two objects may share the same ID.
The issue is that Rails maintains a dedicated ID sequence per table, so duplicate IDs across tables are guaranteed to appear.
How can I create a sequence that fills a unique_id field which remains unique across all my data?
Thanks for your help,
Best regards,
Fred
I finally found this solution:
1 - Create a sequence to be used by each of the models concerned
class CreateGlobalSequence < ActiveRecord::Migration
  def change
    execute "CREATE SEQUENCE global_seq INCREMENT BY 1 START WITH 1000"
  end
end
2 - Declare this sequence to be used for identity columns in each of the models concerned
class BusinessProcess < ActiveRecord::Base
  self.sequence_name = "global_seq"
  ...
end

class BusinessRule < ActiveRecord::Base
  self.sequence_name = "global_seq"
  ...
end
and so on. It works fine.
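To avoid repeating the declaration in every model, the same idea can be packaged as a module. This is only a sketch, assuming Rails 4's ActiveSupport::Concern and an adapter that consults sequence_name when generating ids (the Oracle adapter does; on PostgreSQL the id columns' defaults would also need to point at global_seq):
# Hypothetical shared concern: every including model draws its ids from global_seq.
module GlobalSequence
  extend ActiveSupport::Concern

  included do
    self.sequence_name = "global_seq"
  end
end

class BusinessProcess < ActiveRecord::Base
  include GlobalSequence
end

class BusinessRule < ActiveRecord::Base
  include GlobalSequence
end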
Rails is great!
Thanks for your help, and best regards,
Fred
The id column of each table is a unique identifier for that table's records only; it has no impact on the id columns of other tables.
I don't know why you need this, but you can achieve it to some extent, like below:
class CreateSimpleModels < ActiveRecord::Migration
  def self.up
    create_table :simple_models do |t|
      t.string :xyz
      t.integer :unique_id
      t.timestamps
    end
    execute "CREATE SEQUENCE simple_models_unique_id_seq OWNED BY simple_models.unique_id INCREMENT BY 1 START WITH 100000"
  end

  def self.down
    drop_table :simple_models
    execute "DROP SEQUENCE IF EXISTS simple_models_unique_id_seq"
  end
end
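Note that creating the sequence by itself does not fill the column; the column needs a default drawn from the sequence. A hedged addition to the same migration, assuming PostgreSQL syntax:
execute "ALTER TABLE simple_models ALTER COLUMN unique_id SET DEFAULT nextval('simple_models_unique_id_seq')"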
But once another model's own id sequence also passes 100000, you will start seeing duplicate values across models again.
The default id column has the identity attribute, which is stored per-table. If your models fit the bill for Single Table Inheritance you'd be able to define a custom id attribute on the base class. In your case since you said it's a hierarchy that might be the way to go.
The harder way (STI is a bit much to digest, but very powerful) is similar to what I'm working on right now: a shared PAN (Private Account Number, in this system) kept in its own table so it acts as a shared namespace.
class CreatePans < ActiveRecord::Migration
  def change
    create_table :pans do |t|
      t.string :PAN
      t.timestamps
    end
  end
end
class AddPanIdToCustomers < ActiveRecord::Migration
  def change
    add_column :customers, :pan_id, :integer
  end
end
The first migration adds the PAN table, the second adds the foreign key to the customers table. You'll also need to add the associations to the models (has_many :customers on Pan and belongs_to :pan on Customer, given the foreign key above; see the sketch below). You can then refer to the shared identity through the :pan_id attribute (however you name it). It's a roundabout way of doing things, but in my case business requirements force it, hacky as it is.
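For completeness, here is a rough sketch of the model side implied by that pan_id column (class names assumed from the migrations above):
class Pan < ActiveRecord::Base
  has_many :customers
end

class Customer < ActiveRecord::Base
  belongs_to :pan # customers.pan_id holds the shared identifier
end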
I have a Ruby on Rails 4 app using Devise, with a User model and a Deal model.
I am creating a user_deals table for a has_many/has_many (many-to-many) relationship between User and Deal.
Here is the migration
class CreateUserDeals < ActiveRecord::Migration
  def change
    create_table :user_deals do |t|
      t.belongs_to :user
      t.belongs_to :deal
      t.integer :nb_views
      t.timestamps
    end
  end
end
When a user loads a Deal (for example the Deal with id = 4), I use a method called show
controllers/deal.rb
#for the view of the Deal page
def show
end
In the view of this Deal id = 4 page, I need to display the number of views (nb_views) that Devise's current_user has for the deal page the user is currently on.
deal/show.html
here is the nb of views of user: <% current_user.#{deal_id}.nb_views%>
Let's say I have 10M+ user_deals rows; I wanted to know if I should use an index
add_index :user_deals, :user_id
add_index :user_deals, :deal_id
or maybe
add_index(:user_deals, [:user_id, :deal_id])
In other situations I would have said yes, but here I don't know how Rails works behind the scenes. It feels as if Rails knows what to do without me needing to speed things up, as if no SQL query (such as 'find nb_views WHERE user_id = X AND deal_id = Y') were issued when Rails loads this view, because I'm only using the logged-in current_user (via Devise) and Rails already knows deal_id, since we are on that very deal's show page and I just pass it as a parameter.
So do I need an index to speed it up or not?
Your question on indexes is a good one. Rails does generate SQL* to do its magic so the normal rules for optimising databases apply.
The magic of devise only extends to the current_user. It fetches their details with a SQL query which is efficient because the user table created by devise has helpful indexes on it by default. But these aren't the indexes you'll need.
Firstly, there's a neater, more idiomatic way to do what you're after:
class CreateUserDeals < ActiveRecord::Migration
  def change
    # NB: create_join_table names the table "deals_users" by default;
    # table_name: :user_deals keeps the name used in the question.
    create_join_table :users, :deals, table_name: :user_deals do |t|
      t.integer :nb_views
      t.index [:user_id, :deal_id]
      t.index [:deal_id, :user_id]
      t.timestamps
    end
  end
end
You'll notice that migration includes two indexes. If you never expect to build a view of all users for a given deal, then you won't need the second of those indexes. However, as @chiptuned says, indexing each foreign key is nearly always the right call. An index on an integer column costs few write resources but pays out big savings on read. It's a very low-cost default defensive position to take.
You'll have a better time and things will feel clearer if you put your data fetching logic in the controller. Also, you're showing a deal so it will feel right to make that rather than current_user the centre of your data fetch.
You can actually do this query without using the through association because you can do it without touching the users table. (You'll likely want that through association for other circumstances though.)
Just has_many :user_deals will do the job for this.
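For reference, a minimal sketch of the associations the controller code below assumes (model names taken from the question):
class Deal < ActiveRecord::Base
  has_many :user_deals
  has_many :users, through: :user_deals # handy elsewhere, not needed for this query
end

class UserDeal < ActiveRecord::Base
  belongs_to :user
  belongs_to :deal
end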
To best take advantage of the database engine and do this in one query your controller can look like this:
def show
  @deal = Deal.includes(:user_deals)
              .joins(:user_deals)
              .where("user_deals.user_id = ?", current_user.id)
              .find(params["deal_id"])
end
Then in your view...
I can get info about the deal: <%= @deal.description %>
And thanks to the includes I can get the user's nb_views without a separate SQL query:
<%= @deal.user_deals.first.nb_views %>
* If you want to see what SQL Rails is magically generating, just put .to_sql on the end of a relation, e.g. sql_string = current_user.deals.to_sql
Yes, you should use an index to speed up the querying of the user_deals records. Definitely at least on user_id, but probably both [:user_id, :deal_id] as you stated.
As for why you don't see a SQL query...
First off, your code in the view appears to be incorrect. nb_views lives on the join model, so assuming you have set up a has_many :user_deals association on your User class, it should be something like:
here is the nb of views of user: <%= current_user.user_deals.find_by(deal_id: deal_id).nb_views %>
If you see the right number showing up for nb_views, then a query should be made when the view is rendered, unless current_user.user_deals is already loaded earlier in the processing or you've got some kind of caching going on.
If Rails is "aware", there is some kind of reason behind it which you should figure out. Expected base Rails behavior is to have a SQL query issued there.
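One quick way to check, sketched here on the assumption that you can open a Rails console and that User has_many :user_deals: turn on SQL logging and watch what the lookup actually issues.
# In a Rails console (rails c); deal id 4 is just the example from the question.
ActiveRecord::Base.logger = Logger.new(STDOUT)
user = User.first
user.user_deals.find_by(deal_id: 4).nb_views # a SELECT on user_deals should be logged here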
Isn't a cleaner way of indexing the other tables something like this?
class CreateUserDeals < ActiveRecord::Migration
  def change
    create_table :user_deals do |t|
      t.references :user, index: true
      t.references :deal, index: true
      t.integer :nb_views
      t.timestamps
    end
  end
end
I'm writing a migration to add a column to a table. The value of the column depends on the values of two other existing columns. What is the best/fastest way to do this?
Currently I have this, but I'm not sure it's the best way since the groups table can be very large.
class AddColorToGroup < ActiveRecord::Migration
  def self.up
    add_column :groups, :color, :string
    Group.all.each do |g|
      c = if g.is_active && g.is_live
            "red"
          elsif g.is_active
            "green"
          else
            "orange"
          end
      g.update_attribute(:color, c)
    end
  end

  def self.down
  end
end
It's generally a bad idea to reference your models from your migrations like this. The problem is that the migrations run in order and change the database state as they go, but your models are not versioned at all. There's no guarantee that the model as it existed when the migration was written will still be compatible with the migration code in the future.
For example, if you change the behavior of the is_active or is_live attributes in the future, then this migration might break. This older migration is going to run first, against the new model code, and may fail. In your basic example here, it might not crop up, but this has burned me in deployment before when fields were added and validations couldn't run (I know your code is skipping validations, but in general this is a concern).
My favorite solution to this is to do all migrations of this sort using plain SQL. It looks like you've already considered that, so I'm going to assume you already know what to do there.
Another option, if you have some hairy business logic or just want the code to look more Railsy, is to include a basic version of the model as it exists when the migration is written in the migration file itself. For example, you could put this class in the migration file:
class Group < ActiveRecord::Base
end
In your case, that alone is probably sufficient to guarantee that the model will not break. Assuming active and live are boolean fields in the table at this time (and thus would be whenever this migration was run in the future), you won't need any more code at all. If you had more complex business logic, you could include it in this migration-specific version of model.
You might even consider copying whole methods from your model into the migration version. If you do that, bear in mind that you shouldn't reference any external models or libraries in your app from there, either, if there's any chance that they will change in the future. This includes gems and even possibly some core Ruby/Rails classes, because API-breaking changes in gems are very common (I'm looking at you, Rails 3.0, 3.1, and 3.2!).
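Putting those pieces together, here is a condensed sketch of what the question's migration could look like with a migration-local model (Rails >= 3.1 assumed for update_column; the is_active/is_live booleans are taken from the question):
class AddColorToGroup < ActiveRecord::Migration
  # Migration-local copy of the model: frozen in time, immune to future model changes.
  class Group < ActiveRecord::Base
    self.table_name = "groups"
  end

  def up
    add_column :groups, :color, :string
    Group.reset_column_information # pick up the column we just added

    Group.find_each do |g|
      color = if g.is_active && g.is_live
                "red"
              elsif g.is_active
                "green"
              else
                "orange"
              end
      g.update_column(:color, color) # writes directly, skipping validations and callbacks
    end
  end

  def down
    remove_column :groups, :color
  end
end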
I would highly suggest doing three queries in total instead. Always leverage the database rather than looping over a bunch of items in an array. I would think something like this could work.
For the purposes of writing this, I'll assume is_active checks a field active where 1 is active. I'll assume live is the same as well.
Rails 3 approach
class AddColorToGroup < ActiveRecord::Migration
  def self.up
    add_column :groups, :color, :string
    Group.where(active: 1, live: 1).update_all(color: "red")
    Group.where(active: 1, live: 0).update_all(color: "green")
    Group.where(active: 0).update_all(color: "orange")
  end
end
Feel free to review the documentation of update_all here.
Rails 2.x approach
class AddColorToGroup < ActiveRecord::Migration
  def self.up
    add_column :groups, :color, :string
    Group.update_all("color = 'red'", "active = 1 AND live = 1")
    Group.update_all("color = 'green'", "active = 1 AND live = 0")
    Group.update_all("color = 'orange'", "active = 0")
  end
end
Rails 2 documentation
I would do this in an after_create or after_save callback in your ActiveRecord model:
class Group < ActiveRecord::Base
  after_create :add_color

  private

  def add_color
    # a plain assignment here would not be persisted after creation, so write the column
    update_column(:color, "some color") # placeholder value: wherever you get the color from
  end
end
or in the migration you'd probably have to do some SQL like this:
execute('update groups set color = <another column>')
Here is an example in the Rails guides:
http://guides.rubyonrails.org/migrations.html#using-the-up-down-methods
In a similar situation I ended up adding the column using add_column and then using direct SQL to update the value of the column. I used direct SQL and not the model per Jim Stewart's answer, since then it doesn't depend on the current state of the model vs. the current state of the table based on migrations being run.
class AddColorToGroup < ActiveRecord::Migration
  def up
    add_column :groups, :color, :string
    execute "update groups set color = case when is_active and is_live then 'red' when is_active then 'green' else 'orange' end"
  end

  def down
    remove_column :groups, :color
  end
end
I want to create a model 'Relation' which extends ActiveRecord::Base, set its table name to 'questions_tags', and have no primary key. What should I do?
class Relation < ActiveRecord::Base
  set_table_name 'questions_tags' # set table name, right?
  # how to define 'no-pk'?
end
UPDATE
I know that create_table can solve this problem, but this is just what I want to know: what is the magic behind create_table(:id => false)? How can I get the same effect without using create_table(:id => false)?
Create a migration that looks like this:
class CreateQuestionsTags < ActiveRecord::Migration
  def self.up
    create_table :questions_tags, {:id => false, :force => true} do |t|
      ...
      t.timestamps
    end
  end

  def self.down
    drop_table :questions_tags
  end
end
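For the model side of the question, a rough sketch of what pairs with a table created with :id => false (newer ActiveRecord API assumed; set_table_name is deprecated from Rails 3.2 onwards):
class Relation < ActiveRecord::Base
  self.table_name = 'questions_tags'
  self.primary_key = nil # tell ActiveRecord there is no id column to manage
end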
If you're looking to create a join (pivot) table, as the table name suggests, then AR will handle that in the background.
However, if you're looking to create a table with more fields, then:
1) rename your table to "relations", and
2) use a primary key "id".
There's no good reason not to use a primary key in a table, and you may well find yourself regretting it later.
Why don't you want a PK?
Active Record expects a PK, and I don't see what harm it can do.
Let's say you have "lineitems" and you used to define a "make" off of a line_item.
Eventually you realize that a make should probably be on its own model, so you create a Make model.
You then want to remove the make column off of the line_items table but for every line_item with a make you want to find_or_create_by(line_item.make).
How would I effectively do this in a rails migration? I'm pretty sure I can just run some simple find_or_create_by for each line_item but I'm worried about fallback support so I was just posting this here for any tips/advice/right direction.
Thanks!
I guess you should check that Make.count equals the number of unique makes in line_items before removing the column, and raise an error if it does not. As migrations run inside a transaction (on databases with transactional DDL, such as PostgreSQL), if it blows up the schema isn't changed and the migration isn't marked as executed. Therefore, you could do something like this:
class CreateMakesAndMigrateFromLineItems < ActiveRecord::Migration
  def self.up
    create_table :makes do |t|
      t.string :name
      …
      t.timestamps
    end

    makes = LineItem.all.collect(&:make).uniq
    makes.each { |make| Make.find_or_create_by_name(make) }
    Make.count == makes.length ? remove_column(:line_items, :make) : raise("Boom!")
  end

  def self.down
    # You'll want to put logic here to take you back to how things were before. Just in case!
    drop_table :makes
    add_column :line_items, :make, :string
  end
end
You can put regular ruby code in your migration, so you can create the new table, run some code across the old model moving the data into the new model, and then delete the columns from the original model. This is even reversible so your migration will still work in both directions.
So for your situation, create the Make table and add a make_id to the lineitem. Then for each line item, find_or_create with the make column on lineitem, setting the returned id to the new make_id on lineitem. When you are done remove the old make column from the lineitem table.
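A hedged sketch of that approach as one migration (class and column names are guessed from the question; find_or_create_by is the Rails 4 form, older versions would use find_or_create_by_name):
class MoveMakeOntoOwnModel < ActiveRecord::Migration
  # Migration-local stubs so future model changes cannot break this file.
  class Make < ActiveRecord::Base; end
  class LineItem < ActiveRecord::Base; end

  def up
    create_table :makes do |t|
      t.string :name
      t.timestamps
    end
    add_column :line_items, :make_id, :integer

    LineItem.reset_column_information
    LineItem.where("make IS NOT NULL").find_each do |item|
      make = Make.find_or_create_by(name: item.make)
      item.update_column(:make_id, make.id)
    end

    remove_column :line_items, :make
  end

  def down
    add_column :line_items, :make, :string
    LineItem.reset_column_information
    LineItem.where("make_id IS NOT NULL").find_each do |item|
      item.update_column(:make, Make.find(item.make_id).name)
    end
    remove_column :line_items, :make_id
    drop_table :makes
  end
end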
I'd like to know the preferred way to add records to a database table in a Rails migration. I've read in Ola Bini's book (JRuby on Rails) that he does something like this:
class CreateProductCategories < ActiveRecord::Migration
  # defines the AR class
  class ProductType < ActiveRecord::Base; end

  def self.up
    # CREATE THE TABLES...
    load_data
  end

  def self.load_data
    # Use AR object to create default data
    ProductType.create(:name => "type")
  end
end
This is nice and clean, but for some reason it doesn't work on the latest versions of Rails...
The question is: how do you populate the database with default data (like users or something)?
Thanks!
The Rails API documentation for migrations shows a simpler way to achieve this.
http://api.rubyonrails.org/classes/ActiveRecord/Migration.html
class CreateProductCategories < ActiveRecord::Migration
  def self.up
    create_table "product_categories" do |t|
      t.string :name
      # etc.
    end

    # Now populate the category list with default data
    ProductCategory.create :name => 'Books', ...
    ProductCategory.create :name => 'Games', ... # etc.

    # The "down" method takes care of the data because it
    # drops the whole table.
  end

  def self.down
    drop_table "product_categories"
  end
end
Tested on Rails 2.3.0, but this should work for many earlier versions too.
You could use fixtures for that. It means having a yaml file somewhere with the data you want to insert.
Here is a changeset I committed for this in one of my apps:
db/migrate/004_load_profiles.rb
require 'active_record/fixtures'

class LoadProfiles < ActiveRecord::Migration
  def self.up
    down()
    directory = File.join(File.dirname(__FILE__), "init_data")
    Fixtures.create_fixtures(directory, "profiles")
  end

  def self.down
    Profile.delete_all
  end
end
db/migrate/init_data/profiles.yaml
admin:
  name: Admin
  value: 1

normal:
  name: Normal user
  value: 2
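If you are on Rails 4, the Fixtures constant has been renamed to ActiveRecord::FixtureSet; here is a sketch of the equivalent migration under that assumption (same directory layout as above):
require 'active_record/fixtures'

class LoadProfiles < ActiveRecord::Migration
  def up
    directory = File.join(File.dirname(__FILE__), "init_data")
    ActiveRecord::FixtureSet.create_fixtures(directory, "profiles")
  end

  def down
    Profile.delete_all
  end
end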
You could also define the data in your db/seeds.rb file, for instance:
Grid.create :ref_code => 'one', :name => 'Grade Única'
and afterwards run:
rake db:seed
Your migrations have access to all your models, so you shouldn't be creating a class inside the migration.
I am using the latest Rails, and I can confirm that the example you posted definitely OUGHT to work.
However, migrations are a special beast. As long as you are clear, I don't see anything wrong with an ActiveRecord::Base.connection.execute("INSERT INTO product_types (name) VALUES ('type1'), ('type2')").
The advantage to this is that you can easily generate it by using some kind of GUI or web front-end to populate your starting data, and then doing a mysqldump -u root database_name product_types.
Whatever makes things easiest for the kind of person who's going to be executing your migrations and maintaining the product.
You should really not use
ProductType.create
in your migrations.
I have done similar things, but in the long run they are not guaranteed to work.
When you run the migration, the model class you are using is the one that exists at the time you run the migration, not the one from when you wrote the migration. You have to be sure you never change your model in a way that stops your migration from running.
You are much better off running plain SQL, for example:
[{ name: 'Type' }].each do |type| # add further hashes / columns as needed
  execute("INSERT INTO product_types (name) VALUES ('#{type[:name]}')")
end