Rails: change database model on production environment - ruby-on-rails

I need to convert some data, but I'm not sure how to do it.
I have a Professional model that had a foreign key. We decided that wasn't enough and changed the "has many" to a HABTM association, so now, in the production environment, I need to convert the data from the foo_id column into the professional_foo join table.
The "add table" migration will run before the "drop column" one, but how should I set up the conversion, knowing that I have databases in use with the old structure, while new installations of the system will be set up straight on the latest code and therefore won't need any conversion at all? (The initialization scripts on the newest version are already fixed.)

I recommend doing the data conversion in a migration that sits between the "add table" and the "drop column" migrations. This migration should be run when the new code is deployed, so the new code will work against the new data structure; new installations that don't have any data yet will run the migration very quickly, since there won't be any data to convert.
The migration would look something like:
class OldProfessional < ActiveRecord::Base
  self.table_name = "professionals"
  has_many :foos
end

class NewProfessional < ActiveRecord::Base
  self.table_name = "professionals"
  has_and_belongs_to_many :foos
end

class MigrateFoosToHasAndBelongsToMany < ActiveRecord::Migration
  def up
    OldProfessional.all.each do |old_pro|
      new_pro = NewProfessional.find(old_pro.id)
      old_pro.foos.each do |foo|
        new_pro.foos << foo
      end
      new_pro.save!
    end
  end

  def down
    NewProfessional.all.each do |new_pro|
      old_pro = OldProfessional.find(new_pro.id)
      new_pro.foos.each do |foo|
        old_pro.foos << foo
      end
      old_pro.save!
    end
  end
end
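For completeness, the migrations on either side of that conversion could look roughly like this. This is only a sketch: the table and column names (foos_professionals, professional_id) are assumptions based on the associations above, so adjust them to whatever your schema actually uses (the question mentions a foo_id column and a professional_foo table).

# Runs before the conversion migration: create the HABTM join table
# (foos_professionals is Rails' default name for this association).
class CreateFoosProfessionalsJoinTable < ActiveRecord::Migration
  def up
    create_table :foos_professionals, id: false do |t|
      t.integer :professional_id
      t.integer :foo_id
    end
  end

  def down
    drop_table :foos_professionals
  end
end

# Runs after the conversion migration: drop the old foreign key column.
class RemoveOldForeignKeyFromFoos < ActiveRecord::Migration
  def up
    remove_column :foos, :professional_id
  end

  def down
    add_column :foos, :professional_id, :integer
  end
end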

Related

Ruby on Rails: how to generate an application-wide sequence?

I am developing on RoR 4, with Oracle, PostgreSQL and MSSQL as target databases.
I am building a hierarchy of 4 objects, for which I need to display parent-child relationships through the same query whatever level I start from. Not easy to figure out, but the hint is that none of the objects should have identical IDs.
The issue here is that Rails maintains a dedicated sequence for each table, so duplicate IDs are bound to appear.
How can I create a sequence to fill a unique_id field which remains unique for all my data?
Thanks for your help,
Best regards,
Fred
I finally found this solution:
1 - Create a sequence to be used by each of the concerned objects:

class CreateGlobalSequence < ActiveRecord::Migration
  def change
    execute "CREATE SEQUENCE global_seq INCREMENT BY 1 START WITH 1000"
  end
end

2 - Declare this sequence to be used for identity columns in each of the concerned models:

class BusinessProcess < ActiveRecord::Base
  self.sequence_name = "global_seq"
  ...
end

class BusinessRule < ActiveRecord::Base
  self.sequence_name = "global_seq"
  ...
end
and so on. It works fine.
Rails is great !
Thanks for your help, and best regards,
Fred
The id column of each table is a unique identifier for that table's records only; it has no relationship to the id columns of other tables.
I don't know why you need this, but you can achieve it to some extent, like below:
class CreateSimpleModels < ActiveRecord::Migration
  def self.up
    create_table :simple_models do |t|
      t.string :xyz
      t.integer :unique_id
      t.timestamps
    end
    execute "CREATE SEQUENCE simple_models_unique_id_seq OWNED BY
             simple_models.unique_id INCREMENT BY 1 START WITH 100000"
  end

  def self.down
    drop_table :simple_models
    execute "DROP SEQUENCE IF EXISTS simple_models_unique_id_seq"
  end
end
But once another table's id sequence passes 100,000 records, its values will start colliding with these unique_id values again.
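Note that OWNED BY only ties the sequence's lifetime to the column; it does not make the column fill itself. On PostgreSQL you would also need to set a column default, roughly like this (a PostgreSQL-specific sketch, not part of the original answer; Oracle and MSSQL need their own mechanisms):

class DefaultUniqueIdFromSequence < ActiveRecord::Migration
  def up
    # Pull each new unique_id from the shared sequence automatically on INSERT.
    execute "ALTER TABLE simple_models ALTER COLUMN unique_id SET DEFAULT nextval('simple_models_unique_id_seq')"
  end

  def down
    execute "ALTER TABLE simple_models ALTER COLUMN unique_id DROP DEFAULT"
  end
end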
The default id column has the identity attribute, which is tracked per table. If your models fit the bill for Single Table Inheritance, you'd be able to define a custom id attribute on the base class. In your case, since you said it's a hierarchy, that might be the way to go.
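A minimal sketch of the STI route (table and column names are illustrative, not from the question): because all four levels of the hierarchy share one table, they automatically draw their ids from a single per-table sequence.

class CreateNodes < ActiveRecord::Migration
  def change
    create_table :nodes do |t|
      t.string  :type       # STI discriminator column
      t.string  :name
      t.integer :parent_id  # parent-child link within the hierarchy
      t.timestamps
    end
  end
end

class Node < ActiveRecord::Base
  belongs_to :parent,   class_name: "Node"
  has_many   :children, class_name: "Node", foreign_key: :parent_id
end

# Each level of the hierarchy is a subclass stored in the same table.
class BusinessProcess < Node; end
class BusinessRule    < Node; end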
The harder way (STI is a bit much to digest, but very powerful) is along the lines of what I'm doing for a similar issue: a shared PAN (Private Account Number, in this system) in a shared namespace.
class CreatePans < ActiveRecord::Migration
  def change
    create_table :pans do |t|
      t.string :PAN
      t.timestamps
    end
  end
end

class AddPanIdToCustomers < ActiveRecord::Migration
  def change
    add_column :customers, :pan_id, :integer
  end
end
The first migration adds the ID table, the second adds the foreign key to the customers table. You'll also need to add the corresponding associations to the models: with pan_id on customers, that's has_many :customers on Pan and belongs_to :pan on Customer. You can then refer to their identity by the :pan_id attribute (however you name it). It's a roundabout way of doing things, but in my case business requirements force it - hacky as it is.
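Given that pan_id lives on customers, the model declarations would look something like this (a sketch inferred from the migrations above):

class Pan < ActiveRecord::Base
  has_many :customers
end

class Customer < ActiveRecord::Base
  belongs_to :pan  # stored in customers.pan_id
end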

add a database column with Rails migration and populate it based on another column

I'm writing a migration to add a column to a table. The value of the column is dependent on the value of two more existing columns. What is the best/fastest way to do this?
Currently I have this, but I'm not sure if it's the best way, since the groups table can be very large.
class AddColorToGroup < ActiveRecord::Migration
  def self.up
    add_column :groups, :color, :string
    Group.all.each do |g|
      c = if g.is_active && g.is_live
            "red"
          elsif g.is_active
            "green"
          else
            "orange"
          end
      g.update_attribute(:color, c)
    end
  end

  def self.down
  end
end
It's generally a bad idea to reference your models from your migrations like this. The problem is that the migrations run in order and change the database state as they go, but your models are not versioned at all. There's no guarantee that the model as it existed when the migration was written will still be compatible with the migration code in the future.
For example, if you change the behavior of the is_active or is_live attributes in the future, then this migration might break. This older migration is going to run first, against the new model code, and may fail. In your basic example here, it might not crop up, but this has burned me in deployment before when fields were added and validations couldn't run (I know your code is skipping validations, but in general this is a concern).
My favorite solution to this is to do all migrations of this sort using plain SQL. It looks like you've already considered that, so I'm going to assume you already know what to do there.
Another option, if you have some hairy business logic or just want the code to look more Railsy, is to include, in the migration file itself, a basic version of the model as it exists when the migration is written. For example, you could put this class in the migration file:
class Group < ActiveRecord::Base
end
In your case, that alone is probably sufficient to guarantee that the model will not break. Assuming active and live are boolean fields in the table at this time (and thus would be whenever this migration was run in the future), you won't need any more code at all. If you had more complex business logic, you could include it in this migration-specific version of model.
You might even consider copying whole methods from your model into the migration version. If you do that, bear in mind that you shouldn't reference any external models or libraries in your app from there, either, if there's any chance that they will change in the future. This includes gems and even possibly some core Ruby/Rails classes, because API-breaking changes in gems are very common (I'm looking at you, Rails 3.0, 3.1, and 3.2!).
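For example, a migration-local copy of the model with the relevant logic copied in might look like this (a sketch; color_for_state is a made-up helper, and is_active/is_live are taken from the question's code):

class AddColorToGroup < ActiveRecord::Migration
  # Frozen copy of Group as it exists when this migration is written;
  # the real app/models/group.rb can change later without breaking this.
  class Group < ActiveRecord::Base
    def color_for_state
      return "red"   if is_active && is_live
      return "green" if is_active
      "orange"
    end
  end

  def self.up
    add_column :groups, :color, :string
    Group.reset_column_information
    Group.all.each { |g| g.update_attribute(:color, g.color_for_state) }
  end

  def self.down
    remove_column :groups, :color
  end
end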
I would highly suggest doing three total queries instead. Always leverage the database vs. looping over a bunch of items in an array. I would think something like this could work.
For the purposes of writing this, I'll assume is_active checks a field called active, where 1 means active, and that live works the same way.
Rails 3 approach
class AddColorToGroup < ActiveRecord::Migration
  def self.up
    add_column :groups, :color, :string
    Group.where(active: 1, live: 1).update_all(color: "red")
    Group.where(active: 1, live: 0).update_all(color: "green")
    Group.where(active: 0, live: 0).update_all(color: "orange")
  end
end
Feel free to review the documentation of update_all here.
Rails 2.x approach
class AddColorToGroup < ActiveRecord::Migration
  def self.up
    add_column :groups, :color, :string
    Group.update_all("color = 'red'",    "active = 1 AND live = 1")
    Group.update_all("color = 'green'",  "active = 1 AND live = 0")
    Group.update_all("color = 'orange'", "active = 0 AND live = 0")
  end
end
Rails 2 documentation
I would do this in an after_create (or after_save) callback in your ActiveRecord model:
class Group < ActiveRecord::Base
  after_create :add_color

  private

  def add_color
    self.color = "..." # the color (wherever you get it from)
    save
  end
end
or in the migration you'd probably have to do some SQL like this:
execute('update groups set color = <another column>')
Here is an example in the Rails guides:
http://guides.rubyonrails.org/migrations.html#using-the-up-down-methods
In a similar situation I ended up adding the column using add_column and then using direct SQL to update the value of the column. I used direct SQL and not the model per Jim Stewart's answer, since then it doesn't depend on the current state of the model vs. the current state of the table based on migrations being run.
class AddColorToGroup < ActiveRecord::Migration
  def up
    add_column :groups, :color, :string
    execute "update groups set color = case when is_active and is_live then 'red' when is_active then 'green' else 'orange' end"
  end

  def down
    remove_column :groups, :color
  end
end

Migrating DATA - not just schema, Rails

Sometimes, data migrations are required. As time passes the code changes, and migrations that use your domain models are no longer valid and fail. What are the best practices for migrating data?
I tried to make an example to clarify the problem.
Consider this. You have a migration:
class ChangeFromPartnerAppliedToAppliedAt < ActiveRecord::Migration
  def up
    User.all.each do |user|
      user.applied_at = user.partner_application_at
      user.save
    end
  end
end
This runs perfectly fine, of course. Later, you need a schema change:
class AddAcceptanceConfirmedAt < ActiveRecord::Migration
  def change
    add_column :users, :acceptance_confirmed_at, :datetime
  end
end

class User < ActiveRecord::Base
  before_save :do_something_with_acceptance_confirmed_at
end
For you, no problem. It runs perfectly. But if your coworker pulls both of these today, not having run the first migration yet, he'll get this error when running the first migration:
rake aborted!
An error has occurred, this and all later migrations canceled:
undefined method `acceptance_confirmed_at=' for #<User:0x007f85902346d8>
That's not being a team player: he'll be the one fixing the bug you introduced. What should you have done?
This is a perfect example of the "Using Models in Your Migrations" approach:
class ChangeFromPartnerAppliedToAppliedAt < ActiveRecord::Migration
  class User < ActiveRecord::Base
  end

  def up
    User.all.each do |user|
      user.applied_at = user.partner_application_at
      user.save
    end
  end
end
Edited after Mischa's comment
class ChangeFromPartnerAppliedToAppliedAt < ActiveRecord::Migration
  class User < ActiveRecord::Base
  end

  def up
    User.update_all('applied_at = partner_application_at')
  end
end
Best practice is: don't use models in migrations. Migrations change the schema that ActiveRecord maps to, so don't use the models at all. Do it all with SQL. That way it will always work.
This:
User.all.each do |user|
  user.applied_at = user.partner_application_at
  user.save
end
I would do like this:
update "UPDATE users SET applied_at = partner_application_at"
Sometimes 'migrating data' can't be performed as part of a schema migration, as discussed above. Sometimes 'migrating data' means 'fix historical data inconsistencies' or 'update your Solr/Elasticsearch index', so it's a complex task. For these kinds of tasks, check out this gem: https://github.com/OffgridElectric/rails-data-migrations
This gem was designed to decouple Rails schema migrations from data migrations, so it won't cause downtime at deploy time and keeps things easy to manage overall.

Add Rows on Migrations

I'd like to know the preferred way to add records to a database table in a Rails migration. I've read in Ola Bini's book (JRuby on Rails) that he does something like this:
class CreateProductCategories < ActiveRecord::Migration
  # defines the AR class
  class ProductType < ActiveRecord::Base; end

  def self.up
    # CREATE THE TABLES...
    load_data
  end

  def self.load_data
    # Use the AR class to create default data
    ProductType.create(:name => "type")
  end
end
This is nice and clean, but for some reason it doesn't work on the latest versions of Rails...
The question is, how do you populate the database with default data (like users or something)?
Thanks!
The Rails API documentation for migrations shows a simpler way to achieve this.
http://api.rubyonrails.org/classes/ActiveRecord/Migration.html
class CreateProductCategories < ActiveRecord::Migration
  def self.up
    create_table "product_categories" do |t|
      t.string :name
      # etc.
    end
    # Now populate the category list with default data
    ProductCategory.create :name => 'Books', ...
    ProductCategory.create :name => 'Games', ... # etc.
    # The "down" method takes care of the data because it
    # drops the whole table.
  end

  def self.down
    drop_table "product_categories"
  end
end
Tested on Rails 2.3.0, but this should work for many earlier versions too.
You could use fixtures for that. It means having a YAML file somewhere with the data you want to insert.
Here is a changeset I committed for this in one of my apps:
db/migrate/004_load_profiles.rb
require 'active_record/fixtures'

class LoadProfiles < ActiveRecord::Migration
  def self.up
    down()
    directory = File.join(File.dirname(__FILE__), "init_data")
    Fixtures.create_fixtures(directory, "profiles")
  end

  def self.down
    Profile.delete_all
  end
end
db/migrate/init_data/profiles.yaml
admin:
  name: Admin
  value: 1

normal:
  name: Normal user
  value: 2
You could also define the data in your seeds.rb file, for instance:
Grid.create :ref_code => 'one', :name => 'Grade Única'
and afterwards run:
rake db:seed
Your migrations have access to all your models, so you shouldn't need to create a class inside the migration.
I am using the latest Rails, and I can confirm that the example you posted definitely OUGHT to work.
However, migrations are a special beast. As long as you are clear about it, I don't see anything wrong with an ActiveRecord::Base.connection.execute("INSERT INTO product_types (name) VALUES ('type1'), ('type2')").
The advantage of this is that you can easily generate it by using some kind of GUI or web front-end to populate your starting data, and then doing a mysqldump -uroot database_name product_types.
Whatever makes things easiest for the kind of person who's going to be executing your migrations and maintaining the product.
You should really not use ProductType.create in your migrations.
I have done similar things, but in the long run they are not guaranteed to work.
When you run the migration, the model class you are using is the one at the time you run the migration, not the one at the time you created the migration. You will have to be sure you never change your model in such a way that it stops your migration from running.
You are much better off running SQL, for example:

[{ name: 'Type' }].each do |type|  # add more hashes here for more rows
  execute("INSERT INTO product_types (name) VALUES ('#{type[:name]}')")
end

Rails model validators break earlier migrations

I have a sequence of migrations in a Rails app which includes the following steps:
1. Create a basic version of the 'user' model.
2. Create an instance of this model - there needs to be at least one initial user in my system so that you can log in and start using it.
3. Update the 'user' model to add a new field / column.
Now I'm using "validates_inclusion_of" on this new field/column. This worked fine on my initial development machine, which already had a database with these migrations applied. However, if I go to a fresh machine and run all the migrations, step 2 fails because validates_inclusion_of fails: the column from migration 3 hasn't been added to the model class yet.
As a workaround, I can comment out the "validates_..." line, run the migrations, and uncomment it, but that's not nice.
Better would be to re-order my migrations so the user creation (step 2) comes last, after all columns have been added.
I'm a rails newbie though, so I thought I'd ask what the preferred way to handle this situation is :)
The easiest way to avoid this issue is to use rake db:schema:load on the second machine, instead of db:migrate. rake db:schema:load uses schema.rb to load the most current version of your schema, as opposed to migrating it up from scratch.
If you run into this issue when deploying to a production machine (where preserving data is important), you'll probably have to consolidate your migrations into a single file without conflicts.
You can declare a class with the same name inside the migration; it will override the one in app/models:
class YourMigration < ActiveRecord::Migration
  class User < ActiveRecord::Base; end

  def self.up
    # User.create(:name => 'admin')
  end
end
Unfortunately, your IDE may try to autocomplete based on this class (Netbeans does) and you can't use your model logic in there (except if you duplicate it).
I'm having to do this right now. Building upon BiHi's advice, I'm loading the model manually then redefining methods where I need to.
load(File.join(RAILS_ROOT, "app/models/user.rb"))

class User < ActiveRecord::Base
  def before_validation; nil; end  # clear out the breaking before_validation
  def column1; "hello"; end        # satisfy validates_inclusion_of :column1
end
In your migration, you can save your user while skipping ActiveRecord validations:

class YourMigration < ActiveRecord::Migration
  def up
    user = User.new(name: 'admin')
    user.save(validate: false)
  end
end
