I have some data that needs to be edited online (i.e. it cannot be a seed), but that also needs to be the same in all environments. So far the only thing I have found is the has_alter_ego gem, but it no longer seems to be maintained.
Example:
I make many changes to the default_settings table in my development database.
I would like to transfer only these changes from my development to my production database (and not the other tables, which contain test data).
I would rather not use a seed unless there is a way to edit seeds from the web.
One option that I'm considering is having a separate database.
Anyone have a clean solution to this problem? Thanks!
How about you define a second sqlite3 database which gets checked in with your app for just this table, and use it for all three environments? For example, the sqlite3 file could be named other_db.sqlite3:
config/database.yml:
... (your other settings for dev, test, and prod databases)
other_db:
  database: db/other_db.sqlite3
  adapter: sqlite3
  timeout: 5000
app/models/external.rb:
class External < ActiveRecord::Base
  self.abstract_class = true
  establish_connection :other_db
end
app/models/cross_environment_data.rb:
class CrossEnvironmentData < External
  ...
end
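With this in place, every environment reads and writes the same checked-in file. A quick usage sketch (the key/value columns here are hypothetical; adjust to whatever the shared table actually holds):
# Both calls hit db/other_db.sqlite3 regardless of RAILS_ENV.
CrossEnvironmentData.create!(:key => "support_email", :value => "help@example.com")
CrossEnvironmentData.find_by_key("support_email").value # => "help@example.com"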
In this case (though thoroughly discouraged), I would put it in a migration.
/db/migrate/_edit_data.rb
class EditData < ActiveRecord::Migration
  def self.up
    # make your data edits here
  end

  def self.down
    # undo your data edits here
  end
end
Then:
rake db:migrate
Now you have the same data across all environments.
For example, suppose you wanted to append some text to the text column of every Post record:
script/generate migration append_text
/db/migrate/_append_text.rb
class AppendText < ActiveRecord::Migration
  def self.up
    # find_each loads records in batches rather than all at once
    Post.find_each { |p| p.update_attribute(:text, "#{p.text} my additional text") }
  end

  def self.down
    raise ActiveRecord::IrreversibleMigration
  end
end
When you run:
rake db:migrate
on development, the changes will be applied there; when you deploy to your other environments, run the same command and they will receive the same changes. Treat it as an irreversible migration, because you won't be able to verify that the data hasn't changed since the time of migration, so make sure this is what you want to do (run it on dev first) =)
NEW SOLUTION:
https://github.com/ricardochimal/taps
when you want to transfer a list of tables:
$ taps push postgres://dbuser:dbpassword@localhost/dbname http://httpuser:httppassword@example.com:5000 --tables logs,tags
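The reverse direction works too. A sketch, assuming the same credentials: start a taps server next to the source database, then pull from the other machine:
$ taps server postgres://dbuser:dbpassword@localhost/dbname httpuser httppassword
$ taps pull postgres://dbuser:dbpassword@localhost/dbname http://httpuser:httppassword@example.com:5000 --tables logs,tags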
Rails 3.2 app running on Heroku with Postgres.
I added an index add_index :lines, :event_id
Events have_many Lines. There are about 2 million Lines and a million Events.
I pushed to Heroku and migrated.
Does it take time? Does it slow things down at first?
It blocks INSERT, UPDATE, and DELETE operations on your lines table until the index build is finished. So yes, if you haven't added your index concurrently, it may have a severe effect on your Heroku database.
For zero-downtime migrations, create indexes concurrently. In ActiveRecord 4 or higher it can be done as follows:
class AddIndexToAsksActive < ActiveRecord::Migration
  # By default, ActiveRecord migrations run inside a transaction.
  # A concurrent index build cannot, so we disable the transaction
  # for this migration.
  disable_ddl_transaction!

  def change
    add_index :asks, :active, algorithm: :concurrently
  end
end
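Note that algorithm: :concurrently is PostgreSQL-specific: it maps to CREATE INDEX CONCURRENTLY, which builds the index without blocking writes, at the cost of a slower build.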
Here is a good article on the subject
Indexing large data fields in Rails 6 (large here meaning more than about 3000 characters): index an MD5 hash of the field instead of the raw value.
Create a migration in Rails:
def change
  execute "CREATE INDEX index_table_on_field ON table(MD5(field));"
end
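The database will only use an expression index like this when the query repeats the indexed expression. A lookup sketch, with a hypothetical Document model and body column (the CREATE INDEX syntax above is PostgreSQL's expression-index form):
# Matches the MD5(field) expression in the index, so the planner can use it.
Document.where("MD5(body) = MD5(?)", long_text)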
I am trying to use a secondary database connection for some of my migrations in the following way:
# app/models/staging/migration.rb
class Staging::Migration < ActiveRecord::Migration
  def self.connection
    ActiveRecord::Base.establish_connection(:staging_db).connection
  end
end
# db/migrate/<timestamp>_create_foo.rb
class CreateFoo < Staging::Migration
  ....
end
In my database.yml the staging_db connection is configured.
When I run rake db:migrate, the table foo is created correctly in the staging_db schema, and the table schema_migrations is created in the RAILS_ENV=development connection. However db:migrate reports the following error (which fails subsequent migrations):
Table 'staging_db.schema_migrations' doesn't exist
Is there a way to tell Staging::Migration to look for the schema_migrations table in the current RAILS_ENV connection?
BTW, I am aware of the fact that staging_db is then not RAILS_ENV-aware. This is fine for me since every server has its environment configured through a separate database.yml which is not in my repo.
You should try doing this before your first migration in the staging_db:
ActiveRecord::Base.connection.initialize_schema_migrations_table
This will create a schema_migrations table in the staging db. If that is not what you want, you will have to adjust a few other things. The schema_migrations_table_name method determines which table holds the migration versions:
def schema_migrations_table_name
  Base.table_name_prefix + 'schema_migrations' + Base.table_name_suffix
end
So if you define a table_name_prefix, it is applied to the schema_migrations table name as well, which you can use to make the lookup point at the staging db.
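A sketch of that trick, assuming MySQL-style cross-database naming where another database on the same server is addressable as staging_db.table_name:
# Hypothetical: makes schema_migrations_table_name resolve to
# "staging_db.schema_migrations". Beware: the prefix applies to every
# model's table name, not just schema_migrations.
ActiveRecord::Base.table_name_prefix = "staging_db."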
I have already done this, though outside of Rails; it should not be very different in Rails. Here is how I do it:
The first thing is to connect your database before your migrations are executed; in Rails the best place may be an initializer:
MyModel.establish_connection({
  :adapter  => "mysql2",
  :database => "mydb",
  :username => "root",
  :encoding => 'utf8'
})
The hash will usually be loaded from a YAML file, but this is the result you want in the end.
MyModel can be an abstract class if you have multiple models in this database.
Next in your migration when you want to migrate this database you just have to do this:
class DoSomething < ActiveRecord::Migration
  def self.connection
    MyModel.connection
  end

  def self.up
    add_column [...]
  end
end
One thing to note when doing things this way is that there will be only one schema_migrations table and it will be in the "main" database.
I'm working with SQLite3 for local dev; the production DB is MySQL.
I have a migration file for a column change.
class ChangeDateToOrders < ActiveRecord::Migration
  def self.up
    change_column(:orders, :closed_date, :datetime)
  end

  def self.down
    change_column(:orders, :closed_date, :date)
  end
end
Errors out saying index name 'temp_index_altered_orders_on_closed_location_id_and_parent_company_id' on table 'altered_orders' is too long; the limit is 64 characters
I know there is a limitation on index name length with SQLite, but is there a workaround for this?
EDIT
Workaround I used.
class ChangeDateToOrders < ActiveRecord::Migration
  def self.up
    remove_index(:orders, [:closed_location_id, :parent_company_id])
    change_column(:orders, :closed_date, :datetime)
    add_index(:orders, [:closed_location_id, :parent_company_id], :name => "add_index_to_orders_cli_pci")
  end

  def self.down
    remove_index(:orders, :name => "add_index_to_orders_cli_pci")
    change_column(:orders, :closed_date, :date)
    add_index(:orders, [:closed_location_id, :parent_company_id])
  end
end
Personally, I like my production and development environments to match as much as possible. It helps avoid gotchas. If I were deploying on MySQL, I would run my development environment with MySQL too. Besides, I am also not super familiar with SQLite, so this approach appeals to my lazy side: I only need to know the ins and outs of one db.
You could hack your copy of ActiveRecord: around line 535 (in version 3.2.9) of $GEM_HOME/gems/activerecord-3.2.9/lib/active_record/connection_adapters/sqlite_adapter.rb, add the following:
opts[:name] = opts[:name][0..63] # can't be more than 64 chars long
It's a hack but it might get you past a hurdle. If I had more time, I'd look in to writing a test and sending a pull request to rails core team.
I'd like to know the preferred way to add records to a database table in a Rails migration. I've read in Ola Bini's book (JRuby on Rails) that he does something like this:
class CreateProductCategories < ActiveRecord::Migration
  # define the AR class
  class ProductType < ActiveRecord::Base; end

  def self.up
    # create the tables...
    load_data
  end

  def self.load_data
    # use the AR class to create the default data
    ProductType.create(:name => "type")
  end
end
This is nice and clean, but for some reason it doesn't work on the latest versions of Rails...
The question is, how do you populate the database with default data (like users or something)?
Thanks!
The Rails API documentation for migrations shows a simpler way to achieve this.
http://api.rubyonrails.org/classes/ActiveRecord/Migration.html
class CreateProductCategories < ActiveRecord::Migration
  def self.up
    create_table "product_categories" do |t|
      t.string :name
      # etc.
    end
    # Now populate the category list with default data.
    ProductCategory.create :name => 'Books', ...
    ProductCategory.create :name => 'Games', ... # etc.
    # The "down" method takes care of the data because it
    # drops the whole table.
  end

  def self.down
    drop_table "product_categories"
  end
end
Tested on Rails 2.3.0, but this should work for many earlier versions too.
You could use fixtures for that. It means having a YAML file somewhere with the data you want to insert.
Here is a changeset I committed for this in one of my apps:
db/migrate/004_load_profiles.rb
require 'active_record/fixtures'

class LoadProfiles < ActiveRecord::Migration
  def self.up
    down()
    directory = File.join(File.dirname(__FILE__), "init_data")
    Fixtures.create_fixtures(directory, "profiles")
  end

  def self.down
    Profile.delete_all
  end
end
db/migrate/init_data/profiles.yaml
admin:
  name: Admin
  value: 1
normal:
  name: Normal user
  value: 2
You could also define it in your seeds.rb file, for instance:
Grid.create :ref_code => 'one' , :name => 'Grade Única'
and then run:
rake db:seed
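If db:seed may be run more than once, it helps to make the seed idempotent. A sketch (this form of find_or_create_by is Rails 4+; on older versions use the dynamic finder find_or_create_by_ref_code):
# Only creates the record when no Grid with ref_code 'one' exists yet.
Grid.find_or_create_by(ref_code: 'one') do |g|
  g.name = 'Grade Única'
end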
Your migrations have access to all your models, so you shouldn't be creating a class inside the migration.
I am using the latest Rails, and I can confirm that the example you posted definitely OUGHT to work.
However, migrations are a special beast. As long as you are clear, I don't see anything wrong with an ActiveRecord::Base.connection.execute("INSERT INTO product_types (name) VALUES ('type1'), ('type2')").
The advantage to this is that you can easily generate it by using some kind of GUI or web front-end to populate your starting data, and then doing a mysqldump -u root database_name product_types.
Whatever makes things easiest for the kind of person who's going to be executing your migrations and maintaining the product.
You should really not use
ProductType.create
in your migrations.
I have done something similar, but in the long run such migrations are not guaranteed to work.
When you run the migration, the model class you are using is the one at the time you run the migration, not the one at the time you created the migration. You have to be sure you never change your model in a way that stops your migration from running.
You are much better off running SQL, for example:
[{ name: 'Type' }, ...].each do |type|
  execute("INSERT INTO product_types (name) VALUES ('#{type[:name]}')")
end
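If the values might contain quotes, it is safer to let the connection escape them. A sketch using the connection's quote helper:
# quote() escapes the value and wraps it in quotes, so no manual
# single quotes are needed around the interpolation.
[{ name: "O'Reilly" }].each do |type|
  execute("INSERT INTO product_types (name) VALUES (#{ActiveRecord::Base.connection.quote(type[:name])})")
end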
I have a sequence of migrations in a rails app which includes the following steps:
Create basic version of the 'user' model
Create an instance of this model - there needs to be at least one initial user in my system so that you can log in and start using it
Update the 'user' model to add a new field / column.
Now I'm using validates_inclusion_of on this new field/column. This worked fine on my initial development machine, which already had a database with these migrations applied. However, if I go to a fresh machine and run all the migrations, step 2 fails: validates_inclusion_of fails because the column added in migration 3 hasn't been added to the model class yet.
As a workaround, I can comment out the "validates_..." line, run the migrations, and uncomment it, but that's not nice.
Better would be to re-order my migrations so the user creation (step 2) comes last, after all columns have been added.
I'm a rails newbie though, so I thought I'd ask what the preferred way to handle this situation is :)
The easiest way to avoid this issue is to use rake db:schema:load on the second machine, instead of db:migrate. rake db:schema:load uses schema.rb to load the most current version of your schema, as opposed to migrating it up from scratch.
If you run into this issue when deploying to a production machine (where preserving data is important), you'll probably have to consolidate your migrations into a single file without conflicts.
You can declare a class with the same name inside the migration; it will override the one in app/models:
class YourMigration < ActiveRecord::Migration
  class User < ActiveRecord::Base; end

  def self.up
    # User.create(:name => 'admin')
  end
end
Unfortunately, your IDE may try to autocomplete based on this class (NetBeans does), and you can't use your model logic in there (unless you duplicate it).
I'm having to do this right now. Building upon BiHi's advice, I'm loading the model manually and then redefining methods where I need to.
load(File.join(RAILS_ROOT, "app/models/user.rb"))

class User < ActiveRecord::Base
  def before_validation; nil; end # clear out the breaking before_validation
  def column1; "hello"; end       # satisfy validates_inclusion_of :column1
end
In your migration, you can save your user skipping ActiveRecord validation:
class YourMigration < ActiveRecord::Migration
  def up
    user = User.new(name: 'admin')
    user.save(validate: false)
  end
end
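(On Rails 2.x, the equivalent is user.save(false).)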