Ruby on Rails Migration - Create New Database Schema

I have a migration that runs an SQL script to create a new Postgres schema. When you create a new database in Postgres, it creates a schema called 'public' by default, which is the main schema we use. The migration that creates the new schema seems to work fine. The problem occurs after the migration has run: when Rails tries to update the 'schema_info' table it relies on, it reports that the table does not exist, as if it were looking for it in the new schema rather than in the default 'public' schema where the table actually lives.
Does anybody know how I can tell rails to look at the 'public' schema for this table?
Example of the SQL being executed:
CREATE SCHEMA new_schema;
COMMENT ON SCHEMA new_schema IS 'this is the new Postgres database schema to sit along side the "public" schema';
-- various tables, triggers and functions created in new_schema
Error being thrown:
RuntimeError: ERROR C42P01 Mrelation "schema_info" does not exist
L221 RRangeVarGetRelid: UPDATE schema_info SET version = ??
Thanks for your help
Chris Knight

Well, that depends on what your migration looks like, what your database.yml looks like, and what exactly you are trying to do. More information is needed: change the names if you have to, but post an example database.yml and the migration. Does the migration change the search_path for the adapter, for example?
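For what it's worth, the PostgreSQL adapter reads a schema_search_path setting from database.yml, and keeping 'public' first in that list is one way to make sure Rails still finds schema_info there. A sketch (the database name is made up):
development:
  adapter: postgresql
  database: myapp_development             # hypothetical name
  schema_search_path: public,new_schema   # 'public' first so schema_info resolves there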
But know that, in general, Rails and PostgreSQL schemas don't work well together (yet?).
There are a few places which have problems. Try to build an app that uses only one pg database with two non-default schemas, one for dev and one for test, and tell me about it. (From the following I can already tell you that you will get burned.)
Maybe it has been fixed since the last time I played with it, but look at http://rails.lighthouseapp.com/projects/8994/tickets/390-postgres-adapter-quotes-table-name-breaks-when-non-default-schema-is-used, or http://rails.lighthouseapp.com/projects/8994/tickets/918-postgresql-tables-not-generating-correct-schema-list, or this in postgresql_adapter.rb:
# Drops a PostgreSQL database
#
# Example:
#   drop_database 'matt_development'
def drop_database(name) #:nodoc:
  execute "DROP DATABASE IF EXISTS #{name}"
end
(Yes, this is wrong if you use the same database with different schemas for dev and test: it would drop the one shared database, and with it both environments' schemas, each time you run the unit tests!)
I actually started writing patches. The first one was for the index methods in the adapter, which didn't care about the search_path and ended up creating duplicated indexes in some conditions. Then I started getting hurt by the rest and ended up abandoning the idea of using schemas: I wanted to get my app done, and I didn't have the extra time needed to fix the problems I ran into with schemas.

I'm not sure I understand exactly what you're asking, but rake will be expecting to update the version of the Rails schema in the schema_info table. Check your database.yml config file; this is where rake will look to find the table to update.
Is it possible that you are migrating to a new Postgres schema while rake is still pointing at the old one? In that case I'm not sure a standard Rails migration is what you need; it might be best to create your own rake task instead.
Edit: If you're referencing two different databases or Postgres schemas, Rails doesn't support this in standard migrations. Rails assumes one database, so migrating from one database to another is usually not possible. When you run "rake db:migrate" it looks at the RAILS_ENV environment variable to find the correct entry in database.yml. If rake starts the migration using the "development" environment and database config from database.yml, it will expect to update that same environment's database at the end of the migration.
So you'll probably need to do this from outside the Rails stack, as you can't reference two databases at the same time within Rails. There are attempts at plugins to allow this, but they're majorly hacky and don't work properly.

You can use pg_power. It provides an additional migration DSL for creating PostgreSQL schemas, among other things.
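A sketch of what that looks like; create_schema comes from pg_power's DSL, not from Rails itself, and the class name here is made up, so check the gem's README before leaning on it:
class CreateNewSchema < ActiveRecord::Migration
  def change
    create_schema "new_schema"   # pg_power DSL; reversible, so change works
  end
end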

Related

How to keep an overview of the migrations?

I have a question regarding my migrations in Rails.
Normally, when I want to add a column to a model, I don't create an extra migration. Instead I perform these steps:
rake db:rollback
Next, I change the migration file in db/migrate and rerun:
rake db:migrate
The biggest problem is that when I do this I lose my data.
Previously I wrote migrations from the command line, for example:
rails g migration AddHouseToUsers house:string
The problem with this approach is that my db/migrate folder gets very large and not very clear! I mean, at the end I don't know which attributes the object has! I'm not an expert in Rails and would like to ask you how to keep an overview of the migrations! Thanks
Just a minor thought: I just use the file db/schema.rb to determine what's in the database, as opposed to tracking through the migrations.
You definitely shouldn't use db:rollback with a table with existing data.
I have a few production Rails apps with a ton of data, and there are 100+ entries in the migrations table; adding new migrations to tweak tables is the Rails way to do things. Not sure what you mean by "not very clear", but your schema and data model are going to change over time, and that is OK and expected.
One tip: migrations are great, but they are just the beginning; you can include complex logic as needed to fix your existing data, like so.
Changing data in existing table:
def up
  add_column :rsvps, :column_name_id, :integer
  update_data
end

def update_data
  rsvps = Rsvp.where("other_column is not null")
  rsvps.each do |rsvp|
    invite = Blah.find(rsvp.example_id)
    # ...
    rsvp.save
  end
end
Another tip: back up your production database often (you should do this anyway), and use the backups to test all of your migrations before deploying. I run scripts like this all the time for local testing:
mysql -u root -ppassword
# everything from here to "exit" runs inside the mysql shell:
drop database mydatabase_dev;
create database mydatabase_dev;
use mydatabase_dev;
source /var/www/bak/mydatabase_backup_2013-10-04-16.28.06.sql
exit
# back at the command line:
rake db:migrate

Rails rake db:schema:dump against SQL Server database with multiple schemas

The situation I have is that we have multiple schemas on SQL Server that we need to be able to run schema:dump and migrations against. One schema is for our new Rails application; the other is for a legacy system that we have dependencies on.
When running rake db:schema:dump, our new schema's tables are correctly written to the schema.rb file, but the legacy schema's tables do not end up in schema.rb. I'm wondering how others are dealing with this issue.
Another thought I've had, given that our legacy schema tables are fairly static, would be to add them to a separate file once and then create a before hook for rake db:schema:load that would run that file prior to schema.rb. Is there a before hook for rake db:schema:load, and if so, what is it?
Here is how I ended up solving this issue.
I added before hooks into schema load and schema dump within hooks.rake, as described below.
namespace :project do
  namespace :db do
    task :before_schema_load => :environment do
      add_tables
    end

    task :before_schema_dump => :environment do
      add_ignored_tables
    end
  end
end

Rake::Task['db:schema:dump'].enhance(['project:db:before_schema_dump'])
Rake::Task['db:schema:load'].enhance(['project:db:before_schema_load'])
Within add_tables I've manually created what is essentially a static schema.rb equivalent for my legacy tables, since these will change infrequently (possibly never).
Within add_ignored_tables I've added tables to the ActiveRecord::SchemaDumper.ignore_tables array to indicate tables outside of my schema that I don't want dumped to schema.rb. In my case this is everything that isn't under my current app's schema. Since everything I want outside of my app's schema is specified within add_tables, those tables likewise won't end up in schema.rb.
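For illustration, add_ignored_tables can be as small as appending patterns to the dumper's ignore list. A sketch, where "legacy" stands in for your real legacy schema name:
def add_ignored_tables
  # ignore_tables accepts strings and regexps; match everything in the
  # (hypothetical) legacy schema so it stays out of schema.rb
  ActiveRecord::SchemaDumper.ignore_tables += [/^legacy\./]
end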
There is some material about multi-tenant databases using Postgres; the one I've referenced before is http://blog.jerodsanto.net/2011/07/building-multi-tenant-rails-apps-with-postgresql-schemas/. There is also a gem (https://github.com/influitive/apartment), which may be inspiration, or even a solution, for you.

What is the best current practice to insert default/initial data into a Ruby on Rails database?

I have a new Rails app with a fresh database and I want to add some default entries, like an admin with a default password, etc.
How should I proceed?
I know of two possibilities, that both have drawbacks:
Use Ruby code in a migration: User.create!(:email => "admin@example.com", :password => "abc")
Use SQL code in a migration: INSERT INTO users (id, email, password) VALUES (1, 'admin@example.com', 'abc')
The first alternative can break if I alter my code in later versions. The second is somewhat DBMS-dependent.
As I do not plan to change my database, I would go with SQL code, but are there better alternatives?
Personally I'd use seed data
http://railscasts.com/episodes/179-seed-data
One advantage of this is that rake db:setup calls rake db:seed after it has created the database, which is perfect for new machines.
The best way is to use Ruby code in db/seeds.rb.
The comment at the start of that file says:
This file should contain all the record creation needed to seed the
database with its default values. The data can then be loaded with
the rake db:seed (or created alongside the db with db:setup).
This is preferred over putting initialization code in the migration files.
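As a concrete example, a minimal db/seeds.rb for the admin case from the question might look like this (assuming a User model with email and password attributes, on a Rails version with find_or_create_by!, which keeps the seed idempotent so rerunning rake db:seed won't duplicate rows):
User.find_or_create_by!(email: "admin@example.com") do |user|
  # the block runs only when no matching record exists yet
  user.password = "abc"
end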
I use the seed_fu gem; I haven't run into anything yet that it couldn't handle for me:
https://github.com/mbleigh/seed-fu
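A sketch of seed-fu's fixture style, with the DSL taken from the gem's README (the :email constraint makes the seed re-runnable without duplicates); this would live in db/fixtures/users.rb and run via rake db:seed_fu:
User.seed(:email) do |s|
  s.email    = "admin@example.com"
  s.password = "abc"
end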

Ruby on Rails migration, change table to MyISAM

How does one create a Rails migration properly so that a table gets changed to MyISAM in MySQL? It is currently InnoDB. Running a raw execute statement will change the table, but it won't update db/schema.rb, so when the table is recreated in a testing environment, it goes back to InnoDB and my fulltext searches fail.
How do I go about changing/adding a migration so that the existing table gets modified to MyISAM and schema.rb gets updated so my db and respective test db get updated accordingly?
I didn't find a great way to do this. You could change your schema.rb as someone suggested and then run rake db:schema:load; however, this will overwrite your data.
The way I did it was (assuming you are trying to convert a table called books):
1. Save the existing data from the CLI: CREATE TABLE tmp SELECT * FROM books;
2. In your new migration file, drop the books table and recreate it with :options => "ENGINE=MyISAM", as suggested in the comments (see the sketch after this list)
3. Copy the contents back: INSERT INTO books SELECT * FROM tmp;
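A sketch of those three steps as a single migration; the books table, its title column, and the class name are assumptions, so mirror your real table definition when recreating it:
class ConvertBooksToMyisam < ActiveRecord::Migration
  def up
    execute "CREATE TABLE tmp SELECT * FROM books"   # step 1: stash the data
    drop_table :books
    create_table :books, :options => "ENGINE=MyISAM" do |t|   # step 2
      t.string :title
      # ... recreate the remaining columns to match the original table ...
    end
    execute "INSERT INTO books SELECT * FROM tmp"    # step 3: copy it back
    execute "DROP TABLE tmp"
  end
end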
I think that if you change your schema format (config.active_record.schema_format) from :ruby to :sql, all the SQL will be saved there.
I'd do some tests on a fresh app first if I were you, to see how it works.
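For reference, that setting lives in config/application.rb; with :sql, Rails dumps native SQL (which, for MySQL, includes table options like the storage engine) instead of the Ruby schema.rb:
# config/application.rb
config.active_record.schema_format = :sql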
You can run any sql in migrations. This worked for me:
class ChangeMapOnlyUsersEngine < ActiveRecord::Migration[5.1]
  def change
    MyModel.connection.execute("ALTER TABLE my_models ENGINE = 'MyISAM';")
  end
end
When I did this (InnoDB -> MyISAM) it worked fine, without loss of data, so I don't think it's necessary to create temporary tables or similar. Note that MyISAM doesn't support transactions, so any test writes against a corresponding ActiveRecord model will be persisted, with a risk of test pollution.

Way to view Rails Migration output

Is there an easy way to see the actual SQL generated by a rails migration?
I have a situation where a migration to change a column type worked on my local development machine but partially failed on the production server.
My PostgreSQL versions differ between local and production (7 on production, 8 locally), so I'm hoping that by looking at the SQL generated by the successful local migration I can work out a SQL statement to run on production to fix things...
Look at the log files: log/development.log locally vs log/production.log on your server.
I did some digging and found another way this can be achieved too. (This way gives you only the SQL, so it was a bit easier for me to read.)
PostgreSQL will log every query executed if you put this line in its config file, postgresql.conf (there's a commented-out example in the "What to log" section):
log_statement = 'all'
Then I rolled back and re-ran my migration locally to find the SQL I was looking for.
This method also gives you the SQL in a format where you can easily paste it into something like pgAdmin's query builder and play around with it.
You could set the logger to STDOUT at the top of your migration's change, up, or down methods. Example:
class SomeMigration < ActiveRecord::Migration
  def change
    ActiveRecord::Base.logger = Logger.new(STDOUT)
    # ...
  end
end
Or see this answer for adding SQL logging to all rake tasks
