How does one create a Rails migration properly so that a table gets changed to MyISAM in MySQL? It is currently InnoDB. Running a raw execute statement will change the table, but it won't update db/schema.rb, so when the table is recreated in a testing environment, it goes back to InnoDB and my fulltext searches fail.
How do I go about changing/adding a migration so that the existing table gets modified to MyISAM and schema.rb gets updated so my db and respective test db get updated accordingly?
I didn't find a great way to do this. You could change your schema.rb as someone suggested and then run rake db:schema:load, but that will overwrite your data.
The way I did it was (assuming you are trying to convert a table called books; a sketch of the full migration follows these steps):
Save the existing data from the CLI: CREATE TABLE tmp SELECT * FROM books;
In your new migration file, drop the books table and recreate it with :options => "ENGINE=MyISAM" like someone said in the comment
Copy the contents back: INSERT INTO books SELECT * FROM tmp
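A minimal sketch of that migration, assuming a books table with illustrative columns (the column list must match your real table so the INSERT ... SELECT lines up):

class ConvertBooksToMyisam < ActiveRecord::Migration # add a version tag, e.g. [5.1], on Rails 5+
  def up
    # 1. Stash the existing rows
    execute "CREATE TABLE tmp_books SELECT * FROM books"
    # 2. Recreate the table with the MyISAM engine
    drop_table :books
    create_table :books, :options => "ENGINE=MyISAM" do |t|
      t.string :title        # illustrative columns -- recreate your real ones here
      t.text   :description
      t.timestamps
    end
    # 3. Copy the rows back and clean up
    execute "INSERT INTO books SELECT * FROM tmp_books"
    execute "DROP TABLE tmp_books"
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end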
I think that if you change your schema format (config.active_record.schema_format) from :ruby to :sql, all of the SQL will be saved there.
I'd do some tests on a fresh app first if I were you, to see how it works.
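For example, in config/application.rb (this is the standard Rails setting; with it, the schema is dumped to db/structure.sql instead of db/schema.rb, so engine options like ENGINE=MyISAM are preserved):

# config/application.rb
config.active_record.schema_format = :sql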
You can run any sql in migrations. This worked for me:
class ChangeMapOnlyUsersEngine < ActiveRecord::Migration[5.1]
  def change
    MyModel.connection.execute("ALTER TABLE my_models ENGINE = 'MyISAM';")
  end
end
When I did this in the other direction (InnoDB -> MyISAM) it worked fine, without loss of data, so I don't think it's necessary to create temporary tables or similar. Note that MyISAM doesn't support transactions, so any test writes against a corresponding ActiveRecord model will be persisted rather than rolled back, with a risk of test pollution.
Related
I created a model for comments at the start of a project, but have now come to the realisation that I need to create some polymorphic relations so I can use comments with a number of other models as well. Considering the code I already have, I'm thinking it might be easier to just start again from scratch so I can build all the views/controllers etc. in the correct way for my new polymorphic world.
I see that I can run rails destroy model comments to achieve this but I have two questions on that:
Will this delete the model, migrations AND the actual DB table?
What are the implications when I want to create a new model with the exact same name?
In order to completely remove all columns and tables that the migration has created, you need to run:
rails db:migrate:down VERSION=012345678 (where 012345678 should be the version number of your migration)
rails destroy model Comments
will delete your Model, pending migration, tests and fixtures
So destroy is the opposite of generate:
$ bin/rails destroy model Oops
      invoke  active_record
      remove    db/migrate/20120528062523_create_oops.rb
      remove    app/models/oops.rb
      invoke  test_unit
      remove    test/models/oops_test.rb
      remove    test/fixtures/oops.yml
And, you can now create a new Model with the same name, as there's no trace of your previous one :)
If you have already migrated the database after creating the model:
First, rollback the changes to the database:
rake db:migrate:down VERSION=20100905201547
where version is the timestamp identifying the migration. For example, if your migration file is called 20170411182948_create_comments.rb then your version parameter should be 20170411182948
Then run
rails destroy model comments
The first command will delete the table from the actual database. The second command will delete the model and the migration file. Make sure you run them in that order as the first command is dependent on the migration file to perform the rollback (which is deleted during the second command).
If you have not migrated the database:
The table would not have been added to your database. You can go ahead and delete your model and migration files manually or use the destroy command.
You might need to remove the table from the database entirely. Run this:
sqlite3 db/development.sqlite3
Then :
sqlite> drop table table_name;
sqlite> .quit
I have a question regarding my migrations in rails.
Normally, when I want to add a column to a model, I don't create an extra migration; instead I perform these steps:
rake db:rollback
Next, I change the migration file in db/migrate and rerun:
rake db:migrate
The biggest problem is that when I do this I lose my data.
Previously, I wrote migrations from the command line with, for example:
rails g migration Add_Column_House_to_Users house:string
The problem with this approach is that my db/migrate folder gets very large and not very clear afterwards! I mean, at the end I don't know which attributes the object has! I'm not an expert in Rails and would like to ask you how to keep an overview of the migrations. Thanks!
Just a minor thought - I just use the file db/schema.rb to determine what's in the database, as opposed to tracking through the migrations.
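For reference, db/schema.rb lists every table and column in one place; an illustrative excerpt (the table and columns here are made up):

# db/schema.rb (auto-generated -- don't edit by hand)
ActiveRecord::Schema.define(:version => 20131004162806) do
  create_table "users", :force => true do |t|
    t.string   "name"
    t.string   "house"
    t.datetime "created_at"
    t.datetime "updated_at"
  end
end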
You definitely shouldn't use db:rollback with a table with existing data.
I have a few production RoR apps with a ton of data and there are 100+ entries in the migrations table, and adding new migrations to tweak tables is the Rails way to do things. Not sure what you mean by "not very clear", but your schema and data model are going to change over time, and that is OK and expected.
One tip: the migrations are great, but they are just the beginning; you can include complex logic as needed to fix your existing data, like so:
Changing data in existing table:
def up
  add_column :rsvps, :column_name_id, :integer
  update_data
end

def update_data
  rsvps = Rsvp.where("other_column is not null")
  rsvps.each do |rsvp|
    invite = Blah.find(rsvp.example_id)
    ...
    rsvp.save
  end
end
Another tip: backup your production database often (should do this anyway), but use it to test all of your migrations before deploying. I run scripts like this all the time for local testing:
mysql -u root -ppassword
drop database mydatabase_dev;
create database mydatabase_dev;
use mydatabase_dev;
source /var/www/bak/mydatabase_backup_2013-10-04-16.28.06.sql
exit
rake db:migrate
I am currently working on a Rails app where we are using Mongoid/MongoDB on the back-end. I understand that I don't need ActiveRecord-style migrations to migrate the schema, but I do need to migrate data as I change Mongoid model definitions. Is anyone else out there running into the same scenario, and if so, how are you handling it?
Even though you're not making schema changes, you may need to move data between fields, or remove fields that are no longer used in the codebase. It's nice to have migrations that you can run when you deploy new code. I recommend using a gem called mongoid_rails_migrations. This provides you with migration generators like you're used to and provides some organization to migrating data.
class MyMigration < Mongoid::Migration
  def self.up
    MyModel.all.each do |model|
      # label was renamed to name
      model.set :name, model[:label]  # copy the data from the old field to the new one
      model.remove_attribute :label   # remove the old field from the document
      model.save!
    end
  end
end
Write a custom rake task to migrate the data as needed
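A minimal sketch of such a task, assuming the same label-to-name rename as above (the task name, model, and fields are illustrative, not from the original post):

# lib/tasks/data_migrate.rake
namespace :data do
  desc "One-off: copy the old 'label' field into 'name' and drop it"
  task :rename_label_to_name => :environment do
    MyModel.all.each do |doc|
      doc.set :name, doc[:label]   # same atomic #set used in the example above
      doc.remove_attribute :label
      doc.save!
    end
  end
end

Run it once with rake data:rename_label_to_name after deploying.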
This question addresses the same issue of creating custom migrations in a mongoid setup.
Runtime changing model with mongodb/mongoid
I had the same scenario recently, where I had to do some data migration only once (basically updating dirty data).
So what I did was keep Mongoid migrations in /db/migrate/ and override the db:migrate task so that it creates a collection in the app's MongoDB itself, say "migrations", that records each migration that has been run. With that, no migration will run twice, and you can keep adding migrations in some order (in case migrations are interdependent).
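A rough sketch of that bookkeeping, assuming a plain Mongoid model is used to record what has already run (the class name, field, and file layout are illustrative, not from the original post):

# A document per executed data migration.
class DataMigrationRecord
  include Mongoid::Document
  field :version, :type => String
end

# Inside the overridden db:migrate task:
Dir[Rails.root.join("db/migrate/*.rb").to_s].sort.each do |file|
  version = File.basename(file)[/\A\d+/]              # leading timestamp
  next if DataMigrationRecord.where(:version => version).exists?
  load file                                           # defines the migration class
  # ... invoke that migration's .up here ...
  DataMigrationRecord.create!(:version => version)
end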
I'm fairly new to Ruby on Rails here.
I have 2 migrate files that were provided. The first one, prefixed with 001, creates a table and some columns for that table. The next migrate file, prefixed with 002, inserts rows into the table created in file 001.
Running the migration (rake db:migrate in the command line) correctly creates the table but doesn't insert any of the data, which is the problem. The code for the insertion looks like this (except with a lot more Student.create statements):
class AddStudentData < ActiveRecord::Migration
  def self.up
    ...
    Student.create(:name => "Yhi, Manfredo", :gender => "M")
    ...
  end

  def self.down
    Student.delete_all
  end
end
My understanding is that Student is a model object, so my Student model looks like this,
class Student < ActiveRecord::Base
end
Do I need to explicitly define a create method in Student or is that something that's given? (This project was made using scaffold)
Thanks.
Edit 1: I used Damien's suggestion and called create! instead of create but got the same response. Then, to see whether the code was even reaching that far, I called this:
Student.create12312313!(:name => "foo", :gender => "M")
which is obviously invalid code, and the migration didn't throw any error.
Edit 2: Answer found. The schema_migrations table had its version set to 3, and I only had 3 different migration files, so it never ran any of the migration files I had. That's why nothing was ever updating and the bogus creates never threw errors. The reason the student data wasn't inserted the first time was that a certain table was already in the database and it caused a conflict the first time I migrated. So what I was really looking for wasn't db:migrate but rather db:reset. Several hours well spent.
The create method is inherited from ActiveRecord::Base.
So no, you don't need to define it.
One reason why your data might not be inserted is that you have validations that don't pass.
You can easily see the error that keeps your data from being inserted by using create! instead of create.
That way, if the model can't be created, an exception will be thrown and the migration will fail.
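For example, assuming Student validates the presence of name (that validation is illustrative, not from the original post):

# create returns the unsaved, invalid record and moves on;
# create! raises, so the failure is visible in the migration output.
Student.create(:name => "", :gender => "M")    # silently returns an invalid Student
Student.create!(:name => "", :gender => "M")   # raises ActiveRecord::RecordInvalid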
You may want to look at data seeding in Rails 2.3.4. Also, are your Rails migrations really running 001_create_whatever.rb, or were you just using that as an example? Since 2.2.2 (IIRC), migrations have used timestamps such as 10092009....create_whatever.rb.
How old is your rails version?
The migrations won't run if their schema number is in the database.
For older versions of rails, there will be a single row with the highest migration performed in it.
For newer versions, every migration gets a unique time-stamp as its version number, and its own row in schema_migrations when it gets added.
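On newer versions, you can check which migrations Rails has recorded with a quick query from the console (schema_migrations is the default table name):

# Lists the migration versions Rails believes have already run.
ActiveRecord::Base.connection.select_values("SELECT version FROM schema_migrations")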
I have a migration that runs an SQL script to create a new Postgres schema. When creating a new database, Postgres by default creates a schema called 'public', which is the main schema we use. The migration to create the new database schema seems to be working fine; however, the problem occurs after the migration has run, when Rails tries to update the 'schema_info' table it relies on: it says the table does not exist, as if it is looking for it in the new schema and not the default 'public' schema where the table actually is.
Does anybody know how I can tell rails to look at the 'public' schema for this table?
Example of the SQL being executed:
CREATE SCHEMA new_schema;
COMMENT ON SCHEMA new_schema IS 'this is the new Postgres database schema to sit along side the "public" schema';
-- various tables, triggers and functions created in new_schema
Error being thrown:
RuntimeError: ERROR C42P01 Mrelation "schema_info" does not exist
L221 RRangeVarGetRelid: UPDATE schema_info SET version = ??
Thanks for your help
Chris Knight
Well, that depends on what your migration looks like, what your database.yml looks like, and what exactly you are trying to do. More information is needed: change the names if you have to, and post an example database.yml and the migration. Does the migration change the search_path for the adapter, for example?
But know that, in general, Rails and PostgreSQL schemas don't work well together (yet?).
There are a few places which have problems. Try to build an app that uses only one pg database with two non-default schemas, one for dev and one for test, and tell me about it. (From the following I can already tell you that you will get burned.)
Maybe it has been fixed since the last time I played with it, but when I see http://rails.lighthouseapp.com/projects/8994/tickets/390-postgres-adapter-quotes-table-name-breaks-when-non-default-schema-is-used or http://rails.lighthouseapp.com/projects/8994/tickets/918-postgresql-tables-not-generating-correct-schema-list or this in postgresql_adapter.rb:
# Drops a PostgreSQL database
#
# Example:
#   drop_database 'matt_development'
def drop_database(name) #:nodoc:
  execute "DROP DATABASE IF EXISTS #{name}"
end
(yes, this is wrong if you use the same database with different schemas for dev and test: it would drop the shared database, dev and test alike, each time you run the unit tests!)
I actually started writing patches. The first one was for the index methods in the adapter, which didn't take the search_path into account and ended up with duplicated indexes in some conditions. Then I started getting hurt by the rest and ended up abandoning the idea of using schemas: I wanted to get my app done and I didn't have the extra time needed to fix the problems I had using schemas.
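On the search_path point above: the PostgreSQL adapter does let you set it explicitly, which is the usual way to make Rails keep looking in 'public'. A minimal sketch (the schema names are illustrative):

# In an initializer, or at the top of the migration, before the
# schema_info/schema_migrations table is touched:
ActiveRecord::Base.connection.schema_search_path = "public, new_schema"

You can also set schema_search_path under the environment's entry in database.yml.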
I'm not sure I understand exactly what you're asking, but rake will be expecting to update the Rails schema version in the schema_info table. Check your database.yml config file; this is where rake will be looking to find the table to update.
Is it a possibility that you are migrating to a new Postgres schema and rake is still pointing to the old one? I'm not sure then that a standard Rails migration is what you need. It might be best to create your own rake task instead.
Edit: If you're referencing two different databases or Postgres schemas, Rails doesn't support this in standard migrations. Rails assumes one database, so migrating from one database to another is usually not possible. When you run "rake db:migrate" it actually looks at the RAILS_ENV environment variable to find the correct entry in database.yml. If rake starts the migration looking at the "development" environment and database config from database.yml, it will expect to update this environment at the end of the migration.
So, you'll probably need to do this from outside the Rails stack as you can't reference two databases at the same time within Rails. There are attempts at plugins to allow this, but they're majorly hacky and don't work properly.
You can use pg_power. It provides additional migration DSL for creating PostgreSQL schemas, and more.