How to temporarily disable the "needs_migration?" check when testing migrations? - ruby-on-rails

I've written a spec to test my migration, but when I run it I get this error:
ActiveRecord::PendingMigrationError:
Migrations are pending. To resolve this issue, run:
bin/rake db:migrate RAILS_ENV=test
I've tried to disable the migration check in a before block, but that check runs before all tests.
How can I disable the migration check for testing purposes?

Testing Rails migrations is a bit of a pain, so I would step back and ask whether this needs to be in a Rails migration / tested as a Rails migration at all.
There are basically two different types of migrations:
Schema migrations
These mostly use Rails' built-in functions. Unless you write some handcrafted SQL, I wouldn't bother testing them and would trust the framework here.
Data migrations
Data migrations are used to backfill or change data. As data is one of your most valuable assets, and losing or corrupting it is very painful, I would definitely recommend writing tests for data migrations.
As mentioned, testing migrations is a bit of a pain, so I would extract the data migration code into its own (service) class. Something like:
class DataMigration::UpdateUsername
  def self.run
    new.run
  end

  def run
    # Iterate the users in batches; `User.all do |batch|` would not yield anything
    User.find_each do |user|
      user.update(name: user.name.capitalize)
    end
  end
end
You can now test the data migration like a normal class like this:
it 'capitalizes the name' do
  user = create(:user, name: 'name')

  DataMigration::UpdateUsername.run

  expect(user.reload.name).to eq('Name')
end
Now we can use this class in our Rails migration, or e.g. just call it from a Rake task. Using it in a Rake task also has the advantage that we can pass in parameters, run several data migrations in parallel (e.g. if you have a large data set), or even run it in a background job, which you can't do in a Rails migration.
Example
class DataMigration::UpdateUsername
  attr_reader :start_id, :finish_id

  def initialize(start_id:, finish_id:)
    @start_id = start_id
    @finish_id = finish_id
  end

  def run
    User.find_in_batches(start: start_id, finish: finish_id) do |batch|
      batch.each do |user|
        user.update(name: user.name.capitalize)
      end
    end
  end
end
Now we can create a custom rake task for this:
namespace :db do
  desc "Runs the user data migration"
  task :update_user, [:start, :finish] => :environment do |task, args|
    DataMigration::UpdateUsername.new(start_id: args[:start], finish_id: args[:finish]).run
  end
end
rake db:update_user[0,10000]
rake db:update_user[10000,20000]
# ...
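The parallel invocations above just split the ID space into fixed-size slices. A small plain-Ruby helper (hypothetical, not part of the answer above) makes the slicing explicit:

```ruby
# Hypothetical helper: split an ID space into [start_id, finish_id] pairs,
# one per rake invocation or background job. Each pair feeds
# DataMigration::UpdateUsername.new(start_id:, finish_id:).run.
def id_ranges(max_id, slice_size)
  (0..max_id).step(slice_size).map do |start|
    [start, [start + slice_size - 1, max_id].min]
  end
end

id_ranges(25_000, 10_000)
# => [[0, 9999], [10000, 19999], [20000, 25000]]
```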

In config/environments/test.rb add the line
config.active_record.migration_error = false
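Note that this setting targets Rails 4.0; from Rails 4.1 on, the pending-migration check in tests is governed by maintain_test_schema instead, so there the equivalent would be (a sketch, assuming you then keep the test schema up to date yourself):

```ruby
# config/environments/test.rb (Rails 4.1+)
Rails.application.configure do
  # Skip the pending-migration check entirely; you are then responsible
  # for migrating the test database yourself.
  config.active_record.maintain_test_schema = false
end
```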

Related

What is the best way to update data post migration?

In Rails / ActiveRecord I have changed a field to make it required; I want to run
AppVersion.where('content_rating IS NULL').each {|av| av.update_column('content_rating', 7) }
to ensure that content_rating is not null.
From what I've read, migrations are not a good place to actually change records. Is there a "do this once" way to run code within the Rails structure?
Yes, you can create a rake task:
http://railsguides.net/2012/03/14/how-to-generate-rake-task/
$ rails g task update_version update_rating_column
  create lib/tasks/update_version.rake
namespace :update_version do
  desc "Update content_rating"
  task :update_rating_column => :environment do
    AppVersion.where('content_rating IS NULL').find_each { |av| av.update_column('content_rating', 7) }
  end
end
You can run the task in the migration if needed:
Execute a Rake task from within migration?
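The mechanics of that are just Rake::Task#invoke. A self-contained sketch, with a stub task standing in for the generated one (in a real migration you would first load the app's tasks, e.g. via YourApp::Application.load_tasks, where YourApp is a placeholder):

```ruby
require 'rake'
include Rake::DSL

# Stub standing in for the generated task; in a migration you would load
# the real one with YourApp::Application.load_tasks (YourApp is a placeholder).
ran = []
task 'update_version:update_rating_column' do
  ran << :done
end

# This is the call you'd place inside the migration's up method:
Rake::Task['update_version:update_rating_column'].invoke
```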

Rails Migrations - Modify rows based on condition

I need to update table data in the database using Rails migrations.
Sample:
Table: Table_A(Col_A(number), Col_B(varchar), ...)
Query: UPDATE Table_A SET Col_B = 'XXX' WHERE Col_B = 'YYY'
What would be the best way to do this using Rails migrations? I am not even sure if Rails migrations are the way to go for updating data in the database. Any explanation would be helpful.
It's usually better to do these sorts of big data updates in a rake task. I usually write them so they have two versions: rake change_lots_of_data:report and rake change_lots_of_data:update. The 'report' version just executes the where clause and spits out a list of what would be changed. The 'update' version uses the very same where clause but makes the changes.
Some advantages of doing it this way are:
Migrations are saved for changing the database structure
You can run the 'report' version as often as you want to make sure the right records are going to be updated.
It's easier to unit test the class called by the rake task.
If you ever need to apply the same criteria to make the change again, you can just run the rake task again. It's possible but trickier to do that with migrations.
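A minimal sketch of that two-task layout (the namespace, the Widget model, and the criteria are placeholders, not from the answer):

```ruby
require 'rake'
include Rake::DSL

namespace :change_lots_of_data do
  # One shared scope so report and update can never drift apart.
  # Widget and the where clause are placeholders for your own model/criteria.
  affected = -> { Widget.where(status: 'legacy') }

  desc "List the records that WOULD be changed, without touching them"
  task :report => :environment do
    affected.call.find_each { |w| puts "#{w.id}: #{w.name}" }
  end

  desc "Apply the change to the very same set of records"
  task :update => :environment do
    affected.call.update_all(status: 'current')
  end
end
```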
I prefer to do any database data changes in a rake task so that it's
Obvious
Repeatable
Won't later be executed via rake db:migrate
The code:
namespace :update do
  desc "Update table A, setting Col_B to XXX where it is YYY"
  task :table_a => :environment do
    TableA.where(Col_B: "YYY").update_all(Col_B: "XXX")
  end
end
Then you can rake update:table_a to execute the update.
This should be done in a rake task...
namespace :onetime do
  task :update_my_data => :environment do
    TableA.where(Col_B: "YYY").update_all(Col_B: "XXX")
  end
end
Then after you deploy:
rake onetime:update_my_data
At my company we delete the contents of the onetime namespace rake task after it's been run in production. Just a convention for us I guess.
More details about the update_all method: http://apidock.com/rails/ActiveRecord/Relation/update_all
You can do it like this:
class YourMigration < ActiveRecord::Migration
  def up
    execute("UPDATE Table_A SET Col_B = 'XXX' WHERE Col_B = 'YYY'")
  end

  def down
  end
end
Or:
class YourMigration < ActiveRecord::Migration
  def up
    update("UPDATE Table_A SET Col_B = 'XXX' WHERE Col_B = 'YYY'")
  end

  def down
  end
end
ActiveRecord::Base.connection.execute("UPDATE Table_A SET Col_B = 'XXX' WHERE Col_B = 'YYY'")

Rails 3 + DataMapper - database not created/destroyed between tests

I'll try here since the mailing list for DM doesn't seem to have much input from other users unfortunately.
I'm reasonably sure this isn't something we have to do manually, but maybe I'm wrong. I've removed ActiveRecord from my project and have started creating models in DataMapper. It's all working, but I want to write unit tests for my models (and functional for my controllers). However, my test database is not cleaned between test runs (easily proven with a test). AR takes care of this for you, but it seems like the DM guys haven't considered this in their dm-rails project.
In a desperate attempt to wipe the slate clean, I dropped all tables in my test database. Now instead of my unit tests failing because the environment is dirty, they fail because the schema doesn't exist. Looking at the rake tasks available to me, I cannot restore my test DB without also wiping my development database. I'm starting to go insane and hoping a fellow DM + Rails 3 user can nudge me in the right direction.
Specifically, when I run my unit tests, all test data should be removed between the test methods. Also, if I make a change to the schema, I should be able to run my tests and they should work.
I tried putting DataMapper.auto_migrate! in a setup callback in my test_helper.rb, but this doesn't seem to create the schema (the tests still fail due to the tables not existing when they try to insert/select records).
I've seen https://github.com/bmabey/database_cleaner, but do we really have to bring an external library into Rails just to do something that DM probably already has (seemingly undocumented) support for? This also doesn't address the issue of recreating the schema.
The answer came back on the mailing list that it's basically a do-it-yourself situation, so to save others the hassle if they end up having to do this too:
Create a .rake file under lib/tasks, called something like test_db_setup.rake:
require File.dirname(__FILE__) + '/../../test/database_dumper'

# Custom logic that runs before the test suite begins.
# This just clones the development database schema into the test database.
# Note that each test does a lightweight teardown of just truncating all tables.
namespace :db do
  namespace :test do
    desc "Reset the test database to match the development schema"
    task :prepare do
      Rake::Task['db:schema:clone'].invoke
    end
  end

  namespace :schema do
    desc "Literally dump the database schema into db/schema/**/*.sql"
    task :dump => :environment do
      DatabaseDumper.dump_schema(:directory => "#{Rails.root}/db/schema", :env => Rails.env)
    end

    desc "Clones the development schema into the test database"
    task :clone => [:dump, :environment] do
      DatabaseDumper.import_schema(:directory => "#{Rails.root}/db/schema", :env => "test")
    end
  end
end

task 'test:prepare' => 'db:test:prepare'
This uses the test:prepare hook that Rails provides, which runs just before the test suite begins. It copies the schema from your development database into .sql files under db/schema/ (one per table/view), then it imports those .sql files into your test database.
You'll need the utility class I wrote for this to work (currently it's written for MySQL >= 5.0.1). You'll have to adjust the logic if you need a different database.
# Utility class for dumping and importing the database schema
class DatabaseDumper
  def self.dump_schema(options = {})
    options[:directory] ||= "#{Rails.root}/db/schema"
    options[:env] ||= Rails.env
    schema_dir = options[:directory]
    clean_sql_directory(schema_dir)
    Rails::DataMapper.configuration.repositories[options[:env]].each do |repository, config|
      repository_dir = "#{schema_dir}/#{repository}"
      adapter = DataMapper.setup(repository, config)
      perform_schema_dump(adapter, repository_dir)
    end
  end

  def self.import_schema(options = {})
    options[:directory] ||= "#{Rails.root}/db/schema"
    options[:env] ||= "test"
    schema_dir = options[:directory]
    Rails::DataMapper.configuration.repositories[options[:env]].each do |repository, config|
      repository_dir = "#{schema_dir}/#{repository}"
      adapter = DataMapper.setup(repository, config)
      perform_schema_import(adapter, repository_dir)
    end
  end

  def self.clean_sql_directory(path)
    Dir.mkdir(path) unless Dir.exists?(path)
    Dir.glob("#{path}/**/*.sql").each do |file|
      File.delete(file)
    end
  end

  def self.perform_schema_dump(adapter, path)
    Dir.mkdir(path) unless Dir.exists?(path)
    adapter.select("SHOW FULL TABLES").each do |row|
      name = row.values.first
      type = row.values.last
      sql_dir = "#{path}/#{directory_name_for_table_type(type)}"
      Dir.mkdir(sql_dir) unless Dir.exists?(sql_dir)
      schema_info = adapter.select("SHOW CREATE TABLE #{name}").first
      sql = schema_info.values.last
      File.open("#{sql_dir}/#{name}.sql", "w+") do |f|
        f << sql << "\n"
      end
    end
  end

  def self.directory_name_for_table_type(type)
    case type
    when "VIEW"
      "views"
    when "BASE TABLE"
      "tables"
    else
      raise "Unknown table type #{type}"
    end
  end

  def self.perform_schema_import(adapter, path)
    tables_dir = "#{path}/tables"
    views_dir = "#{path}/views"
    { "TABLE" => tables_dir, "VIEW" => views_dir }.each do |type, sql_dir|
      Dir.glob("#{sql_dir}/*.sql").each do |file|
        name = File.basename(file, ".sql")
        drop_sql = "DROP #{type} IF EXISTS `#{name}`"
        create_sql = File.read(file)
        adapter.execute(drop_sql)
        adapter.execute(create_sql)
      end
    end
  end
end
This will also leave the .sql files in your schema directory, so you can browse them if you want a reference.
Now this will only wipe your database (by installing a fresh schema) as the test suite starts up. It won't wipe the tests between test methods. For that you'll want to use DatabaseCleaner. Put it in your test_helper.rb:
require 'database_cleaner'

DatabaseCleaner.strategy = :truncation, { :except => %w(auctionindexview helpindexview) }

class ActiveSupport::TestCase
  setup :setup_database
  teardown :clean_database

  private

  def setup_database
    DatabaseCleaner.start
  end

  def clean_database
    DatabaseCleaner.clean
  end
end
Now you should be good to go. Your schema will be fresh when you start running the tests, you'll have a copy of your SQL in the db/schema directory, and your data will be wiped between test methods. A word of warning if you're enticed by the transaction strategy of DatabaseCleaner: this is rarely a safe strategy to use in MySQL, since none of the MySQL table types currently support nested transactions, so your application logic will likely break the teardown. Truncation is still fast, and much safer.

Getting started with the Friendly ORM

I'm following this tutorial: http://friendlyorm.com/
I'm using InstantRails to run MySQL locally. To run Ruby and Rails, I'm using normal Windows installations.
When I run Friendly.create_tables! I only get an empty Array returned: => [] and no tables are created in my 'friendly_development' database.
Author of Friendly here.
You'll have to require all of your models before calling Friendly.create_tables! Otherwise, there's no way for Friendly to know which models exist. In a future revision, I'll automatically preload all your models.
I have a rake task, with help from a guy called Sutto, that will load all your models, call Friendly.create_tables!, and print out all the tables involved.
namespace :friends do
  desc "load in all the models and create the tables"
  task :create => :environment do
    puts "-----------------------------------------------"
    Dir[Rails.root.join("app", "models", "*.rb")].each { |f| File.basename(f, ".rb").classify.constantize }
    tables = Friendly.create_tables!
    tables.each do |table|
      puts "Table '#{table}'"
    end
    puts "-----------------------------------------------"
  end
end
rake friends:create
Not much to go on here. My guess is that Friendly can't find the model file you are creating, perhaps because of the path?

are fixtures loaded when using the sql dump to create a test database

Because of some non-standard table creation options, I am forced to use the SQL dump instead of the standard schema.rb (i.e. I have uncommented this line in environment.rb: config.active_record.schema_format = :sql). I have noticed that when I use the SQL dump, my fixtures do not seem to be loaded into the database. Some data is loaded into it, but I am not sure where it is coming from. Is this normal? And if it is normal, can anybody tell me where this other data is coming from?
This is a very old question but even almost 10 years later, the answer is still the same - it seems that fixtures ignore the schema format and are hard-coded to look for YAML files. Here's the Rake task as of Rails 5.2-stable:
https://github.com/rails/rails/blob/5-2-stable/activerecord/lib/active_record/railties/databases.rake#L198
Line 214 uses Dir["#{fixtures_dir}/**/*.yml"] to find files, so only .yml will be read.
Solutions revolve around loading your SQL fixtures into an otherwise empty database, then dumping them as YAML using the yaml_db gem or something such as that described in this blog post. Since links to blog posts often die quite quickly, I've replicated the source below:
namespace :db do
  desc 'Convert development DB to Rails test fixtures'
  task to_fixtures: :environment do
    TABLES_TO_SKIP = %w[ar_internal_metadata delayed_jobs schema_info schema_migrations].freeze

    begin
      ActiveRecord::Base.establish_connection
      ActiveRecord::Base.connection.tables.each do |table_name|
        next if TABLES_TO_SKIP.include?(table_name)

        counter = '000'
        file_path = "#{Rails.root}/test/fixtures/#{table_name}.yml"
        File.open(file_path, 'w') do |file|
          rows = ActiveRecord::Base.connection.select_all("SELECT * FROM #{table_name}")
          data = rows.each_with_object({}) do |record, hash|
            suffix = record['id'].blank? ? counter.succ! : record['id']
            hash["#{table_name.singularize}_#{suffix}"] = record
          end
          puts "Writing table '#{table_name}' to '#{file_path}'"
          file.write(data.to_yaml)
        end
      end
    ensure
      ActiveRecord::Base.connection.close if ActiveRecord::Base.connection
    end
  end
end
The code above was published on July 16, 2017 by Yi Zeng. You'd put this in a file called something like lib/tasks/to_fixtures.rake. I loaded my SQL fixture data into the otherwise empty/clean test-mode database, then ran RAILS_ENV=test bundle exec rake db:to_fixtures. It worked as-is for me under Rails 5.2.3.
If you are loading the DB from the script you dumped, that should be all that is in there. If you see anything else I would try dropping the db and recreating it from the script to make sure.
Also, if you just want to load the fixtures, you can run:
rake db:fixtures:load
Update:
You may want to look for a way to include your options in the migrations. In my experience, it nearly always pays off to do things the Rails way. If it helps, I would add custom options, such as using MySQL Cluster, via the :options option on create_table:
class CreateYourTable < ActiveRecord::Migration
  def self.up
    create_table :your_table, :options => "ENGINE=NDBCLUSTER" do |t|
      # ...
    end
  end
end
