I'm attempting to write my own timestamps method that gets run during migrations. The one that's in place now adds a NOT NULL constraint on the fields, and I really, really don't want that.
The problem is that I have a multi-schema database, where each major client gets their own schema. When we on-board a new client, we create a new tenant record and then run a migration for the newly minted schema.
The new schema is supposed to be an exact copy of the tables in the other schemas, except of course with no data.
The last migration I ran used a slightly older version of Rails (still in the 3.x series, but a smidge older). When it created the timestamps, they were NULLable.
When I ran the migration the other day (on a newer Rails)... well, all those fields are now NOT NULL.
I have code that was developed with the idea that updated_at is only populated when the record is updated, not when it is created (third-party apps and database "functions" create the records).
The third-party apps and database functions that create records are falling down on the new schema...
I've gone in and removed all the NOT NULL constraints on the tables manually, but I don't want to have to bake that cleanup into my migration task just so that all future tables come out correct.
I figured the best thing to do was to override the timestamps method that changed, reverting it to one that doesn't break existing code.
So that's the reason I need to revert/override.
My question now is: how do I override the method? I can't see a clear class path to it, and I'm not exactly sure how to override it.
Put this in a monkey patch... Easy as!
class ActiveRecord::ConnectionAdapters::PostgreSQLAdapter::TableDefinition
  def timestamps(*args)
    options = args.extract_options!
    # Same as the stock timestamps, but without forcing NOT NULL
    column(:created_at, :datetime, options)
    column(:updated_at, :datetime, options)
  end
end
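If you go this route, one place to keep the patch is an initializer (for example config/initializers/nullable_timestamps.rb; the name is just a suggestion) so it's loaded before any migration runs. With it in place, a plain t.timestamps in future tenant migrations creates both columns without the NOT NULL constraint. A minimal sketch, with a made-up table:
class CreateWidgets < ActiveRecord::Migration
  def change
    create_table :widgets do |t|
      t.string :name
      t.timestamps   # created_at / updated_at now come out NULLable
    end
  end
end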
As Maniek said, updates to Rails will be ignored because of this "fix".
But his initial offering does the same. Also, to accommodate his fix, you'd need to go back through old migrations and replace "timestamps" with the new code. Add to that that you'd have to replace it in all future auto-generated migrations too.
I don't think that fits well with DRY, nor does it fit SPOT (single point of truth).
Just be careful!
what's wrong with:
create_table :foo do |t|
  t.text :bar
  t.datetime :created_at
  t.datetime :updated_at
end
?
Recently I realized that none of my database columns have default values. If an object is instantiated and then saved, it might have nil values for any fields that aren't filled out.
I can make it so that I explicitly initialize these values, but then I've got some places where two controllers create the same object, and I hadn't thought of moving that code into a separate module.
I can also choose to update my migrations to specify default values, which seems cleaner.
I've decided to go with migrations. It is not a good idea to edit old migrations, so I'm creating a new migration and specifying that I want to change certain columns.
Looking at how I should change columns, I've determined that I will use this:
def change
  change_column :products, :size, :integer, :default => 0   # change_column needs the type re-stated
end
Will this modify existing records that currently have nil in that column?
No, it will not update your old records.
You should NEVER change old migrations; it's good that you noticed that.
I suggest you create a rake task that will update all the fields, OR you can do it directly in the console with the code below.
Product.update_all({ size: 0 }, { size: nil })
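Note that the two-argument form of update_all was deprecated and then removed in Rails 4; on newer versions the equivalent is the relation form:
Product.where(size: nil).update_all(size: 0)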
I'm looking for some advice on the following:
You have an app live with customers and real data
While developing new features, you need to add a column to, let's say, the projects table
This new column is a UID of some sort which is generated by the model using a before_save
This all works fine for new projects moving forward. But all existing projects are nil for that column and everything breaks.
How do you handle this in the Rails world?
Thanks
You could simply create a rake task that pulls in all projects without a UID and adds one to each project.
After you run the migration run the task. All of your projects should now have a UID.
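A minimal sketch of such a task (the path and task name are made up, and it assumes your before_save callback fills in the missing UID when the record is saved):
# lib/tasks/backfill_uids.rake
namespace :projects do
  desc "Backfill UIDs for projects created before the column existed"
  task :backfill_uids => :environment do
    Project.where(uid: nil).find_each do |project|
      project.save!   # before_save generates and assigns the missing UID
    end
  end
end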
I think this should be handled within the migration script, rather than a Rake task.
If I understand correctly, this only ever needs to be performed once, at the time the column is added, to backfill the historical records. In my mind, a migration script shouldn't leave the app with a broken data set. Migrations are designed for more than just schema changes.
Here's an example:
def self.up
  change_table :projects do |t|
    t.integer 'new_column'
  end

  Project.reset_column_information
  Project.all.each do |project|
    project.new_column = some_value
    project.save
  end
end
The reset_column_information method makes Rails aware of the new column you just added.
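One small refinement: on a large table, Project.all loads every record into memory at once, so batching can be kinder. A sketch of the same loop using find_each:
Project.reset_column_information
Project.find_each do |project|
  project.new_column = some_value
  project.save
end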
Simple question that used to puzzle me about Rails:
Is it possible to describe a Model's structure from within the model rb file?
From what I understand a model's data structure is kept within the migration, and the model.rb file is supposed to contain only the business logic.
Why is it so? Why does it make more sense to migrate the database with a rake task than to extract it from the class?
The reason migrations are stored separately is so that you can version your database. This would be unwieldy if done inline in the model.
Other ORMs (like DataMapper) do store the schema in the model definition. I think it's really convenient to be able to see model attributes right there, but it is unfortunate to not have the history of your database structure.
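For comparison, a minimal DataMapper-style model, where the attributes live in the class itself (a sketch, not tied to this question's schema):
class Post
  include DataMapper::Resource

  property :id,    Serial
  property :title, String
  property :body,  Text
end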
What I really wish is that running the migrations would just insert some comments at the top of the model file detailing the schema. That should be a simple hack.
Migrations do not simply show the state of the database schema.
They define the transitions from one state to another.
In a comment to cam's post, you said having the schema in the model would do the same thing: if you had the model's source stored in a VCS, you could look up previous versions of the schema.
Here is why that is not equivalent to migrations:
Schema Version 1
string :name
string :password
string :token
Schema Version 2
string :username
string :displayname
string :password
string :token
So, what did I do here? What happened to "name"? Did I rename it to username? Or maybe I renamed it to displayname? Or did I drop it entirely?
You don't know. There's no way to tell. You only see the "before" and "after" of the schema. You don't see the transition.
Let's instead look at what I really did with this migration:
class UpdateNameFields < ActiveRecord::Migration
  def self.up
    rename_column :users, :name, :username
    add_column :users, :displayname, :string
    User.update_all("displayname = username")
  end

  def self.down
    remove_column :users, :displayname
    rename_column :users, :username, :name
  end
end
See, I had been using "name" for usernames. But you wouldn't be able to tell that without the migration here. Plus, in an effort to not have my new displayname column be blank on all my existing records, I have seeded it with everyone's existing usernames. This lets me gently introduce this new feature - I can use it and know that existing records aren't going to just see a blank field.
Note that this is a trivial example. Because it was so trivial, you could take a guess that it was one of a couple possible options. But had it been a much more complex transition, you simply would not know.
Transitions from one schema to another can involve more than just adding, deleting, or renaming columns. I gave a little example above in my User.update_all. Whatever code you might need to execute to migrate data to the new schema, you can put in the migration.
When people say migrations are about "versioning the database", they don't just mean versioning the snapshot of the schema. They mean the ability to move between those schema states, and triggering all of the actions involved in going from one state to another.
When is it acceptable to raise an ActiveRecord::IrreversibleMigration exception in the self.down method of a migration? When should you take the effort to actually implement the reverse of the migration?
If you are dealing with production-grade systems, then yes, it is very bad. If it is your own pet project, then anything is allowed (if nothing else, it will be a learning experience :) though chances are that sooner rather than later, even in a pet project, you will find yourself having written off a reverse migration only to need to undo that migration a few days later, be it via rake or manually.)
In a production scenario, you should always make the effort to write and test a reversible migration in the eventuality that you go through it in production, then discover a bug which forces you to roll back (code and schema) to some previous revision (pending some non-trivial fix -- and an otherwise unusable production system.)
Reverse migrations range from mostly trivial (removing columns or tables that were added during the migration, and/or changing column types, etc.) to somewhat more involved (executing JOINed INSERTs or UPDATEs), but nothing is so complex as to justify "sweeping it under the rug". If nothing else, forcing yourself to think of ways to achieve reverse migrations can give you new insight into the very problem that your forward migration is fixing.
You might occasionally run into a situation where a forward migration removes a feature, resulting in data being discarded from the database. For obvious reasons, the reverse migration cannot resuscitate discarded data. In such cases, one could recommend having the forward migration automatically save the data in case of a rollback (dump it to YAML, copy/move it to a special table, etc.) as an alternative to outright failure, but you don't have to: the time required to test such an automated procedure could exceed the time required to restore the data manually, should the need arise. Even then, instead of just failing, you can always make the reverse migration conditionally and temporarily fail pending some user action (i.e. test for the existence of some required table that has to be restored manually; if it is missing, output "I have failed because I cannot recreate table XYZ from nothingness; manually restore table XYZ from backup, then run me again, and I will not fail you!")
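For example, a conditionally failing down migration along those lines might look like this (the table name and message are purely illustrative):
def self.down
  unless table_exists?(:users_backup)
    raise ActiveRecord::IrreversibleMigration,
          "Cannot recreate the discarded data from nothingness; restore users_backup from backup, then run me again."
  end
  execute "INSERT INTO users SELECT * FROM users_backup"
end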
If you are destroying data, you can make a backup of it first.
e.g.
def self.up
  # create a backup table before destroying data (MySQL-specific SQL)
  execute %Q[CREATE TABLE backup_users SELECT * FROM users]
  remove_column :users, :timezone
end

def self.down
  add_column :users, :timezone, :string
  execute %Q[UPDATE users U LEFT JOIN backup_users B ON (B.id = U.id) SET U.timezone = B.timezone]
  execute %Q[DROP TABLE backup_users]
end
In a production scenario, you should always make the effort to write and test a reversible migration in the eventuality that you go through it in production, then discover a bug which forces you to roll back (code and schema) to some previous revision (pending some non-trivial fix -- and an otherwise unusable production system.)
Having a reversible migration is fine for development and staging, but assuming well tested code it should be extremely rare that you would ever want to migrate down in production. I build into my migrations an automatic IrreversibleMigration in production mode. If I really needed to reverse a change, I could use another "up" migration or remove the exception. That seems sketchy though. Any bug that would cause a scenario this dire is a sign that the QA process is seriously screwed up.
Feeling like you need an irreversible migration is probably a sign you've got bigger problems looming. Maybe some specifics would help?
As for your second question: I always take the 'effort' to write the reverse of migrations. Of course, I don't actually write the .down, TextMate inserts it automatically when creating the .up.
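A sketch of that "automatic IrreversibleMigration in production" idea (the column is just an example):
def self.down
  raise ActiveRecord::IrreversibleMigration if Rails.env.production?
  remove_column :users, :timezone
end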
Reversible Data Migration makes it easy to create reversible data migrations using YAML files.
class RemoveStateFromProduct < ActiveRecord::Migration
  def self.up
    backup_data = []
    Product.all.each do |product|
      backup_data << {:id => product.id, :state => product.state}
    end
    backup backup_data
    remove_column :products, :state
  end

  def self.down
    add_column :products, :state, :string
    restore Product
  end
end
IIRC, you'll have the IrreversibleMigration when changing a datatype in the migration.
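For example, a sketch of why a datatype change ends up irreversible (the column names are illustrative):
def self.up
  change_column :users, :zipcode, :integer
end

def self.down
  # The original string values (and the exact old type) can't be recovered,
  # so the down direction can only refuse.
  raise ActiveRecord::IrreversibleMigration
end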
I think another situation when it's ok is when you have a consolidated migration. In that case a "down" doesn't really make sense, as it would drop all the tables (except tables added after the consolidation). That's probably not what you'd want.
I don't have a Rails environment set up and this is actually quite hard to find a quick answer for, so I'll ask the experts.
When Rails creates a table based on your "model" that you have set up, does Rails create a table that mirrors this model exactly, or does it add in more fields to the table to help it work its magic? If so, what other fields does it add and why? Perhaps you could cut and paste the table structure, or simply point me to a doc or tutorial section that addresses this.
If you're building a completely new application, including a new database, then you can build the whole back end with migrations. Running
ruby script/generate model User name:string
produces both a user.rb file for the model and a migration:
class CreateUsers < ActiveRecord::Migration
  def self.up
    create_table :users do |t|
      t.string :name

      t.timestamps
    end
  end

  def self.down
    drop_table :users
  end
end
You can see that by default the generate script adds "timestamps" (created_at and updated_at), and they're managed automatically if allowed to remain present.
Not visible, but important, is that an extra column, "id", is created to be the single primary key. It's not compulsory, though - you can specify your own primary key in the model, which is useful if you're working with a legacy schema. Assuming you retain id as the key, Rails will use whatever RDBMS-specific features are available for generating new key values.
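As a sketch of that legacy-schema case (the table and column names below are made up; on older Rails versions the setters are set_table_name / set_primary_key, on newer ones the assignment form shown here):
class LegacyUser < ActiveRecord::Base
  self.table_name  = "tbl_users"    # hypothetical legacy table
  self.primary_key = "user_code"    # hypothetical legacy key column
end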
In ActiveRecord, models are created from database tables, not the other way around.
You may also want to look into Migrations, which is a way of describing and creating the database from Ruby code. However, the migration is not related to the model; the model is still created at runtime based on the shape of the database.
There are screencasts related to ActiveRecord and Migrations on the Rails site: http://www.rubyonrails.org/screencasts
Here's the official documentation for ActiveRecord. It agrees with Brad. You might have seen either a different access method or a migration (which alters the tables and thus the model).
I have had a little experience moving legacy databases into Rails and accessing Rails databases from outside scripts. That sounds like what you're trying to do. My experience is in Rails databases built on top of MySQL, so your mileage may vary.
The one hidden field is the most obvious --- the "id" field (an integer) that Rails uses as its default primary key. Unless you specify otherwise, each model in Rails has an "id" field that is a unique, incremented integer primary key. This "id" field will appear automatically in any model generated within Rails through a migration, unless you tell Rails not to do so (by specifying a different field to be the primary key). If you work with Rails databases from outside Rails itself, you should be careful about this value.
The "id" field is a key part of the Rails magic because it is used to define Rails' associations. Say you relate two tables together --- Group and Person. The Group model will have an "id" field, and the Person model should have both its own "id" field and a "group_id" field for the relationship. The value in "group_id" will refer back to the unique id of the associated Group. If you have built your models in a way that follows those conventions of Rails, you can take advantage of Rails' associations by saying that the Group model "has_many :people" and that the Person model "belongs_to :group".
Rails migrations also, by default, want to add "created_at" and "updated_at" fields (the so-called "timestamps"), which are datetime fields. These are maintained by ActiveRecord itself --- not by database magic --- and are set automatically whenever a record is created or modified through Rails. Be aware that if records are created or updated from outside Rails, nothing at the database level will fill these columns in for you.