I am working on an application that is already deployed to some test and staging systems and various developers' workstations. I need to add some additional reference data but I'm not sure how to add it.
Most of the advice says use seed.rb, however, my understanding is that this is only run once, when the application is initially deployed. Since we don't want to rebuild the test and staging databases just so that we can add 1 row of reference data, is there another way to add the data?
I'm thinking of using a db migration, is this the correct approach?
Thanks
Structure your seed.rb file to allow ongoing creation and updating of data. You are not limited to running a seed file only once, and if you think of it as only for initial deployment you will miss out on the flexibility it can offer in setting reference data.
A seed file is just ruby so you can do things like:
user = User.find_or_initialize_by(email: 'bob@example.com')
user.name = 'Bob'
user.password = 'secret'
user.role = 'manager'
user.save!
This will create new data if it doesn't exist or update the data if it finds some.
If you structure your seed file correctly you can also create and update dependent objects.
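For example, a minimal sketch of seeding a dependent object (the Team model and its association are hypothetical, not from the original answer):
team = Team.find_or_initialize_by(name: 'Support')   # parent record
team.save!
user = User.find_or_initialize_by(email: 'bob@example.com')
user.name = 'Bob'
user.team = team                                      # dependent object points at the parent
user.save!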
I recommend using the bang save (save!) to ensure that exceptions are raised when an object cannot be saved. This is the easiest way to debug the seed.
I use the seedbank gem to provide more structure to my seed data, including setting data per environment, dependent seeds and more.
I don't recommend using migrations for seed data. There is a lack of flexibility (how do you target seed data to just one environment for instance) and no real way to build up a reusable set of data that can be run at any time to refresh a particular environment. You would also have a set of migrations which have no reference to your schema and you would have to create new migrations every time you wanted to generate new or vary current data.
You can use a migration, but that's not the safest option you have.
Say, for example, you add a record to a table via a migration, and then later you change that table's schema. When you then install the app somewhere else, you won't be able to run rake db:migrate.
Seeds are always advisable because rake db:seed can be run on a completely migrated schema.
If it's just for a record, go for the rails console.
It's best to use an idempotent method like this in seed.rb or another task called by seed.rb:
Contact.find_by_email("test@example.com") || Contact.create(email: "test@example.com", phone: "202-291-1970", created_by: "System")
# This saves you an update to the DB if the record already exists.
Or similar to @nmott's:
Contact.find_or_initialize_by_email("test@example.com").update_attributes(phone: "202-291-1970", created_by: "System")
# this performs an update regardless, but it may be useful if you want to reset your data.
Or use assign_attributes instead of update_attributes if you want to assign multiple attributes before saving.
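For instance, a hedged sketch of that assign_attributes variant, reusing the hypothetical Contact example (the hash-style finder is the Rails 4+ form):
contact = Contact.find_or_initialize_by(email: "test@example.com")
contact.assign_attributes(phone: "202-291-1970", created_by: "System") # assigns without saving
contact.save!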
I use the seed file to add instances to new or existing tables all the time. My solution is simple. I just comment out all the other seed data in the db/seeds.rb file so that only the new seed data is live code. Then run bin/rake db:seed.
I did something like this in seed.rb
users_list = [
  { id: 1, name: "Diego", age: "25" },
  { id: 2, name: "Elano", age: "27" }
]

until users_list.empty?
  begin
    User.create(users_list)  # insert every remaining record in one go
    users_list = []          # everything was created, so we can stop
  rescue ActiveRecord::RecordNotUnique
    users_list = users_list.drop(1) # the first id already exists; drop it and retry
  end
end
If an item in the list has an id that already exists, an exception is raised; we then drop that item and try again until the users_list array is empty.
This way you don't need to look up each object before inserting it, but you won't be able to update values that were already inserted, as in @nmott's code.
Instead of altering seeds.rb, which you probably want to keep for seeding new databases, you can create a custom Rake task (RailsCast #66 Custom Rake Tasks).
You can create as many Rake tasks as you want. For instance, let's say you have two servers, one running version 1.0 of your app and the other running 1.1, and you want to upgrade both to 1.2. Then you can create lib/tasks/1-0-to-1-2.rake and lib/tasks/1-1-to-1-2.rake, since you may need different code depending on the version of your app.
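A minimal sketch of what such a task could look like; the task name and the Country data are illustrative, not from the question:
# lib/tasks/1-0-to-1-2.rake
namespace :upgrade do
  desc "Add reference data introduced between 1.0 and 1.2"
  task to_1_2: :environment do
    # idempotent: only creates the row if it is missing
    Country.find_or_create_by(code: "US") do |c|
      c.name = "United States"
    end
  end
end
Run it with rake upgrade:to_1_2 on whichever servers need it.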
Related
What I want to do is to dump the database into a custom created .rb file.
I found a seed_dump gem that allows me to do this:
rails db:seed:dump FILE=db/seeds/my_db_file_name.rb
Then I noticed that my database was out of order, so I found this on SO to include the ids:
rails db:seed:dump FILE=db/seeds/my_db_file_name.rb EXCLUDE=[]
Seemed fine until I wanted to add a new record to my database. It turned out that resetting the primary keys solved the problem:
def reset_pk
  ActiveRecord::Base.connection.tables.each do |t|
    ActiveRecord::Base.connection.reset_pk_sequence!(t)
  end
  redirect_to root_url
end
What I now want to do is simplify the dumping process because, for now, every time I dump the database the records are "out of order", which I will explain below.
Let's assume I have two models: Lab and Offer. A Lab can have many offers. So in order to create an Offer object I first have to create a Lab object. But when I dump the database my file looks like this:
Offer.create...
Offer.create...
Offer.create
Lab.create...
Lab.create...
Lab.create...
and if I try to seed it, it won't work, because Offers are created before Labs when it should be the other way around.
My question is, is there a way to actually keep the relationships while dumping the database so Labs get created first?
EDIT
I managed to do something like this:
rails db:seed:dump FILE=db/seeds/my_db_file_name.rb EXCLUDE=[] MODELS="Lab, Offer"
This one keeps the order as I want it, but I wonder if there is a simpler way (in case of having 15 or 20 models instead of just 2).
You could try dumping/restoring one table at a time in the order you wish.
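A hedged sketch of that per-table approach: dump each model into its own numbered file, then load the files in dependency order from db/seeds.rb (file names are illustrative):
#   rails db:seed:dump FILE=db/seeds/01_labs.rb   MODELS="Lab"
#   rails db:seed:dump FILE=db/seeds/02_offers.rb MODELS="Offer"
# db/seeds.rb then loads them in name order:
Dir[Rails.root.join('db', 'seeds', '*.rb').to_s].sort.each { |file| load file }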
I'm new to rails and I haven't been able to find a definitive answer to this question.
Let's say I have
Project.create!([{title: "foo", description: "bar"}])
in my seeds.rb file and then run
$ rake db:seed
twice. Would there be two near-identical entries in the database or would it override the initial entry?
It will duplicate.
If you want to run it multiple times but prevent duplication, you could:
Add a uniqueness validation on a key field, e.g. validates_uniqueness_of :key_attribute
Test the count of your table like:
MyClass.create if MyClass.count == 0
A better solution might be to use the find_or_create_by method. See the docs: http://easyactiverecord.com/blog/2014/03/24/using-find-or-create-with-multiple-attributes/
It just runs the file. Rails does nothing for you, as far as preventing creation of duplicate seed data. If your file creates a record, it will attempt to create that record each time you seed. It's completely up to you to prevent this, in the case that you don't want duplicate seed data.
If you want to create a record unless it already exists, use find_or_create_by:
Project.find_or_create_by_title_and_description "foo", "bar"
This will create a Project with the given title and description unless it already exists, letting you run rake db:seed as many times as you want without creating duplicates.
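Note that find_or_create_by_title_and_description is the old dynamic-finder syntax; in Rails 4+ the equivalent is the hash form:
Project.find_or_create_by(title: "foo", description: "bar")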
I'm working on a Rails 4 app. We have a seed.rb file that contains some fundamental data that our team works with, e.g.:
Country.delete_all
Country.create!(name: "United States", code: "US", description: "")
Every once in a while, we need to add more seed data to this file. But running rake db:seed would first wipe the records in the referenced tables before inserting them again. Since the ID columns in these tables are auto-generated, the old IDs would be lost. So, in someone's local development environment, lots of personal test data would be invalidated because its foreign keys would be broken. Removing .delete_all from each model in seed.rb could pass as a solution, but running db:seed would then output lots of errors.
So, I'm looking for a best practice for generating seed data while preserving the old primary keys (IDs in my case).
Thanks!
If you don't need to remove them first you can use
Country.where(name: "United States").first_or_create
This means the record will only be created if it doesn't already exist, so only the new seeds get created, leaving the old records untouched.
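If the record also needs other attributes filled in when it is first created, first_or_create accepts a block; a sketch using the Country columns from the question:
Country.where(name: "United States").first_or_create do |country|
  country.code = "US"          # only runs when the record is being created
  country.description = ""
end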
I'm a junior Rails developer and at work we faced the following problem:
We needed to update the value of a column for only one record.
What we did was create a migration like this:
class DisableAccessForUser < ActiveRecord::Migration
  def change
    User.where(name: "User").first.update_column(:access, false)
  end
end
Are migrations only for schema changes?
What other solutions do you suggest?
PS: I can only change it with code. No access to console.
The short version is, since migrations are only for schema changes, you wouldn't want to use them to change actual data in the database.
The main issue is that your data-manipulating migration(s) might be ignored by other developers if they load the DB structure using either rake db:schema:load or rake db:reset. Both merely load the latest version of the structure from the schema.rb file and do not touch the migrations.
As Nikita Singh also noted in the comments, I too would say the best method of changing row data is to implement a simple rake task that can be run as needed, independent of the migration structure. Or, for a first time installation, the seed.rb file is perfect to load initial system data.
Hope that rambling helps.
Update
Found some documentation in some "official" sources:
Rails Guide for Migrations - Using Models in your Migrations. This section gives a description of a scenario in which data-manipulation in the migration files can cause problems for other developers.
Rails Guide for Migrations - Migrations and Seed Data. Same document as above; it doesn't really explain why it is bad to put seed or data manipulation in migrations, it merely says to put all of that in the seeds.rb file.
This SO answer. This person basically says the same thing I wrote above, except they provide a quote from the book Agile Web Development with Rails (3rd edition), partially written by David Heinemeier Hansson, creator of Rails. I won't copy the quote, as you can read it in that post, but I believe it gives you a better idea of why seed or data manipulation in migrations might be considered a bad practice.
Migrations are fine for schema changes. But on heavily collaborative projects, where you pull code every day from a lot of developers, chances are you will miss some migrations, because migrations depend on timestamps. That's not a problem for schema changes, but it is for value-update migrations.
So what we do is create rake tasks in a single namespace to update table values (taking care not to overwrite anything), and invoke all the rake tasks in that namespace whenever we update the code from Git.
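A hedged sketch of what such a task might look like; the namespace, task names and the access column are illustrative:
# lib/tasks/data_updates.rake
namespace :data_updates do
  desc "Backfill the access flag without overwriting values that are already set"
  task backfill_access: :environment do
    User.where(access: nil).update_all(access: true)  # only touches missing values, safe to re-run
  end

  desc "Run every data update task after pulling new code"
  task all: [:backfill_access]
end
After pulling from Git you would run rake data_updates:all.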
Making data changes using classes in migrations is dangerous because it's not terribly future proof. Changes to the class can easily break the migration in the future.
For example, let's imagine you were to add a new column to user (sample_group) and access that column in a Rails lifecycle callback that executes on object load (e.g. after_initialize). That would break this migration. If you weren't skipping callbacks and validations on save (by using update_column) there'd be even more ways to break this migration going forward.
When I want to make data changes in migrations I typically fall back to SQL. One can execute any SQL statement in a migration by using the execute() method. The exact SQL depends on the database in use, but you should be able to come up with a DB-appropriate query. For example, in MySQL I believe the following should work (MySQL won't let an UPDATE select from the same table in a subquery, but it does allow ORDER BY and LIMIT on a single-table UPDATE):
execute("UPDATE users SET access = 0 ORDER BY id LIMIT 1;")
This is far more future proof.
There is nothing wrong with using a migration to migrate the data in your database, in the right situation, if you do it right.
There are two related things you should avoid in your migrations (as many have mentioned), neither of which preclude migrating data:
It's not safe to use your models in your migrations. The code in the User model might change, and nobody is going to update your migration when that happens. So if a co-worker takes a vacation for 3 months, comes back, and tries to run all the migrations that happened while she was gone, but somebody renamed the User model in the meantime, your migration will be broken and will prevent her from catching up. This just means you have to use SQL, or (if you are determined to keep even your migrations implementation-agnostic) include an independent copy of an ActiveRecord model directly in your migration file, nested under the migration class (a sketch follows at the end of this answer).
It also doesn't make sense to use migrations for seed data, which is, specifically, data that is to be used to populate a new database when someone sets up the app for the first time so the app will run (or will have the data one would expect in a brand new instance of the app). You can't use migrations for this because you don't run migrations when setting up your database for the first time, you run db:schema:load. Hence the special file for maintaining seed data: seeds.rb. This just means that if you do need to add data in a migration (in order to get production and everyone's dev data up to speed), and it qualifies as seed data (necessary for the app to run), you need to add it to seeds.rb too!
Neither of these, however, mean that you shouldn't use migrations to migrate the data in existing databases. That is what they are for. You should use them!
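A hedged sketch of the independent-model-copy approach from the first point above; the class names and the access column are illustrative:
class BackfillUserAccess < ActiveRecord::Migration
  # a minimal copy of the model, so later changes to app/models/user.rb
  # cannot break this migration
  class User < ActiveRecord::Base
    self.table_name = 'users'
  end

  def up
    User.reset_column_information
    User.where(access: nil).update_all(access: true)
  end

  def down
    # nothing to undo for a data backfill
  end
end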
A migration is simply a structured way to make database changes, both schema and data.
In my opinion there are situations in which using migrations for data changes is legitimate.
For example:
If you are holding data which is mostly constant in your database but changes annually, it is fine to make a migration each year to update it. For example, if you list the teams in a soccer league, a migration would be a good way to update the current teams each year.
If you want to mass-alter an attribute of a large table. For example, if you had a slug column on your users where the name "some user" was translated to the slug "some_user", and you now want to change it to "some.user", that is something I'd do with a migration (a sketch follows below).
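A hedged sketch of that slug rewrite; the table and column names are assumptions:
class RewriteUserSlugs < ActiveRecord::Migration
  def up
    # turn "some_user" style slugs into "some.user" style
    execute("UPDATE users SET slug = REPLACE(slug, '_', '.')")
  end

  def down
    execute("UPDATE users SET slug = REPLACE(slug, '.', '_')")
  end
end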
Having said that, I wouldn't use a migration to change a single user attribute. If this is something which happens occasionally you should make a dashboard which will allow you to edit this data in the future. Otherwise a rake task may be a good option.
This question is old, and I think the Rails approach has changed over time here. Based on https://edgeguides.rubyonrails.org/active_record_migrations.html#migrations-and-seed-data it's OK to feed new columns with data in a migration. To be more precise, your migration should also contain a "down" block:
class DisableAccessForUser < ActiveRecord::Migration
  def up
    User.where(name: "User").first.update_column(:access, false)
  end

  def down
    User.where(name: "User").first.update_column(:access, true)
  end
end
If you use seeds.rb to pre-fill data, don't forget to include new column value there, too:
User.find_or_create_by(id: 0, name: 'User', access: false)
If I remember correctly, changing particular records may work, but I'm not sure about that.
In any case, it isn't good practice; migrations should be used for schema changes only.
For updating one record I would use the console. Just type 'rails console' in the terminal and enter the code to change the attribute.
I'm building a Rails application using MongoDB as the back-end and MongoMapper as the ORM tool. Suppose in version 1, I define the following model:
class SomeModel
  include MongoMapper::Document

  key :some_key, String
end
Later in version 2, I realize that I need a new required key on the model. So, in version 2, SomeModel now looks like this:
class SomeModel
  include MongoMapper::Document

  key :some_key, String
  key :some_new_key, String, :required => true
end
How do I migrate all my existing data to include some_new_key? Assume that I know how to set a reasonable default value for all the existing documents. Taking this a step further, suppose that in version 3, I realize that I really don't need some_key at all. So, now the model looks like this
class SomeModel
  include MongoMapper::Document

  key :some_new_key, String, :required => true
end
But all the existing records in my database have values set for some_key, and it's just wasting space at this point. How do I reclaim that space?
With ActiveRecord, I would have just created migrations to add the initial values of some_new_key (in the version1 -> version2 migration) and to delete the values for some_key (in the version2 -> version3 migration).
What's the appropriate way to do this with MongoDB/MongoMapper? It seems to me that some method of tracking which migrations have been run is still necessary. Does such a thing exist?
EDITED: I think people are missing the point of my question. There are times where you want to be able to run a script on a database to change or restructure the data in it. I gave two examples above, one where a new required key was added and one where a key can be removed and space can be reclaimed. How do you manage running these scripts? ActiveRecord migrations give you an easy way to run these scripts and to determine what scripts have already been run and what scripts have not been run. I can obviously write a Mongo script that does any update on the database, but what I'm looking for is a framework like migrations that lets me track which upgrade scripts have already been run.
Check out Mongrations... I just finished reading about it and it looks like what you're after.
http://terrbear.org/?p=249
http://github.com/terrbear/mongrations
Cheers! Kapslok
One option is to use the update operation to update all of your data at once. Multi update is new in the development releases so you'll need to use one of those.
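For example, a hedged sketch using the driver collection that MongoMapper exposes; the default value and the exact option names are assumptions based on the older Ruby driver:
# backfill the new key on every document
SomeModel.collection.update({}, { '$set' => { 'some_new_key' => 'default' } }, :multi => true)
# later, drop the old key to reclaim the space it uses
SomeModel.collection.update({}, { '$unset' => { 'some_key' => 1 } }, :multi => true)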
You can try this contraption I just made, but it only works with mongoid and rails 3 (beta 3) at the moment. http://github.com/adacosta/mongoid_rails_migrations . It'll be upgraded to rails 3 when it goes final.
Also another gem for MongoMapper migrations https://github.com/alexeypetrushin/mongo_mapper_ext
Mongrations is a super old gem, completely deprecated. I recommend NOT using it.
Exodus is a really cool migration framework for Mongo, that might be what you want:
https://github.com/ThomasAlxDmy/Exodus
We just built this one: https://github.com/eberhara/mongration - it is a regular node module (you can find it on npm).
We needed a good mongodb migration framework, but could not find any - so we built one.
It has lots of features that regular migration frameworks lack:
Checksums (it raises an error when a previously run migration does not match its old version)
Persists migration state to mongo (there is no regular state file)
Full support to replica sets
Automatic rollback handling (developers must specify the rollback procedures)
Ability to run multiple migrations (sync or async) at the same time
Ability to run migrations against different databases at the same time
Hope it helps!
Clint,
You can write code to do updates, though it seems that updating a record based on its own fields is not supported.
In such a case, I did the following and ran it against the server:
------------------------------
records = Patient.all
records.each do |p|
  encounters = p.encounters
  if encounters.nil? || encounters.empty?
    mra = p.updated_at
    #puts "\tpatient...#{mra}"
  else
    mra = encounters.last.created_at
    #puts "\tencounter...#{mra}"
  end
  old = p.most_recent_activity
  p.most_recent_activity = mra
  p.save!
  puts "#{p.last_name} mra: #{old} now: #{mra}"
end
------------------------------
I bet you could hook into ActiveRecord::Migration to automate and track your "migration" scripts.
MongoDB is a schema-less database. That's why there are no migrations. In the database itself, it doesn't matter whether the objects have the key :some_key or the key :some_other_key at any time.
MongoMapper tries to enforce some restrictions on this, but since the database is so flexible, you will have to maintain those restrictions yourself. If you need a key on every object, make sure you run a script to update those keys on pre-existing objects, or handle the case of an object that doesn't have that key as you come across them.
I am fairly new to MongoDB myself, but as far as I can see, due to the flexibility of the schema-less db this is how you will need to handle it.