I have been building our Grails app to AWS Elastic Beanstalk through Jenkins for a while now without issue, jumping and building between branches for years. This became a problem, though, when I added the Grails test suite to the build.
I set up a test database for Jenkins to use on its own and let Grails populate all the table data itself. It worked for several months, until recently I decided to deploy a branch that was around six months old to one of our development environments. As you can guess, a six-month-old branch was missing some columns that existed in more recent releases, and hence in the database, so Grails deleted those columns, then tested and deployed without issue.
The problem arose when I went to deploy a more recent branch to a different environment, and grails test-app started failing with SQL errors because the app was trying to use a column that didn't exist on that table.
I dug into it further and discovered in the logs that where Grails should have been updating the tables, since they already existed and just needed a column added, it was trying to create them from scratch instead. Obviously this caused issues, since the tables already existed and the database never got updated.
Does anyone know how to force grails test-app to update the database tables instead of trying to recreate them? This has never happened in normal use of the app, so I know the issue is localized to the test suite, but the documentation on it is pretty thin, especially for Grails 2.3.11, so I can't find anything.
This ended up being because the old branch that was deployed had a different dbCreate value for the test environment. For some reason, after it made its changes with dbCreate set to "update", switching back to "create-drop" no longer dropped the tables before re-adding them.
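For anyone hitting the same thing, the mismatch was along these lines (a rough sketch of DataSource.groovy; the URL and values are illustrative, not our real config):

// DataSource.groovy on the newer branches
environments {
    test {
        dataSource {
            dbCreate = "create-drop"   // drop and recreate the whole schema for each test run
            url = "jdbc:mysql://localhost/myapp_test"
        }
    }
}

// DataSource.groovy on the six-month-old branch
environments {
    test {
        dataSource {
            dbCreate = "update"        // only alters the existing schema in place
            url = "jdbc:mysql://localhost/myapp_test"
        }
    }
}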
Related
I'm looking to introduce the Grails DBMigrations plugin into an existing application.
My understanding is that after installation the first thing to do is to create the initial changelog.groovy which I've done with the command
dbm-generate-gorm-changelog changelog.groovy
This does create the file correctly, and contains all the necessary commands to recreate the database schema.
Secondly, my understanding is that I should then issue the command dbm-changelog-sync to indicate that the changelog script has already been executed.
What should happen if I then issue the command dbm-gorm-diff?
At this point I'm expecting to see an empty changelog, because the initial schema was created and synced and no changes have been made to any domain classes. However, I see a bunch of entries for modifying column types and a few index creation entries.
Any advice appreciated. I've reached this point because I want to update the database in a production environment, and I don't want to start writing manual tests and SQL in BootStrap to update the DB, as that will surely lead to a maintenance migraine. Using DBMigrations appears to be the way forward, but either I don't understand it, or it's buggy and I don't want to risk using it.
And as others have commented in other threads, I'm restarting the grails console between issuing commands to avoid reloading problems.
Thanks
Dave,
The important thing to remember about the migrations plugin is that the output from dbm-gorm-diff is not meant to be taken as gospel. It is simply meant as a way to hopefully save you some typing. Anything generated automatically by the plugin should be reviewed and analyzed to determine that it is what you actually want.
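For example, a diff will often contain entries along these lines (the table, column, author and id values here are made up) even when you haven't changed anything you care about, and it is perfectly fine to delete the changeSets that don't reflect a change you actually intend:

databaseChangeLog = {
    changeSet(author: "dave (generated)", id: "1406612345678-1") {
        modifyDataType(tableName: "person", columnName: "name", newDataType: "varchar(255)")
    }
    changeSet(author: "dave (generated)", id: "1406612345678-2") {
        createIndex(indexName: "name_idx", tableName: "person") {
            column(name: "name")
        }
    }
}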
I am using Grails 2.3.5 with the database migration plugin in a new project to understand how it works. But sometimes dbm-gorm-diff produces an empty changelog file, even when there are changes.
For example,
I have a Person domain class without any properties.
When initially creating the changelog, it creates two fields, id and version, in the changelog.
After that, I added two fields, name and age, to that Person class, then ran dbm-update and dbm-gorm-diff, which gave me the following:
databaseChangeLog = {
}
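The domain class at that point is simply this (as described above; the field types are assumed):

class Person {
    String name
    Integer age
}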
Sometimes it picks up the changes, and sometimes it doesn't. Please help me understand why it behaves like that.
Using the following tutorial works for me. Make sure you remove dbCreate from your DataSource.groovy. According to the tutorial the workflow is as follows:
Setup
Remove dbCreate from DataSource.groovy
Initially run grails dbm-generate-gorm-changelog changelog.groovy
Sync the changelog with your db by running grails dbm-changelog-sync
Changing domain
Change domain class
Run grails dbm-gorm-diff <your-filename>.groovy --add (see the sketch after these steps)
Run grails dbm-changelog-sync
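For what it's worth, after the dbm-gorm-diff --add step the root changelog.groovy ends up looking roughly like this (a sketch; the file name is whatever you passed to the command):

databaseChangeLog = {

    // changeSets produced by dbm-generate-gorm-changelog for the initial schema
    // (createTable, addPrimaryKey, etc.) appear here ...

    // this line is appended by dbm-gorm-diff <your-filename>.groovy --add
    include file: 'your-filename.groovy'
}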
Hope this helps
I have spent some time searching for the answer to this same issue.
Caveat: I am using the Grails interactive shell to issue commands, including the dbm-* commands.
By brute force alone, I have come to the conclusion that the domain classes are not reliably reloading. To get consistent results (especially when generating new changelog files), any time I modify a domain class, I stop and restart the Grails interactive shell before calling dbm-gorm-diff. I've tried issuing other commands like clean, compile, package and refresh-dependencies, and they just aren't working; the -reloading flag at startup of the Grails interactive shell doesn't seem to make any difference either.
Restarting the Grails interactive shell, however, does seem to work reliably, though it galls me to do so :)
Those who do not use the interactive shell should not be having this problem since the domain classes are loaded with every command call.
This blog has a detailed step-by-step explanation; the "Migrating old databases" section in particular helped us migrate successfully.
I'm considering using Ruby on Rails for my next project. The deployment of a Rails website is easy enough to understand (it sounds like I'll be using Phusion Passenger).
But now I'm trying to figure out the database. I see a lot about "database migrations", which let me update the database using Ruby code. I also see that I can create both an up and a down variant of these migrations.
However, I can only fathom how this works cleanly in a single direction. Imagine if I suddenly say "The color column cannot be null". So, the up will make it required and give all NULL entries a default value. But what will the down do? If you care about it being identical to how it started, you can't just set the default values back to NULL.
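To make the scenario concrete, I'm picturing something like this (the table, column and default value are made up):

class MakeColorRequired < ActiveRecord::Migration
  def up
    # give every existing NULL a default value, then forbid NULLs
    execute "UPDATE widgets SET color = 'red' WHERE color IS NULL"
    change_column :widgets, :color, :string, :null => false
  end

  def down
    # the constraint can be dropped again, but we no longer know which
    # rows were NULL originally, so the data itself is not restored
    change_column :widgets, :color, :string, :null => true
  end
end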
This doesn't really matter much for releases to production. Those will likely just be done in a single direction (the up direction). However, I want to use Gerrit for code reviews, as well as set up a bot to run a build before allowing check-ins...
So how could that work? From one code review to the next, the build server will check out the new set of code and run the migrations? But when this happens, it won't even retain the migration code from before, so how could it run the down steps? As a simpler example, I don't see how I could check out an old version of the code and "db migrate" backwards.
Yes, you can't check out an old version of the code and then run a down migration from a newer version of the code. You would need to run the down migration before rolling back to the older code.
There are many, many cases where a down migration is just not practical or possible. That's not necessarily a bad thing. It just means that you have defined a 'point of no return', where you won't be able to restore your database to an earlier state.
Migrations like creating a table or adding a column are easily reversed by simply destroying that table or removing that column. However, if you are doing something more complex, such as adding default values or moving data around, then you can tell Rails that it's not possible to reverse this migration:
def down
  raise ActiveRecord::IrreversibleMigration
end
I would recommend that Gerrit should never assume anything about the database. It should start with a fresh database each time a new version is deployed, and run db:migrate to run all your migrations. You can use gems like factory_girl to populate your app with demo data for testing purposes.
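A minimal factory_girl sketch, assuming a hypothetical Widget model:

# spec/factories/widgets.rb (hypothetical)
FactoryGirl.define do
  factory :widget do
    color "red"
  end
end

# then, wherever you seed demo data for the review build:
FactoryGirl.create_list(:widget, 10)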
I have a development env and a beta env for the app that I am building. Yesterday I came across a strange error.
I wrote a migration to change one of the tables and it worked fine in my dev env. Once I deployed the changes to the beta env, the migration ran, but when I accessed the page it started giving me trouble. The new columns that I added all came up as undefined columns in the beta environment. So I looked at the schema on the beta side using the column_names method on the class, and it still showed the old columns, while logging into MySQL there and checking the fields in the table shows me the new ones.
Anyone have any idea why the schema is not updating even though the database was updated? Is there any way to refresh the schema for a class, like some method or something?
I'm having this problem too. So far, the only solution I've found is to rename the table.
I'm not a Rails developer (currently) so please forgive my ignorance on this.
One thing I've always liked about Rails is migrations and how they fill a need that's common across all languages and platforms. With that said, I am curious to understand how a certain scenario plays out with the changes made in 2.1.
Rails 2.1 and higher, from what I can tell, made two changes to the migrations logic. The first was to use timestamp-based names for generated scripts, in order to reduce the probability of two developers creating migrations with the same number before adding their files to source control. So instead of 002_test.rb, a generated script is now named 20090729123456_test.rb.
The second was that the schema_info table was replaced with the schema_migrations table, which holds the list of applied migrations rather than just the latest version number.
Looking through the Rails source, I noticed that it takes the "current version" of the schema to be the max version found in the schema_migrations table.
Here's the scenario I'm trying to figure out:
Developer A generates a new script: 20090729120000_test.rb.
Developer B generates a new script: 20090729130000_test.rb.
Developer B migrates his script to the database first by not specifying the version number and assuming that Developer A's script isn't added yet.
What happens when Developer A adds his script and tries to migrate to the latest version since his script version (based on the time stamp) is less than the currently applied version now?
I'm not positive, but I believe he would have to do a "rake db:rollback" to undo Developer B's migration, then run "rake db:migrate" to apply both of them in the proper order. Of course, if the two developers are working independently on tables that require no integration with one another (as this case shows, since Developer B didn't have to wait for Developer A to run his migration), Developer A can simply bump his own timestamp to one greater than Developer B's and the migrations will be in the proper order once again.
The short answer is: don't worry about it.
rake db:migrate will attempt to run any migrations that are not found in the schema_migrations table. It doesn't matter if there are newer migrations that have already been run.
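Roughly, what db:migrate does is a set difference between the migration files on disk and the versions recorded in schema_migrations. Illustrative Ruby only, using the timestamps from the question:

on_disk = ["20090729120000", "20090729130000"]  # timestamps of the migration files
applied = ["20090729130000"]                    # rows in schema_migrations (B already ran)
pending = on_disk - applied
# => ["20090729120000"]  A's migration simply runs next, even though a
#    "newer" timestamp has already been applied.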
If B is dependent on A and must be run in that order, then you might have a problem, but that's an issue between the developers.