I'm coming from the world of Python/Django, where our deployment flow was usually as follows:
tar/gz the code release
unpack it on production
run the DB migration manually via South
run the app
Grails is a little different from Python/Django, mainly because the end product is a compiled WAR. My biggest problem is the manual DB migration. I don't want to run it automatically. One suggested solution I saw is to use dbm-update-sql to generate a manual SQL file, but in order to produce it I need my local DB to be at the same version as the production DB, which I don't like.
Any other suggestions? It looks to me like the only way to run the migration manually is to deploy the source code on the machine and run the dbm commands there.
You can run dbm-update-sql against the production database; it won't make any changes, since like all of the -sql scripts it's there to show you what would be done in the event of a real migration. To be safe, create a user that doesn't have permission to make any changes and use that when you run the script. Create a custom environment in DataSource.groovy with that user's info and the production connection info, and specify that environment when running the script.
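A sketch of what such a custom environment might look like in DataSource.groovy (the environment name "prodReview", the host, and the "dbm_readonly" user are placeholders; grant that DB user SELECT-only privileges):

```groovy
// DataSource.groovy
environments {
    prodReview {
        dataSource {
            driverClassName = "com.mysql.jdbc.Driver"
            url = "jdbc:mysql://your-prod-host/yourapp_prod"
            username = "dbm_readonly"
            password = "secret"
            // no dbCreate setting here, so Hibernate never alters the schema
        }
    }
}
```

Custom environments are selected with the -Dgrails.env flag, e.g. grails -Dgrails.env=prodReview dbm-update-sql update.sql.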
I'd highly recommend not deploying the source to your production systems. Since you want to control your database migrations manually, outside the normal flow of a Grails application, I'd recommend looking at Liquibase as a standalone tool.
Obviously, since you don't want to maintain a copy of your production schema to diff against, this is going to be a lot of manual work for you (e.g. keeping your changes up to date).
The database migration plugin can be used to create SQL scripts that you run manually, but you do need a production schema to diff against. I'd recommend you go this route, but you seem set against doing so.
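If you do go the standalone-Liquibase route, its CLI can emit the SQL for pending changesets without applying anything. A sketch, assuming a MySQL production database (the connection details, jar path, and file names here are placeholders; check the updateSQL command's behavior against your Liquibase version):

```shell
# updateSQL prints the SQL that *would* run, without touching the schema;
# redirect it to a file for manual review and execution.
liquibase --driver=com.mysql.jdbc.Driver \
          --classpath=mysql-connector-java.jar \
          --url=jdbc:mysql://prod-db-host/myapp_prod \
          --username=readonly_user \
          --password=secret \
          --changeLogFile=changelog.xml \
          updateSQL > pending-changes.sql
```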
I've never worked with Capistrano before, and I'm currently fighting the urge to scrap it and go back to my old manual ways.
As I understand it, Capistrano v3 does not create the initial database, because they feel that is the duty of the DB administrator.
So I must be missing something: I have followed their instructions, but the initial cap staging deploy fails when it gets to the rake db:migrate step because the database does not exist.
Because of the failure, the symlink for current -> releases never gets created.
Is it just accepted general practice that we SSH into our boxes, cd into the first folder under releases, and manually run rake db:create...?
And then from there, am I supposed to just run cap staging deploy again so that it finishes creating the symlinks?
This seems hacky for something that is supposed to make things easier, and I'm not sure whether I'm understanding it correctly.
Thanks.
It does make sense to leave certain things out of a deployment: the initial setup and routine deployments are very separate functions and require different specialties, or in large deployments even different skill sets. That said... I'm totally with you: on the first deploy, having to manually set up the database and certain files (specifically linked files like secrets.yml) is a step that just wastes my time.
I use this plugin:
https://github.com/capistrano-plugins/capistrano-postgresql
Just add require 'capistrano/postgresql' to your Capfile as you would any plugin,
then run cap staging setup before the first time you run cap staging deploy.
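For reference, a minimal sketch of that setup, based on the plugin's README (gem and task names may differ across versions, so double-check against the repo above):

```ruby
# Gemfile
gem 'capistrano-postgresql'

# Capfile
require 'capistrano/postgresql'
```

After bundle install, run cap staging setup once; per the README it creates the database user, the database, and database.yml on the server, after which cap staging deploy can run its migrations normally.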
To give you some context, I'm trying to use Figaro to safely add environment variables without having to worry about security risks. The problem is that I can't seem to get Engine Yard to play nice with production.
I did a touch application.yml, then vim application.yml, i, and command+v to paste in the API keys and whatnot. I know the ENV['VARIABLES'] work, because development and all my RSpec and Cucumber tests (which use the APIs) pass.
When I've got everything ready, I add this to the .gitignore:
# Ignore application configuration
/config/application.yml
Afterwards, I deploy the site. I open it up, and data isn't going to the APIs anymore. OK...
I cd into config and discover application.yml isn't there anymore. I paste it back in... and redeploy the site, since now it understands it has to ignore that file, and I'm still not seeing changes on production. I check back... and it's gone again!
Stumped on what's going on.
Simply putting a file into your deployed application's filesystem will not work, because you get a clean environment each time you deploy. EngineYard cannot know that you want that particular file copied to that particular location without a little extra work.
Their official recommendation is to put your YAML configuration files in /data/<app>/shared/config and symlink them to /data/<app>/current/config each time you deploy using deploy hooks.
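A minimal sketch of such a deploy hook, assuming EngineYard's deploy-hook conventions (hooks live in your app's deploy/ directory and expose a run helper and a config object; verify the exact API against EngineYard's deploy hook docs for your stack):

```ruby
# deploy/before_migrate.rb
# Symlink the shared application.yml into the release being deployed,
# so Figaro can find it even though the file is git-ignored.
run "ln -nfs #{config.shared_path}/config/application.yml " \
    "#{config.release_path}/config/application.yml"
```

You still need to place the real application.yml into /data/<app>/shared/config by hand once, e.g. over SSH; after that every deploy picks it up.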
I'm trying to follow the tutorial here.
I have declared the dependency for the database-migration plugin in my BuildConfig.groovy file with runtime ":database-migration:1.0" and then compiled. I have also commented out the dbCreate line of my production settings in my DataSource.groovy file. My production database is empty, with no tables in it.
I then try to run the two commands to generate my initial change log:
grails dbm-create-changelog
grails prod dbm-generate-gorm-changelog --add changelog-1.0.groovy
The problem is the first command creates tables in my development database, not my production database. Then the second command fails to create the changelog-1.0.groovy file it is supposed to create (I assume) because the production database never had any tables created. I get several errors saying Hibernate failed to index the database, and a bunch of errors like this:
| Error 2012-07-10 08:40:28,704 [Compass Gps Index [pool-11-thread-2]] ERROR util.JDBCExceptionReporter - Table 'mygrailsapp_prod.some_class' doesn't exist
Even when I comment out my development settings in my DataSource.groovy file, Grails still looks for my development database. I should point out, though, that if I drop the prod off the second command, the changelog-1.0.groovy file generates fine, though I'm unclear whether it will somehow be messed up because it was generated off the development database (which had no tables in it until I ran the first command) instead of the production database.
What am I doing wrong here?
The problem is the first command creates tables in my development database, not my production database.
That's probably because it is running against the development environment, and you still have its dbCreate set to "update".
Then the second command fails to create the changelog-1.0.groovy file it is supposed to create (I assume) because the production database never had any tables created.
That's not entirely accurate. From the link you posted it says after that step: "Note that your database will remain empty!" The database tables will only get created when you execute a dbm-update command. That's when the changelog actually executes.
I think the blog you linked to isn't entirely accurate about the prod switch for the second command. Nothing about your domain classes is environment-specific, so just leave that off and you should be able to keep going. I'm not sure why that error is being thrown; it really doesn't make sense.
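Putting the answer together, the sequence would look roughly like this (the prod prefix is only needed on the final step, which is what actually creates the tables):

```shell
grails dbm-create-changelog                                     # initial changelog, run in dev
grails dbm-generate-gorm-changelog --add changelog-1.0.groovy   # changelog from your GORM classes
grails prod dbm-update                                          # applies the changesets to production
```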
Following the documentation, I was able to get the database-migration plugin working on an existing project that already has quite a few tables and is running in production. I went through the following process locally:
Pulled down the latest production database
Sourced the production database into my local dev database
grails dbm-generate-changelog changelog.groovy
grails dbm-changelog-sync
grails dbm-gorm-diff 2012-06-25-latest.groovy --add
grails dbm-update
I understand why I had to do each of those locally to get to the point of applying future change sets. However, now I want to run my 2012-06-25-latest.groovy on one of my test servers, which already has the latest database based on our production database.
I tried just running dbm-update, but without the sync it failed trying to create tables that already exist. So I ran dbm-changelog-sync, but then when running dbm-update it didn't actually apply the latest file.
I know that I can add a context (tag) to the change sets and specify that context when running dbm-update, but I want to know whether this is the only way to go about it or whether my workflow needs to be modified. What is the best way to apply the changelog to the test server?
What I ended up doing is deleting all the rows in the DATABASECHANGELOG table where FILENAME = '2012-06-25-latest.groovy'. I then ran dbm-status and it told me I had 256 changes waiting. I then ran dbm-update and all is well.
I'm not sure this is how it was supposed to be done, but it seems to have worked.
UPDATE: In addition to this, I could probably run the entire migration on an empty database and then do a mysqldump of the production database with INSERT statements only.
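Concretely, the cleanup described above is a single delete against Liquibase's tracking table (table and column names per the DATABASECHANGELOG schema the plugin maintains):

```sql
-- Forget that this file's changesets were marked as applied,
-- so the next dbm-update will execute them for real.
DELETE FROM DATABASECHANGELOG
WHERE FILENAME = '2012-06-25-latest.groovy';
```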
This could be a noob problem, but I couldn't find a solution so far.
I'm developing a Rails app locally that uses SQLite. I've set up a local Git repo, and the dotcloud push command uses it. Locally I use the development environment, and on DotCloud it automatically uses the production environment, which is great. The problem is that each time I push, my production DB on DotCloud gets lost, no matter how minor the changes to the codebase are, and I have to run 'rake db:migrate' to set it up again. I don't have a production DB locally, only the dev and test DBs.
Put your DB in ~/data/ as described here and create a symbolic link at deploy time:
ln -s ~/data/production.sqlite3 ~/current/db/production.sqlite3
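That symlink can be automated with dotCloud's postinstall hook, a script at the root of your app that (per dotCloud's docs at the time) runs after each push; the paths below assume the default service layout:

```shell
#!/bin/sh
# ./postinstall -- executed by dotCloud after every push.
# ~/data survives deploys; ~/current is replaced each time.
mkdir -p ~/data
ln -nsf ~/data/production.sqlite3 ~/current/db/production.sqlite3
```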
You should not keep your SQLite database file in version control. If you had multiple developers, it would conflict every single time somebody merged the latest changes. And as you've noticed, it also gets pushed up to production.
You should add the db file to .gitignore. If it's already in version control, you'll probably have to git rm the file first.
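A sketch of that cleanup (the path is a placeholder for wherever your db file lives; the --cached flag removes the file from the index while keeping your local copy):

```shell
echo 'db/*.sqlite3' >> .gitignore
git rm --cached db/production.sqlite3     # stop tracking, keep the file on disk
git commit -m "Stop tracking SQLite database files"
```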
The problem is that every time you deploy, the old version of your deployed app is wiped and replaced with the new code, and your SQLite db is usually within your app files. I'm not a dotCloud user and I don't know how it works there, but you can try to set up a shared folder on the server where you put the production database, outside of your Rails app.
I'm not really sure how Git is set up on DotCloud.com, but I'm assuming there is a bare repo that you push to and another repo that pulls from the bare one when a suitable git hook has been executed. You need to find out whether you can configure that last pull to use the ours merge strategy.