I'm trying to follow the tutorial here.
I have declared the dependency for the database migration plugin in my BuildConfig.groovy file with runtime ":database-migration:1.0" and then compiled. I have also commented out the dbCreate line of my production settings in my DataSource.groovy file. My production database is empty, with no tables in it.
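For reference, here is roughly what the relevant configuration looks like (connection details are from my setup and trimmed for brevity):

// BuildConfig.groovy
plugins {
    runtime ":database-migration:1.0"
}

// DataSource.groovy -- production block with dbCreate commented out
environments {
    production {
        dataSource {
            // dbCreate = "update"
            url = "jdbc:mysql://localhost/mygrailsapp_prod"
        }
    }
}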
I then try to run the two commands to generate my initial change log:
grails dbm-create-changelog
grails prod dbm-generate-gorm-changelog --add changelog-1.0.groovy
The problem is that the first command creates tables in my development database, not my production database. Then the second command fails to create the changelog-1.0.groovy file it is supposed to create (I assume) because the production database never had any tables created. I get several errors saying Hibernate failed to index the database, and a bunch of errors like this:
| Error 2012-07-10 08:40:28,704 [Compass Gps Index [pool-11-thread-2]] ERROR util.JDBCExceptionReporter - Table 'mygrailsapp_prod.some_class' doesn't exist
Even when I comment out my development settings in my DataSource.groovy file, Grails is still looking for my development database. I should point out, though, that if I drop the prod off the second command, the changelog-1.0.groovy file generates fine, though I am unclear whether it will somehow be messed up because it was generated off the development database (which had no tables in it until I ran the first command) instead of the production database.
What am I doing wrong here?
The problem is that the first command creates tables in my development database, not my production database.
That's probably because it is running against the development environment, and you still have its dbCreate set to "update".
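For example, if DataSource.groovy still looks something like this (a typical default, sketched here; the dev URL is illustrative), any command run without an explicit environment will use the development block and create the tables there:

environments {
    development {
        dataSource {
            dbCreate = "update"   // this is what builds tables in the dev database
            url = "jdbc:mysql://localhost/mygrailsapp_dev"
        }
    }
    production {
        dataSource {
            // no dbCreate -- let the migration plugin manage the schema
            url = "jdbc:mysql://localhost/mygrailsapp_prod"
        }
    }
}

Either remove dbCreate from development as well, or run the command against an explicit environment, e.g. grails prod dbm-create-changelog.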
Then the second command fails to create the changelog-1.0.groovy file it is supposed to create (I assume) because the production database never had any tables created.
That's not entirely accurate. The link you posted says, right after that step: "Note that your database will remain empty!" The database tables only get created when you execute a dbm-update command; that's when the changelog actually executes.
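In other words, the generate commands only write the changelog file; nothing touches the database until you run the changelog against the target environment:

grails prod dbm-update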
I think the blog you linked to isn't entirely accurate about the prod switch on the second command. Nothing about your domain classes is environment-specific, so just leave that off and you should be able to keep going. I'm not sure why that error is being thrown; it really doesn't make sense.
I'm working on a Rails app with a few collaborators, and we decided to use separate database.yml files for a while until we can settle on a configuration that works for all of us.
After adding database.yml to the .gitignore file and pushing a version without it, I realized that this would likely prevent the Heroku app from running.
My confusion is that the deployment was successful and the database.yml file was not needed. Why is this? Is our old database.yml file cached?
This is actually the expected behavior. For more details see: https://devcenter.heroku.com/articles/rails-database-connection-behavior
Which boils down to (for Rails 4.1+):
While the default connection information will be pulled from DATABASE_URL, any additional configuration options in your config/database.yml will be merged in.
Heroku will always use DATABASE_URL and merge the rest of database.yml into the config contained in that URL.
Ah yes the old db config developer war.
Heroku actually uses the solution to this issue: Rails merges the database configuration from database.yml with a hash created from parsing ENV["DATABASE_URL"]. The ENV var takes precedence over the file-based configuration.
When you first push a Rails app, Heroku automatically attaches a Postgres addon and sets ENV["DATABASE_URL"] and presto your app magically connects to the database.
Even if you add complete nonsense settings, like setting the database name in database.yml, the ENV var still wins.
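For example (hypothetical values, just to illustrate the merge):

# config/database.yml
production:
  database: complete_nonsense   # ignored -- the URL's database name wins
  pool: 15                      # merged in -- not part of the URL

# ENV["DATABASE_URL"] as set by Heroku, e.g.
#   postgres://user:pass@host:5432/real_db_name

# Effective config: connects to real_db_name, with a pool of 15.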
How can this solve our developer war?
Do the opposite of what you are currently doing. Strip everything except the bare minimum required to run the application out of database.yml and check it back into version control.
Developers can use direnv or one of the many tools available to set ENV["DATABASE_URL"] and customize the settings, while database.yml is left untouched unless you actually need to tweak the db.
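A sketch of what that might look like (adapter and pool size are just examples):

# config/database.yml -- checked into version control
default: &default
  adapter: postgresql
  pool: 5

development:
  <<: *default

production:
  <<: *default

# .envrc (direnv, per developer, NOT checked in)
export DATABASE_URL=postgres://me:secret@localhost:5432/myapp_dev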
I'm coming from the world of Python/Django, where our deployment flow was usually as follows:
Tar/gzip our code release
Unpack it on the production server
Run the DB migration manually via South
Run the app
Grails is a little bit different from Python/Django, mainly because the end product is a compiled WAR. My biggest problem is the manual DB migration: I don't want to run it automatically. One suggested solution I saw is to use dbm-update-sql to generate a manual SQL file, but in order to produce it I need my local DB to be at the same version as the production DB, which I don't like.
Any other suggestions? It looks to me like the only way to run it manually is to deploy the source code on the machine and run the dbm commands there.
You can run dbm-update-sql against the production database; it won't make any changes since, like all of the -sql scripts, it's there to show you what would be done in the event of a real migration. To be safe, create a user that doesn't have permission to make any changes and use that when you run the script. Create a custom environment in DataSource.groovy with that user's credentials and the production connection info, and specify that environment when running the script.
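Something along these lines in DataSource.groovy (the environment name and credentials are illustrative; the point is a SELECT-only user with the production connection info):

environments {
    prodReadOnly {
        dataSource {
            // same host/database as production, but a user that cannot modify anything
            url = "jdbc:mysql://prod-host/myapp_prod"
            username = "migration_readonly"
            password = "secret"
        }
    }
}

Then generate the script against that environment:

grails -Dgrails.env=prodReadOnly dbm-update-sql changes.sql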
I'd highly recommend not deploying the source to your production systems. Since you want to manually control your database migrations outside of the normal flow of a Grails application, I'd recommend you look at using Liquibase as a standalone tool.
Obviously, since you don't want to manage a copy of your production schema to diff against, this is going to be a lot of manual work for you (e.g. keeping your changes up to date).
The database migration plugin can be used to create SQL scripts that you run manually, but you do need a production schema to diff against. I'd recommend you go this route, but you seem set against doing so.
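If you do go the standalone Liquibase route, the invocation is roughly this (Liquibase 2.x-era flags; driver, paths, and connection details are placeholders, so check your version's docs):

liquibase --driver=com.mysql.jdbc.Driver \
          --classpath=mysql-connector-java.jar \
          --changeLogFile=changelog.xml \
          --url=jdbc:mysql://prod-host/myapp \
          --username=deploy \
          --password=secret \
          updateSQL > migration.sql

updateSQL prints the SQL it would run instead of applying it, which fits a "generate, review, run by hand" flow.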
Following the documentation I was able to get the database-migration plugin working on an existing project which already has quite a few tables and is running in production. I went through the following process locally:
Pulled down the latest production database
Sourced the production database into my local dev database
grails dbm-generate-changelog changelog.groovy
grails dbm-changelog-sync
grails dbm-gorm-diff 2012-06-25-latest.groovy --add
grails dbm-update
I understand why I had to do each of those locally to get to a point of applying future change sets. However, now I want to run my 2012-06-25-latest.groovy on one of my test servers. It already has the latest database based on our production database.
I tried just running dbm-update, but without the sync it failed, trying to create tables that already exist. So I ran dbm-changelog-sync, but then when I ran dbm-update it didn't actually apply the latest file.
I know that I can add a context (tag) to the change sets and specify that context when running dbm-update, but I want to know whether this is the only way to go about it or whether my workflow needs to be modified. What is the best way to apply the changelog to the test server?
What I ended up doing is deleting all the rows in the DATABASECHANGELOG table where the FILENAME = '2012-06-25-latest.groovy'. I then ran dbm-status and it told me I had 256 changes waiting. I then ran dbm-update and all is well.
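For the record, the delete was just this (the FILENAME value matches the changelog I wanted to re-run):

DELETE FROM DATABASECHANGELOG WHERE FILENAME = '2012-06-25-latest.groovy';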
I'm not sure this is how it was supposed to be done, but it seems to have worked.
UPDATE: In addition to this, I could probably run the entire migration on an empty database and then do a mysqldump of the production database with INSERT statements only.
I've been using MySQL forever; I've never really needed anything fancier. But I'm using Heroku a lot, and while I'm working I like free search, so I'm using the acts_as_tsearch plugin. If you go to the git repository, it tells you:
* Preparing your PostgreSQL database
Add a text search configuration 'default':
CREATE TEXT SEARCH CONFIGURATION public.default ( COPY = pg_catalog.english )
So guess what? I
changed from MySQL to PostgreSQL in my Rails config
ran that "CREATE TEXT" code in the SQL pane of pgAdmin (a GUI for Postgres)
noticed that now my development DB has something called an "FTS configuration"
tried the search functionality and it works GREAT
But I'm having trouble getting that configuration to show up in the schema. When I run rake db:schema:dump, it doesn't make it in there. I know I can add this line to schema.rb:
execute 'CREATE TEXT SEARCH CONFIGURATION public.default ( COPY = pg_catalog.english )'
and that works, but how can I get that configuration into the schema without having to hand-add it? Can I create a file that is also loaded after schema.rb when someone runs rake db:schema:load?
And for the postgres people, a question: What does that CREATE TEXT SEARCH CONFIGURATION... do?
Why don't you try adding it to a migration file and running that against the Heroku db?
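Something like this, as a sketch (class and file name are up to you; older Rails versions use self.up/self.down as shown):

class AddDefaultTextSearchConfiguration < ActiveRecord::Migration
  def self.up
    execute "CREATE TEXT SEARCH CONFIGURATION public.default ( COPY = pg_catalog.english )"
  end

  def self.down
    execute "DROP TEXT SEARCH CONFIGURATION public.default"
  end
end

Bear in mind schema.rb still won't capture it on dump, since the Ruby schema format only records tables and indexes; if you need dumps to include it, switching to the SQL schema format (config.active_record.schema_format = :sql) is the usual workaround.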
I'm working on a Rails site that connects to an Oracle database; I didn't build the site from scratch, but I'm doing maintenance work. The site uses the delayed_job plugin to handle some background tasks, and I'd like to be able to run rake jobs:work on the development server to periodically process all jobs in the queue (due to the server's configuration, running a daemonized version of the script on the development server isn't an option). However, whenever I try running the command, I get the following classic Oracle error:
error while trying to retrieve text for error ora-12154
Ordinarily, I'd think this would be an authentication problem (e.g. incorrect credentials in database.yml), but the site is up and running fine (and doing lots of database stuff). I've tried adding RAILS_ENV=production as a parameter to rake to force it to run in the production environment, but got the same error (there are two separate Rails installations for the production and development versions of the site, and I've set the "development" and "production" credentials in development's db config file to be identical).
I'm not sure what could be causing this error, and I don't have a ton of experience using Oracle with rails. Any suggestions?
Thanks a lot!
Justin
EDIT (10/26/09): Still can't figure out what's causing the problem here. The app continues to run (and talk to the database) without a problem, but rake keeps throwing DB errors. So does script/console, which shows a prompt but first complains with the same Oracle error message. I'm going to keep looking, but I'm running out of ideas...
EDIT(10/26/09, later): Following the advice of this link, I set both ORACLE_HOME and TNS_ADMIN to point to the directory where tnsnames.ora lives. Just setting ORACLE_HOME had no obvious effect, but now that TNS_ADMIN points to the right place, I've started getting segmentation faults whenever I try to open the console or run rake:
/usr/local/lib/ruby/site_ruby/1.8/oci8.rb:184: [BUG] Segmentation Fault
and get booted unceremoniously back to the prompt. Any further ideas?
Finally got it...turns out that ORACLE_HOME wasn't being correctly set as an environment variable for my user account. Now rake, script/console, etc. are humming along happily.
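For anyone who hits the same thing, the fix amounted to something like this in my shell profile (paths are examples; use wherever your Oracle client actually lives):

# ~/.bash_profile
export ORACLE_HOME=/usr/lib/oracle/10.2.0.4/client
export TNS_ADMIN=$ORACLE_HOME/network/admin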
The oracle error says the following:
ORA-12154 is generated by the Oracle network layer; the TNS error message is thrown during the logon process to a database. This error indicates that the communication software in Oracle (SQL*Net or Net8) did not recognize the host/service name specified in the connection parameters. It almost always indicates a misconfiguration of the Oracle TNS entries.
Can you connect to your Oracle instance using SQL*Plus or another DB tool?
It is odd that the app runs fine though.
Is there an $ORACLE_SID lying around somewhere that could be pointing to a DB that doesn't exist?
In SQL Server I would probably run Profiler to see what is actually being sent versus what I think I have set up. I'm sure Oracle also has some type of profiling utility. I would try that and see; you may find it isn't using the credentials you think it is.
Well, as Mike mentioned, ORA-12154 means TNS couldn't resolve the database identifier (TNS is Oracle's name-to-database mapping, sorta-kinda like DNS). If you can find your tnsnames.ora file, you can see what databases are configured there and compare that to the database.yml file. The fact that it works as a delayed job but not from the command line is a bit odd, though, and makes me think that perhaps there are some environment variables being set in one context that aren't in the other.
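For example, a tnsnames.ora entry and the database.yml that should match it look roughly like this (all names are placeholders):

# tnsnames.ora
MYAPP_PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = myapp_prod))
  )

# database.yml
production:
  adapter: oracle_enhanced   # or "oracle", depending on which adapter gem you use
  database: MYAPP_PROD       # must match the TNS alias above
  username: app_user
  password: secret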
If neither of those pan out, there's a long list of troubleshooting suggestions at http://ora-12154.ora-code.com that are specific to that error code.