This is probably a dumb question, but it seems like a reasonable use case to me (unless I'm missing some obvious mistake). Here's where I'm at:
In production, I wish to provide database connection parameters via environment variables to connect to a database hosted off-site (e.g. in AWS)
In production I need to be able to apply migrations to the database easily using the TypeORM CLI (typeorm migration:run)
In development, I wish to provide a docker-compose.yml file so that new developers can quickly spin up a working copy of my application using docker-compose up -d, and quickly see the results of their code changes (I'm still working on a solution to watch for changes and re-compile, because my application is written in TypeScript and has to be built before it can be run)
For my unit tests, I want to spin up an empty database in order to populate it with hard-coded data on each test (ensuring idempotency and consistency of tests) and I want these tests to be performant and low-cost
In development I wish to be able to create migrations easily using the TypeORM CLI (typeorm migration:generate) -- this is where I'm having trouble
In production TypeORM will read from the environment variables by default, so this works fine out of the box. I can even run typeorm migration:run and, assuming the connection parameters are correct, it works.
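For context, the production connection settings are just TypeORM's documented environment variables, along these lines (the values below are placeholders, not my real ones):

# Placeholder values -- the names follow TypeORM's env-based configuration
TYPEORM_CONNECTION=mysql
TYPEORM_HOST=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com
TYPEORM_PORT=3306
TYPEORM_USERNAME=app_user
TYPEORM_PASSWORD=********
TYPEORM_DATABASE=app_db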
Because I'm using Docker Compose to spin up a working copy of the application in development, I can supply environment variables in the docker-compose.yml file so that my application can connect to the database within Docker. This is fine -- still using environment variables for everything.
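Roughly, the relevant part of my docker-compose.yml looks like this (a trimmed-down sketch with placeholder names and values, not the exact file):

version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: app_db
      MYSQL_USER: app_user
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: secret
  app:
    build: .
    depends_on:
      - db
    environment:
      # Same TypeORM variables as production, but pointing at the db service
      TYPEORM_CONNECTION: mysql
      TYPEORM_HOST: db
      TYPEORM_PORT: "3306"
      TYPEORM_USERNAME: app_user
      TYPEORM_PASSWORD: secret
      TYPEORM_DATABASE: app_db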
For the unit tests, I integrate TypeORM into my application like so (this is a HapiJS plugin):
import * as path from "path";
import * as Hapi from "@hapi/hapi";
import {createConnection, getConnectionOptions} from "typeorm";

export default {
    name: "TypeORM",
    version: "1.0.0",
    register: async function (server: Hapi.Server, options: Hapi.ServerRegisterOptions)
    {
        // Start from whatever TypeORM finds on its own (environment variables, ormconfig, etc.)
        const connectionOptions = await getConnectionOptions();
        Object.assign(connectionOptions, {
            entities: [path.join(__dirname, "../models/**/*")],
            migrations: [path.join(__dirname, "../migrations/**/*")],
            // subscribers: [path.join(__dirname, "../subscribers/**/*")],
            synchronize: false
        });

        if ((options as any).useInMemoryDb)
        {
            console.log("Switching to in-memory database for unit testing");
            // Override the discovered options with an in-memory SQLite database
            Object.assign(connectionOptions, {
                type: "sqlite",
                host: null,
                port: null,
                username: null,
                password: null,
                database: ":memory:",
                synchronize: true
            });
        }

        const connection = await createConnection(connectionOptions);

        // Expose the connection and its manager on the server for use in route handlers
        server.decorate("server", "getConnectionManager", () => connection.manager);
        server.decorate("server", "getConnection", () => connection);
    }
};
Now in my unit tests I simply set useInMemoryDb to true and the plugin ignores the environment variables and instead spins up an in-memory SQLite database. Since it's in-memory it's very performant, but it has limits on how much data it can hold and it lacks persistence -- neither of which matters for unit tests. Since it's SQLite and not MySQL, I'm limited to a subset of SQL operations, but that's fine -- we simply document which features are not supported and ensure developers stick to the SQL spec.
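So a test suite can stand up a server something like this (a sketch -- the plugin import path and helper name are just illustrative, the wiring is whatever your test framework uses):

// Sketch of test setup -- paths and names below are illustrative
import * as Hapi from "@hapi/hapi";
import TypeORMPlugin from "../src/plugins/typeorm";

export async function createTestServer(): Promise<Hapi.Server>
{
    const server = Hapi.server();

    // Passing useInMemoryDb makes the plugin use an in-memory SQLite database
    await server.register({
        plugin: TypeORMPlugin,
        options: { useInMemoryDb: true }
    });

    return server;
}

Each test can then call createTestServer(), seed the hard-coded data it needs, and throw the whole thing away afterwards.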
Now for the problem of creating migrations...
If I type typeorm migration:generate on a development machine, it fails because there are no connection options (no environment variables are set). Oddly, I tried creating a .env file to set the environment variables, but TypeORM didn't read the file despite the documentation saying it would (I still got a "no connection options were found" error). I could work around this by creating an ormconfig.json file instead. However, this is where I run into a snag.
Because I'm using Docker Compose to run my application in development, the database lives inside an ephemeral container with no exposed ports, and it may not even be running when someone attempts to create a migration. The ormconfig.json file must therefore point to something that I can guarantee exists on the developer's machine. I can't trust that a developer has MySQL or Postgres or anything else installed, so I fall back to SQLite.
If I use an in-memory SQLite database and set synchronize: true, then no migrations are created because the database always matches my model definitions. If I use an in-memory SQLite database and set synchronize: false, then the generated migration includes all tables and columns, not just new ones. I need to perform a migration:run before I perform a migration:generate, but if I just put something like typeorm migration:run && typeorm migration:generate into my NPM script, then the SQLite database has been destroyed by the time migration:run finishes and migration:generate starts.
I considered writing to a SQLite database on disk and cleaning it up when I was done (e.g. typeorm migration:run && typeorm migration:generate && rm tmpdb), but I don't like the idea of the ormconfig.json specifying a file on disk, as this file then needs to be excluded in the .gitignore and may accidentally be created by some one-off code execution (e.g. someone running the application without using Docker Compose).
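Concretely, that workaround would look something like this (just a sketch -- file contents, the migration name, and paths are placeholders):

ormconfig.json (sketch):
{
    "type": "sqlite",
    "database": "tmpdb",
    "entities": ["build/models/**/*.js"],
    "migrations": ["build/migrations/**/*.js"],
    "cli": { "migrationsDir": "src/migrations" }
}

and the NPM script:
"migration:generate": "typeorm migration:run && typeorm migration:generate -n MyMigration && rm tmpdb"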
I also don't like that I only want to spin up a single-use database for this one specific use case (generating a migration) yet in order for the CLI to find the configuration options I'm having to specify them in a global configuration file (ormconfig.json) -- this feels very hacky. A one-off use case should utilize a one-off solution, not something with lord knows how many side effects.
So is there a solution to this general problem? I want to be able to generate migrations based on my model definitions and my existing migrations, regardless of the existence of a database at the time that migration:generate is run.
Related
I'm coming from the world of Python/Django, where usually our deployment flow was as follows:
tar/gz our code release
Unpack it on the production server
Run DB migrations manually via South
Run the app
Grails is a little bit different from Python/Django, mainly because the end product is a compiled WAR. My biggest problem is the manual DB migration. I don't want to run it automatically; one suggested solution that I saw is to use dbm-update-sql to generate a manual SQL file, but in order to produce it I need my local DB to be at the same version as the production DB - I don't like that.
Any other suggestions? It looks to me like the only way to run it manually is to deploy the source code on the machine and run the dbm commands there.
You can run dbm-update-sql against the production database; it won't make any changes, since, like all of the -sql scripts, it's there to show you what would be done in the event of a real migration. To be safe, create a user that doesn't have permission to make any changes and use that when you run the script. Create a custom environment in DataSource.groovy with that user info and the production connection info, and specify that environment when running the script.
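For example, something along these lines in DataSource.groovy (the environment name, driver, and connection details are placeholders):

environments {
    migrationcheck {
        dataSource {
            driverClassName = "com.mysql.jdbc.Driver"
            url = "jdbc:mysql://your-prod-host:3306/yourapp_prod"
            username = "readonly_user" // a user without permission to make changes
            password = "secret"
        }
    }
}

and then run the script against that environment, e.g. grails -Dgrails.env=migrationcheck dbm-update-sql update.sql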
I'd highly recommend not deploying the source to your production systems. Since you want to manually control your database migrations outside of the normal flow of a Grails application, I'd recommend you look at using Liquibase as a standalone tool.
Obviously, since you don't want to maintain a copy of your production schema to diff against, this is going to be a lot of manual work for you (e.g. keeping your changes up to date).
The database migration plugin can be used to create SQL scripts that you run manually, but you do need a production schema to diff against. I'd recommend you head down this route, but you seem set against doing so.
I'm running the rails-composer script with
rails new myproject -m https://raw.github.com/RailsApps/rails-composer/master/composer.rb
And everything goes along smoothly until it asks me if I want to go ahead and drop the db in. I say yes. All drops fail, and all creations fail.
Everything else finished fine. And testing the site brings up a beautiful error page with lots of details on how the database password wasn't accepted. But of course it was never created...
How do I give the script permission to create the database without a password? I've tried pre-emptively creating a Postgres user named after the app. I've also tried building the Rails project as the postgres user.
It should be a simple and straightforward solution since the rest is automated.
Did you specify something other than SQLite for the database?
From the README:
Choose “SQLite” for the easiest setup. If you choose PostgreSQL or MySQL, the databases must be installed and running before you run Rails Composer.
Rails Composer does not install and set up your database server for you. It assumes there's a properly named database already present before you run it if you're using PostgreSQL or MySQL.
I'm trying to follow the tutorial here.
I have declared the dependency for the database migration plugin in my BuildConfig.groovy file with runtime ":database-migration:1.0" and then compiled. I have also commented out the dbCreate line of my production settings in my DataSource.groovy file. My production database is empty with no tables in it.
I then try to run the two commands to generate my initial change log:
grails dbm-create-changelog
grails prod dbm-generate-gorm-changelog --add changelog-1.0.groovy
The problem is the first command creates tables in my development database, not my production database. Then the second command fails to create the changelog-1.0.groovy file it is supposed to create (I assume) because the production database never had any tables created. I get several errors saying Hibernate failed to index the database, and a bunch of errors like this:
| Error 2012-07-10 08:40:28,704 [Compass Gps Index [pool-11-thread-2]] ERROR util.JDBCExceptionReporter - Table 'mygrailsapp_prod.some_class' doesn't exist
Even when I comment out my development settings in my DataSource.groovy file, Grails is still looking for my development database. I should point out, though, that if I drop the prod off the second command, the changelog-1.0.groovy file generates fine, though I am unclear whether it will somehow be messed up because it was generated off the development database (which had no tables in it until I ran the first command) instead of the production database.
What am I doing wrong here?
The problem is the first command creates tables in my development database, not my production database.
That's probably because it is running against the development environment and you still have its dbCreate set to "update"
Then the second command fails to create the changelog-1.0.groovy file it is supposed to create (I assume) because the production database never had any tables created.
That's not entirely accurate. From the link you posted it says after that step: "Note that your database will remain empty!" The database tables will only get created when you execute a dbm-update command. That's when the changelog actually executes.
I think the blog you linked to isn't entirely accurate on the prod switch for the second command. Nothing about your Domains is environment specific. So just leave that off and you should be able to keep going. I'm not sure why that error is being thrown. It really doesn't make sense.
Following the documentation I was able to get the database-migration plugin working on an existing project which already has quite a few tables and is running in production. I went through the following process locally:
Pulled down latest production database
Sourced the production database into my local dev database
grails dbm-generate-changelog changelog.groovy
grails dbm-changelog-sync
grails dbm-gorm-diff 2012-06-25-latest.groovy --add
grails dbm-update
I understand why I had to do each of those locally to get to a point of applying future change sets. However, now I want to run my 2012-06-25-latest.groovy on one of my test servers. It already has the latest database based on our production database.
I tried just running dbm-update, but without the sync it failed trying to create tables that already exist. So I ran dbm-changelog-sync, but then when running dbm-update it didn't actually apply the latest file.
I know that I can add a context (tag) to the change sets and specify that context when running dbm-update but I want to know if this is the only way to go about this or if my workflow needs to be modified; what is the best way to go about applying the changelog to the test server?
What I ended up doing is deleting all the rows in the DATABASECHANGELOG table where the FILENAME = '2012-06-25-latest.groovy'. I then ran dbm-status and it told me I had 256 changes waiting. I then ran dbm-update and all is well.
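In SQL terms, that was roughly:

DELETE FROM DATABASECHANGELOG WHERE FILENAME = '2012-06-25-latest.groovy';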
I'm not sure this is how it was supposed to be done, but it seems to have worked.
UPDATE: In addition to this I could probably run the entire migration on an empty database, and then do a mysqldump of the production database with INSERT statements only.
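(For the data-only dump, something like mysqldump --no-create-info mygrailsapp_prod > data-only.sql should do it -- the --no-create-info flag skips the CREATE TABLE statements so the dump contains only INSERTs; the database name and output file here are placeholders.)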
We have an existing Rails 3 app that has been copied and loaded onto a separate server. We've set up the Postgres DB for this server, and also configured database.yml, the pg gems, etc. to set up for the port.
However, only the database schema can be migrated... though all the data files have the correct content.
I've tried variations of db migrate, dump, reset, load, etc., but I've had no success getting the actual data into the database. Again, the server migration is to an identical hardware/software config, so it's Rails 3.1 / Postgres 9 / Ruby 1.9.2.
I don't get any errors, but the data doesn't populate. The ultimate goal is to have an identical app on the two servers.
Any ideas? I've already spent 4 days fighting. Many thanks!!
"...the actual data in the database"
If you have an existing database with transactional data, then I think you want to use Postgres tools to move the database? Maybe I am not understanding the question correctly?
On the source machine:
pg_dump DATABASE_NAME > ~/DATABASE_NAME_dump.sql
Copy the dump file to the target machine.
On the target machine:
bundle exec rake db:create
psql DATABASE_NAME < ~/DATABASE_NAME_dump.sql
Lots of good information here: http://www.postgresql.org/docs/9.0/static/backup.html
Have you tried the taps gem?
It enables you to transfer schema and data from one instance to another.
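Rough usage (database URLs, credentials, and host names below are placeholders): install the gem, run a taps server on the machine that has the data, then pull from the other machine.

gem install taps
# on the source machine
taps server postgres://user:pass@localhost/source_db httpuser httppassword
# on the target machine
taps pull postgres://user:pass@localhost/target_db http://httpuser:httppassword@source-host:5000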