Background
I have a relatively new Grails project using 3.0.14. I am looking to integrate Liquibase for database migrations via the Database Migration plugin (2.0.0.RC4).
My domain model is already large enough that I have used the plugin to 'seed' an initial changelog. This is straight from the docs and works as intended:
grails dbm-generate-gorm-changelog changelog.groovy
What I am now trying to test/get working is the dbm-gorm-diff command, which will take changes to the domain model and create a changelog that can be applied. This is where I am running into issues.
The Grails documentation suggests removing the dbCreate block from the datasource to ensure that Hibernate doesn't do the updating and Liquibase can take over. Great, exactly what I want.
The Issue
When I remove dbCreate, Grails/Hibernate still seems to update the database before the Database Migration plugin has a chance to do the diff. By the time the diff runs, it is already too late to see the changes, so the changelogs do not contain the right data.
Config
dataSource:
    pooled: true
    jmxExport: true
    driverClassName: org.h2.Driver
    username: sa
    password:
environments:
    development:
        dataSource:
            dbCreate: verify
            driverClassName: org.postgresql.Driver
            dialect: org.hibernate.dialect.PostgreSQLDialect
            url: jdbc:postgresql://127.0.0.1:5432/liquibase_test
            username: dbuser
            password: dbuser
            logSql: false
            formatSql: true
(I am aware that dbCreate is set to verify. More on this later.)
Steps Taken
Create a new Postgres database - createdb -U dbuser liquibase_test
Run the initial changelog on the new database - grails dbm-update
Verify that the database is now up to date, and check that the row count of select * from databasechangelog matches the number of changesets in changelog.groovy
Add a new simple domain class:
class TestDomain {
    int testInt
}
Run the plugin to get the diff - grails dbm-gorm-diff add-simple-domain.groovy. The command fails with an exception:
:DataModel:dbmGormDiff
Command execution error: liquibase.command.CommandExecutionException: java.lang.NullPointerException
DataModel:dbmGormDiff FAILED
Now, remove the config dbCreate: verify from above, and run again
This completes successfully without exception, but there are issues:
the command created add-simple-domain.groovy, but it has no mention of the new domain class I just created (it does have index/sequence changes, but I think this is a known issue)
the new domain class has been added to the database(!?) (checked in pgAdmin)
the table databasechangelog still has the original row count, and interrogating it shows no reference to the new domain class
So, I'm at a loss to explain what is going on. I can live with the extra create/drop indexes and sequences, but I can't seem to get the Liquibase side working. Can anyone shed some light on this for me?
Edit
I did some more digging into the NullPointerException, and it seems to come from liquibase/ext/hibernate/snapshot/ForeignKeySnapshotGenerator.java:45, where the plugin is trying to construct a foreign key to the inherited table's id field (I am using tablePerHierarchy false for this inheritance). I couldn't find anything that seemed related to this error after a decent search.
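For context, the inheritance in question is mapped roughly like this in GORM (the class names here are illustrative, only the mapping block matters):
// Illustrative superclass; tablePerHierarchy false means table-per-subclass,
// where each subclass table holds a foreign key back to the base table's id
class BaseDomain {
    static mapping = {
        tablePerHierarchy false
    }
}
class SpecialDomain extends BaseDomain {
    String extra
}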
Edit #2
I have found an issue on Github for the tablePerHierarchy NPE: https://github.com/grails-plugins/grails-database-migration/issues/68
Update your application.yml (or application.groovy) configuration for your datasource:
dataSource:
    dbCreate: none
Setting it to "none" is not the same thing as removing dbCreate entirely: you need to set it explicitly to override any defaults that are being set elsewhere.
"none" seems not to work for me when using JNDI datasources and still causes the ddl to run. I set it to "ignore" to be able to use db-migrations with JNDI datasources in Grails 3.0.x
I ended up getting this to work by setting hibernate.hbm2ddl.auto = 'none' in my application.groovy. Interestingly, when I tried putting this same config in my application.yml instead, it had no effect.
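For completeness, the exact line (placed in grails-app/conf/application.groovy, the standard Grails 3 location):
hibernate.hbm2ddl.auto = 'none'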
I suspect there may be other forces at play here, as I tried replicating the behaviour on a fresh Grails project without issue.
For the time being I have settled on using the Hibernate property in the Groovy file, though I am still curious why I couldn't get the config to work for me like in a vanilla project.
Related
I am using Spring Boot and trying to import Spring Session.
All I have done is add a single line to the application.yaml:
spring:
    datasource:
        password: xx
        url: jdbc:postgresql://c:5432/dbname?schema=public
        username: pg
    session:
        store-type: jdbc
Then I found that two tables, spring_session and spring_session_attributes, were auto-generated in my database. This is expected, except for one thing: the two tables are generated in a different schema from the one where the tables generated by JPA (Hibernate) are put.
I tried to dig into the source code, but I cannot find the code that calls org/springframework/session/jdbc/xx-h2.sql to create the tables.
What's the magic?
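For reference, in Spring Boot 2.x that script is applied by the session auto-configuration (JdbcSessionDataSourceInitializer in spring-boot-autoconfigure), and its DDL uses unqualified table names, so the tables land in whichever schema the connection's default search path points to, which can differ from Hibernate's default schema. A sketch of the properties that appear to control this, assuming Spring Boot 2.x (values illustrative):
spring:
    session:
        store-type: jdbc
        jdbc:
            initialize-schema: embedded    # always | embedded | never
            table-name: SPRING_SESSION     # override the default table name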
Every time I run migration:generate, it creates a migration that regenerates the entire database schema (rather than a migration for just the recent changes to my entities). I'm using TypeORM version 0.2.7, the latest version.
My ormconfig.json is:
{
    "host": "localhost",
    "logging": false,
    "port": 5432,
    "synchronize": false,
    "type": "postgres",
    "entities": ["build/server/entity/*.js"],
    "migrations": ["build/server/migration/*.js"],
    "cli": {
        "entitiesDir": "build/server/entity",
        "migrationsDir": "build/server/migration"
    },
    "username": "***",
    "password": "***",
    "database": "***"
}
When I run typeorm migration:generate -n SomeEntityChanges, the new migration file contains instructions for creating and linking up tables for all my entities, even though most of them already have corresponding migrations in build/server/migration.
When I run typeorm migration:run, I can see that there are no pending migrations, and that the migrations that cover the existing entities have been run (i.e. they're in my migrations table).
What am I missing? The docs say that the migration:generate command should just generate a migration with the recent changes.
This might sound really stupid, but I was having the same issue and the problem was the name of my database. My database name was mydbLocal, with a capital L, but when TypeORM read the schema to generate a new migration it looked for mydblocal, and since there was no schema with that name it regenerated the whole schema. It seems like a bug: when parsing the schema it looked for the lowercase name, but when running the migration it went into the real one (with the upper-case L).
Anyway, the way I solved it was by changing my db name to all lowercase and also editing the database name in my ormconfig to be all lowercase.
It is a really strange situation, but this solved my problem. Hopefully it will help someone else too.
As mentioned in one of the previous answers, the issue for me was indeed camel-casing the database name. Changing the database name to all lowercase seems to have fixed the migration generate issue.
However, in my new project, I noticed that the entity table-name override also shows the same behavior. Strangely, this was not an issue in my previous projects.
// Previous table name, causing the migration file to regenerate
@Entity({
    name: 'TempTable',
})
// New table name, which stops the regeneration
@Entity({
    name: 'temp_table',
})
Hope this helps someone facing the same issue.
Removing {schema: 'public'} from all of my @Entity definitions fixed it for me with PostgreSQL.
Previous:
@Entity({ schema: 'public' })
Working:
@Entity()
It is because your database is probably empty. TypeORM computes the diff between your actual codebase entities and your actual database, and generates the migration from that.
Check your ormconfig.json, as it is what the TypeORM CLI reads to generate the migration; it probably points to an empty database, which is why all the tables end up in the migration.
Just run your migrations against that database, then run generate again.
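In other words, something like this (the migration name is a placeholder):
typeorm migration:run
typeorm migration:generate -n OnlyTheNewChanges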
I kept having this issue on MySQL and after hours of tinkering I finally resolved the problem.
Assuming the { schema: "public" } fix didn't work for you, here are the actions I had to take to clean up a terrible migration generation.
First of all, make sure you're running the latest version of TypeORM. I was behind by a few minor versions, and that alone was enough to give me countless index-based errors.
If you're up to date, great! Here's the bad news: your entities are wrong. The biggest issue I kept running into was the default property on most of the duplicated diffs. From what I've come to understand, both Postgres and MySQL return different-than-expected results when TypeORM compares the database's defaults to the defined defaults.
For example: on a "decimal" type with a 4-digit trailing decimal, default: 0 works fine when building your column, but MySQL actually returns "0.0000", meaning that no matter how many times you run the update, the default will never be a literal zero. TypeORM sees this as different and wants to change the existing MySQL default back to a plain zero.
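For instance, a sketch of a column definition that stops the spurious diff (the name, precision and scale are illustrative):
// Write the default exactly as MySQL reports it back ("0.0000"),
// so migration:generate no longer flags this column on every run.
@Column({ type: 'decimal', precision: 10, scale: 4, default: '0.0000' })
amount: string;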
This error spanned everything from default: null to "tinyint" booleans being listed as int in my schema.
Read the generated output carefully and check each entity for the property being mentioned. Some of this was fixed by updating to the latest version of TypeORM, but I managed to clear almost 250 table alterations by ensuring the defined defaults actually matched what MySQL stores.
I was having the same issue; my solution is not particularly elegant, but it actually "works".
In the migration containing the whole database, search for the tables involved, keep those, and delete all the other queries generated by TypeORM. Once they're deleted, run the migration command and it will work, creating/altering the tables you wanted.
It's not scalable, but remember: the idea is to alter only a few tables per migration.
The rule of thumb is to generate a migration after each entity change.
1st: pay attention to lowercase vs. uppercase.
2nd: if you use an uppercase name, wrap it in "".
3rd: if you use a schema, add it to your ormconfig, like:
"schema": "public",
Good luck!
In my case, after around four migrations, new migrations contained old changes. I searched the web and checked/tried the other answers from here. The only thing that helped was to remove the project, clone it again, set it up again (add the .env, run npm install), run the current migrations, and then generate the new migration file, which contained only the new changes as expected. Of course this does not reveal the root cause, but after that I could continue working without problems.
rm -rf project
git clone project
cd project
npm ci
ts-node ./node_modules/typeorm/cli migration:run
These two comments (1, 2) made me try this.
You should use migration:create instead of migration:generate. What I recommend is, inside your package.json:
{
    ...
    "scripts": {
        "migration:create": "NODE_ENV=local npm run typeorm -- migration:create -n",
        "typeorm": "ts-node -r tsconfig-paths/register ./node_modules/.bin/typeorm"
    }
}
then you can just run:
$ npm run migration:create NameOfYourMigration
to create your migration successfully.
I want to use the database-migration Grails plugin for database migration. When I start my Grails app for the first time, all the database tables are created automatically. The production setting in my DataSource.groovy is:
production {
    dataSource {
        dbCreate = "update"
        url = "jdbc:mysql://localhost/myapp?useUnicode=yes&characterEncoding=UTF-8"
        username = "test"
        password = "test"
        dialect = org.hibernate.dialect.MySQL5InnoDBDialect
        properties {
            validationQuery = "select 1"
            testWhileIdle = true
            timeBetweenEvictionRunsMillis = 60000
        }
    }
}
In my Config.groovy I set:
grails.plugin.databasemigration.updateOnStart = true
grails.plugin.databasemigration.updateOnStartFileNames = ['changelog.groovy']
When I add properties to my domain classes, I need to adjust the changelog file.
What is the best way to do database migration in this case? What are the steps I have to take when I add or remove columns?
As you're probably aware, the dbCreate directive is not recommended for production use:
You can also remove the dbCreate setting completely, which is recommended once your schema is relatively stable and definitely when your application and database are deployed in production.
So keep in mind that you will need to remove this (or set to 'none').
Initial Baseline Workflow
Define current state
Create database from change log or mark as up-to-date
Set config options
The first step is to get the changelog to reflect the current state. If you've got an existing database, you want to use that to define the baseline. Otherwise, use GORM to define the tables.
These commands will generate a baseline for your database. Also, I chose to use the Groovy DSL format rather than Liquibase XML, for readability.
Existing Database
If you've got a production database with data already, it's a little bit tricky. You will need to access the database, or a copy of it, from your Grails environment. If you manipulate a copy, you will need to apply the updates back to production (and potentially manage it as a planned outage).
The command is:
grails [environment] dbm-generate-changelog changelog.groovy
...where environment optionally specifies the dev/test/prod/custom environment the database is defined in.
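For example, to baseline against the production configuration:
grails prod dbm-generate-changelog changelog.groovy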
Following that, mark the database as 'up-to-date' with regards to the changelog:
grails [environment] dbm-changelog-sync
Then reapply the database to production, if necessary.
New Database
If you don't have an existing database (or don't care):
grails dbm-generate-gorm-changelog changelog.groovy
Then, to create the database from the changelog:
grails [environment] dbm-update
Configuration
You've already correctly got the options set:
grails.plugin.databasemigration.updateOnStart = true
grails.plugin.databasemigration.updateOnStartFileNames = ['changelog.groovy']
These options simply mean that the plugin will attempt to apply the changes to the database when the application starts.
Development Workflow
Make changes to domains
Generate changelog identifying differences
(Backup and) Update the database
So now you've got a database up-to-date, and you're smashing out changes to the domain classes, adding new ones and changing validation properties.
Each time you want to record your changes, you want to compare your GORM classes to what exists in the database, and create a new changelog file to record the difference:
grails [environment] dbm-gorm-diff [meaningful name].groovy --add
Here environment is the environment whose database you are comparing against, and meaningful name should reflect in some way the change being applied (perhaps a JIRA issue key, a version number, or a description).
The --add flag will insert an include statement in changelog.groovy.
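For illustration, the appended include looks roughly like this (the file name is assumed):
databaseChangeLog = {
    include file: 'meaningful-name.groovy'
}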
If you've configured updateOnStart, then you're done! Otherwise, to manually process the update, reuse the command:
grails [environment] dbm-update
RTFM
Plugin documentation - Getting Started
Plugin documentation - General Usage
Confile's answer above points to a good tutorial that goes into detail about manual changes to changelogs
Liquibase documentation - Changesets (Uses the XML format, but useful for understanding concepts)
The approach that I would use is to map every table to a Grails domain with the mapping (very important!) properly set.
Then let Grails create the database the first time, and populate it with a previous backup of the database you want to migrate.
After this, set the Grails config to update the database every time it starts.
I know it seems a little bit messy, but if I had to do it, this is the way I would do it.
Hope it helps :)
I found a very good tutorial, which explains the solution to my problem:
Grails Db Migration Tutorial
The workflow consists of the following steps:
1) Install the plugin using the command grails install-plugin database-migration
2) After setting up the plugin, run the command:
grails dbm-generate-gorm-changelog changelog.groovy (or changelog.xml)
By default it will generate a file at grails-app/migrations/changelog.groovy (or .xml)
3) Set the datasource to dbCreate = 'none'
4) Now, run:
grails dbm-changelog-sync
This will create a table named databasechangelog and insert entries according to your existing schema.
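You can sanity-check the sync with a quick query (table name per the plugin's default):
select count(*) from databasechangelog;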
That's it.
I am writing a Symfony project (using Symfony 1.4 with Propel as its ORM) where some data is stored in a MySQL database and some other data is stored in a PostgreSQL database on another server.
To be more precise, I want to store some models in the MySQL database and other models in the PostgreSQL database at the same time, and do it seamlessly without explicit database switching (I mean Propel should use the proper database connection and SQL dialect to retrieve/store data). Models from the MySQL part will not have relations with the PostgreSQL ones.
Is it possible? If yes, I would also like to know how to set up the development environment (I want to access different MySQL/PostgreSQL DBs in the development and production environments).
UPD: I've found a question on SO regarding this problem: Multiple databases support in Symfony. But I have to check whether it works with recent versions of Symfony.
I work with Symfony every day, and in fact you can have two databases in order to store unrelated parts of the model. You need to set up both connections in your database.yml (I'm unfamiliar with Postgres, so you will have to figure out how to set that one up correctly):
mysql_connection:
    class: sfPropelDatabase
    param:
        phptype: mysql
        classname: MysqlPropelPDO
        dsn: ''
        username: user
        password: pass
        encoding: UTF-8
        pooling: true
postgress_connection:
    class: sfPropelDatabase
    param:
        phptype: postgres
        classname: PostgresPropelPDO
        dsn: ''
        username: user
        password: pass
        encoding: UTF-8
        pooling: true
Once you have done that, we should get started with the schema.yml file or files (as you will be using two databases, I suggest having two files: one for the MySQL database and another for the Postgres database):
mysql_schema.yml file:
# This is how you tell which connection you are using for these tables
connection: mysql_connection
classes:
    ClassName:
        tableName: table_name
        columns:
            id:
            name:
                type: varchar(255)
                required: true
    [...]
postgres_schema.yml file:
connection: postgress_connection
classes:
    ClassName:
        tableName: table_name
        columns:
            id:
            name:
                type: varchar(255)
                required: true
    [...]
Once you have finished setting up your schema files, you should be good to go: create all the classes and start to have fun. Hope this helps.
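If it helps, the model classes are then generated with the usual task (symfony 1.4 syntax):
php symfony propel:build-model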
I believe you can!
Google has quite a few results for this, try
http://snippets.symfony-project.org/snippet/194
That's based on an older version of Propel/Symfony, but from a quick look I believe it's still valid. Plus, there are recent comments suggesting it works.
I have a migration that runs an SQL script to create a new Postgres schema. When you create a new database in Postgres, by default it creates a schema called 'public', which is the main schema we use. The migration that creates the new schema seems to work fine. However, the problem occurs after the migration has run: when Rails tries to update the 'schema_info' table that it relies on, it says the table does not exist, as if it were looking for it in the new schema and not the default 'public' schema where the table actually is.
Does anybody know how I can tell Rails to look at the 'public' schema for this table?
Example of the SQL being executed:
CREATE SCHEMA new_schema;
COMMENT ON SCHEMA new_schema IS 'this is the new Postgres database schema to sit along side the "public" schema';
-- various tables, triggers and functions created in new_schema
Error being thrown:
RuntimeError: ERROR C42P01 Mrelation "schema_info" does not exist
L221 RRangeVarGetRelid: UPDATE schema_info SET version = ??
Thanks for your help
Chris Knight
Well, that depends on what your migration looks like, what your database.yml looks like, and what exactly you are attempting. More information is needed: change the names if you have to, and post an example database.yml and the migration. Does the migration change the search_path for the adapter, for example?
But know that, in general, Rails and PostgreSQL schemas don't work well together (yet?).
There are a few places which have problems. Try to build an app that uses only one pg database with two non-default schemas, one for dev and one for test, and tell me about it. (From the following, I can already tell you that you will get burned.)
Maybe it has been fixed since the last time I played with it, but when I see http://rails.lighthouseapp.com/projects/8994/tickets/390-postgres-adapter-quotes-table-name-breaks-when-non-default-schema-is-used or http://rails.lighthouseapp.com/projects/8994/tickets/918-postgresql-tables-not-generating-correct-schema-list or this in postgresql_adapter.rb:
# Drops a PostgreSQL database
#
# Example:
#   drop_database 'matt_development'
def drop_database(name) #:nodoc:
  execute "DROP DATABASE IF EXISTS #{name}"
end
(Yes, this is wrong if you use the same database with different schemas for dev and test: it would drop the shared database each time you run the unit tests!)
I actually started writing patches. The first one was for the index methods in the adapter, which didn't care about the search_path and ended up creating duplicated indexes under some conditions. Then I started getting hurt by the rest and ended up abandoning the idea of using schemas: I wanted to get my app done, and I didn't have the extra time needed to fix the problems I ran into with schemas.
I'm not sure I understand what you're asking exactly, but rake will be expecting to update the version of the Rails schema in the schema_info table. Check your database.yml config file; this is where rake will look to find the table to update.
Is it possible that you are migrating to a new Postgres schema while rake is still pointing at the old one? I'm not sure then that a standard Rails migration is what you need. It might be best to create your own rake task instead.
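If the search path is the culprit, the PostgreSQL adapter honours a schema_search_path setting in database.yml; a sketch (the database name is illustrative):
development:
  adapter: postgresql
  database: mydb
  schema_search_path: public   # keep rake looking in the 'public' schema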
Edit: If you're referencing two different databases or Postgres schemas, Rails doesn't support this in standard migrations. Rails assumes one database, so migrating from one database to another is usually not possible. When you run "rake db:migrate" it actually looks at the RAILS_ENV environment variable to find the correct entry in database.yml. If rake starts the migration using the "development" environment and its database config from database.yml, it will expect to finish the migration in that same environment.
So, you'll probably need to do this from outside the Rails stack as you can't reference two databases at the same time within Rails. There are attempts at plugins to allow this, but they're majorly hacky and don't work properly.
You can use pg_power. It provides an additional migration DSL for creating PostgreSQL schemas, among other things.