How to generate sql file from Liquibase without DATABASECHANGELOG inserts? - ant

I have the following problem: I need to produce a migration file for a database in production. Currently I'm using Ant with the following task:
<liquibase:updateDatabase changeLogFile="db.changelog-master.xml" databaseRef="oracle-database" outputFile="out_ora.sql" />
But my file includes INSERT statements for the DATABASECHANGELOG table. How can I produce an output file without these statements? (I would rather not delete them manually or with a script afterwards.)

You can use this extension: https://github.com/liquibase/liquibase-nochangelogupdate
Just add the JAR to your classpath and Liquibase will not output any DATABASECHANGELOG SQL.

If you want to filter out the INSERT/UPDATE statements from your changesets as well as Liquibase's own bookkeeping statements, retaining only the CREATE and ALTER scripts, you can do the following.
Define a property with the list of statement classes to be excluded:
sqlgenerator.exclude=liquibase.statement.core.InsertOrUpdateStatement,liquibase.statement.core.InsertStatement,liquibase.statement.core.UpdateStatement,liquibase.statement.core.GetNextChangeSetSequenceValueStatement,liquibase.statement.core.MarkChangeSetRanStatement,liquibase.statement.core.RemoveChangeSetRanStatusStatement,liquibase.statement.core.UpdateChangeSetChecksumStatement
Then implement a SQL generator class that filters out every SQL statement whose class appears in the list above.
Spring injects the property into the class. Be sure to create the class under the package "liquibase.sqlgenerator.ext" so that Liquibase's service locator picks it up.
package liquibase.sqlgenerator.ext;

import javax.annotation.PostConstruct;

import org.apache.commons.lang3.StringUtils;
import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

import liquibase.database.Database;
import liquibase.exception.ValidationErrors;
import liquibase.sql.Sql;
import liquibase.sqlgenerator.SqlGeneratorChain;
import liquibase.sqlgenerator.core.AbstractSqlGenerator;
import liquibase.statement.AbstractSqlStatement;

@Component
public class FilteredSQLGenerator extends AbstractSqlGenerator<AbstractSqlStatement> {

    private static final Logger LOGGER = Logger.getLogger(FilteredSQLGenerator.class);

    private static String[] excludeArr = new String[0];

    @Value("${sqlgenerator.exclude}")
    private String exclude;

    @PostConstruct
    public void init() {
        LOGGER.debug("Exclude list set to: " + exclude);
        if (StringUtils.isNotBlank(exclude)) {
            excludeArr = StringUtils.split(exclude, ',');
        }
    }

    @Override
    public ValidationErrors validate(AbstractSqlStatement statement, Database database, SqlGeneratorChain sqlGeneratorChain) {
        return sqlGeneratorChain.validate(statement, database);
    }

    @Override
    public int getPriority() {
        // Higher than the built-in generators so this one runs first
        return 1000;
    }

    @Override
    public Sql[] generateSql(AbstractSqlStatement statement, Database database, SqlGeneratorChain sqlGeneratorChain) {
        String clazzName = statement.getClass().getName();
        for (String excluded : excludeArr) {
            if (excluded.equals(clazzName)) {
                // Swallow the statement: no SQL is emitted for excluded classes
                return new Sql[0];
            }
        }
        return sqlGeneratorChain.generateSql(statement, database);
    }
}
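For completeness, here is a minimal sketch of how the property and the component scan could be wired together with Spring annotation-based configuration; the config class and the sqlgenerator.properties file name are illustrative, not part of the original setup:

package com.example.config;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;

// Scans liquibase.sqlgenerator.ext so Spring instantiates FilteredSQLGenerator,
// and loads the file that defines the sqlgenerator.exclude property.
@Configuration
@ComponentScan("liquibase.sqlgenerator.ext")
@PropertySource("classpath:sqlgenerator.properties")
public class SqlGeneratorConfig {
}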

Related

Deploying a transaction event listener in a Neo4jDesktop installation

I have created a project that contains an ExtensionFactory subclass annotated with @ServiceProvider, which returns a LifecycleAdapter subclass that registers a transaction event listener in its start() method, as shown in this example. The code is below:
// Imports below assume the Neo4j 4.x extension API
import static org.neo4j.configuration.GraphDatabaseSettings.DEFAULT_DATABASE_NAME;

import java.util.List;

import org.neo4j.annotations.service.ServiceProvider;
import org.neo4j.dbms.api.DatabaseManagementService;
import org.neo4j.graphdb.event.TransactionEventListener;
import org.neo4j.kernel.extension.ExtensionFactory;
import org.neo4j.kernel.extension.ExtensionType;
import org.neo4j.kernel.extension.context.ExtensionContext;
import org.neo4j.kernel.lifecycle.Lifecycle;
import org.neo4j.kernel.lifecycle.LifecycleAdapter;
import org.neo4j.logging.internal.LogService;

import lombok.RequiredArgsConstructor;

@ServiceProvider
public class EventListenerExtensionFactory extends ExtensionFactory<EventListenerExtensionFactory.Dependencies> {

    private final List<TransactionEventListener<?>> listeners;

    public EventListenerExtensionFactory() {
        this(List.of(new MyListener()));
    }

    public EventListenerExtensionFactory(List<TransactionEventListener<?>> listeners) {
        super(ExtensionType.DATABASE, "EVENT_LISTENER_EXT_FACTORY");
        this.listeners = listeners;
    }

    @Override
    public Lifecycle newInstance(ExtensionContext context, Dependencies dependencies) {
        return new EventListenerLifecycleAdapter(dependencies, listeners);
    }

    @RequiredArgsConstructor
    private static class EventListenerLifecycleAdapter extends LifecycleAdapter {

        private final Dependencies dependencies;
        private final List<TransactionEventListener<?>> listeners;

        @Override
        public void start() {
            DatabaseManagementService managementService = dependencies.databaseManagementService();
            listeners.forEach(listener -> managementService.registerTransactionEventListener(
                    DEFAULT_DATABASE_NAME, listener));
            dependencies.log()
                    .getUserLog(EventListenerExtensionFactory.class)
                    .info("Registering transaction event listener for database " + DEFAULT_DATABASE_NAME);
        }
    }

    interface Dependencies {
        DatabaseManagementService databaseManagementService();
        LogService log();
    }
}
It works fine in an integration test:
public AbstractDatabaseTest(TransactionEventListener<?>... listeners) {
    URI uri = Neo4jBuilders.newInProcessBuilder()
            .withExtensionFactories(List.of(new EventListenerExtensionFactory(List.of(listeners))))
            .withDisabledServer()
            .build()
            .boltURI();
    driver = GraphDatabase.driver(uri);
    session = driver.session();
}
Then I copy the jar file into the plugins directory of my desktop database:
$ cp build/libs/<myproject>.jar /mnt/c/Users/albert.gevorgyan/.Neo4jDesktop/relate-data/dbmss/dbms-7fe3cbdb-11b2-4ca2-81eb-474edbbb3dda/plugins/
I restart the database, and even the whole Neo4j Desktop application, but it doesn't seem to detect the plugin or initialize the factory: no log messages appear in neo4j.log after the start event, and the transaction events that should be captured by my listener are ignored. Interestingly, a custom function that I have defined in the same jar file works - I can call it in the browser. So something must be missing in the extension factory, as it doesn't get instantiated.
Is it possible at all to deploy an ExtensionFactory in a Desktop installation, and if so, what am I doing wrong?
It works after adding a provider configuration file to META-INF/services, as explained in https://www.baeldung.com/java-spi. Neo4j then finds the factory.
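For reference, a provider configuration file is a plain text file under META-INF/services, named after the fully qualified service interface and containing the fully qualified name of the implementation. Assuming Neo4j 4.x, where the extension service interface is org.neo4j.kernel.extension.ExtensionFactory (check the exact name against your Neo4j version), and a hypothetical com.example package for the factory, it would look like this:

src/main/resources/META-INF/services/org.neo4j.kernel.extension.ExtensionFactory:

    com.example.EventListenerExtensionFactory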

Getting so many warnings while using List with a custom POJO Java class in Apache Beam Java

I am new to Apache Beam. I am using Apache Beam with Dataflow on GCP as the runner, and I am getting the following warning while executing the pipeline:
coder of type class org.apache.beam.sdk.coders.ListCoder has a #structuralValue method which does not return true when the encoding of the elements is equal. Element [Person [businessDay=01042020, departmentId=101, endTime=2020-04-01T09:06:02.000Z, companyId=242, startTime=2020-04-01T09:00:33.000Z], Person [businessDay=01042020, departmentId=101, endTime=2020-04-01T09:07:47.000Z, companyId=242, startTime=2020-04-01T09:06:03.000Z], Person [businessDay=01042020, departmentId=101, endTime=2020-04-01T09:48:25.000Z, companyId=242, startTime=2020-04-01T09:07:48.000Z]]
The PCollections are of the form PCollection<KV<String, List<Person>>> and PCollection<KV<String, Iterable<List<Person>>>>.
I have implemented Person as a serializable POJO class and overridden the equals and hashCode methods as well. But I think I need to write a custom ListCoder for Person and register it in the pipeline.
I am not sure how to resolve this issue; please help.
Here is a working example.
If you clone the repo and run ./gradlew run under the playground root dir, you can verify the effect. You can also run it on Dataflow with ./gradlew run --args='--runner=DataflowRunner --project=$YOUR_PROJECT_ID --tempLocation=gs://xxx/staging --stagingLocation=gs://xxx/staging'.
The Person class should look like this if you build it from scratch:
import java.io.Serializable;
import java.util.Objects;

class Person implements Serializable {

    private final String businessDay;
    private final String departmentId;
    private final String companyId;

    public Person(String businessDay, String departmentId, String companyId) {
        this.businessDay = businessDay;
        this.departmentId = departmentId;
        this.companyId = companyId;
    }

    public String companyId() {
        return companyId;
    }

    public String businessDay() {
        return businessDay;
    }

    public String departmentId() {
        return departmentId;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) {
            return true;
        }
        if (other == null || getClass() != other.getClass()) {
            return false;
        }
        Person otherPerson = (Person) other;
        return this.businessDay.equals(otherPerson.businessDay)
                && this.departmentId.equals(otherPerson.departmentId)
                && this.companyId.equals(otherPerson.companyId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(this.businessDay, this.departmentId, this.companyId);
    }
}
I recommend using AutoValue instead of writing the POJO from scratch. Here are some examples, and you can view the whole project here. The advantage is that you don't have to implement equals and hashCode by hand every time you create a new value type; see the sketch below.
Also, in the KV, if the key is an iterable such as a List, wrap it in an object and serialize it explicitly and deterministically (example), because default serialization in Java is nondeterministic.
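A minimal sketch of the same Person as an AutoValue value class, assuming the com.google.auto.value dependency is on the classpath (the accessor names mirror the question's POJO):

import java.io.Serializable;

import com.google.auto.value.AutoValue;

// AutoValue generates the constructor, equals() and hashCode(), so the
// structural-equality contract the coder relies on is implemented for free.
@AutoValue
abstract class Person implements Serializable {

    abstract String businessDay();
    abstract String departmentId();
    abstract String companyId();

    static Person create(String businessDay, String departmentId, String companyId) {
        return new AutoValue_Person(businessDay, departmentId, companyId);
    }
}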

Entity Framework Code First Seed database

I am using E.F. with the Code First approach... Now I have to fill the database with some data... but I cannot do it easily... Something is not clear...
I have googled, but every solution I have found assumes I must create a custom initializer... and I do not want to create a custom initializer.
What I need is a method that, every time I launch some tests, DROPs the database, RE-CREATEs it and fills it with some data.
What I have tried to do is:
public PublicAreaContext()
    : base("PublicAreaContext")
{
    Database.SetInitializer<PublicAreaContext>(new DropCreateDatabaseAlways<PublicAreaContext>());
}
Then I have tried to implement the Seed method inside the Configuration class:
internal sealed class Configuration : DbMigrationsConfiguration<PublicAreaContext>
{
    protected override void Seed(PublicAreaContext context)
    {
        ...
    }
}
But when I debug, I never reach the Seed method... I do hit the constructor of the Configuration class, but not the Seed... why?
Thanks for your help.
You are confusing seed methods. There is one for initializers that you can use when your database is created and there is one for migrations. See http://blog.oneunicorn.com/2013/05/28/database-initializer-and-migrations-seed-methods/
Since you are using DropCreateDatabaseAlways, migrations won't run, so do something like this:
public static class MyDatabase
{
    public static void Initialize()
    {
        Database.SetInitializer(new MyInitializer());
    }
}

public class MyInitializer : DropCreateDatabaseAlways<PublicAreaContext>
{
    protected override void Seed(PublicAreaContext context)
    {
        base.Seed(context);

        context.Roles.Add(new Role
        {
            ID = 1,
            Name = "User",
        });

        context.Roles.Add(new Role
        {
            ID = 2,
            Name = "Admin",
        });

        context.SaveChanges();
    }
}
Other examples:
http://www.codeguru.com/csharp/article.php/c19999/Understanding-Database-Initializers-in-Entity-Framework-Code-First.htm
http://www.techbubbles.com/aspnet/seeding-a-database-in-entity-framework/

groovy script having access to domain classes

I would like to have a Groovy script that can access my domain classes and extract all of their properties.
I have not written any Groovy scripts within my Grails application so far.
How do I do this?
I am thinking of something like
run-script <scriptname>
In the script I would like to
For all domain classes
    For all fields
        println(<database-table-name>.<database-field-name>)
What would be the easiest approach to achieve this?
Below is a script that lists all the domain classes with their properties. It builds a Map containing the DB mapping for each domain class and its properties. If you have a different requirement, you can adapt the same approach.
import org.codehaus.groovy.grails.commons.DefaultGrailsDomainClass
import org.codehaus.groovy.grails.commons.DomainClassArtefactHandler
import org.codehaus.groovy.grails.orm.hibernate.persister.entity.GroovyAwareSingleTableEntityPersister as GASTEP
import org.hibernate.SessionFactory

// Include script dependencies required for task dependencies
includeTargets << grailsScript("Bootstrap")

target(grailsDomainMappings: "List field details for all Grails domain classes") {
    // Task dependencies required for initialization of the app, e.g. the sessionFactory bean
    depends(compile, bootstrap)
    System.out.println("Running script...")

    // Fetch the session factory from the application context
    SessionFactory sessionFactory = appCtx.getBean("sessionFactory")

    // Fetch all domain classes
    def domains = grailsApp.getArtefacts(DomainClassArtefactHandler.TYPE)

    GASTEP persister
    List<String> propertyMappings = []
    Map<String, List<String>> mappings = [:]

    // Iterate over the domain classes
    for (DefaultGrailsDomainClass domainClass in domains) {
        // Get class metadata
        persister = sessionFactory.getClassMetadata(domainClass.clazz) as GASTEP
        propertyMappings = []

        // Fetch the table name mapping
        String mappedTable = persister.tableName

        // Fetch all properties for the domain class
        String[] propertyNames = persister.propertyNames
        propertyNames += persister.identifierPropertyName

        // Fetch the column name mappings for the properties
        propertyNames.each {
            propertyMappings += persister.getPropertyColumnNames(it).first()
        }

        mappings.put(mappedTable, propertyMappings)
    }

    // Print the data
    mappings.each { String table, List<String> properties ->
        properties.each { String property ->
            System.out.println("${table}.${property}")
        }
        System.out.println("++++++++++++++++++++++++++++++++++++++++++++++")
    }
}

setDefaultTarget(grailsDomainMappings)
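Assuming a Grails 2.x application, a Gant script like this can be saved under the scripts/ directory (for example scripts/DomainMappings.groovy, a name chosen here for illustration) and invoked with the hyphenated form of the script name:

    grails domain-mappings

The run-script command mentioned in the question targets plain Groovy scripts; a Gant script with targets, like the one above, is run directly as its own command.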

Avoiding table changes when mapping legacy database tables in Grails?

I have an application that contains some tables auto-generated from Grails domain classes, and one legacy table (say, table legacy) that was created outside of Grails but is mapped by a Grails domain class. Mapping the columns of the legacy table is trivial, but I would like to disable the extra fields and indexes that Grails tries to add to that table.
My question is: How do I instruct Grails not to make any table changes to the legacy table (changes such as adding indexes, foreign keys, version columns, etc.)?
Please note that I do not want to disable the automatic schema generation/updating for all tables, only for the mapped table legacy.
The only way I've been able to do stuff like this is with a custom Configuration class:
package com.foo.bar;

import java.util.ArrayList;
import java.util.List;

import org.codehaus.groovy.grails.orm.hibernate.cfg.GrailsAnnotationConfiguration;
import org.hibernate.HibernateException;
import org.hibernate.dialect.Dialect;
import org.hibernate.dialect.HSQLDialect;
import org.hibernate.tool.hbm2ddl.DatabaseMetadata;

public class DdlFilterConfiguration extends GrailsAnnotationConfiguration {

    private static final String[] IGNORED_NAMES = { "legacy" };

    @Override
    public String[] generateSchemaCreationScript(Dialect dialect) throws HibernateException {
        return prune(super.generateSchemaCreationScript(dialect), dialect);
    }

    @Override
    public String[] generateDropSchemaScript(Dialect dialect) throws HibernateException {
        return prune(super.generateDropSchemaScript(dialect), dialect);
    }

    @Override
    public String[] generateSchemaUpdateScript(Dialect dialect, DatabaseMetadata databaseMetadata) throws HibernateException {
        return prune(super.generateSchemaUpdateScript(dialect, databaseMetadata), dialect);
    }

    private String[] prune(String[] script, Dialect dialect) {
        if (dialect instanceof HSQLDialect) {
            // do nothing for the test environment
            return script;
        }
        List<String> pruned = new ArrayList<String>();
        for (String command : script) {
            if (!isIgnored(command)) {
                pruned.add(command);
            }
        }
        return pruned.toArray(new String[pruned.size()]);
    }

    private boolean isIgnored(String command) {
        command = command.toLowerCase();
        for (String table : IGNORED_NAMES) {
            if (command.startsWith("create table " + table + " ") ||
                command.startsWith("alter table " + table + " ") ||
                command.startsWith("drop table " + table + " ")) {
                return true;
            }
        }
        return false;
    }
}
Put this in src/java (it can't be written in Groovy because of a weird compilation error) and register it in DataSource.groovy using the 'configClass' attribute:
dataSource {
    pooled = true
    driverClassName = ...
    username = ...
    password = ...
    dialect = ...
    configClass = com.foo.bar.DdlFilterConfiguration
}
My solution was a bit simpler. In the mapping section of the domain class, I just set version to false and named the 'id' column:
class DomainClass {
    static mapping = {
        table 'legacyName'
        version false
        columns {
            id column: 'legacy_id'
        }
    }
}
You can try using Hibernate annotations to specify things such as the column name and table instead of creating a normal domain class. For more info, see the "Mapping with Hibernate Annotations" section of the following link:
http://www.grails.org/Hibernate+Integration
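A minimal sketch of that approach, assuming JPA annotations on a Java class; the class name and the some_field column are illustrative:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Maps the existing legacy table and its columns explicitly.
@Entity
@Table(name = "legacy")
public class LegacyRecord {

    @Id
    @Column(name = "legacy_id")
    private Long id;

    @Column(name = "some_field")
    private String someField;

    // getters and setters omitted for brevity
}

In a Grails app, such an annotated class typically lives in src/java and is registered in grails-app/conf/hibernate/hibernate.cfg.xml, as described in the linked documentation.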
