How can I force Liquibase to recalculate checksums without re-running the statements?

We're using Liquibase 3.2 with Java 6. Is there a way I can force Liquibase to recalculate checksums without re-running the same statements from our Liquibase files? In our database, I run this ...
update DATABASECHANGELOG set md5sum = null where 1;
However, when I run my Liquibase change scripts, certain executions still fail with the following errors ...
invoking liquibase change script with file /tmp/deploywork/db.changelog-master.xml
running /usr/java/liquibase/liquibase --logLevel=info --driver=com.mysql.jdbc.Driver --classpath=/usr/java/jboss/modules/com/mysql/main/mysql-connector-java-5.1.22-bin.jar --changeLogFile=/tmp/deploywork/db.changelog-master.xml --url="jdbc:mysql://myservername:3306/my_db" --username=username --password=password update
INFO 5/13/15 2:15 PM: liquibase: Successfully acquired change log lock
INFO 5/13/15 2:15 PM: liquibase: Reading from my_db.DATABASECHANGELOG
INFO 5/13/15 2:15 PM: liquibase: Successfully released change log lock
Unexpected error running Liquibase: Validation Failed:
3 change sets check sum
db.changelog-1.0.xml::1357593229391-25::rob (generated) is now: 7:5cfe9ecd779a71b6287ef2360a6979bf
db.changelog-7.0.xml::create-address-email-index::davea is now: 7:da0132e30ebd6a1bc52d9a39bb8c56d7
db.changelog-7.0.xml::add-myproject-event-object-id-col::davea is now: 7:2eab5d784647ce33ef3488aa8c383443
SEVERE 5/13/15 2:15 PM: liquibase: Validation Failed:
3 change sets check sum
db.changelog-1.0.xml::1357593229391-25::rob (generated) is now: 7:5cfe9ecd779a71b6287ef2360a6979bf
db.changelog-7.0.xml::create-address-email-index::davea is now: 7:da0132e30ebd6a1bc52d9a39bb8c56d7
db.changelog-7.0.xml::add-myproject-event-object-id-col::davea is now: 7:2eab5d784647ce33ef3488aa8c383443
liquibase.exception.ValidationFailedException: Validation Failed:
3 change sets check sum
db.changelog-1.0.xml::1357593229391-25::rob (generated) is now: 7:5cfe9ecd779a71b6287ef2360a6979bf
db.changelog-7.0.xml::create-address-email-index::davea is now: 7:da0132e30ebd6a1bc52d9a39bb8c56d7
db.changelog-7.0.xml::add-myproject-event-object-id-col::davea is now: 7:2eab5d784647ce33ef3488aa8c383443
at liquibase.changelog.DatabaseChangeLog.validate(DatabaseChangeLog.java:181)
at liquibase.Liquibase.update(Liquibase.java:191)
at liquibase.Liquibase.update(Liquibase.java:174)
at liquibase.integration.commandline.Main.doMigration(Main.java:997)
at liquibase.integration.commandline.Main.run(Main.java:170)
at liquibase.integration.commandline.Main.main(Main.java:89)
Here is one of the change sets that the script is complaining about …
<changeSet author="davea" id="add-myproject-event-object-id-col">
    <addColumn tableName="sb_myproject_event">
        <column name="OBJECT_ID" type="VARCHAR(32)"/>
    </addColumn>
    <createIndex indexName="SB_myproject_EVENT_IDX"
                 tableName="sb_myproject_event"
                 unique="false">
        <column name="OBJECT_ID" type="varchar(32)" />
    </createIndex>
    <sql>update sb_myproject_event set object_id=LEFT(SUBSTRING_INDEX(event_data, '"id":"', -2), 24) where object_id is null and event_data is not null;</sql>
    <!-- Delete older events that no longer need to be processed -->
    <sql>delete from sb_myproject_event where id not in (select q.* from (select e.id FROM sb_myproject_event e, (select object_id, max(date_processed) d from sb_myproject_event group by object_id) o where e.object_id = o.object_id and e.date_processed = o.d) q);</sql>
</changeSet>
As I said, I only want to recalculate checksums (have to do this because we're changing Liquibase versions).

Rather than clearing the checksums yourself using SQL, it will probably be better to let Liquibase do that by using the clearCheckSums command:
https://docs.liquibase.com/commands/community/clearchecksums.html
Removes current checksums from database. On next run checksums will be recomputed.
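From the command line this is just another Liquibase command: running the same invocation as in the deploy script above, but with clearCheckSums in place of update, clears the stored checksums without executing any changesets (a sketch reusing the connection options from the question; adjust paths and credentials to your environment):
/usr/java/liquibase/liquibase --driver=com.mysql.jdbc.Driver --classpath=/usr/java/jboss/modules/com/mysql/main/mysql-connector-java-5.1.22-bin.jar --changeLogFile=/tmp/deploywork/db.changelog-master.xml --url="jdbc:mysql://myservername:3306/my_db" --username=username --password=password clearCheckSums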

Use the command below if you are using Maven:
mvn liquibase:clearCheckSums

For those interested, what liquibase:clearCheckSums does is basically run this SQL:
UPDATE databasechangelog SET MD5SUM = NULL
Alternatively, you can truncate the 'databasechangelog' table; the new changelog entries will be inserted the next time liquibase is run.
If you are using JHipster, this behavior was fixed in v7.x so that Liquibase knows to recompute checksums. (This applies if you are getting checksum validation errors and you are sure the changelog entries you have are correct.)

Related

Neo4j fails to start up due to trigger execution failure

I have a trigger called create-owner-notification. It worked like a charm after I created it, but after a restart of Neo4j it blocks a successful start-up of the database. The error found in debug.log is:
[a.t.Trigger] Error executing trigger create-owner-notification in phase before Unknown function 'apoc.create.uuid' (line 5, column 83 (offset: 193))
"MERGE (o)<-[:RELATES_TO]-(n:Notification:OwnerNotification{date_created: dt, uid: apoc.create.uuid()})<-[:HAS_NOTIFICATION{read: false, date_received: dt}]-(u)"
at org.neo4j.kernel.impl.query.QueryExecutionKernelException.asUserException(QueryExecutionKernelException.java:35)
[..]
As can be seen, it doesn't recognise the function apoc.create.uuid.
My first idea was to simply remove the apoc.create.uuid dependency and call the native randomUUID() function instead. However, there is no way I can modify or remove the current trigger, because the database cannot start. If I set the config value apoc_trigger_enabled to false, the database does start successfully, but then I cannot interact with the trigger in order to remove it.
If need be I can share further logging or config files.
How can I get the database to start up and get the trigger to work again?

SQL Workbench/J WbDataDiff Timestamp

I'm executing the WbDataDiff command in SQL Workbench/J to compare two PostgreSQL 10 tables, as follows:
WbDataDiff -referenceProfile="prod"
-targetProfile="dev"
-referenceTables=public."Product"
-file=migrate_staging.sql
-includeDelete=false
-sqlDateLiterals="ansi"
But the SQL output of the command is not valid SQL. Notice in the following example that the timestamp should have been something like '2020-07-14T16:00:48.918167', with no spaces and with single quotes. I've tried "ansi", "dbms" and "default" for the sqlDateLiterals parameter, but the output was the same.
UPDATE "Product"
SET "UpdateDate" = 2020 -07 -14 T16:00:48.918167
WHERE "OID" = 11109;
So how can I output the proper Timestamp format using this command?
It turns out this was a bug that has been fixed in the dev build as of 15/07/2020.

Informix: create table <table name> as select * from <old table> locked the DB, how to unlock it?

I was doing some R&D on table field alterations, so I needed a clone of a table.
I ran the command "create table <table name> as select * from <old table>" and it worked.
However, when I ran it a second time, I cancelled the command partway through, and after that I am getting the error below.
$ select count(*) from my_table_copy;
SQL -211: Cannot read system catalog (systables).
ISAM -154: ISAM error: Lock Timeout Expired
SQLSTATE: IX000 at /dev/stdin:1
When I try to access the DB through OpenAdmin, I also get this error:
256 : Database query failed: -
Error: -244 [Informix][Informix ODBC Driver][Informix]Could not do a
physical-order read to fetch next row. sqlerrm(systables)
(SQLExecute[-244] at
How to resolve this?
Thanks,
You are most likely getting these lock errors because the engine is rolling back your clone-table transaction.
Check with "onstat -x" whether there is a transaction with an R in the flags column. The est. rb_time column shows an estimate of when the rollback will complete.
My suggestion? If you don't need exactly the same data in the new table, you can put "SET ISOLATION TO DIRTY READ;" right before your create table command.
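A minimal sketch of that suggestion (the original table name is not shown in the question, so my_table here is a placeholder):
-- my_table is a placeholder for the original table being cloned
SET ISOLATION TO DIRTY READ;
CREATE TABLE my_table_copy AS SELECT * FROM my_table;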

Can't connect to CFS node

I removed (or decommissioned, I can't remember) a DSE Analytics node (with IP 10.14.5.50) a couple of months ago. When I now try to execute a dse shark (CREATE TABLE ccc AS SELECT ...) query, I receive:
15/01/22 13:23:17 ERROR parse.SharkSemanticAnalyzer: org.apache.hadoop.hive.ql.parse.SemanticException: 0:0 Error creating temporary folder on: cfs://10.14.5.50/user/hive/warehouse/mykeyspace.db. Error encountered near token 'TOK_TMP_FILE'
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1256)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1053)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8342)
at shark.parse.SharkSemanticAnalyzer.analyzeInternal(SharkSemanticAnalyzer.scala:105)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
at shark.SharkDriver.compile(SharkDriver.scala:215)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:342)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:977)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
at shark.SharkCliDriver.processCmd(SharkCliDriver.scala:347)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
at shark.SharkCliDriver$.main(SharkCliDriver.scala:240)
at shark.SharkCliDriver.main(SharkCliDriver.scala)
Caused by: java.lang.RuntimeException: java.io.IOException: Error connecting to node 10.14.5.50:9160 with strategy STICKY.
at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:216)
at org.apache.hadoop.hive.ql.Context.getExternalScratchDir(Context.java:270)
at org.apache.hadoop.hive.ql.Context.getExternalTmpFileURI(Context.java:363)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1253)
... 12 more
I guess the above error is due to my keyspace referring to the old node:
shark> DESCRIBE DATABASE mykeyspace;
OK
mykeyspace cfs://10.14.5.50/user/hive/warehouse/mykeyspace.db
Time taken: 0.997 seconds
Is there any way for me to fix this incorrect database path?
Tried (but failed) workaround to recreate the database: in cqlsh I created a keyspace thekeyspace and added a table thetable. I then opened up dse hive (and noticed that DESCRIBE DATABASE thekeyspace gives me a correct cfs path). However, I am unable to drop the database using DROP DATABASE thekeyspace.
Additional information:
I have no external tables in my keyspace.
Making the SELECT against the tables works.
Setting -hiveconf cassandra.host=WORKING_NODE_IP does not help.
The following commands return proper IPs (i.e. not X.X.X.50):
dsetool listjt
dsetool jobtracker
dsetool sparkmaster
I am getting the same error when I execute the query using dse hive.
No Shark variable is referring to X.X.X.50 when I execute set; in its REPL.
I am running DSE 4.5.
I stumbled across this page, which says you need to TRUNCATE "HiveMetaStore"."MetaStore" (in cqlsh) after removing Hive nodes. That did the trick.
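In cqlsh that amounts to the following (a sketch; the keyspace and table names are the ones given above):
TRUNCATE "HiveMetaStore"."MetaStore";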

db2 stored procedure error when update statement follows rollback

I have a stored procedure where, at the end, I check for errors; if there are errors, I perform a rollback and then update the status on the batch table to 'FAILED'. When I run the stored procedure, I regularly get an SQLCODE -818 error saying 'a timestamp conflict occurred'.
When I remove the update statement that changes the status on the batch table, I do not get the error.
What is the best practice to perform these actions so I avoid getting the error?
The section of code looks like this:
IF v_error_count > 0 THEN
    -- Batch failed
    ROLLBACK;
    UPDATE batch_table bt
    SET bt.batch_status = 'FAILED'
    WHERE batch_id = input_batch_id;
END IF;
Thanks for any help.
SQLCODE -818 indicates that the internal timestamp DB2 uses to ensure consistency does not match: the timestamp in the running module differs from the one in the DBRM version created when the SQL statements were precompiled.
You might check with your DBA (or someone else at your site), because the specific steps you have to perform may vary. For a general overview, you can see this article on IBM Knowledge Center.
