How to use the "recovery tool" to repair a perfino h2 database?

Our perfino server crashed recently and has been logging the ERROR shown below ever since. (There are some clues hinting at an OutOfMemoryError that resulted in a corrupt database.)
The log suggests: 'Possible solution: use the recovery tool'. But neither the official perfino documentation nor the logs offer any instructions on how to proceed.
So here is the question: how do I use the recovery tool?
Stacktrace:
ERROR [collector] server: could not load transaction data
org.h2.jdbc.JdbcSQLException: File corrupted while reading record: "[495834] stream data key:64898 pos:11 remaining:0". Possible solution: use the recovery tool; SQL statement:
SELECT value FROM transaction_names WHERE id=? [90030-176]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:344)
at org.h2.message.DbException.get(DbException.java:178)
at org.h2.message.DbException.get(DbException.java:154)
at org.h2.index.PageDataIndex.getPage(PageDataIndex.java:242)
at org.h2.index.PageDataNode.getNextPage(PageDataNode.java:233)
at org.h2.index.PageDataLeaf.getNextPage(PageDataLeaf.java:400)
at org.h2.index.PageDataCursor.nextRow(PageDataCursor.java:95)
at org.h2.index.PageDataCursor.next(PageDataCursor.java:53)
at org.h2.index.IndexCursor.next(IndexCursor.java:278)
at org.h2.table.TableFilter.next(TableFilter.java:361)
at org.h2.command.dml.Select.queryFlat(Select.java:533)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:646)
at org.h2.command.dml.Query.query(Query.java:323)
at org.h2.command.dml.Query.query(Query.java:291)
at org.h2.command.dml.Query.query(Query.java:37)
at org.h2.command.CommandContainer.query(CommandContainer.java:91)
at org.h2.command.Command.executeQuery(Command.java:197)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:109)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:353)
at com.perfino.a.f.b.a.a(ejt:70)
at com.perfino.a.f.o.a(ejt:880)
at com.perfino.a.f.o.a(ejt:928)
at com.perfino.a.f.o.a(ejt:60)
at com.perfino.a.f.aa.a(ejt:783)
at com.perfino.a.f.o.a(ejt:847)
at com.perfino.a.f.o.a(ejt:792)
at com.perfino.a.f.o.a(ejt:787)
at com.perfino.a.f.o.a(ejt:60)
at com.perfino.a.f.ac.a(ejt:1011)
at com.perfino.b.a.b(ejt:68)
at com.perfino.b.a.c(ejt:82)
at com.perfino.a.f.o.a(ejt:1006)
at com.perfino.a.i.b.d.a(ejt:168)
at com.perfino.a.i.b.d.b(ejt:155)
at com.perfino.a.i.b.d.b(ejt:52)
at com.perfino.a.i.b.d.a(ejt:45)
at com.perfino.a.i.a.b.a(ejt:94)
at com.perfino.a.c.a.b(ejt:105)
at com.perfino.a.c.a.a(ejt:37)
at com.perfino.a.c.c.run(ejt:57)
at java.lang.Thread.run(Thread.java:745)

Note: I couldn't recover my database with the procedure described below. I'm keeping this post as a reference, since the probability of a successful recovery depends on how badly the database is broken, and there is no evidence that the procedure itself is invalid.
By default, perfino uses the H2 Database Engine as its persistence storage. H2 ships with a Recover tool and a RunScript tool for importing SQL statements:
# 1. Create a dump of the current database using the Recover tool [1]
#    This tool creates a 'config.h2.sql' and a 'perfino.h2.sql' db dump
cd ${PERFINO_DATA_DIR}
java -cp ${PATH_TO_H2_LIB}/h2*.jar org.h2.tools.Recover
# 2. Rename the corrupt database file, e.g. to *.bkp
mv perfino.h2.db perfino.h2.db.bkp
# 3. Import the dump from step 1, ignoring errors
java -cp ${PATH_TO_H2_LIB}/h2*.jar \
  org.h2.tools.RunScript \
  -url jdbc:h2:${PERFINO_DATA_DIR}/db/perfino \
  -script perfino.h2.sql -continueOnError
[1]: Perfino bundles a version of h2.jar under ${PERFINO_INSTALL_DIR}/lib/common/h2.jar. You can of course download the official jar and try with that, but in my case I could only restore the database with the jar supplied with perfino.
This failed for me with:
Exception in thread "main" org.h2.jdbc.JdbcSQLException: Feature not supported: "Restore page store recovery SQL script can only be restored to a PageStore file".
If this happens to you, try:
# 1. Delete the corrupt database and mv files
cd ${PERFINO_DATA_DIR}
rm perfino.h2.db perfino.mv.db
# 2. Create an empty PageStore database file manually
touch perfino.h2.db
# 3. Retry the import with MV_STORE=FALSE in the URL [2]
#    (quote the URL so the shell does not split it at the semicolon)
java -cp ${PATH_TO_H2_LIB}/h2*.jar \
  org.h2.tools.RunScript \
  -url "jdbc:h2:${PERFINO_DATA_DIR}/db/perfino;MV_STORE=FALSE" \
  -script perfino.h2.sql \
  -checkResults \
  -continueOnError
[2]: forces H2 to create a PageStore database instead of using the newer MVStore engine (see this thread in the Metabase repository)

I found this article while trying to repair a Confluence internal H2 database, and it worked for me. Here's a shell script as a gist on my GitHub with what I did; you'll have to adjust it for your environment.
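For convenience, here is the whole procedure condensed into one sketch under the same assumptions as above (PERFINO_DATA_DIR and PATH_TO_H2_LIB are placeholders for your environment; stop the perfino server and back up the data directory before running it):
#!/bin/sh
cd "${PERFINO_DATA_DIR}"
# dump the corrupt database to perfino.h2.sql
java -cp "${PATH_TO_H2_LIB}"/h2*.jar org.h2.tools.Recover
# keep the corrupt file around, just in case
mv perfino.h2.db perfino.h2.db.bkp
# re-import the dump; MV_STORE=FALSE forces a PageStore database, per footnote [2]
java -cp "${PATH_TO_H2_LIB}"/h2*.jar org.h2.tools.RunScript \
  -url "jdbc:h2:${PERFINO_DATA_DIR}/db/perfino;MV_STORE=FALSE" \
  -script perfino.h2.sql -continueOnError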

Related

How do I remove Docker completely?

I installed Docker by following this post:
https://blog.ssdnodes.com/blog/getting-started-docker-vps/
sudo curl -sS https://get.docker.com/ | sh
Piping a script straight into the shell like this seems unsafe, so I want to remove everything Docker-related and reinstall it some other way.
After removing everything, I checked with
find / -name '*docker*'
and got this output:
/proc/sys/net/ipv4/conf/docker0
/proc/sys/net/ipv4/neigh/docker0
/proc/sys/net/ipv6/conf/docker0
/proc/sys/net/ipv6/neigh/docker0
/proc/1/task/1/net/dev_snmp6/docker0
/proc/1/net/dev_snmp6/docker0
/proc/2/task/2/net/dev_snmp6/docker0
/proc/2/net/dev_snmp6/docker0
/proc/3/task/3/net/dev_snmp6/docker0
/proc/3/net/dev_snmp6/docker0
/proc/84/task/84/net/dev_snmp6/docker0
/proc/84/net/dev_snmp6/docker0
/proc/92/task/92/net/dev_snmp6/docker0
/proc/92/net/dev_snmp6/docker0
/proc/96/task/96/net/dev_snmp6/docker0
/proc/96/net/dev_snmp6/docker0
/proc/114/task/114/net/dev_snmp6/docker0
/proc/114/net/dev_snmp6/docker0
/proc/127/task/127/net/dev_snmp6/docker0
/proc/127/net/dev_snmp6/docker0
/proc/133/task/133/net/dev_snmp6/docker0
/proc/133/net/dev_snmp6/docker0
/proc/134/task/134/net/dev_snmp6/docker0
/proc/134/net/dev_snmp6/docker0
/proc/151/task/151/net/dev_snmp6/docker0
/proc/151/net/dev_snmp6/docker0
/proc/368/task/368/net/dev_snmp6/docker0
/proc/368/net/dev_snmp6/docker0
/proc/371/task/371/net/dev_snmp6/docker0
/proc/371/net/dev_snmp6/docker0
/proc/372/task/372/net/dev_snmp6/docker0
/proc/372/net/dev_snmp6/docker0
/proc/373/task/373/net/dev_snmp6/docker0
/proc/373/task/376/net/dev_snmp6/docker0
/proc/373/task/390/net/dev_snmp6/docker0
/proc/373/net/dev_snmp6/docker0
/proc/386/task/386/net/dev_snmp6/docker0
/proc/386/net/dev_snmp6/docker0
/proc/393/task/393/net/dev_snmp6/docker0
/proc/393/net/dev_snmp6/docker0
/proc/404/task/404/net/dev_snmp6/docker0
/proc/404/net/dev_snmp6/docker0
/proc/407/task/407/net/dev_snmp6/docker0
/proc/407/net/dev_snmp6/docker0
/proc/408/task/408/net/dev_snmp6/docker0
/proc/408/net/dev_snmp6/docker0
/proc/416/task/416/net/dev_snmp6/docker0
/proc/416/net/dev_snmp6/docker0
/proc/448/task/448/net/dev_snmp6/docker0
/proc/448/net/dev_snmp6/docker0
/proc/572/task/572/net/dev_snmp6/docker0
/proc/572/net/dev_snmp6/docker0
/proc/574/task/574/net/dev_snmp6/docker0
/proc/574/net/dev_snmp6/docker0
/proc/2523/task/2523/net/dev_snmp6/docker0
/proc/2523/net/dev_snmp6/docker0
/proc/2526/task/2526/net/dev_snmp6/docker0
/proc/2526/net/dev_snmp6/docker0
/proc/3110/task/3110/net/dev_snmp6/docker0
/proc/3110/net/dev_snmp6/docker0
/proc/3111/task/3111/net/dev_snmp6/docker0
/proc/3111/net/dev_snmp6/docker0
/proc/3114/task/3114/net/dev_snmp6/docker0
/proc/3114/net/dev_snmp6/docker0
/usr/bin/pm2-docker
/usr/libexec/docker
/usr/lib/node_modules/pm2/bin/pm2-docker
/usr/lib/node_modules/pm2/node_modules/systeminformation/lib/dockerSocket.js
/usr/lib/node_modules/pm2/node_modules/systeminformation/lib/docker.js
/usr/lib/node_modules/pm2/node_modules/#pm2/io/docker-compose.yml
/usr/lib/firewalld/services/docker-swarm.xml
/usr/lib/firewalld/services/docker-registry.xml
/sys/devices/virtual/net/docker0
/sys/class/net/docker0
/var/cache/yum/x86_64/7/docker-ce-stable
/var/lib/docker-engine
/var/lib/yum/repos/x86_64/7/docker-ce-stable
/etc/yum.repos.d/docker-ce.repo
/etc/systemd/system/docker.service.d
How do I remove Docker completely? And are these files remaining on my host safe?
Quick summary of my comments so far, so you can accept it as an answer:
You downloaded the script from a trustworthy source (docker.com) and via HTTPS so there is extremely little risk of your system being compromised.
If your system was compromised, uninstalling docker would likely not solve the problem.
With those two caveats out of the way: The script you ran does a lot of magic to determine your operating system, and then delegates the actual installation to the appropriate package manager, meaning you can simply uninstall it through the usual package management tools of your Linux distribution.
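For this host specifically, the paths in your find output (/etc/yum.repos.d/docker-ce.repo, /var/cache/yum/x86_64/7/docker-ce-stable) point to CentOS 7 with the docker-ce package, so a removal sketch might look like this (package and path names are assumptions derived from that output):
# stop the daemon first; the docker0 entries under /proc and /sys are virtual
# files for the bridge interface and disappear once it is gone
sudo systemctl stop docker
sudo yum remove docker-ce
# optionally remove leftover state (images, containers, volumes) and the repo file
sudo rm -rf /var/lib/docker /var/lib/docker-engine
sudo rm -f /etc/yum.repos.d/docker-ce.repo
The remaining hits in your listing (the pm2 node modules, the firewalld service definitions) are files shipped by other packages that merely mention docker; they are harmless and not a sign of compromise.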

How do I configure SaltStack to transfer a file (or install a package) for the first time?

I am running two instances of RedHat. I have SaltMaster installed on one machine and SaltMinion installed on another. I am using a free version of Salt. I want to test SaltStack to do a basic configuration management task. If it can transfer a file from SaltMaster to SaltMinion, that would be great. If it can install Apache web server on SaltMinion, that would be great. Either task will help me learn. My learning goal is semi-flexible.
I can use salt '*' test.ping. The response is True. I tried this command: salt '*' state.apply
I got this error:
> hostname.fqdn:
> Data failed to compile:
> ----------
> No matching salt environment for environment 'qa' found
> ----------
> No matching sls found for 'qa1' in env 'qa'
> ----------
> No matching sls found for 'base1' in env 'base'
> ----------
> No matching salt environment for environment 'dev' found
> ----------
> Specified SLS base1 in saltenv dev is not available on the salt master or through a configured fileserver
I modified the /etc/salt/master file. I uncommented these lines:
fileserver_backend:
- git
- roots
I tried this command again: salt '*' state.apply
I received this error:
> [ERROR ] Error parsing configuration file: /etc/salt/master -
> expected '<document start>', but found '<block mapping start>' in
> "<string>", line 547, column 1:
> fileserver_backend:
> ^ [ERROR ] Error parsing configuration file: /etc/salt/master - expected '<document start>', but found '<block mapping start>' in
> "<string>", line 547, column 1:
> fileserver_backend:
> ^
I have been following these directions here:
https://docs.saltstack.com/en/latest/topics/tutorials/states_pt1.html
I created a webserver.sls file.
I inserted these lines as the content:
apache:         # ID declaration
  pkg:          # state declaration
    - installed # function declaration
I do not see how the three lines in the directions above would be enough to configure SaltStack to work. Where would the apache installation media need to be? Where would the transfer happen from? Am I supposed to download the media to SaltMaster? I would assume so. But where would I put it? I have a satellite server that makes yum commands work.
Alternatively, how do I get SaltStack to transfer a file from SaltMaster to SaltMinion?
The first error ([...]No matching sls found for 'qa1' in env 'qa'[...]) indicates that you have configured a lot of different environments (file_roots), which are not present on your master's filesystem. Your approach to solve this goes in the correct direction, but leads to this error:
[ERROR ] Error parsing configuration file: /etc/salt/master - expected '<document start>', but found '<block mapping start>' in "<string>", line 547, column 1: fileserver_backend: ^
You should no longer be able to test.ping your minion, since the salt master will not be running anymore. To solve it, just read the error message: it tells you exactly which part of your salt master configuration file salt is unhappy with.
The fileserver_backend configures which types of backend should be available. You should check the file_roots configuration to actually define which roots are available. Roots refer to salt states folders in your filesystem.
A very simple config might look like that:
file_roots:
  base:
    - /srv/salt
It assumes that /srv/salt is the root of your state tree, which effectively means that your webserver.sls should be located in this folder.
Your webserver.sls looks promising - it should install apache2 on a minion when you apply it.
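A quick end-to-end check, assuming the file_roots above (the '*' target mirrors your test.ping usage, and the file names are from your question):
# place the state file in the state tree root, then apply just that state
sudo mkdir -p /srv/salt
sudo cp webserver.sls /srv/salt/
salt '*' state.apply webserver
Note that you do not need to stage any installation media on the master: pkg states delegate to the minion's own package manager, so yum on the minion will pull apache from your satellite server as usual.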
Managing configuration files on the master and transferring them to the minions is something salt can easily achieve. A simple state might look like:
/etc/myawesomeconfigurationfile.conf:
  file.managed:
    - source: salt://myawesomefile # refers to /srv/salt/myawesomefile
    - user: root
    - group: root
    - mode: 640
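If you only want to push a single file ad hoc rather than writing a state, the cp execution module can do that directly; a hypothetical one-liner reusing the names from the state above:
# copies /srv/salt/myawesomefile from the master to the given destination on every minion
salt '*' cp.get_file salt://myawesomefile /etc/myawesomeconfigurationfile.conf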
You also asked about media files that you want to manage. If you are talking about application-related data, it is not a good idea to use salt to move it around. IMO, other approaches like NFS, GlusterFS, or anything else that decouples user content from your application would be a better fit.

"Unexpected error running Liquibase: Unknown Reason" liquibase 3.3.5 and grails war file

This is the command I am running:
java -jar /root/liquibase/liquibase.jar \
--driver=com.mysql.jdbc.Driver \
--logLevel=debug \
--changeLogFile=migrations/changelog.xml \
--classpath=/usr/share/tomcat7/lib/mysql.jar:/var/lib/tomcat7/webapps/myApp.war \
--url="jdbc:mysql://127.0.0.1:3306/mydb" \
--username=myuser \
--password=mypass \
--contexts=MYCONTEXT \
update
This fails with the following unhelpful error message:
Unexpected error running Liquibase: Unknown Reason
SEVERE 9/9/15 2:23 PM: liquibase: Unknown Reason
java.lang.AbstractMethodError
at liquibase.database.DatabaseFactory.register(DatabaseFactory.java:87)
at liquibase.database.DatabaseFactory.<init>(DatabaseFactory.java:29)
at liquibase.database.DatabaseFactory.getInstance(DatabaseFactory.java:40)
at liquibase.integration.commandline.CommandLineUtils.createDatabaseObject(CommandLineUtils.java:50)
at liquibase.integration.commandline.Main.doMigration(Main.java:884)
at liquibase.integration.commandline.Main.run(Main.java:175)
at liquibase.integration.commandline.Main.main(Main.java:94)
I have no idea where to look. I have verified that the jars and wars are correct, i.e.
ls /root/liquibase/liquibase.jar
ls /usr/share/tomcat7/lib/mysql.jar
ls /var/lib/tomcat7/webapps/revolve.war
All list the corresponding file.
Any ideas?
The exploded war looks like this:
WEB-INF\classes\migrations\
changelog.xml
lots_of_other_changes.xml
WEB-INF\classes\migrations\sql
lots of sql files
I have tried lots of variations, including:
--changeLogFile=WEB-INF/classes/migrations/changelog.xml \
The changelogs work fine if I run them on my local PC outside of a war file, although I first have to cd into the directory containing the main changelog.xml, otherwise it does not work.
The main changelog looks like this:
<databaseChangeLog
:
<include file="baseline.xml"/>
<include file="something.xml"/>
and these included files have things like this:
<changeSet id="something" author="me">
<comment>something</comment>
<sqlFile path="sql//something//new_things.sql" />
</changeSet>
NOTE:
Using the grails in-application auto-updater feature with the data-migration 1.4.0 plugin won't work, as it is hard-coded to use liquibase version 2.0.5, which has major bugs.
Using liquibase 3.4+ is not currently an option due to incompatibility.
If I run the command without including the war file location, I correctly get an error saying it could not find the expected changelog file, and it creates the DATABASECHANGELOCK table in the db (so that side is OK).
No matter what I change changeLogFile to, it always gives this error, even with something completely wrong.
We don't want to go through the pain of generating diff SQL and running that SQL by hand.
I suspect the issue is to do with the relative paths of the includes in the change sets.
I also tried changing all the
<include file="something.xml"/>
to
<include file="something.xml" relativeToChangelogFile="true" />
and
<sqlFile path="sql//something//new_things.sql" />
to
<sqlFile relativeToChangelogFile="true" path="sql//something//new_things.sql" />
But this made no difference - same error. I tried exploding the war and running against the exploded files. This works, but is not what we want: there is no way to explode the war on the production machines (they don't have the jar command), and if we deploy the war to the live servers without the DB changes first, the live system will fail.
I don't know about Grails, but I'm immediately noticing those doubled-slashes in your paths. I would expect them to cause problems.
Are you perhaps thinking of the need to escape Windows-style backslashes, where you see things like C:\\path\\to\\file.dat?
But you don't need to do that for Unix-style slashes (or forward slashes, if you will). (And if you did, you'd still have to escape them with backslashes, like this: \/path\/to\/file.dat -- but don't do that.)
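One more thing worth verifying, independent of the slashes: the exact entry path of the changelog inside the war. A war on the classpath is read like a plain jar, so resources resolve at their literal entry paths, not at the paths Tomcat exposes after expansion. A quick check with standard tools (assuming unzip is available, unlike the jar command on your production machines):
# list the war contents and locate the changelog; --changeLogFile must match
# the entry path exactly, e.g. WEB-INF/classes/migrations/changelog.xml
unzip -l /var/lib/tomcat7/webapps/myApp.war | grep changelog.xml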
I have the same situation: using LiquiBase with a Grails application. I had to fall back to liquibase-core-2.0.5.jar, which is embedded in the Grails User Library. I could not figure out how to exclude the jar. Unfortunately, this version of LiquiBase is considered very buggy.

What does this Neo4j batch loader error number mean

I've been using the Neo4j batch loader for a while now and tonight started running into issues building my graph from a fresh database export. Running it yields the following:
> java -server -Xmx4G -jar ~/Dev/github.com/jexp/batch-import/target/batch-import-jar-with-dependencies.jar ./graph.db nodes.csv rels.csv node_index entities exact entities_idx.csv
Usage: Importer data/dir nodes.csv relationships.csv [node_index node-index-name fulltext|exact nodes_index.csv rel_index rel-index-name fulltext|exact rels_index.csv ....]
Using: Importer ./graph.db nodes.csv rels.csv node_index entities exact entities_idx.csv
Using Existing Configuration File
........................
Importing 2412268 Nodes took 4 seconds
.....................
Total import time: 9 seconds
Exception in thread "main" org.neo4j.graphdb.NotFoundException: id=2412269
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.getNodeRecord(BatchInserterImpl.java:917)
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.createRelationship(BatchInserterImpl.java:471)
at org.neo4j.batchimport.Importer.importRelationships(Importer.java:136)
at org.neo4j.batchimport.Importer.doImport(Importer.java:214)
at org.neo4j.batchimport.Importer.main(Importer.java:78)
I was able to successfully run the batch loader for the nodes.csv and rels.csv that are included in its own repository, so I'm thinking that the issue is somewhere in my rels.csv file. However, it's a pretty big file and I would like to know what id=2412269 means, as it seems like the best starting point for diagnosing the failure.
Any ideas?
This means that in the rels.csv file you are trying to create a relationship for a node referenced by id = 2412269, but no such node was created from your nodes.csv file.
After working through the issue with the author of the importer, it turned out that I had single, unescaped quotes in my nodes.csv file, so the rels.csv record pointed at a node that could never be created from nodes.csv. Unfortunately, the error reported on the console was not the error actually causing the issue.
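If you hit something similar, a rough shell sketch to confirm the mismatch before re-exporting (the tab delimiter and the column layout of rels.csv are assumptions about your export):
# the importer assigns node ids by row order, so after 2,412,268 imported nodes,
# id 2412269 refers to one row past the last successfully parsed node
wc -l nodes.csv
# find the rels.csv rows that reference the missing id (start/end assumed to be the first two columns)
awk -F'\t' '$1 == 2412269 || $2 == 2412269' rels.csv
# count the lines containing single quotes in nodes.csv - the actual culprit in my case
grep -c "'" nodes.csv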

launch cassandra-cli error

I get the following errors when I try to run cassandra-cli.
manuzhang@manuzhang-U24E:~/git/cassandra-trunk$ bin/cassandra-cli -h localhost -p 9160
Column Family assumptions read from /home/manuzhang/.cassandra-cli/assumptions.json
Connected to: "Test Cluster" on localhost/9160
Welcome to Cassandra CLI version Unknown
Exception in thread "main" java.lang.AssertionError
at org.apache.cassandra.cli.CliClient.loadHelp(CliClient.java:178)
at org.apache.cassandra.cli.CliClient.getHelp(CliClient.java:171)
at org.apache.cassandra.cli.CliClient.printBanner(CliClient.java:197)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:312)
That line is:
final InputStream is = CliClient.class.getClassLoader().getResourceAsStream("org/apache/cassandra/cli/CliHelp.yaml");
assert is != null;
The file is actually located in $CASSANDRA_HOME/src/resources/org/apache/cassandra/cli.
I have run it successfully several times before.
Well, solved by running ant build in the terminal.
I think it's because I'm building from source and modify some code from time to time.
However, just adding a few lines of comments does not reproduce the problem.
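For anyone else building trunk from source, the fix amounts to a rebuild plus a sanity check (the build output location is an assumption about the trunk layout):
# rebuild so CliHelp.yaml is copied from src/resources onto the classpath again
cd ~/git/cassandra-trunk
ant build
# verify the resource actually landed in the build output
find build -name CliHelp.yaml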
