How do I recover from TransactionFailureException? - neo4j

Something went wrong in my application in the middle of a transaction (the thread was killed, resulting in a ThreadDeath), so the transaction failed, and no new transaction could be started afterwards:
org.neo4j.kernel.api.exceptions.TransactionFailureException:
Kernel has encountered some problem, please perform neccesary action (tx recovery/restart)
What are the actions I should undertake to deal with this issue?
Update: I forgot to mention that I have encountered this type of error before and managed to fix it (at least temporarily) by deleting the transaction log files. But now there don't appear to be any *nioneo* (IIRC) files in the Neo4j data directory at all! Did the location or names of the log files change, or am I missing something? There are neostore.transaction.db.x files which, on grepping, seem to contain chunks of my data. I started a fresh instance of the application (fortunately that was a test), so I can't check it now, but if I had deleted them, would I have been able to restart the app from its previous state?

One of the improvements in Neo4j 2.2 was the unification of the transaction logs; they now live in the neostore.transaction.db.x files.
If the database no longer starts, you can try removing them (but be sure to keep a backup copy first) and restarting. Before doing that, however, try a restart with the files in place. If the presence of transaction logs prevents the database from starting up, I would consider that a bug.
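If you do end up removing the transaction logs, keep a restorable copy first. A minimal sketch of that backup-then-remove step in Python (the store path and the neostore.transaction.db.* pattern are taken from the 2.2 layout described above; adapt to your install):

```python
import glob
import os
import shutil

def backup_and_remove_tx_logs(store_dir, backup_dir):
    """Copy every neostore.transaction.db.* file to backup_dir, then
    remove it from the store. Returns the names of the moved files."""
    os.makedirs(backup_dir, exist_ok=True)
    moved = []
    for path in sorted(glob.glob(os.path.join(store_dir, "neostore.transaction.db.*"))):
        shutil.copy2(path, backup_dir)   # keep a restorable copy first
        os.remove(path)                  # then remove it from the store
        moved.append(os.path.basename(path))
    return moved
```

If the database still fails to start afterwards, copy the files back from the backup directory before investigating further.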

Related

How does Perforce's atomic submit actually work?

Perforce submit is atomic: if the changelist contains 3 files to be checked in, and the operation successfully checks in the first 2 files but something goes wrong while operating on the 3rd, it can roll back the operation on the first 2 files so that everything returns to its original state.
I'm awed and really impressed by that, and I tried to search for how it actually works, but couldn't seem to find anything.
I would appreciate it a lot if someone could help me understand the technical details of how this all works in the background.
Thank you very much in advance.
The fact that Perforce uses a database as its source of truth makes this very simple:
1. The depot files are locked (in the database, as if you'd run p4 lock).
2. The new revision contents are uploaded to the depot archive.
3. The database tables are locked for the final set of checks that everything is okay.
4. The new revision records are written into the database and all the locks are released.
If the submit fails somewhere in step 2, nothing needs to be rolled back, because the new revision contents don't overwrite anything, and they aren't visible as part of the file history until step 4. (An unfortunate side effect of this is that Perforce can actually "leak" disk space on a failed submit, but this is generally pretty minor compared to the expected normal increase in disk usage over time.)
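The reason step 2 needs no rollback can be seen in miniature with a toy model (illustrative Python, not Perforce's actual implementation; all names are invented): content is uploaded under revision keys that the visible history does not reference yet, so a failure simply strands invisible blobs.

```python
class TinyDepot:
    """Toy upload-then-commit store: a failed submit leaves orphaned
    blobs in the archive but never a half-visible revision."""

    def __init__(self):
        self.archive = {}   # uploaded blobs, keyed by (file, revision)
        self.history = {}   # file -> list of *visible* revision numbers

    def submit(self, changes, fail_on=None):
        # "Step 2": upload every new revision's content first.  Each blob
        # is keyed by a revision number the history doesn't reference yet,
        # so readers cannot see it.
        staged = []
        for name, content in changes:
            if name == fail_on:
                # Nothing to roll back: staged blobs are invisible
                # (they just "leak" a little archive space).
                raise IOError("upload failed for " + name)
            rev = len(self.history.get(name, [])) + 1
            self.archive[(name, rev)] = content
            staged.append((name, rev))
        # "Step 4": write the revision records; only now do the new
        # revisions appear in the file history.
        for name, rev in staged:
            self.history.setdefault(name, []).append(rev)
```

A failed submit leaves the history exactly as it was, which is the whole trick: visibility is a single metadata write at the end.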
If you'd like to be able to watch this happen in real time, tailing the journal (P4JOURNAL) will show you the database writes as they happen, and tailing the log (P4LOG) will show you the individual phases of the submit operation from when the user initiates the operation to when the change is fully committed.

FME Server fails some jobs due to not being able to read SDE file but others succeed

Looking for some FME help if anyone can offer it. I am having a bit of an issue where I am running a workspace through FME Server to turn GML into single-line SQL using a GEODATABASE_SDE writer. I have a few other workbenches doing the same thing for different data sets, and they work fine. This particular one, however, runs 21 jobs on the server reading in the different files: 2 succeed and write features to the SQL database, and 19 fail with
An error occurred while attempting to retrieve the connection
parameters from the connection file
I can't figure out why it works for 2 (a different 2 each time) and not the others. The SDE file works fine for connecting to the DB in ArcCatalog.
I have tried rebuilding the writers to make sure that they were pointing at the right SDE connection and did not have some reference to an old one left over.
Has anyone encountered this before, or any ideas on what is causing it? Thanks in advance for any help.

How to cleanly shut down a database in Neo4j

I want to import a database into my local Neo4j server. I unpacked the data from an archive into data/databases, changed the active database, and set allow_format_migration to true.
But now when I run bin/neo4j start I get an error in the Neo4j log. There are many lines, but I think the problem is this one:
Please see the attached cause exception "The database is not cleanly shutdown. The database needs recovery, in order to recover the database, please run the old version of the database on this store.".
What did I do wrong?
I've read tons of information and found the answer:
just delete the transaction files inside your database folder and start Neo4j again.

neo4j cluster: No such log version

I'm trying to use a Neo4j HA cluster (Neo4j 2.0.1), but I get the error "No such log version: ..." after the database is copied from the master.
I deleted all *.log files, but that didn't help.
Can you help me with this problem?
TIA.
"Log" in Neo4j refers to the write-ahead-log that the database uses to ensure durability and consistency. It is stored in files named something like nioneo_logical_log.vX, where X is a number. You should never delete these files manually, even if the database is turned off, this may lead to data loss. If you wish to limit the amount of disk space used by neo for logs, use this configuration:
keep_logical_logs=128M
The error you are seeing means that the copied database cannot perform an incremental update to catch up with the master, because the log files have been deleted. This can happen normally if the copy is very old, or if you have keep_logical_logs set to false.
To resolve this, set keep_logical_logs to a value that makes sense to you, perhaps "keep_logical_logs=2 days" to keep logs up to two days old around. Once you've done that, trigger a new full store copy on your slave by shutting the slave down, deleting the database folder (data/graph.db), and starting the slave back up again.
Newer versions of Neo4j will do this automatically.
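The reseed procedure can be sketched as follows; moving the store aside instead of deleting it keeps a fallback copy. The data/graph.db path matches a default 2.0.x install, and the stop/start commands are placeholders you run separately:

```python
import os
import shutil
import time

def reseed_slave_store(neo4j_home):
    """Move the slave's store directory aside so that, on next start,
    the slave performs a full store copy from the master.

    Run `bin/neo4j stop` before calling this and `bin/neo4j start`
    afterwards (commands assumed for a 2.x tarball install).
    Returns the path the old store was moved to.
    """
    store = os.path.join(neo4j_home, "data", "graph.db")
    backup = store + ".backup." + time.strftime("%Y%m%d%H%M%S")
    shutil.move(store, backup)   # keep the old store instead of deleting it
    return backup
```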

What is .appname.sqlite.migrationdestination_xxxx file? Does it cause sql corruption error?

I'm developing an iOS application using Core Data.
I have received application data from a user that includes the following hidden files:
Documents/.appname.sqlite.migrationdestination_xxxx (549MB)
Documents/.appname.sqlite.migrationdestination_xxxx-shm (721KB)
Documents/.appname.sqlite.migrationdestination_xxxx-wal (0Byte)
And there are appname.sqlite, appname.sqlite-wal and appname.sqlite-shm in the same Documents folder. appname.sqlite is the app's main SQLite file, and the -wal and -shm files seem to be generated automatically by iOS.
(I learned this from "What are the .db-shm and .db-wal extensions in Sqlite databases?")
I think the migrationdestination file is just in-progress data for a migration.
Maybe it remains when the user's device failed to migrate (e.g. iOS terminated my app after it had been in the background a long time).
By the way, some users of my app have hit this trouble:
Mar 10 13:33:24 xxxx-xx-iPhone XXXXXXXX[5416] : CoreData:
error: (11) Fatal error. The database at
/var/mobile/Applications/95D2823D-37E4-4596-9507-B58571D32EBB/Documents/appname.sqlite
is corrupted. SQLite error code:11, 'database disk image is
malformed'
And I found this tip:
Core Data store corruption
One of the answers says the -wal and -shm files cause this error, so I removed them.
However, the user still gets the same error, so I think the migrationdestination files may be the cause.
I'll test removing them tomorrow, then report the result here.
Does anyone have the same trouble, suggestions, or answers?
Thank you for reading my issue.
These are files that exist during a migration. If you are seeing these files then your migration failed. Check your crash logs on that device and confirm.
Are you migrating in the -applicationDidFinishLaunching... method? Are you getting a 0x8badf00d ("bad food") watchdog crash? Those are common situations that will cause a migration to fail in the middle.
Finally, I found a solution for 'database disk image is malformed'.
I renamed these files
before
Documents/.appname.sqlite.migrationdestination_xxxx
Documents/.appname.sqlite.migrationdestination_xxxx-shm
Documents/.appname.sqlite.migrationdestination_xxxx-wal
after
Documents/appname.sqlite
Documents/appname.sqlite-shm
Documents/appname.sqlite-wal
I deleted the old appname.sqlite, appname.sqlite-shm, and appname.sqlite-wal files first.
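For anyone scripting the same recovery outside the app, here is a hedged Python sketch of that delete-and-rename (inside the app you would use NSFileManager instead; the file-name pattern is taken from the listing above, and the helper name is invented):

```python
import glob
import os

def promote_migration_destination(documents_dir, store_name="appname.sqlite"):
    """Swap a completed migration destination in as the main store.

    Deletes the old store (and its -shm/-wal sidecars), then renames the
    hidden .<store>.migrationdestination_* files into their place.
    Returns the path of the new main store.
    """
    pattern = os.path.join(documents_dir, "." + store_name + ".migrationdestination_*")
    bases = [p for p in glob.glob(pattern) if not p.endswith(("-shm", "-wal"))]
    if len(bases) != 1:
        raise RuntimeError("expected exactly one migration destination, found %d" % len(bases))
    base = bases[0]
    target = os.path.join(documents_dir, store_name)
    for ext in ("", "-shm", "-wal"):
        if os.path.exists(target + ext):
            os.remove(target + ext)              # drop the old (corrupted) store file
        if os.path.exists(base + ext):
            os.rename(base + ext, target + ext)  # promote the migrated copy
    return target
```

Keep a backup of the whole Documents folder before running anything like this against real user data.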
My guess at why this error happens:
maybe the user's migration actually succeeded, so there were two stores, appname.sqlite and .appname.sqlite.migrationdestination_xxxx.
iOS then went to swap these files, but the app was killed for some reason, leaving appname.sqlite only partially deleted.
The app still opens appname.sqlite every time, iOS wrongly concludes it is corrupted, and so my app can't run.
Now I'm going to write this swap script and apply it to my app.
Thank you for reading my issue.
