I'm running a Neo4j v3.5.11 CE database in Docker on AWS, with the data on a Docker volume. I want to upgrade to 4.4.9, so I created a tar of ./graph.db and brought it back to my dev box, extracting it to /var/lib/neo4j/data/databases. When I mount that into a neo4j v3.5.11 container it starts fine, and I can see all the data via localhost:7474.
Next I try mounting it into a neo4j v4.0.0 container via:
docker run -d -p 7474:7474 -p 7687:7687 -v /var/lib/neo4j/data:/var/lib/neo4j/data -v /var/lib/neo4j/plugins:/plugins -v /var/lib/neo4j/logs:/var/log/neo4j -e NEO4J_AUTH=none -e NEO4J_dbms_allow__upgrade=true --name neo4j neo4j:4.0.0
Neo4j fails with: "Transaction logs contains entries with prefix 2, and the highest supported prefix is 1. This indicates that the log files originates from a newer version of neo4j." This is odd, because the store was upgraded from 3.5.5, has been running on 3.5.11, and has never been touched by a newer version.
docker logs neo4j-apoc
Fetching versions.json for Plugin 'apoc' from https://neo4j-contrib.github.io/neo4j-apoc-procedures/versions.json
Installing Plugin 'apoc' from https://github.com/neo4j-contrib/neo4j-apoc-procedures/releases/download/4.0.0.7/apoc-4.0.0.7-all.jar to /plugins/apoc.jar
Applying default values for plugin apoc to neo4j.conf
Skipping dbms.security.procedures.unrestricted for plugin apoc because it is already set
Directories in use:
home: /var/lib/neo4j
config: /var/lib/neo4j/conf
logs: /logs
plugins: /plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/lib/neo4j/run
Starting Neo4j.
2022-09-10 14:18:32.888+0000 WARN Unrecognized setting. No declared setting with name: apoc.export.file.enabled
2022-09-10 14:18:32.892+0000 WARN Unrecognized setting. No declared setting with name: apoc.import.file.enabled
2022-09-10 14:18:32.893+0000 WARN Unrecognized setting. No declared setting with name: apoc.import.file.use_neo4j_config
2022-09-10 14:18:32.921+0000 INFO ======== Neo4j 4.0.0 ========
2022-09-10 14:18:32.934+0000 INFO Starting...
2022-09-10 14:18:48.713+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabaseService#123d7057' was successfully initialized, but failed to start. Please see the attached cause exception "Transaction logs contains entries with prefix 2, and the highest supported prefix is 1. This indicates that the log files originates from a newer version of neo4j.". Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabaseService#123d7057' was successfully initialized, but failed to start. Please see the attached cause exception "Transaction logs contains entries with prefix 2, and the highest supported prefix is 1. This indicates that the log files originates from a newer version of neo4j.".
I tried a couple of things:
1.) Deleting the transaction logs: sudo rm graph.db/neostore.transaction.db.* It throws the exact same transaction log error, even though there are no transaction log files left in the directory.
2.) A database recovery, by adding this to the run command: -e NEO4J_unsupported_dbms_tx__log_fail__on__corrupted__log__files=false This fails with "Unknown store version 'SF4.3.0'":
2022-09-10 15:39:48.458+0000 INFO Starting...
2022-09-10 15:40:34.529+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabaseService#2a39aa2b' was successfully initialized, but failed to start. Please see the attached cause exception "Unknown store version 'SF4.3.0'". Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabaseService#2a39aa2b' was successfully initialized, but failed to start. Please see the attached cause exception "Unknown store version 'SF4.3.0'".
org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabaseService#2a39aa2b' was successfully initialized, but failed to start. Please see the attached cause exception "Unknown store version 'SF4.3.0'".
Any ideas appreciated! Thanks!
Deleting transaction logs is never a good idea. What you want to do is add the setting:
dbms.allow_upgrade=true
In the official Docker image this is passed as the environment variable NEO4J_dbms_allow__upgrade=true. Then it should work, as the docs state that you can upgrade from the latest 3.5 release to Neo4j 4.0.0.
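A minimal sketch of the 4.0.0 run command with the flag set, reusing the data mount and image tag from the question (other mounts omitted for brevity):

# sketch only: the key part is the env-var form of dbms.allow_upgrade
docker run -d -p 7474:7474 -p 7687:7687 \
  -v /var/lib/neo4j/data:/var/lib/neo4j/data \
  -e NEO4J_AUTH=none \
  -e NEO4J_dbms_allow__upgrade=true \
  --name neo4j neo4j:4.0.0

Since the run command in the question already includes NEO4J_dbms_allow__upgrade=true, double-check the spelling: one NEO4J_ prefix, dots become single underscores, and each literal underscore in the setting name is doubled.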
Related
I have been running a Neo4j database for a while without problems.
Yesterday our server OS was updated from CentOS 8 to CentOS Stream, and since this upgrade our Neo4j database does not start anymore.
The service is listed as running but on every query it says: Unable to get a routing table for database 'neo4j' because this database is unavailable.
I cannot log in to the cypher shell because it shuts down with the same error.
I'm not sure what I can do here, I would like to not reset the database as we need the information in there.
The versions I'm running are as follows:
neo4j-4.3.6-1.noarch
neo4j-java11-adapter-1-1.noarch
I had a similar issue, so I checked the logs and found the error below:
Caused by: java.nio.file.AccessDeniedException:
/var/lib/neo4j/data/databases/neo4j/neostore
The fix for this issue is to change the ownership of the Neo4j database folders and files back to the neo4j user:
cd /var/lib/neo4j/data
sudo chown -R neo4j:neo4j databases/
sudo chown -R neo4j:neo4j transactions/
Then restart the neo4j service:
sudo systemctl restart neo4j.service
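To confirm the ownership change took effect and the service came back up, something along these lines should do (paths as above):

# both directories should now be owned by neo4j:neo4j
ls -ld /var/lib/neo4j/data/databases /var/lib/neo4j/data/transactions
# the service should report active (running)
sudo systemctl status neo4j.service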
I am trying to install a ThingsBoard PE instance on Ubuntu 18.04.3 LTS.
When I run this command:
sudo /usr/share/thingsboard/bin/install/install.sh --loadDemo
I get the following error:
Error: Could not find or load main class org.springframework.boot.loader.PropertiesLauncher
ThingsBoard installation failed!
The thingsboard.log file is not present in /var/log/thingsboard.
Can anyone suggest what the reason for this error might be?
Make sure the thingsboard database actually exists. I had the same error: I had followed the instructions, but possibly my CREATE DATABASE statement was not terminated with ';', or it still needed to be run.
1.) Restart the Ubuntu server.
2.) Log in and connect to psql: psql -U postgres -d postgres -h 127.0.0.1 -W
3.) Check for the existence of the database: \list
4.) If it does not exist, run: CREATE DATABASE thingsboard;
5.) Run \list again and make sure it exists, then quit with \q.
6.) Re-run the demo install script: sudo /usr/share/thingsboard/bin/install/install.sh --loadDemo
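If you prefer to do the check non-interactively, the same sequence can be run as one-off psql commands (a sketch, assuming the same postgres superuser and host as above):

# list databases and look for a 'thingsboard' row
psql -U postgres -d postgres -h 127.0.0.1 -W -c "\list"
# create it only if it is missing
psql -U postgres -d postgres -h 127.0.0.1 -W -c "CREATE DATABASE thingsboard;"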
Happy Thing-ing.
Along with what Antony Horne suggested above, you can also remove the ThingsBoard-related directories under /tmp/. Remove and recreate the 'thingsboard' database, and make sure to set the password to 'postgres'. Then re-run the command:
/usr/share/thingsboard/bin/install/install.sh --loadDemo
It should be fine.
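For the recreate step, a possible sketch using one-off psql commands (this assumes nothing else depends on the existing thingsboard database, and that the installer expects the postgres user's password to be 'postgres', as described above):

# drop and recreate the thingsboard database from scratch
psql -U postgres -d postgres -h 127.0.0.1 -W -c "DROP DATABASE IF EXISTS thingsboard;"
psql -U postgres -d postgres -h 127.0.0.1 -W -c "CREATE DATABASE thingsboard;"
# set the postgres password to 'postgres'
psql -U postgres -d postgres -h 127.0.0.1 -W -c "ALTER USER postgres WITH PASSWORD 'postgres';"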
Though after that, when I tried to start ThingsBoard with
service thingsboard start
it did not work, because the 'thingsboard' service was unrecognized. I couldn't solve that yet.
I am trying to run a Docker container with Jenkins in it using the command below:
docker run --rm -p 2222:2222 -p 9080:9080 -p 8081:8081 -p 9418:9418 -ti jenkinsci/workflow-demo
I continuously get the errors below:
INFO: Failed mkdirs of /var/jenkins_home/caches
[7412] Connection from 127.0.0.1:57701
[7412] Extended attributes (16 bytes) exist
[7412] Request upload-pack for '/repo'
[4140] [7412] Disconnected
[7415] Connection from 127.0.0.1:39829
[7415] Extended attributes (16 bytes) exist
[7415] Request upload-pack for '/repo'
[4140] [7415] Disconnected
I am following: https://github.com/jenkinsci/workflow-aggregator-plugin/blob/master/demo/README.md
My configuration:
OS : CentOS Linux release 7.2.1511 (Core)
user : jenkins
Checked inside the container: the directory /var/jenkins_home/caches was being created as the jenkins user, and it contained another directory: git-f20b64796d6e86ec7654f683c3eea522
Everything else is at the defaults.
If I google that error, I find this page: https://recordnotfound.com/git-plugin-jenkinsci-31194/issues (I know, not the project you're looking at, but it may be the same or a similar issue). If you do a text search on that page for the error, you'll see the line:
fix logging "Failed mkdirs of /var/jenkins_home/caches" when the directory already exists
It indicates that this is an open issue, logged 11 days ago, albeit for a different repo. Does deleting the folder fix the issue? Maybe monitor that bug report for a fix, or log an issue against the workflow-aggregator-plugin.
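If you want to try deleting the folder, one way from the host (the container name is a placeholder; take the real one from docker ps):

# find the running container's name or ID
docker ps
# check ownership of the caches directory inside the container
docker exec <container> ls -ld /var/jenkins_home/caches
# remove it and let Jenkins recreate it on the next checkout
docker exec <container> rm -rf /var/jenkins_home/caches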
I'm trying to set up an OrientDB distributed configuration with Docker, but I'm getting an error when starting the second node:
2015-10-09 17:14:14:066 WARNI [node1444321499719]->[node1444321392311] requesting deploy of database 'testDB' on local server... [OHazelcastPlugin]
2015-10-09 17:14:14:117 INFO  [node1444321499719]<-[node1444321392311] received updated status node1444321499719.testDB=SYNCHRONIZING [OHazelcastPlugin]
2015-10-09 17:14:14:119 INFO  [node1444321499719]<-[node1444321392311] received updated status node1444321392311.testDB=SYNCHRONIZING [OHazelcastPlugin]
2015-10-09 17:14:15:935 WARNI [node1444321499719] moving existent database 'testDB' located in '/orientdb/databases/testDB' to '/orientdb/databases/../backup/databases/testDB' and get a fresh copy from a remote node... [OHazelcastPlugin]
2015-10-09 17:14:15:936 SEVER [node1444321499719] error on moving existent database 'testDB' located in '/orientdb/databases/testDB' to '/orientdb/databases/../backup/databases/testDB'. Try to move the database directory manually and retry [OHazelcastPlugin]
[node1444321499719] Error on starting distributed plugin
com.orientechnologies.orient.server.distributed.ODistributedException: Error on moving existent database 'testDB' located in '/orientdb/databases/testDB' to '/orientdb/databases/../backup/databases/testDB'. Try to move the database directory manually and retry
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.backupCurrentDatabase(OHazelcastPlugin.java:1007)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.requestDatabase(OHazelcastPlugin.java:954)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.installDatabase(OHazelcastPlugin.java:893)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.installNewDatabases(OHazelcastPlugin.java:1426)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.startup(OHazelcastPlugin.java:184)
at com.orientechnologies.orient.server.OServer.registerPlugins(OServer.java:979)
at com.orientechnologies.orient.server.OServer.activate(OServer.java:346)
at com.orientechnologies.orient.server.OServerMain.main(OServerMain.java:41)
I don't get this error when I start the OrientDB cluster without Docker.
Also, I can move the directory manually inside the container:
[root@64f6cc1eba61 orientdb]# mv -v /orientdb/databases/testDB /orientdb/databases/../backup/databases/testDB
'/orientdb/databases/testDB' -> '/orientdb/databases/../backup/databases/testDB'
'/orientdb/databases/testDB/distributed-config.json' -> '/orientdb/databases/../backup/databases/testDB/distributed-config.json'
removed '/orientdb/databases/testDB/distributed-config.json'
removed directory: '/orientdb/databases/testDB'
[root@64f6cc1eba61 orientdb]# ls -l /orientdb/databases/../backup/databases/testDB
total 4
-rw-r--r--. 1 root root 455 Oct 9 11:32 distributed-config.json
[root@64f6cc1eba61 orientdb]#
I'm using OrientDB version 2.1.3
This was reported and fixed:
https://github.com/orientechnologies/orientdb/issues/4891
Set the 'distributed.backupDirectory' variable to a specific directory and the issue should be gone.
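One way to pass it in a Docker setup like the one in the question, sketched below. This assumes the image's start script passes the ORIENTDB_SETTINGS environment variable through as extra JVM -D options (stock OrientDB start scripts do, but verify against the image you use); the container name, volume path and image tag are placeholders:

# sketch only: set distributed.backupDirectory for the second node
docker run -d --name orientdb-node2 \
  -e ORIENTDB_SETTINGS="-Ddistributed.backupDirectory=/orientdb/backup/databases" \
  -v /data/orientdb/node2/databases:/orientdb/databases \
  orientdb:2.1.3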
By the way, in our experience running OrientDB distributed in Docker is currently a no-go:
- Docker does not support multicast yet; you can work around it, but it's painful. And the main problem:
- Docker doesn't reuse IP addresses on restart, so a container restart gives it a new IP address, which messes up your cluster big time.
We abandoned using OrientDB distributed with Docker until Docker is fixed on both issues (I believe both are on the roadmap).
If you experience otherwise, I'm happy to hear your thoughts.
My Ruby application was working fine until earlier this week, when the system crashed, and now I get an error message on the page...
Error message:
Failed to connect to a master node at localhost:27017 (Mongo::ConnectionFailure)
Exception class:
PhusionPassenger::UnknownError
Application root:
/home/deploy/www/gm-git
The system is set up as such...
• Ruby on Rails 3.0.8
• Mongoid 2.0.2
• Redis & Resque for background processing
I have tried the following boot sequence, but without success...
/opt/redis/redis-server
/opt/mongodb/bin/mongod --force --logpath /opt/mongodb/bin
rake environment rescue:workers
touch /home/proyectos/gm/test_git/goldenmile
I have updated this many times, for example to:
/opt/redis/redis-server
/opt/mongodb/bin/mongod --fork --logpath /opt/mongodb/bin
/opt/mongodb/bin/mongod --repair
rake environment rescue:workers
touch /home/proyectos/gm/test_git/goldenmile
But I get error messages in the terminal such as
/opt/mongodb/bin/mongod --start
ERROR: unknown option start
/opt/mongodb/bin/mongod --status
ERROR: unknown option status
/opt/mongodb/bin/mongod start
Invalid command: start
/opt/mongodb/bin/mongod/ --logpath /opt/mongodb/bin
-bash: /opt/mongodb/bin/mongod/: Not a directory
/opt/mongodb/bin/mongod/ --logpath /opt/mongodb/bin/mongod/
-bash: 2b.: command not found
/opt/mongodb/bin/mongod/ --logpath /opt/mongodb/bin/mongod
-bash: /opt/mongodb/bin/mongod/: Not a directory
Any help/directions would be most useful
The Mongo::ConnectionFailure error for localhost:27017 is typical when mongod is not running. You should check for this using the ps command, restart mongod if necessary, and then run the mongo shell. If the mongo shell reports "Error: couldn't connect to server 127.0.0.1 shell/mongo.js:84" then go back and check the status of your mongod server process.
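Concretely (a sketch that follows the question's /opt/mongodb layout and assumes the mongo shell sits next to mongod):

# is a mongod process running?
ps aux | grep [m]ongod
# if it is, try connecting with the shell
/opt/mongodb/bin/mongo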
The following lines are suspicious.
/opt/mongodb/bin/mongod --fork --logpath /opt/mongodb/bin
/opt/mongodb/bin/mongod --repair
It appears that you are trying to run the mongod server and repair at the same time. The repair attempt will bail out if mongod is already running, and I'm guessing that you really intended to run it before starting db service. If you are on a 64-bit platform, journaling is on by default and you should not do a repair to recover a consistent state, see http://www.mongodb.org/display/DOCS/Durability+and+Repair
Also, repair can take a long time, and during this time, you cannot start another mongod for db service. I suspect that this is your underlying problem - that repair was running, causing attempts to start db service to fail.
Regardless, it appears that mongod is not running. Options like --start, --status, and start are all unknown, as stated, and mongod also prints a full list of options for each of these errors. The trailing slash after mongod tells the shell to treat it as a directory, hence "Not a directory". Did you try executing mongod in the shell with no options? What happens? You need to get mongod running as a db server (not repair), and you should verify that it is up and that you can connect with the mongo shell before you go any further.
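If mongod is indeed not running, a start command along the lines of the question's own attempt would look like the sketch below. Note that --logpath must point at a file, not a directory; the log file path here is only an example (its directory must exist), and add --dbpath as well if your data is not in the default /data/db:

# start mongod as a daemon, logging to a file
/opt/mongodb/bin/mongod --fork --logpath /opt/mongodb/log/mongod.log
# then verify you can connect
/opt/mongodb/bin/mongo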
Let us know how it works out.