How to remove the neo4j database from CE edition

I am using Neo4j CE; there are system and neo4j databases.
-bash-4.2$ neo4j --version
neo4j 4.3.2
I am facing a startup issue with the neo4j database:
neo4j@system> show databases;
| name | address | role | requestedStatus | currentStatus | error | default | home |
| "neo4j" | "awsneodevldb01.est1933.com:7687" | "standalone" | "online" | "offline" | "An error occurred! Unable to start DatabaseId{483e7f9b[neo4j]}." | TRUE | TRUE |
How can I find out why the neo4j database is unable to start?
How can I remove it and create an empty neo4j database?

I recreated an empty neo4j database on CE by following another post here on Stack Overflow:
stop neo4j
remove /var/lib/neo4j/data/databases/neo4j
remove /var/lib/neo4j/data/transactions/neo4j
start neo4j
You have to remove the neo4j directory under transactions too.
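For reference, here are those steps as shell commands. This is only a sketch: it assumes the package layout quoted in the post and a systemd-managed service, and it permanently deletes the store, so back up first if you care about the data.
sudo systemctl stop neo4j
# both directories must be removed together; see the error quoted below
sudo rm -rf /var/lib/neo4j/data/databases/neo4j
sudo rm -rf /var/lib/neo4j/data/transactions/neo4j
sudo systemctl start neo4j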

For reference: you can find out why the neo4j database could not start up from the following log file.
/var/log/neo4j/debug.log
Caused by: java.lang.RuntimeException: Fail to start 'DatabaseId{483e7f9b[neo4j]}' since transaction logs were found, while database files are missing.
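If debug.log is long, something like this narrows it down (my own sketch, not from the original answer); the "Caused by" lines under the failure are usually the informative part:
grep -B 2 -A 10 'Unable to start' /var/log/neo4j/debug.log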


Neo4J: "Database in use: false" after loading dump + migrate

I'm running Neo4j 5.3.0 Community Edition and tried to load the dump from https://github.com/neo4j-graph-examples/twitch. I managed to load the dump and migrate the database.
My question: how do I use the "twitch" database?
cosh@osmingestor:~$ sudo neo4j-admin database info
Database name: neo4j
Database in use: true
Last committed transaction id:-1
Store needs recovery: true
Database name: system
Database in use: true
Last committed transaction id:-1
Store needs recovery: true
Database name: twitch
Database in use: false
Store format version: record-aligned-1.1
Store format introduced in: 5.0.0
Last committed transaction id:62742
Store needs recovery: false
show databases doesn't show it for my user "neo4j":
neo4j@neo4j> show databases;
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| name | type | aliases | access | address | role | writer | requestedStatus | currentStatus | statusMessage | default | home | constituents |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "neo4j" | "standard" | [] | "read-write" | "localhost:7687" | "primary" | TRUE | "online" | "online" | "" | TRUE | TRUE | [] |
| "system" | "system" | [] | "read-write" | "localhost:7687" | "primary" | TRUE | "online" | "online" | "" | FALSE | FALSE | [] |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
I tried to grant access to the database by executing:
neo4j@neo4j> grant all database PRIVILEGES ON DATABASES * TO neo4j;
Unsupported administration command: grant all database PRIVILEGES ON DATABASES * TO neo4j
Adding "neo4j" as the default admin did not help as well (was working though):
neo4j-admin dbms set-default-admin neo4j
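No answer is recorded above, but note that multi-database management and the GRANT privilege commands are Enterprise features, which is why CE rejects the GRANT. A commonly suggested workaround on Community Edition (an assumption on my part, not something confirmed in this thread) is to make the loaded store the single default database and restart:
# neo4j.conf (Neo4j 5.x)
initial.dbms.default_database=twitch
After a restart, show databases should then report twitch as online, default, and home.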

After adding baseline, Flyway migration reports "Schema version: null" and no migration needed

I am using Flyway with Docker and Oracle to create a new schema with DB objects. I executed the Flyway migration and baselined it. After that, even though I ran Flyway clean (or removed the database entirely and cleaned the data volume), then recreated the empty Oracle database and called the Flyway migration again, I get the message below:
"Current version of schema "GLDDBAMST": null
Schema "GLDDBAMST" is up to date. No migration necessary.
Schema version: null"
Please let me know how I can remove the baseline and start the migration again after cleaning the schema.
------------------------Full Message---------------
US_C02XJ1EWJGH5:goldnet-flyway rmehta068$ make config-flyway-migrate
docker-compose -f compose/flyway.yml up config_flyway
WARNING: Found orphan containers (oracle) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Starting flyway_config ... done
Attaching to flyway_config
flyway_config | Flyway Community Edition 5.2.4 by Boxfuse
flyway_config | Database: jdbc:oracle:thin:@oracle:1521/ORCLPDB1.localdomain (Oracle 12.2)
flyway_config | Successfully dropped schema "GLDDBAMST" (execution time 00:00.160s)
flyway_config | Creating schema "GLDDBAMST" ...
flyway_config | Creating Schema History table: "GLDDBAMST"."flyway_schema_history"
flyway_config | Current version of schema "GLDDBAMST": null
flyway_config | Schema "GLDDBAMST" is up to date. No migration necessary.
flyway_config | Schema version: null
flyway_config |
flyway_config | +----------+---------+------------------------------+--------+---------------------+---------+
flyway_config | | Category | Version | Description | Type | Installed On | State |
flyway_config | +----------+---------+------------------------------+--------+---------------------+---------+
flyway_config | | | | << Flyway Schema Creation >> | SCHEMA | 2019-02-04 20:31:01 | Success |
flyway_config | +----------+---------+------------------------------+--------+---------------------+---------+
flyway_config |
flyway_config exited with code 0
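For what it's worth, the "flyway_schema_history" table named in the log is where Flyway records baselines and applied versions, so that is the place to look. A hedged sketch (my addition, not from the thread; Flyway quotes its identifiers on Oracle, hence the double quotes):
-- list what Flyway has recorded for this schema
SELECT "installed_rank", "version", "description", "type", "success"
FROM "GLDDBAMST"."flyway_schema_history"
ORDER BY "installed_rank";
-- if a stale BASELINE row is blocking migrations, deleting it (after a backup!)
-- lets Flyway re-apply versions from the start on the next migrate
DELETE FROM "GLDDBAMST"."flyway_schema_history" WHERE "type" = 'BASELINE';
COMMIT;
Note also that the table in the log above contains only the schema-creation row and no versioned migrations at all, which may simply mean Flyway is not finding your migration files at its configured locations.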

Does Release Management 2013 roll back across tags

We're heavy users of tags and I'm confused about how tags and rollbacks interact.
I understand that rollbacks cascade (at least within a sequence) from this article:
http://incyclesoftware.com/2014/03/understanding-rollbacks-release-management/
But I'm not clear on how this interacts when you use tags. We tag servers by which features are installed on them (web, database, service) and vary the mix of features depending on the environment (e.g. DEV might have web and services running on the same machine, but UAT and PROD would have separate machines).
So does the rollback go back across tag boundaries? If, for example, your sequence looked like this:
+--Database tag --+
| Backup DB |
| | |
| Update DB |
| | | <- Runs against SQL server
| +--Rollback--+ |
| | Restore DB | |
| +------------+ |
+-----------------+
|
+---Web Tag-------+
| Do Stuff | <- Runs against WEB server
+-----------------+
|
+---Service tag----+
| Backup |
| | |
| Install new ver | <- Runs against Service server
| | |
| Smoke test |
| | |
| +--Rollback----+ |
| | Replace with | |
| | backup | |
| +--------------+ |
+------------------+
Would a rollback inside the service tag cause the database tag to execute its rollback? Do rollbacks cascade across sequences?
I haven't had time to set this up and test it yet, so I thought I'd ask the question instead.
By accident I've managed to test this out with a suitable release, and rollback does roll back across the tags, as @joerage says.
It appears I was wrong... faulty memory and all that. Rollbacks work across tag boundaries.
I generally recommend against using rollback blocks, since their behavior is generally backwards, unpredictable, and not immediately obvious. The current best practice is actually to not use agent-based releases at all, as they will not be portable to the forthcoming Release Management Service.

Cypher load CSV eager and long action duration

I'm loading a file with 85K lines (19 MB).
The server has 2 cores and 14 GB RAM, running CentOS 7.1 and Oracle JDK 8,
and the load can take 5-10 minutes with the following server config:
dbms.pagecache.memory=8g
cypher_parser_version=2.0
wrapper.java.initmemory=4096
wrapper.java.maxmemory=4096
disk mounted in /etc/fstab:
UUID=fc21456b-afab-4ff0-9ead-fdb31c14151a /mnt/neodata ext4 defaults,noatime,barrier=0 1 2
added this to /etc/security/limits.conf:
* soft memlock unlimited
* hard memlock unlimited
* soft nofile 40000
* hard nofile 40000
added this to /etc/pam.d/su
session required pam_limits.so
added this to /etc/sysctl.conf:
vm.dirty_background_ratio = 50
vm.dirty_ratio = 80
disabled the ext4 journal by running:
sudo e2fsck /dev/sdc1
sudo tune2fs /dev/sdc1
sudo tune2fs -o journal_data_writeback /dev/sdc1
sudo tune2fs -O ^has_journal /dev/sdc1
sudo e2fsck -f /dev/sdc1
sudo dumpe2fs /dev/sdc1
Besides that, when running the profiler I get lots of "Eager" operators, and I really can't understand why:
PROFILE LOAD CSV WITH HEADERS FROM 'file:///home/csv10.csv' AS line
FIELDTERMINATOR '|'
WITH line limit 0
MERGE (session :Session { wz_session:line.wz_session })
MERGE (page :Page { page_key:line.domain+line.page })
ON CREATE SET page.name=line.page, page.domain=line.domain,
page.protocol=line.protocol,page.file=line.file
Compiler CYPHER 2.3
Planner RULE
Runtime INTERPRETED
+---------------+------+---------+---------------------+--------------------------------------------------------+
| Operator | Rows | DB Hits | Identifiers | Other |
+---------------+------+---------+---------------------+--------------------------------------------------------+
| +EmptyResult | 0 | 0 | | |
| | +------+---------+---------------------+--------------------------------------------------------+
| +UpdateGraph | 9 | 9 | line, page, session | MergeNode; Add(line.domain,line.page); :Page(page_key) |
| | +------+---------+---------------------+--------------------------------------------------------+
| +Eager | 9 | 0 | line, session | |
| | +------+---------+---------------------+--------------------------------------------------------+
| +UpdateGraph | 9 | 9 | line, session | MergeNode; line.wz_session; :Session(wz_session) |
| | +------+---------+---------------------+--------------------------------------------------------+
| +ColumnFilter | 9 | 0 | line | keep columns line |
| | +------+---------+---------------------+--------------------------------------------------------+
| +Filter | 9 | 0 | anon[181], line | anon[181] |
| | +------+---------+---------------------+--------------------------------------------------------+
| +Extract | 9 | 0 | anon[181], line | anon[181] |
| | +------+---------+---------------------+--------------------------------------------------------+
| +LoadCSV | 9 | 0 | line | |
+---------------+------+---------+---------------------+--------------------------------------------------------+
All the labels and properties have indexes/constraints.
Thanks for the help,
Lior
Hey Lior,
we tried to explain Eager loading here:
And Mark's original blog post is here: http://www.markhneedham.com/blog/2014/10/23/neo4j-cypher-avoiding-the-eager/
Rik tried to explain it in easier terms:
http://blog.bruggen.com/2015/07/loading-belgian-corporate-registry-into_20.html
Trying to understand the "Eager Operation"
I had read about this before, but did not really understand it until Andres explained it to me again: in all normal operations, Cypher loads data lazily. See for example this page in the manual: it basically just loads as little as possible into memory when doing an operation. This laziness is usually a really good thing. But it can get you into a lot of trouble as well, as Michael explained to me:
"Cypher tries to honor the contract that the different operations
within a statement are not affecting each other. Otherwise you might
up with non-deterministic behavior or endless loops. Imagine a
statement like this:
MATCH (n:Foo) WHERE n.value > 100 CREATE (m:Foo {m.value = n.value + 100});
If the two statements would not be
isolated, then each node the CREATE generates would cause the MATCH to
match again etc. an endless loop. That's why in such cases, Cypher
eagerly runs all MATCH statements to exhaustion so that all the
intermediate results are accumulated and kept (in memory).
Usually
with most operations that's not an issue as we mostly match only a few
hundred thousand elements max.
With data imports using LOAD CSV,
however, this operation will pull in ALL the rows of the CSV (which
might be millions), execute all operations eagerly (which might be
millions of creates/merges/matches) and also keeps the intermediate
results in memory to feed the next operations in line.
This also
disables PERIODIC COMMIT effectively because when we get to the end of
the statement execution all create operations will already have
happened and the gigantic tx-state has accumulated."
So that's what's going on in my LOAD CSV queries: MATCH/MERGE/CREATE caused an Eager pipe to be added to the execution plan, and it effectively disables the batching of my operations "using periodic commit". Apparently quite a few users run into this issue even with seemingly simple LOAD CSV statements. Very often you can avoid it, but sometimes you can't.
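The usual fix from those posts is to give each MERGE its own pass over the file, so that no single statement mixes writes on different labels. A sketch adapted from the query above (the batch size is my arbitrary choice):
// pass 1: sessions only
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM 'file:///home/csv10.csv' AS line FIELDTERMINATOR '|'
MERGE (session:Session { wz_session: line.wz_session });
// pass 2: pages only
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM 'file:///home/csv10.csv' AS line FIELDTERMINATOR '|'
MERGE (page:Page { page_key: line.domain + line.page })
ON CREATE SET page.name = line.page, page.domain = line.domain,
page.protocol = line.protocol, page.file = line.file;
With the Eager gone, USING PERIODIC COMMIT can actually batch the transaction again.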

Removal of Role PostgreSQL Failed - cache lookup failed for database

This is my first time using PostgreSQL for production.
I made a database blog_production with username blog_production and a password generated by the capistrano-postgresql gem. Once it was generated, I tried to delete the database blog_production with this command from the terminal:
$ sudo -u postgres dropdb blog_production
After that I tried to delete user blog_production with this command:
$ sudo -u postgres dropuser blog_production
And it returned dropuser: removal of role "blog_production" failed: ERROR: cache lookup failed for database 16417
1.) Why is this happening?
2.) I also tried to delete it from psql using DELETE FROM pg_roles WHERE rolname='blog_production' but it returned the same error (cache lookup failed).
3.) How do I solve this problem?
Thank you.
Additional Information
PostgreSQL Version
PostgreSQL 9.1.15 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
(1 row)
select * from pg_shdepend where dbid = 16417;
dbid | classid | objid | objsubid | refclassid | refobjid | deptype
-------+---------+-------+----------+------------+----------+---------
16417 | 1259 | 16419 | 0 | 1260 | 16418 | o
16417 | 1259 | 16426 | 0 | 1260 | 16418 | o
16417 | 1259 | 16428 | 0 | 1260 | 16418 | o
(3 rows)
select * from pg_database where oid = 16417;
datname | datdba | encoding | datcollate | datctype | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | dattablespace | datacl
---------+--------+----------+------------+----------+---------------+--------------+--------------+---------------+--------------+---------------+--------
(0 rows)
select * from pg_authid where rolname = 'blog_production';
rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcatupdate | rolcanlogin | rolreplication | rolconnlimit | rolpassword | rolvaliduntil
-----------------+----------+------------+---------------+-------------+--------------+-------------+----------------+--------------+-------------------------------------+---------------
blog_production | f | t | f | f | f | t | f | -1 | md5d4d2f8789ab11ba2bd019bab8be627e6 |
(1 row)
Somehow the DROP DATABASE didn't drop the shared dependencies correctly. PostgreSQL still thinks that your user owns three tables in the database you dropped.
Needless to say, this should not happen; it's almost certainly a bug, though I don't know how we'd even begin tracking it down unless you know exactly what commands you ran, etc., to get to this point, right from creating the DB.
If the PostgreSQL install's data isn't very big and if you can share the contents, can I get you to stop the database server and make a tarball of the whole database directory, then send it to me? I'd like to see if I can tell what happened to get you to this point.
Send a Dropbox link to craig@2ndquadrant.com. Just:
sudo service postgresql stop
sudo tar cpjf ~abrahamks/abrahamks-postgres.tar.gz \
/var/lib/postgresql/9.1/main \
/etc/postgresql/9.1/main \
/var/log/postgresql/postgresql-9.1-main-*. \
/usr/lib/postgresql/9.1
sudo chown abrahamks ~abrahamks/abrahamks-postgres.tar.gz
and upload abrahamks-postgres.tar.gz from your home folder.
Replace abrahamks with your username on your system. You might need to adjust the paths above if I'm misremembering where the PostgreSQL data lives on Debian-derived systems.
Note that this contains all your databases not just the one that was an issue, and it also contains your PostgreSQL user accounts.
(If you're going to send me a copy, do so before continuing):
Anyway, since the database is dropped, it is safe to manually remove the dependencies that should've been removed by DROP DATABASE:
DELETE FROM pg_shdepend WHERE dbid = 16417;
It should then be possible to DROP USER blog_production;
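As a hedged follow-up (my addition, not part of the original answer), you can confirm the orphaned rows are gone before retrying the drop:
-- should return 0 once the orphaned pg_shdepend rows are deleted
SELECT count(*) FROM pg_shdepend WHERE dbid = 16417;
DROP USER blog_production;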
