Neo4j cache configurations

I have been playing around with the JVM heap size and the file store cache size in Neo4j. It seems like setting the memory-mapped buffers to be handled by the OS has no effect on the system.
I tried setting the JVM heap quite large with a tiny cache, and it was exactly as fast as if the cache was large.
So my question is: how can I configure the system so that I actually control the cache? Is this an issue with the batching, given that the documentation says batching uses the JVM heap?
I am using the following Python script to fill up the database:
from datetime import datetime, timedelta
import random

from py2neo import neo4j

graph_db = neo4j.GraphDatabaseService("http://localhost:7474/db/data/")
f = open('indexslowdown_exp.txt', 'w')
f.write("Properties\t,\tSpeed\n")
total_time = timedelta(0)
name = 0
for y in range(0, 1000):
    batch = neo4j.WriteBatch(graph_db)
    # create 100 nodes per batch
    for x in range(0, 100):
        batch.create({"name": name})
        name += 1
    # create 100 relationships between random existing nodes
    for x in range(0, 100):
        rand_node_A = random.randint(0, name - 1)
        rand_node_B = random.randint(0, name - 1)
        batch.append_cypher("START n=node(" + str(rand_node_A) + "), m=node(" + str(rand_node_B) + ") CREATE (n)-[r:CONNECTED]->(m)")
    start_time = datetime.now()
    batch.submit()
    end_time = datetime.now()
    total_time = end_time - start_time
    f.write(str(name) + " , " + str(total_time / 200) + "\n")
    print "Inserting nodes: " + str(total_time)
f.close()
Neo4j.properties file:
use_memory_mapped_buffers=true
# Default values for the low-level graph engine
neostore.nodestore.db.mapped_memory=1k
neostore.relationshipstore.db.mapped_memory=1k
neostore.propertystore.db.mapped_memory=2k
neostore.propertystore.db.strings.mapped_memory=1k
neostore.propertystore.db.arrays.mapped_memory=1k
# Enable this to be able to upgrade a store from an older version
#allow_store_upgrade=true
# Enable this to specify a parser other than the default one.
#cypher_parser_version=2.0
# Keep logical logs, helps debugging but uses more disk space, enabled for
# legacy reasons To limit space needed to store historical logs use values such
# as: "7 days" or "100M size" instead of "true"
keep_logical_logs=true
# Autoindexing
# Enable auto-indexing for nodes, default is false
#node_auto_indexing=true
# The node property keys to be auto-indexed, if enabled
#node_keys_indexable=name,age
# Enable auto-indexing for relationships, default is false
#relationship_auto_indexing=true
# The relationship property keys to be auto-indexed, if enabled
#relationship_keys_indexable=name,age
neo4j-wrapper.conf:
wrapper.java.additional=-Dorg.neo4j.server.properties=conf/neo4j-server.properties
wrapper.java.additional=-Djava.util.logging.config.file=conf/logging.properties
wrapper.java.additional=-Dlog4j.configuration=file:conf/log4j.properties
#********************************************************************
# JVM Parameters
#********************************************************************
wrapper.java.additional=-XX:+UseConcMarkSweepGC
wrapper.java.additional=-XX:+CMSClassUnloadingEnabled
# Uncomment the following lines to enable garbage collection logging
wrapper.java.additional=-Xloggc:data/log/neo4j-gc.log
wrapper.java.additional=-XX:+PrintGCDetails
wrapper.java.additional=-XX:+PrintGCDateStamps
wrapper.java.additional=-XX:+PrintGCApplicationStoppedTime
wrapper.java.additional=-XX:+PrintTenuringDistribution
# Initial Java Heap Size (in MB)
wrapper.java.initmemory=200
# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=400
#********************************************************************
# Wrapper settings
#********************************************************************
# Override default pidfile and lockfile
#wrapper.pidfile=../data/neo4j-server.pid
#wrapper.lockfile=../data/neo4j-server.lck
#********************************************************************
# Wrapper Windows NT/2000/XP Service Properties
#********************************************************************
# WARNING - Do not modify any of these properties when an application
# using this configuration file has been installed as a service.
# Please uninstall the service before modifying this section. The
# service can then be reinstalled.
# Name of the service
wrapper.name=neo4j
# User account to be used for linux installs. Will default to current
# user if not set.
wrapper.user=

What are you most concerned with? Batch insertion performance? If so, MMIO will be most effective if you can fit the entire graph into memory, so if you can estimate the number of nodes and relationships, you can come up with a rough calculation of the size you need for those two stores.
Also, given that you appear to be inserting only primitives, you can likely estimate the size you need for the property store. If you're going to store strings and arrays (of a larger type), you can increase the MMIO settings for those two stores as well, but if you don't need them, set them low.
Approx. size of node store: # of nodes * 14 bytes (if you're using the latest Neo4j; 9 bytes if Neo4j is < 2.0)
Approx. size of relationship store: # of rels * 33 bytes
Remember: There's a near 1:1 correspondence between the store sizes on disk and in memory for the filesystem cache.
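To make that concrete, here is a rough back-of-the-envelope sketch in Python for turning estimated node/relationship counts into mapped_memory settings; the counts are hypothetical (they match the 1000 x 100 batches in the script above) and the per-record sizes are the ones quoted above:
# Rough MMIO sizing sketch; the counts below are assumptions, not measurements.
NODE_RECORD_BYTES = 14        # ~14 bytes per node record in Neo4j 2.x (9 bytes before 2.0)
REL_RECORD_BYTES = 33         # ~33 bytes per relationship record

expected_nodes = 1000 * 100   # 1000 batches x 100 nodes in the script above
expected_rels = 1000 * 100    # 1000 batches x 100 relationships

node_store_mb = expected_nodes * NODE_RECORD_BYTES / (1024.0 * 1024.0)
rel_store_mb = expected_rels * REL_RECORD_BYTES / (1024.0 * 1024.0)

print "neostore.nodestore.db.mapped_memory=%dM" % max(1, int(round(node_store_mb)))
print "neostore.relationshipstore.db.mapped_memory=%dM" % max(1, int(round(rel_store_mb)))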
As well, a larger JVM heap doesn't necessarily mean greater performance; in fact, the MMIO sizes (depending on your value for the setting use_memory_mapped_buffers) may lie outside of the JVM heap. Regardless, a large JVM heap can also introduce longer GC pauses and other GC-related issues, so don't make it too big.
HTH

For your queries:
Use Parameters! (see the sketch after this list)
Use Labels & Indexes in 2.0.x
Try not to use random in performance test loops
Make sure to use streaming
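For instance, the relationship-creation statement in the script above builds its Cypher by string concatenation; a parameterized version (a sketch using the same batch, rand_node_A and rand_node_B names as the script, and assuming your py2neo version's append_cypher accepts a parameter map) lets the server reuse the query plan:
# Parameterized version of the relationship-creation statement from the script above.
query = "START n=node({a}), m=node({b}) CREATE (n)-[r:CONNECTED]->(m)"
batch.append_cypher(query, {"a": rand_node_A, "b": rand_node_B})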

Related

Behavior of docker compose v3's deploy resources limits 'cpus' parameter setting (is it an absolute number or a percentage of available cores)

Folks,
Regarding Docker Compose v3's 'cpus' parameter (under 'deploy' > 'resources' > 'limits') for limiting the CPUs available to a service: is it an absolute number that specifies a count of CPUs, or is it a more useful percentage-of-available-CPUs setting?
From what I read it appears to be an absolute number. For example, if a host has 4 CPUs and two services in the compose file are each set to 0.5, then the two services combined can only use a maximum of 1 CPU (0.5 each), leaving the remaining 3 CPUs idle.
But thinking out loud, it seems to me that it would be nicer if this were a percentage-of-available-cores setting, in which case, for the same example, each service could use up to 2 CPUs and the two combined could use all 4 when needed. That way, when I increase or decrease the available cores, the relative setting would spare me from modifying this value again.
EDIT(09/10/21):
On reading this, it appears that the above can be achieved with the 'cpu-shares' setting instead of 'cpus'. Is my understanding correct?
The doc for 'cpu-shares' however mentions the below cautionary note,
"It does not guarantee or reserve any specific CPU access."
If the above is achieved with this setting, then what does it mean (what is lost) to not have a guarantee or reservation?
EDIT(09/13/21):
Just to summarize,
The 'cpus' parameter setting is an absolute number that refers to the number of CPUs a service has reserved for it to use at all times. Correct?
The 'cpu-shares' parameter setting is a relative weight number the value of which is used to compute/determine the percentage of total available CPU that a service can use only when there is contention. Correct?
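For reference, here is a minimal (hypothetical) compose sketch showing where the two settings live: 'cpus' under deploy.resources.limits is an absolute per-container cap, while 'cpu_shares' is only a relative weight the kernel scheduler uses under contention (it maps to docker run's --cpu-shares and, as the docs caution, does not guarantee or reserve CPU):
version: "3.8"
services:
  service_a:                  # hypothetical service and image names
    image: example/service-a
    deploy:
      resources:
        limits:
          cpus: "0.5"         # hard cap: at most half of one CPU, regardless of host core count
  service_b:
    image: example/service-b
    cpu_shares: 512           # relative weight; only has an effect when CPUs are contended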

How to speed up my neo4j instance

I'm running Neo4j version 2.2.5. I love the Cypher language, the Python integration, the ease of use, and the very responsive user community.
I've developed a prototype of an application and am encountering some very poor performance. I've read a lot of links related to performance tuning. I will attempt to outline my entire database here so that someone can provide guidance.
My machine is a MacBook Pro with 16GB of RAM and a 500GB SSD. It's very fast for everything else I do in Spark + Python + Hadoop. It's fast for Neo4j too, BUT when I get to around 2-4M nodes it becomes insanely slow.
I've used both of these commands to start up Neo4j, thinking they would help, but neither makes much difference:
./neo4j-community-2.2.5/bin/neo4j start -Xms512m -Xmx3g -XX:+UseConcMarkSweepGC
./neo4j-community-2.2.5/bin/neo4j start -Xms512m -Xmx3g -XX:+UseG1GC
My neo4j.properties file is as follows:
################################################################
# Neo4j
#
# neo4j.properties - database tuning parameters
#
################################################################
# Enable this to be able to upgrade a store from an older version.
#allow_store_upgrade=true
# The amount of memory to use for mapping the store files, in bytes (or
# kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g').
# If Neo4j is running on a dedicated server, then it is generally recommended
# to leave about 2-4 gigabytes for the operating system, give the JVM enough
# heap to hold all your transaction state and query context, and then leave the
# rest for the page cache.
# The default page cache memory assumes the machine is dedicated to running
# Neo4j, and is heuristically set to 75% of RAM minus the max Java heap size.
dbms.pagecache.memory=6g
# Enable this to specify a parser other than the default one.
#cypher_parser_version=2.0
# Keep logical logs, helps debugging but uses more disk space, enabled for
# legacy reasons To limit space needed to store historical logs use values such
# as: "7 days" or "100M size" instead of "true".
#keep_logical_logs=7 days
# Enable shell server so that remote clients can connect via Neo4j shell.
#remote_shell_enabled=true
# The network interface IP the shell will listen on (use 0.0.0 for all interfaces).
#remote_shell_host=127.0.0.1
# The port the shell will listen on, default is 1337.
#remote_shell_port=1337
# The type of cache to use for nodes and relationships.
#cache_type=soft
To create my database from a fresh start, I first create these constraints and indexes; they cover all of the node types and edges I'm using.
CREATE CONSTRAINT ON (id:KnownIDType) ASSERT id.id_type_value IS UNIQUE;
CREATE CONSTRAINT ON (p:PerspectiveKey) ASSERT p.perspective_key IS UNIQUE;
CREATE INDEX ON :KnownIDType(id_type);
CREATE INDEX ON :KnownIDType(id_value);
CREATE INDEX ON :KNOWN_BY(StartDT);
CREATE INDEX ON :KNOWN_BY(EndDT);
CREATE INDEX ON :HAS_PERSPECTIVE(Country);
I have 8,601,880 nodes.
I run this query, and it takes 9 minutes.
MATCH (l:KnownIDType { id_type:'CodeType1' })<-[e1:KNOWN_BY]-(m:KnownIDType { id_type:'CodeType2' })-[e2:KNOWN_BY]->(n:KnownIDType)<-[e3:KNOWN_BY]-(o:KnownIDType { id_type:'CodeType3' })-[e4:KNOWN_BY]->(p:KnownIDType { id_type:'CodeType4' }), (n)-[e5:HAS_PERSPECTIVE]->(q:PerspectiveKey {perspective_key:100})
WHERE 1=1
AND l.id_type IN ['CodeType1']
AND m.id_type IN ['CodeType2']
AND n.id_type IN ['CodeTypeA', 'CodeTypeB', 'CodeTypeC']
AND o.id_type IN ['CodeType3']
AND p.id_type IN ['CodeType4']
AND 20131231 >= e1.StartDT and 20131231 < e1.EndDT
AND 20131231 >= e2.StartDT and 20131231 < e2.EndDT
AND 20131231 >= e3.StartDT and 20131231 < e3.EndDT
AND 20131231 >= e4.StartDT and 20131231 < e4.EndDT
WITH o, o.id_value as KnownIDValue, e5.Country as Country, count(distinct p.id_value) as ACount
WHERE ACount > 1
RETURN 20131231 as AsOfDate, 'CodeType' as KnownIDType, 'ACount' as MetricName, count(ACount) as MetricValue
;
I'm looking for a response time more like 15s or less, as I get with < 1M nodes.
What would you suggest? I am happy to provide more information if you tell me what you need.
Thanks a bunch in advance.
Here are a couple of ideas how to speed up your query:
Don't use IN if there is only one element. Use =
With a growing number of nodes, the index lookup will obviously take longer. Instead of having a single label with an indexed property, you could use the id_type property as label. Something like (l:KnownIDTypeCode1)<-[e1:KNOWN_BY]-(m:KnownIDTypeCode2).
Split up the query into two parts: first MATCH your KNOWN_BY path, then collect what you need using WITH and MATCH the HAS_PERSPECTIVE part (see the sketch after this list).
The range queries on the StartDT and EndDT property could be slow. Try to remove them to test if this slows down the query.
Also, it looks like you could replace the >= and < with =, since you use the same date everywhere.
If you really have to filter date ranges a lot, it might help to implement it in your graph model. One option would be to use Knownby nodes instead of KNOWN_BY relationships and connect them to Date nodes.
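A rough sketch of suggestions 2 and 3 combined; the per-type labels (KnownIDTypeCode1, KnownIDTypeCode2) are hypothetical and would have to be added to your nodes first, and the query is deliberately trimmed down to two hops:
MATCH (l:KnownIDTypeCode1)<-[e1:KNOWN_BY]-(m:KnownIDTypeCode2)-[e2:KNOWN_BY]->(n:KnownIDType)
WHERE 20131231 >= e1.StartDT AND 20131231 < e1.EndDT
  AND 20131231 >= e2.StartDT AND 20131231 < e2.EndDT
WITH DISTINCT n
MATCH (n)-[e5:HAS_PERSPECTIVE]->(q:PerspectiveKey {perspective_key:100})
RETURN n, e5.Country;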
First, upgrade to version 2.3, because it should improve performance - http://neo4j.com/release-notes/neo4j-2-3-0/
Hint
It doesn't make sense to use IN with a one-element array.
Profile your query with EXPLAIN and PROFILE
http://neo4j.com/docs/stable/how-do-i-profile-a-query.html
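For example, prefixing a statement with PROFILE (or EXPLAIN, which shows the plan without running the query) reports the operators and db hits; a trimmed-down probe against one of the labels from the question might look like:
PROFILE MATCH (n:KnownIDType {id_type:'CodeType1'}) RETURN count(n);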
Martin, your second recommendation has sped up my matching paths to single-digit seconds; I am grateful for your help. Thank you. While it involved refactoring the design of my graph and my query patterns, it has improved the performance exponentially. I decided to create CodeType1, CodeType2, CodeType[N] as node labels and minimized the use of node properties, except for keeping the temporal properties on the edges. Thank you again so much! Please let me know if there is anything I can do to help.

All queries are slow with neo4j

I have written a variety of queries using cypher that take no less than 200ms per query. They're very straightforward, so I'm having trouble identifying where the bottleneck is.
Simple Match with Parameters, 2200ms:
Simple Distinct Match with Parameters, 200ms:
Pathing, 2500ms:
At first I thought the issue was a lack of resources, because I was running Neo4j and my application on the same box. While the performance monitor indicated that CPU and memory were largely freed up and available, I moved the Neo4j server to another local box and observed similar latency. Both servers are workstations with fairly new Xeon processors, 12GB of memory, and SSDs for the data storage. All of the above leads me to believe that the latency isn't due to my hardware. The OS is Windows 7.
The graph has less than 200 nodes and less than 200 relationships.
I've attached some queries that I send to neo4j along with the configuration for the server, database, and JVM. No plugins or extensions are loaded.
Pastebin Links:
Database Configuration
Server Configuration
JVM Configuration
[Expanding a bit on a comment I made earlier.]
#TFerrell: Your comments state that "all nodes have labels", and that you tried applying indexes. However, it is not clear if you actually specified the labels in your slow Cypher queries. I noticed from your original question statement that neither of your slower queries actually specified a node label (which presumably should have been "Project").
If your Cypher query does not specify the label for a node, then the DB engine has to test every node, and it also cannot apply an index.
So, please try specifying the correct node label(s) in your slow queries.
Is that the first run or a subsequent run of these queries?
You probably don't have labels on your nodes and no index or unique constraint.
So Neo4j has to scan the whole store for your node, pulling everything into memory, loading the properties and checking them.
try this:
run until count returns 0:
match (n) where not n:Entity set n:Entity return count(*);
add the constraint
create constraint on (e:Entity) assert e.Id is unique;
run your query again:
match (n:Element {Id:{Id}}) return n
etc.
It seems there is something wrong with the automatic memory mapping calculation when you are on Windows (memory mapping on heap).
I just looked at your messages.log and added up some numbers, and it seems the mmio alone is enough to fill your Java heap space (old gen), leaving no room for the database, caches, etc.
Please try to amend that by fixing the mmio config in your conf/neo4j.properties to more sensible values (than the auto-calculation).
For your small store just uncommenting the values starting with #neostore. (i.e. remove the #) should work fine.
Otherwise something like this (fitting for a 3GB heap) for a larger graph (2M nodes, 10M rels, 20M props, 10M long strings):
neostore.nodestore.db.mapped_memory=25M
neostore.relationshipstore.db.mapped_memory=250M
neostore.propertystore.db.mapped_memory=250M
neostore.propertystore.db.strings.mapped_memory=250M
neostore.propertystore.db.arrays.mapped_memory=0M
Here are the added numbers:
auto mmio: 134217728 + 134217728 + 536870912 + 536870912 + 1073741824 = 2.3GB
stores sizes: 1073920 + 1073664 + 3221698 + 3221460 + 1073786 = 9MB
JVM max: 3.11 GB, RAM: 13.98 GB, SWAP: 27.97 GB
max heaps: Eden: 1.16 GB, old gen: 2.33 GB
taken from:
neostore.propertystore.db.strings] brickCount=8 brickSize=134144b mappedMem=134217728b (storeSize=1073920b)
neostore.propertystore.db.arrays] brickCount=8 brickSize=134144b mappedMem=134217728b (storeSize=1073664b)
neostore.propertystore.db] brickCount=6 brickSize=536854b mappedMem=536870912b (storeSize=3221698b)
neostore.relationshipstore.db] brickCount=6 brickSize=536844b mappedMem=536870912b (storeSize=3221460b)
neostore.nodestore.db] brickCount=1 brickSize=1073730b mappedMem=1073741824b (storeSize=1073786b)

Neo4j server - slow import

I have an application which used embedded Neo4j earlier, but now I have migrated to Neo4j server (using the Java REST binding). I need to import 4k nodes, around 40k properties, and 30k relationships at a time. When I did the import with embedded Neo4j, it used to take 10-15 minutes, but it takes more than 3 hours with Neo4j server for the same data, which is unacceptable. How can I configure the server to import the data faster?
This is what my neo4j.properties looks like:
# Default values for the low-level graph engine
use_memory_mapped_buffers=true
neostore.nodestore.db.mapped_memory=200M
neostore.relationshipstore.db.mapped_memory=1G
neostore.propertystore.db.mapped_memory=500M
neostore.propertystore.db.strings.mapped_memory=500M
#neostore.propertystore.db.arrays.mapped_memory=130M
# Enable this to be able to upgrade a store from 1.4 -> 1.5 or 1.4 -> 1.6
#allow_store_upgrade=true
# Enable this to specify a parser other than the default one. 1.5, 1.6, 1.7 are available
#cypher_parser_version=1.6
# Keep logical logs, helps debugging but uses more disk space, enabled for
# legacy reasons To limit space needed to store historical logs use values such
# as: "7 days" or "100M size" instead of "true"
keep_logical_logs=true
# Autoindexing
# Enable auto-indexing for nodes, default is false
node_auto_indexing=true
# The node property keys to be auto-indexed, if enabled
node_keys_indexable=primaryKey
# Enable auto-indexing for relationships, default is false
relationship_auto_indexing=true
# The relationship property keys to be auto-indexed, if enabled
relationship_keys_indexable=XY
cache_type=weak
Can you share the code that you use for importing the data?
The java-rest-binding is just a thin wrapper around the verbose REST API, which is not intended for data import.
I recommend using Cypher queries in batches with parameters if you want to import more data. Check out RestCypherQueryEngine(restGraphDb.getRestAPI()) for that, and see restGraphDB.executeBatch() for executing multiple queries in a single request.
Just don't rely on the results of those queries to make decisions later in your import.
Or import the data embedded and then copy the directory over to the servers data/graph.db directory.
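If the import is scripted from Python rather than Java, the same idea (parameterized Cypher statements batched into a single request) looks roughly like this with py2neo; the chunk size and property names are made up, and the exact append_cypher signature depends on your py2neo version:
from py2neo import neo4j

# Hypothetical sketch: many parameterized CREATE statements sent in one HTTP round trip.
graph_db = neo4j.GraphDatabaseService("http://localhost:7474/db/data/")
batch = neo4j.WriteBatch(graph_db)
for i in range(1000):  # made-up chunk size; tune to your data
    batch.append_cypher("CREATE (n {props})",
                        {"props": {"primaryKey": i, "name": "item-%d" % i}})
batch.submit()  # the whole batch goes to the server in a single request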

Decrease rails boot time

I found this blog post about reducing Rails boot time.
I set these environment variables in my .bashrc:
export RUBY_HEAP_MIN_SLOTS=800000
export RUBY_HEAP_FREE_MIN=100000
export RUBY_HEAP_SLOTS_INCREMENT=300000
export RUBY_HEAP_SLOTS_GROWTH_FACTOR=1
export RUBY_GC_MALLOC_LIMIT=79000000
And it did reduce my boot time by half.
Now I would like to know why this decreased my boot time and what these environment variables mean.
RUBY_HEAP_MIN_SLOTS (default 10_000) - the initial number of heap slots and minimum number of slots at all times. One heap slot can hold one Ruby object.
RUBY_HEAP_FREE_MIN (default 4_096) - the number of free slots that should be present after the garbage collector finishes running. If there are fewer than this, new slots are allocated according to the RUBY_HEAP_SLOTS_INCREMENT and RUBY_HEAP_SLOTS_GROWTH_FACTOR parameters.
RUBY_HEAP_SLOTS_INCREMENT (default 10_000) - the number of new slots to allocate when all initial slots are used, i.e. the size of the second heap.
RUBY_HEAP_SLOTS_GROWTH_FACTOR (default 1.8) - the multiplication factor used to determine how many new slots to allocate (RUBY_HEAP_SLOTS_INCREMENT * growth factor), for heaps #3 and onward.
RUBY_GC_MALLOC_LIMIT (default 8_000_000) - the amount of memory that can be allocated outside the Ruby heap (via malloc for C data structures) before the garbage collector is triggered.
The default settings for the Ruby garbage collector are not optimized for Rails, which uses a lot of memory and creates and destroys a huge number of objects. The optimal values depend on the application itself, and you can profile garbage collection under different settings: http://www.ruby-doc.org/core-2.0/GC/Profiler.html
You can also monitor the GC using New Relic, gdb.rb, or gems like scrap (https://github.com/cheald/scrap/tree/master).
Here are some articles you may be interested in:
https://www.coffeepowered.net/2009/06/13/fine-tuning-your-garbage-collector/
http://technology.customink.com/blog/2012/03/16/simple-garbage-collection-tuning-for-rails/
http://snaprails.tumblr.com/post/241746095/rubys-gc-configuration
