I have an application that previously used embedded Neo4j, but I have now migrated to Neo4j server (using the java-rest-binding). I need to import 4k nodes, around 40k properties, and 30k relationships at a time. With embedded Neo4j the import used to take 10-15 minutes, but against the Neo4j server it takes more than 3 hours for the same data, which is unacceptable. How can I configure the server to import the data faster?
This is what my neo4j.properties looks like:
# Default values for the low-level graph engine
use_memory_mapped_buffers=true
neostore.nodestore.db.mapped_memory=200M
neostore.relationshipstore.db.mapped_memory=1G
neostore.propertystore.db.mapped_memory=500M
neostore.propertystore.db.strings.mapped_memory=500M
#neostore.propertystore.db.arrays.mapped_memory=130M
# Enable this to be able to upgrade a store from 1.4 -> 1.5 or 1.4 -> 1.6
#allow_store_upgrade=true
# Enable this to specify a parser other than the default one. 1.5, 1.6, 1.7 are available
#cypher_parser_version=1.6
# Keep logical logs, helps debugging but uses more disk space, enabled for
# legacy reasons To limit space needed to store historical logs use values such
# as: "7 days" or "100M size" instead of "true"
keep_logical_logs=true
# Autoindexing
# Enable auto-indexing for nodes, default is false
node_auto_indexing=true
# The node property keys to be auto-indexed, if enabled
node_keys_indexable=primaryKey
# Enable auto-indexing for relationships, default is false
relationship_auto_indexing=true
# The relationship property keys to be auto-indexed, if enabled
relationship_keys_indexable=XY
cache_type=weak
Can you share the code that you use for importing the data?
The java-rest-binding is just a thin wrapper around the verbose REST API, which is not intended for bulk data import.
I recommend using Cypher queries in batches with parameters if you want to import larger amounts of data. Check out RestCypherQueryEngine(restGraphDb.getRestAPI()) for that, and see restGraphDb.executeBatch() for executing multiple queries in a single request.
Just don't rely on the results of those queries to make decisions later in your import.
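For illustration, here is a minimal sketch of what batched, parameterized Cypher over the java-rest-binding could look like. The property names, the chunk size, and the FOREACH-based statement are my assumptions (it requires a Cypher version that accepts a collection parameter inside FOREACH), so treat it as a starting point rather than a drop-in solution.

import java.util.List;
import java.util.Map;

import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.rest.graphdb.RestGraphDatabase;
import org.neo4j.rest.graphdb.query.RestCypherQueryEngine;

public class BatchedImport {

    public static void main(String[] args) {
        RestGraphDatabase restGraphDb = new RestGraphDatabase("http://localhost:7474/db/data");
        RestCypherQueryEngine engine = new RestCypherQueryEngine(restGraphDb.getRestAPI());

        // Each map becomes one node; in reality these rows come from your source data.
        List<Map<String, Object>> rows = java.util.Arrays.asList(
                MapUtil.map("primaryKey", "k1", "name", "first"),
                MapUtil.map("primaryKey", "k2", "name", "second"));

        // Send the rows in chunks so that a single HTTP request creates many nodes at once.
        int chunkSize = 500;
        for (int i = 0; i < rows.size(); i += chunkSize) {
            List<Map<String, Object>> chunk = rows.subList(i, Math.min(i + chunkSize, rows.size()));
            engine.query(
                    "FOREACH (row IN {rows} | CREATE (n { primaryKey: row.primaryKey, name: row.name }))",
                    MapUtil.map("rows", chunk));
        }

        restGraphDb.shutdown();
    }
}

Relationships can be created the same way in a second pass, matching the nodes by their primaryKey.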
Alternatively, import the data with embedded Neo4j and then copy the directory over to the server's data/graph.db directory.
I'm trying to figure out the best, or at least a reasonable, approach to defining alerts in InfluxDB. For example, I might use the CPU batch tickscript that comes with telegraf. This could be set up as a global monitor/alert for all hosts being monitored by telegraf.
What is the approach when you want to deviate from the above setup for a host, i.e. instead of X% for a specific server we want to alert on Y%?
I'm happy that a distinct tickscript could be created for the custom values, but how do I go about excluding the host from the original 'global' one?
This is a simple scenario, but the approach needs to scale to 10,000 hosts, of which there will be hundreds of exceptions, across tens or hundreds of global alert definitions.
I'm struggling to see how you could use the platform as the primary source of monitoring/alerting.
As said in the comments, you can use the sideload node to achieve that.
Say you want to ensure that your InfluxDB servers are not overloaded. You may want to allow 100 measurements by default, but on one server, which happens to receive a massive number of datapoints, you want to limit it to 10 (a value that the _internal database easily exceeds, but good enough for our example).
Given the following excerpt from a TICKscript:
var data = stream
    |from()
        .database(db)
        .retentionPolicy(rp)
        .measurement(measurement)
        .groupBy(groupBy)
        .where(whereFilter)
    |eval(lambda: "numMeasurements")
        .as('value')

var customized = data
    |sideload()
        .source('file:///etc/kapacitor/customizations/demo/')
        .order('hosts/host-{{.hostname}}.yaml')
        .field('maxNumMeasurements', 100)
    |log()

var trigger = customized
    |alert()
        .crit(lambda: "value" > "maxNumMeasurements")
and the server with the exception being named influxdb, with the file /etc/kapacitor/customizations/demo/hosts/host-influxdb.yaml looking as follows:
maxNumMeasurements: 10
A critical alert will be triggered if value (and hence numMeasurements) exceeds 10 AND the hostname tag equals influxdb, OR if value exceeds 100 on any other host.
There is an example in the documentation that handles scheduled downtimes using sideload.
Furthermore, I have created an example available on GitHub using docker-compose.
Note that there is a caveat with the example: the alert flaps because of a second, dynamically generated database. But it should be sufficient to show how to approach the problem.
What is the cost of using sideload nodes in terms of performance and computation if you have over 10 thousand servers?
Managing alerts manually in Chronograf/Kapacitor is not feasible for a large number of custom alerts.
At AMMP Technologies we need to manage alerts per database, customer, and customer_objects, and the number can go into the 1000s. We've opted for a custom solution where we keep a standard set of template tickscripts (not to be confused with Kapacitor templates) and provide an interface to the user that only exposes the relevant variables. A service (written in Python) then combines the values for those variables with a tickscript and deploys (updates, or deletes) the task on the Kapacitor server via the Kapacitor API. This is automated so that data for new customers/objects is combined with the templates and automatically deployed to Kapacitor.
You obviously need to design your tasks to be specific enough so that they don't overlap and generic enough so that it's not too much work to create tasks for every little thing.
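Purely as an illustration of that idea (the actual service is written in Python), here is a rough Java sketch that renders a per-customer threshold into a template tickscript and pushes the result to Kapacitor. The task id, the threshold variable, the cpu/usage_idle measurement, and the /kapacitor/v1/tasks endpoint with its JSON field names are my assumptions about the Kapacitor v1 HTTP API and telegraf's defaults, so verify them against your own setup.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DeployKapacitorTask {

    public static void main(String[] args) throws Exception {
        // The "relevant variable" exposed to the user: a per-customer CPU idle threshold.
        int idleThreshold = 10;

        // Render the template tickscript with the customer-specific value.
        String script = String.join("\n",
                "stream",
                "    |from()",
                "        .measurement('cpu')",
                "    |alert()",
                "        .crit(lambda: \"usage_idle\" < " + idleThreshold + ")");

        // Task definition; field names assume the Kapacitor v1 task API.
        String body = "{"
                + "\"id\":\"cpu_alert_customer_42\","
                + "\"type\":\"stream\","
                + "\"dbrps\":[{\"db\":\"telegraf\",\"rp\":\"autogen\"}],"
                + "\"status\":\"enabled\","
                + "\"script\":" + quote(script)
                + "}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9092/kapacitor/v1/tasks"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }

    // Minimal JSON escaping for embedding the tickscript in the request body.
    private static String quote(String s) {
        return "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n") + "\"";
    }
}

Updates and deletions would follow the same pattern against the individual task's URL.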
We migrated from Google Dataflow 1.9 to Apache Beam 0.6. We are noticing a change in the behavior of the timestamps after applying the global window. In Google Dataflow 1.9, we would get the correct timestamps in the DoFn after the windowing/combine function. Now we get some huge value for the timestamp, e.g. 9223371950454775. Did the default behavior for the global window change in this Apache Beam version?
input.apply(name(id, "Assign To Shard"), ParDo.of(new AssignToTest()))
     .apply(name(id, "Window"), Window
         .<KV<Long, ObjectNode>>into(new GlobalWindows())
         .triggering(Repeatedly.forever(
             AfterProcessingTime
                 .pastFirstElementInPane()
                 .plusDelayOf(Duration.standardMinutes(1))))
         .discardingFiredPanes())
     .apply(name(id, "Group By Shard"), GroupByKey.create())
     .apply(.....) }
TL;DR: When you are combining a bunch of timestamped values, you need to choose a timestamp for the result of the aggregation. There are multiple reasonable choices for this output timestamp. In Dataflow 1.x the default was the minimum of the input timestamps. Based on our experience with 1.x, the default in Beam was changed to the end of the window. You can restore the prior behavior by adding .withTimestampCombiner(TimestampCombiner.EARLIEST) to your Window transform.
I'll unpack this. Let's use the # sign to pair up a value and its timestamp. Focusing on just one key, you have timestamped values v1#t1, v2#t2, ..., etc. I will stick with your example of a raw GroupByKey even though this also applies to other ways of combining the values. So the output iterable of values is [v1, v2, ...] in arbitrary order.
Here are some possibilities for the timestamp:
min(t1, t2, ...)
max(t1, t2, ...)
the end of the window these elements are in (ignoring input timestamps)
All of these are correct. These are all available as options for your OutputTimeFn in Dataflow 1.x and TimestampCombiner in Apache Beam.
The timestamps have different interpretations and they are useful for different things. The output time of the aggregated value governs the downstream watermark, so choosing earlier timestamps holds the downstream watermark back more, while later timestamps allow it to move ahead.
min(t1, t2, ...) allows you to unpack the iterable and re-output v1#t1
max(t1, t2, ...) accurately models the logical time that the aggregated value was fully available. Max does tend to be the most expensive, for reasons to do with implementation details.
end of the window:
models the fact that this aggregation represents all the data for the window
is very easy to understand
allows downstream watermarks to advance as fast as possible
is extremely efficient
For all of these reasons, we switched the default from the min to the end of the window.
In Beam, you can restore the prior behavior by adding .withTimestampCombiner(TimestampCombiner.EARLIEST) to your Window transform. In Dataflow 1.x you can migrate to Beam's defaults by adding .withOutputTimeFn(OutputTimeFns.outputAtEndOfWindow()).
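Applied to the windowing in the question, it is one extra call on the Window transform. A sketch (the wrapper class is hypothetical; it assumes KV is Beam's and ObjectNode is Jackson's, as in the original pipeline):

import com.fasterxml.jackson.databind.node.ObjectNode;
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.GlobalWindows;
import org.apache.beam.sdk.transforms.windowing.Repeatedly;
import org.apache.beam.sdk.transforms.windowing.TimestampCombiner;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.joda.time.Duration;

public class EarliestTimestampWindow {

    // Same GlobalWindows setup as in the question, with the Dataflow 1.x
    // output timestamp behavior (minimum of the input timestamps) restored.
    static Window<KV<Long, ObjectNode>> windowWithEarliestTimestamps() {
        return Window
                .<KV<Long, ObjectNode>>into(new GlobalWindows())
                .triggering(Repeatedly.forever(
                        AfterProcessingTime
                                .pastFirstElementInPane()
                                .plusDelayOf(Duration.standardMinutes(1))))
                .discardingFiredPanes()
                .withTimestampCombiner(TimestampCombiner.EARLIEST);
    }
}

You would then apply it exactly as before, e.g. .apply(name(id, "Window"), windowWithEarliestTimestamps()).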
Another technicality is that the user-defined OutputTimeFn is removed and replaced by the TimestampCombiner enum, so there are only these three choices, not a whole API to write your own.
I'm running Neo4j version 2.2.5. I love the Cypher language, the Python integration, the ease of use, and the very responsive user community.
I've developed a prototype of an application and am encountering some very poor performance. I've read a lot of links related to performance tuning. I will attempt to outline my entire setup here so that someone can provide guidance.
My machine is a MacBook Pro with 16GB of RAM and a 500GB SSD. It's very fast for everything else I do in Spark + Python + Hadoop. It's fast for Neo4j too, BUT once I get to around 2-4M nodes it becomes insanely slow.
I've used both of these commands to start up Neo4j, thinking they would help, but neither made much difference:
./neo4j-community-2.2.5/bin/neo4j start -Xms512m -Xmx3g -XX:+UseConcMarkSweepGC
./neo4j-community-2.2.5/bin/neo4j start -Xms512m -Xmx3g -XX:+UseG1GC
My neo4j.properties file is as follows:
################################################################
# Neo4j
#
# neo4j.properties - database tuning parameters
#
################################################################
# Enable this to be able to upgrade a store from an older version.
#allow_store_upgrade=true
# The amount of memory to use for mapping the store files, in bytes (or
# kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g').
# If Neo4j is running on a dedicated server, then it is generally recommended
# to leave about 2-4 gigabytes for the operating system, give the JVM enough
# heap to hold all your transaction state and query context, and then leave the
# rest for the page cache.
# The default page cache memory assumes the machine is dedicated to running
# Neo4j, and is heuristically set to 75% of RAM minus the max Java heap size.
dbms.pagecache.memory=6g
# Enable this to specify a parser other than the default one.
#cypher_parser_version=2.0
# Keep logical logs, helps debugging but uses more disk space, enabled for
# legacy reasons To limit space needed to store historical logs use values such
# as: "7 days" or "100M size" instead of "true".
#keep_logical_logs=7 days
# Enable shell server so that remote clients can connect via Neo4j shell.
#remote_shell_enabled=true
# The network interface IP the shell will listen on (use 0.0.0 for all interfaces).
#remote_shell_host=127.0.0.1
# The port the shell will listen on, default is 1337.
#remote_shell_port=1337
# The type of cache to use for nodes and relationships.
#cache_type=soft
To create my database from a fresh start, I first create these constraints and indexes; they cover all of the node labels and relationship types I'm using.
CREATE CONSTRAINT ON (id:KnownIDType) ASSERT id.id_type_value IS UNIQUE;
CREATE CONSTRAINT ON (p:PerspectiveKey) ASSERT p.perspective_key IS UNIQUE;
CREATE INDEX ON :KnownIDType(id_type);
CREATE INDEX ON :KnownIDType(id_value);
CREATE INDEX ON :KNOWN_BY(StartDT);
CREATE INDEX ON :KNOWN_BY(EndDT);
CREATE INDEX ON :HAS_PERSPECTIVE(Country);
I have 8,601,880 nodes.
I run this query, and it takes 9 minutes.
MATCH (l:KnownIDType { id_type:'CodeType1' })<-[e1:KNOWN_BY]-(m:KnownIDType { id_type:'CodeType2' })-[e2:KNOWN_BY]->(n:KnownIDType)<-[e3:KNOWN_BY]-(o:KnownIDType { id_type:'CodeType3' })-[e4:KNOWN_BY]->(p:KnownIDType { id_type:'CodeType4' }), (n)-[e5:HAS_PERSPECTIVE]->(q:PerspectiveKey {perspective_key:100})
WHERE 1=1
AND l.id_type IN ['CodeType1']
AND m.id_type IN ['CodeType2']
AND n.id_type IN ['CodeTypeA', 'CodeTypeB', 'CodeTypeC']
AND o.id_type IN ['CodeType3']
AND p.id_type IN ['CodeType4']
AND 20131231 >= e1.StartDT and 20131231 < e1.EndDT
AND 20131231 >= e2.StartDT and 20131231 < e2.EndDT
AND 20131231 >= e3.StartDT and 20131231 < e3.EndDT
AND 20131231 >= e4.StartDT and 20131231 < e4.EndDT
WITH o, o.id_value as KnownIDValue, e5.Country as Country, count(distinct p.id_value) as ACount
WHERE ACount > 1
RETURN 20131231 as AsOfDate, 'CodeType' as KnownIDType, 'ACount' as MetricName, count(ACount) as MetricValue
;
I'm looking for a response time more like 15s or less, like I get with < 1M nodes.
What would you suggest? I am happy to provide more information if you tell me what you need.
Thanks a bunch in advance.
Here are a couple of ideas for how to speed up your query:
Don't use IN if there is only one element; use = instead.
With a growing number of nodes, the index lookup will obviously take longer. Instead of having a single label with an indexed property, you could use the id_type property as a label, something like (l:KnownIDTypeCode1)<-[e1:KNOWN_BY]-(m:KnownIDTypeCode2).
Split the query into two parts: first MATCH your KNOWN_BY path, then collect what you need using WITH and MATCH the HAS_PERSPECTIVE part.
The range queries on the StartDT and EndDT properties could be slow. Try removing them to test whether they are what slows the query down.
Also, it looks like you could replace the >= and < with =, since you use the same date everywhere.
If you really have to filter date ranges a lot, it might help to implement it in your graph model. One option would be to use Knownby nodes instead of KNOWN_BY relationships and connect them to Date nodes.
First, upgrade to version 2.3, because it should improve performance - http://neo4j.com/release-notes/neo4j-2-3-0/
Hint
It doesn't make sense to use IN for an array with one element.
Profile your query with EXPLAIN and PROFILE
http://neo4j.com/docs/stable/how-do-i-profile-a-query.html
Martin, your second recommendation has sped up my matching paths to single-digit seconds; I am grateful for your help. Thank you. While it involved refactoring the design of my graph and my query patterns, it has improved the performance dramatically. I decided to create CodeType1, CodeType2, CodeType[N] as node labels, and minimized the use of node properties, except for keeping the temporal properties on the edges. Thank you again so much! Please let me know if there is anything I can do to help.
I have been playing around with the heap size for the JVM and the file store cache size in Neo4j. It seems like setting the memory-mapped buffers to be handled by the OS does not have any effect on the system.
I tried setting the JVM heap quite large with a tiny cache, and it was exactly as fast as if the cache was large.
So my question is: How can I configure the system to allow me to control the cache? Is this an issue with the batching as it says that this uses the JVM heap?
I am using the following Python script to fill up the database:
import random
from datetime import datetime, timedelta

from py2neo import neo4j

graph_db = neo4j.GraphDatabaseService("http://localhost:7474/db/data/")

f = open('indexslowdown_exp.txt', 'w')
f.write("Properties\t,\tSpeed\n")

total_time = timedelta(0)
name = 0

for y in range(0, 1000):
    batch = neo4j.WriteBatch(graph_db)
    # create 100 nodes in this batch
    for x in range(0, 100):
        batch.create({"name": name})
        name += 1
    # create 100 relationships between randomly chosen existing nodes
    for x in range(0, 100):
        rand_node_A = random.randint(0, name - 1)
        rand_node_B = random.randint(0, name - 1)
        batch.append_cypher("START n=node(" + str(rand_node_A) + "), m=node(" + str(rand_node_B) + ") CREATE (n)-[r:CONNECTED]->(m)")
    # time only the round trip of submitting the batch
    start_time = datetime.now()
    batch.submit()
    end_time = datetime.now()
    total_time = end_time - start_time
    f.write(str(name) + " , " + str(total_time / 200) + "\n")
    print "Inserting nodes: " + str(total_time)

f.close()
neo4j.properties file:
use_memory_mapped_buffers=true
# Default values for the low-level graph engine
neostore.nodestore.db.mapped_memory=1k
neostore.relationshipstore.db.mapped_memory=1k
neostore.propertystore.db.mapped_memory=2k
neostore.propertystore.db.strings.mapped_memory=1k
neostore.propertystore.db.arrays.mapped_memory=1k
# Enable this to be able to upgrade a store from an older version
#allow_store_upgrade=true
# Enable this to specify a parser other than the default one.
#cypher_parser_version=2.0
# Keep logical logs, helps debugging but uses more disk space, enabled for
# legacy reasons To limit space needed to store historical logs use values such
# as: "7 days" or "100M size" instead of "true"
keep_logical_logs=true
# Autoindexing
# Enable auto-indexing for nodes, default is false
#node_auto_indexing=true
# The node property keys to be auto-indexed, if enabled
#node_keys_indexable=name,age
# Enable auto-indexing for relationships, default is false
#relationship_auto_indexing=true
# The relationship property keys to be auto-indexed, if enabled
#relationship_keys_indexable=name,age
neo4j-wrapper:
wrapper.java.additional=-Dorg.neo4j.server.properties=conf/neo4j-server.properties
wrapper.java.additional=-Djava.util.logging.config.file=conf/logging.properties
wrapper.java.additional=-Dlog4j.configuration=file:conf/log4j.properties
#********************************************************************
# JVM Parameters
#********************************************************************
wrapper.java.additional=-XX:+UseConcMarkSweepGC
wrapper.java.additional=-XX:+CMSClassUnloadingEnabled
# Uncomment the following lines to enable garbage collection logging
wrapper.java.additional=-Xloggc:data/log/neo4j-gc.log
wrapper.java.additional=-XX:+PrintGCDetails
wrapper.java.additional=-XX:+PrintGCDateStamps
wrapper.java.additional=-XX:+PrintGCApplicationStoppedTime
wrapper.java.additional=-XX:+PrintTenuringDistribution
# Initial Java Heap Size (in MB)
wrapper.java.initmemory=200
# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=400
#********************************************************************
# Wrapper settings
#********************************************************************
# Override default pidfile and lockfile
#wrapper.pidfile=../data/neo4j-server.pid
#wrapper.lockfile=../data/neo4j-server.lck
#********************************************************************
# Wrapper Windows NT/2000/XP Service Properties
#********************************************************************
# WARNING - Do not modify any of these properties when an application
# using this configuration file has been installed as a service.
# Please uninstall the service before modifying this section. The
# service can then be reinstalled.
# Name of the service
wrapper.name=neo4j
# User account to be used for linux installs. Will default to current
# user if not set.
wrapper.user=
What are you most concerned with? Batch insertion performance? If so, the memory-mapped I/O (MMIO) settings will be most effective if you can fit the entire graph into memory, so if you can estimate the number of nodes and relationships, you can come up with a rough calculation of the size you need for those two stores.
Also, given that you appear to be inserting only primitives, you can likely estimate the size you need for the property store. If you're going to store strings and arrays (of a larger type), you can increase the MMIO settings for those two stores as well, but if you don't need them, set them low.
Approx. size of node store: # of nodes * 14 bytes (if you're using the latest Neo4j; 9 bytes if Neo4j is < 2.0)
Approx. size of relationship store: # of rels * 33 bytes
Remember: There's a near 1:1 correspondence between the store sizes on disk and in memory for the filesystem cache.
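As a quick sanity check, the sizing is straightforward arithmetic. A small sketch (the node and relationship counts are placeholders; the per-record sizes are the ones quoted above):

public class MmapSizing {

    public static void main(String[] args) {
        // Placeholder estimates: substitute the counts you expect to insert.
        long nodeCount = 4_000_000L;
        long relationshipCount = 30_000_000L;

        // Per-record sizes quoted above: 14 bytes per node (9 bytes before 2.0),
        // 33 bytes per relationship.
        long nodeStoreBytes = nodeCount * 14;
        long relationshipStoreBytes = relationshipCount * 33;

        System.out.printf("neostore.nodestore.db.mapped_memory ~ %dM%n",
                toMegabytes(nodeStoreBytes));
        System.out.printf("neostore.relationshipstore.db.mapped_memory ~ %dM%n",
                toMegabytes(relationshipStoreBytes));
    }

    private static long toMegabytes(long bytes) {
        return (bytes / (1024 * 1024)) + 1; // round up to the next megabyte
    }
}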
As well, a larger JVM heap doesn't necessarily mean greater performance; in fact, the MMIO sizes (depending on your value for the setting use_memory_mapped_buffers) may lie outside of the JVM heap. Regardless, a large JVM heap can also introduce longer GC pauses and other GC-related issues, so don't make it too big.
HTH
For your queries:
Use Parameters!
Use Labels & Indexes in 2.0.x
Try not to use random in performance test loops
Make sure to use streaming
I have built a 5-node cluster using Riak 2.0pre11 on EC2 servers. I installed Riak, got it working, then repeated the same actions on 4 more servers using a bash script. At that point I used riak-admin cluster join riak@node1.example.com on nodes 2 through 5 to form a cluster.
Using the Python Riak client I wrote a script to send 10,000 documents to Riak. That works fine, and I wrote another script to retrieve a doc, which also worked fine. Other than specifying the use of protobufs, I haven't set any other options when storing keys. I stored all the docs via a connection to node1.
However, Riak seems to be storing all 3 replicas on the same node; in other words, the storage used on node1 is about 3x the size of the original HTML docs.
The script connected to node 1, and that is where all the docs are stored. I changed the script to connect to node 2 and sent 10,000 more, which also all ended up on node 1. I used the command du -h /data/riak/bitcask to verify the aggregate stored size of the objects. On nodes 2 through 4 there are only a few KB, which is the overhead of an empty Bitcask datastore.
For each document I specified the key similar to this
http://www.example.com/blogstore/007529.html4787somehash4787947:2014-03-12T19:14:32.887951Z
The first part of all the keys is identical (testing); only the .html name and the ISO 8601 timestamp differ. Is it possible that I have somehow subverted the consistent hashing function?
Basically I used a default config. What could be wrong? Since Riak 2.0 uses a different config format, here is a fragment of the generated config for riak-core in the old format:
{riak_core,
[{enable_consensus,false},
{platform_log_dir,"/var/log/riak"},
{platform_lib_dir,"/usr/lib/riak/lib"},
{platform_etc_dir,"/etc/riak"},
{platform_data_dir,"/var/lib/riak"},
{platform_bin_dir,"/usr/sbin"},
{dtrace_support,false},
{handoff_port,8099},
{ring_state_dir,"/datapool/riak/ring"},
{handoff_concurrency,2},
{ring_creation_size,64},
{default_bucket_props,
[{n_val,3},
{last_write_wins,false},
{allow_mult,true},
{basic_quorum,false},
{notfound_ok,true},
{rw,quorum},
{dw,quorum},
{pw,0},
{w,quorum},
{r,quorum},
{pr,0}]}]}
If the bitcask directory only grows on a single node, it sounds like the nodes might not be communicating. Please run riak-admin member-status to verify that all nodes in the cluster are active.
Once you have issued the riak-admin cluster join <node> commands on all the nodes joining the cluster, you will also need to run riak-admin cluster plan to verify that the plan is correct before committing it using riak-admin cluster commit. These commands are described in greater detail here.