I have an API in Django whose structure is something like this:
FetchData():
    run cypher query1
    run cypher query2
    run cypher query3
    return
When I run these queries in the Neo4j query window, each takes around 100 ms. But when I call this API, query1 takes 1 s while the other two take the expected 100 ms. This pattern repeats every time I call the API.
Can anyone explain what should be done here to make the first query run in the expected time?
Neo4j tries to cache the graph in RAM. On the first invocation the caches are not warmed up yet, so the IO operations take longer. Subsequent invocations don't hit IO and read directly from RAM.
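If warm-up is indeed the issue, you can pay that cost once at startup instead of on every first request. A minimal sketch, assuming the official neo4j Python driver and placeholder credentials; prop is a placeholder property name (with APOC installed, CALL apoc.warmup.run() is an alternative):

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Touch every node and relationship (and a property of each) so the
    # store files are pulled into the page cache before real traffic arrives.
    session.run(
        "MATCH (n) OPTIONAL MATCH (n)-[r]->() "
        "RETURN count(n.prop) + count(r.prop)"
    ).consume()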
That sounds weird. The cache should only need to be warmed if the server or the DB is shut down, not after each of your API calls. Are you using parameterized queries? The only thing I can think of is that each set of queries is somehow different, forcing them to be re-parsed and re-planned.
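If the queries are being rebuilt with inlined values on every call, switching to parameters lets Neo4j reuse its cached plans. A hedged sketch of what FetchData could look like with the official neo4j Python driver; the Cypher bodies, labels, and $id parameter are placeholders, not the asker's real queries:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def fetch_data(user_id):
    with driver.session() as session:
        # The query text is constant and only $id changes, so Neo4j can serve
        # every call after the first from its plan cache.
        r1 = session.run("MATCH (u:User {id: $id}) RETURN u", id=user_id).data()
        r2 = session.run("MATCH (u:User {id: $id})-[:VISITED]->(p) RETURN p", id=user_id).data()
        r3 = session.run("MATCH (u:User {id: $id})-[:CLICKED]->(e) RETURN e", id=user_id).data()
    return r1, r2, r3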
Related
I have a rather long and complex paginated query that I'm trying to optimize. In the worst case I first have to execute the data query in one call to Neo4j, and then execute pretty much the same query again for the count. Of course, I do everything in one transaction. Still, I don't like the overall execution time, so I extracted the part common to both the data and count queries and execute it in a first call. This common query returns the IDs of nodes, which I then pass as parameters to the rest of the data and count queries. Now everything works much faster. One thing I don't like is that the common query can sometimes return quite a large set of IDs, anywhere from 20k to 50k Long IDs.
So my question is: since I'm doing this in one transaction, is there a way to keep such a set of IDs somewhere in Neo4j between the common query and the data/count query calls, and just refer to them in the subsequent data/count queries without shipping them back and forth between the app JVM and Neo4j?
Also, am I crazy for doing this, or is it a good approach to optimizing a complex paginated query?
Only with a custom procedure; otherwise you'd need to return them.
That said, it's uncommon to provide both counts and data (even Google doesn't provide "real" counts).
One way is to just stream the results with the reactive driver for as long as the user scrolls.
Otherwise I would just query for pageSize+1 and report "more than pageSize results".
If you stream the IDs back (rather than collecting them into an aggregation), you can start using the IDs already received to issue your follow-up queries (even in parallel).
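For the pageSize+1 idea, a rough sketch in Python with the official neo4j driver; the Item label and its ordering property are invented for illustration:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

PAGE_SIZE = 50

def fetch_page(skip):
    with driver.session() as session:
        # Ask for one extra row: if PAGE_SIZE + 1 rows come back, there is at
        # least one more page, and no separate count query is needed.
        records = list(session.run(
            "MATCH (n:Item) RETURN n ORDER BY n.id SKIP $skip LIMIT $limit",
            skip=skip, limit=PAGE_SIZE + 1,
        ))
        return records[:PAGE_SIZE], len(records) > PAGE_SIZE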
I am using Neo4j Community 4.2.1 and playing with graph databases. I plan to operate on lots of data and want to get familiar with indexes and so on.
However, I'm stuck at a very basic level because the Neo4j Browser reports query runtimes that have nothing to do with reality.
I'm executing the following query in the browser at http://localhost:7687/:
MATCH (m:Method), (o:Method)
WHERE m.name = o.name AND m.name <> '<init>'
  AND m.signature = o.signature AND toInteger(o.access) % 8 IN [1, 4]
RETURN m, o
The DB has about 5000 nodes with the Method label.
The browser returns data after about 30 seconds. However, Neo4j reports
Started streaming 93636 records after 1 ms and completed after 42 ms, displaying first 1000 rows.
Well, 42 ms and 30 s are really far apart! What am I supposed to make of this message? Did the query take only milliseconds, and the remaining 30 s were spent rendering the result in the browser? What is going on here? How can I improve my query if I cannot even tell how long it really ran?
I modified the query to return count(m) + count(o) instead of m, o, which changed things: the runtime is now about 2 seconds, and Neo4j reports roughly the same figure.
Can somebody tell me how I can get realistic runtime figures for my queries without using the stopwatch on my phone?
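One way is to measure outside the browser. A minimal sketch with the official neo4j Python driver: the result summary reports the time until the first record was available, while a plain timer captures the wall-clock cost of consuming the whole stream:

import time
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    start = time.perf_counter()
    result = session.run(
        "MATCH (m:Method), (o:Method) "
        "WHERE m.name = o.name AND m.name <> '<init>' "
        "AND m.signature = o.signature AND toInteger(o.access) % 8 IN [1, 4] "
        "RETURN m, o"
    )
    records = list(result)  # force full consumption of the stream
    elapsed = time.perf_counter() - start

print(result.consume().result_available_after, "ms until the first record")
print(f"{elapsed:.2f} s to consume all {len(records)} records")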
When executing the same query repeatedly, can I assume there is almost no I/O cost after the first run, since memory presumably holds the data that the subsequent runs need? Assume the query's working set fits in memory.
If the page cache is sized reasonably well, that assumption should hold. Note that the graph is cached, but not the query results, so every time you run the query the graph is traversed (though nodes/relationships/properties are hopefully picked from the page cache rather than from SSD/HDD).
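The page cache size is set in neo4j.conf; a hedged example (the right value depends on your store size and available RAM):

dbms.memory.pagecache.size=4g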
In my scenario I have a few dozen Cypher queries executed one after another. If any of them returns data (reveals some knowledge), the graph is changed accordingly at the end of the loop and all the queries are executed again.
Currently I store all the queries as strings. There are never more than 20 loops, but having to parse all the queries every time still seems like overhead. Is there a way to optimize this, e.g. by storing the queries in some precompiled state? Or is there nothing to worry about?
Any other hints that would make this scenario run faster?
As others have pointed out in the comments, you should use query parameters where possible. This has two benefits:
You can reuse the queries in your code without having to construct new strings for whatever values you want to include.
Performance: the Cypher compiler caches the execution plan for queries it has seen before. If you use query parameters, you will not incur the overhead of generating a query plan again when re-executing the query.
http://neo4j.com/docs/stable/cypher-parameters.html
http://neo4j.com/docs/stable/tutorials-cypher-parameters-java.html
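Applied to the loop scenario above, a hedged sketch with the official neo4j Python driver; the query strings are invented placeholders. Because each query's text is identical on every iteration, Neo4j plans it once and serves later runs from its plan cache:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

QUERIES = [
    "MATCH (f:Fact) WHERE f.status = $status RETURN f",
    "MATCH (f:Fact)-[:IMPLIES]->(g) WHERE g.status = $status RETURN g",
]

with driver.session() as session:
    for _ in range(20):  # never more than 20 loops
        revealed = False
        for query in QUERIES:
            if session.run(query, status="open").data():
                revealed = True
        if not revealed:
            break  # no new knowledge revealed; stop iterating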
I am new to Neo4j and have set up a test graph DB for organizing some clickstream data, using a very small subset of what we actually handle on a day-to-day basis. This graph has about 23 million nodes and 34 million relationships. The queries seem to take forever to run, i.e. I haven't seen a response come back even after waiting for more than 30 minutes.
The data is organized as Year->Month->Day->Session{1..n}->Event{1..n}
I am running the DB on a Windows 7 machine with 1.5 GB of heap allocated to the Neo4j server.
These are the configurations in neo4j-wrapper.conf:
wrapper.java.additional.1=-Dorg.neo4j.server.properties=conf/neo4j-server.properties
wrapper.java.additional.2=-Djava.util.logging.config.file=conf/logging.properties
wrapper.java.additional.3=-Dlog4j.configuration=file:conf/log4j.properties
wrapper.java.additional.6=-XX:+UseParNewGC
wrapper.java.additional.7=-XX:+UseConcMarkSweepGC
wrapper.java.additional.8=-Xloggc:data/log/neo4j-gc.log
wrapper.java.initmemory=1500
wrapper.java.maxmemory=1500
This is what my query looks like:
START n=node(3)
MATCH (n)-[:HAS]->(s)
WITH distinct s
MATCH (s)-[:HAS]->(e) WHERE e.page_name = 'Login'
WITH s.session_id as session, e
MATCH (e)-[:FOLLOWEDBY*0..1]->(e1)
WITH count(session) as session_cnt, e.page_name as startPage, e1.page_name as nextPage
RETURN startPage, nextPage, session_cnt
I also have these properties set:
node_auto_indexing=true
node_keys_indexable=name,page_name,geo_country
relationship_auto_indexing=true
Can anyone help me figure out what might be wrong?
Even when I run portions of the query, it takes 10-15 minutes before I see a response.
Note: I have no other applications running on the Windows machine.
Why would you want to return all the nodes in the first place?
If you really want to do that, use the transactional HTTP endpoint and curl to stream the response:
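A hedged example of such a call (the endpoint path below matches the 2.x-era server discussed here; adjust host and credentials to your setup):

curl -H "Content-Type: application/json" -H "Accept: application/json" \
     -d '{"statements":[{"statement":"MATCH (n) RETURN n"}]}' \
     http://localhost:7474/db/data/transaction/commit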
I tested it with a database of 100k nodes. It takes 0.9 seconds to transfer them (1.5MB) over the wire.
If you transfer all their properties by using "return n", it takes 1.4 seconds and results in 4.1MB transferred.
If you just want to know how many nodes are in your DB, use something like this instead:
match (n) return count(*);