Query times vary too much between runs - neo4j

I am trying out Neo4j for my degree project, and I decided to create a database with the datagen provided by LDBC. This datagen came with these queries.
I decided to run one of those queries and collect its execution time (I ran it using the cypher-shell command). These are the times I collected (in ms):
553575, 558724, 556443, 556675, 551304, 555385, 552896
Note: I left the first run out because, as mentioned in this post, the first time seemed to be a spike due to a cold cache.
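For reference, a minimal sketch of how such timings can be collected with cypher-shell (query.cypher and the credentials below are placeholders; each run's output is saved so the reported times can be compared):
# Run the query several times, saving each run's output for comparison
for i in 1 2 3 4 5 6 7; do
  cat query.cypher | cypher-shell -u neo4j -p '<password>' > run_$i.txt
done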
Everything was smooth until here. Then I decided to create some paths with new labels, making the database grow from 1.5GB to 2.4GB. I ran the queries again, expecting them to be slower. To my surprise, the times were substantially smaller:
382331, 380566, 405636, 405953, 407277, 391804, 371134, ....
Based on this post, I expected them all to be around the same time (like the first run), and perhaps bigger, but definitely not smaller.
For the purpose of this question, I am using an AWS machine, specifically the i3.4xlarge instance (16 vCPUs, 122GB RAM and 2x 1.9TB NVMe SSD). The Neo4j version I am using is 3.3.5, and I am using cypher-shell as stated above.
Additionally, I have no other programs/processes running on this machine, so I think we can eliminate that issue (and even if that were the case, it should slow the queries down, not speed them up).
Thanks in advance; I am just asking in case someone can shed some light on this issue. :)
EDIT:
The changes I made were to add some paths between :Message and :Person nodes, some between :Person and :Person, and some between :TagClass (another label) and :Person. Additionally, the DB had indexes on :Tag(name), :Person(name), :TagClass(name) and :Country(name), but for the purpose of executing the query I dropped those.
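For reference, dropping (and later recreating) those indexes in Neo4j 3.3 looks roughly like this, using the label/property combinations listed above:
DROP INDEX ON :Tag(name);
DROP INDEX ON :Person(name);
DROP INDEX ON :TagClass(name);
DROP INDEX ON :Country(name);
// and to recreate one of them afterwards:
CREATE INDEX ON :Tag(name);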
The original query:
PROFILE
MATCH (tag:Tag {name: 'Arnold_Schwarzenegger'})
MATCH (tag)<-[:HAS_TAG]-(message1:Message)-[:HAS_CREATOR]->(person1:Person)
MATCH (tag)<-[:HAS_TAG]-(message2:Message)-[:HAS_CREATOR]->(person1)
OPTIONAL MATCH (message2)<-[:LIKES]-(person2:Person)
OPTIONAL MATCH (person2)<-[:HAS_CREATOR]-(message3:Message)<-[like:LIKES]-(p3:Person)
RETURN
person1.id,
count(DISTINCT like) AS authorityScore
ORDER BY
authorityScore DESC,
person1.id ASC
LIMIT 100
These are the PROFILE summaries from both runs; due to space I only added one plan. I can assure you both plans are exactly the same (even on db hits, paths expanded, everything):
1st run:
| Plan | Statement | Version | Planner | Runtime | Time | DbHits | Rows |
| "PROFILE" | "READ_ONLY" | "CYPHER 3.3" | "COST" | "INTERPRETED" | 556443 | 0 | 100 |
2nd:
| Plan | Statement | Version | Planner | Runtime | Time | DbHits | Rows |
| "PROFILE" | "READ_ONLY" | "CYPHER 3.3" | "COST" | "INTERPRETED" | 371134 | 0 | 100 |
Plan:
+----------------------+----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| Operator | Estimated Rows | Rows | DB Hits | Cache H/M | Identifiers | Other |
+----------------------+----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| +ProduceResults | 100 | 100 | 0 | 0/0 | anon[349], anon[355], authorityScore, person1.id | 0.0 |
| | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| +Projection | 100 | 100 | 0 | 0/0 | anon[349], anon[355], authorityScore, person1.id | 0.0; {person1.id : , authorityScore : } |
| | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| +Top | 100 | 100 | 0 | 0/0 | anon[349], anon[355] | 0.0; 100; anon[355], anon[349] |
| | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| +EagerAggregation | 126593 | 1842 | 49948155 | 0/0 | anon[349], anon[355] | 0.0; anon[349] |
| | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| +Apply | 16025884097 | 49948155 | 0 | 0/0 | person2, anon[219], message3, anon[60], tag, anon[136], p3, message1, anon[167], anon[271], message2, anon[91], person1, like | 0.0 |
| |\ +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| | +Optional | 7317 | 49948155 | 0 | 0/0 | person2, message3, p3, anon[271], like | 0.0 |
| | | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| | +Filter | 5037 | 49897917 | 49897917 | 0/0 | person2, message3, p3, anon[271], like | 0.0; p3:Person |
| | | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| | +Expand(All) | 5037 | 49897917 | 108680609 | 0/0 | person2, message3, p3, anon[271], like | 0.0; (message3)<-[like:LIKES]-(p3) |
| | | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| | +Filter | 7028 | 58782692 | 58782692 | 0/0 | anon[271], message3, person2 | 0.0; message3:Message |
| | | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| | +Expand(All) | 7028 | 58782692 | 58866898 | 0/0 | anon[271], message3, person2 | 0.0; (person2)<-[anon[271]:HAS_CREATOR]-(message3) |
| | | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| | +Argument | 7317 | 130898 | 0 | 0/0 | person2 | 0.0 |
| | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| +OptionalExpand(All) | 7317 | 130898 | 220191 | 0/0 | person2, anon[219], anon[60], tag, anon[136], message1, anon[167], message2, anon[91], person1 | 0.0; (message2)<-[anon[219]:LIKES]-(person2); person2:Person |
| | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| +NodeHashJoin | 7317 | 51779 | 0 | 0/0 | anon[60], tag, anon[136], message1, anon[167], message2, anon[91], person1 | 0.0; tag, person1 |
| |\ +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| | +Filter | 341165 | 4241 | 4241 | 0/0 | tag, anon[136], anon[167], message2, person1 | 0.0; person1:Person |
| | | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| | +Expand(All) | 341165 | 4241 | 8482 | 0/0 | tag, anon[136], anon[167], message2, person1 | 0.0; (message2)-[anon[167]:HAS_CREATOR]->(person1) |
| | | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| | +Filter | 341165 | 4241 | 4646 | 0/0 | anon[136], message2, tag | 0.0; message2:Message |
| | | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| | +Expand(All) | 372142 | 4646 | 4647 | 0/0 | anon[136], message2, tag | 0.0; (tag)<-[anon[136]:HAS_TAG]-(message2) |
| | | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| | +Filter | 1608 | 1 | 16080 | 0/0 | tag | 0.0; tag.name = { AUTOSTRING0} |
| | | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| | +NodeByLabelScan | 16080 | 16080 | 16081 | 0/0 | tag | 0.0; :Tag |
| | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| +Filter | 341165 | 4241 | 4241 | 0/0 | anon[60], tag, message1, anon[91], person1 | 0.0; person1:Person |
| | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| +Expand(All) | 341165 | 4241 | 8482 | 0/0 | anon[60], tag, message1, anon[91], person1 | 0.0; (message1)-[anon[91]:HAS_CREATOR]->(person1) |
| | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| +Filter | 341165 | 4241 | 4646 | 0/0 | anon[60], message1, tag | 0.0; message1:Message |
| | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| +Expand(All) | 372142 | 4646 | 4647 | 0/0 | anon[60], message1, tag | 0.0; (tag)<-[anon[60]:HAS_TAG]-(message1) |
| | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| +Filter | 1608 | 1 | 16080 | 0/0 | tag | 0.0; tag.name = { AUTOSTRING0} |
| | +----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
| +NodeByLabelScan | 16080 | 16080 | 16081 | 0/0 | tag | 0.0; :Tag |
+----------------------+----------------+----------+-----------+-----------+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------+
... Results ...
100 rows available after 371134 ms, consumed after another 0 ms
Note: SO doesn't allow me to write over 30,000 characters, so I had to reduce the text.

The Cypher planner takes the current characteristics of the DB into account when deciding how to carry out a query. Any changes to a DB can alter the execution plan, which can greatly affect the timing.
The best way to understand timing differences after updating a DB is to do before-and-after comparisons of the query profile. Without knowing what query you are using, and especially without the before-and-after profile information (neither of which was originally provided), it is difficult, if not impossible, to figure out why the timing changed.
The only guess that can be made, with the little information available, is that the timing changed because the DB characteristics changed in ways that caused the planner to alter the execution plan.
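For example, a rough sketch of how to make before-and-after runs more comparable (this assumes the APOC plugin is installed; the MATCH shown is just a trimmed stand-in for whatever query is being measured):
// Warm the page cache so successive runs start from a comparable state (APOC)
CALL apoc.warmup.run();

// EXPLAIN shows the plan the planner would pick without executing the query
EXPLAIN
MATCH (tag:Tag {name: 'Arnold_Schwarzenegger'})<-[:HAS_TAG]-(m:Message)
RETURN count(m);

// PROFILE executes the query and reports rows and db hits per operator;
// comparing these before and after the data change shows whether the plan
// changed or the same plan simply became cheaper or more expensive
PROFILE
MATCH (tag:Tag {name: 'Arnold_Schwarzenegger'})<-[:HAS_TAG]-(m:Message)
RETURN count(m);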

Related

MemSQL takes 15GB memory for 10MB of data

I have installed MemSQL 5.1.2 in the following manner, with the following resources.
Google cloud server
HDD: 100GB
Machine type: n1-standard-4 (4 vCPUs, 15 GB memory)
Implementation:
2 MemSQL nodes running on the same machine, on the following ports:
3306 Master Aggregator
3307 Leaf
Resource Utilization:
Memory 14.16 GB / 14.69 GB
Paging 0 B/s
Database size - 10MB
1818 memsql 1.1% 77% /var/lib/memsql/leaf-3307/memsqld --defaults-file=/var/lib/memsql/leaf-3307/memsql.cnf --pid-file=/var/lib/memsql/leaf-3307/data/memsqld.pid --user=memsql
2736 memsql 0.3% 16% /var/lib/memsql/master-3306/memsqld --defaults-file=/var/lib/memsql/master-330
Note: There is no swap memory configured on the server.
The database size was obtained by running a query on information_schema.TABLES.
All data resides in the rowstore, since we have to run queries involving many relationships among tables.
As soon as MemSQL is up, memory usage goes up to 70% and keeps increasing; after 2-3 hours MemSQL gives the following error when I try to connect, and no further connections can be made after that.
OperationalError: (1836, "Leaf 'xx.xxx.x.xx':3307 failed while executing this query. Try re-running the query.")
[Mon Mar 27 09:26:31.163455 2017] [:error] [pid 1718] [remote xxx.xxx.xxx.xxx:9956]
The only solution is to restart the server, since it has taken up all the memory.
What can I do about this? Is there an issue in the way it's implemented? Are there any logs I should attach here?
The show status extended; query gives the following result:
+-------------------------------------+------------------------------------------------------------------------+
| Variable_name | Value |
+-------------------------------------+------------------------------------------------------------------------+
| Aborted_clients | 48 |
| Aborted_connects | 1 |
| Bytes_received | 85962135 |
| Bytes_sent | 545322701 |
| Connections | 1626 |
| Max_used_connections | 69 |
| Queries | 364793 |
| Questions | 364793 |
| Threads_cached | 19 |
| Threads_connected | 50 |
| Threads_created | 69 |
| Threads_running | 1 |
| Threads_background | 1 |
| Threads_idle | 0 |
| Ready_queue | 0 |
| Idle_queue | 0 |
| Context_switches | 1626 |
| Context_switch_misses | 0 |
| Uptime | 22270 |
| Auto_attach_remaining_seconds | 0 |
| Data_directory | /var/lib/memsql/leaf-3307/data |
| Plancache_directory | /var/lib/memsql/leaf-3307/plancache |
| Transaction_logs_directory | /var/lib/memsql/leaf-3307/data/logs |
| Segments_directory | /var/lib/memsql/leaf-3307/data/columns |
| Snapshots_directory | /var/lib/memsql/leaf-3307/data/snapshots |
| Threads_waiting_for_disk_space | 0 |
| Seconds_until_expiration | -1 |
| License_key | 11111111111111111111111111111111 |
| License_type | community |
| Query_compilations | 62 |
| Query_compilation_failures | 0 |
| GCed_versions_last_sweep | 0 |
| Average_garbage_collection_duration | 21 ms |
| Total_server_memory | 9791.4 MB |
| Alloc_thread_stacks | 70.0 MB |
| Malloc_active_memory | 1254.7 (+0.0) MB |
| Malloc_cumulative_memory | 7315.5 (+0.2) MB |
| Buffer_manager_memory | 1787.8 MB |
| Buffer_manager_cached_memory | 77.2 (-0.1) MB |
| Buffer_manager_unrecycled_memory | 0.0 MB |
| Alloc_skiplist_tower | 263.8 MB |
| Alloc_variable | 501.4 MB |
| Alloc_large_variable | 2.4 MB |
| Alloc_table_primary | 752.6 MB |
| Alloc_deleted_version | 92.9 MB |
| Alloc_internal_key_node | 72.1 MB |
| Alloc_hash_buckets | 459.1 MB |
| Alloc_table_metadata_cache | 1.1 MB |
| Alloc_unit_images | 34.8 MB |
| Alloc_unit_ifn_thunks | 0.6 MB |
| Alloc_object_code_images | 11.6 MB |
| Alloc_compiled_unit_sections | 17.3 MB |
| Alloc_databases_list_entry | 17.9 MB |
| Alloc_plan_cache | 0.1 MB |
| Alloc_replication_large | 232.0 MB |
| Alloc_durability_large | 7239.1 MB |
| Alloc_sharding_partitions | 0.1 MB |
| Alloc_security | 0.1 MB |
| Alloc_log_replay | 0.9 MB |
| Alloc_client_connection | 3.0 MB |
| Alloc_protocol_packet | 6.1 (+0.1) MB |
| Alloc_large_incremental | 0.8 MB |
| Alloc_table_memory | 2144.2 MB |
| Alloc_variable_bucket_16 | allocs:10877846 alloc_MB:166.0 buffer_MB:179.0 cached_buffer_MB:1.9 |
| Alloc_variable_bucket_24 | allocs:4275659 alloc_MB:97.9 buffer_MB:106.8 cached_buffer_MB:1.9 |
| Alloc_variable_bucket_32 | allocs:2875801 alloc_MB:87.8 buffer_MB:93.4 cached_buffer_MB:1.9 |
| Alloc_variable_bucket_40 | allocs:724489 alloc_MB:27.6 buffer_MB:31.0 cached_buffer_MB:1.2 |
| Alloc_variable_bucket_48 | allocs:377060 alloc_MB:17.3 buffer_MB:19.8 cached_buffer_MB:0.9 |
| Alloc_variable_bucket_56 | allocs:228720 alloc_MB:12.2 buffer_MB:14.0 cached_buffer_MB:0.8 |
| Alloc_variable_bucket_64 | allocs:150214 alloc_MB:9.2 buffer_MB:10.1 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_72 | allocs:35264 alloc_MB:2.4 buffer_MB:2.9 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_80 | allocs:14920 alloc_MB:1.1 buffer_MB:1.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_88 | allocs:5582 alloc_MB:0.5 buffer_MB:0.6 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_104 | allocs:8075 alloc_MB:0.8 buffer_MB:1.0 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_128 | allocs:8892 alloc_MB:1.1 buffer_MB:1.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_160 | allocs:17614 alloc_MB:2.7 buffer_MB:3.0 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_200 | allocs:30454 alloc_MB:5.8 buffer_MB:6.9 cached_buffer_MB:0.6 |
| Alloc_variable_bucket_248 | allocs:4875 alloc_MB:1.2 buffer_MB:1.5 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_312 | allocs:371 alloc_MB:0.1 buffer_MB:0.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_384 | allocs:30 alloc_MB:0.0 buffer_MB:0.1 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_480 | allocs:11 alloc_MB:0.0 buffer_MB:0.1 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_600 | allocs:57 alloc_MB:0.0 buffer_MB:0.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_752 | allocs:62 alloc_MB:0.0 buffer_MB:0.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_936 | allocs:42 alloc_MB:0.0 buffer_MB:0.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_1168 | allocs:106 alloc_MB:0.1 buffer_MB:0.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_1480 | allocs:126 alloc_MB:0.2 buffer_MB:0.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_1832 | allocs:0 alloc_MB:0.0 buffer_MB:0.2 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_2288 | allocs:1 alloc_MB:0.0 buffer_MB:0.2 cached_buffer_MB:0.1 |
| Alloc_variable_bucket_2832 | allocs:33 alloc_MB:0.1 buffer_MB:1.1 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_3528 | allocs:16 alloc_MB:0.1 buffer_MB:0.5 cached_buffer_MB:0.1 |
| Alloc_variable_bucket_4504 | allocs:49 alloc_MB:0.2 buffer_MB:0.8 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_5680 | allocs:66 alloc_MB:0.4 buffer_MB:1.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_6224 | allocs:30 alloc_MB:0.2 buffer_MB:1.0 cached_buffer_MB:0.1 |
| Alloc_variable_bucket_7264 | allocs:94 alloc_MB:0.7 buffer_MB:1.5 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_9344 | allocs:70 alloc_MB:0.6 buffer_MB:2.6 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_11896 | allocs:14 alloc_MB:0.2 buffer_MB:2.4 cached_buffer_MB:1.2 |
| Alloc_variable_bucket_14544 | allocs:7 alloc_MB:0.1 buffer_MB:2.4 cached_buffer_MB:1.9 |
| Alloc_variable_bucket_18696 | allocs:18 alloc_MB:0.3 buffer_MB:3.2 cached_buffer_MB:1.9 |
| Alloc_variable_bucket_21816 | allocs:4 alloc_MB:0.1 buffer_MB:0.4 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_26184 | allocs:6 alloc_MB:0.1 buffer_MB:0.9 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_32728 | allocs:13 alloc_MB:0.4 buffer_MB:2.4 cached_buffer_MB:1.4 |
| Alloc_variable_bucket_43648 | allocs:12 alloc_MB:0.5 buffer_MB:1.4 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_65472 | allocs:7 alloc_MB:0.4 buffer_MB:2.8 cached_buffer_MB:1.9 |
| Alloc_variable_bucket_130960 | allocs:3 alloc_MB:0.4 buffer_MB:2.2 cached_buffer_MB:1.9 |
| Alloc_variable_cached_buffers | 21.4 MB |
| Alloc_variable_allocated | 438.7 MB |
| Successful_read_queries | 9048 |
| Successful_write_queries | 19096 |
| Failed_read_queries | 0 |
| Failed_write_queries | 4 |
| Rows_returned_by_reads | 75939 |
| Rows_affected_by_writes | 245 |
| Execution_time_of_reads | 7864 ms |
| Execution_time_of_write | 180311 ms |
| Transaction_buffer_wait_time | 0 ms |
| Transaction_log_flush_wait_time | 0 ms |
| Row_lock_wait_time | 0 ms |
| Ssl_accept_renegotiates | 0 |
| Ssl_accepts | 0 |
| Ssl_callback_cache_hits | 0 |
| Ssl_client_connects | 0 |
| Ssl_connect_renegotiates | 0 |
| Ssl_ctx_verify_depth | 18446744073709551615 |
| Ssl_ctx_verify_mode | 0 |
| Ssl_default_timeout | 0 |
| Ssl_finished_accepts | 0 |
| Ssl_finished_connects | 0 |
| Ssl_session_cache_hits | 0 |
| Ssl_session_cache_misses | 0 |
| Ssl_session_cache_overflows | 0 |
| Ssl_session_cache_size | 20480 |
| Ssl_session_cache_timeouts | 0 |
| Ssl_sessions_reused | 0 |
| Ssl_used_session_cache_entries | 0 |
| Ssl_verify_depth | 0 |
| Ssl_verify_mode | 0 |
| Ssl_cipher | |
| Ssl_cipher_list | |
| Ssl_version | |
| Ssl_session_cache_mode | SERVER |
+-------------------------------------+------------------------------------------------------------------------+
From the status output, we can see:
10GB total memory on the leaf node
7GB Alloc_durability_large
You can see what these variables mean here: https://help.memsql.com/hc/en-us/articles/115001091386-What-Is-Using-Memory-on-My-Leaves-
Most interesting is the large amount in Alloc_durability_large, which is unusual. Do you have a large number of databases and/or partitions? (You can check by counting the number of rows in SHOW DATABASES EXTENDED on the leaf node.) Each one requires a fixed amount of transaction buffer memory (the default is 64 MB).
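For example, on the leaf node (port 3307 here) you could check with something like the following; the transaction_buffer variable name is an assumption and may differ between MemSQL versions:
-- Count the databases/partitions on the leaf; each partition reserves
-- a fixed transaction buffer (64 MB by default)
SHOW DATABASES EXTENDED;

-- Inspect the configured per-partition transaction buffer size
-- (variable name assumed; it is set in memsql.cnf if it needs lowering)
SHOW VARIABLES LIKE 'transaction_buffer%';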

Finding root of a tree in a directed graph

I have a tree structure like node(1)->node(2)->node(3), with a name property used to retrieve a node.
Given a node, say node(3), I want to retrieve node(1).
Query tried:
MATCH (p:Node)-[:HAS*]->(c:Node) WHERE c.name = "node 3" RETURN p LIMIT 5
But I am not able to get node 1.
Your query won't return only "node 1", but it should at least include one row containing it. However, it is possible to filter the paths to get only the one traversing all the way to the root:
MATCH (c:Node {name: "node 3"})<-[:HAS*0..]-(p:Node)
// The root does not have any incoming relationship
WHERE NOT (p)<-[:HAS]-()
RETURN p
Note the use of the 0 length (*0..), which matches all cases, including the one where the start node is itself the root.
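To try this out, a minimal dataset matching the node(1)->node(2)->node(3) description can be created like this (node names assumed from the question); running the query above against it returns only node 1:
CREATE (n1:Node {name: 'node 1'})-[:HAS]->(n2:Node {name: 'node 2'})-[:HAS]->(n3:Node {name: 'node 3'});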
Fun fact: even if you have an index on :Node(name), it won't be used (unless you're using Neo4j 3.1, where it seems to be fixed since 3.1 Beta2 at least), so you have to specify it explicitly:
MATCH (c:Node {name: "node 3"})<-[:HAS*0..]-(p:Node)
USING INDEX c:Node(name)
WHERE NOT (p)<-[:HAS]-()
RETURN p
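(For completeness, the index referred to here would be created with something like the following, assuming Neo4j 3.x syntax:)
CREATE INDEX ON :Node(name);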
Using PROFILE on the first query (with a numerical id property instead of name):
+-----------------------+----------------+------+---------+-------------------------+----------------------+
| Operator | Estimated Rows | Rows | DB Hits | Variables | Other |
+-----------------------+----------------+------+---------+-------------------------+----------------------+
| +ProduceResults | 0 | 1 | 0 | p | p |
| | +----------------+------+---------+-------------------------+----------------------+
| +AntiSemiApply | 0 | 1 | 0 | anon[23], c -- p | |
| |\ +----------------+------+---------+-------------------------+----------------------+
| | +Expand(All) | 1 | 0 | 3 | anon[58], anon[67] -- p | (p)<-[:HAS]-() |
| | | +----------------+------+---------+-------------------------+----------------------+
| | +Argument | 1 | 3 | 0 | p | |
| | +----------------+------+---------+-------------------------+----------------------+
| +Filter | 1 | 3 | 3 | anon[23], c, p | p:Node |
| | +----------------+------+---------+-------------------------+----------------------+
| +VarLengthExpand(All) | 1 | 3 | 5 | anon[23], p -- c | (c)<-[:HAS*]-(p) |
| | +----------------+------+---------+-------------------------+----------------------+
| +Filter | 1 | 1 | 3 | c | c.id == { AUTOINT0} |
| | +----------------+------+---------+-------------------------+----------------------+
| +NodeByLabelScan | 3 | 3 | 4 | c | :Node |
+-----------------------+----------------+------+---------+-------------------------+----------------------+
Total database accesses: 18
and on the second one:
+-----------------------+----------------+------+---------+-------------------------+------------------+
| Operator | Estimated Rows | Rows | DB Hits | Variables | Other |
+-----------------------+----------------+------+---------+-------------------------+------------------+
| +ProduceResults | 0 | 1 | 0 | p | p |
| | +----------------+------+---------+-------------------------+------------------+
| +AntiSemiApply | 0 | 1 | 0 | anon[23], c -- p | |
| |\ +----------------+------+---------+-------------------------+------------------+
| | +Expand(All) | 1 | 0 | 3 | anon[81], anon[90] -- p | (p)<-[:HAS]-() |
| | | +----------------+------+---------+-------------------------+------------------+
| | +Argument | 1 | 3 | 0 | p | |
| | +----------------+------+---------+-------------------------+------------------+
| +Filter | 1 | 3 | 3 | anon[23], c, p | p:Node |
| | +----------------+------+---------+-------------------------+------------------+
| +VarLengthExpand(All) | 1 | 3 | 5 | anon[23], p -- c | (c)<-[:HAS*]-(p) |
| | +----------------+------+---------+-------------------------+------------------+
| +NodeUniqueIndexSeek | 1 | 1 | 2 | c | :Node(id) |
+-----------------------+----------------+------+---------+-------------------------+------------------+
Total database accesses: 13

"Extract" intervals from series in Google Sheets

If, in Google Sheets, I have a series defined as
[29060, 29062, 29331, 29332, 29333, 29334, 29335, 29336, 29337, 29338, 29339, 29340, 29341,
29342, 29372, 29373].
How do I make them line up in intervals like this?
|From |To |
|29060 |29062 |
|29331 |29342 |
|29372 |29373 |
I can't find any good answers for this anywhere. Please, help!
Data/Formulas
A1:
29060, 29062, 29331, 29332, 29333, 29334, 29335, 29336, 29337, 29338, 29339, 29340, 29341,
29342, 29372, 29373
B1: =transpose(split(A1,",")). Converts the input text is an a vertical array.
C1: =FILTER(B1:B16,mod(ROW(B1:B16),2)<>0). Returns values in odd rows.
D1: =FILTER(B1:B16,mod(ROW(B1:B16),2)=0). Returns values in even rows.
E1: =ArrayFormula(FILTER(C1:C8,{TRUE();C2:C8<>D1:D7+1})). Returns values that start a range.
F1: =ArrayFormula(FILTER(D1:D8,{D1:D7+2<>D2:D8;TRUE()})). Returns values that end a range.
Result
Note: A1 values are not shown for readability.
+----+---+-------+-------+-------+-------+-------+
| | A | B | C | D | E | F |
+----+---+-------+-------+-------+-------+-------+
| 1 | | 29060 | 29060 | 29062 | 29060 | 29062 |
| 2 | | 29062 | 29331 | 29332 | 29331 | 29342 |
| 3 | | 29331 | 29333 | 29334 | 29372 | 29373 |
| 4 | | 29332 | 29335 | 29336 | | |
| 5 | | 29333 | 29337 | 29338 | | |
| 6 | | 29334 | 29339 | 29340 | | |
| 7 | | 29335 | 29341 | 29342 | | |
| 8 | | 29336 | 29372 | 29373 | | |
| 9 | | 29337 | | | | |
| 10 | | 29338 | | | | |
| 11 | | 29339 | | | | |
| 12 | | 29340 | | | | |
| 13 | | 29341 | | | | |
| 14 | | 29342 | | | | |
| 15 | | 29372 | | | | |
| 16 | | 29373 | | | | |
+----+---+-------+-------+-------+-------+-------+

Neo4j and Cypher: Match nodes that have a single relationship to a target node

I'm trying to identify nodes that have only one relationship of a given type.
Imagine a graph of Route and Stop nodes. A Route may have 0 or more Stops, a Stop may be shared between multiple Routes, a Stop must always have at least 1 Route. I want to match and delete Stops that will be orphaned if a given Route is deleted.
Before anyone says anything, I know that it would be easier to just find stops without routes after the route is deleted, but that isn't an option. We're also not worried about deleting the routes here, just the stops.
Here's my query:
MATCH (r1:Route { id: {route_id} })-[rel1:HAS_STOP]->(s:Stop)
MATCH (r2:Route)-[rel2:HAS_STOP]->(s)
WITH s, COUNT(rel2) as c
WHERE c = 1
MATCH s-[rel2]-()
DELETE s, rel2
This works perfectly... but is there a better way? It feels like it could be more efficient but I'm not sure how.
EDIT
Here is a query that matches only the nodes that will be orphaned, without deleting the current route:
MATCH (route:Route {id:'99e08bdf-130f-3fca-8292-27d616fa025f'})
WITH route
OPTIONAL MATCH (route)-[r:HAS_STOP]->(s)
WHERE NOT EXISTS((route)--(s)<-[:HAS_STOP]-())
DELETE r,s
and the execution plan:
neo4j-sh (?)$ PROFILE MATCH (route:Route {id:'99e08bdf-130f-3fca-8292-27d616fa025f'})
> WITH route
> OPTIONAL MATCH (route)-[r:HAS_STOP]->(s)
> WHERE NOT EXISTS((route)--(s)<-[:HAS_STOP]-())
> DELETE r,s;
+-------------------+
| No data returned. |
+-------------------+
Nodes deleted: 2
Relationships deleted: 2
EmptyResult
|
+UpdateGraph
|
+Eager
|
+OptionalMatch
|
+SchemaIndex(1)
|
+Filter
|
+SimplePatternMatcher
|
+SchemaIndex(1)
+----------------------+------+--------+--------------+----------------------------------------------------------------------------------------------------+
| Operator | Rows | DbHits | Identifiers | Other |
+----------------------+------+--------+--------------+----------------------------------------------------------------------------------------------------+
| EmptyResult | 0 | 0 | | |
| UpdateGraph | 2 | 4 | | DeleteEntity; DeleteEntity |
| Eager | 2 | 0 | | |
| OptionalMatch | 2 | 0 | | |
| SchemaIndex(1) | 1 | 2 | route, route | { AUTOSTRING0}; :Route(id) |
| Filter | 2 | 0 | | NOT(nonEmpty(PathExpression((route)-[ UNNAMED140]-(s),(160)-[ UNNAMED145:HAS_STOP]->(s), true))) |
| SimplePatternMatcher | 2 | 0 | route, s, r | |
| SchemaIndex(1) | 1 | 2 | route, route | { AUTOSTRING0}; :Route(id) |
+----------------------+------+--------+--------------+----------------------------------------------------------------------------------------------------+
Total database accesses: 8
** OLD ANSWER **
I leave it here since it may help others:
In your query, you're not deleting the route, nor the relationships to the stops that will not be orphaned. You can do it all in one go.
This is the query I have for the same use case as yours. I also compared the two execution plans on a test graph where each route has about 160 stops, 2 of which will be orphaned after the route deletion; the graph is available here: http://graphgen.neoxygen.io/?graph=JPnvQWZcQW685m
My query :
MATCH (route:Route {id:'e70ea0d4-03e2-3ca4-afc0-dfdc1754868e'})
WITH route
MATCH (route)-[r:HAS_STOP]->(s)
WITH route, r, collect(s) as stops
DELETE r, route
WITH filter(x in stops WHERE NOT x--()) as orphans
UNWIND orphans as orphan
DELETE orphan
Here is my profiled query:
neo4j-sh (?)$ PROFILE MATCH (route:Route {id:'1c565ac4-b72b-37c3-be7f-a38f2a7f66a8'})
> WITH route
> MATCH (route)-[r:HAS_STOP]->(s)
> WITH route, r, collect(s) as stops
> DELETE r, route
> WITH filter(x in stops WHERE NOT x--()) as orphans
> UNWIND orphans as orphan
> DELETE orphan;
+-------------------+
| No data returned. |
+-------------------+
Nodes deleted: 2
Relationships deleted: 157
EmptyResult
|
+UpdateGraph(0)
|
+UNWIND
|
+ColumnFilter(0)
|
+Eager
|
+Extract
|
+UpdateGraph(1)
|
+ColumnFilter(1)
|
+EagerAggregation
|
+SimplePatternMatcher
|
+SchemaIndex
+----------------------+------+--------+--------------+------------------------------+
| Operator | Rows | DbHits | Identifiers | Other |
+----------------------+------+--------+--------------+------------------------------+
| EmptyResult | 0 | 0 | | |
| UpdateGraph(0) | 1 | 1 | | DeleteEntity |
| UNWIND | 1 | 0 | | |
| ColumnFilter(0) | 157 | 0 | | keep columns orphans |
| Eager | 157 | 0 | | |
| Extract | 157 | 0 | | orphans |
| UpdateGraph(1) | 157 | 158 | | DeleteEntity; DeleteEntity |
| ColumnFilter(1) | 157 | 0 | | keep columns route, r, stops |
| EagerAggregation | 157 | 0 | | route, r |
| SimplePatternMatcher | 157 | 0 | route, s, r | |
| SchemaIndex | 1 | 2 | route, route | { AUTOSTRING0}; :Route(id) |
+----------------------+------+--------+--------------+------------------------------+
Total database accesses: 161
With your query:
I slightly modified your query to make use of schema indexes.
This is the execution plan with your query; the difference in db accesses is quite high:
PROFILE MATCH (r1:Route { id: '1c565ac4-b72b-37c3-be7f-a38f2a7f66a8' })
> WITH r1
> MATCH (r1)-[rel1:HAS_STOP]->(s:Stop)
> MATCH (r2:Route)-[rel2:HAS_STOP]->(s)
> WITH s, COUNT(rel2) as c
> WHERE c = 1
> MATCH s-[rel2]-()
> DELETE s, rel2;
+-------------------+
| No data returned. |
+-------------------+
Nodes deleted: 1
Relationships deleted: 1
EmptyResult
|
+UpdateGraph
|
+Eager
|
+SimplePatternMatcher(0)
|
+Filter(0)
|
+ColumnFilter
|
+EagerAggregation
|
+Filter(1)
|
+SimplePatternMatcher(1)
|
+Filter(2)
|
+SimplePatternMatcher(2)
|
+SchemaIndex
+-------------------------+------+--------+-----------------------+-----------------------------+
| Operator | Rows | DbHits | Identifiers | Other |
+-------------------------+------+--------+-----------------------+-----------------------------+
| EmptyResult | 0 | 0 | | |
| UpdateGraph | 1 | 2 | | DeleteEntity; DeleteEntity |
| Eager | 1 | 0 | | |
| SimplePatternMatcher(0) | 1 | 0 | UNNAMED200, s, rel2 | |
| Filter(0) | 1 | 0 | | c == { AUTOINT1} |
| ColumnFilter | 157 | 0 | | keep columns s, c |
| EagerAggregation | 157 | 0 | | s |
| Filter(1) | 4797 | 4797 | | hasLabel(r2:Route(4)) |
| SimplePatternMatcher(1) | 4797 | 4797 | r2, s, rel2 | |
| Filter(2) | 157 | 157 | | hasLabel(s:Stop(3)) |
| SimplePatternMatcher(2) | 157 | 157 | r1, s, rel1 | |
| SchemaIndex | 1 | 2 | r1, r1 | { AUTOSTRING0}; :Route(id) |
+-------------------------+------+--------+-----------------------+-----------------------------+
Total database accesses: 9912

Error Copying DBF/MDX files

I use the following code to copy dbf/mdx files from one folder to another:
procedure TfrmMain.MyCopyFile(S1, S2: string);
begin
  // Copy S1 to S2; if the target already exists, ask before overwriting.
  if not FileExists(S2) then
    CopyFile(PChar(S1), PChar(S2), True)   // True = fail if destination exists
  else if Application.MessageBox(PChar('Overwrite existing file ' + S2 + '?'),
            'File exists in folder', MB_YESNO + MB_DEFBUTTON1) = IDYES then
    CopyFile(PChar(S1), PChar(S2), False); // False = overwrite the destination
end;
The code works fine when the table name stays the same. But if I change the name of the table:
MyCopyFile(CurPath + '\orders.dbf', NewPath + '\ordly.dbf');
MyCopyFile(CurPath + '\orders.mdx', NewPath + '\ordly.mdx');
When I try to open ordly.dbf I get an error message:
Corrupt table/index header.
File: C:\DATA\2011\ORDLY.MDX
The problem is that the MDX format stores the name of the associated data file (the table name) inside the index file itself. Because of that, when you rename an MDX file, the index still points to the old data file name.
See the structure of the MDX file below:
The Structure of Multiple Index files (*.mdx)
0 | Version number *1| ^
|-----------------------| |
1 | Date of creation | |
2 | YYMMDD | |
3 | | |
|-----------------------| |
4 | Data file name | File
5 | (no extension) | Header
: : |
: : |
19 | | |
|-----------------------| |
20 | Block size | |
| | |
|-----------------------| |
22 | Block size adder N | |
| | |
|-----------------------| |
24 | Production index flag | |
|-----------------------| |
25 | No. of entries in tag | | *2
|-----------------------| |
26 | Length of tag | | *3
|-----------------------| |
27 | (Reserved) | |
|-----------------------| |
28 | No.of tags in use | |
| | |
|-----------------------| |
30 | (Reserved) | |
| | |
|-----------------------| |
32 | No.of pages in tagfile| |
| | |
| | |
35 | | |
|-----------------------| |
36 | Pointer to first free | |
| page | |
| | |
39 | | |
|-----------------------| |
40 | No.of block available | |
| | |
| | |
43 | | |
|-----------------------| |
44 | Date of last update | |
| YYMMDD | |
46 | | |
|-----------------------| |
47 | (Reserved) | |
|-----------------------| |
48 | (Garbage) | |
: : |
: : |
| | | ___|=======================|
543| | _V___ / 0 | Tag header page no. |
|-----------------------| | / | |
544| Tag table entries | Tag / | |
| | Table | 3 | |
:.......................: | | |-----------------------| Tag
: : | | 4 | Tag name | table
:.......................: | | : :
: : | / : :
: : | / | |
:.......................:__|_/ 14 | |
: : | |-----------------------|
: : | 15 | Key format *4 |
: : | |-----------------------|
:.......................:__|_ 16 | Forward tag thread (<)|
: : | \ |-----------------------|
: : | \ 17 | Forward tag thread (>)|
: : | \ |-----------------------|
: : | | 18 | Backward tag thread *5|
| | | | |-----------------------|
| | | | 19 | (Reserved) |
M*N| |__V__ | |-----------------------|
|=======================| ^ | 20 | Key type *6 |
0| Pointer to root page | | | |-----------------------|
| | | | 21 | (Reserved) |
| | | | : :
3| | | | : :
|-----------------------| | | 31 | |
4| File size in pages | Tag | |-----------------------|
| | header| 32 | (Garbage) |
| | | | : :
7| | | | | |
|-----------------------| | \ N | |
8| Key format *7 | | \____|=======================|
|-----------------------| |
9| Key type *8 | |
|-----------------------| |
10| (Reserved) | |
| | |
|-----------------------| |
12| Index key length *9 | |
| | |
|-----------------------| |
14| Max.no.of keys/page | |
| | |
|-----------------------| |
16| Secondary key type *10| |
| | |
|-----------------------| |
18| Index key item length | |
| | |
|-----------------------| |
20| (Reserved) | |
| | |
| | |
|-----------------------| |
23| Unique flag | |
|-----------------------| |
| | |
: : |
: :__V__
N*M|=======================|
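Given that the header stores the data file name at bytes 4-19 (per the layout above), one possible workaround after copying is to patch that field so it matches the new table name. The following is an untested sketch (PatchMdxTableName is a hypothetical helper); regenerating the index from the renamed DBF with your dBASE tooling is the safer route:
// Hypothetical helper: overwrite the "Data file name (no extension)" field
// (bytes 4..19) of a copied MDX header with the new table name.
// Requires Classes and SysUtils in the uses clause. Untested sketch.
procedure PatchMdxTableName(const MdxFile, NewTableName: string);
var
  FS: TFileStream;
  Buf: array[0..15] of AnsiChar;   // 16 bytes: header offsets 4..19
  S: AnsiString;
  I: Integer;
begin
  FillChar(Buf, SizeOf(Buf), 0);
  S := AnsiString(UpperCase(NewTableName));   // table name, no extension
  for I := 1 to Length(S) do
    if I <= SizeOf(Buf) then
      Buf[I - 1] := S[I];
  FS := TFileStream.Create(MdxFile, fmOpenReadWrite);
  try
    FS.Position := 4;               // start of the data file name field
    FS.WriteBuffer(Buf, SizeOf(Buf));
  finally
    FS.Free;
  end;
end;
For example, it could be called right after the copy, e.g. PatchMdxTableName(NewPath + '\ordly.mdx', 'ordly').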
