MemSQL: Leaf error: timed out from socket after -1 seconds - timeout

We are currently running a 4-node cluster (3 machines with 8 GB RAM and one with 4 GB), and are trying to run a left outer join of a table (about 7.5 GB) on itself. A few of the fields of the table have been stored as columnstore.
The structure of the query is somewhat like:
SELECT a.x, a.y FROM tableX a LEFT OUTER JOIN tableX b ON a.z = b.z GROUP BY a.x;
Any ideas?

We've seen this error reproduce when the machine is under a significant amount of load (the load triggers an internal TCP timeout rather than one of our own timeouts, hence the -1 in the message).
Were the machines under heavy load while you ran the query? E.g. were you running several other queries concurrently, or something else on the machines?
We are actively working on resolving this in a general way (it is relatively hard to reproduce), and we will keep this thread up-to-date with our progress.
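If it happens again, it may help to capture what else the cluster was doing at the time; for example (SHOW PROCESSLIST is standard MySQL-protocol syntax that MemSQL supports; the exact output columns vary by version):
-- On the aggregator: list the queries running concurrently on the cluster
SHOW PROCESSLIST;
-- OS-level load on each host can be checked with e.g. uptime or top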

Related

ArangoDB Performance

I am exploring the use of ArangoDB as a graph engine for a project I am working on that needs shortest-path analysis.
My collections look like this:
a route network of ~3.5M edges in an edge collection (_to/_from)
a vertex collection of ~2.7M vertices (geo index on [lat, lng]).
a trips collection with start/end locations (not mapped to nodes).
The first task is to snap the origin and destination coordinates of the trips to vertices on the network. I am using the following query to do that:
FOR t IN trips
  LET snappedFrom = (
    FOR x IN nodes
      SORT GEO_DISTANCE([t.Orig_Long, t.Orig_Lat], [x.lng, x.lat]) ASC
      LIMIT 1
      RETURN x._id
  )[0]
  LET snappedTo = (
    FOR x IN nodes
      SORT GEO_DISTANCE([t.Dest_Long, t.Dest_Lat], [x.lng, x.lat]) ASC
      LIMIT 1
      RETURN x._id
  )[0]
  UPDATE t._key WITH { snappedFrom, snappedTo } IN trips
This is taking around 3.5 hours, and I want to reduce that significantly if possible.
I am running on an AWS instance with 32 GB of RAM and 8 cores. I notice that when running this query it only uses a single core, which is killing me.
I am curious about setting up ArangoDB for pure performance. My use case is really using the DB as a calculator; in fact it is likely to become part of a CI/CD workflow when done. I don't need any safeguards: there won't be any parallel user requests, and if the data is bad I just blow it away and start again.
I am using a standard install with Docker:
docker run -it --name=adb --rm -p 8528:8528 -v arangodb:/data -d -v /var/run/docker.sock:/var/run/docker.sock arangodb/arangodb-starter --starter.address=<$IP> --starter.mode=single
I am going to run into the same issue when I run shortest_path on all trips too; that will take forever on a single core.
Any help with the config, better query, or even better AWS setups would be truly appreciated.
Add geo-spatial indexes on the Orig and Dest fields; that will enable the server to optimize and speed up the sub-queries.
To speed up processing further, run the main query in batches, as sketched below: processing many smaller batches is faster than running over all documents at once.
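A minimal sketch of both suggestions, using the field names from the query above (the exact index calls, the batch size and the @offset bind parameter are illustrative, not from the original answer):
// arangosh: geo indexes on the trip coordinates (first field = latitude, second = longitude)
db.trips.ensureIndex({ type: "geo", fields: [ "Orig_Lat", "Orig_Long" ] });
db.trips.ensureIndex({ type: "geo", fields: [ "Dest_Lat", "Dest_Long" ] });

// AQL: run the main query in slices instead of over all documents at once
// (repeat with @offset = 0, 100000, 200000, ... until no documents are left)
FOR t IN trips
  LIMIT @offset, 100000
  LET snappedFrom = (
    FOR x IN nodes
      SORT GEO_DISTANCE([t.Orig_Long, t.Orig_Lat], [x.lng, x.lat]) ASC
      LIMIT 1
      RETURN x._id
  )[0]
  LET snappedTo = (
    FOR x IN nodes
      SORT GEO_DISTANCE([t.Dest_Long, t.Dest_Lat], [x.lng, x.lat]) ASC
      LIMIT 1
      RETURN x._id
  )[0]
  UPDATE t._key WITH { snappedFrom, snappedTo } IN trips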

Spark JobServer, memory settings for release

I've set up a spark-jobserver to enable complex queries on a reduced dataset.
The jobserver executes two operations:
Sync with the main remote database: it dumps some of the server's tables, reduces and aggregates the data, saves the result as a Parquet file and caches it as a SQL table in memory. This operation will be done every day;
Queries: when the sync operation is finished, users can run complex SQL queries on the aggregated dataset, (eventually) exporting the result as a CSV file. Each user can run only one query at a time and must wait for its completion.
The biggest table (before and after the reduction, which also includes some joins) has almost 30M rows, with at least 30 fields.
Currently I'm working on a dev machine with 32 GB of RAM dedicated to the job server, and everything runs smoothly. The problem is that on the production machine we have the same amount of RAM, shared with a PredictionIO server.
I'm asking how to determine the memory configuration to avoid memory leaks or Spark crashes.
I'm new to this, so any reference or suggestion is welcome.
Thank you
Take an example: if you have a server with 32 GB of RAM, set the following parameter:
spark.executor.memory = 32g
Take note (the following quotes an example from the Cloudera blog linked below, which assumes a cluster of six nodes, each with 16 cores and 64 GB of RAM):
The likely first impulse would be to use --num-executors 6 --executor-cores 15 --executor-memory 63G. However, this is the wrong approach, because: 63 GB plus the executor memory overhead won't fit within the 63 GB capacity of the NodeManagers; the application master will take up a core on one of the nodes, meaning that there won't be room for a 15-core executor on that node; and 15 cores per executor can lead to bad HDFS I/O throughput.
A better option would be to use --num-executors 17 --executor-cores 5 --executor-memory 19G. Why? This config results in three executors on all nodes except for the one with the AM, which will have two executors. --executor-memory was derived as 63 GB / 3 executors per node = 21 GB; 21 * 0.07 = 1.47; 21 - 1.47 ~ 19.
This is explained here if you want to know more:
http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/
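For reference, a minimal sketch of how those recommended values would be expressed as Spark properties (spark-defaults.conf is just one place to put them; a spark-jobserver deployment would set the equivalent values in its own context configuration, whose exact keys depend on the jobserver version, and the numbers are the ones from the quoted example, not a recommendation for the 32 GB machine):
# spark-defaults.conf (or equivalent context properties in the job server config)
spark.executor.instances   17
spark.executor.cores       5
spark.executor.memory      19g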

Disconnected from Neo4j. Please check if the cord is unplugged

I am running simple queries on Neo4j 2.1.7.
I am trying to execute this query:
MATCH (a:Caller)-[:MADE_CALL]-(c:Call)-[:RECEIVED_CALL]-(b:Receiver) CREATE(a)-[:CALLED]->(b) RETURN a,b
While the query is executing, I get the following error:
Disconnected from Neo4j. Please check if the cord is unplugged.
Then another error:
GC overhead limit exceeded
I'm working on Windows Server 2012 with 16 GB of RAM, and here is my neo4j.properties file:
neostore.nodestore.db.mapped_memory=1800M
neostore.relationshipstore.db.mapped_memory=1G
#neostore.relationshipgroupstore.db.mapped_memory=10M
neostore.propertystore.db.mapped_memory=500M
neostore.propertystore.db.strings.mapped_memory=250M
neostore.propertystore.db.arrays.mapped_memory=10M
cache_type=weak
keep_logical_logs=100M size
and my neo4j-community.vmoptions file:
-Xmx8192
-Xms4098
-Xmn1G
-include-options ${APPDATA}\Neo4j Community\neo4j-community.vmoptions
I have 6 128 644 Nodes, 6 506 355 Relationships and 10 488 435 properties
Any solution?
TL;DR: Neo4j disconnected because your query is too inefficient. The solution is to improve the query.
Your Neo4j instance appears to have timed out and undergone a GC dump due to the computational intensity of your query. When you initialize the Neo4j database from the shell, you have the option of configuring certain JVM variables, which include the amount of memory and heap size available to Neo4j. Should a query exceed these limits, Neo4j terminates the query, undergoes a GC dump, and disconnects.
Looking at the information you gave on the database, there are 6M nodes and 6M relationships. Considering that your query essentially looks for all paths from Callers to Receivers across those 6M nodes and then tries to perform bulk write operations, it's not surprising that Neo4j crashes/disconnects. I would suggest finding a way to limit the query (even with a simple LIMIT keyword) and running multiple smaller queries to get the job done.
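A minimal sketch of that batching idea (the WHERE NOT filter and the batch size of 50000 are illustrative additions); re-run the statement until it returns 0:
MATCH (a:Caller)-[:MADE_CALL]-(c:Call)-[:RECEIVED_CALL]-(b:Receiver)
WHERE NOT (a)-[:CALLED]->(b)
WITH DISTINCT a, b
LIMIT 50000
CREATE (a)-[:CALLED]->(b)
RETURN count(*);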

All queries are slow with neo4j

I have written a variety of queries using cypher that take no less than 200ms per query. They're very straightforward, so I'm having trouble identifying where the bottleneck is.
Simple Match with Parameters, 2200ms:
Simple Distinct Match with Parameters, 200ms:
Pathing, 2500ms:
At first I thought the issue was a lack of resources, because I was running Neo4j and my application on the same box. Although the performance monitor indicated that CPU and memory were largely free and available, I moved the Neo4j server to another local box and observed similar latency. Both servers are workstations with fairly new Xeon processors, 12 GB of memory and SSDs for data storage. All of the above leads me to believe that the latency isn't due to my hardware. The OS is Windows 7.
The graph has less than 200 nodes and less than 200 relationships.
I've attached some queries that I send to neo4j along with the configuration for the server, database, and JVM. No plugins or extensions are loaded.
Pastebin Links:
Database Configuration
Server Configuration
JVM Configuration
[Expanding a bit on a comment I made earlier.]
@TFerrell: Your comments state that "all nodes have labels" and that you tried applying indexes. However, it is not clear whether you actually specified the labels in your slow Cypher queries. I noticed from your original question that neither of your slower queries actually specified a node label (which presumably should have been "Project").
If your Cypher query does not specify the label for a node, then the DB engine has to test every node, and it also cannot apply an index.
So, please try specifying the correct node label(s) in your slow queries.
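For illustration, assuming the label is Project and the lookup property is Id (names taken from this discussion, so treat them as placeholders), the difference looks like this:
// No label: the engine has to test every node and cannot use an index
MATCH (p {Id: {Id}}) RETURN p;

// With a label, and an index on :Project(Id), the lookup can use the index
CREATE INDEX ON :Project(Id);
MATCH (p:Project {Id: {Id}}) RETURN p;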
Is that the first run or a subsequent run of these queries?
You probably don't have a label on your nodes, and no index or unique constraint either.
So Neo4j has to scan the whole store for your node, pulling everything into memory, loading the properties and checking them.
try this:
run until count returns 0:
match (n) where not n:Entity set n:Entity return count(*);
add the constraint
create constraint on (e:Entity) assert e.Id is unique;
run your query again:
match (n:Element {Id:{Id}}) return n
etc.
It seems there is something wrong with the automatic memory mapping calculation when you are on Windows (memory mapping on heap).
I just looked at your messages.log and added up some numbers; it seems the memory-mapped I/O (mmio) alone is enough to fill your Java heap space (old gen), leaving no room for the database, caches, etc.
Please try to amend that by fixing the mmio config in your conf/neo4j.properties to more sensible values than the auto-calculation.
For your small store, just uncommenting the values starting with #neostore. (i.e. removing the #) should work fine.
Otherwise, something like this (fitting for a 3GB heap) for a larger graph (2M nodes, 10M rels, 20M props, 10M long strings):
neostore.nodestore.db.mapped_memory=25M
neostore.relationshipstore.db.mapped_memory=250M
neostore.propertystore.db.mapped_memory=250M
neostore.propertystore.db.strings.mapped_memory=250M
neostore.propertystore.db.arrays.mapped_memory=0M
Here are the added numbers:
auto mmio: 134217728 + 134217728 + 536870912 + 536870912 + 1073741824 = 2.3GB
stores sizes: 1073920 + 1073664 + 3221698 + 3221460 + 1073786 = 9MB
JVM max: 3.11 GB, RAM: 13.98 GB, SWAP: 27.97 GB
max heaps: Eden: 1.16 GB, old gen: 2.33 GB
taken from:
[neostore.propertystore.db.strings] brickCount=8 brickSize=134144b mappedMem=134217728b (storeSize=1073920b)
[neostore.propertystore.db.arrays] brickCount=8 brickSize=134144b mappedMem=134217728b (storeSize=1073664b)
[neostore.propertystore.db] brickCount=6 brickSize=536854b mappedMem=536870912b (storeSize=3221698b)
[neostore.relationshipstore.db] brickCount=6 brickSize=536844b mappedMem=536870912b (storeSize=3221460b)
[neostore.nodestore.db] brickCount=1 brickSize=1073730b mappedMem=1073741824b (storeSize=1073786b)

Efficient creation of Neo4j relationship indexes

Can you please explain the best way to add relationship indexes to a Neo4j database created using the BatchInserter?
Our database contains about 30 million nodes and about 300 million relationships. If we build this without any indexes then it takes about 10 hours (just calls to BatchInserter.createNode and BatchInserter.createRelationship).
However, if we also try to create relationship indexes using LuceneBatchInserterIndexProvider, with repeated calls to index.add, then the process takes 12 hours to add everything but then gets stuck in indexProvider.shutdown and doesn't complete. The longest I have left it is 3 days. Can you please explain what it is doing at this point? I expected the work to be done during the calls to index.add. What is going on during shutdown that takes so long?
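For context, a minimal sketch of the flow described above, using the Neo4j 2.0 batch-insert API (the relationship type, property maps and index name are illustrative placeholders, not the actual code):
import java.util.Map;

import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.index.lucene.unsafe.batchinsert.LuceneBatchInserterIndexProvider;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserterIndex;
import org.neo4j.unsafe.batchinsert.BatchInserterIndexProvider;
import org.neo4j.unsafe.batchinsert.BatchInserters;

public class BatchBuildSketch {
    public static void main(String[] args) {
        BatchInserter inserter = BatchInserters.inserter("target/graph.db");
        BatchInserterIndexProvider indexProvider =
                new LuceneBatchInserterIndexProvider(inserter);
        BatchInserterIndex relIndex =
                indexProvider.relationshipIndex("rels", MapUtil.stringMap("type", "exact"));

        // create nodes and relationships, indexing each relationship as it is created
        long a = inserter.createNode(MapUtil.map("name", "a"));
        long b = inserter.createNode(MapUtil.map("name", "b"));
        Map<String, Object> relProps = MapUtil.map("weight", 42);
        long rel = inserter.createRelationship(
                a, b, DynamicRelationshipType.withName("CONNECTED"), relProps);
        relIndex.add(rel, relProps);

        // the Lucene index is flushed and committed on shutdown, which appears to be
        // where the long wait described above is spent
        indexProvider.shutdown();
        inserter.shutdown();
    }
}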
Our PC has 64GB RAM and we have allocated 40GB to the JVM. During this shutdown step, Windows reports that 99% of the memory is in use (far more than allocated to the JVM) and the computer becomes almost unusable.
The configuration settings I am using are:
neostore.nodestore.db.mapped_memory = 1G
neostore.propertystore.db.mapped_memory = 1G
neostore.propertystore.db.index.mapped_memory = 1M
neostore.propertystore.db.index.keys.mapped_memory = 1M
neostore.propertystore.db.strings.mapped_memory = 1G
neostore.propertystore.db.arrays.mapped_memory = 1M
neostore.relationshipstore.db.mapped_memory = 10G
We've tried changing some of these but it didn't appear to make any difference.
We have also tried adding the relationship indexes as a separate step, after first building the database without any indexes. In this case we used GraphDatabaseFactory.newEmbeddedDatabaseBuilder and GraphDatabaseService.index().forRelationships. Doing it this way seems to work, although it was estimated that it would take around 6 days to complete. We have tried invoking commit at various intervals, which makes some difference but not a significant one. Most of the time seems to be spent just iterating over the relationships.
The only thing I can think of that may be abnormal about our data is that the relationships have about 20 properties on them. But even creating an index on just 1 of these properties doesn't work.
The file sizes without any indexes are:
neostore.nodestore.db 400MB
neostore.propertystore.db 100GB
neostore.propertystore.db.strings 2GB
neostore.relationshipstore.db 10GB
Can you please give us some advice on how to get this working either during the BatchInserter process or as a separate step?
We are using version 2.0.1 of the Neo4j jars.
Thanks, Damon
