OutOfMemoryError when parsing a big SQL query - jsqlparser

While parsing a big SQL query, I am getting a JVM OutOfMemoryError. The error is raised during SQL query translation using JSqlParser.
The lines below are copied from the thread's error stack:
Thread 0x648608688
at java.lang.OutOfMemoryError.<init>()V (Unknown Source)
at java.util.Arrays.copyOf([Ljava/lang/Object;I)[Ljava/lang/Object; (Unknown Source)
at java.util.ArrayList.ensureCapacity(I)V (Unknown Source)
at java.util.ArrayList.addAll(Ljava/util/Collection;)Z (Unknown Source)
One way to get around this heap OutOfMemory problem is to increase the configured heap size limit.
Are there any other ways or best practices to reduce heap memory usage during SQL query translation with JSqlParser?
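For reference, here is a minimal sketch of the heap-size workaround mentioned above, combined with parsing through JSqlParser's standard CCJSqlParserUtil entry point. The class and method names are the real JSqlParser API, but the surrounding program structure and the -Xmx value are only illustrative:

// Illustrative sketch: run the translation step with a larger heap, e.g.
//   java -Xmx2g -cp <your-classpath> ParseBigQuery
// and drop references to each parsed statement as soon as its translation
// is done, so the (potentially large) parse tree can be garbage-collected.
import net.sf.jsqlparser.JSQLParserException;
import net.sf.jsqlparser.parser.CCJSqlParserUtil;
import net.sf.jsqlparser.statement.Statement;

public class ParseBigQuery {
    public static void main(String[] args) throws JSQLParserException {
        String sql = args[0];                         // the big SQL query text
        Statement stmt = CCJSqlParserUtil.parse(sql); // standard JSqlParser entry point
        System.out.println(stmt.getClass().getSimpleName());
        // stmt goes out of scope here; holding no references to old parse trees
        // is the main in-code lever besides raising -Xmx.
    }
}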

Related

Out of memory in neo4j using periodic commits

I'm trying to load a pretty large (~200 million rows) file into neo4j using LOAD CSV, like this:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM
'file:///home/manu/citation.csv.gz' AS line
MATCH (origin:`publication` {`id`: line.`cite_from`})
MATCH (destination:`publication` {`id`: line.`cite_to`})
MERGE (origin)-[rel:CITES]->(destination);
but I keep seeing memory errors such as
raise CypherError.hydrate(**metadata)
neo4j.exceptions.TransientError: There is not enough memory to perform
the current task. Please try increasing 'dbms.memory.heap.max_size' in
the neo4j configuration (normally in 'conf/neo4j.conf' or, if you
are using Neo4j Desktop, found through the user interface) or if you
are running an embedded installation increase the heap by using '-Xmx'
command line flag, and then restart the database.
when running the code, and on the server:
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "neo4j.StorageMaintenance-14"
2018-12-05 15:44:32.967+0000 WARN Java heap space
java.lang.OutOfMemoryError: Java heap space
2018-12-05 15:44:32.968+0000 WARN Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$2@b6328a3 in QueuedThreadPool[qtp483052300]@1ccacb0c{STARTED,8<=8<=14,i=1,q=0}[ReservedThreadExecutor@f5cbd17{s=0/1,p=0}]
Exception in thread "neo4j.ServerTransactionTimeout-6" Exception in thread "neo4j.TransactionTimeoutMonitor-11" java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap
Of course I tried setting this dbms.memory.heap.max_size thing (up to 24 GB; above that, my 32-GB machine will not even be able to start neo4j), but I am still getting those errors. The thing I don't quite get is: what's the purpose of the USING PERIODIC COMMIT part if (it seems) neo4j tries to load everything at once? Looking at the manual or, e.g., this thread, you would think USING PERIODIC COMMIT is a fix for exactly the problem I'm having.
Any clue? The only workaround that comes to mind is splitting the file into several pieces, but that doesn't look like an elegant solution (also, if that works... couldn't neo4j do that for me transparently?)
EDIT: added the query plan from EXPLAIN (screenshot omitted here).
Cheers.
Probably more of a workaround than a "solution", but putting a UNIQUE constraint on the property that is checked extensively by that Cypher query did the trick for me (the constraint is backed by an index, so the two MATCH lookups no longer have to scan every publication node):
CREATE CONSTRAINT ON (p:publication) ASSERT p.id IS UNIQUE
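If the load is driven from JVM code instead of the Python driver shown in the question, the same two statements (the constraint above and the periodic-commit load from the question) can be sent with the official neo4j-java-driver. This is a hedged sketch only: it assumes the 4.x driver's package layout, and the bolt URI and credentials are placeholders:

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;

public class CitationLoad {
    public static void main(String[] args) {
        // Placeholder connection details.
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));
             Session session = driver.session()) {
            // The constraint from the answer above; it also backs the MATCH lookups with an index.
            session.run("CREATE CONSTRAINT ON (p:publication) ASSERT p.id IS UNIQUE");
            // The load from the question, run as an auto-commit query so that
            // USING PERIODIC COMMIT is allowed.
            session.run("USING PERIODIC COMMIT "
                    + "LOAD CSV WITH HEADERS FROM 'file:///home/manu/citation.csv.gz' AS line "
                    + "MATCH (origin:publication {id: line.cite_from}) "
                    + "MATCH (destination:publication {id: line.cite_to}) "
                    + "MERGE (origin)-[rel:CITES]->(destination)");
        }
    }
}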

propel orm - migration

I have this problem and I couldn't find very good information; of the information I did find, none was useful. Here it goes: I started a project with Propel, created a first database with a basic table in it, ran "php propel init", and everything worked fine. Then I needed another table, so I created it in the schema.xml, but when I run any of the migration tools I get this error:
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32768 bytes) in D:\Desenvolvimento\workspace\Login\vendor\propel\propel\bin\propel.php on line 1
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32768 bytes) in D:\Desenvolvimento\workspace\Login\vendor\propel\propel\bin\propel.php on line 1
It has been very frustrating; I feel the internet lacks good Propel information.
The problem was that I shouldn't edit the database manually. Once it's created, the best thing to do is make changes through the XML.

neo4j Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

I loaded a large graph of approximately 600 million nodes and around 1 billion relationships using batch insert. I am now trying to run a query on a very small subset of the graph and I am getting a Java heap space error. I know that I can set the Java heap size in neo4j-wrapper.conf, but I am still getting the error. Is there any other place where I might have to set the max heap size? Also, I am not sure why it is running out of memory. My query is:
MATCH (start:Label1)-[r]->(end:Label2) WHERE start.name='Name1' RETURN end.Name2
I know that the result set has fewer than 1000 nodes, and I am limiting the search space (or maybe I am not)?
Try this:
MATCH (start:Label1)
WHERE start.name = 'Name1'
WITH start
MATCH (start)-[r]->(end:Label2)
RETURN end.Name2
Also add the type of the relationship, if there is one (for example -[r:TYPE]->, where TYPE is your actual relationship type).
If this doesn't help, you can also try changing the memory-mapped cache settings:
http://neo4j.com/docs/stable/configuration-io-examples.html

Out of Memory error in opencv

I am trying to build a training data set from the frames of some videos.
For every new frame I compute a feature vector (of size 3300x1) and concatenate it with the previous feature vectors to build the training data set. But after reading about 2000 frames I get the error specified below.
The error occurs in the second line of the following code:
cv::Mat frameFV = getFeatureVectorFromGivenImage(curFrame, width, height);
cv::hconcat(trainingDataPerEmotion, frameFV, trainingDataPerEmotion);
At the time of the error, the size of the cv::Mat trainingDataPerEmotion is roughly 3300x2000.
I am releasing the old video with
cvReleaseCapture(&capture);
before going on to process the new video. The error is:
OpenCV Error: Insufficient memory (Failed to allocate 3686404 bytes) in OutOfMemoryError, file /home/naresh/OpenCV-2.4.0/modules/core/src/alloc.cpp, line 52
terminate called after throwing an instance of 'cv::Exception'
what(): /home/mario/OpenCV-2.4.0/modules/core/src/alloc.cpp:52: error: (-4) Failed to allocate 3686404 bytes in function OutOfMemoryError
Can anyone suggest how I can overcome this problem? I need to keep the large training data set so that I can train my system.
Thank you.
First check whether you have any memory leaks.
As far as I remember, OpenCV's OutOfMemory error is actually thrown when an allocation fails.
If you still cannot find a memory leak or pin down the cause, you will have to provide more code. Ideally, post code that allows your error to be reproduced.

Mahout runs out of heap space

I am running NaiveBayes on a set of tweets using Mahout. There are two files, one 100 MB and one 300 MB. I changed JAVA_HEAP_MAX to JAVA_HEAP_MAX=-Xmx2000m (earlier it was 1000). But even then, Mahout ran for a few hours (two, to be precise) before it complained of a heap space error. What should I do to resolve this?
Some more info in case it helps: I am running on a single node, in fact my laptop, and it has only 3 GB of RAM.
Thanks.
EDIT: I ran it a third time with less than half of the data that I used the first time (the first time I used 5.5 million tweets, the second time 2 million) and I still got a heap space problem. I am posting the complete error for completeness:
17 May, 2011 2:16:22 PM
org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: map 50% reduce 0%
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:62)
at java.lang.StringBuilder.<init>(StringBuilder.java:85)
at org.apache.hadoop.mapred.JobClient.monitorAndPrintJob(JobClient.java:1283)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1251)
at org.apache.mahout.classifier.bayes.mapreduce.common.BayesFeatureDriver.runJob(BayesFeatureDriver.java:63)
at org.apache.mahout.classifier.bayes.mapreduce.bayes.BayesDriver.runJob(BayesDriver.java:44)
at org.apache.mahout.classifier.bayes.TrainClassifier.trainNaiveBayes(TrainClassifier.java:54)
at org.apache.mahout.classifier.bayes.TrainClassifier.main(TrainClassifier.java:162)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:184)
17 May, 2011 7:14:53 PM org.apache.hadoop.mapred.LocalJobRunner$Job run
WARNING: job_local_0001
java.lang.OutOfMemoryError: Java heap space
at java.lang.String.substring(String.java:1951)
at java.lang.String.subSequence(String.java:1984)
at java.util.regex.Pattern.split(Pattern.java:1019)
at java.util.regex.Pattern.split(Pattern.java:1076)
at org.apache.mahout.classifier.bayes.mapreduce.common.BayesFeatureMapper.map(BayesFeatureMapper.java:78)
at org.apache.mahout.classifier.bayes.mapreduce.common.BayesFeatureMapper.map(BayesFeatureMapper.java:46)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
And I am posting the part of the bin/mahout script that I changed:
Original:
JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx1000m
if [ "$MAHOUT_HEAPSIZE" != "" ]; then
#echo "run with heapsize $MAHOUT_HEAPSIZE"
JAVA_HEAP_MAX="-Xmx""$MAHOUT_HEAPSIZE""m"
#echo $JAVA_HEAP_MAX
fi
Modified:
JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx2000m
if [ "$MAHOUT_HEAPSIZE" != "" ]; then
#echo "run with heapsize $MAHOUT_HEAPSIZE"
JAVA_HEAP_MAX="-Xmx""$MAHOUT_HEAPSIZE""m"
#echo $JAVA_HEAP_MAX
fi
You're not specifying which process ran out of memory, which is important. You need to set MAHOUT_HEAPSIZE (for example, export MAHOUT_HEAPSIZE=2000 before invoking bin/mahout, which the script above turns into -Xmx2000m), not whatever JAVA_HEAP_MAX is.
Did you modify the heap size for the Hadoop environment or the Mahout one? See if this query on the Mahout list helps. From personal experience, I can suggest that you reduce the size of the data you are trying to process. Whenever I tried to execute the Bayes classifier on my laptop, the heap space would get exhausted after running for a few hours.
I'd suggest that you run this on EC2. I think the basic S3/EC2 option is free to use.
When you start the Mahout process, you can run "jps"; it will show all the Java processes running on your machine under your user id, along with their process ids. Find the relevant process and run "jmap -heap <process-id>" to see your heap space utilization.
With this approach you can estimate at which part of your processing the memory is exhausted and where you need to increase it.
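If you would rather check heap usage from inside the JVM than with jps/jmap, the standard library can report the same numbers. This is only a complementary illustration, not part of the original answer; HeapUsage and its log method are made-up names for the sketch:

// Minimal sketch: a helper you could call from your own driver code to log
// heap usage at interesting points (complements, not replaces, jps/jmap).
public final class HeapUsage {
    private HeapUsage() {}

    public static void log(String where) {
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        System.out.println(where + ": heap used " + usedMb + " MB of max " + maxMb + " MB");
    }
}

For example, HeapUsage.log("after feature extraction") prints the used and maximum heap sizes in megabytes at that point.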

Resources