Issues with profiling a Java application using JProbe - memory

I'm currently doing dynamic memory analysis for our Eclipse-based application using JProbe. After starting the Eclipse application and JProbe, when I try to profile the Eclipse application, it closes abruptly with a fatal error, and a fatal error log file is generated. In the log file I can see that the PermGen space seems to be full. Below is a sample heap summary from the log file:
Heap
def new generation total 960K, used 8K [0x07b20000, 0x07c20000, 0x08000000)
eden space 896K, 0% used [0x07b20000, 0x07b22328, 0x07c00000)
from space 64K, 0% used [0x07c00000, 0x07c00000, 0x07c10000)
to space 64K, 0% used [0x07c10000, 0x07c10000, 0x07c20000)
tenured generation total 9324K, used 5606K [0x08000000, 0x0891b000, 0x0bb20000)
the space 9324K, 60% used [0x08000000, 0x08579918, 0x08579a00, 0x0891b000)
compacting perm gen total 31744K, used 31723K [0x0bb20000, 0x0da20000, 0x2bb20000)
the space 31744K, 99% used [0x0bb20000, 0x0da1af00, 0x0da1b000, 0x0da20000)
ro space 8192K, 66% used [0x2bb20000, 0x2c069920, 0x2c069a00, 0x2c320000)
rw space 12288K, 52% used [0x2c320000, 0x2c966130, 0x2c966200, 0x2cf20000)
I tried to increase the PermGen space using the option -XX:MaxPermSize=512m, but that doesn't seem to work. I would like to know how to increase the PermGen size from the command prompt. Do I have to go to the Java installation on my computer and apply the option there, or should I increase the PermGen space specifically for the Eclipse application or for JProbe? Please advise.
Any help on this is much appreciated.
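For reference, with an Eclipse-based application the option generally has to be passed to the JVM that launches Eclipse itself rather than run against the Java installation. A minimal sketch, assuming a standard eclipse.ini next to the launcher (everything after -vmargs is handed to the JVM):
-vmargs
-XX:MaxPermSize=512m
The same works on the launcher command line (eclipse.exe -vmargs -XX:MaxPermSize=512m), and if JProbe starts the JVM for you, the flag has to be added to the JVM options JProbe uses when launching the application.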

Related

Jenkins web UI is totally unresponsive

My Jenkins instance has been running for over two years without issue, but yesterday it quit responding to HTTP requests. No errors, just spinning clocks.
I've restarted the service, then restarted the entire server.
There's been a lot of mention of a thread dump. I attempted to get one, but I'm not sure that this is it:
Heap
PSYoungGen total 663552K, used 244203K [0x00000000d6700000, 0x0000000100000000, 0x0000000100000000)
eden space 646144K, 36% used [0x00000000d6700000,0x00000000e4df5f70,0x00000000fde00000)
from space 17408K, 44% used [0x00000000fef00000,0x00000000ff685060,0x0000000100000000)
to space 17408K, 0% used [0x00000000fde00000,0x00000000fde00000,0x00000000fef00000)
ParOldGen total 194048K, used 85627K [0x0000000083400000, 0x000000008f180000, 0x00000000d6700000)
object space 194048K, 44% used [0x0000000083400000,0x000000008879ee10,0x000000008f180000)
Metaspace used 96605K, capacity 104986K, committed 105108K, reserved 1138688K
class space used 12782K, capacity 14961K, committed 14996K, reserved 1048576K
Ubuntu 16.04.5 LTS
I prefer looking in the Jenkins log file. There you can see the errors and then fix them.
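For example (a sketch - the paths assume the standard Debian/Ubuntu package, that Jenkins runs as the jenkins user, and that a JDK is installed for jstack):
# recent errors and stack traces end up here
sudo tail -n 200 /var/log/jenkins/jenkins.log
# an actual thread dump (as opposed to the heap summary above)
sudo -u jenkins jstack $(pgrep -f jenkins.war) > /tmp/jenkins-threads.txt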

java.lang.OutOfMemoryError: Requested array size exceeds VM limit

I'm running Neo4j 2.2.1 with 150G of heap space on a box with 240G. I set neo4j.neostore.nodestore.dbms.pagecache.memory to 60G (slightly less than 75% of the remaining system memory, as recommended). However, when I start up I get an error that the system can't start because I'm trying to allocate an array whose size exceeds the maximum allowed size.
Further testing indicates that either node_cache_array_fraction or relationship_cache_array_fraction is causing the problem. Each is supposed to default to 1%, which on a 150G heap should be about 1.5G. However, the array size being generated is too large.
Explicitly setting node_cache_size and relationship_cache_size seems to address this, although it is far from ideal.
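For reference, the workaround amounts to something like the following in conf/neo4j.properties (the sizes are placeholders to tune for your data set, and the explicit object-cache settings only apply when an object cache such as the enterprise hpc cache is in use):
# conf/neo4j.properties
dbms.pagecache.memory=60g
node_cache_size=1500M
relationship_cache_size=1500M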

ARM: Safe physical memory position (to reserve) for my ARM hypervisor in relation to a Linux/Android guest

I am developing a basic hypervisor on ARM (using the board Arndale Exynos 5250).
I want to load Linux (Ubuntu or something else) or Android as the guest. Currently I'm using a Linaro distribution.
I'm almost there; most of the big problems have already been dealt with, except for the last one: reserving memory for my hypervisor such that the kernel does not try to OVERWRITE it BEFORE parsing the FDT or the kernel command line.
The problem is that my Linaro distribution's U-Boot passes an FDT in R2 to the Linux kernel, BUT the kernel tries to overwrite my hypervisor's memory before seeing that I reserved that memory region in the FDT (by decompiling the DTB, modifying the DTS and recompiling it). I've tried to change the kernel command-line parameters, but they are also parsed AFTER the kernel tries to overwrite my reserved portion of memory.
Thus, what I need is a safe location in physical RAM where I can put my hypervisor's code, such that the Linux kernel won't try to access (read/write) it BEFORE parsing the FDT or its kernel command line.
Context details:
The system RAM layout on the Exynos 5250 is: physical RAM starts at 0x4000_0000 (=1GB) and is 0x8000_0000 (=2GB) in length.
The Linux kernel is loaded (by U-Boot) at 0x4000_7000; its size (uncompressed uImage) is less than 5MB and its entry point is set to be at 0x4000_8000.
The uInitrd is loaded at 0x4200_0000 and is less than 2MB in size.
The FDT (board.dtb) is loaded at 0x41f0_0000 (passed in R2) and is less than 35KB in size.
I currently load my hypervisor at 0x40C0_0000 and I want to reserve 200MB (0x0C80_0000) starting from that address (see the DTS sketch after these details), but the kernel tries to write there (a stage 2 HYP trap tells me that) before looking in the FDT or in the command line to see that the region is actually reserved. If instead I load my hypervisor at 0x5000_0000 (without even modifying the original DTB or the command line), the kernel does not try to overwrite it!
The FDT is passed directly, not through ATAGs
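(For concreteness, the reservation added when editing the DTS is roughly the classic /memreserve/ entry near the top of the .dts, using the addresses above:)
/memreserve/ 0x40C00000 0x0C800000;  /* hypervisor: 200MB at 0x40C0_0000 */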
Since the kernel does not try to overwrite my hypervisor at all when I load it at 0x5000_0000, I assume there are memory regions that Linux does not touch before parsing the FDT/command line. I need to know whether this is true or not, and if true, some details regarding these memory regions.
Thanks!
RELATED QUESTION:
Does anyone happen to know what the priority is between the following: ATAGs / kernel command line / FDT? For instance, if I reserve memory through the kernel command line but not in the FDT (.dtb), should it work, or is the command line overridden by the FDT? Is there some kind of concatenation between these three?
As per
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm/Booting, safe locations start 128MB from the start of RAM (assuming the kernel is loaded in that region, which it should be). If a zImage was loaded lower in memory than what is likely to be the end address of the decompressed image, it might relocate itself higher up before it starts decompressing. But in addition to this, the kernel has a .bss region beyond the end of the decompressed image in memory.
(Do also note that your FDT and initrd locations already violate this specification, and that the memory block you want to reserve covers the locations of both of these.)
Effectively, your reserved area should go after the FDT and initrd in memory - which 0x50000000 is. But anything > 0x08000000 from start of RAM should work, portably, so long as that doesn't overwrite the FDT, initrd or U-Boot in memory.
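Concretely, for this board that offset works out to 0x4000_0000 + 0x0800_0000 = 0x4800_0000, so a reserved area starting at or above 0x4800_0000 (and clear of the FDT, initrd and U-Boot) should be safe. Note that 0x40C0_0000 is only 12MB above the start of RAM, well inside that 128MB window, which matches the overwrites you are seeing, while 0x5000_0000 sits comfortably above it.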
The priority of the kernel/FDT/bootloader command line depends on the kernel configuration - do a menuconfig and check under "Boot options". You can combine ATAGs with the built-in command lines, but not the FDT - after all, the FDT chosen node is supposed to be generated by the bootloader. U-Boot's FDT support is OK, so you should let it do this rather than baking it into the .dts if you want an FDT command line.
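For instance, with a bootm flow that already passes a DTB, letting U-Boot fill in /chosen looks roughly like this (addresses taken from the question, the bootargs are placeholders, and it assumes U-Boot was built with libfdt/FDT support):
setenv bootargs 'console=ttySAC2,115200 root=/dev/mmcblk0p2 rw'
bootm 0x40007000 0x42000000 0x41f00000
U-Boot then writes the bootargs string into the /chosen node of the DTB it hands to the kernel.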
The kernel is pretty conservative before it has got its memory map, since it has to blindly trust that the bootloader has laid things out as specified. U-Boot, on the other hand, copies bits of itself all over the place and is certainly the culprit for the top end of RAM - if you #define DEBUG in (I think) common/board_f.c you'll get a dump of what it hits during relocation (not including the Exynos iRAM SPL/boot code stuff, but that won't make a difference here anyway).

Allowed memory size of 16777216 bytes exhausted (tried to allocate 78 bytes) in

I am using phpBB and everything works fine, but I am getting the following error on a single page inside the admin area.
Allowed memory size of 16777216 bytes exhausted (tried to allocate 78 bytes) in home/mytestsite/public_html/includes/template.php on line 458
How to fix this error?
As you can imagine, this error message occurs when PHP tries to use more memory than is available. I'm assuming that changing the code is not an option, but you CAN increase the amount of memory available to PHP.
To change the memory limit for one specific script, include a line such as this at the top of the script:
ini_set("memory_limit","20M");
The 20M (for example) sets the limit to 20 megabytes. If this does not work, keep increasing the memory limit until your script fits or your server squeals for mercy.
You can also make this a permanent change for all PHP scripts running on the server by adding a line such as this to the server’s php.ini file:
memory_limit = 20M
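To check which limit is actually in effect (a quick sketch; if PHP runs as an Apache module, remember to restart Apache after editing php.ini):
php -i | grep memory_limit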
Hope this helps

Mahout runs out of heap space

I am running NaiveBayes on a set of tweets using Mahout: two files, one 100 MB and one 300 MB. I changed JAVA_HEAP_MAX to JAVA_HEAP_MAX=-Xmx2000m (earlier it was 1000). But even then, Mahout ran for a few hours (2, to be precise) before it complained of a heap space error. What should I do to resolve this?
Some more info, if it helps: I am running on a single node, my laptop in fact, and it has only 3GB of RAM.
Thanks.
EDIT: I ran it a third time with less than half of the data that I used the first time (the first time I used 5.5 million tweets, the second 2 million) and I still got a heap space problem. I am posting the complete error for completeness:
17 May, 2011 2:16:22 PM
org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: map 50% reduce 0%
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:62)
at java.lang.StringBuilder.<init>(StringBuilder.java:85)
at org.apache.hadoop.mapred.JobClient.monitorAndPrintJob(JobClient.java:1283)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1251)
at org.apache.mahout.classifier.bayes.mapreduce.common.BayesFeatureDriver.runJob(BayesFeatureDriver.java:63)
at org.apache.mahout.classifier.bayes.mapreduce.bayes.BayesDriver.runJob(BayesDriver.java:44)
at org.apache.mahout.classifier.bayes.TrainClassifier.trainNaiveBayes(TrainClassifier.java:54)
at org.apache.mahout.classifier.bayes.TrainClassifier.main(TrainClassifier.java:162)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:184)
17 May, 2011 7:14:53 PM org.apache.hadoop.mapred.LocalJobRunner$Job run
WARNING: job_local_0001
java.lang.OutOfMemoryError: Java heap space
at java.lang.String.substring(String.java:1951)
at java.lang.String.subSequence(String.java:1984)
at java.util.regex.Pattern.split(Pattern.java:1019)
at java.util.regex.Pattern.split(Pattern.java:1076)
at org.apache.mahout.classifier.bayes.mapreduce.common.BayesFeatureMapper.map(BayesFeatureMapper.java:78)
at org.apache.mahout.classifier.bayes.mapreduce.common.BayesFeatureMapper.map(BayesFeatureMapper.java:46)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
And I am posting the part of the bin/mahout script that I changed:
Original :
JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx1000m
if [ "$MAHOUT_HEAPSIZE" != "" ]; then
#echo "run with heapsize $MAHOUT_HEAPSIZE"
JAVA_HEAP_MAX="-Xmx""$MAHOUT_HEAPSIZE""m"
#echo $JAVA_HEAP_MAX
fi
Modified :
JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx2000m
if [ "$MAHOUT_HEAPSIZE" != "" ]; then
#echo "run with heapsize $MAHOUT_HEAPSIZE"
JAVA_HEAP_MAX="-Xmx""$MAHOUT_HEAPSIZE""m"
#echo $JAVA_HEAP_MAX
fi
You're not specifying what process ran out of memory, which is important. You need to set MAHOUT_HEAPSIZE, not whatever JAVA_HEAP_MAX is.
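For example (a sketch - the actual training command is whatever you were already running):
export MAHOUT_HEAPSIZE=2000
bin/mahout ...   # your trainclassifier invocation as before
With MAHOUT_HEAPSIZE set, the script shown above rebuilds JAVA_HEAP_MAX from it, so editing the default in bin/mahout is unnecessary.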
Did you modify the heap size for the Hadoop environment or the Mahout one? See if this query on the Mahout mailing list helps. From personal experience, I can suggest that you reduce the size of the data you are trying to process. Whenever I tried to execute the Bayes classifier on my laptop, the heap space would get exhausted after running for a few hours.
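If it does turn out to be the Hadoop side, the usual knob for that generation of Hadoop is HADOOP_HEAPSIZE (a sketch; the value is a placeholder in MB):
# conf/hadoop-env.sh
export HADOOP_HEAPSIZE=2000
That said, the stack trace shows LocalJobRunner, i.e. everything runs inside a single client JVM, so in this setup the Mahout-side heap (MAHOUT_HEAPSIZE above) is the one that actually matters; mapred.child.java.opts only affects separately spawned task JVMs on a real cluster.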
I'd suggest that you run this on EC2. I think the basic S3/EC2 tier is free to use.
When you start the Mahout process, you can run "jps"; it will show all the Java processes running on your machine under your user ID. "jps" will return a process ID. You can find the process and run "jmap -heap process-id" to see your heap space utilization.
With this approach you can estimate at which part of your processing the memory is exhausted and where you need to increase it.
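For example (replace 12345 with whatever PID jps printed; jps and jmap ship with the JDK):
jps
jmap -heap 12345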
