What are the Module-specific heaps reported by sos.eeheap? - clr

When I run the !eeheap -loader SOS command in WinDbg against a memory dump of any .NET process, it outputs two strange groups of heaps after the domains and the JIT code heap.
Here is the output:
0:000> !eeheap -loader
Loader Heap:
--------------------------------------
...........
--------------------------------------
Module Thunk heaps:
Module 000007fecb601000: Size: 0x0 (0) bytes.
Module 000007fee8bc1000: Size: 0x0 (0) bytes.
...........
Total size: Size: 0x0 (0) bytes.
--------------------------------------
Module Lookup Table heaps:
Module 000007fecb601000: Size: 0x0 (0) bytes.
Module 000007fee8bc1000: Size: 0x0 (0) bytes.
...........
Total size: Size: 0x0 (0) bytes.
--------------------------------------
Total LoaderHeap size: Size: 0xd55000 (13979648) bytes total, 0xb0000 (720896) bytes wasted.
=======================================
What are the "Module Thunk heaps" and "Module Lookup Table heaps"? And why are they always zero size?
The only thing I know is that both of these heaps contain references to every loaded module.

Related

Kilobytes or Kibibytes in GNU time?

We are doing some performance measurements including some memory footprint measurements. We've been doing this with GNU time.
But I cannot tell whether it is measuring in kilobytes (1000 bytes) or kibibytes (1024 bytes).
The man page for my system says of the %M format key (which we are using to measure peak memory usage): "Maximum resident set size of the process during its lifetime, in Kbytes."
I assume K here means the SI "Kilo" prefix, and thus kilobytes.
But having looked at a few other memory measurements of various things through various tools, I trust that assumption like I'd trust a starved lion to watch my dogs during a week-long vacation.
I need to know, because for our tests the difference between counting in units of 1000 versus 1024 bytes adds up to nearly 8 gigabytes, and I'd like to think I can cut down the potential error in our measurements by a few billion bytes.
Using the below testing setup, I have determined that GNU time on my system measures in Kibibytes.
The program below (allocator.c) allocates a buffer and touches it at regular intervals (every 128 bytes) to ensure that it all gets paged in. Note: this test only works if the entire allocation can be resident at once; otherwise time's measurement will only reflect the largest set of pages that was resident at any one time.
allocator.c:
#include <stdio.h>
#include <stdlib.h>
#define min(a,b) ( ( (a)>(b) )? (b) : (a) )
/* volatile so -O3 cannot optimize the touch loop away */
volatile char* data;
const unsigned long step = 128;
int main( int argc, char** argv ){
    if( argc < 2 ){
        printf( "Usage: %s <bytes>\n", argv[0] );
        return 1;
    }
    /* k is unsigned, so test the parsed value directly; a
       `k >= 0` check would always be true for an unsigned type */
    unsigned long k = strtoul( argv[1], NULL, 10 );
    if( k > 0 ){
        printf( "Allocating %lu (%s) bytes\n", k, argv[1] );
        data = (char*) malloc( k );
        if( !data ){
            printf( "malloc of %lu bytes failed\n", k );
            return 1;
        }
        /* the index must be unsigned long: an int would overflow
           for allocations of 2 GiB and above */
        for( unsigned long i = 0; i < k; i += step ){
            data[min(i,k-1)] = (char) i;
        }
        free( (void*) data );
    } else {
        printf( "Bad size: %s => %lu\n", argv[1], k );
    }
    return 0;
}
compile with: gcc -O3 allocator.c -o allocator
Runner Bash Script:
kibibyte=1024
kilobyte=1000
mebibyte=$(expr 1024 \* ${kibibyte})
megabyte=$(expr 1000 \* ${kilobyte})
gibibyte=$(expr 1024 \* ${mebibyte})
gigabyte=$(expr 1000 \* ${megabyte})
for mult in $(seq 1 3);
do
    bytes=$(expr ${gibibyte} \* ${mult} )
    echo ${mult} GiB \(${bytes} bytes\)
    echo "... in kibibytes: $(expr ${bytes} / ${kibibyte})"
    echo "... in kilobytes: $(expr ${bytes} / ${kilobyte})"
    /usr/bin/time -v ./allocator ${bytes}
    echo "===================================================="
done
For me this produces the following output:
1 GiB (1073741824 bytes)
... in kibibytes: 1048576
... in kilobytes: 1073741
Allocating 1073741824 (1073741824) bytes
Command being timed: "./a.out 1073741824"
User time (seconds): 0.12
System time (seconds): 0.52
Percent of CPU this job got: 75%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.86
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 1049068
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 262309
Voluntary context switches: 7
Involuntary context switches: 2
Swaps: 0
File system inputs: 16
File system outputs: 8
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
====================================================
2 GiB (2147483648 bytes)
... in kibibytes: 2097152
... in kilobytes: 2147483
Allocating 2147483648 (2147483648) bytes
Command being timed: "./a.out 2147483648"
User time (seconds): 0.21
System time (seconds): 1.09
Percent of CPU this job got: 99%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:01.31
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2097644
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 524453
Voluntary context switches: 4
Involuntary context switches: 3
Swaps: 0
File system inputs: 0
File system outputs: 8
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
====================================================
3 GiB (3221225472 bytes)
... in kibibytes: 3145728
... in kilobytes: 3221225
Allocating 3221225472 (3221225472) bytes
Command being timed: "./a.out 3221225472"
User time (seconds): 0.38
System time (seconds): 1.60
Percent of CPU this job got: 99%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:01.98
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 3146220
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 786597
Voluntary context switches: 4
Involuntary context switches: 3
Swaps: 0
File system inputs: 0
File system outputs: 8
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
====================================================
In the "Maximum resident set size" entry, I see values that are closest to the kibibytes value I expect from that raw byte count. There is some difference because its possible that some memory is being paged out (in cases where it is lower, which none of them are here) and because there is more memory being consumed than what the program allocates (namely, the stack and the actual binary image itself).
Versions on my system:
> gcc --version
gcc (GCC) 6.1.0
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> /usr/bin/time --version
GNU time 1.7
> lsb_release -a
LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: CentOS
Description: CentOS release 6.10 (Final)
Release: 6.10
Codename: Final

How to debug leak in native memory on JVM?

We have a Java application running on Mule. We have -Xmx configured to 6144M, but we routinely see the overall memory usage climb and climb. It was getting close to 20 GB the other day before we proactively restarted it.
Thu Jun 30 03:05:57 CDT 2016
top - 03:05:58 up 149 days, 6:19, 0 users, load average: 0.04, 0.04, 0.00
Tasks: 164 total, 1 running, 163 sleeping, 0 stopped, 0 zombie
Cpu(s): 4.2%us, 1.7%sy, 0.0%ni, 93.9%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 24600552k total, 21654876k used, 2945676k free, 440828k buffers
Swap: 2097144k total, 84256k used, 2012888k free, 1047316k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3840 myuser 20 0 23.9g 18g 53m S 0.0 79.9 375:30.02 java
The jps command shows:
10671 Jps
3840 MuleContainerBootstrap
The jstat command shows:
S0C S1C S0U S1U EC EU OC OU PC PU YGC YGCT FGC FGCT GCT
37376.0 36864.0 16160.0 0.0 2022912.0 1941418.4 4194304.0 445432.2 78336.0 66776.7 232 7.044 17 17.403 24.447
The startup arguments are (sensitive bits have been changed):
3840 MuleContainerBootstrap -Dmule.home=/mule -Dmule.base=/mule -Djava.net.preferIPv4Stack=TRUE -XX:MaxPermSize=256m -Djava.endorsed.dirs=/mule/lib/endorsed -XX:+HeapDumpOnOutOfMemoryError -Dmyapp.lib.path=/datalake/app/ext_lib/ -DTARGET_ENV=prod -Djava.library.path=/opt/mapr/lib -DksPass=mypass -DsecretKey=aeskey -DencryptMode=AES -Dkeystore=/mule/myStore -DkeystoreInstance=JCEKS -Djava.security.auth.login.config=/opt/mapr/conf/mapr.login.conf -Dmule.mmc.bind.port=1521 -Xms6144m -Xmx6144m -Djava.library.path=%LD_LIBRARY_PATH%:/mule/lib/boot -Dwrapper.key=a_guid -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.disable_console_input=TRUE -Dwrapper.pid=10744 -Dwrapper.version=3.5.19-st -Dwrapper.native_library=wrapper -Dwrapper.arch=x86 -Dwrapper.service=TRUE -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1 -Dwrapper.lang.domain=wrapper -Dwrapper.lang.folder=../lang
Adding up the "capacity" items from jps shows that only my 6144m is being used for java heap. Where the heck is the rest of the memory being used? Stack memory? Native heap? I'm not even sure how to proceed.
If left to continue growing, it will consume all memory on the system and we will eventually see the system freeze up throwing swap space errors.
I have another process that is starting to grow. Currently at about 11g resident memory.
pmap 10746 > pmap_10746.txt
cat pmap_10746.txt | grep anon | cut -c18-25 | sort -h | uniq -c | sort -rn | less
Top 10 entries by count:
119 12K
112 1016K
56 4K
38 131072K
20 65532K
15 131068K
14 65536K
10 132K
8 65404K
7 128K
Top 10 entries by allocation size:
1 6291456K
1 205816K
1 155648K
38 131072K
15 131068K
1 108772K
1 71680K
14 65536K
20 65532K
1 65512K
And top 10 by total size:
Count Size Aggregate
1 6291456K 6291456K
38 131072K 4980736K
15 131068K 1966020K
20 65532K 1310640K
14 65536K 917504K
8 65404K 523232K
1 205816K 205816K
1 155648K 155648K
112 1016K 113792K
This seems to be telling me that because the Xmx and Xms are set to the same value, there is a single allocation of 6291456K for the java heap. Other allocations are NOT java heap memory. What are they? They are getting allocated in rather large chunks.
Expanding a bit more on Peter's answer.
You can take a binary heap dump from within VisualVM (right-click on the process in the left-hand list, then choose "Heap Dump"; it will appear right below shortly after). If you can't attach VisualVM to your JVM, you can also generate the dump with this:
jmap -dump:format=b,file=heap.hprof $PID
Then copy the file and open it with VisualVM (File, Load, select file type "Heap Dump", find the file).
As Peter notes, a likely cause of the leak is uncollected DirectByteBuffers (e.g. some instance of another class never releases its reference to the buffers, so they are never GC'd).
To identify where these references are coming from, you can use VisualVM to examine the heap and find all instances of DirectByteBuffer in the "Classes" tab. Find the DirectByteBuffer class, right-click, and go to the Instances view.
This will give you a list of instances. You can click on one and see who's keeping a reference to each one:
In the bottom pane we have a "referent" of type Cleaner and two "mybuffer" entries. These are fields in other classes that reference the DirectByteBuffer instance we drilled into (it is fine to ignore the Cleaner and focus on the others).
From this point on you need to proceed based on your application.
Another equivalent way to get the list of DBB instances is from the OQL tab. This query:
select x from java.nio.DirectByteBuffer x
Gives us the same list as before. The benefit of using OQL is that you can execute more complex queries. For example, this gets all the instances that are keeping a reference to a DirectByteBuffer:
select referrers(x) from java.nio.DirectByteBuffer x
What you can do is take a heap dump and look for objects which store data off-heap, such as ByteBuffers. Those objects appear small, but each is a proxy for a larger off-heap memory area. See if you can determine why many of them might be retained.
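As a concrete illustration of the retention pattern described above, here is a minimal, hypothetical sketch (the class and field names are invented, not taken from the application in question): each direct buffer is a tiny Java object, but it pins a large native allocation until it is garbage collected.

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration (names are invented): each DirectByteBuffer is a
// small Java object, but its backing storage is allocated outside the Java
// heap. Keeping the references alive prevents GC, so the native memory is
// never returned, even though Java heap usage barely moves.
public class DirectBufferLeak {
    // the kind of field that shows up as a "mybuffer"-style referrer in VisualVM
    private static final List<ByteBuffer> retained = new ArrayList<>();

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 32; i++) {
            // 16 MB of off-heap memory per iteration; only a few dozen bytes
            // of that is visible as a Java object in a heap dump
            retained.add(ByteBuffer.allocateDirect(16 * 1024 * 1024));
            Thread.sleep(100); // watch RES grow in top while jstat stays flat
        }
    }
}

Running something like this and watching top shows resident memory climbing by ~512 MB while jstat's heap numbers stay flat; the VisualVM/OQL steps above would then reveal the retained list as the referrer. (Direct allocations are capped by -XX:MaxDirectMemorySize, which defaults to roughly the maximum heap size, so a real leak of this kind can also surface as an OutOfMemoryError: Direct buffer memory.)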

How could I estimate my memory device access speed?

My memory is DDR2 800MHz
dmidecode -t memory
Handle 0x1100, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x1000
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 1024 MB
Form Factor: DIMM
Set: None
Locator: DIMM_1
Bank Locator: Not Specified
Type: DDR2
Type Detail: Synchronous
Speed: 800 MHz (1.2 ns)
Manufacturer: 7F98000000000000
Serial Number: 2DCCDD00
Asset Tag: 050916
Part Number:
I want to estimate the read or write speed from this info.
Does 800 MHz mean it reads/writes 800 megabits per second?
Should we multiply by the 64-bit data width, e.g. 800M * 64 bits/sec, so we can read/write 800M * 64 / 8 bytes/sec?
Thanks in advance!
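A hedged worked answer to the arithmetic above: DDR2 transfers data on both clock edges, so the quoted 800 MHz is effectively 800 million transfers per second (the I/O bus clock is 400 MHz, doubled by DDR). Multiplying by the 64-bit (8-byte) data width gives a theoretical peak of roughly 800M × 8 = 6400 MB/s per channel, which is why DDR2-800 modules are sold as PC2-6400. Real throughput is lower because of refresh cycles, latency, and controller overhead; a rough way to estimate it empirically is to time a large sequential scan, as in this minimal Java sketch (buffer size and pass count are arbitrary choices):

// Rough sequential-read bandwidth estimate (illustrative sketch):
// stream over a buffer much larger than the CPU caches and time it.
// This exercises the whole memory subsystem (CPU, caches, controller),
// not the DIMM alone, so treat the result as a lower bound.
public class MemBandwidth {
    public static void main(String[] args) {
        long[] buf = new long[64 * 1024 * 1024 / 8]; // 64 MB, bigger than typical caches
        long sink = 0;
        // warm-up pass: fault the pages in and let the JIT compile the loop
        for (int i = 0; i < buf.length; i++) sink += buf[i];
        int passes = 10;
        long start = System.nanoTime();
        for (int p = 0; p < passes; p++) {
            for (int i = 0; i < buf.length; i++) sink += buf[i];
        }
        long elapsed = System.nanoTime() - start;
        double bytes = (double) passes * buf.length * 8;
        double mbPerSec = bytes / (elapsed / 1e9) / (1024 * 1024);
        System.out.printf("~%.0f MB/s sequential read (sink=%d)%n", mbPerSec, sink);
    }
}

On DDR2-800 hardware this will typically report a figure well below the 6400 MB/s theoretical peak.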

Security mp_init and init memory leaks

I built an iPad application and profiled it with Xcode, and I got memory leaks in the Security framework. The following are some of them, from Instruments:
Leaked Object # Address Size Responsible Library Responsible Frame
Malloc 128 Bytes,4 < multiple > 512 Bytes Security mp_init
Malloc 128 Bytes, 0x824aa70 128 Bytes Security mp_init
Malloc 128 Bytes, 0x824a9f0 128 Bytes Security mp_init
Malloc 128 Bytes, 0x824a970 128 Bytes Security mp_init
Malloc 128 Bytes, 0x824a8f0 128 Bytes Security mp_init
Leaked Object # Address Size Responsible Library Responsible Frame
Malloc 16 Bytes,4 < multiple > 64 Bytes Security init
Malloc 16 Bytes, 0x824a550 16 Bytes Security init
Malloc 16 Bytes, 0x824a540 16 Bytes Security init
Malloc 16 Bytes, 0x82493d0 16 Bytes Security init
Malloc 16 Bytes, 0x8237ca0 16 Bytes Security init
and some of the details:
# Category Event Type Timestamp RefCt Address Size Responsible Library Responsible Caller
0 Malloc 128 Bytes Malloc 00:02.240.888 1 0x824aa70 128 Security mp_init
# Category Event Type Timestamp RefCt Address Size Responsible Library Responsible Caller
0 Malloc 16 Bytes Malloc 00:02.240.886 1 0x824a550 16 Security init
I use ASIHTTP, FMDatabase, and some other frameworks in my application (I do not think they cause the problem), and there is no Security framework in "Link Binary With Libraries", so I have no idea what's going wrong.
Can anyone figure out what may cause the leaks?

PHP Warning: POST Content-Length of 113 bytes exceeds the limit of -1988100096 bytes in Unknown

I have been having a lot of problems with users uploading images on my website.
They can upload up to 6 images.
Originally I had to change values in php.ini to:
upload_max_filesize = 2000M
post_max_size = 2000M
max_execution_time = 120
max_file_uploads = 7
memory_limit=128M
I had to change to this as I was getting all sorts of errors, like out of memory, maximum POST size exceeded, etc.
Everything was going OK until I checked my error log, which contained:
[11-Jun-2011 04:33:06] PHP Warning: Unknown: POST Content-Length of 113 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0
[11-Jun-2011 04:33:12] PHP Warning: Unknown: POST Content-Length of 75 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0
[11-Jun-2011 04:33:27] PHP Warning: Unknown: POST Content-Length of 74 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0
[11-Jun-2011 04:33:34] PHP Warning: Unknown: POST Content-Length of 75 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0
[11-Jun-2011 04:33:43] PHP Warning: Unknown: POST Content-Length of 77 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0
[11-Jun-2011 04:33:48] PHP Warning: Unknown: POST Content-Length of 74 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0
[11-Jun-2011 04:33:53] PHP Warning: Unknown: POST Content-Length of 75 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0
[11-Jun-2011 04:34:20] PHP Warning: Unknown: POST Content-Length of 133 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0
[11-Jun-2011 04:35:29] PHP Warning: Unknown: POST Content-Length of 131 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0
[11-Jun-2011 04:36:00] PHP Warning: Unknown: POST Content-Length of 113 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0
[11-Jun-2011 04:36:06] PHP Warning: Unknown: POST Content-Length of 75 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0
[11-Jun-2011 04:36:34] PHP Warning: Unknown: POST Content-Length of 116 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0
If I change post_max_size back to 8M, I get a message like this:
PHP Warning: POST Content-Length of 11933650 bytes exceeds the limit of 8388608 bytes in Unknown on line 0
Any ideas where I am going wrong?
On some 32-bit systems, PHP takes a shorthand memory setting like 2000M or 2G and converts it to an integer number of bytes without performing any bounds check. A value of 2G or 2048M then wraps around to -2147483648 bytes.
Some PHP versions instead cap the value at the top, so it won't go negative (2147483647 is the 32-bit signed integer limit).
If you want the maximum possible number of bytes on such a system, use 2147483647, which is two gigabytes minus one byte.
Alternatively, if you need to handle large data, consider a 64-bit system.
Additionally, you should consider the following: according to the PHP manual, the memory_limit setting is the more important one. If it does not allow enough memory, the POST-data size check will pass, but PHP will not have enough memory to actually handle the POST data, and you will get a different error saying the memory limit was exceeded. So when you configure PHP, make sure post_max_size is smaller than memory_limit.
In your example memory_limit is 128M, so PHP cannot process POST data larger than about 128 megabytes.
(This blog post shows what can happen and how large memory settings behave on 32-bit and 64-bit systems.)
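To make the wraparound from the first paragraph concrete, here is a minimal sketch of the failure mode (written in Java for its well-defined 32-bit casts; it models what a 32-bit PHP build effectively does with shorthand values, not PHP's actual source):

// Models what a 32-bit PHP build effectively does with shorthand ini values
// (illustrative only): multiply the number out, then store it in a 32-bit
// signed integer, silently wrapping past 2147483647.
public class IniShorthand {
    static int toBytes32(long value, char suffix) {
        long bytes = value;
        switch (suffix) {
            case 'G': bytes *= 1024; // fall through
            case 'M': bytes *= 1024; // fall through
            case 'K': bytes *= 1024;
        }
        return (int) bytes; // truncation to 32 bits can go negative
    }

    public static void main(String[] args) {
        System.out.println(toBytes32(2000, 'M')); //  2097152000 - still fits
        System.out.println(toBytes32(2048, 'M')); // -2147483648 - wrapped
        System.out.println(toBytes32(2,    'G')); // -2147483648 - wrapped
        System.out.println(toBytes32(2200, 'M')); // -1988100096 - matches the log above
    }
}

Note that 2200M lands exactly on the -1988100096 from the warnings above, which suggests the value PHP actually parsed was somewhat larger than the 2000M shown.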
It looks like your "2000M" is exceeding the integer limit. From the manual:
PHP allows shortcuts for byte values, including K (kilo), M (mega) and G (giga). PHP will do the conversions automatically if you use any of these. Be careful not to exceed the 32 bit signed integer limit (if you're using 32bit versions) as it will cause your script to fail.
Try a smaller value, say 1000M; 2 gigabytes of incoming data is probably unlikely anyway.
