I've seen a fair bit of noise about "false positives," and have even encountered them myself.
However, this takes the cake.
It's easy to reproduce: using Swift 5/Xcode 10.2, create a new single-view iOS app.
Run Leaks.
You get these critters:
Leaked Object # Address Size Responsible Library Responsible Frame
Malloc 64 Bytes 1 0x600001d084c0 64 Bytes Foundation +[NSString stringWithUTF8String:]
Malloc 16 Bytes 3 < multiple > 48 Bytes
Malloc 1.50 KiB 3 < multiple > 4.50 KiB
Malloc 32 Bytes 3 < multiple > 96 Bytes
Malloc 8.00 KiB 1 0x7fc56f000c00 8.00 KiB
Malloc 64 Bytes 10 < multiple > 640 Bytes
Malloc 80 Bytes 3 < multiple > 240 Bytes
Malloc 4.00 KiB 3 < multiple > 12.00 KiB
Using the iPhone XR simulator (iOS 12.2).
That first one has a stack trace, but it's worthless.
Is there some way that I can correct for this noise? I'm writing an infrastructure component, and I need to:
A) Make damn sure it doesn't leak, and
B) Not have every Cocoapod jockey on Earth emailing me, and telling me that my component leaks.
If you use an iOS 12.1 simulator, the Leaks instrument still works correctly (Swift 5/Xcode 10.2). For now, we are hoping this will be fixed in a future version.
Related
I'm trying to detect a memory leak location for a certain process via WinDbg, and have come across a strange problem.
Using WinDbg, I've created two memory dump snapshots, one before and one after the leak, which showed an increase of around 20 MB (detected via Performance Monitor's private bytes counter). Comparing them with the command !heap -s shows that there is indeed a similar size difference in one of the heaps:
Before:
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
03940000 08000002 48740 35312 48740 4372 553 9 5 2d LFH
External fragmentation 12 % (553 free blocks)
03fb0000 08001002 7216 3596 7216 1286 75 4 8 0 LFH
External fragmentation 35 % (75 free blocks)
05850000 08001002 60 16 60 5 2 1 0 0
...
After:
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
03940000 08000002 64928 55120 64928 6232 1051 26 5 51 LFH
External fragmentation 11 % (1051 free blocks)
03fb0000 08001002 7216 3596 7216 1236 73 4 8 0 LFH
External fragmentation 34 % (73 free blocks)
05850000 08001002 60 16 60 5 2 1 0 0
...
See the first heap (03940000): there is a difference in committed memory of 55120 - 35312 = 19808 KB, roughly 20 MB.
However, when I inspected that heap with !heap -stat -h 03940000, it displayed the following for both dump files:
First dump:
size #blocks total ( %) (percent of total busy bytes)
3b32 1 - 3b32 (30.94)
1d34 1 - 1d34 (15.27)
880 1 - 880 (4.44)
558 1 - 558 (2.79)
220 1 - 220 (1.11)
200 2 - 400 (2.09)
158 1 - 158 (0.70)
140 2 - 280 (1.31)
...(rest of the lines show no difference)
Second dump:
size #blocks total ( %) (percent of total busy bytes)
3b32 1 - 3b32 (30.95)
1d34 1 - 1d34 (15.27)
880 1 - 880 (4.44)
558 1 - 558 (2.79)
220 1 - 220 (1.11)
200 2 - 400 (2.09)
158 1 - 158 (0.70)
140 2 - 280 (1.31)
...(rest of the lines show no difference)
As you can see, there is hardly any difference between the two, despite the 20 MB size difference noted above.
Is there an explanation for that?
Note: I have also inspected the unmanaged memory using UMDH; there wasn't a noticeable size difference there.
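(For reference, a UMDH comparison of that kind typically looks like the following; the PID and file names here are illustrative, and user-mode stack traces must first be enabled with gflags.)
gflags /i myapp.exe +ust
umdh -p:1234 -f:before.txt
umdh -p:1234 -f:after.txt
umdh before.txt after.txt > diff.txt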
We have a Java application running on Mule. We have the -Xmx value configured to 6144M, but we routinely see the overall memory usage climb and climb. It was getting close to 20 GB the other day before we proactively restarted it.
Thu Jun 30 03:05:57 CDT 2016
top - 03:05:58 up 149 days, 6:19, 0 users, load average: 0.04, 0.04, 0.00
Tasks: 164 total, 1 running, 163 sleeping, 0 stopped, 0 zombie
Cpu(s): 4.2%us, 1.7%sy, 0.0%ni, 93.9%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 24600552k total, 21654876k used, 2945676k free, 440828k buffers
Swap: 2097144k total, 84256k used, 2012888k free, 1047316k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3840 myuser 20 0 23.9g 18g 53m S 0.0 79.9 375:30.02 java
The jps command shows:
10671 Jps
3840 MuleContainerBootstrap
The jstat command shows:
S0C S1C S0U S1U EC EU OC OU PC PU YGC YGCT FGC FGCT GCT
37376.0 36864.0 16160.0 0.0 2022912.0 1941418.4 4194304.0 445432.2 78336.0 66776.7 232 7.044 17 17.403 24.447
The startup arguments are (sensitive bits have been changed):
3840 MuleContainerBootstrap -Dmule.home=/mule -Dmule.base=/mule -Djava.net.preferIPv4Stack=TRUE -XX:MaxPermSize=256m -Djava.endorsed.dirs=/mule/lib/endorsed -XX:+HeapDumpOnOutOfMemoryError -Dmyapp.lib.path=/datalake/app/ext_lib/ -DTARGET_ENV=prod -Djava.library.path=/opt/mapr/lib -DksPass=mypass -DsecretKey=aeskey -DencryptMode=AES -Dkeystore=/mule/myStore -DkeystoreInstance=JCEKS -Djava.security.auth.login.config=/opt/mapr/conf/mapr.login.conf -Dmule.mmc.bind.port=1521 -Xms6144m -Xmx6144m -Djava.library.path=%LD_LIBRARY_PATH%:/mule/lib/boot -Dwrapper.key=a_guid -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.disable_console_input=TRUE -Dwrapper.pid=10744 -Dwrapper.version=3.5.19-st -Dwrapper.native_library=wrapper -Dwrapper.arch=x86 -Dwrapper.service=TRUE -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1 -Dwrapper.lang.domain=wrapper -Dwrapper.lang.folder=../lang
Adding up the "capacity" items from jstat shows that only my 6144m is being used for the Java heap. Where the heck is the rest of the memory being used? Stack memory? Native heap? I'm not even sure how to proceed.
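(Concretely: S0C + S1C + EC + OC = 37376 + 36864 + 2022912 + 4194304 = 6291456 K, which is exactly the 6144m heap; PC adds only another 78336 K of permgen.)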
If left to continue growing, it will consume all memory on the system, and we will eventually see the system freeze up, throwing swap space errors.
I have another process that is starting to grow. Currently at about 11g resident memory.
pmap 10746 > pmap_10746.txt
# histogram of anonymous mappings: pull out the size column, then count duplicates
grep anon pmap_10746.txt | cut -c18-25 | sort -h | uniq -c | sort -rn | less
Top 10 entries by count:
119 12K
112 1016K
56 4K
38 131072K
20 65532K
15 131068K
14 65536K
10 132K
8 65404K
7 128K
Top 10 entries by allocation size:
1 6291456K
1 205816K
1 155648K
38 131072K
15 131068K
1 108772K
1 71680K
14 65536K
20 65532K
1 65512K
And top 10 by total size:
Count Size Aggregate
1 6291456K 6291456K
38 131072K 4980736K
15 131068K 1966020K
20 65532K 1310640K
14 65536K 917504K
8 65404K 523232K
1 205816K 205816K
1 155648K 155648K
112 1016K 113792K
This seems to be telling me that, because -Xmx and -Xms are set to the same value, there is a single allocation of 6291456K (exactly 6144m) for the Java heap. The other allocations are NOT Java heap memory. What are they? They are being allocated in rather large chunks.
Expanding with a bit more detail on Peter's answer.
You can take a binary heap dump from within VisualVM (right-click on the process in the list on the left-hand side, then on "Heap Dump"; the dump will appear right below it shortly after). If you can't attach VisualVM to your JVM, you can also generate the dump with this:
jmap -dump:format=b,file=heap.hprof $PID
Then copy the file and open it in VisualVM (File > Load, select the "Heap Dump" file type, and locate the file).
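If the dump comes out very large, jmap can also restrict it to objects that are still reachable (note that this forces a full GC first):
jmap -dump:live,format=b,file=heap.hprof $PID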
As Peter notes, a likely cause of the leak is uncollected DirectByteBuffers (e.g. an instance of some other class is holding on to references to buffers it no longer needs, so they are never GC'd).
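As a hedged illustration (the class and field names below are invented, not taken from any dump), the retention pattern usually looks something like this:
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

class BufferCache {                                       // hypothetical class
    // A long-lived collection like this pins every buffer it holds, so neither
    // the wrapper objects nor their off-heap memory can ever be reclaimed.
    static final List<ByteBuffer> HELD = new ArrayList<>();

    static ByteBuffer acquire(int size) {
        ByteBuffer buf = ByteBuffer.allocateDirect(size); // allocates off-heap
        HELD.add(buf);                                    // reference is never removed
        return buf;
    }
}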
To identify where these references are coming from, you can use VisualVM to examine the heap and find all instances of DirectByteBuffer in the "Classes" tab: find the DBB class, right-click, and go to the Instances view.
This gives you a list of instances. You can click on one and see who is keeping a reference to each:
Note the bottom pane: we have a "referent" of type Cleaner and two "mybuffer" references. These would be fields in other classes that are referencing the DirectByteBuffer instance we drilled into (it should be OK to ignore the Cleaner and focus on the others).
From this point on you need to proceed based on your application.
Another equivalent way to get the list of DBB instances is from the OQL tab. This query:
select x from java.nio.DirectByteBuffer x
Gives us the same list as before. The benefit of using OQL is that you can execute more complex queries. For example, this gets all the instances that are keeping a reference to a DirectByteBuffer:
select referrers(x) from java.nio.DirectByteBuffer x
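OQL can also aggregate. For instance, the following sketch (using the standard OQL built-ins; capacity is the field defined on java.nio.Buffer) sums the capacities of all direct buffers, which should estimate how much off-heap memory they pin:
select sum(map(heap.objects('java.nio.DirectByteBuffer'), 'it.capacity'))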
What you can do is take a heap dump and look for objects which store data off-heap, such as ByteBuffers. Those objects appear small, but they are proxies for larger off-heap memory areas. See if you can determine why lots of them might be retained.
I'm using IPython parallel in an optimisation algorithm that loops a large number of times. Parallelism is invoked in the loop using the map method of a LoadBalancedView (twice), a DirectView's dictionary interface, and an invocation of a %px magic. I'm running the algorithm in an IPython notebook.
I find that the memory consumed by both the kernel running the algorithm and one of the controllers increases steadily over time, limiting the number of loops I can execute (since available memory is limited).
Using heapy, I profiled memory use after a run of about 38 thousand loops:
Partition of a set of 98385344 objects. Total size = 18016840352 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 5059553 5 9269101096 51 9269101096 51 IPython.parallel.client.client.Metadata
1 19795077 20 2915510312 16 12184611408 68 list
2 24030949 24 1641114880 9 13825726288 77 str
3 5062764 5 1424092704 8 15249818992 85 dict (no owner)
4 20238219 21 971434512 5 16221253504 90 datetime.datetime
5 401177 0 426782056 2 16648035560 92 scipy.optimize.optimize.OptimizeResult
6 3 0 402654816 2 17050690376 95 collections.defaultdict
7 4359721 4 323814160 2 17374504536 96 tuple
8 8166865 8 196004760 1 17570509296 98 numpy.float64
9 5488027 6 131712648 1 17702221944 98 int
<1582 more rows. Type e.g. '_.more' to view.>
You can see that about half the memory is used by IPython.parallel.client.client.Metadata instances. A good indicator that results from the map invocations are being cached is the 401177 OptimizeResult instances, the same as the number of optimize invocations via lbview.map; I am not caching them in my code.
Is there a way I can control this memory usage on both the kernel and the IPython parallel controller (whose memory consumption is comparable to the kernel's)?
IPython parallel clients and controllers store results and other metadata from past transactions.
The IPython.parallel.Client class provides a method for clearing this data:
Client.purge_everything()
documented here. There are also purge_results() and purge_local_results() methods that give you finer-grained control over what gets purged.
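A minimal sketch of how this might look inside the loop (objective, candidates, and the purge interval are placeholders, not taken from the question):
from IPython.parallel import Client

rc = Client()
lbview = rc.load_balanced_view()

objective = lambda x: x ** 2      # placeholder work function
candidates = range(10)            # placeholder inputs
n_iterations = 38000              # as in the profiled run above

for i in range(n_iterations):
    results = lbview.map(objective, candidates, block=True)
    # ... consume results ...
    if i % 100 == 0:
        # periodically drop cached results and metadata on the client and hub
        rc.purge_everything()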
I built an iPad application and profiled it with Xcode, and I got memory leaks in the Security framework. The following are some of them, from Instruments:
Leaked Object # Address Size Responsible Library Responsible Frame
Malloc 128 Bytes,4 < multiple > 512 Bytes Security mp_init
Malloc 128 Bytes, 0x824aa70 128 Bytes Security mp_init
Malloc 128 Bytes, 0x824a9f0 128 Bytes Security mp_init
Malloc 128 Bytes, 0x824a970 128 Bytes Security mp_init
Malloc 128 Bytes, 0x824a8f0 128 Bytes Security mp_init
Leaked Object # Address Size Responsible Library Responsible Frame
Malloc 16 Bytes,4 < multiple > 64 Bytes Security init
Malloc 16 Bytes, 0x824a550 16 Bytes Security init
Malloc 16 Bytes, 0x824a540 16 Bytes Security init
Malloc 16 Bytes, 0x82493d0 16 Bytes Security init
Malloc 16 Bytes, 0x8237ca0 16 Bytes Security init
and some of the details:
# Category Event Type Timestamp RefCt Address Size Responsible Library Responsible Caller
0 Malloc 128 Bytes Malloc 00:02.240.888 1 0x824aa70 128 Security mp_init
# Category Event Type Timestamp RefCt Address Size Responsible Library Responsible Caller
0 Malloc 16 Bytes Malloc 00:02.240.886 1 0x824a550 16 Security init
I use ASIHTTP, FMDatabase, and some other frameworks (which I do not think are causing the problem) in my application, and the Security framework is not in "Link Binary With Libraries", so I have no idea what's going wrong.
Can anyone figure out what may cause the leaks?
Can someone explain this in a practical way? The sample below represents usage for one low-traffic Rails site using Nginx and a cluster of 3 Mongrels. I ask because I am aiming to learn about page caching, and I wonder whether these figures have significant meaning for that process. Thank you. Great site!
me#vps:~$ free -m
total used free shared buffers cached
Mem: 512 506 6 0 15 103
-/+ buffers/cache: 387 124
Swap: 1023 113 910
Physical memory is all used up. Why? Because it's there; the system should be using it.
You'll note also that the system is using 113M of swap space. Bad? Good? It depends.
See also that there's 103M of cached disk; this means the system has decided it's better to cache 103M of disk and swap out those 113M. Maybe you have some processes whose memory is not being actively used, and they have been paged out to disk.
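(The arithmetic checks out: of the 506M "used", 15M is buffers and 103M is cache, so applications proper account for roughly 506 - 15 - 103 = 388M, matching, within rounding, the 387M shown on the -/+ buffers/cache line.)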
As the other poster said, you should be using other tools to see what's happening:
Your perception: is the site running appropriately when you use it?
Benchmarking: what response times are your clients seeing?
More fine-grained diagnostics:
top: you can see live which processes are using memory and CPU
vmstat: it produces this kind of output:
alex#armitage:~$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 1 71184 156520 92524 316488 1 5 12 23 362 250 13 6 80 1
0 0 71184 156340 92528 316508 0 0 0 1 291 608 10 1 89 0
0 0 71184 156364 92528 316508 0 0 0 0 308 674 9 2 89 0
0 0 71184 156364 92532 316504 0 0 0 72 295 723 9 0 91 0
1 0 71184 150892 92532 316508 0 0 0 0 370 722 38 0 62 0
0 0 71184 163060 92532 316508 0 0 0 0 303 611 17 2 81 0
which will show you whether swap is hurting you (high numbers under si and so) and gives an easier way to see performance over time.
By my reading of this, you have used almost all your memory, have 6M free, and are about 10% into your swap. A more useful approach is to use top, or perhaps ps, to see how much RAM each of your individual Mongrels is using. Because you're going into swap, you're probably seeing slowdowns. You might find that having only 2 Mongrels rather than 3 actually responds faster, because it likely wouldn't dip into swap.
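For example, something like this lists the largest processes by resident memory (the exact process names will vary; Mongrels usually show up as ruby processes):
ps -eo pid,rss,comm --sort=-rss | head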
Page caching will help a ton with response time, so if your pages are cacheable (e.g. they don't have content that is unique to the individual user), I would say definitely check it out.