I want to dump the memory pages of a process once it finishes execution, and I'm trying to use gdb for that. First I set breakpoints at exit and _exit, then I run the process inside gdb. Once the process breaks, I use info proc mappings to get the memory map of the process. It looks like the following:
Mapped address spaces:
Start Addr End Addr Size Offset objfile
0x400000 0x415000 0x15000 0x0 /path/workspace/freqmine
0x614000 0x615000 0x1000 0x14000 /path/workspace/freqmine
0x615000 0x616000 0x1000 0x15000 /path/workspace/freqmine
0x616000 0x129b000 0xc85000 0x0 [heap]
0x7ffff71f4000 0x7ffff720a000 0x16000 0x0 /lib/x86_64-linux-gnu/libgcc_s.so.1
0x7ffff720a000 0x7ffff7409000 0x1ff000 0x16000 /lib/x86_64-linux-gnu/libgcc_s.so.1
0x7ffff7409000 0x7ffff740a000 0x1000 0x15000 /lib/x86_64-linux-gnu/libgcc_s.so.1
0x7ffff740a000 0x7ffff750f000 0x105000 0x0 /lib/x86_64-linux-gnu/libm-2.19.so
0x7ffff750f000 0x7ffff770e000 0x1ff000 0x105000 /lib/x86_64-linux-gnu/libm-2.19.so
0x7ffff770e000 0x7ffff770f000 0x1000 0x104000 /lib/x86_64-linux-gnu/libm-2.19.so
0x7ffff770f000 0x7ffff7710000 0x1000 0x105000 /lib/x86_64-linux-gnu/libm-2.19.so
0x7ffff7710000 0x7ffff78cb000 0x1bb000 0x0 /lib/x86_64-linux-gnu/libc-2.19.so
0x7ffff78cb000 0x7ffff7acb000 0x200000 0x1bb000 /lib/x86_64-linux-gnu/libc-2.19.so
0x7ffff7acb000 0x7ffff7acf000 0x4000 0x1bb000 /lib/x86_64-linux-gnu/libc-2.19.so
0x7ffff7acf000 0x7ffff7ad1000 0x2000 0x1bf000 /lib/x86_64-linux-gnu/libc-2.19.so
0x7ffff7ad1000 0x7ffff7ad6000 0x5000 0x0
0x7ffff7ad6000 0x7ffff7bbc000 0xe6000 0x0 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19
0x7ffff7bbc000 0x7ffff7dbb000 0x1ff000 0xe6000 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19
0x7ffff7dbb000 0x7ffff7dc3000 0x8000 0xe5000 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19
0x7ffff7dc3000 0x7ffff7dc5000 0x2000 0xed000 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19
0x7ffff7dc5000 0x7ffff7dda000 0x15000 0x0
0x7ffff7dda000 0x7ffff7dfd000 0x23000 0x0 /lib/x86_64-linux-gnu/ld-2.19.so
0x7ffff7fce000 0x7ffff7fd3000 0x5000 0x0
0x7ffff7ff7000 0x7ffff7ffa000 0x3000 0x0
0x7ffff7ffa000 0x7ffff7ffc000 0x2000 0x0 [vdso]
0x7ffff7ffc000 0x7ffff7ffd000 0x1000 0x22000 /lib/x86_64-linux-gnu/ld-2.19.so
0x7ffff7ffd000 0x7ffff7ffe000 0x1000 0x23000 /lib/x86_64-linux-gnu/ld-2.19.so
0x7ffff7ffe000 0x7ffff7fff000 0x1000 0x0
0x7ffffffdd000 0x7ffffffff000 0x22000 0x0 [stack]
0xffffffffff600000 0xffffffffff601000 0x1000 0x0 [vsyscall]
Now I have two questions. First: getconf PAGESIZE on my machine returns 4096, which is equal to 0x1000, but some of these memory spaces have different sizes. How is that possible? Are these spaces memory pages or just logical spaces? If these are not memory pages, how can I view the addresses of memory pages, or even directly dump memory pages to files?
My second question is the following: these addresses are supposed to be virtual addresses as seen by the program (not physical addresses), so why doesn't the program space start at 0? If I try to dump memory starting from address 0, I get the error: Cannot access memory at address 0x0. Also, why are there regions in between these memory spaces that cannot be accessed (the region after the heap, for example)? Shouldn't the virtual space of a process be contiguous?
some of these memory spaces have different sizes though. how is that possible?
Easy: they span multiple pages (note that all of their sizes are multiples of 0x1000).
are these spaces memory pages or just logical spaces?
They are spans of one or more pages that have the same underlying mapping (the same file) and the same protections. I am not sure what exactly you mean by "logical spaces", but chances are you can call them that.
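If what you ultimately want is to write a region out to a file, gdb can do that directly with the dump binary memory command. A sketch using two of the mappings above (substitute whatever start and end addresses you care about, and any output filenames you like):

(gdb) dump binary memory freqmine.bin 0x400000 0x415000
(gdb) dump binary memory heap.bin 0x616000 0x129b000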
these addresses are supposed to be virtual addresses viewed by the program (not physical addresses),
Correct.
so why doesn't the program space start at 0?
Because a long time ago VAX machines used to map something at address 0, and that made finding NULL pointer dereferences hard (they didn't crash). This was decided to be a bad idea, so later UNIX variants do not map the zero page, and any attempt to dereference a NULL pointer causes SIGSEGV, helping you to debug your programs.
As for the gaps between mappings: a process's virtual address space is sparse by design. Pages are mapped only where the program needs them, and touching an unmapped hole (such as the region after the heap) also raises SIGSEGV.
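Relatedly, on the zero-page point: modern Linux also enforces a minimum mapping address, so even a program that tried to map page 0 would normally be refused. You can inspect it with sysctl (a typical value is shown; yours may differ by distribution):

$ sysctl vm.mmap_min_addr
vm.mmap_min_addr = 65536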
I have a problem: we are capturing packets off the wire in our network, but we are getting a lot of "IPv4 total length exceeds packet length" errors, always from 8.0.69.0.
This IP is spurious and the destination seems to be 0.x, 5.x or 1.x, i.e. not valid either.
Some Googling seems to suggest that others have also seen 8.0.69.0 when working with GRE/NAT scenarios. Does anyone have any idea at all what this might be please?
PCAP uploaded here.
Error is Accurate
The error "IPv4 total length exceeds packet length" means exactly what it says: The number of bytes in the IPv4 packet exceeds the number of bytes recorded for the entire frame.
Looking at Fields
We can print both frame length fields (frame.len, frame.cap_len) and the IP length field (ip.len) to see this in more depth:
$ tshark -r exceed-length.pcap -T fields -e frame.len -e frame.cap_len -e ip.len
1478 1478 3114
1536 1536 3114
1476 1476 13615
1399 1399 13781
1478 1478 3114
228 228 13615
195 195 13615
1478 1478 3114
1476 1476 13615
1478 1478 3114
265 265 3114
505 505 3114
1478 1478 3114
88 88 33514
1478 1478 3114
1478 1478 3114
1478 1478 3114
As we can see, every frame has an IP total length that exceeds the frame length, sometimes by an order of magnitude.
What Causes This?
When a capturing program saves a packet in the pcap format (as this file is), it prepends each packet with a record header containing the captured length (frame.cap_len), the original frame length (frame.len), and the timestamp. In most cases frame.cap_len and frame.len won't differ at all, and they don't here either.
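For reference, that per-packet record header looks roughly like this (the layout follows the published pcap file format; the field names here are illustrative):

#include <stdint.h>

typedef struct pcaprec_hdr {
    uint32_t ts_sec;   /* timestamp, seconds */
    uint32_t ts_usec;  /* timestamp, microseconds */
    uint32_t incl_len; /* bytes actually saved in the file (frame.cap_len) */
    uint32_t orig_len; /* original length of the packet on the wire (frame.len) */
} pcaprec_hdr_t;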
How to Fix?
It's possible to make this error go away by manually editing the packet length in the hex, but that only hides the symptom.
Next Steps
More likely than not, "IPv4 total length exceeds packet length" is incidental to your actual problem. Just because you see an Expert Info in Wireshark does not necessarily mean that it's relevant. You should continue troubleshooting.
There's an extra weird header in the packet, as per the answers when you asked this on the Wireshark Q&A site.
This turned out to be extra layers of encapsulation.
Removing the leading 22 bytes was the answer and revealed the correctly formed packets inside. Many thanks
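For reference, editcap (shipped with Wireshark) can do that chop in one step. A sketch using the filename from the tshark example above:

$ editcap -C 22 exceed-length.pcap fixed.pcap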
I've written a simulator, which is distributed over two hosts. When I launch a few thousand processes, after about 10 minutes and half a million events written, my main Erlang (OTP v22) virtual machine crashes with this message:
no next heap size found: 18446744071789822643, offset 0.
It's always that same number - 18446744071789822643.
Because my server is very capable, the crash dump is also huge and I can't view it on my headless server (no WX installed).
Are there any tips on what I can look at?
What would be the first things I can try out to debug this issue?
First, see what memory() says:
> memory().
[{total,18480016},
{processes,4615512},
{processes_used,4614480},
{system,13864504},
{atom,331273},
{atom_used,306525},
{binary,47632},
{code,5625561},
{ets,438056}]
Check which one is growing - processes, binary, ets?
If it's processes, try typing i(). in the Erlang shell while the processes are running. You'll see something like:
Pid Initial Call Heap Reds Msgs
Registered Current Function Stack
<0.0.0> otp_ring0:start/2 233 1263 0
init init:loop/1 2
<0.1.0> erts_code_purger:start/0 233 44 0
erts_code_purger erts_code_purger:wait_for_request 0
<0.2.0> erts_literal_area_collector:start 233 9 0
erts_literal_area_collector:msg_l 5
<0.3.0> erts_dirty_process_signal_handler 233 128 0
erts_dirty_process_signal_handler 2
<0.4.0> erts_dirty_process_signal_handler 233 9 0
erts_dirty_process_signal_handler 2
<0.5.0> erts_dirty_process_signal_handler 233 9 0
erts_dirty_process_signal_handler 2
<0.8.0> erlang:apply/2 6772 238183 0
erl_prim_loader erl_prim_loader:loop/3 5
Look for a process with a very big heap, and that's where you'd start looking for a memory leak.
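If the i(). output is too long to eyeball, here is a rough shell one-liner to list the five largest heaps (a sketch: total_heap_size is measured in words, and a process may exit between the two calls, hence the undefined clause):

F = fun(P) -> case erlang:process_info(P, total_heap_size) of
                  {total_heap_size, S} -> {P, S};
                  undefined -> {P, 0}  % process already exited
              end
    end,
lists:sublist(lists:reverse(lists:keysort(2, [F(P) || P <- erlang:processes()])), 5).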
(If you weren't running headless, I'd suggest starting Observer with observer:start(), and look at what's happening in the Erlang node.)
We have a Java application running on Mule. We have the Xmx value configured to 6144M, but we routinely see the overall memory usage climb and climb. It was getting close to 20 GB the other day before we proactively restarted it.
Thu Jun 30 03:05:57 CDT 2016
top - 03:05:58 up 149 days, 6:19, 0 users, load average: 0.04, 0.04, 0.00
Tasks: 164 total, 1 running, 163 sleeping, 0 stopped, 0 zombie
Cpu(s): 4.2%us, 1.7%sy, 0.0%ni, 93.9%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 24600552k total, 21654876k used, 2945676k free, 440828k buffers
Swap: 2097144k total, 84256k used, 2012888k free, 1047316k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3840 myuser 20 0 23.9g 18g 53m S 0.0 79.9 375:30.02 java
The jps command shows:
10671 Jps
3840 MuleContainerBootstrap
The jstat command shows:
S0C S1C S0U S1U EC EU OC OU PC PU YGC YGCT FGC FGCT GCT
37376.0 36864.0 16160.0 0.0 2022912.0 1941418.4 4194304.0 445432.2 78336.0 66776.7 232 7.044 17 17.403 24.447
The startup arguments are (sensitive bits have been changed):
3840 MuleContainerBootstrap -Dmule.home=/mule -Dmule.base=/mule -Djava.net.preferIPv4Stack=TRUE -XX:MaxPermSize=256m -Djava.endorsed.dirs=/mule/lib/endorsed -XX:+HeapDumpOnOutOfMemoryError -Dmyapp.lib.path=/datalake/app/ext_lib/ -DTARGET_ENV=prod -Djava.library.path=/opt/mapr/lib -DksPass=mypass -DsecretKey=aeskey -DencryptMode=AES -Dkeystore=/mule/myStore -DkeystoreInstance=JCEKS -Djava.security.auth.login.config=/opt/mapr/conf/mapr.login.conf -Dmule.mmc.bind.port=1521 -Xms6144m -Xmx6144m -Djava.library.path=%LD_LIBRARY_PATH%:/mule/lib/boot -Dwrapper.key=a_guid -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.disable_console_input=TRUE -Dwrapper.pid=10744 -Dwrapper.version=3.5.19-st -Dwrapper.native_library=wrapper -Dwrapper.arch=x86 -Dwrapper.service=TRUE -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1 -Dwrapper.lang.domain=wrapper -Dwrapper.lang.folder=../lang
Adding up the "capacity" items from jstat shows that only my 6144m is being used for the Java heap. Where the heck is the rest of the memory being used? Stack memory? Native heap? I'm not even sure how to proceed.
If left to continue growing, it will consume all memory on the system and we will eventually see the system freeze up throwing swap space errors.
I have another process that is starting to grow. Currently at about 11g resident memory.
pmap 10746 > pmap_10746.txt
cat pmap_10746.txt | grep anon | cut -c18-25 | sort -h | uniq -c | sort -rn | less
Top 10 entries by count:
119 12K
112 1016K
56 4K
38 131072K
20 65532K
15 131068K
14 65536K
10 132K
8 65404K
7 128K
Top 10 entries by allocation size:
1 6291456K
1 205816K
1 155648K
38 131072K
15 131068K
1 108772K
1 71680K
14 65536K
20 65532K
1 65512K
And top 10 by total size:
Count Size Aggregate
1 6291456K 6291456K
38 131072K 4980736K
15 131068K 1966020K
20 65532K 1310640K
14 65536K 917504K
8 65404K 523232K
1 205816K 205816K
1 155648K 155648K
112 1016K 113792K
This seems to be telling me that, because Xmx and Xms are set to the same value, there is a single allocation of 6291456K for the Java heap. The other allocations are NOT Java heap memory. What are they? They are being allocated in rather large chunks.
Expanding a bit on Peter's answer.
You can take a binary heap dump from within VisualVM (right click on the process in the left-hand side list, and then on heap dump - it'll appear right below shortly after). If you can't attach VisualVM to your JVM, you can also generate the dump with this:
jmap -dump:format=b,file=heap.hprof $PID
Then copy the file and open it with VisualVM (File, Load, select type heap dump, find the file).
As Peter notes, a likely cause for the leak may be non-collected DirectByteBuffers (e.g. some instance of another class is not properly de-referencing buffers, so they are never GC'd).
To identify where these references are coming from, you can use VisualVM to examine the heap and find all instances of DirectByteBuffer in the "Classes" tab: find the DBB class, right click, and go to the instances view.
This will give you a list of instances. You can click on one and see who's keeping a reference to each one:
Note the bottom pane: we have a "referent" of type Cleaner and two "mybuffer" references. These are properties in other classes that reference the instance of DirectByteBuffer we drilled into (it should be OK to ignore the Cleaner and focus on the others).
From this point on you need to proceed based on your application.
Another equivalent way to get the list of DBB instances is from the OQL tab. This query:
select x from java.nio.DirectByteBuffer x
Gives us the same list as before. The benefit of using OQL is that you can execute more complex queries. For example, this gets all the instances that are keeping a reference to a DirectByteBuffer:
select referrers(x) from java.nio.DirectByteBuffer x
What you can do is take a heap dump and look for objects which are storing data off heap, such as ByteBuffers. Those objects will appear small but are a proxy for larger off-heap memory areas. See if you can determine why lots of those might be retained.
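As a quick sanity check before digging through a heap dump, the JVM also exposes direct and mapped buffer usage through BufferPoolMXBean. A minimal sketch (run inside the affected JVM, e.g. wired into an existing status endpoint):

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class BufferPools {
    public static void main(String[] args) {
        // Prints the "direct" and "mapped" buffer pools of this JVM
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d used=%d bytes capacity=%d bytes%n",
                    pool.getName(), pool.getCount(),
                    pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}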
I am currently using Xcode 5 on my MacBook Pro, which is running OS X Yosemite. I am trying to create an outlet from a UIWebView to a header file. When I let go of my mouse to create the connection, Xcode freezes and crashes. This happens repeatedly, no matter what I have done. I have tried restarting Xcode and my MacBook. Does anyone else have this problem or know how to fix it? Thanks!
I have added the logs from the crash (I couldn't include the backtrace because it exceeded the maximum characters in the body):
Process: Xcode [1631]
Path: /Applications/Xcode.app/Contents/MacOS/Xcode
Identifier: com.apple.dt.Xcode
Version: 5.1.1 (5085)
Build Info: IDEFrameworks-5085000000000000~10
App Item ID: 497799835
App External ID: 520942841
Code Type: X86-64 (Native)
Parent Process: ??? [1]
Responsible: Xcode [1631]
User ID: 502
Date/Time: 2014-06-26 21:50:34.616 -0400
OS Version: Mac OS X 10.10 (14A238x)
Report Version: 11
Anonymous UUID: 6D3E70F1-91EB-79A5-FAB6-2929D7E8A66F
Sleep/Wake UUID: C86C5452-A655-43AA-9D11-92BE0F080F3E
Time Awake Since Boot: 11000 seconds
Time Since Wake: 510 seconds
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Application Specific Information:
ProductBuildVersion: 5B1008
ASSERTION FAILURE in /SourceCache/DVTFrameworks/DVTFrameworks-5074/DVTKit/ViewControllers/DVTControllerContentView.m:280
Details: <DVTControllerContentView: 0x7fe687d170a0> can only have one subview, its contentView
Object: <DVTControllerContentView: 0x7fe687d170a0>
Method: -addSubview:
Thread: <NSThread: 0x7fe682d086a0>{number = 1, name = main}
Hints: None
External Modification Summary:
Calls made by other processes targeting this process:
task_for_pid: 1
thread_create: 0
thread_set_state: 0
Calls made by this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by all processes on this machine:
task_for_pid: 4286
thread_create: 0
thread_set_state: 0
VM Region Summary:
ReadOnly portion of Libraries: Total=404.2M resident=289.4M(72%) swapped_out_or_unallocated=114.9M(28%)
Writable regions: Total=1.2G written=71.9M(6%) resident=141.4M(11%) swapped_out=0K(0%) unallocated=1.1G(89%)
REGION TYPE VIRTUAL
=========== =======
Activity Tracing 2048K
CG backing stores 4084K
CG image 19.3M
CG raster data 92K
CG shared images 1324K
CoreAnimation 880K
CoreAnimation (reserved) 12K reserved VM address space (unallocated)
CoreData Object IDs 4100K
CoreImage 264K
CoreUI image data 180K
Dispatch continuations 4096K
Foundation 16K
IOKit 13.8M
IOKit (reserved) 4K reserved VM address space (unallocated)
Image IO 700K
JS JIT generated code 8K
JS JIT generated code (reserved) 1.0G reserved VM address space (unallocated)
Kernel Alloc Once 8K
MALLOC 114.7M
MALLOC (admin) 32K
Memory Tag 242 12K
Memory Tag 249 156K
Memory Tag 251 80K
OpenCL 16K
OpenGL GLSL 256K
SQLite page cache 4352K
STACK GUARD 56.1M
Stack 15.7M
VM_ALLOCATE 16.9M
WebKit Malloc 464K
__DATA 60.1M
__GLSLBUILTINS 2588K
__IMAGE 528K
__LINKEDIT 100.5M
__TEXT 304.0M
__UNICODE 544K
mapped file 152.6M
shared memory 68K
=========== =======
TOTAL 1.9G
TOTAL, minus reserved VM space 879.9M
Model: MacBookPro7,1, BootROM MBP71.0039.B0E, 2 processors, Intel Core 2 Duo, 2.4 GHz, 8 GB, SMC 1.62f7
Graphics: NVIDIA GeForce 320M, NVIDIA GeForce 320M, PCI, 256 MB
Memory Module: BANK 0/DIMM0, 4 GB, DDR3, 1067 MHz, 0x859B, 0x435435313236344243313036372E4D313646
Memory Module: BANK 1/DIMM0, 4 GB, DDR3, 1067 MHz, 0x859B, 0x435435313236344243313036372E4D313646
AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x8D), Broadcom BCM43xx 1.0 (5.106.98.100.24)
Bluetooth: Version 4.3.0d49 14130, 3 services, 16 devices, 1 incoming serial ports
Network Service: Wi-Fi, AirPort, en1
Serial ATA Device: Hitachi HTS545032B9SA02, 320.07 GB
Serial ATA Device: HL-DT-ST DVDRW GS23N
USB Device: Built-in iSight
USB Device: Internal Memory Card Reader
USB Device: BRCM2046 Hub
USB Device: Bluetooth USB Host Controller
USB Device: IR Receiver
USB Device: Apple Internal Keyboard / Trackpad
Thunderbolt Bus:
I think you should declare an IBOutlet in your header file first, and then connect it in the Storyboard.
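For example, a minimal declaration in the header (the property name is just an illustration):

@property (nonatomic, weak) IBOutlet UIWebView *webView;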
Have you ever tried to clean your product (Product -> Clean)?
Then, open the xib file as source code and delete your IBOutlet info. Maybe Xcode got confused about your IBOutlet.
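In the xib/storyboard source, the connection looks something like this (the identifiers here are made up); deleting a stale <outlet> entry and reconnecting it is often enough:

<connections>
    <outlet property="webView" destination="xYz-12-abc" id="aBc-34-def"/>
</connections>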
There is a part of my app where I perform operations concurrently. They consist of initializing many CALayers and rendering them to bitmaps.
Unfortunately, during these operations (each takes about 2 seconds to complete on an iPhone 4), the Dirty Size indicated by VM Tracker spikes to ~120MB, while Allocations spike to ~12MB (and do not accumulate). From my understanding, dirty memory is memory that cannot be paged out, so often my app and all other apps in the background get killed.
Incident Identifier: 7E6CBE04-D965-470D-A532-ADBA007F3433
CrashReporter Key: bf1c73769925cbff86345a576ae1e576728e5a11
Hardware Model: iPhone3,1
OS Version: iPhone OS 5.1.1 (9B206)
Kernel Version: Darwin Kernel Version 11.0.0: Sun Apr 8 21:51:26 PDT 2012; root:xnu-1878.11.10~1/RELEASE_ARM_S5L8930X
Date: 2013-03-18 19:44:51 +0800
Time since snapshot: 38 ms
Free pages: 1209
Active pages: 3216
Inactive pages: 1766
Throttled pages: 106500
Purgeable pages: 0
Wired pages: 16245
Largest process: Deja Dev
Processes
Name UUID Count resident pages
geod <976e1080853233b1856b13cbd81fdcc3> 338
LinkedIn <24325ddfeed33d4fb643030edcb12548> 6666 (jettisoned)
Music~iphone <a3a7a86202c93a6ebc65b8e149324218> 935
WhatsApp <a24567991f613aaebf6837379bbf3904> 2509
MobileMail <eed7992f4c1d3050a7fb5d04f1534030> 945
Console <9925a5bd367a7697038ca5a581d6ebdf> 926 (jettisoned)
Test Dev <c9b1db19bcf63a71a048031ed3e9a3f8> 81683 (active)
MobilePhone <8f3f3e982d9235acbff1e33881b0eb13> 867
debugserver <2408bf4540f63c55b656243d522df7b2> 92
networkd <80ba40030462385085b5b7e47601d48d> 158
notifyd <f6a9aa19d33c3962aad3a77571017958> 234
aosnotifyd <8cf4ef51f0c635dc920be1d4ad81b322> 438
BTServer <31e82dfa7ccd364fb8fcc650f6194790> 275
CommCenterClassi <041d4491826e3c6b911943eddf6aaac9> 722
SpringBoard <c74dc89dec1c3392b3f7ac891869644a> 5062 (active)
aggregated <a12fa71e6997362c83e0c23d8b4eb5b7> 383
apsd <e7a29f2034083510b5439c0fb5de7ef1> 530
configd <ee72b01d85c33a24b3548fa40fbe519c> 465
dataaccessd <473ff40f3bfd3f71b5e3b4335b2011ee> 871
fairplayd.N90 <ba38f6bb2c993377a221350ad32a419b> 169
fseventsd <914b28fa8f8a362fabcc47294380c81c> 331
iapd <0a747292a113307abb17216274976be5> 323
imagent <9c3a4f75d1303349a53fc6555ea25cd7> 536
locationd <cf31b0cddd2d3791a2bfcd6033c99045> 1197
mDNSResponder <86ccd4633a6c3c7caf44f51ce4aca96d> 201
mediaremoted <327f00bfc10b3820b4a74b9666b0c758> 257
mediaserverd <f03b746f09293fd39a6079c135e7ed00> 1351
lockdownd <b06de06b9f6939d3afc607b968841ab9> 279
powerd <133b7397f5603cf8bef209d4172d6c39> 173
syslogd <7153b590e0353520a19b74a14654eaaa> 178
wifid <3001cd0a61fe357d95f170247e5458f5> 319
UserEventAgent <dc32e6824fd33bf189b266102751314f> 409
launchd <5fec01c378a030a8bd23062689abb07f> 126
**End**
On closer inspection, the dirty memory consists mostly of Image IO and Core Animation pages: multiple entries, each consisting of hundreds to thousands of pages. What do Image IO and Core Animation do exactly, and how can I reduce the dirty memory?
Edit: I tried doing this on a serial queue, with no improvement in the size of dirty memory.
Another question: how large is too large for dirty memory and allocations?
Updated:
- (void)render
{
    for (id thing in mylist) {
        @autoreleasepool {
            CALayer *layer = createLayerFromThing(thing);
            UIImage *img = [self renderLayer:layer];
            [self writeToDisk:img];
        }
    }
}
In createLayerFromThing(thing) I am actually creating a layer with a huge number of sublayers.
UPDATED
The first screenshot is for maxConcurrentOperationCount = 4, the second for maxConcurrentOperationCount = 1.
And since bringing down the number of concurrent operations barely made a dent, I decided to try maxConcurrentOperationCount = 10.
It's difficult to say what's going wrong without any details but here are a few ideas.
A. Use @autoreleasepool. CALayers generate bitmaps in the background, which in aggregate can take up lots of space if they are not freed in time. If you are creating and rendering many layers, I suggest adding an autorelease pool block inside your rendering loop. This won't help if ALL your layers are nested and needed at the same time for rendering.
- (void)render
{
    for (id thing in mylist) {
        @autoreleasepool {
            CALayer *layer = createLayerFromThing(thing);
            [self renderLayer:layer];
        }
    }
}
B. Also, if you are using CGBitmapContextCreate for rendering, are you calling the matching CGContextRelease? The same goes for CGColorRef.
C. If you are allocating memory with malloc or calloc, are you freeing it when done? One way to ensure that this happens is to pair every allocation with its free in the same scope.
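For point B, a minimal sketch of balanced Create/Release calls (width and height stand in for your real dimensions):

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8,
                                         width * 4, colorSpace,
                                         kCGImageAlphaPremultipliedLast);
// ... draw into ctx ...
CGImageRef image = CGBitmapContextCreateImage(ctx);
// ... use image ...
CGImageRelease(image);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);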
Post the code for the rendering loop to provide more context.
I believe there are two possibilities here:
The items you create are not autoreleased.
The memory you are taking is what it is, due to the number of concurrent operations you are doing.
In the first case the solution is simple. Send an autorelease message to the layers and images upon their creation.
In the second case, you could limit the number of concurrent operations by using an NSOperationQueue. Operation queues have a property called maxConcurrentOperationCount. I would try with a value of 4 and see how the memory behavior changes from what you currently have. Of course, you might have to try different values to get the right memory vs performance balance.
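A minimal sketch of the second approach, adapted from the question's render method (the queue name is illustrative):

NSOperationQueue *renderQueue = [[NSOperationQueue alloc] init];
renderQueue.maxConcurrentOperationCount = 4; // tune this value
for (id thing in mylist) {
    [renderQueue addOperationWithBlock:^{
        @autoreleasepool {
            CALayer *layer = createLayerFromThing(thing);
            UIImage *img = [self renderLayer:layer];
            [self writeToDisk:img];
        }
    }];
}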
Autorelease will wait until the end of the run loop to clean up. If you explicitly release, the memory can be reclaimed from the heap immediately, without filling the pool.
- (void)render
{
    for (id thing in mylist) {
        CALayer *layer = createLayerFromThing(thing); // assuming this thing is retained
        [self renderLayer:layer];
        [layer release]; // this layer is no longer needed
    }
}
Also run the build with Analyze and see if you have leaks, and fix them too.