Memory issue: crashes on iPad with iOS 4.2

I am developing an application that receives 600-700 KB of XML data from the server. I have to do some manipulation of that data, so once the data is received, memory usage grows from about 600 KB to 2 MB. The views already occupy about 4 MB of memory in the application.
While processing the XML data I do some manipulation (pre-parsing); memory rises from roughly 600 KB to 2 MB and finally drops back to 600 KB. Because of this increase, the application gets a memory warning. When the memory warning arrives I release all the views in the navigation controller, but that frees only about 1 MB. Even though I release all the views, the application crashes.
Please help me out with this issue. It happens on an iPad running iOS 4.2.
Thanks in advance

There's no magical answer here. You're using too much memory and you need to figure out how to use less. Without knowing more about your application it's difficult to be specific, though clearly loading in nearly 1 MB of data and playing around with it isn't helping.
Maybe you can stream the data rather than loading it all into memory? There's an open source library that helps: StreamingXMLParser.
Also, your view sounds huge (over a megabyte!). I'm sure there's some optimisation that can be performed there. Use Instruments to see where your memory is being used.
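As an illustration of the streaming approach, here is a minimal sketch using Apple's built-in NSXMLParser (not the StreamingXMLParser library mentioned above); it processes the document element by element instead of building it all up in memory, and the class name and the "record" element are placeholders, not from the original app:

    // Minimal SAX-style parsing sketch (pre-ARC, to match the iOS 4.2 era).
    // "FeedParser" and the element name "record" are illustrative placeholders.
    #import <Foundation/Foundation.h>

    @interface FeedParser : NSObject <NSXMLParserDelegate>
    - (void)parseData:(NSData *)xmlData;
    @end

    @implementation FeedParser

    - (void)parseData:(NSData *)xmlData
    {
        NSXMLParser *parser = [[NSXMLParser alloc] initWithData:xmlData];
        [parser setDelegate:self];
        [parser parse];      // delegate callbacks fire as elements are encountered
        [parser release];
    }

    - (void)parser:(NSXMLParser *)parser
    didStartElement:(NSString *)elementName
       namespaceURI:(NSString *)namespaceURI
      qualifiedName:(NSString *)qName
         attributes:(NSDictionary *)attributeDict
    {
        if ([elementName isEqualToString:@"record"]) {
            // Handle one record at a time, then discard it, so the whole
            // pre-parsed representation never sits in memory at once.
        }
    }

    @end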

Maybe only 1 MB is released because of a parameter value that can be altered, or you may need to manually trigger a garbage-collection pass during your development session, if that is relevant to the language in use. You could also split the XML input into sections if possible, or compact/compress the XML when it is stored, if you have access to the script or code in a way that allows that.

Related

Why would Xcode show MUCH more memory use than Instruments for SceneKit app?

I'm trying to debug why our SceneKit-based app is using so much memory but Xcode and Instruments / Allocations seem to have very different values for the amount of memory being used. When I look in Xcode I see something like 600 MB but when I transfer the same running session over to Instruments / Allocations, I see a very different number for persistent bytes, like 150 MB.
Which one is correct? Why the difference? Are they measuring different things?
(Regardless of whether I transfer an Xcode debug session or start fresh in Instruments, it doesn't seem to make much difference.)
The reason that I care is that iOS is killing the app for excessive memory use (according to Xcode) but I can't seem to find the problem via Instruments.
I've tried turning off all GPU and Metal debug options but they don't seem to make a difference.
Which one is correct?
My intuition is: Instruments. It uses DTrace to (sorry) instrument your code and watch actual allocations and deallocations as they happen, at the expense of performance. The Xcode debug navigator memory graph is more of an outside view designed to give a very general sense of what's happening. That is exactly why the latter offers you a way to switch to the former - because that (Instruments) is where you're going to get real measurements.
(However, let’s keep in mind that Instruments may fail to include in the total you’re seeing some virtual memory backing stores for graphics. There are plenty of WWDC videos discussing this topic in more detail. )
I know that this answer is quite late, but for the sake of future developers with the same problem, I would advise you to check the images in your assets folder. If any of your images have dimensions larger than 1000 x 1000, you should scale them down. In that example, the image comprises 1,000,000 pixels. Given how images are loaded into memory (4 bytes per pixel), this means 4 MB of memory is used just to load the image. Unbeknownst to me, I had an image of roughly 3600 x 4000 in my assets folder. Doing the maths, that was over 50 MB of memory usage!
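As a hedged illustration of that arithmetic (a decoded image costs width x height x 4 bytes regardless of its file size on disk), scaling large assets down before display is a common fix; the helper below is a generic sketch, not code from the answer, and targetSize is whatever the UI actually needs:

    // Scale an image down so the decoded bitmap is sized for the screen rather
    // than for the original asset. With scale 1.0 the context is targetSize in
    // pixels; pass 0.0 instead to match the device's screen scale.
    UIImage *ScaledImage(UIImage *image, CGSize targetSize)
    {
        UIGraphicsBeginImageContextWithOptions(targetSize, NO, 1.0);
        [image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
        UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        // e.g. a 3600 x 4000 asset (~57.6 MB decoded) reduced to
        // 1024 x 1138 (~4.7 MB decoded).
        return scaled;
    }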

Using Instruments to Work Through Low Memory Warnings

I am trying to work through some low memory conditions using instruments. I can watch memory consumption in the Physical Memory Free monitor drop down to a couple of MB, even though Allocations shows that All Allocations is about 3 MB and Overall Bytes is 34 MB.
I have started to experience crashing since I moved some operations to a separate thread with an NSOperationQueue. But I wasn't using instruments before the change. Nevertheless, I'm betting I did something that I can undo to stop the crashes.
By the way, it is much more stable without Instruments or the debugger connected.
I have the leaks down to almost none (maybe a hundred bytes max before a crash).
When I look at Allocations, I only see very primitive objects. And the total memory reported by it is also very low. So I can't see how my app is causing these low memory warnings.
When I look at Heap Shots from the start up, I don't see more than about 3 MB there, between the baseline and the sum of all the heap growth values.
What should I be looking at to find where the problem is? Can I isolate it to one of my view controller instances, for example? Or to one of my other instances?
What I have done:
I powered the device off and back on, and this made a significant improvement. Instruments is not reporting a low memory warning. Also, I noticed that Physical Free Memory at start-up was only about 7 MB before restarting, and it's about 60 MB after restarting.
However, I am seeing a very regular (periodic) drop in Physical Free Memory, dropping from 43 MB to 6 MB (and then back up to 43 MB). I would like to know what is causing that. I don't have any timers running in this app. (I do have some performSelector:afterDelay: calls, but those aren't active during these tests.)
I am not using ARC.
The Allocations and Leaks instruments only show what the objects themselves take, but not what their underlying non-object structures (the backing stores) take. For example, for UIImages they will show only a few allocated bytes. This is because a UIImage object itself only takes those bytes, while the CGImageRef that actually contains the image data is not an object, and it is not taken into account by these instruments.
If you are not doing it already, try running the VM Tracker at the same time you run the Allocations instrument. It will give you an idea of the type of memory that is being allocated. For iOS, the "Dirty Memory" shown by this instrument is what normally triggers the memory warnings. Dirty memory is memory that cannot be automatically discarded by the VM system. If you see lots of CGImages, images might be your problem.
Another important concept is abandoned memory. This is memory that was allocated and is still referenced somewhere (and as such is not a leak), but is not used. An example of this type of memory is a cache of some sort that is not freed upon a memory warning. A way to find this out is to use heapshot analysis. Press the "Mark Heap" button of the Allocations instrument, do some operation, return to the previous point in the app, and press "Mark Heap" again. The second heap shot should show you what new objects have been allocated between those two moments, and might shed some light on the mystery. You could also repeat the operation while simulating a memory warning to see if that behaviour changes.
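As a hedged illustration of the cache case (the DataCache class below is made up, not from the answer), wiring a cache to the system memory-warning notification keeps it from turning into abandoned memory:

    #import <UIKit/UIKit.h>

    // "DataCache" is an example class, pre-ARC style to match the question.
    @interface DataCache : NSObject {
        NSMutableDictionary *cache;
    }
    - (void)purge;
    @end

    @implementation DataCache

    - (id)init
    {
        if ((self = [super init])) {
            cache = [[NSMutableDictionary alloc] init];
            // Drop the cached objects whenever iOS posts a memory warning, so
            // they never show up as abandoned memory in a heapshot comparison.
            [[NSNotificationCenter defaultCenter]
                addObserver:self
                   selector:@selector(purge)
                       name:UIApplicationDidReceiveMemoryWarningNotification
                     object:nil];
        }
        return self;
    }

    - (void)purge
    {
        [cache removeAllObjects];   // cached objects are released here
    }

    - (void)dealloc
    {
        [[NSNotificationCenter defaultCenter] removeObserver:self];
        [cache release];
        [super dealloc];
    }

    @end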
Finally, I recommend reading this article, which explains how all this works: http://liam.flookes.com/wp/2012/05/03/finding-ios-memory/.
The difference between physical memory from VM Tracker and allocated memory from "Allocations" is due to the major differences of how these instruments work:
Allocations traces what your app does by installing a tap in the functions that allocate memory (malloc, NSAllocateObject, ...). This method yields very precise information about each allocation, like position in code (stack), amount, time, type. The downside is that if you don't trace every function (like vm_allocate) that somehow allocates memory, you lose this information.
VM Tracker samples the state of the system's virtual memory in regular intervals. This is a much less precise method, as it just gives you an overall view of the current state. It operates at a low frequency (usually something like every three seconds) and you get no idea of how this state was reached.
A known culprit of invisible allocations is CoreGraphics: It uses a lot of memory when decompressing images, drawing bitmap contexts and the like. This memory is usually invisible in the Allocations instrument. So if your app handles a lot of images it is likely that you see a big difference between the amount of physical memory and the overall allocated size.
Spikes in physical memory might result from big images being decompressed, downsized, and then used only at screen resolution in some view's or layer's contents. All this might happen automatically in UIKit without your code being involved.
I have the leaks down to almost none (maybe a hundred bytes max before a crash).
In my experience, even very small leaks are a "dangerous" sign. In fact, I have never seen a leak larger than 4K, and the leaks I usually see are a couple of hundred bytes. Still, they usually "hide" behind them a much larger amount of memory that is lost.
So, my first suggestion is: get rid of those leaks, even though they seem small and insignificant -- they are not.
I have started to experience crashing since I moved some operations to a separate thread with an NSOperationQueue.
Is there a chance that the operation you moved to the thread is responsible for the pulsing peak? Could it be spawned more than once at a time?
As to the peaks, I see two ways you can go about them:
use the Time Profiler in Instruments and try to understand what code is executing while you see the peak rising;
selectively comment out portions of your code (I mean: entire parts of your app -- e.g., replace a "real" controller with a basic/empty UIViewController, etc) and see if you can identify the culprit this way.
I have never seen such a pulsating behaviour, so I assume it depends on your app or on your device. Have you tried with a different device? What happens in the simulator (do you see the peak)?
Reading your text, I have the impression that you might have some hidden leaks. I could be wrong, but are you 100% sure that you have checked all leaks?
I remember one particular project I was working on a few months ago; I had the same kind of issue, and no leaks in Instruments. My memory kept growing and I got memory warnings... I started to add logging in some important dealloc methods. And I saw that some objects, subviews (UIViews), were "leaking". But they were not seen by Instruments because they were still attached to a main view.
Hope this was helpful.
In the Allocations instrument, make sure you have "Only Track Active Allocations" checked. I think this makes it easier to see what is actually happening.
1. Have you run Analyze on the project? If there are any analyzer warnings, fix them first.
2. Are you using any CoreFoundation stuff? Some of the CF methods have ... strange ... interactions with the ObjC runtime and memory management (they shouldn't, AFAICS, but I've seen some odd behaviour with the low-level image and AV manipulations where it seems like memory is being used outside the core app process - maybe the OS calls being used by Apple?). NB: there have also, in previous versions of iOS, been a few memory leaks inside Apple's CF methods. IIRC the last of those was fixed in iOS 5.0.
3. Are you doing something with a large number of / large-sized CALayer instances (or UIViews with CG* methods, e.g. a custom drawRect: method in a UIView)? NB: I have seen the exact behaviour you describe caused by points 2 and 3 above, either in the CF libraries, or in the Apple windowing system when it tries to work with image data that was originally generated inside CF libraries - or which found its way into CALayers. It seems that Instruments does NOT correctly track memory usage inside the CA / CG system; this area is a bit complex since Apple shuffles data back and forth between CPU and GPU RAM, but it's disappointing that the memory usage seems to simply "disappear" when it clearly is still being used!
4. Final thought - are you using the easily-missed right-hand sidebar in Instruments? Instruments hides it by default every time you run it (so you have to keep manually opening it), which is unfortunate, since some of the core information only exists in that sidebar. I've worked with several people who didn't even know it existed :)

iPhone memory warnings and crashes - but Instruments showing lowish memory use

I have a strange memory issue I'm having problems resolving and would appreciate some advice as to where else to look.
The program I have (an iPhone app) has a function whereby it basically downloads lots of files, processes those that are JSON, and stores the rest to disk. The JSON processing is CPU intensive and can take several seconds per file, so I have an NSOperationQueue with maxConcurrentOperationCount limited to 1 that handles all the heavy lifting, and a queue that manages the multiple files to download.
Ever since iOS 5 came out, the app has had problems completing the download sequence without crashing, and so far what I have tried is:
1) Changed the performSelectorInBackground:-based JSON processing to use a single NSOperationQueue, so as to limit the number of background threads working with large objects.
2) Added NSAutoreleasePool instances inside loops that create multiple large, transient objects.
3) Flushed the sharedURLCache to ensure the files aren't hanging around in the system cache.
4) Stored the JSON objects to disk using NSKeyedArchiver and passed the filenames between threads rather than the actual objects, to again try to mitigate the number and size of retained objects currently in use. (A rough sketch of these changes follows this list.)
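For reference, a rough sketch (not the original code; pre-ARC, with made-up names such as downloadedFilePaths and ParseJSON) of what points 1-4 can look like together:

    // 1) A serial queue: one heavy parse at a time.
    NSOperationQueue *jsonQueue = [[NSOperationQueue alloc] init];
    [jsonQueue setMaxConcurrentOperationCount:1];

    [jsonQueue addOperationWithBlock:^{
        for (NSString *path in downloadedFilePaths) {   // hypothetical file list
            // 2) Drain transient objects per iteration.
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSData *data = [NSData dataWithContentsOfFile:path];
            id parsed = ParseJSON(data);                // placeholder for the JSON step
            // 4) Archive to disk and pass the filename around,
            //    not the parsed object graph.
            [NSKeyedArchiver archiveRootObject:parsed
                                        toFile:[path stringByAppendingPathExtension:@"archive"]];
            [pool drain];   // this iteration's transient objects are released here
        }
    }];

    // 3) Flush the shared URL cache so downloaded responses don't linger.
    [[NSURLCache sharedURLCache] removeAllCachedResponses];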
All of these at first seemed to make a difference, and when I look at the memory allocations, I've now got the peak usage down from just over 20 MB (hence no wonder it was crashing) to under 10 MB, and yet the app is still crashing with low memory as before.
I'm trying to trace what is eating the memory causing the app to crash and on this occasion I'm having real problems persuading Instruments to tell me anything useful.
Here's a typical trace (on an iPhone 3GS running iOS 4.3.5)
You can see that the PEAK usage was a tad over 7MB and yet shortly after, you can see the 2 flags pertaining to low memory, and then low memory urgent, followed by the app terminating shortly thereafter.
If I use the Memory Monitor, the cause of the crash seems clear enough - physical memory is being exhausted; look at the light green trace below. The low memory warnings coincide (not surprisingly) with the physical memory running out.
There are no leaks showing FWIW either (I've done that in other runs).
It's not image caches or NSURLConnection caches, and the only thing I can think of is that perhaps there are some bad leaks that aren't being detected. But I'm having trouble identifying them: if I click into All Allocations to see the objects that are live, and then do a command-A to select them all (in order to paste them into a spreadsheet to see where the memory seems to be), Instruments beachballs the moment I hit command-C to copy them and never recovers.
I really can't figure out what's going on. Does anyone have some advice on how to persuade Instruments to show me some more useful information about what is using this memory?
Sorry I can't post any meaningful code fragments ... hopefully the instruments screenshots at least give you an idea about where I'm coming from.
The Leaks instrument isn't terribly useful for figuring out anything but the obvious leaks in your app.
What you are describing is an ideal candidate for heapshot analysis.
tl;dr: Heapshot analysis allows you to see exactly how the heap of your application grows between any two points in time (where you determine the points).

When to call SetProcessWorkingSetSize? (Convincing the memory manager to release the memory)

In a previous post ( My program never releases the memory back. Why? ) I show that FastMM can cache (read: hold for itself) pretty large amounts of memory. If your application has just loaded a large data set into RAM, then after releasing the data you will see that an impressive amount of RAM is not released back to the memory pool.
I looked around and it seems that calling the SetProcessWorkingSetSize API function will "flush" the cache to disk. However, I cannot decide when to call this function. I wanted to call it at the end of the OnClick event of the button that performs the RAM-intensive operation. However, some people are saying that this may cause an AV (access violation).
If anybody used this function successfully, please let me (us) know.
Many thanks.
Edit:
1. After releasing the data set, the program still takes large amounts of RAM. After calling SetProcessWorkingSetSize the size returns to a few MB. Some argue that nothing was released back. I agree. But the memory footprint is now small AND it does NOT increase back after using the program normally (for example when performing normal operations that do not involve loading large data sets). Unfortunately, there is no way to demonstrate that the memory swapped to disk is ever loaded back into memory, but I think it is not.
2. I have already demonstrated (I hope) this is not a memory leak:
My program never releases the memory back. Why?
How to convince the memory manager to release unused memory
If SetProcessWorkingSetSize would solve your problem, then your problem is not that FastMM is keeping hold of memory, since this function just trims the working set of your application by writing the memory in RAM to the page file. Nothing is released back to Windows.
In fact, you have only made accessing that memory slower, since it now has to be read from disc. This method has the same effect as minimising your application: Windows then presumes you are not going to use the application again soon and also writes the working set in RAM to the pagefile. Windows does a good job of deciding when to write RAM to the pagefile and tries to keep the most-used memory in RAM as long as it can. It will make the working set smaller (write to the pagefile) when there is little RAM left. I would not mess with it just to give the illusion that your program is using less memory, while in fact it is using just as much as before, only now it is slower to access. Memory that is accessed again will be loaded into RAM again and make the working set grow again. Touching less memory keeps the working set smaller.
So no, this will not help you force FastMM to release the memory. If your goal is for your application to use less memory you should look elsewhere. Look for leaks, look for heap fragmentation, look for optimisations, and if you think FastMM is keeping you from doing so, you should try to find facts to support it. If your goal is to keep your working set small, you could try to keep your memory access local. Maybe FastMM or another memory manager could help you with that, but it is a very different problem from using too much memory. And maybe this function does help you solve the problem you are having, but I would use it with care, and certainly not just to keep up the illusion that your program has low memory usage.
I agree with Lars Truijens 100%. If you don't, you can check the FastMM memory usage via the FastMM calls GetMemoryManagerState and GetMemoryManagerUsageSummary before and after calling the SetProcessWorkingSetSize API.
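A minimal Delphi sketch of that check, assuming FastMM4 is in the project's uses clause (the log format is illustrative, and Writeln assumes a console build; use OutputDebugString or a log file in a GUI app):

    uses
      Windows, SysUtils, FastMM4;

    procedure LogMemoryState(const Stage: string);
    var
      Usage: TMemoryManagerUsageSummary;
    begin
      GetMemoryManagerUsageSummary(Usage);
      // AllocatedBytes = what your code still holds;
      // OverheadBytes  = what FastMM keeps reserved/cached on top of that.
      Writeln(Format('%s: allocated = %d KB, overhead = %d KB',
        [Stage, Usage.AllocatedBytes div 1024, Usage.OverheadBytes div 1024]));
    end;

    procedure TrimWorkingSet;
    begin
      LogMemoryState('before trim');
      // Passing -1 ($FFFFFFFF) for both sizes asks Windows to trim the working
      // set; nothing is returned to the memory manager, pages are just paged out.
      SetProcessWorkingSetSize(GetCurrentProcess, $FFFFFFFF, $FFFFFFFF);
      LogMemoryState('after trim');
    end;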
Are you sure there is a problem? Working sets might only decrease when there really is a memory shortage.
Problem solved:
I don't need to use SetProcessWorkingSetSize. FastMM will eventually release the RAM.
To confirm that this behaviour is generated by FastMM (as suggested by Barry Kelly) I created a second program that allocated A LOT of RAM. As soon as Windows ran out of RAM, my program's memory utilization returned to its original value.
I used this function just once, when I implemented TWebBrowser. That component was taking up a lot of memory even after I destroyed the instance.

40 million page faults. How to fix this?

I have an application that loads 170 files (let's say they are text files) from disk into individual objects and keeps them in memory all the time. The memory is allocated once, when I load those files from disk, so there is no memory fragmentation involved. I also use FastMM to make sure my application never leaks memory.
The application compares all these files with each other to find similarities. Over-simplified, we can say that we compare text strings, but the algorithm is way more complex, as I have to allow some differences between strings. Each file is about 300 KB. Loaded in memory (in the object that holds it), it takes about 0.4 MB of RAM. So the running app takes about 60 MB of RAM (working set). It processes the data for about 15 minutes. The thing is that it generates over 40 million page faults.
Why? I have about 2 GB of free RAM. From what I know, page faults are slow. How much are they slowing down my program?
How can I optimize the program to reduce these page faults? I guess it has something to do with data locality. Does anybody know some example algorithms for this (Delphi)?
Update:
But looking at the number of page faults (no other application in Task Manager comes close to mine, not even by far) I guess that I could increase the speed of my application IF I manage to optimize memory layout (reduce the page faults).
Delphi 7, Win 7 32 bit, RAM 4GB (3GB visible, 2GB free).
Caveat - I'm only addressing the page faulting issue.
I cannot be sure, but have you considered using memory-mapped files? That way Windows will use the files themselves as the paging file (rather than the main paging file, pagefile.sys). If the files are read-only, the number of page faults should theoretically decrease, as the pages won't need to be written out to disk via the paging file; Windows will just load the data from the file itself as needed.
Now, to reduce pages being swapped in and out, you need to try to go through the data in one direction, so that as new data is read, older pages can be discarded forever. Here is where you trade off going over the files again versus caching data - the cache has to be stored somewhere.
Note that memory-mapped files are how Windows loads .dlls and .exes, amongst other things. I've used them to scan through gigabyte files without hitting memory limits (we had MBs in those days and not GBs of RAM).
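A bare-bones Delphi sketch of mapping one file read-only (uses the Windows unit; the file name parameter is a placeholder and error handling is omitted):

    procedure ScanMappedFile(const FileName: string);
    var
      FileHandle, MapHandle: THandle;
      Data: PAnsiChar;
      Size: DWORD;
    begin
      FileHandle := CreateFile(PChar(FileName), GENERIC_READ, FILE_SHARE_READ,
        nil, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
      Size := GetFileSize(FileHandle, nil);
      // PAGE_READONLY: the pages are backed by the file itself, not by
      // pagefile.sys, so the VM system can simply discard them when needed.
      MapHandle := CreateFileMapping(FileHandle, nil, PAGE_READONLY, 0, 0, nil);
      Data := MapViewOfFile(MapHandle, FILE_MAP_READ, 0, 0, 0); // map whole file
      try
        // Scan Data[0 .. Size - 1] sequentially; pages fault in on first touch
        // and can be dropped once you have moved past them.
      finally
        UnmapViewOfFile(Data);
        CloseHandle(MapHandle);
        CloseHandle(FileHandle);
      end;
    end;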
However, from the data you describe, I'd suggest that not having to go back over the files will reduce the amount of repaging going on.
On my machine most page faults are reported for developer studio, which is reported to have 4M page faults after 30+ minutes of total CPU time. You get 10 times more, in half the time. And memory is scarce on my system. So 40M faults seems like a lot.
It could just maybe be that you have a memory leak.
The working set is only the physical memory in use by your application. If you leak memory and don't touch it, it will get paged out. You will see the virtual memory usage (or page file use) increase. Those pages might be swapped back in when the memory manager walks the heap, only to be swapped out again by Windows.
Because you have a lot of RAM, the swapped out pages will stay in physical memory, as nobody else needs them. (a page recovered from RAM counts as a soft fault, from disk as a hard one)
Do you use an exponential resize scheme?
If you grow a block of memory in too-small increments while loading, it might constantly request large blocks from the system, copy the data over, and then release the old block (assuming that FastMM (de)allocates very large blocks directly from the OS).
Maybe somehow this causes a loop where the OS releases memory from your app's process, and then adds it again, causing page faults on first write.
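A hedged sketch of what geometric (exponential-style) growth looks like in Delphi, with made-up names (TByteBuffer, AppendChunk):

    type
      TByteBuffer = array of Byte;

    // Grow the buffer by roughly 1.5x (plus some slack) whenever it is full, so
    // the memory manager reallocates and copies O(log n) times instead of once
    // per small append.
    procedure AppendChunk(var Buffer: TByteBuffer; var Used: Integer;
      const Chunk; ChunkSize: Integer);
    begin
      if Used + ChunkSize > Length(Buffer) then
        SetLength(Buffer, ((Used + ChunkSize) * 3) div 2 + 4096);
      Move(Chunk, Buffer[Used], ChunkSize);
      Inc(Used, ChunkSize);
    end;

    // Usage: AppendChunk(Buffer, Used, SourceBytes[0], Length(SourceBytes));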
Also avoid the TStringList.Load* methods for very large files; IIRC these consume twice the space needed.
