I'm developing an Android application in C#, using Xamarin. It uses a lot of memory. For a long time I ran it successfully on a weaker 2 GB device. Now I've switched to a 4 GB phone and suddenly I get an out-of-memory exception, caused by creating larger bitmaps.
Here is the relevant output:
(13022): Starting a blocking GC Alloc
(13022): Clamp target GC heap from 271MB to 256MB
(13022): Alloc concurrent mark sweep GC freed 4(96B) AllocSpace objects, 0(0B) LOS objects, 0% free, 255MB/256MB, paused 172us total 13.525ms
(13022): Forcing collection of SoftReferences for 833KB allocation
(13022): Starting a blocking GC Alloc
(13022): Clamp target GC heap from 271MB to 256MB
(13022): Alloc concurrent mark sweep GC freed 5(120B) AllocSpace objects, 0(0B) LOS objects, 0% free, 255MB/256MB, paused 175us total 13.474ms
(13022): Out of memory: Heap Size=256MB, Allocated=255MB, Capacity=256MB
I tried all possible combinations of setting Java Max Heap Size = 1G and adding android:largeHeap="true" to the manifest, as was recommended here, but it still says I only have 256MB and crashes at the same point. Any ideas why I don't get more heap memory? There is plenty of free memory in the system. When the time comes I will do some optimizations, but for now I want to use the full capabilities of my test device so I can code the easy way. I have looked at various articles and questions, and one of the two highlighted actions always solved the problem. I have no idea what is wrong in my code.
Edit:
Here is the whole manifest file:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="AlienChessAndroid.AlienChessAndroid" android:versionCode="1" android:versionName="1.0" android:largeHeap="true" android:installLocation="auto">
  <uses-sdk android:minSdkVersion="23" />
  <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
  <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
  <uses-permission android:name="android.permission.INTERNET" />
  <application android:label="Alien Chess" android:icon="@drawable/Alien">
  </application>
</manifest>
The attribute android:largeHeap="true" belongs on the application tag; please check the official documentation here: application. You put this attribute on the manifest tag, which is why android:largeHeap="true" doesn't work for your app.
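For reference, the relevant part of your manifest with the attribute moved onto the application element would look like this (everything else unchanged):
<application android:label="Alien Chess" android:icon="@drawable/Alien" android:largeHeap="true">
</application>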
By the way, and maybe this is off topic: since your problem is caused by large bitmaps, using native memory (NDK & JNI) can actually bypass the heap size limitation. You can check this case: JNI bitmap operations, which helps avoid OOM when working with large images.
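If going fully native is more than you need, a lighter mitigation (not part of the answer above, just a sketch of the standard BitmapFactory downsampling approach, shown in Java; the method and parameter names are my own, and reqWidth/reqHeight must be positive) is to decode large images at a reduced resolution so they never occupy the full heap:
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class BitmapLoader {
    // Decode a bitmap scaled down by a power-of-two factor so that both
    // dimensions stay at or above roughly reqWidth x reqHeight.
    public static Bitmap decodeSampled(String path, int reqWidth, int reqHeight) {
        // First pass: read only the image dimensions, without allocating pixels.
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeFile(path, options);

        // Choose the largest power-of-two sample size that keeps both
        // dimensions at or above the requested size.
        int inSampleSize = 1;
        while (options.outWidth / (inSampleSize * 2) >= reqWidth
                && options.outHeight / (inSampleSize * 2) >= reqHeight) {
            inSampleSize *= 2;
        }

        // Second pass: decode the downsampled pixels.
        options.inJustDecodeBounds = false;
        options.inSampleSize = inSampleSize;
        return BitmapFactory.decodeFile(path, options);
    }
}
The same BitmapFactory options are exposed by the Xamarin.Android bindings, so the idea should carry over to C# directly.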
I have a service which intermittently starts gobbling up server memory over time and needs to be restarted to free it. I turned on +ust with gflags, restarted the service, and started taking scheduled UMDH snapshots. When the problem reoccurred, the resource manager reported multiple GB under Working set and Private bytes, but the UMDH snapshots account for only a few MB of allocations in the process's heaps.
At the top of UMDH snapshot files, it mentions "Only allocations for which the heap manager collected a stack are dumped".
How can an allocation in a process leave no trace when the +ust flag was specified?
How can I find out where/how these GBs were allocated?
UMDH is short for User-Mode Dump Heap. The term "heap" is key here: it refers to the C++ heap manager only. This means that any memory allocated by means other than the C++ heap manager is not tracked by UMDH.
This can be:
direct calls to VirtualAlloc()
memory used by .NET, since .NET has its own heap manager
But even for C++, allocations larger than 512 kB cannot be managed efficiently by the C++ heap manager, so it simply forwards them to VirtualAlloc() and does not create a heap segment for such large allocations.
How can I find out where/how these GBs were allocated?
For direct calls to VirtualAlloc(), the WinDbg command !address -summary may give an answer. For .NET, the SOS extension and its !dumpheap -stat command can give an answer.
We have some problems with Dart. It seems that after some period of time the garbage collector can't clear the memory in the VM, so the application hangs. Has anyone else seen this issue? Are there any memory limits?
You should reuse your objects instead of creating new ones; use the object pool pattern (a minimal sketch appears after the links below):
http://en.wikipedia.org/wiki/Object_pool_pattern
Be careful with canvases and their proper destruction.
Some more articles on GC performance:
http://blog.tojicode.com/2012/03/javascript-memory-optimization-and.html
http://qt-project.org/doc/qt-5/qtquick-performance.html
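The pattern itself is language-agnostic; here is a minimal sketch of its shape (written in Java rather than Dart, with made-up names, since I don't have your code): recycle instances instead of allocating fresh ones, so the GC has less short-lived garbage to chase.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Minimal object pool: hand out recycled instances instead of allocating
// a new one on every use.
public final class Pool<T> {
    private final Deque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public Pool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Return a pooled instance if one is available, otherwise create a new one.
    public T acquire() {
        T item = free.poll();
        return item != null ? item : factory.get();
    }

    // Give an instance back to the pool; the caller must reset its state first.
    public void release(T item) {
        free.push(item);
    }
}
Usage would be along the lines of Pool<StringBuilder> pool = new Pool<>(StringBuilder::new); then acquire() before use and release() afterwards.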
Are there any memory limits?
Yes. Dart apparently runs with maximum sizes that can be configured at launch time:
How to run a dart program with big memory?
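For reference, the linked question comes down to passing a VM option at startup. If I remember correctly it is something like the line below, but treat the flag name as an assumption and check the link for the exact spelling on your SDK version:
dart --old_gen_heap_size=2048 main.dart   # assumed flag; value is the old-generation heap limit in MB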
(The following applies to all garbage-collected languages ...)
If your application starts to run out of space (i.e. the heap is slowly filling with objects that the GC can't remove), you may get into a nasty situation where the GC runs more and more frequently and manages to reclaim less and less memory each time. Eventually you run out of memory, but before that happens the application gets really slow.
The solution is typically to do one or both of the following:
Find out what is causing the memory to run out. It is typically not that you are allocating too many objects; rather, the typical cause is that the unwanted objects are all still reachable via some data structure that your application has built (see the sketch after this list).
Set the "quick death" tuning option for the GC .... if available. For example, Java garbage collectors can be configured to measure the time spent garbage collecting. (The GC overhead.) When the GC overhead exceeds a preset ratio, the Java virtual machine throws an OutOfMemoryError to "pull the plug".
Does the JVM ever give back to the OS memory that it has previously allocated for the heap?
For example, I have a JVM that is set to -Xmx5120M and I have actually used all of that memory, doing things that cause the heap to fill up. Let's say a full GC happens, which brings actual heap usage down significantly. Will that drop cause the total heap size to be reduced, presumably to just above actual usage levels, with the "cleared" memory returned to the OS? Or will the memory allocated to the JVM remain at the high level even though it may not be "actively" using all of it in the heap now?
Slim down vs. hoard, I guess.
EDIT: I'm interested in the Sun/Oracle JVM (e.g. 1.6.0_33, 1.7+, or the like)
I'm definitely confused on this point.
I have an iPad application that shows 'Live Bytes' usage of 6-12 MB in the object allocation instrument. If I pull up the Memory Monitor or Activity Monitor, the 'Real Memory' column consistently climbs to around 80-90 MB after some serious usage.
So do I have a normal memory footprint or a high one?
This answer and this answer claim you should watch 'Live Bytes', as the 'Real Memory' column shows memory blocks that have been released but that the OS hasn't yet reclaimed.
On the other hand, this answer claims you need to pay attention to that memory monitor, as the 'Live Bytes' doesn't include things like interface elements.
What is the deal with iOS memory footprint!? :)
Those are simply two different metrics for measuring memory use. Which one is the "right" one depends on what question you're trying to answer.
In a nutshell, the difference between "live bytes" and "real memory" is the difference between the amount of memory currently used for stuff that your app has created and the total amount of physical memory currently attributed to your app. There are at least two reasons that those are different:
code: Your app's code has to be loaded into memory, of course, and the virtual memory system surely attributes that to your app even though it's not memory that your app allocated.
memory pools: Most allocators work by maintaining one or more pools of memory from which they can carve off pieces for individual objects or allocated memory blocks. Most implementations of malloc work that way, and I expect that the object allocator does too. These pools aren't automatically resized downward when an object is deallocated -- the memory is just marked 'free' in the pool, but the whole pool will still be attributed to your app.
There may be other ways that memory is attributed to your app without being directly allocated by your code, too.
So, what are you trying to learn about your application? If you're trying to figure out why your app crashed due to low memory, look at both "live bytes" (to see what your app is using now) and "real memory" (to see how much memory the VM system says your app is using). If you're trying to improve your app's memory performance, looking at "live bytes" or "live objects" is more likely to help, since that's the memory that you can do something about.
Seeing as how I wrote the last answer you linked to, I'll have to stand by that. If you want a total, accurate count of the current memory usage for your application, use the Memory Monitor instrument.
For reasons that I describe in this answer, Allocations hides the memory sizes of certain elements, meaning that its memory usage totals are significantly lower than your application's in-memory size. Many people find this out the hard way when they try to get their application working on older iOS devices. On the older hardware, you had a hard memory ceiling of ~30 MB; if you exceeded that, your application was hard-killed.
Many developers (myself included) saw that we only had ~1-2 MB of live bytes in Allocations and thought we were good, until our applications started receiving memory warnings and early terminations. If you looked at Memory Monitor, you could see the true in-memory size of these applications being >20 MB, and you could see the applications being terminated the instant they crossed the 30 MB barrier in Memory Monitor.
Therefore, if you want an accurate assessment of your total application memory usage, use Memory Monitor. Allocations is great to find out the specific objects that are in memory, particularly when you use the heap shots to find things that might be accumulating (as leaks, retain cycles, or for other reasons). Just don't trust it when determining your application's actual size in memory.
'Live Bytes' means memory allocated by your code (for example by malloc), so you have access to this memory. 'Real Memory' shows the physical amount of memory used by your app. This also includes OpenGL textures, (possibly) sounds from OpenAL, and so on.
'Live Bytes' is useful for checking when you allocate and release memory in your code. 'Real Memory' is a good indicator of memory-optimization efficiency, and its overhead is what causes 'low memory' warnings.
Leaks:
None
ObjectAlloc:
Net Bytes: 4,332,512
# Net: 26,696
Overall Bytes: 103,769,552
# Overall: 738,987
Activity Monitor (MyApp):
# Thread: 6
Real Memory: 63.65 MB
Virtual Memory: 209.45 MB
Memory Monitor showed the same readings as Activity Monitor. I don't know whether these readings are good or bad, but the memory indicated by Activity Monitor is horrifying. Should I be worried? Can I somehow estimate the memory used by the application once it's moved to the device, i.e. the real run-time memory? Thanks.
Memory usage as reported by Object Allocation is not very authoritative, at least in my experience. The real deal is the real memory consumption as reported by Memory Monitor; see my question on iPhone memory consumption. Your numbers seem to be measured in the Simulator, and such measurements are worthless. You have to measure on the device.
Object Alloc is reporting the total memory used over the entire lifespan of the run. That means if objects are allocated and deallocated (which they often are), you see all the memory consumed in total.
Far more useful is to select the option "created and still living", then highlight regions of the graph where memory increases but never goes down when you would expect it to. Then you can see how much memory is being allocated at that point and what is allocating it. This works in the simulator as well as on the device.