Adobe AIR 3 iOS app memory leak issue, possibly related to getDefinitionByName - ios

I'm developing an application with Adobe AIR 3 for iOS and I'm frequently getting low-memory errors.
Since the iOS 5 update, the OS has started killing my app after a few low-memory warnings.
The strange thing is that the profiler says the app uses only 4 to 9 MB of memory.
There are a lot of bitmap copy operations, and the app sometimes instantiates new bitmaps from embedded bitmaps.
I've heavily optimized everything and looked for leaks, etc.
I watch the profiler for memory status, and the GC seems to clear everything. Everything looks perfect, but the app keeps getting low-memory errors and gets killed by the OS.
Is there anything wrong with the code below? My assumption is that this ClassReference never gets released from memory, even though the profiler says memory is cleared.
I used the clone method to pass by value instead of by reference, so I assumed the GC could collect the local variable. I tried with and without clone; nothing changes.
If the code below runs 10-15 times with different tile IDs the app crashes, but with the same IDs it keeps working.
Is anyone familiar with this kind of issue?
(tmp is a BitmapData)
if (isMoving)
{
    tmp = getProxyImage(x, y); // low-resolution tile image
}
else
{
    strTmp = "main_TILE" + getTileID(x, y);
    var ClassReference:Class = getDefinitionByName(strTmp) as Class; // full-resolution tile image // something wrong here
    tmp = new ClassReference().bitmapData.clone(); // something wrong here
    ClassReference = null;
}
return tmp.clone();
Thanks for reading. I hope someone has a solution for this.

You are creating three copies of your BitmapData with this. They will likely get garbage collected eventually, but you probably run out of memory before that happens.
(Here I assume you have embedded your BitmapData using the [Embed] tag.)
var ClassReference:Class = getDefinitionByName(strTmp) as Class;
// allocates no new memory; the class definition already exists

tmp = new ClassReference().bitmapData.clone();
// new ClassReference() creates a new BitmapAsset, including its BitmapData (copy one);
// you then clone this BitmapData, giving you two

ClassReference = null;
// not really necessary since ClassReference goes out of scope anyway, but no harm done

return tmp.clone();
// makes a third copy of your second copy and returns it
I would recommend this (assuming you need a unique BitmapData for each tile):
var ClassReference:Class = getDefinitionByName(strTmp) as Class;
return new ClassReference().bitmapData.clone();
If you don't need unique BitmapData instances, keep the BitmapData objects as static properties on some class and reuse the same ones everywhere. That will minimize memory usage.
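For illustration, here is a minimal sketch of that static-cache idea, assuming the tiles are embedded with [Embed] as in the question (TileCache and getTile are hypothetical names, not from the original code):

package
{
    import flash.display.BitmapData;
    import flash.utils.getDefinitionByName;

    // Hypothetical cache: one shared BitmapData per embedded tile class.
    public class TileCache
    {
        private static var cache:Object = {};

        public static function getTile(className:String):BitmapData
        {
            if (cache[className] == null)
            {
                var ClassReference:Class = getDefinitionByName(className) as Class;
                cache[className] = new ClassReference().bitmapData; // created once
            }
            return cache[className]; // no clone(): all callers share the same pixels
        }
    }
}

Since every caller shares the same BitmapData, mutating it affects every tile that uses it; clone only where a tile really needs its own copy.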

Related

Memory leak when showing OpenCV image in Gtk Widget

I am trying to create a Gtk Widget that you can pass an OpenCV image to that will then show it. I have created a class that is inherited from Gtk.Image that is used to show the image. You pass the OpenCV image to this class using the show_frame method, which then updates the Gtk.Image so it shows that image.
I have tested this and it works fine, i.e. the image is correctly shown and updated when the show_frame method is called. However, every time the image is updated, the memory used increases, until there is not enough memory and the program crashes.
I believe this is because the memory for the image is not being freed correctly. I cannot, however, work out how to fix this. I have tried unreferencing the gbytes once a new frame is received, but this does not help. The memory only builds up when the set_from_pixbuf function is called; if this is commented out, memory usage stays at a constant level.
class OpenCVImageViewer(Gtk.Image):
    def __init__(self):
        Gtk.Image.__init__(self)

    def show_frame(self, frame):
        # Convert OpenCV BGR to Gtk RGB
        rgb_image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # Get details about the frame in order to set up the pixbuffer
        height = rgb_image.shape[0]
        width = rgb_image.shape[1]
        nChannels = rgb_image.shape[2]
        gbytes = GLib.Bytes.new(rgb_image.tostring())
        pixbuf = GdkPixbuf.Pixbuf.new_from_bytes(gbytes, GdkPixbuf.Colorspace.RGB, False,
                                                 8, width, height, width*nChannels)
        # Hand the Gtk calls to the main thread loop for thread safety
        GLib.idle_add(self.set_from_pixbuf, pixbuf)
        GLib.idle_add(self.queue_draw)
Well, I found a solution, but I do not understand why it works: set the image from a copy of the pixbuffer.
imageWidget.set_from_pixbuf(pixbuffer.copy())
I came to this solution after observing that the memory leak disappeared for scaled pixbuffers (i.e. the result of pixbuffer.scale_simple).
Excerpt from the PyGTK FAQ, section 5.17:
There is a reference cycle between the python wrapper and its underlying C object; this means that the object will not be automatically deallocated when there are no more user references, and you will need the garbage collector to kick in (which may take a few cycles). This occasionally causes the odd problem, such as with pixbufs described in FAQ 8.4
And from section 8.4:
The answer is "Interesting GC behaviour" in Python. Apparently finalizers are not necessarily called as soon as an object goes out of scope. My guess is that the python memory manager doesn't directly know about the storage allocated for the image buffer (since it's allocated by the gdk) and that it therefore doesn't know how fast memory is being consumed.
The solution is to call gc.collect() at some appropriate place.
For example, I had some code that looked like this:
for image_path in images:
    pb = gtk.gdk.pixbuf_new_from_file(image_path)
    pb = pb.scale_simple(thumb_width, thumb_height, gtk.gdk.INTERP_BILINEAR)
    thumb_list_model.set_value(thumb_list_model.append(None), 0, pb)
This chewed up an unacceptably large amount of memory for any reasonable image set. Changing the code to look like this fixed the problem:
import gc
for image_path in images:
    pb = gtk.gdk.pixbuf_new_from_file(image_path)
    pb = pb.scale_simple(thumb_width, thumb_height, gtk.gdk.INTERP_BILINEAR)
    thumb_list_model.set_value(thumb_list_model.append(None), 0, pb)
    del pb
    gc.collect()
I am not exactly sure where you should call the garbage collector in your code (since I don't really know that much Python), but I believe this is the way to solve it.
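Putting the two ideas together (the pixbuf copy and the explicit collection), here is a hedged sketch of how show_frame might look — this is a guess at placement, not tested against the original code:

import gc

def show_frame(self, frame):
    rgb_image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    height, width, n_channels = rgb_image.shape
    gbytes = GLib.Bytes.new(rgb_image.tostring())
    pixbuf = GdkPixbuf.Pixbuf.new_from_bytes(gbytes, GdkPixbuf.Colorspace.RGB, False,
                                             8, width, height, width * n_channels)
    # Hand Gtk a copy so the widget does not keep the original buffer alive,
    # then nudge Python's GC to break the wrapper/C-object reference cycle.
    GLib.idle_add(self.set_from_pixbuf, pixbuf.copy())
    GLib.idle_add(self.queue_draw)
    del pixbuf
    gc.collect()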

Why do I get memory warnings with only 7 MB of memory allocated?

I am running my iOS app on an iPod touch device, and I get memory warnings even though the total allocation peak is only 7 MB, as shown below (this happens when the Game Scene is pushed):
What I find strange is that:
the left peak (at time 0.00) corresponds to 20 MB of memory allocated (Introduction Scene), and despite this it DOES NOT give any memory warning.
the central peak (at time 35.00) corresponds to roughly 7 MB of memory allocated (Game Scene being pushed) and DOES give a memory warning.
I do not understand why I get those warnings if the total memory is only 7 MB. Is this normal? How can I avoid this?
Looking at the allocation density we can see the following pattern, which (to me) does not show much difference between the moment when the Intro Scene is pushed (0.00) and the moment when the Game Scene is pushed (35.00). Since the density peaks are similar, I would assume the memory warnings are due to something else that I am not able to spot.
EDIT:
I have been following a suggestion to use Activity Monitor instead, but unfortunately my app crashes when loading the Game Scene with only 30 MB of memory allocated. Here is the Activity Monitor report.
Looking at the report, I can see a total real memory usage sum of about 105 MB. Given that this refers to RAM, and that my model should have 256 MB of RAM, this should not cause app crashes or memory-leak problems.
I ran the Leaks instrument and it does not show any leaks in my app. I also killed all the other apps.
However, analyzing the report, I see an astonishing 167 MB of virtual memory attributed to my app. Is this normal? What does that value mean? Can this be the reason for the crash? How can I detect which areas of my code are responsible for this?
My iPod is a 4th generation model with 6.4 GB of capacity (storage) and only 290 MB free. I am not sure if this somehow affects the virtual memory paging performance.
EDIT 2: I have also looked more at SpringBoard; its virtual memory usage is 180 MB. Is this normal? I found some questions/answers that seem to suggest that SpringBoard is responsible for autoreleasing objects (it should be the process for managing the screen and home button, but I am not sure if it also has to do with memory management). Is this correct?
Another note: I am using ARC. However, I am not sure this has much to do with the issue, as there are no apparent memory leaks and Xcode should add the release/dealloc/retain calls to the compiled binary.
EDIT 3: As said before, I am using ARC and Cocos2d (2.0). I have been playing around with Activity Monitor. I found out that if I remove the GameCenter authentication mechanism, then Activity Monitor runs fine (new doubt: did anyone else have a similar issue? Is the GameCenter authentication view being retained somewhere?). However, I noticed that every time I navigate back and forth among the various scenes before the GameScene (Initial Scene -> Character Selection -> Planet Selection -> Character Selection -> Planet Selection -> etc. -> Character Selection...), the REAL MEMORY usage increases. After a while I start to get memory warnings and the app gets killed by iOS. Now the question is:
-> Am I replacing the scenes in the correct way? I call the following from the various scenes:
[[CCDirector sharedDirector] replaceScene: [MainMenuScene scene]];
I have Cocos2d 2.0 as a static library, and the code of replaceScene is this:
-(void) replaceScene:(CCScene*)scene
{
    NSAssert(scene != nil, @"Argument must be non-nil");
    NSUInteger index = [scenesStack_ count];
    sendCleanupToScene_ = YES;
    [scenesStack_ replaceObjectAtIndex:index-1 withObject:scene];
    nextScene_ = scene; // nextScene_ is a weak ref
}
I wonder if somehow the scene does not get deallocated properly. I verified that the cleanup method is being called; however, I also added a CCLOG call in the CCLayer dealloc method and rebuilt the static library. The result is that the dealloc method doesn't seem to be called.
Is this normal? :D
I found that other people had similar issues. I am wondering if it has to do with retain cycles and blocks capturing self. I really need to spend some time studying this unless, from EDIT 3, anyone can tell me already what I am doing wrong :-)
All memory is shared among all apps and processes running in iOS. So other apps can use a lot of memory, and your app will receive memory warnings too. You'll keep receiving memory warnings while memory remains scarce.
To understand what actually happens with memory in your app, you should:
Profile your app with Leaks (ARC does not guarantee that you have no leaks, e.g. a self-capturing retain cycle).
Use heapshot analysis (briefly described here: http://bentrengrove.com/blog/2013/4/26/heapshot-analysis)
And check out this post about memory & virtual memory in iOS: http://liam.flookes.com/wp/2012/05/03/finding-ios-memory/
I solved this by printing the process's effective memory usage in the console. In this way I could get a precise measurement of the real memory used by the app process. Using Instruments proved imprecise, as the real memory used did not match what Instruments showed.
This code can be used to get the effective memory usage:
-(vm_size_t) report_memory
{
    struct task_basic_info info;
    mach_msg_type_number_t size = sizeof(info);
    kern_return_t kerr = task_info(mach_task_self(),
                                   TASK_BASIC_INFO,
                                   (task_info_t)&info,
                                   &size);
    if (kerr != KERN_SUCCESS) {
        NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
    }
    return info.resident_size;
}
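For example, a hypothetical way to use it (not from the original post, and assuming report_memory lives on a view controller) is to log the value whenever iOS sends a memory warning:

// Hypothetical: log resident size on each memory warning.
- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    NSLog(@"Resident memory: %.2f MB",
          (float)[self report_memory] / 1024.0 / 1024.0);
}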

Core Data Import - Not releasing memory

My question is about Core Data and memory not being released. I am doing a sync process, importing data from a web service that returns JSON. I load the data to import into memory, loop through it, and create NSManagedObjects. The imported data needs to create objects that have relationships to other objects; in total there are around 11,000. But to isolate the problem, I am right now only creating the items of the first and second level, leaving the relationships out; those are 9,043 objects.
I started checking the amount of memory used because the app was crashing at the end of the process (with the full data set). The first memory check is after loading the JSON into memory, so that the measurement really only takes into consideration the creation and insertion of the objects into Core Data. What I use to check the memory used is this code (source):
-(void) get_free_memory
{
    struct task_basic_info info;
    mach_msg_type_number_t size = sizeof(info);
    kern_return_t kerr = task_info(mach_task_self(),
                                   TASK_BASIC_INFO,
                                   (task_info_t)&info,
                                   &size);
    if (kerr == KERN_SUCCESS) {
        NSLog(@"Memory in use (in MB): %f", (float)(info.resident_size/1024.0)/1024.0);
    } else {
        NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
    }
}
My setup:
1 Persistent Store Coordinator
1 Main ManagedObjectContext (MMC) (NSMainQueueConcurrencyType, used only to read the data in the app)
1 Background ManagedObjectContext (BMC) (NSPrivateQueueConcurrencyType, undoManager set to nil, used to import the data)
The BMC is independent of the MMC, so the BMC is not a child context of the MMC, and they do not share any parent context. I don't need the BMC to notify the MMC of changes; the BMC only needs to create/update/delete the data.
Platform:
iPad 2 and 3
iOS: I have tested with the deployment target set to 5.1 and to 6.1; there is no difference.
Xcode 4.6.2
ARC
Problem:
While importing the data, the memory used never stops increasing, and iOS doesn't seem able to drain it even after the process ends. If the data sample grows, this leads to memory warnings and then to the app being killed.
Research:
Apple documentation
Efficiently importing Data
Reducing Memory Overhead
Good recap of the points to keep in mind when importing data into Core Data (Stack Overflow)
Tests done and analysis of the memory release. He seems to have the same problem as I do, and he sent an Apple bug report with no response yet from Apple. (Source)
Importing and displaying large data sets (Source)
Indicates the best way to import large amounts of data. Although he mentions:
"I can import millions of records in a stable 3MB of memory without
calling -reset."
This makes me think this might be somehow possible? (Source)
Tests:
Data sample: creating a total of 9,043 objects.
Turned off the creation of relationships, as the documentation says they are "expensive"
No fetching is being done
Code:
- (void)processItems
{
    [self.context performBlock:^{
        for (int i = 0; i < [self.downloadedRecords count];) {
            @autoreleasepool
            {
                [self get_free_memory]; // prints current memory used
                for (NSUInteger j = 0; j < batchSize && i < [self.downloadedRecords count]; j++, i++)
                {
                    NSDictionary *record = [self.downloadedRecords objectAtIndex:i];
                    Item *item = [self createItem];
                    objectsCount++;
                    // fills in the item object with data from the record; no relationship creation is happening
                    [self updateItem:item WithRecord:record];
                    // creates the subitems and fills them in with data from the record; relationship creation is turned off
                    [self processSubitemsWithItem:item AndRecord:record];
                }
                // Context save is done before draining the autoreleasepool, as specified in research 5)
                [self.context save:nil];
                // Fault all the created items
                for (NSManagedObject *object in [self.context registeredObjects]) {
                    [self.context refreshObject:object mergeChanges:NO];
                }
                // Double-tap the previous action by resetting the context
                [self.context reset];
            }
        }
    }];
    [self check_memory]; // repeatedly performs [self get_free_memory] to view the memory after the sync
}
Measurement:
It goes from 16.97 MB to 30 MB; after the sync it goes down to 28 MB. Repeating the get_free_memory call every 5 seconds keeps the memory at 28 MB.
Other tests without any luck:
recreating the persistent store as indicated in research 2) has no effect
letting the thread wait a bit to see if the memory recovers, as in example 4)
setting the context to nil after the whole process
doing the whole process without saving the context at any point (therefore losing the info). That actually resulted in less memory being used, leaving it at 20 MB. But it still doesn't decrease and... I need the info stored :)
Maybe I am missing something, but I have really tested a lot, and after following the guidelines I would expect to see the memory decrease again. I have run the Allocations instrument to check the heap growth, and this seems to be fine too. Also no memory leaks.
I am running out of ideas to test/adjust... I would really appreciate it if anyone could help me with ideas of what else I could test, or point out what I am doing wrong. Or maybe it is just like that, how it is supposed to work... which I doubt...
Thanks for any help.
EDIT
I have used Instruments to profile the memory usage with the Activity Monitor template, and the result shown in "Real Memory Usage" is the same as the one that gets printed in the console with get_free_memory; the memory still never seems to get released.
OK, this is quite embarrassing... Zombies were enabled in the scheme: they were turned off under Arguments, but under Diagnostics "Enable Zombie Objects" was checked...
Turning this off keeps the memory stable.
Thanks to those who read through the question and tried to solve it!
It seems to me the key takeaway of your favorite source ("3 MB, millions of records") is the batching that is mentioned (besides disabling the undo manager, which is also recommended by Apple and very important).
I think the important thing here is that this batching has to apply to the @autoreleasepool as well.
It's insufficient to drain the autorelease pool every 1000 iterations. You need to actually save the MOC, then drain the pool.
In your code, try putting a second @autoreleasepool into the second for loop. Then adjust your batch size to fine-tune.
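A minimal sketch of that change, reusing the names from the question's code (the inner pool placement is just the suggestion above, not tested code):

for (int i = 0; i < [self.downloadedRecords count];) {
    @autoreleasepool { // outer pool: drained once per batch, after the save
        for (NSUInteger j = 0; j < batchSize && i < [self.downloadedRecords count]; j++, i++) {
            @autoreleasepool { // second, inner pool: autoreleased objects die per record
                NSDictionary *record = [self.downloadedRecords objectAtIndex:i];
                Item *item = [self createItem];
                [self updateItem:item WithRecord:record];
                [self processSubitemsWithItem:item AndRecord:record];
            }
        }
        [self.context save:nil]; // save the MOC first, then let the outer pool drain
        [self.context reset];
    }
}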
I have made tests with more than 500,000 records on an original iPad 1. The size of the JSON string alone was close to 40 MB. Still, it all works without crashes, and some tuning even leads to acceptable speed. In my tests, I could claim up to approx. 70 MB of memory on an original iPad.

How to dispose a Writeable Bitmap? (WPF)

Some time ago I posted a question related to a WriteableBitmap memory leak, and though I received wonderful tips related to the problem, I still think there is a serious bug / (mistake made by me) / (confusion) / (some other thing) here.
So, here is my problem again:
Suppose we have a WPF application with an Image and a button. The image's source is a really big bitmap (3600 x 4800 px); when it's shown at runtime, the application consumes ~90 MB.
Now suppose I wish to instantiate a WriteableBitmap from the source of the image (the really big one); when this happens, the application consumes ~220 MB.
Now comes the tricky part: when the modifications to the image (through the WriteableBitmap) end, and all the references to the WriteableBitmap (at least those that I'm aware of) are destroyed (at the end of a method or by setting them to null), the memory used by the WriteableBitmap should be freed and the application's consumption should return to ~90 MB. The problem is that sometimes it returns, sometimes it does not.
Here is a sample code:
// The Image's source was set prior to this event
private void buttonTest_Click(object sender, RoutedEventArgs e)
{
    if (image.Source != null)
    {
        WriteableBitmap bitmap = new WriteableBitmap((BitmapSource)image.Source);
        bitmap.Lock();
        bitmap.Unlock();
        //image.Source = null;
        bitmap = null;
    }
}
As you can see, the reference is local, and the memory should be released at the end of the method (or when the garbage collector decides to do so). However, the app can keep consuming ~224 MB until the end of the universe.
Any help would be great.
Is it necessary to render the bitmap at the same resolution and pixel dimensions? You could create the WriteableBitmap with far fewer pixels and call the Render method. Since the WriteableBitmap carries a reference to the original UIElements when calling Render, in this case you are going to have three chunks: 1) the original UIElement, 2) the pixels in the WriteableBitmap, 3) the reference to the copied original.
I had a similar issue with WriteableBitmap in terms of memory leaks, and I fixed it after checking out this link:
http://www.wintellect.com/CS/blogs/jprosise/archive/2009/12/17/silverlight-s-big-image-problem-and-what-you-can-do-about-it.aspx
If you create another WriteableBitmap and copy the pixels over, then dispose of the first WriteableBitmap, you should see some memory released - at least I did in my scenario.
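A rough sketch of that copy-the-pixels idea (my own reconstruction against the WPF BitmapSource API, not the answerer's exact code):

// Hypothetical helper: copy the pixels into a fresh WriteableBitmap so the
// original bitmap (and anything it references) can be released.
private static WriteableBitmap CopyPixels(WriteableBitmap source)
{
    var target = new WriteableBitmap(source.PixelWidth, source.PixelHeight,
                                     source.DpiX, source.DpiY,
                                     source.Format, source.Palette);
    int stride = (source.PixelWidth * source.Format.BitsPerPixel + 7) / 8;
    var pixels = new byte[stride * source.PixelHeight];
    source.CopyPixels(pixels, stride, 0);
    target.WritePixels(new Int32Rect(0, 0, source.PixelWidth, source.PixelHeight),
                       pixels, stride, 0);
    return target; // drop all references to 'source' after this
}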

Loading Images in J2ME?

I'm not so new to the concepts present in J2ME, but I'm sort of lazy in ways I shouldn't be:
Lately my app has been loading images into memory as if they were candy...
Sprite example = new Sprite(Image.createImage("/images/example.png"), w, h);
and I'm not really sure it's the best way, but it worked fine on my Motorola Z6, until last night, when I tested the app on an old Samsung cellphone: the images won't even load, and it takes several attempts of starting the thread for them to show up. The screen was left white, so I realized it has to be something about image loading that I'm not doing quite well... Can anyone tell me how to properly write a loading routine for my app?
I'm not sure exactly what you are looking for, but the behavior you describe very much sounds like an OutOfMemory condition. Try reducing the dimensions of your images (heap usage is based on dimensions) and see if the behavior ceases. This will let you know whether it is truly an OutOfMemory issue or something else.
Other tips:
Load images largest to smallest. This helps with heap fragmentation and allows the largest heap space for the largest images.
Unload (set to null) in reverse order of how you loaded and garbage collect after doing so. Make sure to Thread.yield() after you call the GC.
Make sure you only load the images that you need. Unload images from a state that the application is no longer in.
Since you are creating sprites you may have multiple sprites for one image. Consider creating an image pool to make sure you only load the image once. Then just point each Sprite object to the image within the pool that it requires. Your example in your question seems like you would more than likely load the same image into memory more than once. That's wasteful and could be part of the OutOfMemory issue.
Use a film image (a set of fixed-size images packed into one image) and use logic to pull them out one at a time.
Because they are grouped into one image, you save the per-image header space and can thus reduce the memory used.
This technique was first used on memory-constrained MIDP 1.0 devices.
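A minimal sketch of pulling frames out of such a film image (this uses the MIDP 2.0 Image.createImage region copy; on MIDP 1.0 you would instead draw the strip through a clip region — the FilmStrip name is hypothetical):

import java.io.IOException;
import javax.microedition.lcdui.Image;
import javax.microedition.lcdui.game.Sprite;

public class FilmStrip {
    private final Image film;      // the whole strip, loaded once
    private final int frameWidth;
    private final int frameHeight;

    public FilmStrip(String source, int frameWidth, int frameHeight) throws IOException {
        this.film = Image.createImage(source);
        this.frameWidth = frameWidth;
        this.frameHeight = frameHeight;
    }

    // Cut frame n out of a horizontal strip without loading another file
    public Image getFrame(int n) {
        return Image.createImage(film, n * frameWidth, 0,
                                 frameWidth, frameHeight, Sprite.TRANS_NONE);
    }
}

Note that the game Sprite class can also consume such a strip directly (new Sprite(film, frameWidth, frameHeight)), which avoids making extracted copies at all.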
Using the Fostah approach of not loading images over and over, I made the following class:
public class ImageLoader {
    private static Hashtable pool = new Hashtable();

    public static Image getSprite(String source) {
        if (pool.get(source) != null) return (Image) pool.get(source);
        try {
            Image temp = Image.createImage(source);
            pool.put(source, temp);
            return temp;
        } catch (IOException e) {
            System.err.println("Error loading the image at " + source + ": " + e.getMessage());
        }
        return null;
    }
}
So, whenever I need an image I first ask the pool for it, or just load it into the pool.
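So the sprite creation from the question would become, for example:

Sprite example = new Sprite(ImageLoader.getSprite("/images/example.png"), w, h);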
