Loading Images in J2ME? - image-processing

I'm not new to the concepts present in J2ME, but I've been lazy in ways I shouldn't be:
Lately my app has been loading images into memory as if they were candy...
Sprite example = new Sprite(Image.createImage("/images/example.png"), w, h);
and I'm not really sure it's the best way, but it worked fine on my Motorola Z6, until last night, when I tested the app on an old Samsung cellphone and the images won't even load and need several attempts at restarting the thread to show up. The screen was left white, so I realized it has to be something about image loading that I'm not doing quite right... Can anyone tell me how to properly write a loading routine for my app?

I'm not sure exactly what you are looking for, but the behavior you describe very much sounds like you are hitting an OutOfMemoryError. Try reducing the dimensions of your images (decoded heap usage scales with pixel dimensions, not file size) and see if the behavior ceases. This will let you know if it is truly an out-of-memory issue or something else.
Other tips:
Load images largest to smallest. This helps with heap fragmentation and allows the largest heap space for the largest images.
Unload (set to null) in reverse order of how you loaded, and garbage collect after doing so. Make sure to call Thread.yield() after you call the GC (see the sketch after these tips).
Make sure you only load the images that you need. Unload images from a state that the application is no longer in.
Since you are creating sprites you may have multiple sprites for one image. Consider creating an image pool to make sure you only load each image once, then point each Sprite object to the image in the pool that it requires. With the example in your question, you would more than likely load the same image into memory more than once; that's wasteful and could be part of the OutOfMemory issue.
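
To make the unload tip above concrete, here is a minimal sketch; the imagesInLoadOrder array is a hypothetical field holding the images in the order they were loaded:

private Image[] imagesInLoadOrder; // hypothetical: filled elsewhere, largest image first

private void unloadImages() {
    // Release in reverse order of loading, then ask the VM to collect and yield
    // so the collector gets a chance to run before the next allocation.
    for (int i = imagesInLoadOrder.length - 1; i >= 0; i--) {
        imagesInLoadOrder[i] = null;
    }
    System.gc();
    Thread.yield();
}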

Use a film image (a set of frames of a defined dimension packed into one image) and logic to pull them out one at a time.
Because they are grouped into one image, you save header overhead per image and thus reduce the memory used.
This technique was first used on memory-constrained MIDP 1.0 devices.
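
As a rough illustration (the strip path and frame size below are made up), with MIDP 2.0 you can keep one film image and either hand the whole strip to a Sprite or cut individual frames out of it:

int frameWidth = 16, frameHeight = 16;                // hypothetical frame size
Image film = Image.createImage("/images/tiles.png");  // one image header for all frames

// Let the Sprite do the slicing: each frame is frameWidth x frameHeight.
Sprite tile = new Sprite(film, frameWidth, frameHeight);
tile.setFrame(3); // show the fourth frame without loading another Image

// Or cut a standalone frame out of the strip (MIDP 2.0):
Image frame = Image.createImage(film, 3 * frameWidth, 0,
                                frameWidth, frameHeight, Sprite.TRANS_NONE);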

Following Fostah's approach of not loading images over and over, I made the following class:
import java.io.IOException;
import java.util.Hashtable;
import javax.microedition.lcdui.Image;

public class ImageLoader {
    private static Hashtable pool = new Hashtable();

    public static Image getSprite(String source) {
        // Return the cached image if it has been loaded before.
        Image cached = (Image) pool.get(source);
        if (cached != null) {
            return cached;
        }
        try {
            Image temp = Image.createImage(source);
            pool.put(source, temp);
            return temp;
        } catch (IOException e) {
            System.err.println("Error loading image at " + source + ": " + e.getMessage());
        }
        return null;
    }
}
So, whenever I need an image I first ask the pool for it, or just load it into the pool.
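For example, the Sprite from the original question can now be built through the pool:

Sprite example = new Sprite(ImageLoader.getSprite("/images/example.png"), w, h);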

Related

iOS: Detect memory constraints before allocating objects

Is there a technique for avoiding undue memory consumption by testing the availability of memory before it's allocated? I understand that the general iOS approach is to optimize memory usage and respond to didReceiveMemoryWarning when necessary, but sometimes that doesn't cut it.
In my use case (image processing), I'm allocating space for a (potentially) large image using UIGraphicsBeginImageContext(). If the image is too big, I eventually get a didReceiveMemoryWarning. But, it's too late at that point: from a user experience perspective, it would've been better to prevent the user from working with such a large image to begin with; it would make more sense to say, "Sorry! Image size too big! Do something else!" before creating it than to say, "Ooops! Crashing now!"
I found a few SO threads on querying available memory and/or total physical memory, but using them is a messy and unreliable solution: there's no way to tell how much memory the OS is actually going to let you use at a given point in time, regardless of how much is free.
Basically, I want these semantics: (in "Swift-Java-ese")
try {
    UIGraphicsBeginImageContext(CGSize(width: reallyBig, height: reallyBig))
}
catch NotEnoughMemoryException {
    directUserToPickSmallerImage()
}
// The memory is mine; it's OK to use it
continueUsingBigImage()
Is there a methodology for doing this in iOS?
You might try pre-flighting with NSMutableData(length: Int) and checking for nil.
let data: NSMutableData? = NSMutableData(length: 1000)
if data != nil {
    println("Success")
} else {
    println("Failure")
}
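
Building on that idea, here is a rough sketch that sizes the probe to roughly what the image context would need. It is not a guarantee, just the same pre-flight trick applied to the question's scenario; the reallyBig dimension is a made-up placeholder standing in for the question's value:

import UIKit

let reallyBig: CGFloat = 10_000 // hypothetical dimension from the question's scenario
let size = CGSize(width: reallyBig, height: reallyBig)
let bytesNeeded = Int(size.width * size.height * 4) // roughly 4 bytes per RGBA pixel

let probe: NSMutableData? = NSMutableData(length: bytesNeeded)
if probe != nil {
    // The probe buffer allocated and is released right away, so building the
    // real context now is at least plausible (still not guaranteed).
    UIGraphicsBeginImageContext(size)
    println("Proceed with the big image")              // i.e. continueUsingBigImage()
} else {
    println("Direct the user to pick a smaller image") // i.e. directUserToPickSmallerImage()
}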

Adaptive image caching based on available RAM or iOS Memory warning level

I have an app which caches large images so that the user does not wait on imageWithContentsOfFile. As a general rule I cache the previous and next images.
1) Can I make this caching adaptive, based on the available memory on the iPad? If yes, what should the threshold be? Below is the function I use to report the memory in use:
-(void) report_memory {
    struct task_basic_info info;
    mach_msg_type_number_t size = sizeof(info);
    kern_return_t kerr = task_info(mach_task_self(),
                                   TASK_BASIC_INFO,
                                   (task_info_t)&info,
                                   &size);
    if (kerr == KERN_SUCCESS) {
        NSLog(@"Memory in use (in bytes): %u", info.resident_size);
    } else {
        NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
    }
}
2) I know there is no way (except via private/undocumented APIs) to know the memory warning level, otherwise it would be a good factor for deciding how many pages I can cache. But just to confirm: can I use it in some way?
3) Right now I am thinking of caching 3 screens (which hold 6 images), and if my ViewController receives a memory warning I unload all screens except the visible one and reset the number of screens to cache to 2 (4 images). But I don't find this optimal, because either I am caching less than what is possible, or in some conditions even loading 4 images leads to a crash.
If you are looking to cache as much data as possible without causing the application to crash, you can always make use of the function didReceiveMemoryWarning to remove excess caches when needed. Within this function, you don't have to clear out everything. You could use it to selectively clear up enough room for the current situation and then continue caching until it fires off this warning again.
An alternative would be to start a cache routine until the warning fires. This would allow you to create a count of items that were able to be cached. Once this count is reached, you could have your caching iterations stay far enough below that count to avoid issues.
Not exactly an in-depth explanation, but these are some ideas for making use of the standard methods available that should still be able to achieve your goal.
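
As a rough sketch of that selective clearing inside didReceiveMemoryWarning (the imageCache, visiblePageIndex, and pagesToCache properties are hypothetical, not from the question):

// Hypothetical cache bookkeeping: imageCache maps a page index to its UIImage,
// visiblePageIndex and pagesToCache are simple properties on the view controller.
- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];

    NSNumber *visibleKey = @(self.visiblePageIndex);
    UIImage *visibleImage = self.imageCache[visibleKey];

    // Clear everything, then put back only what the user is currently looking at.
    [self.imageCache removeAllObjects];
    if (visibleImage != nil) {
        self.imageCache[visibleKey] = visibleImage;
    }

    // Cache fewer pages from now on, but never fewer than the visible one.
    self.pagesToCache = MAX(1, self.pagesToCache - 1);
}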
So after waiting for so long, I am answering my own question in case it may be helpful for someone.
You should avoid any undocumented API to get the warning level and take action based on it. The only suggested action on a memory warning, irrespective of level, is to release as much memory as you can.
You can use a heuristic based on the report_memory function (see question) to determine how much you can cache; a rough sketch follows below. I have not performed any tests to calculate the threshold (it should be based on total RAM), and I would love to see someone perform these tests and update this.
The approach of resetting the number of pages to cache on a memory warning is working fine in my case.
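
For what it's worth, one possible shape of such a heuristic is sketched below; the 25% budget is invented rather than measured, and the total RAM figure comes from NSProcessInfo:

#import <mach/mach.h>

// Rough, untested heuristic: stop growing the cache once the app's resident size
// crosses an arbitrary fraction of the device's physical RAM.
- (BOOL)canCacheMoreImages {
    struct task_basic_info info;
    mach_msg_type_number_t size = TASK_BASIC_INFO_COUNT;
    kern_return_t kerr = task_info(mach_task_self(), TASK_BASIC_INFO,
                                   (task_info_t)&info, &size);
    if (kerr != KERN_SUCCESS) {
        return NO; // cannot measure, so play it safe and stop caching
    }
    unsigned long long budget = [NSProcessInfo processInfo].physicalMemory / 4; // invented 25% budget
    return info.resident_size < budget;
}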

Adobe Air 3 iOS app memory leak issue maybe related to getDefinitionByName class

I'm developing an application with Adobe AIR 3 for iOS and I'm getting low-memory errors frequently.
After the iOS 5 update, the OS started to kill my app after a few low-memory warnings.
But the thing is, the profiler says the app uses 4 to 9 MB of memory.
There are a lot of bitmap copy operations around, and the app sometimes instantiates new bitmaps from embedded bitmaps.
I highly optimized everything and looked for leaks, etc.
I watch the profiler for memory status, and it seems like the GC clears everything. Everything looks perfect, but the app continues to get low-memory errors and gets killed by the OS.
Is there anything wrong with the code below? My assumption is that this ClassReference never gets released from memory, even though the profiler says memory is cleared.
I used the clone method to pass values instead of passing by reference, so I guess the GC can collect the local variable. I tried with and without clone; nothing changes.
If the code below runs 10-15 times with different tile IDs the app crashes, but with the same IDs it keeps working.
Is there anyone who is familiar with this kind of thing?
tmp is a BitmapData:
if (isMoving)
{
    tmp = getProxyImage(x, y); // low-resolution tile image
}
else
{
    strTmp = "main_TILE" + getTileID(x, y);
    var ClassReference:Class = getDefinitionByName(strTmp) as Class; // full-resolution tile image // something wrong here
    tmp = new ClassReference().bitmapData.clone(); // something wrong here
    ClassReference = null;
}
return tmp.clone();
Thanks for reading. I hope someone has a solution for this.
You are creating three copies of your bitmapdata with this. They will likely get garbage collected eventually, but you probably run out of memory before that happens.
(Here I assume you have embedded your bitmapdata using the [Embed] tag)
var ClassReference:Class = getDefinitionByName(strTmp) as Class;
// allocates no new memory, the class reference already exists

tmp = new ClassReference().bitmapData.clone();
// creates a new BitmapAsset from the class reference, including its BitmapData;
// then you clone that BitmapData, giving you two copies

ClassReference = null;
// not really necessary since ClassReference goes out of scope anyway, but no harm done

return tmp.clone();
// makes a third copy of your second copy and returns it
I would recommend this (assuming you need unique bitmapDatas for each tile)
var ClassReference:Class = getDefinitionByName(strTmp) as Class;
return new ClassReference().bitmapData.clone();
If you don't need unique bitmapDatas, keep static properties with the bitmapDatas on some class and use the same ones all over. That will minimize memory usage.
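
A rough sketch of that shared, static approach (the TileCache class and its id scheme are hypothetical, reusing the question's "main_TILE" naming):

package
{
    import flash.display.BitmapData;
    import flash.utils.getDefinitionByName;

    public class TileCache
    {
        private static var cache:Object = {};

        public static function getTile(id:String):BitmapData
        {
            if (cache[id] == null)
            {
                var ClassReference:Class = getDefinitionByName("main_TILE" + id) as Class;
                cache[id] = new ClassReference().bitmapData; // created once, shared everywhere
            }
            return cache[id] as BitmapData;
        }
    }
}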

How to dispose a Writeable Bitmap? (WPF)

Some time ago I posted a question related to a WriteableBitmap memory leak, and though I received wonderful tips related to the problem, I still think there is a serious bug / mistake on my part / confusion / something else here.
So, here is my problem again:
Suppose we have a WPF application with an Image and a button. The Image's source is a really big bitmap (3600 x 4800 px); when it's shown at runtime the application consumes ~90 MB.
Now suppose I wish to instantiate a WriteableBitmap from the source of the Image (the really big one); when this happens the application consumes ~220 MB.
Now comes the tricky part: when the modifications to the image (through the WriteableBitmap) end, and all the references to the WriteableBitmap (at least those that I'm aware of) are destroyed (at the end of a method or by setting them to null), the memory used by the WriteableBitmap should be freed and the application's consumption should return to ~90 MB. The problem is that sometimes it returns, and sometimes it does not.
Here is a sample code:
// The Image's source was set prior to this event
private void buttonTest_Click(object sender, RoutedEventArgs e)
{
    if (image.Source != null)
    {
        WriteableBitmap bitmap = new WriteableBitmap((BitmapSource)image.Source);
        bitmap.Lock();
        bitmap.Unlock();
        //image.Source = null;
        bitmap = null;
    }
}
As you can see the reference is local and the memory should be released at the end of the method (Or when the Garbage collector decides to do so). However, the app could consume ~224 MB until the end of the universe.
Any help would be great.
Is it necessary to render the bitmap at the same resolution and pixel count? You could create the WriteableBitmap at a much lower pixel count and call the Render method. Since the WriteableBitmap carries a reference to the original UIElements when calling Render, in this case you are going to have three chunks: 1) the original UIElement, 2) the pixels in the WriteableBitmap, and 3) the reference to the copied original.
I had a similar issue with the WriteableBitmap in terms of memory leaks, and I fixed it after checking out this link:
http://www.wintellect.com/CS/blogs/jprosise/archive/2009/12/17/silverlight-s-big-image-problem-and-what-you-can-do-about-it.aspx
If you create another WriteableBitmap and copy the pixels over, then dispose of the first WriteableBitmap, you should see some memory release - at least I did in my scenario.
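
For what that pixel-copy technique might look like in WPF terms, here is a hedged sketch; whether it actually releases memory any sooner still depends on when the garbage collector runs:

// Sketch only: copy the pixels of an existing source into a fresh WriteableBitmap,
// so the original source can be dropped (set to null) afterwards.
private static WriteableBitmap CopyToNewBitmap(BitmapSource source)
{
    var copy = new WriteableBitmap(source.PixelWidth, source.PixelHeight,
                                   source.DpiX, source.DpiY,
                                   source.Format, source.Palette);
    int stride = (source.PixelWidth * source.Format.BitsPerPixel + 7) / 8;
    var pixels = new byte[stride * source.PixelHeight];
    source.CopyPixels(pixels, stride, 0);
    copy.WritePixels(new Int32Rect(0, 0, source.PixelWidth, source.PixelHeight),
                     pixels, stride, 0);
    return copy;
}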

Silverlight 3 IncreaseQuotaTo fails if I call AvailableFreeSpace first

The following code throws an exception...
private void EnsureDiskSpace()
{
    using (IsolatedStorageFile file = IsolatedStorageFile.GetUserStoreForSite())
    {
        const long NEEDED = 1024 * 1024 * 100;
        if (file.AvailableFreeSpace < NEEDED)
        {
            if (!file.IncreaseQuotaTo(NEEDED))
            {
                throw new Exception();
            }
        }
    }
}
But this code does not (it displays the silverlight "increase quota" dialog)...
private void EnsureDiskSpace()
{
    using (IsolatedStorageFile file = IsolatedStorageFile.GetUserStoreForSite())
    {
        const long NEEDED = 1024 * 1024 * 100;
        if (file.Quota < NEEDED)
        {
            if (!file.IncreaseQuotaTo(NEEDED))
            {
                throw new Exception();
            }
        }
    }
}
The only difference in the code is that the first one checks file.AvailableFreeSpace and the second checks file.Quota.
Are you not allowed to check the available space before requesting more? It seems like I've seen a few examples on the web that test the available space first. Is this no longer supported in SL3? My application allows users to download files from a server and store them locally. I'd really like to increase the quota by 10% whenever the user runs out of space. Is this possible?
I had the same issue. The solution for me was something written in the help files. The increase of disk quota must be initiated from a user interaction such as a button click event. I was requesting increased disk quota from an asynchronous WCF call. By moving the space increase request to a button click the code worked.
In my case, if the WCF detected there was not enough space, the silverlight app informed the user they needed to increase space by clicking a button. When the button was clicked, and the space was increased, I called the WCF service again knowing I now had more space. Not as good a user experience, but it got me past this issue.
There is a subtle bug in your first example.
There may not be enough free space to add your new storage, triggering the request - but the amount you're asking for may be less than the existing quota. This throws the exception and doesn't show the dialog.
The correct line would be
file.IncreaseQuotaTo(file.Quota + NEEDED);
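Putting that fix together with the first example from the question (same NEEDED constant), a corrected version might look like this:

private void EnsureDiskSpace()
{
    using (IsolatedStorageFile file = IsolatedStorageFile.GetUserStoreForSite())
    {
        const long NEEDED = 1024 * 1024 * 100;
        if (file.AvailableFreeSpace < NEEDED)
        {
            // Ask for more than the current quota, never less.
            if (!file.IncreaseQuotaTo(file.Quota + NEEDED))
            {
                throw new Exception();
            }
        }
    }
}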
I believe that there were some changes to the behavior in Silverlight 3, but not having worked directly on these features, I'm not completely sure.
I did take a look at this MSDN page on the feature and the recommended approach is definitely the first example you have; they're suggesting:
Get the user store
Check the AvailableFreeSpace property on the store
If needed, call IncreaseQuotaTo
It isn't ideal, since you can't implement your own growth algorithm (grow by 10%, etc.), but you should be able to at least unblock your scenario using the AvailableFreeSpace property, like you say.
I believe that reading the total amount of space available to the user store (the Quota) could in theory be an issue: imagine a "rogue" control or app that simply wants to fill every last byte it can in the isolated storage space, eventually forcing the user to request more space, even when it is not available.
It turns out that both code blocks work... unless you set a break point. For that matter, both code blocks fail if you do set a break point.
