My app has a 3x3 grid of images that fill the screen. I allow users to take pictures using a UIImagePickerController, and set these to be displayed on the grid.
Once I get to the 7th or 8th picture, however, my console starts showing memory warnings. Specifically, I get a bunch of these:
2013-11-05 00:04:46.008 gridTestApp[545:907] Received memory warning.
2013-11-05 00:05:00.445 gridTestApp[545:907] Received memory warning.
I ran a profile and I don't have any leaks. My app uses around 50 MB. Any ideas on how to lower this, or why my app is crashing?
Where are you storing your pictures? Keep in mind that each picture takes a considerable amount of memory. If you need to access the pictures later, it's better to save them to disk and release the object.
If you need to display several pictures at the same time, it's better to resize and cache each picture; that way you reduce the amount of memory you need.
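A minimal sketch of that approach (the helper name, JPEG quality, and thumbnail size are all my own choices):

```swift
import UIKit

// Hypothetical helper: writes the full-size picture to the Documents
// directory and returns a small thumbnail for the 3x3 grid.
func storePicture(_ image: UIImage, name: String, thumbnailSide: CGFloat = 200) -> UIImage? {
    // 1. Persist the original as a JPEG on disk so it can be released from memory.
    let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let fileURL = documents.appendingPathComponent("\(name).jpg")
    if let data = UIImageJPEGRepresentation(image, 0.9) {
        try? data.write(to: fileURL, options: .atomic)
    }

    // 2. Build a small thumbnail to keep in memory for the grid cell.
    let scale = thumbnailSide / max(image.size.width, image.size.height)
    let size = CGSize(width: image.size.width * scale, height: image.size.height * scale)
    UIGraphicsBeginImageContextWithOptions(size, true, 0)
    image.draw(in: CGRect(origin: .zero, size: size))
    let thumbnail = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return thumbnail
}
```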
Related
I am planning to ship an app with at least 20 pictures that can be as big as 10 MB each. They are pictures that the user is likely to zoom into quite a bit, so it is a requirement to keep the resolution quite high. We are still trying to make them smaller, but even so, it's unlikely that they will end up less than 7 MB each.
The images can be shipped with the app, and additional pictures can also be downloaded. The requirement is for the pictures to be available offline once the user downloads them, as the app is to be used in remote areas by researchers.
What is the best mechanism to store them, and how should I store them in iOS with Swift 3?
Thanks for your help in advance.
You can store your pictures in the app's Documents directory and store the paths in a SQLite DB.
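A minimal sketch of that (file naming is mine, and the SQLite insert depends on whichever wrapper you use, so it is only hinted at in a comment):

```swift
import UIKit

// Write the image into the Documents directory and return the relative
// path, which you would then insert into your SQLite table.
func saveImageReturningPath(_ image: UIImage, fileName: String) -> String? {
    guard let data = UIImageJPEGRepresentation(image, 1.0) else { return nil }
    let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    do {
        try data.write(to: documents.appendingPathComponent(fileName), options: .atomic)
        // e.g. db.execute("INSERT INTO images (path) VALUES (?)", fileName)
        // Store the relative path: the app container's absolute path can
        // change between installs and updates.
        return fileName
    } catch {
        return nil
    }
}
```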
There are no hard rules, but there is a set of best practices.
To store them, I suggest saving them directly as bundle resources, not in an Xcode image asset catalog. This is because images in an asset catalog can only be loaded with UIImage's imageNamed: method, which has the side effect of caching images.
Then you can create a plist file with an array of image names, and fetch your info from there. If you need something more complicated there is Core Data, but I can't see a use for it with your spec.
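A rough sketch of the plist-driven approach, assuming a bundle file named Images.plist holding an array of JPEG file names (both names are mine):

```swift
import UIKit

// Read the list of image names from the plist.
func imageNames() -> [String] {
    guard let url = Bundle.main.url(forResource: "Images", withExtension: "plist"),
          let names = NSArray(contentsOf: url) as? [String] else { return [] }
    return names
}

// Load via init(contentsOfFile:) rather than imageNamed(_:), so nothing is
// kept in UIImage's cache and the image is freed as soon as you let go of it.
func uncachedImage(named name: String) -> UIImage? {
    guard let path = Bundle.main.path(forResource: name, ofType: "jpg") else { return nil }
    return UIImage(contentsOfFile: path)
}
```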
What is the size of an uncompressed image in memory? An approximate equation: n_pixel_height * n_pixel_width * n_channels bytes (assuming 8 bits per channel).
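For example (numbers mine, not from the question): a 4256 x 2832 photo with 4 channels (RGBA) needs 4256 * 2832 * 4 bytes, roughly 48 MB, once decoded, no matter how small the JPEG file on disk is.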
If your images are about 10 MB as JPEGs, they are compressed; that means they will take a huge amount of memory once decoded, and memory on this kind of device is a precious and scarce resource.
If your app exceeds the memory limit, then after a series of callbacks such as didReceiveMemoryWarning, if you don't free enough memory, your app will be killed.
Always test this on a device and not on the simulator, because the simulator uses your Mac's resources.
Now, how do you handle big images?
You can use CATiledLayer; you can find a lot of tutorials online. CATiledLayer, as the name suggests, draws a layer as a set of tiles. Each tile corresponds to a piece of your image, and tiles are drawn only when they are visible.
Unfortunately it draws asynchronously, which means your tiles may not all appear at exactly the same time. There are strategies to avoid that issue; one is explained in an Apple sample code project.
Of course there is a problem that needs to be solved: how can you cut your large images into tiles?
You can do it programmatically, or provide them already cut as resources of your application.
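For the display side, here is a bare-bones sketch of a CATiledLayer-backed view, assuming pre-cut 256-point tiles shipped as resources named tile_<col>_<row>.png (tile size and naming scheme are my own, and screen scale is ignored for brevity):

```swift
import UIKit

class TiledImageView: UIView {
    let tileSide: CGFloat = 256

    // Back this view with a CATiledLayer instead of a plain CALayer.
    override class var layerClass: AnyClass { return CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        (layer as! CATiledLayer).tileSize = CGSize(width: tileSide, height: tileSide)
    }

    required init?(coder aDecoder: NSCoder) { fatalError("not used in this sketch") }

    // Called once per visible tile (on a background queue), so only the
    // tiles on screen are ever decoded and memory stays flat.
    override func draw(_ rect: CGRect) {
        let col = Int(rect.minX / tileSide)
        let row = Int(rect.minY / tileSide)
        if let path = Bundle.main.path(forResource: "tile_\(col)_\(row)", ofType: "png"),
           let tile = UIImage(contentsOfFile: path) {
            tile.draw(in: rect)
        }
    }
}
```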
I have been searching these threads and other sites but have not come across a way to do this that is both efficient and memory-friendly. And so, here is my story:
My iOS (iPad) app uses sprite sheets (a large image, such as 2K x 2K at 16 bpp, composed of many smaller sprites). I have created a sprite atlas class which manages these sheets, handles sprite animations, and provides other features.
The idea (in the load method) is to load in the sheet from the file system, split it apart into UIImages (one per sprite) using CGImageCreateWithImageInRect, and then dispose of the loaded sheet. Seems simple enough.
Note that the sheet is loaded into a UIImage by initWithContentsOfFile or imageNamed (more on that below).
The desire is to save disk space by using sprite sheets, and then save runtime memory by retaining only the actual sprites themselves as UIImages. In my experiments thus far, this is what I find happening:
If I use initWithContentsOfFile I see (from Instruments) that it appears to do a file open, fstat64, and close for the file for EACH sprite in it. This takes a horrendous amount of time to load all the sprites. It actually seems to load the entire sheet, grab one sprite, close the sheet, then load the entire sheet again for the next sprite, until they are all created. It also appears to consume lots of memory (proven by the "received memory warning" after just the second sheet loaded, as well as by Instruments allocations).
Next I tried imageNamed (which caches the sheet). The file loading occurs once, so it is MUCH faster. All seems good, and in fact I can go on until many sheets are loaded. But eventually the dreaded "received memory warning" appears... and a few seconds later the app crashes. It appears that the cached image (even though the pointer is set to nil after pulling out the sprites) never goes away. I have read several posts that also state this seems to be its behavior (although other posts say other things, so it's not conclusive - does anyone know definitively?).
And so it appears that neither method is what is needed. What I want is to load the file, pull out each sprite into its own UIImage, then have the UIImage for the big sheet file released completely.
I have read one site that talked about using the initWithContentsOfFile approach to set up their own caching system (rather than trusting iOS's caching via imageNamed) so they can release the image when desired. However, I don't think they had sprite sheets in mind.
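For reference, the shape of what I'm attempting looks roughly like this (sketched in Swift for compactness; helper names are mine, and the redraw step is my guess at fully detaching sprites from the sheet, which relates to my question below):

```swift
import UIKit

// sheetPath: path to the sprite sheet file; spriteFrames: one rect per sprite.
func loadSprites(sheetPath: String, spriteFrames: [CGRect]) -> [UIImage] {
    var sprites: [UIImage] = []
    autoreleasepool {
        // Load the sheet ONCE, bypassing UIImage's cache.
        guard let sheet = UIImage(contentsOfFile: sheetPath),
              let sheetCG = sheet.cgImage else { return }
        for frame in spriteFrames {
            // cropping(to:) is CGImageCreateWithImageInRect under the hood.
            guard let spriteCG = sheetCG.cropping(to: frame) else { continue }
            // Redraw into a fresh context so (I believe) the sprite no longer
            // references the sheet's backing store - see my question below.
            UIGraphicsBeginImageContextWithOptions(frame.size, false, sheet.scale)
            UIImage(cgImage: spriteCG).draw(in: CGRect(origin: .zero, size: frame.size))
            if let sprite = UIGraphicsGetImageFromCurrentImageContext() {
                sprites.append(sprite)
            }
            UIGraphicsEndImageContext()
        }
    } // nothing should keep the sheet alive past this point
    return sprites
}
```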
And so, I turn this over to the experts out there to see if there are some ideas on how to get both fast load times AND use minimal memory.
[And yes, I know that iOS 7 has SpriteKit. But this needs to also work on iOS 6.x.]
One interesting data point is that on an original iPad the imageNamed version actually works fine with no "received memory warning". It might have been running iOS 5.x. But the app crashes on an iPad 2 device.
I am not including code here because what I am after is an understanding of the mechanics involved with how memory is used with these functions related to image handling.
And while I am at it, can someone please clarify this point:
True or false: when using CGImageCreateWithImageInRect, it actually allocates new memory for a bitmap the size of the specified rectangle and then COPIES the pixels from the original UIImage into this new memory (as opposed to setting up the bitmap format and having a pointer point into the original UIImage's pixel data). I think this is true, but I want verification.
Thanks!
I want to store a bunch of images that are taken while the user uses the app, while making sure that I can view them at decently high resolution later on. And by "store", for now I don't need to keep them past the closure of the app. Simply having them available at some later point while the app is still alive is all I need.
I first tried simply storing the UIImages at their original size in the app, but then the app would crash after 7 or 8 pics were taken because of memory usage.
I then tried shrinking them (since my app has a grid display wherein I can see all the pictures, shrunk to fit a 3x3 grid of images), and my app stopped crashing. But when I wanted the pictures to be viewed individually at full screen, the quality was terrible because I was enlarging a shrunken photo.
So I figured, why not find a way to store the original image through another object in a way that wouldn't eat up too much memory. Searching online led me to decide to store them in a file, by converting the images into NSData and then writing that to a file. BUT, when I would then load the NSData back into a UIImage, the photos taken through the camera all came back sideways! And after hours of looking (and failing) at how to transform them back into the proper orientation, I've decided to give up on trying to fix this orientation bug.
Instead, I just want to know if there is any other way for me to store large/high-res UIImages (without hogging memory) besides using NSData. What ideas do you guys have?
And pardon me for having to write so much for a one-liner question. I just didn't want to get suggestions on doing something I've already tried.
Save it as a JPEG instead of a PNG; that way the image will be rotated correctly for you. See https://stackoverflow.com/a/4868914/96683
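A minimal round-trip sketch (the file handling here is my own, not from the linked answer):

```swift
import UIKit

// Write the photo out as a JPEG; unlike UIImagePNGRepresentation, the JPEG
// data keeps the camera's orientation, so the reloaded image comes back
// upright instead of sideways.
func persist(_ photo: UIImage, to url: URL) -> Bool {
    guard let data = UIImageJPEGRepresentation(photo, 0.9) else { return false }
    return (try? data.write(to: url, options: .atomic)) != nil
}

func reload(from url: URL) -> UIImage? {
    return UIImage(contentsOfFile: url.path)
}
```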
I'm working on an iPad-only iOS app that essentially downloads large, high quality images (JPEG) from Dropbox and shows the selected image in a UIScrollView and UIImageView, allowing the user to zoom and pan the image.
The app is mainly used for showing the images to potential clients who are interested in buying them as framed prints. The way it works is that the image is first shown, then zoomed and panned so the potential client can decide whether they like it. If they do like it, they can decide if they want to crop a specific area (while keeping to specific aspect ratios/sizes), and the final image (cropped or not) is then sent as an email attachment to production.
The problem I've been facing for a while now is that even though the app will only be running on new iPads (i.e. with more memory, etc.), I'm unable to find a method of handling the images so that the app doesn't get a memory warning and then crash.
Most of the images are sized 4256x2832, which brings the memory usage to at least 40 MB per image. While I'm only displaying one image at a time, image cropping (which is the main memory/crash problem at the moment) creates a new cropped image, which in turn momentarily bumps the app's total RAM usage to about 120 MB, causing a crash.
So in short: I'm looking for a way to manage very large images, have the ability to crop them and after cropping still have enough memory to send them as email attachments.
I've been thinking about implementing a singleton image manager, which all the views would use and it would only contain one big image at a time, but I'm not sure if that's the right way to go, or even if it'd help in any way.
One way to deal with this is to tile the image. You can save the large decompressed image to "disk" as a series of tiles, and as the user pans around pull out only the tiles you need to actually display. You only ever need 1 tile in memory at a time because you draw it to the screen, then throw it out and load the next tile. (You'll probably want to cache the visible tiles in memory, but that's an implementation detail. Even having the whole image as tiles may relieve memory pressure as you don't need one large contiguous block.) This is how applications like Photoshop deal with this situation.
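A sketch of the tile-cutting step (tile size, file naming, and JPEG quality are my choices); the per-tile autoreleasepool keeps peak memory near one tile rather than a second full-image copy:

```swift
import UIKit

// Cut the big image into 512x512 JPEG tiles on disk; when the user pans,
// read back only the tiles intersecting the visible rect.
func writeTiles(of image: UIImage, to directory: URL, tileSide: CGFloat = 512) {
    guard let cg = image.cgImage else { return }
    let width = CGFloat(cg.width), height = CGFloat(cg.height)
    let cols = Int(ceil(width / tileSide)), rows = Int(ceil(height / tileSide))
    for row in 0..<rows {
        for col in 0..<cols {
            // The pool frees each tile's pixels before the next one is cut.
            autoreleasepool {
                let rect = CGRect(x: CGFloat(col) * tileSide, y: CGFloat(row) * tileSide,
                                  width: tileSide, height: tileSide)
                    .intersection(CGRect(x: 0, y: 0, width: width, height: height))
                if let tile = cg.cropping(to: rect),
                   let data = UIImageJPEGRepresentation(UIImage(cgImage: tile), 0.8) {
                    try? data.write(to: directory.appendingPathComponent("tile_\(col)_\(row).jpg"),
                                    options: .atomic)
                }
            }
        }
    }
}
```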
I ended up sort of solving the problem. Since I couldn't resize the original files in Dropbox (the client has their reasons), I went ahead and used BOSImageResizeOperation, which is essentially just a fast, thread-safe library for quickly resizing images.
Using this library, I noticed that images that previously took 40-60 MB of memory per image now only seemed to take roughly half that. Additionally, the resizing is so quick that the original image gets released from memory fast enough that iOS never issues a memory warning.
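I won't reproduce BOSImageResizeOperation's API here, but for anyone curious, a comparable memory-friendly downsample can be done with ImageIO, which decodes straight to a bounded size instead of inflating the full bitmap first:

```swift
import UIKit
import ImageIO

// Decode a JPEG directly to a bounded size; the full 4256x2832 bitmap is
// never inflated in memory, so there's no 40-60 MB spike to trigger warnings.
func downsampledImage(at url: URL, maxPixelSize: Int) -> UIImage? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    let options = [kCGImageSourceCreateThumbnailFromImageAlways: true,
                   kCGImageSourceCreateThumbnailWithTransform: true, // honor EXIF orientation
                   kCGImageSourceThumbnailMaxPixelSize: maxPixelSize] as CFDictionary
    guard let cg = CGImageSourceCreateThumbnailAtIndex(source, 0, options) else { return nil }
    return UIImage(cgImage: cg)
}
```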
With this, I've gotten further with the app and I appreciate all the ideas, suggestions, and comments. I'm hoping this will get the app done and I can get as far away from large image handling as possible, heh.
When developing a mobile app that lets the user take photos (which will later be shown at full size) that are also viewed in table views (mid size) and even in the Google Maps pin title view, should I create one or more thumbnails for every image the user takes for the smaller views, or should I just use the regular image?
I am asking because, from the tutorials I saw, and as a web developer, all I could figure out is that when using a web service to get groups of small images you usually get the thumbnails first and only fetch the full-size image when needed.
But this is an embedded app (I know it is not really embedded, but I don't have a better way to describe it) where all the data sits on the device, so there are no upload performance issues, just memory and processor time issues (loading the big HD photos that today's cameras take is very heavy, I think).
Anyway, what is the best practice for this?
Thank you,
Erez
It's all about memory usage balanced with performance. If you don't create thumbnails for each photo, there are only so many photos you can hold in memory before you receive memory warnings or have your app terminated by the system (maybe only 6-8 full-size UIImages). To avoid that, you might write the photos out to the file system and keep a reference to their location. But then your table view scrolling will suffer as it attempts to read photos from the file system for display.
So the solution is to create thumbnails for each photo so that you can store a lot of them in memory without any trouble. Your table view will perform well because the photos are quickly accessible from memory. You'll also want to write the full-size photos to the file system (and keep a reference to their location) to avoid having to store them in memory. When it's time to display the full-size image, retrieve it from the file system and store it in memory. When it's no longer needed, release it.
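One possible shape for this (all names are mine): thumbnails live in an NSCache, which the system can purge under memory pressure, while the originals live on disk:

```swift
import UIKit

final class PhotoStore {
    private let thumbnails = NSCache<NSString, UIImage>()
    private let directory = FileManager.default.urls(for: .documentDirectory,
                                                     in: .userDomainMask)[0]

    func add(_ photo: UIImage, id: String, thumbnail: UIImage) {
        thumbnails.setObject(thumbnail, forKey: id as NSString)
        if let data = UIImageJPEGRepresentation(photo, 0.9) {
            try? data.write(to: directory.appendingPathComponent("\(id).jpg"), options: .atomic)
        }
    }

    // Fast path for table cells and map pins.
    func thumbnail(id: String) -> UIImage? {
        return thumbnails.object(forKey: id as NSString)
    }

    // Slow path for the full-screen viewer; release the result when done.
    func fullSize(id: String) -> UIImage? {
        return UIImage(contentsOfFile: directory.appendingPathComponent("\(id).jpg").path)
    }
}
```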
I'm assuming that you're on iOS 4 and that you are saving the photos in the asset library; there is already a method for you.
http://developer.apple.com/library/ios/#documentation/AssetsLibrary/Reference/ALAsset_Class/Reference/Reference.html
You're looking for the "thumbnail" method.
So saving the large image and computing the thumbnail when required is, I believe, the way to go.
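A rough usage sketch (the Swift bridging of these AssetsLibrary calls may vary by SDK version, and the framework is long deprecated, so treat this as an outline rather than exact code):

```swift
import UIKit
import AssetsLibrary

// Fetch an asset by URL and use its built-in thumbnail rather than decoding
// the full-resolution photo.
func loadThumbnail(for assetURL: URL, library: ALAssetsLibrary,
                   completion: @escaping (UIImage?) -> Void) {
    library.asset(for: assetURL, resultBlock: { asset in
        if let cgThumb = asset?.thumbnail()?.takeUnretainedValue() {
            completion(UIImage(cgImage: cgThumb))
        } else {
            completion(nil)
        }
    }, failureBlock: { _ in completion(nil) })
}
```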