I have a photo collage app that takes a bunch of photos from a user's Facebook/Instagram/library and draws a bunch of them onto a UIView. To control the amount, there is a slider tool that calculates density (for example, add an image for every 150px of width). I'm running into memory issues after lots and lots of UIImageViews have been added.
Is there some better, more efficient way of showing images onto the screen?
Why not use a UICollectionView? It's designed to present a group of objects (including images) in a manner similar to UITableView. As with a table view, memory is managed automatically by the controller through cell reuse.
See the documentation for more information.
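In rough outline, the collection-view version might look like this (a minimal sketch; PhotoCell, CollageViewController, and photoImages are illustrative names, not from your project):

```swift
import UIKit

final class PhotoCell: UICollectionViewCell {
    static let reuseID = "PhotoCell"
    let imageView = UIImageView()

    override init(frame: CGRect) {
        super.init(frame: frame)
        imageView.frame = contentView.bounds
        imageView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        imageView.contentMode = .scaleAspectFill
        imageView.clipsToBounds = true
        contentView.addSubview(imageView)
    }

    required init?(coder: NSCoder) { fatalError() }
}

final class CollageViewController: UICollectionViewController {
    var photoImages: [UIImage] = []   // your Facebook/Instagram/library photos

    override func viewDidLoad() {
        super.viewDidLoad()
        collectionView.register(PhotoCell.self, forCellWithReuseIdentifier: PhotoCell.reuseID)
    }

    override func collectionView(_ collectionView: UICollectionView,
                                 numberOfItemsInSection section: Int) -> Int {
        photoImages.count
    }

    override func collectionView(_ collectionView: UICollectionView,
                                 cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        // Cells are recycled as they scroll off screen, so only the
        // visible images are backed by live views.
        let cell = collectionView.dequeueReusableCell(withReuseIdentifier: PhotoCell.reuseID,
                                                      for: indexPath) as! PhotoCell
        cell.imageView.image = photoImages[indexPath.item]
        return cell
    }
}
```

With this, only the cells currently on screen hold live image views, no matter how many photos the slider adds.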
You didn't make it clear in your question exactly how you're using this set of images. An alternative is to draw the images into a graphics context using Core Graphics. This draws all the images into a single view instead of into several views, so there's only one bitmap to keep in memory. See Drawing a PNG Image Into a Graphics Context for Blending Mode Manipulation for an example.
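If that fits your use case, a sketch of the flattening approach might look like this (renderCollage and its placements parameter are made up for illustration):

```swift
import UIKit

// Flatten many photos into a single bitmap, so the screen shows one
// image view instead of hundreds.
func renderCollage(size: CGSize, placements: [(image: UIImage, rect: CGRect)]) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        // Each image is composited into the same context; only the
        // final bitmap stays in memory.
        for placement in placements {
            placement.image.draw(in: placement.rect)
        }
    }
}

// Usage: collageImageView.image = renderCollage(size: view.bounds.size,
//                                               placements: placements)
```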
Could you please give me some advice? I've been struggling with this for days.
My goal is to implement continuous scrolling to show pages from a document. Each page is controlled by a view controller, and the user should be able to zoom in and out.
Should I do it from scratch with a scroll view or a collection view? Which is better and more memory efficient?
Or is there an off-the-shelf solution for this? (I've searched GitHub without success. UIPageViewController is definitely not a solution because it doesn't allow continuous scrolling and only shows whole pages.)
Thank you very much
[Image: example of continuous scrolling]
A collection view works just fine for continuous scrolling, and makes efficient use of memory (cells are recycled). If the total contents of your scrolling area are too much to fit into memory, you will want to load each page's contents as it scrolls into view and release them when it scrolls off. (Perhaps have your model store a file URL for each page's contents, and load the contents into the cell on demand, as in the sketch below.)
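A sketch of that load-on-display, release-on-recycle pattern (PageCell, DocumentViewController, and pageURLs are illustrative names; assumes one rendered image file per page):

```swift
import UIKit

final class PageCell: UICollectionViewCell {
    static let reuseID = "PageCell"
    let imageView = UIImageView()
    var representedIndex = NSNotFound   // which page this cell currently shows

    override init(frame: CGRect) {
        super.init(frame: frame)
        imageView.frame = contentView.bounds
        imageView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        contentView.addSubview(imageView)
    }

    required init?(coder: NSCoder) { fatalError() }

    override func prepareForReuse() {
        super.prepareForReuse()
        imageView.image = nil           // release the page bitmap on recycle
        representedIndex = NSNotFound
    }
}

final class DocumentViewController: UICollectionViewController {
    var pageURLs: [URL] = []            // one page-content file per page

    override func viewDidLoad() {
        super.viewDidLoad()
        collectionView.register(PageCell.self, forCellWithReuseIdentifier: PageCell.reuseID)
    }

    override func collectionView(_ collectionView: UICollectionView,
                                 numberOfItemsInSection section: Int) -> Int {
        pageURLs.count
    }

    override func collectionView(_ collectionView: UICollectionView,
                                 cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        let cell = collectionView.dequeueReusableCell(withReuseIdentifier: PageCell.reuseID,
                                                      for: indexPath) as! PageCell
        cell.representedIndex = indexPath.item
        let url = pageURLs[indexPath.item]
        DispatchQueue.global(qos: .userInitiated).async {
            let image = UIImage(contentsOfFile: url.path)   // decode off the main thread
            DispatchQueue.main.async {
                // The cell may have been recycled for another page meanwhile.
                if cell.representedIndex == indexPath.item {
                    cell.imageView.image = image
                }
            }
        }
        return cell
    }
}
```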
As for zooming, the best approach depends on what you mean. If the contents are vector-based, like a PDF, you can simply re-render the page as the user zooms. If the contents are very high resolution images, you might need to create a mipmapped, tiled rendering framework, or use somebody else's. I've written my own before; it's doable, but a lot of work.
(You take the original huge image and break it into smaller square tiles. You then render the original image at 50% scale and save those tiles, then at 25%, then 12.5%, and so on, until you reach a size where a single image fills the screen. A rough sketch follows.)
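Here is roughly what building such a tile pyramid could look like (writeTilePyramid, the 256px tile size, the PNG output, and the file-naming scheme are all assumptions; for sources too large to render whole you would combine this with the strip-by-strip technique discussed in the downsizing answers below):

```swift
import UIKit

// Render the source at 100%, 50%, 25%, ... and cut each level into tiles.
func writeTilePyramid(from source: UIImage,
                      tileSize: CGFloat = 256,
                      to directory: URL) throws {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1                     // work in pixels, not points

    var scale: CGFloat = 1
    var level = 0
    var levelSize = source.size
    repeat {
        levelSize = CGSize(width: max(1, floor(source.size.width * scale)),
                           height: max(1, floor(source.size.height * scale)))
        // Render the whole image once at this scale...
        let levelImage = UIGraphicsImageRenderer(size: levelSize, format: format)
            .image { _ in source.draw(in: CGRect(origin: .zero, size: levelSize)) }
        guard let cg = levelImage.cgImage else { return }

        // ...then slice that level into square tiles and write them out.
        var row = 0, y: CGFloat = 0
        while y < levelSize.height {
            var col = 0, x: CGFloat = 0
            while x < levelSize.width {
                let rect = CGRect(x: x, y: y,
                                  width: min(tileSize, levelSize.width - x),
                                  height: min(tileSize, levelSize.height - y))
                if let tile = cg.cropping(to: rect),
                   let data = UIImage(cgImage: tile).pngData() {
                    try data.write(to: directory
                        .appendingPathComponent("level\(level)_\(col)_\(row).png"))
                }
                x += tileSize; col += 1
            }
            y += tileSize; row += 1
        }
        scale /= 2; level += 1
    } while levelSize.width > tileSize || levelSize.height > tileSize
}
```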
I'm loading a very large image (8000x8000) into a UIImageView that's inside a UIScrollView. This works, but it consumes a lot of memory and takes a couple of seconds to load.
I've looked into a better approach and all the examples I have found are based on a 2010 WWDC example called PhotoScroller. This example uses a custom UIView with CATiledLayer to break the large image into smaller tiles and draw them to the screen.
https://github.com/master-nevi/WWDC-2010/tree/master/PhotoScroller
Is there a more modern way to do this in iOS 13 or is this still the best approach?
It is still the best approach. 8000x8000 is huge; you cannot load the whole image at once. With CATiledLayer only a few tiles need to be loaded at a time, just the portion of the image the user is looking at at the moment.
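A minimal CATiledLayer-backed view in the spirit of PhotoScroller might look like this (the 256px tile size, four levels of detail, and the drawTile hook are placeholders to adapt to your own tile source):

```swift
import UIKit

final class TiledImageView: UIView {
    // Back the view with a CATiledLayer instead of a plain CALayer.
    override class var layerClass: AnyClass { CATiledLayer.self }

    var tiledLayer: CATiledLayer { layer as! CATiledLayer }

    override init(frame: CGRect) {
        super.init(frame: frame)
        tiledLayer.tileSize = CGSize(width: 256, height: 256)
        tiledLayer.levelsOfDetail = 4      // 100%, 50%, 25%, 12.5%
    }

    required init?(coder: NSCoder) { fatalError() }

    override func draw(_ rect: CGRect) {
        // Called once per tile, on background threads, and only for tiles
        // that are (about to be) visible; that is where the memory savings
        // come from.
        guard let context = UIGraphicsGetCurrentContext() else { return }
        let scale = context.ctm.a          // current level of detail
        drawTile(for: rect, scale: scale, in: context)
    }

    private func drawTile(for rect: CGRect, scale: CGFloat, in context: CGContext) {
        // Hypothetical hook: look up the pre-cut tile image covering `rect`
        // at `scale` (e.g. from files on disk) and draw it into the context.
    }
}
```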
So I have a massive UIImage, maybe 10,000x10,000 px (I know they're not supposed to exceed 1024x1024, but anyway that's not the main problem). Moving it around the screen (constantly drawing it at different points using -drawAtPoint:) is very slow.
So I split the image into 100x100 px UIImages and decided to draw them all separately using -drawAtPoint:. The result was even worse.
Is there a more efficient way of drawing UIImages to screen like this? Or a more efficient method of managing the images? Thanks.
EDIT: When I broke it into tiles, I was only drawing the tiles that were in view.
Apple has this really nice sample code called PhotoScroller. It shows how to use CATiledLayer along with pre-tiled images, but you have to create the hundreds of tiles beforehand and either include them in your app bundle or download each one.
There is another project on GitHub called PhotoScrollerNetwork that can download massive JPEG-only images and do all the tiling for you as it downloads. It leverages another open-source library, libjpeg-turbo.
I want to allow the user to select a photo, without limiting the size, and then edit it.
My idea is to create a thumbnail of the large photo with the same size as the screen for editing, and then, when the editing is finished, use the large photo to make the same edit that was performed on the thumbnail.
When I use UIGraphicsBeginImageContext to create the thumbnail image, it causes a memory issue.
I know it's hard to edit the whole large image directly due to hardware limits, so I want to know if there is a way to downsample the large image to less than 2048x2048 without memory issues.
I found that the Android platform has a BitmapFactory class with an inSampleSize option that can downsample a photo. How can this be done on iOS?
You need to load the image using UIImage, which doesn't actually decode the image data into memory up front, and then create a bitmap context at the size of the resulting image that you want (this determines the amount of memory used). Then iterate a number of times, drawing tiles from the original image (this is where parts of the image data are loaded into memory) using CGImageCreateWithImageInRect into the destination context with CGContextDrawImage.
See this sample code from Apple.
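In Swift, the tile-by-tile downsize described above might be sketched like this (the 512px strip height and the 2048 cap are assumptions; how much of the source actually stays out of memory depends on the image format and decoder):

```swift
import UIKit

func downsample(fileURL: URL, maxDimension: CGFloat = 2048) -> UIImage? {
    // UIImage(contentsOfFile:) defers decoding; cropping (the Swift form of
    // CGImageCreateWithImageInRect) pulls in only the strip being copied.
    guard let source = UIImage(contentsOfFile: fileURL.path),
          let sourceCG = source.cgImage else { return nil }

    let sourceSize = CGSize(width: CGFloat(sourceCG.width),
                            height: CGFloat(sourceCG.height))
    let ratio = min(maxDimension / sourceSize.width,
                    maxDimension / sourceSize.height, 1)
    let destSize = CGSize(width: floor(sourceSize.width * ratio),
                          height: floor(sourceSize.height * ratio))

    let format = UIGraphicsImageRendererFormat()
    format.scale = 1
    let renderer = UIGraphicsImageRenderer(size: destSize, format: format)
    let stripHeight: CGFloat = 512      // source rows copied per iteration

    return renderer.image { _ in
        var y: CGFloat = 0
        while y < sourceSize.height {
            let stripRect = CGRect(x: 0, y: y,
                                   width: sourceSize.width,
                                   height: min(stripHeight, sourceSize.height - y))
            // Only this strip of the source is decoded at a time.
            if let strip = sourceCG.cropping(to: stripRect) {
                let destRect = CGRect(x: 0, y: y * ratio,
                                      width: destSize.width,
                                      height: stripRect.height * ratio)
                UIImage(cgImage: strip).draw(in: destRect)
            }
            y += stripHeight
        }
    }
}
```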
Large images don't fit in memory, so loading them into memory just to resize them doesn't work.
To work with very large images you have to tile them. There are lots of solutions out there already; for example, see if this solves your problem:
https://github.com/dhoerl/PhotoScrollerNetwork
I implemented my own custom solution, but that was specific to our environment, where we already had an image tiler running server-side and I could just request specific tiles of large images (made a server for it; it's really cool).
The reason tiling works is that you only ever keep the visible pixels in memory, and there aren't that many of those. All tiles not currently visible stay in the disk cache (or flash-memory cache, as it were).
Take a look at this work by Trevor Harmon. It improved my app's performance. I believe it will work for you too.
https://github.com/coryalder/UIImage_Resize
A few months ago I found a really awesome sample on Apple's site called "LargeImageDownsizing". The wonderful thing is that it explains a lot about how images are read from resources and then rendered on screen. Digging into that code, I found something that disturbs me a little: the downsized image is passed to a view backed by a CATiledLayer, but without giving each tile its own piece of the image to improve memory performance; it just sets the tile size and then loads the image (I'm simplifying to get to the concept). So my question basically is: why? Why use a CATiledLayer if it isn't fed the right way? They could have used a normal UIImageView.

To check whether I was right, I made a few tests, modifying the code by simply adding a scroll view with an image view as a subview and responding to the scroll view delegate for zooming. Testing on device and simulator, I came to these conclusions:
-The memory impact and footprint is exactly the same, even during zoom and scroll operations, which doesn't surprise me at all: the image is decompressed in memory either way.
-Time Profiler says the tiled view takes more time to draw during scroll/zoom operations than the UIImageView, which again doesn't surprise me: the UIImageView is already drawn.
-If I send a memory warning, nothing changes between the two solutions (simulator only).
-Testing Core Animation performance, I get the same results, around 60 FPS.
So what's the deal between those two views/layers? Why should I pick one instead of the other in this specific case? UIImageView seems to win the battle.
I hope someone can help me understand this.
They might perform the same for small images because then the only difference in terms of performance is that CATiledLayer draws on a background thread. Depending on the tile size, CATiledLayer could even be slower because it has to draw multiple tiles for one image.
BUT ...
The point of CATiledLayer is that you don't need to draw all the tiles, especially when zooming into a very, very large image. It is smart about knowing which parts are actually needed, and about evicting tiles that are no longer needed.
For this mechanism to work, you need to provide the individual parts of the image separately. We're talking about a total image size that probably cannot be held in memory uncompressed.
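To make that concrete, here is a sketch of a tile-fed CATiledLayer view, assuming the tiles were pre-cut to disk with a hypothetical "level_col_row.png" naming scheme (the naming, the level math, and TileSourceView itself are illustrative, not from the sample):

```swift
import UIKit

final class TileSourceView: UIView {
    override class var layerClass: AnyClass { CATiledLayer.self }

    var tileDirectory: URL!                      // where the pre-cut tiles live
    private let tileSize = CGSize(width: 256, height: 256)

    override init(frame: CGRect) {
        super.init(frame: frame)
        let tiled = layer as! CATiledLayer
        tiled.tileSize = tileSize
        tiled.levelsOfDetail = 4                 // matches the pyramid on disk
    }

    required init?(coder: NSCoder) { fatalError() }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        let scale = context.ctm.a                // 1.0, 0.5, 0.25, ... per zoom level
        let level = Int(round(log2(Double(1 / scale))))
        let col = Int((rect.minX * scale) / tileSize.width)
        let row = Int((rect.minY * scale) / tileSize.height)
        let url = tileDirectory.appendingPathComponent("level\(level)_\(col)_\(row).png")
        // Only the tiles covering the visible rect are ever read; everything
        // else stays on disk, which is where CATiledLayer's memory win comes from.
        UIImage(contentsOfFile: url.path)?.draw(in: rect)
    }
}
```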