So I have a massive UIImage, maybe 10,000x10,000 px (I know they're not supposed to exceed 1024x1024, apparently, but anyway that's not the main problem). Moving this around the screen (constantly drawing it at different points using -drawAtPoint:) is very slow.
So I split the image into 100x100 px UIImages, and decided to draw them all separately using drawAtPoint. The result was even worse.
Is there a more efficient way of drawing UIImages to screen like this? Or a more efficient method of managing the images? Thanks.
EDIT: When I broke it into tiles, I was only drawing the tiles that were in view.
Apple has a really nice sample project called PhotoScroller - it shows how to use CATiledLayer along with pre-tiled images. But you have to create the hundreds of tiles beforehand, and either include them in your app bundle or download each one.
There is another project on GitHub called PhotoScrollerNetwork that can download massive images (JPEG only) and do all the tiling for you as it downloads. It leverages another open source library, libjpeg-turbo.
Related
I want to allow the user to select a photo, without limiting the size, and then edit it.
My idea is to create a thumbnail of the large photo at the same size as the screen for editing, and then, when the editing is finished, apply the same edits that were performed on the thumbnail to the large photo.
When I use UIGraphicsBeginImageContext to create the thumbnail image, it causes a memory issue.
I know it's hard to edit the whole large image directly due to hardware limits, so I want to know if there is a way I can downsample the large image to less than 2048x2048 without memory issues.
I found that Android has a BitmapFactory class with an inSampleSize option that can downsample a photo. How can this be done on iOS?
You need to load the image with a UIImage API that doesn't immediately decode the whole image into memory, then create a bitmap context at the size of the result you want (this determines the amount of memory used). Then iterate a number of times, drawing tiles from the original image (this is when parts of the image data actually get decoded into memory): crop each tile out with CGImageCreateWithImageInRect and draw it into the destination context with CGContextDrawImage.
See this sample code from Apple.
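Here is a minimal sketch of the loop described above, in the spirit of Apple's LargeImageDownsizing sample. The function name and tile height are mine, and I draw through UIImage instead of a manually flipped CGContextDrawImage to keep the sketch short:

```objc
#import <UIKit/UIKit.h>

// Downsample a huge on-disk image without ever decoding it all at once.
// Assumes scale < 1.0; kSourceTileHeight is a tunable memory budget.
static UIImage *DownsampledImage(NSString *path, CGFloat scale) {
    // imageWithContentsOfFile: defers decoding until the image is drawn.
    UIImage *source = [UIImage imageWithContentsOfFile:path];
    CGImageRef sourceRef = source.CGImage;
    size_t fullWidth  = CGImageGetWidth(sourceRef);
    size_t fullHeight = CGImageGetHeight(sourceRef);

    // The destination context is the only full-size allocation we make.
    CGSize destSize = CGSizeMake(fullWidth * scale, fullHeight * scale);
    UIGraphicsBeginImageContextWithOptions(destSize, YES, 1.0);

    const size_t kSourceTileHeight = 512;
    for (size_t y = 0; y < fullHeight; y += kSourceTileHeight) {
        @autoreleasepool {
            size_t h = MIN(kSourceTileHeight, fullHeight - y);
            // Only this strip of the source gets decoded into memory.
            CGImageRef tile = CGImageCreateWithImageInRect(
                sourceRef, CGRectMake(0, y, fullWidth, h));
            CGRect destRect = CGRectMake(0, y * scale,
                                         destSize.width, h * scale);
            [[UIImage imageWithCGImage:tile] drawInRect:destRect];
            CGImageRelease(tile);
        }
    }

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```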
Large images don't fit in memory. So loading them into memory to then resize them doesn't work.
To work with very large images you have to tile them. There are lots of solutions out there already; for example, see if this can solve your problem:
https://github.com/dhoerl/PhotoScrollerNetwork
I implemented my own custom solution, but that was specific to our environment, where we already had an image tiler running server-side and I could just request specific tiles of large images (madea server, it's really cool).
The reason tiling works is that you only ever keep the visible pixels in memory, and there aren't that many of those. All tiles not currently visible are paged out to the disk cache (flash memory, as it were).
Take a look at this work by Trevor Harmon. It improved my app's performance, and I believe it will work for you too.
https://github.com/coryalder/UIImage_Resize
I have a photo collage app that takes a bunch of photos from a user's Facebook/Instagram/library and draws a bunch of them onto a UIView. To control the amount, there is a slider tool that calculates density (for example, every 150 px of width, add an image). I'm running into memory issues after lots and lots of UIImageViews have been added.
Is there some better, more efficient way of showing images onto the screen?
Why not use a UICollectionView? It's designed to present a group of objects (including images) in a manner similar to UITableView. Like a table view, memory is automatically managed by the controller: cells are reused rather than kept alive for every item.
See the documentation for more information.
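For illustration, a bare-bones data source showing the reuse mechanism (the class name, reuse identifier, and thumbnails property here are hypothetical, not from the question):

```objc
#import <UIKit/UIKit.h>

// Sketch: only the visible cells exist at any time; scrolled-off cells
// are recycled, so memory stays bounded no matter how many photos there are.
@interface PhotoGridController : UICollectionViewController
@property (nonatomic, strong) NSArray *thumbnails; // pre-scaled UIImages
@end

@implementation PhotoGridController

- (void)viewDidLoad {
    [super viewDidLoad];
    [self.collectionView registerClass:[UICollectionViewCell class]
            forCellWithReuseIdentifier:@"PhotoCell"];
}

- (NSInteger)collectionView:(UICollectionView *)collectionView
     numberOfItemsInSection:(NSInteger)section {
    return self.thumbnails.count;
}

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView
                  cellForItemAtIndexPath:(NSIndexPath *)indexPath {
    UICollectionViewCell *cell =
        [collectionView dequeueReusableCellWithReuseIdentifier:@"PhotoCell"
                                                  forIndexPath:indexPath];
    UIImageView *imageView = (UIImageView *)[cell.contentView viewWithTag:1];
    if (!imageView) {
        imageView = [[UIImageView alloc] initWithFrame:cell.contentView.bounds];
        imageView.tag = 1;
        [cell.contentView addSubview:imageView];
    }
    // Hand the cell a small thumbnail, never the full-resolution original.
    imageView.image = self.thumbnails[indexPath.item];
    return cell;
}

@end
```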
You didn't make it clear in your question exactly how you're using this set of images. An alternative is to draw the images into a graphics context using Core Graphics. This draws all the images into a single view instead of several views, so there's only one bitmap to keep in memory. See Drawing a PNG Image Into a Graphics Context for Blending Mode Manipulation for an example.
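A sketch of that idea (the helper name and parameters are mine): flatten the whole collage into one bitmap, then show it in a single image view.

```objc
#import <UIKit/UIKit.h>

// Composite every photo into one bitmap so the view hierarchy holds a
// single image instead of dozens of UIImageViews.
static UIImage *CollageImage(NSArray *images,   // UIImage objects
                             NSArray *frames,   // NSValue-wrapped CGRects
                             CGSize canvasSize) {
    UIGraphicsBeginImageContextWithOptions(canvasSize, NO, 0.0);
    for (NSUInteger i = 0; i < images.count; i++) {
        // drawInRect: also honors alpha/blend modes if you need them.
        [images[i] drawInRect:[frames[i] CGRectValue]];
    }
    UIImage *collage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return collage;
}
```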
I have a list of PNG images that I want to show one after another as an animation. In most of my cases I use a UIImageView with animationImages and it works fine. But in a couple of cases my PNGs are 1280x768 (full-screen iPad) animations with 100+ frames. I see that using the UIImageView is quite slow on the simulator (it takes too long to load the first time), and I believe that on the device it will be even slower.
Is there any alternative that can show an image sequence smoothly? Maybe Core Animation? Is there any working example I can see?
Core Animation can be used for vector/keyframe-based animation - not image sequences. Loading over a hundred full-screen PNGs on an iPad is a really bad idea; you'll almost certainly get a memory warning, if not outright termination.
You should be using a video to display this kind of animation. Performance will be considerably better. Is there any reason why you couldn't use an H.264 video for your animation?
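If you go the video route, playback is only a few lines with AVFoundation. This sketch assumes you have pre-rendered the frames into an "animation.mp4" in the app bundle, and that it lives in a view controller method:

```objc
#import <AVFoundation/AVFoundation.h>

// Play a pre-rendered H.264 clip instead of animating 100+ PNGs.
// Decoding is done in hardware, so memory use stays flat.
- (void)playAnimation {
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"animation"
                                         withExtension:@"mp4"];
    AVPlayer *player = [AVPlayer playerWithURL:url];
    AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
    playerLayer.frame = self.view.bounds;
    [self.view.layer addSublayer:playerLayer];
    [player play];
}
```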
Make a video of your pictures. It is the simplest and probably most reasonable approach.
If you want really good performance and full control over your animation, you can convert the pictures to PVRTC4 format and draw them as billboards (textured sprites) with OpenGL. This can be a lot of work if you don't know how to do it.
Look at the second example
http://www.modejong.com/iPhone/
Extracts from http://www.modejong.com/iPhone/
There is also the UIImageView.animationImages API, but it quickly sucks up all the system memory when using more than a couple of decent size images.
I wanted to show a full-screen animation that lasts 2 seconds; at 15 FPS that is a total of 30 PNG images of size 480x320. This example implements an animation-oriented view controller that simply waits to read the PNG image data for a frame until it is needed.
Instead of allocating many megabytes, this class runs in about half a meg of memory, with about 5-10% CPU utilization on a 2nd-gen iPhone. This example has also been updated to include the ability to optionally play an audio file via AVAudioPlayer as the animation is displayed.
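A minimal sketch of that lazy per-frame idea (the class name, "frame_N.png" naming scheme, and timer handling are my assumptions, not the actual code from that site):

```objc
#import <UIKit/UIKit.h>

// Decode exactly one frame per timer tick; the previous frame's pixels
// are released as soon as the image view lets go of them.
@interface FrameAnimator : NSObject
@property (nonatomic, strong) UIImageView *imageView;
@property (nonatomic, assign) NSUInteger frameCount;
@property (nonatomic, assign) NSUInteger frameIndex;
@property (nonatomic, strong) NSTimer *timer;
@end

@implementation FrameAnimator

- (void)startAtFramesPerSecond:(double)fps {
    self.timer = [NSTimer scheduledTimerWithTimeInterval:1.0 / fps
                                                  target:self
                                                selector:@selector(tick)
                                                userInfo:nil
                                                 repeats:YES];
}

- (void)tick {
    @autoreleasepool {
        // Assumed naming scheme: frame_0.png, frame_1.png, ...
        NSString *name = [NSString stringWithFormat:@"frame_%lu",
                                   (unsigned long)self.frameIndex];
        NSString *path = [[NSBundle mainBundle] pathForResource:name
                                                         ofType:@"png"];
        // imageWithContentsOfFile: bypasses the imageNamed: cache, so old
        // frames don't pile up in memory.
        self.imageView.image = [UIImage imageWithContentsOfFile:path];
        self.frameIndex = (self.frameIndex + 1) % self.frameCount;
    }
}

- (void)stop {
    [self.timer invalidate]; // the timer retains self until invalidated
    self.timer = nil;
}

@end
```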
I am creating a kind of 'map' in my app. It is basically just an image shown in an imageView/scrollView. However, the image is huge, something like 20,000x15,000 px. How can I tile this image so that it fits? When the app tiles it by itself, it uses way too much memory, so I want the tiling to be done before the app is launched, and to include only the tiles, not the original image. Can Photoshop do this?
I have not done a complete search for this yet, as I am away and typing on an iPhone with a limited network connection.
Apple has a project called PhotoScroller. It supports panning and zooming of large images. However, it does this by pre-tiling the images - if you look in the project you will see hundreds of tiles for various zoom levels. The project, however, does NOT come with any kind of tiling utility.
So what some people have done is create algorithms or code that anyone can use to produce these tiles. I support an open source project, PhotoScrollerNetwork, that lets people download huge JPEGs from the network, tile them, then display them as PhotoScroller does, and while doing research for it I found several people who had posted tiling software.
I googled "PhotoScroller tiling utility" and got lots of hits, including one here on SO.
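If you end up rolling your own, the core of a pre-tiler is short. A hedged sketch (the function name and "tile_x_y.png" naming are mine; run it offline, e.g. in the simulator or a Mac tool, since it decodes the full source image):

```objc
#import <UIKit/UIKit.h>

// Slice one zoom level of a large image into 256x256 PNG tiles on disk.
static void WriteTiles(UIImage *source, NSString *directory) {
    const size_t kTile = 256;
    CGImageRef cgImage = source.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    for (size_t y = 0; y < height; y += kTile) {
        for (size_t x = 0; x < width; x += kTile) {
            @autoreleasepool {
                CGRect rect = CGRectMake(x, y,
                                         MIN(kTile, width - x),
                                         MIN(kTile, height - y));
                CGImageRef tileRef = CGImageCreateWithImageInRect(cgImage, rect);
                NSData *png = UIImagePNGRepresentation(
                    [UIImage imageWithCGImage:tileRef]);
                CGImageRelease(tileRef);
                NSString *name = [NSString stringWithFormat:@"tile_%zu_%zu.png",
                                           x / kTile, y / kTile];
                [png writeToFile:[directory stringByAppendingPathComponent:name]
                      atomically:YES];
            }
        }
    }
}
```

Repeat at each zoom level (halving the source each time) to get the per-level tile sets PhotoScroller expects.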
CATiledLayer is one way to do it, and of course the best if you can pre-tile the images, either downloading the tiles from the internet (pay attention to how many connections you open) or embedding them (increasing overall app size). The other option is to memory-map the image on the file system (but an image at that resolution could take about 1 GB). Take a look at this question, it could be an interesting topic: SO question about low memory scenario
A few months ago I found a really awesome sample from the Apple site. The sample is called "LargeImageDownsizing"; the wonderful thing is that it explains a lot about how images are read from resources and then rendered on screen. Digging into that code I found something that disturbed me a little: the downsized image is passed to a view that has a CATiledLayer, but without giving a piece of the image to each tile to improve memory performance; it just sets the tile size and then loads the image (I'm simplifying to get to the concept). So my question basically is: why? Why use a CATiledLayer if it is not fed the right way? They could have used a normal UIImageView... So I made a few tests to see if I was right, modifying the code by simply adding a scroll view with an image view as a subview and implementing the scroll view delegate for zooming. Testing on device and simulator, I came to these conclusions:
- The memory impact and footprint are exactly the same, even during zooming and scrolling, and it doesn't surprise me at all: the image is decompressed in memory either way.
- Time Profiler says the tiled view takes more time to draw during scroll/zoom operations than the UIImageView, and that doesn't surprise me either; the UIImageView is already drawn.
- If I send a memory warning, nothing changes between the two solutions (simulator only).
- Testing Core Animation performance, I get the same results, around 60 FPS.
So what's the deal between those two views/layers? Why should I pick one instead of the other in this specific case? UIImageView seems to win the battle.
I hope that someone could help me to understand that.
They might perform the same for small images because then the only difference in terms of performance is that CATiledLayer draws on a background thread. Depending on the tile size, CATiledLayer could even be slower because it has to draw multiple tiles for one image.
BUT ...
the point of CATiledLayer is that you don't need to draw all the tiles, especially when zooming into a very, very large image. It is smart about knowing which parts are actually needed, and it is also smart about evicting tiles that are no longer needed.
For this mechanism to work you need to provide the individual parts of the image separately. We're talking about a total image size that probably cannot be held in memory uncompressed.
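To make the contrast concrete, here is the shape of a tiled view modeled on PhotoScroller's TilingView (the tile naming scheme and levels-of-detail value are assumptions):

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// A view backed by CATiledLayer: drawRect: is called once per visible
// 256x256 tile, on background threads, only when that tile scrolls into view.
@interface TiledImageView : UIView
@end

@implementation TiledImageView

+ (Class)layerClass { return [CATiledLayer class]; }

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *layer = (CATiledLayer *)self.layer;
        layer.tileSize = CGSizeMake(256, 256);
        layer.levelsOfDetail = 4; // one per pre-tiled zoom level
    }
    return self;
}

// Hypothetical loader for the pre-cut tiles sitting on disk.
- (UIImage *)tileForScale:(CGFloat)scale row:(int)row col:(int)col {
    NSString *name = [NSString stringWithFormat:@"tile_%0.0f_%d_%d",
                               scale * 100, col, row];
    return [UIImage imageWithContentsOfFile:
               [[NSBundle mainBundle] pathForResource:name ofType:@"png"]];
}

- (void)drawRect:(CGRect)rect {
    // The current transform tells us which zoom level is being drawn.
    CGFloat scale = CGContextGetCTM(UIGraphicsGetCurrentContext()).a;
    CGFloat tileSide = 256.0 / scale; // tile size in view coordinates
    int col = (int)(CGRectGetMinX(rect) / tileSide);
    int row = (int)(CGRectGetMinY(rect) / tileSide);
    [[self tileForScale:scale row:row col:col] drawInRect:rect];
}

@end
```

Only the visible tiles are ever decoded, which is exactly the property a plain UIImageView gives up once the image is too big to decompress whole.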