Draw elements with Core Graphics or provide images? - iOS

I am not quite sure whether it is beneficial to draw the visual elements of my app with Core Graphics instead of providing the images. In terms of memory preservation and runtime speed, which way is better?

In terms of memory preservation and runtime speed, which way is better?
+[UIImage imageNamed:] is the most efficient. It caches images, i.e. only one copy of an image is in memory, and the image is decoded (from its PNG, JPEG, TIFF, etc. data) when it is needed and kept around for future reuse. If you are worried about memory use, iOS will purge the UIImage cache if you are running low or go into the background.
Using Core Graphics to draw an image does not do any caching for you, unless you write the code to draw your image into a context, save the context as a bitmap, cache the bitmap and then reuse it later on. So you end up drawing the same thing over and over every time it is needed. For example, if you override UIView's -drawRect: to draw imagery, then during animations it will be called for every single frame (60 times a second). This needlessly burns CPU cycles and battery life.
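For example, a hedged sketch of the "draw once, cache the bitmap, reuse it" idea (`imageView` is a placeholder for your own view, and the red ellipse stands in for whatever Core Graphics drawing you actually do):

    // Sketch: render the Core Graphics drawing once into a UIImage and reuse
    // that bitmap, instead of repeating the drawing in -drawRect: every pass.
    CGSize size = CGSizeMake(100.0, 100.0);
    UIGraphicsBeginImageContextWithOptions(size, NO, [UIScreen mainScreen].scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextFillEllipseInRect(ctx, CGRectMake(0.0, 0.0, size.width, size.height));
    UIImage *cached = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    imageView.image = cached;   // reused every frame, no further CPU drawing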
Bottom line is it depends on what your app is and does.

If you don't need your images to change or animate much, then you should directly use an image. Don't worry so much about performance unless you have something like 100 images in a single view controller.
If the iPhone can handle games like Need for Speed, running an app with various images is an easy task.
Hope this helps.

Related

OpenGL ES image texture preloading on iOS

I am creating a photo slideshow with complex transitions between images on iOS. Core Animation doesn't suit the purpose, as the possible transitions are limited, so I am resorting to OpenGL ES 2.0. The problem is that uploading images to the GPU and creating a texture is a time-consuming operation; it takes roughly 200 ms even for a 960x640 image, which is not suitable for a real-time playback scenario. And it's not feasible to pre-create all the textures beforehand, as there could be hundreds of them. I wonder how Core Animation deals with this problem and stays smooth no matter how many CGImages you assign in animations? (As long as the images are presented at different times and not together.)
Texture loading is time consuming, and most applications dealing with a large number of textures load them during some initialisation. That is the simplest approach, but surely the most resource-consuming one. You must understand that what goes on behind the scenes is reading an image file, decompressing it, creating raw RGB(A) data on the CPU, allocating memory on the GPU and sending the raw data to the GPU...
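For a rough idea of what that decode-and-upload path looks like in code (a sketch only; `image` is assumed to be the UIImage you loaded, and all error handling and extra texture parameters are omitted):

    // Sketch of the decode-and-upload path (requires CoreGraphics and
    // OpenGLES/ES2/gl.h).
    CGImageRef cgImage = image.CGImage;                       // `image` = your UIImage
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    GLubyte *pixels = calloc(width * height * 4, sizeof(GLubyte));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);   // CPU decode
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);

    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);                   // upload to GPU
    free(pixels);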
The best approach for dealing with a large number of textures is to load them in the background, preferably even before you need them. In your case, as already mentioned in the comment, you will need to create some smart cache of these textures. That will still not be enough, since the loading itself might make your thread unresponsive; you will need a background task to handle those images.
What I suggest is creating 2 additional threads. The first should load the image data on the CPU, while the second pushes the data to the GPU. The first thread is pretty straightforward, while the second needs a bit of additional GL code. Each thread needs its own OpenGL context to be able to communicate with the GPU, so once you create the thread you also need to create an extra context. These contexts are not aware of each other's resources, which means a texture created in one context will be unusable in the other. For this you need an extra parameter called a share group: first create the share group, then create both contexts with the same share group so the textures are accessible from both. Do note that a context is preferably created on the thread you intend to use it on (it might be enough to simply set it as current there, though).
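A minimal sketch of that share-group setup (assuming OpenGL ES 2.0; `loaderQueue`, `width`, `height` and `pixels` are placeholders, and the actual decode/upload would look like the snippet above):

    // Sketch: one context per thread, joined by a common share group so the
    // textures uploaded on the loader thread are usable by the renderer.
    EAGLContext *mainContext   = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    EAGLContext *loaderContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2
                                                       sharegroup:mainContext.sharegroup];

    dispatch_async(loaderQueue, ^{                       // `loaderQueue` = your background queue
        [EAGLContext setCurrentContext:loaderContext];   // make it current on this thread
        GLuint texture;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);   // as in the snippet above
        glFlush();   // finish the upload before another context uses the texture
        // hand `texture` back to the renderer, e.g. through a cache keyed by image name
    });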

iOS - View high resolution image

I want to display a high-resolution image in an image view. Here is what I do:
Prepare UIImage in the background
When the file is loaded, switch to main thread
Display the image [ by using UIImageView's setImage: ]
The problem is that there is a lag in the 3rd step. It takes a few seconds to load the file, and if it's a large image, I get a memory warning and a crash. So is there a way I can draw images from top to bottom (like a browser does)? Also, I need to preserve the quality of the image.
Is there a way I can achieve this? Thanks in advance.
And if it's a large image, I get a memory warning and a crash.
It seems you are hitting some "hard" (memory-related) limit of your device by trying to load and display the image into memory all at once.
So is there a way I can draw images from top to bottom (like a browser does)?
You should try CATiledLayer, which provides a way to draw very large images without incurring a memory hit.
Here you can find a tutorial about it.
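For reference, a bare-bones sketch of a CATiledLayer-backed view (the tile size and level-of-detail values are arbitrary starting points, and the actual per-tile drawing is left as a stub):

    // Sketch of a CATiledLayer-backed view; drawRect: is called per tile, on
    // background threads, so only the visible tiles are rendered and kept around.
    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    @interface TiledImageView : UIView
    @end

    @implementation TiledImageView

    + (Class)layerClass {
        return [CATiledLayer class];
    }

    - (instancetype)initWithFrame:(CGRect)frame {
        if ((self = [super initWithFrame:frame])) {
            CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
            tiledLayer.tileSize = CGSizeMake(256.0, 256.0);   // arbitrary starting point
            tiledLayer.levelsOfDetail = 4;
        }
        return self;
    }

    - (void)drawRect:(CGRect)rect {
        // Draw only the part of the source image that falls inside `rect`,
        // e.g. from pre-cut tile files or a CGImageSource.
    }

    @end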
You could also take a look at the PhotoScrollerNetwork project:
blazingly fast tile rendering - visually much much faster than Apple's code (which uses png files in the file system)
you supply a single jpeg file or URL and this code does all the tiling for you, quickly and painlessly
UIImage will not decode its underlying image data until it is actually requested, which will happen on the main thread when you assign it to an image view. So although you are trying to load the image in a background thread, the work is being delayed and actually occurs on the main thread.
The workarounds to this issue are pretty hacky and usually involve writing the image to a new graphics context to force the decoding, and then creating a CGImageRef or UIImage from that. This all takes place on the background thread. By the time you ship the new image to the main thread, all the decoding has already taken place and you shouldn't see a delay. This question has some answers which demonstrate this technique.
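A minimal sketch of that force-decode approach (here `path` and `imageView` are placeholders; the exact technique in the linked answers may differ slightly):

    // Sketch: force the decode on a background queue, then ship the
    // already-decoded image to the main thread.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *raw = [UIImage imageWithContentsOfFile:path];

        UIGraphicsBeginImageContextWithOptions(raw.size, NO, raw.scale);
        [raw drawAtPoint:CGPointZero];                        // decoding happens here
        UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        dispatch_async(dispatch_get_main_queue(), ^{
            imageView.image = decoded;                        // no decode hit on the main thread
        });
    });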
Drawing images from top-to-bottom is not possible with the standard APIs provided by Apple (as far as I know). You would have to write your own streaming image decoder which would be a significant task.
You can use the SDWebImage framework, which gives you:
1. An asynchronous image downloader
2. Automatic image caching
3. Avoidance of duplicate downloads
You can download it from: https://github.com/rs/SDWebImage
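A minimal usage sketch (note that the exact method names depend on the SDWebImage version; newer releases use the sd_ prefix shown here):

    // Usage sketch: download, decode and cache happen off the main thread.
    #import <SDWebImage/UIImageView+WebCache.h>

    [self.imageView sd_setImageWithURL:[NSURL URLWithString:@"https://example.com/photo.jpg"]
                      placeholderImage:[UIImage imageNamed:@"placeholder"]];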
Regarding the memory warning: it is not only due to your application; it reflects the combined memory use of all the applications running in the background, so try closing the background applications.

Handle large images in iOS

I want to allow the user to select a photo, without limiting the size, and then edit it.
My idea is to create a thumbnail of the large photo with the same size as the screen for editing, and then, when the editing is finished, use the large photo to make the same edit that was performed on the thumbnail.
When I use UIGraphicsBeginImageContext to create a thumbnail image, it will cause a memory issue.
I know it's hard to edit the whole large image directly due to hardware limits, so I want to know if there is a way I can downsample the large image to less than 2048*2048 without memory issues?
I found that Android has a BitmapFactory class with an inSampleSize option that can downsample a photo. How can this be done on iOS?
You need to load the image with a UIImage that doesn't actually decode the image into memory yet, and then create a bitmap context at the size of the resulting image you want (this determines the amount of memory used). Then you iterate a number of times, drawing tiles from the original image (this is where parts of the image data are loaded into memory) obtained with CGImageCreateWithImageInRect into the destination context using CGContextDrawImage.
See this sample code from Apple.
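For illustration, a rough sketch of that tile-by-tile downsizing loop, in the spirit of Apple's LargeImageDownsizing sample (`path` and `targetWidth` are placeholders, and the 512-pixel strip height is an arbitrary choice):

    // Sketch: decode the large source in horizontal strips and draw each strip,
    // scaled, into a small destination context.
    UIImage *sourceImage = [UIImage imageWithContentsOfFile:path];   // not decoded yet
    CGImageRef source = sourceImage.CGImage;
    size_t sourceWidth  = CGImageGetWidth(source);
    size_t sourceHeight = CGImageGetHeight(source);
    CGFloat scale = targetWidth / (CGFloat)sourceWidth;             // e.g. aim below 2048 px

    UIGraphicsBeginImageContextWithOptions(CGSizeMake(sourceWidth * scale, sourceHeight * scale), YES, 1.0);
    const size_t stripHeight = 512;
    for (size_t y = 0; y < sourceHeight; y += stripHeight) {
        size_t h = MIN(stripHeight, sourceHeight - y);
        CGImageRef strip = CGImageCreateWithImageInRect(source, CGRectMake(0, y, sourceWidth, h));
        // Only this strip's pixels are decoded and held in memory at a time.
        [[UIImage imageWithCGImage:strip] drawInRect:CGRectMake(0, y * scale, sourceWidth * scale, h * scale)];
        CGImageRelease(strip);
    }
    UIImage *downsized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();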
Large images don't fit in memory. So loading them into memory to then resize them doesn't work.
To work with very large images you have to tile them. Lots of solutions are out there already; for example, see if this one can solve your problem:
https://github.com/dhoerl/PhotoScrollerNetwork
I implemented my own custom solution, but that was specific to our environment, where we had an image tiler running server-side already and I could just request specific tiles of large images (we made a server for it; it's really cool).
The reason tiling works is that you basically only ever keep the visible pixels in memory, and there aren't that many of those. All tiles not currently visible are paged out to the disk cache (or flash memory cache, as it were).
Take a look at this work by Trevor Harmon. It improved my app's performance. I believe it will work for you too.
https://github.com/coryalder/UIImage_Resize

Why is -drawRect faster than using CALayers/UIViews for UITableViews?

I can already hear the wrenching guts of a thousand iOS developers.
No, I am not a noob.
Why is -drawRect faster for UITableView performance than having multiple views?
I understand that compositing operations take place on the GPU. But compositing is a one-time operation; once the layers are committed to memory, it is no different from a cached buffer that, from the point of view of the GPU, gets translated in and out of view. Compare this to using Core Graphics in drawRect, which employs an unknown number of operations on the CPU to produce pixels that end up getting cached in CALayers anyway. What's the difference if it all ends up cached and flattened anyway?
Also, if you're handling cell reuse properly, you shouldn't need to regenerate views on each call to -cellForRowAtIndexPath. In fact, there may be a performance benefit to having the state data (font, font size, text color, attributes, etc.) cached by UIView/CALayer objects rather than having it constantly recreated during -drawRect.
Why the craze for drawRect? Can someone give me pointers?
When you're talking about optimization, you need to provide specific situations, conditions and limitations, because optimization is all about micro-management. Otherwise, it's meaningless.
What's the basis of your "faster"? How did you measure it? What are the numbers?
For example, a no-op or very simple -drawRect: can be faster, but that doesn't mean it always is.
I don't know the internal design of Core Animation either, so here are my guesses.
In the case of static content
It's weird that your drawing code is being called constantly, because CALayer caches the drawing result and won't draw again until you send a setNeedsDisplay message. If you don't update the cell's content, it's just the same as a single bitmap layer, and it should be faster than multiple composited layers because it avoids the compositing cost. If you're using only a small number of cells, few enough that they can all exist in the pool at the same time, they don't need to be updated at all. As RAM gets larger, this is more and more likely to happen on recent models.
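As a tiny illustration of that caching behaviour, here is a hypothetical drawRect-based cell content view (`_title` is a made-up ivar); the layer's backing store is rendered once, and -drawRect: does not run again until something invalidates it:

    // Sketch: the bitmap produced by -drawRect: is cached in the layer and
    // only regenerated when setNeedsDisplay is sent.
    - (void)setTitle:(NSString *)title {
        if (![_title isEqualToString:title]) {
            _title = [title copy];
            [self setNeedsDisplay];        // only now will -drawRect: run again
        }
    }

    - (void)drawRect:(CGRect)rect {
        [[UIColor whiteColor] setFill];
        UIRectFill(rect);
        [_title drawAtPoint:CGPointMake(10.0, 10.0)
             withAttributes:@{ NSFontAttributeName : [UIFont systemFontOfSize:16.0] }];
    }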
In the case of dynamic content
If it is being updated constantly, that means you're actually updating it yourself, so your layer-composited version would also be updated constantly. That means it is being composited again for every frame, and it could be slower depending on how complex and large it is. If it's complex and large and has a lot of overlapping areas, it could be slower; I guess CA will draw everything strictly if it can't determine which areas are safe to ignore, unlike your own code, where you can choose what to draw or not.
In the case where the actual drawing is done on the CPU
Even if you configure your view as a pure composition of many layers, each sublayer still has to be drawn eventually, and the drawing of their content is not guaranteed to be done on the GPU. For example, I believe CATextLayer draws itself on the CPU (because drawing text with polygons on current mobile GPUs doesn't make sense from a performance perspective), and some filtering effects do too. In that case, the overall cost would be similar, plus it requires the compositing cost.
In the case of a well-balanced load between CPU and GPU
If your GPU is very busy under heavy load because there are too many layers or direct OpenGL drawings, your CPU may be idle. If your CG drawing can be done within that idle CPU time, it could be faster than putting more load on the GPU.
None of these is your case?
If your case is none of the situations I listed above, I really want to see and check the CG code that draws faster than CA compositing. I wish you would attach some source code.
Well, your program could easily end up moving and converting a lot of pixel data if it goes back and forth between GPU- and CPU-based renderers.
As well, many layers can consume a lot of memory.
I'm only seeing half the conversation here, so I might have misunderstood. Based on my recent experiences optimizing CALayer rendering, and investigating the ways Apple does(n't) optimize stuff you'd expect to be optimized...
What's the difference if it all ends up cached and flattened anyway?
Apple ends up creating a separate GPU element per layer. If you have lots of layers, you have lots of GPU elements. If you have one drawRect, you only have one element. Apple often does NOT flatten those, even where they could (and possibly "should").
In many cases, "lots of elements" is no issue. But if they get to be large ... or there are enough of them ... or they're bad sizes for OpenGL ... AND (see below) they get stored in CPU memory instead of on the GPU, then things start to get nasty. NB: in my experience:
"enough": 40+ in memory
"large": 100x100 points (200x200 retina pixels)
Apple's code for GPU elements / buffers is well optimized in MOST places, but in a few places it's very POORLY optimized. The performance drop is like going off a cliff.
Also, if you're handling cell reuse properly, you shouldn't need to
regenerate views on each call to -cellForRowAtIndexPath
You say "properly", except ... IIRC Apple's docs tell people not to do it that way, they go for a simpler approach (IMHO: weak docs), and instead re-populate all the subviews on every call. At which point ... how much are you saving?
FINALLY:
...doesn't all this change with iOS 6, where the cost of creating a UIView is greatly reduced? (I haven't profiled it yet, just been hearing about it from other devs)

Quartz Performance Drawing Large Buffers

I am wondering if what I'm attempting is just a bad idea. I'm currently working in MonoTouch. Is it possible to draw a screen-sized (on my iPhone 4 it's about 320x460) buffer onto a UIView of equal size fast enough so that animated changes to that buffer look smooth to the end user (it needs to be around 20 ms per draw)?
I've attempted many different implementations. The best one so far seems to be using an in-memory CGLayer and calling context.DrawLayer() to apply it to the view inside of Draw(). But even that takes 30-40ms per DrawLayer.
I'm writing my own tile-image control, and aside from performance, the idea is working well. I just can't figure out how to get the buffer onto the UIView fast enough.
Any ideas?
I've been dealing with custom views a lot lately, and I've had a bunch of performance problems, too.
All of these performance issues could be solved by determining the elements that need to be redrawn, and, more importantly, the elements that do not need to be redrawn.
Then, split the contents in the layer into individual sublayers and only redraw them if necessary. The good thing is, animations and so on are very smooth for those individual layers. (Their content is only a simple bitmap and does not change until you tell it to).
The only limitation I've come across was that you cannot use CG blend modes (e.g. multiply) for the sublayers. As far as I know that is not possible. You can only use those blend modes inside the CG code used to draw the contents of the sublayers, but after that they are all composited in "normal" mode.
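To make the sublayer idea concrete, here is a rough sketch (the `renderStaticBackground` and `renderOverlay` helpers and `newPosition` are hypothetical; each helper would render its part into a UIImage with Core Graphics):

    // Sketch: static content goes into one layer's bitmap, dynamic content into
    // a separate small layer, so only the small layer is ever redrawn.
    CALayer *backgroundLayer = [CALayer layer];
    backgroundLayer.frame = self.bounds;
    backgroundLayer.contents = (__bridge id)[self renderStaticBackground].CGImage;  // drawn once with CG
    [self.layer addSublayer:backgroundLayer];

    CALayer *overlayLayer = [CALayer layer];
    overlayLayer.frame = CGRectMake(0.0, 0.0, 64.0, 64.0);
    [self.layer addSublayer:overlayLayer];

    // When only the dynamic part changes, replace just its small bitmap:
    overlayLayer.contents = (__bridge id)[self renderOverlay].CGImage;

    // Moving it around is pure GPU compositing, with no CPU drawing at all:
    overlayLayer.position = newPosition;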
It really depends on what you are drawing.
If you are just drawing a solid filled color, that should not be a problem. The question is how much of the surface you are changing, and how you are changing it.
Again, it depends on what you are drawing and whether you could offload some of the work to the GPU. For example if you have static parts of your interface that will remain the same, or are animated/updated independently, you could use a different layer for those areas and let the GPU compose those.
Layers have the advantage that they are composited by the GPU, and they are backed by their own bitmaps. Once you draw into the surface of the layer, the OS will cache the result in the GPU and compose all of your layers at the same time.
Then you can determine which parts of your application actually need to be redrawn and only redraw those sections on each frame.
But again, it really will depend a lot on what you are trying to do.
