OpenGL ES 2.0 zooming at multiple resolutions - iOS

My iOS app draws 2D curves in an OpenGL ES view. The scene is very expensive to render (it can take 1-2 seconds), which means that, AFAIK, I can't rescale, redraw, and re-render for each incremental change in scale during a pinch-to-zoom gesture. I currently draw directly into a buffer that is rendered to the screen.
I think one way I can achieve zooming is by rendering to a texture at a given resolution and then rendering a quad with part of that texture (potentially scaled and translated). My guess is that this will double the memory I'm currently using (assuming, of course, I keep the texture at the same resolution as the screen). Can someone confirm that? Is there another way to do zooming without redrawing and without doubling graphics memory usage?
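A minimal sketch of that render-to-texture idea, assuming OpenGL ES 2.0 (width, height, screenFBO, drawCurves, and drawTexturedQuad are hypothetical names standing in for the existing code):

// One extra screen-sized RGBA texture (width*height*4 bytes), rendered once.
GLuint sceneTexture, sceneFBO;
glGenTextures(1, &sceneTexture);
glBindTexture(GL_TEXTURE_2D, sceneTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &sceneFBO);
glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, sceneTexture, 0);
drawCurves();   // the expensive 1-2 s pass, done once per scale level

// Every frame of the pinch gesture: a cheap pass into the on-screen
// framebuffer, sampling only the visible sub-rectangle of sceneTexture.
glBindFramebuffer(GL_FRAMEBUFFER, screenFBO);
drawTexturedQuad(sceneTexture, zoomRect);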
Now, if I want to maintain decent quality, I'll have to re-render at different resolutions. My initial thinking was to "manually" create a mipmap with, say, 2 levels (one texture for 100-150% zoom and another for 150-200% zoom). That leaves me with 1 buffer + 2 textures. I could, of course, re-render on panning, but I don't think the user experience would be great. Any thoughts on how I can improve that from a user-experience and/or memory perspective?

Since the scene already takes that long to draw, I would suggest tiling. You could render the scene at different resolutions at load time and save the output as images (write the image files to a temporary directory). With this approach you should have minimal memory consumption, and the user experience should be great.
If you do this, you should also consider whether you even want to use OpenGL to present the scene, since UIKit has some very nice facilities for presenting a large image in an image view inside a scroll view. Done this way, you can skip the GL-UIView binding and presentation entirely and move all the GL work to a separate thread, which means that when a scene needs to change you can re-render it in the background while the user keeps working on the current scene uninterrupted. Also, if you expect the user to swap between scenes, you can keep the saved images around and reuse them with no performance impact at all.
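For the saving step, a rough sketch of pulling the rendered scene out of GL into a UIImage (assumes an RGBA framebuffer is currently bound; error handling omitted):

// Read the bound framebuffer back to CPU memory and wrap it in a UIImage,
// which can then be written out with UIImagePNGRepresentation().
NSInteger bytesPerRow = width * 4;
NSMutableData *pixels = [NSMutableData dataWithLength:bytesPerRow * height];
glReadPixels(0, 0, (GLsizei)width, (GLsizei)height,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels.mutableBytes);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels.mutableBytes, width, height, 8,
                                         bytesPerRow, colorSpace,
                                         kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
// GL's origin is bottom-left, so this image is vertically flipped;
// flip it (or draw it with a flipped transform) before saving.
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);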

Related

How does UIImageView animate so smoothly? (Or: how to get OpenGL to efficiently animate frame by frame textures)

To clarify, I know that a texture atlas improves performance when using multiple distinct images. But I'm interested in how things are done when you are not doing this.
I tried doing some frame-by-frame animation manually in custom OpenGL, where each frame I bind a new texture and draw it on the same point sprite. It works, but it is very slow compared to UIImageView's ability to do the same thing. I load all the textures up front, but the rebinding happens every frame. By comparison, UIImageView accepts the individual images, not a texture atlas, so I'd imagine it is doing something similar.
These are 76 images loaded individually, not as a texture atlas, and each is about 200 px square. In OpenGL, I suspect the bottleneck is having to rebind a texture every frame. But how is UIImageView doing this? I'd expect a similar bottleneck. Is UIImageView somehow creating an atlas behind the scenes so that no rebinding of textures is necessary? Since UIKit ultimately has OpenGL running beneath it, I'm curious how this must be working.
If there is a more efficient means to animate multiple textures, rather than swapping out different bound textures each frame in OpenGL, I'd like to know, as it might hint at what Apple is doing in their framework.
If I did in fact get a new frame for each of 60 frames per second, it would take about 1.25 seconds to animate through my 76 frames. I do indeed get that with UIImageView, but the OpenGL version takes about 3-4 seconds.
I would say your bottleneck is somewhere else. OpenGL is more than capable of doing an animation the way you are doing it: since all the textures are already loaded and you just bind a different one each frame, there is no loading time or anything else involved. For comparison, I have an application that generates and deletes textures at runtime and can at some point have a great number of textures loaded on the GPU. I have to bind all of those textures every frame (not one per frame), while also using the depth buffer, the stencil buffer, multiple FBOs, and heavy user input, with about 5 threads funneled into 1 to process all the GL code, and I have no trouble with the FPS at all.
Since you are working on iOS, I suggest you run some profilers to see which code is responsible for the overhead. And even if the time profiler tells you that the line with glBindTexture is taking too long, I would still say the problem is somewhere else.
So to answer your question: it is normal and great that UIImageView does its work so smoothly, and there should be no problem achieving the same performance with OpenGL. THOUGH, there are a few things to consider at this point. How can you be sure the image view does not skip images? You might be setting a pointer to a different image 60 times per second, but the image view might only ask itself to redraw 30 times per second, using whatever image is assigned to it at that moment. With your GL code, on the other hand, you are forcing the application to redraw at 60 FPS regardless of whether it is capable of doing so.
Taking all this into consideration, there is a thing called a display link (CADisplayLink) that Apple's developers created for exactly what you want to do. The display link tells you how much time has elapsed between frames, and from that you should decide which texture to bind, rather than trying to force all of them into a time frame that might be too short.
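A sketch of what that could look like (textures, frameCount, totalDuration, and drawFrame are assumptions standing in for the asker's own animation state):

// Choose the frame from elapsed time instead of advancing one frame per tick.
- (void)startAnimation {
    self.startTime = CACurrentMediaTime();
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(tick:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)tick:(CADisplayLink *)link {
    CFTimeInterval elapsed = link.timestamp - self.startTime;
    // Frames are skipped automatically if rendering falls behind.
    NSUInteger frame = (NSUInteger)(elapsed / totalDuration * frameCount) % frameCount;
    glBindTexture(GL_TEXTURE_2D, textures[frame]);  // IDs loaded up front
    [self drawFrame];   // hypothetical: draws the sprite, presents the renderbuffer
}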
And another thing: I have seen that if you try to present the render buffer at 100 FPS on most iOS devices (possibly all), you will only get 60 FPS, because the method that presents the render buffer pauses your thread if it is called again in less than 1/60 s. That being said, it is effectively impossible to display anything above 60 FPS on iOS devices, and anything running at 30+ FPS is considered good.
"not as a texture atlas" is the sentence that is a red flag for me.
Using a texture atlas is a good thing: the texture is loaded into memory once, and then you just move the rectangle position to play the animation. It's fast because it's already all in memory. Any operation that involves constantly loading and reloading new image frames is going to be slower than that.
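For illustration, selecting the atlas rectangle per frame is typically just arithmetic on texture coordinates; a sketch, assuming the atlas is a simple grid of equal cells:

// Given a frame index in an atlas laid out as a grid of equal cells, compute
// the four corners' texture coordinates; only these change per frame, the
// texture itself stays bound.
void atlasUVForFrame(int frame, int cols, int rows, GLfloat *uv /* 8 floats */)
{
    GLfloat w = 1.0f / cols, h = 1.0f / rows;
    GLfloat u0 = (frame % cols) * w;
    GLfloat v0 = (frame / cols) * h;
    uv[0] = u0;     uv[1] = v0;        // corners of a triangle-strip quad
    uv[2] = u0 + w; uv[3] = v0;
    uv[4] = u0;     uv[5] = v0 + h;
    uv[6] = u0 + w; uv[7] = v0 + h;
}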
You'd have to post source code to get any more exact an answer than that.

On iOS, how do CALayer bitmaps (CGImage objects) get displayed onto Graphics Card?

On iOS, I was able to create 3 CGImage objects, and use a CADisplayLink at 60fps to do
self.view.layer.contents = (__bridge id) imageArray[counter++ % 3];
inside the ViewController, and each time, an image is set to the view's CALayer contents, which is a bitmap.
And this, all by itself, can alter what the screen shows. The screen just loops through these 3 images at 60 fps. There is no UIView drawRect:, no CALayer display or drawInContext:, and no CALayer delegate drawLayer:inContext:. All it does is change the CALayer's contents.
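For concreteness, the whole setup is roughly the following (a sketch; imageArray and counter are instance variables as described above):

// The complete loop: a display link fires up to 60 times per second and each
// tick repoints the layer's contents at the next cached bitmap.
- (void)viewDidLoad {
    [super viewDidLoad];
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(swap:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)swap:(CADisplayLink *)link {
    // No drawRect:, no drawInContext: -- just swap the cached bitmap.
    self.view.layer.contents = (__bridge id)imageArray[counter++ % 3];
}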
I also tried adding a smaller sublayer to self.view.layer and setting that sublayer's contents instead. That sublayer cycles through the 3 images just the same.
So this is very similar to the old days, even on the Apple ][ or in the King's Quest III era of DOS video games, where there is one bitmap and the screen constantly shows whatever that bitmap contains.
Except this time it is not one bitmap, but a tree or a linked list of bitmaps, and the graphics card constantly uses the painter's algorithm to paint those bitmaps (with position and opacity) onto the main screen. So it seems that drawRect, CALayer, and everything else were all designed to achieve this final purpose.
Is that how it works? Does the graphics card take an ordered list of bitmaps, or a tree of bitmaps, and constantly show them? (To simplify, let's ignore implicit animation in the Core Animation framework.) What is actually happening down at the graphics-card level? (And is this method essentially the same on iOS, Mac OS X, and PCs?)
(This question aims to understand how our graphics programming actually gets rendered on modern graphics cards, since to understand UIView and how CALayer works, or even to use CALayer's bitmap directly, we need to understand the graphics architecture.)
Modern display libraries (such as Quartz, used in iOS and Mac OS) use hardware-accelerated compositing. The workings are very similar to how computer graphics libraries such as OpenGL work. In essence, each CALayer is kept as a separate surface that is buffered and rendered by the video hardware, much like a texture in a 3D game. This is exceptionally well implemented in iOS, and it is why the iPhone is so well known for having a smooth UI.
In the "old days" (i.e. Windows 9x, Mac OS Classic, etc), the screen was essentially one big framebuffer, and everything that was exposed by e.g. moving a window had to be redrawn manually by each application. The redrawing was mostly done by the CPU, which put an upper limit on animation performance. Animation were usually very "flickery" due to the redrawing involved. This technique was mostly suited for desktop applications without too much animation. Notably, Android uses (or at least used to use) this technique, which is a big problem when porting iOS applications over to Android.
In games of the old days (e.g. DOS, arcade machines, etc.; the technique was also used a lot on Mac OS Classic), something called sprite animation was used to improve performance and reduce flickering: the moving images were kept in offscreen buffers that were rendered by the hardware and synchronized with the monitor's vblank, which meant that animations were smooth even on very low-end systems. However, the size of these images was very limited and the screen resolutions were low, only about 10-15% of the pixels of even a modern iPhone screen.
You've got a reasonable intuition here, but there are still several steps between contents and the display. First off, contents doesn't have to be a CGImage. It is often a private class called CABackingStorage, which is not quite the same thing. In many cases there are hardware optimizations going on to bypass rendering the image into main memory and then copying it to video memory. And since the contents of various layers are all composited together, you're still a ways from the "real" display memory. Not to mention that modifications to contents directly affect only the model layer, not the presentation or render layers. Plus there are CGLayer objects that can store their image directly in video memory. There's a lot of different stuff going on.
So the answer is no, the video "card" (chip; it's a PowerVR, by the way) does not take an ordered bunch of layers. It takes lower-level data in ways that are not well documented. Some things (particularly parts of Core Animation, and perhaps CGLayer) appear to be wrappers around OpenGL textures, but others are probably Core Graphics accessing the hardware directly. Once you get to this level of the stack, it's all private and can change from version to version and from device to device.
You may also find Brad Larson's response useful here:
iOS: is Core Graphics implemented on top of OpenGL?
You may also be interested in Chapter 6 of iOS:PTL. While it doesn't go into implementation specifics, it does include a lot of practical discussion of how to improve drawing performance and make the best use of the hardware with Core Graphics. Chapter 7 details all the developer-accessible steps involved in CALayer drawing.

On iPad, to keep a bitmap of the current main view, how does Core Graphics compare with cocos2D and OpenGL ES?

I heard that for a single-view app, it is best to keep the current main view's content in a bitmap, paint extra things onto that bitmap, and then, when it is time to do drawRect (when called by iOS), simply display the bitmap.
Is it true that Core Graphics is quite capable of handling this? And since both cocos2D and OpenGL ES are said to be super fast at handling graphics, would using one of them make the situation above smoother, or is it not that different from using Core Graphics?
(Or maybe it depends on the type of application, such as a simple drawing program versus a game that refreshes at 30 fps or 60 fps?)
It depends on whether you need to draw every frame and have your drawings change.
Check out the AccelerometerGraph sample code provided by Apple. This is one way of using Core Animation layers and Core Graphics (Quartz drawing) together to do continuous drawings across many frames without redrawing what has already been drawn.
Cocos2D is less for drawing and more for sprite-based animations where your sprites are pre-rendered images you have imported (PNG, etc.). It's very fast at handling many images simultaneously and offers batch nodes for consolidating all images into a single OpenGL call.
Quartz (Core Graphics) by itself is not usually the best option for continuous drawing; it is better for generating static images once that you then move around, if necessary.
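A minimal sketch of that "generate once, then move around" pattern (the drawing code in the middle is a placeholder):

// Render the expensive Quartz drawing once into an image...
CGSize size = self.view.bounds.size;
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
// ... expensive Core Graphics drawing here (paths, gradients, etc.) ...
UIImage *cached = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// ...then hand the bitmap to a layer. Moving or scaling the layer afterwards
// is pure GPU compositing and costs almost nothing per frame.
CALayer *contentLayer = [CALayer layer];
contentLayer.frame = (CGRect){CGPointZero, size};
contentLayer.contents = (__bridge id)cached.CGImage;
[self.view.layer addSublayer:contentLayer];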

Quartz Performance Drawing Large Buffers

I am wondering if what I'm attempting is just a bad idea. I'm currently working in MonoTouch. Is it possible to draw a screen-sized buffer (on my iPhone 4 it's about 320x460) onto a UIView of equal size fast enough that animated changes to that buffer look smooth to the end user (I need it to be around 20 ms per draw)?
I've attempted many different implementations. The best one so far seems to be using an in-memory CGLayer and calling context.DrawLayer() to apply it to the view inside Draw(). But even that takes 30-40 ms per DrawLayer.
I'm writing my own tile-image control, and aside from performance, the idea is working well. I just can't figure out how to get the buffer onto the UIView fast enough.
Any ideas?
I've been dealing with custom views a lot lately, and I've had a bunch of performance problems, too.
All of these performance issues could be solved by determining which elements need to be redrawn and, more importantly, which elements do not need to be redrawn.
Then split the contents of the layer into individual sublayers and only redraw them when necessary. The good thing is that animations and so on are very smooth for those individual layers (their content is only a simple bitmap and does not change until you tell it to).
The only limitation I've come across is that you cannot use CG blend modes (e.g. multiply) for the sublayers; as far as I know, that is not possible. You can only use those blend modes inside the CG code used to draw the contents of the sublayers, but after that they are all composited in "normal" mode.
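A sketch of the sublayer split described above (StaticPartLayer, markerImage, and newPosition are hypothetical names):

// A layer subclass whose expensive drawing runs once; Core Animation then
// caches the result as a bitmap and recomposites it for free.
@interface StaticPartLayer : CALayer
@end
@implementation StaticPartLayer
- (void)drawInContext:(CGContextRef)ctx {
    // ... expensive Quartz drawing for the static content ...
}
@end

// In the view: one sublayer per independently changing element.
StaticPartLayer *background = [StaticPartLayer layer];
background.frame = self.bounds;
[self.layer addSublayer:background];
[background setNeedsDisplay];   // drawn once, then cached

CALayer *marker = [CALayer layer];               // the part that moves
marker.frame = CGRectMake(0.0, 0.0, 40.0, 40.0);
marker.contents = (__bridge id)markerImage.CGImage;  // pre-rendered bitmap
[self.layer addSublayer:marker];

// Moving the marker later recomposites two cached bitmaps on the GPU;
// drawInContext: does not run again unless you call setNeedsDisplay.
marker.position = newPosition;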
It really depends on what you are drawing.
If you are just drawing a solid filled color, that should not be a problem. The question is how much of the surface you are changing, and how you are changing it.
Again, it depends on what you are drawing and whether you can offload some of the work to the GPU. For example, if you have static parts of your interface that remain the same, or that are animated/updated independently, you could use a different layer for those areas and let the GPU composite them.
Layers have the advantage that they are composited by the GPU and are backed by their own bitmaps. Once you draw into a layer's surface, the OS caches the result on the GPU and composites all of your layers at the same time.
Then you can determine which parts of your application actually need to be redrawn and only redraw those sections on each frame.
But again, it really will depend a lot on what you are trying to do.
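For the redraw-only-what-changed part, one concrete mechanism (sketched here under the assumption of a UIView-based drawing surface) is partial invalidation:

// Invalidate only the region that changed; UIKit clips drawing to it.
[self setNeedsDisplayInRect:dirtyRect];   // dirtyRect: the area you modified

// In the view, honor the rect passed to drawRect: and skip everything else.
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextClipToRect(ctx, rect);   // belt and braces; UIKit already clips
    [self drawContentInRect:rect];    // hypothetical: draws only what intersects rect
}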

iOS: playing a frame-by-frame greyscale animation in a custom colour

I have a 32-frame greyscale animation of a diamond exploding into pieces (i.e. 32 PNG images @ 1024x1024).
My game uses 12 separate colours, so I need to be able to perform the animation in any desired colour.
This, I believe, rules out any Apple frameworks, and it also rules out a lot of public code for frame-by-frame animation on iOS.
What are my potential solution paths?
These are the best SO links I have found:
Faster iPhone PNG Animations
frame by frame animation
Is it possible using video as texture for GL in iOS?
That last one just shows it may be possible to load an image into a GL texture each frame (he is doing it from the camera, so if I have everything stored in memory, it should be even faster).
I can see these options (listed laziest first, most optimised last):
option A
Each frame (courtesy of CADisplayLink), load the relevant image from file into a texture and display that texture.
I'm pretty sure this is stupid, so on to option B.
option B
Preload all images into memory.
Then proceed as above, only loading from memory rather than from file.
I think this is going to be the ideal solution (a sketch follows below); can anyone give it a thumbs up or thumbs down?
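A sketch of what option B's inner loop might look like (animTexture, frames, and drawQuad are hypothetical; each frames[i] is a pre-decoded width*height*4-byte RGBA buffer):

// All PNGs are decoded into raw RGBA buffers once at startup; each tick we
// upload the current frame from RAM into one reused texture and draw it.
glBindTexture(GL_TEXTURE_2D, animTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, frames[currentFrame]);
drawQuad();
// Note: at 1024x1024 RGBA this is a 4 MB RAM->VRAM transfer per frame,
// which is exactly the cost the glTexSubImage2D idea below tries to amortize.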
option C
Preload all of my PNGs into a single GL texture of the maximum size, creating a texture atlas. Each frame, set the texture coordinates to the rectangle in the atlas for that frame.
While this is potentially a perfect balance between coding efficiency and performance efficiency, the main problem here is losing resolution: on older iOS devices the maximum texture size is 1024x1024. If we are cramming 32 frames into this (really this is the same as cramming 64), we would be at 128x128 for each frame. If the resulting animation is close to full screen on the iPad, this isn't going to hack it.
option D
Instead of loading into a single GL texture, load into a bunch of textures.
Moreover, we can squeeze 4 images into a single texture by using all four channels.
I baulk at the sheer amount of fiddly coding required here. My RSI starts to tingle just thinking about this approach.
I think I have answered my own question here, but if anyone has actually done this or can see a way through, please answer!
If something higher-performance than (B) is needed, it looks like the key is glTexSubImage2D: http://www.opengl.org/sdk/docs/man/xhtml/glTexSubImage2D.xml
Rather than pulling one frame at a time across from memory, we could arrange, say, 16 512x512 8-bit greyscale frames contiguously in memory, send that across to GL as a single 1024x1024 32-bit RGBA texture, and then split it within GL using the above function.
This would mean that we are performing one [RAM->VRAM] transfer per 16 frames rather than per one frame.
Of course, for more modern devices we could get 64 instead of 16, since more recent iOS devices can handle 2048x2048 textures.
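The "split it within GL" step isn't spelled out above; one workable scheme (an assumption on my part, borrowing option D's channel packing rather than the contiguous layout) is to store one greyscale frame per RGBA channel and pick the frame in the fragment shader, which conveniently also handles the colouring:

// One RGBA texture carries four greyscale frames, one per channel.
// u_channelMask is (1,0,0,0) for frame 0, (0,1,0,0) for frame 1, and so on,
// so advancing the animation is a uniform update rather than a texture
// rebind, and u_tintColor applies any of the 12 game colours for free.
static const char *kFragmentShader =
    "precision mediump float;\n"
    "uniform sampler2D u_texture;\n"
    "uniform vec4 u_channelMask;  // selects the frame (channel)\n"
    "uniform vec4 u_tintColor;    // the desired diamond colour\n"
    "varying vec2 v_texCoord;\n"
    "void main() {\n"
    "    float grey = dot(texture2D(u_texture, v_texCoord), u_channelMask);\n"
    "    gl_FragColor = vec4(u_tintColor.rgb * grey, grey);\n"
    "}\n";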
I will first try technique (B) and leave it at that if it works (I don't want to over-engineer), and look into this if needed.
I still can't find any way to query how many GL textures can be held on the graphics chip. I have been told that when you try to allocate memory for a texture, GL just returns 0 when it has run out of memory. However, to implement this properly I would want to make sure I am not sailing too close to the wind resource-wise; I don't want my animation to use up so much VRAM that the rest of my rendering fails...
You should be able to get this working just fine with the Core Graphics APIs; there is no reason to dive deep into OpenGL for a simple 2D problem like this. For the general approach to creating coloured frames from a greyscale frame, see colorizing-image-ignores-alpha-channel-why-and-how-to-fix. Basically, you need to use CGContextClipToMask() and then render a specific colour, so that what is left is the diamond coloured in with the specific colour you selected. You could do this at runtime, or you could do it offline and create one video for each of the colours you want to support. It is easier on your CPU if you do the operation N times and save the results into files, but modern iOS hardware is much faster than it used to be.
Beware of memory-usage issues when writing video-processing code; see video-and-memory-usage-on-ios-devices for a primer that describes the problem space. You could code it all up with texture atlases and complex OpenGL, but an approach that makes use of videos would be a lot easier to deal with, and you would not need to worry so much about resource usage; see my library linked in the memory post for more info if you are interested in saving time on the implementation.
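A sketch of the clip-to-mask colouring step described above (assumes the greyscale frame's alpha, or its luminance for an alpha-less greyscale PNG, defines the diamond's shape):

// Colourize one greyscale frame: clip to the frame used as a mask, then fill
// with the chosen colour.
UIImage *colouredFrame(UIImage *greyFrame, UIColor *tint) {
    CGRect bounds = (CGRect){CGPointZero, greyFrame.size};
    UIGraphicsBeginImageContextWithOptions(greyFrame.size, NO, greyFrame.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Core Graphics contexts are flipped relative to UIKit; compensate first.
    CGContextTranslateCTM(ctx, 0.0, bounds.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);

    CGContextClipToMask(ctx, bounds, greyFrame.CGImage);
    [tint setFill];
    CGContextFillRect(ctx, bounds);
    // To keep the greyscale shading instead of a flat fill, draw greyFrame
    // first and then fill using kCGBlendModeMultiply.

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}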
