Is iOS using dirty regions by default?

Is iOS using some form of dirty region rendering by default while rendering applications?
For example, if I have a graph-paper background graphic on a text input field and the user scrolls through it, will the whole graphic be redrawn every time, or only the part that has changed? (In the graph-paper example: if a pixel was white before and will still be white after the next rendering, will it be redrawn or left alone?)
Again, this question is mostly out of curiosity and not out of programming need. I did not find anything about this in the developer documentation.

Richard J. Ross III's comment is not the only evidence. Apple has reserved the right to do partial redraws even though the iPhone has had full on-GPU composition from day one, so it isn't a legacy thing. Furthermore, the advice always handed out in OpenGL sessions is to avoid compositing OpenGL on OpenGL, or any two rapidly changing views, since it makes the compositor do a lot of extra work that you can usually avoid (in that specific case, by moving all the content into one OpenGL view).
I would therefore assume the compositor has some notion of updating only dirty parts of the layer hierarchy.
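The app-facing counterpart of this is worth noting: UIKit itself hands your drawRect: only the region that was invalidated. A minimal sketch, assuming a hypothetical GraphPaperView class with a made-up textDidChangeInRect: method:

#import <UIKit/UIKit.h>

@interface GraphPaperView : UIView
@end

@implementation GraphPaperView

// Invalidate only the region that actually changed; UIKit coalesces
// these rects and passes the (possibly unioned) dirty area into drawRect:.
- (void)textDidChangeInRect:(CGRect)changedRect
{
    [self setNeedsDisplayInRect:changedRect];
}

- (void)drawRect:(CGRect)rect
{
    // `rect` is the invalidated region; the backing store outside it is
    // preserved, so unchanged graph-paper pixels are not redrawn.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextFillRect(context, rect);
    // ... draw grid lines and glyphs intersecting `rect` ...
}

@end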

Related

Does UIView transparency affect the performance of an app?

I'm creating an app with up to 40 UIViews, where each view holds a drawing of a stick that can appear in several positions (rotated to a 30-degree angle, a 45-degree angle, etc.). The background of each view is transparent. These views can intersect each other, so I need the UIViews to be transparent in order for the user to see the drawings of both the overlapped and the overlapping view. I wonder whether all this transparency across 40 UIViews seriously affects the performance of the application, and how I can track how much memory and CPU my app currently uses.
I recommend watching WWDC 2012 Session 238 - iOS App Performance: Graphics and Animations, which covers these questions.
As a broad answer:
The iPhone will probably handle your 40-view requirement fine, but it's impossible to know for sure without trying it out, and without more context (are they being animated? Are they scrolling?).
More views create more performance problems, because all of the views need to be packaged up and shipped off to be rendered (by backboardd, I think).
Transparency will hurt application performance. I believe the core reason is that transparent views need to be drawn in an off-screen buffer rather than simply being painted over existing content (something like that).
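A rough sketch of how that plays out in code, with made-up helper names (MakeStickView, MakeBoardBackground): only the overlapping stick views keep transparency, while everything that is fully opaque says so, letting the compositor copy it without blending.

#import <UIKit/UIKit.h>

// A stick view genuinely needs per-pixel blending, so it stays non-opaque.
static UIView *MakeStickView(CGRect frame, UIImage *stickImage)
{
    UIImageView *stick = [[UIImageView alloc] initWithImage:stickImage];
    stick.frame = frame;
    stick.backgroundColor = [UIColor clearColor];
    stick.opaque = NO;
    return stick;
}

// The board background has no transparent pixels; marking it opaque lets
// the compositor blit it without reading what is underneath.
static UIView *MakeBoardBackground(CGRect frame)
{
    UIView *board = [[UIView alloc] initWithFrame:frame];
    board.backgroundColor = [UIColor whiteColor];
    board.opaque = YES;
    return board;
}

The Core Animation instrument's "Color Blended Layers" option will highlight exactly which views force blending.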
Use Instruments for Profiling
Profile your GPU usage using the OpenGL ES Driver instrument (look at 'Device Utilization')
Measure CPU usage using Time Profiler
Measure FPS and check for common performance problems using the Core Animation instrument
I wouldn't bother thinking about this until you actually see performance issues. If you do, I can't recommend that WWDC session enough; it covers things like what strategy to take when optimizing (e.g. moving work to the GPU as long as it can handle more), the basics of profiling, and tips and tricks based on the implementation details of iOS.

CGContextDrawLayerAtPoint is slow on iPad 3

I have a custom view (inherited from UIView) in my app. The custom view overrides
- (void) drawRect:(CGRect) rect
The problem is that drawRect: takes many times longer on iPad 3 than on iPad 2 (about 0.1 second on iPad 3 versus 0.003 second on iPad 2). That is about 30 times slower.
Basically, I am using some pre-created layers and draw them in the drawRect:. The last call
CGContextDrawLayerAtPoint(context, CGPointZero, m_currentLayer);
takes most of the time (about 95% of total time in drawRect:)
What might be slowing things so much and how should I fix the cause?
UPDATE:
There are no threads directly involved. I call setNeedsDisplay in one thread and drawRect: gets called from another, but that's it. The same goes for locks (no locks are used).
The view gets redrawn in response to touches (it's a coloring book app). On iPad 2 I get reasonable delay between a touch and an update of the screen. I want to achieve the same on iPad 3.
So, the iPad 3 is definitely slower in a lot of areas. I have a theory about this. Marco Arment noted that renderInContext: is ridiculously slow on the new iPad. I also found this to be the case when trying to create a magnifying glass for a custom text view. In the end I had to forgo renderInContext: in favor of custom Core Graphics drawing.
I've also been hitting the dreaded wait_fences errors in my Core Graphics drawing here: Only on new iPad 3: wait_fences: failed to receive reply: 10004003.
This is what I've figured out so far. The iPad 3 obviously has four times the pixels to drive. This can cause problems in two places:
First, the CPU. All Core Graphics drawing is done by the CPU. In the case of rotation events, if the CPU takes too long to draw, it hits the wait_fences error, which I believe is simply a call that tells the device to wait a little longer before actually performing the rotation, hence the delay.
Second, transferring images to the GPU. The GPU obviously handles the Retina resolution just fine (see Infinity Blade 2). But when Core Graphics draws, it draws its images directly into the GPU buffers to avoid a memcpy. However, either the GPU buffers haven't changed since the iPad 2 or they just weren't made large enough, because it's remarkably easy to overload them. When that happens, I believe the CPU writes the images to standard memory and then copies them to the GPU once the GPU buffers can accept them. This, I think, is what causes the performance problems: that extra copy is time-consuming with so many pixels and slows things down considerably.
To avoid memcpy I recommend several things:
Only draw what you need. Avoid drawing anything offscreen at all costs. If you're drawing a large view but only displaying part of it (subviews covering it, for example), try to find a way to draw only what is visible.
If you have to draw a large view, consider breaking it up into parts, either as subviews or sublayers (probably sublayers in your case), and only redraw what you need. Take the Notability app, for example: when you zoom in, you can literally watch it redraw one square at a time. And in Safari you can watch it update squares as you scroll. Unfortunately, I haven't had to do this myself, so I'm uncertain of the methodology (see the tiling sketch after this list).
Try to keep your drawing simple. I had an awesome-looking custom Core Text view that had to redraw on every character entered. Very slow. I changed the background to plain white (drawn in Core Graphics) and it sped up nicely. Even better would be not to redraw the background at all.
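For the square-at-a-time redrawing mentioned above, the usual mechanism is CATiledLayer, which invokes drawRect: once per tile with only that tile's rect (and, be warned, does so on background threads). A minimal sketch; TiledDrawingView is a hypothetical name:

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface TiledDrawingView : UIView
@end

@implementation TiledDrawingView

// Back this view with a CATiledLayer instead of a plain CALayer.
+ (Class)layerClass
{
    return [CATiledLayer class];
}

- (instancetype)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        ((CATiledLayer *)self.layer).tileSize = CGSizeMake(256, 256);
    }
    return self;
}

- (void)drawRect:(CGRect)rect
{
    // Called once per dirty tile; draw only what intersects `rect`.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextFillRect(context, rect);
    // ... draw the portion of the page that falls inside `rect` ...
}

@end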
I would like to point out that my theory is conjecture. Apple doesn't really explain what exactly they do. My theory is just based on what they have said and how the iPad responds as well as my own experimentation.
UPDATE
So Apple has now released the 2012 WWDC developer videos. Two of them may help you (a developer account is required):
iOS App Performance: Responsiveness
iOS App Performance: Graphics and Animation
One thing they talk about that I think may help you is the method setNeedsDisplayInRect:(CGRect)rect. Using this method instead of plain setNeedsDisplay, and making sure that your drawRect: method only draws the rect given to it, can greatly help performance. Personally, I use CGContextClipToRect(context, rect); to clip my drawing to the rect provided.
As an example, I have a separate class that I use to draw text directly into my views using Core Text. My UIView subclass keeps a reference to this object and uses it to draw its text rather than using a UILabel. I used to refresh the entire view (setNeedsDisplay) when the text changed. Now I have my Core Text object calculate the changed CGRect and use setNeedsDisplayInRect: to update only the portion of the view that contains the text. This really helped my performance when scrolling.
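A sketch of that invalidation pattern inside the UIView subclass; textChangedFromRect:toRect: and the textRenderer property are hypothetical stand-ins for the Core Text helper described above:

// Both the old and the new text extents need repainting, so invalidate
// their union rather than the whole view.
- (void)textChangedFromRect:(CGRect)oldRect toRect:(CGRect)newRect
{
    [self setNeedsDisplayInRect:CGRectUnion(oldRect, newRect)];
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClipToRect(context, rect);          // never touch pixels outside `rect`
    [self.textRenderer drawInContext:context];   // hypothetical Core Text helper
}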
I ended up using the approach described in Kurt Revis's answer to a similar question.
I minimized the number of layers used, added a UIImageView, and set its image to a UIImage wrapping my CGImageRef. Please read the mentioned answer for more details about the approach.
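In outline, the approach looks something like this; m_currentImage stands in for the CGImageRef the app already renders offscreen:

// Created once, in place of the custom drawRect: view:
UIImageView *canvasView = [[UIImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:canvasView];

// After each stroke, wrap the updated CGImageRef and hand it to the image
// view; Core Animation uploads and composites it with no drawRect: involved.
canvasView.image = [UIImage imageWithCGImage:m_currentImage];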
In the end my application became even simpler than before and works at almost identical speed on iPad 2 and iPad 3.

On iOS, how do CALayer bitmaps (CGImage objects) get displayed onto Graphics Card?

On iOS, I was able to create 3 CGImage objects and use a CADisplayLink at 60 fps to do
self.view.layer.contents = (__bridge id) imageArray[counter++ % 3];
inside the ViewController, and each time, an image is set to the view's CALayer contents, which is a bitmap.
And this, all by itself, alters what the screen shows. The screen just loops through these 3 images at 60 fps. There is no UIView drawRect:, no CALayer display or drawInContext:, and no delegate drawLayer:inContext:. All it does is change the CALayer's contents.
I also tried adding a smaller sublayer to self.view.layer and setting that sublayer's contents instead; that sublayer then cycles through the 3 images the same way.
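For concreteness, a fuller sketch of that experiment; the three CGImageRefs in imageArray are assumed to be created elsewhere:

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface CyclingViewController : UIViewController
{
    CGImageRef imageArray[3]; // assumed to be filled in elsewhere
    NSUInteger counter;
}
@end

@implementation CyclingViewController

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(tick:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)tick:(CADisplayLink *)link
{
    // No drawRect:, no displayLayer:; just swapping the backing bitmap.
    self.view.layer.contents = (__bridge id)imageArray[counter++ % 3];
}

@end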
So this is very similar to the old days of the Apple ][, or the King's Quest III era of DOS video games, where there is one bitmap and the screen constantly shows what that bitmap contains.
Except this time it is not one bitmap, but a tree or a linked list of bitmaps, and the graphics card constantly uses the painter's algorithm to paint those bitmaps (with position and opacity) onto the main screen. So it seems that drawRect:, CALayer, everything, was designed to achieve this final purpose.
Is that how it works? Does the graphics card take an ordered list of bitmaps, or a tree of bitmaps, and then constantly show them? (To simplify, let's not consider implicit animation in the Core Animation framework.) What is actually happening down at the graphics-card level? And is this method essentially the same on iOS, Mac OS X, and PCs?
(This question aims to understand how our graphics programming actually gets rendered on modern graphics cards, since, for example, to understand UIView and how CALayer works, or even to use CALayer's bitmap directly, we need to understand the graphics architecture.)
Modern display frameworks (such as Quartz, used in iOS and Mac OS X) use hardware-accelerated compositing. The workings are very similar to computer graphics libraries such as OpenGL. In essence, each CALayer is kept as a separate surface that is buffered and rendered by the video hardware, much like a texture in a 3D game. This is exceptionally well implemented in iOS, and it is why the iPhone is so well known for having a smooth UI.
In the "old days" (i.e. Windows 9x, Mac OS Classic, etc), the screen was essentially one big framebuffer, and everything that was exposed by e.g. moving a window had to be redrawn manually by each application. The redrawing was mostly done by the CPU, which put an upper limit on animation performance. Animation were usually very "flickery" due to the redrawing involved. This technique was mostly suited for desktop applications without too much animation. Notably, Android uses (or at least used to use) this technique, which is a big problem when porting iOS applications over to Android.
In games of the old days (e.g. DOS games, arcade machines, etc.; the technique was also used a lot on Mac OS Classic), something called sprite animation was used to improve performance and reduce flicker: the moving images were kept in offscreen buffers that were rendered by the hardware and synchronized with the monitor's vblank, which meant that animations were smooth even on very low-end systems. However, the size of these images was very limited, and the screen resolutions were low, only about 10-15% of the pixels of even a present-day iPhone screen.
You've got a reasonable intuition here, but there are still several steps between contents and the display. First off, contents doesn't have to be a CGImage. It is often a private class called CABackingStorage, which is not quite the same thing. In many cases there are hardware optimizations going on that bypass rendering the image into main memory and then copying it to video memory. And since the contents of the various layers are all composited together, you're still a ways from the "real" display memory. Not to mention that modifying contents directly affects only the model layer, not the presentation or render layers. On top of that, there are CGLayer objects that can store their image directly in video memory. There's a lot of different stuff going on.
So the answer is no, the video "card" (chip; it's a PowerVR, by the way) does not take an ordered bunch of layers. It takes lower-level data in ways that are not well documented. Some things (particularly parts of Core Animation, and perhaps CGLayer) appear to be wrappers around OpenGL textures, but others are probably Core Graphics accessing the hardware directly. Once you get to this level of the stack, it's all private and can change from version to version and from device to device.
You may also find Brad Larson's answer useful here:
iOS: is Core Graphics implemented on top of OpenGL?
You may also be interested in Chapter 6 of iOS:PTL. While it doesn't go into the implementation specifics, it does include a lot of practical discussion of how to improve drawing performance and best utilize the hardware with Core Graphics. Chapter 7 details all the developer-accessible steps involved in CALayer drawing.

Making a "piece of paper with text on it" in OpenGL (Specifically on iOS 5)

I've never done OpenGL, but I'm looking for some pointers on this particular question for an AR app I'm practicing with.
I'd like to make an app with a "flat rectangle" along with text written on the surface of the rectangle. Visually, I'm imagining something along the lines of a piece of paper with text written on it. Each time the app starts, the text would be something different (the text is pulled from a plist file).
The user would be able to view the paper from all sides, much as if there were a piece of paper hanging in front of him.
Is this trivial to do in OpenGL? How could I get started?
Sorry for the really open-ended question, but I wanted to get a feel for how this kind of thing is done.
Looking at the OpenGL template source code in the Xcode sample projects, I see that there is a big array of vertices. I presume that to create a "flat" rectangle, I'd essentially just have to zero out the z-axis. As for the dynamic text that will be attached to the surface of the flat rectangle, I don't have any idea how to do that.
This question is hard to answer unambiguously. In general, this is trivial, but then again it is not.
Drawing a "flat rectangle with something on it" is a couple of API calls, as simple as it can get. Drawing text in OpenGL in an efficient way, and high quality, and without big preprocessing is an entirely different story.
What I would do is render the text using whatever the normal system-supported way is under iOS (just as you would draw in any window; I don't know that specific detail), but draw into a bitmap rather than onto the screen. This should be supported; pretty much every OS has supported it for at least 10-15 years. Then turn this bitmap into a texture, bind it, and draw a trivial flat quad with OpenGL (set up a vertex buffer with 4 vertices, each with a texture coordinate, and draw two triangles). As easy as it gets.
The huge advantage is that you get to use the installed system fonts (or any fonts available), you don't need to generate a bitmap font, you don't need to think about really ugly things such as hinting and proper spacing, and it's much easier to mix different text styles. OpenGL has some rudimentary support for text too, of course, but it is neither terribly efficient nor nice. If the text does not change every millisecond, it's really best to render it with the standard renderer the operating system provides. Yes, that probably won't be hardware-accelerated, but so what: since the user must read the text, it likely won't change every millisecond anyway.
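A sketch of the render-to-bitmap-then-texture step using iOS 5-era APIs; the size, font, and the CreateTextTexture name are illustrative, and a GL context is assumed to be current:

#import <UIKit/UIKit.h>
#import <OpenGLES/ES2/gl.h>

static GLuint CreateTextTexture(NSString *text)
{
    // 1. Let UIKit render the string into a bitmap-backed context.
    CGSize size = CGSizeMake(256, 256);
    UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
    [[UIColor whiteColor] set];
    UIRectFill(CGRectMake(0, 0, size.width, size.height));
    [[UIColor blackColor] set];
    [text drawInRect:CGRectMake(10, 10, 236, 236)
            withFont:[UIFont systemFontOfSize:18]]; // iOS 5-era string drawing
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 2. Extract raw RGBA pixels from the resulting CGImage.
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    void *pixels = calloc(width * height, 4);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8,
        width * 4, space, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);

    // 3. Upload the pixels as a GL texture to map onto the quad.
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    free(pixels);
    return texture;
}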
Now it gets more complicated if your "piece of paper" should bend and twist too, or do a page-peel effect rather than being just a flat rectangle. In that case you need to tessellate it, which can be harder than it sounds. Not all tessellations look good for all bends/twists, or they do but don't have the optimal (read: minimum) number of vertices.
There is an article on the page-peel effect and this kind of tessellation in one of the GPU Gems or GPU Pro books; let me search...
There: Andreas Bizzotto: "A Shader-Based eBook Reader - Page peeling effect", GPU Pro2 pp. 278-299
Maybe you can get hold of a copy or are lucky enough to find it on Google Books or something.

Clear single viewport in DirectX 10

I am preparing to start on a C++ DirectX 10 application that will consist of multiple "panels" to display different types of information. I have had some success experimenting with multiple viewports on one RenderTargetView. However, I cannot find a definitive answer regarding how to clear a single viewport at a time. These panels (viewports) in my application will overlap in some areas, so I would like to be able to draw them from "bottom to top", clearing each viewport as I go so the drawing from lower panels doesn't show through on the higher ones. In DirectX 9, it seems that there was a Clear() method of the device object that would clear only the currently set viewport. DirectX 10 uses ClearRenderTargetView(), which clears the entire drawing area, and I cannot find any other option that is equivalent to the way DirectX 9 did it.
Is there a way in DirectX 10 to clear only a viewport/rectangle within the drawing area? One person speculated that the only way might be to draw a quad in that space. Another possibility would seem to be a separate RenderTargetView for each panel, but I would like to avoid that because it requires other redundant resources, such as separate depth/stencil buffers (unless that is a misunderstanding on my part).
Any help will be greatly appreciated! Thanks!
I would recommend using one render target per "viewport", and compositing them together using quads for the final view. I know of no way to scissor a clear in DX 10.
Also, according to the article here, "An array of render-target views may be passed into ID3D10Device::OMSetRenderTargets, however all of those render-target views will correspond to a single depth stencil view."
Hope this helps.
Could you not just create a shader, together with the appropriate blend-state settings, and a square mesh (or other mesh shape), and use it to clear the area you want cleared? I haven't tried this, but I think it can be done.
