How to set a background image in GLKViewController (iPhone, OpenGL ES)?

I'm drawing a line using OpenGL ES on the iPhone. I used the GLView and GLKView delegates to draw the line. Now I want to set a background image on the GLKViewController and draw the line on top of it. I tried setting the background image with a UIImageView, but the image appears on top of the line drawn in the GLKView.
How can I set a background image on the GLKViewController?

The easiest way is to add an additional view with the background image (as you proposed), but you need to place it behind the OpenGL view. Try this:
[RootViewController.view insertSubview:BackgroundView belowSubview:OpenGLView];
OpenGLView.opaque = NO;
But this brings an FPS penalty (Apple recommends an opaque OpenGL view for performance reasons). The more correct approach is to draw the background via OpenGL as a full-screen quad.

I just want to state that the accepted answer is a very bad one... You will lose an almost immediate 20 FPS, even on newer devices (iPhone 5, iPad 2 and newer), by turning off opaque. The performance penalty is horrible and should not be taken lightly.
You can set a background in OpenGL and keep your view opaque: convert the UIImage to an OpenGL texture and render it yourself. Depending on your OpenGL setup, this can be done easily in a number of ways.
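For example, with GLKit you can load the image into a texture once and draw it as a full-screen quad at the start of each frame, before the line. A minimal sketch, assuming a GLKView/GLKViewController setup (backgroundEffect and backgroundTexture are illustrative names):

// One-time setup: turn the UIImage into an OpenGL texture.
NSError *error = nil;
GLKTextureInfo *backgroundTexture =
    [GLKTextureLoader textureWithCGImage:[UIImage imageNamed:@"background"].CGImage
                                 options:nil
                                   error:&error];
GLKBaseEffect *backgroundEffect = [[GLKBaseEffect alloc] init];
backgroundEffect.texture2d0.name = backgroundTexture.name;
backgroundEffect.texture2d0.enabled = GL_TRUE;

// Every frame, before drawing the line: render a full-screen textured quad.
// Positions are in clip space, so no matrices are needed; the texture
// coordinates are flipped vertically to match UIKit's image orientation.
GLfloat quad[] = { -1, -1,   1, -1,   -1, 1,   1, 1 };
GLfloat uv[]   = {  0,  1,   1,  1,    0, 0,   1, 0 };
[backgroundEffect prepareToDraw];
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, quad);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, uv);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

Since the background covers every pixel, the view itself can stay opaque and you keep the fast path.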

Related

Interactive blurring of UIImage and UIView like iOS 8 spotlight

iOS 8 introduces some pretty snazzy interactive blurring. Most notably, there's the interactive blur when you pull down for Spotlight, but there's also the animation when opening and closing Siri (though that one isn't interactive). I've only noticed this interactive blur in one other place: the official Twitter app, when pulling down on a profile view (the parallax header image zooms and sometimes blurs).
I've attempted to animate something basic with a UISlider using both CoreImage and GPUImage (based on the answer to this question, and also Apple's UIImage+ImageEffects), but nothing seems performant enough to animate the blur interactively (i.e. blurring an image to a single value works quickly once, but not at a framerate fast enough to blur continuously).
How can I implement these methods so that they are performant enough to both blur and unblur a UIImage (and ideally a UIView or CIContext snapshot) interactively?
There is no single simple way of doing it, but it's definitely doable if you follow these steps (some of them optional):
Most important: downsample the image. Full resolution matters very little for a Gaussian blur, and downsampling to just half resolution cuts the amount of data to a quarter!
Define the end target blur radius.
Retrieve the device's architecture with the help of C functions and use different saturation-delta values for different architectures, according to their processing power, of course.
Experiment with creating the blur using Apple's provided library, stepping the radius along with the parameter you're interacting with (KVO on the contentOffset property, for example). Do the work with dispatch_async, and don't forget to call back to the main queue with the blurred image (see the sketch after this list).
The methods above will almost certainly cover all architectures from armv7s onwards, but you might still have some issues on armv7 (the iPhone 4S).
If you still have issues, as on the mentioned armv7, double the contentOffset change required to trigger the next blur with the next radius. Then, instead of changing the image property on the UIImageView, create a new UIImageView with the new blurred UIImage and fade its alpha from 0 to 1 while the next blurred image is being created.
You can use a number of tricks, like creating all the blurred images one after another for the full interactive scale, caching them in a collection, and using them with the method described in the previous point.
There are also many other techniques if the animation is not interactive but instead runs over a fixed duration.
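A minimal sketch of the downsample-then-blur-off-the-main-thread idea, assuming Apple's UIImage+ImageEffects category is in the project (sourceImage, radius, and blurredImageView are illustrative names):

// Downsample once: half resolution quarters the pixel data the blur must touch.
CGSize small = CGSizeMake(sourceImage.size.width / 2.0, sourceImage.size.height / 2.0);
UIGraphicsBeginImageContextWithOptions(small, YES, 1.0);
[sourceImage drawInRect:CGRectMake(0, 0, small.width, small.height)];
UIImage *downsampled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Blur on a background queue for the current interaction value, then hop back.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    UIImage *blurred = [downsampled applyBlurWithRadius:radius
                                              tintColor:nil
                                  saturationDeltaFactor:1.0
                                              maskImage:nil];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.blurredImageView.image = blurred; // always touch UIKit on the main queue
    });
});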

Darken an opaque UIView without blending

My App's background is an opaque UIImageView. Under some circumstances I would like to darken this down in an animated way from full brightness to about 50%. Currently I lower the alpha property of the view and this works well. Because nothing is behind the view, the background image just becomes dark.
However, I've been profiling using the Core Animation Instrument and when I do this, I see that the whole background shows as being blended. I'd like to avoid this if possible.
It seems to me that this should be achievable during compositing. If a view is opaque, it is possible to mix it with black without anything behind showing through. There's no need to blend it; the pixel values just need to be adjusted.
I wondered if this was something that UIKit's GPU compositing supports. While blending isn't great, it's probably a lot better than updating the image on the CPU, so I think a CPU approach is probably not a good substitute.
Another question asks about this, and a few ideas are suggested, including setting the alpha. No one has brought up a mechanism for avoiding blending, though.
An important question here is whether you want the change to a darkened background to be animated.
Not animated
Prepare two different background images and simply swap between them. The UIImage+ImageEffects category could help with generating the darkened image, or at least give you some leads.
Animated
Take a look at GPUImage - "an open source iOS framework for GPU-based image and video processing". Based on this, you could render the background into the scene in a darkened way.
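For the non-animated case, a sketch of generating the darkened variant with Core Graphics (background is your original opaque UIImage; the 0.5-alpha black fill approximates the 50% darkening):

// Build the darkened copy once, into an opaque context.
UIGraphicsBeginImageContextWithOptions(background.size, YES, background.scale);
[background drawAtPoint:CGPointZero];
[[UIColor colorWithWhite:0.0 alpha:0.5] setFill];
UIRectFillUsingBlendMode(CGRectMake(0, 0, background.size.width, background.size.height),
                         kCGBlendModeNormal);
UIImage *darkened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Swapping between the two opaque images composites with no blending.
backgroundImageView.image = darkened;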

CoreGraphics (drawRect) for drawing labels and a UIImageView in a UITableViewCell

I have a UITableViewCell that contains five UILabels, a UIButton, and a UIImageView that fills the cell as a background. Performance seems a bit slow, so I was thinking of using CoreGraphics to improve it. Is it true that drawing with CoreGraphics instead of using UILabels as subviews will make things much faster? If so, why?
I have the following code to draw shadow on the cells:
[self.layer setBorderColor:[UIColor blackColor].CGColor];
[self.layer setShadowColor:[UIColor blackColor].CGColor];
[self.layer setShadowRadius:10.0];
[self.layer setCornerRadius:5.0];
[self.layer setShadowPath:[[UIBezierPath bezierPathWithRect:self.bounds] CGPath]]; // shadowPath is in the layer's own coordinate space, so bounds, not frame
In general (as Gavin indicated), I would say that you first have to confirm that the subviews are indeed causing the jitter in your scrolling.
When I'm testing UITableViewCell scrolling performance, I often use the Time Profiler in Instruments. Switch to Objective-C Only in the left-hand panel, and look at what is taking the most time on your Main Thread. If you see lots of time spent on rearranging (layout) or drawing of subviews, you may need to use CoreGraphics. If the time instead is spent on allocation/deallocation, then you may want to examine how your subviews are reused if at all. If they are not being reused, then of course this can cause performance problems.
Then of course, you should look at compositing. If your subviews are not opaque (identify this through the CoreAnimation instrument), then the performance may be seriously impacted.
Also of note -- realize that shadows are expensive to draw, and depending on your implementation, they may be redrawn on every frame! Your best option is to make sure that any CALayer shadows are fully rasterized and have a shadowPath defined, so live computations from the pixel mask don't have to be made.
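Concretely, that means something like this on the cell's layer (a sketch; note the shadow path is in the layer's own coordinate space, hence bounds):

// An explicit path stops Core Animation deriving the shadow from the
// pixel mask on every frame.
self.layer.shadowPath = [UIBezierPath bezierPathWithRect:self.bounds].CGPath;

// Cache the rendered layer tree, shadow included, as a bitmap.
self.layer.shouldRasterize = YES;
self.layer.rasterizationScale = [UIScreen mainScreen].scale; // keep retina output sharp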
If finally, you identify that the layout and redrawing of each subview individually is causing the slowdown, then I have a couple of points/explanations:
Your implementation of the drawing routine for your table view cell will probably be slower than the highly optimized drawing that Apple has written for its views. So you won't win any battles re-implementing drawing of UIImageView itself. Performance gains instead come from two places when drawing with CoreGraphics: a.) Pre-rendering of previously non-opaque views, and reduction of time spent in the layout phase of the view drawing cycle - this reduces the workload on the GPU/CPU when scrolling. b.) Reduction in time switching CG contexts for individual view drawing. Each element now draws into the same graphics context at the same time, reducing switching costs.
Drawing in drawRect using CoreGraphics on the main thread draws using the CPU, and depending on how complex your cells are, this may cause jitters of its own. Instead, consider drawing in a background thread to a separate CGContext, then dispatching a worker to insert the contents of the drawing as a CGImageRef into a CALayer, or as a UIImage in a UIImageView. There is a naive implementation on GitHub: https://github.com/mindsnacks/MSCachedAsyncViewDrawing
If you do decide to go with background CoreGraphics drawing, be warned that at present (December 2012), I believe there is a bug in the NSString drawing categories when drawing on background threads that results in a fallback to WebKit, which is absolutely not thread safe. This will cause a crash, so for the present time, make sure the asynchronous drawing is done in a serial GCD/NSOperation queue.
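A minimal sketch of that pattern with a serial queue, as advised above (drawQueue, drawCellContentInContext:size:, and cellImageView are illustrative names; MSCachedAsyncViewDrawing wraps the same idea):

// A serial queue sidesteps the NSString drawing thread-safety issue above.
dispatch_queue_t drawQueue = dispatch_queue_create("com.example.cell-drawing", DISPATCH_QUEUE_SERIAL);

CGSize size = self.bounds.size;
dispatch_async(drawQueue, ^{
    // Render the whole cell's content into an offscreen, opaque context.
    UIGraphicsBeginImageContextWithOptions(size, YES, [UIScreen mainScreen].scale);
    [self drawCellContentInContext:UIGraphicsGetCurrentContext() size:size]; // hypothetical drawing method
    UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    dispatch_async(dispatch_get_main_queue(), ^{
        cellImageView.image = rendered; // hand the bitmap back on the main thread
    });
});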
On the simulator, Debug→Color Blended Layers. Red is bad, green is good.
More accurately, red means that the GPU needed to do an alpha blend. The main cost is in the memory bandwidth required to draw the pixels twice and possibly re-fetch the extra texture. Completely transparent pixels are really bad.
Three fixes (all of which reduce the amount of red), which you should consider before diving into Core Graphics:
Make views opaque when possible. Labels with the background set to [UIColor clearColor] can often be given a flat colour instead (see the sketch after this list).
Make views with transparency as small as possible. For labels, this involves using -sizeToFit/-sizeThatFits: and adjusting layout appropriately.
Remove the alpha channel from opaque images (e.g. if your cell background is an image) — some image editors don't do this, and it means the GPU needs to perform an alpha test and might need to render whatever's behind the image.
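For the first fix, a sketch of what an opaque label looks like in practice (cellBackgroundColor is whatever flat colour sits behind the label):

// An opaque label over a known flat colour composites with no blending.
label.opaque = YES;
label.backgroundColor = cellBackgroundColor;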
Additionally, turn on Color Offscreen-Rendered (possibly after turning off Color Blended Layers so it's easier to see). Offscreen-rendered content appears in yellow and usually means you've applied a layer mask. Masks can be very bad for performance; I don't know whether Core Animation caches the masked result.
Finally, you can make Core Animation rasterize the cell by setting cell.layer.shouldRasterize = YES (you might also need cell.layer.rasterizationScale = [UIScreen mainScreen].scale on retina devices; I forget if this is done automatically). The main benefit is that it's easy, and it's also more efficient than rendering the image view yourself in Core Graphics. (The benefit for labels is reduced, since the text still needs to be rendered on the CPU.)
Also note that view animations will be affected. I forget how CALayer.shouldRasterize behaves during animations (it might re-rasterize on every frame, which is a bit wasteful when it will only be drawn to screen once), but using Core Graphics will (by default) stretch the rendered content during the animation. See CALayer.contentsGravity.
What evidence do you have to suggest that what you have in the views is causing the performance issues? This is a deep black hole that can suck you in, so be sure you know the problem is where you think it is.
Are you preloading all your data? Have you pre-downloaded the images? What you're describing shouldn't be causing a slowdown in a UITableViewCell. Apple's developers are much smarter than you and me, so make sure you've got the data to back up your decision!
I've also seen a lagging UITableViewCell within the simulator with no noticeable difference on real hardware.
It is true that using CoreGraphics can speed up your draw performance but it can also slow it down if you do it wrong! Have a look at the Apple Tutorial on Advanced Table View Cells for how to perform the technique.

Replicating UIView drawRect in OpenGL ES

My iOS application draws into a bitmap (same size as my view) using Core Graphics. I want to push updated regions of the bitmap to the screen. (I've used the standard UIView drawRect method but I have some good reasons to switch to OpenGL).
I just want to replicate the same behavior as UIView/CALayer drawRect but in an OpenGL view. Essentially I would like to update dirty rectangles on my OpenGL view. Nothing more.
So far I've been able to create an OpenGL ES 1.1 view and push my entire bitmap on screen using a single quad (texture on a vertex array) for each update of my bitmap. Of course, this is pretty inefficient since I only need to refresh the dirty rectangle, not the whole view.
What would be the most efficient way to do that in OpenGL ES? Should I use a lattice of quads and update the texture of the quads that intersect with my dirty rectangle? (If I were to use that method, should I use VBO?) Is there a better way to do that?
FYI (just in case), I won't need rotation but will need to scale the entire OpenGL view.
UPDATE:
This method indeed works. However, there's a bug in iOS 5.x on retina display devices that produces an artifact when using single buffering. The problem has been fixed in iOS 6. I don't yet have a workaround.
You could simply update part of the texture using glTexSubImage2D and redraw your standard full-screen quad with the scissor rect (glScissor) set to the "dirty" part. GL will then not draw any fragments outside that rect.
For this to work, you must of course use single buffering.
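A sketch of that approach in ES 1.1 (textureName is your existing texture; dirty is the invalidated CGRect; pixels points at the dirty region's data, assumed tightly packed RGBA since ES 1.1 has no GL_UNPACK_ROW_LENGTH):

// Upload only the dirty region into the existing texture.
glBindTexture(GL_TEXTURE_2D, textureName);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                (GLint)dirty.origin.x, (GLint)dirty.origin.y,
                (GLsizei)dirty.size.width, (GLsizei)dirty.size.height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Restrict rasterization to the dirty rect, then redraw the usual quad.
// Note GL's origin is bottom-left, so flip dirty.origin.y if your rects
// are in UIKit's top-left coordinate space.
glEnable(GL_SCISSOR_TEST);
glScissor((GLint)dirty.origin.x, (GLint)dirty.origin.y,
          (GLsizei)dirty.size.width, (GLsizei)dirty.size.height);
drawFullScreenQuad(); // your existing full-screen quad drawing
glDisable(GL_SCISSOR_TEST);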

Make an EAGLView transparent?

Is it possible to place an EAGLView object inside a UIViewController and make the EAGLView's background transparent, in order to see what is behind it?
Thanks.
UPDATE:
I've tried what appears in this post, but the layer of the EAGLView still appears as a black square. :(
Any idea why this is not working for me?
openGLView.opaque = NO;
Also pay attention to the opacity in the alpha channel of your OpenGL framebuffer.
And note Tark's answer about the performance drop.
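A sketch of the pieces that have to line up for a transparent CAEAGLLayer (openGLView is the view backed by the CAEAGLLayer): the drawable format needs an alpha channel, the layer must be non-opaque, and the clear colour needs zero alpha.

CAEAGLLayer *eaglLayer = (CAEAGLLayer *)openGLView.layer;
eaglLayer.opaque = NO;
eaglLayer.drawableProperties = @{
    kEAGLDrawablePropertyRetainedBacking : @NO,
    kEAGLDrawablePropertyColorFormat     : kEAGLColorFormatRGBA8
};

// When rendering, clear to transparent so UIKit content shows through.
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);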
There is no such thing as an EAGLView, but it is definitely possible to have a CAEAGLLayer with a transparent background on top of UIKit content. It is not recommended, however; from the docs:
Because an OpenGL ES rendering surface is presented to the user using Core Animation, any effects and animations you apply to the layer affect the 3D content you render. However, for best performance, do the following:
Set the layer’s opaque attribute to TRUE.
Set the layer bounds to match the dimensions of the display.
Make sure the layer is not transformed.
Avoid drawing other layers on top of the CAEAGLLayer object. If you must draw other, non OpenGL content, you might find the performance cost acceptable if you place transparent 2D content on top of the GL content and also make sure that the OpenGL content is opaque and not transformed.
When drawing landscape content on a portrait display, you should rotate the content yourself rather than using the CAEAGLLayer transform to rotate it.
