Why does the GLKView have context properties in Interface Builder?

If I create a GLKView in Interface Builder, I see several configurable properties related to the GL context (such as color format). Why do these properties exist if I have to create a context manually?

Colour format, whether or not there's a depth buffer, and so on are properties of a framebuffer, not of a context. Unlike desktop OpenGL, ES has had framebuffer-object functionality from day one, so the two unrelated things were never conflated: the context you create manually holds GL state, while the properties exposed in Interface Builder describe the drawable framebuffer that the view manages for you.
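For illustration, here is a minimal sketch of the same split done in code rather than Interface Builder; the context is created and assigned manually, while the framebuffer-related properties are set on the view (the specific format values below are just example choices):
// In a view controller whose view is a GLKView (sketch, not required values).
GLKView *view = (GLKView *)self.view;
// The context is what you create manually; it holds GL state.
view.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
// These mirror the Interface Builder properties: they describe the drawable
// framebuffer the view creates for you, not the context.
view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
view.drawableStencilFormat = GLKViewDrawableStencilFormatNone;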

Related

In iOS Core Graphics, what is a graphicsContext?

When we do:
CGContextRef ctx = UIGraphicsGetCurrentContext();
what exactly is ctx? Apparently it's a struct. Where is the struct defined? What are its members?
What is a graphics context?
A graphics context represents a drawing destination. The destination can be a window in an application, a bitmap image, a PDF document, or a printer.
If you want to draw into a view, the view is your graphics context; if you want to draw into an image, that image becomes your graphics context.
So, to do custom drawing with Core Graphics, you must first get the graphics context (the destination where you want your drawing to end up). Once you have the context, you draw with the Core Graphics functions; almost every Core Graphics function takes a context as a parameter, so you typically fetch the current context and pass it along with each call.
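As a small illustration (a sketch, not part of the original answer), this is the usual pattern inside a UIView subclass's drawRect:, where UIKit has already made a context for the view current:
// UIView subclass: UIKit prepares the current context before calling drawRect:.
- (void)drawRect:(CGRect)rect
{
    // The destination here is the view itself.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Core Graphics calls take the context as their first parameter.
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextFillRect(ctx, CGRectInset(rect, 10.0, 10.0));
}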
How can you obtain a graphics context?
You can obtain a graphics context by using the Quartz graphics context creation functions, or by using higher-level functions provided in the Carbon, Cocoa, or Printing frameworks.
For example:
Quartz provides creation functions for various flavors of Quartz graphics contexts, including bitmap images and PDF documents.
The Cocoa framework provides functions for obtaining window graphics contexts, and the Printing framework provides functions that obtain a graphics context appropriate for the destination printer.
What does a graphics context contain?
It contains drawing parameters and all device-specific information needed to render the paint to the destination.
Source:
https://developer.apple.com/library/ios/documentation/graphicsimaging/conceptual/drawingwithquartz2d/dq_layers/dq_layers.html
It is a pointer to a struct. The struct is opaque: you never touch its members directly; you just use the functions that operate on it.
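For reference, the public header declares it as an opaque pointer, roughly like this (simplified from CoreGraphics' CGContext.h):
/* CGContext.h (simplified): the struct's fields are never exposed. */
typedef struct CGContext *CGContextRef;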

Is it certain that UIGraphicsBeginImageContext uses CGBitmapContextCreate to create the graphics context?

Does UIGraphicsBeginImageContext use CGBitmapContextCreate to create a graphics context [update: I can't find it in the documentation], so the graphics context is exactly the same either way? I also tried to step into UIGraphicsBeginImageContext but the debugger won't let me step into it.
In the UIKit Function References page of iOS documentation, the following is written about UIGraphicsBeginImageContext:
Creates a bitmap-based graphics context and makes it the current context.
Emphasis added. Following a link to the CGContextRef page, I find this:
A graphics context contains drawing parameters and all device-specific information needed to render the paint on a page to the destination, whether the destination is a window in an application, a bitmap image, a PDF document, or a printer.
Again, emphasis added. This says that (as of now) there are four kinds of Core Graphics contexts, each with its own initializers. The only kind that has anything to do with bitmaps is a bitmap-based CGContextRef, and there is only one documented way to create one (well, technically it comes in two versions). It is very likely that this function is being used. I believe that UIGraphicsBeginImageContext is merely a convenience method: it just sets up a default set of parameters for CGBitmapContextCreate (which takes a lot of them) and pushes the created context onto the graphics stack.
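To make that concrete, here is a rough, hypothetical sketch of what such a convenience wrapper could look like; the parameter choices UIKit actually makes are not documented, so everything below (the pixel format, the 1x scale, the helper name) is an assumption:
// Hypothetical approximation of UIGraphicsBeginImageContext (not Apple's code).
// Assumes an RGBA bitmap at 1x scale; the real defaults may differ.
static void MyBeginImageContext(CGSize size)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL,
                                             (size_t)size.width,
                                             (size_t)size.height,
                                             8,   // bits per component
                                             0,   // let CG pick bytes per row
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    // Flip into UIKit's top-left coordinate system.
    CGContextTranslateCTM(ctx, 0.0, size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    // Make it the "current" context by pushing it onto UIKit's context stack.
    UIGraphicsPushContext(ctx);
    // (The matching end/pop and release bookkeeping is omitted in this sketch.)
}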

Is there another way to display OpenGL content than using a Core Animation aware renderbuffer?

According to Apple's OpenGL ES Programming Guide, "If [a] framebuffer is intended to be displayed to the user, use a special Core Animation-aware renderbuffer."
The text goes on to say that to make this Core Animation aware renderbuffer, one needs to "Subclass UIView to create an OpenGL ES view for [the] iOS application [and] Override the layerClass" by using this code:
+ (Class) layerClass
{
    return [CAEAGLLayer class];
}
However, if one examines Apple's GLCameraRipple example, which displays OpenGL content to the end user, layerClass never appears to be overridden; a text search for layerClass or CAEAGLLayer reveals that both are missing.
If you look for other approaches to display directly to users, Apple gives two other OpenGL approaches, but both seem to imply that they are not for displaying directly to users but rather are for off-screen rendering. (i.e. "If the framebuffer is used to perform offscreen image processing, attach a renderbuffer. If the framebuffer image is used as an input to a later rendering step, attach a texture.")
Is there another way to display OpenGL content than using a Core Animation aware renderbuffer, or is Apple somehow overriding the layer class so that the OpenGL content becomes Core Animation aware in another way?
The reason you don't see a subclassed UIView with a CAEAGLLayer backing it in the GLCameraRipple example is that it uses a GLKView. GLKView is a class introduced in iOS 5.0 as part of GLKit, and it wraps some common code, such as the explicit override to use a CAEAGLLayer and the setup of its matching renderbuffer.
This is still being done; it's just abstracted away from you. To display OpenGL ES content on screen, you still go through a CAEAGLLayer one way or another.
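Roughly, the boilerplate GLKView hides looks like the sketch below (ES 2.0 call names; the ivar names _context, _framebuffer and _colorRenderbuffer are made up here, and the real GLKit source is of course not public):
// UIView subclass backed by a CAEAGLLayer (sketch of what GLKView wraps).
+ (Class)layerClass
{
    return [CAEAGLLayer class];
}

- (void)setupBuffers
{
    // Colour renderbuffer whose storage comes from the layer, so Core
    // Animation can composite whatever GL renders into it.
    glGenRenderbuffers(1, &_colorRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderbuffer);
    [_context renderbufferStorage:GL_RENDERBUFFER
                     fromDrawable:(CAEAGLLayer *)self.layer];

    glGenFramebuffers(1, &_framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, _colorRenderbuffer);
}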
Offscreen rendering is a different animal, because there you aren't attaching to a layer for display, so there's no layer needed. If you want to render to a texture, attach a texture as a target for your FBO, and that's it.

Scaling the contents of OpenGL ES framebuffer

Currently I'm scaling down the contents of my OpenGL ES 1.1 framebuffer like this:
1. save current framebuffer and renderbuffer references
2. bind framebuffer2 and smallerRenderbuffer
3. re-render all contents
4. now smallerRenderbuffer contains the "scaled-down" contents of framebuffer
5. do stuff with contents of smallerRenderbuffer
6. re-bind framebuffer and renderbuffer
What's an alternative way to do this? Perhaps I can just copy and scale the contents of the original framebuffer and renderbuffer into framebuffer2 and smallerRenderbuffer. Hence avoiding the re-render step. I've been looking at glScalef but I'm not sure where to go from here.
Note: this is all done in OpenGL ES 1.1 on iOS.
You could do an initial render to texture, then render from that to both the framebuffer you want to be visible and to the small version. Whichever way you look at it, what you're trying to do is use data that has already been rendered as the source for another rendering, so rendering to a texture is the most natural thing to do.
You're probably already familiar with the semantics of a render to texture if you're doing work on the miniature version, but for completeness: you'd create a frame buffer object, use glFramebufferTexture2DOES to attach a named texture to a suitable attachment point, then bind either the frame buffer or the texture (ensuring the other isn't simultaneously bound if you want defined behaviour) as appropriate.
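A minimal render-to-texture setup in ES 1.1 might look like the following (OES-suffixed calls from the OES_framebuffer_object extension; the 256x256 size and variable names are just examples):
// Create a texture to render into (power-of-two size for ES 1.1).
GLuint targetTexture;
glGenTextures(1, &targetTexture);
glBindTexture(GL_TEXTURE_2D, targetTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Create an FBO and attach the texture as its colour target.
GLuint textureFramebuffer;
glGenFramebuffersOES(1, &textureFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, textureFramebuffer);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                          GL_TEXTURE_2D, targetTexture, 0);

// Render the scene once into this FBO (with the texture unbound), then
// re-bind the on-screen framebuffer and draw two quads, full size and
// scaled down, sampling from targetTexture.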

iOS " current graphics context" - What is that

When I draw lines, shapes, etc. I get the "current graphics context" in iOS.
What exactly, though, is the "current graphics context"? I'm looking for the 30,000-foot description.
Right now I just copy and paste UI code without being exactly sure what it's doing.
A graphics context is the place where information about the drawing state is stored. This includes fill color, stroke color, line width, line pattern, winding rule, mask, current path, transparency layers, transform, text transform, and so on. When using Core Graphics calls, you pass the context to use to every single function, which means you can use multiple contexts at once, though typically you only use one.

At the UIKit layer there is the concept of a "current" graphics context, which is the context used by all UIKit-level drawing calls (such as -[UIColor set] or UIBezierPath drawing). The current context is stored on a stack of contexts, so you can push a new context for some drawing, and when you finish with it the previous context is restored. Typically you get a context for free inside -[UIView drawRect:] and inside CALayer display-related methods, but not otherwise.
It used to be that the "current" context was an application-wide global state, and therefore was not safe to touch outside of the main thread. As of iOS 4.0 (I believe), this became a thread-local state and UIKit-level drawing methods became safe to use on background threads.
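As a quick illustration of that stack (a sketch; the colours and sizes here are arbitrary), this is the UIKit-level pattern for drawing into an offscreen image without ever naming the context explicitly:
// Begin a bitmap context and push it as the "current" context.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0, 100.0), NO, 0.0);

// UIKit-level calls implicitly draw into whatever context is current.
[[UIColor blueColor] setFill];
[[UIBezierPath bezierPathWithOvalInRect:CGRectMake(10.0, 10.0, 80.0, 80.0)] fill];

// 'image' now contains the rendered oval.
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

// End (pop) the context; the previously current context, if any, is restored.
UIGraphicsEndImageContext();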
The OS needs a place to save drawing-state information that you don't want to spell out in every single CG drawing command: which bitmap or view to draw into, the scale or other transform to use, the last color you specified, and so on.
The context tells each CG call where to find all this "stuff" for your current drawing call. Give a different context to the exact same drawing call, and that call might draw into a different bitmap in a completely different view, with a different color, a different scale, and so on.
Basically it is a class (or opaque type) that a platform (iOS, Android, Java ME and many others) provides to give access to that platform's drawing/display capabilities. It varies a bit from platform to platform, of course, but that is the 30,000-foot description :)
