I am running the boilerplate OpenGL example code that Xcode creates for an iOS OpenGL project. This sets up a simple view controller and uses GLKit to handle the rest of the work.
All the update/draw functionality of the application is in C++; it is cross-platform.
There is a lot of framebuffer creation going on. The draw phase renders to a few framebuffers and then tries to bind back to the default framebuffer:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
This generates a GL_INVALID_ENUM error, but only on iOS.
I am completely stumped as to why. The code runs fine on all major platforms except iOS, so I'm tempted to blame GLKit. Are there any examples of iOS OpenGL setup that do not use GLKit?
UPDATE
The following snippet of code lets me see the default framebuffer that GLKit is using. For some reason it comes out as 2, and sure enough, if I use 2 in all my glBindFramebuffer calls, it works. This is very frustrating.
[view bindDrawable];
GLint defaultFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &defaultFBO);
LOGI("DEFAULT FBO: %d", defaultFBO);
What on earth would cause GLKit to generate its internal framebuffer with a name other than 0? Every other OpenGL implementation uses the semantic that 0 is the default FBO.
On iOS there is no default framebuffer. See Framebuffer Objects are the Only Rendering Target on iOS. I don't know much about GLKit, but on iOS, to render something on screen you need to create a framebuffer, attach a renderbuffer to it, and inform the Core Animation layer that this renderbuffer will be the "screen" or "default framebuffer" to draw to. That means everything you draw into this framebuffer will appear on screen. See Rendering to a Core Animation Layer.
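For reference, a minimal sketch of that setup without GLKit, assuming _context is your current EAGLContext and eaglLayer is the view's CAEAGLLayer (both names are illustrative):
GLuint viewFramebuffer, viewRenderbuffer;
// Create the framebuffer that will act as the "default" one.
glGenFramebuffers(1, &viewFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
// Create the color renderbuffer and let Core Animation allocate its storage.
glGenRenderbuffers(1, &viewRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];
// Attach the renderbuffer to the framebuffer's color attachment point.
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, viewRenderbuffer);
// After drawing a frame, present the renderbuffer:
// [_context presentRenderbuffer:GL_RENDERBUFFER];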
I feel it's necessary to point out here that the call to glBindFramebuffer(GL_FRAMEBUFFER, 0);
does not return rendering to the main framebuffer, although it would appear to work on machines running Windows, Unix (Mac), or Linux. Desktops and laptops have no concept of a main default system buffer; that idea started with handheld devices. When you make an OpenGL bind call with zero as the parameter, what you are doing is setting that binding to NULL. It's how you disable the binding, just as with glBindTexture(GL_TEXTURE_2D, 0);
It is possible that on some handheld devices the driver automatically activates the main system framebuffer when you set the framebuffer binding to NULL without binding another one. This would be a choice made by the manufacturer and is not something you should count on; it is not part of the OpenGL ES spec. For desktops and laptops, though, this is absolutely necessary, since unbinding the framebuffer is required to return to normal OpenGL rendering.
On an iOS device, you should make the following call:
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
provided that you named your system framebuffer 'viewFramebuffer'. Look through your initialization code for the following call:
glGenFramebuffers(1, &viewFramebuffer);
Whatever name you passed there is what you bind to when returning to your main system buffer.
If you are using GLKit, then you can use the following call:
[((GLKView *) self.view) bindDrawable];
The 'self.view' may be slightly different depending on your particular startup code.
Also, for iOS you could use glBindFramebuffer(GL_FRAMEBUFFER, 2);, but this is likely not going to be consistent across future devices released by Apple. They may change the default value of 2 to 3 or something else in the future, so you'd want to use the actual name instead of a hard-coded integer value.
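If you need the framebuffer's name as a plain integer (to hand to cross-platform C++ code, say), a safer approach than hard-coding 2 is to query it once after binding the drawable. A minimal sketch, assuming view is your GLKView:
[view bindDrawable];
GLint defaultFBO = 0;
// Ask GL which framebuffer GLKit actually bound, rather than assuming 0 or 2.
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFBO);
// Store defaultFBO and use it wherever the cross-platform code previously used 0.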
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    // First render into your secondary/offscreen framebuffer to store the result
    [self.shader drawOffscreenOnFBO];
    // Then bind back to the default framebuffer of the GLKView
    [((GLKView *) self.view) bindDrawable];
    // And draw on the main screen
    [self.shader drawinmainscreen];
}
Reference: http://districtf13.blogspot.com/
Related
I am in the process of migrating a small iPad application from OpenGL ES 2.0 to OpenGL ES 3.0. In the app, I use a subclass of GLKView to handle all my drawing, though the only GLKit features I use are:
self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3]; // Or 2
self.drawableDepthFormat = GLKViewDrawableDepthFormatNone;
self.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
self.drawableMultisample = GLKViewDrawableMultisample4X;
self.drawableStencilFormat = GLKViewDrawableStencilFormatNone;
self.enableSetNeedsDisplay = YES;
// ... gl code following
My -drawRect method looks like this:
glClearColor(1, 1, 1, 1);
glClear(GL_COLOR_BUFFER_BIT);
[_currentProgram use]; // Use program
glUniformMatrix4fv([_currentProgram getUniformLocation:@"modelViewProjectionMatrix"], 1, 0, modelViewProjectionMatrix.m);
// ...
if (isES3) {
    glBindVertexArray(vertexArray);
} else {
    glBindVertexArrayOES(vertexArray);
}
glDrawArrays(GL_TRIANGLE_STRIP, 0, verticiesLength);
I do not yet have an OpenGL ES 3.0-capable device, so all of my OpenGL ES 3.0 testing is being done in the iOS Simulator. OpenGL ES 2.0 testing is done both on-device and in the simulator.
As expected, in ES2 the screen is cleared to white immediately on startup (-drawRect having been called once, with no vertices to draw yet). However, when I switch to ES3, the context is successfully created and no GL calls fail, yet the screen does not clear as it should; it just appears black. Fishing around for what was going wrong, I decided to remove multisampling:
self.drawableMultisample = GLKViewDrawableMultisampleNone;
And it worked! (Albeit without antialiasing.) My question is therefore: are there any known issues with GLKit multisampling under OpenGL ES 3.0 in the iOS Simulator (iPad, iPad Retina, and iPad Retina (64-bit))? My laptop has more than enough free memory to cope with the multisampling.
OpenGL is a very asynchronous API. For example, if you call glClear, you should not expect the screen to be cleared when the call returns. You can only reliably look at the result of your rendering after you have finished the frame and it has been displayed (typically by swapping the buffers when using double-buffered rendering).
So what you're observing most likely does not mean anything. Does everything look fine at the end of the frame? If yes, there's no reason to be worried.
The difference is likely caused by the different rendering process when multisampling is enabled. In that case, rendering goes to a higher-resolution buffer first, and is only downsampled to the actual framebuffer at the end of the frame.
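For illustration, this resolve step is what GLKit performs for you at the end of a multisampled frame. A rough sketch of doing it by hand in OpenGL ES 3.0, assuming _msaaFramebuffer and _resolveFramebuffer are your multisampled and single-sampled FBOs (illustrative names):
// Downsample the multisampled buffer into the single-sampled one.
glBindFramebuffer(GL_READ_FRAMEBUFFER, _msaaFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _resolveFramebuffer);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
// The ES 2.0 equivalent is glResolveMultisampleFramebufferAPPLE() from the
// APPLE_framebuffer_multisample extension.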
When you deploy, choose the device type to be iPad, not Universal, in the General > Deployment Info settings; that will fix it.
Platform: iPhone, OpenGL ES 2.0.
The framework already creates a main FBO with a renderbuffer as its color attachment.
And I have my own FBO with a texture2D as its color attachment.
I want to copy the main FBO's content into my FBO.
I tried the common glCopyTexImage2D way, but it's too slow on my device (iPad 1).
So I wonder if there is a faster solution out there.
If the main FBO used a texture2D as its color attachment, I know I could just draw a fullscreen quad with that texture into my FBO, but how do I draw its renderbuffer into my FBO? I googled for quite a while but found no specific answer.
Renderbuffers are almost useless on most embedded systems. All you can do with them is read back with glReadPixels(), which is too slow.
You should use a texture attachment, as you said, then render with that texture. This article will help:
http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES
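Along the lines of that article, a minimal sketch of a texture-backed FBO in ES 2.0 (offscreenTexture, offscreenFBO, width, and height are illustrative names):
GLuint offscreenTexture, offscreenFBO;
// Create the texture that will receive the rendering.
glGenTextures(1, &offscreenTexture);
glBindTexture(GL_TEXTURE_2D, offscreenTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// NPOT textures in ES 2.0 require CLAMP_TO_EDGE wrapping and no mipmaps.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Attach the texture as the FBO's color attachment; whatever is drawn while
// this FBO is bound can later be sampled as an ordinary texture.
glGenFramebuffers(1, &offscreenFBO);
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, offscreenTexture, 0);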
I'd like to better understand the creation, allocation, and binding of OpenGL ES framebuffers, renderbuffers, etc. under iOS. I understand that the EAGLContext and EAGLSharegroup classes normally manage the allocation and binding of such objects. However, the Apple docs suggest that it is possible to do GL offscreen rendering without using the EAGLContext class, and I'm interested in how. Does anyone have any pointers to code examples?
I would also be interested in examples showing how to accomplish offscreen rendering with EAGLContext.
The only way to render content using OpenGL ES on iOS, offscreen or onscreen, is to do so through an EAGLContext. From the OpenGL ES Programming Guide:
Before your application can call any OpenGL ES functions, it must initialize an EAGLContext object and set it as the current context.
I think the following lines might be what are causing some confusion:
The EAGLContext class also provides methods your application uses to integrate OpenGL ES content with Core Animation. Without these methods, your application would be limited to working with offscreen images.
What that means is that if you want to render content to the screen, you use some extra methods provided only by the EAGLContext class, such as -renderbufferStorage:fromDrawable:. You still need an EAGLContext to manage OpenGL ES commands even if you're going to draw offscreen, but those particular EAGLContext-specific methods are needed only for drawing onscreen.
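To make that concrete, a purely offscreen setup needs nothing more than a context made current; a minimal sketch (the ES 2.0 API is chosen here just for illustration):
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];
// From here on, any FBO you create renders purely offscreen; the Core
// Animation-specific renderbufferStorage:fromDrawable: is never needed.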
To your second question, how you setup your offscreen rendering will depend on the configuration of this offscreen render (texture-backed FBO, depth buffer, etc.). For example, the following code will set up a simple FBO that has no depth buffer and renders to the already set up outputTexture texture:
glActiveTexture(GL_TEXTURE1); // select texture unit 1 so unit 0 is left undisturbed
glGenFramebuffers(1, &filterFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, filterFramebuffer);
glBindTexture(GL_TEXTURE_2D, outputTexture);
// Allocate storage for the texture at the FBO size (last argument 0 = no initial data).
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)currentFBOSize.width, (int)currentFBOSize.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// Attach the texture as the color target of the new framebuffer.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTexture, 0);
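One addition of mine that isn't in the original snippet: it's worth verifying the FBO is complete before rendering into it:
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Incomplete framebuffer: 0x%x", status);
}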
For code examples, you could look at how I do this within the open source GPUImage framework (which just does simple image rendering) or my open source Molecules application (which does more complex offscreen rendering using depth buffers).
According to Apple's OpenGL ES Programming Guide, "If [a] framebuffer is intended to be displayed to the user, use a special Core Animation-aware renderbuffer."
The text goes on to say that to make this Core Animation-aware renderbuffer, one needs to "Subclass UIView to create an OpenGL ES view for [the] iOS application [and] override the layerClass" by using this code:
+ (Class) layerClass
{
return [CAEAGLLayer class];
}
However, if one examines Apple's GLCameraRipple example which displays OpenGL to the end user, the layerClass never appears to be overridden. A text search on layerClass or CAEAGLLayer reveals they are missing.
If you look for other approaches to display directly to users, Apple gives two other OpenGL approaches, but both seem to imply that they are not for displaying directly to users but rather are for off-screen rendering. (i.e. "If the framebuffer is used to perform offscreen image processing, attach a renderbuffer. If the framebuffer image is used as an input to a later rendering step, attach a texture.")
Is there another way to display OpenGL content than using a Core Animation-aware renderbuffer, or is Apple somehow overriding the layer class so the OpenGL content becomes Core Animation-aware in another way?
The reason you don't see a subclassed UIView with a CAEAGLLayer backing it in the GLCameraRipple example is because it uses a GLKView. GLKView is a class introduced in iOS 5.0 as part of GLKit, and it wraps some common code, such as the explicit override to use a CAEAGLLayer and the setup of its matching renderbuffer.
This is still being done; it's just abstracted away from you. To display OpenGL ES content onscreen, you still need to go through a CAEAGLLayer one way or another.
Offscreen rendering is a different animal, because there you aren't attaching to a layer for display, so there's no layer needed. If you want to render to a texture, attach a texture as a target for your FBO, and that's it.
I'm currently developing an iPad app which uses OpenGL to draw some very simple (no more than 1000 or 2000 vertices) rotating models in multiple OpenGL views.
There are currently six views in a grid, each one running its own display link to update the drawing. Given the simplicity of the models, this is by far the simplest way to do it; I don't have the time to code a full OpenGL interface.
Currently it's doing well performance-wise, but there are some annoying glitches. The first three OpenGL views display without problems, and the last three display only a few triangles (while still retaining the ability to rotate the model). There are also cases where the glDrawArrays call goes straight into EXC_BAD_ACCESS (especially on the simulator), which tells me there is something wrong with the buffers.
What I checked (as well as double- and triple-checked) is:
Buffer allocation seems OK
All resources are freed on dealloc
Instruments shows some warnings, but nothing that seems related
I'm thinking it's probably related to having multiple views drawing at the same time, so is there anything known that I should have done differently there? Each view has its own context, but perhaps I'm doing something wrong with that...
Also, I just noticed that in the simulator, the afflicted views flicker between the correct drawing with all the vertices and the wrong drawing with only a few.
Anyway, if you have any ideas, thanks for sharing!
Okay, I'm going to answer my own question since I finally found what was going on. It was a small missing line that was causing all those problems.
Basically, to have multiple OpenGL views displayed at the same time, you need:
Either, the same context for every view. Here, you have to take care not to draw with multiple threads at the same time (i.e., lock the context somehow, as explained in this answer), and you have to re-bind the framebuffers and renderbuffers on each frame.
Or, you can use a different context for each view. Then you have to re-set the current context on each frame, because other display links could (and would, as in my case) cause your OpenGL calls to use the wrong data. There is no need to re-bind framebuffers and renderbuffers, since each context preserves them.
Also, call glFlush() after each frame, so that each frame's queued commands are submitted to the GPU before another view's display link takes over.
In my case (the second option), the code for rendering each frame on iOS looks like this:
- (void) drawFrame:(CADisplayLink*)displayLink {
    // Set the current context, assuming _context
    // is the class ivar for the OpenGL context
    [EAGLContext setCurrentContext:_context];
    // Clear whatever you want
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    // Do matrix stuff
    ...
    glUniformMatrix4fv(...);
    // Set your viewport
    glViewport(0, 0, self.frame.size.width, self.frame.size.height);
    // Bind object buffers
    glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
    glVertexAttribPointer(_glVertexPositionSlot, 3, ...);
    // Draw elements
    glDrawArrays(GL_TRIANGLES, 0, _currentVertexCount);
    // Discard the depth buffer contents, which are not needed after the frame
    const GLenum discard[] = {GL_DEPTH_ATTACHMENT};
    glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discard);
    // Present the render buffer
    [_context presentRenderbuffer:GL_RENDERBUFFER];
    // Unbind and flush
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glFlush();
}
EDIT
I'm going to edit this answer, since I found out that running multiple CADisplayLinks can cause some issues. You have to make sure to set the frameInterval property of your CADisplayLink instances to something other than 0 or 1. Otherwise, the run loop will only have time to call the first render method, and then it will call it again, and again. In my case, that was why only one object was moving. Now it's set to 3 or 4 frames, and the run loop has time to call all the render methods.
This applies only to the application running on the device. The simulator, being very fast, doesn't care about such things.
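A minimal sketch of that display link setup, assuming drawFrame: is the render method shown above:
CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(drawFrame:)];
// Fire every 3 vsync intervals (about 20 fps at 60 Hz) so the run loop
// has time to service all six views' display links.
displayLink.frameInterval = 3;
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];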
It gets tricky when you want multiple UIViews that are OpenGL views; on this site you should be able to read all about it: Using multiple openGL Views and uikit