I am in the process of migrating a small iPad application from OpenGL ES 2.0 to OpenGL ES 3.0. In the App, I use a subclass of GLKView to handle all my drawing, though the only GLKit features I use are:
self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3]; // Or 2
self.drawableDepthFormat = GLKViewDrawableDepthFormatNone;
self.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
self.drawableMultisample = GLKViewDrawableMultisample4X;
self.drawableStencilFormat = GLKViewDrawableStencilFormatNone;
self.enableSetNeedsDisplay = YES;
// ... gl code following
My -drawRect method looks like this:
glClearColor(1, 1, 1, 1);
glClear(GL_COLOR_BUFFER_BIT);
[_currentProgram use]; // Use program
glUniformMatrix4fv([_currentProgram getUniformLocation:@"modelViewProjectionMatrix"], 1, 0, modelViewProjectionMatrix.m);
// ...
if (isES3) {
glBindVertexArray(vertexArray);
}
else {
glBindVertexArrayOES(vertexArray);
}
glDrawArrays(GL_TRIANGLE_STRIP, 0, verticiesLength);
I do not yet have an OpenGL ES 3.0-capable device, so all of my OpenGL ES 3.0 testing is being done in the iOS Simulator. OpenGL ES 2.0 testing is done on-device and in the Simulator.
As expected, in ES2, the screen is cleared to white immediately on startup (-drawRect having been called once and no vertices to draw yet). However, when I make the swap to ES3, the context is successfully created, no gl calls fail, and yet the screen does not clear as it should - it just appears as a black screen. Fishing around for what was going wrong, I decided to remove multi-sampling:
self.drawableMultisample = GLKViewDrawableMultisampleNone;
And it worked! (Albeit without antialiasing.) My question is therefore, are there any known issues with GLKit multi-sampling with OpenGL ES 3.0 in the iOS Simulator (iPad, iPad Retina and iPad Retina (64-Bit))? My laptop has more than enough free memory to cope with the multi-sampling.
OpenGL is a very asynchronous API. For example, if you call glClear, you should not expect the screen to be cleared when the call returns. You can only reliably look at the result of your rendering after you have finished rendering the frame and it has been displayed (typically by swapping the buffers when using double-buffered rendering).
So what you're observing most likely does not mean anything. Does everything look fine at the end of the frame? If yes, there's no reason to be worried.
The difference is most likely caused by the different rendering process used when multisampling is enabled. In that case, rendering goes to a higher-resolution buffer first, which is only resolved (downsampled) to the actual framebuffer at the end of the frame.
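For reference, this is roughly what that end-of-frame resolve looks like in ES 3.0 if you manage the framebuffers yourself; GLKView does the equivalent internally. The names msaaFramebuffer, resolveFramebuffer, width and height are placeholders, not anything from your code:
// Bind the multisampled framebuffer as the read source and the single-sampled one as the draw target:
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFramebuffer);
// Resolve (downsample) the color buffer; only after this step does the cleared color
// reach the buffer that is actually presented on screen:
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);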
When you deploy, choose the device type to be iPad, not Universal, in General > Deployment Info; that will fix it.
I'm trying to write an OpenGL ES 3.0 Swift app on iOS (>= 8.0) that makes use of Multiple Render Targets (MRT). To get proper antialiasing, I enabled multisampling.
In detail, my rendering architecture looks like this:
The Display framebuffer has one renderbuffer attached:
The Display renderbuffer : controlled by iOS via EAGLContext.renderbufferStorage(), attached as GL_COLOR_ATTACHMENT0
The Sample framebuffer has two renderbuffers attached:
The Color renderbuffer I : GL_RGBA8, multisampled, attached as GL_COLOR_ATTACHMENT0
The Color renderbuffer II : GL_RGBA8, multisampled, attached as GL_COLOR_ATTACHMENT1
Whenever my layer changes its bounds, I resize all my renderbuffers as Apple does it in the GLPaint sample.
I created a minimal example for you. The rendering itself looks like this:
//Set the GL context, bind the sample framebuffer and specify the viewport:
EAGLContext.setCurrentContext(context)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), self.sampleFramebuffer)
glViewport(0, 0, self.layerWidth, self.layerHeight)
//Clear both render targets:
glClearBufferfv(GLenum(GL_COLOR), 0, self.colorRenderbufferIClearColor)
glClearBufferfv(GLenum(GL_COLOR), 1, self.colorRenderbufferIIClearColor)
//Specify the vertex attribute (only position, 6 floats for a triangle):
glEnableVertexAttribArray(0)
glVertexAttribPointer(0, 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), GLsizei(2 * sizeof(GLfloat)), nil)
//Use the shader program and render a single triangle:
glUseProgram(self.program)
glDrawArrays(GLenum(GL_TRIANGLES), 0, 3)
//Prepare both framebuffers as source and destination for the multisample resolve:
glBindFramebuffer(GLenum(GL_READ_FRAMEBUFFER), self.sampleFramebuffer)
glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), self.displayFramebuffer)
//Specify which of the attachments we resolve from.
//This is GL_COLOR_ATTACHMENT0 or GL_COLOR_ATTACHMENT1.
glReadBuffer(self.blitAttachment)
//Blit between the framebuffers, resolving the multisampled data:
glBlitFramebuffer(0, 0, self.layerWidth, self.layerHeight, 0, 0, self.layerWidth, self.layerHeight, GLbitfield(GL_COLOR_BUFFER_BIT), GLenum(GL_LINEAR))
//Invalidate the sample framebuffer for this pass:
glInvalidateFramebuffer(GLenum(GL_READ_FRAMEBUFFER), 2, [GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_COLOR_ATTACHMENT1)])
//Bind the display renderbuffer and present it:
glBindRenderbuffer(GLenum(GL_RENDERBUFFER), self.displayRenderbuffer)
self.eaglContext.presentRenderbuffer(Int(GL_RENDERBUFFER))
Now to the problem: My sample project draws a blue triangle on red background into the first render target (color renderbuffer I) and a purple triangle on green background into the second render target (color renderbuffer II). By setting blitAttachment in the code, you can select which of the two attachments gets resolved into the display framebuffer.
Everything works as expected on the iOS simulator (all devices, all iOS versions).
I only have access to an iPad Air (Model A1475, iOS 9.3.4) at the moment. But on the device, there are problems:
If I disable multisampling (samples = 0 in glRenderbufferStorageMultisample()), everything works.
If I enable multisampling (samples = 4), I can only blit from GL_COLOR_ATTACHMENT0 (which is color renderbuffer I).
Blitting from GL_COLOR_ATTACHMENT1 produces the same result (blue triangle on red) when it should produce the other one (purple triangle on green).
You can reproduce the problem with my attached sample code (DropBox).
So there are two questions:
Could somebody please confirm that this works on the simulator, but not on real devices?
Does anybody have an idea about errors in my code? Or is this a known bug?
Thanks in advance!
There seems to be a bit of strange behavior in this API. The code you linked does indeed work on the simulator, but the simulator is quite different from the actual device, so I suggest you never use it as a reference.
What seems to happen is that the renderbuffer is simply discarded too quickly. Why and how this happens, I have no idea. You blit the buffers and then invalidate them, so simply removing the buffer invalidation will remove the issue. But removing the invalidation is not recommended; instead, make sure that all of the work has been performed by the GPU before you invalidate the buffers. That means simply calling glFlush().
Before you call glInvalidateFramebuffer(GLenum(GL_READ_FRAMEBUFFER), 2, [GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_COLOR_ATTACHMENT1)]) simply call glFlush():
//Resolve from source to destination while applying multisampling:
glBlitFramebuffer(0, 0, self.layerWidth, self.layerHeight, 0, 0, self.layerWidth, self.layerHeight, GLbitfield(GL_COLOR_BUFFER_BIT), GLenum(GL_LINEAR))
OpenGLESView.checkError(dbgDomain, andDbgText: "Failed to blit between framebuffers")
glFlush()
//Invalidate the whole sample framebuffer:
glInvalidateFramebuffer(GLenum(GL_READ_FRAMEBUFFER), 2, [GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_COLOR_ATTACHMENT1)])
OpenGLESView.checkError(dbgDomain, andDbgText: "Failed to invalidate sample framebuffer")
I am working hard on a new iOS game that is drawn only with procedurally generated lines. All is working well, except for a few strange hiccups with drawing some primitives.
I am at a point where I need to implement text, and the characters are set up to be a series of points in an array. When I go to draw the points (which are CGPoints) some of the drawing modes are working funny.
effect.transform.modelviewMatrix = matrix;
[effect prepareToDraw];
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, 0, 0, &points);
glDrawArrays(GL_POINTS, 0, ccc);
I am using this code to draw from the array, and when the mode is set to GL_LINE_LOOP or GL_LINE_STRIP all works well. But if I set it to GL_POINTS, I get a gpus_ReturnGuiltyForHardwareRestart error. And if I try GL_LINES, it just doesn't draw anything.
What could possibly be going on?
When you draw with GL_POINTS in ES2 or ES3, you need to specify gl_PointSize in the vertex shader or you'll get undefined behavior (ugly rendering on device at best, the crash you're seeing at worst). The vertex shader GLKBaseEffect uses doesn't write gl_PointSize, so you can't use it with GL_POINTS. You'll need to implement your own shaders. (For a starting point, try the ones in the "OpenGL Game" template you get when creating a new Xcode project, or use the Xcode Frame Debugger to look at the GLSL that GLKBaseEffect generates.)
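For example, a minimal vertex shader for GL_POINTS could look like the sketch below; the attribute and uniform names are placeholders of my own choosing, not anything GLKBaseEffect generates:
// Vertex shader source kept as an Objective-C string, ready to pass to glShaderSource():
static NSString *const kPointVertexShaderSource =
    @"attribute vec2 position;\n"
     "uniform mat4 modelViewProjectionMatrix;\n"
     "void main() {\n"
     "    gl_Position = modelViewProjectionMatrix * vec4(position, 0.0, 1.0);\n"
     "    gl_PointSize = 8.0; // required when rendering GL_POINTS\n"
     "}\n";
You would compile it with glShaderSource()/glCompileShader() (passing [kPointVertexShaderSource UTF8String]) and pair it with whatever fragment shader you like.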
GL_LINES should work fine as long as you're setting an appropriate width with glLineWidth() in client code.
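For the GL_LINES case, a minimal sketch (the width value and pointCount are arbitrary placeholders):
glLineWidth(2.0);                       // set an appropriate width before drawing
glDrawArrays(GL_LINES, 0, pointCount);  // pointCount must be even; each pair of vertices forms one segment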
I'm working on a simple 3D application with OpenGL ES 2 on iOS.
I just followed the steps in the "OpenGL ES Programming Guide for iOS" on the Apple Developer site.
I wanted to make the OpenGL view entirely opaque for better performance, as suggested in the document. So I did the following:
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.opaque = TRUE;
And, I ran the application with the Core Animation instrument. Then, I turned on 'Color Blended Layers' in Debug Options in Core Animation instrument.
Then the entire screen became reddish, which means the entire view was being blended. I tested other example OpenGL apps from Apple, and they were all greenish in the instrument.
The document doesn't say anything about this except to make the layer opaque, just like I did.
What else could be a possible cause of this problem?
Be sure to set the alpha component to 1.0 when you clear, e.g. glClearColor(0.0, 0.0, 0.0, 1.0).
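A minimal sketch of the two places where alpha matters here, assuming eaglLayer is the view's CAEAGLLayer as in the question:
// Keep the layer opaque so Core Animation does not need to blend the view:
eaglLayer.opaque = YES;
// Always clear with alpha = 1.0 so no translucent pixels end up in the layer:
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);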
We are trying to figure out why we have a relatively slow FPS on iPhone 4 and iPad 1.
We are seeing this category of warning in our OpenGL ES analysis: Logical Buffer Load. The summary is "Slow framebuffer load". The recommendation says that the framebuffer must be loaded by the GPU before rendering, and suggests that we are failing to perform a full-screen clear operation at the beginning of each frame. However, we are doing this with glClear.
[EAGLContext setCurrentContext:_context];
glBindFramebuffer(GL_FRAMEBUFFER, _defaultFramebuffer);
glClear(GL_COLOR_BUFFER_BIT);
// Our OpenGL Drawing Occurs here
...
...
...
// hint to opengl to not bother with this buffer
const GLenum discards[] = {GL_DEPTH_ATTACHMENT};
glBindFramebuffer(GL_FRAMEBUFFER, _defaultFramebuffer);
glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discards);
// present render
[_context presentRenderbuffer:GL_RENDERBUFFER];
We are not actually using a depth or stencil buffer.
This is happening when we render textures as tiles and it happens each time we load a new tile. It is pointing to our glDrawArrays command.
Any recommendations on how we can get rid of this warning?
If it helps at all, this is how we are setting up the layer:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
kEAGLColorFormatRGB565, kEAGLDrawablePropertyColorFormat,
nil];
After a lot of work and deliberation, I managed to figure this out in the end.
OK, I am using an open-source library called GLESuperman. It's a great library which helps to debug these kinds of issues, and it can be used for drawing graphics - it's pretty fast. Yes, I have no idea why it's called that... But it's free and it works. Just search for it on GitHub. It gets updated very frequently and supports iOS 7 and higher.
OK, so to implement it, do the following:
// Import the framework into your Xcode project.
#import <GLESuperman/GLESuperman.h>
// Also you will need to import Core Graphics.
#import <CoreGraphics/CoreGraphics.h>
// In order to run it in debug mode and get
// a live detailed report about things like FPS, do the following.
GLESuperman *debugData = [[GLESuperman alloc] init];
[debugData runGraphicDebug withRepeat:YES inBackground:YES];
// In order to draw graphics, do the following.
GLESuperman *graphicView = [[GLESuperman alloc] init];
[graphicView drawView:CGRectMake(0, 0, 50, 50)];
// You can do other things too like add images/etc..
// Just look at the library documentation, it has everything.
[graphicView setAlpha:1.0];
[graphicView showGraphic];
I am running the boilerplate OpenGL example code that Xcode creates for an OpenGL project for iOS. This sets up a simple view controller and uses GLKit to handle the rest of the work.
All the update/draw functionality of the application is in C++. It is cross-platform.
There is a lot of framebuffer creation going on. The draw phase renders to a few framebuffers and then tries to switch back to the default framebuffer:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
This generates a GL_INVALID_ENUM error. Only on iOS.
I am completely stumped as to why. The code runs fine on all major platforms except iOS. I want to blame GLKit. Are there any examples of iOS OpenGL setup that do not use GLKit?
UPDATE
The following snippet of code lets me see the default framebuffer that GLKit is using. For some reason it comes out as "2". Sure enough, if I use "2" in all my glBindFramebuffer calls, it works. This is very frustrating.
[view bindDrawable];
GLint defaultFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &defaultFBO);
LOGI("DEFAULT FBO: %d", defaultFBO);
What reason on earth would cause GLKit not to generate its internal framebuffer as 0? That is the semantics every other OpenGL implementation uses: 0 is the default FBO.
On iOS there is no default framebuffer. See "Framebuffer Objects are the Only Rendering Target on iOS". I don't know much about GLKit, but on iOS, to render something on screen you need to create a framebuffer, attach a renderbuffer to it, and inform the Core Animation layer that this renderbuffer will be the "screen" or "default framebuffer" to draw to. Meaning: everything you draw to this framebuffer will appear on screen. See "Rendering to a Core Animation Layer".
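A rough sketch of that setup without GLKit, based on the pattern Apple describes; the variable names are mine and error checking is omitted:
// Create a framebuffer with a single color renderbuffer attached:
GLuint framebuffer, colorRenderbuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
// Ask Core Animation to allocate the storage that backs the "screen":
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
// After drawing into this framebuffer, present the renderbuffer on screen:
[context presentRenderbuffer:GL_RENDERBUFFER];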
I feel it's necessary to point out here that the call to glBindFramebuffer(GL_FRAMEBUFFER, 0);
does not return rendering to the main framebuffer, although it would appear to work on machines that run Windows, Unix (Mac), or Linux. Desktops and laptops have no concept of a main default system buffer; that idea started with handheld devices. When you make an OpenGL bind call with zero as the parameter, what you are doing is setting that binding to NULL - it is how you disable the binding. It's the same with glBindTexture(GL_TEXTURE_2D, 0);
It is possible that on some handheld devices the driver automatically activates the main system framebuffer when you set the framebuffer binding to NULL without activating another one. This would be a choice made by the manufacturer and is not something you should count on; it is not part of the OpenGL ES spec. For desktops and laptops, on the other hand, binding zero is absolutely necessary, since unbinding the framebuffer object is required to return to normal OpenGL rendering.
On an iOS device, you should make the following call,
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
provided that you named your system framebuffer 'viewFramebuffer'. Look through your initialization code for the following call:
glGenFramebuffers(1, &viewFramebuffer);
Whatever name you passed to that call is what you bind when returning to your main system buffer.
If you are using GLKit, then you can use the following call:
[((GLKView *) self.view) bindDrawable];
The 'self.view' may be slightly different depending on your particular startup code.
Also, for iOS, you could use glBindFramebuffer(GL_FRAMEBUFFER, 2), but this is likely not going to be consistent across future devices released by Apple. They may change the default value of '2' to '3' or something else in the future, so you'd want to use the actual name instead of an integer value.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
// Use a secondary/offscreen framebuffer to store the render result
[self.shader drawOffscreenOnFBO];
// Back to the default framebuffer of the GLKView
[((GLKView *) self.view) bindDrawable];
// Draw on the main screen
[self.shader drawinmainscreen];
}
Reference: http://districtf13.blogspot.com/