"invalid framebuffer operation" on glClear - using sRGB in OpenGL ES3 - ios

Using OpenGL ES 3, running on an iPhone 5s (hardware, not the simulator) with Xcode 7.3, I receive an "invalid framebuffer operation" error when calling glClear.
The texture in question is a "final" texture for my GBuffer, much like in this tutorial http://ogldev.atspace.co.uk/www/tutorial37/tutorial37.html.
The key difference is that I'm requesting an sRGB texture and that I use GL_COLOR_ATTACHMENT3 (instead of 4), due to ES 3 attachment limits.
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
// glTexParameteri ...
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT3, GL_TEXTURE_2D, m_finalTexture, 0);
GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER); // No errors here
Now when I try to clear it, I get an "invalid framebuffer operation"
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
// Element at index [i] needs to match GL_COLOR_ATTACHMENTi on GL ES 3!
GLenum drawbuf[4] = { GL_NONE, GL_NONE, GL_NONE, GL_COLOR_ATTACHMENT3 };
glDrawBuffers(sizeof(drawbuf)/sizeof(drawbuf[0]), drawbuf);
GLCheckError(); // no errors
glClear(GL_COLOR_BUFFER_BIT);
GLCheckError(); // => glGetError returns 0x506 (GL_INVALID_FRAMEBUFFER_OPERATION)
Now if instead I initialise the texture like this (so without sRGB), OpenGL doesn't give an error on the clear:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
Now, as I understood it, sRGB is supported on OpenGL ES 3... so why does glClear fail?
Any ideas anyone?

GL_SRGB8 is not a color-renderable format in ES 3.0. In the spec document:
In the "Required Texture Format" section starting on page 128, SRGB8 is listed under "Texture-only color formats".
In table 3.13, starting on page 130, SRGB8 does not have a checkmark in the "Color-renderable" column.
This also matches the EXT_srgb extension specification, under "Issues":
Do we require SRGB8_EXT be supported for RenderbufferStorage?
No. Some hardware would need to pad this out to RGBA and instead of adding that unknown for application developers we will simply not support that format in this extension.
glCheckFramebufferStatus() should return GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT in this case. If it's not doing that, that looks like a bug in the OpenGL implementation.
The closest alternative that is color-renderable is GL_SRGB8_ALPHA8.
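For example, a minimal sketch of that change, reusing the names from the question (GL_SRGB8_ALPHA8 must be paired with GL_RGBA / GL_UNSIGNED_BYTE):
// Allocate a color-renderable sRGB texture instead of GL_SRGB8
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, WindowWidth, WindowHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// glTexParameteri ...
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT3, GL_TEXTURE_2D, m_finalTexture, 0);
The shader output and the glClear call stay the same; only the internal format and the pixel transfer format change.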

Try this:
#define GL_COLOR_BUFFER_BIT 0x00004000
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

Related

OpenGL ES 2.0 iOS - draw a rectangle into stencil buffer and limit drawing only inside it

Do a good deed and help get someone (me) out of their misery, since it's New Year's Eve soon. I'm working on an iOS app, a coloring book for kids, and I haven't worked with OpenGL before (more precisely, OpenGL ES 2.0), so there's a good chance there's stuff in my code I don't actually understand.
One of the tasks is to not let the brush spill out of the contour in which the user started drawing.
After reading and understanding some OpenGL basics, I found that using the stencil buffer is the right solution. This is my stencil buffer setup:
glClearStencil(0);
//clear the stencil
glClear(GL_STENCIL_BUFFER_BIT);
//disable writing to color buffer
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
//disable depth buffer
glDisable(GL_DEPTH_TEST);
//enable writing to stencil buffer
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_NEVER, 1, 0xFF);
glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
[self drawStencil];
//re-enable color buffer
glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );
//only draw where there is a 1
glStencilFunc(GL_EQUAL, 1, 1);
//keep the pixels in the stencil buffer
glStencilOp( GL_KEEP, GL_KEEP, GL_KEEP );
Right now, I'm just trying to draw a square in the stencil buffer and see if I can limit my drawing only to that square. This is the method drawing the square:
- (void)drawStencil
{
// Create a renderbuffer
GLuint renderbuffer;
glGenRenderbuffers(1, &renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer*)self.layer];
// Create a framebuffer
GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, renderbuffer);
// Clear
glClearColor(1, 1, 1, 1);
glClear(GL_COLOR_BUFFER_BIT);
// Read vertex shader source
NSString *vertexShaderSource = [NSString stringWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"VertexShader" ofType:@"vsh"] encoding:NSUTF8StringEncoding error:nil];
const char *vertexShaderSourceCString = [vertexShaderSource cStringUsingEncoding:NSUTF8StringEncoding];
// Create and compile vertex shader
GLuint _vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(_vertexShader, 1, &vertexShaderSourceCString, NULL);
glCompileShader(_vertexShader);
// Read fragment shader source
NSString *fragmentShaderSource = [NSString stringWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"FragmentShader" ofType:@"fsh"] encoding:NSUTF8StringEncoding error:nil];
const char *fragmentShaderSourceCString = [fragmentShaderSource cStringUsingEncoding:NSUTF8StringEncoding];
// Create and compile fragment shader
GLuint _fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(_fragmentShader, 1, &fragmentShaderSourceCString, NULL);
glCompileShader(_fragmentShader);
// Create and link program
GLuint program = glCreateProgram();
glAttachShader(program, _vertexShader);
glAttachShader(program, _fragmentShader);
glLinkProgram(program);
// Use program
glUseProgram(program);
// Define geometry
GLfloat square[] = {
-0.5, -0.5,
0.5, -0.5,
-0.5, 0.5,
0.5, 0.5};
//Send geometry to vertex shader
const char *aPositionCString = [@"a_position" cStringUsingEncoding:NSUTF8StringEncoding];
GLuint aPosition = glGetAttribLocation(program, aPositionCString);
glVertexAttribPointer(aPosition, 2, GL_FLOAT, GL_FALSE, 0, square);
glEnableVertexAttribArray(aPosition);
// Draw
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Present renderbuffer
[context presentRenderbuffer:GL_RENDERBUFFER];
}
So much code and nothing happens... I can draw relentlessly wherever I want without a single stencil test stopping me.
What can I do? How do I check if the stencil buffer has something drawn inside it? If there's a missing puzzle for any of you, I will happily share any other parts of the code.
Any help is greatly appreciated! This has been torturing me for a while now. I will be forever in your debt!
UPDATE
I got the contour thing to work but I didn't use the stencil buffer. I created masks for every drawing area and textures for each mask which I loaded in the fragment shader along with the brush texture. When I tap on an area, I iterate through the array of masks and see which one was selected and bind the mask texture. I will make another post on SO with a more appropriate title and explain it there.
The way you allocate the renderbuffer storage looks problematic:
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer*)self.layer];
The documentation says about this method:
The width, height, and internal color buffer format are derived from the characteristics of the drawable object.
The way I understand it, since your "drawable object" will normally be a color buffer, this will create a color renderbuffer. But you need a renderbuffer with stencil format in your case. I'm not sure if there's a way to do this with a utility method in the context class (the documentation says something about "overriding the internal color buffer format"), but the easiest way is probably to simply call the corresponding OpenGL function directly:
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, width, height);
If you want to use your own FBO for this rendering, you will also need to create a color buffer for it, and attach it to the FBO. Otherwise you're not really producing any rendering output.
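As a rough sketch (width, height and the NSLog are placeholders; GL_RGBA8_OES comes from the OES_rgb8_rgba8 extension that iOS supports, with GL_RGBA4 as the core ES 2.0 alternative), a self-contained FBO for this would carry both attachments:
// Color attachment
GLuint colorRenderbuffer;
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
// Stencil attachment
GLuint stencilRenderbuffer;
glGenRenderbuffers(1, &stencilRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, stencilRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, stencilRenderbuffer);
// Only proceed if the combination is complete
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Framebuffer incomplete");
}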
Instead of creating a new FBO, it might be easier to make sure that the default framebuffer has a stencil buffer, and render to it directly. To do this, you can request a stencil buffer for your GLKView derived view by making this call during setup:
[view setDrawableStencilFormat: GLKViewDrawableStencilFormat8];
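With that in place, a minimal sketch of the per-frame side (assuming the drawing happens in the standard -glkView:drawInRect: callback, where the view's own framebuffer is already bound) would be:
// No extra FBO needed; clear color and stencil together, then reuse the
// glStencilFunc / glStencilOp sequence from the question.
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClearStencil(0);
glClear(GL_COLOR_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);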

How to emulate an accumulation buffer in OpenGL ES 2.0 (Trailing Particles Effect)

So I have been trying to create a trailing particle effect (seen here) with OpenGL ES 2.0. Unfortunately, the OpenGL feature that makes this possible (the accumulation buffer) is not available in OpenGL ES. This means it is necessary to go the LONG way.
This topic described a possible method to do such a thing. However I am quite confused about how to store things inside a buffer and combine buffers. So my thought was to do the following.
1. Draw the current frame into a texture, using a buffer that writes to a texture.
2. Draw the previous frames (but faded) into another buffer.
3. Put step 1 on top of step 2 and display that.
4. Save whatever is displayed for use next frame.
My understanding so far is that buffers store pixel data in the same way textures do, just that buffers can more easily be drawn to using shaders.
So the idea would probably be to render to a buffer THEN move it into a texture.
One approach for doing this that I found is this:
In retrospect, you should create two FBOs (each with its own texture); using the default framebuffer isn't reliable (the contents aren't guaranteed to be preserved between frames).
After binding the first FBO, clear it then render the scene normally. Once the scene has been rendered, use the texture as a source and render it to the second FBO with blending (the second FBO is never cleared). This will result in the second FBO containing a mix of the new scene and what was there before. Finally, the second FBO should be rendered directly to the window (this can be done by rendering a textured quad, similarly to the previous operation, or by using glBlitFramebuffer).
Essentially, the first FBO takes the place of the default framebuffer while the second FBO takes the place of the accumulation buffer.
In summary:
Initialisation:
For each FBO:
- glGenTextures
- glBindTexture
- glTexImage2D
- glBindFrameBuffer
- glFramebufferTexture2D
Each frame:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo1)
glClear
glDraw* // scene

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo2)
glBindTexture(tex1)
glEnable(GL_BLEND)
glBlendFunc
glDraw* // full-screen quad

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo2)
glBlitFramebuffer
Unfortunately it didn't have quite enough code (especially for initialization) to get me started.
But I have tried, and so far all I have gotten is a disappointing blank screen. I don't really know what I am doing, so this code is probably quite wrong.
var fbo1:GLuint = 0
var fbo2:GLuint = 0
var tex1:GLuint = 0
Init()
{
//...Loading shaders OpenGL etc.
//FBO 1
glGenFramebuffers(1, &fbo1)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo1)
//Create texture for shader output
glGenTextures(1, &tex1)
glBindTexture(GLenum(GL_TEXTURE_2D), tex1)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGB, width, height, 0, GLenum(GL_RGB), GLenum(GL_UNSIGNED_BYTE), nil)
glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex1, 0)
//FBO 2
glGenFramebuffers(1, &fbo2)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo2)
//Create texture for shader output
glGenTextures(1, &tex1)
glBindTexture(GLenum(GL_TEXTURE_2D), tex1)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGB, width, height, 0, GLenum(GL_RGB), GLenum(GL_UNSIGNED_BYTE), nil)
glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex1, 0)
}
func drawFullScreenTex()
{
glUseProgram(texShader)
let rect:[GLint] = [0, 0, GLint(width), GLint(height)]
glBindTexture(GLenum(GL_TEXTURE_2D), tex1)
//Texture is already bound
glTexParameteriv(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_CROP_RECT_OES), rect)
glDrawTexiOES(0, 0, 0, width, height)
}
func draw()
{
//Prep
glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), fbo1)
glClearColor(0, 0.1, 0, 1.0)
glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
//1
glUseProgram(pointShader);
passTheStuff() //Just passes in uniforms
drawParticles(glGetUniformLocation(pointShader, "color"), size_loc: glGetUniformLocation(pointShader, "pointSize")) //Draws particles
//2
glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), fbo2)
drawFullScreenTex()
//3
glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), 0)
glBindFramebuffer(GLenum(GL_READ_FRAMEBUFFER), fbo2)
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GLbitfield(GL_COLOR_BUFFER_BIT), GLenum(GL_NEAREST))
}
My main question is: could someone please write out the code for this? I think I understand the theory involved, but I have spent so much time trying in vain to apply it.
If you want a place to start, I have an Xcode project that draws dots and has a blue one that moves across the screen periodically here; the code that isn't working is in there as well.
Note: if you are going to write code, any language (C++, Java, Swift, Objective-C) is perfectly fine, as long as it is for OpenGL ES.
You call glGenTextures(1, &tex1) twice with the same variable tex1. This overwrites the variable. When you later call glBindTexture(GLenum(GL_TEXTURE_2D), tex1), it does not bind the texture corresponding to fbo1, but rather that of fbo2. You need a different texture for every fbo.
As for a reference, below is a sample from a working program of mine which uses multiple FBOs and renders to texture.
GLuint fbo[n];
GLuint tex[n];
init() {
glGenFramebuffers(n, fbo);
glGenTextures(n, tex);
for (int i = 0; i < n; ++i) {
glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
glBindTexture(GL_TEXTURE_2D, tex[i]);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex[i], 0);
}
}
render() {
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[0]);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Draw scene into buffer 0
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[1]);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D, tex[0]);
//Draw full screen tex
...
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D, tex[n - 1]);
// Draw to screen
return;
}
A few notes. In order to get it to work I had to add the texture parameters.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
This is because on my system they defaulted to GL_NEAREST_MIPMAP_LINEAR. This did not work for the FBO texture, as no mipmap was generated. Set these to anything you like.
Also, make sure you have textures enabled with
glEnable(GL_TEXTURE_2D)
I hope this will help.
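One detail the quoted summary leaves implicit is the fade itself. A common way to get it (a sketch only; drawFullScreenQuad is a hypothetical helper that draws a solid-colored, blended quad) is to dim the accumulation FBO slightly each frame before blending the new scene on top:
// Fade the accumulation target (the second FBO) a little every frame
glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullScreenQuad(0.0f, 0.0f, 0.0f, 0.1f); // hypothetical: black quad at ~10% alpha dims the old trails
// ...then draw the freshly rendered scene texture on top, as in the summary above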

EXC_BAD_ACCESS with glTexImage2D in GLKViewController

I have an EXC_BAD_ACCESS at the last line of this code (the code is fired several times per second), but I cannot figure out what the problem is:
[EAGLContext setCurrentContext:_context];
glActiveTexture(GL_TEXTURE0);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, _backgroundTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _outputFrame.cols, _outputFrame.rows, 0, GL_BGRA, GL_UNSIGNED_BYTE, _outputFrame.data);
When debugging I make sure that the texture is created (the id is > 0) and that the output frame has a valid pointer to its data and is a 4-channel matrix. I am inside the drawRect method of a GLKViewController, so I think I should not have to bind the framebuffer, as that is one of the things that is automated here. It doesn't crash at the first frame, but a few dozen frames later.
Can anybody spot the problem?
UPDATE:
It seems it's because of a race condition on _outputFrame: it's being updated while being read by glTexImage2D. I will try to lock it for reading, then report back.
That was indeed the solution (see UPDATE); I fixed it with an NSLock. First, I swapped the instance variable _outputFrame with a temporary one that gets updated from another thread, and used the lock when updating the instance variable:
[_frameLock lock];
_outputFrame = temp;
[_frameLock unlock];
Then I used the lock when reading from the instance variable:
glActiveTexture(GL_TEXTURE0);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, _backgroundTexture);
[_frameLock lock];
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _outputFrame.cols, _outputFrame.rows, 0, GL_BGRA, GL_UNSIGNED_BYTE, _outputFrame.data);
[_frameLock unlock];
I just figured out a problem like this after several days.
1. It is better to avoid rendering from multiple threads.
2. It is better to render in a GLKView with a GLKBaseEffect, and not to manage the framebuffer and renderbuffer manually yourself.
3. Have the base effect render the raw pixel data like this:
My solution:
glTexImage2D(...);
self.baseEffect.texture2d0.envMode = GLKTextureEnvModeReplace;
self.baseEffect.texture2d0.target = GLKTextureTarget2D;
self.baseEffect.texture2d0.name = texture;
self.baseEffect.texture2d0.enabled = YES;
self.baseEffect.useConstantColor = YES;
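One thing the snippet above does not show, and which is easy to miss: GLKBaseEffect only applies its state once prepareToDraw is called before the draw call, roughly like this:
// prepareToDraw must run before the textured geometry is drawn
[self.baseEffect prepareToDraw];
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);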

CVOpenGLESTextureCache vs glTexSubImage2D on iOS

My app uses OpenGL to render a texture full screen and updates part of it at regular intervals. So far, I've been using glTexImage2D to push my initial texture and then I update the dirty regions with glTexSubImage2D. To do that, I'm using single buffering. This works well.
I've seen that there might be another way to achieve the same thing using CVOpenGLESTextureCache. The textures held in the texture cache reference a CVPixelBuffer. I'd like to know if I can mutate these cached textures. I tried to recreate a CVOpenGLESTexture for each update but this decreases my frame rate dramatically (not surprising after all since I'm not specifying the dirty region anywhere). Maybe I totally misunderstood the use case for this texture cache.
Can someone provide some guidance?
UPDATE: Here is the code I'm using. The first update works fine. The subsequent updates don't (nothing happens). Between each update I modify the raw bitmap.
if (firstUpdate) {
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, ctx, NULL, &texCache);
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreateWithBytes(NULL, width_, height_, kCVPixelFormatType_32BGRA, bitmap, width_*4, NULL, 0, NULL, &pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, texCache, pixelBuffer, NULL, GL_TEXTURE_2D, GL_RGBA, width_, height_, GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
texture_[0] = CVOpenGLESTextureGetName(texture);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
CVOpenGLESTextureCacheFlush(texCache, 0);
if (firstUpdate) {
glBindTexture(GL_TEXTURE_2D, texture_[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
if (firstUpdate) {
static const float textureVertices[] = {
-1.0, -1.0,
1.0, -1.0,
-1.0, 1.0,
1.0, 1.0
};
static const float textureCoords[] = {
0.0, 0.0,
1.0, 0.0,
0.0, 1.0,
1.0, 1.0
};
glVertexPointer(2, GL_FLOAT, 0, &textureVertices[0]);
glTexCoordPointer(2, GL_FLOAT, 0, textureCoords);
}
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
firstUpdate = false;
I have been doing quite a bit of hacking with these texture APIs and I finally was able to produce a working example of writing to a texture via memory using the texture cache API. These APIs work on the iOS device but not on the simulator, so a special workaround was needed (basically just calling glTexSubImage2D() explicitly in the simulator). The code needed to double buffer the texture loading done in another thread to avoid updating while rendering was going on. The full source code and timing results are at opengl_write_texture_cache. The linked Xcode project decodes from PNGs and the performance on older iPhone hardware is a little poor as a result. But the code is free to do whatever you want with, so it should not be hard to adapt to some other pixel source. To only write a dirty region, only write to that portion of the memory buffer in the background thread.
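To make the idea concrete: with the cache approach you keep one CVPixelBuffer around and write new pixels into its backing memory between lock/unlock calls, instead of recreating the CVOpenGLESTexture. A rough sketch (assuming the pixel buffer owns its storage, i.e. it was created with CVPixelBufferCreate rather than CVPixelBufferCreateWithBytes, and dirtyRowStart / dirtyRowCount are illustrative names):
// Copy only the dirty rows from the app-side bitmap into the buffer backing the cached texture
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t dstStride = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t srcStride = width_ * 4; // BGRA source rows
for (size_t row = dirtyRowStart; row < dirtyRowStart + dirtyRowCount; row++) {
    memcpy(dst + row * dstStride, (const uint8_t *)bitmap + row * srcStride, srcStride);
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// In the simulator this path does not update the GL texture, hence the glTexSubImage2D fallback mentioned above.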

OpenGL ES 2.0 Object Picking on iOS (Using Color Coding)

This might appear as a related question:
OpenGL ES 2.0 Object Picking on iOS
which says color picking is a good solution, and indeed, after reading about it:
http://www.lighthouse3d.com/opengl/picking/index.php?color1
it does seem like a very simple solution, so this brings me to this question:
OpenGL ES color picking on the iPhone
which unfortunately uses OpenGL ES 1.0. I am trying to do it in 2.0, so I have no access to the functions described in that question.
But the theory seems simple and here is what I think I should do:
On touchesBegan I render my objects, each with a unique color. On touchesEnded I read the pixel at that position (probably with glReadPixels) and check its color to identify my object.
The problem is that I don't know how to do the "render to the back buffer and read from it" part. My code so far simply uses "Draw"; I suspect I have to glBind the other buffer, but I would appreciate some help.
My Drawing code is like this:
glClearColor(0, 0, 0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
// Set the Projection Matrix
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(60), 2.0/3.0, 0, 50);
glUseProgram(_programHD);
glBindVertexArrayOES(_vao);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _textureBuffer[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glUniform1i(uniforms[UNIFORM_TEXTURE_HD], 1);
// Drawing starts here //
// Pass the Model View Matrix to Open GL
_modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix,rotationMatrix);
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX_HD], 1, GL_FALSE, _modelViewProjectionMatrix.m);
// Change texture coordinates to draw a different image
glUniform2fv(uniforms[TEXTURE_OFFSET_HD], 1, offSet.v);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
//glUniform2i(uniforms[TEXTURE_OFFSET], 7, -5);
glUniform2fv(uniforms[TEXTURE_OFFSET_HD], 1, borderHD.v);
glDrawElements(GL_LINE_STRIP, 6, GL_UNSIGNED_SHORT, 0);
glBindVertexArrayOES(0);
glUseProgram(0);
I have stripped the drawing calculations to make it more understandable.
The point is that I do not see anywhere that I specify where I am drawing to.
Thanks for your help.
I've actually just finished implementing a colour-picking function in my iPhone game using OpenGL ES 2.0, and, funnily enough, using the Lighthouse tutorial.
You should be drawing to the frame buffer.
If you want to read from the frame buffer, then you're correct in that you want to use glReadPixels. More information is here:
http://www.opengl.org/sdk/docs/man/xhtml/glReadPixels.xml
The only thing that's different from the lighthouse tutorial is that you also want to store the alpha values.
Here's a quick function to get the colour of a specific pixel. Feel free to improve it or change it, but it does the job.
+ (void) ProcessColourPick : (GLubyte*) out : (Float32) x : (Float32) y
{
GLint viewport[4];
//Get size of screen
glGetIntegerv(GL_VIEWPORT,viewport);
GLubyte pixel[4];
//Read pixel from a specific point
glReadPixels(x,viewport[3] - y,1,1,
GL_RGBA,GL_UNSIGNED_BYTE,(void *)pixel);
out[0] = pixel[0];
out[1] = pixel[1];
out[2] = pixel[2];
out[3] = pixel[3];
}
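As a complement (an illustrative sketch, not part of the answer above), one simple way to generate and decode the unique colors is to pack a 24-bit object index into the RGB channels, keeping alpha opaque so that a fully transparent background means "no hit":
// Encode an object index as the color used in the picking pass
static void PickColourForIndex(unsigned int index, GLfloat *rgba)
{
    rgba[0] = ((index >> 16) & 0xFF) / 255.0f;
    rgba[1] = ((index >> 8)  & 0xFF) / 255.0f;
    rgba[2] = ( index        & 0xFF) / 255.0f;
    rgba[3] = 1.0f;
}
// Decode the bytes returned by glReadPixels back into the index
static unsigned int IndexForPickedPixel(const GLubyte *pixel)
{
    return ((unsigned int)pixel[0] << 16) | ((unsigned int)pixel[1] << 8) | (unsigned int)pixel[2];
}
During the picking pass you would pass the encoded color to a flat-color fragment shader as a uniform, and call glReadPixels before the renderbuffer is presented.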
Hope this helps.
