How to view a renderbuffer of GLuints on the screen? - opengl-3

To get a sort of index of the elements drawn on the screen, I've created a framebuffer that draws the objects with solid colors into a GL_R32UI attachment.
The framebuffer I created has two renderbuffers attached: one for color and one for depth. Here is a schematic of how it was created, using Python:
my_fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
rbo = glGenRenderbuffers(2) # GL_DEPTH_COMPONENT16 and GL_COLOR_ATTACHMENT0
glBindRenderbuffer(GL_RENDERBUFFER, rbo[0])
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo[0])
glBindRenderbuffer(GL_RENDERBUFFER, rbo[1])
glRenderbufferStorage(GL_RENDERBUFFER, GL_R32UI, width, height)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[1])
glBindRenderbuffer(GL_RENDERBUFFER, 0)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
I read the indices with glReadPixels like this:
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
glReadPixels(x, y, threshold, threshold, GL_RED_INTEGER, GL_UNSIGNED_INT, r_data)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
The code works perfectly; I have no problem with that.
But for debugging, I'd like to see the indices on the screen.
With the data obtained below, how could I see the result of drawing the indices (unsigned int) on the screen?
active_fbo = glGetIntegerv(GL_FRAMEBUFFER_BINDING)
my_indices_fbo = my_fbo
my_rbo_depth = rbo[0]
my_rbo_color = rbo[1]
## how to mix my_rbo_color and active_fbo??? ##
glBindFramebuffer(GL_FRAMEBUFFER, active_fbo)

glBlitFramebuffer transfers a rectangle of pixel values from one region of a read framebuffer to another region of a draw framebuffer.
glBindFramebuffer( GL_READ_FRAMEBUFFER, my_fbo );
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, active_fbo );
glBlitFramebuffer( 0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST );
Note, you have to be careful, because a GL_INVALID_OPERATION error will occur if the read buffer contains unsigned integer values and any draw buffer does not contain unsigned integer values. Since the internal format of the framebuffer's color attachment is GL_R32UI, and the internal format of the drawing buffer is usually something like GL_RGBA8, this may not work, or at least will not do what you expect.
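As a quick sanity check you can query glGetError right after the blit call above; a minimal sketch (the printf message is just illustrative):
// GL_INVALID_OPERATION here indicates the integer/non-integer format mismatch described above
if (glGetError() == GL_INVALID_OPERATION)
    printf("blit failed: incompatible read/draw buffer formats\n");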
But you can create a framebuffer with a texture attached to its color plane and use the texture as input to a post-processing pass, where you draw a quad over the whole canvas.
First you have to create the texture with the same size as the framebuffer:
ColorMap0 = glGenTextures(1);
glBindTexture(GL_TEXTURE_2D, ColorMap0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, 0);
You have to attach the texture to the frame buffer:
my_fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ColorMap0, 0);
When you have drawn the scene, you have to release (unbind) the framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0)
Now you can use the texture as an input for a final pass. Simply bind the texture, enable 2D texturing and draw a quad over the whole canvas. The quad should range from (-1,-1) to (1,1), with texture coordinates in the range (0, 0) to (1, 1). Of course you can use a shader for that, with a texture sampler uniform in the fragment shader. You can read the texel from the texture and write it to the fragment in any way you want.
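For example, the fragment shader of that final pass could read the index through a usampler2D and map it to a gray value. This is only a sketch; the sampler name u_indexTex, the varying v_uv and the divisor 255.0 are assumptions you would adapt to your own index range:
// Hypothetical post-pass fragment shader (GLSL 3.30), stored as a C string
static const char *debugFragSrc =
    "#version 330 core\n"
    "uniform usampler2D u_indexTex;\n"
    "in vec2 v_uv;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    uint index = texture(u_indexTex, v_uv).r;\n"
    "    float g = float(index) / 255.0;  // assumes indices < 256\n"
    "    fragColor = vec4(g, g, g, 1.0);\n"
    "}\n";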
Extension to the answer
If performance is not important, you can convert the buffer on the CPU and draw it on the canvas after reading the framebuffer with glReadPixels. For that you can leave your code as it is and read the framebuffer with glReadPixels, but you have to convert the buffer to a format appropriate for the drawing buffer. I suggest using the internal format GL_RGBA8 or GL_RGB8. You have to create a new texture with the converted buffer data.
debugTexturePlane = ...;
debugTexture = glGenTextures(1);
glBindTexture(GL_TEXTURE_2D, debugTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, debugTexturePlane);
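The conversion that fills debugTexturePlane above could, for example, look like this on the CPU; a minimal sketch assuming the indices fit into a byte and that r_data holds the width*height GLuints read back with glReadPixels (buffer and variable names are assumptions):
// Hypothetical conversion: map each 32-bit index to a gray RGB triple (3 bytes per pixel, matching GL_RGB above)
GLubyte *debugTexturePlane = malloc(width * height * 3);
for (int i = 0; i < width * height; ++i)
{
    GLubyte gray = (GLubyte)(r_data[i] & 0xFFu);  // scale or hash larger index ranges as needed
    debugTexturePlane[3*i + 0] = gray;
    debugTexturePlane[3*i + 1] = gray;
    debugTexturePlane[3*i + 2] = gray;
}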
From now on you have 2 possibilities.
Either you create a new frame buffer and attach the texture to its color plane
debugFbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, debugFbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, debugTexture, 0);
and use glBlitFramebuffer as described above to copy from the debug framebuffer to the drawing buffer's color plane.
This should not be a problem, because the internal formats of the buffers should be equal.
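A minimal sketch of that copy (assuming the default framebuffer, id 0, is the final target):
glBindFramebuffer(GL_READ_FRAMEBUFFER, debugFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  // or active_fbo
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);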
Or you draw a textured quad over the whole viewport. The code may look like this (old school):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, debugTexture);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex2f(-1.0, -1.0);
glTexCoord2f(0.0, 1.0); glVertex2f(-1.0, 1.0);
glTexCoord2f(1.0, 1.0); glVertex2f( 1.0, 1.0);
glTexCoord2f(1.0, 0.0); glVertex2f( 1.0, -1.0);
glEnd();

Related

I have encountered strange behaviour with opengl shader in iOS

I am writing an OpenGL shader on iOS to apply image effects. I need to use a map to look up pixels from an image. Here is the code for my shader:
precision lowp float;
uniform sampler2D u_Texture;
uniform sampler2D u_Map; //map
varying highp vec2 v_TexCoordinate;
void main()
{
//get the pixel
vec3 texel = texture2D(u_Map, v_TexCoordinate).rgb;
gl_FragColor = vec4(texel, 1.0);
}
The above is a test shader which should simply display the map that is being used.
Here is the behaviour of the above shader with two maps (output screenshots omitted):
MAP_1: pixel size 256 x 1
MAP_2: pixel size 256 x 3
While using the 256 x 1 map it works as it should, but with the 256 x 3 map it shows a black image. I have tested this with other maps too and found that it comes down to the pixel height of the map.
Here is the code on how I am loading the map:
+ (GLuint)loadImage:(UIImage *)image{
//Convert Image to Data
GLubyte* imageData = malloc(image.size.width * image.size.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, image.size.width, image.size.height, 8, image.size.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, image.size.width, image.size.height), image.CGImage);
//Release Objects
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
//load into texture
GLuint textureHandle;
glGenTextures(1, &textureHandle);
glBindTexture(GL_TEXTURE_2D, textureHandle);
[GLToolbox checkGLError:@"glTextureHandle"];
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image.size.width, image.size.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
[GLToolbox checkGLError:@"glTexImage2D"];
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//Free Image Data
free(imageData);
return textureHandle;
}
I am not sure why this is happening. Maybe I am doing something wrong in loading the map of size 256x3. Someone please show me the way to fix the above issue.
Thanks in advance.
In OpenGL ES 2, non-power-of-two textures are required to have no mipmaps (you're fine there) and to use GL_CLAMP_TO_EDGE (I think this is your problem). From the spec:
Similarly, if the width or height of a texture image are not powers of two and either the GL_TEXTURE_MIN_FILTER is set to one of the functions that requires mipmaps or the GL_TEXTURE_WRAP_S or GL_TEXTURE_WRAP_T is not set to GL_CLAMP_TO_EDGE, then the texture image unit will return (R, G, B, A) = (0, 0, 0, 1).
You don't set the wrap mode, but from the same document:
Initially, GL_TEXTURE_WRAP_S is set to GL_REPEAT.
To fix it, set the wrap mode to GL_CLAMP_TO_EDGE, or use a 256x4 texture instead of 256x3 (I'd lean toward the latter unless there's some obstacle to doing so; GPUs love powers of two!).
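If you stay with the 256x3 map, the fix is just two extra texture parameters after binding the texture in the loading code (textureHandle is the handle created there):
// NPOT texture in OpenGL ES 2.0: wrap mode must be GL_CLAMP_TO_EDGE (and no mipmapping, which is already the case here)
glBindTexture(GL_TEXTURE_2D, textureHandle);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);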

Achieving a persistence effect in GLKit view

I have a GLKit view set up to draw a solid shape, a line and an array of points which all change every frame. The basics of my drawInRect method are:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClear(...);
glBufferData(...);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// draw points
glDrawArrays(GL_POINTS, ...);
}
This works fine; each array contains around 2000 points, but my iPad seems to have no problem rendering it all at 60fps.
The issue now is that I would like the lines to fade away slowly over time, instead of disappearing with the next frame, making a persistence or phosphor-like effect. The solid shape and the points must not linger, only the line.
I've tried the brute-force method (as used in Apple's example project aurioTouch): storing the data from the last 100 frames and drawing all 100 lines every frame, but this is too slow. My iPad can't render more than about 10fps with this method.
So my question is: can I achieve this more efficiently using some kind of frame or render buffer which accumulates the color of previous frames? Since I'm using GLKit, I haven't had to deal directly with these things before, and so don't know much about them. I've read about accumulation buffers, which seem to do what I want, but I've heard that they are very slow and anyway I can't tell whether they even exist in OpenGL ES 3, let alone how to use them.
I'm imagining something like the following (after setting up some kind of storage buffer):
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClear(...);
glBufferData(...);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw contents of storage buffer
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// multiply the alpha value of each pixel in the storage buffer by 0.9 to fade
// draw line again, this time into the storage buffer
// draw points
glDrawArrays(GL_POINTS, ...);
}
Is this possible? What are the commands I need to use (in particular, to combine the contents of the storage buffer and change its alpha)? And is this likely to actually be more efficient than the brute-force method?
I ended up achieving the desired result by rendering to a texture, as described for example here. The basic idea is to set up a custom framebuffer and attach a texture to it. I then render the line that I want to persist into this framebuffer (without clearing it) and render the whole framebuffer as a texture into the default framebuffer (which is cleared every frame). Instead of clearing the custom framebuffer, I render a slightly opaque quad over the whole screen to make the previous contents fade out a little every frame.
The relevant code is below; setting up the framebuffer and persistence texture is done in the init method:
// vertex data for fullscreen textured quad (x, y, texX, texY)
GLfloat persistVertexData[16] = {-1.0, -1.0, 0.0, 0.0,
-1.0, 1.0, 0.0, 1.0,
1.0, -1.0, 1.0, 0.0,
1.0, 1.0, 1.0, 1.0};
// setup texture vertex buffer
glGenBuffers(1, &persistVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(persistVertexData), persistVertexData, GL_STATIC_DRAW);
// create texture for persistence data and bind
glGenTextures(1, &persistTexture);
glBindTexture(GL_TEXTURE_2D, persistTexture);
// provide an empty image
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2048, 1536, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// set texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// create frame buffer for persistence data
glGenFramebuffers(1, &persistFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, persistFrameBuffer);
// attach texture to the color attachment
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, persistTexture, 0);
// check for errors
NSAssert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE, @"Error: persistence framebuffer incomplete!");
// initialize default frame buffer pointer
defaultFrameBuffer = -1;
and in the glkView:drawInRect: method:
// get default frame buffer id
if (defaultFrameBuffer == -1)
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFrameBuffer);
// clear screen
glClear(GL_COLOR_BUFFER_BIT);
// DRAW PERSISTENCE
// bind persistence framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, persistFrameBuffer);
// render full screen quad to fade
glEnableVertexAttribArray(...);
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glVertexAttribPointer(...);
glUniform4f(colorU, 0.0, 0.0, 0.0, 0.01);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// add most recent line
glBindBuffer(GL_ARRAY_BUFFER, dataVertexBuffer);
glVertexAttribPointer(...);
glUniform4f(colorU, color[0], color[1], color[2], 0.8*color[3]);
glDrawArrays(...);
// return to normal framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);
// switch to texture shader
glUseProgram(textureProgram);
// bind texture
glBindTexture(GL_TEXTURE_2D, persistTexture);
glUniform1i(textureTextureU, 0);
// set texture vertex attributes
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glEnableVertexAttribArray(texturePositionA);
glEnableVertexAttribArray(textureTexCoordA);
glVertexAttribPointer(self.shaderBridge.texturePositionA, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), 0);
glVertexAttribPointer(self.shaderBridge.textureTexCoordA, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), 2*sizeof(GLfloat));
// draw fullscreen quad with texture
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// DRAW NORMAL FRAME
glUseProgram(normalProgram);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// draw points
glDrawArrays(GL_POINTS, ...);
The texture shaders are very simple: the vertex shader just passes the texture coordinate to the fragment shader:
attribute vec4 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
void main(void)
{
gl_Position = aPosition;
vTexCoord = aTexCoord;
}
and the fragment shader reads the fragment color from the texture:
uniform highp sampler2D uTexture;
varying vec2 vTexCoord;
void main(void)
{
gl_FragColor = texture2D(uTexture, vTexCoord);
}
Although this works, it doesn't seem very efficient, causing the renderer utilization to rise to close to 100%. It only seems better than the brute force approach when the number of lines drawn each frame exceeds 100 or so. If anyone has any suggestions on how to improve this code, I would be very grateful!

Render Large Texture To Smaller Renderbuffer

I have a render buffer that is 852x640 and a texture that is 1280x720. When I render the texture, it is getting cropped, not just stretched. I know the aspect ratio needs correcting, but how can I get it so that the full texture displays in the render buffer?
//-------------------------------------
glGenFramebuffers(1, &frameBufferHandle);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
glGenRenderbuffers(1, &renderBufferHandle);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferHandle);
[oglContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &renderBufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &renderBufferHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderBufferHandle);
//-------------------------------------
static const GLfloat squareVertices[] = {
-1.0f, 1.0f,
1.0f, 1.0f,
-1.0f, -1.0f,
1.0f, -1.0f
};
static const GLfloat horizontalFlipTextureCoordinates[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
size_t frameWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t frameHeight = CVPixelBufferGetHeight(pixelBuffer);
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RGBA,
frameWidth,
frameHeight,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&texture);
if (!texture || err) {
NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
return;
}
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
glViewport(0, 0, renderBufferWidth, renderBufferHeight); // setting this to 1280x720 fixes the aspect ratio but still crops
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
glUseProgram(shaderPrograms[PASSTHROUGH]);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, horizontalFlipTextureCoordinates);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Present
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferHandle);
[oglContext presentRenderbuffer:GL_RENDERBUFFER];
EDIT
I'm still running into issues. I've included more source. Basically, I need the entire raw input texture to display in widescreen while also writing the raw texture to disk.
When rendering to a smaller texture, things are automatically scaled; is this not the case with a renderbuffer?
I guess I could make another passthrough to a smaller texture, but that would slow things down.
First of all, keep glViewport(0, 0, renderBufferWidth, renderBufferHeight); at 852x640.
The problem is in your squareVertices: it looks like they hold coordinates that represent the texture size. You need to set them according to the renderbuffer size.
The idea is that the texture is mapped onto your squareVertices rect, so you can render a texture of any size mapped to a rect of any size: the texture image will be scaled to fit the rect.
[Update: square vertices]
In your case it should be:
{
0.0f, (float)renderBufferHeight/frameHeight,
(float)renderBufferWidth/frameWidth, (float)renderBufferHeight/frameHeight,
0.0f, 0.0f,
(float)renderBufferWidth/frameWidth, 0.0f,
};
But this is not a good solution in general. In theory, the rectangle's size on screen is determined by the vertex positions and the transformation matrix: each vertex is multiplied by the matrix before rendering to the screen. It looks like you don't set an OpenGL projection matrix; with a correct orthographic projection your vertices would have pixel-equivalent positions.
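If you do want to keep the texture coordinates at 0..1 and instead letterbox the quad, one alternative (just a sketch, not part of the original answer) is to shrink the quad's clip-space extents by the ratio of the two aspect ratios:
// Hypothetical letterboxing: fit the whole frameWidth x frameHeight texture into the renderbuffer without cropping
float texAspect = (float)frameWidth / (float)frameHeight;                 // e.g. 1280/720
float rbAspect  = (float)renderBufferWidth / (float)renderBufferHeight;  // e.g. 852/640
float sx = 1.0f, sy = 1.0f;
if (texAspect > rbAspect)
    sy = rbAspect / texAspect;   // texture is wider: shrink the quad vertically
else
    sx = texAspect / rbAspect;   // texture is taller: shrink the quad horizontally
const GLfloat letterboxVertices[] = {
    -sx,  sy,
     sx,  sy,
    -sx, -sy,
     sx, -sy,
};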
Being new to OpenGL, I remember that the texture to be mapped should have power-of-two dimensions,
e.g. an image resolution of 256x256 or 512x512.
You can then scale the image using the gl.glScalef(x, y, z) function as per your requirements: get the height and width accordingly and put them in your scale call.
Try this, I hope it works.
Try these functions. My answer can be validated from the info at songho.ca.
glGenFramebuffers()
void glGenFramebuffers(GLsizei n, GLuint* ids)
n is the number of framebuffers to create; ids is a pointer to a GLuint variable or an array in which to store the generated IDs. It returns the IDs of unused framebuffer objects; ID 0 means the default framebuffer, which is the window-system-provided framebuffer.
An FBO may be deleted by calling glDeleteFramebuffers(GLsizei n, const GLuint* ids) when it is not used anymore.
glBindFramebuffer()
Once an FBO is created, it has to be bound before using it.
void glBindFramebuffer(GLenum target, GLuint id)
The first parameter, target, should be GL_FRAMEBUFFER; the second parameter is the ID of a framebuffer object.
Once an FBO is bound, all OpenGL operations affect the currently bound framebuffer object. The ID 0 is reserved for the default window-system-provided framebuffer; therefore, to unbind the current framebuffer (FBO), use ID 0 in glBindFramebuffer().
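A minimal usage sketch of the two calls together:
GLuint fbo;
glGenFramebuffers(1, &fbo);              // create one FBO id
glBindFramebuffer(GL_FRAMEBUFFER, fbo);  // all rendering now targets fbo
/* ... attach storage and draw ... */
glBindFramebuffer(GL_FRAMEBUFFER, 0);    // back to the window-system framebuffer
glDeleteFramebuffers(1, &fbo);           // delete the FBO when it is no longer needed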
Try using those, or at least visit the link, which could help you a lot. Sorry, I'm not experienced in OpenGL, but I wanted to contribute the link and explain the two functions. I think you can use the info to write your code.
Oh boy, so the answer is that this was working all along ;) It turns out the high-resolution preset mode on the iPhone 4 actually covers less area than the medium-resolution preset. This threw me for a loop until Brigadir suggested what I should have done first all along: check the GPU snapshots.
I figured out the aspect ratio issue too by hacking the appropriate code in the GPUImage framework. https://github.com/bradLarson/GPUImage

GLImageProcessing ROI (Region of Interest)

I am currently trying to blur a part of an image. I am using Apple's GLImageProcessing example code.
The example code can blur the whole image and draw it to the EAGLView; what I want to do is blur only part of the image by supplying an ROI (region of interest).
I do not know how to supply an ROI to the function.
Here is the code which draws image to the view;
void drawGL(int wide, int high, float val, int mode)
{
static int prevmode = -1;
typedef void (*procfunc)(V2fT2f *, float);
typedef struct {
procfunc func;
procfunc degen;
} Filter;
const Filter filter[] = {
{ brightness },
{ contrast },
{ extrapolate, greyscale },
{ hue },
{ extrapolate, blur }, // The blur could be exaggerated by downsampling to half size
};
#define NUM_FILTERS (sizeof(filter)/sizeof(filter[0]))
rt_assert(mode < NUM_FILTERS);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, wide, 0, high, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(wide, high, 1);
glBindTexture(GL_TEXTURE_2D, Input.texID);
if (prevmode != mode)
{
prevmode = mode;
if (filter[mode].degen)
{
// Cache degenerate image, potentially a different size than the system framebuffer
glBindFramebufferOES(GL_FRAMEBUFFER_OES, DegenFBO);
glViewport(0, 0, Degen.wide*Degen.s, Degen.high*Degen.t);
// The entire framebuffer won't be written to if the image was padded to POT.
// In this case, clearing is a performance win on TBDR systems.
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_BLEND);
filter[mode].degen(fullquad, 1.0);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, SystemFBO);
}
}
// Render filtered image to system framebuffer
glViewport(0, 0, wide, high);
filter[mode].func(flipquad, val);
glCheckError();
}
And this is the function which blurs the image;
static void blur(V2fT2f *quad, float t) // t = 1
{
GLint tex;
V2fT2f tmpquad[4];
float offw = t / Input.wide;
float offh = t / Input.high;
int i;
glGetIntegerv(GL_TEXTURE_BINDING_2D, &tex);
// Three pass small blur, using rotated pattern to sample 17 texels:
//
// .\/..
// ./\\/
// \/X/\ rotated samples filter across texel corners
// /\\/.
// ../\.
// Pass one: center nearest sample
glVertexPointer (2, GL_FLOAT, sizeof(V2fT2f), &quad[0].x);
glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &quad[0].s);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glColor4f(1.0/5, 1.0/5, 1.0/5, 1.0);
validateTexEnv();
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Pass two: accumulate two rotated linear samples
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
for (i = 0; i < 4; i++)
{
tmpquad[i].x = quad[i].s + 1.5 * offw;
tmpquad[i].y = quad[i].t + 0.5 * offh;
tmpquad[i].s = quad[i].s - 1.5 * offw;
tmpquad[i].t = quad[i].t - 0.5 * offh;
}
glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &tmpquad[0].x);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glClientActiveTexture(GL_TEXTURE1);
glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &tmpquad[0].s);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, tex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB, GL_PRIMARY_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PRIMARY_COLOR);
glColor4f(0.5, 0.5, 0.5, 2.0/5);
validateTexEnv();
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Pass three: accumulate two rotated linear samples
for (i = 0; i < 4; i++)
{
tmpquad[i].x = quad[i].s - 0.5 * offw;
tmpquad[i].y = quad[i].t + 1.5 * offh;
tmpquad[i].s = quad[i].s + 0.5 * offw;
tmpquad[i].t = quad[i].t - 1.5 * offh;
}
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Restore state
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Half.texID);
glDisable(GL_TEXTURE_2D);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA);
glActiveTexture(GL_TEXTURE0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glDisable(GL_BLEND);
}
Where should I supply an ROI? Or, if there is any other way to blur a part of an image without an ROI, I would like to know that as well.
Thanks.
I'm not a big OpenGL ES expert, but this code operates on whole textures (not an ROI) on the surface.
I'm using this example too.
I think you should:
cut the ROI out of your image
create a new texture with this image
blur the whole new texture
draw the new texture over your original texture
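A rough sketch of the first two steps, copying the ROI out of the source pixels on the CPU and uploading it as its own texture (all names, and the requirement that the ROI be power-of-two sized for plain OpenGL ES 1.1, are assumptions):
// Hypothetical ROI extraction: copy a roiW x roiH block starting at (roiX, roiY)
// out of the full RGBA image and upload it as a separate texture
GLubyte *roiPixels = malloc(roiW * roiH * 4);
for (int row = 0; row < roiH; ++row)
    memcpy(roiPixels + row * roiW * 4,
           imagePixels + ((roiY + row) * imageWide + roiX) * 4,
           roiW * 4);
GLuint roiTex;
glGenTextures(1, &roiTex);
glBindTexture(GL_TEXTURE_2D, roiTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, roiW, roiH, 0, GL_RGBA, GL_UNSIGNED_BYTE, roiPixels);
free(roiPixels);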
Also, a few links:
How to implement a box or gaussian blur on iPhone, Blur Effect (Wet in Wet effect) in Paint Application Using OpenGL-ES,
how to sharp/blur an uiimage in iphone?
Have you tried glScissor() yet?
from the GLES1.1 spec:
glScissor defines a rectangle, called the scissor box, in window
coordinates. The first two arguments, x and y, specify the lower left
corner of the box. width and height specify the width and height of
the box.
To enable and disable the scissor test, call glEnable and glDisable
with argument GL_SCISSOR_TEST. The scissor test is initially disabled.
While scissor test is enabled, only pixels that lie within the scissor
box can be modified by drawing commands. Window coordinates have
integer values at the shared corners of frame buffer pixels.
glScissor(0, 0, 1, 1) allows modification of only the lower left pixel
in the window, and glScissor(0, 0, 0, 0) doesn't allow modification of
any pixels in the window.
You might have to do two draw passes: first the unfiltered image; then the filtered image drawn with the scissor test enabled.
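A rough sketch of those two passes using the drawGL() from the question (the ROI rectangle roiX/roiY/roiW/roiH is in window coordinates and is an assumption, as is using brightness with a neutral value of 1.0 for the unfiltered pass; 4 is the index of the blur entry in the filter table):
drawGL(wide, high, 1.0f, 0);         // pass 1: whole image, brightness at 1.0, i.e. effectively unmodified
glEnable(GL_SCISSOR_TEST);
glScissor(roiX, roiY, roiW, roiH);   // only pixels inside this box can be modified
drawGL(wide, high, val, 4);          // pass 2: blur filter, clipped to the ROI
glDisable(GL_SCISSOR_TEST);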

Read pixels from off-screen OpenGL pixel buffer in iOS (OpenGL ES)

I want to read pixels from an off-screen framebuffer (not backed by a CAEAGLLayer). My code to create the buffer looks like:
glGenFramebuffersOES(1, &_storeFramebuffer);
glGenRenderbuffersOES(1, &_storeRenderbuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, _storeFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _storeRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, _storeRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, w, h);
I read raw pixels with:
glBindFramebufferOES(GL_FRAMEBUFFER_OES, _storeFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _storeRenderbuffer);
glReadPixels(0, 0, _videoDimensions.width, _videoDimensions.height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(outPixelBuffer));
Rendering works well: I can render to this buffer and copy from it to the screen. But I can't get the raw pixels: glReadPixels always returns zeros, and glReadBuffer doesn't seem to exist in ES. I can read from the on-screen framebuffer with glReadPixels without problems. Any ideas?
Solved. RGBA to BGRA conversion is not supported by glReadPixels on iOS.
Changing
glReadPixels(0, 0, _videoDimensions.width, _videoDimensions.height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(outPixelBuffer));
to
glReadPixels(0, 0, _videoDimensions.width, _videoDimensions.height, GL_RGBA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(outPixelBuffer));
Solves the problem. glGetError is my new friend ;)
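For reference, a tiny error-check helper along these lines (a generic sketch, not from the project) makes this kind of bug much easier to spot:
// Minimal GL error-check helper: call it after suspicious GL calls
static void checkGLError(const char *label)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        printf("GL error 0x%04x after %s\n", err, label);
}
// usage:
// glReadPixels(...);
// checkGLError("glReadPixels");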
