Achieving a persistence effect in a GLKit view - iOS

I have a GLKit view set up to draw a solid shape, a line and an array of points which all change every frame. The basics of my drawInRect method are:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClear(...);
glBufferData(...);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// draw points
glDrawArrays(GL_POINTS, ...);
}
This works fine; each array contains around 2000 points, and my iPad seems to have no problem rendering it all at 60fps.
The issue now is that I would like the lines to fade away slowly over time, instead of disappearing with the next frame, making a persistence or phosphor-like effect. The solid shape and the points must not linger, only the line.
I've tried the brute-force method (as used in Apple's example project aurioTouch): storing the data from the last 100 frames and drawing all 100 lines every frame, but this is too slow. My iPad can't render more than about 10fps with this method.
So my question is: can I achieve this more efficiently using some kind of frame or render buffer which accumulates the color of previous frames? Since I'm using GLKit, I haven't had to deal directly with these things before, and so don't know much about them. I've read about accumulation buffers, which seem to do what I want, but I've heard that they are very slow, and in any case I can't tell whether they even exist in OpenGL ES 3, let alone how to use them.
I'm imagining something like the following (after setting up some kind of storage buffer):
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClear(...);
glBufferData(...);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw contents of storage buffer
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// multiply the alpha value of each pixel in the storage buffer by 0.9 to fade
// draw line again, this time into the storage buffer
// draw points
glDrawArrays(GL_POINTS, ...);
}
Is this possible? What are the commands I need to use (in particular, to combine the contents of the storage buffer and change its alpha)? And is this likely to actually be more efficient than the brute-force method?

I ended up achieving the desired result by rendering to a texture, as described for example here. The basic idea is to set up a custom framebuffer and attach a texture to it – I then render the line that I want to persist into this framebuffer (without clearing it) and render the whole framebuffer as a texture into the default framebuffer (which is cleared every frame). Instead of clearing the custom framebuffer, I render a slightly opaque quad over the whole screen to make the previous contents fade out a little every frame.
The relevant code is below; setting up the framebuffer and persistence texture is done in the init method:
// vertex data for fullscreen textured quad (x, y, texX, texY)
GLfloat persistVertexData[16] = {-1.0, -1.0, 0.0, 0.0,
                                 -1.0,  1.0, 0.0, 1.0,
                                  1.0, -1.0, 1.0, 0.0,
                                  1.0,  1.0, 1.0, 1.0};
// setup texture vertex buffer
glGenBuffers(1, &persistVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(persistVertexData), persistVertexData, GL_STATIC_DRAW);
// create texture for persistence data and bind
glGenTextures(1, &persistTexture);
glBindTexture(GL_TEXTURE_2D, persistTexture);
// provide an empty image
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2048, 1536, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// set texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// create frame buffer for persistence data
glGenFramebuffers(1, &persistFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, persistFrameBuffer);
// attach texture to the color attachment
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, persistTexture, 0);
// check for errors
NSAssert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE, @"Error: persistence framebuffer incomplete!");
// initialize default frame buffer pointer
defaultFrameBuffer = -1;
and in the glkView:drawInRect: method:
// get default frame buffer id
if (defaultFrameBuffer == -1)
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFrameBuffer);
// clear screen
glClear(GL_COLOR_BUFFER_BIT);
// DRAW PERSISTENCE
// bind persistence framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, persistFrameBuffer);
// render full screen quad to fade
glEnableVertexAttribArray(...);
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glVertexAttribPointer(...);
glUniform4f(colorU, 0.0, 0.0, 0.0, 0.01);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// add most recent line
glBindBuffer(GL_ARRAY_BUFFER, dataVertexBuffer);
glVertexAttribPointer(...);
glUniform4f(colorU, color[0], color[1], color[2], 0.8*color[3]);
glDrawArrays(...);
// return to normal framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);
// switch to texture shader
glUseProgram(textureProgram);
// bind texture
glBindTexture(GL_TEXTURE_2D, persistTexture);
glUniform1i(textureTextureU, 0);
// set texture vertex attributes
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glEnableVertexAttribArray(texturePositionA);
glEnableVertexAttribArray(textureTexCoordA);
glVertexAttribPointer(texturePositionA, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), 0);
glVertexAttribPointer(textureTexCoordA, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), (const GLvoid *)(2*sizeof(GLfloat)));
// draw fullscreen quad with texture
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// DRAW NORMAL FRAME
glUseProgram(normalProgram);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// draw points
glDrawArrays(GL_POINTS, ...);
The texture shaders are very simple: the vertex shader just passes the texture coordinate to the fragment shader:
attribute vec4 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
void main(void)
{
gl_Position = aPosition;
vTexCoord = aTexCoord;
}
and the fragment shader reads the fragment color from the texture:
uniform highp sampler2D uTexture;
varying vec2 vTexCoord;
void main(void)
{
gl_FragColor = texture2D(uTexture, vTexCoord);
}
Although this works, it doesn't seem very efficient, causing the renderer utilization to rise to close to 100%. It only seems better than the brute force approach when the number of lines drawn each frame exceeds 100 or so. If anyone has any suggestions on how to improve this code, I would be very grateful!
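One idea that might reduce the fill-rate cost (a sketch under the assumption that fill rate, not geometry, is the bottleneck; the numbers are illustrative): allocate the persistence texture at half resolution, so the fade quad and the final composite each touch a quarter of the pixels, and let linear filtering smooth the upscale.
// hypothetical half-resolution persistence buffer (1024x768 instead of 2048x1536)
glBindTexture(GL_TEXTURE_2D, persistTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 768, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // smooth the upscale
// when rendering into the persistence framebuffer, match the viewport:
glBindFramebuffer(GL_FRAMEBUFFER, persistFrameBuffer);
glViewport(0, 0, 1024, 768);
// ... fade quad and line drawing as before ...
// restore the full-size viewport before drawing to the default framebuffer:
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);
glViewport(0, 0, 2048, 1536);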

Related

How to view a renderbuffer of GLuints on the screen?

To get a sort of index of the elements drawn on the screen, I've created a framebuffer that draws objects with solid colors of type GL_R32UI.
The framebuffer I created has two renderbuffers attached, one for color and one for depth. Here is a schematic of how it was created, using Python:
my_fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
rbo = glGenRenderbuffers(2) # GL_DEPTH_COMPONENT16 and GL_COLOR_ATTACHMENT0
glBindRenderbuffer(GL_RENDERBUFFER, rbo[0])
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo[0])
glBindRenderbuffer(GL_RENDERBUFFER, rbo[1])
glRenderbufferStorage(GL_RENDERBUFFER, GL_R32UI, width, height)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[1])
glBindRenderbuffer(GL_RENDERBUFFER, 0)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
I read the indices with glReadPixels like this:
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
glReadPixels(x, y, threshold, threshold, GL_RED_INTEGER, GL_UNSIGNED_INT, r_data)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
The code works perfectly; I have no problem with that.
But for debugging, I'd like to see the indices on the screen.
With the data obtained below, how could I see the result of drawing the indices (unsigned int) on the screen?
active_fbo = glGetIntegerv(GL_FRAMEBUFFER_BINDING)
my_indices_fbo = my_fbo
my_rbo_depth = rbo[0]
my_rbo_color = rbo[1]
## how mix my_rbo_color and cur_fbo??? ##
glBindFramebuffer(GL_FRAMEBUFFER, active_fbo)
glBlitFramebuffer transfers a rectangle of pixel values from one region of a read framebuffer to another region of a draw framebuffer.
glBindFramebuffer( GL_READ_FRAMEBUFFER, my_fbo );
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, active_fbo );
glBlitFramebuffer( 0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST );
Note, you have to be careful, because a GL_INVALID_OPERATION error will occur if the read buffer contains unsigned integer values and any draw buffer does not contain unsigned integer values. Since the internal format of the framebuffer's color attachment is GL_R32UI, and the internal format of the drawing buffer is usually something like GL_RGBA8, this may not work, or it may not do what you expect.
But you can create a framebuffer with a texture attached to its color plane and use the texture as an input to a post pass, where you draw a quad over the whole canvas.
First you have to create the texture with the size as the frame buffer:
ColorMap0 = glGenTextures(1);
glBindTexture(GL_TEXTURE_2D, ColorMap0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, 0);
You have to attach the texture to the frame buffer:
my_fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ColorMap0, 0);
When you have drawn the scene, you have to unbind the framebuffer:
glBindFramebuffer(GL_FRAMEBUFFER, 0)
Now you can use the texture as an input for a final pass. Simply bind the texture, enable 2D texturing, and draw a quad over the whole canvas. The quad should range from (-1,-1) to (1,1), with texture coordinates in the range (0,0) to (1,1). Of course you can use a shader, with a texture sampler uniform in the fragment shader, for that. You can read the texel from the texture and write it to the fragment any way you want.
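A hedged sketch of such a post-pass fragment shader (GLSL 3.30, embedded here as a C++ string; the names are placeholders). Note that a usampler2D is required, because sampling a GL_R32UI attachment through a plain sampler2D yields undefined results:
const char* indexDebugFrag = R"(
#version 330 core
uniform usampler2D uIndexTex;
in vec2 vTexCoord;
out vec4 fragColor;
void main()
{
    uint id = texture(uIndexTex, vTexCoord).r;
    // spread the low bits of the index over R, G and B so nearby ids differ visibly
    fragColor = vec4(float( id        & 255u) / 255.0,
                     float((id >>  8) & 255u) / 255.0,
                     float((id >> 16) & 255u) / 255.0,
                     1.0);
}
)";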
Extension to the answer
If performance is not important, then you can convert the buffer on the CPU and draw it on the canvas after reading the framebuffer with glReadPixels. For that you can leave your code as it is, but you have to convert the buffer to a format appropriate to the drawing buffer. I suggest using the internal format GL_RGBA8 or GL_RGB8. You have to create a new texture with the converted buffer data.
debugTexturePlane = ...;
debugTexture = glGenTextures(1);
glBindTexture(GL_TEXTURE_2D, debugTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, debugTexturePlane);
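A hedged sketch of the conversion itself (the helper name is mine): each 32-bit index read back with glReadPixels is expanded into an RGB byte triple matching the GL_RGB/GL_UNSIGNED_BYTE upload above.
#include <cstdint>
#include <vector>

std::vector<uint8_t> convertIndicesToRGB8(const uint32_t* r_data, int width, int height)
{
    std::vector<uint8_t> plane(size_t(width) * height * 3);
    for (size_t i = 0; i < size_t(width) * height; ++i)
    {
        uint32_t id = r_data[i];              // one index per pixel, as read by glReadPixels
        plane[3*i + 0] =  id        & 0xFF;   // low byte  -> red
        plane[3*i + 1] = (id >>  8) & 0xFF;   // next byte -> green
        plane[3*i + 2] = (id >> 16) & 0xFF;   // next byte -> blue
    }
    return plane;
}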
From now on you have 2 possibilities.
Either you create a new frame buffer and attach the texture to its color plane
debugFbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, debugFbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, debugTexture, 0);
and use glBlitFramebuffer as described above to copy from the debug framebuffer to the color plane.
This should not cause any problems, because the internal formats of the two buffers are equal.
Or you draw a textured quad over the whole viewport. The code may look like this (old school):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, debugTexture);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex2f(-1.0, -1.0);
glTexCoord2f(0.0, 1.0); glVertex2f(-1.0, 1.0);
glTexCoord2f(1.0, 1.0); glVertex2f( 1.0, 1.0);
glTexCoord2f(1.0, 0.0); glVertex2f( 1.0, -1.0);
glEnd();

Display an image from the webcam using OpenCV and OpenGL

I'm capturing a picture from a webcam with OpenCV. The frame should then be transformed into an OpenGL texture and shown on the screen. I got the following code, but the window remains black. I'm very new to OpenGL and have run out of ideas about why it doesn't work.
int main ()
{
int w = 800,h=600;
glfwInit();
//configure glfw
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
GLFWwindow* window = glfwCreateWindow(w, h, "OpenGL", NULL, nullptr); // windowed
glfwMakeContextCurrent(window);
glewExperimental = GL_TRUE;
glewInit();
initializeCapturing();
//init GL
glViewport(0, 0, w, h); // use a screen size of WIDTH x HEIGHT
glEnable(GL_TEXTURE_2D); // Enable 2D texturing
glMatrixMode(GL_PROJECTION); // Make a simple 2D projection on the entire window
glLoadIdentity();
glOrtho(0.0, w, h, 0.0, 0.0, 100.0);
glMatrixMode(GL_MODELVIEW); // Set the matrix mode to object modeling
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClearDepth(0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the window
cv::Mat frame;
captureFromWebcam(frame,capture0);
/* OpenGL texture binding of the image loaded by DevIL */
GLuint texid;
glGenTextures(1, &texid); /* Texture name generation */
glBindTexture(GL_TEXTURE_2D, texid); /* Binding of texture name */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); /* We will use linear interpolation for magnification filter */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* We will use linear interpolation for minifying filter */
glTexImage2D(GL_TEXTURE_2D, 0,3, frame.size().width, frame.size().height, 0, GL_RGB, GL_UNSIGNED_BYTE, 0); /* Texture specification */
while(!glfwWindowShouldClose(window))
{
glfwPollEvents();
if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
glfwSetWindowShouldClose(window, GL_TRUE);
// Clear color and depth buffers
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D,texid);
glMatrixMode(GL_MODELVIEW); // Operate on model-view matrix
/* Draw a quad */
glBegin(GL_QUADS);
glTexCoord2i(0, 0); glVertex2i(0, 0);
glTexCoord2i(0, 1); glVertex2i(0, h);
glTexCoord2i(1, 1); glVertex2i(w, h);
glTexCoord2i(1, 0); glVertex2i(w, 0);
glEnd();
glFlush();
glfwSwapBuffers(window);
}
releaseCapturing();
glfwTerminate();
return 1;
}
and the other procedures:
cv::VideoCapture capture0;
cv::VideoCapture capture1;
void captureFromWebcam(cv::Mat &frame, cv::VideoCapture &capture)
{
capture.read(frame);
}
bool initializeCapturing()
{
capture0.open(0);
capture1.open(1);
if(!capture0.isOpened() || !capture1.isOpened())
{
std::cout << "One or more VideoCaptures could not be opened" << std::endl;
if(!capture0.isOpened())
capture0.release();
if(!capture1.isOpened())
capture1.release();
return false;
}
return true;
}
void releaseCapturing()
{
capture0.release();
capture1.release();
}
It looks like you mixed several code fragments you found in different places. You are requesting an OpenGL 3.2 core profile, but the drawing code you are using is immediate mode with the fixed-function pipeline, which is not available in a core profile. You basically requested a modern GL context, but the drawing code is completely outdated and no longer supported by the GL version you selected.
As a quick fix, you could simply remove the following lines:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
Doing so should provide you with either a legacy GL context or a compatibility profile of some modern context, depending on the operating system and GL implementation you use. You can expect to get at least OpenGL 2.1 that way (except on very old hardware not supporting GL2 at all, but even that would be OK for your code). The rest of the code should work in such a context.
I still suggest that you learn modern GL instead of the old, deprecated legacy stuff you are using here.
You did not set the pointer to the frame data:
glTexImage2D(GL_TEXTURE_2D, 0, 3, frame.size().width, frame.size().height, 0, GL_RGB, GL_UNSIGNED_BYTE, (void*)frame.data); /* Texture specification */
And with this code, you are only going to get the first frame.
Move captureFromWebcam and the texture upload inside the while loop, as in the sketch below.
(Sorry for my English.)
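A sketch of that change (hedged: it assumes the texture storage was already allocated once with glTexImage2D before the loop, and that the cv::Mat holds continuous 8-bit BGR data, which is OpenCV's default):
while (!glfwWindowShouldClose(window))
{
    glfwPollEvents();
    captureFromWebcam(frame, capture0);          // grab a fresh frame every iteration
    glBindTexture(GL_TEXTURE_2D, texid);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    frame.cols, frame.rows,
                    GL_BGR, GL_UNSIGNED_BYTE,    // OpenCV frames are BGR, not RGB
                    frame.data);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the textured quad as in the original loop ...
    glfwSwapBuffers(window);
}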

Create And Write GL_ALPHA texture (OpenGL ES 2)

I'm new to OpenGL, so I'm not sure how to do this.
Currently I'm doing this to create an alpha texture on iOS:
GLuint framebuffer, renderBufferT;
glGenFramebuffers(1, &framebuffer);
glGenTextures(1, &renderBufferT);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glBindTexture(GL_TEXTURE_2D, renderBufferT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0, GL_ALPHA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderBufferT, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE)
{
NSLog(#"createAlphaBufferWithSize not complete %x", status);
return NO;
}
But it returns an error: GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT
And I also wonder how to write to this texture in the fragment shader. Is it simply the same as with RGBA, like this:
gl_FragColor = vec1(0.5);
My intention is to use an efficient texture, because there is so much texture reading in my code, while I only need one color component.
Thanks for any direction where I might go with this.
I'm not an iOS guy, but that error indicates your OpenGL ES driver (PowerVR) does not support rendering to the GL_ALPHA format. I have not seen any OpenGL ES drivers that will do that on any platform. You can create GL_ALPHA textures to use with OpenGL ES using the texture compression tool in the PowerVR SDK, but I think the smallest format you can render to will be 16-bit color - if that is even available.
A better way to make textures efficient is to use compressed formats because the GPU decodes them with dedicated hardware. You really need the PowerVR Texture compression tool. It is a free download:
http://www.imgtec.com/powervr/insider/sdkdownloads/sdk_licence_agreement.asp
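For completeness, a hedged sketch of what uploading such a compressed texture looks like (the .pvr file loading and header parsing are omitted; width, height, dataSize and pvrtcData are placeholders):
// upload PVRTC 4bpp data produced by the PowerVR tool
// (requires the GL_IMG_texture_compression_pvrtc extension, present on iOS)
glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                       GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                       width, height, 0,
                       dataSize, pvrtcData);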
So I still haven't found the answers to my questions above, but I found a workaround.
In essence, since each pixel comprises 4 color components and I only need one for alpha, I use one texture to store the 4 different logical alpha textures that I need. It takes a little effort to maintain these logical alpha textures.
To draw to this one texture that contains 4 logical alpha textures, I use a shader and create a sign bit that marks which color component I intend to write to.
The blend I used is
(GL_ONE, GL_ONE_MINUS_SRC_COLOR),
And the fragment shader is like this:
uniform sampler2D uTexture;
uniform lowp vec4 uSignBit;
varying mediump vec2 vTexCoord;
void main()
{
lowp vec4 textureColor = texture2D(uTexture, vTexCoord);
gl_FragColor = uSignBit * textureColor.a;
}
So, when I intend to write an alpha value of some texture to logical alpha texture number 2, I write:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR);
glUniform4f(uSignBit, 0.0, 1.0, 0.0, 0.0);
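An alternative worth considering (my suggestion, not from the thread): glColorMask can restrict which channels a draw call writes to, which selects the logical texture without relying on the sign-bit multiply in the shader:
// write only to logical alpha texture #2 (here assumed to live in the green channel);
// blending still applies, but only the unmasked channel is updated
glColorMask(GL_FALSE, GL_TRUE, GL_FALSE, GL_FALSE);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);  // restore all channels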

Render Large Texture To Smaller Renderbuffer

I have a render buffer that is 852x640 and a texture that is 1280x720. When I render the texture, it is getting cropped, not just stretched. I know the aspect ratio needs correcting, but how can I get it so that the full texture displays in the render buffer?
//-------------------------------------
glGenFramebuffers(1, &frameBufferHandle);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
glGenRenderbuffers(1, &renderBufferHandle);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferHandle);
[oglContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &renderBufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &renderBufferHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderBufferHandle);
//-------------------------------------
static const GLfloat squareVertices[] = {
-1.0f, 1.0f,
1.0f, 1.0f,
-1.0f, -1.0f,
1.0f, -1.0f
};
static const GLfloat horizontalFlipTextureCoordinates[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
size_t frameWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t frameHeight = CVPixelBufferGetHeight(pixelBuffer);
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RGBA,
frameWidth,
frameHeight,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&texture);
if (!texture || err) {
NSLog(#"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
return;
}
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
glViewport(0, 0, renderBufferWidth, renderBufferHeight); // setting this to 1280x720 fixes the aspect ratio but still crops
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
glUseProgram(shaderPrograms[PASSTHROUGH]);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, horizontalFlipTextureCoordinates);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Present
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferHandle);
[oglContext presentRenderbuffer:GL_RENDERBUFFER];
EDIT
I'm still running into issues. I've included more source. Basically, I need the entire raw input texture to display in wide screen while also writing the raw texture to disk.
When rendering to a smaller texture, things are automatically scaled; is this not the case with a renderbuffer?
I guess I could make another passthrough to a smaller texture, but that would slow things down.
First of all, keep glViewport(0, 0, renderBufferWidth, renderBufferHeight); at 852x640.
The problem is in your squareVertices - it looks like they hold coordinates that represent the texture size. You need to set them relative to the renderbuffer size.
The idea is that the texture is mapped onto your squareVertices rect. So you can render a texture of any size mapped to a rect of any size - the texture image will be scaled to fit the rect.
[Update: square vertices]
In your case it should be:
{
0.0f, (float)renderBufferHeight/frameHeight,
(float)renderBufferWidth/frameWidth, (float)renderBufferHeight/frameHeight,
0.0f, 0.0f,
(float)renderBufferWidth/frameWidth, 0.0f,
};
But this is not a good solution in general. In theory, the rectangle's size on screen is determined by the vertex positions and the transformation matrix: each vertex is multiplied by the matrix before rendering to the screen. It looks like you don't set an OpenGL projection matrix. With a correct orthographic projection, your vertices would have pixel-equivalent positions.
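If the goal is simply to show the whole 1280x720 texture in the 852x640 buffer without cropping, a hedged alternative (the scale math is mine, using the question's variables) is to letterbox the quad in normalized device coordinates:
// shrink the quad along one axis so the texture's aspect ratio is preserved
float texAspect = (float)frameWidth / (float)frameHeight;                // 1280/720
float rbAspect  = (float)renderBufferWidth / (float)renderBufferHeight;  // 852/640
float sx = 1.0f, sy = 1.0f;
if (texAspect > rbAspect)
    sy = rbAspect / texAspect;   // wider texture: shrink vertically (letterbox)
else
    sx = texAspect / rbAspect;   // taller texture: shrink horizontally (pillarbox)
const GLfloat letterboxVertices[] = {
    -sx,  sy,
     sx,  sy,
    -sx, -sy,
     sx, -sy,
};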
Being new to OpenGL myself, I remember that a texture to be mapped should have power-of-two dimensions, e.g. an image resolution of 256x256 or 512x512.
You can then SCALE the image using the gl.glScalef(x,y,z) function according to your requirements: get the height and width and put them in your glScalef call.
Try this, I hope it works.
Try these functions. My answer can be validated against the info at songho.ca.
glGenFramebuffers()
void glGenFramebuffers(GLsizei n, GLuint* ids)
n is the number of framebuffers to create; ids is a pointer to a GLuint variable or an array to store the generated IDs. It returns the IDs of unused framebuffer objects, where ID 0 means the default framebuffer, i.e. the window-system-provided framebuffer.
An FBO may be deleted by calling glDeleteFramebuffers(GLsizei n, const GLuint* ids) when it is not used anymore.
glBindFramebuffer()
Once an FBO is created, it has to be bound before using it:
void glBindFramebuffer(GLenum target, GLuint id)
The first parameter, target, should be GL_FRAMEBUFFER, and the second is the ID of a framebuffer object.
Once an FBO is bound, all OpenGL operations affect the currently bound framebuffer object. The ID 0 is reserved for the default window-system-provided framebuffer; therefore, to unbind the current FBO, bind ID 0 in glBindFramebuffer().
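A minimal usage sketch of the two calls described above (generic, not tied to the question's code):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);  // all rendering now targets the FBO
// ... attach a texture or renderbuffer to GL_COLOR_ATTACHMENT0, then draw ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);    // back to the window-system framebuffer
glDeleteFramebuffers(1, &fbo);           // delete the FBO when it is no longer used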
Try using those, or at least visit the link, which could help you a lot. Sorry, I'm not experienced in OpenGL, but I wanted to contribute the link and explain the two functions. I think you can use the info to write your code.
Oh boy, so the answer is that this was working all along ;) It turns out the high-resolution preset mode on the iPhone 4 actually covers less area than the medium-resolution preset. This threw me for a loop until Brigadir suggested what I should have done all along: check the GPU snapshots.
I figured out the aspect ratio issue too by hacking the appropriate code in the GPUImage framework. https://github.com/bradLarson/GPUImage

GLImageProcessing ROI (Region of Interest)

I am currently trying to blur part of an image, using Apple's example code here.
The example code itself can blur the whole image and draw it to the EAGLView; what I want to do is blur only part of the image by supplying an ROI.
I do not know how to supply an ROI to the function.
Here is the code which draws the image to the view:
void drawGL(int wide, int high, float val, int mode)
{
static int prevmode = -1;
typedef void (*procfunc)(V2fT2f *, float);
typedef struct {
procfunc func;
procfunc degen;
} Filter;
const Filter filter[] = {
{ brightness },
{ contrast },
{ extrapolate, greyscale },
{ hue },
{ extrapolate, blur }, // The blur could be exaggerated by downsampling to half size
};
#define NUM_FILTERS (sizeof(filter)/sizeof(filter[0]))
rt_assert(mode < NUM_FILTERS);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, wide, 0, high, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(wide, high, 1);
glBindTexture(GL_TEXTURE_2D, Input.texID);
if (prevmode != mode)
{
prevmode = mode;
if (filter[mode].degen)
{
// Cache degenerate image, potentially a different size than the system framebuffer
glBindFramebufferOES(GL_FRAMEBUFFER_OES, DegenFBO);
glViewport(0, 0, Degen.wide*Degen.s, Degen.high*Degen.t);
// The entire framebuffer won't be written to if the image was padded to POT.
// In this case, clearing is a performance win on TBDR systems.
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_BLEND);
filter[mode].degen(fullquad, 1.0);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, SystemFBO);
}
}
// Render filtered image to system framebuffer
glViewport(0, 0, wide, high);
filter[mode].func(flipquad, val);
glCheckError();
}
And this is the function which blurs the image:
static void blur(V2fT2f *quad, float t) // t = 1
{
GLint tex;
V2fT2f tmpquad[4];
float offw = t / Input.wide;
float offh = t / Input.high;
int i;
glGetIntegerv(GL_TEXTURE_BINDING_2D, &tex);
// Three pass small blur, using rotated pattern to sample 17 texels:
//
// .\/..
// ./\\/
// \/X/\ rotated samples filter across texel corners
// /\\/.
// ../\.
// Pass one: center nearest sample
glVertexPointer (2, GL_FLOAT, sizeof(V2fT2f), &quad[0].x);
glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &quad[0].s);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glColor4f(1.0/5, 1.0/5, 1.0/5, 1.0);
validateTexEnv();
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Pass two: accumulate two rotated linear samples
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
for (i = 0; i < 4; i++)
{
tmpquad[i].x = quad[i].s + 1.5 * offw;
tmpquad[i].y = quad[i].t + 0.5 * offh;
tmpquad[i].s = quad[i].s - 1.5 * offw;
tmpquad[i].t = quad[i].t - 0.5 * offh;
}
glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &tmpquad[0].x);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glClientActiveTexture(GL_TEXTURE1);
glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &tmpquad[0].s);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, tex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB, GL_PRIMARY_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PRIMARY_COLOR);
glColor4f(0.5, 0.5, 0.5, 2.0/5);
validateTexEnv();
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Pass three: accumulate two rotated linear samples
for (i = 0; i < 4; i++)
{
tmpquad[i].x = quad[i].s - 0.5 * offw;
tmpquad[i].y = quad[i].t + 1.5 * offh;
tmpquad[i].s = quad[i].s + 0.5 * offw;
tmpquad[i].t = quad[i].t - 1.5 * offh;
}
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Restore state
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Half.texID);
glDisable(GL_TEXTURE_2D);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA);
glActiveTexture(GL_TEXTURE0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glDisable(GL_BLEND);
}
Where should I supply the ROI? Or, if it is possible to blur part of an image some other way without an ROI, I would like to know that as well.
Thanks.
I'm not a big OpenGL ES expert, but this code operates on whole (not ROI) textures on the surface.
I'm using this example too.
I think you should (a rough sketch follows the list):
cut the ROI out of your image
create a new texture with this image
blur the whole new texture
draw the new texture over your original texture
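A rough sketch of the first two steps (the ROI variables are hypothetical, and it assumes the source image has already been rendered into the currently bound framebuffer; note that on ES 1.1 the ROI sides may need to be powers of two):
GLuint roiTex;
glGenTextures(1, &roiTex);
glBindTexture(GL_TEXTURE_2D, roiTex);
// copy the ROI rectangle out of the framebuffer into the new texture
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, roiX, roiY, roiWidth, roiHeight, 0);
// ... blur roiTex with the existing filter, then draw it over the original image ...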
Also, a few links:
How to implement a box or gaussian blur on iPhone, Blur Effect (Wet in Wet effect) in Paint Application Using OpenGL-ES,
how to sharp/blur an uiimage in iphone?
Have you tried glScissor() yet?
from the GLES1.1 spec:
glScissor defines a rectangle, called the scissor box, in window
coordinates. The first two arguments, x and y, specify the lower left
corner of the box. width and height specify the width and height of
the box.
To enable and disable the scissor test, call glEnable and glDisable
with argument GL_SCISSOR_TEST. The scissor test is initially disabled.
While scissor test is enabled, only pixels that lie within the scissor
box can be modified by drawing commands. Window coordinates have
integer values at the shared corners of frame buffer pixels.
glScissor(0, 0, 1, 1) allows modification of only the lower left pixel
in the window, and glScissor(0, 0, 0, 0) doesn't allow modification of
any pixels in the window.
You might have to do two draw passes: first the unfiltered image, then the filtered image drawn with the scissor test enabled.
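A hedged sketch of that idea (the draw helpers and ROI coordinates are placeholders; glScissor takes window coordinates with the origin at the lower left):
glDisable(GL_SCISSOR_TEST);
drawUnfilteredImage();                       // pass 1: whole image, no blur
glEnable(GL_SCISSOR_TEST);
glScissor(roiX, roiY, roiWidth, roiHeight);  // only pixels inside this box can change
drawBlurredImage();                          // pass 2: blurred image, clipped to the ROI
glDisable(GL_SCISSOR_TEST);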
