GLImageProcessing ROI (Region of Interest) - iOS

I am currently trying to blur a part of an image. I use Apple's example code here.
The example code itself can blur the whole image and draw it to the EAGLView; what I want to do is blur only part of the image by supplying an ROI.
I do not know how to supply an ROI to the function.
Here is the code which draws the image to the view:
void drawGL(int wide, int high, float val, int mode)
{
    static int prevmode = -1;
    typedef void (*procfunc)(V2fT2f *, float);

    typedef struct {
        procfunc func;
        procfunc degen;
    } Filter;

    const Filter filter[] = {
        { brightness             },
        { contrast               },
        { extrapolate, greyscale },
        { hue                    },
        { extrapolate, blur      }, // The blur could be exaggerated by downsampling to half size
    };
    #define NUM_FILTERS (sizeof(filter)/sizeof(filter[0]))
    rt_assert(mode < NUM_FILTERS);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(0, wide, 0, high, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glScalef(wide, high, 1);

    glBindTexture(GL_TEXTURE_2D, Input.texID);

    if (prevmode != mode)
    {
        prevmode = mode;
        if (filter[mode].degen)
        {
            // Cache degenerate image, potentially a different size than the system framebuffer
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, DegenFBO);
            glViewport(0, 0, Degen.wide*Degen.s, Degen.high*Degen.t);
            // The entire framebuffer won't be written to if the image was padded to POT.
            // In this case, clearing is a performance win on TBDR systems.
            glClear(GL_COLOR_BUFFER_BIT);
            glDisable(GL_BLEND);
            filter[mode].degen(fullquad, 1.0);
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, SystemFBO);
        }
    }

    // Render filtered image to system framebuffer
    glViewport(0, 0, wide, high);
    filter[mode].func(flipquad, val);
    glCheckError();
}
And this is the function which blurs the image:
static void blur(V2fT2f *quad, float t) // t = 1
{
    GLint tex;
    V2fT2f tmpquad[4];
    float offw = t / Input.wide;
    float offh = t / Input.high;
    int i;

    glGetIntegerv(GL_TEXTURE_BINDING_2D, &tex);

    // Three pass small blur, using rotated pattern to sample 17 texels:
    //
    // .\/..
    // ./\\/
    // \/X/\   rotated samples filter across texel corners
    // /\\/.
    // ../\.

    // Pass one: center nearest sample
    glVertexPointer  (2, GL_FLOAT, sizeof(V2fT2f), &quad[0].x);
    glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &quad[0].s);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glColor4f(1.0/5, 1.0/5, 1.0/5, 1.0);
    validateTexEnv();
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Pass two: accumulate two rotated linear samples
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);
    for (i = 0; i < 4; i++)
    {
        tmpquad[i].x = quad[i].s + 1.5 * offw;
        tmpquad[i].y = quad[i].t + 0.5 * offh;
        tmpquad[i].s = quad[i].s - 1.5 * offw;
        tmpquad[i].t = quad[i].t - 0.5 * offh;
    }
    glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &tmpquad[0].x);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glClientActiveTexture(GL_TEXTURE1);
    glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &tmpquad[0].s);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PREVIOUS);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB, GL_PRIMARY_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PRIMARY_COLOR);
    glColor4f(0.5, 0.5, 0.5, 2.0/5);
    validateTexEnv();
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Pass three: accumulate two rotated linear samples
    for (i = 0; i < 4; i++)
    {
        tmpquad[i].x = quad[i].s - 0.5 * offw;
        tmpquad[i].y = quad[i].t + 1.5 * offh;
        tmpquad[i].s = quad[i].s + 0.5 * offw;
        tmpquad[i].t = quad[i].t - 1.5 * offh;
    }
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Restore state
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glClientActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, Half.texID);
    glDisable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA);
    glActiveTexture(GL_TEXTURE0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glDisable(GL_BLEND);
}
Where should I supply the ROI? If there is another way to blur only part of an image without an ROI, I would like to know that as well.
Thanks.

I am not a big OpenGL ES expert, but this code operates on whole textures, not on an ROI. I am using this example too.
I think you should:
cut the ROI out of your image
create a new texture from this sub-image
blur the whole new texture
draw the new texture over your original texture (see the sketch below)
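For the first two steps, cropping the ROI out of a UIImage and turning it into a new texture could look roughly like this (a minimal sketch; loadTextureFromCGImage is a hypothetical helper standing in for whatever texture-upload routine the project already uses, and the ROI values are made up):
// Cut the ROI out of the source image with Core Graphics.
CGRect roi = CGRectMake(64.0, 64.0, 128.0, 128.0);  // region of interest in pixels (hypothetical values)
CGImageRef croppedImage = CGImageCreateWithImageInRect(sourceImage.CGImage, roi);

// Upload the cropped pixels as a separate GL texture, blur it,
// then draw it back over the original at the ROI's position.
GLuint roiTexture = loadTextureFromCGImage(croppedImage);  // hypothetical upload helper
CGImageRelease(croppedImage);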
Also, a few links:
How to implement a box or gaussian blur on iPhone, Blur Effect (Wet in Wet effect) in Paint Application Using OpenGL-ES,
how to sharp/blur an uiimage in iphone?

Have you tried glScissor() yet?
From the GLES 1.1 spec:
glScissor defines a rectangle, called the scissor box, in window
coordinates. The first two arguments, x and y, specify the lower left
corner of the box. width and height specify the width and height of
the box.
To enable and disable the scissor test, call glEnable and glDisable
with argument GL_SCISSOR_TEST. The scissor test is initially disabled.
While scissor test is enabled, only pixels that lie within the scissor
box can be modified by drawing commands. Window coordinates have
integer values at the shared corners of frame buffer pixels.
glScissor(0, 0, 1, 1) allows modification of only the lower left pixel
in the window, and glScissor(0, 0, 0, 0) doesn't allow modification of
any pixels in the window.
You might have to do two draw passes: first the unfiltered image, then the filtered image drawn with the scissor test enabled (see the sketch below).
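A minimal sketch of that idea (drawUnfiltered and drawBlurred are hypothetical stand-ins for the existing draw calls; the scissor box is specified in window pixels with the origin at the lower left):
// Pass 1: draw the whole image with no filter.
drawUnfiltered();

// Pass 2: enable the scissor test so only the ROI can be modified,
// then draw the blurred version of the image on top.
glEnable(GL_SCISSOR_TEST);
glScissor(roiX, roiY, roiWidth, roiHeight);  // lower-left corner, width, height
drawBlurred();
glDisable(GL_SCISSOR_TEST);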

Related

How to view a renderbuffer of GLuints on the screen?

To get a sort of index of the elements drawn on the screen, I've created a framebuffer that will draw objects with solid colors of type GL_R32UI.
The framebuffer I created has two renderbuffers attached: one for color and one for depth. Here is a schematic of how it was created, using Python:
my_fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
rbo = glGenRenderbuffers(2) # GL_DEPTH_COMPONENT16 and GL_COLOR_ATTACHMENT0
glBindRenderbuffer(GL_RENDERBUFFER, rbo[0])
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo[0])
glBindRenderbuffer(GL_RENDERBUFFER, rbo[1])
glRenderbufferStorage(GL_RENDERBUFFER, GL_R32UI, width, height)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[1])
glBindRenderbuffer(GL_RENDERBUFFER, 0)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
I read the indices back with glReadPixels like this:
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
glReadPixels(x, y, threshold, threshold, GL_RED_INTEGER, GL_UNSIGNED_INT, r_data)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
The code works perfectly, I have no problem with that.
But for debugging, I'd like to see the indices on the screen.
With the data obtained below, how could I display the drawn indices (unsigned int) on the screen?
active_fbo = glGetIntegerv(GL_FRAMEBUFFER_BINDING)
my_indices_fbo = my_fbo
my_rbo_depth = rbo[0]
my_rbo_color = rbo[1]
## how mix my_rbo_color and cur_fbo??? ##
glBindFramebuffer(gl.GL_FRAMEBUFFER, active_fbo)
glBlitFramebuffer transfers a rectangle of pixel values from one region of a read framebuffer to another region of a draw framebuffer:
glBindFramebuffer( GL_READ_FRAMEBUFFER, my_fbo );
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, active_fbo );
glBlitFramebuffer( 0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST );
Note, you have to be careful, because a GL_INVALID_OPERATION error will occur if the read buffer contains unsigned integer values and any draw buffer does not. Since the internal format of the framebuffer's color attachment is GL_R32UI, and the internal format of the drawing buffer is usually something like GL_RGBA8, this may not work, or it will not do what you expected.
But you can create a framebuffer with a texture attached to its color plane and use the texture as an input to a post pass, where you draw a quad over the whole canvas.
First you have to create the texture with the same size as the framebuffer:
ColorMap0 = glGenTextures(1);
glBindTexture(GL_TEXTURE_2D, ColorMap0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, 0);
You have to attach the texture to the frame buffer:
my_fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ColorMap0, 0);
When you have drawn the scene, you have to release the framebuffer:
glBindFramebuffer(GL_FRAMEBUFFER, 0)
Now you can use the texture as an input for a final pass. Simply bind the texture, enable 2D texturing, and draw a quad over the whole canvas. The quad should range from (-1,-1) to (1,1), with texture coordinates in the range (0,0) to (1,1). Of course you can use a shader, with a texture sampler uniform in the fragment shader, for that. You can read the texel from the texture and write it to the fragment in any way you want.
Extension to the answer
If performance is not important, then you can convert the buffer on the CPU and draw it on the canvas, after reading the framebuffer with glReadPixels. For that you can leave your code as it is and read the framebuffer with glReadPixels, but you have to convert the buffer to a format appropriate to the drawing buffer. I suggest using the internal format GL_RGBA8 or GL_RGB8. You have to create a new texture with the converted buffer data (a conversion sketch follows the snippet below).
debugTexturePlane = ...;
debugTexture = glGenTextures(1);
glBindTexture(GL_TEXTURE_2D, debugTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, debugTexturePlane);
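A CPU-side conversion might look roughly like this (an illustrative sketch; indexData stands for the buffer read back with glReadPixels, and the mapping from a 32-bit index to a color is arbitrary and entirely up to you):
// Spread each R32UI index across the RGB channels of an 8-bit debug image.
// indexData:         width*height GLuints from glReadPixels(GL_RED_INTEGER, GL_UNSIGNED_INT).
// debugTexturePlane: width*height*3 bytes matching the GL_RGB upload above.
for (int i = 0; i < width * height; ++i)
{
    GLuint index = indexData[i];
    debugTexturePlane[i*3 + 0] = (GLubyte)( index        & 0xFF);  // red:   low byte
    debugTexturePlane[i*3 + 1] = (GLubyte)((index >>  8) & 0xFF);  // green: middle byte
    debugTexturePlane[i*3 + 2] = (GLubyte)((index >> 16) & 0xFF);  // blue:  high byte
}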
From now on you have 2 possibilities.
Either you create a new frame buffer and attach the texture to its color plane
debugFbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, debugFbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, debugTexture, 0);
and use glBlitFramebuffer as described above to copy from the debug framebuffer to the color plane.
This should not be a problem, because the internal formats of the buffers should be equal.
Or you draw a textured quad over the whole viewport. The code may look like this (old school):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, debugTexture);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex2f(-1.0, -1.0);
glTexCoord2f(0.0, 1.0); glVertex2f(-1.0, 1.0);
glTexCoord2f(1.0, 1.0); glVertex2f( 1.0, 1.0);
glTexCoord2f(1.0, 0.0); glVertex2f( 1.0, -1.0);
glEnd();

I have encountered strange behaviour with an OpenGL shader on iOS

I am writing an OpenGL shader on iOS to apply image effects. I need to use a map to look up pixels from an image. Here is the code for my shader:
precision lowp float;
uniform sampler2D u_Texture;
uniform sampler2D u_Map; //map
varying highp vec2 v_TexCoordinate;
void main()
{
    // get the pixel
    vec3 texel = texture2D(u_Map, v_TexCoordinate).rgb;
    gl_FragColor = vec4(texel, 1.0);
}
Above is a test shader which should simply display the map that is being used.
Now, here is the behaviour of the above shader with two maps.
MAP_1 pixel size (256 x 1)
and here is the output using above shader:
MAP_2 pixel size (256 x 3)
and here is the output:
So, while using the 256 x 1 map it works as it should, but with the 256 x 3 map it shows a black image. I have tested this with other maps too, and found that the pixel height of the map is what matters.
Here is the code on how I am loading the map:
+ (GLuint)loadImage:(UIImage *)image {
    // Convert Image to Data
    GLubyte* imageData = malloc(image.size.width * image.size.height * 4);
    CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext = CGBitmapContextCreate(imageData, image.size.width, image.size.height, 8, image.size.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, image.size.width, image.size.height), image.CGImage);

    // Release Objects
    CGContextRelease(imageContext);
    CGColorSpaceRelease(genericRGBColorspace);

    // Load into texture
    GLuint textureHandle;
    glGenTextures(1, &textureHandle);
    glBindTexture(GL_TEXTURE_2D, textureHandle);
    [GLToolbox checkGLError:@"glTextureHandle"];

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image.size.width, image.size.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
    [GLToolbox checkGLError:@"glTexImage2D"];

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Free Image Data
    free(imageData);

    return textureHandle;
}
I am not sure why this is happening. Maybe I am doing something wrong when loading the 256 x 3 map. Please show me how to fix the above issue.
Thanks in advance.
In OpenGL ES 2, non-power-of-two textures are required to have no mipmaps (you're fine there) and to use GL_CLAMP_TO_EDGE (I think this is your problem). From here:
Similarly, if the width or height of a texture image are not powers of two and either the GL_TEXTURE_MIN_FILTER is set to one of the functions that requires mipmaps or the GL_TEXTURE_WRAP_S or GL_TEXTURE_WRAP_T is not set to GL_CLAMP_TO_EDGE, then the texture image unit will return (R, G, B, A) = (0, 0, 0, 1).
You don't set the wrap mode, but from the same document:
Initially, GL_TEXTURE_WRAP_S is set to GL_REPEAT.
To fix this, set the wrap mode to GL_CLAMP_TO_EDGE, or use a 256 x 4 texture instead of 256 x 3 (I'd lean toward the latter unless there's some obstacle to doing so; GPUs love powers of two!). A sketch of the first fix follows.
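The wrap-mode fix, dropped in next to the filter parameters in loadImage above (these two calls are standard OpenGL ES; the wrap mode defaults to GL_REPEAT otherwise):
// NPOT textures in OpenGL ES 2 must use clamp-to-edge wrapping;
// with the default GL_REPEAT the sampler returns (0, 0, 0, 1), i.e. black.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);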

Display a image from the Webcam using openCV and openGL

I'm capturing a picture from a webcam with OpenCV. Then the frame should be transformed into an OpenGL texture and shown on the screen. I have the following code, but the window remains black. I'm very new to OpenGL and have no more ideas about why it doesn't work.
int main()
{
    int w = 800, h = 600;
    glfwInit();

    //configure glfw
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

    GLFWwindow* window = glfwCreateWindow(w, h, "OpenGL", NULL, nullptr); // windowed
    glfwMakeContextCurrent(window);

    glewExperimental = GL_TRUE;
    glewInit();

    initializeCapturing();

    //init GL
    glViewport(0, 0, w, h);      // use a screen size of WIDTH x HEIGHT
    glEnable(GL_TEXTURE_2D);     // Enable 2D texturing
    glMatrixMode(GL_PROJECTION); // Make a simple 2D projection on the entire window
    glLoadIdentity();
    glOrtho(0.0, w, h, 0.0, 0.0, 100.0);
    glMatrixMode(GL_MODELVIEW);  // Set the matrix mode to object modeling
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClearDepth(0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the window

    cv::Mat frame;
    captureFromWebcam(frame, capture0);

    /* OpenGL texture binding of the image loaded by DevIL */
    GLuint texid;
    glGenTextures(1, &texid);            /* Texture name generation */
    glBindTexture(GL_TEXTURE_2D, texid); /* Binding of texture name */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); /* We will use linear interpolation for magnification filter */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* We will use linear interpolation for minifying filter */
    glTexImage2D(GL_TEXTURE_2D, 0, 3, frame.size().width, frame.size().height, 0, GL_RGB, GL_UNSIGNED_BYTE, 0); /* Texture specification */

    while (!glfwWindowShouldClose(window))
    {
        glfwPollEvents();
        if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
            glfwSetWindowShouldClose(window, GL_TRUE);

        // Clear color and depth buffers
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glBindTexture(GL_TEXTURE_2D, texid);
        glMatrixMode(GL_MODELVIEW); // Operate on model-view matrix

        /* Draw a quad */
        glBegin(GL_QUADS);
        glTexCoord2i(0, 0); glVertex2i(0, 0);
        glTexCoord2i(0, 1); glVertex2i(0, h);
        glTexCoord2i(1, 1); glVertex2i(w, h);
        glTexCoord2i(1, 0); glVertex2i(w, 0);
        glEnd();

        glFlush();
        glfwSwapBuffers(window);
    }

    releaseCapturing();
    glfwTerminate();
    return 1;
}
And the other procedures:
cv::VideoCapture capture0;
cv::VideoCapture capture1;

void captureFromWebcam(cv::Mat &frame, cv::VideoCapture &capture)
{
    capture.read(frame);
}

bool initializeCapturing()
{
    capture0.open(0);
    capture1.open(1);

    if (!capture0.isOpened() | !capture1.isOpened())
    {
        std::cout << "One or more VideoCaptures could not be opened" << std::endl;

        if (!capture0.isOpened())
            capture0.release();

        if (!capture1.isOpened())
            capture1.release();

        return false;
    }
    return true;
}

void releaseCapturing()
{
    capture0.release();
    capture1.release();
}
It looks like you mixed several code fragments you found in different places. You are requesting an OpenGL 3.2 core profile, but the drawing code you are using is immediate-mode, fixed-function-pipeline code, which is not available in a core profile. You basically requested a modern GL context, but the drawing code is completely outdated and no longer supported by the GL version you selected.
As a quick fix, you could simply remove the following lines:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
Doing so should provide you with some legacy GL context, or a compatibility profile of some modern context, depending on the operating system and GL implementation you use. However, you can expect to get at least OpenGL 2.1 that way (except on very old hardware that doesn't support GL2 at all, but even that would be OK for your code). The rest of the code should work in such a context.
I still suggest that you learn modern GL instead of the old, deprecated legacy stuff you are using here.
You did not set the pointer to the frame's data:
glTexImage2D(GL_TEXTURE_2D, 0, 3, frame.size().width, frame.size().height, 0, GL_RGB, GL_UNSIGNED_BYTE, (void*)frame.data); /* Texture specification */
Also, with this code you will only ever get the first frame.
Put captureFromWebcam and the texture upload inside the while loop, as sketched below.
(Sorry for my English)
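Putting both fixes together, the loop body might look roughly like this (a sketch, not tested against the asker's setup; it assumes the texture was created once with glTexImage2D before the loop, and uses GL_BGR since OpenCV frames are BGR-ordered):
while (!glfwWindowShouldClose(window))
{
    glfwPollEvents();

    // Grab a fresh frame every iteration instead of once before the loop.
    captureFromWebcam(frame, capture0);

    // Re-upload the new pixels into the existing texture object.
    glBindTexture(GL_TEXTURE_2D, texid);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    frame.cols, frame.rows,
                    GL_BGR, GL_UNSIGNED_BYTE, frame.data);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the textured quad as before ...
    glfwSwapBuffers(window);
}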

Achieving a persistence effect in GLKit view

I have a GLKit view set up to draw a solid shape, a line and an array of points which all change every frame. The basics of my drawInRect method are:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClear(...);
    glBufferData(...);
    glEnableVertexAttribArray(...);
    glVertexAttribPointer(...);

    // draw solid shape
    glDrawArrays(GL_TRIANGLE_STRIP, ...);

    // draw line
    glDrawArrays(GL_LINE_STRIP, ...);

    // draw points
    glDrawArrays(GL_POINTS, ...);
}
This works fine; each array contains around 2000 points, but my iPad seems to have no problem rendering it all at 60fps.
The issue now is that I would like the lines to fade away slowly over time, instead of disappearing with the next frame, making a persistence or phosphor-like effect. The solid shape and the points must not linger, only the line.
I've tried the brute-force method (as used in Apple's example project aurioTouch): storing the data from the last 100 frames and drawing all 100 lines every frame, but this is too slow. My iPad can't render more than about 10fps with this method.
So my question is: can I achieve this more efficiently using some kind of frame or render buffer which accumulates the color of previous frames? Since I'm using GLKit, I haven't had to deal directly with these things before, and so don't know much about them. I've read about accumulation buffers, which seem to do what I want, but I've heard that they are very slow and anyway I can't tell whether they even exist in OpenGL ES 3, let alone how to use them.
I'm imagining something like the following (after setting up some kind of storage buffer):
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClear(...);
    glBufferData(...);
    glEnableVertexAttribArray(...);
    glVertexAttribPointer(...);

    // draw solid shape
    glDrawArrays(GL_TRIANGLE_STRIP, ...);

    // draw contents of storage buffer

    // draw line
    glDrawArrays(GL_LINE_STRIP, ...);

    // multiply the alpha value of each pixel in the storage buffer by 0.9 to fade

    // draw line again, this time into the storage buffer

    // draw points
    glDrawArrays(GL_POINTS, ...);
}
Is this possible? What are the commands I need to use (in particular, to combine the contents of the storage buffer and change its alpha)? And is this likely to actually be more efficient than the brute-force method?
I ended up achieving the desired result by rendering to a texture, as described for example here. The basic idea is to setup a custom framebuffer and attach a texture to it – I then render the line that I want to persist into this framebuffer (without clearing it) and render the whole framebuffer as a texture into the default framebuffer (which is cleared every frame). Instead of clearing the custom framebuffer, I render a slightly opaque quad over the whole screen to make the previous contents fade out a little every frame.
The relevant code is below; setting up the framebuffer and persistence texture is done in the init method:
// vertex data for fullscreen textured quad (x, y, texX, texY)
GLfloat persistVertexData[16] = {-1.0, -1.0, 0.0, 0.0,
-1.0, 1.0, 0.0, 1.0,
1.0, -1.0, 1.0, 0.0,
1.0, 1.0, 1.0, 1.0};
// setup texture vertex buffer
glGenBuffers(1, &persistVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(persistVertexData), persistVertexData, GL_STATIC_DRAW);
// create texture for persistence data and bind
glGenTextures(1, &persistTexture);
glBindTexture(GL_TEXTURE_2D, persistTexture);
// provide an empty image
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2048, 1536, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// set texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// create frame buffer for persistence data
glGenFramebuffers(1, &persistFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, persistFrameBuffer);
// attach texture to the color attachment
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, persistTexture, 0);
// check for errors
NSAssert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE, @"Error: persistence framebuffer incomplete!");
// initialize default frame buffer pointer
defaultFrameBuffer = -1;
and in the glkView:drawInRect: method:
// get default frame buffer id
if (defaultFrameBuffer == -1)
    glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFrameBuffer);
// clear screen
glClear(GL_COLOR_BUFFER_BIT);
// DRAW PERSISTENCE
// bind persistence framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, persistFrameBuffer);
// render full screen quad to fade
glEnableVertexAttribArray(...);
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glVertexAttribPointer(...);
glUniform4f(colorU, 0.0, 0.0, 0.0, 0.01);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// add most recent line
glBindBuffer(GL_ARRAY_BUFFER, dataVertexBuffer);
glVertexAttribPointer(...);
glUniform4f(colorU, color[0], color[1], color[2], 0.8*color[3]);
glDrawArrays(...);
// return to normal framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);
// switch to texture shader
glUseProgram(textureProgram);
// bind texture
glBindTexture(GL_TEXTURE_2D, persistTexture);
glUniform1i(textureTextureU, 0);
// set texture vertex attributes
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glEnableVertexAttribArray(texturePositionA);
glEnableVertexAttribArray(textureTexCoordA);
glVertexAttribPointer(self.shaderBridge.texturePositionA, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), 0);
glVertexAttribPointer(self.shaderBridge.textureTexCoordA, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), (GLvoid*)(2*sizeof(GLfloat)));
// draw fullscreen quad with texture
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// DRAW NORMAL FRAME
glUseProgram(normalProgram);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// draw points
glDrawArrays(GL_POINTS, ...);
The texture shaders are very simple: the vertex shader just passes the texture coordinate to the fragment shader:
attribute vec4 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;

void main(void)
{
    gl_Position = aPosition;
    vTexCoord = aTexCoord;
}
and the fragment shader reads the fragment color from the texture:
uniform highp sampler2D uTexture;
varying vec2 vTexCoord;

void main(void)
{
    gl_FragColor = texture2D(uTexture, vTexCoord);
}
Although this works, it doesn't seem very efficient, driving renderer utilization close to 100%. It only seems better than the brute-force approach when the number of lines drawn each frame exceeds 100 or so. If anyone has any suggestions on how to improve this code, I would be very grateful!

iPad texture loading differences (32-bit vs. 64-bit)

I am working on a drawing application and I am noticing significant differences in textures loaded on a 32-bit iPad vs. a 64-bit iPad.
Here is the texture drawn on a 32-bit iPad:
Here is the texture drawn on a 64-bit iPad:
The 64-bit result is what I want, but it seems like it may be losing some data?
I create a default brush texture with this code:
UIGraphicsBeginImageContext(CGSizeMake(64, 64));
CGContextRef defBrushTextureContext = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(defBrushTextureContext);
size_t num_locations = 3;
CGFloat locations[3] = { 0.0, 0.8, 1.0 };
CGFloat components[12] = { 1.0,1.0,1.0, 1.0,
1.0,1.0,1.0, 1.0,
1.0,1.0,1.0, 0.0 };
CGColorSpaceRef myColorspace = CGColorSpaceCreateDeviceRGB();
CGGradientRef myGradient = CGGradientCreateWithColorComponents (myColorspace, components, locations, num_locations);
CGPoint myCentrePoint = CGPointMake(32, 32);
float myRadius = 20;
CGGradientDrawingOptions options = kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation;
CGContextDrawRadialGradient (UIGraphicsGetCurrentContext(), myGradient, myCentrePoint,
0, myCentrePoint, myRadius,
options);
CFRelease(myGradient);
CFRelease(myColorspace);
UIGraphicsPopContext();
[self setBrushTexture:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
And then actually set the brush texture like this:
-(void) setBrushTexture:(UIImage*)brushImage {
    // save our current texture.
    currentTexture = brushImage;

    // first, delete the old texture if needed
    if (brushTexture){
        glDeleteTextures(1, &brushTexture);
        brushTexture = 0;
    }

    // fetch the cgimage for us to draw into a texture
    CGImageRef brushCGImage = brushImage.CGImage;

    // Make sure the image exists
    if(brushCGImage) {
        // Get the width and height of the image
        GLint width = CGImageGetWidth(brushCGImage);
        GLint height = CGImageGetHeight(brushCGImage);

        // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
        // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.

        // Allocate memory needed for the bitmap context
        GLubyte* brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));

        // Use the bitmap creation function provided by the Core Graphics framework.
        CGContextRef brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushCGImage), kCGImageAlphaPremultipliedLast);

        // After you create the context, you can draw the image to the context.
        CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushCGImage);

        // You don't need the context at this point, so you need to release it to avoid memory leaks.
        CGContextRelease(brushContext);

        // Use OpenGL ES to generate a name for the texture.
        glGenTextures(1, &brushTexture);

        // Bind the texture name.
        glBindTexture(GL_TEXTURE_2D, brushTexture);

        // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        // Specify a 2D texture image, providing a pointer to the image data in memory
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);

        // Release the image data; it's no longer needed
        free(brushData);
    }
}
Update:
I've updated the CGFloats to GLfloats with no success. Maybe there is an issue with this rendering code?
if (frameBuffer) {
    // draw the stroke element
    [self prepOpenGLStateForFBO:frameBuffer];
    [self prepOpenGLBlendModeForColor:element.color];
    CheckGLError();
}

// find our screen scale so that we can convert from
// points to pixels
GLfloat scale = self.contentScaleFactor;

// fetch the vertex data from the element
struct Vertex* vertexBuffer = [element generatedVertexArrayWithPreviousElement:previousElement forScale:scale];

glLineWidth(2);

// if the element has any data, then draw it
if (vertexBuffer) {
    glVertexPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Position[0]);
    glColorPointer(4, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Color[0]);
    glTexCoordPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Texture[0]);
    glDrawArrays(GL_TRIANGLES, 0, (GLint)[element numberOfSteps] * (GLint)[element numberOfVerticesPerStep]);
    CheckGLError();
}

if (frameBuffer) {
    [self unprepOpenGLState];
}
The vertex struct is the following:
struct Vertex {
    GLfloat Position[2]; // x,y position
    GLfloat Color[4];    // rgba color
    GLfloat Texture[2];  // x,y texture coord
};
Update:
The issue does not actually appear to be 32-bit vs. 64-bit, but rather something different about the A7 GPU and GL drivers. I found this out by running both a 32-bit build and a 64-bit build on the 64-bit iPad: the textures ended up looking exactly the same in both builds of the app.
I would like you to check two things.
Check your alpha blending logic (or options) in OpenGL.
Check your interpolation logic, which should be proportional to the velocity of dragging.
It seems you are missing the second one, or it is not effective, and it is required for a drawing app.
I don't think the problem is in the texture, but in the frame buffer to which you composite the line elements.
Your code fragments look like you draw segment by segment, so there are several overlapping segments drawn on top of each other. If the depth of the frame buffer is low, there will be artifacts, especially in the lighter regions of the blended areas.
You can check the frame buffer using Xcode's OpenGL debugger. Activate it by running your code on the device and clicking the little "Capture OpenGL ES Frame" button.
Select a "glBindFramebuffer" command in the "Debug Navigator" and look at the frame buffer description in the console area:
The interesting part is the GL_FRAMEBUFFER_INTERNAL_FORMAT.
In my opinion, the problem is in the blending mode you use when composing the different image passes. I assume that you upload the texture for display only, and keep an in-memory image where you composite the different drawing operations, or that you read back the image content using glReadPixels?
Basically, your second image appears like a straight-alpha image drawn as a pre-multiplied-alpha image.
To be sure that it isn't a texture problem, save the UIImage to a file before uploading it to the texture, and check that the image is actually correct. A blend-function sketch follows.
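For reference, these are the standard blend functions for the two alpha conventions; which one the compositing pass needs depends on how the brush image was produced (CGBitmapContextCreate with kCGImageAlphaPremultipliedLast yields premultiplied data):
// Straight (non-premultiplied) alpha: let blending scale source RGB by alpha.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// Premultiplied alpha: source RGB is already scaled by alpha, so use GL_ONE.
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);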
