I'm trying to do video processing using GLSL. I'm using OpenCV to open a video file and take each frame as a single image, and then I want to use each frame in a GLSL shader.
What is the best/ideal/smart solution to using video with GLSL?
Reading From Video
VideoCapture cap("movie.MOV");
Mat image;
bool success = cap.read(image);
if(!success)
{
printf("Could not grab a frame\n");
exit(0);
}
Image to Texture
GLuint tex;
glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, image.cols, image.rows, 0,
GL_BGR, GL_UNSIGNED_BYTE, image.data);
glUniform1i(glGetUniformLocation(shaderProgram, "Texture"), 0);
What needs to be in my render while loop?
Do I need to recompile/reattach/relink my shader every time? Or once my shader is created and compiled and I use glUseProgram(shaderProgram), can I keep sending it new textures?
The current loop I've been using to render a texture to the screen is as follows. How could I adapt this to work with video? Where would I need to make my calls to update the texture being used in the shader?
while(!glfwWindowShouldClose(window))
{
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glViewport(0,0,512,512);
glBindFramebuffer(GL_READ_FRAMEBUFFER, frameBuffer);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, image.cols, image.rows, 0, 0, image.cols, image.rows, GL_COLOR_BUFFER_BIT, GL_LINEAR);
glfwSwapBuffers(window);
glfwPollEvents();
}
Let's clarify a few things that need to happen before the loop:
Set the pixel storage mode with glPixelStorei();
Generate only one texture with glGenTextures(), because at every iteration of the loop its content will be replaced with new data;
Create a shader object with glCreateShader(), compile it with glCompileShader(), create a program with glCreateProgram(), attach the shader object to the program with glAttachShader(), and finally call glLinkProgram() to make everything ready to go.
That said, every iteration of the loop must:
Clear the color and depth buffer;
Load the modelview matrix with the identity matrix;
Specify the location where the drawing is going to happen with glTranslatef();
Retrieve a new frame from the video;
Enable the appropriate texture target, bind it and then transfer the frame to the GPU with glTexImage2D();
Invoke glUseProgram() to activate your GLSL shader;
Draw a 2D face using GL_QUADS or whatever;
Disable the program with glUseProgram(0);
Disable the texture target with glDisable(GL_TEXTURE_YOUR_TEXTURE_TARGET);
This is more or less what needs to be done.
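One pixel-format detail worth flagging for the frame-upload step above: OpenCV's Mat stores pixels in BGR order. Desktop OpenGL accepts GL_BGR as the upload format, but OpenGL ES does not, so there you would swizzle on the CPU (or in the shader) before calling glTexImage2D(). A minimal CPU-side sketch, using a raw byte buffer in place of Mat::data (the helper name is my own, not from the post):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Swap the B and R channels of a tightly packed 3-byte-per-pixel
// buffer in place, turning BGR data into RGB (or vice versa).
void bgr_to_rgb(std::vector<unsigned char>& pixels)
{
    for (std::size_t i = 0; i + 2 < pixels.size(); i += 3)
        std::swap(pixels[i], pixels[i + 2]);
}
```

With the swizzled buffer you can upload using GL_RGB as both the internal format and the format, which works on desktop GL and ES alike.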
By the way: here's my OpenCV/OpenGL/Qt application that retrieves frames from the camera and displays them in a window. No shaders, though.
Good luck!
You don't need a framebuffer to send textures to the shader. Once texture unit 0 is the active texture unit and 0 is set as the value of the uniform sampler2D in your shader, every call to glBindTexture() makes that sampler2D read from whichever texture you specify in the function parameter. So no, you don't need to relink or recompile your shader each time you want to change textures.
Related
I am a noob at WebGL, and I am trying to understand how WebGL textures work by reading this tutorial: WebGL Image Processing Continued.
There is another example in the same tutorial series which uses two textures by setting the uniforms for both input texture units, 0 and 1, explicitly:
// set which texture units to render with.
gl.uniform1i(u_image0Location, 0); // texture unit 0
gl.uniform1i(u_image1Location, 1); // texture unit 1
But in the first example, the fragment shader uses the sampler2D u_image for the input texture, so I would expect the code to contain something like:
gl.uniform1i(gl.getUniformLocation(program, "u_image"), 0);
...but I can't find it. How does this work? Just guessing: is texture unit 0 used as the default for all 2D samplers in all WebGL programs? If so, why is gl.uniform1i(u_image0Location, 0); needed in the second example?
EDIT:
So far, this is what I have understood from the tutorial mentioned above - just correct me if I am wrong:
There are at least two ways to use textures:
Input textures, which I can read from - here I need to pass the location (i.e. "u_image") to the fragment shader
Output textures, which I can write to - this is the texture currently bound (in the tutorial mentioned above, to texture unit 0)
I am not able to fully understand how this example works, because the u_image uniform is never set and, moreover, there isn't any gl.activeTexture() call in the code.
EDIT 2:
Thanks to Rabbid76, I believe I found further clarification in a comment by gman (the author of the tutorial mentioned above) on this answer to that question:
You need the framebuffers. By attaching a texture to a framebuffer and
then binding that framebuffer you're making the shader write to the
texture.
Input textures, which I can read from - here I need to pass the location (i.e. "u_image") to the fragment shader
Yes, textures can be used this way.
If you want to use different textures in a fragment shader, then you have to bind the textures to different texture units. And you have to set the index of the texture unit to the texture sampler uniform variable.
The texture unit a texture is bound to can be set with WebGLRenderingContext.activeTexture():
var textureObj = gl.createTexture();
.....
var texUnit = 0; // texture unit 0 in this example
gl.activeTexture(gl.TEXTURE0 + texUnit);
gl.bindTexture(gl.TEXTURE_2D, textureObj);
After the program is linked (linkProgram), the location of the texture sampler uniform can be retrieved:
u_image0Location = gl.getUniformLocation(program, "u_image");
After the program has become the active program (useProgram), the texture sampler uniform can be set:
gl.uniform1i(u_image0Location, texUnit);
Output textures, which I can write to - this is the texture currently bound (in the tutorial mentioned above, to texture unit 0)
No, that is not correct. There is no such thing as an output texture, and the texture bound to texture unit 0 is certainly not an output texture.
In the fragment shader you write to gl_FragColor. The color you write to gl_FragColor is stored in the framebuffer at the fragment's position.
If you want to write to a texture, then you have to attach a texture to the framebuffer with WebGLRenderingContext.framebufferTexture2D():
var vp_w, vp_h; // width and height of the viewport
var fbTexObj = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, fbTexObj );
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, vp_w, vp_h, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
.....
var fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, fbTexObj, 0);
Note: any texture unit can be active when you bind (gl.bindTexture) and specify (gl.texImage2D) fbTexObj.
I am trying to draw a texture in OpenGL ES 2.0 using GL_POINTS by applying a stencil buffer. The stencil buffer should come from a texture. I am rendering the results to another texture and then presenting the texture to screen. Here is my code for rendering to texture:
//Initialize buffers, initialize texture, bind frameBuffer
.....
glClearStencil(0);
glClear (GL_STENCIL_BUFFER_BIT);
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glBindTexture(GL_TEXTURE_2D, stencil);
glUseProgram(program[PROGRAM_POINT].id);
glDrawArrays(GL_POINTS, 0, (int)vertexCount);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_NEVER, 0, 1);
glStencilOp(GL_REPLACE, GL_KEEP, GL_KEEP);
glBindTexture(GL_TEXTURE_2D, texture);
glUseProgram(program[PROGRAM_POINT].id);
glDrawArrays(GL_POINTS, 0, (int)vertexCount);
glDisable(GL_STENCIL_TEST);
....
//Render texture to screen
The result I am getting is just my texture being drawn without any masking applied from the stencil. I had a few questions regarding this issue:
Is it possible to use a stencil buffer with GL_POINTS?
Is it possible to use a stencil buffer when rendering to a texture?
Does the stencil texture have to have any special properties (solid colour, internal format...etc)?
Are there any apparent mistakes with my code?
This is the result I am looking for:
UPDATE:
My problem, as pointed out by the selected answer, was primarily that I did not attach the stencil to the stencil attachment of the FBO:
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT,
GL_RENDERBUFFER, stencilBufferId);
I did not know that this was required when rendering to a texture. Secondly, I was not using the proper stencil test.
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
Did the job.
Addressing your questions in order:
Is it possible to use a stencil buffer with GL_POINTS?
Yes. The stencil test is applied to all fragments, regardless of the primitive type rendered. The only case where you write to the framebuffer without the stencil test being applied is glClear().
Is it possible to use a stencil buffer when rendering to a texture?
Yes. However, when you render to a texture using an FBO, the stencil buffer of your default framebuffer will not be used. You have to create a stencil renderbuffer, and attach it to the stencil attachment of the FBO:
GLuint stencilBufferId = 0;
glGenRenderbuffers(1, &stencilBufferId);
glBindRenderbuffer(GL_RENDERBUFFER, stencilBufferId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT,
GL_RENDERBUFFER, stencilBufferId);
Does the stencil texture have to have any special properties (solid colour, internal format...etc)?
OpenGL ES 2.0 does not have stencil textures. You have to use a renderbuffer as the stencil attachment, as shown in the code fragment above. GL_STENCIL_INDEX8 is the only format supported for renderbuffers that can be used as stencil attachment. ES 3.0 supports depth/stencil textures.
Are there any apparent mistakes with my code?
Maybe. One thing that looks slightly odd is that you never really apply a stencil test in the code that is shown. While you do enable the stencil test, you only use GL_ALWAYS and GL_NEVER for the stencil function. As the names suggest, these functions either always or never pass the stencil test. So you don't let fragments pass/fail depending on the stencil value. I would have expected something like this before the second draw call:
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
This would only render the fragments where the current stencil buffer value is 1, which corresponds to the fragments you rendered with the previous draw call.
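To make that two-pass logic concrete, here is a tiny CPU-side model of what the stencil hardware does (plain C++, no GL; this is an illustration of the logic, not GL code): the mask pass uses GL_ALWAYS with GL_REPLACE to write 1s, and the image pass uses GL_EQUAL so only fragments covered by the mask survive.

```cpp
#include <array>

// Software model of the two-pass stencil mask described above,
// over a four-"pixel" buffer.
struct StencilDemo {
    std::array<unsigned char, 4> stencil{}; // stencil buffer, cleared to 0
    std::array<unsigned char, 4> color{};   // color buffer, cleared to 0

    // Pass 1: mask fragments always pass and write 1 into the stencil buffer.
    void drawMask(const std::array<bool, 4>& maskCoverage) {
        for (int i = 0; i < 4; ++i)
            if (maskCoverage[i])   // glStencilFunc(GL_ALWAYS, 1, 1)
                stencil[i] = 1;    // pass op GL_REPLACE writes the ref value
    }

    // Pass 2: image fragments pass only where the stencil value equals 1.
    void drawImage(unsigned char texColor) {
        for (int i = 0; i < 4; ++i)
            if (stencil[i] == 1)   // glStencilFunc(GL_EQUAL, 1, 1)
                color[i] = texColor; // ops are GL_KEEP: stencil unchanged
    }
};
```

Only the pixels the mask touched in pass 1 end up colored by pass 2, which is exactly the masking behavior the corrected glStencilFunc/glStencilOp calls produce.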
I want to render a YUV image on an iOS device. I presume this can be achieved using OpenGL. (Actually, I have to render multiple such images in succession.)
What I understand is that GLKit is an abstraction that iOS provides, in which a GLKView owns and handles the render buffer. I am currently using a GLKViewController, and frame updates are happening successfully at the desired fps, which I confirmed with a glClear call.
Now the task is to render an image on the view.
There is a class GLKBaseEffect which will have basic shaders. I can't figure out what properties to set, so I just create it and call prepareToDraw before each render.
There is a class for handling textures, GLKTextureLoader, but it appears to me that it only works with Quartz images; i.e., planar yuv420 can't be loaded into a texture using this class.
// create the texture
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, [imageNSData bytes]);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
I use this code for generating a texture and binding it, but I don't really know what I am trying to do here. And whatever it is, it doesn't bring up any image on screen, and I don't know what to do next.
I have not created any shaders, assuming baseEffect will have something.
And this https://developer.apple.com/library/ios/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithEAGLContexts/WorkingwithEAGLContexts.html#//apple_ref/doc/uid/TP40008793-CH103-SW8 tells me that I'll have to use CAEAGLLayer to render images on screen.
Can I use GLKit to render images? If YES, is there any sample code or tutorial that doesn't use the GLKTextureLoader class (I couldn't find any)? If NO, is there a similar tutorial for rendering using CAEAGLLayer (I have not explored it till now)?
It sounds like you're really asking about a few different topics here:
How to draw with GLKView vs CAEAGLLayer
How GLKBaseEffect and GLKTextureLoader fit into OpenGL ES drawing in general
How to draw a texture once you have one
How to render YUV image data
I'll try to address each in turn...
GLKView is just fine for any OpenGL ES drawing you want to do -- it does everything that the older documentation you linked to does (setting up framebuffers, CAEAGLLayers, etc) for you so you don't have to write that code. Inside the GLKView drawing method (drawRect: or glkView:drawInRect:, depending on whether you're drawing in a subclass or delegate), you write the same OpenGL ES drawing code you would for CAEAGLLayer (or any other system).
You can use GLKView, GLKTextureLoader, and GLKBaseEffect independently of each other. If you want to write all your own drawing code and use your own shaders, you can draw in a GLKView without using GLKBaseEffect. (You can even mix and match GLKBaseEffect and your own stuff, like you see when you create a new Xcode project with the OpenGL Game template.) Likewise, GLKTextureLoader loads image data and spits out the name you'll need for binding it for drawing, and you can use that regardless of whether you're drawing it with GLKBaseEffect.
Once you get a texture, whether via GLKTextureLoader or reading/decoding the data yourself and providing it to glTexImage2D, there are three basic steps to drawing with it:
Bind the texture name with glBindTexture.
Draw some geometry to be textured (using glDrawArrays, glDrawElements, or similar)
Have a fragment shader that looks up texels and outputs colors.
If you just want to draw an image that fills your view, just draw a quad (two triangles). Here's the code I use to set up a vertex array object with one quad when I want to draw fullscreen:
typedef struct __attribute__((packed)) {
GLKVector3 position;
GLKVector2 texcoord;
} Vertex;
static Vertex vertexData[] = {
{{-1.0f, 1.0f, -1.0f}, {0.0f, 0.0f}},
{{-1.0f, -1.0f, -1.0f}, {0.0f, 1.0f}},
{{ 1.0f, 1.0f, -1.0f}, {1.0f, 0.0f}},
{{ 1.0f, -1.0f, -1.0f}, {1.0f, 1.0f}},
};
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid *)offsetof(Vertex, position));
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid *)offsetof(Vertex, texcoord));
glBindVertexArrayOES(0);
Then, to draw it:
glBindVertexArrayOES(_vertexArray);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
That's just the geometry-drawing part. Combine this with a GLKBaseEffect -- whose transform property is the default identity transform, and whose texture2d0 property is set up with the name of a texture you've loaded via GLKTextureLoader or other means -- and you'll get a view-filling billboard with your texture on it.
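A quick way to sanity-check the stride and offset arguments passed to glVertexAttribPointer above is to compute them from the struct itself. A minimal sketch using plain float arrays in place of GLKVector3/GLKVector2 (same packed layout, 3 + 2 floats; the constant names are mine, for illustration):

```cpp
#include <cstddef>

// Same layout as the GLKit-based struct above: 3 position floats
// followed by 2 texture-coordinate floats, tightly packed.
typedef struct {
    float position[3];
    float texcoord[2];
} Vertex;

// The values you would pass to glVertexAttribPointer as
// stride and pointer-offset arguments.
constexpr std::size_t kStride         = sizeof(Vertex);
constexpr std::size_t kPositionOffset = offsetof(Vertex, position);
constexpr std::size_t kTexcoordOffset = offsetof(Vertex, texcoord);
```

Using sizeof and offsetof this way (as the original code does) means the attribute pointers stay correct even if the struct gains or loses fields.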
Finally, the YUV part... for which I'll mostly punt. Where are you getting your YUV texture data? If it's from the device camera, you should look into CVOpenGLESTexture/CVOpenGLESTextureCache, as covered by this answer. Regardless, you should be able to handle YUV textures using the APPLE_rgb_422 extension, as covered by this answer. You can also look into this answer for some help writing fragment shaders to process YUV to RGB on the fly.
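As a concrete reference for what those YUV-to-RGB fragment shaders compute per texel, here is the BT.601 video-range conversion in plain C++ (a sketch under assumptions: coefficients assume Y in [16, 235] with U and V centered at 128; your source may instead use full-range or BT.709 coefficients):

```cpp
#include <algorithm>
#include <cmath>

// BT.601 video-range YUV -> RGB: the same per-texel math a
// YUV-to-RGB fragment shader would apply.
struct RGB { unsigned char r, g, b; };

static unsigned char clamp255(float v) {
    return static_cast<unsigned char>(
        std::min(255.0f, std::max(0.0f, std::round(v))));
}

RGB yuvToRgb(unsigned char y, unsigned char u, unsigned char v) {
    float c = y - 16.0f;   // luma, shifted to start at 0
    float d = u - 128.0f;  // blue-difference chroma, centered
    float e = v - 128.0f;  // red-difference chroma, centered
    return {
        clamp255(1.164f * c + 1.596f * e),
        clamp255(1.164f * c - 0.392f * d - 0.813f * e),
        clamp255(1.164f * c + 2.017f * d),
    };
}
```

In the shader version, the same constants appear in a mat3 (or three dot products), with the clamp handled implicitly by the framebuffer format.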
I render a scene (to the default renderbuffer)
I want to grab a rectangle from this scene and create a texture out of it
I would like to do it without glReadPixels()ing down to the CPU and then uploading the data back up to the GPU
Is this possible using OpenGL ES 2.0?
P.S. - I want to use a POT area of the screen, not some strange shape
Pseudocode of my already-working GPU->CPU->GPU implementation:
// Render stuff here
byte *magData = glReadPixels();
// Bind the already-generated texture object
BindTexture(GL_TEXTURE0, GL_TEXTURE_2D, alias);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, MAGWIDTH, MAGHEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, magData);
You can use glCopyTexImage2D to copy from the back buffer:
glBindTexture(GL_TEXTURE_2D, textureID);
glCopyTexImage2D(GL_TEXTURE_2D, level, internalFormat, x, y, width, height, border);
OpenGL ES 2.0 always copies from the back buffer (or front buffer for single-buffered configurations). Using OpenGL ES 3.0, you can specify the source for the copy with:
glReadBuffer(GL_BACK);
In light of ClayMontgomery's answer (glCopyTexImage2D is slow), you might find that using glCopyTexSubImage2D with a correctly sized and formatted texture is faster, because it writes into the pre-allocated texture instead of allocating a new buffer each time. If this is still too slow, you should try doing as he suggests and render to a framebuffer (although you'll also need to draw a quad to the screen using the framebuffer's texture to get the same results).
You will find that glCopyTexImage2D() is really slow. The fast way to do what you want is to render directly to the texture as an attachment to an FBO. This can be done with either OpenGL ES 2.0 or 1.1 (with extensions). This article explains in detail:
http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES
I know that Apple offers a sample called GLCameraRipple which uses CVOpenGLESTextureCacheCreateTextureFromImage to achieve this. But when I changed it to glTexImage2D, it displays nothing. What's wrong with my code?
if (format == kCVPixelFormatType_32BGRA) {
CVPixelBufferRef pixelBuf = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuf, 0);
void *baseaddress = CVPixelBufferGetBaseAddress(pixelBuf);
glGenTextures(1, &textureID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, baseaddress);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
CVPixelBufferUnlockBaseAddress(pixelBuf, 0);
}
Thank you very much for any help!
There are a couple of problems here. First, the GLCameraRipple example was built to take in YUV camera data, not BGRA. Your above code is only uploading one texture of BGRA data, rather than the separate Y and UV planes expected by the application. It uses a colorspace conversion shader to merge these planes as a first stage, and that needs the YUV data to work.
Second, you are allocating a new texture for each uploaded frame, which is a really bad idea. This is particularly bad if you don't delete that texture when done, because you will chew up resources this way. You should allocate a texture once for each plane you'll upload, then keep that texture around as you upload each video frame, deleting it only when you're done processing video.
You'll either need to rework the above to upload the separate Y and UV planes, or remove / rewrite their color processing shader. If you go the BGRA route, you'll also need to be sure that the camera is now giving you BGRA frames instead of YUV ones.
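If you go the separate-plane route, the texture sizes follow from 4:2:0 chroma subsampling: a full-resolution Y plane and a half-resolution interleaved UV plane. A sketch of the plane dimensions (this assumes bi-planar NV12-style frames, which is what the iOS camera typically delivers; the helper is illustrative, not from the sample):

```cpp
#include <cstddef>

// Texture dimensions for the two planes of a 4:2:0 bi-planar frame:
// a full-size luminance (Y) plane and a half-size chrominance (UV)
// plane holding interleaved U and V samples.
struct PlaneSizes {
    std::size_t yWidth, yHeight;   // e.g. a GL_LUMINANCE texture
    std::size_t uvWidth, uvHeight; // e.g. a GL_LUMINANCE_ALPHA texture
    std::size_t totalBytes;        // full frame size in memory
};

PlaneSizes yuv420PlaneSizes(std::size_t width, std::size_t height) {
    PlaneSizes p;
    p.yWidth = width;        p.yHeight = height;
    p.uvWidth = width / 2;   p.uvHeight = height / 2;
    p.totalBytes = width * height                 // Y: 1 byte per pixel
                 + 2 * p.uvWidth * p.uvHeight;    // UV: 2 bytes per sample pair
    return p;
}
```

This is why a 4:2:0 frame is 1.5 bytes per pixel overall, and why each plane needs its own glTexImage2D upload with its own dimensions.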