About converting YUV (YV12) to RGB with GLSL for iOS

I'm trying to convert YUV (YV12) to RGB with a GLSL shader, following these steps:
read the raw YUV (YV12) data from an image file
split Y, Cb and Cr out of the raw YUV (YV12) data
map each plane to a texture
pass the textures to the fragment shader.
But the resulting image does not match the raw data.
Below is the raw image.
screenshot of raw image link (available for download)
And below is the converted image.
screenshot of converted image link (available for download)
And below is my source code.
- (void) readYUVFile
{
    ...
    NSData* fileData = [NSData dataWithContentsOfFile:file];
    NSInteger width = 720;
    NSInteger height = 480;
    NSInteger uv_width = width / 2;
    NSInteger uv_height = height / 2;
    NSInteger dataSize = [fileData length];
    GLint nYsize = width * height;
    GLint nUVsize = uv_width * uv_height;
    GLint nCbOffSet = nYsize;
    GLint nCrOffSet = nCbOffSet + nUVsize;
    Byte* uData = spriteData + nCbOffSet;
    Byte* vData = uData + nUVsize;
    GLfloat imageY[ 345600 ], imageU[ 86400 ], imageV[ 86400 ];
    int x, y, nIndexY = 0, nIndexUV = 0;
    for( y = 0; y < height; y++ )
    {
        for( x = 0; x < width; x++ )
        {
            imageY[ nIndexY ] = (GLfloat)spriteData[ nIndexY ] - 16.0;
            if( (y < uv_height) && (x < uv_width) )
            {
                imageU[ nIndexUV ] = (GLfloat)uData[ nIndexUV ] - 128.0;
                imageV[ nIndexUV ] = (GLfloat)vData[ nIndexUV ] - 128.0;
                nIndexUV++;
            }
            nIndexY++;
        }
    }
    m_YpixelTexture = [self textureY:imageY widthType:width heightType:height];
    m_UpixelTexture = [self textureU:imageU widthType:uv_width heightType:uv_height];
    m_VpixelTexture = [self textureV:imageV widthType:uv_width heightType:uv_height];
    ...
}
- (GLuint) textureY: (GLfloat*)imageData
widthType: (int) width
heightType: (int) height
{
GLuint texName;
glActiveTexture( GL_TEXTURE0 );
glGenTextures( 1, &texName );
glBindTexture( GL_TEXTURE_2D, texName );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData );
return texName;
}
- (GLuint) textureU: (GLfloat*)imageData
widthType: (int) width
heightType: (int) height
{
GLuint texName;
glActiveTexture( GL_TEXTURE1 );
glGenTextures( 1, &texName );
glBindTexture( GL_TEXTURE_2D, texName );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData );
return texName;
}
- (GLuint) textureV: (GLfloat*)imageData
widthType: (int) width
heightType: (int) height
{
GLuint texName;
glActiveTexture( GL_TEXTURE2 );
glGenTextures( 1, &texName );
glBindTexture( GL_TEXTURE_2D, texName );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData );
return texName;
}
And below is the source code of the fragment shader.
uniform sampler2D Ytexture; // Y Texture Sampler
uniform sampler2D Utexture; // U Texture Sampler
uniform sampler2D Vtexture; // V Texture Sampler
varying highp vec2 TexCoordOut;
void main()
{
highp float y, u, v;
highp float r, g, b;
y = texture2D( Ytexture, TexCoordOut ).p;
u = texture2D( Utexture, TexCoordOut ).p;
v = texture2D( Vtexture, TexCoordOut ).p;
y = 1.1643 * ( y - 0.0625 );
u = u - 0.5;
v = v - 0.5;
r = y + 1.5958 * v;
g = y - 0.39173 * u - 0.81290 * v;
b = y + 2.017 * u;
gl_FragColor = highp vec4( r, g, b, 1.0 );
}
The Y data is fine, but the U and V data are not. The image also comes out flipped along the y axis.
How can I resolve this?

The image is probably mirrored across the horizontal because of a simple disagreement in axes — OpenGL follows the graph paper convention where (0, 0) is the bottom left corner and the y axis heads upwards, whereas almost all graphics image formats follow the English reading order convention where (0, 0) is the top left corner and the y axis heads downwards. Just flip your input y coordinates (in the vertex shader if necessary).
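For example, a minimal sketch of that flip (the attribute names here are assumed, since your vertex shader isn't shown; TexCoordOut matches the varying in your fragment shader):
attribute vec4 Position;
attribute vec2 TexCoordIn;
varying highp vec2 TexCoordOut;
void main()
{
    gl_Position = Position;
    // Flip t so row 0 of the uploaded data ends up at the top of the image.
    TexCoordOut = vec2( TexCoordIn.x, 1.0 - TexCoordIn.y );
}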
As for the colours, the second screenshot currently isn't working for me (as per my comment) but my best guess would be that you're subtracting 128 when building imageU and imageV, then subtracting 0.5 again in your shader. Presumably you actually want to do just the one or the other (specifically, do it in the shader because texture data is unsigned)? You make the same mistake with imageY but the net effect will just be to darken the image slightly rather than to shift all the colours half way around the scale.
My only other thought is that your individual textures have only one channel so it'd be better to upload them as GL_LUMINANCE rather than GL_RGBA.
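As a sketch of that approach for the Y plane (the U and V planes would use uv_width/uv_height the same way), keeping the data as raw bytes and leaving every offset to the shader:
glActiveTexture( GL_TEXTURE0 );
glGenTextures( 1, &texName );
glBindTexture( GL_TEXTURE_2D, texName );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
// One byte per texel straight from the file's Y plane; no CPU-side subtraction,
// so the shader's (y - 0.0625) and (u/v - 0.5) offsets are applied exactly once.
glTexImage2D( GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, spriteData );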

Related

Copying from one non power of 2 texture to another in OpenGL ES 2.0

I am using OpenGL ES 2.0 on iOS.
I have the following code to copy from a smaller texture into a larger texture; everything runs and glGetError() reports nothing. Yet when I read the pixels back, it seems that nothing was written and the original texture is unmodified.
So I was wondering whether it's OK to use texture unit 0 to do the copy, or is the framebuffer implicitly using texture unit 0 when I ask it to use a texture as its color render buffer? In other words, does glFramebufferTexture2D tie up texture unit 0? Do I need to use texture unit 1 instead for the fromTexture?
Or is there some other problem with the code that could be causing it to trip?
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, toTextureImage.name );
// Set the texture parameters for a non power of 2;
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glBindTexture( GL_TEXTURE_2D, 0 );
// Generate a new FBO. It will contain the texture as the color render buffer (color attachment).
GLuint offscreenFrameBuffer;
glGenFramebuffers( 1, &offscreenFrameBuffer );
glBindFramebuffer( GL_FRAMEBUFFER, offscreenFrameBuffer );
// Attach the texture to the FBO. Question: does this use texture unit 0?
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, toTextureImage.texture.name, 0 );
// Check whether the framebuffer is complete
GLenum status = glCheckFramebufferStatus( GL_FRAMEBUFFER );
if( status != GL_FRAMEBUFFER_COMPLETE )
{
    DebugLog( @"failed to make complete framebuffer object %x", status );
    return;
}
// Bind the FBO; it is already bound;
// glBindFramebuffer( GL_FRAMEBUFFER, offscreenFrameBuffer );
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, fromTextureImage.name );
BoundingBox uvBounds;
uvBounds.xmin = 0.0;
uvBounds.xmax = 1.0;
uvBounds.ymin = 0.0;
uvBounds.ymax = 1.0;
Vector2 renderUVs[6];
quadsFromBoundingBox( &uvBounds, renderUVs );
BoundingBox verticesBounds;
verticesBounds.xmin = toPos->v[0];
verticesBounds.xmax = toPos->v[0] + fromTextureImage.imageSize->v[0];
verticesBounds.ymin = toPos->v[1];
verticesBounds.ymax = toPos->v[1] + fromTextureImage.imageSize->v[1];
Vector2 renderVertices[6];
quadVerticesFromBoundingBox( &verticesBounds, renderVertices );
[ self prepareToDraw ];
// Tell the shader what the UVs are ...
[ self setTexture0UVs:renderUVs ];
// use linear filtering for non power of 2
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
// set the default color to use
[ self setColor4b:&colorRGB_White ];
// Tell the shader what the vertices are;
[ self setVerticesVector2s:renderVertices ];
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ZERO);
glDrawArrays( GL_TRIANGLES, 0, 6 );
glDisable(GL_BLEND);
// Unbind the framebuffer;
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
glDeleteFramebuffers( 1, &offscreenFrameBuffer );
Framebuffer attachment binding has nothing to do with texture units; they are completely independent concepts in the API.
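If it helps to narrow it down, one quick check (a sketch, not part of the code above) is to read a pixel back while offscreenFrameBuffer is still bound, after the draw and before deleting it; if that already comes back unchanged, the draw itself never landed and the texture units are a red herring:
// With offscreenFrameBuffer still bound and the quad already drawn:
GLubyte probe[4] = { 0, 0, 0, 0 };
glReadPixels( 0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, probe );
// probe now holds the bottom-left pixel of the colour attachment (toTextureImage).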

iOS YUV 420v using GL_TEXTURE_2D shows wrong colour in OpenGL shader

Goal: to use GL_TEXTURE_2D instead of CVOpenGLESTextureRef to push YUV data (format '420v', kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) to the shaders. Why? Because I need glTexSubImage2D to manipulate pixels, and it has no effect when the target is CVOpenGLESTextureGetTarget(<name>); I must use GL_TEXTURE_2D.
Problem:
I am using a custom video compositor to manipulate an AVPlayer video. When I use CVOpenGLESTextureRef as in Apple's AVCustomEdit sample code, which uses two separate shaders, one for Y (luma) and one for UV (chroma), the video looks normal, like this:
But trying to use GL_TEXTURE_2D instead makes the video show only green and pink colours, like this:
And if I use GL_TEXTURE_2D with the fragment shader that combines the Y and UV textures, it looks even worse, like this:
My code:
First the track buffer and destination buffer are created:
CVPixelBufferRef foregroundSourceBuffer = [request sourceFrameByTrackID:currentInstruction.foregroundTrackID];
CVPixelBufferRef dstBuffer = [_renderContext newPixelBuffer];
Then they get passed to the render function which contains the following relevant code:
CVOpenGLESTextureRef foregroundLumaTexture = [self lumaTextureForPixelBuffer:foregroundPixelBuffer];
CVOpenGLESTextureRef foregroundChromaTexture = [self chromaTextureForPixelBuffer:foregroundPixelBuffer];
CVOpenGLESTextureRef destLumaTexture = [self lumaTextureForPixelBuffer:destinationPixelBuffer];
CVOpenGLESTextureRef destChromaTexture = [self chromaTextureForPixelBuffer:destinationPixelBuffer];
The luma texture function returns this:
CVOpenGLESTextureRef luma = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
_videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RED_EXT,
(int)CVPixelBufferGetWidth(pixelBuffer),
(int)CVPixelBufferGetHeight(pixelBuffer),
GL_RED_EXT,
GL_UNSIGNED_BYTE,
0,
&lumaTexture);
The chroma texture function returns this:
CVOpenGLESTextureRef chroma = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
_videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RG_EXT,
(int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 1),
(int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 1),
GL_RG_EXT,
GL_UNSIGNED_BYTE,
1,
&chromaTexture);
Now the relevant body of the render function:
glBindFramebuffer(GL_FRAMEBUFFER, self.offscreenBufferHandle);
glViewport(0, 0, (int)CVPixelBufferGetWidthOfPlane(destinationPixelBuffer, 0), (int)CVPixelBufferGetHeightOfPlane(destinationPixelBuffer, 0));
#ifdef USE_GL_TEXTURE_2D
int bufferWidth = CVPixelBufferGetWidth(foregroundPixelBuffer);
int bufferHeight = CVPixelBufferGetHeight(foregroundPixelBuffer);
GLuint frameTextureY;
GLuint frameTextureUV;
glGenTextures(1, &frameTextureY);
glGenTextures(1, &frameTextureUV);
if(CVPixelBufferLockBaseAddress(foregroundPixelBuffer, 0) == kCVReturnSuccess){
glBindTexture(GL_TEXTURE_2D, frameTextureY);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(foregroundPixelBuffer, 0));
glBindTexture(GL_TEXTURE_2D, frameTextureUV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(foregroundPixelBuffer, 1));
CVPixelBufferUnlockBaseAddress(foregroundPixelBuffer, 0);
}
#endif
glActiveTexture(GL_TEXTURE0);
#ifdef USE_GL_TEXTURE_2D
glUseProgram(self.programYUV_2);
glBindTexture(GL_TEXTURE_2D, frameTextureY);
glUniformMatrix4fv(uniforms[UNIFORM_RENDER_TRANSFORM_YUV_2], 1, GL_FALSE, preferredRenderTransform);
#else
glUseProgram(self.programY);
glBindTexture(CVOpenGLESTextureGetTarget(foregroundLumaTexture), CVOpenGLESTextureGetName(foregroundLumaTexture));
glUniformMatrix4fv(uniforms[UNIFORM_RENDER_TRANSFORM_Y], 1, GL_FALSE, preferredRenderTransform);
#endif
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Attach the destination texture as a color attachment to the off screen frame buffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, CVOpenGLESTextureGetTarget(destLumaTexture), CVOpenGLESTextureGetName(destLumaTexture), 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
NSLog(#"Failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
goto bail;
}
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
#ifdef USE_GL_TEXTURE_2D
glUniform1i(uniforms[UNIFORM_TEXTURE_YUV_2_Y], 0);
glVertexAttribPointer(ATTRIB_VERTEX_Y_UV_INONESHADER, 2, GL_FLOAT, 0, 0, quadVertexData1);
glEnableVertexAttribArray(ATTRIB_VERTEX_Y_UV_INONESHADER);
glVertexAttribPointer(ATTRIB_TEXCOORD_Y_UV_INONESHADER, 2, GL_FLOAT, 0, 0, quadTextureData1);
glEnableVertexAttribArray(ATTRIB_TEXCOORD_Y_UV_INONESHADER);
#else
glUniform1i(uniforms[UNIFORM_TEXTURE_Y], 0);
glVertexAttribPointer(ATTRIB_VERTEX_Y, 2, GL_FLOAT, 0, 0, quadVertexData1);
glEnableVertexAttribArray(ATTRIB_VERTEX_Y);
glVertexAttribPointer(ATTRIB_TEXCOORD_Y, 2, GL_FLOAT, 0, 0, quadTextureData1);
glEnableVertexAttribArray(ATTRIB_TEXCOORD_Y);
#endif
glDrawArrays(GL_TRIANGLE_STRIP, 0, 5);
glActiveTexture(GL_TEXTURE1);
#ifdef USE_GL_TEXTURE_2D
//no need to use different program
glBindTexture(GL_TEXTURE_2D, frameTextureUV);
glUniformMatrix4fv(uniforms[UNIFORM_RENDER_TRANSFORM_YUV_2], 1, GL_FALSE, preferredRenderTransform);
#else
glUseProgram(self.programUV);
glBindTexture(CVOpenGLESTextureGetTarget(foregroundChromaTexture), CVOpenGLESTextureGetName(foregroundChromaTexture));
glUniformMatrix4fv(uniforms[UNIFORM_RENDER_TRANSFORM_UV], 1, GL_FALSE, preferredRenderTransform);
#endif
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glViewport(0, 0, (int)CVPixelBufferGetWidthOfPlane(destinationPixelBuffer, 1), (int)CVPixelBufferGetHeightOfPlane(destinationPixelBuffer, 1));
// Attach the destination texture as a color attachment to the off screen frame buffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, CVOpenGLESTextureGetTarget(destChromaTexture), CVOpenGLESTextureGetName(destChromaTexture), 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
NSLog(#"Failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
goto bail;
}
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
#ifdef USE_GL_TEXTURE_2D
glUniform1i(uniforms[UNIFORM_TEXTURE_YUV_2_UV], 1);
glVertexAttribPointer(ATTRIB_VERTEX_Y_UV_INONESHADER, 2, GL_FLOAT, 0, 0, quadVertexData1);
glEnableVertexAttribArray(ATTRIB_VERTEX_Y_UV_INONESHADER);
glVertexAttribPointer(ATTRIB_TEXCOORD_Y_UV_INONESHADER, 2, GL_FLOAT, 0, 0, quadTextureData1);
glEnableVertexAttribArray(ATTRIB_TEXCOORD_Y_UV_INONESHADER);
#else
glUniform1i(uniforms[UNIFORM_TEXTURE_UV], 1);
glVertexAttribPointer(ATTRIB_VERTEX_UV, 2, GL_FLOAT, 0, 0, quadVertexData1);
glEnableVertexAttribArray(ATTRIB_VERTEX_UV);
glVertexAttribPointer(ATTRIB_TEXCOORD_UV, 2, GL_FLOAT, 0, 0, quadTextureData1);
glEnableVertexAttribArray(ATTRIB_TEXCOORD_UV);
#endif
glDrawArrays(GL_TRIANGLE_STRIP, 0, 5);
glFlush();
bail:
#ifdef USE_GL_TEXTURE_2D
glDeleteTextures(1, &frameTextureY);
glDeleteTextures(1, &frameTextureUV);
#endif
CFRelease(foregroundLumaTexture);
CFRelease(foregroundChromaTexture);
CFRelease(destLumaTexture);
CFRelease(destChromaTexture);
// Periodic texture cache flush every frame
CVOpenGLESTextureCacheFlush(self.videoTextureCache, 0);
Here are my fragment shaders, which I use depending on the test case (whether I draw Y and UV separately or together in one shader):
static const char kFragmentShaderY[] = {
"varying highp vec2 texCoordVarying; \n \
uniform sampler2D s_texture_y; \n \
void main() \n \
{ \n \
gl_FragColor.r = texture2D(s_texture_y, texCoordVarying).r; \n \
}"
};
static const char kFragmentShaderUV[] = {
"varying highp vec2 texCoordVarying; \n \
uniform sampler2D s_texture_uv; \n \
void main() \n \
{ \n \
gl_FragColor.rg = texture2D(s_texture_uv, texCoordVarying).rg; \n \
}"
};
static const char kFragmentShaderYUV_2Textures[] = {
"varying highp vec2 texCoordVarying; \n \
uniform sampler2D s_texture_y; \n \
uniform sampler2D s_texture_uv; \n \
\n \
void main() \n \
{ \n \
mediump vec3 yuv;// = vec3(1.1643 * (texture2D(s_texture_y, texCoordVarying).r - 0.0625), \n \
lowp vec3 rgb; \n \
yuv.x = texture2D(s_texture_y, texCoordVarying).r; \n \
yuv.yz = texture2D(s_texture_uv, texCoordVarying).rg - vec2(0.5, 0.5); \n \
\n \
rgb = mat3( 1, 1, 1, \n \
0, -.21482, 2.12798, \n \
1.28033, -.38059, 0) * yuv; \n \
gl_FragColor = vec4(rgb, 1.0); \n \
}"
};
Using GL_TEXTURE_2D, if I use the fragment shader containing both the Y and UV textures, the video looks like #3 above. If I use the two separate fragment shaders (one for Y, one for UV), the picture is #2 above (ALMOST right, but the chroma colours are just greens and pinks). (Mind you, I do comment out some of the code above to be able to use the two separate fragment shaders, and of course I bind to GL_TEXTURE_2D and not the CV target, and so on.)
Again, my problem is that I need to use GL_TEXTURE_2D instead of CVOpenGLESTextureGetTarget, but the chroma colour is wrong when I do. I wonder what I am doing wrong. Is it something to do with the YUV format being kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange instead of kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, perhaps? I have also experimented with making three GL_LUMINANCE textures, and many other permutations, with no luck.
It turns out the problem was with using GL_LUMINANCE and GL_LUMINANCE_ALPHA, which are apparently deprecated formats. When I switched them to GL_RED_EXT and GL_RG_EXT, it worked and the chroma colors are finally right. I hope this question and answer will save other people time.
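For reference, a sketch of what that swap looks like in the upload path above (this assumes the device exposes the EXT_texture_rg extension, which defines GL_RED_EXT and GL_RG_EXT and which the texture-cache path already relies on):
// Y plane: one byte per texel, uploaded as a single-channel red texture.
glBindTexture(GL_TEXTURE_2D, frameTextureY);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED_EXT, bufferWidth, bufferHeight, 0, GL_RED_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(foregroundPixelBuffer, 0));
// Interleaved CbCr plane: two bytes per texel, uploaded as a red/green texture,
// so the shader's .r / .rg reads match what the CVOpenGLESTexture path produced.
glBindTexture(GL_TEXTURE_2D, frameTextureUV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG_EXT, bufferWidth / 2, bufferHeight / 2, 0, GL_RG_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(foregroundPixelBuffer, 1));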

How to render a textured object to a framebuffer texture, acquire it with OpenCL, and convert it to OpenCV

So I'm trying to combine the usefulness of all three libraries. I load an object with a texture:
// Load the model of the store, create a program with the shaders
GLint store = OpGL::initModel(MESH_PATH);
GLuint storeProgram = OpGL::initProgram(VS_GLSL_PATH, FS_GLSL_PATH);
glUseProgram (storeProgram);
// Find the location in the shader, for the texture image
GLuint TEX_ID = glGetUniformLocation(storeProgram, "tex_glsl");
GLuint TEX = OpGL::loadTexture(TEXTURE_IMAGE_PATH, 25);
// Bind texture in Texture Unit 0
glBindTexture(GL_TEXTURE_2D, TEX);
// Point the sampler uniform at texture unit 0
glUniform1i(TEX_ID, 0);
set up a framebuffer with a texture in GL:
GLuint g_fb = 0; // frame buffer
glGenFramebuffers (1, &g_fb);
glBindFramebuffer(GL_FRAMEBUFFER, g_fb);
GLuint g_fb_tex = 0;
glGenTextures (1, &g_fb_tex);
glBindTexture (GL_TEXTURE_2D, g_fb_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D ( GL_TEXTURE_2D,0, GL_RGBA,640,480, 0,GL_RGBA,GL_UNSIGNED_BYTE,NULL );
glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, g_fb_tex, 0);
GLuint g_db = 0; // depth buffer
glGenRenderbuffers(1, &g_db);
glBindRenderbuffer(GL_RENDERBUFFER, g_db);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, 640, 480);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, g_db);
/* tell the framebuffer to expect a colour output attachment*/
GLenum draw_bufs[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers (1, draw_bufs);
create storage in CL:
cl_mem CL_image; //location for the gl rendering to reside in CL
CL_image = clCreateFromGLTexture2D(context, CL_MEM_READ_WRITE, GL_TEXTURE_2D, 0, g_fb_tex, &err);
create a UMat:
cv::UMat Umat;
re-bind the original texture:
glBindTexture(GL_TEXTURE_2D, TEX);
glBindFramebuffer(GL_FRAMEBUFFER, g_fb); // just as a precaution
render:
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram (storeProgram);
glBindVertexArray (store);
glDrawArrays (GL_TRIANGLES, 0, 79227);
glfwPollEvents ();
glFlush();
// pass the images to CL
err = clEnqueueAcquireGLObjects(queue, 1, &CL_image, 0, NULL, NULL);
cl_event wait;
cv::ocl::convertFromImage(CL_image, Umat);
cv::flip(Umat,Umat,0);
cv::imshow("CVforCLimage", Umat);
cv::waitKey(1);
err = clEnqueueReleaseGLObjects(queue, 1, &out_toCL_image, 0, 0, 0);
err = clFinish(queue);
Everything renders fine if I just send it to the screen (glBindFramebuffer(GL_FRAMEBUFFER, 0);)... but I get a blue object instead of an orange object when I render it to CV. Almost as though the original texture I loaded is not making it to the rendering.
Thanks for the help!
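One thing worth checking, though it is only a guess from the blue-versus-orange symptom: the GL colour attachment is RGBA, while OpenCV's imshow expects BGR, so a missing channel reorder after convertFromImage swaps red and blue. A minimal sketch, assuming Umat really holds the 4-channel RGBA data acquired from g_fb_tex:
// Reorder RGBA (GL layout) to BGR (OpenCV display layout) before showing the frame.
cv::UMat bgr;
cv::cvtColor(Umat, bgr, cv::COLOR_RGBA2BGR);
cv::flip(bgr, bgr, 0);
cv::imshow("CVforCLimage", bgr);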

OpenGL ES Depthbuffer not working

I hope someone can help me; I've been sitting on this problem for a long time now. I'm working on a 3D game with OpenGL ES. I have a map/world and some objects, and I want to add shadows with shadow mapping.
My problem is that the shadow-map information does not match my light-space information while rendering.
I'm not sure whether I have problems in my FBO, so here it is:
- (void)createShadowMap:(GLboolean)c Depth:(GLboolean)d Stencil:(GLboolean)s{
//Get ID
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFBO);
//Texture
glGenTextures(1, &_colorBuffer);
glBindTexture(GL_TEXTURE_2D, _colorBuffer);
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA,_screenWidth, _screenHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
glTexParameterf ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameterf ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameterf ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//SetupFBO
glGenFramebuffers(1, &_frameBuffer);
glGenRenderbuffers(1, &_depthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _depthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _screenWidth, _screenHeight);
glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _depthBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _colorBuffer, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
exit(1);
}
glBindFramebuffer ( GL_FRAMEBUFFER, defaultFBO );
glBindTexture ( GL_TEXTURE_2D, 0 );
}
But I think the problem is in the calculation, I just don't find it. So I have here the shaders for the depth map:
void main()
{
vTexCoord = projectionMatrix * viewMatrix * modelMatrix * position;
gl_Position = projectionMatrix * viewMatrix * modelMatrix * position;
}
Fragment:
void main()
{
highp float depth = gl_FragCoord.z;
highp float linearDepth = (2.0 * 0.1) / (60.0 + 1.0 - depth * (60.0 - 1.0));
lowp vec4 color = vec4(vec3(linearDepth), 1.0);
gl_FragColor = color;
}
And now the shader while rendering:
Vert:
vec4 lightSight = projectionMatrix * lightSpaceMatrix * modelMatrix * position;
projCoords = lightSight;
Frag:
highp vec3 depth = projCoords.xyz/projCoords.w;
depth = (depth +1.0)*0.5;
highp float currentDepth = (projCoords.xyz/projCoords.w).z;
highp float closestDepth = texture2D(shadowMap, depth).r;
lowp float shadow;
if (closestDepth > currentDepth) {
shadow = 0.0;
} else {
shadow = 0.5;
}
lowp vec4 color = vec4(vec3(shadow), 1.0);

a lot of GREEN Color at YUV420p --> RGB in OpenGL 2.0 Shader on iOS

I want to make a movie player for iOS using ffmpeg and OpenGL ES 2.0,
but I have a problem: the output RGB image has a lot of GREEN in it.
Here are the code and images.
480x320: frame width & height
512x512: texture width & height
I get raw YUV420p data from an ffmpeg AVFrame.
for (int i = 0, nDataLen = 0; i < 3; i++) {
    int nShift = (i == 0) ? 0 : 1;
    uint8_t *pYUVData = (uint8_t *)_frame->data[i];
    for (int j = 0; j < (mHeight >> nShift); j++) {
        memcpy(&pData->pOutBuffer[nDataLen], pYUVData, (mWidth >> nShift));
        pYUVData += _frame->linesize[i];
        nDataLen += (mWidth >> nShift);
    }
}
and prepare textures for the Y, U and V channels.
//: U Texture
if (sampler1Texture) glDeleteTextures(1, &sampler1Texture);
glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &sampler1Texture);
glBindTexture(GL_TEXTURE_2D, sampler1Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glEnable(GL_TEXTURE_2D);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_LUMINANCE,
texW / 2,
texH / 2,
0,
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
NULL);
//: V Texture
if (sampler2Texture) glDeleteTextures(1, &sampler2Texture);
glActiveTexture(GL_TEXTURE2);
glGenTextures(1, &sampler2Texture);
glBindTexture(GL_TEXTURE_2D, sampler2Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glEnable(GL_TEXTURE_2D);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_LUMINANCE,
texW / 2,
texH / 2,
0,
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
NULL);
//: Y Texture
if (sampler0Texture) glDeleteTextures(1, &sampler0Texture);
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &sampler0Texture);
glBindTexture(GL_TEXTURE_2D, sampler0Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glEnable(GL_TEXTURE_2D);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_LUMINANCE,
texW,
texH,
0,
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
NULL);
Rendering part is below.
int _idxU = mFrameW * mFrameH;
int _idxV = _idxU + (_idxU / 4);
// U data
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, sampler1Texture);
glUniform1i(sampler1Uniform, 1);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW / 2, // source width
mFrameH / 2, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
&_frameData[_idxU]);
// V data
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, sampler2Texture);
glUniform1i(sampler2Texture, 2);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW / 2, // source width
mFrameH / 2, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
&_frameData[_idxV]);
// Y data
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, sampler0Texture);
glUniform1i(sampler0Uniform, 0);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW, // source width
mFrameH, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
_frameData);
Vertex Shader & Fragment Shader is below.
attribute vec4 Position;
attribute vec2 TexCoordIn;
varying vec2 TexCoordOut;
varying vec2 TexCoordOut_UV;
uniform mat4 Projection;
uniform mat4 Modelview;
void main()
{
gl_Position = Projection * Modelview * Position;
TexCoordOut = TexCoordIn;
}
uniform sampler2D sampler0; // Y Texture Sampler
uniform sampler2D sampler1; // U Texture Sampler
uniform sampler2D sampler2; // V Texture Sampler
varying highp vec2 TexCoordOut;
void main()
{
highp float y = texture2D(sampler0, TexCoordOut).r;
highp float u = texture2D(sampler2, TexCoordOut).r - 0.5;
highp float v = texture2D(sampler1, TexCoordOut).r - 0.5;
//y = 0.0;
//u = 0.0;
//v = 0.0;
highp float r = y + 1.13983 * v;
highp float g = y - 0.39465 * u - 0.58060 * v;
highp float b = y + 2.03211 * u;
gl_FragColor = vec4(r, g, b, 1.0);
}
The Y texture (grayscale) is correct, but U & V have a lot of green.
So the final RGB image (Y+U+V) has a lot of GREEN.
What's the problem?
Please help.
Thanks.
Swap the u and v uniforms (one for the other) and you will get the correct result.
So the fragment shader (stays the same):
uniform sampler2D sampler0; // Y Texture Sampler
uniform sampler2D sampler1; // U Texture Sampler
uniform sampler2D sampler2; // V Texture Sampler
varying highp vec2 TexCoordOut;
void main()
{
highp float y = texture2D(sampler0, TexCoordOut).r;
highp float u = texture2D(sampler2, TexCoordOut).r - 0.5;
highp float v = texture2D(sampler1, TexCoordOut).r - 0.5;
highp float r = y + 1.13983 * v;
highp float g = y - 0.39465 * u - 0.58060 * v;
highp float b = y + 2.03211 * u;
gl_FragColor = vec4(r, g, b, 1.0);
}
and rendering code:
// RENDERING
int _idxU = mFrameW * mFrameH;
int _idxV = _idxU + (_idxU / 4);
// U data
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, sampler1Texture);
GLint sampler1Uniform = glGetUniformLocation(programStandard, "sampler2");
glUniform1i(sampler1Uniform, 1);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW / 2, // source width
mFrameH / 2, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
&_frameData[_idxU]);
// V data
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, sampler2Texture);
GLint sampler2Uniform = glGetUniformLocation(programStandard, "sampler1");
glUniform1i(sampler2Uniform, 2);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW / 2, // source width
mFrameH / 2, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
&_frameData[_idxV]);
// Y data
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, sampler0Texture);
GLint sampler0Uniform = glGetUniformLocation(programStandard, "sampler0");
glUniform1i(sampler0Uniform, 0);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW, // source width
mFrameH, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
_frameData);
//draw RECT
glVertexAttribPointer(ATTRIB_VERTEX, 3, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
//ATTRIB_TEXTUREPOSITON
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureCoords);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
free(_frameData);
[(EAGLView *)self.view presentFramebuffer];
Conclusion: u <-> v uniforms.
Since iOS supports rgb_422 textures, instead of using three luminance textures you could use one rgb_422 texture: http://www.opengl.org/registry/specs/APPLE/rgb_422.txt
EDIT:
Whoops, YUV420p is different from YUV422. In this case you must convert the YUV data to RGB data before uploading it as a texture, because of its planar layout.
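A rough sketch of that CPU-side conversion (assuming a tightly packed YUV420p buffer laid out Y then U then V, as in the copy loop above, the same BT.601-style coefficients the shader uses, and even width/height; the helper name is ours, not from the question):
#include <stdint.h>
// Expand one tightly packed YUV420p frame into interleaved RGBA bytes so it can
// be uploaded as a single GL_RGBA texture. dst must hold w * h * 4 bytes.
static void yuv420p_to_rgba(const uint8_t *src, uint8_t *dst, int w, int h)
{
    const uint8_t *yp = src;                      // Y plane: w * h bytes
    const uint8_t *up = yp + w * h;               // U plane: (w/2) * (h/2) bytes
    const uint8_t *vp = up + (w / 2) * (h / 2);   // V plane: (w/2) * (h/2) bytes
    for (int j = 0; j < h; j++) {
        for (int i = 0; i < w; i++) {
            float y = (float)yp[j * w + i];
            float u = (float)up[(j / 2) * (w / 2) + (i / 2)] - 128.0f;
            float v = (float)vp[(j / 2) * (w / 2) + (i / 2)] - 128.0f;
            float r = y + 1.13983f * v;
            float g = y - 0.39465f * u - 0.58060f * v;
            float b = y + 2.03211f * u;
            uint8_t *px = &dst[(j * w + i) * 4];
            px[0] = (uint8_t)(r < 0.0f ? 0.0f : (r > 255.0f ? 255.0f : r));
            px[1] = (uint8_t)(g < 0.0f ? 0.0f : (g > 255.0f ? 255.0f : g));
            px[2] = (uint8_t)(b < 0.0f ? 0.0f : (b > 255.0f ? 255.0f : b));
            px[3] = 255;
        }
    }
}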
