OpenGL ES depth buffer not working - iOS

I hope someone can help me; I've been stuck on this problem for a long time now. I'm working on a 3D game with OpenGL ES. I have a map/world and some objects, and I want to add shadows using shadow mapping.
My problem is that the shadow map values do not match the light-space values I compute while rendering.
I'm not sure whether my FBO is part of the problem, so I'm including it here:
- (void)createShadowMap:(GLboolean)c Depth:(GLboolean)d Stencil:(GLboolean)s{
    //Get ID
    glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFBO);
    //Texture
    glGenTextures(1, &_colorBuffer);
    glBindTexture(GL_TEXTURE_2D, _colorBuffer);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _screenWidth, _screenHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    //Setup FBO
    glGenFramebuffers(1, &_frameBuffer);
    glGenRenderbuffers(1, &_depthBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _depthBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _screenWidth, _screenHeight);
    glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _depthBuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _colorBuffer, 0);
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        exit(1);
    }
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);
    glBindTexture(GL_TEXTURE_2D, 0);
}
But I think the problem is in the calculation; I just can't find it. Here are the shaders for the depth map:
void main()
{
    vTexCoord = projectionMatrix * viewMatrix * modelMatrix * position;
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * position;
}
Fragment:
void main()
{
    highp float depth = gl_FragCoord.z;
    highp float linearDepth = (2.0 * 0.1) / (60.0 + 1.0 - depth * (60.0 - 1.0));
    lowp vec4 color = vec4(vec3(linearDepth), 1.0);
    gl_FragColor = color;
}
And here are the shaders used while rendering:
Vert:
vec4 lightSight = projectionMatrix * lightSpaceMatrix * modelMatrix * position;
projCoords = lightSight;
Frag:
highp vec3 depth = projCoords.xyz/projCoords.w;
depth = (depth + 1.0) * 0.5;
highp float currentDepth = (projCoords.xyz/projCoords.w).z;
highp float closestDepth = texture2D(shadowMap, depth).r;
lowp float shadow;
if (closestDepth > currentDepth) {
    shadow = 0.0;
} else {
    shadow = 0.5;
}
lowp vec4 color = vec4(vec3(shadow), 1.0);
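For reference, a minimal sketch of a version in which both passes compare the same quantity (this is a sketch, not code from the post; it assumes the depth pass simply writes gl_FragCoord.z and that both passes use the same light projection and view matrices):
// Depth-pass fragment shader: store window-space depth directly
// (a single 8-bit channel gives coarse depth; packing into all four RGBA bytes is common).
void main()
{
    gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
}
// Render-pass fragment shader, with projCoords produced from the light's
// projection * view * model transform in the vertex shader:
highp vec3 p = projCoords.xyz / projCoords.w;   // clip space -> NDC
p = p * 0.5 + 0.5;                              // NDC [-1, 1] -> [0, 1]
highp float closestDepth = texture2D(shadowMap, p.xy).r;
highp float currentDepth = p.z;
lowp float shadow = (currentDepth - 0.005 > closestDepth) ? 0.5 : 0.0;  // small, arbitrary bias against acne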

Related

OpenGL ES Texture Not Rendering

I am attempting to render a texture to a plane with OpenGL ES on an iPhone. I have worked with OpenGL before but I'm not sure why this isn't working.
When I run the code, the plane is rendered as a black square rather than a textured square. I believe the problem may be in the texture loading, although I see no errors when I run the code.
Hopefully someone will spot a problem and be able to help. Thanks in advance.
Here is my code for the mesh.
// Mesh loading
- ( id ) init {
    if ( self = [ super init ] ) {
        glGenVertexArraysOES( 1, &m_vertexArray );
        glBindVertexArrayOES( m_vertexArray );
        glGenBuffers( 1, &m_vertexBuffer );
        glBindBuffer( GL_ARRAY_BUFFER, m_vertexBuffer );
        glBufferData( GL_ARRAY_BUFFER, sizeof( g_vertices ), g_vertices, GL_STATIC_DRAW );
        glGenBuffers( 1, &m_texCoordBuffer );
        glBindBuffer( GL_ARRAY_BUFFER, m_texCoordBuffer );
        glBufferData( GL_ARRAY_BUFFER, sizeof( g_texCoords ), g_texCoords, GL_STATIC_DRAW );
    }
    return self;
}
- ( void ) render {
    glBindBuffer( GL_ARRAY_BUFFER, m_vertexBuffer );
    glVertexAttribPointer( GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, ( GLvoid* ) 0 );
    glBindBuffer( GL_ARRAY_BUFFER, m_texCoordBuffer );
    glVertexAttribPointer( GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, ( GLvoid* ) 0 );
    glDrawArrays( GL_TRIANGLES, 0, sizeof( g_vertices ) / sizeof( g_vertices[ 0 ] ) );
}
const GLfloat g_vertices[] = {
-1.0, -1.0, 0.0,
1.0, 1.0, 0.0,
-1.0, 1.0, 0.0,
-1.0, -1.0, 0.0,
1.0, -1.0, 0.0,
1.0, 1.0, 0.0
};
const GLfloat g_texCoords[] = {
0.0, 0.0,
1.0, 1.0,
0.0, 1.0,
0.0, 0.0,
1.0, 0.0,
1.0, 1.0
};
I only need my vertices and tex coords right now, so that is all I'm using.
Next is my texture loading.
- ( id ) init: ( NSString* ) filename {
    if ( self = [ super init ] ) {
        CGImageRef spriteImage = [ UIImage imageNamed: filename ].CGImage;
        if ( !spriteImage ) {
            NSLog( @"Failed to load image %@", filename );
            exit( 1 );
        }
        size_t width = CGImageGetWidth( spriteImage );
        size_t height = CGImageGetHeight( spriteImage );
        GLubyte *spriteData = ( GLubyte* ) calloc( width * height * 4, sizeof( GLubyte ) );
        CGContextRef spriteContext = CGBitmapContextCreate( spriteData, width, height, 8, 4 * width, CGImageGetColorSpace( spriteImage ), kCGImageAlphaPremultipliedLast );
        CGContextDrawImage( spriteContext, CGRectMake( 0, 0, width, height ), spriteImage );
        CGContextRelease( spriteContext );
        glGenTextures( 1, &m_texture );
        glBindTexture( GL_TEXTURE_2D, m_texture );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
        glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, ( GLuint ) width, ( GLuint ) height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData );
        free( spriteData );
    }
    return self;
}
- ( void ) bind {
    glActiveTexture( GL_TEXTURE0 );
    glBindTexture( GL_TEXTURE_2D, m_texture );
}
I used the texture loading code from this tutorial.
Then here is my rendering code.
- ( void ) glkView: ( GLKView* ) view drawInRect: ( CGRect ) rect {
    glClearColor( 0.65f, 0.65f, 0.65f, 1.0f );
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glUseProgram( m_shaderProgram );
    [ m_texture bind ];
    glUniform1i( uniforms[ UNIFORM_SAMPLER ], 0 );
    GLKMatrix4 mvp = GLKMatrix4Multiply( GLKMatrix4Multiply( m_projectionMatrix, m_viewMatrix ), m_modelMatrix );
    glUniformMatrix4fv( uniforms[ UNIFORM_MODELVIEWPROJECTION_MATRIX ], 1, 0, mvp.m );
    glEnableVertexAttribArray( GLKVertexAttribPosition );
    glEnableVertexAttribArray( GLKVertexAttribTexCoord0 );
    [ m_plane render ];
    glDisableVertexAttribArray( GLKVertexAttribTexCoord0 );
    glDisableVertexAttribArray( GLKVertexAttribPosition );
}
Vertex shader.
attribute vec3 position;
attribute vec2 texCoord;
varying lowp vec2 texCoord0;
uniform mat4 modelViewProjectionMatrix;
void main()
{
    texCoord0 = texCoord;
    gl_Position = modelViewProjectionMatrix * vec4( position, 1.0 );
}
And lastly, the fragment shader.
varying lowp vec2 texCoord0;
uniform sampler2D sampler;
void main()
{
    gl_FragColor = texture2D( sampler, texCoord0.st );
}
As mentioned in the comment, you can test with a power-of-two (POT) texture. In addition, there are extensions that enable support for non-power-of-two (NPOT) textures, such as GL_IMG_texture_npot; see the discussion in this thread (Non power of two textures in iOS) and this post (http://aras-p.info/blog/2012/10/17/non-power-of-two-textures/).
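For reference, a minimal sketch of texture state that works with NPOT images on stock OpenGL ES 2.0 (no mipmapping and both wrap modes clamped; m_texture is the texture object from the code above):
glBindTexture( GL_TEXTURE_2D, m_texture );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );    // no mipmapped filter for NPOT
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE ); // NPOT requires CLAMP_TO_EDGE
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );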

fragment shader: texture2D() and texelFetch()

My program displays an image captured from a webcam with OpenCV and rendered with OpenGL.
The program below generally works, but I have some questions, listed after the code.
main:
#define GLEW_STATIC
#include <GL/glew.h>
#include <GLFW\glfw3.h>
#include <iostream>
#include <fstream> //std::ifstream
#include <algorithm> //std::max()
#include <string> //std::string, std::getline
#include <vector> //std::vector (used for the shader info logs)
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
cv::VideoCapture capture0;
cv::VideoCapture capture1;
void captureFromWebcam(cv::Mat &frame, cv::VideoCapture &capture)
{
capture.read(frame);
}
bool initializeCapturing()
{
capture0.open(0);
capture1.open(1);
if(!capture0.isOpened() | !capture1.isOpened())
{
std::cout << "Ein oder mehrere VideoCaptures konnten nicht geöffnet werden" << std::endl;
if(!capture0.isOpened())
capture0.release();
if(!capture1.isOpened())
capture1.release();
return false;
}
return true;
}
void releaseCapturing()
{
capture0.release();
capture1.release();
}
GLuint LoadShaders(const char * vertex_file_path,const char * fragment_file_path){
// Create the shaders
GLuint VertexShaderID = glCreateShader(GL_VERTEX_SHADER);
GLuint FragmentShaderID = glCreateShader(GL_FRAGMENT_SHADER);
// Read the Vertex Shader code from the file
std::string VertexShaderCode;
std::ifstream VertexShaderStream(vertex_file_path, std::ios::in);
if(VertexShaderStream.is_open())
{
std::string Line = "";
while(getline(VertexShaderStream, Line))
VertexShaderCode += "\n" + Line;
VertexShaderStream.close();
}
// Read the Fragment Shader code from the file
std::string FragmentShaderCode;
std::ifstream FragmentShaderStream(fragment_file_path, std::ios::in);
if(FragmentShaderStream.is_open()){
std::string Line = "";
while(std::getline(FragmentShaderStream, Line))
FragmentShaderCode += "\n" + Line;
FragmentShaderStream.close();
}
GLint Result = GL_FALSE;
int InfoLogLength;
// Compile Vertex Shader
printf("Compiling shader : %s\n", vertex_file_path);
char const * VertexSourcePointer = VertexShaderCode.c_str();
glShaderSource(VertexShaderID, 1, &VertexSourcePointer , NULL);
glCompileShader(VertexShaderID);
// Check Vertex Shader
glGetShaderiv(VertexShaderID, GL_COMPILE_STATUS, &Result);
glGetShaderiv(VertexShaderID, GL_INFO_LOG_LENGTH, &InfoLogLength);
std::vector<char> VertexShaderErrorMessage(InfoLogLength);
glGetShaderInfoLog(VertexShaderID, InfoLogLength, NULL, &VertexShaderErrorMessage[0]);
fprintf(stdout, "%s\n", &VertexShaderErrorMessage[0]);
// Compile Fragment Shader
printf("Compiling shader : %s\n", fragment_file_path);
char const * FragmentSourcePointer = FragmentShaderCode.c_str();
glShaderSource(FragmentShaderID, 1, &FragmentSourcePointer , NULL);
glCompileShader(FragmentShaderID);
// Check Fragment Shader
glGetShaderiv(FragmentShaderID, GL_COMPILE_STATUS, &Result);
glGetShaderiv(FragmentShaderID, GL_INFO_LOG_LENGTH, &InfoLogLength);
std::vector<char> FragmentShaderErrorMessage(InfoLogLength);
glGetShaderInfoLog(FragmentShaderID, InfoLogLength, NULL, &FragmentShaderErrorMessage[0]);
fprintf(stdout, "%s\n", &FragmentShaderErrorMessage[0]);
// Link the program
fprintf(stdout, "Linking program\n");
GLuint ProgramID = glCreateProgram();
glAttachShader(ProgramID, VertexShaderID);
glAttachShader(ProgramID, FragmentShaderID);
glLinkProgram(ProgramID);
// Check the program
glGetProgramiv(ProgramID, GL_LINK_STATUS, &Result);
glGetProgramiv(ProgramID, GL_INFO_LOG_LENGTH, &InfoLogLength);
std::vector<char> ProgramErrorMessage( std::max(InfoLogLength, int(1)) );
glGetProgramInfoLog(ProgramID, InfoLogLength, NULL, &ProgramErrorMessage[0]);
fprintf(stdout, "%s\n", &ProgramErrorMessage[0]);
glDeleteShader(VertexShaderID);
glDeleteShader(FragmentShaderID);
return ProgramID;
}
int main ()
{
int w = 640,h=480;
glfwInit();
//configure glfw
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
GLFWwindow* window = glfwCreateWindow(w, h, "OpenGL", NULL, nullptr); // windowed
glfwMakeContextCurrent(window);
glewExperimental = GL_TRUE;
glewInit();
initializeCapturing();
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);
// An array of 3-component vectors representing the vertices (singular: vertex -> a point in three-dimensional space)
static const GLfloat g_vertex_buffer_data[] = {
//x,y,z
-1.0f, -1.0f, 0.0f, //bottom left
1.0f, 1.0f, 0.0f, //top right
-1.0f, 1.0f, 0.0f, //top left
-1.0f, -1.0f, 0.0f, //bottom left
1.0f, 1.0f, 0.0f, //top right
1.0f,-1.0f,0.0f //bottom right
};
static const GLfloat vertex_buffer_coordinates[] ={
0.0f,0.0f,
1.0f,1.0f,
0.0f,1.0f,
0.0f,0.0f,
1.0f,1.0f,
1.0f,0.0f,
};
GLuint coordinateBuffer;
glGenBuffers(1,&coordinateBuffer);
glBindBuffer(GL_ARRAY_BUFFER, coordinateBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex_buffer_coordinates), vertex_buffer_coordinates, GL_STATIC_DRAW);
// This will identify our vertex buffer
GLuint vertexbuffer;
// Generate 1 buffer, put the resulting identifier in vertexbuffer
glGenBuffers(1, &vertexbuffer);
// The following commands will talk about our 'vertexbuffer' buffer
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
// Give our vertices to OpenGL.
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
GLuint shader_programm = LoadShaders("vertex.shader","fragment.shader");
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
//what happens when the texture coordinates are outside the range?
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
//what happens when the texture is stretched/compressed?
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
cv::Mat frame;
captureFromWebcam(frame,capture0);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,frame.size().width,frame.size().height,0,GL_RGB,GL_UNSIGNED_BYTE,frame.data);
glUniform1i(glGetUniformLocation(shader_programm, "myTextureSampler"), 0);
while(!glfwWindowShouldClose(window))
{
glfwPollEvents();
if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
glfwSetWindowShouldClose(window, GL_TRUE);
// 1st attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// 2nd attribute buffer : texture coordinates
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, coordinateBuffer);
glVertexAttribPointer(
1, // attribute. No particular reason for 1, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
const GLfloat color[] = {0.0f,0.2f,0.0f,1.0f};
glClearBufferfv(GL_COLOR,0,color);
glUseProgram(shader_programm);
// Draw the triangle !
glDrawArrays(GL_TRIANGLES, 0, 2*3); // Starting from vertex 0; 6 vertices total -> 2 triangles
//glDrawArrays(GL_POINTS,0,1);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glfwSwapBuffers(window);
}
glDeleteVertexArrays(1,&VertexArrayID);
glDeleteProgram(shader_programm);
releaseCapturing();
glfwTerminate();
return 1;
}
vertex shader:
#version 330 core
layout (location = 0) in vec3 vertexPosition_modelspace; //input from the vertex buffer
layout (location = 1) in vec2 UVcoord;
out vec2 UV;
void main(void)
{
gl_Position.xyz = vertexPosition_modelspace;
gl_Position.w = 1.0; //zoom factor
UV = UVcoord;
}
Fragment shader:
#version 330 core
in vec2 UV;
out vec4 color;
// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;
void main(void)
{
//color = texture2D(myTextureSampler,UV);
color = texelFetch(myTextureSampler,ivec2(gl_FragCoord.xy),0);
}
The commented-out line in the fragment shader that uses texture2D() does not work; it produces the output shown in the linked screenshot. What is wrong?
What are the differences between texture2D() and texelFetch(), and what is best practice?
The image shown with texelFetch is bluish. Any idea why that happens? (The cv::Mat that was loaded has no tint.)
GLSL's texture addresses a texture using normalized coordinates, i.e. values in the range [0, 1], and performs filtering. texelFetch addresses texels by absolute integer index within a specific mipmap level and does not filter.
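As a rough illustration of the difference (GLSL 3.30, reusing the myTextureSampler and UV names from the posted shaders):
// Filtered lookup with normalized [0, 1] coordinates:
vec4 a = texture(myTextureSampler, UV);
// Unfiltered lookup of a single texel by integer index in mip level 0:
vec4 b = texelFetch(myTextureSampler, ivec2(UV * vec2(textureSize(myTextureSampler, 0))), 0);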
Judging by your screenshot, the texture coordinates you pass to texture are wrong or are processed incorrectly; the texelFetch code does not use the explicitly specified texture coordinates, but instead uses the viewport pixel coordinate.
Looking at your glVertexAttribPointer call for the texture coordinates, you tell OpenGL that there are 3 elements per texture coordinate, while the array has only 2. That is most likely your problem; a corrected call is sketched below.
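A corrected call for that attribute could look like this (a sketch, keeping attribute index 1 and the coordinateBuffer binding from the posted code):
glBindBuffer(GL_ARRAY_BUFFER, coordinateBuffer);
glVertexAttribPointer(
    1,        // attribute 1: texture coordinates
    2,        // 2 floats per coordinate, matching vertex_buffer_coordinates
    GL_FLOAT, // type
    GL_FALSE, // normalized?
    0,        // stride
    (void*)0  // array buffer offset
);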

No transparency with simple OpenGL ES2.0 stencils

I am attempting to make a stencil mask in OpenGL. I have been following the model from this source (http://open.gl/depthstencils and more specifically, http://open.gl/content/code/c5_reflection.txt), and as far as I can tell, I have followed the example properly. My code draws one square as a stencil and then another square on top of it. I expected to see only the parts of the second, rotating green square that cover the same area as the first. What I actually see is both overlapping squares, one rotating, with no masking applied. One notable difference from the example is that I am not using a texture. Is that a problem? I figured this would be a simpler example.
I'm fairly new to ES2.0, so if I'm doing something obviously stupid, please let me know.
Initialization:
GLuint attributes[] = { GLKVertexAttribPosition, GLKVertexAttribColor, GLKVertexAttribTexCoord0 };
const char *attributeNames[] = { "position", "color", "texcoord0" };
// These are all global GLuint variables
// vshSrc and fshSrc are const char* filenames (the files are found properly)
_myProgram = loadProgram(vshSrc, fshSrc, 3, attributes, attributeNames);
_myProgramUniformMVP = glGetUniformLocation(_myProgram, "modelViewProjectionMatrix");
_myProgramUniformTex = glGetUniformLocation(_myProgram, "tex");
_myProgramUniformOverrideColor = glGetUniformLocation(_myProgram, "overrideColor");
The draw loop:
glEnable(GL_DEPTH_TEST);
glUseProgram(_myProgram);
glDisable(GL_BLEND);
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
GLfloat gSquare[20] = { // not using the textures currently
// posX, posY, posZ, texX, texY,
-0.5f, -0.5f, 0, 0.0f, 0.0f,
0.5f, -0.5f, 0, 1.0f, 0.0f,
-0.5f, 0.5f, 0, 0.0f, 1.0f,
0.5f, 0.5f, 0, 1.0f, 1.0f
};
// Projection matrix
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
// Put the squares where they can be seen
GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f);
glEnable(GL_STENCIL_TEST);
// Build the stencil
glStencilFunc(GL_ALWAYS, 1, 0xFF); // Set any stencil to 1
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilMask(0xFF); // Write to stencil buffer
glDepthMask(GL_FALSE); // Don't write to depth buffer
glClear(GL_STENCIL_BUFFER_BIT); // Clear stencil buffer (0 by default)
GLKMatrix4 mvp = GLKMatrix4Multiply(projectionMatrix, baseModelViewMatrix);
// Draw a stationary red square for the stencil (though the color shouldn't matter)
glUniformMatrix4fv(_myProgramUniformMVP, 1, 0, mvp.m);
glUniform1i(_myProgramUniformTex, 0);
glUniform4f(_myProgramUniformOverrideColor, 1.0f, 1.0f, 1.0f, 0.0f);
glVertexAttrib4f(GLKVertexAttribColor, 1, 0, 0, 1);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 20, gSquare);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Prepare the mask
glStencilFunc(GL_EQUAL, 1, 0xFF); // Pass test if stencil value is 1
glStencilMask(0x00); // Don't write anything to stencil buffer
glDepthMask(GL_TRUE); // Write to depth buffer
glUniform4f(_myProgramUniformOverrideColor, 0.3f, 0.3f, 0.3f,1.0f);
// A slow rotating green square to be masked by the stencil
static float rotation = 0;
rotation += 0.01;
baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, rotation, 0, 0, 1);
mvp = GLKMatrix4Multiply(projectionMatrix, baseModelViewMatrix);
glUniformMatrix4fv(_myProgramUniformMVP, 1, 0, mvp.m);//The transformation matrix
glUniform1i(_myProgramUniformTex, 0); // The texture
glVertexAttrib4f(GLKVertexAttribColor, 0, 1, 0, 1);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 20, gSquare);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisable(GL_STENCIL_TEST);
EDIT: The following shader code is irrelevant to the problem I was having; the stenciling does not take place in the shader.
Vertex Shader:
attribute vec4 position;
attribute vec4 color;
attribute vec2 texcoord0;
varying lowp vec4 colorVarying;
varying lowp vec2 texcoord;
uniform mat4 modelViewProjectionMatrix;
uniform vec4 overrideColor;
void main()
{
colorVarying = overrideColor * color;
texcoord = texcoord0;
gl_Position = modelViewProjectionMatrix * position;
}
Fragment Shader:
varying lowp vec4 colorVarying;
varying lowp vec2 texcoord;
uniform sampler2D tex;
void main()
{
gl_FragColor = colorVarying * texture2D(tex, texcoord);
}
It was necessary to initialize a stencil buffer. Here is the code that fixed it.
glGenRenderbuffersOES(1, &depthStencilRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH24_STENCIL8_OES, framebufferWidth, framebufferHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_STENCIL_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
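It can also help to confirm that the framebuffer is still complete after attaching the combined depth/stencil renderbuffer; a minimal check (shown here with the core ES 2.0 names rather than the OES-suffixed ones) might be:
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    printf("Framebuffer incomplete: 0x%04x\n", status);
}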

A lot of GREEN color in YUV420p -> RGB conversion in an OpenGL ES 2.0 shader on iOS

I want to make a movie player for iOS using ffmpeg and OpenGL ES 2.0, but I have a problem: the output RGB image contains a lot of green.
Here is the code, along with the relevant sizes: the frame is 480x320 (width x height) and the texture is 512x512.
I get raw YUV420p data from an ffmpeg AVFrame:
for (int i = 0, nDataLen = 0; i < 3; i++) {
    int nShift = (i == 0) ? 0 : 1;
    uint8_t *pYUVData = (uint8_t *)_frame->data[i];
    for (int j = 0; j < (mHeight >> nShift); j++) {
        memcpy(&pData->pOutBuffer[nDataLen], pYUVData, (mWidth >> nShift));
        pYUVData += _frame->linesize[i];
        nDataLen += (mWidth >> nShift);
    }
}
Then I prepare textures for the Y, U, and V channels.
//: U Texture
if (sampler1Texture) glDeleteTextures(1, &sampler1Texture);
glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &sampler1Texture);
glBindTexture(GL_TEXTURE_2D, sampler1Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glEnable(GL_TEXTURE_2D);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_LUMINANCE,
texW / 2,
texH / 2,
0,
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
NULL);
//: V Texture
if (sampler2Texture) glDeleteTextures(1, &sampler2Texture);
glActiveTexture(GL_TEXTURE2);
glGenTextures(1, &sampler2Texture);
glBindTexture(GL_TEXTURE_2D, sampler2Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glEnable(GL_TEXTURE_2D);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_LUMINANCE,
texW / 2,
texH / 2,
0,
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
NULL);
//: Y Texture
if (sampler0Texture) glDeleteTextures(1, &sampler0Texture);
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &sampler0Texture);
glBindTexture(GL_TEXTURE_2D, sampler0Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glEnable(GL_TEXTURE_2D);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_LUMINANCE,
texW,
texH,
0,
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
NULL);
The rendering part is below.
int _idxU = mFrameW * mFrameH;
int _idxV = _idxU + (_idxU / 4);
// U data
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, sampler1Texture);
glUniform1i(sampler1Uniform, 1);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW / 2, // source width
mFrameH / 2, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
&_frameData[_idxU]);
// V data
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, sampler2Texture);
glUniform1i(sampler2Texture, 2);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW / 2, // source width
mFrameH / 2, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
&_frameData[_idxV]);
// Y data
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, sampler0Texture);
glUniform1i(sampler0Uniform, 0);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW, // source width
mFrameH, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
_frameData);
The vertex shader and fragment shader are below.
attribute vec4 Position;
attribute vec2 TexCoordIn;
varying vec2 TexCoordOut;
varying vec2 TexCoordOut_UV;
uniform mat4 Projection;
uniform mat4 Modelview;
void main()
{
gl_Position = Projection * Modelview * Position;
TexCoordOut = TexCoordIn;
}
uniform sampler2D sampler0; // Y Texture Sampler
uniform sampler2D sampler1; // U Texture Sampler
uniform sampler2D sampler2; // V Texture Sampler
varying highp vec2 TexCoordOut;
void main()
{
highp float y = texture2D(sampler0, TexCoordOut).r;
highp float u = texture2D(sampler2, TexCoordOut).r - 0.5;
highp float v = texture2D(sampler1, TexCoordOut).r - 0.5;
//y = 0.0;
//u = 0.0;
//v = 0.0;
highp float r = y + 1.13983 * v;
highp float g = y - 0.39465 * u - 0.58060 * v;
highp float b = y + 2.03211 * u;
gl_FragColor = vec4(r, g, b, 1.0);
}
The Y texture (grayscale) is correct, but U and V produce a lot of green.
So the final RGB image (Y+U+V) has a lot of green color.
What's the problem?
Please help.
Thanks.
Swap the u and v uniforms (vice versa) and you will get the correct result.
So the pixel shader stays the same:
uniform sampler2D sampler0; // Y Texture Sampler
uniform sampler2D sampler1; // U Texture Sampler
uniform sampler2D sampler2; // V Texture Sampler
varying highp vec2 TexCoordOut;
void main()
{
highp float y = texture2D(sampler0, TexCoordOut).r;
highp float u = texture2D(sampler2, TexCoordOut).r - 0.5;
highp float v = texture2D(sampler1, TexCoordOut).r - 0.5;
highp float r = y + 1.13983 * v;
highp float g = y - 0.39465 * u - 0.58060 * v;
highp float b = y + 2.03211 * u;
gl_FragColor = vec4(r, g, b, 1.0);
}
and rendering code:
// RENDERING
int _idxU = mFrameW * mFrameH;
int _idxV = _idxU + (_idxU / 4);
// U data
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, sampler1Texture);
GLint sampler1Uniform = glGetUniformLocation(programStandard, "sampler2");
glUniform1i(sampler1Uniform, 1);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW / 2, // source width
mFrameH / 2, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
&_frameData[_idxU]);
// V data
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, sampler2Texture);
GLint sampler2Uniform = glGetUniformLocation(programStandard, "sampler1");
glUniform1i(sampler2Uniform, 2);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW / 2, // source width
mFrameH / 2, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
&_frameData[_idxV]);
// Y data
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, sampler0Texture);
GLint sampler0Uniform = glGetUniformLocation(programStandard, "sampler0");
glUniform1i(sampler0Uniform, 0);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW, // source width
mFrameH, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
_frameData);
//draw RECT
glVertexAttribPointer(ATTRIB_VERTEX, 3, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
//ATTRIB_TEXTUREPOSITON
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureCoords);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
free(_frameData);
[(EAGLView *)self.view presentFramebuffer];
Conclusion: u <-> v uniforms.
Since iOS supports rgb_422 textures, instead of using three luminance textures you could use one rgb_422 texture: http://www.opengl.org/registry/specs/APPLE/rgb_422.txt. A rough sketch is below.
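A rough sketch of what the upload could look like with that extension, for genuinely 4:2:2 interleaved data (enum names are from the APPLE_rgb_422 spec; tex422 and frameData422 are hypothetical names; note the extension does no color-space conversion, so the YUV-to-RGB matrix still belongs in the shader):
glBindTexture(GL_TEXTURE_2D, tex422);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texW, texH, 0,
             GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_APPLE, frameData422);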
EDIT:
Whoops, YUV420p is different from YUV422. In that case you must convert the YUV data to RGB data before uploading it as a texture, due to its planar layout; a rough CPU-side sketch follows.
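A very small sketch of such a CPU-side conversion, using the same coefficients as the shader above (yuv420pToRgb, src, and dst are hypothetical names; src points at planar YUV420p data, dst at a width*height*3 RGB buffer):
static unsigned char clamp255(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (unsigned char)v); }
void yuv420pToRgb(const unsigned char *src, unsigned char *dst, int width, int height)
{
    const unsigned char *yp = src;                               // Y plane: width * height bytes
    const unsigned char *up = yp + width * height;               // U plane: (width/2) * (height/2) bytes
    const unsigned char *vp = up + (width / 2) * (height / 2);   // V plane follows U
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            float Y = yp[y * width + x];
            float U = up[(y / 2) * (width / 2) + (x / 2)] - 128.0f;
            float V = vp[(y / 2) * (width / 2) + (x / 2)] - 128.0f;
            unsigned char *p = dst + (y * width + x) * 3;
            p[0] = clamp255((int)(Y + 1.13983f * V));                // R
            p[1] = clamp255((int)(Y - 0.39465f * U - 0.58060f * V)); // G
            p[2] = clamp255((int)(Y + 2.03211f * U));                // B
        }
    }
}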

About converting YUV(YV12) to RGB with GLSL for iOS

I'm trying to convert YUV (YV12) to RGB with a GLSL shader, using the following steps:
1. Read raw YUV (YV12) data from an image file.
2. Split out Y, Cb, and Cr from the raw YUV (YV12) data.
3. Map each plane to a texture.
4. Send the textures to the fragment shader.
But the resulting image is not the same as the raw data.
The image below is the raw data:
screenshot of raw image link (available for download)
And the image below is the converted data:
screenshot of converted image link (available for download)
And below is my source code.
- (void) readYUVFile
{
...
NSData* fileData = [NSData dataWithContentsOfFile:file];
NSInteger width = 720;
NSInteger height = 480;
NSInteger uv_width = width / 2;
NSInteger uv_height = height / 2;
NSInteger dataSize = [fileData length];
GLint nYsize = width * height;
GLint nUVsize = uv_width * uv_height;
GLint nCbOffSet = nYsize;
GLint nCrOffSet = nCbOffSet + nUVsize;
Byte* uData = spriteData + nCbOffSet;
Byte* vData = uData + nUVsize;
GLfloat imageY[ 345600 ], imageU[ 86400 ], imageV[ 86400 ];
int x, y, nIndexY = 0, nIndexUV = 0;
for( y = 0; y < height; y++ )
{
    for( x = 0; x < width; x++ )
    {
        imageY[ nIndexY ] = (GLfloat)spriteData[ nIndexY ] - 16.0;
        if( (y < uv_height) && (x < uv_width) )
        {
            imageU[ nIndexUV ] = (GLfloat)uData[ nIndexUV ] - 128.0;
            imageV[ nIndexUV ] = (GLfloat)vData[ nIndexUV ] - 128.0;
            nIndexUV++;
        }
        nIndexY++;
    }
}
m_YpixelTexture = [self textureY:imageY widthType:width heightType:height];
m_UpixelTexture = [self textureU:imageU widthType:uv_width heightType:uv_height];
m_VpixelTexture = [self textureV:imageV widthType:uv_width heightType:uv_height];
...
}
- (GLuint) textureY: (GLfloat*)imageData
widthType: (int) width
heightType: (int) height
{
GLuint texName;
glActiveTexture( GL_TEXTURE0 );
glGenTextures( 1, &texName );
glBindTexture( GL_TEXTURE_2D, texName );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData );
return texName;
}
- (GLuint) textureU: (GLfloat*)imageData
widthType: (int) width
heightType: (int) height
{
GLuint texName;
glActiveTexture( GL_TEXTURE1 );
glGenTextures( 1, &texName );
glBindTexture( GL_TEXTURE_2D, texName );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData );
return texName;
}
- (GLuint) textureV: (GLfloat*)imageData
widthType: (int) width
heightType: (int) height
{
GLuint texName;
glActiveTexture( GL_TEXTURE2 );
glGenTextures( 1, &texName );
glBindTexture( GL_TEXTURE_2D, texName );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData );
return texName;
}
And below is the source code of the fragment shader.
uniform sampler2D Ytexture; // Y Texture Sampler
uniform sampler2D Utexture; // U Texture Sampler
uniform sampler2D Vtexture; // V Texture Sampler
varying highp vec2 TexCoordOut;
void main()
{
highp float y, u, v;
highp float r, g, b;
y = texture2D( Ytexture, TexCoordOut ).p;
u = texture2D( Utexture, TexCoordOut ).p;
v = texture2D( Vtexture, TexCoordOut ).p;
y = 1.1643 * ( y - 0.0625 );
u = u - 0.5;
v = v - 0.5;
r = y + 1.5958 * v;
g = y - 0.39173 * u - 0.81290 * v;
b = y + 2.017 * u;
gl_FragColor = vec4( r, g, b, 1.0 );
}
The Y data is good, but the U and V data is not. Also, the image comes out flipped along the y-axis.
How can I resolve this?
The image is probably mirrored across the horizontal because of a simple disagreement in axes — OpenGL follows the graph paper convention where (0, 0) is the bottom left corner and the y axis heads upwards, whereas almost all graphics image formats follow the English reading order convention where (0, 0) is the top left corner and the y axis heads downwards. Just flip your input y coordinates (in the vertex shader if necessary).
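For example, in the vertex shader (a sketch; TexCoordIn stands for the incoming texture-coordinate attribute, which isn't shown in the post):
TexCoordOut = vec2(TexCoordIn.x, 1.0 - TexCoordIn.y);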
As for the colours, the second screenshot currently isn't working for me (as per my comment) but my best guess would be that you're subtracting 128 when building imageU and imageV, then subtracting 0.5 again in your shader. Presumably you actually want to do just the one or the other (specifically, do it in the shader because texture data is unsigned)? You make the same mistake with imageY but the net effect will just be to darken the image slightly rather than to shift all the colours half way around the scale.
My only other thought is that your individual textures have only one channel so it'd be better to upload them as GL_LUMINANCE rather than GL_RGBA.
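A minimal sketch of that, uploading the Y plane straight from the raw bytes (yPlaneBytes is a hypothetical pointer to the 8-bit plane; U and V would be handled the same way at half resolution, and the -16/-128 offsets then stay in the shader):
glBindTexture( GL_TEXTURE_2D, m_YpixelTexture );
glTexImage2D( GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, yPlaneBytes );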
