OpenGL ES 2.0 - Fragment shader with multiple textures - ios

I'm setting up two textures as follows:
GLKTextureInfo *texture = [GLKTextureLoader ...
glActiveTexture(GL_TEXTURE0);
glUniform1i(glGetUniformLocation(self.program, "uTextureMask"), 0);
glBindTexture(GL_TEXTURE_2D, texture.name);
texture = [GLKTextureLoader ...
glActiveTexture(GL_TEXTURE1);
glUniform1i(glGetUniformLocation(self.program, "uTextureLabel"), 1);
glBindTexture(GL_TEXTURE_2D, texture.name);
which are referenced in the fragment shader:
uniform sampler2D uTextureMask;
uniform sampler2D uTextureLabel;
The problem is that only the last texture I bind is available in the shader.
In the example above, only uTextureLabel works.
Any idea?
Thanks,
DAN
UPDATE:
glGetUniformLocation returns 13 for uTextureMask and 14 for uTextureLabel.
In the shader I do:
highp vec4 label = texture2D(uTextureLabel, vTexel);
highp vec4 mask = texture2D(uTextureMask, vTexel);
highp vec3 surface;
surface = label.rgb;
// surface = mask.rgb; // <--- DOESN'T WORK
gl_FragColor = vec4(surface, 1.0);

On iOS, GLKTextureLoader does more than load the PNG: it also creates a texture name with glGenTextures, binds it, uploads the data to the GPU, and sets default wrap-mode and min/mag filter parameters.
So the call sequence in your code is effectively:
glActiveTexture(GL_TEXTURE0)
glBindTexture(MaskTexture)
glBindTexture(LabelTexture) // this bind happens inside GLKTextureLoader
glActiveTexture(GL_TEXTURE1)
glBindTexture(LabelTexture)
As a result, LabelTexture ends up bound to both the GL_TEXTURE0 and GL_TEXTURE1 texture units.
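A minimal sketch of one way to fix the ordering (assuming maskTexture and labelTexture are the GLKTextureInfo objects returned by the two loader calls, and that the program is already in use when glUniform1i is called): perform both loads first, then bind each texture to its own unit.
// Load both textures up front so GLKTextureLoader's internal binds
// cannot overwrite a texture unit you have already configured.
GLKTextureInfo *maskTexture = [GLKTextureLoader ...
GLKTextureInfo *labelTexture = [GLKTextureLoader ...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, maskTexture.name);
glUniform1i(glGetUniformLocation(self.program, "uTextureMask"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, labelTexture.name);
glUniform1i(glGetUniformLocation(self.program, "uTextureLabel"), 1);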

Related

Does a Vertex Shader VAO need a VBO?

I am trying to use a VAO with a vertex shader. This works, but only if I set the length of the bufferData to 0. My understanding is that when using a vertex shader, a VBO is not required because my shader is generating the vertices of a quad. If I attempt to create the VAO without binding a buffer, it will also crash.
As I mentioned, this works; however, I am concerned because in Apple's Instruments the OpenGL Expert reports a severe error:
Draw Call Exceeded Array Buffer Bounds
No Buffer Data - DYFKNoBufferData
Here is the code for generating the VAO:
glGenVertexArrays(1, &vaoID); // Create our Vertex Array Object
glBindVertexArray(vaoID); // Bind VAO
GLfloat vertices[12]; // Vertices for our square
vertices[0] = -0.5; vertices[1] = 0.5; vertices[2] = 0.0; // Top left corner
vertices[3] = -0.5; vertices[4] = -0.5; vertices[5] = 0.0; // Bottom left corner
vertices[6] = 0.5; vertices[7] = 0.5; vertices[8] = 0.0; // Top Right corner
vertices[9] = 0.5; vertices[10] = -0.5; vertices[11] = 0.0; // Bottom right corner
glGenBuffers(1, &fboTextureVboID); // Create our Vertex Buffer Object
glBindBuffer(GL_ARRAY_BUFFER, fboTextureVboID); // Bind VBO
// As long as I set the buffer data length to 0
// then my glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) call works
// otherwise I get EXC_BAD_ACCESS
glBufferData(GL_ARRAY_BUFFER, 0, vertices, GL_STATIC_DRAW);
// configure vertex attributes
glEnableVertexAttribArray (...
glVertexAttribPointer(...
...
glEnableVertexAttribArray(0);
glBindVertexArray(0); // Make our Vertex Array Object inactive
Drawing with:
glUseProgram(vertexShaderProgram);
glBindVertexArray(vaoID);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Can I safely ignore Apple's errors? I am trying to use a VAO for the vertex shader because I would like to eliminate all the vertex attribute bindings in my drawing code. Or is there a better way to do this with a shader with or without a VAO?
EDIT:
Here is my vertex shader source:
#version 300 es
uniform lowp mat4 uProjectionMatrix;
in lowp vec4 a_position;
in lowp vec2 a_texCoord;
out lowp vec2 v_texCoord;
void main()
{
gl_Position = uProjectionMatrix * a_position;
v_texCoord = a_texCoord;
}
And fragment shader source:
#version 300 es
precision mediump float;
uniform lowp sampler2D uTexture;
in lowp vec2 v_texCoord;
out lowp vec4 fragmentColor;
void main()
{
fragmentColor = texture( uTexture, v_texCoord );
}
You can pick one of two things.
It is perfectly legal to have a VAO that has no attached buffer objects. However, this does not mean "create a buffer object, but don't put anything in it". It means not to attach buffer objects to the VAO. You just call glGenVertexArrays to generate the vertex array, and you're done.
No calls to glEnableVertexAttribArray. No calls to glVertexAttribPointer. If you're not using vertex arrays at all, you should not be making these calls at all.
It is also perfectly legal to have a VAO that contains buffer objects. These work like normal.
What you can't do is create a buffer object that has no storage allocated and then try to use it for vertex data. That's what happens when you pass a size of 0 to glBufferData (or skip the call entirely).
So you have to pick one side of the road or the other. Either your VAO uses one or more buffers, or it doesn't. If it uses buffers, those buffers have to have storage. If it doesn't, then it won't care.
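If you keep the buffer-backed route, a minimal sketch of the fix (reusing vaoID, fboTextureVboID, and the vertices array from the question; positionAttrib is a hypothetical location for the a_position attribute) is simply to give the buffer real storage instead of 0 bytes:
glBindVertexArray(vaoID);
glBindBuffer(GL_ARRAY_BUFFER, fboTextureVboID);
// Allocate storage for all 12 floats rather than 0 bytes.
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(positionAttrib);
glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid *)0);
glBindVertexArray(0);
If you instead drop the buffers entirely, the #version 300 es vertex shader would have to synthesize the quad corners itself (for example from gl_VertexID) rather than reading a_position.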

Working with shaders using lookup data IOS

I have lookup data provided by another piece of software, and I want to use this data with a shader, as written below:
7999745,8000001,8000258,8066051,8066308,8132357,8132614,8198407,8198664,8264457,8264969,8330762,8331019,8396812,8397069,8463118,8463375,8529168,8529425,8595218,8595730,8661523,8661780,8727573,8727830,8793879,8794136,8859929,8860186,8925979,8926491,8992284,8992541,9058334,9058591,9059104,9124897,9125154,9190947,9191204,9257252,9257509,9323302,9323559,9389352,9389865,9455658,9455915,9521708,9521965,9588013,9588270,9654063,9654320,9720113,9720626,9786419,9786676,9852469,9852726,9918774,9919031,9984824,9985081,10050874,10051387,10117180,10117437,10183230,10183743,10183999,10249792,10250049,10315842,10316355,10382148,10382405,10448198,10448455,10514503,10514760,10580553,10580810,10646603,10647116,10712909,10713166,10778959,10779216,10845264,10845521,10911314,10911571,10977364,10977877,11043670,11043927,11109720,11109977,11176025,11176282,11242075,11242332,11308125,11308638,11308895,11374688,11374945,11440738,11441250,11507043,11507300,11573093,11573350,11639399,11639656,11705449,11705706,11771499,11772011,11837804,11838061,11903854,11904111,11970160,11970417,12036210,12036467,12102260,12102772,12168565,12168822,12234615,12234872,12300921,12301178,12366971,12367228,12433277,12433278,12433535,12433536,12433793,12499330,12499587,12499588,12499845,12565382,12565639,12565896,12565897,12566154,12631691,12631948,12631949,12632206,12697743,12698000,12698001,12698258,12698515,12764052,12764310,12764311,12764568,12830105,12830362,12830363,12830620,12830621,12896414,12896671,12896672,12896929,12962466,12962723,12962724,12962981,12962982,13028775,13028776,13029033,13029290,13094827,13095084,13095086,13095343,13095344,13161137,13161138,13161395,13161396,13227189,13227446,13227447,13227704,13227705,13293498,13293499,13293756,13293757,13359550,13359807,13359808,13360065,13360066,13425859,13425860,13426117,13426119,13491912,13491913,13492170,13492427,13492428,13558221,13558222,13558479,13558480,13624273,13624274,13624531,13624532,13624789,13690582,13690583,13690840,13690841,13756634,13756635,13756892,13756893,13757151,13822688,13822945,13823202,13823203,13888996,13888997,13889254,13889255,13889512,13955049,13955306,13955307,13955564,14021357,14021358,14021615,14021616,14021873,14087410,14087667,14087668,14087925,14153719
Fragment Shader code:
precision highp float;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
varying vec2 textureCoordinate;
uniform float uAmount;
void main() {
vec4 color = texture2D(inputImageTexture, textureCoordinate);
vec2 pos = vec2((color.r + color.g + color.b)/ 3.0, 0.0);
vec4 dstColor = texture2D(inputImageTexture2, pos);
gl_FragColor = mix(
color,
dstColor,
uAmount);
}
Help me pass this data to the sampler2D inputImageTexture2.
I am thinking these values should be converted to RGB (an image texture) somehow, so I can pass them to the sampler2D.
I take it that the lookup table is 1 channel and 2D (16x16?).
You could try uploading it with glTexImage2D as GL_FLOAT with GL_LUMINANCE or GL_ALPHA, and your shader would become
vec4 dstColor = texture2D(inputImageTexture2, pos).xxxx; // GL_LUMINANCE
or
vec4 dstColor = texture2D(inputImageTexture2, pos).aaaa; // GL_ALPHA
This question is tagged as GPUImage, which I don't know at all (so what follows could be completely wrong!), but I imagine it manages its own textures so you may have to ask it to make the LUT available to your shader. Looking through the source, GPUImageRawDataInput looks like a good place to start to get your lookup table into GPUImage, maybe with something like
GPUImageRawDataInput *rawInput =
[[GPUImageRawDataInput alloc] initWithBytes:yourTable
size:CGSizeMake(16, 16)
pixelFormat:GPUPixelFormatLuminance
type:GPUPixelTypeFloat];
I found the solution: these lookup values are 32-bit integers.
Convert each 32-bit integer to RGB, then pass the RGB array to the shader as a texture.
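For reference, a rough sketch of that conversion (assuming the table has 256 entries and each value is packed as 0x00RRGGBB; both are guesses, so adjust the count and the shifts to match your tool's output). The resulting 256x1 RGB texture is then bound to the unit that feeds inputImageTexture2:
GLubyte lutRGB[256 * 3];
for (int i = 0; i < 256; i++) {
    uint32_t v = lutData[i];               // lutData: the 32-bit values listed above
    lutRGB[i * 3 + 0] = (v >> 16) & 0xFF;  // R
    lutRGB[i * 3 + 1] = (v >> 8) & 0xFF;   // G
    lutRGB[i * 3 + 2] = v & 0xFF;          // B
}
glBindTexture(GL_TEXTURE_2D, lutTexture);  // lutTexture: a texture name from glGenTextures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, lutRGB);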

Serious Lag due to OpenGL Fragment Shader

My fragment shader causes serious lag when I run it on an iPhone 4. I tried commenting out part of the calculations, but there is still some jitter even though I am barely doing any calculation in the fragment shader.
// Fragment Shader Code
uniform sampler2D texture;
varying lowp vec2 fragmentTexCoords;
uniform lowp float passAlpha;
uniform lowp vec2 inPosition;
uniform lowp float varUniform;
void main()
{
gl_FragColor = texture2D(texture, fragmentTexCoords);
lowp float disY = gl_FragCoord.y - inPosition.y;
lowp float disMax = 250.0;
lowp float coeff = 1.0 - varUniform;
gl_FragColor.rgb *= coeff;
}
//My render function is:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glFlush();
I am still not sure what the problem could be; I am sure the iPhone can handle far more complex calculations. Any ideas?
Thanks in advance.
I would try this; it avoids several situations where you are doing calculations at higher precision because of the way your variables are declared. If this improves your performance, I can explain further why it works.
// Fragment Shader Code
uniform lowp sampler2D texture;
varying lowp vec2 fragmentTexCoords;
uniform lowp float passAlpha;
uniform lowp vec2 inPosition;
uniform lowp float varUniform;
void main ()
{
lowp vec4 color = texture2D (texture, fragmentTexCoords);
lowp float disY = gl_FragCoord.y - inPosition.y;
lowp float disMax = 250.0;
lowp float coeff = 1.0 - varUniform;
color.rgb *= coeff;
gl_FragColor = color;
}
Are you certain the stutter is caused by your fragment shader? Can you verify this by removing most operations from the fragment shader? I'm asking because you're really not doing anything expensive in your shader code, and it would be odd for it to cause any performance problems. Are you sure you're not doing anything else, like uploading textures, in your update loop? What do the Xcode profiling tools say about your performance?

GLSL Shaders compile but don't draw anything on Windows

I'm trying to port some OpenGL rendering code I wrote for iOS to a Windows app. The code runs fine on iOS, but on Windows it doesn't draw anything. I've narrowed the problem down to this bit of code as fixed function stuff (such as glutSolidTorus) draws fine, but when shaders are enabled, nothing works.
Here's the rendering code:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_INDEX_ARRAY);
// Set the vertex buffer as current
this->vertexBuffer->MakeActive();
// Get a reference to the vertex description to save copying
const AT::Model::VertexDescription & vd = this->vertexBuffer->GetVertexDescription();
std::vector<GLuint> handles;
// Loop over the vertex descriptions
for (int i = 0, stride = 0; i < vd.size(); ++i)
{
// Get a handle to the vertex attribute on the shader object using the name of the current vertex description
GLint handle = shader.GetAttributeHandle(vd[i].first);
// If the handle is not an OpenGL 'Does not exist' handle
if (handle != -1)
{
glEnableVertexAttribArray(handle);
handles.push_back(handle);
// Set the pointer to the vertex attribute, with the vertex's element count,
// the size of a single vertex and the start position of the first attribute in the array
glVertexAttribPointer(handle, vd[i].second, GL_FLOAT, GL_FALSE,
sizeof(GLfloat) * (this->vertexBuffer->GetSingleVertexLength()),
(GLvoid *)stride);
}
// Add to the stride value with the size of the number of floats the vertex attr uses
stride += sizeof(GLfloat) * (vd[i].second);
}
// Draw the indexed elements using the current vertex buffer
glDrawElements(GL_TRIANGLES,
this->vertexBuffer->GetIndexArrayLength(),
GL_UNSIGNED_SHORT, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_INDEX_ARRAY);
// Disable the vertexattributearrays
for (int i = 0, stride = 0; i < handles.size(); ++i)
{
glDisableVertexAttribArray(handles[i]);
}
It's inside a function that takes a shader as a parameter, and the vertex description is a list of pairs: attribute handles to number of elements. Uniforms are being set outside this function. I'm enabling the shader for use before it's passed in to the function. Here are the two shader sources:
Vertex:
attribute vec3 position;
attribute vec2 texCoord;
attribute vec3 normal;
// Uniforms
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;
uniform mat3 NormalMatrix;
/// OUTPUTS
varying vec2 o_texCoords;
varying vec3 o_normals;
// Vertex Shader
void main()
{
// Do the normal position transform
gl_Position = Projection * View * Model * vec4(position, 1.0);
// Transform the normals to world space
o_normals = NormalMatrix * normal;
// Pass texture coords on for interpolation
o_texCoords = texCoord;
}
Fragment:
varying vec2 o_texCoords;
varying vec3 o_normals;
/// Fragment Shader
void main()
{
gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}
I'm running OpenGL 2.1 with Shader language 1.2. I'd be most appreciative for any help anyone can give me.
I see that you are assigning black as the output color of the fragment in your fragment shader. Try changing that to something like
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
and see if the objects in the scene are colored green.
I came back to this recently, and it seems that I wasn't checking for errors during rendering: it was giving me error 1285 (GL_OUT_OF_MEMORY) after calling glDrawElements(). This led me to check the vertex buffer objects to see if they contained any data, and it turns out I wasn't properly deep copying them in a wrapper class; as a result they were being deleted before any rendering happened. Fixing this sorted the issue.
Thank you for your suggestions.
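For anyone hitting the same thing: the bug described above is the classic shallow-copy problem with a RAII wrapper whose destructor deletes the GL name. A hypothetical C++11 sketch of the pattern (the class and member names are illustrative, not from the original code); either implement a genuine deep copy, or, as below, forbid copying and transfer ownership with a move:
class VertexBuffer
{
public:
    VertexBuffer() { glGenBuffers(1, &id); }
    ~VertexBuffer() { if (id != 0) glDeleteBuffers(1, &id); }

    // A compiler-generated (shallow) copy would leave two wrappers owning the
    // same GL name, and the first destructor would delete the buffer out from
    // under the other one -- exactly the "deleted before rendering" symptom.
    VertexBuffer(const VertexBuffer &) = delete;
    VertexBuffer & operator=(const VertexBuffer &) = delete;

    // Moving transfers ownership of the GL name instead of duplicating it.
    VertexBuffer(VertexBuffer && other) : id(other.id) { other.id = 0; }

    void MakeActive() const { glBindBuffer(GL_ARRAY_BUFFER, id); }

private:
    GLuint id = 0;
};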

Render YpCbCr iPhone 4 Camera Frame to an OpenGL ES 2.0 Texture in iOS 4.3

I'm trying to render a native planar image to an OpenGL ES 2.0 texture in iOS 4.3 on an iPhone 4. The texture however winds up all black. My camera is configured as such:
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
and I'm passing the pixel data to my texture like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, CVPixelBufferGetBaseAddress(cameraFrame));
My fragment shader is:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main() {
lowp vec4 color;
color = texture2D(videoFrame, textureCoordinate);
lowp vec3 convertedColor = vec3(-0.87075, 0.52975, -1.08175);
convertedColor += 1.164 * color.g; // Y
convertedColor += vec3(0.0, -0.391, 2.018) * color.b; // U
convertedColor += vec3(1.596, -0.813, 0.0) * color.r; // V
gl_FragColor = vec4(convertedColor, 1.0);
}
and my vertex shader is
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
}
This works just fine when I'm working with a BGRA image and my fragment shader only does
gl_FragColor = texture2D(videoFrame, textureCoordinate);
What if anything am I missing here? Thanks!
OK. We have a working success here. The key was passing the Y and the UV as two separate textures to the fragment shader. Here is the final shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 textureCoordinate;
uniform sampler2D videoFrame;
uniform sampler2D videoFrameUV;
const mat3 yuv2rgb = mat3(
1, 0, 1.2802,
1, -0.214821, -0.380589,
1, 2.127982, 0
);
void main() {
vec3 yuv = vec3(
1.1643 * (texture2D(videoFrame, textureCoordinate).r - 0.0625),
texture2D(videoFrameUV, textureCoordinate).r - 0.5,
texture2D(videoFrameUV, textureCoordinate).a - 0.5
);
vec3 rgb = yuv * yuv2rgb;
gl_FragColor = vec4(rgb, 1.0);
}
You'll need to create your textures like this:
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));
and then pass them like this:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glActiveTexture(GL_TEXTURE0);
glUniform1i(videoFrameUniform, 0);
glUniform1i(videoFrameUniformUV, 1);
Boy am I relieved!
P.S. The values for the yuv2rgb matrix are from here http://en.wikipedia.org/wiki/YUV and I copied code from here http://www.ogre3d.org/forums/viewtopic.php?f=5&t=25877 to figure out how to get the correct YUV values to begin with.
Your code appears to attempt to convert a 32-bit colour in 444-plus-unused-byte to RGBA. That's not going to work too well. I don't know of anything that outputs "YUVA", for one.
Also, I think the returned alpha channel is 0 for BGRA camera output, not 1, so I'm not sure why it works (IIRC to convert it to a CGImage you need to use AlphaNoneSkipLast).
The 420 "bi planar" output is structued something like this:
A header telling you where the planes are (used by CVPixelBufferGetBaseAddressOfPlane() and friends)
The Y plane: height × bytes_per_row_1 × 1 bytes
The Cb,Cr plane: height/2 × bytes_per_row_2 × 2 bytes (2 bytes per 2x2 block).
bytes_per_row_1 is approximately width and bytes_per_row_2 is approximately width/2, but you'll want to use CVPixelBufferGetBytesPerRowOfPlane() for robustness (you also might want to check the results of ..GetHeightOfPlane and ...GetWidthOfPlane).
You might have luck treating it as a 1-component width*height texture and a 2-component width/2*height/2 texture. You'll probably want to check bytes-per-row and handle the case where it isn't simply width*number-of-components (although this is probably true for most of the video modes). AIUI, you'll also want to flush the GL context before calling CVPixelBufferUnlockBaseAddress().
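A short sketch of those robustness checks, using the same locked cameraFrame pixel buffer as in the question (the uploads themselves are elided; see the accepted answer above):
CVPixelBufferLockBaseAddress(cameraFrame, 0);
size_t yWidth   = CVPixelBufferGetWidthOfPlane(cameraFrame, 0);
size_t yHeight  = CVPixelBufferGetHeightOfPlane(cameraFrame, 0);
size_t yStride  = CVPixelBufferGetBytesPerRowOfPlane(cameraFrame, 0);
size_t uvWidth  = CVPixelBufferGetWidthOfPlane(cameraFrame, 1);
size_t uvHeight = CVPixelBufferGetHeightOfPlane(cameraFrame, 1);
size_t uvStride = CVPixelBufferGetBytesPerRowOfPlane(cameraFrame, 1);
if (yStride == yWidth && uvStride == uvWidth * 2) {
    // Rows are tightly packed: the planes can be uploaded directly with
    // glTexImage2D, as in the accepted answer.
} else {
    // Rows are padded: copy each plane row by row into a tightly packed
    // buffer before uploading.
}
glFlush(); // flush GL work that reads the buffer before unlocking it
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);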
Alternatively, you can copy it all to memory into your expected format (optimizing this loop might be a bit tricky). Copying has the advantage that you don't need to worry about things accessing memory after you've unlocked the pixel buffer.
