Currently I am having problems with passing multiple textures to a GLSL shader on iOS.
I have read through several similar questions and also tried what is written in e.g. How can I pass multiple textures to a single shader, but that did not work either.
Here is my code:
[self setupVBOs];
[self compileSimpleShaders];
[self render];
In compileSimpleShaders the shader gets compiled and the uniforms are set up.
For the textures it does:
_textureUniform = glGetAttribLocation(programHandle, "Texture");
_textureUniform2 = glGetAttribLocation(programHandle, "Texture2");
In render the desired textures get bound to the uniforms like this:
glActiveTexture(GL_TEXTURE0);
glBindTexture(CVOpenGLESTextureGetTarget(_display.chromaTexture),
CVOpenGLESTextureGetName(_display.chromaTexture));
glUniform1i(_textureUniform, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(CVOpenGLESTextureGetTarget(_display.lumaTexture),
CVOpenGLESTextureGetName(_display.lumaTexture));
glUniform1i(_textureUniform2, 0);
glDisable(GL_BLEND);
I have been using GL_TEXTURE0 and GL_TEXTURE1 here because these texture slots are used when the textures get created. The textures are actually the luma and chroma textures from the iPhone camera, and they are ultimately used to calculate the corresponding RGB value.
The fragment shader I am using is very simple; it just textures a screen-filling quad with a given texture:
varying lowp vec4 DestinationColor;
varying lowp vec2 TexCoordsOut;
uniform sampler2D Texture;
uniform sampler2D Texture2;
void main(void) {
gl_FragColor = texture2D(Texture2,TexCoordsOut);
}
I have been using this to check whether both textures get uploaded correctly, by swapping Texture/Texture2 in the gl_FragColor line.
The textures themselves work perfectly fine. The problem is that only one texture gets used, regardless of which texture the quad is textured with.
Since that's the case, the problem might be that the first loaded texture gets overwritten by the second texture.
I hope someone can help me here and sees what I did wrong, because I simply don't see it.
You assign the same texture unit (0) to both sampler uniforms. They have to be different:
glUniform1i(_textureUniform, 0);
glUniform1i(_textureUniform2, 1);
Also, you can change the pixel format in AVCaptureVideoDataOutput's videoSettings to kCVPixelFormatType_32BGRA if you want to create one texture instead of separate luma and chroma textures.
The value you set on the sampler uniform has to be the number/ID of the texture unit you bound the texture to.
So in your case, since you have been using GL_TEXTURE0 and GL_TEXTURE1, you need to set the uniforms to 0 and 1.
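Putting it together, a minimal sketch of the corrected render code (note that sampler uniforms are looked up with glGetUniformLocation, not glGetAttribLocation; variable names follow the question):
// Look up the sampler uniforms (not attributes) once after linking.
GLint textureUniform = glGetUniformLocation(programHandle, "Texture");
GLint textureUniform2 = glGetUniformLocation(programHandle, "Texture2");
// Bind one texture per unit and point each sampler at its own unit.
glActiveTexture(GL_TEXTURE0);
glBindTexture(CVOpenGLESTextureGetTarget(_display.chromaTexture),
              CVOpenGLESTextureGetName(_display.chromaTexture));
glUniform1i(textureUniform, 0);  // sampler "Texture" reads unit 0
glActiveTexture(GL_TEXTURE1);
glBindTexture(CVOpenGLESTextureGetTarget(_display.lumaTexture),
              CVOpenGLESTextureGetName(_display.lumaTexture));
glUniform1i(textureUniform2, 1); // sampler "Texture2" reads unit 1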
I want to generate a texture map in WebGl in one fragment shader and then pass that texture map to another fragment shader for processing but the syntax escapes me. I believe if I understood it correctly, an example I found online said I could do something like this:
(1)
// setup frame buffer and depth buffer for the two fragment shaders.
(2)
// texture map generating frag shader:
uniform sampler2D texturemap;
void main(){
// generate texture map
vec4 coordinate_value = ...;
output_texture = texture( texturemap , coordinate_value );
// my understanding is that sampler2D will generate some kind of a mapping. how can I map coordinate_value to some other vec4, like another vec4 coordinate???
}
(3)
// second fragment shader:
uniform sampler2D same_texturemap;
void main(){
vec4 coordinate_value = ...;
vec4 value = texture2D( same_texturemap , coordinate_value );
// the returned value should be one of the color values from the first fragment shader, correct??
}
I'm not looking for anyone to provide code to help me here necessarily, but just to get some confirmation that I have an understanding of how this could work. I suppose my main confusion is over what sampler2D actually does. Is it like a dictionary or hashtable in that it maps between two values, and if so, how do I choose what those two values are? Any tips or corrections would be great.
thanks much in advance
A sampler2D is a reference to a texture unit. A texture unit holds a reference to a texture. A texture is a 2D array of data you can sample with the texture2D function. You pass it the sampler2D uniform and a normalized texture coordinate, and it returns a sampled value from the texture. I say sampled value because how that value is generated depends on the filter settings of the texture.
Output in WebGL is via the special variable gl_FragColor. The output goes to the current framebuffer, or to the canvas if no framebuffer is bound.
You probably need to read some tutorials on webgl. Here is one specifically about textures and also rendering to texture but if you're not familiar with the rest of WebGL you'll probably need to read the preceding articles.
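To make the flow concrete, here is a minimal sketch of the two fragment shaders under those assumptions (the host code would render pass 1 into a texture attached to a framebuffer, then bind that texture before pass 2; the varying name is illustrative):
// Pass 1: each pixel this shader writes becomes a texel of the texture.
precision mediump float;
void main() {
    gl_FragColor = vec4(0.25, 0.5, 0.75, 1.0); // some computed value
}
// Pass 2: reads the texture produced by pass 1.
precision mediump float;
uniform sampler2D same_texturemap; // bound to the texture from pass 1
varying vec2 v_texCoord;           // normalized [0,1] lookup coordinate
void main() {
    // returns the (filtered) color that pass 1 wrote at this coordinate
    gl_FragColor = texture2D(same_texturemap, v_texCoord);
}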
I'm implementing a paint app by using OpenGL/GLSL.
There is a feature where the user draws a "mask" using a brush with a pattern image, and meanwhile the background changes according to the brush position. Take a look at the video to understand: video
I used CALayer's mask (iOS stuff) to achieve this effect (on the video). But this implementation is very costly and the fps is pretty low, so I decided to use OpenGL for that.
For the OpenGL implementation, I use the stencil buffer for masking, i.e.:
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
// Draw mask (brush pattern)
glStencilFunc(GL_EQUAL, 1, 255);
// Draw gradient background
// Display the buffer
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
The problem: the stencil buffer doesn't work with alpha, which is why I can't use semi-transparent patterns for the brushes.
The question: How can I achieve that effect from video by using OpenGL/GLSL but without Stencil buffer?
Since your background is already generated (from the comments), you can simply use two textures in the shader to draw each of the segments. You will need to redraw all of them until the user lifts their finger, though.
So assume you have a texture with a white footprint on it and an alpha channel, footprintTextureID, and a background texture, backgroundTextureID. You need to bind both textures to separate texture units with glActiveTexture and pass the two unit indices as sampler uniforms to the shader.
Now in your vertex shader you will need to generate the relative texture coordinates from the position. There should be a line similar to gl_Position = computedPosition;, so you need to add another varying value:
backgroundTextureCoordinates = vec2((computedPosition.x+1.0)*0.5, (computedPosition.y+1.0)*0.5);
or, if you need to flip vertically,
backgroundTextureCoordinates = vec2((computedPosition.x+1.0)*0.5, (-computedPosition.y+1.0)*0.5);
(The reason for this equation is that the output vertices are in the interval [-1,1] but textures use [0,1]: [-1,1]+1 = [0,2], then [0,2]*0.5 = [0,1].)
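A minimal vertex shader along those lines might look like this (a sketch; all names other than computedPosition and the varyings used below are assumptions):
attribute vec4 position;
attribute vec2 footprintTexCoord;  // per-vertex UVs for the brush pattern
varying vec2 footprintTextureCoordinate;
varying vec2 backgroundTextureCoordinates;
void main() {
    vec4 computedPosition = position; // or your usual transform
    gl_Position = computedPosition;
    footprintTextureCoordinate = footprintTexCoord;
    // map clip space [-1,1] to texture space [0,1]
    backgroundTextureCoordinates = vec2((computedPosition.x + 1.0) * 0.5,
                                        (computedPosition.y + 1.0) * 0.5);
}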
OK, so assuming you bound all of these correctly, you now only need to multiply the colors in the fragment shader to get the blended color:
uniform sampler2D footprintTexture;
varying lowp vec2 footprintTextureCoordinate;
uniform sampler2D backgroundTexture;
varying lowp vec2 backgroundTextureCoordinates;
void main() {
lowp vec4 footprintColor = texture2D(footprintTexture, footprintTextureCoordinate);
lowp vec4 backgroundColor = texture2D(backgroundTexture, backgroundTextureCoordinates);
gl_FragColor = footprintColor*backgroundColor;
}
If you wanted, you could multiply with the alpha value from the footprint, but that only loses flexibility. As long as the footprint texture is white it makes no difference, so it is your choice.
Stencil is a boolean on/off test, so as you say it can't cope with alpha.
The only GL technique which works with alpha is blending, but due to the color change between frames you can't simply flatten this into a single layer in a single pass.
To my mind it sounds like you need to maintain multiple independent layers in off-screen buffers, and then blend them together per frame to form what is shown on screen. This gives you complete independence for how you update each layer per frame.
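A sketch of what that per-frame composite could look like, assuming each layer has already been rendered into its own texture (helper and variable names are hypothetical):
// Composite the off-screen layers back-to-front into the on-screen framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, onscreenFramebuffer);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // standard alpha blending
for (int i = 0; i < layerCount; i++) {
    glBindTexture(GL_TEXTURE_2D, layerTextures[i]);
    drawFullscreenQuad(); // hypothetical helper: draws a textured quad
}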
This question has been asked before, but quite a few years ago in my searches. The answer was always to use texture mapping, but what I really want to do is represent the star as a single vertex. You may think I'm copping out with a simplistic method, but in fact a single point source of light actually looks pretty good and realistic. However, I want to process that point of light with something like a Gaussian blur to give it a little more body when zooming in or for brighter stars. I was going to texture-map a Gaussian blur image, but if I understand things correctly I would then have to draw each star with four vertices. Maybe not so difficult, but I don't want to go there if I can just process a single vertex. Would a vertex shader do this? Can GLKBaseEffect get me there? Any suggestions?
Thanks.
You can use point sprites.
Draw Calls
You use a texture containing the image of the star, and use the typical setup to bind a texture, bind it to a sampler uniform in the shader, etc.
You draw a single vertex for each star, with GL_POINTS as the primitive type passed as the first argument to glDrawArrays()/glDrawElements(). No texture coordinates are needed.
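For example, a sketch of such a draw call (buffer setup omitted; starVertexBuffer, positionAttrib, and starCount are illustrative names):
// One vertex per star; gl_PointSize in the shader expands each into a sprite.
glBindBuffer(GL_ARRAY_BUFFER, starVertexBuffer);
glEnableVertexAttribArray(positionAttrib);
glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_POINTS, 0, starCount);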
Vertex Shader
In the vertex shader, you transform the vertex as you normally would, and also set the built-in gl_PointSize variable:
uniform float PointSize;
attribute vec4 Position;
void main() {
gl_Position = ...; // Transform Position attribute;
gl_PointSize = PointSize;
}
For the example, I used a uniform for the point size, which means that all stars will have the same size. Depending on the desired effect, you could also calculate the size based on the distance, or use an additional vertex attribute to specify a different size for each star.
Fragment Shader
In the fragment shader, you can now access the built-in gl_PointCoord variable to get the relative coordinates of the fragment within the point sprite. If your point sprite is a simple texture image, you can use it directly as the texture coordinates.
uniform sampler2D SpriteTex;
void main() {
gl_FragColor = texture2D(SpriteTex, gl_PointCoord);
}
Additional Material
I answered a somewhat similar question here: Render large circular points in modern OpenGL. Since it was for desktop OpenGL, and not for a textured sprite, this seemed worth a separate answer. But some of the steps are shared, and might be explained in more detail in the other answer.
I've been busy educating myself on this and trying it, but I'm getting strange results. It seems to work with regard to the vertex transform, because I see the points moved out on the screen, but point size and colour are not being affected. The colour seems to be some sort of default yellow with some shading between vertices.
What bothers me too is that I get error messages about built-ins in the vertex shader. Here are the vertex/fragment code and the error messages:
// Vertex shader
precision mediump float;
precision lowp int;
attribute float Pointsize;
varying vec4 color_out;
void main()
{
gl_PointSize = Pointsize;
gl_Position = gl_ModelViewMatrix * gl_Vertex;
color_out = vec4(0.0, 1.0, 0.0, 1.0); // output only green for test
}
// Fragment shader
precision mediump float;
precision lowp int;
varying vec4 color_out;
void main()
{
gl_FragColor = color_out;
}
Here's the error message:
ERROR: 0:24: Use of undeclared identifier 'gl_ModelViewMatrix'
ERROR: 0:24: Use of undeclared identifier 'gl_Vertex'
ERROR: One or more attached shaders not successfully compiled
It seems the transform is being passed from my iOS code where I'm using GLKBaseEffect, as in the following lines:
self.effect.transform.modelviewMatrix = modelViewMatrix;
[self.effect prepareToDraw];
But I'm not sure exactly what's happening, especially with the shader compile errors.
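(For context: gl_ModelViewMatrix and gl_Vertex are desktop GLSL built-ins that don't exist in GLSL ES, which is what the two compiler errors point at. A sketch of the explicit declarations such a shader would need instead; the uniform and attribute names are assumptions:)
precision mediump float;
precision lowp int;
uniform mat4 ModelViewProjectionMatrix; // supplied by the app each frame
attribute vec4 Position;                // replaces the built-in gl_Vertex
attribute float Pointsize;
varying vec4 color_out;
void main()
{
    gl_PointSize = Pointsize;
    gl_Position = ModelViewProjectionMatrix * Position;
    color_out = vec4(0.0, 1.0, 0.0, 1.0); // output only green for test
}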
I need to render an object using multi-texturing, but the two textures have different UV coordinates for the same object. One is a normal map and the other one is a light map.
Please provide any useful material regarding this.
In OpenGL ES 2 you use shaders anyway, so you're completely free to use whatever texture coordinates you like. Just introduce an additional attribute for the second texture coordinate pair and pass it through to the fragment shader, as usual:
...
attribute vec2 texCoord0;
attribute vec2 texCoord1;
varying vec2 vTexCoord0;
varying vec2 vTexCoord1;
void main()
{
...
vTexCoord0 = texCoord0;
vTexCoord1 = texCoord1;
}
And in the fragment shader use the respective coordinates to access the textures:
...
uniform sampler2D tex0;
uniform sampler2D tex1;
...
varying vec2 vTexCoord0;
varying vec2 vTexCoord1;
void main()
{
... = texture2D(tex0, vTexCoord0);
... = texture2D(tex1, vTexCoord1);
}
And of course you need to provide data to this new attribute (using glVertexAttribPointer). But if all this sounds very alien to you, then you should either delve a little deeper into GLSL shaders or actually use OpenGL ES 1. In that case you should retag your question and I will update my answer.
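A sketch of the client-side setup for that extra attribute (the buffer layout and the lightmapUVs array are illustrative):
// Look up and enable the second texture coordinate attribute.
GLint texCoord1Loc = glGetAttribLocation(program, "texCoord1");
glEnableVertexAttribArray(texCoord1Loc);
// Second set of UVs, tightly packed as 2 floats per vertex.
glVertexAttribPointer(texCoord1Loc, 2, GL_FLOAT, GL_FALSE, 0, lightmapUVs);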
EDIT: According to your update, for OpenGL ES 1 the situation is a bit different. I assume you already know how to use a single texture and specify texture coordinates for it; otherwise you should start there before delving into multi-texturing.
With glActiveTexture(GL_TEXTUREi) you can activate the ith texture unit. All following operations related to texture state only refer to the ith texture unit (like glBindTexture, but also glTexEnv and gl(En/Dis)able(GL_TEXTURE_2D)).
For specifying the texture coordinates you still use the glTexCoordPointer function, as with single texturing, but with glClientActiveTexture(GL_TEXTUREi) you can select the texture unit to which following calls to glTexCoordPointer and glEnableClientState(GL_TEXTURE_COORD_ARRAY) refer.
So it would be something like:
//bind and enable textures
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, <second texture>);
glTexEnv(<texture environment for second texture>); //maybe, if needed
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, <first texture>);
glTexEnv(<texture environment for first texture>); //maybe, if needed
glEnable(GL_TEXTURE_2D);
//set texture coordinates
glClientActiveTexture(GL_TEXTURE1);
glTexCoordPointer(<texCoords for second texture>);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glTexCoordPointer(<texCoords for first texture>);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
//other arrays, like glVertexPointer, ...
glDrawArrays(...)/glDrawElements(...);
//disable arrays
glClientActiveTexture(GL_TEXTURE1);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
//disable textures
glActiveTexture(GL_TEXTURE1);
glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glDisable(GL_TEXTURE_2D);
The reason I set the parameters for the second texture before the first texture is only so that after setting them we end up with texture unit 0 active. I think I have seen drivers cause problems when drawing while a unit other than unit 0 was active. And it's always a good idea to leave a more or less clean state at the end, meaning the default texture unit (GL_TEXTURE0) active, as otherwise code that doesn't care about multi-texturing could run into problems.
EDIT: If you use immediate mode (glBegin/glEnd) instead of vertex arrays, then of course you don't use glTexCoordPointer, and you also don't need glClientActiveTexture. You just use glMultiTexCoord(GL_TEXTUREi, ...) with the appropriate texture unit (GL_TEXTURE0, GL_TEXTURE1, ...) instead of glTexCoord(...). But if I'm informed correctly, OpenGL ES doesn't have immediate mode anyway.
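For completeness, an immediate-mode sketch (desktop GL only, as noted; coordinates are placeholders):
glBegin(GL_TRIANGLES);
glMultiTexCoord2f(GL_TEXTURE0, s0, t0); // UVs for the first texture
glMultiTexCoord2f(GL_TEXTURE1, s1, t1); // UVs for the second texture
glVertex3f(x, y, z);
// ... remaining vertices ...
glEnd();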
I want to accomplish a GLSL shader that can texture and color my vertex objects. In fact, it works for me in 2 of 3 cases:
1) If I only have a texture assigned (but no specific color, so the color is "white"), I simply get a textured object - works
2) If I have a texture and a color assigned, I get a textured object modulated with that color - works
3) If I only have a color assigned but no texture, I get a black object - doesn't work
My Shader looks like this:
varying lowp vec4 colorVarying;
varying mediump vec2 texcoordVarying;
uniform sampler2D texture;
void main(){
gl_FragColor = texture2D(texture, texcoordVarying) * colorVarying;
}
I guess that texture2D(..., ...) returns zero if no texture is assigned, so that seems to be the problem. Is there a way in GLSL to check whether no texture is assigned? (In that case, I simply want gl_FragColor = colorVarying;.)
"If" isn't really an option in a GLSL shader, if I understood correctly - any ideas how to accomplish this? Or is it really necessary to make two different shaders for the two cases?
Like you, I'd picked up the general advice that 'if' is to be avoided on current mobile hardware, but a previous optimisation discussion on Stack Overflow seemed to suggest it isn't a significant hurdle. So don't write it off. In any case, the GLSL compiler is reasonably smart and may choose to switch which code it actually runs depending on the values set for uniforms, though in that case all you're getting is the same as if you'd just supplied alternative shaders.
That said, if you're absolutely certain you want to use the same shader for all purposes and want to avoid if statements then you probably want something like:
uniform lowp float textureFlag;
[...]
gl_FragColor = textureFlag * texture2D(texture, texcoordVarying) * colorVarying +
(1.0 - textureFlag) * colorVarying;
Or even:
gl_FragColor = mix(
colorVarying,
texture2D(texture, texcoordVarying) * colorVarying,
textureFlag
);
With you setting textureFlag to 1.0 or 0.0 depending on whether you want the texture values to be factored in.
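On the CPU side that would be one uniform update per draw; a sketch (the uniform name matches the snippet above, hasTexture is a hypothetical flag):
GLint textureFlagLoc = glGetUniformLocation(program, "textureFlag");
// 1.0 when a texture is bound for this draw, 0.0 for color-only geometry.
glUniform1f(textureFlagLoc, hasTexture ? 1.0f : 0.0f);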
I'm not aware of any way to determine (i) whether a sampler2D points at an actual texture unit, or (ii) whether a particular texture unit has a texture bound to it for which data has been uploaded.