OpenGL ES 1 multi-texturing with different UV coordinates - iOS

I need to render an object using multi-texturing, but the two textures have different UV coordinates for the same object. One is a normal map and the other is a light map.
Please point me to any useful material regarding this.

In OpenGL ES 2 you use shaders anyway, so you're completely free to use whatever texture coordinates you like. Just introduce an additional attribute for the second texture coordinate pair and pass it through to the fragment shader, as usual:
...
attribute vec2 texCoord0;
attribute vec2 texCoord1;
varying vec2 vTexCoord0;
varying vec2 vTexCoord1;
void main()
{
...
vTexCoord0 = texCoord0;
vTexCoord1 = texCoord1;
}
And in the fragment shader use the respective coordinates to access the textures:
...
uniform sampler2D tex0;
uniform sampler2D tex1;
...
varying vec2 vTexCoord0;
varying vec2 vTexCoord1;
void main()
{
... = texture2D(tex0, vTexCoord0);
... = texture2D(tex1, vTexCoord1);
}
And of course you need to provide data for this new attribute (using glVertexAttribPointer), as sketched below. But if all this sounds very alien to you, then you should either delve a little deeper into GLSL shaders, or you are actually using OpenGL ES 1. In that case you should retag your question and I will update my answer.
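For illustration, a minimal sketch of feeding both attributes from an interleaved vertex struct with a VBO bound; the struct layout, the program handle and the attribute locations are assumptions (the attribute names match the shader above):
// Hypothetical interleaved layout: position plus two independent UV sets.
typedef struct {
    GLfloat position[3];
    GLfloat uv0[2]; // e.g. normal map coordinates
    GLfloat uv1[2]; // e.g. light map coordinates
} Vertex;
// Query the locations once after linking (offsetof is from <stddef.h>).
GLint texCoord0Loc = glGetAttribLocation(program, "texCoord0");
GLint texCoord1Loc = glGetAttribLocation(program, "texCoord1");
// With a VBO bound, the last argument is a byte offset into the buffer.
glEnableVertexAttribArray(texCoord0Loc);
glVertexAttribPointer(texCoord0Loc, 2, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (void*)offsetof(Vertex, uv0));
glEnableVertexAttribArray(texCoord1Loc);
glVertexAttribPointer(texCoord1Loc, 2, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (void*)offsetof(Vertex, uv1));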
EDIT: According to your update, for OpenGL ES 1 the situation is a bit different. I assume you already know how to use a single texture and specify texture coordinates for it; otherwise you should start there before delving into multi-texturing.
With glActiveTexture(GL_TEXTUREi) you can activate the ith texture unit. All following operations related to texture state only refer to the ith texture unit (like glBindTexture, but also glTexEnv and gl(En/Dis)able(GL_TEXTURE_2D)).
For specifying the texture coordinates you still use the glTexCoordPointer function, as with single texturing, but with glClientActiveTexture(GL_TEXTUREi) you can select the texture unit to which following calls to glTexCoordPointer and glEnableClientState(GL_TEXTURE_COORD_ARRAY) refer.
So it would be something like:
//bind and enable textures
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, <second texture>);
glTexEnv(<texture environment for second texture>); //maybe, if needed
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, <first texture>);
glTexEnv(<texture environment for first texture>); //maybe, if needed
glEnable(GL_TEXTURE_2D);
//set texture coordinates
glClientActiveTexture(GL_TEXTURE1);
glTexCoordPointer(<texCoords for second texture>);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glTexCoordPointer(<texCoords for first texture>);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
//other arrays, like glVertexPointer, ...
glDrawArrays(...)/glDrawElements(...);
//disable arrays
glClientActiveTexture(GL_TEXTURE1);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
//disable textures
glActiveTexture(GL_TEXTURE1);
glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glDisable(GL_TEXTURE_2D);
The reason I set the parameters for the second texture before the first is simply so that we end up with texture unit 0 active after the setup. I have seen drivers cause problems when drawing while a unit other than unit 0 was active. And it's always a good idea to leave a more or less clean state at the end, meaning the default texture unit (GL_TEXTURE0) is active; otherwise code that doesn't care about multi-texturing could run into problems.
EDIT: If you use immediate mode (glBegin/glEnd) instead of vertex arrays, then of course you don't use glTexCoordPointer, and you don't need glClientActiveTexture either. You just use glMultiTexCoord(GL_TEXTUREi, ...) with the appropriate texture unit (GL_TEXTURE0, GL_TEXTURE1, ...) instead of glTexCoord(...), as sketched below. But if I'm informed correctly, OpenGL ES doesn't have immediate mode anyway.
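For completeness, a minimal immediate-mode sketch for desktop OpenGL (again, OpenGL ES lacks glBegin/glEnd); the coordinate values are placeholders:
glBegin(GL_TRIANGLES);
// one UV pair per texture unit, then the vertex itself
glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f); // first texture's UV
glMultiTexCoord2f(GL_TEXTURE1, 0.5f, 0.5f); // second texture's UV
glVertex3f(0.0f, 0.0f, 0.0f);
// ... repeat for the remaining vertices ...
glEnd();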

Related

pass texture map to another fragment shader

I want to generate a texture map in WebGL in one fragment shader and then pass that texture map to another fragment shader for processing, but the syntax escapes me. If I understood it correctly, an example I found online said I could do something like this:
(1)
// setup frame buffer and depth buffer for the two fragment shaders.
(2)
// texture map generating frag shader:
uniform sampler2D texturemap;
void main(){
// generate texture map
vec4 coordinate_value = ...;
output_texture = texture( texturemap , coordinate_value );
// my understanding is that the sampler2D will generate some kind of a mapping. how can I map coordinate_value to some other vec4, like another vec4 coordinate???
}
(3)
// second fragment shader:
uniform sampler2D same_texturemap;
void main(){
vec4 coordinate_value = ...;
vec4 value = texture2D( same_texturemap , coordinate_value );
// the returned value should be one of the color values from the first fragment shader, correct??
}
I'm not looking for anyone to provide code to help me here necessarily, but just to get some confirmation that I have an understanding of how this could work. I suppose my main confusion is over what sampler2D actually does. Is it like a dictionary or hashtable in that it maps between two values, and if so, how do I choose what those two values are? Any tips or corrections would be great.
thanks much in advance
A sampler2D is a reference to a texture unit. A texture unit holds a reference to a texture. A texture is a 2D array of data you can pull data out of using the texture2D function. You pass it the sampler2D uniform and a normalized texture coordinate. It returns a sampled value from the texture. I say sampled value because how that value is generated depends on the filter settings of the texture.
Output in WebGL is via the special variable gl_FragColor. The output goes to the current framebuffer, or to the canvas if no framebuffer is bound.
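Since WebGL mirrors the OpenGL ES 2.0 API, the two-pass idea looks roughly like this in C-style GL ES calls; the program handles, the 256x256 size and the full-screen-quad draws are illustrative assumptions:
// Pass 1: render the generated map into a texture through a framebuffer.
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
glUseProgram(firstProgram); // its gl_FragColor writes land in tex
// ... draw a full-screen quad ...
// Pass 2: back to the default target, sample the texture just rendered.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(secondProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(glGetUniformLocation(secondProgram, "same_texturemap"), 0);
// ... draw again, reading the first pass's output ...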
You probably need to read some tutorials on WebGL. Here is one specifically about textures, and also one about rendering to texture; but if you're not familiar with the rest of WebGL you'll probably need to read the preceding articles.

OpenGL iOS Passing multiple Textures to Shader

Currently I am having problems passing multiple textures to a GLSL shader on iOS.
I have read through several similar questions and also tried what's written in e.g. How can I pass multiple textures to a single shader, but that did not work either.
Here is my code:
[self setupVBOs];
[self compileSimpleShaders];
[self render];
In compileSimpleShaders the shader gets compiled and the uniforms get set.
For the textures it does:
_textureUniform = glGetAttribLocation(programHandle, "Texture");
_textureUniform2 = glGetAttribLocation(programHandle, "Texture2");
In render the desired textures get bound to the uniforms as follows:
glActiveTexture(GL_TEXTURE0);
glBindTexture(CVOpenGLESTextureGetTarget(_display.chromaTexture),
CVOpenGLESTextureGetName(_display.chromaTexture));
glUniform1i(_textureUniform, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(CVOpenGLESTextureGetTarget(_display.lumaTexture),
CVOpenGLESTextureGetName(_display.lumaTexture));
glUniform1i(_textureUniform2, 0);
glDisable(GL_BLEND);
I have been using GL_TEXTURE0 and GL_TEXTURE1 here because those texture slots are used when the textures are created; they are actually the luma and chroma textures from the iPhone camera, which are ultimately used to calculate the corresponding RGB values.
The fragment shader I am using is very simple; it just textures a screen-filling quad with a given texture:
varying lowp vec4 DestinationColor;
varying lowp vec2 TexCoordsOut;
uniform sampler2D Texture;
uniform sampler2D Texture2;
void main(void) {
gl_FragColor = texture2D(Texture2,TexCoordsOut);
}
I have been using this to check whether both textures get uploaded correctly, by swapping Texture/Texture2 in the gl_FragColor line.
The textures themselves work perfectly fine. The problem is that only one texture gets used, regardless of which texture the quad is textured with.
Since that's the case, the problem might be that the first loaded texture gets overwritten by the second texture.
I hope someone can help me here and sees what I did wrong, because I simply don't see it.
You set the same texture unit value (0) on both sampler uniforms. They have to be different:
glUniform1i(_textureUniform, 0);
glUniform1i(_textureUniform2, 1);
Also, you can change the color space of the AVCaptureVideoDataOutput videoSettings to kCVPixelFormatType_32BGRA if you want to create one texture instead of separate luma and chroma textures.
The value you set on a sampler uniform has to be the number of the texture unit you bound the texture to.
So in your case, since you've been using TEXTURE0 and TEXTURE1, you need to set the uniforms to 0 and 1, as in the sketch below.
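Putting both points together, a minimal sketch of the corrected render code; the _display accessors and uniform handles come from the question. (Note also that sampler uniform locations should come from glGetUniformLocation, not glGetAttribLocation.)
// Unit 0: chroma texture, read through the "Texture" sampler.
glActiveTexture(GL_TEXTURE0);
glBindTexture(CVOpenGLESTextureGetTarget(_display.chromaTexture),
              CVOpenGLESTextureGetName(_display.chromaTexture));
glUniform1i(_textureUniform, 0); // sampler reads from unit 0
// Unit 1: luma texture, read through the "Texture2" sampler.
glActiveTexture(GL_TEXTURE1);
glBindTexture(CVOpenGLESTextureGetTarget(_display.lumaTexture),
              CVOpenGLESTextureGetName(_display.lumaTexture));
glUniform1i(_textureUniform2, 1); // sampler reads from unit 1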

How to draw a star in iOS OpenGL ES 2.0

This question has been asked before, but quite a few years ago from what I can find. The answer was always to use texture mapping, but what I really want to do is represent the star as a single vertex. You may think I'm copping out with a simplistic method, but in fact a single point source of light actually looks pretty good and realistic. However, I want to process that point of light with something like a Gaussian blur to give it a little more body when zooming in, or for brighter stars. I was going to texture-map a Gaussian blur image, but if I understand things correctly I would then have to draw each star with 4 vertices. Maybe not so difficult, but I don't want to go there if I can just process a single vertex. Would a vertex shader do this? Can GLKBaseEffect get me there? Any suggestions?
Thanks.
You can use point sprites.
Draw Calls
You use a texture containing the image of the star, and use the typical setup to bind a texture, bind it to a sampler uniform in the shader, etc.
You draw a single vertex for each star, with GL_POINTS as the primitive type passed as the first argument to glDrawArrays()/glDrawElements(). No texture coordinates are needed.
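A minimal sketch of that draw call, assuming starCount points with only a position attribute; the program and texture names are illustrative:
// One vertex per star; the point sprite supplies its own texture coordinates.
glUseProgram(starProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, starTexture);
glUniform1i(glGetUniformLocation(starProgram, "SpriteTex"), 0);
glDrawArrays(GL_POINTS, 0, starCount);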
Vertex Shader
In the vertex shader, you transform the vertex as you normally would, and also set the built-in gl_PointSize variable:
uniform float PointSize;
attribute vec4 Position;
void main() {
gl_Position = ...; // Transform Position attribute;
gl_PointSize = PointSize;
}
For the example, I used a uniform for the point size, which means that all stars will have the same size. Depending on the desired effect, you could also calculate the size based on the distance, or use an additional vertex attribute to specify a different size for each star.
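Setting that uniform from the client side follows the usual pattern; the program handle and the size value are assumptions for illustration:
GLint pointSizeLoc = glGetUniformLocation(starProgram, "PointSize");
glUniform1f(pointSizeLoc, 16.0f); // point sprite size in pixels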
Fragment Shader
In the fragment shader, you can now access the built-in gl_PointCoord variable to get the relative coordinates of the fragment within the point sprite. If your point sprite is a simple texture image, you can use it directly as the texture coordinates.
uniform sampler2D SpriteTex;
void main() {
gl_FragColor = texture2D(SpriteTex, gl_PointCoord);
}
Additional Material
I answered a somewhat similar question here: Render large circular points in modern OpenGL. Since it was for desktop OpenGL, and not for a textured sprite, this seemed worth a separate answer. But some of the steps are shared, and might be explained in more detail in the other answer.
I've been busy educating myself on this and trying it, but I'm getting strange results. It seems to work with regard to the vertex transform, because I see the points moved out on the screen, but point size and colour are not being affected. The colour seems to be some sort of default yellow with some shading between vertices.
What bothers me too is that I get error messages about built-ins in the vertex shader. Here are the vertex/fragment shaders and the error messages:
// Vertex shader
precision mediump float;
precision lowp int;
attribute float Pointsize;
varying vec4 color_out;
void main()
{
gl_PointSize = Pointsize;
gl_Position = gl_ModelViewMatrix * gl_Vertex;
color_out = vec4(0.0, 1.0, 0.0, 1.0); // output only green for test
}
// Fragment shader
precision mediump float;
precision lowp int;
varying vec4 color_out;
void main()
{
gl_FragColor = color_out;
}
Here are the error messages:
ERROR: 0:24: Use of undeclared identifier 'gl_ModelViewMatrix'
ERROR: 0:24: Use of undeclared identifier 'gl_Vertex'
ERROR: One or more attached shaders not successfully compiled
It seems the transform is being passed in from my iOS code, where I'm using GLKBaseEffect, as in the following lines:
self.effect.transform.modelviewMatrix = modelViewMatrix;
[self.effect prepareToDraw];
But I'm not sure exactly what's happening, especially with the shader compile errors.

Pass Texture to Uniform with CVOpenGLESTextureCache in OpenGL ES

I'm trying to apply a video file as a texture in OpenGL ES on iOS 5.0+ using CVOpenGLESTextureCache.
I've found Apple's RosyWriter sample code, and have been reading through it.
The question I have is:
How are the textures finally being delivered to the uniforms in the fragment shader?
In the RosyWriterPreviewView class, I follow it all the way up to
glBindTexture(CVOpenGLESTextureGetTarget(texture),
CVOpenGLESTextureGetName(texture))
after which some texture parameters are specified.
However, I don't see the texture uniform (sampler2D videoframe) ever being explicitly referenced by the sample code. The texture-sending code I've become used to would look something like:
GLint uniform = glGetUniformLocation(program, "u_uniformName");
with a subsequent call to actually send the texture to the uniform:
glUniform1i(GLint location, GLint x);
So I know that SOMEhow RosyWriter is delivering the texture to the uniform in its fragment shader, but I can't see how and where it's happening.
In fact, the sample code includes the comment where it builds up the OpenGL program:
// we don't need to get uniform locations in this example
Any help on why this is & how the texture is getting sent over would be great.
In the RosyWriter example, I think the reason they're able to get away without ever calling glUniform1i() for the videoframe uniform is that they bind the input texture to texture unit 0.
When specifying a uniform value for a texture, the value you use is the number of the texture unit that texture is bound to. If you don't set a value for a uniform, I believe it defaults to 0, so by always binding the texture to unit 0 they never have to set a value for the videoframe uniform; it will still pull in the texture attached to unit 0.
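Being explicit costs little, though. A sketch of the equivalent code with the uniform set by hand; the program handle name is an assumption:
// Explicit form of what RosyWriter relies on implicitly.
glActiveTexture(GL_TEXTURE0); // select texture unit 0
glBindTexture(CVOpenGLESTextureGetTarget(texture),
              CVOpenGLESTextureGetName(texture));
GLint loc = glGetUniformLocation(program, "videoframe");
glUniform1i(loc, 0); // the sampler reads from texture unit 0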

What is a good way to manage state when swapping out Direct3D 11 shaders?

I currently have two shaders: a processing shader (both vertex and pixel) which calculates lighting and projection transformations and renders to a texture, and a postprocessing shader (again both vertex and pixel) which reads the rendered texture and outputs it to the screen.
Once I've rendered my scene to a texture, I swap the immediate context's vertex and pixel shaders for my postprocessing ones, but I'm not sure how I should manage the state (e.g. my texture parameters and my constant buffers). Swapping shaders and then manually resetting the constant buffers and textures twice each frame seems incredibly wasteful, and kind of defeats the point of constant buffers in the first place. But as far as I can see you can't set the data on the shader object; it has to be passed to the context.
What do other people suggest for fairly simple and efficient ways of managing variables and textures when swapping in and out shaders?
Since you've had no answers from D3D11 experts, I offer my limited D3D9 experience, for what it's worth.
... twice each frame seems incredibly wasteful ...
"Seems" is a suspicious word there. Have you measured the performance hit? Twice per frame doesn't sound too bad.
For textures, I assign my texture registers according to purpose. I use registers < s4 for material-specific textures, and >= s4 for render target textures which are assigned only once at the beginning of the game. So for example my main scene shader has the following:
sampler2D DiffuseMap : register(s0);
sampler2D DetailMap : register(s1);
sampler2D DepthMap : register(s4);
sampler2D ShadowMap : register(s5);
sampler2D LightMap : register(s6);
sampler2D PrevFrame : register(s7);
So my particle shader has a reduced set, but the DepthMap texture is still in the same slot:
sampler2D Tex : register(s0);
sampler2D DepthMap : register(s4);
I don't know how much of this is applicable to D3D11, but I hope it's of some use.
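For what it's worth, the same slot-per-purpose strategy appears to map directly onto D3D11 shader resource slots (t registers instead of s registers). A minimal sketch using the C COM bindings; the context and view variables are assumptions:
/* Keep long-lived render-target views in fixed high slots so they
   survive shader swaps; rebind only the material textures per draw.
   Requires #define COBJMACROS before including d3d11.h. */
ID3D11ShaderResourceView* materialViews[2] = { diffuseSRV, detailSRV };
ID3D11DeviceContext_PSSetShaderResources(context, 0, 2, materialViews); /* t0-t1, per material */
ID3D11DeviceContext_PSSetShaderResources(context, 4, 1, &depthSRV); /* t4, bound once */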
