I have some problems with sphere map texturing in WebGL.
My texture:
Now I texture a sphere. Everything is OK if the sphere is in front of the camera:
The sphere is a unit-sphere (r = 1), defined with longitudes and latitudes.
But I get some artifacts if I translate the sphere by -2.5 in the x-direction (without rotating the camera):
This image is without mipmapping, and the following image is with mipmapping:
The vertices and normals seem to be OK.
Vertex shader:
precision highp float;
uniform mat4 mvMatrix;   // transforms the vertex from model space to view space
uniform mat4 mvpMatrix;  // transforms the vertex from model space to clip space
uniform mat3 mvNMatrix;  // transforms the vertex normal from model space to view space
attribute vec4 mV;       // vertex in model space
attribute vec3 mVN;      // vertex normal in model space
varying vec2 vN;
void main(void)
{
    vec3 e = normalize( vec3( mvMatrix * mV ) );
    vec3 n = normalize( mvNMatrix * mVN );
    vec3 r = reflect( e, n );
    //float d = dot(n, e);
    //vec3 r = e - 2.0 * d * n;
    float m = 2.0 * sqrt(
        pow( r.x, 2.0 ) +
        pow( r.y, 2.0 ) +
        pow( r.z + 1.0, 2.0 )
    );
    vN.s = r.x / m + 0.5;
    vN.t = r.y / m + 0.5;
    gl_Position = mvpMatrix * mV;
}
And the fragment shader:
precision highp float;
uniform sampler2D uSampler;
varying vec2 vN;
void main(void)
{
    vec3 base = texture2D( uSampler, vN ).rgb;
    gl_FragColor = vec4( base, 1.0 );
}
Does anybody know why I get these artifacts? I am working with Windows and Firefox.
Your surface normals are backward, or you're culling the wrong side in general. You've done something like an azimuthal projection, but you're rendering the inside of the sphere instead of the outside, so you're seeing more than 180 degrees, including the 'bridge' that separates front and back. See also: map projections, which show that an azimuthal mapping is conformal everywhere but excludes at least the true equator relative to the line of sight.
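If that is the case, a minimal sketch of the fix in the vertex shader, assuming the sphere's normals were generated pointing inward, is to flip the normal before computing the reflection:

vec3 e = normalize( vec3( mvMatrix * mV ) );
// Assumption: the normals point into the sphere; negate them so the
// reflection vector is computed for the outside surface.
vec3 n = -normalize( mvNMatrix * mVN );
vec3 r = reflect( e, n );

Alternatively, enable back-face culling (gl.enable(gl.CULL_FACE)) so the inside of the sphere is never rasterized; which fix applies depends on how the sphere's winding and normals were generated.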
Related
Instead of giving my shaders values from -1 to 1, I would prefer to give them pixel values, like with the 2D canvas context. So, according to what I read, I added a uniform variable which I set to the size of the canvas, and I divide by it.
But I must be missing something; the rendering is way too big...
gl_.resolutionLocation = gl.getUniformLocation( gl_.program , "u_resolution" );
gl.uniform4f(gl_.resolutionLocation , game.w , game.h , game.w , game.h );
My vertex shader:
attribute vec4 position;
attribute vec2 texcoord;
uniform vec4 u_resolution;
uniform mat4 u_matrix;
varying vec3 v_texcoord;
void main() {
    vec4 zeroToOne = position / u_resolution;
    gl_Position = u_matrix * zeroToOne;
    v_texcoord = vec3(texcoord.xy, 1) * abs(position.x);
    v_texcoord = v_texcoord / u_resolution.xyz;
}
My fragment shader:
precision mediump float;
varying vec3 v_texcoord;
uniform sampler2D tex;
uniform float alpha;
void main()
{
    gl_FragColor = texture2DProj(tex, v_texcoord);
    gl_FragColor.rgb *= gl_FragColor.a;
}
If you want to stay in pixels with code like what you have, then you'd want to apply the conversion to clip space after you've done everything in pixels.
In other words, the code would be something like
rotatedPixelPosition = rotationMatrix * pixelPosition
clipSpacePosition = (rotatedPixelPosition / resolution) * 2.0 - 1.0;
So you'd want
vec4 rotatedPosition = u_matrix * position;
vec2 zeroToOne = rotatedPosition.xy / u_resolution.xy;
vec2 zeroToTwo = zeroToOne * 2.0;
vec2 minusOneToPlusOne = zeroToTwo - 1.0;
vec2 clipspacePositiveYDown = minusOneToPlusOne * vec2(1, -1);
gl_Position = vec4(clipspacePositiveYDown, 0, 1);
If you do that and set u_matrix to the identity, then positions given in pixels should appear at those pixel positions. If u_matrix is strictly a rotation matrix, the positions will rotate around the top-left corner, since rotation always happens around 0 and the conversion above puts 0 at the top-left corner.
But really there's no reason to convert from pixels to clip space by hand. You can instead convert and rotate all in the same matrix. This article covers that process: it starts with translation, rotation, scale, and conversion from pixels to clip space with no matrices, and then converts it to something that does all of that combined using a single matrix.
Effectively
matrix = scaleYByMinusMatrix *
subtract1FromXYMatrix *
scaleXYBy2Matrix *
scaleXYBy1OverResolutionMatrix *
translationInPixelSpaceMatrix *
rotationInPixelSpaceMatrix *
scaleInPixelSpaceMatrix;
And then in your shader you only need
gl_Position = u_matrix * vec4(position, 0, 1);
Those top four matrices are easy to compute as a single matrix, often called an orthographic projection, in which case it simplifies to
matrix = projectionMatrix *
translationInPixelSpaceMatrix *
rotationInPixelSpaceMatrix *
scaleInPixelSpaceMatrix;
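For illustration only, here is a minimal sketch of what that pixels-to-clip-space projection amounts to, written as a mat4 built directly in the vertex shader (in practice you would build it in JavaScript and upload it; u_resolution is assumed to hold the canvas size in pixels):

// GLSL mat4 constructors are column-major: each group of four values below is one column.
mat4 pixelsToClip = mat4(
    2.0 / u_resolution.x,  0.0,                   0.0, 0.0,   // scale x by 2/width
    0.0,                  -2.0 / u_resolution.y,  0.0, 0.0,   // scale y by 2/height and flip so +y is down
    0.0,                   0.0,                   1.0, 0.0,
   -1.0,                   1.0,                   0.0, 1.0);  // move (0, 0) to the top-left corner
gl_Position = pixelsToClip * u_matrix * position;             // u_matrix stays entirely in pixel space

This is exactly the top four matrices of the product above folded into one matrix.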
There's also this article, which reproduces the matrix stack from Canvas 2D in WebGL.
I've seen different methods for shadow mapping all around the internet, but I've only seen one method for mapping the shadows cast by a point light source, i.e. cube mapping. Even though I've heard of it, I've never seen an actual explanation of it.
I started writing this code before I had heard of cube mapping. My goal with this code was to map the shadow depths from spherical coordinates to a 2D texture.
I've simplified the coloring of the fragments for now in order to better visualize what's happening.
Basically, the models are a sphere of radius 2.0 at coordinates (0.0, 0.0, -5.0) and a hyperboloid of height 1.0 at (0.0, 0.0, -2.0), with the light source at (0.0, 0.0, 8.0).
If I scale the depth values (as noted in the code) by an inverse factor of less than 9.6, both objects appear colored entirely with the ambient color. Greater than 9.6, and they slowly become normally textured. I tried to make an example in jsFiddle, but I couldn't get textures to work.
The method isn't working at all, and I'm lost.
<script id="shadow-vs" type="x-shader/x-vertex">
attribute vec3 aVertexPosition;
varying float vDepth;
uniform vec3 uLightLocation;
uniform mat4 uMMatrix;
void main(void){
    const float I_PI = 0.318309886183790671537767; //Inverse pi
    vec4 aPos = uMMatrix * vec4(aVertexPosition, 1.0); //The actual position of the vertex
    vec3 position = aPos.xyz - uLightLocation; //The position of the vertex relative to the light source i.e. "the vector"
    float len = length(position);
    float theta = 2.0 * acos(position.y/len) * I_PI - 1.0; //The angle of the vector from the xz plane bound between -1 and 1
    float phi = atan(position.z, position.x) * I_PI; //The angle of the vector on the xz plane bound between -1 and 1
    vDepth = len; //Divided by some scale. The depth of the vertex from the light source
    gl_Position = vec4(phi, theta, len, 1.0);
}
</script>
<script id="shadow-fs" type="x-shader/x-fragment">
precision mediump float;
varying float vDepth;
void main(void){
    gl_FragColor = vec4(vDepth, 0.0, 0.0, 1.0); //Records the depth in the red channel of the fragment color
}
</script>
<script id="shader-vs" type="x-shader/x-vertex">
attribute vec3 aVertexPosition;
attribute vec3 aVertexNormal;
attribute vec2 aTextureCoord;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform mat4 uMMatrix;
uniform mat3 uNMatrix;
varying vec2 vTextureCoord;
varying vec3 vTransformedNormal;
varying vec4 vPosition;
varying vec4 aPos;
void main(void) {
    aPos = uMMatrix * vec4(aVertexPosition, 1.0); //The actual position of the vertex
    vPosition = uMVMatrix * uMMatrix * vec4(aVertexPosition, 1.0);
    gl_Position = uPMatrix * vPosition;
    vTextureCoord = aTextureCoord;
    vTransformedNormal = normalize(uNMatrix * mat3(uMMatrix) * aVertexNormal);
}
</script>
<script id="shader-fs" type="x-shader/x-fragment">
precision mediump float;
varying vec2 vTextureCoord;
varying vec3 vTransformedNormal;
varying vec4 vPosition;
varying vec4 aPos;
uniform sampler2D uSampler;
uniform sampler2D uShSampler;
uniform vec3 uLightLocation;
uniform vec3 uAmbientColor;
uniform vec4 uLightColor;
void main(void) {
    const float I_PI = 0.318309886183790671537767;
    vec3 position = aPos.xyz - uLightLocation; //The position of the vertex relative to the light source i.e. "the vector"
    float len = length(position);
    float theta = acos(position.y/len) * I_PI; //The angle of the vector from the xz axis bound between 0 and 1
    float phi = 0.5 + 0.5 * atan(position.z, position.x) * I_PI; //The angle of the vector on the xz axis bound between 0 and 1
    float posDepth = len; //Divided by some scale. The depth of the vertex from the light source
    vec4 shadowMap = texture2D(uShSampler, vec2(phi, theta)); //The color at the texture coordinates of the current vertex
    float shadowDepth = shadowMap.r; //The depth of the vertex closest to the light source
    if (posDepth > shadowDepth){ //Check if this vertex is further away from the light source than the closest vertex
        gl_FragColor = vec4(uAmbientColor, 1.0);
    }
    else{
        gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
    }
}
</script>
I'm currently trying to write a shader that should include a simple point light in OpenGL ES 2.0, but it's not quite working.
I built my own small scene graph, and each object (currently only boxes) can have its own translation/rotation/scale; rendering works fine. Each box assigns its own modelView and normal matrix, and all of them use the same projection matrix.
For each object I pass the matrices and the light position to the shader as uniforms.
If the object does not rotate, the light works fine, but as soon as the object rotates, the light seems to rotate with the object instead of staying at the same position.
Here is some code.
First, creating the matrices:
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 0.0f);
Each node computes its own transformation matrix containing its translation/rotation/scale and multiplies it with the modelViewMatrix:
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, transformation);
This matrix is passed to the shader, and after the object has been rendered the old matrix is restored.
The normal matrix is calculated as follows:
GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
Vertex shader:
attribute vec4 Position;
attribute vec2 TexCoordIn;
attribute vec3 Normal;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix;
uniform vec3 lightPosition;
varying vec2 TexCoordOut;
varying vec3 n, PointToLight;
void main(void) {
    gl_Position = modelViewProjectionMatrix * Position;
    n = normalMatrix * Normal;
    PointToLight = ((modelViewMatrix * vec4(lightPosition,1.0)) - (modelViewMatrix * Position)).xyz;
    // Pass texCoord
    TexCoordOut = TexCoordIn;
}
Fragment shader:
varying lowp vec2 TexCoordOut;
varying highp vec3 n, PointToLight;
uniform sampler2D Texture;
void main(void) {
    gl_FragColor = texture2D(Texture, TexCoordOut);
    highp vec3 nn = normalize(n);
    highp vec3 L = normalize(PointToLight);
    lowp float NdotL = clamp(dot(n, L), -0.8, 1.0);
    gl_FragColor *= (NdotL + 1.) / 2.;
}
I guess the PointToLight is computed wrong, but I can't figure out what's going wrong.
I finally figured out what went wrong.
Instead of multiplying the lightPosition by the modelViewMatrix, I just need to multiply it by the viewMatrix, which contains only the camera transformation and not the box's transformation:
PointToLight = ((viewMatrix * vec4(lightPosition,1.0)) - (viewMatrix * modelMatrix * Position)).xyz;
Now it works fine.
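For clarity, here is a minimal sketch of the corrected vertex shader, assuming the application now uploads the view and model matrices as separate uniforms (the names viewMatrix and modelMatrix are illustrative, not from the original code):

attribute vec4 Position;
attribute vec3 Normal;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelMatrix;    // the box's own translation/rotation/scale
uniform mat4 viewMatrix;     // the camera transform only
uniform mat3 normalMatrix;
uniform vec3 lightPosition;  // light position in world space
varying vec3 n, PointToLight;
void main(void) {
    gl_Position = modelViewProjectionMatrix * Position;
    n = normalMatrix * Normal;
    // Transform the light with the view matrix only, and the vertex with view * model,
    // so the light no longer inherits the box's rotation.
    vec4 lightEye  = viewMatrix * vec4(lightPosition, 1.0);
    vec4 vertexEye = viewMatrix * modelMatrix * Position;
    PointToLight = (lightEye - vertexEye).xyz;
}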
I am finding that in my fragment shader, these two statements give identical output:
// #1
// pos is set from gl_Position in vertex shader
highp vec2 texc = ((pos.xy / pos.w) + 1.0) / 2.0;
// #2 - equivalent?
highp vec2 texc2 = gl_FragCoord.xy/uWinDims.xy;
If this is correct, could you please explain the math? I understand #2, which is what I came up with, but saw #1 in a paper. Is this an NDC (normalized device coordinate) calculation?
The context is that I am using the texture coordinates with an FBO the same size as the viewport. It's all working, but I'd like to understand the math.
Relevant portion of vertex shader:
attribute vec4 position;
uniform mat4 modelViewProjectionMatrix;
varying lowp vec4 vColor;
// transformed position
varying highp vec4 pos;
void main()
{
    gl_Position = modelViewProjectionMatrix * position;
    // for fragment shader
    pos = gl_Position;
    vColor = aColor;
}
Relevant portion of fragment shader:
// transformed position - from vsh
varying highp vec4 pos;
// viewport dimensions
uniform highp vec2 uWinDims;
void main()
{
    highp vec2 texc = ((pos.xy / pos.w) + 1.0) / 2.0;
    // equivalent?
    highp vec2 texc2 = gl_FragCoord.xy/uWinDims.xy;
    ...
}
(pos.xy / pos.w) is the coordinate value in normalized device coordinates (NDC). This value ranges from -1 to 1 in each dimension.
(NDC + 1.0)/2.0 changes the range from (-1 to 1) to (0 to 1) (0 on the left of the screen, and 1 on the right, similar for top/bottom).
Alternatively, gl_FragCoord gives the coordinate in pixels, so it ranges from (0 to width) and (0 to height).
Dividing this value by width and height (uWinDims), gives the position again from 0 on the left side of the screen, to 1 on the right side.
So yes, they appear to be equivalent.
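A short sketch of the algebra, assuming the viewport origin is (0, 0) and its size equals uWinDims:

// ndc = pos.xy / pos.w                               ranges from -1 to 1
// gl_FragCoord.xy = uWinDims * (ndc + 1.0) / 2.0     ranges from 0 to uWinDims
// therefore
// gl_FragCoord.xy / uWinDims = (ndc + 1.0) / 2.0 = texc
highp vec2 ndc   = pos.xy / pos.w;
highp vec2 texc  = (ndc + 1.0) / 2.0;
highp vec2 texc2 = gl_FragCoord.xy / uWinDims.xy;  // same value, derived from window coordinates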
I have some computations (below) in my fragment shader function, which is called a huge number of times. I'd like to know if it is possible to optimize this code. I took a look at the OpenGL.org GLSL optimization page and made some modifications, but is it possible to make this code faster?
uniform int mn;
highp float Nx;
highp float Ny;
highp float Nz;
highp float invXTMax;
highp float invYTMax;
int m;
int n;
highp vec4 func(in highp vec3 texCoords3D)
{
    // tile index
    int Ti = int(texCoords3D.z * Nz);
    // (r, c) position of the tile within the texture unit
    int r = Ti / n; // integer division
    int c = Ti - r * n;
    // x/y offsets in pixels of the tile origin within the texture unit
    highp float xOff = float(c) * Nx;
    highp float yOff = float(r) * Ny;
    // 2D texture coordinates
    highp vec2 texCoords2D;
    texCoords2D.x = (Nx * texCoords3D.x + xOff) * invXTMax;
    texCoords2D.y = (Ny * texCoords3D.y + yOff) * invYTMax;
    return texture2D(uSamplerTex0, texCoords2D);
}
Edit:
To give some context, func() is used as part of a ray casting setup. It is called up to 300 times from main() for each fragment.
It is very easy to vectorize the code as follows:
highp vec3 N;
highp vec2 invTMax;
highp vec4 func(in highp vec3 texCoords3D)
{
    // tile index
    int Ti = int(texCoords3D.z * N.z);
    // (r, c) position of the tile within the texture unit
    int r = Ti / n;
    int c = Ti - r * n;
    // x/y offsets in pixels of the tile origin within the texture unit
    highp vec2 Off = vec2( float(c), float(r) ) * N.xy;
    // 2D texture coordinates
    highp vec2 texCoords2D = ( N.xy * texCoords3D.xy + Off ) * invTMax;
    return texture2D(uSamplerTex0, texCoords2D);
}
This helps make sure the similar calculations run in parallel.
Modifying the texture coordinates instead of using the ones passed into the fragment shader creates a dynamic texture read, which is one of the largest performance hits on earlier hardware. Check the last section, on Dynamic Texture Lookups:
https://developer.apple.com/library/ios/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/BestPracticesforShaders/BestPracticesforShaders.html
They suggest moving the texture coordinate calculation up into the vertex shader. It looks like you can do that without much issue, if I understand the intent of the code correctly. You're adding offset and tile support for fine adjustments, scaling, and animation of your UVs (and thus textures)? Thought so. Use this.
//
// Vertex Shader
//
attribute vec4 position;
attribute vec2 texture;
uniform mat4 modelViewProjectionMatrix;
// tiling parameters:
// -- x and y components of the tiling (x, y)
// -- x and y components of the offset (z, w)
// a value of vec4(1.0, 1.0, 0.0, 0.0) means no adjustment
uniform vec4 texture_ST;
// UV calculated in the vertex shader; GL will interpolate it over the pixels
// and prefetch the texel to avoid a dynamic texture read on pre-ES 3.0 hardware.
// This should be highp in the fragment shader.
varying vec2 uv;
void main()
{
    uv = (texture.xy * texture_ST.xy) + texture_ST.zw;
    gl_Position = modelViewProjectionMatrix * position;
}
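And a matching fragment shader might look like the following (a sketch, assuming the sampler uniform is the question's uSamplerTex0): the UV arrives already interpolated, so the lookup is no longer a dynamic texture read.

//
// Fragment Shader
//
precision mediump float;
// highp to match the note above; the interpolated UV needs the extra precision
varying highp vec2 uv;
uniform sampler2D uSamplerTex0;
void main()
{
    // plain, non-dependent texture fetch using the prefetched coordinates
    gl_FragColor = texture2D(uSamplerTex0, uv);
}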