Specular Lighting breaks at origin (0, 0, 0) - directx

I'm having trouble getting specular lighting to work. It looks like there is some kind of bug in my application which I'm not able to trace.
Light is coming from the front (the screenshots show the camera looking in the -z direction; -x is on the left).
A simple specular-only output shows the following failure:
Code used:
float3 n = normalize(input.normal); // world space normal, mul(float4(normal, 0.0), modelMatrix)
float3 l = normalize(-sDirection); // constant direction (like 0.7, -0.8, -0.7)
float3 v = normalize(viewPos.xyz - input.vertexWorldSpace.xyz); // viewpos = world space camera position
float3 LightReflect = normalize(reflect(n,l));
float SpecularFactor = dot(v, LightReflect);
color = float4(SpecularFactor, SpecularFactor, SpecularFactor, 1.0);
To check for possible variable errors I inspected input.vertexWorldSpace:
To also check the light direction and normal, I checked the diffuse term:
and the camera-to-vertex view vector (v):
To me, all parts look fine, but the specular still goes black at the origin (0, 0, 0) and perpendicular to the light direction.
I've also made a gif showing what happens to the view direction vector at the origin (0, 0, 0):
http://imgur.com/2YlqcGP
And another gif showing camera position and where the specular goes black:
http://i.imgur.com/ajUaekA.gifv
Am I using a wrong calculation for v?

OK, the problem was with my constant buffer alignment: https://msdn.microsoft.com/en-us/library/windows/desktop/bb509632(v=vs.85).aspx
Instead of passing
Matrix
Matrix
Vec3
Vec3
HLSL wants constant buffer members aligned to 16-byte boundaries, so I had to use
Matrix
Matrix
Vec3
float1 // padding to 16-byte alignment
Vec3
float1 // padding to 16-byte alignment
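To make the fix concrete, here is a rough sketch of what a matching CPU-side struct could look like (member names are illustrative, not the ones from my actual buffer):

#include <DirectXMath.h>

// Hypothetical CPU-side layout mirroring the HLSL constant buffer packing rules:
// matrices already start on 16-byte boundaries, but every float3 needs an explicit
// float of padding so the next member also starts on a 16-byte boundary.
struct LightingConstants
{
    DirectX::XMFLOAT4X4 modelMatrix;
    DirectX::XMFLOAT4X4 viewProjMatrix;
    DirectX::XMFLOAT3   sDirection;
    float               pad0;   // pads sDirection out to 16 bytes
    DirectX::XMFLOAT3   viewPos;
    float               pad1;   // pads viewPos out to 16 bytes
};

static_assert(sizeof(LightingConstants) % 16 == 0,
              "constant buffer size must be a multiple of 16 bytes");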

Related

Shadow Mapping - Space Transformations are going bad

I am currently studying shadow mapping, and my biggest issue right now is the transformations between spaces. This is my current working theory/steps.
Pass 1:
Get depth of pixel from camera, store in depth buffer
Get depth of pixel from light, store in another buffer
Pass 2:
Use the texture coordinate to sample the camera's depth buffer at the current pixel
Convert that depth to a view-space position by multiplying the projection-space coordinate by the invProj matrix (also do a perspective divide).
Take that view position and multiply it by invV (the camera's inverse view) to get a world-space position
Multiply the world-space position by the light's viewProjection matrix.
Perspective divide that projection-space coordinate, and remap it into [0..1] to sample the light depth buffer.
Get the current depth from the light and the closest (sampled) depth; if current depth > closest depth, the pixel is in shadow.
Shader Code
Pass 1:
PS_INPUT vs(VS_INPUT input) {
    PS_INPUT output;
    output.pos = mul(input.vPos, mvp);
    output.cameraDepth = output.pos.zw;
    ..
    float4 vPosInLight = mul(input.vPos, m);
    vPosInLight = mul(vPosInLight, light.viewProj);
    output.lightDepth = vPosInLight.zw;
    return output;
}
PS_OUTPUT ps(PS_INPUT input) {
    PS_OUTPUT output;
    float cameraDepth = input.cameraDepth.x / input.cameraDepth.y;
    // Bundle cameraDepth in the alpha channel of a normal map.
    output.normal = float4(input.normal, cameraDepth);
    // 4 lights in total -- although only 1 is active right now. Going to use r/g/b/a for each light depth.
    output.lightDepths.r = input.lightDepth.x / input.lightDepth.y;
    return output;
}
Pass 2 (Screen Quad):
float4 ps(PS_INPUT input) : SV_TARGET {
    float4 pixelPosView = depthToViewSpace(input.texCoord);
    ..
    float4 pixelPosWorld = mul(pixelPosView, invV);
    float4 pixelPosLight = mul(pixelPosWorld, light.viewProj);
    float shadow = shadowCalc(pixelPosLight);
    // For testing / visualisation
    return float4(shadow, shadow, shadow, 1);
}
float4 depthToViewSpace(float2 xy) {
    // Get pixel depth from the camera by sampling the current texcoord.
    // Extract the alpha channel as this holds the depth value.
    // Then transform from [0..1] to [-1..1].
    float z = (_normal.Sample(_sampler, xy).a) * 2 - 1;
    float x = xy.x * 2 - 1;
    float y = (1 - xy.y) * 2 - 1;
    float4 vProjPos = float4(x, y, z, 1.0f);
    float4 vPositionVS = mul(vProjPos, invP);
    vPositionVS = float4(vPositionVS.xyz / vPositionVS.w, 1);
    return vPositionVS;
}
float shadowCalc(float4 pixelPosL) {
    // Transform pixelPosLight from [-1..1] to [0..1]
    float3 projCoords = (pixelPosL.xyz / pixelPosL.w) * 0.5 + 0.5;
    float closestDepth = _lightDepths.Sample(_sampler, projCoords.xy).r;
    float currentDepth = projCoords.z;
    return currentDepth > closestDepth; // Supposed to have a bias, but for now I just want shadows working haha
}
CPP Matrices
// (Position, LookAtPos, UpDir)
auto lightView = XMMatrixLookAtLH(XMLoadFloat4(&pos4), XMVectorSet(0,0,0,1), XMVectorSet(0,1,0,0));
// (FOV, AspectRatio (1000/680), NEAR, FAR)
auto lightProj = XMMatrixPerspectiveFovLH(1.57f , 1.47f, 0.01f, 10.0f);
XMStoreFloat4x4(&_cLightBuffer.light.viewProj, XMMatrixTranspose(XMMatrixMultiply(lightView, lightProj)));
Current Outputs
White signifies that a shadow should be projected there. Black indicates no shadow.
CameraPos (0, 2.5, -2)
CameraLookAt (0, 0, 0)
CameraFOV (1.57)
CameraNear (0.01)
CameraFar (10.0)
LightPos (0, 2.5, -2)
LightLookAt (0, 0, 0)
LightFOV (1.57)
LightNear (0.01)
LightFar (10.0)
If I change the CameraPosition to be (0, 2.5, 2), basically just flipped on the Z axis, this is the result.
Obviously a shadow shouldn't change its projection depending on where the observer is, so I think I'm making a mistake with the invV. But I really don't know for sure. I've debugged the light's projView matrix, and the values seem correct - going from CPU to GPU. It's also entirely possible I've misunderstood some theory along the way because this is quite a tricky technique for me.
Aha! Found my problem. It was a silly mistake: I was calculating the depth of pixels from each light, but storing them in a texture that was rendered from the camera's point of view. The following image should explain my mistake better than I can with words.
For future reference, the solution I settled on was to scrap my idea of storing light depths in texture channels. Instead, I make a new pass for each light and bind a unique depth-stencil texture to render the geometry to. When I want to do the light calculations, I bind each of the depth textures to a shader resource slot and go from there. Obviously this doesn't scale well with many lights, but for my student project, where I'm only required to have 2 shadow casters, it suffices.
_context->DrawIndexed(indexCount, 0, 0);   // Draw to the regular render target
_sunlight->use(1, _context);               // Use the sunlight shader (basically just a vertex shader & null pixel shader so depth can be written to the depth map)
_sunlight->bindDSVSetNullRenderTarget(_context);
_context->DrawIndexed(indexCount, 0, 0);   // Draw to the sunlight depth target

void bindDSVSetNullRenderTarget(ID3D11DeviceContext* ctx) {
    ID3D11RenderTargetView* nullrv = nullptr;
    ctx->OMSetRenderTargets(1, &nullrv, _sunlightDepthStencilView);
}
// The purpose of setting a null render target before doing the draw call is
// that a draw call with only a depth target bound is much faster
// (at least I believe so, from my reading online).
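For completeness, a rough sketch of creating a depth texture that works both as the depth-stencil target and as a shader resource for the lighting pass (device, shadowMapSize and _sunlightDepthSRV are assumed/illustrative names, not from my actual code):

#include <d3d11.h>

// Use a typeless format so the DSV and SRV can interpret the same memory differently.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width            = shadowMapSize;   // e.g. 1024 (assumed)
texDesc.Height           = shadowMapSize;
texDesc.MipLevels        = 1;
texDesc.ArraySize        = 1;
texDesc.Format           = DXGI_FORMAT_R32_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D11_USAGE_DEFAULT;
texDesc.BindFlags        = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* depthTex = nullptr;
device->CreateTexture2D(&texDesc, nullptr, &depthTex);

// Depth-stencil view used while rendering the light's depth pass.
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format        = DXGI_FORMAT_D32_FLOAT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
device->CreateDepthStencilView(depthTex, &dsvDesc, &_sunlightDepthStencilView);

// Shader resource view bound in the lighting pass so the depth can be sampled.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format              = DXGI_FORMAT_R32_FLOAT;
srvDesc.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
device->CreateShaderResourceView(depthTex, &srvDesc, &_sunlightDepthSRV);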

Metal fragment shader uv coordinates change when reading in vertex color

I was playing with applying dithering to a simple colored quad and found a strange issue. I have a fragment shader which should calculate dithering at some uv and return a dithered color. This works fine on a textured quad, but strangely enough, when I access the color data from my inVertex, the uv coordinates change to some bizarre values, and the y value seems to be mapped to the x axis. I'll try to illustrate what happens when I change things around in the fragment shader code.
fragment float4 fragment_colored_dithered(ColoredVertex inVertex [[stage_in]],
                                          float2 uv [[point_coord]]) {
    float4 color = inVertex.color;
    uv = (uv / 2) + 0.5;
    if (uv.y < 0.67) {
        return float4(uv.y, 0, 0, 1);
    }
    else {
        return color;
    }
}
Produces the following result:
The left side of the image shows my gradient quad; notice that if (uv.y < 0.67) maps to x values in the image 🤔.
If I change this fragment shader and nothing else in the code, like so, where I return float4(0, 0, 1, 0) instead of inVertex.color, the uv coordinates are mapped correctly.
fragment float4 fragment_colored_dithered(ColoredVertex inVertex [[stage_in]],
                                          float2 uv [[point_coord]]) {
    float4 color = inVertex.color;
    uv = (uv / 2) + 0.5;
    if (uv.y < 0.67) {
        return float4(uv.y, 0, 0, 1);
    }
    else {
        return float4(0, 0, 1, 0); // return color;
    }
}
Produces this (correct) result:
I think I can hack around this problem by applying a 1x1 texture to the gradient and using texture coordinates, but I'd really like to know what is happening here, is this a bug or a feature that I don't understand?
Why are you using [[point_coord]]? What do you think it represents?
Unless you're drawing point primitives, you shouldn't be using that. Since you're drawing a "quad", and given the screenshots, I assume you're not drawing point primitives.
I suspect [[point_coord]] is simply undefined and subject to random-ish behavior when you're drawing triangles. The randomness is apparently affected by the specifics (such as stack layout) of the fragment shader.
You should either be using [[position]] and scaling by the window size or using an interpolated field within your ColoredVertex struct to carry "texture" coordinates.
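For example, here is a minimal sketch of carrying the coordinate through the vertex output instead; the struct layout and field names are assumptions, not your actual ColoredVertex:

#include <metal_stdlib>
using namespace metal;

// Assumed vertex-output layout: position, color, and an interpolated uv written by the vertex shader.
struct ColoredVertex {
    float4 position [[position]];
    float4 color;
    float2 uv;
};

fragment float4 fragment_colored_dithered(ColoredVertex inVertex [[stage_in]]) {
    float2 uv = inVertex.uv;   // well-defined for triangle primitives, unlike [[point_coord]]
    if (uv.y < 0.67) {
        return float4(uv.y, 0, 0, 1);
    }
    return inVertex.color;
}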

Centering all of the points in iOS OpenGL ES app

I have an OpenGL view that displays a set of 3D points with some basic shaders:
// Fragment Shader
static const char* PointFS = STRINGIFY
(
void main(void)
{
gl_FragColor = vec4(0.8, 0.8, 0.8, 1.0);
}
);
// Vertex Shader
static const char* PointVS = STRINGIFY
(
uniform mediump mat4 uProjectionMatrix;
attribute mediump vec4 position;
void main( void )
{
gl_Position = uProjectionMatrix * position;
gl_PointSize = 3.0;
}
);
And the MVP matrix is calculated as:
- (void)setMatrices
{
// ModelView Matrix
GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;
modelViewMatrix = GLKMatrix4Scale(modelViewMatrix, 2, 2, 2);
// Projection Matrix
const GLfloat aspectRatio = (GLfloat)(self.view.bounds.size.width) / (GLfloat)(self.view.bounds.size.height);
const GLfloat fieldView = GLKMathDegreesToRadians(90.0f);
const GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(fieldView, aspectRatio, 0.1f, 10.0f);
glUniformMatrix4fv(self.pointShader.uProjectionMatrix, 1, 0, GLKMatrix4Multiply(projectionMatrix, modelViewMatrix).m);
}
This works fine, but I have a set of 500 points and I see only a few.
How do I scale/translate the MVP matrix to display all of them (they are a dynamic set)? Ideally the "centroid" should be at the origin, and all of the points visible. It should be able to adapt to rotations of the view (gestures are the next step I want to implement).
Seeing how you present this, you might need quite a lot... I guess the best approach might be using "look at": the point you are looking at is (0,0,0) as you stated, the camera position should probably be (0,0,Z) and up (0,1,0). So the only unknown here is the Z component of the camera position.
If you start Z at, for instance, -.1 and then iterate through all the points, then sin(fieldView*.5f) * (p.z-Z) >= point.y must hold for the point to be visible. So you can compute Z1 = p.z - (point.y / sin(fieldView*.5f)), and if Z1 < Z then Z = Z1. This check only covers positive Y; you also need the same for negative Y and the same for +-X. These equations are very similar, though when checking X you should also take the screen aspect ratio into account.
This procedure should give you the smallest field possible that shows all the points (with the given limitations, such as looking towards (0,0,0)), but it is far from the simplest. You also need to consider whether the equation still works if p.z < -Z.
Another, somewhat easier approach is to generate the smallest cube around the centre which holds all the points: iterate through the points and find the coordinate with the largest absolute value (any of X, Y or Z). Once you have it, use it with a frustum instead of a perspective projection so that all the rect parameters (top, bottom, left and right) are set to +-largest. Then you need to compute the translation, which for a 90-degree field of view is Z = largest*.5. Z is the zNear for the frustum, and you then also translate the matrix by -(Z+largest). Again, one of the coordinates in the frustum must be multiplied by the screen aspect ratio. A sketch of this approach is shown below.
In any case, do watch out for your zFar; having it at only 10.0f might be a bit too short in your case. Until you need the depth buffer you should not worry about that value being too large.
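A rough sketch of that second, cube-based approach (how the points are stored, the method name, and the far-plane choice are assumptions, not from the question):

// Sketch: fit all points with a frustum, assuming a 90-degree field of view as described above.
- (void)setMatricesForPoints:(const GLKVector3 *)points count:(NSUInteger)count
{
    // Find the largest absolute coordinate over all points.
    GLfloat largest = 0.0f;
    for (NSUInteger i = 0; i < count; i++) {
        largest = MAX(largest, fabsf(points[i].x));
        largest = MAX(largest, fabsf(points[i].y));
        largest = MAX(largest, fabsf(points[i].z));
    }

    const GLfloat aspectRatio = (GLfloat)(self.view.bounds.size.width) / (GLfloat)(self.view.bounds.size.height);
    const GLfloat zNear = largest * 0.5f;           // Z for a 90-degree field of view, as described above
    const GLfloat zFar  = zNear + 4.0f * largest;   // generous far plane (see the zFar note above)

    // Frustum with +-largest extents, widened horizontally by the screen aspect ratio.
    const GLKMatrix4 projectionMatrix = GLKMatrix4MakeFrustum(-largest * aspectRatio, largest * aspectRatio,
                                                              -largest, largest, zNear, zFar);
    // Push the point cloud back by (zNear + largest) so the whole cube sits in front of zNear.
    const GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -(zNear + largest));

    glUniformMatrix4fv(self.pointShader.uProjectionMatrix, 1, 0,
                       GLKMatrix4Multiply(projectionMatrix, modelViewMatrix).m);
}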

How to map texture image onto a part of a sphere, or how to cut out (intersect) a rectangle of a sphere's surface?

I am working with WebGL using three.js and I have an image that I want to project onto the (inner) surface of a sphere. The problem I am facing is how to limit the extent of that mapping to a horizontal and vertical field of view. Imagine projecting an image from the centre of a sphere onto a rectangular section of it.
I suspect I can do this one of 2 ways, but am unsure about how to do either...
1) Map the texture onto the sphere based on the field of view angles. Mapping the image straight onto the sphere as below does a 360x180 degree wrap. Is there a UV mapping trick involved, or some other technique available?
var sphere = new THREE.Mesh(
new THREE.SphereGeometry(radius, 60, 40),
new THREE.MeshBasicMaterial(
{ map: THREE.ImageUtils.loadTexture( filename ) }
)
);
2) Chop up a sphere so that it only has the subset of the surface covered by the angles given (ie intersecting with a rectangular pyramid), or producing an equivalent curved surface. Any ideas?
The easiest way to scale down the projection is to tweak the UV coords in the fragment shader:
// how large the projection should be
uniform vec2 uScale;
...
// this is the color for pixels outside the mapped texture
vec4 texColor = vec4(0.0, 0.0, 0.0, 1.0);
vec2 scale = vec2(1.0/uScale.s, 1.0/uScale.t);
vec2 mappedUv = vUv*scale + vec2(0.5,0.5)*(vec2(1.0,1.0)-scale);
// if the mapped uv is inside the texture area, read from texture
if (mappedUv.s >= 0.0 && mappedUv.s <= 1.0 &&
mappedUv.t >= 0.0 && mappedUv.t <= 1.0) {
texColor = texture2D(map, mappedUv);
}
For THREE.SphereGeometry UVs, the full field of view is 2π radians in x and π radians in y, so the scale factor for a reduced field of view is vec2(fovX/(2*PI), fovY/PI), as wired up below.
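For instance, the uniform could be passed in from JavaScript something like this (the uniform name matches the snippet above, but the script element IDs and fov variables are assumptions):

// Sketch: drive uScale from the desired horizontal/vertical fields of view (in radians).
var material = new THREE.ShaderMaterial({
    uniforms: {
        map:    { type: 't',  value: THREE.ImageUtils.loadTexture( filename ) },
        uScale: { type: 'v2', value: new THREE.Vector2(fovX / (2 * Math.PI), fovY / Math.PI) }
    },
    vertexShader:   document.getElementById('sphereVertexShader').textContent,
    fragmentShader: document.getElementById('sphereFragmentShader').textContent,
    side: THREE.BackSide // render the inner surface of the sphere
});
var sphere = new THREE.Mesh(new THREE.SphereGeometry(radius, 60, 40), material);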
You can also do the UV scaling in the vertex shader. Another way is to copy-paste https://github.com/mrdoob/three.js/blob/master/src/extras/geometries/SphereGeometry.js and change the generated UVs to match your scaling factors (uv*1/scale + 0.5*(1-1/scale)).
Lemme know if this helps.

Computing half vector on WebGL

http://www.lighthouse3d.com/opengl/glsl/index.php?ogldir2 shows the following:
H = Eye - L
I did the following on my WebGL vertex shader to compute the half-vector:
vec4 ecPosition = u_mvMatrix * vec4(a_position.xyz, 1.0); // Get the eye coordinate position
vec3 eyeDirection = normalize(-ecPosition.xyz); // Get the eye direction
v_halfVector = normalize(eyeDirection + lightDirection); // Compute and normalize the half-vector
But I am not sure if the above code snippets are correct.
Any pointer/help is appreciated. Thanks in advance for your help.
EDIT: It seems the correct code should be
vec4 ecPosition = u_mvMatrix * vec4(a_position.xyz, 1.0); // Position in the eye coordinate position
vec3 ecLightPosition = (u_mvMatrix * lightPosition).xyz; // Light position in the eye coordinate
vec3 lightDirection = ecLightPosition - ecPosition.xyz; // Light direction
vec3 eyeDirection = (-ecPosition.xyz); // Eye direction
v_halfVector = normalize(eyeDirection + lightDirection); // Compute and normalize the half-vector
Assuming you're trying to get the average of the eye and light vectors, shouldn't that last line be "normalize(eyeDirection + lightDirection)" instead? Also, it might make more sense to invert the light vector instead of the eye since it's coming out of the surface.
I'm not an expert here, so take my advice with a huge grain of salt. :)
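For reference, a minimal sketch (not from the question) of the usual Blinn half-vector computation, with both directions normalized before they are summed; it assumes lightPosition is expressed in the same space that u_mvMatrix transforms from:

vec3 ecPosition = (u_mvMatrix * vec4(a_position.xyz, 1.0)).xyz; // vertex position in eye space
vec3 ecLightPos = (u_mvMatrix * lightPosition).xyz;             // light position in eye space
vec3 lightDir   = normalize(ecLightPos - ecPosition);           // unit vector from the vertex toward the light
vec3 eyeDir     = normalize(-ecPosition);                       // unit vector from the vertex toward the camera (eye-space origin)
v_halfVector    = normalize(lightDir + eyeDir);                 // Blinn-Phong half-vector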
