Computing the half vector in WebGL

http://www.lighthouse3d.com/opengl/glsl/index.php?ogldir2 shows the following:
H = Eye - L
I did the following on my WebGL vertex shader to compute the half-vector:
vec4 ecPosition = u_mvMatrix * vec4(a_position.xyz, 1.0); // Get the eye coordinate position
vec3 eyeDirection = normalize(-ecPosition.xyz); // Get the eye direction
v_halfVector = normalize(eyeDirection + lightDirection); // Compute and normalize the half-vector
But I am not sure if the above code snippets are correct.
Any pointer/help is appreciated. Thanks in advance for your help.
EDIT: It seems the correct code should be
vec4 ecPosition = u_mvMatrix * vec4(a_position.xyz, 1.0); // Vertex position in eye coordinates
vec3 ecLightPosition = (u_mvMatrix * lightPosition).xyz; // Light position in eye coordinates
vec3 lightDirection = normalize(ecLightPosition - ecPosition.xyz); // Unit vector from vertex to light
vec3 eyeDirection = normalize(-ecPosition.xyz); // Unit vector from vertex to eye
v_halfVector = normalize(eyeDirection + lightDirection); // Half-vector (both inputs must be unit length before summing)

Assuming you're trying to get the average of the eye and light vectors, shouldn't that last line be "normalize(eyeDirection + lightDirection)" instead? Also, it might make more sense to invert the light vector instead of the eye since it's coming out of the surface.
I'm not an expert here, so take my advice with a huge grain of salt. :)
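For context, here is a minimal complete vertex shader along those lines (a sketch only; the uniform and attribute names are illustrative, and the light position is assumed to be supplied already in eye coordinates):

// Minimal WebGL (GLSL ES 1.00) vertex shader sketch for a Blinn half-vector.
attribute vec3 a_position;

uniform mat4 u_mvMatrix;       // model-view matrix
uniform mat4 u_pMatrix;        // projection matrix
uniform vec3 u_lightPosition;  // light position, assumed to already be in eye coordinates

varying vec3 v_halfVector;

void main() {
    vec4 ecPosition = u_mvMatrix * vec4(a_position, 1.0); // vertex in eye coordinates
    vec3 L = normalize(u_lightPosition - ecPosition.xyz); // unit vector toward the light
    vec3 E = normalize(-ecPosition.xyz);                  // unit vector toward the eye (at the origin)
    v_halfVector = normalize(L + E);                      // Blinn half-vector
    gl_Position = u_pMatrix * ecPosition;
}

The key point is that both direction vectors are normalized before they are added; only then does the normalized sum bisect the angle between them.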

Related

Shadow Mapping - Space Transformations are going bad

I am currently studying shadow mapping, and my biggest issue right now is the transformations between spaces. This is my current working theory/steps.
Pass 1:
Get depth of pixel from camera, store in depth buffer
Get depth of pixel from light, store in another buffer
Pass 2:
Use the texture coordinate to sample the camera's depth buffer at the current pixel
Convert that depth to a view-space position by multiplying the projection-space coordinate with the invProj matrix (also doing a perspective divide).
Take that view position and multiply it by invV (the camera's inverse view) to get a world-space position
Multiply the world-space position by the light's viewProjection matrix.
Perspective-divide that projection-space coordinate and map it into [0..1] to sample the light depth buffer.
Compare the current depth from the light with the closest (sampled) depth; if the current depth > the closest depth, the pixel is in shadow.
Shader Code
Pass1:
PS_INPUT vs(VS_INPUT input) {
    output.pos = mul(input.vPos, mvp);
    output.cameraDepth = output.pos.zw;
    ..
    float4 vPosInLight = mul(input.vPos, m);
    vPosInLight = mul(vPosInLight, light.viewProj);
    output.lightDepth = vPosInLight.zw;
}
PS_OUTPUT ps(PS_INPUT input) {
    float cameraDepth = input.cameraDepth.x / input.cameraDepth.y;
    // Bundle cameraDepth in alpha channel of a normal map.
    output.normal = float4(input.normal, cameraDepth);
    // 4 Lights in total -- although only 1 is active right now. Going to use r/g/b/a for each light depth.
    output.lightDepths.r = input.lightDepth.x / input.lightDepth.y;
}
Pass 2 (Screen Quad):
float4 ps(PS_INPUT input) : SV_TARGET {
    float4 pixelPosView = depthToViewSpace(input.texCoord);
    ..
    float4 pixelPosWorld = mul(pixelPosView, invV);
    float4 pixelPosLight = mul(pixelPosWorld, light.viewProj);
    float shadow = shadowCalc(pixelPosLight);
    // For testing / visualisation
    return float4(shadow, shadow, shadow, 1);
}
float4 depthToViewSpace(float2 xy) {
    // Get pixel depth from camera by sampling current texcoord.
    // Extract the alpha channel as this holds the depth value.
    // Then, transform from [0..1] to [-1..1]
    float z = (_normal.Sample(_sampler, xy).a) * 2 - 1;
    float x = xy.x * 2 - 1;
    float y = (1 - xy.y) * 2 - 1;
    float4 vProjPos = float4(x, y, z, 1.0f);
    float4 vPositionVS = mul(vProjPos, invP);
    vPositionVS = float4(vPositionVS.xyz / vPositionVS.w, 1);
    return vPositionVS;
}
float shadowCalc(float4 pixelPosL) {
    // Transform pixelPosLight from [-1..1] to [0..1]
    float3 projCoords = (pixelPosL.xyz / pixelPosL.w) * 0.5 + 0.5;
    float closestDepth = _lightDepths.Sample(_sampler, projCoords.xy).r;
    float currentDepth = projCoords.z;
    return currentDepth > closestDepth; // Supposed to have bias, but for now I just want shadows working haha
}
CPP Matrices
// (Position, LookAtPos, UpDir)
auto lightView = XMMatrixLookAtLH(XMLoadFloat4(&pos4), XMVectorSet(0,0,0,1), XMVectorSet(0,1,0,0));
// (FOV, AspectRatio (1000/680), NEAR, FAR)
auto lightProj = XMMatrixPerspectiveFovLH(1.57f , 1.47f, 0.01f, 10.0f);
XMStoreFloat4x4(&_cLightBuffer.light.viewProj, XMMatrixTranspose(XMMatrixMultiply(lightView, lightProj)));
Current Outputs
White signifies that a shadow should be projected there. Black indicates no shadow.
CameraPos (0, 2.5, -2)
CameraLookAt (0, 0, 0)
CameraFOV (1.57)
CameraNear (0.01)
CameraFar (10.0)
LightPos (0, 2.5, -2)
LightLookAt (0, 0, 0)
LightFOV (1.57)
LightNear (0.01)
LightFar (10.0)
If I change the CameraPosition to be (0, 2.5, 2), basically just flipped on the Z axis, this is the result.
Obviously a shadow shouldn't change its projection depending on where the observer is, so I think I'm making a mistake with the invV. But I really don't know for sure. I've debugged the light's projView matrix, and the values seem correct - going from CPU to GPU. It's also entirely possible I've misunderstood some theory along the way because this is quite a tricky technique for me.
Aha! Found my problem. It was a silly mistake: I was calculating the depth of pixels from each light, but storing them in a texture that was based on the view of the camera. The following image should explain my mistake better than I can with words.
For future reference, the solution I decided on was to scrap my idea of storing light depths in texture channels. Instead, I basically make a new pass for each light and bind a unique depth-stencil texture to render the geometry to. When I want to do light calculations, I bind each of the depth textures to a shader resource slot and go from there. Obviously this doesn't scale well with many lights, but for my student project, where I'm only required to have 2 shadow casters, it suffices.
_context->DrawIndexed(indexCount, 0, 0); // Draw to regular render target
_sunlight->use(1, _context); // Use sunlight shader (basically just runs a vertex shader and a null pixel shader so depth can be written to the depth map)
_sunlight->bindDSVSetNullRenderTarget(_context);
_context->DrawIndexed(indexCount, 0, 0); // Draw to sunlight depth target

void bindDSVSetNullRenderTarget(ID3D11DeviceContext* ctx) {
    ID3D11RenderTargetView* nullrv = nullptr;
    ctx->OMSetRenderTargets(1, &nullrv, _sunlightDepthStencilView);
}

// The purpose of setting a null render target before doing the draw call is
// that a draw call with only a depth target bound is much faster.
// (At least I believe so, from my reading online.)
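For completeness, creating such a per-light depth texture looks roughly like the sketch below (the device pointer, shadow map size, and the _sunlightDepthSRV name are illustrative; error handling is omitted). The texture is created typeless so it can be viewed both as a depth-stencil target in the shadow pass and as a single-channel shader resource in the lighting pass.

#include <d3d11.h>

// Sketch: one depth-only texture per shadow-casting light.
void createLightDepthTarget(ID3D11Device* device, UINT size,
                            ID3D11DepthStencilView** dsvOut,
                            ID3D11ShaderResourceView** srvOut)
{
    D3D11_TEXTURE2D_DESC texDesc = {};
    texDesc.Width            = size;
    texDesc.Height           = size;
    texDesc.MipLevels        = 1;
    texDesc.ArraySize        = 1;
    texDesc.Format           = DXGI_FORMAT_R32_TYPELESS; // typeless so both views can alias it
    texDesc.SampleDesc.Count = 1;
    texDesc.Usage            = D3D11_USAGE_DEFAULT;
    texDesc.BindFlags        = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D* depthTex = nullptr;
    device->CreateTexture2D(&texDesc, nullptr, &depthTex);

    D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
    dsvDesc.Format        = DXGI_FORMAT_D32_FLOAT;        // depth view for the shadow pass
    dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
    device->CreateDepthStencilView(depthTex, &dsvDesc, dsvOut);

    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format              = DXGI_FORMAT_R32_FLOAT;  // sampled as a single float in the lighting pass
    srvDesc.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MipLevels = 1;
    device->CreateShaderResourceView(depthTex, &srvDesc, srvOut);

    depthTex->Release(); // the views keep the underlying texture alive
}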

How to understand Exponential Shadow Mapping in HLSL with Directional Light?

I've tried to understand how ESM works. I have regular shadow mapping in place (occluded/not occluded) in a deferred rendering pipeline and am trying to use ESM instead.
I've tried to adapt this from Cansin:
http://homepage.lnu.se/staff/tblma/Deferred Rendering in XNA 4.pdf
But as he does not use directional lights, I may have a misunderstanding. This is basically my approach to adapting it to directional lights:
Create ShadowMap:
float4 PS(VSO input) : COLOR0
{
    float depth = input.Position2D.z / input.Position2D.w;
    return exp(depth);
}
I am using an orthographic projection matrix (same near/far clip as the actual camera), as I do with regular shadow mapping. (Position2D is in screen space; because it's a directional light, Z is always the distance between surface and light, or am I wrong?)
Get the shadow factor - basically as with regular shadow mapping, I transform into light screen space and fetch the depth from the shadow map:
float4 Position = 1;
Position.xy = input.ScreenPosition.xy;
Position.z = Depth; // saved depth from gbuffer
Position = mul(Position, InverseViewProjection);
Position /= Position.w;
float4 LightScreenPos = mul(Position, LightViewProjection);
LightScreenPos /= LightScreenPos.w;
float2 LUV = 0.5f * (float2(LightScreenPos.x, -LightScreenPos.y) + 1.0f);
float shadowDepth = tex2D(sampler_shadow, LUV).r;
float shadow = shadowDepth * exp(-10 * LightScreenPos.z);
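For reference, this is the identity I understand ESM to rely on (a sketch; d_o is the occluder depth stored in the map, d_r the receiver depth, both in the light's space, and c = 10 matches the exponent in the snippet above):

\[
  s \;=\; e^{c\,d_o}\, e^{-c\,d_r} \;=\; e^{c\,(d_o - d_r)} \;\approx\;
  \begin{cases}
    1, & d_r \le d_o \quad \text{(lit)}\\
    0, & d_r \gg d_o \quad \text{(in shadow)}
  \end{cases}
\]

with the result clamped to [0, 1], and with the same depth metric used on both sides of the product.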
Is my thinking fundamentally wrong?

Swift SceneKit lighting and its effect on an emission texture

I'm developing an app about the solar system.
I'm trying to turn off the emission texture where the light hits the surface of the planet. But the problem is that, by default, an emission texture always shows the emission points regardless of the absence or presence of light.
My request in a nutshell: I want to hide the emission points in places where the light hits the surface.
override func viewDidLoad() {
    super.viewDidLoad()

    let scene = SCNScene()

    let earth = SCNSphere(radius: 1)
    let earthNode = SCNNode()
    let earthMaterial = SCNMaterial()
    earthMaterial.diffuse.contents = UIImage(named: "earth.jpg")
    earthMaterial.emission.contents = UIImage(named: "earthEmission.jpg")
    earth.materials = [earthMaterial]
    earthNode.geometry = earth
    scene.rootNode.addChildNode(earthNode)

    let lightNode = SCNNode()
    lightNode.light = SCNLight()
    lightNode.light?.type = .omni
    lightNode.position = SCNVector3(x: 0, y: 10, z: 5)
    scene.rootNode.addChildNode(lightNode)

    sceneView.scene = scene
}
SceneKit's shader modifiers are a perfect fit for this kind of task.
You can see footage of the final result here.
Fragment shader modifier
We can use _lightingContribution.diffuse (RGB (vec3) color representing lights that are applied to the diffuse) to determine areas of an object (in this case - Earth) that are illuminated and then use it to mask the emission texture in the fragment shader modifier.
The way you use it is really up to you. Here's the simplest solution I've come up with (using GLSL syntax, though it will be automatically converted to Metal at runtime if that's what you are using):
uniform sampler2D emissionTexture;
vec3 light = _lightingContribution.diffuse;
float lum = max(0.0, 1 - (0.2126*light.r + 0.7152*light.g + 0.0722*light.b)); // 1
vec4 emission = texture2D(emissionTexture, _surface.diffuseTexcoord) * lum; // 2, 3
_output.color += emission; // 4
1. Calculate the luminance (using the formula from here) of the _lightingContribution.diffuse color (in case the lighting is not pure white).
2. Subtract it from one to get the luminance of the "dark side".
3. Get the emission from a custom texture using the diffuse UV coordinates (granted the emission and diffuse textures have the same ones) and apply the luminance to it by multiplication.
4. Add it to the final output color (the same way regular emission is applied).
That's it for the shader part; now let's go through the Swift side of things.
Swift setup
First off, we are not going to use the emission.contents property of the material; instead we need to create a custom SCNMaterialProperty
let emissionTexture = UIImage(named: "earthEmission.jpg")!
let emission = SCNMaterialProperty(contents: emissionTexture)
and set it to the material using setValue(_:forKey:)
earthMaterial.setValue(emission, forKey: "emissionTexture")
Pay close attention to the key – it should be the same as the uniform in the shader modifier. Also, you don't need to persist the material property yourself; setValue creates a strong reference.
All that is left to do is to set the fragment shader modifier to the material:
let shaderModifier =
"""
uniform sampler2D emissionTexture;
vec3 light = _lightingContribution.diffuse;
float lum = max(0.0, 1 - (0.2126*light.r + 0.7152*light.g + 0.0722*light.b));
vec4 emission = texture2D(emissionTexture, _surface.diffuseTexcoord) * lum;
_output.color += emission;
"""
earthMaterial.shaderModifiers = [.fragment: shaderModifier]
Here's footage of this shader modifier in motion.
Note that the light source has to be quite bright; otherwise dim lighting is going to be seen around the "globe". I had to set lightNode.light?.intensity to at least 2000 in your setup for it to work as expected. You might want to experiment with the way luminosity is calculated and applied to the emission to get better results.
In case you might need it, _lightingContribution is a structure available in the fragment shader modifier that also has ambient and specular members (below is Metal syntax):
struct SCNShaderLightingContribution {
    float3 ambient;
    float3 diffuse;
    float3 specular;
} _lightingContribution;
I like Lësha's answer, but I made a small modification to the shader so that it works with lower light levels. I added a threshold (t) below which emission values will not show; between the threshold and zero, it interpolates between diffuse and diffuse + emission. Changing the value of t adjusts the width of the band depicting the transition between night and day. I also appended a 0.5 multiplier to the emission formula, since the emission texture I'm using looked artificially bright without it.
let shaderModifier =
"""
uniform sampler2D emissionTexture;
vec3 light = _lightingContribution.diffuse;
float lum = max(0.0, 1 - (0.2126*light.r + 0.7152*light.g + 0.0722*light.b));
vec4 emission = texture2D(emissionTexture, _surface.diffuseTexcoord) * lum * 0.5;
float t = 0.11; // no emission will show above this threshold
_output.color = vec4(
    light.r > t ? _output.color.r : light.r/t * _output.color.r + (1 - light.r/t) * (_output.color.r + emission.r),
    light.g > t ? _output.color.g : light.g/t * _output.color.g + (1 - light.g/t) * (_output.color.g + emission.g),
    light.b > t ? _output.color.b : light.b/t * _output.color.b + (1 - light.b/t) * (_output.color.b + emission.b),
    1);
"""

Specular Lighting breaks at origin (0, 0, 0)

I'm having a problem getting specular lighting to work. It looks like I have some kind of bug in my application that I'm not able to trace.
Light is coming from the front (the screenshots show the camera looking in the -z direction; -x is on the left).
A simple specular output shows the following failure:
Code used:
float3 n = normalize(input.normal); // world space normal, mul(float4(normal, 0.0), modelMatrix)
float3 l = normalize(-sDirection); // constant direction (like 0.7, -0.8, -0.7)
float3 v = normalize(viewPos.xyz - input.vertexWorldSpace.xyz); // viewpos = world space camera position
float3 LightReflect = normalize(reflect(n,l));
float SpecularFactor = dot(v, LightReflect);
color = float4(SpecularFactor, SpecularFactor, SpecularFactor, 1.0);
To check for possible variable errors, I checked input.vertexWorldSpace:
To also check the light direction and normal, I checked the diffuse term:
And the camera-to-vertex view vector (v):
For me, all parts look fine, but the specular still goes black at the origin (0, 0, 0) and perpendicular to the light direction.
I've also made a gif showing what happens with the view direction vector at the origin (0, 0, 0):
http://imgur.com/2YlqcGP
And another gif showing camera position and where the specular goes black:
http://i.imgur.com/ajUaekA.gifv
Am I using a wrong calculation for v?
OK, the problem was with my constant buffer alignment: https://msdn.microsoft.com/en-us/library/windows/desktop/bb509632(v=vs.85).aspx
Instead of passing
Matrix
Matrix
Vec3
Vec3
HLSL wants vectors aligned to 16-byte boundaries, so I had to use
Matrix
Vec3
float1 // 16-byte alignment padding
Vec3
float1 // 16-byte alignment padding
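To illustrate that layout on the C++ side (a sketch only; the struct name is made up and the member names just mirror the question's code), each 12-byte Vec3 is padded out to a full 16-byte register so the C++ struct matches the HLSL cbuffer packing rules:

#include <DirectXMath.h>

// C++ mirror of the constant buffer; each float3 is padded to 16 bytes so
// members line up with HLSL's 16-byte register packing.
struct LightConstants
{
    DirectX::XMFLOAT4X4 viewProj;    // 64 bytes (4 registers)
    DirectX::XMFLOAT3   sDirection;  // 12 bytes
    float               pad0;        // 4 bytes of padding -> full register
    DirectX::XMFLOAT3   viewPos;     // 12 bytes
    float               pad1;        // 4 bytes of padding -> full register
};
static_assert(sizeof(LightConstants) % 16 == 0,
              "constant buffer size must be a multiple of 16 bytes");

The matching HLSL cbuffer declares the same members in the same order (float4x4, float3, float, float3, float); alternatively, the padding floats can be avoided on the C++ side by using XMFLOAT4 and ignoring the w component.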

Centering all of the points in iOS OpenGL ES app

I have an OpenGL view that displays a set of 3D points with some basic shaders:
// Fragment Shader
static const char* PointFS = STRINGIFY
(
    void main(void)
    {
        gl_FragColor = vec4(0.8, 0.8, 0.8, 1.0);
    }
);

// Vertex Shader
static const char* PointVS = STRINGIFY
(
    uniform mediump mat4 uProjectionMatrix;
    attribute mediump vec4 position;

    void main(void)
    {
        gl_Position = uProjectionMatrix * position;
        gl_PointSize = 3.0;
    }
);
And the MVP matrix is calculated as:
- (void)setMatrices
{
    // ModelView Matrix
    GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;
    modelViewMatrix = GLKMatrix4Scale(modelViewMatrix, 2, 2, 2);

    // Projection Matrix
    const GLfloat aspectRatio = (GLfloat)(self.view.bounds.size.width) / (GLfloat)(self.view.bounds.size.height);
    const GLfloat fieldView = GLKMathDegreesToRadians(90.0f);
    const GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(fieldView, aspectRatio, 0.1f, 10.0f);

    glUniformMatrix4fv(self.pointShader.uProjectionMatrix, 1, 0, GLKMatrix4Multiply(projectionMatrix, modelViewMatrix).m);
}
This works fine, but I have a set of 500 points and I see only a few.
How do I scale/translate the MVP matrix to display all of them (they are a dynamic set)? Ideally the "centroid" should be at the origin, and all of the points visible. It should be able to adapt to rotations of the view (gestures are the next step I want to implement).
Seeing how you present this, you might need quite a lot... I guess the best approach might be using "look at": the point you are looking at is (0,0,0) as you stated, the camera position should probably be (0,0,Z), and up is (0,1,0). So the only issue here is the Z component of the camera position.
If you start Z at, for instance, -.1 and then iterate through all the points, then tan(fieldView*.5f) * (p.z-Z) >= point.y must hold for a point to be visible. So you can compute Z1 = p.z-(point.y/tan(fieldView*.5f)), and if Z1<Z then Z=Z1. This check covers only positive Y; you also need the same for negative Y and for +-X. The equations are very similar, though when checking X you should also take the screen ratio into account.
This procedure should give you the smallest field possible that sees all the points (with the given limitations, such as looking towards (0,0,0)), but it is far from the simplest. You also need to consider whether the equation still works when p.z < -Z.
Another, somewhat easier approach is to generate the smallest cube around the centre which holds all the points: iterate through the points and take the coordinate with the largest absolute value (any of X, Y or Z). When you have it, use it with a frustum instead of a perspective projection, so that all the rect parameters (top, bottom, left and right) are generated from this value as +-largest. Then you need to compute the translation, which for a 90-degree field of view is Z = (largest*.5); Z is the zNear for the frustum, and you then also translate the matrix by -(Z+largest). Again, one of the coordinates in the frustum must be multiplied by the screen ratio. A sketch of this approach follows after the next paragraph.
In any case, do watch what your zFar is; having it at only 10.0f might be a bit too short in your case. Until you need the depth buffer, you should not worry about that value being too large.
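A minimal sketch of that second (bounding-cube) approach, mirroring the setMatrices method above (assumptions: the points are available as a C array of GLKVector3, and the method and parameter names are illustrative):

// Sketch: fit all points into a symmetric frustum by finding the largest
// absolute coordinate, then translating the cloud past the near plane.
- (void)setMatricesFittingPoints:(const GLKVector3 *)points count:(NSUInteger)pointCount
{
    // Largest absolute coordinate over all points and axes.
    GLfloat largest = 0.0f;
    for (NSUInteger i = 0; i < pointCount; i++) {
        largest = MAX(largest, fabsf(points[i].x));
        largest = MAX(largest, fabsf(points[i].y));
        largest = MAX(largest, fabsf(points[i].z));
    }
    if (largest == 0.0f) largest = 1.0f; // avoid a degenerate frustum

    // Push the cube [-largest, largest]^3 fully in front of the near plane.
    const GLfloat zNear = largest * 0.5f;
    const GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -(zNear + largest));

    // Symmetric frustum around the cube; the horizontal extent is widened by the aspect ratio.
    const GLfloat aspectRatio = (GLfloat)(self.view.bounds.size.width) / (GLfloat)(self.view.bounds.size.height);
    const GLKMatrix4 projectionMatrix = GLKMatrix4MakeFrustum(-largest * aspectRatio, largest * aspectRatio,
                                                              -largest, largest,
                                                              zNear, zNear + 2.0f * largest); // far plane at the cube's back face

    glUniformMatrix4fv(self.pointShader.uProjectionMatrix, 1, 0, GLKMatrix4Multiply(projectionMatrix, modelViewMatrix).m);
}

Here the zFar is derived from the cube size instead of being hard-coded to 10.0f, which also addresses the far-plane caveat above.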
