Alpha blending with two transparent textures not working correctly - iOS

I have a destination texture:
Here the whole texture is transparent (alpha = 0) except for the red part, which has an alpha value of 0.5. I use a rectangle plane to present this texture.
Then I have this source texture. It is also a transparent texture, with a black part that has an alpha value of 0.5. I use another rectangle plane to present this texture, and I change the MTLRenderPipelineDescriptor blending to:
pipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .one
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
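(With these factors the fixed-function blend computes out = src + dst * (1 - src.a), for both the color and the alpha channel.)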
Here blending works fine between the two textures.
Then I try to merge these two textures into the destination texture using an MTLComputeCommandEncoder. My kernel function:
kernel void compute(texture2d<float, access::read_write> des [[texture(0)]],
                    texture2d<float, access::read> src [[texture(1)]],
                    uint2 gid [[thread_position_in_grid]])
{
    float4 srcColor = src.read(gid);
    float4 desColor = des.read(gid);
    float srcAlpha = srcColor.a;
    float4 outColor = srcColor + desColor * (1 - srcAlpha);
    des.write(outColor, gid);
}
But after that the blended color is different from before: it is lighter than the one produced by the render-pass blending.
How do I properly blend two transparent textures in a kernel function? What is wrong with my solution?

I think that you are using premultiplied alpha...
Try this instead (which is not premultiplied alpha):
float4 srcColor = src.read(gid);
float4 desColor = des.read(gid);
float4 outColor;
outColor.a = srcColor.a + desColor.a * (1.0f - srcColor.a);
if (outColor.a == 0.0f) {
    outColor.r = 0.0f;
    outColor.g = 0.0f;
    outColor.b = 0.0f;
} else {
    outColor.r = (srcColor.r * srcColor.a + desColor.r * desColor.a * (1.0f - srcColor.a)) / outColor.a;
    outColor.g = (srcColor.g * srcColor.a + desColor.g * desColor.a * (1.0f - srcColor.a)) / outColor.a;
    outColor.b = (srcColor.b * srcColor.a + desColor.b * desColor.a * (1.0f - srcColor.a)) / outColor.a;
}
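Wrapped into a complete kernel with the question's texture bindings, that might look like this (a sketch, untested; the kernel name is arbitrary and bounds checking is omitted):
kernel void compositeStraightAlpha(texture2d<float, access::read_write> des [[texture(0)]],
                                   texture2d<float, access::read> src [[texture(1)]],
                                   uint2 gid [[thread_position_in_grid]])
{
    float4 srcColor = src.read(gid);
    float4 desColor = des.read(gid);
    float4 outColor;
    // Straight (non-premultiplied) alpha "over" compositing.
    outColor.a = srcColor.a + desColor.a * (1.0f - srcColor.a);
    outColor.rgb = (outColor.a == 0.0f)
        ? float3(0.0f)
        : (srcColor.rgb * srcColor.a + desColor.rgb * desColor.a * (1.0f - srcColor.a)) / outColor.a;
    des.write(outColor, gid);
}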

Related

Metal Shader Function that can deal with both RGB and YUV textures

I'm trying to teach myself the basics of computer graphics on the iPhone and Apple's Metal API. I'm trying to do something pretty basic, but I'm getting a little stuck.
What I want to do is just "texture a quad". Basically, I make a rectangle and I have an image texture that covers the rectangle. I can make that work for the basic case where the image texture just comes from an image of a known format, but I'm having trouble figuring out how to make my code a little more generic and able to handle different formats.
For example, sometimes the image texture comes from an image file which, once decoded, gives pixel data in RGB format. Sometimes my image texture actually comes from a video frame where the data is stored in YUV format.
Ideally, I'd want to create some sort of "sampler" object or function that can just hand me back an RGB color for a particular texture coordinate. The code where I prepare for rendering is the part that knows which format is being used, so it has enough information to figure out which type of sampler to create. For example, in the video-frame case it knows it's working with a video frame, so it creates a YUV sampler and passes it the relevant data. Then the shader code that just wants to read colors can ask for the color at some particular coordinates, and the YUV sampler does the work to compute the right RGB color. If I passed in an RGB sampler instead, it would just read the RGB data without doing any calculations.
I thought this would be really simple to do, but I can't figure it out. This has to be a common problem for graphics code that deals with textures in different formats or colorspaces. Am I missing something obvious?
How do you do this without writing a bunch of versions of all of your shaders?
Here are functions for transforming RGBA to YUVA and vice versa on the fly.
float4 rgba2yuva(float4 rgba)
{
    float4 yuva = float4(0.0);
    yuva.x = rgba.r * 0.299 + rgba.g * 0.587 + rgba.b * 0.114;
    yuva.y = rgba.r * -0.169 + rgba.g * -0.331 + rgba.b * 0.5 + 0.5;
    yuva.z = rgba.r * 0.5 + rgba.g * -0.419 + rgba.b * -0.081 + 0.5;
    yuva.w = rgba.a;
    return yuva;
}

float4 yuva2rgba(float4 yuva)
{
    float4 rgba = float4(0.0);
    rgba.r = yuva.x * 1.0 + yuva.y * 0.0 + yuva.z * 1.4;
    rgba.g = yuva.x * 1.0 + yuva.y * -0.343 + yuva.z * -0.711;
    rgba.b = yuva.x * 1.0 + yuva.y * 1.765 + yuva.z * 0.0;
    rgba.a = yuva.a;
    return rgba;
}
I adapted the code from here: https://github.com/libretro/glsl-shaders/blob/master/nnedi3/shaders/
Simple OpenGL shaders are quite straightforward to port to Metal. I pretty much just changed the datatype vec4 to float4. If you want a half-precision version, just substitute half4 for float4.
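For example, assuming both functions above live in the same .metal file, a fragment function could convert a sampled YUVA texel on the fly (a rough sketch; VertexOut, its texCoords field, and the texture binding are made-up names):
fragment float4 fragment_yuva_textured(VertexOut in [[stage_in]],
                                       texture2d<float> yuvaTexture [[texture(0)]])
{
    constexpr sampler s(filter::linear);
    float4 yuva = yuvaTexture.sample(s, in.texCoords);
    return yuva2rgba(yuva);   // convert to RGBA before any further shading
}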
Here is a Metal kernel function for the YUV case; you can combine it with @Jeshua Lacock's functions above to convert between the two.
// tweak your color offsets as desired
#include <metal_stdlib>
using namespace metal;
kernel void YUVColorConversion(texture2d<uint, access::read> yTexture [[texture(0)]],
                               texture2d<uint, access::read> uTexture [[texture(1)]],
                               texture2d<uint, access::read> vTexture [[texture(2)]],
                               texture2d<float, access::write> outTexture [[texture(3)]],
                               uint2 gid [[thread_position_in_grid]])
{
    float3 colorOffset = float3(0, -0.5, -0.5);
    float3x3 colorMatrix = float3x3(
        float3(1, 1, 1),
        float3(0, -0.344, 1.770),
        float3(1.403, -0.714, 0)
    );
    uint2 uvCoords = uint2(gid.x / 2, gid.y / 2);
    float y = yTexture.read(gid).r / 255.0;
    float u = uTexture.read(uvCoords).r / 255.0;
    float v = vTexture.read(uvCoords).r / 255.0;
    float3 yuv = float3(y, u, v);
    float3 rgb = colorMatrix * (yuv + colorOffset);
    outTexture.write(float4(float3(rgb), 1.0), gid);
}
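Expanded, the matrix multiplication above works out to R = Y + 1.403 (V - 0.5), G = Y - 0.344 (U - 0.5) - 0.714 (V - 0.5), B = Y + 1.770 (U - 0.5) (with Y, U, V already normalized to [0, 1]), which is approximately the usual full-range BT.601 YUV-to-RGB conversion.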
A good reference is here, and then you can build pipelines or variants for processing exactly what you need, like here:
#include <metal_stdlib>
#include <simd/simd.h>
#include <metal_texture>
#include <metal_matrix>
#include <metal_geometric>
#include <metal_math>
#include <metal_graphics>
#include "AAPLShaderTypes.h"
using namespace metal;
// Variables in constant address space.
constant float3 lightPosition = float3(0.0, 1.0, -1.0);
// Per-vertex input structure
struct VertexInput {
float3 position [[attribute(AAPLVertexAttributePosition)]];
float3 normal [[attribute(AAPLVertexAttributeNormal)]];
half2 texcoord [[attribute(AAPLVertexAttributeTexcoord)]];
};
// Per-vertex output and per-fragment input
typedef struct {
float4 position [[position]];
half2 texcoord;
half4 color;
} ShaderInOut;
// Vertex shader function
vertex ShaderInOut vertexLight(VertexInput in [[stage_in]],
constant AAPLFrameUniforms& frameUniforms [[ buffer(AAPLFrameUniformBuffer) ]],
constant AAPLMaterialUniforms& materialUniforms [[ buffer(AAPLMaterialUniformBuffer) ]]) {
ShaderInOut out;
// Vertex projection and translation
float4 in_position = float4(in.position, 1.0);
out.position = frameUniforms.projectionView * in_position;
// Per vertex lighting calculations
float4 eye_normal = normalize(frameUniforms.normal * float4(in.normal, 0.0));
float n_dot_l = dot(eye_normal.rgb, normalize(lightPosition));
n_dot_l = fmax(0.0, n_dot_l);
out.color = half4(materialUniforms.emissiveColor + n_dot_l);
// Pass through texture coordinate
out.texcoord = in.texcoord;
return out;
}
// Fragment shader function
fragment half4 fragmentLight(ShaderInOut in [[stage_in]],
texture2d<half> diffuseTexture [[ texture(AAPLDiffuseTextureIndex) ]]) {
constexpr sampler defaultSampler;
// Blend texture color with input color and output to framebuffer
half4 color = diffuseTexture.sample(defaultSampler, float2(in.texcoord)) * in.color;
return color;
}

DirectX 9 Normal Mapping Pixel Shader

I have a question about normal mapping in directx9 shader.
Currently my terrain shader output for normal map + diffuse color only results in this image.
Which looks good to me.
If I use an empty normal map image like this one,
my shader output for normal, diffuse and color map looks like this.
But if I use one including a color map I get a really strange result.
Does anyone have an idea what could cause this issue?
Here are some snippets.
float4 PS_TERRAIN(VSTERRAIN_OUTPUT In) : COLOR0
{
float4 fDiffuseColor;
float lightIntensity;
float3 bumpMap = 2.0f * tex2D( Samp_Bump, In.Tex.xy ).xyz-1.0f;
float3 bumpNormal = (bumpMap.x * In.Tangent) + (bumpMap.y * In.Bitangent) + (bumpMap.z * In.Normal);
bumpNormal = normalize(bumpNormal);
// Direction Light Test ( Test hardcoded )
float3 lightDirection = float3(0.0f, -0.5f, -0.2f);
float3 lightDir = -lightDirection;
// Bump
lightIntensity = saturate(dot( bumpNormal, lightDir));
// We are using a lightmap to do our alpha calculation for given pixel
float4 LightMaptest = tex2D( Samp_Lightmap, In.Tex.zw ) * 2.0f;
fDiffuseColor.a = LightMaptest.a;
if( !bAlpha )
fDiffuseColor.a = 1.0;
// Sample the pixel color from the texture using the sampler at this texture coordinate location.
float4 textureColor = tex2D( Samp_Diffuse, In.Tex.xy );
// Combine the color map value into the texture color.
textureColor = saturate(textureColor * LightMaptest);
textureColor.a = LightMaptest.a;
fDiffuseColor.rgb = saturate(lightIntensity * I_d).rgb;
fDiffuseColor = fDiffuseColor * textureColor; // If i enable this line it goes crazy
return fDiffuseColor;
}
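For reference, the normal reconstruction in the shader above works out to bumpNormal = normalize(b.x * Tangent + b.y * Bitangent + b.z * Normal), where b = 2 * tex2D(Samp_Bump, uv).xyz - 1 remaps the stored [0, 1] channels to [-1, 1]; a "flat" normal map texel of (0.5, 0.5, 1.0) therefore gives b = (0, 0, 1) and leaves the interpolated vertex normal unchanged.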

Applying Spotlights Over Dark Ambient Light - HLSL - Monogame

I wrote an HLSL shader for my Monogame project that uses ambient lighting to create a day/night cycle.
#if OPENGL
#define SV_POSITION POSITION
#define VS_SHADERMODEL vs_3_0
#define PS_SHADERMODEL ps_3_0
#else
#define VS_SHADERMODEL vs_4_0_level_9_1
#define PS_SHADERMODEL ps_4_0_level_9_1
#endif
sampler s0;
struct VertexShaderOutput
{
float4 Position : SV_POSITION;
float4 Color : COLOR0;
float2 TextureCoordinates : TEXCOORD0;
};
float ambient = 1.0f;
float percentThroughDay = 0.0f;
float4 MainPS(VertexShaderOutput input) : COLOR
{
float4 pixelColor = tex2D(s0, input.TextureCoordinates);
float4 outputColor = pixelColor;
// lighting intensity is gradient of pixel position
float Intensity = 1 + (1 - input.TextureCoordinates.y) * 1.3;
outputColor.r = outputColor.r / ambient * Intensity;
outputColor.g = outputColor.g / ambient * Intensity;
outputColor.b = outputColor.b / ambient * Intensity;
// sun set/rise blending
float exposeRed = (1 + (.39 - input.TextureCoordinates.y) * 8); // overexpose red
float exposeGreen = (1 + (.39 - input.TextureCoordinates.y) * 2); // some extra green for the blue pixels
float exposeBlue = (1 + (.39 - input.TextureCoordinates.y) * 6); // some extra blue
// happens over full screen
if (input.TextureCoordinates.y < 1.0f) {
float redAdder = max(1, (exposeRed * (percentThroughDay/0.25f))); // be at full exposure at 25% of day gone
float greenAdder = max(1, (exposeGreen * (percentThroughDay/0.25f))); // be at full exposure at 25% of day gone
float blueAdder = max(1, (exposeBlue * (percentThroughDay/0.25f))); // be at full exposure at 25% of day gone
// begin reducing adders
if (percentThroughDay >= 0.25f && percentThroughDay < 0.50f) {
redAdder = max(1, (exposeRed * (1-(percentThroughDay - 0.25f)/0.25f)));
greenAdder = max(1, (exposeGreen * (1-(percentThroughDay - 0.25f)/0.25f)));
blueAdder = max(1, (exposeBlue * (1-(percentThroughDay - 0.25f)/0.25f)));
}
//mid day
else if (percentThroughDay >= 0.50f && percentThroughDay < 0.75f) {
redAdder = 1;
greenAdder = 1;
blueAdder = 1;
}
// add adders back for sunset
else if (percentThroughDay >= 0.75f && percentThroughDay < 0.85f) {
redAdder = max(1, (exposeRed * ((percentThroughDay - 0.75f)/0.10f)));
greenAdder = max(1, (exposeGreen * ((percentThroughDay - 0.75f)/0.10f)));
blueAdder = max(1, (exposeBlue * ((percentThroughDay - 0.75f)/0.10f)));
}
// begin reducing adders
else if (percentThroughDay >= 0.85f) {
redAdder = max(1, (exposeRed * (1-(percentThroughDay - 0.85f)/0.15f)));
greenAdder = max(1, (exposeGreen * (1-(percentThroughDay - 0.85f)/0.15f)));
blueAdder = max(1, (exposeBlue * (1-(percentThroughDay - 0.85f)/0.15f)));
}
outputColor.r = outputColor.r * redAdder;
outputColor.g = outputColor.g * greenAdder;
outputColor.b = outputColor.b * blueAdder;
}
return outputColor;
}
technique ambientLightDayNight
{
pass P0
{
PixelShader = compile ps_2_0 MainPS();
}
};
This works how I want it to for the most part (it could definitely use some calculation optimization though).
However, I am now looking at adding spotlights in my game for the player to use. I followed along with this method which I got working independently of the ambientLight shader. It is a pretty simple shader that uses a lightMask.
sampler s0;
texture lightMask;
sampler lightSampler = sampler_state{Texture = lightMask;};
float4 PixelShaderLight(float2 coords: TEXCOORD0) : COLOR0
{
float4 color = tex2D(s0, coords);
float4 lightColor = tex2D(lightSampler, coords);
return color * lightColor;
}
technique Technique1
{
pass Pass1
{
PixelShader = compile ps_2_0 PixelShaderLight();
}
}
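Per pixel this pass simply computes finalColor = sceneColor * lightMaskColor, so wherever the light mask is black the output is black no matter what the scene (or the ambient pass) contains.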
My problem is now using both of these shaders together. My current method is to draw my game scene to a render target, apply the ambient light shader, and then finish by drawing the gamescene (with the ambient light now) to the client screen while applying the spotlight shader.
This brings up multiple issues:
Applying the spotlight shader after the ambient light completely blacks out anything around the light, when in reality the area surrounding the light should show the ambient light.
The light intensity (how bright the light is) calculated in the spotlight shader is too dull when it is "night" because it is calculating the light color based on the ambient light shader's output.
I've tried to apply the ambient light shader after the spotlight shader instead, but this just renders most of everything black because the ambient light calculates against a mostly black background.
I've tried adding some code to the spotlight shader to color black pixels to white in order to reveal the ambient light background, however the light intensity is still being calculated against the darker ambient light - resulting in a very dull light.
Another thought was to just modify my ambient light shader to take the lightMask as a param and just not apply the ambient light to lights marked on the light mask. Then I could just use the spotlight shader to apply the graident of the light and modify the color. But I was unsure if I should be cramming these two seemingly separate light effects into one pixel shader. When I tried this, my shader also didn't compile because there were too many arithmetic ops.
So my questions for everyone are:
Should I avoid cramming multiple effects into one pixel shader?
Generally, how would I apply spot lighting over an ambient light effect that can be "dark"?
EDIT
My solution: I did not end up using the spotlight shader, but I still draw the light mask with the texture given in the article, then pass that light mask to this ambient light shader and offset the texture gradient.
float4 MainPS(VertexShaderOutput input) : COLOR
{
float4 constant = 1.5f;
float4 pixelColor = tex2D(s0, input.TextureCoordinates);
float4 outputColor = pixelColor;
// lighting intensity is gradient of pixel position
float Intensity = 1 + (1 - input.TextureCoordinates.y) * 1.05;
outputColor.r = outputColor.r / ambient * Intensity;
outputColor.g = outputColor.g / ambient * Intensity;
outputColor.b = outputColor.b / ambient * Intensity;
// sun set/rise blending
float gval = (1 - input.TextureCoordinates.y); // replace 1 with .39 to lock to 39 percent of screen (this is how it was before)
float exposeRed = (1 + gval * 8); // overexpose red
float exposeGreen = (1 + gval * 2); // some extra green
float exposeBlue = (1 + gval * 4); // some extra blue
float quarterDayPercent = (percentThroughDay/0.25f);
float redAdder = max(1, (exposeRed * quarterDayPercent)); // be at full exposure at 25% of day gone
float greenAdder = max(1, (exposeGreen * quarterDayPercent)); // be at full exposure at 25% of day gone
float blueAdder = max(1, (exposeBlue * quarterDayPercent)); // be at full exposure at 25% of day gone
// begin reducing adders
if (percentThroughDay >= 0.25f ) {
float gradientVal1 = (1-(percentThroughDay - 0.25f)/0.25f);
redAdder = max(1, (exposeRed * gradientVal1));
greenAdder = max(1, (exposeGreen * gradientVal1));
blueAdder = max(1, (exposeBlue * gradientVal1));
}
//mid day
if (percentThroughDay >= 0.50f) {
redAdder = 1;
greenAdder = 1;
blueAdder = 1;
}
// add adders back for sunset
if (percentThroughDay >= 0.75f) {
float gradientVal2 = ((percentThroughDay - 0.75f)/0.10f);
redAdder = max(1, (exposeRed * gradientVal2));
greenAdder = max(1, (exposeGreen * gradientVal2));
blueAdder = max(1, (exposeBlue * gradientVal2));
}
// begin reducing adders
if (percentThroughDay >= 0.85f) {
float gradientVal3 = (1-(percentThroughDay - 0.85f)/0.15f);
redAdder = max(1, (exposeRed * gradientVal3));
greenAdder = max(1, (exposeGreen * gradientVal3));
blueAdder = max(1, (exposeBlue * gradientVal3));
}
outputColor.r = outputColor.r * redAdder;
outputColor.g = outputColor.g * greenAdder;
outputColor.b = outputColor.b * blueAdder;
// first check if we are in a lightMask light
float4 lightMaskColor = tex2D(lightSampler, input.TextureCoordinates);
if (lightMaskColor.r != 0.0f || lightMaskColor.g != 0.0f || lightMaskColor.b != 0.0f)
{
// we are in the light so don't apply ambient light
return pixelColor * (lightMaskColor + outputColor) * constant; // have to offset by outputColor here because the lightMask is pure black
}
return outputColor * pixelColor * constant; // must multiply by pixelColor here to offset the lightMask bounds. TODO: could try to restore original color by removing this multiplication and factoring in more of an offset on ln 91
}
To chain lights the way you want, you need a different approach. As you already encountered, chaining lights solely through the color won't work, because once the color has become black it can't be brightened anymore. To deal with multiple lights there are two typical approaches: forward shading and deferred shading. Each has its advantages and disadvantages, so you need to see which fits your situation better.
Forward Shading
This is the approach you tested by stuffing all lighting computations into a single shading pass. You add all light intensities together into a final light intensity and then multiply it with the color.
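Concretely, the single pass computes something like finalColor = albedo * (ambientIntensity + spotIntensity_1 + spotIntensity_2 + ...), so even where the ambient term is nearly zero a spotlight still brings the original color back.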
Pros are performance and simplicity; cons are the limit on the number of lights and more complex shader code.
Deferred Shading
This approach decouples individual lights from each other and can be used to draw scenes with very many lights. Each light needs the original scene color (albedo) to compute its part of the final image. Therefore you first render your scene without any lighting onto a texture (usually called the color buffer or albedo buffer). Then you render each light separately, multiplying it with the albedo and adding it to the final image. So even in the dark parts the original color comes back again under a light.
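Per light this amounts to accumulating finalColor += albedo * lightIntensity_i with additive blending into the final image, starting from the ambient contribution (or from black).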
Pros are the cleaner structure and the possibility of using many lights, even with different shapes; cons are the extra buffers and draw calls that have to be made.

iOS render-to-texture with Metal does not update render texture

I would like to use a fragment shader to render to a texture offscreen. Once this is done I want to use the result of that as the input for another fragment shader.
I create a texture and clear it with red (to know it is set). I use the render pass that is connected to my target texture and draw. I then use a blit command encoder to transfer the contents of that target texture to a buffer. The buffer contains red, so I know it is reading the texture correctly, but the drawing should have made the texture green, so something is wrong.
let textureDescriptor = MTLTextureDescriptor()
textureDescriptor.textureType = MTLTextureType.type2D
textureDescriptor.width = 2048
textureDescriptor.height = 1024
textureDescriptor.pixelFormat = .rgba8Unorm
textureDescriptor.storageMode = .shared
textureDescriptor.usage = [.renderTarget, .shaderRead, .shaderWrite]
bakeTexture = device.makeTexture(descriptor: textureDescriptor)
bakeRenderPass = MTLRenderPassDescriptor()
bakeRenderPass.colorAttachments[0].texture = bakeTexture
bakeRenderPass.colorAttachments[0].loadAction = .clear
bakeRenderPass.colorAttachments[0].clearColor = MTLClearColor(red:1.0,green:0.0,blue:0.0,alpha:1.0)
bakeRenderPass.colorAttachments[0].storeAction = .store
For drawing I do this:
let bakeCommandEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: bakeRenderPass)!
let vp = MTLViewport(originX:0, originY:0, width:2048, height:1024,znear:0.0,zfar:1.0)
bakeCommandEncoder.setViewport(vp)
bakeCommandEncoder.setCullMode(.none) // disable culling
// draw here. Fragment shader sets final color to float4(0.0,1.0,0.0,1.0);
bakeCommandEncoder.endEncoding()
let blitEncoder = commandBuffer.makeBlitCommandEncoder()!
blitEncoder.copy(...) // this works as my buffer is all red
blitEncoder.endEncoding()
Here is the vertex shader - it is based on an OpenGL vertex shader to dump out the uv-layout of the texture:
struct VertexOutBake {
float4 position [[position]];
float3 normal;
float3 tangent;
float3 binormal;
float3 worldPosition;
float2 texCoords;
};
vertex VertexOutBake vertex_main_bake(VertexInBake vertexIn [[stage_in]],
constant VertexUniforms &uniforms [[buffer(1)]])
{
VertexOutBake vertexOut;
float4 worldPosition = uniforms.modelMatrix * float4(vertexIn.position, 1);
vertexOut.worldPosition = worldPosition.xyz;
vertexOut.normal = normalize(vertexIn.normal);
vertexOut.tangent = normalize(vertexIn.tangent);
vertexOut.binormal = normalize(vertexIn.binormal);
vertexOut.texCoords = vertexIn.texCoords;
vertexOut.texCoords.y = 1.0 - vertexOut.texCoords.y; // flip image
// now use uv coordinates instead of 3D positions
vertexOut.position.x = vertexOut.texCoords.x * 2.0 - 1.0;
vertexOut.position.y = 1.0 - vertexOut.texCoords.y * 2.0;
vertexOut.position.z = 1.0;
vertexOut.position.w = 1.0;
return vertexOut;
}
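The last four assignments turn the (already flipped) texture coordinates into clip-space positions: x = 2u - 1 and y = 1 - 2v map u, v in [0, 1] onto the [-1, 1] clip range, so each triangle is rasterized at its location in the mesh's UV layout rather than at its projected 3D position.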
The buffer that gets filled as a result of the blit copy should be green, but it is red. This seems to mean that either the bakeTexture is not being written to by the fragment shader, or that it is written but some synchronization is missing, so its contents are not available at the time I do the copy.

Why sPos.z is used to get the texcoord in shadow mapping

Why use sPos.z here to get the texcoord?
Out.shadowCrd.x = 0.5 * (sPos.z + sPos.x);
Out.shadowCrd.y = 0.5 * (sPos.z - sPos.y);
Out.shadowCrd.z = 0;
Out.shadowCrd.w = sPos.z;
It is a shader that implements shadow mapping, from "Shaders for Game Programming and Artists".
The first pass renders the depth texture in light space (the light is the camera, looking towards the origin).
The second pass reads the depth and calculates the shadow.
Before this code, the model has already been transformed into light space.
Then the texcoord has to be calculated to read the depth texture.
But I can't understand the algorithm for calculating the texcoord. Why is sPos.z used here?
Here is the whole vertex shader of the second pass
float distanceScale;
float4 lightPos;
float4 view_position;
float4x4 view_proj_matrix;
float4x4 proj_matrix;
float time_0_X;
struct VS_OUTPUT
{
float4 Pos: POSITION;
float3 normal: TEXCOORD0;
float3 lightVec : TEXCOORD1;
float3 viewVec: TEXCOORD2;
float4 shadowCrd: TEXCOORD3;
};
VS_OUTPUT vs_main(float4 inPos: POSITION, float3 inNormal: NORMAL)
{
VS_OUTPUT Out;
// Animate the light position.
float3 lightPos;
lightPos.x = cos(1.321 * time_0_X);
lightPos.z = sin(0.923 * time_0_X);
lightPos.xz = 100 * normalize(lightPos.xz);
lightPos.y = 100;
// Project the object's position
Out.Pos = mul(view_proj_matrix, inPos);
// World-space lighting
Out.normal = inNormal;
Out.lightVec = distanceScale * (lightPos - inPos.xyz);
Out.viewVec = view_position - inPos.xyz;
// Create view vectors for the light, looking at (0,0,0)
float3 dirZ = -normalize(lightPos);
float3 up = float3(0,0,1);
float3 dirX = cross(up, dirZ);
float3 dirY = cross(dirZ, dirX);
// Transform into light's view space.
float4 pos;
inPos.xyz -= lightPos;
pos.x = dot(dirX, inPos);
pos.y = dot(dirY, inPos);
pos.z = dot(dirZ, inPos);
pos.w = 1;
// Project it into light space to determine the shadow
// map position
float4 sPos = mul(proj_matrix, pos);
// Use projective texturing to map the position of each fragment
// to its corresponding texel in the shadow map.
sPos.z += 10;
Out.shadowCrd.x = 0.5 * (sPos.z + sPos.x);
Out.shadowCrd.y = 0.5 * (sPos.z - sPos.y);
Out.shadowCrd.z = 0;
Out.shadowCrd.w = sPos.z;
return Out;
}
Pixel Shader:
float shadowBias;
float backProjectionCut;
float Ka;
float Kd;
float Ks;
float4 modelColor;
sampler ShadowMap;
sampler SpotLight;
float4 ps_main(
float3 inNormal: TEXCOORD0,
float3 lightVec: TEXCOORD1,
float3 viewVec: TEXCOORD2,
float4 shadowCrd: TEXCOORD3) : COLOR
{
// Normalize the normal
inNormal = normalize(inNormal);
// Radial distance and normalize light vector
float depth = length(lightVec);
lightVec /= depth;
// Standard lighting
float diffuse = saturate(dot(lightVec, inNormal));
float specular = pow(saturate(
dot(reflect(-normalize(viewVec), inNormal), lightVec)),
16);
// The depth of the fragment closest to the light
float shadowMap = tex2Dproj(ShadowMap, shadowCrd);
// A spot image of the spotlight
float spotLight = tex2Dproj(SpotLight, shadowCrd);
// If the depth is larger than the stored depth, this fragment
// is not the closest to the light, that is we are in shadow.
// Otherwise, we're lit. Add a bias to avoid precision issues.
float shadow = (depth < shadowMap + shadowBias);
// Cut back-projection, that is, make sure we don't light
// anything behind the light.
shadow *= (shadowCrd.w > backProjectionCut);
// Modulate with spotlight image
shadow *= spotLight;
// Shadow any light contribution except ambient
return Ka * modelColor +
(Kd * diffuse * modelColor + Ks * specular) * shadow;
}
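For reference, here is how the projective texturing works out (the only assumption being a standard perspective proj_matrix). tex2Dproj divides the texture coordinate by its w component before sampling, so with the values written in the vertex shader the pixel shader effectively samples at
u = shadowCrd.x / shadowCrd.w = 0.5 * (sPos.z + sPos.x) / sPos.z = 0.5 * (1 + sPos.x / sPos.z)
v = shadowCrd.y / shadowCrd.w = 0.5 * (sPos.z - sPos.y) / sPos.z = 0.5 * (1 - sPos.y / sPos.z)
sPos.x / sPos.z and sPos.y / sPos.z are roughly the perspective-divided coordinates of the vertex as seen from the light, in [-1, 1] for everything inside the light's frustum, so these expressions map them into the [0, 1] texture range (with v flipped, since texture coordinates grow downwards). Presumably sPos.z (after the +10 bias) is used as the divisor instead of sPos.w because, for this proj_matrix, it grows with the distance from the light much as w would; storing it in shadowCrd.w is also what the back-projection cut (shadowCrd.w > backProjectionCut) relies on.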
