Applying Spotlights Over Dark Ambient Light - HLSL - MonoGame - XNA

I wrote an HLSL shader for my MonoGame project that uses ambient lighting to create a day/night cycle.
#if OPENGL
#define SV_POSITION POSITION
#define VS_SHADERMODEL vs_3_0
#define PS_SHADERMODEL ps_3_0
#else
#define VS_SHADERMODEL vs_4_0_level_9_1
#define PS_SHADERMODEL ps_4_0_level_9_1
#endif
sampler s0;

struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float4 Color : COLOR0;
    float2 TextureCoordinates : TEXCOORD0;
};
float ambient = 1.0f;
float percentThroughDay = 0.0f;
float4 MainPS(VertexShaderOutput input) : COLOR
{
    float4 pixelColor = tex2D(s0, input.TextureCoordinates);
    float4 outputColor = pixelColor;

    // lighting intensity is a gradient of the pixel's y position
    float Intensity = 1 + (1 - input.TextureCoordinates.y) * 1.3;
    outputColor.r = outputColor.r / ambient * Intensity;
    outputColor.g = outputColor.g / ambient * Intensity;
    outputColor.b = outputColor.b / ambient * Intensity;

    // sunset/sunrise blending
    float exposeRed = (1 + (.39 - input.TextureCoordinates.y) * 8);   // overexpose red
    float exposeGreen = (1 + (.39 - input.TextureCoordinates.y) * 2); // some extra green for the blue pixels
    float exposeBlue = (1 + (.39 - input.TextureCoordinates.y) * 6);  // some extra blue

    // happens over the full screen
    if (input.TextureCoordinates.y < 1.0f) {
        // be at full exposure when 25% of the day is gone
        float redAdder = max(1, (exposeRed * (percentThroughDay / 0.25f)));
        float greenAdder = max(1, (exposeGreen * (percentThroughDay / 0.25f)));
        float blueAdder = max(1, (exposeBlue * (percentThroughDay / 0.25f)));

        // begin reducing the adders
        if (percentThroughDay >= 0.25f && percentThroughDay < 0.50f) {
            redAdder = max(1, (exposeRed * (1 - (percentThroughDay - 0.25f) / 0.25f)));
            greenAdder = max(1, (exposeGreen * (1 - (percentThroughDay - 0.25f) / 0.25f)));
            blueAdder = max(1, (exposeBlue * (1 - (percentThroughDay - 0.25f) / 0.25f)));
        }
        // midday
        else if (percentThroughDay >= 0.50f && percentThroughDay < 0.75f) {
            redAdder = 1;
            greenAdder = 1;
            blueAdder = 1;
        }
        // bring the adders back for sunset
        else if (percentThroughDay >= 0.75f && percentThroughDay < 0.85f) {
            redAdder = max(1, (exposeRed * ((percentThroughDay - 0.75f) / 0.10f)));
            greenAdder = max(1, (exposeGreen * ((percentThroughDay - 0.75f) / 0.10f)));
            blueAdder = max(1, (exposeBlue * ((percentThroughDay - 0.75f) / 0.10f)));
        }
        // begin reducing the adders again
        else if (percentThroughDay >= 0.85f) {
            redAdder = max(1, (exposeRed * (1 - (percentThroughDay - 0.85f) / 0.15f)));
            greenAdder = max(1, (exposeGreen * (1 - (percentThroughDay - 0.85f) / 0.15f)));
            blueAdder = max(1, (exposeBlue * (1 - (percentThroughDay - 0.85f) / 0.15f)));
        }

        outputColor.r = outputColor.r * redAdder;
        outputColor.g = outputColor.g * greenAdder;
        outputColor.b = outputColor.b * blueAdder;
    }
    return outputColor;
}

technique ambientLightDayNight
{
    pass P0
    {
        PixelShader = compile ps_2_0 MainPS();
    }
};
This works how I want it to for the most part (though it could definitely use some optimization of the calculations).
However, I am now looking at adding spotlights to my game for the player to use. I followed along with this method, which I got working independently of the ambientLight shader. It is a pretty simple shader that uses a lightMask.
sampler s0;
texture lightMask;
sampler lightSampler = sampler_state { Texture = lightMask; };

float4 PixelShaderLight(float2 coords : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(s0, coords);
    float4 lightColor = tex2D(lightSampler, coords);
    return color * lightColor;
}

technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderLight();
    }
}
My problem is now using both of these shaders together. My current method is to draw my game scene to a render target, apply the ambient light shader, and then finish by drawing the game scene (now with the ambient light) to the client screen while applying the spotlight shader.
This brings up multiple issues:
Applying the spotlight shader after the ambient light completely blacks out everything around the light, when in reality the area surrounding the light should show the ambient light.
The light intensity (how bright the light is) calculated in the spotlight shader is too dull when it is "night", because it is calculated from the ambient light shader's output.
I've tried applying the ambient light shader after the spotlight shader instead, but this just renders almost everything black, because the ambient light calculates against a mostly black background.
I've tried adding some code to the spotlight shader to color black pixels white in order to reveal the ambient background; however, the light intensity is still calculated against the darker ambient output, resulting in a very dull light.
Another thought was to modify my ambient light shader to take the lightMask as a parameter and simply not apply the ambient light to areas marked on the light mask. Then I could use the spotlight shader to apply the gradient of the light and modify the color. But I was unsure whether I should be cramming these two seemingly separate light effects into one pixel shader. When I tried it, my shader also didn't compile because there were too many arithmetic ops.
So my questions for everyone are:
Should I avoid cramming multiple effects into one pixel shader?
Generally, how would I apply spot lighting over an ambient light effect that can be "dark"?
EDIT
My solution: I did not end up using the spotlight shader, but I still draw the light mask with the texture given in the article, then pass that light mask to this ambient light shader and offset the texture gradient.
float4 MainPS(VertexShaderOutput input) : COLOR
{
    float4 constant = 1.5f;
    float4 pixelColor = tex2D(s0, input.TextureCoordinates);
    float4 outputColor = pixelColor;

    // lighting intensity is a gradient of the pixel's y position
    float Intensity = 1 + (1 - input.TextureCoordinates.y) * 1.05;
    outputColor.r = outputColor.r / ambient * Intensity;
    outputColor.g = outputColor.g / ambient * Intensity;
    outputColor.b = outputColor.b / ambient * Intensity;

    // sunset/sunrise blending
    float gval = (1 - input.TextureCoordinates.y); // replace 1 with .39 to lock to 39 percent of the screen (this is how it was before)
    float exposeRed = (1 + gval * 8);   // overexpose red
    float exposeGreen = (1 + gval * 2); // some extra green
    float exposeBlue = (1 + gval * 4);  // some extra blue

    // be at full exposure when 25% of the day is gone
    float quarterDayPercent = (percentThroughDay / 0.25f);
    float redAdder = max(1, (exposeRed * quarterDayPercent));
    float greenAdder = max(1, (exposeGreen * quarterDayPercent));
    float blueAdder = max(1, (exposeBlue * quarterDayPercent));

    // begin reducing the adders
    if (percentThroughDay >= 0.25f) {
        float gradientVal1 = (1 - (percentThroughDay - 0.25f) / 0.25f);
        redAdder = max(1, (exposeRed * gradientVal1));
        greenAdder = max(1, (exposeGreen * gradientVal1));
        blueAdder = max(1, (exposeBlue * gradientVal1));
    }
    // midday
    if (percentThroughDay >= 0.50f) {
        redAdder = 1;
        greenAdder = 1;
        blueAdder = 1;
    }
    // bring the adders back for sunset
    if (percentThroughDay >= 0.75f) {
        float gradientVal2 = ((percentThroughDay - 0.75f) / 0.10f);
        redAdder = max(1, (exposeRed * gradientVal2));
        greenAdder = max(1, (exposeGreen * gradientVal2));
        blueAdder = max(1, (exposeBlue * gradientVal2));
    }
    // begin reducing the adders again
    if (percentThroughDay >= 0.85f) {
        float gradientVal3 = (1 - (percentThroughDay - 0.85f) / 0.15f);
        redAdder = max(1, (exposeRed * gradientVal3));
        greenAdder = max(1, (exposeGreen * gradientVal3));
        blueAdder = max(1, (exposeBlue * gradientVal3));
    }

    outputColor.r = outputColor.r * redAdder;
    outputColor.g = outputColor.g * greenAdder;
    outputColor.b = outputColor.b * blueAdder;

    // first check if we are in a lightMask light
    float4 lightMaskColor = tex2D(lightSampler, input.TextureCoordinates);
    if (lightMaskColor.r != 0.0f || lightMaskColor.g != 0.0f || lightMaskColor.b != 0.0f)
    {
        // we are in the light, so don't apply the ambient light
        return pixelColor * (lightMaskColor + outputColor) * constant; // have to offset by outputColor here because the lightMask is pure black
    }
    return outputColor * pixelColor * constant; // must multiply by pixelColor here to offset the lightMask bounds. TODO: could try to restore the original color by removing this multiplication and factoring in more of an offset on ln 91
}

To chain lights the way you want, you need a different approach. As you already encountered, chaining lights solely through the color won't work: once a pixel has gone black, it can't be brightened anymore. To deal with multiple lights, there are two typical approaches: forward shading and deferred shading. Each has its advantages and disadvantages, so you need to decide which fits your situation better.
Forward Shading
This is the approach you tested by stuffing all lighting computations into a single shading pass: you add all light intensities together into a final light intensity and then multiply that with the color.
Pros are performance and simplicity; cons are the limit on the number of lights and the more complex shader code.
Deferred Shading
This approach decouples individual lights from each other and can be used to draw scenes with a large number of lights. Each light needs the original scene color (albedo) to compute its part of the final image, so you first render the scene without any lighting onto a texture (usually called the color buffer or albedo buffer). Then you render each light separately, multiplying it by the albedo and adding the result to the final image. That way, even in the dark parts the original color comes back wherever a light touches it.
Pros are the cleaner structure and the ability to use many lights, even with different shapes; cons are the extra buffers and draw calls.
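As a rough illustration of the forward approach applied to the question's setup, here is a minimal sketch; ambientLevel is a hypothetical uniform (0 = midnight, 1 = midday), and the spotlight mask is assumed to be bound the same way as in the question's spotlight shader. The point is that the light intensities are summed first and the unlit color is multiplied only once:
sampler s0; // unlit scene (albedo)
texture lightMask;
sampler lightSampler = sampler_state { Texture = lightMask; };
float ambientLevel; // hypothetical: 0 = midnight, 1 = midday

float4 ForwardLightPS(float2 coords : TEXCOORD0) : COLOR0
{
    float4 albedo = tex2D(s0, coords);
    float4 spot = tex2D(lightSampler, coords);
    // Sum the light contributions before applying them, so a spotlit
    // pixel stays bright even when the ambient term is near zero.
    float4 light = saturate(ambientLevel + spot);
    return albedo * light;
}
Chaining the two original shaders instead multiplies the albedo by each light in turn, which is why the spotlight pass blacks out everything the mask doesn't cover.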

Related

Alpha blending with two transparent textures not working correctly

I have a destination texture:
Here the whole texture will be transparent (alpha = 0) except for the red part. The red part will have an alpha value of 0.5. I used a rectangle plane to present this texture.
Then I have this source texture. It is also a transparent texture, with a black part that has an alpha value of 0.5. I used another rectangle plane to present this texture, and I changed the MTLRenderPipelineDescriptor blending to
pipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .one
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
Here blending works fine between the two textures.
Then I try to merge these two textures into one destination texture using an MTLComputeCommandEncoder. My kernel function:
kernel void compute(
    texture2d<float, access::read_write> des [[texture(0)]],
    texture2d<float, access::read> src [[texture(1)]],
    uint2 gid [[thread_position_in_grid]])
{
    float4 srcColor = src.read(gid);
    float4 desColor = des.read(gid);
    float srcAlpha = srcColor.a;
    float4 outColor = srcColor + desColor * (1 - srcAlpha);
    des.write(outColor, gid);
}
But after that, the blended color is different from before; it is lighter than the previous one.
How do I properly blend two transparent textures in a kernel function? What is wrong with my solution?
I think that you are using premultiplied alpha...
Try this instead (which is for non-premultiplied, straight alpha):
float4 srcColor = src.read(gid);
float4 desColor = des.read(gid);
float4 outColor;
outColor.a = srcColor.a + desColor.a * (1.0f - srcColor.a);
if (outColor.a == 0.0f) {
    outColor.r = 0.0f;
    outColor.g = 0.0f;
    outColor.b = 0.0f;
} else {
    outColor.r = (srcColor.r * srcColor.a + desColor.r * desColor.a * (1.0f - srcColor.a)) / outColor.a;
    outColor.g = (srcColor.g * srcColor.a + desColor.g * desColor.a * (1.0f - srcColor.a)) / outColor.a;
    outColor.b = (srcColor.b * srcColor.a + desColor.b * desColor.a * (1.0f - srcColor.a)) / outColor.a;
}
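As a note on the design choice: if both textures are instead kept in premultiplied-alpha form, the original srcColor + desColor * (1 - srcAlpha) line is already the standard "over" operator; the mismatch in the question comes from compositing straight-alpha content with the premultiplied formula.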

Why sPos.z is used to get the texcoord in shadow mapping

Why use sPos.z here to get the texcoord?
Out.shadowCrd.x = 0.5 * (sPos.z + sPos.x);
Out.shadowCrd.y = 0.5 * (sPos.z - sPos.y);
Out.shadowCrd.z = 0;
Out.shadowCrd.w = sPos.z;
It is a shader that implements shadow mapping, from "Shaders for Game Programming and Artists".
The first pass renders a depth texture in light space (the light is the camera and looks towards the origin).
The second pass reads the depth and calculates the shadow.
Before this code, the model has already been transformed into light space.
Then the texcoord should be calculated to read the depth texture.
But I can't understand the algorithm for calculating the texcoord. Why is sPos.z here?
Here is the whole vertex shader of the second pass
float distanceScale;
float4 lightPos;
float4 view_position;
float4x4 view_proj_matrix;
float4x4 proj_matrix;
float time_0_X;
struct VS_OUTPUT
{
    float4 Pos : POSITION;
    float3 normal : TEXCOORD0;
    float3 lightVec : TEXCOORD1;
    float3 viewVec : TEXCOORD2;
    float4 shadowCrd : TEXCOORD3;
};
VS_OUTPUT vs_main(float4 inPos: POSITION, float3 inNormal: NORMAL)
{
    VS_OUTPUT Out;

    // Animate the light position.
    float3 lightPos;
    lightPos.x = cos(1.321 * time_0_X);
    lightPos.z = sin(0.923 * time_0_X);
    lightPos.xz = 100 * normalize(lightPos.xz);
    lightPos.y = 100;

    // Project the object's position
    Out.Pos = mul(view_proj_matrix, inPos);

    // World-space lighting
    Out.normal = inNormal;
    Out.lightVec = distanceScale * (lightPos - inPos.xyz);
    Out.viewVec = view_position - inPos.xyz;

    // Create view vectors for the light, looking at (0,0,0)
    float3 dirZ = -normalize(lightPos);
    float3 up = float3(0,0,1);
    float3 dirX = cross(up, dirZ);
    float3 dirY = cross(dirZ, dirX);

    // Transform into the light's view space.
    float4 pos;
    inPos.xyz -= lightPos;
    pos.x = dot(dirX, inPos);
    pos.y = dot(dirY, inPos);
    pos.z = dot(dirZ, inPos);
    pos.w = 1;

    // Project it into light space to determine the shadow
    // map position
    float4 sPos = mul(proj_matrix, pos);

    // Use projective texturing to map the position of each fragment
    // to its corresponding texel in the shadow map.
    sPos.z += 10;
    Out.shadowCrd.x = 0.5 * (sPos.z + sPos.x);
    Out.shadowCrd.y = 0.5 * (sPos.z - sPos.y);
    Out.shadowCrd.z = 0;
    Out.shadowCrd.w = sPos.z;

    return Out;
}
Pixel Shader:
float shadowBias;
float backProjectionCut;
float Ka;
float Kd;
float Ks;
float4 modelColor;
sampler ShadowMap;
sampler SpotLight;
float4 ps_main(
    float3 inNormal: TEXCOORD0,
    float3 lightVec: TEXCOORD1,
    float3 viewVec: TEXCOORD2,
    float4 shadowCrd: TEXCOORD3) : COLOR
{
    // Normalize the normal
    inNormal = normalize(inNormal);

    // Radial distance and normalized light vector
    float depth = length(lightVec);
    lightVec /= depth;

    // Standard lighting
    float diffuse = saturate(dot(lightVec, inNormal));
    float specular = pow(saturate(
        dot(reflect(-normalize(viewVec), inNormal), lightVec)),
        16);

    // The depth of the fragment closest to the light
    float shadowMap = tex2Dproj(ShadowMap, shadowCrd);

    // A spot image of the spotlight
    float spotLight = tex2Dproj(SpotLight, shadowCrd);

    // If the depth is larger than the stored depth, this fragment
    // is not the closest to the light, that is, we are in shadow.
    // Otherwise, we're lit. Add a bias to avoid precision issues.
    float shadow = (depth < shadowMap + shadowBias);

    // Cut back-projection, that is, make sure we don't light
    // anything behind the light.
    shadow *= (shadowCrd.w > backProjectionCut);

    // Modulate with the spotlight image
    shadow *= spotLight;

    // Shadow any light contribution except ambient
    return Ka * modelColor +
        (Kd * diffuse * modelColor + Ks * specular) * shadow;
}
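For reference on the mechanics: tex2Dproj divides the coordinate by its w component before sampling, so with w = sPos.z the lookup performs the perspective divide and remaps the post-projection x and y into the [0, 1] texture range. A hedged sketch of the equivalent arithmetic (uv is an illustrative name):
// tex2Dproj(ShadowMap, shadowCrd) samples approximately like this:
float2 uv;
uv.x = shadowCrd.x / shadowCrd.w; // 0.5 * (sPos.z + sPos.x) / sPos.z = 0.5 + 0.5 * (sPos.x / sPos.z)
uv.y = shadowCrd.y / shadowCrd.w; // 0.5 * (sPos.z - sPos.y) / sPos.z = 0.5 - 0.5 * (sPos.y / sPos.z)
float storedDepth = tex2D(ShadowMap, uv);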

Using shaders from Shadertoy in Interface Builder (Xcode)

I'm attempting to see what shaders look like in Interface Builder using SpriteKit, and would like to use some of the shaders at ShaderToy. To do this, I created a "shader.fsh" file, a scene file, and added a color sprite to the scene, giving it a custom shader (shader.fsh).
While very basic shaders seem to work:
void main() {
    gl_FragColor = vec4(0.0,1.0,0.0,1.0);
}
Any attempt I make to convert shaders from ShaderToy causes Xcode to freeze up (spinning color ball) as soon as it tries to render them.
The shader I am working with for example, is this one:
#define M_PI 3.1415926535897932384626433832795

float rand(vec2 co)
{
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float size = 30.0;
    float prob = 0.95;
    vec2 pos = floor(1.0 / size * fragCoord.xy);
    float color = 0.0;
    float starValue = rand(pos);
    if (starValue > prob)
    {
        vec2 center = size * pos + vec2(size, size) * 0.5;
        float t = 0.9 + 0.2 * sin(iGlobalTime + (starValue - prob) / (1.0 - prob) * 45.0);
        color = 1.0 - distance(fragCoord.xy, center) / (0.5 * size);
        color = color * t / (abs(fragCoord.y - center.y)) * t / (abs(fragCoord.x - center.x));
    }
    else if (rand(fragCoord.xy / iResolution.xy) > 0.996)
    {
        float r = rand(fragCoord.xy);
        color = r * (0.25 * sin(iGlobalTime * (r * 5.0) + 720.0 * r) + 0.75);
    }
    fragColor = vec4(vec3(color), 1.0);
}
I've tried:
Replacing mainImage() with main(void) (so that it will be called)
Replacing the iXxxxx variables (iGlobalTime, iResolution) and fragCoord variables with their related variables (based on the suggestions here)
Replacing some of the variables (iGlobalTime)...
While changing mainImage to main() and swapping out the variables got it to work without error in the TinyShading realtime tester app, the outcome is always the same in Xcode (spinning ball, freeze). Any advice here would be helpful, as there is surprisingly little information currently available on the topic.
I managed to get this working in SpriteKit using SKShader. I've been able to render every shader from ShaderToy that I've attempted so far. The only exception is that you must remove any code using iMouse, since there is no mouse in iOS. I did the following...
1) Change the mainImage function declaration in the ShaderToy to...
void main(void) {
...
}
The ShaderToy mainImage function has an input named fragCoord. In iOS, this is globally available as gl_FragCoord, so your main function no longer needs any inputs.
2) Do a replace all to change the following from their ShaderToy names to their iOS names...
fragCoord becomes gl_FragCoord
fragColor becomes gl_FragColor
iGlobalTime becomes u_time
Note: There are more that I haven't encountered yet. I'll update as I do
3) Providing iResolution is slightly more involved...
iResolution is the viewport size (in pixels), which translates to the sprite size in SpriteKit. This used to be available as u_sprite_size in iOS, but it has been removed. Luckily, Apple provides a nice example of how to inject it into your shader using uniforms in their SKShader documentation.
However, as stated in the Shader Inputs section of ShaderToy, the type of iResolution is vec3 (x, y, and z), as opposed to u_sprite_size, which is vec2 (x and y). I have yet to see a single ShaderToy shader that uses the z value of iResolution, so we can simply use a z value of zero. I modified the example in the Apple documentation to provide my shader an iResolution of type vec3, like so...
let uniformBasedShader = SKShader(fileNamed: "YourShader.fsh")
let sprite = SKSpriteNode()
sprite.shader = uniformBasedShader
let spriteSize = vector_float3(
    Float(sprite.frame.size.width),  // x
    Float(sprite.frame.size.height), // y
    Float(0.0)                       // z - never used
)
uniformBasedShader.uniforms = [
    SKUniform(name: "iResolution", vectorFloat3: spriteSize)
]
That's it :)
Here is the change to the shader that works when loaded as a shader with Swift:
#define M_PI 3.1415926535897932384626433832795

float rand(vec2 co);
float rand(vec2 co)
{
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}

void main()
{
    float size = 50.0; // Item 1:
    float prob = 0.95; // Item 2:
    vec2 pos = floor(1.0 / size * gl_FragCoord.xy);
    float color = 0.0;
    float starValue = rand(pos);
    if (starValue > prob)
    {
        vec2 center = size * pos + vec2(size, size) * 0.5;
        float t = 0.9 + 0.2 * sin(u_time + (starValue - prob) / (1.0 - prob) * 45.0); // Item 3:
        color = 1.0 - distance(gl_FragCoord.xy, center) / (0.9 * size);
        color = color * t / (abs(gl_FragCoord.y - center.y)) * t / (abs(gl_FragCoord.x - center.x));
    }
    else if (rand(v_tex_coord) > 0.996)
    {
        float r = rand(gl_FragCoord.xy);
        color = r * (0.25 * sin(u_time * (r * 5.0) + 720.0 * r) + 0.75);
    }
    gl_FragColor = vec4(vec3(color), 1.0);
}
Play with Item 1 to change the number of stars in the sky: the smaller the number, the more stars. I like the value to be around 50; not too dense.
Item 2 changes the randomness, or how close together the stars will appear: 1 = none, 0.1 = side by side; around 0.75 gives a nice feel.
Item 3 is where most of the magic happens: this is the size and pulse of the stars.
float t = 0.9
Changing 0.9 will scale the initial star size up or down; a nice value is 1.4, not too big, not too small.
float t = 0.9 + 0.2
Changing the second value in this equation, 0.2, will increase the pulse-effect width of the stars proportionally to the original size; with 1.4 I like a value of 1.2.
To add the shader to your Swift project, add a sprite to the scene the size of the screen, then attach the shader like this:
let backgroundImage = SKSpriteNode()
backgroundImage.texture = textureAtlas.textureNamed("any")
backgroundImage.size = screenSize
let shader = SKShader(fileNamed: "nightSky.fsh")
backgroundImage.shader = shader

Applying a DirectX shader to a rotated texture in XNA

I had a dynamic light shader for which the shaded sprite was fine in my own test program, but it started resembling an eclipse once I imported it into my friend's physics-based game. I narrowed it down by simplifying the gradient to be purely based on the X value within the shape and making the outside of the circle in the sprite red, but as you can see, the rotation continues to cause problems (I can't post images, so here are links to the album).
Circle at different rotations (not in order, but labelled by radian values): http://imgur.com/a/Preth
Everything I researched about matrix math says I am using the correct formula for rotation, but I figure maybe I'm doing something wrong. Here is my .fx shader code:
float rotationrads; /* assumed rotation is in radians */
sampler TextureSampler: register(s0);

float4 staticlight(float2 Tex: TEXCOORD0) : COLOR0
{
    float4 Color = tex2D(TextureSampler, Tex);
    float2 NewTex;

    /* Get the new X and Y values by applying the UV formula with the rotation */
    NewTex.x = (Tex.x * cos(rotationrads)) - (Tex.y * sin(rotationrads));
    NewTex.y = (Tex.y * sin(rotationrads)) + (Tex.y * cos(rotationrads));

    if (Color.a > 0.0)
    {
        Color.r = (Color.r * NewTex.x);
        Color.g = (Color.g * NewTex.x);
        Color.b = (Color.b * NewTex.x);
    }
    else
    {
        Color.r = 100;
        Color.g = 0;
        Color.b = 0;
        Color.a = 100;
    }
    return Color;
}

technique StaticLightOnly
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 staticlight();
    }
}
If anyone has experience with sprite-based rotation in 2D shaders, I'd appreciate any help with this! Thanks in advance!
Because rotations are performed about the origin, you have to move the rotation center (0.5, 0.5) to the origin, execute the rotation, and then undo the translation.
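A minimal sketch of that translate-rotate-translate idea in HLSL, assuming texture coordinates in [0, 1] with the sprite's center at (0.5, 0.5); the function name and the reuse of the X-based gradient are illustrative, not the asker's final code:
float rotationrads;
sampler TextureSampler : register(s0);

float4 staticlightCentered(float2 Tex : TEXCOORD0) : COLOR0
{
    // Move the rotation center (0.5, 0.5) to the origin.
    float2 p = Tex - float2(0.5, 0.5);

    // Standard 2D rotation: x' = x*cos - y*sin, y' = x*sin + y*cos.
    float s = sin(rotationrads);
    float c = cos(rotationrads);
    float2 rotated = float2(p.x * c - p.y * s, p.x * s + p.y * c);

    // Undo the translation so the coordinates are back in [0, 1].
    float2 NewTex = rotated + float2(0.5, 0.5);

    float4 Color = tex2D(TextureSampler, Tex);
    Color.rgb *= NewTex.x; // same X-based gradient as the original shader
    return Color;
}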

Dot Product and Luminance / FindMyiCone

All,
I have a basic question that I am struggling with here. When you look at the FindMyiCone sample code from WWDC 2010, you will see this:
static const uint8_t orangeColor[] = {255, 127, 0};
uint8_t referenceColor[3];

// Remove luminance
static inline void normalize( const uint8_t colorIn[], uint8_t colorOut[] ) {
    // Dot product
    int sum = 0;
    for (int i = 0; i < 3; i++)
        sum += colorIn[i] / 3;
    for (int j = 0; j < 3; j++)
        colorOut[j] = (float) ((colorIn[j] / (float) sum) * 255);
}
And then it is called:
normalize(orangeColor, referenceColor);
Running the debugger, it converts BGRA (Red 255, Green 127, Blue 0) to (Red 0, Green 255, Blue 0). I have looked on the web and SO for details on luminance and dot products, and there is really not much information.
1- Can someone guide me on what this function is doing?
2- Can you point me to some helpful topics/primers online as well?
Thanks again
KMB
What they're trying to do is track a particular color across variations in brightness, so they're normalizing for the luminance of the color. I do something similar in the fragment shader I use in a color tracking example based on a GPU Gems paper from Apple, as well as in the ColorObjectTracking sample application in my GPUImage framework:
vec3 normalizeColor(vec3 color)
{
    return color / max(dot(color, vec3(1.0/3.0)), 0.3);
}

vec4 maskPixel(vec3 pixelColor, vec3 maskColor)
{
    float d;
    vec4 calculatedColor;

    // Compute the distance between the current pixel color and the reference color
    d = distance(normalizeColor(pixelColor), normalizeColor(maskColor));

    // If the color difference is larger than the threshold, return black.
    calculatedColor = (d > threshold) ? vec4(0.0) : vec4(1.0);

    // Multiply the color by the texture
    return calculatedColor;
}
The above calculation takes the average of the three color components by multiplying each channel by 1/3 and then summing them (that's what the dot product does here). It then divides each color channel by this average (clamped to a floor of 0.3 so that very dark pixels don't blow up the division) to arrive at a normalized color.
The distance between this normalized color and the target one is calculated, and if it is within a certain threshold, the pixel is marked as being of that color.
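For example, the reference orange maps to roughly (1.0, 0.5, 0.0) in floating-point RGB; its channel average is about 0.5, so it normalizes to roughly (2.0, 1.0, 0.0). A dimmer orange such as (0.5, 0.25, 0.0) has an average near 0.25 and normalizes to the same (2.0, 1.0, 0.0), which is what makes the comparison insensitive to overall brightness.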
This is just one way of determining proximity of one color to another. Another way is to convert the RGB values into Y, Cr, and Cb (Y, U, and V) components and then take the distance between just the chrominance portions (Cr and Cb):
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);
float maskY = 0.2989 * colorToReplace.r + 0.5866 * colorToReplace.g + 0.1145 * colorToReplace.b;
float maskCr = 0.7132 * (colorToReplace.r - maskY);
float maskCb = 0.5647 * (colorToReplace.b - maskY);
float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
float Cr = 0.7132 * (textureColor.r - Y);
float Cb = 0.5647 * (textureColor.b - Y);
float blendValue = 1.0 - smoothstep(thresholdSensitivity, thresholdSensitivity + smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));
This code is what I use in a chroma keying shader, and it's based on a similar calculation that Apple uses in one of their sample applications. Which one is best can depend on the particular situation you're facing.
