Image thresholding in Lua - LÖVE

In order to have only one input image for a "spread-like" effect, I would like to perform a threshold operation on some drawable, or find any other way that works.
Are there any such tools in LÖVE2D/Lua?

I'm not exactly sure about the desired outcome (the "spread-like" effect), but for thresholding your best option is a pixel shader, something like this:
extern float threshold; // external variable set from our Lua script
vec4 effect(vec4 color, Image tex, vec2 texture_coords, vec2 screen_coords)
{
    vec4 texturecolor = Texel(tex, texture_coords); // default shader code
    // get the average color of the pixel
    float average = (texturecolor.r + texturecolor.g + texturecolor.b) / 3.0;
    // set the alpha of the pixel to 0 if the average of RGB is below the threshold
    if (average < threshold) {
        texturecolor.a = 0.0;
    }
    return texturecolor * color; // default shader code
}
This code calculates the average of the RGB channels for each pixel, and if the average is below the threshold, it sets the alpha of that pixel to 0 to make it invisible.
To use the pixel effect in your code, you need to do something like this (only once, perhaps in love.load):
shader = love.graphics.newShader([==[ ... shader code above ... ]==])
and when drawing the image:
love.graphics.setShader(shader)
love.graphics.draw(img)
love.graphics.setShader()
To adjust the threshold:
shader:send("threshold", number) --0 to 1 float
References:
LÖVE Shader object
love.graphics.newShader for examples of the default shader code

Related

Shadow Mapping - Space Transformations are going bad

I am currently studying shadow mapping, and my biggest issue right now is the transformations between spaces. These are my current working theory and steps.
Pass 1:
Get depth of pixel from camera, store in depth buffer
Get depth of pixel from light, store in another buffer
Pass 2:
Use the texture coordinate to sample the camera's depth buffer at the current pixel
Convert that depth to a view-space position by multiplying the projection-space coordinate by the invProj matrix (also doing a perspective divide)
Take that view-space position and multiply it by invV (the camera's inverse view matrix) to get a world-space position
Multiply the world-space position by the light's viewProjection matrix
Perspective-divide that projection-space coordinate and remap it into [0..1] to sample from the light's depth buffer
Compare the current depth from the light with the closest (sampled) depth; if the current depth > closest depth, the pixel is in shadow
Shader Code
Pass 1:
PS_INPUT vs(VS_INPUT input) {
    output.pos = mul(input.vPos, mvp);
    output.cameraDepth = output.pos.zw;
    ..
    float4 vPosInLight = mul(input.vPos, m);
    vPosInLight = mul(vPosInLight, light.viewProj);
    output.lightDepth = vPosInLight.zw;
}
PS_OUTPUT ps(PS_INPUT input){
    float cameraDepth = input.cameraDepth.x / input.cameraDepth.y;
    // Bundle cameraDepth in the alpha channel of a normal map.
    output.normal = float4(input.normal, cameraDepth);
    // 4 lights in total -- although only 1 is active right now. Going to use r/g/b/a for each light's depth.
    output.lightDepths.r = input.lightDepth.x / input.lightDepth.y;
}
Pass 2 (Screen Quad):
float4 ps(PS_INPUT input) : SV_TARGET{
    float4 pixelPosView = depthToViewSpace(input.texCoord);
    ..
    float4 pixelPosWorld = mul(pixelPosView, invV);
    float4 pixelPosLight = mul(pixelPosWorld, light.viewProj);
    float shadow = shadowCalc(pixelPosLight);
    // For testing / visualisation
    return float4(shadow, shadow, shadow, 1);
}
float4 depthToViewSpace(float2 xy) {
    // Get pixel depth from camera by sampling the current texcoord.
    // Extract the alpha channel as this holds the depth value.
    // Then, transform from [0..1] to [-1..1]
    float z = (_normal.Sample(_sampler, xy).a) * 2 - 1;
    float x = xy.x * 2 - 1;
    float y = (1 - xy.y) * 2 - 1;
    float4 vProjPos = float4(x, y, z, 1.0f);
    float4 vPositionVS = mul(vProjPos, invP);
    vPositionVS = float4(vPositionVS.xyz / vPositionVS.w, 1);
    return vPositionVS;
}
float shadowCalc(float4 pixelPosL) {
    // Transform pixelPosLight from [-1..1] to [0..1]
    float3 projCoords = (pixelPosL.xyz / pixelPosL.w) * 0.5 + 0.5;
    float closestDepth = _lightDepths.Sample(_sampler, projCoords.xy).r;
    float currentDepth = projCoords.z;
    return currentDepth > closestDepth; // Supposed to have bias, but for now I just want shadows working haha
}
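As an aside, the bias that the comment in shadowCalc alludes to usually just offsets the depth comparison. A minimal sketch, assuming a hand-tuned constant (the 0.005 here is only a typical starting value, and in practice it is often tuned per scene or slope-scaled):
float shadowCompare(float currentDepth, float closestDepth) {
    const float bias = 0.005; // hypothetical constant; tune per scene
    // In shadow when the receiver is farther from the light than the stored
    // depth, minus a small offset to avoid self-shadowing ("shadow acne").
    return (currentDepth - bias) > closestDepth ? 1.0 : 0.0;
}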
CPP Matrices
// (Position, LookAtPos, UpDir)
auto lightView = XMMatrixLookAtLH(XMLoadFloat4(&pos4), XMVectorSet(0,0,0,1), XMVectorSet(0,1,0,0));
// (FOV, AspectRatio (1000/680), NEAR, FAR)
auto lightProj = XMMatrixPerspectiveFovLH(1.57f , 1.47f, 0.01f, 10.0f);
XMStoreFloat4x4(&_cLightBuffer.light.viewProj, XMMatrixTranspose(XMMatrixMultiply(lightView, lightProj)));
Current Outputs
White signifies that a shadow should be projected there. Black indicates no shadow.
CameraPos (0, 2.5, -2)
CameraLookAt (0, 0, 0)
CameraFOV (1.57)
CameraNear (0.01)
CameraFar (10.0)
LightPos (0, 2.5, -2)
LightLookAt (0, 0, 0)
LightFOV (1.57)
LightNear (0.01)
LightFar (10.0)
If I change the CameraPosition to be (0, 2.5, 2), basically just flipped on the Z axis, this is the result.
Obviously a shadow shouldn't change its projection depending on where the observer is, so I think I'm making a mistake with invV. But I really don't know for sure. I've debugged the light's viewProj matrix, and the values seem correct going from CPU to GPU. It's also entirely possible I've misunderstood some of the theory along the way, because this is quite a tricky technique for me.
Aha! Found my problem. It was a silly mistake: I was calculating the depth of pixels from each light, but storing them in a texture that was based on the view of the camera. The following image should explain my mistake better than I can with words.
For future reference, the solution I settled on was to scrap my idea of storing light depths in texture channels. Instead, I basically do a new pass for each light and bind a unique depth-stencil texture to render the geometry to. When I want to do the light calculations, I bind each of the depth textures to a shader resource slot and go from there. Obviously this doesn't scale well with many lights, but for my student project, where I'm only required to have 2 shadow casters, it suffices.
_context->DrawIndexed(indexCount, 0, 0); //Draw to regular render target
_sunlight->use(1, _context); //Use sunlight shader (basically just runs a Vertex Shader & Null Pixel shader so depth can be written to depth map)
_sunlight->bindDSVSetNullRenderTarget(_context);
_context->DrawIndexed(indexCount, 0, 0); //Draw to sunlight depth target
bindDSVSetNullRenderTarget(ctx) {
    ID3D11RenderTargetView* nullrv = { nullptr };
    ctx->OMSetRenderTargets(1, &nullrv, _sunlightDepthStencilView);
}
//The purpose of setting a null render target before doing the draw call is
//that a draw call with only a depth target bound is much faster.
//(At least I believe so, from my reading online)

Linear Depth to World Position

I have the following fragment and vertex shaders.
HLSL code
// Vertex shader
//-----------------------------------------------------------------------------------
void mainVP(
    float4 position : POSITION,
    out float4 outPos : POSITION,
    out float2 outDepth : TEXCOORD0,
    uniform float4x4 worldViewProj,
    uniform float4 texelOffsets,
    uniform float4 depthRange) // Passed as float4(minDepth, maxDepth, depthRange, 1 / depthRange)
{
    outPos = mul(worldViewProj, position);
    outPos.xy += texelOffsets.zw * outPos.w;
    outDepth.x = (outPos.z - depthRange.x) * depthRange.w; // value in [0..1]
    outDepth.y = outPos.w;
}

// Fragment shader
void mainFP(float2 depth : TEXCOORD0, out float4 result : COLOR)
{
    float finalDepth = depth.x;
    result = float4(finalDepth, finalDepth, finalDepth, 1);
}
This shader produces a depth map.
This depth map must then be used to reconstruct the world positions for the depth values. I have searched other posts, but none of them seem to store the depth using the same formula I am using. The only similar post is the following:
Reconstructing world position from linear depth
Therefore, I am having a hard time reconstructing the point using the x and y coordinates from the depth map and the corresponding depth.
I need some help in constructing the shader to get the world view position for a depth at particular texture coordinates.
It doesn't look like you're normalizing your depth. Try this instead. In your VS, do:
outDepth.xy = outPos.zw;
And in your PS to render the depth, you can do:
float finalDepth = depth.x / depth.y;
Here is a function to then extract the view-space position of a particular pixel from your depth texture. I'm assuming you're rendering a screen-aligned quad and performing your position extraction in the pixel shader.
// Function for converting depth to view-space position
// in deferred pixel shader pass. vTexCoord is a texture
// coordinate for a full-screen quad, such that x=0 is the
// left of the screen, and y=0 is the top of the screen.
float3 VSPositionFromDepth(float2 vTexCoord)
{
    // Get the depth value for this pixel
    float z = tex2D(DepthSampler, vTexCoord);
    // Get x/w and y/w from the viewport position
    float x = vTexCoord.x * 2 - 1;
    float y = (1 - vTexCoord.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);
    // Transform by the inverse projection matrix
    float4 vPositionVS = mul(vProjectedPos, g_matInvProjection);
    // Divide by w to get the view-space position
    return vPositionVS.xyz / vPositionVS.w;
}
For a more advanced approach that reduces the number of calculations involved, but requires using the view frustum and a special way of rendering the screen-aligned quad, see here.
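As a rough illustration of that frustum-based idea (a sketch under my own assumptions, not the code from the linked article): if each vertex of the full-screen quad carries the view-space position of its far-plane corner, and the depth texture stores linear view-space depth as a fraction of the far-plane distance, the reconstruction collapses to a single multiply.
// Sketch only. Assumes:
//  - vFrustumRay is the interpolated view-space far-plane corner for this
//    pixel, output per-vertex by the full-screen quad's vertex shader;
//  - DepthSampler stores linear view-space depth divided by the far-plane
//    distance (0 at the camera, 1 at the far plane).
float3 VSPositionFromLinearDepth(float2 vTexCoord, float3 vFrustumRay)
{
    float linearDepth = tex2D(DepthSampler, vTexCoord).r;
    // Scale the ray to the far plane by the normalized linear depth.
    return vFrustumRay * linearDepth;
}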

Color in a floating-point texture in HLSL

In the HLSL Pixel Shader, the code is as follows:
float Exposure_Level;
sampler Environment;
float4 ps_main(float3 dir : TEXCOORD0) : COLOR
{
    // Read texture and determine HDR color based on alpha
    // channel and exposure level
    float4 color = texCUBE(Environment, dir);
    return color * ((1.0 + (color.a * 64.0)) * Exposure_Level);
}
This pass is rendered to a floating-point texture, whose format is A16R16G16B16. But I don't quite understand why the color should be multiplied by
((1.0 + (color.a * 64.0)) * Exposure_Level)
which could be as large as 65 or larger.
The alpha in color is between 0 and 1, and Exposure_Level should be greater than 0.
If the color is multiplied by a number like this, the result may be very large, so why does that still work?
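(To spell out where 65 comes from: with color.a at its maximum of 1, the factor is
$(1.0 + 1 \cdot 64.0) \cdot \mathrm{Exposure\_Level} = 65 \cdot \mathrm{Exposure\_Level},$
and larger exposure values push it higher still.)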

PIX, a couple of issues I'm not understanding

I've been asked to split up the questions which I asked here:
HLSL and Pix number of questions
I thought questions two and three would both fit in the same post, as a solution to one may help resolve the other. I'm trying to debug a shader and seem to be running into issues. Firstly, PIX seems to skip a large amount of code when I'm running analysis mode. This is analysing an experiment with F12 captures and with D3DX analysis turned off; I have to turn it off as I'm using XNA. The shader code in question is below:
float4 PixelShaderFunction(float2 OriginalUV : TEXCOORD0) : COLOR0
{
    // Get the depth buffer value at this pixel.
    float4 color = float4(0, 0, 0, 0);
    float4 finalColor = float4(0, 0, 0, 0);
    float zOverW = tex2D(mySampler, OriginalUV);
    // H is the viewport position at this pixel in the range -1 to 1.
    float4 H = float4(OriginalUV.x * 2 - 1, (1 - OriginalUV.y) * 2 - 1,
                      zOverW, 1);
    // Transform by the view-projection inverse.
    float4 D = mul(H, xViewProjectionInverseMatrix);
    // Divide by w to get the world position.
    float4 worldPos = D / D.w;
    // Current viewport position
    float4 currentPos = H;
    // Use the world position, and transform by the previous view-
    // projection matrix.
    float4 previousPos = mul(worldPos, xPreviousViewProjectionMatrix);
    // Convert to nonhomogeneous points [-1,1] by dividing by w.
    previousPos /= previousPos.w;
    // Use this frame's position and last frame's to compute the pixel
    // velocity.
    float2 velocity = (currentPos - previousPos) / 2.f;
    // Get the initial color at this pixel.
    color = tex2D(sceneSampler, OriginalUV);
    OriginalUV += velocity;
    for (int i = 1; i < 1; ++i, OriginalUV += velocity)
    {
        // Sample the color buffer along the velocity vector.
        float4 currentColor = tex2D(sceneSampler, OriginalUV);
        // Add the current color to our color sum.
        color += currentColor;
    }
    // Average all of the samples to get the final blur color.
    finalColor = color / xNumSamples;
    return finalColor;
}
With a captured frame, when debugging a pixel I can only see two lines doing anything: color = tex2D(sceneSampler, OriginalUV) and finalColor = color / xNumSamples. The rest of it PIX just skips or doesn't execute.
Also, can I debug in real time using PIX? I'm wondering if this method would reveal more information.
Cheers,
It would appear that most of that shader code is being optimized out (not compiled because it is irrelevant).
In the end, all that matters is the return value, finalColor, which is computed from color and xNumSamples.
// Average all of the samples to get the final blur color.
finalColor = color / xNumSamples;
I am not sure where xNumSamples gets set, but you can see that the only line that matters to color is color = tex2D(sceneSampler, OriginalUV); (hence it not being removed).
Every line before that is irrelevant because it will be overwritten by that line.
The only bit that follows is that for loop:
for(int i = 1; i < 1; ++i, OriginalUV += velocity)
But this would never execute, because i < 1 is already false on the first iteration (i is assigned a starting value of 1).
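For what it's worth, here is a sketch of how that loop would have to look for it to actually run (and for PIX to have something to step through); NUM_SAMPLES is a made-up compile-time count standing in for xNumSamples:
#define NUM_SAMPLES 8 // hypothetical sample count standing in for xNumSamples

float4 color = tex2D(sceneSampler, OriginalUV);
OriginalUV += velocity;
for (int i = 1; i < NUM_SAMPLES; ++i, OriginalUV += velocity)
{
    // Sample the color buffer along the velocity vector and accumulate.
    color += tex2D(sceneSampler, OriginalUV);
}
// Average the NUM_SAMPLES taps (the initial one plus NUM_SAMPLES - 1 in the loop).
float4 finalColor = color / NUM_SAMPLES;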
Hope that helps!
To answer your second question: I believe that to debug shaders in real time you need to use something like Nvidia's FX Composer and Shader Debugger. However, those run outside of your game, so the results are not always useful.

How do Photoshop's Selective Color and Color Balance filters work?

Hi everyone, I am a newbie in image processing. I want to implement color-change filters on iOS, just like "Selective Color" and "Color Balance" in Photoshop. However, I don't know the algorithms behind these awesome features.
I tried looking in the source code of Paint.NET, but unfortunately Paint.NET does not have this feature.
For color balance, I tried this link: color balance on the iPhone, but the result is not good; it's different from Photoshop's result.
So does anybody know the algorithms behind these two techniques, Selective Color and Color Balance?
Thanks, and sorry about my complicated presentation.
I've been trying to find a solution for this for 2 days now; I got some results, but they are different from the Photoshop implementation and, I'm afraid, not exactly correct.
The way I'm trying to approach it is to convert the RGB color to the HSL color space and then adjust the hue and saturation values along different axes (Cyan/Red, Yellow/Blue, Green/Magenta).
I'm doing this by using the Cartesian coordinate system instead of the polar one, as described here:
http://en.wikipedia.org/wiki/HSL_and_HSV#Hue_and_chroma
My idea is that in Cartesian coordinate space (with alpha and beta axes), changing alpha modifies the color along the Cyan/Red axis. Changing the color along the Yellow/Blue axis can be achieved by modifying alpha and beta at the same time:
alpha = alpha + adjustment * cos(PI/3)
beta = beta + adjustment * sin(PI/3)
The same can be done for the other axes.
After you have the new alpha and beta values, you can convert them back to HSL and then to RGB; a rough sketch of the adjustment step follows.
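Here is a rough HLSL-style sketch of that step (my own helper name, not a known API; the HSL and RGB reconstruction at the end is left out):
static const float PI = 3.14159265;

// axisAngle: 0 for the cyan/red axis, PI/3 for the yellow/blue axis,
// and presumably 2*PI/3 for the green/magenta axis.
float2 AdjustedAlphaBeta(float3 rgb, float adjustment, float axisAngle)
{
    // Cartesian hue/chroma coordinates from the Wikipedia article.
    float alpha = rgb.r - 0.5 * (rgb.g + rgb.b);
    float beta  = sqrt(3.0) / 2.0 * (rgb.g - rgb.b);

    // Push the color along the chosen axis.
    alpha += adjustment * cos(axisAngle);
    beta  += adjustment * sin(axisAngle);

    // Hue is then atan2(beta, alpha) and chroma is length(float2(alpha, beta));
    // from those, HSL and finally RGB are rebuilt as described above.
    return float2(alpha, beta);
}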
Unfortunately the result is still quite different from the Photoshop implementation. Also, I can't figure out the proper way to adjust only Reds, Yellows, Neutrals, Blacks, etc. without touching the rest of the colours.
Does anyone have any hints on how this type of adjustment can be achieved?
Update:
Here's a discussion about the color balance filter in GIMP and Photoshop:
https://github.com/BradLarson/GPUImage/issues/193
And here is sample code recreating GIMP's color balance filter as a shader:
https://gist.github.com/3168961
It's only implemented for midtones at the moment, but it should be pretty straightforward to make the changes for highlights and shadows.
Unfortunately GIMP's color balance filter gives different results from Photoshop's :(
I've created a color balance filter for the GPUImage framework:
https://github.com/liovch/GPUImage/commit/fcc85db4fdafae1d4e41313c96bb1cac54dc93b4
Maybe this will help. I wrote this as a Photoshop and Flash extension via Pixel Bender, which is Adobe's shader language, but it is equivalent to any other shader language. Below it is roughly converted from Pixel Bender to Unity ShaderLab / CG:
Shader "Filters/ColorBalance"
{
Properties
{
_MainTex ("Main (RGB)", 2D) = "white" {}
// Red, Green, Blue
_Shadows ("shadows", Vector) = (0.0,0.0,0.0,0.0)
_Midtones ("midtones", Vector) = (0.0,0.0,0.0,0.0)
_Hilights ("hilights", Vector) = (0.0,0.0,0.0,0.0)
_Amount ("amount mix", Range (0.0, 1.0)) = 1.0
}
SubShader
{
Tags {"RenderType"="Transparent" "Queue"="Transparent"}
Lighting Off
Pass
{
ZWrite Off
Cull Off
Blend SrcAlpha OneMinusSrcAlpha
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest
#include "UnityCG.cginc"
sampler2D _MainTex;
uniform float4 _Shadows;
uniform float4 _Midtones;
uniform float4 _Hilights;
uniform float _Amount;
uniform float pi = 3.14159265358979;
struct appdata
{
float4 vertex : POSITION;
float2 texcoord : TEXCOORD0;
float4 color : COLOR;
};
// vertex output
struct vertdata
{
float4 pos : SV_POSITION;
float2 texcoord : TEXCOORD0;
float4 color : COLOR;
};
vertdata vert(appdata ad)
{
vertdata o;
o.pos = mul(UNITY_MATRIX_MVP, ad.vertex);
o.texcoord = ad.texcoord;
o.color = ad.color;
return o;
}
fixed4 frag(vertdata i) : COLOR
{
float4 dst = tex2D(_MainTex, i.texcoord);
float intensity = (dst.r + dst.g + dst.b) * 0.333333333;
// Exponencial Shadows
float shadowsBleed = 1.0 - intensity;
shadowsBleed *= shadowsBleed;
shadowsBleed *= shadowsBleed;
// Exponencial midtones
float midtonesBleed = 1.0 - abs(-1.0 + intensity * 2.0);
midtonesBleed *= midtonesBleed;
midtonesBleed *= midtonesBleed;
// Exponencial Hilights
float hilightsBleed = intensity;
hilightsBleed *= hilightsBleed;
hilightsBleed *= hilightsBleed;
float3 colorization = dst.rgb + _Shadows.rgb * shadowsBleed +
_Midtones.rgb * midtonesBleed +
_Hilights.rgb * hilightsBleed;
dst.rgb = lerp(dst.rgb, colorization, _Amount);
return dst;
}
ENDCG
}
}
}
About color balance: change the red channel value for the cyan/red adjustment, the green channel for magenta/green, and the blue channel for yellow/blue. This solved the problem. But with the "preserve luminosity" option, like in Photoshop, we must keep the lightness of each pixel constant between the original and the color-balanced pixel. I read GIMP's source code; it solves this by converting the new pixel's RGB to HSL, replacing L with the L value of the old pixel, and then converting the new HSL back to RGB. This keeps the lightness of the pixel constant while balancing the color; a rough sketch of that step is below. However, when I tested in GIMP, adjusting a color channel in shadow or highlight mode with "preserve luminosity" enabled gave the wrong result.
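A hedged HLSL sketch of that preserve-luminosity step, using textbook RGB/HSL conversions rather than GIMP's exact code:
float3 RGBtoHSL(float3 c)
{
    float maxc = max(c.r, max(c.g, c.b));
    float minc = min(c.r, min(c.g, c.b));
    float l = (maxc + minc) * 0.5;
    float h = 0.0;
    float s = 0.0;
    float d = maxc - minc;
    if (d > 1e-5)
    {
        s = d / (l < 0.5 ? (maxc + minc) : (2.0 - maxc - minc));
        if (maxc == c.r)      h = (c.g - c.b) / d + (c.g < c.b ? 6.0 : 0.0);
        else if (maxc == c.g) h = (c.b - c.r) / d + 2.0;
        else                  h = (c.r - c.g) / d + 4.0;
        h /= 6.0;
    }
    return float3(h, s, l);
}

float HueToRGB(float p, float q, float t)
{
    if (t < 0.0) t += 1.0;
    if (t > 1.0) t -= 1.0;
    if (t < 1.0 / 6.0) return p + (q - p) * 6.0 * t;
    if (t < 0.5)       return q;
    if (t < 2.0 / 3.0) return p + (q - p) * (2.0 / 3.0 - t) * 6.0;
    return p;
}

float3 HSLtoRGB(float3 hsl)
{
    if (hsl.y < 1e-5) return float3(hsl.z, hsl.z, hsl.z); // achromatic
    float q = hsl.z < 0.5 ? hsl.z * (1.0 + hsl.y) : hsl.z + hsl.y - hsl.z * hsl.y;
    float p = 2.0 * hsl.z - q;
    return float3(HueToRGB(p, q, hsl.x + 1.0 / 3.0),
                  HueToRGB(p, q, hsl.x),
                  HueToRGB(p, q, hsl.x - 1.0 / 3.0));
}

// Apply after the per-channel shadows/midtones/highlights shift:
// keep the balanced hue and saturation but restore the original lightness.
float3 PreserveLuminosity(float3 original, float3 balanced)
{
    float3 hsl = RGBtoHSL(saturate(balanced));
    hsl.z = RGBtoHSL(saturate(original)).z;
    return HSLtoRGB(hsl);
}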
And I am confused about how to determine whether a pixel is a shadow, mid-tone, or highlight.
The mystery of Selective Color:
Reds are the colors that consist of Magenta, Yellow, and Black (no Cyan). However, if you select the Reds range and decrease Cyan, the colors still change.
