In the HLSL Pixel Shader, the code is as follows:
float Exposure_Level;
sampler Environment;
float4 ps_main(float3 dir : TEXCOORD0) : COLOR
{
    // Read texture and determine HDR color based on alpha
    // channel and exposure level
    float4 color = texCUBE(Environment, dir);
    return color * ((1.0 + (color.a * 64.0)) * Exposure_Level);
}
This pass is rendered to a floating-point texture whose format is A16R16G16B16. But I don't quite understand why the color should be multiplied by
(1.0 + (color.a * 64.0)) * Exposure_Level
which could be 65 or even larger. color.a is between 0 and 1, and Exposure_Level should be greater than 0.
If the color is multiplied by a factor that large, the result may be very large, so why does that still work?
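For concreteness, the scale factor is (1 + 64 * color.a) * Exposure_Level, so with Exposure_Level = 1.0 it spans 1.0 (when color.a = 0) up to 65.0 (when color.a = 1).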
In order to have only one input image for a "spread"-like effect, I would like to do some threshold operation on a drawable, or find any other way that works.
Are there any such tools in LÖVE2D/Lua?
I'm not exactly sure about the desired outcome (the "spread"-like effect), but for thresholding, your best option is a pixel shader, something like this:
extern float threshold; // external variable from our Lua script

vec4 effect(vec4 color, Image tex, vec2 texture_coords, vec2 screen_coords)
{
    vec4 texturecolor = Texel(tex, texture_coords); // default shader code
    // get the average color of the pixel
    float average = (texturecolor[0] + texturecolor[1] + texturecolor[2]) / 3.0;
    // set the alpha of the pixel to 0 if the average of RGB is below the threshold
    if (average < threshold) {
        texturecolor[3] = 0.0;
    }
    return texturecolor * color; // default shader code
}
This code calculates the average of RGB for each pixel, and if the average is below the threshold, it sets the alpha of that pixel to 0 to make it invisible.
To use the pixel effect in your code you need to do something like this (only once, perhaps in love.load):
shader = love.graphics.newShader([==[ ... shader code above ... ]==])
and when drawing the image:
love.graphics.setShader(shader)
love.graphics.draw(img)
love.graphics.setShader()
To adjust the threshold:
shader:send("threshold", number) --0 to 1 float
Result:
References:
LÖVE Shader object
love.graphics.newShader for examples of the default shader code
I am currently working on a multi-textured terrain and I have problems with the Sample function of Texture2DArray.
In my example, I use a Texture2DArray to store a set of different terrain textures, e.g. grass, sand, asphalt, etc. Each of my vertices stores a texture coordinate (UV coordinate) and the index of the texture I want to use. So, if the index is 0, I use the first texture; if the index is 1, I use the second texture, and so on. This works fine as long as the index is a natural number (0, 1, ...). However, it fails if the index is a real number (like 1.5f).
In order to look for the problem, I reduced my entire pixel shader to this:
Texture2DArray DiffuseTextures : register(t0);
Texture2DArray NormalTextures : register(t1);
Texture2DArray EmissiveTextures : register(t2);
Texture2DArray SpecularTextures : register(t3);
SamplerState Sampler : register(s0);

struct PS_IN
{
    float4 pos : SV_POSITION;
    float3 nor : NORMAL;
    float3 tan : TANGENT;
    float3 bin : BINORMAL;
    float4 col : COLOR;
    float4 TextureIndices : COLOR1;
    float4 tra : COLOR2;
    float2 TextureUV : TEXCOORD0;
};

float4 PS(PS_IN input) : SV_Target
{
    float4 texCol = DiffuseTextures.Sample(Sampler, float3(input.TextureUV, input.TextureIndices.r));
    return texCol;
}
The following image shows the result of a sample scene on the left side. As you can see, there is a hard border between the used textures. There is no form of interpolation.
In order to check my texture indices, I changed my pixel shader from above by returning the texture indices as a color:
return float4(input.TextureIndices.r, input.TextureIndices.r, input.TextureIndices.r, 1.0f);
The result can be seen on the right side of the image. The texture indices are correct, since they range in the interval [0, 1] and you can clearly see the interpolation at the border of the area. However, my sampled texture does not show any form of interpolation.
Since my pixel shader is pretty simple, I wonder what causes this behaviour? Is there any setting in DirectX responsible for this?
I use DirectX 11, pixel shader ps_5_0 (I also tested with ps_4_0) and I use DDS textures (BC3 compression).
Edit
This is the sampler I am using:
SharpDX.Direct3D11.SamplerStateDescription samplerStateDescription = new SharpDX.Direct3D11.SamplerStateDescription()
{
    AddressU = SharpDX.Direct3D11.TextureAddressMode.Wrap,
    AddressV = SharpDX.Direct3D11.TextureAddressMode.Wrap,
    AddressW = SharpDX.Direct3D11.TextureAddressMode.Wrap,
    Filter = SharpDX.Direct3D11.Filter.MinMagMipLinear
};

SharpDX.Direct3D11.SamplerState samplerState = new SharpDX.Direct3D11.SamplerState(_device, samplerStateDescription);
_deviceContext.PixelShader.SetSampler(0, samplerState);
Solution
I made a function using the code presented by catflier for getting a texture color:
float4 GetTextureColor(Texture2DArray textureArray, float2 textureUV, float textureIndex)
{
    float tid = textureIndex;
    int id = (int)tid;
    float l = frac(tid);
    float4 texCol1 = textureArray.Sample(Sampler, float3(textureUV, id));
    float4 texCol2 = textureArray.Sample(Sampler, float3(textureUV, id + 1));
    return lerp(texCol1, texCol2, l);
}
This way, I can get the desired texture color for all texture types (diffuse, specular, emissive, ...) with a simple function call:
float4 texCol = GetTextureColor(DiffuseTextures, input.TextureUV, input.TextureIndices.r);
float4 bumpMap = GetTextureColor(NormalTextures, input.TextureUV, input.TextureIndices.g);
float4 emiCol = GetTextureColor(EmissiveTextures, input.TextureUV, input.TextureIndices.b);
float4 speCol = GetTextureColor(SpecularTextures, input.TextureUV, input.TextureIndices.a);
The result is as smooth as I wanted it to be. :-)
Texture arrays do not sample across slices, so technically this is the expected result.
If you want to interpolate between slices (e.g. 1.5f gives you "half" of the second texture and "half" of the third texture), you can use a Texture3D instead, which allows this (but it will cost a bit more, as it performs trilinear filtering).
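A minimal sketch of that Texture3D route (my own illustration; DiffuseVolume and SliceCount are assumed names, with SliceCount holding the number of layers in the volume):

Texture3D DiffuseVolume : register(t0);
SamplerState Sampler : register(s0);

static const float SliceCount = 4.0; // assumed layer count

float4 SampleVolume(float2 uv, float textureIndex)
{
    // the third coordinate of a Texture3D is normalised, so slice i sits at
    // (i + 0.5) / SliceCount; trilinear filtering then blends neighbouring
    // slices automatically
    float w = (textureIndex + 0.5) / SliceCount;
    return DiffuseVolume.Sample(Sampler, float3(uv, w));
}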
Otherwise, you can perform your sampling this way:
float4 PS(PS_IN input) : SV_Target
{
    float tid = input.TextureIndices.r;
    int id = (int)tid;
    float l = frac(tid); // lerp amount
    float4 texCol1 = DiffuseTextures.Sample(Sampler, float3(input.TextureUV, id));
    float4 texCol2 = DiffuseTextures.Sample(Sampler, float3(input.TextureUV, id + 1));
    return lerp(texCol1, texCol2, l);
}
Please note that this technique is considerably more flexible, since you can also provide non-adjacent slices as input (so you can lerp between slice 2 and slice 23, for example), and you can even use a different blend mode by replacing lerp with some other function.
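One caveat with this approach (my own addition, assuming a SliceCount constant for the number of slices in the array): when tid already points at the last slice, id + 1 would sample past the end of the array, so the neighbour index can be clamped:

static const int SliceCount = 4; // assumed number of slices in the array

float4 SampleArrayLerp(Texture2DArray textureArray, float2 uv, float tid)
{
    int id = (int)tid;
    float l = frac(tid);
    float4 texCol1 = textureArray.Sample(Sampler, float3(uv, id));
    // clamp the neighbour so the lerp never reads a non-existent slice
    float4 texCol2 = textureArray.Sample(Sampler, float3(uv, min(id + 1, SliceCount - 1)));
    return lerp(texCol1, texCol2, l);
}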
I was playing with applying dithering to a simple colored quad and found a strange issue. I have a fragment shader which should calculate dithering at some uv and return a dithered color. This works fine on a textured quad, but strangely, when I access color data from my inVertex, the uv coordinates change to some bizarre values, and the y value seems to be mapped to the x axis. I'll try to illustrate what happens as I change things around in the fragment shader code.
fragment float4 fragment_colored_dithered(ColoredVertex inVertex [[stage_in]],
                                          float2 uv [[point_coord]]) {
    float4 color = inVertex.color;
    uv = (uv / 2) + 0.5;
    if (uv.y < 0.67) {
        return float4(uv.y, 0, 0, 1);
    }
    else {
        return color;
    }
}
Produces the following result:
The left side of the image shows my gradient quad; notice that if (uv.y < 0.67) maps to x values in the image 🤔.
If I change this fragment shader and nothing else in the code, like so, returning float4(0, 0, 1, 0) instead of inVertex.color, the uv coordinates are mapped correctly.
fragment float4 fragment_colored_dithered(ColoredVertex inVertex [[stage_in]],
                                          float2 uv [[point_coord]]) {
    float4 color = inVertex.color;
    uv = (uv / 2) + 0.5;
    if (uv.y < 0.67) {
        return float4(uv.y, 0, 0, 1);
    }
    else {
        return float4(0, 0, 1, 0); // return color;
    }
}
Produces this (correct) result:
I think I can hack around this problem by applying a 1x1 texture to the gradient and using texture coordinates, but I'd really like to know what is happening here. Is this a bug or a feature that I don't understand?
Why are you using [[point_coord]]? What do you think it represents?
Unless you're drawing point primitives, you shouldn't be using that. Since you're drawing a "quad", and given the screenshots, I assume you're not drawing point primitives.
I suspect [[point_coord]] is simply undefined and subject to random-ish behavior when you're drawing triangles. The randomness is apparently affected by the specifics (such as stack layout) of the fragment shader.
You should either be using [[position]] and scaling by the window size or using an interpolated field within your ColoredVertex struct to carry "texture" coordinates.
I generate a simple 2D grid with a triangle strip representing a water surface. The first generated vertex has position [0,0] and the last one has [1,1]. For my water simulation I need to store the current positions of the vertices in a texture and then sample these values from the texture in the next frame to get the previous state of the water surface.
So I created a texture with one pixel per vertex. For example, for a 10x10 vertex grid I use a texture with 10x10 pixels (one pixel for one vertex's data), and I set this texture as a render target to render all the vertex data into it.
According to MSDN Coordinate Systems, if I use the current positions of the vertices in the grid (bottom-left at [0;0], top-right at [1;1]), the rendered texture looks like this:
So I need to do some conversion to NDC. I convert it in a vertex shader like this:
[vertex.x * 2 - 1; vertex.y * 2 - 1]
Consider this 3x3 grid:
Now the grid is stretched to the whole texture size. Texture coordinates are different from NDC, so apparently I can use the original coordinates of the grid (before the conversion) to sample values from the texture and get the previous values (positions) of the vertices.
Here is a sample of my vertex/pixel shader code:
This vertex shader converts the coordinates and sends them to the pixel shader with the SV_POSITION semantic (which describes the pixel location).
struct VertexInput
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD;
};

struct VertexOutput
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
};

// converts coordinates from the [0,1] range (origin at 0,0) to [-1,1] NDC
float2 toNDC(float2 px)
{
    return float2(px.x * 2 - 1, px.y * 2 - 1);
}

VertexOutput main(VertexInput input)
{
    VertexOutput output;
    float2 ndc = toNDC(float2(input.pos.x, input.pos.z));
    output.pos = float4(ndc, 1, 1);
    output.tex = float2(input.pos.x, input.pos.z);
    return output;
}
And here's the pixel shader, which saves the values from the vertex shader at the pixel location defined by SV_POSITION.
struct PixelInput
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
};

float4 main(PixelInput input) : SV_TARGET
{
    return float4(input.tex.x, input.tex.y, 0, 1);
}
And now we're finally getting to my problem! I use the graphics debugger in Visual Studio 2012, which allows me to look at the rendered texture and its values. I would expect the pixel at location [0,1] (in the texel coordinate system) to hold the value [0,0] (or [0,0,0,1] to be precise, for the RGBA format), but it seems that the value of the final pixel is interpolated between 3 vertices, so I get a wrong value for the given vertex.
Screenshot from VS graphics debugger:
Rendered 3x3 texture ([0;1] location in texel coordinate system):
Values from vertex and pixel shader:
How do I render the exact value from the vertex shader to the texture for a given pixel?
I am pretty new to computer graphics and Direct3D 11, so please excuse my deficiencies.
Hi everyone, I am a newbie in image processing. I want to implement some color-change filters on iOS, just like "Selective Color" and "Color Balance" in Photoshop. However, I don't know the algorithms behind these awesome features.
I tried to find them in the source code of Paint.NET, but unfortunately Paint.NET does not have these features.
For color balance, I tried this link: color balance on the iPhone, but the result is not good; it's different from Photoshop's result.
So does anybody know the algorithms behind these two techniques, Selective Color and Color Balance?
Thank you, and sorry about my complicated presentation.
I've been trying to find a solution for this for 2 days now. I got some results, but they are different from the Photoshop implementation and, I'm afraid, not exactly correct.
The way I'm trying to approach it is to convert the RGB color to the HSL color space and then adjust the hue and saturation values along the different axes (cyan/red, yellow/blue, green/magenta).
I'm doing this by using cartesian coordinate system instead of polar one, as described here:
http://en.wikipedia.org/wiki/HSL_and_HSV#Hue_and_chroma
My idea is that in the cartesian coordinate space (with alpha and beta axes), changing alpha modifies the color along the cyan/red axis. Changing the color along the yellow/blue axis can be achieved by modifying alpha and beta at the same time:
alpha = alpha + adjustment * cos(PI/3)
beta = beta + adjustment * sin(PI/3)
The same can be done for the other axes.
After you have the new alpha and beta values, you can convert them back to HSL and then to RGB.
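A minimal HLSL sketch of that adjustment as I understand it (my own illustration: the alpha/beta and hue/chroma formulas follow the Wikipedia section linked above, and cyanRed/yellowBlue are hypothetical slider inputs):

float2 AdjustChromaPlane(float3 rgb, float cyanRed, float yellowBlue)
{
    static const float PI = 3.14159265;
    // cartesian chroma plane from the linked article:
    // alpha = R - (G + B) / 2, beta = sqrt(3) / 2 * (G - B)
    float alpha = rgb.r - (rgb.g + rgb.b) * 0.5;
    float beta = sqrt(3.0) * 0.5 * (rgb.g - rgb.b);
    // cyan/red runs along the alpha axis; yellow/blue sits 60 degrees away,
    // hence the cos(PI/3) and sin(PI/3) terms from the formulas above
    alpha += cyanRed + yellowBlue * cos(PI / 3.0);
    beta += yellowBlue * sin(PI / 3.0);
    // recover hue (in radians) and chroma from the adjusted coordinates;
    // converting these back to HSL and then RGB is the remaining step
    float hue2 = atan2(beta, alpha);
    float chroma2 = sqrt(alpha * alpha + beta * beta);
    return float2(hue2, chroma2);
}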
Unfortunately the result is still quite different from the Photoshop implementation. Also, I can't figure out the proper way to adjust only the reds, yellows, neutrals, blacks, etc. without touching the rest of the colors.
Does anyone have any hints on how this type of adjustment can be achieved?
Update:
Here's a discussion about the color balance filter in GIMP and Photoshop:
https://github.com/BradLarson/GPUImage/issues/193
And here is sample code recreating GIMP's color balance filter as a shader:
https://gist.github.com/3168961
It's only implemented for midtones at the moment, but it should be pretty straightforward to make changes for highlights and shadows.
Unfortunately GIMP's color balance filter gives different results from Photoshop :(
I've created a color balance filter for the GPUImage framework:
https://github.com/liovch/GPUImage/commit/fcc85db4fdafae1d4e41313c96bb1cac54dc93b4
Maybe this will help. I wrote this as a Photoshop and Flash extension via Pixel Bender, which is Adobe's shader language, but it is equivalent to any other shader language.
Below is the Pixel Bender code roughly converted to Unity ShaderLab / CG.
Shader "Filters/ColorBalance"
{
Properties
{
_MainTex ("Main (RGB)", 2D) = "white" {}
// Red, Green, Blue
_Shadows ("shadows", Vector) = (0.0,0.0,0.0,0.0)
_Midtones ("midtones", Vector) = (0.0,0.0,0.0,0.0)
_Hilights ("hilights", Vector) = (0.0,0.0,0.0,0.0)
_Amount ("amount mix", Range (0.0, 1.0)) = 1.0
}
SubShader
{
Tags {"RenderType"="Transparent" "Queue"="Transparent"}
Lighting Off
Pass
{
ZWrite Off
Cull Off
Blend SrcAlpha OneMinusSrcAlpha
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest
#include "UnityCG.cginc"
sampler2D _MainTex;
uniform float4 _Shadows;
uniform float4 _Midtones;
uniform float4 _Hilights;
uniform float _Amount;
uniform float pi = 3.14159265358979;
struct appdata
{
float4 vertex : POSITION;
float2 texcoord : TEXCOORD0;
float4 color : COLOR;
};
// vertex output
struct vertdata
{
float4 pos : SV_POSITION;
float2 texcoord : TEXCOORD0;
float4 color : COLOR;
};
vertdata vert(appdata ad)
{
vertdata o;
o.pos = mul(UNITY_MATRIX_MVP, ad.vertex);
o.texcoord = ad.texcoord;
o.color = ad.color;
return o;
}
fixed4 frag(vertdata i) : COLOR
{
float4 dst = tex2D(_MainTex, i.texcoord);
float intensity = (dst.r + dst.g + dst.b) * 0.333333333;
// Exponencial Shadows
float shadowsBleed = 1.0 - intensity;
shadowsBleed *= shadowsBleed;
shadowsBleed *= shadowsBleed;
// Exponencial midtones
float midtonesBleed = 1.0 - abs(-1.0 + intensity * 2.0);
midtonesBleed *= midtonesBleed;
midtonesBleed *= midtonesBleed;
// Exponencial Hilights
float hilightsBleed = intensity;
hilightsBleed *= hilightsBleed;
hilightsBleed *= hilightsBleed;
float3 colorization = dst.rgb + _Shadows.rgb * shadowsBleed +
_Midtones.rgb * midtonesBleed +
_Hilights.rgb * hilightsBleed;
dst.rgb = lerp(dst.rgb, colorization, _Amount);
return dst;
}
ENDCG
}
}
}
About color balance: change the red channel value for the cyan/red adjustment, green for magenta/green, and blue for yellow/blue. This solves the basic problem. But with the "preserve luminosity" option, as in Photoshop, we must keep the lightness of each pixel constant between the original and the color-balanced pixel. I read GIMP's source code; it solves this by converting the RGB of the new pixel to HSL, replacing L with the L value of the old pixel, and then converting the new HSL back to RGB. This RGB value keeps the lightness of the pixel constant while balancing the color. However, when I test in GIMP, adjusting a color channel in shadow or highlight mode with the "preserve luminosity" option, the result is wrong.
And I am confused about how to determine what counts as a shadow, a midtone, or a highlight.
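For reference, here is a minimal HLSL sketch of the GIMP preserve-luminosity step described above (my own illustration using textbook RGB/HSL conversions, not GIMP's actual code):

// RGB in [0,1] to HSL, each component in [0,1]; standard textbook conversion
float3 RgbToHsl(float3 c)
{
    float maxC = max(c.r, max(c.g, c.b));
    float minC = min(c.r, min(c.g, c.b));
    float l = (maxC + minC) * 0.5;
    float d = maxC - minC;
    if (d < 1e-6)
        return float3(0.0, 0.0, l); // achromatic: hue and saturation are 0
    float s = d / (1.0 - abs(2.0 * l - 1.0));
    float h;
    if (maxC == c.r)
        h = (c.g - c.b) / d;
    else if (maxC == c.g)
        h = (c.b - c.r) / d + 2.0;
    else
        h = (c.r - c.g) / d + 4.0;
    if (h < 0.0)
        h += 6.0;
    return float3(h / 6.0, s, l);
}

// standard HSL to RGB conversion
float3 HslToRgb(float3 hsl)
{
    float c = (1.0 - abs(2.0 * hsl.z - 1.0)) * hsl.y;
    float h = hsl.x * 6.0;
    float x = c * (1.0 - abs(fmod(h, 2.0) - 1.0));
    float3 rgb;
    if (h < 1.0) rgb = float3(c, x, 0);
    else if (h < 2.0) rgb = float3(x, c, 0);
    else if (h < 3.0) rgb = float3(0, c, x);
    else if (h < 4.0) rgb = float3(0, x, c);
    else if (h < 5.0) rgb = float3(x, 0, c);
    else rgb = float3(c, 0, x);
    return rgb + (hsl.z - 0.5 * c);
}

// rebuild the balanced pixel with the original pixel's lightness
float3 PreserveLuminosity(float3 original, float3 balanced)
{
    float3 hsl = RgbToHsl(balanced);
    hsl.z = RgbToHsl(original).z; // keep the original L
    return HslToRgb(hsl);
}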
The mystery of Selective Color
Reds are the colors that consist of Magenta, Yellow and Black (no Cyan). However, if you select the Reds range and decrease Cyan, the colors change.