I want a smooth curve in WebGL, so I tried to draw a rectangle in the vertex shader and send the positions to the fragment shader in normalized form, together with a Size parameter that holds the real size of the square:
in vec2 Position;
in vec2 Size;
out vec2 vPosition;
out vec2 vSize;
void main() {
    vSize = Size;
    vPosition = Position;
    gl_Position = vec4(Position * Size, 0.0, 1.0);
}
where Size = [width, height] of the square and is equal for every vertex, and Position =
[
  -1, -1,
  -1,  1,
   1, -1,
   1,  1,
]
so my rectangle is drawn at [2 * width, 2 * height], but I can do my geometric operations in the fragment shader on a normalized 2 * 2 square.
But I have a problem drawing an ellipse (or a circle with these sizes) in the fragment shader: when I make a hollow circle with a thickness parameter, its thickness in the horizontal direction is not the same as in the vertical direction. I know it's because I use the same size (2, 2) for both directions while on screen they are different, and as you can see the thickness is not uniform all the way around.
I want a geometric formula to calculate the thickness in each direction so that I can draw a hollow ellipse.
Thanks in advance.
Sorry for my bad English.
If you put heavy mathematical computing in your fragment shader, it will be slow.
The trick can be to use an approximation that is visually acceptable.
Your problem is that the thickness differs between the horizontal and the vertical axis.
What you need is to discard fragments when the radius of the current point is greater than 1 or lower than radiusMin. Let uniWidth and uniHeight be the size of your rectangle.
* When y is null, on the horizontal axis, radiusMin = 1.0 - BORDER / uniWidth.
* When x is null, on the vertical axis, radiusMin = 1.0 - BORDER / uniHeight.
The trick is to interpolate between these two radii using the mix() function.
Take a look at my live example below to convince yourself that the result is not that bad.
Here is the fragment shader to do such a computation:
precision mediump float;

uniform float uniWidth;
uniform float uniHeight;
varying vec2 varCoords;

const float BORDER = 32.0;

void main() {
    float x = varCoords.x;
    float y = varCoords.y;
    float radius = x * x + y * y;
    if (radius > 1.0) discard;
    radius = sqrt(radius);

    float radiusH = 1.0 - BORDER / uniWidth;
    float radiusV = 1.0 - BORDER / uniHeight;
    float radiusAverage = (radiusH + radiusV) * 0.5;

    float minRadius = 0.0;
    x = abs(x);
    y = abs(y);
    if (x > y) {
        minRadius = mix(radiusH, radiusAverage, y / x);
    }
    else {
        minRadius = mix(radiusV, radiusAverage, x / y);
    }
    if (radius < minRadius) discard;

    gl_FragColor = vec4(1, .5, 0, 1);
}
Here is a live example: https://jsfiddle.net/7rh2eog1/5/
There is an implicit formula for the x and y values that lie in the blue area of a hollow ellipse with a thickness parameter. Since we know the thickness and we have the size of our view, we can build two ellipses with size1 = Size - vec2(thickness) and size2 = Size + vec2(thickness),
and then keep the fragments where length(position / size2) < 1.0 < length(position / size1).
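A minimal fragment-shader sketch of that idea, written in the same WebGL1-style GLSL as the example above and assuming the setup from the question (vPosition interpolated in [-1, 1], vSize the real size in pixels); uniThickness is a hypothetical uniform for the border width in pixels:

// Sketch only: assumes the question's varyings and a hypothetical
// uniform uniThickness (border width in pixels).
precision mediump float;

uniform float uniThickness;
varying vec2 vPosition;   // normalized position in [-1, 1]
varying vec2 vSize;       // real size of the rectangle in pixels

void main() {
    // Fragment position in pixel units, measured from the center.
    vec2 p = vPosition * vSize;

    // Inner and outer ellipse semi-axes.
    vec2 size1 = vSize - vec2(uniThickness);
    vec2 size2 = vSize + vec2(uniThickness);

    // Keep only the fragments between the two ellipses:
    // inside the outer one  -> length(p / size2) < 1.0
    // outside the inner one -> length(p / size1) > 1.0
    if (length(p / size2) > 1.0 || length(p / size1) < 1.0) discard;

    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);
}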
I am trying to display sharp contours from a texture in WebGL.
I pass a texture to my fragment shader, then I use local derivatives to display the contours/outline; however, it is not as smooth as I would expect it to be.
Just printing the texture without processing works as expected:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
vec4 color = texture2D(uTextureFilled, texc);
gl_FragColor = color;
With local derivatives, it misses some edges:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
vec4 color = texture2D(uTextureFilled, texc);
float maxColor = length(color.rgb);
gl_FragColor.r = abs(dFdx(maxColor));
gl_FragColor.g = abs(dFdy(maxColor));
gl_FragColor.a = 1.;
In theory, your code is right.
But in practice most GPUs are computing derivatives on blocks of 2x2 pixels.
So for all 4 pixels of such block the dFdX and dFdY values will be the same.
(detailed explanation here)
This will cause some kind of aliasing, and you will randomly miss some pixels on the contour of the shape (this happens when the transition from black to the shape color occurs at the border of a 2x2 block).
To fix this and get the real per-pixel derivative, you can compute it yourself instead; it would look like this:
// get tex coordinates
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
// compute the U & V step needed to read neighbor pixels
// for that you need to pass the texture dimensions to the shader,
// so let's say those are texWidth and texHeight
float step_u = 1.0 / texWidth;
float step_v = 1.0 / texHeight;
// read current pixel
vec4 centerPixel = texture2D(uTextureFilled, texc);
// read nearest right pixel & nearest bottom pixel
vec4 rightPixel = texture2D(uTextureFilled, texc + vec2(step_u, 0.0));
vec4 bottomPixel = texture2D(uTextureFilled, texc + vec2(0.0, step_v));
// now manually compute the derivatives
float _dFdX = length(rightPixel - centerPixel) / step_u;
float _dFdY = length(bottomPixel - centerPixel) / step_v;
// display
gl_FragColor.r = _dFdX;
gl_FragColor.g = _dFdY;
gl_FragColor.a = 1.;
A few important things:
* the texture should not use mipmaps
* texture min & mag filtering should be set to GL_NEAREST
* texture clamp mode should be set to clamp (not repeat)
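For reference, a sketch of those settings on the WebGL side in JavaScript (assuming gl is your WebGL rendering context and texture is the texture object your shader samples):

// Assumption: gl is the WebGL rendering context, texture the texture object.
gl.bindTexture(gl.TEXTURE_2D, texture);

// No mipmaps: use non-mipmapped filters and skip gl.generateMipmap entirely.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

// Clamp instead of repeat, so reads past the edge don't wrap around.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);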
And here is a ShaderToy sample demonstrating this:
Hello, everyone!
I've been trying to write a script that uses GLSL to render a Mandelbrot set, but something weird is happening.
I call the effect function like this:
vec4 effect( vec4 color, Image texture, vec2 texture_coords, vec2 screen_coords){
But, when I try to use the texture_coords values, say, like this:
vec2 c = vec2((texture_coords[0]-WD/2)/100, (texture_coords[1]-HT/2)/100);
It returns the same value for every pixel; if, on the other hand, I use screen_coords instead, it works, but I'm afraid that if I drag the window around it might mess with the results.
Why am I unable to retrieve texture_coords?
More insight on the program and the problems here
UPDATE
I have reworked the code, now it looks like this:
vec4 effect( vec4 color, Image texture, vec2 texture_coords, vec2 window_coords)
{
    vec2 c = vec2( ( MinRe + window_coords[0] * ( MaxRe - MinRe ) / ( width + 1 ) ),
                   ( MaxIm - window_coords[1] * ( MaxIm - MinIm ) / ( height + 1 ) )
                 );
    vec2 z = c;
    vec2 zn = vec2(0.0, 0.0);
    int n_iter = 0;
    while( (z[0]*z[0] + z[1]*z[1] < 4) && (n_iter < max_iter)) {
        zn[0] = z[0]*z[0] - z[1]*z[1] + c[0];
        zn[1] = 2* z[0]*z[1] + c[1];
        z[0] = zn[0];
        z[1] = zn[1];
        n_iter++;
    }
This works beautifully. But when I use texture_coords instead of window_coords, the code returns the same value for every pixel, despite the fact that the texture I'm using is the same size as the window.
The problem is that some drawable objects of love.graphics don't set any texture coordinates if you don't load an image. So, instead of using draw.rectangle, you should use a Mesh:
A 2D polygon mesh used for drawing arbitrary textured shapes
In order to add a Mesh object, you can add this to the load function:
function love.load()
    width, height = love.graphics.getDimensions( )
    local vertices = {
        {
            -- top-left corner
            0, 0, -- position of the vertex
            0, 0, -- texture coordinate at the vertex position
            255, 0, 0, -- color of the vertex
        },
        {
            -- top-right corner
            width, 0,
            1, 0, -- texture coordinates are in the range of [0, 1]
            0, 255, 0
        },
        {
            -- bottom-right corner
            width, height,
            1, 1,
            0, 0, 255
        },
        {
            -- bottom-left corner
            0, height,
            0, 1,
            255, 255, 0
        },
    }
    -- the Mesh DrawMode "fan" works well for 4-vertex Meshes.
    mesh = love.graphics.newMesh(vertices, "fan")
    -- ... other stuff here ...
end
and in the draw function:
function love.draw()
    -- ...
    love.graphics.draw(mesh, 0, 0)
    -- ...
end
The complete code, considering your previous question and my answer to that, with some lines added to manage the coordinate transformations, becomes:
function love.load()
    width, height = love.graphics.getDimensions( )
    local vertices = {
        {
            -- top-left corner
            0, 0, -- position of the vertex
            0, 0, -- texture coordinate at the vertex position
            255, 0, 0, -- color of the vertex
        },
        {
            -- top-right corner
            width, 0,
            1, 0, -- texture coordinates are in the range of [0, 1]
            0, 255, 0
        },
        {
            -- bottom-right corner
            width, height,
            1, 1,
            0, 0, 255
        },
        {
            -- bottom-left corner
            0, height,
            0, 1,
            255, 255, 0
        },
    }
    mesh = love.graphics.newMesh(vertices, "fan")
    GLSLShader = love.graphics.newShader[[
        vec4 black = vec4(0.0, 0.0, 0.0, 1.0);
        vec4 white = vec4(1.0, 1.0, 1.0, 1.0);
        extern int max_iter;
        extern vec2 size;
        extern vec2 left_top;
        vec4 clr(int n){
            if(n == max_iter){return black;}
            float m = float(n)/float(max_iter);
            float r = float(mod(n,256))/32;
            float g = float(128 - mod(n+64,127))/255;
            float b = float(127 + mod(n,64))/255;
            if (r > 1.0) {r = 1.0;}
            else{
                if(r<0){r = 0;}
            }
            if (g > 1.0) {g = 1.0;}
            else{
                if(g<0){g = 0;}
            }
            if (b > 1.0) {b = 1.0;}
            else{
                if(b<0){b = 0;}
            }
            return vec4(r, g, b, 1.0);
        }
        vec4 effect( vec4 color, Image texture, vec2 texture_coords, vec2 window_coords){
            vec2 c = vec2(texture_coords[0]*size[0] + left_top[0], texture_coords[1]*size[1] - left_top[1]);
            vec2 z = vec2(0.0,0.0);
            vec2 zn = vec2(0.0,0.0);
            int n_iter = 0;
            while ( (z[0]*z[0] + z[1]*z[1] < 4) && (n_iter < max_iter) ) {
                zn[0] = z[0]*z[0] - z[1]*z[1] + c[0];
                zn[1] = 2*z[0]*z[1] + c[1];
                z[0] = zn[0];
                z[1] = zn[1];
                n_iter++;
            }
            return clr(n_iter);
        }
    ]]
end

function love.draw()
    center_x = -0.5
    center_y = 0.0
    size_x = 3
    size_y = size_x*height/width
    GLSLShader:send("left_top", {center_x-size_x*0.5, center_y+size_y*0.5})
    GLSLShader:send("size", {size_x, size_y})
    GLSLShader:sendInt("max_iter", 1024)
    love.graphics.setShader(GLSLShader)
    love.graphics.draw(mesh, 0, 0)
    love.graphics.setShader()
end
But it's somewhat misleading, because my texture was the size of the window, and it didn't work.
Well, let's investigate that. You didn't exactly provide a lot of information, but let's look anyway.
(texture_coords[0]-WD/2)/100
What is that? Well, we know what texture_coords is. From the Love2D wiki:
The location inside the texture to get pixel data from. Texture coordinates are usually normalized to the range of (0, 0) to (1, 1), with the top-left corner being (0, 0).
So you subtract WD/2 from this texture coordinate. You didn't bother mentioning what that WD value was. But regardless, you divide the result by 100.
So, what exactly is WD? Let's see if algebra can help:
val = (texture_coords[0]-WD/2)/100
val * 100 = texture_coords[0] - WD / 2
(val * 100) - texture_coords[0] = -WD / 2
-2 * ((val * 100) - texture_coords[0]) = WD
So, what is WD? Well, from this equation, I can determine... nothing. This equation seems to be gibberish.
I'm guessing you intend for WD to mean "width" (seriously, it's three more characters; you couldn't type that out?). Presumably, the texture's width. If so... the equation remains gibberish.
You're taking a value that ranges from [0, 1], then subtracting half of the texture width from it. What does that mean? Why divide by 100? Since the texture width is probably much larger than the largest value from texture_coords (aka: 1), the result of this is going to be basically -WD/200.
And unless you're rendering to a floating-point image, that's going to get clamped to the valid color range: [0, 1]. So all your values come out to be the same color: black.
Since you're talking about Mandelbrot and so forth, I suspect you're trying to generate values on the range [-1, 1] or whatever. And your equation might do that... if texture_coords weren't normalized texture coordinates on the range [0, 1]. You know, exactly like the Wiki says they are.
If you want to turn texture coordinates into the [-1, 1] range, it's really much simpler. This is why we use normalized texture coordinates:
vec2 c = (2.0 * texture_coords) - 1.0; // Vector math is good.
If you want that to be the [-100, 100] range, just multiply the result by 100.
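For example, a one-line sketch using the question's variable name:

vec2 c = ((2.0 * texture_coords) - 1.0) * 100.0; // [0, 1] -> [-100, 100]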
I had a dynamic light shader for which the shaded sprite was fine in my own test program, but it started resembling an eclipse once I imported it into my friend's physics-based game. I narrowed it down by simplifying the gradient to be purely based on the X value within the shape and making the outside of the circle in the sprite red, but as you can see, the rotation continues to cause problems (I can't post images, so here are links to the album).
Circle at different rotations(not in order, but labelled by radian values): http://imgur.com/a/Preth
Everything I researched about matrix math says I am using the correct formula for rotation, but I figure maybe I'm doing something wrong. Here is my .fx shader code:
float rotationrads; /*assumed rotation is in radians*/
sampler TextureSampler: register(s0);
float4 staticlight(float2 Tex: TEXCOORD0) : COLOR0
{
    float4 Color = tex2D(TextureSampler, Tex);
    float2 NewTex;
    /* Get the new X and Y values by applying the UV formula with the rotation */
    NewTex.x = (Tex.x * cos(rotationrads)) - (Tex.y * sin(rotationrads));
    NewTex.y = (Tex.y * sin(rotationrads)) + (Tex.y * cos(rotationrads));
    if(Color.a > 0.0)
    {
        Color.r = (Color.r * NewTex.x);
        Color.g = (Color.g * NewTex.x);
        Color.b = (Color.b * NewTex.x);
    }
    else
    {
        Color.r = 100;
        Color.g = 0;
        Color.b = 0;
        Color.a = 100;
    }
    return Color;
}

technique StaticLightOnly
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 staticlight();
    }
}
If anyone has experience with sprite-based rotation in 2D shaders, I'd appreciate any help with this! Thanks in advance!
Because rotations are performed about the origin, you have to move the rotation center (0.5, 0.5) to the origin, execute the rotation and then undo the translation.
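As a minimal sketch (using the same rotationrads and Tex as in the shader above, and the standard 2D rotation formula rather than your exact lines):

/* Move the rotation center (0.5, 0.5) to the origin, rotate, then translate back. */
float2 centered = Tex - float2(0.5, 0.5);
float s = sin(rotationrads);
float c = cos(rotationrads);
float2 NewTex;
NewTex.x = centered.x * c - centered.y * s;
NewTex.y = centered.x * s + centered.y * c;
NewTex += float2(0.5, 0.5);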
I have a 640x480 render target (it's the main backbuffer).
I'm passing a fullscreen quad to the vertex shader; the fullscreen quad has coordinates between [-1, 1] for both X and Y, which means I only pass the coordinates through to the pixel shader with no calculation:
struct VSInput
{
    float4 Position : SV_POSITION0;
};

struct VSOutput
{
    float4 Position : SV_POSITION0;
};

VSOutput VS(VSInput input)
{
    VSOutput output = (VSOutput)0;
    output.Position = input.Position;
    return output;
}
But in the pixel shader, the x and y coordinates of each fragment are in screen space (0 < x < 640 and 0 < y < 480).
Why is that? I always thought the coordinates would get interpolated on their way to the pixel shader and stay between -1 and 1, all the more so because I'm passing coordinates hardcoded between -1 and 1 in the vertex shader!
But the truth is, this pixel shader works:
float x = input.Position.x;
if (x < 200)
    output.Diffuse = float4(1.0, 0.0, 0.0, 1.0);
else if (x > 400)
    output.Diffuse = float4(0.0, 0.0, 1.0, 1.0);
else
    output.Diffuse = float4(0.0, 1.0, 0.0, 1.0);
return output;
It outputs 3 color stripes in my rendering window, but if I change the values from screen space (the 200 and 400 in the code above) to [-1, 1] space and use something like if(x < 0.5), it won't work.
I already tried
float x = input.Position.x / input.Position.w;
because I read somewhere that that way I could get values between -1 and 1, but it doesn't work either.
Thanks in advance.
From MSDN, on the semantics page about SV_POSITION:
When used in a pixel shader, SV_Position describes the pixel location.
So you are seeing expected behavior.
The best solution is to pass the normalized coordinates to the pixel shader yourself as an additional parameter. I like to use this "full-screen-triangle" vertex shader:
struct VSQuadOut {
    float4 position : SV_Position;
    float2 uv : TexCoord;
};

// outputs a full screen triangle with screen-space coordinates
// input: three empty vertices
VSQuadOut VSQuad( uint vertexID : SV_VertexID ){
    VSQuadOut result;
    result.uv = float2((vertexID << 1) & 2, vertexID & 2);
    result.position = float4(result.uv * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);
    return result;
}
(Original source: Full screen quad without vertex buffer?)
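For completeness, a hypothetical pixel shader that uses the interpolated uv; the 200 and 400 pixel thresholds from the question become 200/640 and 400/640 once the coordinate is normalized:

// Sketch only: assumes the VSQuadOut struct above and a 640-pixel-wide target.
float4 PSQuad(VSQuadOut input) : SV_Target
{
    float x = input.uv.x;             // now in [0, 1] across the screen
    if (x < 200.0 / 640.0)
        return float4(1.0, 0.0, 0.0, 1.0);
    else if (x > 400.0 / 640.0)
        return float4(0.0, 0.0, 1.0, 1.0);
    else
        return float4(0.0, 1.0, 0.0, 1.0);
}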