Love2d GLSL shader script fails to retrieve texture_coords variable - lua

Hello, everyone!
I've been trying to write a script that uses GLSL to render a Mandelbrot set, but something weird is happening.
I declare the effect function like this:
vec4 effect( vec4 color, Image texture, vec2 texture_coords, vec2 screen_coords){
But, when I try to use the texture_coords values, say, like this:
vec2 c = vec2((texture_coords[0]-WD/2)/100, (texture_coords[1]-HT/2)/100);
It returns the same value for every pixel; if, on the other hand, I use screen_coords instead, it works, but I'm afraid that if I drag the window around it might mess with the results.
Why am I unable to retrieve texture_coords?
More insight on the program and the problems here
UPDATE
I have reworked the code, now it looks like this:
vec4 effect( vec4 color, Image texture, vec2 texture_coords, vec2 window_coords )
{
    vec2 c = vec2( ( MinRe + window_coords[0] * ( MaxRe - MinRe ) / ( width + 1 ) ),
                   ( MaxIm - window_coords[1] * ( MaxIm - MinIm ) / ( height + 1 ) ) );
    vec2 z = c;
    vec2 zn = vec2(0.0, 0.0);
    int n_iter = 0;
    while( (z[0]*z[0] + z[1]*z[1] < 4.0) && (n_iter < max_iter) ) {
        zn[0] = z[0]*z[0] - z[1]*z[1] + c[0];
        zn[1] = 2.0*z[0]*z[1] + c[1];
        z[0] = zn[0];
        z[1] = zn[1];
        n_iter++;
    }
Which works beautifully. But when I use texture_coords instead of window_coords, the code returns the same value for every pixel, despite the fact that the texture I'm using is the same size as the window.

The problem is that some drawable objects of love.graphics don't set any texture coordinates if you don't load an image. So, instead of using love.graphics.rectangle, you should use a Mesh:
A 2D polygon mesh used for drawing arbitrary textured shapes
To create a Mesh object, you can add this to the load function:
function love.load()
    width, height = love.graphics.getDimensions( )
    local vertices = {
        {
            -- top-left corner
            0, 0,       -- position of the vertex
            0, 0,       -- texture coordinate at the vertex position
            255, 0, 0,  -- color of the vertex
        },
        {
            -- top-right corner
            width, 0,
            1, 0,       -- texture coordinates are in the range of [0, 1]
            0, 255, 0
        },
        {
            -- bottom-right corner
            width, height,
            1, 1,
            0, 0, 255
        },
        {
            -- bottom-left corner
            0, height,
            0, 1,
            255, 255, 0
        },
    }
    -- the Mesh DrawMode "fan" works well for 4-vertex Meshes.
    mesh = love.graphics.newMesh(vertices, "fan")
    -- ... other stuff here ...
end
and in the draw function:
function love.draw()
    -- ...
    love.graphics.draw(mesh, 0, 0)
    -- ...
end
The complete code, taking into account your previous question and my answer to it, with some lines added to manage the coordinate transformations, becomes:
function love.load()
    width, height = love.graphics.getDimensions( )
    local vertices = {
        {
            -- top-left corner
            0, 0,       -- position of the vertex
            0, 0,       -- texture coordinate at the vertex position
            255, 0, 0,  -- color of the vertex
        },
        {
            -- top-right corner
            width, 0,
            1, 0,       -- texture coordinates are in the range of [0, 1]
            0, 255, 0
        },
        {
            -- bottom-right corner
            width, height,
            1, 1,
            0, 0, 255
        },
        {
            -- bottom-left corner
            0, height,
            0, 1,
            255, 255, 0
        },
    }
    mesh = love.graphics.newMesh(vertices, "fan")
    GLSLShader = love.graphics.newShader[[
        vec4 black = vec4(0.0, 0.0, 0.0, 1.0);
        vec4 white = vec4(1.0, 1.0, 1.0, 1.0);
        extern int max_iter;
        extern vec2 size;
        extern vec2 left_top;

        vec4 clr(int n){
            if(n == max_iter){ return black; }
            float m = float(n)/float(max_iter);
            float r = float(mod(n,256))/32;
            float g = float(128 - mod(n+64,127))/255;
            float b = float(127 + mod(n,64))/255;
            if (r > 1.0) { r = 1.0; } else { if (r < 0) { r = 0; } }
            if (g > 1.0) { g = 1.0; } else { if (g < 0) { g = 0; } }
            if (b > 1.0) { b = 1.0; } else { if (b < 0) { b = 0; } }
            return vec4(r, g, b, 1.0);
        }

        vec4 effect( vec4 color, Image texture, vec2 texture_coords, vec2 window_coords ){
            vec2 c = vec2(texture_coords[0]*size[0] + left_top[0],
                          texture_coords[1]*size[1] - left_top[1]);
            vec2 z = vec2(0.0, 0.0);
            vec2 zn = vec2(0.0, 0.0);
            int n_iter = 0;
            while ( (z[0]*z[0] + z[1]*z[1] < 4.0) && (n_iter < max_iter) ) {
                zn[0] = z[0]*z[0] - z[1]*z[1] + c[0];
                zn[1] = 2.0*z[0]*z[1] + c[1];
                z[0] = zn[0];
                z[1] = zn[1];
                n_iter++;
            }
            return clr(n_iter);
        }
    ]]
end

function love.draw()
    center_x = -0.5
    center_y = 0.0
    size_x = 3
    size_y = size_x*height/width
    GLSLShader:send("left_top", {center_x - size_x*0.5, center_y + size_y*0.5})
    GLSLShader:send("size", {size_x, size_y})
    GLSLShader:sendInt("max_iter", 1024)
    love.graphics.setShader(GLSLShader)
    love.graphics.draw(mesh, 0, 0)
    love.graphics.setShader()
end

But that's somewhat misleading, because my texture was the same size as the window, and it still didn't work.
Well, let's investigate that. You didn't exactly provide a lot of information, but let's look anyway.
(texture_coords[0]-WD/2)/100
What is that? Well, we know what texture_coords is. From the Love2D wiki:
The location inside the texture to get pixel data from. Texture coordinates are usually normalized to the range of (0, 0) to (1, 1), with the top-left corner being (0, 0).
So you subtract WD/2 from this texture coordinate. You didn't bother mentioning what that WD value was. But regardless, you divide the result by 100.
So, what exactly is WD? Let's see if algebra can help:
val = (texture_coords[0]-WD/2)/100
val * 100 = texture_coords[0] - WD / 2
(val * 100) - texture_coords[0] = -WD / 2
-2 * ((val * 100) - texture_coords[0]) = WD
So, what is WD? Well, from this equation, I can determine... nothing. This equation seems to be gibberish.
I'm guessing you intend for WD to mean "width" (seriously, it's three more characters; you couldn't type that out?). Presumably, the texture's width. If so... the equation remains gibberish.
You're taking a value that ranges from [0, 1], then subtracting half of the texture width from it. What does that mean? Why divide by 100? Since the texture width is probably much larger than the largest value from texture_coords (aka: 1), the result of this is going to be basically -WD/200.
And unless you're rendering to a floating-point image, that's going to get clamped to the valid color range: [0, 1]. So all your values come out to be the same color: black.
Since you're talking about Mandelbrot and so forth, I suspect you're trying to generate values on the range [-1, 1] or whatever. And your equation might do that... if texture_coords weren't normalized texture coordinates on the range [0, 1]. You know, exactly like the Wiki says they are.
If you want to turn texture coordinates into the [-1, 1] range, it's really much simpler. This is why we use normalized texture coordinates:
vec2 c = (2.0 * texture_coords) - 1.0; // Vector math is good.
If you want that to be the [-100, 100] range, just multiply the result by 100.
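Putting those pieces together, a minimal sketch of an effect function written this way might look like the following; center and scale are hypothetical externs for panning and zooming, not something from the code above:
extern vec2 center;  // hypothetical: where on the complex plane to look
extern float scale;  // hypothetical: half-width of the visible region

vec4 effect( vec4 color, Image texture, vec2 texture_coords, vec2 screen_coords ){
    // map [0, 1] texture coordinates to [-1, 1], then scale and pan
    vec2 c = ((2.0 * texture_coords) - 1.0) * scale + center;
    // ... iterate z = z^2 + c as in the code above ...
    return vec4(c, 0.0, 1.0); // placeholder so the sketch compiles
}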

Related

WebGL is there a way to load dynamic buffers in fragment shaders?

I have a fragment shader that can draw an arc based on a set of parameters. The idea was to make the shader resolution independent, so I pass the center of the arc and the bounding radii as pixel values on the screen. You can then just render the shader by setting your vertex positions in the shape of a square. This is the shader:
precision mediump float;
#define PI 3.14159265359
#define _2_PI 6.28318530718
#define PI_2 1.57079632679
// prog vars
uniform vec2 u_resolution; // declared before it is used below
float smOOth = 1.3;
vec3 bkgd = vec3( 0.0 ); // will be a sampler
// inputs
vec2 center = u_resolution / 2.;
vec2 R = vec2( 100., 80. );
float ang1 = 1.0 * PI;
float ang2 = 0.8 * PI;
vec3 color = vec3( 0., 1.0, 0. );
void main () {
    // get the dist from the current pixel to the center.
    float r = distance( gl_FragCoord.xy, center );
    if ( r < R.x && r > R.y ) {
        // If we are in the radius, do some trig to find the angle and
        // normalize it to [0, 2 * PI].
        float theta = -( atan( gl_FragCoord.y - center.y,
                               center.x - gl_FragCoord.x ) ) + PI;
        // This is to make sure the angles are clipped at 2 pi, but if you pass
        // the values already clipped, then you can safely delete this and make
        // the code more efficient.
        ang1 = mod( ang1, _2_PI );
        ang2 = mod( ang2, _2_PI );
        float angSum = ang1 + ang2;
        bool thetaCond;
        vec2 thBound; // short for theta bounds: used to calculate smoothing
                      // at the edges of the circle.
        if ( angSum > _2_PI ) {
            thBound = vec2( ang2, angSum - _2_PI );
            thetaCond = ( theta > ang2 && theta < _2_PI ) ||
                        ( theta < thBound.y );
        } else {
            thBound = vec2( ang2, angSum );
            thetaCond = theta > ang2 && theta < angSum;
        }
        if ( thetaCond ) {
            float angOpMult = 10000. / ( R.x - R.y ) / smOOth;
            float opacity = smoothstep( 0.0, 1.0, ( R.x - r ) / smOOth ) -
                            smoothstep( 1.0, 0.0, ( r - R.y ) / smOOth ) -
                            smoothstep( 1.0, 0.0, ( theta - thBound.x ) * angOpMult ) -
                            smoothstep( 1.0, 0.0, ( thBound.y - theta ) * angOpMult );
            gl_FragColor = vec4( mix( bkgd, color, opacity ), 1.0 );
        } else {
            discard;
        }
    } else {
        discard;
    }
}
I figured this way of drawing a circle would yield better quality circles and be less hassle than loading a bunch of vertices and drawing triangle fans, even though it probably isn't as efficient. This works fine, but I don't just want to draw one fixed circle. I want to draw any circle I would want on the screen. So I had an idea to set the 'inputs' to varyings and pass a buffer with parameters to each of the vertices of a given bounding square. So my vertex shader looks like this:
attribute vec2 a_square;
attribute vec2 a_center;
attribute vec2 a_R;
attribute float a_ang1;
attribute float a_ang2;
attribute vec3 a_color;
varying vec2 center;
varying vec2 R;
varying float ang1;
varying float ang2;
varying vec3 color;
void main () {
    gl_Position = vec4( a_square, 0.0, 1.0 );
    center = a_center;
    R = a_R;
    ang1 = a_ang1;
    ang2 = a_ang2;
    color = a_color;
}
'a_square' is just the vertex for the bounding square that the circle would sit in.
Next, I define a buffer for the inputs for one test circle (in JS). One of the problems with doing it this way is that the circle parameters have to be repeated for each vertex, and for a box, this means four times. 'pw' and 'ph' are the width and height of the canvas, respectively.
var circleData = new Float32Array( [
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
] );
Then I simply load my data into a gl buffer (circleBuffer) and bind the appropriate attributes to it.
gl.bindBuffer( gl.ARRAY_BUFFER, bkgd.circleBuffer );
gl.vertexAttribPointer( bkgd.aCenter, 2, gl.FLOAT, false, 0 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aCenter );
gl.vertexAttribPointer( bkgd.aR, 2, gl.FLOAT, false, 2 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aR );
gl.vertexAttribPointer( bkgd.aAng1, 1, gl.FLOAT, false, 4 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aAng1 );
gl.vertexAttribPointer( bkgd.aAng2, 1, gl.FLOAT, false, 5 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aAng2 );
gl.vertexAttribPointer( bkgd.aColor, 3, gl.FLOAT, false, 6 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aColor );
When I load my page, I do see a circle, but it seems to me that the radii are the only attributes that are actually reflecting any type of responsiveness. The angles, center, and color are not reflecting the values they are supposed to be, and I have absolutely no idea why the radii are the only things that are actually working.
Nonetheless, this seems to be an inefficient way to load arguments into a fragment shader to draw a circle, as I have to reload the values for every vertex of the box, and then the GPU interpolates those values for no reason. Is there a better way to pass something like an attribute buffer to a fragment shader, or in general to use a fragment shader in this way? Or should I just use vertices to draw my circle instead?
If you're only drawing circles, you can use instanced drawing to avoid repeating the info.
See this Q&A: what does instancing do in webgl
Or this article
Instancing lets you use some data per instance, as in per circle.
You can also use a texture to store the per circle data or all data. See this Q&A: How to do batching without UBOs?
Whether either is more or less efficient depends on the GPU/driver/OS/browser. If you need to draw thousands of circles this might be efficient. Most apps draw a variety of things, so they would choose a more generic solution unless they had a special need to draw thousands of circles.
Also, it may not be efficient because you're still calling the fragment shader for every pixel that is in the square but not in the circle. That's roughly 30% more calls to the fragment shader than using triangles, and that assumes your code draws quads that fit the circles. At a glance, your actual code draws full-canvas quads, which is terribly inefficient.
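If you go the data-texture route, the shader ends up fetching the per-circle parameters itself. As a rough sketch only, assuming float-texture support and an instance id forwarded from the vertex shader (WebGL1 has no built-in gl_InstanceID, so v_instanceId would come from an attribute with divisor 1; u_circleData and u_numCircles are made-up names):
precision mediump float;
uniform sampler2D u_circleData; // hypothetical float texture, one RGBA texel per circle
uniform float u_numCircles;
varying float v_instanceId;     // forwarded from a divisor-1 attribute

void main() {
    // fetch this circle's parameter texel from a 1-texel-high data texture
    vec2 uv = vec2((v_instanceId + 0.5) / u_numCircles, 0.5);
    vec4 params = texture2D(u_circleData, uv); // e.g. center.xy in params.xy, radii in params.zw
    // ... use params to draw the arc as in the question's shader ...
    gl_FragColor = vec4(params.xy, 0.0, 1.0);
}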

draw an ellipse curve in fragment shader

I want a smooth curve in WebGL, so I tried to draw a rectangle in the vertex shader and send positions to the fragment shader in normalized form, along with a Size parameter which is the real size of the square:
in vec2 Position;
in vec2 Size;
out vec2 vPosition;
out vec2 vSize;
void main() {
    vSize = Size;
    vPosition = Position;
    gl_Position = vec4(Position * Size, 0.0, 1.0);
}
where Size = [width, height] of the square and is equal for every vertex, and Position = [
    -1, -1,
    -1,  1,
     1, -1,
     1,  1,
]
so my rectangle will be drawn at [2 * width, 2 * height], but I can do geometric operations in the fragment shader with a normalized 2 x 2 square.
But I have a problem with drawing an ellipse (or a circle with these sizes) in the fragment shader: when I want to make a hollow circle with a thickness parameter, its thickness in the horizontal direction is not the same as in the vertical direction. I know it's because I'm using the same size (2, 2) for both directions while on the display they are different, and as you can see, the thickness is not uniform.
I want a geometry formula to calculate the thickness in each direction so I can draw a hollow ellipse.
Thanks in advance.
If you put heavy mathematical computing in your fragment shader, it will be slow.
The trick can be to use an approximation that can be visually acceptable.
Your problem is that the thickness is different on the vertical and horizontal axes.
What you need is to discard fragments if the radius of the current point is greater than 1 or lower than radiusMin. Let uniWidth and uniHeight be the size of your rectangle.
* When y is null, on the horizontal axis, radiusMin = 1.0 - BORDER / uniWidth.
* When x is null, on the vertical axis, radiusMin = 1.0 - BORDER / uniHeight.
The trick is to interpolate between these two radii using the mix() function.
Take a look at my live example below to convince you that the result is not that bad.
Here is the fragment shader to do such a computation:
precision mediump float;
uniform float uniWidth;
uniform float uniHeight;
varying vec2 varCoords;
const float BORDER = 32.0;
void main() {
    float x = varCoords.x;
    float y = varCoords.y;
    float radius = x * x + y * y;
    if( radius > 1.0 ) discard;
    radius = sqrt( radius );
    float radiusH = 1.0 - BORDER / uniWidth;
    float radiusV = 1.0 - BORDER / uniHeight;
    float radiusAverage = (radiusH + radiusV) * 0.5;
    float minRadius = 0.0;
    x = abs( x );
    y = abs( y );
    if( x > y ) {
        minRadius = mix( radiusH, radiusAverage, y / x );
    }
    else {
        minRadius = mix( radiusV, radiusAverage, x / y );
    }
    if( radius < minRadius ) discard;
    gl_FragColor = vec4(1, .5, 0, 1);
}
Here is a live example: https://jsfiddle.net/7rh2eog1/5/
There is an implicit formula for the x and y that lie in the blue area of the hollow ellipse with a thickness parameter. Since we know the thickness and we have the size of our view, we can make two ellipses with size1 = Size - vec2(thickness) and size2 = Size + vec2(thickness),
and then test length(position/size2) < 1.0 < length(position/size1).
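A minimal fragment-shader sketch of that test; uniSize, uniThickness, and the pixel-space varCoords varying are assumed names, not code from the question:
precision mediump float;
uniform vec2 uniSize;       // assumed: semi-axes of the ellipse in pixels
uniform float uniThickness; // assumed: ring thickness in pixels
varying vec2 varCoords;     // assumed: position in the same pixel units

void main() {
    vec2 size1 = uniSize - vec2(uniThickness); // inner ellipse
    vec2 size2 = uniSize + vec2(uniThickness); // outer ellipse
    // keep fragments inside the outer ellipse but outside the inner one
    if (length(varCoords / size2) < 1.0 && length(varCoords / size1) > 1.0) {
        gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);
    } else {
        discard;
    }
}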

SKSpriteNode nearest neighbor using SKShader fragment shader

I'm trying to add palette-swapping capabilities to my SpriteKit 2D pixel art game, and it appears that when applying an SKShader, the filteringMode on the SKSpriteNode's texture is ignored.
As a result, I believe I need to apply nearest neighbor coloring first, then do the palette swapping logic second.
Based on some code found here on Shadertoy, I've made this attempt, which seems like the right direction; the logic seems sound to me if the coordinates are normalized with (0.0, 0.0) at the bottom left and (1.0, 1.0) at the top right, but the result is coming out WAY too blocky.
https://www.shadertoy.com/view/MllSzX
My adaptation for a shader.fsh file:
void main() {
    float texSize = 48.0;
    vec2 pixel = v_tex_coord * texSize;
    float c_onePixel = 1.0 / texSize;
    pixel = floor(pixel) / texSize;
    gl_FragColor = texture2D(u_texture, pixel + vec2(c_onePixel / 2.0));
}
How can I get nearest neighbor working on my SKShader before I move on to my palette swapping?
Not a perfect answer to my own question, but I managed to prevent this problem by setting PrefersOpenGL to YES in my Info.plist. I understand, however, that using Metal is preferred on iOS when possible.
I ran into a similar issue. I wanted to create an outline around pixels while keeping the image pixelated, but the shader was blurring the existing pixels, making it look bad. I ended up implementing nearest neighbor, then checking the neighboring pixels after calculating the nearest-neighbor location to see if any had an alpha greater than 0. If one did, I'd fill in the pixel. Here is how I did it:
Outline.fsh:
vec2 nearestNeighbor(vec2 loc, vec2 size) {
    vec2 onePixel = vec2(1.0, 1.0) / size;
    vec2 coordinate = floor(loc * size) / size;
    return coordinate + onePixel / 2.0;
}

void main() {
    vec2 onePixel = vec2(1.0, 1.0) / a_sprite_size;
    vec4 texture = texture2D(u_texture, nearestNeighbor(v_tex_coord, a_sprite_size)); // Nearest neighbor for the current pixel
    if (texture.a == 0.0) {
        // Pixel has no alpha: check if any neighboring pixels have a non-zero alpha
        vec4 outlineColor = vec4(0.9, 0.9, 0.0, 1.0);
        if (texture2D(u_texture, nearestNeighbor(v_tex_coord + vec2(onePixel.x, 0), a_sprite_size)).a > 0.0) {
            // Right neighbor has an alpha > 0
            gl_FragColor = outlineColor;
        } else if (texture2D(u_texture, nearestNeighbor(v_tex_coord + vec2(-onePixel.x, 0), a_sprite_size)).a > 0.0) {
            // Left neighbor has an alpha > 0
            gl_FragColor = outlineColor;
        } else if (texture2D(u_texture, nearestNeighbor(v_tex_coord + vec2(0, onePixel.y), a_sprite_size)).a > 0.0) {
            // Top neighbor has an alpha > 0
            gl_FragColor = outlineColor;
        } else if (texture2D(u_texture, nearestNeighbor(v_tex_coord + vec2(0, -onePixel.y), a_sprite_size)).a > 0.0) {
            // Bottom neighbor has an alpha > 0
            gl_FragColor = outlineColor;
        } else {
            // No neighbors with an alpha > 0, don't change the color
            gl_FragColor = texture;
        }
    } else {
        // Pixel has an alpha > 0
        gl_FragColor = texture;
    }
}
You then need to add the shader to your sprite and set the declared attributes on both the sprite and the shader so the values defined in the shader can be used.
spriteNode.setValue(SKAttributeValue(vectorFloat2: vector_float2(Float(spriteNode.size.width), Float(spriteNode.size.height))), forAttribute: "a_sprite_size")
let shader = SKShader(fileNamed: "Outline.fsh")
shader.attributes = [
    SKAttribute(name: "a_sprite_size", type: .vectorFloat2)
]
spriteNode.shader = shader
Hopefully this helps anyone else that has a similar issue!
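For the palette swapping the question is ultimately after, one possible follow-up (a sketch under assumptions, not code from the answers above) is to do the nearest-neighbor lookup first and then remap the sampled color through a lookup texture; u_palette, its 1 x N layout, and indexing by the red channel are all assumptions:
// hedged sketch: nearest neighbor first, then a palette lookup
uniform sampler2D u_palette; // hypothetical 1 x N texture of replacement colors

vec2 nearestNeighbor(vec2 loc, vec2 size) {
    vec2 onePixel = vec2(1.0, 1.0) / size;
    return floor(loc * size) / size + onePixel / 2.0;
}

void main() {
    vec4 src = texture2D(u_texture, nearestNeighbor(v_tex_coord, a_sprite_size));
    // use the source red channel as an index into the palette row
    vec4 swapped = texture2D(u_palette, vec2(src.r, 0.5));
    gl_FragColor = vec4(swapped.rgb, src.a);
}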

WebGL; Instanced rendering - setting up divisors

I'm trying to draw a lot of cubes in webgl using instanced rendering (ANGLE_instanced_arrays).
However, I can't seem to wrap my head around how to set up the divisors. I have the following buffers:
36 vertices (6 faces made from 2 triangles using 3 vertices each).
6 colors per cube (1 for each face).
1 translate per cube.
To reuse the vertices for each cube, I've set its divisor to 0.
For color I've set the divisor to 2 (i.e. use the same color for two triangles, a face).
For translate I've set the divisor to 12 (i.e. the same translate for 6 faces * 2 triangles per face).
For rendering I'm calling
ext_angle.drawArraysInstancedANGLE(gl.TRIANGLES, 0, 36, num_cubes);
This however does not seem to render my cubes.
Using a translate divisor of 1 does, but the colors are way off then, with cubes being a single solid color.
I'm thinking it's because my instances are now the full cube, but if I limit the count (i.e. vertices per instance), I don't seem to get all the way through the vertices buffer; effectively I'm just rendering one triangle per cube then.
How would I go about rendering a lot of cubes like this, with varying colored faces?
Instancing works like this:
Eventually you are going to call
ext.drawArraysInstancedANGLE(mode, first, numVertices, numInstances);
So let's say you're drawing instances of a cube. One cube has 36 vertices (6 per face * 6 faces). So
numVertices = 36
And let's say you want to draw 100 cubes, so
numInstances = 100
Let's say you have a vertex shader like this:
attribute vec4 position;
uniform mat4 matrix;
void main() {
    gl_Position = matrix * position;
}
If you did nothing else and just called
var mode = gl.TRIANGLES;
var first = 0;
var numVertices = 36
var numInstances = 100
ext.drawArraysInstancedANGLE(mode, first, numVertices, numInstances);
It would just draw the same cube in the same exact place 100 times
Next up, you want to give each cube a different translation, so you update your shader to this:
attribute vec4 position;
attribute vec3 translation;
uniform mat4 matrix;
void main() {
    gl_Position = matrix * (position + vec4(translation, 0));
}
You now make a buffer and put one translation per cube, then you set up the attribute as normal:
gl.vertexAttribPointer(translationLocation, 3, gl.FLOAT, false, 0, 0)
But you also set a divisor
ext.vertexAttribDivisorANGLE(translationLocation, 1);
That 1 says 'only advance to the next value in the translation buffer once per instance'
Now you want to have a different color per face per cube, and you only want one color per face in the data (you don't want to repeat colors). There is no setting that would do that. Since your numVertices = 36, you can only choose to advance every vertex (divisor = 0) or once every multiple of 36 vertices (i.e., numVertices).
So you say, what if I instance faces instead of cubes? Well, now you've got the opposite problem. Put one color per face: numVertices = 6, numInstances = 600 (100 cubes * 6 faces per cube). You set color's divisor to 1 to advance the color once per face, and you can set translation's divisor to 6 to advance the translation only once every 6 faces (every 6 instances). But now you no longer have a cube, you only have a single face. In other words, you're going to draw 600 faces all facing the same way, with every 6 of them translated to the same spot.
To get a cube back you'd have to add something to orient the face instances in 6 direction.
Ok, you fill a buffer with 6 orientations. That won't work either. You can't set the divisor to anything that will advance through those 6 orientations once per face and then reset after 6 faces for the next cube. There's only one divisor setting: 6 would repeat per face, 36 per cube, but you want to advance per face and reset per cube. No such option exists.
What you can do is draw it with 6 draw calls, one per face direction. In other words you're going to draw all the left faces, then all the right faces, then all the top faces, etc...
To do that we make just 1 face, 1 translation per cube, 1 color per face per cube. We set the divisor on the translation and the color to 1.
Then we draw 6 times, once for each face direction. The difference between each draw is that we pass in an orientation for the face and we change the attribute offset for the color attribute, setting its stride to 6 * 4 floats (6 * 4 * 4 bytes).
var vs = `
attribute vec4 position;
attribute vec3 translation;
attribute vec4 color;
uniform mat4 viewProjectionMatrix;
uniform mat4 localMatrix;
varying vec4 v_color;
void main() {
    vec4 localPosition = localMatrix * position + vec4(translation, 0);
    gl_Position = viewProjectionMatrix * localPosition;
    v_color = color;
}
`;
var fs = `
precision mediump float;
varying vec4 v_color;
void main() {
    gl_FragColor = v_color;
}
`;
var m4 = twgl.m4;
var gl = document.querySelector("canvas").getContext("webgl");
var ext = gl.getExtension("ANGLE_instanced_arrays");
if (!ext) {
    alert("need ANGLE_instanced_arrays");
}
var program = twgl.createProgramFromSources(gl, [vs, fs]);
var positionLocation = gl.getAttribLocation(program, "position");
var translationLocation = gl.getAttribLocation(program, "translation");
var colorLocation = gl.getAttribLocation(program, "color");
var localMatrixLocation = gl.getUniformLocation(program, "localMatrix");
var viewProjectionMatrixLocation = gl.getUniformLocation(
    program,
    "viewProjectionMatrix");

function r(min, max) {
    if (max === undefined) {
        max = min;
        min = 0;
    }
    return Math.random() * (max - min) + min;
}

function rp() {
    return r(-20, 20);
}

// make translations and colors, colors are separated by face
var numCubes = 1000;
var colors = [];
var translations = [];
for (var cube = 0; cube < numCubes; ++cube) {
    translations.push(rp(), rp(), rp());
    // pick a random color;
    var color = [r(1), r(1), r(1), 1];
    // now pick 6 similar colors for the faces of the cube
    // that way we can tell if the colors are correctly assigned
    // to each cube's faces.
    var channel = r(3) | 0; // pick a channel 0 - 2 to randomly modify
    for (var face = 0; face < 6; ++face) {
        color[channel] = r(.7, 1);
        colors.push.apply(colors, color);
    }
}
var buffers = twgl.createBuffersFromArrays(gl, {
    position: [ // one face
        -1, -1, -1,
        -1,  1, -1,
         1, -1, -1,
         1, -1, -1,
        -1,  1, -1,
         1,  1, -1,
    ],
    color: colors,
    translation: translations,
});
var faceMatrices = [
    m4.identity(),
    m4.rotationX(Math.PI / 2),
    m4.rotationX(Math.PI / -2),
    m4.rotationY(Math.PI / 2),
    m4.rotationY(Math.PI / -2),
    m4.rotationY(Math.PI),
];
function render(time) {
    time *= 0.001;
    twgl.resizeCanvasToDisplaySize(gl.canvas);
    gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
    gl.enable(gl.DEPTH_TEST);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    gl.bindBuffer(gl.ARRAY_BUFFER, buffers.position);
    gl.enableVertexAttribArray(positionLocation);
    gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);
    gl.bindBuffer(gl.ARRAY_BUFFER, buffers.translation);
    gl.enableVertexAttribArray(translationLocation);
    gl.vertexAttribPointer(translationLocation, 3, gl.FLOAT, false, 0, 0);
    gl.bindBuffer(gl.ARRAY_BUFFER, buffers.color);
    gl.enableVertexAttribArray(colorLocation);
    ext.vertexAttribDivisorANGLE(positionLocation, 0);
    ext.vertexAttribDivisorANGLE(translationLocation, 1);
    ext.vertexAttribDivisorANGLE(colorLocation, 1);
    gl.useProgram(program);
    var fov = 60;
    var aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
    var projection = m4.perspective(fov * Math.PI / 180, aspect, 0.5, 100);
    var radius = 30;
    var eye = [
        Math.cos(time) * radius,
        Math.sin(time * 0.3) * radius,
        Math.sin(time) * radius,
    ];
    var target = [0, 0, 0];
    var up = [0, 1, 0];
    var camera = m4.lookAt(eye, target, up);
    var view = m4.inverse(camera);
    var viewProjection = m4.multiply(projection, view);
    gl.uniformMatrix4fv(viewProjectionMatrixLocation, false, viewProjection);
    // 6 faces * 4 floats per color * 4 bytes per float
    var stride = 6 * 4 * 4;
    var numVertices = 6;
    faceMatrices.forEach(function(faceMatrix, ndx) {
        var offset = ndx * 4 * 4; // 4 floats per color * 4 bytes per float
        gl.vertexAttribPointer(
            colorLocation, 4, gl.FLOAT, false, stride, offset);
        gl.uniformMatrix4fv(localMatrixLocation, false, faceMatrix);
        ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, numVertices, numCubes);
    });
    requestAnimationFrame(render);
}
requestAnimationFrame(render);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://twgljs.org/dist/2.x/twgl-full.min.js"></script>
<canvas></canvas>

fullscreen quad in pixel shader has screen coordinates?

I have a 640x480 rendertarget (it's the main backbuffer).
I'm passing a fullscreen quad to the vertex shader; the quad has coordinates between [-1, 1] for both X and Y, which means I just pass the coordinates through to the pixel shader with no calculation:
struct VSInput
{
    float4 Position : SV_POSITION0;
};
struct VSOutput
{
    float4 Position : SV_POSITION0;
};
VSOutput VS(VSInput input)
{
    VSOutput output = (VSOutput)0;
    output.Position = input.Position;
    return output;
}
But in the pixel shader, the x and y coordinates of each fragment are in screen space (0 < x < 640 and 0 < y < 480).
Why is that? I always thought the coordinates would get interpolated on their way to the pixel shader and stay between -1 and 1, all the more so because I'm passing coordinates hardcoded between -1 and 1 in the vertex shader!
but truth is, this pixel shader works:
float x = input.Position.x;
if (x < 200)
    output.Diffuse = float4(1.0, 0.0, 0.0, 1.0);
else if (x > 400)
    output.Diffuse = float4(0.0, 0.0, 1.0, 1.0);
else
    output.Diffuse = float4(0.0, 1.0, 0.0, 1.0);
return output;
It outputs 3 color stripes in my rendering window, but if I change the values from screen space (the 200 and 400 in the code above) to [-1, 1] space and use something like if (x < 0.5), it won't work.
I already tried
float x = input.Position.x / input.Position.w;
because I read somewhere that that way I could get values between -1 and 1, but it doesn't work either.
Thanks in advance.
From the MSDN Semantics page, about SV_POSITION:
When used in a pixel shader, SV_Position describes the pixel location.
So you are seeing expected behavior.
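(The GLSL analogue is gl_FragCoord, which also arrives in window space. A minimal sketch of recovering [-1, 1] coordinates from it, with u_resolution as an assumed uniform holding the render-target size:)
precision mediump float;
uniform vec2 u_resolution; // assumed uniform, e.g. (640, 480)
void main() {
    // gl_FragCoord.xy is in pixels, just like SV_Position here
    vec2 ndc = (gl_FragCoord.xy / u_resolution) * 2.0 - 1.0;
    gl_FragColor = vec4(ndc * 0.5 + 0.5, 0.0, 1.0);
}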
The best solution is to pass screen space coordinates as an additional parameter. I like to use this "full-screen-triangle" vertex shader:
struct VSQuadOut {
    float4 position : SV_Position;
    float2 uv : TexCoord;
};
// outputs a full screen triangle with screen-space coordinates
// input: three empty vertices
VSQuadOut VSQuad( uint vertexID : SV_VertexID ){
    VSQuadOut result;
    result.uv = float2((vertexID << 1) & 2, vertexID & 2);
    result.position = float4(result.uv * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);
    return result;
}
(Original source: Full screen quad without vertex buffer?)
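For the GLSL setups elsewhere on this page, a hedged port of the same trick (assuming GLSL ES 3.00, where gl_VertexID is available, drawn as 3 vertices with no vertex buffer):
#version 300 es
out vec2 uv;
void main() {
    // vertex ids 0, 1, 2 generate uv (0,0), (2,0), (0,2): one triangle covering the screen
    uv = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
    gl_Position = vec4(uv * vec2(2.0, -2.0) + vec2(-1.0, 1.0), 0.0, 1.0);
}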
