I'm trying to draw a lot of cubes in WebGL using instanced rendering (ANGLE_instanced_arrays).
However I can't seem to wrap my head around how to set up the divisors. I have the following buffers:
36 vertices (6 faces made from 2 triangles using 3 vertices each).
6 colors per cube (1 for each face).
1 translate per cube.
To reuse the vertices for each cube, I've set their divisor to 0.
For color I've set the divisor to 2 (i.e. use the same color for two triangles - a face).
For translate I've set the divisor to 12 (i.e. the same translate for 6 faces * 2 triangles per face).
For rendering I'm calling
ext_angle.drawArraysInstancedANGLE(gl.TRIANGLES, 0, 36, num_cubes);
This however does not seem to render my cubes.
Using a translate divisor of 1 does, but then the colors are way off, with each cube being a single solid color.
I'm thinking it's because my instances are now the full cube, but if I limit the count (i.e. vertices per instance), I don't seem to get all the way through the vertex buffer; effectively I'm just rendering one triangle per cube.
How would I go about rendering a lot of cubes like this; with varying colored faces?
Instancing works like this:
Eventually you are going to call
ext.drawArraysInstancedANGLE(mode, first, numVertices, numInstances);
So let's say you're drawing instances of a cube. One cube has 36 vertices (6 per face * 6 faces). So
numVertices = 36
And let's say you want to draw 100 cubes, so
numInstances = 100
Let's say you have a vertex shader like this
attribute vec4 position;
uniform mat4 matrix;
void main() {
gl_Position = matrix * position;
}
If you did nothing else and just called
var mode = gl.TRIANGLES;
var first = 0;
var numVertices = 36;
var numInstances = 100;
ext.drawArraysInstancedANGLE(mode, first, numVertices, numInstances);
It would just draw the same cube in the same exact place 100 times
Next up you want to give each cube a different translation so you update your shader to this
attribute vec4 position;
attribute vec3 translation;
uniform mat4 matrix;
void main() {
gl_Position = matrix * (position + vec4(translation, 0));
}
You now make a buffer and put one translation per cube in it, then you set up the attribute as normal
gl.vertexAttribPointer(translationLocation, 3, gl.FLOAT, false, 0, 0)
But you also set a divisor
ext.vertexAttribDivisorANGLE(translationLocation, 1);
That 1 says 'only advance to the next value in the translation buffer once per instance'
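Filling that buffer could look something like this (a rough sketch; the random spread of translations is just example data):
var translations = [];
for (var i = 0; i < numInstances; ++i) {
  // one x, y, z translation per cube
  translations.push(Math.random() * 40 - 20,
                    Math.random() * 40 - 20,
                    Math.random() * 40 - 20);
}
var translationBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, translationBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(translations), gl.STATIC_DRAW);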
Now you want a different color per face per cube, and you only want one color per face in the data (you don't want to repeat colors). There is no divisor setting that will do that. Since your numVertices = 36, you can only choose to advance every vertex (divisor = 0) or once every N instances of 36 vertices (divisor = N).
So you say, what if we instance faces instead of cubes? Well, now you've got the opposite problem. Put one color per face. numVertices = 6, numInstances = 600 (100 cubes * 6 faces per cube). You set color's divisor to 1 to advance the color once per face. You can set translation's divisor to 6 to advance the translation only once every 6 faces (every 6 instances). But now you no longer have a cube, you only have a single face. In other words you're going to draw 600 faces all facing the same way, with every 6 of them translated to the same spot.
To get a cube back you'd have to add something to orient the face instances in 6 directions.
Ok, you fill a buffer with 6 orientations. That won't work. You can't set the divisor to anything that will use those 6 orientations, advancing once per face but then resetting after 6 faces for the next cube. There's only one divisor setting per attribute: you can advance once per face (divisor = 1) or once per cube (divisor = 6), but what you want is to advance per face and reset back per cube. No such option exists.
What you can do is draw it with 6 draw calls, one per face direction. In other words you're going to draw all the left faces, then all the right faces, then all the top faces, etc...
To do that we make just 1 face, 1 translation per cube, 1 color per face per cube. We set the divisor on the translation and the color to 1.
Then we draw 6 times, one for each face direction. The difference between each draw is that we pass in an orientation for the face, change the attribute offset for the color attribute, and set its stride to 6 colors * 4 floats * 4 bytes (96 bytes).
var vs = `
attribute vec4 position;
attribute vec3 translation;
attribute vec4 color;
uniform mat4 viewProjectionMatrix;
uniform mat4 localMatrix;
varying vec4 v_color;
void main() {
vec4 localPosition = localMatrix * position + vec4(translation, 0);
gl_Position = viewProjectionMatrix * localPosition;
v_color = color;
}
`;
var fs = `
precision mediump float;
varying vec4 v_color;
void main() {
gl_FragColor = v_color;
}
`;
var m4 = twgl.m4;
var gl = document.querySelector("canvas").getContext("webgl");
var ext = gl.getExtension("ANGLE_instanced_arrays");
if (!ext) {
alert("need ANGLE_instanced_arrays");
}
var program = twgl.createProgramFromSources(gl, [vs, fs]);
var positionLocation = gl.getAttribLocation(program, "position");
var translationLocation = gl.getAttribLocation(program, "translation");
var colorLocation = gl.getAttribLocation(program, "color");
var localMatrixLocation = gl.getUniformLocation(program, "localMatrix");
var viewProjectionMatrixLocation = gl.getUniformLocation(
program,
"viewProjectionMatrix");
function r(min, max) {
if (max === undefined) {
max = min;
min = 0;
}
return Math.random() * (max - min) + min;
}
function rp() {
return r(-20, 20);
}
// make translations and colors, colors are separated by face
var numCubes = 1000;
var colors = [];
var translations = [];
for (var cube = 0; cube < numCubes; ++cube) {
translations.push(rp(), rp(), rp());
// pick a random color;
var color = [r(1), r(1), r(1), 1];
// now pick 6 similar colors for the faces of the cube
// that way we can tell if the colors are correctly assigned
// to each cube's faces.
var channel = r(3) | 0; // pick a channel 0 - 2 to randomly modify
for (var face = 0; face < 6; ++face) {
color[channel] = r(.7, 1);
colors.push.apply(colors, color);
}
}
var buffers = twgl.createBuffersFromArrays(gl, {
position: [ // one face
-1, -1, -1,
-1, 1, -1,
1, -1, -1,
1, -1, -1,
-1, 1, -1,
1, 1, -1,
],
color: colors,
translation: translations,
});
var faceMatrices = [
m4.identity(),
m4.rotationX(Math.PI / 2),
m4.rotationX(Math.PI / -2),
m4.rotationY(Math.PI / 2),
m4.rotationY(Math.PI / -2),
m4.rotationY(Math.PI),
];
function render(time) {
time *= 0.001;
twgl.resizeCanvasToDisplaySize(gl.canvas);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.enable(gl.DEPTH_TEST);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.bindBuffer(gl.ARRAY_BUFFER, buffers.position);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, buffers.translation);
gl.enableVertexAttribArray(translationLocation);
gl.vertexAttribPointer(translationLocation, 3, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, buffers.color);
gl.enableVertexAttribArray(colorLocation);
ext.vertexAttribDivisorANGLE(positionLocation, 0);
ext.vertexAttribDivisorANGLE(translationLocation, 1);
ext.vertexAttribDivisorANGLE(colorLocation, 1);
gl.useProgram(program);
var fov = 60;
var aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
var projection = m4.perspective(fov * Math.PI / 180, aspect, 0.5, 100);
var radius = 30;
var eye = [
Math.cos(time) * radius,
Math.sin(time * 0.3) * radius,
Math.sin(time) * radius,
];
var target = [0, 0, 0];
var up = [0, 1, 0];
var camera = m4.lookAt(eye, target, up);
var view = m4.inverse(camera);
var viewProjection = m4.multiply(projection, view);
gl.uniformMatrix4fv(viewProjectionMatrixLocation, false, viewProjection);
// 6 faces * 4 floats per color * 4 bytes per float
var stride = 6 * 4 * 4;
var numVertices = 6;
faceMatrices.forEach(function(faceMatrix, ndx) {
var offset = ndx * 4 * 4;  // 4 floats per color * 4 bytes per float
gl.vertexAttribPointer(
colorLocation, 4, gl.FLOAT, false, stride, offset);
gl.uniformMatrix4fv(localMatrixLocation, false, faceMatrix);
ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, numVertices, numCubes);
});
requestAnimationFrame(render);
}
requestAnimationFrame(render);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://twgljs.org/dist/2.x/twgl-full.min.js"></script>
<canvas></canvas>
I have a fragment shader that can draw an arc based on a set of parameters. The idea was to make the shader resolution independent, so I pass the center of the arc and the bounding radii as pixel values on the screen. You can then just render the shader by setting your vertex positions in the shape of a square. This is the shader:
precision mediump float;
#define PI 3.14159265359
#define _2_PI 6.28318530718
#define PI_2 1.57079632679
// inputs
vec2 center = u_resolution / 2.;
vec2 R = vec2( 100., 80. );
float ang1 = 1.0 * PI;
float ang2 = 0.8 * PI;
vec3 color = vec3( 0., 1.0, 0. );
// prog vars
uniform vec2 u_resolution;
float smOOth = 1.3;
vec3 bkgd = vec3( 0.0 ); // will be a sampler
void main () {
// get the dist from the current pixel to the coord.
float r = distance( gl_FragCoord.xy, center );
if ( r < R.x && r > R.y ) {
// If we are in the radius, do some trig to find the angle and normalize
// to
float theta = -( atan( gl_FragCoord.y - center.y,
center.x - gl_FragCoord.x ) ) + PI;
// This is to make sure the angles are clipped at 2 pi, but if you pass
// the values already clipped, then you can safely delete this and make
// the code more efficient.
ang1 = mod( ang1, _2_PI );
ang2 = mod( ang2, _2_PI );
float angSum = ang1 + ang2;
bool thetaCond;
vec2 thBound; // short for theta bounds: used to calculate smoothing
// at the edges of the circle.
if ( angSum > _2_PI ) {
thBound = vec2( ang2, angSum - _2_PI );
thetaCond = ( theta > ang2 && theta < _2_PI ) ||
( theta < thBound.y );
} else {
thBound = vec2( ang2, angSum );
thetaCond = theta > ang2 && theta < angSum;
}
if ( thetaCond ) {
float angOpMult = 10000. / ( R.x - R.y ) / smOOth;
float opacity = smoothstep( 0.0, 1.0, ( R.x - r ) / smOOth ) -
smoothstep( 1.0, 0.0, ( r - R.y ) / smOOth ) -
smoothstep( 1.0, 0.0, ( theta - thBound.x )
* angOpMult ) -
smoothstep( 1.0, 0.0, ( thBound.y - theta )
* angOpMult );
gl_FragColor = vec4( mix( bkgd, color, opacity ), 1.0 );
} else
discard;
} else
discard;
}
I figured this way of drawing a circle would yield better quality circles and be less hassle than loading a bunch of vertices and drawing triangle fans, even though it probably isn't as efficient. This works fine, but I don't just want to draw one fixed circle. I want to draw any circle I would want on the screen. So I had an idea to set the 'inputs' to varyings and pass a buffer with parameters to each of the vertices of a given bounding square. So my vertex shader looks like this:
attribute vec2 a_square;
attribute vec2 a_center;
attribute vec2 a_R;
attribute float a_ang1;
attribute float a_ang2;
attribute vec3 a_color;
varying vec2 center;
varying vec2 R;
varying float ang1;
varying float ang2;
varying vec3 color;
void main () {
gl_Position = vec4( a_square, 0.0, 1.0 );
center = a_center;
R = a_R;
ang1 = a_ang1;
ang2 = a_ang2;
color = a_color;
}
'a_square' is just the vertex for the bounding square that the circle would sit in.
Next, I define a buffer for the inputs for one test circle (in JS). One of the problems with doing it this way is that the circle parameters have to be repeated for each vertex, and for a box, this means four times. 'pw' and 'ph' are the width and height of the canvas, respectively.
var circleData = new Float32Array( [
  pw / 2, ph / 2,  440, 280,  Math.PI * 1.2, Math.PI * 0.2,  1000, 0, 0,
  pw / 2, ph / 2,  440, 280,  Math.PI * 1.2, Math.PI * 0.2,  1000, 0, 0,
  pw / 2, ph / 2,  440, 280,  Math.PI * 1.2, Math.PI * 0.2,  1000, 0, 0,
  pw / 2, ph / 2,  440, 280,  Math.PI * 1.2, Math.PI * 0.2,  1000, 0, 0,
] );
Then I simply load my data into a gl buffer (circleBuffer) and bind the appropriate attributes to it.
gl.bindBuffer( gl.ARRAY_BUFFER, bkgd.circleBuffer );
gl.vertexAttribPointer( bkgd.aCenter, 2, gl.FLOAT, false, 0 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aCenter );
gl.vertexAttribPointer( bkgd.aR, 2, gl.FLOAT, false, 2 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aR );
gl.vertexAttribPointer( bkgd.aAng1, 1, gl.FLOAT, false, 4 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aAng1 );
gl.vertexAttribPointer( bkgd.aAng2, 1, gl.FLOAT, false, 5 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aAng2 );
gl.vertexAttribPointer( bkgd.aColor, 3, gl.FLOAT, false, 6 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aColor );
When I load my page, I do see a circle, but it seems to me that the radii are the only attributes that are actually reflecting any type of responsiveness. The angles, center, and color are not reflecting the values they are supposed to be, and I have absolutely no idea why the radii are the only things that are actually working.
Nonetheless, this seems to be an inefficient way to load arguments into a fragment shader to draw a circle, as I have to reload the values for every vertex of the box, and then the GPU interpolates those values for no reason. Is there a better way to pass something like an attribute buffer to a fragment shader, or in general to use a fragment shader in this way? Or should I just use vertices to draw my circle instead?
If you're only drawing circles you can use instanced drawing to not repeat the info.
See this Q&A: what does instancing do in webgl
Or this article
Instancing lets you use some data per instance, as in per circle.
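A rough sketch of per-circle data with ANGLE_instanced_arrays might look like this (the buffer names, attribute locations and the 6-vertex quad are assumptions for illustration, not your actual code):
var ext = gl.getExtension("ANGLE_instanced_arrays");

// quad corners, shared by every circle instance (divisor 0)
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.enableVertexAttribArray(aSquare);
gl.vertexAttribPointer(aSquare, 2, gl.FLOAT, false, 0, 0);
ext.vertexAttribDivisorANGLE(aSquare, 0);

// one center + radii per circle, interleaved (divisor 1 = advance once per instance)
var floatSize = 4;
var stride = 4 * floatSize;  // vec2 center + vec2 R
gl.bindBuffer(gl.ARRAY_BUFFER, perCircleBuffer);
gl.enableVertexAttribArray(aCenter);
gl.vertexAttribPointer(aCenter, 2, gl.FLOAT, false, stride, 0);
ext.vertexAttribDivisorANGLE(aCenter, 1);
gl.enableVertexAttribArray(aR);
gl.vertexAttribPointer(aR, 2, gl.FLOAT, false, stride, 2 * floatSize);
ext.vertexAttribDivisorANGLE(aR, 1);

// 6 vertices per quad, one instance per circle
ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, 6, numCircles);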
You can also use a texture to store the per circle data or all data. See this Q&A: How to do batching without UBOs?
Whether either is more or less efficient depends on the GPU/driver/OS/browser. If you need to draw 1000s of circles this might be efficient. Most apps draw a variety of things, so they'd choose a more generic solution unless they had a special need to draw 1000s of circles.
Also it may not be efficient because you're still calling the fragment shader for every pixel that is in the square but not in the circle. That's 30% more calls to the fragment shader than using triangles, and that assumes your code is drawing quads that fit the circles. At a glance it looks like your actual code is drawing full-canvas quads, which is terribly inefficient.
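For comparison, fitting the quad to each circle instead of covering the whole canvas could be done in the vertex shader along these lines. This is only a sketch; it assumes a_square is a unit quad in the range [-1, +1] and that a_center / a_R are in pixels:
var vs = `
attribute vec2 a_square;   // unit quad corner, -1 to +1
attribute vec2 a_center;   // circle center in pixels
attribute vec2 a_R;        // outer/inner radii in pixels
uniform vec2 u_resolution;
varying vec2 center;
varying vec2 R;
void main() {
  // place the quad so it just covers the outer radius
  vec2 pixelPos = a_center + a_square * a_R.x;
  // convert pixels to clip space
  vec2 clip = (pixelPos / u_resolution) * 2.0 - 1.0;
  gl_Position = vec4(clip, 0.0, 1.0);
  center = a_center;
  R = a_R;
}
`;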
I've been trying out some WebGL but there's a bug I cannot seem to find out how to fix.
Currently I have the following setup:
I have around 100 triangles which all have a position and are being drawn by a single gl.drawArrays function. To have them drawn in the correct order I used gl.enable(gl.DEPTH_TEST); which gave the correct result.
The problem I have now is that if I update the gl_Position of the triangles in the vertex shader the updated Z value is not being used in the depth test. The result is that a triangle with a gl_Position.z of 1 can be drawn on top of a triangle with a gl_Position.z of 10, which is not exactly what I want..
What have I tried?
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.GEQUAL);
with
gl.clear(gl.DEPTH_BUFFER_BIT);
gl.clearDepth(0);
gl.drawArrays(gl.TRIANGLES, 0, verticesCount);
in the render function.
The following code is used to create the buffer:
gl.bindBuffer(gl.ARRAY_BUFFER, dataBuffer);
gl.bufferData(gl.ARRAY_BUFFER, positionBufferData, gl.STATIC_DRAW);
const positionLocation = gl.getAttribLocation(program, 'position');
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);
The triangles with a higher z value are much bigger in size (due to the perspective) but small triangles still appear over them (due to the render order).
In the fragment shader I've used gl_FragCoord.z to see if that was correct, and smaller triangles (further away) received a higher alpha than bigger ones (up close).
What could be the cause of the weird drawing behaviour?
Depth in clip space goes from -1 to 1. Depth written to the depth buffer goes from 0 to 1. You're clearing to 1. There is no depth value > 1, so the only things you should see drawn are at gl_Position.z = 1. Anything less than 1 will fail the gl.depthFunc(gl.GEQUAL) test. Anything > 1 will be clipped. Only 1 is both in the depth range and greater than or equal to 1.
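For comparison, the conventional setup is the opposite combination. A minimal sketch (note the clear value has to be set before the clear):
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LESS);   // smaller depth = closer wins (the default)
gl.clearDepth(1);        // set the clear value BEFORE clearing
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
// ... then draw; gl_Position.z / gl_Position.w must end up between -1 and 1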
The example below draws smaller to larger rectangles with different z values. The red is standard gl.depthFunc(gl.LESS) with depth cleared to 1. The green is gl.depthFunc(gl.GEQUAL) with depth cleared to 0. The blue is gl.depthFunc(gl.GEQUAL) with depth cleared to 1. Notice blue only draws the single rectangle at gl_Position.z = 1 because all other rectangles fail the test since they are at Z < 1.
const m4 = twgl.m4;
const gl = document.querySelector("canvas").getContext("webgl");
const vs = `
attribute vec4 position;
varying vec4 v_position;
uniform mat4 matrix;
void main() {
gl_Position = matrix * position;
v_position = abs(position);
}
`;
const fs = `
precision mediump float;
varying vec4 v_position;
uniform vec4 color;
void main() {
gl_FragColor = vec4(1. - v_position.xxx, 1) * color;
}
`;
// compiles shaders, links program, looks up attributes
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
// calls gl.createBuffer, gl.bindBuffer, gl.bufferData for each array
const z0To1BufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: [
...makeQuad( .2, 0.00),
...makeQuad( .4, .25),
...makeQuad( .6, .50),
...makeQuad( .8, .75),
...makeQuad(1.0, 1.00),
],
});
const z1To0BufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: [
...makeQuad(.2, 1.00),
...makeQuad(.4, .75),
...makeQuad(.6, .50),
...makeQuad(.8, .25),
...makeQuad(1., 0.00),
],
});
function makeQuad(xy, z) {
return [
-xy, -xy, z,
xy, -xy, z,
-xy, xy, z,
-xy, xy, z,
xy, -xy, z,
xy, xy, z,
];
}
gl.useProgram(programInfo.program);
gl.enable(gl.DEPTH_TEST);
gl.clearDepth(1);
gl.clear(gl.DEPTH_BUFFER_BIT);
gl.depthFunc(gl.LESS);
drawRects(-0.66, z0To1BufferInfo, [1, 0, 0, 1]);
gl.clearDepth(0);
gl.clear(gl.DEPTH_BUFFER_BIT);
gl.depthFunc(gl.GEQUAL);
drawRects(0, z1To0BufferInfo, [0, 1, 0, 1]);
gl.clearDepth(1);
gl.clear(gl.DEPTH_BUFFER_BIT);
gl.depthFunc(gl.GEQUAL);
drawRects(0.66, z1To0BufferInfo, [0, 0, 1, 1]);
function drawRects(xoffset, bufferInfo, color) {
// calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
let mat = m4.translation([xoffset, 0, 0]);
mat = m4.scale(mat, [.3, .5, 1]);
// calls gl.uniformXXX
twgl.setUniforms(programInfo, {
color: color,
matrix: mat,
});
// calls gl.drawArrays or gl.drawElements
twgl.drawBufferInfo(gl, bufferInfo);
}
<script src="https://twgljs.org/dist/3.x/twgl-full.min.js"></script>
<canvas></canvas>
<pre>
red : depthFunc: LESS, clearDepth: 1
green: depthFunc: GEQUAL, clearDepth: 0
blue : depthFunc: GEQUAL, clearDepth: 1
</pre>
Hello, everyone!
I've been trying to write a script that uses GLSL to render a Mandelbrot set, but something weird is happening.
I declare the effect function like this:
vec4 effect( vec4 color, Image texture, vec2 texture_coords, vec2 screen_coords){
But, when I try to use the texture_coords values, say, like this:
vec2 c = vec2((texture_coords[0]-WD/2)/100, (texture_coords[1]-HT/2)/100);
It returns the same value for every pixel; if, on the other hand, I use screen_coords instead, it works, but I'm afraid that if I drag the window around it might mess with the results.
Why am I unable to retrieve texture_coords?
More insight on the program and the problems here
UPDATE
I have reworked the code, now it looks like this:
vec4 effect( vec4 color, Image texture, vec2 texture_coords, vec2 window_coords)
{
vec2 c = vec2( ( MinRe + window_coords[0] * ( MaxRe - MinRe ) / ( width + 1 ) ),
( MaxIm - window_coords[1] * ( MaxIm - MinIm ) / ( height + 1 ) )
);
vec2 z = c;
vec2 zn = vec2(0.0, 0.0);
int n_iter = 0;
while( (z[0]*z[0] + z[1]*z[1] < 4) && (n_iter < max_iter)) {
zn[0] = z[0]*z[0] - z[1]*z[1] + c[0];
zn[1] = 2* z[0]*z[1] + c[1];
z[0] = zn[0];
z[1] = zn[1];
n_iter++;
}
Which works beautifully. But when I use texture_coords instead of window_coords, the code returns the same value for every pixel, despite the fact that the texture I'm using is the same size as the window.
The problem is that some drawable objects of love.graphics don't set any texture coordinates if you don't load an image. So, instead of using love.graphics.rectangle, you should use a Mesh:
A 2D polygon mesh used for drawing arbitrary textured shapes
In order to add a mesh object you can add to the load function:
function love.load()
width, height = love.graphics.getDimensions( )
local vertices = {
{
-- top-left corner
0, 0, -- position of the vertex
0, 0, -- texture coordinate at the vertex position
255, 0, 0, -- color of the vertex
},
{
-- top-right corner
width, 0,
1, 0, -- texture coordinates are in the range of [0, 1]
0, 255, 0
},
{
-- bottom-right corner
width, height,
1, 1,
0, 0, 255
},
{
-- bottom-left corner
0, height,
0, 1,
255, 255, 0
},
}
-- the Mesh DrawMode "fan" works well for 4-vertex Meshes.
mesh = love.graphics.newMesh(vertices, "fan")
-- ... other stuff here ...
end
and in the draw function:
function love.draw()
-- ...
love.graphics.draw(mesh,0,0)
-- ...
end
The complete code, considering your previous question and my answer to that, adding some lines to manage the coordinate transformations, becomes:
function love.load()
width, height = love.graphics.getDimensions( )
local vertices = {
{
-- top-left corner
0, 0, -- position of the vertex
0, 0, -- texture coordinate at the vertex position
255, 0, 0, -- color of the vertex
},
{
-- top-right corner
width, 0,
1, 0, -- texture coordinates are in the range of [0, 1]
0, 255, 0
},
{
-- bottom-right corner
width, height,
1, 1,
0, 0, 255
},
{
-- bottom-left corner
0, height,
0, 1,
255, 255, 0
},
}
mesh = love.graphics.newMesh(vertices, "fan")
GLSLShader = love.graphics.newShader[[
vec4 black = vec4(0.0, 0.0, 0.0, 1.0);
vec4 white = vec4(1.0, 1.0, 1.0, 1.0);
extern int max_iter;
extern vec2 size;
extern vec2 left_top;
vec4 clr(int n){
if(n == max_iter){return black;}
float m = float(n)/float(max_iter);
float r = float(mod(n,256))/32;
float g = float(128 - mod(n+64,127))/255;
float b = float(127 + mod(n,64))/255;
if (r > 1.0) {r = 1.0;}
else{
if(r<0){r = 0;}
}
if (g > 1.0) {g = 1.0;}
else{
if(g<0){g = 0;}
}
if (b > 1.0) {b = 1.0;}
else{
if(b<0){b = 0;}
}
return vec4(r, g, b, 1.0);
}
vec4 effect( vec4 color, Image texture, vec2 texture_coords, vec2 window_coords){
vec2 c = vec2(texture_coords[0]*size[0] + left_top[0],texture_coords[1]*size[1] - left_top[1]);
vec2 z = vec2(0.0,0.0);
vec2 zn = vec2(0.0,0.0);
int n_iter = 0;
while ( (z[0]*z[0] + z[1]*z[1] < 4) && (n_iter < max_iter) ) {
zn[0] = z[0]*z[0] - z[1]*z[1] + c[0];
zn[1] = 2*z[0]*z[1] + c[1];
z[0] = zn[0];
z[1] = zn[1];
n_iter++;
}
return clr(n_iter);
}
]]
end
function love.draw()
center_x = -0.5
center_y = 0.0
size_x = 3
size_y = size_x*height/width
GLSLShader:send("left_top",{center_x-size_x*0.5,center_y+size_y*0.5})
GLSLShader:send("size",{size_x,size_y})
GLSLShader:sendInt("max_iter",1024)
love.graphics.setShader(GLSLShader)
love.graphics.draw(mesh,0,0)
love.graphics.setShader()
end
But it's somewhat misleading, because my texture was the size of the window, and it didn't work
Well, let's investigate that. You didn't exactly provide a lot of information, but let's look anyway.
(texture_coords[0]-WD/2)/100
What is that? Well, we know what texture_coords is. From the Love2D wiki:
The location inside the texture to get pixel data from. Texture coordinates are usually normalized to the range of (0, 0) to (1, 1), with the top-left corner being (0, 0).
So you subtract from this texture coordinate WD/2. You didn't bother mentioning what that WD value was. But regardless, you divide the result by 100.
So, what exactly is WD? Let's see if algebra can help:
val = (texture_coords[0]-WD/2)/100
val * 100 = texture_coords[0] - WD / 2
(val * 100) - texture_coords[0] = -WD / 2
-2 * ((val * 100) - texture_coords[0]) = WD
So, what is WD? Well, from this equation, I can determine... nothing. This equation seems to be gibberish.
I'm guessing you intend for WD to mean "width" (seriously, it's three more characters; you couldn't type that out?). Presumably, the texture's width. If so... the equation remains gibberish.
You're taking a value that ranges from [0, 1], then subtracting half of the texture width from it. What does that mean? Why divide by 100? Since the texture width is probably much larger than the largest value from texture_coords (aka: 1), the result of this is going to be basically -WD/200.
And unless you're rendering to a floating-point image, that's going to get clamped to the valid color range: [0, 1]. So all your values come out to be the same color: black.
Since you're talking about Mandelbrot and so forth, I suspect you're trying to generate values on the range [-1, 1] or whatever. And your equation might do that... if texture_coords weren't normalized texture coordinates on the range [0, 1]. You know, exactly like the Wiki says they are.
If you want to turn texture coordinates into the [-1, 1] range, it's really much simpler. This is why we use normalized texture coordinates:
vec2 c = (2.0 * texture_coords) - 1.0; // Vector math is good.
If you want that to be the [-100, 100] range, just multiply the result by 100.
I'm an OpenGL newbie. I'm trying to create a system of particles and I have everything set up except the particle trails.
The easiest way, that I see, for me to implement this is to clear the screen with a nearly transparent colour e.g alpha = 0.05. This will fade the previous positions drawn.
However, this doesn't work. I've also tried to draw a rectangle over the screen.
After setting alpha of my particles to 0.3, my transparency doesn't seem to be working.
This is my code:
do{
glBindVertexArray(VertexArrayID);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
time = (float)glfwGetTime();
// ** Calculating new positions and placing into vertex array
iter = 0;
for(int i = 0; i < n; i++){
bodies[i].F(bodies, i, n, 1);
bodies[i].calcPosition(dt);
bodies[i].getVertexArray(vertexArray, iter, scale, i);
}
for(int i = 0; i < n; i++){
bodies[i].F(bodies, i, n, 2);
bodies[i].calcVelocity(dt);
}
// **
glBufferData(GL_ARRAY_BUFFER, 21 * 6 * n * sizeof(float), vertexArray, GL_STREAM_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 20 * 3 * n * sizeof(GLuint), elements, GL_STREAM_DRAW);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(
0,
2,
GL_FLOAT,
GL_FALSE,
6*sizeof(float),
(void*)0
);
glEnableVertexAttribArray(1);
glVertexAttribPointer(
1,
4,
GL_FLOAT,
GL_FALSE,
6*sizeof(float),
(void*)(2*sizeof(float))
);
glDrawElements(GL_TRIANGLES, 20 * 3 * n, GL_UNSIGNED_INT, 0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glfwSwapBuffers();
while((float)glfwGetTime() - time < dt){
}
} // Check if the ESC key was pressed or the window was closed
while( glfwGetKey( GLFW_KEY_ESC ) != GLFW_PRESS &&
glfwGetWindowParam( GLFW_OPENED ) );
My shaders:
#version 330 core
in vec4 Color;
out vec4 outColor;
void main()
{
outColor = Color;
}
#version 330 core
layout(location = 0) in vec2 position;
layout(location = 1) in vec4 color;
out vec4 Color;
void main(){
gl_Position = vec4(position, 0.0, 1.0);
Color = color;
}
This outputs n circles (20-sided polygons) travelling around the screen in different colours. All previous drawings stay on the screen; I want them to fade.
Thanks
Andy
The easiest way, that I see, for me to implement this is to clear the screen with a nearly transparent colour e.g alpha = 0.05. This will fade the previous positions drawn.
That is not going to work in a double-buffered window (and you don't want a single-buffered one). The contents of the back buffer are undefined after SwapBuffers. If you are really, really lucky, you might get some of the older image contents (but not the last one, as that is the front buffer now).
To solve this issue, you have to render to a texture, so you can redraw the previous contents (with your fadeout), add the new particle positions (still rendering into a texture for the next frame), and finally render or blit that texture to the real framebuffer. So you need at least two additional textures in a ping-pong fashion.
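A rough sketch of that ping-pong structure, shown here in WebGL-style JavaScript (the desktop GL calls are analogous; createFBO, drawFullscreenQuad and drawParticles are assumed helpers, not real API functions):
// two framebuffer/texture pairs, created once up front
var fboA = createFBO(gl, width, height);
var fboB = createFBO(gl, width, height);

function frame() {
  // 1. redraw the previous frame's texture into the other FBO, slightly faded toward the background
  gl.bindFramebuffer(gl.FRAMEBUFFER, fboB.framebuffer);
  gl.clear(gl.COLOR_BUFFER_BIT);            // background color
  drawFullscreenQuad(fboA.texture, 0.95);   // old image at alpha < 1, blended over the background
  // 2. draw the new particle positions on top, still into fboB
  drawParticles();
  // 3. show the accumulated result on the default framebuffer
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  drawFullscreenQuad(fboB.texture, 1.0);
  // 4. swap roles for the next frame
  var tmp = fboA; fboA = fboB; fboB = tmp;
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);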
I was reading tutorials from here.
<script class = "WebGL">
var gl;
function initGL() {
// Get A WebGL context
var canvas = document.getElementById("canvas");
gl = getWebGLContext(canvas);
if (!gl) {
return;
}
}
var positionLocation;
var resolutionLocation;
var colorLocation;
var translationLocation;
var rotationLocation;
var translation = [50,50];
var rotation = [0, 1];
var angle = 0;
function initShaders() {
// setup GLSL program
vertexShader = createShaderFromScriptElement(gl, "2d-vertex-shader");
fragmentShader = createShaderFromScriptElement(gl, "2d-fragment-shader");
program = createProgram(gl, [vertexShader, fragmentShader]);
gl.useProgram(program);
// look up where the vertex data needs to go.
positionLocation = gl.getAttribLocation(program, "a_position");
// lookup uniforms
resolutionLocation = gl.getUniformLocation(program, "u_resolution");
colorLocation = gl.getUniformLocation(program, "u_color");
translationLocation = gl.getUniformLocation(program, "u_translation");
rotationLocation = gl.getUniformLocation(program, "u_rotation");
// set the resolution
gl.uniform2f(resolutionLocation, canvas.width, canvas.height);
}
function initBuffers() {
// Create a buffer.
var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
// Set Geometry.
setGeometry(gl);
}
function setColor(red, green, blue) {
gl.uniform4f(colorLocation, red, green, blue, 1);
}
// Draw the scene.
function drawScene() {
// Clear the canvas.
gl.clear(gl.COLOR_BUFFER_BIT);
// Set the translation.
gl.uniform2fv(translationLocation, translation);
// Set the rotation.
gl.uniform2fv(rotationLocation, rotation);
// Draw the geometry.
gl.drawArrays(gl.TRIANGLES, 0, 6);
}
// Fill the buffer with the values that define a letter 'F'.
function setGeometry(gl) {
/*Assume size1 is declared*/
var vertices = [
-size1/2, -size1/2,
-size1/2, size1/2,
size1/2, size1/2,
size1/2, size1/2,
size1/2, -size1/2,
-size1/2, -size1/2 ];
gl.bufferData(
gl.ARRAY_BUFFER,
new Float32Array(vertices),
gl.STATIC_DRAW);
}
function animate() {
translation[0] += 0.01;
translation[1] += 0.01;
angle += 0.01;
rotation[0] = Math.cos(angle);
rotation[1] = Math.sin(angle);
}
function tick() {
requestAnimFrame(tick);
drawScene();
animate();
}
function start() {
initGL();
initShaders();
initBuffers();
setColor(0.2, 0.5, 0.5);
tick();
}
</script>
<!-- vertex shader -->
<script id="2d-vertex-shader" type="x-shader/x-vertex">
attribute vec2 a_position;
uniform vec2 u_resolution;
uniform vec2 u_translation;
uniform vec2 u_rotation;
void main() {
vec2 rotatedPosition = vec2(
a_position.x * u_rotation.y + a_position.y * u_rotation.x,
a_position.y * u_rotation.y - a_position.x * u_rotation.x);
// Add in the translation.
vec2 position = rotatedPosition + u_translation;
// convert the position from pixels to 0.0 to 1.0
vec2 zeroToOne = position / u_resolution;
// convert from 0->1 to 0->2
vec2 zeroToTwo = zeroToOne * 2.0;
// convert from 0->2 to -1->+1 (clipspace)
vec2 clipSpace = zeroToTwo - 1.0;
gl_Position = vec4(clipSpace, 0, 1);
}
</script>
<!-- fragment shader -->
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
uniform vec4 u_color;
void main() {
gl_FragColor = u_color;
}
</script>
My WebGL program for 1 shape works something like this:
Get a context (gl) from the canvas element.
initialize buffers with the shape of my object
drawScene() : a call to gl.drawArrays()
If there is animation, an update function updates my shape's angles and positions, and then drawScene() is called again; both are in tick(), so they run repeatedly.
Now when I need more than 1 shape, should I fill a single buffer at once with many objects and then use it later to call drawScene(), drawing all the objects at once,
[OR]
should I repeatedly call initBuffers() and drawScene() from requestAnimFrame()?
In pseudo code
At init time
Get a context (gl) from the canvas element.
for each shader
create shader
look up attribute and uniform locations
for each shape
initialize buffers with the shape
for each texture
create textures and/or fill them with data.
At draw time
for each shape
if the last shader used is different than the shader needed for this shape call gl.useProgram
For each attribute needed by shader
call gl.enableVertexAttribArray, gl.bindBuffer and gl.vertexAttribPointer for each attribute needed by shape with the attribute locations for the current shader.
For each uniform needed by shader
call gl.uniformXXX with the desired values using the locations for the current shader
call gl.drawArrays or if the data is indexed called gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, bufferOfIndicesForCurrentShape) followed by gl.drawElements
Common Optimizations
1) Often you don't need to set every uniform. For example, if you are drawing 10 shapes with the same shader and that shader takes a viewMatrix or cameraMatrix, it's likely that the viewMatrix or cameraMatrix uniform is the same for every shape, so just set it once.
2) You can often move the calls to gl.enableVertexAttribArray to initialization time.
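A minimal sketch of that draw loop in plain WebGL might look like this (the shape list and the two buffers are made up for illustration; the attribute and uniform locations are the ones from your code above):
// at init time: one buffer per shape (or one shared buffer with offsets)
var shapes = [
  { buffer: triangleBuffer, numVertices: 3, translation: [ 50,  50] },
  { buffer: squareBuffer,   numVertices: 6, translation: [150, 100] },
];

// at draw time
gl.useProgram(program);                                          // same shader for both shapes
gl.uniform2f(resolutionLocation, canvas.width, canvas.height);   // shared uniform, set once
// gl.enableVertexAttribArray(positionLocation) was done once at init (optimization 2 above)
shapes.forEach(function(shape) {
  gl.bindBuffer(gl.ARRAY_BUFFER, shape.buffer);
  gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
  gl.uniform2fv(translationLocation, shape.translation);
  gl.drawArrays(gl.TRIANGLES, 0, shape.numVertices);
});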
Having multiple meshes in one buffer (and rendering them with a single gl.drawArrays() as you're suggesting) yields better performance in complex scenes but obviously at that point you're not able to change shader uniforms (such as transformations) per mesh.
If you want to have the meshes running around independently, you'll have to render each one separately. You could still keep all the meshes in one buffer to avoid some overhead from gl.bindBuffer() calls but imho that won't help that much, at least not in simple scenes.
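For what it's worth, keeping several meshes in one buffer while still drawing them separately just means using different first/count arguments per draw call. A sketch, assuming both meshes were packed back to back into sharedBuffer at init time:
gl.bindBuffer(gl.ARRAY_BUFFER, sharedBuffer);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);

// mesh A occupies vertices [0, 6), mesh B occupies [6, 12)
gl.uniform2fv(translationLocation, translationA);
gl.drawArrays(gl.TRIANGLES, 0, 6);

gl.uniform2fv(translationLocation, translationB);
gl.drawArrays(gl.TRIANGLES, 6, 6);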
Create your buffers separately for each object you want in the scene, otherwise they won't be able to move and use shader effects independently.
But that is in case your objects are different. From what I gather here, I think you just want to draw the same shape more than once at different positions, right?
The way you go about that is to set that translationLocation uniform with a different translation after drawing the shape for the first time. That way, when you draw the shape again, it will be located somewhere else and not on top of the other one, so you can see it. You can set all those transformation values differently and then just call gl.drawArrays again, since you're going to draw the same buffers that are already in use.
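With the drawScene() from the question, that could look roughly like this (the second translation value is made up for illustration):
function drawScene() {
  gl.clear(gl.COLOR_BUFFER_BIT);
  gl.uniform2fv(rotationLocation, rotation);
  // first copy of the shape
  gl.uniform2fv(translationLocation, [50, 50]);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
  // same buffer, different translation: second copy somewhere else
  gl.uniform2fv(translationLocation, [200, 120]);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
}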