I wrote a demo application to test the capability of the vertex shader, since it is commonly said that it can handle millions of quads per second, but in my case it fails well below that.
The demo has an input box that only accepts numbers, and it dynamically renders a grid of squares based on the input value.
I can see it running without any delay up to around 25 squares, after which it slows down, and at some point the GPU even crashes, which is worse.
Can the code I wrote be optimised, or is this a limitation of the GPU and OpenGL ES?
Code:
<script type="vertexShader" id="vertexShader">
#version 300 es
in vec3 position;
in vec4 color;
out vec4 fcolor;
void main () {
gl_Position = vec4(position, 1.0);
fcolor = color;
}
</script>
<script type="fragmentShader" id="fragmentShader">
#version 300 es
precision mediump float;
in vec4 fcolor;
out vec4 finalColor;
void main () {
finalColor = vec4(fcolor);
}
</script>
var gl, canvas;
gl = initWebGLCanvas('canvas');
var program = getProgram('vertexShader', 'fragmentShader', true);
document.getElementById('numOfGrids').oninput = function () {
clearCanvas(gl);
this.value = this.value || 1;
this.value = this.value >= 0 ? this.value : 0;
var gridVertices = createGridVerticesBuffer(this.value || 1);
enableVerticesToPickBinaryDataWithinGPU(program, 'position', 'color');
fetchDataFromGPUAndRenderToCanvas({
positionIndex : gl.positionIndex,
vertices : gridVertices,
positionSize : 3,
stride : 7,
colorIndex : gl.colorIndex,
colorSize : 4,
positionoffset : 0,
colorOffset : 3,
startIndexToDraw : 0,
numOfComponents : 6
}, gl);
};
var r = [1.0, 0.0, 0.0, 1.0];
var g = [0.0, 1.0, 0.0, 1.0];
var b = [0.0, 0.0, 1.0, 1.0];
var z = 0.0;
var createGridVerticesBuffer = (gridsRequested) => {
var vertices = [
1.0, -1.0, z, r[0], r[1], r[2], r[3],
-1.0, 1.0, z, g[0], g[1], g[2], g[3],
-1.0, -1.0, z, b[0], b[1], b[2], b[3],
1.0, -1.0, z, r[0], r[1], r[2], r[3],
-1.0, 1.0, z, g[0], g[1], g[2], g[3],
1.0, 1.0, z, b[0], b[1], b[2], b[3]];
var vertexArray = [];
var factor = 2.0/gridsRequested;
var areaRequired = -1.0 + factor;
vertices[21] = vertices[0] = areaRequired;
vertices[15] = vertices[1] = -areaRequired;
vertices[22] = -areaRequired;
vertices[35] = areaRequired;
vertices[36] = vertices[8];
vertexArray.push(vertices);
var lastVertices = vertices.slice();
var processX = true;
for (var i = 1; i <= gridsRequested * gridsRequested - 1; i++) {
var arr = lastVertices.slice();
if (processX) {
arr[21] = arr[0] = lastVertices[0] + factor;
arr[28] = arr[7] = lastVertices[35];
arr[8] = lastVertices[36];
arr[14] = lastVertices[0];
arr[15] = lastVertices[1];
arr[35] = lastVertices[35] + factor;
} else {
arr[22] = arr[1] = lastVertices[1] - factor;
arr[29] = arr[8] = lastVertices[8] - factor;
arr[15] = lastVertices[15] - factor;
arr[36] = lastVertices[36] - factor;
}
vertexArray.push(arr);
if ((i + 1) % gridsRequested === 0) {
lastVertices = vertexArray[i + 1 - gridsRequested];
processX = false;
} else {
processX = true;
lastVertices = arr;
}
}
return createBuffers(gl, vertexArray, true);
};
You've asked an unanswerable question
How many quads can be drawn?
Which device are we talking about? A Raspberry Pi? An iPhone 2? A Windows PC with an NVIDIA RTX 2080? What size quads? What's your fragment shader doing?
In any case, the issue you're running into is not a vertex shader issue, it's a fragment shader issue. Drawing pixels takes time. Drawing, say, a 1024x1024 quad means drawing 1 million pixels. Drawing one hundred 1024x1024 quads means drawing 100 million pixels. Trying to do that 60 times a second means asking the GPU to draw 6 billion pixels per second.
Change all the quads to be 1x1 pixel and it will run just fine because then you are only asking it to draw 6000 pixels per second.
This is often referred to as being fillrate bound. You just can't draw that many pixels. In your case it's also called having too much overdraw. By drawing overlapping quads you're doing wasteful work, drawing the same pixel more than once. You should try to draw each pixel only once if possible.
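A rough back-of-the-envelope version of that arithmetic (the numbers are purely illustrative):

// Fill-rate estimate: pixels shaded per second for N overlapping quads at 60fps
const quadWidth = 1024, quadHeight = 1024; // size of each quad in pixels
const numQuads = 100;                      // overlapping quads drawn every frame
const fps = 60;

const pixelsPerFrame = quadWidth * quadHeight * numQuads;  // ~100 million
const pixelsPerSecond = pixelsPerFrame * fps;              // ~6 billion
console.log(`shading ~${pixelsPerSecond.toExponential(1)} pixels per second`);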
As a general rule of thumb, I would guess most GPUs can only draw between 2 and 12 fullscreen quads at 60 frames a second. I just pulled that number out of nowhere, but for example an early Android GPU could only draw about 1.5 fullscreen quads at 60fps. For a Raspberry Pi it's probably less than 4. I know a 2015 Intel integrated GPU (not sure which model) could do about 7 at 1280x720. My 2019 MacBook Air can do about 3 (its resolution is 2560x1600, which is roughly 4.4 screens at 1280x720).
As for the crash: the GPU didn't crash, rather the browser or the OS reset the GPU because it was taking too long. Most GPUs (as of 2019) do not multitask the way a CPU does. A CPU can be interrupted and switched to do something else; this is how your PC is able to run lots of apps and services at the same time. Most GPUs can't do this. They can only do one thing at a time, so if you give them 30 seconds of work in a single draw call they will do 30 seconds of work with no interruptions. That's no good for the OS, since it needs the GPU to update windows and run other apps. So the OS (and some browsers) time every draw call. If one draw call takes more than a certain amount of time, they reset the GPU and often kill the program or page that asked the GPU to do something that took too long.
As for how to know what's too much work: you shouldn't be trying to do as much work as possible; rather, you should be trying to do as little as possible. Your app is drawing overlapping quads for no reason.
If you really do need to draw overlapping quads, or do some calculation that takes a long time, then you probably need to find a way to break it up into smaller operations per draw call. Process 5 quads per draw call and use 5 draw calls instead of one with 25. Process more, smaller quads instead of fewer large quads, etc.
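For example, a sketch of splitting one big draw into several smaller draw calls (this assumes the question's interleaved vertex buffer and attributes are already bound and set up; the names are illustrative):

// Draw the grid a few quads at a time instead of all quads in one call
const QUADS_PER_DRAW = 5;
const VERTS_PER_QUAD = 6; // 2 triangles per quad

function drawQuadsInChunks(gl, totalQuads) {
  for (let start = 0; start < totalQuads; start += QUADS_PER_DRAW) {
    const quads = Math.min(QUADS_PER_DRAW, totalQuads - start);
    gl.drawArrays(gl.TRIANGLES, start * VERTS_PER_QUAD, quads * VERTS_PER_QUAD);
  }
}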
Related
I have a cross-platform LibGDX app. This GLSL shader code is used to shift the hue of a particular texture.
It works great on Android and when debugging on Desktop, but on an iPad this is the result (excuse the photos of the screen; it's the easiest way to get data off this device).
Code:
const mat3 rgb2yiq = mat3(0.299, 0.595716, 0.211456, 0.587, -0.274453, -0.522591, 0.114, -0.321263, 0.311135);
const mat3 yiq2rgb = mat3(1.0, 1.0, 1.0, 0.9563, -0.2721, -1.1070, 0.6210, -0.6474, 1.7046);
vec4 outColor = texture2D(u_texture, v_texCoord) * v_color;
float alpha = outColor.a;
// Hue shift
if (u_hueAdjust > 0.0 && u_hueAdjust < 1.0 && alpha > 0.0)
{
vec3 unmultipliedRGB = outColor.rgb / alpha;
vec3 yColor = rgb2yiq * unmultipliedRGB;
float originalHue = atan(yColor.b, yColor.g);
float finalHue = originalHue + u_hueAdjust * 6.28318; //convert 0-1 to radians
float chroma = sqrt(yColor.b * yColor.b + yColor.g * yColor.g);
vec3 yFinalColor = vec3(yColor.r, chroma * cos(finalHue), chroma * sin(finalHue));
outColor.rgb = (yiq2rgb * yFinalColor) * alpha;
}
Obviously there are some really weird artifacts that seem to affect certain areas, in particular black and white colors. In general there is also a subtle color change that isn't attributable to the desired hue-change effect.
Overall this shader is wonky on iOS (but works fine on Android/Desktop), and after playing with it for a while I'm completely out of ideas. Can anyone point me in the right direction?
In the documentation for atan, it says: "The result is undefined if x = 0."
Is it possible that yColor.g is zero for greyscale pixels?
The issue is discussed here: Robust atan(y,x) on GLSL for converting XY coordinate to angle
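One possible workaround, sketched in plain GLSL (safeAtan2 is a made-up helper name, and this is one possible guard rather than the exact fix from the linked answer):

// Sketch: avoid atan(y, x) when x is exactly 0, where the result is undefined
float safeAtan2(float y, float x) {
    // fall back to +/- 90 degrees (or 0 when y is also 0) instead of undefined behaviour
    return (x == 0.0) ? sign(y) * 1.5707963 : atan(y, x);
}

// ...then in the hue-shift code from the question:
// float originalHue = safeAtan2(yColor.b, yColor.g);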
Question
I'm working on porting from OpenGL (OGL) to MetalKit (MTK) on iOS. I'm failing to get identical display in the MetalKit version of the app. I modified the projection matrix to account for differences in Normalized Device Coordinates between the two frameworks, but don't know what else to change to get identical display. Any ideas what else needs to be changed to port from OpenGL to MetalKit?
Projection Matrix Changes so far...
I understand that the Normalized Device Coordinates (NDC) are different in OGL vs MTK:
OGL NDC: -1 < z < 1
MTK NDC: 0 < z < 1
I modified the projection matrix to address the NDC difference, as indicated here. Unfortunately, this modification to the projection matrix doesn't result in identical display to the old OGL code.
I'm struggling to even know what else to try.
Background
For reference, here's some misc background information:
The view matrix is very simple (identity matrix); i.e. camera is at (0, 0, 0) and looking toward (0, 0, -1)
In the legacy OpenGL code, I used GLKMatrix4MakeFrustum to produce the projection matrix, using the screen bounds for left, right, top, bottom, and near=1, far=1000
I stripped the scene down to bare bones while debugging and below are 2 images, the first from legacy OGL code and the second from MTK, both just showing the "ground" plane with a debug texture and a black background.
Any ideas about what else might need to change to get to identical display in MetalKit would be greatly appreciated.
Screenshots
OpenGL (legacy)
MetalKit
Edit 1
I tried to extract code relevant to calculation and use of the projection matrix:
float aspectRatio = 1.777; // iPhone 8 device
float top = 1;
float bottom = -1;
float left = -aspectRatio;
float right = aspectRatio;
float RmL = right - left;
float TmB = top - bottom;
float nearZ = 1;
float farZ = 1000;
GLKMatrix4 projMatrix = { 2 * nearZ / RmL, 0, 0, 0,
0, 2 * nearZ / TmB, 0, 0,
0, 0, -farZ / (farZ - nearZ), -1,
0, 0, -farZ * nearZ / (farZ - nearZ), 0 };
GLKMatrix4 viewMatrix = ...; // Identity matrix: camera at origin, looking at (0, 0, -1), yUp=(0, 1, 0);
GLKMatrix4 modelMatrix = ...; // Different for various models, but even when this is the identity matrix in old/new code the visual output is different
GLKMatrix4 mvpMatrix = GLKMatrix4Multiply(projMatrix, GLKMatrix4Multiply(viewMatrix, modelMatrix));
...
GLKMatrix4 x = mvpMatrix; // rename for brevity below
float mvpMatrixArray[16] = {x.m00, x.m01, x.m02, x.m03, x.m10, x.m11, x.m12, x.m13, x.m20, x.m21, x.m22, x.m23, x.m30, x.m31, x.m32, x.m33};
// making MVP matrix available to vertex shader
[renderCommandEncoder setVertexBytes:&mvpMatrixArray
length:16 * sizeof(float)
atIndex:1]; // vertex data is at "0"
[renderCommandEncoder setVertexBuffer:vertexBuffer
offset:0
atIndex:0];
...
[renderCommandEncoder drawPrimitives:MTLPrimitiveTypeTriangleStrip
vertexStart:0
vertexCount:4];
Sadly this issue ended up being due to a bug in the vertex shader that was pushing all geometry +1 on the Z axis, leading to the visual differences.
For any future OpenGL-to-Metal porters: the projection matrix changes above, accounting for the differences in normalized device coordinates, are enough.
Without seeing the code it's hard to say what the problem is. One of the most common issues is a wrongly configured viewport:
// Set the region of the drawable to draw into.
[renderEncoder setViewport:(MTLViewport){0.0, 0.0, _viewportSize.x, _viewportSize.y, 0.0, 1.0 }];
The default values for the viewport are:
originX = 0.0
originY = 0.0
width = w
height = h
znear = 0.0
zfar = 1.0
(In Metal, znear and zfar correspond to MinZ and MaxZ in the documentation quoted below.)
MinZ and MaxZ indicate the depth-ranges into which the scene will be
rendered and are not used for clipping. Most applications will set
these members to 0.0 and 1.0 to enable the system to render to the
entire range of depth values in the depth buffer. In some cases, you
can achieve special effects by using other depth ranges. For instance,
to render a heads-up display in a game, you can set both values to 0.0
to force the system to render objects in a scene in the foreground, or
you might set them both to 1.0 to render an object that should always
be in the background.
I queried the point size range with gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE) and got [1, 1024]. This means that when using a single point to cover a texture (so that it triggers the fragment shader for all the pixels spanned by the pointSize),
at best I cannot render images larger than 1024x1024 with this method, correct?
I guess I have to submit 2 triangles (6 vertices) so that they cover all of clip space, and then gl.viewport(x, y, width, height) will map this entire area to the output texture (framebuffer object or canvas)?
Is there any other way (maybe something new in WebGL2) other than submitting vertex attributes?
Correct: the largest area you can cover with a single point is whatever is returned by gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE).
The spec does not require any size larger than 1. The fact that your GPU/driver/browser returned 1024 does not mean that your users' machines will also return 1024.
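For example, a quick (illustrative) check of what the current device actually supports:

// Sketch: query the supported point-size range on the current device/driver
const range = gl.getParameter(gl.ALIASED_POINT_SIZE_RANGE); // Float32Array: [min, max]
const maxPointSize = range[1]; // the spec only guarantees this is >= 1
console.log('max point size on this device:', maxPointSize);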
Note: I'm answering based on your history of questions.
The normal thing to do in WebGL, for 99% of all cases, is to submit vertices. Want to draw a quad? Submit 4 vertices and 6 indices, or 6 vertices. Want to draw a triangle? Submit 3 vertices. Want to draw a circle? Submit the vertices for a circle. Want to draw a car? Submit the vertices for a car, or more likely submit the vertices for a wheel, draw 4 wheels with those vertices, then submit the vertices for the other parts of the car and draw each part.
You multiply those vertices by some matrices to move, scale, rotate, and project them into 2D or 3D space. All your favorite games do this. The canvas 2D api does this via OpenGL ES internally. Chrome itself does this to render all the parts of this webpage. That's the norm. Anything else is an exception and will likely lead to limitations.
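For illustration, here is a rough sketch of the "submit 4 vertices and 6 indices" case for a quad (it assumes a gl context and a compiled program with a position attribute and a matrix uniform already exist; the names are illustrative):

// A unit quad as 4 vertices + 6 indices (two triangles), the standard way to draw a quad
const positions = new Float32Array([
  -1, -1,   // bottom left
   1, -1,   // bottom right
  -1,  1,   // top left
   1,  1,   // top right
]);
const indices = new Uint16Array([0, 1, 2,  2, 1, 3]);

gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
// ...set up the position attribute and a matrix uniform, then:
gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);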
For fun, in WebGL2, there are some other things you can do. They are not the normal thing to do and they are not recommended to actually solve real world problems. They can be fun though just for the challenge.
In WebGL2 there is a built-in variable in the vertex shader called gl_VertexID, which is the index of the vertex currently being processed. You can use it with clever math to generate vertices in the vertex shader with no other data.
Here's some code that draws a quad that covers the canvas
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
const vs = `#version 300 es
void main() {
int x = gl_VertexID % 2;
int y = (gl_VertexID / 2 + gl_VertexID / 3) % 2;
gl_Position = vec4(ivec2(x, y) * 2 - 1, 0, 1);
}
`;
const fs = `#version 300 es
precision mediump float;
out vec4 outColor;
void main() {
outColor = vec4(1, 0, 0, 1);
}
`;
// compile shaders, link program
const prg = twgl.createProgram(gl, [vs, fs]);
gl.useProgram(prg);
const count = 6;
gl.drawArrays(gl.TRIANGLES, 0, count);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
And here's an example that draws a circle:
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
const vs = `#version 300 es
#define PI radians(180.0)
void main() {
const int TRIANGLES_AROUND_CIRCLE = 100;
int triangleId = gl_VertexID / 3;
int pointId = gl_VertexID % 3;
int pointIdOffset = pointId % 2;
float angle = float((triangleId + pointIdOffset) * 2) * PI /
float(TRIANGLES_AROUND_CIRCLE);
float radius = 1. - step(1.5, float(pointId));
float x = sin(angle) * radius;
float y = cos(angle) * radius;
gl_Position = vec4(x, y, 0, 1);
}
`;
const fs = `#version 300 es
precision mediump float;
out vec4 outColor;
void main() {
outColor = vec4(1, 0, 0, 1);
}
`;
// compile shaders, link program
const prg = twgl.createProgram(gl, [vs, fs]);
gl.useProgram(prg);
const count = 300; // 100 triangles, 3 points each
gl.drawArrays(gl.TRIANGLES, 0, count);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
There is an entire website based on this idea. The site is based on the puzzle of making pretty pictures given only an id for each vertex. It's the vertex shader equivalent of shadertoy.com. On Shadertoy.com the puzzle is basically given only gl_FragCoord as input to a fragment shader write a function to draw something interesting.
Both sites are toys/puzzles. Doing things this way is not recommended for solving real issues like drawing a 3D world in a game, doing image processing, or rendering the contents of a browser window. They are cute puzzles about drawing something interesting given only minimal inputs.
Why is this technique not advised? The most obvious reason is that it's hard-coded and inflexible, whereas the standard techniques are super flexible. For example, above, drawing a fullscreen quad required one shader and drawing a circle required a different shader, whereas standard vertex-buffer-based attributes multiplied by matrices can be used for any shape provided, 2D or 3D. Not only any shape: with a single matrix multiply in the shader those shapes can be translated, rotated, scaled, projected into 3D, and their rotation centers and scale centers can be set independently, etc.
Note: you are free to do whatever you want. If you like these techniques then by all means use them. The reason I'm trying to steer you away from them is that, based on your previous questions, you're new to WebGL, and I feel like you'll end up making WebGL much harder for yourself if you use obscure and hard-coded techniques like these instead of the traditional, more common, flexible techniques that experienced devs use to get real work done. But again, it's up to you; do whatever you want.
I'm trying to draw a lot of cubes in webgl using instanced rendering (ANGLE_instanced_arrays).
However I can't seem to wrap my head around how to set up the divisors. I have the following buffers:
36 vertices (6 faces made from 2 triangles using 3 vertices each).
6 colors per cube (1 for each face).
1 translate per cube.
To reuse the vertices for each cube, I've set their divisor to 0.
For color I've set the divisor to 2 (i.e. use the same color for two triangles, one face).
For translation I've set the divisor to 12 (i.e. the same translation for 6 faces * 2 triangles per face).
For rendering I'm calling
ext_angle.drawArraysInstancedANGLE(gl.TRIANGLES, 0, 36, num_cubes);
This however does not seem to render my cubes.
Using a translation divisor of 1 does render, but then the colors are way off, with each cube being a single solid color.
I'm thinking it's because my instances are now the full cube, but if I limit the count (i.e. vertices per instance), I do not seem to get all the way through the vertex buffer; effectively I'm just rendering one triangle per cube then.
How would I go about rendering a lot of cubes like this; with varying colored faces?
Instancing works like this:
Eventually you are going to call
ext.drawArraysInstancedANGLE(mode, first, numVertices, numInstances);
So let's say you're drawing instances of a cube. One cube has 36 vertices (6 per face * 6 faces). So
numVertices = 36
And lets say you want to draw 100 cubes so
numInstances = 100
Let's say you have a vertex shader like this:
attribute vec4 position;
uniform mat4 matrix;
void main() {
gl_Position = matrix * position;
}
If you did nothing else and just called
var mode = gl.TRIANGLES;
var first = 0;
var numVertices = 36
var numInstances = 100
ext.drawArraysInstancedANGLE(mode, first, numVertices, numInstances);
It would just draw the same cube in the same exact place 100 times
Next up you want to give each cube a different translation so you update your shader to this
attribute vec4 position;
attribute vec3 translation;
uniform mat4 matrix;
void main() {
gl_Position = matrix * (position + vec4(translation, 0));
}
You now make a buffer and put one translation per cube, then you set up the attribute as normal
gl.vertexAttribPointer(translationLocation, 3, gl.FLOAT, false, 0, 0)
But you also set a divisor
ext.vertexAttribDivisorANGLE(translationLocation, 1);
That 1 says 'only advance to the next value in the translation buffer once per instance'
Now you want to have a different color per face per cube, and you only want one color per face in the data (you don't want to repeat colors). There is no setting that will do that. Since your numVertices = 36, you can only choose to advance every vertex (divisor = 0) or once every multiple of 36 vertices (i.e. multiples of numVertices).
So you say, what if we instance faces instead of cubes? Well, now you've got the opposite problem. Put one color per face: numVertices = 6, numInstances = 600 (100 cubes * 6 faces per cube). You set the color's divisor to 1 to advance the color once per face. You can set the translation divisor to 6 to advance the translation only once every 6 faces (every 6 instances). But now you no longer have a cube, you only have a single face. In other words, you're going to draw 600 faces all facing the same way, with every 6 of them translated to the same spot.
To get a cube back you'd have to add something to orient the face instances in 6 directions.
OK, so you fill a buffer with 6 orientations. That won't work either. You can't set the divisor to anything that will use those 6 orientations, advancing only once per face but then resetting after 6 faces for the next cube. There's only one divisor setting per attribute. You can set it to 6 to repeat per face or 36 to repeat per cube, but you want it to advance per face and reset back per cube. No such option exists.
What you can do is draw the cubes with 6 draw calls, one per face direction. In other words, you're going to draw all the left faces, then all the right faces, then all the top faces, etc.
To do that we make just 1 face, 1 translation per cube, 1 color per face per cube. We set the divisor on the translation and the color to 1.
Then we draw 6 times, once for each face direction. The difference between each draw is that we pass in an orientation for the face, change the attribute offset for the color attribute, and set its stride to 6 colors * 4 floats (6 * 4 * 4 bytes).
var vs = `
attribute vec4 position;
attribute vec3 translation;
attribute vec4 color;
uniform mat4 viewProjectionMatrix;
uniform mat4 localMatrix;
varying vec4 v_color;
void main() {
vec4 localPosition = localMatrix * position + vec4(translation, 0);
gl_Position = viewProjectionMatrix * localPosition;
v_color = color;
}
`;
var fs = `
precision mediump float;
varying vec4 v_color;
void main() {
gl_FragColor = v_color;
}
`;
var m4 = twgl.m4;
var gl = document.querySelector("canvas").getContext("webgl");
var ext = gl.getExtension("ANGLE_instanced_arrays");
if (!ext) {
alert("need ANGLE_instanced_arrays");
}
var program = twgl.createProgramFromSources(gl, [vs, fs]);
var positionLocation = gl.getAttribLocation(program, "position");
var translationLocation = gl.getAttribLocation(program, "translation");
var colorLocation = gl.getAttribLocation(program, "color");
var localMatrixLocation = gl.getUniformLocation(program, "localMatrix");
var viewProjectionMatrixLocation = gl.getUniformLocation(
program,
"viewProjectionMatrix");
function r(min, max) {
if (max === undefined) {
max = min;
min = 0;
}
return Math.random() * (max - min) + min;
}
function rp() {
return r(-20, 20);
}
// make translations and colors, colors are separated by face
var numCubes = 1000;
var colors = [];
var translations = [];
for (var cube = 0; cube < numCubes; ++cube) {
translations.push(rp(), rp(), rp());
// pick a random color;
var color = [r(1), r(1), r(1), 1];
// now pick 6 similar colors, one for each face of the cube
// that way we can tell if the colors are correctly assigned
// to each cube's faces.
var channel = r(3) | 0; // pick a channel 0 - 2 to randomly modify
for (var face = 0; face < 6; ++face) {
color[channel] = r(.7, 1);
colors.push.apply(colors, color);
}
}
var buffers = twgl.createBuffersFromArrays(gl, {
position: [ // one face
-1, -1, -1,
-1, 1, -1,
1, -1, -1,
1, -1, -1,
-1, 1, -1,
1, 1, -1,
],
color: colors,
translation: translations,
});
var faceMatrices = [
m4.identity(),
m4.rotationX(Math.PI / 2),
m4.rotationX(Math.PI / -2),
m4.rotationY(Math.PI / 2),
m4.rotationY(Math.PI / -2),
m4.rotationY(Math.PI),
];
function render(time) {
time *= 0.001;
twgl.resizeCanvasToDisplaySize(gl.canvas);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.enable(gl.DEPTH_TEST);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.bindBuffer(gl.ARRAY_BUFFER, buffers.position);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, buffers.translation);
gl.enableVertexAttribArray(translationLocation);
gl.vertexAttribPointer(translationLocation, 3, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, buffers.color);
gl.enableVertexAttribArray(colorLocation);
ext.vertexAttribDivisorANGLE(positionLocation, 0);
ext.vertexAttribDivisorANGLE(translationLocation, 1);
ext.vertexAttribDivisorANGLE(colorLocation, 1);
gl.useProgram(program);
var fov = 60;
var aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
var projection = m4.perspective(fov * Math.PI / 180, aspect, 0.5, 100);
var radius = 30;
var eye = [
Math.cos(time) * radius,
Math.sin(time * 0.3) * radius,
Math.sin(time) * radius,
];
var target = [0, 0, 0];
var up = [0, 1, 0];
var camera = m4.lookAt(eye, target, up);
var view = m4.inverse(camera);
var viewProjection = m4.multiply(projection, view);
gl.uniformMatrix4fv(viewProjectionMatrixLocation, false, viewProjection);
// 6 faces * 4 floats per color * 4 bytes per float
var stride = 6 * 4 * 4;
var numVertices = 6;
faceMatrices.forEach(function(faceMatrix, ndx) {
var offset = ndx * 4 * 4; // 4 floats per color * 4 bytes per float
gl.vertexAttribPointer(
colorLocation, 4, gl.FLOAT, false, stride, offset);
gl.uniformMatrix4fv(localMatrixLocation, false, faceMatrix);
ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, numVertices, numCubes);
});
requestAnimationFrame(render);
}
requestAnimationFrame(render);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://twgljs.org/dist/2.x/twgl-full.min.js"></script>
<canvas></canvas>
I'm making a drawing application using Swift (based on GLPaint) and OpenGL. I would now like to improve the curve so that its width varies with stroke speed (e.g. thicker if drawing fast).
However, since my knowledge of OpenGL is quite limited, I need some guidance. What I want to do is vary the size of my texture/point for each CGPoint I calculate and add to the screen. Is that possible?
func addQuadBezier(var from:CGPoint, var ctrl:CGPoint, var to:CGPoint, startTime:CGFloat, endTime:CGFloat) {
scalePoints(from: from, ctrl: ctrl, to: to)
let pointCount = calculatePointsNeeded(from: from, to: to, min: 16.0, max: 256.0)
var vertexBuffer: [GLfloat] = [GLfloat](count: Int(pointCount), repeatedValue:0.0)
var t : CGFloat = startTime + 0.0002
for i in 0..<Int(pointCount) {
let p = calculatePoint(from:from, ctrl: ctrl, to: to)
vertexBuffer.insert(p.x.f, atIndex: i*2)
vertexBuffer.insert(p.y.f, atIndex: i*2+1)
t += (CGFloat(1)/CGFloat(pointCount))
}
glBufferData(GL_ARRAY_BUFFER.ui, Int(pointCount)*2*sizeof(GLfloat), vertexBuffer, GL_STATIC_DRAW.ui)
glDrawArrays(GL_POINTS.ui, 0, Int(pointCount).i)
}
func render()
{
context.presentRenderbuffer(GL_RENDERBUFFER.l)
}
where render() is called every 1/60 s.
shader
attribute vec4 inVertex;
uniform mat4 MVP;
uniform float pointSize;
uniform lowp vec4 vertexColor;
varying lowp vec4 color;
void main()
{
gl_Position = MVP * inVertex;
gl_PointSize = pointSize;
color = vertexColor;
}
Thanks in advance!
In your vertex shader, set gl_PointSize to the width you want. That measurement is in framebuffer pixels, so if the size of your framebuffer changes with the device's scale factor, you'll need to adjust your point size accordingly.
If you can find a way to control the line width in the vertex shader, that would most likely be the best solution. Not only could different lines have different widths, but a single line could have a varying (interpolated) width between its points. I am not sure you will be able to achieve this on your platform, though.
If you do find a way, you would add the point size to your buffer and feed it to a new attribute in the vertex shader, for example:
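A sketch of the question's vertex shader with the size as a per-vertex attribute instead of a uniform (inPointSize is a made-up attribute name that you would have to feed from your buffer):

attribute vec4 inVertex;
attribute float inPointSize;     // new: one size per point, stored in the vertex buffer
uniform mat4 MVP;
uniform lowp vec4 vertexColor;
varying lowp vec4 color;
void main()
{
    gl_Position = MVP * inVertex;
    gl_PointSize = inPointSize;  // varies per point instead of one uniform value
    color = vertexColor;
}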
If not, you will need to use triangles to draw the line, which is generally the better practice anyway. To define vertices between points A and B you can get the direction W = (B-A).normalized() and the normal N = (W.y, -W.x). Then, with k = lineWidth/2.0, the 4 positions are t1 = A + N*k, t2 = A - N*k, t3 = B + N*k, t4 = B - N*k. This is what you add to your buffer and draw as a triangle strip or triangles, depending on what you are looking for.
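A rough Swift sketch of that math (the function name and types are illustrative, not from the GLPaint code):

import CoreGraphics

// Given a segment A -> B and a line width, return the 4 corners t1..t4
// described above, ready to be drawn as a triangle strip.
func quadCorners(from a: CGPoint, to b: CGPoint, lineWidth: CGFloat) -> [CGPoint] {
    let dx = b.x - a.x
    let dy = b.y - a.y
    let len = max((dx * dx + dy * dy).squareRoot(), 0.0001) // avoid dividing by zero
    let n = CGPoint(x: dy / len, y: -dx / len)              // unit normal N = (W.y, -W.x)
    let k = lineWidth / 2.0
    return [CGPoint(x: a.x + n.x * k, y: a.y + n.y * k),    // t1
            CGPoint(x: a.x - n.x * k, y: a.y - n.y * k),    // t2
            CGPoint(x: b.x + n.x * k, y: b.y + n.y * k),    // t3
            CGPoint(x: b.x - n.x * k, y: b.y - n.y * k)]    // t4
}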