I'd like to know if there are any async calls in WebGL that one could take advantage of.
I have looked into the v1 and v2 specs and they don't mention anything. In v2 there is a WebGL query mechanism, which I don't think is what I'm looking for.
A search on the web didn't come up with anything definitive. There is this example, and it is not clear how the sync and async versions differ: http://toji.github.io/shader-perf/
Ultimately I'd like to be able to do some or all of these asynchronously:
readPixels
texSubImage2D and texImage2D
Shader compilation
program linking
draw???
There is a glFinish operation, and the documentation for it says it "does not return until the effects of all previously called GL commands are complete". To me this implies that there are asynchronous operations which can be waited on by calling finish()?
And some posts on the web suggest that calling getError() also forces synchronization and is not something you want to do after every call.
It depends on your definition of async.
In Chrome (Firefox might also do this now; I'm not sure), all GPU code runs in a separate process from JavaScript. That means your commands are running asynchronously. Even OpenGL itself is designed to be asynchronous: the functions (WebGL/OpenGL) insert commands into a command buffer, which is executed by some other thread/process. You tell OpenGL "hey, I have new commands for you to execute!" by calling gl.flush, and it executes those commands asynchronously. If you don't call gl.flush it will be called for you periodically when too many commands have been issued. It will also be called when the current JavaScript event exits, assuming you called any rendering command on the canvas (gl.drawXXX, gl.clear).
In this sense everything about WebGL is async. If you don't query something (gl.getXXX, gl.readXXX), then the work is being handled (drawn) out of sync with your JavaScript. WebGL is, after all, giving you access to a GPU running separately from your CPU.
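For example (a sketch, assuming an existing WebGL context gl that has already been drawn to), a read-back is one of the few points that forces synchronization:

// readPixels can't return until every previously submitted command that
// affects the framebuffer has actually finished executing on the GPU.
const pixel = new Uint8Array(4);
gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);  // blocks here
console.log('pixel at (0, 0):', pixel);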
Knowing this, one way to take advantage of it in Chrome is to compile shaders asynchronously by submitting the work and not querying anything immediately:

const program = gl.createProgram();
for (const [type, source] of [[gl.VERTEX_SHADER, vs], [gl.FRAGMENT_SHADER, fs]]) {
  const s = gl.createShader(type);
  gl.shaderSource(s, source);
  gl.compileShader(s);
  gl.attachShader(program, s);
}
gl.linkProgram(program);
gl.flush();  // tell the GPU process to start executing
The GPU process will now be compiling your shaders. So if you wait, say, 250 ms before you start asking whether it succeeded and querying locations, and compiling and linking the shaders took less than 250 ms, it all happened asynchronously.
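A sketch of that pattern, reusing the program variable from the snippet above:

// Check the results later; if compilation finished in the meantime,
// none of these queries will stall.
setTimeout(() => {
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    console.error(gl.getProgramInfoLog(program));
    return;
  }
  // only query locations after linking, which is when they exist
  const positionLoc = gl.getAttribLocation(program, 'position');
  console.log('position location:', positionLoc);
}, 250);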
In WebGL2 there is at least one more clearly async operation, occlusion queries, with which WebGL2 can tell you whether any pixels were drawn for a group of draw calls. If none were drawn, your draws were occluded. To get the answer you periodically poll to see if it is ready. Typically you check the next frame; in fact the WebGL spec requires the answer not to be available until the next frame.
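A minimal sketch of the polling pattern, assuming a WebGL2 context gl and some existing drawScene() function:

const query = gl.createQuery();
gl.beginQuery(gl.ANY_SAMPLES_PASSED, query);
drawScene();  // the group of draw calls being measured
gl.endQuery(gl.ANY_SAMPLES_PASSED);

function checkQuery() {
  if (gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE)) {
    const anySamplesPassed = gl.getQueryParameter(query, gl.QUERY_RESULT);
    console.log(anySamplesPassed ? 'visible' : 'occluded');
  } else {
    requestAnimationFrame(checkQuery);  // not ready yet, poll next frame
  }
}
requestAnimationFrame(checkQuery);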
Otherwise, at the moment (August 2018), there are no explicitly async APIs.
Update
HankMoody brought up in comments that texImage2D is sync. Again, it depends on your definition of async. It takes time to add commands and their data. A command like gl.enable(gl.DEPTH_TEST) only has to add 2 to 8 bytes. A command like gl.texImage2D(..., width = 1024, height = 1024, RGBA, UNSIGNED_BYTE) has to add 4 megabytes! Once that 4 meg is uploaded the rest is async, but the uploading takes time. It's the same for both commands; it's just that adding 2 to 8 bytes takes a lot less time than adding 4 meg.
To be more clear: after that 4 meg is uploaded, many other things happen asynchronously. The driver is called with the 4 meg. The driver copies that 4 meg. The driver schedules that 4 meg to be used some time later, as it can't upload the data immediately if the texture is already in use. Either that, or it does upload the data immediately, just to a new area, and then swaps what the texture points to just before a draw call that actually uses the new data. Other drivers just copy the data, store it, and wait until the texture is used in a draw call to actually update the texture. This is because texImage2D has crazy semantics where you can upload different-sized mips in any order, so the driver can't know what's actually needed in GPU memory until draw time, since it has no idea in what order you're going to call texImage2D. Everything mentioned in this paragraph happens asynchronously.
But that does bring up some more info.
gl.texImage2D and related commands have to do a TON of work. For one, they have to honor UNPACK_FLIP_Y_WEBGL and UNPACK_PREMULTIPLY_ALPHA_WEBGL, so they may need to make a copy of multiple megs of data to flip it or premultiply it. Second, if you pass them a video, canvas, or image they may have to do heavy conversions or even reparse the image from source, especially in light of UNPACK_COLORSPACE_CONVERSION_WEBGL. Whether this happens in some async-like way or not is up to the browser. Since you don't have direct access to the image/video/canvas, it would be possible for the browser to do all of this async, but one way or another all that work has to happen.
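For illustration (a sketch, assuming gl and an already-loaded image), these are the settings in question; each one can force the browser to do extra CPU-side work per upload:

gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);             // may force a flipped copy
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true);  // may force a premultiplied copy
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);  // gl.NONE skips the browser's colorspace conversion
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);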
To make much of that work async, the ImageBitmap API was added. Like most web APIs it's under-specified, but the idea is you first do a fetch (which is async). You then request to create an ImageBitmap, giving it options for color conversion, flipping, and pre-multiplied alpha. This also happens async. You then pass the result to gl.texImage2D, the hope being that the browser was able to do all the heavy parts before it got to this last step.
Example:
// note: mode: 'cors' is because we are loading
// from a different domain
async function main() {
const response = await fetch('https://i.imgur.com/TSiyiJv.jpg', {mode: 'cors'})
if (!response.ok) {
return console.error('response not ok?');
}
const blob = await response.blob();
const bitmap = await createImageBitmap(blob, {
premultiplyAlpha: 'none',
colorSpaceConversion: 'none',
});
const gl = document.querySelector("canvas").getContext("webgl");
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
{
const level = 0;
const internalFormat = gl.RGBA;
const format = gl.RGBA;
const type = gl.UNSIGNED_BYTE;
gl.texImage2D(gl.TEXTURE_2D, level, internalFormat,
format, type, bitmap);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
}
const vs = `
uniform mat4 u_worldViewProjection;
attribute vec4 position;
attribute vec2 texcoord;
varying vec2 v_texCoord;
void main() {
v_texCoord = texcoord;
gl_Position = u_worldViewProjection * position;
}
`;
const fs = `
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_tex;
void main() {
gl_FragColor = texture2D(u_tex, v_texCoord);
}
`;
const m4 = twgl.m4;
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
const bufferInfo = twgl.primitives.createCubeBufferInfo(gl, 2);
const uniforms = {
u_tex: tex,
};
function render(time) {
time *= 0.001;
twgl.resizeCanvasToDisplaySize(gl.canvas);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.enable(gl.DEPTH_TEST);
const fov = 30 * Math.PI / 180;
const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
const zNear = 0.5;
const zFar = 10;
const projection = m4.perspective(fov, aspect, zNear, zFar);
const eye = [1, 4, -6];
const target = [0, 0, 0];
const up = [0, 1, 0];
const camera = m4.lookAt(eye, target, up);
const view = m4.inverse(camera);
const viewProjection = m4.multiply(projection, view);
const world = m4.rotationY(time);
uniforms.u_worldViewProjection = m4.multiply(viewProjection, world);
gl.useProgram(programInfo.program);
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
twgl.setUniforms(programInfo, uniforms);
gl.drawElements(gl.TRIANGLES, bufferInfo.numElements, gl.UNSIGNED_SHORT, 0);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
}
main();
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
Unfortunately this only works in Chrome as of August 2018. The Firefox bug is here. I don't know about other browsers.
I've created a subclass of GLKViewController to render some waveforms using GLKBaseEffect which all works fine.
I want to use my own vertex and fragment shaders, so I used the boilerplate code from the default OpenGL Game project. Both shaders compile, link, and are usable. I can alter the color in the vertex shader without issue.
The problem I'm having is passing my own uniforms into the fragment shader. I can pass a uniform to the vertex shader and use it there, but when I declare the same variable in the fragment shader, the shader fails to compile.
Code:
// Vertex shader
attribute vec4 position;
uniform float amplitudeMax;
void main() {
gl_Position = position + amplitudeMax; // This works and offsets the drawing
}
// Fragment shader
uniform float amplitudeMax; // Fails to compile
//uniform float amplitudeMax; // Compiles fine
void main()
{
gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);
}
If curious, here is how I set up the uniforms and shaders
func loadShaders() -> Bool {
// Copied from default OpenGL Game project
// link program ...
uniforms[UNIFORM_AMPLITUDE_MAX] = glGetUniformLocation(program, "amplitudeMax")
// release shaders ...
}
// The draw loop
func render() {
configureShaders()
// ... draw
}
func configureShaders() {
glUseProgram(program)
let max = AudioManager.sharedInstance.bufferManager.currentPower
glUniform1f(uniforms[UNIFORM_AMPLITUDE_MAX], max)
}
I had another idea of passing the value from the vertex shader to the fragment shader using a varying float. Again, I can declare and use the variable in the vertex shader, but declaring it in the fragment shader causes it to fail compiling.
Edit: --------------------------------------------------
Through trial and error I found that if (in my fragment shader) I qualify my uniform declaration with a precision, it works for both uniforms and varyings (I can pass from the vertex to the fragment shader). This is because fragment shaders in GLSL ES have no default precision for float: every float declaration must either carry its own precision qualifier or be covered by a default precision statement such as precision mediump float;.
uniform lowp float amplitudeMax;
void main() {
gl_FragColor = vec4(amplitudeMax, 0.0, 1.0, 1.0);
}
My code works, but I am wondering why!
I have 2 textures:
precision mediump float;   // added: required declarations omitted in the question
varying vec2 vTexCoord;    // added: required declarations omitted in the question
uniform sampler2D uSampler0;
uniform sampler2D uSampler1;
void main() {
    vec4 color0 = texture2D(uSampler0, vTexCoord);
    vec4 color1 = texture2D(uSampler1, vTexCoord);
    gl_FragColor = color0 * color1;
}
and my js code
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D,my_texture_ZERO);
gl.uniform1i(program.uSampler0,0);
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D,my_texture_ONE);
gl.uniform1i(program.uSampler1);
// uncomment one of the 3, it works.
// gl.bindTexture(gl.TEXTURE_2D, my_texture_ZERO);
// gl.bindTexture(gl.TEXTURE_2D, my_texture_ONE);
// gl.bindTexture(gl.TEXTURE_2D, texture_FOR_PURPOSE_ONLY);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
Before gl.drawArrays, I have tested the 3 bindings; each one works!
So I do not understand the real underlying pipeline.
Thanks for any explanations.
This line is invalid:
gl.uniform1i(program.uSampler1);
You're not passing a value to the sampler.
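Presumably the intent was to pass unit 1, matching the gl.TEXTURE1 binding just above it:

gl.uniform1i(program.uSampler1, 1);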
The way WebGL texture units work is that they are global state inside WebGL.
gl.activeTexture sets the texture unit that all other texture commands affect. For each texture unit there are 2 bind points, TEXTURE_2D and TEXTURE_CUBE_MAP.
You can think of it like this
gl = {
activeTextureUnit: 0,
textureUnits: [
{ TEXTURE_2D: null, TEXTURE_CUBE_MAP: null, },
{ TEXTURE_2D: null, TEXTURE_CUBE_MAP: null, },
{ TEXTURE_2D: null, TEXTURE_CUBE_MAP: null, },
...
],
};
gl.activeTexture just does this
gl.activeTexture = function(unit) {
gl.activeTextureUnit = unit - gl.TEXTURE0;
};
gl.bindTexture does this
gl.bindTexture = function(bindPoint, texture) {
gl.textureUnits[gl.activeTextureUnit][bindPoint] = texture;
};
gl.texImage2D and gl.texParameteri look up which texture to work with like this:
gl.texImage2D = function(bindPoint, .....) {
  var texture = gl.textureUnits[gl.activeTextureUnit][bindPoint];
  // now do something with texture
};
In other words, inside WebGL there is a global array of texture units. gl.activeTexture and gl.bindTexture manipulate that array.
gl.texXXX manipulate the textures themselves but they reference the textures indirectly through that array.
gl.uniform1i(someSamplerLocation, unitNumber) sets the shader's uniform to look at a particular index in that array of texture units.
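Applied to the question's code, that means the sampler uniforms get unit numbers, not texture objects (a sketch reusing the question's variable names):

gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, my_texture_ZERO);   // fills unit 0's TEXTURE_2D slot
gl.uniform1i(program.uSampler0, 0);               // uSampler0 reads from unit 0

gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, my_texture_ONE);    // fills unit 1's TEXTURE_2D slot
gl.uniform1i(program.uSampler1, 1);               // uSampler1 reads from unit 1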
It's working correctly because the presented code sends the appropriate uniforms for the samplers.
The first texture was set to unit 0 by calling glActiveTexture(GL_TEXTURE0) and was bound afterward. Then a switch was made to unit 1.
At that point there were two separate textures bound, one in each unit.
At the end these unit numbers were passed as the uniforms for the samplers, which is how you indicate which texture a sampler should read: in this case passing 0, corresponding to the GL_TEXTURE0 unit, to the first uniform, and analogously for the second uniform.
Probably even without uncommenting those lines, things should work.
I'm writing an iOS app that allows free-style drawing (using a finger) and drawing images on screen. I use OpenGL ES to implement it. I have 2 functions: one draws free style, one draws a texture.
--- Code drawing free style
- (void)drawFreeStyle:(NSMutableArray *)pointArray {
//Prepare vertex data
.....
// Load data to the Vertex Buffer Object
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, vertexCount*2*sizeof(GLfloat), vertexBuffer, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 0, 0);
GLuint a_ver_flag_drawing_type = glGetAttribLocation(program[PROGRAM_POINT].id, "a_drawingType");
glVertexAttrib1f(a_ver_flag_drawing_type, 0.0f);
GLuint u_fra_flag_drawing_type = glGetUniformLocation(program[PROGRAM_POINT].id, "v_drawing_type");
glUniform1f(u_fra_flag_drawing_type, 0.0);
glUseProgram(program[PROGRAM_POINT].id);
glDrawArrays(GL_POINTS, 0, (int)vertexCount);
// Display the buffer
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
}
--- Code drawing texture
- (void)drawTexture:(UIImage *)image atRect:(CGRect)rect {
GLuint a_ver_flag_drawing_type = glGetAttribLocation(program[PROGRAM_POINT].id, "a_drawingType");
GLuint u_fra_flag_drawing_type = glGetUniformLocation(program[PROGRAM_POINT].id, "v_drawing_type");
GLuint a_position_location = glGetAttribLocation(program[PROGRAM_POINT].id, "a_Position");
GLuint a_texture_coordinates_location = glGetAttribLocation(program[PROGRAM_POINT].id, "a_TextureCoordinates");
GLuint u_texture_unit_location = glGetUniformLocation(program[PROGRAM_POINT].id, "u_TextureUnit");
glUseProgram(PROGRAM_POINT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texName);
glUniform1i(u_texture_unit_location, 0);
glUniform1f(u_fra_flag_drawing_type, 1.0);
const float textrect[] = {-1.0f, -1.0f, 0.0f, 0.0f,
-1.0f, 1.0f, 0.0f, 1.0f,
1.0f, -1.0f, 1.0f, 0.0f,
1.0f, 1.0f, 1.0f, 1.0f};
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, sizeof(textrect), textrect, GL_STATIC_DRAW);
glVertexAttrib1f(a_ver_flag_drawing_type, 1.0f);
glVertexAttribPointer(a_position_location, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(0));
glVertexAttribPointer(a_texture_coordinates_location, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));
glEnableVertexAttribArray(a_ver_flag_drawing_type);
glEnableVertexAttribArray(a_position_location);
glEnableVertexAttribArray(a_texture_coordinates_location);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
}
Notice the 2 variables a_ver_flag_drawing_type (attribute) and u_fra_flag_drawing_type (uniform). They're used to set flags in the vertex shader and fragment shader to determine whether to draw free style or a texture.
--- Vertex shader
//Flag
attribute lowp float a_drawingType;
//For drawing
attribute vec4 inVertex;
uniform mat4 MVP;
uniform float pointSize;
uniform lowp vec4 vertexColor;
varying lowp vec4 color;
//For texture
attribute vec4 a_Position;
attribute vec2 a_TextureCoordinates;
varying vec2 v_TextureCoordinates;
void main()
{
if (abs(a_drawingType - 1.0) < 0.0001) {
//Draw texture
v_TextureCoordinates = a_TextureCoordinates;
gl_Position = a_Position;
} else {
//Draw free style
gl_Position = MVP * inVertex;
gl_PointSize = pointSize;
color = vertexColor;
}
}
--- Fragment shader
precision mediump float;
uniform sampler2D texture;
varying lowp vec4 color;
uniform sampler2D u_TextureUnit;
varying vec2 v_TextureCoordinates;
uniform lowp float v_drawing_type;
void main()
{
if (abs(v_drawing_type - 1.0) < 0.0001) {
//Draw texture
gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);
} else {
//Drawing free style
gl_FragColor = color * texture2D(texture, gl_PointCoord);
}
}
My idea is to set these flags from the drawing code at draw time. The attribute a_drawingType is used by the vertex shader and the uniform v_drawing_type is used by the fragment shader. Depending on these flags, the shaders know whether to draw free style or a texture.
But if I run each mode independently, just one type at a time (when drawing free style, I comment out the texture-drawing code in the vertex and fragment shader files, and vice versa), it draws as I want. If I combine them, it not only fails to draw but also crashes the app at
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
I'm new to OpenGL ES and the GLSL language, so I'm not sure whether my idea of setting flags like that is right or wrong. Can anyone help me?
So why don't you just build 2 separate shader programs and call useProgram() on one of them, instead of sending flag values to GL and making an expensive conditional branch in the vertex and especially the fragment shader?
Attributes are per-vertex, uniforms are per-shader program.
You might see a crash if you supplied only one value for an attribute and then asked OpenGL to draw, say, 100 points. In that case OpenGL will do an out-of-bounds array access when it attempts to fetch the attributes for vertices 2–100.
It'd be more normal to use two separate programs. Conditionals are very expensive on GPUs because GPUs try to maintain one program counter while processing multiple fragments; they're SIMD units. Any time the evaluation of an if differs between two neighbouring fragments you're probably reducing parallelism. The idiom is therefore to avoid if statements where possible.
If you switch to a uniform there's a good chance you won't lose any performance through absent parallelism because the paths will never diverge. Your GLSL compiler may even be smart enough to recompile the shader every time you reset the constant, effectively performing constant folding. But you'll be paying for the recompilation every time.
If you just had two programs and switched between them you wouldn't pay the recompilation fee.
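A sketch of the two-program idiom (in WebGL syntax, which maps one-to-one to the GLES calls in the question; freeStyleProgram and textureProgram are hypothetical programs linked from branch-free shader pairs):

function drawFreeStyle(vertexCount) {
  gl.useProgram(freeStyleProgram);   // no drawing-type flag needed
  // ... set free-style uniforms/attributes ...
  gl.drawArrays(gl.POINTS, 0, vertexCount);
}

function drawTexturedQuad() {
  gl.useProgram(textureProgram);
  // ... set texture uniforms/attributes ...
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}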
It's not completely relevant in this case but e.g. you'd also often see code like:
if (abs(v_drawing_type - 1.0) < 0.0001) {
//Draw texture
gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);
} else {
//Drawing free style
gl_FragColor = color * texture2D(texture, gl_PointCoord);
}
Written more like:
gl_FragColor = mix(color * texture2D(texture, gl_PointCoord),
                   texture2D(u_TextureUnit, v_TextureCoordinates),
                   v_drawing_type);
... because that avoids the conditional entirely: mix(a, b, t) returns a when t is 0.0 and b when t is 1.0, matching the original if. In this case you'd want to adjust the two texture2D calls so that they were identical, and probably factor them out of the mix, to ensure you don't end up always doing two texture samples instead of one.
The previously posted answers explain certain pieces, but an important part is missing in all of them. It is perfectly legal to specify a single attribute value that is applied to all vertices in a draw call. What you did here was basically valid:
glVertexAttrib1f(a_ver_flag_drawing_type, 1.0f);
The direct problem was this call that followed shortly after:
glEnableVertexAttribArray(a_ver_flag_drawing_type);
There are two main ways to specify the value of a vertex attribute:
Use the current value, as specified in your case by glVertexAttrib1f().
Use values from an array, as specified with glVertexAttribPointer().
You select which of the two options is used for any given attribute by enabling/disabling the array, which is done by calling glEnableVertexAttribArray()/glDisableVertexAttribArray().
In the posted code, the vertex attribute was specified as only a current value, but the attribute was then enabled to fetch from an array with glEnableVertexAttribArray(). This conflict caused the crash, because the attribute values would have been fetched from an array that was never specified. To use the specified current value, the call simply has to be changed to:
glDisableVertexAttribArray(a_ver_flag_drawing_type);
Or, if the array was never enabled, the call could be left out completely. But just in case another part of the code might have enabled it, it's safer to disable it explicitly.
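To contrast the two modes side by side (a sketch in WebGL syntax; the GLES calls in the question map one-to-one):

// Option 1: current value, one value applied to every vertex
gl.disableVertexAttribArray(a_ver_flag_drawing_type);
gl.vertexAttrib1f(a_ver_flag_drawing_type, 1.0);

// Option 2: per-vertex values fetched from the bound ARRAY_BUFFER
gl.enableVertexAttribArray(a_position_location);
gl.vertexAttribPointer(a_position_location, 2, gl.FLOAT, false, 16, 0);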
As a side note, the following statement sequence from the first draw function also looks suspicious. glUniform*() sets a value on the currently active program, so this will set a value on the previously active program, not the one made current in the second statement. If you want to set the value on the new program, the order of the statements has to be reversed.
glUniform1f(u_fra_flag_drawing_type, 0.0);
glUseProgram(program[PROGRAM_POINT].id);
On the whole thing, I think there are at least two approaches that are better than the one chosen:
Use separate shader programs for the two different types of rendering. While using a single program with switchable behavior is a valid option, it looks artificial, and using separate programs seems much cleaner.
If you want to stick with a single program, use a single uniform to do the switching, instead of an attribute and a uniform. You could use the one you already have, but you might as well make it a boolean while you're at it. In both the vertex and fragment shader, use the same uniform declaration:
uniform bool v_use_texture;
Then the tests become:
if (v_use_texture) {
Getting the uniform location is the same as before, and you can set the value, which will then be available in both the vertex and fragment shader, with one of:
glUniform1i(loc, 0);
glUniform1i(loc, 1);
I found the problem: just change the variable a_drawingType from an attribute to a uniform, then use glGetUniformLocation and glUniform1f to get the location and pass the value. I think an attribute has to be passed for every vertex, so use a uniform to pass the value once.
I'm not entirely clear on the scope of enabling vertex attrib arrays. I've got several different shader programs with differing numbers of vertex attributes. Are glEnableVertexAttribArray calls local to a shader program, or global?
Right now I'm enabling vertex attrib arrays when I create the shader program, and never disabling them, and all seems to work, but it seems like I'm possibly supposed to enable/disable them right before/after draw calls. Is there an impact to this?
(I'm in WebGL, as it happens, so we're really talking about gl.enableVertexAttribArray and gl.disableVertexAttribArray. I'll note also that the orange book, OpenGL Shading Language, is quite uninformative about these calls.)
The state of which Vertex Attribute Arrays are enabled can be either bound to a Vertex Array Object (VAO), or be global.
If you're using VAOs, then you should not disable attribute arrays, as they are encapsulated in the VAO.
However, for the global vertex attribute array enabled state, you should disable them, because if they're left enabled OpenGL will try to read from arrays which may be bound to an invalid pointer. That may either crash your program, if the pointer is to client address space, or raise an OpenGL error if it points outside the limits of a bound Vertex Buffer Object.
WebGL is not the same as OpenGL.
In WebGL leaving arrays enabled is explicitly allowed as long as there is a buffer attached to the attribute and (a) if it's used it's large enough to satisfy the draw call or (b) it's not used.
Unlike OpenGL ES 2.0, WebGL doesn't allow client side arrays.
Proof:
const gl = document.querySelector("canvas").getContext("webgl");
const vsUses2Attributes = `
attribute vec4 position;
attribute vec4 color;
varying vec4 v_color;
void main() {
gl_Position = position;
gl_PointSize = 20.0;
v_color = color;
}
`;
const vsUses1Attribute = `
attribute vec4 position;
varying vec4 v_color;
void main() {
gl_Position = position;
gl_PointSize = 20.0;
v_color = vec4(0,1,1,1);
}
`
const fs = `
precision mediump float;
varying vec4 v_color;
void main() {
gl_FragColor = v_color;
}
`;
const program2Attribs = twgl.createProgram(gl, [vsUses2Attributes, fs]);
const program1Attrib = twgl.createProgram(gl, [vsUses1Attribute, fs]);
function createBuffer(data) {
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW);
return buf;
}
const buffer3Points = createBuffer([
-0.7, 0.5,
0.0, 0.5,
0.7, 0.5,
]);
const buffer3Colors = createBuffer([
1, 0, 0, 1,
0, 1, 0, 1,
0, 0, 1, 1,
]);
const buffer9Points = createBuffer([
-0.8, -0.5,
-0.6, -0.5,
-0.4, -0.5,
-0.2, -0.5,
0.0, -0.5,
0.2, -0.5,
0.4, -0.5,
0.6, -0.5,
0.8, -0.5,
]);
// set up 2 attributes
{
const posLoc = gl.getAttribLocation(program2Attribs, 'position');
gl.enableVertexAttribArray(posLoc);
gl.bindBuffer(gl.ARRAY_BUFFER, buffer3Points);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);
const colorLoc = gl.getAttribLocation(program2Attribs, 'color');
gl.enableVertexAttribArray(colorLoc);
gl.bindBuffer(gl.ARRAY_BUFFER, buffer3Colors);
gl.vertexAttribPointer(colorLoc, 4, gl.FLOAT, false, 0, 0);
}
// draw
gl.useProgram(program2Attribs);
gl.drawArrays(gl.POINTS, 0, 3);
// setup 1 attribute (don't disable the second attribute)
{
const posLoc = gl.getAttribLocation(program1Attrib, 'position');
gl.enableVertexAttribArray(posLoc);
gl.bindBuffer(gl.ARRAY_BUFFER, buffer9Points);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);
}
// draw
gl.useProgram(program1Attrib);
gl.drawArrays(gl.POINTS, 0, 9);
const err = gl.getError();
console.log(err ? `ERROR: ${twgl.glEnumToString(gl, err)}` : 'no WebGL errors');
canvas { border: 1px solid black; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<p>
1st it draws 3 points (3 vertices, 2 attributes)<br/>
2nd it draws 9 points (9 vertices, 1 attribute)<br/>
It does NOT call gl.disableVertexAttrib so on the second draw call one of the attributes is still enabled. It is pointing to a buffer with only 3 vertices in it even though 9 vertices will be drawn. There are no errors.
</p>
<canvas></canvas>
Another example: just enable all the attributes, then draw with a shader that uses no attributes (no error) and also draw with a shader that uses 1 attribute (again no error). There is no need to call gl.disableVertexAttribArray.
const gl = document.querySelector("canvas").getContext("webgl");
const vsUses1Attributes = `
attribute vec4 position;
void main() {
gl_Position = position;
gl_PointSize = 20.0;
}
`;
const vsUses0Attributes = `
void main() {
gl_Position = vec4(0, 0, 0, 1);
gl_PointSize = 20.0;
}
`
const fs = `
precision mediump float;
void main() {
gl_FragColor = vec4(1, 0, 0, 1);
}
`;
const program0Attribs = twgl.createProgram(gl, [vsUses0Attributes, fs]);
const program1Attrib = twgl.createProgram(gl, [vsUses1Attributes, fs]);
function createBuffer(data) {
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW);
return buf;
}
const buffer3Points = createBuffer([
-0.7, 0.5,
0.0, 0.5,
0.7, 0.5,
]);
const buffer0Points = createBuffer([]);
// enable all the attributes and bind a buffer to them
const maxAttrib = gl.getParameter(gl.MAX_VERTEX_ATTRIBS);
for (let i = 0; i < maxAttrib; ++i) {
gl.enableVertexAttribArray(i);
gl.vertexAttribPointer(i, 2, gl.FLOAT, false, 0, 0);
}
gl.useProgram(program0Attribs);
gl.drawArrays(gl.POINTS, 0, 1);
gl.useProgram(program1Attrib);
const posLoc = gl.getAttribLocation(program1Attrib, 'position');
gl.bindBuffer(gl.ARRAY_BUFFER, buffer3Points);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.POINTS, 0, 3);
const err = gl.getError();
console.log(err ? `ERROR: ${twgl.glEnumToString(gl, err)}` : 'no WebGL errors');
canvas { border: 1px solid black; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<p>
1st it enables all attributes<br/>
2nd it draws 1 point that needs no attributes (no error)<br/>
3rd it draws 3 points that need 1 attribute (no error)<br/>
It does NOT call gl.disableVertexAttribArray on any of the attributes so they are all still enabled. There are no errors.
</p>
<canvas></canvas>
For WebGL I'm going to go with yes, it is important to call gl.disableVertexAttribArray.
Chrome was giving me this warning:
WebGL: INVALID_OPERATION: drawElements: attribs not setup correctly
This was happening when the program changed to one using less than the maximum number of attributes. Obviously the solution was to disable the unused attributes before drawing.
If all your programs use the same number of attributes, you may well get away with calling gl.enableVertexAttribArray once on initialization. Otherwise you'll need to manage them when you change programs.
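One hedged way to manage that (my sketch, not from the answer): when switching, enable exactly the attribute locations the new program uses and disable everything else.

function useProgramWithAttribs(gl, program) {
  gl.useProgram(program);
  // collect the attribute locations this program actually uses
  const used = new Set();
  const numActive = gl.getProgramParameter(program, gl.ACTIVE_ATTRIBUTES);
  for (let i = 0; i < numActive; ++i) {
    const info = gl.getActiveAttrib(program, i);
    used.add(gl.getAttribLocation(program, info.name));
  }
  // enable those, disable the rest
  const maxAttribs = gl.getParameter(gl.MAX_VERTEX_ATTRIBS);
  for (let loc = 0; loc < maxAttribs; ++loc) {
    if (used.has(loc)) {
      gl.enableVertexAttribArray(loc);
    } else {
      gl.disableVertexAttribArray(loc);
    }
  }
}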
Think of it this way: attributes are local to a VAO, not to a shader program, and the VBOs live in GPU memory.
Now consider that, in WebGL, there is a default VAO that WebGL uses by default (it can also be a VAO created by the programmer; the same concept applies). WebGL has an ARRAY_BUFFER target to which any VBO in GPU memory can be bound, and the VAO contains an attribute array with a fixed number of attribute slots (the number depends on the implementation and platform; let's say 8 here, which is the minimum required by the WebGL specification). The VAO also has an ELEMENT_ARRAY_BUFFER target to which any index data buffer can be bound.
Now, when you create a shader program, it will have the attributes you specify. WebGL assigns one of the available attribute slot "numbers" to each attribute in the program when you link it; the attributes then use those slots in the VAO to access vertex data.
When you call gl.enableVertexAttribArray(location) and gl.vertexAttribPointer(location, ...) you are not changing any characteristic of the attributes in the shader program (they simply carry a number referring to a slot in the VAO); you are modifying the state of that attribute slot in the VAO. So for an attribute in the program to access its data, its corresponding slot must be enabled (gl.enableVertexAttribArray()), and the slot must be configured to read the data correctly from the buffer that was bound to ARRAY_BUFFER when gl.vertexAttribPointer() was called (gl.bindBuffer()). Once a VBO is set for a slot it won't change; even if we unbind the VBO from the target, the attribute slot can still read from it as long as it exists in GPU memory.
In short, gl.enableVertexAttribArray(location) enables the attribute slot specified by location in the current VAO, and gl.disableVertexAttribArray(location) disables it. This has nothing to do with the shader program: even if you use a different shader program, the state of these attribute slots won't be affected.
So if two different shader programs use the same attribute slots, there won't be any error, because the corresponding slots in the VAO are already enabled (though the data might be read incorrectly if the two programs need to interpret it differently). Now consider what happens if the two shader programs use different attribute slots: you might enable the second program's required slots and think your program should work, but the slots that were enabled by the previous program will still be enabled even though they are no longer used. This causes an error.
So when changing shader programs, we must ensure that enabled attribute slots in the VAO that won't be used by the new program are disabled. Although we might not specify any VAOs explicitly, WebGL works like this by default.
One way is to maintain a list of enabled attributes on the JavaScript side and disable all enabled attribute slots when switching programs while still using the same VAO. Another way is to create a custom VAO per shader program, accessed by that program only, though this is less efficient. Yet another way is to bind attribute locations to fixed slots before the shader program is linked, using gl.bindAttribLocation().
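A sketch of that last option, assuming vertex shaders that declare position and texcoord attributes; the bindings must be made before gl.linkProgram to take effect:

gl.bindAttribLocation(program, 0, 'position');  // slot 0 in every program
gl.bindAttribLocation(program, 1, 'texcoord');  // slot 1 in every program
gl.linkProgram(program);                        // bindings take effect at link time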