I sometimes find myself struggling between declaring the buffers (with createBuffer/bindBuffer/bufferData) in different orders and rebinding them in other parts of the code, usually in the draw loop.
If I don't rebind the vertex buffer before drawing arrays, the console complains about an attempt to access out-of-range vertices. My suspicion is that the last bound object is what gets passed to the pointer and then to drawArrays, but when I change the order at the beginning of the code, nothing changes. What effectively works is rebinding the buffer in the draw loop, so I can't really understand the logic behind that. When do you need to rebind? Why do you need to rebind? What is attribute0 referring to?
I don't know if this will help. As some people have said, GL/WebGL has a bunch of internal state. All the functions you call set up that state. When it's all set up, you call drawArrays or drawElements and all of that state is used to draw things.
This has been explained elsewhere on SO but binding a buffer is just setting 1 of 2 global variables inside WebGL. After that you refer to the buffer by its bind point.
You can think of it like this
gl = new function() {
  // internal WebGL state
  let lastError;
  let arrayBuffer = null;
  let vertexArray = {
    elementArrayBuffer: null,
    attributes: [
      { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
        stride: 0, offset: 0, buffer: null },
      { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
        stride: 0, offset: 0, buffer: null },
      { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
        stride: 0, offset: 0, buffer: null },
      { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
        stride: 0, offset: 0, buffer: null },
      { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
        stride: 0, offset: 0, buffer: null },
      ...
    ],
  };
  // these values are used when a vertex attrib is disabled
  let attribValues = [
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    ...
  ];
  ...
  // Implementation of gl.bindBuffer.
  // Note this function is doing nothing but setting 2 internal variables.
  this.bindBuffer = function(bindPoint, buffer) {
    switch (bindPoint) {
      case gl.ARRAY_BUFFER:
        arrayBuffer = buffer;
        break;
      case gl.ELEMENT_ARRAY_BUFFER:
        vertexArray.elementArrayBuffer = buffer;
        break;
      default:
        lastError = gl.INVALID_ENUM;
        break;
    }
  };
  ...
}();
After that, other WebGL functions reference those. For example, gl.bufferData might do something like
// implementation of gl.bufferData
// Notice you don't pass in a buffer. You pass in a bindPoint.
// The function gets the buffer from one of the internal variables you set
// by previously calling gl.bindBuffer
this.bufferData = function(bindPoint, data, usage) {
  // look up the buffer from the bindPoint
  var buffer;
  switch (bindPoint) {
    case gl.ARRAY_BUFFER:
      buffer = arrayBuffer;
      break;
    case gl.ELEMENT_ARRAY_BUFFER:
      buffer = vertexArray.elementArrayBuffer;
      break;
    default:
      lastError = gl.INVALID_ENUM;
      break;
  }
  // copy data into buffer
  buffer.copyData(data);   // just making this up
  buffer.setUsage(usage);  // just making this up
};
Separate from those bind points, there are a number of attributes. The attributes are also global state by default. They define how to pull data out of the buffers to supply to your vertex shader. Calling gl.getAttribLocation(someProgram, "nameOfAttribute") tells you which attribute the vertex shader will look at to get data out of a buffer.
So, there are 4 functions that you use to configure how an attribute will get data from a buffer: gl.enableVertexAttribArray, gl.disableVertexAttribArray, gl.vertexAttribPointer, and gl.vertexAttrib??.
They're effectively implemented something like this
this.enableVertexAttribArray = function(location) {
  const attribute = vertexArray.attributes[location];
  attribute.enabled = true;  // true means get data from attribute.buffer
};

this.disableVertexAttribArray = function(location) {
  const attribute = vertexArray.attributes[location];
  attribute.enabled = false; // false means get data from attribValues[location]
};

this.vertexAttribPointer = function(location, size, type, normalized, stride, offset) {
  const attribute = vertexArray.attributes[location];
  attribute.size = size;             // num values to pull from buffer per vertex shader iteration
  attribute.type = type;             // type of values to pull from buffer
  attribute.normalized = normalized; // whether or not to normalize
  attribute.stride = stride;         // number of bytes to advance for each iteration of the vertex shader. 0 = compute from type, size
  attribute.offset = offset;         // where to start in buffer.
  // IMPORTANT!!! Associates whatever buffer is currently *bound* to
  // "arrayBuffer" with this attribute
  attribute.buffer = arrayBuffer;
};

this.vertexAttrib4f = function(location, x, y, z, w) {
  const attribValue = attribValues[location];
  attribValue[0] = x;
  attribValue[1] = y;
  attribValue[2] = z;
  attribValue[3] = w;
};
Now, when you call gl.drawArrays or gl.drawElements, the system knows how you want to pull data out of the buffers you made to supply your vertex shader. See here for how that works.
Since the attributes are global state, every time you call drawElements or drawArrays, however you have the attributes set up is how they'll be used. If you set up attributes #1 and #2 to buffers that each have 3 vertices but you ask to draw 6 vertices with gl.drawArrays, you'll get an error. Similarly, if you make an index buffer, bind it to the gl.ELEMENT_ARRAY_BUFFER bind point, and that buffer has an index that is > 2, you'll get that index-out-of-range error. If your buffers only have 3 vertices, then the only valid indices are 0, 1, and 2.
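For example, here's a minimal sketch of that out-of-range failure (posLoc is assumed to be an attribute location you already looked up with gl.getAttribLocation):

// a buffer holding only 3 vertices (2 floats each)
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([0, 0, 1, 0, 0, 1]), gl.STATIC_DRAW);
gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);

gl.drawArrays(gl.TRIANGLES, 0, 3); // ok: reads vertices 0..2
gl.drawArrays(gl.TRIANGLES, 0, 6); // error: tries to read vertices 3..5, which don't exist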
Normally, every time you draw something different you rebind all the attributes needed to draw that thing. Drawing a cube that has positions and normals? Bind the buffer with position data, set up the attribute being used for positions, bind the buffer with normal data, set up the attribute being used for normals, then draw. Next you draw a sphere with positions, vertex colors and texture coordinates. Bind the buffer that contains position data, set up the attribute being used for positions. Bind the buffer that contains vertex color data, set up the attribute being used for vertex colors. Bind the buffer that contains texture coordinates, set up the attribute being used for texture coordinates.
The only time you don't rebind buffers is if you're drawing the same thing more than once. For example, drawing 10 cubes: you'd rebind the buffers, then set the uniforms for one cube, draw it, set the uniforms for the next cube, draw it, and repeat.
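In code, that per-object pattern looks something like this sketch (positionLoc, normalLoc, matrixLoc, and the objectsToDraw list are placeholders, not names from your code):

for (const obj of objectsToDraw) {
  // rebind this object's buffers and point the attributes at them
  gl.bindBuffer(gl.ARRAY_BUFFER, obj.positionBuffer);
  gl.enableVertexAttribArray(positionLoc);
  gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);

  gl.bindBuffer(gl.ARRAY_BUFFER, obj.normalBuffer);
  gl.enableVertexAttribArray(normalLoc);
  gl.vertexAttribPointer(normalLoc, 3, gl.FLOAT, false, 0, 0);

  // per-object uniforms, then draw
  gl.uniformMatrix4fv(matrixLoc, false, obj.matrix);
  gl.drawArrays(gl.TRIANGLES, 0, obj.vertexCount);
}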
I should also add that there's an extension, OES_vertex_array_object, which is also a core feature of WebGL 2.0. A Vertex Array Object is the global state above called vertexArray, which includes the elementArrayBuffer and all the attributes.
Calling gl.createVertexArray makes a new one of those. Calling gl.bindVertexArray sets the global attributes to point to the ones in the bound vertexArray.
Calling gl.bindVertexArray would then be

this.bindVertexArray = function(vao) {
  vertexArray = vao ? vao : defaultVertexArray;
};
This has the advantage of letting you set up all attributes and buffers at init time and then at draw time just 1 WebGL call will set all buffers and attributes.
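In WebGL 1 this is available through the extension; here's a rough sketch of that flow (cubePositionBuffer and positionLoc are placeholders):

const ext = gl.getExtension('OES_vertex_array_object');

// at init time: record the attribute/buffer setup into a VAO
const cubeVAO = ext.createVertexArrayOES();
ext.bindVertexArrayOES(cubeVAO);
gl.bindBuffer(gl.ARRAY_BUFFER, cubePositionBuffer);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
ext.bindVertexArrayOES(null);

// at draw time: one call restores all of that state
ext.bindVertexArrayOES(cubeVAO);
gl.drawArrays(gl.TRIANGLES, 0, 36);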
Here is a webgl state diagram that might help visualize this better.
Problem constraints:
I am not using three.js or similar, but pure WebGL
WebGL 2 is not an option either
I have a couple of models loaded, stored as Vertices and Normals arrays (coming from an STL reader).
So far there is no problem when both models are the same size. Whenever I load 2 different models, an error message is shown in the browser:

WebGL: INVALID_OPERATION: drawArrays: attempt to access out of bounds arrays

so I suspect I am not manipulating multiple buffers correctly.
The models are loaded using the following TypeScript method:
public AddModel(model: Model)
{
    this.models.push(model);

    model.VertexBuffer = this.gl.createBuffer();
    model.NormalsBuffer = this.gl.createBuffer();

    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
    this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Vertices, this.gl.STATIC_DRAW);
    model.CoordLocation = this.gl.getAttribLocation(this.shaderProgram, "coordinates");
    this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);
    this.gl.enableVertexAttribArray(model.CoordLocation);

    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.NormalsBuffer);
    this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Normals, this.gl.STATIC_DRAW);
    model.NormalLocation = this.gl.getAttribLocation(this.shaderProgram, "vertexNormal");
    this.gl.vertexAttribPointer(model.NormalLocation, 3, this.gl.FLOAT, false, 0, 0);
    this.gl.enableVertexAttribArray(model.NormalLocation);
}
After loading, the Render method is called to draw all loaded models:
public Render(viewMatrix: Matrix4, perspective: Matrix4)
{
    this.gl.uniformMatrix4fv(this.viewRef, false, viewMatrix);
    this.gl.uniformMatrix4fv(this.perspectiveRef, false, perspective);
    this.gl.uniformMatrix4fv(this.normalTransformRef, false, viewMatrix.NormalMatrix());

    // Clear the canvas
    this.gl.clearColor(0, 0, 0, 0);
    this.gl.viewport(0, 0, this.canvas.width, this.canvas.height);
    this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT);

    // Draw the triangles
    if (this.models.length > 0)
    {
        for (var i = 0; i < this.models.length; i++)
        {
            var model = this.models[i];
            this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
            this.gl.enableVertexAttribArray(model.NormalLocation);
            this.gl.enableVertexAttribArray(model.CoordLocation);
            this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);
            this.gl.uniformMatrix4fv(this.modelRef, false, model.TransformMatrix);
            this.gl.uniform3fv(this.materialdiffuseRef, model.Color.AsVec3());
            this.gl.drawArrays(this.gl.TRIANGLES, 0, model.TrianglesCount);
        }
    }
}
One model works just fine. Two cloned models also work OK. Different models fail with the error mentioned.
What am I missing?
The normal way to use WebGL
At init time

    for each shader program
        create and compile vertex shader
        create and compile fragment shader
        create program, attach shaders, link program

    for each model
        for each type of vertex data (positions, normals, colors, texcoords)
            create a buffer
            copy data to buffer
        create textures

Then at render time

    for each model
        use shader program appropriate for model
        bind buffers, enable and set up attributes
        bind textures and set uniforms
        call drawArrays or drawElements
But looking at your code, it's binding buffers and enabling and setting up attributes at init time instead of at render time.
Maybe see this article and this one
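A minimal sketch of the fix (not necessarily your exact code): keep AddModel for creating and filling the buffers, but move the binding and attribute setup for both buffers into the Render loop, e.g.

for (var i = 0; i < this.models.length; i++)
{
    var model = this.models[i];

    // point the position attribute at *this* model's vertex buffer
    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
    this.gl.enableVertexAttribArray(model.CoordLocation);
    this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);

    // point the normal attribute at *this* model's normals buffer (this was missing)
    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.NormalsBuffer);
    this.gl.enableVertexAttribArray(model.NormalLocation);
    this.gl.vertexAttribPointer(model.NormalLocation, 3, this.gl.FLOAT, false, 0, 0);

    this.gl.uniformMatrix4fv(this.modelRef, false, model.TransformMatrix);
    this.gl.uniform3fv(this.materialdiffuseRef, model.Color.AsVec3());
    this.gl.drawArrays(this.gl.TRIANGLES, 0, model.TrianglesCount);
}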
I'm trying to render a mesh with an additional attribute present in the vertex data, but I'm noticing that the offset value I've set in the vertex descriptor for this attribute doesn't seem to be respected. It's acting like the offset is zero, thus pulling in vertex values instead of the data I'm looking for.
My vertex data is defined like so:
vertices      metadata
0, 1, 0, 1,   1, 0,
0, 1, 0, 0,   4, 0,
In the shader, I pull this in like so:
typedef struct {
    float4 data  [[attribute(0)]];
    float2 index [[attribute(1)]];
} Vertex;

vertex ColorInOut vertexShader(Vertex in [[stage_in]],
                               constant VertexShaderUniforms & u [[ buffer(2) ]]) {...}
I'm then setting up the vertex descriptor to handle this format:
auto _mtlVertexDescriptor = [[MTLVertexDescriptor alloc] init];
_mtlVertexDescriptor.attributes[0].format = MTLVertexFormatFloat4;
_mtlVertexDescriptor.attributes[0].offset = 0;
_mtlVertexDescriptor.attributes[0].bufferIndex = 0;
_mtlVertexDescriptor.attributes[1].format = MTLVertexFormatFloat2;
_mtlVertexDescriptor.attributes[1].offset = sizeof(vector_float4);
_mtlVertexDescriptor.attributes[1].bufferIndex = 0;
_mtlVertexDescriptor.layouts[0].stepFunction = MTLVertexStepFunctionPerVertex;
_mtlVertexDescriptor.layouts[0].stepRate = 1;
_mtlVertexDescriptor.layouts[0].stride = sizeof(vector_float4) + sizeof(vector_float2);
In the Metal debugger, I'm noticing that instead of the data entries above, I'm seeing the following output in the Geometry viewer:
vertices      metadata
0, 1, 0, 1,   0, 1,
0, 1, 0, 0,   0, 1,
As it might be important to this situation, I should point out that I load my models manually, since I'm plugging this in as an extra render option to my application. I'm doing this in the following way:
std::vector<float> vertices = {...};
auto allocator = [[MTKMeshBufferAllocator alloc] initWithDevice:device];
auto vertexData = [NSData dataWithBytes:vertices.data() length:vertices.size() * sizeof(float)];
auto vertexBuffer = [allocator newBufferWithData:vertexData type:MDLMeshBufferTypeVertex];
auto mdlVertexDescriptor = MTKModelIOVertexDescriptorFromMetal(vertexDescriptor);

// For this particular example, `row_size` is 6, corresponding to the number of values in each vertex
auto mdlMesh = [[MDLMesh alloc] initWithVertexBuffer:vertexBuffer
                                         vertexCount:vertices.size() / row_size
                                          descriptor:mdlVertexDescriptor
                                           submeshes:@[]];
mdlMesh.vertexDescriptor = mdlVertexDescriptor;

NSError* error = nil;
auto m = [[MTKMesh alloc] initWithMesh:mdlMesh
                                device:device
                                 error:&error];
Is there any magic invocation I'm missing to get the offset applying properly?
[edit]
I've verified that the same issue is present in the vertex buffer object itself. I've confirmed that the vertex descriptor being passed into the MTKModelIOVertexDescriptorFromMetal call is the expected descriptor, and I've also confirmed that the raw data in the NSData object is identical to the std::vector values, so the issue may lie in how I'm interacting with MDLMesh.
It seems that ModelIO expects the vertex descriptor attributes to all have names matching the field names used in the vertex buffer struct for the above scenario to work. I fixed this like so:
vertexDescriptor.attributes[0].name = @"data";
vertexDescriptor.attributes[1].name = @"index";
After attaching names to each attribute, the correct data was loaded by the shader.
I managed to find this information out through a random chance run-in with the header that declares the MTKModelIOVertexDescriptorFromMetal method. The requirement is mentioned right at the end:
This method can only set vertex format, offset, bufferIndex, and stride information in the produced Model I/O vertex descriptor. It does not add any semantic information such as attribute names. Names must be set in the returned Model I/O vertex descriptor before it can be applied to a Model I/O mesh.
I understand it is possible to pass a 1D array buffer to a Metal shader, but is it possible to have it output to a 1D array buffer? I don't want it to write to a texture - I just need an array of processed values.
I can get values out of the shader with the following code, but only one value at a time. Ideally I could get a whole array out (in the same order as the input 1D array buffer).
Any examples or pointers would be greatly appreciated!
var resultdata = [Float](repeating: 0, count: 3)
let outVectorBuffer = device.makeBuffer(bytes: &resultdata, length: MemoryLayout<float3>.size, options: [])
commandEncoder!.setBuffer(outVectorBuffer, offset: 0, index: 6)

commandBuffer!.addCompletedHandler { commandBuffer in
    let data = NSData(bytes: outVectorBuffer!.contents(), length: MemoryLayout<float3>.size)
    var out: float3 = float3(0, 0, 0)
    data.getBytes(&out, length: MemoryLayout<float3>.size)
    print("data: \(out)")
}
//In the Shader:
kernel void compute1d(...
                      device float3 &outBuffer [[buffer(6)]],
                      ...)
{
    outBuffer = float3(1.0, 2.0, 3.0);
}
Two things:
You need to create the buffer large enough to hold however many float3 elements as you want. You really need to use .stride and not .size when calculating the buffer size, though. In particular, float3 has 16-byte alignment, so there's padding between elements in an array. So, you would use something like MemoryLayout<float3>.stride * desiredNumberOfElements.
Then, in the shader, you need to change the declaration of outBuffer from a reference to a pointer. So, device float3 *outBuffer [[buffer(6)]]. Then you can index into it to access the elements (e.g. outBuffer[2] = ...;).
My code works but I am wondering why!
I have 2 textures:
uniform sampler2D uSampler0;
uniform sampler2D uSampler1;

void main() {
    vec4 color0 = texture2D(uSampler0, vTexCoord);
    vec4 color1 = texture2D(uSampler1, vTexCoord);
    gl_FragColor = color0 * color1;
}
and my js code
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, my_texture_ZERO);
gl.uniform1i(program.uSampler0, 0);

gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, my_texture_ONE);
gl.uniform1i(program.uSampler1);

// uncomment one of the 3, it works.
// gl.bindTexture(gl.TEXTURE_2D, my_texture_ZERO);
// gl.bindTexture(gl.TEXTURE_2D, my_texture_ONE);
// gl.bindTexture(gl.TEXTURE_2D, texture_FOR_PURPOSE_ONLY);

gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
Before the draw call, I have tested the 3 bindings; each one works!
So, I do not understand the real underlying pipeline.
Thanks for some explanations.
This line is invalid
gl.uniform1i(program.uSampler1);
You're not passing a value to the sampler.
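Presumably it was meant to select texture unit 1 for that sampler, i.e. something like

gl.uniform1i(program.uSampler1, 1);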
The way WebGL texture units work is that they are global state inside WebGL.
gl.activeTexture sets the texture unit that all other texture commands affect. For each texture unit there are 2 bind points, TEXTURE_2D and TEXTURE_CUBE_MAP.
You can think of it like this
gl = {
  activeTextureUnit: 0,
  textureUnits: [
    { TEXTURE_2D: null, TEXTURE_CUBE_MAP: null, },
    { TEXTURE_2D: null, TEXTURE_CUBE_MAP: null, },
    { TEXTURE_2D: null, TEXTURE_CUBE_MAP: null, },
    ...
  ],
};
gl.activeTexture just does this
gl.activeTexture = function(unit) {
  gl.activeTextureUnit = unit - gl.TEXTURE0;
};
gl.bindTexture does this
gl.bindTexture = function(bindPoint, texture) {
  gl.textureUnits[gl.activeTextureUnit][bindPoint] = texture;
};
gl.texImage2D and gl.texParameteri look up which texture to work with like this

gl.texImage2D = function(bindPoint, .....) {
  var texture = gl.textureUnits[gl.activeTextureUnit][bindPoint];
  // now do something with texture
  ...
};
In other words, inside WebGL there is a global array of texture units. gl.activeTexture and gl.bindTexture manipulate that array.
gl.texXXX manipulate the textures themselves but they reference the textures indirectly through that array.
gl.uniform1i(someSamplerLocation, unitNumber) sets the shader's uniform to look at a particular index in that array of texture units.
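Putting that together, a corrected version of the setup in the question would be something like

gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, my_texture_ZERO);
gl.uniform1i(program.uSampler0, 0); // uSampler0 reads from unit 0

gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, my_texture_ONE);
gl.uniform1i(program.uSampler1, 1); // uSampler1 reads from unit 1

gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);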
It's working correctly because in the presented code you are sending the appropriate uniforms for the samplers.
The first texture was set to unit 0 by calling glActiveTexture(GL_TEXTURE0) and was bound afterward. Then a switch was made to unit 1.
At that point there were two separate textures bound, one in each unit.
At the end these unit numbers were passed as the uniforms for the samplers - which is how you indicate which texture a sampler should use: in this case passing 0, corresponding to the GL_TEXTURE0 unit, to the first uniform, and analogously for the second uniform.
Probably even without uncommenting those lines, things should work.
I'm not entirely clear on the scope of enabling vertex attrib arrays. I've got several different shader programs with differing numbers of vertex attributes. Are glEnableVertexAttribArray calls local to a shader program, or global?
Right now I'm enabling vertex attrib arrays when I create the shader program, and never disabling them, and all seems to work, but it seems like I'm possibly supposed to enable/disable them right before/after draw calls. Is there an impact to this?
(I'm in WebGL, as it happens, so we're really talking about gl.enableVertexAttribArray and gl.disableVertexAttribArray. I'll note also that the orange book, OpenGL Shading Language, is quite uninformative about these calls.)
The state of which Vertex Attribute Arrays are enabled can be either bound to a Vertex Array Object (VAO), or be global.
If you're using VAOs, then you should not disable attribute arrays, as they are encapsulated in the VAO.
However, for global vertex attribute array enabled state, you should disable them, because if they're left enabled, OpenGL will try to read from arrays which may be bound to an invalid pointer. That may either crash your program if the pointer is to client address space, or raise an OpenGL error if it points outside the limits of a bound Vertex Buffer Object.
WebGL is not the same as OpenGL.
In WebGL leaving arrays enabled is explicitly allowed as long as there is a buffer attached to the attribute and (a) if it's used it's large enough to satisfy the draw call or (b) it's not used.
Unlike OpenGL ES 2.0, WebGL doesn't allow client side arrays.
Proof:
const gl = document.querySelector("canvas").getContext("webgl");

const vsUses2Attributes = `
  attribute vec4 position;
  attribute vec4 color;
  varying vec4 v_color;
  void main() {
    gl_Position = position;
    gl_PointSize = 20.0;
    v_color = color;
  }
`;

const vsUses1Attribute = `
  attribute vec4 position;
  varying vec4 v_color;
  void main() {
    gl_Position = position;
    gl_PointSize = 20.0;
    v_color = vec4(0,1,1,1);
  }
`;

const fs = `
  precision mediump float;
  varying vec4 v_color;
  void main() {
    gl_FragColor = v_color;
  }
`;

const program2Attribs = twgl.createProgram(gl, [vsUses2Attributes, fs]);
const program1Attrib = twgl.createProgram(gl, [vsUses1Attribute, fs]);

function createBuffer(data) {
  const buf = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buf);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW);
  return buf;
}

const buffer3Points = createBuffer([
  -0.7, 0.5,
   0.0, 0.5,
   0.7, 0.5,
]);

const buffer3Colors = createBuffer([
  1, 0, 0, 1,
  0, 1, 0, 1,
  0, 0, 1, 1,
]);

const buffer9Points = createBuffer([
  -0.8, -0.5,
  -0.6, -0.5,
  -0.4, -0.5,
  -0.2, -0.5,
   0.0, -0.5,
   0.2, -0.5,
   0.4, -0.5,
   0.6, -0.5,
   0.8, -0.5,
]);

// set up 2 attributes
{
  const posLoc = gl.getAttribLocation(program2Attribs, 'position');
  gl.enableVertexAttribArray(posLoc);
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer3Points);
  gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);

  const colorLoc = gl.getAttribLocation(program2Attribs, 'color');
  gl.enableVertexAttribArray(colorLoc);
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer3Colors);
  gl.vertexAttribPointer(colorLoc, 4, gl.FLOAT, false, 0, 0);
}

// draw
gl.useProgram(program2Attribs);
gl.drawArrays(gl.POINTS, 0, 3);

// set up 1 attribute (don't disable the second attribute)
{
  const posLoc = gl.getAttribLocation(program1Attrib, 'position');
  gl.enableVertexAttribArray(posLoc);
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer9Points);
  gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);
}

// draw
gl.useProgram(program1Attrib);
gl.drawArrays(gl.POINTS, 0, 9);

const err = gl.getError();
console.log(err ? `ERROR: ${twgl.glEnumToString(gl, err)}` : 'no WebGL errors');
canvas { border: 1px solid black; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<p>
1st it draws 3 points (3 vertices, 2 attributes)<br/>
2nd it draws 9 points (9 vertices, 1 attribute)<br/>
It does NOT call gl.disableVertexAttribArray, so on the second draw call one of the attributes is still enabled. It is pointing to a buffer with only 3 vertices in it even though 9 vertices will be drawn. There are no errors.
</p>
<canvas></canvas>
Another example: just enable all the attributes, then draw with a shader that uses no attributes (no error) and also draw with a shader that uses 1 attribute (again no error). There's no need to call gl.disableVertexAttribArray.
const gl = document.querySelector("canvas").getContext("webgl");

const vsUses1Attributes = `
  attribute vec4 position;
  void main() {
    gl_Position = position;
    gl_PointSize = 20.0;
  }
`;

const vsUses0Attributes = `
  void main() {
    gl_Position = vec4(0, 0, 0, 1);
    gl_PointSize = 20.0;
  }
`;

const fs = `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(1, 0, 0, 1);
  }
`;

const program0Attribs = twgl.createProgram(gl, [vsUses0Attributes, fs]);
const program1Attrib = twgl.createProgram(gl, [vsUses1Attributes, fs]);

function createBuffer(data) {
  const buf = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buf);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW);
  return buf;
}

const buffer3Points = createBuffer([
  -0.7, 0.5,
   0.0, 0.5,
   0.7, 0.5,
]);

const buffer0Points = createBuffer([]);

// enable all the attributes and bind a buffer to them
const maxAttrib = gl.getParameter(gl.MAX_VERTEX_ATTRIBS);
for (let i = 0; i < maxAttrib; ++i) {
  gl.enableVertexAttribArray(i);
  gl.vertexAttribPointer(i, 2, gl.FLOAT, false, 0, 0);
}

gl.useProgram(program0Attribs);
gl.drawArrays(gl.POINTS, 0, 1);

gl.useProgram(program1Attrib);
const posLoc = gl.getAttribLocation(program1Attrib, 'position');
gl.bindBuffer(gl.ARRAY_BUFFER, buffer3Points);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.POINTS, 0, 3);

const err = gl.getError();
console.log(err ? `ERROR: ${twgl.glEnumToString(gl, err)}` : 'no WebGL errors');
canvas { border: 1px solid black; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<p>
1st it enables all attributes<br/>
2nd it draws 1 point that needs no attributes (no error)<br/>
3rd it draws 3 points that need 1 attribute (no error)<br/>
It does NOT call gl.disableVertexAttribArray on any of the attributes so they are all still enabled. There are no errors.
</p>
<canvas></canvas>
For WebGL I'm going to go with yes, it is important to call gl.disableVertexAttribArray.
Chrome was giving me this warning:
WebGL: INVALID_OPERATION: drawElements: attribs not setup correctly
This was happening when the program changed to one using less than the maximum number of attributes. Obviously the solution was to disable the unused attributes before drawing.
If all your programs use the same number of attributes, you may well get away with calling gl.enableVertexAttribArray once on initialization. Otherwise you'll need to manage them when you change programs.
Think of it as: attributes are local to a VAO, not to a shader program. The VBOs are in GPU memory.

Now consider that, in WebGL, there is a default VAO that WebGL uses by default (it can also be a VAO created by the programmer; the same concept applies). This VAO contains a target called ARRAY_BUFFER, to which any VBO in GPU memory can be bound. The VAO also contains an attribute array with a fixed number of attribute slots (the number depends on the implementation and platform; let's say 8 here, which is the minimum required by the WebGL specification). The VAO also has an ELEMENT_ARRAY_BUFFER target, to which any index data buffer can be bound.

Now, when you create a shader program, it will have the attributes you specify. WebGL assigns one of the possible attribute slot numbers to each of the attributes specified in the program when you link it. The attributes then use the corresponding attribute slots in the VAO to access the data bound to the VAO's ARRAY_BUFFER or ELEMENT_ARRAY_BUFFER targets. When you call gl.enableVertexAttribArray(location) and gl.vertexAttribPointer(location, ...), you are not changing any characteristics of the attributes in the shader program (they simply have an attribute number referring to the slot in the VAO that they will use to access data). What you are actually doing is modifying the state of that attribute slot in the VAO. For the attributes in the program to be able to access the data, the corresponding attribute slot in the VAO must be enabled (gl.enableVertexAttribArray()), and the slot must be configured so it reads the data correctly from the buffer bound to ARRAY_BUFFER (gl.vertexAttribPointer()). Once a VBO is set for a slot it won't change; even if we unbind it from the target, the attribute slot can still read from the VBO as long as it is there in GPU memory. Also, there must be some buffer bound to the VAO's targets (gl.bindBuffer()). So gl.enableVertexAttribArray(location) enables the attribute slot specified by location in the current VAO, and gl.disableVertexAttribArray(location) disables it. This has nothing to do with the shader program: even if you use a different shader program, the state of these attribute slots won't be affected.

So, if two different shader programs use the same attribute slots, there won't be any error, because the corresponding attribute slots in the VAO are already active (though the data might be read incorrectly if the two shader programs need to interpret it differently). Now consider what happens if the two shader programs use different attribute slots. You might enable the second shader program's required attribute slots and think your program should work, but the attribute slots enabled by the previous shader program will still be enabled, yet unused. This causes an error.

So when changing shader programs, we must ensure that the enabled attribute slots in the VAO that won't be used by the new program are disabled. Although we might not specify any VAOs explicitly, WebGL works like this by default.

One way is to maintain a list of enabled attributes on the JavaScript side and disable all enabled attribute slots when switching programs while still using the same VAO. Another way is to create custom VAOs that are each accessed by one shader program only, though that is less efficient. Yet another way is to bind attribute locations to fixed slots before the shader program is linked, using gl.bindAttribLocation().
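A minimal sketch of that first approach (a hypothetical helper; it assumes you can list the attribute locations each program uses):

let enabledSlots = new Set();

// hypothetical helper: when switching programs, enable the slots the new
// program needs and disable any slots left enabled by the previous one
function useProgramWithAttribs(gl, program, attribLocations) {
  gl.useProgram(program);
  const needed = new Set(attribLocations);
  for (const loc of enabledSlots) {
    if (!needed.has(loc)) {
      gl.disableVertexAttribArray(loc);
    }
  }
  for (const loc of needed) {
    gl.enableVertexAttribArray(loc);
  }
  enabledSlots = needed;
}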