I'm trying to render a mesh with an additional attribute present in the vertex data, but I'm noticing that the offset value I've set in the vertex descriptor for this attribute doesn't seem to be respected. It's acting like the offset is zero, thus pulling in vertex values instead of the data I'm looking for.
My vertex data is defined like so:
vertices metadata
0, 1, 0, 1, 1, 0,
0, 1, 0, 0, 4, 0,
In the shader, I pull this in like so:
typedef struct {
float4 data [[attribute(0)]];
float2 index [[attribute(1)]];
} Vertex;
vertex ColorInOut vertexShader(Vertex in [[stage_in]],
constant VertexShaderUniforms & u [[ buffer(2) ]]) {...}
I'm then setting up the vertex descriptor to handle this format:
auto _mtlVertexDescriptor = [[MTLVertexDescriptor alloc] init];
_mtlVertexDescriptor.attributes[0].format = MTLVertexFormatFloat4;
_mtlVertexDescriptor.attributes[0].offset = 0;
_mtlVertexDescriptor.attributes[0].bufferIndex = 0;
_mtlVertexDescriptor.attributes[1].format = MTLVertexFormatFloat2;
_mtlVertexDescriptor.attributes[1].offset = sizeof(vector_float4);
_mtlVertexDescriptor.attributes[1].bufferIndex = 0;
_mtlVertexDescriptor.layouts[0].stepFunction = MTLVertexStepFunctionPerVertex;
_mtlVertexDescriptor.layouts[0].stepRate = 1;
_mtlVertexDescriptor.layouts[0].stride = sizeof(vector_float4) + sizeof(vector_float2);
In the Metal debugger, I'm noticing that instead of the data entries above, I'm seeing the following output in the Geometry viewer:
vertices metadata
0, 1, 0, 1, 0, 1,
0, 1, 0, 0, 0, 1,
Since it might be relevant here, I should point out that I load my models manually, as I'm plugging this in as an extra render option in my application. I'm doing this in the following way:
std::vector<float> vertices = {...};
auto allocator = [[MTKMeshBufferAllocator alloc] initWithDevice:device];
auto vertexData = [NSData dataWithBytes: vertices.data() length:vertices.size() * sizeof(float)];
auto vertexBuffer = [allocator newBufferWithData:vertexData type:MDLMeshBufferTypeVertex];
auto mdlVertexDescriptor = MTKModelIOVertexDescriptorFromMetal(_mtlVertexDescriptor);
// For this particular example, `row_size` is 6, corresponding to the number of values in each vertex
auto mdlMesh = [[MDLMesh alloc] initWithVertexBuffer:vertexBuffer vertexCount:vertices.size() / row_size descriptor:mdlVertexDescriptor submeshes:@[]];
mdlMesh.vertexDescriptor = mdlVertexDescriptor;
NSError* error = nil;
auto m = [[MTKMesh alloc] initWithMesh:mdlMesh
device:device
error:&error];
Is there any magic invocation I'm missing to get the offset applied properly?
[edit]
I've verified that the same issue is present in the vertex buffer object itself. I've confirmed that the vertex descriptor being passed into the MTKModelIOVertexDescriptorFromMetal call is the expected descriptor, and I've also confirmed that the raw data in the NSData object is identical to the std::vector values, so the issue may lie in how I'm interacting with MDLMesh.
It seems that Model I/O expects the vertex descriptor attributes to all have names matching the field names used in the vertex shader struct for the above scenario to work. I fixed this by naming the attributes on the Model I/O descriptor before creating the mesh:
mdlVertexDescriptor.attributes[0].name = @"data";
mdlVertexDescriptor.attributes[1].name = @"index";
After attaching names to each attribute, the correct data was loaded by the shader.
I only found this out by chance, when I happened to read the header that declares the MTKModelIOVertexDescriptorFromMetal function. The requirement is mentioned right at the end:
This method can only set vertex format, offset, bufferIndex, and stride information in the produced Model I/O vertex descriptor. It does not add any semantic information such as attribute names. Names must be set in the returned Model I/O vertex descriptor before it can be applied to a Model I/O mesh.
Related
I'm trying to make a simple 3D modeling tool.
Part of the work is moving a vertex (or vertices) to transform the model.
I used a dynamic vertex buffer because I thought it would need frequent updates,
but performance is too low on a high-polygon model, even though I change just one vertex.
Are there other methods, or am I doing it the wrong way?
here is my D3D11_BUFFER_DESC
Usage = D3D11_USAGE_DYNAMIC;
CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
BindFlags = D3D11_BIND_VERTEX_BUFFER;
ByteWidth = sizeof(ST_Vertex) * _nVertexCount;
D3D11_SUBRESOURCE_DATA d3dBufferData;
d3dBufferData.pSysMem = pVerticesInfo;
hr = pd3dDevice->CreateBuffer(&descBuffer, &d3dBufferData, &_pVertexBuffer);
and my update function:
D3D11_MAPPED_SUBRESOURCE d3dMappedResource;
pImmediateContext->Map(_pVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &d3dMappedResource);
ST_Vertex* pBuffer = (ST_Vertex*)d3dMappedResource.pData;
for (int i = 0; i < vIndice.size(); ++i)
{
pBuffer[vIndice[i]].xfPosition.x = pVerticesInfo[vIndice[i]].xfPosition.x;
pBuffer[vIndice[i]].xfPosition.y = pVerticesInfo[vIndice[i]].xfPosition.y;
pBuffer[vIndice[i]].xfPosition.z = pVerticesInfo[vIndice[i]].xfPosition.z;
}
pImmediateContext->Unmap(_pVertexBuffer, 0);
As mentioned in the previous answer, you are updating your whole buffer every time, which will be slow depending on model size.
The solution is indeed to implement partial updates. There are two cases: you want to update a single vertex, or you want to update arbitrary indices (for example, you want to move N vertices in one go, in different locations, like vertices 1, 20, and 23).
The first solution is rather simple. First, create your buffer with the following description:
Usage = D3D11_USAGE_DEFAULT;
CPUAccessFlags = 0;
BindFlags = D3D11_BIND_VERTEX_BUFFER;
ByteWidth = sizeof(ST_Vertex) * _nVertexCount;
D3D11_SUBRESOURCE_DATA d3dBufferData;
d3dBufferData.pSysMem = pVerticesInfo;
hr = pd3dDevice->CreateBuffer(&descBuffer, &d3dBufferData, &_pVertexBuffer);
This makes sure your vertex buffer is GPU-visible only.
Next, create a second dynamic buffer, _pCopyVertexBuffer, which has the size of a single vertex (you do not need any bind flags in that case, as it will be used only for copies):
Usage = D3D11_USAGE_DYNAMIC; //Staging works as well
CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
BindFlags = 0;
ByteWidth = sizeof(ST_Vertex);
// No initial data is needed here, so pass NULL for the second argument
// (a D3D11_SUBRESOURCE_DATA with pSysMem == NULL would be rejected).
hr = pd3dDevice->CreateBuffer(&descBuffer, NULL, &_pCopyVertexBuffer);
When you move a vertex, copy the changed vertex into the copy buffer:
ST_Vertex changedVertex; // filled with the new data for the moved vertex
D3D11_MAPPED_SUBRESOURCE d3dMappedResource;
// Note: we map the small copy buffer here, not the (now GPU-only) vertex buffer.
pImmediateContext->Map(_pCopyVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &d3dMappedResource);
ST_Vertex* pBuffer = (ST_Vertex*)d3dMappedResource.pData;
pBuffer->xfPosition.x = changedVertex.xfPosition.x;
pBuffer->xfPosition.y = changedVertex.xfPosition.y;
pBuffer->xfPosition.z = changedVertex.xfPosition.z;
pImmediateContext->Unmap(_pCopyVertexBuffer, 0);
Since you use D3D11_MAP_WRITE_DISCARD, make sure to write all attributes there (not only position).
Now once you're done, you can use ID3D11DeviceContext::CopySubresourceRegion to copy only the modified vertex into its location in the GPU vertex buffer.
I assume that vertexID is the index of the modified vertex:
pImmediateContext->CopySubresourceRegion(_pVertexBuffer,
0, //must be 0
vertexID * sizeof(ST_Vertex), //byte offset of the vertex in your GPU vertex buffer
0, //must be 0
0, //must be 0
_pCopyVertexBuffer,
0, //must be 0
NULL //in this case we copy the full content of _pCopyVertexBuffer, so we can set to null
);
Now if you want to update a list of vertices, things get more complicated and you have several options:
-First, you apply this single-vertex technique in a loop. This will work quite well if your changeset is small.
-If your changeset is very big (close to the full vertex count), you can probably just rewrite the whole buffer instead.
-An intermediate technique is to use a compute shader to perform the updates (that's the one I normally use, as it's the most flexible version).
Posting all the C++ binding code would be way too long, but here is the concept:
your vertex buffer must have BindFlags = D3D11_BIND_VERTEX_BUFFER | D3D11_BIND_UNORDERED_ACCESS; //this allows writing to it from compute
you need to create an ID3D11UnorderedAccessView for this buffer (so the shader can write to it)
you need the following misc flag: D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS //this allows writing to it as a RWByteAddressBuffer
you then create two dynamic structured buffers (I prefer those over byte-address buffers, but vertex buffer + structured is not allowed in dx11, so for the write one you need raw instead)
the first structured buffer has a stride of sizeof(ST_Vertex) (this is your changeset)
the second structured buffer has a stride of 4 (uint; these are the indices)
both structured buffers get an arbitrary element count (normally I use 1024 or 2048), so that will be the maximum number of vertices you can update in a single pass
for both structured buffers you need an ID3D11ShaderResourceView (shader visible, read only)
Then the update process is the following (a rough sketch of the dispatch code follows this list):
write the modified vertices and their locations into the structured buffers (using map with discard; it's fine to copy fewer elements than the full capacity)
attach both structured buffers for read
attach the ID3D11UnorderedAccessView for write
set your compute shader
call Dispatch
detach the ID3D11UnorderedAccessView for write (this is VERY important)
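To make the concept more concrete, here is a rough, untested sketch of just the per-frame dispatch calls. All resource creation is omitted, and every pXxx variable plus updateCount is a placeholder for the resources described above (they are not from the original post):
// 1. Upload the changeset (modified vertices + their indices) into the two
//    dynamic structured buffers with Map/D3D11_MAP_WRITE_DISCARD, and write
//    updateCount into the cbUpdateCount constant buffer.

// 2. Bind everything for the compute pass.
ID3D11ShaderResourceView* srvs[2] = { pModifiedVertexSRV, pModifiedIndicesSRV };
pImmediateContext->CSSetShaderResources(0, 2, srvs);

UINT initialCount = 0;
pImmediateContext->CSSetUnorderedAccessViews(0, 1, &pVertexBufferUAV, &initialCount);
pImmediateContext->CSSetConstantBuffers(0, 1, &pUpdateCountCB);
pImmediateContext->CSSetShader(pUpdateVerticesCS, NULL, 0);

// 3. One thread per modified vertex, 64 threads per group (matches [numthreads(64, 1, 1)]).
pImmediateContext->Dispatch((updateCount + 63) / 64, 1, 1);

// 4. VERY important: unbind the UAV so the buffer can be bound as a vertex buffer again.
ID3D11UnorderedAccessView* nullUAV = NULL;
pImmediateContext->CSSetUnorderedAccessViews(0, 1, &nullUAV, &initialCount);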
Here is some sample compute shader code (I assume your vertex is position-only, for simplicity):
cbuffer cbUpdateCount : register(b0)
{
uint updateCount;
};
RWByteAddressBuffer RWVertexPositionBuffer : register(u0);
StructuredBuffer<float3> ModifiedVertexBuffer : register(t0);
StructuredBuffer<uint> ModifiedVertexIndicesBuffer : register(t1);
//this is the stride of your vertex buffer, since here we use float3 it is 12 bytes
#define WRITE_STRIDE 12
[numthreads(64, 1, 1)]
void CS( uint3 tid : SV_DispatchThreadID )
{
//make sure you do not go past the update count, as we run 64 threads at a time
if (tid.x >= updateCount) { return; }
uint readIndex = tid.x;
uint writeIndex = ModifiedVertexIndicesBuffer[readIndex];
float3 vertex = ModifiedVertexBuffer[readIndex];
//byte address buffers do not understand float, asuint is a binary cast.
RWVertexPositionBuffer.Store3(writeIndex * WRITE_STRIDE, asuint(vertex));
}
For the purposes of this question I'm going to assume you already have a mechanism for selecting a vertex from a list of vertices based upon ray casting or some other picking method and a mechanism for creating a displacement vector detailing how the vertex was moved in model space.
The method you have for updating the buffer is sufficient for anything less than a few hundred vertices, but on large scale models it becomes extremely slow. This is because you're updating everything, rather than the individual vertices you modified.
To fix this, you should only update the vertices you have changed, and to do that you need to create a change set.
In concept, a change set is nothing more than a set of changes made to the data - a list of the vertices that need to be updated. Since we already know which vertices were modified (otherwise we couldn't have manipulated them), we can map in the GPU buffer, go to that vertex specifically, and copy just those vertices into the GPU buffer.
In your vertex modification method, record the index of the vertex that was modified by the user:
//Modify the vertex coordinates based on mouse displacement
pVerticesInfo[SelectedVertexIndex].xfPosition.x += DisplacementVector.x;
pVerticesInfo[SelectedVertexIndex].xfPosition.y += DisplacementVector.y;
pVerticesInfo[SelectedVertexIndex].xfPosition.z += DisplacementVector.z;
//Add the changed vertex to the list of changes.
changedVertices.add(SelectedVertexIndex);
//And update the GPU buffer
UpdateD3DBuffer();
In UpdateD3DBuffer(), do the following:
D3D11_MAPPED_SUBRESOURCE d3dMappedResource;
// Dynamic buffers cannot be mapped with plain D3D11_MAP_WRITE, and WRITE_DISCARD
// would throw away the vertices we are not touching, so use WRITE_NO_OVERWRITE here.
pImmediateContext->Map(_pVertexBuffer, 0, D3D11_MAP_WRITE_NO_OVERWRITE, 0, &d3dMappedResource);
ST_Vertex* pBuffer = (ST_Vertex*)d3dMappedResource.pData;
for (int i = 0; i < changedVertices.size(); ++i)
{
pBuffer[changedVertices[i]].xfPosition.x = pVerticesInfo[changedVertices[i]].xfPosition.x;
pBuffer[changedVertices[i]].xfPosition.y = pVerticesInfo[changedVertices[i]].xfPosition.y;
pBuffer[changedVertices[i]].xfPosition.z = pVerticesInfo[changedVertices[i]].xfPosition.z;
}
pImmediateContext->Unmap(_pVertexBuffer, 0);
changedVertices.clear();
This has the effect of only updating the vertices that have changed, rather than all vertices in the model.
This also allows for some more complex manipulations. You can select multiple vertices and move them all as a group, select a whole face and move all the connected vertices, or move entire regions of the model relatively easily, assuming your picking method is capable of handling this.
In addition, if you record the change sets with enough information (the affected vertices and the displacement vector), you can fairly easily implement an undo function by simply reversing the displacement vector and reapplying the selected change set.
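As a rough illustration (not from the answer above), an undo step might look like the following. It reuses the pVerticesInfo, changedVertices, and UpdateD3DBuffer() names from the snippets above; the ChangeSet record and the XMFLOAT3 displacement type are assumptions:
#include <vector>

// Hypothetical record of one edit: which vertices moved, and by how much.
struct ChangeSet
{
    std::vector<int> affectedVertices;  // indices into pVerticesInfo
    XMFLOAT3         displacement;      // the displacement that was applied
};

void UndoChangeSet(const ChangeSet& change)
{
    for (int index : change.affectedVertices)
    {
        // Reverse the displacement on the CPU-side copy...
        pVerticesInfo[index].xfPosition.x -= change.displacement.x;
        pVerticesInfo[index].xfPosition.y -= change.displacement.y;
        pVerticesInfo[index].xfPosition.z -= change.displacement.z;
        // ...and mark the vertex so the next partial update uploads it.
        changedVertices.add(index);
    }
    UpdateD3DBuffer();
}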
In Metal, when I already have a RenderCommandEncoder and have already done some work with it, how can I clear the depth buffer or the stencil buffer (but not both; I need to keep one)? For example, in OpenGL we have glClearDepthf / GL_DEPTH_BUFFER_BIT and glClearStencil / GL_STENCIL_BUFFER_BIT, but I didn't find any equivalent in Metal.
While it's true that Metal doesn't provide a mechanism to clear the depth or stencil buffers in the middle of a rendering pass, it's possible to create a near-trivial pipeline state that allows you to do so as selectively as you like.
In the course of porting some OpenGL code to Metal, I found myself with a need to clear a section of the depth buffer that corresponds to the bounds of the currently set viewport. Here was my solution:
In my setup code, I create a specialized MTLRenderPipelineState and MTLDepthStencilState that are used only for the purpose of clearing the depth buffer, and stash them in my MTKView subclass with my other long-lived resources:
@property (nonatomic, retain) id<MTLRenderPipelineState> pipelineDepthClear;
@property (nonatomic, retain) id<MTLDepthStencilState> depthStencilClear;
[...]
// Special depth stencil state for clearing the depth buffer
MTLDepthStencilDescriptor *depthStencilDescriptor = [[MTLDepthStencilDescriptor alloc] init];
// Don't actually perform a depth test, just always write the buffer
depthStencilDescriptor.depthCompareFunction = MTLCompareFunctionAlways;
depthStencilDescriptor.depthWriteEnabled = YES;
depthStencilDescriptor.label = @"depthStencilClear";
depthStencilClear = [self.device newDepthStencilStateWithDescriptor:depthStencilDescriptor];
// Special pipeline state just for clearing the depth buffer.
MTLRenderPipelineDescriptor *renderPipelineDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
// Omit the color attachment, since we don't want to write the color buffer for this case.
renderPipelineDescriptor.depthAttachmentPixelFormat = self.depthStencilPixelFormat;
renderPipelineDescriptor.rasterSampleCount = self.sampleCount;
renderPipelineDescriptor.vertexFunction = [self.library newFunctionWithName:@"vertex_depth_clear"];
renderPipelineDescriptor.vertexFunction.label = #"vertexDepthClear";
renderPipelineDescriptor.fragmentFunction = [self.library newFunctionWithName:@"fragment_depth_clear"];
renderPipelineDescriptor.fragmentFunction.label = #"fragmentDepthClear";
MTLVertexDescriptor *vertexDescriptor = [[MTLVertexDescriptor alloc] init];
vertexDescriptor.attributes[0].format = MTLVertexFormatFloat2;
vertexDescriptor.attributes[0].offset = 0;
vertexDescriptor.attributes[0].bufferIndex = 0;
vertexDescriptor.layouts[0].stepRate = 1;
vertexDescriptor.layouts[0].stepFunction = MTLVertexStepFunctionPerVertex;
vertexDescriptor.layouts[0].stride = 8;
renderPipelineDescriptor.vertexDescriptor = vertexDescriptor;
NSError* error = NULL;
renderPipelineDescriptor.label = @"pipelineDepthClear";
self.pipelineDepthClear = [self.device newRenderPipelineStateWithDescriptor:renderPipelineDescriptor error:&error];
and set up the matching vertex and fragment functions in my .metal file:
struct DepthClearVertexIn
{
float2 position [[ attribute(0) ]];
};
struct DepthClearVertexOut
{
float4 position [[ position ]];
};
struct DepthClearFragmentOut
{
float depth [[depth(any)]];
};
vertex DepthClearVertexOut
vertex_depth_clear( DepthClearVertexIn in [[ stage_in ]])
{
DepthClearVertexOut out;
// Just pass the position through. We're clearing in NDC space.
out.position = float4(in.position, 0.5, 1.0);
return out;
}
fragment DepthClearFragmentOut fragment_depth_clear()
{
DepthClearFragmentOut out;
out.depth = 1.0;
return out;
}
Finally, the body of my clearDepthBuffer() method looks like this:
// Set up the pipeline and depth/stencil state to write a clear value to only the depth buffer.
[view.commandEncoder setDepthStencilState:view.depthStencilClear];
[view.commandEncoder setRenderPipelineState:view.pipelineDepthClear];
// Normalized Device Coordinates of a tristrip we'll draw to clear the buffer
// (the vertex shader set in pipelineDepthClear ignores all transforms and just passes these through)
float clearCoords[8] = {
-1, -1,
1, -1,
-1, 1,
1, 1
};
[view.commandEncoder setVertexBytes:clearCoords length:sizeof(float) * 8 atIndex:0];
[view.commandEncoder drawPrimitives:MTLPrimitiveTypeTriangleStrip vertexStart:0 vertexCount:4];
// Be sure to reset the setDepthStencilState and setRenderPipelineState for further drawing
Since the vertex shader doesn't transform the coordinates at all, I specify the input geometry in NDC space, so a rectangle from (-1, -1) to (1, 1) covers the entire viewport. The same technique could be used to clear any portion of your depth buffer if you set up the geometry and/or transforms appropriately.
A similar technique should work for clearing stencil buffers, but I'll leave that as an exercise to the reader. ;)
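For what it's worth, here is an untested sketch of what the stencil flavour might look like, following the same pattern. The pipeline used with it would also need stencilAttachmentPixelFormat set to the view's depth/stencil format, and all names here are illustrative:
// A depth/stencil state that leaves depth alone and unconditionally replaces the stencil value.
MTLDepthStencilDescriptor *stencilClearDescriptor = [[MTLDepthStencilDescriptor alloc] init];
stencilClearDescriptor.depthCompareFunction = MTLCompareFunctionAlways;
stencilClearDescriptor.depthWriteEnabled = NO;

MTLStencilDescriptor *stencilDescriptor = [[MTLStencilDescriptor alloc] init];
stencilDescriptor.stencilCompareFunction = MTLCompareFunctionAlways;
stencilDescriptor.stencilFailureOperation = MTLStencilOperationReplace;
stencilDescriptor.depthFailureOperation = MTLStencilOperationReplace;
stencilDescriptor.depthStencilPassOperation = MTLStencilOperationReplace;
stencilDescriptor.writeMask = 0xFF;
stencilClearDescriptor.frontFaceStencil = stencilDescriptor;
stencilClearDescriptor.backFaceStencil = stencilDescriptor;
id<MTLDepthStencilState> stencilClearState = [device newDepthStencilStateWithDescriptor:stencilClearDescriptor];

// At draw time, the stencil reference value becomes the "clear" value:
// [encoder setDepthStencilState:stencilClearState];
// [encoder setStencilReferenceValue:0];
// ...then draw the same full-screen tristrip as above.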
Problem constraints:
I am not using three.js or similar, but pure WebGL
WebGL 2 is not an option either
I have a couple of models loaded stored as Vertices and Normals arrays (coming from an STL reader).
So far there is no problem when both models are the same size. Whenever I load 2 different models, an error message is shown in the browser:
WebGL: INVALID_OPERATION: drawArrays: attempt to access out of bounds arrays
so I suspect I am not manipulating multiple buffers correctly.
The models are loaded using the following typescript method:
public AddModel(model: Model)
{
this.models.push(model);
model.VertexBuffer = this.gl.createBuffer();
model.NormalsBuffer = this.gl.createBuffer();
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Vertices, this.gl.STATIC_DRAW);
model.CoordLocation = this.gl.getAttribLocation(this.shaderProgram, "coordinates");
this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);
this.gl.enableVertexAttribArray(model.CoordLocation);
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.NormalsBuffer);
this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Normals, this.gl.STATIC_DRAW);
model.NormalLocation = this.gl.getAttribLocation(this.shaderProgram, "vertexNormal");
this.gl.vertexAttribPointer(model.NormalLocation, 3, this.gl.FLOAT, false, 0, 0);
this.gl.enableVertexAttribArray(model.NormalLocation);
}
After loaded, the Render method is called for drawing all loaded models:
public Render(viewMatrix: Matrix4, perspective: Matrix4)
{
this.gl.uniformMatrix4fv(this.viewRef, false, viewMatrix);
this.gl.uniformMatrix4fv(this.perspectiveRef, false, perspective);
this.gl.uniformMatrix4fv(this.normalTransformRef, false, viewMatrix.NormalMatrix());
// Clear the canvas
this.gl.clearColor(0, 0, 0, 0);
this.gl.viewport(0, 0, this.canvas.width, this.canvas.height);
this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT);
// Draw the triangles
if (this.models.length > 0)
{
for (var i = 0; i < this.models.length; i++)
{
var model = this.models[i];
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
this.gl.enableVertexAttribArray(model.NormalLocation);
this.gl.enableVertexAttribArray(model.CoordLocation);
this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);
this.gl.uniformMatrix4fv(this.modelRef, false, model.TransformMatrix);
this.gl.uniform3fv(this.materialdiffuseRef, model.Color.AsVec3());
this.gl.drawArrays(this.gl.TRIANGLES, 0, model.TrianglesCount);
}
}
}
One model works just fine. Two cloned models also work OK. Different models fail with the error mentioned.
What am I missing?
The normal way to use WebGL
At init time
for each shader program
create and compile vertex shader
create and compile fragment shader
create program, attach shaders, link program
for each model
for each type of vertex data (positions, normals, colors, texcoords)
create a buffer
copy data to buffer
create textures
Then at render time
for each model
use shader program appropriate for model
bind buffers, enable and setup attributes
bind textures and set uniforms
call drawArrays or drawElements
But looking at your code, it's binding buffers and enabling and setting up attributes at init time instead of at render time; a sketch of that split is shown below.
Maybe see this article and this one.
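Applied to the code in the question, a minimal sketch of that split might look like this. It assumes the same Model fields and shader attribute names as above, and BindModel is a hypothetical helper you would call inside the Render loop right before drawArrays:
public AddModel(model: Model)
{
    this.models.push(model);

    // Init time: only create the buffers and upload the data.
    model.VertexBuffer = this.gl.createBuffer();
    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
    this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Vertices, this.gl.STATIC_DRAW);

    model.NormalsBuffer = this.gl.createBuffer();
    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.NormalsBuffer);
    this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Normals, this.gl.STATIC_DRAW);

    // Attribute locations depend only on the shader program, not on the model.
    model.CoordLocation = this.gl.getAttribLocation(this.shaderProgram, "coordinates");
    model.NormalLocation = this.gl.getAttribLocation(this.shaderProgram, "vertexNormal");
}

// Render time: point both attributes at this model's buffers before each draw.
private BindModel(model: Model)
{
    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
    this.gl.enableVertexAttribArray(model.CoordLocation);
    this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);

    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.NormalsBuffer);
    this.gl.enableVertexAttribArray(model.NormalLocation);
    this.gl.vertexAttribPointer(model.NormalLocation, 3, this.gl.FLOAT, false, 0, 0);
}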
I sometimes find myself struggling between declaring the buffers (with createBuffer/bindBuffer/bufferData) in different orders and rebinding them in other parts of the code, usually in the draw loop.
If I don't rebind the vertex buffer before drawing arrays, the console complains about an attempt to access out of range vertices. My suspicion is that the last bound object is passed to the pointer and then to drawArrays, but when I change the order at the beginning of the code, nothing changes. What effectively works is rebinding the buffer in the draw loop. So I can't really understand the logic behind that. When do you need to rebind? Why do you need to rebind? What is attribute 0 referring to?
I don't know if this will help. As some people have said, GL/WebGL has a bunch of internal state. All the functions you call set up that state. When it's all set up, you call drawArrays or drawElements and all of that state is used to draw things.
This has been explained elsewhere on SO but binding a buffer is just setting 1 of 2 global variables inside WebGL. After that you refer to the buffer by its bind point.
You can think of it like this
gl = function() {
// internal WebGL state
let lastError;
let arrayBuffer = null;
let vertexArray = {
elementArrayBuffer: null,
attributes: [
{ enabled: false, type: gl.FLOAT, size: 3, normalized: false,
stride: 0, offset: 0, buffer: null },
{ enabled: false, type: gl.FLOAT, size: 3, normalized: false,
stride: 0, offset: 0, buffer: null },
{ enabled: false, type: gl.FLOAT, size: 3, normalized: false,
stride: 0, offset: 0, buffer: null },
{ enabled: false, type: gl.FLOAT, size: 3, normalized: false,
stride: 0, offset: 0, buffer: null },
{ enabled: false, type: gl.FLOAT, size: 3, normalized: false,
stride: 0, offset: 0, buffer: null },
...
],
}
// these values are used when a vertex attrib is disabled
let attribValues = [
[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1],
...
];
...
// Implementation of gl.bindBuffer.
// note this function is doing nothing but setting 2 internal variables.
this.bindBuffer = function(bindPoint, buffer) {
switch(bindPoint) {
case gl.ARRAY_BUFFER:
arrayBuffer = buffer;
break;
case gl.ELEMENT_ARRAY_BUFFER:
vertexArray.elementArrayBuffer = buffer;
break;
default:
lastError = gl.INVALID_ENUM;
break;
}
};
...
}();
After that other WebGL functions reference those. For example gl.bufferData might do something like
// implementation of gl.bufferData
// Notice you don't pass in a buffer. You pass in a bindPoint.
// The function gets the buffer from one of the internal variables you set by
// previously calling gl.bindBuffer
this.bufferData = function(bindPoint, data, usage) {
// lookup the buffer from the bindPoint
var buffer;
switch (bindPoint) {
case gl.ARRAY_BUFFER:
buffer = arrayBuffer;
break;
case gl.ELEMENT_ARRAY_BUFFER:
buffer = vertexArray.elementArrayBuffer;
break;
default:
lastError = gl.INVALID_ENUM;
break;
}
// copy data into buffer
buffer.copyData(data); // just making this up
buffer.setUsage(usage); // just making this up
};
Separate from those bind points there are a number of attributes. The attributes are also global state by default. They define how to pull data out of the buffers to supply to your vertex shader. Calling gl.getAttribLocation(someProgram, "nameOfAttribute") tells you which attribute the vertex shader will look at to get data out of a buffer.
So, there are 4 functions that you use to configure how an attribute will get data from a buffer: gl.enableVertexAttribArray, gl.disableVertexAttribArray, gl.vertexAttribPointer, and gl.vertexAttrib??.
They're effectively implemented something like this
this.enableVertexAttribArray = function(location) {
const attribute = vertexArray.attributes[location];
attribute.enabled = true; // true means get data from attribute.buffer
};
this.disableVertexAttribArray = function(location) {
const attribute = vertexArray.attributes[location];
attribute.enabled = false; // false means get data from attribValues[location]
};
this.vertexAttribPointer = function(location, size, type, normalized, stride, offset) {
const attribute = vertexArray.attributes[location];
attribute.size = size; // num values to pull from buffer per vertex shader iteration
attribute.type = type; // type of values to pull from buffer
attribute.normalized = normalized; // whether or not to normalize
attribute.stride = stride; // number of bytes to advance for each iteration of the vertex shader. 0 = compute from type, size
attribute.offset = offset; // where to start in buffer.
// IMPORTANT!!! Associates whatever buffer is currently *bound* to
// "arrayBuffer" to this attribute
attribute.buffer = arrayBuffer;
};
this.vertexAttrib4f = function(location, x, y, z, w) {
const attribValue = attribValues[location];
attribValue[0] = x;
attribValue[1] = y;
attribValue[2] = z;
attribValue[3] = w;
};
Now, when you call gl.drawArrays or gl.drawElements the system knows how you want to pull data out of the buffers you made to supply your vertex shader. See here for how that works.
Since the attributes are global state, that means every time you call drawElements or drawArrays, however you have the attributes set up is how they'll be used. If you set up attributes #1 and #2 to buffers that each have 3 vertices but you ask to draw 6 vertices with gl.drawArrays, you'll get an error. Similarly, if you make an index buffer which you bind to the gl.ELEMENT_ARRAY_BUFFER bind point and that buffer has an index that is > 2, you'll get that index out of range error. If your buffers only have 3 vertices then the only valid indices are 0, 1, and 2.
Normally, every time you draw something different you rebind all the attributes needed to draw that thing. Drawing a cube that has positions and normals? Bind the buffer with position data, setup the attribute being used for positions, bind the buffer with normal data, setup the attribute being used for normals, now draw. Next you draw a sphere with positions, vertex colors and texture coordinates. Bind the buffer that contains position data, setup the attribute being used for positions. Bind the buffer that contains vertex color data, setup the attribute being used for vertex colors. Bind the buffer that contains texture coordinates, setup the attribute being used for texture coordinates.
The only time you don't rebind buffers is if you're drawing the same thing more than once. For example drawing 10 cubes. You'd rebind the buffers, then set the uniforms for one cube, draw it, set the uniforms for the next cube, draw it, repeat.
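As a rough sketch of that per-object rebinding (the buffer and location variables here are placeholders for whatever your app created at init time):
function drawCube() {
  // Positions
  gl.bindBuffer(gl.ARRAY_BUFFER, cubePositionBuffer);
  gl.enableVertexAttribArray(positionLoc);
  gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
  // Normals
  gl.bindBuffer(gl.ARRAY_BUFFER, cubeNormalBuffer);
  gl.enableVertexAttribArray(normalLoc);
  gl.vertexAttribPointer(normalLoc, 3, gl.FLOAT, false, 0, 0);
  // ...set uniforms for this cube, then draw...
  gl.drawArrays(gl.TRIANGLES, 0, cubeVertexCount);
}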
I should also add that there's an extension, OES_vertex_array_object, whose functionality is also a core feature of WebGL 2.0. A vertex array object is the global state above called vertexArray, which includes the elementArrayBuffer and all the attributes.
Calling gl.createVertexArray makes a new one of those. Calling gl.bindVertexArray sets the global attributes to point to the ones in the bound vertexArray.
Calling gl.bindVertexArray would then be
this.bindVertexArray = function(vao) {
vertexArray = vao ? vao : defaultVertexArray;
}
This has the advantage of letting you set up all attributes and buffers at init time; then, at draw time, a single WebGL call restores all of that buffer and attribute state.
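For example, with the extension in WebGL 1 the pattern looks roughly like this (buffer and location names are placeholders):
const ext = gl.getExtension("OES_vertex_array_object");

// Init time: record the buffer bindings and attribute setup into a VAO.
const vao = ext.createVertexArrayOES();
ext.bindVertexArrayOES(vao);
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
// ...same for normals, texcoords, and the ELEMENT_ARRAY_BUFFER if you draw indexed...
ext.bindVertexArrayOES(null);

// Draw time: one call restores all of that state.
ext.bindVertexArrayOES(vao);
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);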
Here is a webgl state diagram that might help visualize this better.
I understand it is possible to pass a 1D array buffer to a Metal shader, but is it possible to have it output to a 1D array buffer? I don't want it to write to a texture - I just need an array of processed values.
I can get values out of the shader with the following code, but only one value at a time. Ideally I could get a whole array out (in the same order as the input 1D array buffer).
Any examples or pointers would be greatly appreciated!
var resultdata = [Float](repeating: 0, count: 3)
let outVectorBuffer = device.makeBuffer(bytes: &resultdata, length: MemoryLayout<float3>.size, options: [])
commandEncoder!.setBuffer(outVectorBuffer, offset: 0, index: 6)
commandBuffer!.addCompletedHandler {commandBuffer in
let data = NSData(bytes: outVectorBuffer!.contents(), length: MemoryLayout<float3>.size)
var out: float3 = float3(0,0,0)
data.getBytes(&out, length: MemoryLayout<float3>.size)
print("data: \(out)")
}
//In the Shader:
kernel void compute1d(
    ...
    device float3 &outBuffer [[buffer(6)]],
    ...)
{
    outBuffer = float3(1.0, 2.0, 3.0);
}
Two things:
You need to create the buffer large enough to hold however many float3 elements you want. You really need to use .stride and not .size when calculating the buffer size, though. In particular, float3 has 16-byte alignment, so there's padding between elements in an array. So, you would use something like MemoryLayout<float3>.stride * desiredNumberOfElements.
Then, in the shader, you need to change the declaration of outBuffer from a reference to a pointer. So, device float3 *outBuffer [[buffer(6)]]. Then you can index into it to access the elements (e.g. outBuffer[2] = ...;).
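Putting that together, here is a minimal Swift sketch under those assumptions; the element count is arbitrary, device/commandEncoder/commandBuffer come from the question's setup, and SIMD3<Float> is the current Swift spelling of float3:
let count = 1024  // however many float3 results the kernel writes

// Use stride, not size, so the 16-byte alignment padding of float3 is included.
let length = MemoryLayout<SIMD3<Float>>.stride * count
let outVectorBuffer = device.makeBuffer(length: length, options: .storageModeShared)!
commandEncoder!.setBuffer(outVectorBuffer, offset: 0, index: 6)

commandBuffer!.addCompletedHandler { _ in
    // Reinterpret the raw buffer contents as an array of SIMD3<Float>.
    let pointer = outVectorBuffer.contents().bindMemory(to: SIMD3<Float>.self, capacity: count)
    let results = Array(UnsafeBufferPointer(start: pointer, count: count))
    print("data: \(results.prefix(3))")
}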