Understanding WebGL State

Is there any documentation I can find somewhere which documents the preconditions required for WebGL calls?
I have gotten a fairly strong grasp of the WebGL basics, but now I am creating my own 'framework' and I'm after a deeper understanding.
For example, take the enableVertexAttribArray call. Does this call require the current shader to be in 'use'? Where does it store this 'enabled' flag? If I switch shader programs, do I have to re-enable it afterwards when I use it again?
I'd love some kind of diagram explaining where all the 'stateful' information is being stored, and when it will go out of context.
Another example: with gl.bindBuffer, are the bindings for ARRAY_BUFFER and ELEMENT_ARRAY_BUFFER stored in separate locations?
With all this in mind, is it recommended to keep a parallel copy of the state in JavaScript to avoid redundant WebGL calls? i.e. storing a 'currentBuffer' object to avoid binding the same buffer over and over if it's already bound. I can imagine that in the general case this becomes quite a bit of state duplication, but it could be quite good for performance.
Bit of a fundamental question but hard to find info on.

I recently gave a similar answer, but I just said there's quite a lot of state and linked to the spec without copying anything over. Lesson learned; I can fix that. But a fair warning: when people call WebGL "stateful", they mean it. The document that lists all the errors WebGL can generate, and under which conditions, is the spec itself. I'm not copying over all the possible errors, because that would easily double the length of this answer, if not more.
First, since you explicitly asked about binding targets, here is how you query all of those (not counting extensions):
gl.getParameter(gl.ARRAY_BUFFER_BINDING);
gl.getParameter(gl.ELEMENT_ARRAY_BUFFER_BINDING);
gl.getParameter(gl.FRAMEBUFFER_BINDING);
gl.getParameter(gl.RENDERBUFFER_BINDING);
gl.getParameter(gl.TEXTURE_BINDING_2D);
gl.getParameter(gl.TEXTURE_BINDING_CUBE_MAP);
You don't have to go through the huge list below just to find those. But if you write a framework and want to understand the state, you may want to use all the others too (a small snapshot helper follows the list).
getParameter(GLenum pname)
pname returned type
ACTIVE_TEXTURE GLenum
ALIASED_LINE_WIDTH_RANGE Float32Array (with 2 elements)
ALIASED_POINT_SIZE_RANGE Float32Array (with 2 elements)
ALPHA_BITS GLint
ARRAY_BUFFER_BINDING WebGLBuffer
BLEND GLboolean
BLEND_COLOR Float32Array (with 4 values)
BLEND_DST_ALPHA GLenum
BLEND_DST_RGB GLenum
BLEND_EQUATION_ALPHA GLenum
BLEND_EQUATION_RGB GLenum
BLEND_SRC_ALPHA GLenum
BLEND_SRC_RGB GLenum
BLUE_BITS GLint
COLOR_CLEAR_VALUE Float32Array (with 4 values)
COLOR_WRITEMASK sequence<GLboolean> (with 4 values)
COMPRESSED_TEXTURE_FORMATS Uint32Array
CULL_FACE GLboolean
CULL_FACE_MODE GLenum
CURRENT_PROGRAM WebGLProgram
DEPTH_BITS GLint
DEPTH_CLEAR_VALUE GLfloat
DEPTH_FUNC GLenum
DEPTH_RANGE Float32Array (with 2 elements)
DEPTH_TEST GLboolean
DEPTH_WRITEMASK GLboolean
DITHER GLboolean
ELEMENT_ARRAY_BUFFER_BINDING WebGLBuffer
FRAMEBUFFER_BINDING WebGLFramebuffer
FRONT_FACE GLenum
GENERATE_MIPMAP_HINT GLenum
GREEN_BITS GLint
IMPLEMENTATION_COLOR_READ_FORMAT GLenum
IMPLEMENTATION_COLOR_READ_TYPE GLenum
LINE_WIDTH GLfloat
MAX_COMBINED_TEXTURE_IMAGE_UNITS GLint
MAX_CUBE_MAP_TEXTURE_SIZE GLint
MAX_FRAGMENT_UNIFORM_VECTORS GLint
MAX_RENDERBUFFER_SIZE GLint
MAX_TEXTURE_IMAGE_UNITS GLint
MAX_TEXTURE_SIZE GLint
MAX_VARYING_VECTORS GLint
MAX_VERTEX_ATTRIBS GLint
MAX_VERTEX_TEXTURE_IMAGE_UNITS GLint
MAX_VERTEX_UNIFORM_VECTORS GLint
MAX_VIEWPORT_DIMS Int32Array (with 2 elements)
PACK_ALIGNMENT GLint
POLYGON_OFFSET_FACTOR GLfloat
POLYGON_OFFSET_FILL GLboolean
POLYGON_OFFSET_UNITS GLfloat
RED_BITS GLint
RENDERBUFFER_BINDING WebGLRenderbuffer
RENDERER DOMString
SAMPLE_BUFFERS GLint
SAMPLE_COVERAGE_INVERT GLboolean
SAMPLE_COVERAGE_VALUE GLfloat
SAMPLES GLint
SCISSOR_BOX Int32Array (with 4 elements)
SCISSOR_TEST GLboolean
SHADING_LANGUAGE_VERSION DOMString
STENCIL_BACK_FAIL GLenum
STENCIL_BACK_FUNC GLenum
STENCIL_BACK_PASS_DEPTH_FAIL GLenum
STENCIL_BACK_PASS_DEPTH_PASS GLenum
STENCIL_BACK_REF GLint
STENCIL_BACK_VALUE_MASK GLuint
STENCIL_BACK_WRITEMASK GLuint
STENCIL_BITS GLint
STENCIL_CLEAR_VALUE GLint
STENCIL_FAIL GLenum
STENCIL_FUNC GLenum
STENCIL_PASS_DEPTH_FAIL GLenum
STENCIL_PASS_DEPTH_PASS GLenum
STENCIL_REF GLint
STENCIL_TEST GLboolean
STENCIL_VALUE_MASK GLuint
STENCIL_WRITEMASK GLuint
SUBPIXEL_BITS GLint
TEXTURE_BINDING_2D WebGLTexture
TEXTURE_BINDING_CUBE_MAP WebGLTexture
UNPACK_ALIGNMENT GLint
UNPACK_COLORSPACE_CONVERSION_WEBGL GLenum
UNPACK_FLIP_Y_WEBGL GLboolean
UNPACK_PREMULTIPLY_ALPHA_WEBGL GLboolean
VENDOR DOMString
VERSION DOMString
VIEWPORT Int32Array (with 4 elements)
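For framework debugging it can help to snapshot a handful of these values into a plain object for logging or diffing. A minimal sketch of such a helper; the function name is my own, not part of WebGL:
function snapshotState(gl, pnames) {
  // Look up each constant on the context and query its current value.
  var state = {};
  pnames.forEach(function (pname) {
    state[pname] = gl.getParameter(gl[pname]);
  });
  return state;
}

// Usage:
var before = snapshotState(gl, ['ARRAY_BUFFER_BINDING', 'CURRENT_PROGRAM', 'BLEND']);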
enableVertexAttribArray and vertexAttribPointer set the state of a vertex attribute array at a specific index, and have nothing to do with the program. You can also query all of this state, by that same index.
getVertexAttrib (GLuint index, GLenum pname )
pname returned type
VERTEX_ATTRIB_ARRAY_BUFFER_BINDING WebGLBuffer
VERTEX_ATTRIB_ARRAY_ENABLED GLboolean
VERTEX_ATTRIB_ARRAY_SIZE GLint
VERTEX_ATTRIB_ARRAY_STRIDE GLint
VERTEX_ATTRIB_ARRAY_TYPE GLenum
VERTEX_ATTRIB_ARRAY_NORMALIZED GLboolean
CURRENT_VERTEX_ATTRIB Float32Array (with 4 elements)
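This directly answers the question above: the 'enabled' flag lives with the attribute index, not with the program, so switching programs does not touch it. A quick experiment you can run yourself, assuming a WebGL context gl and two linked programs programA and programB:
gl.enableVertexAttribArray(0);
gl.useProgram(programA);
console.log(gl.getVertexAttrib(0, gl.VERTEX_ATTRIB_ARRAY_ENABLED));  // true
gl.useProgram(programB);
console.log(gl.getVertexAttrib(0, gl.VERTEX_ATTRIB_ARRAY_ENABLED));  // still true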
If you now look at the state of the program, there isn't much overlap. One could even go as far as running experiments to see for yourself how the state changes.
getProgramParameter(WebGLProgram? program, GLenum pname)
pname returned type
DELETE_STATUS GLboolean
LINK_STATUS GLboolean
VALIDATE_STATUS GLboolean
ATTACHED_SHADERS GLint
ACTIVE_ATTRIBUTES GLint
ACTIVE_UNIFORMS GLint
Or maybe you want to check how your shader is doing. Still no real overlap in sight.
getShaderParameter(WebGLShader? shader, GLenum pname)
pname returned type
SHADER_TYPE GLenum
DELETE_STATUS GLboolean
COMPILE_STATUS GLboolean
You saw that getVertexAttrib returns a buffer, so that seems relevant. The buffer itself isn't much more exciting than, say, a plain ArrayBuffer. The contents are just not in JavaScript, but far away in GPU land, doing hard work to support the family at home.
getBufferParameter(GLenum target, GLenum pname)
pname returned type
BUFFER_SIZE GLint
BUFFER_USAGE GLenum
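Note that those two are queried per binding target, not per buffer object, which ties back to the bindBuffer part of the question. A quick check, assuming a context gl:
var buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([0, 0, 0]), gl.STATIC_DRAW);
console.log(gl.getBufferParameter(gl.ARRAY_BUFFER, gl.BUFFER_SIZE));   // 12 (bytes)
console.log(gl.getBufferParameter(gl.ARRAY_BUFFER, gl.BUFFER_USAGE) === gl.STATIC_DRAW);  // true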
So programs and vertex arrays probably don't have that much in common. Difficult to deduce by guessing, but really simple to find out if you know (or abstract away) all those getters.
For completeness, and to help you understand state, I also copy over all the other things.
getFramebufferAttachmentParameter(GLenum target, GLenum attachment, GLenum pname)
pname returned type
FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE GLenum
FRAMEBUFFER_ATTACHMENT_OBJECT_NAME WebGLRenderbuffer or WebGLTexture
FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL GLint
FRAMEBUFFER_ATTACHMENT_TEXTURE_CUBE_MAP_FACE GLint
getRenderbufferParameter(GLenum target, GLenum pname)
pname returned type
RENDERBUFFER_WIDTH GLint
RENDERBUFFER_HEIGHT GLint
RENDERBUFFER_INTERNAL_FORMAT GLenum
RENDERBUFFER_RED_SIZE GLint
RENDERBUFFER_GREEN_SIZE GLint
RENDERBUFFER_BLUE_SIZE GLint
RENDERBUFFER_ALPHA_SIZE GLint
RENDERBUFFER_DEPTH_SIZE GLint
RENDERBUFFER_STENCIL_SIZE GLint
getTexParameter(GLenum target, GLenum pname)
pname returned type
TEXTURE_MAG_FILTER GLenum
TEXTURE_MIN_FILTER GLenum
TEXTURE_WRAP_S GLenum
TEXTURE_WRAP_T GLenum
I haven't given up on this list just yet. Maybe you want to check the value of your uniforms; that's really useful sometimes.
getUniform(WebGLProgram? program, WebGLUniformLocation? location)
Here are a few more really useful getters:
getActiveAttrib(WebGLProgram? program, GLuint index)
getActiveUniform(WebGLProgram? program, GLuint index)
And of course the ones everybody loves:
getUniformLocation(WebGLProgram? program, DOMString name)
getAttribLocation(WebGLProgram? program, DOMString name)
getProgramInfoLog(WebGLProgram? program)
getShaderInfoLog(WebGLShader? shader)
getShaderSource(WebGLShader? shader)
getShaderPrecisionFormat(GLenum shadertype, GLenum precisiontype)
getSupportedExtensions()
Oh, and this one actually belongs with the vertex attributes; I almost forgot. It's separate for legacy reasons.
getVertexAttribOffset(GLuint index, GLenum pname)
(pname has to be VERTEX_ATTRIB_ARRAY_POINTER on that one.)
Unless I forgot something, that's basically all of WebGL state. It may seem like a lot, but I personally found all of it really helpful for understanding how things work. Without those getters you are basically blindfolded, have to guess all the time, and must follow tutorials that tell you the exact order to call functions in. That doesn't work well with WebGL, simply because there are so many things to set and so many mistakes you can make.

Screenius' answer is pretty complete.
Here is a webgl state diagram
The terser version is:
In WebGL 1.0, uniforms are per-program, and texture filtering and wrapping are per-texture. Everything else is global. That includes all attributes and all texture units.
Pasted from some previous answers that cover this:
You can think of attributes and texture units like this:
gl = {
  arrayBuffer: someBuffer,
  vertexArray: {
    elementArrayBuffer: someOtherBuffer,
    attributes: [],
  },
};
When you call gl.bindBuffer you're just setting one of 2 global variables in the gl state.
gl.bindBuffer = function(bindPoint, buffer) {
  switch (bindPoint) {
    case this.ARRAY_BUFFER:
      this.arrayBuffer = buffer;
      break;
    case this.ELEMENT_ARRAY_BUFFER:
      this.vertexArray.elementArrayBuffer = buffer;
      break;
  }
};
When you call gl.vertexAttribPointer it copies the current value of arrayBuffer to the specified attribute.
gl.vertexAttribPointer = function(index, size, type, normalized, stride, offset) {
  var attribute = this.vertexArray.attributes[index];
  attribute.size = size;
  attribute.type = type;
  attribute.normalized = normalized;
  attribute.stride = stride;
  attribute.offset = offset;
  attribute.buffer = this.arrayBuffer;  // copies the current buffer reference.
};
Textures work similarly
gl = {
  activeTextureUnit: 0,
  textureUnits: [],
};
gl.activeTexture sets which texture unit you're working on.
gl.activeTexture = function(unit) {
  this.activeTextureUnit = unit - this.TEXTURE0;  // make it zero-based.
};
Every texture unit has both a TEXTURE_2D and a TEXTURE_CUBE_MAP binding, so gl.bindTexture(bindPoint, texture) is effectively
gl.bindTexture = function(bindPoint, texture) {
  var textureUnit = this.textureUnits[this.activeTextureUnit];
  switch (bindPoint) {
    case this.TEXTURE_2D:
      textureUnit.texture2D = texture;
      break;
    case this.TEXTURE_CUBE_MAP:
      textureUnit.textureCubeMap = texture;
      break;
  }
};
The rest is global state, like the clear color, viewport, blend settings, stencil settings, and the enable/disable stuff such as DEPTH_TEST and SCISSOR_TEST.
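And because every bind is just a variable write like the ones above, the JavaScript-side shadowing the question asks about is cheap and safe, as long as every bind goes through your wrapper. A minimal sketch of the 'currentBuffer' idea; the wrapper is my own, not part of WebGL:
var currentArrayBuffer = null;
function bindArrayBufferCached(gl, buffer) {
  // Skip the redundant call if this buffer is already bound.
  if (currentArrayBuffer !== buffer) {
    currentArrayBuffer = buffer;
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  }
}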
Just a side note: if you enable the OES_vertex_array_object extension, the vertexArray in the example above becomes its own object that you can bind with bindVertexArrayOES.
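A minimal sketch of that extension in use, assuming a WebGL 1 context gl; the actual buffer and attribute setup is omitted:
var ext = gl.getExtension('OES_vertex_array_object');
if (ext) {
  var vao = ext.createVertexArrayOES();
  ext.bindVertexArrayOES(vao);
  // bindBuffer(ELEMENT_ARRAY_BUFFER), enableVertexAttribArray and
  // vertexAttribPointer calls made here are recorded on vao instead
  // of the global vertex array state.
  ext.bindVertexArrayOES(null);
  // Later, a single bind restores the whole attribute setup:
  ext.bindVertexArrayOES(vao);
}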

Related

swift/OpenGL ES 2.0 - Creating a texture from a YUV420 buffer

I've got a YUV420 pixel buffer in a UInt8 array. I need to create a texture out of it in order to render it with OpenGL. On Android there is an easy way to decode my array to an RGB array for the texture. The code is the following:
BitmapFactory.Options bO = new BitmapFactory.Options();
bO.inJustDecodeBounds = false;
bO.inPreferredConfig = Bitmap.Config.RGB_565;
try {
    myBitmap = BitmapFactory.decodeByteArray(yuvbuffer,
                                             0,
                                             yuvbuffer.length,
                                             bO);
} catch (Throwable e) {
    // ...
}
I need to decode the YUV buffer on my iOS platform (Xcode 8.3.3, Swift 3.1) in order to put it into the following method as data:
void glTexImage2D(GLenum target,
                  GLint level,
                  GLint internalFormat,
                  GLsizei width,
                  GLsizei height,
                  GLint border,
                  GLenum format,
                  GLenum type,
                  const GLvoid * data);
How can I achieve this decoding?
ALTERNATIVE:
I've described the way I decode the YUV buffer on Android. Maybe there is another way to create a texture based on YUV pixels without decoding it like this. I've already tried the following method using the fragment shader (Link), but it is not working for me: I'm getting a black screen or a green screen, but the image is never rendered. There are also some methods using two separate buffers for Y and for UV, but I don't know how to split my YUV buffer into Y and UV.
Do you have any new examples/samples for yuv-rendering which are not outdated and working?
If you only need to display that image/video, then you don't really need to convert it to an RGB texture. You can bind all 3 planes (Y/Cb/Cr) as separate textures and perform the YUV to RGB conversion in the fragment shader, with just three dot products.
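A minimal sketch of such a fragment shader, assuming full-range BT.601 coefficients and the three planes uploaded as separate GL_LUMINANCE textures (for I420 data the Y plane is width x height bytes, followed by the quarter-size U and V planes); the uniform names are my own:
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_texY;  // full-resolution Y plane
uniform sampler2D u_texU;  // quarter-size Cb plane
uniform sampler2D u_texV;  // quarter-size Cr plane

void main() {
    vec3 yuv = vec3(texture2D(u_texY, v_texCoord).r,
                    texture2D(u_texU, v_texCoord).r - 0.5,
                    texture2D(u_texV, v_texCoord).r - 0.5);
    // The three dot products mentioned above:
    float r = dot(yuv, vec3(1.0,  0.0,    1.402));
    float g = dot(yuv, vec3(1.0, -0.344, -0.714));
    float b = dot(yuv, vec3(1.0,  1.772,  0.0));
    gl_FragColor = vec4(r, g, b, 1.0);
}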

How to manually render a mesh loaded with the DirectX Toolkit

I have a C++/CX project where I'm rendering procedural meshes using DirectX 11, and it all seems to work fine, but now I also want to import and render meshes from files (from FBX, to be exact).
I was told to use the DirectX Toolkit for this.
I followed the toolkit's tutorials, and that all worked, but when I tried doing the same in my project it didn't seem to work: the imported mesh was not visible, and the existing procedural meshes were rendered incorrectly (as if without a depth buffer).
I then tried manually rendering the imported mesh (identically to the procedural meshes, without using the Draw function from DirectXTK).
This works better: the existing meshes are all correct, but the imported mesh's colors are wrong. I use custom vertex and fragment shaders that read only vertex position and color data, but for some reason the imported mesh's normals are sent to the shader instead of the vertex colors.
(I don't even want the normals stored in the mesh, but I don't seem to have the option to export to FBX without normals, and even if I remove them manually from the FBX, DirectXTK seems to recalculate the normals at import.)
Does anyone know what I'm doing wrong?
This is all still relatively new to me, so any help appreciated.
If you need more info, just let me know.
Here is my code for rendering meshes:
First the main render function (which is called once every update):
void Track3D::Render()
{
    if (!_loadingComplete)
    {
        return;
    }
    static const XMVECTORF32 up = { 0.0f, 1.0f, 0.0f, 0.0f };
    // Prepare to pass the view matrix, and updated model matrix, to the shader
    XMStoreFloat4x4(&_constantBufferData.view, XMMatrixTranspose(XMMatrixLookAtRH(_CameraPosition, _CameraLookat, up)));
    // Clear the back buffer and depth stencil view.
    _d3dContext->ClearRenderTargetView(_renderTargetView.Get(), DirectX::Colors::Transparent);
    _d3dContext->ClearDepthStencilView(_depthStencilView.Get(), D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
    // Set render targets to the screen.
    ID3D11RenderTargetView *const targets[1] = { _renderTargetView.Get() };
    _d3dContext->OMSetRenderTargets(1, targets, _depthStencilView.Get());
    // Here I render everything:
    _TrackMesh->Render(_constantBufferData);
    RenderExtra();
    _ImportedMesh->Render(_constantBufferData);
    Present();
}
The Present-function:
void Track3D::Present()
{
    DXGI_PRESENT_PARAMETERS parameters = { 0 };
    parameters.DirtyRectsCount = 0;
    parameters.pDirtyRects = nullptr;
    parameters.pScrollRect = nullptr;
    parameters.pScrollOffset = nullptr;
    HRESULT hr = S_OK;
    hr = _swapChain->Present1(1, 0, &parameters);
    if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
    {
        OnDeviceLost();
    }
    else if (FAILED(hr))
    {
        throw Platform::Exception::CreateException(hr);
    }
}
Here's the render function which I call on every mesh:
(All of the mesh-specific data comes from the imported mesh.)
void Mesh::Render(ModelViewProjectionConstantBuffer constantBufferData)
{
    if (!_loadingComplete)
    {
        return;
    }
    XMStoreFloat4x4(&constantBufferData.model, XMLoadFloat4x4(&_modelMatrix));
    // Prepare the constant buffer to send it to the graphics device.
    _d3dContext->UpdateSubresource(_constantBuffer.Get(), 0, NULL, &constantBufferData, 0, 0);
    UINT offset = 0;
    _d3dContext->IASetVertexBuffers(0, 1, _vertexBuffer.GetAddressOf(), &_stride, &_offset);
    // Each index is one 16-bit unsigned integer (short).
    _d3dContext->IASetIndexBuffer(_indexBuffer.Get(), DXGI_FORMAT_R16_UINT, 0);
    _d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    _d3dContext->IASetInputLayout(_inputLayout.Get());
    // Attach our vertex shader.
    _d3dContext->VSSetShader(_vertexShader.Get(), nullptr, 0);
    // Send the constant buffer to the graphics device.
    _d3dContext->VSSetConstantBuffers(0, 1, _constantBuffer.GetAddressOf());
    // Attach our pixel shader.
    _d3dContext->PSSetShader(_pixelShader.Get(), nullptr, 0);
    SetTexture();
    // Draw the objects.
    _d3dContext->DrawIndexed(_indexCount, 0, 0);
}
And this is the vertex shader:
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
    matrix model;
    matrix view;
    matrix projection;
};

struct VertexShaderInput
{
    float3 pos : POSITION;
    //float3 normal : NORMAL0; // uncommenting these changes the color data for some reason (but always wrong)
    //float2 uv1 : TEXCOORD0;
    //float2 uv2 : TEXCOORD1;
    float3 color : COLOR0;
};

struct VertexShaderOutput
{
    float3 color : COLOR0;
    float4 pos : SV_POSITION;
};

VertexShaderOutput main(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 pos = float4(input.pos, 1.0f);
    // Transform the vertex position into projected space.
    pos = mul(pos, model);
    pos = mul(pos, view);
    pos = mul(pos, projection);
    output.pos = pos;
    output.color = input.color;
    return output;
}
And this is the pixel shader:
struct PixelShaderInput
{
    float3 color : COLOR0;
};

float4 main(PixelShaderInput input) : SV_TARGET
{
    return float4(input.color.r, input.color.g, input.color.b, 1);
}
The most likely issue is that you are not setting enough state for your drawing, and that the DirectX Tool Kit drawing functions are setting states that don't match what your existing code requires.
For performance reasons, DirectX Tool Kit does not 'save & restore' state. Instead each draw function sets the state it needs fully and then leaves it. I document which state is impacted in the wiki under the State management section for each class.
Your code above sets the vertex buffer, index buffer, input layout, vertex shader, pixel shader, primitive topology, and VS constant buffer in slot 0.
You did not set the blend state, depth/stencil state, or rasterizer state. Whether you need any PS constant buffers, samplers, or shader resources depends on your pixel shader; the one shown doesn't use any.
Try explicitly setting the blend state, depth/stencil state, and rasterizer state before you draw your procedural meshes. If you just want to go back to the defined defaults instead of whatever DirectX Tool Kit did, call:
_d3dContext->RSSetState(nullptr);
_d3dContext->OMSetBlendState(nullptr, nullptr, 0xffffffff);
_d3dContext->OMSetDepthStencilState(nullptr, 0);
See also the CommonStates class.
It's generally not a good idea to use identifiers that start with _ in C++. Officially, all identifiers that start with an underscore followed by a capital letter, or that contain __, are reserved for the compiler and standard library implementers, so they could conflict with compiler internals. m_ or something similar is better.

How to use luaglut function glReadPixels() in lua?

I'm using luaglut to do some graphics in Lua, and I am struggling with the function glReadPixels, particularly with its last input argument, GLvoid *pixels.
void glReadPixels (GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid *pixels);
pixels is a pointer type, so in Lua it is of type lightuserdata. I managed to get a lightuserdata variable, let's say img, in Lua according to this post; however, after grabbing the frame I want into img by calling:
glReadPixels(0, 0, 250, 250, GL_RGB, GL_UNSIGNED_BYTE, img)
I can do nothing with img. I tried creating the same structure in Lua using ffi and converting img to a torch.Tensor type, but that is too slow since I have to assign the values pixel by pixel.
So I am asking here if there are better ways to use glReadPixels to get img than this troublesome approach. Both table and torch.Tensor types for img are OK. Thank you in advance!

OpenGL ES gl_FragColor depending on an if condition in the fragment shader on iOS

I'm writing an iOS app that allows free-style drawing (using a finger) and drawing an image on screen. I use OpenGL ES to implement it. I have two functions: one draws free style, the other draws a texture.
--- Code drawing free style
- (void)drawFreeStyle:(NSMutableArray *)pointArray {
    //Prepare vertex data
    .....
    // Load data to the Vertex Buffer Object
    glBindBuffer(GL_ARRAY_BUFFER, vboId);
    glBufferData(GL_ARRAY_BUFFER, vertexCount*2*sizeof(GLfloat), vertexBuffer, GL_DYNAMIC_DRAW);
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 0, 0);
    GLuint a_ver_flag_drawing_type = glGetAttribLocation(program[PROGRAM_POINT].id, "a_drawingType");
    glVertexAttrib1f(a_ver_flag_drawing_type, 0.0f);
    GLuint u_fra_flag_drawing_type = glGetUniformLocation(program[PROGRAM_POINT].id, "v_drawing_type");
    glUniform1f(u_fra_flag_drawing_type, 0.0);
    glUseProgram(program[PROGRAM_POINT].id);
    glDrawArrays(GL_POINTS, 0, (int)vertexCount);
    // Display the buffer
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}
--- Code drawing texture
- (void)drawTexture:(UIImage *)image atRect:(CGRect)rect {
    GLuint a_ver_flag_drawing_type = glGetAttribLocation(program[PROGRAM_POINT].id, "a_drawingType");
    GLuint u_fra_flag_drawing_type = glGetUniformLocation(program[PROGRAM_POINT].id, "v_drawing_type");
    GLuint a_position_location = glGetAttribLocation(program[PROGRAM_POINT].id, "a_Position");
    GLuint a_texture_coordinates_location = glGetAttribLocation(program[PROGRAM_POINT].id, "a_TextureCoordinates");
    GLuint u_texture_unit_location = glGetUniformLocation(program[PROGRAM_POINT].id, "u_TextureUnit");
    glUseProgram(PROGRAM_POINT);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texName);
    glUniform1i(u_texture_unit_location, 0);
    glUniform1f(u_fra_flag_drawing_type, 1.0);
    const float textrect[] = {-1.0f, -1.0f, 0.0f, 0.0f,
                              -1.0f,  1.0f, 0.0f, 1.0f,
                               1.0f, -1.0f, 1.0f, 0.0f,
                               1.0f,  1.0f, 1.0f, 1.0f};
    glBindBuffer(GL_ARRAY_BUFFER, vboId);
    glBufferData(GL_ARRAY_BUFFER, sizeof(textrect), textrect, GL_STATIC_DRAW);
    glVertexAttrib1f(a_ver_flag_drawing_type, 1.0f);
    glVertexAttribPointer(a_position_location, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(0));
    glVertexAttribPointer(a_texture_coordinates_location, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));
    glEnableVertexAttribArray(a_ver_flag_drawing_type);
    glEnableVertexAttribArray(a_position_location);
    glEnableVertexAttribArray(a_texture_coordinates_location);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}
Notice the two variables a_ver_flag_drawing_type (attribute) and u_fra_flag_drawing_type (uniform). They're used to set flags in the vertex shader and fragment shader to determine whether to draw free style or a texture.
--- Vertex shader
//Flag
attribute lowp float a_drawingType;
//For drawing
attribute vec4 inVertex;
uniform mat4 MVP;
uniform float pointSize;
uniform lowp vec4 vertexColor;
varying lowp vec4 color;
//For texture
attribute vec4 a_Position;
attribute vec2 a_TextureCoordinates;
varying vec2 v_TextureCoordinates;

void main()
{
    if (abs(a_drawingType - 1.0) < 0.0001) {
        //Draw texture
        v_TextureCoordinates = a_TextureCoordinates;
        gl_Position = a_Position;
    } else {
        //Draw free style
        gl_Position = MVP * inVertex;
        gl_PointSize = pointSize;
        color = vertexColor;
    }
}
--- Fragment shader
precision mediump float;
uniform sampler2D texture;
varying lowp vec4 color;
uniform sampler2D u_TextureUnit;
varying vec2 v_TextureCoordinates;
uniform lowp float v_drawing_type;

void main()
{
    if (abs(v_drawing_type - 1.0) < 0.0001) {
        //Draw texture
        gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);
    } else {
        //Drawing free style
        gl_FragColor = color * texture2D(texture, gl_PointCoord);
    }
}
My idea is to set these flags from the drawing code at draw time. The attribute a_drawingType is used by the vertex shader and the uniform v_drawing_type by the fragment shader; these flags determine whether to draw free style or a texture.
If I run each path independently (running free-style drawing with the texture code commented out in the vertex and fragment shader files, and vice versa), each one draws as I want. But if I combine them, not only does nothing draw, the app also crashes at
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
I'm new to OpenGL ES and the GLSL language, so I'm not sure whether my approach of setting flags like this is right or wrong. Can anyone help me?
So why don't you just build two separate shader programs and call useProgram() to pick one of them, instead of sending flag values to GL and making an expensive conditional branch in the vertex and especially the fragment shader?
Attributes are per-vertex, uniforms are per-shader program.
You might see a crash if you supplied only one value for an attribute then asked OpenGL to draw, say, 100 points. In that case OpenGL is going to do an out-of-bounds array access when it attempts to fetch the attributes for vertices 2–100.
It'd be more normal to use two separate programs. Conditionals are very expensive on GPUs because GPUs try to maintain one program counter while processing multiple fragments. They're SIMD units. Any time the evaluation of an if differs between two neighbouring fragments you're probably reducing parallelism. The idiom is therefore not to use if statements where possible.
If you switch to a uniform there's a good chance you won't lose any performance through absent parallelism because the paths will never diverge. Your GLSL compiler may even be smart enough to recompile the shader every time you reset the constant, effectively performing constant folding. But you'll be paying for the recompilation every time.
If you just had two programs and switched between them you wouldn't pay the recompilation fee.
It's not completely relevant in this case but e.g. you'd also often see code like:
if (abs(v_drawing_type - 1.0) < 0.0001) {
    //Draw texture
    gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);
} else {
    //Drawing free style
    gl_FragColor = color * texture2D(texture, gl_PointCoord);
}
Written more like:
gl_FragColor = mix(texture2D(u_TextureUnit, v_TextureCoordinates),
                   color * texture2D(texture, gl_PointCoord),
                   v_drawing_type);
... because that avoids the conditional entirely. In this case you'd want to adjust the texture2D calls so that both were identical, and probably factor them out of the mix() call, to ensure you don't end up always doing two samples instead of one.
The previously posted answers explain certain pieces, but an important part is missing in all of them. It is perfectly legal to specify a single attribute value that is applied to all vertices in a draw call. What you did here was basically valid:
glVertexAttrib1f(a_ver_flag_drawing_type, 1.0f);
The direct problem was this call that followed shortly after:
glEnableVertexAttribArray(a_ver_flag_drawing_type);
There are two main ways to specify the value of a vertex attribute:
- Use the current value, as specified in your case by glVertexAttrib1f().
- Use values from an array, as specified with glVertexAttribPointer().
You select which of the two options is used for any given attribute by enabling/disabling the array, which is done by calling glEnableVertexAttribArray()/glDisableVertexAttribArray().
In the posted code, the vertex attribute was specified as only a current value, but the attribute was then enabled to fetch from an array with glEnableVertexAttribArray(). This conflict caused the crash, because the attribute values would have been fetched from an array that was never specified. To use the specified current value, the call simply has to be changed to:
glDisableVertexAttribArray(a_ver_flag_drawing_type);
Or, if the array was never enabled, the call could be left out completely. But just in case another part of the code might have enabled it, it's safer to disable it explicitly.
As a side note, the following statement sequence from the first draw function also looks suspicious. glUniform*() sets a value on the currently active program, so this will set the value on the previously active program, not the one made active in the second statement. If you want to set the value on the new program, the order of the two statements has to be reversed.
glUniform1f(u_fra_flag_drawing_type, 0.0);
glUseProgram(program[PROGRAM_POINT].id);
On the whole thing, I think there are at least two approaches that are better than the one chosen:
- Use separate shader programs for the two different types of rendering. While using a single program with switchable behavior is a valid option, it looks artificial, and using separate programs seems much cleaner.
- If you want to stick with a single program, use a single uniform to do the switching, instead of using an attribute and a uniform. You could use the one you already have, but you might just as well make it a boolean while you're at it. In both the vertex and fragment shader, use the same uniform declaration:
uniform bool v_use_texture;
Then the tests become:
if (v_use_texture) {
Getting the uniform location is the same as before, and you can set the value, which will then be available in both the vertex and fragment shader, with one of:
glUniform1i(loc, 0);
glUniform1i(loc, 1);
I found the problem: change the variable a_drawingType from an attribute to a uniform, then use glGetUniformLocation and glUniform1f to get the location and pass the value. I think an attribute is passed per vertex, so use a uniform to pass the value once.

Efficient way to render a bunch of layered textures?

What's the efficient way to render a bunch of layered textures? I have some semitransparent textured rectangles that I position randomly in 3D space and render them from back to front.
Currently I call d3dContext->PSSetShaderResources() to feed the pixel shader a new texture before each call to d3dContext->DrawIndexed(). I have a feeling that I am copying the texture to GPU memory before each draw. I might have 10-30 ARGB textures, roughly 1024x1024 pixels each, associated across the 100-200 rectangles that I render on screen. My FPS is OK at 100 rectangles, but gets pretty bad around 200. I possibly have some inefficiencies elsewhere since this is my first semi-serious D3D code, but I strongly suspect this has to do with copying the textures back and forth. 30*1024*1024*4 is 120MB, which is a bit high for a Metro style app that should target any Windows 8 device. So putting them all in GPU memory might be a stretch, but maybe I could at least cache a few somehow? Any ideas?
EDIT: Some code snippets added.
Constant Buffer
struct ModelViewProjectionConstantBuffer
{
    DirectX::XMMATRIX model;
    DirectX::XMMATRIX view;
    DirectX::XMMATRIX projection;
    float opacity;
    float3 highlight;
    float3 shadow;
    float textureTransitionAmount;
};
The Render Method
void RectangleRenderer::Render()
{
    // Clear background and depth stencil
    const float backgroundColorRGBA[] = { 0.35f, 0.35f, 0.85f, 1.000f };
    m_d3dContext->ClearRenderTargetView(m_renderTargetView.Get(), backgroundColorRGBA);
    m_d3dContext->ClearDepthStencilView(m_depthStencilView.Get(), D3D11_CLEAR_DEPTH, 1.0f, 0);
    // Don't draw anything else until all textures are loaded
    if (!m_loadingComplete)
        return;
    m_d3dContext->OMSetRenderTargets(1, m_renderTargetView.GetAddressOf(), m_depthStencilView.Get());
    UINT stride = sizeof(BasicVertex);
    UINT offset = 0;
    // The vertex buffer only has the 4 vertices of a rectangle
    m_d3dContext->IASetVertexBuffers(0, 1, m_vertexBuffer.GetAddressOf(), &stride, &offset);
    // The index buffer likewise covers just the one rectangle
    m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R16_UINT, 0);
    m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    m_d3dContext->IASetInputLayout(m_inputLayout.Get());
    FLOAT blendFactors[4] = { 0, };
    m_d3dContext->OMSetBlendState(m_blendState.Get(), blendFactors, 0xffffffff);
    m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0);
    m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0);
    // Starting at the first sampler slot, set one sampler binding
    m_d3dContext->PSSetSamplers(0, 1, m_sampler.GetAddressOf());
    // Number of rectangles is in the 100-200 range
    for (int i = 0; i < m_rectangles.size(); i++)
    {
        // Start rendering from the farthest rectangle
        int j = (i + m_farthestRectangle) % m_rectangles.size();
        m_vsConstantBufferData.model = m_rectangles[j].transform;
        m_vsConstantBufferData.opacity = m_rectangles[j].Opacity;
        m_vsConstantBufferData.highlight = m_rectangles[j].Highlight;
        m_vsConstantBufferData.shadow = m_rectangles[j].Shadow;
        m_vsConstantBufferData.textureTransitionAmount = m_rectangles[j].textureTransitionAmount;
        m_d3dContext->UpdateSubresource(m_vsConstantBuffer.Get(), 0, NULL, &m_vsConstantBufferData, 0, 0);
        m_d3dContext->VSSetConstantBuffers(0, 1, m_vsConstantBuffer.GetAddressOf());
        m_d3dContext->PSSetConstantBuffers(0, 1, m_vsConstantBuffer.GetAddressOf());
        ID3D11ShaderResourceView* srvs[2];
        srvs[0] = m_textures[m_rectangles[j].textureId].textureSRV.Get();
        srvs[1] = m_textures[m_rectangles[j].targetTextureId].textureSRV.Get();
        // Starting at the first shader resource slot, set two shader resource bindings
        m_d3dContext->PSSetShaderResources(0, 2, srvs);
        m_d3dContext->DrawIndexed(m_indexCount, 0, 0);
    }
}
Pixel Shader
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
    matrix model;
    matrix view;
    matrix projection;
    float opacity;
    float3 highlight;
    float3 shadow;
    float textureTransitionAmount;
};

Texture2D baseTexture : register(t0);
Texture2D targetTexture : register(t1);
SamplerState simpleSampler : register(s0);

struct PixelShaderInput
{
    float4 pos : SV_POSITION;
    float3 norm : NORMAL;
    float2 tex : TEXCOORD0;
};

float4 main(PixelShaderInput input) : SV_TARGET
{
    float3 lightDirection = normalize(float3(0, 0, -1));
    float4 baseTexelColor = baseTexture.Sample(simpleSampler, input.tex);
    float4 targetTexelColor = targetTexture.Sample(simpleSampler, input.tex);
    float4 texelColor = lerp(baseTexelColor, targetTexelColor, textureTransitionAmount);
    float4 shadedColor;
    shadedColor.rgb = lerp(shadow.rgb, highlight.rgb, texelColor.r);
    shadedColor.a = texelColor.a * opacity;
    return shadedColor;
}
As Jeremiah has suggested, you are probably not moving textures from CPU to GPU each frame, since that would require creating a new texture each frame or using the UpdateSubresource or Map/Unmap methods.
I don't think instancing is going to help in this specific case, as the number of polygons is extremely low (I would start to worry at several million polygons). It is more likely that your application is bandwidth/fillrate limited, as you are performing lots of texture sampling/blending (it depends on the texture fillrate, pixel fillrate, and the number of ROPs on your GPU).
In order to achieve better performance, it is highly recommended to:
- Make sure that all your textures have mipmaps generated. If they don't have any mipmaps, it will badly hurt the GPU's texture cache. (I also assume that you are using the texture.Sample method in HLSL, and not texture.SampleLevel or variants.)
- Use Direct3D 11 block-compressed textures on the GPU, by using a tool like texconv.exe or, preferably, the sample from "Windows DirectX 11 Texture Converter".
On a side note, you will probably get more attention for this kind of question on https://gamedev.stackexchange.com/.
I don't think you are doing any copying back and forth between the GPU and system memory. You usually have to do that explicitly with a call to Map(...), or by blitting to a texture you created in system memory.
One issue is that you are making a DrawIndexed(...) call for each texture. GPUs work most efficiently if you send them a bunch of work to do in one batch. One way to accomplish this is to set n textures with PSSetShaderResources(i, ...) and do a single DrawIndexedInstanced(...). Your shader code would then read each of the shader resources and draw them. I do this in my C++ DirectCanvas code here (SpriteInstanced.cpp). This can make for a lot of code, but the result is very efficient (I even do the matrix ops in the shader for more speed).
Another, maybe much easier, way is to give the DirectXTK SpriteBatch a shot.
I used it here in this project, only for a simple blit, but it may be a good start to see the small amount of setup needed to use SpriteBatch.
Also, if possible, try to atlas your textures. For instance, fit as many images into one texture as possible and blit from those, versus having a single texture for each.
