Does a Vertex Shader VAO need a VBO? - iOS

I am trying to use a VAO with a vertex shader. This works, but only if I set the length of the buffer data to 0. My understanding is that a VBO should not be required here, because my shader generates the vertices of the quad itself. If I attempt to create the VAO without binding a buffer at all, it also crashes.
As I mentioned, this works; however, I am concerned because in Apple's Instruments the OpenGL Expert reports a severe error:
Draw Call Exceeded Array Buffer Bounds
No Buffer Data - DYFKNoBufferData
Here is the code for generating the VAO:
glGenVertexArrays(1, &vaoID); // Create our Vertex Array Object
glBindVertexArray(vaoID); // Bind VAO
GLfloat vertices[12]; // Vertices for our square
vertices[0] = -0.5; vertices[1] = 0.5; vertices[2] = 0.0; // Top left corner
vertices[3] = -0.5; vertices[4] = -0.5; vertices[5] = 0.0; // Bottom left corner
vertices[6] = 0.5; vertices[7] = 0.5; vertices[8] = 0.0; // Top Right corner
vertices[9] = 0.5; vertices[10] = -0.5; vertices[11] = 0.0; // Bottom right corner
glGenBuffers(1, &fboTextureVboID); // Create our Vertex Buffer Object
glBindBuffer(GL_ARRAY_BUFFER, fboTextureVboID); // Bind VBO
// As long as I set the buffer data length to 0
// then my glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) call works
// otherwise I get EXC_BAD_ACCESS
glBufferData(GL_ARRAY_BUFFER, 0, vertices, GL_STATIC_DRAW);
// configure vertex attributes
glEnableVertexAttribArray (...
glVertexAttribPointer(...
...
glBindVertexArray(0); // Unbind the VAO
glBindBuffer(GL_ARRAY_BUFFER, 0); // Unbind the VBO
Drawing with:
glUseProgram(vertexShaderProgram);
glBindVertexArray(vaoID);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Can I safely ignore Apple's errors? I am using a VAO with this vertex shader because I would like to eliminate all the vertex attribute bindings from my drawing code. Or is there a better way to do this, with or without a VAO?
EDIT:
Here is my vertex shader source:
#version 300 es
uniform lowp mat4 uProjectionMatrix;
in lowp vec4 a_position;
in lowp vec2 a_texCoord;
out lowp vec2 v_texCoord;
void main()
{
gl_Position = uProjectionMatrix * a_position;
v_texCoord = a_texCoord;
}
And fragment shader source:
#version 300 es
precision mediump float;
uniform lowp sampler2D uTexture;
in lowp vec2 v_texCoord;
out lowp vec4 fragmentColor;
void main()
{
fragmentColor = texture( uTexture, v_texCoord );
}

You can pick one of two things.
It is perfectly legal to have a VAO that has no attached buffer objects. However, this does not mean "create a buffer object, but don't put anything in it". It means not to attach buffer objects to the VAO. You just call glGenVertexArrays to generate the vertex array, and you're done.
No calls to glEnableVertexAttribArray. No calls to glVertexAttribPointer. If you're not using vertex attribute arrays at all, you should not be making these calls at all.
It is also perfectly legal to have a VAO that contains buffer objects. These work like normal.
What you can't do is create a buffer object that has no storage and then try to use it for vertex data. That is effectively what happens when you pass a size of 0 to glBufferData: the buffer object exists, but the VAO's attribute pointers reference storage that was never allocated.
So you have to pick one side of the road or the other. Either your VAO uses one or more buffers, or it doesn't. If it uses a buffer, those buffers have to have storage. If it doesn't, then it won't care.
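To make both sides of that concrete for the posted code, here is a rough sketch (not the asker's actual project; names like quadProgram are assumptions). Option 1 keeps the VBO and simply gives glBufferData a real size; option 2 drops the buffer and the attribute setup entirely and generates the quad corners in the vertex shader from gl_VertexID, which GLSL ES 3.00 (the version the posted shaders already use) provides.
// Option 1: keep the VBO, but actually allocate storage for it.
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// Option 2: no VBO, no attribute arrays. The host side only needs the
// (empty) VAO and the draw call:
//     glGenVertexArrays(1, &vaoID);
//     ...
//     glUseProgram(quadProgram);   // hypothetical program name
//     glBindVertexArray(vaoID);
//     glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// and a vertex shader along these lines:
static const char *quadVS =
    "#version 300 es\n"
    "out lowp vec2 v_texCoord;\n"
    "void main() {\n"
    "    // Same triangle-strip corner order as the original vertices array\n"
    "    const vec2 corners[4] = vec2[](vec2(-0.5,  0.5), vec2(-0.5, -0.5),\n"
    "                                   vec2( 0.5,  0.5), vec2( 0.5, -0.5));\n"
    "    vec2 p = corners[gl_VertexID];\n"
    "    v_texCoord = p + 0.5;\n"
    "    gl_Position = vec4(p, 0.0, 1.0);\n"
    "}\n";
With option 2 the Instruments warning should also go away, because no draw call reads from a buffer at all.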

Related

WebGL: Access buffer from shader

I need to access a buffer from my shader. The buffer is created from an array. (In the real scenario, the array has 10k+ (variable) numbers.)
var myBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, myBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Uint8Array([1,2,3,4,5,6,7]), gl.STATIC_DRAW);
How do I send it so it's usable by the shader?
precision mediump float;
uniform uint[] myBuffer;//???
void main() {
gl_FragColor = vec4(myBuffer[0],myBuffer[1],0,1);
}
Normally, if it were an attribute, it'd be
gl.vertexAttribPointer(myBuffer, 2, gl.UNSIGNED_BYTE, false, 4, 0);
but I need to be able to access the whole array from any shader pixel, so it's not a vertex attribute.
Use a texture if you want random access to lots of data in a shader.
If you have 10000 values you might make a texture that's 100x100 pixels. You can then get each value from the texture with something like
uniform sampler2D u_texture;
vec2 textureSize = vec2(100.0, 100.0);
vec4 getValueFromTexture(float index) {
float column = mod(index, textureSize.x);
float row = floor(index / textureSize.x);
vec2 uv = vec2(
(column + 0.5) / textureSize.x,
(row + 0.5) / textureSize.y);
return texture2D(u_texture, uv);
}
Make sure your texture filtering is set to gl.NEAREST.
Of course if you make textureSize a uniform you could pass in the size of the texture.
As for why the + 0.5 part see this answer
You can use normal gl.RGBA, gl.UNSIGNED_BYTE textures and add/multiply the channels together to get a large range of values. Or, you could use floating point textures if you don't want to mess with that; floating point textures are an optional feature you need to enable first (OES_texture_float in WebGL1).
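As a rough sketch of the setup side (hypothetical names; written with plain GL calls, which map one-to-one to gl.createTexture / gl.texImage2D / gl.texParameteri in WebGL), packing 10,000 values into a 100x100 RGBA texture with NEAREST filtering might look like:
// Sketch only. 'values' is a hypothetical array of 10,000 entries,
// stored one value per texel (only the red channel is used here).
GLuint dataTex;
unsigned char values[100 * 100 * 4];   // RGBA bytes, filled elsewhere
glGenTextures(1, &dataTex);
glBindTexture(GL_TEXTURE_2D, dataTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 100, 100, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, values);
// NEAREST so getValueFromTexture() reads the exact texel, never a blend
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// 100 is not a power of two, so clamp the wrap mode and skip mipmaps
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);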

OpenGL ES 2.0 - Fragment shader with multiple textures

I'm setting up two textures as follows:
GLKTextureInfo *texture = [GLKTextureLoader ...
glActiveTexture(GL_TEXTURE0);
glUniform1i(glGetUniformLocation(self.program, "uTextureMask"), 0);
glBindTexture(GL_TEXTURE_2D, texture.name);
texture = [GLKTextureLoader ...
glActiveTexture(GL_TEXTURE1);
glUniform1i(glGetUniformLocation(self.program, "uTextureLabel"), 1);
glBindTexture(GL_TEXTURE_2D, texture.name);
referred to in the fragment shader:
uniform sampler2D uTextureMask;
uniform sampler2D uTextureLabel;
The problem is that only the last texture I bind is available in the shader.
In the example above, only uTextureLabel works.
Any idea?
Thanks,
DAN
UPDATE:
glGetUniformLocation returns 13 for uTextureMask and 14 for uTextureLabel.
In the shader I do:
highp vec4 label = texture2D(uTextureLabel, vTexel);
highp vec4 mask = texture2D(uTextureMask, vTexel);
highp vec3 surface;
surface = label.rgb;
// surface = mask.rgb; // <--- DOESN'T WORK
gl_FragColor = vec4(surface, 1.0);
On iOS, GLKTextureLoader does not just load the PNG. It also creates the texture name with glGenTextures, binds it, uploads the data to the GPU, and sets default parameters for the wrap mode and the min/mag filters.
In your code I see that you effectively call:
glActiveTexture(GL_TEXTURE0)
glBindTexture(MaskTexture)
glBindTexture(LabelTexture) // this bind is made by GLKTextureLoader
glActiveTexture(GL_TEXTURE1)
glBindTexture(LabelTexture)
As a result, you have LabelTexture bound to both the GL_TEXTURE0 and GL_TEXTURE1 texture units.
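One way to avoid this (a sketch, using the question's own uniform names; maskTexture and labelTexture stand for the two GLKTextureInfo objects) is to re-bind each texture to the unit you actually want after both loads have finished, then point the sampler uniforms at those units:
// Program must be in use before glUniform1i
glUseProgram(self.program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, maskTexture.name);
glUniform1i(glGetUniformLocation(self.program, "uTextureMask"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, labelTexture.name);
glUniform1i(glGetUniformLocation(self.program, "uTextureLabel"), 1);
Alternatively, select the desired unit with glActiveTexture before each GLKTextureLoader call, since (as noted above) the loader binds on whatever unit is active at the time.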

Output of vertex shader 'colorVarying' not read by fragment shader

As shown below, the error is very strange. I use OpenGL ES 2.0 and shaders in my iPad program, but something seems to go wrong with the code or project configuration. The model is drawn with no color at all (just black).
2012-12-01 14:21:56.707 medicare[6414:14303] Program link log:
WARNING: Could not find vertex shader attribute 'color' to match BindAttributeLocation request.
WARNING: Output of vertex shader 'colorVarying' not read by fragment shader
[Switching to process 6414 thread 0x1ad0f]
And I use glBindAttribLocation to pass position and normal data like this:
// This needs to be done prior to linking.
glBindAttribLocation(_program, INDEX_POSITION, "position");
glBindAttribLocation(_program, INDEX_NORMAL, "normal");
glBindAttribLocation(_program, INDEX_COLOR, "color"); //pass color to shader
There are two shaders in my project. So any good solutions to this odd error? Thanks a lot!
My vertex shader:
uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
attribute vec4 position;
attribute vec3 normal;
attribute vec4 color;
varying lowp vec4 DestinationColor;
void main()
{
//vec4 a_Color = vec4(0.9, 0.4, 0.4, 1.0);
vec4 a_Color = color;
vec3 u_LightPos = vec3(1.0, 1.0, 2.0);
float distance = 2.4;
vec3 eyeNormal=normalize(normalMatrix * normal);
float diffuse = max(dot(eyeNormal, u_LightPos), 0.0); // remove approx ambient light
diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));
DestinationColor = a_Color * diffuse; // average between ambient and diffuse a_Color * (diffuse + 0.3)/2.0;
gl_Position = modelViewProjectionMatrix * position;
}
And my fragment shader is:
varying lowp vec4 DestinationColor;
void main()
{
gl_FragColor = DestinationColor;
}
Very simple. Thanks a lot!
I think there are a few things wrong here. First, your use of attribute might not be right. An attribute is a per-vertex value: do you actually have the color as an element in your vertex data structure? Because if not, the shader isn't going to work right.
And I use glBindAttribLocation to pass position and normal data like this:
No, you don't. glBindAttribLocation "Associates a generic vertex attribute index with a named attribute variable". It doesn't pass data. It associates an index (a GLint) with the variable. You pass the data in later with glVertexAttribPointer.
I don't even use the bind. I do it this way: set up the attribute:
glAttributes[PROGNAME][A_vec3_vertexPosition] = glGetAttribLocation(glPrograms[PROGNAME], "a_vertexPosition");
glEnableVertexAttribArray(glAttributes[PROGNAME][A_vec3_vertexPosition]);
and then later, before calling glDrawElements, pass your pointer to it so it can get the data:
glVertexAttribPointer(glAttributes[PROGNAME][A_vec3_vertexPosition], 3, GL_FLOAT, GL_FALSE, stride, (void *) 0);
There I'm using a two-dimensional array of ints called glAttributes to hold all of my attribute indexes, but you can use GLints like you are doing now.
The error message tells you what's wrong. In your vertex shader you say:
attribute vec4 color;
But then down below you also have an a_Color:
DestinationColor = a_Color * diffuse;
Be consistent with your variable names. I put a_, v_ and u_ in front of all mine now to keep straight what kind of variable each one is. What you're calling a_Color there is really just a local variable, not an attribute.
I also suspect that the error message was not from the same version of the shader and code that you posted because of the error:
WARNING: Output of vertex shader 'colorVarying' not read by fragment shader
And the error about colorVarying is confusing when it isn't even in this version of your vertex shader. Repost the current version of the shaders and the error messages you get from those and it will be easier to help you.
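To make the bind-vs-pass distinction concrete, here is a minimal sketch of actually feeding the color attribute. INDEX_COLOR is from the question; colorVBO and the tightly packed vec4 layout are assumptions, not the poster's actual data:
// Associate the index with the name before linking (as the question does)...
glBindAttribLocation(_program, INDEX_COLOR, "color");
glLinkProgram(_program);
// ...then, at draw time, point that index at real per-vertex data:
glBindBuffer(GL_ARRAY_BUFFER, colorVBO);            // hypothetical color buffer
glEnableVertexAttribArray(INDEX_COLOR);
glVertexAttribPointer(INDEX_COLOR, 4, GL_FLOAT, GL_FALSE,
                      4 * sizeof(GLfloat), (const GLvoid *)0);
Without a pointer like this, the attribute has no per-vertex data to read, no matter what glBindAttribLocation was called with.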

Drawing many shapes in WebGL

I was reading tutorials from here.
<script class = "WebGL">
var gl;
function initGL() {
// Get A WebGL context
var canvas = document.getElementById("canvas");
gl = getWebGLContext(canvas);
if (!gl) {
return;
}
}
var positionLocation;
var resolutionLocation;
var colorLocation;
var translationLocation;
var rotationLocation;
var translation = [50,50];
var rotation = [0, 1];
var angle = 0;
function initShaders() {
// setup GLSL program
vertexShader = createShaderFromScriptElement(gl, "2d-vertex-shader");
fragmentShader = createShaderFromScriptElement(gl, "2d-fragment-shader");
program = createProgram(gl, [vertexShader, fragmentShader]);
gl.useProgram(program);
// look up where the vertex data needs to go.
positionLocation = gl.getAttribLocation(program, "a_position");
// lookup uniforms
resolutionLocation = gl.getUniformLocation(program, "u_resolution");
colorLocation = gl.getUniformLocation(program, "u_color");
translationLocation = gl.getUniformLocation(program, "u_translation");
rotationLocation = gl.getUniformLocation(program, "u_rotation");
// set the resolution
gl.uniform2f(resolutionLocation, canvas.width, canvas.height);
}
function initBuffers() {
// Create a buffer.
var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
// Set Geometry.
setGeometry(gl);
}
function setColor(red, green, blue) {
gl.uniform4f(colorLocation, red, green, blue, 1);
}
// Draw the scene.
function drawScene() {
// Clear the canvas.
gl.clear(gl.COLOR_BUFFER_BIT);
// Set the translation.
gl.uniform2fv(translationLocation, translation);
// Set the rotation.
gl.uniform2fv(rotationLocation, rotation);
// Draw the geometry.
gl.drawArrays(gl.TRIANGLES, 0, 6);
}
// Fill the buffer with the values that define a letter 'F'.
function setGeometry(gl) {
/*Assume size1 is declared*/
var vertices = [
-size1/2, -size1/2,
-size1/2, size1/2,
size1/2, size1/2,
size1/2, size1/2,
size1/2, -size1/2,
-size1/2, -size1/2 ];
gl.bufferData(
gl.ARRAY_BUFFER,
new Float32Array(vertices),
gl.STATIC_DRAW);
}
function animate() {
translation[0] += 0.01;
translation[1] += 0.01;
angle += 0.01;
rotation[0] = Math.cos(angle);
rotation[1] = Math.sin(angle);
}
function tick() {
requestAnimFrame(tick);
drawScene();
animate();
}
function start() {
initGL();
initShaders();
initBuffers();
setColor(0.2, 0.5, 0.5);
tick();
}
</script>
<!-- vertex shader -->
<script id="2d-vertex-shader" type="x-shader/x-vertex">
attribute vec2 a_position;
uniform vec2 u_resolution;
uniform vec2 u_translation;
uniform vec2 u_rotation;
void main() {
vec2 rotatedPosition = vec2(
a_position.x * u_rotation.y + a_position.y * u_rotation.x,
a_position.y * u_rotation.y - a_position.x * u_rotation.x);
// Add in the translation.
vec2 position = rotatedPosition + u_translation;
// convert the position from pixels to 0.0 to 1.0
vec2 zeroToOne = position / u_resolution;
// convert from 0->1 to 0->2
vec2 zeroToTwo = zeroToOne * 2.0;
// convert from 0->2 to -1->+1 (clipspace)
vec2 clipSpace = zeroToTwo - 1.0;
gl_Position = vec4(clipSpace, 0, 1);
}
</script>
<!-- fragment shader -->
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
uniform vec4 u_color;
void main() {
gl_FragColor = u_color;
}
</script>
My WebGL program for 1 shape works something like this:
Get a context (gl) from the canvas element.
initialize buffers with the shape of my object
drawScene() : a call to gl.drawArrays()
If there is animation, an update function that updates my shape's angles and positions,
and then drawScene(); both are called in tick(), so they run repeatedly.
Now when I need more than one shape, should I fill a single buffer with all the objects at once and then draw them all with one drawScene() call,
[OR]
should I repeatedly call initBuffers() and drawScene() from requestAnimFrame()?
In pseudo code
At init time
Get a context (gl) from the canvas element.
for each shader
create shader
look up attribute and uniform locations
for each shape
initialize buffers with the shape
for each texture
create textures and/or fill them with data.
At draw time
for each shape
if the last shader used is different than the shader needed for this shape call gl.useProgram
For each attribute needed by shader
call gl.enableVertexAttribArray, gl.bindBuffer and gl.vertexAttribPointer for each attribute needed by shape with the attribute locations for the current shader.
For each uniform needed by shader
call gl.uniformXXX with the desired values using the locations for the current shader
call gl.drawArrays or if the data is indexed called gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, bufferOfIndicesForCurrentShape) followed by gl.drawElements
Common Optimizations
1) Often you don't need to set every uniform. For example if you are drawing 10 shapes with the same shader and that shader takes a viewMatrix or cameraMatrix it's likely that viewMatrix uniform or cameraMatrix uniform is the same for every shape so just set it once.
2) You can often move the calls to gl.enableVertexAttribArray to initialization time.
Having multiple meshes in one buffer (and rendering them with a single gl.drawArrays() as you're suggesting) yields better performance in complex scenes but obviously at that point you're not able to change shader uniforms (such as transformations) per mesh.
If you want to have the meshes running around independently, you'll have to render each one separately. You could still keep all the meshes in one buffer to avoid some overhead from gl.bindBuffer() calls but imho that won't help that much, at least not in simple scenes.
Create your buffers separately for each object you want in the scene; otherwise they won't be able to move and use shader effects independently.
But that is only needed if your objects are different. From what I gather, you just want to draw the same shape more than once at different positions, right?
The way to do that is to set the translation uniform (translationLocation) to a different value after drawing the shape the first time. That way, when you draw the shape again, it ends up somewhere else instead of on top of the first one. You can set those transformation uniforms to different values and just issue the draw call (gl.drawArrays here) again, since you're reusing the same buffers that are already bound.
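A sketch of that pattern (shown with the GL C calls, which correspond one-to-one to gl.uniform2fv and gl.drawArrays in the question's WebGL code; the positions array is made up):
// One buffer, one shader, the same geometry drawn several times.
// translationLocation and the vertex buffer are set up once, as in the question.
const GLfloat positions[][2] = { {50.0f, 50.0f}, {150.0f, 80.0f}, {220.0f, 200.0f} };
for (int i = 0; i < 3; ++i) {
    glUniform2fv(translationLocation, 1, positions[i]); // move this copy
    glDrawArrays(GL_TRIANGLES, 0, 6);                   // redraw the same 6 vertices
}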

GLSL Shaders compile but don't draw anything on Windows

I'm trying to port some OpenGL rendering code I wrote for iOS to a Windows app. The code runs fine on iOS, but on Windows it doesn't draw anything. I've narrowed the problem down to this bit of code as fixed function stuff (such as glutSolidTorus) draws fine, but when shaders are enabled, nothing works.
Here's the rendering code:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_INDEX_ARRAY);
// Set the vertex buffer as current
this->vertexBuffer->MakeActive();
// Get a reference to the vertex description to save copying
const AT::Model::VertexDescription & vd = this->vertexBuffer->GetVertexDescription();
std::vector<GLuint> handles;
// Loop over the vertex descriptions
for (int i = 0, stride = 0; i < vd.size(); ++i)
{
// Get a handle to the vertex attribute on the shader object using the name of the current vertex description
GLint handle = shader.GetAttributeHandle(vd[i].first);
// If the handle is not an OpenGL 'Does not exist' handle
if (handle != -1)
{
glEnableVertexAttribArray(handle);
handles.push_back(handle);
// Set the pointer to the vertex attribute, with the vertex's element count,
// the size of a single vertex and the start position of the first attribute in the array
glVertexAttribPointer(handle, vd[i].second, GL_FLOAT, GL_FALSE,
sizeof(GLfloat) * (this->vertexBuffer->GetSingleVertexLength()),
(GLvoid *)stride);
}
// Add to the stride value with the size of the number of floats the vertex attr uses
stride += sizeof(GLfloat) * (vd[i].second);
}
// Draw the indexed elements using the current vertex buffer
glDrawElements(GL_TRIANGLES,
this->vertexBuffer->GetIndexArrayLength(),
GL_UNSIGNED_SHORT, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_INDEX_ARRAY);
// Disable the vertexattributearrays
for (int i = 0, stride = 0; i < handles.size(); ++i)
{
glDisableVertexAttribArray(handles[i]);
}
It's inside a function that takes a shader as a parameter, and the vertex description is a list of pairs mapping attribute names to element counts. Uniforms are being set outside this function. I'm enabling the shader for use before it's passed into the function.
Vertex:
attribute vec3 position;
attribute vec2 texCoord;
attribute vec3 normal;
// Uniforms
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;
uniform mat3 NormalMatrix;
/// OUTPUTS
varying vec2 o_texCoords;
varying vec3 o_normals;
// Vertex Shader
void main()
{
// Do the normal position transform
gl_Position = Projection * View * Model * vec4(position, 1.0);
// Transform the normals to world space
o_normals = NormalMatrix * normal;
// Pass texture coords on for interpolation
o_texCoords = texCoord;
}
Fragment:
varying vec2 o_texCoords;
varying vec3 o_normals;
/// Fragment Shader
void main()
{
gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}
I'm running OpenGL 2.1 with shading language 1.20. I'd be most appreciative of any help anyone can give me.
I see that you are assigning black as the output color for the fragment in your fragment shader. Try changing that to something like
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
and see if the objects in the scene are colored green.
I came back to this recently, and it turned out I wasn't checking for errors during rendering: I was getting error 1285 (GL_OUT_OF_MEMORY) after calling glDrawElements(). This led me to check the vertex buffer objects to see if they contained any data, and it turns out I wasn't properly deep copying them in a wrapper class, so they were being deleted before any rendering happened. Fixing this sorted the issue.
Thank you for your suggestions.
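For anyone else hitting a silent failure like this, the check that surfaced the problem is just a glGetError poll after the draw call; a minimal sketch (indexCount is a stand-in for the real index array length) looks like:
// Needs <stdio.h> for fprintf. Drain every pending error, not just the first.
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError()) {
    fprintf(stderr, "GL error after glDrawElements: 0x%04X\n", err);
    // 0x0505 is GL_OUT_OF_MEMORY, i.e. the 1285 seen above
}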
