I'm trying to follow the suggestion in Apple's OpenGL ES Programming Guide section on instanced drawing: Use Instanced Drawing to Minimize Draw Calls. I started with the example project that Xcode generates for a Game app with OpenGL and Swift, converted it to OpenGL ES 3.0, and added some instanced drawing to duplicate the cube.
This works fine when I use the gl_InstanceID technique and simply generate an offset from it. But when I try to use the 'instanced arrays' technique to pass data in via a buffer, I don't see any results.
My updated vertex shader looks like this:
#version 300 es
in vec4 position;
in vec3 normal;
layout(location = 5) in vec2 instOffset;
out lowp vec4 colorVarying;
uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
void main()
{
vec3 eyeNormal = normalize(normalMatrix * normal);
vec3 lightPosition = vec3(0.0, 0.0, 1.0);
vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);
float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));
colorVarying = diffuseColor * nDotVP;
// gl_Position = modelViewProjectionMatrix * position + vec4( float(gl_InstanceID)*1.5, float(gl_InstanceID)*1.5, 1.0,1.0);
gl_Position = modelViewProjectionMatrix * position + vec4(instOffset, 1.0, 1.0);
}
and in my setupGL() method I have added the following:
//glGenVertexArraysOES(1, &instArray) // EDIT: WRONG
//glBindVertexArrayOES(instArray) // EDIT: WRONG
let kMyInstanceDataAttrib = 5
glGenBuffers(1, &instBuffer)
glBindBuffer(GLenum(GL_ARRAY_BUFFER), instBuffer)
glBufferData(GLenum(GL_ARRAY_BUFFER), GLsizeiptr(sizeof(GLfloat) * instData.count), &instData, GLenum(GL_STATIC_DRAW))
glEnableVertexAttribArray(GLuint(kMyInstanceDataAttrib))
glVertexAttribPointer(GLuint(kMyInstanceDataAttrib), 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0/*or 8?*/, BUFFER_OFFSET(0))
glVertexAttribDivisor(GLuint(kMyInstanceDataAttrib), 1);
along with some simple instance offset data:
var instData: [GLfloat] = [
1.5, 1.5,
2.5, 2.5,
3.5, 3.5,
]
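As an aside, the "0 /*or 8?*/" stride question in the setup code above comes down to simple byte arithmetic; here is a quick sketch (plain JavaScript, just to illustrate the math, not part of the question's Swift code):

```javascript
// Stride for a tightly packed vec2 float attribute: either an explicit
// byte stride of components * sizeof(GLfloat), or 0 to let GL infer it.
const FLOAT_BYTES = 4;
const componentsPerOffset = 2;                            // instOffset is a vec2
const explicitStride = componentsPerOffset * FLOAT_BYTES; // 8 bytes; equivalent to passing 0
// Three instances' worth of offsets (matching the instData array below):
const instanceCount = 3;
const bufferBytes = instanceCount * explicitStride;       // 24 bytes total
```

So 0 and 8 behave identically here; 0 simply tells GL the data is tightly packed.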
I am drawing the same way with the above as with the instance id technique:
glDrawArraysInstanced(GLenum(GL_TRIANGLES), 0, 36, 3)
But it seems to have no effect. I just get the single cube and it doesn't even seem to fail if I remove the buffer setup, so I suspect my setup is missing something.
EDIT: Fixed the code by removing two bogus lines from init.
I had an unnecessary gen and bind for the attribute vertex array. The code as edited above now works.
I'm trying to draw multiple triangles using OpenGL ES on iOS. I create a vertex array of float values with the following structure for each vertex:
{x, y, z, r, g, b, a}
The final array for one triangle is:
{x1, y1, z1, r1, g1, b1, a1, x2, y2, z2, r2, g2, b2, a2, x3, y3, z3,
r3, g3, b3, a3}
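That interleaved layout implies fixed byte offsets for the two attributes; a quick sketch of the arithmetic (plain JavaScript, just to illustrate, not part of the question's code):

```javascript
// Byte layout for the interleaved {x, y, z, r, g, b, a} vertex format.
// Every component is a 32-bit float (4 bytes).
const FLOAT_BYTES = 4;
const componentsPerVertex = 7;                     // 3 position + 4 color
const stride = componentsPerVertex * FLOAT_BYTES;  // 28 bytes from one vertex to the next
const positionOffset = 0;                          // x, y, z start at byte 0
const colorOffset = 3 * FLOAT_BYTES;               // r, g, b, a start at byte 12
```

These are the values the stride and `(GLvoid*)(3 * sizeof(float))` arguments in the render method below should encode.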
Here is my update method:
-(void)update {
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 1.0, 100.0);
self.effect.transform.projectionMatrix = projectionMatrix;
}
and render:
-(void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
[self drawShapes]; // here I fill vertices array
glClearColor(0.65f, 0.65f, 0.8f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
int numItems = 3 * trianglesCount;
glBindVertexArrayOES(vao);
[self.effect prepareToDraw];
glUseProgram(shaderProgram);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * itemSize, convertedVerts, GL_DYNAMIC_DRAW);
glVertexAttribPointer(vertexPositionAttribute, 3, GL_FLOAT, false, stride, 0);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(vertexColorAttribute, 4, GL_FLOAT, false, stride, (GLvoid*)(3 * sizeof(float)));
glEnableVertexAttribArray(GLKVertexAttribColor);
glDrawArrays(GL_TRIANGLES, 0, numItems);
}
Context setup. Here I bind my vertex array and generate vertex buffer:
-(void)setupContext
{
self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if(!self.context) {
NSLog(@"Failed to create OpenGL ES Context");
}
GLKView *view = (GLKView *)self.view;
view.context = self.context;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
[EAGLContext setCurrentContext:self.context];
self.effect = [[GLKBaseEffect alloc] init];
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glGenVertexArraysOES(1, &vao);
glBindVertexArrayOES(vao);
glGenBuffers(1, &vbo);
}
Fragment and vertex shaders are pretty simple:
//fragment
varying lowp vec4 vColor;
void main(void) {
gl_FragColor = vColor;
}
//vertex
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
varying lowp vec4 vColor;
void main(void) {
gl_Position = vec4(aVertexPosition, 1.0);
vColor = aVertexColor;
}
Result: the triangles aren't shown.
Where is the mistake? I guess the problem is with the projection matrix. Here is a GitHub link to the Xcode project.
Downloaded your code and tried it out. I see the purplish screen and no triangles, so I'm guessing that's the problem. I see two things that could be causing it:
1) You need to pass glBufferData the total number of bytes you're sending it, like this: glBufferData(GL_ARRAY_BUFFER, sizeof(float) * itemSize * numItems, convertedVerts, GL_DYNAMIC_DRAW);. Any information about how to chunk that data stays in glVertexAttribPointer.
2) That doesn't seem to be the only issue, since I still can't get triangles to show up. I've never used GLKit before (I just have a little experience with OpenGL on the desktop). That said, if I replace GLKVertexAttribPosition and GLKVertexAttribColor with 0 and 1 respectively, and apply the glBufferData fix from 1), I see artifacts flashing on the simulator screen when I move the mouse. So there's got to be something fishy with those enum values and glVertexAttribPointer.
Edit - clarification for 2):
After changing the glBufferData line as described in 1), I also modified the glEnableVertexAttribArray lines so they looked like this:
glVertexAttribPointer(vertexPositionAttribute, 3, GL_FLOAT, false, stride, 0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(vertexColorAttribute, 4, GL_FLOAT, false, stride, (GLvoid*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
After both of those changes I can see red triangles flickering on the screen. A step closer, since I couldn't see anything before. But I haven't been able to figure it out any further than that :(
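For what it's worth, the size arithmetic from point 1) can be sketched like this (plain JavaScript; itemSize and numItems mirror the names in the question, and the triangle count is illustrative):

```javascript
// The glBufferData fix: upload the whole array, not just one vertex.
const FLOAT_BYTES = 4;
const itemSize = 7;                      // floats per vertex: x, y, z, r, g, b, a
const trianglesCount = 2;                // illustrative value
const numItems = 3 * trianglesCount;     // 3 vertices per triangle
const wrongSize = FLOAT_BYTES * itemSize;            // 28 bytes: only one vertex
const rightSize = FLOAT_BYTES * itemSize * numItems; // 168 bytes: the whole array
```

With the original size, only the first vertex's worth of data ever reached the GPU, which is consistent with nothing being drawn.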
I am using triangles (vertices and face positions) to draw the graphics. I store color information for each vertex and apply colors accordingly. But all the geometries in my scene are of a single color (say cone = red, cylinder = blue), so storing a color for each vertex is apparently of no use to me.
Is there any other approach in WebGL for coloring apart from storing color information for each vertex in the scene? Maybe something like coloring the entire geometry (say, a cone).
It's clear from your question you might not really understand WebGL yet; you might want to check out these tutorials.
WebGL uses shaders; those shaders use whatever inputs you define and output whatever you tell them to output. That means WebGL doesn't require vertex colors. Vertex colors are something you decide on when you write your shaders. If you don't want to use vertex colors, don't write a shader that references them.
That said, if you have a shader that happens to use vertex colors, you can easily provide it with a constant color. Let's assume you have shaders like this that just use vertex colors.
vertex shader:
attribute vec4 a_position;
attribute vec4 a_color; // vertex colors
varying vec4 v_color; // so we can pass the colors to the fragment shader
uniform mat4 u_matrix;
void main() {
gl_Position = u_matrix * a_position;
v_color = a_color;
}
fragment shader:
precision mediump float;
varying vec4 v_color;
void main() {
gl_FragColor = v_color;
}
Now, all you have to do to use a constant color is turn off the attribute for a_color and set a constant value with gl.vertexAttrib4f like this
// at init time
var a_colorLocation = gl.getAttribLocation(program, "a_color");
// at draw time
gl.disableVertexAttribArray(a_colorLocation); // turn off the attribute
gl.vertexAttrib4f(a_colorLocation, r, g, b, a); // supply a constant color
Note that turning off attribute 0 will slow down WebGL on desktops because of differences between OpenGL and OpenGL ES, and it's possible a_colorLocation is attribute 0. To avoid this problem, bind your attribute locations BEFORE you link your program. Specifically, since you'll always use a position (called "a_position" in the example above), just bind that to location 0 like this:
..compile shaders..
..attach shaders to program..
// Must happen before you call linkProgram
gl.bindAttribLocation(program, 0, "a_position");
gl.linkProgram(program);
...check for errors, etc...
This will force the attribute for "a_position" to be attribute 0 so you'll always enable it.
Here's a sample
function main() {
var canvas = document.getElementById("c");
var gl = canvas.getContext("webgl");
if (!gl) {
alert("no WebGL");
return;
}
// NOTE:! This function binds attribute locations
// based on the indices of the second array
var program = twgl.createProgramFromScripts(
gl,
["vshader", "fshader"],
["a_position", "a_color"]); // a_position will get location 0
// a_color will get location 1
var a_positionLoc = 0;
var a_colorLoc = 1;
var u_matrixLoc = gl.getUniformLocation(program, "u_matrix");
gl.useProgram(program);
var verts = [
1, 1,
-1, 1,
-1, -1,
1, 1,
-1, -1,
1, -1,
];
var colors = [
255, 0, 0, 255,
0, 255, 0, 255,
0, 0, 255, 255,
255, 255, 0, 255,
0, 255, 255, 255,
255, 0, 255, 255,
];
var positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(verts), gl.STATIC_DRAW);
gl.enableVertexAttribArray(a_positionLoc);
gl.vertexAttribPointer(a_positionLoc, 2, gl.FLOAT, false, 0, 0);
var colorBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Uint8Array(colors), gl.STATIC_DRAW);
gl.enableVertexAttribArray(a_colorLoc);
gl.vertexAttribPointer(a_colorLoc, 4, gl.UNSIGNED_BYTE, true, 0, 0);
// Draw in the bottom right corner
gl.uniformMatrix4fv(
u_matrixLoc,
false,
[0.5, 0, 0, 0,
0, 0.5, 0, 0,
0, 0, 1, 0,
-0.5, -0.5, 0, 1]);
gl.drawArrays(gl.TRIANGLES, 0, 6);
// Now turn off the a_color attribute and supply a solid color
gl.disableVertexAttribArray(a_colorLoc);
var r = 0.5;
var g = 1;
var b = 0.5;
var a = 1;
gl.vertexAttrib4f(a_colorLoc, r, g, b, a); // greenish
// Draw in the top left corner
gl.uniformMatrix4fv(
u_matrixLoc,
false,
[0.5, 0, 0, 0,
0, 0.5, 0, 0,
0, 0, 1, 0,
0.5, 0.5, 0, 1]);
gl.drawArrays(gl.TRIANGLES, 0, 6);
};
main();
canvas { border: 1px solid black; }
<script src="https://twgljs.org/dist/3.x/twgl.min.js"></script>
<script id="vshader" type="whatever">
attribute vec4 a_position;
attribute vec4 a_color;
varying vec4 v_color;
uniform mat4 u_matrix;
void main() {
gl_Position = u_matrix * a_position;
v_color = a_color;
}
</script>
<script id="fshader" type="whatever">
precision mediump float;
varying vec4 v_color;
void main() {
gl_FragColor = v_color;
}
</script>
<canvas id="c" width="300" height="300"></canvas>
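As an aside, the u_matrix passed in the sample is a column-major scale-and-translate, so you can check by hand where the quad's corners land (a quick JavaScript sketch, not part of the sample itself):

```javascript
// Apply the sample's column-major 4x4 (scale 0.5, translate -0.5, -0.5)
// to a 2D clip-space point. Only the x/y rows matter for this matrix.
function transform(m, x, y) {
  return [
    m[0] * x + m[4] * y + m[12], // x' = 0.5x - 0.5
    m[1] * x + m[5] * y + m[13], // y' = 0.5y - 0.5
  ];
}
const m = [0.5, 0, 0, 0,  0, 0.5, 0, 0,  0, 0, 1, 0,  -0.5, -0.5, 0, 1];
// Corner (1, 1) maps to (0, 0) and (-1, -1) maps to (-1, -1),
// so the quad covers one quadrant of clip space.
```

This is the same column-major layout gl.uniformMatrix4fv expects.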
If your geometry has one color per object that doesn't change across the geometry, then you should pass that color as a uniform variable.
So you end up with only one attribute (the vertex positions), a few matrix uniforms (say model, view, and projection matrices) for the vertex shader, and one vector uniform for the fragment shader for "shading" the object.
My vertex shader:
attribute vec4 vertexPosition;
attribute vec2 vertexTexCoord;
varying vec2 texCoord;
uniform mat4 modelViewProjectionMatrix;
void main()
{
gl_Position = modelViewProjectionMatrix * vertexPosition;
texCoord = vertexTexCoord;
}
My fragment shader:
precision mediump float;
varying vec2 texCoord;
uniform sampler2D texSampler2D;
void main()
{
gl_FragColor = texture2D(texSampler2D, texCoord);
}
Init Shader:
if (shader2D == nil) {
shader2D = [[Shader2D alloc] init];
shader2D.shaderProgramID = [ShaderUtils compileShaders:vertexShader2d :fragmentShader2d];
if (0 < shader2D.shaderProgramID) {
shader2D.vertexHandle = glGetAttribLocation(shader2D.shaderProgramID, "vertexPosition");
shader2D.textureCoordHandle = glGetAttribLocation(shader2D.shaderProgramID, "vertexTexCoord");
shader2D.mvpMatrixHandle = glGetUniformLocation(shader2D.shaderProgramID, "modelViewProjectionMatrix");
shader2D.texSampler2DHandle = glGetUniformLocation(shader2D.shaderProgramID,"texSampler2D");
}
else {
NSLog(@"Could not initialise shader2D");
}
}
return shader2D;
Rendering:
GLKMatrix4 mvpMatrix;
mvpMatrix = [self position: position];
mvpMatrix = GLKMatrix4Multiply([QCARutils getInstance].projectionMatrix, mvpMatrix);
glUseProgram(shader.shaderProgramID);
glVertexAttribPointer(shader.vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)vertices);
glVertexAttribPointer(shader.textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)texCoords);
glEnableVertexAttribArray(shader.vertexHandle);
glEnableVertexAttribArray(shader.textureCoordHandle);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, [texture textureID]);
glUniformMatrix4fv(shader.mvpMatrixHandle, 1, GL_FALSE, (const GLfloat*)&mvpMatrix);
glUniform1i(shader.texSampler2DHandle, 0);
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, (const GLvoid*)indices);
glDisableVertexAttribArray(shader.vertexHandle);
glDisableVertexAttribArray(shader.textureCoordHandle);
It seems to work properly when one texture coordinate corresponds to one and only one vertex coordinate (number of texCoords == number of vertices).
My question: does OpenGL assign a texture coordinate to one and only one vertex? In other words, when texture coordinates and vertex coordinates are not in one-to-one correspondence, what will the rendering result be?
Yes, there needs to be a one-to-one correspondence between vertices and texCoords -- all information passed down the OpenGL pipeline is per-vertex, so every normal and every texCoord must have a vertex.
Note, however, that you can (and will often need to) have multiple texCoords, normals, or other per-vertex data for the same point in space: e.g. if you're wrapping a texture map around a sphere, there will be a "seam" where the ends of the rectangular texture meet. At those spots you'll need to have multiple vertices that occupy the same point.
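One common way to get that one-to-one correspondence is to de-index the mesh, duplicating shared positions so each corner carries its own texCoord. A minimal sketch (the deindex helper and the data here are illustrative, not from the question):

```javascript
// Expand an indexed mesh so every texCoord gets its own vertex.
// Positions are shared via indices; texCoords are given per corner,
// so shared positions must be duplicated.
function deindex(positions, indices, texCoordsPerCorner) {
  const outPositions = [];
  for (const i of indices) {
    outPositions.push(positions[3 * i], positions[3 * i + 1], positions[3 * i + 2]);
  }
  // texCoordsPerCorner already has one (u, v) pair per corner, so it now lines up 1:1.
  return { positions: outPositions, texCoords: texCoordsPerCorner.slice() };
}

// Two triangles sharing positions 1 and 2, but with different texCoords at the shared edge:
const positions = [0, 0, 0,   1, 0, 0,   0, 1, 0,   1, 1, 0];
const indices = [0, 1, 2,   1, 3, 2];
const texCoords = [0, 0,  1, 0,  0, 1,   0, 0,  1, 1,  0, 1];
const mesh = deindex(positions, indices, texCoords);
// mesh.positions now holds 6 vertices (18 floats), matching 6 texCoord pairs.
```

This is exactly the "multiple vertices occupying the same point" situation described above, applied to a seam.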
I am having a look at WebGL and trying to render a cube, but I hit a problem when I add projection to the vertex shader. I added a uniform for it, but when I use it to multiply the modelview and position, the cube stops displaying. I'm not sure why, and was wondering if anyone could help. I've tried looking at a few examples but just can't get this to work.
vertex shader
attribute vec3 aVertexPosition;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
void main(void) {
gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
//gl_Position = uMVMatrix * vec4(aVertexPosition, 1.0);
}
fragment shader
#ifdef GL_ES
precision highp float; // Not sure why this is required, need to google it
#endif
uniform vec4 uColor;
void main() {
gl_FragColor = uColor;
}
function init() {
// Get a reference to our drawing surface
canvas = document.getElementById("webglSurface");
gl = canvas.getContext("experimental-webgl");
/** Create our simple program **/
// Get our shaders
var v = document.getElementById("vertexShader").firstChild.nodeValue;
var f = document.getElementById("fragmentShader").firstChild.nodeValue;
// Compile vertex shader
var vs = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vs, v);
gl.compileShader(vs);
// Compile fragment shader
var fs = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fs, f);
gl.compileShader(fs);
// Create program and attach shaders
program = gl.createProgram();
gl.attachShader(program, vs);
gl.attachShader(program, fs);
gl.linkProgram(program);
// Some debug code to check for shader compile errors and log them to console
if (!gl.getShaderParameter(vs, gl.COMPILE_STATUS))
console.log(gl.getShaderInfoLog(vs));
if (!gl.getShaderParameter(fs, gl.COMPILE_STATUS))
console.log(gl.getShaderInfoLog(fs));
if (!gl.getProgramParameter(program, gl.LINK_STATUS))
console.log(gl.getProgramInfoLog(program));
/* Create some simple VBOs*/
// Vertices for a cube
var vertices = new Float32Array([
-0.5, 0.5, 0.5, // 0
-0.5, -0.5, 0.5, // 1
0.5, 0.5, 0.5, // 2
0.5, -0.5, 0.5, // 3
-0.5, 0.5, -0.5, // 4
-0.5, -0.5, -0.5, // 5
-0.5, 0.5, -0.5, // 6
-0.5,-0.5, -0.5 // 7
]);
// Indices of the cube
var indicies = new Int16Array([
0, 1, 2, 1, 2, 3, // front
5, 4, 6, 5, 6, 7, // back
0, 1, 5, 0, 5, 4, // left
2, 3, 6, 6, 3, 7, // right
0, 4, 2, 4, 2, 6, // top
5, 3, 1, 5, 3, 7 // bottom
]);
// create vertices object on the GPU
vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
// Create indicies object on the GPU
ibo = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indicies, gl.STATIC_DRAW);
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.enable(gl.DEPTH_TEST);
// Render scene every 33 milliseconds
setInterval(render, 33);
}
var mvMatrix = mat4.create();
var pMatrix = mat4.create();
function render() {
// Set our viewport and clear it before we render
gl.viewport(0, 0, canvas.width, canvas.height);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.useProgram(program);
// Bind appropriate VBOs
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);
// Set the color for the fragment shader
program.uColor = gl.getUniformLocation(program, "uColor");
gl.uniform4fv(program.uColor, [0.3, 0.3, 0.3, 1.0]);
//
// code.google.com/p/glmatrix/wiki/Usage
program.uPMatrix = gl.getUniformLocation(program, "uPMatrix");
program.uMVMatrix = gl.getUniformLocation(program, "uMVMatrix");
mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 1.0, 10.0, pMatrix);
mat4.identity(mvMatrix);
mat4.translate(mvMatrix, [0.0, -0.25, -1.0]);
gl.uniformMatrix4fv(program.uPMatrix, false, pMatrix);
gl.uniformMatrix4fv(program.uMVMatrix, false, mvMatrix);
// Set the position for the vertex shader
program.aVertexPosition = gl.getAttribLocation(program, "aVertexPosition");
gl.enableVertexAttribArray(program.aVertexPosition);
gl.vertexAttribPointer(program.aVertexPosition, 3, gl.FLOAT, false, 3*4, 0); // position
// Render the Object
gl.drawElements(gl.TRIANGLES, 36, gl.UNSIGNED_SHORT, 0);
}
Thanks in advance for any help
Problem is here:
..., gl.viewportWidth / gl.viewportHeight, ...
Both gl.viewportWidth and gl.viewportHeight are undefined values.
I think you missed these two lines:
gl.viewportWidth = canvas.width;
gl.viewportHeight = canvas.height;
You will see a lot of people doing this:
canvas.width = canvas.clientWidth;
canvas.height = canvas.clientHeight;
gl.viewportWidth = canvas.width;
gl.viewportHeight = canvas.height;
But please note that the WebGL context also has these two attributes:
gl.drawingBufferWidth
gl.drawingBufferHeight
So your cube shows up without the perspective matrix, correct?
At first glance I would think that you may be clipping away your geometry with the near plane. You provide the near and far planes to the perspective function as 1.0 and 10.0 respectively. This means that for any fragments to be visible they must fall in the z range of [1, 10]. Your cube is 1 unit per side, centered on (0, 0, 0), and you are moving it "back" from the camera 1 unit. This means that the nearest face to the camera will actually be at 0.5 Z, which is outside the clipping range and therefore discarded. About half of your cube WILL be at z > 1, but you'll be looking at the inside of the cube at that point, and if you have backface culling turned on you won't see anything.
Long story short: your cube is probably too close to the camera. Try this instead:
mat4.translate(mvMatrix, [0.0, -0.25, -3.0]);
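The depth argument above can be double-checked numerically (a quick sketch using the question's values):

```javascript
// Eye-space depth check for the cube from the question.
const near = 1.0, far = 10.0;  // from mat4.perspective(45, aspect, 1.0, 10.0)
const cubeCenterZ = -1.0;      // the mvMatrix translate z
const halfSide = 0.5;          // the cube is 1 unit per side
// Distance from the camera to the nearest face (the camera looks down -Z):
const nearestFaceDepth = -cubeCenterZ - halfSide;           // 0.5
const visible = nearestFaceDepth >= near && nearestFaceDepth <= far; // false
// With the suggested translate of -3.0, the faces sit at depths 2.5..3.5,
// comfortably inside [1, 10]:
const movedNearestDepth = 3.0 - halfSide;                   // 2.5
```

So at -1.0 the front face sits in front of the near plane and gets clipped, while at -3.0 the whole cube is inside the frustum.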