iOS: GPUImage Library and VBO

I am making use of Brad Larson's wonderful GPUImage library for image manipulation. So far, it's been great. However, I'm trying to add a filter to allow mesh deformation and I'm running into quite a few issues. Specifically, I want a filter that uses VBOs to render the quad so I can ultimately change the vertices dynamically for the deformation.
The first step of using VBOs is causing a crash.
I created a subclass of GPUImageFilter overriding the - (void)newFrameReadyAtTime:(CMTime)frameTime method to render a quad via VBOs. NOTE: I am simply trying to render a single quad rather than a full mesh, so that I can tackle one issue at a time.
@implementation GPUMeshImageFilter {
GLuint _positionVBO;
GLuint _texcoordVBO;
GLuint _indexVBO;
BOOL isSetup_;
}
- (void)setupBuffers
{
static const GLsizeiptr verticesSize = 4 * 2 * sizeof(GLfloat);
static const GLfloat squareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
static const GLsizeiptr textureSize = 4 * 2 * sizeof(GLfloat);
static const GLfloat squareTextureCoordinates[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
static const GLsizeiptr indexSize = 4 * sizeof(GLushort);
static const GLushort indices[] = {
0,1,2,3,
};
glGenBuffers(1, &_indexVBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexVBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexSize, indices, GL_STATIC_DRAW);
glGenBuffers(1, &_positionVBO);
glBindBuffer(GL_ARRAY_BUFFER, _positionVBO);
glBufferData(GL_ARRAY_BUFFER, verticesSize, squareVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2*sizeof(GLfloat), 0);
glGenBuffers(1, &_texcoordVBO);
glBindBuffer(GL_ARRAY_BUFFER, _texcoordVBO);
glBufferData(GL_ARRAY_BUFFER, textureSize, squareTextureCoordinates, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 2*sizeof(GLfloat), 0);
NSLog(#"Setup complete");
}
- (void)newFrameReadyAtTime:(CMTime)frameTime;
{
if (!isSetup_) {
[self setupBuffers];
isSetup_ = YES;
}
if (self.preventRendering)
{
return;
}
[GPUImageOpenGLESContext useImageProcessingContext];
[self setFilterFBO];
[filterProgram use];
glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, filterSourceTexture);
glUniform1i(filterInputTextureUniform, 2);
if (filterSourceTexture2 != 0)
{
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, filterSourceTexture2);
glUniform1i(filterInputTextureUniform2, 3);
}
NSLog(#"Draw VBO");
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, 0);
[self informTargetsAboutNewFrameAtTime:frameTime];
}
@end
Plugging in this filter, I see: "Setup complete" and "Draw VBO" displayed to the console. However, after it calls the target (in this case a GPUImageView) it crashes at the target's drawing call, which uses glDrawArrays.
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Here is the complete method that contains this line.
- (void)newFrameReadyAtTime:(CMTime)frameTime;
{
[GPUImageOpenGLESContext useImageProcessingContext];
[self setDisplayFramebuffer];
[displayProgram use];
glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
static const GLfloat textureCoordinates[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
glActiveTexture(GL_TEXTURE4);
glBindTexture(GL_TEXTURE_2D, inputTextureForDisplay);
glUniform1i(displayInputTextureUniform, 4);
glVertexAttribPointer(displayPositionAttribute, 2, GL_FLOAT, 0, 0, imageVertices);
glVertexAttribPointer(displayTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[self presentFramebuffer];
}
Any help would be greatly appreciated; I've been banging my head against this for a while.

It looks likely that the crash occurs because GL_ARRAY_BUFFER is still bound when GPUImageView's -newFrameReadyAtTime: executes.
Try unbinding the buffer (i.e. binding it to 0) at the end of -setupBuffers:
glBindBuffer(GL_ARRAY_BUFFER, 0);
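A minimal sketch of the change in context (note that only GL_ARRAY_BUFFER should be unbound here; the index VBO must stay bound to GL_ELEMENT_ARRAY_BUFFER for this filter's own glDrawElements call):
// ... existing glVertexAttribPointer calls at the end of -setupBuffers ...
// Unbind so that later client-side glVertexAttribPointer calls
// (e.g. in GPUImageView) are not reinterpreted as offsets into this VBO.
glBindBuffer(GL_ARRAY_BUFFER, 0);
NSLog(@"Setup complete");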
The reason this is a problem is that GPUImage uses the same OpenGL context from one GPUImageInput (e.g. GPUImageFilter, GPUImageView) to the next, largely so that each step can render to an OpenGL texture and then have that texture directly available to the next GPUImageInput.
So because GL_ARRAY_BUFFER is still bound, the behavior of glVertexAttribPointer inside GPUImageView's -newFrameReadyAtTime: changes: it effectively tries to point the displayPositionAttribute attribute at the populated VBO, using the imageVertices pointer as a byte offset, which is nonsensical and likely to cause a crash. See the glVertexAttribPointer docs.
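To make the two modes concrete, here is a quick sketch (attrib and someVBO are hypothetical names used only for illustration):
// No VBO bound: the last argument is a pointer to client memory.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glVertexAttribPointer(attrib, 2, GL_FLOAT, GL_FALSE, 0, imageVertices);
// VBO bound: the same argument is now interpreted as a byte offset into the buffer.
glBindBuffer(GL_ARRAY_BUFFER, someVBO);
glVertexAttribPointer(attrib, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid *)0);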

The code below doesn't look right to me at all. Why are you enabling vertex attrib arrays 4 and 5? You should enable the array at the attribute location you are intending to use.
//position vbo
glEnableVertexAttribArray(4);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2*sizeof(GLfloat), 0);
//texcoord vbo
glEnableVertexAttribArray(5);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 2*sizeof(GLfloat), 0);
If your vertex attribute is at location 0, you should enable attrib 0 and set the pointer for attrib 0. If it's at location 4 (which I doubt), then you should enable attrib 4 and set the pointer for location 4. I can't think of any reason it should be mismatched like you have it.
You should get the proper location by setting it via a layout qualifier, by calling glBindAttribLocation before shader linking, or by calling glGetAttribLocation after linking.
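For example, here is a sketch of both options (position and inputTextureCoordinate are the attribute names in GPUImage's default vertex shader; verify against the shader you actually use):
// Option 1: pin the locations yourself before linking, then use those indices.
glBindAttribLocation(program, 0, "position");
glBindAttribLocation(program, 1, "inputTextureCoordinate");
glLinkProgram(program);
// Option 2: after linking, ask GL where the linker put them and use the result.
GLint positionAttribute = glGetAttribLocation(program, "position");
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, 2*sizeof(GLfloat), 0);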
Let me know if this doesn't make sense.

Related

No transparency with simple OpenGL ES2.0 stencils

I am attempting to make a stencil mask in OpenGL. I have been following the model from this source (http://open.gl/depthstencils and, more specifically, http://open.gl/content/code/c5_reflection.txt), and as far as I can tell I have followed the example properly. My code draws one square stencil, and then another square on top of it. I expected to see only the parts of the second rotating green square that cover the same space as the first. What I actually see is the two overlapping squares, one rotating with no transparency. One notable difference from the example is that I am not using a texture. Is that a problem? I figured this would be a simpler example.
I'm fairly new to ES2.0, so if I'm doing something obviously stupid, please let me know.
Initialization:
GLuint attributes[] = { GLKVertexAttribPosition, GLKVertexAttribColor, GLKVertexAttribTexCoord0 };
const char *attributeNames[] = { "position", "color", "texcoord0" };
// These are all global GLuint variables
// vshSrc and fshSrc are const char* filenames (the files are found properly)
_myProgram = loadProgram(vshSrc, fshSrc, 3, attributes, attributeNames);
_myProgramUniformMVP = glGetUniformLocation(_myProgram, "modelViewProjectionMatrix");
_myProgramUniformTex = glGetUniformLocation(_myProgram, "tex");
_myProgramUniformOverrideColor = glGetUniformLocation(_myProgram, "overrideColor");
The draw loop:
glEnable(GL_DEPTH_TEST);
glUseProgram(_myProgram);
glDisable(GL_BLEND);
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
GLfloat gSquare[20] = { // not using the textures currently
// posX, posY, posZ, texX, texY,
-0.5f, -0.5f, 0, 0.0f, 0.0f,
0.5f, -0.5f, 0, 1.0f, 0.0f,
-0.5f, 0.5f, 0, 0.0f, 1.0f,
0.5f, 0.5f, 0, 1.0f, 1.0f
};
// Projection matrix
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
// Put the squares where they can be seen
GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f);
glEnable(GL_STENCIL_TEST);
// Build the stencil
glStencilFunc(GL_ALWAYS, 1, 0xFF); // Set any stencil to 1
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilMask(0xFF); // Write to stencil buffer
glDepthMask(GL_FALSE); // Don't write to depth buffer
glClear(GL_STENCIL_BUFFER_BIT); // Clear stencil buffer (0 by default)
GLKMatrix4 mvp = GLKMatrix4Multiply(projectionMatrix, baseModelViewMatrix);
// Draw a stationary red square for the stencil (though the color shouldn't matter)
glUniformMatrix4fv(_myProgramUniformMVP, 1, 0, mvp.m);
glUniform1i(_myProgramUniformTex, 0);
glUniform4f(_myProgramUniformOverrideColor, 1.0f, 1.0f, 1.0f, 0.0f);
glVertexAttrib4f(GLKVertexAttribColor, 1, 0, 0, 1);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 20, gSquare);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Prepare the mask
glStencilFunc(GL_EQUAL, 1, 0xFF); // Pass test if stencil value is 1
glStencilMask(0x00); // Don't write anything to stencil buffer
glDepthMask(GL_TRUE); // Write to depth buffer
glUniform4f(_myProgramUniformOverrideColor, 0.3f, 0.3f, 0.3f,1.0f);
// A slow rotating green square to be masked by the stencil
static float rotation = 0;
rotation += 0.01;
baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, rotation, 0, 0, 1);
mvp = GLKMatrix4Multiply(projectionMatrix, baseModelViewMatrix);
glUniformMatrix4fv(_myProgramUniformMVP, 1, 0, mvp.m);//The transformation matrix
glUniform1i(_myProgramUniformTex, 0); // The texture
glVertexAttrib4f(GLKVertexAttribColor, 0, 1, 0, 1);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 20, gSquare);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisable(GL_STENCIL_TEST);
EDIT: The following shader stuff is irrelevant to the problem I was having. The stenciling does not take place in the shader.
Vertex Shader:
attribute vec4 position;
attribute vec4 color;
attribute vec2 texcoord0;
varying lowp vec4 colorVarying;
varying lowp vec2 texcoord;
uniform mat4 modelViewProjectionMatrix;
uniform vec4 overrideColor;
void main()
{
colorVarying = overrideColor * color;
texcoord = texcoord0;
gl_Position = modelViewProjectionMatrix * position;
}
Fragment Shader:
varying lowp vec4 colorVarying;
varying lowp vec2 texcoord;
uniform sampler2D tex;
void main()
{
gl_FragColor = colorVarying * texture2D(tex, texcoord);
}
It was necessary to initialize a stencil buffer. Here is the code that fixed it.
glGenRenderbuffersOES(1, &depthStencilRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH24_STENCIL8_OES, framebufferWidth, framebufferHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_STENCIL_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
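As a side note, if the drawable is managed by a GLKView rather than a hand-rolled framebuffer, the same fix should be achievable (assuming GLKit here) by requesting depth and stencil formats on the view:
// GLKView can allocate the packed depth/stencil renderbuffer for you.
GLKView *view = (GLKView *)self.view;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
view.drawableStencilFormat = GLKViewDrawableStencilFormat8;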

OpenGL 2.0 rendering multiple objects with the same shader

I would like to know how to use the same shader for multiple objects but allow those objects to have different colours.
I have many cubes on the screen which all currently load the same shader; the only difference is that when each cube is drawn, I change its colour. If I set the same _program for all of them, they all become the same colour.
- (void)draw:(float)eyeOffset
{
// Calculate the per-eye model view matrix:
GLKMatrix4 temp = GLKMatrix4MakeTranslation(eyeOffset, 0.0f, 0.0f);
GLKMatrix4 eyeBaseModelViewMatrix = GLKMatrix4Multiply(temp, self.baseModelViewMatrix);
if (self.isTransparant)
{
glEnable (GL_BLEND);
glDisable(GL_CULL_FACE);
//glDisable(GL_DEPTH_TEST);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
if (self.textureInfo)
{
glBindTexture(self.textureInfo.target, self.textureInfo.name);
}
glBindVertexArrayOES(_vertexArray);
//See if we are sharing a program shader
if (self.tprogram)
{
glUseProgram(self.tprogram);
}
else
{
glUseProgram(_program);
}
self.modelViewMatrix = GLKMatrix4MakeTranslation(self.position.x,self.position.y, self.position.z );//(float)x, (float)y, -1.5f)
self.modelViewMatrix = GLKMatrix4Scale(self.modelViewMatrix, self.scale.x, self.scale.y, self.scale.z);
//rotation +=0.01;
self.modelViewMatrix = GLKMatrix4Rotate(self.modelViewMatrix,self.spinRotation, 0.0 ,0.0 ,1.0);
self.modelViewMatrix = GLKMatrix4Multiply(eyeBaseModelViewMatrix, self.modelViewMatrix);
GLKMatrix3 normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(self.modelViewMatrix), NULL);
GLKMatrix4 modelViewProjectionMatrix = GLKMatrix4Multiply(self.projectionMatrix, self.modelViewMatrix);
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, modelViewProjectionMatrix.m);
glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, normalMatrix.m);
_colorSlot = glGetUniformLocation(_program, "color");
GLfloat color[] = {
self.color.x, self.color.y, self.color.z, self.color.a};
glUniform4fv(_colorSlot, 1, color);
glDrawArrays(GL_TRIANGLES, 0, 36);
if (self.isTransparant)
{
glEnable(GL_CULL_FACE);
//glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
}
}
//setup for each cube
- (void)setup;
{
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(gCubeVertexData), gCubeVertexData, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 32, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 32, BUFFER_OFFSET(12));
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 32, BUFFER_OFFSET(24));
glBindVertexArrayOES(0);
}
Shader
attribute vec4 position;
attribute vec3 normal;
uniform vec4 color;
varying lowp vec4 colorVarying;
uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
void main()
{
//vec4 diffuseColor = color;
vec3 eyeNormal = normalize(normalMatrix * normal);
vec3 lightPosition = vec3(0.0, 0.0, 1.0);
//diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);
float nDotVP = max(0.7, dot(eyeNormal, normalize(lightPosition))); // 0.0
colorVarying = color * nDotVP;
gl_Position = modelViewProjectionMatrix * position;
}
I thought uniform vec4 color; allowed me to change the colour at any time. If every object has its own shader it works fine, and I can change object colours on the fly.
How about sending a different uniform value for each cube (say uniform vec4 cubeColor, used in your fragment shader) before calling glDrawArrays() on it?
Alternatively, you could upload, for each cube, both vertices and vertex colors during the setup; then, when drawing, bind the appropriate vertex buffer (e.g. attribute vec3 a_vertex) and vertex-color buffer (e.g. attribute vec4 a_vertexColor, which you assign in your vertex shader to varying vec4 v_vertexColor and read in your fragment shader as varying vec4 v_vertexColor).
Also, as a side note, if you're planning to use the same program everywhere, you can call glUseProgram() once during setup (OpenGL is based on a state machine, which means it retains certain states, such as the current program, as long as you don't change them). This might enhance the performance of your program a little bit ;-)
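A sketch of that flow, assuming one shared _program whose vertex shader declares uniform vec4 color as in the question (the Cube class and cubes collection are hypothetical):
// Once, at setup:
glUseProgram(_program);
GLint colorSlot = glGetUniformLocation(_program, "color");
// Per frame: only the uniform (and the geometry) changes between draws.
for (Cube *cube in cubes) {
    glBindVertexArrayOES(cube.vertexArray);
    GLfloat color[] = { cube.color.x, cube.color.y, cube.color.z, cube.color.a };
    glUniform4fv(colorSlot, 1, color);
    glDrawArrays(GL_TRIANGLES, 0, 36);
}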
Good luck.

Open GL Rendering not proper in iOS when using glDrawArrays(GL_TRIANGLES,0,numVert);

I am using the following code to render a model stored in a .h file with OpenGL.
However, the eventual result comes out as scattered triangles and not the whole model. Please see the attached image. Can anyone guide me as to why this is happening? I am new to OpenGL.
I want to develop an app like - https://itunes.apple.com/us/app/mclaren-p1/id562173543?mt=8
- (void)renderFrameQCAR
{
[self setFramebuffer];
// Clear colour and depth buffers
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Render video background and retrieve tracking state
QCAR::State state = QCAR::Renderer::getInstance().begin();
QCAR::Renderer::getInstance().drawVideoBackground();
//NSLog(@"active trackables: %d", state.getNumActiveTrackables());
if (QCAR::GL_11 & qUtils.QCARFlags) {
glEnable(GL_TEXTURE_2D);
glEnable(GL_LIGHTING);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
}
glEnable(GL_DEPTH_TEST);
// We must detect if background reflection is active and adjust the culling direction.
// If the reflection is active, this means the pose matrix has been reflected as well,
// therefore standard counter clockwise face culling will result in "inside out" models.
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
if(QCAR::Renderer::getInstance().getVideoBackgroundConfig().mReflection == QCAR::VIDEO_BACKGROUND_REFLECTION_ON)
glFrontFace(GL_CW); //Front camera
else
glFrontFace(GL_CCW); //Back camera
for (int i = 0; i < state.getNumTrackableResults(); ++i) {
// Get the trackable
const QCAR::TrackableResult* result = state.getTrackableResult(i);
const QCAR::Trackable& trackable = result->getTrackable();
QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());
// Choose the texture based on the target name
int targetIndex = 0; // "stones"
if (!strcmp(trackable.getName(), "chips"))
targetIndex = 1;
else if (!strcmp(trackable.getName(), "tarmac"))
targetIndex = 2;
Object3D *obj3D = [objects3D objectAtIndex:targetIndex];
// Render using the appropriate version of OpenGL
if (QCAR::GL_11 & qUtils.QCARFlags) {
// Load the projection matrix
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(qUtils.projectionMatrix.data);
// Load the model-view matrix
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(modelViewMatrix.data);
glTranslatef(0.0f, 0.0f, -kObjectScale);
glScalef(kObjectScale, kObjectScale, kObjectScale);
// Draw object
glBindTexture(GL_TEXTURE_2D, [obj3D.texture textureID]);
glTexCoordPointer(2, GL_FLOAT, 0, (const GLvoid*)obj3D.texCoords);
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*)obj3D.vertices);
glVertexPointer(3, GL_FLOAT, 0, MclarenInfoVerts);
glNormalPointer(GL_FLOAT, 0, MclarenInfoNormals);
glTexCoordPointer(2, GL_FLOAT, 0, MclarenInfoTexCoords);
// draw data
glDrawArrays(GL_TRIANGLES, 0, MclarenInfoNumVerts);
}
#ifndef USE_OPENGL1
else {
// OpenGL 2
QCAR::Matrix44F modelViewProjection;
ShaderUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale, &modelViewMatrix.data[0]);
ShaderUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale, &modelViewMatrix.data[0]);
ShaderUtils::multiplyMatrix(&qUtils.projectionMatrix.data[0], &modelViewMatrix.data[0], &modelViewProjection.data[0]);
glUseProgram(shaderProgramID);
glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)obj3D.vertices);
glVertexAttribPointer(normalHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)obj3D.normals);
glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)obj3D.texCoords);
glEnableVertexAttribArray(vertexHandle);
glEnableVertexAttribArray(normalHandle);
glEnableVertexAttribArray(textureCoordHandle);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, [obj3D.texture textureID]);
glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE, (const GLfloat*)&modelViewProjection.data[0]);
glUniform1i(texSampler2DHandle, 0 /*GL_TEXTURE0*/);
glVertexPointer(3, GL_FLOAT, 0, MclarenInfoVerts);
glNormalPointer(GL_FLOAT, 0, MclarenInfoNormals);
glTexCoordPointer(2, GL_FLOAT, 0, MclarenInfoTexCoords);
// draw data
glDrawArrays(GL_TRIANGLES, 0, MclarenInfoNumVerts);
ShaderUtils::checkGlError("EAGLView renderFrameQCAR");
}
#endif
}
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
if (QCAR::GL_11 & qUtils.QCARFlags) {
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
#ifndef USE_OPENGL1
else {
glDisableVertexAttribArray(vertexHandle);
glDisableVertexAttribArray(normalHandle);
glDisableVertexAttribArray(textureCoordHandle);
}
#endif
QCAR::Renderer::getInstance().end();
[self presentFramebuffer];
}
Ok, I found the issue. It so happened that my designer was giving me a file with quadrilateral geometry, whereas Vuforia/Qualcomm/Unity etc. recognize only triangulated .obj files. For anyone who gets stuck here :) just let your designer know to export triangulated geometry, not quads.

Opengl 3.3 rendering nothing

I'm trying to create my own lib to simplify code, so I'm rewriting the tutorials that can be found on the web using my lib, but I'm having some trouble and I don't know why it's rendering nothing.
So this is my main file:
#include "../../lib/OpenGLControl.h"
#include "../../lib/Object.h"
#include <iostream>
using namespace std;
using namespace sgl;
int main(){
OpenGLControl sglControl;
sglControl.initOpenGL("02 - My First Triangle",1024,768,3,3);
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);
// triangle configuration
vector<glm::vec3> vertices;
vertices.push_back(glm::vec3(-1.0f,-1.0f,0.0f));
vertices.push_back(glm::vec3(1.0f,-1.0f,0.0f));
vertices.push_back(glm::vec3(0.0f,1.0f,0.0f));
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
Object triangle(vertices);
do{
glClear(GL_COLOR_BUFFER_BIT);
triangle.render(GL_TRIANGLES);
glfwSwapBuffers();
}
while( glfwGetKey( GLFW_KEY_ESC ) != GLFW_PRESS &&
glfwGetWindowParam( GLFW_OPENED ) );
glDeleteVertexArrays(1, &VertexArrayID);
glfwTerminate();
return 0;
}
And these are my Object class functions:
#include "../lib/Object.h"
sgl::Object::Object(){
this->hasColor = false;
}
sgl::Object::Object(std::vector<glm::vec3> vertices){
for(int i = 0; i < vertices.size(); i++)
this->vertices.push_back(vertices[i]);
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, this->vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, this->vertices.size()*sizeof(glm::vec3),&vertices[0], GL_STATIC_DRAW);
}
sgl::Object::~Object(){
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDeleteBuffers(1,&(this->vertexBuffer));
glDeleteBuffers(1,&(this->colorBuffer));
}
void sgl::Object::render(GLenum mode){
// 1rst attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, this->vertexBuffer);
glVertexAttribPointer(
0, // The attribute we want to configure
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
cout<<vertices.size()<<endl;
glDrawArrays(mode, 0, vertices.size());
glDisableVertexAttribArray(0);
}
void sgl::Object::setColor(std::vector<glm::vec3> color){
for(int i = 0; i < color.size(); i++)
this->color.push_back(color[i]);
glGenBuffers(1, &(this->colorBuffer));
glBindBuffer(GL_ARRAY_BUFFER, this->colorBuffer);
glBufferData(GL_ARRAY_BUFFER, color.size()*sizeof(glm::vec3),&color[0], GL_STATIC_DRAW);
this->hasColor = true;
}
void sgl::Object::setVertices(std::vector<glm::vec3> vertices){
for(int i = 0; i < vertices.size(); i++)
this->vertices.push_back(vertices[i]);
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, this->vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, vertices.size()*sizeof(glm::vec3),&vertices[0], GL_STATIC_DRAW);
}
The tutorial that I'm rewriting is this:
/*
Copyright 2010 Etay Meiri
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Tutorial 03 - First triangle
*/
#include <stdio.h>
#include <GL/glew.h>
#include <GL/freeglut.h>
#include "math_3d.h"
GLuint VBO;
static void RenderSceneCB()
{
glClear(GL_COLOR_BUFFER_BIT);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(0);
glutSwapBuffers();
}
static void InitializeGlutCallbacks()
{
glutDisplayFunc(RenderSceneCB);
}
static void CreateVertexBuffer()
{
Vector3f Vertices[3];
Vertices[0] = Vector3f(-1.0f, -1.0f, 0.0f);
Vertices[1] = Vector3f(1.0f, -1.0f, 0.0f);
Vertices[2] = Vector3f(0.0f, 1.0f, 0.0f);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA);
glutInitWindowSize(1024, 768);
glutInitWindowPosition(100, 100);
glutCreateWindow("Tutorial 03");
InitializeGlutCallbacks();
// Must be done after glut is initialized!
GLenum res = glewInit();
if (res != GLEW_OK) {
fprintf(stderr, "Error: '%s'\n", glewGetErrorString(res));
return 1;
}
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
CreateVertexBuffer();
glutMainLoop();
return 0;
}
If someone can find the error, please help me!
I found the problem: in the constructor, when I call glBufferData, I was sending the wrong data. The right way to do it is this:
sgl::Object::Object(std::vector<glm::vec3> vertices){
for(int i = 0; i < vertices.size(); i++)
this->vertices.push_back(vertices[i]);
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, this->vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, this->vertices.size()*sizeof(glm::vec3),&(this->vertices[0]), GL_STATIC_DRAW);
}

VBO glDrawElements and glVertexAttribPointer on GLES2.0 displays nothing

I can display a texture using shaders, glVertexAttribPointer and glDrawArrays like so:
Init
const GLfloat squareVertices[] = {
-0.5f, -0.33f,
0.5f, -0.33f,
-0.5f, 0.33f,
0.5f, 0.33f
};
const GLfloat squareTex[] = {
0, 0,
1, 0,
0, 1,
1, 1
};
glEnableVertexAttribArray(PositionTag);
glEnableVertexAttribArray(TexCoord0Tag);
glVertexAttribPointer(PositionTag, 2, GL_FLOAT, GL_FALSE, 0, squareVertices);
glVertexAttribPointer(TexCoord0Tag, 2, GL_FLOAT, GL_FALSE, 0, squareTex);
And for draw
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
But I'm having difficulty converting to VBOs, shaders and glDrawElements. This is the code I have so far, but nothing displays:
Header
typedef struct MyVertex
{
float x, y, z; //Vertex
float nx, ny, nz; //Normal
float s0, t0; //Texcoord0
} MyVertex;
#define BUFFER_OFFSET(i) ((char *)NULL + (i))
Init
glGenBuffers(1, &VertexVBOID);
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
MyVertex pvertices[4];
//Fill the pvertices array
pvertices[0].x = -0.5f;
pvertices[0].y = -0.33f;
pvertices[0].z = 0.0;
pvertices[0].nx = 0.0;
pvertices[0].ny = 0.0;
pvertices[0].nz = 1.0;
pvertices[0].s0 = 0.0;
pvertices[0].t0 = 0.0;
pvertices[1].x = 0.5f;
pvertices[1].y = -0.33f;
pvertices[1].z = 0.0;
pvertices[1].nx = 0.0;
pvertices[1].ny = 0.0;
pvertices[1].nz = 1.0;
pvertices[1].s0 = 1.0;
pvertices[1].t0 = 0.0;
pvertices[2].x = -0.5f;
pvertices[2].y = 0.33f;
pvertices[2].z = 0.0;
pvertices[2].nx = 0.0;
pvertices[2].ny = 0.0;
pvertices[2].nz = 1.0;
pvertices[2].s0 = 0.0;
pvertices[2].t0 = 1.0;
pvertices[3].x = 0.5f;
pvertices[3].y = 0.33f;
pvertices[3].z = 0.0;
pvertices[3].nx = 0.0;
pvertices[3].ny = 0.0;
pvertices[3].nz = 1.0;
pvertices[3].s0 = 1.0;
pvertices[3].t0 = 1.0;
glBufferData(GL_ARRAY_BUFFER, sizeof(MyVertex)*4, NULL, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(MyVertex)*4, pvertices);
glGenBuffers(1, &IndexVBOID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
int pindices[6];
pindices[0]=0;
pindices[1]=1;
pindices[2]=2;
pindices[3]=2;
pindices[4]=1;
pindices[5]=3;
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(int)*6, NULL, GL_STATIC_DRAW);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, sizeof(int)*6, pindices);
Draw
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glEnableVertexAttribArray(PositionTag);
glEnableVertexAttribArray(NormalTag);
glEnableVertexAttribArray(TexCoord0Tag);
glVertexAttribPointer(PositionTag, 3, GL_FLOAT, GL_FALSE, 32, BUFFER_OFFSET(0));
glVertexAttribPointer(NormalTag, 3, GL_FLOAT, GL_FALSE, 32, BUFFER_OFFSET(12));
glVertexAttribPointer(TexCoord0Tag, 2, GL_FLOAT, GL_FALSE, 32, BUFFER_OFFSET(24));
// glDrawRangeElements(GL_TRIANGLES, x, y, z, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glDrawElements(GL_TRIANGLES, 3, GL_INT, 0);
According to the glDrawElements documentation, GL_INT is not a valid type to use for indices in glDrawElements. Try using unsigned ints for your indices (and of course GL_UNSIGNED_INT in glDrawElements). You may still use your int data as indices, but as glDrawElements needs GL_UNSIGNED_INT, it would be more consistent to make the array unsigned int.
EDIT: After looking into the specification (based on your tags I took the ES 2.0 spec), it seems to further limit the index type to unsigned byte and unsigned short. I don't know if iOS is that limited, but we can conclude that the data at least has to be unsigned. On the other hand, I haven't found any statement about a possible GL_INVALID_ENUM error being thrown for a wrong type argument, but it would be reasonable to get one.
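Under that constraint, a corrected index setup might look like this sketch (note the draw call should also pass a count of 6, one per index, rather than 3):
GLushort pindices[6] = { 0, 1, 2, 2, 1, 3 };
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(pindices), pindices, GL_STATIC_DRAW);
// ...
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);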
Your code doesn't look terribly wrong, so this time the devil is somewhere in the details. My guess is that your struct's data field alignments don't match the offsets passed to OpenGL.
I suggest you use the offsetof() macro from stddef.h to portably get the offsets of the data fields.
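For example, against the MyVertex struct from the question, the attribute setup becomes (with sizeof(MyVertex) as the stride instead of the hard-coded 32):
#include <stddef.h> // offsetof
glVertexAttribPointer(PositionTag, 3, GL_FLOAT, GL_FALSE, sizeof(MyVertex), BUFFER_OFFSET(offsetof(MyVertex, x)));
glVertexAttribPointer(NormalTag, 3, GL_FLOAT, GL_FALSE, sizeof(MyVertex), BUFFER_OFFSET(offsetof(MyVertex, nx)));
glVertexAttribPointer(TexCoord0Tag, 2, GL_FLOAT, GL_FALSE, sizeof(MyVertex), BUFFER_OFFSET(offsetof(MyVertex, s0)));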
