I am drawing lines using this code
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, serieLine[serie_i]);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_TRUE, 0, colors[serie_i]);
glDrawArrays(GL_LINE_STRIP, 0, count/2);
but the result sometimes looks kind of creepy, like this.
I know that by using GL_TRIANGLE_STRIP I might get better results, but every algorithm I've tried so far to calculate the triangles produces either no result or a very strange one.
Any ideas for getting a better result would be appreciated.
I'm currently developing a drawing app on iOS with OpenGL ES 2.0 (which I've just started using). I would like to reproduce textured brushes in my app, and I decided to use shaders for that (is that the best choice?). At this stage I have my textured brushes working, but unfortunately I also run into performance problems after a few seconds…
Here is an overview of my app process:
I receive about 140 points per second.
Each time, in the draw function, I iterate over all of my points (the points are stored in the Stroke class, which belongs to the layer) and redraw them.
Code:
for (int strokeId = 0; strokeId < layer->strokesList.size(); strokeId++) {
    Stroke* stroke = layer->strokesList.at(strokeId);
    […]
    glVertexAttribPointer(mainProgram.positionSlot, 2, GL_FLOAT, GL_FALSE, 0, stroke->vertices.Position);
    glVertexAttribPointer(mainProgram.colorSlot, 4, GL_FLOAT, GL_FALSE, 0, stroke->vertices.Color);
    glDrawArrays(GL_TRIANGLES, 0, (int)(stroke->nbVertices));
    […]
}
I am open to any suggestions to improve this drawing method, thank you!
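One common fix for this kind of slowdown is to stop re-submitting every point of every stroke each frame. A sketch, under the assumption that each stroke lives in its own VBO (StrokeVbo and pendingUpload are hypothetical names, not from the original app): track how many vertices are already uploaded and hand glBufferSubData only the new ones.

```c
#include <stddef.h>

// Hypothetical bookkeeping for incremental uploads: finished points
// stay resident in the VBO and are never sent over the bus again.
typedef struct {
    int uploadedVerts;   // vertices already resident in the VBO
} StrokeVbo;

// Computes the byte range a glBufferSubData call would need in order
// to append the newly received vertices, updates the bookkeeping, and
// returns the number of new vertices (0 if nothing changed).
int pendingUpload(StrokeVbo *s, int totalVerts, size_t vertexSize,
                  size_t *byteOffset, size_t *byteSize)
{
    int newVerts = totalVerts - s->uploadedVerts;
    if (newVerts <= 0)
        return 0;
    *byteOffset = (size_t)s->uploadedVerts * vertexSize;
    *byteSize   = (size_t)newVerts * vertexSize;
    s->uploadedVerts = totalVerts;
    return newVerts;
}
```

The draw loop would then bind the stroke's VBO and call glBufferSubData(GL_ARRAY_BUFFER, byteOffset, byteSize, newPoints) only when pendingUpload reports new vertices, instead of re-submitting whole client arrays every frame.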
I have a question about a circle with texture mapping. My code works well, but the edges are not antialiased, so the circle isn't smooth and doesn't look good. I've been reading for about three hours now and have found some solutions, but I don't know how to implement them in my code. Two of the solutions sounded pretty good:
First, bind a blurry texture instead of a sharp one to get smooth edges.
Second, add color vertices with opacity along the edges to get smooth edges. My current draw function looks like this:
CC_NODE_DRAW_SETUP();
[self.shaderProgram use];
ccGLBindTexture2D( _texture.name );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
ccGLBlendFunc( _blendFunc.src, _blendFunc.dst);
ccGLEnableVertexAttribs( kCCVertexAttribFlag_Position | kCCVertexAttribFlag_TexCoords );
// Send the texture coordinates to OpenGL
glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, 0, _textCoords);
// Send the polygon coordinates to OpenGL
glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, _triangleFanPos);
// Draw it
glDrawArrays(GL_TRIANGLE_FAN, 0, _numOfSegements+2);
I am currently using cocos2d version 3. I asked a similar question before, and the only solution I found was enabling multisampling in cocos2d, but that drops my fps to 30.
So maybe there is someone who can help me.
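For what it's worth, the second idea (fading opacity at the rim) can also be done per-fragment instead of per-vertex, which avoids multisampling entirely. A sketch of such a fragment shader, assuming the fan's texture coordinates put the circle's center at (0.5, 0.5) and its rim at radius 0.5 (variable names are illustrative); note that fwidth() requires the GL_OES_standard_derivatives extension on OpenGL ES 2.0:

```glsl
#extension GL_OES_standard_derivatives : enable

varying vec2 v_texCoord;
uniform sampler2D u_texture;

void main()
{
    // Distance from the circle's center in texture space
    float dist = distance(v_texCoord, vec2(0.5));
    // fwidth(dist) ~ how much dist changes across one pixel,
    // so the fade band is about one pixel wide on screen
    float edge = fwidth(dist);
    float alpha = 1.0 - smoothstep(0.5 - edge, 0.5, dist);
    vec4 color = texture2D(u_texture, v_texCoord);
    gl_FragColor = vec4(color.rgb, color.a * alpha);
}
```

This only needs the ordinary alpha blend func already set up by ccGLBlendFunc; switching the GL_NEAREST mag filter above to GL_LINEAR would additionally smooth the texture itself.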
I've recently switched drawing in my current project from standard drawing out of a memory array to VBOs. To my surprise, the framerate dropped significantly, from 60 fps to 30 fps, when drawing a model with 1200 vertices 8 times. Further profiling showed that glDrawElements took 10 times as long when using VBOs as when drawing from memory.
I am really puzzled why this is happening. Does anyone know what could cause such a performance decrease?
I am testing on an iPhone 5 running iOS 6.1.2.
I've isolated my VBO handling into a single function, where I create the vertex/index buffers once, statically, at the top of the function. I can switch between normal and VBO rendering with an #ifdef USE_VBO.
- (void)drawDuck:(Toy*)toy reflection:(BOOL)reflection
{
    ModelOBJ* model = _duck[0].model;
    int stride = sizeof(ModelOBJ::Vertex);

#define USE_VBO
#ifdef USE_VBO
    static bool vboInitialized = false;
    static unsigned int vbo, ibo;
    if (!vboInitialized) {
        vboInitialized = true;

        // Generate VBO
        glGenBuffers(1, &vbo);
        int numVertices = model->getNumberOfVertices();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, stride*numVertices, model->getVertexBuffer(), GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

        // Generate index buffer
        glGenBuffers(1, &ibo);
        int numIndices = model->getNumberOfIndices();
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned short)*numIndices, model->getIndexBuffer(), GL_STATIC_DRAW);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    }
#endif

    [self setupDuck:toy reflection:reflection];

#ifdef USE_VBO
    // Draw with VBO
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

    glEnableVertexAttribArray(GC_SHADER_ATTRIB_POSITION);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_NORMAL);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_TEX_COORD);

    glVertexAttribPointer(GC_SHADER_ATTRIB_POSITION, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ModelOBJ::Vertex, position));
    glVertexAttribPointer(GC_SHADER_ATTRIB_TEX_COORD, 2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ModelOBJ::Vertex, texCoord));
    glVertexAttribPointer(GC_SHADER_ATTRIB_NORMAL, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ModelOBJ::Vertex, normal));

    glDrawElements(GL_TRIANGLES, model->getNumberOfIndices(), GL_UNSIGNED_SHORT, 0);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
#else
    // Draw with array
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_POSITION);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_NORMAL);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_TEX_COORD);

    glVertexAttribPointer(GC_SHADER_ATTRIB_POSITION, 3, GL_FLOAT, GL_FALSE, stride, model->getVertexBuffer()->position);
    glVertexAttribPointer(GC_SHADER_ATTRIB_TEX_COORD, 2, GL_FLOAT, GL_FALSE, stride, model->getVertexBuffer()->texCoord);
    glVertexAttribPointer(GC_SHADER_ATTRIB_NORMAL, 3, GL_FLOAT, GL_FALSE, stride, model->getVertexBuffer()->normal);

    glDrawElements(GL_TRIANGLES, model->getNumberOfIndices(), GL_UNSIGNED_SHORT, model->getIndexBuffer());
#endif
}
ModelOBJ::Vertex is just 3, 2, 3 floats for position, texcoord, and normal. Indices are unsigned short.
UPDATE: I've now wrapped the draw setup (i.e. the attrib binding calls) in a VAO, and performance is now OK, even slightly better than drawing from main memory. So my conclusion is that VBO support without VAOs is broken on iOS. Is that assumption correct?
It is likely that the driver was falling back to software vertex submission (a CPU copy from the VBO into the command buffer). This can be worse than using vertex arrays in client memory, because client memory is usually cached, while VBO contents are typically in write-combined memory on iOS.
When using the CPU Sampler in Instruments, you'll see a ton of time underneath glDrawArrays/glDrawElements in gleRunVertexSubmitARM.
The most common reason to fall back to SW CPU submission is an unaligned attribute (current iOS devices require each attribute to be 4 byte aligned), but that doesn't appear to be the case for the 3 attributes you've shown. After that, the next most common cause is mixing client arrays and buffer objects in a single vertex array configuration.
In this case, you probably have a stray vertex attribute binding: some other array element is likely still enabled and pointing to a client array, causing everything to fall off of the hardware DMA path. By creating a VAO, you've either switched away from the misconfigured default VAO, or alternatively you are trying to enable a client VAO but are being saved because client arrays are deprecated and do not function when used with VAOs (they throw an INVALID_OPERATION error instead).
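As a sanity check for the alignment requirement mentioned above, the layout described in the question (3 + 2 + 3 floats) can be verified with plain offsetof arithmetic; Vertex here is a stand-in mirroring the described ModelOBJ::Vertex, not the actual class:

```c
#include <stddef.h>

// Stand-in mirroring the described ModelOBJ::Vertex layout
// (3 floats position, 2 floats texcoord, 3 floats normal).
typedef struct {
    float position[3];
    float texCoord[2];
    float normal[3];
} Vertex;

// Current iOS GPUs take the hardware DMA path only when the stride
// and every attribute offset are 4-byte aligned.
int attribsAligned(void)
{
    return sizeof(Vertex) % 4 == 0
        && offsetof(Vertex, position) % 4 == 0
        && offsetof(Vertex, texCoord) % 4 == 0
        && offsetof(Vertex, normal)   % 4 == 0;
}
```

Since this all-float layout passes (every offset is a multiple of 4), the fallback in this case almost certainly comes from a stray client-array attribute rather than from alignment.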
When you populate your index buffer with glBufferData, the second argument should be 2*numIndices rather than stride*numIndices.
Since your index buffer is much larger than it needs to be, this could explain your performance problem.
I'm trying to get a game I made for iOS to work on OS X, and so far I have everything working except for the drawing of some randomly generated hills using a GL-bound texture.
It works perfectly on iOS, but somehow this part is the only thing not visible when the app is run on OS X. I checked all coordinates and color values, so I'm pretty sure it has to do with OpenGL somehow.
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glBindTexture(GL_TEXTURE_2D, _textureSprite.texture.name);
glColor4f(_terrainColor.r,_terrainColor.g,_terrainColor.b, 1);
glVertexPointer(2, GL_FLOAT, 0, _hillVertices);
glTexCoordPointer(2, GL_FLOAT, 0, _hillTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)_nHillVertices);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
You're disabling the texture coordinate (and color) array along with the texturing unit, yet are binding a texture coordinate pointer.
Is this really what you intend to do?
Apparently it was being drawn after all, only as a 1/2-pixel line. Somehow there is some scaling on the vertices in effect; I will have to check my code.
I can't find much info on whether drawing from multiple vertex buffers is supported on OpenGL ES 2.0 (i.e. using one vertex buffer for position data and another for normals, colors, etc.). This page http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/TechniquesforWorkingwithVertexData/TechniquesforWorkingwithVertexData.html (listing 9.4 in particular) implies you should be able to, but I can't get it to work in my program. Code for the offending draw call:
glBindBuffer(GL_ARRAY_BUFFER, mPositionBuffer->openglID);
glVertexAttribPointer(0, 4, GL_FLOAT, 0, 16, NULL);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, mTexCoordBuffer->openglID);
glVertexAttribPointer(1, 2, GL_FLOAT, 0, 76, NULL);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndexBuffer->openglID);
glDrawElements(GL_TRIANGLES, 10788, GL_UNSIGNED_SHORT, NULL);
This draw call will stall or crash with EXC_BAD_ACCESS in the simulator, and it gives very weird behavior on the device (OpenGL draws random triangles or presents previously rendered frames). No OpenGL call ever returns an error, and I've inspected the vertex buffers extensively and am confident they have the correct sizes and data.
Has anyone successfully rendered using multiple vertex buffers and can share their experience on why this might not be working? Any info on where to start debugging stalled/failed draw calls that don't return any error code would be greatly appreciated.
Access violations generally mean that you are trying to draw more triangles than you have allocated in a buffer. The way you've set up buffers is perfectly fine and should work, I would be checking if your parameters are set properly:
http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml
http://www.opengl.org/sdk/docs/man/xhtml/glDrawElements.xml
I think your issue is either that you've switched the offset and stride in your glVertexAttribPointer calls, or that you've miscounted the number of indices you're drawing.
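One quick way to test the "miscounted indices" theory is to validate the index data on the CPU before uploading it. This hypothetical helper (not part of the original code) just confirms that every index addresses a vertex that actually exists in the bound vertex buffers:

```c
#include <stddef.h>

// Returns 1 if every GL_UNSIGNED_SHORT index is a valid vertex index,
// 0 if any index would read past the end of the vertex buffer, which
// is exactly the kind of out-of-range access that shows up as
// EXC_BAD_ACCESS in the simulator.
int indicesInRange(const unsigned short *indices, size_t nIndices,
                   size_t nVertices)
{
    for (size_t i = 0; i < nIndices; i++)
        if (indices[i] >= nVertices)
            return 0;
    return 1;
}
```

If this passes for all 10788 indices against the actual vertex count, the next suspects are the stride arguments (16 vs. 76) in the glVertexAttribPointer calls above.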
Yes, you can use multiple vertex buffer objects (VBOs) for a single draw. The OpenGL ES 2.0 spec says so in section 2.9.1.
Do you really have all those hard-coded constants in your code? Where did that 76 come from?
If you want help debugging, you need to post the code that initializes your buffers (the code that calls glGenBuffers and glBufferData). You should also post the stack trace of EXC_BAD_ACCESS.
It might also be easier to debug if you drew something simpler, like one triangle, instead of 3596 triangles.