I have a question about a circle that uses texture mapping. My code works well, but the edges are not antialiased, so the circle is not smooth and doesn't look good. I have been reading for about three hours and found some solutions, but I don't know how to implement them in my code. Two solutions sounded pretty good.
The first was to bind a slightly blurred texture instead of a sharp one, so the edges come out smooth.
The second was to add extra vertices along the edge with fading opacity to smooth it out. My current draw function looks like this:
CC_NODE_DRAW_SETUP();
[self.shaderProgram use];
ccGLBindTexture2D( _texture.name );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
ccGLBlendFunc( _blendFunc.src, _blendFunc.dst);
ccGLEnableVertexAttribs( kCCVertexAttribFlag_Position | kCCVertexAttribFlag_TexCoords );
// Send the texture coordinates to OpenGL
glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, 0, _textCoords);
// Send the polygon coordinates to OpenGL
glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, _triangleFanPos);
// Draw it
glDrawArrays(GL_TRIANGLE_FAN, 0, _numOfSegements+2);
I am currently using cocos2d version 3. I asked a similar question before, and the only solution I found was enabling multisampling in cocos2d, but that drops my FPS to 30.
So maybe there is someone who can help me.
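In case it makes the second idea clearer, here is roughly what I understand it to look like: a thin extra ring just outside the opaque fan whose outer vertices have alpha 0. The segment count, radius and buffer names below are assumptions rather than my real code, and it presumes the ring is drawn with a shader that multiplies by a per-vertex color (for example cocos2d's position/texture/color shader):
// Sketch: build a "feather" ring outside the circle that fades to alpha 0.
// Needs <math.h>; positions are relative to the circle's center.
const int   numSegments = 64;        // assumption: same as _numOfSegements
const float radius      = 100.0f;    // assumption: the circle's radius
const float feather     = 1.5f;      // width of the fade band
GLfloat ringPos[(64 + 1) * 2 * 2];   // inner + outer vertex per segment (x,y)
GLfloat ringCol[(64 + 1) * 2 * 4];   // RGBA per vertex
for (int i = 0; i <= numSegments; i++) {
    float a = 2.0f * (float)M_PI * i / numSegments;
    float c = cosf(a), s = sinf(a);
    ringPos[i * 4 + 0] = c * radius;              // inner vertex, on the circle
    ringPos[i * 4 + 1] = s * radius;
    ringPos[i * 4 + 2] = c * (radius + feather);  // outer vertex, just outside
    ringPos[i * 4 + 3] = s * (radius + feather);
    ringCol[i * 8 + 0] = 1.0f; ringCol[i * 8 + 1] = 1.0f;
    ringCol[i * 8 + 2] = 1.0f; ringCol[i * 8 + 3] = 1.0f;  // inner: opaque white
    ringCol[i * 8 + 4] = 1.0f; ringCol[i * 8 + 5] = 1.0f;
    ringCol[i * 8 + 6] = 1.0f; ringCol[i * 8 + 7] = 0.0f;  // outer: alpha 0
}
// Then, after drawing the opaque fan:
// glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, ringPos);
// glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_FLOAT, GL_FALSE, 0, ringCol);
// glDrawArrays(GL_TRIANGLE_STRIP, 0, (numSegments + 1) * 2);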
Related
I'm currently developing a drawing app on iOS with OpenGL ES 2.0 (which I've only just started using). I would like to reproduce textured brushes in my app. For that, I decided to use shaders (is that the best choice?). At this stage I have my textured brushes working, but unfortunately I also run into performance problems after a few seconds…
Here is an overview of my app process:
I receive about 140 points each second.
Each time the draw function runs, I go through all of my points (stored in the Stroke class, which is contained in the layer) and redraw them.
Code:
for (int strokeId = 0; strokeId < layer->strokesList.size(); strokeId++) {
    Stroke* stroke = layer->strokesList.at(strokeId);
    […]
    glVertexAttribPointer(mainProgram.positionSlot, 2, GL_FLOAT, GL_FALSE, 0, stroke->vertices.Position);
    glVertexAttribPointer(mainProgram.colorSlot, 4, GL_FLOAT, GL_FALSE, 0, stroke->vertices.Color);
    glDrawArrays(GL_TRIANGLES, 0, (int)(stroke->nbVertices));
    […]
}
I am open to any suggestions for improving this drawing method, thank you!
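One direction I've been looking at (not my actual code, and the names createStrokeCache, bakeStroke and defaultFBO below are made up) is to bake finished strokes into an offscreen framebuffer texture once, so each frame only draws that cached texture plus the stroke currently in progress:
// Sketch: cache finished strokes in a texture via an FBO, instead of
// re-submitting every stroke every frame.
GLuint cacheFBO, cacheTex;
GLuint defaultFBO;   // assumption: the framebuffer the view normally renders to

void createStrokeCache(int width, int height) {
    glGenTextures(1, &cacheTex);
    glBindTexture(GL_TEXTURE_2D, cacheTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glGenFramebuffers(1, &cacheFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, cacheFBO);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, cacheTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);
}

// Called once when a stroke is finished, instead of every frame:
void bakeStroke(Stroke* stroke) {
    glBindFramebuffer(GL_FRAMEBUFFER, cacheFBO);
    // ... submit this stroke's vertices exactly as in the loop above ...
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);
}
// Per frame: draw cacheTex as one textured quad, then only the live stroke.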
I am drawing lines using this code
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, serieLine[serie_i]);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_TRUE, 0, colors[serie_i]);
glDrawArrays(GL_LINE_STRIP, 0, count/2);
but the result sometimes comes out looking pretty broken, like this:
I know that by using GL_TRIANGLE_STRIP I might get better results, but every algorithm I've tried so far for calculating the triangles gives me either no result or a very strange one.
Any idea for getting a better result will be appreciated.
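For what it's worth, one straightforward conversion is to offset every point of the polyline perpendicular to its segment by half the desired line width, emitting two vertices per point. A rough sketch (the function name and layout are made up, not part of my code); the output is meant to be drawn with glDrawArrays(GL_TRIANGLE_STRIP, 0, count * 2):
#include <math.h>

// Expand a 2D polyline (x0,y0,x1,y1,...) of `count` points into a
// triangle-strip vertex array with two vertices per point.
// `out` must have room for count * 4 floats.
void polylineToStrip(const float *pts, int count, float halfWidth, float *out)
{
    for (int i = 0; i < count; i++) {
        // direction of the line around point i (previous point -> next point)
        int a = (i == 0) ? 0 : i - 1;
        int b = (i == count - 1) ? count - 1 : i + 1;
        float dx = pts[b * 2]     - pts[a * 2];
        float dy = pts[b * 2 + 1] - pts[a * 2 + 1];
        float len = sqrtf(dx * dx + dy * dy);
        if (len < 1e-6f) { dx = 1.0f; dy = 0.0f; len = 1.0f; }
        // unit normal, perpendicular to that direction
        float nx = -dy / len, ny = dx / len;
        // one vertex on each side of the original point
        out[i * 4 + 0] = pts[i * 2]     + nx * halfWidth;
        out[i * 4 + 1] = pts[i * 2 + 1] + ny * halfWidth;
        out[i * 4 + 2] = pts[i * 2]     - nx * halfWidth;
        out[i * 4 + 3] = pts[i * 2 + 1] - ny * halfWidth;
    }
}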
I'm trying to implement shadow volumes on iPad according to NVIDIA GPU Gems, Chapter 9, "Efficient Shadow Volume Rendering", but I'm having issues with the front/light cap appearing in my stencil buffer.
I'm trying to render shadows on the box in the middle of the picture below. Shadows are generated correctly on the right side of the box, but when I move the camera around, parts of the lit sides of the box become shadowed. It seems to me that it could be a depth-buffer resolution problem, where the test fails to recognize that the shadow volume is at the same depth as the box and should not be drawn, but I've used glDepthFunc(GL_LESS) when drawing the shadow volumes to try to correct this and it doesn't seem to change anything.
Here is a summary of my code:
// Pass 1: ambient lighting, writing depth
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
glDisable(GL_BLEND);
[self drawAmbient];

// Pass 2: shadow volumes into the stencil buffer, no color or depth writes
glDepthMask(GL_FALSE);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xff);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP_OES, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP_OES, GL_KEEP);
glDisable(GL_CULL_FACE);
[self drawShadowVolumes];

// Pass 3: additive directional light where the stencil is still 0
glStencilFunc(GL_EQUAL, 0, 0xff);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_KEEP, GL_KEEP);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glDepthMask(GL_TRUE);
glDepthFunc(GL_EQUAL);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glEnable(GL_CULL_FACE);
[self drawDirectionalLight];
You are doing something wrong. For the standard z-fail technique you need two passes of shadow-volume rendering: one for the "degenerate quads" (the extruded sides of the shadow volumes) and one for the "exact object geometry" with flat normals (the shadow caps). I can see only the pass for the degenerate quads; where is the pass for the exact geometry with the opposite stencil operations?
The degenerate quads must be rendered with
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP_OES, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP_OES, GL_KEEP);
and the exact geometry (the caps) must be rendered with the opposite operations:
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_INCR_WRAP_OES, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_DECR_WRAP_OES, GL_KEEP);
The depth test must be GL_LESS or GL_LEQUAL, the same as for ordinary geometry rendering.
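Put together, the stencil pass would look roughly like this; drawShadowVolumeSides and drawShadowVolumeCaps are only placeholders for however you submit that geometry:
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xff);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glDepthFunc(GL_LESS);   // same depth test as ordinary geometry
glDisable(GL_CULL_FACE);

// first: the degenerate quads (extruded volume sides)
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP_OES, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP_OES, GL_KEEP);
drawShadowVolumeSides();   // placeholder

// then: the exact object geometry (the caps), with the opposite operations
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_INCR_WRAP_OES, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_DECR_WRAP_OES, GL_KEEP);
drawShadowVolumeCaps();    // placeholder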
I'm trying to get a game I made for iOS to work on OS X. So far I have been able to get everything working except for the drawing of some randomly generated hills using a GL-bound texture.
It works perfectly on iOS, but somehow this part is the only thing that is not visible when the app runs on OS X. I checked all the coordinates and color values, so I'm pretty sure it has something to do with OpenGL.
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glBindTexture(GL_TEXTURE_2D, _textureSprite.texture.name);
glColor4f(_terrainColor.r,_terrainColor.g,_terrainColor.b, 1);
glVertexPointer(2, GL_FLOAT, 0, _hillVertices);
glTexCoordPointer(2, GL_FLOAT, 0, _hillTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)_nHillVertices);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
You're disabling the texture coordinate (and color) array along with the texturing unit, yet are binding a texture coordinate pointer.
Is this really what you intend to do?
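If texturing the hills is what you want, the calls would need more or less the opposite enable/disable pattern, something along these lines (using your identifiers; whether GL_COLOR_ARRAY should stay disabled depends on the rest of your renderer):
// Sketch: enable the texture unit and the arrays the draw call reads from.
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);   // a single glColor4f is used instead

glBindTexture(GL_TEXTURE_2D, _textureSprite.texture.name);
glColor4f(_terrainColor.r, _terrainColor.g, _terrainColor.b, 1);

glVertexPointer(2, GL_FLOAT, 0, _hillVertices);
glTexCoordPointer(2, GL_FLOAT, 0, _hillTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)_nHillVertices);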
Apparently it was being drawn after all, only as a 1/2-pixel line. Somehow there is some scaling on the vertices in effect; I will have to check my code.
I'm trying to draw a simple crystal that rotates on its axis. I can get the shape right easily enough by drawing a pyramid and then drawing it again upside down, but I've got two problems.
First off, even though I draw everything in the same color, two of the faces come out a different color than the other two.
Second, it's placing a "bottom" on each pyramid that's visible through the translucent walls of the crystal, which ruins the effect. Is there any way to get rid of it?
Here's the code I'm using to set up and draw the GL scene. There's a lot more OpenGL code than this, of course, but this is the relevant part.
procedure Initialize;
begin
  glShadeModel(GL_SMOOTH);
  glClearColor(0.0, 0.0, 0.0, 0.5);
  glClearDepth(1.0);
  glEnable(GL_DEPTH_TEST);
  glDepthFunc(GL_LEQUAL);
  glEnable(GL_BLEND);
  glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
  glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
end;

procedure Draw; //gets called in a loop
begin
  glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT);
  glLoadIdentity();
  glTranslatef(-1.5, -0.5, -6.0);
  glRotatef(rotation, 0.0, 1.0, 0.0);

  glBegin(GL_TRIANGLE_FAN);
    glColor4f(0, 0, 1, 0.2);
    glVertex3f(0, 3.4, 0);
    glVertex3f(-1, 0, -1);
    glVertex3f(-1, 0, 1);
    glVertex3f(1, 0, 1);
    glVertex3f(1, 0, -1);
    glVertex3f(-1, 0, -1);
  glEnd;

  glBegin(GL_TRIANGLE_FAN);
    glVertex3f(0, -3.4, 0);
    glVertex3f(-1, 0, -1);
    glVertex3f(-1, 0, 1);
    glVertex3f(1, 0, 1);
    glVertex3f(1, 0, -1);
    glVertex3f(-1, 0, -1);
  glEnd;

  rotation := rotation + 0.02;
end;
Anyone know what I'm doing wrong and how to fix it?
I'm trying to draw a simple crystal
Stop. Crystals are translucent, and the moment you start drawing translucent objects, you can basically discard any notion of the effect being "simple". Rendering a true prism (which refracts different wavelengths of light differently) is something that requires raytracing of some form to get right. And there are many ray tracers that can't even get it right, since they only trace R, G and B wavelengths, whereas you need to trace many wavelengths to approximate the refraction and light splitting pattern of a prism.
The best you're going to get on a rasterizer like OpenGL is some level of fakery.
I can't explain what's going on with the faces, but the problem with seeing through to the other polygons is simple: you're not using backface culling. Unless you want to see the back faces of transparent objects, you need to make sure that backface culling is active.
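Enabling it only takes a couple of state calls, for example in your Initialize procedure (shown here as the C-level GL calls, which map one-to-one onto the Pascal bindings you're using); just make sure the winding of your fans is consistent, because by default counter-clockwise triangles count as front faces:
// Sketch: discard triangles that face away from the camera, so the far
// side of the crystal is not drawn through the translucent near side.
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);    // cull back faces (the default)
glFrontFace(GL_CCW);    // counter-clockwise winding = front face (the default)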