Automatically calculate normals in GLKit/OpenGL-ES - ios

I'm making some fairly basic shapes in OpenGL ES based on sample code from Apple. They use an array of points and an array of indices into the first array, where each set of three indices creates a polygon. That's all great; I can make the shapes I want. To shade the shapes correctly I believe I need to calculate normals for each vertex on each polygon. At first the shapes were cuboidal so it was very easy, but now that I'm making (slightly) more advanced shapes I want to create those normals automatically. It seems easy enough if I get vectors for two edges of a polygon (all polys are triangles here) and use their cross product for every vertex on that polygon. After that I use code like the following to draw the shape.
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, triangleVertices);
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE, 0, triangleColours);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 0, triangleNormals);
glDrawArrays(GL_TRIANGLES, 0, 48);
glDisableVertexAttribArray(GLKVertexAttribPosition);
glDisableVertexAttribArray(GLKVertexAttribColor);
glDisableVertexAttribArray(GLKVertexAttribNormal);
What I'm having trouble understanding is why I have to do this manually. I'm sure there are cases when you'd want something other than just a vector perpendicular to the surface, but I'm also sure that this is the most popular use case by far, so shouldn't there be an easier way? Have I missed something obvious? glCalculateNormals() would be great.
And here is an answer:
Pass in a GLKVector3[] that you wish to be filled with your normals, another with the vertices (each three are grouped into polygons) and then the count of the vertices.
- (void) calculateSurfaceNormals: (GLKVector3 *) normals forVertices: (GLKVector3 *)incomingVertices count:(int) numOfVertices
{
    for(int i = 0; i < numOfVertices; i += 3)
    {
        GLKVector3 vector1 = GLKVector3Subtract(incomingVertices[i+1], incomingVertices[i]);
        GLKVector3 vector2 = GLKVector3Subtract(incomingVertices[i+2], incomingVertices[i]);
        GLKVector3 normal = GLKVector3Normalize(GLKVector3CrossProduct(vector1, vector2));
        normals[i] = normal;
        normals[i+1] = normal;
        normals[i+2] = normal;
    }
}
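For example, with the 48-vertex shape from the question, a call might look like this (a sketch, assuming triangleVertices is a tightly packed float array of 48 vertices, i.e. castable to GLKVector3 *):

GLKVector3 triangleNormals[48];
[self calculateSurfaceNormals:triangleNormals
                  forVertices:(GLKVector3 *)triangleVertices
                        count:48];
// triangleNormals now holds one face normal per vertex and can be passed
// to glVertexAttribPointer for GLKVertexAttribNormal as in the question.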

And again the answer is: OpenGL is neither a scene management library nor a geometry library, but just a drawing API that draws nice pictures to the screen. For lighting it needs normals and you give it the normals. That's all. Why should it compute normals if this can just be done by the user and has nothing to do with the actual drawing?
Often you don't compute them at runtime anyway, but load them from a file. And there are many many ways to compute normals. Do you want per-face normals or per-vertex normals? Do you need any specific hard edges or any specific smooth patches? If you want to average face normals to get vertex normals, how do you want to average these?
And with the advent of shaders and the removal of the built-in normal attribute and lighting computations in newer OpenGL versions, this whole question becomes obsolete anyway, as you can do lighting any way you want and don't necessarily need traditional normals anymore.
By the way, it sounds like at the moment you are using per-face normals, which means every vertex of a face has the same normal. This creates a very faceted model with hard edges and also doesn't work very well together with indices. If you want a smooth model (I don't know, maybe you really want a faceted look), you should average the face normals of the adjacent faces for each vertex to compute per-vertex normals. That would actually be the more usual use-case and not per-face normals.
So you can do something like this pseudo-code:
for each vertex normal:
    initialize to zero vector
for each face:
    compute face normal using cross product
    add face normal to each vertex normal of this face
for each vertex normal:
    normalize
to generate smooth per-vertex normals. Even in actual code this should result in something between 10 and 20 lines of code, which isn't really complex.
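In actual code, a minimal sketch of that pseudo-code using GLKit math might look like this, assuming an indexed mesh (a vertex array plus an index array where every three indices form one triangle); the method name and parameters are hypothetical:

- (void)calculateSmoothNormals:(GLKVector3 *)normals
                   forVertices:(const GLKVector3 *)vertices
                   vertexCount:(int)vertexCount
                       indices:(const GLuint *)indices
                    indexCount:(int)indexCount
{
    // Start every vertex normal at the zero vector.
    for (int v = 0; v < vertexCount; v++)
        normals[v] = GLKVector3Make(0.0f, 0.0f, 0.0f);

    // For each face, add its (unnormalized) face normal to the
    // normals of its three vertices.
    for (int i = 0; i < indexCount; i += 3)
    {
        GLuint a = indices[i], b = indices[i+1], c = indices[i+2];
        GLKVector3 edge1 = GLKVector3Subtract(vertices[b], vertices[a]);
        GLKVector3 edge2 = GLKVector3Subtract(vertices[c], vertices[a]);
        GLKVector3 faceNormal = GLKVector3CrossProduct(edge1, edge2);
        normals[a] = GLKVector3Add(normals[a], faceNormal);
        normals[b] = GLKVector3Add(normals[b], faceNormal);
        normals[c] = GLKVector3Add(normals[c], faceNormal);
    }

    // Normalize the accumulated sums to get smooth per-vertex normals.
    for (int v = 0; v < vertexCount; v++)
        normals[v] = GLKVector3Normalize(normals[v]);
}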

Related

VAO + VBOs logic for data visualization (boost graph)

I'm using the Boost Graph Library to organize points linked by edges in a graph, and now I'm working on their display.
I'm a newbie in OpenGL ES 2/GLKit and Vertex Array Objects / Vertex Buffer Objects. I followed this tutorial, which is really good, and by the end what I gather I should do is:
Create vertices only once for a "model" instance of a Shape class (the "sprite" representing my boost point position);
Use this model to feed the VBOs;
Bind the VBOs to a unique VAO;
Draw everything in a single draw call, changing the matrix for each "sprite".
I've read that accessing VBOs is really bad for performance, and that I should swap between VBOs instead.
My questions are:
is the matrix translation/scaling/rotation possible in a single call?
then, if it is: is my logic good?
finally: it would be great to have some code examples :-)
If you just want to draw charts, there are much easier libraries to use besides OpenGL ES. But assuming you have your reasons:
Just take a stab at what you've described and test it. If it's good enough then congratulations: you're done.
You don't mention how many graphs, how many points per graph, how often the points are modified, and the frame rate you desire.
If you're updating a few hundred vertices, and they don't change frequently, you might not even need VBOs. Recent hardware can render a lot of sprites even without them. Depends on how many verts and how often they change.
To start, try this:
// Bind the shader
glUseProgram(defaultShaderProgram);
// Set the projection (camera) matrix.
glUniformMatrix4fv(uProjectionMatrix, 1, GL_FALSE, (GLfloat*)projectionMatrix[0]);
for ( /* each chart */ )
{
    // Set the sprite (scale/rotate/translate) matrix.
    glUniformMatrix4fv(uModelViewMatrix, 1, GL_FALSE, (GLfloat*)spriteViewMatrix[0]);
    // Set the vertices.
    glVertexAttribPointer(ATTRIBUTE_VERTEX_POSITION, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), &pVertices->x);
    glVertexAttribPointer(ATTRIBUTE_VERTEX_DIFFUSE, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), &pVertices->color);
    // Render. Assumes your shader does not use a texture,
    // since we did not set one.
    glDrawArrays(GL_TRIANGLES, 0, numVertices);
}

Multi-Texturing - Interpolation between two layers of an 3D texture

I'm trying to achieve terrain texturing using a 3D texture that consists of several layers of material, with smooth blending between the materials.
Maybe my illustration will explain it better:
Just imagine that each color is a cool terrain texture, like grass, stone, etc.
I want to get them properly blended, but with the current approach I get all the textures lying between the requested ones in addition to the textures I actually want to appear (it seems logical because, as I've read, a 3D texture is treated as a three-dimensional array rather than as texture pillars).
The current (and obviously foolish) approach is as simple as pie (the 'current' result is rendered using point interpolation; the desired result is hand-painted):
Vertexes:
Vertex 1: Position = Vector3.Zero, UVW = Vector3.Zero
Vertex 2: Position = Vector3(0, 1, 0), UVW = Vector3(0, 1, 0.75f)
Vertex 3: Position = Vector3(0, 0, 1), UVW = Vector3(1, 0, 1)
As you can see, the first vertex of the triangle uses the first material (the red one), the second vertex uses the third material (the blue one) and the third vertex uses the last, fourth material (the yellow one).
This is how it's done in the pixel shader (UVW is passed through unchanged):
float3 texColor = tex3D(ColorTextureSampler, input.UVW);
return float4(texColor, 1);
The reason for my choice is my terrain structure. The terrain is generated from voxels (each voxel holds a material ID) using marching cubes. Each vertex is 'welded' because the meshes are pretty big and I don't want to make every triangle individual (but I can still do that if there is no way to solve my problem with connected vertices).
I recently came up with the idea of storing the material IDs of the other two vertices of the triangle and their blend factors in each vertex (I would have a float2 UV pair, a float3 for the material IDs and a float3 for the blend factor of each material ID), but I don't see any way to accomplish this without breaking my mesh into individual triangles.
Any help would be greatly appreciated. I'm targeting for SlimDX with C# and Direct3D 9 API. Thanks for reading.
P.S.: I'm sorry if I made some mistakes in this text, English is not my native language.
Your ColorTextureSampler is probably using point filtering (D3DTEXF_POINT). Use either D3DTEXF_LINEAR or D3DTEXF_ANISOTROPIC to achieve the desired interpolation effect.
I'm not very familiar with SlimDX for Direct3D 9, but you get the idea.
BTW, nice illustration =)
Update 1
The result in your comment below seems consistent with your code.
It looks like to get the desired effect you must change your overall approach.
It is not a complete solution for you, but here is how we do it for plain 3D terrains:
Every vertex has one pair (u, v) of texture coordinates.
You have n textures to sample from (T1, T2, T3, ..., Tn) that represent the different layers of terrain: sand, grass, rock, etc.
You have mask texture(s) with n channels in total that store the blending coefficients for each texture T in their channels: the R channel holds the alpha for T1, the G channel for T2, B for T3, ... etc.
In the pixel shader you sample your layer textures as usual and get color values float4 val1, val2, val3, ...
Then you sample the mask texture(s) for the corresponding blend coefficients and get float blend1, blend2, blend3, ...
Then you apply some kind of blending algorithm, for example simple linear interpolation:
float4 terrainColor = lerp( val1, val2, blend1 );
terrainColor = lerp( terrainColor, val3, blend2);
terrainColor = lerp( terrainColor, ..., blendN );
For example, if your T1 is grass and you have a big grass field in the middle of your map, you will have a big red area in the middle of your mask.
This algorithm is a bit slow because of all the texture sampling, but it is simple to implement, gives good visual results and is the most flexible. You can use not only a mask for the blend coefficients, but any values: for example height (sample more snow at mountain peaks, rock in the mountains, dirt on low ground), slope (rock on steep faces, grass on flat ground), even fixed values, etc. Or mix all of that. You can also vary the blending: use the built-in lerp or something more complicated (warning! this example is silly):
float4 terrainColor = val1 * val2 * blend1 + val2 * val3 * blend2;
terrainColor = saturate(terrainColor);
Playing with the blending algorithm is the most interesting part of this approach, and you can find many, many techniques on Google.
Not sure, but hope it helps!
Happy coding! =)

opengl es, ios and triangle fans

I am currently rendering a scene using triangles with the following code:
glBindVertexArrayOES(_mVertexArrayObjectTriangles);
glBindBuffer(GL_ARRAY_BUFFER, _mVertexPositionNormalTriangles);
glDrawElements(GL_TRIANGLES, _mCRBuffer->GetIndexTriangleData()->size(), GL_UNSIGNED_INT, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArrayOES(0);
_mVertexArrayObjectTriangles is my vertex array object holding elements to be rendered via triangles
_mVertexPositionNormalTriangles is my array of vertices and vertex normals
_mCRBuffer->GetIndexTriangleData() is my array of indices into the vertex array. This is a simple array of integers that encode each triangle (a,b,c,a,b,d would encode two triangles abc and abd).
All works just fine, but I would like to render my primitives using triangle fans instead of triangles. How do I set up an array of triangle fans (i.e. more than one) to be drawn using something like
glDrawElements(GL_TRIANGLE_FAN, ....
How do I set up my index array to index a set of triangle fans for rendering (instead of triangles)? The vertices themselves need not change, just the indices, so that the same mesh renders as triangle fans instead of triangles.
I can find good examples using triangle strips (here), including how to set up the index array, but nothing on triangle fans.
Changing from a strip to a fan is non-trivial, as your data needs to be set up accordingly. You have to have a central vertex from which all the other triangles emanate.
If you choose to do it, you just need to order your vertices in the manner shown in the diagram, and make sure you do it in chunks of vertices that have a common vertex.
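Note that OpenGL ES 2.0 has no primitive restart index, so each fan also needs its own draw call. A minimal sketch, assuming your indices have been reordered so that each fan (centre vertex first, then the surrounding vertices in order) occupies a contiguous run of the index buffer; fanCount, fanCounts and fanOffsets are hypothetical names:

glBindVertexArrayOES(_mVertexArrayObjectTriangles);
for (int i = 0; i < fanCount; i++)
{
    // fanOffsets[i] is the index (not byte) offset of fan i within the bound
    // GL_ELEMENT_ARRAY_BUFFER, fanCounts[i] is its number of indices.
    glDrawElements(GL_TRIANGLE_FAN, fanCounts[i], GL_UNSIGNED_INT,
                   (const GLvoid *)(fanOffsets[i] * sizeof(GLuint)));
}
glBindVertexArrayOES(0);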

iOS OpenGL ES to draw a mesh wireframe

I have a human model in an .OBJ file I want to display as a mesh with triangles. No textures. I want also to be able to move, scale, and rotate in 3D.
The first (and working) option is to project the vertices to 2D by doing the math manually and then draw them with Quartz 2D. This works, since I know the underlying math for perspective projection.
However, I would like to use OpenGL ES for this instead, but I am not sure how to draw the triangles.
For example, the code in - (void)drawRect:(CGRect)rect is:
glClearColor(1,0,0,0);
glClear(GL_COLOR_BUFFER_BIT);
GLKBaseEffect *effect = [[GLKBaseEffect alloc] init];
[effect prepareToDraw];
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
Now what? I have an array of vertex positions (3 floats per vertex) and an array of triangle indices, so I tried this:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, numVertices,pVertices);
glDrawElements(GL_TRIANGLES, numTriangles, GL_UNSIGNED_INT,pTriangles);
but this doesn't show anything. I saw in a sample the usage of glEnableVertexAttribArray(GLKVertexAttribPosition) and glDrawArrays, but I'm not sure how to use them.
I also understand that rendering a wireframe is not possible in ES, so I would have to apply color attributes to the vertices. That's OK, but before that the triangles have to be displayed in the first place.
The first thing I'd ask is: where are your vertices? OpenGL (ES) draws in a coordinate space that extends from (-1, -1, -1) to (1, 1, 1), so you probably want to transform your points with a projection matrix to get them into that space. To learn about projection matrices and more of the basics of OpenGL ES 2.0 on iOS, I'd suggest finding a book or a tutorial. This one's not bad, and here's another that's specific to GLKit.
Drawing with OpenGL in drawRect: is probably not something you want to be doing. If you're already using GLKit, why not use GLKView? There's good example code to get you started if you create a new Xcode project with the "OpenGL Game" template.
Once you get up to speed with GL you'll find that the function glPolygonMode typically used for wireframe drawing on desktop OpenGL doesn't exist in OpenGL ES. Depending on how your vertex data is organized, though, you might be able to get a decent wireframe with GL_LINES or GL_LINE_LOOP. Or since you're using GLKit, you can skip wireframe and set up some lights and shading pretty easily with GLKBaseEffect.
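As a rough illustration of the GL_LINES route, one option is to expand each triangle into its three edges and draw those with a GLKBaseEffect supplying the matrices. A sketch, assuming pVertices/pTriangles/numTriangles from your attempt (with numTriangles meaning the number of triangles) and projection/model-view matrices you have already built; GL_UNSIGNED_INT indices need the OES_element_index_uint extension on ES, GLushort works everywhere:

// Build a GL_LINES index list: each triangle (a, b, c) becomes the edges
// a-b, b-c and c-a. Shared edges get drawn twice, which is wasteful but
// fine for a first wireframe.
GLuint *lineIndices = malloc(numTriangles * 6 * sizeof(GLuint));
for (int t = 0; t < numTriangles; t++)
{
    GLuint a = pTriangles[t * 3], b = pTriangles[t * 3 + 1], c = pTriangles[t * 3 + 2];
    GLuint *e = &lineIndices[t * 6];
    e[0] = a; e[1] = b;
    e[2] = b; e[3] = c;
    e[4] = c; e[5] = a;
}

// In the GLKView's drawing method:
GLKBaseEffect *effect = [[GLKBaseEffect alloc] init];
effect.transform.projectionMatrix = projectionMatrix;  // your perspective matrix
effect.transform.modelviewMatrix = modelViewMatrix;    // your model/view transform
[effect prepareToDraw];

glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, pVertices);
glDrawElements(GL_LINES, numTriangles * 6, GL_UNSIGNED_INT, lineIndices);
glDisableVertexAttribArray(GLKVertexAttribPosition);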

optimizing openGL ES 2.0 2D texture output and framerate

I was hoping someone could help me make some progress with some texture benchmarks I'm doing in OpenGL ES 2.0 on an iPhone 4.
I have an array that contains sprite objects. The render loop cycles through all the sprites per texture and retrieves all their texture coords and vertex coords. It adds those to a giant interleaved array, using degenerate vertices and indices, and sends those to the GPU (I'm embedding code at the bottom). This is all done per texture, so I'm binding the texture once, then building my interleaved array and drawing it. Everything works just great and the results on the screen are exactly what they should be.
So my benchmark test is done by adding 25 new sprites per touch at varying opacities and changing their vertices on each update so that they are bouncing around the screen while rotating, all while running OpenGL ES Analyzer on the app.
Here's where I'm hoping for some help...
I can get to around 275 32x32 sprites with varying opacity bouncing around the screen at 60 fps. By 400 I'm down to 40 fps. When I run the OpenGL ES Performance Detective it tells me...
The app rendering is limited by triangle rasterization - the process of converting triangles into pixels. The total area in pixels of all of the triangles you are rendering is too large. To draw at a faster frame rate, simplify your scene by reducing either the number of triangles, their size, or both.
The thing is, I just whipped up a test in Cocos2D using CCSpriteBatchNode with the same texture and created 800 transparent sprites, and the framerate is an easy 60 fps.
Here is some code that may be pertinent...
Shader.vsh (the matrices are set up once at the beginning)
void main()
{
    gl_Position = projectionMatrix * modelViewMatrix * position;
    texCoordOut = texCoordIn;
    colorOut = colorIn;
}
Shader.fsh (colorOut is used to calc opacity)
void main()
{
    lowp vec4 fColor = texture2D(texture, texCoordOut);
    gl_FragColor = vec4(fColor.xyz, fColor.w * colorOut.a);
}
VBO setup
glGenBuffers(1, &_vertexBuf);
glGenBuffers(1, &_indiciesBuf);
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuf);
glBufferData(GL_ARRAY_BUFFER, sizeof(TDSEVertex)*12000, &vertices[0].x, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, sizeof(TDSEVertex), BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(TDSEVertex), BUFFER_OFFSET(8));
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE, sizeof(TDSEVertex), BUFFER_OFFSET(16));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indiciesBuf);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(ushort)*12000, indicies, GL_STATIC_DRAW);
glBindVertexArrayOES(0);
Update Code
/*
Here it cycles through all the sprites, gets their vert info (includes coords, texture coords, and color) and adds them to this giant array
The array is of...
typedef struct {
    float x, y;
    float tx, ty;
    float r, g, b, a;
} TDSEVertex;
*/
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuf);
//glBufferSubData(GL_ARRAY_BUFFER, sizeof(vertices[0])*(start), sizeof(TDSEVertex)*(indicesCount), &vertices[start]);
glBufferData(GL_ARRAY_BUFFER, sizeof(TDSEVertex)*indicesCount, &vertices[start].x, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
Render Code
GLKTextureInfo* textureInfo = [[TDSETextureManager sharedTextureManager].textures objectForKey:textureName];
glBindTexture(GL_TEXTURE_2D, textureInfo.name);
glBindVertexArrayOES(_vertexArray);
glDrawElements(GL_TRIANGLE_STRIP, indicesCount, GL_UNSIGNED_SHORT, BUFFER_OFFSET(start));
glBindVertexArrayOES(0);
Here's a screenshot at 400 sprites (800 triangles + 800 degenerate triangles) to give an idea of the opacity layering as the textures are moving...
Again, I should note that a VBO is being created and sent per texture, so I'm binding and then drawing only twice per frame (since there are only two textures).
Sorry if this is overwhelming, but it's my first post on here and I wanted to be thorough.
Any help would be much appreciated.
PS: I know that I could just use Cocos2D instead of writing everything from scratch, but where's the fun (and learning) in that?!
UPDATE #1
When I switch my fragment shader to just
gl_FragColor = texture2D(texture, texCoordOut);
it gets to 802 sprites at 50 fps (4804 triangles including degenerate triangles), though sprite opacity is lost. Any suggestions as to how I can still handle opacity in my shader without running at 1/4th the speed?
UPDATE #2
So I ditched GLKit's view and view controller and wrote a custom view loaded from the AppDelegate. 902 sprites with opacity & transparency at 60 fps.
Mostly miscellaneous thoughts...
If you're triangle limited, try switching from GL_TRIANGLE_STRIP to GL_TRIANGLES. You're still going to need to specify exactly the same number of indices (six per quad), but the GPU never has to spot that the connecting triangles between quads are degenerate (i.e. it never has to convert them into zero pixels). You'll need to profile to see whether you end up paying a cost for no longer implicitly sharing edges.
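For instance, the index generation for plain GL_TRIANGLES might look like this sketch, assuming each sprite contributes four vertices in the order top-left, top-right, bottom-left, bottom-right (spriteCount is a hypothetical name; indicies is the array from your setup code):

// Quad i uses vertices 4*i .. 4*i+3; emit two triangles (six indices) per quad.
for (int i = 0; i < spriteCount; i++)
{
    GLushort base = (GLushort)(i * 4);
    indicies[i * 6 + 0] = base + 0;
    indicies[i * 6 + 1] = base + 1;
    indicies[i * 6 + 2] = base + 2;
    indicies[i * 6 + 3] = base + 2;
    indicies[i * 6 + 4] = base + 1;
    indicies[i * 6 + 5] = base + 3;
}
// Then draw with:
glDrawElements(GL_TRIANGLES, spriteCount * 6, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));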
You should also shrink the footprint of your vertices. I would dare imagine you can specify x, y, tx and ty as 16-bit integers, and your colours as 8-bit integers without any noticeable change in rendering. That would reduce the footprint of each vertex from 32 bytes (eight components, each four bytes in size) to 12 bytes (four two-byte values plus four one-byte values, with no padding needed because everything is already aligned) — cutting almost 63% of the memory bandwidth costs there.
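A sketch of what that compact vertex could look like, assuming the positions and texture coordinates fit in 16-bit values (you may need to rescale them in the vertex shader, or pass GL_TRUE to normalize the texture coordinates); TDSECompactVertex is a hypothetical name:

typedef struct {
    GLshort x, y;       // position, 2 bytes each
    GLshort tx, ty;     // texture coordinates, 2 bytes each
    GLubyte r, g, b, a; // colour, 1 byte each, normalized to 0..1 by GL
} TDSECompactVertex;    // 12 bytes per vertex instead of 32

glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_SHORT, GL_FALSE,
                      sizeof(TDSECompactVertex), BUFFER_OFFSET(0));
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_SHORT, GL_FALSE,
                      sizeof(TDSECompactVertex), BUFFER_OFFSET(4));
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_UNSIGNED_BYTE, GL_TRUE,
                      sizeof(TDSECompactVertex), BUFFER_OFFSET(8));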
As you actually seem to be fill-rate limited, you should consider your source texture too. Anything you can do to trim its byte size will directly help texel fetches and hence fill rate.
It looks like you're using art that is consciously about the pixels so switching to PVR probably isn't an option. That said, people sometimes don't realise the full benefit of PVR textures; if you switch to, say, the 4 bits per pixel mode then you can scale your image up to be twice as wide and twice as tall so as to reduce compression artefacts and still only be paying 16 bits on each source pixel but likely getting a better luminance range than a 16 bpp RGB texture.
Assuming you're currently using a 32 bpp texture, you should at least see whether an ordinary 16 bpp RGB texture is sufficient using any of the provided hardware modes (especially if the 1 bit of alpha plus 5 bits per colour channel is appropriate to your art, since that loses only 9 bits of colour information versus the original while reducing bandwidth costs by 50%).
It also looks like you're uploading indices every single frame. Upload only when you add extra objects to the scene or if the buffer as last uploaded is hugely larger than it needs to be. You can just limit the count passed to glDrawElements to cut back on objects without a reupload. You should also check whether you actually gain anything by uploading your vertices to a VBO and then reusing them if they're just changing every frame. It might be faster to provide them directly from client memory.
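As a sketch of that last point, assuming maxSpritesUploaded tracks how many sprites the index buffer currently covers and that the indices follow a fixed six-per-sprite pattern (the names are hypothetical):

// Re-upload indices only when the sprite count grows beyond what the
// GL_ELEMENT_ARRAY_BUFFER already holds; otherwise just draw fewer of them.
if (spriteCount > maxSpritesUploaded)
{
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indiciesBuf);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort) * spriteCount * 6,
                 indicies, GL_STATIC_DRAW);
    maxSpritesUploaded = spriteCount;
}
// The count passed to glDrawElements can always be the current one,
// even if the buffer holds indices for more sprites than that.
glDrawElements(GL_TRIANGLES, spriteCount * 6, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));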
