OpenGL ES, iOS and triangle fans

I am currently rendering a scene using triangles with the following code:
glBindVertexArrayOES(_mVertexArrayObjectTriangles);
glBindBuffer(GL_ARRAY_BUFFER, _mVertexPositionNormalTriangles);
glDrawElements(GL_TRIANGLES, _mCRBuffer->GetIndexTriangleData()->size(), GL_UNSIGNED_INT, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArrayOES(0);
_mVertexArrayObjectTriangles is my vertex array object holding elements to be rendered via triangles
_mVertexPositionNormalTriangles is my array of vertices and vertex normals
_mCRBuffer->GetIndexTriangleData() is my array of indices into the vertex array. This is a simple array of integers that encode each triangle (a,b,c,a,b,d would encode two triangles abc and abd).
All works just fine, but I would like to render my primitives using triangle fans instead of triangles. How do I set up an array of triangle fans (i.e. more than one) to be drawn using something like
glDrawElements(GL_TRIANGLE_FAN, ....
How do I set up my index array to index a set of triangle fans for rendering (instead of triangles)? The vertices themselves need not change, just the indices used to render them as triangle fans instead of triangles.
I can find good examples using triangle strips (here), including how to set up the index array, but nothing on triangle fans.

Changing from a strip to a fan is non-trivial, as your data needs to be set up accordingly: each fan must have a central vertex from which all of its triangles emanate.
If you choose to do it, order the indices of each fan so that the shared (central) vertex comes first, followed by the surrounding vertices in winding order, and keep the index buffer in chunks of vertices that each share a common vertex, as in the sketch below.
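Since OpenGL ES 2.0 has no primitive restart, the usual way to draw more than one fan from a single index buffer is to issue one glDrawElements call per fan, each starting at a byte offset into the bound element buffer. Below is a minimal C sketch of that idea; the FanRange struct and drawFans helper are made-up names for illustration, and GL_UNSIGNED_INT indices assume the OES_element_index_uint extension your existing code already relies on.

typedef struct {
    size_t  firstIndex; // position of this fan's first index in the index buffer
    GLsizei count;      // number of indices in this fan (center vertex + rim)
} FanRange;

// Draws each fan with its own glDrawElements call. The index buffer is laid
// out fan by fan: center vertex first, then the rim vertices in winding order.
void drawFans(const FanRange *fans, int fanCount)
{
    for (int i = 0; i < fanCount; ++i) {
        // With a VBO bound to GL_ELEMENT_ARRAY_BUFFER, the last argument is a
        // byte offset into that buffer, not a client-memory pointer.
        glDrawElements(GL_TRIANGLE_FAN, fans[i].count, GL_UNSIGNED_INT,
                       (const GLvoid *)(fans[i].firstIndex * sizeof(GLuint)));
    }
}

On OpenGL ES 3.0 you could instead keep a single draw call and separate the fans with the fixed restart index by calling glEnable(GL_PRIMITIVE_RESTART_FIXED_INDEX).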

Drawing a variable number of textures

For some scientific data visualization, I am drawing a large float array using WebGL. The dataset is two-dimensional, typically a few hundred to a few thousand values in height and several tens of thousands of values in width.
To fit this dataset into video memory, I cut it up into several non-square textures (depending on MAX_TEXTURE_SIZE) and display them next to one another. I use the same shader with a single sampler2D to draw all the textures. This means that I have to iterate over all the textures when drawing:
for (var i = 0; i < dataTextures.length; i++) {
    gl.activeTexture(gl.TEXTURE0 + i);
    gl.bindTexture(gl.TEXTURE_2D, dataTextures[i]);
    gl.uniform1i(samplerUniform, i);
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexPositionBuffers[i]);
    gl.vertexAttribPointer(vertexPositionAttribute, 2, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}
However, if the number of textures gets larger than half a dozen, performance becomes quite bad. Now I know that games use quite a few more textures than that, so this can't be expected behavior. I also read that you can bind arrays of samplers, but as far as I can tell, the total number of textures has to be known ahead of time. For me, the number of textures depends on the dataset, so I can't know it before loading the data.
Also, I suspect that I am doing unnecessary things in this render loop. Any hints would be welcome.
How would you normally draw a variable number of textures in WebGL?
Here are a few previous answers that will help:
How to bind an array of textures to a WebGL shader uniform?
How to send multiple textures to a fragment shader in WebGL?
How many textures can I use in a webgl fragment shader?
Some ways off the top of my head:
Create a shader that loops over N textures. Set the textures you're not using to some 1x1 pixel texture with 0,0,0,0 in it, or something else that doesn't affect your calculations.
Create a shader that loops over N textures. Create a uniform boolean array, and in the loop skip any texture whose corresponding boolean value is false.
Generate a shader on the fly that has exactly the number of textures you need. It shouldn't be that hard to concatenate a few strings; see the sketch below.
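For the third option, here is a sketch of the string building, shown in C with standard-library calls (the concatenation logic translates line for line to JavaScript). All shader names (u_tex<i>, u_index, v_uv) are invented for the example, and the buffer is assumed large enough; production code should check for truncation.

#include <stdio.h>

// Builds a GLSL ES 1.00 fragment shader that declares exactly textureCount
// samplers and selects one of them with an if/else chain on u_index.
static void buildFragmentSource(char *out, size_t cap, int textureCount)
{
    size_t len = 0;
    len += snprintf(out + len, cap - len,
                    "precision mediump float;\n"
                    "varying vec2 v_uv;\n"
                    "uniform float u_index;\n");
    for (int i = 0; i < textureCount; ++i)
        len += snprintf(out + len, cap - len, "uniform sampler2D u_tex%d;\n", i);
    len += snprintf(out + len, cap - len, "void main() {\n");
    for (int i = 0; i < textureCount; ++i)
        len += snprintf(out + len, cap - len,
                        "    %sif (u_index < %d.5) gl_FragColor = texture2D(u_tex%d, v_uv);\n",
                        i == 0 ? "" : "else ", i, i);
    snprintf(out + len, cap - len, "    else gl_FragColor = vec4(0.0);\n}\n");
}

Compile the resulting string like any other shader source (gl.createShader / gl.shaderSource / gl.compileShader in WebGL), and regenerate it whenever the dataset's texture count changes.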

How to define order when drawing 2D triangles in OpenGL ES 1.1?

I'm drawing triangles with only x and y coordinates per vertex:
glVertexPointer(2, GL_FLOAT, 0, vertices);
Sometimes when I draw a triangle over another triangle they seem to be coplanar and the surface flickers (z-fighting) because they occupy exactly the same surface in space.
Is there a way of saying "OpenGL, I want that you draw this triangle on top of whatever is below it" without using 3D coordinates, or do I have to enable depth test and use 3D coordinates to control a Z-index?
If you want to render the triangle just on top of whatever was in the framebuffer before, you can disable the depth test entirely. But if you need some custom ordering different from draw order, then you won't get around adding depth information (in the form of a third z-coordinate). There is no way to tell OpenGL "render the following stuff with the z-coordinate collectively set to some value". You can either say "render the following stuff on top of whatever is there" or "render the following stuff at whatever depth results from its transformed vertices", as sketched below.
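A minimal sketch of both options in ES 1.1 terms; the drawBackgroundTriangles/drawForegroundTriangles helpers and the verticesXYZ array are placeholders for your own code:

// Option 1: painter's algorithm - whatever is drawn later lands on top.
glDisable(GL_DEPTH_TEST);
drawBackgroundTriangles();   // hypothetical helper, drawn first
drawForegroundTriangles();   // hypothetical helper, drawn on top

// Option 2: keep the depth test and encode the ordering in z.
glEnable(GL_DEPTH_TEST);
glVertexPointer(3, GL_FLOAT, 0, verticesXYZ);  // now 3 floats per vertex
// With the default GL_LESS depth test, the fragment with the smaller
// depth value wins regardless of draw order.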

Drawing Multiple 2d shapes in DirectX

I completed a tutorial on rendering 2D triangles in DirectX. Now, I want to use my knowledge of rendering a single triangle to render multiple triangles, or for that matter, multiple objects on screen.
Should I create a list/stack/vector of vertexbuffers and input layouts and then draw each object? Or is there a better approach to this?
My process would be:
Setup directx, including vertex and pixel shaders
Create vertex buffers for each shape that has to be drawn on the screen and store them in an array.
Draw them to the render target (each frame)
Present the render target(each frame)
Please assume very rudimentary knowledge of DirectX and graphics programming in general when answering.
You don't need to create a vertex buffer for each shape; you can create one buffer to store all the vertices of all the triangles, then create an index buffer to store all the indices of all the shapes, and finally draw them with the index buffer. A sketch of this packing follows the links below.
I am not familiar with DX11, so I'll just list the links for D3D9 for your reference. I think the concepts are the same, just with some API changes.
Index Buffers (Direct3D 9)
Rendering from Vertex and Index buffers
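To make the shared-buffer idea above concrete, here is a small API-agnostic C sketch (all names are invented for illustration): each shape's vertices are appended to one big array, and its indices are rebased by the shape's first vertex before going into the shared index array.

#include <string.h>

typedef struct { float x, y; } Vertex2D;

typedef struct {
    unsigned firstIndex; // where this shape's indices start in the shared array
    unsigned indexCount; // how many indices belong to this shape
} ShapeRange;

// Appends one shape to the shared vertex/index arrays and records its range
// so it can still be drawn (or updated) individually later.
void appendShape(Vertex2D *verts, unsigned *numVerts,
                 unsigned *indices, unsigned *numIndices,
                 const Vertex2D *shapeVerts, unsigned shapeVertCount,
                 const unsigned *shapeIndices, unsigned shapeIndexCount,
                 ShapeRange *outRange)
{
    unsigned base = *numVerts; // vertex offset used to rebase the indices
    memcpy(verts + base, shapeVerts, shapeVertCount * sizeof(Vertex2D));
    *numVerts += shapeVertCount;

    outRange->firstIndex = *numIndices;
    outRange->indexCount = shapeIndexCount;
    for (unsigned i = 0; i < shapeIndexCount; ++i)
        indices[(*numIndices)++] = base + shapeIndices[i];
}

In D3D11 you would then draw one shape with DrawIndexed(range.indexCount, range.firstIndex, 0), or all of them in a single call if they share the same state.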
If the triangles are all the same shape, just with different positions or colors, you can consider using geometry instancing; it's a powerful way to render multiple copies of the same geometry.
Geometry Instancing
Efficiently Drawing Multiple Instances of Geometry (D3D9)
I don't know much about DirectX, but the general rule when rendering on the GPU is to use separate vertex and index buffers for every mesh.
That said, there is nothing stopping you from using a single vertex buffer with many index buffers; in fact, you may even get some performance gains, especially for small meshes...
You'll need just one vertex buffer to do this, and then batch the draws.
Here is what you can do: make an array/vector holding the triangle information, let's say (pseudo-code)
struct TriangleInfo {
    ..... texture;
    vect2 pos;
    vect2 dimension;
    float rot;
};
then in your draw method:
for (int i = 0; i < vector.size(); i++) {
    TriangleInfo tInfo = vector[i];
    matrix worldMatrix = Transpose(matrix(tInfo.dimension) * matrix(tInfo.rot) * matrix(tInfo.pos));
    shaderParameters.worldMatrix = worldMatrix; // upload to the constant buffer
    ..
    ..
    dctx->PSSetShaderResources(0, 1, &tInfo.texture);
    dctx->Draw(4, 0); // ID3D11DeviceContext::Draw(vertexCount, startVertexLocation)
}
then in your vertex shader:
cbuffer cbParameters : register(b0) {
    float4x4 worldMatrix;
};

VOut main(float4 position : POSITION, float4 texCoord : TEXCOORD)
{
    ....
    output.position = mul(position, worldMatrix);
    ...
}
Remember, all of this is pseudo-code, but it should give you the idea. There is a problem, though, if you are planning to draw a lot of triangles, say 1000: this is probably not the best option. You should use DrawIndexed and modify the vertex positions of each triangle, or you can use DrawInstanced, which is simpler, to send all the information in just one draw call, because calling Draw once per triangle is very expensive for large amounts.

iOS OpenGL ES to draw a mesh wireframe

I have a human model in an .OBJ file I want to display as a mesh with triangles. No textures. I want also to be able to move, scale, and rotate in 3D.
The first (and working) option is to project the vertices to 2D manually using the math and then draw them with Quartz 2D. This works, since I know the underlying concepts of perspective projection.
However, I would like to use OpenGL ES instead, but I am not sure how to draw the triangles.
For example, the code in - (void)drawRect:(CGRect)rect is:
glClearColor(1,0,0,0);
glClear(GL_COLOR_BUFFER_BIT);
GLKBaseEffect *effect = [[GLKBaseEffect alloc] init];
[effect prepareToDraw];
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
Now what? I have an array of vertex positions (3 floats per vertex) and an array of triangle indices, so I tried this:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, numVertices,pVertices);
glDrawElements(GL_TRIANGLES, numTriangles, GL_UNSIGNED_INT,pTriangles);
but this doesn't show anything. I saw a sample using glEnableVertexAttribArray(GLKVertexAttribPosition) and glDrawArrays, but I'm not sure how to use them.
I also understand that rendering a wireframe is not directly possible with ES, so I'd have to apply color attributes to the vertices. That's OK, but before that the triangles have to be displayed at all.
The first thing I'd ask is: where are your vertices? OpenGL (ES) draws in a coordinate space that extends from (-1, -1, -1) to (1, 1, 1), so you probably want to transform your points with a projection matrix to get them into that space. To learn about projection matrices and more of the basics of OpenGL ES 2.0 on iOS, I'd suggest finding a book or a tutorial. This one's not bad, and here's another that's specific to GLKit.
Drawing with OpenGL in drawRect: is probably not something you want to be doing. If you're already using GLKit, why not use GLKView? There's good example code to get you started if you create a new Xcode project with the "OpenGL Game" template.
Once you get up to speed with GL, you'll find that glPolygonMode, the function typically used for wireframe drawing on desktop OpenGL, doesn't exist in OpenGL ES. Depending on how your vertex data is organized, though, you might be able to get a decent wireframe with GL_LINES or GL_LINE_LOOP (see the sketch below). Or, since you're using GLKit, you can skip wireframe and set up some lights and shading pretty easily with GLKBaseEffect.
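For the GL_LINES route, here is a small C sketch (trianglesToLines is a made-up helper): each triangle (a, b, c) is expanded into its three edges, so a mesh with N triangles needs a line index buffer of 6N entries. Shared edges get drawn twice, which is usually acceptable for a quick wireframe.

// Expands a triangle index list into a GL_LINES index list.
// `lines` must have room for 6 * triangleCount indices.
void trianglesToLines(const GLushort *tris, int triangleCount, GLushort *lines)
{
    for (int t = 0; t < triangleCount; ++t) {
        GLushort a = tris[3*t], b = tris[3*t + 1], c = tris[3*t + 2];
        GLushort *e = lines + 6*t;
        e[0] = a; e[1] = b; // edge ab
        e[2] = b; e[3] = c; // edge bc
        e[4] = c; e[5] = a; // edge ca
    }
}

// then: glDrawElements(GL_LINES, 6 * triangleCount, GL_UNSIGNED_SHORT, lines);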

Automatically calculate normals in GLKit/OpenGL-ES

I'm making some fairly basic shapes in OpenGL ES based on sample code from Apple. They've used an array of points with an array of indices into the first array, where each set of three indices defines a polygon. That's all great; I can make the shapes I want. To shade the shapes correctly, I believe I need to calculate normals for each vertex of each polygon. At first the shapes were cuboidal, so it was very easy, but now that I'm making (slightly) more advanced shapes, I want to create those normals automatically. It seems easy enough if I get vectors for two edges of a polygon (all polys are triangles here) and use their cross product for every vertex on that polygon. After that, I use code like below to draw the shape.
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, triangleVertices);
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE, 0, triangleColours);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 0, triangleNormals);
glDrawArrays(GL_TRIANGLES, 0, 48);
glDisableVertexAttribArray(GLKVertexAttribPosition);
glDisableVertexAttribArray(GLKVertexAttribColor);
glDisableVertexAttribArray(GLKVertexAttribNormal);
What I'm having trouble understanding is why I have to do this manually. I'm sure there are cases when you'd want something other than just a vector perpendicular to the surface, but I'm also sure that this is the most popular use case by far, so shouldn't there be an easier way? Have I missed something obvious? glCalculateNormals() would be great.
And here is an answer:
Pass in a GLKVector3 array that you wish to be filled with your normals, another holding the vertices (every three form one triangle), and then the count of the vertices.
- (void)calculateSurfaceNormals:(GLKVector3 *)normals forVertices:(GLKVector3 *)incomingVertices count:(int)numOfVertices
{
    for (int i = 0; i < numOfVertices; i += 3)
    {
        GLKVector3 vector1 = GLKVector3Subtract(incomingVertices[i+1], incomingVertices[i]);
        GLKVector3 vector2 = GLKVector3Subtract(incomingVertices[i+2], incomingVertices[i]);
        GLKVector3 normal  = GLKVector3Normalize(GLKVector3CrossProduct(vector1, vector2));
        normals[i]   = normal;
        normals[i+1] = normal;
        normals[i+2] = normal;
    }
}
And again the answer is: OpenGL is neither a scene management library nor a geometry library, but just a drawing API that draws nice pictures to the screen. For lighting it needs normals, and you give it the normals. That's all. Why should it compute normals when this can just be done by the user and has nothing to do with the actual drawing?
Often you don't compute them at runtime anyway, but load them from a file. And there are many many ways to compute normals. Do you want per-face normals or per-vertex normals? Do you need any specific hard edges or any specific smooth patches? If you want to average face normals to get vertex normals, how do you want to average these?
And with the advent of shaders and the removal of the built-in normal attribute and lighting computations in newer OpenGL versions, this whole question becomes obsolete anyway, as you can do lighting any way you want and don't necessarily need traditional normals anymore.
By the way, it sounds like at the moment you are using per-face normals, which means every vertex of a face has the same normal. This creates a very faceted model with hard edges and also doesn't work very well together with indices. If you want a smooth model (I don't know, maybe you really want a faceted look), you should average the face normals of the adjacent faces for each vertex to compute per-vertex normals. That would actually be the more usual use-case and not per-face normals.
So you can do something like this pseudo-code:
for each vertex normal:
    initialize to zero vector
for each face:
    compute face normal using cross product
    add face normal to each vertex normal of this face
for each vertex normal:
    normalize
to generate smooth per-vertex normals. Even in actual code this should result in something between 10 and 20 lines of code, which isn't really complex.
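For reference, here is a direct C translation of that pseudo-code for an indexed triangle mesh; the Vec3 type and its helpers are defined inline, and accumulating unnormalized face normals means larger faces contribute more to the average:

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(Vec3 a, Vec3 b) { return (Vec3){ a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 cross(Vec3 a, Vec3 b)
{
    return (Vec3){ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Smooth per-vertex normals: accumulate each (unnormalized) face normal
// into its three vertices, then normalize the sums.
void computeVertexNormals(const Vec3 *verts, int vertexCount,
                          const unsigned *indices, int indexCount,
                          Vec3 *normals /* out: vertexCount entries */)
{
    for (int v = 0; v < vertexCount; ++v)
        normals[v] = (Vec3){ 0.0f, 0.0f, 0.0f };

    for (int i = 0; i < indexCount; i += 3) {
        unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
        Vec3 n = cross(sub(verts[b], verts[a]), sub(verts[c], verts[a]));
        normals[a] = add(normals[a], n);
        normals[b] = add(normals[b], n);
        normals[c] = add(normals[c], n);
    }

    for (int v = 0; v < vertexCount; ++v) {
        float len = sqrtf(normals[v].x * normals[v].x
                        + normals[v].y * normals[v].y
                        + normals[v].z * normals[v].z);
        if (len > 0.0f)  // guard against degenerate (unused) vertices
            normals[v] = (Vec3){ normals[v].x / len, normals[v].y / len, normals[v].z / len };
    }
}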
