Drawing multiple identical pyramids in OpenGL ES 2.0 - iOS

I want to draw as many pyramids as it takes to fill up the space. I can draw a single pyramid, change its color, and so on, but now I want to draw many pyramids that fill the screen, reusing a single set of vertices and indices.
The vertex and index data, with color information, are as follows:
const Vertex Vertices [] = {
{{-1, -1, -1}, {1, 0, 0, 1}},
{{1, -1, -1}, {1, 0, 0, 1}},
{{1, -1, 1}, {1, 0, 0, 1}},
{{-1, -1, 1}, {1, 0, 0, 1}},
{{0, 1, 0}, {1, 0, 0, 1}}
};
const GLubyte Indices[] = {
2, 4, 3,
1, 4, 2,
0, 4, 1,
4, 0, 3
};
Can anybody help me with the code? I know I am making some mistakes.

In OpenGL ES 2.0 the only way you can do this is by re-rendering the pyramid at different positions on the screen. What you're getting at is called 'instancing', and it's only supported from OpenGL ES 3.0, where you would have, as you said, one set of vertices and indices, but you would issue a single GL command to draw many instances of them; in the vertex shader the built-in variable gl_InstanceID tells you which instance is currently being drawn.
There are some vendor-specific extensions that allow you to do instancing in OpenGL ES 2.0, such as NV_draw_instanced, but those only work on specific vendors' hardware.
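For ES 2.0 the draw loop would look something like this (a minimal sketch, assuming the pyramid's vertex and index buffers from the question are already bound, the attributes are set up, and the shader exposes a model-view-projection uniform; the grid size, spacing, and names are illustrative):
// Re-render the same pyramid at several positions by updating the
// model-view-projection matrix between draw calls.
const int gridWidth = 8, gridHeight = 8;
for (int x = 0; x < gridWidth; x++) {
    for (int y = 0; y < gridHeight; y++) {
        GLKMatrix4 model = GLKMatrix4MakeTranslation(x * 2.5f, y * 2.5f, -10.0f);
        GLKMatrix4 mvp = GLKMatrix4Multiply(projection, model);
        glUniformMatrix4fv(mvpUniform, 1, GL_FALSE, mvp.m);
        glDrawElements(GL_TRIANGLES, sizeof(Indices) / sizeof(Indices[0]), GL_UNSIGNED_BYTE, 0);
    }
}
Each iteration is a full draw call, which is exactly the per-instance overhead that ES 3.0 instancing removes.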

Related

WebGL TRIANGLE vs TRIANGLE_STRIP

I've got a single triangle rendering using gl.TRIANGLE_STRIP, but when I try to change it to gl.TRIANGLE, the faces do not render. It appears like the vertices are rendering as really tiny dots, but the faces are empty.
My understanding is that the vertex format for a TRIANGLE vs TRIANGLE_STRIP should be identical for a single triangle.
// vertex setup
const buffer = gl.createBuffer();
const vertices = new Float32Array([
1, -1, -1, 1, 1.3, 1.5, 1,
1, -1, 1, 1.3, 1, 1.5, 1,
0, 1, 0, 1, 1, 1.75, 1
]);
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
const length = vertices.length / 7;
const mode = gl.TRIANGLE_STRIP;
return {buffer, length, mode};
That works as expected with the following render code:
// render frame
gl.bindBuffer(gl.ARRAY_BUFFER, shape.buffer);
gl.vertexAttribPointer(attribs.position, 3, gl.FLOAT, false, 28, 0);
gl.vertexAttribPointer(attribs.color, 4, gl.FLOAT, false, 28, 12);
gl.enableVertexAttribArray(attribs.position);
gl.enableVertexAttribArray(attribs.color);
gl.useProgram(programs.colored);
gl.uniformMatrix4fv(uniforms.projection, false, projection);
gl.uniformMatrix4fv(uniforms.modelView, false, modelView);
gl.drawArrays(shape.mode, 0, shape.length);
But if I change the mode to gl.TRIANGLE, no faces appear, with the vertices just barely visible as tiny dots.
What am I misunderstanding here?
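For what it's worth, the likely cause: WebGL's constant is gl.TRIANGLES (plural). gl.TRIANGLE does not exist, so the mode evaluates to undefined, which WebGL coerces to 0, and 0 happens to be gl.POINTS; that is exactly why the vertices render as tiny dots. Assuming the setup code above, the fix is one token:
const mode = gl.TRIANGLES; // gl.TRIANGLE is undefined and coerces to gl.POINTS (0)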

How to decrease the size of 4 faces on a cube in Xcode with OpenGL ES

I have downloaded a sample project that uses OpenGL ES on iOS with Objective-C. The app creates a simple cube. I want to make the cube a rectangular prism by decreasing the distance between the front face and the back face (making it slimmer). To do that I need to decrease the size of the top, bottom, left, and right faces. I am new to OpenGL and don't know which code to change in order to shrink those four faces. Here is the code:
typedef struct {
float Position[3];
float Color[4];
float TexCoord[2];
} Vertex;
const Vertex Vertices[] = {
// Front
{{1, -1, 1}, {1, 1, 1, 1}, {1, 0}},
{{1, 1, 1}, {1, 1, 1, 1}, {1, 1}},
{{-1, 1, 1}, {1, 1, 1, 1}, {0, 1}},
{{-1, -1, 1}, {1, 1, 1, 1}, {0, 0}},
// Back
{{1, 1, -1}, {1, 1, 1, 1}, {0, 1}},
{{-1, -1, -1}, {1, 1, 1, 1}, {1, 0}},
{{1, -1, -1}, {1, 1, 1, 1}, {0, 0}},
{{-1, 1, -1}, {1, 1, 1, 1}, {1, 1}},
// Left
{{-1, -1, 1}, {1, 1, 1, 1}, {1, 0}},
{{-1, 1, 1}, {1, 1, 1, 1}, {1, 1}},
{{-1, 1, -1}, {1, 1, 1, 1}, {0, 1}},
{{-1, -1, -1}, {1, 1, 1, 1}, {0, 0}},
// Right
{{1, -1, -1}, {1, 1, 1, 1}, {1, 0}},
{{1, 1, -1}, {1, 1, 1, 1}, {1, 1}},
{{1, 1, 1}, {1, 1, 1, 1}, {0, 1}},
{{1, -1, 1}, {1, 1, 1, 1}, {0, 0}},
// Top
{{1, 1, 1}, {1, 1, 1, 1}, {1, 0}},
{{1, 1, -1}, {1, 1, 1, 1}, {1, 1}},
{{-1, 1, -1}, {1, 1, 1, 1}, {0, 1}},
{{-1, 1, 1}, {1, 1, 1, 1}, {0, 0}},
// Bottom
{{1, -1, -1}, {1, 1, 1, 1}, {1, 0}},
{{1, -1, 1}, {1, 1, 1, 1}, {1, 1}},
{{-1, -1, 1}, {1, 1, 1, 1}, {0, 1}},
{{-1, -1, -1}, {1, 1, 1, 1}, {0, 0}}
};
const GLubyte Indices[] = {
// Front
0, 1, 2,
2, 3, 0,
// Back
4, 6, 5,
4, 5, 7,
// Left
8, 9, 10,
10, 11, 8,
// Right
12, 13, 14,
14, 15, 12,
// Top
16, 17, 18,
18, 19, 16,
// Bottom
20, 21, 22,
22, 23, 20
};
If you guys think this isn't the code that determines the size of the faces, please tell me what method was probably used, so I can find it in the project and post it here.
The problem was fixed thanks to Tommy. But now I have a new issue: the size of the four faces has decreased, but there is now a gap between the front and back faces and the rest of the faces; here is a screenshot.
How can I move the front face inwards towards the other faces so it's attached to them?
Each entry in the Vertices array defines an instance of the Vertex struct. The first three numbers are the Position: the first vertex listed has position {1, -1, 1}, the second has {1, 1, 1}, and so on. They're all floating-point numbers in this code, so any fractional value will do.
Indices groups the vertices into triangles, as the grouping into threes strongly implies. So the 'front' is the triangle between the 0th, 1st and 2nd vertices plus the triangle between the 2nd, 3rd and 0th vertices.
Therefore the depth of the cube is set by the positions of vertices 0, 1, 2 and 3, which all have z = 1. If you changed that to e.g. z = 0.5 you'd move the front face towards the centre of the cube, shrinking the top, bottom, left and right faces.
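Regarding the follow-up gap: this cube duplicates each corner once per face, so changing z only in the four Front entries leaves the copies of those corners in the Left, Right, Top and Bottom groups behind at z = 1. A sketch of the fix, assuming the front face was moved to z = 0.5: every other vertex that had z = 1 needs the same change, e.g. for the Left face:
// Left - the two corners shared with the front face move with it
{{-1, -1, 0.5}, {1, 1, 1, 1}, {1, 0}},
{{-1, 1, 0.5}, {1, 1, 1, 1}, {1, 1}},
{{-1, 1, -1}, {1, 1, 1, 1}, {0, 1}},
{{-1, -1, -1}, {1, 1, 1, 1}, {0, 0}},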

How to change where a rendered object is placed on the screen, OpenGL ES 2.0 iOS

I downloaded a project that displays a square with some texture. The square is currently located near the bottom of the screen, and I want to move it to the middle. Here is the code.
Geometry.h
#import "GLKit/GLKit.h"
#ifndef Geometry_h
#define Geometry_h
typedef struct {
float Position[3];
float Color[4];
float TexCoord[2];
float Normal[3];
} Vertex;
typedef struct{
int x;
int y;
int z;
} Position;
extern const Vertex VerticesCube[24];
extern const GLubyte IndicesTrianglesCube[36];
#endif
Here is the code in Geometry.m:
const Vertex VerticesCube[] = {
// Front
{{1, -1, 1}, {1, 0, 0, 1}, {0, 1}, {0, 0, 1}},
{{1, 1, 1}, {0, 1, 0, 1}, {0, 2.0/3.0}, {0, 0, 1}},
{{-1, 1, 1}, {0, 0, 1, 1}, {1.0/3.0, 2.0/3.0}, {0, 0, 1}},
{{-1, -1, 1}, {0, 0, 0, 1}, {1.0/3.0, 1}, {0, 0, 1}},
};
const GLubyte IndicesTrianglesCube[] =
{
// Front
0, 1, 2,
2, 3, 0,
};
What part of this code determines the position of the rendered object on the screen?
None of the code you posted has to do with screen position.
VerticesCube specifies the cube corners in an arbitrary 3D space. Somewhere in code you haven't posted, a projection transform (and probably also view and model transforms) map each vertex to clip space, and a glViewport call (which is probably implicitly done for you if you're using GLKView) maps clip space to screen/view coordinates.
Rearranging things on screen could involve any one of those transforms, and which one to use is a choice that depends on understanding each one and how it fits into the larger context of your app design.
This is the sort of thing you'd get from the early stages of any OpenGL tutorial. Here's a decent one.
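As a minimal, hypothetical sketch (the uniform handle and values are assumptions, since that part of the project wasn't posted), repositioning a single object is most commonly done in the model-view transform:
// Move the square up by one unit in world space before drawing it
// (assumes a perspective projection looking down the negative z axis).
GLKMatrix4 modelView = GLKMatrix4MakeTranslation(0.0f, 1.0f, -5.0f);
glUniformMatrix4fv(modelViewUniform, 1, GL_FALSE, modelView.m);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, 0);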

OpenGL ES texture degrades in quality

I am trying to draw a Core Graphics image, generated at screen resolution, into OpenGL. However, the image renders more aliased than the CG output (antialiasing is disabled in CG). The text is the texture; the blue background is drawn in Core Graphics for the first image and in OpenGL for the second.
CG Output:
OpenGL Render (in simulator):
Framebuffer setup:
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];
glGenRenderbuffers(1, &onscrRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, onscrRenderBuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self.layer];
glGenFramebuffers(1, &onscrFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, onscrFramebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, onscrRenderBuffer);
Texture Loading Code:
-(GLuint) loadTextureFromImage:(UIImage*)image {
CGImageRef textureImage = image.CGImage;
size_t width = CGImageGetWidth(textureImage);
size_t height = CGImageGetHeight(textureImage);
GLubyte* spriteData = (GLubyte*) malloc(width*height*4);
CGColorSpaceRef cs = CGImageGetColorSpace(textureImage);
CGContextRef c = CGBitmapContextCreate(spriteData, width, height, 8, width*4, cs, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
// Note: the color space comes from CGImageGetColorSpace (the Get rule),
// so it is not owned here and must not be released.
CGContextScaleCTM(c, 1, -1);
CGContextTranslateCTM(c, 0, -CGContextGetClipBoundingBox(c).size.height);
CGContextDrawImage(c, (CGRect){CGPointZero, {width, height}}, textureImage);
CGContextRelease(c);
GLuint glTex;
glGenTextures(1, &glTex);
glBindTexture(GL_TEXTURE_2D, glTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
glBindTexture(GL_TEXTURE_2D, 0);
free(spriteData);
return glTex;
}
Vertices:
struct vertex {
float position[3];
float color[4];
float texCoord[2];
};
typedef struct vertex vertex;
const vertex bgVertices[] = {
{{1, -1, 0}, {0, 167.0/255.0, 253.0/255.0, 1}, {1, 0}}, // BR (0)
{{1, 1, 0}, {0, 222.0/255.0, 1.0, 1}, {1, 1}}, // TR (1)
{{-1, 1, 0}, {0, 222.0/255.0, 1.0, 1}, {0, 1}}, // TL (2)
{{-1, -1, 0}, {0, 167.0/255.0, 253.0/255.0, 1}, {0, 0}} // BL (3)
};
const vertex textureVertices[] = {
{{1, -1, 0}, {0, 0, 0, 0}, {1, 0}}, // BR (0)
{{1, 1, 0}, {0, 0, 0, 0}, {1, 1}}, // TR (1)
{{-1, 1, 0}, {0, 0, 0, 0}, {0, 1}}, // TL (2)
{{-1, -1, 0}, {0, 0, 0, 0}, {0, 0}} // BL (3)
};
const GLubyte indicies[] = {
3, 2, 0, 1
};
Render Code:
glClear(GL_COLOR_BUFFER_BIT);
GLsizei width, height;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
glViewport(0, 0, width, height);
glBindBuffer(GL_ARRAY_BUFFER, bgVertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, sizeof(vertex), 0);
glVertexAttribPointer(colorSlot, 4, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*3));
glVertexAttribPointer(textureCoordSlot, 2, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*7));
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indicies)/sizeof(indicies[0]), GL_UNSIGNED_BYTE, 0);
glBindBuffer(GL_ARRAY_BUFFER, textureVertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(textureUniform, 0);
glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, sizeof(vertex), 0);
glVertexAttribPointer(colorSlot, 4, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*3));
glVertexAttribPointer(textureCoordSlot, 2, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*7));
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indicies)/sizeof(indicies[0]), GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
I am using the blend function glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in case that has anything to do with it.
Any ideas where the problem is?
Your GL-rendered output looks all pixelated because it has fewer pixels. Per the Drawing and Printing Guide for iOS, the default scale factor for a CAEAGLLayer is 1.0, so when you set up your GL render buffers, you get one pixel in the buffer per point. (Remember, a point is a unit of UI layout, which on modern devices with Retina displays works out to several hardware pixels.) When you render that buffer full-screen, everything gets scaled up (by about 2.61x on an iPhone 6(s) Plus).
To render at the native screen resolution, you need to increase the contentScaleFactor of your view. (Preferably, you should do this early on, before setting up renderbuffers, so that they get the new scale factor from the view's layer.)
Watch out, though: you want to use the UIScreen property nativeScale, not scale. The scale property reflects UI rendering, where, on iPhone 6(s) Plus, everything gets done at 3x and then scaled down slightly to the native resolution of the display. The nativeScale property reflects the number of actual device pixels per point — if you're doing GPU rendering, you want to target that so you don't sap performance by drawing more pixels than you need to. (On current devices other than the "Plus" iPhones, scale and nativeScale are the same. But using the latter is probably a good insurance policy.)
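Putting those two points together, the setup might look like this (a sketch, assuming the framebuffer code above lives in a UIView subclass backed by a CAEAGLLayer):
// Match the backing store to the display's physical pixels. Do this before
// renderbufferStorage:fromDrawable: so the new scale is picked up when the
// renderbuffer is allocated.
self.contentScaleFactor = [UIScreen mainScreen].nativeScale;
glBindRenderbuffer(GL_RENDERBUFFER, onscrRenderBuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self.layer];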
You can avoid a lot of these kinds of issues (and others) by letting GLKView do renderbuffer setup for you. Even if you're writing cross-platform GL, that part of your code is going to have to be pretty platform- and device-specific anyway, so you might as well reduce the amount of it that you have to write and maintain.
(Addressing previous edits of the question, for posterity's sake: this has nothing to do with multisampling or the quality of the GL texture data. Multisampling has to do with rasterization of polygon edges — points in the interior of a polygon get one fragment per pixel, but points on the edges get multiple fragments whose colors are blended in the resolve stage. And if you bind the texture to an FBO and glReadPixels from it, you'll find the image is pretty much the same one you put in.)
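If you want to verify that last point yourself, a quick read-back check (hypothetical code, not part of the project) looks like this:
// Attach the texture to an offscreen framebuffer and read its pixels back;
// they should closely match the data originally passed to glTexImage2D.
GLuint checkFBO;
glGenFramebuffers(1, &checkFBO);
glBindFramebuffer(GL_FRAMEBUFFER, checkFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
GLubyte *readback = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, (GLsizei)width, (GLsizei)height, GL_RGBA, GL_UNSIGNED_BYTE, readback);
// ... compare readback against the original spriteData ...
free(readback);
glDeleteFramebuffers(1, &checkFBO);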

OpenGL ES 2.0 resizing textures based on object size

I would like to be able to dynamically repeat textures based on the scale of an object (a cube).
I have tried going through the VerticesCube3D structure but get a crash when trying to change the values. My textures are set up to repeat, but currently the texture is stretched; I need to change TEX_COORD_MAX dynamically.
Vertex VerticesCube3D[] = {
// Front
{{1, -1, 0}, {1, 1, 1, 1}, {TEX_COORD_MAX, 0}},
{{1, 1, 0}, {1, 1, 1, 1}, {TEX_COORD_MAX, TEX_COORD_MAX}},
{{-1, 1, 0}, {1, 1, 1, 1}, {0, TEX_COORD_MAX}},
{{-1, -1, 0}, {1, 1, 1, 1}, {0, 0}},
// Back
{{1, 1, -2}, {1, 1, 1, 1}, {TEX_COORD_MAX, 0}},
{{-1, -1, -2}, {1, 1, 1, 1}, {TEX_COORD_MAX, TEX_COORD_MAX}},
{{1, -1, -2}, {1, 1, 1, 1}, {0, TEX_COORD_MAX}},
{{-1, 1, -2}, {1, 1, 1, 1}, {0, 0}},
// Left
{{-1, -1, 0}, {1, 1, 1, 1}, {TEX_COORD_MAX, 0}},
{{-1, 1, 0}, {1, 1, 1, 1}, {TEX_COORD_MAX, TEX_COORD_MAX}},
{{-1, 1, -2}, {1, 1, 1, 1}, {0, TEX_COORD_MAX}},
{{-1, -1, -2}, {1, 1, 1, 1}, {0, 0}},
// Right
{{1, -1, -2}, {1, 1, 1, 1}, {TEX_COORD_MAX, 0}},
{{1, 1, -2}, {1, 1, 1, 1}, {TEX_COORD_MAX, TEX_COORD_MAX}},
{{1, 1, 0}, {1, 1, 1, 1}, {0, TEX_COORD_MAX}},
{{1, -1, 0}, {1, 1, 1, 1}, {0, 0}},
// Top
{{1, 1, 0}, {1, 1, 1, 1}, {TEX_COORD_MAX, 0}},
{{1, 1, -2}, {1, 1, 1, 1}, {TEX_COORD_MAX, TEX_COORD_MAX}},
{{-1, 1, -2}, {1, 1, 1, 1}, {0, TEX_COORD_MAX}},
{{-1, 1, 0}, {1, 1, 1, 1}, {0, 0}},
// Bottom
{{1, -1, -2}, {1, 1, 1, 1}, {TEX_COORD_MAX, 0}},
{{1, -1, 0}, {1, 1, 1, 1}, {TEX_COORD_MAX, TEX_COORD_MAX}},
{{-1, -1, 0}, {1, 1, 1, 1}, {0, TEX_COORD_MAX}},
{{-1, -1, -2}, {1, 1, 1, 1}, {0, 0}}
};
You don't want to change the texture coordinates in your vertex data (because then you'll have to re-upload it to the hardware). Instead, you want to transform the texture coordinates on the GPU, probably in your vertex shader.
Read about Projective Texture Mapping. This is a pretty common technique, but surprisingly I can't find much in the way of good tutorials about it. Here's a university programming assignment that discusses it a bit.
The gist of it: you define a texture matrix to transform texture coordinates. This is a transformation just like your model, view, and projection matrices, so on iOS you can create it with the same GLKMatrix4 functions. Then provide that matrix to your vertex shader in a uniform variable. In the vertex shader, multiply the input texture coordinate (a vertex attribute) by the texture matrix, and output the result to a varying for use in the fragment shader.
If you define texture coordinates in your vertex data such that a texture covers an object exactly once, you can then use a texture matrix to scale the texture down. If it's scaled down, and your texture wrap parameters are set to repeat, your texture will tile across the face of the object.
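A minimal sketch of that flow, with illustrative names (none of them come from the posted code). On the CPU side, build a scale matrix from the object's dimensions and upload it:
// Tile the texture 3x2 across a face: scaling the coordinates up shrinks
// the texture relative to the face, so with GL_REPEAT wrap modes it tiles.
GLKMatrix4 texMatrix = GLKMatrix4MakeScale(3.0f, 2.0f, 1.0f);
glUniformMatrix4fv(textureMatrixUniform, 1, GL_FALSE, texMatrix.m);
And in the vertex shader (GLSL ES):
attribute vec4 Position;
attribute vec2 TexCoordIn;
uniform mat4 ModelViewProjection;
uniform mat4 TextureMatrix;
varying vec2 TexCoordOut;
void main() {
    gl_Position = ModelViewProjection * Position;
    TexCoordOut = (TextureMatrix * vec4(TexCoordIn, 0.0, 1.0)).xy;
}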
