2D Drawing In OpenGL ES 2.0 (iOS)

I'm learning how to use OpenGL ES 2.0 on iOS. Right now I want to just do some basic 2D animation (e.g. move a rectangle around the screen and change its size). I've started out with the project template for OpenGL ES provided by Apple in Xcode. My drawing code looks like this:
static GLfloat squareVertices[] = {
    -0.5f, -0.33f,
     0.5f, -0.33f,
    -0.5f,  0.33f,
     0.5f,  0.33f
};
// Update attribute values.
glVertexAttribPointer(VERTEX_ATTR, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(VERTEX_ATTR);
glVertexAttribPointer(COLOR_ATTR, 4, GL_UNSIGNED_BYTE, 1, 0, squareColors);
glEnableVertexAttribArray(COLOR_ATTR);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Now this will draw a nice rectangle in the middle of the screen. But if I start to change the rectangle by adding the following code, it starts to look funky:
squareVertices[5] -= .001;
squareVertices[7] -= .001;
It is as if part of the rectangle is attached to the center of the screen. I am completely new to OpenGL ES, so I'm sure my problem is obvious. I also assume this has something to do with OpenGL ES being a 3D graphics library while I'm trying to treat it as a 2D space. So my question is: what is the best way to draw and animate 2D objects in OpenGL ES 2.0? I've seen some stuff online for OpenGL ES 1.1, but that is not much help to me. Are there special techniques for 2D drawing in OpenGL ES 2.0, or is there some sort of 2D drawing mode?
Any guidance would be greatly appreciated.

@macinjosh: This is a response to your updated question for those who are interested in the answer. I'm guessing you've gained further knowledge since Dec '10 when you posted!
OpenGL vertices are in 3D, not 2D. To be completely truthful, they're actually in 4D, since they include a 'w' component as well, but for now just consider that to be 1.0, as in a homogeneous vector.
Because they are 3D, unless you add a 'z' component in the shader, you must specify one as a vertex attribute.

Because you have a '3' as the second parameter of glVertexAttribPointer. I believe you can set it to 2, but I haven't tried this in GL. My guess is the missing z component would be filled in internally as '0' (but again, try it and see!). Internally, it's probably all going to be 4-float vectors anyway, with the fourth ('w') component used for homogeneous matrix multiplication gubbins.
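For illustration, a minimal sketch of the two-component variant (the GL spec does fill unspecified attribute components as z = 0, w = 1, so this should be safe):
// Two floats per vertex; the shader still sees a vec4,
// with z filled in as 0.0 and w as 1.0.
glVertexAttribPointer(VERTEX_ATTR, 2, GL_FLOAT, GL_FALSE, 0, squareVertices);
glEnableVertexAttribArray(VERTEX_ATTR);
// Vertex shader side (GLSL ES):
//   attribute vec4 position;   // receives (x, y, 0.0, 1.0)
//   void main() { gl_Position = position; }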
If you're doing this on a mobile device you may also wish to look into fixed-point maths (faster on some devices that don't have a floating-point co-processor) and also vertex buffer objects, which are more efficient on many machines. Also, a fixed vertex format such as that used by
glInterleavedArrays(format_code, stride, data)
may prove to be more efficient, as the device may have optimized code paths for whatever 'format_code' you decide to go with. (Note that glInterleavedArrays is desktop GL; OpenGL ES has no equivalent call, but the same interleaved layout can be set up with glVertexAttribPointer strides.)
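As a hedged sketch of the VBO suggestion in OpenGL ES 2.0 terms (reusing the names from the question's code; GL_DYNAMIC_DRAW because the vertices are animated):
// Create the buffer once and upload the quad into it.
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(squareVertices), squareVertices, GL_DYNAMIC_DRAW);
// With a buffer bound, the last argument is a byte offset, not a client pointer.
glVertexAttribPointer(VERTEX_ATTR, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(VERTEX_ATTR);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);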

I'm new to OpenGL, but doesn't a vertex require 3 floats, one per axis (X, Y, Z)?
So the first vertex array would be read as:
( -0.50f, -0.33f, 0.50f)
( -0.33f, -0.50f, 0.33f)
( 0.50f, 0.33f, ?????)
and the second would be:
( -0.50f, -0.33f, 0.00f )
( 0.50f, -0.33f, 0.00f )
( -0.50f, 0.33f, 0.00f )
( 0.50f, 0.33f, 0.00f )

After some playing around I changed the drawing code to this:
static GLfloat squareVertices[12] = {
    -0.5f, -0.33f, 0.0f,
     0.5f, -0.33f, 0.0f,
    -0.5f,  0.33f, 0.0f,
     0.5f,  0.33f, 0.0f
};
// Update attribute values.
glVertexAttribPointer(VERTEX_ATTR, 3, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(VERTEX_ATTR);
glVertexAttribPointer(COLOR_ATTR, 4, GL_UNSIGNED_BYTE, 1, 0, squareColors);
glEnableVertexAttribArray(COLOR_ATTR);
squareVertices[7] -= .001;
squareVertices[10] -= .001;
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Adding the third 0.0 float to each vertex seemed to do the trick. I am unclear on why this is so; if anyone could shed some light I would appreciate it.

Related

Draw textured quad in background of opengl scene

Code flow is as follows:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderScene();
renderTexturedQuadForBackground();
presentRenderbuffer();
Is there any way for me to get that textured quad rendering code to show behind the scene in spite of the fact that the scene renders first? Assume that I cannot change that the rendering of the background textured quad will happen directly before I present the render buffer.
Rephrased: I can't change the rendering order. Essentially, what I want is for every pixel that would've been colored only by glClearColor to instead be colored by this textured quad.
The easiest solution is to define the quad in normalized device coordinates directly and set the z-value to 1. You then don't need to project the quad and it will be screen-filling and behind anything else - except stuff that's also at z=1 after projection and perspective divide.
That's pretty much the standard procedure for screen-aligned quads, except there is usually no need to put the quad at z=1, not that it would matter. Usually, full-screen quads are simply used to be able to process at least one fragment per pixel, normally a 1:1 mapping of fragments and pixels. Deferred shading, post-processing effects, or image processing in general are the usual suspects. Since in most cases you only render the quad (and nothing else), the depth value is irrelevant, as long as it's inside the unit cube and not dropped by the depth test, for instance when you put it at z=1 and your depth function is LESS.
EDIT: I made a little mistake. NDCs are defined in a left-handed coordinate system, meaning that the near plane is mapped to -1 and the far plane is mapped to 1. So, you need to define your quad in NDCs with a z value of 1 and set the DepthFunc to LEQUAL. Alternatively, you can leave the depth function untouched and simply subtract a very small value from 1.f:
float maxZ = 1.f - std::numeric_limits<float>::epsilon();
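Alternatively, the depth-func route mentioned above is a single state change (it accepts fragments that land exactly at the far plane):
glDepthFunc(GL_LEQUAL); // pass fragments with depth == 1.0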
EDIT2: Let's assume you want to render a screen-aligned quad which is drawn behind everything else and with appropriate texture coordinates. Please note: I'm on a desktop here, so I'm writing core GL code which doesn't map to GLES 2.0 directly. However, there is nothing in my example you can't do with GLES and GLSL ES 2.0.
You may define the vertex attribs of the quad like this (without messing with the depth func):
GLfloat maxZ = 1.f - std::numeric_limits<GLfloat>::epsilon();
// interleaved positions and tex coords
GLfloat quad[] = {-1.f, -1.f, maxZ, 1.f,  // v0
                   0.f,  0.f, 0.f,  0.f,  // t0
                   1.f, -1.f, maxZ, 1.f,  // ...
                   1.f,  0.f, 0.f,  0.f,
                   1.f,  1.f, maxZ, 1.f,
                   1.f,  1.f, 0.f,  0.f,
                  -1.f,  1.f, maxZ, 1.f,
                   0.f,  1.f, 0.f,  0.f};
GLubyte indices[] = {0, 1, 2, 0, 2, 3};
The VAO and buffers are set up accordingly:
// generate and bind a VAO
gl::GenVertexArrays (1, &vao);
gl::BindVertexArray (vao);
// setup our VBO
gl::GenBuffers (1, &vbo);
gl::BindBuffer (gl::ARRAY_BUFFER, vbo);
gl::BufferData (gl::ARRAY_BUFFER, sizeof(quad), quad, gl::STATIC_DRAW);
// setup our index buffer
gl::GenBuffers (1, &ibo);
gl::BindBuffer (gl::ELEMENT_ARRAY_BUFFER, ibo);
gl::BufferData (gl::ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, gl::STATIC_DRAW);
// setup our vertex arrays
gl::VertexAttribPointer (0, 4, gl::FLOAT, gl::FALSE_, 8 * sizeof(GLfloat), 0);
gl::VertexAttribPointer (1, 4, gl::FLOAT, gl::FALSE_, 8 * sizeof(GLfloat), (GLvoid*)(4 * sizeof(GLfloat)));
gl::EnableVertexAttribArray (0);
gl::EnableVertexAttribArray (1);
The shader code comes down to a very simple pass-through vertex shader and, for simplicity, a fragment shader which in my example simply exports the interpolated tex coords:
// Vertex Shader
#version 430 core
layout (location = 0) in vec4 Position;
layout (location = 1) in vec4 TexCoord;
out vec2 vTexCoord;

void main()
{
    vTexCoord = TexCoord.xy;
    // you don't need to project; you're already in NDCs!
    gl_Position = Position;
}

// Fragment Shader
#version 430 core
in vec2 vTexCoord;
out vec4 FragColor;

void main()
{
    FragColor = vec4(vTexCoord, 0.0, 1.0);
}
As you can see, the values written to gl_Position are simply the vertex positions passed to the shader invocation. No projection takes place, because the result of projection and perspective divide is nothing other than normalized device coordinates. Since we already are in NDCs, we don't need projection and perspective divide, so we simply pass the positions through unaltered.
The final depth is very close to the maximum of the depth range, and so the quad will appear to be behind anything else in your scene.
You can use the texcoords as usual.
I hope you get the idea. Except for the explicit attrib locations, which aren't supported by GLES 2.0 (i.e. replace them with BindAttribLocation() calls instead), you shouldn't have to change anything.
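For that GLES 2.0 substitution, a minimal sketch (attribute names taken from the shaders above; locations must be bound before linking):
// GLES 2.0 / GLSL ES 1.00 has no layout(location = ...):
glBindAttribLocation(program, 0, "Position");
glBindAttribLocation(program, 1, "TexCoord");
glLinkProgram(program);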
There is a way, but you have to put the quad behind the scene. If your quad is constructed correctly, you can enable depth testing with
glEnable(GL_DEPTH_TEST);
and then use
glDepthFunc(GL_GREATER);
before rendering your background.
Your quad will then be rendered behind the scene. But as I said, this only works when your geometry is literally located behind the scene.

OpenGL draw multiple isometric cubes

I'm trying to draw multiple cubes at an isometric camera angle. Here's the code that draws one. (OpenGL ES 2.0 with GLKit on iOS).
float startZ = -4.0f;
// position
GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;
modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, location.x, location.y, location.z + startZ);
// isometric camera angle
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, GLKMathDegreesToRadians(45), 1.0, 0, 0);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, GLKMathDegreesToRadians(45), 0.0, 1.0, 0);
self.effect.transform.modelviewMatrix = modelViewMatrix;
[self.effect prepareToDraw];
glDrawArrays(GL_TRIANGLES, 0, 36);
The problem is that it translates first, then rotates, which means that with more than one box they do not line up; they look like a chain of diamonds, with each one in position but rotated so that the corners overlap.
I've tried switching the order so the rotation is before the translation, but they don't show up at all. My vertex array is bound to a unit cube centered around the origin.
I really don't understand how to control the camera separately from the objects. I screwed around with the projection matrix for a while without getting it. As far as I understand, the camera is supposed to be controlled with the modelViewMatrix, right? (The "View" part.)
Your 'camera' transform (modelview) seems correct, however it looks like you're using a perspective projection - if you want isometric you will need to change your projection matrix.
It looks like you are applying the camera rotation to each object as you draw it. Instead, simulate a 2-deep matrix stack if your use-case is this simple, so you just have your camera matrix and each cube's matrix.
Set your projection and camera matrices - keep a reference to your camera matrix.
For each cube, generate the individual cube transformation matrix (which should probably consist of translation only - no rotation, so the cubes remain axis-aligned - I think this is what you're going for).
Backwards-multiply your cube matrix by the camera matrix and use that as the modelview matrix.
Note that the camera matrix will remain unchanged for each cube you render, but the modelview matrix will incorporate both the cube's individual transformation matrix and the camera matrix into a single modelview matrix. This is equivalent to the old matrix stack methods glPushMatrix and glPopMatrix (not available in GLES 2.0). If you need more complex object hierarchies (where the cubes have child-objects in their 'local' coordinate space) then you should probably implement your own full matrix stack, instead of the 2-deep equivalent discussed above.
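A minimal sketch of that 2-deep approach with GLKit (assuming the cubes only need translations; the Cube class and the 'cubes' collection are hypothetical placeholders):
// Build the camera matrix once per frame.
GLKMatrix4 camera = GLKMatrix4MakeRotation(GLKMathDegreesToRadians(45), 1, 0, 0);
camera = GLKMatrix4Rotate(camera, GLKMathDegreesToRadians(45), 0, 1, 0);

// For each cube: modelview = camera * cubeTranslation.
for (Cube *cube in cubes) {
    GLKMatrix4 model = GLKMatrix4MakeTranslation(cube.x, cube.y, cube.z - 4.0f);
    self.effect.transform.modelviewMatrix = GLKMatrix4Multiply(camera, model);
    [self.effect prepareToDraw];
    glDrawArrays(GL_TRIANGLES, 0, 36);
}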
For reference, here's an article that helped me understand it. It's a little mathy, but does a good job explaining things intuitively.
http://db-in.com/blog/2011/04/cameras-on-opengl-es-2-x/
I ended up keeping the perspective projection, because I don't want true isometric. The key was to apply the transforms in the right order, because moving the camera is the inverse of moving the object. See the comments and the article. Working code:
// the one you want to happen first is multiplied LAST
// camRotate * camScale * camTranslate * objTranslate * objScale * objRotate;
// TODO cache the camera matrix
// the camera angle remains the same for all objects
GLKMatrix4 camRotate = GLKMatrix4MakeRotation(GLKMathDegreesToRadians(45), 1, 0, 0);
camRotate = GLKMatrix4Rotate(camRotate, GLKMathDegreesToRadians(45), 0, 1, 0);
GLKMatrix4 camTranslate = GLKMatrix4MakeTranslation(4, -5, -4.0);
GLKMatrix4 objTranslate = GLKMatrix4MakeTranslation(location.x, location.y, location.z);
GLKMatrix4 modelViewMatrix = GLKMatrix4Multiply(camRotate, camTranslate);
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, objTranslate);
self.effect.transform.modelviewMatrix = modelViewMatrix;
[self.effect prepareToDraw];
glDrawArrays(GL_TRIANGLES, 0, 36);

Render Large Texture To Smaller Renderbuffer

I have a render buffer that is 852x640 and a texture that is 1280x720. When I render the texture, it is getting cropped, not just stretched. I know the aspect ratio needs correcting, but how can I get it so that the full texture displays in the render buffer?
//-------------------------------------
glGenFramebuffers(1, &frameBufferHandle);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
glGenRenderbuffers(1, &renderBufferHandle);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferHandle);
[oglContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &renderBufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &renderBufferHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderBufferHandle);
//-------------------------------------
static const GLfloat squareVertices[] = {
    -1.0f,  1.0f,
     1.0f,  1.0f,
    -1.0f, -1.0f,
     1.0f, -1.0f
};
static const GLfloat horizontalFlipTextureCoordinates[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f,
};
size_t frameWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t frameHeight = CVPixelBufferGetHeight(pixelBuffer);
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RGBA,
frameWidth,
frameHeight,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&texture);
if (!texture || err) {
    NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
    return;
}
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
glViewport(0, 0, renderBufferWidth, renderBufferHeight); // setting this to 1280x720 fixes the aspect ratio but still crops
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
glUseProgram(shaderPrograms[PASSTHROUGH]);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, horizontalFlipTextureCoordinates);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Present
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferHandle);
[oglContext presentRenderbuffer:GL_RENDERBUFFER];
EDIT
I'm still running into issues. I've included more source. Basically, I need the entire raw input texture to display in wide screen while also writing the raw texture to disk.
When rendering to a smaller texture, things are automatically scaled; is this not the case with a renderbuffer?
I guess I could make another passthrough to a smaller texture, but that would slow things down.
First of all, keep glViewport(0, 0, renderBufferWidth, renderBufferHeight); at 852x640.
The problem is in your squareVertices - it looks like it holds coordinates that represent the texture size. You need to set it to match the renderbuffer size.
The idea is that the texture is mapped onto your squareVertices rect. So you can render a texture of any size mapped onto a rect of any size - the texture image will be scaled to fit the rect.
[Update: square vertices]
In your case it should be:
{
    0.0f, (float)renderBufferHeight/frameHeight,
    (float)renderBufferWidth/frameWidth, (float)renderBufferHeight/frameHeight,
    0.0f, 0.0f,
    (float)renderBufferWidth/frameWidth, 0.0f,
};
But this is not a good solution in general. In theory, the rectangle's size on screen is determined by the vertex positions and the transformation matrix: each vertex is multiplied by the matrix before rendering. It looks like you don't set an OpenGL projection matrix; with a correct orthographic projection your vertices could use pixel-equivalent positions.
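For reference, a minimal sketch of such an orthographic setup in ES 2.0 terms (ES 2.0 has no glOrtho, so the matrix is built by hand and uploaded as a uniform; the ortho2D helper and the uProjection uniform name are made up for illustration):
// Column-major orthographic matrix mapping pixel coords [0,w]x[0,h] to NDC [-1,1].
static void ortho2D(float w, float h, GLfloat m[16]) {
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  =  2.0f / w;   // x' = 2x/w - 1
    m[5]  =  2.0f / h;   // y' = 2y/h - 1
    m[10] = -1.0f;       // z passes through negated (depth unused for 2D)
    m[12] = -1.0f;
    m[13] = -1.0f;
    m[15] =  1.0f;
}

// Usage: upload it and multiply in the vertex shader,
// e.g. gl_Position = uProjection * position;
// GLfloat proj[16];
// ortho2D(852.0f, 640.0f, proj);
// glUniformMatrix4fv(glGetUniformLocation(program, "uProjection"), 1, GL_FALSE, proj);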
Being new to OpenGL myself, I remember that the texture to be mapped should have power-of-two dimensions, e.g. an image resolution of 256x256 or 512x512.
You can then scale the image using the
glScalef(x, y, z);
function as per your requirements: get the height and width accordingly and put these in your glScalef call.
Try this; I hope it works.
Try these functions. My answer can be checked against the info at songho.ca.
glGenFramebuffers()
void glGenFramebuffers(GLsizei n, GLuint* ids)
n is the number of framebuffers to create; ids is a pointer to a GLuint variable or an array in which to store the generated IDs. It returns the IDs of unused framebuffer objects. ID 0 means the default framebuffer, which is the window-system-provided framebuffer.
An FBO may be deleted by calling glDeleteFramebuffers(GLsizei n, const GLuint* ids) when it is not used anymore.
glBindFramebuffer()
Once an FBO is created, it has to be bound before it can be used:
void glBindFramebuffer(GLenum target, GLuint id)
The first parameter, target, should be GL_FRAMEBUFFER; the second parameter is the ID of a framebuffer object.
Once an FBO is bound, all OpenGL operations affect the currently bound framebuffer object. The ID 0 is reserved for the default window-system-provided framebuffer, so to unbind the current FBO, bind ID 0 in glBindFramebuffer().
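Put together, a small sketch of that lifecycle (the attachment step is elided):
GLuint fbo;
glGenFramebuffers(1, &fbo);                 // create an unused FBO ID
glBindFramebuffer(GL_FRAMEBUFFER, fbo);     // all rendering now targets this FBO
// ... attach a texture or renderbuffer, render ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);       // back to the default framebuffer
glDeleteFramebuffers(1, &fbo);              // delete when no longer needed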
Try using those, or at least visit the link, which could help you a lot. Sorry, I'm not experienced in OpenGL, but I wanted to contribute the link and explain the two functions. I think you can use the info to write your code.
Oh boy, so the answer is that this was working all along ;) It turns out the high-resolution preset mode on the iPhone 4 actually covers less area than the medium-resolution preset. This threw me for a loop until Brigadir suggested what I should have done first all along: check the GPU snapshots.
I figured out the aspect ratio issue too by hacking the appropriate code in the GPUImage framework. https://github.com/bradLarson/GPUImage

What is the view volume of the projection without glFrustum and glOrtho setting?

I am reading sample code GLVideoFrame from WWDC2010. In this sample, it has code like below:
static const GLfloat squareVertices[] = {
    -0.5f, -0.33f,
     0.5f, -0.33f,
    -0.5f,  0.33f,
     0.5f,  0.33f,
};
...
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, (GLfloat)(sinf(transY)/2.0f), 0.0f);
transY += 0.075f;
...
glVertexPointer(2, GL_FLOAT, 0, squareVertices);
Notice that this code does not call any function like glFrustum or glOrtho for openGL projection setting.
By only calling glLoadIdentity(), what will be the "default" view volume?
Will it be a perspective projection or an orthographic projection?
Edited, to be more specific: is the view volume a cube ranging from -1 to 1 in all three axes?
OpenGL assumes that after the ModelView and Projection transforms, all visible elements are in clip space (or NDC space): it uses the cube [-1,+1]^3. The matrices' contents are entirely your responsibility. Since you load the identity matrix, there is no projection at all, which is equivalent to an orthographic view volume ranging from -1 to 1 in all three axes.
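To make that concrete, a small sketch (the clipping behaviour follows from the pipeline itself, not from any hidden default projection):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();               // clip space == eye space; no perspective
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// A vertex at (0.5, -0.33, 0.0) lies inside [-1,1]^3 and is drawn;
// one at (0.0, 0.0, 1.5) falls outside the unit cube and is clipped.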

How to set glFrustumf in iPad

Here is a simple OpenGL ES iPad sample I created:
//----------------------------------------
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustumf(-1, 1, -1, 1, 0, 20);
glMatrixMode(GL_MODELVIEW);
static const GLfloat squareVertices[] = {
    -0.5f, -0.33f, 1.6,
     0.5f, -0.33f, 1.6,
    -0.5f,  0.33f, 1.6,
     0.5f,  0.33f, 1.6
};
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
. . . . . . .
. . . . . . .
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
//----------------------------------------
I used gluLookAt in a Mac sample, but it is not available on iPad. What is the basic mistake in the above code? Why is the square not visible?
If I change the z value in the vertex array, it works fine:
static const GLfloat squareVertices[] = {
    -0.5f, -0.33f, 0.5,
     0.5f, -0.33f, 0.5,
    -0.5f,  0.33f, 0.5,
     0.5f,  0.33f, 0.5
};
I thought the z value range should be 0-20, and 1.6 is in that range. I don't want to change the vertex values. Help me set up glFrustumf.
The near and far values passed to glFrustum are not absolute coordinates in object space. They are the distances from the camera to the near and far planes. That means that you can't say that a z coordinate of 1.6 is necessarily in your frustum unless you know where the camera position is and what direction it is pointing in.
For example if your camera is at (0,0,-19) pointed at the origin and you pass 0 and 20 to glFrustum for near and far then in object space the near plane is at -19 (-19+0) and the far plane is at 1 (-19+20). In such a case the square with z=1.6 would be past the far plane but the one with z=0.5 would be inside the frustum. Similarly if your camera was at (0,0,1) with the same values for glFrustum then in object space the near plane would be at 1 and the far would be at -19 so 0.5 would be in and 1.6 would be out.
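As a hedged sketch of one way to make the original vertices visible (note that glFrustumf strictly requires a positive near value, and with an identity modelview the camera sits at the origin looking down -z, so visible geometry needs negative z in eye space):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustumf(-1, 1, -1, 1, 1, 20);   // near = 1 (must be > 0), far = 20
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

static const GLfloat squareVertices[] = {
    -0.5f, -0.33f, -1.6f,          // negative z: in front of the camera,
     0.5f, -0.33f, -1.6f,          // between the near (z = -1) and
    -0.5f,  0.33f, -1.6f,          // far (z = -20) planes
     0.5f,  0.33f, -1.6f
};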
