OpenGL ES 1: Scaling without changing lighting on object? (iOS)

I am implementing a 3D .obj viewer-like app, and when I scale my object (in OpenGL ES 1.x) it becomes lighter (when I scale down) and darker (when I scale up).
Is there a way to prevent this change in lighting, i.e. keep the brightness uniform throughout?
I guess I have to do something to the lighting normals?
My Render method is as below:
void RenderingEngine::Render(const vector<Visual>& visuals, ivec2 screenSize) const
{
    glClearColor(0.5f, 0.5f, 0.5f, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    vector<Visual>::const_iterator visual = visuals.begin();
    for (int visualIndex = 0; visual != visuals.end(); ++visual, ++visualIndex) {

        // Set the viewport transform.
        ivec2 size = visual->ViewportSize;
        ivec2 lowerLeft = visual->LowerLeft;
        glViewport(lowerLeft.x, lowerLeft.y, size.x, size.y);

        // Set the light position.
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        vec4 lightPosition(0.25, 0.25, 1, 0);
        glLightfv(GL_LIGHT0, GL_POSITION, lightPosition.Pointer());

        // Set the model-view transform.
        mat4 rotation = visual->Orientation.ToMatrix();
        mat4 translation = visual->Translate;
        mat4 scale;
        scale = scale.Scale(visual->Scale);
        rotation = rotation * scale;
        //mat4 modelview = rotation * m_translation;
        mat4 modelview = rotation * m_translation * translation;
        glLoadMatrixf(modelview.Pointer());

        // Set the projection transform.
        float h = 4.0f * size.y / size.x;
        mat4 projection = mat4::Frustum(-2, 2, -h / 2, h / 2, 5, 50);
        glMatrixMode(GL_PROJECTION);
        glLoadMatrixf(projection.Pointer());

        // Set the diffuse color.
        vec3 color = visual->Color * 0.75f;
        vec4 diffuse(color.x, color.y, color.z, 1);
        glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse.Pointer());

        // Draw the surface.
        int stride = 2 * sizeof(vec3);
        const GLvoid* normalOffset = (const GLvoid*) sizeof(vec3);
        const Drawable& drawable = m_drawables[visualIndex];
        glBindBuffer(GL_ARRAY_BUFFER, drawable.VertexBuffer);
        glVertexPointer(3, GL_FLOAT, stride, 0);
        glNormalPointer(GL_FLOAT, stride, normalOffset);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, drawable.IndexBuffer);
        glDrawElements(GL_TRIANGLES, drawable.IndexCount, GL_UNSIGNED_SHORT, 0);
    }
}

First off, apply your rotation and scale matrices BEFORE your translation matrix (or matrices). So, instead of this:
rotation = rotation * scale;
//mat4 modelview = rotation * m_translation;
mat4 modelview = rotation * m_translation * translation;
Do this:
rotation = rotation * scale;
//mat4 modelview = rotation * m_translation;
mat4 modelview = m_translation * translation * rotation;
Secondly, why do you have two translation matrices? You should need only one matrix to move your object vertices from object space into world space.
Third, why are you creating a normal matrix? Unless you are modifying normals for a sheared object after rotation, I don't really see any reason to do so. You can rotate your object's normals using the same rotation matrix you use for its vertices.
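For context: fixed-function OpenGL transforms normals by the inverse transpose of the upper-left 3x3 of the model-view matrix, so even a uniform scale changes the length of the transformed normals:

n' = (M^{-1})^{\mathsf{T}}\, n, \qquad M = R\,(s\,I) \;\Rightarrow\; (M^{-1})^{\mathsf{T}} = \tfrac{1}{s}\, R, \qquad \lVert n' \rVert = \tfrac{1}{s}\,\lVert n \rVert

Scaling the object up (s > 1) therefore shortens the normals and dims the diffuse term n' . l, which matches the darkening described in the question and is exactly what GL_NORMALIZE in the next answer corrects.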

This solution works for me, so I'll share it for any newcomers to the subject who run into the same problem.
Use:
glEnable(GL_NORMALIZE);
to always normalize the surface normals to unit length.
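For illustration, a minimal sketch of where this goes in an ES 1.x setup (the GL_RESCALE_NORMAL alternative is only correct for uniform scales):

// During GL state setup, or at the top of Render(), with lighting on:
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);

// Renormalize every transformed normal to unit length.
glEnable(GL_NORMALIZE);

// Cheaper alternative when the scale is always uniform:
// glEnable(GL_RESCALE_NORMAL);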

Related

WebGL: Is there a way to load dynamic buffers in fragment shaders?

I have a fragment shader that can draw an arc based on a set of parameters. The idea was to make the shader resolution independent, so I pass the center of the arc and the bounding radii as pixel values on the screen. You can then just render the shader by setting your vertex positions in the shape of a square. This is the shader:
precision mediump float;

#define PI     3.14159265359
#define _2_PI  6.28318530718
#define PI_2   1.57079632679

// inputs
vec2 center = u_resolution / 2.;
vec2 R = vec2( 100., 80. );
float ang1 = 1.0 * PI;
float ang2 = 0.8 * PI;
vec3 color = vec3( 0., 1.0, 0. );

// prog vars
uniform vec2 u_resolution;
float smOOth = 1.3;
vec3 bkgd = vec3( 0.0 );  // will be a sampler

void main () {
    // get the dist from the current pixel to the coord.
    float r = distance( gl_FragCoord.xy, center );

    if ( r < R.x && r > R.y ) {
        // If we are in the radius, do some trig to find the angle and normalize
        // to
        float theta = -( atan( gl_FragCoord.y - center.y,
                               center.x - gl_FragCoord.x ) ) + PI;

        // This is to make sure the angles are clipped at 2 pi, but if you pass
        // the values already clipped, then you can safely delete this and make
        // the code more efficient.
        ang1 = mod( ang1, _2_PI );
        ang2 = mod( ang2, _2_PI );

        float angSum = ang1 + ang2;

        bool thetaCond;
        vec2 thBound;   // short for theta bounds: used to calculate smoothing
                        // at the edges of the circle.
        if ( angSum > _2_PI ) {
            thBound = vec2( ang2, angSum - _2_PI );
            thetaCond = ( theta > ang2 && theta < _2_PI ) ||
                        ( theta < thBound.y );
        } else {
            thBound = vec2( ang2, angSum );
            thetaCond = theta > ang2 && theta < angSum;
        }

        if ( thetaCond ) {
            float angOpMult = 10000. / ( R.x - R.y ) / smOOth;
            float opacity = smoothstep( 0.0, 1.0, ( R.x - r ) / smOOth ) -
                            smoothstep( 1.0, 0.0, ( r - R.y ) / smOOth ) -
                            smoothstep( 1.0, 0.0, ( theta - thBound.x ) * angOpMult ) -
                            smoothstep( 1.0, 0.0, ( thBound.y - theta ) * angOpMult );
            gl_FragColor = vec4( mix( bkgd, color, opacity ), 1.0 );
        } else
            discard;
    } else
        discard;
}
I figured this way of drawing a circle would yield better quality circles and be less hassle than loading a bunch of vertices and drawing triangle fans, even though it probably isn't as efficient. This works fine, but I don't just want to draw one fixed circle. I want to draw any circle I would want on the screen. So I had an idea to set the 'inputs' to varyings and pass a buffer with parameters to each of the vertices of a given bounding square. So my vertex shader looks like this:
attribute vec2 a_square;
attribute vec2 a_center;
attribute vec2 a_R;
attribute float a_ang1;
attribute float a_ang2;
attribute vec3 a_color;
varying vec2 center;
varying vec2 R;
varying float ang1;
varying float ang2;
varying vec3 color;
void main () {
    gl_Position = vec4( a_square, 0.0, 1.0 );
    center = a_center;
    R = a_R;
    ang1 = a_ang1;
    ang2 = a_ang2;
    color = a_color;
}
'a_square' is just the vertex for the bounding square that the circle would sit in.
Next, I define a buffer for the inputs for one test circle (in JS). One of the problems with doing it this way is that the circle parameters have to be repeated for each vertex, and for a box, this means four times. 'pw' and 'ph' are the width and height of the canvas, respectively.
var circleData = new Float32Array( [
    pw / 2, ph / 2,
    440, 280,
    Math.PI * 1.2, Math.PI * 0.2,
    1000, 0, 0,

    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
] );
Then I simply load my data into a gl buffer (circleBuffer) and bind the appropriate attributes to it.
gl.bindBuffer( gl.ARRAY_BUFFER, bkgd.circleBuffer );
gl.vertexAttribPointer( bkgd.aCenter, 2, gl.FLOAT, false, 0 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aCenter );
gl.vertexAttribPointer( bkgd.aR, 2, gl.FLOAT, false, 2 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aR );
gl.vertexAttribPointer( bkgd.aAng1, 1, gl.FLOAT, false, 4 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aAng1 );
gl.vertexAttribPointer( bkgd.aAng2, 1, gl.FLOAT, false, 5 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aAng2 );
gl.vertexAttribPointer( bkgd.aColor, 3, gl.FLOAT, false, 6 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aColor );
When I load my page, I do see a circle, but the radii seem to be the only attributes that actually respond to the values I pass in. The angles, center, and color do not reflect the values they are supposed to, and I have absolutely no idea why the radii are the only things that work.
Nonetheless, this seems to be an inefficient way to load arguments into a fragment shader to draw a circle, as I have to reload the values for every vertex of the box, and then the GPU interpolates those values for no reason. Is there a better way to pass something like an attribute buffer to a fragment shader, or in general to use a fragment shader in this way? Or should I just use vertices to draw my circle instead?
If you're only drawing circles you can use instanced drawing to not repeat the info.
See this Q&A: what does instancing do in webgl
Or this article
Instancing lets you use some data per instance, as in per circle.
You can also use a texture to store the per circle data or all data. See this Q&A: How to do batching without UBOs?
Whether either is more or less efficient depends on the GPU/driver/OS/browser. If you need to draw thousands of circles this might be efficient. Most apps draw a variety of things, so they would choose a more generic solution unless they had a special need to draw thousands of circles.
Also, it may not be efficient because you're still running the fragment shader for every pixel that is inside the square but outside the circle. That's roughly 30% more fragment shader invocations than using triangles, and that assumes your code draws quads that fit the circles. At a glance, your actual code appears to draw full-canvas quads, which is terribly inefficient.
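A sketch of that per-instance setup, written against the OpenGL ES 3.0 / WebGL2 entry points for brevity (in WebGL 1 the same calls exist on the ANGLE_instanced_arrays extension as vertexAttribDivisorANGLE / drawArraysInstancedANGLE); the buffer handles, attribute locations, and numCircles are placeholders:

// Quad corners advance per vertex (divisor 0, the default).
glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
glVertexAttribPointer(aSquare, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(aSquare);

// Per-circle data, stored once per circle:
// center(2) + R(2) + ang1(1) + ang2(1) + color(3) = 9 floats.
const GLsizei stride = 9 * sizeof(GLfloat);
glBindBuffer(GL_ARRAY_BUFFER, circleVbo);
glVertexAttribPointer(aCenter, 2, GL_FLOAT, GL_FALSE, stride, (void*)(0 * sizeof(GLfloat)));
glVertexAttribPointer(aR,      2, GL_FLOAT, GL_FALSE, stride, (void*)(2 * sizeof(GLfloat)));
glVertexAttribPointer(aAng1,   1, GL_FLOAT, GL_FALSE, stride, (void*)(4 * sizeof(GLfloat)));
glVertexAttribPointer(aAng2,   1, GL_FLOAT, GL_FALSE, stride, (void*)(5 * sizeof(GLfloat)));
glVertexAttribPointer(aColor,  3, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(GLfloat)));
for (GLuint attrib : {aCenter, aR, aAng1, aAng2, aColor}) {
    glEnableVertexAttribArray(attrib);
    glVertexAttribDivisor(attrib, 1);   // advance once per instance (per circle), not per vertex
}

// One draw call: 4 quad vertices, numCircles instances.
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, numCircles);

With this layout the circleData array from the question needs only one 9-float record per circle instead of one per quad vertex.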

WebGL: Scaling affects normal matrix?

I'm playing around with WebGL; I scripted a simple flat-shaded cube.
I have a shader which takes a projection matrix, a model-view matrix and a normal matrix, nothing fancy:
(...)
void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vTextureCoord = aTextureCoord;

    vec3 transformedNormal = uNMatrix * aVertexNormal;
    float directionalLightWeighting = max(dot(transformedNormal, uLightingDirection), 0.0);
    vLightWeighting = uAmbientColor + uDirectionalColor * directionalLightWeighting;
}
Everything is fine and the flat shading looks good, but as soon as I resize the cube (the mat4.scale noted below), the shading no longer affects the scene. If I scale down the computed normal matrix by the inverse factor, it works again.
The code follows the following schema (drawing pseudo routine):
projection = mat4.ortho
// set up general camera view
view = mat4.lookAt
// set up cube position / scaling / rotation on view matrix
mat4.translate(view)
mat4.scale(view) // remove for nice shading ..
mat4.rotate(view)
// normalFromMat4 returns upper-left 3x3 inverse transpose
normal = mat4.normalFromMat4 ( view )
pass projection, view, normal to shader
gl.drawElements
I am using gl-matrix as math library.
Any ideas where my mistake lies?

Centering all of the points in iOS OpenGL ES app

I have an OpenGL view that displays a set of 3D points with some basic shaders:
// Fragment Shader
static const char* PointFS = STRINGIFY
(
    void main(void)
    {
        gl_FragColor = vec4(0.8, 0.8, 0.8, 1.0);
    }
);

// Vertex Shader
static const char* PointVS = STRINGIFY
(
    uniform mediump mat4 uProjectionMatrix;
    attribute mediump vec4 position;

    void main( void )
    {
        gl_Position = uProjectionMatrix * position;
        gl_PointSize = 3.0;
    }
);
And the MVP matrix is calculated as:
- (void)setMatrices
{
    // ModelView Matrix
    GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;
    modelViewMatrix = GLKMatrix4Scale(modelViewMatrix, 2, 2, 2);

    // Projection Matrix
    const GLfloat aspectRatio = (GLfloat)(self.view.bounds.size.width) / (GLfloat)(self.view.bounds.size.height);
    const GLfloat fieldView = GLKMathDegreesToRadians(90.0f);
    const GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(fieldView, aspectRatio, 0.1f, 10.0f);

    glUniformMatrix4fv(self.pointShader.uProjectionMatrix, 1, 0, GLKMatrix4Multiply(projectionMatrix, modelViewMatrix).m);
}
This works fine, but I have a set of 500 points and I see only a few.
How do I scale/translate the MVP matrix to display all of them (they are a dynamic set)? Ideally the "centroid" should be at the origin, and all of the points visible. It should be able to adapt to rotations of the view (gestures are the next step I want to implement).
Seeing how you present this, you might need quite a lot... I guess the best approach might be using "look at": the point you are looking at is (0,0,0) as you stated, the camera position should probably be (0,0,Z), and up is (0,1,0). So the only issue here is the Z component of the camera position.
If you start Z at, for instance, -.1 and then iterate through all the points, then sin(fieldView*.5f) * (p.z-Z) >= point.y must hold for the point to be visible. So you can compute Z1 = p.z-(point.y/sin(fieldView*.5f)), and if Z1 < Z then Z = Z1. This covers only the positive-Y check; you also need the same for negative Y and for +-X. The equations are very similar, though when checking X you could also take the screen ratio into account.
This procedure should give you the smallest field possible that shows all the points (within the given limitations, such as looking towards (0,0,0)), but it is far from the simplest. You also need to consider whether the equation still works when p.z < -Z.
Another, somewhat easier approach is to generate the smallest cube around the centre which holds all the points: iterate through the points and take the coordinate with the largest absolute value (any of X, Y or Z). When you have it, use it with a frustum instead of a perspective matrix, so that all the rect parameters (top, bottom, left and right) are generated with this value as +-largest. Then you need to compute the translation, which for a 90-degree field of view is Z = (largest*.5); Z is the zNear for the frustum, and you then also translate the matrix by -(Z+largest). Again, one of the frustum coordinates must be multiplied by the screen ratio.
In any case, do watch what your zFar is; having it at only 10.0f might be a bit too short in your case. Until you need the depth buffer you should not worry about that value being too large.
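A minimal sketch of the second (bounding-cube) approach using GLKit's helpers; the point array, point count, and aspect ratio are assumed inputs, and the function name is made up:

#include <GLKit/GLKMath.h>
#include <math.h>

// points: flat array of XYZ triples, pointCount of them.
GLKMatrix4 MatrixFittingPoints(const float *points, int pointCount, float aspectRatio)
{
    // Largest absolute coordinate: a cube of half-size `largest` centred on
    // the origin then contains every point.
    float largest = 0.0f;
    for (int i = 0; i < pointCount * 3; i++)
        largest = fmaxf(largest, fabsf(points[i]));

    // Per the answer: zNear at half the cube size, frustum rectangle at +-largest
    // (one axis stretched by the aspect ratio), cloud pushed back by zNear + largest.
    const float zNear = largest * 0.5f;
    const float zFar  = zNear + 4.0f * largest;   // generous far plane

    GLKMatrix4 projection = GLKMatrix4MakeFrustum(-largest * aspectRatio, largest * aspectRatio,
                                                  -largest, largest, zNear, zFar);
    GLKMatrix4 modelView  = GLKMatrix4MakeTranslation(0.0f, 0.0f, -(zNear + largest));
    return GLKMatrix4Multiply(projection, modelView);
}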

OpenGL ES 2.0: Why does this perspective projection matrix not give the right result?

About 2 days ago I decided to write code to explicitly calculate the Model-View-Projection ("MVP") matrix to understand how it worked. Since then I've had nothing but trouble, seemingly because of the projection matrix I'm using.
Working with an iPhone display, I create a screen centered square described by these 4 corner vertices:
const CGFloat cx = screenWidth/2.0f;
const CGFloat cy = screenHeight/2.0f;
const CGFloat z = -1.0f;
const CGFloat dim = 50.0f;
vxData[0] = cx-dim;
vxData[1] = cy-dim;
vxData[2] = z;
vxData[3] = cx-dim;
vxData[4] = cy+dim;
vxData[5] = z;
vxData[6] = cx+dim;
vxData[7] = cy+dim;
vxData[8] = z;
vxData[9] = cx+dim;
vxData[10] = cy-dim;
vxData[11] = z;
Since I am using OGLES 2.0 I pass the MVP as a uniform to my vertex shader, then simply apply the transformation to the current vertex position:
uniform mat4 mvp;
attribute vec3 vpos;
void main()
{
    gl_Position = mvp * vec4(vpos, 1.0);
}
For now I have simplified my MVP to just be the P matrix. There are two projection matrices listed in the code shown below. The first is the standard perspective projection matrix, and the second is an explicit-value projection matrix I found online.
CGRect screenBounds = [[UIScreen mainScreen] bounds];
const CGFloat screenWidth = screenBounds.size.width;
const CGFloat screenHeight = screenBounds.size.height;
const GLfloat n = 0.01f;
const GLfloat f = 100.0f;
const GLfloat fov = 60.0f * 2.0f * M_PI / 360.0f;
const GLfloat a = screenWidth/screenHeight;
const GLfloat d = 1.0f / tanf(fov/2.0f);
// Standard perspective projection.
GLKMatrix4 projectionMx = GLKMatrix4Make(d/a, 0.0f, 0.0f, 0.0f,
0.0f, d, 0.0f, 0.0f,
0.0f, 0.0f, (n+f)/(n-f), -1.0f,
0.0f, 0.0f, (2*n*f)/(n-f), 0.0f);
// The one I found online.
GLKMatrix4 projectionMx = GLKMatrix4Make(2.0f/screenWidth,0.0f,0.0f,0.0f,
0.0f,2.0f/-screenHeight,0.0f,0.0f,
0.0f,0.0f,1.0f,0.0f,
-1.0f,1.0f,0.0f,1.0f);
When using the explicit value matrix, the square renders exactly as desired in the centre of the screen with correct dimension. When using the perspective projection matrix, nothing is displayed on-screen. I've done printouts of the position values generated for screen centre (screenWidth/2, screenHeight/2, 0) by the perspective projection matrix and they're enormous. The explicit value matrix correctly produces zero.
I think the explicit value matrix is an orthographic projection matrix - is that right? My frustration is that I can't work out why my perspective projection matrix fails to work.
I'd be tremendously grateful if someone could help me with this problem. Many thanks.
UPDATE For Christian Rau:
#define Zn 0.0f
#define Zf 100.0f
#define PRIMITIVE_Z 1.0f
//...
CGRect screenBounds = [[UIScreen mainScreen] bounds];
const CGFloat screenWidth = screenBounds.size.width;
const CGFloat screenHeight = screenBounds.size.height;
//...
glUseProgram(program);
//...
glViewport(0.0f, 0.0f, screenBounds.size.width, screenBounds.size.height);
//...
const CGFloat cx = screenWidth/2.0f;
const CGFloat cy = screenHeight/2.0f;
const CGFloat z = PRIMITIVE_Z;
const CGFloat dim = 50.0f;
vxData[0] = cx-dim;
vxData[1] = cy-dim;
vxData[2] = z;
vxData[3] = cx-dim;
vxData[4] = cy+dim;
vxData[5] = z;
vxData[6] = cx+dim;
vxData[7] = cy+dim;
vxData[8] = z;
vxData[9] = cx+dim;
vxData[10] = cy-dim;
vxData[11] = z;
//...
const GLfloat n = Zn;
const GLfloat f = Zf;
const GLfloat fov = 60.0f * 2.0f * M_PI / 360.0f;
const GLfloat a = screenWidth/screenHeight;
const GLfloat d = 1.0f / tanf(fov/2.0f);
GLKMatrix4 projectionMx = GLKMatrix4Make(d/a, 0.0f, 0.0f, 0.0f,
0.0f, d, 0.0f, 0.0f,
0.0f, 0.0f, (n+f)/(n-f), -1.0f,
0.0f, 0.0f, (2*n*f)/(n-f), 0.0f);
//...
// ** Here is the matrix you recommended, Christian:
GLKMatrix4 ts = GLKMatrix4Make(2.0f/screenWidth, 0.0f, 0.0f, -1.0f,
0.0f, 2.0f/screenHeight, 0.0f, -1.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f);
GLKMatrix4 mvp = GLKMatrix4Multiply(projectionMx, ts);
UPDATE 2
The new MVP code:
GLKMatrix4 ts = GLKMatrix4Make(2.0f/screenWidth, 0.0f, 0.0f, -1.0f,
0.0f, 2.0f/-screenHeight, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f);
// Using Apple perspective, view matrix generators
// (I can solve bugs in my own implementation later..!)
GLKMatrix4 _p = GLKMatrix4MakePerspective(60.0f * 2.0f * M_PI / 360.0f,
screenWidth / screenHeight,
Zn, Zf);
GLKMatrix4 _mv = GLKMatrix4MakeLookAt(0.0f, 0.0f, 1.0f,
0.0f, 0.0f, -1.0f,
0.0f, 1.0f, 0.0f);
GLKMatrix4 _mvp = GLKMatrix4Multiply(_p, _mv);
GLKMatrix4 mvp = GLKMatrix4Multiply(_mvp, ts);
Still nothing visible at the screen centre, and the transformed x,y coordinates of the screen centre are not zero.
UPDATE 3
Using the transpose of ts instead in the above code works! But the square no longer appears square; it appears to now have aspect ratio screenHeight/screenWidth i.e. it has a longer dimension parallel to the (short) screen width, and a shorter dimension parallel to the (long) screen height.
I'd very much like to know (a) why the transpose is required and whether it is a valid fix, (b) how to correctly rectify the non-square dimension, and (c) how this additional matrix transpose(ts) that we use fits into the transformation chain of Viewport * Projection * View * Model * Point .
For (c): I understand what the matrix does, i.e. the explanation by Christian Rau as to how we transform to range [-1, 1]. But is it correct to include this additional work as a separate transformation matrix, or should some part of our MVP chain be doing this work instead?
Sincere thanks go to Christian Rau for his valuable contribution thus far.
UPDATE 4
My question about "how ts fits in" is silly isn't it - the whole point is the matrix is only needed because I'm choosing to use screen coordinates for my vertices; if I were to use coordinates in world space from the start then this work wouldn't be needed!
Thanks Christian for all your help, it's been invaluable :) Problem solved.
The reason for this is that your first projection matrix doesn't account for the scaling and translation part of the transformation, whereas the second matrix does.
So, since your modelview matrix is the identity, the first projection matrix assumes the model's coordinates to lie somewhere in [-1,1], whereas the second matrix already contains the scaling and translation part (look at the screenWidth/Height values in there) and therefore assumes the coordinates to lie in [0,screenWidth] x [0,screenHeight].
So you have to right-multiply your projection matrix by a matrix that first scales [0,screenWidth] down to [0,2] and [0,screenHeight] down to [0,2] and then translates [0,2] into [-1,1] (using w for screenWidth and h for screenHeight):
[ 2/w 0 0 -1 ]
[ 0 2/h 0 -1 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]
which will result in the matrix
[ 2*d/h 0 0 -d/a ]
[ 0 2*d/h 0 -d ]
[ 0 0 (n+f)/(n-f) 2*n*f/(n-f) ]
[ 0 0 -1 0 ]
So you see that your second matrix corresponds to a fov of 90 degrees, an aspect ratio of 1:1 and a near-far range of [-1,1]. Additionally it also inverts the y-axis, so that the origin is in the upper-left, which results in the second row being negated:
[ 0 -2*d/h 0 d ]
But as a closing comment, I suggest you not configure the projection matrix to account for all this. Instead, your projection matrix should look like the first one, and you should let the modelview matrix manage any translation or scaling of your world. It is not by accident that the transformation pipeline was separated into modelview and projection matrices, and you should keep this separation when using shaders, too. You can of course still multiply both matrices together on the CPU and upload a single MVP matrix to the shader.
And in general you don't really use a screen-based coordinate system when working with a 3-dimensional world. You would only want to do this if you are drawing 2D graphics (like GUI elements or HUDs), and in this case you would use a simpler orthographic projection matrix anyway, which is nothing more than the above-mentioned scale-translate matrix without all the perspective complexity.
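A sketch of that recommended split, using the GLKit helpers the question already relies on (the placement and scale values are placeholders): keep the projection purely perspective and do any positioning and sizing in the modelview.

// Projection: perspective only, like the first matrix in the question.
GLKMatrix4 projection = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(60.0f),
                                                  screenWidth / screenHeight,
                                                  0.01f, 100.0f);

// Modelview: place and size the square in world units rather than pixels.
GLKMatrix4 modelview = GLKMatrix4MakeTranslation(0.0f, 0.0f, -2.0f);  // in front of the camera
modelview = GLKMatrix4Scale(modelview, 0.5f, 0.5f, 0.5f);             // world-space size

// Multiply on the CPU and upload a single MVP, as suggested above.
GLKMatrix4 mvp = GLKMatrix4Multiply(projection, modelview);
glUniformMatrix4fv(mvpUniform, 1, GL_FALSE, mvp.m);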
EDIT: To your 3rd update:
(a) The transpose is required because I guess your GLKMatrix4Make function accepts its parameters in column-major order and you entered the matrix row-wise.
(b) I made a little mistake. You should change the screenWidth in the ts matrix into screenHeight (or maybe the other way around, not sure). We actually need a uniform scale, because the aspect ratio is already taken care of by the projection matrix.
(c) It is not easy to classify this matrix into the usual MVP pipeline. This is because it is not really common. Let's look at the two common cases of rendering:
3D: When you have a 3-dimensional world, it is not really common to define its coordinates in screen-based units, because there is not yet a mapping from the 3D scene to the 2D screen, and using a coordinate system where units equal pixels just doesn't make sense. In this setup you would most likely classify it as part of the modelview matrix, transforming the world into another unit system. But in that case you would need real 3D transformations, not just such a half-baked 2D solution.
2D: When rendering a 2D scene (like a GUI, a HUD, or just some text), you sometimes really want a screen-based coordinate system. But in this case you would most likely use an orthographic projection (without any perspective). Such an orthographic matrix is actually nothing more than this ts matrix (with an additional scale-translate for z, based on the near-far range). So in this case the matrix belongs to, or actually is, the projection matrix. Just look at how the good old glOrtho function constructs its matrix and you'll see it's nothing more than ts.

How to achieve glOrthof in OpenGL ES 2.0

I am trying to convert my OpenGL ES 1 application to an OpenGL ES 2 application so I can use shaders. Right now I use the glOrthof function to get a "real sized viewport", so I can place the vertices at the "actual" pixels in the OpenGL view.
glOrthof(0, _frame.size.width, _frame.size.height, 0, -1, 1);
I am having trouble finding out how to achieve this in OpenGL ES 2, is there anyone who can show me how to do this?
If not, does anyone have a link to a good OpenGL ES 1 to OpenGL ES 2 tutorial/explanation?
The glOrtho function does nothing more than create a new matrix and multiply the current projection matrix by it. With OpenGL ES 2.0 you have to manage matrices yourself. In order to replicate the glOrtho behaviour, you need a uniform for the projection matrix in your vertex shader, which you then multiply your vertices by. Usually you also have a model and a view matrix (or a combined modelview matrix, as in OpenGL ES 1), which you transform your vertices with before the projection transform:
uniform mat4 projection;
uniform mat4 modelview;
attribute vec4 vertex;

void main()
{
    gl_Position = projection * (modelview * vertex);
}
The specific projection matrix that glOrtho constructs can be found here.
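For reference, glOrtho(l, r, b, t, n, f) constructs the matrix

[ 2/(r-l)   0          0         -(r+l)/(r-l) ]
[ 0         2/(t-b)    0         -(t+b)/(t-b) ]
[ 0         0         -2/(f-n)   -(f+n)/(f-n) ]
[ 0         0          0          1           ]

i.e. a scale-translate that maps the left/right, bottom/top and near/far ranges into [-1,1].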
As Christian describes, all of the matrix math for processing your vertices is up to you, so you have to replicate the matrix that glOrthof() creates. In my answer here, I provided the following Objective-C method for generating such an orthographic projection matrix:
- (void)loadOrthoMatrix:(GLfloat *)matrix left:(GLfloat)left right:(GLfloat)right bottom:(GLfloat)bottom top:(GLfloat)top near:(GLfloat)near far:(GLfloat)far
{
    GLfloat r_l = right - left;
    GLfloat t_b = top - bottom;
    GLfloat f_n = far - near;
    GLfloat tx = - (right + left) / (right - left);
    GLfloat ty = - (top + bottom) / (top - bottom);
    GLfloat tz = - (far + near) / (far - near);

    matrix[0] = 2.0f / r_l;
    matrix[1] = 0.0f;
    matrix[2] = 0.0f;
    matrix[3] = tx;

    matrix[4] = 0.0f;
    matrix[5] = 2.0f / t_b;
    matrix[6] = 0.0f;
    matrix[7] = ty;

    matrix[8] = 0.0f;
    matrix[9] = 0.0f;
    matrix[10] = 2.0f / f_n;
    matrix[11] = tz;

    matrix[12] = 0.0f;
    matrix[13] = 0.0f;
    matrix[14] = 0.0f;
    matrix[15] = 1.0f;
}
The matrix used here is defined as
GLfloat orthographicMatrix[16];
I then apply the matrix within my vertex shader using something like the following:
gl_Position = modelViewProjMatrix * position * orthographicMatrix;
My multiplication order differs from Christian's, so I may be doing something a little backward here, but this is what I've used to handle this within an OpenGL ES 2.0 application of mine (the source code of which can be found here).
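On iOS, GLKit can also build this matrix directly, which makes a handy cross-check against a hand-rolled version; a small sketch mirroring the glOrthof call from the question (frameWidth, frameHeight, and the uniform location are placeholders):

#include <GLKit/GLKMath.h>

// Equivalent of glOrthof(0, width, height, 0, -1, 1): one unit per pixel,
// origin at the top-left corner.
GLKMatrix4 projection = GLKMatrix4MakeOrtho(0.0f, frameWidth,
                                            frameHeight, 0.0f,
                                            -1.0f, 1.0f);

glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, projection.m);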
