I want to draw an object in my AR scene but get an unexpected result: OpenGL seems to think I'm seeing the object from the other side (or from inside).
Here is an image of what I want to draw (taken from a separate project).
And here is what I get when I try to draw this object in my AR scene (inside the sphere).
So I guess the problem is that I put the object inside the sphere and adjust the object's position using the base matrix from the sphere object.
The camera is positioned at the center of the sphere, so for this object I use the same matrix and just scale/rotate/translate it.
This is how I calculate the projection matrix:
CGRect viewFrame = self.frame;
if (!CGSizeEqualToSize(self.newSize, CGSizeZero)) {
viewFrame.size = self.newSize;
}
CGFloat aspect = viewFrame.size.width / viewFrame.size.height;
CGFloat scale = self.interractor.scale;
CGFloat FOVY = DEGREES_TO_RADIANS(self.viewScale) / scale;
CGFloat cameraDistance = -(1.0 / [Utilities FarZ]);
GLKMatrix4 cameraTranslation = GLKMatrix4MakeTranslation(0, 0, cameraDistance);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(FOVY, aspect, NearZ, [Utilities FarZ]);
projectionMatrix = GLKMatrix4Multiply(projectionMatrix, cameraTranslation);
// some additional modification code follows here; omitted
For this object I just calculate a new scale and position; that part seems correct because I can see the object and change its position, so I'll skip it.
In the second project, where the object displays correctly, I calculate the projection matrix in a similar way but with a bit less calculation:
float aspect = self.glView.frame.size.width / self.glView.frame.size.height;
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.01f, 100);
//scale
//rotate
//translate
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f);
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, projectionMatrix);
GLfloat scale = 0.5 *_scale;
GLKMatrix4 scaleMatrix = GLKMatrix4MakeScale(scale, scale, scale);
modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, _positionX, _positionY, -5);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotationX, 0.0f, 1.0f, 0.0f);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotationY, 1.0f, 0.0f, 0.0f);
modelViewMatrix = GLKMatrix4Multiply(scaleMatrix, modelViewMatrix);
In the first project (the correct one) I also use
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDisable(GL_CULL_FACE);
In the second, with a few objects, the state depends on the object I want to draw:
glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_SCISSOR_TEST);
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
//call sphere draw
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
//call obj draw
glEnable(GL_SCISSOR_TEST);
So as I said before, my guess is that OpenGL "thinks" we are looking at the object from the other side, but I'm not sure. If I'm right, how can I fix this? Or what is done incorrectly?
Update
@codetiger I checked your suggestions:
1) Wrong face winding order - I rechecked it and tried inverting the order, and also built the same model in another project (where everything works perfectly); so I guess the order is OK.
2) Wrong culling - I checked all combinations of
glDisable / glEnable with argument GL_CULL_FACE
glCullFace with argument GL_FRONT, GL_BACK or GL_FRONT_AND_BACK
glFrontFace with argument GL_CW or GL_CCW
What I see changes a little, but the object is still incorrect (wrong side, partial object, etc.).
3) Flipped vertices - I tried flipping them; the result was even worse than before.
4) I tried combining these three suggestions with one another - the results were not acceptable.
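As a side note on point 1: winding can be checked numerically rather than by eye. This small helper (purely illustrative, not part of either project) classifies a triangle's screen-space winding from its signed area, which is ultimately what glFrontFace(GL_CCW) versus GL_CW distinguish:

```c
#include <assert.h>

/* Signed area of a triangle's 2D (screen-space) projection.  With a
 * conventional OpenGL setup (origin bottom-left), a positive signed
 * area means the vertices run counter-clockwise, which
 * glFrontFace(GL_CCW) treats as front-facing. */
static float signed_area(float x0, float y0,
                         float x1, float y1,
                         float x2, float y2)
{
    return 0.5f * ((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0));
}

static int is_ccw(float x0, float y0,
                  float x1, float y1,
                  float x2, float y2)
{
    return signed_area(x0, y0, x1, y1, x2, y2) > 0.0f;
}
```

Running a few representative triangles from the model through a check like this removes the guesswork about which winding the exporter actually produced.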
Related
I'm writing code that has to transform a number of vertex coordinates to clip/projection space on the CPU (to integrate with an existing application; shaders are not an option, as the application provides its own shaders at a later point). My test case intends the result to be a basic 'camera-style' transformation plus projection, looking down a camera rotated around the X axis.
The test case is as follows:
#include <windows.h>
#include <xnamath.h>
#include <stdio.h>
int main()
{
XMVECTOR atPos = XMVectorSet(925.0f, -511.0f, 0.0f, 0.0f);
XMVECTOR eyePos = XMVectorSet(925.0f, -311.0f, 200.0f, 0.0f);
XMVECTOR upDir = XMVectorSet(0.f, 0.f, 1.f, 0.0f);
XMMATRIX viewMatrix = XMMatrixLookAtLH(eyePos, atPos, upDir);
XMMATRIX projectionMatrix = XMMatrixPerspectiveFovLH(XMConvertToRadians(55.f), 1.0f, 0.1f, 5000.0f);
XMMATRIX viewProjMatrix = viewMatrix * projectionMatrix;
XMVECTOR coord1 = XMVectorSet(925.0f, -500.0f, 0.0f, 0.0f);
XMVECTOR coord2 = XMVectorSet(925.0f, 0.0f, 0.0f, 0.0f);
XMVECTOR coord3 = XMVectorSet(925.0f, -1000.0f, 0.0f, 0.0f);
coord1 = XMVector3TransformCoord(coord1, viewProjMatrix);
coord2 = XMVector3TransformCoord(coord2, viewProjMatrix);
coord3 = XMVector3TransformCoord(coord3, viewProjMatrix);
printf(" 0: %g %g %g\n", XMVectorGetX(coord2), XMVectorGetY(coord2), XMVectorGetZ(coord2));
printf(" -500: %g %g %g\n", XMVectorGetX(coord1), XMVectorGetY(coord1), XMVectorGetZ(coord1));
printf("-1000: %g %g %g\n", XMVectorGetX(coord3), XMVectorGetY(coord3), XMVectorGetZ(coord3));
getc(stdin);
}
If I just transform by the view matrix, I notice the Y value correctly moving in correlation with the input Y value of the coordinates; but once I add the projection matrix, the Y value seems to wrap around, making both the coordinate to the 'north' of the view and the coordinate to the 'south' of it extend in the same direction in projection space. This results in the eventual faces, transformed using similar code in the target application, overlapping each other instead of drawing in the correct order.
What am I doing wrong here, and how can I get a valid result for at least this input (i.e. extending in the same manner as the input Y advances, even though these vertices are outside the given projection range, which I doubt should be a problem)?
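One possibility worth ruling out (an assumption on my part, not something the snippet confirms): a perspective matrix stores the view-space depth in w, and XMVector3TransformCoord performs the divide by w. Vertices behind the eye get a negative w, so the divide mirrors their x and y, which produces exactly this kind of wrap-around. A minimal sketch with made-up names:

```c
#include <assert.h>

/* Minimal left-handed perspective sketch (names are mine).  d is the
 * usual 1/tan(fov/2) focal term.  The clip-space w ends up equal to
 * the view-space z, so anything behind the eye (z <= 0) gets w <= 0,
 * and the subsequent perspective divide flips the signs of x and y. */
typedef struct { float x, y, z, w; } Vec4;

static Vec4 perspective_lh(float x, float y, float z,
                           float d, float n, float f)
{
    Vec4 clip;
    clip.x = d * x;
    clip.y = d * y;
    clip.z = z * f / (f - n) - n * f / (f - n);
    clip.w = z;                 /* w carries the view-space depth */
    return clip;
}

/* Post-divide (NDC) y, as XMVector3TransformCoord would compute it. */
static float ndc_y(Vec4 clip) { return clip.y / clip.w; }
```

For the eye/at values above, a quick dot-product check suggests coord2 at (925, 0, 0) actually falls behind the camera, so it would be affected this way.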
XMVECTOR upDir = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
I have a triangle created in DirectX 11. I now want to play around with the view and world matrices to help my understanding of them, so I'd like to simply rotate the triangle around the Z axis. My code attempting to do that is below.
void Render(void)
{
if (d3dContext_ == 0)
return;
XMMATRIX view = XMMatrixIdentity();
XMMATRIX projection = XMMatrixOrthographicOffCenterLH(0.0f, 800.0f, 0.0f, 600.0f, 0.1f, 100.0f);
XMMATRIX vpMatrix_ = XMMatrixMultiply(view, projection);
XMMATRIX translation = XMMatrixTranslation(0.0f, 0.0f, 0.0f);
XMMATRIX rotationZ = XMMatrixRotationZ(30.0f);
XMMATRIX TriangleWorld = translation * rotationZ;
XMMATRIX mvp = TriangleWorld*vpMatrix_;
mvp = XMMatrixTranspose(mvp);
float clearColor[4] = { 0.0f, 0.0f, 0.25f, 1.0f };
d3dContext_->ClearRenderTargetView(backBufferTarget_, clearColor);
unsigned int stride = sizeof(VertexPos);
unsigned int offset = 0;
d3dContext_->IASetInputLayout(inputLayout_);
d3dContext_->IASetVertexBuffers(0, 1, &vertexBuffer_, &stride, &offset);
d3dContext_->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
d3dContext_->VSSetShader(solidColorVS_, 0, 0);
d3dContext_->PSSetShader(solidColorPS_, 0, 0);
d3dContext_->UpdateSubresource(mvpCB_, 0, 0, &mvp, 0, 0);
d3dContext_->VSSetConstantBuffers(0, 1, &mvpCB_);
d3dContext_->Draw(3, 0);
swapChain_->Present(0, 0);
}
It just displays the standard triangle; it's as if it takes no notice of the mvp.
My desired effect is the rotation controlled by XMMATRIX rotationZ = XMMatrixRotationZ(30);.
Thanks
XMMatrixRotationZ takes radians as its parameter, not degrees (see the MSDN description).
To get radians from degrees, you have to multiply by M_PI / 180.0f:
XMMATRIX rotationZ = XMMatrixRotationZ(30 * M_PI / 180.0);
As far as I know from OpenGL, for an animated rotation you must increase the XMMatrixRotationZ value a little each tick, because otherwise you only draw once at the specified angle.
So (if you haven't already) create a loop for your render function and increase the angle value each iteration.
Hope I could help.
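To make the conversion concrete, here is a tiny standalone helper (the name is mine), equivalent to multiplying by M_PI / 180:

```c
#include <assert.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Convert degrees to radians, as required by XMMatrixRotationZ and
 * friends.  30 degrees is roughly 0.5236 radians. */
static float deg_to_rad(float degrees)
{
    return degrees * (float)M_PI / 180.0f;
}
```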
I've got some code working that creates a 3D view in OpenGL and then uses the device motion to look around within it. I know this is working because I can place 3D cubes in space around me and see that they are in the right places (I'm just creating them with x/y/z coordinates).
The code uses the rotation matrix of the device and then applies it to the various blocks.
CMRotationMatrix r = dm.attitude.rotationMatrix;
GLKMatrix4 baseModelViewMatrix = GLKMatrix4Make(r.m11, r.m21, r.m31, 0.0f,
r.m12, r.m22, r.m32, 0.0f,
r.m13, r.m23, r.m33, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f);
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, kNearZ, kFarZ);
block.effect.transform.projectionMatrix = projectionMatrix;
What I want to be able to do is figure out where I'm looking and then create a block out in front of me.
I've had some limited success by creating a vector in one direction, applying the rotation matrix to it, and then reading off its new values. But it only works in some directions - when I rotate too far it messes up.
GLKVector4 vect = GLKVector4Make(0.0f,0.0f,10.0f,1.0f);
GLKVector4 newVec = GLKMatrix4MultiplyVector4(baseModelViewMatrix,vect);
I then read off newVec.x,newVec.y,newVec.z and use them to place the cube.
Can someone tell me if I'm on the right track here? Is there an easier way to achieve this?
The maths of it all is quite daunting.
UPDATE:
I've had some partial success using
GLKVector3 newVec1 = GLKVector3Make(-r.m22, -r.m33, r.m21);
This only works in one landscape orientation, and only in a cylinder around my current location; the up/down axis isn't quite right.
Are these parts of the rotation matrix sufficient to get a point anywhere around me?
UPDATE 2:
Thought it might help to post some more code to make it really clear.
This is how everything is getting displayed.
//1- get device position
CMDeviceMotion *dm = motionManager.deviceMotion;
CMRotationMatrix r = dm.attitude.rotationMatrix;
GLKMatrix4 baseModelViewMatrix = GLKMatrix4Make(r.m11, r.m21, r.m31, 0.0f,
r.m12, r.m22, r.m32, 0.0f,
r.m13, r.m23, r.m33, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(45.0f), aspect, kNearZ, kFarZ);
//2- work out position ahead of current view
GLKVector3 newVector = GLKVector3Make(-r.m22, -r.m33, r.m21);
//place cube in 3d space- this works fine when just positioning with x,y,z co-ordinates
cube.effect.transform.projectionMatrix = projectionMatrix;
GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;
modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, newVector.x*100, newVector.y*100, newVector.z*100);
modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);
cube.effect.transform.modelviewMatrix = modelViewMatrix;
The vector you are looking along should probably be multiplied by the inverted matrix. You didn't write what results you are getting, but here are a few possible solutions:
newVect1 = (r.m13, r.m23, r.m33)
newVect2 = (-r.m13, -r.m23, r.m33)
multiply (0,0,1) with inverted matrix
EDIT: (To add some explanations)
If your baseModelViewMatrix is responsible for rotating all the objects around you, then the vector that faces forward (display-wise) should be rotated by the inverse of baseModelViewMatrix. If it is true that when baseModelViewMatrix is the identity the correct forward-facing vector is (0,0,1), then just multiply (0,0,1) by the inverted matrix (you can, and should, simply try that: set the matrix to identity and place the object at (0,0,100) or wherever). If it is not, find the vector that does face forward and do the same with it.
Concerning the 2nd method I posted, newVect2 = (-r.m13, -r.m23, r.m33), it is specifically meant for the case where (0,0,1) is the forward vector. The idea is that a rotation matrix consists of 3 base vectors x, y, z, where each row represents one of them respectively (this always works when creating a rotation matrix, but not necessarily in reverse). If the 3rd row does represent the z base vector, then the vector (0,0,1) will be transformed to (r.m13, r.m23, r.m33), and the one you seem to be facing is that vector mirrored through its normal (0,0,1). The operation for that is R' = normal*(2*dot(normal, R)) - R, and for the case where the normal is (0,0,1) and R is (r.m13, r.m23, r.m33), the result R' is (-r.m13, -r.m23, r.m33). Just as a note, it might be turned around, and the z base vector is instead (r.m31, r.m32, r.m33).
In any case, if the 3rd option does not work, neither will the 2nd; the problem then is probably that (0,0,1) is not the forward vector for the identity matrix.
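The "multiply by the inverted matrix" option is cheaper than it sounds: for a pure rotation matrix the inverse is just the transpose, so no general matrix inversion is needed. A plain-C sketch (the helper name is mine; a row-major 3x3 matrix is assumed):

```c
#include <assert.h>
#include <math.h>

/* Rotate v by the transpose (= inverse, for pure rotations) of the
 * row-major 3x3 matrix m.  For v = (0,0,1) this simply reads out the
 * third row of m, which is why "the forward vector is a row of the
 * rotation matrix" tricks work at all. */
static void rotate_by_transpose(const float m[3][3], const float v[3],
                                float out[3])
{
    for (int i = 0; i < 3; ++i)
        out[i] = m[0][i] * v[0] + m[1][i] * v[1] + m[2][i] * v[2];
}
```

Trying this with a known rotation (say 90 degrees about X) is an easy way to verify which row or column convention CMRotationMatrix actually uses on your device.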
I'm looking for a simple implementation for arcball rotation on 3D models with quaternions, specifically using GLKit on iOS. So far, I have examined the following sources:
Arcball rotation with GLKit
How to rotate a 3D object with touches using OpenGL
I've also been trying to understand source code and maths from here and here. I can rotate my object but it keeps jumping around at certain angles, so I fear gimbal lock is at play. I'm using gesture recognizers to control the rotations (pan gestures affect roll and yaw, rotate gestures affect pitch). I'm attaching my code for the quaternion handling as well as the modelview matrix transformation.
Variables:
GLKQuaternion rotationE;
Quaternion Handling:
- (void)rotateWithXY:(float)x and:(float)y
{
const float rate = M_PI/360.0f;
GLKVector3 up = GLKVector3Make(0.0f, 1.0f, 0.0f);
GLKVector3 right = GLKVector3Make(1.0f, 0.0f, 0.0f);
up = GLKQuaternionRotateVector3(GLKQuaternionInvert(self.rotationE), up);
self.rotationE = GLKQuaternionMultiply(self.rotationE, GLKQuaternionMakeWithAngleAndVector3Axis(x*rate, up));
right = GLKQuaternionRotateVector3(GLKQuaternionInvert(self.rotationE), right);
self.rotationE = GLKQuaternionMultiply(self.rotationE, GLKQuaternionMakeWithAngleAndVector3Axis(y*rate, right));
}
- (void)rotateWithZ:(float)z
{
GLKVector3 front = GLKVector3Make(0.0f, 0.0f, -1.0f);
front = GLKQuaternionRotateVector3(GLKQuaternionInvert(self.rotationE), front);
self.rotationE = GLKQuaternionMultiply(self.rotationE, GLKQuaternionMakeWithAngleAndVector3Axis(z, front));
}
Modelview Matrix Transformation (Inside Draw Loop):
// Get Quaternion Rotation
GLKVector3 rAxis = GLKQuaternionAxis(self.transformations.rotationE);
float rAngle = GLKQuaternionAngle(self.transformations.rotationE);
// Set Modelview Matrix
GLKMatrix4 modelviewMatrix = GLKMatrix4Identity;
modelviewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -0.55f);
modelviewMatrix = GLKMatrix4Rotate(modelviewMatrix, rAngle, rAxis.x, rAxis.y, rAxis.z);
modelviewMatrix = GLKMatrix4Scale(modelviewMatrix, 0.5f, 0.5f, 0.5f);
glUniformMatrix4fv(self.sunShader.uModelviewMatrix, 1, 0, modelviewMatrix.m);
Any help is greatly appreciated, but I do want to keep it as simple as possible and stick to GLKit.
There seem to be a few issues going on here.
You say that you're using [x,y] to pan, but it looks more like you're using them to pitch and yaw. To me, at least, panning is translation, not rotation.
Unless I'm missing something, it also looks like you're replacing the entire rotation every time you try to update it. You rotate a vector by the inverse of the current rotation and then create a quaternion from that vector and some angle. I believe this is equivalent to creating the quaternion from the original vector and then rotating it by the inverse of the current rotation. So you have q_e'*q_up. Then you multiply that by the current rotation, which gives q_e*q_e'*q_up = q_up. The current rotation is canceled out. This doesn't seem like what you want.
All you really need to do is create a new quaternion from axis-and-angle and then multiply it with the current quaternion. If the new quaternion is on the left, the orientation change will use the eye-local frame. If the new quaternion is on the right, the orientation change will be in the global frame. I think you want:
self.rotationE =
GLKQuaternionMultiply(
GLKQuaternionMakeWithAngleAndVector3Axis(x*rate, up),self.rotationE);
Do this, without the pre-rotation by inverse for all three cases.
I've never used GLKit, but it's uncommon to extract axis-angle when converting from quaternion to matrix. If the angle is zero, the axis is undefined; when it's near zero, you'll have numeric instability. It looks like you should be using GLKMatrix4MakeWithQuaternion and then multiplying the resulting matrix with your translation and scale matrices:
GLKMatrix4 modelviewMatrix =
GLKMatrix4Multiply( GLKMatrix4MakeTranslation(0.0f, 0.0f, -0.55f),
GLKMatrix4MakeWithQuaternion( self.rotationE ) );
modelviewMatrix = GLKMatrix4Scale( modelviewMatrix, 0.5f, 0.5f, 0.5f );
I was recently asked a bit more about my resulting implementation of this problem, so here it is!
- (void)rotate:(GLKVector3)r
{
// Convert degrees to radians for maths calculations
r.x = GLKMathDegreesToRadians(r.x);
r.y = GLKMathDegreesToRadians(r.y);
r.z = GLKMathDegreesToRadians(r.z);
// Axis Vectors w/ Direction (x=right, y=up, z=front)
// In OpenGL, negative z values go "into" the screen. In GLKit, positive z values go "into" the screen.
GLKVector3 right = GLKVector3Make(1.0f, 0.0f, 0.0f);
GLKVector3 up = GLKVector3Make(0.0f, 1.0f, 0.0f);
GLKVector3 front = GLKVector3Make(0.0f, 0.0f, 1.0f);
// Quaternion w/ Angle and Vector
// Positive angles are counter-clockwise, so convert to negative for a clockwise rotation
GLKQuaternion q = GLKQuaternionIdentity;
q = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(-r.x, right), q);
q = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(-r.y, up), q);
q = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(-r.z, front), q);
// ModelView Matrix
GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, GLKMatrix4MakeWithQuaternion(q));
}
Hope you put it to good use :)
About 2 days ago I decided to write code to explicitly calculate the Model-View-Projection ("MVP") matrix to understand how it worked. Since then I've had nothing but trouble, seemingly because of the projection matrix I'm using.
Working with an iPhone display, I create a screen centered square described by these 4 corner vertices:
const CGFloat cx = screenWidth/2.0f;
const CGFloat cy = screenHeight/2.0f;
const CGFloat z = -1.0f;
const CGFloat dim = 50.0f;
vxData[0] = cx-dim;
vxData[1] = cy-dim;
vxData[2] = z;
vxData[3] = cx-dim;
vxData[4] = cy+dim;
vxData[5] = z;
vxData[6] = cx+dim;
vxData[7] = cy+dim;
vxData[8] = z;
vxData[9] = cx+dim;
vxData[10] = cy-dim;
vxData[11] = z;
Since I am using OGLES 2.0 I pass the MVP as a uniform to my vertex shader, then simply apply the transformation to the current vertex position:
uniform mat4 mvp;
attribute vec3 vpos;
void main()
{
gl_Position = mvp * vec4(vpos, 1.0);
}
For now I have simplified my MVP to just be the P matrix. There are two projection matrices listed in the code shown below. The first is the standard perspective projection matrix, and the second is an explicit-value projection matrix I found online.
CGRect screenBounds = [[UIScreen mainScreen] bounds];
const CGFloat screenWidth = screenBounds.size.width;
const CGFloat screenHeight = screenBounds.size.height;
const GLfloat n = 0.01f;
const GLfloat f = 100.0f;
const GLfloat fov = 60.0f * 2.0f * M_PI / 360.0f;
const GLfloat a = screenWidth/screenHeight;
const GLfloat d = 1.0f / tanf(fov/2.0f);
// Standard perspective projection.
GLKMatrix4 projectionMx = GLKMatrix4Make(d/a, 0.0f, 0.0f, 0.0f,
0.0f, d, 0.0f, 0.0f,
0.0f, 0.0f, (n+f)/(n-f), -1.0f,
0.0f, 0.0f, (2*n*f)/(n-f), 0.0f);
// The one I found online.
GLKMatrix4 projectionMx = GLKMatrix4Make(2.0f/screenWidth,0.0f,0.0f,0.0f,
0.0f,2.0f/-screenHeight,0.0f,0.0f,
0.0f,0.0f,1.0f,0.0f,
-1.0f,1.0f,0.0f,1.0f);
When using the explicit-value matrix, the square renders exactly as desired in the centre of the screen with the correct dimensions. When using the perspective projection matrix, nothing is displayed on screen. I've printed out the position values generated for the screen centre (screenWidth/2, screenHeight/2, 0) by the perspective projection matrix and they're enormous; the explicit-value matrix correctly produces zero.
I think the explicit value matrix is an orthographic projection matrix - is that right? My frustration is that I can't work out why my perspective projection matrix fails to work.
I'd be tremendously grateful if someone could help me with this problem. Many thanks.
UPDATE For Christian Rau:
#define Zn 0.0f
#define Zf 100.0f
#define PRIMITIVE_Z 1.0f
//...
CGRect screenBounds = [[UIScreen mainScreen] bounds];
const CGFloat screenWidth = screenBounds.size.width;
const CGFloat screenHeight = screenBounds.size.height;
//...
glUseProgram(program);
//...
glViewport(0.0f, 0.0f, screenBounds.size.width, screenBounds.size.height);
//...
const CGFloat cx = screenWidth/2.0f;
const CGFloat cy = screenHeight/2.0f;
const CGFloat z = PRIMITIVE_Z;
const CGFloat dim = 50.0f;
vxData[0] = cx-dim;
vxData[1] = cy-dim;
vxData[2] = z;
vxData[3] = cx-dim;
vxData[4] = cy+dim;
vxData[5] = z;
vxData[6] = cx+dim;
vxData[7] = cy+dim;
vxData[8] = z;
vxData[9] = cx+dim;
vxData[10] = cy-dim;
vxData[11] = z;
//...
const GLfloat n = Zn;
const GLfloat f = Zf;
const GLfloat fov = 60.0f * 2.0f * M_PI / 360.0f;
const GLfloat a = screenWidth/screenHeight;
const GLfloat d = 1.0f / tanf(fov/2.0f);
GLKMatrix4 projectionMx = GLKMatrix4Make(d/a, 0.0f, 0.0f, 0.0f,
0.0f, d, 0.0f, 0.0f,
0.0f, 0.0f, (n+f)/(n-f), -1.0f,
0.0f, 0.0f, (2*n*f)/(n-f), 0.0f);
//...
// ** Here is the matrix you recommended, Christian:
GLKMatrix4 ts = GLKMatrix4Make(2.0f/screenWidth, 0.0f, 0.0f, -1.0f,
0.0f, 2.0f/screenHeight, 0.0f, -1.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f);
GLKMatrix4 mvp = GLKMatrix4Multiply(projectionMx, ts);
UPDATE 2
The new MVP code:
GLKMatrix4 ts = GLKMatrix4Make(2.0f/screenWidth, 0.0f, 0.0f, -1.0f,
0.0f, 2.0f/-screenHeight, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f);
// Using Apple perspective, view matrix generators
// (I can solve bugs in my own implementation later..!)
GLKMatrix4 _p = GLKMatrix4MakePerspective(60.0f * 2.0f * M_PI / 360.0f,
screenWidth / screenHeight,
Zn, Zf);
GLKMatrix4 _mv = GLKMatrix4MakeLookAt(0.0f, 0.0f, 1.0f,
0.0f, 0.0f, -1.0f,
0.0f, 1.0f, 0.0f);
GLKMatrix4 _mvp = GLKMatrix4Multiply(_p, _mv);
GLKMatrix4 mvp = GLKMatrix4Multiply(_mvp, ts);
Still nothing visible at the screen centre, and the transformed x,y coordinates of the screen centre are not zero.
UPDATE 3
Using the transpose of ts instead in the above code works! But the square no longer appears square; it appears to now have aspect ratio screenHeight/screenWidth i.e. it has a longer dimension parallel to the (short) screen width, and a shorter dimension parallel to the (long) screen height.
I'd very much like to know (a) why the transpose is required and whether it is a valid fix, (b) how to correctly rectify the non-square dimension, and (c) how this additional matrix transpose(ts) that we use fits into the transformation chain of Viewport * Projection * View * Model * Point .
For (c): I understand what the matrix does, i.e. the explanation by Christian Rau as to how we transform to range [-1, 1]. But is it correct to include this additional work as a separate transformation matrix, or should some part of our MVP chain be doing this work instead?
Sincere thanks go to Christian Rau for his valuable contribution thus far.
UPDATE 4
My question about "how ts fits in" is silly, isn't it - the whole point is that the matrix is only needed because I'm choosing to use screen coordinates for my vertices; if I were using world-space coordinates from the start, this work wouldn't be needed!
Thanks Christian for all your help, it's been invaluable :) Problem solved.
The reason for this is that your first projection matrix doesn't account for the scaling and translation part of the transformation, whereas the second matrix does.
So, since your modelview matrix is the identity, the first projection matrix assumes the model's coordinates to lie somewhere in [-1,1], whereas the second matrix already contains the scaling and translation part (look at the screenWidth/Height values in there) and therefore assumes the coordinates to lie in [0,screenWidth] x [0,screenHeight].
So you have to right-multiply your projection matrix by a matrix that first scales [0,screenWidth] and [0,screenHeight] down to [0,2] and then translates [0,2] into [-1,1] (using w for screenWidth and h for screenHeight):
[ 2/w 0 0 -1 ]
[ 0 2/h 0 -1 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]
which will result in the matrix
[ 2*d/h 0 0 -d/a ]
[ 0 2*d/h 0 -d ]
[ 0 0 (n+f)/(n-f) 2*n*f/(n-f) ]
[ 0 0 -1 0 ]
So you see that your second matrix corresponds to a fov of 90 degrees, an aspect ratio of 1:1 and a near-far range of [-1,1]. Additionally it also inverts the y-axis, so that the origin is in the upper-left, which results in the second row being negated:
[ 0 -2*d/h 0 d ]
But as a closing comment, I suggest you not configure the projection matrix to account for all this. Instead, your projection matrix should look like the first one, and you should let the modelview matrix manage any translation or scaling of your world. It is not by accident that the transformation pipeline was separated into modelview and projection matrices, and you should keep this separation when using shaders, too. You can of course still multiply both matrices together on the CPU and upload a single MVP matrix to the shader.
And in general you don't really use a screen-based coordinate system when working with a 3-dimensional world. You would only want to do this when drawing 2D graphics (like GUI elements or HUDs), and in that case you would use a simpler orthographic projection matrix anyway, which is nothing more than the above-mentioned scale-translate matrix without all the perspective complexity.
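The scale-translate mapping in one line, for a single axis (the helper name is mine): take a pixel coordinate in [0, extent] to NDC in [-1, 1]:

```c
#include <assert.h>

/* Map a screen-space coordinate p in [0, extent] (extent being
 * screenWidth for x or screenHeight for y) into normalized device
 * coordinates [-1, 1].  This is exactly what the ts matrix does per
 * component; negate the result for y if you want the origin at the
 * top-left instead of the bottom-left. */
static float pixel_to_ndc(float p, float extent)
{
    return 2.0f * p / extent - 1.0f;
}
```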
EDIT: To your 3rd update:
(a) The transpose is required because, I guess, your GLKMatrix4Make function accepts its parameters in column-major format and you entered the matrix row-wise.
(b) I made a little mistake: you should change the screenWidth in the ts matrix to screenHeight (or maybe the other way around, I'm not sure). We actually need a uniform scale, because the aspect ratio is already taken care of by the projection matrix.
(c) It is not easy to classify this matrix into the usual MVP pipeline. This is because it is not really common. Let's look at the two common cases of rendering:
3D: When you have a 3-dimensional world, it is not really common to define its coordinates in screen-based units, because there is not yet a mapping from the 3D scene to the 2D screen, and using a coordinate system where units equal pixels just doesn't make sense. In this setup you would most likely classify the matrix as part of the modelview matrix, transforming the world into another unit system. But in that case you would need real 3D transformations, not just such a half-baked 2D solution.
2D: When rendering a 2D scene (like a GUI, a HUD, or just some text), you sometimes really do want a screen-based coordinate system. But in that case you would most likely use an orthographic projection (without any perspective). Such an orthographic matrix is actually nothing more than this ts matrix (with an additional scale-translate for z, based on the near-far range). So in this case the matrix belongs to, or actually is, the projection matrix. Just look at how the good old glOrtho function constructs its matrix and you'll see it's nothing more than ts.
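To see that claim concretely, here is the row-major matrix glOrtho(l, r, b, t, n, f) builds (per the fixed-function documentation). With l = b = 0, r = screenWidth, t = screenHeight, the x and y rows reduce exactly to the ts matrix above; only the z row differs, carrying the near-far remapping:

```c
#include <assert.h>

/* Row-major glOrtho(l, r, b, t, n, f) matrix.  For l = b = 0,
 * r = w, t = h this gives 2/w and 2/h on the diagonal with -1
 * translations, i.e. the ts scale-translate matrix, plus the
 * z scale-translate derived from the near-far range. */
static void ortho(float l, float r, float b, float t, float n, float f,
                  float m[4][4])
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            m[i][j] = (i == j) ? 1.0f : 0.0f;
    m[0][0] = 2.0f / (r - l);   m[0][3] = -(r + l) / (r - l);
    m[1][1] = 2.0f / (t - b);   m[1][3] = -(t + b) / (t - b);
    m[2][2] = -2.0f / (f - n);  m[2][3] = -(f + n) / (f - n);
}
```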