Stuck on matrix operation errors - iOS

I set up a 2D OpenGL view in iOS with the top left as the origin and the bottom right as (768, 1366).
My projection matrix is setup like this:
projectionMtx = GLKMatrix4MakeOrtho( 0, 768, 1366, 0, 10, -10);
When I get a touch event, the coordinates are in physical coordinates, and I need to convert them into my own logical coordinates, so I reasoned like this:
Since V_physical = M_projection * V_logical
it follows that V_logical = M_projection_inverse * V_physical
and I implemented the code like this:
- (void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
    UITouch* touch = [[event allTouches] anyObject];
    CGPoint location = [touch locationInView:self.view];
    GLKVector4 locationVector = {
        (float)location.x,
        (float)location.y,
        0,
        0,
    };
    GLKVector4 result = GLKMatrix4MultiplyVector4(GLKMatrix4Invert(projectionMtx, nullptr), locationVector);
    NSLog(@"touch %.2f %.2f", location.x, location.y);
    NSLog(@"vector %.2f %.2f", result.v[0], result.v[1]);
}
However, this is what I got from testing:
touch 367.00 662.00
vector 140928.00 -452151.28
Is my math wrong or my code wrong?

You have a mix-up between your coordinate systems. The projection matrix maps its input coordinates (which in a full 3D pipeline are typically called "eye coordinates") to clip coordinates. For a parallel projection, clip coordinates are the same as normalized device coordinates (NDC), which have a range of [-1, 1] in the x and y directions.
This means that your ortho projection:
projectionMtx = GLKMatrix4MakeOrtho( 0, 768, 1366, 0, 10, -10);
maps an x range of [0, 768] to [-1, 1], and a y range of [1366, 0] to [-1, 1]. The resulting mapping done by the matrix is:
xNdc = (2.0 / 768.0) * xEye - 1.0
yNdc = (2.0 / -1366.0) * yEye + 1.0
The inverse of this is:
xEye = (768.0 / 2.0) * (xNdc + 1.0)
yEye = (-1366.0 / 2.0) * (yNdc - 1.0)
Applying this inverse transformation gives:
(768.0 / 2.0) * (367.0 + 1.0) = 141312.0
(-1366.0 / 2.0) * (662.0 - 1.0) = -451463
This is slightly off from what you got, most likely because your locationVector uses w = 0 instead of 1, so the translation part of the inverted matrix is never applied; but it's very similar either way.
This is obviously not meaningful. To use the inverse projection transformation, your input coordinates should be in the range [-1, 1].
In your use case, since you set up the projection transformation to work on coordinates in pixels, and you receive touch input that is also in pixels, you don't have to do anything at all to get the touch input into your OpenGL coordinate system. They are already in the same coordinate system (pixels).
If you used any other projection, you would first map your touch coordinates to the [-1, 1] range and then apply the inverse projection transformation, as in the sketch below. The coordinate mapping uses the same equations as the ones above for mapping eye coordinates to NDC.
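For that general case, here is a minimal sketch of the touch-to-NDC-to-eye-coordinates path (my own illustration, not code from the post), assuming a view and a projectionMtx like the ones above; note that w is set to 1, not 0, so the translation part of the inverted matrix actually takes effect:

import GLKit
import UIKit

// Minimal sketch, assuming `view` and `projectionMtx` as in the question.
func unprojectTouch(_ touchLocation: CGPoint, in view: UIView,
                    projection projectionMtx: GLKMatrix4) -> GLKVector4 {
    // Map the touch from view points to normalized device coordinates in [-1, 1].
    // Note the y flip: UIKit's y axis points down, NDC's y axis points up.
    let xNdc = Float(2.0 * touchLocation.x / view.bounds.size.width - 1.0)
    let yNdc = Float(1.0 - 2.0 * touchLocation.y / view.bounds.size.height)

    // w must be 1 (not 0) so the translation column of the inverted matrix is applied.
    let ndc = GLKVector4Make(xNdc, yNdc, 0.0, 1.0)

    var invertible = false
    let inverseProjection = GLKMatrix4Invert(projectionMtx, &invertible)
    return GLKMatrix4MultiplyVector4(inverseProjection, ndc)
}

For the pixel-space ortho matrix above, and assuming the view's size matches the 768 x 1366 logical space, this just hands back (approximately) the coordinates you started with, which is exactly why the conversion is unnecessary in your setup.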

Related

Using real-time coordinates for cv2.putText coordinates

I found x, y coordinates that change in real time, and I want to assign them to the position value of the cv2.putText function. But it keeps failing. How can I use x, y coordinates that move in real time?
Extra explanation:
'It keeps failing' happens when I apply them in cv2.putText.
Coordinate x,y code:
image_height, image_width, _ = image.shape
x_coordinate = results.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_SHOULDER].x * image_width
y_coordinate = results.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_SHOULDER].y * image_height
cv2.putText code:
cv2.putText(image, str(results.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_SHOULDER].x * image_width), (x_coordinate, y_coordinate), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0,0,0), 1, cv2.LINE_AA)

iOS revert camera projection

I'm trying to estimate my device position relative to a QR code in space. I'm using ARKit and the Vision framework, both introduced in iOS 11, but the answer to this question probably doesn't depend on them.
With the Vision framework, I'm able to get the rectangle that bounds a QR code in the camera frame. I'd like to match this rectangle to the device translation and rotation necessary to transform the QR code from a standard position.
For instance if I observe the frame:
* *
B
C
A
D
* *
while if I were 1 m away from the QR code, centered on it, and assuming the QR code has a side of 10 cm, I'd see:
* *
A0 B0
D0 C0
* *
What has my device transformation been between those two frames? I understand that an exact result might not be possible, because the observed QR code may be slightly non-planar and we're trying to estimate an affine transform on something that isn't perfectly one.
I guess the sceneView.pointOfView?.camera?.projectionTransform is more helpful than the sceneView.pointOfView?.camera?.projectionTransform?.camera.projectionMatrix, since the latter already takes into account the transform inferred by ARKit, which I'm not interested in for this problem.
How would I fill in
func getTransform(
    qrCodeRectangle: VNBarcodeObservation,
    cameraTransform: SCNMatrix4) {
    // qrCodeRectangle.topLeft etc. is the position in [0, 1] * [0, 1] of A0
    // expected real world position of the QR code in a referential coordinate system
    let a0 = SCNVector3(x: -0.05, y: 0.05, z: 1)
    let b0 = SCNVector3(x: 0.05, y: 0.05, z: 1)
    let c0 = SCNVector3(x: 0.05, y: -0.05, z: 1)
    let d0 = SCNVector3(x: -0.05, y: -0.05, z: 1)
    let A0, B0, C0, D0 = ?? // CGPoints representing the position in the
                            // camera frame for a camera at 0, 0, 0 facing Z+
    // then get the transform from 0, 0, 0 to the current position/rotation that sees
    // a0, b0, c0, d0 through the camera as qrCodeRectangle
}
==== Edit ====
After trying a number of things, I ended up going for camera pose estimation using OpenCV's projection and perspective solver, solvePnP. This gives me a rotation and translation that should represent the camera pose in the QR code referential. However, when I use those values and place objects corresponding to the inverse transformation, where the QR code should be in the camera space, I get inaccurate, shifted values, and I'm not able to get the rotation to work:
// some flavor of pseudo code below
func renderer(_ sender: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let currentFrame = sceneView.session.currentFrame, let pov = sceneView.pointOfView else { return }
    let intrinsics = currentFrame.camera.intrinsics
    let QRCornerCoordinatesInQRRef = [(-0.05, -0.05, 0), (0.05, -0.05, 0), (-0.05, 0.05, 0), (0.05, 0.05, 0)]

    // uses VNDetectBarcodesRequest to find a QR code and returns a bounding rectangle
    guard let qr = findQRCode(in: currentFrame) else { return }

    let imageSize = CGSize(
        width: CVPixelBufferGetWidth(currentFrame.capturedImage),
        height: CVPixelBufferGetHeight(currentFrame.capturedImage)
    )

    let observations = [
        qr.bottomLeft,
        qr.bottomRight,
        qr.topLeft,
        qr.topRight,
    ].map({ (imageSize.height * (1 - $0.y), imageSize.width * $0.x) })
    // image and SceneKit coordinates are not the same
    // replacing this by:
    // (imageSize.height * (1.35 - $0.y), imageSize.width * ($0.x - 0.2))
    // weirdly fixes an issue, see below

    let rotation, translation = openCV.solvePnP(QRCornerCoordinatesInQRRef, observations, intrinsics)
    // calls OpenCV solvePnP and gets the results

    let positionInCameraRef = -rotation.inverted * translation
    let node = SCNNode(geometry: someGeometry)
    pov.addChildNode(node)
    node.position = translation
    node.orientation = rotation.asQuaternion
}
Here is the output (the screenshot is not reproduced here); A, B, C, D are the QR code corners in the order they are passed to the program.
The predicted origin stays in place when the phone rotates, but it's shifted from where it should be. Surprisingly, if I shift the observation values, I'm able to correct this:
// (imageSize.height * (1 - $0.y), imageSize.width * $0.x)
// replaced by:
(imageSize.height * (1.35 - $0.y), imageSize.width * ($0.x - 0.2))
and now the predicted origin stays robustly in place. However I don't understand where the shift values come from.
Finally, I've tried to get an orientation fixed relative to the QR code referential:
var n = SCNNode(geometry: redGeometry)
node.addChildNode(n)
n.position = SCNVector3(0.1, 0, 0)
n = SCNNode(geometry: blueGeometry)
node.addChildNode(n)
n.position = SCNVector3(0, 0.1, 0)
n = SCNNode(geometry: greenGeometry)
node.addChildNode(n)
n.position = SCNVector3(0, 0, 0.1)
The orientation is fine when I look at the QR code straight, but then it shifts by something that seems to be related to the phone rotation:
Outstanding questions I have are:
How do I solve the rotation?
Where do the position shift values come from?
What simple relationship do rotation, translation, QRCornerCoordinatesInQRRef, observations, and intrinsics satisfy? Is it O ~ K^-1 * (R_3x2 | T) Q? Because if so, that's off by a few orders of magnitude.
If that's helpful, here are a few numerical values:
Intrinsics matrix
Mat 3x3
1090.318, 0.000, 618.661
0.000, 1090.318, 359.616
0.000, 0.000, 1.000
imageSize
1280.0, 720.0
screenSize
414.0, 736.0
==== Edit 2 ====
I've noticed that the rotation works fine when the phone stays horizontally parallel to the QR code (i.e. the rotation matrix is [[a, 0, b], [0, 1, 0], [c, 0, d]]), no matter what the actual QR code orientation is:
Other rotations don't work.
Coordinate systems' correspondence
Take into consideration that the Vision/CoreML coordinate system doesn't correspond to the ARKit/SceneKit coordinate system. For details, look at this post.
Rotation's direction
I suppose the problem is not in the matrix but in the vertex placement. For tracking 2D images you need to place the ABCD vertices counter-clockwise (the starting point being the A vertex, located at the imaginary origin x: 0, y: 0). I think Apple's documentation on the VNRectangleObservation class (info about projected rectangular regions detected by an image analysis request) is vague on this point. You placed your vertices in the same order as in the official documentation:
var bottomLeft: CGPoint
var bottomRight: CGPoint
var topLeft: CGPoint
var topRight: CGPoint
But they need to be placed in the order in which a positive rotation (about the Z axis) proceeds in a Cartesian coordinate system:
World Coordinate Space in ARKit (as well as in SceneKit and Vision) always follows a right-handed convention (the positive Y axis points upward, the positive Z axis points toward the viewer, and the positive X axis points toward the viewer's right), but it is oriented based on your session's configuration. The camera works in Local Coordinate Space.
Rotation about any axis is positive (counter-clockwise) or negative (clockwise). For tracking in ARKit and Vision this is critically important.
The order of rotation also matters. ARKit, as well as SceneKit, applies rotation relative to the node's pivot property in the reverse order of the components: first roll (about the Z axis), then yaw (about the Y axis), then pitch (about the X axis). So the rotation order is ZYX.
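As a rough illustration of that ordering, here is a sketch of my own (not code from either post) that gathers the Vision corners counter-clockwise starting at the bottom-left "A" vertex and converts them to pixel coordinates; whether your solver wants exactly this order and origin is an assumption you would need to verify:

import Vision
import CoreGraphics

// Sketch only: collect the corners of a VNBarcodeObservation counter-clockwise,
// starting at the bottom-left ("A") vertex, converting Vision's normalized,
// bottom-left-origin coordinates to top-left-origin pixel coordinates.
func counterClockwiseCorners(of observation: VNBarcodeObservation,
                             imageSize: CGSize) -> [CGPoint] {
    func toPixels(_ p: CGPoint) -> CGPoint {
        return CGPoint(x: p.x * imageSize.width,
                       y: (1.0 - p.y) * imageSize.height)
    }
    // A, B, C, D: counter-clockwise in Vision's coordinate space
    // (a positive rotation about Z in a right-handed system).
    return [observation.bottomLeft,   // A
            observation.bottomRight,  // B
            observation.topRight,     // C
            observation.topLeft]      // D
        .map(toPixels)
}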
Math (Trig.): in the accompanying diagram (not included here), the bottom side is l (the QR code length), the left angle is k, and the top angle is i (the camera).

Apply rotation around axis defined by touched point

I have an object displayed using OpenGL ES on an iPad. The model is defined by vertices, normals, and indices into the vertices. The origin of the model is 0, 0, 0. Using UIGestureRecognizer I can detect various gestures: a two-fingered horizontal swipe for rotation about y, a vertical swipe for rotation about x, a two-fingered rotate gesture for rotation about z, a pan to move the model around, and a pinch/zoom gesture to scale. I want the viewer to be able to manipulate the model to see (for example) the reverse of the model or the whole thing at once.
The basic strategy comes from Ray Wenderlich's tutorial but I have rewritten this in Swift.
I understand quaternions to be a vector and an angle. The vectors up, right and front represent the three axes:
front = GLKVector3Make(0.0, 0.0, 1.0)
right = GLKVector3Make(1.0, 0.0, 0.0)
up = GLKVector3Make(0.0, 1.0, 0.0)
so the quaternion applies a rotation around each of the three axes (though only one of dx, dy, dz has a value, decided by the gesture recognizer).
func rotate(rotation : GLKVector3, multiplier : Float) {
    let dx = rotation.x - rotationStart.x
    let dy = rotation.y - rotationStart.y
    let dz = rotation.z - rotationStart.z
    rotationStart = GLKVector3Make(rotation.x, rotation.y, rotation.z)
    rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(dx * multiplier, up), rotationEnd)
    rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(dy * multiplier, right), rotationEnd)
    rotationEnd = GLKQuaternionMultiply((GLKQuaternionMakeWithAngleAndVector3Axis(-dz, front)), rotationEnd)
    state = .Rotation
}
Drawing uses the modelViewMatrix, calculated by the following function:
func modelViewMatrix() -> GLKMatrix4 {
    var modelViewMatrix = GLKMatrix4Identity
    // translation and zoom
    modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, translationEnd.x, translationEnd.y, -initialDepth);
    // rotation
    let quaternionMatrix = GLKMatrix4MakeWithQuaternion(rotationEnd)
    modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, quaternionMatrix)
    // scale
    modelViewMatrix = GLKMatrix4Scale(modelViewMatrix, scaleEnd, scaleEnd, scaleEnd);
    return modelViewMatrix
}
And mostly this works. However everything is relative to the origin.
If the model is rotated then the pivot is always an axis passing through the origin - if zoomed in looking at the end of the model away from the origin and then rotating, the model can rapidly swing out of view. If the model is scaled then the origin is always the fixed point with the model growing larger or smaller - if the origin is off-screen and scale is reduced the model can disappear from view as it collapses toward the origin...
What should happen is that whatever the current view, the model rotates or scales relative to the current view. For a rotation around the y axis that would mean defining the y axis around which the rotation occurs as passing vertically through the middle of the current view. For a scale operation the fixed point of the model would be in the centre of the screen with the model shrinking toward or growing outward from that point.
I know that in 2D the solution is always to translate to the origin, apply the rotation, and then apply the inverse of the first translation. I don't see why this should be different in 3D, but I cannot find any example doing this with quaternions, only with matrices. I have tried to apply a translation and its inverse around the rotation, but nothing has any effect.
So I tried to do this in the rotate function:
let xTranslation : Float = 300.0
let yTranslation : Float = 300.0
let translation = GLKMatrix4Translate(GLKMatrix4Identity, xTranslation, yTranslation, -initialDepth);
rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithMatrix4(translation) , rotationEnd)
rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(dx * multiplier, up), rotationEnd)
rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(dy * multiplier, right), rotationEnd)
rotationEnd = GLKQuaternionMultiply((GLKQuaternionMakeWithAngleAndVector3Axis(-dz, front)), rotationEnd)
// inverse translation
let inverseTranslation = GLKMatrix4Translate(GLKMatrix4Identity, -xTranslation, -yTranslation, -initialDepth);
rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithMatrix4(inverseTranslation) , rotationEnd)
The translation is 300, 300, but there is no effect at all; it still pivots around where I know the origin to be. I've searched a long time for sample code and haven't found any.
The modelViewMatrix is applied in update() with:
effect?.transform.modelviewMatrix = modelViewMatrix
I could also cheat by adjusting all of the values in the model so that 0,0,0 falls at a central point - but that would still be a fixed origin and would be only marginally better.
The problem is in the last operation you made: you should swap the inverseTranslation with rotationEnd:
rotationEnd = GLKQuaternionMultiply(rotationEnd, GLKQuaternionMakeWithMatrix4(inverseTranslation))
And I think the partial rotations (dx, dy, dz) should follow the same rule.
In fact, if you want to change the pivot, this is how your matrix multiplication should be done:
modelMatrix = translationMatrix * rotationMatrix * inverse(translationMatrix)
and the result in homogeneous coordinates will be calculated as follows:
newPoint = translationMatrix * rotationMatrix * inverse(translationMatrix) * v4(x,y,z,1)
Example
This is a 2D test example that you can run in a playground.
let v4 = GLKVector4Make(1, 0, 0, 1) // Point A
let T = GLKMatrix4Translate(GLKMatrix4Identity, 1, 2, 0);
let rot = GLKMatrix4MakeWithQuaternion(GLKQuaternionMakeWithAngleAndVector3Axis(Float(M_PI)*0.5, GLKVector3Make(0, 0, 1))) //rotate by PI/2 around the z axis.
let invT = GLKMatrix4Translate(GLKMatrix4Identity, -1, -2, 0);
let partModelMat = GLKMatrix4Multiply(T, rot)
let modelMat = GLKMatrix4Multiply(partModelMat, invT) //The parameters were swapped in your code
//and the result would the rot matrix, since T*invT will be identity
var v4r = GLKMatrix4MultiplyVector4(modelMat, v4) //ModelMatrix multiplication with pointA
print(v4r.v) //(3,2,0,1)
//Step by step multiplication using the relation described above
v4r = GLKMatrix4MultiplyVector4(invT, v4)
v4r = GLKMatrix4MultiplyVector4(rot, v4r)
v4r = GLKMatrix4MultiplyVector4(T, v4r)
print(v4r.v) //(3,2,0,1)
As for the scale, if I understand correctly what you want, I would recommend doing it the way it's done here: https://gamedev.stackexchange.com/questions/61473/combining-rotation-scaling-around-a-pivot-with-translation-into-a-matrix
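For completeness, a scale about a pivot follows the same translate * transform * inverse-translate pattern as the rotation above; here is a small GLKit sketch of my own (not taken from the linked answer):

import GLKit

// Sketch: uniform scale about an arbitrary pivot point.
// M = T * S * inverse(T): the pivot stays fixed while everything else scales around it.
func scaleMatrix(about pivot: GLKVector3, factor: Float) -> GLKMatrix4 {
    let toPivot = GLKMatrix4MakeTranslation(pivot.x, pivot.y, pivot.z)
    let scale = GLKMatrix4MakeScale(factor, factor, factor)
    let fromPivot = GLKMatrix4MakeTranslation(-pivot.x, -pivot.y, -pivot.z)
    return GLKMatrix4Multiply(GLKMatrix4Multiply(toPivot, scale), fromPivot)
}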

How can I track a point on a texture in OpenGL ES1?

In my iOS application I have a texture applied to a sphere rendered in OpenGLES1. The sphere can be rotated by the user. How can I track where a given point on the texture is in 2D space at any given time?
For example, given point (200, 200) on a texture that's 1000px x 1000px, I'd like to place a UIButton on top of my OpenGL view that tracks the point as the sphere is manipulated.
What's the best way to do this?
On my first attempt, I tried to use a color-picking technique where I have a separate sphere in an off-screen framebuffer that uses a black texture with a red square at point (200, 200). Then, I used glReadPixels() to track the position of the red square and I moved my button accordingly. Unfortunately, grabbing all the pixel data and iterating it 60 times a second just isn't possible for obvious performance reasons. I tried a number of ways to optimize this hack (eg: iterating only the red pixels, iterating every 4th red pixel, etc), but it just didn't prove to be reliable.
I'm an OpenGL noob, so I'd appreciate any guidance. Is there a better solution? Thanks!
I think it's easier to keep track of where your ball is instead of searching for it with pixels. Then just have a couple of functions to translate your ball's coordinates to your view's coordinates (and back), then set your subview's center to the translated coordinates.
CGPoint translatePointFromGLCoordinatesToUIView(CGPoint coordinates, UIView *myGLView) {
    // if your drawing coordinates were between (horizontal {-1.0 -> 1.0} vertical {-1 -> 1})
    CGFloat leftMostGLCoord = -1;
    CGFloat rightMostGLCoord = 1;
    CGFloat bottomMostGLCoord = -1;
    CGFloat topMostGLCoord = 1;

    CGPoint scale;
    scale.x = (rightMostGLCoord - leftMostGLCoord) / myGLView.bounds.size.width;
    scale.y = (topMostGLCoord - bottomMostGLCoord) / myGLView.bounds.size.height;

    coordinates.x -= leftMostGLCoord;
    coordinates.y -= bottomMostGLCoord;

    CGPoint translatedPoint;
    translatedPoint.x = coordinates.x / scale.x;
    translatedPoint.y = coordinates.y / scale.y;

    // flip y for iOS coordinates
    translatedPoint.y = myGLView.bounds.size.height - translatedPoint.y;
    return translatedPoint;
}
CGPoint translatePointFromUIViewToGLCoordinates(CGPoint pointInView, UIView *myGLView) {
    // if your drawing coordinates were between (horizontal {-1.0 -> 1.0} vertical {-1 -> 1})
    CGFloat leftMostGLCoord = -1;
    CGFloat rightMostGLCoord = 1;
    CGFloat bottomMostGLCoord = -1;
    CGFloat topMostGLCoord = 1;

    CGPoint scale;
    scale.x = (rightMostGLCoord - leftMostGLCoord) / myGLView.bounds.size.width;
    scale.y = (topMostGLCoord - bottomMostGLCoord) / myGLView.bounds.size.height;

    // flip y for iOS coordinates
    pointInView.y = myGLView.bounds.size.height - pointInView.y;

    CGPoint translatedPoint;
    translatedPoint.x = leftMostGLCoord + (pointInView.x * scale.x);
    translatedPoint.y = bottomMostGLCoord + (pointInView.y * scale.y);
    return translatedPoint;
}
In my app I chose to use the iOS coordinate system for my drawing too. I just apply a projection matrix to my whole GLKView that reconciles the coordinate systems.
static GLKMatrix4 GLKMatrix4MakeIOSCoordsWithSize(CGSize screenSize) {
    GLKMatrix4 matrix4 = GLKMatrix4MakeScale(
        2.0 / screenSize.width,
        -2.0 / screenSize.height,
        1.0);
    matrix4 = GLKMatrix4Translate(matrix4, -screenSize.width / 2.0, -screenSize.height / 2.0, 0);
    return matrix4;
}
This way you don't have to translate anything.
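If you do keep your drawing in GL coordinates instead, tracking a texture point comes down to projecting the tracked model-space point with the same model-view-projection matrix you render with, then handing the result to translatePointFromGLCoordinatesToUIView. Here is a rough Swift sketch of that projection step (my own, with an assumed mvp parameter, not code from the original answer):

import GLKit
import CoreGraphics

// Sketch only: project a point given in the sphere's model space into the GL
// [-1, 1] range used above. `mvp` is assumed to be the same model-view-projection
// matrix used to draw the sphere; the result can be fed to
// translatePointFromGLCoordinatesToUIView to position a UIButton.
func glCoordinates(forModelPoint p: GLKVector3, mvp: GLKMatrix4) -> CGPoint? {
    let clip = GLKMatrix4MultiplyVector4(mvp, GLKVector4Make(p.x, p.y, p.z, 1.0))
    guard clip.w > 0 else { return nil }   // the point is behind the camera
    // Perspective divide gives normalized device coordinates.
    return CGPoint(x: CGFloat(clip.x / clip.w),
                   y: CGFloat(clip.y / clip.w))
}

A point on the back of the sphere still projects to a valid 2D location, so you would also want some visibility test (for example, comparing the transformed surface normal with the view direction) before showing the button.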

OpenGL ES 2.0 Ray Picking, far point

Please help me with ray picking.
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(35.0f), aspect, 0.1f, 1000.0f);
GLKMatrix4 modelViewMatrix = _mainmodelViewMatrix;
// some transformations
_mainmodelViewMatrix = modelViewMatrix;
_modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
_normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
_modelViewProjectionMatrix and _normalMatrix are passed to the shader:
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);
and in the touch-end handler:
GLKVector4 normalisedVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
(2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1) , //1 - 2 * position.y / self.view.bounds.size.height,
-1,
1);
GLKMatrix4 inversedMatrix = GLKMatrix4Invert(_modelViewProjectionMatrix, nil);
GLKVector4 near_point = GLKMatrix4MultiplyVector4(inversedMatrix, normalisedVector);
How can I get the far point? And is my near_point correct or not?
Thanks!
It looks like you have
GLKVector4 normalisedVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
(2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1) ,
-1, 1);
(phew) to calculate the normalized device coordinates of the near point.
To get the far point, just swap the -1 z coordinate for a 1:
GLKVector4 normalisedFarVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
(2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1) ,
1, 1);
And apply the same inverse transform to that. That should do the trick.
Background: Under normal circumstances, the final coordinates received by the GL for turning a fragment into a pixel are what are called normalised device coordinates. These lie within a cube whose corners are at (-1, -1, -1) and (1, 1, 1). So the center of the screen is (0, 0, z), the top left corner is (-1, 1, z), and so on. The coordinates are transformed so that a point lying on the near plane has a z coordinate of -1, and one lying on the far plane has a z coordinate of 1. These are the numbers that are used for depth testing, if you have it turned on.
So, as you might guess, when you want to convert a screen location back to a point in 3D space, you actually have a number of points to choose from - a line, in fact, stretching from the near plane to the far plane. In normalised device coordinates, this is the line stretching from z=-1 to z=1. So the process goes like this:
convert the x and y coordinates into normalised device coordinates x' and y'
For each of z' = 1 and z' = -1:
form the full normalised device coordinate (x', y', z', 1)
apply the inverse of the projection matrix
apply the inverse of the model/view matrix (as it is before any per-object transformations)
The results are the two coordinates of your line in 3D space.
We can draw a line from near_point to far_point.
GLKVector4 normalisedVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
(2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1),
-1,
1);
GLKMatrix4 inversedMatrix = GLKMatrix4Invert(_modelViewProjectionMatrix, nil);
GLKVector4 near_point = GLKMatrix4MultiplyVector4(inversedMatrix, normalisedVector);
near_point.v[3] = 1.0/near_point.v[3];
near_point = GLKVector4Make(near_point.v[0]*near_point.v[3], near_point.v[1]*near_point.v[3], near_point.v[2]*near_point.v[3], 1);
normalisedVector.z = 1.0;
GLKVector4 far_point = GLKMatrix4MultiplyVector4(inversedMatrix, normalisedVector);
far_point.v[3] = 1.0/far_point.v[3];
far_point = GLKVector4Make(far_point.v[0]*far_point.v[3], far_point.v[1]*far_point.v[3], far_point.v[2]*far_point.v[3], 1);
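Once both points are unprojected and divided by w as above, the pick ray follows directly; here is a small Swift sketch of my own (not part of the original answer):

import GLKit

// Sketch: build a pick ray from the unprojected near and far points.
// nearPoint / farPoint correspond to near_point and far_point above, already
// divided by their w components, so they are ordinary 3D positions.
func pickRay(nearPoint: GLKVector4, farPoint: GLKVector4)
        -> (origin: GLKVector3, direction: GLKVector3) {
    let origin = GLKVector3Make(nearPoint.x, nearPoint.y, nearPoint.z)
    let far = GLKVector3Make(farPoint.x, farPoint.y, farPoint.z)
    // Direction from the near plane toward the far plane, normalized.
    let direction = GLKVector3Normalize(GLKVector3Subtract(far, origin))
    return (origin, direction)
}

Intersecting that ray with your geometry (in the same space as the unprojected points) then tells you what was picked.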
