I'm using Vuforia on Android for AR development. We can obtain the modelViewMatrix using
Matrix44F modelViewMatrix_Vuforia = Tool.convertPose2GLMatrix(trackableResult.getPose());
This works great. Any geometry multiplied by this matrix and then by the projection matrix shows up on the screen as expected, with (0,0,0) at the centre of the tracked target.
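For reference, here is a minimal sketch of that composition using android.opengl.Matrix (the variable names, and the vuforiaAppSession.getProjectionMatrix() call that appears later in this post, are assumptions based on the Vuforia samples rather than a definitive implementation):
float [] modelView = modelViewMatrix_Vuforia.getData(); // column-major, as returned by Tool.convertPose2GLMatrix
float [] projection = vuforiaAppSession.getProjectionMatrix().getData();
float [] modelViewProjection = new float[16];
Matrix.multiplyMM(modelViewProjection, 0, projection, 0, modelView, 0); // MVP = P * MV
// modelViewProjection is then passed to the vertex shader, so a vertex at (0,0,0) lands at the centre of the tracked target.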
But I also want to simultaneously draw geometry relative to the user's device. To achieve this we can work out the inverse modelViewMatrix using:
Matrix44F invTranspMV = SampleMath.Matrix44FTranspose(modelViewMatrix_Vuforia);
Matrix44F inverseMV = SampleMath.Matrix44FInverse(invTranspMV);
modelViewMatrixInverse = inverseMV.getData();
This works pretty well. For example, if I draw a cube using this matrix, then when I tilt my phone up and down the cube tilts up and down correctly, but when I turn left and right there's a problem. Turning left causes the cube to turn the wrong way, as if I'm looking at the right-hand side of it, and similarly when turning right. What should happen is that the cube appears "stuck" to the screen, i.e. whichever way I turn I should always see the same face "stuck" to the screen.
I think the problem might be to do with the Vuforia projection matrix, and I am going to create my own projection matrix (using guidance here) to experiment with different settings. As this post says, it could be to do with the intrinsic camera calibration of a specific device.
Am I on the right track? Any ideas what might be wrong and how I might solve this?
UPDATE
I don't think it's the projection matrix anymore (due to experimentation and peedee's answer/comments below).
Having looked at this post I think I've made some progress. I am now using the following code:
Matrix44F modelViewMatrix_Vuforia = Tool.convertPose2GLMatrix(trackableResult.getPose());
Matrix44F inverseMV = SampleMath.Matrix44FInverse(modelViewMatrix_Vuforia);
Matrix44F invTranspMV = SampleMath.Matrix44FTranspose(inverseMV);
modelViewMatrixInverse = invTranspMV.getData();
float [] position = {0, 0, 0, 1};
float [] lookAt = {0, 0, 1, 0};
float [] cam_position = new float[4];
float [] cam_lookat = new float[4];
Matrix.multiplyMV(cam_position, 0, modelViewMatrixInverse, 0, position, 0);
Matrix.multiplyMV(cam_lookat, 0, modelViewMatrixInverse, 0, lookAt, 0);
Log.v("QCV", "posx = " + cam_position[0] + ", posy = " + cam_position[1] + ", posz = " + cam_position[2]);
Log.v("QCV", "latx = " + cam_lookat[0] + ", laty = " + cam_lookat[1] + ", latz = " + cam_lookat[2]);
This successfully returns the camera position, and the normal to the camera as you move the camera about the target. I think I should be able to use this to project geometry in the way I want. Will update later if it works.
UPDATE2
Ok, some progress made. I'm now using the following code. It does the same thing as the previous code block but uses the android.opengl.Matrix class instead of the SampleMath class.
float [] temp = modelViewMatrix_Vuforia.getData();
Matrix.invertM(modelViewMatrixInverse, 0, temp, 0);
float [] position = {0, 0, 0, 1};
float [] lookAt = {0, 0, 1, 0};
float [] cam_position = new float[4];
float [] cam_lookat = new float[4];
Matrix.multiplyMV(cam_position, 0, modelViewMatrixInverse, 0, position, 0);
Matrix.multiplyMV(cam_lookat, 0, modelViewMatrixInverse, 0, lookAt, 0);
Log.v("QCV", "posx = " + cam_position[0] / kObjectScale + ", posy = " + cam_position[1] / kObjectScale + ", posz = " + cam_position[2] / kObjectScale);
Log.v("QCV", "latx = " + cam_lookat[0] + ", laty = " + cam_lookat[1] + ", latz = " + cam_lookat[2]);
The next bit of code gives (almost) the desired result:
modelViewMatrix = modelViewMatrix_Vuforia.getData();
Matrix.translateM(modelViewMatrix, 0, 0, 0, kObjectScale);
Matrix.scaleM(modelViewMatrix, 0, kObjectScale, kObjectScale, kObjectScale);
line.setVerts(cam_position[0] / kObjectScale,
cam_position[1] / kObjectScale,
cam_position[2] / kObjectScale,
cam_position[0] / kObjectScale + 0.5f,
cam_position[1] / kObjectScale + 0.5f,
cam_position[2] / kObjectScale - 30);
This defines a line along the negative z-axis starting from a position vector equal to the camera position (which is calculated from the position of the actual physical device). Since the vector is normal to the screen, I have offset the X/Y so the normal can actually be visualised.
As you reposition your physical device, the normal moves with you. Great!
However, keeping the phone in the same position but tilting it forwards/backwards or turning left/right, the line does not maintain its central position within the camera's display. The effect I want is for the line to be rotated in world space as I tilt/turn, so that in camera/screen space the line appears normal to the screen and central to the physical display.
Note - you may wonder why I don't use something like:
line.setVerts(cam_position[0] / kObjectScale,
cam_position[1] / kObjectScale,
cam_position[2] / kObjectScale,
cam_position[0] / kObjectScale + cam_lookat[0] * 30,
cam_position[1] / kObjectScale + cam_lookat[1] * 30,
cam_position[2] / kObjectScale + cam_lookat[2] * 30);
The simple answer is I did try it and it doesn't work! All this achieves is that one end of the line stays where it is, whilst the other end points in the direction of the screen device normal. What we need is to rotate the line in world space based on angles obtained from cam_lookat, so that the line actually appears in front of the camera, in the centre and normal to the camera.
The next stage is to adjust the position of the line in world space based on angles calculated from the cam_lookat unit vector. These can be used to update the vertices of the line so that the normal always appears in the centre of the camera whichever way you orient the phone.
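For illustration, here is a hypothetical sketch (my own, untested) of pulling those angles out of the cam_lookat unit vector:
float yaw = (float) Math.atan2(cam_lookat[0], cam_lookat[2]); // rotation about the y-axis
float pitch = (float) Math.atan2(cam_lookat[1], Math.sqrt(cam_lookat[0] * cam_lookat[0] + cam_lookat[2] * cam_lookat[2])); // rotation about the x-axis
// These angles could then drive a rotation applied to the line's vertices in world space.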
I think this is the right way to go. I will update again if this works!
Ok, this was a tough nut to crack but success is sooo sweet!
One crucial part is that it uses a function from SampleMath to compute the start of an intersection line from the centre of the physical device to the target. We combine this with the camera normal vector to get the line we want!
If you want to dig deeper I'm sure you can unearth/work out the matrix math behind the getPointToPlaneLineStart function.
This is the code that works. It's not optimal, so you can probably tidy it up a bit/lot!
modelViewMatrix44F = Tool.convertPose2GLMatrix(trackableResult.getPose());
modelViewMatrixInverse44F = SampleMath.Matrix44FInverse(modelViewMatrix44F);
modelViewMatrixInverseTranspose44F = SampleMath.Matrix44FTranspose(modelViewMatrixInverse44F);
modelViewMatrix = modelViewMatrix44F.getData();
Matrix.translateM(modelViewMatrix, 0, 0, 0, kObjectScale);
Matrix.scaleM(modelViewMatrix, 0, kObjectScale, kObjectScale, kObjectScale);
modelViewMatrix44F.setData(modelViewMatrix);
projectionMatrix44F = vuforiaAppSession.getProjectionMatrix();
projectionMatrixInverse44F = SampleMath.Matrix44FInverse(projectionMatrix44F);
projectionMatrixInverseTranspose44F = SampleMath.Matrix44FTranspose(projectionMatrixInverse44F);
// work out camera position and direction
modelViewMatrixInverse = modelViewMatrixInverseTranspose44F.getData();
position = new float [] {0, 0, 0, 1}; // camera position
lookAt = new float [] {0, 0, 1, 0}; // camera direction
float [] rotate = new float [] {(float) Math.cos(angle_degrees * 0.017453292f), (float) Math.sin(angle_degrees * 0.017453292f), 0, 0}; // 0.017453292 = pi/180 (degrees to radians)
angle_degrees += 10;
if(angle_degrees > 359)
angle_degrees = 0;
float [] cam_position = new float[4];
float [] cam_lookat = new float[4];
float [] cam_rotate = new float[4];
Matrix.multiplyMV(cam_position, 0, modelViewMatrixInverse, 0, position, 0);
Matrix.multiplyMV(cam_lookat, 0, modelViewMatrixInverse, 0, lookAt, 0);
Matrix.multiplyMV(cam_rotate, 0, modelViewMatrixInverse, 0, rotate, 0);
Vec3F line_start = SampleMath.getPointToPlaneLineStart(projectionMatrixInverse44F, modelViewMatrix44F, 2*kObjectScale, 2*kObjectScale, new Vec2F(0, 0), new Vec3F(0, 0, 0), new Vec3F(0, 0, 1));
float x1 = line_start.getData()[0];
float y1 = line_start.getData()[1];
float z1 = line_start.getData()[2];
float x2 = x1 + cam_lookat[0] * 3 + cam_rotate[0] * 0.1f;
float y2 = y1 + cam_lookat[1] * 3 + cam_rotate[1] * 0.1f;
float z2 = z1 + cam_lookat[2] * 3 + cam_rotate[2] * 0.1f;
line.setVerts(x1, y1, z1, x2, y2, z2);
Note - I added the cam_rotate vector so that you can see the line; otherwise you can't see it - or at least you only see a speck on the screen - because it is defined to be perpendicular to the screen!
And it's Friday so I might go to the pub later to celebrate :-)
UPDATE
In fact the getPointToPlaneLineStart Java SampleMath method calls the following code (C++), so you can probably decipher the matrix math from it if you don't want to use the SampleMath class (cf. this post):
SampleMath::projectScreenPointToPlane(QCAR::Matrix44F inverseProjMatrix, QCAR::Matrix44F modelViewMatrix,
float contentScalingFactor, float screenWidth, float screenHeight,
QCAR::Vec2F point, QCAR::Vec3F planeCenter, QCAR::Vec3F planeNormal,
QCAR::Vec3F &intersection, QCAR::Vec3F &lineStart, QCAR::Vec3F &lineEnd)
{
// Window Coordinates to Normalized Device Coordinates
QCAR::VideoBackgroundConfig config = QCAR::Renderer::getInstance().getVideoBackgroundConfig();
float halfScreenWidth = screenHeight / 2.0f;
float halfScreenHeight = screenWidth / 2.0f;
float halfViewportWidth = config.mSize.data[0] / 2.0f;
float halfViewportHeight = config.mSize.data[1] / 2.0f;
float x = (contentScalingFactor * point.data[0] - halfScreenWidth) / halfViewportWidth;
float y = (contentScalingFactor * point.data[1] - halfScreenHeight) / halfViewportHeight * -1;
QCAR::Vec4F ndcNear(x, y, -1, 1);
QCAR::Vec4F ndcFar(x, y, 1, 1);
// Normalized Device Coordinates to Eye Coordinates
QCAR::Vec4F pointOnNearPlane = Vec4FTransform(ndcNear, inverseProjMatrix);
QCAR::Vec4F pointOnFarPlane = Vec4FTransform(ndcFar, inverseProjMatrix);
pointOnNearPlane = Vec4FDiv(pointOnNearPlane, pointOnNearPlane.data[3]);
pointOnFarPlane = Vec4FDiv(pointOnFarPlane, pointOnFarPlane.data[3]);
// Eye Coordinates to Object Coordinates
QCAR::Matrix44F inverseModelViewMatrix = Matrix44FInverse(modelViewMatrix);
QCAR::Vec4F nearWorld = Vec4FTransform(pointOnNearPlane, inverseModelViewMatrix);
QCAR::Vec4F farWorld = Vec4FTransform(pointOnFarPlane, inverseModelViewMatrix);
lineStart = QCAR::Vec3F(nearWorld.data[0], nearWorld.data[1], nearWorld.data[2]);
lineEnd = QCAR::Vec3F(farWorld.data[0], farWorld.data[1], farWorld.data[2]);
linePlaneIntersection(lineStart, lineEnd, planeCenter, planeNormal, intersection);
}
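For reference, here is a rough Java sketch of the same NDC-to-eye-to-object unprojection for the screen centre only, using android.opengl.Matrix (my own approximation, assuming the column-major matrices used above; it is not the SampleMath code itself):
float [] invProj = new float[16];
float [] invModelView = new float[16];
Matrix.invertM(invProj, 0, projectionMatrix44F.getData(), 0);
Matrix.invertM(invModelView, 0, modelViewMatrix44F.getData(), 0);
float [] ndcNear = {0, 0, -1, 1}; // screen centre on the near plane
float [] eye = new float[4];
float [] world = new float[4];
Matrix.multiplyMV(eye, 0, invProj, 0, ndcNear, 0);
float w = eye[3];
for (int i = 0; i < 4; i++) eye[i] /= w; // perspective divide
Matrix.multiplyMV(world, 0, invModelView, 0, eye, 0);
// world[0..2] corresponds to lineStart for the screen centre.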
I'm by no means an expert, but it sounds to me like this left/right inversion should be expected. In my mind, the object in world space looks along the positive z-axis towards the camera, while the camera looks along the negative z-axis towards the object. Such a transformation of the coordinate system is bound to invert one of the x/y-axes to keep the coordinate system consistent.
ELI5: When you're standing in front of someone and tell them "on the count of 3 we both step to the left", you won't be standing in front of each other anymore afterwards.
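If it helps, here is a small numeric illustration of what I mean (my own toy example, using android.opengl.Matrix since this is an Android question):
float [] turnAround = new float[16];
Matrix.setRotateM(turnAround, 0, 180, 0, 1, 0); // rotate 180 degrees about the y-axis, i.e. turn to face the other way
float [] right = {1, 0, 0, 0}; // "my right" as a direction vector
float [] result = new float[4];
Matrix.multiplyMV(result, 0, turnAround, 0, right, 0);
// result is approximately (-1, 0, 0, 0): what was "right" now points the other way, which is exactly the left/right flip you are seeing.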
I think it's unlikely to be a problem with the projection matrix, as you said. The projection matrix merely transforms the 3D objects onto your 2D screen. The camera intrinsics don't sound like the right place to me either. That matrix corrects for small distortions caused by the camera lens shape and placement, nothing as drastic as a left/right inversion.
Unfortunately I also don't know how to solve it right now, but what I had to say was too long for a comment. Sorry :-(
Related
I am trying to project a given 3D point onto the image plane. I have posted many questions regarding this and many people have helped me; I have also read many related links, but the projection still doesn't work correctly for me.
I have a 3D point (-455, -150, 0), where x is the depth axis, z is the upwards axis and y is the horizontal one. I have roll (rotation around the front-to-back axis, x), pitch (rotation around the side-to-side axis, y) and yaw (rotation around the vertical axis, z). I also have the position of the camera (x, y, z) = (-50, 0, 100), so I am doing the following.
First I go from world coordinates to camera coordinates using the extrinsic parameters:
double pi = 3.14159265358979323846;
double yp = 0.033716827630996704* pi / 180; //roll
double thet = 67.362312316894531* pi / 180; //pitch
double k = 89.7135009765625* pi / 180; //yaw
double rotxm[9] = { 1,0,0,0,cos(yp),-sin(yp),0,sin(yp),cos(yp) };
double rotym[9] = { cos(thet),0,sin(thet),0,1,0,-sin(thet),0,cos(thet) };
double rotzm[9] = { cos(k),-sin(k),0,sin(k),cos(k),0,0,0,1};
cv::Mat rotx = Mat{ 3,3,CV_64F,rotxm };
cv::Mat roty = Mat{ 3,3,CV_64F,rotym };
cv::Mat rotz = Mat{ 3,3,CV_64F,rotzm };
cv::Mat rotationm = rotz * roty * rotx; //rotation matrix
double point3d[3] = { -455, -150, 0 }; // plain array so the Mat constructor compiles
cv::Mat mpoint3(1, 3, CV_64F, point3d); //the 3D point location
mpoint3 = mpoint3 * rotationm; //rotation
double campos[3] = { -50, 0, 100 };
cv::Mat position(1, 3, CV_64F, campos); //the camera position
mpoint3 = mpoint3 - position; //translation
Now I want to move from camera coordinates to image coordinates.
The first solution, as I read from some sources, was:
Mat myimagepoint3 = mpoint3 * mycameraMatrix;
This didn't work.
The second solution was:
double fx = cameraMatrix.at<double>(0, 0);
double fy = cameraMatrix.at<double>(1, 1);
double cx1 = cameraMatrix.at<double>(0, 2);
double cy1= cameraMatrix.at<double>(1, 2);
double xt = mpoint3.at<double>(0) / mpoint3.at<double>(2);
double yt = mpoint3.at<double>(1) / mpoint3.at<double>(2);
double u = xt * fx + cx1;
double v = yt * fy + cy1;
but this also didn't work.
I also tried to use the OpenCV method fisheye::projectPoints (from world to image coordinates):
Mat recv2;
cv::Rodrigues(rotationm, recv2);
//inputpoints a vector contains one point which is the 3d world coordinate of the point
//outputpoints a vector to store the output point
cv::fisheye::projectPoints(inputpoints,outputpoints,recv2,position,mycameraMatrix,mydiscoff );
but this also didn't work
By "didn't work" I mean: I know where (in the image) the point should appear, but when I draw it, it is always somewhere else (not even close); sometimes I even get negative values.
Note: there are no syntax errors or exceptions, but I may have made typos while writing the code here.
So can anyone suggest what I am doing wrong?
For the last couple of weeks I've been working on developing a simple proof-of-concept application in which a 3D model is projected over a specific augmented reality marker (in my case I am using ArUco markers) on iOS (with Swift and Objective-C).
I calibrated an iPad camera with a specific fixed lens position and used that to estimate the pose of the AR marker (which from my debug analysis seems pretty accurate). The problem appears (surprise, surprise) when I try to use a SceneKit scene to project a model over the marker.
I am aware that the axes in OpenCV and SceneKit are different (Y and Z) and I have already applied this correction, as well as the row-order/column-order difference between the two libraries.
After constructing the projection matrix, I apply that same transform to the 3D model, and from my debug analysis the object seems to be translated to the desired position and with the desired rotation. The problem is that it never overlaps the specific image pixel position of the marker. I am using an AVCaptureVideoPreviewLayer to put the video in the background, which has the same bounds as my SceneKit view.
Has anyone got a clue why this happens? I tried playing with the camera's FOV, but with no real impact on the results.
Thank you all for your time.
EDIT1: I will post some of the code here to show what I am currently doing.
I have two subviews inside the main view, one which is a background AVCaptureVideoPreviewLayer and another which is a SceneKitView. Both have the same bounds as the main view.
At each frame I use an opencv wrapper which outputs the pose of each marker:
std::vector<int> ids;
std::vector<std::vector<cv::Point2f>> corners, rejected;
cv::aruco::detectMarkers(frame, _dictionary, corners, ids, _detectorParams, rejected);
if (ids.size() > 0 ){
cv::aruco::drawDetectedMarkers(frame, corners, ids);
cv::Mat rvecs, tvecs;
cv::aruco::estimatePoseSingleMarkers(corners, 2.6, _intrinsicMatrix, _distCoeffs, rvecs, tvecs);
// Let's protect ourselves against multiple markers
if (rvecs.total() > 1)
return;
_markerFound = true;
cv::Rodrigues(rvecs, _currentR);
_currentT = tvecs;
for (int row = 0; row < _currentR.rows; row++){
for (int col = 0; col < _currentR.cols; col++){
_currentExtrinsics.at<double>(row, col) = _currentR.at<double>(row, col);
}
_currentExtrinsics.at<double>(row, 3) = _currentT.at<double>(row);
}
_currentExtrinsics.at<double>(3,3) = 1;
std::cout << tvecs << std::endl;
// Convert coordinate systems of opencv to openGL (SceneKit)
// Note that in openCV z goes away the camera (in openGL goes into the camera)
// and y points down and on openGL point up
// Another note: openCV has a column order matrix representation, while SceneKit
// has a row order matrix, but we'll take care of it later.
cv::Mat cvToGl = cv::Mat::zeros(4, 4, CV_64F);
cvToGl.at<double>(0,0) = 1.0f;
cvToGl.at<double>(1,1) = -1.0f; // invert the y axis
cvToGl.at<double>(2,2) = -1.0f; // invert the z axis
cvToGl.at<double>(3,3) = 1.0f;
_currentExtrinsics = cvToGl * _currentExtrinsics;
cv::aruco::drawAxis(frame, _intrinsicMatrix, _distCoeffs, rvecs, tvecs, 5);
Then in each frame I convert the OpenCV matrix to an SCNMatrix4:
- (SCNMatrix4) transformToSceneKit:(cv::Mat&) openCVTransformation{
SCNMatrix4 mat = SCNMatrix4Identity;
// Transpose
openCVTransformation = openCVTransformation.t();
// copy the rotationRows
mat.m11 = (float) openCVTransformation.at<double>(0, 0);
mat.m12 = (float) openCVTransformation.at<double>(0, 1);
mat.m13 = (float) openCVTransformation.at<double>(0, 2);
mat.m14 = (float) openCVTransformation.at<double>(0, 3);
mat.m21 = (float)openCVTransformation.at<double>(1, 0);
mat.m22 = (float)openCVTransformation.at<double>(1, 1);
mat.m23 = (float)openCVTransformation.at<double>(1, 2);
mat.m24 = (float)openCVTransformation.at<double>(1, 3);
mat.m31 = (float)openCVTransformation.at<double>(2, 0);
mat.m32 = (float)openCVTransformation.at<double>(2, 1);
mat.m33 = (float)openCVTransformation.at<double>(2, 2);
mat.m34 = (float)openCVTransformation.at<double>(2, 3);
//copy the translation row
mat.m41 = (float)openCVTransformation.at<double>(3, 0);
mat.m42 = (float)openCVTransformation.at<double>(3, 1)+2.5;
mat.m43 = (float)openCVTransformation.at<double>(3, 2);
mat.m44 = (float)openCVTransformation.at<double>(3, 3);
return mat;
}
At each frame in which the AR marker is found I add a box to the scene and apply the transformation to the object node:
SCNBox *box = [SCNBox boxWithWidth:5.0 height:5.0 length:5.0 chamferRadius:0.0];
_boxNode = [SCNNode nodeWithGeometry:box];
if (found){
[self.delegate returnExtrinsicsMat:extrinsicMatrixOfTheMarker];
Mat R, T;
[self.delegate returnRotationMat:R];
[self.delegate returnTranslationMat:T];
SCNMatrix4 Transformation;
Transformation = [self transformToSceneKit:extrinsicMatrixOfTheMarker];
//_cameraNode.transform = SCNMatrix4Invert(Transformation);
[_sceneKitScene.rootNode addChildNode:_cameraNode];
//_cameraNode.camera.projectionTransform = SCNMatrix4Identity;
//_cameraNode.camera.zNear = 0.0;
_sceneKitView.pointOfView = _cameraNode;
_boxNode.transform = Transformation;
[_sceneKitScene.rootNode addChildNode:_boxNode];
//_boxNode.position = SCNVector3Make(Transformation.m41, Transformation.m42, Transformation.m43);
std::cout << (_boxNode.position.x) << " " << (_boxNode.position.y) << " " << (_boxNode.position.z) << std::endl << std::endl;
}
For example, if the translation vector is (-1, 5, 20), the object appears in the scene at position (-1, -5, -20), and the rotation is also correct. The problem is that it never appears in the correct position in the background image. I will add some images to show the result.
Does anyone know why this is happening?
I found the solution. Instead of applying the transform to the node of the object, I applied the inverted transformation matrix to the camera node. Then, for the camera perspective transform matrix, I applied the following matrix:
projection = SCNMatrix4Identity
projection.m11 = (2 * Float(cameraMatrix[0])) / -(ImageWidth * 0.5)
projection.m12 = (-2 * Float(cameraMatrix[1])) / (ImageWidth * 0.5)
projection.m13 = (width - (2 * Float(cameraMatrix[2]))) / (ImageWidth * 0.5)
projection.m22 = (2 * Float(cameraMatrix[4])) / (ImageHeight * 0.5)
projection.m23 = (-height + (2 * Float(cameraMatrix[5]))) / (ImageHeight * 0.5)
projection.m33 = (-far - near) / (far - near)
projection.m34 = (-2 * far * near) / (far - near)
projection.m43 = -1
projection.m44 = 0
where far and near are the z clipping planes.
I also had to correct the box initial position to center it on the marker.
I have a plus sign which I drew using lines. Now I want to rotate these lines so that it looks like an X sign. I tried rotating the lines, but with no luck.
self.shapeLayer1.transform = CATransform3DMakeRotation( (CGFloat) (GLKMathDegreesToRadians(-45)),0, 0, 0)
self.shapeLayer2.transform = CATransform3DMakeRotation( (CGFloat) (GLKMathDegreesToRadians(-45)), 0, 0, 0)
You can see that I put zeros in the x, y, z places. I tried different values but could not get the actual rotation. If somebody has any idea, please share it with me. Sometimes the lines move to another point and rotate.
It looks like the x, y, z parameters define the axis of rotation. Since we want to rotate in the x/y plane, your axis of rotation should be the z axis, i.e. 0, 0, 1.
self.shapeLayer1.transform = CATransform3DMakeRotation( (CGFloat) (GLKMathDegreesToRadians(45)),0, 0, 1)
self.shapeLayer2.transform = CATransform3DMakeRotation( (CGFloat) (GLKMathDegreesToRadians(-45)), 0, 0, 1)
Regarding the issue you're having with rotation around a point that isn't the centre of the line: if you're unable to redraw the line centred around 0,0,0, you can also use the following code to translate it to 0,0,0, rotate it, then translate it back to where you need it:
CGFloat tx = 1.0,ty = 2.0,tz = 0; // Modify these to the values you need
CATransform3D t = CATransform3DMakeTranslation (tx, ty, tz);
t = CATransform3DRotate(t,(CGFloat) (GLKMathDegreesToRadians(45)),0, 0, 1);
self.shapeLayer1.transform = CATransform3DTranslate(t,-tx,-ty,-tz);
t = CATransform3DMakeTranslation (tx, ty, tz);
t = CATransform3DRotate(t,(CGFloat) (GLKMathDegreesToRadians(-45)),0, 0, 1);
self.shapeLayer2.transform = CATransform3DTranslate(t,-tx,-ty,-tz);
Please help me with ray picking.
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(35.0f), aspect, 0.1f, 1000.0f);
GLKMatrix4 modelViewMatrix = _mainmodelViewMatrix;
// some transformations
_mainmodelViewMatrix = modelViewMatrix;
_modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
_normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
_modelViewProjectionMatrix and _normalMatrix are passed to the shader:
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);
and on touch end:
GLKVector4 normalisedVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
(2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1) , //1 - 2 * position.y / self.view.bounds.size.height,
-1,
1);
GLKMatrix4 inversedMatrix = GLKMatrix4Invert(_modelViewProjectionMatrix, nil);
GLKVector4 near_point = GLKMatrix4MultiplyVector4(inversedMatrix, normalisedVector);
How can I get the far point? And is my near_point correct or not?
Thanks!
it looks like you have
GLKVector4 normalisedVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
(2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1) ,
-1, 1);
(phew) to calculate the normalized device coordinates of the near point.
To get the far point, just swap the -1 z coordinate for a 1:
GLKVector4 normalisedFarVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
(2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1) ,
1, 1);
And apply the same inverse transform to that. That should do the trick.
Background: Under normal circumstances, the final coordinates received by the GL for turning a fragment into a pixel are what are called normalised device coordinates. These lie within a cube whose corners are at (-1,-1,-1) and (1,1,1). So the center of the screen is (0,0,z), the top left corner is (-1,1,z) and so on. The coordinates are transformed so that a point lying on the near plane will have a z coordinate of -1, and one lying on the far plane will have a z coordinate of 1. These are the numbers that are used for depth testing, if you have it turned on.
So, as you might guess, when you want to convert a screen location back to a point in 3D space, you actually have a number of points to choose from - a line, in fact, stretching from the near plane to the far plane. In normalised device coordinates, this is the line stretching from z=-1 to z=1. So the process goes like this:
convert the x and y coordinates into normalised device coordinates x' and y'
For each of z' = 1 and z' = -1:
combine these with z' to form the full normalised device coordinates (see here for the formula)
apply the inverse of the projection matrix
apply the inverse of the model/view matrix (as it is before any per-object transformations)
The results are the two coordinates of your line in 3D space.
We can draw a line from near_point to far_point.
GLKVector4 normalisedVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
(2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1),
-1,
1);
GLKMatrix4 inversedMatrix = GLKMatrix4Invert(_modelViewProjectionMatrix, nil);
GLKVector4 near_point = GLKMatrix4MultiplyVector4(inversedMatrix, normalisedVector);
near_point.v[3] = 1.0/near_point.v[3];
near_point = GLKVector4Make(near_point.v[0]*near_point.v[3], near_point.v[1]*near_point.v[3], near_point.v[2]*near_point.v[3], 1);
normalisedVector.z = 1.0;
GLKVector4 far_point = GLKMatrix4MultiplyVector4(inversedMatrix, normalisedVector);
far_point.v[3] = 1.0/far_point.v[3];
far_point = GLKVector4Make(far_point.v[0]*far_point.v[3], far_point.v[1]*far_point.v[3], far_point.v[2]*far_point.v[3], 1);
I have a texture that follows a user's finger in GLKit. I calculate the angle to draw it at (in radians) using the arctangent between the two points.
Part of the trick here is to keep the object centered underneath the finger. So I have introduced the idea of an anchor point so that things can be drawn relative to their origin or center. My goal is to move the sprite into place and then rotate it. I have the following code in my renderer.
// lets adjust for our location based on our anchor point.
GLKVector2 adjustment = GLKVector2Make(self.spriteSize.width * self.anchorPoint.x,
self.spriteSize.height * self.anchorPoint.y);
GLKVector2 adjustedPosition = GLKVector2Subtract(self.position, adjustment);
GLKMatrix4 modelMatrix = GLKMatrix4Multiply(GLKMatrix4MakeTranslation(adjustedPosition.x, adjustedPosition.y, 1.0), GLKMatrix4MakeScale(adjustedScale.x, adjustedScale.y, 1));
modelMatrix = GLKMatrix4Rotate(modelMatrix, self.rotation, 0, 0, 1);
effect.transform.modelviewMatrix = modelMatrix;
effect.transform.projectionMatrix = scene.projection;
One other note is that my sprite is on a texture atlas. If I take out my rotation, my sprite draws correctly centered under my finger. My projection matrix is GLKMatrix4MakeOrtho(0, CGRectGetWidth(self.frame), CGRectGetHeight(self.frame), 0, 1, -1); so it matches UIKit and the view it's embedded in.
I ended up having to add a little more math to calculate additional offsets before I rotate.
// lets adjust for our location based on our anchor point.
GLKVector2 adjustment = GLKVector2Make(self.spriteSize.width * self.anchorPoint.x,
self.spriteSize.height * self.anchorPoint.y);
// we need a further adjustment so we can calculate the offset based on our anchor point in our image.
GLKVector2 angleAdjustment;
angleAdjustment.x = adjustment.x * cos(self.rotation) - adjustment.y * sin(self.rotation);
angleAdjustment.y = adjustment.x * sin(self.rotation) + adjustment.y * cos(self.rotation);
// now create our real position.
GLKVector2 adjustedPosition = GLKVector2Subtract(self.position, angleAdjustment);
GLKMatrix4 modelMatrix = GLKMatrix4Multiply(GLKMatrix4MakeTranslation(adjustedPosition.x, adjustedPosition.y, 1.0), GLKMatrix4MakeScale(adjustedScale.x, adjustedScale.y, 1));
modelMatrix = GLKMatrix4Rotate(modelMatrix, self.rotation, 0, 0, 1);
This creates an additional adjustment based on where in the image we want to rotate around, and then transforms based on that. This works like a charm.
Here is similar code I used to rotate a sprite around its center.
First you move it to the position, then you rotate it, then you move it back by half the sprite size:
- (GLKMatrix4) modelMatrix {
GLKMatrix4 modelMatrix = GLKMatrix4Identity;
float radians = GLKMathDegreesToRadians(self.rotation);
modelMatrix = GLKMatrix4Multiply(
GLKMatrix4Translate(modelMatrix, self.position.x , self.position.y , 0),
GLKMatrix4MakeRotation(radians, 0, 0, 1));
modelMatrix = GLKMatrix4Translate(modelMatrix, -self.contentSize.height/2, -self.contentSize.width/2 , 0);
return modelMatrix;
}