Using this:
local W, H = 100, 50

function love.draw()
    love.graphics.translate(love.graphics.getWidth()/2, love.graphics.getHeight()/2)
    for i = 1, 360 do
        local I = math.rad(i)
        local x, y = math.cos(I)*W, math.sin(I)*H
        love.graphics.line(0, 0, x, y)
    end
end
I can connect a line between the center of an ellipse (with length W and height H) and its edge. How do you 'rotate' the ellipse around its center by a parameter R? I know you can sort of do it with love.graphics.ellipse and love.graphics.rotate, but is there any way I can get the coordinates of the points on a rotated ellipse?
This is a trigonometry problem; here is how basic 2D rotation works. Imagine a point located at (x, y). If you want to rotate that point around the origin (in your case (0, 0)) by the angle θ, the new point (x1, y1) is obtained by the following transformation:
x1 = x·cosθ − y·sinθ
y1 = y·cosθ + x·sinθ
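As a quick sanity check (my own worked example, not part of the original answer): rotating the point (W, 0) by θ = 90° gives x1 = W·cos 90° − 0·sin 90° = 0 and y1 = 0·cos 90° + W·sin 90° = W, so a point on the x-axis ends up on the y-axis, exactly as you would expect from a quarter turn.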
Building on your example, I added a second ellipse drawn after applying the rotation:
function love.draw()
    love.graphics.translate(love.graphics.getWidth()/2, love.graphics.getHeight()/2)
    for i = 1, 360, 5 do
        local I = math.rad(i)
        local x, y = math.cos(I)*W, math.sin(I)*H
        love.graphics.setColor(0xff, 0, 0) -- red
        love.graphics.line(0, 0, x, y)
    end
    -- rotate by angle r = 90 degree
    local r = math.rad(90)
    for i = 1, 360, 5 do
        local I = math.rad(i)
        -- original coordinates
        local x = math.cos(I) * W
        local y = math.sin(I) * H
        -- transform coordinates
        local x1 = x * math.cos(r) - y * math.sin(r)
        local y1 = y * math.cos(r) + x * math.sin(r)
        love.graphics.setColor(0, 0, 0xff) -- blue
        love.graphics.line(0, 0, x1, y1)
    end
end
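(For what it's worth, calling love.graphics.rotate(r) before the second loop would apply the same rotation through LÖVE's transform stack; the manual version above is useful precisely because you asked for the coordinates of the rotated points themselves.)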
Working on a script to convert DXF to PNG, I need to draw arcs that are defined by only three parameters: the start point of the arc, the end point of the arc, and the bulge distance.
I have checked both OpenCV and PIL, and they require a start and end angle to draw this arc. I can find those angles using some geometry, but I would like to know if there is any other solution out there that I am missing.
You have three pieces of information defining your circular arc: two points on the circle (defining a chord of that circle) and the bulge distance (called the sagitta of the circular arc).
See the following graphic:
Here s is the sagitta, l is half the chord length, and r is of course the radius. Other important positions that aren't marked are the points at which the chord intersects the circle, the point where the sagitta intersects the circle, and the center of the circle, from which the radius extends.
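As an aside (this relationship isn't used in the answer below, but it's a standard one): the radius can be computed directly from the half-chord length and the sagitta via the intersecting-chords theorem, r = s/2 + l²/(2·s). The code below takes a more programmatic route and solves for the circle from three points instead.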
For OpenCV's ellipse() function, we would use the following prototype:
cv2.ellipse(img, center, axes, angle, startAngle, endAngle, color[, thickness[, lineType[, shift]]]) → img
where most of the parameters are described by the following graphic:
Since we're drawing a circular arc, not an elliptical one, the major and minor axes have the same size and rotating it makes no difference, so the axes are just (radius, radius) and the angle can be set to zero to simplify things. Then the only parameters we need are the center of the circle, the radius, and the start and end angles of the drawing, corresponding to the endpoints of the chord. The angles are easy to calculate (they're just angles on the circle), so ultimately we need to find the radius and center of the circle.
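Ignoring sub-pixel accuracy for a moment, the call for a circular arc then collapses to roughly the following shape (a sketch with made-up placeholder values; the real center, radius and angles are computed below):
import cv2
import numpy as np

img = np.zeros((500, 500), dtype=np.uint8)
center = (250, 200)               # placeholder circle center (integer pixels)
radius = 120                      # placeholder radius
start_angle, end_angle = 30, 150  # placeholder chord-endpoint angles in degrees
cv2.ellipse(img, center, (radius, radius), 0, start_angle, end_angle, 255, 2)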
Finding the radius and the center is the same as finding the equation of the circle, so there are a ton of ways to do it. But since we're programming here, the easiest way IMO is to define a third point on the circle by where the sagitta touches the circle, and then solve for the circle from those three points.
So first we'll need to get the midpoint of the chord, get a line perpendicular to the chord at that midpoint, and extend it by the length of the sagitta to reach that third point, but that's easy enough. I'll start from pt1 = (x1, y1) and pt2 = (x2, y2) as my two points on the circle, with sagitta as the 'bulge depth' (i.e. the parameters you have):
import numpy as np

# extract point coordinates
x1, y1 = pt1
x2, y2 = pt2
# find normal from midpoint, follow by length sagitta
n = np.array([y2 - y1, x1 - x2])
n_dist = np.sqrt(np.sum(n**2))
if np.isclose(n_dist, 0):
    # catch error here, d(pt1, pt2) ~ 0
    print('Error: The distance between pt1 and pt2 is too small.')
n = n/n_dist
x3, y3 = (np.array(pt1) + np.array(pt2))/2 + sagitta * n
Now we've got the third point on the circle. Note that the sagitta is just a length, so it could go in either direction: if sagitta were negative, it'd go one way from the chord, and if it were positive, it'd go the other way. I'm not sure if that matches how the distance is given to you or not.
Then we can simply use determinants to solve for the radius and center.
# calculate the circle from three points
# see https://math.stackexchange.com/a/1460096/246399
A = np.array([
    [x1**2 + y1**2, x1, y1, 1],
    [x2**2 + y2**2, x2, y2, 1],
    [x3**2 + y3**2, x3, y3, 1]])
M11 = np.linalg.det(A[:, (1, 2, 3)])
M12 = np.linalg.det(A[:, (0, 2, 3)])
M13 = np.linalg.det(A[:, (0, 1, 3)])
M14 = np.linalg.det(A[:, (0, 1, 2)])
if np.isclose(M11, 0):
    # catch error here, the points are collinear (sagitta ~ 0)
    print('Error: The third point is collinear.')
cx = 0.5 * M12/M11
cy = -0.5 * M13/M11
radius = np.sqrt(cx**2 + cy**2 + M14/M11)
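As a quick sanity check of these determinant formulas (my own example, not part of the original answer), three points on the unit circle should recover a center of (0, 0) and a radius of 1:
import numpy as np

p = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])  # three points on the unit circle
A = np.hstack([np.sum(p**2, axis=1, keepdims=True), p, np.ones((3, 1))])
M11 = np.linalg.det(A[:, (1, 2, 3)])
M12 = np.linalg.det(A[:, (0, 2, 3)])
M13 = np.linalg.det(A[:, (0, 1, 3)])
M14 = np.linalg.det(A[:, (0, 1, 2)])
print(0.5 * M12/M11, -0.5 * M13/M11)  # center: approximately (0.0, 0.0)
print(np.sqrt((0.5 * M12/M11)**2 + (0.5 * M13/M11)**2 + M14/M11))  # radius: approximately 1.0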
Then lastly, as we need the starting and ending angles to draw the ellipse with OpenCV, we can just use atan2() to get the angles from the center to the initial points:
# calculate angles of pt1 and pt2 from center of circle
pt1_angle = 180*np.arctan2(y1 - cy, x1 - cx)/np.pi
pt2_angle = 180*np.arctan2(y2 - cy, x2 - cx)/np.pi
So I packaged this all up into one function:
def convert_arc(pt1, pt2, sagitta):
    # extract point coordinates
    x1, y1 = pt1
    x2, y2 = pt2
    # find normal from midpoint, follow by length sagitta
    n = np.array([y2 - y1, x1 - x2])
    n_dist = np.sqrt(np.sum(n**2))
    if np.isclose(n_dist, 0):
        # catch error here, d(pt1, pt2) ~ 0
        print('Error: The distance between pt1 and pt2 is too small.')
    n = n/n_dist
    x3, y3 = (np.array(pt1) + np.array(pt2))/2 + sagitta * n
    # calculate the circle from three points
    # see https://math.stackexchange.com/a/1460096/246399
    A = np.array([
        [x1**2 + y1**2, x1, y1, 1],
        [x2**2 + y2**2, x2, y2, 1],
        [x3**2 + y3**2, x3, y3, 1]])
    M11 = np.linalg.det(A[:, (1, 2, 3)])
    M12 = np.linalg.det(A[:, (0, 2, 3)])
    M13 = np.linalg.det(A[:, (0, 1, 3)])
    M14 = np.linalg.det(A[:, (0, 1, 2)])
    if np.isclose(M11, 0):
        # catch error here, the points are collinear (sagitta ~ 0)
        print('Error: The third point is collinear.')
    cx = 0.5 * M12/M11
    cy = -0.5 * M13/M11
    radius = np.sqrt(cx**2 + cy**2 + M14/M11)
    # calculate angles of pt1 and pt2 from center of circle
    pt1_angle = 180*np.arctan2(y1 - cy, x1 - cx)/np.pi
    pt2_angle = 180*np.arctan2(y2 - cy, x2 - cx)/np.pi
    return (cx, cy), radius, pt1_angle, pt2_angle
With these values you can then draw the arc with OpenCV's ellipse() function. However, these are all floating-point values. ellipse() does let you plot floating-point values via the shift argument, but if you're not familiar with it, it's a little weird, so instead we can borrow the solution from this answer to define a function:
import cv2

def draw_ellipse(
        img, center, axes, angle,
        startAngle, endAngle, color,
        thickness=1, lineType=cv2.LINE_AA, shift=10):
    # uses the shift to accurately get sub-pixel resolution for arc
    # taken from https://stackoverflow.com/a/44892317/5087436
    center = (
        int(round(center[0] * 2**shift)),
        int(round(center[1] * 2**shift))
    )
    axes = (
        int(round(axes[0] * 2**shift)),
        int(round(axes[1] * 2**shift))
    )
    return cv2.ellipse(
        img, center, axes, angle,
        startAngle, endAngle, color,
        thickness, lineType, shift)
Then to use these functions it's as simple as:
img = np.zeros((500, 500), dtype=np.uint8)
pt1 = (50, 50)
pt2 = (350, 250)
sagitta = 50
center, radius, start_angle, end_angle = convert_arc(pt1, pt2, sagitta)
axes = (radius, radius)
draw_ellipse(img, center, axes, 0, start_angle, end_angle, 255)
cv2.imshow('', img)
cv2.waitKey()
And again note that a negative sagitta gives an arc the other direction:
center, radius, start_angle, end_angle = convert_arc(pt1, pt2, sagitta)
axes = (radius, radius)
draw_ellipse(img, center, axes, 0, start_angle, end_angle, 255)
center, radius, start_angle, end_angle = convert_arc(pt1, pt2, -sagitta)
axes = (radius, radius)
draw_ellipse(img, center, axes, 0, start_angle, end_angle, 127)
cv2.imshow('', img)
cv2.waitKey()
Lastly just to expand, I broke out two error cases in the convert_arc() function. First:
if np.isclose(n_dist, 0):
    # catch error here, d(pt1, pt2) ~ 0
    print('Error: The distance between pt1 and pt2 is too small.')
The error catch here is because we need to get a unit vector, so we need to divide by the length which can't be zero. Of course, this will only happen if pt1 and pt2 are the same point, so you can just check that they're unique at the top of the function instead of checking here.
Second:
if np.isclose(M11, 0):
    # catch error here, the points are collinear (sagitta ~ 0)
    print('Error: The third point is collinear.')
This only happens if the three points are collinear, which in turn only happens if the sagitta is 0. So again, you can check this at the top of your function (and maybe say, OK, if it is 0, then just draw a line from pt1 to pt2, or whatever you want to do).
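If you prefer to fail fast, a minimal sketch of those up-front checks might look like the following (a hypothetical wrapper of my own, not part of the original answer):
def convert_arc_checked(pt1, pt2, sagitta):
    # hypothetical wrapper that moves both checks to the top of the call
    if np.allclose(pt1, pt2):
        raise ValueError('pt1 and pt2 must be distinct points')
    if np.isclose(sagitta, 0):
        # degenerate "arc": you could instead just draw a straight line from pt1 to pt2
        raise ValueError('sagitta must be non-zero')
    return convert_arc(pt1, pt2, sagitta)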
I'm using Vuforia on Android for AR development. We can obtain the modelViewMatrix using
Matrix44F modelViewMatrix_Vuforia = Tool.convertPose2GLMatrix(trackableResult.getPose());
This works great. Any geometry multiplied by this matrix and then by the projection matrix shows up on the screen as expected, with (0,0,0) at the centre of the tracked target.
But what I also want to do is to simultaneously draw geometry relative to the user's device, so to achieve this we can work out the inverse modelViewMatrix using:
Matrix44F invTranspMV = SampleMath.Matrix44FTranspose(modelViewMatrix_Vuforia);
Matrix44F inverseMV = SampleMath.Matrix44FInverse(invTranspMV);
modelViewMatrixInverse = inverseMV.getData();
This works pretty well; e.g. if I draw a cube using this matrix, then when I tilt my phone up and down, the cube tilts up and down correctly, but when I turn left and right there's a problem. Turning left causes the cube to turn the wrong way, as if I'm looking at the right-hand side of it, and similarly for turning right. What should happen is that the cube appears "stuck" to the screen, i.e. whichever way I turn, I should always see the same face "stuck" to the screen.
I think the problem might be to do with the Vuforia projection matrix, and I am going to create my own projection matrix (using the guidance here) to experiment with different settings. As this post says, it could be to do with the intrinsic camera calibration of a specific device.
Am I on the right track? Any ideas what might be wrong and how I might solve this?
UPDATE
I no longer think it's the projection matrix (based on experimentation and peedee's answer/comment below).
Having looked at this post I think I've made some progress. I am now using the following code:
Matrix44F modelViewMatrix_Vuforia = Tool.convertPose2GLMatrix(trackableResult.getPose());
Matrix44F inverseMV = SampleMath.Matrix44FInverse(modelViewMatrix_Vuforia);
Matrix44F invTranspMV = SampleMath.Matrix44FTranspose(inverseMV);
modelViewMatrixInverse = invTranspMV.getData();
float [] position = {0, 0, 0, 1};
float [] lookAt = {0, 0, 1, 0};
float [] cam_position = new float[16];
float [] cam_lookat = new float[16];
Matrix.multiplyMV(cam_position, 0, modelViewMatrixInverse, 0, position, 0);
Matrix.multiplyMV(cam_lookat, 0, modelViewMatrixInverse, 0, lookAt, 0);
Log.v("QCV", "posx = " + cam_position[0] + ", posy = " + cam_position[1] + ", posz = " + cam_position[2]);
Log.v("QCV", "latx = " + cam_lookat[0] + ", laty = " + cam_lookat[1] + ", latz = " + cam_lookat[2]);
This successfully returns the camera position, and the normal to the camera as you move the camera about the target. I think I should be able to use this to project geometry in the way I want. Will update later if it works.
UPDATE2
Ok, some progress made. I'm now using the following code. It does the same thing as the previous code block, but uses the Matrix class instead of the SampleMath class.
float [] temp = new float[16];
temp = modelViewMatrix_Vuforia.getData();
Matrix.invertM(modelViewMatrixInverse, 0, temp, 0);
float [] position = {0, 0, 0, 1};
float [] lookAt = {0, 0, 1, 0};
float [] cam_position = new float[16];
float [] cam_lookat = new float[16];
Matrix.multiplyMV(cam_position, 0, modelViewMatrixInverse, 0, position, 0);
Matrix.multiplyMV(cam_lookat, 0, modelViewMatrixInverse, 0, lookAt, 0);
Log.v("QCV", "posx = " + cam_position[0] / kObjectScale + ", posy = " + cam_position[1] / kObjectScale + ", posz = " + cam_position[2] / kObjectScale);
Log.v("QCV", "latx = " + cam_lookat[0] + ", laty = " + cam_lookat[1] + ", latz = " + cam_lookat[2]);
The next bit of code gives (almost) the desired result:
modelViewMatrix = modelViewMatrix_Vuforia.getData();
Matrix.translateM(modelViewMatrix, 0, 0, 0, kObjectScale);
Matrix.scaleM(modelViewMatrix, 0, kObjectScale, kObjectScale, kObjectScale);
line.setVerts(cam_position[0] / kObjectScale,
cam_position[1] / kObjectScale,
cam_position[2] / kObjectScale,
cam_position[0] / kObjectScale + 0.5f,
cam_position[1] / kObjectScale + 0.5f,
cam_position[2] / kObjectScale - 30);
This defines a line along the negative z-axis starting from a position vector equal to the camera position (which is calculated from the position of the actual physical device). Since the vector is a normal, I have offset the X/Y so the normal can actually be visualised.
As you reposition your physical device, the normal moves with you. Great!
However, keeping the phone in the same position but tilting it forwards/backwards or turning it left/right, the line does not maintain its central position within the camera's display. The effect I want is for the line to be rotated in world space as I tilt/turn, so that in camera/screen space the line appears normal and central to the physical display.
Note - you may wonder why I don't use something like:
line.setVerts(cam_position[0] / kObjectScale,
cam_position[1] / kObjectScale,
cam_position[2] / kObjectScale,
cam_position[0] / kObjectScale + cam_lookat[0] * 30,
cam_position[1] / kObjectScale + cam_lookat[1] * 30,
cam_position[2] / kObjectScale + cam_lookat[2] * 30);
The simple answer is that I did try it and it doesn't work! All this achieves is that one end of the line stays where it is, whilst the other end points in the direction of the device's screen normal. What we need is to rotate the line in world space based on angles obtained from cam_lookat, so that the line actually appears in front of the camera, centred and normal to the camera.
The next stage is to adjust the position of the line in world space based on angles calculated from the cam_lookat unit vector. These can be used to update the vertices of the line so that the normal always appears in the centre of the camera whichever way you orient the phone.
I think this is the right way to go. I will update again if this works!
Ok, this was a tough nut to crack but success is sooo sweet!
One crucial part is that it uses a function from SampleMath to compute the start of an intersection line from the centre of the physical device to the target. We combine this with the camera normal vector to get the line we want!
If you want to dig deeper, I'm sure you can unearth/work out the matrix math behind the getPointToPlaneLineStart function.
This is the code that works. It's not optimal so you can probably tidy it up a bit/lot!
modelViewMatrix44F = Tool.convertPose2GLMatrix(trackableResult.getPose());
modelViewMatrixInverse44F = SampleMath.Matrix44FInverse(modelViewMatrix44F);
modelViewMatrixInverseTranspose44F = SampleMath.Matrix44FTranspose(modelViewMatrix44F);
modelViewMatrix = modelViewMatrix44F.getData();
Matrix.translateM(modelViewMatrix, 0, 0, 0, kObjectScale);
Matrix.scaleM(modelViewMatrix, 0, kObjectScale, kObjectScale, kObjectScale);
modelViewMatrix44F.setData(modelViewMatrix);
projectionMatrix44F = vuforiaAppSession.getProjectionMatrix();
projectionMatrixInverse44F = SampleMath.Matrix44FInverse(projectionMatrix44F);
projectionMatrixInverseTranspose44F = SampleMath.Matrix44FTranspose(projectionMatrixInverse44F);
// work out camera position and direction
modelViewMatrixInverse = modelViewMatrixInverseTranspose44F.getData();
position = new float [] {0, 0, 0, 1}; // camera position
lookAt = new float [] {0, 0, 1, 0}; // camera direction
float [] rotate = new float [] {(float) Math.cos(angle_degrees * 0.017453292f),
                                (float) Math.sin(angle_degrees * 0.017453292f), 0, 0};
angle_degrees += 10;
if (angle_degrees > 359)
    angle_degrees = 0;
float [] cam_position = new float[16];
float [] cam_lookat = new float[16];
float [] cam_rotate = new float[16];
Matrix.multiplyMV(cam_position, 0, modelViewMatrixInverse, 0, position, 0);
Matrix.multiplyMV(cam_lookat, 0, modelViewMatrixInverse, 0, lookAt, 0);
Matrix.multiplyMV(cam_rotate, 0, modelViewMatrixInverse, 0, rotate, 0);
Vec3F line_start = SampleMath.getPointToPlaneLineStart(projectionMatrixInverse44F, modelViewMatrix44F, 2*kObjectScale, 2*kObjectScale, new Vec2F(0, 0), new Vec3F(0, 0, 0), new Vec3F(0, 0, 1));
float x1 = line_start.getData()[0];
float y1 = line_start.getData()[1];
float z1 = line_start.getData()[2];
float x2 = x1 + cam_lookat[0] * 3 + cam_rotate[0] * 0.1f;
float y2 = y1 + cam_lookat[1] * 3 + cam_rotate[1] * 0.1f;
float z2 = z1 + cam_lookat[2] * 3 + cam_rotate[2] * 0.1f;
line.setVerts(x1, y1, z1, x2, y2, z2);
Note - I added the cam_rotate vector so that you can actually see the line; otherwise you can't see it, or at least you only see a speck on the screen, because it is defined to be perpendicular to the screen!
And it's Friday so I might go to the pub later to celebrate :-)
UPDATE
In fact the getPointToPlaneLineStart Java SampleMath method calls the following (C++) code, so you can probably decipher the matrix math from it if you don't want to use the SampleMath class (cf. this post):
void SampleMath::projectScreenPointToPlane(QCAR::Matrix44F inverseProjMatrix, QCAR::Matrix44F modelViewMatrix,
                                           float contentScalingFactor, float screenWidth, float screenHeight,
                                           QCAR::Vec2F point, QCAR::Vec3F planeCenter, QCAR::Vec3F planeNormal,
                                           QCAR::Vec3F &intersection, QCAR::Vec3F &lineStart, QCAR::Vec3F &lineEnd)
{
    // Window Coordinates to Normalized Device Coordinates
    QCAR::VideoBackgroundConfig config = QCAR::Renderer::getInstance().getVideoBackgroundConfig();

    float halfScreenWidth = screenHeight / 2.0f;
    float halfScreenHeight = screenWidth / 2.0f;

    float halfViewportWidth = config.mSize.data[0] / 2.0f;
    float halfViewportHeight = config.mSize.data[1] / 2.0f;

    float x = (contentScalingFactor * point.data[0] - halfScreenWidth) / halfViewportWidth;
    float y = (contentScalingFactor * point.data[1] - halfScreenHeight) / halfViewportHeight * -1;

    QCAR::Vec4F ndcNear(x, y, -1, 1);
    QCAR::Vec4F ndcFar(x, y, 1, 1);

    // Normalized Device Coordinates to Eye Coordinates
    QCAR::Vec4F pointOnNearPlane = Vec4FTransform(ndcNear, inverseProjMatrix);
    QCAR::Vec4F pointOnFarPlane = Vec4FTransform(ndcFar, inverseProjMatrix);
    pointOnNearPlane = Vec4FDiv(pointOnNearPlane, pointOnNearPlane.data[3]);
    pointOnFarPlane = Vec4FDiv(pointOnFarPlane, pointOnFarPlane.data[3]);

    // Eye Coordinates to Object Coordinates
    QCAR::Matrix44F inverseModelViewMatrix = Matrix44FInverse(modelViewMatrix);

    QCAR::Vec4F nearWorld = Vec4FTransform(pointOnNearPlane, inverseModelViewMatrix);
    QCAR::Vec4F farWorld = Vec4FTransform(pointOnFarPlane, inverseModelViewMatrix);

    lineStart = QCAR::Vec3F(nearWorld.data[0], nearWorld.data[1], nearWorld.data[2]);
    lineEnd = QCAR::Vec3F(farWorld.data[0], farWorld.data[1], farWorld.data[2]);

    linePlaneIntersection(lineStart, lineEnd, planeCenter, planeNormal, intersection);
}
I'm by no means an expert, but it sounds to me like this left/right inversion should be expected. In my mind, the object in world space looks in the direction of the positive z-axis, towards the camera, while camera space looks in the direction of the negative z-axis, facing the object. Such a change of coordinate system is bound to invert one of the x/y axes to keep the coordinate system consistent.
ELI5: When you're standing in front of someone and tell them "on the count of 3 we both step to the left", you won't be standing in front of each other anymore afterwards.
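To make that a little more concrete (my own illustration, not from the original answer): if the target's z-axis points towards the camera and the camera looks along its own negative z-axis, the rotation relating the two frames is roughly a 180° turn about the y-axis, i.e. (x, y, z) → (−x, y, −z). The y (up/down) direction is untouched, which is why tilting behaves correctly, but x is mirrored, which is exactly the left/right swap described.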
I think it's unlikely to be a problem with the projection matrix, as you said. The projection matrix merely transforms the 3D objects onto your 2D screen. The camera intrinsics don't sound like the right place to me either: that matrix corrects for small distortions caused by the camera lens shape and placement, nothing as drastic as a left/right inversion.
Unfortunately I also don't know how to solve it right now, but what I had to say was too long for a comment. Sorry :-(
I set up a 2D OpenGL view in iOS with the top left as the origin and the bottom right as (768, 1366).
My projection matrix is setup like this:
projectionMtx = GLKMatrix4MakeOrtho( 0, 768, 1366, 0, 10, -10);
When I get a touch event, the coordinates are in physical coordinates, and I need to convert them into my own logical coordinates, so I reasoned like this:
Since V_physical = M_projection * V_logical,
it follows that V_logical = M_projection_inverse * V_physical,
and I implemented the code like this:
- (void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
    UITouch* touch = [[event allTouches] anyObject];
    CGPoint location = [touch locationInView:self.view];

    GLKVector4 locationVector = {
        (float)location.x,
        (float)location.y,
        0,
        0,
    };

    GLKVector4 result = GLKMatrix4MultiplyVector4(GLKMatrix4Invert(projectionMtx, nullptr), locationVector);

    NSLog(@"touch %.2f %.2f", location.x, location.y);
    NSLog(@"vector %.2f %.2f", result.v[0], result.v[1]);
}
However, this is what I got from testing:
touch 367.00 662.00
vector 140928.00 -452151.28
Is my math wrong or my code wrong?
You have some mix up with your coordinate systems. The projection matrix maps its input coordinates (which in a full 3D pipeline are typically called "eye coordinates") to clip coordinates. For a parallel projection, clip coordinates are the same as normalized device coordinates (NDC), which have a range of [-1, 1] in x and y direction.
This means that your ortho projection:
projectionMtx = GLKMatrix4MakeOrtho( 0, 768, 1366, 0, 10, -10);
maps an x range of [0, 768] to [-1, 1], and a y range of [1366, 0] to [-1, 1]. The resulting mapping done by the matrix is:
xNdc = (2.0 / 768.0) * xEye - 1.0
yNdc = (2.0 / -1366.0) * yEye + 1.0
The inverse of this is:
xEye = (768.0 / 2.0) * (xNdc + 1.0)
yEye = (-1366.0 / 2.0) * (yNdc - 1.0)
Applying this inverse transformation gives:
(768.0 / 2.0) * (367.0 + 1.0) = 141312.0
(-1366.0 / 2.0) * (662.0 - 1.0) = -451463
For reasons I can't explain at the moment, this is slightly off from what you got (it looks like an off-by-one difference in the input), but it's very similar.
This is obviously not meaningful. To use the inverse projection transformation, your input coordinates should be in the range [-1, 1].
In your use case, since you set up the projection transformation to operate on coordinates in pixels, and you receive touch input that is also in pixels, you really don't have to do anything at all to get the touch input into your OpenGL coordinate system; they are already in the same coordinate system (pixels).
If you use any other projection, you would first map your touch coordinates to the [-1, 1] range, and then apply the inverse projection transformation. The coordinate mapping would use the same equations as the ones above for mapping eye coordinates to NDC.
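Concretely (my own summary of that two-step recipe, not something stated in the original answer), for a touch at pixel (x, y) on a view of size width x height:
xNdc = 2.0 * x / width - 1.0
yNdc = 1.0 - 2.0 * y / height
and then V_eye = M_projection_inverse * (xNdc, yNdc, zNdc, 1), dividing the result by its w component if the projection is a perspective one.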
Please help me with ray picking.
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(35.0f), aspect, 0.1f, 1000.0f);
GLKMatrix4 modelViewMatrix = _mainmodelViewMatrix;
// some transformations
_mainmodelViewMatrix = modelViewMatrix;
_modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
_normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
_modelViewProjectionMatrix and _normalMatrix are passed to the shader:
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);
and in the touch-end handler:
GLKVector4 normalisedVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
(2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1) , //1 - 2 * position.y / self.view.bounds.size.height,
-1,
1);
GLKMatrix4 inversedMatrix = GLKMatrix4Invert(_modelViewProjectionMatrix, nil);
GLKVector4 near_point = GLKMatrix4MultiplyVector4(inversedMatrix, normalisedVector);
How can I get the far point? And is my near_point correct?
Thanks!
It looks like you have
GLKVector4 normalisedVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
(2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1) ,
-1, 1);
(phew) to calculate the normalized device coordinates of the near point.
To get the far point, just swap the -1 z coordinate for a 1:
GLKVector4 normalisedFarVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
(2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1) ,
1, 1);
And apply the same inverse transform to that. That should do the trick.
Background: Under normal circumstances, the final coordinates received by the GL for turning a fragment into a pixel are what are called normalised device coordinates. These lie within a cube whose corners are at (-1, -1, -1) and (1, 1, 1). So the centre of the screen is (0, 0, z), the top left corner is (-1, 1, z), and so on. The coordinates are transformed so that a point lying on the near plane has a z coordinate of -1, and one lying on the far plane has a z coordinate of 1. These are the numbers that are used for depth testing, if you have it turned on.
So, as you might guess, when you want to convert a screen location back to a point in 3D space, you actually have a number of points to choose from - a line, in fact, stretching from the near plane to the far plane. In normalised device coordinates, this is the line stretching from z=-1 to z=1. So the process goes like this:
convert the x and y coordinates into normalised device coordinates x' and y'
For each of z' = 1 and z' = -1:
convert the coordinates to normalised device coordinates (see here for the formula)
apply the inverse of the projection matrix
apply the inverse of the model/view matrix (as it is before any per-object transformations)
The results are the two coordinates of your line in 3D space.
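One extra step worth spelling out (my addition; it is implied but not stated in the list above): after applying the inverse projection, or the inverse of the combined model-view-projection matrix, the result is a homogeneous 4-vector, so you still need to divide by its w component to get an actual 3D position. The code below does exactly that.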
We can draw a line from near_point to far_point:
GLKVector4 normalisedVector = GLKVector4Make((2 * position.x / self.view.bounds.size.width - 1),
                                             (2 * (self.view.bounds.size.height-position.y) / self.view.bounds.size.height - 1),
                                             -1,
                                             1);
GLKMatrix4 inversedMatrix = GLKMatrix4Invert(_modelViewProjectionMatrix, nil);

// unproject the near point (z = -1 in NDC) and apply the perspective divide
GLKVector4 near_point = GLKMatrix4MultiplyVector4(inversedMatrix, normalisedVector);
near_point.v[3] = 1.0/near_point.v[3];
near_point = GLKVector4Make(near_point.v[0]*near_point.v[3], near_point.v[1]*near_point.v[3], near_point.v[2]*near_point.v[3], 1);

// unproject the far point (z = 1 in NDC) the same way
normalisedVector.z = 1.0;
GLKVector4 far_point = GLKMatrix4MultiplyVector4(inversedMatrix, normalisedVector);
far_point.v[3] = 1.0/far_point.v[3];
far_point = GLKVector4Make(far_point.v[0]*far_point.v[3], far_point.v[1]*far_point.v[3], far_point.v[2]*far_point.v[3], 1);
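From there, the picking ray in world space is simply the segment from near_point to far_point; to actually pick an object you would intersect that segment with your scene geometry (or its bounding volumes). That last intersection step is my own note and is not shown in the code above.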