Why is my shape distorted on rotation about the z axis? (iOS)

I just started learning metal and can best show you my frustration with the following series of screenshots. From top to bottom we have
(1) My model where the model matrix is the identity matrix
(2) My model rotated 60 deg about the x axis with orthogonal projection
(3) My model rotated 60 deg about the y axis with orthogonal projection
(4) My model rotated 60 deg about the z axis
So I use the following function for conversion into normalized device coordinates:
- (CGPoint)normalizedDevicePointForViewPoint:(CGPoint)point
{
    CGPoint p = [self convertPoint:point toCoordinateSpace:self.window.screen.fixedCoordinateSpace];
    CGFloat halfWidth = CGRectGetMidX(self.window.screen.bounds);
    CGFloat halfHeight = CGRectGetMidY(self.window.screen.bounds);
    CGFloat px = (p.x - halfWidth) / halfWidth;
    CGFloat py = (p.y - halfHeight) / halfHeight;
    return CGPointMake(px, -py);
}
The following rotates and orthogonally projects the model:
- (matrix_float4x4)zRotation
{
    self.rotationZ = M_PI / 3;
    const vector_float3 zAxis = { 0, 0, 1 };
    const matrix_float4x4 zRot = matrix_float4x4_rotation(zAxis, self.rotationZ);
    const matrix_float4x4 modelMatrix = zRot;
    return matrix_multiply(matrix_float4x4_orthogonal_projection_on_z_plane(), modelMatrix);
}
As you can see, when I use the exact same method for rotating about the other two axes, it looks fine, not distorted. What am I doing wrong? Is there some sort of scaling/aspect-ratio setting I should be applying somewhere? I've been staring at this for an embarrassingly long time, so any help or ideas that can lead me in the right direction are much appreciated. Thank you in advance.

There's nothing wrong with your rotation or projection matrices. The visual oddity arises from the fact that you move your vertices into NDC space prior to rotation. A rectangle doesn't preserve its aspect ratio when rotating in NDC space, because the mapping from NDC back to screen coordinates is not 1:1.
I would recommend not working in NDC until the very end of the vertex pipeline (i.e., pass vertices into your vertex function in "world" space, and out to the rasterizer as NDC). You can do this with a classic construction of the orthographic projection matrix that scales and biases the vertices, correctly accounting for the non-square aspect ratio of window coordinates.
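For reference, here is a minimal sketch (not the poster's code) of such a scale-and-bias orthographic matrix, written with the same simd types the question uses; the left/right/bottom/top/near/far bounds, the left-handed convention (+z into the screen), and the [0, 1] depth range of Metal NDC are my assumptions:

#include <simd/simd.h>

// Hypothetical helper: an off-center orthographic projection producing Metal-style NDC
// (x, y in [-1, 1], z in [0, 1]). Feeding the drawable's aspect ratio into left/right
// keeps a square in "world" units square on screen.
static matrix_float4x4 orthographic_projection(float left, float right,
                                               float bottom, float top,
                                               float nearZ, float farZ)
{
    matrix_float4x4 m = matrix_identity_float4x4;
    m.columns[0] = simd_make_float4(2.0f / (right - left), 0.0f, 0.0f, 0.0f);
    m.columns[1] = simd_make_float4(0.0f, 2.0f / (top - bottom), 0.0f, 0.0f);
    m.columns[2] = simd_make_float4(0.0f, 0.0f, 1.0f / (farZ - nearZ), 0.0f);
    m.columns[3] = simd_make_float4(-(right + left) / (right - left),
                                    -(top + bottom) / (top - bottom),
                                    -nearZ / (farZ - nearZ),
                                    1.0f);
    return m;
}

// Example: for a drawable twice as wide as it is tall, widen the horizontal bounds
// so the projection absorbs the aspect ratio instead of distorting the model:
// float aspect = drawableSize.width / drawableSize.height;   // e.g. 2.0
// matrix_float4x4 projection = orthographic_projection(-aspect, aspect, -1, 1, 0, 1);

The key point is that the horizontal bounds absorb the drawable's aspect ratio, so the rotation happens in a space where one unit of x equals one unit of y.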

Related

Metal shader determine point inside a convex quadrilateral

Is there a builtin way in Metal shading language to determine if a point lies inside a convex quadrilateral (or convex polygon in general)? If not, what is the quickest way to determine the same?
I have not been able to find a Metal function that meets your needs. I will propose what I believe to be a relatively fast solution (although please feel free to critique or improve it). Note that I have assumed you are working in 2D (or at least a 2D frame for a polygon whose vertices are coplanar).
constant constexpr float M_PI = 3.14159265358979323846264338327950288;
constant constexpr float2 iHat = float2(1, 0);   // the +X axis of the polygon's own frame

namespace metal {
    // The sawtooth function
    METAL_FUNC float sawtooth(float f) { return f - floor(f); }

    /// A regular polygon with `s` sides, oriented by `transform`, which converts points from the
    /// parent system (the system within which the polygon resides) into the polygon's own frame.
    /// The frame "attached" to the polygon has an X axis passing through a vertex of the polygon.
    /// `circR` is the radius of the circumscribed circle that passes through each of the vertices.
    struct polygon {
        const uint s;
        const float circR;
        const float3x3 transform;
        // Constructor
        polygon(uint s, float circR, float3x3 transform) : s(s), circR(circR), transform(transform) {}
        // `pt` is assumed to be a point in the parent system. `contains` excludes the set of points along the edges of the polygon
        bool contains(float2 pt);
    };
}

bool metal::polygon::contains(float2 pt) {
    // The position in the frame of the polygon
    float2 poly_pt = (transform * float3(pt, 1)).xy;
    // Squared distance from the polygon's centre; the law of sines gives the distance allowed (see below)
    float sqDist = distance_squared(float2(0), poly_pt);
    // Outside the circle that circumscribes the polygon
    if (sqDist > circR * circR) return false;
    // The angle the point makes with the x axis in the frame of the polygon.
    // `wedgeAngle` is the angle formed between two vertices connected by an edge.
    float wedgeAngle = 2 * M_PI / s;
    float ptAngle = atan2(poly_pt.y, poly_pt.x);   // signed angle from iHat (the +X axis)
    float deltaTheta = sawtooth(ptAngle / wedgeAngle) * wedgeAngle;
    // Calculate the maximum squared distance allowed at this angle, measured relative to the
    // line segment joining the `floor(ptAngle / wedgeAngle)`th (kth) vertex with the center of the polygon.
    // This is done by viewing the polygon from a frame whose X axis is the line from the center of the
    // polygon to the kth vertex. Draw line segment L1 from the kth vertex to the (k+1)th vertex and mark
    // its endpoints K and L respectively. Draw line segment L2 from the center of the polygon to the point
    // under consideration and mark L2's intersection with L1 as "A". If the center of the polygon is "O",
    // then triangle "OKL" is isosceles with vertex angle `wedgeAngle` and base angle B = M_PI / 2 - wedgeAngle / 2
    // (since 2B + wedgeAngle = M_PI). Triangle "OAK" contains `deltaTheta` and B, so its third angle is
    // M_PI - B - deltaTheta. `maxR` then follows from the law of sines applied to triangle "OAK".
    float maxR = circR * sin(M_PI / 2 - wedgeAngle / 2) / sin(M_PI / 2 + wedgeAngle / 2 - deltaTheta);
    return sqDist < maxR * maxR;
}
Note that I opted for a constexpr value in lieu of a macro declaration. Either would do.
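For what it's worth, here is one hypothetical way to use the struct from shader code (not part of the answer above); it assumes the polygon's vertex-aligned x axis coincides with the parent frame's x axis, so the transform into the polygon frame reduces to a translation:

// Hypothetical usage: is `pt` (in the parent frame) inside a hexagon of circumradius 0.25
// centred at `centre`? The matrix is built from columns, so the last column carries the
// translation that moves parent-frame points into the polygon's frame.
METAL_FUNC bool insideHexagon(float2 pt, float2 centre) {
    const float3x3 toPolygonFrame = float3x3(float3(1, 0, 0),
                                             float3(0, 1, 0),
                                             float3(-centre, 1));
    metal::polygon hex(6, 0.25, toPolygonFrame);
    return hex.contains(pt);
}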

world coordinates to camera coordinates to pixel coordinates

I am trying to project a given 3D point onto the image plane. I have posted many questions about this and many people have helped me, and I have read many related links, but the projection still doesn't work correctly for me.
I have a 3D point (-455, -150, 0), where x is the depth axis, z is the upwards axis, and y is the horizontal one. I have roll (rotation around the front-to-back axis, x), pitch (rotation around the side-to-side axis, y), and yaw (rotation around the vertical axis, z). I also have the position of the camera: (x, y, z) = (-50, 0, 100). So I am doing the following.
First I go from world coordinates to camera coordinates using the extrinsic parameters:
double pi = 3.14159265358979323846;
double yp = 0.033716827630996704* pi / 180; //roll
double thet = 67.362312316894531* pi / 180; //pitch
double k = 89.7135009765625* pi / 180; //yaw
double rotxm[9] = { 1,0,0,0,cos(yp),-sin(yp),0,sin(yp),cos(yp) };
double rotym[9] = { cos(thet),0,sin(thet),0,1,0,-sin(thet),0,cos(thet) };
double rotzm[9] = { cos(k),-sin(k),0,sin(k),cos(k),0,0,0,1};
cv::Mat rotx = Mat{ 3,3,CV_64F,rotxm };
cv::Mat roty = Mat{ 3,3,CV_64F,rotym };
cv::Mat rotz = Mat{ 3,3,CV_64F,rotzm };
cv::Mat rotationm = rotz * roty * rotx; //rotation matrix
cv::Mat mpoint3(1, 3, CV_64F, { -455,-150,0 }); //the 3D point location
mpoint3 = mpoint3 * rotationm; //rotation
cv::Mat position(1, 3, CV_64F, {-50,0,100}); //the camera position
mpoint3=mpoint3 - position; //translation
Now I want to move from camera coordinates to image coordinates.
The first solution, as I read from some sources, was:
Mat myimagepoint3 = mpoint3 * mycameraMatrix;
This didn't work.
The second solution was:
double fx = cameraMatrix.at<double>(0, 0);
double fy = cameraMatrix.at<double>(1, 1);
double cx1 = cameraMatrix.at<double>(0, 2);
double cy1= cameraMatrix.at<double>(1, 2);
xt = mpoint3 .at<double>(0) / mpoint3.at<double>(2);
yt = mpoint3 .at<double>(1) / mpoint3.at<double>(2);
double u = xt * fx + cx1;
double v = yt * fy + cy1;
but this didn't work either.
I also tried to use the OpenCV method fisheye::projectPoints (from world to image coordinates):
Mat recv2;
cv::Rodrigues(rotationm, recv2);
//inputpoints a vector contains one point which is the 3d world coordinate of the point
//outputpoints a vector to store the output point
cv::fisheye::projectPoints(inputpoints,outputpoints,recv2,position,mycameraMatrix,mydiscoff );
but this also didn't work.
By "didn't work" I mean: I know where the point should appear in the image, but when I draw it, it is always somewhere else (not even close); sometimes I even get negative values.
Note: there are no syntax errors or exceptions, but I may have made typos while writing the code here.
So, can anyone suggest what I am doing wrong?

Get the actual angle after the superview is rotated

I have 5 subviews (white) added to the superview (gray). When I rotate the superview, I want to know the angle (like 1 and 2) of each subview with the red circle. (The centers of the subviews and the red circle lie on the same circle.)
Start Position:
Rotated Position:
From your comment you appear to want to determine the coordinates of the centres of your five circles for a given rotation. The centres will all lie on a circle. So your question boils down to what are the coordinates of a point on a circle of radius r for an angle θ. The parametric equations for a circle give you that:
x = r cos θ
y = r sin θ
The angle, θ, in these equations is measured in radians from the positive x-axis in an anti-clockwise direction. If your angles are in degrees you will find the M_PI constant for π useful, since:
360 degrees = 2 π radians
The rest is simple math: take your angle of rotation to give you the angle for A (remembering to adjust for 0 being the x-axis and measuring anti-clockwise if needed); the other centres are at multiples of 72 degrees (0.4 π radians) from this, as sketched below.
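Purely as an illustration (not part of the original answer), a minimal sketch of that construction; the radius r, the rotation angle in radians, and measuring the centres relative to the red circle's centre are my assumptions:

#include <math.h>
#import <CoreGraphics/CoreGraphics.h>

// Hypothetical sketch: fill `centres[5]` with the disc centres, 72 degrees apart,
// starting from `rotation` (radians, anti-clockwise from the positive x-axis),
// all relative to the red circle's centre. `r` is the radius of the circle they lie on.
static void discCentres(CGFloat r, CGFloat rotation, CGPoint centres[5])
{
    for (int k = 0; k < 5; k++) {
        CGFloat theta = rotation + k * (2.0 * M_PI / 5.0);   // multiples of 72 degrees
        centres[k] = CGPointMake(r * cos(theta), r * sin(theta));
    }
}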
HTH
I'm not sure I completely understand your question, but if you just need to take a known point and rotate it a certain number of degrees, check out the docs for CGAffineTransform.
For example:
CGAffineTransform rotation = CGAffineTransformMakeRotation (angle);
CGPoint rotatedPoint = CGPointApplyAffineTransform (startingPoint, rotation);
This rotation matrix is around (0, 0) and the angle is in radians, so you will need to subtract the center of your superview's bounds to get an offset relative to the center, do the rotation, and add back in the center. Or you can build an affine transform made up of that translation, rotation, and inverse translation, and then apply that to your starting point as above.
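As a hypothetical sketch of that second approach (the helper name and parameters are mine, not from the answer), composing the translation, rotation, and inverse translation into one transform:

#import <CoreGraphics/CoreGraphics.h>

// Rotate `point` by `angle` radians about `centre` (e.g. the centre of the superview's bounds),
// by translating the centre to the origin, rotating, and translating back.
static CGPoint rotatePointAboutCentre(CGPoint point, CGPoint centre, CGFloat angle)
{
    CGAffineTransform t = CGAffineTransformMakeTranslation(centre.x, centre.y);
    t = CGAffineTransformRotate(t, angle);
    t = CGAffineTransformTranslate(t, -centre.x, -centre.y);
    return CGPointApplyAffineTransform(point, t);
}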
Given that you already seem to know the main rotation angle, this will give you the angles in the range -180 .. +180 and positions of each of the white discs:
CGFloat toRads = M_PI / 180.0;
CGFloat angleA = self.rotationInDegrees;
if (angleA > 180) angleA -= 360;
CGFloat xA = self.radius * sinf(angleA * toRads);
CGFloat yA = self.radius * cosf(angleA * toRads);
CGFloat angleB = angleA + 72;
if (angleB > 180) angleB -= 360;
CGFloat xB = self.radius * sinf(angleB * toRads);
CGFloat yB = self.radius * cosf(angleB * toRads);
etc...
(This assumes your zero degrees is from the vertical. If it's from the horizontal swap cos and sin over).

OpenCV: rotation/translation vector to OpenGL modelview matrix

I'm trying to use OpenCV to do some basic augmented reality. The way I'm going about it is using findChessboardCorners to get a set of points from a camera image. Then, I create a 3D quad along the z = 0 plane and use solvePnP to get a homography between the imaged points and the planar points. From that, I figure I should be able to set up a modelview matrix which will allow me to render a cube with the right pose on top of the image.
The documentation for solvePnP says that it outputs a rotation vector "that (together with [the translation vector]) brings points from the model coordinate system to the camera coordinate system." I think that's the opposite of what I want; since my quad is on the plane z = 0, I want a modelview matrix which will transform that quad to the appropriate 3D plane.
I thought that by performing the opposite rotations and translations in the opposite order I could calculate the correct modelview matrix, but that seems not to work. While the rendered object (a cube) does move with the camera image and seems to be roughly correct translationally, the rotation just doesn't work at all; it rotates on multiple axes when it should only be rotating on one, and sometimes in the wrong direction. Here's what I'm doing so far:
std::vector<Point2f> corners;
bool found = findChessboardCorners(*_imageBuffer, cv::Size(5,4), corners,
                                   CV_CALIB_CB_FILTER_QUADS |
                                   CV_CALIB_CB_FAST_CHECK);
if (found)
{
    drawChessboardCorners(*_imageBuffer, cv::Size(6, 5), corners, found);

    std::vector<double> distortionCoefficients(5); // camera distortion
    distortionCoefficients[0] = 0.070969;
    distortionCoefficients[1] = 0.777647;
    distortionCoefficients[2] = -0.009131;
    distortionCoefficients[3] = -0.013867;
    distortionCoefficients[4] = -5.141519;

    // Since the image was resized, we need to scale the found corner points
    float sw = _width / SMALL_WIDTH;
    float sh = _height / SMALL_HEIGHT;

    std::vector<Point2f> board_verts;
    board_verts.push_back(Point2f(corners[0].x * sw, corners[0].y * sh));
    board_verts.push_back(Point2f(corners[15].x * sw, corners[15].y * sh));
    board_verts.push_back(Point2f(corners[19].x * sw, corners[19].y * sh));
    board_verts.push_back(Point2f(corners[4].x * sw, corners[4].y * sh));
    Mat boardMat(board_verts);

    std::vector<Point3f> square_verts;
    square_verts.push_back(Point3f(-1, 1, 0));
    square_verts.push_back(Point3f(-1, -1, 0));
    square_verts.push_back(Point3f(1, -1, 0));
    square_verts.push_back(Point3f(1, 1, 0));
    Mat squareMat(square_verts);

    // Transform the camera's intrinsic parameters into an OpenGL camera matrix
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    // Camera parameters
    double f_x = 786.42938232; // Focal length in x axis
    double f_y = 786.42938232; // Focal length in y axis (usually the same?)
    double c_x = 217.01358032; // Camera primary point x
    double c_y = 311.25384521; // Camera primary point y

    cv::Mat cameraMatrix(3,3,CV_32FC1);
    cameraMatrix.at<float>(0,0) = f_x;
    cameraMatrix.at<float>(0,1) = 0.0;
    cameraMatrix.at<float>(0,2) = c_x;
    cameraMatrix.at<float>(1,0) = 0.0;
    cameraMatrix.at<float>(1,1) = f_y;
    cameraMatrix.at<float>(1,2) = c_y;
    cameraMatrix.at<float>(2,0) = 0.0;
    cameraMatrix.at<float>(2,1) = 0.0;
    cameraMatrix.at<float>(2,2) = 1.0;

    Mat rvec(3, 1, CV_32F), tvec(3, 1, CV_32F);
    solvePnP(squareMat, boardMat, cameraMatrix, distortionCoefficients,
             rvec, tvec);

    _rv[0] = rvec.at<double>(0, 0);
    _rv[1] = rvec.at<double>(1, 0);
    _rv[2] = rvec.at<double>(2, 0);
    _tv[0] = tvec.at<double>(0, 0);
    _tv[1] = tvec.at<double>(1, 0);
    _tv[2] = tvec.at<double>(2, 0);
}
Then in the drawing code...
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 0.0f);
modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, -tv[1], -tv[0], -tv[2]);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[0], 1.0f, 0.0f, 0.0f);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[1], 0.0f, 1.0f, 0.0f);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[2], 0.0f, 0.0f, 1.0f);
The vertices I'm rendering create a cube of unit length around the origin (i.e. from -0.5 to 0.5 along each edge). I know that OpenGL transformation functions perform transformations in "reverse order," so the above should rotate the cube along the z, y, and then x axes, and then translate it. However, it seems like it's being translated first and then rotated, so perhaps Apple's GLKMatrix4 works differently?
This question seems very similar to mine, and in particular coder9's answer seems like it might be more or less what I'm looking for. However, I tried it and compared the results to my method, and the matrices I arrived at in both cases were the same. I feel like that answer is right, but that I'm missing some crucial detail.
You have to make sure the axes are facing the correct direction. In particular, the y and z axes face different directions in OpenGL and OpenCV, while each keeps its x-y-z basis direct (right-handed). You can find some information and code (with an iPad camera) in this blog post.
-- Edit --
Ah ok. Unfortunately, I used these resources to do it the other way round (opengl ---> opencv) to test some algorithms. My main issue was that the row order of the images was inverted between OpenGL and OpenCV (maybe this helps).
When simulating cameras, I came across the same projection matrices that can be found here and in the generalized projection matrix paper. This paper quoted in the comments of the blog post also shows some link between computer vision and OpenGL projections.
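As an illustration of that axis difference, here is a hedged sketch (not code from the blog post or the papers): build the rotation with cv::Rodrigues, negate the y and z rows of [R|t], and hand the result to OpenGL in column-major order. The helper name and the assumption that rvec/tvec come straight from solvePnP are mine.

#include <opencv2/opencv.hpp>

// Hypothetical helper: convert the solvePnP output (OpenCV camera: x right, y down, z forward)
// into a column-major OpenGL modelview matrix (x right, y up, z towards the viewer).
static void modelViewFromPnP(const cv::Mat &rvec, const cv::Mat &tvec, float out[16])
{
    cv::Mat R, rv64, tv64;
    rvec.convertTo(rv64, CV_64F);
    tvec.convertTo(tv64, CV_64F);
    cv::Rodrigues(rv64, R);                       // 3x3 rotation from the rotation vector

    cv::Mat m = cv::Mat::zeros(4, 4, CV_64F);     // [R | t] with rows 1 and 2 negated
    for (int row = 0; row < 3; ++row) {
        double sign = (row == 0) ? 1.0 : -1.0;    // flip the y and z axes of the camera frame
        for (int col = 0; col < 3; ++col)
            m.at<double>(row, col) = sign * R.at<double>(row, col);
        m.at<double>(row, 3) = sign * tv64.at<double>(row, 0);
    }
    m.at<double>(3, 3) = 1.0;

    // OpenGL expects column-major storage, so transpose while copying out (e.g. for glLoadMatrixf).
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[col * 4 + row] = (float)m.at<double>(row, col);
}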
I'm not an iOS programmer, so this answer might be misleading!
If the problem is not in the order of applying the rotations and the translation, then I suggest using a simpler and more commonly used coordinate system.
The points in the corners vector have their origin (0,0) at the top-left corner of the image, with the y axis pointing towards the bottom of the image. From math we are often used to thinking of a coordinate system with the origin at the center and the y axis pointing towards the top of the image. From the coordinates you're pushing into board_verts, I'm guessing you're making the same mistake. If that's the case, it's easy to transform the positions of the corners with something like this:
for (size_t i = 0; i < corners.size(); i++) {
    corners[i].x -= width / 2;
    corners[i].y = -corners[i].y + height / 2;
}
Then you call solvePnP(). Debugging this is not that difficult: just print the positions of the four corners and the estimated R and T, and see if they make sense. Then you can proceed to the OpenGL step. Please let me know how it goes.

Use of maths in the Apple pARk sample code

I've studied the pARk example project (http://developer.apple.com/library/IOS/#samplecode/pARk/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011083) so I can apply some of its fundamentals in an app I'm working on. I understand nearly everything, except:
The way it calculates whether a point of interest should appear or not. It gets the attitude, multiplies it with the projection matrix (to get the rotation in GL coordinates?), then multiplies that matrix with the coordinates of the point of interest and, finally, looks at the last coordinate of that vector to find out whether the point of interest should be shown. What are the mathematical fundamentals of this?
Thanks a lot!!
I assume you are referring to the following method:
- (void)drawRect:(CGRect)rect
{
    if (placesOfInterestCoordinates == nil) {
        return;
    }

    mat4f_t projectionCameraTransform;
    multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransform);

    int i = 0;
    for (PlaceOfInterest *poi in [placesOfInterest objectEnumerator]) {
        vec4f_t v;
        multiplyMatrixAndVector(v, projectionCameraTransform, placesOfInterestCoordinates[i]);

        float x = (v[0] / v[3] + 1.0f) * 0.5f;
        float y = (v[1] / v[3] + 1.0f) * 0.5f;

        if (v[2] < 0.0f) {
            poi.view.center = CGPointMake(x*self.bounds.size.width, self.bounds.size.height-y*self.bounds.size.height);
            poi.view.hidden = NO;
        } else {
            poi.view.hidden = YES;
        }

        i++;
    }
}
This is performing an OpenGL like vertex transformation on the places of interest to check if they are in a viewable frustum. The frustum is created in the following line:
createProjectionMatrix(projectionTransform, 60.0f*DEGREES_TO_RADIANS, self.bounds.size.width*1.0f / self.bounds.size.height, 0.25f, 1000.0f);
This sets up a frustum with a 60 degree field of view, a near clipping plane of 0.25 and a far clipping plane of 1000. Any point of interest that is further away than 1000 units will then not be visible.
So, to step through the code, first the projection matrix that sets up the frustum, and the camera view matrix, which simply rotates the object so it is the right way up relative to the camera, are multiplied together. Then, for each place of interest, its location is multiplied by the viewProjection matrix. This will project the location of the place of interest into the view frustum, applying rotation and perspective.
The next two lines then convert the transformed location of the place into what's known as normalized device coordinates. The 4-component vector needs to be collapsed to 3-dimensional space; this is achieved by projecting it onto the plane w == 1, by dividing the vector by its w component, v[3]. It is then possible to determine whether the point lies within the projection frustum by checking if its coordinates lie in the cube of side length 2 centred at the origin [0, 0, 0]. In this case, the x and y coordinates are being biased from the range [-1, 1] to [0, 1] to match the UIKit coordinate system, by adding 1 and dividing by 2.
Next, the v[2] component, z, is checked to see if it is greater than 0. This is actually incorrect, as it has not been biased; it should be checked to see if it is greater than -1. This will detect whether the place of interest is in the first half of the projection frustum; if it is, the object is deemed visible and displayed.
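Purely for illustration (not part of the sample), a version of the test with the divide applied, reusing the variable names from the loop above and assuming the projection maps visible depth into the NDC range [-1, 1]:

float zNDC = v[2] / v[3];            // perspective divide, as already done for x and y
if (zNDC > -1.0f && zNDC < 1.0f) {   // between the near and far planes in NDC
    poi.view.center = CGPointMake(x * self.bounds.size.width,
                                  self.bounds.size.height - y * self.bounds.size.height);
    poi.view.hidden = NO;
} else {
    poi.view.hidden = YES;
}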
If you are unfamiliar with vertex projection and coordinate systems, this is a huge topic with a fairly steep learning curve. There is however a lot of material online covering it, here are a couple of links to get you started:
http://www.falloutsoftware.com/tutorials/gl/gl0.htm
http://www.opengl.org/wiki/Vertex_Transformation
Good luck!
