So, I'm trying to rotate 5 Vector2s so that target (a Vector2) becomes their normal. But the rotation isn't coming out right. When target points up and the vertices' normals point right or left (default: 0,-1 0,0 0,1), I have to rotate them 90 degrees, but when I do, the result isn't -1,0 0,0 1,0 as it should be. Instead it is something like: -1,vs 0,0 1,-vs where vs is a very small number. Why is this? Is there a way to correct this? Code:
Vector2 target = new Vector2(0, 1); //Create target
Vector2[] vecs = new Vector2[3] { new Vector2(0, -1), Vector2.Zero, new Vector2(0, 1) }; //Create vertices to be rotated
Matrix matrix = Matrix.CreateRotationZ(MathHelper.ToRadians(90)); //Should be: (float)Math.Atan2(target.Y, target.X) instead of 90 but wanted to simplify this
Vector2.Transform(vecs, ref matrix, vecs); // Rotate
I even tried rotating by 360 degrees for a full turn, but that didn't give me back the starting vectors, which seems weird to me.
Why is this?
It's because of the limited precision of floating-point variables.
Is there a way to correct this?
I don't know a way to avoid it, but I would just ignore it, because the error is nearly zero.
Protip
For a given vector (x, y), (-y, x) and (y, -x) are the normals (perpendiculars) of this vector.
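As a minimal sketch (same XNA Vector2 type as in the question; which of the two perpendiculars you want depends on your winding convention), the rotated vertices can be built straight from target, with no trigonometry and therefore no floating-point residue:
Vector2 target = new Vector2(0, 1);
// The two perpendiculars of (x, y) are (-y, x) and (y, -x); pick the one that matches the orientation you expect.
Vector2 left = new Vector2(-target.Y, target.X);
Vector2 right = new Vector2(target.Y, -target.X);
Vector2[] vecs = new Vector2[3] { left, Vector2.Zero, right }; // exactly (-1,0) (0,0) (1,0) for target (0,1)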
Related
I am running solvePnPRansac on an image dataset, with 2d feature points and triangulated 3d landmark points. It runs great, and the results in rotation, and in the forward and side axes, look great. The Y axis though, is completely wrong.
I am testing the output against the ground truth from the dataset, and it goes up where it should go down, and drifts off the ground truth very quickly. The other axes stay locked on for much, much longer.
This strikes me as strange: how can it be correct for the other axes and wrong for one? Surely that shouldn't be possible; I would have thought that either every axis was bad or every axis was good.
What could I possibly be doing wrong to make this happen? And how can I debug this weirdness? My PnP code is very standard:
cv::Mat inliers;
cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64FC1);
int iterationsCount = 500; // number of Ransac iterations.
float reprojectionError = 2.0; // maximum allowed distance to consider it an inlier.
float confidence = 0.95; // RANSAC successful confidence.
bool useExtrinsicGuess = false;
int flags = cv::SOLVEPNP_ITERATIVE;
int num_inliers_;
//points3D_t0
cv::solvePnPRansac(points3D_t0, points_left_t1, intrinsic_matrix, distCoeffs, rvec, translation_stereo,
useExtrinsicGuess, iterationsCount, reprojectionError, confidence,
inliers, flags);
I encountered a similar problem with images taken by a drone – sometimes the Y value (the camera's line-of-sight axis – the height axis in my case) was BELOW the ground. If you think about it, for a planar view (or close to a plane) there are two possible 'Y' solutions: in front of the plane and behind it (above and below the ground in my case). So both are legal solutions.
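If you want to detect which of the two you got, here is a rough, hedged sketch (it assumes your world Y axis is "up", the ground sits near y = 0, and translation_stereo is a 3x1 CV_64F cv::Mat, using the variable names from the question): recover the camera centre from the pose and check the sign of its height.
cv::Mat R;
cv::Rodrigues(rvec, R);                          // rotation vector -> 3x3 rotation matrix
cv::Mat camCenter = -R.t() * translation_stereo; // camera centre in world coordinates
double cameraHeight = camCenter.at<double>(1);   // Y component, assumed to be "up"
if (cameraHeight < 0.0)
{
    // Mirrored (below-the-ground) solution: e.g. reject this frame, or re-run
    // solvePnPRansac with useExtrinsicGuess = true and the previous good pose.
}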
I have been using 3ds Max for a long time and I know the x, y, z axes. What blew my mind when rotating an SCNNode in Xcode is the w component of SCNVector4.
Can someone explain in detail how to use this, because I have searched for a long time and I can't make my object spin the way I want? Anyone can help me make it spin 180 degrees onto its back, but I would appreciate a fuller explanation for further rotations. I have seen this link but didn't understand it:
http://www.cprogramming.com/tutorial/3d/rotationMatrices.html
I believe that you are trying to rotate nodes (the rotation property).
From the documentation:
The four-component rotation vector specifies the direction of the rotation axis in the first three components and the angle of rotation (in radians) in the fourth.
You might find it easier to use eulerAngles:
The node’s orientation, expressed as pitch, yaw, and roll angles, each in radians
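For example, a minimal sketch of both properties (node stands for whichever SCNNode you are rotating; the 180-degree turn onto its back from your question is just an example angle):
// Spin the node 180 degrees around its x axis with the rotation property:
// the axis goes in the first three components, the angle (radians) in w.
node.rotation = SCNVector4(1, 0, 0, Float(M_PI))
// The same turn expressed with eulerAngles (pitch, yaw, roll, in radians).
node.eulerAngles = SCNVector3(Float(M_PI), 0, 0)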
Use .transform to rotate a node
node.transform = SCNMatrix4Mult(node.transform, SCNMatrix4MakeRotation(angle, x, y, z))
If you want to rotate your node 180 degrees around the x axis:
node.transform = SCNMatrix4Mult(node.transform, SCNMatrix4MakeRotation(Float(M_PI), 1, 0, 0))
Is it possible to use SceneKit's unprojectPoint to convert a 2D point to 3D without having a depth value?
I only need to find the 3D location in the XZ plane. Y can be always 0 or any value since I'm not using it.
I'm trying to do this for iOS 8 Beta.
I had something similar with JavaScript and Three.js (WebGL) like this:
function getMouse3D(x, y) {
    var pos = new THREE.Vector3(0, 0, 0);
    var pMouse = new THREE.Vector3(
        (x / renderer.domElement.width) * 2 - 1,
        -(y / renderer.domElement.height) * 2 + 1,
        1
    );
    projector.unprojectVector(pMouse, camera);
    var cam = camera.position;
    var m = pMouse.y / ( pMouse.y - cam.y );
    pos.x = pMouse.x + ( cam.x - pMouse.x ) * m;
    pos.z = pMouse.z + ( cam.z - pMouse.z ) * m;
    return pos;
}
But I don't know how to translate the part with unprojectVector to SceneKit.
What I want to do is to be able to drag an object around in the XZ plane only. The vertical axis Y will be ignored.
Since the object would need to move along a plane, one solution would be to use the hitTest method, but I don't think it's very good in terms of performance to do that for every touch/drag event. Also, it wouldn't allow the object to move outside the plane either.
I've tried a solution based on the accepted answer here, but it didn't work. Using one depth value for unprojectPoint, when dragging the object around in the +/-Z direction it doesn't stay under the finger for long; it moves away from it instead.
I need the dragged object to stay under the finger no matter where it is moved in the XZ plane.
First, are you actually looking for a position in the xz-plane or the xy-plane? By default, the camera looks in the -z direction, so the x- and y-axes of the 3D Scene Kit coordinate system go in the same directions as they do in the 2D view coordinate system. (Well, y is flipped by default in UIKit, but it's still the vertical axis.) The xz-plane is then orthogonal to the plane of the screen.
Second, a depth value is a necessary part of converting from 2D to 3D. I'm not an expert on three.js, but from looking at their library documentation (which apparently can't be linked to), their unprojectVector still takes a Vector3. And that's what you're constructing for pMouse in your code above — a vector whose x- and y-coordinates come from the 2D mouse position, and whose z-coordinate is 1.
SceneKit's unprojectPoint works the same way — it takes a point whose z-coordinate refers to a depth in clip space, and maps that to a point in your scene's world space.
If your world space is oriented such that the only variation you care about is in the x- and y-axes, you may pass any z-value you want to unprojectPoint, and ignore the z-value in the vector you get back. Otherwise, pass -1 to map to the far clipping plane, 1 for the near clipping plane, or 0 for halfway in between — the plane whose z-coordinate (in camera space) is 0. If you're using the unprojected point to position a node in the scene, the best advice is to just try different z-values (between -1 and 1) until you get the behavior you want.
However, it's a good idea to be thinking about what you're using an unprojected vector for — if the next thing you'd be doing with it is testing for intersections with scene geometry, look at hitTest: instead.
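If you do end up intersecting a fixed plane yourself, here is a rough, hedged sketch (it assumes an SCNView named scnView and the world-space y = 0 plane; the two depth values passed to unprojectPoint may need adjusting to the clip-space convention discussed above) that mirrors the Three.js code from the question:
func planePosition(from point: CGPoint) -> SCNVector3 {
    // Unproject the touch at two different depths to get a ray through the scene.
    let near = scnView.unprojectPoint(SCNVector3(Float(point.x), Float(point.y), 0))
    let far = scnView.unprojectPoint(SCNVector3(Float(point.x), Float(point.y), 1))
    // Intersect that ray with the y = 0 plane (same idea as the m factor above);
    // no guard here for a ray parallel to the plane.
    let t = near.y / (near.y - far.y)
    return SCNVector3(near.x + (far.x - near.x) * t,
                      0,
                      near.z + (far.z - near.z) * t)
}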
I am drawing a texture with 4 vertices in OpenGL ES 1.1.
It can rotate around z:
glRotatef(20, 0, 0, 1);
But when I try to rotate it around x or y like a CALayer, the texture just disappears completely. Example for rotation around x:
glRotatef(20, 1, 0, 0);
I also tried very small values, incrementing them in the animation loop.
// called in render loop
static double angle = 0;
angle += 0.005;
glRotatef(angle, 1, 0, 0);
At certain angles I see only the edge of the texture, as if OpenGL ES were clipping away anything that extends into depth.
Could the problem be related to the projection mode? How would you achieve a perspective transformation of a texture like you can with CALayer's transform property?
The problem is most likely in your glFrustumf or glOrthof call. The last parameter of these two calls is z-far, and it should be large enough for the primitive to be drawn. If the side length of the square is 1.0 and its centre is at (0.0, 0.0, 0.5), then z-far should be greater than 1.0 to see the square rotated 90 degrees around the X or Y axis. Note, though, that this also depends on other matrix operations (translating the object or using tools like lookAt).
Making this parameter large enough should solve your problem.
To achieve a perspective transformation, use glFrustumf instead of glOrthof.
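As a rough sketch (the numbers are placeholders, and viewWidth/viewHeight stand in for your drawable size; the only hard requirement is that zNear/zFar bracket the quad even after rotation), an ES 1.1 perspective setup could look like this:
#include <math.h>

/* Rough equivalent of gluPerspective for OpenGL ES 1.1 (no GLU available). */
static void setPerspective(GLfloat fovyDegrees, GLfloat aspect, GLfloat zNear, GLfloat zFar)
{
    GLfloat top = zNear * tanf(fovyDegrees * (GLfloat)M_PI / 360.0f);
    GLfloat right = top * aspect;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-right, right, -top, top, zNear, zFar);
    glMatrixMode(GL_MODELVIEW);
}

/* Usage: zFar = 100 comfortably contains a unit quad near the origin,
   even when it is rotated around x or y. */
setPerspective(60.0f, (GLfloat)viewWidth / (GLfloat)viewHeight, 0.1f, 100.0f);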
I will track an object using the coordinates that I read from OpenCV. The thing is: in order to turn my servo 120 degrees in the positive direction, I need to send 3300 more to the servo (for 1 degree, I need to send 27.5 more).
Now I need to find a relation between the coordinates that I read from OpenCV and the value I need to send to the servo. However, I could not understand OpenCV's coordinates. For example, I do not change the object's height; I only decrease the distance between the object and the camera. In that case only the z value should decrease, yet it seems the x value also changes significantly. What is the reason for that?
In case I have a problem with my code (maybe x is not changing, but I am reading it wrong), could you please give me information about OpenCV coordinates and how to interpret them? As I said at the beginning, I need to find a relation such as: how many degrees of servo turn correspond to how much change in the ball's X coordinate that I read from OpenCV?
Regards
Edit 1 for @FvD:
int i;
for (i = 0; i < circles->total; i++)
{
    float *p = (float*)cvGetSeqElem(circles, i);
    printf("Ball! x=%f y=%f r=%f\n\r", p[0], p[1], p[2]);
    CvPoint center = cvPoint(cvRound(p[0]), cvRound(p[1]));
    CvScalar val = cvGet2D(finalthreshold, center.y, center.x);
    if (val.val[0] < 1) continue;
    cvCircle(frame, center, 3, CV_RGB(0,255,0), -1, CV_AA, 0);
    cvCircle(frame, center, cvRound(p[2]), CV_RGB(255,0,0), 3, CV_AA, 0);
    cvCircle(finalthreshold, center, 3, CV_RGB(0,255,0), -1, CV_AA, 0);
    cvCircle(finalthreshold, center, cvRound(p[2]), CV_RGB(255,0,0), 3, CV_AA, 0);
}
In general, there are no OpenCV coordinates, but you will frequently use the columns and rows of an image matrix as image coordinates.
If you have calibrated your camera, you can transform those image coordinates to real-world coordinates. In the general case, you cannot pinpoint the location of an object in space with a single camera image, unless you have a second camera (stereo vision) or supplementary information about the scene, e.g. if you are detecting objects on the ground and you know the orientation and position of your camera relative to the ground plane. In that case, moving your ball towards the camera would result in unexpected movement because the assumption that it is lying on the ground is violated.
The coordinates in the code snippet you provided are image coordinates. The third "coordinate" is the radius of the circular blob detected in the webcam image and the first two are the column and row of the circle's center in the image.
I'm not sure how you are moving the ball in your test, but if the center of the ball stays stationary in your images and you still get differing x coordinates, you should look into the detection algorithm you are using.
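To get from the image column back to a pan angle for the servo, here is a rough sketch (fx, the focal length in pixels, and cx, the principal point x, are assumed to come from your camera calibration; 27.5 units per degree is the figure from your question):
#include <math.h>

/* Rough sketch: ball's image column -> horizontal angle off the optical axis -> servo units. */
double servoUnitsForBallX(double ballX, double fx, double cx)
{
    double angleRad = atan((ballX - cx) / fx); /* pinhole model: ballX - cx = fx * tan(angle) */
    double angleDeg = angleRad * 180.0 / M_PI;
    return angleDeg * 27.5;                    /* amount to add to the current servo command */
}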