I wanted to use the iPhone's rotation system to have an object follow below a line the user rotates. To do this, I need the angle at which the line is measured. When I set the rotation equal to M_PI_2, the object rotates 90 degrees clockwise, which raises the question: is the iPhone's angle-measurement system backwards? In other words, in portrait orientation, screen facing you, is positive theta clockwise?
Thanks.
I figured it out after doing some research.
Since the iPhone's +x axis is to the right (same orientation as question) and the +y axis is down, it should make sense that the +theta direction is clockwise.
The normal axes
The iPhone's Axes
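As a quick check, here is a minimal UIKit sketch in Swift (the view is illustrative):

import UIKit

// Because UIKit's +y axis points down, a positive rotation angle
// turns a view clockwise on screen.
let line = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 2))
line.transform = CGAffineTransform(rotationAngle: .pi / 2)  // appears 90 degrees clockwise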
I'm trying to implement a Russian roulette game and want to brute-force the solution for it. Here is my problem. I'm going to hard-code the relative angles of the numbers on the wheel (e.g. there are 36 numbers, so each number is offset 10 degrees from the next; the one at the top, the 12 o'clock position, is 0, the next is 10, and so on). I will rotate the wheel randomly and then determine its rotation based on some values that I can calculate (startPosition to finishedPosition). The wheel is an ImageView. Is there a way to actually do this? For example, get the top-left x,y position at the start and at the end, then use some formula to calculate how much it rotated. Or is there a better way to do this? There is not much source code to show, so this is more of a mathematical question than a Swift one. Any feedback is much appreciated.
To calculate rotation, you need the coordinates of three points: the start location (sx, sy) and the end location (ex, ey) of the same point after rotation, plus the center of rotation (cx, cy).
Then you can find the angle using the atan2 function:
rot_angle = atan2((ex-cx)*(sx-cx)+(ey-cy)*(sy-cy), (ex-cx)*(sy-cy)-(ey-cy)*(sx-cx))
Note - I used the argument order (x, y) from here, while most languages use the reverse order (y, x), so check what order you really need (I have no experience with iOS languages). Also, the result value might be in radians or in degrees (the above link doesn't specify it clearly).
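For example, a minimal Swift sketch of the same formula, using Foundation's atan2, which takes (y, x), so the arguments are swapped relative to the (x, y) convention above; the result is in radians:

import Foundation

// Signed rotation angle of point (sx, sy) -> (ex, ey) around center (cx, cy).
func rotationAngle(sx: Double, sy: Double, ex: Double, ey: Double,
                   cx: Double, cy: Double) -> Double {
    let dot   = (ex - cx) * (sx - cx) + (ey - cy) * (sy - cy)
    let cross = (ex - cx) * (sy - cy) - (ey - cy) * (sx - cx)
    return atan2(cross, dot)  // the sign gives the rotation direction
}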
Your question doesn't make much sense. If you rotate the wheel randomly, calculate the random value as an angle. If you want to change the previous rotation by some random angle, then do the math on the starting rotation and ending rotation. That is just adding and subtracting angles (modulo 2π). Then you will know how far it is rotated, and not have to calculate it.
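For instance, a minimal Swift sketch of that bookkeeping (the values are illustrative):

// Keep the wheel's rotation as an angle and just do angle arithmetic.
var wheelAngle = 0.0
let spin = Double.random(in: 0..<(4 * .pi))              // random extra spin
wheelAngle = (wheelAngle + spin)
    .truncatingRemainder(dividingBy: 2 * .pi)            // modulo 2 pi
// wheelAngle now tells you how far the wheel is rotated; nothing to reverse-engineer.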
Assuming you're talking about a roulette wheel, and not "Russian roulette" (in American English at least, that term involves pointing a loaded revolver at your head), you'll need to track both the wheel rotation and the ball rotation. To apply the rotation to the wheel, you'll just take the image of the wheel and rotate it on the Z axis around its x/y center point.
To plot the ball, you'll need to use trig to calculate the center of the ball based on the radius of the track the ball follows and the angle. But again, always track the angle, and then convert the angle to an x/y center point to plot the ball. Don't throw away the angle and then have to convert back from the ball position to its angle. That's silly.
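A minimal Swift sketch of that angle-to-point conversion (wheelCenter and trackRadius are illustrative names):

import CoreGraphics

// Convert the tracked angle into the ball's center point for drawing.
func ballCenter(wheelCenter: CGPoint, trackRadius: CGFloat, angle: CGFloat) -> CGPoint {
    CGPoint(x: wheelCenter.x + trackRadius * cos(angle),
            y: wheelCenter.y + trackRadius * sin(angle))
}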
I am currently working on a project where I need to determine whether a robot, which has an ArUco marker on top of it, needs to rotate in a certain direction so that its front points towards a particular object whose centre point is known. So basically, what I've got is the centre point of the ball and the 4 points of the marker corners.
I'm including an example of what I mean as an image.
Note the little arrow drawn on the marker cardboard. It shows the front side of the robot.
Lastly: I have a camera that captures frames, and the program prints out the rotation vector. For some reason, the values are different in every frame, even though I intentionally left the robot in the same position. Could anyone please explain why that might be?
Thanks a lot.
EDIT: I've got the issue with the fluctuating rotation vector sorted; now I just need to figure out how to use its output to get the orientation of the robot with respect to the ball (of which I have the centre point), which apparently is done through the X-axis.
I'm adding another image, which shows the x-axis as red, the y-axis as blue and the z-axis as green. The vectors are of type cv::Vec3d.
First, some code:
std::vector<cv::Vec3d> rvecs, tvecs;  // one rotation/translation vector per detected marker
// Estimate each marker's pose relative to the camera; 0.05 is the marker's side length in metres.
cv::aruco::estimatePoseSingleMarkers(corners, 0.05, CAMERA_MATRIX, DISTORTION_COEFFICIENTS, rvecs, tvecs);
And the image showing what I mean:
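As a sketch of what can be done with each rvec once it is stable: it is a Rodrigues (axis-angle) vector, and converting it to a rotation matrix lets you read off the marker's axes. A hedged Swift translation of that math (not the OpenCV API, just the formula):

import Foundation
import simd

// Convert a Rodrigues rotation vector into a 3x3 rotation matrix.
func rotationMatrix(fromRodrigues rvec: simd_double3) -> simd_double3x3 {
    let theta = simd_length(rvec)
    guard theta > 1e-9 else { return matrix_identity_double3x3 }
    let k = rvec / theta                                  // unit rotation axis
    // K is the skew-symmetric cross-product matrix of k (built column by column).
    let K = simd_double3x3(columns: (simd_double3(0,  k.z, -k.y),
                                     simd_double3(-k.z, 0,  k.x),
                                     simd_double3(k.y, -k.x, 0)))
    // Rodrigues' formula: R = I + sin(theta)*K + (1 - cos(theta))*K*K
    return matrix_identity_double3x3 + sin(theta) * K + (1 - cos(theta)) * (K * K)
}

// The marker's local X axis, expressed in camera coordinates, is the first
// column of R; comparing its direction (via atan2) with the direction from
// the marker to the ball gives the angle the robot still has to turn.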
I have done a tiny bit of 3D graphics in the past. When you move or rotate a SceneKit node, does it automatically update its translation matrix, or do you have to update it yourself?
Are "position" and "eulerAngles" both properties that are... absolute.
For example, if I am in SpriteKit and set the translation to (1, 0), it will be at that point relative to the origin.
And if I set the z rotation to 90, it will be rotated 90 degrees.
And if I increment the translation's x (with +=), it will start moving in a line.
And the same for zRotation: if incremented, it will rotate. In SceneKit, if I do similar things to the translation and euler angle values, will they do the same thing?
Also, what exactly does the accelerometer think it's measuring? Is it like the amount of motion in a certain period? So basically, is it the delta between two consecutive positions that the device was in?
Yes, this question is definitely broad; however, these topics are much better placed here than scattered across three tiny posts.
Doe, let me see if I can help.
Translation matrix? It has a transform matrix that includes translation, scale and rotation, and yes, it is automatically updated when you change any of the three, and vice-versa.
If I understood well, yes, just like in SpriteKit. They are relative to their parent's coordinates. A position of (1, 0, 0) would mean the node (its center, unless you change its pivot, the anchorPoint in SpriteKit) will be at distance 1 from its parent's origin along the parent's X axis.
The same works for rotation: if NodeA has a 30-degree rotation on the X axis and you add a NodeB with a 20-degree rotation on X inside NodeA, NodeB will visually have a 50-degree rotation on X.
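A minimal SceneKit sketch of both points (the values are illustrative):

import SceneKit

// position and eulerAngles are relative to the parent node, as in SpriteKit.
let nodeA = SCNNode()
nodeA.eulerAngles.x = .pi / 6            // 30 degrees around X

let nodeB = SCNNode()
nodeB.position = SCNVector3(1, 0, 0)     // 1 unit along nodeA's X axis
nodeB.eulerAngles.x = .pi / 9            // 20 degrees more, relative to nodeA
nodeA.addChildNode(nodeB)
// nodeB now appears rotated 50 degrees around X in world space.

nodeB.position.x += 0.1                  // incrementing moves it along a line, as in SpriteKit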
The accelerometer measures the acceleration forces applied to the device at a specific moment, along the three axes of the device. Its unit is not m/s^2 but g (1 g is approximately 9.8 m/s^2). An important detail is that this measure includes the gravity acceleration as well.
So, if you measure the acceleration with the device standing upright, perpendicular to the ground, you would expect (0, -1, 0) (or (0, 1, 0) if upside down).
Lying flat on the ground it would be (0, 0, -1) or (0, 0, 1), depending on whether the screen faces the ceiling or the ground.
So, for every tick (of the accelerometer's update rate) it reports the acceleration imposed on the device at that moment. That's not the delta itself, but the delta can easily be calculated if you store the values.
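A minimal CoreMotion sketch of reading those per-tick values (the 60 Hz interval is illustrative):

import CoreMotion

let manager = CMMotionManager()
if manager.isAccelerometerAvailable {
    manager.accelerometerUpdateInterval = 1.0 / 60.0     // one "tick" every 1/60 s
    manager.startAccelerometerUpdates(to: .main) { data, _ in
        guard let a = data?.acceleration else { return }
        // Values are in g and include gravity: flat on a table,
        // screen up, this prints roughly (0, 0, -1).
        print(a.x, a.y, a.z)
    }
}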
I need to display an OpenGL cubemap (a 360-degree panoramic image used as a texture on a cube) 'aligned' with North on an iPhone.
0) The panoramic image is split into six images, applied onto the faces of the cube as a texture.
1) Since the 'front' face of the cubemap does not point towards North, I rotate the look-at matrix by theta degrees (found manually). This way when the GL view is displayed it shows the face containing the North view.
2) I rotate the OpenGL map using the attitude from CMDeviceMotion of a CMMotionManager. The view moves correctly. However, it is not yet 'aligned' with the North.
So far everything is fine. I need only to align the front face with North and then rotate it according to the phone motion data.
3) So I access the heading (compass heading) from a CLLocationManager. I read just one heading (the first update I receive) and use this value in step 1 when building the look-at matrix.
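For reference, a minimal Swift sketch of that step (the onHeading callback is a hypothetical hook into the code that rebuilds the look-at matrix):

import CoreLocation

final class HeadingReader: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    var onHeading: ((Double) -> Void)?                    // hypothetical hook

    override init() {
        super.init()
        manager.delegate = self
        manager.startUpdatingHeading()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        guard newHeading.trueHeading >= 0 else { return } // negative means invalid
        onHeading?(newHeading.trueHeading)                // degrees clockwise from North
    }
}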
After step 3, the OpenGL view is aligned with the surrounding environment. The view is kept (more or less) aligned at step 2, by the CMMotionManager. If I launch the app facing South, the 'back' face of the cube is shown: it is aligned.
However, sometimes the first compass reading is not very accurate. Furthermore, its accuracy improves as the user moves the phone. The idea is to continuously modify the rotation applied to the look-at matrix by taking into account the continuous readings of the compass heading.
So I have implemented also step 4.
4) Instead of using only the first reading of the heading, I keep reading updates from the CLLocationManager and use them to continuously align the look-at matrix, which is now rotated by the angle theta (found manually at step 1) and by the angle returned by the compass service.
After step 4 nothing works: the view is fixed in one position, and moving the phone does not change the view. The cube rotates with the phone, meaning that I always see the same face of the cube.
From my point of view (but I am clearly wrong), by first rotating the look-at matrix to align with North and then applying the rotation computed from the CMDeviceMotion attitude, nothing should change with respect to step 3.
Which step of my reasoning is wrong?
I am just starting out in XNA and have a question about rotation. When you multiply a vector by a rotation matrix in XNA, it goes counter-clockwise. This I understand.
However, let me give you an example of what I don't get. Let's say I load a random art asset into the pipeline. I then create some variable to increment by 2 degrees every frame when the update method runs (testRot += 0.034906585f). The main source of my confusion is that the asset rotates clockwise in this screen space. This confuses me, as a rotation matrix should rotate a vector counter-clockwise.
One other thing: when I specify my position vector as well as my origin, I understand that I am rotating about the origin. Am I to assume that there are perpendicular axes passing through this asset's origin as well? If so, where does rotation start from? In other words, am I starting rotation from the top of the Y-axis or from the X-axis?
The XNA SpriteBatch works in Client Space, where "up" is Y-, not Y+ (as it is in Cartesian space, projection space, and what most people usually select for their world space). This makes the rotation appear clockwise (not counter-clockwise as it would in Cartesian space). The actual coordinates the rotation produces are the same.
Rotations are relative, so they don't really "start" from any specified position.
If you are using maths functions like sin or cos or atan2, then absolute angles always start from the X+ axis as zero radians, and the positive rotation direction rotates towards Y+.
The order of operations of SpriteBatch looks something like this:
Sprite starts as a quad with the top-left corner at (0,0), its size being the same as its texture size (or SourceRectangle).
Translate the sprite back by its origin (thus placing its origin at (0,0)).
Scale the sprite
Rotate the sprite
Translate the sprite by its position
Apply the matrix from SpriteBatch.Begin
This places the sprite in Client Space.
Finally a matrix is applied to each batch to transform that Client Space into the Projection Space used by the GPU. (Projection space is from (-1,-1) at the bottom left of the viewport, to (1,1) in the top right.)
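A hedged sketch of that composition as 2D homogeneous (3x3) matrices, written in Swift simd purely to illustrate the order (the origin, scale, rotation, and position values are illustrative):

import Foundation
import simd

func translation(_ x: Float, _ y: Float) -> simd_float3x3 {
    var m = matrix_identity_float3x3
    m.columns.2 = simd_float3(x, y, 1)
    return m
}
func rotation(_ a: Float) -> simd_float3x3 {
    var m = matrix_identity_float3x3
    m.columns.0 = simd_float3(cos(a), sin(a), 0)
    m.columns.1 = simd_float3(-sin(a), cos(a), 0)
    return m
}
func scale(_ s: Float) -> simd_float3x3 {
    var m = matrix_identity_float3x3
    m.columns.0.x = s
    m.columns.1.y = s
    return m
}

// Applied right to left to a column vector: origin shift, scale, rotate, position.
let sprite = translation(300, 200) * rotation(.pi / 4) * scale(2) * translation(-16, -16)
let topLeft = sprite * simd_float3(0, 0, 1)   // where the sprite's top-left corner lands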
Since you are new to XNA, allow me to introduce a library that will greatly help you out while you learn. It is called XNA Debug Terminal, an open source project that allows you to run arbitrary code at runtime, so you can see whether your variables have the values you expect. All of this happens in a terminal display on top of your game, without pausing the game. It can be downloaded at http://www.protohacks.net/xna_debug_terminal
It is free and very easy to set up, so you really have nothing to lose.