DeviceMotion relative to world - multiplyByInverseOfAttitude - iOS

What is the correct way to use CMAttitude's multiplyByInverseOfAttitude: method?
Assuming an iOS 5 device lying flat on a table, after starting CMMotionManager with:
CMMotionManager *motionManager = [[CMMotionManager alloc] init];
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:
    CMAttitudeReferenceFrameXTrueNorthZVertical];
Later, CMDeviceMotion objects are obtained:
CMDeviceMotion *deviceMotion = [motionManager deviceMotion];
I expect that [deviceMotion attitude] reflects the rotation of the device from True North.
By observation, [deviceMotion userAcceleration] reports acceleration in the device reference frame. That is, moving the device side to side (keeping it flat on the table) registers acceleration in the x-axis. Turning the device 90° (still flat) and moving the device side to side still reports x acceleration.
What is the correct way to transform [deviceMotion userAcceleration] to obtain North-South/East-West acceleration rather than left-right/forward-backward?
CMAttitude's multiplyByInverseOfAttitude: seems unnecessary, since a reference frame has already been specified, and it is unclear from the documentation how to apply the attitude to a CMAcceleration.

The question would not have arisen if CMDeviceMotion had an accessor for the userAcceleration in coordinates of the reference frame. So, I used a category to add the required method:
In CMDeviceMotion+TransformToReferenceFrame.h:
#import <CoreMotion/CoreMotion.h>

@interface CMDeviceMotion (TransformToReferenceFrame)
- (CMAcceleration)userAccelerationInReferenceFrame;
@end
and in CMDeviceMotion+TransformToReferenceFrame.m:
#import "CMDeviceMotion+TransformToReferenceFrame.h"
#implementation CMDeviceMotion (TransformToReferenceFrame)
-(CMAcceleration)userAccelerationInReferenceFrame
{
CMAcceleration acc = [self userAcceleration];
CMRotationMatrix rot = [self attitude].rotationMatrix;
CMAcceleration accRef;
accRef.x = acc.x*rot.m11 + acc.y*rot.m12 + acc.z*rot.m13;
accRef.y = acc.x*rot.m21 + acc.y*rot.m22 + acc.z*rot.m23;
accRef.z = acc.x*rot.m31 + acc.y*rot.m32 + acc.z*rot.m33;
return accRef;
}
#end
and in Swift 3:
extension CMDeviceMotion {
    var userAccelerationInReferenceFrame: CMAcceleration {
        let acc = self.userAcceleration
        let rot = self.attitude.rotationMatrix
        var accRef = CMAcceleration()
        accRef.x = acc.x*rot.m11 + acc.y*rot.m12 + acc.z*rot.m13
        accRef.y = acc.x*rot.m21 + acc.y*rot.m22 + acc.z*rot.m23
        accRef.z = acc.x*rot.m31 + acc.y*rot.m32 + acc.z*rot.m33
        return accRef
    }
}
Now, code that previously used [deviceMotion userAcceleration] can use [deviceMotion userAccelerationInReferenceFrame] instead.
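A quick usage sketch of the Swift extension above (assuming the motionManager from the question, started with CMAttitudeReferenceFrameXTrueNorthZVertical so the reference x-axis points to true north):

if let motion = motionManager.deviceMotion {
    let worldAcc = motion.userAccelerationInReferenceFrame
    // With CMAttitudeReferenceFrameXTrueNorthZVertical, worldAcc.x is the
    // north-south component and worldAcc.y the east-west component (in g).
}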

According to Apple's documentation, CMAttitude describes the orientation of the device relative to a given reference frame, while userAcceleration and gravity are reported in the device's own frame. So, to express those values in the reference frame, we do as @Batti said:
take the attitude rotation matrix at every update;
compute its inverse;
multiply the inverse matrix by the userAcceleration vector.
Here's the Swift version
import CoreMotion
import GLKit

extension CMDeviceMotion {
    func userAccelerationInReferenceFrame() -> CMAcceleration {
        let origin = userAcceleration
        let rotation = attitude.rotationMatrix
        let matrix = rotation.inverse()
        var result = CMAcceleration()
        result.x = origin.x * matrix.m11 + origin.y * matrix.m12 + origin.z * matrix.m13
        result.y = origin.x * matrix.m21 + origin.y * matrix.m22 + origin.z * matrix.m23
        result.z = origin.x * matrix.m31 + origin.y * matrix.m32 + origin.z * matrix.m33
        return result
    }

    func gravityInReferenceFrame() -> CMAcceleration {
        let origin = self.gravity
        let rotation = attitude.rotationMatrix
        let matrix = rotation.inverse()
        var result = CMAcceleration()
        result.x = origin.x * matrix.m11 + origin.y * matrix.m12 + origin.z * matrix.m13
        result.y = origin.x * matrix.m21 + origin.y * matrix.m22 + origin.z * matrix.m23
        result.z = origin.x * matrix.m31 + origin.y * matrix.m32 + origin.z * matrix.m33
        return result
    }
}

extension CMRotationMatrix {
    func inverse() -> CMRotationMatrix {
        let matrix = GLKMatrix3Make(Float(m11), Float(m12), Float(m13),
                                    Float(m21), Float(m22), Float(m23),
                                    Float(m31), Float(m32), Float(m33))
        let invert = GLKMatrix3Invert(matrix, nil)
        return CMRotationMatrix(m11: Double(invert.m00), m12: Double(invert.m01), m13: Double(invert.m02),
                                m21: Double(invert.m10), m22: Double(invert.m11), m23: Double(invert.m12),
                                m31: Double(invert.m20), m32: Double(invert.m21), m33: Double(invert.m22))
    }
}
Hope it helps a little bit

I tried to implement a solution after reading the paper linked above.
The steps are as follows:
take the attitude rotation matrix at every update;
compute its inverse;
multiply the inverse matrix by the userAcceleration vector.
The resulting vector is the projection of the acceleration onto the reference frame:
-x north, +x south
-y east, +y west
My code isn't perfect yet; I'm working on it.

The reference frame is tied to the attitude value; look at the yaw angle. If you don't use a reference frame, the yaw is always zero when your app starts. If instead you use the reference frame CMAttitudeReferenceFrameXTrueNorthZVertical, the yaw indicates the angle between the device's x-axis and true north.
With this information you can identify the attitude of the phone in Earth coordinates, and therefore the position of the accelerometer axes with respect to the cardinal points.
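For example, a minimal Swift sketch of reading that yaw (modern API naming; true-north reference frames also need location services to be available):

import CoreMotion

let manager = CMMotionManager()
manager.startDeviceMotionUpdates(using: .xTrueNorthZVertical, to: .main) { motion, _ in
    guard let attitude = motion?.attitude else { return }
    // With this reference frame, yaw is the angle between the device's
    // x-axis and true north (zero when the x-axis points north).
    print("Yaw from true north:", attitude.yaw)
}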

Related

How to detect object clicked in WebGL?

Firstly, I am not using three.js in my Orbits app because I encountered a number of limitations, including, but not limited to, issues with texture resolution and my requirement for complex lighting equations, but I would like to implement something like three.js's raycaster to allow me to detect the object clicked by the user.
I'm new to WebGL, but an "old hand" in software development so I'm looking for some hints about where to start.
The approach is as follows:
You render your scene twice: once normally, which is displayed, and a second time with each object drawn in a unique flat colour, which is not displayed. Then you call gl.readPixels on the second render at the click position from the first, and decode the colour to identify the object.
Now I have to implement it myself.
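The GL side aside, the bookkeeping for that approach is just packing an object index into a colour and unpacking the pixel you read back. A minimal sketch (Swift here, for consistency with the rest of this page; the names are mine):

// Pack an object index into 24 bits, one byte per RGB channel,
// for the hidden picking pass.
func pickColor(for index: Int) -> (r: UInt8, g: UInt8, b: UInt8) {
    (UInt8((index >> 16) & 0xFF), UInt8((index >> 8) & 0xFF), UInt8(index & 0xFF))
}

// Decode the pixel returned by gl.readPixels back into the object index.
func objectIndex(r: UInt8, g: UInt8, b: UInt8) -> Int {
    (Int(r) << 16) | (Int(g) << 8) | Int(b)
}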
Picking spheres
When picking spheres, or objects that are separated (not one inside another), you can use a simple point-to-ray distance test to find the closest object very quickly.
Example
The function below returns a function that does the calculation. As you are only interested in the closest object, the distances can remain squared. The distance from the camera is held as a unit distance along the ray.
function distanceFromRay() {
    var dSqr, ox, oy, oz, vx, vy, vz;
    function distanceSqr(px, py, pz) {
        const ax = px - ox, ay = py - oy, az = pz - oz;
        const u = (ax * vx + ay * vy + az * vz) / dSqr;
        distanceSqr.unit = u;
        if (u > 0) { // is past origin
            const bx = ox + vx * u - px, by = oy + vy * u - py, bz = oz + vz * u - pz;
            return bx * bx + by * by + bz * bz; // dist sqr to closest point on ray
        }
        return Infinity;
    }
    distanceSqr.unit = 0;
    distanceSqr.setRay = function (x, y, z, xx, yy, zz) { // ray from origin x, y, z,
                                                          // infinite length along xx, yy, zz
        ox = x; oy = y; oz = z;
        vx = xx; vy = yy; vz = zz;
        dSqr = vx * vx + vy * vy + vz * vz;
    };
    return distanceSqr;
}
Usage
There is a one-time setup call:
// setup
const distToRay = distanceFromRay();
At the start of a frame that requires a pick, calculate the pick ray and set it. Also set the minimum (squared) distance from the ray, and the eye distance.
// at start of frame set pick ray
distToRay.setRay(eye.x, eye.y, eye.z, pointer.ray.x, pointer.ray.y, pointer.ray.z);
var minDist = maxObjRadius * maxObjRadius;
var nearestObj = undefined;
var eyeDist = Infinity;
Then for each pickable object, get the squared distance by passing the object's centre, and compare it to the best distance found so far (in this frame), the object's squared radius, and the distance from the eye.
// per object
const dis = distToRay(obj.pos.x, obj.pos.y, obj.pos.z);
if (dis < obj.radius * obj.radius && dis < minDist && distToRay.unit > 0 && distToRay.unit < eyeDist) {
    minDist = dis;
    eyeDist = distToRay.unit;
    nearestObj = obj;
}
At the end of the frame, if nearestObj is not undefined, it will hold a reference to the picked object.
// end of frame
if (nearestObj) {
    // you have the closest object
}

Rotation angles from Quaternion

I have a 3D scene in which I position a few objects on an imaginary sphere; now I want to rotate them with device motion.
I use a spherical coordinate system and calculate positions on the sphere like this:
x = ρ * sin(ϕ) * cos(θ)
y = ρ * sin(ϕ) * sin(θ)
z = ρ * cos(ϕ)
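For reference, a direct Swift translation of those formulas (the function name is mine):

import Foundation

// Spherical to Cartesian: rho = radius, phi = polar angle from +z,
// theta = azimuth in the x-y plane.
func sphericalToCartesian(rho: Double, phi: Double, theta: Double)
    -> (x: Double, y: Double, z: Double) {
    (rho * sin(phi) * cos(theta), rho * sin(phi) * sin(theta), rho * cos(phi))
}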
Also, I use angles (from 0 to 2 * M_PI) to perform horizontal rotation (in the z-x plane).
Everything works perfectly until I try to use the quaternion from the motion matrix.
I can extract values like pitch, yaw, and roll:
GLKQuaternion quat = GLKQuaternionMakeWithMatrix4(motionMatrix);
CGFloat adjRoll = atan2(2 * (quat.y * quat.w - quat.x * quat.z), 1 - 2 * quat.y * quat.y - 2 * quat.z * quat.z);
CGFloat adjPitch = atan2(2 * (quat.x * quat.w + quat.y * quat.z), 1 - 2 * quat.x * quat.x - 2 * quat.z * quat.z);
CGFloat adjYaw = asin(2 * quat.x * quat.y + 2 * quat.w * quat.z);
or, alternatively:
CMAttitude *currentAttitude = [MotionDataProvider sharedProvider].attitude; //from CoreMotion
CGFloat roll = currentAttitude.roll;
CGFloat pitch = currentAttitude.pitch;
CGFloat yaw = currentAttitude.yaw;
(Note: the values I get differ between these two methods.)
The problem is that pitch, yaw, and roll in this form are not directly applicable to my scheme.
How can I convert pitch, yaw, and roll, or the quaternion, or the motion matrix, to the angles in x-z that my rotation model requires? Am I going about this the right way, or have I missed some milestone point?
How do I get the rotation around the y-axis from the rotation matrix/quaternion received from CoreMotion, zeroing out the current z and x, so the displayed object rotates only around the y-axis?
I'm on iOS, by the way, but I guess that's not important here.

How to Convert a SCNVector3 position to a CGPoint SCNView coordinate?

I have this app that just works in landscape.
I have an object in SceneKit. That object is at a certain position, specified by:
SCNNode *buttonRef = [scene.rootNode childNodeWithName:@"buttonRef" recursively:YES];
SCNVector3 buttonRefPosition = [buttonRef position];
Now I need to convert that SCNVector3 to mainView 2D coordinates, where
mainView = (SCNView *)self.view;
The scene is using orthogonal camera, so no perspective. The camera is the default camera. No rotation, no translation. Camera's orthographicScale is 5.4.
How do I convert that?
I have found other answers on SO about the reverse question, that is, converting from CGPoint to SCNVector3, but it isn't obvious to me how to go the other way, from SCNVector3 to CGPoint.
Since SCNView is a subclass of UIView, (0,0) is the upper-left point, and in the case of an iPhone 6 in landscape, (667, 375) is the lower-right point.
My button is almost fully to the right, at about half the height. In UIKit terms I would say this button is at (600, 187), but when I do this:
SCNNode *buttonRef = [scene.rootNode childNodeWithName:@"buttonRef" recursively:YES];
SCNVector3 buttonRefPosition = [buttonRef position];
SCNVector3 projectedPoint = [mainView projectPoint: buttonRefPosition];
I get projectedPoint equal to (7.8, 375, 0) (?),
and when I add a 10x10 view at the coordinate (7.8, 375) using UIKit, I get a view at the lower left!!
Like I said, the app operates in landscape only, but these coordinate systems are all messed up.
Converting model-space points to screen-space points is called projecting (because it's applying the camera's projection transform to map from a 3D space into a 2D one). The renderer (via the GPU) does it many, many times per frame to get the points in your scene onto the screen.
There's also a convenience method for when you need to do it yourself: projectPoint:
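In Swift that boils down to something like this sketch (the helper name is mine; worldPosition needs iOS 11+):

import SceneKit

// Project a node's position into the SCNView's 2D coordinate space.
// Using worldPosition folds in any parent transforms along the way.
func screenPoint(of node: SCNNode, in view: SCNView) -> CGPoint {
    let p = view.projectPoint(node.worldPosition)
    return CGPoint(x: CGFloat(p.x), y: CGFloat(p.y))
}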
Quite an old question, but unfortunately @rickster's answer was no real help for me, as projectPoint: kept delivering messed-up view coordinates in my case too (the same problem @SpaceDog described)!
However, I found that in my case the problem was that I had applied a transform to my root node (turned the world upside down ;-)). This obviously led to the wrong results from projectPoint:...
As I couldn't remove the transform, I had to adjust my input to projectPoint: accordingly.
SCNVector3 viewStartPosition = ...; // Some place in the scene
SCNVector3 viewStartPositionRel = SCNVector3FromSCNVector4(SCNMatrix4MultV(self.scene.rootNode.worldTransform, SCNVector4FromSCNVector3(viewStartPosition)));
// Vector from UISceneView 0/0 top/left to start point
SCNVector3 viewStartPositionInViewCoords = [self.sceneView projectPoint:viewStartPositionRel];
Unfortunately, Apple provides no matrix math for SCNMatrix4 matrices, so the (really simple) matrix-handling functions used here are also home-made:
/*
 SCNVector4FromSCNVector3
 */
SCNVector4 SCNVector4FromSCNVector3(SCNVector3 pV) {
    return SCNVector4Make(pV.x, pV.y, pV.z, 0.0);
}

/*
 SCNVector3FromSCNVector4
 */
SCNVector3 SCNVector3FromSCNVector4(SCNVector4 pV) {
    return SCNVector3Make(pV.x, pV.y, pV.z);
}

/*
 SCNMatrix4MultV
 */
SCNVector4 SCNMatrix4MultV(SCNMatrix4 pM, SCNVector4 pV) {
    SCNVector4 r = {
        pM.m11 * pV.x + pM.m12 * pV.y + pM.m13 * pV.z + pM.m14 * pV.w,
        pM.m21 * pV.x + pM.m22 * pV.y + pM.m23 * pV.z + pM.m24 * pV.w,
        pM.m31 * pV.x + pM.m32 * pV.y + pM.m33 * pV.z + pM.m34 * pV.w,
        pM.m41 * pV.x + pM.m42 * pV.y + pM.m43 * pV.z + pM.m44 * pV.w
    };
    return r;
}
I've got no idea if this is a bug in SceneKit, or if it is simply forbidden to apply transforms to the rootNode...
Anyway: problem solved :-)

Finding normal vector to iOS device

I would like to use CMAttitude to know the vector normal to the glass of the iPad/iPhone's screen (relative to the ground). As such, I would get vectors like the following.
Notice that this is different from orientation, in that I don't care how the device is rotated about the z-axis. So if I were holding the iPad above my head, facing down, it would read (0,-1,0), and even as I spun it around above my head (like a helicopter), it would continue to read (0,-1,0).
I feel like this might be pretty easy, but as I am new to quaternions and don't fully understand the reference frame options for device motion, it's been evading me all day.
In your case we can say that the rotation of the device equals the rotation of the device's normal (rotation around the normal itself is simply ignored, as you specified).
CMAttitude, which you can get via CMMotionManager.deviceMotion, provides the rotation relative to a reference frame. Its properties quaternion, rotation matrix, and Euler angles are just different representations of it.
The reference frame can be specified when you start device motion updates, using CMMotionManager's startDeviceMotionUpdatesUsingReferenceFrame: method. Until iOS 4 you had to use multiplyByInverseOfAttitude:.
Putting this together, you just have to multiply the quaternion, in the right way, by the normal vector the device has when it lies face up on the table. The right way to represent a rotation with quaternion multiplication (see Rotating vectors) is:
n = q * e * q', where q is the quaternion delivered by CMAttitude [w, (x, y, z)], q' is its conjugate [w, (-x, -y, -z)], and e is the quaternion representation of the face-up normal [0, (0, 0, 1)]. Unfortunately, Apple's CMQuaternion is a struct, so you need a small helper class.
Quaternion *e = [[Quaternion alloc] initWithValues:0 x:0 y:0 z:1];
CMQuaternion cm = deviceMotion.attitude.quaternion;
Quaternion *quat = [[Quaternion alloc] initWithValues:cm.w x:cm.x y:cm.y z:cm.z];
Quaternion *quatConjugate = [[Quaternion alloc] initWithValues:cm.w x:-cm.x y:-cm.y z:-cm.z];
[quat multiplyWithRight:e];
[quat multiplyWithRight:quatConjugate];
// quat.x, .y, .z contain your normal
Quaternion.h:
@interface Quaternion : NSObject {
    double w;
    double x;
    double y;
    double z;
}
@property(readwrite, assign) double w;
@property(readwrite, assign) double x;
@property(readwrite, assign) double y;
@property(readwrite, assign) double z;
@end
Quaternion.m:
@implementation Quaternion
@synthesize w, x, y, z; // back the properties with the ivars used below

- (Quaternion *)multiplyWithRight:(Quaternion *)q {
    double newW = w*q.w - x*q.x - y*q.y - z*q.z;
    double newX = w*q.x + x*q.w + y*q.z - z*q.y;
    double newY = w*q.y + y*q.w + z*q.x - x*q.z;
    double newZ = w*q.z + z*q.w + x*q.y - y*q.x;
    w = newW;
    x = newX;
    y = newY;
    z = newZ;
    // one multiplication won't denormalise, but when multiplying again and again
    // we should ensure that the result stays normalised
    return self;
}

- (id)initWithValues:(double)w2 x:(double)x2 y:(double)y2 z:(double)z2 {
    if ((self = [super init])) {
        x = x2; y = y2; z = z2; w = w2;
    }
    return self;
}
@end
I know quaternions are a bit weird at the beginning, but once you get the idea they are really brilliant. It helped me to imagine a quaternion as a rotation around the vector (x, y, z), with w being the cosine of half the rotation angle.
If you need to do more with them take a look at cocoamath open source project. The classes Quaternion and its extension QuaternionOperations are a good starting point.
For the sake of completeness: yes, you can do it with matrix multiplication as well:
n = M * e
But I would prefer the quaternion way; it saves you all the trigonometric hassle and performs better.
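As a sketch of that matrix route: with e = (0, 0, 1), the product M * e is simply the third column of the attitude rotation matrix, so no explicit multiplication is even needed (Swift; the function name is mine):

import CoreMotion

// n = M * e with e = (0, 0, 1) picks out the third column of M.
func deviceNormal(from attitude: CMAttitude) -> (x: Double, y: Double, z: Double) {
    let m = attitude.rotationMatrix
    return (m.m13, m.m23, m.m33)
}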
Thanks to Kay for the starting point on the solution. Here is my implementation for anyone that needs it. I made a couple of small tweaks to Kay's advice for my situation. As a heads-up, I'm using a landscape-only presentation. I have code that updates a variable _isLandscapeLeft to make the necessary adjustment to the direction of the vector.
Quaternion.h
@interface Quaternion : NSObject {
    //double w;
    //double x;
    //double y;
    //double z;
}
@property(readwrite, assign) double w;
@property(readwrite, assign) double x;
@property(readwrite, assign) double y;
@property(readwrite, assign) double z;
- (id)initWithValues:(double)w2 x:(double)x2 y:(double)y2 z:(double)z2;
- (Quaternion *)multiplyWithRight:(Quaternion *)q;
@end
Quaternion.m
#import "Quaternion.h"
#implementation Quaternion
- (Quaternion*) multiplyWithRight:(Quaternion*)q {
double newW = _w*q.w - _x*q.x - _y*q.y - _z*q.z;
double newX = _w*q.x + _x*q.w + _y*q.z - _z*q.y;
double newY = _w*q.y + _y*q.w + _z*q.x - _x*q.z;
double newZ = _w*q.z + _z*q.w + _x*q.y - _y*q.x;
_w = newW;
_x = newX;
_y = newY;
_z = newZ;
// one multiplication won't denormalise but when multipling again and again
// we should assure that the result is normalised
return self;
}
- (id) initWithValues:(double)w2 x:(double)x2 y:(double)y2 z:(double)z2 {
if ((self = [super init])) {
_x = x2; _y = y2; _z = z2; _w = w2;
}
return self;
}
#end
And my game class that uses the quaternion for shooting:
- (void)fireWeapon {
    ProjectileBaseClass *bullet = [[ProjectileBaseClass alloc] init];
    bullet.position = SCNVector3Make(0, 1, 0);
    [self.rootNode addChildNode:bullet];
    Quaternion *e = [[Quaternion alloc] initWithValues:0 x:0 y:0 z:1];
    CMQuaternion cm = _currentAttitude.quaternion;
    Quaternion *quat = [[Quaternion alloc] initWithValues:cm.w x:cm.x y:cm.y z:cm.z];
    Quaternion *quatConjugate = [[Quaternion alloc] initWithValues:cm.w x:-cm.x y:-cm.y z:-cm.z];
    quat = [quat multiplyWithRight:e];
    quat = [quat multiplyWithRight:quatConjugate];
    SCNVector3 directionToShoot;
    if (_isLandscapeLeft) {
        directionToShoot = SCNVector3Make(quat.y, -quat.x, -quat.z);
    } else {
        directionToShoot = SCNVector3Make(-quat.y, quat.x, -quat.z);
    }
    SCNAction *shootBullet = [SCNAction moveBy:directionToShoot duration:.1];
    [bullet runAction:[SCNAction repeatActionForever:shootBullet]];
}

Algorithm for creating a circular path around a center mass?

I am attempting to simply make objects orbit around a center point, e.g.
The green and blue objects represent objects which should keep their distance to the center point while rotating, based on an angle which I pass into the method.
I have attempted to create a function in Objective-C, but it doesn't work right without a static number (it rotates around the center, but not from the object's true starting point or distance):
- (void)rotateGear:(UIImageView *)view heading:(int)heading
{
    // int distanceX = 160 - view.frame.origin.x;
    // int distanceY = 240 - view.frame.origin.y;
    float x = 160 - view.image.size.width / 2 + (50 * cos(heading * (M_PI / 180)));
    float y = 240 - view.image.size.height / 2 + (50 * sin(heading * (M_PI / 180)));
    view.frame = CGRectMake(x, y, view.image.size.width, view.image.size.height);
}
My magic numbers 160 and 240 are the center of the canvas onto which I'm drawing the images. 50 is a static number (and the problem), which allows the function to work partially correctly, without maintaining the object's starting position or correct distance. I don't know what to put there, unfortunately.
heading is a parameter that passes in a degree, from 0 to 359. It is calculated by a timer and increments outside of this class.
Essentially, I would like to be able to drop any image onto my canvas and, based on the starting point of the image, have it rotate around the center of my circle. This means that if I were to drop an image at point (10,10), the distance to the center of the circle would persist, using (10,10) as a starting point. The object would rotate 360 degrees around the center and reach its original starting point.
The expected result would be, for instance, to pass (10,10) into the method at zero degrees, and get back, say, (15,25) (not real numbers) at 5 degrees.
I know this is very simple (and this problem description is entirely overkill), but I'm going cross eyed trying to figure out where I'm hosing things up. I don't care about what language examples you use, if any. I'll be able to decipher your meanings.
Failure Update
I've gotten farther, but I still cannot get the right calculation. My new code looks like the following:
heading is set to 1 degree.
- (void)rotateGear:(UIImageView *)view heading:(int)heading
{
    float y1 = view.frame.origin.y + (view.frame.size.height/2); // 152
    float x1 = view.frame.origin.x + (view.frame.size.width/2);  // 140.5
    float radius = sqrtf(powf(160 - x1, 2.0f) + powf(240 - y1, 2.0f)); // 90.13
    // I know that I need to calculate 90.13 pixels from my center, at 1 degree.
    float x = 160 + radius * (cos(heading * (M_PI / 180.0f))); // 250.12
    float y = 240 + radius * (sin(heading * (M_PI / 180.0f))); // 241.57
    // The numbers are very skewed.
    view.frame = CGRectMake(x, y, view.image.size.width, view.image.size.height);
}
I'm getting results that are no where close to where the point should be. The problem is with the assignment of x and y. Where am I going wrong?
You can find the distance of the point from the centre pretty easily:
radius = sqrt((160 - x)^2 + (240 - y)^2)
where (x, y) is the initial position of the centre of your object. Then just replace 50 by the radius.
http://en.wikipedia.org/wiki/Pythagorean_theorem
You can then figure out the initial angle using trigonometry (tan = opposite / adjacent, so draw a right-angled triangle using the centre mass and the centre of your orbiting object to visualize this):
angle = arctan((y - 240) / (x - 160))
if x > 160, or:
angle = arctan((y - 240) / (x - 160)) + 180
if x < 160
http://en.wikipedia.org/wiki/Inverse_trigonometric_functions
Edit: bear in mind I don't actually know any Objective-C, but this is basically what I think you should do (you should be able to translate this to correct Obj-C pretty easily; this is just for demonstration):
// Your object gets created here somewhere
float x1 = view.frame.origin.x + (view.frame.size.width/2); // 140.5
float y1 = view.frame.origin.y + (view.frame.size.height/2); // 152
float radius = sqrtf(powf(160 - x1 ,2.0f) + powf(240 - y1, 2.0f)); // 90.13
// Calculate the initial angle here, as per the first part of my answer
float initialAngle = atan((y1 - 240) / (x1 - 160)) * 180.0f / M_PI;
if (x1 < 160)
    initialAngle += 180;
// Calculate the adjustment we need to add to heading
int adjustment = (int)(initialAngle - heading);
So we only execute the code above once (when the object gets created). We need to remember radius and adjustment for later. Then we alter rotateGear to take an angle and a radius as inputs instead of heading (this is much more flexible anyway):
- (void)rotateGear:(UIImageView *)view radius:(float)radius angle:(int)angle
{
    float x = 160 + radius * (cos(angle * (M_PI / 180.0f)));
    float y = 240 + radius * (sin(angle * (M_PI / 180.0f)));
    view.frame = CGRectMake(x, y, view.image.size.width, view.image.size.height);
}
And each time we want to update the position we make a call like this:
[objectName rotateGear:view radius:radius angle:(adjustment + heading)];
Btw, once you manage to get this working, I'd strongly recommend converting all your angles so you're using radians all the way through; it makes everything much neater and easier to follow!
The formula for x and y coordinates of a point on a circle, based on radians, radius, and center point:
x = cos(angle) * radius + center_x
y = sin(angle) * radius + center_y
You can find the radius with HappyPixel's formula.
Once you figure out the radius and the center point, you can simply vary the angle to get all the points on the circle that you'd want.
If I understand correctly, you want to do InitObject(x, y) followed by UpdateObject(angle), where angle sweeps from 0 to 360 (but use radians instead of degrees for the math).
So you need to track the angle and radius for each object:
InitObject(x, y)
    relative_x = x - center.x
    relative_y = y - center.y
    object.radius = sqrt(relative_x^2 + relative_y^2)
    object.initial_angle = atan2(relative_y, relative_x)
And
UpdateObject(angle)
    newangle = (object.initial_angle + angle) % (2*PI)
    object.x = cos(newangle) * object.radius + center.x
    object.y = sin(newangle) * object.radius + center.y
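A compact Swift rendering of that scheme (types and names are mine), which also avoids the asker's original issue of recomputing the radius every frame:

import CoreGraphics
import Foundation

struct Orbiter {
    let radius: Double
    let initialAngle: Double

    init(start: CGPoint, center: CGPoint) {
        let dx = Double(start.x - center.x), dy = Double(start.y - center.y)
        radius = sqrt(dx * dx + dy * dy) // computed once, at drop time
        initialAngle = atan2(dy, dx)     // quadrant-safe start angle
    }

    // angle is the sweep in radians since the start position.
    func position(at angle: Double, around center: CGPoint) -> CGPoint {
        let a = initialAngle + angle
        return CGPoint(x: center.x + CGFloat(cos(a) * radius),
                       y: center.y + CGFloat(sin(a) * radius))
    }
}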
dx = dropx - centerx;         // target - source
dy = -(dropy - centery);      // minus = invert screen coords to cartesian coords
radius = sqrt(dy*dy + dx*dx); // faster if your compiler optimizer is bad
if dx = 0 then dx = 0.000001; // hack/patch/fudge/nudge*
angle = atan(dy/dx);          // set this as the start angle for the angle incrementer
Then go with the code you have and you'll be fine. You seem to be calculating the radius from the current position each time, though? Like the angle, this should only be done once, when the object is dropped, or else the radius might not stay constant.
*Instead of handling the three special cases for dx = 0: if you need better than 1/100-degree precision for the start angle, go with those instead; google "polar arctan".
