I'm trying to run Madgwick's sensor fusion algorithm on iOS. Since the code is open source, I have already included it in my project and call its methods with the sensor values provided by CoreMotion.
But it seems that the algorithm expects the sensor measurements in a different coordinate system. The Apple CoreMotion sensor system is shown on the right side, Madgwick's on the left. Here is the picture of the different coordinate systems. Both systems follow the right-hand rule.
To me it looks like there is a 90 degree rotation around the z axis, but applying that didn't work.
I also tried to swap the x and y axes (and invert the z axis) as suggested by other Stack Overflow posts for WP, but that didn't work either. So do you have a hint?
It would be perfect if Madgwick's algorithm produced output in the same reference frame as CoreMotion (CMAttitudeReferenceFrameXMagneticNorthZVertical).
Furthermore, I'm looking for a good working value of betaDef on the iPhone. betaDef acts as a kind of proportional gain and is currently set to 0.1f.
Any help on how to achieve this would be appreciated.
I'm not sure how to write this in Objective-C, but here's how I accomplished the coordinate transformations in vanilla C. I also wanted to rotate the orientation so that +y is north; that rotation is also reflected in the method below.
The method expects a 4-element quaternion in wxyz order and returns the transformed quaternion in the same format:
#include <math.h>

void quatMult(float *a, float *b, float *ret); // defined below

void madgeq_to_openglq(float *fMadgQ, float *fRetQ) {
    float fTmpQ[4];
    // Rotate around the Z-axis by 90 degrees:
    float fXYRotationQ[4] = { sqrt(0.5), 0, 0, -1.0*sqrt(0.5) };
    // Invert the rotation vector components to accommodate handedness issues:
    fTmpQ[0] = fMadgQ[0];
    fTmpQ[1] = fMadgQ[1] * -1.0f;
    fTmpQ[2] = fMadgQ[2];
    fTmpQ[3] = fMadgQ[3] * -1.0f;
    // And then store the transformed rotation into ret:
    quatMult(fTmpQ, fXYRotationQ, fRetQ);
}
// Quaternion multiplication operator. Expects its 4-element arrays in wxyz order.
void quatMult(float *a, float *b, float *ret) {
    ret[0] = (b[0] * a[0]) - (b[1] * a[1]) - (b[2] * a[2]) - (b[3] * a[3]);
    ret[1] = (b[0] * a[1]) + (b[1] * a[0]) + (b[2] * a[3]) - (b[3] * a[2]);
    ret[2] = (b[0] * a[2]) + (b[2] * a[0]) + (b[3] * a[1]) - (b[1] * a[3]);
    ret[3] = (b[0] * a[3]) + (b[3] * a[0]) + (b[1] * a[2]) - (b[2] * a[1]);
}
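A minimal usage sketch (the input values and variable names here are illustrative, not part of the original code):

// fMadg holds the filter's current estimate in wxyz order (e.g. Madgwick's q0..q3);
// fGL receives the remapped quaternion.
float fMadg[4] = { 1.0f, 0.0f, 0.0f, 0.0f };  // identity orientation as a placeholder
float fGL[4];
madgeq_to_openglq(fMadg, fGL);
// fGL is now in the rotated, handedness-adjusted frame described above.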
Hope that helps!
Our AR device is based on a camera with pretty strong optical zoom. We measure the distortion of this camera using classical camera-calibration tools (checkerboards), both through OpenCV and the GML Camera Calibration tools.
At higher zoom levels (I'll use 249 out of 255 as an example) we measure the following camera parameters at full HD resolution (1920x1080):
fx = 24545.4316
fy = 24628.5469
cx = 924.3162
cy = 440.2694
For the radial and tangential distortion we measured 4 values:
k1 = 5.423406
k2 = -2964.24243
p1 = 0.004201721
p2 = 0.0162647516
We are not sure how to interpret (read: implement) those extremely large values for k1 and k2. Using OpenCV's classic "undistort" operation to rectify the image with these values seems to work well. Unfortunately, this is (much) too slow for real-time usage.
The thumbnails below look similar; clicking them will display the full-size images, where you can spot the difference:
Camera footage
Undistorted using OpenCV
That's why we want to take the opposite approach: leave the camera footage distorted and apply a matching distortion to our 3D scene using shaders. Following the OpenCV documentation, and this accepted answer in particular, the distorted position for the corner point (0, 0) would be
// To relative coordinates
double x = (point.X - cx) / fx; // -960 / 24545 = -0.03911
double y = (point.Y - cy) / fy; // -540 / 24628 = -0.02193
double r2 = x*x + y*y; // 0.002010
// Radial distortion
// -0.03911 * (1 + 5.423406 * 0.002010 + -2964.24243 * 0.002010 * 0.002010) = -0.039067
double xDistort = x * (1 + k1 * r2 + k2 * r2 * r2);
// -0.02193 * (1 + 5.423406 * 0.002010 + -2964.24243 * 0.002010 * 0.002010) = -0.021906
double yDistort = y * (1 + k1 * r2 + k2 * r2 * r2);
// Tangential distortion
... left out for brevity
// Back to absolute coordinates.
xDistort = xDistort * fx + cx; // -0.039067 * 24545.4316 + 924.3162 = -34.6002 !!!
yDistort = yDistort * fy + cy; // -0.021906 * 24628.5469 + 440.2694 = -99.2435 !!!
These large pixel displacements (34 and 100 pixels at the upper-left corner) seem excessive and do not correspond with the undistorted image OpenCV generates.
So the specific question is: what is wrong with the way we interpreted the values we measured, and what should the correct code for distortion be?
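For reference, a minimal C++ sketch of the complete forward distortion model as given in the OpenCV documentation, with both the radial (k1, k2) and the tangential (p1, p2) terms written out for a single point; the function and parameter names are illustrative:

#include <cstdio>

// Distorts one undistorted pixel coordinate (px, py) using the standard
// OpenCV model with two radial and two tangential coefficients.
void distortPoint(double px, double py,
                  double fx, double fy, double cx, double cy,
                  double k1, double k2, double p1, double p2,
                  double* outX, double* outY)
{
    // To normalized (relative) coordinates
    double x = (px - cx) / fx;
    double y = (py - cy) / fy;
    double r2 = x * x + y * y;

    // Radial distortion
    double radial = 1.0 + k1 * r2 + k2 * r2 * r2;
    double xDistort = x * radial;
    double yDistort = y * radial;

    // Tangential distortion
    xDistort += 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x);
    yDistort += p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y;

    // Back to absolute pixel coordinates
    *outX = xDistort * fx + cx;
    *outY = yDistort * fy + cy;
}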
I have a 3D scene in which I position a few objects on an imaginary sphere, and now I want to rotate them with device motion.
I use a spherical coordinate system and calculate a position on the sphere like below:
x = ρ * sinϕ * cosθ
y = ρ * sinϕ * sinθ
z = ρ * cosϕ.
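A minimal C++ sketch of this conversion, assuming ρ is the sphere radius and ϕ, θ are in radians (the struct and function names are illustrative):

#include <cmath>

struct Vec3 { double x, y, z; };

// Spherical (rho, phi, theta) to Cartesian (x, y, z)
Vec3 sphericalToCartesian(double rho, double phi, double theta)
{
    Vec3 p;
    p.x = rho * sin(phi) * cos(theta);
    p.y = rho * sin(phi) * sin(theta);
    p.z = rho * cos(phi);
    return p;
}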
I also use angles (from 0 to 2 * M_PI) for performing the horizontal rotation (in the z-x plane).
As a result, everything works perfectly until I want to use the quaternion from the motion matrix.
I can extract values like pitch, yaw, and roll:
GLKQuaternion quat = GLKQuaternionMakeWithMatrix4(motionMatrix);
CGFloat adjRoll = atan2(2 * (quat.y * quat.w - quat.x * quat.z), 1 - 2 * quat.y * quat.y - 2 * quat.z * quat.z);
CGFloat adjPitch = atan2(2 * (quat.x * quat.w + quat.y * quat.z), 1 - 2 * quat.x * quat.x - 2 * quat.z * quat.z);
CGFloat adjYaw = asin(2 * quat.x * quat.y + 2 * quat.w * quat.z);
or I also try:
CMAttitude *currentAttitude = [MotionDataProvider sharedProvider].attitude; //from CoreMotion
CGFloat roll = currentAttitude.roll;
CGFloat pitch = currentAttitude.pitch;
CGFloat yaw = currentAttitude.yaw;
(Note: the values I get are different for these two methods.)
The problem is that pitch, yaw, and roll in this form are not applicable to my scheme.
How can I convert pitch, yaw, and roll, or the quaternion, or the motionMatrix, to the required angles in x-z for my rotation model? Am I going about this the right way, or have I missed some key point?
How do I get the rotation around the y axis from the rotation matrix/quaternion received from CoreMotion, setting the current z and x rotations to 0, so that the displayed object is rotated only around the y axis?
I use iOS, by the way, but I guess that is not important here.
Can anyone help me? How do I implement an area calculator using a group of latitudes and longitudes?
For example, a person walks around a building and I record the latitude and longitude while they walk. I want to calculate how many square feet the building occupies.
Well, I'm going to try to help you, but I'm not going to give the complete answer.
I think the first step is to convert the latitudes and longitudes into a Cartesian coordinate system. You should calculate the center of all the points (a simple median).
Second step: convert all points to ENU coordinates centered at that center.
I did this step, and here it is:
Constants:
#define DEGREES_TO_RADIANS (M_PI/180.0)
#define WGS84_A (6378137.0) // WGS 84 semi-major axis constant in meters
#define WGS84_E (8.1819190842622e-2) // WGS 84 eccentricity
Structs:
//To change to ECEF
typedef struct {
    double x;
    double y;
    double z;
} ECEFCoordinate;

typedef struct {
    double east;
    double north;
    double up;
} ENUCoordinate;
Methods (you need to pass through ECEF):
#pragma mark Geodetic utilities definition
-(ECEFCoordinate) ecefFromLatitude:(double)lat longitude:(double)lon andAltitude:(double)alt
{
    double clat = cos(lat * DEGREES_TO_RADIANS);
    double slat = sin(lat * DEGREES_TO_RADIANS);
    double clon = cos(lon * DEGREES_TO_RADIANS);
    double slon = sin(lon * DEGREES_TO_RADIANS);
    double N = WGS84_A / sqrt(1.0 - WGS84_E * WGS84_E * slat * slat);
    ECEFCoordinate ecef;
    ecef.x = (N + alt) * clat * clon;
    ecef.y = (N + alt) * clat * slon;
    ecef.z = (N * (1.0 - WGS84_E * WGS84_E) + alt) * slat;
    return ecef;
}
// Converts ECEF to ENU coordinates centered at the given lat, lon (with ecefCenter)
-(ENUCoordinate)enuFromECEFCenter:(ECEFCoordinate)ecefCenter withLat:(double)lat andLon:(double)lon fromEcef:(ECEFCoordinate)ecef
{
    double clat = cos(lat * DEGREES_TO_RADIANS);
    double slat = sin(lat * DEGREES_TO_RADIANS);
    double clon = cos(lon * DEGREES_TO_RADIANS);
    double slon = sin(lon * DEGREES_TO_RADIANS);
    double dx = ecefCenter.x - ecef.x;
    double dy = ecefCenter.y - ecef.y;
    double dz = ecefCenter.z - ecef.z;
    ENUCoordinate enu;
    enu.east = -slon*dx + clon*dy;
    enu.north = -slat*clon*dx - slat*slon*dy + clat*dz;
    enu.up = clat*clon*dx + clat*slon*dy + slat*dz;
    return enu;
}
Last step: calculate the area of the set of points in the Cartesian (east, north) coordinate system, just as you would with (x, y). I think the easiest way is to use triangles from the center to each pair of consecutive points; see the sketch below.
Good luck.
One last tip for calculating the area: I think you can find more help (and maybe a better way) on the internet.
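A minimal C++ sketch of that last step, reusing the ENUCoordinate struct above (the function name polygonArea is illustrative). It applies the shoelace formula, which sums the same signed triangles described above, and returns square meters; multiply by about 10.7639 to get square feet:

#include <cmath>
#include <vector>

// Area of a simple polygon whose vertices are given in walking order.
// Only east/north of each ENUCoordinate are used.
double polygonArea(const std::vector<ENUCoordinate>& pts)
{
    double sum = 0.0;
    for (size_t i = 0; i < pts.size(); ++i) {
        const ENUCoordinate& a = pts[i];
        const ENUCoordinate& b = pts[(i + 1) % pts.size()];
        sum += a.east * b.north - b.east * a.north;  // signed area of triangle (origin, a, b), times 2
    }
    return 0.5 * std::fabs(sum);  // square meters if east/north are in meters
}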
Getting data from the CMMotionManager is fairly straightforward; processing it, not so much.
Does anybody have any pointers to code that relatively accurately detects a step (and ignores smaller movements), or general guidelines on how to go about such a thing?
What you basically need is a kind of low-pass filter that will allow you to ignore small movements. Effectively, this “smooths” out the data by removing the jitter.
- (void)updateViewsWithFilteredAcceleration:(CMAcceleration)acceleration
{
    static CGFloat x0 = 0;
    static CGFloat y0 = 0;
    const NSTimeInterval dt = (1.0 / 20);
    const double RC = 0.3;
    const double alpha = dt / (RC + dt);
    CMAcceleration smoothed;
    smoothed.x = (alpha * acceleration.x) + (1.0 - alpha) * x0;
    smoothed.y = (alpha * acceleration.y) + (1.0 - alpha) * y0;
    [self updateViewsWithAcceleration:smoothed];
    x0 = smoothed.x;
    y0 = smoothed.y;
}
The alpha value determines how much weight to give the new raw sample versus the previous smoothed value.
The dt is how much time elapses between samples.
The RC value controls the aggressiveness of the filter: bigger values mean smoother output.
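As a concrete check with the constants above: alpha = dt / (RC + dt) = 0.05 / (0.3 + 0.05) ≈ 0.14, so each new sample contributes roughly 14% of the smoothed value and the previous output supplies the remaining 86%.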
I need the angular velocity expressed as a quaternion for updating the quaternion every frame with the following expression in OpenCV:
q(k)=q(k-1)*qwt;
My angular velocity is
Mat w; //1x3
I would like to obtain a quaternion form of the angles
Mat qwt; //1x4
I couldn't find information about this. Any ideas?
If I understand correctly, you want to convert from this axis-angle form to a quaternion.
As shown in the link, first you need to calculate the magnitude of the angular velocity (multiplied by the delta t between frames), and then apply the formulas.
A sample function for this would be:
// w is equal to angular_velocity * time_between_frames
void quatFromAngularVelocity(Mat& qwt, const Mat& w)
{
    const float x = w.at<float>(0);
    const float y = w.at<float>(1);
    const float z = w.at<float>(2);
    const float angle = sqrt(x*x + y*y + z*z); // magnitude of the rotation for this frame
    if (angle > 0.0f) // the formulas from the link
    {
        qwt.at<float>(0) = x*sin(angle/2.0f)/angle;
        qwt.at<float>(1) = y*sin(angle/2.0f)/angle;
        qwt.at<float>(2) = z*sin(angle/2.0f)/angle;
        qwt.at<float>(3) = cos(angle/2.0f);
    }
    else // identity quaternion, to avoid dividing by zero
    {
        qwt.at<float>(0) = qwt.at<float>(1) = qwt.at<float>(2) = 0.0f;
        qwt.at<float>(3) = 1.0f;
    }
}
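A hypothetical usage sketch (assuming w is a 1x3 CV_32F Mat of angular velocity in radians per second and dt is the time between frames in seconds; the helper name is illustrative):

#include <opencv2/core.hpp>
using namespace cv;

void quatFromAngularVelocity(Mat& qwt, const Mat& w);  // the function defined above

// Prepares the per-frame quaternion qwt; composing q(k) = q(k-1) * qwt is then
// done with your quaternion multiplication of choice.
Mat makeFrameQuaternion(const Mat& w, double dt)
{
    Mat qwt(1, 4, CV_32F);
    quatFromAngularVelocity(qwt, w * dt);  // w*dt is this frame's axis-angle increment
    return qwt;
}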
Almost every transformation regarding quaternions, 3D space, etc. is gathered at this website.
You will also find time derivatives for quaternions there.
I find the explanation of the physical meaning of a quaternion useful: it can be seen as an axis-angle representation, where
a = angle of rotation
x,y,z = axis of rotation.
Then the conversion uses:
q = cos(a/2) + i ( x * sin(a/2)) + j (y * sin(a/2)) + k ( z * sin(a/2))
It is explained thoroughly here.
Hope this helps make it clearer.
One little trick to go with this and get rid of those cos and sin functions: the time derivative of a quaternion q(t) is
dq(t)/dt = 0.5 * x(t) * q(t)
where, if the angular velocity is {w0, w1, w2}, then x(t) is the quaternion {0, w0, w1, w2}. See David H. Eberly's book, section 10.5, for a proof.
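A minimal C++ sketch of one Euler integration step of that derivative, assuming wxyz quaternion order and a small time step dt (the function name is illustrative; renormalizing afterwards counters numerical drift):

#include <cmath>

// One step: q_new = q + dt * 0.5 * (x(t) * q), where x(t) = {0, w0, w1, w2}, then normalize.
// q is in wxyz order; w0, w1, w2 are the angular velocity components.
void integrateQuaternion(float q[4], float w0, float w1, float w2, float dt)
{
    // 0.5 * x(t) * q (Hamilton product with zero scalar part)
    float dw = 0.5f * (-w0 * q[1] - w1 * q[2] - w2 * q[3]);
    float dx = 0.5f * ( w0 * q[0] + w1 * q[3] - w2 * q[2]);
    float dy = 0.5f * (-w0 * q[3] + w1 * q[0] + w2 * q[1]);
    float dz = 0.5f * ( w0 * q[2] - w1 * q[1] + w2 * q[0]);

    q[0] += dw * dt;  q[1] += dx * dt;  q[2] += dy * dt;  q[3] += dz * dt;

    // Renormalize to keep the quaternion unit length
    float n = std::sqrt(q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3]);
    q[0] /= n;  q[1] /= n;  q[2] /= n;  q[3] /= n;
}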