Determine total delta rotation of iPhone - iOS

I've been trying to break into an understanding of Core Motion, and it feels like I'm failing.
I'm not an engineer, nor a mathematician, so rotational matrices and Euler angles, etc. are (as of yet) a difficult concept for me to wrap my head around.
I'm trying to write an app that can calculate the angle of rotation for an iPhone. Basically, I'm trying to measure range of motion for, say, someone's elbow.
I've gotten a few different test projects working, and it seems like the correct way to go is using CMAttitude instances from the Core Motion framework.
I've even gotten so far as to (sort of) understand taking a reference frame when I begin asking for motion data, and using multiplyByInverseOfAttitude: to determine the net change in attitude (did I state that correctly?).
What I don't understand is what to DO with that calculated CMAttitude. Assume a user has the iPhone in their hand, begins a measurement, and flexes their arm so that their elbow goes from extended (180º) to flexed (say, 45º). Depending on how they are holding the device, it seems like I could get very, very different calculated attitudes.
In another scenario, say a user holds the iPhone in their hand palm-up. Then, without moving their arm (the phone stays "in place"), they rotate their hand palm-down. Clearly, that's something like a 180º rotation.
Now, in both of the above scenarios, I think I could figure out how to do the measurement if I can guarantee how the user is holding the device to start. For the elbow, I could have them hold the device "top up / screen left", which would mean that the yaw measurement would give me the answer I seek.
For the wrist rotation, if the user starts with the phone "screen up / top right", then I could isolate pitch as my measurement.
What I'd like to find is a more generic way to find a common angle of rotation between any two attitudes (or, from a single, calculated, "delta" attitude).
Am I even thinking of this correctly?
If you're going to respond with advanced math, please give me resources or simple explanations. I don't mind reading up on this topic, but some of the answers I've seen related to this on other questions are so dense that I don't even know where to start with them.
EDIT
I've been reading up on this more over the past few hours, and I have an idea, but wanted to run it by someone with more experience on this matter. I've also been running on-device experiments and experiencing gimbal lock at the edge of pitch's range and oddities at the edge of roll's range.
So, from what I can tell, a CMAttitude instance represents the iPhone's orientation in 3-dimensional space. It can be asked to represent itself as Euler angles (pitch, roll, yaw), or as a quaternion (x, y, z, w).
If so, for my purposes, could I calculate the angle between the two quaternions' vector parts (x1, y1, z1) vs. (x2, y2, z2) and compare it to the difference between w1 and w2? If the angle between the vectors is sufficiently small, then wouldn't the difference between the "w" rotations be what I seek? Does that make sense, or have I missed the essence of quaternions?
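If it helps clarify what I'm after, here's a rough sketch of the calculation I have in mind (untested, and I may well have the quaternion math wrong):

import CoreMotion
import Foundation

// Rough sketch (untested): total rotation angle between a stored reference
// attitude and the current attitude, via the relative ("delta") quaternion.
// Note: multiply(byInverseOf:) mutates `current` in place.
func totalRotationAngle(current: CMAttitude, reference: CMAttitude) -> Double {
    current.multiply(byInverseOf: reference)   // current is now the delta attitude

    // A unit quaternion encodes a rotation of angle theta about some axis,
    // with w = cos(theta / 2), so the total angle is 2 * acos(|w|),
    // regardless of which axis the rotation happened about.
    let q = current.quaternion
    let w = min(abs(q.w), 1.0)                 // clamp for numerical safety
    return 2.0 * acos(w)                       // radians; multiply by 180 / .pi for degrees
}

If that is right, the single returned angle would be the "common angle of rotation between any two attitudes" I described above, without having to isolate pitch, roll or yaw.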

Related

IMU Orientation constantly changing

We have an XSENS MTi IMU device and use the ROS framework (Ubuntu / Fuerte).
We subscribe to the IMU data and everything looks good except the orientation.
In both the Euler and the quaternion output modes the values are constantly changing. Not randomly: they increase or decrease slowly at a more or less constant rate, and sometimes the change flattens out and then reverses direction.
For example, when the value at second X is:
x: 7.79210457616
y: -6.58661204898
z: 41.2841955308
the Z value changes by about 10 within a few seconds (10-20 seconds, I think).
What can cause this behaviour? Do we misinterpret the data, or is there something wrong with the driver? The strange thing is that this also happened with 2 other drivers and one other IMU device (we have 2). Same results in each combination.
Feel free to ask for more precise data or anything else that might help us out. We are participating in the SpaceBot Cup in November, so it would be quite a relief to get the IMU done. :)
Perfectly normal if you have no magnetometer to give a corrected heading.
A gyroscope alone measures rate of turn only, and has no idea of orientation at any given time on any axis. Integrating the rate of turn gives the heading only if you know the initial heading and the gyro is 100% accurate. It drifts anyway, even if it's perfectly calibrated, because you are sampling at discrete intervals rather than continuously.
Adding an accelerometer will at least fix the downward direction (because it measures gravity, which is towards the Earth's centre). This will keep the Z axis solution aligned with vertical, but it won't fix the horizontal direction (the heading or yaw). That will continue to drift, as you are seeing.
Adding a magnetometer will fix the heading relative to the Earth's magnetic field. This will give you a heading relative to magnetic North. You will need to apply a shift for the local magnetic declination to get True North. These are generally available online and reasonably constant over tens of km. Google ITREF.
Some integrated sensors don't have a magnetometer. That's why the heading drifts. Units like the MPU6050 have firmware built in and can access a magnetometer, but the usual firmware doesn't use it, so you have to implement Madgwick, etc., on your microcontroller or a connected PC anyway. Bosch have a new single module with a processing unit built in. Hopefully it uses 9 DOF rather than the 6 you get with the DMP on the MPU6050.
Magnetic sensors are accurate to about 2 degrees. Local magnetic declination corrections also have an error. You may be able to perform additional calibrations by using a GPS on a long base line to get better results. It's also worth noting that heading and course made good are often different, due to crosswind / cross currents.
The Madgwick algorithm is fairly stable and easy to implement, and uses fewer resources than a Kalman filter, which needs to perform matrix inversion. It still gives minor jitter, but minor smoothing of results shouldn't induce too much lag.
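To make the accelerometer point above concrete, here's a rough sketch (mine, not from any particular library) of how roll and pitch fall out of the gravity direction when the device is roughly static; yaw remains unobservable this way:

import Foundation

// Sketch: estimate roll and pitch from a (roughly static) accelerometer
// sample. Gravity pins down two of the three rotational degrees of freedom;
// heading/yaw cannot be recovered like this and will still drift if taken
// from the gyro alone.
func rollPitch(ax: Double, ay: Double, az: Double) -> (roll: Double, pitch: Double) {
    let roll  = atan2(ay, az)                         // rotation about the x axis
    let pitch = atan2(-ax, sqrt(ay * ay + az * az))   // rotation about the y axis
    return (roll, pitch)
}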
If you have the IMU version, I assume that no signal processing has been done on the device (but I don't know the product), so the orientation data you get should be only the integral of the gyroscope data.
The drift you can see is normal and can come from the integration of the noise, a bad zero rate calibration, or the bias of the gyroscope.
To correct this drift, we usually use an AHRS or a VRU algorithm (depending on whether you need a corrected yaw). It's a sensor-fusion algorithm which takes the gravity estimate from the accelerometer, and the magnetometer data (for AHRS), to correct this drift.
The algorithms often used are the Kalman filter and the complementary filter (Madgwick/Mahony).
It's not an easy job and requires a bit of reading and experimenting in MATLAB/Python to configure these filters :)
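As a starting point, a one-axis complementary filter is the simplest of these to experiment with. A rough sketch (the blending factor is a guess and needs tuning for your sensor and sample rate):

import Foundation

// Sketch of a one-axis complementary filter: integrate the gyro rate for
// responsiveness, then pull the result slowly toward the accelerometer's
// tilt estimate to cancel the gyro drift. alpha close to 1.0 trusts the
// gyro more; smaller values trust the accelerometer more.
struct ComplementaryFilter {
    var angle: Double = 0.0        // filtered tilt angle in radians
    let alpha: Double = 0.98       // blending factor, needs tuning

    mutating func update(gyroRate: Double, accelAngle: Double, dt: Double) {
        let gyroAngle = angle + gyroRate * dt
        angle = alpha * gyroAngle + (1.0 - alpha) * accelAngle
    }
}

Madgwick and Mahony do essentially the same job on the full quaternion, with better behaviour near the singularities.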

How to get movement size from 3-axis accelerometer data

I have done a lot of experiments using the accelerometer to detect the movement size (magnitude) as just one value derived from the x, y, z accelerations. I am using an iPhone 4 with an accelerometer update interval of 1.0 / 50.0 (50 Hz), but I've also tried 100 Hz, 150 Hz and 200 Hz.
Examples: plots of the acceleration on the X, Y and Z axes (images not included).
I assume (I hope I am correct) that the accelerations are the small peaks on the graph, not the big steps. From my experiments I think the big steps show the device position: if the position changes, the step changes too.
If my previous assumption is correct, I need to cut the peaks from the graph and sum them up. Here comes my question: how can I cut those peaks without losing the information, the peak sizes?
I know that a high-pass filter does this kind of thing (passes the high peaks and blocks the noise, the small ones); I've read some papers about the filters. But for me the filter cuts a lot of information from my "signal" (the accelerometer data).
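For reference, the kind of first-order high-pass filter I mean is roughly this (my own sketch in Swift; the smoothing factor is arbitrary and depends on the update rate):

// Sketch of a first-order high-pass filter: track the slowly changing part
// of the signal (gravity / device position) with a low-pass estimate and
// subtract it, so only the quick changes (the small peaks) remain.
struct HighPassFilter {
    var lowPass: Double = 0.0
    let smoothing: Double = 0.1    // arbitrary; depends on the update rate

    mutating func filter(_ sample: Double) -> Double {
        lowPass += smoothing * (sample - lowPass)
        return sample - lowPass
    }
}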
I think there should be a better way of getting the information out of the data.
I've tried a simple approach which looks nice but isn't correct.
I produced this data using my function magnitude:
% successive differences between neighbouring samples
converted = zeros(1, length(x) - 1);   % preallocate the result
for i = 2 : length(x)
    converted(i-1) = x(i-1) - x(i);    % difference between sample i-1 and sample i
end
where x is my data and the converted array is the result.
The following line generated the image below (not included), which looks nice:
xyz = magnitude(datay) + magnitude(dataz) + magnitude(datax)
However, the problem with that solution is that if I have continuous acceleration, the graph just shows the first point and then drops back down. I know that I somehow need a better filter, but I am a bit confused. Could you give me some advice on how to do this properly?
Thanks for your time,
I really appreciate your help
Edit (answers to Zaph's questions):
What are you trying to accomplish?
I want to measure the movement when the iPhone is placed on a desk, chair or bed. The accelerometer is so sensitive that it registers even when I put a pencil down on the desk. I want to measure all the movement that happens in a specific period of time.
What are the scale units?
I'm not scaling the data.
When you say "device position", what do you mean? An accelerometer provides movement (in iPhones with gyros).
I am using only the accelerometer. When I hold the device as in the picture below (not included), I get values around -1 on the x coordinate and 0.0 on the y and z coordinates. This is what I mean by device position.
The measurements that are returned from the accelerometer are acceleration, not position.
I'm not sure what you mean by "big steps", but the peaks show a change of acceleration. The fact that the values are not 0 when the device is held still comes from gravity accelerating the device at 9.81 m/s^2 (the magnitude of the acceleration vector).
You are potentially trying to do something quite difficult, especially with the low-quality sensors embedded in phones: getting the actual coordinate acceleration of the phone.
What you can do is detect the time periods when the phone was moved or touched. You can first calculate the magnitude (norm) of the acceleration signal and then, with a moving window, check for areas where the sample standard deviation is smaller than some threshold: those are the stationary stretches, and whatever lies between them is movement. Determining how the phone moved is a more complicated issue. Of course, you can check the orientation for the stationary areas between movements.
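A rough sketch of that idea (the window size and threshold are made up and would need tuning):

import Foundation

// Sketch: mark sliding windows of the acceleration magnitude as stationary
// when their sample standard deviation falls below a threshold. A still
// device gives a nearly constant magnitude (about 1 g), so the deviation
// stays near zero; the stretches that are not stationary are your movement.
func stationaryWindows(magnitudes: [Double], windowSize: Int = 25, threshold: Double = 0.02) -> [Bool] {
    guard magnitudes.count >= windowSize else { return [] }
    var flags: [Bool] = []
    for start in 0...(magnitudes.count - windowSize) {
        let window = magnitudes[start..<(start + windowSize)]
        let mean = window.reduce(0, +) / Double(windowSize)
        let variance = window.reduce(0) { $0 + ($1 - mean) * ($1 - mean) } / Double(windowSize - 1)
        flags.append(sqrt(variance) < threshold)
    }
    return flags
}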

Finding the cardinal direction of acceleration: an algorithm using the magnetometer, accelerometer, and gyro readings

I want to find the cardinal direction in which an iPhone is accelerating. I thought I could just use the accelerometer to do this; however, as you can see from the picture below (not included), the accelerometer's axes are defined by the device orientation.
I figured that if I used the gyroscope to correct for yaw, spin and rotation, then I could get a more accurate reading and not have to hold the phone in the same orientation during movement.
But this still does not tell me what cardinal direction the iPhone is moving in. For that I would also have to use the magnetometer.
Can anybody tell me how to use the three sensor readings to find the cardinal direction of acceleration? I don't even know where to start, or whether the phone takes these measurements at the same points in time.
Taking the cross product of the magnetometer vector with the "down" vector will give you a horizontal magnetic east/west vector; from that, a second cross product gets the magnetic north/south vector. That's the easy part.
The harder problem is tracking the "down" vector effectively. If you integrate the accelerometers over time, you can filter out the motion of a hand-held mobile device, to get the persistent direction of gravity. Or, you could, if your device weren't rotating at the same time...
That's where the rate gyros come in: the gyros can let you compensate for the dynamic rotation of the hand-held device, so you can track your gravity in real-time. The classic way to do this is called a Kalman filter, which can integrate (both literally and figuratively) multiple data sources in order to evaluate the most likely state of your system.
A Kalman filter requires a mathematical model both of your physical system and of the sensors that observe it; each of these models must be both accurate and "sufficiently linear" for the Kalman filter to work properly. As it happens, the iPhone/accelerometer/gyro system is in fact sufficiently linear.
The Kalman filter uses both calculus and linear algebra, so if you're rolling your own, you will need a certain amount of math.
Also, as a practical matter, you should understand that physical sensors typically have offsets that need to be compensated for -- in particular, you need to pay attention to the rate gyro offsets in this kind of inertial navigation system, or your tracker will never stabilize. This means you will need to add your rate gyro offsets to your Kalman state vector and system model.
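On iOS specifically, CMDeviceMotion already does that sensor fusion for you and exposes a gravity vector, so the cross-product step might look roughly like this (a sketch: calibration-accuracy checks are omitted, and you may need to flip signs depending on your conventions):

import CoreMotion
import simd

// Sketch: derive horizontal magnetic east and north directions, expressed in
// the device's own coordinate frame, from the gravity and magnetic field
// vectors that CMDeviceMotion reports.
func horizontalEastAndNorth(from motion: CMDeviceMotion) -> (east: simd_double3, north: simd_double3) {
    let g = motion.gravity
    let m = motion.magneticField.field

    let down = simd_double3(g.x, g.y, g.z)             // gravity already points toward the ground
    let mag  = simd_double3(m.x, m.y, m.z)

    let east  = simd_normalize(simd_cross(down, mag))  // horizontal, roughly magnetic east
    let north = simd_normalize(simd_cross(east, down)) // horizontal, roughly magnetic north
    return (east, north)
}

Projecting userAcceleration onto east and north (dot products) then gives the horizontal acceleration components in compass terms.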

CMDeviceMotion userAcceleration is upside down?

I'm seeing some unexpected readings from the userAcceleration field in CMDeviceMotion. When I look at the raw accelerometer data from CMAccelerometerData, I see that if the iPhone is flat on a table the reading is 1G straight down (1G on the -Z axis), and if I drop the iPhone (on a soft surface, of course) the accelerometer reading goes to zero as expected. That's all fine. When I instead use the CMDeviceMotion class, the userAcceleration reading is zero as expected when the iPhone is flat on the table. Again, this is fine. But when I drop the iPhone and read CMDeviceMotion's userAcceleration, the values are 1G straight up (+Z), not down (-Z) as expected. It appears that the userAcceleration readings are the exact opposite of the acceleration the device is really experiencing. Has anyone else observed this? Can I just invert (multiply by -1) all the userAcceleration values before I try to integrate for velocity and position, or am I misunderstanding what userAcceleration is reading?
There are some conceptual differences between CMAccelerometerData.acceleration and CMDeviceMotion.userAcceleration:
Raw accelerometer data is just the sum of all accelerations measured, i.e. a combination of gravity and the current acceleration of the device.
Device motion data is the result of sensor fusion of all three sensors, i.e. accelerometer, gyroscope and magnetometer. Thus bias and errors are eliminated (in theory) and the remaining acceleration data is separated into gravity and user acceleration, to be used conveniently.
So if you want to compare both you have to check CMAccelerometerData.acceleration against CMDeviceMotion.userAcceleration + CMDeviceMotion.gravity to compare like with like.
In general, CMDeviceMotion is your first choice in most cases when you want precise values and hardware independence.
Another thing to consider is the CMAttitudeReferenceFrame you provide when starting device motion updates via startDeviceMotionUpdatesUsingReferenceFrame. I am not sure what the default is when using the basic startDeviceMotionUpdates.
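A rough sketch of both points together: starting updates with an explicit reference frame and logging userAcceleration + gravity next to the raw accelerometer value (the intervals and frame are arbitrary choices):

import CoreMotion

// Sketch: start device-motion updates with an explicit reference frame and
// print userAcceleration + gravity, which should roughly match the raw
// CMAccelerometerData.acceleration captured at the same moment.
let manager = CMMotionManager()
manager.deviceMotionUpdateInterval = 1.0 / 50.0
manager.accelerometerUpdateInterval = 1.0 / 50.0

manager.startAccelerometerUpdates()   // raw values, read on demand below

manager.startDeviceMotionUpdates(using: .xArbitraryCorrectedZVertical, to: .main) { motion, _ in
    guard let motion = motion, let raw = manager.accelerometerData else { return }
    let fusedZ = motion.userAcceleration.z + motion.gravity.z
    print("raw z: \(raw.acceleration.z)  userAcceleration + gravity z: \(fusedZ)")
}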
You stated that you want to integrate the values to get velocity and position. There are several discussions about this, and the bottom line is that it's impossible to get reasonable results. See:
Finding distance using accelerometer in iPhone
Getting displacement from accelerometer data with Core Motion
How can I find distance traveled with a gyroscope and accelerometer?
If your app concept forces you to rely on precise results for more than half a second, try to change it.
It turns out that CMAcceleration does not seem to obey the right-hand rule: x points to the left and y points to the bottom of the screen, in which case, for a typical right-handed system, the z axis should point up, but it doesn't.
It makes me uncomfortable when dealing with motion sensors!

Is it possible to use Core Motion for distance measurement [duplicate]

Possible Duplicate:
Getting displacement from accelerometer data with Core Motion
Android accelerometer accuracy (Inertial navigation)
I am trying to use Core Motion's user acceleration values and double integrating them to derive the distance covered. I move my iPhone linearly along its Y axis, against a 30 cm long ruler, on the table. First, I let the device rest for 10 seconds and calculate my offsets along the three axes by averaging the respective user acceleration values.
The X, Y and Z offsets are subtracted from the acceleration values when I calculate the distance covered. After offset subtraction, these values are passed through a low-pass filter and a median filter, separately of course. The filters are linear filters, and the cut-off frequency is specified by the number of neighbouring values whose mean is taken in the low-pass filter, and whose median is taken in the median filter. I have experimented with values of this number from 1 to 100. In the end, these filtered values are double integrated using the trapezoidal rule to get distances. But the distance calculated is nowhere close to 30 cm. The closest value I got was about -22 cm (I am wondering why I am getting negative values even though I move the device in the positive Y direction). I also came across this:
http://ajnaware.wordpress.com/2008/09/05/accelerating-iphones/
It's an old post about the same thing, which says that the accelerometer readings appeared to come in quanta of about 0.18 m/s^2 (i.e. about 0.018 g), resulting in a large cumulative error very quickly. Going by that, for this error not to matter, one would have to accelerate the device by almost 1.8 m/s^2, which is practically impossible for distance/length measurement purposes. For small movements, it does not look like there is any possibility of calculating distances using an optimal filter and a higher-order numerical integration method, without an impractical velocity/acceleration constraint like that. Is it possible?
How about using my acceleration vs. timestamp data to interpolate a polynomial that grows over time as I get more and more motion updates, and which approximately represents the acceleration vs. time curve? Double integration of this polynomial would be a piece of cake. But for small distances the polynomial will have a big error component. Using a predictable, known motion that my device will be subjected to, I want to take a huge number of snapshots (calculated distance vs. actual known distance) to calculate my error polynomial in a similar way, and then subtract it from my first polynomial. Can this work?
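For reference, the double integration step I'm describing is roughly this (a simplified sketch, assuming the offsets have already been subtracted and the values converted to m/s^2):

import Foundation

// Sketch: double-integrate bias-corrected acceleration samples with the
// trapezoidal rule to get a displacement estimate. With phone-grade
// accelerometers the error grows roughly quadratically with time, which is
// why the result drifts away from the true 30 cm so quickly.
func displacement(from accel: [Double], dt: Double) -> Double {
    var velocity = 0.0
    var position = 0.0
    var previousAccel = accel.first ?? 0.0
    var previousVelocity = 0.0

    for a in accel.dropFirst() {
        velocity += 0.5 * (previousAccel + a) * dt            // integrate acceleration -> velocity
        position += 0.5 * (previousVelocity + velocity) * dt  // integrate velocity -> position
        previousAccel = a
        previousVelocity = velocity
    }
    return position
}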
Although this does not fit StackOverflow, because it's not a question but a discussion, I'll try to sum up my thoughts about it.
As already said, the accelerometer is very inaccurate, and you would need very good accuracy for this kind of task, especially if you are trying to measure such short distances. Plus, accelerometers differ from device to device: you will get different results for the same movements with different devices. Plus a very large random error.
My guess is, that you can get rid of a huge part of randomness/error by calibrating the device and making the "measurement move" a couple of times, like 10 times. After that you have enough data to get an average that might get close to the real value.
Calibration is a key part here, you have to think of a clever way to calibrate, like letting the user move the device over different distances in different speeds.
But all this is just theory. I would really like to see your results, but I doubt you'll get it working well enough even using the best possible filters/algorithms, since there is just too much noise.
