I have an Arduino controlling an LED light strip, and an iPhone connected to the Arduino via Bluetooth, so that the number of lights turned on corresponds to the phone's position along an x axis.
Is it possible to use the accelerometer to estimate the distance the phone has traveled? I'm currently polling the accelerometer at 0.01-second intervals, so in 0.5 seconds I'll have an array of 50 values. I believe each value represents the g-force at the instant it was measured, so 1.0 = 9.8 m/s². What would be the formula for taking this array and the time interval and calculating the distance? Am I reinventing the wheel here? I feel like ARKit has to use some kind of position tracking similar to this. Is there anything in Core Motion that could accomplish this for me?
Obligatory apology for not knowing what I'm doing. Also, similar questions have been asked before, but they are more than two years old, and the answer then was that it's possible but not accurate. I assume it could be more accurate now, because ARKit wouldn't work without doing something like this.
No, this isn't practical. The problem is drift. An accelerometer can't tell whether the phone is sitting still or moving at a constant velocity, and it isn't accurate enough to "zero out" the phone's velocity, so minor errors in your calculations almost immediately swamp your results.
Acceleration is the second derivative of position, so starting from acceleration you have to integrate twice, which magnifies those errors.
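For reference, this is roughly what that double integration looks like if you try it anyway, under the assumptions that the phone starts at rest and moves along a single axis; the function and parameter names below are placeholders, and drift will still swamp the result within a second or two.

    // Sketch only: double-integrate one axis of accelerometer samples (in g),
    // taken at a fixed interval dt, assuming the phone starts at rest.
    func estimateDistance(samplesInG: [Double], dt: Double) -> Double {
        let g = 9.81                       // m/s^2 per 1.0 g
        var velocity = 0.0                 // m/s, assumes starting at rest
        var distance = 0.0                 // m
        var previousAcceleration = 0.0     // m/s^2

        for sample in samplesInG {
            let acceleration = sample * g
            // Trapezoidal rule: acceleration -> velocity, then velocity -> distance.
            let newVelocity = velocity + 0.5 * (previousAcceleration + acceleration) * dt
            distance += 0.5 * (velocity + newVelocity) * dt
            velocity = newVelocity
            previousAcceleration = acceleration
        }
        return distance
    }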
To do this, you could place two Bluetooth sensors (one at each end of the bar) and use triangulation to calculate the phone's position. I haven't done this calculation myself, so I don't know all the details, but it's the same idea as those Bluetooth tags you can attach to a bunch of items to help you locate your keys, etc.
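As a rough sketch of that idea (not a tested implementation): you could estimate the distance to each anchor from its RSSI with a log-distance path-loss model and then project onto the bar's axis. The constants (measured power at 1 m, path-loss exponent) and the function names below are assumptions you would have to calibrate, and RSSI ranging is itself quite noisy.

    import Foundation

    // Distance in metres from an RSSI reading, given the measured power at 1 m
    // and an assumed path-loss exponent (2.0 in free space, higher indoors).
    func estimatedDistance(rssi: Double, txPowerAt1m: Double = -59.0, pathLossExponent: Double = 2.0) -> Double {
        return pow(10.0, (txPowerAt1m - rssi) / (10.0 * pathLossExponent))
    }

    // Position along a bar of length barLength with one anchor at each end,
    // from the two estimated distances (projection onto the bar's axis).
    func positionAlongBar(d1: Double, d2: Double, barLength: Double) -> Double {
        return (d1 * d1 - d2 * d2 + barLength * barLength) / (2.0 * barLength)
    }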
Related
I have what I thought was a very simple need, which is to get a 2d array of distances from the camera many times per second (like a LIDAR); for example, a 10x10 array of samples that are interpolated across the screen. I just want the distance to whatever the pixel is showing. I thought ARCore should be easily capable of this because it's correlating all the pixels with past frames so it should know where everything is.
I thought I was supposed to use hitTest() for this, calling it 100 times per second. But hitTest() usually gives no results or inaccurate results; for example, it might detect a table, but not a wall or anything else. Also, hitTest() seems to be very slow and laggy, so I can't actually call it 100 times per second.
Am I doing something wrong? Also, would Apple's ARKit be a better choice for my needs? Or do I need to resort to external hardware which is better for getting actual distance?
ARCore does not support accurate depth sensing on the types of devices it can run on at this time (mid-2019).
See a note from the ARCore team here: https://github.com/google-ar/arcore-android-sdk/issues/206
I want to track the user's position and update it on an offline map based on their movement, without using GPS or relying on location updates.
I have tried CMMotionManager and got acceleration in g's. However, this is acceleration rather than velocity. The manager also provides gravity, rotation rate, and attitude.
Is there a way to calculate the user's speed in m/s? If so, how would I go about it? Any formulas / code samples?
The only way to do this is to assume that the phone is at rest when the app starts. With an accelerometer there's no way to tell the difference between being at rest and moving at a constant rate. For example if you were on a jet plane you'd have no way to tell that you were traveling at 800 kph and not sitting still.
If you do assume that you are at rest when you start, it's possible to come up with very crude estimates of speed by tracking acceleration, but in practice the results suffer from large amounts of "drift error", where small measurement errors quickly add up to a completely wrong speed, and so your position drifts around hopelessly.
So in practice, the answer to your question is "no, not really."
Edit:
Thinking about this a little more, you might be able to get usable results if you can impose some assumptions.
Say we assume that the user is on foot. We rule out travel by bike, car, train, or plane. On foot, you really don't "drift": the user moves in fits and starts as they take steps. In fact, you could likely use the accelerometer to recognize the characteristic bounce of a person walking; there are pedometer apps that already do that. For walking, you could probably assume that in the absence of acceleration (ignoring gravity, which is constant), the phone is stationary, so zero out the speed and keep it at zero until there is an acceleration above a certain threshold. That would let you reduce drift error.
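A minimal sketch of that zero-velocity trick using Core Motion's userAcceleration (which already has gravity removed); the threshold, update interval, and the crude speed integration below are illustrative guesses, not tuned values.

    import CoreMotion

    // Sketch only: zero the speed whenever the user-acceleration magnitude
    // stays below a threshold, and integrate crudely otherwise.
    let motionManager = CMMotionManager()
    let dt = 1.0 / 50.0            // 50 Hz updates
    let restThreshold = 0.05       // in g; below this, treat the phone as still
    var speed = 0.0                // m/s, very rough estimate

    motionManager.deviceMotionUpdateInterval = dt
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let a = motion?.userAcceleration else { return }
        let magnitude = (a.x * a.x + a.y * a.y + a.z * a.z).squareRoot()
        if magnitude < restThreshold {
            speed = 0.0            // zero-velocity update: discard accumulated drift
        } else {
            speed += magnitude * 9.81 * dt
        }
    }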
We have an XSENS MTi IMU device and use the ROS framework (Ubuntu / Fuerte).
We subscribe to the IMU data, and everything looks good except the orientation.
In both Euler and quaternion output modes the values are constantly changing. Not randomly: they increase or decrease slowly at a more or less constant rate, and sometimes I observed the change flatten out and then reverse direction.
For example, if the value at one moment is:
x: 7.79210457616
y: -6.58661204898
z: 41.2841955308
then the z value changes by about 10 within a few seconds (10-20 seconds, I think).
What can cause this behaviour? Are we misinterpreting the data, or is there something wrong with the driver? The strange thing is that this also happened with two other drivers and with our other IMU device (we have two). Same results in every combination.
Feel free to ask for more precise data or anything else that might help you help us. We are participating in the Spacebot-Cup in November, so it would be quite a relief to get the IMU done. :)
Perfectly normal if you have no magnetometer to give a corrected heading.
A gyroscope alone measures rate of turn only and has no notion of its absolute orientation on any axis. Integrating the rate of turn gives the heading only if you know the initial heading and the gyro is 100% accurate. It drifts anyway, even if perfectly calibrated, because you are sampling at discrete intervals rather than continuously.
Adding an accelerometer will at least fix the downward direction (because it measures gravity, which is towards the Earth's centre). This will keep the Z axis solution aligned with vertical, but it won't fix the horizontal direction (the heading or yaw). That will continue to drift, as you are seeing.
Adding a magnetometer will fix the heading relative to the Earth's magnetic field. This will give you a heading relative to magnetic North. You will need to apply a shift for local magnetic declination to get True North. These are generally available online and reasonably constant over tens of km. Google ITREF.
Some integrated sensors don't have a magnetometer. That's why the heading drifts. Units like the MPU6050 have firmware built in and can access a magnetometer, but the usual firmware doesn't use it, so you have to implement Madgwick, etc., on your microcontroller or a connected PC anyway. Bosch has a new single module with a processing unit built in. Hopefully, it uses 9 DOF rather than the 6 you get with the DMP on the MPU6050.
Magnetic sensors are accurate to about 2 degrees. Local magnetic declination corrections also have an error. You may be able to perform additional calibrations by using a GPS over a long baseline to get better results. It's also worth noting that heading and course made good are often different, due to crosswind / cross currents.
The Madgwick algorithm is fairly stable and easy to implement, and uses fewer resources than a Kalman filter, which needs to perform matrix inversion. It still gives minor jitter, but minor smoothing of results shouldn't induce too much lag.
If you have the IMU version, I assume no signal processing has been done on the device (though I don't know the product), so the orientation you get is probably just the integral of the gyroscope data.
The drift you see is normal and can come from integrating noise, a bad zero-rate calibration, or gyroscope bias.
To correct this drift, we usually use an AHRS or a VRU algorithm (depending on whether you need a corrected yaw). These are sensor-fusion algorithms that use gravity from the accelerometer, and the magnetometer data in the AHRS case, to correct the drift.
The algorithms most often used are the Kalman filter and complementary-filter-style algorithms such as Madgwick and Mahony.
It's not an easy job, and it requires a bit of reading and experimenting in MATLAB/Python to configure these filters :)
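To give a flavour of what these fusion filters do, here is a single-axis complementary-filter step (the real Madgwick/Mahony filters work on quaternions over all three axes); the gain value is an illustrative assumption, not a tuned one.

    // Sketch only: one-axis complementary filter. The gyro integral tracks fast
    // rotation; the accelerometer-derived angle slowly pulls the estimate back
    // and cancels gyro drift.
    func complementaryStep(previousAngle: Double,
                           gyroRate: Double,     // rad/s around this axis
                           accelAngle: Double,   // angle from gravity, rad
                           dt: Double,
                           alpha: Double = 0.98) -> Double {
        return alpha * (previousAngle + gyroRate * dt) + (1.0 - alpha) * accelAngle
    }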
I have experimented a lot with using the accelerometer to detect movement size (magnitude) as a single value derived from the x, y, z accelerations. I am using an iPhone 4 with an accelerometer update interval of 1.0 / 50.0 (50 Hz), but I've also tried 100 Hz, 150 Hz, and 200 Hz.
Examples (plots of the acceleration on the X, Y, and Z axes).
I assume (I hope correctly) that the accelerations are the small peaks on the graph, not the big steps. From my experiments I think the big steps show the device's position; if I change the position, the step level changes too.
If my previous assumption is correct, I need to cut the peaks out of the graph and sum them up. Here comes my question: how can I extract those peaks without losing the information, i.e. the peak sizes?
I know that a high-pass filter does this kind of thing (it passes the high peaks and blocks the noise, the small ones); I've read some papers about such filters. But for me the filter cut a lot of information out of my "signal" (the accelerometer data).
I think there should be a better way to get the information out of the data.
I've tried a simple approach that looks nice, but it isn't correct.
I produced this data using my magnitude function and the following loop:
    % First difference of the signal (a crude high-pass over consecutive samples)
    converted = zeros(1, length(x) - 1);
    for i = 2 : length(x)
        converted(i-1) = x(i-1) - x(i);
    end
where x is my data and the converted array is the result.
The following line generated the image below, which looks nice:
    xyz = magnitude(datay) + magnitude(dataz) + magnitude(datax)
However, the problem with that solution is that under continuous acceleration the graph just shows the first point and then drops back down. I know I need a better filter of some kind, but I am a bit confused. Could you give me some advice on how to do this properly?
Thanks for your time,
I really appreciate your help
Edit (answers to Zaph's questions):
What are you trying to accomplish?
I want to measure movement when the iPhone is placed on a desk, chair, or bed. The accelerometer is so sensitive that it registers even when I put a pencil down on the desk. I want to measure all movement that happens within a specific time window.
What are the scale units?
I'm not scaling the data.
When you say "device position" what do you mean, an accelerometer provides movement (in iPhones with gyros)
I am using only the accelerometer. When I put the device like the picture below I got values around -1 on x coordinate, 0.0 on z and y coordinate. This is what I mean as device position.
The measurements that are returned from the accelerometer are acceleration, not position.
I'm not sure what you mean by "big steps", but the peaks show a change in acceleration. The values are not 0 when the device is held still because gravity accelerates the device at 9.81 m/s^2 (the magnitude of the acceleration vector).
You are potentially trying to do something quite difficult, especially with the low-quality sensors embedded in phones: getting the actual coordinate acceleration of the phone.
What you can do is detect the time periods when the phone was moved or touched. You can first calculate the magnitude (norm) of the acceleration signal and then, with a moving window, check for areas where the sample standard deviation is smaller than some threshold. Determining how the phone moved is a more complicated issue. Of course, you can check the orientation during the stationary stretches between movements.
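A sketch of that windowed check, flagging the complementary "moving" windows; the window length and threshold are assumptions you would tune against real recordings.

    // Sketch only: mark each window of acceleration magnitudes as "moving" when
    // its sample standard deviation exceeds a threshold.
    func movementFlags(magnitudes: [Double], windowSize: Int = 25, threshold: Double = 0.02) -> [Bool] {
        guard magnitudes.count >= windowSize, windowSize > 1 else { return [] }
        var flags: [Bool] = []
        for start in 0...(magnitudes.count - windowSize) {
            let window = magnitudes[start..<(start + windowSize)]
            let mean = window.reduce(0, +) / Double(windowSize)
            let variance = window.reduce(0) { $0 + ($1 - mean) * ($1 - mean) } / Double(windowSize - 1)
            flags.append(variance.squareRoot() > threshold)
        }
        return flags
    }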
Possible Duplicate:
Getting displacement from accelerometer data with Core Motion
Android accelerometer accuracy (Inertial navigation)
I am trying to use Core Motion user acceleration values and double-integrating them to derive the distance covered. I move my iPhone linearly along its Y axis, against a 30 cm long ruler, on the table. First, I let the device rest for 10 seconds and calculate my offsets along the three axes by averaging the respective user acceleration values.
The X, Y and Z offsets are subtracted from the acceleration values when I try to calculate the distance covered. After offset subtraction, these values are passed through a low-pass filter and a median filter, separately of course. The filters are linear, and the cut-off is set by the number of neighbouring values whose mean is taken in the low-pass case, and whose median is taken in the median filter. I have experimented with values of this number from 1 to 100. In the end, the filtered values are double-integrated using the trapezoidal rule to get distances. But the distance calculated is nowhere close to 30 cm. The closest value I got was about -22 cm (I am wondering why I am getting negative values even though I move the device in the positive Y direction). I also came across this:
http://ajnaware.wordpress.com/2008/09/05/accelerating-iphones/
It's an old post about the same thing, which says that the accelerometer readings appeared to come in quanta of about 0.18 m/s^2 (i.e. about 0.018 g), resulting in a large cumulative error very quickly. Going by that, for this error not to matter, one would have to accelerate the device by almost 1.8 m/s^2, which is practically impossible for distance/length measurement purposes. For small movements, it does not look like there is any possibility of calculating distances with an optimal filter and a higher-order numerical integration method without an impractical velocity/acceleration constraint like that. Is it possible?
How about using my acceleration vs. timestamp data to fit a polynomial that grows over time as I get more and more motion updates, and which approximately represents the acceleration vs. time curve? Double integration of this polynomial would be a piece of cake. But for small distances the polynomial will have a big error component. Using a predictable, known motion that my device will be subjected to, I want to take a huge number of snapshots (calculated distance vs. actual known distance) to compute an error polynomial in a similar way, and then subtract it from my first polynomial. Can this work?
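(For reference, a rough sketch of the preprocessing described above, i.e. bias subtraction over the stationary period followed by a moving-average low-pass; the window length and helper name are made up for illustration, and this does not address the drift problem discussed in the answer below.)

    // Sketch only: subtract a bias estimated from a stationary period, then apply
    // a simple moving-average low-pass over `window` neighbouring samples.
    func preprocess(_ samples: [Double], stationaryCount: Int, window: Int = 15) -> [Double] {
        guard stationaryCount > 0, !samples.isEmpty else { return samples }
        // Bias = mean of the first `stationaryCount` samples (device at rest).
        let bias = samples.prefix(stationaryCount).reduce(0, +) / Double(stationaryCount)
        let unbiased = samples.map { $0 - bias }

        // Moving-average low-pass filter.
        var filtered: [Double] = []
        for i in 0..<unbiased.count {
            let lo = max(0, i - window / 2)
            let hi = min(unbiased.count - 1, i + window / 2)
            let slice = unbiased[lo...hi]
            filtered.append(slice.reduce(0, +) / Double(slice.count))
        }
        return filtered
    }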
Although this doesn't really fit Stack Overflow, because it's more of a discussion than a question, I'll try to sum up my thoughts about it.
As already said, the accelerometer is very inaccurate, and you would need very good accuracy for this kind of task, especially if you are trying to measure such short distances. On top of that, accelerometers differ from device to device, so you will get different results for the same movement on different devices, plus a very large random error.
My guess is that you can get rid of a large part of the randomness/error by calibrating the device and repeating the "measurement move" a number of times, say 10. After that you have enough data to compute an average that might come close to the real value.
Calibration is a key part here; you have to think of a clever way to calibrate, like having the user move the device over different distances at different speeds.
But all this is just theory. I would really like to see your results, but I doubt you will get it working well enough even with the best possible filters/algorithms, since there is just too much noise.