Calculating speed by accelerometer - orientation

I am trying to calculate speed from an LIS331DLH accelerometer.
I am using ZUPT (zero-velocity updates) to reset the velocity to zero.
When I shake the accelerometer, the integration error becomes too large.
How can I fix this problem?
Maybe I should try to detect vibration and ignore the measurements during those moments?
Speed graphs: moving up and down, and when shaking.
Integration code:
// ZUPT: clamp velocity to zero while the acceleration magnitude is below the threshold
if (lpMag < 0.25)
{
    vx = 0; vy = 0; vz = 0;
}
else
{
    // otherwise integrate acceleration into velocity
    vx = vx + potx * samplePeriod;
    vy = vy + poty * samplePeriod;
    vz = vz + potz * samplePeriod;
}
pointXYZ bufV;
bufV.x = vx; bufV.y = vy; bufV.z = vz;
velZUPT.push_back(bufV);
potx, poty, potz: accelerometer data
lpMag: acceleration magnitude
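On the vibration idea in the question: one common approach is to track the short-term variance of the acceleration magnitude and suspend integration while it is high. A minimal sketch (in JavaScript for brevity; the window size and threshold are assumptions you would have to tune for the LIS331DLH):

// Sliding-window variance of the acceleration magnitude.
// windowSize and vibrationThreshold are illustrative values that need tuning.
const windowSize = 32;
const vibrationThreshold = 0.5; // assumed units: (m/s^2)^2
const samples = [];

function isVibrating(mag) {
    samples.push(mag);
    if (samples.length > windowSize) samples.shift();
    const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
    const variance =
        samples.reduce((a, b) => a + (b - mean) * (b - mean), 0) / samples.length;
    return variance > vibrationThreshold;
}

// In the integration loop: treat vibration like the stationary case and
// hold the velocity at zero instead of integrating the noisy samples.
// if (isVibrating(lpMag)) { vx = 0; vy = 0; vz = 0; } else { /* integrate */ }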

Related

How to detect object clicked in WebGL?

Firstly, I am not using 3Js in my Orbits app because I encountered a number of limitations, including, but not limited to, issues with texture resolution and my requirement for complex lighting equations, but I would like to implement something like 3Js' raycaster to allow me to detect the object clicked by the user.
I'm new to WebGL, but an "old hand" in software development, so I'm looking for some hints about where to start.
The approach is as follows:
You render your scene twice: once normally, which is displayed, and a second time with the objects uniquely coloured but not displayed. Then you use gl.readPixels on the second render at the clicked position from the first and decode the colour to identify the object.
Now I have to implement it myself.
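For reference, a minimal sketch of the readPixels step of that approach (it assumes you have just rendered the picking pass into the current framebuffer, with each object drawn in a unique solid colour encoding id + 1):

// Read the pixel under the mouse from the picking render and decode the id.
// gl is the WebGL context; mouseX/mouseY are in canvas pixel coordinates.
function pickObjectId(gl, mouseX, mouseY, canvasHeight) {
    const pixel = new Uint8Array(4);
    // readPixels uses a bottom-left origin, so flip the y coordinate
    gl.readPixels(mouseX, canvasHeight - mouseY - 1, 1, 1,
                  gl.RGBA, gl.UNSIGNED_BYTE, pixel);
    // decode the unique colour back into an object id;
    // returns -1 for the background (assuming ids were encoded as id + 1)
    return ((pixel[0] << 16) | (pixel[1] << 8) | pixel[2]) - 1;
}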
Picking spheres
When picking spheres, or objects that are separated (not one inside another), you can use a simple distance-from-ray test to very quickly get the closest object.
Example
The function returns a function that does the calculation. As you are only interested in the closest object, the distances can remain squared. The distance from the camera is held as a unit distance along the ray.
function distanceFromRay() {
    var dSqr, ox, oy, oz, vx, vy, vz;
    function distanceSqr(px, py, pz) {
        const ax = px - ox, ay = py - oy, az = pz - oz;
        const u = (ax * vx + ay * vy + az * vz) / dSqr;
        distanceSqr.unit = u;
        if (u > 0) { // is past origin
            const bx = ox + vx * u - px, by = oy + vy * u - py, bz = oz + vz * u - pz;
            return bx * bx + by * by + bz * bz; // dist sqr to closest point on ray
        }
        return Infinity;
    }
    distanceSqr.unit = 0;
    distanceSqr.setRay = function (x, y, z, xx, yy, zz) { // ray from origin x, y, z,
        // infinite length along xx, yy, zz
        ox = x; oy = y; oz = z;
        vx = xx; vy = yy; vz = zz;
        dSqr = vx * vx + vy * vy + vz * vz;
    };
    return distanceSqr;
}
Usage
There is a one-time setup call:
// setup
const distToRay = distanceFromRay();
At the start of a frame that requires a pick, calculate the pick ray and set it. Also initialise the minimum distance from the ray and the distance from the eye.
// at start of frame set pick ray
distToRay.setRay(eye.x, eye.y, eye.z, pointer.ray.x, pointer.ray.y, pointer.ray.z);
var minDist = maxObjRadius * maxObjRadius;
var nearestObj = undefined;
var eyeDist = Infinity;
Then, for each pickable object, get the distance by passing the object's center, comparing it to any previously found (in-frame) distance, the object's radius, and the distance from the eye.
// per object
const dis = distToRay(obj.pos.x, obj.pos.y, obj.pos.z);
if (dis < obj.radius * obj.radius && dis < minDist && distToRay.unit > 0 && distToRay.unit < eyeDist) { // compare squared distances
    minDist = dis;
    eyeDist = distToRay.unit;
    nearestObj = obj;
}
At the end of the frame, if nearestObj is not undefined, it holds a reference to the picked object.
// end of frame
if (nearestObj) {
    // you have the closest object
}

Alternative to CMPedometer to calculate number of steps with accelerometer on iOS

CMPedometer is not available on devices older than the iPhone 5S:
CMPedometer StepCounting not Available
Is there an algorithm or code we can use to count the number of steps with the accelerometer on iOS?
Thanks
iOS aside, there is no simple solution for building an accurate pedometer using just the accelerometer output; it's just too noisy. Using the output from a gyroscope (where available) to filter the output would increase the accuracy.
But here's a crude approach to writing code for a pedometer:
- Steps are detected as a variation in the acceleration detected on the Z axis. Assuming you know the default acceleration (the impact of gravity), here's how you do it:
float g = (x * x + y * y + z * z) / (GRAVITY_VALUE * GRAVITY_VALUE);
Your baseline is g = 1 (this is what you would see when standing still). Spikes in this value represent steps, so all you have to do is count the spikes. Mind that a simple g > 1 test will not do: for one step, the g value will increase for a certain amount of time and then go back. If you plot the value over time, it should look like a sine wave whenever there is a step; essentially you want to count the sine waves.
Mind you, this is just something to get you started; you will have to add more complexity to increase accuracy. Things like:
- hysteresis to avoid false step detection (a rough sketch of this idea follows below)
- filtering the accelerometer output
- figuring out the step intervals
are not included here and should be experimented with.
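For instance, the hysteresis point can be handled with two thresholds around g = 1, so small jitter does not toggle the detector. A rough sketch (in JavaScript; both threshold values are assumptions to tune against real data):

// Count a step only when g rises above HIGH and later falls back below LOW.
const HIGH = 1.15; // illustrative upper threshold
const LOW = 0.95;  // illustrative lower threshold
let inStep = false;
let stepCount = 0;

function onSample(g) {
    if (!inStep && g > HIGH) {
        inStep = true;        // rising edge of the spike
    } else if (inStep && g < LOW) {
        inStep = false;       // falling edge: one full spike counted
        stepCount++;
    }
}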
You can detect a step event using accelerometer data from CMMotionManager:
protected CMMotionManager _motionManager;
public event EventHandler<bool> OnMotion;
public double ACCEL_DETECTION_LIMIT = 0.31;
private const double ACCEL_REDUCE_SPEED = 0.9;
private double accel = -1;
private double accelCurrent = 0;
private void StartAccelerometerUpdates()
{
    if (_motionManager.AccelerometerAvailable)
    {
        _motionManager.AccelerometerUpdateInterval = ACCEL_UPDATE_INTERVAL;
        _motionManager.StartAccelerometerUpdates(NSOperationQueue.MainQueue, AccelerometerDataUpdatedHandler);
    }
}
public void AccelerometerDataUpdatedHandler(CMAccelerometerData data, NSError error)
{
    double x = data.Acceleration.X;
    double y = data.Acceleration.Y;
    double z = data.Acceleration.Z;
    double accelLast = accelCurrent;
    accelCurrent = Math.Sqrt(x * x + y * y + z * z);
    double delta = accelCurrent - accelLast;
    // high-pass style filter: decay the previous value and add the new change
    accel = accel * ACCEL_REDUCE_SPEED + delta;
    var didStep = OnMotion;
    if (accel > ACCEL_DETECTION_LIMIT)
    {
        didStep(this, true); // made a step
    }
    else
    {
        didStep(this, false);
    }
}

Collision accuracy

Not quite sure how to word this, but I've been using
if (CGRectIntersectsRect(ball.frame, bottom.frame)) {
    [self finish];
}
for my collision detection, and it runs the code, but the ball sometimes collides with the bottom and triggers the code even though you can clearly see there is a gap between the objects. I created the images myself and they have no background around them. I was wondering if there is another way of coding this, or a way to make it so the code doesn't run until the objects intersect some number of pixels into one another.
You can implement ball-line collision in many ways. Your solution is in fact rectangle-rectangle collision detection. Here is how I did it in one of my small game projects; it gave me the best results and it's simple.
Let's say that a ball has a ballRadius, and location (xBall, yBall). The line is defined with two points (xStart, yStart) and (xEnd, yEnd).
Implementation of a simple collision detection:
float ballRadius = ...;
float x1 = xStart - xBall;
float y1 = yStart - yBall;
float x2 = xEnd - xBall;
float y2 = yEnd - yBall;
float dx = x2 - x1;
float dy = y2 - y1;
float dr = sqrtf(powf(dx, 2) + powf(dy, 2));
float D = x1 * y2 - x2 * y1;
// discriminant of the circle-line intersection; the 0.9 factor shrinks the
// effective radius so the ball must overlap the line slightly before a hit
float delta = powf(ballRadius * 0.9f, 2) * powf(dr, 2) - powf(D, 2);
if (delta >= 0)
{
    // collision detected
}
If delta is greater than zero, there are two intersections between the ball (circle) and the line. If delta is equal to zero, there is exactly one intersection: a tangential, "perfect" collision.
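For reference, delta above is the standard discriminant for intersecting a circle of radius r centred at the origin with the line through (x1, y1) and (x2, y2):

\[
d_r^2 = (x_2 - x_1)^2 + (y_2 - y_1)^2, \qquad D = x_1 y_2 - x_2 y_1, \qquad \Delta = r^2 d_r^2 - D^2
\]

with \(\Delta < 0\) meaning no intersection. Note that the code uses r = 0.9 * ballRadius: shrinking the effective radius means the test only fires once the ball has penetrated the line by about 10% of its radius, which is exactly the "intersect x amount of pixels into one another" behaviour the question asks for.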

XNA 2D rotated sprite, with relative rotated position (Lunar Lander Exhaust Placement)

I am making a little Lunar Lander clone, and it's working quite OK. Now I have added particle effects to the lander, so when the thrust is engaged a particle effect is created, but it appears in the middle of my ship.
What I would like to happen is for the particles to be created where the ship's exhaust is on the sprite, and this has me stumped. I know I should be able to calculate it, as I have both the rotation angle and the current location, so I should be able to get the rotated location of any pixel within my 64x64 sprite.
I'm interested in calculating the Lander.exhaust.X and Lander.exhaust.Y values. Could anyone point me in the right direction?
// this is part of the code, I'm sure not all of it is needed :)
Lander.acceleration.X = Lander.acceleration.X * (0.01f * gameTime.ElapsedGameTime.Seconds);
Lander.acceleration.Y = Lander.acceleration.Y * (0.01f * gameTime.ElapsedGameTime.Seconds);
Lander.velocity.Y = Lander.velocity.Y + (0.05f + Lander.velocity.Y * gameTime.ElapsedGameTime.Seconds);
Lander.oldvelocity.X = Lander.velocity.X;
Lander.oldvelocity.Y = Lander.velocity.Y;
Lander.exhaust.X = (float)Math.Cos(Lander.RotationAngle) * 0.1f + Lander.Position.Y ;
Lander.exhaust.Y = (float)Math.Sin(Lander.RotationAngle) * 0.1f + Lander.Position.X ;
Lander.Position.Y = Lander.velocity.Y + Lander.Position.Y;
Lander.Position.X = Lander.velocity.X + Lander.Position.X;
//if (Lander.Position.Y >= groundlevel + (Lander.mSpriteTexture.Height / 2))
if (Lander.Position.Y >= groundlevel)
{
    Lander.Position.Y = groundlevel;
    Lander.velocity.X = 0f;
    Lander.oldvelocity.X = 0f;
}
float circle = MathHelper.Pi * 2;
RotationAngle = RotationAngle % circle;
Lander.RotationAngle = RotationAngle;
RotationAngledegrees = MathHelper.ToDegrees(RotationAngle);
if (keyState.IsKeyDown(Keys.Space))
{
    Lander.acceleration.X = (float)Math.Cos(Lander.RotationAngle) * 0.1f + Lander.acceleration.X;
    Lander.acceleration.Y = (float)Math.Sin(Lander.RotationAngle) * 0.1f + Lander.acceleration.Y;
    Lander.velocity.X = Lander.oldvelocity.X + Lander.acceleration.X;
    Lander.velocity.Y = Lander.oldvelocity.Y + Lander.acceleration.Y;
    particleEngine.EmitterLocation = new Vector2(Lander.exhaust.X, Lander.exhaust.Y);
}
-lasse
Lander.exhaust.X = (float)Math.Cos(RotationAngle) * 32 + Lander.Position.X;
Lander.exhaust.Y = (float)Math.Sin(RotationAngle) * 32 + Lander.Position.Y;
You may have to subtract or add PI/2 to the angle, depending on the initial orientation of the sprite. The emitter will be 32 pixels away from the position of the Lander.
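More generally, for any point given in the sprite's local coordinates as an offset \((o_x, o_y)\) from the centre of rotation, the world position is the ship's position plus the rotated offset; the two lines above are the special case \((o_x, o_y) = (32, 0)\):

\[
\begin{pmatrix} e_x \\ e_y \end{pmatrix} =
\begin{pmatrix} p_x \\ p_y \end{pmatrix} +
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} o_x \\ o_y \end{pmatrix}
\]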
On a side note, it would probably be a good idea to put each part of the game in its own class; it will make changing things later a LOT easier.
And another thing, when you add the velocity to the position, you can do this:
Lander.Position += Lander.velocity;
It basically does the same thing.

DeviceMotion relative to world - multiplyByInverseOfAttitude

What is the correct way to use CMAttitude:multiplyByInverseOfAttitude?
Assuming an iOS 5 device lying flat on a table, after starting CMMotionManager with:
CMMotionManager *motionManager = [[CMMotionManager alloc]init];
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:
CMAttitudeReferenceFrameXTrueNorthZVertical];
Later, CMDeviceMotion objects are obtained:
CMDeviceMotion *deviceMotion = [motionManager deviceMotion];
I expect that [deviceMotion attitude] reflects the rotation of the device from True North.
By observation, [deviceMotion userAcceleration] reports acceleration in the device reference frame. That is, moving the device side to side (keeping it flat on the table) registers acceleration in the x-axis. Turning the device 90° (still flat) and moving the device side to side still reports x acceleration.
What is the correct way to transform [deviceMotion userAcceleration] to obtain North-South/East-West acceleration rather than left-right/forward-backward?
CMAttitude multiplyByInverseOfAttitude seems unnecessary since a reference frame has already been specified and it is unclear from the documentation how to apply the attitude to CMAcceleration.
The question would not have arisen if CMDeviceMotion had an accessor for the userAcceleration in coordinates of the reference frame. So, I used a category to add the required method:
In CMDeviceMotion+TransformToReferenceFrame.h:
#import <CoreMotion/CoreMotion.h>
@interface CMDeviceMotion (TransformToReferenceFrame)
-(CMAcceleration)userAccelerationInReferenceFrame;
@end
and in CMDeviceMotion+TransformToReferenceFrame.m:
#import "CMDeviceMotion+TransformToReferenceFrame.h"
@implementation CMDeviceMotion (TransformToReferenceFrame)
-(CMAcceleration)userAccelerationInReferenceFrame
{
    CMAcceleration acc = [self userAcceleration];
    CMRotationMatrix rot = [self attitude].rotationMatrix;
    CMAcceleration accRef;
    accRef.x = acc.x*rot.m11 + acc.y*rot.m12 + acc.z*rot.m13;
    accRef.y = acc.x*rot.m21 + acc.y*rot.m22 + acc.z*rot.m23;
    accRef.z = acc.x*rot.m31 + acc.y*rot.m32 + acc.z*rot.m33;
    return accRef;
}
@end
And in Swift 3:
extension CMDeviceMotion {
    var userAccelerationInReferenceFrame: CMAcceleration {
        let acc = self.userAcceleration
        let rot = self.attitude.rotationMatrix
        var accRef = CMAcceleration()
        accRef.x = acc.x*rot.m11 + acc.y*rot.m12 + acc.z*rot.m13
        accRef.y = acc.x*rot.m21 + acc.y*rot.m22 + acc.z*rot.m23
        accRef.z = acc.x*rot.m31 + acc.y*rot.m32 + acc.z*rot.m33
        return accRef
    }
}
Now, code that previously used [deviceMotion userAcceleration] can use [deviceMotion userAccelerationInReferenceFrame] instead.
According to Apple's documentation, CMAttitude refers to the orientation of a body relative to a given frame of reference, while userAcceleration and gravity are both expressed in the device's frame. So in order to get their values in the reference frame, we should do as @Batti said:
- take the attitude rotation matrix at every update,
- compute the inverse matrix,
- multiply the inverse matrix by the userAcceleration vector.
Here's the Swift version
import CoreMotion
import GLKit

extension CMDeviceMotion {
    func userAccelerationInReferenceFrame() -> CMAcceleration {
        let origin = userAcceleration
        let rotation = attitude.rotationMatrix
        let matrix = rotation.inverse()
        var result = CMAcceleration()
        result.x = origin.x * matrix.m11 + origin.y * matrix.m12 + origin.z * matrix.m13
        result.y = origin.x * matrix.m21 + origin.y * matrix.m22 + origin.z * matrix.m23
        result.z = origin.x * matrix.m31 + origin.y * matrix.m32 + origin.z * matrix.m33
        return result
    }

    func gravityInReferenceFrame() -> CMAcceleration {
        let origin = self.gravity
        let rotation = attitude.rotationMatrix
        let matrix = rotation.inverse()
        var result = CMAcceleration()
        result.x = origin.x * matrix.m11 + origin.y * matrix.m12 + origin.z * matrix.m13
        result.y = origin.x * matrix.m21 + origin.y * matrix.m22 + origin.z * matrix.m23
        result.z = origin.x * matrix.m31 + origin.y * matrix.m32 + origin.z * matrix.m33
        return result
    }
}

extension CMRotationMatrix {
    func inverse() -> CMRotationMatrix {
        let matrix = GLKMatrix3Make(Float(m11), Float(m12), Float(m13),
                                    Float(m21), Float(m22), Float(m23),
                                    Float(m31), Float(m32), Float(m33))
        let invert = GLKMatrix3Invert(matrix, nil)
        return CMRotationMatrix(m11: Double(invert.m00), m12: Double(invert.m01), m13: Double(invert.m02),
                                m21: Double(invert.m10), m22: Double(invert.m11), m23: Double(invert.m12),
                                m31: Double(invert.m20), m32: Double(invert.m21), m33: Double(invert.m22))
    }
}
Hope it helps a little bit
I tried to implement a solution after reading the paper linked above.
The steps are as follows:
- take the attitude rotation matrix at every update,
- compute the inverse matrix,
- multiply the inverse matrix by the userAcceleration vector;
the resulting vector is the projection of the acceleration onto the reference frame:
-x north, +x south
-y east, +y west
My code is not perfect yet; I'm working on it.
The reference frame is related to the attitude value; look at the yaw angle. If you don't use a reference frame, the yaw is always zero when you start your app; if instead you use the reference frame CMAttitudeReferenceFrameXTrueNorthZVertical, the yaw indicates the angle between the x-axis and true north.
With this information you can identify the attitude of the phone in the coordinates of the earth, and therefore the position of the accelerometer axes with respect to the cardinal points.
