Alternative to CMPedometer to calculate number of steps with accelerometer on iOS

Since CMPedometer is not available on devices below the iPhone 5S (see CMPedometer StepCounting not Available), is there an algorithm we can use to count the number of steps with the accelerometer on iOS?
Thanks

iOS aside, there is no simple solution for creating an accurate pedometer using just the accelerometer output; it's just too noisy. Using the output from a gyroscope (where available) to filter the output would increase the accuracy.
But here's a crude approach to writing code for a pedometer:
- Steps are detected as a variation in the acceleration detected on the Z axis. Assuming you know the default acceleration (the pull of gravity), here's how you do it:
float g = (x * x + y * y + z * z) / (GRAVITY_VALUE * GRAVITY_VALUE)
Your threshold is g = 1 (this is what you would see when standing still). Spikes in this value represent steps, so all you have to do is count the spikes. Please note that a simple g > 1 check will not do: for one step, the g value will increase for a certain amount of time and then go back (if you plot the value over time, it should look like a sine wave whenever there is a step; essentially you want to count the sine waves).
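Here is a minimal sketch of that spike counting in plain C (names and the threshold value are illustrative; x, y, z are assumed to be in the same units as GRAVITY_VALUE):
// Count one step each time the normalized magnitude g rises through a
// threshold slightly above 1 (i.e. the start of a spike).
#define GRAVITY_VALUE 9.81f
#define STEP_THRESHOLD 1.15f   // tune experimentally

static int stepCount = 0;
static int wasAbove = 0;

void onAccelSample(float x, float y, float z) {
    float g = (x * x + y * y + z * z) / (GRAVITY_VALUE * GRAVITY_VALUE);
    if (g > STEP_THRESHOLD && !wasAbove) {
        wasAbove = 1;
        stepCount++;            // rising edge of a spike -> one step
    } else if (g < STEP_THRESHOLD) {
        wasAbove = 0;           // spike has passed, ready for the next one
    }
}
Counting only the upward crossing of the threshold is what keeps one spike from being counted as several steps.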
Mind you that this is just something to get you started; you will have to add more complexity to it to increase accuracy.
Things like:
- hysteresis to avoid false step detection
- filtering the accelerometer output
- figuring out the step intervals
Are not included here and should be experimented with.

You can detect a step event using accelerometer data from CMMotionManager:
protected CMMotionManager _motionManager;
public event EventHandler<bool> OnMotion;
public double ACCEL_DETECTION_LIMIT = 0.31;
private const double ACCEL_REDUCE_SPEED = 0.9;
private const double ACCEL_UPDATE_INTERVAL = 0.02; // sampling interval in seconds (assumed value; tune as needed)
private double accel = -1;
private double accelCurrent = 0;

private void StartAccelerometerUpdates()
{
    if (_motionManager.AccelerometerAvailable)
    {
        _motionManager.AccelerometerUpdateInterval = ACCEL_UPDATE_INTERVAL;
        _motionManager.StartAccelerometerUpdates(NSOperationQueue.MainQueue, AccelerometerDataUpdatedHandler);
    }
}

public void AccelerometerDataUpdatedHandler(CMAccelerometerData data, NSError error)
{
    double x = data.Acceleration.X;
    double y = data.Acceleration.Y;
    double z = data.Acceleration.Z;

    double accelLast = accelCurrent;
    accelCurrent = Math.Sqrt(x * x + y * y + z * z);
    double delta = accelCurrent - accelLast;

    // Smooth the change in magnitude and compare it against the detection limit.
    accel = accel * ACCEL_REDUCE_SPEED + delta;

    var didStep = OnMotion;
    if (didStep == null)
        return;

    if (accel > ACCEL_DETECTION_LIMIT)
    {
        didStep(this, true);  // made a step
    }
    else
    {
        didStep(this, false);
    }
}

Related

Creating square, sawtooth and triangle waves in Objective-C

I am trying to generate different wave shapes for a simple wavetable synth iOS app I am working on. This is how I am generating the sine wave:
if (waveType == 1) // sine wave
{
    //NSLog(@"sine in AU");
    for (int i = 0; i < n; i++)
    {
        float x = 0.0; // default to silence
        if (toneCount > 0) // or create a sine wave
        {
            x = testVolume * sinf(ph);
            ph = ph + dp;
            if (ph > M_PI) { ph -= 2.0 * M_PI; } // sine wave
            toneCount -= 1; // decrement tone length counter
        }
        if (ptrLeft != NULL) {
            ptrLeft[i] = x;
        }
        if (ptrRight != NULL) {
            ptrRight[i] = x;
        }
    }
}
For a square wave I assumed this would work, but it doesn't:
if (ph > M_PI) {
    ph -= 2.0 * M_PI;
    ph >= 0 ? 1.0 : -1.0;
}
How would I go about creating sawtooth and triangle waves, and where is the square wave going wrong?
Thanks, I'm new to iOS programming and love working with audio.
For starters think about your line
if (ph > M_PI) { ph -= 2.0 * M_PI; } // sine wave
ph is your input into your sin function, which is some value in radians ... be aware that the sin function will automatically handle values greater than 2 * PI; it simply wraps them around, so there is no need to manage your ph variable other than simply incrementing it
Regarding your square wave ... keep in mind your output has only two values, low or high ... do not drive the toggle between low/high by the value of the curve
if (ph > M_PI) {
    ph -= 2.0 * M_PI;
    ph >= 0 ? 1.0 : -1.0;
}
Keep in mind that as you increment your outer loop i you are incrementing ph, which you are using to determine where you are in your PI cycle; that is OK, however that same ph then cannot also be the input to sin(ph) as well ... something else must get fed into sin(), or else something else must maintain your sample position in your PI cycle ... you are mistakenly conflating two responsibilities into a single variable
In the above if block, think about when you must perform the toggle from low to high or back from high to low ... does it happen at M_PI? How many such toggles happen per cycle? When? Perhaps you need an additional if test to determine when to perform that toggle ... in your sine curve algorithm you are already managing ph; must you handle the square wave ph differently, or leave the ph logic the same?
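To make that concrete, here is a minimal sketch in plain C of deriving square, sawtooth and triangle samples from a single phase accumulator kept in [0, 2*PI) (names and structure are illustrative, not the poster's actual code):
// One phase accumulator ph in [0, 2*PI); dp is the per-sample phase increment.
#include <math.h>

float nextSample(int waveType, float *ph, float dp, float volume) {
    float x;
    if (waveType == 1) {                    // sine
        x = sinf(*ph);
    } else if (waveType == 2) {             // square: +1 for the first half cycle, -1 for the second
        x = (*ph < M_PI) ? 1.0f : -1.0f;
    } else if (waveType == 3) {             // sawtooth: ramps from -1 up to +1 over one cycle
        x = (*ph / (float)M_PI) - 1.0f;
    } else {                                // triangle: up for half a cycle, then back down
        x = (*ph < M_PI) ? (2.0f * *ph / (float)M_PI - 1.0f)
                         : (3.0f - 2.0f * *ph / (float)M_PI);
    }
    *ph += dp;                              // advance the phase for the next sample
    if (*ph >= 2.0f * (float)M_PI) *ph -= 2.0f * (float)M_PI;
    return volume * x;
}
Note that the square and sawtooth branches never call sinf(); the phase variable alone decides where you are in the cycle, which is the separation of responsibilities described above.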

What does the "simd" prefix mean in SceneKit?

There is a SCNNode category named SCNNode(SIMD), which declares some properties like simdPosition, simdRotation and so on. It seems these are duplicated properties of the original/normal properties position and rotation.
@property(nonatomic) simd_float3 simdPosition API_AVAILABLE(macos(10.13), ios(11.0), tvos(11.0), watchos(4.0));
@property(nonatomic) simd_float4 simdRotation API_AVAILABLE(macos(10.13), ios(11.0), tvos(11.0), watchos(4.0));
What's the difference between position and simdPosition? What does the prefix "simd" mean exactly?
SIMD: Single Instruction Multiple Data
SIMD instructions allow you to perform the same operation on multiple values at the same time.
Let's see an example
Serial Approach (NO SIMD)
We have these 4 Int32 values
let x0: Int32 = 10
let y0: Int32 = 20
let x1: Int32 = 30
let y1: Int32 = 40
Now we want to sum the 2 x and the 2 y values, so we write
let sumX = x0 + x1 // 40
let sumY = y0 + y1 // 60
In order to perform the 2 previous sums, the CPU needs to:
- load x0 and x1 in memory and add them
- load y0 and y1 in memory and add them
So the result is obtained with 2 operations.
SIMD
Let's now see how SIMD works.
First of all, we need the input values stored in the proper SIMD format, so
let x = simd_int2(10, 20)
let y = simd_int2(30, 40)
As you can see, the previous x and y are vectors. In fact, both x and y contain 2 components.
Now we can write
let sum = x + y
Let's see what the CPU does in order to perform the previous operations
load x and y in memory and add them
That's it!
Both components of x and both components of y are processed at the same time.
Parallel Programming
We are NOT talking about concurrent programming; this is real parallel programming.
As you can imagine, for certain operations the SIMD approach is way faster than the serial one.
Scene Kit
Let's see now an example in SceneKit
We want to add 10 to the x, y and z components of all the direct descendants of the scene node.
Using the classic serial approach we can write
for node in scene.rootNode.childNodes {
    node.position.x += 10
    node.position.y += 10
    node.position.z += 10
}
Here, a total of childNodes.count * 3 operations are executed.
Let's now see how we can convert the previous code into SIMD instructions
let delta = simd_float3(10)
for node in scene.rootNode.childNodes {
    node.simdPosition += delta
}
This code is much faster than the previous one. I am not sure whether it is 2x or 3x faster but, believe me, it's way better.
Wrap up
If you need to perform the same operation several times on different values, just use the SIMD properties :)
SIMD is a small library built on top of vector types that you can import from <simd/simd.h>. It allows for more expressive and more performant code.
For instance using SIMD you can write
simd_float3 result = a + 2.0 * b;
instead of
SCNVector3 result = SCNVector3Make(a.x + 2.0 * b.x, a.y + 2.0 * b.y, a.z + 2.0 * b.z);
In Objective-C you cannot overload methods. That is, you cannot have both
@property(nonatomic) SCNVector3 position;
@property(nonatomic) simd_float3 position API_AVAILABLE(macos(10.13), ios(11.0), tvos(11.0), watchos(4.0));
The new SIMD-based API needed a different name, and that's why SceneKit exposes simdPosition.

CMMotionManager - How to detect steps ignoring small movements?

Getting data from the CMMotionManager is fairly straightforward; processing it, not so much.
Does anybody have any pointers to code for relatively accurately detecting a step (and ignoring smaller movements) or guidelines in a general direction how to go about such a thing?
What you basically need is a kind of low-pass filter that will allow you to ignore small movements. Effectively, this “smooths” out the data by taking out the jitter.
- (void)updateViewsWithFilteredAcceleration:(CMAcceleration)acceleration
{
    static CGFloat x0 = 0;
    static CGFloat y0 = 0;

    const NSTimeInterval dt = (1.0 / 20);
    const double RC = 0.3;
    const double alpha = dt / (RC + dt);

    CMAcceleration smoothed;
    smoothed.x = (alpha * acceleration.x) + (1.0 - alpha) * x0;
    smoothed.y = (alpha * acceleration.y) + (1.0 - alpha) * y0;

    [self updateViewsWithAcceleration:smoothed];

    x0 = smoothed.x;
    y0 = smoothed.y;
}
The alpha value determines how much weight to give the previous data vs the raw data.
The dt is how much time elapsed between samples.
The RC value controls the aggressiveness of the filter: bigger values mean smoother output.
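As a rough worked example with the values above: alpha = 0.05 / (0.3 + 0.05) ≈ 0.14, so each new reading contributes about 14% to the smoothed value and the previous estimate contributes about 86%. For a first-order RC filter the cutoff frequency is roughly 1 / (2 * π * RC) ≈ 0.5 Hz here, so slow, deliberate movements pass through while faster jitter is attenuated.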

Madgwick's sensor fusion algorithm on iOS

I'm trying to run Madgwick's sensor fusion algorithm on iOS. Since the code is open source, I already included it in my project and call the methods with the provided sensor values.
But it seems that the algorithm expects the sensor measurements in a different coordinate system. The Apple CoreMotion sensor system is given on the right side, Madgwick's on the left. Here is the picture of the different coordinate systems. Both systems follow the right-hand rule.
To me it seems like there is a 90 degree rotation around the z axis, but this didn't work.
I also tried to flip the x and y (and invert the z) axes as suggested by other Stack Overflow posts for WP, but this didn't work either. So do you have a hint?
It would be perfect if Madgwick's algorithm output could be in the same system as the CoreMotion output (CMAttitudeReferenceFrameXMagneticNorthZVertical).
Furthermore I'm looking for a good working value for betaDef on the iPhone. betaDef is kind of the proportional gain and is currently set to 0.1f.
Any help on how to achieve the goal would be appreciated.
I'm not sure how to write this in Objective-C, but here's how I accomplished the coordinate transformation in vanilla C. I also wanted to rotate the orientation so that +y is north; this translation is also reflected in the method below.
This method expects a 4-element quaternion in wxyz order and returns a translated quaternion in the same format:
// Quaternion multiplication operator. Expects its 4-element arrays in wxyz order.
void quatMult(float *a, float *b, float *ret);

void madgeq_to_openglq(float *fMadgQ, float *fRetQ) {
    float fTmpQ[4];

    // Rotate around the Z axis by 90 degrees
    // (components: cos(45°), 0, 0, -sin(45°)):
    float fXYRotationQ[4] = { sqrt(0.5), 0, 0, -1.0 * sqrt(0.5) };

    // Invert the rotation vector components to accommodate handedness issues:
    fTmpQ[0] = fMadgQ[0];
    fTmpQ[1] = fMadgQ[1] * -1.0f;
    fTmpQ[2] = fMadgQ[2];
    fTmpQ[3] = fMadgQ[3] * -1.0f;

    // And then store the translated rotation into ret:
    quatMult((float *) &fTmpQ, (float *) &fXYRotationQ, fRetQ);
}

void quatMult(float *a, float *b, float *ret) {
    ret[0] = (b[0] * a[0]) - (b[1] * a[1]) - (b[2] * a[2]) - (b[3] * a[3]);
    ret[1] = (b[0] * a[1]) + (b[1] * a[0]) + (b[2] * a[3]) - (b[3] * a[2]);
    ret[2] = (b[0] * a[2]) + (b[2] * a[0]) + (b[3] * a[1]) - (b[1] * a[3]);
    ret[3] = (b[0] * a[3]) + (b[3] * a[0]) + (b[1] * a[2]) - (b[2] * a[1]);
    return;
}
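For reference, a minimal call-site sketch (illustrative; q0..q3 are assumed to be the w, x, y, z state variables of the Madgwick filter implementation):
// Pack the filter output, convert it, and read back the result (wxyz order).
float fMadgQ[4] = { q0, q1, q2, q3 };
float fWorldQ[4];
madgeq_to_openglq(fMadgQ, fWorldQ);
// fWorldQ now holds the rotated, handedness-corrected quaternion.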
Hope that helps!

Microsoft Kinect SDK Depth calibration

I am working on a 3D model reconstruction application with the Kinect sensor. I use the Microsoft SDK to get depth data, and I want to calculate the location of each point in the real world. I have read several articles about it and implemented several depth-calibration methods, but none of them works in my application. The closest calibration was http://openkinect.org/wiki/Imaging_Information
but my result in MeshLab was not acceptable.
I calculate the depth value with this method:
private double GetDistance(byte firstByte, byte secondByte)
{
    double distance = (double)(firstByte >> 3 | secondByte << 5);
    return distance;
}
and then I used the methods below to calculate the real-world distance:
public static float RawDepthToMeters(int depthValue)
{
    if (depthValue < 2047)
    {
        return (float)(0.1 / ((double)depthValue * -0.0030711016 + 3.3309495161));
    }
    return 0.0f;
}

public static Point3D DepthToWorld(int x, int y, int depthValue)
{
    const double fx_d = 5.9421434211923247e+02;
    const double fy_d = 5.9104053696870778e+02;
    const double cx_d = 3.3930780975300314e+02;
    const double cy_d = 2.4273913761751615e+02;

    double depth = RawDepthToMeters(depthValue);
    Point3D result = new Point3D((float)((x - cx_d) * depth / fx_d),
                                 (float)((y - cy_d) * depth / fy_d),
                                 (float)depth);
    return result;
}
These methods did not work well and the generated scene was not correct. Then I used the method below; the result is better than with the previous method, but it is not acceptable yet.
public static Point3D DepthToWorld(int x, int y, int depthValue)
{
    const int w = 640;
    const int h = 480;
    int minDistance = -10;
    double scaleFactor = 0.0021;

    Point3D result = new Point3D((x - w / 2) * (depthValue + minDistance) * scaleFactor * (w / h),
                                 (y - h / 2) * (depthValue + minDistance) * scaleFactor,
                                 depthValue);
    return result;
}
I was wondering if you could let me know how I can calculate the real-world position based on the depth pixel values calculated by my method.
The GetDistance() function you're using to calculate real depth assumes the depth format that embeds the Kinect player index in the low bits. So check that you are opening your Kinect stream accordingly, or get only the raw depth data:
Runtime nui = Runtime.Kinects[0];
nui.Initialize(RuntimeOptions.UseDepth);
nui.DepthStream.Open(
    ImageStreamType.Depth,
    2,
    ImageResolution.Resolution320x240,
    ImageType.Depth);
and then compute the depth by simply bit-shifting the second byte by 8:
Distance (0,0) = (int)(Bits[0] | Bits[1] << 8);
The first calibration method should work OK, even though you could make a small improvement by using a better approximation given by Stéphane Magnenat:
distance = 0.1236 * tan(rawDisparity / 2842.5 + 1.1863) in meters
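As a minimal sketch in plain C (the function name is illustrative; rawDisparity is the 11-bit raw value):
#include <math.h>

// Stéphane Magnenat's approximation: raw 11-bit disparity -> distance in metres
float rawDisparityToMeters(int rawDisparity) {
    return 0.1236f * tanf(rawDisparity / 2842.5f + 1.1863f);
}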
If you really need more accurate calibration values, you should calibrate your Kinect using, for example, a tool such as the MATLAB Kinect calibration:
http://sourceforge.net/projects/kinectcalib/
And double-check the obtained values against the ones you are currently using, provided by Nicolas Burrus.
EDIT
Reading your question again, I noticed that you are using the Microsoft SDK, so the values returned from the Kinect sensor are already real distances in mm. You do not need to use the RawDepthToMeters() function; it should be used only with the non-official SDK.
The hardware creates a depth map that is a non-linear function of the disparity values, with 11 bits of precision. The Kinect SDK driver converts these disparity values to mm out of the box and rounds the result to an integer. The MS Kinect SDK has an 800 mm to 4000 mm depth range.
