CMMotionManager - How to detect steps ignoring small movements? - ios

Getting data from the CMMotionManager is fairly straightforward; processing it, not so much.
Does anybody have any pointers to code for relatively accurately detecting a step (and ignoring smaller movements) or guidelines in a general direction how to go about such a thing?

What you basically need is a kind of low-pass filter that will allow you to ignore small movements. Effectively, this "smooths" out the data by taking out the jitter.
- (void)updateViewsWithFilteredAcceleration:(CMAcceleration)acceleration
{
    // Previous filtered values, kept between calls.
    static CGFloat x0 = 0;
    static CGFloat y0 = 0;

    // dt is the sample interval (20 Hz updates here); RC is the filter time constant.
    const NSTimeInterval dt = (1.0 / 20);
    const double RC = 0.3;
    const double alpha = dt / (RC + dt);

    CMAcceleration smoothed;
    smoothed.x = (alpha * acceleration.x) + (1.0 - alpha) * x0;
    smoothed.y = (alpha * acceleration.y) + (1.0 - alpha) * y0;
    smoothed.z = acceleration.z; // z is passed through unfiltered in this example

    [self updateViewsWithAcceleration:smoothed];

    x0 = smoothed.x;
    y0 = smoothed.y;
}
The alpha value determines how much weight to give the previous data versus the raw data.
dt is the time elapsed between samples.
The RC value controls the aggressiveness of the filter: bigger values mean smoother output.
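On top of the smoothed signal you still need something that actually counts steps. Below is a minimal C sketch (not part of the answer above) of one crude approach: count a step each time the smoothed acceleration magnitude crosses a threshold, and ignore further crossings for a short refractory interval so a single step isn't counted twice. The threshold and interval values are assumptions that would need tuning against real data.
#include <stdbool.h>

// Crude step detector on a smoothed acceleration magnitude (in g).
// STEP_THRESHOLD and MIN_STEP_INTERVAL are guesses; tune them on real data.
#define STEP_THRESHOLD    1.15  // magnitude (in g) that must be exceeded
#define MIN_STEP_INTERVAL 0.3   // seconds between counted steps

typedef struct {
    double lastStepTime;   // timestamp of the last counted step
    bool   aboveThreshold; // whether we are currently inside a spike
    int    stepCount;
} StepDetector;

// Feed one smoothed magnitude sample; returns true if a new step was counted.
bool step_detector_feed(StepDetector *d, double magnitude, double timestamp) {
    if (!d->aboveThreshold && magnitude > STEP_THRESHOLD &&
        (timestamp - d->lastStepTime) > MIN_STEP_INTERVAL) {
        d->aboveThreshold = true;
        d->lastStepTime = timestamp;
        d->stepCount++;
        return true;
    }
    if (d->aboveThreshold && magnitude < STEP_THRESHOLD) {
        d->aboveThreshold = false; // spike ended, arm for the next one
    }
    return false;
}
You would compute the magnitude from the smoothed x, y, z components and feed it to step_detector_feed once per sample.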

Related

Use AVAudioRecorder metering to make triangular wave

I am trying to make triangular waves for an audio recorder through metering. I am using AVAudioRecorder, which means a Fast Fourier Transform will not work in this case (and secondly, I don't have enough knowledge of how to implement one). I found this project on GitHub. In this project the author uses the following equation to make a smooth sine wave:
CGFloat y = scaling * self.maxAmplitude * normedAmplitude * sinf(2 * M_PI *(x / self.waveWidth) * self.frequency + self.phase) + (self.waveHeight * 0.5);
If you consider the sinf(2 * M_PI *(x / self.waveWidth) * self.frequency + self.phase) part of the equation, you will find that it is the equation of a sine wave (Wikipedia). If I replace this part with the equation of a triangle wave (Wikipedia), it still makes a sine wave with little difference. I want to transform this equation in such a way that it makes a triangular wave instead of a sine wave.
My triangle wave equation looks like this:
CGFloat t = x / self.waveWidth;
CGFloat numerator = sinf( (2.0 * M_PI * (2.0 * self.amplitude + 1.0) * self.frequency * t) );
CGFloat denominator = (2.0 * self.amplitude + 1.0) * (2.0 * self.amplitude + 1.0);
CGFloat multiplyer = (8.0 / pow(M_PI, 2.0));
CGFloat result = multiplyer * (numerator / denominator);
Then finally y position is calculated by:
y = (result * scaling * self.maxAmplitude * normedAmplitude) + (self.waveHeight * 0.5);
The animation also looks unnatural. The output of this equation is:
Thanks
Well, looking at the equation you're using (which is the Fourier series for a triangle wave), you're implementing it a bit wrong: the harmonic index k should increase from term to term, but you've left it constant at 2.0 * self.amplitude + 1.0. You're also leaving out the (-1)^k factor, which alternates the sign of successive odd harmonics.
Wikipedia wrote this:
It is possible to approximate a triangle wave with additive synthesis by adding odd harmonics of the fundamental, multiplying every (4n−1)th harmonic by −1 (or changing its phase by π), and rolling off the harmonics by the inverse square of their relative frequency to the fundamental.
I'm guessing (as I'm not a DSP expert) that because you're leaving the k value as a constant it is just giving you a sine wave output.
Look at this algorithm block for the triangle wave (try it, then change it for your code):
// phase and phaseIncr are in radians; phase starts at 0 and must persist across buffers.
phaseIncr = (2.0 * M_PI / sample_rate) * self.frequency;
for (int i = 0; i < numSamples; i++) {
    // Map the current phase (-PI..PI) onto a triangle in the range -1..1.
    triVal = (phase * 2.0 / M_PI);
    if (phase < 0) triVal = 1.0 + triVal;
    else           triVal = 1.0 - triVal;
    sample = amplitude * triVal;
    // Advance and wrap the phase.
    if ((phase += phaseIncr) >= M_PI) phase -= (2.0 * M_PI);
}
I also see that the original project wraps the phase in the setLevel method, so check that out. Hope this helps; let me know if it doesn't work and I'll try to help as much as I can.
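For reference, here is a minimal C sketch of the additive-synthesis form described above (odd harmonics with alternating sign and inverse-square roll-off). It is not code from the original project; the function name and harmonic count are my own choices:
#include <math.h>

// Triangle wave via its Fourier series:
// tri(t) = (8 / pi^2) * sum_{k=0..N-1} (-1)^k * sin(2*pi*(2k+1)*f*t) / (2k+1)^2
double triangle_series(double t, double frequency, int numHarmonics)
{
    double sum = 0.0;
    for (int k = 0; k < numHarmonics; k++) {
        double n = 2.0 * k + 1.0;                 // odd harmonic number: 1, 3, 5, ...
        double sign = (k % 2 == 0) ? 1.0 : -1.0;  // the (-1)^k alternation
        sum += sign * sin(2.0 * M_PI * n * frequency * t) / (n * n);
    }
    return (8.0 / (M_PI * M_PI)) * sum;           // roughly in -1..1
}
In the y expression from the question, the value returned here would take the place of the sinf(...) term, with the scaling and offset applied as before.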

Alternative to CMPedometer to calculate number of steps with accelerometer on iOS

CMPedometer is not available below the iPhone 5S:
CMPedometer StepCounting not Available
Is there an algorithm or code that we can use to count the number of steps with the accelerometer on iOS?
Thanks
iOS aside, there is no simple way to build an accurate pedometer using just the accelerometer output; it's just too noisy. Using the output from a gyroscope (where available) to filter the accelerometer would increase the accuracy.
But here's a crude approach to writing code for a pedometer:
- Steps are detected as a variation in the acceleration detected on the Z axis. Assuming you know the default acceleration (the impact of gravity), here's how you do it:
float g = (x * x + y * y + z * z) / (GRAVITY_VALUE * GRAVITY_VALUE);
Your threshold is g = 1 (this is what you would see when standing still). Spikes in this value represent steps, so all you have to do is count the spikes. Mind that a simple g > 1 test will not do: during one step the g value rises for a certain amount of time and then comes back down (if you plot the value over time, it should look like a sine wave wherever there is a step; essentially you want to count those waves).
Mind you that this is just something to get you started; you will have to add more complexity to it to increase accuracy.
Things like:
- hysteresis to avoid false step detection
- filtering the accelerometer output
- figuring out the step intervals
Are not included here and should be experimented with (a rough sketch of the hysteresis part follows below).
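To make the hysteresis point concrete, here is a minimal C sketch (not from the original answer) that counts a step only when g rises above an upper threshold and re-arms only after it drops back below a lower one. The threshold values are made up and would need tuning:
#include <stdbool.h>

// Two-threshold hysteresis on the normalized magnitude g (g is about 1 at rest).
// G_HIGH and G_LOW are illustrative values only; tune them on real data.
#define G_HIGH 1.20f
#define G_LOW  1.05f

typedef struct {
    bool armed;     // true when the next spike may be counted
    int  stepCount;
} SpikeCounter;

void spike_counter_feed(SpikeCounter *c, float g)
{
    if (c->armed && g > G_HIGH) {
        c->stepCount++;   // rising edge of a spike: count one step
        c->armed = false; // ignore the rest of this spike
    } else if (!c->armed && g < G_LOW) {
        c->armed = true;  // signal has settled: re-arm for the next spike
    }
}
Initialize the counter with armed set to true and feed it one g value per accelerometer sample.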
You can detect a step event using accelerometer data from CMMotionManager (the snippet below is C#, using Xamarin.iOS):
protected CMMotionManager _motionManager;

public event EventHandler<bool> OnMotion;

public double ACCEL_DETECTION_LIMIT = 0.31;
private const double ACCEL_REDUCE_SPEED = 0.9;
private const double ACCEL_UPDATE_INTERVAL = 0.02; // example value; not defined in the original post
private double accel = -1;
private double accelCurrent = 0;

private void StartAccelerometerUpdates()
{
    if (_motionManager.AccelerometerAvailable)
    {
        _motionManager.AccelerometerUpdateInterval = ACCEL_UPDATE_INTERVAL;
        _motionManager.StartAccelerometerUpdates(NSOperationQueue.MainQueue, AccelerometerDataUpdatedHandler);
    }
}

public void AccelerometerDataUpdatedHandler(CMAccelerometerData data, NSError error)
{
    double x = data.Acceleration.X;
    double y = data.Acceleration.Y;
    double z = data.Acceleration.Z;

    // Compare the current magnitude against a decaying running value; spikes show up in accel.
    double accelLast = accelCurrent;
    accelCurrent = Math.Sqrt(x * x + y * y + z * z);
    double delta = accelCurrent - accelLast;
    accel = accel * ACCEL_REDUCE_SPEED + delta;

    var didStep = OnMotion;
    if (didStep == null)
        return;

    if (accel > ACCEL_DETECTION_LIMIT)
        didStep(this, true);  // made a step
    else
        didStep(this, false);
}

Get Rho and Theta from Hough-Transform opencvsharp?

I have the Hough transform implemented using OpenCvSharp (OpenCV), and I get the lines detected on my image in a console application / Windows Forms application:
lines = edgeImg.HoughLines2(storage, HoughLinesMethod.Probabilistic, 1, Math.PI / 180, 60, 100, 100);
for (int i = 0; i < lines.Total; i++)
{
    CvLineSegmentPoint segP = lines.GetSeqElem<CvLineSegmentPoint>(i).Value;
    double angle = Math.Atan2((segP.P2.Y) - (segP.P1.Y), (segP.P2.X) - (segP.P1.X)) * 180 / Math.PI;
    if (Math.Abs(angle) <= 60)
        continue;
    if (segP.P1.Y > segP.P2.Y + 20 || segP.P1.Y < segP.P2.Y - 20)
        src.Line(segP.P1, segP.P2, CvColor.Blue, 2, LineType.AntiAlias, 0);
}
I have tried different methods for visualizing the rho-theta space. Since HoughLines2 does all the transformation internally, I have tried to recover these values from the x,y endpoints in the reverse way:
double angle = Math.Atan2(dy, dx) * 180 / Math.PI;
double theta = 90 - angle;
var thetaRad = theta*Math.PI/180;
double rho = (x1 * Math.Cos(thetaRad) + y1 * Math.Sin(thetaRad));
My first question is whether I need to compute two rho/theta values, one for (x1, y1) and one for (x2, y2), or whether calculating a single rho/theta pair is the right result.
Second, how can I visualize them in the right format? (What I currently see on my output image is some random white dots at the top left corner.)
Third, is it rational to recover rho/theta values this way, or would you suggest performing the Hough transform myself and reducing the complexity? (I used the OpenCvSharp function for better and more efficient performance!)
Thanks!
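As a side note on the reverse computation, here is a minimal C sketch of the standard normal-form parameters (my own, not tied to OpenCvSharp): a line segment defines a single (rho, theta) pair, and once theta is taken as the direction of the line's normal, both endpoints of the segment give the same rho.
#include <math.h>

// Normal-form line parameters: x*cos(theta) + y*sin(theta) = rho.
// Both endpoints of the same segment yield the same rho for this theta.
void segment_to_rho_theta(double x1, double y1, double x2, double y2,
                          double *rho, double *theta)
{
    double dirAngle = atan2(y2 - y1, x2 - x1); // direction of the segment
    double t = dirAngle + M_PI / 2.0;          // the normal is perpendicular to the direction
    double r = x1 * cos(t) + y1 * sin(t);
    // Keep theta in [0, PI), as OpenCV conventionally does, flipping rho's sign when wrapping.
    if (t < 0)          { t += M_PI; r = -r; }
    else if (t >= M_PI) { t -= M_PI; r = -r; }
    *theta = t;
    *rho = r;
}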

iOS Generate Square Sound

I want to generate a square wave sound on the iPhone. I found sine wave code on the web (sorry, I've forgotten the link), but I want to generate a square wave instead.
Could you help me please?
const double amplitude = 0.25;
ViewController *viewController = (__bridge ViewController *)inRefCon;

// Phase accumulator for the sine oscillator.
double theta = viewController->theta;
double theta_increment = 2.0 * M_PI * viewController->frequency / viewController->sampleRate;

const int channel = 0;
Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;

for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
    buffer[frame] = sin(theta) * amplitude;
    theta += theta_increment;
    if (theta > 2.0 * M_PI)
    {
        theta -= 2.0 * M_PI;
    }
}
viewController->theta = theta;
Sum of the odd harmonics
A perfect square wave is the sum of all the odd harmonics, each divided by its harmonic number, up to infinity. In the real world you have to stop somewhere - specifically at the Nyquist frequency in digital audio. Below is a picture of the fundamental plus the first 3 odd harmonics; you can see how the square begins to take shape.
In your code sample, this would mean wrapping the sine generation in another loop. Something like this:
double harmNum = 1.0;
while (true)
{
    double freq = viewController->frequency * harmNum;
    if (freq > viewController->sampleRate / 2.0) // stop at the Nyquist frequency
        break;
    double theta_increment = 2.0 * M_PI * freq / viewController->sampleRate;
    double ampl = amplitude / harmNum;
    // and then the rest of your code
    // (advance to the next odd harmonic at the end of each pass: harmNum += 2.0)
    for (UInt32 frame = ....
The main problem you'll have is that you need to track theta for each of the harmonics.
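To illustrate what that bookkeeping looks like, here is a minimal C sketch (my own, not from the answer) that keeps one phase accumulator per odd harmonic; the MAX_HARMONICS cap and names are arbitrary choices:
#include <math.h>

// One phase accumulator per odd harmonic; phases[] must persist between callbacks.
#define MAX_HARMONICS 64

void fill_square_additive(float *buffer, int numFrames, float amplitude,
                          double sampleRate, double frequency, double *phases)
{
    for (int i = 0; i < numFrames; i++) {
        double sum = 0.0;
        for (int k = 0; k < MAX_HARMONICS; k++) {
            double harmNum = 2.0 * k + 1.0;              // 1, 3, 5, ...
            double freq = frequency * harmNum;
            if (freq > sampleRate / 2.0)                 // stop at the Nyquist frequency
                break;
            sum += sin(phases[k]) / harmNum;             // roll off by the harmonic number
            phases[k] += 2.0 * M_PI * freq / sampleRate; // each harmonic advances at its own rate
            if (phases[k] > 2.0 * M_PI)
                phases[k] -= 2.0 * M_PI;
        }
        buffer[i] = (float)(amplitude * sum);
    }
}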
A cheater solution
A cheat would be to draw a square like you would on paper: divide the sample rate by the frequency, divide by 2, and then produce that number of -1 samples followed by that number of +1 samples.
For example, for a 1 kHz square wave at 48 kHz: 48000/1000/2 = 24, so you need to output [-1,-1,-1,....,1,1,1,.....] where there are 24 of each.
A major disadvantage is that you'll have poor frequency resolution. For instance, if your sample rate were 44100 you couldn't produce exactly 1 kHz, because that would require 22.05 samples at -1 and 22.05 samples at +1, so you have to round.
Depending on your requirements this might be an easier way to go, since you can implement it with a counter and the last count kept between invocations (as you're keeping theta now).
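A minimal C sketch of that counter approach (my own illustration; the variable names are not from the answer, and the 24-sample figure comes from the 48 kHz / 1 kHz example above):
// Naive square-wave generator: alternate blocks of -amplitude and +amplitude samples.
// counter and level must persist between render callbacks, just as theta does now.
void fill_square(float *buffer, int numFrames, float amplitude,
                 double sampleRate, double frequency,
                 int *counter, float *level)
{
    // Samples per half period, rounded down, hence the limited frequency resolution.
    int halfPeriod = (int)(sampleRate / frequency / 2.0);
    for (int i = 0; i < numFrames; i++) {
        buffer[i] = (*level) * amplitude;
        if (++(*counter) >= halfPeriod) {
            *counter = 0;
            *level = -(*level); // flip between -1 and +1
        }
    }
}
Start with *counter at 0 and *level at -1.0f (or +1.0f) and call this from your render callback in place of the sine loop.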

Madgwick's sensor fusion algorithm on iOS

I'm trying to run Madgwick's sensor fusion algorithm on iOS. Since the code is open source, I already included it in my project and call its methods with the provided sensor values.
But it seems that the algorithm expects the sensor measurements in a different coordinate system. The Apple CoreMotion sensor system is given on the right side, Madgwick's on the left. Here is the picture of the different coordinate systems. Both systems follow the right-hand rule.
To me it looks like there is a 90 degree rotation around the z axis, but applying that didn't work.
I also tried to flip the x and y axes (and invert the z axis) as suggested by other Stack Overflow posts for WP, but this didn't work either. So do you have a hint?
It would be perfect if Madgwick's algorithm output could be in the same system as the CoreMotion output (CMAttitudeReferenceFrameXMagneticNorthZVertical).
Furthermore, I'm looking for a good working value for betaDef on the iPhone. betaDef is a kind of proportional gain and is currently set to 0.1f.
Any help on how to achieve the goal would be appreciated.
I'm not sure how to write this in Objective-C, but here's how I accomplished the coordinate transformation in vanilla C. I also wanted to rotate the orientation so that +y is north; that translation is reflected in the method below as well.
This method expects a 4-element quaternion in wxyz order and returns the translated quaternion in the same format:
// Quaternion multiplication operator; expects its 4-element arrays in wxyz order.
void quatMult(float *a, float *b, float *ret);

void madgeq_to_openglq(float *fMadgQ, float *fRetQ) {
    float fTmpQ[4];
    // Rotate around the Z axis by 90 degrees:
    float fXYRotationQ[4] = { sqrt(0.5), 0, 0, -1.0 * sqrt(0.5) };

    // Invert the rotation vector components to accommodate handedness issues:
    fTmpQ[0] = fMadgQ[0];
    fTmpQ[1] = fMadgQ[1] * -1.0f;
    fTmpQ[2] = fMadgQ[2];
    fTmpQ[3] = fMadgQ[3] * -1.0f;

    // And then store the translated rotation into fRetQ:
    quatMult((float *) &fTmpQ, (float *) &fXYRotationQ, fRetQ);
}

void quatMult(float *a, float *b, float *ret) {
    ret[0] = (b[0] * a[0]) - (b[1] * a[1]) - (b[2] * a[2]) - (b[3] * a[3]);
    ret[1] = (b[0] * a[1]) + (b[1] * a[0]) + (b[2] * a[3]) - (b[3] * a[2]);
    ret[2] = (b[0] * a[2]) + (b[2] * a[0]) + (b[3] * a[1]) - (b[1] * a[3]);
    ret[3] = (b[0] * a[3]) + (b[3] * a[0]) + (b[1] * a[2]) - (b[2] * a[1]);
    return;
}
Hope that helps!
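For completeness, a usage sketch in C. It assumes the filter exposes its quaternion state as the q0..q3 globals of Madgwick's reference C implementation; if your copy names them differently, adjust accordingly:
#include <stdio.h>

// Quaternion state (w, x, y, z) as declared in Madgwick's reference C implementation.
extern volatile float q0, q1, q2, q3;

void madgeq_to_openglq(float *fMadgQ, float *fRetQ);

void printTranslatedOrientation(void) {
    float madgQ[4] = { q0, q1, q2, q3 }; // wxyz order, as madgeq_to_openglq expects
    float retQ[4];
    madgeq_to_openglq(madgQ, retQ);
    printf("w=%f x=%f y=%f z=%f\n", retQ[0], retQ[1], retQ[2], retQ[3]);
}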
