detect pulse objective c - ios

At the moment I'm creating an iOS app that visualizes the port status of an Arduino. For that, the iPad receives information from the Arduino over a serial cable.
The Arduino sends a packet with its current port status every 100 ms. This status is visualized on the iPad.
The ports are input ports. I've noticed that the device I'm reading is pulsing the ports, so the Arduino reads alternating high and low levels. That creates flickering in the visualization.
My question is how to detect whether the level is really high or the input is just flickering.
The port is high for x seconds, goes low for y seconds, and then repeats. If the port is low for z seconds I need to show the port as low in the visualization; otherwise it is high.
- (void)readBytesAvailable:(UInt32)numBytes {
    int bytesRead = [manager read:rxBuffer Length:numBytes];
    for (int i = 0; i < bytesRead; i++) {
        if (rxBuffer[i] == 48) {        // ASCII '0'
            [self setButtonRed];
        } else if (rxBuffer[i] == 49) { // ASCII '1'
            [self setButtonWhite];
        }
    }
}
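If you prefer to filter on the iPad side instead of (or in addition to) the Arduino, one option is to keep showing "high" while the line is pulsing and only switch the visualization to "low" after the line has stayed low for a minimum hold time, as described above. Below is a minimal C sketch of that idea; the names (kLowHoldTime, displayedLevelIsLow, the now parameter) are illustrative and not part of the original code.
#include <stdbool.h>

#define kLowHoldTime 0.5          /* seconds the line must stay low before we show "low" */

static double lowSince = -1.0;    /* timestamp of the first low sample; -1 = currently high */
static bool   shownLow = false;   /* what the visualization currently shows */

/* Call once per received status byte; 'now' is any monotonic clock in seconds. */
static bool displayedLevelIsLow(bool sampleIsLow, double now) {
    if (!sampleIsLow) {
        lowSince = -1.0;          /* any high sample resets the hold timer */
        shownLow = false;
    } else {
        if (lowSince < 0.0) lowSince = now;
        if (now - lowSince >= kLowHoldTime) shownLow = true;
    }
    return shownLow;
}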
https://www.dropbox.com/s/bhy5lbm8lkdhnoy/3wire.png?dl=0

If I understood correctly, the scenario is this: you want to determine whether the output is alternating its state or is stuck at ground. You didn't specify the period/high-low times or the number of pins, so I'll assume you have four buttons connected to Arduino pins 1, 2, 3 and 5 and use literal constants.
You'll have to set CHECK_PERIOD to a sampling period that lets you sample the input 4-5 times per state, and CHECK_ITERATIONS so that missing a few samples is tolerated.
For instance, if the normal wave is 100 ms high and 100 ms low, I'd set CHECK_PERIOD to 20 and CHECK_ITERATIONS to, let's say, 3 or 4.
#define CHECK_PERIOD     20   // sampling period in ms (example value from above)
#define CHECK_ITERATIONS 4    // consecutive LOW samples before reporting LOW
#define NUM_INPUTS       4

const int inputPins[] = { 1, 2, 3, 5 };
unsigned long previousInputCheck;
unsigned char inputCounter[NUM_INPUTS];
unsigned char inputStates[NUM_INPUTS];

... THEN, INSIDE THE MAIN LOOP ...

if ((millis() - previousInputCheck) >= CHECK_PERIOD)
{
    previousInputCheck += CHECK_PERIOD;
    unsigned char i;
    for (i = 0; i < NUM_INPUTS; i++)
    {
        if (digitalRead(inputPins[i]) == LOW)
        {
            if (inputCounter[i] <= CHECK_ITERATIONS)
                inputCounter[i]++;
            if (inputCounter[i] == CHECK_ITERATIONS)
            {
                inputStates[i] = LOW;
            }
        }
        else
        { // HIGH
            inputCounter[i] = 0;
            inputStates[i] = HIGH;
        }
    }
}

Related

STM32 - Reading I2S to record a .WAV file. Audio choppy, what is causing it?

I'm using an STM32 (STM32F446RE) to receive audio from two INMP441 MEMS microphones in a stereo setup via the I2S protocol and record it into a .WAV file on a micro SD card, using the HAL library.
I wrote the firmware that records audio into a .WAV with FreeRTOS, but the audio files that I record sound like Darth Vader. Here is a screenshot of the audio in Audacity:
If you zoom in you can see a constant noise being inserted in between the real audio data:
I don't know what is causing this.
I have tried increasing the MessageQueue, but that doesn't seem to be the problem; the queue stays at 0 most of the time. I've tried different frame sizes and sampling rates, changing the number of channels, and using only one INMP441, all without success.
Next I'll explain the firmware.
Here is a block diagram of the RTOS architecture I have implemented:
It consists of three tasks. The first one receives a command via UART (with interrupts) that signals to start or stop recording. The second one is simply a state machine that walks through the steps to write a .WAV.
Here is the code for the WriteWavFileTask:
switch(audio_state)
{
    case STATE_START_RECORDING:
        sprintf(filename, "%saud_%03d.wav", SDPath, count++);
        do
        {
            res = f_open(&file_ptr, filename, FA_CREATE_ALWAYS|FA_WRITE);
        }
        while(res != FR_OK);
        res = fwrite_wav_header(&file_ptr, I2S_SAMPLE_FREQUENCY, I2S_FRAME, 2);
        HAL_I2S_Receive_DMA(&hi2s2, aud_buf, READ_SIZE);
        audio_state = STATE_RECORDING;
        break;
    case STATE_RECORDING:
        osDelay(50);
        break;
    case STATE_STOP:
        HAL_I2S_DMAStop(&hi2s2);
        while(osMessageQueueGetCount(AudioQueueHandle)) osDelay(1000);
        filesize = f_size(&file_ptr);
        data_len = filesize - 44;
        total_len = filesize - 8;
        f_lseek(&file_ptr, 4);
        f_write(&file_ptr, (uint8_t*)&total_len, 4, bw);
        f_lseek(&file_ptr, 40);
        f_write(&file_ptr, (uint8_t*)&data_len, 4, bw);
        f_close(&file_ptr);
        audio_state = STATE_IDLE;
        break;
    case STATE_IDLE:
        osThreadSuspend(WAVHandle);
        audio_state = STATE_START_RECORDING;
        break;
    default:
        osDelay(50);
        break;
}
Here are the macros used in the code for readability:
#define I2S_DATA_WORD_LENGTH (24) // industry-standard 24-bit I2S
#define I2S_FRAME (32) // bits per sample
#define READ_SIZE (128) // samples to read from I2S
#define WRITE_SIZE (READ_SIZE*I2S_FRAME/16) // half words to write
#define WRITE_SIZE_BYTES (WRITE_SIZE*2) // bytes to write
#define I2S_SAMPLE_FREQUENCY (16000) // sample frequency
The last task is responsible for processing the buffer received via I2S. Here is the code:
void convert_endianness(uint32_t *array, uint16_t Size) {
    for (int i = 0; i < Size; i++) {
        array[i] = __REV(array[i]);
    }
}

void HAL_I2S_RxCpltCallback(I2S_HandleTypeDef *hi2s)
{
    convert_endianness((uint32_t *)aud_buf, READ_SIZE);
    osMessageQueuePut(AudioQueueHandle, aud_buf, 0L, 0);
    HAL_I2S_Receive_DMA(hi2s, aud_buf, READ_SIZE);
}
void pvrWriteAudioTask(void *argument)
{
    /* USER CODE BEGIN pvrWriteAudioTask */
    static UINT *bw;
    static uint16_t aud_ptr[WRITE_SIZE];
    /* Infinite loop */
    for(;;)
    {
        osMessageQueueGet(AudioQueueHandle, aud_ptr, 0L, osWaitForever);
        res = f_write(&file_ptr, aud_ptr, WRITE_SIZE_BYTES, bw);
    }
    /* USER CODE END pvrWriteAudioTask */
}
This task reads from the queue an array of 256 uint16_t elements containing the raw PCM audio data. f_write takes its Size parameter as the number of bytes to write to the SD card, so 512 bytes. The I2S receives 128 frames (for a 32-bit frame, 128 words).
The following is the configuration for the I2S and clocks:
Any help would be much appreciated!
Solution
As pmacfarlane pointed out, the problem was with the method used for buffering the audio data. The solution consisted of easing the overhead on the ISR and implementing a circular DMA for double buffering. Here is the code:
#define I2S_DATA_WORD_LENGTH (24)            // industry-standard 24-bit I2S
#define I2S_FRAME (32)                       // bits per sample
#define READ_SIZE (128)                      // samples to read from I2S
#define BUFFER_SIZE (READ_SIZE*I2S_FRAME/16) // number of uint16_t elements expected
#define WRITE_SIZE_BYTES (BUFFER_SIZE*2)     // bytes to write
#define I2S_SAMPLE_FREQUENCY (16000)         // sample frequency

uint16_t aud_buf[2*BUFFER_SIZE];             // Double buffering
static volatile int16_t *BufPtr;

void convert_endianness(uint32_t *array, uint16_t Size) {
    for (int i = 0; i < Size; i++) {
        array[i] = __REV(array[i]);
    }
}

void HAL_I2S_RxHalfCpltCallback(I2S_HandleTypeDef *hi2s)
{
    BufPtr = aud_buf;
    osSemaphoreRelease(RxAudioSemHandle);
}

void HAL_I2S_RxCpltCallback(I2S_HandleTypeDef *hi2s)
{
    BufPtr = &aud_buf[BUFFER_SIZE];
    osSemaphoreRelease(RxAudioSemHandle);
}

void pvrWriteAudioTask(void *argument)
{
    /* USER CODE BEGIN pvrWriteAudioTask */
    static UINT *bw;
    /* Infinite loop */
    for(;;)
    {
        osSemaphoreAcquire(RxAudioSemHandle, osWaitForever);
        convert_endianness((uint32_t *)BufPtr, READ_SIZE);
        res = f_write(&file_ptr, BufPtr, WRITE_SIZE_BYTES, bw);
    }
    /* USER CODE END pvrWriteAudioTask */
}
Problems
I think the problem is your method of buffering the audio data - mainly in this function:
void HAL_I2S_RxCpltCallback(I2S_HandleTypeDef *hi2s)
{
    convert_endianness((uint32_t *)aud_buf, READ_SIZE);
    osMessageQueuePut(AudioQueueHandle, aud_buf, 0L, 0);
    HAL_I2S_Receive_DMA(hi2s, aud_buf, READ_SIZE);
}
The main problem is that you are re-using the same buffer each time. You have queued a message to save aud_buf to the SD-card, but you've also instructed the I2S to start DMAing data into that same buffer, before it has been saved. You'll end up saving some kind of mish-mash of "old" data and "new" data.
@Flexz pointed out that the message queue takes a copy of the data, so there is no issue about the I2S writing over the data that is being written to the SD-card. However, taking the copy (in an ISR) adds overhead, and delays the start of the new I2S DMA.
Another problem is that you are doing the endian conversion in this function (that is called from an ISR). This will block any other (lower priority) interrupts from being serviced while this happens, which is a bad thing in an embedded system. You should do the endian conversion in the task that reads from the queue. ISRs should be very short and do the minimum possible work (often just setting a flag, giving a semaphore, or adding something to a queue).
Lastly, while you are doing the endian conversion, what is happening to audio samples? The previous DMA has completed, and you haven't started a new one, so they will just be dropped on the floor.
Possible solution
You probably want to allocate a suitably big buffer, and configure your DMA to work in circular buffer mode. This means that once started, the DMA will continue forever (until you stop it), so you'll never drop any samples. There won't be any gap between one DMA finishing and a new one starting, since you never need to start a new one.
The DMA provides a "half-complete" interrupt, to say when it has filled half the buffer. So start the DMA, and when you get the half-complete interrupt, queue up the first half of the buffer to be saved. When you get the fully-complete interrupt, queue up the second half of the buffer to be saved. Rinse and repeat.
You might want to add some logic to detect if the interrupt happens before the previous save has completed, since the data will be overrun and possibly corrupted. Depending on the speed of the SD-card (and the sample rate), this may or may not be a problem.
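As a rough illustration of that check, here is a hedged C sketch that builds on the semaphore/double-buffer scheme shown in the Solution above (RxAudioSemHandle, BufPtr and aud_buf come from there; overrun_count is an illustrative name, and what you do on an overrun is up to you):
static volatile uint32_t overrun_count = 0;   /* halves that were refilled before being saved */

void HAL_I2S_RxHalfCpltCallback(I2S_HandleTypeDef *hi2s)
{
    /* If the writer task has not yet consumed the previous notification,
       the semaphore is still available and the data it refers to is about
       to be overwritten before it reaches the SD card. */
    if (osSemaphoreGetCount(RxAudioSemHandle) > 0) {
        overrun_count++;
    }
    BufPtr = aud_buf;                         /* first half just finished filling */
    osSemaphoreRelease(RxAudioSemHandle);
}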

iOS Detect aggressive behavior of a vehicle driver

I am working on a driver behavior app and I am using SOMotionDetector (thanks to MIT). It gives the speed and motion type (not moving, walking, running, automotive) of the device. I will use automotive in my case, as I need to detect driver behavior. It detects the motion type based on speed, with thresholds set for walking, running and automotive, or it uses the M7 chip if available. It updates the location approximately every second (the time varies with GPS) in [SOMotionDetector sharedInstance].locationChangedBlock.
To detect aggressive acceleration or braking I check how much the speed changed in the last second. If it increases by more than a certain threshold (I am using kAggressiveSpeedIncrementFactor 8.0f) it is aggressive acceleration; if the speed decreases by that much (the difference is negative) it is aggressive braking. For turns I am playing with the angle between consecutive latitude/longitude points. The following is the code for my logic:
#define kAggressiveSpeedIncrementFactor 8.0f  // speed increased by 8 km/h in the last second
#define kAggressiveAngleIncrementFactor 30.0f // 30 degree turn angle
#define kAggressiveTurnIncrementFactor  5.0f  // minimum speed (km/h) while turning for it to count
SOMotionDetector *motionDetector = [SOMotionDetector sharedInstance];
motionDetector.locationChangedBlock = ^(CLLocation *location) {
    if (motionDetector.motionType == MotionTypeAutomotive) {
        SOLocationManager *locationManager = [SOLocationManager sharedInstance];
        float currSpeed = motionDetector.currentSpeed * 3.6f;
        float lastSpeed = motionDetector.lastSpeed * 3.6f;
        float currAngle = locationManager.currAngle;
        float lastAngle = locationManager.lastAngle;
        self.speedDiff = currSpeed - lastSpeed;
        self.angleDiff = currAngle - lastAngle;
        if (fabs(self.speedDiff) > kAggressiveSpeedIncrementFactor && fabs(self.angleDiff) < kAggressiveAngleIncrementFactor) {
            NSString *msg = @"Aggressive Speed";
            if (self.speedDiff < 0)
                msg = @"Aggressive Break";
            NSLog(@"%@", msg);
        }
        if (fabs(self.angleDiff) > kAggressiveAngleIncrementFactor && currSpeed > kAggressiveTurnIncrementFactor) {
            NSLog(@"aggressive turn");
        }
    }
};
I have created currentSpeed and lastSpeed in the SOMotionDetector class (for my speed difference) and currAngle and lastAngle in SOLocationManager. Please have a look at the code.
Aggressive speed sometimes works perfectly.
My questions are:
Is the approach I am taking right?
For detecting an aggressive turn with the angle (calculated from the current and last latitude/longitude): sometimes, while my vehicle is travelling at a 50 degree heading on a straight road, GPS places the location to the right or left of the road, which gives a big difference in the angle (the path becomes a zig-zag). Any suggestion for this?

could NaN be causing the occasional crash in this core audio iOS app?

My first app synthesised music audio from a sine look-up table using methods deprecated since iOS 6. I have just revised it to address warnings about AudioSession, helped by this blog and the Apple guidelines on the AVFoundation framework. The Audio Session warnings have now been addressed and the app produces audio as it did before. It currently runs under iOS 9.
However, the app occasionally crashes for no apparent reason. I checked out this SO post, but it seems to deal with accessing rather than generating raw audio data, so maybe it is not dealing with a timing issue. I suspect there is a buffering problem, but I need to understand what this might be before I change or fine-tune anything in the code.
I have a deadline to make the revised app available to users, so I'd be most grateful to hear from someone who has dealt with a similar issue.
Here is the issue. The app goes into debug on the simulator reporting:
com.apple.coreaudio.AQClient (8):EXC_BAD_ACCESS (code=1, address=0xffffffff10626000)
In the Debug Navigator, Thread 8 (com.apple.coreaudio.AQClient (8)), it reports:
0 -[Synth fillBuffer:frames:]
1 -[PlayView audioBufferPlayer:fillBuffer:format:]
2 playCallback
This line of code in fillBuffer is highlighted
float sineValue = (1.0f - b)*sine[a] + b*sine[c];
... and so is this line of code in audioBufferPlayer
int packetsWritten = [synth fillBuffer:buffer->mAudioData frames:packetsPerBuffer];
... and playCallBack
[player.delegate audioBufferPlayer:player fillBuffer:inBuffer format:player.audioFormat];
Here is the code for audioBufferPlayer (delegate, essentially the same as in the demo referred to above).
- (void)audioBufferPlayer:(AudioBufferPlayer*)audioBufferPlayer fillBuffer:(AudioQueueBufferRef)buffer format:(AudioStreamBasicDescription)audioFormat
{
    [synthLock lock];
    int packetsPerBuffer = buffer->mAudioDataBytesCapacity / audioFormat.mBytesPerPacket;
    int packetsWritten = [synth fillBuffer:buffer->mAudioData frames:packetsPerBuffer];
    buffer->mAudioDataByteSize = packetsWritten * audioFormat.mBytesPerPacket;
    [synthLock unlock];
}
... (initialised in myViewController)
- (id)init
{
    if ((self = [super init])) {
        // The audio buffer is managed (filled up etc.) within its own thread (Audio Queue thread).
        // Since we are also responding to changes from the GUI, we need a lock so both threads
        // do not attempt to change the same value independently.
        synthLock = [[NSLock alloc] init];
        // Synth and the AudioBufferPlayer must use the same sample rate.
        float sampleRate = 44100.0f;
        // Initialise synth to fill the audio buffer with audio samples.
        synth = [[Synth alloc] initWithSampleRate:sampleRate];
        // Initialise note buttons
        buttons = [[NSMutableArray alloc] init];
        // Initialise the audio buffer.
        player = [[AudioBufferPlayer alloc] initWithSampleRate:sampleRate channels:1 bitsPerChannel:16 packetsPerBuffer:1024];
        player.delegate = self;
        player.gain = 0.9f;
        [[AVAudioSession sharedInstance] setActive:YES error:nil];
    }
    return self;
} // initialisation
... and for playCallback
static void playCallback( void* inUserData, AudioQueueRef inAudioQueue, AudioQueueBufferRef inBuffer)
{
    AudioBufferPlayer* player = (AudioBufferPlayer*) inUserData;
    if (player.playing){
        [player.delegate audioBufferPlayer:player fillBuffer:inBuffer format:player.audioFormat];
        AudioQueueEnqueueBuffer(inAudioQueue, inBuffer, 0, NULL);
    }
}
... and here is the code for fillBuffer where audio is synthesised
- (int)fillBuffer:(void*)buffer frames:(int)frames
{
    SInt16* p = (SInt16*)buffer;
    // Loop through the frames (or "block size"), then consider each sample for each tone.
    for (int f = 0; f < frames; ++f)
    {
        float m = 0.0f; // the mixed value for this frame
        for (int n = 0; n < MAX_TONE_EVENTS; ++n)
        {
            if (tones[n].state == STATE_INACTIVE) // only active tones
                continue;
            // recalculate a 30sec envelope and place in a look-up table
            // Longer notes need to interpolate through the envelope
            int a = (int)tones[n].envStep;  // integer part (like a floored float)
            float b = tones[n].envStep - a; // decimal part (like doing a modulo)
            // c allows us to calculate if we need to wrap around
            int c = a + 1;                  // (like a ceiling of integer part)
            if (c >= envLength) c = a;      // don't wrap around
            /////////////// LOOK UP ENVELOPE TABLE /////////////////
            // uses table look-up with interpolation for both level and pitch envelopes
            // 'b' is a value interpolated between 2 successive samples 'a' and 'c'
            // first, read values for the level envelope
            float envValue = (1.0f - b)*tones[n].levelEnvelope[a] + b*tones[n].levelEnvelope[c];
            // then the pitch envelope
            float pitchFactorValue = (1.0f - b)*tones[n].pitchEnvelope[a] + b*tones[n].pitchEnvelope[c];
            // Advance envelope pointer one step
            tones[n].envStep += tones[n].envDelta;
            // Turn note off at the end of the envelope.
            if (((int)tones[n].envStep) >= envLength){
                tones[n].state = STATE_INACTIVE;
                continue;
            }
            // Precalculated sine look-up table
            a = (int)tones[n].phase;              // integer part
            b = tones[n].phase - a;               // decimal part
            c = a + 1;
            if (c >= sineLength) c -= sineLength; // wrap around
            ///////////////// LOOK UP OF SINE TABLE ///////////////////
            float sineValue = (1.0f - b)*sine[a] + b*sine[c];
            // Wrap round when we get to the end of the sine look-up table.
            tones[n].phase += (tones[n].frequency * pitchFactorValue); // calculate frequency for each point in the pitch envelope
            if (((int)tones[n].phase) >= sineLength)
                tones[n].phase -= sineLength;
            ////////////////// RAMP NOTE OFF IF IT HAS BEEN UNPRESSED
            if (tones[n].state == STATE_UNPRESSED) {
                tones[n].gain -= 0.0001;
                if ( tones[n].gain <= 0 ) {
                    tones[n].state = STATE_INACTIVE;
                }
            }
            //////////////// FINAL SAMPLE VALUE ///////////////////
            float s = sineValue * envValue * gain * tones[n].gain;
            // Clip the signal, if needed.
            if (s > 1.0f) s = 1.0f;
            else if (s < -1.0f) s = -1.0f;
            // Add the sample to the out-going signal
            m += s;
        }
        // Write the sample mix to the buffer as a 16-bit word.
        p[f] = (SInt16)(m * 0x7FFF);
    }
    return frames;
}
I'm not sure whether it is a red herring, but I came across NaN in several debug registers. It appears to happen while calculating the phase increment for the sine lookup in fillBuffer (see above). That calculation is done for up to a dozen partials per sample at a sampling rate of 44.1 kHz, and it worked in iOS 4 on an iPhone 4. I'm running on the iOS 9 simulator. The only changes I made are described in this post!
My NaN problem turned out to have nothing directly to do with Core Audio. It was caused by an edge condition introduced by changes in another area of my code. The real problem was a division by zero attempted while calculating the duration of the sound envelope in realtime.
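A minimal C sketch of that kind of guard (a clamp before the division) might look like the following; envLength mirrors the name used in fillBuffer above, while durationSeconds, sampleRate and computeEnvDelta are illustrative names rather than the actual code:
/* Steps through the envelope look-up table per output sample.
   A zero or negative duration would otherwise make the result inf/NaN. */
float computeEnvDelta(float durationSeconds, int envLength, float sampleRate)
{
    if (durationSeconds <= 0.0f) {
        durationSeconds = 0.001f;   /* clamp to a minimal audible duration */
    }
    return (float)envLength / (durationSeconds * sampleRate);
}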
However, in trying to identify the cause of that problem, I am confident my pre-iOS 7 Audio Session has been replaced by a working setup based on AVFoundation. Thanks goes to the source of my initial code Matthijs Hollemans and also to Mario Diana whose blog explained the changes needed.
At first, the sound levels on my iPhone were significantly less than the sound levels on the Simulator, a problem addressed here by foundry. I found it necessary to include these improvements by replacing Mario's
- (BOOL)setUpAudioSession
with foundry's
- (void)configureAVAudioSession
Hopefully this might help someone else.

Detecting when someone begins walking using Core Motion and CMAccelerometer Data

I'm trying to detect three actions: when a user begins walking, jogging, or running. I then want to know when they stop. I've been successful in detecting when someone is walking, jogging, or running with the following code:
- (void)update:(CMAccelerometerData *)accelData {
    [(id) self setAcceleration:accelData.acceleration];
    NSTimeInterval secondsSinceLastUpdate = -([self.lastUpdateTime timeIntervalSinceNow]);
    if (fabs(_acceleration.x) >= 0.10000) {
        NSLog(@"walking: %f", _acceleration.x);
    }
    else if (fabs(_acceleration.x) > 2.0) {
        NSLog(@"jogging: %f", _acceleration.x);
    }
    else if (fabs(_acceleration.x) > 4.0) {
        NSLog(@"sprinting: %f", _acceleration.x);
    }
}
The problem I run into is two-fold:
1) update is called multiple times for every motion, probably because it checks so frequently that when the user begins walking (i.e. _acceleration.x >= .1000) it is still >= .1000 when update is called again.
Example Log:
2014-02-22 12:14:20.728 myApp[5039:60b] walking: 1.029846
2014-02-22 12:14:20.748 myApp[5039:60b] walking: 1.071777
2014-02-22 12:14:20.768 myApp[5039:60b] walking: 1.067749
2) I'm having difficulty figuring out how to detect when the user has stopped. Does anybody have advice on how to implement "stop detection"?
According to your logs, accelerometerUpdateInterval is about 0.02. Updates could be less frequent if you change that property of CMMotionManager.
Checking only x-acceleration isn't very accurate. I can put a device on a table in such a way (say, on its left edge) that x-acceleration equals 1, or tilt it a bit. That would put the program in walking mode (x > 0.1) instead of idle.
Here's a link to the publication ADVANCED PEDOMETER FOR SMARTPHONE-BASED ACTIVITY TRACKING. They track changes in the direction of the acceleration vector, using the cosine of the angle between two consecutive acceleration vector readings.
Obviously, without any motion, the angle between two vectors is close to zero and cos(0) = 1. During other activities d < 1. To filter out noise, they use a weighted moving average of the last 10 values of d.
After implementing this, your values will look like this (red - walking, blue - running):
Now you can set a threshold for each activity to separate them. Note that the average step frequency is 2-4 Hz. You should expect the current value to be over the threshold at least a few times per second in order to identify the action.
Other helpful publications:
ERSP: An Energy-efficient Real-time Smartphone Pedometer (analyzes peaks and troughs)
A Gyroscopic Data based Pedometer Algorithm (threshold detection of gyro readings)
UPDATE
_acceleration.x, _acceleration.y, _acceleration.z are the coordinates of the same acceleration vector. You use each of these coordinates in the formula for d. In order to calculate d you also need to store the acceleration vector from the previous update (the one with index i-1 in the formula).
The WMA just takes the last 10 d values into account with different weights. The most recent d values have more weight and therefore more impact on the resulting value. You need to store the 9 previous d values in order to calculate the current one. You should compare the WMA value to the corresponding threshold.
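A minimal C sketch of the two steps described above (the cosine metric d between consecutive acceleration vectors, and a 10-point weighted moving average); all names here are illustrative rather than taken from the paper or from CMMotionManager:
#include <math.h>

#define WMA_LEN 10

static double d_history[WMA_LEN];        /* most recent value at index 0 */

/* d = cosine of the angle between the current (ax, ay, az) and
   previous (px, py, pz) acceleration vectors; close to 1 when still. */
double compute_d(double ax, double ay, double az,
                 double px, double py, double pz)
{
    double dot  = ax * px + ay * py + az * pz;
    double mags = sqrt(ax * ax + ay * ay + az * az) * sqrt(px * px + py * py + pz * pz);
    return (mags > 0.0) ? dot / mags : 1.0;
}

/* Weighted moving average of the last 10 d values, newer samples weighted more. */
double weighted_moving_average(double newest_d)
{
    for (int i = WMA_LEN - 1; i > 0; i--)   /* shift history, dropping the oldest */
        d_history[i] = d_history[i - 1];
    d_history[0] = newest_d;

    double num = 0.0, den = 0.0;
    for (int i = 0; i < WMA_LEN; i++) {
        double w = (double)(WMA_LEN - i);   /* weights 10, 9, ..., 1 */
        num += w * d_history[i];
        den += w;
    }
    return num / den;
}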
If you are using iOS 7 and an iPhone 5S, I suggest you look into CMMotionActivityManager, which is available on the iPhone 5S because of the M7 chip. It is also available on a couple of other devices:
M7 chip
Here is a code snippet I put together to test when I was learning about it.
#import <CoreMotion/CoreMotion.h>

@property (nonatomic, strong) CMMotionActivityManager *motionActivityManager;

- (void)inSomeMethod
{
    self.motionActivityManager = [[CMMotionActivityManager alloc] init];
    // Register for Core Motion updates
    [self.motionActivityManager startActivityUpdatesToQueue:[NSOperationQueue mainQueue] withHandler:^(CMMotionActivity *activity)
    {
        NSLog(@"Got a core motion update");
        NSLog(@"Current activity date is %f", activity.timestamp);
        NSLog(@"Current activity confidence from a scale of 0 to 2 - 2 being best - is: %ld", (long)activity.confidence);
        NSLog(@"Current activity type is unknown: %i", activity.unknown);
        NSLog(@"Current activity type is stationary: %i", activity.stationary);
        NSLog(@"Current activity type is walking: %i", activity.walking);
        NSLog(@"Current activity type is running: %i", activity.running);
        NSLog(@"Current activity type is automotive: %i", activity.automotive);
    }];
}
I tested it and it seems to be pretty accurate. The only drawback is that it will not give you a confirmation as soon as you start an action (walking for example). Some black box algorithm waits to ensure that you are really walking or running. But then you know you have a confirmed action.
This beats messing around with the accelerometer. Apple took care of that detail!
You can use this simple library to detect whether the user is walking, running, in a vehicle or not moving. It works on all iOS devices and does not need the M7 chip.
https://github.com/SocialObjects-Software/SOMotionDetector
In the repo you can find a demo project.
I'm following this paper (PDF via RG) in my indoor navigation project to determine user dynamics (static, slow walking, fast walking) using accelerometer data alone, in order to assist location determination.
Here is the algorithm proposed in the project:
And here is my implementation in Swift 2.0:
import CoreMotion

let motionManager = CMMotionManager()
motionManager.accelerometerUpdateInterval = 0.1

motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue.mainQueue()) { (accelerometerData: CMAccelerometerData?, error: NSError?) -> Void in
    if (error != nil) {
        print(error)
    } else {
        self.estimatePedestrianStatus((accelerometerData?.acceleration)!)
    }
}
After all of the classic Swifty iOS code to initiate CoreMotion, here is the method crunching the numbers and determining the state:
func estimatePedestrianStatus(acceleration: CMAcceleration) {
    // Obtain the Euclidean norm of the accelerometer data
    accelerometerDataInEuclidianNorm = sqrt((acceleration.x.roundTo(roundingPrecision) * acceleration.x.roundTo(roundingPrecision)) + (acceleration.y.roundTo(roundingPrecision) * acceleration.y.roundTo(roundingPrecision)) + (acceleration.z.roundTo(roundingPrecision) * acceleration.z.roundTo(roundingPrecision)))

    // Significant figure setting
    accelerometerDataInEuclidianNorm = accelerometerDataInEuclidianNorm.roundTo(roundingPrecision)

    // record 10 values
    // meaning values in a second
    // accUpdateInterval(0.1s) * 10 = 1s
    while accelerometerDataCount < 1 {
        accelerometerDataCount += 0.1
        accelerometerDataInASecond.append(accelerometerDataInEuclidianNorm)
        totalAcceleration += accelerometerDataInEuclidianNorm
        break // required since we want to obtain data every acc cycle
    }

    // when acc values recorded
    // interpret them
    if accelerometerDataCount >= 1 {
        accelerometerDataCount = 0 // reset for the next round

        // Calculating the variance of the Euclidean norm of the accelerometer data
        let accelerationMean = (totalAcceleration / 10).roundTo(roundingPrecision)
        var total: Double = 0.0

        for data in accelerometerDataInASecond {
            total += ((data - accelerationMean) * (data - accelerationMean)).roundTo(roundingPrecision)
        }
        total = total.roundTo(roundingPrecision)

        let result = (total / 10).roundTo(roundingPrecision)
        print("Result: \(result)")

        if (result < staticThreshold) {
            pedestrianStatus = "Static"
        } else if ((staticThreshold < result) && (result <= slowWalkingThreshold)) {
            pedestrianStatus = "Slow Walking"
        } else if (slowWalkingThreshold < result) {
            pedestrianStatus = "Fast Walking"
        }
        print("Pedestrian Status: \(pedestrianStatus)\n---\n\n")

        // reset for the next round
        accelerometerDataInASecond = []
        totalAcceleration = 0.0
    }
}
Also I've used the following extension to simplify significant figure setting:
extension Double {
    func roundTo(precision: Int) -> Double {
        let divisor = pow(10.0, Double(precision))
        return round(self * divisor) / divisor
    }
}
With raw values from CoreMotion, the algorithm was haywire.
Hope this helps someone.
EDIT (4/3/16)
I forgot to provide my roundingPrecision value. I defined it as 3. It's just plain mathematics that that many significant figures is decent enough; if you like, you can use more.
Also, one more thing to mention: at the moment this algorithm requires the iPhone to be in your hand while walking. See the picture below. Sorry, this was the only one I could find.
My GitHub Repo hosting Pedestrian Status
You can use Apple's latest machine learning framework Core ML to find out the user's activity. First you need to collect labeled data and train the classifier. Then you can use this model in your app to classify user activity. You may follow this series if you are interested in Core ML activity classification.
https://medium.com/@tyler.hutcherson/activity-classification-with-create-ml-coreml3-and-skafos-part-1-8f130b5701f6

Distance between 2 Arduinos using RF links

I currently have a setup where I send a char using a 434 MHz Tx on an Uno to a Mega with an Rx. The Mega counts how many times it receives the char, and if the count falls below a certain number it triggers an alarm. Is this a viable way to measure the distance between two microcontrollers indoors, or is there a better way?
Transmitter (Uno)
#include <SoftwareSerial.h>

int rxPin = 2; // Goes to the receiver pin
int txPin = 5; // Make sure it is set to pin 5 going to input of receiver

SoftwareSerial txSerial = SoftwareSerial(rxPin, txPin);
SoftwareSerial rxSerial = SoftwareSerial(txPin, rxPin);

char sendChar = 'H';

void setup() {
    pinMode(rxPin, INPUT);
    pinMode(txPin, OUTPUT);
    txSerial.begin(2400);
    rxSerial.begin(2400);
    Serial.begin(2400); // needed for the Serial.print() in loop()
}

void loop() {
    txSerial.println(sendChar);
    Serial.print(sendChar);
}
Receiver
#include <SoftwareSerial.h>

//Make sure it is set to pin 5 going to the data input of the transmitter
int rxPin = 5;
int txPin = 3; // Don't need to make connections
int LED = 13;
int BUZZ = 9;
int t = 0;
char incomingChar = 0;
int counter = 0;

SoftwareSerial rxSerial = SoftwareSerial(rxPin, txPin);

void setup() {
    pinMode(rxPin, INPUT);  // initialize rxPin as input
    pinMode(BUZZ, OUTPUT);  // initialize buzzer for output
    pinMode(LED, OUTPUT);   // initialize LED for output
    rxSerial.begin(2400);   // set baud rate for the RF link
    Serial.begin(2400);     // see above
}

void loop() {
    for (int i = 0; i < 200; i++) {
        incomingChar = rxSerial.read(); // read incoming char from the receiver
        if (incomingChar == 'H') {
            counter++;                  // count every 'H' we receive
        }
        delay(5);                       // 5 ms between reads (about 1 s per 200 reads)
    }
    Serial.println(incomingChar);
    Serial.println(counter);            // print how many chars we received
    if (counter < 55) {
        // fewer than 55 received: assume we are out of range and trigger the alarm
        Serial.println("out of range");
        tone(BUZZ, 5000, 500);
        digitalWrite(LED, HIGH);
    }
    else {
        noTone(BUZZ);
        digitalWrite(LED, LOW);
        // otherwise we are within range; turn the alarm off
        Serial.println("in range");
    }
    counter = 0;
    incomingChar = 0;
}
In theory you could achieve distance measuring by making the Uno send a message which the Mega would echo back. That would give the Uno a round-trip time for message propagation between the Arduinos. You would have to approximate the processing delays. After that it is basic physics; it is essentially how radar works. The actual delay would be something like
t_roundtrip = t_uno_send + 2*t_propagation + t_mega_receive + t_mega_send + t_uno_receive
I am guessing the distance you are trying to achieve is on the order of meters. The required resolution is going to be an issue, because s = v*t => t = s/v, where s is the distance between your Arduinos and v = c in the case of radio waves. Since the transmission delays should stay constant, you would basically have to be able to measure differences on the order of 1/c-second intervals per metre. I am not very familiar with Arduinos, so I do not know if they are capable of this kind of measurement.
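To put rough numbers on it (assuming a range of a few metres and a 16 MHz AVR-based board): for s = 3 m, t = s/c = 3 / (3×10^8) s = 10 ns each way, while a single clock cycle at 16 MHz already lasts 62.5 ns, so the round-trip measurement would need sub-clock-cycle timing resolution.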
I would suggest you use an ultrasonic range finder like the Maxbotix HRLV-EZ4 sold by Sparkfun.
It is within your price range and it should be able to measure distances up to 5m/195 inches with 1mm resolution.
It is actually possible to do it; I have seen it done with other microcontrollers. With an Arduino you would have to solve the equations, fit them into Arduino code, and take a lot of measurements to account for discrepancies in the communication itself. Do not forget about atmospheric attenuation, which needs to be known and included in the equations; humidity can deflect and attenuate electromagnetic waves.
