How to work with accelerometer data from the Apple Watch?

Via the code below I am getting accelerometer data; now I want to work with it to track the user's movement, specifically speed. Looking at Core Motion code for the iPhone, it uses a motionManager object on which you can set accelerometerUpdateInterval and read values such as .acceleration.x. How can I work with the raw data I'm given back so that I can determine, for example, how fast a person is moving or how fast their arm is swinging?
// Record the data
let recorder = CMSensorRecorder()
if CMSensorRecorder.isAccelerometerRecordingAvailable() {
    print("Accelerometer available")
    recorder.recordAccelerometer(forDuration: 20 * 60) // Record for 20 minutes
}

// Read the data back
if CMSensorRecorder.isAccelerometerRecordingAvailable(),
   let accelerometerData = recorder.accelerometerData(from: startDate, to: endDate) {
    for (index, data) in accelerometerData.enumerated() {
        print(index, data)
    }
}
Prints:
0 388, 208409.082611, 529770182.607276, (0.038574, -0.762207, -0.652832)
1 388, 208409.102722, 529770182.627387, (0.027588, -0.763184, -0.660889)
2 388, 208409.122863, 529770182.647528, (0.027100, -0.763184, -0.661865)
3 388, 208409.142974, 529770182.667639, (0.030029, -0.756836, -0.661865)
4 388, 208409.163116, 529770182.687781, (0.026611, -0.764648, -0.665039)
Edit: I found this library, which looks like it would be perfect, but it hasn't been updated in 3 years... Is there anything similar that is still maintained? https://github.com/MHaroonBaig/MotionKit
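For offline processing of the recorded samples, one option is to flatten the CMSensorDataList into plain values first. CMSensorDataList only exposes NSFastEnumeration, so the small Sequence extension below (a common workaround, not Apple-provided API) makes it usable in a for-in loop; this is only a sketch:

import CoreMotion

extension CMSensorDataList: Sequence {
    public func makeIterator() -> NSFastEnumerationIterator {
        return NSFastEnumerationIterator(self)
    }
}

func flattenedSamples(from list: CMSensorDataList) -> [(date: Date, x: Double, y: Double, z: Double)] {
    var samples: [(date: Date, x: Double, y: Double, z: Double)] = []
    for element in list {
        // Each element of a recorded list is a CMRecordedAccelerometerData sample
        guard let sample = element as? CMRecordedAccelerometerData else { continue }
        samples.append((sample.startDate,
                        sample.acceleration.x,
                        sample.acceleration.y,
                        sample.acceleration.z))
    }
    return samples
}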

I found that using a CMMotionManager on the Watch works just as well as it does on the iPhone. This way you can call startAccelerometerUpdates in awake(withContext:) and receive real-time feedback for the X, Y and Z axes, which gives you a better grasp of the data:
import CoreMotion

var motionManager = CMMotionManager()

override func awake(withContext context: Any?) {
    super.awake(withContext: context)
    motionManager.accelerometerUpdateInterval = 0.2
    motionManager.startAccelerometerUpdates(to: OperationQueue.current!) { (data, error) in
        if let myData = data {
            print("x: \(myData.acceleration.x) y: \(myData.acceleration.y) z: \(myData.acceleration.z)")
            if myData.acceleration.x > 1.5 && myData.acceleration.y > 1.5 {
                // React to a strong movement here
            }
        }
    }
}
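To go from raw samples to something like arm-swing speed, one rough option is to numerically integrate CMDeviceMotion's userAcceleration over time. This is only a sketch: accelerometer integration drifts quickly, so it is only meaningful over short windows (for example a single swing), and the names below are illustrative rather than a production approach.

import CoreMotion

let motionManager = CMMotionManager()
var velocity = (x: 0.0, y: 0.0, z: 0.0)
var lastTimestamp: TimeInterval?

func startSpeedEstimation() {
    motionManager.deviceMotionUpdateInterval = 1.0 / 50.0
    motionManager.startDeviceMotionUpdates(to: OperationQueue.main) { motion, _ in
        guard let motion = motion else { return }
        defer { lastTimestamp = motion.timestamp }
        guard let last = lastTimestamp else { return }

        let dt = motion.timestamp - last
        let g = 9.81  // userAcceleration is in g's; convert to m/s^2
        velocity.x += motion.userAcceleration.x * g * dt
        velocity.y += motion.userAcceleration.y * g * dt
        velocity.z += motion.userAcceleration.z * g * dt

        // Magnitude of the integrated velocity vector; reset it between swings
        let speed = sqrt(velocity.x * velocity.x +
                         velocity.y * velocity.y +
                         velocity.z * velocity.z)
        print(String(format: "approx speed: %.2f m/s", speed))
    }
}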

Related

iOS fast image difference comparison

I'm looking for a fast way to compare two frames of video and decide if a lot has changed between them. This will be used to decide whether I should send a request to an image recognition service over REST, so that I don't keep sending frames while the results are unlikely to differ. The Vuforia SDK does something similar. I'm starting with a frame buffer from ARKit, scaled to 640x480 and converted to an RGB888 vBuffer_image. It could compare just a few points, but it needs to reliably decide whether the difference is significant.
I started by calculating the difference between a few points using vDSP functions, but this has a disadvantage: if I move the camera even very slightly to the left or right, the same points cover different portions of the image, and the calculated difference is high even if nothing has really changed.
I was thinking about using histograms, but I haven't tested this approach yet.
What would be the best solution for this? It needs to be fast; it can compare just a smaller version of the image, etc.
I have tested another approach using VNFeaturePrintObservation from Vision. This works a lot better, but I'm afraid it might be more CPU-demanding; I need to test it on some older devices. Anyway, this is the part of the code that works nicely. If someone can suggest a better approach to test, please let me know:
private var lastScanningImageFingerprint: VNFeaturePrintObservation?

// Returns true if these are different enough
private func compareScanningImages(current: VNFeaturePrintObservation, last: VNFeaturePrintObservation?) -> Bool {
    guard let last = last else { return true }
    var distance = Float(0)
    try! last.computeDistance(&distance, to: current)
    print(distance)
    return distance > 10
}

// After scanning is done, subclass should prepare suggestedTargets array.
private func performScanningIfNeeded(_ sender: Timer) {
    guard !scanningInProgress else { return } // Wait for previous scanning to finish
    guard let vImageBuffer = delegate?.currentFrameScaledImage else { return }
    guard let image = CGImage.create(from: vImageBuffer) else { return } // CGImage.create(from:) is a helper defined elsewhere in the project

    func featureprintObservationForImage(image: CGImage) -> VNFeaturePrintObservation? {
        let requestHandler = VNImageRequestHandler(cgImage: image, options: [:])
        let request = VNGenerateImageFeaturePrintRequest()
        do {
            try requestHandler.perform([request])
            return request.results?.first as? VNFeaturePrintObservation
        } catch {
            print("Vision error: \(error)")
            return nil
        }
    }

    guard let imageFingerprint = featureprintObservationForImage(image: image) else { return }
    guard compareScanningImages(current: imageFingerprint, last: lastScanningImageFingerprint) else { return }
    print("SCANN \(Date())")

    // Reuse the fingerprint computed above instead of generating it a second time
    lastScanningImageFingerprint = imageFingerprint

    executeScanning(on: image) { [weak self] in
        self?.scanningInProgress = false
    }
}
Tested on an older iPhone: as expected, this causes some frame drops on the camera preview, so I need a faster algorithm.
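Since histograms were mentioned above as a possible cheaper direction, here is a minimal sketch of comparing two frames by their 256-bin luma histograms with vImage. It assumes you already have both frames as Planar8 (grayscale) vImage_Buffers, and the distance threshold would need tuning:

import Accelerate

// L1 distance between the normalised histograms of two Planar8 buffers.
// 0 means identical brightness distributions; larger values mean more change.
func histogramDistance(_ bufferA: vImage_Buffer, _ bufferB: vImage_Buffer) -> Float {
    var a = bufferA
    var b = bufferB
    var histA = [vImagePixelCount](repeating: 0, count: 256)
    var histB = [vImagePixelCount](repeating: 0, count: 256)
    _ = vImageHistogramCalculation_Planar8(&a, &histA, vImage_Flags(kvImageNoFlags))
    _ = vImageHistogramCalculation_Planar8(&b, &histB, vImage_Flags(kvImageNoFlags))

    let totalA = Float(a.width * a.height)
    let totalB = Float(b.width * b.height)
    var distance: Float = 0
    for i in 0..<256 {
        distance += abs(Float(histA[i]) / totalA - Float(histB[i]) / totalB)
    }
    return distance
}

Because a histogram ignores where pixels are, it is largely insensitive to the small camera shifts that made the point-difference approach unreliable, and it is much cheaper than a Vision feature print.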

Is heartRate data from an HKWorkoutSession a moving average?

I'm using an HKWorkoutSession to get heart rate data every 5 seconds in workoutBuilder didCollectDataOf. The heart rates are reported as "beats/minute". The question is, are they calculated as moving averages, or just over the previous time interval? (I couldn't find this specified in the documentation anywhere.)
For example, if you get the following heart rates:
t=0: 69 beats/min
t=5: 71 beats/min
t=10: 72 beats/min
...
Is each value an average of beat intervals over the past 60 seconds, or is it just extrapolated from the last 5 seconds of data?
Here's what didCollectDataOf looks like:
func workoutBuilder(_ workoutBuilder: HKLiveWorkoutBuilder, didCollectDataOf collectedTypes: Set<HKSampleType>) {
    for type in collectedTypes {
        guard let hrType = HKQuantityType.quantityType(forIdentifier: .heartRate) else {
            return
        }
        if collectedTypes.contains(hrType) {
            if let hrQuantity = workoutBuilder.statistics(for: hrType)?.mostRecentQuantity() {
                let hrUnit = HKUnit(from: "count/min")
                let hr = Int(hrQuantity.doubleValue(for: hrUnit))
                debugPrint("\(Date()) HR: \(hr)")
            }
        }
    }
}
In my experience, it's not a moving average.
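One way to check this empirically is to query the raw heart-rate samples recorded during the workout and look at the interval each sample covers. A sketch, assuming you already have an HKHealthStore and the workout's date range at hand:

import HealthKit

// Print each heart-rate sample with the time span it covers. Short, non-overlapping
// spans would suggest the values are not 60-second moving averages.
func inspectHeartRateSamples(healthStore: HKHealthStore, start: Date, end: Date) {
    guard let hrType = HKQuantityType.quantityType(forIdentifier: .heartRate) else { return }
    let predicate = HKQuery.predicateForSamples(withStart: start, end: end, options: [])
    let query = HKSampleQuery(sampleType: hrType,
                              predicate: predicate,
                              limit: HKObjectQueryNoLimit,
                              sortDescriptors: nil) { _, samples, _ in
        for case let sample as HKQuantitySample in samples ?? [] {
            let bpm = sample.quantity.doubleValue(for: HKUnit(from: "count/min"))
            print(sample.startDate, sample.endDate, bpm)
        }
    }
    healthStore.execute(query)
}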

iOS: Swift: Combine Accelerometer and Attitude data for comparable results

Idea
My idea is to write a small application for my iOS device that starts recording a motion when a UIButton is touched and saves the accelerometer data for that motion. Afterwards, I can store this data and use it in the finished application to detect the same motion again.
So what I'm looking for is a way to compare two NSMutableArrays, each holding a set of motion data, and check whether the current array represents the same motion as the recorded one.
Problems
To realize this idea, I ran into three problems that I can't solve myself:
How to compare two NSMutableArrays and get a similarity score, e.g. a percentage such as 97.32% equivalent (see the sketch after the Update at the end of this question).
What if the user performs the motion at different speeds, e.g. slowly or quickly?
The last problem: how to handle different device orientations? Is there a way to normalize the accelerometer data to a neutral reference frame in any orientation? I think I have an approach for this last problem, but don't know how to put it into code: we have the attitude data for pitch, roll and yaw; maybe we can use these values to compute neutral accelerometer data?
Code
At the moment I only have the code to get the accelerometer and attitude data, both from the CMMotionManager. I've spent many hours playing with my code and searching on Google, but can't find the right approach...
import UIKit
import CoreMotion

class FirstViewController: UIViewController {

    var motionManager = CMMotionManager()
    var accelerometerData = NSMutableArray()

    var x: Float = 0.0
    var y: Float = 0.0
    var z: Float = 0.0

    var roll: Float = 0.0
    var pitch: Float = 0.0
    var yaw: Float = 0.0

    override func viewDidLoad() {
        super.viewDidLoad()
        motionManager.accelerometerUpdateInterval = 1
        motionManager.deviceMotionUpdateInterval = 1
    }

    @IBAction func startRecording() {
        accelerometerData = NSMutableArray()
        motionManager.startDeviceMotionUpdates(using: CMAttitudeReferenceFrame.xMagneticNorthZVertical,
                                               to: OperationQueue.current!) { (deviceMotion, error) in
            guard let deviceMotion = deviceMotion else { return }

            let attitude = deviceMotion.attitude
            self.roll = Float(self.degrees(radians: attitude.roll))
            self.pitch = Float(self.degrees(radians: attitude.pitch))
            self.yaw = Float(self.degrees(radians: attitude.yaw))

            self.x = Float(deviceMotion.userAcceleration.x)
            self.y = Float(deviceMotion.userAcceleration.y)
            self.z = Float(deviceMotion.userAcceleration.z)
        }
    }

    // Helper referenced above: converts radians to degrees
    func degrees(radians: Double) -> Double {
        return radians * 180 / .pi
    }
}
Update
I will update this question when I know more.
Problem 2: Solution:
With the CMAttitudeReferenceFrame.xMagneticNorthZVertical reference frame passed to the motionManager.startDeviceMotionUpdates method, you can normalize all sensor data to a defined orientation. You can access the accelerometer data there too. I've updated the code above accordingly.
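For problem 1, a very simple starting point is to resample both recordings to the same length and turn the mean per-sample Euclidean distance into a rough similarity percentage. This is only a sketch with an illustrative MotionSample type and an arbitrary scale factor; it deliberately ignores problem 2 (different speeds), for which something like dynamic time warping would be needed:

import Foundation

struct MotionSample {
    var x, y, z: Double
}

// Nearest-neighbour resampling so both recordings have the same number of samples.
func resample(_ samples: [MotionSample], to count: Int) -> [MotionSample] {
    guard samples.count > 1, count > 1 else { return samples }
    return (0..<count).map { i in
        let index = Int(Double(i) / Double(count - 1) * Double(samples.count - 1))
        return samples[index]
    }
}

// Returns a rough similarity in percent: 100 means identical, 0 means very different.
func similarity(_ a: [MotionSample], _ b: [MotionSample], scale: Double = 2.0) -> Double {
    let n = 100
    let ra = resample(a, to: n)
    let rb = resample(b, to: n)
    let meanDistance = zip(ra, rb).map { s1, s2 -> Double in
        let dx = s1.x - s2.x, dy = s1.y - s2.y, dz = s1.z - s2.z
        return sqrt(dx * dx + dy * dy + dz * dz)
    }.reduce(0, +) / Double(n)
    // Map a distance of 0...scale onto 100%...0%; `scale` is a tuning guess.
    return max(0, 1 - meanDistance / scale) * 100
}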

Perform Audio Analysis with FFT

I've been stuck on this problem for days now and have looked through nearly every related Stack Overflow page. Through this, I now have a much greater understanding of what an FFT is and how it works. Despite this, I'm having extreme difficulty implementing it in my application.
In short, what I am trying to do is make a spectrum visualizer for my application (similar to this). From what I've gathered, I'm pretty sure I need to use the magnitudes of the sound as the heights of my bars. With all this in mind, I am currently able to analyze an entire .caf file at once. To do this, I am using the following code:
let audioFile = try! AVAudioFile(forReading: soundURL!)
let frameCount = UInt32(audioFile.length)
let buffer = AVAudioPCMBuffer(PCMFormat: audioFile.processingFormat, frameCapacity: frameCount)
do {
    try audioFile.readIntoBuffer(buffer, frameCount: frameCount)
} catch {
    // Read errors are ignored here for brevity
}
let log2n = UInt(round(log2(Double(frameCount))))
let bufferSize = Int(1 << log2n)
let fftSetup = vDSP_create_fftsetup(log2n, Int32(kFFTRadix2))
var realp = [Float](count: bufferSize/2, repeatedValue: 0)
var imagp = [Float](count: bufferSize/2, repeatedValue: 0)
var output = DSPSplitComplex(realp: &realp, imagp: &imagp)
vDSP_ctoz(UnsafePointer<DSPComplex>(buffer.floatChannelData.memory), 2, &output, 1, UInt(bufferSize / 2))
vDSP_fft_zrip(fftSetup, &output, 1, log2n, Int32(FFT_FORWARD))
var fft = [Float](count:Int(bufferSize / 2), repeatedValue:0.0)
let bufferOver2: vDSP_Length = vDSP_Length(bufferSize / 2)
vDSP_zvmags(&output, 1, &fft, 1, bufferOver2)
This works fine and outputs a long array of data. However, the problem with this code is it analyzes the entire audio file at once. What I need is to be analyzing the audio file as it is playing, very similar to this video: Spectrum visualizer.
So I guess my question is this: How do you perform FFT analysis while the audio is playing?
Also, on top of this, how do I go about converting the output of an FFT analysis to actual heights for a bar? One of the outputs I received for an audio file using the FFT analysis code from above was this: http://pastebin.com/RBLTuGx7. The only reason for the pastebin is due to how long it is. I'm assuming I average all these numbers together and use those values instead? (Just for reference, I got that array by printing out the 'fft' variable in the code above)
I've attempted reading through the EZAudio code, however I am unable to find how they read in samples of audio in real time. Any help is greatly appreciated.
Here's how it is done in AudioKit, using EZAudio's FFT tools:
Create a class for your FFT that will hold the data:
@objc public class AKFFT: NSObject, EZAudioFFTDelegate {

    internal let bufferSize: UInt32 = 512
    internal var fft: EZAudioFFT?

    /// Array of FFT data
    public var fftData = [Double](count: 512, repeatedValue: 0.0)

    ...
}
Initialize the class and setup the FFT. Also install the tap on the appropriate node.
public init(_ input: AKNode) {
    super.init()
    fft = EZAudioFFT.fftWithMaximumBufferSize(vDSP_Length(bufferSize), sampleRate: 44100.0, delegate: self)
    input.avAudioNode.installTapOnBus(0, bufferSize: bufferSize, format: AKManager.format) { [weak self] (buffer, time) -> Void in
        if let strongSelf = self {
            buffer.frameLength = strongSelf.bufferSize
            let offset: Int = Int(buffer.frameCapacity - buffer.frameLength)
            let tail = buffer.floatChannelData[0]
            strongSelf.fft!.computeFFTWithBuffer(&tail[offset], withBufferSize: strongSelf.bufferSize)
        }
    }
}
Then implement the callback to load your internal fftData array:
@objc public func fft(fft: EZAudioFFT!, updatedWithFFTData fftData: UnsafeMutablePointer<Float>, bufferSize: vDSP_Length) {
    dispatch_async(dispatch_get_main_queue()) { () -> Void in
        for i in 0...511 {
            self.fftData[i] = Double(fftData[i])
        }
    }
}
AudioKit's implementation may change so you should check https://github.com/audiokit/AudioKit/ to see if any improvements were made. EZAudio is at https://github.com/syedhali/EZAudio
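If you would rather avoid EZAudio, the same idea works with a plain AVAudioEngine tap plus the vDSP calls already used in the question. The sketch below (current Swift, illustrative names, no windowing or scaling) computes magnitudes for each 1024-sample buffer as it plays; for bar heights you would typically group the magnitudes into a handful of bands and convert them to decibels:

import AVFoundation
import Accelerate

final class LiveFFT {

    private let n = 1024                              // samples per FFT frame
    private let log2n = vDSP_Length(10)               // log2(1024)
    private lazy var fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2))!

    // Tap a node and compute magnitudes for every buffer that flows through it during playback.
    func installTap(on node: AVAudioNode, format: AVAudioFormat?) {
        node.installTap(onBus: 0, bufferSize: AVAudioFrameCount(n), format: format) { [weak self] buffer, _ in
            guard let self = self, let channel = buffer.floatChannelData?[0] else { return }
            let spectrum = self.magnitudes(from: channel, count: Int(buffer.frameLength))
            // Feed `spectrum` (grouped into bands, converted to dB) to your bar views.
            print(spectrum.prefix(8))
        }
    }

    private func magnitudes(from samples: UnsafeMutablePointer<Float>, count: Int) -> [Float] {
        let frames = min(count, n)
        var realp = [Float](repeating: 0, count: n / 2)   // zero padding if the buffer is short
        var imagp = [Float](repeating: 0, count: n / 2)
        var mags = [Float](repeating: 0, count: n / 2)

        realp.withUnsafeMutableBufferPointer { realPtr in
            imagp.withUnsafeMutableBufferPointer { imagPtr in
                var split = DSPSplitComplex(realp: realPtr.baseAddress!, imagp: imagPtr.baseAddress!)
                samples.withMemoryRebound(to: DSPComplex.self, capacity: frames / 2) { complexPtr in
                    vDSP_ctoz(complexPtr, 2, &split, 1, vDSP_Length(frames / 2))
                }
                vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
                vDSP_zvmags(&split, 1, &mags, 1, vDSP_Length(n / 2))
            }
        }
        return mags
    }
}

To use it with playback you would install the tap on the node producing the audio, for example an AVAudioPlayerNode or the engine's mainMixerNode.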

Finding the current Accelerometer values

I'm trying to get a label to rotate based on the tilt of the device in an app I'm making in Swift. Given that I should be fine with the rotation itself, does anybody know how to find the current accelerometer values of an iPhone? For example, getting it to print its x, y and z values once every second or so.
Thanks in advance.
Sample code:
Add the CoreMotion framework
Add the following code in the appropriate places:
import CoreMotion

var motionManager = CMMotionManager()
var AccelX: CGFloat = 0

// Set up accelerometer detection
if motionManager.isAccelerometerAvailable {
    motionManager.startAccelerometerUpdates(to: OperationQueue()) { (data, error) in
        if let acceleration = data?.acceleration {
            self.outputAccelerationData(acceleration)
        }
    }
}

func outputAccelerationData(_ acceleration: CMAcceleration) {
    AccelX = 0
    if abs(CGFloat(acceleration.x)) > abs(AccelX) {
        AccelX = CGFloat(acceleration.x)
    }
}
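If the end goal is rotating the label with the tilt, a sketch (with an assumed tiltLabel outlet) could derive an angle from the direction of gravity with atan2 and apply it as a transform:

import UIKit
import CoreMotion

class TiltViewController: UIViewController {

    @IBOutlet weak var tiltLabel: UILabel!
    let motionManager = CMMotionManager()

    override func viewDidLoad() {
        super.viewDidLoad()
        guard motionManager.isAccelerometerAvailable else { return }
        motionManager.accelerometerUpdateInterval = 1.0 / 30.0
        motionManager.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let acceleration = data?.acceleration else { return }
            // Angle of the gravity vector in the screen plane; the sign and offset
            // may need adjusting depending on the rotation direction you want.
            let angle = atan2(acceleration.y, acceleration.x) + .pi / 2
            self?.tiltLabel.transform = CGAffineTransform(rotationAngle: CGFloat(-angle))
        }
    }
}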
