How Can I Add DeviceMotion Capabilities to a Swift Playground?

I am working on a Swift playground and I am trying to use this code to get the device motion.
@objc func update()
{
    if let deviceMotion = motionManager.deviceMotion {
        print("Device Motion Yaw: \(deviceMotion.attitude.yaw)")
    }
}
However, it seems that device motion does not work in a Swift playground, even though it works in iOS. How would I change a playground to support device motion? I am using an iPad running iOS 12 with the latest version of Swift Playgrounds, plus a Mac for writing the code. I know the method gets called, and the code runs perfectly when I include it in an iOS app on both an iPad and an iPhone. How would I modify a playground to support this, since from my understanding it does not by default?

It is entirely possible; I’ve done it on several occasions. You’ll need a CMMotionManager instance. There are many ways to do this, but I would recommend using a timer. Here is some example code, taken from Apple’s developer documentation and modified to fit the question.
import CoreMotion

let motion = CMMotionManager()
var timer: Timer?

func startDeviceMotion() {
    if motion.isDeviceMotionAvailable {
        // How often to push updates
        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.showsDeviceMovementDisplay = true
        motion.startDeviceMotionUpdates(using: .xMagneticNorthZVertical)

        // Configure a timer to fetch the motion data.
        timer = Timer(fire: Date(), interval: 1.0 / 60.0, repeats: true,
                      block: { (timer) in
            if let data = motion.deviceMotion {
                let x = data.attitude.pitch
                let y = data.attitude.roll
                let z = data.attitude.yaw
                // Use the data
            }
        })

        RunLoop.current.add(timer!, forMode: RunLoop.Mode.default)
    }
}

startDeviceMotion()
Either do that, or try something like this, also from the documentation:
let queue = OperationQueue()

func startQueuedUpdates() {
    if motion.isDeviceMotionAvailable {
        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.showsDeviceMovementDisplay = true
        motion.startDeviceMotionUpdates(using: .xMagneticNorthZVertical,
                                        to: queue, withHandler: { (data, error) in
            // Make sure the data is valid before accessing it.
            if let validData = data {
                // Get the attitude relative to the magnetic north reference frame.
                let roll = validData.attitude.roll
                let pitch = validData.attitude.pitch
                let yaw = validData.attitude.yaw

                // Use the motion data in your app.
            }
        })
    }
}
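One playground-specific detail worth adding: a playground page normally stops executing once the top-level code finishes, so asynchronous motion updates never arrive. The following is a minimal sketch of keeping the page alive with PlaygroundSupport, assuming that framework is available in your environment (it is in both Xcode playgrounds and Swift Playgrounds); device motion itself will still only report real values on physical hardware.
import CoreMotion
import PlaygroundSupport

// Keep the playground page running so asynchronous updates can be delivered.
PlaygroundPage.current.needsIndefiniteExecution = true

let motionManager = CMMotionManager()

if motionManager.isDeviceMotionAvailable {
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    // Deliver updates on the main queue and print the yaw as it changes.
    motionManager.startDeviceMotionUpdates(to: .main) { data, error in
        if let data = data {
            print("Device Motion Yaw: \(data.attitude.yaw)")
        }
    }
} else {
    print("Device motion is not available in this environment.")
}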

Related

Retrieve Device Motion data only once - Core Motion

I'm using Core Motion to see if the phone is facing upwards or downwards, and I only need to retrieve the data once. Here is some of my code:
let manager = CMMotionManager()
manager.showsDeviceMovementDisplay = true

// "Pull data" - since I only need it once
manager.deviceMotionUpdateInterval = 1.0 / 60.0
manager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical)

// Repeats set to false since I only need it once - but same problem even when set to true
self.timer = Timer(fire: Date(), interval: 1.0 / 60.0, repeats: false) { _ in
    print("Timer started")
    if let motionData = manager.deviceMotion?.gravity.z {
        print("Successfully unwrapped")
        if 0.7...1 ~= motionData { // Facing downwards
            print("Facing downwards")
            position = .downwards(motionData)
        } else if -1...(-0.7) ~= motionData { // Facing upwards
            print("Facing upwards")
            position = .upwards(motionData)
        } else {
            print("Position unknown")
            position = .unknown
        }
    }
}
RunLoop.current.add(self.timer!, forMode: RunLoop.Mode.default)
However, I never reach "Successfully unwrapped". From my attempts to debug, I found that manager.isDeviceMotionActive is never set to true, even though I called startDeviceMotionUpdates(using: .xMagneticNorthZVertical). Why could that be?
Where is this declared? Is it possible your reference to the manager is being deallocated?
Try storing it in a property somewhere that won't get destroyed after the function is called.
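To illustrate that suggestion, here is a minimal sketch of keeping the manager in a stored property so it outlives the call; the MotionChecker type and its completion parameter are illustrative names, not from the original code.
import CoreMotion

final class MotionChecker {
    // Stored properties keep the manager and timer alive after the function returns.
    private let manager = CMMotionManager()
    private var timer: Timer?

    func checkOrientationOnce(completion: @escaping (Double) -> Void) {
        guard manager.isDeviceMotionAvailable else { return }

        manager.deviceMotionUpdateInterval = 1.0 / 60.0
        manager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical)

        // Give Core Motion a moment to produce its first sample, then read it once.
        timer = Timer.scheduledTimer(withTimeInterval: 0.5, repeats: false) { [weak self] _ in
            guard let self = self else { return }
            if let z = self.manager.deviceMotion?.gravity.z {
                completion(z)
            }
            self.manager.stopDeviceMotionUpdates()
        }
    }
}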

iOS, How to do face tracking using the rear camera?

I'm planning to use ARKit's camera feed as input into Apple's Vision API so I can recognize people's faces in screen-space, no depth information required. To simplify the process, I'm attempting to modify Apple's face tracking over frames example here: Tracking the User’s Face in Real Time
I thought that I could simply change the function here:
fileprivate func configureFrontCamera(for captureSession: AVCaptureSession) throws -> (device: AVCaptureDevice, resolution: CGSize) {
    let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .front)

    if let device = deviceDiscoverySession.devices.first {
        if let deviceInput = try? AVCaptureDeviceInput(device: device) {
            if captureSession.canAddInput(deviceInput) {
                captureSession.addInput(deviceInput)
            }

            if let highestResolution = self.highestResolution420Format(for: device) {
                try device.lockForConfiguration()
                device.activeFormat = highestResolution.format
                device.unlockForConfiguration()

                return (device, highestResolution.resolution)
            }
        }
    }

    throw NSError(domain: "ViewController", code: 1, userInfo: nil)
}
In the first line of the function, one of the arguments is .front for front-facing camera. I changed this to .back. This successfully gives me the rear-facing camera. However, the recognition region seems a little bit choppy, and as soon as it fixates on a face in the image, Xcode reports the error:
VisionFaceTrack[877:54517] [ServicesDaemonManager] interruptionHandler is called. -[FontServicesDaemonManager connection]_block_invoke
Message from debugger: Terminated due to memory issue
In other words, it seems the program crashes as soon as a face is recognized. Clearly there is more to this than simply changing the constant used. Perhaps there is a buffer somewhere with the wrong size, or a wrong resolution. Can someone help me figure out what may be wrong here?
A better solution would also include information about how to achieve this with ARKit's camera feed, but I'm pretty sure it's the same idea with the CVPixelBuffer.
How would I adapt this example to use the rear camera?
EDIT: I think the issue is that my device has too little memory to support the algorithm with the back camera, since the back camera has a higher resolution.
However, even on another higher-performance device, the tracking quality is pretty bad. Yet the Vision algorithm only needs raw images, doesn't it? In that case, shouldn't this work? I can't find any examples online of using the back camera for face tracking.
Here's how I adapted the sample to make it work on my iPad Pro.
1) Download the sample project from here: Tracking the User’s Face in Real Time.
2) Change the function which loads the front-facing camera to use the back-facing one instead. Rename it to configureBackCamera and call it from setupAVCaptureSession:
fileprivate func configureBackCamera(for captureSession: AVCaptureSession) throws -> (device: AVCaptureDevice, resolution: CGSize) {
    let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)

    if let device = deviceDiscoverySession.devices.first {
        if let deviceInput = try? AVCaptureDeviceInput(device: device) {
            if captureSession.canAddInput(deviceInput) {
                captureSession.addInput(deviceInput)
            }

            if let highestResolution = self.highestResolution420Format(for: device) {
                try device.lockForConfiguration()
                device.activeFormat = highestResolution.format
                device.unlockForConfiguration()

                return (device, highestResolution.resolution)
            }
        }
    }

    throw NSError(domain: "ViewController", code: 1, userInfo: nil)
}
3) Change the implementation of the method highestResolution420Format. The problem is that, now that the back-facing camera is used, you have access to formats with much higher resolution than with the front-facing camera, which can impact the performance of the tracking. You need to adapt this to your use case, but here's an example of limiting the resolution to 1080p.
fileprivate func highestResolution420Format(for device: AVCaptureDevice) -> (format: AVCaptureDevice.Format, resolution: CGSize)? {
    var highestResolutionFormat: AVCaptureDevice.Format? = nil
    var highestResolutionDimensions = CMVideoDimensions(width: 0, height: 0)

    for format in device.formats {
        let deviceFormat = format as AVCaptureDevice.Format
        let deviceFormatDescription = deviceFormat.formatDescription
        if CMFormatDescriptionGetMediaSubType(deviceFormatDescription) == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange {
            let candidateDimensions = CMVideoFormatDescriptionGetDimensions(deviceFormatDescription)
            if (candidateDimensions.height > 1080) {
                continue
            }
            if (highestResolutionFormat == nil) || (candidateDimensions.width > highestResolutionDimensions.width) {
                highestResolutionFormat = deviceFormat
                highestResolutionDimensions = candidateDimensions
            }
        }
    }

    if highestResolutionFormat != nil {
        let resolution = CGSize(width: CGFloat(highestResolutionDimensions.width), height: CGFloat(highestResolutionDimensions.height))
        return (highestResolutionFormat!, resolution)
    }

    return nil
}
4) Now the tracking will work, but the face positions will not be correct. The reason is that the UI presentation is wrong: the original sample was designed for front-facing cameras with a mirrored display, while the back-facing camera doesn't need mirroring.
To account for this, simply change the updateLayerGeometry() method. Specifically, you need to change this:
// Scale and mirror the image to ensure upright presentation.
let affineTransform = CGAffineTransform(rotationAngle: radiansForDegrees(rotation))
    .scaledBy(x: scaleX, y: -scaleY)
overlayLayer.setAffineTransform(affineTransform)
into this:
// Scale the image to ensure upright presentation.
let affineTransform = CGAffineTransform(rotationAngle: radiansForDegrees(rotation))
    .scaledBy(x: -scaleX, y: -scaleY)
overlayLayer.setAffineTransform(affineTransform)
After this, the tracking should work and the results should be correct.

AudioKit for iOS: Frequency Discrepancy on Simulator vs Device

I am using AudioKit to monitor frequency for a simple guitar tuner application and am experiencing discrepancies in frequency after updating from AudioKit ~4.2 to 4.4, Xcode 9.x to 10, and iOS 11 to 12. Before the updates, I was achieving correct frequency readings on my device. After updating, I am getting accurate results for a low E1 (82.4 Hz) on the simulator, but false readings on the device (alternates from ~23 to ~47 kHz).
I have tried using another device, but achieve the same results.
My viewDidLoad() setting up AudioKit is relatively simple, and I used the AudioKit playgrounds as a guideline:
override func viewDidLoad() {
    super.viewDidLoad()

    // Enable microphone tracking.
    AKSettings.audioInputEnabled = true
    let mic = AKMicrophone()
    let tracker = AKFrequencyTracker(mic)
    let silence = AKBooster(tracker, gain: 0)
    AudioKit.output = silence

    do {
        try AudioKit.start()
    } catch {
        print("AudioKit did not start!")
    }

    mic.start()
    tracker.start()

    // Track input frequency, 100ms intervals
    timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] (timer) in
        guard let this = self else { return }
        this.frequencyLabel.text = String(format: "Frequency: %.3f Hz", tracker.frequency)
        this.frequencyLabel.sizeToFit()
    }
}
As a side note, I am getting Objective-C console output about AudioKit classes being implemented in two places. Could this contribute to the issue?
objc[517]: Class AKRhodesPianoAudioUnit is implemented in both /private/var/containers/Bundle/Application/5A294050-2DB2-45C9-BB0A-3A0DE25E87C6/Tuner.app/Frameworks/AudioKitUI.framework/AudioKitUI (0x1058413f0) and /var/containers/Bundle/Application/5A294050-2DB2-45C9-BB0A-3A0DE25E87C6/Tuner.app/Tuner (0x104e177e8). One of the two will be used. Which one is undefined.
Any ideas? Thanks in advance!

How to recognize a screen high-five

I have a client who wants to recognize when a user smacks their screen with their whole hand, like a high-five. I suspect that Apple won't approve this, but let's set that aside.
I thought of using a four-finger tap recognizer, but that doesn't really cover it. The best approach would probably be to check whether the user is covering at least 70% of the screen with their hand, but I don't know how to do that.
Can someone help me out here?
You could use the accelerometer to detect the impact of a hand & examine the front camera feed to find a corresponding dark frame due to the hand covering the camera*
* N.B. a human hand might not be big enough to cover the front camera on an iPhone 6+
Sort of solved it. Proximity + accelerometer works well enough. Multitouch doesn't work, as it ignores contacts it doesn't interpret as taps.
import UIKit
import CoreMotion
import AVFoundation

class ViewController: UIViewController {

    var lastHighAccelerationEvent:NSDate? {
        didSet {
            checkForHighFive()
        }
    }
    var lastProximityEvent:NSDate? {
        didSet {
            checkForHighFive()
        }
    }
    var lastHighFive:NSDate?
    var manager = CMMotionManager()

    override func viewDidLoad() {
        super.viewDidLoad()

        //Start disabling the screen
        UIDevice.currentDevice().proximityMonitoringEnabled = true
        NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(proximityChanged), name: UIDeviceProximityStateDidChangeNotification, object: nil)

        //Check for accelerometer
        manager.startAccelerometerUpdatesToQueue(NSOperationQueue.mainQueue()) { (data, error) in
            let sum = abs(data!.acceleration.y + data!.acceleration.z + data!.acceleration.x)
            if sum > 3 {
                self.lastHighAccelerationEvent = NSDate()
            }
        }

        //Enable multitouch
        self.view.multipleTouchEnabled = true
    }

    func checkForHighFive() {
        if let lastHighFive = lastHighFive where abs(lastHighFive.timeIntervalSinceDate(NSDate())) < 1 {
            print("Time filter")
            return
        }
        guard let lastProximityEvent = lastProximityEvent else { return }
        guard let lastHighAccelerationEvent = lastHighAccelerationEvent else { return }

        if abs(lastProximityEvent.timeIntervalSinceDate(lastHighAccelerationEvent)) < 0.1 {
            lastHighFive = NSDate()
            playBoratHighFive()
        }
    }

    func playBoratHighFive() {
        print("High Five")
        let player = try! AudioPlayer(fileName: "borat.mp3")
        player.play()
    }

    func proximityChanged() {
        if UIDevice.currentDevice().proximityState {
            self.lastProximityEvent = NSDate()
        }
    }
}
You can detect the finger count with multi-touch event handling. Check this answer.
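As a rough illustration of that idea (a minimal sketch in current Swift, not taken from the linked answer), you can count the simultaneous touches delivered to a view with multi-touch enabled and treat a large count as a possible palm contact. Note that the answer above found the system may ignore palm-sized contacts, so treat this only as a starting point; the PalmDetectingView name and the threshold of four touches are illustrative choices.
import UIKit

class PalmDetectingView: UIView {
    // Hypothetical threshold: treat 4+ simultaneous touches as a possible palm/high-five.
    private let palmTouchThreshold = 4

    override init(frame: CGRect) {
        super.init(frame: frame)
        isMultipleTouchEnabled = true
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        isMultipleTouchEnabled = true
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        // Count every touch currently on screen, not just the new ones.
        let touchCount = event?.allTouches?.count ?? touches.count
        if touchCount >= palmTouchThreshold {
            print("Possible palm contact: \(touchCount) touches")
        }
    }
}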

How to use CoreMotion in WatchKit?

I was unsure about this question's title phrasing, but I think it captures the point as it is.
I've been trying to simply read the Core Motion data in WatchKit, but as it turns out, I can't get startDeviceMotionUpdatesToQueue to work; my handler is never called.
I tried running it on a custom background queue (NSOperationQueue()), but still no luck.
I'm debugging on a real Apple Watch, not the simulator.
In my WKInterfaceController:
let manager = CMMotionManager()

override func awakeWithContext(context: AnyObject?) {
    super.awakeWithContext(context)

    let communicator = SessionDelegate()

    manager.deviceMotionUpdateInterval = 1 / 60
    manager.startDeviceMotionUpdatesToQueue(NSOperationQueue.mainQueue()) {
        (motionerOp: CMDeviceMotion?, errorOp: NSError?) -> Void in
        print("got into handler")

        guard let motion = motionerOp else {
            if let error = errorOp {
                print(error.localizedDescription)
            }
            assertionFailure()
            return
        }

        print("passed guard")

        let roll = motion.attitude.roll
        let pitch = motion.attitude.pitch
        let yaw = motion.attitude.yaw

        let attitudeToSend = ["roll": roll, "pitch": pitch, "yaw": yaw]
        communicator.send(attitudeToSend)
    }

    print("normal stack")
}
the output is
normal stack
normal stack
(Yes, twice! I don't know why that is either, but that's not the point; it must be something else I'm doing wrong.)
I'm posting this here because I have no clue where else to look; this is driving me crazy.
Device motion (startDeviceMotionUpdatesToQueue) is not available in watchOS 2 yet (deviceMotionAvailable returns false). The accelerometer can probably help you instead; see startAccelerometerUpdatesToQueue.
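A minimal sketch of that accelerometer fallback, written in the same Swift 2 / watchOS 2 style as the question; the MotionInterfaceController name is illustrative, and the raw accelerometer gives acceleration rather than attitude, so there is no roll/pitch/yaw here.
import WatchKit
import CoreMotion

class MotionInterfaceController: WKInterfaceController {
    let manager = CMMotionManager()

    override func awakeWithContext(context: AnyObject?) {
        super.awakeWithContext(context)

        guard manager.accelerometerAvailable else {
            print("Accelerometer not available")
            return
        }

        // Raw accelerometer updates are available on watchOS 2, unlike device motion.
        manager.accelerometerUpdateInterval = 1 / 60
        manager.startAccelerometerUpdatesToQueue(NSOperationQueue.mainQueue()) { (data, error) in
            if let acceleration = data?.acceleration {
                print("x: \(acceleration.x) y: \(acceleration.y) z: \(acceleration.z)")
            } else if let error = error {
                print(error.localizedDescription)
            }
        }
    }
}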
