How to recognize a screen high-five - iOS

I have a client that wants to recognize when a user smacks their screen with their whole hand, like a high-five. I suspect that Apple won't approve this, but let's set that aside.
I thought of using a four-finger-tap recognizer, but that doesn't really cover it. The best approach would probably be to check whether the user is covering at least 70% of the screen with their hand, but I don't know how to do that.
Can someone help me out here?

You could use the accelerometer to detect the impact of a hand, and examine the front camera feed for a corresponding dark frame caused by the hand covering the camera.*
* N.B. a human hand might not be big enough to cover the front camera on an iPhone 6+
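If you want to experiment with that camera idea, here is a rough, untested sketch (modern Swift syntax, unlike the Swift 2 code below) of how the two signals might be combined. The SlapDetector name, the 2.5 g and brightness thresholds, and the sparse luma sampling are my own assumptions and would need tuning; it also assumes the capture output's default bi-planar YCbCr pixel format.

import UIKit
import CoreMotion
import AVFoundation

// Rough sketch: combine an accelerometer spike with a dark front-camera frame.
// The thresholds (2.5 g, average luma < 40) are guesses and would need tuning.
class SlapDetector: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let motion = CMMotionManager()
    private let session = AVCaptureSession()
    private var lastImpact: Date?
    private var cameraIsCovered = false

    func start() {
        // 1. Watch for a sharp acceleration spike (the hand hitting the device).
        motion.accelerometerUpdateInterval = 1.0 / 100.0
        motion.startAccelerometerUpdates(to: .main) { data, _ in
            guard let a = data?.acceleration else { return }
            if sqrt(a.x * a.x + a.y * a.y + a.z * a.z) > 2.5 {
                self.lastImpact = Date()
                self.checkForSlap()
            }
        }

        // 2. Sample the front camera and keep track of whether the frame is dark.
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera"))
        session.addInput(input)
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        CVPixelBufferLockBaseAddress(buffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }
        // Average a sparse grid of the luma (Y) plane as a cheap brightness metric
        // (assumes the default bi-planar YCbCr pixel format, where plane 0 is luma).
        guard let base = CVPixelBufferGetBaseAddressOfPlane(buffer, 0) else { return }
        let width = CVPixelBufferGetWidthOfPlane(buffer, 0)
        let height = CVPixelBufferGetHeightOfPlane(buffer, 0)
        let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(buffer, 0)
        let pixels = base.assumingMemoryBound(to: UInt8.self)
        var total = 0, samples = 0
        for y in stride(from: 0, to: height, by: 16) {
            for x in stride(from: 0, to: width, by: 16) {
                total += Int(pixels[y * bytesPerRow + x])
                samples += 1
            }
        }
        cameraIsCovered = samples > 0 && total / samples < 40
        DispatchQueue.main.async { self.checkForSlap() }
    }

    private func checkForSlap() {
        // A "high five" is a dark camera frame within 0.2 s of an impact.
        if cameraIsCovered, let impact = lastImpact, Date().timeIntervalSince(impact) < 0.2 {
            print("High five detected")
        }
    }
}

You would also need camera permission (NSCameraUsageDescription), and, as noted above, the hand may not reliably cover the camera on larger phones.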

Sort of solved it. Proximity + accelerometer works well enough. Multitouch doesn't work, as the system ignores contacts it doesn't interpret as taps.
import UIKit
import CoreMotion
import AVFoundation

class ViewController: UIViewController {

    var lastHighAccelerationEvent: NSDate? {
        didSet {
            checkForHighFive()
        }
    }
    var lastProximityEvent: NSDate? {
        didSet {
            checkForHighFive()
        }
    }
    var lastHighFive: NSDate?
    var manager = CMMotionManager()

    override func viewDidLoad() {
        super.viewDidLoad()

        // Start proximity monitoring (the screen turns off while the sensor is covered)
        UIDevice.currentDevice().proximityMonitoringEnabled = true
        NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(proximityChanged), name: UIDeviceProximityStateDidChangeNotification, object: nil)

        // Check the accelerometer for a sudden impact
        manager.startAccelerometerUpdatesToQueue(NSOperationQueue.mainQueue()) { (data, error) in
            let sum = abs(data!.acceleration.y + data!.acceleration.z + data!.acceleration.x)
            if sum > 3 {
                self.lastHighAccelerationEvent = NSDate()
            }
        }

        // Enable multitouch
        self.view.multipleTouchEnabled = true
    }

    func checkForHighFive() {
        if let lastHighFive = lastHighFive where abs(lastHighFive.timeIntervalSinceDate(NSDate())) < 1 {
            print("Time filter")
            return
        }
        guard let lastProximityEvent = lastProximityEvent else { return }
        guard let lastHighAccelerationEvent = lastHighAccelerationEvent else { return }
        if abs(lastProximityEvent.timeIntervalSinceDate(lastHighAccelerationEvent)) < 0.1 {
            lastHighFive = NSDate()
            playBoratHighFive()
        }
    }

    func playBoratHighFive() {
        print("High Five")
        // AudioPlayer is not an iOS SDK class; it is a helper not shown here
        let player = try! AudioPlayer(fileName: "borat.mp3")
        player.play()
    }

    func proximityChanged() {
        if UIDevice.currentDevice().proximityState {
            self.lastProximityEvent = NSDate()
        }
    }
}

You can detect the finger count with multi-touch event handling; check this answer.
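For reference, a minimal sketch (modern Swift, untested) of counting simultaneous fingers in a custom view. Note that, as the answer above found, the system may discard palm-sized contacts before they ever reach these callbacks, so this is most reliable for one to five fingertips:

import UIKit

// Minimal sketch: report how many fingers are currently touching the view.
class TouchCountView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        isMultipleTouchEnabled = true
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        isMultipleTouchEnabled = true
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // allTouches includes touches that began earlier and are still down.
        let fingerCount = event?.allTouches?.count ?? touches.count
        print("Fingers on screen: \(fingerCount)")
    }
}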

Related

Why am I having trouble filling an external screen on iPad (but not in simulator)?

I am able to detect when an external screen is connected, associate it with an appropriate UIWindowScene, and add a view to it. Slightly hacky but approximately working (code for disconnection not included here), thanks to this SO question:
class ExternalViewController: UIViewController {
    weak var mainVC: ViewController?   // back-reference set by the main view controller below

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .cyan
        print("external frame \(view.frame.width)x\(view.frame.height)")
    }
}

class ViewController: UIViewController {
    var additionalWindows: [UIWindow] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        // nb, Apple documentation seems out of date.
        // https://stackoverflow.com/questions/61191134/setter-for-screen-was-deprecated-in-ios-13-0
        NotificationCenter.default.addObserver(forName: UIScreen.didConnectNotification, object: nil, queue: nil) { [weak self] notification in
            guard let self = self else { return }
            guard let newScreen = notification.object as? UIScreen else { return }
            // Give the system time to update the connected scenes
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.05) {
                // Find the matching UIWindowScene
                let matchingWindowScene = UIApplication.shared.connectedScenes.first {
                    guard let windowScene = $0 as? UIWindowScene else { return false }
                    return windowScene.screen == newScreen
                } as? UIWindowScene
                guard let connectedWindowScene = matchingWindowScene else {
                    NSLog("--- Connected scene was not found ---")
                    return
                    //fatalError("Connected scene was not found") // You might want to retry here after some time
                }
                let screenDimensions = newScreen.bounds
                let newWindow = UIWindow(frame: screenDimensions)
                NSLog("newWindow \(screenDimensions.width)x\(screenDimensions.height)")
                newWindow.windowScene = connectedWindowScene
                let vc = ExternalViewController()
                vc.mainVC = self
                newWindow.rootViewController = vc
                newWindow.isHidden = false
                self.additionalWindows.append(newWindow)
            }
        }
    }
}
When I do this in the iOS simulator, I see my graphics fill the screen as intended, but when running on my actual device, it appears with a substantial black border around all sides.
Note that this is not the usual border seen with the default display mirroring behaviour: the 16:9 aspect ratio is preserved, and I do see different graphics as expected (a flat cyan colour in my example code; normally I'm doing some Metal rendering that has some slight anomalies which are out of scope here, although digging into them might turn up different clues).
The print messages report the expected 1920x1080 dimensions. I don't know UIKit very well and haven't been doing much active Apple development lately (I'm dusting off a couple of old side projects in the hope of using them to project visuals at a gig in the near future), so I don't know whether there's something to do with sizing constraints etc. that I'm missing, but even so it's hard to see why it would behave differently in the simulator.
Other apps I have installed from the app store do indeed show fullscreen graphics on the external display - Netflix shows fullscreen video as you would expect, Concepts shows a different representation of the document than the one you see on the device.
So, in this instance the issue is to do with Overscan Compensation. Thanks to Jerrot on Discord for pointing me in the right direction.
In the context of my app, it is sufficient to add newScreen.overscanCompensation = .none in the connection notification handler (actually, in the block that runs after the short delay - it doesn't work if applied directly in the connection notification). In the question linked above there is further discussion of other aspects that may be important in a different context.
This is my ViewController modified to achieve the desired result:
class ViewController: UIViewController {
    var additionalWindows: [UIWindow] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        // nb, Apple documentation seems out of date.
        // https://stackoverflow.com/questions/61191134/setter-for-screen-was-deprecated-in-ios-13-0
        NotificationCenter.default.addObserver(forName: UIScreen.didConnectNotification, object: nil, queue: nil) { [weak self] notification in
            guard let self = self else { return }
            guard let newScreen = notification.object as? UIScreen else { return }
            // Give the system time to update the connected scenes
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.05) {
                // Find the matching UIWindowScene
                let matchingWindowScene = UIApplication.shared.connectedScenes.first {
                    guard let windowScene = $0 as? UIWindowScene else { return false }
                    return windowScene.screen == newScreen
                } as? UIWindowScene
                guard let connectedWindowScene = matchingWindowScene else {
                    NSLog("--- Connected scene was not found ---")
                    return
                    //fatalError("Connected scene was not found") // You might want to retry here after some time
                }
                let screenDimensions = newScreen.bounds
                ////// new code here --->
                newScreen.overscanCompensation = .none
                //////
                let newWindow = UIWindow(frame: screenDimensions)
                NSLog("newWindow \(screenDimensions.width)x\(screenDimensions.height)")
                newWindow.windowScene = connectedWindowScene
                let vc = ExternalViewController()
                vc.mainVC = self
                newWindow.rootViewController = vc
                newWindow.isHidden = false
                self.additionalWindows.append(newWindow)
            }
        }
    }
}
In this day and age, I find it pretty peculiar that overscan compensation is enabled by default.

How to skip 10 seconds forward or backward in the Spotify player

I am trying to skip forward or backward by 10 seconds in the Spotify player, but I am really confused about how to add to or subtract from the playback position.
When I try to use this code, the playback position does not change:
// forward button action
@IBAction func moveFrdBtnAction(_ sender: Any) {
    SpotifyManager.shared.audioStreaming(SpotifyManager.shared.player, didSeekToPosition: TimeInterval(10))
}

// spotify delegate method seekToPosition
func audioStreaming(_ audioStreaming: SPTAudioStreamingController!, didSeekToPosition position: TimeInterval) {
    player?.seek(to: position, callback: { (error) in
        let songDuration = audioStreaming.metadata.currentTrack?.duration as Any as! Double
        self.delegate?.getSongTime(timeCount: Int(songDuration) + 1)
    })
}
We are making a music application using the same SDK on both platforms (Android & iOS). The seekToPosition method of the Spotify SDK works correctly in the Android version, but it does not work in the iOS one. The delegate method gets called, but the music stops.
Can you kindly let us know why this is happening, and what we should do to make it work on iOS devices as well?
Can someone please explain how to solve this? I've tried, but with no results yet.
Any help would be greatly appreciated.
Thanks in advance.
I don't use this API, so my answer is based on your code and Spotify's reference documentation.
I think there are a few things wrong with your flow:
As Robert Dresler commented, you should (approximately) never call a delegate directly; a delegate calls you.
I'm pretty sure your action currently results in jumping to exactly 10 seconds, not by 10 seconds.
(As an aside, I'd suggest changing the name of your function moveFrdBtnAction to at least add more vowels)
Anyway, here's my best guess at what you want:
// forward button action
@IBAction func moveForwardButtonAction(_ sender: Any) {
    skipAudio(by: 10)
}

@IBAction func moveBackButtonAction(_ sender: Any) {
    skipAudio(by: -10)
}

func skipAudio(by interval: TimeInterval) {
    if let player = player {
        let position = player.playbackState.position // The documentation alludes to milliseconds but examples don't.
        player.seek(to: position + interval, callback: { (error) in
            // Handle the error (if any)
        })
    }
}

// spotify delegate method seekToPosition
func audioStreaming(_ audioStreaming: SPTAudioStreamingController!, didSeekToPosition position: TimeInterval) {
    // Update your UI
}
Note that I have not handled seeking before the start of the track, nor after the end which could happen with a simple position + interval. The API may handle this for you, or not.
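If it turns out the SDK does not clamp for you, a small helper along these lines (my own sketch, assuming you can read the track duration in the same unit as the position) would keep the target inside the track:

// Sketch: clamp the seek target to the bounds of the track before calling seek(to:).
func clampedSeekPosition(current position: TimeInterval, offset: TimeInterval, trackDuration: TimeInterval) -> TimeInterval {
    return min(max(position + offset, 0), trackDuration)
}

You would then pass clampedSeekPosition(current: position, offset: interval, trackDuration: duration) to seek instead of the raw position + interval.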
You could take a look at the examples here: spotify/ios-sdk. In the NowPlayingView example they use seekForward15Seconds, so maybe you could use that? If you still need 10 s, I have added a function below. The position is in milliseconds:
"position: The position to seek to in milliseconds"
docs
ViewController.swift
var appRemote: SPTAppRemote {
    get {
        return AppDelegate.sharedInstance.appRemote
    }
}

fileprivate func seekForward15Seconds() {
    appRemote.playerAPI?.seekForward15Seconds(defaultCallback)
}

fileprivate func seekBackward15Seconds() {
    appRemote.playerAPI?.seekBackward15Seconds(defaultCallback)
}

// TODO: Or you could try this function
func seekForward(seconds: Int) {
    appRemote.playerAPI?.getPlayerState({ (result, error) in
        // playback position in milliseconds
        guard let playerState = result as? SPTAppRemotePlayerState else { return }
        let currentPosition = playerState.playbackPosition
        let secondsInMilliseconds = seconds * 1000
        self.appRemote.playerAPI?.seek(toPosition: currentPosition + secondsInMilliseconds, callback: { (result, error) in
            guard error == nil else {
                print(error as Any)
                return
            }
        })
    })
}

var defaultCallback: SPTAppRemoteCallback {
    get {
        return { [weak self] _, error in
            if let error = error {
                self?.displayError(error as NSError)
            }
        }
    }
}
AppDelegate.swift
lazy var appRemote: SPTAppRemote = {
    let configuration = SPTConfiguration(clientID: self.clientIdentifier, redirectURL: self.redirectUri)
    let appRemote = SPTAppRemote(configuration: configuration, logLevel: .debug)
    appRemote.connectionParameters.accessToken = self.accessToken
    appRemote.delegate = self
    return appRemote
}()

class var sharedInstance: AppDelegate {
    get {
        return UIApplication.shared.delegate as! AppDelegate
    }
}
Edit 1:
For this to work you need to follow the "Prepare Your Environment" steps:
Add the SpotifyiOS.framework to your Xcode project
Hope it helps!

AudioKit : AKNodeOutputPlot and AKMicrophone not working, potentially due to Lifecycle or MVVM architecture decisions

Early in my learning with AudioKit, and scaling it up in a larger app, I took the standard advice that AudioKit should effectively be a global singleton. I managed to build a really sophisticated prototype and all was well in the world.
Once I started to scale up and get closer to an actual release, we decided to go MVVM for our architecture and to avoid having one monstrously large AudioKit singleton handling every aspect of our audio needs in the app. In short, MVVM has been incredibly elegant and has demonstrably cleaned up our code base.
Our AudioKit structure goes something like this:
AudioKit and AKMixer reside in a singleton instance, which has public functions that allow the various view models and our other audio models to attach and detach the various nodes (AKPlayer, AKSampler, etc.). In the minimal testing I have done, I can confirm that this works: I tried it with my AKPlayer module and it works great.
I'm running into an issue where I cannot, for the life of me, get AKNodeOutputPlot and AKMicrophone to work with each other, despite the actual code implementation being identical to my working prototypes.
My concern is: did I do the wrong thing in thinking I could modularize AudioKit and the various nodes and components that need to connect to it, or does AKNodeOutputPlot have special requirements I am not aware of?
Here is the briefest snippets of Code I can provide without overwhelming the question:
AudioKit singleton (called in AppDelegate):
import Foundation
import AudioKit

class AudioKitConfigurator
{
    static let shared: AudioKitConfigurator = AudioKitConfigurator()
    private let mainMixer: AKMixer = AKMixer()

    private init()
    {
        makeMainMixer()
        configureAudioKitSettings()
        startAudioEngine()
    }

    deinit
    {
        stopAudioEngine()
    }

    private func makeMainMixer()
    {
        AudioKit.output = mainMixer
    }

    func mainMixer(add node: AKNode)
    {
        mainMixer.connect(input: node)
    }

    func mainMixer(remove node: AKNode)
    {
        node.detach()
    }

    private func configureAudioKitSettings()
    {
        AKAudioFile.cleanTempDirectory()
        AKSettings.defaultToSpeaker = true
        AKSettings.playbackWhileMuted = true
        AKSettings.bufferLength = .medium
        do
        {
            try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP)
        }
        catch
        {
            AKLog("Could not set session category.")
        }
    }

    private func startAudioEngine()
    {
        do
        {
            try AudioKit.start()
        }
        catch
        {
            AKLog("Fatal Error: AudioKit did not start!")
        }
    }

    private func stopAudioEngine()
    {
        do
        {
            try AudioKit.stop()
        }
        catch
        {
            AKLog("Fatal Error: AudioKit did not stop!")
        }
    }
}
Microphone Component:
import Foundation
import AudioKit
import AudioKitUI

enum MicErrorsToThrow: String, Error
{
    case recordingTooShort = "The recording was too short, just silently failing"
    case audioFileFailedToUnwrap = "The Audio File failed to Unwrap from the recorder"
    case recorderError = "The Recorder was unable to start recording."
    case recorderCantReset = "In attempt to reset the recorder, it was unable to"
}

class Microphone
{
    private var mic: AKMicrophone = AKMicrophone()
    private var micMixer: AKMixer = AKMixer()
    private var micBooster: AKBooster = AKBooster()
    private var recorder: AKNodeRecorder!
    private var recordingTimer: Timer

    init()
    {
        micMixer = AKMixer(mic)
        micBooster = AKBooster(micMixer)
        micBooster.gain = 0
        recorder = try? AKNodeRecorder(node: micMixer)
        //TODO: Need to finish the recording timer implementation, leaving blank for now
        recordingTimer = Timer(timeInterval: 120, repeats: false, block: { (timer) in
        })
        AudioKitConfigurator.shared.mainMixer(add: micBooster)
    }

    deinit {
        // removeComponent()
    }

    public func removeComponent()
    {
        AudioKitConfigurator.shared.mainMixer(remove: micBooster)
    }

    public func reset() throws
    {
        if recorder.isRecording
        {
            recorder.stop()
        }
        do
        {
            try recorder.reset()
        }
        catch
        {
            AKLog("Recorder can't reset!")
            throw MicErrorsToThrow.recorderCantReset
        }
    }

    public func setHeadphoneMonitoring()
    {
        // microphone will be monitored while recording
        // only if headphones are plugged in
        if AKSettings.headPhonesPlugged {
            micBooster.gain = 1
        }
    }

    /// Start recording from the mic. Call this when using it in conjunction with an AKNodeOutputPlot so that it can display the waveform in real time while recording.
    ///
    /// - Parameter waveformPlot: AKNodeOutputPlot view object which displays the waveform from the recording
    /// - Throws: The only error thrown is when the recorder property can't start recording (something wrong with the microphone). Enum is MicErrorsToThrow.recorderError
    public func record(waveformPlot: AKNodeOutputPlot) throws
    {
        waveformPlot.node = mic
        do
        {
            try recorder.record()
            // self.recordingTimer.fire()
        }
        catch
        {
            print("Error recording!")
            throw MicErrorsToThrow.recorderError
        }
    }

    /// Stop the recorder and get the recording as an AKAudioFile. Necessary to call if you are using AKNodeOutputPlot.
    ///
    /// - Parameter waveformPlot: AKNodeOutputPlot view object which displays the waveform from the recording
    /// - Returns: AKAudioFile
    /// - Throws: Two possible errors: the recording was too short (right now the threshold is 0.0, but should probably be around 0.5 s), or the audio file could not be retrieved from the recorder (MicErrorsToThrow.recordingTooShort, MicErrorsToThrow.audioFileFailedToUnwrap)
    public func stopRecording(waveformPlot: AKNodeOutputPlot) throws -> AKAudioFile
    {
        waveformPlot.pause()
        waveformPlot.node = nil
        recordingTimer.invalidate()
        if let tape = recorder.audioFile
        {
            if tape.duration > 0.0
            {
                recorder.stop()
                AKLog("Printing tape: CountOfFloatChannelData:\(tape.floatChannelData?.first?.count) | maxLevel:\(tape.maxLevel)")
                return tape
            }
            else
            {
                //TODO: This should be more gentle than an NSError; it just means they managed to tap the button and tap again, recording nothing. Honestly the duration threshold should probably be 0.5, or even 1.0. But let's return some sort of "safe" error that doesn't require UI
                throw MicErrorsToThrow.recordingTooShort
            }
        }
        else
        {
            //TODO: need to return an error here, could not recover audioFile from recorder
            AKLog("Can't retrieve or unwrap audioFile from recorder!")
            throw MicErrorsToThrow.audioFileFailedToUnwrap
        }
    }
}
Now, in my VC, the AKNodeOutputPlot is a view on the storyboard, hooked up via an IBOutlet. It renders on screen, it's styled to my liking, and it's definitely connected and working. Also in the VC/VM is an instance property of my Microphone component. My thinking was that upon recording we would pass the plot object to the view model, which would then call Microphone's record(waveformPlot: AKNodeOutputPlot) function, and that its waveformPlot.node = mic line would be sufficient to hook them up. Sadly this is not the case.
View:
class ComposerVC: UIViewController, Storyboarded
{
    var coordinator: MainCoordinator?
    let viewModel: ComposerViewModel = ComposerViewModel()

    @IBOutlet weak var recordButton: RecordButton!
    @IBOutlet weak var waveformPlot: AKNodeOutputPlot! // Here is our waveformPlot object, again confirmed rendering and styled

    // MARK:- VC Lifecycle Methods
    override func viewDidLoad()
    {
        super.viewDidLoad()
        setupNavigationBar()
        setupConductorButton()
        setupRecordButton()
    }

    func setupWaveformPlot() {
        waveformPlot.plotType = .rolling
        waveformPlot.gain = 1.0
        waveformPlot.shouldFill = true
    }

    override func viewDidAppear(_ animated: Bool)
    {
        super.viewDidAppear(animated)
        setupWaveformPlot()
        self.didDismissComposerDetailToRootController()
    }

    // Upon touching the Record Button, it in turn talks to the ViewModel, which then calls the Microphone module to record and hook up waveformPlot.node = mic
    @IBAction func tappedRecordView(_ sender: Any)
    {
        self.recordButton.recording.toggle()
        self.recordButton.animateToggle()
        self.viewModel.tappedRecord(waveformPlot: waveformPlot)
        { (waveformViewModel, error) in
            if let waveformViewModel = waveformViewModel
            {
                self.segueToEditWaveForm()
                self.performSegue(withIdentifier: "composerToEditWaveForm", sender: waveformViewModel)
                //self.performSegue(withIdentifier: "composerToDetailSegue", sender: self)
            }
        }
    }
}
ViewModel:
import Foundation
import AudioKit
import AudioKitUI

class ComposerViewModel: ViewModelProtocol
{
    //MARK:- Instance Variables
    var recordingState: RecordingState
    var mic: Microphone = Microphone()

    init()
    {
        self.recordingState = .readyToRecord
    }

    func resetViewModel()
    {
        self.resetRecorder()
    }

    func resetRecorder()
    {
        do
        {
            try mic.reset()
        }
        catch let error as MicErrorsToThrow
        {
            switch error {
            case .audioFileFailedToUnwrap:
                print(error)
            case .recorderCantReset:
                print(error)
            case .recorderError:
                print(error)
            case .recordingTooShort:
                print(error)
            }
        }
        catch {
            print("Secondary catch in start recording?!")
        }
        recordingState = .readyToRecord
    }

    func tappedRecord(waveformPlot: AKNodeOutputPlot, completion: ((EditWaveFormViewModel?, Error?) -> ())? = nil)
    {
        switch recordingState
        {
        case .readyToRecord:
            self.startRecording(waveformPlot: waveformPlot)
        case .recording:
            self.stopRecording(waveformPlot: waveformPlot, completion: completion)
        case .finishedRecording: break
        }
    }

    func startRecording(waveformPlot: AKNodeOutputPlot)
    {
        recordingState = .recording
        mic.setHeadphoneMonitoring()
        do
        {
            try mic.record(waveformPlot: waveformPlot)
        }
        catch let error as MicErrorsToThrow
        {
            switch error {
            case .audioFileFailedToUnwrap:
                print(error)
            case .recorderCantReset:
                print(error)
            case .recorderError:
                print(error)
            case .recordingTooShort:
                print(error)
            }
        }
        catch {
            print("Secondary catch in start recording?!")
        }
    }
}
I'm happy to provide more code, but I just don't want to overwhelm anyone or waste their time. The logic seems sound; I just feel I'm missing something obvious and/or have a complete misunderstanding of AudioKit + AKNodeOutputPlot + AKMicrophone.
Any ideas are so welcome, thank you!
EDIT
AudioKit 4.6 fixed all the issues! Highly encourage MVVM/Modularization of AudioKit for your projects!
====
So after a lot of experimenting, I have come to a few conclusions:
In a separate project, I brought over my AudioKitConfigurator and Microphone classes, initialized them, hooked them up to an AKNodeOutputPlot, and it worked flawlessly.
In my very large project, no matter what I do, I cannot get the same classes to work at all.
For now, I am reverting to an old build and slowly adding components until it breaks again, and I will update the architecture piece by piece, as this problem is too complex and might be interacting with some other libraries. I have also downgraded from AudioKit 4.5.6 to AudioKit 4.5.3.
This is not a solution, but it is the only workable one right now. The good news is that it is entirely possible to structure AudioKit to work with an MVVM architecture.

Swift 3 device motion gravity

I have an iPhone application with a level in it that is based on the gravityY parameter of a device motion call to the motion manager. I have tied the level to the pitch of the phone, as I wish to show the user whether the phone is elevated or declined relative to a flat plane (flat to the ground) through its x-axis; side-to-side tilt or rotation is not relevant. To do that, I have programmed the app to slide an indicator (red when out of level) along the level (a bar); its maximum travel is each end of the level.
The level works great, and a correct value is displayed, until the user locks the phone and puts it in his or her back pocket. In that state the level indicator shifts to one end of the level (the end of the phone that is elevated in the pocket), and when the phone is pulled out and unlocked, the app does not immediately restore the level; it remains out of level even if I make a manual function call to restore it. After about 5 minutes, the level seems to restore itself.
Here is the code:
func getElevation() {
    // now get the device orientation - we want the gravity value
    if self.motionManager.isDeviceMotionAvailable {
        self.motionManager.deviceMotionUpdateInterval = 0.05
        self.motionManager.startDeviceMotionUpdates(
            to: OperationQueue.current!, withHandler: {
                deviceMotion, error -> Void in
                var gravityValueY: Double = 0
                if error == nil {
                    let gravityData = self.motionManager.deviceMotion
                    let gravityValueYRad = (gravityData!.gravity.y)
                    gravityValueY = round(180 / (.pi) * (gravityValueYRad))
                    self.Angle.text = "\(String(round(gravityValueY)))"
                }
                else {
                    // handle the error
                    self.Angle.text = "0"
                    gravityValueY = 0
                }
                var elevationY = gravityValueY
                // limit movement of the bubble
                if elevationY > 45 {
                    elevationY = 45
                }
                else if elevationY < -45 {
                    elevationY = -45
                }
                let outofLevel: UIImage? = #imageLiteral(resourceName: "levelBubble-1")
                let alignLevel: UIImage? = #imageLiteral(resourceName: "levelBubbleGR-1")
                let highElevation: Double = 1.75
                let lowElevation: Double = -1.75
                if highElevation < elevationY {
                    self.bubble.image = outofLevel
                }
                else if elevationY < lowElevation {
                    self.bubble.image = outofLevel
                }
                else {
                    self.bubble.image = alignLevel
                }
                // Move the bubble on the level
                if let bubble = self.bubble {
                    UIView.animate(withDuration: 1.5, animations: { () -> Void in
                        bubble.transform = CGAffineTransform(translationX: 0, y: CGFloat(elevationY))
                    })
                }
        })
    }
}
I would like the level to restore almost immediately (within 2-3 seconds). I have no way to force calibration or an update. This is my first post....help appreciated.
Edit - I have tried setting up a separate application without any animation with the code that follows:
import UIKit
import CoreMotion

class ViewController: UIViewController {

    let motionManager = CMMotionManager()

    @IBOutlet weak var angle: UITextField!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    @IBAction func startLevel(_ sender: Any) {
        startLevel()
    }

    func startLevel() {
        // now get the device orientation - we want the gravity value
        if self.motionManager.isDeviceMotionAvailable {
            self.motionManager.deviceMotionUpdateInterval = 0.1
            self.motionManager.startDeviceMotionUpdates(
                to: OperationQueue.current!, withHandler: {
                    deviceMotion, error -> Void in
                    var gravityValueY: Double = 0
                    if error == nil {
                        let gravityData = self.motionManager.deviceMotion
                        let gravityValueYRad = (gravityData!.gravity.y)
                        gravityValueY = round(180 / (.pi) * (gravityValueYRad))
                    }
                    else {
                        // handle the error
                        gravityValueY = 0
                    }
                    self.angle.text = "(\(gravityValueY))"
            })
        }
    }
}
Still behaves exactly the same way....
OK....so I figured this out through trial and error. First, I built a stopGravity function as follows:
func stopGravity() {
    if self.motionManager.isDeviceMotionAvailable {
        self.motionManager.stopDeviceMotionUpdates()
    }
}
I found that the level was always set properly if I called that function, for example by moving to a new view, then restarting updates when returning to the original view. When locking the device or clicking the home button, I needed to call the same function, then restart the gravity features on reloading or returning to the view.
To do that I inserted the following in the viewDidLoad()...
NotificationCenter.default.addObserver(self, selector: #selector(stopGravity), name: NSNotification.Name.UIApplicationWillResignActive, object: nil)
NotificationCenter.default.addObserver(self, selector: #selector(stopGravity), name: NSNotification.Name.UIApplicationWillTerminate, object: nil)
These observers fire when the app resigns active or terminates, and they run the stopGravity function. That fixed the issue immediately.
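For completeness, the other half (which the snippet above does not show) is restarting updates when the app becomes active again. A minimal sketch using the same Swift 3-era notification names, where restartGravity is my own hypothetical helper:

NotificationCenter.default.addObserver(self, selector: #selector(restartGravity), name: NSNotification.Name.UIApplicationDidBecomeActive, object: nil)

func restartGravity() {
    stopGravity()    // tear down any stale motion session first
    getElevation()   // re-register the device motion handler from the code above
}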

How to use CoreMotion in WatchKit?

I was quite dubious about this question's title phrasing, but I think that's the whole point as it is.
I've been trying to simply read the CoreMotion data on the Watch, but as it turns out, I can't get startDeviceMotionUpdatesToQueue to work; my handler is never called.
I tried running it on a custom background queue (NSOperationQueue()), still no luck.
I'm debugging on a real Apple Watch, not the simulator.
In my WKInterfaceController:
let manager = CMMotionManager()

override func awakeWithContext(context: AnyObject?) {
    super.awakeWithContext(context)
    let communicator = SessionDelegate()
    manager.deviceMotionUpdateInterval = 1 / 60
    manager.startDeviceMotionUpdatesToQueue(NSOperationQueue.mainQueue()) {
        (motionerOp: CMDeviceMotion?, errorOp: NSError?) -> Void in
        print("got into handler")
        guard let motion = motionerOp else {
            if let error = errorOp {
                print(error.localizedDescription)
            }
            assertionFailure()
            return
        }
        print("passed guard")
        let roll = motion.attitude.roll
        let pitch = motion.attitude.pitch
        let yaw = motion.attitude.yaw
        let attitudeToSend = ["roll": roll, "pitch": pitch, "yaw": yaw]
        communicator.send(attitudeToSend)
    }
    print("normal stack")
}
the output is
normal stack
normal stack
(Yes, twice! I don't know why that is either, but that's not the point; it must be another thing I'm doing wrong.)
I'm posting this here because I have no clue where else to look; this is driving me crazy.
Device motion (startDeviceMotionUpdatesToQueue) is not available in watchOS 2 yet (deviceMotionAvailable returns false); the accelerometer can probably help you instead, via startAccelerometerUpdatesToQueue.
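A minimal sketch of that fallback, using the same Swift 2-era API names as the question (only raw x/y/z acceleration is available, not attitude):

let manager = CMMotionManager()

func startAccelerometer() {
    guard manager.accelerometerAvailable else { return }
    manager.accelerometerUpdateInterval = 1 / 60
    manager.startAccelerometerUpdatesToQueue(NSOperationQueue.mainQueue()) { data, error in
        guard let acceleration = data?.acceleration else {
            if let error = error { print(error.localizedDescription) }
            return
        }
        print("x: \(acceleration.x) y: \(acceleration.y) z: \(acceleration.z)")
    }
}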
