How can I use the gravity or motion sensors inside the iPhone to count how many times the device moves up and down, e.g. as if it were being lifted like a dumbbell a couple of times? I want to count the repetitions.
Forgive me if this is something very simple to achieve, but I'm pretty new to iOS development, hence the question.
You need to use the Core Motion framework to access the gyroscope and accelerometer data.
let manager = CMMotionManager()

if manager.gyroAvailable {
    // The gyroscope is available on this device.
    manager.gyroUpdateInterval = 0.1

    // Either start updates and poll manager.gyroData yourself ...
    manager.startGyroUpdates()

    // ... or have the data delivered to an operation queue:
    let queue = NSOperationQueue.mainQueue()
    manager.startGyroUpdatesToQueue(queue) { (data, error) in
        // handle gyro data here
    }
}
// accelerometer data
if manager.accelerometerAvailable {
    manager.accelerometerUpdateInterval = 0.01
    manager.startAccelerometerUpdatesToQueue(NSOperationQueue.mainQueue()) {
        [weak self] (data: CMAccelerometerData!, error: NSError!) in
        // handle accelerometer data here
    }
}
Combining those two you can get the detection going. Experiment with the results until you reliably recognize the motion you want.
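For the counting itself, here is a minimal sketch in current Swift (the 0.3 g thresholds, the 50 Hz rate, and the RepCounter name are all assumptions to tune on a real device): count one rep each time the vertical acceleration swings past a positive threshold and then past a negative one.

import CoreMotion

// Counts one "rep" per up-then-down swing of vertical acceleration.
final class RepCounter {
    private let manager = CMMotionManager()
    private var movingUp = false
    private(set) var reps = 0

    func start() {
        guard manager.isDeviceMotionAvailable else { return }
        manager.deviceMotionUpdateInterval = 1.0 / 50.0
        manager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self = self, let motion = motion else { return }
            // userAcceleration excludes gravity; z is roughly up/down
            // when the phone is held flat.
            let z = motion.userAcceleration.z
            if !self.movingUp && z > 0.3 {        // upward push started
                self.movingUp = true
            } else if self.movingUp && z < -0.3 { // downward swing completes the rep
                self.movingUp = false
                self.reps += 1
            }
        }
    }

    func stop() {
        manager.stopDeviceMotionUpdates()
    }
}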
I experimented a bit with the HTML5 APIs on the iPhone. I wanted to build something similar to yours: an app that would count the number of chin-ups or sit-ups automatically. I tried a combination of these:
navigator.geolocation
window.ondevicemotion
window.ondeviceorientation
The app would count up when moving in one direction, then wait until movement in the opposite direction ends, then start counting again. I get a count, but it lacks precision, and calibration is difficult. Code available if needed.
I have a view controller that uses CoreMotion to monitor the device's Attitude.
Here is the handler that is used in the call to startDeviceMotionUpdates():
/**
 * Receives device motion updates from Core Motion and uses the attitude of the device to update
 * the position of the attitude tracker inside the bubble level view.
 */
private func handleAttitude(deviceMotion: CMDeviceMotion?, error: Error?) {
    guard let attitude = deviceMotion?.attitude else {
        GLog.Error(message: "Could not get device attitude.")
        return
    }

    // Calculate the current attitude vector
    let roll = attitude.roll
    let pitch = attitude.pitch - optimalAngle
    let magnitude = sqrt(roll * roll + pitch * pitch)

    // Drawing can only happen on the main thread
    DispatchQueue.main.async { [weak self] in
        guard let weakSelf = self else {
            GLog.Log("could not get weak self")
            return
        }

        // Move the bubble in the attitude tracker to match the current attitude
        weakSelf.bubblePosX.constant = CGFloat(roll * weakSelf.attitudeScalar)
        weakSelf.bubblePosY.constant = CGFloat(pitch * weakSelf.attitudeScalar)

        // Set the border color based on the current attitude.
        if magnitude < weakSelf.yellowThreshold {
            weakSelf.attitudeView.layer.borderColor = weakSelf.green.cgColor
        } else if magnitude < weakSelf.redThreshold {
            weakSelf.attitudeView.layer.borderColor = weakSelf.yellow.cgColor
        } else {
            weakSelf.attitudeView.layer.borderColor = weakSelf.red.cgColor
        }

        // Do the actual drawing
        weakSelf.view.layoutIfNeeded()
    }
}
I added [weak self] to see if it would fix things, but it has not. This crash is not easy to reproduce.
When I am done with the VC that uses CoreMotion, I call stopDeviceMotionUpdates() in the VC's viewWillDisappear() method. This VC is the only class in the app that imports CoreMotion.
However, when I arrive in the next VC, occasionally I see EXC_BAD_ACCESS being thrown on a com.apple.CoreMotion.MotionThread.
Anybody know why CoreMotion would spawn a thread after the VC that used it has been dismissed? I've verified that the VC is no longer in memory when the crash happens. And yes, the two VCs I'm dealing with are presented modally.
I've examined the memory graph, and when the crash happens, several CoreMotion objects are still being reported. I don't know whether those objects should still be in memory after the instance of CMMotionManager has been deallocated. According to the memory graph, there is no instance of CMMotionManager in memory.
The VC that imports CoreMotion also imports ARKit; I'm not sure if some crazy interaction between CoreMotion and ARKit is the problem.
There does seem to be something going on between the main thread and the MotionThread (14), but I'm not sure what to make of the main thread stack trace.
Sometimes when the CoreMotion VC is dismissed, I've noticed a lag before the memory it uses is released. The release always happens eventually.
Thanks to anybody who can help!
We have an ARSCNView member. We were not calling sceneView.session.pause() when we dismissed the VC that used the sceneView member. One line of code; that was it.
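In context, a minimal sketch of the fix (assuming the member names from the question):

// Tear down both CoreMotion and the AR session when leaving the VC.
override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    motionManager.stopDeviceMotionUpdates()
    sceneView.session.pause()   // the missing line: without it, ARKit kept a motion thread running
}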
Are you passing the function handleAttitude directly to startDeviceMotionUpdates,
as in startDeviceMotionUpdates(to: someQueue, withHandler: handleAttitude)?
That will set up a retain cycle between your VC and CMMotionManager. Try

singletonMM.startDeviceMotionUpdates(to: someQueue) { [weak self] motion, error in
    self?.handleAttitude(deviceMotion: motion, error: error)
}

to prevent a strong reference to your VC.
I am working with the Vision and CoreML frameworks. I have a real-time video feed. For every frame, I first detect rectangles using VNDetectRectanglesRequest. For every rectangle I detect, I crop out that part of the image and perform a VNCoreMLRequest to classify it. After classifying the object, if it is the object type I am looking for, I draw the rectangle. It's like I built an object detector for when I don't have data to train an actual neural network for detection.
Generally, I detect around 1 to 3 rectangles. Not that many. So for every VNDetectRectanglesRequest, I have 1 to 3 additional VNCoreMLRequests per frame to perform. However, performing all these requests makes my video stream very laggy. It's quite noticeable when I point my camera at rectangular objects. I should add that this video footage comes from ARKit, so whatever background operations ARKit performs might make the lag worse.
I tried to optimize the code using DispatchQueue. Below is my pseudo-code. I'm happy with what the code is doing, but I need to get rid of the lag.
DispatchQueue.global(qos: .background).async {
    let request = VNDetectRectanglesRequest(completionHandler: { (request, error) in
        // ...
        for observation in request.results ?? [] {
            let mlRequest = VNCoreMLRequest(model: model) { (request, error) in
                // classify ... if it is the object I want, jump back to the main queue and draw
                DispatchQueue.main.async {
                    // draw rectangles
                }
            }
            // ... perform mlRequest on the cropped rectangle via a VNImageRequestHandler
        }
    })
}
I don't think there's anything wrong with your code; making all of those requests for every frame is simply too high a load for the device to handle.
Try reducing the frequency of the requests, and maybe add a few conditions to check before making a request in the first place. Simply checking whether you're already waiting for other requests to finish might noticeably reduce the load. You could also check whether the user is holding the device steady, the frame is in focus, the lighting is adequate, and so on.
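As a hedged sketch of that first suggestion (the FrameGate name, isProcessing flag, and processFrame entry point are assumptions, not Vision API), dropping frames while a request is still in flight looks roughly like this:

import Vision

final class FrameGate {
    private var isProcessing = false  // not thread-safe; serialize access in production

    // Call once per camera frame; frames arriving while busy are simply skipped.
    func processFrame(_ pixelBuffer: CVPixelBuffer) {
        guard !isProcessing else { return }   // previous frame's requests still running
        isProcessing = true

        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            defer { self?.isProcessing = false }
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            try? handler.perform([/* rectangle request, then per-crop CoreML requests */])
        }
    }
}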
The lag is potentially caused by the creation of new request objects (better to check it in Instruments with the Time Profiler, but object creation is often the first suspect).
In a similar situation (where I need to run a request on the entire image, then other requests on its parts) I use https://github.com/maxvol/RxVision, which does not re-create requests and hence does not introduce the lag.
let mlRequest: RxVNCoreMLRequest<CGImage> = VNCoreMLRequest.rx.request(model: model, imageCropAndScaleOption: .scaleFit)

mlRequest
    .observable
    .subscribe { [unowned self] (event) in
        switch event {
        case .next(let completion):
            let cgImage = completion.value // NB you can easily pass the value along to the completion handler
            if let result = completion.request.results?[0] as? VNClassificationObservation {
                os_log("results: %@", type: .debug, result.identifier)
            }
        default:
            break
        }
    }
    .disposed(by: disposeBag)

let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: requestOptions)

do {
    try imageRequestHandler.rx.perform([mlRequest], with: cgImage) // NB you can easily pass the value along to the completion handler
} catch {
    print(error)
}
I am porting a card-melding game from Android to iOS (see https://play.google.com/store/apps/details?id=com.pipperpublishing.fivekings). Each turn you choose from the draw pile or discard pile, meld your cards, and then discard. To keep it responsive, I have the computer player start pre-calculating its "best action" based on your discard, before it is apparently playing. In Android I do my own thread management; in iOS I am trying to use GCD.
What I am finding is that the computer pre-calculations running on a QOS_CLASS_USER_INITIATED queue sometimes block the UI, especially when testing on iOS 8 on an iPhone 6. I could barely imagine that happening even for USER_INTERACTIVE. On the other hand, I've read some confusing stuff about GCD reusing the UI thread for such queues, so maybe I am not understanding it.
Here's the relevant code:
EasyComputerPlayer definition of my own queue (things were even slower when I used a global queue):
class EasyComputerPlayer : Player {
    static let qos_attr = dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT, QOS_CLASS_USER_INITIATED, 0)
    static let concurrentDiscardTestQueue = dispatch_queue_create("com.pipperpublishing.FiveKings", qos_attr)
    ...
Here is the pre-calculation, which is called immediately after the human player discards. testHand.main() does the actual calculations for the possible choices of the computer's discard if the computer picked up the card that the human just discarded.
override func findBestHandStart(isFinalTurn : Bool, addedCard : Card) {
    let cardsWithAdded : CardList = CardList(cardList: hand!)
    cardsWithAdded.add(addedCard)

    // Create the different test hands
    testHandSets[method.rawValue].removeAll(keepCapacity: true)
    for disCard in cardsWithAdded.cards {
        let cards : CardList = CardList(cardList: cardsWithAdded)
        cards.remove(disCard)
        testHandSets[method.rawValue].append(ThreadedHand(parent: self, roundOf: self.hand!.getRoundOf(), cards: cards, discard: disCard, method: self.method, isFinalTurn: isFinalTurn)) // creates new hand with replaced cards
    }

    // ... and then dispatch them
    dispatchGroups[method.rawValue] = dispatch_group_create()
    for (iTask, testHand) in testHandSets[method.rawValue].enumerate() {
        let card = testHand.hand.discard
        dispatch_group_enter(dispatchGroups[method.rawValue])
        dispatch_async(EasyComputerPlayer.concurrentDiscardTestQueue) {
            testHand.main() // calls meldAndEvaluate
            dispatch_group_leave(self.dispatchGroups[self.method.rawValue])
        }
    }
}
In the log, I can see the tasks being dispatched, and then the UI sometimes hangs until they all finish (which in later rounds can take 5 seconds).
I replaced QOS_CLASS_USER_INITIATED with QOS_CLASS_UTILITY, which seems to have fixed the problem for now, but of course I am worried that I have just reduced the frequency :)
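For reference, that change is just the QOS class in the queue attribute (same Swift 2 era GCD calls as above):

// Lower the queue's QOS so the pre-calculation work cannot crowd out UI work.
static let qos_attr = dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT, QOS_CLASS_UTILITY, 0)
static let concurrentDiscardTestQueue = dispatch_queue_create("com.pipperpublishing.FiveKings", qos_attr)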
I'm writing an app that has to do some calculations every time the phone moves. I've read every question here, but couldn't get the accelerometer to gather data in the background (after the user navigates away from the app). I've set the location-updates flag in Info.plist. This is the code I'm using:
let motionManager = CMMotionManager()

func startMotionUpdates() {
    var timestamps = [NSDate]()
    if motionManager.accelerometerAvailable {
        motionManager.accelerometerUpdateInterval = 1
        motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue(), withHandler: { (data: CMAccelerometerData?, error: NSError?) -> Void in
            dispatch_async(dispatch_get_main_queue(), { () -> Void in
                let time = NSDate(timeIntervalSince1970: data!.timestamp)
                timestamps.append(time)
                print(timestamps)
            })
        })
    }
}
I tried every combination out there: I've tried using CMDeviceMotion, and I've tried using CLLocationManager.startUpdatingLocation() to simulate background activity, but nothing works. Does anybody have any ideas?
You cannot be woken up every time the accelerometer changes. You can be woken up whenever the location of the device changes significantly (at least several meters) using CLLocationManager, and that's what "location" means in the background modes.
To track finer-grained motion information, you need to ask the system to start recording the data using CMSensorRecorder, and then later you can ask for the data and compute what you want from it. But you won't be allowed to run in the background continuously watching every jiggle of the device. That would eat too much battery.
See also CMPedometer which addresses certain use cases more directly.
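A minimal CMSensorRecorder sketch of that record-then-read pattern (the 20-minute window is an arbitrary assumption; the Sequence extension is a common shim for iterating CMSensorDataList from Swift):

import CoreMotion

// Lets Swift's for-in iterate the NSFastEnumeration-based CMSensorDataList.
extension CMSensorDataList: Sequence {
    public func makeIterator() -> NSFastEnumerationIterator {
        return NSFastEnumerationIterator(self)
    }
}

let recorder = CMSensorRecorder()

if CMSensorRecorder.isAccelerometerRecordingAvailable() {
    // Ask the system to keep recording for 20 minutes, even while the app is suspended.
    recorder.recordAccelerometer(forDuration: 20 * 60)
}

// Later (for example on the next launch), read back what was recorded.
let from = Date(timeIntervalSinceNow: -20 * 60)
if let list = recorder.accelerometerData(from: from, to: Date()) {
    for sample in list {
        if let sample = sample as? CMRecordedAccelerometerData {
            print(sample.startDate, sample.acceleration)
        }
    }
}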
I'm trying to convert an old game application built in Objective-C to the new Swift code. I'm having some issues understanding Swift closures and how to use them, for example in the "startAccelerometerUpdatesToQueue" method.
I have initialized the motion manager this way:
motionManager!.accelerometerUpdateInterval = (1/40)
Then, in the viewDidLoad of my view controller:
var queue:NSOperationQueue
motionManager?.startAccelerometerUpdatesToQueue(queue, withHandler: {(accelerometerData : CMAccelerometerData, error : NSError) in
})
the "startAccelerometerUpdatesToQueue" is giving me an error and i'm pretty sure i didn't understand the correct closure syntax.
Any ideas?
Actually, you just got the signature wrong: the arguments to your closure need to be optionals (since they are passed from Objective-C, they could be nil). Because of that, the arguments you provided don't match an existing method signature, and that is why you get an error.
Take a look at the iOS 8 API docs; they also provide Swift signatures:
func startAccelerometerUpdatesToQueue(_ queue: NSOperationQueue!,
withHandler handler: CMAccelerometerHandler!)
and CMAccelerometerHandler is defined as
typealias CMAccelerometerHandler = (CMAccelerometerData!, NSError!) -> Void
Thus, your call should be:
motionManager?.startAccelerometerUpdatesToQueue(queue, withHandler: {(accelerometerData : CMAccelerometerData!, error : NSError!) in
})
And as with any function/method that takes a closure as its last argument, you can leave it out of the argument list and write it after the call (trailing closure syntax; this example also leaves out the types, since they can be inferred, but that is optional):
motionManager?.startAccelerometerUpdatesToQueue(queue) { accelerometerData, error in
}
A CMMotionManager object is the gateway to the motion services provided by iOS. These services provide an app with accelerometer data, rotation-rate data, magnetometer data, and other device-motion data such as attitude. These types of data originate with a device's accelerometers and (on some models) its magnetometer and gyroscope.
Handling Motion Updates at Specified Intervals
To receive motion data at specific intervals, the app calls a "start" method that takes an operation queue (an instance of NSOperationQueue) and a block handler of a specific type for processing those updates. The motion data is passed into the block handler. The frequency of updates is determined by the value of an "interval" property. A minimal sketch follows the list below.
Accelerometer.
Set the accelerometerUpdateInterval property to specify an update interval. Call the startAccelerometerUpdatesToQueue:withHandler: method, passing in a block of type CMAccelerometerHandler. Accelerometer data is passed into the block as CMAccelerometerData objects.
Gyroscope.
Set the gyroUpdateInterval property to specify an update interval. Call the startGyroUpdatesToQueue:withHandler: method, passing in a block of type CMGyroHandler. Rotation-rate data is passed into the block as CMGyroData objects.
Magnetometer.
Set the magnetometerUpdateInterval property to specify an update interval. Call the startMagnetometerUpdatesToQueue:withHandler: method, passing in a block of type CMMagnetometerHandler. Magnetic-field data is passed into the block as CMMagnetometerData objects.
Device motion.
Set the deviceMotionUpdateInterval property to specify an update interval. Call the startDeviceMotionUpdatesUsingReferenceFrame:toQueue:withHandler: or startDeviceMotionUpdatesToQueue:withHandler: method, passing in a block of type CMDeviceMotionHandler. Rotation data is passed into the block as CMDeviceMotion objects.
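Here is a minimal sketch of that queue-based pattern for the accelerometer (current Swift names; the 60 Hz interval is an arbitrary choice):

import CoreMotion

let manager = CMMotionManager()

if manager.isAccelerometerAvailable {
    manager.accelerometerUpdateInterval = 1.0 / 60.0   // seconds between updates
    manager.startAccelerometerUpdates(to: .main) { data, error in
        guard let data = data else { return }
        print(data.acceleration.x, data.acceleration.y, data.acceleration.z)
    }
}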
accelerometerUpdateInterval
The interval, in seconds, for providing accelerometer updates to the block handler.
Declaration
Swift
var accelerometerUpdateInterval: NSTimeInterval
Discussion
The system supplies accelerometer updates to the block handler specified in startAccelerometerUpdatesToQueue:withHandler: at regular intervals determined by the value of this property.
The interval units are in seconds. The value of this property is capped to minimum and maximum values; the maximum value is determined by the maximum frequency supported by the hardware. If your app is sensitive to the intervals of acceleration data, it should always check the timestamps of the delivered CMAccelerometerData instances to determine the true update interval.
Availability
Available in iOS 4.0 and later.
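Per the discussion above, a short sketch of checking the delivered timestamps to measure the true update interval (reusing the manager from the previous sketch; the variable name is an assumption):

// Compare consecutive CMAccelerometerData timestamps to learn the real rate.
var lastTimestamp: TimeInterval?

manager.startAccelerometerUpdates(to: .main) { data, _ in
    guard let data = data else { return }
    if let last = lastTimestamp {
        print("actual interval:", data.timestamp - last)   // may differ from the requested interval
    }
    lastTimestamp = data.timestamp
}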
import UIKit
import CoreMotion

class ViewController: UIViewController {

    let motionManager = CMMotionManager()
    var timer: Timer!

    override func viewDidLoad() {
        super.viewDidLoad()

        motionManager.startAccelerometerUpdates()
        motionManager.startGyroUpdates()
        motionManager.startMagnetometerUpdates()
        motionManager.startDeviceMotionUpdates()

        timer = Timer.scheduledTimer(timeInterval: 3.0, target: self, selector: #selector(ViewController.update), userInfo: nil, repeats: true)
    }

    @objc func update() {
        if let accelerometerData = motionManager.accelerometerData {
            print(accelerometerData)
        }
        if let gyroData = motionManager.gyroData {
            print(gyroData)
        }
        if let magnetometerData = motionManager.magnetometerData {
            print(magnetometerData)
        }
        if let deviceMotion = motionManager.deviceMotion {
            print(deviceMotion)
        }
    }
}