I've been researching the new M7 chip's CMMotionActivityManager, for determining whether the user of the device is walking, running, in a car, etc. (see the Apple documentation). This seemed like a great step forward over previously trying to determine this from LocationManager and accelerometer data alone.
I notice, however, that CMMotionActivityManager does not have a cycling activity, which is disappointing and almost a deal-breaker for adopting it as a complete activity manager. Has anyone else found a convenient way to use CMMotionActivityManager for cycling as well, without having to reincorporate CLLocationManager + accelerometer just to test for cycling too?
Note, this also does not include general transport options for things like a Train. For instance, I commute an hour a day on the train. Automotive could be made more generic at least, similar to how Moves uses Transport.
CMMotionActivity has these defined motion types only:
stationary
walking
running
automotive
unknown
Useful notes from Apple's code that do not necessarily solve the issue, but are helpful:
CMMotionActivity
An estimate of the user's activity based on the motion of the device.
The activity is exposed as a set of properties, the properties are not
mutually exclusive.
For example, if you're in a car stopped at a stop sign the state might
look like:
stationary = YES, walking = NO, running = NO, automotive = YES
Or a moving vehicle, stationary = NO, walking = NO, running = NO,
automotive = YES
Or the device could be in motion but not walking or in a vehicle.
stationary = NO, walking = NO, running = NO, automotive = NO. Note in this case all of the properties are NO.
[Direct Source: Apple iOS Framework, CoreMotion/CMMotionActivity.h #interface CMMotionActivity, inline code comments]
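For reference, here is a minimal Swift sketch of how those properties are typically read with CMMotionActivityManager, assuming an M7-equipped device; the print statements are just placeholders:

import CoreMotion

let activityManager = CMMotionActivityManager()

if CMMotionActivityManager.isActivityAvailable() {
    // Deliver CMMotionActivity updates on the main queue; each update carries
    // the non-mutually-exclusive booleans described in the header comments above.
    activityManager.startActivityUpdates(to: .main) { activity in
        guard let activity = activity else { return }
        if activity.automotive {
            print("automotive (confidence: \(activity.confidence.rawValue))")
        } else if activity.running {
            print("running")
        } else if activity.walking {
            print("walking")
        } else if activity.stationary {
            print("stationary")
        } else {
            print("unknown / in motion but unclassified")
        }
    }
}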
First of all, is this a question, or more of a set of informative details on the M7?
Has anyone else found a convenient way to use CMMotionActivityManager
with cycling also without having to reincorporate LocationManager +
accelerometer just to try to test for cycling too?
There is a lot of confusion if you want to check whether the activity type is cycling, because the classification depends only on the accelerometer. An accelerometer contains microscopic crystal structures that get stressed by accelerative forces, which causes a voltage to be generated, and the result is parsed from that voltage. So, as far as I know, it essentially classifies your motion by its dynamics and tells you whether you are walking, running, or in a vehicle. Cycling can be very fast, very slow, or somewhere in between, so it may sometimes be classified as walking, running, or even automotive; the M7 cannot tell whether the motion is automotive, cycling, or running, because there is not much variance in the motion signature while you are cycling.
Even for running and walking it sometimes gives wrong results in some cases, so there is a chance that your app will report wrong information too.
One more thing you asked:
Note, this also does not include general transport options for things
like a Train. For instance, I commute an hour a day on the train.
Automotive could be made more generic at least, similar to how Moves
uses Transport.
So Apple is also working on other mapping features. Apple is said to be planning notable updates to its Maps app in iOS 8, and the company is currently working on implementing both public transit directions and indoor mapping features (which Google already has on iOS).
http://www.macrumors.com/2013/09/12/apple-working-to-leverage-new-m7-motion-sensing-chip-for-mapping-improvements/ (Useful Link)
So, not sure if you still need an answer to this, but here is the latest from the iOS 8 SDK:
@property(readonly, nonatomic) BOOL cycling NS_AVAILABLE(NA, 8_0);
In session 612 at WWDC 2014, the two presenting Apple engineers provided some information. In the slides they stated:
Performance is very sensitive to location
Works best if device is worn on upper arm
Longest latency
Best for retrospective use cases
In the video they explain on the audio track (starting at about 11:00) that
Cycling is new, something we introduced in iOS 8.
Cycling is very challenging, and again you need the dynamics and so
it's going to be very sensitive to location.
If it was mounted on the upper arm the latency is going to be fairly
reasonable.
And if it's anywhere else, it's going to take a lot longer. So definitely I would not suggest using cycling activity classification as a hint for the context here and now. It's really something that you'll want to use in a retrospective manner for a journaling app, for example.
I made a simple test setup on iOS 8 and 9 with an iPhone 5s and 6, and cycling was not detected - not a single time in over 1.5 h of cycling. Whether the new iPhone 6s makes good on this major deficit in motion activity detection is unclear - Phil Schiller announced it in September 2015.
tl;dr
Currently, cycling detection in Core Motion does not work the way it does for stationary, walking, running, and automotive. Cycling will often not be detected, and the classification can be used retrospectively only.
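If you do want to consume it retrospectively, here is a minimal sketch of querying the activity history and pulling out the cycling entries, assuming iOS 8+ (where the cycling flag exists); the 24-hour window is just an example:

import CoreMotion

let activityManager = CMMotionActivityManager()
let end = Date()
let start = end.addingTimeInterval(-24 * 60 * 60)   // last 24 hours, as an example

// Query the activity history recorded by the motion coprocessor and
// keep only the entries that were classified as cycling.
activityManager.queryActivityStarting(from: start, to: end, to: .main) { activities, error in
    guard let activities = activities, error == nil else { return }
    let cyclingEntries = activities.filter { $0.cycling }
    print("Found \(cyclingEntries.count) cycling entries in the last 24 hours")
}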
What is the best way, if any, to use Apple's new ARKit with multiple users/devices?
It seems that each device gets its own scene understanding individually. My best guess so far is to use raw feature point positions and try to match them across devices to glue together the different points of view, since ARKit doesn't offer any absolute frame of reference.
===Edit1, Things I've tried===
1) Feature points
I've played around with the exposed raw feature points and I'm now convinced that in their current state they are a dead end:
they are not raw feature points, they only expose positions but none of the attributes typically found in tracked feature points
their instantiation doesn't carry over from frame to frame, nor are the positions exactly the same
it often happens that reported feature points change by a lot when the camera input is almost not changing, with either a lot appearing or disappearing.
So overall I think it's unreasonable to try to use them in some meaningful way, not being able to make any kind of good point matching within one device, let alone several.
An alternative would be to implement my own feature point detection and matching, but that would be more replacing ARKit than leveraging it.
2) QR code
As @Rickster suggested, I've also tried identifying an easily identifiable object like a QR code and getting the relative referential change from that fixed point (see this question). It's a bit difficult and required me to use some OpenCV to estimate the camera pose. But more importantly, it is very limiting.
As some newer answers have added, multiuser AR is a headline feature of ARKit 2 (aka ARKit on iOS 12). The WWDC18 talk on ARKit 2 has a nice overview, and Apple has two developer sample code projects to help you get started: a basic example that just gets 2+ devices into a shared experience, and SwiftShot, a real multiplayer game built for AR.
The major points:
ARWorldMap wraps up everything ARKit knows about the local environment into a serializable object, so you can save it for later or send it to another device. In the latter case, "relocalizing" to a world map saved by another device in the same local environment gives both devices the same frame of reference (world coordinate system).
Use the networking technology of your choice to send the ARWorldMap between devices: AirDrop, cloud shares, carrier pigeon, etc all work, but Apple's Multipeer Connectivity framework is one good, easy, and secure option, so it's what Apple uses in their example projects.
All of this gives you only the basis for creating a shared experience — multiple copies of your app on multiple devices all using a world coordinate system that lines up with the same real-world environment. That's all you need to get multiple users experiencing the same static AR content, but if you want them to interact in AR, you'll need to use your favorite networking technology some more.
Apple's basic multiuser AR demo shows encoding an ARAnchor and sending it to peers, so that one user can tap to place a 3D model in the world and all others can see it. The SwiftShot game example builds a whole networking protocol so that all users get the same gameplay actions (like firing slingshots at each other) and synchronized physics results (like blocks falling down after being struck). Both use Multipeer Connectivity.
(BTW, the second and third points above are where the "2 to 6" figure from @andy's answer comes from — there's no limit on the ARKit side, because ARKit has no idea how many people may have received the world map you saved. However, Multipeer Connectivity has an 8-peer limit. And whatever game / app / experience you build on top of this may have latency / performance scaling issues as you add more peers, but that depends on your technology and design.)
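Putting the first two points together, here is a minimal sketch of the sending side, assuming iOS 12+ and an already-connected MCSession; the names shareWorldMap, arSession, and mcSession are just illustrative:

import ARKit
import MultipeerConnectivity

// Capture the current world map and send it to all connected peers.
func shareWorldMap(from arSession: ARSession, over mcSession: MCSession) {
    arSession.getCurrentWorldMap { worldMap, error in
        guard let worldMap = worldMap else {
            print("World map not available yet: \(error?.localizedDescription ?? "unknown")")
            return
        }
        do {
            // ARWorldMap supports NSSecureCoding, so it can be archived to Data...
            let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true)
            // ...and shipped over any transport; Multipeer Connectivity here.
            try mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
        } catch {
            print("Failed to share world map: \(error)")
        }
    }
}

The receiving device unarchives the data back into an ARWorldMap (NSKeyedUnarchiver.unarchivedObject(ofClass:from:)) and sets it as initialWorldMap on its configuration, as in the snippet further down this page.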
Original answer below for historical interest...
This seems to be an area of active research in the iOS developer community — I met several teams trying to figure it out at WWDC last week, and nobody had even begun to crack it yet. So I'm not sure there's a "best way" yet, if even a feasible way at all.
Feature points are positioned relative to the session, and aren't individually identified, so I'd imagine correlating them between multiple users would be tricky.
The session alignment mode gravityAndHeading might prove helpful: that fixes all the directions to a (presumed/estimated to be) absolute reference frame, but positions are still relative to where the device was when the session started. If you could find a way to relate that position to something absolute — a lat/long, or an iBeacon maybe — and do so reliably, with enough precision... Well, then you'd not only have a reference frame that could be shared by multiple users, you'd also have the main ingredients for location based AR. (You know, like a floating virtual arrow that says turn right there to get to Gate A113 at the airport, or whatever.)
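For what it's worth, opting into that alignment mode is a one-liner; a minimal sketch follows, where sceneView is assumed to be an ARSCNView, and the lat/long or beacon mapping is the part you would still have to supply yourself:

import ARKit

// Align the session's axes with gravity and compass heading so every device
// agrees on which way is north; positions are still relative to each device's
// own session origin, which is what would still need to be reconciled.
let configuration = ARWorldTrackingConfiguration()
configuration.worldAlignment = .gravityAndHeading
sceneView.session.run(configuration)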
Another avenue I've heard discussed is image analysis. If you could place some real markers — easily machine recognizable things like QR codes — in view of multiple users, you could maybe use some form of object recognition or tracking (a ML model, perhaps?) to precisely identify the markers' positions and orientations relative to each user, and work back from there to calculate a shared frame of reference. Dunno how feasible that might be. (But if you go that route, or similar, note that ARKit exposes a pixel buffer for each captured camera frame.)
Good luck!
Now, after the release of ARKit 2.0 at WWDC 2018, it's possible to make games for 2...6 users.
For this, you need to use ARWorldMap class. By saving world maps and using them to start new sessions, your iOS application can now add new Augmented Reality capabilities: multiuser and persistent AR experiences.
AR Multiuser experiences. Now you may create a shared frame of reference by sending archived ARWorldMap objects to a nearby iPhone or iPad. With several devices simultaneously tracking the same world map, you may build an experience where all users (up to 6) can share and see the same virtual 3D content (use Pixar's USDZ file format for 3D in Xcode 10+ and iOS 12+).
// On the sending device: capture the current world map so it can be archived
// and sent to another device.
session.getCurrentWorldMap { worldMap, error in
    guard let worldMap = worldMap else {
        showAlert(error)
        return
    }
    // archive worldMap (it supports NSSecureCoding) and send it to the peer
}

// On the receiving device: start a session seeded with the world map it received.
let configuration = ARWorldTrackingConfiguration()
configuration.initialWorldMap = worldMap   // the ARWorldMap decoded from the peer's data
session.run(configuration)
AR Persistent experiences. If you save a world map and then your iOS application becomes inactive, you can easily restore it at the next launch of the app, in the same physical environment. You can use the ARAnchors from the resumed world map to place the same virtual 3D content (in USDZ or DAE format) at the same positions as in the previously saved session.
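A minimal sketch of that persistence flow, assuming iOS 12+; the file name and location are just an example:

import ARKit

// An example location in the app's Documents directory for the saved map.
let mapURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("worldMap.arexperience")

// Save: archive the current ARWorldMap and write it to disk.
func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let worldMap = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                           requiringSecureCoding: true)
        else { return }
        try? data.write(to: mapURL, options: .atomic)
    }
}

// Restore: on the next launch, unarchive the map and relocalize; anchors
// (and the content attached to them) come back at their previous positions.
func restoreWorldMap(into session: ARSession) {
    guard let data = try? Data(contentsOf: mapURL),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                 from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}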
These are not bulletproof answers, more like workarounds, but maybe you'll find them helpful.
All assume the players are in the same place.
DIY: ARKit sets up its world coordinate system quickly after the AR session has been started. So if you can have all players, one after another, put and align their devices at the same physical location and let them start the session there, there you go. Imagine the inside edges of an L-shaped square ruler fixed to whatever is available. Or any flat surface with a hole: hold the phone against the surface looking through the hole with the camera, and (re)init the session.
Medium: Save the players from aligning their phones manually; instead, detect a real-world marker with image analysis, just like @Rickster described.
Involved: Train a Core ML model to recognize iPhones and iPads and their camera location, like it's done with human faces and eyes. Aggregate the data on a server, then turn off ML to save power. Note: make sure your model is cover-proof. :)
I'm in the process of updating my game controller framework (https://github.com/robreuss/VirtualGameController) to support a shared controller capability, so all devices would receive input from the control elements on the screens of all devices. The purpose of this enhancement is to support ARKit-based multiplayer functionality. I'm assuming developers will use the first approach mentioned by diviaki, where the general positioning of the virtual space is defined by starting the session on each device from a common point in physical space, a shared reference, and specifically I have in mind being on opposite sides of a table. All the devices would launch the game at the same time and utilize a common coordinate space relative to physical size, and using the inputs from all the controllers, the game would remain theoretically in sync on all devices. Still testing. The obvious potential problem is latency or disruption in the network and the sync falls apart, and it would be difficult to recover except by restarting the game. The approach and framework may work for some types of games fairly well - for example, straightforward arcade-style games, but certainly not for many others - for example, any game with significant randomness that cannot be coordinated across devices.
This is a hugely difficult problem - the most prominent startup that is working on it is 6D.ai.
"Multiplayer AR" is the same problem as persistent SLAM, where you need to position yourself in a map that you may not have built yourself. It is the problem that most self driving car companies are actively working on.
This is a general question seeking advice on the pattern required to calculate a user's velocity / pace / speed, when running or swimming.
Specifically, I want to be able to calculate this from watch OS, disconnected from the companion phone.
With the GPS capabilities of the Apple Watch Series 2 and later (watchOS 3+), would the best approach be to:
Start Location Manager
Calculate distance and time between location points...
Calculate average speed?
Or are there better alternatives?
There is a good article here - https://www.bignerdranch.com/blog/watchkit-2-hardware-bits-the-accelerometer/ - that recommends using Core Motion for device speed. However, in my view this would represent the 'device speed' rather than the user's speed over distance.
Any advice or experiences would be much appreciated.
Thanks.
The article you linked to is about watchOS 2, not the Apple Watch 2. The motion tracking is pretty good, but to get accurate device speed you will still need to use GPS.
If you don't need to do any other location-related calculations and don't need real-time data (EDIT: you can get near-real-time data with an HKAnchoredObjectQuery, which is sufficient for most situations), then you don't need to start the location manager, just an HKWorkoutSession. This will default to using GPS or motion data (whichever is more accurate/available at the time) and manage everything for you. When the workout is over, you can query for the distance samples and calculate pace from that.
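A minimal sketch of that workout-session approach, assuming watchOS 3-era APIs; HealthKit authorization and error handling are omitted:

import HealthKit

let healthStore = HKHealthStore()

// Configure a running workout; the system then records distance samples
// for you using GPS and/or motion data, whichever is available.
let workoutConfig = HKWorkoutConfiguration()
workoutConfig.activityType = .running
workoutConfig.locationType = .outdoor

// Force-try only for brevity in this sketch.
let workoutSession = try! HKWorkoutSession(configuration: workoutConfig)
healthStore.start(workoutSession)

// Near-real-time distance via an anchored object query: new distance
// samples arrive in updateHandler as they are recorded.
let distanceType = HKObjectType.quantityType(forIdentifier: .distanceWalkingRunning)!
let query = HKAnchoredObjectQuery(type: distanceType, predicate: nil, anchor: nil,
                                  limit: HKObjectQueryNoLimit) { _, _, _, _, _ in
    // initial results, if any
}
query.updateHandler = { _, samples, _, _, _ in
    guard let samples = samples as? [HKQuantitySample], !samples.isEmpty else { return }
    let meters = samples.reduce(0.0) { $0 + $1.quantity.doubleValue(for: .meter()) }
    let seconds = samples.last!.endDate.timeIntervalSince(samples.first!.startDate)
    // Crude speed estimate over this batch's time span.
    if seconds > 0 { print("Recent speed: \(meters / seconds) m/s") }
}
healthStore.execute(query)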
If you need live data, then the steps you outlined are correct; however, you should check that the user is outdoors first. If the user is indoors or has a weak GPS signal, switch to using motion data (and be sure to set HKMetadataKeyIndoorWorkout appropriately if using HealthKit).
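If you do go the location-manager route from the question, here is a minimal sketch of accumulating distance and elapsed time to get an average speed; PaceTracker is just an illustrative name, and location authorization is assumed to be handled elsewhere:

import CoreLocation

final class PaceTracker: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private var lastLocation: CLLocation?
    private var totalDistance: CLLocationDistance = 0   // meters
    private var startDate: Date?

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBestForNavigation
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        // Accumulate the distance between consecutive fixes, skipping invalid ones.
        for location in locations where location.horizontalAccuracy >= 0 {
            if let last = lastLocation {
                totalDistance += location.distance(from: last)
            } else {
                startDate = location.timestamp
            }
            lastLocation = location
        }
        if let start = startDate, let current = lastLocation {
            let elapsed = current.timestamp.timeIntervalSince(start)
            if elapsed > 0 {
                print("Average speed: \(totalDistance / elapsed) m/s")
            }
        }
    }
}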
The question is already quite direct and short:
Can the Hololens be used as a virtual reality glasses?
Sorry beforehand if the question is obvious for those who have tried them out, but I have not yet had the chance.
From what I read I know that they have been designed to be a very good augmented reality tool. This approach is clear for everybody.
I'm just thinking that there may be applications where you simply don't want the user to have any visual contact with reality for some moments, or others where you want the user to be fully immersed in the experience and forget about where they are; in those cases a complete environment should be shown, as we are used to with virtual reality glasses.
How are the Hololens ready for this? I think there are two key sub-questions that may be answered for this:
How solid are the holograms?
Does the screen where holograms can be placed cover the complete view?
As others already pointed out, this is a solid No due to the limited viewing window.
Apart from that, the current hardware of the HoloLens is not capable of providing a fully immersive experience. You can check the specifications here.
As of now, when the environment is populated with more than a few holograms (depends on the triangle count of each hologram) the device's fps count drops and a certain lag is visible. I'm sure more processing power would be added to the device in future versions, but as of right now, with the current power of the device, I seriously doubt its capabilities to populate an entire environment to give a fully immersive experience.
1) The hologram quality is defined by the following specs:
- Holographic Resolution: 2.3M total light points
- Holographic Density: 2.5k light points per radian
It is worth saying that Microsoft's holograms disappear below a certain distance, indicated here as 0.85 m.
Personal note: in the past I worked also on Google Project Tango and I can tell you from these personal experiences that the stability of Microsoft holograms is absolutely superior. Also, the holograms are kept once the device is turned off, so if you place something and you reboot the device you will find them again where you left them, without the need to restart from scratch
2) Absolutely not: "[The field of view] amounts to the size of a monitor in front of you – equivalent to 15 inches", as stated here, and it will not be addressed, as also reported here. So if a hologram's size exceeds this space it will be shown only partially [i.e. cut]. Moreover, the surrounding environment is always visible, because the device's purpose is to interact with the real environment, adding another layer on top.
The HoloLens is not intended to be a VR rig. There is no complete immersion that I am aware of; yes, you can have solid holograms, but you can always see the real world.
VR is about substituting the real world, which is why VR goggles are always opaque. The HoloLens is a see-through device, so you can see both the holograms and the real world. It was created for augmented reality, where you augment the real world. That is why you can't use the HoloLens for VR purposes.
Actually, my initial question is: can the HoloLens be used for VR applications AS WELL?
No is the answer, because of its small window (equivalent to a 15'' screen) where the holograms can be placed.
I am sure this will evolve sooner or later in order to improve the AR experience. As long as the screen does not cover the complete view, VR won't be possible with the HoloLens.
The small FOV is a problem for total immersion, but there is an app for HoloLens called HoloTour, which is VR (with a few AR scenes in the beginning). In the game, the user can travel to Rome and Peru. While you can still see through the holograms, in my personal experience, people playing it will really get into it and will forget about the limitations. After a scene or two, they feel immersed. So while it certainly isn't as good at VR as a machine designed for that, it is capable, and it is still typically enjoyable to the users. There are quite a few measures to prevent nausea in the users (I can use mine for hours at a time with no problem) so I would actually prefer it to poorer VR implementations, such as a GearVR (which made me sick after 10 minutes of use!). Surely a larger FOV is in the works, so this will be less of a limitation in future releases.
I would like to develop a mobile app for iPhones that calculates the time needed to reach a given velocity. For example: I'm in my car, I open the app, choose 100 km/h, and when I accelerate the app should start counting time and stop counting at the moment I reach 100 km/h. It should be very accurate.
I have heard about two solutions. The first is to use the accelerometer/gyroscope, but some people told me it's a bad idea because I won't be able to calculate the time accurately over longer distances. The second option is to use GPS, but on the other hand it may not be as accurate as I want it to be.
So I need suggestions on which option is better and why.
My targets are iPhones 4s and newer.
If you want to be more precise than GPS you will need some sort of external sensor. Most similar apps and concepts use a receiver that plugs into the car and that the iPhone can connect to. This has the benefit of making all of the sensors in the car available to you. This is an example: https://www.automatic.com/how-automatic-works/
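That said, if you want to try the GPS-only route first, here is a minimal sketch of timing 0-100 km/h from CLLocation's speed readings; ZeroToHundredTimer is just an illustrative name, location authorization is assumed to be handled elsewhere, and the roughly 1 Hz GPS update rate is exactly the accuracy limitation discussed above:

import CoreLocation

final class ZeroToHundredTimer: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let targetSpeed: CLLocationSpeed = 100.0 / 3.6   // 100 km/h in m/s
    private var startDate: Date?

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBestForNavigation
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        for location in locations where location.speed >= 0 {   // negative speed means invalid
            if startDate == nil, location.speed > 0.5 {
                // The car has just started moving: start the clock.
                startDate = location.timestamp
            }
            if let start = startDate, location.speed >= targetSpeed {
                let elapsed = location.timestamp.timeIntervalSince(start)
                print("0-100 km/h in \(elapsed) s")
                manager.stopUpdatingLocation()
                startDate = nil
            }
        }
    }
}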
Let's say we have a starting point, (x, y). By using iOS navigation, can we tell how far we have moved from that starting point to another location (a, b)? So if I walked 20 feet in a certain direction after starting, would it be able to tell me how far I've moved and in which direction?
If this technology exists can I get info on where to start learning about it?
This also needs to be done without GPS, sorry.
As rmaddy mentioned, with the Core Location classes and GPS incorporated into a project, you can obtain the distance traveled by the person who is walking. I found a great step-by-step tutorial for you, which has a sample project you can build and take a look at. Here is the link: http://www.perspecdev.com/blog/2012/02/22/using-corelocation-on-ios-to-track-a-users-distance-and-speed/
Also, here is the link to the CLLocation class reference for further study: https://developer.apple.com/library/iOS/documentation/CoreLocation/Reference/CLLocation_Class/CLLocation/CLLocation.html
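As a small illustration of that approach, here is a sketch of the straight-line distance from a fixed starting point using CLLocation; the coordinates are just example values:

import CoreLocation

// Straight-line distance from a fixed starting point to the current location,
// following the Core Location approach from the tutorial above.
let startingPoint = CLLocation(latitude: 37.3318, longitude: -122.0312)   // example (x, y)

func report(currentLocation: CLLocation) {
    let meters = currentLocation.distance(from: startingPoint)
    let feet = meters * 3.28084
    print("Moved \(feet) feet from the starting point")
}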
No, you can't determine location changes accurately without GPS. Even with GPS it is difficult to accurately measure a position change as small as 20 feet (GPS's 5 m accuracy means roughly a +/- 15-foot error).
In theory you might be able to write software to create an inertial navigation system using the built-in accelerometers, gyros, and magnetometers, but in practice they are too noisy and have too much error for this kind of use (see this question). A better rocket scientist than me might be able to make it work, but it would also need to use the GPS to keep it from drifting. The M7 chip on the 5S might make this feasible.