Do ARKit anchors persist after pausing and running the session again? - ios

I am considering developing an ARKit app, but before deciding to buy an iPhone I would like to ask two questions that are crucial for me. Please let me know if this has already been asked; I could not find it anywhere else online.
The questions:
1. Let's say the motion tracking gets lost (e.g., when pointing at a white wall) and then recovers. Does it localize in the same frame of reference, or does it start from scratch? Also, are the anchors preserved?
2. Let's say I pause the session and then run it again (e.g., by leaving the app and then coming back). Does it localize back to the frame of reference from before the pause? Also, are the anchors preserved?
I am asking this because I know that localization does not work in ARCore yet and I was wondering about its state in ARKit.
Thanks!

ARKit has two or three ways to lose tracking (depending on how you think of them); each has a different effect on anchors.
1. TemporARy tracking quality issues
(I honestly fumbled caps lock in the middle of that word. My keyboard is making the puns for me!)
In the first situation you mention, and similar cases — pointing at a blank wall, giving the phone a sudden jostle, moving from a darkened area into bright light or vice versa — your app will get notified of changes to ARKit’s tracking state that affect the quality of camera pose tracking.
When the tracking state is limited, ARKit’s idea of where the world is might be out of sync with the real world, but it still has enough information to be able to relocalize when the situation passes. That includes anchors. (Try for yourself; run one of Apple’s ARKit sample code projects, and cover the camera lens for a bit while moving the phone.)
If whatever situation is affecting the tracking state persists for a long time, relocalization is unlikely to succeed. It can help to track how long you’ve been in limited tracking and offer the user a way to restart the session if things get too out of whack.
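For reference, this is roughly what observing those tracking-state changes looks like. It's a minimal sketch: the class name, the 10-second timeout, and the restart heuristic are illustrative choices, not anything from Apple's sample code.

```swift
import ARKit

// Sketch: an ARSessionDelegate that watches tracking-state changes and
// remembers how long tracking has been limited.
class TrackingWatcher: NSObject, ARSessionDelegate {
    private var limitedSince: Date?

    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .normal:
            limitedSince = nil                      // recovered; existing anchors are still usable
        case .limited(let reason):
            if limitedSince == nil { limitedSince = Date() }
            print("Tracking limited: \(reason)")    // e.g. .insufficientFeatures, .excessiveMotion
        case .notAvailable:
            print("Tracking not available")
        }
    }

    // Consult this when deciding whether to offer the user a session restart.
    func limitedForTooLong(timeout: TimeInterval = 10) -> Bool {
        guard let since = limitedSince else { return false }
        return Date().timeIntervalSince(since) > timeout
    }
}
```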
2 and 3. Session interruption and resume or restart
If something happens that interrupts ARKit’s ability to receive camera and motion data (like the incoming phone call screen on iPhone, or the user responding to an interactive notification), your app gets a sessionWasInterrupted message. There’s nothing you can do in this case (as far as session management is concerned) other than wait for a corresponding sessionInterruptionEnded message.
If the interruption was brief and the device hasn’t moved much since, there’s a chance of automatic relocalization. Of course, you can’t tell how much the device has moved, because motion tracking was off... so you can only make an educated guess based on the duration of the interruption and how sensitive your AR experience is to tracking precision, and decide accordingly whether to restart the session. (For example, a game that has space invaders floating in the air is less affected than an app that lets the user trace out a floor plan by marking walls.)
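In code, that educated guess might look something like the sketch below; the InterruptionHandler class and the 5-second threshold are assumptions for illustration, and a real app would tune the threshold to how precision-sensitive it is.

```swift
import ARKit

// Sketch: restart the session only after interruptions long enough that the
// device has probably moved too much to relocalize.
class InterruptionHandler: NSObject, ARSessionDelegate {
    private var interruptionStarted: Date?

    func sessionWasInterrupted(_ session: ARSession) {
        // Camera/motion data stopped arriving (phone call screen, notification, etc.).
        interruptionStarted = Date()
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        guard let started = interruptionStarted else { return }
        interruptionStarted = nil
        if Date().timeIntervalSince(started) > 5 {   // arbitrary example threshold
            // Long interruption: start over rather than hope for relocalization.
            session.run(ARWorldTrackingConfiguration(),
                        options: [.resetTracking, .removeExistingAnchors])
        }
        // Otherwise, do nothing and let ARKit attempt automatic relocalization.
    }
}
```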
Aside: traditional iOS UI patterns like modal view controllers, tab views, and navigation controllers can push the view hosting an AR session offscreen, interrupting the session and losing tracking. As Apple’s Human Interface Guidelines for AR suggest, it can be better to use things like popover views instead, so that you keep the AR experience onscreen and the session running.
When/if you do restart your AR session, you have a choice of whether to keep anchors or reset tracking. If you’ve already lost localization, what this really means is whether you keep the anchors in the arbitrary coordinate space they’re defined in (even though that space doesn’t line up with the real world anymore), or lose all the anchors.
Short of restarting the session, though, there’s nothing that’ll cause anchors to be removed. And if you lose tracking only temporarily and relocalization succeeds, anchors that track real-world objects (that is, plane anchors, as opposed to the ones you manually create) should adjust back to realistic positions even if the coordinate system doesn’t quite line up the way it used to.
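Concretely, that choice comes down to which ARSession.RunOptions you pass when re-running the configuration. A sketch, assuming you already have a running session and just want one behavior or the other:

```swift
import ARKit

// Keep anchors (in their possibly misaligned coordinate space) vs. drop them.
func restart(_ session: ARSession, keepingAnchors: Bool) {
    let configuration = ARWorldTrackingConfiguration()
    if keepingAnchors {
        // Tracking restarts, but existing anchors stay defined in the old space.
        session.run(configuration, options: [.resetTracking])
    } else {
        // Start over completely: new world origin, no anchors.
        session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }
}
```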

Related

SceneKit objects not showing up in ARSCNView about 1 out of 100 times

I can't share much code because it's proprietary, but this is a bug that's been haunting me for a while. We have SceneKit geometry added to the ARKit face node and displayed inside an ARSCNView. It works perfectly almost all of the time, but about 1 in 100 times, nothing shows up at all. The ARSession is running, and none of the parent nodes are set to hidden. Furthermore, when I use the Debug Memory Graph function in Xcode, the geometry appears to be entirely intact there (and doesn't seem to be set to hidden). I can see all the nodes attached to the face node perfectly within the ARSCNView in the memory graph, but on the screen, nothing shows up. This has been an issue for multiple iOS versions, so it didn't just appear with a recent update.
Has anybody run into a similar problem, or does anybody have any ideas of what to look into? Is it an Apple bug, or is there a timing issue I might not be aware of? It's been really hard to debug because of how infrequent it is, and I haven't found it discussed on any other forums (but point me in the right direction if there is a previous discussion). Thanks!
This is a pretty common occurrence when AR tracking is poor for some reason.
I ran into a similar problem too. I think it's a tracking error that arises through the fault of the AR app's user. Sometimes, if you're using a world-tracking configuration in ARKit and scan the surrounding environment carelessly, or if you're tracking under inappropriate conditions, you get sloppy tracking data, which results in a situation where your world grid/axes may be unpredictably shifted aside and your model may fly off somewhere. If that happens, look for your model somewhere nearby – maybe it’s behind you.
If you're using a device with a LiDAR scanner, the aforementioned situation is almost impossible, but if you're using a device without LiDAR you need to scan your room thoroughly. You also need good lighting conditions and high-contrast real-world objects with distinguishable, non-repetitive textures.
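If it helps, one way to branch on that at runtime is to check for scene reconstruction support, which (as far as I know) is only available on LiDAR devices. This is a sketch of a capability probe, not an official LiDAR-detection API:

```swift
import ARKit

// Rough capability probe: scene reconstruction requires the LiDAR scanner.
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    // LiDAR device: tracking is far more robust out of the box.
    configuration.sceneReconstruction = .mesh
} else {
    // No LiDAR: prompt the user to scan slowly, ensure good lighting, and rely on
    // high-contrast, non-repetitive textures in the scene.
    print("No LiDAR detected; remind the user to scan carefully")
}
```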

HoloLens replaces GameObjects after spatial map scan

I have made an application for the HoloLens with the HoloToolkit. The app gets an array of 3D positions and spawns GameObjects based on these positions.
After walking around the room a bit and changing some real-world objects (maybe opening a door or something), the HoloLens recognizes it; then a window opens that says 'Finding your space', and all of the GameObjects end up in the wrong positions.
How can I use SpatialMapping without it relocating the GameObjects I place?
After walking around the room a bit and changing some real-world objects (maybe opening a door or something), the HoloLens recognizes it
I'm not sure of the details of the changes you made to real-world objects, but some changes in the real world can have a great impact on the lighting conditions. For example, after opening an outward-facing door in a dimly lit room, a lot of daylight will enter the room.
Lighting can make a difference in the visual features that HoloLens detects. So changeable lighting conditions might cause HoloLens to lose tracking.
Besides, environments where the visual features change because most objects move will also cause the same problem.
I recommend that you try the steps in "I see a message that says Finding your space" to fix this problem. In short, make sure your artificial lighting conditions and Wi-Fi signal are stable, and try moving more slowly.

Can ARKit (or ARSession) keep running in the background?

My question is: can an ARSession keep running in the background? Basically, I need ARKit running with all the per-frame information and camera intrinsics, but I DON'T want to render the camera feed on the screen, as happens in the ARSCNView.
Not specifically in a background thread or process.
Basically, I just want to use the tracking information (images, camera position, camera Euler angles, etc.) from ARKit and don't want to render anything in AR per se, or the camera feed.
Before everyone jumps on me: I know that Apple restricts GPU work in the background. Case in point:
Execution of the command buffer was aborted due to an error during
execution. Insufficient Permission (to submit GPU work from
background) (IOAF code 6)
But there should be a way to use ARKit or ARSession without the camera feed and only with the tracking information, right?
There is no way to do what you want to do: as soon as the ARSCNView or ARSKView is fully covered, AR tracking stops. There are a couple of reasons for this.
First, ARKit is for Augmented Reality, not tracking operations. If you aren't displaying reality (the camera feed) on the screen then you cannot augment reality.
Second, and most important, Apple cares about user privacy and if you could access the camera without showing the feed onscreen the user would have no way of knowing when the camera was in use. This would leave open the possibility of apps spying on users without their knowledge. Apple will never allow this. Ever.
Also, AR hits the battery hard, so it only runs when it's actually being used.
Based on your comments, it seems as though you don't actually want to create an AR experience anyway, so there may be a better way to get what you are after through heading information and the accelerometer.
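For what it's worth, here is a rough sketch of getting heading and device attitude from CoreLocation and CoreMotion without any camera or ARKit involvement. It only gives you orientation, not the positional (world) tracking ARKit provides; the class name and update rate are arbitrary choices.

```swift
import CoreMotion
import CoreLocation

// Sketch: heading + attitude without ARKit (and without a camera feed).
class MotionTracker: NSObject, CLLocationManagerDelegate {
    private let motion = CMMotionManager()
    private let location = CLLocationManager()

    func start() {
        location.delegate = self
        location.startUpdatingHeading()

        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.startDeviceMotionUpdates(using: .xMagneticNorthZVertical,
                                        to: .main) { data, _ in
            guard let attitude = data?.attitude else { return }
            // Orientation relative to magnetic north; no position information.
            print(attitude.roll, attitude.pitch, attitude.yaw)
        }
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        print("Heading: \(newHeading.trueHeading)")
    }
}
```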

ARKit: save object position and see it in any subsequent session

I am working on a project using ARKit. I need to save an object's position, and I want to see it at my next application launch, wherever it was. For example, in my office I attach some text to a door, go back home, and the next day I wish to see that text in the place where it was. Is this possible in ARKit?
In iOS 12: Yes!
"ARKit 2", aka ARKit for iOS 12, adds a set of features Apple calls "world map persistence and sharing". You can take everything ARKit knows about its local environment, including any ARAnchors you're using to track the real-world positions of virtual content, and save it in an ARWorldMap object.
Then you can serialize that object to a file, and load the file later to effectively resume the earlier AR session (if the user is in the same local environment). Upon successfully "relocalizing" to the world map, your session has all the same ARAnchors it did before saving, so you can use that to re-create your virtual content (e.g. use the name of a saved/restored anchor to decide which 3D model to show).
For more info, see the WWDC18 talk on ARKit 2 or Apple's ARKit docs and sample code.
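A condensed sketch of that save/restore flow; the file URL and the minimal error handling here are placeholder choices for illustration, not a copy of Apple's sample code.

```swift
import ARKit

// Save the current world map (including anchors) to a file.
func saveWorldMap(from session: ARSession, to mapURL: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print(error?.localizedDescription ?? "No world map available yet")
            return
        }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: mapURL)
        }
    }
}

// On a later launch: load the map and resume in the same space.
func restoreSession(_ session: ARSession, from mapURL: URL) throws {
    let data = try Data(contentsOf: mapURL)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map   // saved anchors return after relocalization
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```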
Otherwise, probably not.
Before iOS 12, ARKit doesn’t provide a way to make any results of its local-world mapping persistent. Everything you do, every point you locate, within an AR session is defined only in the context of that session. If you place some virtual content based on plane detection, hit testing, and/or user input, the frame of reference for that position is relative to where your device was at the beginning of the session.
With no frame of reference that can persist across sessions, there’s no way to position virtual content that’ll have it appear to stay in the same real-world position/orientation after (fully) quitting/restarting the app.
But maybe...
One of the additions from “ARKit 1.5” in iOS 11.3 is sort of an escape valve for this problem: image detection. If your app’s use case involves a known/controlled environment (for example, using virtual overlays to guide visitors in an art museum), and there are some easily recognizable 2D features in that environment (like notable paintings), ARKit can detect their positions.
Once you’ve detected an image anchor that you know is a fixed feature of the environment, you can tell your AR Session to redefine its world coordinate system around that anchor (see setWorldOrigin). After doing that, you effectively have a coordinate system that’s the same across multiple sessions (assuming you detect the same image and set the world origin in each session).
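Roughly, that looks like the sketch below. "RoomMarkers" is a hypothetical AR Resources group name, and the delegate callback is assumed to live in whatever object serves as your session delegate.

```swift
import ARKit

// Detect known reference images in the environment.
func runImageDetection(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages =
        ARReferenceImage.referenceImages(inGroupNamed: "RoomMarkers", bundle: nil)
    sceneView.session.run(configuration)
}

// ARSessionDelegate callback: re-anchor the world origin on a detected image,
// giving a frame of reference that is reproducible across sessions.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for case let imageAnchor as ARImageAnchor in anchors {
        session.setWorldOrigin(relativeTransform: imageAnchor.transform)
    }
}
```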

Filtering GPS jitter when standing still?

I am working on an iOS app with tracking. I have implemented Kalman smoothing in order to present a pleasing path. This is working pretty well at this point.
I am having a bit of trouble dealing with the user-not-moving case, though. When the user IS moving, we get very good readings back from the CLLocationManager. And even when a reading is a bit off, the Kalman algorithm takes care of it.
When standing still, the CLLocationManager delegate still receives "accurate" locations. They have good accuracy and no unbelievable speed. Looking at the screen with human eyes, it's clear that the user is standing still, with all these points just scattered around: some points very close and a few of them far out.
I have tried setting the CLLocationManager property pausesLocationUpdatesAutomatically, but it doesn't seem to be working that well. It doesn't always pause when it should, and there has been difficulty restarting the tracking again once the antennas are powered down.
So I'm looking to keep the tracking on the whole time but filter out the jitter in post-processing: determine programmatically that the user has stopped, then discard (or ignore) all locations until the user is moving again.
I'm not really sure how to go about this. What algorithm is appropriate to achieve something like this?
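One possible approach (a sketch under stated assumptions, not a definitive answer): treat the user as stopped when a small window of recent fixes all fall within each other's horizontal-accuracy radius and the reported speed stays near zero. The class name, window size, and thresholds below are arbitrary example choices to tune against real traces.

```swift
import CoreLocation

// Sketch: drop location updates that look like stationary jitter.
final class StationaryFilter {
    private var recent: [CLLocation] = []
    private let windowSize = 5
    private let speedThreshold: CLLocationSpeed = 0.5   // metres per second

    /// Returns the location if the user appears to be moving, or nil if it looks like jitter.
    func filter(_ location: CLLocation) -> CLLocation? {
        recent.append(location)
        if recent.count > windowSize { recent.removeFirst() }
        guard recent.count == windowSize, let anchor = recent.first else { return location }

        // Near-zero speed on every recent fix (negative speed means "invalid").
        let slowEnough = recent.allSatisfy { $0.speed >= 0 && $0.speed < speedThreshold }
        // Every recent fix lies within the accuracy radius of the oldest one.
        let clustered = recent.allSatisfy {
            $0.distance(from: anchor) < max(anchor.horizontalAccuracy, $0.horizontalAccuracy)
        }
        return (slowEnough && clustered) ? nil : location
    }
}
```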
