ARKit: save object position and see it in a later session - iOS

I am working on a project using ARKit. I need to save an object's position and see it in the same place on my next application launch. For example, in my office I attach some text to a door, go home, and the next day I want to see that text in the same place it was. Is this possible in ARKit?

In iOS 12: Yes!
"ARKit 2", aka ARKit for iOS 12, adds a set of features Apple calls "world map persistence and sharing". You can take everything ARKit knows about its local environment, including any ARAnchors you're using to track the real-world positions of virtual content, and save it in an ARWorldMap object.
Then you can serialize that object to a file, and load the file later to effectively resume the earlier AR session (if the user is in the same local environment). Upon successfully "relocalizing" to the world map, your session has all the same ARAnchors it did before saving, so you can use that to re-create your virtual content (e.g. use the name of a saved/restored anchor to decide which 3D model to show).
For more info, see the WWDC18 talk on ARKit 2 or Apple's ARKit docs and sample code.
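As a rough sketch of what that save/load cycle looks like (the file name and the bare-bones error handling here are just placeholders you'd adapt to your app):

```swift
import ARKit

// Hypothetical location for the archived map.
let mapURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("office.worldmap")

// Saving: capture the current world map (including your ARAnchors)
// and archive it to disk.
func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }
        try? data.write(to: mapURL)
    }
}

// Restoring: unarchive the map and pass it as the session's initial world map.
// Once ARKit relocalizes, the saved anchors come back via session(_:didAdd:).
func restoreSession(on session: ARSession) {
    guard let data = try? Data(contentsOf: mapURL),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                            from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```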
Otherwise, probably not.
Before iOS 12, ARKit doesn’t provide a way to make any results of its local-world mapping persistent. Everything you do, every point you locate, within an AR session is defined only in the context of that session. If you place some virtual content based on plane detection, hit testing, and/or user input, the frame of reference for that position is relative to where your device was at the beginning of the session.
With no frame of reference that can persist across sessions, there’s no way to position virtual content that’ll have it appear to stay in the same real-world position/orientation after (fully) quitting/restarting the app.
But maybe...
One of the additions from “ARKit 1.5” in iOS 11.3 is sort of an escape valve for this problem: image detection. If your app’s use case involves a known/controlled environment (for example, using virtual overlays to guide visitors in an art museum), and there are some easily recognizable 2D features in that environment (like notable paintings), ARKit can detect their positions.
Once you’ve detected an image anchor that you know is a fixed feature of the environment, you can tell your AR Session to redefine its world coordinate system around that anchor (see setWorldOrigin). After doing that, you effectively have a coordinate system that’s the same across multiple sessions (assuming you detect the same image and set the world origin in each session).
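A rough sketch of that approach, assuming you've bundled a reference image named "lobby-painting" in an asset catalog group called "AR Resources" (both names are made up for this example):

```swift
import ARKit

class ImageOriginDelegate: NSObject, ARSessionDelegate {
    func startSession(_ session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()
        // "AR Resources" is an assumed asset catalog group containing the known image.
        configuration.detectionImages =
            ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) ?? []
        session.run(configuration)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let imageAnchor = anchor as? ARImageAnchor,
                  imageAnchor.referenceImage.name == "lobby-painting" else { continue }
            // Redefine the world coordinate system around the detected image,
            // so positions expressed relative to it are comparable across sessions.
            session.setWorldOrigin(relativeTransform: imageAnchor.transform)
        }
    }
}
```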

Related

ARKit anchor drift, localization, image anchors

I'm working on an ARKit application in which I place several anchors around a room, save the world map data to a server, and then restore that data and the anchors on a different device. Obviously, localization and drifting of the anchors is something the app has to cope with.
My question is: can placing an image anchor in the room and scanning that image help with re-localizing after re-loading the world map data? Would ARKit use the pose of a scanned image as feedback into its re-localization process? For the app I'm working on, it is possible to have an image marker (such as a QR code) placed at a consistent location within a room, such that the app can be sure the image has not physically moved. Would scanning such an image and placing an image anchor at its location help with re-localizing when the world map is later loaded on a different device?

Placing objects automatically when a ground plane is detected with Vuforia

I'm working on an application where the concept is that you can 'select' objects before actually placing them. So what I wanted to do was have some low-quality objects on a shelf or something like it. When the user selects an object, they can then tap to place the high-quality version of the object in their area for further viewing.
I was wondering if this is possible with Vuforia. I wanted to use this platform since it works well from what I can tell, and it's cross-platform (the application needs to run on Android and the HoloLens).
I have set up the basic application where you can place a capsule in the area. Now I want to automatically place the capsule once Vuforia has detected a ground plane. From what I could see, the plane finder has events that fire when an input is detected, but I couldn't find an event that fires when the ground plane is detected. Is this still possible with Vuforia? I know it's doable with the HoloLens, but I would like to know if it's possible for Android or other mobile devices. I really don't know where to start looking, so I hope someone can point me in the right direction.
Let me know if I need to include more information!
The Vuforia PlaneFinderBehaviour (see doc here) has the event OnAutomaticHitTest, which fires every frame while a ground plane is detected.
So you can use it to automatically spawn an object.
You have to add your method to the On Automatic Hit Test list of the "Plane Finder", instead of the On Interactive Hit Test list.
I've heard that Vuforia Fusion does not yet support ARCore (it supports ARKit), so it uses an internal implementation to simulate ARCore functionality, and they are waiting for a final release of ARCore before supporting it. Many users have reported that their objects move even on ARCore-supported devices.

Do ARKit anchors persist after pause and run again?

I am considering developing an ARKit app, but before deciding to buy an iPhone I would like to ask two questions that are crucial for me. Please let me know if this has already been asked, as I could not find it anywhere else online.
The questions:
1. Let's say the motion tracking gets lost (e.g., when pointing at a white wall) and then recovers. Does it localize in the same frame of reference, or does it start from scratch? Also, are the anchors preserved?
2. Let's say I pause the session and then run it again (e.g., by leaving the app and then coming back). Does it localize back to the frame of reference from before the pause? Also, are the anchors preserved?
I am asking this because I know that localization does not work in ARCore yet and I was wondering about its state in ARKit.
Thanks!
ARKit has two or three ways to lose tracking (depending on how you think of them); each has a different effect on anchors.
1. TemporARy tracking quality issues
(I honestly fumbled caps lock in the middle of that word. My keyboard is making the puns for me!)
In the first situation you mention, and similar cases (pointing at a blank wall, giving the phone a sudden jostle, moving from a darkened area into bright light or vice versa), your app will get notified of changes to ARKit’s tracking state that affect the quality of camera pose tracking.
When the tracking state is limited, ARKit’s idea of where the world is might be out of sync with the real world, but it still has enough information to be able to relocalize when the situation passes. That includes anchors. (Try for yourself; run one of Apple’s ARKit sample code projects, and cover the camera lens for a bit while moving the phone.)
If whatever situation is affecting the tracking state persists for a long time, relocalization is unlikely to succeed. It can help to track how long you’ve been in limited tracking and offer the user a way to restart the session if things get too out of whack.
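A minimal sketch of watching those tracking-state changes from the session delegate; the ten-second threshold is an arbitrary number chosen for illustration:

```swift
import ARKit

class TrackingStateWatcher: NSObject, ARSessionDelegate {
    private var limitedSince: Date?

    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .normal:
            limitedSince = nil                       // full-quality tracking restored
        case .notAvailable:
            limitedSince = limitedSince ?? Date()
        case .limited(let reason):
            limitedSince = limitedSince ?? Date()
            // reason is .initializing, .excessiveMotion, .insufficientFeatures,
            // or (iOS 11.3+) .relocalizing; use it to pick user-facing guidance.
            print("Tracking limited:", reason)
        }
        // Arbitrary heuristic: if tracking has been limited for a while,
        // offer the user a way to restart the session.
        if let since = limitedSince, Date().timeIntervalSince(since) > 10 {
            print("Consider offering a session restart")
        }
    }
}
```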
2 and 3. Session interruption and resume or restart
If something happens that interrupts ARKit’s ability to receive camera and motion data (like the incoming phone call screen on iPhone, or the user responding to an interactive notification), your app gets a sessionWasInterrupted message. There’s nothing you can do in this case (as far as session management is concerned) other than wait for a corresponding sessionInterruptionEnded message.
If the interruption was brief and the device hasn’t moved much since, there’s a chance of automatic relocalization. Of course, you can’t tell how much the device has moved, because motion tracking was off, but you can make an educated guess based on the duration of the interruption and how sensitive your AR experience is to tracking precision, and decide accordingly whether to restart the session. (For example, a game that has space invaders floating in the air is less affected than an app that lets the user trace out a floor plan by marking walls.)
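A sketch of the relevant ARSessionObserver callbacks; the thirty-second "too long, start over" cutoff is just an assumption for illustration:

```swift
import ARKit

class InterruptionHandler: NSObject, ARSessionDelegate {
    private var interruptedAt: Date?

    func sessionWasInterrupted(_ session: ARSession) {
        // Camera and motion data have stopped; nothing to do but note the time and wait.
        interruptedAt = Date()
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        let duration = Date().timeIntervalSince(interruptedAt ?? Date())
        // Assumed heuristic: after a long interruption, guess that the device
        // moved too much and start over instead of hoping for relocalization.
        if duration > 30, let configuration = session.configuration {
            session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
        }
    }

    // iOS 11.3+: returning true asks ARKit to try relocalizing to where it
    // was before the interruption instead of resetting the world origin.
    func sessionShouldAttemptRelocalization(_ session: ARSession) -> Bool {
        return true
    }
}
```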
Aside: Traditional iOS UI patterns like modal view controllers, tab views, and navigation controllers can push the view hosting an AR session away, interrupting the session and losing tracking. As Apple’s Human Interface Guidelines for AR suggest, it can be good to use things like popover views instead, so that you keep the AR experience onscreen and the session running.
When/if you do restart your AR session, you have a choice of whether to keep anchors or reset tracking. If you’ve already lost localization, what this really means is whether you keep track of anchors in the arbitrary coordinate space they’re defined in (even though that space no longer lines up with the real world), or just lose all the anchors.
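Concretely, that choice comes down to the run options you pass when you re-run the session. A tiny sketch (using a fresh world-tracking configuration for simplicity):

```swift
import ARKit

// Restart the session, choosing whether saved anchors survive.
func restart(_ session: ARSession, keepAnchors: Bool) {
    let configuration = ARWorldTrackingConfiguration()
    let options: ARSession.RunOptions = keepAnchors
        ? [.resetTracking]                          // anchors stay, tracking resets
        : [.resetTracking, .removeExistingAnchors]  // drop everything and start clean
    session.run(configuration, options: options)
}
```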
Short of restarting the session, though, there’s nothing that’ll cause anchors to be removed. And if you lose tracking briefly enough to get relocalization, anchors that track real-world objects (that is, plane anchors, as opposed to the ones you manually create) should adjust back to realistic positions even if the coordinate system doesn’t quite line up the way it used to.

Can I save AR data for reuse?

My goal is to place an object on an ARCore plane in a room, then save the plane and the object's data to a file. After the app exits and starts again, the saved object can be loaded from the file and displayed at the same position as last time.
To persist virtual objects, we could probably use VPS (Visual Positioning Service, not released yet) to localize the device within a room.
However, there's no API to achieve this in the developer preview version of ARCore.
You can save anchor positions in ARCore using Augmented Images.
All you have to do is place your objects wherever you want, go back to one or more Augmented Images, and save the positions of the corners of your Augmented Images to a text or binary file on your device.
Then, in the next session, let's say you used one Augmented Image and 4 points (the corners of the image): you load these positions and calculate a transformation matrix between the two sessions using these two groups of 4 points, which are common to both sessions. The reason you need this is that ARCore's coordinate system changes in every session depending on the device's initial position and rotation.
At the end, you can calculate the positions and rotations of the anchors in the new session using this transformation matrix. They will be placed at the same physical location, with an error margin caused by the accuracy of Augmented Image tracking. If you use more points, this error margin will be lower.
I have tested this with 4 points in each group, and it is quite accurate considering my anchors were placed at arbitrary locations, not attached to any Trackable.
In order to calculate the Transformation Matrix you can refer to this
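The answer above works from the image's corner points; the same idea is easier to see with the image's full pose in each session. A minimal sketch of the matrix math using simd (the function names are hypothetical, and how you serialize the matrices is up to you):

```swift
import simd

// Given the pose of the same physical Augmented Image in the old and new
// sessions, return a transform that maps old-session coordinates into
// new-session coordinates.
func oldToNewTransform(imagePoseOld: simd_float4x4,
                       imagePoseNew: simd_float4x4) -> simd_float4x4 {
    return imagePoseNew * imagePoseOld.inverse
}

// Re-express a saved anchor pose in the new session's coordinate system.
func restoredAnchorPose(savedPoseOld: simd_float4x4,
                        imagePoseOld: simd_float4x4,
                        imagePoseNew: simd_float4x4) -> simd_float4x4 {
    return oldToNewTransform(imagePoseOld: imagePoseOld,
                             imagePoseNew: imagePoseNew) * savedPoseOld
}
```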

iPhone MapKit indoors?

I'm trying to develop a small application that is able to track the position of the user indoors. I also want to create a small map of the building floor, so the user can see an image of it and their current position as they travel through the building. They would connect via WiFi.
Are there any tutorials/examples of how to create a navigational map indoors on the iPhone? Furthermore, is it possible to accurately track the position of the user indoors via WiFi?
Your second question is IMHO much more crucial, and the answer is "not very easily, and not very accurately". You'd have to see multiple APs (4+) located at known positions at the same time, and triangulate your position from the signal strengths. This would also require that the phone scan for networks all the time without connecting to any.
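For what it's worth, the math side of that is straightforward; getting stable signal-strength readings is the hard part. A sketch of the usual approach (a log-distance path-loss model to turn RSSI into a distance guess, then 2D trilateration), with made-up constants:

```swift
import Foundation
import simd

// Rough distance estimate from RSSI using a log-distance path-loss model.
// txPower is the expected RSSI at 1 m and n is the path-loss exponent;
// both are environment-dependent guesses here.
func estimatedDistance(rssi: Double, txPower: Double = -40, n: Double = 2.5) -> Double {
    return pow(10, (txPower - rssi) / (10 * n))
}

// 2D trilateration from three APs at known positions p1...p3 with estimated
// distances r1...r3, obtained by linearizing the three circle equations.
func trilaterate(p1: SIMD2<Double>, r1: Double,
                 p2: SIMD2<Double>, r2: Double,
                 p3: SIMD2<Double>, r3: Double) -> SIMD2<Double>? {
    let a = 2 * (p2 - p1)
    let b = 2 * (p3 - p1)
    let c1 = r1 * r1 - r2 * r2 + length_squared(p2) - length_squared(p1)
    let c2 = r1 * r1 - r3 * r3 + length_squared(p3) - length_squared(p1)
    let det = a.x * b.y - a.y * b.x
    guard abs(det) > 1e-9 else { return nil }    // APs are collinear; no unique fix
    return SIMD2((c1 * b.y - c2 * a.y) / det,
                 (a.x * c2 - b.x * c1) / det)
}
```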
