ARKit anchor drift, localization, image anchors - ios

I'm working on an ARKit application in which I place several anchors around a room, save the world map data to a server, and then restore that data and the anchors on a different device. Obviously, re-localization and drift of the anchors are things the app has to cope with.
My question is: can placing an image anchor in the room and scanning that image help with re-localizing after re-loading the world map data? Would ARKit use the pose of a scanned image as feedback into its re-localization process? For the app I'm working on, it is possible to have an image marker (such as a QR code) placed at a consistent location within a room, such that the app can be sure the image has not physically moved. Would scanning such an image and placing an image anchor at its location help with re-localizing when the world map is later loaded on a different device?

Related

Detecting a real world object using ARKit with iOS

I am currently playing a bit with ARKit. My goal is to detect a shelf and draw stuff onto it.
I did already find the ARReferenceImage and that basically works for a very, very simple prototype, but the image needs to be quite complex it seems? Xcode always complains if I try to use something a lot simpler (like a QR-Code like image). With that marker I would know the position of an edge and then I'd know the physical size of my shelf and know how to place stuff into it. So that would be ok, but I think small and simple markers will not work, right?
But ideally I would not need a marker at all.
I know that I can detect e.g. planes, but I want to detect the shelf itself. But as my shelf is open, it's not really a plane. Are there other possibilities to find an object using ARKit?
I know that my question is very vague, but maybe somebody could point me in the right direction. Or tell me if that's even possible with ARKit or if I need other tools? Like Unity?
There are several different possibilities for positioning content in augmented reality. They are called content anchors, and they are all subclasses of the ARAnchor class.
Image anchor
Using an image anchor, you would stick your reference image on a pre-determined spot on the shelf and position your 3D content relative to it.
the image needs to be quite complex it seems? Xcode always complains if I try to use something a lot simpler (like a QR-Code like image)
That's correct. The image needs to have enough visual detail for ARKit to track it. Something like a simple black and white checkerboard pattern doesn't work very well. A complex image does.
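For reference, enabling image detection in code looks roughly like this. This is a minimal sketch: the AR Resource Group name "ShelfMarkers" and the ARSCNView called sceneView are illustrative assumptions, not anything from the question.

import ARKit

// Minimal sketch: enable image detection with a reference image bundled
// in an AR Resource Group ("ShelfMarkers" is a hypothetical group name).
let configuration = ARWorldTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "ShelfMarkers",
                                                          bundle: nil) {
    configuration.detectionImages = referenceImages
}
sceneView.session.run(configuration)   // sceneView: an ARSCNView set up elsewhere

// ARKit then delivers an ARImageAnchor through the usual delegate callbacks
// once it recognises the image, and you position your content relative to it.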
Object anchor
Using object anchors, you scan the shape of a 3D object ahead of time and bundle this data file with your app. When a user uses the app, ARKit will try to recognise this object and if it does, you can position your 3D content relative to it. Apple has some sample code for this if you want to try it out quickly.
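A rough sketch of how that scanned data is used at runtime, assuming the .arobject file produced by the scanning app was added to an AR Resource Group; the group name "ScannedObjects" and the sceneView variable are illustrative:

import ARKit

// Minimal sketch: ask ARKit to look for previously scanned reference objects.
let configuration = ARWorldTrackingConfiguration()
if let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "ScannedObjects",
                                                             bundle: nil) {
    configuration.detectionObjects = referenceObjects
}
sceneView.session.run(configuration)

// When ARKit recognises one of the objects, it adds an ARObjectAnchor
// to the session, and you place your content relative to that anchor.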
Manually creating an anchor
Another option would be to enable ARKit plane detection, and have the user tap a point on the horizontal shelf. Then you perform a raycast to get the 3D coordinate of this point.
You can create an ARAnchor object using this coordinate, and add it to the ARSession.
Then you can again position your content relative to the anchor.
You could also implement a drag gesture to let the user fine-tune the position along the shelf's plane.
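A minimal sketch of that tap-raycast-anchor flow, assuming an ARSCNView named sceneView running with plane detection enabled and a UITapGestureRecognizer wired to this handler (all names are illustrative):

import ARKit

@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: sceneView)

    // Ray-cast from the tapped screen point onto detected horizontal planes.
    guard let query = sceneView.raycastQuery(from: point,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .horizontal),
          let result = sceneView.session.raycast(query).first else { return }

    // Create an anchor at the hit location and add it to the session;
    // content attached in renderer(_:didAdd:for:) will follow this anchor.
    let anchor = ARAnchor(name: "shelfAnchor", transform: result.worldTransform)
    sceneView.session.add(anchor: anchor)
}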
Conclusion
Which one of these placement options is best for you depends on the use case of your app. I hope this answer was useful :)
References
There are a lot of informative WWDC videos about ARKit. You could start off by watching this one: https://developer.apple.com/videos/play/wwdc2018/610
It is absolutely possible. Whether you do this in Swift or Unity depends entirely on what you are comfortable working in.
ARKit calls them object anchors (https://developer.apple.com/documentation/arkit/arobjectanchor). In other implementations they are often called mesh or model targets.
This YouTube video shows what you want to do in Swift.
But objects like a shelf might be hard to recognize since their content often changes.
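If you go the Swift route, the detection callback looks roughly like this; a sketch assuming an ARSCNView whose configuration has detectionObjects set, with the sphere standing in for whatever content you actually want to draw:

import ARKit
import SceneKit

// Minimal sketch: react to a recognised object and attach content to the
// node ARKit creates for its anchor.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let objectAnchor = anchor as? ARObjectAnchor else { return }
    print("Detected object: \(objectAnchor.referenceObject.name ?? "unnamed")")

    let marker = SCNNode(geometry: SCNSphere(radius: 0.02))
    node.addChildNode(marker)   // positioned relative to the detected object
}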

Marker based Augmented Reality not planing properly to marker on mobile devices only

I'm working on creating a marker-based AR game using A-Frame 1.2.0 and ar.js 3.3.3. The display shows 2D images of animals that the user has to "find". The whole game functions well now, but I was running into an issue of photos appearing distorted or warped. I figured out that the issue is that the marker's plane is not being read correctly by mobile devices. The pictures below include a red cube to show the issue better. The top one is on a PC's webcam and correctly shows the box mounted to the marker. The bottom one shows the box is not mounted to the marker.
I figure that the issue is either the mobile device's gyroscope features or the screen dimensions affecting the aspect ratio of the screen.
I've tried a few properties on A-Frame's a-entity, such as look-controls='enabled: false' and look-controls='magicWindowTrackingEnabled: false'. Neither of those made a difference. I haven't found properties within ar.js to use. Just wondering if anyone has come across this issue and found a fix.
[Image: red cube planing correctly with the marker (PC webcam)]
[Image: red cube not planing correctly with the marker (mobile)]
AR.js comes in two different, mutually exclusive builds: image + location-based tracking, and marker tracking (link).
Importing the wrong one can cause incorrect behavior like the one you're experiencing.

How does image anchoring work in Reality Composer?

While learning about Reality Composer I found that it is possible to anchor an image, meaning that if I have an image in real life and a copy of it in Reality Composer, I can build a whole scene right on top of the image. I was wondering: how does the actual anchoring happen?
I have worked before with SIFT keypoint matching, which could be used in this case as well, however, I cannot find how this works in Reality Composer.
The principle of operation is as simple as this:
Reality Composer's scene element called AnchorEntity, contained in the .rcproject file of a RealityKit app, conforms to the HasAnchoring protocol. When the RealityKit app's image-recognition machinery sees an image through the rear camera, it compares it with the ones contained in the reference images folder. If the images match, the app creates an image-based anchor, AnchorEntity (similar to ARImageAnchor in ARKit), that tethers its corresponding 3D model. The invisible anchor appears at the center of the picture.
AnchorEntity(.image(group: "ARResourceGroup", name: "imageBasedAnchor"))
When you use image-based anchors in RealityKit apps, you're using RealityKit's analog of ARImageTrackingConfiguration, which is less processor-intensive than ARWorldTrackingConfiguration.
The difference between AnchorEntity(.image) and ARImageAnchor is that RealityKit automatically tracks all of its anchors, while ARKit relies on renderer(...) or session(...) delegate methods for updates.
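A minimal usage sketch, assuming a reference image named "imageBasedAnchor" in an AR resource group called "ARResourceGroup" (matching the snippet above), a model file "toy.usdz" bundled with the app, and an ARView named arView; the asset and variable names are illustrative:

import RealityKit

// Minimal sketch: tether a model to an image-based anchor in RealityKit.
let anchor = AnchorEntity(.image(group: "ARResourceGroup",
                                 name: "imageBasedAnchor"))
if let model = try? Entity.loadModel(named: "toy") {
    anchor.addChild(model)          // the model stays tethered to the picture
}
arView.scene.addAnchor(anchor)      // arView is your RealityKit ARView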

ARKit save object position and see it in any next session

I am working on a project using ARKit. I need to save an object's position and see it in the same place on my next application launch, wherever it was. For example, in my office I attach some text to a door, go home, and the next day I want to see that text in the place where it was. Is this possible in ARKit?
In iOS 12: Yes!
"ARKit 2", aka ARKit for iOS 12, adds a set of features Apple calls "world map persistence and sharing". You can take everything ARKit knows about its local environment, including any ARAnchors you're using to track the real-world positions of virtual content, and save it in an ARWorldMap object.
Then you can serialize that object to a file, and load the file later to effectively resume the earlier AR session (if the user is in the same local environment). Upon successfully "relocalizing" to the world map, your session has all the same ARAnchors it did before saving, so you can use that to re-create your virtual content (e.g. use the name of a saved/restored anchor to decide which 3D model to show).
For more info, see the WWDC18 talk on ARKit 2 or Apple's ARKit docs and sample code.
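As a rough sketch of that save/restore flow (Apple's linked sample code is the authoritative version; the sceneView variable and the file location here are illustrative assumptions):

import ARKit

// Where to persist the serialized map ("worldMap" is an arbitrary file name).
let mapURL = FileManager.default.urls(for: .documentDirectory,
                                      in: .userDomainMask)[0]
    .appendingPathComponent("worldMap")

// Saving the current session's map:
sceneView.session.getCurrentWorldMap { worldMap, error in
    guard let map = worldMap else { return }
    if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                    requiringSecureCoding: true) {
        try? data.write(to: mapURL)
    }
}

// Restoring it in a later session:
if let data = try? Data(contentsOf: mapURL),
   let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}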
Otherwise, probably not.
Before iOS 12, ARKit doesn’t provide a way to make any results of its local-world mapping persistent. Everything you do, every point you locate, within an AR session is defined only in the context of that session. If you place some virtual content based on plane detection, hit testing, and/or user input, the frame of reference for that position is relative to where your device was at the beginning of the session.
With no frame of reference that can persist across sessions, there’s no way to position virtual content that’ll have it appear to stay in the same real-world position/orientation after (fully) quitting/restarting the app.
But maybe...
One of the additions from “ARKit 1.5” in iOS 11.3 is sort of an escape valve for this problem: image detection. If your app’s use case involves a known/controlled environment (for example, using virtual overlays to guide visitors in an art museum), and there are some easily recognizable 2D features in that environment (like notable paintings), ARKit can detect their positions.
Once you’ve detected an image anchor that you know is a fixed feature of the environment, you can tell your AR Session to redefine its world coordinate system around that anchor (see setWorldOrigin). After doing that, you effectively have a coordinate system that’s the same across multiple sessions (assuming you detect the same image and set the world origin in each session).
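A sketch of that origin-reset step, assuming you implement ARSessionDelegate and have bundled the known image as a detection image; the reference image name "fixedWallMarker" is hypothetical:

import ARKit

// Minimal sketch: once the known, fixed image is detected, re-define the
// session's world origin at the image, so coordinates line up across sessions.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for anchor in anchors {
        if let imageAnchor = anchor as? ARImageAnchor,
           imageAnchor.referenceImage.name == "fixedWallMarker" {
            session.setWorldOrigin(relativeTransform: imageAnchor.transform)
        }
    }
}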

Can I save ar data for reuse?

My goal is to place an object on an ARCore plane in a room and then save the plane and the object's data to a file. After the app exits and starts again, the saved object should be loaded from the file and displayed at the same position, just like last time.
To persist virtual objects, we could probably use VPS (visual positioning service, not released yet) to localize the device within a room.
However, there's no API to achieve this in the developer preview version of ARCore.
You can save anchor positions in ARCore using Augmented Images.
All you have to do is place your objects wherever you want, go back to one or more Augmented Images, and save the positions of the corners of your Augmented Images into a text or binary file on your device.
Then in the next session, let's say you used one Augmented Image and 4 points (the corners of the image), you load these positions and calculate a transformation matrix between the two sessions using these two groups of 4 points, which are common to both sessions. You need this because ARCore's coordinate system changes in every session depending on the device's initial position and rotation.
At the end, you can calculate the positions and rotations of anchors in the new session using this transformation matrix. Each anchor will be placed at the same physical location, with an error margin determined by the accuracy of Augmented Image tracking. If you use more points, this error margin will be relatively lower.
I have tested this with 4 points in each group and it is quite accurate, considering my anchors were placed at arbitrary locations not attached to any Trackable.
In order to calculate the transformation matrix you can refer to this.
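The linked derivation isn't reproduced here, but as a rough sketch of one simple way to do it (shown in Swift with simd to stay consistent with the rest of this page; the same math carries over to an ARCore app): build an orthonormal frame from the image corners in each session, then compose the new frame with the inverse of the old one.

import simd

// Sketch: derive the old-session -> new-session rigid transform from the four
// image corners observed in both sessions (corners must be in the same order).
func frame(from corners: [simd_float3]) -> simd_float4x4 {
    let origin = corners[0]
    let xAxis = simd_normalize(corners[1] - corners[0])
    let zAxis = simd_normalize(simd_cross(xAxis, corners[3] - corners[0]))
    let yAxis = simd_cross(zAxis, xAxis)
    return simd_float4x4(columns: (simd_float4(xAxis, 0),
                                   simd_float4(yAxis, 0),
                                   simd_float4(zAxis, 0),
                                   simd_float4(origin, 1)))
}

func sessionTransform(oldCorners: [simd_float3],
                      newCorners: [simd_float3]) -> simd_float4x4 {
    // p_new = T * p_old for any point rigidly attached to the image
    return frame(from: newCorners) * frame(from: oldCorners).inverse
}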
