Marker-based augmented reality not aligning to the marker's plane on mobile devices only - augmented-reality

I'm working on a marker-based AR game using A-Frame 1.2.0 and AR.js 3.3.3. The game displays 2D images of animals that the user has to "find". The game as a whole works, but the images were appearing distorted or warped, and I've figured out that the marker's plane is not being read correctly on mobile devices. The screenshots below include a red cube to show the issue more clearly: the top one, taken with a PC's webcam, shows the cube correctly anchored to the marker; the bottom one, taken on a mobile device, shows the cube floating off the marker.
I suspect the issue is either the mobile device's gyroscope input or the screen dimensions affecting the aspect ratio.
I've tried a few properties on A-Frame's a-entity, such as look-controls="enabled: false" and look-controls="magicWindowTrackingEnabled: false", but neither made a difference. I haven't found any AR.js properties that would help. Has anyone come across this issue and found a fix?
[Image: cube aligned correctly with the marker (PC webcam)]
[Image: cube not aligned with the marker (mobile device)]

AR.js comes in two different, mutually exclusive builds: one for image tracking + location-based tracking, and one for marker tracking (link).
Importing the wrong one can cause incorrect behavior like the one you're experiencing.

Related

ARKit anchor drift, localization, image anchors

I'm working on an ARKit application in which I place several anchors around a room, save the world map data to a server, and then restore that data and the anchors on a different device. Obviously, relocalization and drift of the anchors are things the app has to cope with.
My question is: can placing an image anchor in the room and scanning that image help with relocalizing after reloading the world map data? Would ARKit use the pose of a scanned image as feedback into its relocalization process? For the app I'm working on, it is possible to have an image marker (such as a QR code) placed at a fixed location in the room, so the app can be sure the image has not physically moved. Would scanning such an image and placing an image anchor at its location help with relocalizing when the world map is later loaded on a different device?

Placing objects below the ground in AR Quick Look on iOS

I am working on a project that needs to display objects below the ground using AR Quick Look. However, AR mode seems to lift everything above the ground based on the bounding box of the objects in the scene.
I have tried using the USDZ directly and composing a simple scene in Reality Composer with the object, or with a simple cube, with exactly the same result. The AR preview mode in Reality Composer shows the object below the ground, or below an image anchor, correctly. However, if I export the scene as a .reality file and open it using AR Quick Look, it brings the object above the ground as well.
Is there a way to show an object below the detected horizontal plane or (horizontal) image using AR Quick Look?
This is still an issue a year later. I have submitted feedback to Apple, and I suggest you do too; I suggested adding a checkbox to keep the Y axis persistent. My assumption is that it behaves this way to prevent the object from colliding with the ground, but I don't think that's necessary. It's just a limitation right now.

Suggestions to place points on a floor plan

I need some direction (tutorials, examples, etc.) to help me figure out how I could take a floor plan (e.g. a PNG file for now), place points/icons at different locations on it, and be able to zoom in/out, drag, and rotate.
Basically, a bit like Google Maps, but with only the basic features.
Thanks
Take a look at the Grid List Demo in Flutter Gallery:
https://github.com/flutter/flutter/blob/master/examples/flutter_gallery/lib/demo/material/grid_list_demo.dart
It features an image viewer with zoom and pan support.
(You can also find the Gallery on the Play Store)
Instead of using a simple image, use a Stack that overlays Positioned widgets over your image.

Is it possible to obtain the individual left and right images (or CVPixelBuffers) from the iPhone dual camera?

AVFoundation uses the dual camera on some of the recent "Plus" iPhones to compute a depth map. However, I am trying to obtain the individual left and right images as captured by the two cameras. I've done some googling, but I haven't found anything yet, either from Apple's developer pages or from people who have tried it and written it up in a blog.
Note: I am not trying to get the depth map (which is well-documented territory); I would like the raw individual left and right images so I can process the parallax information in other ways.

Image tracking - tracking a screen with a camera

I want to track the relative position of a camera aimed at a computer screen.
I can’t control what is displayed on the computer screen but I can receive screen dumps whenever something changes on the screen. Those screen dumps can hopefully be used to find the screen when analyzing the video from the camera.
I've seen many YouTube videos on tracking faces, logos, or single-colored objects with OpenCV, but I'm unsure whether those methods would work for finding and tracking a more detailed image like a screen dump.
Maybe template matching is the way to go? But I need to find the screen even when it's viewed at an angle.
Basically, I don't know where to begin and would appreciate guidance from people with experience in this field on the best way to achieve this.
Thanks
Using feature matching (SIFT/SURF/ORB/...) should do the trick.
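Here's a minimal sketch of what that could look like in Python with OpenCV, using ORB (SIFT/SURF may require the opencv-contrib build). The file names screenshot.png and frame.png are placeholders for your latest screen dump and a single camera frame; in practice you would run the matching against each frame of the camera feed.

```python
import cv2
import numpy as np

# Placeholders: the latest screen dump and one frame from the camera
template = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and binary descriptors in both images
orb = cv2.ORB_create(nfeatures=2000)
kp_t, des_t = orb.detectAndCompute(template, None)
kp_f, des_f = orb.detectAndCompute(frame, None)

# Brute-force Hamming matcher suits ORB's binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_t, des_f), key=lambda m: m.distance)[:200]

if len(matches) >= 4:
    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # The homography maps the screen dump onto the camera frame,
    # which handles the screen being viewed at an angle
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    if H is not None:
        # Project the screen dump's corners into the camera frame
        h, w = template.shape
        corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
        outline = cv2.perspectiveTransform(corners, H)
        print("Screen corners in the camera frame:", outline.reshape(-1, 2))
```

From the homography (plus the camera intrinsics and the screen's physical size) you can then recover the camera's pose relative to the screen, e.g. with cv2.solvePnP on the four corners.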
