ARPlane moves with the camera - augmented-reality

I have created an AR app using Unreal Engine 5, but when I scan the environment and spawn an object on the detected plane, the plane starts to move with the camera. How can I make the plane stay stable and stick to the desired position in the environment?
[blueprint](https://i.stack.imgur.com/IYdrC.png) ([continued](https://i.stack.imgur.com/BFsyr.png))

Related

Is it Possible to Place an AR Object above an AR Object?

I want to place an AR object (a chair) onto another AR object (a platform).
Let's say the chair is 3x3 and the platform is 6x6 in the horizontal plane. Is this possible? If yes, in which of the following: 1. ARCore, 2. ARKit, 3. Viro React?
I know AR detects real-world planes and we can place objects onto them. I have also seen videos of apps where ARCore objects interact with each other.
It is possible, but you do not stack one AR object directly on top of another. You have to manually create a new type, and then you stack instances of that new type on one another.
An example:
You spawn a Platform class, then detect the platform the same way you detect a plane; you can use a raycast to do this. Once you have detected the platform, you offset the chair from the platform and spawn it on top.
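As a hedged sketch of the offset step (the function name, the raycast result, and the size values are all illustrative assumptions, not part of any specific AR framework's API):

```python
# Sketch: spawn a chair on top of a detected platform. The raycast hit
# point and the half-height values are hypothetical stand-ins for
# whatever your AR framework (ARCore/ARKit/Viro) actually returns.

def spawn_position_on_platform(hit_point, platform_top_y, chair_half_height):
    """Return the world position where the chair should be spawned.

    hit_point: (x, y, z) where the raycast hit the platform's top surface
    platform_top_y: world-space height of the platform's upper face
    chair_half_height: half the chair's bounding-box height, so its base
                       rests on the platform instead of intersecting it
    """
    x, _, z = hit_point
    # Keep the horizontal hit location, but lift the chair so its base
    # sits exactly on the platform's top face.
    return (x, platform_top_y + chair_half_height, z)

# Example: the raycast hit the platform at (1.0, 0.5, -2.0); the platform's
# top face is at y = 0.5 and the chair is 0.8 units tall.
print(spawn_position_on_platform((1.0, 0.5, -2.0), 0.5, 0.4))  # (1.0, 0.9, -2.0)
```

The same idea works regardless of platform: raycast down (or toward the platform), then offset the spawned object by half its height along the surface normal.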

ARKit: delete infinite planes without restarting the session?

Does anyone know how to make ARKit delete infinite planes without restarting the session? I'm currently trying to make my app detect one plane at a time and make it infinite. If a new plane is detected, it should ideally delete the last plane and focus on the new one at whatever height that plane is at. Does anyone know how to do this without restarting the ARKit session? (Note that restarting the session causes all ARKit-placed objects to lose their last position.)

How to turn AR on/off in an augmented reality app using ARKit?

I'm starting to learn how to use ARKit, and I would like to add a button like the one in the Pokémon Go application where you can switch between AR ON (with the model in the real world) and AR OFF (without using the camera, with just the 3D model on a fixed background). Is there an easy way to do it?
Another good example of what you're asking about is the AR Quick Look feature in iOS 12 (see WWDC video or this article): when you quick look a USDZ file you get a generic white-background preview where you can spin the object around with touch gestures, and you can seamlessly switch back and forth between that and a real-world AR camera view.
You've asked about ARKit but not said anything about which renderer you're using. Remember, ARKit itself only tells you about the real world and provides live camera imagery, but it's up to you to display that image and whatever 3D overlay content you want — either by using a 3D graphics framework like SceneKit, Unity, or Unreal, or by creating your own renderer with Metal. So the rest of this answer is renderer-agnostic.
There are two main differences between an AR view and a non-AR 3D view of the same content:
An AR view displays the live camera feed in the background; a non-AR view doesn't.
3D graphics frameworks typically involve some notion of a virtual camera that determines your view of the 3D scene — by moving the camera, you change what part of the scene you see and what angle you see it from. In AR, the virtual camera is made to match the movement of the real device.
Hence, to switch between AR and non-AR 3D views of the same content, you just need to manipulate those differences in whatever way your renderer allows:
Hide the live camera feed. If your renderer lets you directly turn it off, do that. Otherwise you can put some foreground content in front of it, like an opaque skybox and/or a plane for your 3D models to rest on.
Directly control the camera yourself and/or provide touch/gesture controls for the user to manipulate the camera. If your renderer supports multiple cameras in the scene and choosing which one is currently used for rendering, you can keep and switch between the ARKit-managed camera and your own.
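To make the two differences concrete, here is a renderer-agnostic sketch of the mode switch. The `Renderer` class and its fields are hypothetical, not any real framework's API; they only model the two switches described above.

```python
# Hypothetical renderer state, modelling the two AR/non-AR differences:
# (a) whether the live camera feed is drawn behind the scene, and
# (b) which virtual camera currently renders the scene.

class Renderer:
    def __init__(self):
        self.camera_feed_visible = True
        self.active_camera = "ar"      # pose driven by device tracking
        self.orbit_angle = 0.0         # state of the user-controlled camera

    def set_ar_mode(self, enabled):
        # Difference 1: show/hide the live camera feed in the background.
        self.camera_feed_visible = enabled
        # Difference 2: switch which virtual camera renders the scene.
        self.active_camera = "ar" if enabled else "user"

    def on_pan_gesture(self, delta_degrees):
        # Gesture input only drives the camera in non-AR mode; in AR mode
        # the camera pose is owned by the tracking system.
        if self.active_camera == "user":
            self.orbit_angle += delta_degrees

r = Renderer()
r.set_ar_mode(False)    # AR OFF: fixed background, user orbits the model
r.on_pan_gesture(15.0)
print(r.camera_feed_visible, r.active_camera, r.orbit_angle)
```

In a real app, `set_ar_mode(True)` would also hand the camera pose back to the tracking system so the view lines up with the world again.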

Can I save AR data for reuse?

My goal is to place an object on an ARCore plane in a room, then save the plane and the object's data to a file. After the app exits and starts again, the saved object can be loaded from the file and displayed at the same position as last time.
To persist virtual objects, we could probably use VPS (visual positioning service, not yet released) to localize the device within a room.
However, there is no API to achieve this in the developer preview version of ARCore.
You can save anchor positions in ARCore using Augmented Images.
All you have to do is place your objects wherever you want relative to one or more Augmented Images, and save the positions of the corners of your Augmented Images into a text or binary file on your device.
Then, in the next session, let's say you used one Augmented Image and four points (the corners of the image): you load these positions and calculate a transformation matrix between the two sessions using these two groups of four points, which are common to both sessions. The reason you need this is that ARCore's coordinate system changes in every session depending on the device's initial position and rotation.
In the end, you can calculate the positions and rotations of anchors in the new session using this transformation matrix. Each anchor will be placed at the same physical location, with an error margin caused by the accuracy of Augmented Image tracking. If you use more points, this error margin will be relatively lower.
I have tested this with four points in each group, and it is quite accurate considering my anchors were placed at arbitrary locations, not attached to any Trackable.
In order to calculate the Transformation Matrix you can refer to this
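A minimal sketch of that calculation in plain Python, assuming both sessions are gravity-aligned (ARCore's world frame keeps Y up), so the unknown transform reduces to a rotation about the vertical axis plus a translation. The function names and the example coordinates are illustrative:

```python
import math

def estimate_transform(points_old, points_new):
    """Estimate the old-session -> new-session transform from matched points.

    points_old / points_new: lists of matched (x, y, z) tuples, e.g. the
    four corners of one Augmented Image as seen in each session.
    Returns (yaw, t): a rotation about the vertical axis and a translation.
    """
    n = len(points_old)
    ca = [sum(p[i] for p in points_old) / n for i in range(3)]  # old centroid
    cb = [sum(p[i] for p in points_new) / n for i in range(3)]  # new centroid
    # Least-squares yaw from the centered x/z components of each pair.
    num = den = 0.0
    for a, b in zip(points_old, points_new):
        ax, az = a[0] - ca[0], a[2] - ca[2]
        bx, bz = b[0] - cb[0], b[2] - cb[2]
        num += ax * bz - az * bx
        den += ax * bx + az * bz
    yaw = math.atan2(num, den)
    # Translation: new centroid minus the rotated old centroid.
    cx = ca[0] * math.cos(yaw) - ca[2] * math.sin(yaw)
    cz = ca[0] * math.sin(yaw) + ca[2] * math.cos(yaw)
    t = (cb[0] - cx, cb[1] - ca[1], cb[2] - cz)
    return yaw, t

def apply_transform(yaw, t, p):
    """Map a point saved in the old session into the new session's frame."""
    x, y, z = p
    xr = x * math.cos(yaw) - z * math.sin(yaw)
    zr = x * math.sin(yaw) + z * math.cos(yaw)
    return (xr + t[0], y + t[1], zr + t[2])

# Example: the same image corners, seen rotated 90 degrees and shifted
# between sessions; a saved anchor then maps through the same transform.
old = [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)]
new = [(2, 0.5, 3), (2, 0.5, 4), (1, 0.5, 4), (1, 0.5, 3)]
yaw, t = estimate_transform(old, new)
print(round(math.degrees(yaw), 3), [round(v, 3) for v in t])
```

If the sessions are not gravity-aligned, a full 3D rigid-transform fit (e.g. the Kabsch algorithm) is needed instead of the yaw-only estimate.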

camera overlay change with bearing and elevation

Folks,
I am trying to build a utility as shown in the picture below. Basically, the camera display window covers part of the device's screen, and a list of points connected by a curve or straight line is presented over the camera view as an overlay. I understand this can be drawn using Quartz, but that is less than half of my problem.
The real issue is that the overlay should present different points as the bearing and elevation changes.
For example:
if the bearing changes by +5 degrees and the elevation by +2 degrees, then PT1 will be next to the right edge of the camera view, PT2 will also move to the right, and PT3 will become visible.
Another movement that changes the bearing by +10 degrees would make PT1 invisible, put PT2 at the right edge, PT3 in the middle, and PT4 on the left edge of the camera view.
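The mapping behind that behavior can be sketched as follows. Everything here is an illustrative assumption (the function name, the field-of-view values, and the view size); the point is only that a point's bearing/elevation relative to the device's current heading/pitch, divided by the camera's field of view, gives its on-screen position:

```python
# Map a point's bearing/elevation (degrees) to a position inside the
# camera view, given the device's current heading/pitch and the camera's
# field of view. FOV and view-size values are made-up examples.

def overlay_position(point_bearing, point_elevation,
                     heading, pitch,
                     h_fov=60.0, v_fov=45.0,
                     width=320, height=240):
    """Return (x, y) in view pixels, or None if the point is off-screen."""
    dx = point_bearing - heading      # degrees right of the view centre
    dy = point_elevation - pitch      # degrees above the view centre
    if abs(dx) > h_fov / 2 or abs(dy) > v_fov / 2:
        return None                   # outside the camera's field of view
    x = (dx / h_fov + 0.5) * width    # 0 .. width, left to right
    y = (0.5 - dy / v_fov) * height   # 0 .. height, top to bottom
    return (x, y)

# A point dead ahead at the horizon sits at the view's centre:
print(overlay_position(90.0, 0.0, heading=90.0, pitch=0.0))  # (160.0, 120.0)
# As the point's relative bearing grows, it slides toward the right edge,
# and beyond half the horizontal FOV it drops off-screen (None).
print(overlay_position(120.0, 0.0, heading=90.0, pitch=0.0))
print(overlay_position(140.0, 0.0, heading=90.0, pitch=0.0))
```

Redrawing the overlay with this mapping on each Core Motion/Core Location update produces exactly the PT1..PT4 sliding behavior described above.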
My questions after the picture:
Is it possible to have a view that is substantially larger than the camera view (as shown below) and use some methods (which I need to research) to move that view when the bearing/elevation changes? Is it recommended performance-wise?
Is Quartz the way to go here? What else do I need (other than, of course, AVFoundation for the camera and Core Location/Core Motion)? Since my application is iOS 7 only, I can use any new methods/APIs exclusive to iOS 7.
Aside from Ray Wenderlich's tutorial on the augmented reality game, are there any tutorials you know of that could help me with this endeavor?
Have a look at the following; each article or link covers a different key piece required to build your final product. You will eventually be using a combination of geolocation, the compass, and the iPhone's incoming gyroscope data.
Reading all the references and implementing them one by one in separate projects will give you a solid start on combining them into your application. First, though, you need a solid understanding of how to work with what you learn and how to apply it to your project.
References:
A cool project from Ray Wenderlich that teaches you how to use GPS location coordinates in your application:
Augmented reality location based tutorial
The next two links show you how to grab gyroscope data to find the pitch, yaw, and roll, and determine the device's current orientation in space.
Apple gyroscope example app
Another core motion gyroscope example
This one will teach you how to use the compass:
Ray Wenderlich's augmented reality compass tutorial for iOS
Here is some more augmented reality material on overlaying content on the camera view:
iPhone AR Toolkit
Augmented reality marker tracking tutorial