ArCore: Get Pose for object oriented normal to the plane but rotated toward the camera

I am writing an application using ARCore. I get an object's pose from a HitResult, so the object is placed correctly on the plane but is not correctly oriented/rotated toward the smartphone. For a more complete problem description:
The object is a 2D rectangle and is intended to stay inside the plane (= same normal vectors). The rectangle also needs to "point" at the camera (= the nearest border needs to be aligned with the smartphone screen).
currentObstaclePose = hit.getHitPose().extractTranslation().compose(frame.getCamera().getPose().extractRotation())
Using this approach I do not get matching normal vectors for the object and the plane.
I have no idea how to construct the object's quaternion in a way that achieves my goal. Thanks in advance for your help.

It sounds like you want to set the 'look direction' of the renderable.
Assuming you are using Sceneform, which I see is tagged, you can use a TransformableNode and its setLookDirection method.
The example below sets the direction of the renderable based on the look-direction vector you pass in:
val newAnchorNode = AnchorNode(newAnchor)
val transNode = TransformableNode(arFragment.transformationSystem)
// Point the node's forward axis along the given direction;
// Vector3.up() is the up vector used to resolve the roll
transNode.setLookDirection(Vector3(0f, xPoint.toFloat(), yPoint.toFloat()), Vector3.up())
transNode.renderable = yourRenderable
transNode.setParent(newAnchorNode)
newAnchorNode.setParent(arFragment.arSceneView.scene)
setAnchorNodes.add(newAnchorNode)
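If you are on plain ARCore without Sceneform, here is a minimal sketch of constructing the quaternion yourself (Kotlin; planeFacingPose is a hypothetical helper, built only on the public Pose/Camera API): keep the hit pose's +Y axis (the plane normal) and spin the pose around it until its +Z axis points at the camera.

import com.google.ar.core.Camera
import com.google.ar.core.HitResult
import com.google.ar.core.Pose
import kotlin.math.atan2
import kotlin.math.cos
import kotlin.math.sin

// Hypothetical helper: a pose at the hit point whose +Y stays the plane
// normal and whose +Z points toward the camera within the plane.
fun planeFacingPose(hit: HitResult, camera: Camera): Pose {
    val hitPose = hit.hitPose
    // Camera position expressed in the hit pose's local frame
    val camPos = floatArrayOf(camera.pose.tx(), camera.pose.ty(), camera.pose.tz())
    val local = hitPose.inverse().transformPoint(camPos)
    // In-plane heading that points local +Z at the camera
    val angle = atan2(local[0], local[2])
    // Quaternion for a rotation of `angle` about the local Y axis
    val spin = Pose.makeRotation(0f, sin(angle / 2f), 0f, cos(angle / 2f))
    // Same translation and normal as the hit; only the heading changes
    return hitPose.compose(spin)
}

Create the anchor from the returned pose, or recompute it per frame if the rectangle should keep turning to face the user as they walk around.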

Related

Hit test along the Y axis from the device/camera location?

My application involves a user pointing their phone like a remote (as opposed to an 'AR window') at virtual nodes. (See illustration.) These nodes are placed manually in the environment and aren't anchored to a physical plane; they will be floating.
To do this, I need to cast along the Y axis from the camera/screen center and test for any hits along that ray. In Unity I would do this with a raycast hit test from the camera's transform.up, but I'm not sure how to do this in SceneKit.
I have had success using sceneView.hitTest on the Z axis, but this isn't my ultimate use case. I have also tried hitTestWithSegment(from:to:options:), using the camera's world position as the from point, but I'm not sure how to get the to point. I'm guessing I would need to cast a ray along the local Y axis and get a point along it, which just leads me back to my original problem.
Any tips on where to look? Is there a convenience in SceneKit similar to casting from a local transform.up?
Thank you in advance!
Illustration: (image omitted)
Update:
Per Josh Homann's answer below, I wound up solving it like so:
// Set the local segment end and convert it to world space
let segmentLocalEnd: SCNVector3 = SCNVector3(0, 20, 0)
let segmentWorldEnd: SCNVector3 = sceneView.pointOfView!.convertPosition(segmentLocalEnd, to: nil)

// Test for hits along the segment
let hitResults: [SCNHitTestResult] = sceneView.scene.rootNode.hitTestWithSegment(
    from: sceneView.pointOfView!.worldPosition,
    to: segmentWorldEnd,
    options: [SCNHitTestOption.firstFoundOnly.rawValue: true]
)
You already have most of the answer. In an AR view you would project a ray from the camera along the look-at vector. In your case, though, you want to go perpendicular to the look-at vector, so instead you project a ray from the camera along the up vector (0, 1, 0) (assuming you haven't rotated your space and positive Y is up). In SceneKit you can't actually test along an infinite ray, so just pick a sufficiently large value and pass it to hitTestWithSegment(from:to:options:).

ARKit + Core location - points are not fixed on the same places

I'm developing an iOS AR application using ARKit + Core Location. The points, which are displayed on the map using coordinates, move from place to place as I walk, but I need them to stay in the same place.
Here you can see the example of what I mean:
https://drive.google.com/file/d/1DQkTJFc9aChtGrgPJSziZVMgJYXyH9Da/view?usp=sharing
Could you help me handle this issue? How can I keep points fixed in place using coordinates? Any ideas?
Thanks.
It looks like you attach objects to planes. However, as you move, ARKit extends the existing planes. As a result, if you put points at, say, the center of a plane, that center keeps updating. You need to recalculate the point's coordinates and place the objects accordingly.
The alternative is not to add objects to planes (or relative to them). If you need to "put" an object on a plane, the best way is to wait until the plane's direction has stabilized (it will not change significantly as you move), select the point on the plane where you want to put your object, convert that point's coordinates to world coordinates (so that if the plane later changes size, your coordinate is unaffected), and finally add the object to the root node (or another node that is not related to the plane).
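The key step is a single point transform, done once (in SceneKit it is node.convertPosition(_:to: nil)). A minimal sketch of the underlying math, in Kotlin for consistency with the other snippets here; planePointToWorld and the column-major layout are assumptions for illustration, not an ARKit API:

// Minimal sketch (assumption: not an ARKit API). Convert a plane-local point
// to world space once, using the plane's 4x4 column-major world transform,
// then anchor the object to the world-space result instead of the plane.
fun planePointToWorld(worldFromPlane: FloatArray, p: FloatArray): FloatArray {
    require(worldFromPlane.size == 16 && p.size == 3)
    return floatArrayOf(
        worldFromPlane[0] * p[0] + worldFromPlane[4] * p[1] + worldFromPlane[8] * p[2] + worldFromPlane[12],
        worldFromPlane[1] * p[0] + worldFromPlane[5] * p[1] + worldFromPlane[9] * p[2] + worldFromPlane[13],
        worldFromPlane[2] * p[0] + worldFromPlane[6] * p[1] + worldFromPlane[10] * p[2] + worldFromPlane[14]
    )
}

Because the result is captured once in world space, later growth of the plane no longer moves the object.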

How to put an object in the air?

It seems HitResult only gives us an intersection with a surface (plane) or a point cloud. How can I get a point in mid-air from my click, and thus place an object floating in the air?
It really depends on what you mean by "in the air". Two possibilities I see:
"Above a detected surface": do a normal hit test against a plane and offset the returned pose by some Y distance to get the hovering location. For example:
Pose.makeTranslation(0, 0.5f, 0).compose(hitResult.getHitPose())
returns a pose that is 50 cm above the hit location. Create an anchor from this and you're good to go. You could also just create the anchor at the hit location and compose it with the Y translation each frame, which allows animating the hover height.
"Floating in front of the current device position" For this you probably want to compose a translation on the right hand side of the camera pose:
frame.getCamera().getPose().compose(Pose.makeTranslation(0, 0, -1.0f)).extractTranslation()
gives you a translation-only pose that is 1 m in front of the center of the display. If you want to be in front of a particular screen location, I put some code in this answer to do screen-point-to-world-ray conversion.
Apologies if you're in Unity/Unreal; your question didn't specify, so I assumed Java.
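For reference, the two cases combined in one hedged Kotlin sketch; session, frame, and hitResult are assumed to come from your frame-update loop, and the calls are the same ones used above:

import com.google.ar.core.Pose

// (1) Anchor hovering 50 cm above the hit location on the plane.
val hoverPose = Pose.makeTranslation(0f, 0.5f, 0f).compose(hitResult.hitPose)
val hoverAnchor = session.createAnchor(hoverPose)

// (2) Translation-only anchor 1 m in front of the camera.
val frontPose = frame.camera.pose
    .compose(Pose.makeTranslation(0f, 0f, -1f))
    .extractTranslation()
val frontAnchor = session.createAnchor(frontPose)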
The reason you so often see a hit result interpreted as the desired position is that there is actually no closed-form solution for this user interaction: of the infinitely many positions along the ray from the camera into the scene, which one did the user intend? The 2D coordinates of a click leave the third dimension undefined.
Since you said "middle of the air", why not take the midpoint between the camera position and the hit result?
You can extract the current position using Pose.getTranslation: https://developers.google.com/ar/reference/java/com/google/ar/core/Pose.html#getTranslation(float[],%20int)
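A short sketch of that midpoint idea (Kotlin; session, frame, and hitResult are assumed to come from the frame update):

import com.google.ar.core.Pose

// Anchor at the midpoint between the camera and the hit point.
val cam = frame.camera.pose
val hit = hitResult.hitPose
val midPose = Pose.makeTranslation(
    (cam.tx() + hit.tx()) / 2f,
    (cam.ty() + hit.ty()) / 2f,
    (cam.tz() + hit.tz()) / 2f
)
val midAnchor = session.createAnchor(midPose)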

Marker scale and switching to markerless (Kudan + Unity)

I'm trying to use Kudan AR in a project, and I have a couple of questions:
1) The relation of marker size to the scene seems pretty weird to me. For example, I'm using a 150x150 px image as a marker, and in the scene it occupies 150 units! This forces all my objects to be extremely large, sometimes extending beyond the camera's far plane, which breaks the augmentation. Is this correct, or am I missing something?
2) I'm trying to use a marker to define the starting position of the augmentation and then switch to markerless tracking for a broader experience. There is sample code using the native iOS lib (https://wiki.kudan.eu/Marker_to_Markerless), but no reference on how to do it in Unity. This is what I'm trying:
markerlessDriver.localScale = new Vector3(markerDriver.localScale.x, markerDriver.localScale.y, markerDriver.localScale.z);
markerlessDriver.localPosition = markerDriver.localPosition;
markerlessDriver.localRotation = markerDriver.localRotation;
target.SetParent(markerlessDriver);
tracker.ChangeTrackingMethod(markerlessTracking);
// from the floor placer.
Vector3 floorPosition; // The current position in 3D space of the floor
Quaternion floorOrientation; // The current orientation of the floor in 3D space, relative to the device
tracker.FloorPlaceGetPose(out floorPosition, out floorOrientation);
tracker.ArbiTrackStart(floorPosition, floorOrientation);
It switches, but the position/rotation of the model goes off. Any idea how this can be done?
Thanks in advance!

Viewpoint for X3D Centering

Can anyone please help me calculate the center of rotation and position of an X3D object?
I've noticed that the aopt tool by InstantReality adds something like:
<Viewpoint DEF='AOPT_CAM' centerOfRotation='x y z' position='x y z'/>
The result is nice: the object is properly zoomed, centered, and the center of rotation is somehow perfectly "inside" the object (the x, y, z center).
I must avoid using aopt. How can I obtain that (e.g., via JavaScript), perhaps by looping through the XML Coordinate points and doing some calculations?
I'm using X3DOM to render the object.
Many thanks.
"AOPT_CAM" is the name of the Viewpoint. The centerOfRotation and position values are automatically computed by the Browser (InstantReality in your case).
In order to compute these values by yourself you need to know your object size (BoundingBox) and do some math to compute where the Viewpoint should be located ('position' attribute) in your local coordinate system. You also need to know the object displacement in the coordinate system. If not specified this should be (0,0,0)
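A sketch of that math (Kotlin for consistency with the earlier snippets; in X3DOM you would do the same loop over the Coordinate points in JavaScript): centerOfRotation is the bounding-box center, and position backs the camera off along +Z until a sphere enclosing the box fits the field of view (the X3D Viewpoint default is 0.785 rad).

import kotlin.math.sqrt
import kotlin.math.tan

data class Vec3(val x: Float, val y: Float, val z: Float)

// Hedged sketch: derive centerOfRotation and position for a Viewpoint from
// an axis-aligned bounding box (min/max over all Coordinate points).
fun frameObject(min: Vec3, max: Vec3, fovY: Float = 0.785f): Pair<Vec3, Vec3> {
    // Center of rotation = box center
    val center = Vec3((min.x + max.x) / 2f, (min.y + max.y) / 2f, (min.z + max.z) / 2f)
    // Radius of a sphere that encloses the box
    val dx = max.x - min.x
    val dy = max.y - min.y
    val dz = max.z - min.z
    val radius = 0.5f * sqrt(dx * dx + dy * dy + dz * dz)
    // Distance at which the sphere fits the vertical field of view
    val dist = radius / tan(fovY / 2f)
    // A Viewpoint looks down -Z by default, so back off along +Z
    val position = Vec3(center.x, center.y, center.z + dist)
    return center to position // (centerOfRotation, position)
}

Write the two resulting vectors into the Viewpoint's centerOfRotation and position attributes to reproduce the aopt behavior.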
