ARKit: Changing node position according to a physical object - iOS

I am trying to move a node relative to a face, so that if the user's face moves right, a diamond shape moves right by exactly the same x offset. I have done this successfully using ARFaceTrackingConfiguration. However, if there is a large distance between the iPhone and the face, the renderer delegate method stops firing.
So I guess ARFaceTrackingConfiguration is not meant to be used at long range, because it relies on the depth sensors, which apparently don't support that distance.
So my question is: does ARKit support adding nodes relative to a physical object, and when that object moves, will it keep updating me with the object's position so that I can update the node?

You seem to have answered your own question.
Yes, with ARKit (and the scene graph / renderer APIs of your choice, such as SceneKit), you can place virtual content such that it moves with the tracked face. In ARSCNView, all you need to do is assign that content as a child of the node you get from renderer(_:didAdd:for:) — SceneKit automatically takes care of moving the node whenever ARKit reports that the face has moved.
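For instance, with an ARSCNView running a face-tracking session, the delegate callback might look like the following minimal sketch (ViewController and diamondNode are placeholder names for your own view controller and content node):

import UIKit
import ARKit
import SceneKit

// Minimal sketch: content parented to the face node follows the face automatically.
class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!
    // Stand-in geometry for the diamond shape.
    let diamondNode = SCNNode(geometry: SCNSphere(radius: 0.01))

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.session.run(ARFaceTrackingConfiguration())
    }

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARFaceAnchor else { return }
        // SceneKit keeps children of the face node aligned with the face as ARKit tracks it.
        node.addChildNode(diamondNode)
    }
}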
If ARKit cannot track the face because it's outside the usable range of the TrueDepth camera… then it's not tracking the face. (Welcome to tautology club.) That means it doesn't know where the face is, can't tell you how the face is moving, and thus can't automatically move virtual content to follow the face.

Related

Why do we use fixed focus for AR tracking in ARCore?

I am using ARCore to track an image. Based on the following reference, the camera's FocusMode should be set to FIXED for better AR tracking performance.
Since we can get the camera's intrinsic focal length for each frame, why do we need to use a fixed focus?
With a fixed camera focus, ARCore can calculate parallax more reliably (no near or distant real-world objects drift out of focus), so your camera tracking will be reliable and accurate. At the tracking stage, your device must be able to clearly distinguish all the textures of surrounding objects and their feature points in order to build a correct 3D scene.
The scene-understanding stage requires a fixed focus as well (to correctly detect planes, estimate lighting intensity and direction, etc.). That's what you expect from ARCore, isn't it?
A fixed focus also guarantees that your in-focus rendered 3D model is placed in the scene alongside real-world objects that are in focus too. However, if you're using the Depth API, you can defocus both real-world and virtual objects.
P.S.
In the future ARCore engineers may change the aforementioned behaviour of camera focus.

Can I change anchors or move images with ARKit and ARSKView?

I am trying to build a compass with ARKit, but I am having trouble moving images. The compass is made up of four sprites, one for each cardinal direction, and each hovers one metre away from the camera in its specific location. I would like the compass to surround the camera, so that even when the user moves, the compass still surrounds them. However, I do not know how to move the positions of these images after they are set for the first time.
I know it is possible to move nodes in an ARSCNView, but I am using an ARSKView, and as far as I know it is impossible to move an anchor. How could I change the positions of the images if I cannot move the anchor? Is this even possible with ARSKView, or should I give up and switch to ARSCNView?
Your use case suggests you don't really need elements that "move" in 3D, but elements whose 3D position is fixed relative to the camera. This sort of thing is trivial in a 3D framework like SceneKit, but much harder when you're trying to work in the space between ARKit (which mostly is concerned with world-relative positions) and a 2D framework that "fakes" a 3D effect for 2D content.
In SceneKit, you can make nodes follow the camera by making them children of the pointOfView node. Of course, that'll make them follow not just the camera's position but also orientation. If you want to selectively follow some aspects of the camera's transform but not others, you'd do well to combine node hierarchy with constraints — for example, a child of the camera node with a world space orientation constraint always returning identity, so that the node follows the camera's position but not its orientation.
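As a rough sketch of that second approach (compassNode and sceneView are placeholder names for your own content node and ARSCNView; untested, but the calls are standard SceneKit API):

import ARKit
import SceneKit

// Make the compass follow the camera's position but ignore its rotation.
func attachCompass(_ compassNode: SCNNode, to sceneView: ARSCNView) {
    guard let cameraNode = sceneView.pointOfView else { return }

    // As a child of the camera, the node inherits the camera's full transform...
    cameraNode.addChildNode(compassNode)

    // ...then cancel the inherited rotation with a world-space orientation
    // constraint that always returns the identity quaternion.
    let keepWorldOrientation = SCNTransformConstraint.orientationConstraint(inWorldSpace: true) { _, _ in
        SCNQuaternion(x: 0, y: 0, z: 0, w: 1)
    }
    compassNode.constraints = [keepWorldOrientation]
}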

3D objects keep moving in ARKit

I am working on an AR app in which I place one 3D model in front of the device without horizontal surface detection.
Based on this 3D model's transform, I create an ARAnchor object. ARAnchor objects are useful for tracking real-world objects and 3D objects in ARKit.
Code to place the ARAnchor:
ARAnchor *anchor = [[ARAnchor alloc] initWithTransform:model3D.simdTransform]; // simd transform of the 3D model node
[self.sceneView.session addAnchor:anchor];
Issue:
Sometimes I find that the 3D model starts moving in a random direction without stopping.
Questions:
Is my code to create the ARAnchor correct? If not, what is the correct way to create an anchor?
Are there any known problems with ARKit where objects start moving? If yes, is there a way to fix it?
I would appreciate any suggestions and thoughts on this topic.
EDIT:
I am placing the 3D object when the AR tracking state is normal. The 3D object is placed (without horizontal surface detection) when the user taps on the screen. As soon as the 3D model is placed, it starts moving without stopping, even if the device is not moving.
In fact, you don't need an ARAnchor; just set the position of the 3D object in front of the user, as in the sketch below.
If the surface doesn't have enough detail to determine a position, the object won't attach to the surface. Find a plane with more texture and try again.
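A minimal sketch of that approach (sceneView and modelNode are placeholder names for your ARSCNView and your 3D model node; the half-metre offset is just an example):

import ARKit
import SceneKit
import simd

// Place the model half a metre in front of the current camera, without any ARAnchor.
func placeModelInFrontOfCamera(_ modelNode: SCNNode, in sceneView: ARSCNView) {
    guard let camera = sceneView.pointOfView else { return }

    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.5                 // 0.5 m along the camera's -Z axis

    modelNode.simdTransform = camera.simdTransform * translation
    sceneView.scene.rootNode.addChildNode(modelNode)
}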

Can ARCore track moving surfaces?

ARCore can track static surfaces according to its documentation, but the documentation doesn't mention anything about moving surfaces, so I'm wondering whether ARCore can track flat surfaces (with enough feature points, of course) that move around.
Yes, you can definitely track moving surfaces and moving objects in ARCore.
If you track a static surface with ARCore, the resulting feature points are mainly suitable for so-called camera tracking. If you track a moving object or surface, the resulting feature points are mostly suitable for object tracking.
You can also mask the moving and non-moving parts of the image and, of course, invert the six-degrees-of-freedom (translate XYZ and rotate XYZ) camera transform.
Watch this video to find out how they succeeded.
Yes, ARCore tracks feature points, estimates surfaces, and also allows access to the image data from the camera, so custom computer vision algorithms can be written as well.
I guess it should be possible, at least theoretically.
However, I've tested it with some objects in my house (running an S8 and an app built with Unity and ARCore), and the problem is more or less that it refuses to even start tracking movable things like books and plates: because of the feature points of the surrounding floor and so on, it always picks up on those first.
Edit: I did some more testing and managed to get it to track a bed sheet. It does not, however, adjust to any movement. As of now the plane stays fixed, although I saw some wobbling, but I guess that was because it tried to adjust the positioning of the plane once its original feature points were moved.

Speed Tracking a moving object from another moving object

I am new to computer vision and need some advice on where to start.
The project is to estimate the speed of a moving object (A) relative to the moving object (B) that is tracking it (A).
What would I need to do if I assume:
the background appears to be static (making the background a single colour)
the background is moving (harder)
I want to do this using OpenCV and C++.
Any advice on where to start, or on the general steps, would be very much appreciated. Thanks in advance!
If your camera is attached to object B, you will first have to design an algorithm to detect and track object A. A simplified algorithm could be:
Loop the steps below:
Capture video frame from the camera.
If object A was not in the previous frame, detect object A (manual initialisation, detection using known features, etc.). Otherwise, track the object using its previous position and a tracking algorithm (OpenCV offers quite a few).
Detect and record the current location of the object in image coordinates.
Convert the location to real world coordinates.
If previous locations and timestamps for the object were available, calculate its speed.
The best way to get started is with at least a simple C++ program that captures frames from a camera, then keep adding steps for detection and tracking.
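The speed step itself (point 5 above) is just differencing positions over time. Here is a minimal sketch of that calculation, shown in Swift for brevity; the same arithmetic carries over directly to C++ (e.g. with cv::Point3d). It assumes the tracked positions have already been converted to real-world units:

import Foundation
import simd

// One tracked observation of object A, already converted to real-world coordinates.
struct Observation {
    let position: SIMD3<Double>   // metres
    let timestamp: TimeInterval   // seconds
}

// Speed of A relative to the tracking object B, from two consecutive observations.
func relativeSpeed(from previous: Observation, to current: Observation) -> Double {
    let dt = current.timestamp - previous.timestamp
    guard dt > 0 else { return 0 }
    return simd_distance(previous.position, current.position) / dt   // metres per second
}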
