3D objects keep moving in ARKit (iOS)

I am working on an AR app in which I place one 3D model in front of the device without horizontal surface detection.
Based on this 3D model's transform, I create an ARAnchor object. ARAnchor objects are used to track the real-world positions and orientations of objects in ARKit.
Code to place ARAnchor:
ARAnchor *anchor = [[ARAnchor alloc] initWithTransform:modelNode.simdTransform]; // simd transform of the 3D model's node
[self.sceneView.session addAnchor:anchor];
Issue:
Sometimes, I find that the 3D model starts moving in a random direction without stopping.
Questions:
Is my code to create the ARAnchor correct? If not, what is the correct way to create an anchor?
Are there any known problems with ARKit where objects start moving? If so, is there a way to fix it?
I would appreciate any suggestions and thoughts on this topic.
EDIT:
I am placing the 3D object when the AR tracking state is normal. The 3D object is placed (without horizontal surface detection) when the user taps the screen. As soon as the model is placed, it starts moving without stopping, even though the device is not moving.

In fact, you don't need an ARAnchor; just set the position of the 3D object in front of the user.
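A minimal sketch of that approach, using a plain column-major 4×4 matrix so the math is visible. In a real app the input would be `frame.camera.transform` (a `simd_float4x4`) and the result would be assigned to the node's `simdPosition`; the matrix type and function name here are illustrative assumptions.

```swift
// 4 columns of 4 floats, column-major like simd_float4x4.
typealias Mat4 = [[Float]]

func multiply(_ a: Mat4, _ b: Mat4) -> Mat4 {
    var r = Mat4(repeating: [Float](repeating: 0, count: 4), count: 4)
    for c in 0..<4 {
        for row in 0..<4 {
            for k in 0..<4 { r[c][row] += a[k][row] * b[c][k] }
        }
    }
    return r
}

let identity: Mat4 = [[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]]

// Camera space looks down -z, so a point `distance` metres in front of the
// camera is the translation column of cameraTransform * T(0, 0, -distance).
func positionInFront(cameraTransform: Mat4, distance: Float) -> [Float] {
    var translation = identity
    translation[3][2] = -distance          // column 3, z component
    let world = multiply(cameraTransform, translation)
    return [world[3][0], world[3][1], world[3][2]]
}
```

With the camera at the origin this yields (0, 0, -0.5) for a half-metre offset, i.e. the object sits directly in front of the user with no anchor involved.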

If the surface doesn't have enough detail to determine a position, the object won't attach to it. Find a plane with more texture and try again.

Related

ARKit plane with real world object above it

Thanks in advance for reading my question. I am really new to ARKit and have followed several tutorials which showed me how to use plane detection and apply different textures to the planes. The feature is really amazing, but here is my question. Would it be possible for the player to place the plane over the desired area first and then interact with the new ground? For example, could I use plane detection to put a grass texture over an area and then drive a real RC car over it? Just like driving it on real grass.
I have tried out plane detection on my iPhone 6s, and what I found is that when I put anything from the real world on top of the plane surface, it simply gets covered by the plane. Could you give me some clue whether it is possible to make the plane stay on the ground without covering the real-world object?
I think this is what you are searching for:
ARKit hide objects behind walls
Another way, I think, is to track the position of the real-world object, for example with Apple's Turi Create or Core ML (or both), and then avoid drawing your content at the affected position.
Tracking moving objects is not supported, and that is exactly what would be needed to make a real object interact with a virtual one.
That said, I would recommend using 2D image recognition and "reading" every camera frame to detect the object as it moves through the camera's view space. Look for the AVCaptureVideoDataOutputSampleBufferDelegate protocol on Apple's developer site.
Share your code and I could help with some ideas.

ARKit: Changing node position according to a physical object

I am trying to move a node relative to a face, so that if the user's face moves right, a diamond shape should move right by exactly the same x offset. I have done this perfectly using ARFaceTrackingConfiguration. However, if there is a big distance between the iPhone and the face, the renderer delegate method no longer fires.
So I guess ARFaceTrackingConfiguration is not meant to be used at long distances, because it relies on the depth sensor, which apparently doesn't support that range.
So my question is: does ARKit support adding nodes relative to a physical object, so that when the object moves it updates me with the object's position and I can update the node?
You seem to have answered your own question.
Yes, with ARKit (and the scene graph / renderer APIs of your choice, such as SceneKit), you can place virtual content such that it moves with the tracked face. In ARSCNView, all you need to do is assign that content as a child of the node you get from renderer(_:didAdd:for:) — SceneKit automatically takes care of moving the node whenever ARKit reports that the face has moved.
If ARKit cannot track the face because it's outside the usable range of the TrueDepth camera… then it's not tracking the face. (Welcome to tautology club.) That means it doesn't know where the face is, can't tell you how the face is moving, and thus can't automatically move virtual content to follow the face.
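The parenting behaviour described above can be sketched numerically. This toy model uses translation-only transforms and a hypothetical Node type; in a real ARSCNView the parent would be the SCNNode delivered by renderer(_:didAdd:for:), whose transform ARKit updates every frame.

```swift
struct Node {
    var localPosition: (x: Float, y: Float, z: Float)
}

// A child's world position is its local offset composed with the parent's
// transform, so the child follows the parent automatically.
func worldPosition(of child: Node, under parent: Node) -> (x: Float, y: Float, z: Float) {
    (parent.localPosition.x + child.localPosition.x,
     parent.localPosition.y + child.localPosition.y,
     parent.localPosition.z + child.localPosition.z)
}

var faceNode = Node(localPosition: (0, 0, -0.3))   // where tracking says the face is
let diamond = Node(localPosition: (0, 0.1, 0))     // 10 cm above the face origin
faceNode.localPosition = (0.05, 0, -0.3)           // tracking reports the face moved right
```

After the face node moves, the diamond's world position shifts by the same x offset with no extra code, which is exactly what SceneKit's scene graph does for you.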

How to Anchor large 3D models in ARKit (Unity)?

I am new to ARKit and I'm using Unity along with it.
So I just got one of my custom models to be displayed, and I can anchor it to the ground by tapping on a detected plane. However, my model is pretty big; it's a life-sized shack.
The problem is that when I move around too much, the model loses its anchor point, becomes unstable, and starts moving around all over the place. This wasn't a problem when it was a smaller model, only when I scaled it up.
Has anyone else had this problem? Have you gotten it to work?
Thanks!
It's kind of inherent to plane detection. If you look up at the ceiling, for instance, the device no longer sees the floor and is basically using only the phone's gyroscope and accelerometer to know how it has moved. As far as I know there is no real solution to this, since the object is only anchored to the detected plane.

Object Detection with moving camera

I understand that with a moving object and a stationary camera, it is easy to detect objects by subtracting the previous and current camera frames. It is also possible to detect moving objects when the camera is moving freely around the scene.
But is it possible to detect stationary objects with a camera rotating around the object? The camera's movement is predefined, and it is restricted to a specified path around the object.
Try the camshift demo, which is located in the OpenCV source tree at samples/cpp/camshiftdemo.cpp. Or try other algorithms such as MeanShift, KCF, etc. These are all object-tracking algorithms.
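The frame-subtraction baseline mentioned in the question can be sketched on raw grayscale buffers. The trackers above replace this with something robust to camera motion, but the naive static-camera version is just a per-pixel threshold (the function name and threshold value are illustrative):

```swift
// Marks pixels whose absolute difference between consecutive grayscale
// frames exceeds a threshold as "moving".
func movingMask(previous: [UInt8], current: [UInt8], threshold: UInt8) -> [Bool] {
    precondition(previous.count == current.count)
    return zip(previous, current).map { p, c in
        let diff = p > c ? p - c : c - p
        return diff > threshold
    }
}
```

This breaks down as soon as the camera itself moves, because every pixel changes between frames; that is why a rotating-camera setup needs a tracker (or camera-motion compensation) rather than plain subtraction.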

Inverse zoom in/out (scaling) of AR 3D model using ARToolkit for Unity5

I have done an AR project using ARToolkit for Unity. It works fine, but the problem I'm trying to solve is inverting the scaling of the 3D model. Right now, when you take the camera further away from the marker, the 3D object gets smaller (zooms out), and if I bring the camera closer, the 3D model gets bigger. But I want the opposite of this behaviour.
Any ideas on how I go about it?
I think this is a bad idea, because it completely breaks the concept of AR (that the 3D objects are related to the real world), but it is definitely possible.
ARToolkit provides a transformation matrix for each marker. That matrix includes position and rotation, and on each iteration the object is updated with those values. What you need to do is find where that transform is applied to the object, then measure the distance to the camera and update the translation to place the object at the distance you want.
That code is in the Unity plugin, so it should be easy to find.
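One way to sketch the distance remapping, assuming a reference distance d0 at which the model should keep its natural size. The d0²/d mapping is my assumption, not something ARToolkit provides: at d = d0 the object stays put, and as the camera moves away (d grows) the object is placed closer, so it appears larger, which inverts the normal behaviour.

```swift
// Remap the measured marker-to-camera distance so apparent size inverts.
// `reference` is the distance at which the object keeps its natural size.
func invertedDistance(measured d: Float, reference d0: Float) -> Float {
    precondition(d > 0, "distance must be positive")
    return (d0 * d0) / d
}
```

You would apply this to the translation component of the marker transform each frame, keeping the rotation untouched; the Unity plugin would be C#, so this Swift sketch only shows the arithmetic.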
