Unfortunately there is no documentation for the Augmented Reality (ARButton) feature of Three.js. In the example, objects are placed without distinguishing between floor and wall. Can you explain how to correct the sizing and rotation of an added object depending on whether it sits on a wall or on the floor? ARCore does this, as explained here, but I am unable to do the same in the Three.js example.
I'm trying to detect a plane with the table classification, and I want to cover the table with an image. Imagine a chessboard covering the entire table surface.
My approach is to detect planes (which I can) and use the attached image as a repeating tile to cover the table.
I tried doing it in both RealityKit and SceneKit, and RealityKit gives the best results in terms of occlusion when there are objects on the table or when the table is near a wall.
I was able to achieve it in SceneKit by using contentsTransform together with wrapS and wrapT, producing a repeated tile pattern.
But when I do it in RealityKit, the texture just stretches across the entire plane as more and more of the plane is detected. Can someone point me in the right direction here? I am using a LiDAR-enabled device.
(Images: the source tile, the stretched result in RealityKit, and the correct tiling in SceneKit.)
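For reference, here is a minimal Swift sketch of the SceneKit technique described above. The "chessboard" asset name and the 10 cm tile size are assumptions; the plane dimensions would come from the detected ARPlaneAnchor's extent:

```swift
import SceneKit
import UIKit

// Sketch: tile a texture across a detected plane instead of stretching it.
// "chessboard" is a placeholder asset name; the 10 cm tile size is arbitrary.
func makeTiledMaterial(planeWidth: CGFloat, planeLength: CGFloat) -> SCNMaterial {
    let material = SCNMaterial()
    material.diffuse.contents = UIImage(named: "chessboard")
    material.diffuse.wrapS = .repeat   // repeat instead of clamping at the edge
    material.diffuse.wrapT = .repeat
    let tileSize: CGFloat = 0.1        // one tile per 10 cm of plane
    material.diffuse.contentsTransform = SCNMatrix4MakeScale(
        Float(planeWidth / tileSize),
        Float(planeLength / tileSize),
        1)
    return material
}
```

Re-applying this scale whenever the plane anchor updates keeps the tile size constant as the detected plane grows, which is exactly what the stretched RealityKit version is missing.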
Is finger tracking supported by ARKit 3? And if so, can ARKit 3 be used in conjunction with the face detection API for the TrueDepth camera module to spot the position of a certain finger relative to the eyes, nose, and mouth?
If not, is there an easy way to do finger tracking without going as deep as the Metal APIs?
Note: by finger tracking, I mean tracking the number of fingers and/or which finger(s) are visible.
It's possible that you can get pretty close positions for the fingers of a tracked body using ARKit 3's human body tracking feature (see Apple's Capturing Body Motion in 3D sample code). However, human body tracking requires ARBodyTrackingConfiguration, and face tracking is not supported under that configuration. Also, the finger joints are not tracked, so while you can get their approximate location from a joint that is tracked (e.g., the wrist), ARKit won't tell you which fingers are extended or retracted.
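As a sketch of what that looks like in practice (a minimal ARSessionDelegate, assuming a session already running with ARBodyTrackingConfiguration):

```swift
import ARKit

// Minimal sketch: read the tracked left-hand joint from each body anchor.
// ARKit 3 exposes hand/wrist joints but no per-finger joints.
class BodyTrackingDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // Joint transform relative to the body anchor's root joint.
            if let hand = bodyAnchor.skeleton.modelTransform(for: .leftHand) {
                // World transform = anchor transform * joint transform.
                let world = bodyAnchor.transform * hand
                let position = simd_make_float3(world.columns.3)
                print("Left hand near \(position)") // fingers are roughly here
            }
        }
    }
}
```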
Thanks in advance for reading my question. I am really new to ARKit and have followed several tutorials that showed me how to use plane detection and apply different textures to the planes. The feature is really amazing, but here is my question: would it be possible for the player to place planes over the whole desired area first and then interact with the new ground? For example, could I use plane detection to cover an area with a grass texture and then drive a real RC car over it, just like driving on real grass?
I have tried out plane detection on my iPhone 6s, and what I found is that when I put any real-world object on top of the plane surface, it simply gets covered by the plane. Could you give me a clue as to whether it is possible to make the plane stay on the ground without covering real-world objects?
I think this is what you are searching for (a sketch of the technique follows below):
ARKit hide objects behind walls
Another way, I think, is to track the position of the real-world object, for example with Apple's Turi Create or Core ML (or both), and then avoid drawing your content at the affected position.
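The linked answer boils down to rendering an invisible occluder. A minimal SceneKit sketch of that trick (the plane size is up to you, e.g. taken from a detected plane anchor):

```swift
import SceneKit

// An "occluder": a plane that writes only to the depth buffer, so it is
// invisible itself but hides any virtual content rendered behind it,
// letting the camera image of the real object show through.
func makeOccluderNode(width: CGFloat, height: CGFloat) -> SCNNode {
    let geometry = SCNPlane(width: width, height: height)
    let material = SCNMaterial()
    material.colorBufferWriteMask = [] // no color output, depth only
    geometry.firstMaterial = material
    let node = SCNNode(geometry: geometry)
    node.renderingOrder = -1 // draw before regular content so depth wins
    return node
}
```

Position this node over the real-world object, and anything virtual behind it will be hidden instead of drawn on top of it.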
Tracking moving objects is not supported, and that's actually what would be needed to make a real object interact with a virtual one.
That said, I would recommend using 2D image recognition and "reading" every camera frame to detect the object while it moves in the camera's view space. Look for the AVCaptureVideoDataOutputSampleBufferDelegate protocol on Apple's developer site.
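A minimal sketch of that delegate (error handling and camera permissions omitted; note that in an ARKit app you would more likely read ARFrame.capturedImage from the ARSession delegate instead of running a second capture session):

```swift
import AVFoundation

// Receive every camera frame and hand the pixel buffer to a 2D detector
// (Vision / Core ML); then skip drawing virtual content where the real
// object was found.
final class FrameReader: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        session.addOutput(output)
        session.startRunning()
    }

    // Called once per camera frame on the "frames" queue.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Run your object detector on pixelBuffer here.
        _ = pixelBuffer
    }
}
```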
Share your code and I could help with some ideas
ARCore can track static surfaces according to its documentation, but the documentation doesn't mention anything about moving surfaces, so I'm wondering whether ARCore can track flat surfaces (with enough feature points, of course) that move around.
Yes, you definitely can track moving surfaces and moving objects in ARCore.
If you track a static surface using ARCore, the resulting features are mainly suitable for so-called camera tracking. If you track a moving object/surface, the resulting features are mostly suitable for object tracking.
You can also mask moving/non-moving parts of the image and, of course, invert the six-degrees-of-freedom (translate XYZ and rotate XYZ) camera transform.
Watch this video to find out how they succeeded.
Yes, ARCore tracks feature points, estimates surfaces, and also allows access to the image data from the camera, so custom computer vision algorithms can be written as well.
I guess it should be possible in theory. However, I've tested it with some objects in my house (running an S8 and an app built with Unity and ARCore), and the problem is more or less that it refuses to even start tracking movable things like books and plates: due to the feature points of the surrounding floor etc., it always picks up on those first.
Edit: I did some more testing and managed to get it to track a bed sheet. It does not, however, adjust to any movement, meaning that as of now the plane stays fixed. I did see some wobbling, but I guess that was because it tried to adjust the positioning of the plane once its original feature points were moved.
How can you recognize the floor in Google ARCore?
In this video, a Cube with a Rigidbody falls at regular intervals.
After a while the system starts to recognize the floor, but the Cube slips past the floor and keeps falling.
How can I catch the Cube on the floor?
Are you doing this in Unity? If so, make sure both the Cube and the TrackedPlane game objects have colliders.
Also, here is another discussion related to adding a MeshCollider to a tracked plane using ARCoreUtils:
ARCore collider on generated planes
Direct Link to ARCoreUtils:
https://github.com/jonas-johansson/ARCoreUtils