I know that ARCore can detect flat surfaces like floors and tables, but can it detect other, less regular objects? Say I have a chair and a stool sitting on the ground in front of me. Could ARCore detect these two objects in any way? Am I restricted to using large, flat surfaces when designing AR experiences with ARCore?
If there is any documentation about detecting real-world objects I would much appreciate a pointer to it.
Any advice would be helpful. :) Thanks.
ARCore can only detect horizontal surfaces at the moment; vertical surfaces (like walls) and arbitrary objects are not supported.
However, Tango devices have better support for this through extra hardware like dual cameras and infrared sensors.
Watch this talk: https://www.youtube.com/watch?v=rFbcOGuDMPk
I know that ARKit is able to detect and classify planes on A12+ processors. It does the job reasonably well inside the house, but what about outside? Is it able to detect windows and doors if I move around a house a little? I tried it myself and the result did not satisfy me: I moved around the building a lot, and ARKit still did not distinguish the wall from the window.
I used the app from here for my tests: https://developer.apple.com/documentation/arkit/tracking_and_visualizing_planes
Am I doing everything correctly? Maybe there is some third-party library that detects house parts better?
Thanks in advance!
When you test the sample app outside and try to use ARKit to detect the surfaces on the exterior of a house, it will not work. ARKit is built to map flat surfaces and their orientations (horizontal/vertical). This means ARKit can understand that a surface is flat and is either a wall or a floor. When you attempt to "map" the exterior of a house, ARKit will only detect the vertical surfaces as walls; it cannot distinguish between walls and windows.
You will need to develop/source an AI model and run it against the camera data using CoreML to enable your app to distinguish between windows and walls on the exterior of a house.
ARKit Plane tracking documentation for reference: https://developer.apple.com/documentation/arkit/tracking_and_visualizing_planes
A couple of articles about ARKit with CoreML:
https://www.rightpoint.com/rplabs/dev/arkit-and-coreml
https://medium.com/s23nyc-tech/using-machine-learning-and-coreml-to-control-arkit-24241c894e3b
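As a rough illustration of that CoreML route, here is a minimal sketch that runs a classifier against each camera frame. The model itself is hypothetical; you would train or source a window/wall classifier yourself:

```swift
import ARKit
import Vision

final class FrameClassifier: NSObject, ARSessionDelegate {
    private let request: VNCoreMLRequest

    // Pass the underlying MLModel of whatever classifier you trained
    // or sourced (e.g. a hypothetical "WindowWallClassifier").
    init?(model: MLModel) {
        guard let vnModel = try? VNCoreMLModel(for: model) else { return nil }
        request = VNCoreMLRequest(model: vnModel)
        super.init()
    }

    // Runs the classifier against the raw camera image of each ARKit frame.
    // In a real app, throttle this and run it off the main thread.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right)
        try? handler.perform([request])
        if let top = request.results?.first as? VNClassificationObservation {
            print("Saw \(top.identifier) (confidence \(top.confidence))")
        }
    }
}
```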
[Update]
Yes, you are correct: for A12+ devices Apple does allow plane classification. I would assume the issue with exterior windows vs. interior ones is either the distance to the window (too far for the computer vision to classify properly) or that Apple has tuned it more for interior windows. The difference may seem trivial, but to a CV algorithm it's quite different.
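For reference, reading the classification is straightforward once planes are detected; a minimal sketch (remember to enable both horizontal and vertical plane detection in your ARWorldTrackingConfiguration):

```swift
import ARKit

func describe(_ anchor: ARPlaneAnchor) {
    // Classification is only populated on supported (A12+) devices.
    guard ARPlaneAnchor.isClassificationSupported else { return }

    switch anchor.classification {
    case .wall:   print("wall")
    case .window: print("window")
    case .door:   print("door")
    case .floor:  print("floor")
    default:      print("other / undetermined")
    }
}
```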
I have implemented 3D object scanning and detection with ARKit 2.0. I scanned the object from all sides. Once scanning reaches 100%, I give the object a name and save the ARReferenceObject and an image to the documents directory. Then, on a button tap, I detect the scanned object and display its name and image from the documents directory.
The object gets detected, but detection takes too much time. I have gone through Apple's documentation on best practices and limitations, but I still have some questions regarding ARKit.
Is anything wrong with how I scan or detect the object? What are the best practices for scanning a 3D object?
What are the limitations on scanning and detecting objects?
Is it possible to zoom while detecting an object?
What are the best practices for detecting an object quickly, i.e. without detection taking too much time?
ARKit engineers give the following recommendations for scanning 3D objects:
Light the object with an illuminance of 250 to 400 lux, and ensure that it’s well-lit from all sides.
Provide a light temperature of around 6500 Kelvin (D65), similar to daylight. Avoid warm or otherwise coloured light sources.
Set the object in front of a matte, middle-grey background.
For easy object scanning, use a recent, high-performance iOS device (iPhone X/Xs/Xr, iPad Pro). Scanned objects can be detected on any ARKit-supported device, but the process of creating a high-quality scan is faster and smoother on a high-performance device.
Position the object you want to scan on a surface free of other objects (like an empty tabletop).
Also, I should add four things:
Objects with non-repetitive (unlike polka dots) and non-flat textures are preferable. Scanning objects with a sparse texture takes a little longer.
Try not to scan transparent objects like a glass statuette or a jar of water. For ARKit these kinds of objects are undesirable, no matter whether their Index of Refraction (IOR) is 1.0 or 3.0.
Try not to scan highly reflective objects like a mirror or a chrome sphere. These types of objects are undesirable for ARKit too, since their "texture" depends on the angle of view.
Try not to scan objects with a chromatic dispersion effect, like the surface of a DVD or precious stones in jewellery.
Using zoom when scanning is a controversial issue.
The most robust scenario for me with ARObjectScanningConfiguration is to scan a middle-sized object from 0.5 to 1.5 meters away. In ARKit, autofocus is enabled by default.
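For orientation, here is a minimal sketch of the two phases. The transform/extent values come from your own scanning UI, and "ScannedObjects" is an assumed asset-catalog resource group:

```swift
import ARKit

// Phase 1 – scanning: run the dedicated scanning configuration.
func startScanning(in sceneView: ARSCNView) {
    let config = ARObjectScanningConfiguration()
    config.planeDetection = .horizontal
    sceneView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
}

// Once your scanning UI has boxed the object, extract and save it.
// `transform`/`extent` come from that UI; `url` is where the .arobject goes.
func saveScannedObject(from session: ARSession,
                       transform: simd_float4x4,
                       extent: simd_float3,
                       to url: URL) {
    session.createReferenceObject(transform: transform,
                                  center: .zero,
                                  extent: extent) { object, error in
        try? object?.export(to: url, previewImage: nil)
    }
}

// Phase 2 – detection: run world tracking with the saved objects.
func startDetection(in sceneView: ARSCNView) {
    let config = ARWorldTrackingConfiguration()
    config.detectionObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "ScannedObjects", bundle: nil) ?? []
    sceneView.session.run(config)
}
```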
All the aforementioned recommendations are general. Every object is unique, and each unique object takes a different amount of time to scan.
Hope this helps.
Thanks in advance for reading my question. I am really new to ARKit and have followed several tutorials which showed me how to use plane detection and apply different textures to the planes. The feature is really amazing, but here is my question. Would it be possible for the player to place the plane all over the desired area first and then interact with the new ground? For example, could I use plane detection to detect an area, put a grass texture over it, and then drive a real RC car over it? Just like driving it on real grass.
I have tried out plane detection on my iPhone 6s, but what I found is that when I put a real-world object on top of the plane surface, it simply got covered by the plane. Could you please give me a clue whether it is possible to make the plane stay on the ground without covering real-world objects?
I think this is what you are searching for:
ARKit hide objects behind walls
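The gist of that approach, as a minimal SceneKit sketch: a material that writes only to the depth buffer hides virtual content behind it while still letting the camera image show through:

```swift
import SceneKit

// Turn a node into an "occluder": it writes only to the depth buffer,
// so the camera image shows through while virtual content behind it is hidden.
func makeOccluder(_ node: SCNNode) {
    let material = SCNMaterial()
    material.colorBufferWriteMask = []      // draw no visible pixels
    material.writesToDepthBuffer = true     // but still block content behind
    node.geometry?.materials = [material]
    node.renderingOrder = -1                // render before regular content
}
```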
Another way, I think, is to track the position of the real-world object, for example with Apple's Turi Create or CoreML (or both), and then avoid drawing your content at the affected position.
Tracking moving objects is not supported, and that's actually what would be needed to make a real object interact with a virtual one.
That said, I would recommend using 2D image recognition and "reading" every camera frame to detect the object as it moves through the camera's view space. Look for the AVCaptureVideoDataOutputSampleBufferDelegate protocol on Apple's developer site.
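A minimal sketch of that delegate (note that if an ARSession is already running, you would read frames from its delegate instead, since both can't own the camera at once):

```swift
import AVFoundation

final class FrameReader: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        session.addOutput(output)
        session.startRunning()
    }

    // Called for every camera frame; run your 2D detection on the pixel buffer.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // ... feed pixelBuffer to your detector ...
        _ = pixelBuffer
    }
}
```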
Share your code and I could help with some ideas
According to its documentation, ARCore can track static surfaces, but it doesn't mention anything about moving surfaces, so I'm wondering whether ARCore can track flat surfaces (with enough feature points, of course) that move around.
Yes, you definitely can track moving surfaces and moving objects in ARCore.
If you track a static surface using ARCore, the resulting features are mainly suitable for so-called camera tracking. If you track a moving object/surface, the resulting features are mostly suitable for object tracking.
You can also mask moving/non-moving parts of the image and, of course, invert the six-degrees-of-freedom (translate XYZ and rotate XYZ) camera transform.
Watch this video to find out how they succeeded.
Yes, ARCore tracks feature points, estimates surfaces, and also allows access to the image data from the camera, so custom computer vision algorithms can be written as well.
I guess it should be possible theoretically.
However, I've tested it with some objects in my house (running an S8 and an app built with Unity and ARCore),
and the problem is more or less that it refuses to even start tracking movable things like books and plates:
due to the feature points of the surrounding floor etc., it always picks up on those first.
Edit: I did some more testing and managed to get it to track a bed sheet, but it does not adjust to any movement. As of now the plane stays fixed, although I saw some wobbling; I guess that was because it tried to adjust the positioning of the plane once its original feature points were moved.
This is probably an insanely hard question. So far ARKit works with 3D models that are built in 3D modelling software. I was wondering if there was a way to use the iPhone camera to scan a 3D object (let's say a car) and then use it in ARKit.
Any open source projects available which do this on other platforms or iOS?
You are looking for software in the "photogrammetry" category. There are various software tools that will stitch your photos into 3D models; one option is Autodesk Remake, which has a free version.
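If you can run the processing on a Mac, Apple also ships its own photogrammetry API these days (RealityKit Object Capture, macOS 12+). A minimal sketch with placeholder paths:

```swift
import RealityKit

// Stitch a folder of photos into a USDZ model (macOS 12+; paths are placeholders).
let photos = URL(fileURLWithPath: "/path/to/photos", isDirectory: true)
let model = URL(fileURLWithPath: "/path/to/car.usdz")

let session = try PhotogrammetrySession(input: photos)
try session.process(requests: [.modelFile(url: model, detail: .reduced)])

Task {
    for try await output in session.outputs {
        if case .processingComplete = output {
            print("Model written to \(model.path)")
        }
    }
}
```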
ARKit/RealityKit on an iPad/iPhone with a LiDAR scanner lets you reconstruct the surrounding scene and obtain 3D geometry with an occlusion material applied. This geometry allows you to occlude any object, including a human being, and physically "interact" with the generated mesh. The LiDAR's working distance is up to 5 meters.
However, scanning a car isn't a good idea due to the paint's high reflectivity.
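A minimal RealityKit sketch of that setup, assuming an existing `arView`:

```swift
import ARKit
import RealityKit

func startSceneReconstruction(on arView: ARView) {
    // Requires a device with a LiDAR scanner.
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else { return }

    let config = ARWorldTrackingConfiguration()
    config.sceneReconstruction = .mesh

    // Use the reconstructed mesh to occlude virtual content and for physics.
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.environment.sceneUnderstanding.options.insert(.physics)
    arView.session.run(config)
}
```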