I know that ARKit is able to detect and classify planes on A12+ processors. It does the job reasonably well inside the house, but what about outside? Can it detect windows and doors if I move around a house a little? I tried it myself and the result did not satisfy me: I moved around the building quite a lot, and ARKit still did not distinguish the wall from the window.
I used the app from here for my tests: https://developer.apple.com/documentation/arkit/tracking_and_visualizing_planes
Am I doing everything correctly? Maybe there is some third-party library that detects house parts better?
Thanks in advance!
When you test the sample app outside and try to use ARKit to detect the surfaces on the exterior of a house, it will not work the way you expect. ARKit is built to map flat surfaces and their orientations (horizontal/vertical). This means ARKit can understand that a surface is flat and whether it is a wall or a floor. When you attempt to "map" the exterior of a house, ARKit will only detect the vertical surfaces as walls; it cannot distinguish between walls and windows.
You will need to develop/source an AI model and run it against the camera data using CoreML to enable your app to distinguish between windows and walls on the exterior of a house.
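As a rough illustration, here is a minimal sketch of feeding ARKit's camera frames into a Core ML classifier through Vision. The model itself (a hypothetical "window vs. wall" classifier) is something you would have to train or source; none of the names below come from Apple's samples.

```swift
import ARKit
import CoreML
import Vision

// Sketch: run a (hypothetical) window-vs-wall Core ML classifier on each ARKit frame.
class FrameClassifier: NSObject, ARSessionDelegate {
    private let visionModel: VNCoreMLModel

    init?(model: MLModel) {
        guard let vnModel = try? VNCoreMLModel(for: model) else { return nil }
        visionModel = vnModel
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Note: throttle this in a real app – running inference on every frame is expensive.
        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
            print("\(best.identifier): \(best.confidence)")   // e.g. "window" vs "wall"
        }
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage)
        try? handler.perform([request])
    }
}
```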
ARKit Plane tracking documentation for reference: https://developer.apple.com/documentation/arkit/tracking_and_visualizing_planes
A couple of articles about ARKit with CoreML:
https://www.rightpoint.com/rplabs/dev/arkit-and-coreml
https://medium.com/s23nyc-tech/using-machine-learning-and-coreml-to-control-arkit-24241c894e3b
[Update]
Yes, you are correct: for A12+ devices Apple does allow plane classification. I would assume the issue with exterior windows vs. interior ones is either the distance to the window (too far for the computer vision to classify it properly), or that Apple has tuned the classifier for interior windows rather than exterior ones. The difference may seem trivial, but to a CV algorithm it's quite different.
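For completeness, reading the built-in classification is just a matter of inspecting the detected anchors; here is a minimal sketch of the standard ARSessionDelegate path. It only reports what ARKit itself decides, which, as noted, may label exterior windows as plain walls.

```swift
import ARKit

// Sketch: print ARKit's built-in plane classification (A12+ devices only).
class PlaneClassifierDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard ARPlaneAnchor.isClassificationSupported else { return }
        for case let plane as ARPlaneAnchor in anchors {
            switch plane.classification {
            case .wall:   print("wall   \(plane.identifier)")
            case .window: print("window \(plane.identifier)")
            case .door:   print("door   \(plane.identifier)")
            default:      print("other  \(plane.classification)")
            }
        }
    }
}

// Usage: enable vertical plane detection and keep a strong reference to the delegate.
// let config = ARWorldTrackingConfiguration()
// config.planeDetection = [.vertical]
// arView.session.delegate = planeClassifierDelegate
// arView.session.run(config)
```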
Related
We built an app for a jewelry store where an AR object (a ring) is recognized when the iOS app's camera is pointed at a paper (and metal) marker we built in Vuforia. It is a 5-star marker (which Vuforia considers good quality) that we place on the finger.
So what we have is pretty fast recognition, BUT the recognized ring shakes unpleasantly. The closer we bring the camera to the marker, the more visible the shaking.
The paper marker has a regular cylindrical form. There is always enough lighting while testing, etc.
Any ideas on why this shaking appears?
Thanks in advance!
We tried different markers, tested in different conditions with different lighting, played with the camera settings, used different Vuforia versions, etc. No luck.
If the target is cylindrical in shape, then you should create a Vuforia cylinder target: https://library.vuforia.com/objects/cylinder-targets. Standard ImageTargets are expected to be planar in 3D. The detector still works, since a plane can locally approximate the cylindrical geometry, but tracking will be extremely inaccurate and brittle.
Is it possible to import a virtual lamp object into the AR scene, that projects a light cone, which illuminates the surrounding space in the room and the real objects in it, e.g. a table, floor, walls?
For ARKit, I found this SO post.
For ARCore, there is an example of relighting technique. And this source code.
I have also been suggested that post-processing can be used to brighten the whole scene.
However, these examples are from a while ago, and perhaps there is a newer or more straightforward solution to this problem?
At the low level, RealityKit is only responsible for rendering virtual objects and overlaying them on top of the camera frame.
If you want to illuminate the real scene, you need to post-process the camera frame.
Here are some tutorials on how to do post-processing:
Tutorial1⃣️
Tutorial2⃣️
If all you need is an effect like this, then all you need to do is add a CGImage-based post-processing effect for the virtual object (the lights).
More specifically, add a bloom filter to the rendered image (you can also simulate a bloom filter with a Gaussian blur).
In this way, the code revolves entirely around UIImage and CGImage, so it's pretty simple 😎
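One way to wire this up without leaving the GPU is RealityKit's post-processing render callback (iOS 15+); here is a minimal sketch that pushes the rendered frame through a Core Image bloom filter. The radius/intensity values are placeholders to tune for your scene.

```swift
import RealityKit
import CoreImage
import Metal

// Sketch: add a bloom-style glow to RealityKit's rendered frame via Core Image.
final class BloomPostProcess {
    private let ciContext: CIContext

    init(device: MTLDevice) {
        ciContext = CIContext(mtlDevice: device)
    }

    func attach(to arView: ARView) {
        arView.renderCallbacks.postProcess = { [weak self] context in
            guard let self = self,
                  let source = CIImage(mtlTexture: context.sourceColorTexture) else { return }

            // Bloom filter (a Gaussian blur composited back over the image works too).
            let bloom = CIFilter(name: "CIBloom")!
            bloom.setValue(source, forKey: kCIInputImageKey)
            bloom.setValue(10.0, forKey: kCIInputRadiusKey)     // placeholder value
            bloom.setValue(1.0, forKey: kCIInputIntensityKey)   // placeholder value
            let output = bloom.outputImage ?? source

            // Render the filtered image into RealityKit's target texture.
            let destination = CIRenderDestination(mtlTexture: context.targetColorTexture,
                                                  commandBuffer: context.commandBuffer)
            destination.isFlipped = false
            _ = try? self.ciContext.startTask(toRender: output, to: destination)
        }
    }
}
```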
If you want to be more realistic, consider using the depth map provided by LiDAR to calculate which areas can be illuminated for a more detailed brightness.
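The depth data itself is easy to get at; here is a minimal sketch (iOS 14+, LiDAR device), with the actual "which pixels does the light cone reach" math left to your own post-processing:

```swift
import ARKit

// Sketch: request and read the LiDAR depth map that accompanies each frame.
func configureDepth(for session: ARSession) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        config.frameSemantics.insert(.sceneDepth)
    }
    session.run(config)
}

func depthBuffer(from frame: ARFrame) -> CVPixelBuffer? {
    // A 256x192 Float32 buffer of distances in meters, aligned with frame.capturedImage.
    return frame.sceneDepth?.depthMap
}
```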
Or, if you're a true explorer, you can use Metal to create a real-world digital-twin point cloud in real time and simulate the occlusion of light.
As of 2021, there's nothing new in relighting techniques based on 3D compositing principles. At the moment, when you're working with RealityKit or SceneKit, you have to implement the relighting functionality yourself with the help of two additional render passes (an RGB pass is always needed anyway): a Normals pass and a PointPosition pass. Both AOVs must be 32-bit.
However, in the near future, when Apple engineers finally implement texture capturing in Scene Reconstruction – any inexperienced AR developer will be able to apply a relighting procedure.
Watch this Vimeo Video to find out how relighting can be achieved in The Foundry NUKE.
A crucial point here, when implementing the Relighting effect, is the presence of a LiDAR scanner (or iToF sensor if you're using ARCore). In other words, today's relighting solution for iOS is Metal + RealityKit.
Thanks in advance for reading my question. I am really new to ARKit and have followed several tutorials which showed me how to use plane detection and apply different textures to the planes. The feature is really amazing, but here is my question: would it be possible for the player to place planes over the desired area first and then interact with that new ground? For example, could I use plane detection to put a grass texture over an area and then drive a real RC car over it, just like driving it on real grass?
I have tried out plane detection on my iPhone 6s. What I found is that when I put a real-world object on top of the detected plane surface, the object simply gets covered by the plane. Could you please give me some clue whether it is possible to make the plane stay on the ground without covering real-world objects?
I think this is what you are searching for:
ARKit hide objects behind walls
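In short, that answer relies on an "invisible occluder": geometry that writes to the depth buffer but not the color buffer, so virtual content behind it is hidden while the camera image still shows through. A minimal SceneKit sketch of the idea (sizing and placement are illustrative):

```swift
import ARKit
import SceneKit

// Sketch: an invisible occluder plane for a detected ARPlaneAnchor.
func occluderNode(for planeAnchor: ARPlaneAnchor) -> SCNNode {
    let geometry = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                            height: CGFloat(planeAnchor.extent.z))

    let material = SCNMaterial()
    material.colorBufferWriteMask = []          // draw nothing visible...
    geometry.firstMaterial = material

    let node = SCNNode(geometry: geometry)
    node.renderingOrder = -1                    // ...but write depth before other nodes
    node.simdPosition = planeAnchor.center      // add as a child of the anchor's node
    node.eulerAngles.x = -.pi / 2               // SCNPlane is vertical by default
    return node
}
```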
Another way, I think, is to track the position of the real-world object, for example with Apple's Turi Create or Core ML (or both), and then avoid drawing your content at the affected position.
Tracking moving objects is not supported, and that's exactly what would be needed to make a real object interact with a virtual one.
That said, I would recommend using 2D image recognition and "reading" every camera frame to detect the object as it moves through the camera's view space. Look for the AVCaptureVideoDataOutputSampleBufferDelegate protocol on Apple's developer site.
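Here is a minimal sketch of that delegate, with a generic Vision rectangle detector standing in for whatever detector actually fits your object. (In an ARKit app you would feed ARFrame.capturedImage into the same Vision handler instead, since ARKit owns the camera.)

```swift
import AVFoundation
import Vision

// Sketch: read every camera frame and run a detector on it.
final class FrameReader: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()

    func start() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        if session.canAddOutput(output) { session.addOutput(output) }
        session.startRunning()
    }

    // Called once per camera frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        let request = VNDetectRectanglesRequest { request, _ in
            guard let results = request.results as? [VNRectangleObservation] else { return }
            // Use the normalized bounding boxes to decide where NOT to draw virtual content.
            for rect in results { print("detected rectangle at \(rect.boundingBox)") }
        }
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
    }
}
```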
Share your code and I could help with some ideas
According to its documentation, ARCore can track static surfaces, but the docs don't mention anything about moving surfaces, so I'm wondering whether ARCore can track flat surfaces (with enough feature points, of course) that move around.
Yes, you definitely can track moving surfaces and moving objects in ARCore.
If you track a static surface using ARCore, the resulting features are mainly suitable for so-called camera tracking. If you track a moving object/surface, the resulting features are mostly suitable for object tracking.
You can also mask moving/non-moving parts of the image and, of course, invert the six-degrees-of-freedom (translate XYZ and rotate XYZ) camera transform.
Watch this video to find out how they succeeded.
Yes, ARCore tracks feature points, estimates surfaces, and also allows access to the image data from the camera, so custom computer vision algorithms can be written as well.
I guess it should be possible theoretically.
However, I've tested it with some stuff in my house (running an S8 and an app built with Unity and ARCore), and the problem is more or less that it refuses to even start tracking movable things like books and plates: because of the feature points of the surrounding floor etc., it always picks up on those first.
Edit: I did some more testing and managed to get it to track a bed sheet; it does not, however, adjust to any movement. Meaning that as of now the plane stays fixed, although I saw some wobbling, but I guess that was because it tried to adjust the positioning of the plane once its original feature points were moved.
This is probably an insanely hard question. So far, ARKit works with 3D models that are built in 3D modelling software. I was wondering if there is a way to use the iPhone camera to scan a 3D object (let's say a car) and then use it in ARKit.
Any open source projects available which do this on other platforms or iOS?
You are looking for software in the "photogrammetry" category. There are various software tools that will stitch your photos into 3D models; one option is Autodesk Remake, which has a free version.
ARKit/RealityKit on an iPad/iPhone with a LiDAR scanner lets you reconstruct the current scene and obtain 3D geometry with an Occlusion Material applied. This geometry allows you to occlude any object, including a human being, and to physically "interact" with the generated mesh. The LiDAR's working distance is up to 5 meters.
However, scanning a car isn't a good idea due to paint's high reflectivity.
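For reference, a minimal sketch of the scene reconstruction + occlusion setup described above (LiDAR device, iOS 13.4+):

```swift
import ARKit
import RealityKit

// Sketch: reconstruct the scene mesh and use it for occlusion and physics.
func enableSceneReconstruction(on arView: ARView) {
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else { return }

    let config = ARWorldTrackingConfiguration()
    config.sceneReconstruction = .mesh
    config.environmentTexturing = .automatic

    // The reconstructed mesh hides virtual content behind real geometry
    // and lets virtual objects collide with it.
    arView.environment.sceneUnderstanding.options.insert([.occlusion, .physics])
    arView.session.run(config)
}
```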