RealityKit ARRaycastQuery after ARView has looked away - iOS

I have a bit of a long-winded process at the moment which retains an ARFrame from the ARSessionDelegate's func session(_ session: ARSession, didUpdate frame: ARFrame) callback.
I then do some processing which can take anywhere between 2-5 seconds, during which the user of the app can move the camera and point it at a different location in AR than where I grabbed the ARFrame.
I noticed that the ARFrame object has its own method to produce an ARRaycastQuery, which I assume would be relative to that frame regardless of where the camera is currently pointed.
Here is a function where I use the ARRaycastQuery from an ARFrame and execute it against the ARSession:
func getQuery(forPosition point: CGPoint, frame: ARFrame) -> ARRaycastResult? {
    let estimatedPlane = ARRaycastQuery.Target.estimatedPlane
    let alignment = ARRaycastQuery.TargetAlignment.any
    let raycastQuery: ARRaycastQuery = frame.raycastQuery(from: point, allowing: estimatedPlane, alignment: alignment)
    guard let raycastResult = arKitCoordinator.arView.session.raycast(raycastQuery).first else {
        return nil
    }
    return raycastResult
}
However, if I have moved the camera (the ARView viewport) away from where I captured the ARFrame, I always get nil. I would expect to be able to add objects into AR even if the user has moved the viewport (iPhone) away from where I got the frame.
How can I add objects into the "invisible" or "out of view" portions of the AR space?
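For context, this is the placement step I would use once a raycast result does come back; a minimal sketch, with arView and modelEntity as placeholder names. Once anchored, the entity keeps its world position even when the device looks away from it:

import ARKit
import RealityKit

// Minimal sketch: anchor content at a raycast result's world transform.
// `arView` and `modelEntity` are placeholder names, not from the code above.
func place(_ modelEntity: ModelEntity, at result: ARRaycastResult, in arView: ARView) {
    // Create a world-space ARAnchor from the hit's transform.
    let anchor = ARAnchor(name: "placement", transform: result.worldTransform)
    arView.session.add(anchor: anchor)

    // Attach a RealityKit anchor entity to that ARAnchor and parent the model to it.
    let anchorEntity = AnchorEntity(anchor: anchor)
    anchorEntity.addChild(modelEntity)
    arView.scene.addAnchor(anchorEntity)
}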

Related

Horizontal plane detection limitations?

I'm trying to build an ARKit-based app which requires detecting roads and placing virtual content 30 feet away from the camera. However, horizontal plane detection stops adding anchors after about 10 feet. Is there a workaround for this problem?
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let usdzEntity = usdzEntity else { return }
    let camera = frame.camera
    let transform = camera.transform
    if let rayCast = arView.scene.raycast(from: transform.translation, to: usdzEntity.transform.translation, query: .nearest, mask: .default, relativeTo: nil).first {
        print(rayCast.distance)
    }
}
Take a look at this; I hope it can give you some help.
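Separately from the raycast above, and not part of the original answer: one workaround sometimes used when plane detection won't reach far enough is to skip the plane entirely for distant content and anchor it at a fixed offset along the camera's forward axis. A rough sketch (the distance value and anchor name are arbitrary):

import ARKit
import simd

// Hedged sketch: anchor content a fixed number of meters in front of the camera,
// without waiting for a plane to be detected at that distance.
func addAnchor(metersAhead distance: Float, session: ARSession) {
    guard let frame = session.currentFrame else { return }
    let cameraTransform = frame.camera.transform

    // The camera looks down its negative Z axis.
    let forwardColumn = cameraTransform.columns.2
    let forward = -SIMD3<Float>(forwardColumn.x, forwardColumn.y, forwardColumn.z)

    let translationColumn = cameraTransform.columns.3
    let cameraPosition = SIMD3<Float>(translationColumn.x, translationColumn.y, translationColumn.z)
    let position = cameraPosition + forward * distance

    var anchorTransform = matrix_identity_float4x4
    anchorTransform.columns.3 = SIMD4<Float>(position.x, position.y, position.z, 1)
    session.add(anchor: ARAnchor(name: "distant-content", transform: anchorTransform))
}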

ARKit does not recognize reference images

I'm trying to place a 3D model on top of a recognized image with ARKit and RealityKit - all programmatically. Before I start the ARView I'm downloading the model I want to show when the reference image is detected.
This is my current setup:
override func viewDidLoad() {
    super.viewDidLoad()
    arView.session.delegate = self

    // Check if the device supports the AR experience
    if !ARConfiguration.isSupported {
        TLogger.shared.error_objc("Device does not support Augmented Reality")
        return
    }

    guard let qrCodeReferenceImage = UIImage(named: "QRCode") else { return }
    let detectionImages: Set<ARReferenceImage> = convertToReferenceImages([qrCodeReferenceImage])

    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = detectionImages
    arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
I use the ARSessionDelegate to get notified when a new image anchor is added, which means the reference image was detected:
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    print("Hello")
    for anchor in anchors {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let referenceImage = imageAnchor.referenceImage
        addEntity(self.localModelPath!)
    }
}
However, the delegate method never gets called, while other delegate functions like func session(ARSession, didUpdate: ARFrame) are getting called, so I assume that the session just doesn't detect the image. The image resolution is good and the printed image is big enough, so it should definitely get recognized by the ARSession. I also checked that the image has been found before adding it to the configuration.
Can anyone lead me in the right direction here?
It looks like you have your configuration set up correctly. Your delegate function should be called when the reference image is recognized. Make sure your configuration isn't overwritten at any point in your code.
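One more thing worth checking, since convertToReferenceImages is custom and not shown: reference images built programmatically need a physicalWidth that matches the printed size in meters, and getting that wrong can make detection fail silently. A hedged sketch of what such a conversion typically looks like (the 0.1 m width and the name are placeholders):

import ARKit
import UIKit

// Hedged sketch of a programmatic UIImage -> ARReferenceImage conversion.
// physicalWidth must match the real-world printed width in meters.
func convertToReferenceImages(_ images: [UIImage], physicalWidth: CGFloat = 0.1) -> Set<ARReferenceImage> {
    var referenceImages = Set<ARReferenceImage>()
    for image in images {
        guard let cgImage = image.cgImage else { continue }
        let reference = ARReferenceImage(cgImage, orientation: .up, physicalWidth: physicalWidth)
        reference.name = "QRCode" // optional, useful for identifying the anchor later
        referenceImages.insert(reference)
    }
    return referenceImages
}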

How to stop CoreML from running when it's no longer needed in the app?

My app runs Vision on a CoreML model. The camera frames the machine learning model runs on are from an ARKit sceneView (basically, the camera). I have a method that's called loopCoreMLUpdate() that continuously runs CoreML so that we keep running the model on new camera frames. The code looks like this:
import UIKit
import SceneKit
import ARKit
import Vision

class MyViewController: UIViewController {

    var visionRequests = [VNRequest]()
    let dispatchQueueML = DispatchQueue(label: "com.hw.dispatchqueueml") // A Serial Queue

    override func viewDidLoad() {
        super.viewDidLoad()

        // Setup ARKit sceneview
        // ...

        // Begin Loop to Update CoreML
        loopCoreMLUpdate()
    }

    // This is the problematic part.
    // In fact - once it's run there's no way to stop it, is there?
    func loopCoreMLUpdate() {
        // Continuously run CoreML whenever it's ready. (Preventing 'hiccups' in Frame Rate)
        dispatchQueueML.async {
            // 1. Run Update.
            self.updateCoreML()
            // 2. Loop this function.
            self.loopCoreMLUpdate()
        }
    }

    func updateCoreML() {
        ///////////////////////////
        // Get Camera Image as RGB
        let pixbuff: CVPixelBuffer? = (sceneView.session.currentFrame?.capturedImage)
        if pixbuff == nil { return }
        let ciImage = CIImage(cvPixelBuffer: pixbuff!)
        // Note: Not entirely sure if the ciImage is being interpreted as RGB, but for now it works with the Inception model.
        // Note2: Also uncertain if the pixelBuffer should be rotated before handing off to Vision (VNImageRequestHandler) - regardless, for now, it still works well with the Inception model.

        ///////////////////////////
        // Prepare CoreML/Vision Request
        let imageRequestHandler = VNImageRequestHandler(ciImage: ciImage, options: [:])
        // let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage!, orientation: myOrientation, options: [:]) // Alternatively; we can convert the above to an RGB CGImage and use that. Also UIInterfaceOrientation can inform orientation values.

        ///////////////////////////
        // Run Image Request
        do {
            try imageRequestHandler.perform(self.visionRequests)
        } catch {
            print(error)
        }
    }
}
As you can see, the loop effect is created by a DispatchQueue with the label com.hw.dispatchqueueml that keeps calling loopCoreMLUpdate(). Is there any way to stop the queue once CoreML is not needed anymore? Full code is here.
I suggest that instead of running the Core ML model from viewDidLoad, you use the ARSessionDelegate method for this.
Use func session(_ session: ARSession, didUpdate frame: ARFrame) to get the frame; there you can set a flag to enable the model when you want it to work and disable it when you don't.
Like this:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // This is where we will analyse our frame
    // We return early if currentBuffer is not nil or the tracking state of camera is not normal
    // TODO: - Core ML Functionality Commented
    guard isMLFlow else {
        return
    }
    currentBuffer = frame.capturedImage
    // Note: UIImage(pixelBuffer:) is not a UIKit initializer; it assumes a custom convenience extension.
    guard let buffer = currentBuffer, let image = UIImage(pixelBuffer: buffer) else { return }
    // <Code here to load model>
    CoreMLManager.manager.updateClassifications(for: image)
}
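If you would rather keep the original recursive loop, another option (not from the answer above, just a sketch with placeholder names) is to gate the re-enqueue on a flag, so the loop simply stops scheduling new iterations once Core ML is no longer needed:

import Foundation

// Hedged sketch: a recursive dispatch loop that can be stopped with a flag.
// In the question's code this logic would live on MyViewController.
final class CoreMLLoop {

    private let queue = DispatchQueue(label: "com.hw.dispatchqueueml") // serial
    private var isActive = false

    /// Starts looping `work` on the serial queue.
    func start(_ work: @escaping () -> Void) {
        queue.async { self.isActive = true }
        loop(work)
    }

    /// Stops the loop: no new iterations are scheduled; the in-flight one finishes.
    func stop() {
        queue.async { self.isActive = false }
    }

    private func loop(_ work: @escaping () -> Void) {
        queue.async {
            guard self.isActive else { return }
            work()              // e.g. updateCoreML() in the question
            self.loop(work)     // re-enqueue only while active
        }
    }
}

Calling stop() from wherever Core ML is no longer needed (for example when the view disappears) ends the loop without tearing down the queue.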

Show bounding box while detecting object using ARKit 2

I have scanned and trained multiple real-world objects. I do have the ARReferenceObject and the app detects them fine.
The issue that I'm facing is that when an object does not have distinct, vibrant features it takes a few seconds to return a detection result, which I can understand. Now, I want the app to show a bounding box and an activity indicator on top of the object while it is trying to detect it.
I do not see any information regarding this. Also, is there any way to get the time when detection starts, or the confidence percentage of the object being detected?
Any help is appreciated.
It is possible to show a boundingBox in regard to the ARReferenceObject prior to it being detected, although I am not sure why you would want to do that (in advance anyway).
For example, assuming your referenceObject was on a horizontal surface, you would first need to place your estimated bounding box on the plane (or use some other method to place it in advance), and in the time it took to detect the ARPlaneAnchor and place the boundingBox it is most likely that your model would already have been detected.
Possible Approach:
As you are no doubt aware, an ARReferenceObject has center, extent and scale properties, as well as a set of rawFeaturePoints associated with the object.
As such, we can create our own boundingBox node, based on some of the sample code from Apple in Scanning & Detecting 3D Objects, which will display a bounding box of the approximate size of the ARReferenceObject that is stored locally, prior to it being detected.
Note that you will need to locate the 'wireframe_shader' from the Apple Sample Code for the boundingBox to render as transparent:
import Foundation
import ARKit
import SceneKit

class BlackMirrorzBoundingBox: SCNNode {

    //-----------------------
    // MARK: - Initialization
    //-----------------------

    /// Creates A WireFrame Bounding Box From The Data Retrieved From The ARReferenceObject
    ///
    /// - Parameters:
    ///   - points: [float3]
    ///   - scale: CGFloat
    ///   - color: UIColor
    init(points: [float3], scale: CGFloat, color: UIColor = .cyan) {
        super.init()

        var localMin = float3(Float.greatestFiniteMagnitude)
        var localMax = float3(-Float.greatestFiniteMagnitude)

        for point in points {
            localMin = min(localMin, point)
            localMax = max(localMax, point)
        }

        self.simdPosition += (localMax + localMin) / 2
        let extent = localMax - localMin

        let wireFrame = SCNNode()
        let box = SCNBox(width: CGFloat(extent.x), height: CGFloat(extent.y), length: CGFloat(extent.z), chamferRadius: 0)
        box.firstMaterial?.diffuse.contents = color
        box.firstMaterial?.isDoubleSided = true
        wireFrame.geometry = box
        setupShaderOnGeometry(box)
        self.addChildNode(wireFrame)
    }

    required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) Has Not Been Implemented") }

    //----------------
    // MARK: - Shaders
    //----------------

    /// Sets A Shader To Render The Cube As A Wireframe
    ///
    /// - Parameter geometry: SCNBox
    func setupShaderOnGeometry(_ geometry: SCNBox) {
        guard let path = Bundle.main.path(forResource: "wireframe_shader", ofType: "metal", inDirectory: "art.scnassets"),
              let shader = try? String(contentsOfFile: path, encoding: .utf8) else {
            return
        }

        geometry.firstMaterial?.shaderModifiers = [.surface: shader]
    }
}
To display the bounding box you would then do something like the following, noting that in my example I have the following variables:
@IBOutlet var augmentedRealityView: ARSCNView!
let configuration = ARWorldTrackingConfiguration()
let augmentedRealitySession = ARSession()
To display the boundingBox prior to detection of the actual object itself, you would call the func loadBoundingBox in viewDidLoad e.g.:
/// Creates A Bounding Box From The Data Available From The ARObject In The Local Bundle
func loadBoundingBox() {

    //1. Run Our Session
    augmentedRealityView.session = augmentedRealitySession
    augmentedRealityView.delegate = self

    //2. Load A Single ARReferenceObject From The Main Bundle
    if let objectURL = Bundle.main.url(forResource: "fox", withExtension: ".arobject") {

        do {
            var referenceObjects = [ARReferenceObject]()
            let object = try ARReferenceObject(archiveURL: objectURL)

            //3. Log Its Properties
            print("""
            Object Center = \(object.center)
            Object Extent = \(object.extent)
            Object Scale = \(object.scale)
            """)

            //4. Get Its Scale
            let scale = CGFloat(object.scale.x)

            //5. Create A Bounding Box
            let boundingBoxNode = BlackMirrorzBoundingBox(points: object.rawFeaturePoints.points, scale: scale)

            //6. Add It To The ARSCNView
            self.augmentedRealityView.scene.rootNode.addChildNode(boundingBoxNode)

            //7. Position It 0.5m Down & 0.5m Away From The Camera
            boundingBoxNode.position = SCNVector3(0, -0.5, -0.5)

            //8. Add It To The Configuration
            referenceObjects.append(object)
            configuration.detectionObjects = Set(referenceObjects)

        } catch {
            print(error)
        }
    }

    //9. Run The Session
    augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    augmentedRealityView.automaticallyUpdatesLighting = true
}
The above example simply creates a boundingBox from the non-detected ARReferenceObject and places it 0.5m down from and 0.5m away from the Camera, which yields something like this:
You would of course need to handle the position of the boundingBox initially, as well as how to handle the removal of the boundingBox 'indicator'.
The method below simply shows a boundingBox when the actual object is detected, e.g.:
//--------------------------
// MARK: - ARSCNViewDelegate
//--------------------------

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. Check We Have A Valid ARObject Anchor
        guard let objectAnchor = anchor as? ARObjectAnchor else { return }

        //2. Create A Bounding Box Around Our Object
        let scale = CGFloat(objectAnchor.referenceObject.scale.x)
        let boundingBoxNode = BlackMirrorzBoundingBox(points: objectAnchor.referenceObject.rawFeaturePoints.points, scale: scale)
        node.addChildNode(boundingBoxNode)
    }
}
Which yields something like this:
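To tie this back to the removal point mentioned earlier: one way to handle it (a sketch, not from the original answer; preliminaryBoundingBoxNode is a hypothetical property holding the node added in loadBoundingBox) is to drop the placeholder in the same delegate callback:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. Check We Have A Valid ARObject Anchor
    guard let objectAnchor = anchor as? ARObjectAnchor else { return }

    //2. Remove The Placeholder Box Shown Before Detection
    preliminaryBoundingBoxNode?.removeFromParentNode()
    preliminaryBoundingBoxNode = nil

    //3. Create A Bounding Box Around The Detected Object
    let scale = CGFloat(objectAnchor.referenceObject.scale.x)
    let boundingBoxNode = BlackMirrorzBoundingBox(points: objectAnchor.referenceObject.rawFeaturePoints.points, scale: scale)
    node.addChildNode(boundingBoxNode)
}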
In regard to the detection timer, there is an example in the Apple Sample Code, which displays how long it takes to detect the model.
In its crudest form (not accounting for milliseconds) you can do something like so:
Firstly, create a Timer and a var to store the detection time, e.g.:
var detectionTimer = Timer()
var detectionTime: Int = 0
Then, when you run your ARSession configuration, initialise the timer, e.g.:
/// Starts The Detection Timer
func startDetectionTimer() {
    detectionTimer = Timer.scheduledTimer(timeInterval: 1.0, target: self, selector: #selector(logDetectionTime), userInfo: nil, repeats: true)
}

/// Increments The Total Detection Time Before The ARReference Object Is Detected
@objc func logDetectionTime() {
    detectionTime += 1
}
Then, when an ARReferenceObject has been detected, invalidate the timer and log the time, e.g.:
//--------------------------
// MARK: - ARSCNViewDelegate
//--------------------------

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. Check We Have A Valid ARObject Anchor
        guard let _ = anchor as? ARObjectAnchor else { return }

        //2. Stop The Timer
        detectionTimer.invalidate()

        //3. Log The Detection Time
        print("Total Detection Time = \(detectionTime) Seconds")

        //4. Reset The Detection Time
        detectionTime = 0
    }
}
This should be more than enough to get you started...
And please note that this example doesn't provide a boundingBox when scanning an object (look at the Apple Sample Code for that); it provides one based on an existing ARReferenceObject, which is what your question implies (assuming I interpreted it correctly).

ARKit: notification upon feature point detection?

This answer and others explain how to get notified when ARKit detects anchors or planes, but how do you get notifications when ARKit detects feature points?
Looking at the APIs, it's somewhat similar to the answers that you have linked to.
Using the ARSessionDelegate method session(_ session: ARSession, didUpdate frame: ARFrame), you can access the rawFeaturePoints of the ARFrame that just got passed in.
So it would look something like:
import ARKit

// Not actually tested
class MyARSessionDelegate: NSObject, ARSessionDelegate {

    var previouslyDetectedPointCount = 0

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Check if new points are detected
        let pointCount = frame.rawFeaturePoints?.points.count ?? 0
        if previouslyDetectedPointCount != pointCount {
            // point count has changed
            previouslyDetectedPointCount = pointCount
        }
    }
}
Though why you would want to be looking for specific points is curious. The documentation clearly states:
"ARKit does not guarantee that the number and arrangement of raw feature points will remain stable between software releases, or even between subsequent frames in the same session."
This doesn't seem like the ideal solution, but it works. Implement the func session(_ session: ARSession, didUpdate frame: ARFrame) function from the ARSessionDelegate protocol, and check for feature points in each frame.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Show tap icon once we find feature points
    if !detectedFeaturePoints, let points = frame.rawFeaturePoints?.points, let firstPoint = points.first {
        detectedFeaturePoints = true
    }
}
