I understand how to extract feature points for a single ARFrame (ARFrame.rawFeaturePoints). Is there any way to extract all feature points that have been detected over the course of the session? Is this something I have to aggregate myself? If so, how should I handle point matching?
There are a couple of ways to capture the feature points:
1. You can implement ARSCNViewDelegate; you get a callback whenever ARKit detects a plane, and there you can capture the current feature points:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let planeAnchor = anchor as? ARPlaneAnchor {
        // Feature points the session currently sees
        let cloudPoints = sceneView.session.currentFrame?.rawFeaturePoints
    }
}
2. You can implement SCNSceneRendererDelegate and its func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval), which is called once per rendered frame, and read session.currentFrame?.rawFeaturePoints there.
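To answer the aggregation part of the question: yes, you have to accumulate the points yourself, but ARPointCloud already does the point matching for you via its identifiers array, which parallels points and stays stable for a given feature point across frames. A minimal sketch (class and property names are my own) that de-duplicates on those identifiers:

```swift
import ARKit

/// Sketch of a session-wide feature-point aggregator, assuming
/// ARPointCloud.identifiers are stable across frames for the same point.
class PointCloudAggregator {
    private var pointsByID: [UInt64: SIMD3<Float>] = [:]

    func add(frame: ARFrame) {
        guard let cloud = frame.rawFeaturePoints else { return }
        // A re-observed point keeps its identifier, so its entry is
        // simply overwritten with the latest position estimate.
        for (id, position) in zip(cloud.identifiers, cloud.points) {
            pointsByID[id] = position
        }
    }

    var allPoints: [SIMD3<Float>] { Array(pointsByID.values) }
}
```

You would call add(frame:) from either of the two callbacks above, e.g. with sceneView.session.currentFrame in renderer(_:updateAtTime:).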
I'm using ARKit to detect whether the mouth is open or not.
Mouth Open:
This value is controlled by how much you open your mouth.
Range: 0.0 to 1.0
However, when you yawn, the value says the mouth is closed. (wtf)
I'm reading the values from the faceAnchor.blendShapes property:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor,
          let faceGeometry = node.geometry as? ARSCNFaceGeometry else {
        return
    }
    // ... read faceAnchor.blendShapes here ...
}
I saw the equivalent on Android on this post
Android Mobile Vision API detect mouth is open
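One thing worth checking (a hedged suggestion, not a confirmed fix): the mouthClose coefficient measures lip closure relative to the jaw position, while jawOpen tracks how far the jaw itself is lowered, which is what a yawn mostly moves. A sketch reading jawOpen instead, where the 0.3 threshold is purely my own guess:

```swift
import ARKit

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    // .jawOpen ranges 0.0–1.0 and rises when the jaw drops (e.g. a yawn)
    let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
    if jawOpen > 0.3 {
        print("mouth is open: \(jawOpen)")
    }
}
```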
I'm pretty new to AR Kit but recently I found that the image tracking feature is quite awesome. I found it's as simple as:
guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: Bundle.main) else { return }
let configuration = ARImageTrackingConfiguration()
configuration.trackingImages = referenceImages
configuration.maximumNumberOfTrackedImages = 1
sceneView.session.run(configuration)
which works beautifully! However, I want to further the experience by identifying which image has been tracked and display different AR objects / nodes based on the image that was tracked. Is there a way to get more information on the specific image that is currently being tracked?
In your AR Reference Group in your asset catalog, when you click the reference image, you can open the Attributes inspector and enter a "Name."
This name is then reflected in the name property of the referenceImage on the ARImageAnchor that is created when the AR session begins tracking that specific image.
Then, in
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode?
you can inspect the anchor and respond accordingly. For example:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let anchor = anchor as? ARImageAnchor else { return nil }
    if anchor.referenceImage.name == "calculator" {
        print("tracking calculator image")
        return SCNNode.makeMySpecialCalculatorNode()
    }
    return nil
}
I am trying to return a previously created node in my Session Delegate:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return SCNNode() }
    let anchorNode = sceneView.anchor(for: earthNode)
    if anchorNode == nil || anchorNode == planeAnchor {
        return earthNode
    }
    return nil
}
I am trying to see whether there is already an anchor assigned to my node, and if not, return earthNode.
My problem is that let anchorNode = sceneView.anchor(for: earthNode) either freezes or loops forever.
My working theory is that this is due to the fact that earthNode isn't yet placed in the scene. But that seems like a wonky explanation. I also, of course, presume that my usage of ARkit reeks of ignorance.
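One possible workaround (an untested assumption on my part, not a verified fix): avoid querying the view with sceneView.anchor(for:) from inside the renderer callback at all, and instead record the anchor assignment yourself. A sketch:

```swift
import ARKit

// Hypothetical state kept on the view controller: the anchor
// currently assigned to earthNode, if any.
var earthAnchor: ARAnchor?

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard anchor is ARPlaneAnchor else { return nil }
    // Claim the first plane anchor for earthNode; ignore later ones.
    if earthAnchor == nil {
        earthAnchor = anchor
        return earthNode
    }
    return nil
}
```

This keeps the bookkeeping in your own code, so the delegate never has to ask the renderer about node/anchor associations mid-update.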
Does anybody else experience a non-working ARKit scene on an iPhone 8?
When I download Apple's ARKit example and run it on an iPhone 8, it stays on "Initializing". When I check the ARSCNViewDelegate implementation:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    DispatchQueue.main.async {
        self.statusViewController.cancelScheduledMessage(for: .planeEstimation)
        self.statusViewController.showMessage("SURFACE DETECTED")
        [..]
    }
    [..]
}
It seems as if it never gets past the guard, so an ARPlaneAnchor is never added to the scene...
The same project runs just fine on an iPhone 6s / iPhone 7, though...
Does anyone else know how to fix this?
Hey, I am trying to constantly read the value of the device's camera / ARCamera. As far as I know, there is only one function that allows me to access these ARCamera traits, namely this one:
Code:
// Only gets called a couple of times, when the camera's tracking state changes
func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    print("\(camera.eulerAngles)")
}
I've been thinking about trickery like putting a repeating timer in that function to poll the value, but I can't get local selectors to work there. What I'm really looking for is something along the lines of this function:
func renderer(_ aRenderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // This constantly gets called.
}
I wonder if there is a way to incorporate the ARCamera into the function.
If you want to continuously get updates on camera state, implement ARSessionDelegate.session(_:didUpdate:):
class MyDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        print("\(frame.camera)")
    }
    /* ... */
}
The ARFrame object contains a camera property with all the necessary information.
If you just want to know when the tracking state changes, you might want to store the state from session(_:cameraDidChangeTrackingState:) in a property and refer to it in your rendering loop:
class MyDelegate: NSObject, SCNSceneRendererDelegate, ARSessionObserver {
    var camera: ARCamera?

    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        self.camera = camera
    }

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        // camera is nil until the first tracking-state callback arrives
        if let camera = self.camera {
            print("\(camera.trackingState)")
        }
    }
    /* ... more methods ... */
}
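For completeness, a minimal wiring sketch for the first variant, assuming an ARSCNView named sceneView and the MyDelegate class conforming to ARSessionDelegate from above:

```swift
// ARSession holds its delegate weakly, so keep a strong
// reference to myDelegate (e.g. as a view-controller property).
let myDelegate = MyDelegate()
sceneView.session.delegate = myDelegate
```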