ARKit: removing a node when the reference image disappears - iOS

I'm building an AR scanner application where users can scan different images and receive rewards for doing so.
When they point the camera at a specific image, I place an SCNNode on top of that image, and once they move the camera away from the image the SCNNode gets dismissed.
But when the image disappears while the camera stays in the same position, the SCNNode does not get dismissed.
How can I make the node disappear as soon as the reference image does?
I have studied lots of other answers here on SO, but they didn't help me.
Here's my code for adding and removing SCNNodes:
extension ARScannerScreenViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        DispatchQueue.main.async { self.instructionLabel.isHidden = true }
        if let imageAnchor = anchor as? ARImageAnchor {
            handleFoundImage(imageAnchor, node)
            imageAncors.append(imageAnchor)
            trackedImages.append(node)
        } else if let objectAnchor = anchor as? ARObjectAnchor {
            handleFoundObject(objectAnchor, node)
        }
    }

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let pointOfView = sceneView.pointOfView else { return }
        for (index, item) in trackedImages.enumerated() {
            if !(sceneView.isNode(item, insideFrustumOf: pointOfView)) {
                self.sceneView.session.remove(anchor: imageAncors[index])
            }
        }
    }

    private func handleFoundImage(_ imageAnchor: ARImageAnchor, _ node: SCNNode) {
        let name = imageAnchor.referenceImage.name!
        print("you found a \(name) image")
        let size = imageAnchor.referenceImage.physicalSize
        if let imageNode = showImage(size: size) {
            node.addChildNode(imageNode)
            node.opacity = 1
        }
    }

    private func showImage(size: CGSize) -> SCNNode? {
        let image = UIImage(named: "InfoImage")
        let imageMaterial = SCNMaterial()
        imageMaterial.diffuse.contents = image
        let imagePlane = SCNPlane(width: size.width, height: size.height)
        imagePlane.materials = [imageMaterial]
        let imageNode = SCNNode(geometry: imagePlane)
        imageNode.eulerAngles.x = -.pi / 2
        return imageNode
    }

    private func handleFoundObject(_ objectAnchor: ARObjectAnchor, _ node: SCNNode) {
        let name = objectAnchor.referenceObject.name!
        print("You found a \(name) object")
    }
}
I also tried to make it work using ARSession, but I couldn't even get the print statements to fire:
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for anchor in anchors {
        for myAnchor in imageAncors {
            if let imageAnchor = anchor as? ARImageAnchor, imageAnchor == myAnchor {
                if !imageAnchor.isTracked {
                    print("Not tracked")
                } else {
                    print("tracked")
                }
            }
        }
    }
}

You have to use ARWorldTrackingConfiguration instead of ARImageTrackingConfiguration. It's also a bad idea to use both configurations in one app, because every time you switch between them the tracking state is reset and tracking starts from scratch.
Let's see what the Apple documentation says about ARImageTrackingConfiguration:
With ARImageTrackingConfiguration, ARKit establishes a 3D space not by tracking the motion of the device relative to the world, but solely by detecting and tracking the motion of known 2D images in view of the camera.
The basic difference between these two configurations is how ARAnchors behave:
ARImageTrackingConfiguration gives you ARImageAnchors only while your reference images are in the camera view. So if you can't see a reference image, there's no ARImageAnchor and thus no 3D model (it resets every time the image leaves and re-enters the view). You can simultaneously detect up to 100 images.
ARWorldTrackingConfiguration lets you track the surrounding environment in 6DoF and get ARImageAnchor, ARObjectAnchor, or AREnvironmentProbeAnchor. If you can't see a reference image, its ARImageAnchor remains in the session, and it's still there when you see the image again. So there's no reset.
Conclusion:
ARWorldTrackingConfiguration's computational cost is much higher. However, this configuration lets you perform not only image tracking but also hit-testing and ray-casting against detected planes, object detection, and restoration of world maps.
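One way to act on this in practice is to check ARImageAnchor.isTracked inside the update delegate callback and hide or remove your content accordingly. Here is a minimal sketch (assuming an ARSCNView named sceneView and that image tracking is enabled; with ARWorldTrackingConfiguration that means setting maximumNumberOfTrackedImages to a value greater than 0 on iOS 12 or later):

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    // Hide the node's content whenever ARKit reports the image as no longer tracked.
    node.isHidden = !imageAnchor.isTracked
    // Or remove the anchor entirely; ARSCNView then removes the associated node for you.
    // if !imageAnchor.isTracked { sceneView.session.remove(anchor: imageAnchor) }
}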

Use renderer(_:nodeFor:) to load your nodes, so that when the anchors disappear, the nodes go with them.
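For reference, a minimal sketch of what that delegate method can look like (the plane content here is just a placeholder):

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let imageAnchor = anchor as? ARImageAnchor else { return nil }
    // ARSCNView manages this node's transform and removes it when the anchor is removed.
    let node = SCNNode()
    let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                         height: imageAnchor.referenceImage.physicalSize.height)
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = -.pi / 2   // lay the plane flat over the image
    node.addChildNode(planeNode)
    return node
}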

Related

Horizontal plane detection limitations?

I'm trying to build an ARKit-based app that requires detecting roads and placing virtual content 30 feet away from the camera. However, horizontal plane detection stops adding anchors after about 10 feet. Is there a workaround for this problem?
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let usdzEntity = usdzEntity else { return }
    let camera = frame.camera
    let transform = camera.transform
    // Note: `translation` on the camera's simd_float4x4 presumably comes from a helper extension
    // (RealityKit's Transform type has a built-in `translation`; simd_float4x4 does not).
    if let rayCast = arView.scene.raycast(from: transform.translation,
                                          to: usdzEntity.transform.translation,
                                          query: .nearest,
                                          mask: .default,
                                          relativeTo: nil).first {
        print(rayCast.distance)
    }
}
Take a look at this; I hope it can give you some help.

ARKit does not recognize reference images

I'm trying to place a 3D model on top of a recognized image with ARKit and RealityKit - all programmatically. Before I start the ARView I'm downloading the model I want to show when the reference image is detected.
This is my current setup:
override func viewDidLoad() {
    super.viewDidLoad()

    arView.session.delegate = self

    // Check if the device supports the AR experience
    if (!ARConfiguration.isSupported) {
        TLogger.shared.error_objc("Device does not support Augmented Reality")
        return
    }

    guard let qrCodeReferenceImage = UIImage(named: "QRCode") else { return }
    let detectionImages: Set<ARReferenceImage> = convertToReferenceImages([qrCodeReferenceImage])

    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = detectionImages
    arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
I use the ARSessionDelegate to get notified when a new image anchor is added, which means the reference image got detected:
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    print("Hello")
    for anchor in anchors {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let referenceImage = imageAnchor.referenceImage
        addEntity(self.localModelPath!)
    }
}
However, the delegate method never gets called, while other delegate functions like func session(ARSession, didUpdate: ARFrame) are called, so I assume the session just doesn't detect the image. The image resolution is good and the printed image is big, so it should definitely get recognized by the ARSession. I also checked that the image was found before adding it to the configuration.
Can anyone lead me in the right direction here?
It looks like your configuration is set up correctly. Your delegate function should be called when the reference image is recognized. Make sure the configuration isn't overwritten at any point in your code.
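Since convertToReferenceImages is custom code, it is also worth double-checking that step: an ARReferenceImage needs a valid CGImage and a realistic physicalWidth in meters, and a wrong or missing physical size can prevent detection. A minimal sketch of such a helper (the 0.1 m width is an assumed placeholder; use the real printed size of your image):

func convertToReferenceImages(_ images: [UIImage]) -> Set<ARReferenceImage> {
    var referenceImages = Set<ARReferenceImage>()
    for (index, image) in images.enumerated() {
        guard let cgImage = image.cgImage else { continue }
        // physicalWidth is the printed width in meters; detection relies on it being roughly correct.
        let reference = ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.1)
        reference.name = "image-\(index)"
        referenceImages.insert(reference)
    }
    return referenceImages
}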

Edit variable in function then recall edited variable in another function

I want to preface this by saying I am a beginner to Swift, but I need to get this ARKit project finished already.
I use this function:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    let trackedNode = node
    if let imageAnchor = anchor as? ARImageAnchor {
        if imageAnchor.isTracked {
            trackedNode.isHidden = false
            offScreen = false
            print("Visible")
        } else {
            trackedNode.isHidden = true
            //print("\(trackedImageName)")
            offScreen = true
            print("No image in view")
        }
    }
}
This detects whether the anchor is on screen and sets the global variable offScreen to the appropriate value.
I want to take the new value of the variable and use it in my createdVideoPlayerNodeFor function: if offScreen is true, the AVPlayer should pause.
However, my AVPlayer is declared inside createdVideoPlayerNodeFor, so I can't keep everything in one function.
I know I am referring to fragments of my code at a time, so the full code is posted below.
var offScreen = false
let videoNode = SCNNode()

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    offScreen = false
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    node.addChildNode(createdVideoPlayerNodeFor(referenceImage))
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    let trackedNode = node
    if let imageAnchor = anchor as? ARImageAnchor {
        if imageAnchor.isTracked {
            trackedNode.isHidden = false
            offScreen = false
            print("Visible")
        } else {
            trackedNode.isHidden = true
            //print("\(trackedImageName)")
            offScreen = true
            print("No image in view")
        }
    }
}

func createdVideoPlayerNodeFor(_ target: ARReferenceImage) -> SCNNode {
    let videoPlayerGeometry = SCNPlane(width: target.physicalSize.width, height: target.physicalSize.height)
    var player = AVPlayer()

    if let targetName = target.name,
       let awsURL: NSURL = NSURL(string: "my video url :).mp4") {
        player = AVPlayer(url: awsURL as URL)
        player.play()
        NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: player.currentItem, queue: nil) { (notification) in
            player.seek(to: CMTime.zero)
            player.pause()
        }
    }

    videoPlayerGeometry.firstMaterial?.diffuse.contents = player
    videoNode.geometry = videoPlayerGeometry
    videoNode.eulerAngles.x = -Float.pi / 2
    return videoNode
}
I am in dire need of help with this, so if anyone can help me figure this out, it would be greatly appreciated.
Please ask questions if I didn't explain anything well enough; I really just need to figure this out :)
Edit: In my testing, I found that when the variable was changed in either function, it was almost as if the variable had two different values, one for each function. So if it was set to true in the didUpdate function, it didn't matter, because the createdVideoPlayerNodeFor function would use the value from the variable's original declaration. Is it even possible to set the value of a variable in one function and have it carry over to another?
offScreen is an instance variable, which means it is in scope for both functions. You should be able to both read and set it from either. Be careful, however, that you don't read and write that variable from different threads, as the value can then be unpredictable. You might want to set up an offscreenQueue, a private DispatchQueue that restricts access to this variable.
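One way to avoid the flag entirely (a sketch, assuming the player can be promoted to an instance property rather than staying a local in createdVideoPlayerNodeFor) is to keep a reference to the AVPlayer on the view controller and pause or resume it directly from the update callback:

var player: AVPlayer?   // assigned inside createdVideoPlayerNodeFor instead of being a local

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    node.isHidden = !imageAnchor.isTracked
    if imageAnchor.isTracked {
        player?.play()
    } else {
        player?.pause()
    }
}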

Show bounding box while detecting object using ARKit 2

I have scanned and trained multiple real-world objects. I do have the ARReferenceObjects, and the app detects them fine.
The issue I'm facing is that when an object does not have distinct, vibrant features, it takes a few seconds to return a detection result, which I can understand. Now, I want the app to show a bounding box and an activity indicator on top of the object while it is trying to detect it.
I don't see any information regarding this. Also, is there any way to get the time when detection starts, or the confidence percentage of the object being detected?
Any help is appreciated.
It is possible to show a boundingBox for the ARReferenceObject prior to it being detected, although I am not sure why you would want to do that (in advance, anyway).
For example, assuming your referenceObject was on a horizontal surface, you would first need to place your estimated bounding box on the plane (or use some other method to place it in advance), and in the time it took to detect the ARPlaneAnchor and place the boundingBox it is most likely that your model would already have been detected.
Possible approach:
As you are no doubt aware, an ARReferenceObject has center, extent, and scale properties as well as a set of rawFeaturePoints associated with the object.
As such, we can build on some of the sample code from Apple's Scanning & Detecting 3D Objects project and create our own SCNNode that displays a bounding box of the approximate size of the locally stored ARReferenceObject prior to it being detected.
Note that you will need to locate the 'wireframe_shader' from the Apple sample code for the boundingBox to render transparently:
import Foundation
import ARKit
import SceneKit

class BlackMirrorzBoundingBox: SCNNode {

    //-----------------------
    // MARK: - Initialization
    //-----------------------

    /// Creates A WireFrame Bounding Box From The Data Retrieved From The ARReferenceObject
    ///
    /// - Parameters:
    ///   - points: [float3]
    ///   - scale: CGFloat
    ///   - color: UIColor
    init(points: [float3], scale: CGFloat, color: UIColor = .cyan) {
        super.init()

        var localMin = float3(Float.greatestFiniteMagnitude)
        var localMax = float3(-Float.greatestFiniteMagnitude)

        for point in points {
            localMin = min(localMin, point)
            localMax = max(localMax, point)
        }

        self.simdPosition += (localMax + localMin) / 2
        let extent = localMax - localMin

        let wireFrame = SCNNode()
        let box = SCNBox(width: CGFloat(extent.x), height: CGFloat(extent.y), length: CGFloat(extent.z), chamferRadius: 0)
        box.firstMaterial?.diffuse.contents = color
        box.firstMaterial?.isDoubleSided = true
        wireFrame.geometry = box
        setupShaderOnGeometry(box)
        self.addChildNode(wireFrame)
    }

    required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) Has Not Been Implemented") }

    //----------------
    // MARK: - Shaders
    //----------------

    /// Sets A Shader To Render The Cube As A Wireframe
    ///
    /// - Parameter geometry: SCNBox
    func setupShaderOnGeometry(_ geometry: SCNBox) {
        guard let path = Bundle.main.path(forResource: "wireframe_shader", ofType: "metal", inDirectory: "art.scnassets"),
              let shader = try? String(contentsOfFile: path, encoding: .utf8) else {
            return
        }
        geometry.firstMaterial?.shaderModifiers = [.surface: shader]
    }
}
To display the bounding box, you would then do something like the following, noting that in my example I have these variables:
@IBOutlet var augmentedRealityView: ARSCNView!
let configuration = ARWorldTrackingConfiguration()
let augmentedRealitySession = ARSession()
To display the boundingBox prior to detection of the actual object itself, you would call loadBoundingBox() in viewDidLoad, e.g.:
/// Creates A Bounding Box From The Data Available From The ARObject In The Local Bundle
func loadBoundingBox() {

    //1. Run Our Session
    augmentedRealityView.session = augmentedRealitySession
    augmentedRealityView.delegate = self

    //2. Load A Single ARReferenceObject From The Main Bundle
    if let objectURL = Bundle.main.url(forResource: "fox", withExtension: ".arobject") {

        do {
            var referenceObjects = [ARReferenceObject]()
            let object = try ARReferenceObject(archiveURL: objectURL)

            //3. Log Its Properties
            print("""
            Object Center = \(object.center)
            Object Extent = \(object.extent)
            Object Scale = \(object.scale)
            """)

            //4. Get Its Scale
            let scale = CGFloat(object.scale.x)

            //5. Create A Bounding Box
            let boundingBoxNode = BlackMirrorzBoundingBox(points: object.rawFeaturePoints.points, scale: scale)

            //6. Add It To The ARSCNView
            self.augmentedRealityView.scene.rootNode.addChildNode(boundingBoxNode)

            //7. Position It 0.5m Down And 0.5m Away From The Camera
            boundingBoxNode.position = SCNVector3(0, -0.5, -0.5)

            //8. Add It To The Configuration
            referenceObjects.append(object)
            configuration.detectionObjects = Set(referenceObjects)

        } catch {
            print(error)
        }
    }

    //9. Run The Session
    augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    augmentedRealityView.automaticallyUpdatesLighting = true
}
The above example simply creates a boundingBox from the not-yet-detected ARReferenceObject and places it 0.5m down and 0.5m away from the camera, which yields something like this:
You would of course need to handle the initial position of the boundingBox, as well as how to handle the removal of the boundingBox 'indicator'.
The method below simply shows a boundingBox when the actual object is detected, e.g.:
//--------------------------
// MARK: - ARSCNViewDelegate
//--------------------------

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. Check We Have A Valid ARObject Anchor
        guard let objectAnchor = anchor as? ARObjectAnchor else { return }

        //2. Create A Bounding Box Around Our Object
        let scale = CGFloat(objectAnchor.referenceObject.scale.x)
        let boundingBoxNode = BlackMirrorzBoundingBox(points: objectAnchor.referenceObject.rawFeaturePoints.points, scale: scale)
        node.addChildNode(boundingBoxNode)
    }
}
Which yields something like this:
Regarding the detection timer, there is an example in the Apple sample code which displays how long it takes to detect the model.
In its crudest form (not accounting for milliseconds) you can do something like this:
First, create a Timer and a var to store the detection time, e.g.:
var detectionTimer = Timer()
var detectionTime: Int = 0

Then, when you run your session configuration, initialise the timer, e.g.:

/// Starts The Detection Timer
func startDetectionTimer() {
    detectionTimer = Timer.scheduledTimer(timeInterval: 1.0, target: self, selector: #selector(logDetectionTime), userInfo: nil, repeats: true)
}

/// Increments The Total Detection Time Before The ARReferenceObject Is Detected
@objc func logDetectionTime() {
    detectionTime += 1
}
Then, when an ARReferenceObject has been detected, invalidate the timer and log the time, e.g.:
//--------------------------
// MARK: - ARSCNViewDelegate
//--------------------------

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. Check We Have A Valid ARObject Anchor
        guard let _ = anchor as? ARObjectAnchor else { return }

        //2. Stop The Timer
        detectionTimer.invalidate()

        //3. Log The Detection Time
        print("Total Detection Time = \(detectionTime) Seconds")

        //4. Reset The Detection Time
        detectionTime = 0
    }
}
This should be more than enough to get you started...
And please note that this example doesn't provide a boundingBox while scanning an object (look at the Apple sample code for that); it provides one based on an existing ARReferenceObject, which is what your question implies (assuming I interpreted it correctly).

ARKit: Placing a 3D-object on top of image

I'm trying to place a 3D object properly on a reference image. To add the 3D object on the image in the real world, I'm using the imageAnchor.transform property, as seen below.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    updateQueue.async {
        // Add a virtual cup at the position of the found image
        self.virtualObject.addVirtualObjectWith(sceneName: "cup.dae",
                                                childNodeName: nil,
                                                position: SCNVector3(x: imageAnchor.transform.columns.3.x,
                                                                     y: imageAnchor.transform.columns.3.y,
                                                                     z: imageAnchor.transform.columns.3.z),
                                                recursively: true,
                                                imageAnchor: imageAnchor)
    }
}
The problem is that when I change the device orientation, the cup won't stay nicely in the middle of the image. I would also like the cup to stay on the same spot even when I remove the image. I don't understand the problem, because when you add an object using plane detection and hit-testing there is also an ARAnchor used for the plane.
Update 05/14/2018
func addVirtualObjectWith(sceneName: String, childNodeName: String, objectName: String, position: SCNVector3, recursively: Bool, node: SCNNode?) {
    print("VirtualObject: Added virtual object with scene name: \(sceneName)")

    let scene = SCNScene(named: "art.scnassets/\(sceneName)")!
    let sceneNode = scene.rootNode.childNode(withName: childNodeName, recursively: recursively)!
    sceneNode.name = objectName
    sceneNode.position = position

    add(object: sceneNode, toNode: node)
}

func add(object: SCNNode, toNode: SCNNode?) {
    if toNode != nil {
        toNode?.addChildNode(object)
    } else {
        sceneView.scene.rootNode.addChildNode(object)
    }
}
I finally found the solution: it turns out the size of the AR Reference Image was not set correctly in the Attributes Inspector. When the size is not correct, the anchor of the image is shaky.
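Besides setting the physical size correctly, another common approach is to add the content as a child of the anchor's node instead of positioning it in world space from imageAnchor.transform; ARKit then keeps the parent node aligned with the image for you. A minimal sketch, where loadCupNode() is a hypothetical helper that returns the cup content:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARImageAnchor else { return }
    // loadCupNode() is a hypothetical helper that loads the "cup.dae" content node.
    let cupNode = loadCupNode()
    // The position is relative to the image anchor's node, so zero means the image center.
    cupNode.position = SCNVector3Zero
    node.addChildNode(cupNode)
}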
