ARKit - how many tracking images can it track? - ios

So I understand that in order to track images, we need to create an AR Resource Group and place all the images we intend to track there, as well as configure their real-world size properties through the Inspector.
Then we set that array of ARReferenceImages on the session's world tracking configuration.
All good with that.
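For reference, my setup is essentially the standard one, something like this (assuming a resource group named "AR Resources" and an ARSCNView outlet called sceneView):
// Load the reference images configured in the AR Resource Group
guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else { return }
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = referenceImages
sceneView.session.run(configuration)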
But HOW MANY can we track? 10? 100? 1,000,000? And would it be possible to download those images and create the ARReferenceImages on the fly, instead of having them in the bundle from the very beginning?

Having a look at the Apple docs, they don't seem to specify a limit. As such, it's reasonable to assume it depends on memory management etc.
Regarding creating images on the fly, this is definitely possible.
According to the docs this can be done in one of two ways:
Creating a new reference image from a Core Graphics image object:
init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
Creating a new reference image from a Core Video pixel buffer:
init(CVPixelBuffer, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
Here is an example of creating a reference image on the fly using an image from the standard Assets bundle, although this can easily be adapted to parse an image from a URL etc.:
// Create ARReferenceImages From Somewhere Other Than The Default Folder
func loadDynamicImageReferences() {

    //1. Get The Image From The Folder
    guard let imageFromBundle = UIImage(named: "moonTarget"),
        //2. Convert It To A CIImage
        let imageToCIImage = CIImage(image: imageFromBundle),
        //3. Then Convert The CIImage To A CGImage
        let cgImage = convertCIImageToCGImage(inputImage: imageToCIImage) else { return }

    //4. Create An ARReferenceImage (Remembering Physical Width Is In Metres)
    let arImage = ARReferenceImage(cgImage, orientation: CGImagePropertyOrientation.up, physicalWidth: 0.2)

    //5. Name The Image
    arImage.name = "CGImage Test"

    //6. Set The ARWorldTrackingConfiguration Detection Images (Assuming A Configuration Is Running)
    configuration.detectionImages = [arImage]
}

/// Converts A CIImage To A CGImage
///
/// - Parameter inputImage: CIImage
/// - Returns: CGImage
func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
        return cgImage
    }
    return nil
}
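For completeness, the CVPixelBuffer initializer works the same way. Here is a minimal sketch that builds a reference image from the session's current camera frame (sceneView being an assumed ARSCNView, and the 0.2m physical width a placeholder):
// Create An ARReferenceImage From The Session's Current Camera Frame
func createReferenceImageFromCurrentFrame() -> ARReferenceImage? {
    //1. Get The Current Frame's Pixel Buffer
    guard let pixelBuffer = sceneView.session.currentFrame?.capturedImage else { return nil }
    //2. Create The Reference Image (Physical Width Is In Metres)
    let arImage = ARReferenceImage(pixelBuffer, orientation: .up, physicalWidth: 0.2)
    arImage.name = "CVPixelBuffer Test"
    return arImage
}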
We can then test this within the ARSCNViewDelegate, e.g.:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    let x = currentImageAnchor.transform
    print(x.columns.3.x, x.columns.3.y, x.columns.3.z)

    //2. Get The Target's Name
    let name = currentImageAnchor.referenceImage.name!

    //3. Get The Target's Width & Height In Metres
    let width = currentImageAnchor.referenceImage.physicalSize.width
    let height = currentImageAnchor.referenceImage.physicalSize.height

    print("""
    Image Name = \(name)
    Image Width = \(width)
    Image Height = \(height)
    """)

    //4. Create A Plane Geometry To Cover The ARImageAnchor
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: width, height: height)
    planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
    planeNode.opacity = 0.25
    planeNode.geometry = planeGeometry

    //5. Rotate The PlaneNode To Horizontal
    planeNode.eulerAngles.x = -.pi/2

    //The Node Is Centered In The Anchor (0,0,0)
    node.addChildNode(planeNode)

    //6. Create An SCNBox
    let boxNode = SCNNode()
    let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)

    //7. Create A Different Colour For Each Face
    let faceColours = [UIColor.red, UIColor.green, UIColor.blue, UIColor.cyan, UIColor.yellow, UIColor.gray]
    var faceMaterials = [SCNMaterial]()

    //8. Apply A Colour To Each Of The Six Faces
    for face in 0 ..< 6 {
        let material = SCNMaterial()
        material.diffuse.contents = faceColours[face]
        faceMaterials.append(material)
    }
    boxGeometry.materials = faceMaterials
    boxNode.geometry = boxGeometry

    //9. Set The Box's Position So It Sits On The Plane (y = box.height/2)
    boxNode.position = SCNVector3(0, 0.05, 0)

    //10. Add The Box To The Node
    node.addChildNode(boxNode)
}
As you can see, the process is fairly easy. So in your case, you are probably most interested in the conversion function above, which uses this method to create the dynamic images:
init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
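Since the question also asks about downloading images, the same initializer can be fed from the network. Here is a rough sketch (the URL handling and physicalWidth are up to you; error handling is kept minimal):
// Download An Image & Convert It Into An ARReferenceImage On The Fly
func loadReferenceImage(from url: URL, physicalWidth: CGFloat, completion: @escaping (ARReferenceImage?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        //1. Make Sure We Got Valid Image Data Back
        guard let data = data,
              let downloadedImage = UIImage(data: data),
              let cgImage = downloadedImage.cgImage else { completion(nil); return }
        //2. Build The Reference Image (Physical Width Is In Metres)
        let arImage = ARReferenceImage(cgImage, orientation: .up, physicalWidth: physicalWidth)
        arImage.name = url.lastPathComponent
        completion(arImage)
    }.resume()
}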

Paraphrasing the Human Interface Guidelines for AR: image detection performance and accuracy deteriorate as the number of images increases. So there's no hard limit in the API, but if you put more than around 25 images in the current detection set, detection starts becoming too slow and inaccurate to be useful.
There are lots of other factors affecting performance and accuracy, too, so consider that a guideline, not a hard limit. Depending on the scene conditions where you're running the app, how much you're stressing the CPU with other tasks, how distinct your reference images are from one another, etc., you might manage a few more than 25... or start having detection problems with a few less than 25.
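If you need a larger library than that, one workaround (a sketch, not an official API recommendation) is to keep all your reference images on disk and only hand ARKit a subset at a time, swapping the detection set at runtime:
// Swap The Active Detection Set At Runtime
func activateDetectionSet(_ images: Set<ARReferenceImage>, in sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = images
    // Running without reset options keeps the session's existing anchors and world tracking state
    sceneView.session.run(configuration)
}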

Related

Show image on SCNPlane in Scene kit & AR kit in swift iOS

I followed this raywenderlich tutorial to detect an image using ARKit and SceneKit. In that example they show how to display a video on top of the detected image.
https://www.raywenderlich.com/6957-building-a-museum-app-with-arkit-2
I want to display an image on top of the detected image instead. How can I add a child node to the SCNNode so that I can display an image on top of it?
Here is the code I tried; it's not working and shows a blank white screen.
DispatchQueue.main.async {
    let size = imageAnchor.referenceImage.physicalSize
    let imageView = UIImageView(image: UIImage(named: "BG.png")!)
    let imgMaterial = SCNMaterial()
    imgMaterial.diffuse.contents = imageView
    let imgPlane = SCNPlane(width: size.width, height: size.height)
    imgPlane.materials = [imgMaterial]
    let imgNode = SCNNode(geometry: imgPlane)
    imgNode.eulerAngles.x = -.pi / 2
    node.addChildNode(imgNode)
    node.opacity = 1
}
Please help me show an image on top of the detected image. Thanks in advance.
Update: I'm now able to show an image on top of the detected image, but it takes a lot of memory. What is wrong with this code?
In the same way as the image view, I'm also displaying a GIF on top of the detected image with the code below.
let size = imageAnchor.referenceImage.physicalSize
let imgNameFromDoc = person["actionToTake"] as! String
let documentsPathURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
let imgpath = documentsPathURL?.appendingPathComponent("/Packages/\(self.selectedDatasetName)/3DScenes/\(imgNameFromDoc)")
let imageData = try! Data(contentsOf: URL(fileURLWithPath: imgpath!.path))
let imageURL = UIImage.gifImageWithData(imageData)
self.gifImgView = UIImageView(image: imageURL)
let imgMaterial = SCNMaterial()
imgMaterial.diffuse.contents = self.gifImgView
let imgPlane = SCNPlane(width: CGFloat(scaleActionSize(wid: Float(size.width), hei: Float(size.height))[0]),
                        height: CGFloat(scaleActionSize(wid: Float(size.width), hei: Float(size.height))[1]))
imgPlane.materials = [imgMaterial]
let imgNode = SCNNode(geometry: imgPlane)
imgNode.eulerAngles.x = -.pi / 2
node.addChildNode(imgNode)
node.opacity = 1
With the help of an image extension I'm showing the GIF in an image view on top of the detected image.
https://github.com/kiritmodi2702/GIF-Swift/blob/master/GIF-Swift/iOSDevCenters%2BGIF.swift
How to load GIF image in Swift?
I dragged the iOSDevCenters+GIF.swift class into the app and it displays the GIF.
When I use this GIF image view to display the GIF, plus the image view to display the image, the app takes more than 800 MB of memory when run from Xcode on an iPad.
Can somebody help me find the memory leaks (I tried with Instruments but wasn't able to fix it) or anything wrong in this code?
I assume you have image tracking working.
The following method is called when a new anchor is added; in this case, when your tracked image is detected. The node parameter corresponds to the tracked image, so you can add a child node to it as in the tutorial. Try the following (using ARSCNViewDelegate):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async {
        let imageView = UIImageView(image: UIImage())
        let imgMaterial = SCNMaterial()
        imgMaterial.diffuse.contents = imageView
        let imgPlane = SCNPlane(width: 0.1, height: 0.1)
        imgPlane.materials = [imgMaterial]
        let imgNode = SCNNode(geometry: imgPlane)
        imgNode.eulerAngles.x = -.pi / 2
        node.addChildNode(imgNode)
        node.opacity = 1
    }
}
The imgNode is added to the node, and the node is already added to the scene. But what happens if the node is not added to the scene? Then you have to add it yourself. In case you are not using the renderer method, here is an example:
self.imgNode?.position = SCNVector3(0, 0, 0)
sceneView.scene.rootNode.addChildNode(self.imgNode)
Note that displaying UIKit views as material property contents is not supported.
Instead of a UIImageView you should directly use the UIImage, or better yet just a String or URL that points to the image on disk. That way the image won't be loaded into memory twice (once in SceneKit and once in UIKit).
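In practice the fix is a one-line change to the material assignment; using the names from the question's code:
let imgMaterial = SCNMaterial()
// Assign the UIImage itself (or a URL pointing to the image on disk), not a UIImageView
imgMaterial.diffuse.contents = UIImage(named: "BG.png")
let imgPlane = SCNPlane(width: size.width, height: size.height)
imgPlane.materials = [imgMaterial]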

Child nodes not visible when setWorldOrigin is called?

So I have a function like the one below that sets the world origin to a node placed where an image is after scanning it. I load the saved nodes from a database, which adds them to the scene in a separate function.
For some reason, the nodes will not show when I run the app. It works when setWorldOrigin is commented out.
I would like the nodes to show relative to the image as the origin.
Am I missing something? Does setWorldOrigin change the session?
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    let nodeGeometry = SCNText(string: "Welcome!", extrusionDepth: 1)
    nodeGeometry.font = UIFont(name: "Helvetica", size: 30)
    nodeGeometry.firstMaterial?.diffuse.contents = UIColor.black
    anchorNode.geometry = nodeGeometry
    anchorNode.scale = SCNVector3(0.1, 0.1, 0.1)
    anchorNode.constraints = [SCNBillboardConstraint()]
    anchorNode.position = SCNVector3(imageAnchor.transform.columns.3.x,
                                     imageAnchor.transform.columns.3.y,
                                     imageAnchor.transform.columns.3.z)

    // Create a plane to visualize the initial position of the detected image.
    let plane = SCNPlane(width: referenceImage.physicalSize.width,
                         height: referenceImage.physicalSize.height)
    let planeNode = SCNNode(geometry: plane)
    planeNode.opacity = 0.25

    /*
     * Plane is rotated to match the picture location
     */
    planeNode.eulerAngles.x = -.pi / 2

    /*
     * Scan runs as an action for a set amount of time
     */
    planeNode.runAction(self.imageHighlightAction)

    // Add the plane visualization to the scene.
    node.addChildNode(planeNode)

    sceneView.session.setWorldOrigin(relativeTransform: imageAnchor.transform)
    sceneView.scene.rootNode.addChildNode(node)

    /*
     * Populates the scene
     */
    handleData()
} // renderer
I figured it out. The physical size I had set for my image was incorrect. The image I was using is 512x512 pixels; I got that by right-clicking the image, selecting "Get Info", and looking at the dimensions.
I also had the measurements in meters. I changed the units to centimeters and used a pixel-to-centimeters converter.
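For anyone hitting the same issue: the physical size ARKit wants is the real-world printed size of the image, not its pixel dimensions. If you only know the pixels, you can estimate the printed size from the print DPI; a quick worked example, assuming a 72 DPI print (substitute your actual DPI):
// Estimate The Printed Width From Pixel Dimensions & Print DPI
let pixelWidth: CGFloat = 512          // image width in pixels
let dotsPerInch: CGFloat = 72          // assumed print resolution
let widthInCentimetres = (pixelWidth / dotsPerInch) * 2.54   // ≈ 18.1 cm
let widthInMetres = widthInCentimetres / 100                 // ≈ 0.181 m, what ARReferenceImage expects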

How to translate ARAnchor size to SpriteKit equivalent

Question:
How do I create a new SKSpriteNode that has the same height and width as the ARObjectAnchor.referenceObject?
Context:
I'm currently fiddling with ARKit's new object detection feature and have working code to detect an object that I scanned in. When ARKit detects the object it provides a new ARAnchor of type ARObjectAnchor.
I know ARSCNView provides a projectPoint method, but I can't find any equivalent function for ARSKView. How can I map the ARObjectAnchor dimensions to the new Sprite?
Here's how I'm processing the detected object:
func view(_ view: ARSKView, didAdd node: SKNode, for anchor: ARAnchor) {
    if let objectAnchor = anchor as? ARObjectAnchor {
        let width = objectAnchor.referenceObject.extent.x
        let height = objectAnchor.referenceObject.extent.y
        // How to translate above height/width to the below size?
        let box = SKSpriteNode(color: .white, size: ???)
        node.addChild(box)
    }
}
...
let size = CGSize(width: CGFloat(width), height: CGFloat(height)) // extent components are Float
let box = SKSpriteNode(color: .white, size: size)
...
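Note that referenceObject.extent is in metres while SKSpriteNode sizes are in points, so you also need some metres-to-points scale. A sketch with an assumed scale factor (the next answer effectively uses 10,000 points per metre):
func view(_ view: ARSKView, didAdd node: SKNode, for anchor: ARAnchor) {
    guard let objectAnchor = anchor as? ARObjectAnchor else { return }
    // extent is in metres; pick a points-per-metre scale that suits your scene
    let pointsPerMetre: CGFloat = 10_000  // assumed scale factor
    let size = CGSize(width: CGFloat(objectAnchor.referenceObject.extent.x) * pointsPerMetre,
                      height: CGFloat(objectAnchor.referenceObject.extent.y) * pointsPerMetre)
    let box = SKSpriteNode(color: .white, size: size)
    node.addChild(box)
}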
When I add an SKScene to my SCNPlane in my image recognition app, I multiply the size of my SCNPlane by 10,000 (don't ask me why it's 10,000, I have no idea), as follows:
let toto = SCNNode(geometry: SCNPlane(width: 0.3, height: 0.1))
let sk = SKScene(size: CGSize(width: 3000, height: 1000))
and for me it's the same size; try this out and tell me if it works for you!

Convert coordinates in ARImageTrackingConfiguration

With ARKit 2 a new configuration was added: ARImageTrackingConfiguration, which according to the SDK can have better performance and enables some new use cases.
Experimenting with it on Xcode 10b2 (see https://forums.developer.apple.com/thread/103894 for how to fix the asset loading), my code now correctly gets the delegate call that an image was tracked, after which a node is added. However, I could not find any documentation on where the coordinate system is located. Does anybody know how to place the node into the scene so that it overlays the detected image?
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async {
        if let imageAnchor = anchor as? ARImageAnchor {
            let imageNode = SCNNode.createImage(size: imageAnchor.referenceImage.physicalSize)
            imageNode.transform = // ... ???
            node.addChildNode(imageNode)
        }
    }
}
P.S.: In contrast to ARWorldTrackingConfiguration, the origin seems to move around constantly (most likely placing the camera at (0, 0, 0)).
P.P.S.: SCNNode.createImage is a helper function without any coordinate calculations.
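For reference, a minimal ARImageTrackingConfiguration setup looks something like this (referenceImages being the set loaded from the resource group):
let configuration = ARImageTrackingConfiguration()
configuration.trackingImages = referenceImages
configuration.maximumNumberOfTrackedImages = 1
sceneView.session.run(configuration)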
Assuming that I have read your question correctly, you can do something like the following:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let nodeToReturn = SCNNode()

    //1. Check We Have Detected Our Image
    if let validImageAnchor = anchor as? ARImageAnchor {

        //2. Log The Information About The Anchor & Our Reference Image
        print("""
        ARImageAnchor Transform = \(validImageAnchor.transform)
        Name Of Detected Image = \(validImageAnchor.referenceImage.name)
        Width Of Detected Image = \(validImageAnchor.referenceImage.physicalSize.width)
        Height Of Detected Image = \(validImageAnchor.referenceImage.physicalSize.height)
        """)

        //3. Create An SCNPlane To Cover The Detected Image
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: validImageAnchor.referenceImage.physicalSize.width,
                                     height: validImageAnchor.referenceImage.physicalSize.height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.geometry = planeGeometry

        //a. Set The Opacity To Less Than 1 So We Can See The Real-World Image
        planeNode.opacity = 0.5

        //b. Rotate The PlaneNode So It Matches The Rotation Of The Anchor
        planeNode.eulerAngles.x = -.pi / 2

        //4. Add It To The Node
        nodeToReturn.addChildNode(planeNode)

        //5. Add Something Such As An SCNScene To The Plane
        if let modelScene = SCNScene(named: "art.scnassets/model.scn"),
           let modelNode = modelScene.rootNode.childNodes.first {

            //a. Set The Model At The Center Of The Plane & Move It Forward A Tad
            modelNode.position = SCNVector3Zero
            modelNode.position.z = 0.15

            //b. Add It To The PlaneNode
            planeNode.addChildNode(modelNode)
        }
    }
    return nodeToReturn
}
Hopefully this will point you in the right direction...

How do I render a jpg image using ARKit ARSKViewDelegate?

I'd like to render a JPG image as a 2D rectangle floating in space. Using the SpriteKit example, how do I return a JPG image from the ARSKViewDelegate?
The demo returns an SKLabelNode. Is there a node class that would be appropriate for a JPG image that I fetch from the network, maybe a UIImage?
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    let labelNode = SKLabelNode(text: "👾")
    labelNode.horizontalAlignmentMode = .center
    labelNode.verticalAlignmentMode = .center
    return labelNode
}
You can use an SCNPlane and assign a UIImage as the content of that plane.
let imagePlane = SCNPlane(width: sceneView.bounds.width/6000, height: sceneView.bounds.height/6000)
imagePlane.firstMaterial?.diffuse.contents = //<-- UIImage here
imagePlane.firstMaterial?.lightingModel = .constant
let planeNode = SCNNode(geometry: imagePlane)
UPDATE: I've now noticed that you're using SpriteKit; the code I shared uses SceneKit.
Looks like I can use an SKSpriteNode built from an SKTexture. The only issue with that is I see warnings in the logs about degraded AR performance: [Technique] World tracking performance is being affected by resource constraints [1]
guard let url = URL(string: imageURL),
      let data = try? Data(contentsOf: url),   // make sure the image at this URL exists
      let image = UIImage(data: data) else { return nil }
let texture = SKTexture(image: image)
return SKSpriteNode(texture: texture)
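Part of that warning is probably because Data(contentsOf:) blocks the calling thread while the image downloads. A sketch of loading the texture asynchronously instead (the completion-based shape is an assumption; adapt it to your flow):
// Download The Image Off The Main Thread, Then Build The Sprite On The Main Queue
func loadSprite(from url: URL, completion: @escaping (SKSpriteNode?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data, let image = UIImage(data: data) else {
            DispatchQueue.main.async { completion(nil) }
            return
        }
        DispatchQueue.main.async {
            completion(SKSpriteNode(texture: SKTexture(image: image)))
        }
    }.resume()
}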
