Child nodes not visible when setWorldOrigin is called? - iOS

So I have a function, shown below, that sets the world origin to a node placed where an image was detected after scanning it. I load the saved nodes from a database, and a separate function adds them to the scene.
For some reason, the nodes will not show when I run the app. It works when setWorldOrigin is commented out.
I would like the nodes to show relative to the image as the origin.
Am I missing something? Does setWorldOrigin change the session?
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    let nodeGeometry = SCNText(string: "Welcome!", extrusionDepth: 1)
    nodeGeometry.font = UIFont(name: "Helvetica", size: 30)
    nodeGeometry.firstMaterial?.diffuse.contents = UIColor.black
    anchorNode.geometry = nodeGeometry
    anchorNode.scale = SCNVector3(0.1, 0.1, 0.1)
    anchorNode.constraints = [SCNBillboardConstraint()]
    anchorNode.position = SCNVector3(imageAnchor.transform.columns.3.x,
                                     imageAnchor.transform.columns.3.y,
                                     imageAnchor.transform.columns.3.z)

    // Create a plane to visualize the initial position of the detected image.
    let plane = SCNPlane(width: referenceImage.physicalSize.width,
                         height: referenceImage.physicalSize.height)
    let planeNode = SCNNode(geometry: plane)
    planeNode.opacity = 0.25

    // Plane is rotated to match the picture location.
    planeNode.eulerAngles.x = -.pi / 2

    // Scan runs as an action for a set amount of time.
    planeNode.runAction(self.imageHighlightAction)

    // Add the plane visualization to the scene.
    node.addChildNode(planeNode)

    sceneView.session.setWorldOrigin(relativeTransform: imageAnchor.transform)
    sceneView.scene.rootNode.addChildNode(node)

    // Populates the scene.
    handleData()
} // renderer

I figured it out. The size I had set for my reference image was incorrect. The image I was using is 512x512 pixels; I got that by right-clicking the image, selecting "Get Info", and looking at the dimensions.
I also had the measurements entered in meters. I changed the unit to centimeters and used a pixel-to-centimeters converter.
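As a sanity check on that fix: an ARReferenceImage's physical size is specified in metres, so pixel dimensions only become meaningful together with the image's print resolution. A minimal sketch of the conversion (the 72 DPI figure is an assumption for illustration; use your image's actual resolution):

```swift
// ARReferenceImage.physicalWidth is in metres, so a printed image's pixel
// dimensions must be converted via its print resolution (dots per inch).
func physicalWidthInMeters(pixels: Double, dotsPerInch: Double) -> Double {
    let inches = pixels / dotsPerInch   // pixels -> inches
    return inches * 0.0254              // inches -> metres (1 in = 2.54 cm)
}

// A 512 px image at an assumed 72 DPI comes out around 0.18 m (18 cm) wide.
let width = physicalWidthInMeters(pixels: 512, dotsPerInch: 72)
```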

Related

Unable to simplify 3 blocks of code to 1 in iOS for augmented reality

I created a fun little AR app that allows me to point my phone at 3 index cards that have 2D drawings (xmas tree, gingerbread man, gift) and see 3D versions of the images pop up.
However, I have limited my app to just 3 images for now, as each picture/3D combo has its own block of code (shown below). I would like to somehow consolidate this to one block (even if I need to rename the image files to a numbered format), and would appreciate any advice.
I have tried two methods:
one in which I have 3 blocks of code, one for each picture (this method works - a translucent plane appears just over the index card and a 3d Object appears)
one in which I have consolidated to one block of code (this method results in only the translucent plane appearing when pointing the camera at the index card)
3 blocks of code method
Screenshots of the app, along with the code are as follows:
// MARK: - ARSCNViewDelegate

// The anchor will be the image detected and the node will be a 3d object.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // The 3d object.
    let node = SCNNode()
    // If this detects a plane or point or anything other than an image, this code will not run.
    if let imageAnchor = anchor as? ARImageAnchor {
        // This plane is created from the image detected (card). We want the width and height
        // in the physical world to be the same as the card, so effectively saying
        // "look at the anchor image found, and get its size properties".
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                             height: imageAnchor.referenceImage.physicalSize.height)
        // Once the plane has dimensions, use it to create a planeNode, a 3d object rendered on top of the card.
        let planeNode = SCNNode(geometry: plane)
        // Makes the plane translucent.
        plane.firstMaterial?.diffuse.contents = UIColor(white: 1.0, alpha: 0.5)
        // By default the plane created is vertical, so flip it to lie flat on the detected card.
        planeNode.eulerAngles.x = -.pi / 2
        // Add the plane node as a child of the node created above.
        node.addChildNode(planeNode)

        // ------------ 3 BLOCKS OF CODE BELOW THAT NEED TO SIMPLIFY TO 1 ----------------
        // First block, for the ChristmasTree.png image / ChristmasTree.scn 3D object.
        // Note that the 2D image is not detected if ".png" is included at the end of the
        // image anchor name, however the ".scn" appears to be required for the 3D object.
        if imageAnchor.referenceImage.name == "ChristmasTree" {
            // Create the 3d model on top of the card.
            if let cardScene = SCNScene(named: "art.scnassets/ChristmasTree.scn") {
                // Create a node that will represent the 3d object.
                if let cardNode = cardScene.rootNode.childNodes.first {
                    // The 3d model is rotated the other way, so rotate it back (same as above, only positive).
                    cardNode.eulerAngles.x = .pi / 2
                    planeNode.addChildNode(cardNode)
                }
            }
        }
        // Second block, for the Gift.png image / Gift.scn 3D object.
        if imageAnchor.referenceImage.name == "Gift" {
            if let cardScene = SCNScene(named: "art.scnassets/Gift.scn") {
                if let cardNode = cardScene.rootNode.childNodes.first {
                    cardNode.eulerAngles.x = .pi / 2
                    planeNode.addChildNode(cardNode)
                }
            }
        }
        // Third block, for the GingerbreadMan.png image / GingerbreadMan.scn 3D object.
        if imageAnchor.referenceImage.name == "GingerbreadMan" {
            if let cardScene = SCNScene(named: "art.scnassets/GingerbreadMan.scn") {
                if let cardNode = cardScene.rootNode.childNodes.first {
                    cardNode.eulerAngles.x = .pi / 2
                    planeNode.addChildNode(cardNode)
                }
            }
        }
    }
    // This method must return an SCNNode, which is sent back to the scene so it can render the 3d object.
    return node
}
}
One block of code method
I have revised the code as follows, however it has resulted in only the translucent plane appearing when pointing the camera at the index card:
// MARK: - ARSCNViewDelegate
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
let node = SCNNode()
if let imageAnchor = anchor as? ARImageAnchor {
let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
let planeNode = SCNNode(geometry: plane)
plane.firstMaterial?.diffuse.contents = UIColor(white: 1.0, alpha: 0.5)
planeNode.eulerAngles.x = -.pi / 2
node.addChildNode(planeNode)
//------------single block of code------------
let name = imageAnchor.referenceImage.name
if ["ChristmasTree", "Gift", "GingerbreadMan"].contains(name) {
if let cardScene = SCNScene(named: "art.scnassets/\(name).scn") {
if let cardNode = cardScene.rootNode.childNodes.first {
cardNode.eulerAngles.x = .pi / 2
planeNode.addChildNode(cardNode)
}
}
}
//------------single block of code------------
}
return node
}
}
Your issue there is that name is an optional String, and when you do string interpolation it results in a string like "Optional(\"whatever\")". What you need is to unwrap your optional String:
if let name = imageAnchor.referenceImage.name {
    // this condition is not needed considering that you are already safely
    // unwrapping SCNScene's fallible initializer
    // if ["ChristmasTree", "Gift", "GingerbreadMan"].contains(name) {
    if let cardScene = SCNScene(named: "art.scnassets/\(name).scn"),
       let cardNode = cardScene.rootNode.childNodes.first {
        cardNode.eulerAngles.x = .pi / 2
        planeNode.addChildNode(cardNode)
    }
    // }
}

How can I reduce 3 blocks of code to 1 in iOS

I created a fun little AR app that allows me to point my phone at index cards, that contain 2D images of Christmas items drawn by my niece, and have 3D versions of the images pop up.
However, I have limited my app to just 3 images for now, as each picture/3D combo has its own block of code (shown below). I would like to somehow consolidate this to one block (even if I need to rename the image files to a numbered format), and would appreciate any advice.
Screenshots of the app, using the "3 blocks of code method" are included.
// MARK: - ARSCNViewDelegate

// The anchor will be the image detected and the node will be a 3d object.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // The 3d object.
    let node = SCNNode()
    // If this detects a plane or point or anything other than an image, this code will not run.
    if let imageAnchor = anchor as? ARImageAnchor {
        // This plane is created from the image detected (card). We want the width and height
        // in the physical world to be the same as the card, so effectively saying
        // "look at the anchor image found, and get its size properties".
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                             height: imageAnchor.referenceImage.physicalSize.height)
        // Once the plane has dimensions, use it to create a planeNode, a 3d object rendered on top of the card.
        let planeNode = SCNNode(geometry: plane)
        // Makes the plane translucent.
        plane.firstMaterial?.diffuse.contents = UIColor(white: 1.0, alpha: 0.5)
        // By default the plane created is vertical, so flip it to lie flat on the detected card.
        planeNode.eulerAngles.x = -.pi / 2
        // Add the plane node as a child of the node created above.
        node.addChildNode(planeNode)

        // ------------ 3 BLOCKS OF CODE BELOW THAT NEED TO SIMPLIFY TO 1 ----------------
        // First block, for the ChristmasTree.png image / ChristmasTree.scn 3D object.
        // Note that the 2D image is not detected if ".png" is included at the end of the
        // image anchor name, however the ".scn" appears to be required for the 3D object.
        if imageAnchor.referenceImage.name == "ChristmasTree" {
            // Create the 3d model on top of the card.
            if let cardScene = SCNScene(named: "art.scnassets/ChristmasTree.scn") {
                // Create a node that will represent the 3d object.
                if let cardNode = cardScene.rootNode.childNodes.first {
                    // The 3d model is rotated the other way, so rotate it back (same as above, only positive).
                    cardNode.eulerAngles.x = .pi / 2
                    planeNode.addChildNode(cardNode)
                }
            }
        }
        // Second block, for the Gift.png image / Gift.scn 3D object.
        if imageAnchor.referenceImage.name == "Gift" {
            if let cardScene = SCNScene(named: "art.scnassets/Gift.scn") {
                if let cardNode = cardScene.rootNode.childNodes.first {
                    cardNode.eulerAngles.x = .pi / 2
                    planeNode.addChildNode(cardNode)
                }
            }
        }
        // Third block, for the GingerbreadMan.png image / GingerbreadMan.scn 3D object.
        if imageAnchor.referenceImage.name == "GingerbreadMan" {
            if let cardScene = SCNScene(named: "art.scnassets/GingerbreadMan.scn") {
                if let cardNode = cardScene.rootNode.childNodes.first {
                    cardNode.eulerAngles.x = .pi / 2
                    planeNode.addChildNode(cardNode)
                }
            }
        }
    }
    // This method must return an SCNNode, which is sent back to the scene so it can render the 3d object.
    return node
}
}
EDIT
Using Vadian's recommendation, I have revised the code as follows, however it has resulted in only the translucent plane appearing when pointing the camera at the index card:
// MARK: - ARSCNViewDelegate
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
let node = SCNNode()
if let imageAnchor = anchor as? ARImageAnchor {
let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
let planeNode = SCNNode(geometry: plane)
plane.firstMaterial?.diffuse.contents = UIColor(white: 1.0, alpha: 0.5)
planeNode.eulerAngles.x = -.pi / 2
node.addChildNode(planeNode)
//------------single block of code------------
let name = imageAnchor.referenceImage.name
if ["ChristmasTree", "Gift", "GingerbreadMan"].contains(name) {
if let cardScene = SCNScene(named: "art.scnassets/\(name).scn") {
if let cardNode = cardScene.rootNode.childNodes.first {
cardNode.eulerAngles.x = .pi / 2
planeNode.addChildNode(cardNode)
}
}
}
//------------single block of code------------
}
return node
}
}
As the only difference is the name, the three branches can be reduced to:
let name = imageAnchor.referenceImage.name
if ["ChristmasTree", "Gift", "GingerbreadMan"].contains(name) {
    //create the 3d model on top of the card
    if let cardScene = SCNScene(named: "art.scnassets/\(name).scn") {
        //create a node that will represent the 3d object
        if let cardNode = cardScene.rootNode.childNodes.first {
            //Since the 3d image is rotated the other way, need to bring it forward so same as above only with positive
            cardNode.eulerAngles.x = .pi / 2
            planeNode.addChildNode(cardNode)
        }
    }
}
Or, avoiding the pyramid of doom:
let name = imageAnchor.referenceImage.name
if ["ChristmasTree", "Gift", "GingerbreadMan"].contains(name),
   let cardScene = SCNScene(named: "art.scnassets/\(name).scn"),
   let cardNode = cardScene.rootNode.childNodes.first {
    cardNode.eulerAngles.x = .pi / 2
    planeNode.addChildNode(cardNode)
}

ARKit Continuously get Camera Real World Position

This is the first time I am creating an ARKit app, and I am also not very familiar with iOS development, however I am trying to achieve something relatively simple...
All the app needs to do is get the world position of the phone and send it continuously to a REST API.
I am using the default ARKit project in Xcode, and am able to get the phone's position with the following function in ViewController.swift:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let planeAnchor = anchor as? ARPlaneAnchor {
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x), height: CGFloat(planeAnchor.extent.z))
        plane.firstMaterial?.diffuse.contents = UIColor(white: 1, alpha: 0.75)
        let planeNode = SCNNode(geometry: plane)
        planeNode.position = SCNVector3Make(planeAnchor.center.x, planeAnchor.center.y, planeAnchor.center.z)
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
    }
    guard let pointOfView = sceneView.pointOfView else { return }
    let transform = pointOfView.transform
    let orientation = SCNVector3(-transform.m31, -transform.m32, transform.m33)
    let location = SCNVector3(transform.m41, transform.m42, transform.m43)
    let currentPositionOfCamera = SCNVector3(orientation.x + location.x,
                                             orientation.y + location.y,
                                             orientation.z + location.z)
    print(currentPositionOfCamera)
}
This function works as expected: it renders a plane in the view and prints the position, but only once.
I need to get the phone's position as it moves, so I tried adding
updateAtTime time: NSTimeInterval
to the function definition, but after doing this even the plane was no longer rendered.
How can I continuously get the phone's position vector?
TIA
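A hedged sketch of one approach: SCNSceneRendererDelegate has a separate per-frame callback, renderer(_:updateAtTime:), rather than an extra parameter on didAdd (changing didAdd's signature means neither delegate method matches, which is why the plane stopped rendering). "ViewController" below is a placeholder for your own delegate class with its sceneView outlet:

```swift
import ARKit
import SceneKit

// Implement the per-frame SCNSceneRendererDelegate callback alongside didAdd.
// ViewController is assumed to conform to ARSCNViewDelegate and own sceneView.
extension ViewController {
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        // currentFrame is nil until the session has produced its first frame.
        guard let frame = sceneView.session.currentFrame else { return }
        // The camera's world position is the translation column of its transform.
        let t = frame.camera.transform.columns.3
        let cameraPosition = SCNVector3(t.x, t.y, t.z)
        // This fires every rendered frame (roughly 60 Hz), so throttle or batch
        // before sending it to a REST API.
        print(cameraPosition)
    }
}
```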

Convert coordinates in ARImageTrackingConfiguration

With ARKit 2 a new configuration was added: ARImageTrackingConfiguration which according to the SDK can have better performance and some new use cases.
Experimenting with it on Xcode 10b2 (see https://forums.developer.apple.com/thread/103894 for how to fix the asset loading), my code now correctly receives the delegate callback that an image was tracked and a node was added. However, I could not find any documentation on where the coordinate system is located, so does anybody know how to position the node in the scene so that it overlays the detected image?
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async {
        if let imageAnchor = anchor as? ARImageAnchor {
            let imageNode = SCNNode.createImage(size: imageAnchor.referenceImage.physicalSize)
            imageNode.transform = // ... ???
            node.addChildNode(imageNode)
        }
    }
}
ps: in contrast to ARWorldTrackingConfiguration, the origin seems to constantly move around (most likely putting the camera at 0,0,0).
pps: SCNNode.createImage is a helper function without any coordinate calculations.
Assuming that I have read your question correctly, you can do something like the following:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let nodeToReturn = SCNNode()
    //1. Check We Have Detected Our Image
    if let validImageAnchor = anchor as? ARImageAnchor {
        //2. Log The Information About The Anchor & Our Reference Image
        print("""
        ARImageAnchor Transform = \(validImageAnchor.transform)
        Name Of Detected Image = \(validImageAnchor.referenceImage.name)
        Width Of Detected Image = \(validImageAnchor.referenceImage.physicalSize.width)
        Height Of Detected Image = \(validImageAnchor.referenceImage.physicalSize.height)
        """)
        //3. Create An SCNPlane To Cover The Detected Image
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: validImageAnchor.referenceImage.physicalSize.width,
                                     height: validImageAnchor.referenceImage.physicalSize.height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.geometry = planeGeometry
        //a. Set The Opacity To Less Than 1 So We Can See The Real World Image
        planeNode.opacity = 0.5
        //b. Rotate The PlaneNode So It Matches The Rotation Of The Anchor
        planeNode.eulerAngles.x = -.pi / 2
        //4. Add It To The Node
        nodeToReturn.addChildNode(planeNode)
        //5. Add Something Such As An SCNScene To The Plane
        if let modelScene = SCNScene(named: "art.scnassets/model.scn"),
           let modelNode = modelScene.rootNode.childNodes.first {
            //a. Set The Model At The Center Of The Plane & Move It Forward A Tad
            modelNode.position = SCNVector3Zero
            modelNode.position.z = 0.15
            //b. Add It To The PlaneNode
            planeNode.addChildNode(modelNode)
        }
    }
    return nodeToReturn
}
Hopefully this will point you in the right direction...

Coordinates of ARImageAnchor transform matrix are way too different from the ARPlaneAnchor ones

I am doing this simple thing:
Vertical plane detection
Image recognition on a vertical plane
The image is hung on the detected plane (on my wall). In both cases I implement the renderer(_:didAdd:for:) function from ARSCNViewDelegate. I stand at the same place for the vertical plane detection and the image recognition.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let shipScene = SCNScene(named: "ship.scn"),
          let shipNode = shipScene.rootNode.childNode(withName: "ship", recursively: false) else { return }
    shipNode.position = SCNVector3(anchor.transform.columns.3.x,
                                   anchor.transform.columns.3.y,
                                   anchor.transform.columns.3.z)
    sceneView.scene.rootNode.addChildNode(shipNode)
    print(anchor.transform)
}
In the case of a vertical plane detection the anchor will be an ARPlaneAnchor. In the case of an image recognition the anchor will be an ARImageAnchor.
Why are the transform matrices of those two anchors so different? I'm printing the anchor.transform and I get those results:
1.
simd_float4x4([
    [0.941312,    0.0,       -0.337538,  0.0],
    [0.336284,   -0.0861278,  0.937814,  0.0],
    [-0.0290714, -0.996284,  -0.0810731, 0.0],
    [0.191099,    0.172432,  -1.14543,   1.0]
])
2.
simd_float4x4([
    [0.361231,   0.10894,    0.926093,  0.0],
    [-0.919883, -0.121052,   0.373049,  0.0],
    [0.152743,  -0.986651,   0.0564843, 0.0],
    [75.4418,   10.9618,   -14.3788,    1.0]
])
So if I want to place a 3D object on the detected vertical plane, I can simply use [x = 0.191099, y = 0.172432, z = -1.14543] as the position of my node (myNode) and add it to the scene with sceneView.scene.rootNode.addChildNode(myNode). But if I want to place a 3D object at the detected image's anchor, I cannot use [x = 75.4418, y = 10.9618, z = -14.3788].
What should I do to place a 3D object on the detected image's anchor? I really don't understand the transform matrix of the ARImageAnchor.
Here is an example for you in which I use the func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) method:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
    let x = currentImageAnchor.transform
    print(x.columns.3.x, x.columns.3.y, x.columns.3.z)
    //2. Get The Target's Name
    let name = currentImageAnchor.referenceImage.name!
    //3. Get The Target's Width & Height In Meters
    let width = currentImageAnchor.referenceImage.physicalSize.width
    let height = currentImageAnchor.referenceImage.physicalSize.height
    print("""
    Image Name = \(name)
    Image Width = \(width)
    Image Height = \(height)
    """)
    //4. Create A Plane Geometry To Cover The ARImageAnchor
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: width, height: height)
    planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
    planeNode.opacity = 0.25
    planeNode.geometry = planeGeometry
    //5. Rotate The PlaneNode To Horizontal
    planeNode.eulerAngles.x = -.pi / 2
    //The Node Is Centered In The Anchor (0,0,0)
    node.addChildNode(planeNode)
    //6. Create An SCNBox
    let boxNode = SCNNode()
    let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
    //7. Create A Different Colour For Each Face
    let faceColours = [UIColor.red, UIColor.green, UIColor.blue, UIColor.cyan, UIColor.yellow, UIColor.gray]
    var faceMaterials = [SCNMaterial]()
    //8. Apply One To Each Of The Six Faces
    for face in 0 ..< faceColours.count {
        let material = SCNMaterial()
        material.diffuse.contents = faceColours[face]
        faceMaterials.append(material)
    }
    boxGeometry.materials = faceMaterials
    boxNode.geometry = boxGeometry
    //9. Set The Box's Position So It Sits On The Plane (node.y + Half The Box Height)
    boxNode.position = SCNVector3(0, 0.05, 0)
    //10. Add The Box To The Node
    node.addChildNode(boxNode)
}
From my understanding (I could of course be wrong), you would know that your placement area is the width and height of the referenceImage.physicalSize, which is expressed in metres:
let width = currentImageAnchor.referenceImage.physicalSize.width
let height = currentImageAnchor.referenceImage.physicalSize.height
As such, you would need to scale your content (if needed) to fit within these boundaries, assuming you want it to appear to overlay the image.
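One way to do that scaling, as a hedged sketch (the helper name and shape are mine, not from the answer): uniformly scale a node so its bounding box fits within the image's physical width and height, both in metres:

```swift
import SceneKit

// Uniformly scale `node` so its bounding box fits inside width x height (metres).
// Uses the smaller of the two ratios so neither dimension overflows the image.
func scaleToFit(_ node: SCNNode, width: CGFloat, height: CGFloat) {
    let (minBounds, maxBounds) = node.boundingBox
    let nodeWidth = CGFloat(maxBounds.x - minBounds.x)
    let nodeHeight = CGFloat(maxBounds.y - minBounds.y)
    guard nodeWidth > 0, nodeHeight > 0 else { return }
    let factor = Float(min(width / nodeWidth, height / nodeHeight))
    node.scale = SCNVector3(factor, factor, factor)
}

// e.g. scaleToFit(modelNode,
//                 width: currentImageAnchor.referenceImage.physicalSize.width,
//                 height: currentImageAnchor.referenceImage.physicalSize.height)
```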
