How to add a node relative to a tracked image position with ARKit? - ios

When an image is detected, I add a node in front of it.
[Image: what I need to guarantee]
The node is this picture, and in relation to it I add other nodes: one above the picture (1) and another beside it (2). Node 1 always ends up in the correct position, but node 2 sometimes ends up in front of the image, which is wrong, and only sometimes in the right position, as in the photo.
[Image: the wrong result]
for anchor in sceneView.session.currentFrame!.anchors {
    guard let imageAnchor = anchor as? ARImageAnchor else { continue } // safe cast; `as!` would crash on non-image anchors
    // node 1: above the image
    nodeInformation.position = SCNVector3Make(imageAnchor.transform.columns.3.x/100, imageAnchor.transform.columns.3.y/100 + 0.3, imageAnchor.transform.columns.3.z/100)
    // node 2: beside the image
    nodeAuthorInformation.position = SCNVector3Make(imageAnchor.transform.columns.3.x/100 + 0.3, imageAnchor.transform.columns.3.y/100, imageAnchor.transform.columns.3.z/100)
}
This is the code I'm using to set the initial positions of my two nodes.
I would like to know how I can guarantee that the second node will always stay in the right position, beside the picture.
How I add the nodes:
let tappedNode = self.sceneView.hitTest(gesture.location(in: gesture.view), options: [:])
if let result = tappedNode.first {
    nodeInformation.transform = result.modelTransform
    nodeAuthorInformation.transform = result.modelTransform
}
print("TAP PLACE = ", gesture.location(in: gesture.view))
if !tappedNode.isEmpty {
    let node = tappedNode[0].node
    print("NODE TAP = ", node.position)

    let artPlane = SCNPlane(width: 0.3, height: 0.3)
    artPlane.firstMaterial?.diffuse.contents = arArtDetailsScene
    artPlane.firstMaterial?.isDoubleSided = true
    let planeNode = SCNNode(geometry: artPlane)
    planeNode.eulerAngles.x = .pi

    authorPlane = SCNPlane(width: 0.2, height: 0.3)
    authorPlane.firstMaterial?.diffuse.contents = arAuthorDetailsScene
    authorPlane.firstMaterial?.isDoubleSided = true
    let planeAuthorNode = SCNNode(geometry: authorPlane)
    planeAuthorNode.eulerAngles.x = .pi

    nodeInformation.addChildNode(planeNode)
    nodeAuthorInformation.addChildNode(planeAuthorNode)
    changeArtDetails(artId: artIndex + 1)
    sceneView.scene.rootNode.addChildNode(nodeInformation)
    sceneView.scene.rootNode.addChildNode(nodeAuthorInformation)

    if sceneView.session.currentFrame != nil {
        for anchor in sceneView.session.currentFrame!.anchors {
            guard let imageAnchor = anchor as? ARImageAnchor else { continue }
            // initial position of node 1
            nodeInformation.position = SCNVector3Make(imageAnchor.transform.columns.3.x/100, imageAnchor.transform.columns.3.y/100 + 0.3, imageAnchor.transform.columns.3.z/100)
            // initial position of node 2
            nodeAuthorInformation.position = SCNVector3Make(imageAnchor.transform.columns.3.x/100 + 0.3, imageAnchor.transform.columns.3.y/100, imageAnchor.transform.columns.3.z/100)
        }
    }
}
And I update the position of the node like this:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    if sceneView.session.currentFrame != nil {
        for anchor in sceneView.session.currentFrame!.anchors {
            if let imageAnchor = anchor as? ARImageAnchor {
                DispatchQueue.main.async {
                    self.nodeInformation.position = SCNVector3Make(imageAnchor.transform.columns.3.x/100, imageAnchor.transform.columns.3.y/100 + 0.3, imageAnchor.transform.columns.3.z/100)
                    self.nodeAuthorInformation.position = SCNVector3Make(self.nodeInformation.position.x + 0.3, self.nodeInformation.position.y - 0.3, self.nodeInformation.position.z) //SCNVector3Make(imageAnchor.transform.columns.3.x/100 + 0.3, imageAnchor.transform.columns.3.y/100, imageAnchor.transform.columns.3.z/100)
                }
            }
        }
    }
}

ARKit updates the position of anchors, not nodes. Instead of adding your nodes to the scene manually, provide a single parent node with your custom nodes as children from ARSCNViewDelegate's renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? when the anchor is an ARImageAnchor. ARKit will then keep that node, and therefore your offsets, aligned with the tracked image.
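A minimal sketch of that approach, assuming the session is already configured with detectionImages. The plane sizes and offsets are illustrative; in an image anchor's local space, x and z span the image plane and y points out of it:

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard anchor is ARImageAnchor else { return nil }
    let container = SCNNode()

    // Node 1: offset along -z, i.e. "above" the picture when it hangs on a wall.
    let infoNode = SCNNode(geometry: SCNPlane(width: 0.3, height: 0.3))
    infoNode.eulerAngles.x = -.pi / 2   // lay the plane flat against the image plane
    infoNode.position = SCNVector3(0, 0, -0.3)

    // Node 2: offset along +x, i.e. beside the picture.
    let authorNode = SCNNode(geometry: SCNPlane(width: 0.2, height: 0.3))
    authorNode.eulerAngles.x = -.pi / 2
    authorNode.position = SCNVector3(0.3, 0, 0)

    container.addChildNode(infoNode)
    container.addChildNode(authorNode)
    // ARKit parents this node to the image anchor and keeps updating it,
    // so both offsets stay fixed relative to the picture.
    return container
}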

Related

How to play a video on a camera (SCNNode) with Image Tracking using Swift

I am new to ARKit. I am using image tracking to detect an image and display content beside it.
On the left side of the image I display some information about it, and on the right side I show a web view.
I just want to display a video over the top of the image.
Can you please help me display a video over the image (top)? I have attached the code for the info view and web view; I want to display the video similarly.
func displayDetailView(on rootNode: SCNNode, xOffset: CGFloat) {
    let detailPlane = SCNPlane(width: xOffset, height: xOffset * 1.4)
    detailPlane.cornerRadius = 0.25
    let detailNode = SCNNode(geometry: detailPlane)
    detailNode.geometry?.firstMaterial?.diffuse.contents = SKScene(fileNamed: "DetailScene")
    // Due to the origin of the iOS coordinate system, SCNMaterial's content appears upside down, so flip the y-axis.
    detailNode.geometry?.firstMaterial?.diffuse.contentsTransform = SCNMatrix4Translate(SCNMatrix4MakeScale(1, -1, 1), 0, 1, 0)
    detailNode.position.z -= 0.5
    detailNode.opacity = 0
    rootNode.addChildNode(detailNode)
    detailNode.runAction(.sequence([
        .wait(duration: 1.0),
        .fadeOpacity(to: 1.0, duration: 1.5),
        .moveBy(x: xOffset * -1.1, y: 0, z: -0.05, duration: 1.5),
        .moveBy(x: 0, y: 0, z: -0.05, duration: 0.2)
    ]))
}

func displayWebView(on rootNode: SCNNode, xOffset: CGFloat) {
    // Xcode warns about the deprecation of UIWebView in iOS 12.0, but there is currently
    // a bug that does not allow us to use a WKWebView as a texture for our webViewNode.
    // Note that UIWebViews should only be instantiated on the main thread!
    DispatchQueue.main.async {
        let request = URLRequest(url: URL(string: "https://www.youtube.com/watch?v=QvzVCOiC-qs")!)
        let webView = UIWebView(frame: CGRect(x: 0, y: 0, width: 400, height: 672))
        webView.loadRequest(request)
        let webViewPlane = SCNPlane(width: xOffset, height: xOffset * 1.4)
        webViewPlane.cornerRadius = 0.25
        let webViewNode = SCNNode(geometry: webViewPlane)
        webViewNode.geometry?.firstMaterial?.diffuse.contents = webView
        webViewNode.position.z -= 0.5
        webViewNode.opacity = 0
        rootNode.addChildNode(webViewNode)
        webViewNode.runAction(.sequence([
            .wait(duration: 3.0),
            .fadeOpacity(to: 1.0, duration: 1.5),
            .moveBy(x: xOffset * 1.1, y: 0, z: -0.05, duration: 1.5),
            .moveBy(x: 0, y: 0, z: -0.05, duration: 0.2)
        ]))
    }
}
I call these methods in the function below:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    updateQueue.async {
        let physicalWidth = imageAnchor.referenceImage.physicalSize.width
        let physicalHeight = imageAnchor.referenceImage.physicalSize.height
        // Create a plane geometry to visualize the initial position of the detected image
        let mainPlane = SCNPlane(width: physicalWidth, height: physicalHeight)
        mainPlane.firstMaterial?.colorBufferWriteMask = .alpha
        // Create a SceneKit root node with the plane geometry to attach to the scene graph
        // This node will hold the virtual UI in place
        let mainNode = SCNNode(geometry: mainPlane)
        mainNode.eulerAngles.x = -.pi / 2
        mainNode.renderingOrder = -1
        mainNode.opacity = 1
        // Add the plane visualization to the scene
        node.addChildNode(mainNode)
        // Perform a quick animation to visualize the plane on which the image was detected.
        // We want to let our users know that the app is responding to the tracked image.
        self.highlightDetection(on: mainNode, width: physicalWidth, height: physicalHeight, completionHandler: {
            // Introduce virtual content
            self.displayDetailView(on: mainNode, xOffset: physicalWidth)
            // Animate the WebView to the right
            self.displayWebView(on: mainNode, xOffset: physicalWidth)
        })
    }
}
Any help is appreciated.
The effect you are trying to achieve is to play a video on an SCNNode. To do this, in your renderer function, create an AVPlayer and use it to create an SKVideoNode. Then create an SKScene and add the SKVideoNode to it. Finally, set that SKScene as the texture of your plane.
So, in the context of your code above:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    updateQueue.async {
        let physicalWidth = imageAnchor.referenceImage.physicalSize.width
        let physicalHeight = imageAnchor.referenceImage.physicalSize.height
        // Create a plane geometry to visualize the initial position of the detected image
        let mainPlane = SCNPlane(width: physicalWidth, height: physicalHeight)
        mainPlane.firstMaterial?.colorBufferWriteMask = .alpha
        // Create a SceneKit root node with the plane geometry to attach to the scene graph
        // This node will hold the virtual UI in place
        let mainNode = SCNNode(geometry: mainPlane)
        mainNode.eulerAngles.x = -.pi / 2
        mainNode.renderingOrder = -1
        mainNode.opacity = 1
        // Add the plane visualization to the scene
        node.addChildNode(mainNode)
        // Perform a quick animation to visualize the plane on which the image was detected.
        // We want to let our users know that the app is responding to the tracked image.
        self.highlightDetection(on: mainNode, width: physicalWidth, height: physicalHeight, completionHandler: {
            // Introduce virtual content
            self.displayDetailView(on: mainNode, xOffset: physicalWidth)
            // Animate the WebView to the right
            self.displayWebView(on: mainNode, xOffset: physicalWidth)
            // setup AV player & create SKVideoNode from avPlayer
            let videoURL = URL(fileURLWithPath: Bundle.main.path(forResource: videoAssetName, ofType: videoAssetExtension)!)
            let player = AVPlayer(url: videoURL)
            player.actionAtItemEnd = .none
            videoPlayerNode = SKVideoNode(avPlayer: player)
            // setup SKScene to hold the SKVideoNode
            let skSceneSize = CGSize(width: physicalWidth, height: physicalHeight)
            let skScene = SKScene(size: skSceneSize)
            skScene.addChild(videoPlayerNode)
            videoPlayerNode.position = CGPoint(x: skScene.size.width / 2, y: skScene.size.height / 2)
            videoPlayerNode.size = skScene.size
            // Set the SKScene as the texture for the main plane
            mainPlane.firstMaterial?.diffuse.contents = skScene
            mainPlane.firstMaterial?.isDoubleSided = true
            // setup node to auto remove itself upon completion
            NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: player.currentItem, queue: nil, using: { (_) in
                DispatchQueue.main.async {
                    if self.debug { NSLog("video completed") }
                    // do something when the video ends
                }
            })
            // play the video node
            videoPlayerNode.play()
        })
    }
}
For 2022...
This is now (conceptually) simple:
guard let url = URL(string: " ... ") else { return }
let vid = AVPlayer(url: url)
someNode.geometry?.firstMaterial?.diffuse.contents = vid
vid.play()
It's that easy.
You'll spend a lot of time tweaking the mesh, buffers, etc. to get a good result, depending on what you're doing.
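For a self-contained starting point, here is a hedged sketch of that approach; the bundled file name ("clip.mp4") and the plane size are illustrative assumptions:

import SceneKit
import AVFoundation

func addVideoPlane(to parent: SCNNode) {
    // Hypothetical bundled asset; substitute your own URL or resource.
    guard let url = Bundle.main.url(forResource: "clip", withExtension: "mp4") else { return }
    let player = AVPlayer(url: url)

    let plane = SCNPlane(width: 0.3, height: 0.2)
    plane.firstMaterial?.diffuse.contents = player   // SceneKit accepts an AVPlayer directly as material contents
    plane.firstMaterial?.isDoubleSided = true

    parent.addChildNode(SCNNode(geometry: plane))
    player.play()
}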

Add plane nodes to ARKit scene vertically and horizontally

I want my app to lay nodes on a surface, which can be vertical or horizontal. However, the node is always vertical. Here's a picture; these nodes aren't placed correctly.
@objc func didTapAddButton() {
    let screenCentre = CGPoint(x: self.sceneView.bounds.midX, y: self.sceneView.bounds.midY)
    let arHitTestResults: [ARHitTestResult] = sceneView.hitTest(screenCentre, types: [.featurePoint]) // Alternatively, we could use '.existingPlaneUsingExtent' for more grounded hit-test points.
    if let closestResult = arHitTestResults.first {
        let transform: matrix_float4x4 = closestResult.worldTransform
        let worldCoord: SCNVector3 = SCNVector3Make(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
        if let node = createNode() {
            sceneView.scene.rootNode.addChildNode(node)
            node.position = worldCoord
        }
    }
}
func createNode() -> SCNNode? {
    guard let theView = myView else {
        print("Failed to load view")
        return nil
    }
    let plane = SCNPlane(width: 0.06, height: 0.06)
    let imageMaterial = SCNMaterial()
    imageMaterial.isDoubleSided = true
    imageMaterial.diffuse.contents = theView.asImage()
    plane.materials = [imageMaterial]
    let node = SCNNode(geometry: plane)
    return node
}
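(asImage() is a custom helper that isn't shown in the question; a typical implementation, for reference, renders the view's layer into a UIImage:)

extension UIView {
    // Snapshot the view's current contents into a UIImage.
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { context in
            layer.render(in: context.cgContext)
        }
    }
}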
The app is able to see the ground, but the nodes are still parallel to us. How can I fix this?
Edit: I figured out I can use node.eulerAngles.x = -.pi / 2; this makes sure the plane is laid down horizontally, but it stays horizontal on vertical surfaces as well.
Solved! Here's how to make the view "parallel" to the camera at all times:
let yourNode = SCNNode()
let billboardConstraint = SCNBillboardConstraint()
billboardConstraint.freeAxes = [.X, .Y, .Z]
yourNode.constraints = [billboardConstraint]
Or
guard let currentFrame = sceneView.session.currentFrame else {return nil}
let camera = currentFrame.camera
let transform = camera.transform
var translationMatrix = matrix_identity_float4x4
translationMatrix.columns.3.z = -0.1
let modifiedMatrix = simd_mul(transform, translationMatrix)
let node = SCNNode(geometry: plane)
node.simdTransform = modifiedMatrix
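Both snippets above keep the node facing the camera. If you instead want the card to lie on whatever surface was hit (flat on floors, flush against walls), a sketch under the assumption that the plane hit-test transform's y-axis follows the surface normal could look like this:

@objc func didTapAddButton() {
    let screenCentre = CGPoint(x: sceneView.bounds.midX, y: sceneView.bounds.midY)
    // Hit-test against detected planes so the result carries an orientation, not just a point.
    guard let result = sceneView.hitTest(screenCentre, types: .existingPlaneUsingExtent).first,
          let card = createNode() else { return }
    let holder = SCNNode()
    holder.simdTransform = result.worldTransform   // position plus the plane's orientation
    // SCNPlane lies in its node's local xy plane, so tip it into the surface.
    card.eulerAngles.x = -.pi / 2
    holder.addChildNode(card)
    sceneView.scene.rootNode.addChildNode(holder)
}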

ARKit: using a "UIView" as the contents of an "SCNNode"

I have an odd problem, or maybe what I am doing is odd.
I have an ARKit app that displays an SCNNode with a data view embedded in it when a barcode is detected. When the data view is tapped, I want to detach it from the node and add it as a subview of the main view. When tapped again, I want to re-attach it to the SCNNode.
I found this code that does at least part of what I want.
My app works well the first time. The data appears in the SCNNode:
{
    let transform = anchor.transform
    let label = SimpleValueViewController(nibName: "SimpleValueViewController", bundle: nil)
    let plane = SCNPlane(width: self.sceneView.bounds.width/2000, height: self.sceneView.bounds.height/2000/2) //was sceneView
    label.update(dataItem.title!, andValue: dataItem.value!, withTrend: dataItem.trend)
    label.delegate = self
    plane.firstMaterial?.blendMode = .alpha
    plane.firstMaterial?.diffuse.contents = label.view //self.contentController?.view
    let planeNode = SCNNode(geometry: plane)
    planeNode.opacity = 0.75
    planeNode.simdPosition = float3(transform.columns.3.x + (base * count), 0, transform.columns.3.z)
    node.addChildNode(planeNode)
    childControllers[label] = planeNode
    label.myNode = planeNode
    label.parentNode = node
    if "Speed" == anchorLabels[anchor.identifier]! {
        let constraint = SCNBillboardConstraint()
        planeNode.constraints = [constraint]
    }
    count += 1
}
SimpleValueViewController has a tap recognizer that toggles between being a subview and being inside the node.
Here is the problem.
It comes up fine. The view is inside the node.
When tapped it gets attached to the main view.
When tapped again the node just shows a white square and taps no longer work.
I finally gave up and now recreate a new view controller each time, but it still occasionally fails with only a white square.
Here is the code to detach/attach. It is pretty messy, as I have been pounding on it for a while:
func detach(_ vc: Detachable) {
    if let controller = vc as? SimpleValueViewController {
        controller.view.removeFromSuperview()
        self.childControllers.removeValue(forKey: controller)
        let node = controller.myNode
        let parent = controller.parentNode
        let newController = SimpleValueViewController(nibName: "SimpleValueViewController", bundle: nil)
        if let v = newController.view {
            print("View: \(v)")
            newController.myNode = node
            newController.parentNode = parent
            newController.update(controller.titleString!, andValue: controller.valueString!, withTrend: controller.trendString!)
            newController.delegate = self
            newController.scaley = vc.scaley
            newController.scalex = vc.scalex
            newController.refreshView()
            let plane = node?.geometry as! SCNPlane
            plane.firstMaterial?.blendMode = .alpha
            plane.firstMaterial?.diffuse.contents = v
            // newController.viewWillAppear(false)
            newController.parentNode?.addChildNode(newController.myNode!)
            newController.attached = false
            self.childControllers[newController] = node
        }
    } else {
        print("Not a known type of controller")
    }
}
func attach(_ vc: Detachable) {
    DispatchQueue.main.async {
        if let controller = vc as? SimpleValueViewController {
            self.childControllers.removeValue(forKey: controller)
            let newController = SimpleValueViewController(nibName: "SimpleValueViewController", bundle: nil)
            let node = controller.myNode
            let parent = controller.parentNode
            let scaleX = vc.scalex! / self.view.frame.size.width
            let scaleY = vc.scaley! / self.view.frame.size.height
            let scale = min(scaleX, scaleY)
            let transform = CGAffineTransform(scaleX: scale, y: scale)
            newController.view.transform = transform
            newController.myNode = node
            newController.parentNode = parent
            newController.update(controller.titleString!, andValue: controller.valueString!, withTrend: controller.trendString!)
            newController.delegate = self
            newController.scaley = vc.scaley
            newController.scalex = vc.scalex
            node?.removeFromParentNode()
            self.childControllers[newController] = node
            newController.attached = true
            self.view.addSubview(newController.view)
        } else {
            print("Not a known type of controller to attach")
        }
    }
}
I was thinking it was a timing issue, since loading a view from a nib is done lazily, so I put in a one-second sleep after SimpleValueViewController(nibName: "SimpleValueViewController", bundle: nil), but that did not help.
Is there anything else I can try? Am I way off base? I would like to avoid recreating the view controller every time the data is toggled.
Thanks in advance.
Render functions:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // updateQueue.async {
    // Loop through the near and far data and add stuff
    let data = "Speed" == anchorLabels[anchor.identifier]! ? nearData : farData
    let base = "Speed" == anchorLabels[anchor.identifier]! ? 0.27 : 0.2 as Float
    print("Getting Node for: " + self.anchorLabels[anchor.identifier]!)
    var count = 0.0 as Float
    for dataItem in data {
        let transform = anchor.transform
        let label = SimpleValueViewController(nibName: "SimpleValueViewController", bundle: nil)
        let plane = SCNPlane(width: self.sceneView.bounds.width/2000, height: self.sceneView.bounds.height/2000/2) //was sceneView
        label.update(dataItem.title!, andValue: dataItem.value!, withTrend: dataItem.trend)
        label.delegate = self
        plane.firstMaterial?.blendMode = .alpha
        plane.firstMaterial?.diffuse.contents = label.view //self.contentController?.view
        let planeNode = SCNNode(geometry: plane)
        planeNode.opacity = 0.75
        planeNode.simdPosition = float3(transform.columns.3.x + (base * count), 0, transform.columns.3.z)
        node.addChildNode(planeNode)
        childControllers[label] = planeNode
        label.myNode = planeNode
        label.parentNode = node
        if "Speed" == anchorLabels[anchor.identifier]! {
            let constraint = SCNBillboardConstraint()
            planeNode.constraints = [constraint]
        }
        count += 1
    }
}
And the update method:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard currentBuffer == nil else {
        return
    }
    self.updateFocusSquare(isObjectVisible: false)
    // Retain the image buffer for Vision processing.
    self.currentBuffer = sceneView.session.currentFrame?.capturedImage
    classifyCurrentImage()
}
classifyCurrentImage() is where the barcode recognizer lives. If a barcode is found, it adds an anchor at that spot in the scene.
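For reference, a hedged sketch of what such a classifyCurrentImage() could look like; the Vision plumbing here is illustrative, not the asker's actual code (requires import Vision):

func classifyCurrentImage() {
    guard let buffer = currentBuffer else { return }
    let request = VNDetectBarcodesRequest { [weak self] request, _ in
        defer { self?.currentBuffer = nil }   // let the next frame through
        guard let self = self,
              let barcode = request.results?.first as? VNBarcodeObservation,
              let frame = self.sceneView.session.currentFrame else { return }
        DispatchQueue.main.async {
            // Vision's boundingBox uses a bottom-left origin; ARFrame.hitTest expects
            // normalized image coordinates with a top-left origin, hence the y flip.
            let point = CGPoint(x: barcode.boundingBox.midX, y: 1 - barcode.boundingBox.midY)
            if let hit = frame.hitTest(point, types: .featurePoint).first {
                self.sceneView.session.add(anchor: ARAnchor(transform: hit.worldTransform))
            }
        }
    }
    try? VNImageRequestHandler(cvPixelBuffer: buffer, options: [:]).perform([request])
}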

ARKit ImageDetection - get reference image when tapping 3D object

I want to create an app that detects reference images; when one is detected, a 3D object (an SCNScene) appears (multiple images/objects in one camera view are possible). This is already working.
Now, when the user taps on the object, I need to know the file name of the referenceImage, because the image should be shown.
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!
    private var planeNode: SCNNode?
    private var imageNode: SCNNode?
    private var animationInfo: AnimationInfo?
    private var currentMediaName: String?

    override func viewDidLoad() {
        super.viewDidLoad()
        let scene = SCNScene()
        sceneView.scene = scene
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Load reference images to look for from "AR Resources" folder
        guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("Missing expected asset catalog resources.")
        }
        // Create a session configuration
        let configuration = ARWorldTrackingConfiguration()
        // Add previously loaded images to ARScene configuration as detectionImages
        configuration.detectionImages = referenceImages
        // Run the view's session
        sceneView.session.run(configuration)
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(rec:)))
        // Add recognizer to sceneview
        sceneView.addGestureRecognizer(tap)
    }

    // Method called when tapped
    @objc func handleTap(rec: UITapGestureRecognizer) {
        // Get the reference image name
        loadReferenceImage()
        if rec.state == .ended {
            let location: CGPoint = rec.location(in: sceneView)
            let hits = self.sceneView.hitTest(location, options: nil)
            if !hits.isEmpty {
                let tappedNode = hits.first?.node
            }
        }
    }

    func loadReferenceImage() {
        print("CLICK")
    }

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else {
            return
        }
        currentMediaName = imageAnchor.referenceImage.name
        // 1. Load plane's scene.
        let planeScene = SCNScene(named: "art.scnassets/plane.scn")!
        let planeNode = planeScene.rootNode.childNode(withName: "planeRootNode", recursively: true)!
        // 2. Calculate size based on planeNode's bounding box.
        let (min, max) = planeNode.boundingBox
        let size = SCNVector3Make(max.x - min.x, max.y - min.y, max.z - min.z)
        // 3. Calculate the ratio of difference between real image and object size.
        // Ignore Y axis because it will be pointed out of the image.
        let widthRatio = Float(imageAnchor.referenceImage.physicalSize.width) / size.x
        let heightRatio = Float(imageAnchor.referenceImage.physicalSize.height) / size.z
        // Pick smallest value to be sure that object fits into the image.
        let finalRatio = [widthRatio, heightRatio].min()!
        // 4. Set transform from imageAnchor data.
        planeNode.transform = SCNMatrix4(imageAnchor.transform)
        // 5. Animate appearance by scaling model from 0 to previously calculated value.
        let appearanceAction = SCNAction.scale(to: CGFloat(finalRatio), duration: 0.4)
        appearanceAction.timingMode = .easeOut
        // Set initial scale to 0.
        planeNode.scale = SCNVector3Make(0.0, 0.0, 0.0)
        // Add to root node.
        sceneView.scene.rootNode.addChildNode(planeNode)
        // Run the appearance animation.
        planeNode.runAction(appearanceAction)
        self.planeNode = planeNode
        self.imageNode = node
    }

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor, updateAtTime time: TimeInterval) {
        guard let imageNode = imageNode, let planeNode = planeNode else {
            return
        }
        // 1. Unwrap animationInfo. Calculate animationInfo if it is nil.
        guard let animationInfo = animationInfo else {
            refreshAnimationVariables(startTime: time,
                                      initialPosition: planeNode.simdWorldPosition,
                                      finalPosition: imageNode.simdWorldPosition,
                                      initialOrientation: planeNode.simdWorldOrientation,
                                      finalOrientation: imageNode.simdWorldOrientation)
            return
        }
        // 2. Calculate new animationInfo if image position or orientation changed.
        if !simd_equal(animationInfo.finalModelPosition, imageNode.simdWorldPosition) || animationInfo.finalModelOrientation != imageNode.simdWorldOrientation {
            refreshAnimationVariables(startTime: time,
                                      initialPosition: planeNode.simdWorldPosition,
                                      finalPosition: imageNode.simdWorldPosition,
                                      initialOrientation: planeNode.simdWorldOrientation,
                                      finalOrientation: imageNode.simdWorldOrientation)
        }
        // 3. Calculate interpolation based on passedTime/totalTime ratio.
        let passedTime = time - animationInfo.startTime
        var t = min(Float(passedTime / animationInfo.duration), 1)
        // Applying curve function to time parameter to achieve "ease out" timing
        t = sin(t * .pi * 0.5)
        // 4. Calculate and set new model position and orientation.
        let f3t = simd_make_float3(t, t, t)
        planeNode.simdWorldPosition = simd_mix(animationInfo.initialModelPosition, animationInfo.finalModelPosition, f3t)
        planeNode.simdWorldOrientation = simd_slerp(animationInfo.initialModelOrientation, animationInfo.finalModelOrientation, t)
        // planeNode.simdWorldOrientation = imageNode.simdWorldOrientation
        guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
        let name = currentImageAnchor.referenceImage.name!
        print("TEST")
        print(name)
    }

    func refreshAnimationVariables(startTime: TimeInterval, initialPosition: float3, finalPosition: float3, initialOrientation: simd_quatf, finalOrientation: simd_quatf) {
        let distance = simd_distance(initialPosition, finalPosition)
        // Average speed of movement is 0.15 m/s.
        let speed = Float(0.15)
        // Total time is calculated as distance/speed. Min time is set to 0.1s and max is set to 2s.
        let animationDuration = Double(min(max(0.1, distance / speed), 2))
        // Store animation information for later usage.
        animationInfo = AnimationInfo(startTime: startTime,
                                      duration: animationDuration,
                                      initialModelPosition: initialPosition,
                                      finalModelPosition: finalPosition,
                                      initialModelOrientation: initialOrientation,
                                      finalModelOrientation: finalOrientation)
    }
}
Since your ARReferenceImage is stored within the Assets.xcassets catalogue you can simply load your image using the following initialization method of UIImage:
init?(named name: String)
For your information: if this is the first time the image is being loaded, the method looks for an image with the specified name in the application's main bundle. For PNG images, you may omit the filename extension. For all other file formats, always include the filename extension.
In my example I have an ARReferenceImage named TargetCard.
So, to load it as a UIImage and then apply it to an SCNNode, or display it in screen space, you could do something like this:
//1. Load The Image Onto An SCNPlaneGeometry
if let image = UIImage(named: "TargetCard") {
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: 1, height: 1)
    planeGeometry.firstMaterial?.diffuse.contents = image
    planeNode.geometry = planeGeometry
    planeNode.position = SCNVector3(0, 0, -1.5)
    self.augmentedRealityView.scene.rootNode.addChildNode(planeNode)
}

//2. Load The Image Into A UIImageView
if let image = UIImage(named: "TargetCard") {
    let imageView = UIImageView(frame: CGRect(x: 10, y: 10, width: 300, height: 150))
    imageView.image = image
    imageView.contentMode = .scaleAspectFill
    self.view.addSubview(imageView)
}
In your context:
Each SCNNode has a name property:
var name: String? { get set }
As such I suggest that when you create content in regard to your ARImageAnchor you provide it with the name of your ARReferenceImage e.g:
//---------------------------
// MARK: - ARSCNViewDelegate
//---------------------------

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        //1. Check We Have Detected An ARImageAnchor & Check It's The One We Want
        guard let validImageAnchor = anchor as? ARImageAnchor,
              let targetName = validImageAnchor.referenceImage.name else { return }
        //2. Create An SCNNode With An SCNPlaneGeometry
        let nodeToAdd = SCNNode()
        let planeGeometry = SCNPlane(width: 1, height: 1)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
        nodeToAdd.geometry = planeGeometry
        //3. Set Its Name To That Of Our ARReferenceImage
        nodeToAdd.name = targetName
        //4. Add It To The Hierarchy
        node.addChildNode(nodeToAdd)
    }
}
Then it is easy to get a reference to the image later, e.g.:
/// Checks To See If We Have Hit A Named SCNNode
///
/// - Parameter gesture: UITapGestureRecognizer
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)
    //2. Perform An SCNHitTest To See If We Have Tapped A Valid SCNNode & See If It Is Named
    guard let hitTestForNode = self.augmentedRealityView.hitTest(currentTouchLocation, options: nil).first?.node,
          let nodeName = hitTestForNode.name else { return }
    //3. Load The Reference Image
    self.loadReferenceImage(nodeName, inAR: true)
}
/// Loads A Matching Image For The Identified ARReferenceImage Name
///
/// - Parameters:
///   - fileName: String
///   - inAR: Bool
func loadReferenceImage(_ fileName: String, inAR: Bool) {
    if inAR {
        //1. Load The Image Onto An SCNPlaneGeometry
        if let image = UIImage(named: fileName) {
            let planeNode = SCNNode()
            let planeGeometry = SCNPlane(width: 1, height: 1)
            planeGeometry.firstMaterial?.diffuse.contents = image
            planeNode.geometry = planeGeometry
            planeNode.position = SCNVector3(0, 0, -1.5)
            self.augmentedRealityView.scene.rootNode.addChildNode(planeNode)
        }
    } else {
        //2. Load The Image Into A UIImageView
        if let image = UIImage(named: fileName) {
            let imageView = UIImageView(frame: CGRect(x: 10, y: 10, width: 300, height: 150))
            imageView.image = image
            imageView.contentMode = .scaleAspectFill
            self.view.addSubview(imageView)
        }
    }
}
Important:
One thing I have just discovered is that if we load the ARReferenceImage, e.g.:
let image = UIImage(named: "TargetCard")
then the image is displayed in grayscale, which is probably not what you want!
As such, what you probably need to do is copy the ARReferenceImage into the assets catalogue and give it a prefix, e.g. ColourTargetCard...
Then you would need to change the function slightly by naming your nodes using a prefix, e.g.:
nodeToAdd.name = "Colour\(targetName)"
Hope it helps...

Coordinates of ARImageAnchor transform matrix are way too different from the ARPlaneAnchor ones

I am doing this simple thing:
Vertical plane detection
Image recognition on a vertical plane
The image hangs on the detected plane (on my wall). In both cases I implement the renderer:didAddNode:forAnchor: function from ARSCNViewDelegate. I stand in the same place for the vertical plane detection and the image recognition.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let shipScene = SCNScene(named: "ship.scn"), let shipNode = shipScene.rootNode.childNode(withName: "ship", recursively: false) else { return }
    shipNode.position = SCNVector3(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
    sceneView.scene.rootNode.addChildNode(shipNode)
    print(anchor.transform)
}
In the case of vertical plane detection the anchor will be an ARPlaneAnchor; in the case of image recognition it will be an ARImageAnchor.
Why are the transform matrices of those two anchors so different? I'm printing anchor.transform and I get these results:
1.
simd_float4x4([
    [0.941312, 0.0, -0.337538, 0.0],
    [0.336284, -0.0861278, 0.937814, 0.0],
    [-0.0290714, -0.996284, -0.0810731, 0.0],
    [0.191099, 0.172432, -1.14543, 1.0]
])
2.
simd_float4x4([
    [0.361231, 0.10894, 0.926093, 0.0],
    [-0.919883, -0.121052, 0.373049, 0.0],
    [0.152743, -0.986651, 0.0564843, 0.0],
    [75.4418, 10.9618, -14.3788, 1.0]
])
So if I want to place a 3D object on the detected vertical plane, I can simply use [x = 0.191099, y = 0.172432, z = -1.14543] as the position of my node (myNode) and then add it to the scene with sceneView.scene.rootNode.addChildNode(myNode). But if I want to place a 3D object at the detected image's anchor, I cannot use [x = 75.4418, y = 10.9618, z = -14.3788].
What should I do to place a 3D object on the detected image's anchor? I really don't understand the transform matrix of the ARImageAnchor.
Here is an example for you in which I use the func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) method:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
    let x = currentImageAnchor.transform
    print(x.columns.3.x, x.columns.3.y, x.columns.3.z)
    //2. Get The Target's Name
    let name = currentImageAnchor.referenceImage.name!
    //3. Get The Target's Width & Height In Meters
    let width = currentImageAnchor.referenceImage.physicalSize.width
    let height = currentImageAnchor.referenceImage.physicalSize.height
    print("""
    Image Name = \(name)
    Image Width = \(width)
    Image Height = \(height)
    """)
    //4. Create A Plane Geometry To Cover The ARImageAnchor
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: width, height: height)
    planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
    planeNode.opacity = 0.25
    planeNode.geometry = planeGeometry
    //5. Rotate The PlaneNode To Horizontal
    planeNode.eulerAngles.x = -.pi / 2
    //The Node Is Centered In The Anchor (0,0,0)
    node.addChildNode(planeNode)
    //6. Create An SCNBox
    let boxNode = SCNNode()
    let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
    //7. Create A Different Colour For Each Face
    let faceColours = [UIColor.red, UIColor.green, UIColor.blue, UIColor.cyan, UIColor.yellow, UIColor.gray]
    var faceMaterials = [SCNMaterial]()
    //8. Apply It To Each Face (all six faces of the box)
    for face in 0 ..< 6 {
        let material = SCNMaterial()
        material.diffuse.contents = faceColours[face]
        faceMaterials.append(material)
    }
    boxGeometry.materials = faceMaterials
    boxNode.geometry = boxGeometry
    //9. Set The Box's Position So It Sits On The Plane (y = box.height / 2)
    boxNode.position = SCNVector3(0, 0.05, 0)
    //10. Add The Box To The Node
    node.addChildNode(boxNode)
}
From my understanding (I could of course be wrong), you would know that your placement area is the width and height of the referenceImage.physicalSize, which is expressed in metres:
let width = currentImageAnchor.referenceImage.physicalSize.width
let height = currentImageAnchor.referenceImage.physicalSize.height
As such, you would need to scale your content (if needed) to fit within these boundaries, assuming you want it to appear to overlay the image.
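A hedged sketch of that scaling step, mirroring the bounding-box approach from the earlier question (modelNode stands in for whatever content you load):

// Scale an arbitrary model so it fits inside the image's physical bounds.
let (minBounds, maxBounds) = modelNode.boundingBox
let widthRatio = Float(width) / (maxBounds.x - minBounds.x)
let heightRatio = Float(height) / (maxBounds.z - minBounds.z)
// Pick the smaller ratio so the model fits in both dimensions.
let fitScale = min(widthRatio, heightRatio)
modelNode.scale = SCNVector3(fitScale, fitScale, fitScale)
node.addChildNode(modelNode)   // parent to the anchor's node so it stays on the image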
