Measure horizontal plane surface ARKit [closed] - ios

How can I measure a horizontal plane surface using ARKit and SceneKit before placing objects? I want to have a model of the room before placing objects. Thanks in advance!

Knowing the size of a room in advance is tricky...
Measuring the size of any detected planes, however, isn't:
Each time a horizontal or vertical surface is detected (assuming you have plane detection enabled), an ARPlaneAnchor is generated:
When you run a world-tracking AR session whose planeDetection option is enabled, the session automatically adds to its list of anchors an ARPlaneAnchor object for each flat surface ARKit detects with the back-facing camera. Each plane anchor provides information about the estimated position and shape of the surface.
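For reference, a minimal sketch (not from the original answer) of enabling plane detection before the callbacks below fire; sceneView here is an assumed ARSCNView outlet:
let configuration = ARWorldTrackingConfiguration()

// Detect both horizontal and vertical surfaces (vertical detection requires iOS 11.3+).
configuration.planeDetection = [.horizontal, .vertical]

// `sceneView` is an assumed ARSCNView outlet; its delegate receives the callbacks below.
sceneView.delegate = self
sceneView.session.run(configuration)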
Each detected plane is reported in the following delegate callback:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) { }
You can therefore get the width and height of the ARPlaneAnchor like so:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. Get The Current ARPlaneAnchor
    guard let anchor = anchor as? ARPlaneAnchor else { return }

    //2. Log The Initial Width & Height
    print("""
    Initial Width = \(anchor.extent.x)
    Initial Height = \(anchor.extent.z)
    """)
}
There is one issue with this initial solution, however, in that an ARPlaneAnchor gets updated (e.g. its size changes) via the following callback:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) { }
As such, if you want to track the updated size of an ARPlaneAnchor you need to take this into consideration.
Let's look at how this could be done:
First we create our own SCNNode subclass called PlaneNode, which will return the size of the plane even when it is updated.
Please note that you don't need to create a subclass to achieve the same results, although I have done so in order that it can be easily reused:
class PlaneNode: SCNNode {

    let DEFAULT_IMAGE: String = "defaultGrid"
    let NAME: String = "PlaneNode"

    var planeGeometry: SCNPlane
    var planeAnchor: ARPlaneAnchor

    var widthInfo: String!
    var heightInfo: String!
    var alignmentInfo: String!

    //---------------
    //MARK: LifeCycle
    //---------------

    /// Initialization
    ///
    /// - Parameters:
    ///   - anchor: The ARPlaneAnchor to visualise
    ///   - node: The SCNNode the plane is added to
    ///   - image: Whether to use the default grid image from the Assets bundle
    ///   - identifier: An ID used to name the node
    ///   - opacity: The opacity of the plane (defaults to 0.25)
    init(anchor: ARPlaneAnchor, node: SCNNode, image: Bool, identifier: Int, opacity: CGFloat = 0.25) {

        //1. Create The SCNPlane Geometry
        self.planeAnchor = anchor
        self.planeGeometry = SCNPlane(width: CGFloat(anchor.extent.x), height: CGFloat(anchor.extent.z))
        let planeNode = SCNNode(geometry: planeGeometry)

        super.init()

        //2. If The Image Bool Is True We Use The Default Image From The Assets Bundle
        let planeMaterial = SCNMaterial()

        if image {
            planeMaterial.diffuse.contents = UIImage(named: DEFAULT_IMAGE)
        } else {
            planeMaterial.diffuse.contents = UIColor.cyan
        }

        //3. Set The Geometry's Contents
        self.planeGeometry.materials = [planeMaterial]

        //4. Set The Position Of The PlaneNode
        planeNode.simdPosition = float3(self.planeAnchor.center.x, 0, self.planeAnchor.center.z)

        //5. Rotate It On Its X Axis
        planeNode.eulerAngles.x = -.pi / 2

        //6. Set The Opacity Of The Node
        planeNode.opacity = opacity

        //7. Add The PlaneNode
        node.addChildNode(planeNode)

        //8. Set The Node's ID
        node.name = "\(NAME) \(identifier)"
    }

    required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    /// Updates The Size Of The Plane As & When The ARPlaneAnchor Has Been Updated
    ///
    /// - Parameter anchor: ARPlaneAnchor
    func update(_ anchor: ARPlaneAnchor) {
        self.planeAnchor = anchor
        self.planeGeometry.width = CGFloat(anchor.extent.x)
        self.planeGeometry.height = CGFloat(anchor.extent.z)
        self.position = SCNVector3Make(anchor.center.x, 0.01, anchor.center.z)
        returnPlaneInfo()
    }

    //-----------------------
    //MARK: Plane Information
    //-----------------------

    /// Logs & Stores The Size Of The ARPlaneAnchor & Its Alignment
    func returnPlaneInfo() {

        let widthOfPlane = self.planeAnchor.extent.x
        let heightOfPlane = self.planeAnchor.extent.z

        var planeAlignment: String!

        switch planeAnchor.alignment {
        case .horizontal:
            planeAlignment = "Horizontal"
        case .vertical:
            planeAlignment = "Vertical"
        @unknown default:
            // Covers alignment cases added in newer SDKs.
            planeAlignment = "Undetermined"
        }

        #if DEBUG
        print("""
        Width Of Plane = \(String(format: "%.2fm", widthOfPlane))
        Height Of Plane = \(String(format: "%.2fm", heightOfPlane))
        Plane Alignment = \(planeAlignment)
        """)
        #endif

        self.widthInfo = String(format: "%.2fm", widthOfPlane)
        self.heightInfo = String(format: "%.2fm", heightOfPlane)
        self.alignmentInfo = planeAlignment
    }
}
Having created our subclass we then need to use it within our ViewController. You will normally get more than one ARPlaneAnchor, however in this example we just assume there will be one.
So we will create a variable which will reference our PlaneNode:
var planeNode: PlaneNode?
Then in the ARSCNViewDelegate we will create our PlaneNode like so:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. Get The Current ARPlaneAnchor
    guard let anchor = anchor as? ARPlaneAnchor else { return }

    //2. Create The PlaneNode
    if planeNode == nil {
        planeNode = PlaneNode(anchor: anchor, node: node, image: true, identifier: 0, opacity: 1)
        node.addChildNode(planeNode!)
        planeNode?.name = String("Detected Plane")
    }
}
Then all you need to do is track the updating of the PlaneNode, e.g.:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let anchor = anchor as? ARPlaneAnchor, let existingPlane = planeNode else { return }
    existingPlane.update(anchor)
}
If all goes according to plan, you should see something like this in your console log:
Width Of Plane = 0.07m
Height Of Plane = 0.15m
Plane Alignment = Optional("Horizontal")
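Since the goal is to measure the surface, you could also derive a rough area from these values. A minimal sketch (an estimate only, because the extent is the plane's bounding rectangle rather than its exact outline):
func estimatedArea(of anchor: ARPlaneAnchor) -> Float {
    // extent.x and extent.z are the plane's width and length in metres;
    // their product is the area of the bounding rectangle (an estimate).
    return anchor.extent.x * anchor.extent.z
}
You could call this from the update(_:) method above to keep a running estimate as the plane grows.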
Hopefully this is more than enough to get you started...

Related

Converting ARAnchors to SCNGeometry with texture image is stretching

I am using world tracking in ARKit and converting ARAnchors to SCNNodes to display them later using a scene view. Here is the code for adding a new anchor; I add a new node for each anchor that is added.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let anchor = anchor as? ARMeshAnchor,
          let frame = sceneView.session.currentFrame else { return nil }
    let node = SCNNode()
    let geometry = scanGeometory(frame: frame, anchor: anchor, node: node, needTexture: true, cameraImage: captureCamera())
    node.geometry = geometry
    return node
}
Up to this point everything works fine. The problem arises when anchors are updated and the geometry object is reconstructed, which causes an issue while applying the texture. Here is the anchor update callback code.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let frame = self.sceneView.session.currentFrame else { return }
    guard let anchor = anchor as? ARMeshAnchor else { return }
    let geometry = self.scanUpdatedGeometory(frame: frame, anchor: anchor, node: node, needTexture: true, cameraImage: captureCamera())
    node.geometry = geometry
}
Here is the scanGeometory code, where the image texture is applied to the geometry.
func scanGeometory(frame: ARFrame, anchor: ARMeshAnchor, node: SCNNode, needTexture: Bool = false, cameraImage: UIImage? = nil) -> SCNGeometry {

    let camera = frame.camera

    let geometry = SCNGeometry(geometry: anchor.geometry, camera: camera, modelMatrix: anchor.transform, needTexture: needTexture)

    if let image = cameraImage, needTexture {
        geometry.firstMaterial?.diffuse.contents = image
    } else {
        geometry.firstMaterial?.diffuse.contents = UIColor(red: 0.5, green: 1.0, blue: 0.0, alpha: 0.7)
    }
    node.geometry = geometry

    return geometry
}
When the scan geometry function is called in the didUpdate callback, it applies the current camera image to the updated anchor, regardless of whether that anchor is within the current frame or not. This causes a stretched texture outside the current frame/view. Here is the reference image; the stretched area on the left was not in view when I stopped the scan.
What could be a solution for applying the texture from the current view only and leaving the old one as is? Or is there another way to apply a texture with world tracking?

How to get current position of 3D object while animation is going on in ARKit?

On image marker detection, I want to play an animation of a walking guy within that marker's boundary only, using ARKit. For that I need to find out the position of that 3D object while it is walking on the marker. The animation was created using an external 3D authoring tool and saved in .scnassets as a .dae file. I have added the node and started the animation using the code below:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let imageAnchor = anchor as? ARImageAnchor {
        DispatchQueue.main.async {
            //let translation = imageAnchor.transform.columns.3
            let idleScene = SCNScene(named: "art.scnassets/WalkAround/WalkAround.dae")!

            // This node will be the parent of all the animation models
            let node1 = SCNNode()

            // Add all the child nodes to the parent node
            for child in idleScene.rootNode.childNodes {
                node1.addChildNode(child)
            }
            node1.scale = SCNVector3(0.2, 0.2, 0.2)

            let physicalSize = imageAnchor.referenceImage.physicalSize
            let size = CGSize(width: 500, height: 500)

            let skScene = SKScene(size: size)
            skScene.backgroundColor = .white

            let plane = SCNPlane(width: self.referenceImage!.physicalSize.width, height: self.referenceImage!.physicalSize.height)
            let material = SCNMaterial()
            material.lightingModel = SCNMaterial.LightingModel.constant
            material.isDoubleSided = true
            material.diffuse.contents = skScene
            plane.materials = [material]

            let rectNode = SCNNode(geometry: plane)
            rectNode.eulerAngles.x = -.pi / 2

            node.addChildNode(rectNode)
            node.addChildNode(node1)

            self.loadAnimation(withKey: "walking", sceneName: "art.scnassets/WalkAround/SambaArmtr", animationIdentifier: "SambaArmtr-1")
        }
    }
}

func loadAnimation(withKey: String, sceneName: String, animationIdentifier: String) {
    let sceneURL = Bundle.main.url(forResource: sceneName, withExtension: "dae")
    let sceneSource = SCNSceneSource(url: sceneURL!, options: nil)

    if let animationObject = sceneSource?.entryWithIdentifier(animationIdentifier, withClass: CAAnimation.self) {
        // The animation will only play once
        animationObject.repeatCount = 1
    }
}
I tried using node.presentation.position in both of the methods below to get the current position of the object.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval)
// Or
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor)
If I don't move the device after the animation has started, those methods are not called, and until then I keep getting the same position for the node. That's why I can't work out where I am going wrong. Is there any way to get the current position of an object while an animation is running in ARKit?
I don't know of any way to get the current frame within an embedded animation. That said, the animation embedded within a model uses Core Animation to run. You could use the CAAnimationDelegate to listen for the start/end events of your animation and run a timer. The timer would give you the best estimate of which frame the animation is on.
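A minimal sketch of that approach (the names walkingNode, animationObject and walkObserver are placeholders, not from the question's project):
import SceneKit

class WalkAnimationObserver: NSObject, CAAnimationDelegate {

    private var timer: Timer?
    weak var walkingNode: SCNNode?   // the animated character node

    func animationDidStart(_ anim: CAAnimation) {
        // Sample the node roughly 30 times per second while the animation runs.
        timer = Timer.scheduledTimer(withTimeInterval: 1.0 / 30.0, repeats: true) { [weak self] _ in
            guard let node = self?.walkingNode else { return }
            // The presentation node carries the in-flight animated values.
            print("Current animated position = \(node.presentation.worldPosition)")
        }
    }

    func animationDidStop(_ anim: CAAnimation, finished flag: Bool) {
        timer?.invalidate()
        timer = nil
    }
}

// Usage (hypothetical): set the delegate before adding the animation to the node.
// animationObject.delegate = walkObserver
// walkObserver.walkingNode = node1
// node1.addAnimation(animationObject, forKey: "walking")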
References:
SceneKit Animating Content Documentation: https://developer.apple.com/documentation/scenekit/animation/animating_scenekit_content
CAAnimationDelegate Documentation: https://developer.apple.com/documentation/quartzcore/caanimationdelegate

Convert coordinates in ARImageTrackingConfiguration

With ARKit 2 a new configuration was added: ARImageTrackingConfiguration, which according to the SDK can have better performance and enables some new use cases.
Experimenting with it on Xcode 10b2 (see https://forums.developer.apple.com/thread/103894 for how to fix the asset loading), my code now correctly receives the delegate call that an image was tracked, and thereafter a node is added. However, I could not find any documentation on where the coordinate system is located. Does anybody know how to place the node into the scene so that it overlays the detected image?
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async {
        if let imageAnchor = anchor as? ARImageAnchor {
            let imageNode = SCNNode.createImage(size: imageAnchor.referenceImage.physicalSize)
            imageNode.transform = // ... ???
            node.addChildNode(imageNode)
        }
    }
}
P.S.: in contrast to ARWorldTrackingConfiguration, the origin seems to constantly move around (most likely putting the camera at 0,0,0).
P.P.S.: SCNNode.createImage is a helper function without any coordinate calculations.
Assuming that I have read your question correctly, you can do something like the following:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {

    let nodeToReturn = SCNNode()

    //1. Check We Have Detected Our Image
    if let validImageAnchor = anchor as? ARImageAnchor {

        //2. Log The Information About The Anchor & Our Reference Image
        print("""
        ARImageAnchor Transform = \(validImageAnchor.transform)
        Name Of Detected Image = \(validImageAnchor.referenceImage.name)
        Width Of Detected Image = \(validImageAnchor.referenceImage.physicalSize.width)
        Height Of Detected Image = \(validImageAnchor.referenceImage.physicalSize.height)
        """)

        //3. Create An SCNPlane To Cover The Detected Image
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: validImageAnchor.referenceImage.physicalSize.width,
                                     height: validImageAnchor.referenceImage.physicalSize.height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.geometry = planeGeometry

        //a. Set The Opacity To Less Than 1 So We Can See The RealWorld Image
        planeNode.opacity = 0.5

        //b. Rotate The PlaneNode So It Matches The Rotation Of The Anchor
        planeNode.eulerAngles.x = -.pi / 2

        //4. Add It To The Node
        nodeToReturn.addChildNode(planeNode)

        //5. Add Something Such As An SCNScene To The Plane
        if let modelScene = SCNScene(named: "art.scnassets/model.scn"), let modelNode = modelScene.rootNode.childNodes.first {

            //a. Set The Model At The Center Of The Plane & Move It Forward A Tad
            modelNode.position = SCNVector3Zero
            modelNode.position.z = 0.15

            //b. Add It To The PlaneNode
            planeNode.addChildNode(modelNode)
        }
    }

    return nodeToReturn
}
Hopefully this will point you in the right direction...

ARKit Moving Horizontal Plane Visualization

I'm able to create a horizontal plane the moment I detect a flat surface, but I am wondering if there is a way to display it while moving around, similar to the feature point debugging option.
Essentially, I want the plane to move with you, not just add on to the existing plane.
Assuming I have interpreted your question correctly, you are essentially asking how to update the visualisation of a detected plane as it increases in size, etc.
There are a number of different ways that this can be achieved, but in this instance I will provide an example using a subclass of SCNNode which will update as the plane increases.
class PlaneNode: SCNNode {

    let DEFAULT_IMAGE: String = "defaultGrid"
    let NAME: String = "PlaneNode"

    var planeGeometry: SCNPlane
    var planeAnchor: ARPlaneAnchor

    var widthInfo: String!
    var heightInfo: String!
    var alignmentInfo: String!

    //---------------
    //MARK: LifeCycle
    //---------------

    /// Initialization
    ///
    /// - Parameters:
    ///   - anchor: The ARPlaneAnchor to visualise
    ///   - node: The SCNNode the plane is added to
    ///   - image: Whether to use the default grid image from the Assets bundle
    ///   - identifier: An ID used to name the node
    ///   - opacity: The opacity of the plane (defaults to 0.25)
    init(anchor: ARPlaneAnchor, node: SCNNode, image: Bool, identifier: Int, opacity: CGFloat = 0.25) {

        //1. Create The SCNPlane Geometry
        self.planeAnchor = anchor
        self.planeGeometry = SCNPlane(width: CGFloat(anchor.extent.x), height: CGFloat(anchor.extent.z))
        let planeNode = SCNNode(geometry: planeGeometry)

        super.init()

        //2. If The Image Bool Is True We Use The Default Image From The Assets Bundle
        let planeMaterial = SCNMaterial()

        if image {
            planeMaterial.diffuse.contents = UIImage(named: DEFAULT_IMAGE)
        } else {
            planeMaterial.diffuse.contents = UIColor.cyan
        }

        //3. Set The Geometry's Contents
        self.planeGeometry.materials = [planeMaterial]

        //4. Set The Position Of The PlaneNode
        planeNode.simdPosition = float3(self.planeAnchor.center.x, 0, self.planeAnchor.center.z)

        //5. Rotate It On Its X Axis
        planeNode.eulerAngles.x = -.pi / 2

        //6. Set The Opacity Of The Node
        planeNode.opacity = opacity

        //7. Add The PlaneNode
        node.addChildNode(planeNode)

        //8. Set The Node's ID
        node.name = "\(NAME) \(identifier)"
    }

    required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    /// Updates The Size Of The Plane As & When The ARPlaneAnchor Has Been Updated
    ///
    /// - Parameter anchor: ARPlaneAnchor
    func update(_ anchor: ARPlaneAnchor) {
        self.planeAnchor = anchor
        self.planeGeometry.width = CGFloat(anchor.extent.x)
        self.planeGeometry.height = CGFloat(anchor.extent.z)
        self.position = SCNVector3Make(anchor.center.x, 0.01, anchor.center.z)
    }
}
Having created our subclass we then need to use it within our ViewController.
You will normally get more than one ARPlaneAnchor and would probably store these in a dictionary, however in this example we just assume there will be one.
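For reference, a minimal sketch of that dictionary approach, keyed by each anchor's identifier (the names here are illustrative):
var planeNodes = [UUID: PlaneNode]()

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    // Create one PlaneNode per detected plane and remember it by its anchor's identifier.
    let plane = PlaneNode(anchor: planeAnchor, node: node, image: true, identifier: planeNodes.count, opacity: 1)
    node.addChildNode(plane)
    planeNodes[planeAnchor.identifier] = plane
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let existingPlane = planeNodes[planeAnchor.identifier] else { return }
    existingPlane.update(planeAnchor)
}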
Sticking with the single-plane case, we will create a variable which will reference our PlaneNode:
var planeNode: PlaneNode?
Then in the ARSCNViewDelegate we will create our PlaneNode like so:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. Get The Current ARPlaneAnchor
    guard let anchor = anchor as? ARPlaneAnchor else { return }

    //2. Create The PlaneNode
    if planeNode == nil {
        planeNode = PlaneNode(anchor: anchor, node: node, image: true, identifier: 0, opacity: 1)
        node.addChildNode(planeNode!)
        planeNode?.name = String("Detected Plane")
    }
}
Then all you need to do is track the updating of the PlaneNode so that the visualization keeps updating, e.g.:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let anchor = anchor as? ARPlaneAnchor, let existingPlane = planeNode else { return }
    existingPlane.update(anchor)
}

Physics object falls to infinity in SceneKit

I'm making an AR app that's a ball toss game using Swift's ARKit.
Click here for my repo
The point of the game is to toss the ball and make it land in the hat. However, whenever I try to toss the ball, it always appears to fall to infinity instead of landing in the hat or on the floor plane that I've created.
Here's the code for tossing the ball:
@IBAction func throwBall(_ sender: Any) {
    // Create ball
    let ball = SCNSphere(radius: 0.02)
    currentBallNode = SCNNode(geometry: ball)
    currentBallNode?.physicsBody = .dynamic()
    currentBallNode?.physicsBody?.allowsResting = true
    currentBallNode?.physicsBody?.isAffectedByGravity = true

    // Apply transformation
    let camera = sceneView.session.currentFrame?.camera
    let cameraTransform = camera?.transform
    currentBallNode?.simdTransform = cameraTransform!

    // Add current ball node to balls array
    balls.append(currentBallNode!)

    // Add ball node to root node
    sceneView.scene.rootNode.addChildNode(currentBallNode!)

    // Set force to be applied
    let force = simd_make_float4(0, 0, -3, 0)
    let rotatedForce = simd_mul(cameraTransform!, force)
    let vectorForce = SCNVector3(x: rotatedForce.x, y: rotatedForce.y, z: rotatedForce.z)

    // Apply force to ball
    currentBallNode?.physicsBody?.applyForce(vectorForce, asImpulse: true)
}
And here's the physics body setting for the floor:
Look at the screenshot below to get a better idea.
Never mind, I managed to resolve this by adding the following function:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor, planeAnchor.center == self.planeAnchor?.center || self.planeAnchor == nil else { return }

    // Set the floor's geometry to be the detected plane
    // (extent.z is the plane's depth; extent.y is only its negligible thickness)
    let floor = sceneView.scene.rootNode.childNode(withName: "floor", recursively: true)
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x), height: CGFloat(planeAnchor.extent.z))
    floor?.geometry = plane
}
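Note that the function above only assigns the floor's geometry; the collision itself relies on the floor node's static physics body, which in this project was configured in the scene editor (see the screenshot). A hedged sketch of setting that up in code instead, assuming the same "floor" node, might look like this:
if let floor = sceneView.scene.rootNode.childNode(withName: "floor", recursively: true),
   let floorGeometry = floor.geometry {
    // A static body never moves but still takes part in collisions,
    // so dynamic bodies such as the ball land on it rather than falling through.
    floor.physicsBody = SCNPhysicsBody(type: .static,
                                       shape: SCNPhysicsShape(geometry: floorGeometry, options: nil))
}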
