Unable to draw bounding box while detecting object with ARKit 2 - iOS

I am developing an iOS app that highlights objects in the real world with an outlined bounding box (not a specific object that we detect with .arobject files). The idea is to implement only the bounding-box drawing from this documentation / example.
I got some ideas from this Stack Overflow answer, but I am still unable to draw an outlined bounding box around the scanned object.
// Declaration
let configuration = ARObjectScanningConfiguration()
let augmentedRealitySession = ARSession()

// viewWillAppear(_ animated: Bool)
configuration.planeDetection = .horizontal
sceneView.session.run(configuration, options: .resetTracking)

// renderer
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //print("\(self.detectionObjects.debugDescription)")
    guard let objectAnchor = anchor as? ARObjectAnchor else { return }

    // 2. Create a bounding box around our object
    let scale = CGFloat(objectAnchor.referenceObject.scale.x)
    let boundingBoxNode = BlackMirrorzBoundingBox(points: objectAnchor.referenceObject.rawFeaturePoints.points, scale: scale)
    node.addChildNode(boundingBoxNode)
}
// BlackMirrorzBoundingBox class
init(points: [float3], scale: CGFloat, color: UIColor = .cyan) {
    super.init()

    var localMin = float3(repeating: Float.greatestFiniteMagnitude)
    var localMax = float3(repeating: -Float.greatestFiniteMagnitude)

    for point in points {
        localMin = min(localMin, point)
        localMax = max(localMax, point)
    }

    self.simdPosition += (localMax + localMin) / 2
    let extent = localMax - localMin

    let wireFrame = SCNNode()
    let box = SCNBox(width: CGFloat(extent.x), height: CGFloat(extent.y), length: CGFloat(extent.z), chamferRadius: 0)
    box.firstMaterial?.diffuse.contents = color
    box.firstMaterial?.isDoubleSided = true
    wireFrame.geometry = box
    setupShaderOnGeometry(box)
    self.addChildNode(wireFrame)
}

func setupShaderOnGeometry(_ geometry: SCNBox) {
    guard let path = Bundle.main.path(forResource: "wireframe_shader", ofType: "metal", inDirectory: "art.scnassets"),
          let shader = try? String(contentsOfFile: path, encoding: .utf8) else {
        return
    }

    geometry.firstMaterial?.shaderModifiers = [.surface: shader]
}
With the above logic I only get a box on the plane surface instead of an outlined box like in this picture.

You are probably missing the wireframe_shader shader file.
Make sure you add it to the project inside art.scnassets, then rebuild and run the app.
You can find a similar shader in this repository; don't forget to change the name of the shader file or the resource name in the code.
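If you just want the outlined look and don't want to depend on a custom Metal file at all, SceneKit's built-in SCNFillMode can render the box as a wireframe. This is only a sketch of an alternative, not the approach from the scanning sample; the extent value is assumed to come from localMax - localMin as computed in the question's init:
import SceneKit
import UIKit

// Assumed extent; in the real code this is localMax - localMin from the init above.
let extent = SIMD3<Float>(0.2, 0.1, 0.15)

let box = SCNBox(width: CGFloat(extent.x),
                 height: CGFloat(extent.y),
                 length: CGFloat(extent.z),
                 chamferRadius: 0)
box.firstMaterial?.diffuse.contents = UIColor.cyan
box.firstMaterial?.isDoubleSided = true
box.firstMaterial?.fillMode = .lines   // render edges only instead of solid faces (iOS 11+)

let wireFrameNode = SCNNode(geometry: box)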


How to recognize a lot of objects and add a lot of nodes at the same time?

This is the simple object I use with ARImageTrackingConfiguration().
In the code I add a plane and a paperPlane node onto the recognized object:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()

    if let imageAnchor = anchor as? ARImageAnchor {
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                             height: imageAnchor.referenceImage.physicalSize.height)
        plane.firstMaterial?.diffuse.contents = UIColor.green.withAlphaComponent(0.8)

        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2
        planeNode.position.y = 0

        let paperPlaneScene = SCNScene(named: "Scenes.scnassets/paperPlane.scn")!
        let paperPlaneNode = paperPlaneScene.rootNode.childNodes.first!
        paperPlaneNode.position = SCNVector3Zero
        paperPlaneNode.position.z = 0.1
        paperPlaneNode.eulerAngles.y = -.pi / 2
        paperPlaneNode.eulerAngles.z = .pi

        planeNode.addChildNode(paperPlaneNode)
        node.addChildNode(planeNode)
    }

    return node
}
But the result is the following:
Why is there only one recognized object at a time, and not all of them? They are recognized one by one, but never all at once. Why?
In the latest version of ARKit (5.0) you can simultaneously detect up to 100 images using classes such as ARImageTrackingConfiguration or ARWorldTrackingConfiguration.
To detect up to 100 images at a time, use the following instance property:
var maximumNumberOfTrackedImages: Int { get set }
In real code it looks like this:
guard let reference = ARReferenceImage.referenceImages(inGroupNamed: "ARRes",
                                                       bundle: nil)
else { fatalError("Missing resources") }

let config = ARWorldTrackingConfiguration()
config.maximumNumberOfTrackedImages = 10
config.detectionImages = reference

arView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
And here's an extension with the renderer(_:didAdd:for:) instance method:
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor,
              let imageName = imageAnchor.referenceImage.name
        else { return }

        let geometryNode = nodeGetter(name: imageName)
        node.addChildNode(geometryNode)
    }

    func nodeGetter(name: String) -> SCNNode {
        var node = SCNNode()

        switch name {
        case "geometry_01": node = modelOne
        case "geometry_02": node = modelTwo
        case "geometry_03": node = modelThree
        case "geometry_04": node = modelFour
        case "geometry_05": node = modelFive
        case "geometry_06": node = modelSix
        case "geometry_07": node = modelSeven
        case "geometry_08": node = modelEight
        default: break
        }
        return node
    }
}
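For completeness, here is a minimal sketch of the same setup using ARImageTrackingConfiguration instead of world tracking. The "ARRes" resource group name is taken from the answer above, and sceneView is assumed to be your ARSCNView outlet:
guard let reference = ARReferenceImage.referenceImages(inGroupNamed: "ARRes",
                                                       bundle: nil)
else { fatalError("Missing resources") }

let config = ARImageTrackingConfiguration()
config.trackingImages = reference                 // images to look for
config.maximumNumberOfTrackedImages = 10          // track several images simultaneously

sceneView.session.run(config, options: [.resetTracking, .removeExistingAnchors])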

Swift: Displaying vertex labels properly on an AR face mesh [duplicate]

I am trying to generate the face mesh from the AR face tutorial with the vertices properly labelled using SCNText. I followed the online tutorial and have the following:
extension EmojiBlingViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else {
            return
        }
        faceGeometry.update(from: faceAnchor.geometry)
    }

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard let device = sceneView.device else {
            return nil
        }

        let faceGeometry = ARSCNFaceGeometry(device: device)
        let node = SCNNode(geometry: faceGeometry)

        for x in [1076, 1070, 1163] {
            let text = SCNText(string: "\(x)", extrusionDepth: 1)
            let txtnode = SCNNode(geometry: text)
            txtnode.scale = SCNVector3(x: 0.001, y: 0.001, z: 0.001)
            txtnode.name = "\(x)"
            node.addChildNode(txtnode)
            txtnode.geometry?.firstMaterial?.fillMode = .fill
        }

        node.geometry?.firstMaterial?.fillMode = .lines

        return node
    }
}
However, the labels for those vertices (1076, 1070, 1163) are not shown properly: they all overlap in the centre.
So my question is: does anyone know how to show the labels for x in [1076, 1070, 1163] at the correct locations on the mesh?
Thanks!
Update: the numbers overlap and move along with the front camera.
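The usual cause of this overlap is that the SCNText nodes are never given a position, so they all stay at the face node's origin. A minimal sketch (an assumption, not taken from the thread) that repositions each label at its vertex inside the didUpdate callback from the question:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor,
          let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
    faceGeometry.update(from: faceAnchor.geometry)

    // Move each label to its vertex; the names were set via txtnode.name = "\(x)" in nodeFor.
    for index in [1076, 1070, 1163] {
        let vertex = faceAnchor.geometry.vertices[index]
        node.childNode(withName: "\(index)", recursively: false)?
            .position = SCNVector3(x: vertex.x, y: vertex.y, z: vertex.z)
    }
}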

Inverse Kinematics in ARKit

I have a SceneKit model with 4 joints.
When I use it in an SCNView with the code below and change the position of the parent node (Joint), I can see it animate according to the joints.
private lazy var scene: SCNScene = {
    let scene = SCNScene(named: "art.scnassets/ears")!
    return scene
}()
In viewDidLoad:
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(x: 0, y: 0, z: 5)
rootNode.addChildNode(cameraNode)

scnView.scene = scene
scnView.allowsCameraControl = true
scnView.backgroundColor = .white

let joint = contentNode!.childNode(withName: "Joint", recursively: true)!
let ik: SCNIKConstraint = .inverseKinematicsConstraint(chainRootNode: joint)
joint.childNode(withName: "head", recursively: true)!.constraints = [ik]
The problem occurs when I use the same model in ARKit. In this case I use a separate view controller from the one above, and I place the model on the head. The model collapses when I show it in ARKit with the code below. I expect it to track the head movement and bend according to it.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let sceneView = renderer as? ARSCNView,
          anchor is ARFaceAnchor else { return nil }

    faceGeometry = ARSCNFaceGeometry(device: sceneView.device!)!
    contentNode = SCNReferenceNode(named: "art/ears")
    //contentNode!.physicsBody = .kinematic()

    let joint = contentNode!.childNode(withName: "Joint", recursively: true)!
    let ik: SCNIKConstraint = .inverseKinematicsConstraint(chainRootNode: joint)
    joint.childNode(withName: "head", recursively: true)!.constraints = [ik]

    return contentNode
}
I have this extension:
extension SCNReferenceNode {
    convenience init(named resourceName: String, loadImmediately: Bool = true) {
        let url = Bundle.main.url(forResource: resourceName, withExtension: "scn", subdirectory: "Models.scnassets")!
        self.init(url: url)!
        if loadImmediately {
            self.load()
        }
    }
}
This is the normal look of the model.
This is a snapshot with inverse kinematics correctly applied.
I tried calling setMaxAllowedRotationAngle (with angles between 20 and 45 degrees) on the joints; this prevents the collapsing behaviour, but it also prevents bending.
The y component of the scene's gravity is -9.8; x and z are 0.
What might be the problem that prevents me from creating the same effect in ARKit?
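One thing worth checking, offered only as a hedged sketch rather than a confirmed fix: an SCNIKConstraint on its own does nothing until its targetPosition (in world space) is set, typically inside an SCNTransaction so the chain animates toward it. Assuming the contentNode and "head" node names from the question, the target could be driven from the face anchor each frame, for example:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARFaceAnchor,
          let headNode = contentNode?.childNode(withName: "head", recursively: true),
          let ik = headNode.constraints?.first as? SCNIKConstraint else { return }

    // Hypothetical target slightly above the face anchor's node; tune for your model.
    let world = node.worldPosition
    SCNTransaction.begin()
    SCNTransaction.animationDuration = 0.1
    ik.targetPosition = SCNVector3(x: world.x, y: world.y + 0.1, z: world.z)
    SCNTransaction.commit()
}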

Is it possible to recognise a virtual image instead of a real image using ARKit? (iOS 12)

I have done a few AR image tracking apps and AR world tracking apps.
AR image tracking works by recognising images on a physical map captured by the camera.
Is there any way to make AR image tracking recognise a virtual "image", which is basically the material of an SCNPlane?
I would appreciate it if anyone could point me in some direction or offer advice.
(Note: for this project I use detection images with ARWorldTrackingConfiguration.)
I think it is probably possible: add the content image (the one you want to detect, i.e. the one in your map) to your Assets.xcassets, and then use code like the one below to detect the virtual image:
// Put your image name in (withName: "namehere")
lazy var mapNode: SCNNode = {
    let node = scene.rootNode.childNode(withName: "map", recursively: false)!
    return node
}()
// Now when detecting images
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        DispatchQueue.main.async {
            guard let imageAnchor = anchor as? ARImageAnchor,
                  let imageName = imageAnchor.referenceImage.name else { return }

            // TODO: Comment out code
            // let planeNode = self.getPlaneNode(withReferenceImage: imageAnchor.referenceImage)
            // planeNode.opacity = 0.0
            // planeNode.eulerAngles.x = -.pi / 2
            // planeNode.runAction(self.fadeAction)
            // node.addChildNode(planeNode)

            // TODO: Overlay 3D Object
            let overlayNode = self.getNode(withImageName: imageName)
            overlayNode.opacity = 0
            overlayNode.position.y = 0.2
            overlayNode.runAction(self.fadeAndSpinAction)
            node.addChildNode(overlayNode)

            self.label.text = "Image detected: \"\(imageName)\""
        }
    }

    func getPlaneNode(withReferenceImage image: ARReferenceImage) -> SCNNode {
        let plane = SCNPlane(width: image.physicalSize.width,
                             height: image.physicalSize.height)
        let node = SCNNode(geometry: plane)
        return node
    }

    func getNode(withImageName name: String) -> SCNNode {
        var node = SCNNode()
        switch name {
        case "map":
            node = mapNode
        default:
            break
        }
        return node
    }
}
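The fadeAndSpinAction (and the commented-out fadeAction) referenced above are not defined in the answer. A plausible definition, purely as an assumption of what they might look like:
// Hypothetical actions; adjust durations and rotation to taste.
lazy var fadeAndSpinAction: SCNAction = {
    return SCNAction.sequence([
        SCNAction.fadeIn(duration: 0.3),
        SCNAction.rotateBy(x: 0, y: CGFloat.pi * 2, z: 0, duration: 3.0),
        SCNAction.wait(duration: 0.5)
    ])
}()

lazy var fadeAction: SCNAction = SCNAction.fadeIn(duration: 0.3)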

ARKit 1.5: Extract the recognized image

So, my goal is:
Find a known image
Extract it from the sceneView (e.g. take a snapshot)
Perform further processing
It was quite easy to complete the first step using ARReferenceImage:
guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else { return }
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = referenceImages
But now I can't figure out how to extract the image from the scene view. I have the plane node added to the imageAnchor:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async { [unowned self] in
        guard let imageAnchor = anchor as? ARImageAnchor
        else { return }

        let planeNode = self.getPlaneNode(withReferenceImage: imageAnchor.referenceImage)
        planeNode.opacity = 0.5
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
    }
}
And the result:
So I need the planeNode's projection onto the screen to get the node's 2D screen coordinates and then cut out the image. I found the method renderer.projectPoint(node.position), but it didn't help me; it can even return negative values when the whole picture is on the screen.
Am I doing this the correct way? Any help would be very much appreciated.
In my case I solved it like this:
let transform = node.simdConvertTransform(node.simdTransform, to: node.parent!)

let x = transform.columns.3.x
let y = transform.columns.3.y
let z = transform.columns.3.z

let position = SCNVector3(x, y, z)
let projection = renderer.projectPoint(position)
let screenPoint = CGPoint(x: CGFloat(projection.x), y: CGFloat(projection.y))
For more info:
ARKit 2 - Crop Recognizing Images
iOS 8 Core Image saving a section of an Image via Swift
https://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
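As a follow-up sketch for the cropping step (an assumption, not part of the answer above): project the four corners of the detected image's plane into screen space, build a bounding rect, and crop the view's snapshot. sceneView is assumed to be the ARSCNView, and the node/anchor come from the didAdd delegate callback:
import ARKit
import UIKit

func croppedImage(for imageAnchor: ARImageAnchor, node: SCNNode, in sceneView: ARSCNView) -> UIImage? {
    let size = imageAnchor.referenceImage.physicalSize
    let halfW = Float(size.width) / 2
    let halfH = Float(size.height) / 2

    // Corners of the detected image's plane in the anchor node's local space (x/z plane).
    let corners = [SCNVector3(x: -halfW, y: 0, z: -halfH), SCNVector3(x: halfW, y: 0, z: -halfH),
                   SCNVector3(x: -halfW, y: 0, z:  halfH), SCNVector3(x: halfW, y: 0, z:  halfH)]

    // Convert each corner to world space and project it into screen points.
    let projected = corners.map { corner -> CGPoint in
        let world = node.convertPosition(corner, to: nil)
        let p = sceneView.projectPoint(world)
        return CGPoint(x: CGFloat(p.x), y: CGFloat(p.y))
    }

    let xs = projected.map { $0.x }
    let ys = projected.map { $0.y }
    let rect = CGRect(x: xs.min()!, y: ys.min()!,
                      width: xs.max()! - xs.min()!, height: ys.max()! - ys.min()!)

    // Crop the snapshot; convert the rect from points to pixels first.
    let snapshot = sceneView.snapshot()
    let scale = snapshot.scale
    let pixelRect = CGRect(x: rect.origin.x * scale, y: rect.origin.y * scale,
                           width: rect.width * scale, height: rect.height * scale)
    guard let cgImage = snapshot.cgImage?.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: cgImage)
}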
