So, my goal is:
1. Find a known image
2. Extract it from the sceneView (e.g. take a snapshot)
3. Perform further processing
It was quite easy to complete the 1st step using ARReferenceImage:
guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else { return }
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = referenceImages
sceneView.session.run(configuration)  // start the session with image detection enabled
But now I can't figure out how to extract the image from the sceneView. I have a plane node added to the node for the imageAnchor:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async { [unowned self] in
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let planeNode = self.getPlaneNode(withReferenceImage: imageAnchor.referenceImage)
        planeNode.opacity = 0.5
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
    }
}
And the result:
So I need the planeNode's projection onto the screen to get the node's 2D screen coordinates and then crop the image. I found the method renderer.projectPoint(node.position), but it didn't help me: it can even return negative values when the whole picture is on the screen.
Am I doing this the correct way? Any help would be very much appreciated.
In my case I solved it like this:
// Convert the node's transform into its parent's coordinate space
let transform = node.simdConvertTransform(node.simdTransform, to: node.parent!)
let x = transform.columns.3.x
let y = transform.columns.3.y
let z = transform.columns.3.z
let position = SCNVector3(x, y, z)
// Project the 3D position into 2D screen space
let projection = renderer.projectPoint(position)
let screenPoint = CGPoint(x: CGFloat(projection.x), y: CGFloat(projection.y))
For more info:
ARKit 2 - Crop Recognizing Images
iOS 8 Core Image saving a section of an Image via Swift
https://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
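For completeness, here is a rough sketch of the cropping step itself (not part of the original answer; it assumes planeNode is the node attached in renderer(_:didAdd:for:) and sceneView is your ARSCNView):
// Rough sketch (assumed helper): project the plane's four corners to screen space
// and crop the corresponding rectangle out of a sceneView snapshot.
func cropDetectedImage(planeNode: SCNNode, in sceneView: ARSCNView) -> UIImage? {
    let (minBound, maxBound) = planeNode.boundingBox
    // Corners of the plane in its local space (an SCNPlane lies in the x/y plane)
    let corners = [
        SCNVector3(minBound.x, minBound.y, 0),
        SCNVector3(maxBound.x, minBound.y, 0),
        SCNVector3(maxBound.x, maxBound.y, 0),
        SCNVector3(minBound.x, maxBound.y, 0)
    ]
    // World space -> screen space
    let screenPoints = corners.map { corner -> CGPoint in
        let world = planeNode.convertPosition(corner, to: nil)
        let projected = sceneView.projectPoint(world)
        return CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
    }
    // Axis-aligned bounding rect of the projected corners
    let xs = screenPoints.map { $0.x }
    let ys = screenPoints.map { $0.y }
    let rect = CGRect(x: xs.min()!, y: ys.min()!,
                      width: xs.max()! - xs.min()!,
                      height: ys.max()! - ys.min()!)
    // Crop the snapshot (snapshot() is in points; convert to pixels)
    let snapshot = sceneView.snapshot()
    let pixelRect = CGRect(x: rect.origin.x * snapshot.scale,
                           y: rect.origin.y * snapshot.scale,
                           width: rect.width * snapshot.scale,
                           height: rect.height * snapshot.scale)
    guard let cropped = snapshot.cgImage?.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: cropped)
}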
This is the simple object I use for ARImageTrackingConfiguration().
In code I add a plane and a paper plane onto the recognized object:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    if let imageAnchor = anchor as? ARImageAnchor {
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                             height: imageAnchor.referenceImage.physicalSize.height)
        plane.firstMaterial?.diffuse.contents = UIColor.green.withAlphaComponent(0.8)
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2
        planeNode.position.y = 0

        let paperPlaneScene = SCNScene(named: "Scenes.scnassets/paperPlane.scn")!
        let paperPlaneNode = paperPlaneScene.rootNode.childNodes.first!
        paperPlaneNode.position = SCNVector3Zero
        paperPlaneNode.position.z = 0.1
        paperPlaneNode.eulerAngles.y = -.pi / 2
        paperPlaneNode.eulerAngles.z = .pi

        planeNode.addChildNode(paperPlaneNode)
        node.addChildNode(planeNode)
    }
    return node
}
But the result is the following:
Why is there only one recognized object at a time, and not all of them? They are recognized one by one, but never all at once. Why?
In the latest version, ARKit 5.0, you can simultaneously detect up to 100 images using configurations such as ARImageTrackingConfiguration or ARWorldTrackingConfiguration.
To detect up to 100 images at a time, use the following instance property:
var maximumNumberOfTrackedImages: Int { get set }
In real code it looks like this:
guard let reference = ARReferenceImage.referenceImages(inGroupNamed: "ARRes",
                                                       bundle: nil)
else { fatalError("Missing resources") }

let config = ARWorldTrackingConfiguration()
config.maximumNumberOfTrackedImages = 10
config.detectionImages = reference

arView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
And here's an extension with renderer(_:didAdd:for:) instance method:
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor,
              let imageName = imageAnchor.referenceImage.name
        else { return }

        let geometryNode = nodeGetter(name: imageName)
        node.addChildNode(geometryNode)
    }

    func nodeGetter(name: String) -> SCNNode {
        var node = SCNNode()

        switch name {
        case "geometry_01": node = modelOne
        case "geometry_02": node = modelTwo
        case "geometry_03": node = modelThree
        case "geometry_04": node = modelFour
        case "geometry_05": node = modelFive
        case "geometry_06": node = modelSix
        case "geometry_07": node = modelSeven
        case "geometry_08": node = modelEight
        default: break
        }
        return node
    }
}
I am trying to generate the face mesh from the AR face tutorial with proper vertex labels using SCNText. I followed the online tutorial and have the following:
extension EmojiBlingViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else {
            return
        }
        faceGeometry.update(from: faceAnchor.geometry)
    }

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard let device = sceneView.device else {
            return nil
        }
        let faceGeometry = ARSCNFaceGeometry(device: device)
        let node = SCNNode(geometry: faceGeometry)

        for x in [1076, 1070, 1163] {
            let text = SCNText(string: "\(x)", extrusionDepth: 1)
            let txtnode = SCNNode(geometry: text)
            txtnode.scale = SCNVector3(x: 0.001, y: 0.001, z: 0.001)
            txtnode.name = "\(x)"
            node.addChildNode(txtnode)
            txtnode.geometry?.firstMaterial?.fillMode = .fill
        }
        node.geometry?.firstMaterial?.fillMode = .lines
        return node
    }
}
However, those vertices (1076, 1070, 1163) do not show properly: they all overlap at the centre.
So my question is, does anyone know how to show the labels for x in [1076, 1070, 1163] at the correct locations on the mesh?
Thanks!
Update: I see the numbers overlapped, and they move along with the front camera.
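For what it's worth, the labels overlap because the text nodes are never given a position. One possible approach (a sketch, not from the original post) is to read each vertex position from the ARFaceGeometry inside the existing didUpdate callback and move the matching named text node there:
// Sketch only: position each labelled text node at its face-mesh vertex.
// Assumes the same delegate and node hierarchy as the code above.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor,
          let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
    faceGeometry.update(from: faceAnchor.geometry)

    for index in [1076, 1070, 1163] {
        // Vertex positions are in the face anchor's (and hence the face node's) space
        let vertex = faceAnchor.geometry.vertices[index]
        node.childNode(withName: "\(index)", recursively: false)?
            .position = SCNVector3(vertex.x, vertex.y, vertex.z)
    }
}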
I am developing an iOS app which will highlight objects (not a specific one, as we do with .arobject files) with an outlined box in the real world. The idea is to implement only the bounding-box drawing from this documentation / example.
I got some ideas from this Stack Overflow answer, but I am still unable to draw an outlined bounding box around the scanned object.
// Declaration
let configuration = ARObjectScanningConfiguration()
let augmentedRealitySession = ARSession()

// viewWillAppear(_ animated: Bool)
configuration.planeDetection = .horizontal
sceneView.session.run(configuration, options: .resetTracking)

// renderer
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //print("\(self.detectionObjects.debugDescription)")
    guard let objectAnchor = anchor as? ARObjectAnchor else { return }

    // 2. Create a bounding box around our object
    let scale = CGFloat(objectAnchor.referenceObject.scale.x)
    let boundingBoxNode = BlackMirrorzBoundingBox(points: objectAnchor.referenceObject.rawFeaturePoints.points, scale: scale)
    node.addChildNode(boundingBoxNode)
}
// BlackMirrorzBoundingBox class
init(points: [float3], scale: CGFloat, color: UIColor = .cyan) {
    super.init()

    var localMin = float3(repeating: Float.greatestFiniteMagnitude)
    var localMax = float3(repeating: -Float.greatestFiniteMagnitude)

    for point in points {
        localMin = min(localMin, point)
        localMax = max(localMax, point)
    }

    self.simdPosition += (localMax + localMin) / 2
    let extent = localMax - localMin

    let wireFrame = SCNNode()
    let box = SCNBox(width: CGFloat(extent.x), height: CGFloat(extent.y), length: CGFloat(extent.z), chamferRadius: 0)
    box.firstMaterial?.diffuse.contents = color
    box.firstMaterial?.isDoubleSided = true
    wireFrame.geometry = box
    setupShaderOnGeometry(box)
    self.addChildNode(wireFrame)
}

func setupShaderOnGeometry(_ geometry: SCNBox) {
    guard let path = Bundle.main.path(forResource: "wireframe_shader", ofType: "metal", inDirectory: "art.scnassets"),
          let shader = try? String(contentsOfFile: path, encoding: .utf8) else {
        return
    }
    geometry.firstMaterial?.shaderModifiers = [.surface: shader]
}
With the above logic I am getting a plain solid box on the plane surface instead of an outlined box as in this picture.
You are probably missing the wireframe_shader shader file.
Make sure you add it to the project inside art.scnassets, then rebuild the app.
You can find a similar shader in this repository; don't forget to change the name of the shader file or the resource name in the code.
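If you don't want to depend on a custom Metal shader at all, SceneKit's built-in fill mode can give a similar wireframe look. A sketch (not part of the original answer) that could replace setupShaderOnGeometry(_:):
// Fallback sketch: render the bounding box as edges only, no shader file needed (iOS 11+).
func setupWireframe(on geometry: SCNBox, color: UIColor = .cyan) {
    geometry.firstMaterial?.diffuse.contents = color
    geometry.firstMaterial?.isDoubleSided = true
    geometry.firstMaterial?.fillMode = .lines   // draw the geometry as lines instead of filled triangles
}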
I have done a few AR image tracking and AR world tracking apps.
AR image tracking works by recognising images on a physical map captured by the camera.
Is there any way to make AR image tracking recognise a virtual "image", which is basically an SCNPlane's material?
I would appreciate it if anyone could point me in some direction or offer advice.
(Note: for this project, I use detection images with ARWorldTrackingConfiguration.)
I think probably yes. It should be possible by adding the content image (the one you want to detect, i.e. the one on your map) to your Assets.xcassets, and then using code like the following when the image is detected:
// Put your node's name in childNode(withName:) — it should match the image name
lazy var mapNode: SCNNode = {
    guard let node = scene.rootNode.childNode(withName: "map", recursively: false) else {
        fatalError("Node named \"map\" not found in the scene")
    }
    return node
}()
// Now, when detecting images
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        DispatchQueue.main.async {
            guard let imageAnchor = anchor as? ARImageAnchor,
                  let imageName = imageAnchor.referenceImage.name else { return }

            // TODO: Comment out code
            // let planeNode = self.getPlaneNode(withReferenceImage: imageAnchor.referenceImage)
            // planeNode.opacity = 0.0
            // planeNode.eulerAngles.x = -.pi / 2
            // planeNode.runAction(self.fadeAction)
            // node.addChildNode(planeNode)

            // TODO: Overlay 3D Object
            let overlayNode = self.getNode(withImageName: imageName)
            overlayNode.opacity = 0
            overlayNode.position.y = 0.2
            overlayNode.runAction(self.fadeAndSpinAction)
            node.addChildNode(overlayNode)

            self.label.text = "Image detected: \"\(imageName)\""
        }
    }

    func getPlaneNode(withReferenceImage image: ARReferenceImage) -> SCNNode {
        let plane = SCNPlane(width: image.physicalSize.width,
                             height: image.physicalSize.height)
        let node = SCNNode(geometry: plane)
        return node
    }

    func getNode(withImageName name: String) -> SCNNode {
        var node = SCNNode()
        switch name {
        case "map":
            node = mapNode
        default:
            break
        }
        return node
    }
}
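If the image you want to detect only exists at runtime (for example the same UIImage that is used as the SCNPlane's material), a reference image can also be created programmatically instead of through the asset catalog. A sketch, where mapImage, the "map" name and the physical width are assumptions you would adapt:
// Sketch: build a detection image from a UIImage at runtime (name and width are assumptions).
func makeDetectionImage(from image: UIImage, physicalWidth: CGFloat) -> ARReferenceImage? {
    guard let cgImage = image.cgImage else { return nil }
    let reference = ARReferenceImage(cgImage, orientation: .up, physicalWidth: physicalWidth) // width in metres
    reference.name = "map"
    return reference
}

// Usage: add it to the configuration alongside (or instead of) the asset-catalog group
// configuration.detectionImages = [makeDetectionImage(from: mapImage, physicalWidth: 0.2)!]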
I'm creating an anchor and adding it to my ARSKView at a certain distance in front of the camera like this:
func displayToken(distance: Float) {
    print("token dropped at: \(distance)")

    guard let sceneView = self.view as? ARSKView else {
        return
    }

    // Create anchor using the camera's current position
    if let currentFrame = sceneView.session.currentFrame {

        // Create a transform with a translation of x meters in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -distance
        let transform = simd_mul(currentFrame.camera.transform, translation)

        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
    }
}
then the node gets created for the anchor like this:
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    if let image = tokenImage {
        let texture = SKTexture(image: image)
        let tokenImageNode = SKSpriteNode(texture: texture)
        tokenImageNode.name = "token"
        return tokenImageNode
    } else {
        return nil
    }
}
This works fine, and I see the image get added at the appropriate distance. However, what I'm trying to do is then calculate how far the anchor/node is in front of the camera as you move. The problem is that the calculation, fabs(cameraZ - anchor.transform.columns.3.z), seems to be off immediately. Please see my code below, in the update() method, which calculates the distance between the camera and the object:
override func update(_ currentTime: TimeInterval) {
    // Called before each frame is rendered
    guard let sceneView = self.view as? ARSKView else {
        return
    }

    if let currentFrame = sceneView.session.currentFrame {
        let cameraZ = currentFrame.camera.transform.columns.3.z

        for anchor in currentFrame.anchors {
            if let spriteNode = sceneView.node(for: anchor), spriteNode.name == "token", intersects(spriteNode) {
                // token is within the camera view
                //print("token is within camera view from update method")
                print("DISTANCE BETWEEN CAMERA AND TOKEN: \(fabs(cameraZ - anchor.transform.columns.3.z))")
                print(cameraZ)
                print(anchor.transform.columns.3.z)
            }
        }
    }
}
Any help in accurately getting the distance between the camera and the anchor is appreciated.
The last column of a 4x4 transform matrix is the translation vector (or position relative to a parent coordinate space), so you can get the distance in three dimensions between two transforms by simply subtracting those vectors.
let anchorPosition = anchor.transform.columns.3
let cameraPosition = camera.transform.columns.3

// here's a line connecting the two points, which might be useful for other things
let cameraToAnchor = cameraPosition - anchorPosition
// and here's just the scalar distance
let distance = length(cameraToAnchor)
What you’re doing isn’t working right because you’re subtracting the z-coordinates of each vector. If the two points are different in x, y, and z, just subtracting z doesn’t get you distance.
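A toy comparison with made-up numbers, just to illustrate the point:
import simd

// An anchor one metre to the camera's right, at the same depth (z).
let cameraPos = simd_float3(0, 0, 0)
let anchorPos = simd_float3(1, 0, 0)

let zOnly = abs(cameraPos.z - anchorPos.z)          // 0.0 — ignores the x offset entirely
let euclidean = simd_distance(cameraPos, anchorPos) // 1.0 — the true distance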
This one is for SceneKit, but I'll leave it here anyway.
let end = node.presentation.worldPosition
let start = sceneView.pointOfView!.worldPosition

let dx = end.x - start.x
let dy = end.y - start.y
let dz = end.z - start.z
let distance = sqrt(dx * dx + dy * dy + dz * dz)
With RealityKit there is a slightly different way to do this. If you're using a world tracking configuration, your AnchorEntity conforms to HasAnchoring, which gives you a target. The target is an AnchoringComponent.Target enum, which has a .world(transform) case. You can compare that world transform to the camera's transform like this:
if case let AnchoringComponent.Target.world(transform) = yourAnchorEntity.anchoring.target {
    let theDistance = distance(transform.columns.3, frame.camera.transform.columns.3)
}
This took me a bit to figure out, but others using RealityKit might benefit from it.
As mentioned above by @codeman, this is the right solution:
let distance = simd_distance(YOUR_NODE.simdTransform.columns.3,
                             (sceneView.session.currentFrame?.camera.transform.columns.3)!)
For the 3D distance, you can check these utils:
class ARSceneUtils {

    /// Returns the distance between an anchor and the camera.
    class func distanceBetween(anchor: ARAnchor, AndCamera camera: ARCamera) -> CGFloat {
        let anchorPosition = SCNVector3Make(
            anchor.transform.columns.3.x,
            anchor.transform.columns.3.y,
            anchor.transform.columns.3.z
        )

        let cameraPosition = SCNVector3Make(
            camera.transform.columns.3.x,
            camera.transform.columns.3.y,
            camera.transform.columns.3.z
        )

        return CGFloat(self.calculateDistance(from: cameraPosition, to: anchorPosition))
    }

    /// Returns the distance between two vectors.
    class func calculateDistance(from: SCNVector3, to: SCNVector3) -> Float {
        let x = from.x - to.x
        let y = from.y - to.y
        let z = from.z - to.z
        return sqrtf((x * x) + (y * y) + (z * z))
    }
}
And now you can call:
guard let camera = session.currentFrame?.camera else { return }
let anchor = // your anchor
let distanceAnchorAndCamera = ARSceneUtils.distanceBetween(anchor: anchor, AndCamera: camera)