Bad scaling when rendering with ARKit - iOS

I'm working on perspective tests with ARKit and SceneKit. The idea is to improve the 3D rendering when displaying a flat 3D model on the ground. I had already opened a question about another perspective problem, which is almost solved (ARKit Perspective Rendering).
However, after many tests and 3D displays, I noticed that sometimes when I anchor a 3D model, its size (width and length) can differ.
I usually display a 3D model that is 16 meters long and 1.5 meters wide, so you can imagine how much this distorts my rendering.
I don't know why the displayed size of the 3D model may differ. Maybe it comes from the tracking and my test environment.
Below is the code I use to add my 3D model to the scene:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    let imageAnchorPosition = imageAnchor.transform.columns.3
    print("Image detected")

    let modelName = "couloirV2"
    //let modelName = "lamp"
    guard let object = VirtualObject
        .availableObjects
        .filter({ $0.modelName == modelName })
        .first else { fatalError("Cannot get model \(modelName)") }
    print("Loading \(object)...")

    self.sceneView.prepare([object], completionHandler: { _ in
        self.updateQueue.async {
            // Translate the object's position to the reference node position.
            object.position.x = imageAnchorPosition.x
            object.position.y = imageAnchorPosition.y
            object.position.z = imageAnchorPosition.z
            // Save the initial y value for the slider handler function
            self.tmpYPosition = object.position.y
            // Match y node's orientation
            object.orientation.y = node.orientation.y
            print("Adding object to the scene")
            // Prepare the object
            object.load()
            // Show origin axis
            object.showObjectOrigin()
            // Translate on z axis to match perfectly with image detected.
            var translation = matrix_identity_float4x4
            translation.columns.3.z += Float(referenceImage.physicalSize.height / 2)
            object.simdTransform = matrix_multiply(object.simdTransform, translation)
            self.sceneView.scene.rootNode.addChildNode(object)
            self.virtualObjectInteraction.selectedObject = object
            self.sceneView.addOrUpdateAnchor(for: object)
        }
    })
}
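A minimal sketch of a variant worth testing (not a verified fix), reusing the VirtualObject, updateQueue, node and referenceImage names from the code above: parent the object to the anchor's node instead of the scene's root node, so ARKit keeps the model aligned while it refines the image anchor, rather than freezing a copy of the anchor's transform at detection time.

// Sketch only - same delegate callback, same names as above.
self.sceneView.prepare([object], completionHandler: { _ in
    self.updateQueue.async {
        object.load()
        object.showObjectOrigin()
        // The anchor's node already carries the image anchor's transform,
        // so only the local z offset used above is still needed.
        object.position = SCNVector3(0, 0, Float(referenceImage.physicalSize.height / 2))
        // Parenting to `node` keeps the object in sync with anchor updates.
        node.addChildNode(object)
        self.virtualObjectInteraction.selectedObject = object
    }
})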

Related

ProjectPoint for getting 2D image coordinates in ARKit

I am trying to get 2D image coordinates from 3D vertices in ARKit/SceneKit. I have read the projectPoint documentation and have the following code using projectPoint and CGPoint:
var currentFaceAnchor: ARFaceAnchor?

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard anchor == currentFaceAnchor,
          let contentNode = selectedContentController.contentNode,
          contentNode.parent == node
    else { return }

    let vertices = currentFaceAnchor!.geometry.vertices
    for vertex in vertices {
        let projectedPoint = sceneView.projectPoint(node.convertPosition(SCNVector3(vertex), to: nil))
        let xVertex = CGFloat(projectedPoint.x)
        let yVertex = CGFloat(projectedPoint.y)
        if beginSaving == true {
            savedPositions.append(CGPoint(x: xVertex, y: yVertex))
        }
    }
}
I obtained some points: (216.42970275878906, 760.9256591796875), (208.1529541015625, 761.989501953125), (206.86944580078125, 750.0242919921875)...
I could not find much information about 2D image coordinates, and I could not fully understand this. I am wondering whether these points are the actual 2D points obtained from the 3D coordinates var vertices: [SIMD3<Float>], and how could one verify this? Any suggestions and help are appreciated!
Thanks,
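One way to sanity-check those numbers (a minimal sketch, assuming the sceneView and savedPositions names from the question): projectPoint returns coordinates in the view's point space with the origin at the top left, so drawing small markers at those CGPoints over the ARSCNView should land them on the face if the projection behaves as expected.

import ARKit
import UIKit

// Sketch only: overlay small red dots at the projected points to verify
// that they line up with the face on screen. `overlayMarkers` is an
// illustrative helper name; call it on the main thread.
func overlayMarkers(_ points: [CGPoint], on sceneView: ARSCNView) {
    for point in points {
        let dot = CAShapeLayer()
        dot.path = UIBezierPath(ovalIn: CGRect(x: point.x - 2, y: point.y - 2,
                                               width: 4, height: 4)).cgPath
        dot.fillColor = UIColor.red.cgColor
        sceneView.layer.addSublayer(dot)
    }
}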

How to get image data based on ARFaceAnchor or node position?

I'm trying to create a 2D face unwrap image from a live camera feed from an ARKit session.
I see there is another question about mapping an image onto a mesh. My question is different - it is about generating an image (or many smaller images) from the mesh and camera feed.
The session detects the user's face and adds an ARFaceAnchor to the renderer.
The anchor has a geometry object, defining a mesh.
The mesh has a large number of vertices.
For each update, there is a corresponding camera image.
The camera image has a pixel buffer with image data.
How do I retrieve image data around the face anchor vertices, to "stitch" together a face unwrap from the corresponding camera frames?
var session: ARSession = self.sceneView.session

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    if let faceAnchor = anchor as? ARFaceAnchor {
        let faceGeometry: ARFaceGeometry = faceAnchor.geometry
        if let pixelBuffer: CVPixelBuffer = session.currentFrame?.capturedImage,
           let anchors: [ARAnchor] = session.currentFrame?.anchors {
            print("how to get pixel buffer bytes around any given anchor?")
        }
        for index in 0 ..< faceGeometry.vertices.count {
            let point = faceGeometry.vertices[index]
            let position = SCNVector3(point.x, point.y, point.z)
            print("How to use this position to retrieve image data from pixel buffer around this position?")
        }
    }
}
Included is a sample of where the face geometry vertices are positioned
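A hedged sketch of one possible starting point, assuming the current update's ARFrame is available as frame: the face vertices are expressed in the anchor's local space, so they can be lifted into world space with the anchor's transform and then projected into captured-image pixel coordinates with ARCamera's projectPoint(_:orientation:viewportSize:), using the pixel buffer's dimensions as the viewport size.

import ARKit
import UIKit

// Sketch only - `pixelCoordinate` is an illustrative helper name.
// Maps one face vertex to a pixel coordinate in frame.capturedImage,
// which could then be sampled to stitch together a face unwrap.
func pixelCoordinate(of vertex: simd_float3,
                     in faceAnchor: ARFaceAnchor,
                     from frame: ARFrame) -> CGPoint {
    // Lift the vertex from the anchor's local space into world space.
    let worldPoint = faceAnchor.transform * simd_float4(vertex.x, vertex.y, vertex.z, 1)
    // The captured image is in the sensor's native landscape-right orientation.
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))
    return frame.camera.projectPoint(simd_float3(worldPoint.x, worldPoint.y, worldPoint.z),
                                     orientation: .landscapeRight,
                                     viewportSize: imageSize)
}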

Is it possible to recognise a virtual image instead of a real image using ARKit? (iOS 12)

I have built a few AR image tracking apps and AR world tracking apps.
AR image tracking works by recognising images on the physical map captured by the camera.
Is there any way to make AR image tracking recognise a virtual "image", which is basically an SCNPlane's material?
I would appreciate it if anyone could point me in the right direction or offer advice.
(Note: for this project, I use image detection with ARWorldTrackingConfiguration.)
I think it is probably possible: add the content image (the one you want to detect, i.e. the one in your map) to your Assets.xcassets, and then use code like the following to detect the virtual image:
// Put your image name in (withName: "namehere")
lazy var mapNode: SCNNode = {
    let node = scene.rootNode.childNode(withName: "map", recursively: false)!
    return node
}()

// Now when detecting images
extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        DispatchQueue.main.async {
            guard let imageAnchor = anchor as? ARImageAnchor,
                  let imageName = imageAnchor.referenceImage.name else { return }

            // TODO: Comment out code
            // let planeNode = self.getPlaneNode(withReferenceImage: imageAnchor.referenceImage)
            // planeNode.opacity = 0.0
            // planeNode.eulerAngles.x = -.pi / 2
            // planeNode.runAction(self.fadeAction)
            // node.addChildNode(planeNode)

            // TODO: Overlay 3D Object
            let overlayNode = self.getNode(withImageName: imageName)
            overlayNode.opacity = 0
            overlayNode.position.y = 0.2
            overlayNode.runAction(self.fadeAndSpinAction)
            node.addChildNode(overlayNode)

            self.label.text = "Image detected: \"\(imageName)\""
        }
    }

    func getPlaneNode(withReferenceImage image: ARReferenceImage) -> SCNNode {
        let plane = SCNPlane(width: image.physicalSize.width,
                             height: image.physicalSize.height)
        let node = SCNNode(geometry: plane)
        return node
    }

    func getNode(withImageName name: String) -> SCNNode {
        var node = SCNNode()
        switch name {
        case "map":
            node = mapNode
        default:
            break
        }
        return node
    }
}

ARKit - getting distance from camera to anchor

I'm creating an anchor and adding it to my ARSKView at a certain distance in front of the camera like this:
func displayToken(distance: Float) {
    print("token dropped at: \(distance)")
    guard let sceneView = self.view as? ARSKView else {
        return
    }

    // Create anchor using the camera's current position
    if let currentFrame = sceneView.session.currentFrame {
        // Create a transform with a translation of x meters in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -distance
        let transform = simd_mul(currentFrame.camera.transform, translation)

        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
    }
}
Then the node gets created for the anchor like this:
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    if let image = tokenImage {
        let texture = SKTexture(image: image)
        let tokenImageNode = SKSpriteNode(texture: texture)
        tokenImageNode.name = "token"
        return tokenImageNode
    } else {
        return nil
    }
}
This works fine, and I see the image added at the appropriate distance. However, what I'm trying to do is then calculate how far the anchor/node is in front of the camera as you move. The problem is that the calculation seems to be off immediately when using fabs(cameraZ - anchor.transform.columns.3.z). Please see my code below, which runs in the update() method to calculate the distance between the camera and the object:
override func update(_ currentTime: TimeInterval) {
    // Called before each frame is rendered
    guard let sceneView = self.view as? ARSKView else {
        return
    }

    if let currentFrame = sceneView.session.currentFrame {
        let cameraZ = currentFrame.camera.transform.columns.3.z
        for anchor in currentFrame.anchors {
            if let spriteNode = sceneView.node(for: anchor), spriteNode.name == "token", intersects(spriteNode) {
                // token is within the camera view
                //print("token is within camera view from update method")
                print("DISTANCE BETWEEN CAMERA AND TOKEN: \(fabs(cameraZ - anchor.transform.columns.3.z))")
                print(cameraZ)
                print(anchor.transform.columns.3.z)
            }
        }
    }
}
Any help with accurately getting the distance between the camera and the anchor is appreciated.
The last column of a 4x4 transform matrix is the translation vector (or position relative to a parent coordinate space), so you can get the distance in three dimensions between two transforms by simply subtracting those vectors.
let anchorPosition = anchor.transform.columns.3
let cameraPosition = camera.transform.columns.3

// here’s a line connecting the two points, which might be useful for other things
let cameraToAnchor = cameraPosition - anchorPosition
// and here’s just the scalar distance
let distance = length(cameraToAnchor)
What you’re doing isn’t working right because you’re subtracting the z-coordinates of each vector. If the two points are different in x, y, and z, just subtracting z doesn’t get you distance.
This one is for SceneKit, but I'll leave it here anyway.
let end = node.presentation.worldPosition
let start = sceneView.pointOfView!.worldPosition
let dx = end.x - start.x
let dy = end.y - start.y
let dz = end.z - start.z
let distance = sqrtf(dx * dx + dy * dy + dz * dz)
With RealityKit there is a slightly different way to do this. If you're using a world tracking configuration, your AnchorEntity object conforms to HasAnchoring, which gives you a target. Target is an AnchoringComponent.Target enum with a case .world(let transform). You can compare your world transform to the camera's world transform like this:
if case let AnchoringComponent.Target.world(transform) = yourAnchorEntity.anchoring.target {
    let theDistance = distance(transform.columns.3, frame.camera.transform.columns.3)
}
This took me a bit to figure out, but I figure others using RealityKit might benefit from it.
As mentioned above by @codeman, this is the right solution:
let distance = simd_distance(YOUR_NODE.simdTransform.columns.3, (sceneView.session.currentFrame?.camera.transform.columns.3)!);
3D distance - you can check these utils:
class ARSceneUtils {
    /// Returns the distance between an anchor and the camera.
    class func distanceBetween(anchor: ARAnchor, AndCamera camera: ARCamera) -> CGFloat {
        let anchorPosition = SCNVector3Make(
            anchor.transform.columns.3.x,
            anchor.transform.columns.3.y,
            anchor.transform.columns.3.z
        )
        let cameraPosition = SCNVector3Make(
            camera.transform.columns.3.x,
            camera.transform.columns.3.y,
            camera.transform.columns.3.z
        )
        return CGFloat(self.calculateDistance(from: cameraPosition, to: anchorPosition))
    }

    /// Returns the distance between two vectors.
    class func calculateDistance(from: SCNVector3, to: SCNVector3) -> Float {
        let x = from.x - to.x
        let y = from.y - to.y
        let z = from.z - to.z
        return sqrtf((x * x) + (y * y) + (z * z))
    }
}
And now you can call:
guard let camera = session.currentFrame?.camera else { return }
let anchor = // your anchor
let distanceAnchorAndCamera = ARSceneUtils.distanceBetween(anchor: anchor, AndCamera: camera)

ARKit 1.5: Extract the recognized image

So, my goal is to:
Find a known image
Extract it from the sceneView (e.g. take a snapshot)
Perform further processing
It was quite easy to complete the first step using ARReferenceImage:
guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else { return }
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = referenceImages
But now I can't figure out how to extract the image from the SceneView. I have the plane node added to the imageAnchor:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async { [unowned self] in
        guard let imageAnchor = anchor as? ARImageAnchor
        else { return }
        let planeNode = self.getPlaneNode(withReferenceImage: imageAnchor.referenceImage)
        planeNode.opacity = 0.5
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
    }
}
And the result:
So I need the planeNode's projection on the screen to get the node's 2D screen coordinates and then crop the image. I found the method renderer.projectPoint(node.position), but it didn't help me; it can even return negative values when the whole picture is on the screen.
Am I doing this the correct way? Any help would be very much appreciated.
In my case I solved it like this:
let transform = node.simdConvertTransform(node.simdTransform, to: node.parent!)
let x = transform.columns.3.x
let y = transform.columns.3.y
let z = transform.columns.3.z
let position = SCNVector3(x, y, z)
let projection = renderer.projectPoint(position)
let screenPoint = CGPoint(x: CGFloat(projection.x), y: CGFloat(projection.y))
For more info
ARKit 2 - Crop Recognizing Images
iOS 8 Core Image saving a section of an Image via Swift
https://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
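Along the same lines, a hedged sketch (screenRect is an illustrative helper name, not from the original code) that projects all four corners of the detected image's plane instead of only the node's position, which is what a crop rectangle actually needs:

import SceneKit
import UIKit

// Sketch only: build a bounding rectangle in view points from the four
// corners of the SCNPlane that was laid over the detected image.
func screenRect(for planeNode: SCNNode, in renderer: SCNSceneRenderer) -> CGRect? {
    guard let plane = planeNode.geometry as? SCNPlane else { return nil }
    let w = Float(plane.width) / 2
    let h = Float(plane.height) / 2
    // SCNPlane lies in its node's local XY plane.
    let corners = [SCNVector3(-w, -h, 0), SCNVector3(w, -h, 0),
                   SCNVector3(-w, h, 0), SCNVector3(w, h, 0)]
    let projected = corners.map { corner -> CGPoint in
        let world = planeNode.convertPosition(corner, to: nil)
        let p = renderer.projectPoint(world)
        return CGPoint(x: CGFloat(p.x), y: CGFloat(p.y))
    }
    let xs = projected.map { $0.x }
    let ys = projected.map { $0.y }
    return CGRect(x: xs.min()!, y: ys.min()!,
                  width: xs.max()! - xs.min()!,
                  height: ys.max()! - ys.min()!)
}

The resulting rectangle is in view points; converting it into snapshot or capturedImage pixel coordinates still needs the scale and orientation handling discussed in the links above.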
