projectPoint for getting 2D image coordinates in ARKit (iOS)

I am trying to get 2D image coordinates from 3D vertices in ARKit/SceneKit. I have read the projectPoint documentation and have the following code using projectPoint and CGPoint:
var currentFaceAnchor: ARFaceAnchor?

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard anchor == currentFaceAnchor,
          let contentNode = selectedContentController.contentNode,
          contentNode.parent == node
    else { return }
    let vertices = currentFaceAnchor!.geometry.vertices
    for vertex in vertices {
        // Convert the vertex from the face node's local space to world space, then project to 2D
        let projectedPoint = sceneView.projectPoint(node.convertPosition(SCNVector3(vertex), to: nil))
        let xVertex = CGFloat(projectedPoint.x)
        let yVertex = CGFloat(projectedPoint.y)
        if beginSaving {
            savedPositions.append(CGPoint(x: xVertex, y: yVertex))
        }
    }
}
I obtained some points: (216.42970275878906, 760.9256591796875),(208.1529541015625, 761.989501953125),(206.86944580078125, 750.0242919921875)...
I could not find much information about 2D image coordinates, and I don't fully understand this. I am wondering whether these points are the actual 2D points obtained from the 3D coordinates in var vertices: [SIMD3<Float>], and how one could verify this. Any suggestions and help are appreciated!
Thanks,
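One way to sanity-check the values (a rough sketch, not from the original question; it assumes the ARSCNView outlet is named sceneView and that the markers are added on the main thread, since the renderer callback runs off the main queue) is to overlay small dots at the projected points and see whether they track the face in the live view. Note that projectPoint reports coordinates in the renderer's viewport, not pixel coordinates of the captured camera image; mapping into the captured image would additionally involve ARFrame's displayTransform(for:viewportSize:).

// Hypothetical debug helper, assuming `sceneView` is the ARSCNView.
// Call from the main thread with the points collected above.
func drawDebugMarkers(at points: [CGPoint]) {
    // Remove markers from the previous frame (identified by a tag)
    sceneView.subviews.filter { $0.tag == 999 }.forEach { $0.removeFromSuperview() }
    for point in points {
        let dot = UIView(frame: CGRect(x: point.x - 2, y: point.y - 2, width: 4, height: 4))
        dot.tag = 999
        dot.layer.cornerRadius = 2
        dot.backgroundColor = .green
        sceneView.addSubview(dot)
    }
}

// Usage from the delegate callback:
// DispatchQueue.main.async { self.drawDebugMarkers(at: self.savedPositions) }

If the dots hug the face in the live view, the projected coordinates are correct for the view.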

Related

ARKit + SceneKit: Using reconstructed scene for physics?

I'm using ARKit with SceneKit and would like to let my 3D objects physically interact with the reconstructed scene created by devices with LiDAR sensors (config.sceneReconstruction = .mesh). For example, having a virtual ball bounce off the geometry of the reconstructed scene.
In RealityKit, this seems to be possible using sceneUnderstanding:
arView.environment.sceneUnderstanding.options.insert(.physics)
How can I achieve the same thing when using SceneKit?
As far as I know, there is no built-in support for this in SceneKit. However, you can fairly easily put together a custom solution using the ARMeshAnchor instances created by ARKit.
First, configure ARKit to enable scene reconstruction:
let config = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    config.sceneReconstruction = .mesh
} else {
    // Handle devices that don't support scene reconstruction
}

// ... and enable physics visualization for debugging
sceneView.debugOptions = [.showPhysicsShapes]
Then, in your ARSCNViewDelegate, use renderer(_:nodeFor:) to create a SceneKit node for newly created ARMeshAnchor instances:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let meshAnchor = anchor as? ARMeshAnchor else {
        return nil
    }
    let geometry = createGeometryFromAnchor(meshAnchor: meshAnchor)
    // Optionally hide the node from rendering as well
    geometry.firstMaterial?.colorBufferWriteMask = []
    let node = SCNNode(geometry: geometry)
    // Make sure physics apply to the node.
    // You must use concavePolyhedron here!
    node.physicsBody = SCNPhysicsBody(
        type: .static,
        shape: SCNPhysicsShape(geometry: geometry, options: [.type: SCNPhysicsShape.ShapeType.concavePolyhedron])
    )
    return node
}
// Taken from https://developer.apple.com/forums/thread/130599
func createGeometryFromAnchor(meshAnchor: ARMeshAnchor) -> SCNGeometry {
    let meshGeometry = meshAnchor.geometry
    let vertices = meshGeometry.vertices
    let normals = meshGeometry.normals
    let faces = meshGeometry.faces

    // Use the MTL buffers that ARKit gives us
    let vertexSource = SCNGeometrySource(buffer: vertices.buffer, vertexFormat: vertices.format, semantic: .vertex, vertexCount: vertices.count, dataOffset: vertices.offset, dataStride: vertices.stride)
    let normalsSource = SCNGeometrySource(buffer: normals.buffer, vertexFormat: normals.format, semantic: .normal, vertexCount: normals.count, dataOffset: normals.offset, dataStride: normals.stride)

    // Copy the face index bytes, as we may use them later
    let faceData = Data(bytes: faces.buffer.contents(), count: faces.buffer.length)

    // Create the geometry element
    let geometryElement = SCNGeometryElement(data: faceData, primitiveType: toSCNGeometryPrimitiveType(faces.primitiveType), primitiveCount: faces.count, bytesPerIndex: faces.bytesPerIndex)

    return SCNGeometry(sources: [vertexSource, normalsSource], elements: [geometryElement])
}
func toSCNGeometryPrimitiveType(_ ar: ARGeometryPrimitiveType) -> SCNGeometryPrimitiveType {
    switch ar {
    case .triangle: return .triangles
    default: fatalError("Unknown type")
    }
}
Finally, update the scene nodes whenever the reconstructed geometry changes, in the ARSCNViewDelegate's renderer(_:didUpdate:for:) method:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let meshAnchor = anchor as? ARMeshAnchor else {
        return
    }
    // Rebuild the geometry from the updated anchor using the helper above
    let geometry = createGeometryFromAnchor(meshAnchor: meshAnchor)
    geometry.firstMaterial?.colorBufferWriteMask = []
    node.geometry = geometry
    node.physicsBody!.physicsShape = SCNPhysicsShape(geometry: geometry, options: [.type: SCNPhysicsShape.ShapeType.concavePolyhedron])
}
Any physics objects you create in SceneKit should now be able to interact with the reconstructed scene.
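For example, here is a minimal sketch (assuming the mesh nodes above are already in the scene and that sceneView is the ARSCNView) that drops a bouncing ball onto the reconstructed mesh:

// Sketch: drop a dynamic sphere and let it land on the reconstructed mesh
func dropBall(at worldPosition: SCNVector3) {
    let ball = SCNNode(geometry: SCNSphere(radius: 0.05))
    ball.position = worldPosition
    // Passing shape: nil lets SceneKit infer a shape from the sphere geometry
    ball.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
    ball.physicsBody?.restitution = 0.6 // make it bounce a little
    sceneView.scene.rootNode.addChildNode(ball)
}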

How to get image data based on ARFaceAnchor or node position?

I'm trying to create a 2D face unwrap image from a live camera feed from an ARKit session.
I see there is another question about mapping an image onto a mesh. My question is different - it is about generating an image (or many smaller images) from the mesh and camera feed.
The session detects the user's face and adds an ARFaceAnchor to the renderer.
The anchor has a geometry object defining a mesh.
The mesh has a large number of vertices.
For each update, there is a corresponding camera image.
The camera image has a pixel buffer with image data.
How do I retrieve image data around the face anchor vertices, to "stitch" together a face unwrap from the corresponding camera frames?
var session: ARSession = self.sceneView.session

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    if let faceAnchor = anchor as? ARFaceAnchor,
       let faceGeometry: ARSCNFaceGeometry = faceGeometry {
        if let pixelBuffer: CVPixelBuffer = session.currentFrame?.capturedImage,
           let anchors: [ARAnchor] = session.currentFrame?.anchors {
            print("how to get pixel buffer bytes around any given anchor?")
        }
        // The per-vertex data lives on the anchor's ARFaceGeometry, not on ARSCNFaceGeometry
        for index in 0 ..< faceAnchor.geometry.vertices.count {
            let point = faceAnchor.geometry.vertices[index]
            let position = SCNVector3(point.x, point.y, point.z)
            print("How to use this position to retrieve image data from pixel buffer around this position?")
        }
    }
}
Included is a sample of where the face geometry vertices are positioned
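One possible approach (a sketch under assumptions, not a confirmed solution): convert each vertex from the face anchor's space to world space, then project it into the captured image's pixel coordinate system with ARCamera's projectPoint(_:orientation:viewportSize:), passing the pixel buffer's dimensions as the viewport size. The captured image is delivered in the sensor's landscape-right orientation, so that orientation is used here. A patch of pixels around the returned point can then be cropped and stitched.

// Sketch: map a face vertex to pixel coordinates in the captured camera image.
// Assumes `frame` is session.currentFrame and `node` is the face anchor's node.
func imagePoint(for vertex: simd_float3, node: SCNNode, frame: ARFrame) -> CGPoint {
    // Vertex (anchor-local space) -> world space
    let worldPosition = node.simdConvertPosition(vertex, to: nil)
    // Captured image size in pixels
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))
    // Project into the captured image; ARKit's camera image is landscape-right
    return frame.camera.projectPoint(worldPosition,
                                     orientation: .landscapeRight,
                                     viewportSize: imageSize)
}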

Swift: Displaying vertex labels properly on an AR face mesh [duplicate]

I am trying to generate the face mesh from the AR face tutorial with proper vertex labels using SCNText. I followed the online tutorial and have the following:
extension EmojiBlingViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else {
            return
        }
        faceGeometry.update(from: faceAnchor.geometry)
    }

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard let device = sceneView.device else {
            return nil
        }
        let faceGeometry = ARSCNFaceGeometry(device: device)
        let node = SCNNode(geometry: faceGeometry)
        for x in [1076, 1070, 1163] {
            let text = SCNText(string: "\(x)", extrusionDepth: 1)
            let txtnode = SCNNode(geometry: text)
            txtnode.scale = SCNVector3(x: 0.001, y: 0.001, z: 0.001)
            txtnode.name = "\(x)"
            node.addChildNode(txtnode)
            txtnode.geometry?.firstMaterial?.fillMode = .fill
        }
        node.geometry?.firstMaterial?.fillMode = .lines
        return node
    }
}
However, those vertex labels (1076, 1070, 1163) do not show properly: they all overlap in the centre.
So my question is: does anyone know how to show the labels for x in [1076, 1070, 1163] in the correct locations on the mesh?
Thanks!
Update: I see the numbers overlapping, and they move along with the front camera.
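A likely cause (an assumption, not a confirmed answer) is that the text nodes are never given a position, so they all sit at the face node's origin. A sketch of one fix: in the didUpdate callback from the question, look up each label by the name assigned in nodeFor and move it to the corresponding vertex of the anchor's geometry:

// Sketch: place each label at its vertex whenever the face geometry updates
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor,
          let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
    faceGeometry.update(from: faceAnchor.geometry)

    let vertices = faceAnchor.geometry.vertices
    for index in [1076, 1070, 1163] where index < vertices.count {
        // The text node was named "\(index)" when it was created in nodeFor
        node.childNode(withName: "\(index)", recursively: false)?
            .position = SCNVector3(vertices[index])
    }
}

Since didUpdate fires on every frame, the labels should then follow the mesh as the face moves.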

Is it possible to recognise a virtual image instead of a real image using ARKit? (iOS 12)

I have built a few AR image tracking apps and AR world tracking apps.
AR image tracking works by recognising images on the physical map captured by the camera.
Is there any way to make AR image tracking recognise a virtual "image", which is basically the material of an SCNPlane?
I would appreciate it if anyone could point me in the right direction or offer advice.
(Note: for this project, I use detection images on ARWorldTrackingConfiguration.)
I think it is probably possible: add the content image (the one you want to detect, i.e. in your map) to your Assets.xcassets, and then use code like the following to detect the virtual image:
// Put your image name in childNode(withName:)
lazy var mapNode: SCNNode = {
    // childNode(withName:recursively:) returns an optional, so unwrap it here
    guard let node = scene.rootNode.childNode(withName: "map", recursively: false) else {
        fatalError("Node named \"map\" not found in the scene")
    }
    return node
}()
// Now, when detecting images
extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        DispatchQueue.main.async {
            guard let imageAnchor = anchor as? ARImageAnchor,
                  let imageName = imageAnchor.referenceImage.name else { return }

            // TODO: Comment out code
            // let planeNode = self.getPlaneNode(withReferenceImage: imageAnchor.referenceImage)
            // planeNode.opacity = 0.0
            // planeNode.eulerAngles.x = -.pi / 2
            // planeNode.runAction(self.fadeAction)
            // node.addChildNode(planeNode)

            // TODO: Overlay 3D Object
            let overlayNode = self.getNode(withImageName: imageName)
            overlayNode.opacity = 0
            overlayNode.position.y = 0.2
            overlayNode.runAction(self.fadeAndSpinAction)
            node.addChildNode(overlayNode)

            self.label.text = "Image detected: \"\(imageName)\""
        }
    }

    func getPlaneNode(withReferenceImage image: ARReferenceImage) -> SCNNode {
        let plane = SCNPlane(width: image.physicalSize.width,
                             height: image.physicalSize.height)
        let node = SCNNode(geometry: plane)
        return node
    }

    func getNode(withImageName name: String) -> SCNNode {
        var node = SCNNode()
        switch name {
        case "map":
            node = mapNode
        default:
            break
        }
        return node
    }
}
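As a side note beyond the original answer (an assumption, with a hypothetical physicalWidth value): if the image you want to detect only exists as an in-app asset, for example the texture you assign to the SCNPlane, you can also register it for detection in code instead of through an AR resource group:

// Sketch: build a detection image from a bundled UIImage (hypothetical asset name "map")
guard let cgImage = UIImage(named: "map")?.cgImage else { fatalError("Missing image asset") }
let referenceImage = ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.3) // width in metres, assumed
referenceImage.name = "map"

let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = [referenceImage]
sceneView.session.run(configuration)

Keep in mind that ARKit still detects the image in the camera feed, so the printed or displayed copy in the real world must match the registered image.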

Bad scaling when rendering with ARKit

I'm working on perspective tests with ARKit and SceneKit. The idea is to improve 3D rendering when displaying a flat 3D model on the ground. I had already opened a ticket for another perspective problem, which is almost solved (ARKit Perspective Rendering).
However, after a multitude of tests and 3D displays, I noticed that sometimes when I anchor a 3D model, its size (width and length) can differ.
I usually display a 3D model that is 16 meters long and 1.5 meters wide, so you can well imagine that this distorts my rendering.
I don't know why the displayed 3D model's size can differ.
Maybe it comes from the tracking and my test environment.
Below is the code I use to add my 3D model to the scene:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    let imageAnchorPosition = imageAnchor.transform.columns.3
    print("Image detected")

    let modelName = "couloirV2"
    //let modelName = "lamp"
    guard let object = VirtualObject
        .availableObjects
        .filter({ $0.modelName == modelName })
        .first else { fatalError("Cannot get model \(modelName)") }
    print("Loading \(object)...")

    self.sceneView.prepare([object], completionHandler: { _ in
        self.updateQueue.async {
            // Translate the object's position to the reference node position.
            object.position.x = imageAnchorPosition.x
            object.position.y = imageAnchorPosition.y
            object.position.z = imageAnchorPosition.z

            // Save the initial y value for slider handler function
            self.tmpYPosition = object.position.y

            // Match y node's orientation
            object.orientation.y = node.orientation.y

            print("Adding object to the scene")

            // Prepare the object
            object.load()

            // Show origin axis
            object.showObjectOrigin()

            // Translate on z axis to match perfectly with image detected.
            var translation = matrix_identity_float4x4
            translation.columns.3.z += Float(referenceImage.physicalSize.height / 2)
            object.simdTransform = matrix_multiply(object.simdTransform, translation)

            self.sceneView.scene.rootNode.addChildNode(object)
            self.virtualObjectInteraction.selectedObject = object
            self.sceneView.addOrUpdateAnchor(for: object)
        }
    })
}
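One thing that might be worth checking (a diagnostic sketch and an assumption about the cause, requiring iOS 13+): whether ARKit's estimate of the detected image's real-world size matches the physicalSize declared for the reference image, since a mismatch would scale everything anchored to that image, including the model.

// Sketch: let ARKit estimate the detected image's real-world size
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = referenceImages   // your existing set of ARReferenceImage (assumed)
configuration.automaticImageScaleEstimationEnabled = true
sceneView.session.run(configuration)

// Then, inside renderer(_:didAdd:for:):
// A factor noticeably different from 1.0 means the physical print does not match
// the declared physicalSize, which would also scale the anchored model.
print("Estimated scale factor: \(imageAnchor.estimatedScaleFactor)")

Logging this across several runs could show whether the size drift correlates with the tracking conditions of the test environment.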
