In ARKit 2.0, I am trying to use a PBR-based material on a 3D model in .obj file format.
Problem: the product renders too dark and I can barely see it. I am not sure if this is related to setting up proper lighting in SceneKit. Please help me understand how to set up PBR-based lighting in ARKit/SceneKit.
I tried the code below:
//Set lighting
let light = SCNLight()
light.type = .ambient
node.light = light

// if light estimation is enabled, update the intensity
// of the model's lights and the environment map
if let lightEstimate = self.session.currentFrame?.lightEstimate {
    self.enableEnvironmentMapWithIntensity(lightEstimate.ambientIntensity / 1000.0)
} else {
    self.enableEnvironmentMapWithIntensity(6)
}

// Call environment Map
func enableEnvironmentMapWithIntensity(_ intensity: CGFloat) {
    if sceneView.scene.lightingEnvironment.contents == nil {
        if let environmentMap = UIImage(named: "Models.scnassets/sharedImages/environment_blur.exr") {
            sceneView.scene.lightingEnvironment.contents = environmentMap
        }
    }
    sceneView.scene.lightingEnvironment.intensity = intensity
}
Required result: http://prntscr.com/luwlax
For reference, here are the PBR material textures:
Diffuse : https://prnt.sc/luwc40
Roughness : https://prnt.sc/luwcl7
Normal : https://prnt.sc/luwcuw
Metalness : https://prnt.sc/luwdaz
If you have separated geometry (e.g. the screen as a separate piece from the frame), it's easy to assign a texture to each of these parts.
If you have combined geometry (all parts as one piece), the only way to texture such a model is to assign a pre-made UV-mapped texture.
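As a starting point, here is a minimal sketch of how a physically based material and an image-based lighting environment are usually wired up in SceneKit. The texture file names and the environment image below are placeholders for your own assets; node and sceneView are taken from your code:

    // Sketch: assign PBR textures to the model's material.
    // The asset names are placeholders – replace them with your own files.
    let material = SCNMaterial()
    material.lightingModel = .physicallyBased
    material.diffuse.contents   = UIImage(named: "diffuse.png")
    material.roughness.contents = UIImage(named: "roughness.png")
    material.normal.contents    = UIImage(named: "normal.png")
    material.metalness.contents = UIImage(named: "metalness.png")
    node.geometry?.materials = [material]

    // PBR materials need an environment map to look right;
    // without one, metallic surfaces render almost black.
    sceneView.scene.lightingEnvironment.contents = UIImage(named: "environment_blur.exr")
    sceneView.scene.lightingEnvironment.intensity = 2.0

    // Optionally let ARKit capture an environment texture for you (ARKit 2.0+).
    let configuration = ARWorldTrackingConfiguration()
    configuration.environmentTexturing = .automatic
    sceneView.session.run(configuration)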
I am using ARKit's ARFaceTrackingConfiguration to track the facial blend shapes along with the left and right eye transforms. I am exporting this data to JSON and applying it to a 3D model (which has preconfigured shape keys and eye nodes). I was able to apply the blend shape data, but I got stuck on how to apply the eye rotations. I am getting leftEyeTransform and rightEyeTransform, which are simd_float4x4, from ARFaceAnchor.
How do I apply the rotation to the eye nodes from these transform values? I believe that for the eyes it is enough to apply the rotation.
I have tried the following to get the orientation from the eye transforms:
Method 1:
let faceNode = SCNNode()
faceNode.simdTransform = eyeTransform
let vector = faceNode.eulerAngles
eyeLeftNode.eulerAngles = vector
Method 2:
let faceNode = SCNNode()
faceNode.simdTransform = eyeTransform
let rotation = vector_float3(faceNode.orientation.x, faceNode.orientation.y, faceNode.orientation.z)
let yaw = (rotation.y)
let pitch = (rotation.x)
let roll = (rotation.z)
let vector = SCNVector3(pitch, yaw, roll)
eyeLeftNode.eulerAngles = vector
Method 3:
let orientation = simd_quaternion(eyeTransform)
let vector = SCNVector3(orientation.axis.x, orientation.axis.y, orientation.axis.z)
eyeLeftNode.eulerAngles = vector
None of these approaches works. I am not able to figure out how to rotate the eyeballs correctly. Can you please tell me how to do this?
I use the following extension in my apps to get the translation and orientation components of a simd_float4x4, if that's all you need:
extension float4x4 {
    var translation: SIMD3<Float> {
        let translation = columns.3
        return SIMD3<Float>(translation.x, translation.y, translation.z)
    }

    /** Factors out the orientation component of the transform. */
    var orientation: simd_quatf {
        return simd_quaternion(self)
    }
}
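With that extension, a minimal sketch of driving the eye nodes could look like this. eyeLeftNode and eyeRightNode are assumed to be your model's eye-bone nodes, with the same axis conventions as ARKit's eye coordinate systems:

    // Sketch: call this whenever the face anchor updates,
    // e.g. from renderer(_:didUpdate:for:) in ARSCNViewDelegate.
    func updateEyes(from faceAnchor: ARFaceAnchor) {
        // Apply only the rotational part of each eye transform.
        eyeLeftNode.simdOrientation  = faceAnchor.leftEyeTransform.orientation
        eyeRightNode.simdOrientation = faceAnchor.rightEyeTransform.orientation
    }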
What am I looking for?
A simple explanation of my requirement is this:
Using ARKit, detect an object using the iPhone camera
Find the position of this object in virtual space
Place a 3D object in this virtual space using SceneKit. The 3D object should be behind the marker.
An example would be to detect a small image/marker position in 3D space using the camera and place a 3D ball model behind this marker in virtual space (so the ball is hidden from the user because the marker/image is in front).
What am I able to do so far?
I am able to detect a marker/image using ARKit
I am able to position a ball 3D model on the screen.
What is my problem?
I am unable to position the ball in such a way that ball is behind the marker that is detected.
When the ball is in front of the marker, the ball correctly hides the marker. You can see in the side view that the ball is in front of the marker. See below.
But when the ball is behind the marker, the opposite doesn't happen. The ball always appears in front, blocking the marker. I expected the marker to hide the ball, so the scene is not respecting the z depth of the ball's position. See below.
Code
Please look into the comments as well
override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self
    sceneView.autoenablesDefaultLighting = true

    //This loads my 3d model.
    let ballScene = SCNScene(named: "art.scnassets/ball.scn")
    ballNode = ballScene?.rootNode

    //The model I have is too big. Scaling it here.
    ballNode?.scale = SCNVector3Make(0.1, 0.1, 0.1)
}
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    //I am trying to detect a marker/image. So ImageTracking configuration is enough
    let configuration = ARImageTrackingConfiguration()

    //Load the image/marker and set it as tracking image
    //There is only one image in this set
    if let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "Markers",
                                                             bundle: Bundle.main) {
        configuration.trackingImages = trackingImages
        configuration.maximumNumberOfTrackedImages = 1
    }
    sceneView.session.run(configuration)
}
override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    sceneView.session.pause()
}
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    if anchor is ARImageAnchor {
        //my image is detected
        if let ballNode = self.ballNode {
            //for some reason changing the y position translates the ball in the z direction
            //Positive y value moves it towards the screen (in front of the marker)
            ballNode.position = SCNVector3(0.0, 0.02, 0.0)
            //Negative y value moves it away from the screen (behind the marker)
            ballNode.position = SCNVector3(0.0, -0.02, 0.0)
            node.addChildNode(ballNode)
        }
    }
    return node
}
How do I make the scene respect the z position? Or, in other words, how do I show a 3D model behind an image/marker that has been detected using the ARKit framework?
I am running against iOS 12, using Xcode 10.3. Let me know if any other information is needed.
To achieve that you need to create an occluder in the 3D scene. Since an ARReferenceImage has a physicalSize, it should be straightforward to add a geometry to the scene when the ARImageAnchor is created.
The geometry would be an SCNPlane with an SCNMaterial appropriate for an occluder. I would opt for the SCNLightingModelConstant lighting model (it's the cheapest and we won't actually draw the plane) with a colorBufferWriteMask equal to SCNColorMaskNone. The object will be transparent but will still write to the depth buffer (that's how it acts as an occluder).
Finally, make sure that the occluder is rendered before any augmented object by setting its renderingOrder to -1 (or an even lower value if the app already uses rendering orders).
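A minimal sketch of that idea, reusing the delegate method from the question (the occluder plane is sized from the reference image's physicalSize; ballNode is the question's ball model):

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard let imageAnchor = anchor as? ARImageAnchor else { return nil }
        let node = SCNNode()

        // Invisible occluder plane matching the detected image's physical size.
        let size = imageAnchor.referenceImage.physicalSize
        let occluderPlane = SCNPlane(width: size.width, height: size.height)

        let occluderMaterial = SCNMaterial()
        occluderMaterial.lightingModel = .constant
        occluderMaterial.colorBufferWriteMask = []   // draws nothing, but still writes depth
        occluderPlane.materials = [occluderMaterial]

        let occluderNode = SCNNode(geometry: occluderPlane)
        occluderNode.eulerAngles.x = -.pi / 2        // SCNPlane is vertical; lay it onto the image anchor
        occluderNode.renderingOrder = -1             // render before the augmented content
        node.addChildNode(occluderNode)

        // The ball behind the marker: negative y is away from the camera side of the image,
        // as observed in the question.
        if let ballNode = self.ballNode {
            ballNode.position = SCNVector3(0.0, -0.02, 0.0)
            node.addChildNode(ballNode)
        }
        return node
    }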
In ARKit 3.0, Apple engineers implemented a ZDepth compositing technique called People Occlusion. This feature is available only on devices with A12 and A13 chips because it's highly processor-intensive. At the moment the ARKit ZDepth compositing feature is in its infancy, so it only lets you composite people (or people-like objects) over and under the background, not any other object seen via the rear camera. And, I think, you know about the front TrueDepth camera – it's for face tracking and it has an additional IR sensor for that task.
To turn the ZDepth compositing feature on, use these instance properties in ARKit 3.0:
var frameSemantics: ARConfiguration.FrameSemantics { get set }
static var personSegmentationWithDepth: ARConfiguration.FrameSemantics { get }
Real code should look like this:
if let config = mySession.configuration as? ARWorldTrackingConfiguration {
    config.frameSemantics.insert(.personSegmentationWithDepth)
    mySession.run(config)
}
After the alpha channel's segmentation, the formula for each channel's computation looks like this:
r = Az > Bz ? Ar : Br
g = Az > Bz ? Ag : Bg
b = Az > Bz ? Ab : Bb
a = Az > Bz ? Aa : Ba
where Az is the ZDepth channel of the foreground image (3D model),
Bz is the ZDepth channel of the background image (2D video),
Ar, Ag, Ab, Aa – the Red, Green, Blue and Alpha channels of the 3D model,
Br, Bg, Bb, Ba – the Red, Green, Blue and Alpha channels of the 2D video.
But in earlier versions of ARKit there is no ZDepth compositing feature, so you can composite a 3D model over the 2D background video only using the standard 4-channel OVER compositing operation:
(Argb * Aa) + (Brgb * (1 - Aa))
where Argb is the RGB channels of the foreground image A (3D model),
Aa is the Alpha channel of the foreground image A (3D model),
Brgb is the RGB channels of the background image B (2D video),
(1 - Aa) is the inversion of the foreground Alpha channel.
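Purely for illustration, here is how those two per-pixel rules would read in Swift. The Pixel type is hypothetical, just to make the formulas concrete; it is not part of any ARKit API:

    // Hypothetical per-pixel type, only to illustrate the two compositing rules above.
    struct Pixel { var r, g, b, a: Float }

    // ZDepth-based compositing, as in the formula above: pick A's channels when Az > Bz.
    func compositeByDepth(_ A: Pixel, _ Az: Float, _ B: Pixel, _ Bz: Float) -> Pixel {
        return Az > Bz ? A : B
    }

    // Standard OVER operator: foreground A over background B.
    func over(_ A: Pixel, _ B: Pixel) -> Pixel {
        return Pixel(r: A.r * A.a + B.r * (1 - A.a),
                     g: A.g * A.a + B.g * (1 - A.a),
                     b: A.b * A.a + B.b * (1 - A.a),
                     a: A.a + B.a * (1 - A.a))
    }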
As a result, without the personSegmentationWithDepth property your 3D model will always be composited OVER the 2D video.
Thus, if an object in the video doesn't look like a human hand or a human body, you can't place that object from the 2D video over the 3D model using regular ARKit tools.
.....
Nonetheless, you can do it using the Metal and AVFoundation frameworks. Be aware that it's not easy.
To extract ZDepth data from the video stream you need the following instance property:
// Works from iOS 11
var capturedDepthData: AVDepthData? { get }
Or you may use these two instance methods (remember that the ZDepth channel must be 32-bit):
// Works from iOS 13
func generateDilatedDepth(from frame: ARFrame,
                          commandBuffer: MTLCommandBuffer) -> MTLTexture

func generateMatte(from frame: ARFrame,
                   commandBuffer: MTLCommandBuffer) -> MTLTexture
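These two methods live on ARMatteGenerator. A minimal sketch of calling them from your own Metal renderer might look like this; the device and command queue are assumed to come from your existing Metal setup:

    import ARKit
    import Metal

    // Sketch only: assumes you already drive a Metal renderer with its own device and command queue.
    final class MatteProvider {
        private let matteGenerator: ARMatteGenerator
        private let commandQueue: MTLCommandQueue

        init(device: MTLDevice, commandQueue: MTLCommandQueue) {
            self.matteGenerator = ARMatteGenerator(device: device, matteResolution: .half)
            self.commandQueue = commandQueue
        }

        // Returns the person-segmentation matte and the dilated person depth for the given frame.
        func mattes(for frame: ARFrame) -> (matte: MTLTexture, dilatedDepth: MTLTexture)? {
            guard let commandBuffer = commandQueue.makeCommandBuffer() else { return nil }
            let matte = matteGenerator.generateMatte(from: frame, commandBuffer: commandBuffer)
            let dilatedDepth = matteGenerator.generateDilatedDepth(from: frame, commandBuffer: commandBuffer)
            // Bind both textures in your compositing shader, then commit.
            commandBuffer.commit()
            return (matte, dilatedDepth)
        }
    }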
Please read this SO post if you want to know how to do it using Metal.
For additional information, please read this SO post.
Using SceneKit, I'm loading a very simple .dae file consisting of a large cylinder with three associated bones. I want to scale the cylinder down and position it on the ground. Here's the code:
public class MyNode: SCNNode {
    public convenience init() {
        self.init()
        let scene = SCNScene(named: "test.dae")
        let cylinder = (scene?.rootNode.childNode(withName: "Cylinder", recursively: true))!
        let scale: Float = 0.1
        cylinder.scale = SCNVector3Make(scale, scale, scale)
        cylinder.position = SCNVector3(0, scale, 0)
        self.addChildNode(cylinder)
    }
}
This doesn't work; the cylinder is still huge when I view it. The only way I can get the code to work is to remove the associated SCNSkinner:
cylinder.skinner = nil
Why does this happen and how can I properly scale and position the model, bones and all?
When a geometry is skinned it is driven by its skeleton, which means that the transform of the skinned node is no longer used; it's the transforms of the bones that matter.
For this file, Armature is the root of the skeleton. If you translate/scale this node instead of Cylinder you'll get what you want.
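For example, a quick sketch, assuming the skeleton root node in test.dae is literally named "Armature":

    // Sketch: scale/position the skeleton root instead of the skinned mesh.
    let scene = SCNScene(named: "test.dae")
    if let armature = scene?.rootNode.childNode(withName: "Armature", recursively: true) {
        let scale: Float = 0.1
        armature.scale = SCNVector3Make(scale, scale, scale)
        armature.position = SCNVector3(0, scale, 0)
    }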
How can I use the horizontal and vertical planes tracked by ARKit to hide objects behind walls or behind real objects? Currently the added 3D objects can be seen through walls when you leave a room, and/or they appear in front of objects they should be behind. So is it possible to use the data ARKit gives me to provide a more natural AR experience where objects don't show through walls?
You have two issues here.
(And you didn't even use regular expressions!)
How to create occlusion geometry for ARKit/SceneKit?
If you set a SceneKit material's colorBufferWriteMask to an empty value ([] in Swift), any objects using that material won't appear in the view, but they'll still write to the z-buffer during rendering, which affects the rendering of other objects. In effect, you'll get a "hole" shaped like your object, through which the background shows (the camera feed, in the case of ARSCNView), but which can still obscure other SceneKit objects.
You'll also need to make sure that the occluder renders before any other nodes it's supposed to obscure. You can do this using the node hierarchy (I can't remember offhand whether parent nodes render before their children or the other way around, but it's easy enough to test). Nodes that are peers in the hierarchy don't have a deterministic order, but you can force an order regardless of hierarchy with the renderingOrder property. That property defaults to zero, so setting it to -1 will render before everything. (Or for finer control, set the renderingOrder values for several nodes to a sequence of values.)
How to detect walls/etc so you know where to put occlusion geometry?
In iOS 11.3 and later (aka "ARKit 1.5"), you can turn on vertical plane detection. (Note that when you get vertical plane anchors back from that, they're automatically rotated. So if you attach models to the anchor, their local "up" direction is normal to the plane.) Also new in iOS 11.3, you can get a more detailed shape estimate for each detected plane (see ARSCNPlaneGeometry), regardless of its orientation.
However, even if you have the horizontal and the vertical, the outer limits of a plane are just estimates that change over time. That is, ARKit can quickly detect where part of a wall is, but it doesn't know where the edges of the wall are without the user spending some time waving the device around to map out the space. And even then, the mapped edges might not line up precisely with those of the real wall.
So... if you use detected vertical planes to occlude virtual geometry, you might find places where virtual objects that are supposed to be hidden still show through, either because they aren't quite hidden right at the edge of the wall, or because they're visible through places where ARKit hasn't mapped the entire real wall. (The latter issue you might be able to solve by assuming a larger extent than ARKit does.)
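Putting the two parts together, a minimal sketch of turning detected planes into occluders in an ARSCNViewDelegate might look like this; the plane is sized from the anchor's extent and kept up to date as ARKit refines it:

    // Sketch: turn every detected plane into an invisible occluder.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

        let occluderGeometry = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                                        height: CGFloat(planeAnchor.extent.z))
        let material = SCNMaterial()
        material.colorBufferWriteMask = []          // draw nothing, but still write depth
        occluderGeometry.materials = [material]

        let occluderNode = SCNNode(geometry: occluderGeometry)
        occluderNode.eulerAngles.x = -.pi / 2       // lay the plane into the anchor's x/z plane
        occluderNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
        occluderNode.renderingOrder = -1            // render before the virtual content it should hide
        node.addChildNode(occluderNode)
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        // Assumes the occluder is the anchor node's only child, as set up above.
        guard let planeAnchor = anchor as? ARPlaneAnchor,
              let occluderNode = node.childNodes.first,
              let plane = occluderNode.geometry as? SCNPlane else { return }
        plane.width = CGFloat(planeAnchor.extent.x)
        plane.height = CGFloat(planeAnchor.extent.z)
        occluderNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
    }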
For creating an occlusion material (also known as blackhole material or blocking material) you have to use the following instance properties: .colorBufferWriteMask, .readsFromDepthBuffer, .writesToDepthBuffer and .renderingOrder.
You can use them this way:
plane.geometry?.firstMaterial?.isDoubleSided = true
plane.geometry?.firstMaterial?.colorBufferWriteMask = .alpha
plane.geometry?.firstMaterial?.writesToDepthBuffer = true
plane.geometry?.firstMaterial?.readsFromDepthBuffer = true
plane.renderingOrder = -100
...or this way:
func occlusion() -> SCNMaterial {
    let occlusionMaterial = SCNMaterial()
    occlusionMaterial.isDoubleSided = true
    occlusionMaterial.colorBufferWriteMask = []
    occlusionMaterial.readsFromDepthBuffer = true
    occlusionMaterial.writesToDepthBuffer = true
    return occlusionMaterial
}

plane.geometry?.firstMaterial = occlusion()
plane.renderingOrder = -100
Creating an occlusion material is really simple:
let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
// Define an occlusion material
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []
boxGeometry.materials = [occlusionMaterial]
self.box = SCNNode(geometry: boxGeometry)
// Set rendering order to present this box in front of the other models
self.box.renderingOrder = -1
Great solution:
GitHub: arkit-occlusion
Worked for me.
But in my case I wanted to set up the walls in code. So if you don't want the walls to be set by the user, use plane detection to detect the walls and set them up in code.
Alternatively, within a range of about 4 meters the iPhone's depth sensing works and you can detect obstacles with an ARHitTest.
ARKit 6.0 and LiDAR scanner
You can hide any object behind a virtual invisible wall that replicates the real wall's geometry. iPhones and iPad Pros equipped with a LiDAR scanner help us reconstruct a 3D topological map of the surrounding environment. The LiDAR scanner greatly improves the quality of the Z channel, which allows you to occlude or remove humans from the AR scene.
LiDAR also improves features such as Object Occlusion, Motion Tracking and Raycasting. With a LiDAR scanner you can reconstruct a scene even in an unlit environment or in a room with featureless white walls. 3D reconstruction of the surrounding environment has been possible since ARKit 3.5 thanks to the sceneReconstruction instance property. With a reconstructed mesh of your walls it's now super easy to hide any object behind the real walls.
To activate the sceneReconstruction option, use the following code:
@IBOutlet var arView: ARView!

arView.automaticallyConfigureSession = false

guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh)
else { return }

let config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .mesh

arView.debugOptions.insert([.showSceneUnderstanding])
arView.environment.sceneUnderstanding.options.insert([.occlusion])
arView.session.run(config)
Also if you're using SceneKit try the following approach:
@IBOutlet var sceneView: ARSCNView!
func renderer(_ renderer: SCNSceneRenderer,
              nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let meshAnchor = anchor as? ARMeshAnchor
    else { return nil }

    let geometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    geometry.firstMaterial?.diffuse.contents =
                            colorizer.assignColor(to: meshAnchor.identifier)
    let node = SCNNode()
    node.name = "Node_\(meshAnchor.identifier)"
    node.geometry = geometry
    return node
}
func renderer(_ renderer: SCNSceneRenderer,
              didUpdate node: SCNNode,
              for anchor: ARAnchor) {
    guard let meshAnchor = anchor as? ARMeshAnchor
    else { return }

    let newGeometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    newGeometry.firstMaterial?.diffuse.contents =
                            colorizer.assignColor(to: meshAnchor.identifier)
    node.geometry = newGeometry
}
And here are SCNGeometry and SCNGeometrySource extensions:
extension SCNGeometry {
    convenience init(arGeometry: ARMeshGeometry) {
        let verticesSource = SCNGeometrySource(arGeometry.vertices, semantic: .vertex)
        let normalsSource = SCNGeometrySource(arGeometry.normals, semantic: .normal)
        let faces = SCNGeometryElement(arGeometry.faces)
        self.init(sources: [verticesSource, normalsSource], elements: [faces])
    }
}

extension SCNGeometrySource {
    convenience init(_ source: ARGeometrySource, semantic: Semantic) {
        self.init(buffer: source.buffer,
                  vertexFormat: source.format,
                  semantic: semantic,
                  vertexCount: source.count,
                  dataOffset: source.offset,
                  dataStride: source.stride)
    }
}
...and SCNGeometryElement and SCNGeometryPrimitiveType extensions:
extension SCNGeometryElement {
    convenience init(_ source: ARGeometryElement) {
        let pointer = source.buffer.contents()
        let byteCount = source.count *
                        source.indexCountPerPrimitive *
                        source.bytesPerIndex
        let data = Data(bytesNoCopy: pointer,
                        count: byteCount,
                        deallocator: .none)
        self.init(data: data,
                  primitiveType: .of(source.primitiveType),
                  primitiveCount: source.count,
                  bytesPerIndex: source.bytesPerIndex)
    }
}

extension SCNGeometryPrimitiveType {
    static func of(_ type: ARGeometryPrimitiveType) -> SCNGeometryPrimitiveType {
        switch type {
        case .line: return .line
        case .triangle: return .triangles
        @unknown default: fatalError("Unknown ARGeometryPrimitiveType")
        }
    }
}
I just started looking at Apple's ARKitExample and I am still studying it. I need to build something like an interactive guide. For example, when we detect something (like a QR code), can I show a label in that area?
Is it possible to add a custom view (like a UIView or UILabel) to a surface?
Edit
I saw an example that adds a line. I still need to find out how to add an additional view or image.
let mat = SCNMatrix4FromMat4(currentFrame.camera.transform)
let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33)
let currentPosition = pointOfView.position + (dir * 0.1)

if button!.isHighlighted {
    if let previousPoint = previousPoint {
        let line = lineFrom(vector: previousPoint, toVector: currentPosition)
        let lineNode = SCNNode(geometry: line)
        lineNode.geometry?.firstMaterial?.diffuse.contents = lineColor
        sceneView.scene.rootNode.addChildNode(lineNode)
    }
}
I think code like this should be able to add a custom image, but I need to find a complete sample.
func updateRenderer(_ frame: ARFrame) {
    drawCameraImage(withPixelBuffer: frame.capturedImage)
    let viewMatrix = simd_inverse(frame.camera.transform)
    let projectionMatrix = frame.camera.projectionMatrix
    updateCamera(viewMatrix, projectionMatrix)
    updateLighting(frame.lightEstimate?.ambientIntensity)
    drawGeometry(forAnchors: frame.anchors)
}
ARKit isn't a rendering engine — it doesn't display any content for you. ARKit provides information about real-world spaces for use by rendering engines such as SceneKit, Unity, and any custom engine you build (with Metal, etc), so that they can display content that appears to inhabit real-world space. Thus, any "how do I show" question for ARKit is actually a question for whichever rendering engine you use with ARKit.
SceneKit is the easy out-of-the-box, no-additional-software-required way to display 3D content with ARKit, so I presume you're asking about that.
SceneKit can't render a UIView as part of a 3D scene. But it can render planes, cubes, or other shapes, and texture-map 2D content onto them. If you want to draw a text label on a plane detected by ARKit, that's the direction to investigate — follow the example's, um, example to create SCNPlane objects corresponding to detected ARPlaneAnchors, get yourself an image of some text, and set that image as the plane geometry's diffuse contents.
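A minimal sketch of that approach, assuming labelImage is a UIImage you have already rendered from your label or text, and that you attach a plane to a detected ARPlaneAnchor:

    // Sketch: show a text image on a plane detected by ARKit.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))
        plane.firstMaterial?.diffuse.contents = labelImage   // texture-map the 2D content

        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2                   // SCNPlane is vertical by default
        planeNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
        node.addChildNode(planeNode)
    }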
Yes, you can add a custom view to an ARKit scene.
Just render your view into an image and add that image wherever you want.
You can use the following code to get an image from a UIView:
func image(with view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        view.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
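And a small usage sketch of mapping that image onto a plane in the scene; the label, plane size and position below are placeholders you'd adjust to your scene:

    // Sketch: snapshot a UILabel and map it onto a small plane in the scene.
    let myLabel = UILabel(frame: CGRect(x: 0, y: 0, width: 200, height: 50))
    myLabel.text = "Hello ARKit"

    if let labelImage = image(with: myLabel) {
        let plane = SCNPlane(width: 0.2, height: 0.05)        // size in meters
        plane.firstMaterial?.diffuse.contents = labelImage
        let planeNode = SCNNode(geometry: plane)
        planeNode.position = SCNVector3(0, 0, -0.5)           // half a meter in front of the world origin
        sceneView.scene.rootNode.addChildNode(planeNode)
    }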