Swift gestures in RealityKit like in SceneKit - iOS

I created an AR scene with RealityComposer.
let boxScene = try! Experience.loadBox()
But I don't like the gestures that Apple has provided for this method.
boxScene.generateCollisionShapes(recursive: true)
let box = boxScene.box as? Entity & HasCollision
arView.installGestures(for: box!) // Add gestures
I would like to use the same gestures as in SceneKit. I created a cube with the help of RealityKit and added a textEntity to each of its faces.
let textEntity_0: Entity = boxScene.children[0].children[0].children[0].children[0].children[0].children[0]
var textModelComp_0: ModelComponent = textEntity_0.components[ModelComponent.self]!
var material_0 = SimpleMaterial()
material_0.baseColor = .color(.red)
textModelComp_0.materials[0] = material_0
textModelComp_0.mesh = .generateText("testText",
                                     extrusionDepth: 0.001,
                                     font: .systemFont(ofSize: 0.03),
                                     containerFrame: CGRect(),
                                     lineBreakMode: .byCharWrapping)
textEntity_0.components.set(textModelComp_0)
I don't need the cube to stand on or be strictly attached to a surface, but all of its faces must stay visible while it rotates. I also need to keep this code:
arView.scene.anchors.append(boxScene)
let anchor = AnchorEntity(.image(group: "LogoTypes", name: "logo"))
anchor.addChild(boxScene)
This image anchor comes from RealityKit: it only shows my 3D cube when the camera sees the marker. I still need the camera feed of the surrounding space in the background, like in any AR app.
How can I use SceneKit gestures in this case?
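One direction worth sketching (an assumption on my part, not part of the original question or an official RealityKit API): since ARView is a UIView, you can attach the same plain UIKit gesture recognizers you would use with an SCNView and drive the entity's transform yourself. Here boxEntity is a placeholder for the box entity loaded above.
import RealityKit
import UIKit

class CubeGestureHandler: NSObject {
    weak var arView: ARView?
    weak var boxEntity: Entity?   // hypothetical reference to the loaded box

    func install(on arView: ARView, for entity: Entity) {
        self.arView = arView
        self.boxEntity = entity
        let rotation = UIRotationGestureRecognizer(target: self,
                                                   action: #selector(handleRotation(_:)))
        arView.addGestureRecognizer(rotation)
    }

    @objc func handleRotation(_ gesture: UIRotationGestureRecognizer) {
        guard let entity = boxEntity else { return }
        // Spin the cube around its local Y axis by the gesture's incremental rotation.
        entity.transform.rotation *= simd_quatf(angle: Float(gesture.rotation), axis: [0, 1, 0])
        gesture.rotation = 0
    }
}
Keep a strong reference to the handler and call install(on:for:) after loading the box scene; pan and pinch recognizers can be added the same way.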

Related

Normal mapping in Scenekit

I am trying to add a normal map to a 3D model in Swift using SCNMaterial properties. The diffuse property works, but no other property, including the normal property, is visible on screen. When I debug to check whether the node's material contains the normal property, it shows that the property exists with the image that I added.
I have also checked whether the normal image I am using is correct in the SceneKit editor, where it works fine.
I have added the code that I am using.
let node = SCNNode()
node.geometry = SCNSphere(radius: 0.1)
node.geometry!.firstMaterial!.diffuse.contents = UIColor.lightGray
node.geometry!.firstMaterial!.normal.contents = UIImage(named: "normal")
node.position = SCNVector3(0,0,0)
sceneView.scene.rootNode.addChildNode(node)
This is the output I am getting
I am expecting something like this
I got the solution: since I had not enabled default lighting, there was no lighting in the scene. I added this to the code:
sceneView.autoenablesDefaultLighting = true
Given the screenshot, it seems like there is no lighting in the scene, or the material does not respond to lighting, since the sphere is not shaded. For a normal map to work, lighting has to be taken into account, because it responds to lighting direction. Have you tried creating an entirely new SCNMaterial and playing with its properties? (e.g. https://developer.apple.com/documentation/scenekit/scnmaterial/lightingmodel seems interesting.)
I would try setting
node.geometry!.firstMaterial!.lightingModel = .physicallyBased
Try this.
let scene = SCNScene()
let sphere = SCNSphere(radius: 0.1)
let sphereMaterial = SCNMaterial()
sphereMaterial.diffuse.contents = UIImage(named: "normal.png")
let sphereNode = SCNNode()
sphereNode.geometry = sphere
sphereNode.geometry?.materials = [sphereMaterial]
sphereNode.position = SCNVector3(0.5,0.1,-1)
scene.rootNode.addChildNode(sphereNode)
sceneView.scene = scene

ARKit hide objects behind walls

How can I use the horizontal and vertical planes tracked by ARKit to hide objects behind walls or behind real objects? Currently, the added 3D objects can be seen through walls when you leave a room, and they appear in front of objects that they should be behind. Is it possible to use the data ARKit gives me to provide a more natural AR experience, without objects appearing through walls?
You have two issues here.
(And you didn't even use regular expressions!)
How to create occlusion geometry for ARKit/SceneKit?
If you set a SceneKit material's colorBufferWriteMask to an empty value ([] in Swift), any objects using that material won't appear in the view, but they'll still write to the z-buffer during rendering, which affects the rendering of other objects. In effect, you'll get a "hole" shaped like your object, through which the background shows (the camera feed, in the case of ARSCNView), but which can still obscure other SceneKit objects.
You'll also need to make sure that an occluder renders before any other nodes it's supposed to obscure. You can do this using the node hierarchy (I can't remember offhand whether parent nodes render before their children or the other way around, but it's easy enough to test). Nodes that are peers in the hierarchy don't have a deterministic order, but you can force an order regardless of hierarchy with the renderingOrder property. That property defaults to zero, so setting it to -1 will make a node render before everything else. (Or, for finer control, set the renderingOrder for several nodes to a sequence of values.)
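A minimal sketch of that idea (occluderNode is a placeholder for a node shaped like the real-world surface you want to hide things behind):
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []      // draw no color...
occlusionMaterial.writesToDepthBuffer = true     // ...but still write depth
occluderNode.geometry?.materials = [occlusionMaterial]
occluderNode.renderingOrder = -1                 // render before the content it should hide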
How to detect walls/etc so you know where to put occlusion geometry?
In iOS 11.3 and later (aka "ARKit 1.5"), you can turn on vertical plane detection. (Note that when you get vertical plane anchors back from that, they're automatically rotated. So if you attach models to the anchor, their local "up" direction is normal to the plane.) Also new in iOS 11.3, you can get a more detailed shape estimate for each detected plane (see ARSCNPlaneGeometry), regardless of its orientation.
However, even if you have the horizontal and the vertical, the outer limits of a plane are just estimates that change over time. That is, ARKit can quickly detect where part of a wall is, but it doesn't know where the edges of the wall are without the user spending some time waving the device around to map out the space. And even then, the mapped edges might not line up precisely with those of the real wall.
So... if you use detected vertical planes to occlude virtual geometry, you might find places where virtual objects that are supposed to be hidden still show through, either because they aren't hidden quite right at the edge of the wall, or because they're visible through places where ARKit hasn't mapped the entire real wall. (The latter issue you might be able to solve by assuming a larger extent than ARKit does.)
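For reference, turning on horizontal and vertical plane detection in an ARSCNView session looks roughly like this (sceneView is assumed to be your ARSCNView):
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]   // .vertical requires iOS 11.3+
sceneView.session.run(configuration)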
For creating an occlusion material (also known as blackhole material or blocking material) you have to use the following instance properties: .colorBufferWriteMask, .readsFromDepthBuffer, .writesToDepthBuffer and .renderingOrder.
You can use them this way:
plane.geometry?.firstMaterial?.isDoubleSided = true
plane.geometry?.firstMaterial?.colorBufferWriteMask = .alpha
plane.geometry?.firstMaterial?.writesToDepthBuffer = true
plane.geometry?.firstMaterial?.readsFromDepthBuffer = true
plane.renderingOrder = -100
...or this way:
func occlusion() -> SCNMaterial {
let occlusionMaterial = SCNMaterial()
occlusionMaterial.isDoubleSided = true
occlusionMaterial.colorBufferWriteMask = []
occlusionMaterial.readsFromDepthBuffer = true
occlusionMaterial.writesToDepthBuffer = true
return occlusionMaterial
}
plane.geometry?.firstMaterial = occlusion()
plane.renderingOrder = -100
Creating an occlusion material is really simple:
let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
// Define an occlusion material
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []
boxGeometry.materials = [occlusionMaterial]
self.box = SCNNode(geometry: boxGeometry)
// Set rendering order to present this box in front of the other models
self.box.renderingOrder = -1
Great solution:
GitHub: arkit-occlusion
Worked for me.
But in my case I wanted to set up the walls in code. So if you don't want the user to place the walls manually, use plane detection to detect the walls and create them in code.
Also, within a range of about 4 meters the iPhone's depth estimation works, and you can detect obstacles with an AR hit test.
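A sketch of what "setting the walls in code" could look like with an ARSCNView delegate, reusing the occlusion() material from the answer above (names and sizes are illustrative, not from the original answer):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor, planeAnchor.alignment == .vertical else { return }
    // Build an invisible occluder matching the detected wall segment.
    let wall = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                        height: CGFloat(planeAnchor.extent.z))
    wall.materials = [occlusion()]
    let wallNode = SCNNode(geometry: wall)
    wallNode.eulerAngles.x = -.pi / 2      // rotate the SCNPlane into the anchor's plane
    wallNode.renderingOrder = -100
    node.addChildNode(wallNode)
}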
ARKit 6.0 and LiDAR scanner
You can hide any object behind a virtual invisible wall that replicates the real wall's geometry. iPhones and iPad Pros equipped with a LiDAR scanner help us reconstruct a 3D topological map of the surrounding environment. The LiDAR scanner greatly improves the quality of the Z channel, which lets you occlude or remove people from the AR scene.
LiDAR also improves features such as object occlusion, motion tracking, and raycasting. With a LiDAR scanner you can reconstruct a scene even in an unlit environment or in a room with featureless white walls. 3D reconstruction of the surrounding environment has been possible since ARKit 3.5 thanks to the sceneReconstruction instance property. With a reconstructed mesh of your walls, it's now super easy to hide any object behind the real walls.
To activate the sceneReconstruction instance property, use the following code:
@IBOutlet var arView: ARView!
arView.automaticallyConfigureSession = false
guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh)
else { return }
let config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .mesh
arView.debugOptions.insert([.showSceneUnderstanding])
arView.environment.sceneUnderstanding.options.insert([.occlusion])
arView.session.run(config)
Also if you're using SceneKit try the following approach:
@IBOutlet var sceneView: ARSCNView!
func renderer(_ renderer: SCNSceneRenderer,
nodeFor anchor: ARAnchor) -> SCNNode? {
guard let meshAnchor = anchor as? ARMeshAnchor
else { return nil }
let geometry = SCNGeometry(arGeometry: meshAnchor.geometry)
geometry.firstMaterial?.diffuse.contents =
colorizer.assignColor(to: meshAnchor.identifier)
let node = SCNNode()
node.name = "Node_\(meshAnchor.identifier)"
node.geometry = geometry
return node
}
func renderer(_ renderer: SCNSceneRenderer,
didUpdate node: SCNNode,
for anchor: ARAnchor) {
guard let meshAnchor = anchor as? ARMeshAnchor
else { return }
let newGeometry = SCNGeometry(arGeometry: meshAnchor.geometry)
newGeometry.firstMaterial?.diffuse.contents =
colorizer.assignColor(to: meshAnchor.identifier)
node.geometry = newGeometry
}
And here are SCNGeometry and SCNGeometrySource extensions:
extension SCNGeometry {
convenience init(arGeometry: ARMeshGeometry) {
let verticesSource = SCNGeometrySource(arGeometry.vertices,
semantic: .vertex)
let normalsSource = SCNGeometrySource(arGeometry.normals,
semantic: .normal)
let faces = SCNGeometryElement(arGeometry.faces)
self.init(sources: [verticesSource, normalsSource], elements: [faces])
}
}
extension SCNGeometrySource {
convenience init(_ source: ARGeometrySource, semantic: Semantic) {
self.init(buffer: source.buffer, vertexFormat: source.format,
semantic: semantic,
vertexCount: source.count,
dataOffset: source.offset,
dataStride: source.stride)
}
}
...and SCNGeometryElement and SCNGeometryPrimitiveType extensions:
extension SCNGeometryElement {
convenience init(_ source: ARGeometryElement) {
let pointer = source.buffer.contents()
let byteCount = source.count *
source.indexCountPerPrimitive *
source.bytesPerIndex
let data = Data(bytesNoCopy: pointer,
count: byteCount,
deallocator: .none)
self.init(data: data, primitiveType: .of(source.primitiveType),
primitiveCount: source.count,
bytesPerIndex: source.bytesPerIndex)
}
}
extension SCNGeometryPrimitiveType {
static func of(_ type: ARGeometryPrimitiveType) -> SCNGeometryPrimitiveType {
switch type {
case .line: return .line
case .triangle: return .triangles
@unknown default: fatalError("Unknown ARGeometryPrimitiveType.")
}
}
}

Adding custom view to ARKit

I just started looking at Apple's ARKitExample and I am still studying it. I need to build something like an interactive guide. For example, when we detect something (like a QR code), can I show a label in that area?
Is it possible to add a custom view (maybe a UIView or UILabel) to a surface?
Edit
I saw an example that adds a line. I still need to find out how to add an additional view or image.
let mat = SCNMatrix4FromMat4(currentFrame.camera.transform)
let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33)
let currentPosition = pointOfView.position + (dir * 0.1)
if button!.isHighlighted {
if let previousPoint = previousPoint {
let line = lineFrom(vector: previousPoint, toVector: currentPosition)
let lineNode = SCNNode(geometry: line)
lineNode.geometry?.firstMaterial?.diffuse.contents = lineColor
sceneView.scene.rootNode.addChildNode(lineNode)
}
}
I think this code should be able to add a custom image, but I need to find the whole sample.
func updateRenderer(_ frame: ARFrame) {
drawCameraImage(withPixelBuffer: frame.capturedImage)
let viewMatrix = simd_inverse(frame.camera.transform)
let projectionMatrix = frame.camera.projectionMatrix
updateCamera(viewMatrix, projectionMatrix)
updateLighting(frame.lightEstimate?.ambientIntensity)
drawGeometry(forAnchors: frame.anchors)
}
ARKit isn't a rendering engine — it doesn't display any content for you. ARKit provides information about real-world spaces for use by rendering engines such as SceneKit, Unity, and any custom engine you build (with Metal, etc), so that they can display content that appears to inhabit real-world space. Thus, any "how do I show" question for ARKit is actually a question for whichever rendering engine you use with ARKit.
SceneKit is the easy out-of-the-box, no-additional-software-required way to display 3D content with ARKit, so I presume you're asking about that.
SceneKit can't render a UIView as part of a 3D scene. But it can render planes, cubes, or other shapes, and texture-map 2D content onto them. If you want to draw a text label on a plane detected by ARKit, that's the direction to investigate: follow the sample code's example to create SCNPlane objects corresponding to detected ARPlaneAnchors, get yourself an image of some text, and set that image as the plane geometry's diffuse contents.
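As a rough sketch of that direction (not from the original answer; labelImage stands in for a UIImage of your text, produced however you like):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARPlaneAnchor else { return }
    let labelPlane = SCNPlane(width: 0.2, height: 0.05)        // 20 x 5 cm
    labelPlane.firstMaterial?.diffuse.contents = labelImage
    let labelNode = SCNNode(geometry: labelPlane)
    labelNode.eulerAngles.x = -.pi / 2                         // lay it flat on the detected plane
    node.addChildNode(labelNode)
}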
Yes, you can add a custom view to an ARKit scene.
Just make an image of your view and add it wherever you want.
You can use the following code to get an image from a UIView:
func image(with view: UIView) -> UIImage? {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
defer { UIGraphicsEndImageContext() }
if let context = UIGraphicsGetCurrentContext() {
view.layer.render(in: context)
let image = UIGraphicsGetImageFromCurrentImageContext()
return image
}
return nil
}
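A possible usage sketch (sizes and position are arbitrary; sceneView is assumed to be your ARSCNView): render a UILabel to an image and map it onto a small plane.
let label = UILabel(frame: CGRect(x: 0, y: 0, width: 400, height: 100))
label.text = "Hello AR"
label.textAlignment = .center
label.backgroundColor = .white

if let labelImage = image(with: label) {
    let plane = SCNPlane(width: 0.2, height: 0.05)
    plane.firstMaterial?.diffuse.contents = labelImage
    let node = SCNNode(geometry: plane)
    node.position = SCNVector3(0, 0, -0.5)    // half a meter in front of the world origin
    sceneView.scene.rootNode.addChildNode(node)
}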

Add shape to sphere surface in SceneKit

I'd like to be able to add shapes to the surface of a sphere using SceneKit. I started with a simple example where I'm just trying to color a portion of the sphere's surface another color. I'd like this to be an object that can be tapped, selected, etc... so my thought was to add shapes as SCNNodes using custom SCNShape objects for the geometry.
What I have now is a blue square that I'm drawing from a series of points and adding to the scene containing a red sphere. It basically ends up tangent to a point on the sphere, but the real goal is to draw it on the surface. Is there anything in SceneKit that will allow me to do this? Do I need to do some math/geometry to make it the same shape as the sphere or map to a sphere's coordinates? Is what I'm trying to do outside the scope of SceneKit?
If this question is way too broad I'd be glad if anyone could point me towards books or resources to learn what I'm missing. I'm totally new to SceneKit and 3D in general, just having fun playing around with some ideas.
Here's some playground code for what I have now:
import UIKit
import SceneKit
import XCPlayground
class SceneViewController: UIViewController {
let sceneView = SCNView()
private lazy var sphere: SCNSphere = {
let sphere = SCNSphere(radius: 100.0)
sphere.materials = [self.surfaceMaterial]
return sphere
}()
private lazy var testScene: SCNScene = {
let scene = SCNScene()
let sphereNode: SCNNode = SCNNode(geometry: self.sphere)
sphereNode.addChildNode(self.blueChildNode)
scene.rootNode.addChildNode(sphereNode)
//scene.rootNode.addChildNode(self.blueChildNode)
return scene
}()
private lazy var surfaceMaterial: SCNMaterial = {
let material = SCNMaterial()
material.diffuse.contents = UIColor.redColor()
material.specular.contents = UIColor(white: 0.6, alpha: 1.0)
material.shininess = 0.3
return material
}()
private lazy var blueChildNode: SCNNode = {
let node: SCNNode = SCNNode(geometry: self.blueGeometry)
node.position = SCNVector3(0, 0, 100)
return node
}()
private lazy var blueGeometry: SCNShape = {
let points: [CGPoint] = [
CGPointMake(0, 0),
CGPointMake(50, 0),
CGPointMake(50, 50),
CGPointMake(0, 50),
CGPointMake(0, 0)]
var pathRef: CGMutablePathRef = CGPathCreateMutable()
CGPathAddLines(pathRef, nil, points, points.count)
let bezierPath: UIBezierPath = UIBezierPath(CGPath: pathRef)
let shape = SCNShape(path: bezierPath, extrusionDepth: 1)
shape.materials = [self.blueNodeMaterial]
return shape
}()
private lazy var blueNodeMaterial: SCNMaterial = {
let material = SCNMaterial()
material.diffuse.contents = UIColor.blueColor()
return material
}()
override func viewDidLoad() {
super.viewDidLoad()
sceneView.frame = self.view.bounds
sceneView.backgroundColor = UIColor.blackColor()
self.view.addSubview(sceneView)
sceneView.autoenablesDefaultLighting = true
sceneView.allowsCameraControl = true
sceneView.scene = testScene
}
}
XCPShowView("SceneKit", view: SceneViewController().view)
If you want to map 2D content into the surface of a 3D SceneKit object, and have the 2D content be dynamic/interactive, one of the easiest solutions is to use SpriteKit for the 2D content. You can set your sphere's diffuse contents to an SKScene, and create/position/decorate SpriteKit nodes in that scene to arrange them on the face of the sphere.
If you want to have this content respond to tap events... Using hitTest in your SceneKit view gets you a SCNHitTestResult, and from that you can get texture coordinates for the hit point on the sphere. From texture coordinates you can convert to SKScene coordinates and spawn nodes, run actions, or whatever.
For further details, your best bet is probably Apple's SceneKitReel sample code project. This is the demo that introduced SceneKit for iOS at WWDC14. There's a "slide" in that demo where paint globs fly from the camera at a spinning torus and leave paint splashes where they hit it — the torus has a SpriteKit scene as its material, and the trick for leaving splashes on collisions is basically the same hit test -> texture coordinate -> SpriteKit coordinate approach outlined above.
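A condensed sketch of that setup (sizes and colors are illustrative; sphere is the SCNSphere from the question):
let skScene = SKScene(size: CGSize(width: 1024, height: 1024))
skScene.backgroundColor = .red                        // base color of the sphere
let marker = SKSpriteNode(color: .blue, size: CGSize(width: 200, height: 200))
marker.position = CGPoint(x: 512, y: 512)
skScene.addChild(marker)
sphere.firstMaterial?.diffuse.contents = skScene      // the 2D scene becomes the sphere's texture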
David Rönnqvist's SceneKit book (available as an iBook) has an example (the EarthView example, a talking globe, chapter 5) that is worth looking at. That example constructs a 3D pushpin, which is then attached to the surface of a globe at the location of a tap.
Your problem is more complicated because you're constructing a shape that covers a segment of the sphere. Your "square" is really a spherical trapezium, a segment of the sphere bounded by four great circle arcs. I can see three possible approaches, depending on what you're ultimately looking for.
The simplest way to do it is to use an image as the material for the sphere's surface. That approach is well illustrated in the Ronnqvist EarthView example, which uses several images to show the earth's surface. Instead of drawing continents, you'd draw your square. This approach isn't suitable for interactivity, though. Look at SCNMaterial.
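For example (a sketch; "sphereTexture.png" is a placeholder image into which you'd draw your square):
sphere.firstMaterial?.diffuse.contents = UIImage(named: "sphereTexture.png")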
Another approach would be to use hit-test results. Those are documented on SCNSceneRenderer (which SCNView conforms to) and SCNHitTest. Using the hit-test results, you could pull out the face that was tapped, and then its geometry elements. This won't get you all the way home, though, because SceneKit uses triangles for SCNSphere, and you're looking for quads. You will also be limited to squares that line up with SceneKit's underlying wireframe representation.
If you want full control of where the "square" is drawn, including varying its angle relative to the equator, I think you'll have to build your own geometry from scratch. That means calculating the latitude/longitude of each corner point, then generating arcs between those points, then calculating a bunch of intermediate points along the arcs. You'll have to add a fudge factor, to raise the intermediate points slightly above the sphere's surface, and build up your own quads or triangle strips. Classes here are SCNGeometry, SCNGeometryElement, and SCNGeometrySource.
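If you go that route, here's a small sketch of the latitude/longitude-to-3D conversion you'd need (the radius and the lift fudge factor are placeholders):
import SceneKit

// Convert latitude/longitude (in radians) to a point just above the sphere's surface.
func surfacePoint(latitude: Float, longitude: Float,
                  radius: Float = 100.0, lift: Float = 0.5) -> SCNVector3 {
    let r = radius + lift                   // raise slightly above the surface
    return SCNVector3(r * cos(latitude) * sin(longitude),
                      r * sin(latitude),
                      r * cos(latitude) * cos(longitude))
}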

SceneKit get texture coordinate after touch with Swift

I want to manipulate 2D textures in a 3D SceneKit scene.
Therefore I used this code to get local coordinates:
@IBAction func tap(sender: UITapGestureRecognizer) {
var arr: NSArray = my3dView.hitTest(sender.locationInView(my3dView), options: NSDictionary(dictionary: [SCNHitTestFirstFoundOnlyKey: true]))
var res: SCNHitTestResult = arr.firstObject as SCNHitTestResult
var vect: SCNVector3 = res.localCoordinates
}
I have the texture read out from my scene with:
var mat:SCNNode = myscene.rootNode.childNodes[0] as SCNNode
var child:SCNNode = mat.childNodeWithName("ID12", recursively: false)
var geo:SCNMaterial = child.geometry.firstMaterial
var channel = geo.diffuse.mappingChannel
var textureimg:UIImage = geo.diffuse.contents as UIImage
Now I want to draw onto the texture at the touch point.
How can I do that? How can I transform my touch coordinate to the texture image's coordinates?
Sounds like you have two problems. (Without even having used regular expressions. :))
First, you need to get the texture coordinates of the tapped point -- that is, the point in 2D texture space on the surface of the object. You've almost got that right already. SCNHitTestResult provides those with the textureCoordinatesWithMappingChannel method. (You're using localCoordinates, which gets you a point in the 3D space owned by the node in the hit-test result.) And you already seem to have found the business about mapping channels, so you know what to pass to that method.
Problem #2 is how to draw.
You're doing the right thing to get the material's contents as a UIImage. Once you've got that, you could look into drawing with UIGraphics and CGContext functions -- create an image with UIGraphicsBeginImageContext, draw the existing image into it, then draw whatever new content you want to add at the tapped point. After that, you can get the image you were drawing with UIGraphicsGetImageFromCurrentImageContext and set it as the new diffuse.contents of your material. However, that's probably not the best way -- you're schlepping a bunch of image data around on the CPU, and the code is a bit unwieldy, too.
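A rough sketch of that CPU-side approach in current Swift syntax (textureimg is the UIImage from the question; texcoord is assumed to be the tapped point in texture space, 0...1):
UIGraphicsBeginImageContextWithOptions(textureimg.size, false, 1.0)
textureimg.draw(at: .zero)                                 // draw the existing texture
let p = CGPoint(x: texcoord.x * textureimg.size.width,
                y: texcoord.y * textureimg.size.height)
UIColor.green.setFill()
UIBezierPath(ovalIn: CGRect(x: p.x - 5, y: p.y - 5, width: 10, height: 10)).fill()
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
geo.diffuse.contents = newImage                            // swap in the updated texture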
A better approach might be to take advantage of the integration between SceneKit and SpriteKit. This way, all your 2D drawing is happening in the same GPU context as the 3D drawing -- and the code's a bit simpler.
You can set your material's diffuse.contents to a SpriteKit scene. (To use the UIImage you currently have for that texture, just stick it on an SKSpriteNode that fills the scene.) Once you have the texture coordinates, you can add a sprite to the scene at that point.
var nodeToDrawOn: SCNNode!
var skScene: SKScene!
func mySetup() { // or viewDidLoad, or wherever you do setup
// whatever else you're doing for setup, plus:
// 1. remember which node we want to draw on
nodeToDrawOn = myScene.rootNode.childNodeWithName("ID12", recursively: true)
// 2. set up that node's texture as a SpriteKit scene
let currentImage = nodeToDrawOn.geometry!.firstMaterial!.diffuse.contents as UIImage
skScene = SKScene(size: currentImage.size)
nodeToDrawOn.geometry!.firstMaterial!.diffuse.contents = skScene
// 3. put the currentImage into a background sprite for the skScene
let background = SKSpriteNode(texture: SKTexture(image: currentImage))
background.position = CGPoint(x: skScene.frame.midX, y: skScene.frame.midY)
skScene.addChild(background)
}
@IBAction func tap(sender: UITapGestureRecognizer) {
let results = my3dView.hitTest(sender.locationInView(my3dView), options: [SCNHitTestFirstFoundOnlyKey: true]) as [SCNHitTestResult]
if let result = results.first {
if result.node === nodeToDrawOn {
// 1. get the texture coordinates
let channel = nodeToDrawOn.geometry!.firstMaterial!.diffuse.mappingChannel
let texcoord = result.textureCoordinatesWithMappingChannel(channel)
// 2. place a sprite there
let sprite = SKSpriteNode(color: SKColor.greenColor(), size: CGSize(width: 10, height: 10))
// scale coords: texcoords go 0.0-1.0, skScene space is in pixels
sprite.position.x = texcoord.x * skScene.size.width
sprite.position.y = texcoord.y * skScene.size.height
skScene.addChild(sprite)
}
}
}
For more details on the SpriteKit approach (in Objective-C) see the SceneKit State of the Union Demo from WWDC14. That shows a SpriteKit scene used as the texture map for a torus, with spheres of paint getting thrown at it -- whenever a sphere collides with the torus, it gets a SCNHitTestResult and uses its texcoords to create a paint splatter in the SpriteKit scene.
Finally, some Swift style comments on your code (unrelated to the question and answer):
Use let instead of var wherever you don't need to reassign a value, and the optimizer will make your code go faster.
Explicit type annotations (res: SCNHitTestResult) are rarely necessary.
Swift dictionaries are bridged to NSDictionary, so you can pass them directly to an API that takes NSDictionary.
Casting to a Swift typed array (hitTest(...) as [SCNHitTestResult]) saves you from having to cast the contents.
