How to draw different images on each side of SCNPlane Geometry - ios

My ultimate goal is to have an SCNNode representing an image floating in space. This is more or less easily accomplished with the current code I have below, but the problem is that the back side of the image isn't rendered and is thus transparent from the back. I want to be able to display a different image on the back so that there is something to see from both sides. The isDoubleSided property doesn't work here because it simply mimics what's on the front. Any ideas? I looked into the idea of creating my own geometry from sources and elements, but it seemed very complex for what should be really simple.
My current code:
private func createNode() -> SCNNode {
    let scaleFactor = image.size.width / 0.2
    let width = image.size.width / scaleFactor
    let height = image.size.height / scaleFactor

    let geometry = SCNPlane(width: width, height: height)
    let material = SCNMaterial()
    material.diffuse.contents = image
    geometry.materials = [material]
    return SCNNode(geometry: geometry)
}
Thanks!

Since you want different images, you need to use different materials. SceneKit allows specifying a material per geometry element. SCNPlane has only one element, which is why isDoubleSided simply mirrors the image onto the back side. You have two options here:
Create two SCNPlane geometries, orient them back to back, and assign a different image to each geometry.firstMaterial.diffuse.contents
Create a custom SCNGeometry from an SCNGeometrySource (the plane's 4 vertices) and two SCNGeometryElements (one for each side: 2 triangles, 6 indices), and assign an array of two materials (different images) to the geometry.
The first option is easier, but looks more like a workaround.
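Here is a minimal sketch of the first option, reusing the sizing from the question. frontImage and backImage are placeholder UIImages, and createDoubleSidedNode is just a name made up for this example:

import UIKit
import SceneKit

func createDoubleSidedNode(frontImage: UIImage, backImage: UIImage) -> SCNNode {
    let scaleFactor = frontImage.size.width / 0.2
    let width = frontImage.size.width / scaleFactor
    let height = frontImage.size.height / scaleFactor

    let frontPlane = SCNPlane(width: width, height: height)
    frontPlane.firstMaterial?.diffuse.contents = frontImage

    let backPlane = SCNPlane(width: width, height: height)
    backPlane.firstMaterial?.diffuse.contents = backImage

    let frontNode = SCNNode(geometry: frontPlane)
    let backNode = SCNNode(geometry: backPlane)
    // Rotate the back plane 180° around Y so it faces the opposite direction.
    backNode.eulerAngles.y = .pi

    let container = SCNNode()
    container.addChildNode(frontNode)
    container.addChildNode(backNode)
    return container
}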

Related

SCNBox – Map a texture onto five of six sides

I'm trying to create something like a canvas in SceneKit using an SCNBox, with a UIImage "wrapped" around from one surface and onto the four others adjacent to it.
The only way I can currently think to do this would be to chop up the UIImage into five separate images and put those onto the sides as materials, but I'm sure there must be an easier way.
Can anyone steer me in the right direction here? The box will have a separate texture/material on the side opposite the "front".
The easiest way would probably be to create a custom geometry with matching texture coordinates using +geometryWithSources:elements:
You can use the contentsTransform property of SCNMaterialProperty to adjust the texture coordinates from your image to the SCNBox.
Some explanations with simplified example:
Let's suppose that you are using a cube and you have a texture like this
By dividing it into rectangles, you will have
You want to skip rectangles 1, 3, 7, 9 and cover your cube with this texture.
For this, just normalize the size of a side of your SCNBox to between 0 and 1, and use it to set the scale and translation in the contentsTransform matrix.
In my example the cube has equal sides, so each side covers a third of the whole texture. To take rectangle 5 from the texture:
let normalizedWidth: Float = 1.0 / 3.0
let normalizedHeight: Float = 1.0 / 3.0
let xOffset: Float = 1 // skip the column containing 1, 4, 7
let yOffset: Float = 1 // skip the row containing 1, 2, 3

let sideMaterial = SCNMaterial()
sideMaterial.diffuse.contents = textureImage

let scaleMatrix = SCNMatrix4MakeScale(normalizedWidth, normalizedHeight, 1.0)
sideMaterial.diffuse.contentsTransform = SCNMatrix4Translate(scaleMatrix,
                                                             normalizedWidth * xOffset,
                                                             normalizedHeight * yOffset,
                                                             0.0)
You can fill five sides with the configured materials and the last one (on the back) with just a color, then set them all on the materials property of your SCNBox (a sketch is below).
As a result you will have:
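Roughly like this, assuming you have built five materials like sideMaterial above (the names frontMaterial, rightMaterial, leftMaterial, topMaterial and bottomMaterial are placeholders, each with its own xOffset/yOffset into the texture). SCNBox applies its materials in the order front, right, back, left, top, bottom:

let backMaterial = SCNMaterial()
backMaterial.diffuse.contents = UIColor.gray   // the plain side opposite the "front"

let box = SCNBox(width: 0.3, height: 0.3, length: 0.3, chamferRadius: 0)
box.materials = [frontMaterial, rightMaterial, backMaterial,
                 leftMaterial, topMaterial, bottomMaterial]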

UILabel text doesn't appear when using ARKit

I'm programmatically generating a set of UILabels, attaching them to SCNNodes and then placing them in a scene.
The problem is that the text on some of the labels doesn't appear. This occurs (seemingly) randomly.
Here's the code:
var labels = [SCNNode]()
var index: Int
var x: Float
var y: Float
let N = 3

for i in 0 ... N-1 {
    for k in 0 ... N-1 {
        let node = label()
        labels.append(node)
        index = labels.count - 1
        x = Float(i) * 0.5 - 0.5
        y = Float(k) * 0.5 - 0.5
        sceneView.scene.rootNode.addChildNode(labels[index])
        labels[index].position = SCNVector3Make(x, y, -1)
    }
}
and the method to create the label node:
func label() -> SCNNode {
    let node = SCNNode()
    let label = UILabel(frame: CGRect(x: CGFloat(0), y: CGFloat(0),
                                      width: CGFloat(100), height: CGFloat(50)))
    let plane = SCNPlane(width: 0.2, height: 0.1)
    label.text = "test"
    label.adjustsFontSizeToFitWidth = true
    plane.firstMaterial?.diffuse.contents = label
    node.geometry = plane
    return node
}
The labels themselves always appear correctly, it's just that some of them are blank, with no text.
I've tried playing around with the size of the label, the size of the plane it is attached to, the font size etc - nothing seems to work.
I've also tried enclosing the label creation in DispatchQueue.main.async { ... }, which didn't help either.
I'm moderately new to Swift and iOS, so could easily have missed something very obvious.
Thanks in advance!
EDIT:
(1) Setting label.backgroundColor = UIColor.magenta makes it clear that in fact the label is not being created, but the node / plane is.
Some of the labels are left white (i.e. only the SCNNode is being rendered); however, after a short delay they sometimes become magenta and the text appears. Some of the labels remain missing, though.
(2) It further appears that it's related to the position and orientation of the node (label) relative to the camera. I created a large (10x10) grid of labels, then tested placing the camera at different initial positions in the grid. The likelihood that a node appeared seemed directly related to the distance of the node from the initial camera position. Those nodes directly in front of the camera were always rendered, and those far away almost never were.
(3) workaround / hack is to convert the labels to images, and use them instead - code is at https://github.com/Jordan-Campbell/uiimage-arkit-testing if anyone is interested.
If you are labeling things in AR, 99% of the time it is better to do so in "Screen Space" rather than in "Perspective".
Benefits of labels in Screen Space:
ALWAYS readable, regardless of user's distance from the label
You can use regular UILabels, no need to draw them to an image and then map the image to an SCNPlane.
Your app will have a first party feel to it because Apple uses Screen Space for their labels in all of their AR apps (see Measure).
You will be able to use standard animations on your UILabel; animations are much more complex to set up when working with content in Perspective.
If you are sold on Screen Space, let me know and I'll be happy to put up some code showing you the basics.
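For reference, a minimal sketch of the Screen Space idea, assuming an ARSCNView called sceneView whose delegate is set, a plain UILabel called label already added to the view hierarchy, and an SCNNode called targetNode that you want to annotate (all three names are placeholders; you would also want to hide the label when the node is off screen or behind the camera):

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // Convert the node's world position into view coordinates.
    let projected = sceneView.projectPoint(targetNode.worldPosition)
    DispatchQueue.main.async {
        // UIKit work must happen on the main thread; this renderer callback is not on it.
        self.label.center = CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
    }
}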
Use the main thread for creating and adding labels to the scene. This makes things faster and avoids coupling the addition to the scene with plane detection, which really slows down the rendering. This works at times.
DispatchQueue.main.async {
}
Using a UIView superview as the parent of your label would also make things smoother.

Glass effect in SceneKit material

I want to make glass effect in SceneKit.
I searched in google but there's no perfect answer.
So I'm looking for a SceneKit warrior who can solve my problem clearly.
Here's an image of what I'm going to make.
It should look real.
The glass effect, reflection and shadow are main point here.
I have obj and dae file already.
So, Is there anyone to help me?
Create an SCNMaterial, configure the following properties, and assign it to the bottle geometry of an SCNNode:
.lightingModel = .blinn
.transparent.contents = // an image/texture whose alpha channel defines
// the area of partial transparency (the glass)
// and the opaque part (the label).
.transparencyMode = .dualLayer
.fresnelExponent = 1.5
.isDoubleSided = true
.specular.contents = UIColor(white: 0.6, alpha: 1.0)
.diffuse.contents = // texture image including the label (rest can be gray)
.shininess = // somewhere between 25 and 100
.reflective.contents = // glass won’t look good unless it has something
// to reflect, so also configure this as well.
// To at least a gray color with value 0.7
// but preferably an image.
Depending on what else is in your scene, the background, and the lighting used, you will probably have to tune the values above to get the desired results. If you want a bottle without the label, use the .transparency property (set its contents to a gray color) instead of the .transparent property.
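Pulled together, a sketch of that material might look like the following. The texture names and bottleNode are placeholders (bottleNode would be the node loaded from your obj/dae file), and the values will need tuning for your scene:

let glassMaterial = SCNMaterial()
glassMaterial.lightingModel = .blinn
glassMaterial.transparent.contents = UIImage(named: "bottleTransparency") // alpha mask: glass vs. label
glassMaterial.transparencyMode = .dualLayer
glassMaterial.fresnelExponent = 1.5
glassMaterial.isDoubleSided = true
glassMaterial.specular.contents = UIColor(white: 0.6, alpha: 1.0)
glassMaterial.diffuse.contents = UIImage(named: "bottleDiffuse")          // includes the label
glassMaterial.shininess = 50
glassMaterial.reflective.contents = UIImage(named: "environment")         // or a gray around 0.7
bottleNode.geometry?.firstMaterial = glassMaterial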

Adding custom view to ARKit

I just started looking at the ARKitExample from Apple and I am still studying. I need to build something like an interactive guide. For example, when we detect something (like a QR code), can I show a label in that area?
Is it possible to add a custom view (maybe a UIView or UILabel) to a surface?
Edit
I saw some example to add line. I will need to find how to add additional view or image.
let mat = SCNMatrix4FromMat4(currentFrame.camera.transform)
let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33)
let currentPosition = pointOfView.position + (dir * 0.1)

if button!.isHighlighted {
    if let previousPoint = previousPoint {
        let line = lineFrom(vector: previousPoint, toVector: currentPosition)
        let lineNode = SCNNode(geometry: line)
        lineNode.geometry?.firstMaterial?.diffuse.contents = lineColor
        sceneView.scene.rootNode.addChildNode(lineNode)
    }
}
I think this code should be able to add a custom image, but I need to find the whole sample.
func updateRenderer(_ frame: ARFrame) {
    drawCameraImage(withPixelBuffer: frame.capturedImage)
    let viewMatrix = simd_inverse(frame.camera.transform)
    let projectionMatrix = frame.camera.projectionMatrix
    updateCamera(viewMatrix, projectionMatrix)
    updateLighting(frame.lightEstimate?.ambientIntensity)
    drawGeometry(forAnchors: frame.anchors)
}
ARKit isn't a rendering engine — it doesn't display any content for you. ARKit provides information about real-world spaces for use by rendering engines such as SceneKit, Unity, and any custom engine you build (with Metal, etc), so that they can display content that appears to inhabit real-world space. Thus, any "how do I show" question for ARKit is actually a question for whichever rendering engine you use with ARKit.
SceneKit is the easy out-of-the-box, no-additional-software-required way to display 3D content with ARKit, so I presume you're asking about that.
SceneKit can't render a UIView as part of a 3D scene. But it can render planes, cubes, or other shapes, and texture-map 2D content onto them. If you want to draw a text label on a plane detected by ARKit, that's the direction to investigate — follow the example's, um, example to create SCNPlane objects corresponding to detected ARPlaneAnchors, get yourself an image of some text, and set that image as the plane geometry's diffuse contents.
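For example, a rough, hypothetical sketch of that direction using ARSCNViewDelegate; imageOfText stands in for a UIImage of your rendered text:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.diffuse.contents = imageOfText

    let planeNode = SCNNode(geometry: plane)
    // SCNPlane is vertical by default; rotate it to lie flat on the detected surface.
    planeNode.eulerAngles.x = -.pi / 2
    planeNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
    node.addChildNode(planeNode)
}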
Yes, you can add a custom view to an ARKit scene.
Just make an image of your view and add it wherever you want.
You can use following code to get image for UIView
func image(with view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        view.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
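A possible way to use that snapshot is to map it onto a plane and add it to the scene; myCustomView, the plane size, and the position here are placeholders:

if let viewImage = image(with: myCustomView) {
    let plane = SCNPlane(width: 0.2, height: 0.1)
    plane.firstMaterial?.diffuse.contents = viewImage
    plane.firstMaterial?.isDoubleSided = true

    let node = SCNNode(geometry: plane)
    node.position = SCNVector3(0, 0, -0.5)   // half a metre in front of the world origin
    sceneView.scene.rootNode.addChildNode(node)
}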

Add shape to sphere surface in SceneKit

I'd like to be able to add shapes to the surface of a sphere using SceneKit. I started with a simple example where I'm just trying to color a portion of the sphere's surface another color. I'd like this to be an object that can be tapped, selected, etc... so my thought was to add shapes as SCNNodes using custom SCNShape objects for the geometry.
What I have now is a blue square that I'm drawing from a series of points and adding to the scene containing a red sphere. It basically ends up tangent to a point on the sphere, but the real goal is to draw it on the surface. Is there anything in SceneKit that will allow me to do this? Do I need to do some math/geometry to make it the same shape as the sphere or map to a sphere's coordinates? Is what I'm trying to do outside the scope of SceneKit?
If this question is way too broad I'd be glad if anyone could point me towards books or resources to learn what I'm missing. I'm totally new to SceneKit and 3D in general, just having fun playing around with some ideas.
Here's some playground code for what I have now:
import UIKit
import SceneKit
import XCPlayground

class SceneViewController: UIViewController {

    let sceneView = SCNView()

    private lazy var sphere: SCNSphere = {
        let sphere = SCNSphere(radius: 100.0)
        sphere.materials = [self.surfaceMaterial]
        return sphere
    }()

    private lazy var testScene: SCNScene = {
        let scene = SCNScene()
        let sphereNode: SCNNode = SCNNode(geometry: self.sphere)
        sphereNode.addChildNode(self.blueChildNode)
        scene.rootNode.addChildNode(sphereNode)
        //scene.rootNode.addChildNode(self.blueChildNode)
        return scene
    }()

    private lazy var surfaceMaterial: SCNMaterial = {
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.redColor()
        material.specular.contents = UIColor(white: 0.6, alpha: 1.0)
        material.shininess = 0.3
        return material
    }()

    private lazy var blueChildNode: SCNNode = {
        let node: SCNNode = SCNNode(geometry: self.blueGeometry)
        node.position = SCNVector3(0, 0, 100)
        return node
    }()

    private lazy var blueGeometry: SCNShape = {
        let points: [CGPoint] = [
            CGPointMake(0, 0),
            CGPointMake(50, 0),
            CGPointMake(50, 50),
            CGPointMake(0, 50),
            CGPointMake(0, 0)]
        var pathRef: CGMutablePathRef = CGPathCreateMutable()
        CGPathAddLines(pathRef, nil, points, points.count)
        let bezierPath: UIBezierPath = UIBezierPath(CGPath: pathRef)
        let shape = SCNShape(path: bezierPath, extrusionDepth: 1)
        shape.materials = [self.blueNodeMaterial]
        return shape
    }()

    private lazy var blueNodeMaterial: SCNMaterial = {
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.blueColor()
        return material
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = self.view.bounds
        sceneView.backgroundColor = UIColor.blackColor()
        self.view.addSubview(sceneView)
        sceneView.autoenablesDefaultLighting = true
        sceneView.allowsCameraControl = true
        sceneView.scene = testScene
    }
}

XCPShowView("SceneKit", view: SceneViewController().view)
If you want to map 2D content into the surface of a 3D SceneKit object, and have the 2D content be dynamic/interactive, one of the easiest solutions is to use SpriteKit for the 2D content. You can set your sphere's diffuse contents to an SKScene, and create/position/decorate SpriteKit nodes in that scene to arrange them on the face of the sphere.
If you want to have this content respond to tap events... Using hitTest in your SceneKit view gets you a SCNHitTestResult, and from that you can get texture coordinates for the hit point on the sphere. From texture coordinates you can convert to SKScene coordinates and spawn nodes, run actions, or whatever.
For further details, your best bet is probably Apple's SceneKitReel sample code project. This is the demo that introduced SceneKit for iOS at WWDC14. There's a "slide" in that demo where paint globs fly from the camera at a spinning torus and leave paint splashes where they hit it — the torus has a SpriteKit scene as its material, and the trick for leaving splashes on collisions is basically the same hit test -> texture coordinate -> SpriteKit coordinate approach outlined above.
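A condensed sketch of that approach, written against the question's sphere (it assumes SpriteKit is imported; skScene, the tap handler wiring, and the splash node are placeholders, and you may need to flip the y coordinate depending on how the texture ends up oriented):

let skScene = SKScene(size: CGSize(width: 1024, height: 512))
skScene.backgroundColor = .red
sphere.firstMaterial?.diffuse.contents = skScene

@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: sceneView)
    guard let hit = sceneView.hitTest(point, options: nil).first else { return }

    // Texture coordinates come back in [0, 1]; scale them into SKScene points.
    let uv = hit.textureCoordinates(withMappingChannel: 0)
    let scenePoint = CGPoint(x: uv.x * skScene.size.width,
                             y: (1 - uv.y) * skScene.size.height)

    let splash = SKShapeNode(circleOfRadius: 20)
    splash.fillColor = .blue
    splash.position = scenePoint
    skScene.addChild(splash)
}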
David Rönnqvist's SceneKit book (available as an iBook) has an example (the EarthView example, a talking globe, chapter 5) that is worth looking at. That example constructs a 3D pushpin, which is then attached to the surface of a globe at the location of a tap.
Your problem is more complicated because you're constructing a shape that covers a segment of the sphere. Your "square" is really a spherical trapezium, a segment of the sphere bounded by four great circle arcs. I can see three possible approaches, depending on what you're ultimately looking for.
The simplest way to do it is to use an image as the material for the sphere's surface. That approach is well illustrated in the Ronnqvist EarthView example, which uses several images to show the earth's surface. Instead of drawing continents, you'd draw your square. This approach isn't suitable for interactivity, though. Look at SCNMaterial.
Another approach would be to use hit test results. That's documented on SCNSceneRenderer (which SCNView complies with) and SCNHitTest. Using the hit test results, you could pull out the face that was tapped, and then its geometry elements. This won't get you all the way home, though, because SceneKit uses triangles for SCNSphere, and you're looking for quads. You will also be limited to squares that line up with SceneKit's underlying wireframe representation.
If you want full control of where the "square" is drawn, including varying its angle relative to the equator, I think you'll have to build your own geometry from scratch. That means calculating the latitude/longitude of each corner point, then generating arcs between those points, then calculating a bunch of intermediate points along the arcs. You'll have to add a fudge factor, to raise the intermediate points slightly above the sphere's surface, and build up your own quads or triangle strips. Classes here are SCNGeometry, SCNGeometryElement, and SCNGeometrySource.
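If you do go the custom-geometry route, a small helper like this (just a sketch) converts a latitude/longitude pair in radians into a point slightly above the sphere's surface, which is what you would use for the corner and intermediate points:

import SceneKit

func surfacePoint(latitude: Double, longitude: Double, radius: Double) -> SCNVector3 {
    let r = radius * 1.001   // the "fudge factor": lift the point slightly off the surface
    let x = r * cos(latitude) * sin(longitude)
    let y = r * sin(latitude)
    let z = r * cos(latitude) * cos(longitude)
    return SCNVector3(Float(x), Float(y), Float(z))
}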
