I'm trying to draw a billboarded quad using SceneKit and ARKit. I have basic billboarding working; however, when I roll the camera, the billboard also rotates in place. This video shows it in action as I roll the camera to the left (the smiley face is the billboard):
Instead, I'd like the billboard to keep facing the camera but stay vertically oriented in the scene, no matter what the camera is doing.
Here's how I compute billboarding:
// Inside the frame update function

struct Vertex {
    var position: SIMD3<Float>
    var texCoord: SIMD2<Float>
}

let halfSize = Float(0.25)
let cameraNode = sceneView.scene.rootNode.childNodes.first!
let modelTransform = self.scnNode.simdWorldTransform
let viewTransform = cameraNode.simdWorldTransform.inverse
let modelViewTransform = viewTransform * modelTransform

// The camera-aligned right and up axes, read from the model-view matrix
let right = SIMD3<Float>(modelViewTransform[0][0], modelViewTransform[1][0], modelViewTransform[2][0])
let up = SIMD3<Float>(modelViewTransform[0][1], modelViewTransform[1][1], modelViewTransform[2][1])

// drawBuffer is an MTLBuffer of vertex data
let data = drawBuffer.contents().bindMemory(to: Vertex.self, capacity: 4)
data[0].position = (right + up) * halfSize
data[0].texCoord = SIMD2<Float>(0, 0)
data[1].position = -(right - up) * halfSize
data[1].texCoord = SIMD2<Float>(1, 0)
data[2].position = (right - up) * halfSize
data[2].texCoord = SIMD2<Float>(0, 1)
data[3].position = -(right + up) * halfSize
data[3].texCoord = SIMD2<Float>(1, 1)
Again, this gets the billboard facing the camera correctly; however, when I roll the camera, the billboard rotates along with it.
What I'd like instead is for the billboard to point towards the camera but keep its orientation in the world. Any suggestions on how to fix this?
Note that my code example is simplified, so I can't use SCNBillboardConstraint or anything like that; I need to be able to compute the billboarding myself.
Here's the solution I came up with: create a new node that matches the camera's position and rotation, but without any roll:
let tempNode = SCNNode()
tempNode.simdWorldPosition = cameraNode.simdWorldPosition
// This changes the node's pitch and yaw, but not roll
tempNode.simdLook(at: cameraNode.simdConvertPosition(SIMD3<Float>(0, 0, 1), to: nil))
let view = tempNode.simdWorldTransform.inverse
let modelViewTransform = view * node.simdWorldTransform
This keeps the billboard pointing upwards in world space, even as the camera rolls.
I had actually tried doing this earlier by setting tempNode.eulerAngles.z = 0, however that seems to affect the rest of the transform matrix in unexpected ways.
There's probably a way to do this without creating a temporary node too, but this works well enough for me.
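For completeness, here's a minimal untested sketch of how the roll-free basis could be computed directly in world space without the temporary node, assuming a fixed world up of (0, 1, 0); cameraNode and scnNode are the same nodes as above:

let cameraPos = cameraNode.simdWorldPosition
let nodePos = self.scnNode.simdWorldPosition
let toCamera = simd_normalize(cameraPos - nodePos)   // quad normal, points at the camera
let worldUp = SIMD3<Float>(0, 1, 0)                  // fixed world up, immune to camera roll
let right = simd_normalize(simd_cross(worldUp, toCamera))
let up = simd_cross(toCamera, right)                 // re-orthogonalized up

Note that this right/up pair lives in world space rather than view space, so the quad vertices built from it would be transformed by the plain view-projection matrix; also, the cross product degenerates when the camera is directly above or below the billboard.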
I am using ARKit's ARFaceTrackingConfiguration to track the facial blendshapes along with the left and right eye transforms. I am exporting this data to JSON and applying it to a 3D model (which has preconfigured shape keys and eye nodes). I was able to apply the blendshape data, but I got stuck on how to apply the eye rotations. I am getting leftEyeTransform and rightEyeTransform, which are simd_float4x4, from the ARFaceAnchor.
How do I apply the rotation to the eye nodes from these transform values? I believe that for the eyes, it is enough to apply the rotation.
I have tried the following to get the orientation from the eye transforms:
Method 1:
let faceNode = SCNNode()
faceNode.simdTransform = eyeTransform
let vector = faceNode.eulerAngles
eyeLeftNode.eulerAngles = vector
Method 2:
let faceNode = SCNNode()
faceNode.simdTransform = eyeTransform
let rotation = vector_float3(faceNode.orientation.x, faceNode.orientation.y, faceNode.orientation.z)
let yaw = (rotation.y)
let pitch = (rotation.x)
let roll = (rotation.z)
let vector = SCNVector3(pitch, yaw, roll)
eyeLeftNode.eulerAngles = vector
Method 3:
let quat = simd_quaternion(eyeTransform)
let vector = SCNVector3(quat.axis.x, quat.axis.y, quat.axis.z)
eyeLeftNode.eulerAngles = vector
None of these approaches is working. I am not able to figure out the actual problem with how to rotate the eyeballs. Can you please tell me how to do this?
Thanks,
Chaitanya
I use the following two extensions in my apps for the simd_float4x4 translation and orientation components, if that's all you need:
extension float4x4 {
    var translation: SIMD3<Float> {
        let translation = columns.3
        return SIMD3<Float>(translation.x, translation.y, translation.z)
    }

    /// Factors out the orientation component of the transform.
    var orientation: simd_quatf {
        return simd_quaternion(self)
    }
}
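With those in place, a minimal usage sketch for the eye nodes might look like this. I'm assuming faceAnchor is the ARFaceAnchor from your delegate callback and eyeLeftNode/eyeRightNode are your model's eye nodes; depending on your node hierarchy you may also need a coordinate-space conversion:

// Apply only the rotation component of each eye transform to the model's eye nodes
eyeLeftNode.simdOrientation = faceAnchor.leftEyeTransform.orientation
eyeRightNode.simdOrientation = faceAnchor.rightEyeTransform.orientation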
I am trying to put several models in the scene.
for candidate in selectedCandidate {
    sceneView.scene.rootNode.addChildNode(selectedObjects[candidate])
}
candidate and selectedCandidate stand for the indices of the models I want to use. Each model contains a root node with other nodes attached to it. I use the worldPosition and position APIs of SCNNode to get and modify a 3D model's position.
What I want to do is put those models right in front of the user's eyes, which means I need the camera's position and orientation vector to place the models where I want them. I also use this code to get the camera's position, following this solution https://stackoverflow.com/a/47241952/7772038:
guard let pointOfView = sceneView.pointOfView else { return }
let transform = pointOfView.transform
let orientation = SCNVector3(-transform.m31, -transform.m32, transform.m33)
let location = SCNVector3(transform.m41, transform.m42, transform.m43)
The PROBLEM is that the camera's position and the models' positions I print out differ by orders of magnitude. The camera's position is on the order of 10^-2, like {0.038..., 0.047..., 0.024...}, BUT the models' positions are on the order of 10^2, like {197.28, 100.29, -79.25}. From my point of view when I run the program, I am in the middle of those models and they look very near, yet the printed positions are wildly different. So can you tell me how to set a model's position to whatever I want? I really need to put the model right in front of the user's eyes. If I simply call addChildNode(), the models end up behind me or somewhere else, while I need them to be right in front of the user's eyes. Thank you in advance!
If you want to place an SCNNode in front of the camera, you can do so like this:
/// Adds An SCNNode 3m Away From The Current Frame Of The Camera
func addNodeInFrontOfCamera() {
    guard let currentTransform = augmentedRealitySession.currentFrame?.camera.transform else { return }

    let nodeToAdd = SCNNode()
    let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
    boxGeometry.firstMaterial?.diffuse.contents = UIColor.red
    nodeToAdd.geometry = boxGeometry

    var translation = matrix_identity_float4x4
    //Change The X Value
    translation.columns.3.x = 0
    //Change The Y Value
    translation.columns.3.y = 0
    //Change The Z Value (negative Z is in front of the camera)
    translation.columns.3.z = -3

    nodeToAdd.simdTransform = matrix_multiply(currentTransform, translation)
    augmentedRealityView?.scene.rootNode.addChildNode(nodeToAdd)
}
And you can change any of the X,Y,Z values as you need.
Hope it points you in the right direction...
Update:
If you have multiple nodes, e.g. in a scene, then in order to use this function it's probably best to create a 'holder' node and add all your content as its children. You can then simply call this function on the holder node, as sketched below.
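For example, a rough sketch of that holder-node approach, reusing the names from the snippets above (treat it as an outline rather than drop-in code):

/// Groups The Selected Models Under One Holder Node 3m In Front Of The Camera
func addModelsInFrontOfCamera() {
    guard let currentTransform = augmentedRealitySession.currentFrame?.camera.transform else { return }

    let holderNode = SCNNode()
    for candidate in selectedCandidate {
        holderNode.addChildNode(selectedObjects[candidate])
    }

    // Same translation trick as above: 3m along the camera's -Z axis
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -3
    holderNode.simdTransform = matrix_multiply(currentTransform, translation)

    augmentedRealityView?.scene.rootNode.addChildNode(holderNode)
}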
I'm making an app where the user can create flat shapes by positioning points in 3D space with ARKit, but it seems that the part where I create the UIBezierPath from these points is problematic.
In my app, the user starts by pressing a button to place a virtual transparent wall in AR at the device's current position:
guard let currentFrame = sceneView.session.currentFrame else {
    return
}

let imagePlane = SCNPlane(width: sceneView.bounds.width, height: sceneView.bounds.height)
imagePlane.firstMaterial?.diffuse.contents = UIColor.black
imagePlane.firstMaterial?.lightingModel = .constant

let windowNode = SCNNode()
windowNode.geometry = imagePlane
sceneView.scene.rootNode.addChildNode(windowNode)
windowNode.simdTransform = currentFrame.camera.transform
windowNode.opacity = 0.1
Then, the user places some points (some sphere nodes) on that wall by pressing a button, to determine the shape of the flat object he wants to create. If the user points back at the first sphere node created, I close the shape, create a node from it, and place it at the same position as the wall:
let hitTestResult = sceneView.hitTest(self.view.center, options: nil)
if let firstHit = hitTestResult.first {
    if firstHit.node == windowNode {
        let x = Double(firstHit.worldCoordinates.x)
        let y = Double(firstHit.worldCoordinates.y)
        let pointCoordinates = CGPoint(x: x, y: y)

        let sphere = SCNSphere(radius: 0.02)
        sphere.firstMaterial?.diffuse.contents = UIColor.white
        sphere.firstMaterial?.lightingModel = .constant
        let sphereNode = SCNNode(geometry: sphere)
        sceneView.scene.rootNode.addChildNode(sphereNode)
        sphereNode.worldPosition = firstHit.worldCoordinates

        if points.isEmpty {
            windowPath.move(to: pointCoordinates)
        } else {
            windowPath.addLine(to: pointCoordinates)
        }
        points.append(sphereNode)

        if undoButton.alpha == 0 {
            undoButton.alpha = 1
        }
    } else if firstHit.node == points.first {
        windowPath.close()

        let windowShape = SCNShape(path: windowPath, extrusionDepth: 0)
        windowShape.firstMaterial?.diffuse.contents = UIColor.white
        windowShape.firstMaterial?.lightingModel = .constant

        let tintedWindow = SCNNode(geometry: windowShape)
        let worldPosition = windowNode.worldPosition
        tintedWindow.worldPosition = worldPosition
        sceneView.scene.rootNode.addChildNode(tintedWindow)

        // Removing all the sphere nodes from points and reinitializing the UIBezierPath windowPath
        removeAllPoints()
    }
}
That code works when I create a first invisible wall and a first shape, but when I create a second wall and finish drawing my shape, the shape appears deformed and nowhere near the right place. So I think I'm missing something with the coordinates of my UIBezierPath points, but what?
EDIT
Ok, so after several tests it seems that it depends on the orientation of the device at the launch of the AR session. When the device, at launch, faces the first wall that the user will create, the shape is created and placed as expected. But if the user, for example, launches the app with his device pointed in one direction, then rotates 90 degrees on the spot, places the first wall and creates his shape, the shape will be deformed and in the wrong place.
So it seems to be a problem of 3D coordinates, but I still can't figure it out.
Ok, I just found the problem! I was just using the wrong vectors and coordinates... I've never been a math/geometry guy haha
So instead of using:
let x = Double(firstHit.worldCoordinates.x)
let y = Double(firstHit.worldCoordinates.y)
I now use:
let x = Double(firstHit.localCoordinates.x)
let y = Double(firstHit.localCoordinates.y)
And instead of using:
let worldPosition = windowNode.worldPosition
I now use:
let worldPosition = windowNode.transform
That's why the position of my shape node depended on the initialization of the AR session: I was working with world coordinates. It seems obvious to me now.
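Putting both fixes back into the closing branch from my original snippet, it would read something like this (a sketch, not my exact project code):

} else if firstHit.node == points.first {
    windowPath.close()

    let windowShape = SCNShape(path: windowPath, extrusionDepth: 0)
    windowShape.firstMaterial?.diffuse.contents = UIColor.white
    windowShape.firstMaterial?.lightingModel = .constant

    let tintedWindow = SCNNode(geometry: windowShape)
    // Inherit the wall's full transform (position and orientation), since
    // the path points are now in the wall's local coordinate space
    tintedWindow.transform = windowNode.transform
    sceneView.scene.rootNode.addChildNode(tintedWindow)

    removeAllPoints()
}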
I am using two virtual joysticks to move my camera around the scene. The left stick controls the position and the right one controls the rotation.
When using the right stick, the camera rotates, but it seems that the camera rotates around the center point of the model.
This is my code:
fileprivate func rotateCamera(_ x: Float, _ y: Float)
{
    if let cameraNode = self.cameraNode
    {
        let moveX = x / 50.0
        let rotated = SCNMatrix4Rotate(cameraNode.transform, moveX, 0, 1, 0)
        cameraNode.transform = rotated
    }
}
I have also tried this code:
fileprivate func rotateCamera(_ x: Float, _ y: Float)
{
    if let cameraNode = self.cameraNode
    {
        let moveX = x / 50.0
        cameraNode.rotate(by: SCNQuaternion(moveX, 0, 1, 0), aroundTarget: cameraNode.transform)
    }
}
But the camera just jumps around. What is my error here?
There are many ways to handle rotation, and some are very suitable for giving headaches to the coder.
It sounds like the model is at (0, 0, 0), meaning it's at the center of the world, and the camera is transformed to a certain location. In the first example using matrices, you basically rotate that transformation: you transform first, then rotate, which will indeed cause the camera to rotate around the origin (0, 0, 0).
What you should do instead, to rotate the camera in local space, is rotate the camera first in local space and then translate it to its position in world space.
Translation × rotation results in rotation in world space.
Rotation × translation results in rotation in local space.
So a solution is to remove the translation from the camera first (moving it back to 0,0,0), then apply the rotation matrix, and then reapply the translation. This comes down to the same result as starting with an identity matrix. For example:
let rotated = SCNMatrix4Rotate(SCNMatrix4Identity, moveX, 0, 1, 0)
cameraNode.transform = SCNMatrix4Mult(rotated, cameraNode.transform)
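For what it's worth, a shorter sketch of the same idea using SceneKit's local-space rotation API, assuming the same optional cameraNode property as in the question:

fileprivate func rotateCamera(_ x: Float, _ y: Float)
{
    guard let cameraNode = self.cameraNode else { return }
    let moveX = x / 50.0
    // simdLocalRotate(by:) applies the rotation in the node's own space,
    // so the camera turns in place instead of orbiting the origin
    cameraNode.simdLocalRotate(by: simd_quatf(angle: moveX, axis: SIMD3<Float>(0, 1, 0)))
}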
I have an AR application which uses SceneKit and imports a video onto the scene using AVPlayer, adding it as an SKVideoNode inside a SpriteKit scene.
The video is visible as it is supposed to be, but the transparency in the video is not achieved.
Code as follows:
let spriteKitScene = SKScene(size: CGSize(width: self.sceneView.frame.width, height: self.sceneView.frame.height))
spriteKitScene.scaleMode = .aspectFit

guard let fileURL = Bundle.main.url(forResource: "Triple_Tap_1", withExtension: "mp4") else {
    return
}

let videoPlayer = AVPlayer(url: fileURL)
videoPlayer.actionAtItemEnd = .none

let videoSpriteKitNode = SKVideoNode(avPlayer: videoPlayer)
videoSpriteKitNode.position = CGPoint(x: spriteKitScene.size.width / 2.0, y: spriteKitScene.size.height / 2.0)
videoSpriteKitNode.size = spriteKitScene.size
videoSpriteKitNode.yScale = -1.0
videoSpriteKitNode.play()

spriteKitScene.backgroundColor = .clear
spriteKitScene.addChild(videoSpriteKitNode)

let background = SCNPlane(width: CGFloat(2), height: CGFloat(2))
background.firstMaterial?.diffuse.contents = spriteKitScene
let backgroundNode = SCNNode(geometry: background)
backgroundNode.position = position
backgroundNode.constraints = [SCNBillboardConstraint()]
backgroundNode.rotation.z = 0
self.sceneView.scene.rootNode.addChildNode(backgroundNode)

// Create a transform with a translation of 0.2 meters in front of the camera.
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
let transform = simd_mul((self.session.currentFrame?.camera.transform)!, translation)

// Add a new anchor to the session.
let anchor = ARAnchor(transform: transform)
self.sceneView.session.add(anchor: anchor)
What would be the best way to implement transparency for the Triple_Tap_1 video in this case?
I have gone through some Stack Overflow questions on this topic, and the only solution I found was a KittyBoom repository created back in 2013, using Objective-C.
I'm hoping that the community can reveal a better solution for this problem. The GPUImage library is not something I could get to work.
I've come up with two ways of making this possible. Both utilize surface shader modifiers. Detailed information on shader modifiers can be found in the Apple Developer Documentation.
Here's an example project I've created.
1. Masking
You would need to create another video that represents a transparency mask. In that video, black = fully opaque, white = fully transparent (or any other way you would like to represent transparency; you would just need to adjust the surface shader accordingly).
Create an SKScene with this video just like you do in the code you provided and put it into material.transparent.contents (the same material that you put the diffuse video contents into):
let spriteKitOpaqueScene = SKScene(...)
let spriteKitMaskScene = SKScene(...)
... // creating SKVideoNodes and AVPlayers for each video etc
let material = SCNMaterial()
material.diffuse.contents = spriteKitOpaqueScene
material.transparent.contents = spriteKitMaskScene
let background = SCNPlane(...)
background.materials = [material]
Add a surface shader modifier to the material. It is going to "convert" the black color from the mask video (well, actually the red color, since we only need one color component) into alpha:
let surfaceShader = "_surface.transparent.a = 1 - _surface.transparent.r;"
material.shaderModifiers = [ .surface: surfaceShader ]
That's it! Now the white color on the masking video is going to be transparent on the plane.
However, you would have to take extra care to synchronize these two videos, since the AVPlayers will probably get out of sync. Sadly I didn't have time to address that in my example project (yet; I will get back to it when I have time). Look into this question for a possible solution.
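One possible starting point (a sketch, assuming opaquePlayer and maskPlayer are the two AVPlayers created for the diffuse and mask scenes) is to start both players against the same host clock time:

// Disable automatic stalling so setRate(_:time:atHostTime:) takes effect immediately
opaquePlayer.automaticallyWaitsToMinimizeStalling = false
maskPlayer.automaticallyWaitsToMinimizeStalling = false

// Start both players 0.3 seconds from now, measured on the shared host clock
let startTime = CMTimeAdd(CMClockGetTime(CMClockGetHostTimeClock()),
                          CMTime(seconds: 0.3, preferredTimescale: 600))
opaquePlayer.setRate(1.0, time: .zero, atHostTime: startTime)
maskPlayer.setRate(1.0, time: .zero, atHostTime: startTime)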
Pros:
- No artifacts (if synchronized)
- Precise
Cons:
- Requires two videos instead of one
- Requires synchronization of the AVPlayers
2. Chroma keying
You would need a video that has a vibrant color as a background representing the parts that should be transparent. Usually green or magenta is used.
Create an SKScene for this video like you normally would and put it into material.diffuse.contents.
Add a chroma key surface shader modifier which will cut out the color of your choice and make those areas transparent. I've borrowed this shader from GPUImage and I don't really know how it actually works, but it seems to be explained in this answer:
let surfaceShader =
"""
uniform vec3 c_colorToReplace = vec3(0, 1, 0);
uniform float c_thresholdSensitivity = 0.05;
uniform float c_smoothing = 0.0;
#pragma transparent
#pragma body
vec3 textureColor = _surface.diffuse.rgb;
float maskY = 0.2989 * c_colorToReplace.r + 0.5866 * c_colorToReplace.g + 0.1145 * c_colorToReplace.b;
float maskCr = 0.7132 * (c_colorToReplace.r - maskY);
float maskCb = 0.5647 * (c_colorToReplace.b - maskY);
float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
float Cr = 0.7132 * (textureColor.r - Y);
float Cb = 0.5647 * (textureColor.b - Y);
float blendValue = smoothstep(c_thresholdSensitivity, c_thresholdSensitivity + c_smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));
float a = blendValue;
_surface.transparent.a = a;
"""
material.shaderModifiers = [ .surface: surfaceShader ]
To set the uniforms, use the setValue(_:forKey:) method on the material:
let vector = SCNVector3(x: 0, y: 1, z: 0) // represents float RGB components
material.setValue(vector, forKey: "c_colorToReplace")
material.setValue(0.3 as Float, forKey: "c_smoothing")
material.setValue(0.1 as Float, forKey: "c_thresholdSensitivity")
The as Float part is important; otherwise Swift is going to cast the value as a Double and the shader will not be able to use it.
But to get precise masking from this, you will have to really tinker with the c_smoothing and c_thresholdSensitivity uniforms. In my example project I ended up with a little green rim around the shape, but maybe I just didn't use the right values.
Pros:
- Only one video required
- Simple setup
Cons:
- Possible artifacts (a green rim around the border)