I am currently trying to set up a rotating ball in SceneKit. I have created the ball and applied a texture to it.
ballMaterial.diffuse.contents = UIImage(named: ballTexture)
ballMaterial.doubleSided = true
ballGeometry.materials = [ballMaterial]
The current ballTexture is a semi-transparent texture as I am hoping to see the back face roll around.
However, I get some strange culling where only half of the back-facing polygons are shown, even though the doubleSided property is set to true.
Any help would be appreciated, thanks.
This happens because the effects of transparency are draw-order dependent. SceneKit doesn't know to draw the back-facing polygons of the sphere before the front-facing ones. (In fact, it can't really do that without reorganizing the vertex buffers on the GPU for every frame, which would be a huge drag on render performance.)
The vertex layout for an SCNSphere has it set up like the lat/long grid on a globe: the triangles render in order along the meridians from 0° to 360°, so depending on how the sphere is oriented with respect to the camera, some of the faces on the far side of the sphere will render before the nearer ones.
To fix this, you need to force the rendering order — either directly, or through the depth buffer. Here's one way to do that, using a separate material for the inside surface to illustrate the difference.
// add two balls, one a child of the other
let node = SCNNode(geometry: SCNSphere(radius: 1))
let node2 = SCNNode(geometry: SCNSphere(radius: 1))
scene.rootNode.addChildNode(node)
node.addChildNode(node2)
// cull back-facing polygons on the first ball
// so we only see the outside
let mat1 = node.geometry!.firstMaterial!
mat1.cullMode = .Back
mat1.transparent.contents = bwCheckers
// my "bwCheckers" uses black for transparent, white for opaque
mat1.transparencyMode = .RGBZero
// cull front-facing polygons on the second ball
// so we only see the inside
let mat2 = node2.geometry!.firstMaterial!
mat2.cullMode = .Front
mat2.diffuse.contents = rgCheckers
// sphere normals face outward, so to make the inside respond
// to lighting, we need to invert them
let shader = "_geometry.normal *= -1.0;"
mat2.shaderModifiers = [SCNShaderModifierEntryPointGeometry: shader]
(The shader modifier bit at the end isn't required — it just makes the inside material get diffuse shading. You could just as well use a material property that doesn't involve normals or lighting, like emission, depending on the look you want.)
You can also do this using a single node with a double-sided material by disabling writesToDepthBuffer, but that could also lead to undesirable interactions with the rest of your scene content — you might also need to mess with renderingOrder in that case.
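For reference, a minimal sketch of that single-node variant (untested, reusing the ballMaterial/ballGeometry names from the question, with modern Swift API names, and assuming a scene variable exists):
// One double-sided material with depth writes disabled, so the back faces
// aren't hidden by the front faces' depth values.
let ballGeometry = SCNSphere(radius: 1)
let ballMaterial = SCNMaterial()
ballMaterial.diffuse.contents = UIImage(named: ballTexture) // semi-transparent texture from the question
ballMaterial.isDoubleSided = true
ballMaterial.writesToDepthBuffer = false
ballGeometry.materials = [ballMaterial]

let ballNode = SCNNode(geometry: ballGeometry)
// If this fights with other transparent content in the scene, you may also
// need to adjust ballNode.renderingOrder.
scene.rootNode.addChildNode(ballNode)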
macOS 10.13 and iOS 11 added SCNTransparencyMode.dualLayer, which as far as I can tell doesn't even require setting isDoubleSided to true (the documentation provides hardly any information about it). So a simple solution that works for me is:
ballMaterial.diffuse.contents = UIImage(named: ballTexture)
ballMaterial.transparencyMode = .dualLayer
ballGeometry.materials = [ballMaterial]
I am quite new and experimenting with Apple's ARKit and have a question regarding the rotation information of the ARCamera. I am capturing photos and saving the current position, orientation and rotation of the camera with each image taken. The idea is to create 2D plane nodes with these images and have them appear in another view in the same position/orientation/rotation (with respect to the origin) as when they were captured (as if the images were frozen in the air when they were captured). The position information seems to work fine, but the orientation/rotation comes out completely off, as I'm having difficulty understanding when it's relevant to use self.sceneView.session.currentFrame?.camera.eulerAngles vs self.sceneView.pointOfView?.orientation vs self.sceneView.pointOfView?.rotation.
This is how I set up my 2d image planes:
let imagePlane = SCNPlane(width: self.sceneView.bounds.width/6000, height: self.sceneView.bounds.height/6000)
imagePlane.firstMaterial?.diffuse.contents = self.image//<-- UIImage here
imagePlane.firstMaterial?.lightingModel = .constant
self.planeNode = SCNNode(geometry: imagePlane)
Then I set self.planeNode.eulerAngles.x to the value I get from the view where the image is being captured, using self.sceneView.session.currentFrame?.camera.eulerAngles.x for x (and do the same for y and z as well).
I then set the rotation of the node as self.planeNode.rotation.x = self.rotX (where self.rotX is the information I get from self.sceneView.pointOfView?.rotation.x).
I have also tried to set it as follows:
let xAngle = SCNMatrix4MakeRotation(Float(self.rotX), 1, 0, 0);
let yAngle = SCNMatrix4MakeRotation(Float(self.rotY), 0, 1, 0);
let zAngle = SCNMatrix4MakeRotation(Float(self.rotZ), 0, 0, 1);
let rotationMatrix = SCNMatrix4Mult(SCNMatrix4Mult(xAngle, yAngle), zAngle);
self.planeNode.pivot = SCNMatrix4Mult(rotationMatrix, self.planeNode.transform);
The documentation states that eulerAngles is the “orientation” of the camera in roll, pitch and yaw values, but then what is self.sceneView.pointOfView?.orientation used for?
So when I specify the position, orientation and rotation of my plane nodes, is the information I get from eulerAngles enough to capture the correct orientation of the images?
Is my approach to this completely wrong or am I missing something obvious? Any help would be much appreciated!
If what you want to do is essentially create a billboard that is facing the camera at the time of capture, then you can take the transform matrix of the camera (it already has the correct orientation) and just apply an inverse translation to it to move it to the object's location. Then use that matrix to position your billboard. This way you don't have to deal with any of the angles or worry about the correct order to composite the rotations. The translation is easy to do because all you need to do is subtract the object's location from the camera's location. One of the ARKit WWDC sessions actually has an example that sort of does this (it creates billboards at the camera's location). The only change you need to make is to translate the billboard away from the camera's position.
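A rough Swift sketch of that idea (untested; cameraTransform, objectPosition and capturedPosition are placeholder names, not from the original post). Instead of composing an explicit inverse translation, it keeps the camera's orientation and simply overwrites the translation column with the object's position, which amounts to the same thing:
import simd

// Reuse the capture-time camera orientation, but place the plane at the object.
func billboardTransform(cameraTransform: simd_float4x4,
                        objectPosition: simd_float3) -> simd_float4x4 {
    var transform = cameraTransform                       // camera's rotation + position
    transform.columns.3 = simd_float4(objectPosition, 1)  // keep rotation, replace position
    return transform
}

// Usage (hypothetical): orient the image plane the way the camera was oriented
// when the photo was taken.
// planeNode.simdTransform = billboardTransform(cameraTransform: frame.camera.transform,
//                                              objectPosition: capturedPosition)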
The graph is below:
ARFrame -> 3DModelFilter(SCNScene + SCNRender) -> OtherFilters -> GPUImageView.
Load 3D model:
NSError* error;
SCNScene* scene =[SCNScene sceneWithURL:url options:nil error:&error];
Render 3D model:
SCNRenderer* render = [SCNRenderer rendererWithContext:context options:nil];
render.scene = scene;
[render renderAtTime:0];
Now I am puzzled about how to apply the ARFrame camera's transform to the SCNScene.
Some guesses:
Can I assign the ARFrame camera's transform to the transform of the camera node in the scene without any complex operation?
Is the ARFrame camera's projectionMatrix of any help to me in this case?
Update 2017-12-23:
First of all, thanks @rickster for your reply. Following your suggestion, I added code in the ARSession didUpdateFrame callback:
ARCamera* camera = frame.camera;
SCNMatrix4 cameraMatrix = SCNMatrix4FromMat4(camera.transform);
cameraNode.transform = cameraMatrix;
matrix_float4x4 mat4 = [camera projectionMatrixForOrientation:UIInterfaceOrientationPortrait viewportSize:CGSizeMake(375, 667) zNear:0.001 zFar:1000];
cameraNode.camera.projectionTransform = SCNMatrix4FromMat4(mat4);
Run the app.
1. I can't see the whole ship, only part of it. So I added a translation to the camera's transform. With the code below I can see the whole ship:
cameraMatrix = SCNMatrix4Mult(cameraMatrix, SCNMatrix4MakeTranslation(0, 0, 15));
2. When I move the iPhone up or down, the tracking seems to work. But when I move the iPhone left or right, the ship follows my movement until it disappears from the screen.
I think there is some important thing I missed.
ARCamera.transform tells you where the camera is in world space (and its orientation). You can assign this directly to the simdTransform property of the SCNNode holding your SCNCamera.
ARCamera.projectionMatrix tells you how the camera sees the world — essentially, what its field of view is. If you want content rendered by SceneKit to appear to inhabit the real world seen in the camera image, you'll need to set up SCNCamera with the information ARKit provides. Conveniently, you can bypass all the individual SCNCamera properties and set a projection matrix directly on the SCNCamera.projectionTransform property. Note that property is a SCNMatrix4, not a SIMD matrix_float4x4 as provided by ARKit, so you'll need to convert it:
scnCamera.projectionTransform = SCNMatrix4FromMat4(arCamera.projectionMatrix);
Note: Depending on how your view is set up, you may need to use ARCamera.projectionMatrixForOrientation:viewportSize:zNear:zFar: instead of ARCamera.projectionMatrix so you get a projection appropriate for your view's size and UI orientation.
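Putting both pieces together, here is a hedged Swift sketch of the per-frame update (not the asker's exact Objective-C setup; it assumes a cameraNode holding your SCNCamera and that you pass in your view's size, with portrait orientation hard-coded for brevity):
func update(with frame: ARFrame, viewportSize: CGSize) {
    // Where the camera is: drive the SceneKit camera node from the AR camera.
    cameraNode.simdTransform = frame.camera.transform

    // How the camera sees: match the AR camera's projection for this viewport.
    let projection = frame.camera.projectionMatrix(for: .portrait,
                                                   viewportSize: viewportSize,
                                                   zNear: 0.001,
                                                   zFar: 1000)
    cameraNode.camera?.projectionTransform = SCNMatrix4FromMat4(projection)
}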
I'm currently working on an ARKit project where I would like to darken the actual camera feed so the objects in my 3D scene stand out more.
Two solutions I have found so far:
A) Manually applying CIFilter to the camera frames and setting those as background image to the SceneKit scene as answered in this SO post
The problem here is that fps tanks significantly.
B) Set a background color like so:
sceneView.scene.background.contents = UIColor(white: 0.0, alpha: 0.2)
Sadly, colors with alpha <1 are still opaque, so no matter what alpha I set I can't see anything of the camera feed.
Can anyone think of a different trick to darken the camera feed?
Your option B doesn't work for two reasons:
the scene view is opaque, so there's nothing behind it for a partially transparent background color to blend with.
sceneView.scene.background is what actually displays the camera image, so if you set it to a color you're not displaying the camera image at all anymore.
Some other options (mostly untested) you might look into:
As referenced from the answer you linked, use SCNTechnique to set up multipass rendering. On the first pass, render the whole scene with the excludeCategoryMask (and your scene contents) set up to render nothing but the background, using a shader that dims (or blurs or whatever) all pixels. On the second pass, render only the node(s) you want to appear without that shader (use a simple pass-through shader).
Keep your option B, but make the scene view non-opaque. Set the view's backgroundColor (not the scene's background) to a solid color, and set the transparency of the scene background to fade out the camera feed against that color.
Use geometry to create a "physical" backdrop for your scene: e.g. a large SCNPlane, placed as a child of the camera at some fixed distance that's much farther than any other scene content, with a solid dark, partially transparent material (a rough sketch follows this list).
Use multiple views — the ARSCNView, made non-opaque and with a clear background, and another (not necessarily a SceneKit view) that just shows the camera feed. Mess with the other view (or drop in other fun things like UIVisualEffectView) to obscure the camera feed.
File a bug with Apple about sceneView.background not getting the full set of shading customization options that nodes and materials get (filters, shader modifiers, full shader programs, etc.), without which customizing the background is much more difficult than customizing other aspects of the scene.
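For the "physical backdrop" option, here is a rough programmatic sketch (untested; the size, distance and alpha are arbitrary). A later answer below does the same thing with a plane authored in .scnassets:
// A large, dark, partially transparent plane parented to the camera, placed
// farther away than the rest of the scene content. The camera feed (the scene
// background) shows through it dimmed; nearer scene content is drawn in front.
let backdrop = SCNPlane(width: 100, height: 100)
let dimMaterial = SCNMaterial()
dimMaterial.diffuse.contents = UIColor(white: 0.0, alpha: 0.5) // darkening amount
dimMaterial.lightingModel = .constant                          // ignore scene lighting
backdrop.materials = [dimMaterial]

let backdropNode = SCNNode(geometry: backdrop)
backdropNode.position = SCNVector3Make(0, 0, -10) // farther than your other content
sceneView.pointOfView?.addChildNode(backdropNode)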
I achieved this effect by creating an SCNNode with an SCNSphere geometry and keeping it attached to the camera using ARSCNView.pointOfView.
override func viewDidLoad() {
super.viewDidLoad()
let sphereFogNode = Self.makeSphereNode()
arView.pointOfView!.addChildNode(sphereFogNode)
view.addSubview(arView)
}
private static func makeSphereGeom() -> SCNSphere {
let sphere = SCNSphere(radius: 5)
let material = SCNMaterial()
material.diffuse.contents = UIColor(white: 1.0, alpha: 0.7)
material.isDoubleSided = true
sphere.materials = [material]
return sphere
}
private static func makeSphereNode() -> SCNNode {
SCNNode(geometry: makeSphereGeom())
}
Clipping Outside Sphere
This darkens the camera along with anything outside the bounds of the sphere. Hit testing (ARFrame.hitTest) does not respect the sphere boundary. You can receive results from outside the sphere.
Things outside your sphere are seen through the sphere's semi-transparent material, so even non-transparent content appears partially transparent once it is beyond the sphere.
The white part is the plane inside the sphere and the grey is the plane outside the sphere. The plane is solid white and non-transparent. I tried using SCNScene.fog* to clip SceneKit graphics outside the sphere, but it seems that fog doesn't occlude rendered content, it just affects its appearance. SCNCamera.zFar doesn't work either, as it clips based on Z-distance rather than the straight-line distance between the camera and the target.
Just make your sphere big enough and everything will look fine.
For those who want to implement rickster's option "Use geometry to create a 'physical' backdrop for your scene", here is my code (the plane and its transparency are set up in .scnassets):
guard let backgroundPlane = sceneView.scene.rootNode.childNode(withName: "background", recursively: false) else { return }
backgroundPlane.removeFromParentNode()
backgroundPlane.position = SCNVector3Make(0, 0, -2)
sceneView.pointOfView?.addChildNode(backgroundPlane)
Is it possible to cast a static shadow in SceneKit? I don't know if static is the right word. I would only like a smooth black circle underneath my object when it falls. The object moves in the x and y directions. I know that I can use the sampleRadius property, but that has a significant impact on performance. I have seen such a thing in other game engines and I am wondering if I can achieve it in SceneKit too.
EDIT:
I used this, but I only get a black scene with very little lighting. It looks like the floor is completely black. I have tried different gobo images, but no luck. What have I missed?
let spotNode = scene.rootNode.childNodeWithName("spot", recursively: true)
let spotlight = spotNode?.light
spotlight?.categoryBitMask = 1
spotlight!.shadowMode = SCNShadowMode.Modulated
spotlight?.gobo?.contents = UIImage(named: "goboImage")
floorNode?.categoryBitMask = 1
//Apple code:
// Use modulated mode
light.shadowMode = SCNShadowModeModulated;
// Configure the projected shadow
light.gobo.contents = aShadowImage;
// Use bit masks to specify receivers
light.categoryBitMask = kProjectorLightMask;
floor.categoryBitMask = kProjectorLightMask;
You'll want to use the SCNShadowModeModulated shadow mode. The different techniques for shadows are explained in depth in the Building a Game with SceneKit presentation from WWDC 2014.
General on this:
The categoryBitMask represents "categories".
So when you set a categoryBitMask on a light, you are saying "this light will only hit objects in this category".
Ex:
static let WorldCategory: Int = 1 << 0 // Category for background objects
static let GameObjectsCat: Int = 1 << 1 // Category for monsters in the game :)
...
// When setting up the lights
spotlight.categoryBitMask = GameObjectsCat // Light will only affect game objects
spotlight.castsShadow = true
...
// Pretend this is an SCNNode representing a 3D fortress...
// Will not be affected by the spotlight or project its shadows
fortress.categoryBitMask = WorldCategory
// Pretend this is an SCNNode representing an evil 3D monster...
// Affected by spotlight and projects shadows
orc.categoryBitMask = GameObjectsCat
orc.castsShadow = true
That was the general take on categoryBitMask.
In your case we want to:
Create a modulated light (SCNShadowMode.Modulated)
Set the gobo image
Give the light and the floor the same categoryBitMask, but make sure it is not set for any other node (like game characters or whatever). A tip would be to create a new category like static let SimpleShadow: Int = 1 << 2
This light will not illuminate anything in the scene (not even the floor); it will only project your gobo wherever it is pointed. So a second light source is needed to see anything (ambient will be easiest). A minimal sketch of this setup follows below.
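A hedged Swift sketch of that setup (untested; the gobo image name and the category value are placeholders, floorNode and scene are assumed to exist as in the question, and modern Swift API names are used):
let simpleShadowCategory = 1 << 2 // used only by the gobo light and the floor

// The "fake shadow" projector: modulated mode projects the gobo instead of lighting.
let shadowLight = SCNLight()
shadowLight.type = .spot
shadowLight.shadowMode = .modulated
shadowLight.gobo?.contents = UIImage(named: "goboShadow") // soft dark circle on white
shadowLight.categoryBitMask = simpleShadowCategory

let shadowLightNode = SCNNode()
shadowLightNode.light = shadowLight
shadowLightNode.position = SCNVector3Make(0, 10, 0)
shadowLightNode.eulerAngles.x = -.pi / 2 // point straight down at the floor
scene.rootNode.addChildNode(shadowLightNode)

// Only the floor shares the gobo light's category, so only it receives the projection.
floorNode.categoryBitMask |= simpleShadowCategory

// A second light so the rest of the scene is actually visible.
let ambientNode = SCNNode()
ambientNode.light = SCNLight()
ambientNode.light?.type = .ambient
scene.rootNode.addChildNode(ambientNode)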
I have not had the possibility to test this out, so I am writing from what I remember. :)
But please note. With proper use of categoryBitMask you could easily create a "special light" designed to project a real shadow of your main character against the floor node. This would be very cheap, as the shadow would only be calculated for one single node, projected against one single node.
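For example, a quick sketch of that "special light" idea (untested; hero is a placeholder node, floorNode and scene are assumed as in the question):
let heroShadowCategory = 1 << 3

// A dedicated shadow-casting spot that only "sees" the hero and the floor,
// so the shadow map only has to render a single node.
let heroShadowLight = SCNLight()
heroShadowLight.type = .spot
heroShadowLight.castsShadow = true
heroShadowLight.categoryBitMask = heroShadowCategory

let heroShadowNode = SCNNode()
heroShadowNode.light = heroShadowLight
heroShadowNode.position = SCNVector3Make(0, 10, 0)
heroShadowNode.eulerAngles.x = -.pi / 2
scene.rootNode.addChildNode(heroShadowNode)

hero.categoryBitMask |= heroShadowCategory      // casts the shadow
floorNode.categoryBitMask |= heroShadowCategory // receives the shadow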
Hope this will help.
I am drawing a sky sphere as the background for a 3D view. Occasionally, when navigating around the view, there is a visual glitch that pops in:
Example of the glitch: a black shape where rendering has apparently not placed fragments onscreen
Black is the colour the device is cleared to at the beginning of each frame.
The shape of the black area is different each time, and is sometimes visibly many polygons. They are always centred around a common point, usually close to the centre of the screen.
Repainting without changing the navigation (eye position and look) doesn't make the glitch vanish, i.e. it does seem to be dependent on specific navigation.
The moment navigation is changed, even an infinitesimal amount, it vanishes and the sky draws solidly. The vast majority of painting is correct. Eventually as you move around you will spot another glitch.
Changing the radius of the sphere (to, say, 0.9 of the near/far plane distance) doesn't seem to remove the glitches.
Changing Z-buffer writing or the Z test in the effect technique makes no difference.
There is no DX debug output (when running with the debug version of the runtime, maximum validation, and shader debugging enabled).
What could be the cause of these glitches?
I am using Direct3D9 (June 2010 SDK), shaders compiled to SM3, and the glitch has been observed on ATI cards and VMWare Fusion virtual cards on Windows 7 and XP.
Example code
The sky is being drawn as a sphere (error-checking etc. removed from the code below):
To create
const float fRadius = GetScene().GetFarPlane() - GetScene().GetNearPlane()*2;
D3DXCreateSphere(GetScene().GetDevicePtr(), fRadius, 64, 64, &m_poSphere, 0);
Changing the radius doesn't seem to affect the presence of glitches.
Vertex shader
OutputVS ColorVS(float3 posL : POSITION0, float4 c : COLOR0) {
OutputVS outVS = (OutputVS)0;
// Center around the eye
posL += g_vecEyePos;
// Transform to homogeneous clip space.
outVS.posH = mul(float4(posL, 1.0f), g_mWorldViewProj).xyzw; // Always on the far plane
Pixel shader
Doesn't matter; even one outputting a solid colour will glitch:
float4 ColorPS(float altitude : COLOR0) : COLOR {
return float4(1.0, 0.0, 0.0, 1.0);
}
The same image with a solid-colour pixel shader, to be certain the PS isn't the cause of the problem
Technique
technique BackgroundTech {
pass P0 {
// Specify the vertex and pixel shader associated with this pass.
vertexShader = compile vs_3_0 ColorVS();
pixelShader = compile ps_3_0 ColorPS();
// sky is visible from inside - cull mode is inverted (clockwise)
CullMode = CW;
}
}
I tried adding in state settings affecting the depth, such as ZWriteEnabled = false. None made any difference.
The problem is certainly caused by far plane clipping. If changing the sphere's radius a bit doesn't help, then the sphere's position may be wrong...
Make sure you're properly initializing the g_vecEyePos constant (maybe you've misspelled it in one of the DirectX SetShaderConstant functions?).
Also, if you've included the translation to the eye's position in the world matrix of g_mWorldViewProj, you shouldn't do posL += g_vecEyePos; in your VS, because it causes each vertex to be moved by the eye's position twice.
In other words you should choose one of these options:
g_mWorldViewProj = mCamView * mCamProj; and posL += g_vecEyePos;
g_mWorldViewProj = MatrixTranslation(g_vecEyePos) * mCamView * mCamProj;