How to integrate ARKit into a GPUImage render chain with SCNRenderer? - ios

The graph is below:
ARFrame -> 3DModelFilter(SCNScene + SCNRenderer) -> OtherFilters -> GPUImageView.
Load 3D model:
NSError* error;
SCNScene* scene =[SCNScene sceneWithURL:url options:nil error:&error];
Render 3D model:
SCNRenderer* render = [SCNRenderer rendererWithContext:context options:nil];
render.scene = scene;
[render renderAtTime:0];
Now I am puzzled about how to apply the ARFrame camera's transform to the SCNScene.
Some guesses:
Can I assign the ARFrame camera's transform to the transform of the camera node in the scene without any complex operation?
Does the ARFrame camera's projectionMatrix not help me at all in this case?
Update 2017-12-23.
First of all, thanks @rickster for your reply. Following your suggestion, I added code in the ARSession didUpdateFrame callback:
ARCamera* camera = frame.camera;
SCNMatrix4 cameraMatrix = SCNMatrix4FromMat4(camera.transform);
cameraNode.transform = cameraMatrix;
matrix_float4x4 mat4 = [camera projectionMatrixForOrientation:UIInterfaceOrientationPortrait viewportSize:CGSizeMake(375, 667) zNear:0.001 zFar:1000];
cameraNode.camera.projectionTransform = SCNMatrix4FromMat4(mat4);
Run app.
1. I can't see the whole ship, only part of it, so I added a translation to the camera's transform. After adding the code below, I can see the whole ship.
cameraMatrix = SCNMatrix4Mult(cameraMatrix, SCNMatrix4MakeTranslation(0, 0, 15));
2. When I move the iPhone up or down, the tracking seems to work. But when I move the iPhone left or right, the ship follows my movement until it disappears from the screen.
I think I am missing something important.

ARCamera.transform tells you where the camera is in world space (and its orientation). You can assign this directly to the simdTransform property of the SCNNode holding your SCNCamera.
ARCamera.projectionMatrix tells you how the camera sees the world — essentially, what its field of view is. If you want content rendered by SceneKit to appear to inhabit the real world seen in the camera image, you'll need to set up SCNCamera with the information ARKit provides. Conveniently, you can bypass all the individual SCNCamera properties and set a projection matrix directly on the SCNCamera.projectionTransform property. Note that property is a SCNMatrix4, not a SIMD matrix_float4x4 as provided by ARKit, so you'll need to convert it:
scnCamera.projectionTransform = SCNMatrix4FromMat4(arCamera.projectionMatrix);
Note: Depending on how your view is set up, you may need to use ARCamera.projectionMatrixForOrientation:viewportSize:zNear:zFar: instead of ARCamera.projectionMatrix so you get a projection appropriate for your view's size and UI orientation.
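For reference, here is a minimal Swift sketch of that per-frame update, assuming an ARSessionDelegate class with a cameraNode whose SCNCamera drives the SCNRenderer pass (MyRenderer and viewportSize are placeholder names, not from the question):

import ARKit
import SceneKit

extension MyRenderer: ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let camera = frame.camera

        // Where the device camera is: drive the SceneKit camera node from ARKit's pose.
        cameraNode.simdTransform = camera.transform

        // How the device camera sees: match field of view and aspect for the
        // current interface orientation and the size of your render target.
        let projection = camera.projectionMatrix(for: .portrait,
                                                 viewportSize: viewportSize,
                                                 zNear: 0.001,
                                                 zFar: 1000)
        cameraNode.camera?.projectionTransform = SCNMatrix4(projection)
    }
}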

Related

Orientation/rotation of a plane node using ARCamera information in ARKit

I am quite new to Apple's ARKit and experimenting with it, and have a question regarding the rotation information of the ARCamera. I am capturing photos and saving the current position, orientation and rotation of the camera with each image taken. The idea is to create 2D plane nodes with these images and have them appear in another view in the same position/orientation/rotation (with respect to the origin) as when they were captured (as if the images were frozen in the air at the moment of capture). The position information seems to work fine, but the orientation/rotation comes out completely off, as I'm having difficulty understanding when it's relevant to use self.sceneView.session.currentFrame?.camera.eulerAngles vs self.sceneView.pointOfView?.orientation vs self.sceneView.pointOfView?.rotation.
This is how I set up my 2d image planes:
let imagePlane = SCNPlane(width: self.sceneView.bounds.width/6000, height: self.sceneView.bounds.height/6000)
imagePlane.firstMaterial?.diffuse.contents = self.image//<-- UIImage here
imagePlane.firstMaterial?.lightingModel = .constant
self.planeNode = SCNNode(geometry: imagePlane)
Then I set self.planeNode.eulerAngles.x to the value I get from the view where the image is being captured, using self.sceneView.session.currentFrame?.camera.eulerAngles.x for x (and do the same for y and z as well).
I then set the rotation of the node as self.planeNode.rotation.x = self.rotX (where self.rotX is the information I get from self.sceneView.pointOfView?.rotation.x).
I have also tried to set it as follows:
let xAngle = SCNMatrix4MakeRotation(Float(self.rotX), 1, 0, 0);
let yAngle = SCNMatrix4MakeRotation(Float(self.rotY), 0, 1, 0);
let zAngle = SCNMatrix4MakeRotation(Float(self.rotZ), 0, 0, 1);
let rotationMatrix = SCNMatrix4Mult(SCNMatrix4Mult(xAngle, yAngle), zAngle);
self.planeNode.pivot = SCNMatrix4Mult(rotationMatrix, self.planeNode.transform);
The documentation states that eulerAngles is the “orientation” of the camera in roll, pitch and yaw values, but then what is self.sceneView.pointOfView?.orientation used for?
So when I specify the position, orientation and rotation of my plane nodes, is the information I get from eulerAngles enough to capture the correct orientation of the images?
Is my approach to this completely wrong or am I missing something obvious? Any help would be much appreciated!
If what you want to do is essentially create a billboard that is facing the camera at the time of capture, then you can basically take the transform matrix of the camera (it already has the correct orientation) and just apply an inverse translation to it to move it to the object's location. Then use that matrix to position your billboard. This way you don't have to deal with any of the angles or worry about the correct order in which to composite the rotations. The translation is easy to do because all you need to do is subtract the object's location from the camera's location. One of the ARKit WWDC sessions actually has an example that more or less does this (it creates billboards at the camera's location). The only change you need to make is to translate the billboard away from the camera's position.
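One way to realize this idea is sketched below in Swift (not the WWDC sample itself; frame, objectPosition, and image are placeholder names). Instead of computing the translation explicitly, it keeps the camera's rotation and simply overwrites the translation column of the camera transform with the object's position:

import ARKit
import SceneKit
import UIKit

func makeBillboard(from frame: ARFrame, at objectPosition: simd_float3, image: UIImage) -> SCNNode {
    let plane = SCNPlane(width: 0.2, height: 0.3) // size in meters; pick what fits your images
    plane.firstMaterial?.diffuse.contents = image
    plane.firstMaterial?.lightingModel = .constant

    let node = SCNNode(geometry: plane)

    // Start from the camera's transform so the plane inherits the camera's orientation...
    var transform = frame.camera.transform
    // ...then replace only the translation column so the plane sits at the object's location.
    transform.columns.3 = simd_float4(objectPosition, 1)
    node.simdTransform = transform
    return node
}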

Back face culling in SceneKit

I am currently trying to set up a rotating ball in SceneKit. I have created the ball and applied a texture to it.
ballMaterial.diffuse.contents = UIImage(named: ballTexture)
ballMaterial.doubleSided = true
ballGeometry.materials = [ballMaterial]
The current ballTexture is a semi-transparent texture as I am hoping to see the back face roll around.
However I get some strange culling where only half of the back facing polygons are shown even though the doubleSided property is set to true.
Any help would be appreciated, thanks.
This happens because the effects of transparency are draw-order dependent. SceneKit doesn't know to draw the back-facing polygons of the sphere before the front-facing ones. (In fact, it can't really do that without reorganizing the vertex buffers on the GPU for every frame, which would be a huge drag on render performance.)
The vertex layout for an SCNSphere has it set up like the lat/long grid on a globe: the triangles render in order along the meridians from 0° to 360°, so depending on how the sphere is oriented with respect to the camera, some of the faces on the far side of the sphere will render before the nearer ones.
To fix this, you need to force the rendering order — either directly, or through the depth buffer. Here's one way to do that, using a separate material for the inside surface to illustrate the difference.
// add two balls, one a child of the other
let node = SCNNode(geometry: SCNSphere(radius: 1))
let node2 = SCNNode(geometry: SCNSphere(radius: 1))
scene.rootNode.addChildNode(node)
node.addChildNode(node2)
// cull back-facing polygons on the first ball
// so we only see the outside
let mat1 = node.geometry!.firstMaterial!
mat1.cullMode = .back
mat1.transparent.contents = bwCheckers
// my "bwCheckers" uses black for transparent, white for opaque
mat1.transparencyMode = .rgbZero
// cull front-facing polygons on the second ball
// so we only see the inside
let mat2 = node2.geometry!.firstMaterial!
mat2.cullMode = .front
mat2.diffuse.contents = rgCheckers
// sphere normals face outward, so to make the inside respond
// to lighting, we need to invert them
let shader = "_geometry.normal *= -1.0;"
mat2.shaderModifiers = [SCNShaderModifierEntryPoint.geometry: shader]
(The shader modifier bit at the end isn't required — it just makes the inside material get diffuse shading. You could just as well use a material property that doesn't involve normals or lighting, like emission, depending on the look you want.)
You can also do this using a single node with a double-sided material by disabling writesToDepthBuffer, but that could also lead to undesirable interactions with the rest of your scene content — you might also need to mess with renderingOrder in that case.
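Here is a small Swift sketch of that single-node variant, reusing the ballTexture and ballGeometry names from the question (ballNode is an assumed name for the node holding ballGeometry); treat it as a starting point rather than a drop-in fix:

let ballMaterial = SCNMaterial()
ballMaterial.diffuse.contents = UIImage(named: ballTexture)
ballMaterial.isDoubleSided = true
// Skip depth writes so the far hemisphere isn't rejected by the depth test when it draws late...
ballMaterial.writesToDepthBuffer = false
ballGeometry.materials = [ballMaterial]
// ...but other content may now draw over the ball incorrectly, so you may also need to
// control when the ball renders relative to the rest of the scene.
ballNode.renderingOrder = 100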
macOS 10.13 and iOS 11 added SCNTransparencyMode.dualLayer, which, as far as I can tell, doesn't even require setting isDoubleSided to true (the documentation doesn't provide any information at all). So a simple solution that's working for me would be:
ballMaterial.diffuse.contents = UIImage(named: ballTexture)
ballMaterial.transparencyMode = .dualLayer
ballGeometry.materials = [ballMaterial]

Please help me correctly apply device rotation data

So I have a bit of a project I am trying to do. I am trying to get the device's rotation relative to gravity, and its translation from where it started. So basically getting "tracking" data for the device. I plan to apply this by making a 3D point that will mimic the data I record from the device later on.
Anyway, to attempt to achieve this I thought it would be best to work with SceneKit, so that I can see things in three dimensions just like the data I am trying to record. Right now I have been trying to get the ship to rotate so that it always looks like it's following gravity (as if it's sitting on the ground or something) no matter what the device rotation is. I figure once I have this down it will be a cinch to apply this to a point. So I made the following code:
if let attitude = motionManager.deviceMotion?.attitude {
    print(attitude)
    ship.eulerAngles.y = -Float(attitude.roll)
    ship.eulerAngles.z = -Float(attitude.yaw)
    ship.eulerAngles.x = -Float(attitude.pitch)
}
When I run only one of the rotation lines, everything is perfect; the ship behaves properly on that axis. However, when I apply all three axes at once it becomes chaotic and performs far from expected, with jitter and everything.
I guess my question is:
Does anyone know how to fix my code above so that the ship properly stays "upright" no matter what the orientation is?
J.Doe!
First there is a slight trick. If you want to use the iPhone lying down as the default position, you have to notice that the axes used by SceneKit are different from those used by DeviceMotion. Check the axes:
(axis diagram; source: apple.com)
The first thing you need to set is the camera position. When you start a SceneKit project it creates your camera at position (0, 0, 15). There is a problem with that:
eulerAngles = (0, 0, 0) would mean the object lies in the xz plane, but as long as you are looking from Z, you just see it from the side. For that to be equivalent to the iPhone lying down, you would need to set the camera to look from above, so it would be as if you were looking at it from the phone (like a camera).
// create and add a camera to the scene
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
scene.rootNode.addChildNode(cameraNode)
// place the camera
cameraNode.position = SCNVector3(x: 0, y: 15, z: 0)
// but then you need to make the cameraNode face the ship (the origin of the axis), rotating it
cameraNode.eulerAngles.x = -Float.pi * 0.5 // or Float.pi * 1.5
With this we are going to see the ship from above, so the first part is done.
Now we gotta make the ship remain "still" (facing the ground) with the device rotation.
// First we need to conform to SCNSceneRendererDelegate
class GameViewController: UIViewController, SCNSceneRendererDelegate {
    private let motion = CMMotionManager()
    ...
Then on viewDidLoad:
// Important if you remove SceneKit's initial action from the ship:
// the scene would then be static, and static scenes do not trigger renderer updates.
// Setting the isPlaying property to true forces updates anyway:
scnView.isPlaying = true

if motion.isDeviceMotionAvailable {
    motion.startDeviceMotionUpdates()
    motion.deviceMotionUpdateInterval = 1.0 / 60.0
}
Then we go to the update method
Look at the axes: the Y and Z axes are "switched" if you compare the SceneKit axes and the DeviceMotion axes. Z is up on the phone, while it points to the side in the scene, and Y is up in the scene, while it points to the side on the phone. So the pitch, roll and yaw, respectively associated with the X, Y and Z axes, will be applied as pitch, yaw and roll.
Notice I've made the roll value positive; that's because there is something else "switched". It's kind of hard to visualize. The Y axis of device motion is correlated with the Z axis of the scene. Now imagine an object rotating along this axis in the same direction (clockwise, for example): the rotations would go in opposite directions because of the disposition of the axes. (You can set the roll negative to see how it goes wrong.)
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    if let rot = motion.deviceMotion?.attitude {
        print("\(rot.pitch) \(rot.roll) \(rot.yaw)")
        ship.eulerAngles.x = -Float(rot.pitch)
        ship.eulerAngles.y = -Float(rot.yaw)
        ship.eulerAngles.z = Float(rot.roll)
    }
}
Hope that helps! See ya!

Integrate Oculus SDK distortion within a simple DirectX engine

I have been working for some time on a very simple DirectX 11 render engine. Today I managed to set up stereo rendering (rendering the scene twice into textures) for my Oculus Rift integration.
[Currently]
So what I am basically doing currently is:
I have a 1280 x 800 window
render the whole scene into RenderTargetViewLeft_ (1280 x 800)
render the content of RenderTargetViewLeft_ as an "EyeWindow" (like in the tutorial) to the left side of the screen (640 x 800)
render the whole scene into RenderTargetViewRight_ (1280 x 800)
render the content of RenderTargetViewRight_ as an "EyeWindow" (like in the tutorial) to the right side of the screen (640 x 800)
All of this works so far: I get the scene rendered twice into separate textures, ending up in a split screen.
[DirectX11 Render Loop]
bool GraphicsAPI::Render()
{
    bool result;

    // [Left Eye] The first pass of our render is to a texture now.
    result = RenderToTexture(renderTextureLeft_);
    if (!result)
    {
        return false;
    }

    // Clear the buffers to begin the scene.
    BeginScene(0.0f, 0.0f, 0.0f, 1.0f);

    // Turn off the Z buffer to begin all 2D rendering.
    TurnZBufferOff();

    // Render the eye window orthogonal to the screen.
    RenderEyeWindow(eyeWindowLeft_, renderTextureLeft_);

    // Turn the Z buffer back on now that all 2D rendering has completed.
    TurnZBufferOn();

    // [Right Eye] ------------------------------------
    result = RenderToTexture(renderTextureRight_);
    if (!result)
    {
        return false;
    }

    TurnZBufferOff();
    RenderEyeWindow(eyeWindowRight_, renderTextureRight_);
    TurnZBufferOn();

    // [End] Present the rendered scene to the screen.
    EndScene(); // calls Present
    return true;
}
[What I want to do now]
Now I am trying to achieve a barrel distortion with the Oculus SDK. Currently I am not concerned about a different virtual camera for the second image; I just want to achieve the barrel distortion for now.
I have read the Developer Guide [1] and also tried to look into the TinyRoom demo, but I don't completely understand what's necessary to achieve the distortion with the SDK in my already working DirectX engine.
In the Developer Guide's render texture initialization, they show how to create a texture for the API. I guess it means I need to set up ALL my render target views with the size the API requires (render targets are currently sized 1280 x 800) - and I guess I even have to change the DepthStencilView and backbuffer size as well.
The render-loop would look something like this then:
ovrHmd_BeginFrame(hmd, 0);
BeginScene(0.0f, 0.0f, 0.0f, 1.0f);
...
// Render Loop as the Code above
...
ovrHmd_EndFrame(hmd, headPose, EyeTextures);
// EndScene(); // calls Present, not needed on Oculus Rendering
I feel something's missing, so I am sure I haven't got all of that right.
[Update]
So, I managed to render the scene with barrel distortion using the Oculus API. However, the polygons of the left and right image are too far separated, but this could be caused by using my default 1280 x 800 texture size for the render targets. The camera stream also does not seem to be rendered orthogonal to the screen when moving the HMD. Going to do some further testing ;)
[1] - Oculus Developers Guide: https://developer.oculus.com/documentation/
The key point of 3D HMD support is generally to render the whole graphics scene twice: once with the left virtual camera and once with the right virtual camera. The "eye" distance between them varies, but it is approximately 65mm.
To store the scene, one has to render the graphics scene to textures. I rendered my scene first using my left virtual camera into renderTextureLeft_, and afterwards I rendered the exact same scene with my right virtual camera into renderTextureRight_. This technique is called "render to texture": instead of rendering the image directly into the backbuffer for display on the monitor, we save it into a separate texture for further post-processing.
So, how can we render to the Oculus Rift now? It's important to set up an hmd instance and configure it correctly first. This is explained really well in the official docs [1]: https://developer.oculus.com/documentation/pcsdk/latest/concepts/dg-render/
After both render textures (left eye, right eye) have been successfully rendered and the Oculus device has been configured accordingly, one needs to supply the Oculus SDK with both of the rendered textures so it can present them on the HMD's display and apply the barrel distortion using the Oculus SDK (not the client distortion, which is no longer supported in the newer SDK versions).
Here is my DirectX code, which supplies the Oculus SDK with both of my render textures and also performs the barrel distortion:
bool OculusHMD::RenderDistortion()
{
    ovrD3D11Texture eyeTexture[2]; // Gather data for eye textures
    Sizei size;
    size.w = RIFT_RESOLUTION_WIDTH;
    size.h = RIFT_RESOLUTION_HEIGHT;

    ovrRecti eyeRenderViewport[2];
    eyeRenderViewport[0].Pos = Vector2i(0, 0);
    eyeRenderViewport[0].Size = size;
    eyeRenderViewport[1].Pos = Vector2i(0, 0);
    eyeRenderViewport[1].Size = size;

    eyeTexture[0].D3D11.Header.API = ovrRenderAPI_D3D11;
    eyeTexture[0].D3D11.Header.TextureSize = size;
    eyeTexture[0].D3D11.Header.RenderViewport = eyeRenderViewport[0];
    eyeTexture[0].D3D11.pTexture = graphicsAPI_->renderTextureLeft_->renderTargetTexture_;
    eyeTexture[0].D3D11.pSRView = graphicsAPI_->renderTextureLeft_->GetShaderResourceView();

    eyeTexture[1].D3D11.Header.API = ovrRenderAPI_D3D11;
    eyeTexture[1].D3D11.Header.TextureSize = size;
    eyeTexture[1].D3D11.Header.RenderViewport = eyeRenderViewport[1];
    eyeTexture[1].D3D11.pTexture = graphicsAPI_->renderTextureRight_->renderTargetTexture_;
    eyeTexture[1].D3D11.pSRView = graphicsAPI_->renderTextureRight_->GetShaderResourceView();

    ovrHmd_EndFrame(hmd_, eyeRenderPose_, &eyeTexture[0].Texture);
    return true;
}
The presentation of the stereo image, including the barrel distortion as a kind of post-processing effect, is finally done via this line:
ovrHmd_EndFrame(hmd_, eyeRenderPose_, &eyeTexture[0].Texture);
Hopefully the code helps one or two others understand the pipeline better.

How to draw only the models "in front of the camera", fully or partially displayed - XNA

I am developing a small Minecraft-style game in XNA.
As there are a lot of cubes to draw, I created a function that draws only the cubes in front of the camera. But the problem is that if a cube is not completely inside my field of vision, it will not be drawn.
As you can see in the screenshot below, cubes located on the edges are not drawn.
How can I draw the cubes that are fully or partially visible in front of the camera, and not only those that are entirely visible?
Thanks a lot
Here is my code to check whether the frustum contains the model:
// Initialize the frustum
private void GenerateFrustum()
{
    Matrix viewProjection = View * Projection;
    Frustum = new BoundingFrustum(viewProjection);
}

// Update the frustum
private void UpdateFrustum()
{
    Matrix viewProjection = View * Projection;
    Frustum.Matrix = viewProjection;
}

// Adds instanced models to the transformation list only if the model is in the field of view!
private void UpdateTransformModelInstancied()
{
    for (int i = 0; i < ListInstance.Count; i++)
    {
        if (camera.Frustum.Contains(ListInstance[i].Transform.Translation) != ContainmentType.Disjoint)
        {
            instanceTransforms.Add(ListInstance[i].Transform);
        }
    }
    .......
}
Screenshot:
You're checking the position of the cubes. This means that you're not taking the cubes' physical size into account; you're treating them as a point, and if that point is out of view, then you won't render it. What you need to do is check whether any part of the cube is visible. The two simplest ways to do this are to work out a bounding shape and use that to check, or to check whether your view contains any of the cube's corner points, rather than just its position point.
Testing for containment of bounding structures for your cubes instead of just the cube's position would work, but that adds to your game's complexity by requiring you to manage a bunch of bounding structures, plus the math of testing a bounding structure as opposed to a point. If you need the bounding structures for other stuff, then go that route. But if not, I would simply take the cube's position, determine a point one cube-width to the left of it and one to the right, and test those points. If either is 'in', then draw the cube.
Vector3 leftRightVector = Matrix.Transpose(view).Right; // calc once per frame (or only when the camera rotates)
Vector3 testPoint1 = cubePosition + (leftRightVector * maxCubeWidth); // calc for each cube
Vector3 testPoint2 = cubePosition + (leftRightVector * -maxCubeWidth);
if (frustum.Contains(testPoint1) != ContainmentType.Disjoint ||
    frustum.Contains(testPoint2) != ContainmentType.Disjoint)
{
    // draw
}
As far as I can see, you are just checking the position of each cube. The most effective fix would be to make a BoundingSphere that fully encompasses one of your cubes, translate it to the cube's location, and do the Frustum.Contains check with that sphere instead of the position :)
Also: make the sphere just a tad larger than needed to account for the edges of the frustum. And remember: if you want to make a Minecraft clone, you will need to use some sort of batch-rendering technique. I recommend using an instance buffer, as there will be less data to send to the GPU, and it is easier to implement.
