How to achieve ARQuickLook's functionality using RealityKit - augmented-reality

I want to drag a ModelEntity and have it switch between planes smoothly, the way ARQuickLook does.
I want to use the gestures that RealityKit provides, but I don't know how to switch between different planes (using RealityKit's gestures) while keeping the shadow that RealityKit provides.
This is my project -- RealityKit_ARQL, but I think ARQuickLook still handles this better than my code.

Related

How to add directional light for shadows properly in sceneView of SceneKit?

I am learning ARKit. I'm placing virtual objects in an augmented reality scene and struggling with these problems.
I am using this demo project from GitHub.
1- How can I add a single directional light for all (separate) nodes in SceneKit's SCNView, and move the directional light with the camera position, so that the shadows move with the light direction?
If I translate an object, the shadows work as they should. But if I rotate the object, the shadow should not move on the plane; it moves because the light is at a fixed position.
2- The shadow looks fine only if I add a single object to the plane. If I add two or more objects, more directional lights are added to the scene, and every object then has more than one shadow. I want to restrict it to one shadow per object.
I added the light and the shadow plane in the SceneKit editor (not programmatically). Here are my SceneKit editor's screenshots.
3- I have read and confirmed that shadows are only added if I set the directional light's shadow mode to deferred. But in that case the app crashes when I remove all nodes from the scene view's root node. My node-removal code is:
self.sceneView.scene.rootNode.enumerateChildNodes { (node, _) -> Void in
    node.removeFromParentNode()
    print("removed", node.name as Any)
}
You can watch my app's video for more clarity: app video, how it is working now.
My requirement is one shadow per object, and the shadows should look realistic when I rotate and translate objects.
I also tried removing the light from the vase's .scn file and adding a separate light.scn file containing only the light, then adding these two nodes (vase and light) to the scene view. But no shadow appears.
The shader interprets a directional light only by its direction (the single light-red vector coming out of the node in Xcode); the light's position does not matter. Directional lights are often used to imitate sunlight.
I implemented a similar project so here is my attempt.
I added one directional light from a separate .scn file to the scene when I initialize the SCNScene.
My settings for it:
castsShadow: true
mode: deferred (my app does not crash when I remove my objects from the scene :/)
And that's actually all it took to make it work in my project.
One thing about your planes: I think you have not disabled castsShadow on them, which is why a user can see the planes.
Edit:
I could reproduce the crash. It occurs when the directional light itself is removed; that's why the app does not crash in my project. So you could do it like me and add the directional light once, in viewDidLoad() for example.
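A minimal sketch of that setup, creating the single directional light in code rather than loading it from a separate .scn file (the function name and the tilt angle here are illustrative):

```swift
import SceneKit

// One directional light for the whole scene. Adding this node once
// (e.g. in viewDidLoad()) and never removing it avoids the crash
// described above, which occurs when the light node itself is removed.
func makeShadowLightNode() -> SCNNode {
    let light = SCNLight()
    light.type = .directional
    light.castsShadow = true
    light.shadowMode = .deferred   // deferred shadows, as in the answer
    let node = SCNNode()
    node.light = light
    // Only the direction matters for a directional light; tilt it so
    // the shadow falls onto the plane at roughly 45 degrees.
    node.eulerAngles.x = -.pi / 4
    return node
}
```

The node would then be attached with something like `sceneView.scene.rootNode.addChildNode(makeShadowLightNode())`, separate from the content nodes that get removed.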

Camera Output onto SceneKit Object

I'm trying to use SceneKit in an application and am wanting to use images captured from an iPhone/iPad's camera (front or back) as a texture on an object in my SCNScene.
From everything I can tell from the documentation, as well as other questions here on StackOverflow, I should be able to create an AVCaptureVideoPreviewLayer with an appropriate AVCaptureSession and have it "just work". Unfortunately, it does not.
I'm using a line of code like this:
cubeGeometry.firstMaterial?.diffuse.contents = layer
Setting the layer as the material contents seems to work in principle: if I set the layer's backgroundColor, I see that background color on the object, but the camera capture does not appear. The layer itself is set up properly, because if I add it as a sublayer of the SCNView instead of using it on the object in the SCNScene, it appears correctly in UIKit.
An example project can be found here.
You can use the USE_FRONT_CAMERA constant in GameViewController.swift to toggle between the front and back camera, and the USE_LAYER_AS_MATERIAL constant to toggle between using the AVCaptureVideoPreviewLayer as the texture for a material or as a sublayer of the SCNView.
I've found a pretty hacky workaround for this using some OpenGL calls, but I'd prefer to have this code working as a more general and less fragile solution. Anyone know how to get this working properly on device? I've tried both iOS 8.0 and iOS 9.0 devices.
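For reference, the kind of setup the question describes can be sketched as below: a capture session feeding an AVCaptureVideoPreviewLayer, which is then used as the diffuse contents of a cube's material. The function name and sizes are made up, and this reproduces the reported approach; whether the video actually renders on the geometry is exactly what is in question.

```swift
import AVFoundation
import SceneKit

// Build a cube whose diffuse material contents is a live camera
// preview layer. Returns nil when no camera is available.
func makePreviewTexturedCube() -> SCNNode? {
    let session = AVCaptureSession()
    guard let camera = AVCaptureDevice.default(for: .video),
          let input = try? AVCaptureDeviceInput(device: camera),
          session.canAddInput(input) else { return nil }   // no camera
    session.addInput(input)

    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.frame = CGRect(x: 0, y: 0, width: 512, height: 512)

    let cube = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
    cube.firstMaterial?.diffuse.contents = previewLayer   // layer as texture
    session.startRunning()
    return SCNNode(geometry: cube)
}
```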

iOS Swift SpriteKit Achieving a sprite "explosion" effect

In my game I would like that, when a collision occurs, the designated sprite undergoes an "explosion" or "glass break" effect, in which the sprite is split into random pieces that then move at a random rate, speed, and angle. I would imagine that something like this requires particles or at the very least a texture atlas.
I found a little bit on this, but the questions/explanations were geared toward Objective-C. I am fairly new to iOS development and have only used Swift, so I can't really translate from one language to the other. Thanks.
I suggest trying the SpriteKit SKEmitterNode class for this. Add a new SpriteKit Particle File to the project and configure the type of explosion there. You don't need any code to configure it, as Apple has conveniently provided an editor window for us to easily change the values.
Once you are satisfied with the way the emitter looks, you can then open the Game Scene (assuming that is where this collision would be detected) and type:
// SKEmitterNode(fileNamed:) is failable, so unwrap the result
if let explosionEmitterNode = SKEmitterNode(fileNamed: "the file name") {
    sprite.addChild(explosionEmitterNode)
}
Here sprite is the node to which you would like to add the emitter effect. Or you could add it to the scene directly and set its position:
if let explosionEmitterNode = SKEmitterNode(fileNamed: "the file name") {
    explosionEmitterNode.position = CGPoint(x: 200, y: 300)
    addChild(explosionEmitterNode)
}

Transform to create barrel effect seen in Apple's Camera app

I'm trying to recreate the barrel effect that can be seen on the camera mode picker below:
(source: androidnova.org)
Do I have to use OpenGL in order to achieve this effect? What is the best approach?
I found a great library on GitHub that can be used to achieve this effect (https://github.com/Ciechan/BCMeshTransformView), but unfortunately it doesn't support animation and is therefore not usable.
I bet Apple used CGMeshTransform. It's just like BCMeshTransform, except it is a private API and fully integrates with Core Graphics. BCMeshTransformView was born when a developer discovered this.
The only easy option I see is:
Use CALayer.transform, which is a CATransform3D. You can simulate the barrel effect by adjusting the z position and y rotation of each item on the barrel. Also add a semi-transparent dark gradient (CAGradientLayer) over the wheel to simulate the choices getting darker toward the edges. This is simple to do, but won't look as smooth and realistic as an actual 3D barrel; it may still be good enough to create a convincing illusion. (To enable 3D perspective, set the transform's m34 component, e.g. view.layer.transform.m34 = -1.0 / 500.0 or similar.)
http://www.thinkandbuild.it/introduction-to-3d-drawing-in-core-animation-part-1/
The hardest option is using a custom OpenGL view that makes a barrel shape and applies your contents on top of it as a texture. I would expect that you run into most of the complexities behind creating BCMeshTransformView, and have difficulty supporting animations just like BCMeshTransformView did.
You may still be able to use BCMeshTransformView though. BCMeshTransformView is slow at processing content animations such as color changes, but is very fast at processing geometry changes. So you could use it to do a barrel effect, as long as you define the barrel effect entirely in terms of mesh geometry changes (instead of as content changes like using a scroll view or adjusting subview positions). You would need to do gesture handling + scrolling yourself instead of using UIScrollView though, which is tricky and tedious to get right.
Considering the options, I would want to fudge it by using 3D transforms, then move to other options only if I can't create a convincing illusion using 3D transforms.
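The transform-based option above amounts to rotating each item around a shared axis and pushing it out to the rim of a cylinder. A sketch of the per-item math (the perspective value and the radius are conventional, made-up numbers):

```swift
import QuartzCore

// Per-item transform for a fake 3D barrel: `count` items sit at equal
// angle steps around a cylinder of the given radius.
func barrelTransform(index: Int, count: Int, radius: CGFloat) -> CATransform3D {
    let anglePerItem = (2 * CGFloat.pi) / CGFloat(count)
    let angle = CGFloat(index) * anglePerItem

    var t = CATransform3DIdentity
    t.m34 = -1 / 500                      // enable depth (perspective)
    // Rotate around the barrel's vertical axis, then push the item
    // out to the rim of the cylinder.
    t = CATransform3DRotate(t, angle, 0, 1, 0)
    t = CATransform3DTranslate(t, 0, 0, radius)
    return t
}
```

Each item's layer would then get something like `layer.transform = barrelTransform(index: i, count: items.count, radius: 200)`, with the darkening gradient layered on top.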

iOS : Creating a 3D Compass

I want to make a 3D metal compass in iOS which will have a movable cover.
That is, when you touch it with three fingers and move your fingers upward, the cover keeps moving with your fingers, and after a certain distance it opens. Once you pull it down with three fingers again, it closes. I have attached a sketch of what I'm thinking.
Is it possible using core animations and CALayers? Or would I have to use OpenGL ES?
First you should obviously create a textured 3D model in an app like 3ds Max or Maya, then export it to a suitable format. The simplest is OBJ (there are lots of examples of how to load it). There are two options for animation:
Do the animation manually by rotating the cover object. It's probably the easiest way to do it.
Create the animation in your 3D editor and then interpolate between frames. This way you can get a much more realistic result. However, in this case the OBJ format is not suitable, but COLLADA is; to load it I suggest the Assimp library.
And if you don't need any advanced interaction, another option is pseudo-3D: just pre-render all the compass animation frames and play that animation back as a 2D texture.
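As a sketch of the "manual rotation" option: if the model is loaded into SceneKit (the question also considers OpenGL ES; the function name and angle here are made up), the cover could be swung open with an animated rotation, assuming the cover node's pivot sits on the hinge edge:

```swift
import SceneKit

// Rotate the compass cover around its x axis to "open" it.
// Assumes the cover's pivot is already placed at the hinge edge.
func openCover(_ cover: SCNNode, duration: TimeInterval = 0.6) {
    let open = SCNAction.rotateBy(x: -.pi * 0.75, y: 0, z: 0, duration: duration)
    open.timingMode = .easeOut
    cover.runAction(open)
}
```

Closing would be the same rotation with the opposite sign, driven by the three-finger pan gesture's translation.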
