I am trying to programmatically create a camera for my MainScene.scn file.
I need to create the camera in code because I want to make a camera orbit node, and this is the only way I can think of to do that. I would also like to keep using my scene file.
This is my code (simplified) from my view controller:
import UIKit
import SceneKit
class GameViewController: UIViewController {

    // MARK: Scene objects
    private var gameView: SCNView!
    private var gameScene: SCNScene!
    private var gameCameraNode: SCNNode!

    // MARK: View controller overrides
    override func viewDidLoad() {
        super.viewDidLoad()

        // Setup game
        initView()
        initScene()
        initCamera()
    }
}

private extension GameViewController {

    // Initialise the view and scene
    private func initView() {
        self.view = SCNView(frame: view.bounds)      // Create an SCNView to play the game within
        gameView = self.view as? SCNView             // Assign the view
        gameView.showsStatistics = true              // Show game statistics
        gameView.autoenablesDefaultLighting = true   // Allow default lighting
        gameView.antialiasingMode = .multisampling2X // Use anti-aliasing for a smoother look
    }

    private func initScene() {
        gameScene = SCNScene(named: "art.scnassets/MainScene.scn")! // Assign the scene
        gameView.scene = gameScene                                  // Set the game view's scene
        gameView.isPlaying = true                                   // The scene is playing (not paused)
    }

    private func initCamera() {
        gameCameraNode = SCNNode()
        gameCameraNode.camera = SCNCamera()
        gameCameraNode.position = SCNVector3(0, 3.5, 27)
        gameCameraNode.eulerAngles = SCNVector3(-2, 0, 0)
        gameScene.rootNode.addChildNode(gameCameraNode)
        gameView.pointOfView = gameCameraNode
    }
}
This code can easily be pasted to replace the default view controller code. All you need to do is add the MainScene.scn and drag in something like a box.
If you try the code, the camera is in the wrong position. If I set the same properties on a camera in the scene file, it works, but that is not what I am looking for.
From what I have read, SceneKit may be creating a default camera, as said here and here. However, I am setting the pointOfView property just as those answers describe, but it still does not work.
How can I place my camera in the correct position in the scene programmatically?
After a while, I discovered that you can actually add empty nodes directly within the Scene Builder. I originally only wanted a programmatic answer because I wanted to make a camera orbit node like the ones in the questions I linked to. Now that I can add an empty node, I can make the camera a child of the orbit node.
This requires no code, unless you want to access the nodes (e.g. changing position or rotation):
gameCameraNode = gameView.pointOfView // Use camera object from scene
gameCameraOrbitNode = gameCameraNode.parent // Use camera orbit object from scene
Here are the steps to create an orbit node:
1) Drag it in from the Object Library:
2) Set up your Scene Graph like so:
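If you then want to drive the orbit from code as well, here is a minimal sketch (assuming the gameCameraNode and gameCameraOrbitNode properties from above, plus a hypothetical pan gesture recogniser added to gameView); rotating the orbit node makes its child camera orbit around it:

// Hypothetical pan handler: rotating the orbit node orbits the camera (its child) around it
@objc func handleCameraPan(_ sender: UIPanGestureRecognizer) {
    let translation = sender.translation(in: gameView)

    // Yaw and pitch the orbit node; the camera keeps its offset and orbits around it
    gameCameraOrbitNode.eulerAngles.y -= Float(translation.x) * 0.005
    gameCameraOrbitNode.eulerAngles.x -= Float(translation.y) * 0.005

    sender.setTranslation(.zero, in: gameView)
}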
I was trying to solve this problem (TL;DR: an SKScene overlaid on an SCNView via the overlaySKScene property wasn't being redrawn when children were added to or removed from it) by using view.setNeedsDisplay() to force a redraw, since the SCNView wasn't doing it automatically.
The problem with using view.setNeedsDisplay() was that CPU usage spiked to 50%, and I assumed this was because the entire SCNView had to redraw its contents, which included a 3D SCNScene as well. My solution was to use view.setNeedsDisplay(_: CGRect) to minimise the region that needs to be redrawn. However, to my surprise, no matter what I passed as the CGRect value, the SCNView refused to render the SKScene contents that had been overlaid on it.
Steps to reproduce the issue
Open the SceneKit game template
In the Main (Base) storyboard, set the "Scene" attribute on the SCNView to "art.scnassets/ship.scn" (or whatever the path is)
Delete all the boilerplate code and leave just:
class CustomSKScene: SKScene {
    override func didMove(to view: SKView) {
        let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(userTapped(_:)))
        view.addGestureRecognizer(tapGestureRecognizer)
    }

    @objc func userTapped(_ sender: UITapGestureRecognizer) {
        let finger = convertPoint(fromView: sender.location(in: view))
        let circle = SKShapeNode(circleOfRadius: 25)
        circle.position = finger
        addChild(circle)
    }
}

class GameViewController: UIViewController {
    private var gameView: SCNView { view as! SCNView }

    override func viewDidLoad() {
        super.viewDidLoad()
        gameView.overlaySKScene = CustomSKScene(size: gameView.bounds.size)
    }
}
(This should still allow the ship scene to render when you run the app)
When you tap the screen, no circles show up. Fix this by adding view!.setNeedsDisplay() after the addChild call. Notice how CPU usage goes up to around 40-50% if you tap repeatedly after adding this fix.
Replace view!.setNeedsDisplay() with view!.setNeedsDisplay(view!.frame) (which should be equivalent).
At this point we are back to square one: the circles are no longer showing up on screen, and confusion ensues. view.setNeedsDisplay() and view.setNeedsDisplay(view.frame) should be equivalent, yet nothing is redrawn.
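For reference, this is the tap handler with the two variants described above (a sketch; the region-based call is the one that never redraws):

@objc func userTapped(_ sender: UITapGestureRecognizer) {
    let finger = convertPoint(fromView: sender.location(in: view))
    let circle = SKShapeNode(circleOfRadius: 25)
    circle.position = finger
    addChild(circle)

    view!.setNeedsDisplay()               // Workaround: circles appear, but CPU usage spikes to ~40-50%
    // view!.setNeedsDisplay(view!.frame) // "Equivalent" region-based call: nothing is redrawn
}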
Does anyone know how to fix this problem? I suspect it only happens when using the overlaySKScene property, so maybe there is some caveat in its implementation that I am unaware of.
Some observations:
When you debug the view hierarchy, the overlaid SKScene doesn't show up anywhere, which is strange
sender.view === view returns true
(sender.view as! SCNView).overlaySKScene === self also returns true
I am fairly new to Swift and have just started playing with RealityKit and ARKit. I am working on a personal project where I would like a 3D object to stick to the camera in first person, similar to AR Angry Birds or any FPS game. I have seen a few examples in SceneKit or SpriteKit; I am sure this is just a misunderstanding of how anchoring entities works.
My main question is:
How would I go about sticking an object I created in Reality Composer to the camera in first person? I want to create a Reality Composer scene, in this case an arm cannon, and have it fire when tapped.
Below is the code for my ViewController
extension ViewController: ARSessionDelegate
{
    func session(_ session: ARSession, didUpdate frame: ARFrame)
    {
        guard let arCamera = session.currentFrame?.camera else { return }
        // Probably where I update the location of my reality experience
    }
}

class ViewController: UIViewController
{
    @IBOutlet var arView: ARView!

    override func viewDidLoad()
    {
        super.viewDidLoad()
        arView.session.delegate = self

        // Load the "ArmCannon" scene from the "Experience" Reality file
        let armCannonAnim = try! Experience.loadArmcannon()

        // Create anchor to anchor the arm cannon to
        let anchor = AnchorEntity(.camera)
        anchor.transform = arView.cameraTransform

        // Add the anchor to the scene
        arView.scene.addAnchor(anchor)

        // Setup tap gesture on the arm cannon
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(onTap))
        arView.addGestureRecognizer(tapGesture)

        // Add the cannon animation to arView
        arView.scene.anchors.append(armCannonAnim)
    }

    @IBAction func onTap(_ sender: UITapGestureRecognizer)
    {
        let tapLocation = sender.location(in: arView)

        // Get the entity at the location we've tapped, if one exists
        if let cannonFiring = arView.entity(at: tapLocation)
        {
            print(cannonFiring.name)
            print("firing Cannon")
        }
    }
}
I have looked at and read Track camera position with RealityKit and Where is the .camera AnchorEntity located?
Instead of:
arView.scene.anchors.append(armCannonAnim)
put:
anchor.addChild(armCannonAnim)
You need armCannonAnim to be a child of the camera, and the anchor object is an anchor at the camera's transform. This is equivalent to adding a child node to the camera node in SceneKit.
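Put together, the relevant part of the question's viewDidLoad would look something like this (a sketch of the change above, keeping the question's names):

// Load the "ArmCannon" scene from the "Experience" Reality file
let armCannonAnim = try! Experience.loadArmcannon()

// Create an anchor that follows the camera
let anchor = AnchorEntity(.camera)

// Make the cannon a child of the camera anchor instead of appending it
// to the scene as its own anchor
anchor.addChild(armCannonAnim)

// Add the camera anchor (and the cannon with it) to the scene
arView.scene.addAnchor(anchor)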
I can't find a good explanation of what an SCNCamera is and what its purpose is. This is Apple's definition:
A set of camera attributes that can be attached to a node to provide a point of view for displaying the scene.
This definition isn't clear to me, because I set up the scene and added an SCNNode without attaching an SCNCamera to it. The view from the device's camera shows the SCNNode at the position I placed it with no problem, and the scene is displayed fine.
What is the difference between the device's camera and an SCNCamera?
What is the benefit of attaching an SCNCamera to an SCNNode vs not using one?
If I have multiple SCNNodes (all detached, with no hierarchy amongst each other), does each node need its own SCNCamera?
If I have multiple SCNNodes in a hierarchy (a parent node with child nodes), does each node need its own SCNCamera, or just the parent node?
lazy var sceneView: ARSCNView = {
    let sceneView = ARSCNView()
    sceneView.translatesAutoresizingMaskIntoConstraints = false
    sceneView.delegate = self
    return sceneView
}()

let configuration = ARWorldTrackingConfiguration()

override func viewDidLoad() {
    super.viewDidLoad()
    // pin sceneView to the view

    let material = SCNMaterial()
    material.diffuse.contents = UIImage(named: "earth")

    let plane = SCNPlane(width: 0.33, height: 0.33)
    plane.materials = [material]
    plane.firstMaterial?.isDoubleSided = true

    let myNode = SCNNode(geometry: plane)
    myNode.name = "earth"
    myNode.position = SCNVector3(0.0, 0.6, -0.9)

    sceneView.scene.rootNode.addChildNode(myNode)
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    sceneView.session.run(configuration, options: [])
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    sceneView.session.pause()
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
In SceneKit, the SCNCamera represents the point of view from which the user sees a scene. Ray Wenderlich provides a good explanation:
Think back to the analogy of the movie set from Chapter 1: to shoot a scene, you'd position a camera looking at the scene and the resulting image of that scene would be from the camera's perspective.
Scene Kit works in a similar fashion; the position of the node that contains the camera determines the point of view from which you view the scene.
You do not need an SCNCamera for each node. You only need one camera for each point of view you want to show, and often just one overall. You can move a single camera around the scene by changing the position of the node it is attached to.
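As an illustration, here is a minimal sketch (plain SceneKit; scene and scnView are assumed to exist already) of one camera node serving a whole scene and being moved between viewpoints:

// One camera node for the whole scene; reposition or animate it to change the view
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
scene.rootNode.addChildNode(cameraNode)
scnView.pointOfView = cameraNode

// Start by looking at one part of the scene...
cameraNode.position = SCNVector3(0, 2, 10)

// ...then animate to another viewpoint with an implicit transaction
SCNTransaction.begin()
SCNTransaction.animationDuration = 1.0
cameraNode.position = SCNVector3(5, 2, 10)
SCNTransaction.commit()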
It looks like you're working with ARKit, which behaves a little differently. When using an ARSCNView, as opposed to a non-AR SCNView, you get the following behavior:
The view automatically renders the live video feed from the device camera as the scene background.
The world coordinate system of the view's SceneKit scene directly responds to the AR world coordinate system established by the session configuration.
The view automatically moves its SceneKit camera to match the real-world movement of the device.
You do not need to worry as much about the scene's camera in this case, as it is automatically being controlled by the system so that it matches the device's movement for AR.
For more detail, see Apple's documentation on SCNCamera: SCNCamera - SceneKit
I got the answer to my question from within this answer. Basically, in ARKit with ARSCNView the camera node comes from sceneView.pointOfView, but in SceneKit you need to create a camera node yourself to get a point of view (code below).
Getting the camera node
To get the camera node, it depends on whether you're using SceneKit, ARKit, or another framework. Below are examples for ARKit and SceneKit.
With ARKit, you have ARSCNView to render the 3D objects of an SCNScene overlapping the camera content. You can get the camera node from ARSCNView's pointOfView property:
let cameraNode = sceneView.pointOfView
For SceneKit, you have an SCNView that renders the 3D objects of an SCNScene. You can create camera nodes and position them wherever you want, so you'd do something like:
let scnScene = SCNScene()
// (Configure scnScene here if necessary)
scnView.scene = scnScene
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(0, 5, 10) // For example
scnScene.rootNode.addChildNode(cameraNode)
Once a camera node has been set up, you can access the current camera in the same way as in ARKit:
let cameraNode = scnView.pointOfView
I want to write the iOS game example that uses SpriteKit in Swift, which Xcode provides, purely in code. That means I don't want to use the GameScene.sks, actions.sks and the main.storyboard. I know how to write it without the storyboard, but I can't get it working without the .sks files. Can you tell me what I have to change, or provide me with a full project?
You will first need a view controller. You can adjust the properties however you would like. Here is mine:
import UIKit
import SpriteKit

class GameViewController: UIViewController {

    // MARK: View Controller overrides
    override func viewDidLoad() {
        super.viewDidLoad()

        view = SKView(frame: view.bounds)

        if let view = self.view as? SKView {
            // Initialise the scene
            let scene = GameScene(size: view.bounds.size) // <-- IMPORTANT: initialise your first scene yourself (as you have no .sks)

            // Set the scale mode to scale to fit the window
            scene.scaleMode = .aspectFill

            // Present the scene
            view.presentScene(scene)

            // Scene properties
            view.showsPhysics = false
            view.ignoresSiblingOrder = true
            view.showsFPS = true
            view.showsNodeCount = true
        }
    }
}
Then you create a class for your first scene. Mine is called GameScene, which is what the view controller initialises. Make sure it is a subclass of SKScene. It will look something like this:
import SpriteKit

class GameScene: SKScene {
    /* All scene logic (which you could extend across multiple files) */
}
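For instance, a minimal sketch of what that scene logic might contain, built purely in code (the label here is just a placeholder):

override func didMove(to view: SKView) {
    backgroundColor = .black

    // Everything the .sks file would normally provide is created in code instead
    let label = SKLabelNode(text: "Hello, SpriteKit")
    label.fontSize = 32
    label.position = CGPoint(x: size.width / 2, y: size.height / 2)
    addChild(label)
}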
If you have any questions, let me know :)
I created an .sks particle emitter based on the spark template.
My app is a normal app (not a game). When a user taps a button, I present a new view controller modally over full screen so that I can blur the background.
In this modal, I created a view and gave it a class of SCNView (see the image below):
How can I load the particle .sks file and play its animation in that view controller, on the Particles view?
Update
How to load a SceneKit particle system in a view controller?
As mentioned by @mnuages, you can use a .scnp file instead of .sks, which is a SceneKit particle system.
So the steps are:
Create a SceneKit particle system; I called mine ConfettiSceneKitParticleSystem.scnp
Then, in your storyboard, select the view and set its class to SCNView, as in the screenshot in the question
In your UIViewController:
class SomeVC: UIViewController {

    @IBOutlet weak var particles: SCNView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let scene = SCNScene()
        let particlesNode = SCNNode()
        let particleSystem = SCNParticleSystem(named: "ConfettiSceneKitParticleSystem", inDirectory: "")

        particlesNode.addParticleSystem(particleSystem!)
        scene.rootNode.addChildNode(particlesNode)

        particles.scene = scene
    }
}
Et voilà... you have your animation :)
.sks files are SpriteKit particle systems. You can also create SceneKit particle systems in Xcode; they are .scnp files.
A .scnp file is basically an archived SCNParticleSystem that you can load with NSKeyedUnarchiver and add to your scene using -addParticleSystem:withTransform:.
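A minimal sketch of that approach (reusing the ConfettiSceneKitParticleSystem name from the other answer and the SCNParticleSystem(named:inDirectory:) convenience loader instead of unarchiving by hand; scnView is assumed to be an SCNView whose scene is already set):

// Load the archived particle system from the app bundle and attach it to the scene
if let confetti = SCNParticleSystem(named: "ConfettiSceneKitParticleSystem", inDirectory: nil) {
    // The transform controls where in the scene the particles are emitted from
    scnView.scene?.addParticleSystem(confetti, transform: SCNMatrix4MakeTranslation(0, 1, 0))
}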
It might be easier to create a SpriteKit Particle File (which is what you did). You can add it to your main view in your UIViewController.
Add this somewhere:
extension SKView {
    convenience init(withEmitter name: String) {
        self.init()
        self.frame = UIScreen.main.bounds
        backgroundColor = .clear

        let scene = SKScene(size: self.frame.size)
        scene.backgroundColor = .clear

        guard let emitter = SKEmitterNode(fileNamed: name + ".sks") else { return }
        emitter.name = name
        emitter.position = CGPoint(x: self.frame.size.width / 2, y: self.frame.size.height / 2)
        scene.addChild(emitter)

        presentScene(scene)
    }
}
To use:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    view.addSubview(SKView(withEmitter: "ParticleFileName"))
}