Preload SceneKit scene - iOS

There seems to be a delay between when my SCNView is ready and when it actually renders in my view - I always see a split second of white before the view appears. What can I do to prevent this?
func loadModels(frame: CGRect, type: ItemViewType, completion: (() -> Void)!) {
    self.container = self.scene.rootNode.childNodeWithName("Container", recursively: true)!
    self.item = self.container.childNodeWithName("Item", recursively: true)
    self.pattern = self.item.childNodeWithName("TopLayer", recursively: true)
    self.buttom = self.item.childNodeWithName("BottomRubber", recursively: true)!
    self.shell = self.item.childNodeWithName("Shell", recursively: true)!
    self.leftRimLit = self.scene.rootNode.childNodeWithName("leftRimLit", recursively: true)
    self.specDirLit = self.scene.rootNode.childNodeWithName("specDirLit", recursively: true)
    self.omni = self.scene.rootNode.childNodeWithName("omni", recursively: true)

    self.scnView = SCNView(frame: frame)
    self.scnView!.scene = self.scene
    self.scnView!.autoenablesDefaultLighting = false

    // default position
    self.scnView?.pointOfView?.position = SCNVector3Make(-1.41011, -21.553, 3.34132)
    self.scnView?.pointOfView?.eulerAngles = SCNVector3Make(1.58788, -0.0114007, -0.0574705)
    self.scnView?.pointOfView?.scale = SCNVector3Make(1, 1, 1.5)

    self.item.pivot = SCNMatrix4MakeTranslation(0, 0, 4)
    self.item.position = SCNVector3(x: 0.387027979, y: 9.58867455, z: 3.71733069)
    self.item.scale = SCNVector3(x: 1.0999999, y: 1.0999999, z: 1.10000002)
    self.item.rotation = SCNVector4(x: 0.865282714, y: 0.455411941, z: 0.20948948, w: 2.20000005)
    self.container.pivot = SCNMatrix4MakeTranslation(0, 0, 0)

    self.setBackground(type)
    self.setItemPosition(type)
    completion()
}
All of the positioning and the background are correct once the view does finally appear.
self.SO3DItemView = SO3DItemModel(frame: self.view.frame)
self.SO3DItemView!.loadModels(self.view.frame, type: .Tada, completion: { () -> Void in
    self.view.insertSubview(self.SO3DItemView!.scnView!, belowSubview: self.overlay)
    UIView.animateWithDuration(0.7, delay: 0.0, options: .CurveEaseIn, animations: { () -> Void in
        self.overlay.alpha = 0.0
    }, completion: { (finished: Bool) -> Void in
        self.view.sendSubviewToBack(self.overlay)
    })
})
Right now I have an overlay that fades out to mitigate some of this, but on slow devices (iPhone 5) I can still see the white before the scnView finishes appearing.

The SCNSceneRenderer protocol (which your SCNView conforms to) has a method, prepareObjects:withCompletionHandler:, that will accept an SCNScene. This instructs SceneKit to perform any initialisation needed up front; otherwise it waits until the scene is presented before transferring data to the GPU, etc.
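In current Swift this is exposed as prepare(_:completionHandler:). A minimal, untested sketch; here scnView and scene stand in for your own view and scene:

```swift
// Ask SceneKit to upload the scene's resources to the GPU ahead of time,
// and only present the scene once preparation reports success.
scnView.prepare([scene]) { success in
    DispatchQueue.main.async {
        if success {
            scnView.scene = scene
        }
    }
}
```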
FWIW I've settled on using background threads to get around this same issue. Simple example below...
let scene = SCNScene()
let priority = DISPATCH_QUEUE_PRIORITY_DEFAULT
dispatch_async(dispatch_get_global_queue(priority, 0)) {
    // lengthy scene setup process
    let slowNode = SCNNode(geometry: SCNSphere(radius: 1.0))
    sleep(2)
    dispatch_async(dispatch_get_main_queue()) {
        scene.rootNode.addChildNode(slowNode)
    }
}
let quickNode = SCNNode(geometry: SCNBox(width: 1.0, height: 1.0, length: 1.0, chamferRadius: 0.0))
scene.rootNode.addChildNode(quickNode)
scnView.scene = scene
If you were to run this code, which I haven't, you should see the box appear immediately when the app launches. Two seconds later the sphere will be added to the scene.

Related

Continuously rendering new child node objects on SceneKit scene in Swift

I have a question about a SceneKit project I'm working on. I have a scene where I randomly spawn cubes and add them to my scene's root node. They are rendered at the beginning, but they quickly stop being rendered, so I don't see the new objects being spawned. I do see them again when I tap on the screen (that is, when I perform an action on the scene), or sometimes some of them are randomly rendered, and I don't know why.
I have tried setting rendersContinuously to true but this does not change anything.
Here is the cube spawning thread :
DispatchQueue.global(qos: .userInitiated).async {
    while true {
        self.spawnShape()
        sleep(1)
    }
}
Here is how I add them to the child nodes :
let geometryNode = SCNNode(geometry: geometry)
geometryNode.simdWorldPosition = simd_float3(Float.random(in: -10..<10), 2, shipLocation+Float.random(in: 20..<30))
self.mainView.scene!.rootNode.addChildNode(geometryNode)
And here's the endless action applied to the camera and my main node :
ship.runAction(SCNAction.repeatForever(SCNAction.moveBy(x: 0, y: 0, z: 30, duration: 1)))
camera.runAction(SCNAction.repeatForever(SCNAction.moveBy(x: 0, y: 0, z: 30, duration: 1)))
The added geometryNode cubes stop being rendered unless I tap on screen
How can I force the rendering of these new child node objects on the scene even when I don't touch the screen?
Thank you
EDIT asked by ZAY:
Here is the beginning of my code basically:
override func viewDidLoad() {
    super.viewDidLoad()
    guard let scene = SCNScene(named: "art.scnassets/ship.scn")
        else { fatalError("Unable to load scene file.") }

    let scnView = self.view as! SCNView
    self.mainView = scnView
    self.mainView.rendersContinuously = true

    self.ship = scene.rootNode.childNode(withName: "ship", recursively: true)!
    self.camera = scene.rootNode.childNode(withName: "camera", recursively: true)!
    self.ship.renderingOrder = 1
    self.ship.simdWorldPosition = simd_float3(0, 0, 0)
    self.camera.simdWorldPosition = simd_float3(0, 15, -35)
    self.mainView.scene = scene

    DispatchQueue.global(qos: .userInitiated).async {
        while true {
            self.spawnShape()
            sleep(1)
        }
    }

    ship.runAction(SCNAction.repeatForever(SCNAction.moveBy(x: 0, y: 0, z: 30, duration: 1)))
    camera.runAction(SCNAction.repeatForever(SCNAction.moveBy(x: 0, y: 0, z: 30, duration: 1)))

    let tap = UILongPressGestureRecognizer(target: self, action: #selector(tapHandler))
    tap.minimumPressDuration = 0
    self.mainView.addGestureRecognizer(tap)
}
When I tap on screen, this code is called:
@objc func tapHandler(gesture: UITapGestureRecognizer) {
    let p = gesture.location(in: self.mainView)
    let turnDuration: Double = 0.3
    print("x: \(p.x) y: \(p.y)")
    if gesture.state == .began {
        if p.x >= self.screenSize.width / 2 {
            self.delta = 0.5
        } else {
            self.delta = -0.5
        }
        self.ship.runAction(SCNAction.rotateBy(x: 0, y: 0, z: self.delta, duration: turnDuration))
        return
    }
    if gesture.state == .changed {
        return
    }
    self.ship.runAction(SCNAction.rotateBy(x: 0, y: 0, z: -self.delta, duration: turnDuration))
}
Basically it rotates my ship to right or left depending on which side of screen I tap, and it displays the coordinates where I tapped in the console. When I release, the ship goes back to the initial position/rotation.
DispatchQueue.global(qos: .userInitiated).async {
    while true {
        self.spawnShape()
        sleep(1)
    }
}
Aren't you blocking that shared queue by doing that? Use a Timer if you want to run something periodically.
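As a hedged sketch, the busy loop above could be replaced with a repeating Timer on the main run loop (spawnShape is the asker's own method; spawnTimer is a hypothetical property to keep the timer alive):

```swift
// Spawn a shape once per second without blocking any queue.
// Scene-graph mutations then happen on the main thread, where SceneKit expects them.
self.spawnTimer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
    self?.spawnShape()
}
```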

How to play a video using on Camera (SCNNode) Image Tracking using Swift

I am new to ARKit. I am using image tracking to detect an image and display content beside it.
On the left side of the image I display some information about it, and on the right side I show a web view.
I just want to display a video over the top of the image.
Can you please help me display a video over the image (top)? I have attached the code for the info and web views; similarly, I want to display the video.
func displayDetailView(on rootNode: SCNNode, xOffset: CGFloat) {
    let detailPlane = SCNPlane(width: xOffset, height: xOffset * 1.4)
    detailPlane.cornerRadius = 0.25

    let detailNode = SCNNode(geometry: detailPlane)
    detailNode.geometry?.firstMaterial?.diffuse.contents = SKScene(fileNamed: "DetailScene")

    // Due to the origin of the iOS coordinate system, SCNMaterial's content appears upside down, so flip the y-axis.
    detailNode.geometry?.firstMaterial?.diffuse.contentsTransform = SCNMatrix4Translate(SCNMatrix4MakeScale(1, -1, 1), 0, 1, 0)
    detailNode.position.z -= 0.5
    detailNode.opacity = 0

    rootNode.addChildNode(detailNode)
    detailNode.runAction(.sequence([
        .wait(duration: 1.0),
        .fadeOpacity(to: 1.0, duration: 1.5),
        .moveBy(x: xOffset * -1.1, y: 0, z: -0.05, duration: 1.5),
        .moveBy(x: 0, y: 0, z: -0.05, duration: 0.2)
    ]))
}
func displayWebView(on rootNode: SCNNode, xOffset: CGFloat) {
    // Xcode yells at us about the deprecation of UIWebView in iOS 12.0, but there is currently
    // a bug that does not allow us to use a WKWebView as a texture for our webViewNode
    // Note that UIWebViews should only be instantiated on the main thread!
    DispatchQueue.main.async {
        let request = URLRequest(url: URL(string: "https://www.youtube.com/watch?v=QvzVCOiC-qs")!)
        let webView = UIWebView(frame: CGRect(x: 0, y: 0, width: 400, height: 672))
        webView.loadRequest(request)

        let webViewPlane = SCNPlane(width: xOffset, height: xOffset * 1.4)
        webViewPlane.cornerRadius = 0.25

        let webViewNode = SCNNode(geometry: webViewPlane)
        webViewNode.geometry?.firstMaterial?.diffuse.contents = webView
        webViewNode.position.z -= 0.5
        webViewNode.opacity = 0

        rootNode.addChildNode(webViewNode)
        webViewNode.runAction(.sequence([
            .wait(duration: 3.0),
            .fadeOpacity(to: 1.0, duration: 1.5),
            .moveBy(x: xOffset * 1.1, y: 0, z: -0.05, duration: 1.5),
            .moveBy(x: 0, y: 0, z: -0.05, duration: 0.2)
        ]))
    }
}
I had called this methods in the below function ..
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    updateQueue.async {
        let physicalWidth = imageAnchor.referenceImage.physicalSize.width
        let physicalHeight = imageAnchor.referenceImage.physicalSize.height

        // Create a plane geometry to visualize the initial position of the detected image
        let mainPlane = SCNPlane(width: physicalWidth, height: physicalHeight)
        mainPlane.firstMaterial?.colorBufferWriteMask = .alpha

        // Create a SceneKit root node with the plane geometry to attach to the scene graph
        // This node will hold the virtual UI in place
        let mainNode = SCNNode(geometry: mainPlane)
        mainNode.eulerAngles.x = -.pi / 2
        mainNode.renderingOrder = -1
        mainNode.opacity = 1

        // Add the plane visualization to the scene
        node.addChildNode(mainNode)

        // Perform a quick animation to visualize the plane on which the image was detected.
        // We want to let our users know that the app is responding to the tracked image.
        self.highlightDetection(on: mainNode, width: physicalWidth, height: physicalHeight, completionHandler: {
            // Introduce virtual content
            self.displayDetailView(on: mainNode, xOffset: physicalWidth)

            // Animate the WebView to the right
            self.displayWebView(on: mainNode, xOffset: physicalWidth)
        })
    }
}
Any help is appreciated.
The effect you are trying to achieve is to play a video on an SCNNode. To do this, in your renderer function you need to create an AVPlayer and use it to create an SKVideoNode. Then create an SKScene and add the SKVideoNode to it. Finally, set that SKScene as the texture of your plane.
so this in the context of your code above:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    updateQueue.async {
        let physicalWidth = imageAnchor.referenceImage.physicalSize.width
        let physicalHeight = imageAnchor.referenceImage.physicalSize.height

        // Create a plane geometry to visualize the initial position of the detected image
        let mainPlane = SCNPlane(width: physicalWidth, height: physicalHeight)
        mainPlane.firstMaterial?.colorBufferWriteMask = .alpha

        // Create a SceneKit root node with the plane geometry to attach to the scene graph
        // This node will hold the virtual UI in place
        let mainNode = SCNNode(geometry: mainPlane)
        mainNode.eulerAngles.x = -.pi / 2
        mainNode.renderingOrder = -1
        mainNode.opacity = 1

        // Add the plane visualization to the scene
        node.addChildNode(mainNode)

        // Perform a quick animation to visualize the plane on which the image was detected.
        // We want to let our users know that the app is responding to the tracked image.
        self.highlightDetection(on: mainNode, width: physicalWidth, height: physicalHeight, completionHandler: {
            // Introduce virtual content
            self.displayDetailView(on: mainNode, xOffset: physicalWidth)

            // Animate the WebView to the right
            self.displayWebView(on: mainNode, xOffset: physicalWidth)

            // Set up the AVPlayer and create an SKVideoNode from it
            let videoURL = URL(fileURLWithPath: Bundle.main.path(forResource: videoAssetName, ofType: videoAssetExtension)!)
            let player = AVPlayer(url: videoURL)
            player.actionAtItemEnd = .none
            videoPlayerNode = SKVideoNode(avPlayer: player)

            // Set up an SKScene to hold the SKVideoNode
            let skSceneSize = CGSize(width: physicalWidth, height: physicalHeight)
            let skScene = SKScene(size: skSceneSize)
            skScene.addChild(videoPlayerNode)
            videoPlayerNode.position = CGPoint(x: skScene.size.width / 2, y: skScene.size.height / 2)
            videoPlayerNode.size = skScene.size

            // Set the SKScene as the texture for the main plane
            mainPlane.firstMaterial?.diffuse.contents = skScene
            mainPlane.firstMaterial?.isDoubleSided = true

            // Observe the end of playback so we can react when the video finishes
            NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: player.currentItem, queue: nil, using: { _ in
                DispatchQueue.main.async {
                    if self.debug { NSLog("video completed") }
                    // do something when the video ends
                }
            })

            // Play the video node
            videoPlayerNode.play()
        })
    }
}
For 2022 ..
This is now (conceptually) simple,
guard let url = URL(string: " ... ") else { return }
let vid = AVPlayer(url: url)
.. some node .. .geometry?.firstMaterial?.diffuse.contents = vid
vid.play()
It's that easy.
You'll spend a lot of time monkeying with the mesh, buffers etc to get a good result, depending on what you're doing.

Using Scenekit sceneTime to scrub through animations iOS

I'm trying to modify Xcode's default game setup so that I can: program an animation into the geometry, scrub through that animation, and let the user playback the animation automatically.
I managed to get the scrubbing of the animation to work by setting the view's scene time based on the value of a scrubber. However, when I set the isPlaying boolean on the SCNSceneRenderer to true, it resets the time to 0 on every frame, and I can't get it to move off the first frame.
From the docs, I'm assuming this means it won't detect my animation and thinks the duration of all animations is 0.
Here's my viewDidLoad function in my GameViewController:
override func viewDidLoad() {
    super.viewDidLoad()

    // create a new scene
    let scene = SCNScene(named: "art.scnassets/ship.scn")!

    // create and add a camera to the scene
    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    scene.rootNode.addChildNode(cameraNode)

    // place the camera
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)

    // create and add a light to the scene
    let lightNode = SCNNode()
    lightNode.light = SCNLight()
    lightNode.light!.type = .omni
    lightNode.position = SCNVector3(x: 0, y: 10, z: 10)
    scene.rootNode.addChildNode(lightNode)

    // create and add an ambient light to the scene
    let ambientLightNode = SCNNode()
    ambientLightNode.light = SCNLight()
    ambientLightNode.light!.type = .ambient
    ambientLightNode.light!.color = UIColor.darkGray
    scene.rootNode.addChildNode(ambientLightNode)

    // retrieve the ship node
    let ship = scene.rootNode.childNode(withName: "ship", recursively: true)!

    // define the animation
    //ship.runAction(SCNAction.repeatForever(SCNAction.rotateBy(x: 0, y: 2, z: 0, duration: 1)))
    let positionAnimation = CAKeyframeAnimation(keyPath: "position.y")
    positionAnimation.values = [0, 2, -2, 0]
    positionAnimation.keyTimes = [0, 1, 3, 4]
    positionAnimation.duration = 5
    positionAnimation.usesSceneTimeBase = true

    // retrieve the SCNView
    let scnView = self.view as! SCNView
    scnView.delegate = self

    // add the animation
    ship.addAnimation(positionAnimation, forKey: "position.y")

    // set the scene to the view
    scnView.scene = scene

    // allows the user to manipulate the camera
    scnView.allowsCameraControl = true

    // show statistics such as fps and timing information
    scnView.showsStatistics = true

    // configure the view
    scnView.backgroundColor = UIColor.black

    // add a tap gesture recognizer
    let tapGesture = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
    scnView.addGestureRecognizer(tapGesture)

    // play the scene
    scnView.isPlaying = true
    //scnView.loops = true
}
Any help is appreciated! :)
References:
sceneTime:
https://developer.apple.com/documentation/scenekit/scnscenerenderer/1522680-scenetime
isPlaying:
https://developer.apple.com/documentation/scenekit/scnscenerenderer/1523401-isplaying
related question:
SceneKit SCNSceneRendererDelegate - renderer function not called
I couldn't get it to work in an elegant way, but I fixed it by adding this Timer call:
Timer.scheduledTimer(timeInterval: timeIncrement, target: self, selector: (#selector(updateTimer)), userInfo: nil, repeats: true)
timeIncrement is a Double set to 0.01, and updateTimer is the following function:
// helper function updateTimer
@objc func updateTimer() {
    let scnView = self.view.subviews[0] as! SCNView
    scnView.sceneTime += Double(timeIncrement)
}
I'm sure there's a better solution, but this works.
sceneTime is automatically reset to 0.0 every frame after actions and animations are run.
You can use the renderer(_:updateAtTime:) delegate method to set sceneTime to the needed value before SceneKit runs actions and animations.
Make GameViewController comply to SCNSceneRendererDelegate:
class GameViewController: UIViewController, SCNSceneRendererDelegate {
    // ...
}
Make sure you keep scnView.delegate = self inside viewDidLoad().
Now implement renderer(_:updateAtTime:) inside your GameViewController class:
// need to remember the scene start time in order to calculate the current scene time
var sceneStartTime: TimeInterval? = nil

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // if sceneStartTime is nil, assign time to it
    sceneStartTime = sceneStartTime ?? time
    // make the scene time equal to the elapsed time
    let scnView = self.view as! SCNView
    scnView.sceneTime = time - sceneStartTime!
}

Use case of convexSweepTest(with:from:to:options:)

The Apple Documentation gave a seemingly straightforward example of how to use the convexSweepTest in Objective C. Unfortunately the example code in the documentation does not (as of now) exist in Swift.
I can move it over to Swift and compile without error, but I cannot get the 'contacts.count' to ever be anything other than zero no matter how many objects (all with a physicsBody) I add to the scene, including the one I'm doing the convexSweepTest with.
I have a layer of static objects with a physicsBody along the x-z axis, with my object I'm doing the convexSweepTest with a positive y value.
For some reason I'm required to pass a custom physicsShape to the SCNPhysicsBody(type:shape:) initializer (as opposed to leaving it nil, which the documentation says should work) in order for the convexSweepTest call to compile against a recognized physics shape.
Here's a snippet of the code that compiles without compile error but does not work:
let physicsShape = SCNPhysicsShape(node: selectedBlock, options: nil)
selectedBlock.physicsBody = SCNPhysicsBody(type: .dynamic, shape: physicsShape) // why can't I use nil? who knows
// note: 'selectedBlock' has a positive y value ... and there are objects that can be collided with at every y=0 point
let current = selectedBlock.transform
let downBelow = SCNMatrix4Translate(current, 0, -selectedBlock.position.y, 0)
let physicsWorld = SCNPhysicsWorld()
let physicsContacts = physicsWorld.convexSweepTest(with: (selectedBlock.physicsBody?.physicsShape)!, from: current, to: downBelow, options: nil)
print("count \(physicsContacts.count)") // ALWAYS prints zero
I'm looking for a working use case example in Swift with the convexSweepTest method.
You can't create an SCNPhysicsWorld yourself; use the one in the current scene (scene.physicsWorld).
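Applied to the snippet in the question (same names; this assumes scene is the SCNScene the nodes live in), that one change looks like:

```swift
// Query the scene's own physics world instead of a standalone SCNPhysicsWorld(),
// which is never stepped and knows nothing about the scene's bodies.
let physicsContacts = scene.physicsWorld.convexSweepTest(
    with: (selectedBlock.physicsBody?.physicsShape)!,
    from: current,
    to: downBelow,
    options: nil)
```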
Old post, but while playing with convexSweepTest I found that if the collisionBitMask of the other physics body doesn't have its first bit set, the contact is not returned. This makes little sense, since convexSweepTest has no category mask of its own, unless it defaults to 1 without saying so.
Run the code below in playground
The first static node, green, has a physics body whose category does NOT match the collisionBitMask passed in the convexSweepTest options.
The second static node, orange, has a physics body whose category matches the collisionBitMask passed in the options.
A third node moves with no physics body, just to show visually where the shape used in the sweep is, since I use the moving node's coordinates.
I move the 'moving' node representing the sweep shape across the two static shapes. If it passes through both without contact, I set the first bit of the collision mask of both static nodes and rerun the movement; if it does make contact, I reset the masks to their intended values. This runs forever so you can watch the sweep in action. You'll see that convexSweepTest honours its collisionBitMask option, as only the second node ever produces a contact (and only when its mask has the first bit set).
import SceneKit
import SpriteKit
import PlaygroundSupport
import simd
let sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 800, height: 600))
PlaygroundPage.current.liveView = sceneView
var scene = SCNScene()
sceneView.scene = scene
sceneView.backgroundColor = SKColor.lightGray
sceneView.debugOptions = .showPhysicsShapes
sceneView.allowsCameraControl = true
sceneView.autoenablesDefaultLighting = true
// a camera
var cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(x: 0, y: 5, z: 5)
scene.rootNode.addChildNode(cameraNode)
// setup a plane just for contrast and look at
let planeNode = SCNNode(geometry: SCNPlane(width: 10, height: 10))
planeNode.geometry?.firstMaterial?.diffuse.contents = UIColor.blue
planeNode.position = SCNVector3(0, -0.5, 0)
planeNode.rotation = SCNVector4Make(1, 0, 0, -.pi/2);
scene.rootNode.addChildNode(planeNode)
// add a camera constraint to look at the center of the plane
let centerConstraint = SCNLookAtConstraint(target: planeNode)
cameraNode.constraints = [centerConstraint]
let BitMaskContact = 0x0010
let BitMaskMoving = 0x0100 // To show that this needs to be with first bit set to 1
let BitMaskPassThrough = 0x1000
// Add a box to be a contact body
var passthroughNode = SCNNode(geometry: SCNBox(width: 0.5, height: 1.0, length: 1.0, chamferRadius: 5))
passthroughNode.geometry?.firstMaterial?.diffuse.contents = SKColor.green
passthroughNode.name = "passthroughNode static"
passthroughNode.position = SCNVector3(x: 0, y: 0.5, z: 0)
passthroughNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
passthroughNode.physicsBody?.categoryBitMask = BitMaskPassThrough
passthroughNode.physicsBody?.collisionBitMask = BitMaskMoving
scene.rootNode.addChildNode(passthroughNode)
// Add a box to be a contact body
var contactNode = SCNNode(geometry: SCNBox(width: 0.5, height: 1.0, length: 1.0, chamferRadius: 5))
contactNode.geometry?.firstMaterial?.diffuse.contents = SKColor.orange
contactNode.name = "contactNode static"
contactNode.position = SCNVector3(x: 1, y: 0.5, z: 0)
contactNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
contactNode.physicsBody?.categoryBitMask = BitMaskContact
contactNode.physicsBody?.collisionBitMask = BitMaskMoving
scene.rootNode.addChildNode(contactNode)
// Add a box just to visually see the shape to be used with convexSweepTest
let sweepOriginalPosition = SCNVector3(x: -1, y: 0.5, z: 0)
let sweepGeo = SCNBox(width: 1, height: 0.5, length: 0.5, chamferRadius: 5)
let sweepNode = SCNNode(geometry: sweepGeo)
sweepNode.name = "sweepNode moving"
sweepNode.position = sweepOriginalPosition
let sweepShape = SCNPhysicsShape(geometry: sweepGeo, options: nil)
scene.rootNode.addChildNode(sweepNode)
//let rendererDelegate = RendererDelegate( sweepNode, physicsWorld:scene.physicsWorld)
//sceneView.delegate = rendererDelegate
//scene.physicsWorld.contactDelegate = rendererDelegate
let timer = Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { _ in
    let start = sweepNode.simdWorldPosition
    let velocityX: Float = 0.1

    // Where the shape stands now
    var from = matrix_identity_float4x4
    from.position = start

    // Where the shape should move to
    var to: matrix_float4x4 = matrix_identity_float4x4
    to.position = start + SIMD3<Float>(velocityX, 0, 0)

    let options: [SCNPhysicsWorld.TestOption: Any] = [
        SCNPhysicsWorld.TestOption.collisionBitMask: BitMaskContact
    ]
    let contacts = scene.physicsWorld.convexSweepTest(
        with: sweepShape,
        from: SCNMatrix4(from),
        to: SCNMatrix4(to),
        options: options)

    if !contacts.isEmpty {
        print("contact found. resetting contact node collision to normal")
        contactNode.physicsBody?.collisionBitMask = BitMaskMoving // 4 <-- set it back to what it should be
        passthroughNode.physicsBody?.collisionBitMask = BitMaskPassThrough // 8 <-- set it back to what it should be
        sweepNode.position = sweepOriginalPosition
    } else {
        sweepNode.position.x = sweepNode.position.x + velocityX
        if sweepNode.position.x > 2 {
            print("contact missed. resetting contact node collision to | x0001")
            contactNode.physicsBody?.collisionBitMask = BitMaskMoving | 0x0001 // 5 <-- has the first bit set
            passthroughNode.physicsBody?.collisionBitMask = BitMaskPassThrough | 0x0001 // 9 <-- has the first bit set, but with the convexSweep option set it won't matter
            sweepNode.position = sweepOriginalPosition
        }
    }
}

Animating a UIView's alpha in sequence with UIViewPropertyAnimator

I have a UIView that I want to reveal after 0.5 seconds, and hide again after 0.5 seconds, creating a simple animation. My code is as follows:
let animation = UIViewPropertyAnimator.init(duration: 0.5, curve: .linear) {
self.timerBackground.alpha = 1
let transition = UIViewPropertyAnimator.init(duration: 0.5, curve: .linear) {
self.timerBackground.alpha = 0
}
transition.startAnimation(afterDelay: 0.5)
}
animation.startAnimation()
When I test it out, nothing happens. I assume it's because they're both running at the same time, which would mean they cancel each other out, but isn't that what the "afterDelay" part should prevent?
If I run them separately, i.e. either fading from hidden to visible, or visible to hidden, it works, but when I try to run them in a sequence, it doesn't work.
My UIView is not opaque or hidden.
You can use a Timer and, on every timer tick, add appearing/hiding animation blocks to your UIViewPropertyAnimator object.
Here's the code:
@IBOutlet weak var timerBackground: UIImageView!

private var timer: Timer?
private var isShown = false
private var viewAnimator = UIViewPropertyAnimator(duration: 0.5, curve: .linear)

override func viewDidLoad() {
    super.viewDidLoad()
    viewAnimator.addAnimations {
        self.timerBackground.alpha = 1
    }
    viewAnimator.startAnimation()
    isShown = true
    self.timer = Timer.scheduledTimer(timeInterval: 0.5, target: self, selector: #selector(self.startReversedAction), userInfo: nil, repeats: true)
}

@objc func startReversedAction() {
    // stop the previous animations block if it did not have time to finish its movement
    viewAnimator.stopAnimation(true)
    viewAnimator.addAnimations {
        self.timerBackground.alpha = self.isShown ? 0 : 1
    }
    viewAnimator.startAnimation()
    isShown = !isShown
}
I've implemented very similar behavior for the jumping dots in an iOS 10 animations demo project.
Please feel free to look at it for more details.
Use UIView.animateKeyframes; it structures your code nicely if you have complicated animations. If you nest UIView animations within the completion blocks of others, it will probably result in ridiculous indentation levels and zero readability.
Here's an example:
/* Target frames to move our object to (and animate);
   it could be the alpha property in your case... */
let newFrameOne = CGRect(x: 200, y: 50, width: button.bounds.size.width, height: button.bounds.size.height)
let newFrameTwo = CGRect(x: 300, y: 200, width: button.bounds.size.width, height: button.bounds.size.height)

UIView.animateKeyframes(withDuration: 2.0,
                        delay: 0.0,
                        options: .repeat,
                        animations: {
    /* First animation */
    UIView.addKeyframe(withRelativeStartTime: 0.0, relativeDuration: 0.5, animations: { [weak self] in
        self?.button.frame = newFrameOne
    })
    /* Second animation */
    UIView.addKeyframe(withRelativeStartTime: 0.5, relativeDuration: 0.5, animations: { [weak self] in
        self?.button.frame = newFrameTwo
    })
    /* . . . */
}, completion: nil)
What worked for me, was using sequence of UIViewPropertyAnimators. Here is example of my code:
let animator1 = UIViewPropertyAnimator(duration: 1, curve: .easeIn)
animator1.addAnimations {
    smallCoin.transform = CGAffineTransform(scaleX: 4, y: 4)
    smallCoin.center = center
}

let animator2 = UIViewPropertyAnimator(duration: 1, curve: .easeIn)
animator2.addAnimations {
    center.y -= 20
    smallCoin.center = center
}

let animator3 = UIViewPropertyAnimator(duration: 10, curve: .easeIn)
animator3.addAnimations {
    smallCoin.alpha = 0
}

animator1.addCompletion { _ in
    animator2.startAnimation()
}
animator2.addCompletion { _ in
    animator3.startAnimation()
}
animator3.addCompletion { _ in
    print("finished")
}
animator1.startAnimation()
You can even use the afterDelay parameter to manage the timing of the animations:
animator3.startAnimation(afterDelay: 10)
