SKVideoNode only rendering in SCNScene when the node or camera moves - ios

I am using a very simple method to set up an SKVideoNode and place it inside an SCNNode via the geometry's diffuse contents. When I do this, the texture only updates and shows the video properly while the camera or node is moving. When both are stationary, the texture never updates (as if the video isn't playing at all), but the sound does play.
Obviously the video is still playing; it just isn't rendering. I have no idea why.
func setupAndPlay() {
    // create the asset & player and grab the dimensions
    let path = NSBundle.mainBundle().pathForResource("indycar", ofType: "m4v")!
    let asset = AVAsset(URL: NSURL(fileURLWithPath: path))
    let size = asset.tracksWithMediaType(AVMediaTypeVideo)[0].naturalSize
    let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))

    // set up the SKVideoNode
    let videoNode = SKVideoNode(AVPlayer: player)
    videoNode.size = size
    videoNode.position = CGPoint(x: size.width * 0.5, y: size.height * 0.5)

    // set up the SKScene that will house the video node
    let videoScene = SKScene(size: size)
    videoScene.addChild(videoNode)

    // create a wrapper -- note that the geometry doesn't matter; it happens with spheres and planes
    let videoWrapperNode = SCNNode(geometry: SCNSphere(radius: 10))
    videoWrapperNode.position = SCNVector3(x: 0, y: 0, z: 0)

    // set the material's diffuse contents to the video scene created above
    videoWrapperNode.geometry?.firstMaterial?.diffuse.contents = videoScene
    videoWrapperNode.geometry?.firstMaterial?.doubleSided = true

    // reorient the video properly
    videoWrapperNode.scale.y = -1
    videoWrapperNode.scale.z = -1

    // add it to our scene
    scene.rootNode.addChildNode(videoWrapperNode)

    // if I uncomment this, the video plays correctly; if I comment it out, the texture on
    // videoWrapperNode only updates while I'm moving the camera around. the sound always plays.
    videoWrapperNode.runAction(SCNAction.repeatActionForever(
        SCNAction.rotateByAngle(CGFloat(M_PI * 2.0), aroundAxis: SCNVector3(x: 0, y: 1, z: 0), duration: 15.0)))

    videoNode.play()
}
Has anyone come across anything similar? Any help would be appreciated.

Sounds like you need to set .playing = true on your SCNView.
From the docs:
If the value of this property is NO (the default), SceneKit does not increment the scene time, so animations associated with the scene do not play. Change this property's value to YES to start animating the scene.
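In code, that amounts to flipping the view's playing flag (a minimal sketch; the `startRendering` helper and `scnView` parameter are just illustrative names for whatever SCNView presents your scene):

```swift
import SceneKit

func startRendering(in scnView: SCNView) {
    // SceneKit only advances scene time -- and thus SpriteKit-backed
    // video textures -- while the view is playing.
    // (The property was named `playing` in Swift 2.)
    scnView.isPlaying = true
}
```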

I also found that setting rendersContinuously to true on the renderer (e.g. the SCNView) will make the video play.
If you log the SCNSceneRendererDelegate's update calls, you can see when frames are drawn.
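A sketch of both pieces together, assuming a view controller that owns the SCNView (class and method names here are illustrative, not from the original post):

```swift
import UIKit
import SceneKit

class VideoViewController: UIViewController, SCNSceneRendererDelegate {
    var scnView: SCNView { return self.view as! SCNView }

    override func viewDidLoad() {
        super.viewDidLoad()
        scnView.rendersContinuously = true  // force a render every frame
        scnView.delegate = self
    }

    // Fires once per rendered frame; if it stops firing while the camera
    // is stationary, you've reproduced the problem.
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        print("rendered frame at \(time)")
    }
}
```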

I have been unsuccessful at getting an SKVideoNode to display video in Xcode 7.0 or 7.1. If I run my code or other samples on a hardware device like an iPad or iPhone, the video displays fine, but in the simulator only audio plays. The same code works fine in Xcode 6.4's simulator.
I have the CatNap example from Ray Wenderlich's iOS & tvOS Games by Tutorials (iOS 9) and it does NOT run in the Simulator 9.1 that comes with Xcode 7.1. I believe the simulator is broken and have filed a bug with Apple, but have had no response in a month.
Does anyone have sample code for an SKVideoNode that works in the simulator in Xcode 7.1?

Related

SceneKit + ARKit: Billboarding without rolling with camera

I'm trying to draw a billboarded quad using SceneKit and ARKit. I have basic billboarding working; however, when I roll the camera, the billboard also rotates in place. This video shows it in action as I roll the camera to the left (the smiley face is the billboard):
Instead, I'd like the billboard to still face the camera but stay oriented vertically in the scene, no matter what the camera is doing.
Here's how I compute billboarding:
// inside frame update function
struct Vertex {
    var position: SIMD3<Float>
    var texCoord: SIMD2<Float>
}
let halfSize = Float(0.25)
let cameraNode = sceneView.scene.rootNode.childNodes.first!
let modelTransform = self.scnNode.simdWorldTransform
let viewTransform = cameraNode.simdWorldTransform.inverse
let modelViewTransform = viewTransform * modelTransform
let right = SIMD3<Float>(modelViewTransform[0][0], modelViewTransform[1][0], modelViewTransform[2][0])
let up = SIMD3<Float>(modelViewTransform[0][1], modelViewTransform[1][1], modelViewTransform[2][1])
// drawBuffer is an MTLBuffer of vertex data
let data = drawBuffer.contents().bindMemory(to: Vertex.self, capacity: 4)
data[0].position = (right + up) * halfSize
data[0].texCoord = SIMD2<Float>(0, 0)
data[1].position = -(right - up) * halfSize
data[1].texCoord = SIMD2<Float>(1, 0)
data[2].position = (right - up) * halfSize
data[2].texCoord = SIMD2<Float>(0, 1)
data[3].position = -(right + up) * halfSize
data[3].texCoord = SIMD2<Float>(1, 1)
Again this gets the billboard facing the camera correctly, however when I roll the camera, the billboard rotates along with it.
What I'd like instead is for the billboard to point towards the camera but keep its orientation in the world. Any suggestions on how to fix this?
Note that my code example is simplified so I can't use SCNBillboardConstraint or anything like that; I need to be able to compute the billboarding myself
Here's the solution I came up with: create a new node that matches the camera's position and rotation, but without any roll:
let tempNode = SCNNode()
tempNode.simdWorldPosition = cameraNode.simdWorldPosition
// This changes the node's pitch and yaw, but not roll
tempNode.simdLook(at: cameraNode.simdConvertPosition(SIMD3<Float>(0, 0, 1), to: nil))
let view = tempNode.simdWorldTransform.inverse
let modelViewTransform = view * node.simdWorldTransform
This keeps the billboard pointing upwards in world space, even as the camera rolls.
I had actually tried doing this earlier by setting tempNode.eulerAngles.z = 0, but that seems to affect the rest of the transform matrix in unexpected ways.
There's probably a way to do this without creating a temporary node, but this works well enough for me.
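Combined, the per-frame computation might look like this (a sketch reusing the Vertex layout and buffer from the question; the `updateBillboard` function and its parameter names are illustrative, not verbatim from my project):

```swift
import SceneKit
import Metal
import simd

struct Vertex {
    var position: SIMD3<Float>
    var texCoord: SIMD2<Float>
}

func updateBillboard(node: SCNNode, cameraNode: SCNNode, drawBuffer: MTLBuffer) {
    // A throwaway node at the camera's position, looking the same
    // direction, but constructed without roll.
    let tempNode = SCNNode()
    tempNode.simdWorldPosition = cameraNode.simdWorldPosition
    tempNode.simdLook(at: cameraNode.simdConvertPosition(SIMD3<Float>(0, 0, 1), to: nil))

    let viewTransform = tempNode.simdWorldTransform.inverse
    let modelViewTransform = viewTransform * node.simdWorldTransform

    // Columns 0 and 1 of the upper-left 3x3, read row-wise, give the
    // camera-aligned right/up axes expressed in model space.
    let right = SIMD3<Float>(modelViewTransform[0][0], modelViewTransform[1][0], modelViewTransform[2][0])
    let up = SIMD3<Float>(modelViewTransform[0][1], modelViewTransform[1][1], modelViewTransform[2][1])

    let halfSize = Float(0.25)
    let data = drawBuffer.contents().bindMemory(to: Vertex.self, capacity: 4)
    data[0].position = (right + up) * halfSize
    data[1].position = -(right - up) * halfSize
    data[2].position = (right - up) * halfSize
    data[3].position = -(right + up) * halfSize
}
```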

Rendering Alpha Channel Video Over Background (AVFoundation, Swift)

Recently I have been following this tutorial, which taught me how to play a video with an alpha channel in iOS. This works great for building an AVPlayer over something like a UIImageView, which makes it look like my video (with the alpha channel removed) is playing on top of the image.
Using this approach, I now need to find a way to do this while rendering/saving the video to the user's device. This is my code to generate the alpha video that plays in the AVPlayer:
let videoSize = CGSize(width: playerItem.presentationSize.width, height: playerItem.presentationSize.height / 2.0)
let composition = AVMutableVideoComposition(asset: playerItem.asset, applyingCIFiltersWithHandler: { request in
    let sourceRect = CGRect(origin: .zero, size: videoSize)
    let alphaRect = sourceRect.offsetBy(dx: 0, dy: sourceRect.height)
    let filter = AlphaFrameFilter()
    filter.inputImage = request.sourceImage.cropped(to: alphaRect)
        .transformed(by: CGAffineTransform(translationX: 0, y: -sourceRect.height))
    filter.maskImage = request.sourceImage.cropped(to: sourceRect)
    return request.finish(with: filter.outputImage!, context: nil)
})
(That's been truncated a bit for brevity, but I can confirm this approach properly returns an AVVideoComposition that I can play in an AVPlayer.)
I recognize that I can use an AVVideoComposition with an AVAssetExportSession, but that only lets me render my alpha video over a black background (not over an image or video, as I need).
Is there a way to overlay the now "background-removed" alpha channel video on top of another video and export the result?
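For reference, the export path mentioned above has roughly this shape (a sketch; the `exportAlphaVideo` function and its parameter names are illustrative, and as noted this still composites over black rather than over another video):

```swift
import AVFoundation

// Sketch: hand the AVVideoComposition built above to an export session.
func exportAlphaVideo(asset: AVAsset,
                      composition: AVVideoComposition,
                      to outputURL: URL,
                      completion: @escaping (Bool) -> Void) {
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetHighestQuality) else {
        completion(false)
        return
    }
    export.videoComposition = composition
    export.outputFileType = .mov
    export.outputURL = outputURL
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}
```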

SCNBillboardConstraint isn’t working. Node with constraint doesn’t change

Perhaps I am not setting up the camera properly… I’m starting with a scn file with a camera. In Xcode, rotating the free camera around, the geometries rotate as expected. However, at runtime, nothing happens.
It doesn’t seem to matter if I add the constraint in code or in the editor. The look at constraint works.
It also doesn’t seem to matter if I use the camera from the scn file or if I add a camera in code.
The sample code is
class Poster: SCNNode {
    let match: GKTurnBasedMatch

    init(match: GKTurnBasedMatch, width: CGFloat, height: CGFloat) {
        self.match = match
        super.init()
        // SCNPlane(width: width, height: height)
        self.geometry = SCNBox(width: width, height: height, length: 0.1, chamferRadius: 0)
        self.constraints = [SCNBillboardConstraint()]
        self.updatePosterImage()
    }
}
So… I gave up on the billboard constraint.
I’m using a SCNLookAtConstraint that looks at the camera node, with the gimbal lock enabled.
I was using a SCNPlane but it was doing weird stuff. So I went with a SCNBox for the geometry.
So, in the constructor:
self.geometry = SCNBox(width: self.size.width, height: self.size.height, length: 0.1, chamferRadius: 0)
let it = SCNLookAtConstraint(target: cameraNode)
it.isGimbalLockEnabled = true
self.constraints = [it]
It works.
You're missing ".init()" on the SCNBillboardConstraint(). This line alone did all the work for me:
node.constraints = [SCNBillboardConstraint.init()]
Try this:
A constraint that orients theNode to always point toward the current camera.
// Create the constraint
SCNBillboardConstraint *aConstraint = [SCNBillboardConstraint billboardConstraint];
theNode.constraints = @[aConstraint];
theNode is the node you want pointing to the camera. This should work.
Updated
OK: create a sample project with the Game template and then make the following changes:
// create a clone of the ship, change position and rotation
SCNNode *ship2 = [ship clone];
ship2.position = SCNVector3Make(0, 4, 0);
ship2.eulerAngles = SCNVector3Make(0, M_PI_2, 0);
[scene.rootNode addChildNode:ship2];
// Add the constraint to `ship`
SCNBillboardConstraint *aConstraint = [SCNBillboardConstraint billboardConstraint];
ship.constraints = @[aConstraint];
ship is constrained but ship2 isn't.
If you were to add this:
ship2.constraints = @[aConstraint];
Now, both will face the camera. Isn't this what you are looking for?
By any chance are you using the allowsCameraControl property on SCNView?
If so, remember setting that will add a new camera to your scene (cloning an existing camera if one is present to match its settings), so if you create a constraint to your own camera, the constraint will not be linked to the camera that’s actually being used.
Per Apple in their WWDC 2017 video, that property is really for debugging purposes, not for real-world use.
Simply put, you have to ensure you are moving around your own camera, not relying on the auto-created one.
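In code, that boils down to the following (a sketch; the `useOwnCamera` helper is an illustrative name, and `cameraNode` stands for the camera node your constraint actually targets):

```swift
import SceneKit

func useOwnCamera(scnView: SCNView, cameraNode: SCNNode) {
    // Don't let SceneKit clone a debug camera behind your back...
    scnView.allowsCameraControl = false
    // ...and render from the camera your constraints actually reference.
    scnView.pointOfView = cameraNode
}
```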

Is 5.1 channel positional audio output in Sprite Kit possible?

I'm trying to play positional audio using the front and back channels in Sprite Kit, and testing on an Apple TV device.
I'm using the following code:
let musicURL = NSBundle.mainBundle().URLForResource("music", withExtension: "m4a")
let music = SKAudioNode(URL: musicURL!)
addChild(music)
music.positional = true
music.position = CGPoint(x: 0, y: 0)
let moveForward = SKAction.moveToY(1024, duration: 2)
let moveBack = SKAction.moveToY(-1024, duration: 2)
let sequence = SKAction.sequence([moveForward, moveBack])
let repeatForever = SKAction.repeatActionForever(sequence)
music.runAction(repeatForever)
What I want to accomplish is a sound that pans from the front to the back channels but Sprite Kit seems to be using just the 2 channel stereo output.
If I use moveToX instead of moveToY I get a sound panning from left to right.
I'm surely missing some initialization code to signal I want a 5.1 sound output, but I'm not sure if the SKAudioNode positional feature only works for 2 channel stereo output.
Is positional audio with more than 2 channels achievable in Sprite Kit or should I resort to AVFoundation or even OpenAL for this?
I have tried similar code with SceneKit and it seems that it also uses only 2 channels for positional audio.
A sound can't be positioned in 3D space beyond stereo panning using SpriteKit or SceneKit. You should not use an SKAudioNode; use AVFoundation directly to play the sound.
First you have to setup the audio session to use a 5.1 channel output layout:
// these AVAudioSession calls throw, so they need `try` (in a do/catch or throwing context)
let session = AVAudioSession.sharedInstance()
try session.setCategory(AVAudioSessionCategoryPlayback)
try session.setActive(true)
try session.setPreferredOutputNumberOfChannels(6)
Then wire up an AVAudioEnvironmentNode configured to render to the six output channels.
A starting point can be found in this existing answer:
https://stackoverflow.com/a/35657416/563802
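For orientation, the wiring from that answer looks roughly like this (a sketch, not verified on Apple TV hardware; the 6-channel layout and the `makeSurroundEngine` name are assumptions for illustration):

```swift
import AVFoundation

func makeSurroundEngine() throws -> (AVAudioEngine, AVAudioPlayerNode) {
    let engine = AVAudioEngine()
    let environment = AVAudioEnvironmentNode()
    engine.attach(environment)

    // Ask for a 5.1 (6-channel) path from the environment node to the output.
    let layout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_AudioUnit_5_1)!
    let surround = AVAudioFormat(standardFormatWithSampleRate: 44100, channelLayout: layout)
    engine.connect(environment, to: engine.outputNode, format: surround)

    // 3D positioning only applies to mono sources.
    let player = AVAudioPlayerNode()
    engine.attach(player)
    let mono = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)
    engine.connect(player, to: environment, format: mono)
    player.position = AVAudio3DPoint(x: 0, y: 0, z: -1)  // in front of the listener

    try engine.start()
    return (engine, player)
}
```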

Coordinate system doesn't seem to match between Playground and iOS Device/Simulator

I've been trying to wrap my head around all the goodness in Xcode 6 and iOS 8 over the last couple of days. I'm currently working with SceneKit to get a feel for what it can do.
I'm trying to build a visual grid to make placing objects in the scene a bit easier.
The Playground displays how I expect it to, but the Simulator/Device does not. I'm not sure if it's a bug, or if I'm doing something wrong.
I have the following code:
for index in -20..20 {
    let i = CFloat(index)
    let neg = i - 20
    let pos = i + 20
    var lat = [
        SCNVector3Make(neg, 0, i),
        SCNVector3Make(pos, 0, i)
    ]
    var lng = [
        SCNVector3Make(i, 0, neg),
        SCNVector3Make(i, 0, pos)
    ]
    var indices: CInt[] = [0, 1]
    let latSource = SCNGeometrySource(vertices:&lat, count:2)
    let lngSource = SCNGeometrySource(vertices:&lng, count:2)
    let indexData = NSData(bytes:indices, length:sizeof(CInt) * countElements(indices))
    let element = SCNGeometryElement(data:indexData, primitiveType:SCNGeometryPrimitiveType.Line, primitiveCount:2, bytesPerIndex:sizeof(CInt))
    let latLine = SCNGeometry(sources:[latSource], elements:[element])
    let lngLine = SCNGeometry(sources:[lngSource], elements:[element])
    let latLineNode = SCNNode(geometry:latLine)
    let lngLineNode = SCNNode(geometry:lngLine)
    scene.rootNode.addChildNode(latLineNode)
    scene.rootNode.addChildNode(lngLineNode)
}
In a Playground the second line is let i = CGFloat(index), but other than that the code is identical between the Playground and the iOS Xcode 6 project I have.
In the Playground, I get the grid I'm after. In the Simulator and on the Device, however, I get garbage. No matter how I change the SCNVector3Make calls I can't get the grid to display properly in iOS or the Simulator.
It should also be noted that what is displayed in the Simulator and on the device is identical.
I tried adding a box to the scene as well. When I use an SCNBox it displays correctly, though much bigger than it should be. When I use custom geometry (which works correctly in the Playground), however, the box dimensions are way off. It looks more like a wall than a cube.
I tried to include screenshots to show what I'm seeing, but apparently I need at least 10 reputation points to post images, sorry.
Thanks in advance!
UPDATE
To answer the commenter's question below (regarding how I initialize the scene):
In the project (that runs in the Simulator/Device) this is how I get it:
let scene = SCNScene()
let sceneView = SCNView(frame:UIScreen.mainScreen().bounds)
// Build up grid
sceneView.scene = scene
self.view = sceneView
In the Playground, I do this:
let sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 500, height: 300))
let scene = SCNScene()
sceneView.scene = scene
XCPShowView("The Scene View", sceneView)
// Build up grid
UPDATE 2
I created an OSX app, modified the generated GameViewController code to generate the same structures and everything worked as expected. I didn't update the box's color to be red, and the positioning of the cube and camera is a bit different in the OSX app.
Now that I have enough points, I will add images showing the what I'm seeing.
Also of note I tried this on both Xcode Beta 2 & 3 - the results were identical.
What I see in the Playground
What I see in the OSX app
What I see in the iOS Simulator
I removed iOS 8 from my phone and iPad, so I don't have any screenshots from those, but they looked identical to the Simulator.
I'll be filing a bug report for this through Apple.
UPDATE 3
I have created repos for these projects so anyone who's interested can take a look (maybe there's something I'm not aware of that I'm doing wrong):
Playground project
OSX project
iOS project
Please let me know if you see anything I'm doing wrong or how I could be doing things better!
Thanks for your help everyone!
I had a similar issue recently playing with UIBezierPath in a Playground. After posting the question on the Apple Developer Forums, I believe the Playground uses a different coordinate system (LLO, lower-left origin) than the devices (ULO, upper-left origin):
https://forums.developer.apple.com/message/39277#39277
Here is apple's docs on the two coordinate systems:
https://developer.apple.com/library/ios/documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GraphicsDrawingOverview/GraphicsDrawingOverview.html#//apple_ref/doc/uid/TP40010156-CH14-SW10
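The practical consequence of LLO vs. ULO: a y coordinate authored against one origin must be flipped to land in the same visual spot under the other. A minimal sketch of the conversion (pure arithmetic, no frameworks; the `flipY` helper is an illustrative name):

```swift
// Convert a y coordinate between lower-left-origin (LLO, e.g. the
// Playground rendering here) and upper-left-origin (ULO, e.g. UIKit
// views) for a canvas of the given height. The mapping is its own inverse.
func flipY(_ y: Double, canvasHeight: Double) -> Double {
    return canvasHeight - y
}

print(flipY(0, canvasHeight: 300))    // 300.0 -- bottom edge becomes top edge
print(flipY(300, canvasHeight: 300))  // 0.0
print(flipY(120, canvasHeight: 300))  // 180.0
```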
It looks like you're not using the same view frame in both versions: the first one uses (frame: UIScreen.mainScreen().bounds).
In the project (that runs in the Simulator/Device), try this instead:
let scene = SCNScene()
let sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 500, height: 300))
// Build up grid
sceneView.scene = scene
self.view = sceneView
In the Playground, I do this:
let sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 500, height: 300))
let scene = SCNScene()
sceneView.scene = scene
XCPShowView("The Scene View", sceneView)
