Is 5.1 channel positional audio output in Sprite Kit possible? - ios

I'm trying to play positional audio using the front and back channels in Sprite Kit, and testing on an Apple TV device.
I'm using the following code:
let musicURL = NSBundle.mainBundle().URLForResource("music", withExtension: "m4a")
let music = SKAudioNode(URL: musicURL!)
addChild(music)
music.positional = true
music.position = CGPoint(x: 0, y: 0)
let moveForward = SKAction.moveToY(1024, duration: 2)
let moveBack = SKAction.moveToY(-1024, duration: 2)
let sequence = SKAction.sequence([moveForward, moveBack])
let repeatForever = SKAction.repeatActionForever(sequence)
music.runAction(repeatForever)
What I want to accomplish is a sound that pans from the front to the back channels, but Sprite Kit seems to use just the 2-channel stereo output.
If I use moveToX instead of moveToY I get a sound panning from left to right.
I'm surely missing some initialization code to signal that I want 5.1 output, but I'm not sure whether the SKAudioNode positional feature only works for 2-channel stereo output.
Is positional audio with more than 2 channels achievable in Sprite Kit or should I resort to AVFoundation or even OpenAL for this?
I have tried similar code with SceneKit and it seems that it also uses only 2 channels for positional audio.

A sound can't be positioned in 3D space with SpriteKit's SKAudioNode. You should not use an SKAudioNode here, but use AVFoundation directly to play the sound.
First you have to set up the audio session to use a 5.1-channel output layout:
let session = AVAudioSession.sharedInstance()
try session.setCategory(AVAudioSessionCategoryPlayback)
try session.setActive(true)
try session.setPreferredOutputNumberOfChannels(6)
Then wire up an AVAudioEnvironmentNode configured to output those 6 channels.
A starting point can be found in this existing answer:
https://stackoverflow.com/a/35657416/563802
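A rough sketch of that wiring, written in current AVAudioEngine syntax rather than the Swift 2 style above, might look like the following. The channel layout tag, rendering algorithm, and sample rate are assumptions, and musicURL is the URL from the question:
let engine = AVAudioEngine()
let environment = AVAudioEnvironmentNode()
let player = AVAudioPlayerNode()
engine.attach(environment)
engine.attach(player)

// Ask for a 5.1 layout on the environment -> main mixer connection (assumed layout tag).
let layout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_AudioUnit_5_1)!
let format = AVAudioFormat(standardFormatWithSampleRate: 44100, channelLayout: layout)
engine.connect(environment, to: engine.mainMixerNode, format: format)

// Positional sources should be mono; the player node is placed in 3D space.
let file = try AVAudioFile(forReading: musicURL!)
engine.connect(player, to: environment, format: file.processingFormat)
player.renderingAlgorithm = .soundField   // assumption: an algorithm suited to multichannel output
player.position = AVAudio3DPoint(x: 0, y: 0, z: 0)

try engine.start()
player.scheduleFile(file, at: nil, completionHandler: nil)
player.play()
Moving the player's position over time (as the SKAction in the question does) should then pan the sound across the surround channels.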

Related

Use an ARSCNView on devices that don't support ARKit

Is it possible to use an ARSCNView, configure it with an ARWorldTrackingConfiguration on devices that support it, and configure it another way on devices that don't support it (A8 chip and lower), and still have the video render in the background?
if ARWorldTrackingConfiguration.isSupported {
    let config = ARWorldTrackingConfiguration()
    config.planeDetection = .horizontal
    sceneView.session.run(config)
} else if AROrientationTrackingConfiguration.isSupported {
    let config = AROrientationTrackingConfiguration()
    sceneView.session.run(config)
} else {
    print("not supported")
    // what here? <<<
}
You need a running ARSession to drive the camera feed to an ARSCNView; to run an ARSession you need a supported ARConfiguration, and all configurations require A9.
However, if your below-A9 fallback plan is to have a SceneKit view that doesn't get any of ARKit's motion tracking... you don't need ARKit. Just use a regular SceneKit view (SCNView). To make the camera feed show up behind your SceneKit content, find the AVCaptureDevice for the camera you want, and pass that to the background.contents of the scene in your view.
(Using a capture device as a SceneKit background is new in iOS 11. It doesn't appear to be in the docs (yet?), but it's described in the WWDC17 SceneKit session.)
By the way, there aren't any devices that support orientation tracking but don't support world tracking, so the multiple-branch if statement in your question is sort of overkill.
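A rough sketch of that non-ARKit fallback might look like this. The capture-device lookup and view setup are assumptions rather than code from the question, and setting a capture device as the scene background requires iOS 11:
// Non-ARKit fallback: a plain SCNView whose scene background is the live camera feed.
// Assumes this runs inside a view controller (hence `view`).
let scnView = SCNView(frame: view.bounds)
scnView.scene = SCNScene()

// Grab the default back camera; which camera you want is an assumption here.
if let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                        for: .video,
                                        position: .back) {
    scnView.scene?.background.contents = camera
}
view.addSubview(scnView)
Your SceneKit content then renders on top of the camera feed, just without ARKit's motion tracking.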
The following change resolved this issue for me. This code might be the broken part:
let device = sceneView.device!
let maskGeometry = ARSCNFaceGeometry(device: device)!
mask = Mask(geometry: maskGeometry, maskType: maskType)
Replace it with:
let device = MTLCreateSystemDefaultDevice()
let maskGeometry = ARSCNFaceGeometry(device: device!)!
mask = Mask(geometry: maskGeometry)

AudioKit (still) missing Clock for sequencing Audiofiles

I am trying to play back audio files using the sequencer in the AudioKit framework.
AudioKit.output = sampler
AudioKit.start()
sampler.enableMIDI(midi.client,name: "sampler")
// sequencer start
let seq = AKSequencer()
seq.setLength(AKDuration(beats:Double(4)))
seq.enableLooping()
let pattern = seq.newTrack()
pattern?.setMIDIOutput(sampler.midiIn)
pattern!.add(noteNumber: 48, velocity: 127, position: AKDuration(beats:Double(1)), duration: AKDuration(beats:Double(0.2)), channel: 0)
pattern!.add(noteNumber: 48, velocity: 127, position: AKDuration(beats:Double(1)), duration: AKDuration(beats:Double(0.2)), channel: 0)
pattern!.add(noteNumber: 48, velocity: 127, position: AKDuration(beats:Double(2)), duration: AKDuration(beats:Double(0.2)), channel: 0)
pattern!.setLoopInfo(AKDuration( beats:Double(4) ), numberOfLoops: 80)
seq.play()
I got to the point where the AKMIDISampler will only play sine waves and not the right sample, as described here.
So, as it turns out, it is not possible to create sequences "on the fly", so I started to look for workarounds and found SelectorClock, a workaround from the AudioKit developers. Sadly, this no longer works: many of the class definitions and their properties have changed.
Maybe I am not up to date and this has been fixed already; if not, I'm sure there must be a go-to solution for this issue.
Turn on the Background Modes capability for your target and check Audio.
Without this, you get just sine waves.
If you want to be completely independent of AKSequencer, you can try the following:
let metro = AKMetronome()
metro.tempo = 120.0
metro.frequency1 = 0
metro.frequency2 = 0
metro.callback = {
// your code e.g.: trigger a AKSamplePlayer() which should have been defined earlier in your code:
sample.play()
}
AudioKit.output = AKMixer(metro, sample)
try! AudioKit.start()
metro.start()
I haven't tested this piece of code since I am on my phone right now, but it should work. I have this concept running on my iPhone 6s and it works very well. I also tried to replace AKMetronome() with my own class, but I haven't figured out every single aspect of the Sporth parameter yet. I basically want to avoid initiating any metronome sound (which is already set to zero in Sporth, so it shouldn't produce any noise) in the first place. I'll let you know in case I achieve that.

Audio spatialization with WebGL

I am currently working on a little project where I render a cube map with WebGL and then add some sounds with the Web Audio API.
Since the project is very large, I will just explain what I am looking for. When I load an audio file, the sound gets visualized (it looks like a cube). The audio listener position is ALWAYS at 0,0,0. What I have done so far is create a camera (using the gl-matrix library) with lookAt and perspective, and when I rotate the camera away from the audio-emitting cube, the audio should sound different.
How am I doing this?
Every frame I set the orientation of the PannerNode (via panner.setOrientation) to the up vector of the camera. Here is the per-frame update method (for the sound):
update(camera) {
this._upVec = vec3.copy(this._upVec, camera.upVector);
//vec3.transformMat4(this._upVec, this._upVec, camera.vpMatrix);
//vec3.normalize(this._upVec, this._upVec);
this._sound.panner.setOrientation(this._upVec[0], this._upVec[1], this._upVec[2]);
}
And here is the updateViewProjMatrix method from my Camera class, where I update the orientation of the listener:
updateViewProjMatrix() {
let gl = Core.mGetGL();
this._frontVector[0] = Math.cos(this._yaw) * Math.cos(this._pitch);
this._frontVector[1] = Math.sin(this._pitch);
this._frontVector[2] = Math.sin(this._yaw) * Math.cos(this._pitch);
vec3.normalize(this._lookAtVector, this._frontVector);
vec3.add(this._lookAtVector, this._lookAtVector, this._positionVector);
mat4.lookAt(this._viewMatrix, this._positionVector, this._lookAtVector, this._upVector);
mat4.perspective(this._projMatrix, this._fov * Math.PI / 180, gl.canvas.clientWidth / gl.canvas.clientHeight, this._nearPlane, this._farPlane);
mat4.multiply(this._vpMatrix, this._projMatrix, this._viewMatrix);
Core.getAudioContext().listener.setOrientation(this._lookAtVector[0], this._lookAtVector[1], this._lookAtVector[2], 0, 1, 0);
}
Is this the right way? I can hear that the sound is different when I rotate the camera, but I am not sure. And do I have to multiply the resulting upVector by the current viewProjectionMatrix?

SKVideoNode only rendering in SCNScene when the node or camera moves

I am using a very simple method to set up an SKVideoNode and place it inside an SCNNode via the geometry's diffuse contents. When I do this, the only time the texture updates and shows the video properly is when the camera or node is moving. When both are stationary, the texture never updates (as if the video isn't even playing), but the sound does play.
Obviously it's still playing the video, just not rendering it properly. I have no idea why.
func setupAndPlay() {
// create the asset & player and grab the dimensions
let path = NSBundle.mainBundle().pathForResource("indycar", ofType: "m4v")!
let asset = AVAsset(URL: NSURL(fileURLWithPath: path))
let size = asset.tracksWithMediaType(AVMediaTypeVideo)[0].naturalSize
let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))
// setup the video SKVideoNode
let videoNode = SKVideoNode(AVPlayer: player)
videoNode.size = size
videoNode.position = CGPoint(x: size.width * 0.5, y: size.height * 0.5)
// setup the SKScene that will house the video node
let videoScene = SKScene(size: size)
videoScene.addChild(videoNode)
// create a wrapper node (note: the geometry doesn't matter; it happens with spheres and planes)
let videoWrapperNode = SCNNode(geometry: SCNSphere(radius: 10))
videoWrapperNode.position = SCNVector3(x: 0, y: 0, z: 0)
// set the material's diffuse contents to be the video scene we created above
videoWrapperNode.geometry?.firstMaterial?.diffuse.contents = videoScene
videoWrapperNode.geometry?.firstMaterial?.doubleSided = true
// reorient the video properly
videoWrapperNode.scale.y = -1
videoWrapperNode.scale.z = -1
// add it to our scene
scene.rootNode.addChildNode(videoWrapperNode)
// if I uncomment this, the video plays correctly; if I comment it out, the texture on the videoWrapperNode only
// gets updated while I'm moving the camera around. The sound always plays properly.
videoWrapperNode.runAction( SCNAction.repeatActionForever( SCNAction.rotateByAngle(CGFloat(M_PI * 2.0), aroundAxis: SCNVector3(x: 0, y: 1, z: 0), duration: 15.0 )))
videoNode.play()
}
Has anyone come across anything similar? Any help would be appreciated.
Sounds like you need to set .playing = true on your SCNView.
From the docs:
If the value of this property is NO (the default), SceneKit does not increment the scene time, so animations associated with the scene do not play. Change this property's value to YES to start animating the scene.
I also found that setting rendersContinuously to true on the renderer (e.g. the SCNView) will make the video play.
If you log the SCNSceneRendererDelegate's update calls, you can see when frames are drawn.
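For example, a tiny sketch; scnView is assumed to be the SCNView displaying the scene, and the property is spelled playing in the Swift 2 era the question targets (isPlaying in Swift 3 and later):
// Keep SceneKit advancing scene time so the SKScene/video texture updates
// even when nothing in the 3D scene is animating.
scnView.playing = true
// or, as mentioned above (iOS 11+):
// scnView.rendersContinuously = true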
I have been unsuccessful at getting an SKVideoNode to display video in Xcode 7.0 or 7.1. If I run my code or other samples on a hardware device like an iPad or iPhone, the video displays fine, but on the Simulator only audio plays. The same code works fine in Xcode 6.4's Simulator.
I have the CatNap example from Ray Wenderlich's iOS & tvOS Games by Tutorials (iOS 9) & it does NOT run in the Simulator 9.1 that comes with Xcode 7.1. I believe the Simulator is broken & have filed a bug with Apple but have had no response in a month.
Does anyone have sample code for an SKVideoNode that works in the Simulator in Xcode 7.1?

AVAudioMixerNode pan or AVAudioUnitSampler stereoPan properties not working to change left/right balance of sound output for AVAudioEngine

I have the following code, which plays a single MIDI note, but I want to be able to adjust the balance/pan so that it only plays out of the left speaker or the right speaker, or perhaps some combination. I thought changing "sampler.stereoPan" or perhaps "engine.mainMixerNode.pan" would do the trick, but it seems to have no effect. Any ideas what I'm doing wrong?
engine = AVAudioEngine()
sampler = AVAudioUnitSampler()
sampler.stereoPan = -1.0 // doesn't work
//engine.mainMixerNode.pan = -1.0 // doesn't work
engine.attachNode(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: engine.mainMixerNode.outputFormatForBus(0))
var error: NSError?
engine.startAndReturnError(&error)
sampler.startNote(65, withVelocity: 64, onChannel: 1)
You should set the pan of a node after it has been connected; the pan settings are reset to their defaults by engine.connect.
According to the Apple Developer Forums, the range of stereoPan is -100 to 100.
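Putting both points together, a sketch of the corrected ordering in the same Swift 2 style as the question (the exact pan value is an assumption based on the range reported above):
engine.attachNode(sampler)
engine.connect(sampler, to: engine.mainMixerNode,
    format: engine.mainMixerNode.outputFormatForBus(0))

// Set the pan only after the connection is made; otherwise engine.connect
// resets it to the default.
sampler.stereoPan = -100.0   // hard left if the -100...100 range applies; try -1.0 otherwise

var error: NSError?
engine.startAndReturnError(&error)
sampler.startNote(65, withVelocity: 64, onChannel: 1)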
