I am having trouble getting positional audio to work in SceneKit. Starting with the SceneKit game template generated by Xcode, I have added the following code to the end of the handleTap method:
let ship = scnView.scene!.rootNode.childNode(withName: "ship", recursively: true)!
if let source = SCNAudioSource(fileNamed: "art.scnassets/monoAudioTest.wav") {
    source.volume = 1
    source.isPositional = true
    source.shouldStream = true
    source.loops = true
    source.load()
    let player = SCNAudioPlayer(source: source)
    ship.addAudioPlayer(player)
}
ship.runAction(SCNAction.move(to: SCNVector3(0, 0, -10000), duration: 8))
The audio plays but the volume doesn't decrease as the jet moves away from the camera. Am I missing some steps or making some wrong assumptions?
Cross-posted to the Apple Developer Forums.
As mentioned by Jed Soane, and confirmed by Apple in a Radar, the issue was that my audio file was stereo instead of mono. Only mono audio files will work for positional audio.
You should be able to get positional audio with source.shouldStream = false.
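Putting both answers together, here is a minimal sketch of the corrected setup, assuming the asset has been re-exported as a mono file (the "monoAudioTest-mono.wav" name is hypothetical):

let ship = scnView.scene!.rootNode.childNode(withName: "ship", recursively: true)!
// Hypothetical mono re-export of the original asset; stereo files are not spatialized.
if let source = SCNAudioSource(fileNamed: "art.scnassets/monoAudioTest-mono.wav") {
    source.volume = 1
    source.isPositional = true
    source.shouldStream = false   // streamed sources are not positional
    source.loops = true
    source.load()
    ship.addAudioPlayer(SCNAudioPlayer(source: source))
}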
Hello fellow AudioKit users,
I'm trying to set up AudioKit 5 with a playback time indication and am having trouble.
If I use AudioPlayer's duration property, this is the total time of the audio file, not the current playback time.
ex:
let duration = player.duration
Always gives the file's total time.
Looking at old code from AKAudioPlayer, it seemed to have a "currentTime" property.
The migration guide (https://github.com/AudioKit/AudioKit/blob/v5-main/docs/MigrationGuide.md) mentions some potentially helpful classes from the old version; however, AKTimelineTap has no replacement and no comments from the developers... nice.
I'm also still not sure how to manipulate the current playback time.
I've checked out AudioKit 5's Cookbook as well, but it covers adding effects and nodes, not playback display.
Thanks for any help with this new version of AudioKit.
AudioPlayer exposes a playerNode property, which is an AVAudioPlayerNode.
Using its lastRenderTime and playerTime(forNodeTime:), you can calculate the current playback time.
ex:
// get playerNode in AudioPlayer.
let playerNode = player.playerNode
// get lastRenderTime, and transform to playerTime.
guard let lastRenderTime = playerNode.lastRenderTime else { return }
guard let playerTime = playerNode.playerTime(forNodeTime: lastRenderTime) else { return }
// use sampleRate and sampleTime to calculate current time in seconds.
let sampleRate = playerTime.sampleRate
let sampleTime = playerTime.sampleTime
let currentTime = Double(sampleTime) / sampleRate
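To drive a playback-time display, one option is to wrap that calculation in a helper and poll it. A sketch, assuming "player" is the AudioKit AudioPlayer from above and that the 0.1 s interval is arbitrary:

// Returns the current playback time in seconds, or nil if the engine isn't rendering yet.
func currentTime() -> Double? {
    let playerNode = player.playerNode
    guard let lastRenderTime = playerNode.lastRenderTime,
          let playerTime = playerNode.playerTime(forNodeTime: lastRenderTime) else { return nil }
    return Double(playerTime.sampleTime) / playerTime.sampleRate
}

// Poll periodically to update the UI (label, progress bar, etc.).
let displayTimer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
    if let seconds = currentTime() {
        print(String(format: "%.1f / %.1f s", seconds, player.duration))
    }
}

Note that sampleTime counts from when the node last started rendering, so pausing or seeking would need additional bookkeeping.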
I'm trying to play positional audio in a Swift iOS app using AVAudioEngine and AVAudioEnvironmentNode. I can play the audio and hear it spatialized, panning between the stereo outputs, but only in the simulator. When I run the same app on an iPhone, the audio plays in both ears rather than panning when the source is moved around. Is there some special configuration I need to do, like manually handling the device audio output?
I initialize the audio engine and player like so:
let audioEngine = AVAudioEngine()
let audioEnv = AVAudioEnvironmentNode()
audioEngine.attach(audioEnv)
audioEngine.connect(
    audioEnv,
    to: audioEngine.mainMixerNode,
    format: audioEnv.outputFormat(forBus: 0)
)
try audioEngine.start()
let player = AVAudioPlayerNode()
audioEngine.attach(player)
audioEngine.connect(
    player,
    to: audioEnv,
    format: AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)
)
player.scheduleFile(...)
player.play()
My source files are mono channel .wav.
At some point in the future, I change the position of the player:
player.position = AVAudio3DPoint(x: 5, y: 0, z: 0)
This should play only (or mostly) in one ear. When run in the iOS simulator, it does exactly what I expect. However, on an actual device it just plays evenly in both ears no matter what player.position is set to. I suspect it has to do with the configuration of audioEngine.
Thoughts?
Try setting:
audioEnv.renderingAlgorithm = .HRTFHQ // or .HRTF
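If that alone doesn't change anything on the device, note that renderingAlgorithm is declared in the AVAudio3DMixing protocol, which source nodes such as AVAudioPlayerNode also adopt, so it may need to be set on the player feeding the environment node. A sketch against the setup above:

// Request HRTF-based spatialization for the source node ("player" from the question's setup).
player.renderingAlgorithm = .HRTFHQ   // or .HRTF
player.position = AVAudio3DPoint(x: 5, y: 0, z: 0)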
I would like to make a 5-band audio equalizer (60Hz, 230Hz, 910Hz, 4kHz, 14kHz) using AVAudioEngine. I would like to have the user input gain per band through a vertical slider and accordingly adjust the audio that is playing. I tried using AVAudioUnitEQ to do this, but I hear no difference when playing the audio. I tried to hardcode in values to specify a gain at each frequency, but it still does not work. Here is the code I have:
var audioEngine: AVAudioEngine = AVAudioEngine()
var equalizer: AVAudioUnitEQ!
var audioPlayerNode: AVAudioPlayerNode = AVAudioPlayerNode()
var audioFile: AVAudioFile!
// in viewDidLoad():
equalizer = AVAudioUnitEQ(numberOfBands: 5)
audioEngine.attach(audioPlayerNode)
audioEngine.attach(equalizer)
let bands = equalizer.bands
let freqs = [60, 230, 910, 4000, 14000]
audioEngine.connect(audioPlayerNode, to: equalizer, format: nil)
audioEngine.connect(equalizer, to: audioEngine.outputNode, format: nil)
for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
}
bands[0].gain = -10.0
bands[0].filterType = .lowShelf
bands[1].gain = -10.0
bands[1].filterType = .lowShelf
bands[2].gain = -10.0
bands[2].filterType = .lowShelf
bands[3].gain = 10.0
bands[3].filterType = .highShelf
bands[4].gain = 10.0
bands[4].filterType = .highShelf
do {
    if let filepath = Bundle.main.path(forResource: "song", ofType: "mp3") {
        let filepathURL = NSURL.fileURL(withPath: filepath)
        audioFile = try AVAudioFile(forReading: filepathURL)
        audioEngine.prepare()
        try audioEngine.start()
        audioPlayerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
        audioPlayerNode.play()
    }
} catch _ {}
Since the low frequencies have a gain of -10 and the high frequencies have a gain of 10, there should be a very noticeable difference when playing any media. However, when the media starts playing, it sounds the same as if played without any equalizer attached.
I'm not sure why this is happening, but I tried several different things to debug. I thought that it might be the order of the functions so I tried switching it so that audioEngine.connect is called after adjusting all of the bands, but that did not make a difference either.
I tried this same code with using an AVAudioUnitTimePitch, and it worked perfectly, so I am dumbfounded as to why it does not work with AVAudioUnitEQ.
I do not want to use any third-party libraries or cocoa pods for this project, I would like to do it using AVFoundation alone.
Any help would be greatly appreciated!
Thanks in advance.
Looking through the AVAudioUnitEQFilterParameters documentation, I noticed that I had set every parameter except bypass, and it seems that changing this flag fixed everything!
So I believe the main issue here is that each AVAudioUnitEQ band is bypassed by default; you have to set bypass = false yourself, or the values you set have no effect.
So, I changed
for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
}
to
for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
    bands[i].bypass = false
    bands[i].filterType = .parametric
}
and everything started working. Furthermore, to make an effective equalizer that allows the user to modify individual frequencies, the filterType for each band should be set to .parametric.
I am still unsure what I should set the bandwidth to, but I can probably check online or just experiment until the sound matches another equalizer application.
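For what it's worth, a sketch of how the per-band setup and the slider hookup might look; the 1.0-octave bandwidth, the gain range, and the setGain helper are assumptions rather than anything from the original post:

// Assumed configuration: parametric bands with an explicit bandwidth, bypass disabled.
for (i, band) in equalizer.bands.enumerated() {
    band.filterType = .parametric
    band.frequency = Float(freqs[i])
    band.bandwidth = 1.0      // in octaves; tune by ear
    band.bypass = false
    band.gain = 0
}

// Hypothetical helper to call from each vertical slider's action.
func setGain(_ gain: Float, forBand index: Int) {
    equalizer.bands[index].gain = gain   // e.g. -12...+12 dB from the slider
}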
I have a simple audio file in .wav format (the file is cut perfectly to loop), and I've tried different methods to loop it. My first attempt was simply using AVPlayer and an NSNotification to detect when the item ended, then seeking to time zero and playing again. However, there was clearly a gap.
I've been looking at different solutions online and found people using AVQueuePlayer to do the switching:
Looping AVPlayer seamlessly
However, when implemented, this still produces a gap.
Here's my current notification code:
weak var weakSelf = self
NSNotificationCenter.defaultCenter().addObserverForName(AVPlayerItemDidPlayToEndTimeNotification, object: nil, queue: nil, usingBlock: { (note: NSNotification) -> Void in
    if weakSelf?.currentQueuePlayer.currentItem == weakSelf?.currentAudioItemOne {
        weakSelf?.currentQueuePlayer.insertItem((weakSelf?.currentAudioItemTwo)!, afterItem: nil)
        weakSelf?.currentAudioItemTwo.seekToTime(kCMTimeZero)
    } else {
        weakSelf?.currentQueuePlayer.insertItem((weakSelf?.currentAudioItemOne)!, afterItem: nil)
        weakSelf?.currentAudioItemOne.seekToTime(kCMTimeZero)
    }
})
Here's my code to set up the current QueuePlayer.
let audioPlayerItem = AVPlayerItem(URL: url)
currentAudioItemOne = audioPlayerItem
currentAudioItemTwo = audioPlayerItem
currentQueuePlayer = AVQueuePlayer()
currentQueuePlayer.insertItem(currentAudioItemOne, afterItem: nil)
currentQueuePlayer.play()
I've been working at this problem for several days now. Any leads or new things to try would be appreciated. The only thing I haven't tried so far is lower-quality audio files. These .wav files are all over 1 MB, which had me suspecting that the file size could be affecting the seamless looping.
EDIT:
Using AVPlayerLooper to create the 'Treadmill' effect:
let url = URL(fileURLWithPath: path)
let audioPlayerItem = AVPlayerItem(url: url)
currentAudioItemOne = audioPlayerItem
currentQueuePlayer = AVQueuePlayer()
currentAudioPlayerLayer = AVPlayerLayer(player: currentQueuePlayer)
currentAudioLooper = AVPlayerLooper(player: currentQueuePlayer, templateItem: currentAudioItemOne)
currentQueuePlayer.play()
EDIT 2:
afinfo on one of my wav files:
Num Tracks: 1
----
Data format: 2 ch, 44100 Hz, 'lpcm' (0x0000000C) 16-bit little-endian signed integer
no channel layout.
estimated duration: 11.302336 sec
audio bytes: 1993732
audio packets: 498433
bit rate: 1411200 bits per second
packet size upper bound: 4
maximum packet size: 4
audio data file offset: 44
not optimized
source bit depth: I16
----
You are inserting the item too late in your current solution. You need to queue up more than one initial item, so there's always a primed AVPlayerItem ready to go.
This is the AVQueuePlayer "treadmill pattern," as described in this WWDC 2016 session. If you're targeting iOS 10, you can use the new AVPlayerLooper class, which does this for you (also covered in that session). Apple has also provided a sample project demonstrating both strategies.
Lower-level solutions include scheduling the audio buffers on an AVAudioEngine instance, using an AudioQueue, or mashing the buffers together yourself with an AudioUnit.
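As an illustration of that AVAudioEngine route (not the original answer's code), here is a sketch that loops a whole file gaplessly; the names and the throws-based error handling are my own choices:

import AVFoundation

let engine = AVAudioEngine()
let loopNode = AVAudioPlayerNode()

func startLoop(url: URL) throws {
    // Read the entire file into a PCM buffer.
    let file = try AVAudioFile(forReading: url)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else { return }
    try file.read(into: buffer)

    engine.attach(loopNode)
    engine.connect(loopNode, to: engine.mainMixerNode, format: buffer.format)
    try engine.start()

    // .loops restarts the buffer sample-accurately, so there is no audible gap.
    loopNode.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
    loopNode.play()
}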
I'm developing an app in Swift and SpriteKit (Xcode 6.4, currently building for iOS 8.4). I'm using SKVideoNode in conjunction with AVPlayer to play a full-screen video. The code follows:
let path = NSBundle.mainBundle().pathForResource("SPLASH_x", ofType:"mov")
let vUrl = NSURL.fileURLWithPath(path!)
let asset = AVAsset.assetWithURL(vUrl) as? AVAsset
let playerItem = AVPlayerItem(asset: asset)
let player = AVPlayer(URL: vUrl)
SplashVideo = SKVideoNode(AVPlayer: player)
SplashVideo!.xScale = self.size.width / SplashVideo!.size.width
SplashVideo!.yScale = self.size.height / SplashVideo!.size.height
SplashVideo!.position = CGPointMake(self.frame.midX, self.frame.midY)
self.addChild(SplashVideo!)
var observer: AnyObject? = nil
observer = player.addPeriodicTimeObserverForInterval(CMTimeMake(1, 30), queue: dispatch_get_main_queue(),
    usingBlock: { (time: CMTime) -> Void in
        let secs: Float64 = CMTimeGetSeconds(time)
        println("I think it's playing")
        if secs > 0.01 {
            self.hideBackground()
            println("I think I'm done observing. Background hidden!")
            player.removeTimeObserver(observer!)
        }
    })
println("I think I'm playing the splash video:")
SplashVideo!.play()
(In case it's not clear, this happens in didMoveToView; I have imported Foundation, AVFoundation, and SpriteKit at the top of the file).
This works fine in the simulator; if I build and run on my iPad, nothing happens at all: it displays a black screen, or, if I remove the time observer (so that the background never gets hidden), I just see the background. (The background is the first frame of the movie; I was experiencing a black flash at the beginning of video playback and am using the time observer as a masking technique to hide it.) One of my users has reported that it worked for him until he upgraded to iOS 9 (less of a concern); another reports that he hears the audio that goes with the .mov file but doesn't see the video itself (more of a concern). So I'm getting a variety of non-working behaviors, which is the best kind of bug. And by best I mean worst.
Things I have tried:
Various versions and combinations of directly linking in Foundation, AVFoundation, SpriteKit when building.
Using AVPlayerLayer instead of SpriteKit (no change in behavior for me, didn't deploy so I don't know if it would help any of my testers).
Removing the time observer entirely (no change).
Searching the interwebs (no help).
Tearing my hair out (ouch).
All were ineffective. And now I am bald. And sad.
Answering my own question: after much trial and error, it appears that you can't scale an SKVideoNode in iOS 9 (or possibly this was never supported? The documentation is not clear). It's also true that the Xcode 7 simulator isn't playing video no matter what I do, which wasn't helping matters. In any case, what you can do is change the size property of the node (and, I guess, let SpriteKit do the scaling? The documentation seems spotty), and that seems to work:
let asset = AVAsset.assetWithURL(vUrl) as? AVAsset
let playerItem = AVPlayerItem(asset: asset)
let player = AVPlayer(URL: vUrl)
SplashVideo = SKVideoNode(AVPlayer: player)
SplashVideo!.size = self.size
SplashVideo!.position = CGPointMake(self.frame.midX, self.frame.midY)
self.addChild(SplashVideo!)
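If stretching to the scene size distorts the picture, one option (written in current Swift syntax; the aspect-fit behavior is my assumption, not part of the original answer) is to derive the node's size from the video track's natural size:

// Size the node to fit the scene while preserving the video's aspect ratio.
// vUrl is the same file URL used above.
if let track = AVURLAsset(url: vUrl as URL).tracks(withMediaType: .video).first {
    let natural = track.naturalSize.applying(track.preferredTransform)
    let videoSize = CGSize(width: abs(natural.width), height: abs(natural.height))
    let scale = min(self.size.width / videoSize.width, self.size.height / videoSize.height)
    SplashVideo!.size = CGSize(width: videoSize.width * scale, height: videoSize.height * scale)
}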