How can I pan an oscillator to one ear using AudioKit? This is what I have right now:
oscillator = AKOscillator()
leftPan = AKPanner(oscillator, pan: -1)
mix = AKMixer(leftPan, oscillator)
AudioKit.output = mix
AudioKit.start()
mix.start()
leftPan.start()
oscillator.start()
However, the sound is still playing in both ears.
How can I fix this?
Try this:
oscillator.start()
leftPan = AKPanner(oscillator, pan: -1)
AudioKit.output = leftPan
AudioKit.start()
You don't seem to need the mixer at all. My guess is that it's redundant, and mixing the unpanned oscillator back in is what defeats the pan.
The root cause of the sound being in both ears is that the mix also includes the original oscillator, which is unpanned. DopApps's answer is correct: simply assign leftPan to AudioKit.output.
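For reference, here is a minimal standalone sketch of the corrected chain in the same AudioKit v4 style as the code above (whether AudioKit.start() needs try depends on your exact AudioKit 4 version):
import AudioKit

// Only the panner reaches the output, so the unpanned oscillator is never heard directly.
let oscillator = AKOscillator()
let leftPan = AKPanner(oscillator, pan: -1) // -1 = hard left, +1 = hard right

AudioKit.output = leftPan
do {
    try AudioKit.start()
} catch {
    print("AudioKit failed to start: \(error)")
}
oscillator.start()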
I'm new to AudioKit and am using v5. The oscillator frequency ramp doesn't seem to work as expected. The following example has two successive ramps. The first ramps 440-880, but the second then sounds like it goes 880-660 instead of 440-660.
If I comment out osc.$frequency.ramp(to: 660.0, duration: 1.0), it ramps 440-880, then ramps extremely quickly to 440 but gets there.
Seems like something isn't resetting after the ramp.
(The sleeps are only for testing.)
Is this a bug? Is there something else I should be doing? Any insight would be much appreciated! Thanks!
import Cocoa
import AudioKit

class Test {
    let akEngine = AudioEngine()
    let osc = Oscillator()

    func setup() {
        osc.amplitude = 0.1
        akEngine.output = osc
        do {
            try akEngine.start()
        } catch {
            print("Couldn't start AudioEngine.")
        }
        osc.frequency = 440.0
        osc.start()
        osc.$frequency.ramp(to: 880.0, duration: 1.0)
        sleep(2)
        osc.stop()
        sleep(1)
        osc.frequency = 440.0
        osc.start()
        osc.$frequency.ramp(to: 660.0, duration: 1.0)
        sleep(2)
        osc.stop()
    }
}
This probably deserves an explanation, or maybe it should be fixed to work the way you have it above. But once you start automating changes, you should stick with the automation syntax and not jump back to plain setting of values. For you this means just replacing osc.frequency = 440.0 with osc.$frequency.ramp(to: 440.0, duration: 0.0), and I believe you will get the effect you want.
Setting a parameter (osc.frequency = 440.0) causes a tiny ramp to be used to avoid zippering (equivalent to osc.$frequency.ramp(to: 440.0, duration: epsilon)). The test code immediately applies another ramp, overriding the zippering ramp.
If you insert a sleep after your second osc.start(), you'll hear the quick zipper ramp back to 440.
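Concretely, that just means replacing the plain assignment before the second ramp with a zero-length ramp. A sketch of the relevant part of setup(), keeping everything else from the code above:
// ...first ramp, sleep(2), osc.stop(), sleep(1) as before...
// Stay in the automation syntax: a zero-duration ramp instead of osc.frequency = 440.0,
// so the next ramp call starts from 440 rather than overriding the implicit anti-zipper ramp.
osc.$frequency.ramp(to: 440.0, duration: 0.0)
osc.start()
osc.$frequency.ramp(to: 660.0, duration: 1.0)
sleep(2)
osc.stop()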
If I use AKOscillator only for a specific purpose, should I still use the envelope classes to avoid an amplitude click when I start/stop the oscillator?
Or is there a more lightweight method?
One "light" method is to set your parameter ramp to a non zero value, start your amplitude at zero, and then set your amplitude. Ramping is the same value for all parameters, though, so depending on if you want your frequency to change at a different ramp, you may want to change the ramp again after it has reached the amplitude you want.
Here's an example playground:
import AudioKitPlaygrounds
import AudioKit
let oscillator = AKOscillator(waveform: AKTable(.sine), amplitude: 0)
oscillator.rampDuration = 0.2
AudioKit.output = oscillator
try AudioKit.start()
oscillator.start()
oscillator.amplitude = 1.0
sleep(1)
oscillator.amplitude = 0
I used your code and it did not help, but I found out that this 'click' appears at the end, when the oscillator stops. Even with rampDuration at 0.0 there is no 'click' at the start; the only 'click' is at the end. Here is my code (it is inside an iOS app):
class ViewController: UIViewController {

    var osc = AKOscillator(waveform: AKTable(.sine), amplitude: 0)

    @IBAction func buttonTapped(_ sender: UIButton) { // when the button in the app is pressed
        osc.rampDuration = 0.2
        AudioKit.output = osc
        osc.frequency = Double.random(in: 100.0...1000.0)
        try? AudioKit.start()
        osc.start()
        osc.amplitude = 0.5
        osc.rampDuration = 0.0 // to avoid a frequency glide effect
        sleep(1)
        // I tried changing rampDuration here, before stopping the oscillator, but it did not help
        osc.stop() // here the amplitude 'click' appears
        try? AudioKit.stop()
    }
}
So, as I supposed, I have to use envelopes anyway?
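If ramping alone can't remove the stop click, a minimal sketch of the envelope route would look like the following. It uses AKAmplitudeEnvelope from AudioKit 4; the durations are purely illustrative, untested values:
import AudioKit

let osc = AKOscillator(waveform: AKTable(.sine))
// Wrap the oscillator in an amplitude envelope so starting and stopping are
// smoothed by the attack and release stages instead of cutting the signal hard.
let envelope = AKAmplitudeEnvelope(osc,
                                   attackDuration: 0.01,
                                   decayDuration: 0.01,
                                   sustainLevel: 0.5,
                                   releaseDuration: 0.05)
AudioKit.output = envelope
try? AudioKit.start()

osc.start()       // keep the oscillator itself running
envelope.start()  // note on: attack -> decay -> sustain
sleep(1)
envelope.stop()   // note off: the release stage replaces the hard cut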
We are using two AKMixers (one for the left channel, one for the right) and one AKMixer as output with these two mixers as inputs.
If one of the input mixers has a volume lower than 0.00001, the output signal is lost. Lower volumes should be possible, though, because if we lower the main system volume while the mixer volume stays above 0.00001, the signal on the headphone jack keeps getting quieter.
As a workaround I tried setting the output AKMixer's volume to 0.5 and the input mixers to 0.00001, and that works too. But in my application I also need maximum output, and then I get weird "clicks" when changing both volume levels at once.
It would be great if somebody could help, either with the workaround or with the underlying problem.
Thanks.
var rightSine = AKOscillator(waveform: AKTable(.sine))
var rightPanner: AKMixer!

let pan2 = AKPanner(self.rightSine, pan: 1)
pan2.rampDuration = 0
let right1: AKMixer = AKMixer(pan2 /* , ... some more */)
self.rightPanner = right1

let mix = AKMixer(self.rightPanner /* left channel... */)
mix.volume = 1.0
AudioKit.output = mix
do {
    try AudioKit.start()
} catch {
}
self.rightPanner.volume = 0.00002
This is the (shortened) code used to initialise the audio setup; afterwards the nodes are started.
Edit: I'm still testing the precise threshold at which the output breaks.
AudioKit's AKMixer is a simple wrapper around Apple's AVAudioMixerNode, so I can't really dig much deeper to help you solve the problem using that node. But if you're willing to switch to AKBooster, whose job is to amplify or attenuate a signal, I think you will be fine using small numbers for your gain value.
var rightSine = AKOscillator(waveform: AKTable(.sine))
var rightBooster: AKBooster!

let pan2 = AKPanner(self.rightSine, pan: 1)
pan2.rampDuration = 0
let right1: AKMixer = AKMixer(pan2 /* , ... some more */)
self.rightBooster = AKBooster(right1)

let mix = AKMixer(self.rightBooster /* left channel... */)
mix.volume = 1.0
AudioKit.output = mix
self.rightBooster.gain = 0.00002
I am trying to play back audio files with the sequencer in the AudioKit framework.
AudioKit.output = sampler
AudioKit.start()
sampler.enableMIDI(midi.client, name: "sampler")

// sequencer start
let seq = AKSequencer()
seq.setLength(AKDuration(beats: Double(4)))
seq.enableLooping()

let pattern = seq.newTrack()
pattern?.setMIDIOutput(sampler.midiIn)
pattern!.add(noteNumber: 48, velocity: 127, position: AKDuration(beats: Double(1)), duration: AKDuration(beats: Double(0.2)), channel: 0)
pattern!.add(noteNumber: 48, velocity: 127, position: AKDuration(beats: Double(1)), duration: AKDuration(beats: Double(0.2)), channel: 0)
pattern!.add(noteNumber: 48, velocity: 127, position: AKDuration(beats: Double(2)), duration: AKDuration(beats: Double(0.2)), channel: 0)
pattern!.setLoopInfo(AKDuration(beats: Double(4)), numberOfLoops: 80)
seq.play()
I got to the point where the AKMIDISampler will only play sine waves rather than the right sample, as described here.
So, as it turns out, it is not possible to create sequences "on the fly", so I started to look for workarounds and found SelectorClock, a workaround from the AudioKit developers. Sadly it no longer works; many of the class definitions and their properties have changed.
Maybe I am not up to date and this has been fixed already; if not, I'm sure there must be a go-to solution to this issue.
Turn on the Background Modes capability for your target.
Choose Audio.
Without this, you get just sine waves.
If you want to be completely independent of AKSequencer, you can try the following:
let metro = AKMetronome()
metro.tempo = 120.0
metro.frequency1 = 0
metro.frequency2 = 0
metro.callback = {
    // your code, e.g. trigger an AKSamplePlayer() that was defined earlier in your code:
    sample.play()
}
AudioKit.output = AKMixer(metro, sample)
try! AudioKit.start()
metro.start()
I haven't tested this piece of code since I am on my phone right now, but it should work. I have this concept running on my iPhone 6s and it works very well. I also tried to replace AKMetronome() with my own class, but I haven't figured out every single aspect of the Sporth parameter yet. I basically want to get rid of initiating any metronome sound in the first place (it is already set to zero in Sporth, so it shouldn't produce any noise). I'll let you know in case I achieve that.
I have the following code, which plays a single MIDI note, but I want to be able to adjust the balance/pan so that it plays only out of the left speaker, only out of the right speaker, or some combination. I thought changing sampler.stereoPan or perhaps engine.mainMixerNode.pan would do the trick, but it seems to have no effect. Any ideas what I'm doing wrong?
engine = AVAudioEngine()
sampler = AVAudioUnitSampler()
sampler.stereoPan = -1.0 // doesn't work
//engine.mainMixerNode.pan = -1.0 // doesn't work
engine.attachNode(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: engine.mainMixerNode.outputFormatForBus(0))
var error: NSError?
engine.startAndReturnError(&error)
sampler.startNote(65, withVelocity: 64, onChannel: 1)
You should set the pan of a node after it has been connected; the pan settings are reset to their defaults by the engine.connect method.
Also, according to the Apple Developer Forums, the range of stereoPan is -100 to 100, so -1.0 is barely off center.
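Putting both points together, a sketch of the reordered setup, reusing the same (pre-Swift-3) API calls from the question:
engine = AVAudioEngine()
sampler = AVAudioUnitSampler()
engine.attachNode(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: engine.mainMixerNode.outputFormatForBus(0))
// Set the pan only after connecting, and use the -100...100 range (hard left = -100).
sampler.stereoPan = -100.0
var error: NSError?
engine.startAndReturnError(&error)
sampler.startNote(65, withVelocity: 64, onChannel: 1)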