Using AKMixer with volume lower than 0.00001 there is no output - AudioKit

We are using two AKMixers (one for the left channel, one for the right) and a third AKMixer as output, with these two mixers as inputs.
If one of the input mixers has a volume lower than 0.00001, the output signal is lost entirely. Yet lower levels should be possible, because if we keep the mixer volume just above 0.00001 and lower the main system volume instead, the signal on the headphone jack keeps getting quieter.
As a workaround I tried setting the output AKMixer's volume to 0.5 and the input mixers to 0.00001, and that works as well. But my application also needs maximum output, and then I get weird "clicks" when changing both volume levels at once.
It would be great if somebody could help, either with the workaround or with the underlying problem.
Thanks.
var rightSine = AKOscillator(waveform: AKTable(.sine))
var rightPanner: AKMixer!

let pan2 = AKPanner(self.rightSine, pan: 1)
pan2.rampDuration = 0
let right1: AKMixer = AKMixer(pan2 /*, .... some more */)
self.rightPanner = right1

let mix = AKMixer(self.rightPanner /* left channel... */)
mix.volume = 1.0
AudioKit.output = mix
do {
    try AudioKit.start()
} catch {
}

self.rightPanner.volume = 0.00002
This is the (shortened) code used to initialise the audio; afterwards the nodes are started.
Edit: I'm still testing the precise threshold at which the output breaks.

AudioKit's AKMixer is a simple wrapper around Apple's AVAudioMixerNode, and as such I can't really dig much deeper to help you solve the problem using that node. But if you're willing to switch to AKBooster, whose job is to amplify or diminish a signal, I think you will be fine using small numbers for your gain value.
var rightSine = AKOscillator(waveform: AKTable(.sine))
var rightBooster: AKBooster!

let pan2 = AKPanner(self.rightSine, pan: 1)
pan2.rampDuration = 0
let right1: AKMixer = AKMixer(pan2 /*, .... some more */)
self.rightBooster = AKBooster(right1)

let mix = AKMixer(self.rightBooster /* left channel... */)
mix.volume = 1.0
AudioKit.output = mix
self.rightBooster.gain = 0.00002
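If the "clicks" you get when changing both volume levels at once persist, ramping the gain change instead of stepping it may also help. This is only a sketch, and it assumes AKBooster exposes the same rampDuration parameter as the other AudioKit 4 nodes used above:
self.rightBooster.rampDuration = 0.02 // ~20 ms ramp (assumed AKBooster parameter)
self.rightBooster.gain = 0.00002      // the gain change is now ramped rather than stepped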

Related

How to use AudioKit Sequencer to configure time signature for a metronome?

Thanks for AudioKit!
I'm a beginner in Swift and AudioKit, so this may be an easy question:
I want to build a metronome app. From the example in AudioKit's new Cookbook and the old AKMetronome(), I can see how to build a simple metronome using AudioKit. But I don't know how to play beats with compound time signatures (3/8, 7/8, etc.). Both examples here use a time signature with 4 as a fixed bottom number, and only the top number can be changed (i.e. we can have 1/4, 3/4, 6/4 but not 3/8, 6/8).
Is there a way to change the bottom number?
Link for AKMetronome: https://audiokit.io/docs/Classes/AKMetronome.html#/s:8AudioKit11AKMetronomeC5resetyyF
AudioKit Cookbook's Shaker Metronome:
https://github.com/AudioKit/Cookbook/blob/main/Cookbook/Cookbook/Recipes/Shaker.swift
I made some changes to the Shaker Metronome's code to illustrate how you could create a metronome that plays different time signatures such as 6/8, 5/8, 7/8, and so on.
First I added some information to the ShakerMetronomeData structure:
enum Figure: Double {
    case quarter = 1.0
    case eighth = 0.5
}

struct ShakerMetronomeData {
    var isPlaying = false
    var tempo: BPM = 120
    var timeSignatureTop: Int = 4
    var downbeatNoteNumber = MIDINoteNumber(6)
    var beatNoteNumber = MIDINoteNumber(10)
    var beatNoteVelocity = 100.0
    var currentBeat = 0
    var figure: Figure = .quarter
    var pattern: [Int] = [4]
}
Then, the part of the updateSequences function that plays the metronome clicks would become:
func updateSequences() {
    var track = sequencer.tracks.first!
    // Track length in sequencer beats: number of subdivisions times their length
    track.length = Double(data.timeSignatureTop) * data.figure.rawValue
    track.clear()

    let vel = MIDIVelocity(Int(data.beatNoteVelocity))
    var startTime: Double = 0.0
    for numberOfBeatsInGroup in data.pattern {
        // The first beat of each group gets the downbeat sound
        track.sequence.add(noteNumber: data.downbeatNoteNumber, velocity: vel, position: startTime, duration: 0.4)
        startTime += data.figure.rawValue
        // The remaining beats of the group get the regular beat sound
        for _ in 1 ..< numberOfBeatsInGroup {
            track.sequence.add(noteNumber: data.beatNoteNumber, velocity: vel, position: startTime, duration: 0.1)
            startTime += data.figure.rawValue
        }
    }
}
These would be the values of the figure and pattern members of the structure for different time signatures:
/*
 Time signature   Figure     Pattern
 4/4              .quarter   [4]
 3/4              .quarter   [3]
 6/8              .eighth    [3,3]
 5/8              .eighth    [5]
 7/8              .eighth    [2,2,3]
*/
Please note I haven't tested this code, but it illustrates how you could play beats with a compound time signature.
This could be improved by having three different sounds instead of two: one for the start of each bar, one for the beginning of each group, and one for the remaining beats.
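As a quick, untested illustration of how the members above would be configured for one of the compound meters in the table (the rest of the Shaker recipe is assumed unchanged):
// Hypothetical setup for a 7/8 bar grouped as 2+2+3
data.timeSignatureTop = 7
data.figure = .eighth
data.pattern = [2, 2, 3]
updateSequences()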

Is there any method to avoid the amplitude click when stopping/starting AKOscillator besides using envelopes?

If I use AKOscillator only for a specific purpose, do I still have to use the envelope classes to avoid the amplitude click when I start/stop the oscillator?
Or are there other, more lightweight methods?
One "light" method is to set your parameter ramp to a non zero value, start your amplitude at zero, and then set your amplitude. Ramping is the same value for all parameters, though, so depending on if you want your frequency to change at a different ramp, you may want to change the ramp again after it has reached the amplitude you want.
Here's an example playground:
import AudioKitPlaygrounds
import AudioKit
let oscillator = AKOscillator(waveform: AKTable(.sine), amplitude: 0)
oscillator.rampDuration = 0.2
AudioKit.output = oscillator
try AudioKit.start()
oscillator.start()
oscillator.amplitude = 1.0
sleep(1)
oscillator.amplitude = 0
I used your code and it did not help, but I found out that this 'click' appears at the end, when the oscillator stops. So even if rampDuration is 0.0 there is no 'click' at the start; the only 'click' is at the end. Here is my code (it is inside an iOS app):
class ViewController: UIViewController {
    var osc = AKOscillator(waveform: AKTable(.sine), amplitude: 0)

    @IBAction func buttonTapped(_ sender: UIButton) { // when the button in the app is pressed
        osc.rampDuration = 0.2
        AudioKit.output = osc
        osc.frequency = Double.random(in: 100.0...1000.0)
        try? AudioKit.start()
        osc.start()
        osc.amplitude = 0.5
        osc.rampDuration = 0.0 // to avoid the frequency glide effect
        sleep(1)
        // osc.rampDuration - I tried to change rampDuration before the oscillator stops,
        // but it did not help
        osc.stop() // here the amplitude 'click' appears
        try? AudioKit.stop()
    }
}
So, as I supposed, I have to use an envelope anyway?
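For completeness, a minimal envelope-based sketch (untested, assuming the AudioKit 4 AKAmplitudeEnvelope initialiser with attack/decay/sustain/release parameters) would wrap the oscillator and start/stop the envelope instead of the oscillator:
// Sketch only: the envelope fades the signal in and out instead of cutting it
let osc = AKOscillator(waveform: AKTable(.sine), amplitude: 0.5)
let env = AKAmplitudeEnvelope(osc,
                              attackDuration: 0.01,
                              decayDuration: 0.0,
                              sustainLevel: 1.0,
                              releaseDuration: 0.05)
AudioKit.output = env
try? AudioKit.start()
osc.start()   // the oscillator keeps running
env.start()   // fade in over attackDuration
sleep(1)
env.stop()    // fade out over releaseDuration instead of clicking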

AudioKit: Way to inject silence/fade-out into a loop with AKPlayer?

In my app, I give the user the option to play a small frame of audio (from a larger audio file) in order to listen to it over and over while doing a manual transcription. AKPlayer makes this trivial. Now, because the frame of audio is pretty small, it's pretty intense to hear this loop over and over (a little maddening in the classical sense of the word). I'd like to either fade it out and back in with the loop, OR just inject about 500 ms of silence before the loop starts again. I have no idea where to start; here is the current working code as is:
public func playLoop(start: Double, end: Double) {
    self.chordLoopPlayer.isLooping = true
    self.chordLoopPlayer.buffering = .always
    self.chordLoopPlayer.preroll()
    let millisecondsPerSample: Double = 1000 / 44100
    let startingDuration: Double = (((start * millisecondsPerSample) / 1000) / 2)
    let endingDuration: Double = (((end * millisecondsPerSample) / 1000) / 2)
    print("StartingDuration: \(startingDuration) | EndingDuration: \(endingDuration)")
    self.chordLoopPlayer.loop.start = startingDuration
    self.chordLoopPlayer.loop.end = endingDuration
    self.chordLoopPlayer.play(from: startingDuration, to: endingDuration)
}
Thanks so much <3
You just need to set .fade values for your fade-in/fade-out prior to calling the play() function. AudioKit will execute them each time going in and out of the loop. So assuming you'd like a 2-second fade-out, and a 2-second fade-in (adjust to your taste), your code would look like:
public func playLoop(start: Double, end: Double) {
    self.chordLoopPlayer.isLooping = true
    self.chordLoopPlayer.buffering = .always
    self.chordLoopPlayer.preroll()
    let millisecondsPerSample: Double = 1000 / 44100
    let startingDuration: Double = (((start * millisecondsPerSample) / 1000) / 2)
    let endingDuration: Double = (((end * millisecondsPerSample) / 1000) / 2)
    print("StartingDuration: \(startingDuration) | EndingDuration: \(endingDuration)")
    self.chordLoopPlayer.loop.start = startingDuration
    self.chordLoopPlayer.loop.end = endingDuration
    // Add fade in/out values to fade in or fade out during playback; reset to 0 to disable.
    self.chordLoopPlayer.fade.inTime = 2  // in seconds
    self.chordLoopPlayer.fade.outTime = 2 // in seconds
    self.chordLoopPlayer.play(from: startingDuration, to: endingDuration)
}
I find the AudioKit documentation a bit frustrating in this respect: it's not super-easy to find these properties if you don't already know what you're looking for, or to understand how to use them if you haven't already come across sample code. So I hope this is a useful example for others who happen to search on this topic on SO. In any case, the list of sub-properties associated with AudioKit's .fade property is here: https://audiokit.io/docs/Classes/AKPlayer/Fade.html
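The other part of the question, injecting roughly 500 ms of silence between repeats, isn't handled by the fade properties. One possible sketch (untested, and assuming AKPlayer's completionHandler is called at the end of a non-looping play) is to drive the repetition manually inside playLoop instead of using isLooping:
// Sketch only: replay manually after a 500 ms gap instead of using isLooping
self.chordLoopPlayer.isLooping = false
self.chordLoopPlayer.completionHandler = { [weak self] in
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
        self?.chordLoopPlayer.play(from: startingDuration, to: endingDuration)
    }
}
self.chordLoopPlayer.play(from: startingDuration, to: endingDuration)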

On extracting the sound pressure level from AVAudioPCMBuffer

I have almost no knowledge of signal processing, and I'm currently trying to implement a function in Swift that triggers an event when there is an increase in the sound pressure level (e.g. when a human screams).
I am tapping the input node of an AVAudioEngine with a callback like this:
let recordingFormat = inputNode.outputFormat(forBus: 0)
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) {
    (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
    let arraySize = Int(buffer.frameLength)
    let samples = Array(UnsafeBufferPointer(start: buffer.floatChannelData![0], count: arraySize))
    // do something with samples
    let volume = 20 * log10(samples.reduce(0) { $0 + $1 } / Float(arraySize))
    if !volume.isNaN {
        print("this is the current volume: \(volume)")
    }
}
After turning it into a float array, I tried just getting a rough estimate of the sound pressure level by computing the mean.
But this gives me values that fluctuate a lot, even when the iPad is just sitting in a quiet room:
this is the current volume: -123.971
this is the current volume: -119.698
this is the current volume: -147.053
this is the current volume: -119.749
this is the current volume: -118.815
this is the current volume: -123.26
this is the current volume: -118.953
this is the current volume: -117.273
this is the current volume: -116.869
this is the current volume: -110.633
this is the current volume: -130.988
this is the current volume: -119.475
this is the current volume: -116.422
this is the current volume: -158.268
this is the current volume: -118.933
There is indeed a significant increase in this value if I clap near the microphone.
So I can do something like first computing the mean of these volumes during a preparation phase, and then checking for a significant increase during the event-triggering phase:
if !volume.isNaN {
    if isInThePreparingPhase {
        print("this is the current volume: \(volume)")
        volumeSum += volume
        volumeCount += 1
    } else if isInTheEventTriggeringPhase {
        if volume > meanVolume {
            // triggers an event
        }
    }
}
where meanVolume is computed during the transition from the preparing phase to the event-triggering phase: meanVolume = volumeSum / Float(volumeCount)
....
However, there appears to be no significant increase if I play loud music beside the microphone. And on rare occasions, volume is greater than meanVolume even when there is no increase in loudness audible to the human ear.
So what is the proper way of extracting the sound pressure level from AVAudioPCMBuffer?
Wikipedia gives a formula like this:
Lp = 20 · log10(p / p0) dB
with p being the root mean square sound pressure and p0 being the reference sound pressure.
But I have no idea what the float values in AVAudioPCMBuffer.floatChannelData represent. The Apple documentation only says:
The buffer's audio samples as floating point values.
How should I work with them?
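For what it's worth, the values in floatChannelData are normalised sample values (roughly in the -1...1 range), so a minimal sketch of the RMS formula applied per buffer, giving a level relative to full scale (dBFS, i.e. p0 = 1.0), could look like this (illustrative, not from the original answer):
import AVFoundation

// Illustrative: relative level in dBFS, i.e. 20 * log10(rms) with full scale as reference
func rmsLevel(from buffer: AVAudioPCMBuffer) -> Float {
    guard let channelData = buffer.floatChannelData?[0], buffer.frameLength > 0 else {
        return -Float.infinity
    }
    let n = Int(buffer.frameLength)
    var sumOfSquares: Float = 0
    for i in 0..<n {
        sumOfSquares += channelData[i] * channelData[i]
    }
    let rms = sqrt(sumOfSquares / Float(n))
    return 20 * log10(rms)   // 0 dBFS corresponds to a full-scale signal
}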
Thanks to the response from @teadrinker I finally found a solution to this problem. Here is my Swift code that outputs the volume of the AVAudioPCMBuffer input:
private func getVolume(from buffer: AVAudioPCMBuffer, bufferSize: Int) -> Float {
    guard let channelData = buffer.floatChannelData?[0] else {
        return 0
    }
    let channelDataArray = Array(UnsafeBufferPointer(start: channelData, count: bufferSize))

    var outEnvelope = [Float]()
    var envelopeState: Float = 0
    let envConstantAtk: Float = 0.16
    let envConstantDec: Float = 0.003

    for sample in channelDataArray {
        let rectified = abs(sample)
        if envelopeState < rectified {
            envelopeState += envConstantAtk * (rectified - envelopeState)
        } else {
            envelopeState += envConstantDec * (rectified - envelopeState)
        }
        outEnvelope.append(envelopeState)
    }

    // 0.015 is a noise-gate threshold to ignore the noise floor
    // coming from the microphone
    if let maxVolume = outEnvelope.max(),
        maxVolume > Float(0.015) {
        return maxVolume
    } else {
        return 0.0
    }
}
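To connect it back to the tap from the question, the function above would be called from the installTap callback, roughly like this (illustrative):
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
    let volume = self.getVolume(from: buffer, bufferSize: 1024)
    if volume > 0 {
        print("current envelope peak: \(volume)")
    }
}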
I think the first step is to get the envelope of the sound. You could use simple averaging to calculate the envelope, but you need to add a rectification step first (this usually means using abs() or square() to make all samples positive).
More commonly, a simple IIR filter is used instead of averaging, with different constants for attack and decay (here is a lab). Note that these constants depend on the sampling frequency; you can use this formula to calculate them:
1 - exp(-timePerSample*2/smoothingTime)
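For example, a small sketch deriving both constants from that formula (the 44.1 kHz sample rate and the smoothing times here are illustrative choices):
// Sketch: derive the attack/decay constants from the sample rate
let sampleRate: Float = 44_100
let timePerSample: Float = 1.0 / sampleRate
let attackSmoothingTime: Float = 0.001  // 1 ms, illustrative
let decaySmoothingTime: Float = 0.1     // 100 ms, illustrative
let envConstantAtk = 1 - exp(-timePerSample * 2 / attackSmoothingTime)
let envConstantDec = 1 - exp(-timePerSample * 2 / decaySmoothingTime)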
Step 2
When you have the envelope, you can smooth it with an additional filter and then compare the two envelopes to find a sound that is louder than the base level (here's a more complete lab).
Note that detecting audio "events" can be quite tricky and hard to predict, so make sure you have plenty of debugging aids!

SKEmitterNode with AVAudioPlayer for music visuals

PLEASE SOMEONE HELP!
I want my SKEmitterNode's scale (meaning size) to get larger and smaller with the music I have built into the application using AVAudioPlayer. Right now this is pretty much all I have for the SKEmitterNode, and it looks great:
beatParticle?.position = CGPoint(x: self.size.width * 0.5, y: self.size.height * 0.5)
var beatParticleEffectNode = SKEffectNode()
beatParticleEffectNode.addChild(beatParticle!)
self.addChild(beatParticleEffectNode)
All the looks are done in the .sks file.
Here is where I call the "updateBeatParticle" function in a continual loop; this is where I will put my code for making the particle's scale (meaning size) get larger and smaller with the music.
var dpLink: CADisplayLink?
dpLink = CADisplayLink(target: self, selector: "updateBeatParticle")
dpLink?.addToRunLoop(NSRunLoop.currentRunLoop(), forMode: NSRunLoopCommonModes)

func updateBeatParticle() {
    // Put code here
}
Any idea how I can do this? I looked at some tutorials, such as this one: https://www.raywenderlich.com/36475/how-to-make-a-music-visualizer-in-ios
However, I can't quite get my head around it because they're using an emitterLayer and it's in Obj-C. I am also interested in any other ideas you wonderful people may have!
WARNING: The following code has not been tested. Please let me know if it works.
Firstly, it looks like you are using SpriteKit, so you could put the code that alters the emitter scale in the SKScene method update:, which is automatically called roughly as often as a CADisplayLink fires.
Essentially all you need to do is update the emitter scale in the update: method based on the volume of your AVAudioPlayer's channels. Note that the audio player may have multiple channels running, so you need to average the average power across them.
Firstly...
player.meteringEnabled = true
Set this after you initialise your audio player, so that it will monitor the levels of the channels.
Next, add something like this in your update method.
override func update(currentTime: CFTimeInterval) {
    var scale: CGFloat = 0.5

    if audioPlayer.playing { // Only do this if the audio is actually playing
        audioPlayer.updateMeters() // Tell the audio player to update and fetch the latest readings
        let channels = audioPlayer.numberOfChannels

        // Loop over each channel and add its average power
        var power: Float = 0
        for i in 0..<channels {
            power += audioPlayer.averagePowerForChannel(i)
        }
        power /= Float(channels) // This gives the average power across all the channels, in decibels

        // Convert power in decibels to a more appropriate percentage representation
        scale = CGFloat(getIntensityFromPower(power))
    }

    // Set the particle scale to match
    emitterNode.particleScale = scale
}
The method getIntensityFromPower converts the power in decibels to a more appropriate percentage representation. It can be declared like so...
// Will return a value between 0.0 ... 1.0, based on the decibels
func getIntensityFromPower(decibels: Float) -> Float {
    // The minimum possible decibel value returned from an AVAudioPlayer channel
    let minDecibels: Float = -160
    // The maximum possible decibel value returned from an AVAudioPlayer channel
    let maxDecibels: Float = 0

    // Clamp the decibels value
    if decibels < minDecibels {
        return 0
    }
    if decibels >= maxDecibels {
        return 1
    }

    // This value can be adjusted to affect the curve of the intensity
    let root: Float = 2

    let minAmp = powf(10, 0.05 * minDecibels)
    let inverseAmpRange: Float = 1.0 / (1.0 - minAmp)
    let amp: Float = powf(10, 0.05 * decibels)
    let adjAmp = (amp - minAmp) * inverseAmpRange

    return powf(adjAmp, 1.0 / root)
}
The algorithm for this conversion was taken from this StackOverflow response https://stackoverflow.com/a/16192481/3222419.
