AudioKit: Infinite playtime and frequency shift for flute?

I can't seem to get the AudioKit instruments to behave the way I'd like: I want to be able to change the frequency continuously and also have the instruments play for an infinite amount of time, just like the oscillators. However, I can't even get a simple playground like the following to output any sound:
//: ## Flute
//: Physical model of a Flute
import AudioKitPlaygrounds
import AudioKit

let playRate = 2.0
let flute = AKFlute()
let reverb = AKReverb(flute)

var triggered = false
let performance = AKPeriodicFunction(frequency: playRate) {
    if !triggered {
        flute.frequency = 240.0
        flute.amplitude = 0.6
        flute.play()
        triggered = true
    }
}

AudioKit.output = reverb
try AudioKit.start(withPeriodicFunctions: performance)
performance.start()

import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
The behavior I want is to be able to set the frequency at any time and have the note ring out forever. Is this possible?

Change flute.play() to flute.trigger()
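For reference, here is the playground with that single change applied. This is a minimal sketch assuming AudioKit 4.x; the else branch that nudges the frequency on each tick is only an illustration of updating the pitch while the note keeps sounding, not part of the original answer.

import AudioKitPlaygrounds
import AudioKit
import PlaygroundSupport

let flute = AKFlute()
let reverb = AKReverb(flute)

var triggered = false
let performance = AKPeriodicFunction(frequency: 2.0) {
    if !triggered {
        flute.frequency = 240.0
        flute.amplitude = 0.6
        flute.trigger()   // trigger() instead of play()
        triggered = true
    } else {
        // Illustration only: the frequency can be changed at any time
        // while the note continues to ring.
        flute.frequency = (flute.frequency == 240.0) ? 320.0 : 240.0
    }
}

AudioKit.output = reverb
try AudioKit.start(withPeriodicFunctions: performance)
performance.start()

PlaygroundPage.current.needsIndefiniteExecution = true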

Related

Can TFLite's CoreMLDelegate use GPU and CPU simultaneously in iOS?

I've been successfully using tflite's MetalDelegate in my app. When I switch to CoreMLDelegate, it runs my (float) tflite model (MobileNet) entirely on the CPU, showing no GPU use. I am running this on an iPhone 11 Pro Max, which is a compatible device. During initialization I noticed the following line:
“CoreML delegate: 29 nodes delegated out of 31 nodes, with 2 partitions”.
Any ideas why? How do I make CoreMLDelegate use both GPU and CPU on iOS? I downloaded the mobilenet_v1_1.0_224.tflite model file from here.
import AVFoundation
import UIKit
import SpriteKit
import Metal

var device: MTLDevice!
var commandQueue: MTLCommandQueue!
private var total_latency: Double = 0
private var total_count: Double = 0
private var sstart = TimeInterval(NSDate().timeIntervalSince1970)

class ViewController: UIViewController {
    ...
}

// MARK: CameraFeedManagerDelegate Methods
extension ViewController: CameraFeedManagerDelegate {
    func didOutput(pixelBuffer: CVPixelBuffer) {
        let currentTimeMs = Date().timeIntervalSince1970 * 1000
        guard (currentTimeMs - previousInferenceTimeMs) >= delayBetweenInferencesMs else { return }
        previousInferenceTimeMs = currentTimeMs

        // 1. First create the Metal device and command queue in viewDidLoad():
        device = MTLCreateSystemDefaultDevice()
        commandQueue = device.makeCommandQueue()

        var timestamp = NSDate().timeIntervalSince1970
        let start = TimeInterval(timestamp)

        // 2. Access the shared MTLCaptureManager and start capturing
        let capManager = MTLCaptureManager.shared()
        let myCaptureScope = capManager.makeCaptureScope(device: device)
        myCaptureScope.begin()
        let commandBuffer = commandQueue.makeCommandBuffer()!

        // Do Metal work
        // Pass the pixel buffer to TensorFlow Lite to perform inference.
        result = modelDataHandler?.runModel(onFrame: pixelBuffer)

        // 3. Encode your kernel
        commandBuffer.commit()
        myCaptureScope.end()

        timestamp = NSDate().timeIntervalSince1970
        let end = TimeInterval(timestamp)
        //var end = NSDate(timeIntervalSince1970: TimeInterval(myTimeInterval))
        total_latency += (end - start)
        total_count += 1
        let rfps = total_count / (end - sstart)
        let fps = total_count / (end - start)
        let stri = "Time: " + String(end - start) + " avg: " + String(total_latency / total_count) + " count: " + String(total_count) + " rfps: " + String(rfps) + " fps: " + String(fps)
        print(stri)

        // Display results by handing off to the InferenceViewController.
        DispatchQueue.main.async {
            guard let finalInferences = self.result?.inferences else {
                self.resultLabel.text = ""
                return
            }
            let resultStrings = finalInferences.map({ (inference) in
                return String(format: "%@ %.2f", inference.label, inference.confidence)
            })
            self.resultLabel.text = resultStrings.joined(separator: "\n")
        }
    }
}
2020-08-22 07:09:39.783215-0400 ImageClassification[3039:645963] coreml_version must be 2 or 3. Setting to 3.
2020-08-22 07:09:39.785103-0400 ImageClassification[3039:645963] Created TensorFlow Lite delegate for Metal.
2020-08-22 07:09:39.785505-0400 ImageClassification[3039:645963] Metal GPU Frame Capture Enabled
2020-08-22 07:09:39.786110-0400 ImageClassification[3039:645963] Metal API Validation Enabled
2020-08-22 07:09:39.927854-0400 ImageClassification[3039:645963] Initialized TensorFlow Lite runtime.
2020-08-22 07:09:39.928928-0400 ImageClassification[3039:645963] CoreML delegate: 29 nodes delegated out of 31 nodes, with 2 partitions
Thanks for trying out the Core ML delegate. Can you share the version of TFLite you used and the code you used to initialize the Core ML delegate? Also, can you confirm that you're running the float model, not the quantized one?
The latency can differ depending on what you measure, but when measuring solely the inference time, my iPhone 11 Pro shows 11 ms for CPU and 5.5 ms for the Core ML delegate.
Neural Engine utilization is not captured by the profiler, but if you're seeing that latency along with high CPU utilization, it might indicate that your model is running solely on the CPU. You can also try the Time Profiler to figure out which part is consuming the most resources.
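For context, this is roughly how the Core ML delegate is created with the TensorFlowLiteSwift API (the TensorFlowLiteSwift/CoreML pod); the model path, thread count, and fallback choice here are placeholder assumptions, not the asker's actual setup.

import TensorFlowLite

// Sketch: prefer the Core ML delegate and fall back to the Metal delegate.
// CoreMLDelegate() returns nil on unsupported devices (by default it only
// enables itself on devices with a Neural Engine).
func makeInterpreter(modelPath: String) throws -> Interpreter {
    var delegates: [Delegate] = []
    if let coreMLDelegate = CoreMLDelegate() {
        delegates.append(coreMLDelegate)
    } else {
        delegates.append(MetalDelegate())
    }
    var options = Interpreter.Options()
    options.threadCount = 2
    return try Interpreter(modelPath: modelPath, options: options, delegates: delegates)
}

Note that the Core ML delegate targets the Neural Engine rather than the GPU, which is consistent with the comment above about Neural Engine utilization not showing up in the GPU profiler.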

AudioKit - How to use AKAmplitudeTracker threshold callback?

AudioKit includes a great tool for tracking signal amplitude: AKAmplitudeTracker.
This tracker can be initialized with a thresholdCallback, and I assume the callback should fire when the threshold is reached.
I'm playing with the MicrophoneAnalysis example and I can't find a way to trigger my callback.
Here is my code:
var mic: AKMicrophone!
var trackerAmplitude: AKAmplitudeTracker!
var silence: AKBooster!
AKSettings.audioInputEnabled = true
mic = AKMicrophone()
trackerAmplitude = AKAmplitudeTracker(mic, halfPowerPoint: 10, threshold: 0.01, thresholdCallback: { (success) in
    print("thresholdCallback: \(success)")
})
trackerAmplitude.start()
silence = AKBooster(trackerAmplitude, gain: 0)
AudioKit.output = silence
I tried playing with the halfPowerPoint and threshold values, but even with very low values I can't get anything to print. :/
Yet when I print trackerAmplitude.amplitude, I get values higher than 0.01.
Is there something I'm missing?
The following code works. Tested with AudioKit 4.9, Xcode 11.2, macOS Playground.
This might be an issue in AudioKit, but the threshold must be set via the property to activate tracking, as shown below...
import AudioKitPlaygrounds
import AudioKit
let mic = AKMicrophone()
AKSettings.audioInputEnabled = true
let amplitudeTracker = AKAmplitudeTracker(mic, halfPowerPoint: 10, threshold: 1, thresholdCallback: { (success) in
    print("thresholdCallback: \(success)")
})
AudioKit.output = amplitudeTracker
try AudioKit.start()
amplitudeTracker.threshold = 0.01 // !! MUST BE SET VIA PROPERTY
amplitudeTracker.start()
mic?.start()
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true

AudioKit - AKOperationGenerator with AKParameters - CPU Issue

I need help with sending AKParameters to an AKOperationGenerator. My current solution uses a lot of CPU. Is there a better way to do it?
Here is my example code:
import AudioKit
import UIKit

class SynthVoice: AKNode {
    override init() {
        let synth = AKOperationGenerator { p in
            // (1) - 30% CPU
            let osc: AKOperation = AKOperation.squareWave(frequency: p[0], amplitude: p[1], pulseWidth: p[2])
            // (2) - 9% CPU
            // let osc: AKOperation = AKOperation.squareWave(frequency: 440, amplitude: 1, pulseWidth: 0.5)
            return osc
        }
        synth.parameters[0] = 880
        synth.parameters[1] = 1
        synth.parameters[2] = 0.5

        super.init()
        self.avAudioNode = synth.avAudioNode
        synth.start()
    }
}

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let mixer: AKMixer = AKMixer([SynthVoice(), SynthVoice(), SynthVoice(), SynthVoice(), SynthVoice(), SynthVoice()])
        AudioKit.output = mixer
        AudioKit.start()
    }
}
I need a 6-voice oscillator bank with an envelope filter for each voice. I didn't find any oscillator bank with a filter envelope in AudioKit, so I started writing my own via AKOperationGenerator... but the CPU usage is too high (about 100% in my project: six AKOperationGenerators, each with a PWM square oscillator and an envelope filter, plus a lot of AKParameters that can be changed via the UI).
Thanks for any response.
I'd definitely do this at the DSP kernel level. It's C/C++, but it's really not too bad. Use one of the AKOscillatorBank-type nodes as your model, but in addition to having an amplitude envelope, put in a filter envelope the same way. We're releasing an open-source synth that does this exact thing in a few months, if you can wait.
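Until that kernel-level synth exists, one lower-CPU starting point (not a per-voice filter envelope, which is exactly the part the answer says needs DSP-kernel work) could be the stock bank nodes. The following is only a sketch assuming AudioKit 4.x, using AKPWMOscillatorBank for the polyphonic PWM square voices and a single shared AKLowPassFilter; treat the node names and parameter values as assumptions to verify against your AudioKit version.

import AudioKit

// Sketch (AudioKit 4.x assumed): polyphonic PWM square bank with a built-in
// amplitude envelope, feeding one shared low-pass filter.
let bank = AKPWMOscillatorBank()
bank.pulseWidth = 0.5
bank.attackDuration = 0.01
bank.decayDuration = 0.1
bank.sustainLevel = 0.8
bank.releaseDuration = 0.2

let filter = AKLowPassFilter(bank, cutoffFrequency: 2_000, resonance: 0)

AudioKit.output = filter
try AudioKit.start()

// The bank manages its own voices; play and release notes by MIDI note number.
bank.play(noteNumber: 60, velocity: 100)
bank.play(noteNumber: 64, velocity: 100)
bank.stop(noteNumber: 60)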

AudioKit - How to use frequency filters with microphone?

I'm using the AudioKit library with Swift to develop a simple iOS application that should listen to frequencies in the range of roughly 3000 Hz to maybe 6000 Hz.
So I'd like to track only the microphone's input frequency within this range, and to achieve this I tried applying a filter effect to the microphone input to avoid picking up the frequencies of unwanted noise.
var mic: AKMicrophone
var silence: AKBooster
var filter: AKHighPassFilter
var tracker: AKFrequencyTracker

public init() {
    mic = AKMicrophone()
    filter = AKHighPassFilter(mic)
    filter.cutoffFrequency = 3000 // only pass frequencies above 3000 Hz (high-pass)
    filter.resonance = 0
    tracker = AKFrequencyTracker(filter)
    silence = AKBooster(tracker, gain: 0)
}

func start() {
    AKSettings.audioInputEnabled = true
    AudioKit.output = silence
    AudioKit.start()
}

func print() {
    print(tracker.frequency)
}
To sum up: I know the filter is changing something, but I can't get the tracked frequency to stay above 3000 Hz, because I'm also getting values below 3000 Hz (like 2000, 500, etc.) for the filtered frequency.
The AudioKit website includes some examples of how to use the filters, but I can't find any example of applying a filter to the microphone input to get a filtered frequency: http://audiokit.io/playgrounds/Filters/
Am I doing something wrong?
Is this really how the AudioKit filters are supposed to work, or have I misunderstood what filters do?
Is there another way to restrict tracking to a frequency range?

Play musical notes in Swift Playground

I am trying to play a short musical note sequence with a default sine wave as sound inside a Swift Playground. At a later point I'd like to replace the sound with a Soundfont but at the moment I'd be happy with just producing some sound.
I want this to be a MIDI-like sequence with direct control over the notes, not something purely audio based. AudioToolbox seems to provide what I'm looking for, but I have trouble fully understanding its usage. Here's what I'm currently trying:
import AudioToolbox

// Creating the sequence
var sequence: MusicSequence = nil
var musicSequence = NewMusicSequence(&sequence)

// Creating a track
var track: MusicTrack = nil
var musicTrack = MusicSequenceNewTrack(sequence, &track)

// Adding notes
var time = MusicTimeStamp(1.0)
for index: UInt8 in 60...72 {
    var note = MIDINoteMessage(channel: 0,
                               note: index,
                               velocity: 64,
                               releaseVelocity: 0,
                               duration: 1.0)
    musicTrack = MusicTrackNewMIDINoteEvent(track, time, &note)
    time += 1
}

// Creating a player
var musicPlayer: MusicPlayer = nil
var player = NewMusicPlayer(&musicPlayer)
player = MusicPlayerSetSequence(musicPlayer, sequence)
player = MusicPlayerStart(musicPlayer)
As you can imagine, there's no sound playing. I appreciate any ideas on how to have that sound sequence playing aloud.
You have to enable the asynchronous mode for the Playground.
Add this at the top (Xcode 7, Swift 2):
import XCPlayground
XCPlaygroundPage.currentPage.needsIndefiniteExecution = true
and your sequence will play.
The same for Xcode 8 (Swift 3):
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
Working MIDI example in a Swift Playground
import PlaygroundSupport
import AudioToolbox

var sequence: MusicSequence? = nil
var musicSequence = NewMusicSequence(&sequence)

var track: MusicTrack? = nil
var musicTrack = MusicSequenceNewTrack(sequence!, &track)

// Adding notes
var time = MusicTimeStamp(1.0)
for index: UInt8 in 60...72 { // C4 to C5
    var note = MIDINoteMessage(channel: 0,
                               note: index,
                               velocity: 64,
                               releaseVelocity: 0,
                               duration: 1.0)
    musicTrack = MusicTrackNewMIDINoteEvent(track!, time, &note)
    time += 1
}

// Creating a player
var musicPlayer: MusicPlayer? = nil
var player = NewMusicPlayer(&musicPlayer)
player = MusicPlayerSetSequence(musicPlayer!, sequence)
player = MusicPlayerStart(musicPlayer!)

PlaygroundPage.current.needsIndefiniteExecution = true
Great MIDI reference page with a nice chart
