I currently have an application that uses an AKKeyboard to create sounds with an oscillator bank. Whenever the keyboard is played, I also receive the MIDI data. What I would like to do is build a sequence with AKSequencer from the MIDI data I receive.
Any advice or pointers will be greatly appreciated, thank you.
Here is part of my code:
var bank = AKOscillatorBank()
var midi: AKMIDI!
let sequencer = AKSequencer()
let sequenceLength = AKDuration(beats: 8.0)

func configureBank() {
    AudioKit.output = bank
    do {
        try AudioKit.start()
    } catch {
        AKLog("AudioKit couldn't be started")
    }
    midi = AudioKit.midi
    midi.addListener(self)
    midi.openInput()
}
// AKKeyboard Protocol methods
func noteOn(note: MIDINoteNumber) {
    let event = AKMIDIEvent(noteOn: note, velocity: 80, channel: 5)
    midi.sendEvent(event)
    bank.play(noteNumber: note, velocity: 100)
}

func noteOff(note: MIDINoteNumber) {
    let event = AKMIDIEvent(noteOff: note, velocity: 0, channel: 5)
    midi.sendEvent(event)
    bank.stop(noteNumber: note)
}
// AKMIDIListener protocol methods
func receivedMIDINoteOff(noteNumber: MIDINoteNumber, velocity: MIDIVelocity, channel: MIDIChannel) {
    print("ReceivedMIDINoteOff: \(noteNumber), velocity: \(velocity), channel: \(channel)")
}
You don't actually need to build the sequence directly from the AKMIDIEvents. Just query the sequencer's currentPosition when AKKeyboardView's noteOn and noteOff methods are called, and programmatically add events to a sequencer track based on that.
The process is basically identical to this (minus the final step, of course): https://stackoverflow.com/a/50071028/2717159
Edit - To get the noteOn and noteOff times, and duration:
// Store notes and their start times in a dictionary.
// (currentPosition.beats is a Double, so use Double values here;
// CoreMIDI's MIDITimeStamp is a UInt64 and won't accept beats directly.)
var noteDict = [MIDINoteNumber: Double]()

// When you get a noteOn, record the time:
noteDict[currentMIDINote] = seq.currentPosition.beats

// When you get a noteOff:
let endTime = seq.currentPosition.beats
if let startTime = noteDict[currentMIDINote] {
    let durationInBeats = endTime - startTime
    // Use startTime, durationInBeats and currentMIDINote to add the event to a track.
    noteDict[currentMIDINote] = nil
}
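Putting that together, here's a minimal sketch of a noteOff handler that writes the finished note into a sequencer track (assuming track is an AKMusicTrack created with sequencer.newTrack(), and noteDict as above; the velocity of 80 just mirrors the question's code):

func noteOff(note: MIDINoteNumber) {
    let endTime = sequencer.currentPosition.beats
    if let startTime = noteDict[note] {
        // Write the completed note into the track at its original position.
        track.add(noteNumber: note,
                  velocity: 80,
                  position: AKDuration(beats: startTime),
                  duration: AKDuration(beats: endTime - startTime))
        noteDict[note] = nil
    }
    bank.stop(noteNumber: note)
}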
Related
I'm trying to control an oscillator from a sequencer in AudioKit v5, and I've hit a snag. I'm subclassing MIDIInstrument, but I'm not sure if this is right. Please see the code below.
I'm getting this error on startup:
AVAEInternal.h:76 required condition is false:
[AVAudioEngine.mm:413:AttachNode: (node != nil)]
There's a previous post about this with an older AK version, which was somewhat helpful, but none of the links to examples in it work:
How do I control an oscillator's frequency with a sequencer
Can you point me in the right direction? Many thanks!
EDIT: Slight progress? I misunderstood the AudioEngine function and had 2 instances, so I removed one, which cleared the error. And adding track!.setMIDIOutput(instrument.midiIn) has it logging the 4 notes now, but still no sound. MIDIInstrument seems to accept a MIDIClientRef, but I see no reference to that in the sequencer class...
import AudioKit
import CAudioKit

class Test2 {
    var instrument: OscMIDIInstrument
    var sequencer: AppleSequencer

    init() {
        instrument = OscMIDIInstrument()
        sequencer = AppleSequencer()
        sequencer.setGlobalMIDIOutput(instrument.midiIn)
        instrument.enableMIDI()
        let track = sequencer.newTrack()
        track!.setMIDIOutput(instrument.midiIn)
        for i in 0 ..< 4 {
            track!.add(noteNumber: 60, velocity: 64, position: Duration(seconds: Double(i)), duration: Duration(seconds: 0.5))
        }
    }

    func testButton() {
        if sequencer.isPlaying {
            sequencer.stop()
        } else {
            sequencer.rewind()
            sequencer.play()
        }
    }
}
class OscMIDIInstrument: MIDIInstrument {
    var akEngine: AudioEngine
    var osc: Oscillator

    init() {
        akEngine = AudioEngine()
        osc = Oscillator()
        super.init()
        akEngine.output = osc
        osc.amplitude = 0.1
        osc.frequency = 440.0
        do {
            try akEngine.start()
        } catch {
            print("Couldn't start AudioEngine.")
        }
    }

    override func receivedMIDINoteOn(noteNumber: MIDINoteNumber, velocity: MIDIVelocity, channel: MIDIChannel, portID: MIDIUniqueID? = nil, offset: MIDITimeStamp = 0) {
        osc.play()
    }

    override func receivedMIDINoteOff(noteNumber: MIDINoteNumber, velocity: MIDIVelocity, channel: MIDIChannel, portID: MIDIUniqueID? = nil, offset: MIDITimeStamp = 0) {
        osc.stop()
    }
}
Got it working, somewhat. I found this example, which pointed me to MIDICallbackInstrument:
https://github.com/AudioKit/Cookbook/blob/main/Cookbook/Cookbook/Recipes/CallbackInstrument.swift
The remaining issue is that you apparently can't handle sysex messages this way, I guess because the callback messages are limited to 3 bytes.
So I'm still looking for a better solution if anyone can help.
Thanks a lot!
class Test {
    let akEngine = AudioEngine()
    let sequencer = AppleSequencer()
    let osc = Oscillator()

    init() {
        let callbackInstrument = MIDICallbackInstrument { [self] status, note, _ in
            guard let midiStatus = MIDIStatusType.from(byte: status) else {
                return
            }
            if midiStatus == .noteOn {
                print("NoteOn \(note) at \(sequencer.currentPosition.seconds)")
                osc.play()
            } else if midiStatus == .noteOff {
                print("NoteOff \(note) at \(sequencer.currentPosition.seconds)")
                osc.stop()
            }
        }
        let track = sequencer.newTrack()
        for i in 0 ..< 4 {
            track!.add(noteNumber: 60, velocity: 64, position: Duration(seconds: Double(i)), duration: Duration(seconds: 0.25))
        }
        track?.setMIDIOutput(callbackInstrument.midiIn)
        akEngine.output = osc
        do {
            try akEngine.start()
        } catch {
            print("Couldn't start AudioEngine.")
        }
    }

    func play() {
        if sequencer.isPlaying {
            sequencer.stop()
        } else {
            sequencer.rewind()
            sequencer.play()
        }
    }
}
I'm using an AVAudioPlayerNode attached to an AVAudioEngine to play a sound.
To get the current time of the player, I'm doing this:
extension AVAudioPlayerNode {
    var currentTime: TimeInterval {
        if let nodeTime = self.lastRenderTime, let playerTime = self.playerTime(forNodeTime: nodeTime) {
            return Double(playerTime.sampleTime) / playerTime.sampleRate
        }
        return 0
    }
}
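As a hypothetical usage example (slider, player, and duration are placeholder properties of the surrounding view controller, not names from the original code), the property can drive a periodic UI update:

// Poll currentTime on a timer to keep a slider in sync with playback.
let displayTimer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
    guard let self = self else { return }
    self.slider.value = Float(self.player.currentTime / self.duration)
}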
I have a slider that indicates the current time of the audio. When the user changes the slider value, on the .ended event I have to seek the player to the time indicated by the slider.
To do so:
extension AVAudioPlayerNode {
    func seekTo(value: Float, audioFile: AVAudioFile, duration: Float) {
        if let nodetime = self.lastRenderTime {
            let playerTime: AVAudioTime = self.playerTime(forNodeTime: nodetime)!
            let sampleRate = self.outputFormat(forBus: 0).sampleRate
            let newsampletime = AVAudioFramePosition(Int(sampleRate * Double(value)))
            let length = duration - value
            let framestoplay = AVAudioFrameCount(Float(playerTime.sampleRate) * length)
            self.stop()
            if framestoplay > 1000 {
                self.scheduleSegment(audioFile, startingFrame: newsampletime, frameCount: framestoplay, at: nil, completionHandler: nil)
            }
        }
        self.play()
    }
}
However, my seekTo function is not working correctly (printing currentTime before and after the call always shows a negative value of roughly -0.02). What am I doing wrong, and is there a simpler way to change the player's currentTime?
I ran into the same issue. Apparently framestoplay was always 0, which happened because of the sample rate: the value of playerTime.sampleRate was always 0 in my case.
So,
let framestoplay = AVAudioFrameCount(Float(playerTime.sampleRate) * length)
must be replaced with
let framestoplay = AVAudioFrameCount(Float(sampleRate) * length)
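With that change folded in, here's a minimal sketch of the corrected seek (same structure as the original, just reading the rate from outputFormat(forBus:), which is valid even before the node has rendered anything):

extension AVAudioPlayerNode {
    func seekTo(value: Float, audioFile: AVAudioFile, duration: Float) {
        let sampleRate = outputFormat(forBus: 0).sampleRate
        let startFrame = AVAudioFramePosition(sampleRate * Double(value))
        let framesToPlay = AVAudioFrameCount(Float(sampleRate) * (duration - value))
        stop()
        if framesToPlay > 1000 {
            // Schedule the remainder of the file from the new position.
            scheduleSegment(audioFile, startingFrame: startFrame, frameCount: framesToPlay, at: nil, completionHandler: nil)
        }
        play()
    }
}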
I am trying to use AKSamplerMetronome as a master clock in my (sort of) multi-audio-file playback project. I want the AKPlayers to start in sync with the metronome's downbeat. Mixing AKPlayer and AKSamplerMetronome into AudioKit.output was successful, but I am struggling to tie AKPlayer.start to AKSamplerMetronome.beatTime (or something else I haven't figured out) so that playback starts on the metronome's downbeat in sync (and repeats every time the metronome hits the downbeat). Here's what I've written:
class ViewController: UIViewController {
    let metronome = AKSamplerMetronome()
    let player = AKPlayer(audioFile: try! AKAudioFile(readFileName: "loop.wav"))
    let mixer = AKMixer()

    func startAudioEngine() {
        do {
            try AudioKit.start()
        } catch {
            print(error)
            fatalError()
        }
    }

    func makeConnections() {
        player >>> mixer
        metronome >>> mixer
        AudioKit.output = mixer
    }

    func startMetronome() {
        metronome.tempo = 120.0
        metronome.beatVolume = 1.0
        metronome.play()
    }

    func preparePlayer() {
        player.isLooping = true
        player.buffering = .always
        player.prepare()
        // I want AKPlayer to repeat based on the metronome's downbeat.
    }

    func startPlayer() {
        let startTime = AVAudioTime.now() + 0.25
        player.start(at: startTime)
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        makeConnections()
        startAudioEngine()
        preparePlayer()
        startPlayer()
        startMetronome()
    }
}
My problem is that AKPlayer's start(at:) doesn't accept AKSamplerMetronome's properties, presumably because they aren't AVAudioTime values. I tried something like:
let startTime = metronome.beatTime + 0.25
player.start(at: startTime)
But this doesn't compile ("cannot convert value of type 'Double' to expected argument type 'AVAudioTime?'"). It would be extremely helpful if someone could help me explore Swift/AudioKit. <3
You are calling a playback function that expects an AVAudioTime with a Double parameter. That's incorrect. If you want to start the AKPlayer with a seconds parameter, use player.play(when: time).
In general, you're close. This is how you do it:
let startTime: Double = 1
let hostTime = mach_absolute_time()
let now = AVAudioTime(hostTime: hostTime)
let avTime = now.offset(seconds: startTime)
metronome.setBeatTime(0, at: avTime)
player.play(at: avTime)
Basically, you need to give each unit a common clock (mach_absolute_time()), then use AVAudioTime to start them at the exact same time. The metronome.setBeatTime call tells the metronome to reset its zero point to the passed-in avTime.
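If you also want the player to come in on the next downbeat rather than at a fixed offset, here's a rough, untested sketch (it assumes a four-beat bar; offset(seconds:) is the same AudioKit extension on AVAudioTime used above):

// Compute how far away the next downbeat is, in seconds.
let secondsPerBeat = 60.0 / metronome.tempo
let beatsPerBar = 4.0
let nextDownbeat = (metronome.beatTime / beatsPerBar).rounded(.up) * beatsPerBar
let secondsUntilDownbeat = (nextDownbeat - metronome.beatTime) * secondsPerBeat

// Convert that to an absolute AVAudioTime and start the player there.
let downbeatTime = AVAudioTime(hostTime: mach_absolute_time()).offset(seconds: secondsUntilDownbeat)
player.play(at: downbeatTime)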
I'd like to use the AudioKit framework to generate a short sequence of high and low sounds.
What I'm starting with is a message that could look like this: "1100011010"
Every character should be looped over: if its value is "1", AudioKit should play a (short) high-frequency sound, and otherwise a (short) lower-frequency sound.
Because a simple timer loop that triggers the .play() function of a 0.1 s sound (high/low) every 0.15 s doesn't seem to be very accurate, I decided to use the *AudioKit Sequencer*:
(o) audiokit:
enum Sequence: Int {
    case snareDrum
}

var snareDrum = AKSynthSnare()
var sequencer = AKSequencer()
var pumper: AKCompressor?
var mixer = AKMixer()

public init() {
    snareDrum >>> mixer
    pumper = AKCompressor(mixer)
    AudioKit.output = pumper
    AudioKit.start()
}
func setupTracks() {
    _ = sequencer.newTrack()
    sequencer.tracks[Sequence.snareDrum.rawValue].setMIDIOutput(snareDrum.midiIn)
    generateMessageSequence()
    sequencer.enableLooping()
    sequencer.setTempo(2000)
    sequencer.play()
}
(o) play:
var message = "1100011010"
var counter = 0.0 // must be a Double to accumulate beat positions
for i in message {
    counter += 0.15
    if i == "1" {
        // play high sound at specific position
    } else {
        // play low sound at specific position
    }
}
(o) play low sound at specific position:
sequencer.tracks[Sequence.snareDrum.rawValue].add(noteNumber: 20,
                                                  velocity: 10,
                                                  position: AKDuration(beats: counter),
                                                  duration: AKDuration(beats: 1))
My question: how can I play local sound files at specific positions (using position: AKDuration(beats: counter) as in the code above) instead of default instruments like AKSynthSnare?
You could create two tracks, each with its own AKMIDISampler: one plays a 'low' sample and the other a 'high' sample. Assign the high notes to the high track and the low notes to the low track.
let sequencer = AKSequencer()
let lowTrack = sequencer.newTrack()
let lowSampler = AKMIDISampler()
try! lowSampler.loadWav("myLowSoundFile")
lowTrack?.setMIDIOutput(lowSampler.midiIn)
let highTrack = sequencer.newTrack()
let highSampler = AKMIDISampler()
try! highSampler.loadWav("myHighSoundFile")
highTrack?.setMIDIOutput(highSampler.midiIn)
sequencer.setLength(AKDuration(beats: 4.0))
sequencer.enableLooping()
Then assign the high and low notes to their tracks:
let message = "1100011010"
let dur = 4.0 / Double(message.count)
var position: Double = 0
for i in message {
    if i == "1" {
        highTrack?.add(noteNumber: 60, velocity: 100, position: AKDuration(beats: position), duration: AKDuration(beats: dur * (2/3)))
    } else {
        lowTrack?.add(noteNumber: 60, velocity: 100, position: AKDuration(beats: position), duration: AKDuration(beats: dur * (2/3)))
    }
    // Advance after adding, so the first note lands on beat 0
    // and the last one stays inside the 4-beat loop.
    position += dur
}
(I haven't run the code, but something like this should work)
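Note that to actually hear anything, the samplers still have to reach the output and the sequencer has to be started; a minimal sketch using the same AudioKit 4 API as above:

// Route both samplers to the output, start the engine, then the sequencer.
AudioKit.output = AKMixer(lowSampler, highSampler)
do {
    try AudioKit.start()
} catch {
    AKLog("AudioKit couldn't be started")
}
sequencer.play()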
When I have a bunch (20-40) of samples playing and overlapping each other simultaneously, the output sometimes starts getting distorted, and then some waving, oscillating, and clicking begins to happen. A similar sound happens when the samples are playing and the app crashes - it sounds like an abrupt, crunchy halt.
Notice the waviness begins between 0:05 and 0:10; nasty clicks start around 0:15.
Listen Here
How can I make it smoother? I am spawning AKPlayer objects (AudioKit 4.1) that play 4-8 second .wav files. Those go into AKBoosters, which go into AKMixers, which go into the final AKMixer for output.
Edit:
Many PenAudioNodes get plugged into the mixer of the AudioReceiver singleton.
Here's my AudioReceiver singleton:
class AudioReceiver {
    static var sharedInstance = AudioReceiver()

    private var audioNodes = [UUID : AudioNode]()
    private let mixer = AKMixer()
    private let queue = DispatchQueue(label: "audio-queue")

    //MARK: - Setup & Teardown

    init() {
        AudioKit.output = mixer //peakLimiter
        AudioKit.start()
    }

    //MARK: - Public

    func audioNodeBegan(_ message: AudioNodeMessage) {
        queue.async {
            var audioNode: AudioNode?
            switch message.senderType {
            case .pen:
                audioNode = PenAudioNode()
            case .home:
                audioNode = LoopingAudioNode(with: AudioHelper.homeLoopFile())
            default:
                break
            }
            if let audioNode = audioNode {
                self.audioNodes[message.senderId] = audioNode
                self.mixer.connect(input: audioNode.output)
                audioNode.start(message)
            }
        }
    }

    func audioNodeMoved(_ message: AudioNodeMessage) {
        queue.async {
            if let audioNode = self.audioNodes[message.senderId] {
                audioNode.update(message)
            }
        }
    }

    func audioNodeEnded(_ message: AudioNodeMessage) {
        queue.async {
            if let audioNode = self.audioNodes[message.senderId] {
                audioNode.stop(message)
            }
            self.audioNodes[message.senderId] = nil
        }
    }
}
Here's my PenAudioNode:
class PenAudioNode {
    fileprivate var mixer: AKMixer?
    fileprivate var playersBoosters = [AKPlayer : AKBooster]()
    fileprivate var finalOutput: AKNode?
    fileprivate let file: AKAudioFile = AudioHelper.randomBellSampleFile()

    //MARK: - Setup & Teardown

    init() {
        mixer = AKMixer()
        finalOutput = mixer!
    }
}

extension PenAudioNode: AudioNode {
    var output: AKNode {
        return finalOutput!
    }

    func start(_ message: AudioNodeMessage) {
    }

    func update(_ message: AudioNodeMessage) {
        if let velocity = message.velocity {
            let newVolume = Swift.min((velocity / 50) + 0.1, 1)
            mixer!.volume = newVolume
        }
        if let isClimactic = message.isClimactic, isClimactic {
            let player = AKPlayer(audioFile: file)
            player.completionHandler = { [weak self] in
                self?.playerCompleted(player)
            }
            let booster = AKBooster(player)
            playersBoosters[player] = booster
            booster.rampTime = 1
            booster.gain = 0
            mixer!.connect(input: booster)
            player.play()
            booster.gain = 1
        }
    }

    func stop(_ message: AudioNodeMessage) {
        for (_, booster) in playersBoosters {
            booster.gain = 0
        }
        DispatchQueue.global().asyncAfter(deadline: DispatchTime.now() + 1) {
            self.mixer!.stop()
            self.output.disconnectOutput()
        }
    }

    private func playerCompleted(_ player: AKPlayer) {
        playersBoosters.removeValue(forKey: player)
    }
}
This sounds like you are not releasing objects, and are eventually overloading the audio engine with too many processing nodes connected in the graph. In particular, not releasing AKBoosters will cause an issue like this. I can't really tell what your code is doing, but if you are spawning objects without releasing them properly, it will lead to garbled audio.
You want to conserve objects as much as possible and make sure you are using the absolute minimum amount of AKNode based processing.
There are various ways to debug this, but you can start by printing out the current state of the AVAudioEngine:
print(AudioKit.engine.description)
That will show how many nodes you have connected in the graph at any given moment.
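For example, here's a minimal sketch of a more thorough completion handler for PenAudioNode (using disconnectOutput(), which the stop method above already calls) that detaches finished players and boosters instead of only dropping the dictionary entry:

private func playerCompleted(_ player: AKPlayer) {
    // Detach the finished player and its booster from the graph
    // so the engine doesn't keep rendering dead nodes.
    if let booster = playersBoosters[player] {
        booster.disconnectOutput()
    }
    player.disconnectOutput()
    playersBoosters.removeValue(forKey: player)
}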