I'd like to use the AudioKit framework to generate a small sound sequence of some high and low sounds.
What I'm starting with is a message that could look like this: "1100011010"
--> Every character should be looped through, and if its value is "1", AudioKit should play a (short) high-frequency sound; if not, it should play a (short) lower-frequency sound.
Because a simple timer loop that triggers the .play() function every 0.15s to run a 0.1s sound (high/low) doesn't seem to be very accurate, I decided to use the *AudioKit Sequencer*:
AudioKit setup:
enum Sequence: Int {
case snareDrum
}
var snareDrum = AKSynthSnare()
var sequencer = AKSequencer()
var pumper: AKCompressor?
var mixer = AKMixer()
public init() {
snareDrum >>> mixer
pumper = AKCompressor(mixer)
AudioKit.output = pumper
AudioKit.start()
}
func setupTracks() {
_ = sequencer.newTrack()
sequencer.tracks[Sequence.snareDrum.rawValue].setMIDIOutput(snareDrum.midiIn)
generateMessageSequence()
sequencer.enableLooping()
sequencer.setTempo(2000)
sequencer.play()
}
Play:
var message = "1100011010"
var counter = 0.0
for i in message {
    if (i == "1") {
        // play high sound at specific position
    }
    else {
        // play low sound at specific position
    }
    counter += 0.15
}
Play low sound at specific position:
sequencer.tracks[Sequence.snareDrum.rawValue].add(noteNumber: 20,
velocity: 10,
position: AKDuration(beats: counter),
duration: AKDuration(beats: 1))
My question: How is it possible to play local sound files at specific positions (using position: AKDuration(beats: counter) as in the code above) instead of using default instruments like AKSynthSnare()?
You could create two tracks, each with an AKMIDISampler. One plays a 'low' sample, and the other plays a 'high' sample. Assign the high notes to the high track, and low notes to the low track.
let sequencer = AKSequencer()
let lowTrack = sequencer.newTrack()
let lowSampler = AKMIDISampler()
try! lowSampler.loadWav("myLowSoundFile")
lowTrack?.setMIDIOutput(lowSampler.midiIn)
let highTrack = sequencer.newTrack()
let highSampler = AKMIDISampler()
try! highSampler.loadWav("myHighSoundFile")
highTrack?.setMIDIOutput(highSampler.midiIn)
sequencer.setLength(AKDuration(beats: 4.0))
sequencer.enableLooping()
Then assign the high and low notes to each track:
let message = "1100011010"
let dur = 4.0 / Double(message.count)
var position: Double = 0
for i in message {
    if (i == "1") {
        highTrack?.add(noteNumber: 60, velocity: 100, position: AKDuration(beats: position), duration: AKDuration(beats: dur * (2/3)))
    } else {
        lowTrack?.add(noteNumber: 60, velocity: 100, position: AKDuration(beats: position), duration: AKDuration(beats: dur * (2/3)))
    }
    position += dur
}
(I haven't run the code, but something like this should work)
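To actually hear anything, the two samplers still need to be routed to AudioKit's output and the engine started before calling play(). A minimal sketch of that wiring (my addition, not part of the original answer):

let mixer = AKMixer()
lowSampler >>> mixer
highSampler >>> mixer
AudioKit.output = mixer
try! AudioKit.start()

sequencer.setTempo(120) // choose a tempo that gives the note spacing you want
sequencer.play()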
I'm learning how to use AudioKit. I'm trying to play around with the sequencer and an oscillator. Everything is working fine, but I noticed that when I feed a higher frequency (MIDI note) to an oscillator that is in a sequencer track, it renders the same for that frequency and all higher ones. If I pass the same frequency to just the oscillator, you can hear the variance.
My initial setup:
let oscillator = AKOscillatorBank()
let oscillatorTrackIndex = 0
let sequencer = AKAppleSequencer()
let midi = AKMIDI()
var scale: [Int] = []
let sequenceLength = AKDuration(beats: 8.0)
func setupTracks() {
let midiNode = AKMIDINode(node: oscillator)
_ = sequencer.newTrack()
sequencer.setLength(trueLength)
AudioKit.output = midiNode
try! AudioKit.start()
midiNode.enableMIDI(midi.client, name: "midiNode midi in")
sequencer.setTempo(currentTempo)
sequencer.enableLooping()
sequencer.play()
}
My method:
func generateSequence(_ stepSize: Float = 1/4, clear: Bool = true) {
if clear { sequencer.tracks[oscillatorTrackIndex].clear() }
let numberOfSteps = Int(Float(sequenceLength.beats) / stepSize)
for i in 0 ..< numberOfSteps { //4
if i%4 == 0 {
sequencer.tracks[0].add(noteNumber: 140, velocity: 127, position: AKDuration(beats: Double(i)), duration: AKDuration(beats: 0.5))
} else {
sequencer.tracks[0].add(noteNumber: 200, velocity: 127, position: AKDuration(beats: Double(i)), duration: AKDuration(beats: 0.5))
}
}
}
As you can see, I'm using note numbers 140 and 200. When the sequencer plays these notes, they render the same audio. If I use .midiNoteToFrequency() and feed these through the oscillator by itself, then you can hear the difference.
Thanks!
In the MIDI spec, there are only 7 bits for the note number, allowing values between 0 and 127. Presumably (and this might be happening internally in Apple's MusicSequence, since I don't think that AKAppleSequencer or AKMusicTrack do this explicitly), values outside of this range are clamped into it to avoid unexpected crashes.
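If you want audibly different pitches, one option is to keep the note numbers inside that range, or derive them from a target frequency before adding them to the track. A minimal sketch (my own illustration, not from the answer above; the helper names are made up):

import Foundation
import AudioKit

// Clamp an arbitrary value into the valid 7-bit MIDI range 0...127.
func clampedMIDINote(_ note: Int) -> MIDINoteNumber {
    return MIDINoteNumber(min(max(note, 0), 127))
}

// Convert a frequency in Hz to the nearest MIDI note (A4 = 440 Hz = note 69).
func midiNote(forFrequency frequency: Double) -> MIDINoteNumber {
    return clampedMIDINote(Int((69 + 12 * log2(frequency / 440)).rounded()))
}

// e.g. midiNote(forFrequency: 880) gives 81, instead of an out-of-range value like 140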
I'm using an AVAudioPlayerNode attached to an AVAudioEngine to play a sound.
To get the current time of the player, I'm doing this:
extension AVAudioPlayerNode {
var currentTime: TimeInterval {
get {
if let nodeTime: AVAudioTime = self.lastRenderTime, let playerTime: AVAudioTime = self.playerTime(forNodeTime: nodeTime) {
return Double(playerTime.sampleTime) / playerTime.sampleRate
}
return 0
}
}
}
I have a slider that indicates the current time of the audio. When the user changes the slider value, on .ended event I have to change the current time of the player to that indicated in the slider.
To do so:
extension AVAudioPlayerNode {
func seekTo(value: Float, audioFile: AVAudioFile, duration: Float) {
if let nodetime = self.lastRenderTime{
let playerTime: AVAudioTime = self.playerTime(forNodeTime: nodetime)!
let sampleRate = self.outputFormat(forBus: 0).sampleRate
let newsampletime = AVAudioFramePosition(Int(sampleRate * Double(value)))
let length = duration - value
let framestoplay = AVAudioFrameCount(Float(playerTime.sampleRate) * length)
self.stop()
if framestoplay > 1000 {
self.scheduleSegment(audioFile, startingFrame: newsampletime, frameCount: framestoplay, at: nil,completionHandler: nil)
}
}
self.play()
}
}
However, my seekTo function is not working correctly (I'm printing currentTime before and after the function, and it always shows a negative value, around -0.02). What am I doing wrong, and is there a simpler way to change the currentTime of the player?
I ran into the same issue. Apparently framestoplay was always 0, which happened because of the sample rate: the value of playerTime.sampleRate was always 0 in my case.
So,
let framestoplay = AVAudioFrameCount(Float(playerTime.sampleRate) * length)
must be replaced with
let framestoplay = AVAudioFrameCount(Float(sampleRate) * length)
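Putting it together, the whole method with that replacement might look like this (a sketch based on the code above, not independently tested):

import AVFoundation

extension AVAudioPlayerNode {
    func seekTo(value: Float, audioFile: AVAudioFile, duration: Float) {
        // Use the output format's sample rate; playerTime.sampleRate can be 0
        // before any rendering has happened.
        let sampleRate = self.outputFormat(forBus: 0).sampleRate
        let startFrame = AVAudioFramePosition(sampleRate * Double(value))
        let remainingSeconds = duration - value
        let framesToPlay = AVAudioFrameCount(Float(sampleRate) * remainingSeconds)
        self.stop()
        if framesToPlay > 1000 {
            self.scheduleSegment(audioFile,
                                 startingFrame: startFrame,
                                 frameCount: framesToPlay,
                                 at: nil,
                                 completionHandler: nil)
        }
        self.play()
    }
}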
I currently have an application which uses an AKKeyboard to create sounds using an Oscillator. Whenever the keyboard is played I get the MIDI data also. What I would like to do is create an AKSequence from the MIDI data I receive.
Any advice or pointers will be greatly appreciated, thank you.
Here is a partial amount of my code:
var bank = AKOscillatorBank()
var midi: AKMIDI!
let sequencer = AKSequencer()
let sequenceLength = AKDuration(beats: 8.0)
func configureBank() {
AudioKit.output = bank
do {
try AudioKit.start()
} catch {
AKLog("AudioKit couldn't be started")
}
midi = AudioKit.midi
midi.addListener(self)
midi.openInput()
}
// AKKeyboard Protocol methods
func noteOn(note: MIDINoteNumber) {
let event = AKMIDIEvent(noteOn: note, velocity: 80, channel: 5)
midi.sendEvent(event)
bank.play(noteNumber: note, velocity: 100)
}
func noteOff(note: MIDINoteNumber) {
let event = AKMIDIEvent(noteOff: note, velocity: 0, channel: 5)
midi.sendEvent(event)
bank.stop(noteNumber: note)
}
// AKMIDIListener Protocol methods..
func receivedMIDINoteOff(noteNumber: MIDINoteNumber, velocity: MIDIVelocity, channel: MIDIChannel) {
print("ReceivedMIDINoteOff: \(noteNumber), velocity: \(velocity), channel: \(channel)")
}
You don't actually need to build the sequence directly from the AKMIDIEvents. Just query the sequencer's currentPosition when you call AKKeyboardView's noteOn and noteOff methods and programmatically add events to a sequencer track based on it.
The process is basically identical to this (minus the final step, of course): https://stackoverflow.com/a/50071028/2717159
Edit - To get the noteOn and noteOff times, and duration:
// store notes and times in a dictionary:
var noteDict = [MIDINoteNumber: MIDITimeStamp]()
// when you get a noteOn, note the time
noteDict[currentMIDINote] = seq.currentPosition.beats
// when you get a noteOff
let endTime = seq.currentPosition.beats
if let startTime = noteDict[currentMIDINote] {
let durationInBeats = endTime - startTime
// use the startTime, duration and currentMIDINote to add event to track
noteDict[currentMIDINote] = nil
}
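The "add event to track" step in the last comment could look roughly like this, continuing the snippet above (the track index and velocity are assumptions; seq is your AKSequencer and currentMIDINote the note from the keyboard callback):

// when you get a noteOff, turn the stored start time into a track event
if let startTime = noteDict[currentMIDINote] {
    let durationInBeats = seq.currentPosition.beats - startTime
    seq.tracks[0].add(noteNumber: currentMIDINote,
                      velocity: 127,
                      position: AKDuration(beats: startTime),
                      duration: AKDuration(beats: durationInBeats))
    noteDict[currentMIDINote] = nil
}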
I'm trying to make my iPhone play a tune without using prerecorded files. What are my options here? AVAudioEngine, AudioKit? I've looked at them, but the learning curve is relatively steep for something I'm hoping is easy. They also seem like tools for creating sound effects given a PCM buffer window.
I'd like to be able to do something like
pitchCreator.play(["C4", "E4", "G4"], durations: [1, 1, 1])
Preferably sounding like an instrument, or at least not like a pure sine wave.
EDIT: The below code has been replaced by AudioKit
To anyone wondering this; I did make it work (kind of) using code similar to the one below.
class PitchCreator {
var engine: AVAudioEngine
var player: AVAudioPlayerNode
var mixer: AVAudioMixerNode
var buffer: AVAudioPCMBuffer
init() {
engine = AVAudioEngine()
player = AVAudioPlayerNode()
mixer = engine.mainMixerNode;
buffer = AVAudioPCMBuffer(PCMFormat: player.outputFormatForBus(0), frameCapacity: 4096)
buffer.frameLength = 4096
engine.attachNode(player)
engine.connect(player, to: mixer, format: player.outputFormatForBus(0))
}
func play(frequency: Float) {
let signal = self.createSignal(frequency, amplitudes: [1.0, 0.5, 0.3, 0.1], bufferSize: Int(buffer.frameLength), sampleRate: Float(mixer.outputFormatForBus(0).sampleRate))
for i in 0 ..< signal.count {
buffer.floatChannelData.memory[i] = 0.5 * signal[i]
}
do {
try engine.start()
player.play()
player.scheduleBuffer(buffer, atTime: nil, options: .Loops, completionHandler: nil)
} catch {}
}
func stop() {
engine.stop()
player.stop()
}
func createSignal(frequency: Float, amplitudes: [Float], bufferSize: Int, sampleRate: Float) -> [Float] {
let π = Float(M_PI)
let T = sampleRate / frequency
var x = [Float](count: bufferSize, repeatedValue: 0.0)
for k in 0 ..< x.count {
for h in 0 ..< amplitudes.count {
x[k] += amplitudes[h] * sin(2.0 * π * Float(h + 1) * Float(k) / T)
}
}
return x
}
}
But it doesn't sound good enough, so I've gone with sampling the notes I need and just using AVAudioPlayer to play them instead.
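Since the edit above mentions AudioKit, here is a rough sketch (my own, not part of the original answer) of how the same idea could look with AKOscillatorBank. The note numbers, durations and DispatchQueue-based scheduling are illustrative assumptions; for tighter timing, a sequencer as in the answers above would be more robust:

import Foundation
import AudioKit

let bank = AKOscillatorBank()
AudioKit.output = bank
try! AudioKit.start()

let notes: [MIDINoteNumber] = [60, 64, 67] // C4, E4, G4
let noteDuration = 0.5                     // seconds per note

for (index, note) in notes.enumerated() {
    let startTime = Double(index) * noteDuration
    DispatchQueue.main.asyncAfter(deadline: .now() + startTime) {
        bank.play(noteNumber: note, velocity: 100)
    }
    DispatchQueue.main.asyncAfter(deadline: .now() + startTime + noteDuration) {
        bank.stop(noteNumber: note)
    }
}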
Background: I found one of Apple WWDC sessions called "AVAudioEngine in Practice" and am trying to make something similar to the last demo shown at 43:35 (https://youtu.be/FlMaxen2eyw?t=2614). I'm using SpriteKit instead of SceneKit but the principle is the same: I want to generate spheres, throw them around and when they collide the engine plays a sound, unique to each sphere.
Problems:
I want a unique AVAudioPlayerNode attached to each SpriteKit node so that I can play a different sound for each sphere. Right now, if I create two spheres and set a different pitch for each of their player nodes, only the most recently created AVAudioPlayerNode seems to be playing, even when the original sphere collides. During the demo, he mentions "I'm tying a player, a dedicated player to each ball". How would I go about doing that?
There are audio clicks/artefacts every time a new collision happens. I'm assuming this has to do with the AVAudioPlayerNodeBufferOptions and/or the fact that I'm trying to create, schedule and consume buffers very quickly each time contact occurs, which is not the most efficient method. What would be a good workaround for this?
Code: As mentioned in the video, "...for every ball that's born into this world, a new player node is also created". I have a separate class for the spheres, with a method that returns an SKSpriteNode and also creates an AVAudioPlayerNode every time it is called:
class Sphere {
var sphere: SKSpriteNode = SKSpriteNode(color: UIColor(), size: CGSize())
var sphereScale: CGFloat = CGFloat(0.01)
var spherePlayer = AVAudioPlayerNode()
let audio = Audio()
let sphereCollision: UInt32 = 0x1 << 0
func createSphere(position: CGPoint, pitch: Float) -> SKSpriteNode {
let texture = SKTexture(imageNamed: "Slice")
let collisionTexture = SKTexture(imageNamed: "Collision")
// Define the node
sphere = SKSpriteNode(texture: texture, size: texture.size())
sphere.position = position
sphere.name = "sphere"
sphere.physicsBody = SKPhysicsBody(texture: collisionTexture, size: sphere.size)
sphere.physicsBody?.dynamic = true
sphere.physicsBody?.mass = 0
sphere.physicsBody?.restitution = 0.5
sphere.physicsBody?.usesPreciseCollisionDetection = true
sphere.physicsBody?.categoryBitMask = sphereCollision
sphere.physicsBody?.contactTestBitMask = sphereCollision
sphere.zPosition = 1
// Create AudioPlayerNode
spherePlayer = audio.createPlayer(pitch)
return sphere
}
}
Here's my Audio class, with which I create AVAudioPCMBuffers and AVAudioPlayerNodes:
class Audio {
let engine: AVAudioEngine = AVAudioEngine()
func createBuffer(name: String, type: String) -> AVAudioPCMBuffer {
let audioFilePath = NSBundle.mainBundle().URLForResource(name as String, withExtension: type as String)!
let audioFile = try! AVAudioFile(forReading: audioFilePath)
let buffer = AVAudioPCMBuffer(PCMFormat: audioFile.processingFormat, frameCapacity: UInt32(audioFile.length))
try! audioFile.readIntoBuffer(buffer)
return buffer
}
func createPlayer(pitch: Float) -> AVAudioPlayerNode {
let player = AVAudioPlayerNode()
let buffer = self.createBuffer("PianoC1", type: "wav")
let pitcher = AVAudioUnitTimePitch()
let delay = AVAudioUnitDelay()
pitcher.pitch = pitch
delay.delayTime = 0.2
delay.feedback = 90
delay.wetDryMix = 0
engine.attachNode(pitcher)
engine.attachNode(player)
engine.attachNode(delay)
engine.connect(player, to: pitcher, format: buffer.format)
engine.connect(pitcher, to: delay, format: buffer.format)
engine.connect(delay, to: engine.mainMixerNode, format: buffer.format)
engine.prepare()
try! engine.start()
return player
}
}
In my GameScene class I then test for collision, schedule a buffer and play the AudioPlayerNode if contact has occurred
func didBeginContact(contact: SKPhysicsContact) {
let firstBody: SKPhysicsBody = contact.bodyA
if (firstBody.categoryBitMask & sphere.sphereCollision != 0) {
let buffer1 = audio.createBuffer("PianoC1", type: "wav")
sphere.spherePlayer.scheduleBuffer(buffer1, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Interrupts, completionHandler: nil)
sphere.spherePlayer.play()
}
}
I'm new to Swift and only have basic knowledge of programming so any suggestion/criticism is welcome.
I've been working with AVAudioEngine in SceneKit, trying to do something else, but this is what you are looking for:
https://developer.apple.com/library/mac/samplecode/AVAEGamingExample/Listings/AVAEGamingExample_AudioEngine_m.html
It explains the process of:
1-Instantiating your own AVAudioEngine sub-class
2-Methods to load PCMBuffers for each AVAudioPlayer
3-Changing your Environment node's parameters to accommodate the reverb for the large number of pinball objects
Edit: Converted, tested and added a few features:
1-You create a subclass of AVAudioEngine, name it AudioLayerEngine for example. This is to access the AVAudioUnit effects such as distortion, delay, pitch and many of the other effects available as AudioUnits.
2-Initialise it by setting up some configuration for the audio engine, such as the rendering algorithm, and exposing the AVAudioEnvironmentNode so you can play with the 3D positions of your SCNNode objects (or SKNode objects if you are in 2D but want 3D effects)
3-Create some helper methods to load presets for each AudioUnit effect you want
4-Create a helper method to create an audio player and then add it to whatever node you want, as many times as you want, since SCNNode accepts audio players through its audioPlayers property (an array of SCNAudioPlayer)
5-Start playing.
I've pasted the entire class for reference so that you can then structure it as you wish, but keep in mind that if you are coupling this with SceneKit or SpriteKit, you use this audioEngine to manage all your sounds instead of SceneKit's internal AVAudioEngine. This means that you instantiate it in your gameView during the awakeFromNib method.
import Foundation
import SceneKit
import AVFoundation
class AudioLayerEngine:AVAudioEngine{
var engine:AVAudioEngine!
var environment:AVAudioEnvironmentNode!
var outputBuffer:AVAudioPCMBuffer!
var voicePlayer:AVAudioPlayerNode!
var multiChannelEnabled:Bool!
//audio effects
let delay = AVAudioUnitDelay()
let distortion = AVAudioUnitDistortion()
let reverb = AVAudioUnitReverb()
override init(){
super.init()
engine = AVAudioEngine()
environment = AVAudioEnvironmentNode()
engine.attachNode(self.environment)
voicePlayer = AVAudioPlayerNode()
engine.attachNode(voicePlayer)
voicePlayer.volume = 1.0
outputBuffer = loadVoice()
wireEngine()
startEngine()
voicePlayer.scheduleBuffer(self.outputBuffer, completionHandler: nil)
voicePlayer.play()
}
func startEngine(){
do{
try engine.start()
}catch{
print("error loading engine")
}
}
func loadVoice()->AVAudioPCMBuffer{
let URL = NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource("art.scnassets/sounds/interface/test", ofType: "aiff")!)
do{
let soundFile = try AVAudioFile(forReading: URL, commonFormat: AVAudioCommonFormat.PCMFormatFloat32, interleaved: false)
outputBuffer = AVAudioPCMBuffer(PCMFormat: soundFile.processingFormat, frameCapacity: AVAudioFrameCount(soundFile.length))
do{
try soundFile.readIntoBuffer(outputBuffer)
}catch{
print("somethign went wrong with loading the buffer into the sound fiel")
}
print("returning buffer")
return outputBuffer
}catch{
}
return outputBuffer
}
func wireEngine(){
loadDistortionPreset(AVAudioUnitDistortionPreset.MultiCellphoneConcert)
engine.attachNode(distortion)
engine.attachNode(delay)
engine.connect(voicePlayer, to: distortion, format: self.outputBuffer.format)
engine.connect(distortion, to: delay, format: self.outputBuffer.format)
engine.connect(delay, to: environment, format: self.outputBuffer.format)
engine.connect(environment, to: engine.outputNode, format: constructOutputFormatForEnvironment())
}
func constructOutputFormatForEnvironment()->AVAudioFormat{
let outputChannelCount = self.engine.outputNode.outputFormatForBus(1).channelCount
let hardwareSampleRate = self.engine.outputNode.outputFormatForBus(1).sampleRate
let environmentOutputConnectionFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: outputChannelCount)
multiChannelEnabled = false
return environmentOutputConnectionFormat
}
func loadDistortionPreset(preset: AVAudioUnitDistortionPreset){
distortion.loadFactoryPreset(preset)
}
func createPlayer(node: SCNNode){
let player = AVAudioPlayerNode()
distortion.loadFactoryPreset(AVAudioUnitDistortionPreset.SpeechCosmicInterference)
engine.attachNode(player)
engine.attachNode(distortion)
engine.connect(player, to: distortion, format: outputBuffer.format)
engine.connect(distortion, to: environment, format: constructOutputFormatForEnvironment())
player.renderingAlgorithm = AVAudio3DMixingRenderingAlgorithm.HRTF
player.reverbBlend = 0.3
}
}
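For reference, instantiating the engine in the game view (as mentioned above) and handing each new node its own player could look roughly like this, following the Swift 2-era syntax of the class above (the class and node names are my own assumptions):

import UIKit
import SceneKit

class GameViewController: UIViewController {
    var audioEngine: AudioLayerEngine!

    override func awakeFromNib() {
        super.awakeFromNib()
        // One engine manages all game audio instead of SceneKit's internal AVAudioEngine.
        audioEngine = AudioLayerEngine()
    }

    func addSphere(position: SCNVector3) {
        let sphereNode = SCNNode(geometry: SCNSphere(radius: 0.5))
        sphereNode.position = position
        // Give the new node its own dedicated player (step 4 above).
        audioEngine.createPlayer(sphereNode)
    }
}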
}