I recently migrated from AKAppleSequencer to AKSequencer. My app uses several AKSampler instances to play recorded instruments provided as WAV files.
Everything works fine and the timing is much better now compared to AKAppleSequencer. But there is strange behavior when setting the tempo to one particular value: some notes are muted (or partially cancelled). Setting the tempo higher or lower than that value works fine; all notes in the pattern are audible. Changing the tempo does nothing beyond sequencer.tempo = value, and there is no post-processing except some mixers.
Here is some pseudocode:
let sequencer = AKSequencer()
let akfile = try AKAudioFile(readFileName: "sample-1.wav")
let descriptor = AKSampleDescriptor(
    noteNumber: 60,
    noteFrequency: Float(AKPolyphonicNode.tuningTable.frequency(forNoteNumber: 60)),
    minimumNoteNumber: -1,
    maximumNoteNumber: 127,
    minimumVelocity: 0,
    maximumVelocity: 127,
    isLooping: false,
    loopStartPoint: 0.0,
    loopEndPoint: 1.0,
    startPoint: 0.0,
    endPoint: 0.0)
let sampler = AKSampler()
sampler.loadAKAudioFile(from: descriptor, file: akfile)
sampler.buildSimpleKeyMap()
// load some more samples
// create tracks and notes
let track = sequencer.addTrack(for: sampler)
// wire everything together
// play
It seems that the issue is related to which samples are used, so just posting a few lines of code might not be sufficient to reproduce the behavior.
Using AKAppleSampler instead of AKSampler solved the problem for me, but any hints might be interesting for others who face this problem and need to use AKSampler.
I'm using AudioKit v4.9.3, and the issue occurs on hardware and in simulators.
Related
I am setting up an application that, on startup, generates some white noise using AudioKit.
I have set up the following code that gets called on start up of my application:
let engine = AudioEngine()
let noise = WhiteNoise()
let mixer = Mixer(noise)
mixer.volume = 1
engine.output = mixer
try! engine.start()
But when I start up the application, I do not hear any sound. I set up a simple example to generate a sine wave using AVFoundation, and I was able to hear sound from my simulator.
I found an old thread - AudioKit - no sound output but I checked the AudioKit repo and it looks like this feature was removed a couple months back since it was not being used.
Any help would be appreciated!
Try noise.start(). Generators don't default to being on.
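A minimal sketch of the fix applied to the asker's code (assuming the AudioKit 5 API used in the question; nothing else changes except the added noise.start() call):

```swift
import AudioKit

let engine = AudioEngine()
let noise = WhiteNoise()
let mixer = Mixer(noise)
mixer.volume = 1
engine.output = mixer

do {
    try engine.start()
    // Generators are created stopped; without this call the output stays silent.
    noise.start()
} catch {
    print("AudioKit failed to start: \(error)")
}
```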
I'm trying to perform frequency modulation on a signal coming from AKPlayer, which in turn plays an MP3 file.
I've tried to work with AKOperationEffect, but it doesn't work as expected:
let modulatedPlayer = AKOperationEffect(player) { player, _ in
    let oscillator = AKOperation.fmOscillator(baseFrequency: modulationFrequency,
                                              carrierMultiplier: player.toMono(),
                                              modulatingMultiplier: 100,
                                              modulationIndex: 0,
                                              amplitude: 1)
    return oscillator
}
Has anybody an idea how to get the mp3 modulated?
Unfortunately, the AudioKit API is not so well documented... there are tons of examples, but they all deal with synthetic sounds such as sine waves, square waves, etc.
I took the time to create a working practical example to help you, @Ulrich. You can drop it in and play if you have the playground environment available, or just use it as a reference to amend your code (trusting me that it works). It's self-explanatory, but you can read more about why this version works after the code.
Before: [audio sample]
After: [audio sample]
The following was tested and ran without problems with the latest Xcode and Swift at the time of writing (Xcode 11.4, Swift 5.2, and AudioKit 4.9.5):
import AudioKitPlaygrounds
import AudioKit

let audiofile = try! AKAudioFile(readFileName: "demo.mp3")
let player = AKPlayer(audioFile: audiofile)

let generator = AKOperationEffect(player) { player, _ in
    let oscillator = AKOperation.fmOscillator(baseFrequency: 400,
                                              carrierMultiplier: player.toMono(),
                                              modulatingMultiplier: 100,
                                              modulationIndex: 0,
                                              amplitude: 1)
    return oscillator
}

AudioKit.output = generator
try! AudioKit.start()
player.play()
generator.start()
Find the playground ready to use on the Downloads page (https://audiokit.io/downloads/).
As you can see, apart from declaring a path to the mp3 file when initializing a new AKAudioFile and passing it to an AKPlayer instance, there are four steps that need to occur in a certain order:
1) Assign an `AKNode` to the AudioKit output
2) Start the AudioKit engine
3) Start the `player` to generate output
4) Start the generator to modulate your sound
The best way to understand why is to forget about code for a bit and imagine patching things in the real world; then try to imagine the audio flow.
Hope this helps you and future readers!
Note that this is NOT a duplicate of this SO post, because that post only says WHAT method to use; there's no example of HOW to use it.
So, I have dug into AKOfflineRenderNode as much as I can and viewed all the examples I could find. However, my code never seems to work correctly on iOS 10.3.1 devices (and other iOS 10 versions); the result is always silent. I tried to follow examples provided in other SO posts, but with no success. I tried to follow the one in SongProcessor, but it uses an older version of Swift and I can't even compile it. Trying SongProcessor's way of using AKOfflineRenderNode didn't help either; it always turned out silent.
I created a demo project just to test this. Because I don't own the audio file I used for testing, I couldn't upload it to my GitHub. Please add an audio file named "Test" to the project before compiling for an iOS 10.3.1 simulator. (And if your file isn't an m4a, remember to change the file type in the code where I initialize AKPlayer.)
If you don't want to download and run the sample, the essential part is here:
@IBAction func export() {
    // url, player, offlineRenderer, and others are predefined and connected as
    // player >> aPitchShifter >> offlineRenderer
    // AudioKit.output is already offlineRenderer
    offlineRenderer.internalRenderEnabled = false
    try! AudioKit.start()

    // I also tried using AKAudioPlayer instead of AKPlayer.
    // Also tried getting time in these ways:
    //   AVAudioTime.secondsToAudioTime(hostTime: 0, time: 0)
    //   player.audioTime(at: 0)
    // And for hostTime I've tried 0 as well as mach_absolute_time().
    // None worked.
    let time = AVAudioTime(sampleTime: 0, atRate: offlineRenderer.avAudioNode.inputFormat(forBus: 0).sampleRate)
    player.play(at: time)
    try! offlineRenderer.renderToURL(url, duration: player.duration)
    player.stop()
    player.disconnectOutput()
    offlineRenderer.internalRenderEnabled = true
    try? AudioKit.stop()
}
I am just getting started with AudioKit, and I want to keep it very simple: I want to make a few UIButtons (C, D, E, F, ...) and have each play the corresponding piano sample. However, I don't understand how to correctly prepare the sample file(s).
I found this example:
let sampler = AKSampler()
sampler.loadWav("Sounds/fmpia1")
let ampedSampler = AKBooster(sampler, gain: 3.0)

var delay = AKDelay(ampedSampler)
delay.time = pulse * 1.5
delay.dryWetMix = 0.0
delay.feedback = 0.0

let cMajor = [72, 74, 76, 77, 79, 81, 83, 84]

var mix = AKMixer(delay)
var reverb = AKReverb(mix)
AudioKit.output = reverb
AudioKit.start()

for note in cMajor {
    sampler.playNote(note)
    sleep(1)
}
What I understand: the sampler is loaded with a file, and the numbers (72, 74, ...) are MIDI note numbers.
However: how does the sampler know what to play? Does the sample "fmpia1" contain all notes? Or is it just one sample that AKSampler pitches automatically? But then how does AKSampler know what note the sample is? Shouldn't AKSampler be told that the sample in the file is, let's say, an F#, so it can pitch accordingly?
I am very confused about this. I hope you can understand what my problem is.
Thanks in advance for any help!
AKSampler (and AKMIDISampler) use Apple's AVAudioUnitSampler internally. It is AVAudioUnitSampler that is doing the playback and pitching from your root note. If you look at the documentation for AVAudioUnitSampler's loadAudioFiles(at:) (https://developer.apple.com/documentation/avfoundation/avaudiounitsampler/1388631-loadaudiofiles), you will see that it creates a new zone for each audio file and uses the metadata in the audio file to try to map it correctly. It can also take a shortcut if the root note is in the file name (e.g. ViolinC4).
So, in direct response to your questions:
fmpia1 is a single audio file (a single pitch). It gets mapped internally to a root note (maybe C4 if not specified; this needs verification).
When you send in a MIDI event with a specific note number, the sampler will pitch your audio file to that note and play it back. (Here is a handy map of MIDI note numbers: https://medium.com/@gmcerveny/midi-note-number-chart-for-ios-music-apps-b3c01df3cb19)
Yes: if you know the root note (the pitch of the file), specifying it as described above will result in accurate playback.
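To make the pitching step concrete, the rate multiplier the sampler effectively applies follows the 12-tone equal-temperament semitone ratio. This is plain Swift (no AudioKit), just to illustrate the arithmetic; the function name is my own:

```swift
import Foundation

/// Rate multiplier needed to pitch a sample recorded at `rootNote`
/// so it sounds at `targetNote` (12-tone equal temperament:
/// each semitone multiplies the rate by 2^(1/12)).
func playbackRate(rootNote: Int, targetNote: Int) -> Double {
    pow(2.0, Double(targetNote - rootNote) / 12.0)
}

print(playbackRate(rootNote: 60, targetNote: 72)) // one octave up: 2.0
print(playbackRate(rootNote: 60, targetNote: 67)) // a fifth up: ~1.498
```

So a file whose root note is C4 (60) is played back at double speed to sound as C5 (72).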
I'm trying to build an instrument from several .wav samples using AudioKit.
I've tried so far:
Using AKSampler (with the underlying AVAudioUnitSampler) – it works fine, but I can't figure out how to control the ADSR envelope here; calling stop stops the note immediately.
Another way is to use an AKSamplePlayer for each sample, manually setting the rate so it plays the right note. I could (possibly?) then connect an AKAmplitudeEnvelope to each sample player. But if I want to play 5 notes of the same sample simultaneously, I would need 5 instances of AKSamplePlayer, which seems like a waste of resources.
I also tried to find a way to just push raw audio samples to the AudioKit output buffer, doing the mixing and sample interpolation myself (in C, probably?), but I didn't find how to do it :(
What is the right way to make a multi-sampled instrument using AudioKit? I feel like it must be a fairly simple task.
Thanks to mahal tertin, it's pretty easy to use AKAUPresetBuilder!
You can create an .aupreset file somewhere in the tmp directory and then load this instrument with AKSampler.
The only thing worth noting is that by default AKAUPresetBuilder will generate samples with the trigger mode set to trigger, which ignores note-off events, so you should set it explicitly.
For example:
let sampleC4 = AKAUPresetBuilder.generateDictionary(
    rootNote: 60,
    filename: pathToC4WavSample,
    startNote: 48,
    endNote: 65)
sampleC4["triggerMode"] = "hold"

let sampleC5 = AKAUPresetBuilder.generateDictionary(
    rootNote: 72,
    filename: pathToC5WavSample,
    startNote: 66,
    endNote: 83)
sampleC5["triggerMode"] = "hold"

AKAUPresetBuilder.createAUPreset(
    dict: [sampleC4, sampleC5],
    path: pathToAUPresetFilename,
    instrumentName: "My Instrument",
    attack: 0,
    release: 0.2)
and then create a sampler and start AudioKit:
sampler = AKSampler()
try sampler.loadInstrument(atPath: pathToAUPresetFilename)
AudioKit.output = sampler
AudioKit.start()
and then use this to start playing a note:
sampler.play(noteNumber: MIDINoteNumber(63), velocity: MIDIVelocity(120), channel: 0)
and this to stop it, respecting the release parameter:
sampler.stop(noteNumber: MIDINoteNumber(63), channel: 0)
Probably the best way would be to embed your wav files into an EXS or SoundFont format, making use of tools in that realm to handle the ADSR, for instance. Otherwise you'll pretty much have to have an instrument for each sample.
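If you go the SoundFont route, AudioKit's Apple-based sampler can load an .sf2 directly. A rough sketch under the AudioKit 4 API (the file name and preset/bank numbers are placeholders; bundle your own SoundFont):

```swift
import AudioKit

let sampler = AKMIDISampler()
do {
    // "MyInstrument" stands in for a SoundFont file (MyInstrument.sf2) in the bundle;
    // preset and bank select which instrument inside the .sf2 to use.
    try sampler.loadSoundFont("MyInstrument", preset: 0, bank: 0)
} catch {
    print("Could not load SoundFont: \(error)")
}

AudioKit.output = sampler
try? AudioKit.start()

// Envelope behavior (including release) comes from the SoundFont itself.
try? sampler.play(noteNumber: 60, velocity: 100, channel: 0)
```

The advantage over per-sample players is that the sampler handles polyphony and the envelope for you.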