Note that this is NOT a duplicate of this SO post, because that post only says WHAT method to use; it gives no example of HOW I should use it.
So, I have dug into AKOfflineRenderNode as much as I can and have viewed all the examples I could find. However, my code never works correctly on iOS 10.3.1 devices (and other iOS 10 versions): the result is always silent. I tried to follow the examples in other SO posts with no success. I also tried to follow the approach in SongProcessor, but it uses an older version of Swift and I can't even compile it. Adapting SongProcessor's way of using AKOfflineRenderNode didn't help either; the output was still silent.
I created a demo project just to test this. Because I don't own the audio file I used for testing, I couldn't upload it to my GitHub. Please add an audio file named "Test" to the project before building it for an iOS 10.3.1 simulator. (And if your file isn't an m4a, remember to change the file type in the code where I initialize AKPlayer.)
If you don't want to download and run the sample, the essential part is here:
@IBAction func export() {
    // url, player, offlineRenderer and others are predefined and connected as player >> aPitchShifter >> offlineRenderer
    // AudioKit.output is already offlineRenderer
    offlineRenderer.internalRenderEnabled = false
    try! AudioKit.start()
    // I also tried using AKAudioPlayer instead of AKPlayer
    // Also tried getting time in these ways:
    // AVAudioTime.secondsToAudioTime(hostTime: 0, time: 0)
    // player.audioTime(at: 0)
    // And for hostTime I've tried 0 as well as mach_absolute_time()
    // None worked
    let time = AVAudioTime(sampleTime: 0, atRate: offlineRenderer.avAudioNode.inputFormat(forBus: 0).sampleRate)
    player.play(at: time)
    try! offlineRenderer.renderToURL(url, duration: player.duration)
    player.stop()
    player.disconnectOutput()
    offlineRenderer.internalRenderEnabled = true
    try? AudioKit.stop()
}
Related
I'm working on an iOS/Flutter application, and am trying to work out if it's possible to play audio from the Music library on iOS with audio modifications (e.g. equalization settings) applied.
It seems like I'm looking for a solution that can work with MPMusicPlayerController, since that appears to be the way to play local audio from the user's iOS Music library. I can find examples of applying EQ to audio on iOS (e.g. using AVAudioUnitEQ and AVAudioEngine: SO link, tutorial), but I'm unable to find anything that helps me understand whether it's possible to bridge the gap between the two.
Flutter specific context:
There are Flutter plugins that provide some of the functionality I'm looking for, but they don't appear to work together. For example, the just_audio plugin has a robust set of features for modifying audio, but it does not work with the local Music application on iOS/MPMusicPlayerController. Other plugins that do work with MPMusicPlayerController, like playify, do not have the ability to modify/transform the audio.
Even though I'm working with Flutter, any general advice on the iOS side would be very helpful. I appreciate any insight someone with more knowledge may be able to share with me!
Updating with my own answer here for future people: It looks like my only path forward (for now) is leaning into AVAudioEngine directly. This is the rough POC that worked for me:
import AVFoundation
import MediaPlayer

var audioPlayer = AVAudioPlayerNode()
var audioEngine = AVAudioEngine()
var eq = AVAudioUnitEQ()

// Grab a song from the user's Music library.
let mediaItemCollection: [MPMediaItem] = MPMediaQuery.songs().items!
let song = mediaItemCollection[0]

do {
    // Bridge the MPMediaItem into the AVAudioEngine world via its asset URL.
    let file = try AVAudioFile(forReading: song.assetURL!)

    audioEngine.attach(audioPlayer)
    audioEngine.attach(eq)
    audioEngine.connect(audioPlayer, to: eq, format: nil)
    audioEngine.connect(eq, to: audioEngine.outputNode, format: file.processingFormat)

    audioPlayer.scheduleFile(file, at: nil)
    try audioEngine.start()
    audioPlayer.play()
} catch {
    // Handle file/engine errors here.
}
The trickiest part for me was working out how to bridge the "Music library/MPMediaItem" world to the "AVAudioEngine" world -- which turned out to be just AVAudioFile(forReading: song.assetURL!).
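One more note: the POC above attaches the AVAudioUnitEQ but never configures its bands. A minimal sketch of what that could look like is below; the band count, frequencies, and gains are just example values, and the assetURL guard is there because assetURL is nil for DRM-protected or cloud-only items.

import AVFoundation
import MediaPlayer

// Sketch only: a 2-band EQ with example settings (values are arbitrary).
let eq = AVAudioUnitEQ(numberOfBands: 2)

let bassBoost = eq.bands[0]
bassBoost.filterType = .lowShelf
bassBoost.frequency = 100      // Hz
bassBoost.gain = 6             // dB
bassBoost.bypass = false

let trebleCut = eq.bands[1]
trebleCut.filterType = .highShelf
trebleCut.frequency = 8_000    // Hz
trebleCut.gain = -4            // dB
trebleCut.bypass = false

// MPMediaItem.assetURL is nil for DRM-protected / cloud-only items,
// so guard it instead of force-unwrapping.
if let song = MPMediaQuery.songs().items?.first, let url = song.assetURL {
    // ...build the engine graph as in the POC above, using AVAudioFile(forReading: url)
}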
I have trouble understanding AVAudioEngine's behaviour when switching audio input sources.
Expected Behaviour
When switching input sources, AVAudioEngine's inputNode should adopt the new input source seamlessly.
Actual Behaviour
When switching from AirPods to the iPhone speaker, AVAudioEngine stops working. No audio is routed through anymore. Querying engine.isRunning still returns true.
When subsequently switching back to AirPods, it still isn't working, but now engine.isRunning returns false.
Stopping and starting the engine on a route change does not help. Neither does calling reset(). Disconnecting and reconnecting the input node does not help, either. The only thing that reliably helps is discarding the whole engine and creating a new one.
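(For reference, a minimal sketch of what wiring that last workaround to the route-change notification might look like is below; rebuildEngine() is a hypothetical placeholder for whatever recreates the graph.)

import AVFoundation

// Hypothetical helper: tears down the old AVAudioEngine and builds a fresh one
// (attach/connect nodes, prepare, start). Stands in for whatever sets up your graph.
func rebuildEngine() { /* ... */ }

// Sketch: rebuild the engine whenever the audio route changes.
let observer = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { notification in
    guard
        let info = notification.userInfo,
        let rawReason = info[AVAudioSessionRouteChangeReasonKey] as? UInt,
        let reason = AVAudioSession.RouteChangeReason(rawValue: rawReason)
    else { return }

    switch reason {
    case .oldDeviceUnavailable, .newDeviceAvailable, .override:
        rebuildEngine()   // discarding and recreating is the only thing that reliably helps
    default:
        break
    }
}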
OS
This is on iOS 14, beta 5. I can't test this on previous versions I'm afraid; I only have one device around.
Code to Reproduce
Here is a minimum code example. Create a simple app project in Xcode (doesn't matter whether you choose SwiftUI or Storyboard), and give it permissions to access the microphone in Info.plist. Create the following file Conductor.swift:
import AVFoundation

class Conductor {

    static let shared: Conductor = Conductor()

    private let _engine = AVAudioEngine()

    init() {
        // Session
        let session = AVAudioSession.sharedInstance()
        try? session.setActive(false)
        try! session.setCategory(.playAndRecord, options: [.defaultToSpeaker,
                                                           .allowBluetooth,
                                                           .allowAirPlay])
        try! session.setActive(true)

        // Route the input straight to the output.
        _engine.connect(_engine.inputNode, to: _engine.mainMixerNode, format: nil)
        _engine.prepare()
    }

    func start() {
        try! _engine.start()
    }
}
And in AppDelegate, call:
Conductor.shared.start()
This example will route the input straight to the output. If you don't have headphones, it will trigger a feedback loop.
Question
What am I missing here? Is this expected behaviour? If so, it does not seem to be documented anywhere.
I'm trying to perform frequency modulation on a signal coming from an AKPlayer, which in turn plays an mp3 file.
I've tried to work with AKOperationEffect, but it doesn't work as expected:
let modulatedPlayer = AKOperationEffect(player) { player, _ in
    let oscillator = AKOperation.fmOscillator(baseFrequency: modulationFrequency,
                                              carrierMultiplier: player.toMono(),
                                              modulatingMultiplier: 100,
                                              modulationIndex: 0,
                                              amplitude: 1)
    return oscillator
}
Does anybody have an idea how to get the mp3 modulated?
Unfortunately, the AudioKit API is not so well documented... there are tons of examples, but they all deal with synthetic sounds such as sine waves, square waves, etc.
I took the time to create a working, practical example to help you @Ulrich. You can drop it in and play if you have the playground environment available, or just use it as a reference to amend your code, trusting me that it works. It's self-explanatory, but you can read more about why my version works after the code. TL;DR:
The following was tested and ran without problems in the latest Xcode and Swift at the time of writing (Xcode 11.4, Swift 5.2 and AudioKit 4.9.5):
import AudioKitPlaygrounds
import AudioKit

let audiofile = try! AKAudioFile(readFileName: "demo.mp3")
let player = AKPlayer(audioFile: audiofile)

let generator = AKOperationEffect(player) { player, _ in
    let oscillator = AKOperation.fmOscillator(baseFrequency: 400,
                                              carrierMultiplier: player.toMono(),
                                              modulatingMultiplier: 100,
                                              modulationIndex: 0,
                                              amplitude: 1)
    return oscillator
}

AudioKit.output = generator
try! AudioKit.start()

player.play()
generator.start()
Find the playground ready to use on the Downloads page (https://audiokit.io/downloads/).
As you can see, apart from declaring a path to the mp3 file when initializing a new AKAudioFile and passing it to an AKPlayer instance, there are four steps that need to occur in a certain order:
1) Assign an `AKNode` to the AudioKit output
2) Start the AudioKit engine
3) Start the `player` to generate output
4) Start the generator to modulate your sound
The best way to understand why is to forget about code for a bit and imagine patching things together in the real world, and then try to imagine the audio flow.
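A side note for the original question, which used a modulationFrequency variable: if you want the modulation frequency to be adjustable at runtime, the closure's second argument can be used instead of being discarded. This is a rough sketch based on the playground above; the parameter index and the 400 value are just examples:

// Sketch: expose baseFrequency as a runtime-adjustable parameter.
let generator = AKOperationEffect(player) { player, parameters in
    return AKOperation.fmOscillator(baseFrequency: parameters[0],
                                    carrierMultiplier: player.toMono(),
                                    modulatingMultiplier: 100,
                                    modulationIndex: 0,
                                    amplitude: 1)
}
generator.parameters = [400]   // update later, e.g. generator.parameters = [modulationFrequency]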
Hope this helps you and future readers!
I'm trying to work with AKMIDISampler on macOS. I'm unable to load sample data from a file. The following code will illustrate the problem when put into the Development Playground in the AudioKit for macOS project:
import AudioKit
let sampler1 = AKMIDISampler()
sampler1.loadPath("/Users/shane/Documents/Cel1.wav")
AudioKit.output = sampler1
AudioKit.start()
sampler1.play(noteNumber: 64, velocity: 100, channel: 0)
sleep(5)
sampler1.stop(noteNumber: 64, channel: 0)
The error happens right at the loadPath call:
AKSampler.swift:loadPath:114:Error loading audio file at /Users/shane/Documents/samples/Cel1.wav
and all I hear is the default sine tone. I've checked the obvious things, e.g. the file is quite definitely present and permissions are OK (actually rwx for everybody, just in case). Earlier experiments trying to load an EXS file indicated permission errors (code -54).
Can anyone verify that AKSampler and/or AKMIDISampler actually work on macOS?
Update March 20, 2018: The AudioKit team has since made some additions to the AKSampler/AKMIDISampler API to allow loading sample files from arbitrary file paths.
I have been invited to join the AudioKit core team, and have written a new sampler engine from scratch. In the next AudioKit release (expected within a day or two), the name "AKSampler" will refer to this newer code, but users should be aware that it is not a direct replacement for the older AKSampler, which will be renamed "AKAppleSampler" to reflect the fact that it is a wrapper for Apple's built-in "AUSampler" audio unit. The AKMIDISampler class (the one most people actually use) will remain unchanged as a wrapper for AKAppleSampler.
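I won't try to quote the exact new method names from memory, but as a rough sketch (assuming an AudioKit 4.x where AKMIDISampler still exposes loadAudioFile(_:)), loading a sample from an arbitrary path looks something like this:

import Foundation
import AudioKit

// Rough sketch (AudioKit 4.x): wrap the file in an AKAudioFile so it can be
// loaded from any file path rather than from the bundle.
let sampler = AKMIDISampler()
do {
    let sampleURL = URL(fileURLWithPath: "/Users/shane/Documents/Cel1.wav")
    let sampleFile = try AKAudioFile(forReading: sampleURL)
    try sampler.loadAudioFile(sampleFile)
} catch {
    AKLog("Could not load sample: \(error)")
}
AudioKit.output = sampler
try! AudioKit.start()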
In the AudioKit source code, loadPath(_:) calls loadInstrument(_:type:), which looks in the bundle for your file. See a copy of the sources here:
@objc open func loadPath(_ filePath: String) {
    do {
        try samplerUnit.loadInstrument(at: URL(fileURLWithPath: filePath))
    } catch {
        AKLog("Error loading audio file at \(filePath)")
    }
}

internal func loadInstrument(_ file: String, type: String) throws {
    //AKLog("filename is \(file)")
    guard let url = Bundle.main.url(forResource: file, withExtension: type) else {
        fatalError("file not found.")
    }
    do {
        try samplerUnit.loadInstrument(at: url)
    } catch let error as NSError {
        AKLog("Error loading instrument resource \(file)")
        throw error
    }
}
So you need to put your audio file in the app's or playground's bundle for this call to work.
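For example, a small sketch (assuming Cel1.wav has been added to the playground's Resources folder, so it ends up in Bundle.main):

import Foundation
import AudioKit

// Sketch: resolve the full path through the bundle before calling loadPath.
let sampler1 = AKMIDISampler()
if let path = Bundle.main.path(forResource: "Cel1", ofType: "wav") {
    sampler1.loadPath(path)
} else {
    AKLog("Cel1.wav is not in the bundle")
}
AudioKit.output = sampler1
AudioKit.start()   // newer AudioKit versions use try AudioKit.start()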
Original problem…
I'm working on a cross platform iOS/Mac app using AVFoundation and AVAudioEngine. The problem that I'm running into is that AVAudioUnitReverb seems to be ignoring the loadFactoryPreset() method on the Mac, but works fine in iOS.
I've isolated my problem to the following minimal amount of sample code. Pasting this into an iOS Playground results in a definitely echo-y .plate reverb. In a Mac playground, I get a reverb effect, but not the one I intended. It seems to be stuck with the default .mediumHall preset.
If you'd like to download the iOS and Mac playgrounds with the included audio file, I've uploaded them here: http://server.briantoth.com/ReverbPlayground.zip
I've tried searching for similar problems with no luck. I've tried it under 10.11, 10.12, and the latest 10.13 beta with the same results. I have to be missing something. Any suggestions would be appreciated. Thanks!
import AVFoundation
let sourceFile: AVAudioFile
let format: AVAudioFormat
let sourceFileURL = Bundle.main.url(forResource: "mixLoop", withExtension: "caf")!
sourceFile = try! AVAudioFile(forReading: sourceFileURL)
format = sourceFile.processingFormat
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let reverb = AVAudioUnitReverb()
engine.attach(player)
engine.attach(reverb)
// set desired reverb parameters
reverb.loadFactoryPreset(.plate)
reverb.wetDryMix = 100
// make connections
engine.connect(player, to: reverb, format: format)
engine.connect(reverb, to: engine.mainMixerNode, format: format)
// schedule source file
player.scheduleFile(sourceFile, at: nil)
try! engine.start()
player.play()
sleep(20)
Update with workaround…
Further research suggests that this is a bug in the AVAudioUnitReverb class on the Mac and I'll report it to Apple as such. If I get the parameters for the underlying audio unit with a .mediumHall preset applied on iOS, I get this:
kReverb2Param_DryWetMix: 100.0
kReverb2Param_Gain: 1.0
kReverb2Param_MinDelayTime: 0.01
kReverb2Param_MaxDelayTime: 0.07
kReverb2Param_DecayTimeAt0Hz: 2.1
kReverb2Param_DecayTimeAtNyquist: 1.1
kReverb2Param_RandomizeReflections: 1.0
On the Mac, I get this for every single preset value:
kReverb2Param_DryWetMix: 100.0
kReverb2Param_Gain: 35.0
kReverb2Param_MinDelayTime: 0.014745
kReverb2Param_MaxDelayTime: 0.0614
kReverb2Param_DecayTimeAt0Hz: 0.014925
kReverb2Param_DecayTimeAtNyquist: 0.001
kReverb2Param_RandomizeReflections: 0.655
On iOS, the different presets change the parameters as expected. On the Mac, there is no effect. The values don't even match the documented default of .mediumHall. I wonder if it has anything to do with the Mac reverb being a kAudioUnitSubType_MatrixReverb vs the iOS kAudioUnitSubType_Reverb2.
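One quick way to check that is to look at the node's component description; a small sketch (just printing whether the subtype matches on each platform):

import AVFoundation
import AudioToolbox

// Quick check: which reverb unit does the platform actually give us?
let checkReverb = AVAudioUnitReverb()
let subType = checkReverb.audioComponentDescription.componentSubType
#if os(macOS)
print("Matrix reverb?", subType == kAudioUnitSubType_MatrixReverb)
#else
print("Reverb2?", subType == kAudioUnitSubType_Reverb2)
#endif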
If I manually configure the underlying audio unit on the Mac myself, then the reverb works as expected. For the time being, I was able to write my own loadFactoryPreset() to bypass the problem by setting the parameters to the iOS-equivalent values for each preset.
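For future readers, the core of that workaround looks roughly like this for the .mediumHall case, using the iOS values listed above. This is a sketch rather than the full per-preset table, and it assumes the Mac unit accepts the kReverb2Param_* IDs (which is what the parameter dump above suggests):

import AVFoundation
import AudioToolbox

// Sketch: push the iOS .mediumHall values onto the underlying audio unit by hand.
// Assumes the kReverb2Param_* IDs are addressable on the Mac unit, as the dump above suggests.
func loadMediumHallManually(into reverb: AVAudioUnitReverb) {
    let values: [(AudioUnitParameterID, AudioUnitParameterValue)] = [
        (kReverb2Param_DryWetMix,           100.0),
        (kReverb2Param_Gain,                  1.0),
        (kReverb2Param_MinDelayTime,          0.01),
        (kReverb2Param_MaxDelayTime,          0.07),
        (kReverb2Param_DecayTimeAt0Hz,        2.1),
        (kReverb2Param_DecayTimeAtNyquist,    1.1),
        (kReverb2Param_RandomizeReflections,  1.0)
    ]
    for (id, value) in values {
        let status = AudioUnitSetParameter(reverb.audioUnit, id, kAudioUnitScope_Global, 0, value, 0)
        if status != noErr {
            print("Failed to set parameter \(id): \(status)")
        }
    }
}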