I'm trying to normalize an audio file after recording, to make it louder or quieter, but I'm getting the error WARNING AKAudioFile: cannot normalize a silent file
I checked the recorded audioFile.maxLevel and it was 1.17549e-38, the minimum positive float.
I'm using the official Recorder example, and to normalize after recording I added this code:
let norm = try player.audioFile.normalized(newMaxLevel: -4.0)
What am I doing wrong? Why is maxLevel invalid? The recording is loud enough.
Rather than use the internal audio file of the player, make a new instance like so:
if let file = try? AKAudioFile(forReading: url) {
    if let normalizedFile = try? file.normalized(newMaxLevel: -4) {
        Swift.print("Normalized file success: \(normalizedFile.maxLevel)")
    }
}
I can add a normalize func to the AKAudioPlayer so that it's available for playback. Essentially, the player just uses the AKAudioFile for initialization, and all subsequent operations happen in a buffer.
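For example, something along these lines should work as a rough sketch (assuming AudioKit 4 and that url points at the recorded file; the function name is just for illustration):

// Sketch: normalize the recorded file, then hand the result to a fresh player.
// Assumes AudioKit 4; `url` is the URL of the recorded file.
func makeNormalizedPlayer(from url: URL) throws -> AKAudioPlayer {
    let file = try AKAudioFile(forReading: url)
    // newMaxLevel is in dB; -4 leaves a little headroom below full scale.
    let normalizedFile = try file.normalized(newMaxLevel: -4)
    // The player copies the file into its internal buffer at init time,
    // so normalization has to happen before the player is created.
    return try AKAudioPlayer(file: normalizedFile)
}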
I'm trying to write out an audio file after doing some processing, and am getting an error. I've reduced the error to this simple standalone case:
import Foundation
import AVFoundation

do {
    let inputFileURL = URL(fileURLWithPath: "/Users/andrewmadsen/Desktop/test.m4a")
    let file = try AVAudioFile(forReading: inputFileURL, commonFormat: .pcmFormatFloat32, interleaved: true)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: AVAudioFrameCount(file.length)) else {
        throw NSError()
    }
    buffer.frameLength = buffer.frameCapacity
    try file.read(into: buffer)

    let tempURL =
        URL(fileURLWithPath: NSTemporaryDirectory())
            .appendingPathComponent("com.openreelsoftware.AudioWriteTest")
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension("caf")
    let fm = FileManager.default
    let dirURL = tempURL.deletingLastPathComponent()
    if !fm.fileExists(atPath: dirURL.path, isDirectory: nil) {
        try fm.createDirectory(at: dirURL, withIntermediateDirectories: true, attributes: nil)
    }

    var settings = buffer.format.settings
    settings[AVAudioFileTypeKey] = kAudioFileCAFType
    let tempFile = try AVAudioFile(forWriting: tempURL, settings: settings)
    try tempFile.write(from: buffer)
} catch {
    print(error)
}
When this code runs, the tempFile.write(from: buffer) call throws an error:
Error Domain=com.apple.coreaudio.avfaudio Code=-50 "(null)" UserInfo={failed call=ExtAudioFileWrite(_imp->_extAudioFile, buffer.frameLength, buffer.audioBufferList)}
test.m4a is a stereo, 44.1 kHz AAC file (from the iTunes Store), though the failure also occurs with stereo files in other formats (AIFF and WAV).
The code does not fail, and instead correctly saves the original audio out to a new file if I change the interleaved parameter to false when creating the original input AVAudioFile (file). However, in this case, the following message is logged to the console:
Audio files cannot be non-interleaved. Ignoring setting AVLinearPCMIsNonInterleaved YES.
It seems strange and confusing that writing a non-interleaved buffer works fine, despite a message saying that files must be interleaved, while writing an interleaved buffer fails. This is the opposite of what I expected.
I'm aware that reading a file with the plain AVAudioFile(forReading:) initializer, without specifying a format, defaults to non-interleaved (i.e. the "standard" AVAudioFormat at the file's actual sample rate and channel count). Does this mean I really do have to convert interleaved audio to non-interleaved before trying to write it?
Notably, in the actual program where this problem came up, I'm doing something much more complex than simply reading a file in and writing it back out, and I do need to handle interleaved audio. I have confirmed, however, that the original, more complex code also fails only for interleaved stereo audio.
Is there something tricky I need to do to get AVAudioFile to write out a buffer containing interleaved PCM audio?
The mixup here is that there are TWO formats in play: the format of the output file, and the format of the buffers you will write (the processing format). The initializer AVAudioFile(forWriting:settings:) does not let you choose the processing format; it opens the file for writing using the standard format (deinterleaved floating point), hence your error.
You need to use the other initializer, AVAudioFile(forWriting:settings:commonFormat:interleaved:), whose last two arguments specify the processing format (the argument names could have been clearer about that, tbh).
var settings: [String: Any] = [:]
settings[AVFormatIDKey] = kAudioFormatMPEG4AAC
settings[AVAudioFileTypeKey] = kAudioFileCAFType
settings[AVSampleRateKey] = buffer.format.sampleRate
settings[AVNumberOfChannelsKey] = 2
settings[AVLinearPCMIsFloatKey] = (buffer.format.commonFormat == .pcmFormatFloat32) // true when the buffer holds float samples

let tempFile = try AVAudioFile(forWriting: tempURL,
                               settings: settings,
                               commonFormat: buffer.format.commonFormat,
                               interleaved: buffer.format.isInterleaved)
try tempFile.write(from: buffer)
P.S. Passing the buffer format's settings directly to AVAudioFile gets you an LPCM CAF file, which you may not want, hence I reconstruct the file settings above.
I'm not positive here, but since you're making the output file settings the same as the processing format, it's possible that the processing format has an inflexible policy on interleaving while the file settings format is fine with it, or vice versa.
Here's what I'd try first. Incomplete example, but should be enough to illustrate the areas to test.
let sourceFile: AVAudioFile
let format: AVAudioFormat
do {
    // for the moment, try this without any specific format and see what it gives you
    sourceFile = try AVAudioFile(forReading: inputFileURL)
    format = sourceFile.processingFormat
    print(format) // let's see what we're getting so far, maybe some clues
} catch {
    fatalError("Unable to load the source audio file: \(error.localizedDescription).")
}

let sourceSettings = sourceFile.fileFormat.settings
var outputSettings = sourceSettings // start with the settings of the original file rather than the buffer format settings
outputSettings[AVAudioFileTypeKey] = kAudioFileCAFType
// etc...
I am making a basic music app for iOS, where pressing notes causes the corresponding sound to play. I am trying to get multiple sounds stored in buffers to play simultaneously with minimal latency. However, I can only get one sound to play at any time.
I initially set up my sounds using multiple AVAudioPlayer objects, assigning a sound to each player. While it did play multiple sounds simultaneously, it didn't seem capable of starting two sounds at exactly the same time (the second sound seemed to start slightly after the first). Furthermore, if I pressed notes at a very fast rate, the engine couldn't seem to keep up, and later sounds would start well after I had pressed their notes.
I am trying to solve this problem, and from the research I have done, it seems like using the AVAudioEngine to play sounds would be the best method, where I can set up the sounds in an array of buffers, and then have them play back from those buffers.
import UIKit
import AVFoundation

class ViewController: UIViewController
{
    // Main audio engine and its corresponding mixer
    var audioEngine: AVAudioEngine = AVAudioEngine()
    var mainMixer = AVAudioMixerNode()

    // One AVAudioPlayerNode per note
    var audioFilePlayer: [AVAudioPlayerNode] = Array(repeating: AVAudioPlayerNode(), count: 7)

    // Array of file paths
    let noteFilePath: [String] = [
        Bundle.main.path(forResource: "note1", ofType: "wav")!,
        Bundle.main.path(forResource: "note2", ofType: "wav")!,
        Bundle.main.path(forResource: "note3", ofType: "wav")!]

    // Array to store the note URLs
    var noteFileURL = [URL]()

    // One audio file per note
    var noteAudioFile = [AVAudioFile]()

    // One audio buffer per note
    var noteAudioFileBuffer = [AVAudioPCMBuffer]()

    override func viewDidLoad()
    {
        super.viewDidLoad()
        do
        {
            // For each note, read the note URL into an AVAudioFile,
            // set up the AVAudioPCMBuffer using data read from the file,
            // and read the AVAudioFile into the corresponding buffer
            for i in 0...2
            {
                noteFileURL.append(URL(fileURLWithPath: noteFilePath[i]))
                // Read the corresponding URL into the audio file
                try noteAudioFile.append(AVAudioFile(forReading: noteFileURL[i]))
                // Read data from the audio file, and store it in the correct buffer
                let noteAudioFormat = noteAudioFile[i].processingFormat
                let noteAudioFrameCount = UInt32(noteAudioFile[i].length)
                noteAudioFileBuffer.append(AVAudioPCMBuffer(pcmFormat: noteAudioFormat, frameCapacity: noteAudioFrameCount)!)
                // Read the audio file into the buffer
                try noteAudioFile[i].read(into: noteAudioFileBuffer[i])
            }
            mainMixer = audioEngine.mainMixerNode
            // For each note, attach the corresponding node to the audioEngine, and connect the node to the audioEngine's mixer.
            for i in 0...2
            {
                audioEngine.attach(audioFilePlayer[i])
                audioEngine.connect(audioFilePlayer[i], to: mainMixer, fromBus: 0, toBus: i, format: noteAudioFileBuffer[i].format)
            }
            // Start the audio engine
            try audioEngine.start()
            // Set up the audio session to play sound in the app, and activate the audio session
            try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.soloAmbient)
            try AVAudioSession.sharedInstance().setMode(AVAudioSession.Mode.default)
            try AVAudioSession.sharedInstance().setActive(true)
        }
        catch let error
        {
            print(error.localizedDescription)
        }
    }

    func playSound(senderTag: Int)
    {
        let sound: Int = senderTag - 1
        // Set up the corresponding audio player to play its sound.
        audioFilePlayer[sound].scheduleBuffer(noteAudioFileBuffer[sound], at: nil, options: .interrupts, completionHandler: nil)
        audioFilePlayer[sound].play()
    }
}
Each sound should play without interrupting the other sounds, only interrupting its own sound when that sound is played again. However, despite setting up multiple buffers and players, and assigning each one to its own bus on the audioEngine's mixer, playing one sound still stops any other sound from playing.
Furthermore, while leaving out .interrupts does prevent sounds from stopping other sounds, these sounds won't play until the sound that is currently playing completes. This means that if I play note1, then note2, then note3, note1 will play, while note2 will only play after note1 finishes, and note3 will only play after note2 finishes.
Edit: I was able to get the audioFilePlayer to reset to the beginning without using .interrupts, with the following code in the playSound function.
if audioFilePlayer[sound].isPlaying == true
{
    audioFilePlayer[sound].stop()
}
audioFilePlayer[sound].scheduleBuffer(noteAudioFileBuffer[sound], at: nil, completionHandler: nil)
audioFilePlayer[sound].play()
This still leaves me with figuring out how to play these sounds simultaneously, since playing another sound will still stop the currently playing sound.
Edit 2: I found the solution to my problem. My answer is below.
It turns out that the .interrupts option wasn't the issue (in fact, in my experience it turned out to be the best way to restart a sound that was already playing, as there was no noticeable pause during the restart, unlike with the stop() function). The actual problem that was preventing multiple sounds from playing simultaneously was this particular line of code:
// One AVAudioPlayerNode per note
var audioFilePlayer: [AVAudioPlayerNode] = Array(repeating: AVAudioPlayerNode(), count: 7)
What happened here was that each item of the array was being assigned the exact same AVAudioPlayerNode instance, so they were all effectively sharing the same AVAudioPlayerNode. As a result, calls on one player node affected every item in the array, not just the specified item. To fix this and give each item its own AVAudioPlayerNode, I changed the above line so that the property starts as an empty array of AVAudioPlayerNode instead.
// One AVAudioPlayerNode per note
var audioFilePlayer = [AVAudioPlayerNode]()
I then added a line at the beginning of the second for-loop in viewDidLoad() to append a new AVAudioPlayerNode to this array.
// For each note, attach the corresponding node to the audioEngine, and connect the node to the audioEngine's mixer.
for i in 0...6
{
    audioFilePlayer.append(AVAudioPlayerNode())
    // ... attach and connect audioFilePlayer[i] as before ...
}
This gave each item in the array a different AVAudioPlayerNode value. Playing a sound or restarting a sound no longer interrupts the other sounds that are currently being played. I can now play any of the notes simultaneously and without any noticeable latency between note press and playback.
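As a side note, an equally effective and more compact fix is to build the array with map over a range, so the initializer runs once per element; a small sketch:

// Each closure invocation creates a distinct AVAudioPlayerNode, unlike
// Array(repeating:count:), which evaluates its argument once and stores
// the same reference in every slot.
var audioFilePlayer: [AVAudioPlayerNode] = (0..<7).map { _ in AVAudioPlayerNode() }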
I'm trying to analyze the amplitude data of an audio file, but I can't seem to find a way to get this data after applying a filter. Is it possible to get the floatChannelData or write the output to a new file for analysis?
player = AKPlayer(audioFile: file)
player.buffering = .always
player.preroll()

let filter = AKBandPassButterworthFilter(player, centerFrequency: 1000, bandwidth: 100)
AudioKit.output = filter

do {
    try AudioKit.start()
} catch {
    print("Failed to start AudioKit")
    return nil
}

// This is the peak as though no filter was applied
print(player.buffer?.peak())
Yes, you can render the output to a file. You can also tap the filter node to get its data for use in plots, or copy its contents into a buffer and analyze the data outside AudioKit's render chain.
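For example, here is a minimal sketch of the tap approach, assuming AudioKit 4 and the filter variable from the question; the buffer size and the peak calculation are only illustrative:

// Sketch: install a tap on the filter's underlying AVAudioNode to inspect
// the filtered samples. Assumes AudioKit has already been started.
let tapFormat = filter.avAudioNode.outputFormat(forBus: 0)
filter.avAudioNode.installTap(onBus: 0, bufferSize: 4096, format: tapFormat) { buffer, _ in
    guard let channelData = buffer.floatChannelData else { return }
    let frameCount = Int(buffer.frameLength)
    // Example analysis: peak amplitude of channel 0 in this callback's buffer.
    var peak: Float = 0
    for frame in 0..<frameCount {
        peak = max(peak, abs(channelData[0][frame]))
    }
    print("filtered peak:", peak)
}
// Remove the tap when you're done:
// filter.avAudioNode.removeTap(onBus: 0)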
First, I created an AKMIDISampler to play an audio file, and then assigned it to an AKSequencer. The midi file I used is just 2 bars long, a single-track C3 note, exactly as long as the audio file I wanted to play. But when creating the AKAudioFile, I wanted to choose an mp3 file randomly. For now I made 1.mp3, 2.mp3 and 3.mp3, as below.
let track = AKMIDISampler()
let sequencer = AKSequencer(filename: "midi")
try? track.loadAudioFile(AKAudioFile(readFileName: String(arc4random_uniform(3)+1) + ".mp3"))
sequencer.tracks[0].setMIDIOutput(track.midiIn)
// Tempo track I had to add to remove the sine wave
sequencer.tracks[1].setMIDIOutput(track.midiIn)
Then I applied some sequencer settings:
sequencer.setTempo(128.0)
sequencer.setLength(AKDuration(beats: 8))
sequencer.setLoopInfo(AKDuration(beats: 8), numberOfLoops: 4)
sequencer.preroll()
and assigned the AKMIDISampler to AudioKit.output, then called sequencer.play().
The sequencer playback was successful! It randomly loaded one of the three mp3 files and played 8 beats (2 bars), looping exactly 4 times.
But my goal is to load a random mp3 file every time the loop repeats. It seems like the sequencer only plays the first assigned mp3 file when looping, and I am struggling to find a solution to this.
Perhaps I could use AKCallbackInstrument? Since I play the audio file through a MIDI note in this case, maybe I could call loadAudioFile again whenever the MIDI note is off? That way I could loop the sequencer and play a random audio file in every loop. This is just an idea, and right now it is hard for me to write it properly. I hope I am on the right track. It would be great to get some advice here. <3
You're definitely on the right track - you can easily get random audio files to loop at a fixed interval with AKSequencer + AKCallbackInstrument. But I wouldn't worry about trying to reload on the NoteOff message.
I would first load each mp3 into a separate player (e.g., AKAppleSampler) in an array (e.g., you could call it players) and create a method that will trigger one of these players at random:
func playRandom() {
    let playerIndex = Int(arc4random_uniform(UInt32(players.count)))
    try? players[playerIndex].play()
}
When you create your sequencer, add a track and assign it to an AKCallbackInstrument. The callback function for this AKCallbackInstrument will call playRandom when it receives a noteOn message.
seq = AKSequencer()
track = seq.newTrack()!
callbackInst = AKCallbackInstrument()
track.setMIDIOutput(callbackInst.midiIn)
callbackInst.callback = { status, note, vel in
    guard status == .noteOn else { return }
    self.playRandom()
}
It isn't necessary to load the sequencer with a MIDI file. You could just add the triggering MIDI event directly to the track.
track.add(noteNumber: 48,                   // i.e., C3
          velocity: 127,
          position: AKDuration(beats: 0),   // noteOn message here
          duration: AKDuration(beats: 8),   // noteOff 8 beats later
          channel: 0)
Your problem with the sine wave is probably caused by an extra track (probably the tempo track) in the MIDI file you created that hasn't been assigned an output. You can avoid the problem altogether by adding the MIDI events directly.
In principle, you could use the callback to check for noteOff events and trigger code from the noteOff, but I wouldn't recommend it in your case. There is no good reason to re-use a single player for multiple audio files. Loading the file is where you are most likely to create an error: what happens if your file hasn't finished playing and you try to load another one? The resources needed to keep multiple players in memory are pretty trivial; if you're going to play the same file more than once, it is cleaner and safer to load it once and keep the player in memory.
That was very helpful, c_booth! Thanks to you, I made huge progress today. Here's what I've written based on your advice. First, I made an array of AKPlayers containing 6 mp3 files. They're connected to an AKMixer, and then I set up the sequencer and callback instrument. I added a track and a note to the sequencer, which calls the playRandom function on every noteOn:
let players: [AKPlayer] = {
    do {
        let filenames = ["a1.mp3", "a2.mp3", "a3.mp3", "b1.mp3", "b2.mp3", "b3.mp3"]
        return try filenames.map { AKPlayer(audioFile: try AKAudioFile(readFileName: $0)) }
    } catch {
        fatalError()
    }
}()

func playRandom() {
    let playerIndex = Int(arc4random_uniform(UInt32(players.count)))
    players[playerIndex].play()
}

func addTracks() {
    let track = sequencer.newTrack()!
    track.add(noteNumber: 48, velocity: 127, position: AKDuration(beats: 0), duration: AKDuration(beats: 16), channel: 0)
    track.setMIDIOutput(callbackInst.midiIn)
    callbackInst.callback = { status, note, vel in
        guard status == .noteOn else { return }
        self.playRandom()
    }
}

func sequencerSettings() {
    sequencer.setTempo(128.0)
    sequencer.setLength(AKDuration(beats: 16))
    sequencer.setLoopInfo(AKDuration(beats: 16), numberOfLoops: 4)
    sequencer.preroll()
}

func makeConnections() {
    players.forEach { $0 >>> mixer }
}

func startAudioEngine() {
    AudioKit.output = mixer
    do {
        try AudioKit.start()
    } catch {
        print(error)
        fatalError()
    }
}

func startSequencer() {
    sequencer.play()
}
This worked great. It randomly selects one of the 6 mp3 files (they're all the same length: 128 bpm and 16 beats). What I find strange, though, is that the first playback plays two audio files at once; it works fine from the second loop onward. I changed the numberOfLoops setting, tried enableLooping(), etc., but it's still the same: two files play on the first pass. The track count is still 1, and I only trigger one AKPlayer, as you can see. Is there anything I can do about this?
Also, ultimately I'd like to have hundreds of mp3 files in the array, as what I'm trying to make is a sort of DJing app (something like an Ableton Live preset). Do you think it's a good idea to use AKPlayer, given that this code will eventually load mp3 files from the cloud and stream them to the user? Much appreciated. <3
I am using AudioKit to mix WAV files together with MIDI files.
I also need to save the result in a separate file.
To mix the WAVs and MIDIs I am using an AKMIDISampler with an AKSequencer like this:
func add(track: MixerTrack) -> Bool {
    do {
        let trackSampler = AKMIDISampler()
        try trackSampler.loadWav(track.instrument.fileName)
        trackSampler.connect(to: mixer)

        let sequencer = AKSequencer(filename: track.midi.fileName)
        sequencer.setTempo(Double(tempo))
        sequencer.setRate(rate)
        sequencer.setGlobalMIDIOutput(trackSampler.midiIn)
        sequencer.enableLooping()

        sequencers.append(sequencer)
        tracks.append(track)
        return true
    } catch {
        return false
    }
}
I am using the SongProcessor example from AudioKit's examples for ideas on how to use AKOfflineRenderNode.
The thing is, the example works with AKAudioPlayer instances, not sequencers like I'm using. I believe I cannot use players, because I need to mix the WAV and MIDI files, and I was only able to achieve that using sequencers.
My first question is: is it possible to create files from sequencers the same way it is done in SongProcessor with players?
I was able to save an m4a file, but the result is weird. First, if I don't manually set the rate to a number like 40, playback of the notes is extremely slow. And when I set it to a value like that, I can hear the sequence playing, but at the wrong rate. At some moments the beats play correctly, but they often start playing too slowly or too quickly at different times.
Is there something I am doing wrong? Is this a bug with AKOfflineRenderNode, or is it just not meant to be used like this?
Here is the code I use to save the mix to disk:
func saveMixToDisk() -> URL? {
    do {
        let fileManager = FileManager.default
        let name = UUID().uuidString.appending(".m4a")
        let documentDirectory = try fileManager.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: false)
        let fileURL = documentDirectory.appendingPathComponent(name)

        offlineRender.internalRenderEnabled = false
        let duration = sequencers.first!.length.seconds

        for sequencer in sequencers {
            sequencer.stop()
            sequencer.setTime(AKDuration(seconds: 0).musicTimeStamp)
            sequencer.rewind()
        }
        for sequencer in sequencers {
            sequencer.setRate(40) // I would like to find a way to avoid having to set this, since this value is hardcoded and I don't know how to find the correct one. (When I only play through the sequencer inside the app the rate is perfect, but it gets messed up when rendering to URL.)
            sequencer.play()
        }

        try offlineRender.renderToURL(fileURL, seconds: duration * 10)

        for sequencer in sequencers {
            sequencer.stop()
            sequencer.setTime(AKDuration(seconds: 0).musicTimeStamp)
            sequencer.rewind()
        }

        offlineRender.internalRenderEnabled = true
        return fileURL
    } catch let error {
        print(error)
        return nil
    }
}
Any help is very much appreciated. I can't seem to get this to work, and sadly I don't know of any other option on iOS for achieving what I need.
Instead of using AKOfflineRenderNode, try the new AudioKit.renderToFile in AudioKit 4.0.4: https://github.com/AudioKit/AudioKit/commit/09aedf7c119a399ab00026ddfb91ae6778570176
I think you need to use this method on iOS 11:
[AudioKit renderToFile:file duration:self->_audioDurationSeconds error:&error prerender:^{
    [self.voicePlayer start];
}];
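For reference, a rough Swift equivalent of that call could look like the sketch below. mixFileURL and duration are placeholders, sequencers is the array from the question, and depending on your AudioKit 4 version the parameter label may be seconds: rather than duration:.

// Sketch only: offline-render the current AudioKit.output to a file.
// AudioKit.output (e.g. the mixer) must already be set up before rendering.
do {
    let outputFile = try AVAudioFile(forWriting: mixFileURL,
                                     settings: AudioKit.format.settings)
    try AudioKit.renderToFile(outputFile, duration: duration, prerender: {
        // Anything that should be audible in the rendered file has to be
        // started inside the prerender closure.
        for sequencer in sequencers {
            sequencer.play()
        }
    })
} catch {
    print(error)
}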