How does one programmatically change the tempo (in BPM) of an AVAudioSequencer that's been loaded from an existing MIDI file (i.e. using the following)?
try sequencer.load(from: fileURL, options: AVMusicSequenceLoadOptions.smfChannelsToTracks)
I know that the sequencer's tempoTrack property returns the AVMusicTrack controlling the tempo, but how does one then edit it to add/change tempo events? The Apple documentation simply says...
"The tempo track can be edited and iterated upon as any other track. Non-tempo events in a tempo track are ignored."
...but gives no further indication on how such editing would be done.
I know there's the rate property, but that's just a playback-speed multiplier with a default value of 1.0, so mapping it to BPM values would take some awkward math, and I don't think it would even be possible unless the file's original BPM is known at runtime.
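For example, the best I can imagine doing with rate is something like this, which only works if the original tempo is somehow known ahead of time (the numbers below are made up):

// Hypothetical: only works if the file's original tempo is already known.
let originalBPM = 120.0
let targetBPM = 92.0
sequencer.rate = Float(targetBPM / originalBPM)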
Alternatively, is there a way to create a new AVMusicTrack from scratch, with a custom tempo, and make that the sequencer's tempoTrack?
The only way I managed this was to dip into the Audio Toolbox API momentarily.
This approach assumes that you instantiated your AVAudioSequencer with an AVAudioEngine via:
init(audioEngine engine: AVAudioEngine)
(1) After loading your MIDI file with AVAudioSequencer, get the underlying MusicSequence from this property on AVAudioEngine:
var musicSequence: MusicSequence? { get set }
(2) Get a pointer to the sequence's tempo track.
var tempoTrack: MusicTrack!
// note: the engine's musicSequence property is Optional, so unwrap it before passing it in
MusicSequenceGetTempoTrack(musicSequence!, &tempoTrack)
(3) Remove all existing tempo information from the tempo track.
var iterator: MusicEventIterator!
NewMusicEventIterator(tempoTrack, &iterator)
var hasEvent = DarwinBoolean(false)
MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
while hasEvent.boolValue {
var timeStamp = MusicTimeStamp()
var eventType = MusicEventType()
var data: UnsafeRawPointer? = nil
var dataSize = UInt32()
MusicEventIteratorGetEventInfo(iterator, &timeStamp, &eventType, &data, &dataSize)
guard eventType == kMusicEventType_ExtendedTempo else {
MusicEventIteratorNextEvent(iterator)
MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
continue
}
// remove tempo event
MusicEventIteratorDeleteEvent(iterator)
MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
}
DisposeMusicEventIterator(iterator)
(4) Set the new tempo.
let bpm: Float64 = 92
let timeStamp = MusicTimeStamp(0)
MusicTrackNewExtendedTempoEvent(tempoTrack, timeStamp, bpm)
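Putting the four steps together, something like the following sketch should work (this is my own consolidation rather than tested production code; engine is assumed to be the AVAudioEngine the sequencer was created with):

import AVFoundation
import AudioToolbox

// Consolidated sketch of steps (1)-(4).
func setTempo(_ bpm: Float64, on engine: AVAudioEngine) {
    // (1) Grab the underlying MusicSequence.
    guard let sequence = engine.musicSequence else { return }

    // (2) Get the sequence's tempo track.
    var tempoTrack: MusicTrack?
    MusicSequenceGetTempoTrack(sequence, &tempoTrack)
    guard let tempoTrack = tempoTrack else { return }

    // (3) Strip all existing extended-tempo events.
    var iterator: MusicEventIterator?
    NewMusicEventIterator(tempoTrack, &iterator)
    guard let iterator = iterator else { return }
    var hasEvent = DarwinBoolean(false)
    MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
    while hasEvent.boolValue {
        var timeStamp = MusicTimeStamp()
        var eventType = MusicEventType()
        var data: UnsafeRawPointer? = nil
        var dataSize = UInt32()
        MusicEventIteratorGetEventInfo(iterator, &timeStamp, &eventType, &data, &dataSize)
        if eventType == kMusicEventType_ExtendedTempo {
            // deleting advances the iterator to the next event
            MusicEventIteratorDeleteEvent(iterator)
        } else {
            MusicEventIteratorNextEvent(iterator)
        }
        MusicEventIteratorHasCurrentEvent(iterator, &hasEvent)
    }
    DisposeMusicEventIterator(iterator)

    // (4) Insert a single tempo event at the start of the track.
    MusicTrackNewExtendedTempoEvent(tempoTrack, MusicTimeStamp(0), bpm)
}

You'd call it as, e.g., setTempo(92, on: engine) after loading the MIDI file.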
Hello fellow AudioKit users,
I'm trying to setup AudioKit 5 with a playback time indication, and am having trouble.
If I use AudioPlayer's duration property, this is the total time of the audio file, not the current playback time.
ex:
let duration = player.duration
Always gives the file's total time.
Looking at old code from AKAudioPlayer, it seemed to have a "currentTime" property.
The migration guide (https://github.com/AudioKit/AudioKit/blob/v5-main/docs/MigrationGuide.md) mentions some potentially helpful classes from the old version; however, "AKTimelineTap" is listed with no replacement and no comments from the developers... nice.
I'm also still not sure how to manipulate the current playback time either...
I've also checked out AudioKit 5's Cookbook, however that's about adding effects and nodes, not playback display, etc.
Thanks for any help with this new version of AudioKit.
You can find playerNode in AudioPlayer; it's an AVAudioPlayerNode. Using lastRenderTime and playerTime(forNodeTime:), you can calculate the current time.
ex:
// get playerNode in AudioPlayer.
let playerNode = player.playerNode
// get lastRenderTime, and transform to playerTime.
guard let lastRenderTime = playerNode.lastRenderTime else { return }
guard let playerTime = playerNode.playerTime(forNodeTime: lastRenderTime) else { return }
// use sampleRate and sampleTime to calculate current time in seconds.
let sampleRate = playerTime.sampleRate
let sampleTime = playerTime.sampleTime
let currentTime = Double(sampleTime) / sampleRate
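If you want this for a playback-time label, you could wrap the calculation in a small helper (a sketch, assuming player is your AudioPlayer) and poll it from a Timer:

// Current playback position in seconds, derived from the player node's render clock.
var currentTime: TimeInterval {
    guard let lastRenderTime = player.playerNode.lastRenderTime,
          let playerTime = player.playerNode.playerTime(forNodeTime: lastRenderTime)
    else { return 0 }
    return Double(playerTime.sampleTime) / playerTime.sampleRate
}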
I am making a basic music app for iOS, where pressing notes causes the corresponding sound to play. I am trying to get multiple sounds stored in buffers to play simultaneously with minimal latency. However, I can only get one sound to play at any time.
I initially set up my sounds using multiple AVAudioPlayer objects, assigning a sound to each player. While this did play multiple sounds simultaneously, it didn't seem capable of starting two sounds at exactly the same time (the second sound started just slightly after the first). Furthermore, if I pressed notes at a very fast rate, the engine couldn't seem to keep up, and later sounds would start well after I had pressed their notes.
I am trying to solve this problem, and from the research I have done, it seems like using the AVAudioEngine to play sounds would be the best method, where I can set up the sounds in an array of buffers, and then have them play back from those buffers.
class ViewController: UIViewController
{
// Main audio engine and its corresponding mixer
var audioEngine: AVAudioEngine = AVAudioEngine()
var mainMixer = AVAudioMixerNode()
// One AVAudioPlayerNode per note
var audioFilePlayer: [AVAudioPlayerNode] = Array(repeating: AVAudioPlayerNode(), count: 7)
// Array of filepaths
let noteFilePath: [String] = [
Bundle.main.path(forResource: "note1", ofType: "wav")!,
Bundle.main.path(forResource: "note2", ofType: "wav")!,
Bundle.main.path(forResource: "note3", ofType: "wav")!]
// Array to store the note URLs
var noteFileURL = [URL]()
// One audio file per note
var noteAudioFile = [AVAudioFile]()
// One audio buffer per note
var noteAudioFileBuffer = [AVAudioPCMBuffer]()
override func viewDidLoad()
{
super.viewDidLoad()
do
{
// For each note, read the note URL into an AVAudioFile,
// setup the AVAudioPCMBuffer using data read from the file,
// and read the AVAudioFile into the corresponding buffer
for i in 0...2
{
noteFileURL.append(URL(fileURLWithPath: noteFilePath[i]))
// Read the corresponding url into the audio file
try noteAudioFile.append(AVAudioFile(forReading: noteFileURL[i]))
// Read data from the audio file, and store it in the correct buffer
let noteAudioFormat = noteAudioFile[i].processingFormat
let noteAudioFrameCount = UInt32(noteAudioFile[i].length)
noteAudioFileBuffer.append(AVAudioPCMBuffer(pcmFormat: noteAudioFormat, frameCapacity: noteAudioFrameCount)!)
// Read the audio file into the buffer
try noteAudioFile[i].read(into: noteAudioFileBuffer[i])
}
mainMixer = audioEngine.mainMixerNode
// For each note, attach the corresponding node to the audioEngine, and connect the node to the audioEngine's mixer.
for i in 0...2
{
audioEngine.attach(audioFilePlayer[i])
audioEngine.connect(audioFilePlayer[i], to: mainMixer, fromBus: 0, toBus: i, format: noteAudioFileBuffer[i].format)
}
// Start the audio engine
try audioEngine.start()
// Setup the audio session to play sound in the app, and activate the audio session
try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.soloAmbient)
try AVAudioSession.sharedInstance().setMode(AVAudioSession.Mode.default)
try AVAudioSession.sharedInstance().setActive(true)
}
catch let error
{
print(error.localizedDescription)
}
}
func playSound(senderTag: Int)
{
let sound: Int = senderTag - 1
// Set up the corresponding audio player to play its sound.
audioFilePlayer[sound].scheduleBuffer(noteAudioFileBuffer[sound], at: nil, options: .interrupts, completionHandler: nil)
audioFilePlayer[sound].play()
}
Each sound should be playing without interrupting the other sounds, only interrupting its own sound when the sound is played again. However, despite setting up multiple buffers and players, and assigning each one to its own bus on the audioEngine's mixer, playing one sound still stops any other sounds from playing.
Furthermore, while leaving out .interrupts does prevent sounds from stopping other sounds, these sounds won't play until the sound that is currently playing completes. This means that if I play note1, then note2, then note3, note1 will play, while note2 will only play after note1 finishes, and note3 will only play after note2 finishes.
Edit: I was able to get the audioFilePlayer to reset to the beginning without using .interrupts, with the following code in the playSound function.
if audioFilePlayer[sound].isPlaying == true
{
audioFilePlayer[sound].stop()
}
audioFilePlayer[sound].scheduleBuffer(noteAudioFileBuffer[sound], at: nil, completionHandler: nil)
audioFilePlayer[sound].play()
This still leaves me with figuring out how to play these sounds simultaneously, since playing another sound will still stop the currently playing sound.
Edit 2: I found the solution to my problem. My answer is below.
It turns out that the .interrupts option wasn't the issue (in fact, in my experience it turned out to be the best way to restart a sound that was already playing, as there was no noticeable pause during the restart, unlike with the stop() function). The actual problem preventing multiple sounds from playing simultaneously was this particular line of code.
// One AVAudioPlayerNode per note
var audioFilePlayer: [AVAudioPlayerNode] = Array(repeating: AVAudioPlayerNode(), count: 7)
What happened here was that every item of the array was being assigned the exact same AVAudioPlayerNode instance: AVAudioPlayerNode is a reference type, and Array(repeating:count:) evaluates its argument once and reuses the result for every element. As a result, the AVAudioPlayerNode functions were affecting all of the items in the array instead of just the specified one. To fix this and give each item a distinct AVAudioPlayerNode, I changed the above line so that the array starts out empty instead.
// One AVAudioPlayerNode per note
var audioFilePlayer = [AVAudioPlayerNode]()
I then added a line at the beginning of the second for-loop in viewDidLoad() that appends a new AVAudioPlayerNode to the array on each iteration.
// For each note, attach the corresponding node to the audioEngine, and connect the node to the audioEngine's mixer.
for i in 0...6
{
audioFilePlayer.append(AVAudioPlayerNode())
// audioEngine code
}
This gave each item in the array its own AVAudioPlayerNode instance. Playing a sound or restarting a sound no longer interrupts the other sounds that are currently being played. I can now play any of the notes simultaneously and without any noticeable latency between note press and playback.
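As a side note (not something I used in my fix above), the same result can be achieved in one line by mapping over a range, since the closure runs once per element:

// Each closure invocation creates a distinct AVAudioPlayerNode instance.
var audioFilePlayer: [AVAudioPlayerNode] = (0..<7).map { _ in AVAudioPlayerNode() }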
I am setting up a TTS app with AVSpeechSynthesizer. I have to do real-time pitch and rate adjustments. I am using a UISlider for adjusting pitch and rate.
Here is my code:-
@IBAction func sl(_ sender: UISlider) {
if synthesizer.isSpeaking {
synthesizer.stopSpeaking(at: .immediate)
self.rate = sender.value
if currentRange.length > 0 {
let valuee = currentRange.length + currentRange.location
let neww = self.tvEditor.text.dropFirst(valuee)
self.tvEditor.text = String(neww)
synthesizer.speak(buildUtterance(for: rate, pitch: pitch, with: String(neww), language: self.preferredVoiceLanguageCode2 ?? "en"))
}
} else {
}
}
I think I understand your problem even though few details are provided: the new rate and pitchMultiplier values can't be taken into account while the speech is already running.
To follow the details below, I suggest reading this example, which contains code snippets (ObjC, Swift) and illustrations.
Create your AVSpeechUtterance instances with their rate and pitchMultiplier properties.
Add each one of them to an array that will represent the queue to be spoken.
Loop over that queue and have the synthesizer read out every element.
Now, if you want to change the property values in real-time, see the steps hereafter once one of your sliders moves:
Get the current spoken utterance thanks to the AVSpeechSynthesizerDelegate protocol.
Call the synthesizer's stopSpeaking method, which removes the utterances that haven't been spoken yet from the queue.
Recreate the removed utterances with the new property values.
Redo steps 2/ and 3/ to resume where you stopped with these updated values.
The synthesizer queues everything to be spoken long before you ask for new values, and those new values don't affect utterances that are already queued: you must remove the unspoken utterances and recreate them with the new property values.
If the code example provided by the link above isn't enough, I suggest taking a look at this detailed summary of the WWDC video dealing with AVSpeechSynthesizer.
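For what it's worth, here is a rough sketch of those steps in Swift. The names (SpeechController, updateVoiceParameters, remainingText) are mine for illustration, not Apple API:

import AVFoundation

final class SpeechController: NSObject, AVSpeechSynthesizerDelegate {
    let synthesizer = AVSpeechSynthesizer()
    private var fullText = ""
    private var spokenUpTo = 0   // character offset of the last spoken range

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func speak(_ text: String, rate: Float, pitch: Float) {
        fullText = text
        spokenUpTo = 0
        enqueue(text, rate: rate, pitch: pitch)
    }

    // Step 1: track how far the synthesizer has got through the current utterance.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           willSpeakRangeOfSpeechString characterRange: NSRange,
                           utterance: AVSpeechUtterance) {
        spokenUpTo = characterRange.location + characterRange.length
    }

    // Steps 2-4, called from the slider: stop, rebuild the unspoken remainder, speak again.
    func updateVoiceParameters(rate: Float, pitch: Float) {
        guard synthesizer.isSpeaking else { return }
        synthesizer.stopSpeaking(at: .immediate)
        // NSRange offsets are UTF-16 based; fine for plain text, adjust if you use emoji etc.
        let remainingText = String(fullText.dropFirst(spokenUpTo))
        fullText = remainingText
        spokenUpTo = 0
        enqueue(remainingText, rate: rate, pitch: pitch)
    }

    private func enqueue(_ text: String, rate: Float, pitch: Float) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.rate = rate
        utterance.pitchMultiplier = pitch
        synthesizer.speak(utterance)
    }
}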
I'm trying to normalize an audio file after recording to make it louder or quieter, but I'm getting the error WARNING AKAudioFile: cannot normalize a silent file.
I checked the recorded audioFile.maxLevel and it was 1.17549e-38, the minimum positive Float.
I'm using the official Recorder example, and to normalize after recording I added this code:
let norm = try player.audioFile.normalized(newMaxLevel: -4.0)
What am I doing wrong? Why is maxLevel invalid? The recording is loud enough.
Rather than use the internal audio file of the player, make a new instance like so:
if let file = try? AKAudioFile(forReading: url) {
if let normalizedFile = try? file.normalized(newMaxLevel: -4) {
Swift.print("Normalized file sucess: \(normalizedFile.maxLevel)")
}
}
I can add a normalize func to the AKAudioPlayer so that it's available for playback. Essentially, the player just uses the AKAudioFile for initialization, and all subsequent operations happen in a buffer.
I am playing a song using AVAudioPlayerNode and I am trying to control its time using a UISlider, but I can't figure out how to seek to a time using AVAudioEngine.
After MUCH trial and error I think I have finally figured this out.
First you need to determine the sample rate. To do this, get the last render time of your player node and convert it to player time:
guard let nodeTime: AVAudioTime = self.playerNode.lastRenderTime,
      let playerTime: AVAudioTime = self.playerNode.playerTime(forNodeTime: nodeTime) else { return }
let sampleRate = playerTime.sampleRate
Then, multiply your sample rate by the new time in seconds. This will give you the exact frame of the song at which you want to start the player:
var newsampletime = AVAudioFramePosition(sampleRate * Double(Slider.value))
Next, you are going to want to calculate the amount of frames there are left in the audio file:
var length = Float(songDuration!) - Slider.value
var framestoplay = AVAudioFrameCount(Float(playerTime.sampleRate) * length)
Finally, stop your node, schedule the new segment of audio, and start your node again!
playerNode.stop()
if framestoplay > 1000 {
playerNode.scheduleSegment(audioFile, startingFrame: newsampletime, frameCount: framestoplay, at: nil, completionHandler: nil)
}
playerNode.play()
If you need further explanation I wrote a short tutorial here: http://swiftexplained.com/?p=9
For future readers, probably better to get the sample rate as :
playerNode.outputFormat(forBus: 0).sampleRate
Also take care when converting to AVAudioFramePosition: it's an integer type, while the sample rate is a Double, so round the result or you may end up slightly off from where you intended.
P.S. The above answer assumes that the file you are playing has the same sample rate as the output format of the player, which may or may not be true.
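Putting the answer and these remarks together, a seek helper could look roughly like this (a sketch, assuming playerNode and audioFile are already attached and connected; note that scheduleSegment's frame values are in the file's own sample domain, so the file's processingFormat sample rate is used):

// Rough seek-to-time sketch for AVAudioPlayerNode + AVAudioFile.
func seek(to seconds: Double) {
    // Frame positions passed to scheduleSegment are in the file's sample domain.
    let fileSampleRate = audioFile.processingFormat.sampleRate
    let startFrame = AVAudioFramePosition((seconds * fileSampleRate).rounded())
    let remainingFrames = audioFile.length - startFrame

    playerNode.stop()
    guard remainingFrames > 1000 else { return } // nothing meaningful left to play
    playerNode.scheduleSegment(audioFile,
                               startingFrame: startFrame,
                               frameCount: AVAudioFrameCount(remainingFrames),
                               at: nil,
                               completionHandler: nil)
    playerNode.play()
}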