Using AVAudioSequencer to send MIDI to third-party AUv3 instruments (iOS)

I'm having trouble controlling third-party AUv3 instruments with MIDI using AVAudioSequencer (iOS 12.1.4, Swift 4.2, Xcode 10.1) and would appreciate your help.
What I'm doing currently (see the sketch after these steps):
Get all AUs of type kAudioUnitType_MusicDevice.
Instantiate one and connect it to the AVAudioEngine.
Create some notes, and put them on a MusicTrack.
Hand the track data over to an AVAudioSequencer connected to the engine.
Set the destinationAudioUnit of the track to my selected Audio Unit.
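In code, the setup looks roughly like this (a minimal sketch; the MIDI file URL and picking the first component are placeholders, error handling is omitted, and `load(from: Data, options:)` would be the equivalent for track data built on the fly):

```swift
import AVFoundation
import AudioToolbox

let engine = AVAudioEngine()
let midiFileURL = Bundle.main.url(forResource: "sequence", withExtension: "mid")!  // placeholder

// 1. Find AUv3 instruments (kAudioUnitType_MusicDevice).
var desc = AudioComponentDescription()
desc.componentType = kAudioUnitType_MusicDevice
let instruments = AVAudioUnitComponentManager.shared().components(matching: desc)
let selected = instruments.first!   // in practice, chosen in the UI

// 2. Instantiate the Audio Unit and connect it to the engine.
AVAudioUnit.instantiate(with: selected.audioComponentDescription, options: []) { avAudioUnit, _ in
    guard let instrument = avAudioUnit else { return }
    engine.attach(instrument)
    engine.connect(instrument, to: engine.mainMixerNode, format: nil)

    // 3.-5. Hand the track data to the sequencer and route the track to the instrument.
    let sequencer = AVAudioSequencer(audioEngine: engine)
    try? sequencer.load(from: midiFileURL, options: [])
    for track in sequencer.tracks {
        track.destinationAudioUnit = instrument
    }

    try? engine.start()
    sequencer.prepareToPlay()
    try? sequencer.start()
}
```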
So far, so good, but...
When I play the sequence using AVAudioSequencer, it plays fine the first time, using the selected Audio Unit. The second time I get either silence or a sine-wave sound (and I wonder who is making that). I don't think the Audio Unit should be going out of scope between playbacks of the sequence, but I do stop the engine and restart it for the new round. (It should even be possible to swap AUs while the engine is running, so I think this is OK.)
Are there some steps that I'm missing? I would love to include code, but it is really hard to condense it down to its essence from a wall of text. But if you want to ask for specifics, I can answer. Or if you can point me to a working example that shows how to reliably send MIDI to AUv3 using AVAudioSequencer, that would be great.
Is AVAudioSequencer even supposed to work with other Audio Units than Apple's? Or should I start looking for other ways to send MIDI over to AUv3?
I should add that I can consistently send MIDI to the AUv3 using the InstrumentPlayer method from Apple's AUv3Host sample, but that involves a concurrent thread, and results in all sorts of UI sync and timing problems.
EDIT: I added an example project to GitHub:
https://github.com/jerekapyaho/so54753738
It seems to be working now in iPadOS 13.7, but I don't think I'm doing anything very different from before, except that this version loads a MIDI file from the bundle instead of generating the sequence from data on the fly.
If someone still has iOS 12, it would be interesting to know whether it's broken there but working on iOS 13.x (x = ?).

If you are using AVAudioUnitSampler as the audio unit instrument, the sine tone happens when you stop and start the audio engine without reloading the preset. Whenever you start the engine, you need to load your instrument (e.g. a SoundFont) back into the sampler, otherwise you may hear the sine. This is an issue with the Apple AUSampler, not with third-party instruments.
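A minimal sketch of that reload, with the SoundFont name as a placeholder:

```swift
import AVFoundation
import AudioToolbox

let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()
engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

// Call this every time the engine is (re)started; without the reload,
// AUSampler falls back to its default sine voice.
func startEngineAndReloadInstrument() throws {
    try engine.start()
    let soundFontURL = Bundle.main.url(forResource: "MySoundFont", withExtension: "sf2")!  // placeholder
    try sampler.loadSoundBankInstrument(at: soundFontURL,
                                        program: 0,
                                        bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                        bankLSB: UInt8(kAUSampler_DefaultBankLSB))
}
```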
Btw you can test it under iOS 12 using the simulator.

Related

Measuring the mic input volume during recording with AVAudioRecorder

New here, and new to mobile dev in general. This question is more about approach than anything. I have a simple app that I'm writing to learn various things, one of which is AVFoundation. I have the app working to the point where I record audio using AVAudioRecorder, play the recorded file back with AVAudioPlayer, and all is well. There are two things I'd like to achieve, but I'm not quite sure how to go about them in the best way. I'm using Swift 3, Xcode 8.3, iOS 10.3. Lots of 3s.
First: I want to only play back X number of seconds of the audio. To achieve this, my thought is to use scheduledTimer for X, which will trigger a stop() call when it elapses. Is that the best method to use?
Second: I want to measure the decibel level of input coming into the microphone while it’s recording. This is the one I truly have little insight on how to accomplish. I believe this can be obtained through the AVAudioRecorder.powerOutput value (?), but I’m unclear as to how I can monitor the value during playback and act on it.
Not really sure what code to include since it's pretty basic. I'm setting up the AVAudioSession in the AppDelegate, the AVAudioRecorder is setup to record in didFinishLoading, and the rest of the record, stop, play functionality is through buttons.
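For reference, a minimal sketch of both ideas. The `player` and `recorder` parameters stand in for the AVAudioPlayer and AVAudioRecorder already set up in the app; note that AVAudioRecorder has no `powerOutput` property, the metering API is `isMeteringEnabled`, `updateMeters()` and `averagePower(forChannel:)`:

```swift
import AVFoundation

// 1. Play back only the first few seconds, then stop.
func play(_ player: AVAudioPlayer, forSeconds seconds: TimeInterval) {
    player.play()
    Timer.scheduledTimer(withTimeInterval: seconds, repeats: false) { _ in
        player.stop()
    }
}

// 2. Poll the input level while recording.
func monitorInputLevel(of recorder: AVAudioRecorder) {
    recorder.isMeteringEnabled = true
    recorder.record()
    Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { timer in
        guard recorder.isRecording else { timer.invalidate(); return }
        recorder.updateMeters()
        let level = recorder.averagePower(forChannel: 0)   // roughly -160 dBFS (silence) to 0 dBFS
        print("input level: \(level) dBFS")
    }
}
```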

getting pcm audio for visualization via Spotify iOS SDK

We're currently looking at taking our music visualization software that's been around for many years to an iOS app that plays music via the new iOS Spotify SDK -- check out http://soundspectrum.com to see our visuals such as G-Force and Aeon.
Anyway, we have the demo projects in the Spotify iOS SDK all up and running and things look good, but the major step forward is to get access to the PCM audio so we can send it into our visual engines, etc.
Could a Spotify dev or someone in the know kindly suggest what possibilities are available to get hold of the PCM audio? The PCM block can be as simple as a circular buffer of a few thousand of the latest samples (which we would use for FFTs, etc.).
Thanks in advance!
Subclass SPTCoreAudioController and do one of two things:
Override connectOutputBus:ofNode:toInputBus:ofNode:inGraph:error: and use AudioUnitAddRenderNotify() to add a render callback to destinationNode's audio unit. The callback will be called as the output node is rendered and will give you access to the audio as it's leaving for the speakers. Once you've done that, make sure you call super's implementation for the Spotify iOS SDK's audio pipeline to work correctly.
Override attemptToDeliverAudioFrames:ofCount:streamDescription:. This gives you access to the PCM data as it's produced by the library. However, there's some buffering going on in the default pipeline so the data given in this callback might be up to half a second behind what's going out to the speakers, so I'd recommend using suggestion 1 over this. Call super here to continue with the default pipeline.
Once you have your custom audio controller, initialise an SPTAudioStreamingController with it and you should be good to go.
I actually used suggestion 1 to implement iTunes' visualiser API in my Mac OS X Spotify client that was built with CocoaLibSpotify. It's not working 100% smoothly (I think I'm doing something wrong with runloops and stuff), but it drives G-Force and Whitecap pretty well. You can find the project here, and the visualiser stuff is in VivaCoreAudioController.m. The audio controller class in CocoaLibSpotify and that project is essentially the same as the one in the new iOS SDK.
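For reference, the render-notify part of suggestion 1 looks roughly like this in Swift. AudioUnitAddRenderNotify() and the AURenderCallback signature are standard Core Audio; how you obtain `outputUnit` from the destination node inside the connectOutputBus override is SDK-specific and not shown here:

```swift
import AudioToolbox

// Post-render notifications fire after the output unit has rendered, so the
// buffer list holds the PCM that is about to go out to the speakers.
let renderNotify: AURenderCallback = { _, ioActionFlags, _, _, inNumberFrames, ioData in
    if ioActionFlags.pointee.contains(.unitRenderAction_PostRender), let ioData = ioData {
        for buffer in UnsafeMutableAudioBufferListPointer(ioData) {
            // buffer.mData holds inNumberFrames frames of PCM for this stream;
            // copy it into a ring buffer and run the FFT on another thread.
            _ = buffer.mData
        }
    }
    return noErr
}

// `outputUnit` is the AudioUnit of the graph's output (destination) node.
func addVisualizerTap(to outputUnit: AudioUnit) {
    let status = AudioUnitAddRenderNotify(outputUnit, renderNotify, nil)
    assert(status == noErr)
}
```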

Midi Timing Issues with Delphi ASIO VST and MiniHost

I'm coming from a background of using MSC* MidiSequencer for a Delphi XE2 project and have been playing with DelphiASIOVST this weekend on the off chance the MIDI may be stable enough to use as my core MIDI engine while also allowing me to support VST plug ins. I pulled the D16 trunk off the SVN and compiled effortlessly after a few path tweaks.
I understand a great deal of what I'm seeing, but I'm wondering if others have experienced issues with MIDI file playback in the MiniHost example application. Specifically, with a one-track melodic performance it sounds like notes are getting skipped and/or played back a bit late over other notes that are playing as they should. Basically it's hit or miss whether a note is played at all.
I have numerous pro sequencers on my machine and the MIDI files are fine there. They also support VST with little to no problem. I also know the MIDI file format at the lowest level and know the file structure is sound.
Can the TMidiFile play directly to the standard MIDI synth in the computer? I'm trying to rule out VST issues by getting a direct pipeline to the built-in synth. Barring that, has anyone seen these issues, or does anyone know of more/better examples of MIDI-file-to-VST playback using the component set?
I use FL Studio with my MIDI gear, and the odds are you need to turn down your buffer length so that there is little to no delay.
By default it is probably set to a mid-to-high range, which means you will almost certainly have a 1-1.5 second delay.
Don't turn it down too low, otherwise you'll get a trash-can sound where everything sounds hollow and robotic; keep playing the keys while you're adjusting the setting.
Is the word clock functioning properly? Do you have the ability to drive off another MIDI clock source, just to test with?
Though you said "I have numerous pro sequencers on my machine and the MIDI files are fine there", you could also try the Reaper DAW (http://www.reaper.fm, works on Linux/BSD, Mac and Windows), import the MIDI directly into it, and set the default MIDI device to the one you wish to test with.
Check the Midi Overflow settings.
Ensure each of your Midi Devices has a unique ID.
Get a MIDI throughput app like MIDI-OX (http://www.midiox.com/) to see real-time messages and data, and see where things are going.
Check your MIDI workflow against your requirements. Set unique IDs on all the devices specified in your MIDI overflow settings. A MIDI throughput application is needed to see the real-time messages and data, so you can see where things are going and for what purpose.
Hope this helps.

Skip between multiple files while playing audio in iPhone iOS

For a project I need to handle audio in an iPhone app in a fairly special way and hope somebody can point me in the right direction.
Let's say you have a fixed set of up to thirty audio files of the same length (2-3 sec, non-compressed). While a cue is playing from one audio file, it should be possible to update parameters that make playback continue from another audio file at the same timestamp where the previous audio file stopped. If the audio files are different, heavily filtered versions of the same material, it should be possible to "slide" between them and get the impression that the filter was applied directly. The filtering is currently not achievable in real time on an iPhone, hence the pre-rendered files.
If A, B, and C are different audio files, I would like to be able to:
Play A without interruption:
Start AAAAAAAAAAAAA Stop
Or start play A and continue over in B and then C, initiated while playing
Start AAABBBBBBBBCC Stop
Ideally it should be possible to play two or more cues at the same time. Latency is not that important, but skipping between files should ideally not produce clicks or delays.
I have looked into using Audio Queue Services (which looks like hell to dive into) and sniffed at OpenAL. Could anyone give me a rough overview and a general direction I can spend the next days buried in?
Try using the iOS Audio Unit API, particularly a mixer unit connected to RemoteIO for audio output.
I managed to do this by using FMOD Designer. FMOD (http://www.fmod.org/) is a sound design framework for game development that supports iOS development. I made a multitrack event in FMOD Designer with a different layer for each sound clip, and added a parameter on the horizontal bar that lets you control which sound clip to play in real time. The trick is to let each sound clip continue over the whole bar and control which sound is being heard by using a volume effect (0-100%), as in the attached picture. That way you are ensured that skipping between files follows the same timecode. I have tried this successfully with up to thirty layers, but experienced some double playing. This seemed to disappear if I cut the number down to fifteen.
It should be possible to use the iOS Audio Unit API if you are comfortable with it, but for those of us who like the simplest solution, FMOD is quite good :) Thanks to Ellen S for the solution tip!
Screenshot of the multitrack-event in FMOD Designer:
https://plus.google.com/photos/106278910734599034045/albums/5723469198734595793?authkey=CNSIkbyYw8PM2wE
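For comparison, here is a rough AVAudioEngine sketch of the same idea, which is a different route from FMOD and from the Audio Queue / OpenAL options mentioned above: every layer is scheduled and started together, and "skipping" is just a volume change, so all layers stay on the same timecode. File names are placeholders and error handling is skipped:

```swift
import AVFoundation

let engine = AVAudioEngine()

// Pre-rendered layers of the same 2-3 second clip; names are placeholders.
let files = try! ["A", "B", "C"].map { name in
    try AVAudioFile(forReading: Bundle.main.url(forResource: name, withExtension: "caf")!)
}
let players = files.map { _ in AVAudioPlayerNode() }

for (player, file) in zip(players, files) {
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
}
try! engine.start()

// Schedule every layer from the same point, mute them all, and start them together.
for (player, file) in zip(players, files) {
    player.scheduleFile(file, at: nil)
    player.volume = 0
    player.play()   // for sample-accurate sync, pass one shared AVAudioTime to play(at:)
}
players[0].volume = 1   // layer A is audible

// Later, "slide" from A to B without losing the playback position:
players[0].volume = 0
players[1].volume = 1
```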

iOS: Sample code for simultaneous record and playback

I'm designing a simple proof of concept for multitrack recorder.
The obvious starting point is to play from file A.caf to headphones while simultaneously recording microphone input into file B.caf.
This question -- Record and play audio Simultaneously -- points out that there are three levels at which I can work:
AVFoundation API (AVAudioPlayer + AVAudioRecorder)
Audio Queue API
Audio Unit API (RemoteIO)
What is the best level to work at? Obviously the generic answer is to work at the highest level that gets the job done, which would be AVFoundation.
But I'm taking this job on from someone who gave up due to latency issues (he was getting a 0.3sec delay between the files), so maybe I need to work at a lower level to avoid these issues?
Furthermore, what source code is available to springboard from? I have been looking at the SpeakHere sample ( http://developer.apple.com/library/ios/#samplecode/SpeakHere/Introduction/Intro.html ). If I can't find something simpler, I will use this.
But can anyone suggest something simpler/else? I would rather not work with C++ code if I can avoid it.
Is anyone aware of some public code that uses AVFoundation to do this?
EDIT: AVFoundation example here: http://www.iphoneam.com/blog/index.php?title=using-the-iphone-to-record-audio-a-guide&more=1&c=1&tb=1&pb=1
EDIT(2): Much nicer looking one here: http://www.switchonthecode.com/tutorials/create-a-basic-iphone-audio-player-with-av-foundation-framework
EDIT(3): How do I record audio on iPhone with AVAudioRecorder?
To avoid latency issues, you will have to work at a lower level than AVFoundation, all right. Check out this sample code from Apple - aurioTouch. It uses Remote I/O.
As suggested by Viraj, here is the answer.
Yes, you can achieve very good results using AVFoundation. Firstly you need to pay attention to the fact that for both the player and the recorder, activating them is a two step process.
First you prime it.
Then you play it.
So, prime everything. Then play everything.
This will get your latency down to about 70ms. I tested by recording a metronome tick, then playing it back through the speakers while holding the iPhone up to the speakers and simultaneously recording.
The second recording had a clear echo, which I found to be ~70ms. I could have analysed the signal in Audacity to get an exact offset.
So in order to line everything up I just call performSelector:x withObject:y afterDelay:70.0/1000.0.
There may be hidden snags; for example, the delay may differ from device to device. It may even differ depending on device activity. It is even possible the thread could get interrupted/rescheduled between starting the player and starting the recorder.
But it works, and is a lot tidier than messing around with audio queues / units.
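A minimal sketch of that prime-then-start pattern; the URLs are placeholders and the audio session setup is omitted:

```swift
import AVFoundation

func primedPlayAndRecord(playbackURL: URL, recordingURL: URL) throws {
    let player = try AVAudioPlayer(contentsOf: playbackURL)
    let settings: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatLinearPCM),
        AVSampleRateKey: 44_100.0,
        AVNumberOfChannelsKey: 1
    ]
    let recorder = try AVAudioRecorder(url: recordingURL, settings: settings)

    // Prime everything first (allocates buffers and readies the hardware)...
    player.prepareToPlay()
    recorder.prepareToRecord()

    // ...then start everything back to back, so the gap between the two starts stays small.
    player.play()
    recorder.record()
}
```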
I had this problem and I solved it in my project simply by changing the PreferredHardwareIOBufferDuration parameter of the AudioSession. I think I have just 6 ms of latency now, which is good enough for my app.
Check this answer that has a good explanation.
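For reference, the modern AVAudioSession equivalent of that setting looks roughly like this; the hardware is not guaranteed to honor the exact value you request:

```swift
import AVFoundation

func configureLowLatencySession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setPreferredIOBufferDuration(0.005)   // ask for ~5 ms buffers
    try session.setActive(true)
    print("actual IO buffer duration: \(session.ioBufferDuration) s")
}
```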
