AKMIDICallbackInstrument handling more than 16 channels - audiokit

I've been using AKMIDICallbackInstrument for connecting a single Audio Unit to Apple's sequencer. This has worked fine in the past, but I now want to extend this into a multi-timbral context. Since the AKMIDICallback only allows for passing (status, note, velocity), I'm not sure how to handle > 16 parts/tracks. I understand that the callback instrument is connected to a track using the endpoint, so that the passed events will only be those from the connected track, but how can I route those events to a specific Audio Unit, other than by MIDI channel? (With Apple's sequencer I could, in principle, have hundreds of tracks, all sending on MIDI channel 1...)

Ugh... okay, I had tried getting around the limitations of the closure by overriding receivedMIDINoteOn(noteNumber:velocity:channel:portID:offset:), but I suppose that's only for receiving raw MIDI bytes from hardware.
The solution is to override play(noteNumber:velocity:channel:). Since this is an instance method (not a closure) you can reference self to handle the events coming through as required (i.e., "self" can point to the desired Audio Unit). I'm actually not using AKMIDICallbackInstrument anymore, just subclassing AKMIDIInstrument... Seems better suited to my purposes.
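The routing idea is easy to sketch. Below, the AKMIDIInstrument superclass and AudioKit types are stubbed out so the pattern stands alone, and `ToneGenerator` is a made-up stand-in for whatever Audio Unit wrapper each track drives; only the "self points at the destination, so the channel doesn't matter" idea is from the answer above:

```swift
import Foundation

// Stand-in for the Audio Unit wrapper each track should drive.
// In a real project this would be your AudioKit node / AUAudioUnit
// wrapper; the name `ToneGenerator` is invented for this sketch.
final class ToneGenerator {
    private(set) var playing: Set<UInt8> = []
    func noteOn(_ note: UInt8, velocity: UInt8) { playing.insert(note) }
    func noteOff(_ note: UInt8) { playing.remove(note) }
}

// Sketch of the AKMIDIInstrument-subclass idea: because `play` is an
// instance method (not a closure), `self` can hold a reference to the
// destination Audio Unit, so routing no longer depends on the MIDI
// channel at all. (The AKMIDIInstrument superclass is stubbed out here.)
class TrackInstrument {
    let target: ToneGenerator   // one instrument instance per sequencer track

    init(target: ToneGenerator) { self.target = target }

    // Mirrors AKMIDIInstrument's play(noteNumber:velocity:channel:) override point.
    func play(noteNumber: UInt8, velocity: UInt8, channel: UInt8) {
        if velocity > 0 {
            target.noteOn(noteNumber, velocity: velocity)
        } else {
            target.noteOff(noteNumber)   // velocity 0 conventionally means note-off
        }
    }
}
```

With one `TrackInstrument` per track, hundreds of tracks can all send on channel 1 and still reach distinct Audio Units.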

Related

How can I play a specific channel in a CoreMidi track?

I'm able to mute/unmute MIDI tracks easily with MusicTrackSetProperty(track, kSequenceTrackProperty_MuteStatus, ...). But I haven't wrapped my wits around how to enable/disable specific MIDI channels within a track. Is there a mute/unmute or disable/enable property for channels within a track?
Would something like this be done at the track level, or should I be manipulating the MIDI synth audio unit in some fashion?
Creating an endpoint does me no good, because I only get a copy of events sent to the synth, not a callback that I can see for filtering what's going to the synth. So, I'm thinking there's probably something that can be tweaked in the audio unit graph, but what exactly?
Someone might suggest opening the MIDI file with the kMusicSequenceLoadSMF_ChannelsToTracks flag and then simply unmuting the track corresponding to the channel and muting the rest. I tried that, but I actually get /fewer/ tracks when opening the MIDI file that way than without the flag. Odd. Maybe I should understand why that's the case, huh? Here's what I have: a MIDI file with 16 tracks, each containing 6 channels of MIDI. Without kMusicSequenceLoadSMF_ChannelsToTracks I get 16 tracks; with the flag, 12. Shouldn't it be 16*6 tracks?
Thank you for your help. Best to you. /Jay
You're on the right track. To my knowledge, kMusicSequenceLoadSMF_ChannelsToTracks coalesces common channels. Say track1 has notes on channels 1, 2, and 3, and track2 has notes on channels 3, 4, and 5. Loading with the flag will coalesce the channel-3 notes from track1 and track2 into a single new track, so you end up with 5 tracks total. That's probably the way to go unless you can prove otherwise.
Otherwise, if you really need to pick things apart, the endpoint is a valid approach. You just need to send the MIDI events manually instead of making a connection (pointing a track at a synth). In your callback you parse the MIDI and call MusicDeviceMIDIEvent to trigger the synth directly, and you can do your filtering there.
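The filtering step in the endpoint approach amounts to inspecting the status byte's low nibble. A minimal sketch, with a closure standing in for the call you would actually make to MusicDeviceMIDIEvent (the function name `forward` is invented here):

```swift
import Foundation

// Sketch of the endpoint-callback filtering idea: parse each incoming
// message, keep only the channel you want, and hand the rest to the synth.
// `send` stands in for the MusicDeviceMIDIEvent call.
func forward(status: UInt8, data1: UInt8, data2: UInt8,
             keeping channel: UInt8,
             to send: (UInt8, UInt8, UInt8) -> Void) {
    // For channel-voice messages (0x80...0xEF) the low nibble is the channel.
    guard status >= 0x80 && status < 0xF0 else {
        send(status, data1, data2)          // system messages pass through
        return
    }
    if status & 0x0F == channel {
        send(status, data1, data2)
    }
    // Messages on other channels are dropped, which effectively mutes them.
}
```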

Playing back a WAV file streamed gradually over a network connection in iOS

I'm working with a third party API that behaves as follows:
I have to connect to its URL and make my request, which involves POSTing request data;
the remote server then sends back, "chunk" at a time, the corresponding WAV data (which I receive in my NSURLConnectionDataDelegate's didReceiveData callback).
By "chunk" for argument's sake, we mean some arbitrary "next portion" of the data, with no guarantee that it corresponds to any meaningful division of the audio (e.g. it may not be aligned to a specific multiple of audio frames, the number of bytes in each chunk is just some arbitrary number that can be different for each chunk, etc).
Now, correct me if I'm wrong, but I can't simply use an AVAudioPlayer, because I need to POST to my URL, so I need to pull back the data "manually" via an NSURLConnection.
So... given the above, what is then the most painless way for me to play back that audio as it comes down the wire? (I appreciate that I could concatenate all the arrays of bytes and then pass the whole thing to an AVAudioPlayer at the end-- only that this will delay the start of playback as I have to wait for all the data.)
I will give a bird's eye view of the solution. I think this will help you a great deal in getting to a concrete, coded solution.
iOS provides a zoo of audio APIs, and several of them can be used to play audio. Which one you choose depends on your particular requirements. As you wrote already, the AVAudioPlayer class is not suitable for your case, because with that one you need to have all the audio data at the moment you start playing. Obviously, this is not the case for streaming, so we have to look for an alternative.
A good tradeoff between ease of use and versatility is Audio Queue Services, which I recommend for you. Another alternative would be Audio Units, but that is a low-level C API, therefore less intuitive to use, and it has many pitfalls. So stick to Audio Queues.
Audio Queues allow you to define callback functions which are called from the API when it needs more audio data for playback - similarly to the callback of your network code, which gets called when there is data available.
Now the difficulty is how to connect two callbacks, one which supplies data and one which requests data. For this, you have to use a buffer. More specifically, a queue (don't confuse this queue with the Audio Queue stuff. Audio Queue Services is the name of an API. On the other hand, the queue I'm talking about next is a container object). For clarity, I will call this one buffer-queue.
To fill data into the buffer-queue you will use the network callback function, which supplies data to you from the network. And data will be taken out of the buffer-queue by the audio callback function, which is called by the Audio Queue Services when it needs more data.
You have to find a buffer-queue implementation which supports concurrent access (i.e., is thread-safe), because it will be accessed from two different threads: the audio thread and the network thread.
Alternatively to finding an already thread safe buffer-queue implementation, you can take care of the thread safety on your own, e.g. by executing all code dealing with the buffer-queue on a certain dispatch queue (3rd kind of queue here; yes, Apple and IT love them).
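A minimal sketch of that dispatch-queue approach (the class name `ChunkQueue` is invented for this example):

```swift
import Foundation
import Dispatch

// Minimal sketch of the "buffer-queue": the network callback appends
// chunks, the audio callback pulls bytes back out. A serial dispatch
// queue guards the storage so the two threads never touch it concurrently.
final class ChunkQueue {
    private var storage = Data()
    private let lockQueue = DispatchQueue(label: "chunk-queue")

    // Called from the network thread (e.g. didReceiveData).
    func enqueue(_ chunk: Data) {
        lockQueue.sync { storage.append(chunk) }
    }

    // Called from the audio thread; returns up to `count` bytes,
    // fewer if less data has arrived.
    func dequeue(upTo count: Int) -> Data {
        return lockQueue.sync {
            let n = min(count, storage.count)
            let out = storage.prefix(n)
            storage.removeFirst(n)
            return Data(out)
        }
    }

    var byteCount: Int { lockQueue.sync { storage.count } }
}
```

One caveat: strictly real-time audio code should avoid blocking on locks or dispatch queues at all (a lock-free ring buffer is the usual answer), but for an Audio Queue Services callback this simple scheme is generally workable.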
Now, what happens if either
The audio callback is called and your buffer-queue is empty, or
The network callback is called and your buffer-queue is already full?
In both cases, the respective callback function can't proceed normally. The audio callback function can't supply audio data if there is none available and the network callback function can't store incoming data if the buffer-queue is full.
In these cases, I would first try out blocking further execution until more data is available or respectively space is available to store data. On the network side, this will most likely work. On the audio side, this might cause problems. If it causes problems on the audio side, you have an easy solution: if you have no data, simply supply silence as data. That means that you need to supply zero-frames to the Audio Queue Services, which it will play as silence to fill the gap until more data is available from the network.
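The zero-fill fallback is simple enough to sketch. Here `fillBuffer` is a made-up helper showing only the padding logic, not the real AudioQueueBuffer plumbing, and it assumes linear PCM, where zero bytes decode to silence:

```swift
import Foundation

// Underrun handling: fill the audio buffer from whatever bytes have
// arrived, and pad the remainder with zeros (silence for LPCM), so
// playback never starves even if the network lags.
func fillBuffer(capacity: Int, available: Data) -> Data {
    var out = available.prefix(capacity)
    if out.count < capacity {
        out.append(Data(count: capacity - out.count))   // zero frames = silence
    }
    return Data(out)
}
```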
This is the concept that all streaming players use when suddenly the audio stops and it tells you "buffering" next to some kind of spinning icon indicating that you have to wait and nobody knows for how long.

Keeping two AVPlayers in sync

I have a client with a very specific request that requires two AVPlayers to be in sync. One video is for some content and the other is for a presenter speaking about the content. Using an AVMutableComposition to combine them into one video is not an option, because the presenter video has to be able to respond to user-generated events (e.g. they want a feature to show/hide the presenter), and I don't believe there is a way to get that kind of control over a specific AVMutableCompositionTrack.
So, I'm left with figuring out how to ensure that two AVPlayers stay in sync and I was wondering if anyone has had experience with this or suggestions for other tools to accomplish this.
Thanks
The following methods are the ones to use:
- (void)setRate:(float)rate
time:(CMTime)itemTime
atHostTime:(CMTime)hostClockTime;
- (void)prerollAtRate:(float)rate
completionHandler:(void (^)(BOOL finished))completionHandler;
Caveats
Important: This method is not currently supported for HTTP Live
Streaming or when automaticallyWaitsToMinimizeStalling is YES. For
clients linked against iOS 10.0 and later or macOS 10.12 and later,
invoking this method when automaticallyWaitsToMinimizeStalling is YES
will raise an NSInvalidArgumentException.
This is expected behavior: a live stream is the "present", so you cannot seek forward in it, and setting the rate to less than 1.0 would force extra buffering of the stream (the second point is a guess).
Documentation
https://developer.apple.com/documentation/avfoundation/avplayer/1386591-setrate?language=objc
https://developer.apple.com/documentation/avfoundation/avplayer/1389712-prerollatrate?language=objc
As a side note, consider that HLS streams are not truly live: the "present moment" can vary by several seconds among clients consuming the stream, unlike WebRTC, for example, where the delay between publishers and consumers is more or less guaranteed to stay within about a second.
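Putting the two methods together: preroll both players, then anchor them to the same host-clock time. This is a hedged sketch for non-HLS assets; the function name `startInSync` and the 100 ms lead time are choices made for this example, not anything from the docs:

```swift
import AVFoundation

// Sketch: preroll both players, then start them at the same host time.
func startInSync(_ a: AVPlayer, _ b: AVPlayer, rate: Float = 1.0) {
    // setRate(_:time:atHostTime:) raises if automatic stalling logic is on.
    a.automaticallyWaitsToMinimizeStalling = false
    b.automaticallyWaitsToMinimizeStalling = false

    let ready = DispatchGroup()
    ready.enter(); a.preroll(atRate: rate) { _ in ready.leave() }
    ready.enter(); b.preroll(atRate: rate) { _ in ready.leave() }

    ready.notify(queue: .main) {
        // Anchor both players to the same moment, slightly in the future,
        // so each has time to arm itself before the deadline.
        let anchor = CMTimeAdd(CMClockGetTime(CMClockGetHostTimeClock()),
                               CMTime(seconds: 0.1, preferredTimescale: 1_000_000_000))
        a.setRate(rate, time: .zero, atHostTime: anchor)
        b.setRate(rate, time: .zero, atHostTime: anchor)
    }
}
```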

Redirection playback output of avplayer item

What I want to do is take the output samples of an AVAsset corresponding to an audio file (no video involved) and send them to an audio effect class that takes in a block of samples, and I want to be able to do this in real time.
I am currently looking at the AVFoundation class reference and programming guide, but I can't see a way to redirect the output of a player item and send it to my effect class, and from there send the transformed samples to an audio output (using AVAssetReaderAudioMixOutput?) and hear it from there. I see that the AVAssetReader class gives me a way to get a block of samples using
[myAVAssetReader addOutput:myAVAssetReaderTrackOutput];
[myAVAssetReaderTrackOutput copyNextSampleBuffer];
but Apple's documentation specifies that the AVAssetReader class is not intended for, and should not be used in, real-time situations. Does anybody have a suggestion on where to look, or whether I'm taking the right approach?
The MTAudioProcessingTap is perfect for this. By leveraging an AVPlayer, you can avoid having to pull the samples yourself with an AVAssetReaderOutput and then render them yourself in an Audio Queue or with an Audio Unit.
Instead, attach an MTAudioProcessingTap to the input parameters of your player item's audioMix, and you'll be given samples in blocks which are easy to throw into an effect unit.
Another benefit from this is that it will work with AVAssets derived from URLs that can't always be opened by other Apple APIs (like Audio File Services), such as the user's iPod library. Additionally, you get all of the functionality like tolerance of audio interruptions that the AVPlayer provides for free, which you would otherwise have to implement by hand if you went with an AVAssetReader solution.
To set up a tap you have to set up some callbacks that the system invokes as appropriate during playback. Full code for such processing can be found at this tutorial here.
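As a rough sketch of what the tap's process callback boils down to once MTAudioProcessingTapGetSourceAudio has handed over the source samples: you receive a block of Float32 frames and transform them in place. The real callbacks are registered via MTAudioProcessingTapCreate; here only the per-buffer effect is shown, with a plain gain standing in for your effect class, and the function name is invented:

```swift
import Foundation

// The per-buffer work a tap's process callback performs after fetching
// the source audio: transform the Float32 samples in place.
func processTapBuffer(_ samples: inout [Float], gain: Float) {
    for i in samples.indices {
        samples[i] *= gain
    }
}
```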
There's a new MTAudioProcessingTap object in iOS 6 and OS X 10.8. Check out the Session 517 video from WWDC 2012 - they demonstrated exactly what you want to do.
WWDC Link
AVAssetReader is not ideal for real-time usage because it handles the decoding for you, and in various cases copyNextSampleBuffer can block for unpredictable amounts of time.
That being said, AVAssetReader can be used wonderfully well in a producer thread feeding a circular buffer. It depends on your required usage, but I've had good success using this method to feed a RemoteIO output, and doing my effects/signal processing in the RemoteIO callback.
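A minimal single-threaded sketch of such a circular buffer follows. The names and the Float sample type are illustrative, and a real lock-free single-producer/single-consumer version needs the atomics and memory-ordering care that this sketch omits:

```swift
import Foundation

// Producer/consumer arrangement: the AVAssetReader thread writes decoded
// samples into a fixed-size circular buffer, and the RemoteIO render
// callback reads them back out.
struct RingBuffer {
    private var storage: [Float]
    private var readIndex = 0
    private var writeIndex = 0
    private var count = 0

    init(capacity: Int) { storage = [Float](repeating: 0, count: capacity) }

    // Producer side (AVAssetReader thread). Returns false when full.
    mutating func write(_ sample: Float) -> Bool {
        guard count < storage.count else { return false }
        storage[writeIndex] = sample
        writeIndex = (writeIndex + 1) % storage.count
        count += 1
        return true
    }

    // Consumer side (render callback). Returns silence on underrun.
    mutating func read() -> Float {
        guard count > 0 else { return 0 }
        let sample = storage[readIndex]
        readIndex = (readIndex + 1) % storage.count
        count -= 1
        return sample
    }
}
```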

iOS: Modify mic data stream with audio unit?

Could someone explain in terms of Audio Unit connections how to modify the iPhone microphone data stream visible to other processes with gain or EQ? I understand how to use a remote I/O unit to grab mic data and do my processing. I want this new data to replace the original mic data stream, not go to speakers or a file. "Audio Unit Hosting Fundamentals" Figure 1-3 is close.
I have read everything out there on Audio Units and used several of the online examples (Tim B, Play It Loud, Tasty Pixel) but don't see how to do this yet.
Any help?
Thanks
This doesn't seem to be clearly explained or illustrated in the documentation. However, if you look at the aurioTouch sample code, you will see how, within the remote I/O render callback, it makes a call to retrieve data from the microphone, then optionally processes this data and returns it.
This is doubly useful, because that call to retrieve microphone data returns already-created buffers. That means you don't have to create your own buffers, which is great, because that is a bit of a hassle.
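The in-place processing step inside the render callback is easy to sketch. After AudioUnitRender has filled the buffers with mic samples, you transform them before returning; here a one-pole low-pass filter stands in for "gain or EQ", and the function name and `alpha` parameter are invented for this example:

```swift
import Foundation

// In-place processing of a mic buffer, as done inside a render callback
// after AudioUnitRender. A one-pole low-pass: state += alpha * (x - state).
// `alpha` in (0, 1] controls the cutoff (1 = passthrough); `state` carries
// the filter memory across successive buffers.
func lowPassInPlace(_ samples: inout [Float], alpha: Float, state: inout Float) {
    for i in samples.indices {
        state += alpha * (samples[i] - state)
        samples[i] = state
    }
}
```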
