Getting averagePowerForChannel in AVPlayer - iOS

How can I get the averagePowerForChannel in AVPlayer in order to build an audio visualization in my music app?
I've already done the visualization part, but I'm stuck on its engine (real-time volume per channel).
I know that with AVAudioPlayer this can be done easily using the meteringEnabled property, but for reasons specific to my app, AVPlayer is a must.
I'm actually thinking of using AVAudioPlayer alongside AVPlayer to get the desired result, but that sounds like a messy workaround.
How would that affect performance and stability?
Thanks in advance.
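For context, this is roughly the AVAudioPlayer metering approach I mean (a minimal sketch; the local file URL and the polling timer are placeholders, and this only works for content AVAudioPlayer can load directly, not for AVPlayer items):
#import <AVFoundation/AVFoundation.h>

// localFileURL is a placeholder for a file AVAudioPlayer can open.
AVAudioPlayer *meterPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:localFileURL error:nil];
meterPlayer.meteringEnabled = YES;
[meterPlayer play];

// Called periodically from a CADisplayLink or NSTimer that drives the visualization:
[meterPlayer updateMeters];
float db = [meterPlayer averagePowerForChannel:0]; // roughly -160 dB (silence) up to 0 dB (full scale)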

I've been dealing with AVPlayer visualization for about two years. In my case it involves HLS live streaming, and in that case, as far as I know, you won't get it working.
EDIT: This will not give you access to the averagePowerForChannel: method, but you will get access to the raw audio data, from which you can derive the desired information (for example with an FFT).
I did get it working with local playback, though. You basically wait for the player's current item to have a track up and running. At that point you patch an MTAudioProcessingTap into the audio mix.
The processing tap runs callbacks you specify, in which you can process the raw audio data as you need.
Here is a quick example (sorry for having it in Objective-C, though):
#import <AVFoundation/AVFoundation.h>
#import <MediaToolbox/MediaToolbox.h>

void init(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut) {}
void finalize(MTAudioProcessingTapRef tap) {}
void prepare(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *processingFormat) {}
void unprepare(MTAudioProcessingTapRef tap) {}
void process(MTAudioProcessingTapRef tap, CMItemCount numberFrames, MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut, CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut) {}
- (void)play {
    // player and item setup ...

    [[[self player] currentItem] addObserver:self
                                  forKeyPath:@"tracks"
                                     options:kNilOptions
                                     context:NULL];
}
//////////////////////////////////////////////////////
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if ([keyPath isEqualToString:@"tracks"] && [[object tracks] count] > 0) {
        for (AVPlayerItemTrack *itemTrack in [object tracks]) {
            AVAssetTrack *track = [itemTrack assetTrack];

            if ([[track mediaType] isEqualToString:AVMediaTypeAudio]) {
                [self addAudioProcessingTap:track];
                break;
            }
        }
    }
}
- (void)addAudioProcessingTap:(AVAssetTrack *)track {
    MTAudioProcessingTapRef tap;
    MTAudioProcessingTapCallbacks callbacks;
    callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
    callbacks.clientInfo = (__bridge void *)(self);
    callbacks.init = init;
    callbacks.prepare = prepare;
    callbacks.process = process;
    callbacks.unprepare = unprepare;
    callbacks.finalize = finalize;

    OSStatus err = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks, kMTAudioProcessingTapCreationFlag_PostEffects, &tap);

    if (err) {
        NSLog(@"error: %@", [NSError errorWithDomain:NSOSStatusErrorDomain code:err userInfo:nil]);
        return;
    }

    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    AVMutableAudioMixInputParameters *inputParams = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:track];
    [inputParams setAudioTapProcessor:tap];
    [audioMix setInputParameters:@[inputParams]];
    [[[self player] currentItem] setAudioMix:audioMix];
}
There is some discussion going on over on my question from two years ago, so make sure to check that out as well.
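If what you are after is the equivalent of averagePowerForChannel:, the process callback above is the place to compute it. The sketch below is one way to do that, assuming the tap hands you non-interleaved 32-bit float buffers (verify the AudioStreamBasicDescription passed to prepare before relying on this); the RMS-to-dB conversion is an illustration, not part of the original answer.
#include <math.h>
#include <float.h>

void process(MTAudioProcessingTapRef tap, CMItemCount numberFrames, MTAudioProcessingTapFlags flags,
             AudioBufferList *bufferListInOut, CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut) {
    // Pull the source audio into bufferListInOut.
    OSStatus err = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                      flagsOut, NULL, numberFramesOut);
    if (err) return;

    // Assumption: one non-interleaved float buffer per channel.
    for (UInt32 channel = 0; channel < bufferListInOut->mNumberBuffers; channel++) {
        AudioBuffer buffer = bufferListInOut->mBuffers[channel];
        float *samples = (float *)buffer.mData;
        UInt32 sampleCount = buffer.mDataByteSize / sizeof(float);

        float sumOfSquares = 0.0f;
        for (UInt32 i = 0; i < sampleCount; i++) {
            sumOfSquares += samples[i] * samples[i];
        }
        float rms = (sampleCount > 0) ? sqrtf(sumOfSquares / sampleCount) : 0.0f;
        float averagePowerDB = 20.0f * log10f(rms + FLT_EPSILON); // comparable to averagePowerForChannel:

        // Forward averagePowerDB for this channel to the object stashed in tapStorage (see init),
        // dispatching to the main queue before updating any visualization views.
    }
}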

You will need an audio processor class in combination with AV Foundation to visualize audio samples as well as to apply a Core Audio audio unit effect (a bandpass filter) to the audio data. You can find a sample by Apple here.
Essentially you will need to add an observer to your AVPlayer like the one below:
// Notifications
let playerItem: AVPlayerItem! = videoPlayer.currentItem
playerItem.addObserver(self, forKeyPath: "tracks", options: NSKeyValueObservingOptions.New, context: nil)

NSNotificationCenter.defaultCenter().addObserverForName(AVPlayerItemDidPlayToEndTimeNotification, object: videoPlayer.currentItem, queue: NSOperationQueue.mainQueue(), usingBlock: { (notif: NSNotification) -> Void in
    self.videoPlayer.seekToTime(kCMTimeZero)
    self.videoPlayer.play()
    print("replay")
})
Then handle the notification in the overridden method below:
override func observeValueForKeyPath(keyPath: String?, ofObject object: AnyObject?, change: [String : AnyObject]?, context: UnsafeMutablePointer<Void>) {
    if (object === videoPlayer.currentItem && keyPath == "tracks") {
        if let playerItem: AVPlayerItem = videoPlayer.currentItem {
            if let tracks = playerItem.asset.tracks as? [AVAssetTrack] {
                tapProcessor = MYAudioTapProcessor(AVPlayerItem: playerItem)
                playerItem.audioMix = tapProcessor.audioMix
                tapProcessor.delegate = self
            }
        }
    }
}
Here's a link to a sample project on GitHub
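Once the tap processor's delegate is set, the sample's processor reports channel levels back to you, and that is where the visualization gets driven. The exact delegate protocol and method name depend on the version of Apple's sample you are using, so treat the signature below as illustrative only; it is sketched in Objective-C to match the rest of this thread, and the meter views are assumed helpers.
// Hypothetical delegate callback; check MYAudioTapProcessor.h in the sample
// project for the exact protocol name and method signature.
- (void)audioTapProcessor:(MYAudioTapProcessor *)audioTapProcessor
   hasNewLeftChannelValue:(float)leftChannelValue
        rightChannelValue:(float)rightChannelValue {
    // Values arrive on the processing thread; hop to the main queue before touching UIKit.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.leftLevelMeterView setLevel:leftChannelValue];   // assumed custom meter views
        [self.rightLevelMeterView setLevel:rightChannelValue];
    });
}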

Related

Extract/Record Audio from HLS stream (video) while playing iOS

I am playing HLS streams using AVPlayer, and I also need to record these streams as the user presses a record button.
The approach I am using is to record the audio and video separately, then merge these files at the end to produce the final video. This works with remote mp4 files.
But for HLS (.m3u8) files I am able to record the video using AVAssetWriter, but I am having problems with the audio recording.
I am using MTAudioProcessingTap to process the raw audio data and write it to a file. I followed this article. I am able to record remote mp4 audio, but it's not working with HLS streams.
Initially I wasn't able to extract the audio tracks from the stream using AVAssetTrack *audioTrack = [asset tracksWithMediaType:AVMediaTypeAudio][0];
But I was able to extract the audio tracks using KVO and use them to initialize the MTAudioProcessingTap.
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    AVPlayer *player = (AVPlayer *)object;

    if (player.status == AVPlayerStatusReadyToPlay) {
        NSLog(@"Ready to play");
        self.previousAudioTrackID = 0;

        __weak typeof (self) weakself = self;
        timeObserverForTrack = [player addPeriodicTimeObserverForInterval:CMTimeMakeWithSeconds(1, 100) queue:nil usingBlock:^(CMTime time) {
            @try {
                for (AVPlayerItemTrack *track in [weakself.avPlayer.currentItem tracks]) {
                    if ([track.assetTrack.mediaType isEqualToString:AVMediaTypeAudio])
                        weakself.currentAudioPlayerItemTrack = track;
                }

                AVAssetTrack *audioAssetTrack = weakself.currentAudioPlayerItemTrack.assetTrack;
                weakself.currentAudioTrackID = audioAssetTrack.trackID;

                if (weakself.previousAudioTrackID != weakself.currentAudioTrackID) {
                    NSLog(@":::::::::::::::::::::::::: Audio track changed : %d", weakself.currentAudioTrackID);
                    weakself.previousAudioTrackID = weakself.currentAudioTrackID;
                    weakself.audioTrack = audioAssetTrack;
                    /// Use this audio track to initialize the MTAudioProcessingTap
                }
            }
            @catch (NSException *exception) {
                NSLog(@"Exception Trap ::::: Audio tracks not found!");
            }
        }];
    }
}
I am also keeping track of the trackID to check whether the track has changed.
This is how I initialize the MTAudioProcessingTap:
- (void)beginRecordingAudioFromTrack:(AVAssetTrack *)audioTrack {
    // Configure an MTAudioProcessingTap to handle things.
    MTAudioProcessingTapRef tap;
    MTAudioProcessingTapCallbacks callbacks;
    callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
    callbacks.clientInfo = (__bridge void *)(self);
    callbacks.init = init;
    callbacks.prepare = prepare;
    callbacks.process = process;
    callbacks.unprepare = unprepare;
    callbacks.finalize = finalize;

    OSStatus err = MTAudioProcessingTapCreate(
        kCFAllocatorDefault,
        &callbacks,
        kMTAudioProcessingTapCreationFlag_PostEffects,
        &tap
    );

    if (err) {
        NSLog(@"Unable to create the Audio Processing Tap %d", (int)err);
        NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain
                                             code:err
                                         userInfo:nil];
        NSLog(@"Error: %@", [error description]);
        return;
    }

    // Create an AudioMix and assign it to our currently playing "item", which
    // is just the stream itself.
    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    AVMutableAudioMixInputParameters *inputParams = [AVMutableAudioMixInputParameters
                                                     audioMixInputParametersWithTrack:audioTrack];
    inputParams.audioTapProcessor = tap;
    audioMix.inputParameters = @[inputParams];
    _audioPlayer.currentItem.audioMix = audioMix;
}
But now, with this audio track, the MTAudioProcessingTap callbacks "prepare" and "process" are never called.
Is the problem with the audioTrack I am getting through KVO?
I would really appreciate it if someone could help me with this, or tell me whether I am using the right approach to record HLS streams.
I found a solution for this and am using it in my app. I wanted to post it earlier but didn't get the time.
To work with HLS you should have some knowledge of what it actually is. For that, please see the documentation on Apple's website:
HLS Apple
Here are the steps I am following.
1. First get the m3u8 and parse it.
You can parse it using this helpful kit, M3U8Kit.
Using this kit you can get the M3U8MediaPlaylist or M3U8MasterPlaylist (if it is a master playlist);
if you get the master playlist, you can also parse that to get the M3U8MediaPlaylist.
- (void)parseM3u8 {
    NSString *plainString = [self.url m3u8PlanString];
    BOOL isMasterPlaylist = [plainString isMasterPlaylist];

    NSError *error;
    NSURL *baseURL;
    if (isMasterPlaylist) {
        M3U8MasterPlaylist *masterList = [[M3U8MasterPlaylist alloc] initWithContentOfURL:self.url error:&error];
        self.masterPlaylist = masterList;

        M3U8ExtXStreamInfList *xStreamInfList = masterList.xStreamList;
        M3U8ExtXStreamInf *streamInfo = [xStreamInfList extXStreamInfAtIndex:0];
        NSString *URI = streamInfo.URI;
        NSRange range = [URI rangeOfString:@"dailymotion.com"];
        NSString *baseURLString = [URI substringToIndex:(range.location + range.length)];
        baseURL = [NSURL URLWithString:baseURLString];
        plainString = [[NSURL URLWithString:URI] m3u8PlanString];
    }

    M3U8MediaPlaylist *mediaPlaylist = [[M3U8MediaPlaylist alloc] initWithContent:plainString baseURL:baseURL];
    self.mediaPlaylist = mediaPlaylist;

    M3U8SegmentInfoList *segmentInfoList = mediaPlaylist.segmentList;
    NSMutableArray *segmentUrls = [[NSMutableArray alloc] init];
    for (int i = 0; i < segmentInfoList.count; i++) {
        M3U8SegmentInfo *segmentInfo = [segmentInfoList segmentInfoAtIndex:i];
        NSString *segmentURI = segmentInfo.URI;
        NSURL *mediaURL = [baseURL URLByAppendingPathComponent:segmentURI];
        [segmentUrls addObject:mediaURL];

        if (!self.segmentDuration)
            self.segmentDuration = segmentInfo.duration;
    }

    self.segmentFilesURLs = segmentUrls;
}
You can see that you get the links to the .ts files when you parse the m3u8.
2. Now download all the .ts files into a local folder (a minimal download sketch follows below).
3. Merge these .ts files into one mp4 file and export.
You can do that using this wonderful C library
TS2MP4
and then you can delete the .ts files, or keep them if you need them.
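As an illustration of step 2, a download pass over the parsed segment URLs could look roughly like this; the segmentsDirectoryURL property and the completion tracking are assumptions, not part of the original answer.
// Hypothetical helper: downloads each parsed .ts segment into a local folder.
// self.segmentsDirectoryURL is an assumed property pointing at a writable directory.
- (void)downloadSegments {
    for (NSURL *segmentURL in self.segmentFilesURLs) {
        NSURLSessionDownloadTask *task = [[NSURLSession sharedSession] downloadTaskWithURL:segmentURL
            completionHandler:^(NSURL *location, NSURLResponse *response, NSError *error) {
                if (error || !location) {
                    NSLog(@"Failed to download %@: %@", segmentURL, error);
                    return;
                }
                NSURL *destination = [self.segmentsDirectoryURL URLByAppendingPathComponent:segmentURL.lastPathComponent];
                [[NSFileManager defaultManager] moveItemAtURL:location toURL:destination error:NULL];
            }];
        [task resume];
    }
    // Once every download has finished (e.g. tracked with a dispatch group),
    // hand the local .ts files to TS2MP4 to merge them into a single mp4.
}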
That is not a good approach. What you can do instead is parse the M3U8 link, then try to download the segment files (.ts). If you can get those files, you can merge them to generate an mp4 file.

Getting PCM data of HLS from AVPlayer

This question seems to have been asked a few times over the last few years, but none of them has an answer. I am trying to process PCM data from HLS, and I have to use AVPlayer.
This post taps local files:
https://chritto.wordpress.com/2013/01/07/processing-avplayers-audio-with-mtaudioprocessingtap/
And this tap works with remote files, but not with .m3u8 HLS files:
http://venodesigns.net/2014/01/08/recording-live-audio-streams-on-ios/
I can play the first two tracks in the playlist, but the callbacks needed to get the PCM never start. When the file is local or remote (not a stream) I can still get the PCM, but HLS is not working, and I need HLS working.
Here is my code:
// AVPlayer tap attempt
- (void)viewDidLoad {
    [super viewDidLoad];

    NSURL *testUrl = [NSURL URLWithString:@"http://playlists.ihrhls.com/c5/1469/playlist.m3u8"];
    AVPlayerItem *item = [AVPlayerItem playerItemWithURL:testUrl];
    self.player = [AVPlayer playerWithPlayerItem:item];

    // Watch the status property - when this is good to go, we can access the
    // underlying AVAssetTrack we need.
    [item addObserver:self forKeyPath:@"status" options:0 context:nil];
}
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    if (![keyPath isEqualToString:@"status"])
        return;

    AVPlayerItem *item = (AVPlayerItem *)object;
    if (item.status != AVPlayerItemStatusReadyToPlay)
        return;

    NSArray *tracks = [self.player.currentItem tracks];
    for (AVPlayerItemTrack *track in tracks) {
        if ([track.assetTrack.mediaType isEqualToString:AVMediaTypeAudio]) {
            NSLog(@"Found the audio track");
            [self beginRecordingAudioFromTrack:track.assetTrack];
            [self.player play];
        }
    }
}
- (void)beginRecordingAudioFromTrack:(AVAssetTrack *)audioTrack
{
    // Configure an MTAudioProcessingTap to handle things.
    MTAudioProcessingTapRef tap;
    MTAudioProcessingTapCallbacks callbacks;
    callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
    callbacks.clientInfo = (__bridge void *)(self);
    callbacks.init = init;
    callbacks.prepare = prepare;
    callbacks.process = process;
    callbacks.unprepare = unprepare;
    callbacks.finalize = finalize;

    OSStatus err = MTAudioProcessingTapCreate(
        kCFAllocatorDefault,
        &callbacks,
        kMTAudioProcessingTapCreationFlag_PostEffects,
        &tap
    );

    if (err) {
        NSLog(@"Unable to create the Audio Processing Tap %d", (int)err);
        return;
    }

    // Create an AudioMix and assign it to our currently playing "item", which
    // is just the stream itself.
    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    AVMutableAudioMixInputParameters *inputParams = [AVMutableAudioMixInputParameters
                                                     audioMixInputParametersWithTrack:audioTrack];
    inputParams.audioTapProcessor = tap;
    audioMix.inputParameters = @[inputParams];
    self.player.currentItem.audioMix = audioMix;
}
void process(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
             MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut,
             CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
{
    OSStatus err = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                      flagsOut, NULL, numberFramesOut);
    if (err) NSLog(@"Error from GetSourceAudio: %d", (int)err);
    NSLog(@"Process");
}

void init(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut)
{
    NSLog(@"Initialising the Audio Tap Processor");
    *tapStorageOut = clientInfo;
}

void finalize(MTAudioProcessingTapRef tap)
{
    NSLog(@"Finalizing the Audio Tap Processor");
}

void prepare(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *processingFormat)
{
    NSLog(@"Preparing the Audio Tap Processor");
}

void unprepare(MTAudioProcessingTapRef tap)
{
    NSLog(@"Unpreparing the Audio Tap Processor");
}
init is called, but prepare and process never are, even though they need to run as well.
How can I make this work?
I would suggest you use the FFmpeg library to process HLS streams. This is a little harder but gives more flexibility. I built an HLS player for Android a few years ago (using this project), and I believe the same applies to iOS.
I recommend using Novocaine:
"Really fast audio in iOS and Mac OS X using Audio Units is hard, and will leave you scarred and bloody. What used to take days can now be done with just a few lines of code."

AVPlayerItem gets a NaN duration

I'm streaming an .mp3 file from a URL.
I'm using AVPlayer, and when I try to get the total time to build a progress bar, the time is always NaN.
NSError *setCategoryError = nil;
if ([[AVAudioSession sharedInstance] isOtherAudioPlaying]) { // mix sound effects with music already playing
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategorySoloAmbient error:&setCategoryError];
} else {
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:&setCategoryError];
}
if (setCategoryError) {
    NSLog(@"Error setting category! %ld", (long)[setCategoryError code]);
}

NSURL *url = [NSURL URLWithString:@"http://..//46698"];
AVPlayer *player = [AVPlayer playerWithURL:url];
songPlayer = player;

[songPlayer addObserver:self forKeyPath:@"status" options:0 context:nil];
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if (object == songPlayer && [keyPath isEqualToString:@"status"]) {
        if (songPlayer.status == AVPlayerStatusFailed) {
            NSLog(@"AVPlayer Failed");
        } else if (songPlayer.status == AVPlayerStatusReadyToPlay) {
            NSLog(@"AVPlayerStatusReadyToPlay");
            [songPlayer play];

            [songPlayer addPeriodicTimeObserverForInterval:CMTimeMake(1, 1) queue:dispatch_get_main_queue() usingBlock:^(CMTime time) {
                CMTime aux = [songPlayer currentTime];
                AVPlayerItem *item = [songPlayer currentItem];
                CMTime dur = [item duration];
                NSLog(@"%f/%f", CMTimeGetSeconds(aux), CMTimeGetSeconds(dur));
            }];
        } else if (songPlayer.status == AVPlayerStatusUnknown) {
            NSLog(@"AVPlayer Unknown");
        }
    }
}
I've tried everything:
[item duration]; // fails
[[item asset] duration]; // fails
and nothing works.
Does anyone know why?
The value of the duration property will be reported as kCMTimeIndefinite until the duration of the underlying asset has been loaded. There are two ways to ensure that the value of duration is accessed only after it becomes available:
1. Wait until the status of the AVPlayerItem is AVPlayerItemStatusReadyToPlay.
2. Register for key-value observation of the duration property, requesting the initial value. If the initial value is reported as kCMTimeIndefinite, the AVPlayerItem will notify you of the availability of the item's duration via key-value observing as soon as its value becomes known.
For Swift:
if player.currentItem.status == .readyToPlay {
    print(player.currentItem.duration.seconds) // it's not NaN
}
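The second approach (KVO on the duration key) would look roughly like this in Objective-C; a minimal sketch, reusing the songPlayer from the question above:
// After creating the item, ask for the initial value; if it is indefinite,
// KVO fires again once the duration becomes known.
[songPlayer.currentItem addObserver:self
                         forKeyPath:@"duration"
                            options:NSKeyValueObservingOptionInitial | NSKeyValueObservingOptionNew
                            context:nil];

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if ([keyPath isEqualToString:@"duration"]) {
        CMTime duration = [(AVPlayerItem *)object duration];
        if (CMTIME_IS_INDEFINITE(duration)) return; // not loaded yet

        NSLog(@"duration: %f seconds", CMTimeGetSeconds(duration));
    }
}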
I had this problem on iOS 12 (on iOS 13 everything works as expected): the current item's duration is always indefinite. I solved it by using player.currentItem?.asset.duration. Something like this:
private var currentItemDuration: CMTime? {
    if #available(iOS 13.0, *) {
        return player?.currentItem?.duration
    } else {
        return player?.currentItem?.asset.duration
    }
}
See this answer for macOS: https://stackoverflow.com/a/52668213/7132300 It looks like it's also valid for iOS 12.
@voromax is correct. I added the asset to the playerItem without getting the duration first, and the duration was NaN:
let asset = AVAsset(url: videoUrl)
self.playerItem = AVPlayerItem(asset: asset)
When I called asset.loadValuesAsynchronously first, there was no more NaN and I got the correct duration:
let assetKeys = ["playable", "duration"]
let asset = AVAsset(url: url)
asset.loadValuesAsynchronously(forKeys: assetKeys, completionHandler: {
    DispatchQueue.main.async { [weak self] in
        self?.playerItem = AVPlayerItem(asset: asset, automaticallyLoadedAssetKeys: assetKeys)
    }
})
You can use the asset property. It will give you the duration.
self.player.currentItem?.asset.duration.seconds ?? 0
I had the same problem but I was able to get the duration with a different method. Please see my answer here: https://stackoverflow.com/a/38406295/3629481

Issue with MTAudioProcessingTap on device

I'm trying to create an MTAudioProcessingTap based on the tutorial from this blog entry: http://chritto.wordpress.com/2013/01/07/processing-avplayers-audio-with-mtaudioprocessingtap/
The issue is working with various audio formats. I've been able to successfully create a tap with M4As, but it is not working with MP3s. Even stranger to me: the code works on the simulator with both formats, but not on the device (only M4A works). I'm getting OSStatus error code 50 in my process block, and if I attempt to use the AudioBufferList data, I get a bad access. The tap setup and callbacks I'm using are below. The process block seems to be the culprit (I think), but I do not know why.
Update: It seems to work very sporadically the first time after a bit of a break, but only the first time. I get the feeling there is some sort of lock on my audio file. Does anyone know what I should be doing in the unprepare block for cleanup?
Unprepare block -
void unprepare(MTAudioProcessingTapRef tap)
{
    NSLog(@"Unpreparing the Audio Tap Processor");
}
Process block (will get OSStatus error 50) -
void process(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
             MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut,
             CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
{
    OSStatus err = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                      flagsOut, NULL, numberFramesOut);
    if (err) NSLog(@"Error from GetSourceAudio: %ld", (long)err);
}
Tap Setup -
NSURL *assetURL = [[NSBundle mainBundle] URLForResource:@"DLP" withExtension:@"mp3"];
assert(assetURL);

// Create the AVAsset
AVAsset *asset = [AVAsset assetWithURL:assetURL];
assert(asset);

// Create the AVPlayerItem
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
assert(playerItem);

assert([asset tracks]);
assert([[asset tracks] count]);

self.player = [AVPlayer playerWithPlayerItem:playerItem];
assert(self.player);

// Continuing on from where we created the AVAsset...
AVAssetTrack *audioTrack = [[asset tracks] objectAtIndex:0];
AVMutableAudioMixInputParameters *inputParams = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];

// Create a processing tap for the input parameters
MTAudioProcessingTapCallbacks callbacks;
callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
callbacks.clientInfo = (__bridge void *)(self);
callbacks.init = init;
callbacks.prepare = prepare;
callbacks.process = process;
callbacks.unprepare = unprepare;
callbacks.finalize = finalize;

MTAudioProcessingTapRef tap;
// The create function makes a copy of our callbacks struct
OSStatus err = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                          kMTAudioProcessingTapCreationFlag_PostEffects, &tap);
if (err || !tap) {
    NSLog(@"Unable to create the Audio Processing Tap");
    return;
}
assert(tap);

// Assign the tap to the input parameters
inputParams.audioTapProcessor = tap;

// Create a new AVAudioMix and assign it to our AVPlayerItem
AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = @[inputParams];
playerItem.audioMix = audioMix;

// And then we create the AVPlayer with the playerItem, and send it the play message...
[self.player play];
This was apparently a bug in iOS 6.0 (which my device was still on); the simulator was on 6.1.
Upgrading the device to 6.1.2 made the errors disappear.

AVPlayer playback of single channel audio stereo->mono

In my iPad/iPhone app I'm playing back a video using AVPlayer. The video file has a stereo audio track, but I need to play back only one channel of this track in mono. The deployment target is iOS 6. How can I achieve this? Thanks a lot for your help.
I finally found an answer to this question, at least for deployment on iOS 6. You can easily add an MTAudioProcessingTap to your existing AVPlayer item and copy the selected channel's samples to the other channel in your process callback function. Here is a great tutorial explaining the basics: http://chritto.wordpress.com/2013/01/07/processing-avplayers-audio-with-mtaudioprocessingtap/
This is my code so far, mostly copied from the link above.
During AVPlayer setup I assign callback functions for audio processing:
MTAudioProcessingTapCallbacks callbacks;
callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
callbacks.clientInfo = (__bridge void *)(self);
callbacks.init = init;
callbacks.prepare = prepare;
callbacks.process = process;
callbacks.unprepare = unprepare;
callbacks.finalize = finalize;

MTAudioProcessingTapRef tap;
// The create function makes a copy of our callbacks struct
OSStatus err = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                          kMTAudioProcessingTapCreationFlag_PostEffects, &tap);
if (err || !tap) {
    NSLog(@"Unable to create the Audio Processing Tap");
    return;
}
assert(tap);

// Assign the tap to the input parameters
audioInputParam.audioTapProcessor = tap;

// Create a new AVAudioMix and assign it to our AVPlayerItem
AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = @[audioInputParam];
playerItem.audioMix = audioMix;
Here are the audio processing functions (actually process is the only one needed):
#pragma mark Audio Processing

void init(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut) {
    NSLog(@"Initialising the Audio Tap Processor");
    *tapStorageOut = clientInfo;
}

void finalize(MTAudioProcessingTapRef tap) {
    NSLog(@"Finalizing the Audio Tap Processor");
}

void prepare(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *processingFormat) {
    NSLog(@"Preparing the Audio Tap Processor");
}

void unprepare(MTAudioProcessingTapRef tap) {
    NSLog(@"Unpreparing the Audio Tap Processor");
}

void process(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
             MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut,
             CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut) {
    OSStatus err = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, NULL, numberFramesOut);
    if (err) NSLog(@"Error from GetSourceAudio: %ld", (long)err);

    SIVSViewController *self = (__bridge SIVSViewController *)MTAudioProcessingTapGetStorage(tap);

    if (self.selectedChannel) {
        int channel = self.selectedChannel;

        if (channel == 0) {
            bufferListInOut->mBuffers[1].mData = bufferListInOut->mBuffers[0].mData;
        } else {
            bufferListInOut->mBuffers[0].mData = bufferListInOut->mBuffers[1].mData;
        }
    }
}

Resources