This question seems to have been asked a few times over the last few years, but none of those posts has an answer. I am trying to process PCM data from an HLS stream, and I have to use AVPlayer.
This post taps local files:
https://chritto.wordpress.com/2013/01/07/processing-avplayers-audio-with-mtaudioprocessingtap/
and this tap works with remote files, but not with .m3u8 HLS files:
http://venodesigns.net/2014/01/08/recording-live-audio-streams-on-ios/
I can play the first two tracks in the playlist, but the callbacks I need to get the PCM are never started. When the file is local or remote (not a stream) I can still get the PCM; it is only HLS that is not working, and I need HLS to work.
Here is my code:
//avplayer tap try
- (void)viewDidLoad {
    [super viewDidLoad];
    NSURL *testUrl = [NSURL URLWithString:@"http://playlists.ihrhls.com/c5/1469/playlist.m3u8"];
    AVPlayerItem *item = [AVPlayerItem playerItemWithURL:testUrl];
    self.player = [AVPlayer playerWithPlayerItem:item];
    // Watch the status property - when this is good to go, we can access the
    // underlying AVAssetTrack we need.
    [item addObserver:self forKeyPath:@"status" options:0 context:nil];
}
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    if (![keyPath isEqualToString:@"status"])
        return;
    AVPlayerItem *item = (AVPlayerItem *)object;
    if (item.status != AVPlayerItemStatusReadyToPlay)
        return;
    NSArray *tracks = [self.player.currentItem tracks];
    for (AVPlayerItemTrack *track in tracks) {
        if ([track.assetTrack.mediaType isEqualToString:AVMediaTypeAudio]) {
            NSLog(@"Found the audio track");
            [self beginRecordingAudioFromTrack:track.assetTrack];
            [self.player play];
        }
    }
}
- (void)beginRecordingAudioFromTrack:(AVAssetTrack *)audioTrack
{
    // Configure an MTAudioProcessingTap to handle things.
    MTAudioProcessingTapRef tap;
    MTAudioProcessingTapCallbacks callbacks;
    callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
    callbacks.clientInfo = (__bridge void *)(self);
    callbacks.init = init;
    callbacks.prepare = prepare;
    callbacks.process = process;
    callbacks.unprepare = unprepare;
    callbacks.finalize = finalize;

    OSStatus err = MTAudioProcessingTapCreate(
        kCFAllocatorDefault,
        &callbacks,
        kMTAudioProcessingTapCreationFlag_PostEffects,
        &tap
    );
    if (err) {
        NSLog(@"Unable to create the Audio Processing Tap %d", (int)err);
        return;
    }

    // Create an AudioMix and assign it to our currently playing "item", which
    // is just the stream itself.
    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    AVMutableAudioMixInputParameters *inputParams = [AVMutableAudioMixInputParameters
        audioMixInputParametersWithTrack:audioTrack];
    inputParams.audioTapProcessor = tap;
    audioMix.inputParameters = @[inputParams];
    self.player.currentItem.audioMix = audioMix;
}
void process(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
             MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut,
             CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
{
    OSStatus err = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                      flagsOut, NULL, numberFramesOut);
    if (err) NSLog(@"Error from GetSourceAudio: %d", (int)err);
    NSLog(@"Process");
}

void init(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut)
{
    NSLog(@"Initialising the Audio Tap Processor");
    *tapStorageOut = clientInfo;
}

void finalize(MTAudioProcessingTapRef tap)
{
    NSLog(@"Finalizing the Audio Tap Processor");
}

void prepare(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *processingFormat)
{
    NSLog(@"Preparing the Audio Tap Processor");
}

void unprepare(MTAudioProcessingTapRef tap)
{
    NSLog(@"Unpreparing the Audio Tap Processor");
}
The init callback gets called, but prepare and process have to be called as well, and they never are.
How can I make this work?
I would suggest using the FFmpeg library to process HLS streams. This is a little harder but gives more flexibility. I built an HLS player for Android a few years ago (using this project), and I believe the same applies to iOS.
I recommend using Novocaine:
Really fast audio in iOS and Mac OS X using Audio Units is hard, and will leave you scarred and bloody. What used to take days can now be done with just a few lines of code.
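For reference, here is a minimal Novocaine sketch, assuming the block-based API from its README (the audioManager singleton, setInputBlock:, and play); note that Novocaine drives the device's audio I/O through Remote I/O audio units rather than tapping an AVPlayer item, so it illustrates the block style rather than solving the HLS tap problem directly:

#import "Novocaine.h"

// Grab the shared audio manager and compute a simple level from incoming audio.
Novocaine *audioManager = [Novocaine audioManager];
[audioManager setInputBlock:^(float *newAudio, UInt32 numSamples, UInt32 numChannels) {
    // newAudio holds numSamples * numChannels float samples; derive an RMS level from them.
    float sum = 0.0f;
    for (UInt32 i = 0; i < numSamples * numChannels; i++) {
        sum += newAudio[i] * newAudio[i];
    }
    float rms = sqrtf(sum / (numSamples * numChannels));
    (void)rms; // feed this into your own processing or visualization
}];
[audioManager play];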
Related
How can I get something like averagePowerForChannel from AVPlayer in order to build an audio visualization in my music app?
I've already done the visualization part, but I'm stuck on its engine (the real-time volume per channel).
I know that with AVAudioPlayer it can be done easily using the meteringEnabled property, but for reasons specific to my app, AVPlayer is a must.
I'm actually thinking of using AVAudioPlayer alongside AVPlayer to get the desired result, but that sounds like a messy workaround.
How would that affect performance and stability?
Thanks in advance.
I have been working on AVPlayer visualisation for about two years. In my case it involves HLS live streaming, and in that case, as far as I know, you won't get it running.
EDIT: This will not give you the averagePowerForChannel: method, but you will get access to the raw data and can derive the desired information from it, for example with an FFT or a simple RMS computation (see the sketch after the example below).
I did get it working with local playback, though. You basically wait for the player's player item to have a track up and running. At that point you patch an MTAudioProcessingTap into the audio mix.
The processing tap will then run the callbacks you specify, in which you can process the raw audio data as you need.
Here is a quick example (sorry for having it in Objective-C, though):
#import <AVFoundation/AVFoundation.h>
#import <MediaToolbox/MediaToolbox.h>
void init(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut) {};
void finalize(MTAudioProcessingTapRef tap) {};
void prepare(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *processingFormat) {};
void unprepare(MTAudioProcessingTapRef tap) {};
void process(MTAudioProcessingTapRef tap, CMItemCount numberFrames, MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut, CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut) {};
- (void)play {
    // player and item setup ...
    [[[self player] currentItem] addObserver:self forKeyPath:@"tracks" options:kNilOptions context:NULL];
}
//////////////////////////////////////////////////////
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if ([keyPath isEqualToString:@"tracks"] && [[object tracks] count] > 0) {
        for (AVPlayerItemTrack *itemTrack in [object tracks]) {
            AVAssetTrack *track = [itemTrack assetTrack];
            if ([[track mediaType] isEqualToString:AVMediaTypeAudio]) {
                [self addAudioProcessingTap:track];
                break;
            }
        }
    }
}
- (void)addAudioProcessingTap:(AVAssetTrack *)track {
    MTAudioProcessingTapRef tap;
    MTAudioProcessingTapCallbacks callbacks;
    callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
    callbacks.clientInfo = (__bridge void *)(self);
    callbacks.init = init;
    callbacks.prepare = prepare;
    callbacks.process = process;
    callbacks.unprepare = unprepare;
    callbacks.finalize = finalize;

    OSStatus err = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks, kMTAudioProcessingTapCreationFlag_PostEffects, &tap);
    if (err) {
        NSLog(@"error: %@", [NSError errorWithDomain:NSOSStatusErrorDomain code:err userInfo:nil]);
        return;
    }

    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    AVMutableAudioMixInputParameters *inputParams = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:track];
    [inputParams setAudioTapProcessor:tap];
    [audioMix setInputParameters:@[inputParams]];
    [[[self player] currentItem] setAudioMix:audioMix];
}
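If what you are after is something like averagePowerForChannel:, you do not necessarily need an FFT; a per-buffer RMS is often enough for a level meter. Below is a rough sketch of what the process callback could look like, replacing the empty process stub above. My assumption here is that the tap delivers non-interleaved 32-bit float samples (the usual processing format); verify against the AudioStreamBasicDescription handed to prepare.

void process(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
             MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut,
             CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
{
    // Pull the source audio into bufferListInOut first.
    OSStatus err = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                      flagsOut, NULL, numberFramesOut);
    if (err) {
        NSLog(@"Error from GetSourceAudio: %d", (int)err);
        return;
    }
    // Compute a simple RMS level per buffer (one buffer per channel when non-interleaved).
    for (UInt32 i = 0; i < bufferListInOut->mNumberBuffers; i++) {
        AudioBuffer audioBuffer = bufferListInOut->mBuffers[i];
        float *samples = (float *)audioBuffer.mData;
        UInt32 sampleCount = audioBuffer.mDataByteSize / sizeof(float);
        float sum = 0.0f;
        for (UInt32 s = 0; s < sampleCount; s++) {
            sum += samples[s] * samples[s];
        }
        float rms = (sampleCount > 0) ? sqrtf(sum / sampleCount) : 0.0f;
        // Hand rms (or 20.0f * log10f(rms) for a dB value) to the object you
        // stashed in tapStorage, e.g. to drive a visualizer.
        (void)rms;
    }
}

Keep the work in this callback cheap; it runs on a real-time audio thread, so dispatch any UI updates asynchronously to the main queue.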
There is some discussion going on over on my question from over two years ago, so make sure to check it out as well.
You will need an audio processor class in combination with AVFoundation to visualize audio samples, as well as to apply a Core Audio audio unit effect (a bandpass filter) to the audio data. You can find a sample by Apple here.
Essentially, you will need to add an observer to your AVPlayer like the one below:
// Notifications
let playerItem: AVPlayerItem! = videoPlayer.currentItem
playerItem.addObserver(self, forKeyPath: "tracks", options: NSKeyValueObservingOptions.New, context: nil);
NSNotificationCenter.defaultCenter().addObserverForName(AVPlayerItemDidPlayToEndTimeNotification, object: videoPlayer.currentItem, queue: NSOperationQueue.mainQueue(), usingBlock: { (notif: NSNotification) -> Void in
    self.videoPlayer.seekToTime(kCMTimeZero)
    self.videoPlayer.play()
    print("replay")
})
Then handle the KVO change in the overridden method below:
override func observeValueForKeyPath(keyPath: String?, ofObject object: AnyObject?, change: [String : AnyObject]?, context: UnsafeMutablePointer<Void>) {
    if (object === videoPlayer.currentItem && keyPath == "tracks") {
        if let playerItem: AVPlayerItem = videoPlayer.currentItem {
            if let tracks = playerItem.asset.tracks as? [AVAssetTrack] {
                tapProcessor = MYAudioTapProcessor(AVPlayerItem: playerItem)
                playerItem.audioMix = tapProcessor.audioMix
                tapProcessor.delegate = self
            }
        }
    }
}
Here's a link to a sample project on GitHub
I am playing HLS streams using AVPlayer, and I also need to record these streams when the user presses a record button.
The approach I am using is to record audio and video separately and then, at the end, merge these files to produce the final video. This works with remote MP4 files.
But for HLS (.m3u8) files I am able to record the video using AVAssetWriter, while having problems with the audio recording.
I am using an MTAudioProcessingTap to process the raw audio data and write it to a file, following this article. I am able to record audio from remote MP4s, but it is not working with HLS streams.
Initially I wasn't able to extract the audio tracks from the stream using AVAssetTrack *audioTrack = [asset tracksWithMediaType:AVMediaTypeAudio][0];
But I was able to extract the audio tracks using KVO in order to initialize the MTAudioProcessingTap.
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    AVPlayer *player = (AVPlayer *)object;
    if (player.status == AVPlayerStatusReadyToPlay)
    {
        NSLog(@"Ready to play");
        self.previousAudioTrackID = 0;
        __weak typeof (self) weakself = self;
        timeObserverForTrack = [player addPeriodicTimeObserverForInterval:CMTimeMakeWithSeconds(1, 100) queue:nil usingBlock:^(CMTime time)
        {
            @try {
                for (AVPlayerItemTrack *track in [weakself.avPlayer.currentItem tracks]) {
                    if ([track.assetTrack.mediaType isEqualToString:AVMediaTypeAudio])
                        weakself.currentAudioPlayerItemTrack = track;
                }
                AVAssetTrack *audioAssetTrack = weakself.currentAudioPlayerItemTrack.assetTrack;
                weakself.currentAudioTrackID = audioAssetTrack.trackID;
                if (weakself.previousAudioTrackID != weakself.currentAudioTrackID) {
                    NSLog(@":::::::::::::::::::::::::: Audio track changed : %d", weakself.currentAudioTrackID);
                    weakself.previousAudioTrackID = weakself.currentAudioTrackID;
                    weakself.audioTrack = audioAssetTrack;
                    // Use this audio track to initialize the MTAudioProcessingTap
                }
            }
            @catch (NSException *exception) {
                NSLog(@"Exception Trap ::::: Audio tracks not found!");
            }
        }];
    }
}
I am also keeping track of the trackID to check whether the track has changed.
This is how I initialize the MTAudioProcessingTap.
- (void)beginRecordingAudioFromTrack:(AVAssetTrack *)audioTrack {
    // Configure an MTAudioProcessingTap to handle things.
    MTAudioProcessingTapRef tap;
    MTAudioProcessingTapCallbacks callbacks;
    callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
    callbacks.clientInfo = (__bridge void *)(self);
    callbacks.init = init;
    callbacks.prepare = prepare;
    callbacks.process = process;
    callbacks.unprepare = unprepare;
    callbacks.finalize = finalize;

    OSStatus err = MTAudioProcessingTapCreate(
        kCFAllocatorDefault,
        &callbacks,
        kMTAudioProcessingTapCreationFlag_PostEffects,
        &tap
    );
    if (err) {
        NSLog(@"Unable to create the Audio Processing Tap %d", (int)err);
        NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain
                                             code:err
                                         userInfo:nil];
        NSLog(@"Error: %@", [error description]);
        return;
    }

    // Create an AudioMix and assign it to our currently playing "item", which
    // is just the stream itself.
    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    AVMutableAudioMixInputParameters *inputParams = [AVMutableAudioMixInputParameters
        audioMixInputParametersWithTrack:audioTrack];
    inputParams.audioTapProcessor = tap;
    audioMix.inputParameters = @[inputParams];
    _audioPlayer.currentItem.audioMix = audioMix;
}
But now, with this audio track, the MTAudioProcessingTap callbacks prepare and process are never called.
Is the problem with the audioTrack I am getting through KVO?
I would really appreciate it if someone could help me with this, or tell me whether I am using the right approach to record HLS streams.
I found a solution for this and am using it in my app. I wanted to post it earlier but didn't get the time.
To work with HLS you should have some knowledge of what these streams actually are. For that, please see the documentation on the Apple website:
HLS Apple
Here are the steps I am following.
1. First, get the .m3u8 and parse it.
You can parse it using the helpful M3U8Kit library.
Using this kit you can get the M3U8MediaPlaylist or the M3U8MasterPlaylist (if it is a master playlist);
if you get the master playlist, you can parse it further to get the M3U8MediaPlaylist.
- (void)parseM3u8
{
    NSString *plainString = [self.url m3u8PlanString];
    BOOL isMasterPlaylist = [plainString isMasterPlaylist];
    NSError *error;
    NSURL *baseURL;
    if (isMasterPlaylist)
    {
        M3U8MasterPlaylist *masterList = [[M3U8MasterPlaylist alloc] initWithContentOfURL:self.url error:&error];
        self.masterPlaylist = masterList;
        M3U8ExtXStreamInfList *xStreamInfList = masterList.xStreamList;
        M3U8ExtXStreamInf *StreamInfo = [xStreamInfList extXStreamInfAtIndex:0];
        NSString *URI = StreamInfo.URI;
        NSRange range = [URI rangeOfString:@"dailymotion.com"];
        NSString *baseURLString = [URI substringToIndex:(range.location + range.length)];
        baseURL = [NSURL URLWithString:baseURLString];
        plainString = [[NSURL URLWithString:URI] m3u8PlanString];
    }
    M3U8MediaPlaylist *mediaPlaylist = [[M3U8MediaPlaylist alloc] initWithContent:plainString baseURL:baseURL];
    self.mediaPlaylist = mediaPlaylist;
    M3U8SegmentInfoList *segmentInfoList = mediaPlaylist.segmentList;
    NSMutableArray *segmentUrls = [[NSMutableArray alloc] init];
    for (int i = 0; i < segmentInfoList.count; i++)
    {
        M3U8SegmentInfo *segmentInfo = [segmentInfoList segmentInfoAtIndex:i];
        NSString *segmentURI = segmentInfo.URI;
        NSURL *mediaURL = [baseURL URLByAppendingPathComponent:segmentURI];
        [segmentUrls addObject:mediaURL];
        if (!self.segmentDuration)
            self.segmentDuration = segmentInfo.duration;
    }
    self.segmentFilesURLs = segmentUrls;
}
You can see that by parsing the .m3u8 you get the links to the .ts segment files.
2. Now download all the .ts files into a local folder (a download sketch follows below).
3. Merge these .ts files into one MP4 file and export it.
You can do that using this wonderful C library:
TS2MP4
Afterwards you can delete the .ts files, or keep them if you need them.
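For step 2, a rough download sketch could look like the following. This is my own hypothetical helper, assuming the segmentFilesURLs array built in parseM3u8 above and a writable destination folder:

// Sketch: download each .ts segment into a local folder using NSURLSession.
- (void)downloadSegmentsToFolder:(NSURL *)downloadFolderURL
{
    NSURLSession *session = [NSURLSession sharedSession];
    dispatch_group_t group = dispatch_group_create();

    for (NSURL *segmentURL in self.segmentFilesURLs) {
        dispatch_group_enter(group);
        NSURLSessionDownloadTask *task =
        [session downloadTaskWithURL:segmentURL
                   completionHandler:^(NSURL *location, NSURLResponse *response, NSError *error) {
            if (!error && location) {
                // Move the temporary download next to the other segments.
                NSURL *destination = [downloadFolderURL URLByAppendingPathComponent:segmentURL.lastPathComponent];
                [[NSFileManager defaultManager] moveItemAtURL:location toURL:destination error:NULL];
            }
            dispatch_group_leave(group);
        }];
        [task resume];
    }

    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        // All segments finished downloading; merge the local .ts files (e.g. with TS2MP4).
    });
}

Once every segment is on disk, you can hand the local .ts files to TS2MP4 (or a similar muxer) to produce the final MP4.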
This is not a good approach. What you can do instead is parse the M3U8 link, then try to download the segment files (.ts). If you can get those files, you can merge them to generate an MP4 file.
I am trying to decode audio samples from a remote HLS (.m3u8) stream on an iOS device for further processing of the data, e.g. recording it to a file.
As a reference stream, http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8 is used.
By using an AVURLAsset in combination with the AVPlayer I am able to show the video as preview on a CALayer.
I can also get the raw video data (CVPixelBuffer) by using AVPlayerItemVideoOutput. The audio is audible over the speaker of the iOS device as well.
This is the code I am using at the moment for the AVURLAsset and AVPlayer:
NSURL *url = [NSURL URLWithString:@"http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8"];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
NSString *tracksKey = @"tracks";
[asset loadValuesAsynchronouslyForKeys:@[tracksKey] completionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        NSError *error = nil;
        AVKeyValueStatus status = [asset statusOfValueForKey:tracksKey error:&error];
        if (status == AVKeyValueStatusLoaded) {
            NSDictionary *settings = @{
                (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
                @"IOSurfaceOpenGLESTextureCompatibility": @YES,
                @"IOSurfaceOpenGLESFBOCompatibility": @YES,
            };
            AVPlayerItemVideoOutput *output = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:settings];
            AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
            [playerItem addOutput:output];
            AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
            [player setVolume:0.0]; // no preview audio
            self.playerItem = playerItem;
            self.player = player;
            self.playerItemVideoOutput = output;
            AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
            [self.preview.layer addSublayer:playerLayer];
            [playerLayer setFrame:self.preview.bounds];
            [playerLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
            [self setPlayerLayer:playerLayer];
            [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playerItemNewAccessLogEntry:) name:AVPlayerItemNewAccessLogEntryNotification object:self.playerItem];
            [_player addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew context:&PlayerStatusContext];
            [_playerItem addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew context:&PlayerItemStatusContext];
            [_playerItem addObserver:self forKeyPath:@"tracks" options:0 context:nil];
        }
    });
}];
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
    if (self.player.status == AVPlayerStatusReadyToPlay && context == &PlayerStatusContext) {
        [self.player play];
    }
}
To get the raw video data of the HLS stream I use:
CVPixelBufferRef buffer = [self.playerItemVideoOutput copyPixelBufferForItemTime:self.playerItem.currentTime itemTimeForDisplay:nil];
if (!buffer) {
return;
}
CMSampleBufferRef newSampleBuffer = NULL;
CMSampleTimingInfo timingInfo = kCMTimingInfoInvalid;
timingInfo.duration = CMTimeMake(33, 1000);
int64_t ts = timestamp * 1000.0;
timingInfo.decodeTimeStamp = CMTimeMake(ts, 1000);
timingInfo.presentationTimeStamp = timingInfo.decodeTimeStamp;
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(NULL, buffer, &videoInfo);
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                   buffer,
                                   true,
                                   NULL,
                                   NULL,
                                   videoInfo,
                                   &timingInfo,
                                   &newSampleBuffer);
// do something here with sample buffer...
CFRelease(buffer);
CFRelease(newSampleBuffer);
Now I would like to get access to the raw audio data as well, but have had no luck so far.
I tried to use MTAudioProcessingTap as described here:
http://venodesigns.net/2014/01/08/recording-live-audio-streams-on-ios/
Unfortunately I could not get this to work properly. I succeeded in getting access to the underlying assetTrack of the AVPlayerItem, but the callback methods "prepare" and "process" of the MTAudioProcessingTap never get called. I am not sure whether I am on the right track here.
AVPlayer is playing the audio of the stream via the speaker, so internally the audio seems to be available as raw audio data. Is it possible to get access to the raw audio data? If it is not possible with AVPlayer, are there any other approaches?
If possible, I would prefer not to use FFmpeg, because the hardware decoder of the iOS device should be used for decoding the stream.
I'm trying to create an MTAudioProcessingTap based on the tutorial from this blog entry - http://chritto.wordpress.com/2013/01/07/processing-avplayers-audio-with-mtaudioprocessingtap/
The issue is working with various audio formats. I've been able to successfully create a tap with M4As, but it is not working with MP3s. Even stranger: the code works in the Simulator with both formats, but not on the device (only M4A works). I'm getting OSStatus error code 50 in my process block, and if I attempt to use the AudioBufferList data, I get a bad access. The tap setup and callbacks I'm using are below. The process block seems to be the culprit (I think), but I do not know why.
Update: It sporadically works the first time after a bit of a break, but only the first time. I get the feeling there is some sort of lock on my audio file. Does anyone know what I should be doing in the unprepare block for cleanup?
Unprepare block -
void unprepare(MTAudioProcessingTapRef tap)
{
    NSLog(@"Unpreparing the Audio Tap Processor");
}
Process block (will get OSStatus error 50) -
void process(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
             MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut,
             CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
{
    OSStatus err = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                      flagsOut, NULL, numberFramesOut);
    if (err) NSLog(@"Error from GetSourceAudio: %d", (int)err);
}
Tap Setup -
NSURL *assetURL = [[NSBundle mainBundle] URLForResource:@"DLP" withExtension:@"mp3"];
assert(assetURL);
// Create the AVAsset
AVAsset *asset = [AVAsset assetWithURL:assetURL];
assert(asset);
// Create the AVPlayerItem
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
assert(playerItem);
assert([asset tracks]);
assert([[asset tracks] count]);
self.player = [AVPlayer playerWithPlayerItem:playerItem];
assert(self.player);
// Continuing on from where we created the AVAsset...
AVAssetTrack *audioTrack = [[asset tracks] objectAtIndex:0];
AVMutableAudioMixInputParameters *inputParams = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
// Create a processing tap for the input parameters
MTAudioProcessingTapCallbacks callbacks;
callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
callbacks.clientInfo = (__bridge void *)(self);
callbacks.init = init;
callbacks.prepare = prepare;
callbacks.process = process;
callbacks.unprepare = unprepare;
callbacks.finalize = finalize;
MTAudioProcessingTapRef tap;
// The create function makes a copy of our callbacks struct
OSStatus err = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                          kMTAudioProcessingTapCreationFlag_PostEffects, &tap);
if (err || !tap) {
    NSLog(@"Unable to create the Audio Processing Tap");
    return;
}
assert(tap);
// Assign the tap to the input parameters
inputParams.audioTapProcessor = tap;
// Create a new AVAudioMix and assign it to our AVPlayerItem
AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = @[inputParams];
playerItem.audioMix = audioMix;
// And then we create the AVPlayer with the playerItem, and send it the play message...
[self.player play];
This was apparently a bug in iOS 6.0 (which my device was still on); the Simulator was on 6.1.
Upgrading the device to 6.1.2 made the errors disappear.
In my iPad/iPhone app I'm playing back a video using AVPlayer. The video file has a stereo audio track, but I need to play back only one channel of that track in mono. The deployment target is iOS 6. How can I achieve this? Thanks a lot for your help.
I finally found an answer to this question, at least for deployment on iOS 6. You can easily add an MTAudioProcessingTap to your existing AVPlayer item and copy the selected channel's samples to the other channel in your process callback function. Here is a great tutorial explaining the basics: http://chritto.wordpress.com/2013/01/07/processing-avplayers-audio-with-mtaudioprocessingtap/
This is my code so far, mostly copied from the link above.
During AVPlayer setup I assign callback functions for audio processing:
MTAudioProcessingTapCallbacks callbacks;
callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
callbacks.clientInfo = (__bridge void *)(self);
callbacks.init = init;
callbacks.prepare = prepare;
callbacks.process = process;
callbacks.unprepare = unprepare;
callbacks.finalize = finalize;

MTAudioProcessingTapRef tap;
// The create function makes a copy of our callbacks struct
OSStatus err = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                          kMTAudioProcessingTapCreationFlag_PostEffects, &tap);
if (err || !tap) {
    NSLog(@"Unable to create the Audio Processing Tap");
    return;
}
assert(tap);

// Assign the tap to the input parameters
audioInputParam.audioTapProcessor = tap;

// Create a new AVAudioMix and assign it to our AVPlayerItem
AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = @[audioInputParam];
playerItem.audioMix = audioMix;
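The snippet above assumes that audioInputParam and playerItem already exist. A minimal setup for them, following the same pattern as the other answers in this thread (videoURL is a hypothetical name for your own asset URL), might look like this:

// Assumed setup for the playerItem and audioInputParam used above.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil]; // videoURL: your local or remote file
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
AVMutableAudioMixInputParameters *audioInputParam =
    [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];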
Here are the audio processing functions (actually process is the only one needed):
#pragma mark Audio Processing

void init(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut) {
    NSLog(@"Initialising the Audio Tap Processor");
    *tapStorageOut = clientInfo;
}

void finalize(MTAudioProcessingTapRef tap) {
    NSLog(@"Finalizing the Audio Tap Processor");
}

void prepare(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *processingFormat) {
    NSLog(@"Preparing the Audio Tap Processor");
}

void unprepare(MTAudioProcessingTapRef tap) {
    NSLog(@"Unpreparing the Audio Tap Processor");
}

void process(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
             MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut,
             CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut) {
    OSStatus err = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, NULL, numberFramesOut);
    if (err) NSLog(@"Error from GetSourceAudio: %d", (int)err);

    SIVSViewController *self = (__bridge SIVSViewController *)MTAudioProcessingTapGetStorage(tap);

    if (self.selectedChannel) {
        int channel = self.selectedChannel;
        if (channel == 0) {
            bufferListInOut->mBuffers[1].mData = bufferListInOut->mBuffers[0].mData;
        } else {
            bufferListInOut->mBuffers[0].mData = bufferListInOut->mBuffers[1].mData;
        }
    }
}
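One thing to watch in the process function above: assigning mData makes both buffers point at the same memory, which is fine for simply playing one channel on both outputs, but if you later want to modify the channels independently you need a real copy. Also note that when selectedChannel is 0, the outer if never runs, so you may want a separate "mono enabled" flag. A minimal copy sketch, assuming two non-interleaved buffers as in the code above:

// Inside the process callback, after MTAudioProcessingTapGetSourceAudio:
// copy the selected channel's samples into the other buffer instead of
// aliasing the pointer (assumes two non-interleaved buffers of equal size).
if (bufferListInOut->mNumberBuffers >= 2) {
    AudioBuffer *source = &bufferListInOut->mBuffers[channel == 0 ? 0 : 1];
    AudioBuffer *destination = &bufferListInOut->mBuffers[channel == 0 ? 1 : 0];
    UInt32 byteCount = MIN(source->mDataByteSize, destination->mDataByteSize);
    memcpy(destination->mData, source->mData, byteCount);
}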