HTTP live stream AVAsset - OpenCV

I am implementing an HTTP Live Streaming player on OS X using AVPlayer.
I am able to stream it properly, seek, get the duration, etc.
Now I want to take screenshots and process the frames with OpenCV.
I went for AVAssetImageGenerator, but there are no audio or video tracks on the AVAsset associated with player.currentItem.
The tracks do appear in player.currentItem.tracks.
So I am not able to use AVAssetImageGenerator. Can anybody help me find a way to extract screenshots and individual frames in such a scenario?
Please find below the code showing how I set up the HTTP live stream.
Thanks in advance.
NSURL* url = [NSURL URLWithString:@"http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8"];
playeritem = [AVPlayerItem playerItemWithURL:url];
[playeritem addObserver:self forKeyPath:@"status" options:0 context:AVSPPlayerStatusContext];
[self setPlayer:[AVPlayer playerWithPlayerItem:playeritem]];
[self addObserver:self forKeyPath:@"player.rate" options:NSKeyValueObservingOptionNew context:AVSPPlayerRateContext];
[self addObserver:self forKeyPath:@"player.currentItem.status" options:NSKeyValueObservingOptionNew context:AVSPPlayerItemStatusContext];
AVPlayerLayer *newPlayerLayer = [AVPlayerLayer playerLayerWithPlayer:[self player]];
[newPlayerLayer setFrame:[[[self playerView] layer] bounds]];
[newPlayerLayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable];
[newPlayerLayer setHidden:YES];
[[[self playerView] layer] addSublayer:newPlayerLayer];
[self setPlayerLayer:newPlayerLayer];
[self addObserver:self forKeyPath:@"playerLayer.readyForDisplay" options:NSKeyValueObservingOptionInitial | NSKeyValueObservingOptionNew context:AVSPPlayerLayerReadyForDisplay];
[self.player play];
The following is how I check whether a video track is present on the asset:
case AVPlayerItemStatusReadyToPlay:
    [self setTimeObserverToken:[[self player] addPeriodicTimeObserverForInterval:CMTimeMake(1, 10) queue:dispatch_get_main_queue() usingBlock:^(CMTime time) {
        [[self timeSlider] setDoubleValue:CMTimeGetSeconds(time)];
        NSLog(@"%f,%f,%f",[self currentTime],[self duration],[[self player] rate]);
        AVPlayerItem *item = playeritem;
        if (item.status == AVPlayerItemStatusReadyToPlay)
        {
            AVAsset *asset = (AVAsset *)item.asset;
            long audiotracks = [[asset tracks] count];
            long videotracks = [[asset availableMediaCharacteristicsWithMediaSelectionOptions] count];
            NSLog(@"Track info Audio = %ld,Video=%ld",audiotracks,videotracks);
        }
    }]];

AVPlayerItem *item = self.player.currentItem;
if (item.status != AVPlayerItemStatusReadyToPlay)
    return;
AVURLAsset *asset = (AVURLAsset *)item.asset;
long audiotracks = [[asset tracksWithMediaType:AVMediaTypeAudio] count];
long videotracks = [[asset tracksWithMediaType:AVMediaTypeVideo] count];
NSLog(@"Track info Audio = %ld,Video=%ld",audiotracks,videotracks);

This is an older question, but in case someone needs help with it, here is an answer:
AVURLAsset *asset = /* Your Asset here! */;
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.requestedTimeToleranceAfter = kCMTimeZero;
generator.requestedTimeToleranceBefore = kCMTimeZero;
for (Float64 i = 0; i < CMTimeGetSeconds(asset.duration) * /* Put the FPS of the source video here */ ; i++){
    @autoreleasepool {
        CMTime time = CMTimeMake(i, /* Put the FPS of the source video here */);
        NSError *err;
        CMTime actualTime;
        CGImageRef image = [generator copyCGImageAtTime:time actualTime:&actualTime error:&err];
        // Do what you want with the image, for example save it as UIImage
        UIImage *generatedImage = [[UIImage alloc] initWithCGImage:image];
        CGImageRelease(image);
    }
}
You can easily get the FPS of a video using this code:
float fps = 0.00;
if (asset) {
    AVAssetTrack *videoATrack = [asset tracksWithMediaType:AVMediaTypeVideo][0];
    if (videoATrack)
    {
        fps = [videoATrack nominalFrameRate];
    }
}
Hope that helps someone who is asking how to get all frames from a video, or just some specific frames (with CMTime, for example). Please bear in mind that saving all frames to an array can heavily impact memory!
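If memory pressure or blocking the calling thread is a concern, AVAssetImageGenerator also has an asynchronous, batched API. A minimal sketch, assuming the same asset as above and an arbitrary choice of one frame per second:
// Sketch: asynchronous, batched thumbnail generation (assumes `asset` as above).
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.requestedTimeToleranceBefore = kCMTimeZero;
generator.requestedTimeToleranceAfter = kCMTimeZero;
// Sample one frame per second (an arbitrary choice for this sketch).
NSMutableArray<NSValue *> *times = [NSMutableArray array];
for (Float64 s = 0; s < CMTimeGetSeconds(asset.duration); s += 1.0) {
    [times addObject:[NSValue valueWithCMTime:CMTimeMakeWithSeconds(s, 600)]];
}
[generator generateCGImagesAsynchronouslyForTimes:times
                                completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded && image) {
        // Process or persist each frame here instead of accumulating them all in memory.
        UIImage *frame = [[UIImage alloc] initWithCGImage:image];
        (void)frame;
    }
}];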

Related

How to use AVPlayerLooper with AVPlayerItemVideoOutput?

AVPlayerLooper accepts a template AVPlayerItem and an AVQueuePlayer as setup parameters, then it internally manipulates the items of the queue, and the player's currentItem is constantly changing.
This works perfectly with AVPlayerLayer, which accepts the looped player as a parameter and just renders it, but how can I use it with AVPlayerItemVideoOutput, which has to be attached to a single AVPlayerItem while the player holds several of them internally? How do I reproduce the same thing AVPlayerLayer does internally?
AVPlayerLooper setup example from docs
NSString *videoFile = [[NSBundle mainBundle] pathForResource:@"example" ofType:@"mov"];
NSURL *videoURL = [NSURL fileURLWithPath:videoFile];
_playerItem = [AVPlayerItem playerItemWithURL:videoURL];
_player = [AVQueuePlayer queuePlayerWithItems:@[_playerItem]];
_playerLooper = [AVPlayerLooper playerLooperWithPlayer:_player templateItem:_playerItem];
_playerLayer = [AVPlayerLayer playerLayerWithPlayer:_player];
_playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:_playerLayer];
[_player play];
This is how AVPlayerItemVideoOutput is supposed to be used:
[item addOutput:_videoOutput];
The only workaround I came up with is to observe changes of currentItem and, each time, detach the video output from the old item and attach it to the new one, as in the example below, but this apparently defeats the gapless playback I'm trying to achieve.
- (void)observeValueForKeyPath:(NSString*)path
                      ofObject:(id)object
                        change:(NSDictionary*)change
                       context:(void*)context {
    if (context == currentItemContext) {
        AVPlayerItem* newItem = [change objectForKey:NSKeyValueChangeNewKey];
        AVPlayerItem* oldItem = [change objectForKey:NSKeyValueChangeOldKey];
        if (oldItem.status == AVPlayerItemStatusReadyToPlay) {
            [oldItem removeOutput:_videoOutput]; // detach from the outgoing item
        }
        if (newItem.status == AVPlayerItemStatusReadyToPlay) {
            [newItem addOutput:_videoOutput];    // attach to the incoming item
        }
        [self removeItemObservers:oldItem];
        [self addItemObservers:newItem];
    }
}
For more context, I'm trying to come up with a fix for Flutter's video_player plugin: https://github.com/flutter/flutter/issues/72878
The plugin's code can be found here: https://github.com/flutter/plugins/blob/172338d02b177353bf517e5826cf6a25b5f0d887/packages/video_player/video_player/ios/Classes/FLTVideoPlayerPlugin.m
You can do this by subclassing AVQueuePlayer (yay OOP) and creating and adding AVPlayerItemVideoOutputs there, as needed. I've never seen multiple AVPlayerItemVideoOutputs before, but memory consumption seems reasonable and everything works.
@interface OutputtingQueuePlayer : AVQueuePlayer
@end

@implementation OutputtingQueuePlayer

- (void)insertItem:(AVPlayerItem *)item afterItem:(nullable AVPlayerItem *)afterItem
{
    if (item.outputs.count == 0) {
        NSLog(@"Creating AVPlayerItemVideoOutput");
        AVPlayerItemVideoOutput *videoOutput = [[AVPlayerItemVideoOutput alloc] initWithOutputSettings:nil]; // or whatever
        [item addOutput:videoOutput];
    }
    [super insertItem:item afterItem:afterItem];
}

@end
The current output is accessed like so:
AVPlayerItemVideoOutput *videoOutput = _player.currentItem.outputs.firstObject;
CVPixelBufferRef pixelBuffer = [videoOutput copyPixelBufferForItemTime:_player.currentTime itemTimeForDisplay:nil];
// do something with pixelBuffer here
CVPixelBufferRelease(pixelBuffer);
and configuration becomes:
_playerItem = [AVPlayerItem playerItemWithURL:videoURL];
_player = [OutputtingQueuePlayer queuePlayerWithItems:@[_playerItem]];
_playerLooper = [AVPlayerLooper playerLooperWithPlayer:_player templateItem:_playerItem];
_playerLayer = [AVPlayerLayer playerLayerWithPlayer:_player];
[self.view.layer addSublayer:_playerLayer];
[_player play];
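For completeness, the usual way to drain the output continuously is to poll it from a CADisplayLink. A rough sketch, where _displayLink and the onDisplayLink: selector are illustrative names and not part of the plugin:
// Rough sketch: poll the current item's output once per display refresh.
- (void)startPolling {
    _displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(onDisplayLink:)];
    [_displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)onDisplayLink:(CADisplayLink *)link {
    AVPlayerItemVideoOutput *output = (AVPlayerItemVideoOutput *)_player.currentItem.outputs.firstObject;
    if (![output isKindOfClass:[AVPlayerItemVideoOutput class]]) return;
    CMTime itemTime = [output itemTimeForHostTime:CACurrentMediaTime()];
    if (![output hasNewPixelBufferForItemTime:itemTime]) return;
    CVPixelBufferRef pixelBuffer = [output copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
    if (pixelBuffer) {
        // Hand the frame to your renderer / texture cache here.
        CVPixelBufferRelease(pixelBuffer);
    }
}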

iOS AVPlayer slow down

I am using AVPlayer to play an online video in my project. The video plays well. Now I want to reduce/increase the FPS of the video. Below is the code I am using:
self.asset = [AVAsset assetWithURL:self.videoUrl];
// the video player
self.player = [AVPlayer playerWithURL:self.videoUrl];
self.player.actionAtItemEnd = AVPlayerActionAtItemEndNone;
self.playerLayer = [AVPlayerLayer playerLayerWithPlayer:self.player];
self.playerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(playerItemDidReachEnd:)
                                             name:AVPlayerItemDidPlayToEndTimeNotification
                                           object:[self.player currentItem]];

self.playerLayer.frame = CGRectMake(0, 0, self.view.frame.size.width, self.myPlayerView.frame.size.height);
[self.myPlayerView.layer addSublayer:self.playerLayer];

- (void)playerItemDidReachEnd:(NSNotification *)notification {
    AVPlayerItem *p = [notification object];
    [p seekToTime:kCMTimeZero];
}
Now how should I reduce/increase the fps for the online video?
You can do something like this:
-(float)getFrameRateFromAVPlayer
{
    float fps = 0.00;
    AVAsset *asset = self.queuePlayer.currentItem.asset;
    if (asset) {
        AVAssetTrack *videoATrack = [[asset tracksWithMediaType:AVMediaTypeVideo] lastObject];
        if (videoATrack)
        {
            fps = videoATrack.nominalFrameRate;
        }
    }
    return fps;
}
OR
AVPlayerItem *item = self.player.currentItem; // your current item
float fps = 0.00;
for (AVPlayerItemTrack *track in item.tracks) {
    if ([track.assetTrack.mediaType isEqualToString:AVMediaTypeVideo]) {
        fps = track.currentVideoFrameRate;
    }
}
Hope this will help :)
AVPlayer lets you set the current playback rate. It accepts a range of values to control the current AVPlayerItem, such as slow forward, fast forward, or reverse with negative rates. As stated in the documentation, you should check whether the current item actually supports those playback modes.
Please check it out. The link for your reference: https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVPlayer_Class/index.html#//apple_ref/occ/instp/AVPlayer/rate
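As a rough illustration of that advice (a sketch, not taken from the linked docs; it assumes the self.player property from the question), you would check the item's capability flags before changing the rate:
// Sketch: adjust playback speed only if the current item supports it.
AVPlayerItem *item = self.player.currentItem;
if (item.canPlaySlowForward) {
    self.player.rate = 0.5f; // half speed
} else if (item.canPlayFastForward) {
    self.player.rate = 2.0f; // double speed
}
// Note: rate changes the playback speed; it does not re-encode the stream at a different FPS.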

AVAssetExportSession combine video files and freeze frame between videos

I have an app which combines video files together to make one long video. There can be a delay between videos (e.g. V1 starts at t=0s and runs for 5 seconds, V2 starts at t=10s). In this case, I want the video to freeze on the last frame of V1 until V2 starts.
I'm using the code below, but between videos, the whole video goes white.
Any ideas how I can get the effect I'm looking for?
Thanks!
@interface VideoJoins : NSObject

- (instancetype)initWithURL:(NSURL*)url
                   andDelay:(NSTimeInterval)delay;

@property (nonatomic, strong) NSURL* url;
@property (nonatomic) NSTimeInterval delay;

@end
and
+(void)joinVideosSequentially:(NSArray*)videoJoins
withFileType:(NSString*)fileType
toOutput:(NSURL*)outputVideoURL
onCompletion:(dispatch_block_t) onCompletion
onError:(ErrorBlock) onError
onCancel:(dispatch_block_t) onCancel
{
//From original question on http://stackoverflow.com/questions/6575128/how-to-combine-video-clips-with-different-orientation-using-avfoundation
// Didn't add support for portrait+landscape.
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
CMTime startTime = kCMTimeZero;
/*videoClipPaths is a array of paths of the video clips recorded*/
//for loop to combine clips into a single video
for (NSInteger i=0; i < [videoJoins count]; i++)
{
VideoJoins* vj = videoJoins[i];
NSURL *url = vj.url;
NSTimeInterval nextDelayTI = 0;
if(i+1 < [videoJoins count])
{
VideoJoins* vjNext = videoJoins[i+1];
nextDelayTI = vjNext.delay;
}
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
CMTime assetDuration = [asset duration];
CMTime assetDurationWithNextDelay = assetDuration;
if(nextDelayTI != 0)
{
CMTime nextDelay = CMTimeMakeWithSeconds(nextDelayTI, 1000000);
assetDurationWithNextDelay = CMTimeAdd(assetDuration, nextDelay);
}
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
//set the orientation
if(i == 0)
{
[compositionVideoTrack setPreferredTransform:videoTrack.preferredTransform];
}
BOOL ok = [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDurationWithNextDelay) ofTrack:videoTrack atTime:startTime error:nil];
ok = [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDuration) ofTrack:audioTrack atTime:startTime error:nil];
startTime = CMTimeAdd(startTime, assetDurationWithNextDelay);
}
//Delete output video if it exists
NSString* outputVideoString = [outputVideoURL path]; // use the filesystem path, not the file:// URL string
if ([[NSFileManager defaultManager] fileExistsAtPath:outputVideoString])
{
[[NSFileManager defaultManager] removeItemAtPath:outputVideoString error:nil];
}
//export the combined video
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:composition
presetName:AVAssetExportPresetHighestQuality];
exporter.outputURL = outputVideoURL;
exporter.outputFileType = fileType;
exporter.shouldOptimizeForNetworkUse = YES;
[exporter exportAsynchronouslyWithCompletionHandler:^(void)
{
switch (exporter.status)
{
case AVAssetExportSessionStatusCompleted: {
onCompletion();
break;
}
case AVAssetExportSessionStatusFailed:
{
NSLog(#"Export Failed");
NSError* err = exporter.error;
NSLog(#"ExportSessionError: %#", [err localizedDescription]);
onError(err);
break;
}
case AVAssetExportSessionStatusCancelled:
NSLog(#"Export Cancelled");
NSLog(#"ExportSessionError: %#", [exporter.error localizedDescription]);
onCancel();
break;
}
}];
}
EDIT: Got it working. Here is how I extract the images and generate the videos from those images:
+ (void)writeImageAsMovie:(UIImage*)image
toPath:(NSURL*)url
fileType:(NSString*)fileType
duration:(NSTimeInterval)duration
completion:(VoidBlock)completion
{
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:url
fileType:fileType
error:&error];
NSParameterAssert(videoWriter);
CGSize size = image.size;
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
//Write samples:
CMTime halfTime = CMTimeMakeWithSeconds(duration/2, 100000);
CMTime endTime = CMTimeMakeWithSeconds(duration, 100000);
CVPixelBufferRef buffer = [VideoCreator pixelBufferFromCGImage:image.CGImage];
[adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
[adaptor appendPixelBuffer:buffer withPresentationTime:halfTime];
[adaptor appendPixelBuffer:buffer withPresentationTime:endTime];
//Finish the session:
[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:endTime];
[videoWriter finishWritingWithCompletionHandler:^{
if(videoWriter.error)
{
NSLog(#"Error:%#", [error localizedDescription]);
}
if(completion)
{
completion();
}
}];
}
+(void)generateVideoImageFromURL:(NSURL*)url
atTime:(CMTime)thumbTime
withMaxSize:(CGSize)maxSize
completion:(ImageBlock)handler
{
AVURLAsset *asset=[[AVURLAsset alloc] initWithURL:url options:nil];
if(!asset)
{
if(handler)
{
handler(nil);
}
return;
}
if(CMTIME_IS_POSITIVE_INFINITY(thumbTime))
{
thumbTime = asset.duration;
}
else if(CMTIME_IS_NEGATIVE_INFINITY(thumbTime) || CMTIME_IS_INVALID(thumbTime) || CMTIME_IS_INDEFINITE(thumbTime))
{
thumbTime = CMTimeMake(0, 30);
}
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform=TRUE;
generator.maximumSize = maxSize;
CMTime actualTime;
NSError* error;
CGImageRef image = [generator copyCGImageAtTime:thumbTime actualTime:&actualTime error:&error];
UIImage *thumb = [[UIImage alloc] initWithCGImage:image];
CGImageRelease(image);
if(handler)
{
handler(thumb);
}
}
AVMutableComposition can only stitch videos together. I did it by doing two things:
Extracting the last frame of the first video as an image.
Making a video from that image (the duration depends on your requirement).
Then you can compose these three videos (V1, V2 and your single-image video). Both tasks are very easy to do.
For extracting the image out of the video, look at this link. If you don't want to use MPMoviePlayerController, which is used by the accepted answer, then look at the other answer by Steve.
For making a video from the image, check out this link. That question is about an audio issue, but I don't think you need audio, so just look at the method mentioned in the question itself.
UPDATE:
There is an easier way, but it comes with a disadvantage. You can have two AVPlayers. The first plays your video, which has white frames in between. The second sits behind it, paused at the last frame of video 1. So when the middle part comes, you will see the second AVPlayer showing the last frame, and as a whole it will look like video 1 is paused. Trust me, the naked eye can't tell when the players are switched. The obvious disadvantage is that your exported video will still contain the blank frames, so if you are only going to play it back inside your app, you can go with this approach.
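A minimal sketch of that two-player idea, where v1Asset, self.freezePlayer and self.mainPlayerLayer are illustrative names for things assumed to already exist in your app:
// Sketch: keep a second, paused player showing V1's last frame behind the main player.
AVPlayerItem *freezeItem = [AVPlayerItem playerItemWithAsset:v1Asset];
self.freezePlayer = [AVPlayer playerWithPlayerItem:freezeItem];

AVPlayerLayer *freezeLayer = [AVPlayerLayer playerLayerWithPlayer:self.freezePlayer];
freezeLayer.frame = self.view.bounds;
// Insert below the main player's layer so it only shows through during the gap.
[self.view.layer insertSublayer:freezeLayer below:self.mainPlayerLayer];

// Park the hidden player on V1's last frame and leave it paused.
CMTime lastFrame = CMTimeSubtract(v1Asset.duration, CMTimeMake(1, 30));
[self.freezePlayer seekToTime:lastFrame toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
[self.freezePlayer pause];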
The first frame of a video asset is always black or white, so skip it by starting the time range one frame in:
CMTime delta = CMTimeMake(1, 25); // 1 frame (if fps = 25)
CMTimeRange timeRangeInVideoAsset = CMTimeRangeMake(delta, clipVideoTrack.timeRange.duration);
nextVideoClipStartTime = CMTimeAdd(nextVideoClipStartTime, timeRangeInVideoAsset.duration);
Merged more than 400 short videos into one this way.

Looping a Video in AVFoundation AVSampleBufferDisplayLayer

I am trying to play a video in a loop on an AVSampleBufferDisplayLayer. I can get it to play through once with no problem, but when I try to loop it, it doesn't keep playing.
Per the answer to "AVFoundation to reproduce a video loop", there isn't a way to rewind the AVAssetReader, so I re-create it. (I did see the answer to "Looping a video with AVFoundation AVPlayer?", but AVPlayer is more full-featured. I am reading from a file, but still want the AVSampleBufferDisplayLayer.)
One hypothesis is that I need to strip some of the H264 headers, but I have no idea whether that would help (or how). Another is that it has something to do with the CMTimebase, but I've tried several things to no avail.
Code below, based on Apple's WWDC talk on Direct Access to Video Encoding:
- (void)viewDidLoad {
[super viewDidLoad];
NSString *filepath = [[NSBundle mainBundle] pathForResource:@"sample-mp4" ofType:@"mp4"];
NSURL *fileURL = [NSURL fileURLWithPath:filepath];
AVAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
UIView *view = self.view;
self.videoLayer = [[AVSampleBufferDisplayLayer alloc] init];
self.videoLayer.bounds = view.bounds;
self.videoLayer.position = CGPointMake(CGRectGetMidX(view.bounds), CGRectGetMidY(view.bounds));
self.videoLayer.videoGravity = AVLayerVideoGravityResizeAspect;
self.videoLayer.backgroundColor = [[UIColor greenColor] CGColor];
CMTimebaseRef controlTimebase;
CMTimebaseCreateWithMasterClock( CFAllocatorGetDefault(), CMClockGetHostTimeClock(), &controlTimebase );
self.videoLayer.controlTimebase = controlTimebase;
CMTimebaseSetTime(self.videoLayer.controlTimebase, CMTimeMake(5, 1));
CMTimebaseSetRate(self.videoLayer.controlTimebase, 1.0);
[[view layer] addSublayer:_videoLayer];
dispatch_queue_t assetQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0); //??? right queue?
__block AVAssetReader *assetReaderVideo = [self createAssetReader:asset];
__block AVAssetReaderTrackOutput *outVideo = [assetReaderVideo outputs][0];
if( [assetReaderVideo startReading] )
{
[_videoLayer requestMediaDataWhenReadyOnQueue: assetQueue usingBlock: ^{
while( [_videoLayer isReadyForMoreMediaData] )
{
CMSampleBufferRef sampleVideo;
if ( ([assetReaderVideo status] == AVAssetReaderStatusReading) && ( sampleVideo = [outVideo copyNextSampleBuffer]) ) {
[_videoLayer enqueueSampleBuffer:sampleVideo];
CFRelease(sampleVideo);
CMTimeShow(CMTimebaseGetTime(_videoLayer.controlTimebase));
}
else {
[_videoLayer stopRequestingMediaData];
//CMTimebaseSetTime(_videoLayer.controlTimebase, CMTimeMake(5, 1));
//CMTimebaseSetRate(self.videoLayer.controlTimebase, 1.0);
//CMTimeShow(CMTimebaseGetTime(_videoLayer.controlTimebase));
assetReaderVideo = [self createAssetReader:asset];
outVideo = [assetReaderVideo outputs][0];
[assetReaderVideo startReading];
//sampleVideo = [outVideo copyNextSampleBuffer];
//[_videoLayer enqueueSampleBuffer:sampleVideo];
}
}
}];
}
}
-(AVAssetReader *)createAssetReader:(AVAsset*)asset {
NSError *error=nil;
AVAssetReader *assetReaderVideo = [[AVAssetReader alloc] initWithAsset:asset error:&error];
NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];
AVAssetReaderTrackOutput *outVideo = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTracks[0] outputSettings:nil]; //dic];
[outVideo res]
[assetReaderVideo addOutput:outVideo];
return assetReaderVideo;
}
Thanks so much.
Try making the loop in Swift, then bridge the Objective-C files with the Swift files. Google has many answers on bridging and looping, so just search for it with Swift.
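Another avenue (an untested sketch, staying in Objective-C and building on the commented-out lines in the question) is to flush the layer and rewind its control timebase before restarting the reader, so that the new samples, whose timestamps start near zero again, are not treated as late:
// Untested sketch for the else-branch above: flush queued frames and rewind the
// control timebase before re-creating the reader, then resume the clock.
[_videoLayer flush];
CMTimebaseSetRate(_videoLayer.controlTimebase, 0.0);
CMTimebaseSetTime(_videoLayer.controlTimebase, kCMTimeZero);

assetReaderVideo = [self createAssetReader:asset];
outVideo = [assetReaderVideo outputs][0];
[assetReaderVideo startReading];

CMTimebaseSetRate(_videoLayer.controlTimebase, 1.0);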

Decode audio samples from an HLS stream on iOS?

I am trying to decode audio samples from a remote HLS (m3u8) stream on an iOS device for further processing of the data, e.g. recording to a file.
The stream http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8 is used as a reference.
By using an AVURLAsset in combination with AVPlayer I am able to show the video as a preview on a CALayer.
I can also get the raw video data (CVPixelBuffer) by using AVPlayerItemVideoOutput. The audio is audible over the speaker of the iOS device as well.
This is the code I am using at the moment for the AVURLAsset and AVPlayer:
NSURL* url = [NSURL URLWithString:@"http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8"];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
NSString *tracksKey = @"tracks";

[asset loadValuesAsynchronouslyForKeys:@[tracksKey] completionHandler: ^{
    dispatch_async(dispatch_get_main_queue(), ^{
        NSError* error = nil;
        AVKeyValueStatus status = [asset statusOfValueForKey:tracksKey error:&error];
        if (status == AVKeyValueStatusLoaded) {
            NSDictionary *settings = @{
                (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
                @"IOSurfaceOpenGLESTextureCompatibility": @YES,
                @"IOSurfaceOpenGLESFBOCompatibility": @YES,
            };
            AVPlayerItemVideoOutput* output = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:settings];
            AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:asset];
            [playerItem addOutput:output];
            AVPlayer* player = [AVPlayer playerWithPlayerItem:playerItem];
            [player setVolume: 0.0]; // no preview audio
            self.playerItem = playerItem;
            self.player = player;
            self.playerItemVideoOutput = output;

            AVPlayerLayer* playerLayer = [AVPlayerLayer playerLayerWithPlayer: player];
            [self.preview.layer addSublayer: playerLayer];
            [playerLayer setFrame: self.preview.bounds];
            [playerLayer setVideoGravity: AVLayerVideoGravityResizeAspectFill];
            [self setPlayerLayer: playerLayer];

            [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playerItemNewAccessLogEntry:) name:AVPlayerItemNewAccessLogEntryNotification object:self.playerItem];
            [_player addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew context:&PlayerStatusContext];
            [_playerItem addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew context:&PlayerItemStatusContext];
            [_playerItem addObserver:self forKeyPath:@"tracks" options:0 context:nil];
        }
    });
}];

-(void) observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
    if (self.player.status == AVPlayerStatusReadyToPlay && context == &PlayerStatusContext) {
        [self.player play];
    }
}
To get the raw video data of the HLS stream I use:
CVPixelBufferRef buffer = [self.playerItemVideoOutput copyPixelBufferForItemTime:self.playerItem.currentTime itemTimeForDisplay:nil];
if (!buffer) {
return;
}
CMSampleBufferRef newSampleBuffer = NULL;
CMSampleTimingInfo timingInfo = kCMTimingInfoInvalid;
timingInfo.duration = CMTimeMake(33, 1000);
int64_t ts = timestamp * 1000.0;
timingInfo.decodeTimeStamp = CMTimeMake(ts, 1000);
timingInfo.presentationTimeStamp = timingInfo.decodeTimeStamp;
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(
NULL, buffer, &videoInfo);
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
buffer,
true,
NULL,
NULL,
videoInfo,
&timingInfo,
&newSampleBuffer);
// do something here with sample buffer...
CFRelease(buffer);
CFRelease(videoInfo); // release the format description created above
CFRelease(newSampleBuffer);
Now I would like to get access to the raw audio data as well, but had no luck so far.
I tried to use MTAudioProcessingTap as described here:
http://venodesigns.net/2014/01/08/recording-live-audio-streams-on-ios/
Unfortunately I could not get this to work properly. I succeeded in getting access to the underlying assetTrack of the AVPlayerItem, but the "prepare" and "process" callback methods of the MTAudioProcessingTap never get called. I am not sure if I am on the right track here.
AVPlayer is playing the audio of the stream via the speaker, so internally the audio seems to be available as raw audio data. Is it possible to get access to the raw audio data? If it is not possible with AVPlayer, are there any other approaches?
If possible, I would not like to use ffmpeg, because the hardware decoder of the iOS device should be used for the decoding of the stream.
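For reference, this is roughly how the tap is wired up (a sketch based on the venodesigns approach; attachAudioTapToItem: is an illustrative name, and for HLS the asset-level audio track is often not populated, which may be exactly why the callbacks never fire here):
#import <AVFoundation/AVFoundation.h>
#import <MediaToolbox/MediaToolbox.h>

// Minimal tap callbacks: just pull the source audio so `process` sees raw PCM.
static void tapInit(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut) {
    *tapStorageOut = clientInfo;
}
static void tapFinalize(MTAudioProcessingTapRef tap) {}
static void tapPrepare(MTAudioProcessingTapRef tap, CMItemCount maxFrames,
                       const AudioStreamBasicDescription *format) {}
static void tapUnprepare(MTAudioProcessingTapRef tap) {}
static void tapProcess(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
                       MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut,
                       CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut) {
    // Raw audio ends up in bufferListInOut; write it to a file or ring buffer here.
    MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                       flagsOut, NULL, numberFramesOut);
}

// Attach the tap to an audio track of the player item (assumes the `playerItem` from above).
- (void)attachAudioTapToItem:(AVPlayerItem *)playerItem {
    AVAssetTrack *audioTrack = [[playerItem.asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
    if (!audioTrack) return; // for HLS this is frequently nil, hence the problem described above

    MTAudioProcessingTapCallbacks callbacks = {
        .version = kMTAudioProcessingTapCallbacksVersion_0,
        .clientInfo = (__bridge void *)self,
        .init = tapInit,
        .finalize = tapFinalize,
        .prepare = tapPrepare,
        .unprepare = tapUnprepare,
        .process = tapProcess,
    };
    MTAudioProcessingTapRef tap = NULL;
    if (MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                   kMTAudioProcessingTapCreationFlag_PostEffects, &tap) != noErr) {
        return;
    }

    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
    params.audioTapProcessor = tap;

    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    audioMix.inputParameters = @[params];
    playerItem.audioMix = audioMix;
    CFRelease(tap); // the audio mix retains the tap
}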
