Looping a Video in AVFoundation AVSampleBufferDisplayLayer - ios

I am trying to play a video in a loop on an AVSampleBufferDisplayLayer. I can get it to play through once with no problem, but when I try to loop it, it doesn't keep playing.
Per the answer to AVFoundation to reproduce a video loop, there isn't a way to rewind the AVAssetReader, so I re-create it. (I did see the answer to Looping a video with AVFoundation AVPlayer?, but AVPlayer is more full-featured. I am reading from a file, but still want the AVSampleBufferDisplayLayer.)
One hypothesis is that I need to strip some of the H264 headers, but I have no idea if that'll help (or how). Another is that it has something to do with the CMTimebase, but I've tried several things to no avail.
Code below, based on Apple's WWDC talk on Direct Access to Video Encoding:
- (void)viewDidLoad {
[super viewDidLoad];
NSString *filepath = [[NSBundle mainBundle] pathForResource:@"sample-mp4" ofType:@"mp4"];
NSURL *fileURL = [NSURL fileURLWithPath:filepath];
AVAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
UIView *view = self.view;
self.videoLayer = [[AVSampleBufferDisplayLayer alloc] init];
self.videoLayer.bounds = view.bounds;
self.videoLayer.position = CGPointMake(CGRectGetMidX(view.bounds), CGRectGetMidY(view.bounds));
self.videoLayer.videoGravity = AVLayerVideoGravityResizeAspect;
self.videoLayer.backgroundColor = [[UIColor greenColor] CGColor];
CMTimebaseRef controlTimebase;
CMTimebaseCreateWithMasterClock( CFAllocatorGetDefault(), CMClockGetHostTimeClock(), &controlTimebase );
self.videoLayer.controlTimebase = controlTimebase;
CMTimebaseSetTime(self.videoLayer.controlTimebase, CMTimeMake(5, 1));
CMTimebaseSetRate(self.videoLayer.controlTimebase, 1.0);
[[view layer] addSublayer:_videoLayer];
dispatch_queue_t assetQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0); //??? right queue?
__block AVAssetReader *assetReaderVideo = [self createAssetReader:asset];
__block AVAssetReaderTrackOutput *outVideo = [assetReaderVideo outputs][0];
if( [assetReaderVideo startReading] )
{
[_videoLayer requestMediaDataWhenReadyOnQueue: assetQueue usingBlock: ^{
while( [_videoLayer isReadyForMoreMediaData] )
{
CMSampleBufferRef sampleVideo;
if ( ([assetReaderVideo status] == AVAssetReaderStatusReading) && ( sampleVideo = [outVideo copyNextSampleBuffer]) ) {
[_videoLayer enqueueSampleBuffer:sampleVideo];
CFRelease(sampleVideo);
CMTimeShow(CMTimebaseGetTime(_videoLayer.controlTimebase));
}
else {
[_videoLayer stopRequestingMediaData];
//CMTimebaseSetTime(_videoLayer.controlTimebase, CMTimeMake(5, 1));
//CMTimebaseSetRate(self.videoLayer.controlTimebase, 1.0);
//CMTimeShow(CMTimebaseGetTime(_videoLayer.controlTimebase));
assetReaderVideo = [self createAssetReader:asset];
outVideo = [assetReaderVideo outputs][0];
[assetReaderVideo startReading];
//sampleVideo = [outVideo copyNextSampleBuffer];
//[_videoLayer enqueueSampleBuffer:sampleVideo];
}
}
}];
}
}
-(AVAssetReader *)createAssetReader:(AVAsset*)asset {
NSError *error=nil;
AVAssetReader *assetReaderVideo = [[AVAssetReader alloc] initWithAsset:asset error:&error];
NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];
AVAssetReaderTrackOutput *outVideo = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTracks[0] outputSettings:nil];
[assetReaderVideo addOutput:outVideo];
return assetReaderVideo;
}
Thanks so much.

Try making the loop in Swift, then bridge the Objective-C files with the Swift files. Google has many answers on bridging and looping, so just search for it with Swift.

Related

How to use AVPlayerLooper with AVPlayerItemVideoOutput?

AVPlayerLooper accepts a template AVPlayerItem and an AVQueuePlayer as setup parameters, then it internally manipulates the items of the queue, and the player is constantly changing its currentItem.
This works perfectly with AVPlayerLayer, which accepts this looped player as a parameter and just renders it, but how can I use it with AVPlayerItemVideoOutput, which is attached to an AVPlayerItem, when the player has multiple items inside it? How do I reproduce the same thing AVPlayerLayer does internally?
AVPlayerLooper setup example from docs
NSString *videoFile = [[NSBundle mainBundle] pathForResource:@"example" ofType:@"mov"];
NSURL *videoURL = [NSURL fileURLWithPath:videoFile];
_playerItem = [AVPlayerItem playerItemWithURL:videoURL];
_player = [AVQueuePlayer queuePlayerWithItems:@[_playerItem]];
_playerLooper = [AVPlayerLooper playerLooperWithPlayer:_player templateItem:_playerItem];
_playerLayer = [AVPlayerLayer playerLayerWithPlayer:_player];
_playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:_playerLayer];
[_player play];
This is how AVPlayerItemVideoOutput is supposed to be used:
[item addOutput:_videoOutput];
The only workaround I came up with is to observe changes of the currentItem and each time detach the video output from the old item and attach it to the new one, as in the example below, but this apparently defeats the gapless playback which I'm trying to achieve.
- (void)observeValueForKeyPath:(NSString*)path
ofObject:(id)object
change:(NSDictionary*)change
context:(void*)context {
if (context == currentItemContext) {
AVPlayerItem* newItem = [change objectForKey:NSKeyValueChangeNewKey];
AVPlayerItem* oldItem = [change objectForKey:NSKeyValueChangeOldKey];
if(oldItem.status == AVPlayerItemStatusReadyToPlay) {
[oldItem removeOutput:_videoOutput]; // detach the output from the item that just finished
}
if(newItem.status == AVPlayerItemStatusReadyToPlay) {
[newItem addOutput:_videoOutput];
}
[self removeItemObservers:oldItem];
[self addItemObservers:newItem];
}
}
For more context, I'm trying to come up with a fix for flutter's video_player plugin https://github.com/flutter/flutter/issues/72878
Plugin's code can be found here https://github.com/flutter/plugins/blob/172338d02b177353bf517e5826cf6a25b5f0d887/packages/video_player/video_player/ios/Classes/FLTVideoPlayerPlugin.m
You can do this by subclassing AVQueuePlayer (yay OOP) and creating and adding AVPlayerItemVideoOutputs there, as needed. I've never seen multiple AVPlayerItemVideoOutputs before, but memory consumption seems reasonable and everything works.
@interface OutputtingQueuePlayer : AVQueuePlayer
@end
@implementation OutputtingQueuePlayer
- (void)insertItem:(AVPlayerItem *)item afterItem:(nullable AVPlayerItem *)afterItem;
{
if (item.outputs.count == 0) {
NSLog(@"Creating AVPlayerItemVideoOutput");
AVPlayerItemVideoOutput *videoOutput = [[AVPlayerItemVideoOutput alloc] initWithOutputSettings:nil]; // or whatever
[item addOutput:videoOutput];
}
[super insertItem:item afterItem:afterItem];
}
@end
The current output is accessed like so:
AVPlayerItemVideoOutput *videoOutput = _player.currentItem.outputs.firstObject;
CVPixelBufferRef pixelBuffer = [videoOutput copyPixelBufferForItemTime:_player.currentTime itemTimeForDisplay:nil];
// do something with pixelBuffer here
CVPixelBufferRelease(pixelBuffer);
and configuration becomes:
_playerItem = [AVPlayerItem playerItemWithURL:videoURL];
_player = [OutputtingQueuePlayer queuePlayerWithItems:@[_playerItem]];
_playerLooper = [AVPlayerLooper playerLooperWithPlayer:_player templateItem:_playerItem];
_playerLayer = [AVPlayerLayer playerLayerWithPlayer:_player];
[self.view.layer addSublayer:_playerLayer];
[_player play];

AVQueuePlayer memory issue while looping a playlist in iOS

In my iOS application, I'm trying to play a list of videos downloaded to the application's Documents directory. To achieve that, I used AVQueuePlayer. Following is my code, which leads to an app crash after looping 6 or 7 times.
@interface PlayYTVideoViewController () <NSURLConnectionDataDelegate, UITableViewDataSource, UITableViewDelegate>
{
AVQueuePlayer *avQueuePlayer;
}
- (void)playlistLoop
{
NSLog(@"%s - %d", __PRETTY_FUNCTION__, __LINE__);
lastPlayedVideoNumber = 0;
_loadingVideoLabel.hidden = YES;
avPlayerItemsMutArray = [[NSMutableArray alloc] init];
for (NSString *videoPath in clipUrlsMutArr)
{
NSURL *vidPathUrl = [NSURL fileURLWithPath:videoPath];
AVPlayerItem *avpItem = [AVPlayerItem playerItemWithURL:vidPathUrl];
[avPlayerItemsMutArray addObject:avpItem];
}
avPlayerItemsArray = [avPlayerItemsMutArray copy];
for(AVPlayerItem *item in avPlayerItemsArray)
{
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(itemDidPlayToEndTime:) name:AVPlayerItemDidPlayToEndTimeNotification object:item];
}
avQueuePlayer = [AVQueuePlayer queuePlayerWithItems:avPlayerItemsArray];
avQueuePlayer.actionAtItemEnd = AVPlayerActionAtItemEndAdvance;
introVideoLayer = [AVPlayerLayer playerLayerWithPlayer:avQueuePlayer];
introVideoLayer.frame = _mpIntroVideoView.bounds;
[_mpContainerView.layer addSublayer:introVideoLayer];
[avQueuePlayer play];
}
- (void)itemDidPlayToEndTime:(NSNotification *)notification
{
NSLog(@"%s - %d", __PRETTY_FUNCTION__, __LINE__);
AVPlayerItem *endedAVPlayerItem = [notification object];
[endedAVPlayerItem seekToTime:kCMTimeZero];
for (AVPlayerItem *item in avPlayerItemsArray)
{
if (item == endedAVPlayerItem)
{
lastPlayedVideoNumber++;
break;
}
}
[self reloadVideoClipsTable];
if ([endedAVPlayerItem isEqual:[avPlayerItemsArray lastObject]])
{
[self playlistLoop];
}
}
After hitting the memory issue, I tried to make some changes to the above code.
I tried making the avQueuePlayer variable public and declaring it as a strong property:
@property (strong, nonatomic) AVQueuePlayer *avQueuePlayer;
By doing that I expected the avQueuePlayer variable to remain in memory until it was manually set to nil. But that didn't solve the problem.
Then I tried setting the player, the related arrays, and the layers to nil and recreating them for each new loop session:
if (avPlayerItemsMutArray != nil)
{
avPlayerItemsMutArray = nil;
}
avPlayerItemsMutArray = [[NSMutableArray alloc] init];
if (avPlayerItemsArray != nil)
{
avPlayerItemsArray = nil;
}
avPlayerItemsArray = [avPlayerItemsMutArray copy];
if (avQueuePlayer != nil)
{
avQueuePlayer = nil;
}
avQueuePlayer = [AVQueuePlayer queuePlayerWithItems:avPlayerItemsArray];
if(introVideoLayer != nil)
{
[introVideoLayer removeFromSuperlayer];
introVideoLayer = nil;
}
introVideoLayer = [AVPlayerLayer playerLayerWithPlayer:avQueuePlayer];
But that also didn't help to solve the issue.
Next I tried removing the observer before re-initializing it in a new loop:
if (avPlayerItemsArray != nil)
{
avPlayerItemsArray = nil;
[[NSNotificationCenter defaultCenter] removeObserver:self
name:AVPlayerItemDidPlayToEndTimeNotification
object:nil];
}
But that also didn't help.
Next I used Instruments to find memory usage and leaks. The application does not exceed 18 MB when it crashes, and more than 200 MB remain free. Instruments is a little more complicated, but I still didn't find any memory leaks related to this code.
Actually the error was not with the AVQueuePlayer. In my application I'm listing all the videos in a table below the video-playing view. In that table, each row contains a video thumbnail that I generate with the code below.
+ (UIImage *)createThumbForVideo:(NSString *)vidFileName
{
NSString *videoFolder = [Video getVideoFolder];
NSString *videoFilePath = [videoFolder stringByAppendingFormat:@"/trickbook/videos/edited/%@",vidFileName];
NSURL *url = [NSURL fileURLWithPath:videoFilePath];
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:nil];
AVAssetImageGenerator *generateImg = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generateImg.appliesPreferredTrackTransform = YES;
NSError *error = NULL;
CMTime time = CMTimeMake(1, 65);
CGImageRef refImg = [generateImg copyCGImageAtTime:time actualTime:NULL error:&error];
UIImage *frameImage = [[UIImage alloc] initWithCGImage:refImg];
CGImageRelease(refImg); // the copied CGImage must be released, or it will leak
return frameImage;
}
Every time a video clip ends playing, and also when the playlist begins a new loop, I update the table view. So each time I call the method above, and that's the reason for the memory issue.
As the solution, I call this method only once per video clip and store the returned UIImage in a mutable array (see the caching sketch below). That solved the issue.
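A minimal sketch of that caching idea, shown here with an NSCache keyed by file name rather than a mutable array; the method name is hypothetical and it reuses the existing createThumbForVideo: method above.
static NSCache *thumbnailCache;
+ (UIImage *)cachedThumbForVideo:(NSString *)vidFileName
{
    if (thumbnailCache == nil) {
        thumbnailCache = [[NSCache alloc] init];
    }
    UIImage *thumb = [thumbnailCache objectForKey:vidFileName];
    if (thumb == nil) {
        // Generate the thumbnail only once per clip, then reuse it on every table reload.
        thumb = [self createThumbForVideo:vidFileName];
        if (thumb != nil) {
            [thumbnailCache setObject:thumb forKey:vidFileName];
        }
    }
    return thumb;
}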
The heading of the question and the tags may not quite match the answer, but I thought this was worth keeping as a Q&A rather than deleting the post.

Extract/Record Audio from HLS stream (video) while playing iOS

I am playing HLS streams using AVPlayer, and I also need to record these streams as the user presses a record button.
The approach I am using is to record the audio and video separately, then at the end merge these files to make the final video (a minimal sketch of that merge step is shown just below). This is successful with remote mp4 files.
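For context, a rough sketch of the kind of merge step described above for the mp4 case, assuming separately recorded local files; videoURL, audioURL and outputURL are hypothetical placeholders.
AVURLAsset *videoAsset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
AVURLAsset *audioAsset = [AVURLAsset URLAssetWithURL:audioURL options:nil];
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                                                  preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                                                                  preferredTrackID:kCMPersistentTrackID_Invalid];
NSError *error = nil;
// Lay the recorded video and audio on their own tracks, both starting at time zero.
[videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
                    ofTrack:[videoAsset tracksWithMediaType:AVMediaTypeVideo].firstObject
                     atTime:kCMTimeZero error:&error];
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, audioAsset.duration)
                    ofTrack:[audioAsset tracksWithMediaType:AVMediaTypeAudio].firstObject
                     atTime:kCMTimeZero error:&error];
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:composition
                                                                   presetName:AVAssetExportPresetHighestQuality];
exporter.outputURL = outputURL; // hypothetical destination
exporter.outputFileType = AVFileTypeMPEG4;
[exporter exportAsynchronouslyWithCompletionHandler:^{
    // Check exporter.status / exporter.error here.
}];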
But now, for the HLS (.m3u8) files, I am able to record the video using AVAssetWriter, but I am having problems with the audio recording.
I am using an MTAudioProcessingTap to process the raw audio data and write it to a file. I followed this article. I am able to record remote mp4 audio, but it's not working with HLS streams.
Initially I wasn't able to extract the audio tracks from the stream using AVAssetTrack *audioTrack = [asset tracksWithMediaType:AVMediaTypeAudio][0];
But I was able to extract the audioTracks using KVO to initialize the MTAudioProcessingTap.
-(void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context{
AVPlayer *player = (AVPlayer*) object;
if (player.status == AVPlayerStatusReadyToPlay)
{
NSLog(@"Ready to play");
self.previousAudioTrackID = 0;
__weak typeof (self) weakself = self;
timeObserverForTrack = [player addPeriodicTimeObserverForInterval:CMTimeMakeWithSeconds(1, 100) queue:nil usingBlock:^(CMTime time)
{
@try {
for(AVPlayerItemTrack* track in [weakself.avPlayer.currentItem tracks]) {
if([track.assetTrack.mediaType isEqualToString:AVMediaTypeAudio])
weakself.currentAudioPlayerItemTrack = track;
}
AVAssetTrack* audioAssetTrack = weakself.currentAudioPlayerItemTrack.assetTrack;
weakself.currentAudioTrackID = audioAssetTrack.trackID;
if(weakself.previousAudioTrackID != weakself.currentAudioTrackID) {
NSLog(@":::::::::::::::::::::::::: Audio track changed : %d",weakself.currentAudioTrackID);
weakself.previousAudioTrackID = weakself.currentAudioTrackID;
weakself.audioTrack = audioAssetTrack;
/// Use this audio track to initialize MTAudioProcessingTap
}
}
@catch (NSException *exception) {
NSLog(@"Exception Trap ::::: Audio tracks not found!");
}
}];
}
}
I am also keeping track of the trackID to check whether the track has changed.
This is how I initialize the MTAudioProcessingTap.
-(void)beginRecordingAudioFromTrack:(AVAssetTrack *)audioTrack{
// Configure an MTAudioProcessingTap to handle things.
MTAudioProcessingTapRef tap;
MTAudioProcessingTapCallbacks callbacks;
callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
callbacks.clientInfo = (__bridge void *)(self);
callbacks.init = init;
callbacks.prepare = prepare;
callbacks.process = process;
callbacks.unprepare = unprepare;
callbacks.finalize = finalize;
OSStatus err = MTAudioProcessingTapCreate(
kCFAllocatorDefault,
&callbacks,
kMTAudioProcessingTapCreationFlag_PostEffects,
&tap
);
if(err) {
NSLog(@"Unable to create the Audio Processing Tap %d", (int)err);
NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain
code:err
userInfo:nil];
NSLog(@"Error: %@", [error description]);
return;
}
// Create an AudioMix and assign it to our currently playing "item", which
// is just the stream itself.
AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
AVMutableAudioMixInputParameters *inputParams = [AVMutableAudioMixInputParameters
audioMixInputParametersWithTrack:audioTrack];
inputParams.audioTapProcessor = tap;
audioMix.inputParameters = @[inputParams];
_audioPlayer.currentItem.audioMix = audioMix;
}
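For reference, the init, prepare, process, unprepare and finalize symbols assigned above are plain C callbacks. A minimal sketch of their shape, based on the MTAudioProcessingTap callback typedefs in MediaToolbox; the actual file-writing logic (for example handing the buffers to an AVAssetWriter) is omitted here.
static void init(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut)
{
    *tapStorageOut = clientInfo; // make the owning object available to the other callbacks
}
static void finalize(MTAudioProcessingTapRef tap) {}
static void prepare(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *processingFormat)
{
    // A good place to configure the audio file / writer input with processingFormat.
}
static void unprepare(MTAudioProcessingTapRef tap) {}
static void process(MTAudioProcessingTapRef tap, CMItemCount numberFrames, MTAudioProcessingTapFlags flags,
                    AudioBufferList *bufferListInOut, CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
{
    // Pull the source audio for this slice; the resulting buffers are what would be written out.
    OSStatus status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, NULL, numberFramesOut);
    if (status != noErr) {
        NSLog(@"MTAudioProcessingTapGetSourceAudio failed: %d", (int)status);
    }
}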
But now, with this audio track, the MTAudioProcessingTap "prepare" and "process" callbacks are never called.
Is the problem with the audioTrack I am getting through KVO?
I would really appreciate it if someone could help me with this, or tell me whether I am using the right approach to record HLS streams.
I found a solution for this and am using it in my app. I wanted to post it earlier but didn't get the time.
To work with HLS you should have some knowledge of what these streams actually are. For that, please see Apple's documentation:
HLS Apple
Here are the steps I am following.
1. First get the m3u8 and parse it.
You can parse it using this helpful kit: M3U8Kit.
Using this kit you can get the M3U8MediaPlaylist or the M3U8MasterPlaylist (if it is a master playlist).
If you get the master playlist, you can also parse it to get the M3U8MediaPlaylist.
- (void)parseM3u8
{
NSString *plainString = [self.url m3u8PlanString];
BOOL isMasterPlaylist = [plainString isMasterPlaylist];
NSError *error;
NSURL *baseURL;
if(isMasterPlaylist)
{
M3U8MasterPlaylist *masterList = [[M3U8MasterPlaylist alloc] initWithContentOfURL:self.url error:&error];
self.masterPlaylist = masterList;
M3U8ExtXStreamInfList *xStreamInfList = masterList.xStreamList;
M3U8ExtXStreamInf *StreamInfo = [xStreamInfList extXStreamInfAtIndex:0];
NSString *URI = StreamInfo.URI;
NSRange range = [URI rangeOfString:@"dailymotion.com"];
NSString *baseURLString = [URI substringToIndex:(range.location+range.length)];
baseURL = [NSURL URLWithString:baseURLString];
plainString = [[NSURL URLWithString:URI] m3u8PlanString];
}
M3U8MediaPlaylist *mediaPlaylist = [[M3U8MediaPlaylist alloc] initWithContent:plainString baseURL:baseURL];
self.mediaPlaylist = mediaPlaylist;
M3U8SegmentInfoList *segmentInfoList = mediaPlaylist.segmentList;
NSMutableArray *segmentUrls = [[NSMutableArray alloc] init];
for (int i = 0; i < segmentInfoList.count; i++)
{
M3U8SegmentInfo *segmentInfo = [segmentInfoList segmentInfoAtIndex:i];
NSString *segmentURI = segmentInfo.URI;
NSURL *mediaURL = [baseURL URLByAppendingPathComponent:segmentURI];
[segmentUrls addObject:mediaURL];
if(!self.segmentDuration)
self.segmentDuration = segmentInfo.duration;
}
self.segmentFilesURLs = segmentUrls;
}
You can see that you get the links to the .ts files by parsing the m3u8.
Now download all the .ts files into a local folder (a download sketch is shown after these steps).
Merge these .ts files into one mp4 file and export it.
You can do that using this wonderful C library
TS2MP4
and then you can delete the .ts files, or keep them if you need them.
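For the download step, a minimal sketch using NSURLSession, assuming the segmentFilesURLs array filled in parseM3u8 above; the method name, destination folder and error handling are illustrative only.
- (void)downloadSegmentsToFolder:(NSURL *)folderURL completion:(void (^)(void))completion
{
    NSURLSession *session = [NSURLSession sharedSession];
    dispatch_group_t group = dispatch_group_create();
    for (NSURL *segmentURL in self.segmentFilesURLs)
    {
        dispatch_group_enter(group);
        NSURLSessionDownloadTask *task = [session downloadTaskWithURL:segmentURL
                                                    completionHandler:^(NSURL *location, NSURLResponse *response, NSError *error) {
            if (location != nil)
            {
                // Keep the original segment name so the .ts files can later be merged in playlist order.
                NSURL *destination = [folderURL URLByAppendingPathComponent:segmentURL.lastPathComponent];
                [[NSFileManager defaultManager] moveItemAtURL:location toURL:destination error:NULL];
            }
            dispatch_group_leave(group);
        }];
        [task resume];
    }
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        if (completion) completion(); // all segment downloads have finished (successfully or not)
    });
}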
This is not a good approach. What you can do is parse the M3U8 link, then try to download the segment files (.ts). If you can get these files, you can merge them to generate an mp4 file.

Error Domain=AVFoundationErrorDomain Code=-11821 "Cannot Decode"

There's a strange behaviour I've found when trying to merge videos with AVFoundation. I'm pretty sure that I've made a mistake somewhere, but I'm too blind to see it. My goal is just to merge 4 videos (later there will be a crossfade transition between them).
Every time I try to export the video I get this error:
Error Domain=AVFoundationErrorDomain Code=-11821 "Cannot Decode" UserInfo=0x7fd94073cc30 {NSLocalizedDescription=Cannot Decode, NSLocalizedFailureReason=The media data could not be decoded. It may be damaged.}
The funniest thing is that if I don't provide AVAssetExportSession with the AVMutableVideoComposition, then everything works fine! I can't understand what I'm doing wrong. The source videos were downloaded from YouTube and have the .mp4 extension. I can play them with MPMoviePlayerController. While checking the source code, please look carefully at the AVMutableVideoComposition.
I was testing this code in Xcode 6.0.1 on iOS simulator.
#import "VideoStitcher.h"
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <AssetsLibrary/AssetsLibrary.h>
@implementation VideoStitcher
{
VideoStitcherCompletionBlock _completionBlock;
AVMutableComposition *_composition;
AVMutableVideoComposition *_videoComposition;
}
- (instancetype)init
{
self = [super init];
if (self)
{
_composition = [AVMutableComposition composition];
_videoComposition = [AVMutableVideoComposition videoComposition];
}
return self;
}
- (void)compileVideoWithAssets:(NSArray *)assets completion:(VideoStitcherCompletionBlock)completion
{
_completionBlock = [completion copy];
if (assets == nil || assets.count < 2)
{
// We need at least two video to make a stitch, right?
NSAssert(NO, @"VideoStitcher: assets parameter is nil or has not enough items in it");
}
else
{
[self composeAssets:assets];
if (_composition != nil) // if stitching went good and no errors were found
[self exportComposition];
}
}
- (void)composeAssets:(NSArray *)assets
{
AVMutableCompositionTrack *compositionVideoTrack = [_composition addMutableTrackWithMediaType:AVMediaTypeVideo
preferredTrackID:kCMPersistentTrackID_Invalid];
NSError *compositionError = nil;
CMTime currentTime = kCMTimeZero;
AVAsset *asset = nil;
for (int i = (int)assets.count - 1; i >= 0; i--) //For some reason videos are compiled in reverse order. Find the bug later. 06.10.14
{
asset = assets[i];
AVAssetTrack *assetVideoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
BOOL success = [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetVideoTrack.timeRange.duration)
ofTrack:assetVideoTrack
atTime:currentTime
error:&compositionError];
if (success)
{
CMTimeAdd(currentTime, asset.duration);
}
else
{
NSLog(@"VideoStitcher: something went wrong during inserting time range in composition");
if (compositionError != nil)
{
NSLog(@"%@", compositionError);
_completionBlock(nil, compositionError);
_composition = nil;
return;
}
}
}
AVMutableVideoCompositionInstruction *videoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
videoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, _composition.duration);
videoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];
_videoComposition.instructions = @[videoCompositionInstruction];
_videoComposition.renderSize = [self calculateOptimalRenderSizeFromAssets:assets];
_videoComposition.frameDuration = CMTimeMake(1, 600);
}
- (void)exportComposition
{
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *myPathDocs = [documentsDirectory stringByAppendingPathComponent:@"testVideo.mov"];
NSURL *url = [NSURL fileURLWithPath:myPathDocs];
NSString *filePath = [url path];
NSFileManager *fileManager = [NSFileManager defaultManager];
if ([fileManager fileExistsAtPath:filePath]) {
NSError *error;
if ([fileManager removeItemAtPath:filePath error:&error] == NO) {
NSLog(@"removeItemAtPath %@ error:%@", filePath, error);
}
}
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:_composition
presetName:AVAssetExportPreset1280x720];
exporter.outputURL = url;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = _videoComposition;
[exporter exportAsynchronouslyWithCompletionHandler:^{
[self exportDidFinish:exporter];
}];
}
- (void)exportDidFinish:(AVAssetExportSession*)session
{
NSLog(@"%li", session.status);
if (session.status == AVAssetExportSessionStatusCompleted)
{
NSURL *outputURL = session.outputURL;
// time to call delegate methods, but for testing purposes we save the video in 'photos' app
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:outputURL])
{
[library writeVideoAtPathToSavedPhotosAlbum:outputURL completionBlock:^(NSURL *assetURL, NSError *error){
if (error == nil)
{
NSLog(@"successfully saved video");
}
else
{
NSLog(@"saving video failed.\n%@", error);
}
}];
}
}
else if (session.status == AVAssetExportSessionStatusFailed)
{
NSLog(@"VideoStitcher: exporting failed.\n%@", session.error);
}
}
- (CGSize)calculateOptimalRenderSizeFromAssets:(NSArray *)assets
{
AVAsset *firstAsset = assets[0];
AVAssetTrack *firstAssetVideoTrack = [[firstAsset tracksWithMediaType:AVMediaTypeVideo] firstObject];
CGFloat maxWidth = firstAssetVideoTrack.naturalSize.height;
CGFloat maxHeight = firstAssetVideoTrack.naturalSize.width;
for (AVAsset *asset in assets)
{
AVAssetTrack *assetVideoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
if (assetVideoTrack.naturalSize.width > maxWidth)
maxWidth = assetVideoTrack.naturalSize.width;
if (assetVideoTrack.naturalSize.height > maxHeight)
maxHeight = assetVideoTrack.naturalSize.height;
}
return CGSizeMake(maxWidth, maxHeight);
}
@end
Thank you for your attention. I am really tired, I've been trying to find the bug for four hours straight. I'll go to sleep now.
I've finally found the solution. The description of the error led me in the wrong direction: "Cannot Decode. The media data could not be decoded. It may be damaged." From this description you might think that there is something wrong with your video files. I spent 5 hours experimenting with formats, debugging, etc.
Well, THE ANSWER IS COMPLETELY DIFFERENT!
My mistake was that I forgot that CMTimeAdd() returns a value. I thought that it changed the value of its first argument, and in the code you can see this:
CMTime currentTime = kCMTimeZero;
for (int i = (int)assets.count - 1; i >= 0; i--)
{
CMTimeAdd(currentTime, asset.duration); //HERE!! I don't actually increment the value! currentTime is always kCMTimeZero
}
videoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, _composition.duration); // And that's where everything breaks!
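So the fix is simply to assign the returned value back on each iteration:
currentTime = CMTimeAdd(currentTime, asset.duration); // the insertion time now actually advances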
The lesson that I've learned: When working with AVFoundation always check your time values! It's very important, otherwise you'll get a lot of bugs.
Error:
domain: "AVFoundationErrorDomain" - code: 18446744073709539816
Solution: [Swift 5.5]
Stop running multiple AVPlayers on a background thread.

HTTP live stream AVAsset

I am implementing an HTTP live streaming player on OS X using AVPlayer.
I am able to stream it properly, seek, get the duration, etc.
Now I want to take screenshots and process the frames from it using OpenCV.
I went for using AVAssetImageGenerator. But there are no audio and video tracks on the AVAsset which is associated with player.currentItem.
The tracks are appearing in player.currentItem.tracks.
So I am not able to use AVAssetImageGenerator. Can anybody help me find a solution to extract screenshots and individual frames in such a scenario?
Please find below the code showing how I am initiating the HTTP live stream.
Thanks in advance.
NSURL* url = [NSURL URLWithString:@"http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8"];
playeritem = [AVPlayerItem playerItemWithURL:url];
[playeritem addObserver:self forKeyPath:@"status" options:0 context:AVSPPlayerStatusContext];
[self setPlayer:[AVPlayer playerWithPlayerItem:playeritem]];
[self addObserver:self forKeyPath:@"player.rate" options:NSKeyValueObservingOptionNew context:AVSPPlayerRateContext];
[self addObserver:self forKeyPath:@"player.currentItem.status" options:NSKeyValueObservingOptionNew context:AVSPPlayerItemStatusContext];
AVPlayerLayer *newPlayerLayer = [AVPlayerLayer playerLayerWithPlayer:[self player]];
[newPlayerLayer setFrame:[[[self playerView] layer] bounds]];
[newPlayerLayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable];
[newPlayerLayer setHidden:YES];
[[[self playerView] layer] addSublayer:newPlayerLayer];
[self setPlayerLayer:newPlayerLayer];
[self addObserver:self forKeyPath:@"playerLayer.readyForDisplay" options:NSKeyValueObservingOptionInitial | NSKeyValueObservingOptionNew context:AVSPPlayerLayerReadyForDisplay];
[self.player play];
Following is how I am checking whether a video track is present in the asset:
case AVPlayerItemStatusReadyToPlay:
[self setTimeObserverToken:[[self player] addPeriodicTimeObserverForInterval:CMTimeMake(1, 10) queue:dispatch_get_main_queue() usingBlock:^(CMTime time) {
[[self timeSlider] setDoubleValue:CMTimeGetSeconds(time)];
NSLog(@"%f,%f,%f",[self currentTime],[self duration],[[self player] rate]);
AVPlayerItem *item = playeritem;
if(item.status == AVPlayerItemStatusReadyToPlay)
{
AVAsset *asset = (AVAsset *)item.asset;
long audiotracks = [[asset tracks] count];
long videotracks = [[asset availableMediaCharacteristicsWithMediaSelectionOptions]count];
NSLog(@"Track info Audio = %ld,Video=%ld",audiotracks,videotracks);
}
}]];
AVPlayerItem *item = self.player.currentItem;
if(item.status != AVPlayerItemStatusReadyToPlay)
return;
AVURLAsset *asset = (AVURLAsset *)item.asset;
long audiotracks = [[asset tracksWithMediaType:AVMediaTypeAudio]count];
long videotracks = [[asset tracksWithMediaType:AVMediaTypeVideo]count];
NSLog(@"Track info Audio = %ld,Video=%ld",audiotracks,videotracks);
This is an older question, but in case someone needs help with it, here is an answer:
AVURLAsset *asset = /* Your Asset here! */;
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.requestedTimeToleranceAfter = kCMTimeZero;
generator.requestedTimeToleranceBefore = kCMTimeZero;
for (Float64 i = 0; i < CMTimeGetSeconds(asset.duration) * /* Put the FPS of the source video here */ ; i++){
@autoreleasepool {
CMTime time = CMTimeMake(i, /* Put the FPS of the source video here */);
NSError *err;
CMTime actualTime;
CGImageRef image = [generator copyCGImageAtTime:time actualTime:&actualTime error:&err];
// Do what you want with the image, for example save it as UIImage
UIImage *generatedImage = [[UIImage alloc] initWithCGImage:image];
CGImageRelease(image);
}
}
You can easily get the FPS of a Video by using this code:
float fps=0.00;
if (asset) {
AVAssetTrack * videoATrack = [asset tracksWithMediaType:AVMediaTypeVideo][0];
if(videoATrack)
{
fps = [videoATrack nominalFrameRate];
}
}
Hope that helps someone who is asking how to get all frames from a video, or just some specific frames (with CMTime, for example). Please bear in mind that saving all frames to an array can heavily impact memory!
