How to check the resolution and bitrate of a video in iOS

I'm developing video compression functionality; my idea is as follows:
Get the resolution and bit rate of the video.
Check the resolution. If it is larger than 640x480, compress the video to half its size and reduce the bit rate to 1/4. E.g., a 1920x1080 video would be compressed to 960x540, and 1920x1080 at 4 Mbps would become 960x540 at 1 Mbps.
I have a few questions:
How can I get the resolution and bit rate of a video in iOS?
If I compress 1920x1080 in half to 960x540, will the bit rate adapt automatically, or do I still need to set it manually? How can I do that?
I tried the code below to compress the video, but I don't know which resolution it compresses to:
- (void)convertVideoToLowQuailtyWithInputURL:(NSURL *)inputURL
                                   outputURL:(NSURL *)outputURL
                                     handler:(void (^)(AVAssetExportSession *))handler
{
    [[NSFileManager defaultManager] removeItemAtURL:outputURL error:nil];
    AVURLAsset *urlAsset = [AVURLAsset URLAssetWithURL:inputURL options:nil];
    AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:urlAsset presetName:AVAssetExportPresetLowQuality];
    session.outputURL = outputURL;
    session.outputFileType = AVFileTypeQuickTimeMovie;
    [session exportAsynchronouslyWithCompletionHandler:^(void) {
        handler(session);
    }];
}
Please give me some advice. Thanks in advance.
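One note on the code above: AVAssetExportPresetLowQuality does not correspond to a fixed resolution. If you want a predictable output size, you can ask AVAssetExportSession which presets the asset supports and pick a fixed-size one. A small Swift sketch (inputURL is a placeholder):
import AVFoundation

let asset = AVURLAsset(url: inputURL)                  // inputURL: the video you want to compress
// List the presets this particular asset can be exported with...
print(AVAssetExportSession.exportPresets(compatibleWith: asset))
// ...and pick a fixed-size one so you know the target resolution up front
let session = AVAssetExportSession(asset: asset, presetName: AVAssetExportPreset960x540)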

To get the resolution of the video, use this:
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:originalVideo] options:nil];
NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = nil;
if ([videoTracks count] > 0)
    videoTrack = [videoTracks objectAtIndex:0];

// The format description is also available if you need codec-level details
CMFormatDescriptionRef formatDescription = NULL;
NSArray *formatDescriptions = [videoTrack formatDescriptions];
if ([formatDescriptions count] > 0)
    formatDescription = (__bridge CMFormatDescriptionRef)[formatDescriptions objectAtIndex:0];

CGSize trackDimensions = [videoTrack naturalSize];
int width = (int)trackDimensions.width;
int height = (int)trackDimensions.height;
NSLog(@"Resolution = %d x %d", width, height);
You can get the frame rate and bit rate as follows:
float frameRate = [videoTrack nominalFrameRate];
float bps = [videoTrack estimatedDataRate];
NSLog(@"Frame rate == %f", frameRate);
NSLog(@"bps rate == %f", bps);
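To address the second question above (whether the bit rate adapts when you halve the resolution): an export preset chooses its own bit rate, so for explicit control you have to re-encode with AVAssetReader and AVAssetWriter and set the target dimensions and AVVideoAverageBitRateKey yourself. The following is a minimal, untested Swift sketch of that idea; the function name is made up, audio is not copied, and error handling is kept to a bare minimum:
import AVFoundation

// Minimal sketch (untested): re-encode `asset` at half its resolution and a quarter of its
// estimated bit rate. Assumes the track's samples start at time zero; audio is not copied.
func compressToHalfSize(asset: AVAsset, outputURL: URL, completion: @escaping (Error?) -> Void) {
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let targetWidth = track.naturalSize.width / 2
    let targetHeight = track.naturalSize.height / 2
    let targetBitrate = track.estimatedDataRate / 4                   // bits per second
    do {
        let reader = try AVAssetReader(asset: asset)
        let readerSettings: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
        ]
        let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings: readerSettings)
        reader.add(readerOutput)

        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        let videoSettings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: targetWidth,
            AVVideoHeightKey: targetHeight,
            AVVideoCompressionPropertiesKey: [AVVideoAverageBitRateKey: targetBitrate]
        ]
        let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
        writerInput.expectsMediaDataInRealTime = false
        writerInput.transform = track.preferredTransform              // keep the original orientation
        writer.add(writerInput)

        guard reader.startReading(), writer.startWriting() else {
            completion(reader.error ?? writer.error)
            return
        }
        writer.startSession(atSourceTime: .zero)

        writerInput.requestMediaDataWhenReady(on: DispatchQueue(label: "compress.video")) {
            while writerInput.isReadyForMoreMediaData {
                if let sample = readerOutput.copyNextSampleBuffer() {
                    writerInput.append(sample)                        // uncompressed frame in, H.264 out
                } else {
                    writerInput.markAsFinished()
                    writer.finishWriting { completion(writer.error) }
                    break
                }
            }
        }
    } catch {
        completion(error)
    }
}
The appended frames keep their original dimensions and the writer input scales them to the requested width and height (AVVideoScalingModeKey controls how); since both dimensions are exactly halved, the aspect ratio is unchanged.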

Video resolution in Swift:
func resolutionForLocalVideo(url: NSURL) -> CGSize? {
    guard let track = AVAsset(URL: url).tracksWithMediaType(AVMediaTypeVideo).first else { return nil }
    let size = CGSizeApplyAffineTransform(track.naturalSize, track.preferredTransform)
    return CGSize(width: fabs(size.width), height: fabs(size.height))
}
Solutions without preferredTransform do not return correct values for some videos on the latest devices!

Here is Avt's answer updated and tested for Swift 3:
func resolutionForLocalVideo(url: URL) -> CGSize? {
    guard let track = AVURLAsset(url: url).tracks(withMediaType: AVMediaTypeVideo).first else { return nil }
    let size = track.naturalSize.applying(track.preferredTransform)
    return CGSize(width: fabs(size.width), height: fabs(size.height))
}
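Usage might look like this (the path is just an example):
if let size = resolutionForLocalVideo(url: URL(fileURLWithPath: "/path/to/video.mp4")) {
    print("Resolution: \(Int(size.width)) x \(Int(size.height))")
}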

Related

AVAsset rotation

It's a well-documented issue on SO where AVAssets get rotated after writing them to file, either using AVAssetWriter or AVComposition. And there are solutions, such as looking at the video track's transform to see how the asset is rotated so that it can be rotated to the desired orientation for your particular use case.
What I want to know, however, is why this happens and whether it's possible to prevent it. I run into issues not only when writing custom video files but also when transforming videos into GIFs using CGImageDestination, where the output GIF looks great except that it's rotated.
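For reference, inspecting that rotation usually means reading the video track's preferredTransform and deriving the angle from it; a minimal Swift sketch (the helper name is mine, and the degree values listed are just the common cases for iPhone footage):
import AVFoundation

// Returns the rotation, in degrees, that the track's preferredTransform applies at playback.
// Common values: 0 (landscape), 90 (portrait), 180 (landscape, other way up), -90 (upside-down portrait).
func preferredRotation(of asset: AVAsset) -> CGFloat? {
    guard let track = asset.tracks(withMediaType: .video).first else { return nil }
    let t = track.preferredTransform
    return CGFloat(atan2(Double(t.b), Double(t.a)) * 180 / Double.pi)
}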
To give a quick reference point for my code that writes an asset to file:
let destinationURL = url ?? NSURL(fileURLWithPath: "\(NSTemporaryDirectory())\(String.random()).mp4")
if let writer = try? AVAssetWriter(URL: destinationURL, fileType: AVFileTypeMPEG4),
videoTrack = self.asset.tracksWithMediaType(AVMediaTypeVideo).last,
firstBuffer = buffers.first {
let videoCompressionProps = [AVVideoAverageBitRateKey: videoTrack.estimatedDataRate]
let outputSettings: [String: AnyObject] = [
AVVideoCodecKey: AVVideoCodecH264,
AVVideoWidthKey: width,
AVVideoHeightKey: height,
AVVideoCompressionPropertiesKey: videoCompressionProps
]
let writerInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: outputSettings, sourceFormatHint: (videoTrack.formatDescriptions.last as! CMFormatDescription))
writerInput.expectsMediaDataInRealTime = false
let rotateTransform = CGAffineTransformMakeRotation(Utils.degreesToRadians(-90))
writerInput.transform = CGAffineTransformScale(rotateTransform, -1, 1)
let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: nil)
writer.addInput(writerInput)
writer.startWriting()
writer.startSessionAtSourceTime(CMSampleBufferGetPresentationTimeStamp(firstBuffer))
for (sample, newTimestamp) in Array(Zip2Sequence(buffers, timestamps)) {
if let imageBuffer = CMSampleBufferGetImageBuffer(sample) {
while !writerInput.readyForMoreMediaData {
NSThread.sleepForTimeInterval(0.1)
}
pixelBufferAdaptor.appendPixelBuffer(imageBuffer, withPresentationTime: newTimestamp)
}
}
writer.finishWritingWithCompletionHandler {
// completion code
}
As you can see above, a simple transform rotates the output video back to portrait. However, if I have a landscape video, that transform no longer works. And as I mentioned before, converting the video to a GIF performs exactly the same 90-degree rotation on my asset.
My feelings can be summed up in these two gifs:
http://giphy.com/gifs/jon-stewart-why-lYKvaJ8EQTzCU
http://giphy.com/gifs/the-office-no-steve-carell-12XMGIWtrHBl5e
I ran into the same problem; I then rotated my video by 90° and it works fine.
Here is the solution:
//in videoorientation.h
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
@interface videoorientationViewController : UIViewController
@property AVMutableComposition *mutableComposition;
@property AVMutableVideoComposition *mutableVideoComposition;
@property AVMutableAudioMix *mutableAudioMix;
@property AVAssetExportSession *exportSession;
- (void)performWithAsset:(NSURL *)moviename;
@end
// In viewcontroller.m
- (void)performWithAsset : (NSURL *)moviename
{
self.mutableComposition=nil;
self.mutableVideoComposition=nil;
self.mutableAudioMix=nil;
AVAsset *asset = [[AVURLAsset alloc] initWithURL:moviename options:nil];
AVMutableVideoCompositionInstruction *instruction = nil;
AVMutableVideoCompositionLayerInstruction *layerInstruction = nil;
CGAffineTransform t1;
CGAffineTransform t2;
AVAssetTrack *assetVideoTrack = nil;
AVAssetTrack *assetAudioTrack = nil;
// Check if the asset contains video and audio tracks
if ([[asset tracksWithMediaType:AVMediaTypeVideo] count] != 0) {
assetVideoTrack = [asset tracksWithMediaType:AVMediaTypeVideo][0];
}
if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] != 0) {
assetAudioTrack = [asset tracksWithMediaType:AVMediaTypeAudio][0];
}
CMTime insertionPoint = kCMTimeZero;
NSError *error = nil;
// Step 1
// Create a composition with the given asset and insert audio and video tracks into it from the asset
if (!self.mutableComposition) {
// Check whether a composition has already been created, i.e, some other tool has already been applied
// Create a new composition
self.mutableComposition = [AVMutableComposition composition];
// Insert the video and audio tracks from AVAsset
if (assetVideoTrack != nil) {
AVMutableCompositionTrack *compositionVideoTrack = [self.mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetVideoTrack atTime:insertionPoint error:&error];
}
if (assetAudioTrack != nil) {
AVMutableCompositionTrack *compositionAudioTrack = [self.mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetAudioTrack atTime:insertionPoint error:&error];
}
}
// Step 2
// Translate the composition to compensate the movement caused by rotation (since rotation would cause it to move out of frame)
t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height, 0.0);
float width=assetVideoTrack.naturalSize.width;
float height=assetVideoTrack.naturalSize.height;
float toDiagonal=sqrt(width*width+height*height);
float toDiagonalAngle = radiansToDegrees(acosf(width/toDiagonal));
float toDiagonalAngle2=90-radiansToDegrees(acosf(width/toDiagonal));
float toDiagonalAngleComple;
float toDiagonalAngleComple2;
float finalHeight = 0.0;
float finalWidth = 0.0;
float degrees=90;
if(degrees>=0&&degrees<=90){
toDiagonalAngleComple=toDiagonalAngle+degrees;
toDiagonalAngleComple2=toDiagonalAngle2+degrees;
finalHeight=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple)));
finalWidth=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple2)));
t1 = CGAffineTransformMakeTranslation(height*sinf(degreesToRadians(degrees)), 0.0);
}
else if(degrees>90&&degrees<=180){
float degrees2 = degrees-90;
toDiagonalAngleComple=toDiagonalAngle+degrees2;
toDiagonalAngleComple2=toDiagonalAngle2+degrees2;
finalHeight=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple2)));
finalWidth=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple)));
t1 = CGAffineTransformMakeTranslation(width*sinf(degreesToRadians(degrees2))+height*cosf(degreesToRadians(degrees2)), height*sinf(degreesToRadians(degrees2)));
}
else if(degrees>=-90&&degrees<0){
float degrees2 = degrees-90;
float degreesabs = ABS(degrees);
toDiagonalAngleComple=toDiagonalAngle+degrees2;
toDiagonalAngleComple2=toDiagonalAngle2+degrees2;
finalHeight=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple2)));
finalWidth=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple)));
t1 = CGAffineTransformMakeTranslation(0, width*sinf(degreesToRadians(degreesabs)));
}
else if(degrees>=-180&&degrees<-90){
float degreesabs = ABS(degrees);
float degreesplus = degreesabs-90;
toDiagonalAngleComple=toDiagonalAngle+degrees;
toDiagonalAngleComple2=toDiagonalAngle2+degrees;
finalHeight=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple)));
finalWidth=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple2)));
t1 = CGAffineTransformMakeTranslation(width*sinf(degreesToRadians(degreesplus)), height*sinf(degreesToRadians(degreesplus))+width*cosf(degreesToRadians(degreesplus)));
}
// Rotate transformation
t2 = CGAffineTransformRotate(t1, degreesToRadians(degrees));
//t2 = CGAffineTransformRotate(t1, -90);
// Step 3
// Set the appropriate render sizes and rotational transforms
if (!self.mutableVideoComposition) {
// Create a new video composition
self.mutableVideoComposition = [AVMutableVideoComposition videoComposition];
// self.mutableVideoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height,assetVideoTrack.naturalSize.width);
self.mutableVideoComposition.renderSize = CGSizeMake(finalWidth,finalHeight);
self.mutableVideoComposition.frameDuration = CMTimeMake(1,30);
// The rotate transform is set on a layer instruction
instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [self.mutableComposition duration]);
layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:(self.mutableComposition.tracks)[0]];
[layerInstruction setTransform:t2 atTime:kCMTimeZero];
} else {
self.mutableVideoComposition.renderSize = CGSizeMake(self.mutableVideoComposition.renderSize.height, self.mutableVideoComposition.renderSize.width);
// Extract the existing layer instruction on the mutableVideoComposition
instruction = (self.mutableVideoComposition.instructions)[0];
layerInstruction = (instruction.layerInstructions)[0];
// Check if a transform already exists on this layer instruction, this is done to add the current transform on top of previous edits
CGAffineTransform existingTransform;
if (![layerInstruction getTransformRampForTime:[self.mutableComposition duration] startTransform:&existingTransform endTransform:NULL timeRange:NULL]) {
[layerInstruction setTransform:t2 atTime:kCMTimeZero];
} else {
// Note: the point of origin for rotation is the upper left corner of the composition, t3 is to compensate for origin
CGAffineTransform t3 = CGAffineTransformMakeTranslation(-1*assetVideoTrack.naturalSize.height/2, 0.0);
CGAffineTransform newTransform = CGAffineTransformConcat(existingTransform, CGAffineTransformConcat(t2, t3));
[layerInstruction setTransform:newTransform atTime:kCMTimeZero];
}
}
// Step 4
// Add the transform instructions to the video composition
instruction.layerInstructions = @[layerInstruction];
self.mutableVideoComposition.instructions = @[instruction];
// Step 5
// Notify AVSEViewController about rotation operation completion
// [[NSNotificationCenter defaultCenter] postNotificationName:AVSEEditCommandCompletionNotification object:self];
[self performWithAssetExport];
}
- (void)performWithAssetExport
{
// Step 1
// Create an outputURL to which the exported movie will be saved
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *outputURL = paths[0];
NSFileManager *manager = [NSFileManager defaultManager];
[manager createDirectoryAtPath:outputURL withIntermediateDirectories:YES attributes:nil error:nil];
outputURL = [outputURL stringByAppendingPathComponent:@"output.mov"];
// Remove Existing File
[manager removeItemAtPath:outputURL error:nil];
// Step 2
// Create an export session with the composition and write the exported movie to the photo library
self.exportSession = [[AVAssetExportSession alloc] initWithAsset:[self.mutableComposition copy] presetName:AVAssetExportPreset1280x720];
self.exportSession.videoComposition = self.mutableVideoComposition;
self.exportSession.audioMix = self.mutableAudioMix;
self.exportSession.outputURL = [NSURL fileURLWithPath:outputURL];
self.exportSession.outputFileType=AVFileTypeQuickTimeMovie;
[self.exportSession exportAsynchronouslyWithCompletionHandler:^(void){
switch (self.exportSession.status) {
case AVAssetExportSessionStatusCompleted:
//[self playfunction];
[[NSNotificationCenter defaultCenter] postNotificationName:@"Backhome" object:nil];
// Step 3
// Notify AVSEViewController about export completion
break;
case AVAssetExportSessionStatusFailed:
NSLog(@"Failed: %@", self.exportSession.error);
break;
case AVAssetExportSessionStatusCancelled:
NSLog(@"Cancelled: %@", self.exportSession.error);
break;
default:
break;
}
}];
}

AVFoundation - Reverse an AVAsset and output video file

I've seen this question asked a few times, but none of them seem to have any working answers.
The requirement is to reverse and output a video file (not just play it in reverse) keeping the same compression, format, and frame rate as the source video.
Ideally, the solution would be able to do this all in memory or buffer and avoid generating the frames into image files (for ex: using AVAssetImageGenerator) and then recompiling it (resource intensive, unreliable timing results, changes in frame/image quality from original, etc.).
--
My contribution:
This is still not working, but the best I've tried so far:
Read in the sample frames into an array of CMSampleBufferRef[] using AVAssetReader.
Write it back in reverse order using AVAssetWriter.
Problem: It seems the timing for each frame is stored in the CMSampleBufferRef, so even appending them in reverse order will not work.
Next, I tried swapping the timing information of each frame with that of its reverse/mirror frame.
Problem: This causes an unknown error with AVAssetWriter.
Next Step: I'm going to look into AVAssetWriterInputPixelBufferAdaptor
- (AVAsset *)assetByReversingAsset:(AVAsset *)asset {
NSURL *tmpFileURL = [NSURL URLWithString:@"/tmp/test.mp4"];
NSError *error;
// initialize the AVAssetReader that will read the input asset track
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] lastObject];
AVAssetReaderTrackOutput* readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:nil];
[reader addOutput:readerOutput];
[reader startReading];
// Read in the samples into an array
NSMutableArray *samples = [[NSMutableArray alloc] init];
while(1) {
CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
if (sample == NULL) {
break;
}
[samples addObject:(__bridge id)sample];
CFRelease(sample);
}
// initialize the writer that will save to our temporary file.
CMFormatDescriptionRef formatDescription = CFBridgingRetain([videoTrack.formatDescriptions lastObject]);
AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:nil sourceFormatHint:formatDescription];
CFRelease(formatDescription);
AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:tmpFileURL
fileType:AVFileTypeMPEG4
error:&error];
[writerInput setExpectsMediaDataInRealTime:NO];
[writer addInput:writerInput];
[writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp((__bridge CMSampleBufferRef)samples[0])];
[writer startWriting];
// Traverse the sample frames in reverse order
for(NSInteger i = samples.count-1; i >= 0; i--) {
CMSampleBufferRef sample = (__bridge CMSampleBufferRef)samples[i];
// Since the timing information is built into the CMSampleBufferRef
// We will need to make a copy of it with new timing info. Will copy
// the timing data from the mirror frame at samples[samples.count - i -1]
CMItemCount numSampleTimingEntries;
CMSampleBufferGetSampleTimingInfoArray((__bridge CMSampleBufferRef)samples[samples.count - i -1], 0, nil, &numSampleTimingEntries);
CMSampleTimingInfo *timingInfo = malloc(sizeof(CMSampleTimingInfo) * numSampleTimingEntries);
CMSampleBufferGetSampleTimingInfoArray((__bridge CMSampleBufferRef)sample, numSampleTimingEntries, timingInfo, &numSampleTimingEntries);
CMSampleBufferRef sampleWithCorrectTiming;
CMSampleBufferCreateCopyWithNewTiming(
kCFAllocatorDefault,
sample,
numSampleTimingEntries,
timingInfo,
&sampleWithCorrectTiming);
if (writerInput.readyForMoreMediaData) {
[writerInput appendSampleBuffer:sampleWithCorrectTiming];
}
CFRelease(sampleWithCorrectTiming);
free(timingInfo);
}
[writer finishWriting];
return [AVAsset assetWithURL:tmpFileURL];
}
Worked on this over the last few days and was able to get it working.
Source code here: http://www.andyhin.com/post/5/reverse-video-avfoundation
Uses AVAssetReader to read out the samples/frames, extracts the image/pixel buffer, and then appends it with the presentation time of the mirror frame.
Swift 5 version of Original Answer:
extension AVAsset {
func getReversedAsset(outputURL: URL) -> AVAsset? {
do {
let reader = try AVAssetReader(asset: self)
guard let videoTrack = tracks(withMediaType: AVMediaType.video).last else {
return .none
}
let readerOutputSettings = [
"\(kCVPixelBufferPixelFormatTypeKey)": Int(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
]
let readerOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: readerOutputSettings)
reader.add(readerOutput)
reader.startReading()
// Read in frames (CMSampleBuffer is a frame)
var samples = [CMSampleBuffer]()
while let sample = readerOutput.copyNextSampleBuffer() {
samples.append(sample)
}
// Write to AVAsset
let writer = try AVAssetWriter(outputURL: outputURL, fileType: AVFileType.mp4)
let writerOutputSettings = [
AVVideoCodecKey: AVVideoCodecType.h264,
AVVideoWidthKey: videoTrack.naturalSize.width,
AVVideoHeightKey: videoTrack.naturalSize.height,
AVVideoCompressionPropertiesKey: [AVVideoAverageBitRateKey: videoTrack.estimatedDataRate]
] as [String : Any]
let sourceFormatHint = videoTrack.formatDescriptions.last as! CMFormatDescription
let writerInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: writerOutputSettings, sourceFormatHint: sourceFormatHint)
writerInput.expectsMediaDataInRealTime = false
let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: .none)
writer.add(writerInput)
writer.startWriting()
writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(samples[0]))
for (index, sample) in samples.enumerated() {
let presentationTime = CMSampleBufferGetPresentationTimeStamp(sample)
if let imageBufferRef = CMSampleBufferGetImageBuffer(samples[samples.count - index - 1]) {
pixelBufferAdaptor.append(imageBufferRef, withPresentationTime: presentationTime)
}
while !writerInput.isReadyForMoreMediaData {
Thread.sleep(forTimeInterval: 0.1)
}
}
writer.finishWriting { }
return AVAsset(url: outputURL)
}
catch let error as NSError {
print("\(error)")
return .none
}
}
}
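Usage could be as simple as the following (the output location is arbitrary); note that finishWriting is asynchronous, so in a real app you would wait for its completion handler before touching the exported file:
let source = AVAsset(url: sourceURL)   // sourceURL: your input video
let output = FileManager.default.temporaryDirectory.appendingPathComponent("reversed.mp4")
let reversed = source.getReversedAsset(outputURL: output)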

AVAssetExportSession combine video files and freeze frame between videos

I have an app which combines video files together to make a long video. There could be a delay between videos (e.g. V1 starts at t=0s and runs for 5 seconds, V2 starts at t=10s). In this case, I want the video to freeze on the last frame of V1 until V2 starts.
I'm using the code below, but between videos, the whole video goes white.
Any ideas how I can get the effect I'm looking for?
Thanks!
@interface VideoJoins : NSObject
- (instancetype)initWithURL:(NSURL *)url
                   andDelay:(NSTimeInterval)delay;
@property (nonatomic, strong) NSURL *url;
@property (nonatomic) NSTimeInterval delay;
@end
and
+(void)joinVideosSequentially:(NSArray*)videoJoins
withFileType:(NSString*)fileType
toOutput:(NSURL*)outputVideoURL
onCompletion:(dispatch_block_t) onCompletion
onError:(ErrorBlock) onError
onCancel:(dispatch_block_t) onCancel
{
//From original question on http://stackoverflow.com/questions/6575128/how-to-combine-video-clips-with-different-orientation-using-avfoundation
// Didn't add support for portrait+landscape.
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
CMTime startTime = kCMTimeZero;
/* videoClipPaths is an array of paths of the recorded video clips */
//for loop to combine clips into a single video
for (NSInteger i=0; i < [videoJoins count]; i++)
{
VideoJoins* vj = videoJoins[i];
NSURL *url = vj.url;
NSTimeInterval nextDelayTI = 0;
if(i+1 < [videoJoins count])
{
VideoJoins* vjNext = videoJoins[i+1];
nextDelayTI = vjNext.delay;
}
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
CMTime assetDuration = [asset duration];
CMTime assetDurationWithNextDelay = assetDuration;
if(nextDelayTI != 0)
{
CMTime nextDelay = CMTimeMakeWithSeconds(nextDelayTI, 1000000);
assetDurationWithNextDelay = CMTimeAdd(assetDuration, nextDelay);
}
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
//set the orientation
if(i == 0)
{
[compositionVideoTrack setPreferredTransform:videoTrack.preferredTransform];
}
BOOL ok = [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDurationWithNextDelay) ofTrack:videoTrack atTime:startTime error:nil];
ok = [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDuration) ofTrack:audioTrack atTime:startTime error:nil];
startTime = CMTimeAdd(startTime, assetDurationWithNextDelay);
}
//Delete output video if it exists
NSString* outputVideoString = [outputVideoURL absoluteString];
if ([[NSFileManager defaultManager] fileExistsAtPath:outputVideoString])
{
[[NSFileManager defaultManager] removeItemAtPath:outputVideoString error:nil];
}
//export the combined video
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:composition
presetName:AVAssetExportPresetHighestQuality];
exporter.outputURL = outputVideoURL;
exporter.outputFileType = fileType;
exporter.shouldOptimizeForNetworkUse = YES;
[exporter exportAsynchronouslyWithCompletionHandler:^(void)
{
switch (exporter.status)
{
case AVAssetExportSessionStatusCompleted: {
onCompletion();
break;
}
case AVAssetExportSessionStatusFailed:
{
NSLog(@"Export Failed");
NSError *err = exporter.error;
NSLog(@"ExportSessionError: %@", [err localizedDescription]);
onError(err);
break;
}
case AVAssetExportSessionStatusCancelled:
NSLog(@"Export Cancelled");
NSLog(@"ExportSessionError: %@", [exporter.error localizedDescription]);
onCancel();
break;
}
}];
}
EDIT: Got it working. Here is how I extract the images and generate the videos from those images:
+ (void)writeImageAsMovie:(UIImage*)image
toPath:(NSURL*)url
fileType:(NSString*)fileType
duration:(NSTimeInterval)duration
completion:(VoidBlock)completion
{
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:url
fileType:fileType
error:&error];
NSParameterAssert(videoWriter);
CGSize size = image.size;
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
//Write samples:
CMTime halfTime = CMTimeMakeWithSeconds(duration/2, 100000);
CMTime endTime = CMTimeMakeWithSeconds(duration, 100000);
CVPixelBufferRef buffer = [VideoCreator pixelBufferFromCGImage:image.CGImage];
[adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
[adaptor appendPixelBuffer:buffer withPresentationTime:halfTime];
[adaptor appendPixelBuffer:buffer withPresentationTime:endTime];
//Finish the session:
[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:endTime];
[videoWriter finishWritingWithCompletionHandler:^{
if(videoWriter.error)
{
NSLog(@"Error: %@", [videoWriter.error localizedDescription]);
}
if(completion)
{
completion();
}
}];
}
+(void)generateVideoImageFromURL:(NSURL*)url
atTime:(CMTime)thumbTime
withMaxSize:(CGSize)maxSize
completion:(ImageBlock)handler
{
AVURLAsset *asset=[[AVURLAsset alloc] initWithURL:url options:nil];
if(!asset)
{
if(handler)
{
handler(nil);
return;
}
}
if(CMTIME_IS_POSITIVE_INFINITY(thumbTime))
{
thumbTime = asset.duration;
}
else if(CMTIME_IS_NEGATIVE_INFINITY(thumbTime) || CMTIME_IS_INVALID(thumbTime) || CMTIME_IS_INDEFINITE(thumbTime))
{
thumbTime = CMTimeMake(0, 30);
}
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform=TRUE;
generator.maximumSize = maxSize;
CMTime actualTime;
NSError* error;
CGImageRef image = [generator copyCGImageAtTime:thumbTime actualTime:&actualTime error:&error];
UIImage *thumb = [[UIImage alloc] initWithCGImage:image];
CGImageRelease(image);
if(handler)
{
handler(thumb);
}
}
AVMutableComposition can only stitch videos together. I did it by doing two things:
Extracting the last frame of the first video as an image.
Making a video from this image (the duration depends on your requirement).
Then you can compose these three videos (V1, V2 and your single-image video). Both tasks are very easy to do.
For extracting the image out of the video, look at this link. If you don't want to use MPMoviePlayerController, which is used by the accepted answer, then look at the other answer by Steve.
For making a video from the image, check out this link. The question is about an audio issue, but I don't think you need audio, so just look at the method mentioned in the question itself.
UPDATE:
There is an easier way, but it comes with a disadvantage. You can have two AVPlayers. The first one plays your video, which has white frames in between. The other one sits behind it, paused at the last frame of video 1. So when the middle part comes, you will see the second AVPlayer showing the last frame, and as a whole it will look like video 1 is paused. Trust me, the naked eye can't tell when the player changed. But the obvious disadvantage is that your exported video will still contain the blank frames. So if you are only going to play it back in your app, you can go with this approach.
The first frame of a video asset always came out black or white for me, so skip one frame when inserting each clip:
CMTime delta = CMTimeMake(1, 25); // 1 frame (if fps = 25)
CMTimeRange timeRangeInVideoAsset = CMTimeRangeMake(delta, clipVideoTrack.timeRange.duration);
nextVideoClipStartTime = CMTimeAdd(nextVideoClipStartTime, timeRangeInVideoAsset.duration);
I merged more than 400 short videos into one this way.

Processing all frames in an AVAsset

I am trying to go through each frame in an AVAsset and process each frame as if it were an image. I have not been able to find anything from my searches.
The task I am trying to accomplish would look like this in pseudo-code
for each frame in asset
take the frame as an image and convert to a cvMat
Process and store data of center points
Store center points in array
The only part of that pseudo-code I do not know how to write is going through each frame and capturing it as an image.
Can anyone help?
One answer is to use AVAssetImageGenerator.
1) Load the movie file into an AVAsset object.
2) Create an AVAssetImageGenerator object.
3) Pass in an estimated time of the frame where you want to get an image back from the movie.
Setting the 2 properties requestedTimeToleranceBefore and requestedTimeToleranceAfter on the AVAssetImageGenerator object to kCMTimeZero will increase the ability to get individual frames, but increases the processing time.
However this method is slow and I have not found a faster way.
//Load the Movie from a URL
self.movieAsset = [AVAsset assetWithURL:self.movieURL];
NSArray *movieTracks = [self.movieAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *movieTrack = [movieTracks objectAtIndex:0];
//Make the image Generator
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:self.movieAsset];
//Create a variables for the time estimation
Float64 durationSeconds = CMTimeGetSeconds(self.movieAsset.duration);
Float64 timePerFrame = 1.0 / (Float64)movieTrack.nominalFrameRate;
Float64 totalFrames = durationSeconds * movieTrack.nominalFrameRate;
//Step through the frames
for (int counter = 0; counter <= totalFrames; counter++){
CMTime actualTime;
Float64 secondsIn = ((float)counter/totalFrames)*durationSeconds;
CMTime imageTimeEstimate = CMTimeMakeWithSeconds(secondsIn, 600);
NSError *error;
CGImageRef image = [imageGenerator copyCGImageAtTime:imageTimeEstimate actualTime:&actualTime error:&error];
// ... do some processing on the image
CGImageRelease(image);
}
You could simply read each frame using AVAssetReaderTrackOutput:
let asset = AVAsset(url: inputUrl)
let reader = try! AVAssetReader(asset: asset)
let videoTrack = asset.tracks(withMediaType: .video).first!
let outputSettings = [String(kCVPixelBufferPixelFormatTypeKey): NSNumber(value: kCVPixelFormatType_32BGRA)]
let trackReaderOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: outputSettings)
reader.add(trackReaderOutput)
reader.startReading()
while let sampleBuffer = trackReaderOutput.copyNextSampleBuffer() {
    if let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        // do what you want with the frame's pixel buffer
    }
}
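If you then want each frame as an actual image (for example to convert it to a cv::Mat), one option is to go through Core Image; a minimal sketch, using the imageBuffer from the loop above:
import CoreImage

let ciContext = CIContext()   // create once, outside the frame loop

// Converts a decoded frame's pixel buffer into a CGImage.
func cgImage(from imageBuffer: CVImageBuffer) -> CGImage? {
    let ciImage = CIImage(cvImageBuffer: imageBuffer)
    return ciContext.createCGImage(ciImage, from: ciImage.extent)
}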

precise timing with AVMutableComposition

I'm trying to use AVMutableComposition to play a sequence of sound files at precise times.
When the view loads, I create the composition with the intent of playing 4 sounds evenly spaced over 1 second. It shouldn't matter how long or short the sounds are, I just want to fire them at exactly 0, 0.25, 0.5 and 0.75 seconds:
AVMutableComposition *composition = [[AVMutableComposition alloc] init];
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};
for (NSInteger i = 0; i < 4; i++)
{
AVMutableCompositionTrack* track = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
NSURL *url = [[NSBundle mainBundle] URLForResource:[NSString stringWithFormat:@"sound_file_%i", i] withExtension:@"caf"];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:options];
AVAssetTrack *assetTrack = [asset tracksWithMediaType:AVMediaTypeAudio].firstObject;
CMTimeRange timeRange = [assetTrack timeRange];
Float64 t = i * 0.25;
NSError *error;
BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTimeMakeWithSeconds(t, 1) error:&error];
if (!success)
{
NSLog(@"unsuccessful creation of composition");
}
if (error)
{
NSLog(@"composition creation error: %@", error);
}
}
AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:composition];
self.avPlayer = [[AVPlayer alloc] initWithPlayerItem:playerItem];
The composition is created successfully with no errors. Later, when I want to play the sequence I do this:
[self.avPlayer seekToTime:CMTimeMakeWithSeconds(0, 1)];
[self.avPlayer play];
For some reason, the sounds are not evenly spaced at all - but play almost all at once. I tried the same thing spaced over 4 seconds, replacing the time calculation like this:
Float64 t = i * 1.0;
And this plays perfectly. Any time interval under 1 second seems to generate unexpected results. What am I missing? Are AVCompositions not supposed to be used for time intervals under 1 second? Or perhaps I'm misunderstanding the time intervals?
Your CMTimeMakeWithSeconds(t, 1) is in whole second 'slices' because your timescale is set to 1. No matter what fraction t is, the atTime: will always end up as 0. This is why it works when you increase it to 1 second (t=i*1).
You need to set the timescale to 4 to get your desired 0.25-second slices. Since the CMTime is now in 0.25-second slices, you won't need the i * 0.25 calculation. Just use i directly: atTime:CMTimeMake(i, 4)
If you might need to get more precise in the future, you should account for it now so you won't have to adjust your code later. Apple recommends using a timescale of 600, as it is a multiple of the common video frame rates (24, 25, and 30 FPS), but it works fine for audio-only too. So for your situation, 0.25 seconds is 150 units at a timescale of 600: atTime:CMTimeMake(i * 150, 600)
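To make the value/timescale relationship concrete, both of the following represent the same quarter-second (Swift):
import CoreMedia

let quarterSecond = CMTimeMake(value: 1, timescale: 4)          // 1/4 s
let quarterSecond600 = CMTimeMake(value: 150, timescale: 600)   // also 1/4 s
assert(CMTimeCompare(quarterSecond, quarterSecond600) == 0)     // the two times are equal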
As for your issue of all 4 sounds playing almost all at once, be aware of this unanswered SO question where it only happens on the first play. Even with the changes above, you might still run into this issue.
Unless each track is exactly 0.25 seconds long this is your problem:
Float64 t = i * 0.25;
NSError *error;
BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTimeMakeWithSeconds(t, 1) error:&error];
You need to keep track of the cumulative time range added so far and insert the next track at that point:
CMTime currentTime = kCMTimeZero;
for (NSInteger i = 0; i < 4; i++) {
    /* Code to create track for insertion */
    CMTimeRange trackTimeRange = [assetTrack timeRange];
    BOOL success = [track insertTimeRange:trackTimeRange
                                  ofTrack:assetTrack
                                   atTime:currentTime
                                    error:&error];
    /* Error checking code */
    // Update time range for insertion
    currentTime = CMTimeAdd(currentTime, trackTimeRange.duration);
}
I changed your code a bit; sorry, I had no time to test it.
AVMutableComposition *composition = [AVMutableComposition composition];
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};
CMTime totalDuration = kCMTimeZero;
for (NSInteger i = 0; i < 4; i++)
{
AVMutableCompositionTrack* track = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"Record_%i", i] ofType:@"caf"]];
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:options];
AVAssetTrack *assetTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
CMTimeRange timeRange = [assetTrack timeRange];
NSError *error;
BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTIME_COMPARE_INLINE(totalDuration, >, kCMTimeZero)? CMTimeAdd(totalDuration, CMTimeMake(1, 4)): totalDuration error:&error];
if (!success)
{
NSLog(@"unsuccessful creation of composition");
}
if (error)
{
NSLog(@"composition creation error: %@", error);
}
totalDuration = CMTimeAdd(CMTimeAdd(totalDuration,CMTimeMake(1, 4)), asset.duration);
}
AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:composition];
self.avPlayer = [[AVPlayer alloc] initWithPlayerItem:playerItem];
P.S. use kCMTimeZero instead of CMTimeMakeWithSeconds(0, 1).
