I am using AVAssetExportSession to export a video saved in my documents directory. I want to apply a CALayer to the video, so I am using an AVMutableVideoComposition with the necessary AVMutableVideoCompositionInstruction, and the video exports fine.
My issue is this: say the original video has a resolution of 1920x1080. When I export this video with the renderSize of the video composition set to 1920x1080, it gives me a video of size 640x360.
I tried setting the renderSize to smaller values like 300x300, where the resultant exported video is 300x300, with the extra content cropped of course. I then tried setting the renderSize to 700x700, which resulted in a 640x640 video.
What I understand is that it maintains the aspect ratio of the render size we set: when I set the size to 1920x1080 it gives a 640x360 video, maintaining the 16:9 ratio. Similarly, when I set the size to 700x700 it results in a 640x640 video with a 1:1 aspect ratio.
The Problem
I want the exported video to be the same size as the original. But when I set the renderSize of the AVMutableVideoComposition to the naturalSize of the original video track, it limits me to a size below 640x640 (if the size is beyond 640x640).
Is this known behaviour, or am I missing something? If it is known behaviour, is there another way to export the video at its original size, or at a size greater than 640x640?
Help will be deeply appreciated. Thanks in anticipation.
I figured it out. It was a simple mistake: I was initializing the export session with AVAssetExportPreset640x480 instead of AVAssetExportPreset1920x1080.
When you set a preset, AVAssetExportSession maintains the aspect ratio of any video that is larger than the preset. That is how you were getting a 640x360 output when your video was 1920x1080: the session scaled it down to fit the preset while preserving the aspect ratio.
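As a minimal sketch of the fix (assuming `asset`, `outputURL`, and an already-configured `videoComposition` exist; those names are illustrative):

```objc
#import <AVFoundation/AVFoundation.h>

// Pick a preset at least as large as the composition's renderSize;
// a smaller preset (e.g. AVAssetExportPreset640x480) scales the
// output down while preserving the aspect ratio.
AVAssetExportSession *exportSession =
    [[AVAssetExportSession alloc] initWithAsset:asset
                                     presetName:AVAssetExportPreset1920x1080];
videoComposition.renderSize = CGSizeMake(1920.0, 1080.0);
exportSession.videoComposition = videoComposition;
exportSession.outputURL = outputURL; // destination in the documents directory
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    // Inspect exportSession.status / exportSession.error here.
}];
```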
Related
I want to record a video in a square format, so I insert the preview layer into a square UIView, but when the video is saved to the Gallery it is not square anymore; it has the default iPhone aspect ratio (3:4, I guess).
How can I record a video with a custom aspect ratio other than default one?
Also, if that is not possible, how can I crop it after it is recorded, using AVCaptureFileOutputRecordingDelegate's didFinishRecordingTo method?
Thanks in advance!
What I'm actually looking for is not just to change the quality but to resize the entire video to a greater resolution using AVFoundation.
I have videos at 320x240 and 176x144 in MP4 format, and I want to resize them up to 1280x720, but the AVAssetExportSession presets do not allow scaling a video up from a smaller size.
Try AVMutableVideoCompositionLayerInstruction with a CGAffineTransform.
This code should help with understanding:
https://gist.github.com/zrxq/9817265
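For reference, a minimal sketch of that approach (assuming a 320x240 source track named `videoTrack` obtained from `asset`, and a 1280x720 target; the scale and offset values below are worked out for exactly those sizes):

```objc
#import <AVFoundation/AVFoundation.h>

// Render into a 1280x720 canvas regardless of the source track size.
AVMutableVideoComposition *composition = [AVMutableVideoComposition videoComposition];
composition.renderSize = CGSizeMake(1280.0, 720.0);
composition.frameDuration = CMTimeMake(1, 30);

AVMutableVideoCompositionInstruction *instruction =
    [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);

AVMutableVideoCompositionLayerInstruction *layerInstruction =
    [AVMutableVideoCompositionLayerInstruction
        videoCompositionLayerInstructionWithAssetTrack:videoTrack];

// 320x240 scaled by 3 gives 960x720 (aspect fit for a 720-pixel height);
// translating by (1280 - 960) / 2 = 160 centers it horizontally.
CGAffineTransform transform = CGAffineTransformConcat(
    CGAffineTransformMakeScale(3.0, 3.0),
    CGAffineTransformMakeTranslation(160.0, 0.0));
[layerInstruction setTransform:transform atTime:kCMTimeZero];

instruction.layerInstructions = @[ layerInstruction ];
composition.instructions = @[ instruction ];
// Assign `composition` to the export session's videoComposition property,
// and remember to pick an export preset large enough for the render size.
```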
I am creating a feature where a user can record a video of themselves, with a view that displays an image and some text superimposed on the video. When they are done recording, I use AVFoundation's composition classes to composite the video and the view (as an image) into one video file, and play this output in the next scene in a custom video player. The problem is that while the view's resolution is crystal clear in the record scene, after the composition (and after the AVAssetExportSession completes) the overlaid view in the resulting video is not close to the actual view quality. I am converting the view to an image and then setting the contents of an overlay layer to this image's CGImage, which, as I have checked, still has the same quality as the original view. The problem occurs when I apply the composition, and the image becomes blurry. Does anyone have any idea why this might be happening?
If you need to see the code, please feel free to ask! I can also provide screenshots.
Thank you!
It could happen when initializing your UIImage: iOS automatically picks the @2x or @3x image source corresponding to your device.
Say you read the image size via the size property (image.size): it gives you the @1x size. If you then render the @2x or @3x source at that @1x size, you get a bad-quality image output because of the JPEG or PNG resampling.
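One related thing worth checking (an assumption about the cause, not a confirmed diagnosis of this case) is the overlay layer's contentsScale, which defaults to 1.0 for layers created in code:

```objc
#import <QuartzCore/QuartzCore.h>
#import <UIKit/UIKit.h>

// `overlayImage` and `videoSize` are illustrative names from the scenario above.
CALayer *overlayLayer = [CALayer layer];
overlayLayer.contents = (__bridge id)overlayImage.CGImage;
// CALayers created in code default to contentsScale = 1.0; without this line
// a @2x/@3x image is rasterized at @1x and appears blurry in the composition.
overlayLayer.contentsScale = [UIScreen mainScreen].scale;
overlayLayer.frame = CGRectMake(0.0, 0.0, videoSize.width, videoSize.height);
```

Similarly, when rendering the view to an image, passing 0 as the scale to UIGraphicsBeginImageContextWithOptions keeps the device's native scale instead of forcing @1x.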
I was having a look at the RosyWriter Sample Code provided by Apple as a starting point and I'd like to find a way how to crop a video.
So I have the full-resolution video from the iPhone's camera, but I just want to use a cropped part of it (and also rotate this subpart).
I figured that in captureOutput:didOutputSampleBuffer:fromConnection: I can modify each frame by modifying the CMSampleBufferRef that I get passed.
So my questions now are:
Is this the right place to crop my video?
Where do I specify that the final video (that gets saved to disk) has a smaller resolution than the full video captured by the AVCaptureSession? Setting AVVideoWidthKey and AVVideoHeightKey has no effect.
How can I crop the video and still have good performance?
Any help is appreciated!
Thanks a lot!
EDIT:
Maybe I just need to know how I can turn a video that was shot in portrait into a landscape one, by rotating the frames by 90 degrees and then zooming in to fit the width again?
In AVVideoSettings.h there is the AVVideoScalingModeKey. This key, combined with its defined values, controls how the video is scaled/cropped when encoding the images into the video container. For example, if you specify a value of AVVideoScalingModeFit then cropping is used. Check the header for how the other values affect the video images.
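A sketch of how that key fits into an AVAssetWriterInput's output settings (the 480x480 size is illustrative; AVVideoScalingModeResizeAspectFill is the mode that scales to fill the target and crops the overflow):

```objc
#import <AVFoundation/AVFoundation.h>

NSDictionary *outputSettings = @{
    AVVideoCodecKey  : AVVideoCodecH264,
    AVVideoWidthKey  : @480,
    AVVideoHeightKey : @480,
    // Width/height alone only set the container dimensions; the scaling
    // mode decides how source frames are fitted into them. AspectFill
    // scales to fill and crops the dimension that overflows.
    AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill
};
AVAssetWriterInput *writerInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:outputSettings];
```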
I have a video file that is a QuickTime .mov (H.264) - if I open with QuickTime Player 10 and check with Movie Inspector I can see that the prescaled size is 1440x1080 and the display size is 1920x1080.
I open the video with QTKit and the following attributes: QTMovieOpenAsyncOKAttribute, QTMovieIsActiveAttribute, QTMovieResolveDataRefsAttribute, QTMovieDontInteractWithUserAttribute.
Both QTMovieCurrentSizeAttribute and QTMovieNaturalSizeAttribute give 1920x1080.
If I open the movie with QuickTime 7 I can use GetMovieBox() to find the size is 1920x1080 and frames can be accessed at 1440x1080. How can I get the 1440x1080 resolution information using QTKit ?
I already tried using the affine transform as given in this question: QTMovieCurrentSizeAttribute and QTMovieSizeDidChangeNotification replacements but it gave an identity transform.
You need to get the dimensions of the actual video track, not the movie.
QTTrack *videoTrack = nil;
for (QTTrack *track in [movie tracks])
{
    // Compare the media type with isEqualToString:; pointer equality (==)
    // is not guaranteed for NSString constants.
    if ([[track attributeForKey:QTTrackMediaTypeAttribute] isEqualToString:QTMediaTypeVideo])
    {
        videoTrack = track;
        break;
    }
}
NSSize dimensions = [(NSValue *)[videoTrack attributeForKey:QTTrackDimensionsAttribute] sizeValue];
Usually there is no need to do this, because the dimensions of the video track and QTMovieNaturalSizeAttribute are equal. However, with anamorphic videos the movie's natural size attribute tells us how the video should be displayed, while the track dimensions represent the size of the actual video frame (which is smaller).
QTMovieCurrentSizeAttribute is odd and deprecated; it is not related to the data at all.