AVFoundation max render size - iOS

I've searched quite a lot but couldn't find a definitive answer on the maximum render size of a video on iOS using AVFoundation.
I need to stitch two or more videos side by side or one above the other, and render them into a single new video with a final size larger than 1920×1080. For example, if I have two Full HD videos (1920×1080) side by side, the final composition would be 3840×1080.
I've tried AVAssetExportSession, and it always shrinks the final video proportionally to a maximum of 1920 in width or 1080 in height. That's understandable given the available AVAssetExportSession settings such as preset, file type, etc.
I also tried AVAssetReader and AVAssetWriter, but the results are the same; I just get more control over quality, bitrate, etc.
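For reference, here is roughly the composition setup from my AVAssetExportSession attempt (a simplified sketch; it assumes two local 1080p assets of equal duration):

```swift
import AVFoundation

// Place two 1920x1080 assets side by side on a 3840x1080 canvas.
func makeSideBySideComposition(left: AVAsset, right: AVAsset) throws
    -> (AVMutableComposition, AVMutableVideoComposition) {
    let composition = AVMutableComposition()
    guard
        let leftSource = left.tracks(withMediaType: .video).first,
        let rightSource = right.tracks(withMediaType: .video).first,
        let leftTrack = composition.addMutableTrack(withMediaType: .video,
                                                    preferredTrackID: kCMPersistentTrackID_Invalid),
        let rightTrack = composition.addMutableTrack(withMediaType: .video,
                                                     preferredTrackID: kCMPersistentTrackID_Invalid)
    else { throw NSError(domain: "SideBySideExport", code: -1) }

    let range = CMTimeRange(start: .zero, duration: left.duration)
    try leftTrack.insertTimeRange(range, of: leftSource, at: .zero)
    try rightTrack.insertTimeRange(range, of: rightSource, at: .zero)

    let leftInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: leftTrack)
    let rightInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: rightTrack)
    // Shift the second track into the right half of the canvas.
    rightInstruction.setTransform(CGAffineTransform(translationX: 1920, y: 0), at: .zero)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = range
    instruction.layerInstructions = [leftInstruction, rightInstruction]

    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = CGSize(width: 3840, height: 1080) // wider than 1080p
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.instructions = [instruction]
    // Assign videoComposition to AVAssetExportSession.videoComposition before exporting.
    return (composition, videoComposition)
}
```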
So: is there a way to achieve this on iOS, or do we have to stick to Full HD at most?
Thanks

Well... actually the answer seems to be both YES and NO, at least from what I've found so far.
H.264 allows higher resolutions only with a higher profile level, which is fine. However, the highest profile usable on iOS is AVVideoProfileLevelH264High41, which according to the spec permits a maximum resolution of 1,920×1,080 @ 30.1 fps or 2,048×1,024 @ 30.0 fps.
So encoding with H.264 won't do the job, and the answer should be NO.
The other option is to use a different codec. I tried AVVideoCodecJPEG and was able to render such a video, so the answer should be YES.
But the problem is that this video is not playable on iOS, which flips the answer back to NO.
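For illustration, the only difference between the two experiments is the codec in the writer's output settings (a sketch; the 3840×1080 target comes from the question above):

```swift
import AVFoundation

// H.264 variant: the profile level caps the resolution per the spec.
let h264Settings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoWidthKey: 3840,
    AVVideoHeightKey: 1080,
    AVVideoCompressionPropertiesKey: [
        AVVideoProfileLevelKey: AVVideoProfileLevelH264High41
    ]
]

// Motion-JPEG variant: the render succeeds, but the result
// didn't play back on the device in my tests.
let jpegSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecJPEG,
    AVVideoWidthKey: 3840,
    AVVideoHeightKey: 1080
]

let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: jpegSettings)
```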
To summarise, I'd say: it is possible if the video is meant to be used off the device; otherwise it simply won't be playable.
I hope this helps other people as well, and if someone gives a better or different answer, I'll be glad.

Related

Slow lottie animation video rendering with AVAssetWriter and AVAssetWriterInputPixelBufferAdaptor on iPhone X

I'm using this code (thanks damikdk) to render a Lottie animation to video:
https://github.com/damikdk/LottieExportDemo/blob/master/LottieExportDemo/ViewController.swift
I'm using oldExport() from that file, plus these two methods (append and fill) to fill the pixel buffer with the image:
https://github.com/damikdk/LottieExportDemo/blob/master/LottieExportDemo/Helpers.swift
It works great on an iPhone 5s and exports a 30-second video in approximately one minute. But on an iPhone X it takes up to 10 minutes to export the same video with the same resolution settings. Is there a way to optimize this to work better on newer devices?
This repository was created as a demonstration of bugs; you should not use it even as a starting point. Sorry if that isn't obvious from the description.
I'm not sure what went wrong in your case, but I'll take a look if you share your code in the issues.
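One common cause of slow exports like this is allocating a fresh CVPixelBuffer for every frame instead of drawing into buffers from the adaptor's pool. A hedged sketch of the pooled approach (the function name and the per-frame CALayer rendering are illustrative, not taken from the linked repo):

```swift
import AVFoundation
import UIKit

// Reuse pixel buffers from the adaptor's pool rather than
// allocating a new CVPixelBuffer for each animation frame.
func appendFrame(_ layer: CALayer, at time: CMTime,
                 to adaptor: AVAssetWriterInputPixelBufferAdaptor) -> Bool {
    // The pool becomes available once the asset writer has started.
    guard let pool = adaptor.pixelBufferPool else { return false }

    var buffer: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer)
    guard let pixelBuffer = buffer else { return false }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    // Draw the animation frame straight into the reused buffer.
    if let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                               width: CVPixelBufferGetWidth(pixelBuffer),
                               height: CVPixelBufferGetHeight(pixelBuffer),
                               bitsPerComponent: 8,
                               bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                               space: CGColorSpaceCreateDeviceRGB(),
                               bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue) {
        layer.render(in: context)
    }
    return adaptor.append(pixelBuffer, withPresentationTime: time)
}
```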

Processing live video and still images simultaneously at two different resolutions on iPhone?

I'm working on a video processing app for the iPhone using OpenCV.
For performance reasons, I want to process live video at a relatively low resolution. I'm doing object detection on each frame of the video. When objects are found in a low-resolution video frame, I need to acquire that exact same frame at a much higher resolution.
I've been able to semi-accomplish this using an AVCaptureVideoDataOutput and an AVCaptureStillImageOutput from AVFoundation, but the still image is not the exact frame that I need.
Are there any good implementations of this or ideas on how to implement it myself?
With AVCaptureSessionPresetPhoto the session uses a small video preview (about 1000×700 on an iPhone 6) and a high-resolution photo (about 3000×2000).
So I use a modified CvPhotoCamera class to process the small preview and take the photo at full size. I posted that code here: https://stackoverflow.com/a/31478505/1994445
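A sketch of that session layout in modern AVFoundation terms (AVCapturePhotoOutput has since replaced the older AVCaptureStillImageOutput; `frameProcessor` and `processingQueue` are assumed to exist in your app):

```swift
import AVFoundation

let session = AVCaptureSession()
session.sessionPreset = .photo // small video frames + full-resolution stills

let device = AVCaptureDevice.default(for: .video)!
session.addInput(try! AVCaptureDeviceInput(device: device))

// Low-resolution frames for per-frame OpenCV object detection.
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.videoSettings =
    [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
videoOutput.setSampleBufferDelegate(frameProcessor, queue: processingQueue)
session.addOutput(videoOutput)

// Full-sensor still image, captured on demand when an object is found.
let photoOutput = AVCapturePhotoOutput()
session.addOutput(photoOutput)

session.startRunning()
```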

iOS Video File Sizes and Bandwidth Considerations

I'm building an app whose core functionality is centered around 1-10 second videos. Currently I'm recording video using PBJVision with the preset set to AVCaptureSessionPresetMedium. A 10-second video comes out at roughly 3-5 MB. Considering each user could theoretically download hundreds or even thousands of videos a day, I was wondering whether there is a more bandwidth-efficient way of packing these videos up.
Could WebM be a more suitable container format?
I searched across the web, but couldn't find any articles pertaining to this specific question.
Edit: this looks promising
Modern video codecs (including WebM/VP8) usually achieve a compression ratio of around 1:50. By tuning codec parameters you can reach roughly 1:100 (IMHO), but it's very difficult and the picture quality is horrible.
Roughly, you can think of one camera pixel as 1.5 bytes (YUV, 12 or 16 bits per pixel).
If the resolution is 720×480 and the frame rate is 30 fps:
720 × 480 × 1.5 bytes × 30 fps = 15,552,000 bytes/s
× 10 s = 155,520,000 bytes
÷ 50 = 3,110,400 bytes ≈ 3 MB
It seems PBJVision is doing well.
Reducing the resolution or lowering the frame rate would be the first things to consider, I think.
iOS won't play back WebM unless you use a software decoder, and a software decoder takes more CPU/battery and produces more heat. WebM won't even solve your problem anyway: what you want is to reduce the bitrate, but that also reduces quality, so it's a trade-off.
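If you stay with H.264 and AVAssetWriter, the knobs this answer points at live in the compression properties; a sketch with illustrative values:

```swift
import AVFoundation

// Lower resolution plus a lower target bitrate is the usual lever.
let settings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoWidthKey: 640,
    AVVideoHeightKey: 360,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 1_000_000,   // 1 Mbit/s target
        AVVideoMaxKeyFrameIntervalKey: 30      // one keyframe per second at 30 fps
    ]
]
let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
```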

iPad retina screen recording

Two parts:
Correct me if I'm wrong, but there isn't a standard video file format that holds 2048×1536 frames? (i.e. is recording the full resolution of the iPad Retina display impossible?)
My app uses a glReadPixels call to record the screen and appends the pixel buffers to an AVAssetWriterInputPixelBufferAdaptor. If the video needs to be resized for export, what's the best way to do that? I'm currently trying AVMutableVideoCompositionLayerInstructions and CGAffineTransforms, but it's not working. Any ideas?
Thanks
Sam
Yes, it is possible; my app also records large-frame video.
Don't use glReadPixels: it causes a lot of delay, especially if you record big frames such as 2048×1536.
Since iOS 5.0 you can use a faster way based on texture caches (link).
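On the resizing part of the question, a hedged sketch of the export-time downscale (illustrative sizes; `asset` is assumed to be the recorded movie). The scale transform and the render size have to agree:

```swift
import AVFoundation

// Scale a 2048x1536 track down onto a 1280x960 render canvas.
let track = asset.tracks(withMediaType: .video)[0]
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
let scale: CGFloat = 1280.0 / 2048.0
layerInstruction.setTransform(CGAffineTransform(scaleX: scale, y: scale), at: .zero)

let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
instruction.layerInstructions = [layerInstruction]

let videoComposition = AVMutableVideoComposition()
videoComposition.renderSize = CGSize(width: 1280, height: 960)
videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
videoComposition.instructions = [instruction]
// Assign videoComposition to an AVAssetExportSession's videoComposition property.
```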

How to crop a video in iOS

I was having a look at the RosyWriter sample code provided by Apple as a starting point, and I'd like to find out how to crop a video.
I have the full-resolution video from the iPhone's camera, but I just want to use a cropped part of it (and also rotate that subpart).
I figured that in captureOutput:didOutputSampleBuffer:fromConnection: I can modify each frame by modifying the CMSampleBufferRef that gets passed in.
So my questions now are:
Is this the right place to crop my video?
Where do I specify that the final video (that gets saved to disk) has a smaller resolution than the full video captured by the AVCaptureSession? Setting AVVideoWidthKey and AVVideoHeightKey has no effect.
How can I crop the video and still have good performance?
Any help is appreciated!
Thanks a lot!
EDIT:
Maybe I just need to know how to turn a video that was shot in portrait into a landscape one by rotating the frames 90 degrees and then zooming in to fit the width again...?!
In AVVideoSettings.h there is AVVideoScalingModeKey. This key, combined with the defined values, controls how the video is scaled/cropped when encoding the images into the video container. For example, if you specify AVVideoScalingModeFit, then cropping is used. Check the header for how the other values affect the video images.
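As a sketch, the key goes into the writer's output settings; for example, AVVideoScalingModeResizeAspectFill scales the source to fill the requested dimensions and crops the overflow (the square 640×640 target here is illustrative):

```swift
import AVFoundation

// The writer scales/crops each captured frame to the requested output size.
let settings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoWidthKey: 640,
    AVVideoHeightKey: 640, // square output cropped from the source
    AVVideoScalingModeKey: AVVideoScalingModeResizeAspectFill
]
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
```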
