Cap frame rate or bit rate in AVAssetExportSession? - ios

I'm using AVAssetExportSession to export some stuff at 640x480, and the files are kind of monstrous -- predictably monstrous, but still monstrous, given that we need to upload them from the phone over a 3G network. Is there any way to affect the size of the file other than to reduce the resolution? Ideally I'd like to try, e.g., compressing harder (even if that lowers quality), or cutting back to 15 frames/second, or something like that, but there don't seem to be any hooks to do it.

With AVAssetExportSession you can only use presets. If AVAssetExportPresetMediumQuality and AVAssetExportPresetLowQuality don't work for you, you're better off using AVAssetReader and AVAssetWriter. AVAssetWriter supports bit-rate settings, and you could optionally skip frames while writing to get a lower frame rate (see the sketch below).
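As an illustration (my sketch, not part of the original answer), the bit-rate cap goes into the writer's output settings; the 1 Mbit/s target and 640x480 size below are assumed values:

```swift
import AVFoundation

// Hedged sketch: H.264 output settings for AVAssetWriter with a capped
// average bit rate. The 1 Mbit/s target and 640x480 size are illustrative.
let outputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 640,
    AVVideoHeightKey: 480,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 1_000_000,   // ~1 Mbit/s average
        AVVideoMaxKeyFrameIntervalKey: 30      // at most one keyframe per 30 frames
    ]
]

let writerInput = AVAssetWriterInput(mediaType: .video,
                                     outputSettings: outputSettings)
```

Note that the encoder treats the average bit rate as a target rather than a hard ceiling, so expect some variance around it.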

Related

Performance issues with AVMutableComposition - scaleTimeRange

I am using scaleTimeRange:toDuration: to produce a fast-motion effect of up to 10x the original video speed. But I noticed that videos start to stutter when played through an AVPlayer at 10x.
I also noticed that the same composition plays smoothly in QuickTime on OS X.
Another question states that the reason for this is a hardware limitation, but I want to know if there is a way around it, so that the fast-motion effect plays smoothly over the length of the entire video.
Video Specs
Format: H.264, 1280x544
FPS: 25
Data Size: 26 MB
Data Rate: 1.17 Mbit/s
I have a feeling that playing your video at 10x via scaleTimeRange:toDuration: simply multiplies your data rate by 10, bringing it up to roughly 11.7 Mbit/s, which OS X machines can handle but iOS devices cannot.
In other words, you're creating video that needs to play back at 250 frames per second, which is pushing AVPlayer too hard.
If I didn't know about your other question, I would have said that the solution is to export your AVComposition using AVAssetExportSession, which should result in your high-FPS video being downsampled to an easier-to-handle 30 fps, and then to play that with AVPlayer.
If AVAssetExportSession isn't working, you could try applying the speed-up effect yourself: read the frames from the source video using AVAssetReader and write every tenth frame to the output file using AVAssetWriter (don't forget to set the correct presentation timestamps), as sketched below.
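A rough sketch of that read-skip-write loop, assuming a reader output, writer input, and pixel-buffer adaptor that have already been configured and started elsewhere (the names here are made up):

```swift
import AVFoundation
import CoreMedia

// Hedged sketch: keep every tenth frame and compress the timeline by 10.
// `readerOutput`, `writerInput` and `adaptor` are assumed to be set up and
// started elsewhere; error handling is omitted.
func writeSpedUpFrames(from readerOutput: AVAssetReaderTrackOutput,
                       to writerInput: AVAssetWriterInput,
                       via adaptor: AVAssetWriterInputPixelBufferAdaptor) {
    var frameIndex = 0
    while let sample = readerOutput.copyNextSampleBuffer() {
        defer { frameIndex += 1 }
        guard frameIndex % 10 == 0,  // keep every tenth frame
              let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else { continue }

        // Retime the kept frame: new PTS = original PTS / 10.
        let pts = CMSampleBufferGetPresentationTimeStamp(sample)
        let newPTS = CMTimeMultiplyByRatio(pts, multiplier: 1, divisor: 10)

        while !writerInput.isReadyForMoreMediaData { usleep(1000) }  // crude backpressure
        adaptor.append(pixelBuffer, withPresentationTime: newPTS)
    }
    writerInput.markAsFinished()
}
```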

iOS - AVAssetReader copyNextSampleBuffer can be very slow

I'm trying to decode a video in real time (30 fps) and display/modify it with OpenGL. On an iPod touch, if I decode a video that I took with the camera, decoding a frame can take over 1 s, whereas 30 fps allows at most ~0.03 s per frame, so the result is not very good.
Is it possible to achieve this with AVAssetReader? For example, Instagram applies filters (GLSL shaders, I think) in real time on a video, and you can even scrub through the video. Instagram works fine on the iPod touch.
The code to decode can be found in the answer here:
Best way to access all movie frames in iOS
And more specifically here: Hardware accelerated h.264 decoding to texture, overlay or similar in iOS
Thank you in advance
Given the very limited information you provided, I have to assume that your video is compressed in a YUV format and that you configured AVAssetReader's output settings with a different format such as kCVPixelFormatType_32BGRA, which forces iOS to convert the colour space of every frame for you; that is why decoding feels slow. I suggest passing no output settings at all, so the reader vends frames in their original pixel format.
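For illustration (my sketch, not the answerer's code), that means either passing nil output settings or asking for a native YUV format explicitly:

```swift
import AVFoundation

// Hedged sketch: request a biplanar YUV (NV12) output format so no per-frame
// colour-space conversion is needed; passing nil instead vends sample
// buffers in the track's original stored format.
func makeTrackOutput(for track: AVAssetTrack) -> AVAssetReaderTrackOutput {
    let yuvSettings: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String:
            kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
    ]
    return AVAssetReaderTrackOutput(track: track, outputSettings: yuvSettings)
}
```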
Actually, my app was just doing too much work on the CPU: I had another process analyzing images. When I removed it, the decoding was really fast.

Simplified screen capture: record video of only what appears within the layers of a UIView?

This SO answer addresses how to do a screen capture of a UIView. We need something similar, but instead of a single image, the goal is to produce a video of everything appearing within a UIView over 60 seconds -- conceptually like recording only the layers of that UIView, ignoring other layers.
Our video app superimposes layers on whatever the user is recording, and the ultimate goal is to produce a master video merging those layers with the original video. However, using AVVideoCompositionCoreAnimationTool to merge layers with the original video is very, very, very slow: exporting a 60-second video takes 10-20 seconds.
What we found is that combining two videos (i.e., using only AVMutableComposition, without AVVideoCompositionCoreAnimationTool) is very fast: ~1 second. The hope is to create an independent video of the layers and then combine that with the original video using only AVMutableComposition.
An answer in Swift is ideal but not required.
It sounds like your "fast" merge doesn't involve re-encoding frames, i.e. it's trivial and basically a glorified file concatenation, which is why it runs at 60x realtime. Your "very slow" export, meanwhile, runs at 3-6x realtime, which actually isn't that terrible (at least it wasn't on older hardware).
Encoding frames with an AVAssetWriter should give you an idea of the fastest possible non-trivial export, and this may reveal that on modern hardware you could halve or quarter your export times.
This is a long way of saying that there might not be much more performance to be had. If you think about the typical iOS video encoding use case, which would probably be recording 1080p @ 120 fps or 240 fps, your encoding at ~6x realtime @ 30 fps is in the ballpark of what a typical iOS device "needs" to be able to do.
There are optimisations available to you (like lower or variable frame rates), but these may cost you the convenience of being able to capture CALayers.
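To get a feel for that AVAssetWriter baseline, here is a rough benchmark sketch (mine, not the answerer's; the 640x480 size, 30 fps timescale and frame count are assumed) that renders a CALayer into pixel buffers and encodes them. A production version would draw from the adaptor's pixel-buffer pool and handle errors properly:

```swift
import AVFoundation
import UIKit

// Hedged sketch: render a CALayer into CVPixelBuffers and encode them with
// AVAssetWriter, to measure the raw cost of a re-encoding export.
// Size, frame rate and `frames` count are illustrative assumptions.
func renderLayerToMovie(_ layer: CALayer, outputURL: URL, frames: Int) throws {
    let size = CGSize(width: 640, height: 480)
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(size.width),
        AVVideoHeightKey: Int(size.height)
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for i in 0..<frames {
        var buffer: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height),
                            kCVPixelFormatType_32BGRA, nil, &buffer)
        guard let pixelBuffer = buffer else { continue }

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        if let ctx = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                               width: Int(size.width), height: Int(size.height),
                               bitsPerComponent: 8,
                               bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                               space: CGColorSpaceCreateDeviceRGB(),
                               bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                         | CGBitmapInfo.byteOrder32Little.rawValue) {
            layer.render(in: ctx)  // draw the layer's current contents
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

        while !input.isReadyForMoreMediaData { usleep(1000) }  // crude backpressure
        adaptor.append(pixelBuffer,
                       withPresentationTime: CMTime(value: CMTimeValue(i), timescale: 30))
    }
    input.markAsFinished()
    writer.finishWriting { }
}
```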

iOS Extract all frames from a video

I have to extract all the frames from a video file and then save them to files.
I tried to use AVAssetImageGenerator, but it's very slow: it takes 1-3 s per frame (for a sample 1280x720 MPEG4 video), not counting the time to save each frame to a file.
Is there any way to make it much faster?
OpenGL, GPU, (...)?
I will be very grateful for showing me right direction.
AVAssetImageGenerator is a random-access (seeking) interface, and seeking takes time, so one optimisation could be to use an AVAssetReader, which will quickly and sequentially vend you frames. You can also choose to work in YUV format, which will give you smaller frames and (I think) faster decoding.
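A sketch of what that sequential approach could look like (my illustration; assumes a local file URL, a YUV output format, and no error handling):

```swift
import AVFoundation

// Hedged sketch: sequential frame extraction with AVAssetReader, avoiding
// AVAssetImageGenerator's per-frame seek. `url` points at a local video.
func readAllFrames(from url: URL) throws {
    let asset = AVAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String:
            kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange  // smaller YUV frames
    ])
    reader.add(output)
    reader.startReading()

    // Frames arrive one after another, in decode order, with no seeking.
    while let sample = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
            _ = pixelBuffer  // compress or save the frame here
        }
    }
}
```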
However, those raw frames are enormous: 1280 px * 720 px * 4 bytes/pixel (if in RGBA) is about 3.6 MB each. You're going to need some pretty serious compression if you want to keep them all (MPEG4 @ 720p comes to mind :).
So what are you trying to achieve?
Are you sure you want to fill up your users' disks at a rate of 108 MB/s (at 30 fps) or 864 MB/s (at 240 fps)?

Creating a Motion JPEG frame by frame with variable frame-rate

I'm analyzing a number of solutions to the problem I have at hand: I'm receiving images from a device and I need to make a video file out of them. However, the images arrive with a somewhat random delay between them, and I'm looking for the best way to encode this. I have to create the video frame by frame, and after each frame I must have a new video file containing the new frame, replacing the old video file.
I was thinking of fixing the frame rate a little "faster" than the minimum delay that I might get and just repeating the last frame until a new one arrives, but I suspect that this solution is not optimal.
Also, this project is written in Delphi (no, I cannot change that), and I need a way to turn these frames into a video file after each frame. I was thinking about using mencoder as an external tool, but I'm reading the documentation and still haven't found an option to insert a frame into an already-encoded Motion JPEG video file. Since my images come in as JPEG, I thought it would be reasonable to use Motion JPEG, but even that isn't certain yet. Also, I don't know whether mencoder can be used as a library; it would help a lot if it could.
What would you suggest?
There are some media container formats that support variable frame rate, but I don't think MJPEG is a good choice because of its storage overhead. I believe the best way would be to transcode the JPEG frames to MP4 format using both I-frames and P-frames.
You can use FFMPEG Delphi/FP header files for the transcoding.
Edit:
The most up-to-date version of the FFMPEG headers can be found in the GLScene repository on SourceForge.net.
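As a purely illustrative aside (not from the original answer), the same transcode can be prototyped with the ffmpeg command-line tool, whose concat demuxer accepts per-frame display durations. The file names and durations below are made up, and the last frame is listed twice because of a documented quirk of the demuxer (otherwise its duration is ignored):

```
# frames.txt -- one entry per JPEG, each with its real display duration
file 'frame0001.jpg'
duration 0.120
file 'frame0002.jpg'
duration 0.310
file 'frame0003.jpg'
duration 0.095
file 'frame0003.jpg'

# Encode to variable-frame-rate H.264 in an MP4 container
ffmpeg -f concat -safe 0 -i frames.txt -vsync vfr -pix_fmt yuv420p -c:v libx264 out.mp4
```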
