Possible to use AV Foundation to crop a video? - ios

I am trying to crop videos both taken in my app and uploaded from the user's photo library. I am looking to crop every video to be the size of the iPhone 5s screen (I know that sounds dumb, but that's what I need to do).
Can I do this using the AV Foundation framework or do I need to use Core Video? I've made multiple attempts with AV Foundation and gotten nowhere.
Also if you could link to any helpful tutorials or code samples that would be greatly appreciated.
I'm using Objective-C and working on an app designed for iOS 7+.
Thanks!

1) Use AVAssetReader to open your video and extract CMSampleBuffers
Link
2) Modify CMSampleBuffer:
Link
3) Create an AVAssetWriter and append the modified CMSampleBuffers to its input
Link
In that article a CVPixelBuffer is used as the input to the AVAssetWriter via a pixel buffer adaptor. You don't actually need an adaptor here: since you already have CMSampleBuffers, you can append them straight to the input using the appendSampleBuffer: method. A rough sketch of the whole pipeline follows below.
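To make the shape of this concrete, here is a rough, untested Objective-C sketch of that reader/writer pipeline. sourceURL, destinationURL, the queue name, and the 640 x 1136 output size (the iPhone 5s screen in pixels) are my own assumptions; the per-frame cropping from step 2 is only marked with a comment, and AVVideoScalingModeResizeAspectFill is shown as one way to let the writer scale-and-crop for you.

    #import <AVFoundation/AVFoundation.h>

    // Assumed inputs: sourceURL (the picked/recorded video), destinationURL (output file).
    AVAsset *asset = [AVAsset assetWithURL:sourceURL];
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

    NSError *error = nil;

    // 1) Reader: decode frames to BGRA CMSampleBuffers.
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
    NSDictionary *readerSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    AVAssetReaderTrackOutput *readerOutput =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                                   outputSettings:readerSettings];
    [reader addOutput:readerOutput];

    // 3) Writer: re-encode to H.264 at the iPhone 5s screen size (640 x 1136 pixels).
    AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:destinationURL
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:&error];
    NSDictionary *writerSettings = @{
        AVVideoCodecKey       : AVVideoCodecH264,
        AVVideoWidthKey       : @640,
        AVVideoHeightKey      : @1136,
        // Should let the writer scale and crop the source to fill 640 x 1136;
        // for a custom crop, modify the buffers yourself in step 2 instead.
        AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill
    };
    AVAssetWriterInput *writerInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:writerSettings];
    writerInput.expectsMediaDataInRealTime = NO;
    [writer addInput:writerInput];

    [reader startReading];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    dispatch_queue_t queue = dispatch_queue_create("com.example.cropqueue", NULL);
    [writerInput requestMediaDataWhenReadyOnQueue:queue usingBlock:^{
        while (writerInput.isReadyForMoreMediaData) {
            CMSampleBufferRef sampleBuffer = [readerOutput copyNextSampleBuffer];
            if (sampleBuffer) {
                // 2) This is where you would transform/crop each frame if you need
                //    more control than AVVideoScalingModeKey gives you.
                [writerInput appendSampleBuffer:sampleBuffer];
                CFRelease(sampleBuffer);
            } else {
                [writerInput markAsFinished];
                [writer finishWritingWithCompletionHandler:^{
                    NSLog(@"Export finished with status %ld", (long)writer.status);
                }];
                break;
            }
        }
    }];

If you only need a fixed rectangular crop and no per-pixel work, it may also be worth looking at AVAssetExportSession with an AVMutableVideoComposition (a renderSize plus a transform), which avoids touching sample buffers at all.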

Related

How to do Chroma Keying in ios to process a video file using Swift

I am working on a video editing application.
I need a function to do chroma keying and replace green background in a video file (not live feed) with an image or another video.
I have looked at the GPUImage framework, but it is not suitable because I cannot use third-party frameworks for this project, so I am wondering if there is another way to accomplish this.
Here are my questions:
Does it need to be done with shaders in OpenGL ES?
Is there another way to access the frames and do the chroma keying / background replacement using the AV Foundation framework?
I am not very versed in graphics processing, so I really appreciate any help.
Apple's image and video processing framework is Core Image. There is sample code for Core Image which includes a ChromaKey example.
No, it's not in Swift, which is what you asked for. But as @MoDJ says, video software is really complex; reuse what already exists.
I would use Apple's Objective-C sample code to get something that works. Then, if you feel you MUST have it in Swift, port it little by little, and if you run into specific problems, ask about them here.
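To give a feel for what the Core Image route looks like, here is a small Objective-C sketch built around CIColorCube, the same filter Apple's chroma key recipe uses. GreenScreenCubeFilter and ChromaKeyFrame are hypothetical helper names, and the "green clearly dominates" threshold values are arbitrary assumptions, not Apple's sample code.

    #import <CoreImage/CoreImage.h>

    // Build a 64x64x64 color cube that zeroes out "green-dominant" pixels.
    static CIFilter *GreenScreenCubeFilter(void)
    {
        const NSUInteger size = 64;
        NSMutableData *cubeData = [NSMutableData dataWithLength:size * size * size * 4 * sizeof(float)];
        float *c = (float *)cubeData.mutableBytes;

        for (NSUInteger b = 0; b < size; b++) {
            for (NSUInteger g = 0; g < size; g++) {
                for (NSUInteger r = 0; r < size; r++) {
                    float rf = (float)r / (size - 1);
                    float gf = (float)g / (size - 1);
                    float bf = (float)b / (size - 1);
                    // Crude key: fully transparent where green clearly dominates.
                    float alpha = (gf > 0.3f && gf > rf * 1.4f && gf > bf * 1.4f) ? 0.0f : 1.0f;
                    // Cube entries are premultiplied RGBA floats.
                    *c++ = rf * alpha;
                    *c++ = gf * alpha;
                    *c++ = bf * alpha;
                    *c++ = alpha;
                }
            }
        }

        CIFilter *cube = [CIFilter filterWithName:@"CIColorCube"];
        [cube setValue:@(size) forKey:@"inputCubeDimension"];
        [cube setValue:cubeData forKey:@"inputCubeData"];
        return cube;
    }

    // Key one video frame (foregroundImage) over a replacement background image.
    static CIImage *ChromaKeyFrame(CIImage *foregroundImage, CIImage *backgroundImage)
    {
        CIFilter *cube = GreenScreenCubeFilter();
        [cube setValue:foregroundImage forKey:kCIInputImageKey];

        CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
        [composite setValue:cube.outputImage forKey:kCIInputImageKey];
        [composite setValue:backgroundImage forKey:kCIInputBackgroundImageKey];
        return composite.outputImage;
    }

You would run each decoded frame of the file through ChromaKeyFrame (for example from an AVAssetReader loop) and render the result with a CIContext; a production keyer would also soften the edge rather than using a hard alpha cutoff.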

Adding watermark to currently recording video and save with watermark

I would like to know if there is any way to add a watermark to a video which is currently recording and save it with the watermark. (I know about adding watermarks to video files that are already available in app bundle and exporting it with watermark).
iPhone Watermark on recorded Video.
I checked this link. The accepted answer is not a good one. The most-voted answer is applicable only if there is already a video file in your bundle. (Please read the answer before suggesting it.)
Thanks in advance
For this purpose it's better to use the GPUImage library (an open-source library available on GitHub). It contains many filters, and it's possible to add an overlay using GPUImageOverlayBlendFilter. It also ships with a FilterShowcase sample that explains a lot about using the filters. It does the processing on the GPU, so it handles the overhead of processing the frames. Full credit goes to @Brad Larson, who created this great library.
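For reference, the wiring for that approach looks roughly like this. It is an illustrative sketch based on GPUImage's published classes, not a drop-in implementation: it uses GPUImageAlphaBlendFilter (a common choice for watermarks) instead of the overlay blend mentioned above, the image name and sizes are assumptions, and the camera, filters, and writer would need to live in strong properties for the duration of the recording.

    #import <UIKit/UIKit.h>
    #import "GPUImage.h"   // adjust the import to however GPUImage is integrated in your project

    // 1. Camera input.
    GPUImageVideoCamera *videoCamera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                            cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

    // 2. The watermark, rendered from an ordinary UIView (a UILabel, UIImageView, etc.).
    UIView *watermarkView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"watermark"]];
    GPUImageUIElement *watermark = [[GPUImageUIElement alloc] initWithView:watermarkView];

    // 3. Blend the camera frames with the watermark.
    GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
    blendFilter.mix = 1.0;
    [videoCamera addTarget:blendFilter];
    [watermark addTarget:blendFilter];

    // Re-render the UI element for every camera frame.
    __weak GPUImageUIElement *weakWatermark = watermark;
    [blendFilter setFrameProcessingCompletionBlock:^(GPUImageOutput *output, CMTime time) {
        [weakWatermark update];
    }];

    // 4. Write the blended output to disk while recording.
    NSURL *movieURL = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"watermarked.m4v"]];
    GPUImageMovieWriter *movieWriter =
        [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(720.0, 1280.0)];
    [blendFilter addTarget:movieWriter];

    videoCamera.audioEncodingTarget = movieWriter;
    [videoCamera startCameraCapture];
    [movieWriter startRecording];

When the user stops recording you would call finishRecording on the writer and stop the camera; the saved movie then already contains the watermark, no post-export step needed.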

Performance: Video thumbnail / screenshot generation

I'm currently using MPMoviePlayerController's thumbnailImageAtTime:timeOption: method to grab a thumbnail of my video. However, there seems to be a delay of around 0.5 seconds when generating the thumbnail. I have some ideas on how to optimize this, but I was wondering if there might be any performance gain in using one of the lower-level frameworks (Core Media or AV Foundation)?
I have read several answers on SO that claim that AV Foundation (via AVAssetImageGenerator) will generate thumbnails faster than MPMoviePlayerController, but I have also found SO answers that state the opposite.
I am looking for a method for taking video thumbnails at a specified time without any delay. Is that possible using any of the mentioned frameworks, or do I need to look into other custom solutions (e.g. ffmpeg or similar)?
I went ahead and did some tests with the AV Foundation framework and AVAssetImageGenerator. Even when I set requestedTimeToleranceAfter and requestedTimeToleranceBefore to kCMTimeZero, the AV Foundation framework gave a very large performance gain compared to the higher-level MPMoviePlayerController. For the purposes of my app I was able to achieve near-realtime generation of thumbnails using the AV Foundation framework.
    UIImage *thumbnailImage = [yourMoviePlayer thumbnailImageAtTime:1.0 timeOption:MPMovieTimeOptionNearestKeyFrame];
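For comparison, the AVAssetImageGenerator route described above might look something like this (a sketch; the function name and the 600 timescale are just illustrative choices):

    #import <AVFoundation/AVFoundation.h>
    #import <UIKit/UIKit.h>

    static UIImage *ThumbnailForVideoAtTime(NSURL *videoURL, Float64 seconds)
    {
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
        AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
        generator.appliesPreferredTrackTransform = YES;

        // Exact-frame extraction; loosen these tolerances if speed matters more than accuracy.
        generator.requestedTimeToleranceBefore = kCMTimeZero;
        generator.requestedTimeToleranceAfter  = kCMTimeZero;

        NSError *error = nil;
        CMTime time = CMTimeMakeWithSeconds(seconds, 600);
        CGImageRef cgImage = [generator copyCGImageAtTime:time actualTime:NULL error:&error];
        if (!cgImage) {
            NSLog(@"Thumbnail generation failed: %@", error);
            return nil;
        }

        UIImage *thumbnail = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
        return thumbnail;
    }

If you need a batch of thumbnails without blocking the calling thread, generateCGImagesAsynchronouslyForTimes:completionHandler: is the asynchronous variant of the same API.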

iOS FFT on a file from iPod library

I'm new to Core Audio and I've been banging my head against a brick wall for a while on how to do this, and I was hoping someone might be able to point me in the right direction.
I'm creating an app for an assignment and I want the user to select a file from the iPod library (MPMediaPickerController?) and then perform an FFT on that file to detect the pitch.
I have code working that selects the file and saves its location as an NSURL, and I have code working for OS X that will play a file from a URL! I can't get this part to work on iOS for reasons that are beyond me.
I've also seen lots of sample code that implements an FFT using Remote I/O to fill the buffers, but I can't work out how to do this from the iPod library.
Can anyone help? Ideally, could you point me to some sample code that shows how best to do some of these tasks? I've looked at previous threads and can't see anything that's quite what I need.
Many thanks in advance!
Since you already have an NSURL for the song, why not try using AVFoundation for the FFT part? The URL makes it a good fit, because AVFoundation can open the song directly from that URL and give you access to its audio samples.
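To make that concrete, here is a rough, untested Objective-C sketch (RunFFTOnFile is a hypothetical name) that uses AVAssetReader to pull mono 32-bit float PCM out of the picked item's asset URL and runs a vDSP FFT from the Accelerate framework on 4096-sample windows. A real pitch detector would add windowing, overlap, and peak interpolation on top of this.

    #import <AVFoundation/AVFoundation.h>
    #import <Accelerate/Accelerate.h>

    // assetURL: the NSURL you saved from the picker (MPMediaItemPropertyAssetURL).
    static void RunFFTOnFile(NSURL *assetURL)
    {
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
        AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];

        NSError *error = nil;
        AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

        // Ask for uncompressed, mono, 32-bit float PCM so the samples can go straight to vDSP.
        NSDictionary *settings = @{
            AVFormatIDKey               : @(kAudioFormatLinearPCM),
            AVSampleRateKey             : @44100,
            AVNumberOfChannelsKey       : @1,
            AVLinearPCMBitDepthKey      : @32,
            AVLinearPCMIsFloatKey       : @YES,
            AVLinearPCMIsBigEndianKey   : @NO,
            AVLinearPCMIsNonInterleaved : @NO
        };
        AVAssetReaderTrackOutput *output =
            [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack
                                                       outputSettings:settings];
        [reader addOutput:output];
        [reader startReading];

        const int log2n = 12;              // 4096-sample FFT window
        const int n = 1 << log2n;
        FFTSetup fftSetup = vDSP_create_fftsetup(log2n, kFFTRadix2);
        DSPSplitComplex split;
        split.realp = malloc(n / 2 * sizeof(float));
        split.imagp = malloc(n / 2 * sizeof(float));

        CMSampleBufferRef sampleBuffer = NULL;
        while ((sampleBuffer = [output copyNextSampleBuffer]) != NULL) {
            CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
            size_t length = 0;
            char *data = NULL;
            // Assumes the block buffer is contiguous, which it normally is for LPCM output.
            CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &length, &data);

            if (length / sizeof(float) >= n) {
                // Pack the real signal into split-complex form and run an in-place real FFT.
                vDSP_ctoz((DSPComplex *)data, 2, &split, 1, n / 2);
                vDSP_fft_zrip(fftSetup, &split, 1, log2n, kFFTDirection_Forward);
                // split.realp / split.imagp now hold the spectrum of this window;
                // the loudest bin gives a (crude) pitch estimate.
            }
            CFRelease(sampleBuffer);
        }

        free(split.realp);
        free(split.imagp);
        vDSP_destroy_fftsetup(fftSetup);
    }

Note that DRM-protected items from the iPod library may not expose a readable asset URL, so check that reader.status is actually reading before trusting the results.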

Decode video using CoreMedia.framework on iOS

I need to decode an mp4 file and draw it using OpenGL in an iOS app. I need to extract and decode the H.264 frames from the mp4 file, and I heard it is possible to do this using Core Media. Does anybody have any idea how to do it? Are there any examples of using Core Media for this?
It's not Core Media you're looking for, it's AVFoundation. In particular, you'd use an AVAssetReader to load from your movie and iterate through the frames. You can then upload these frames as OpenGL ES textures either by using glTexImage2D() or (on iOS 5.0 and later) by using the much faster texture caches.
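A bare-bones version of that reader-plus-glTexImage2D() loop looks roughly like this. It is a sketch only: movieURL and videoTexture are assumed to exist already, there is no error handling or frame pacing, and a real implementation should account for CVPixelBufferGetBytesPerRow() padding and use the texture caches mentioned above for speed.

    #import <AVFoundation/AVFoundation.h>
    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>

    // Assumed to exist already: movieURL (the mp4 file) and videoTexture (a GLuint
    // texture name created with glGenTextures on the current EAGLContext).

    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:movieURL options:nil];
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

    NSError *error = nil;
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

    // Ask for BGRA pixels so each decoded frame can be handed to glTexImage2D() directly.
    NSDictionary *settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:settings];
    [reader addOutput:output];
    [reader startReading];

    CMSampleBufferRef sampleBuffer = NULL;
    while ((sampleBuffer = [output copyNextSampleBuffer]) != NULL) {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

        size_t width  = CVPixelBufferGetWidth(pixelBuffer);
        size_t height = CVPixelBufferGetHeight(pixelBuffer);
        // Note: if CVPixelBufferGetBytesPerRow() is larger than width * 4, the rows are
        // padded and the pixels need repacking before this upload.
        glBindTexture(GL_TEXTURE_2D, videoTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer));

        CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

        // ... draw the OpenGL ES scene with videoTexture here, paced to the frame's timestamp ...

        CFRelease(sampleBuffer);
    }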
If you don't want to roll your own implementation of this, I have working AVFoundation-based movie loading and processing via OpenGL ES within my GPUImage framework. The GPUImageMovie class encapsulates movie reading and the process of uploading to a texture. If you want to extract that texture for use in your own scene, you can chain a GPUImageTextureOutput to it. Examples of both of these classes can be found in the SimpleVideoFileFilter and CubeExample sample applications within the framework distribution.
You can use this directly, or just look at the code I wrote to perform these same actions within the GPUImageMovie class.
