Performance: Video thumbnail / screenshot generation - iOS

I'm currently using MPMoviePlayerController's thumbnailImageAtTime to grab a thumbnail of my video. However, there seems to be a delay of around 0.5 seconds when generating the thumbnail. I have some ideas on how to optimize this, but I was wondering if there might be any performance gain in using one of the lower-level frameworks (Core Media or AV Foundation)?
I have read several answers on SO that claim that AV Foundation (using AVAssetImageGenerator) will generate thumbnails faster than MPMoviePlayerController - but I have also found SO answers that state the opposite.
I am looking for a method of taking video thumbnails at a specified time without any delay. Is that possible using any of the mentioned frameworks, or do I need to look into other custom solutions (e.g. FFmpeg or similar)?

I went ahead and did some tests with the AV Foundation framework and AVAssetImageGenerator. Even when I set requestedTimeToleranceAfter and requestedTimeToleranceBefore to kCMTimeZero, the AV Foundation framework gave a significant performance gain compared to the higher-level MPMoviePlayerController. For the purposes of my app I was able to achieve near-realtime generation of thumbnails using the AV Foundation framework.
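For reference, here is a minimal Swift sketch of that setup (the function name and parameters are my own, for illustration): AVAssetImageGenerator with both tolerances set to zero, which returns the exact requested frame rather than the nearest keyframe.

import AVFoundation
import UIKit

// Minimal sketch: exact-frame thumbnail via AVAssetImageGenerator.
func thumbnail(for url: URL, at seconds: Double) throws -> UIImage {
    let asset = AVAsset(url: url)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true   // respect the video's orientation
    generator.requestedTimeToleranceBefore = .zero    // exact frame, not nearest keyframe
    generator.requestedTimeToleranceAfter = .zero

    let time = CMTime(seconds: seconds, preferredTimescale: 600)
    let cgImage = try generator.copyCGImage(at: time, actualTime: nil)
    return UIImage(cgImage: cgImage)
}

For batches of thumbnails, generateCGImagesAsynchronously(forTimes:completionHandler:) avoids blocking the calling thread.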

// MPMoviePlayerController approach: MPMovieTimeOptionNearestKeyFrame returns the nearest keyframe rather than the exact requested time
UIImage *thumbnailImage = [yourMoviePlayer thumbnailImageAtTime:1.0 timeOption:MPMovieTimeOptionNearestKeyFrame];

Related

How to apply audio effects in iOS

I use AVPlayer to play audio (streaming or a local file). For this audio I want to apply some effects - boost volume, skip silence, reduce noise, change speed (in 0.1 increments).
I did the same thing on Android by creating my own player, decoding different audio formats into PCM data and then using some C libraries to modify it. It was quite complicated.
Is this possible with AVPlayer, or how else can I do it? Something like modifying the audio already decoded by AVPlayer. Is there some iOS API (AVAudioEngine?) or framework (AudioKit?) that can do this?
Thanks!
IMHO the best solution is to use https://github.com/audiokit/AudioKit as it is well maintained and supports most of the requirements you listed.
Another approach is to import the C library you used in the Android project and write a wrapper around it so it can easily be used from Objective-C/Swift. With this approach you will have less code to maintain and you guarantee similar results between the two platforms. Would you care to share more about that code?
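If you prefer to stay with Apple's frameworks, a rough AVAudioEngine sketch (my own illustration, untested; it covers only local files and only the speed and volume-boost parts, not streaming, silence skipping, or noise reduction) could look like this:

import AVFoundation

final class EffectsPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let timePitch = AVAudioUnitTimePitch()    // rate is adjustable, e.g. in 0.1 steps
    private let eq = AVAudioUnitEQ(numberOfBands: 1)  // globalGain (dB) acts as a volume boost

    func play(fileURL: URL) throws {
        let file = try AVAudioFile(forReading: fileURL)

        engine.attach(player)
        engine.attach(timePitch)
        engine.attach(eq)

        // player -> time/pitch -> EQ -> main mixer -> output
        engine.connect(player, to: timePitch, format: file.processingFormat)
        engine.connect(timePitch, to: eq, format: file.processingFormat)
        engine.connect(eq, to: engine.mainMixerNode, format: file.processingFormat)

        player.scheduleFile(file, at: nil)
        try engine.start()
        player.play()
    }

    // Playback speed, e.g. 1.0, 1.1, 1.2, ...
    func setSpeed(_ rate: Float) { timePitch.rate = rate }

    // Boost (or cut) the overall level, in decibels.
    func setBoost(decibels: Float) { eq.globalGain = decibels }
}

For speed alone, AVPlayer's rate property can be changed directly; for the other effects an engine-based graph (or AudioKit, which wraps one) is the usual route.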

iOS Multi-Channel Audio with AVFoundation and Swift

I am currently in the research and prototyping stages of a project to develop a native iOS app (Swift 3) that includes a multi-channel audio player (multiple stereo MP3 files). I have found very limited information online, particularly written in Swift 3, so I thought that as I continue my research I would pose a question here.
Regarding frameworks it seems clear from what I've looked at so far that AVFoundation is going to do the job. It's not too low level and has a good set of functionality. It has support for playing multiple audio files with AVAudioPlayer. I am planning to start prototyping something with this soon.
But I am new to Swift and to iOS development with its huge number of libraries, so I'm wondering if I'm missing anything and whether I'm on the right track here. Any answers with general information and thoughts on this will be up-voted. For an accepted answer I'd like some sample outline code using an appropriate framework: AVFoundation, or a justified alternative.
If no answer is forthcoming I will post my own code when I get there.
Specifically, I need from two to ten input channels, from MP3 files within the project resources, each with its own individually adjustable gain, all mixed (maintaining their stereo channels) to a single output (the device) with a master gain. Some of the tracks need to loop, others not. The tracks need to be accurately synchronised. This is just for info; outline code covering the important points would be fine.
Research Notes and Resources
Apple: AVFoundation
A collection of resources relating to AVFoundation.
Apple: AVFoundation Programming Guide
This document seems encouraging at first, but actually only deals with video. It says:
There are two facets to the AVFoundation framework—APIs related to video and APIs related just to audio. The older audio-related classes provide easy ways to deal with audio. They are described in the Multimedia Programming Guide, not in this document.
The "Multimedia Programming Guide" which is also mentioned elsewhere at Apple in relation to this, is never linked and Google results point to not found pages on the Apple site. It seems to have disappeared.
Rudi Strahl: Mixing Multiple Audio Tracks with AVFoundation
Compares using AVComposition to using multiple AVPlayers. Example code is Objective-C. Not sure how the AVPlayers are mixed in the second solution. Perhaps with AVAudioMix. Currently looking at this. The article talks a little about it but doesn't deliver any specifics.
Audio Session Programming Guide
This document looks at AVAudioSession which provides supporting functionality:
AVAudioSession gives you control over your app's audio behavior. You can:
Select the appropriate input and output routes for your app
Determine how your app integrates audio from other apps
Handle interruptions from other apps
Automatically configure audio for the type of app you are creating
Techotopia: Playing Audio on iOS 10 using AVAudioPlayer
Some useful information on using AVAudioPlayer.
Stack Overflow: Playing a Sound with AVAudioPlayer
Basic Swift code for playing a sound. Some answers include a little extra functionality.
Hacking with Swift: How to Play Sounds Using AVAudioPlayer
Again, covers the basics.
Sweet Tutos: How To Play Sounds Files And Manage Duration Progress – AVAudioPlayer Tutorial
Updated to Swift 3. Some useful info.
Xamarin: Playing Sound with AVAudioPlayer
Written in Swift 2, I think.
Apple Video: WWDC 2013 Moving to AV Kit and AV Foundation
While not directly related, I found the first 30 minutes of this video introducing developers to AV Kit and AV Foundation in OS X 10 provides a useful overview of the technology.
I was working on the same problem. The best I could do was to transcode the media content so it could be played using AVPlayer. Here is a draft; maybe it can help.
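For the outline-code part of the question, a rough AVAudioEngine sketch in Swift (my own illustration, untested; file names and helper names are placeholders) would give each track its own player node and gain, mixed through the main mixer, which acts as the master gain:

import AVFoundation

final class MultiTrackPlayer {
    private let engine = AVAudioEngine()
    private var players: [AVAudioPlayerNode] = []

    // trackNames are bundled MP3 resources; tracks whose indices appear in `looping` repeat forever.
    func load(trackNames: [String], looping: Set<Int> = []) throws {
        for (index, name) in trackNames.enumerated() {
            guard let url = Bundle.main.url(forResource: name, withExtension: "mp3") else { continue }
            let file = try AVAudioFile(forReading: url)
            let player = AVAudioPlayerNode()
            engine.attach(player)
            engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

            if looping.contains(index) {
                // Read the whole file into a buffer and schedule it with the .loops option.
                let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                              frameCapacity: AVAudioFrameCount(file.length))!
                try file.read(into: buffer)
                player.scheduleBuffer(buffer, at: nil, options: .loops)
            } else {
                player.scheduleFile(file, at: nil)
            }
            players.append(player)
        }
    }

    func play() throws {
        try engine.start()
        // Start every player at the same host time so the tracks stay in sync.
        let start = AVAudioTime(hostTime: mach_absolute_time() + AVAudioTime.hostTime(forSeconds: 0.1))
        players.forEach { $0.play(at: start) }
    }

    func setGain(_ gain: Float, forTrack index: Int) { players[index].volume = gain }
    func setMasterGain(_ gain: Float) { engine.mainMixerNode.outputVolume = gain }
}

Stereo is preserved because each file's own (stereo) processing format is used for its connection to the mixer.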

How to do Chroma Keying in iOS to process a video file using Swift

I am working on a video editing application.
I need a function to do chroma keying and replace green background in a video file (not live feed) with an image or another video.
I have looked through the GPUImage framework, but it is not suitable for my project as I cannot use third-party frameworks, so I am wondering if there is another way to accomplish this.
Here are my questions:
Does it need to be done through shaders in OpenGL ES?
Is there another way to access the frames and replace the background doing chroma keying using the AV Foundation framework?
I am not too versed in graphics processing, so I really appreciate any help.
Apple's image and video processing framework is Core Image. There is sample code for Core Image that includes a chroma key example.
No, it's not in Swift, which you asked for. But as @MoDJ says, video software is really complex; reuse what already exists.
I would use Apple's Objective-C sample code to get something that works, then, if you feel you MUST have it in Swift, port it little by little, and if you have specific problems ask them here.
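To give a feel for the Core Image route, here is a hedged Swift sketch of the CIColorCube approach used in Apple's chroma key sample (the hue range and helper names are my own assumptions): the cube maps green hues to transparent, and the keyed image is composited over a background with CISourceOverCompositing.

import UIKit
import CoreImage

// Build a CIColorCube filter that makes pixels in the given hue range transparent.
func chromaKeyFilter(hueFrom minHue: CGFloat = 0.3, to maxHue: CGFloat = 0.4) -> CIFilter? {
    let size = 64
    var cube = [Float](repeating: 0, count: size * size * size * 4)
    var offset = 0
    for z in 0..<size {
        let blue = CGFloat(z) / CGFloat(size - 1)
        for y in 0..<size {
            let green = CGFloat(y) / CGFloat(size - 1)
            for x in 0..<size {
                let red = CGFloat(x) / CGFloat(size - 1)
                var hue: CGFloat = 0, sat: CGFloat = 0, bri: CGFloat = 0, alf: CGFloat = 0
                UIColor(red: red, green: green, blue: blue, alpha: 1)
                    .getHue(&hue, saturation: &sat, brightness: &bri, alpha: &alf)
                let alpha: Float = (hue >= minHue && hue <= maxHue) ? 0 : 1
                cube[offset]     = Float(red) * alpha   // premultiplied RGBA
                cube[offset + 1] = Float(green) * alpha
                cube[offset + 2] = Float(blue) * alpha
                cube[offset + 3] = alpha
                offset += 4
            }
        }
    }
    let data = cube.withUnsafeBufferPointer { Data(buffer: $0) }
    return CIFilter(name: "CIColorCube",
                    parameters: ["inputCubeDimension": size, "inputCubeData": data])
}

// Key the foreground frame and composite it over a background image.
func composite(foreground: CIImage, over background: CIImage) -> CIImage? {
    guard let keyFilter = chromaKeyFilter() else { return nil }
    keyFilter.setValue(foreground, forKey: kCIInputImageKey)
    return CIFilter(name: "CISourceOverCompositing",
                    parameters: [kCIInputImageKey: keyFilter.outputImage as Any,
                                 kCIInputBackgroundImageKey: background])?.outputImage
}

To apply this to a whole video file rather than a single frame, the same filter chain can be run per frame with AVMutableVideoComposition(asset:applyingCIFiltersWithHandler:) and exported with AVAssetExportSession.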

Possible to use AV Foundation to crop a video?

I am trying to crop videos both taken in my app and uploaded from the user's photo library. I am looking to crop every video to be the size of the iPhone 5s screen (I know that sounds dumb, but that's what I need to do).
Can I do this using the AV Foundation framework or do I need to use Core Video? I've made multiple attempts with AV Foundation and gotten nowhere.
Also if you could link to any helpful tutorials or code samples that would be greatly appreciated.
I'm using Objective-C and working on an app designated for iOS 7+.
Thanks!
1) Use AVAssetReader to open your video and extract CMSampleBuffers
Link
2) Modify CMSampleBuffer:
Link
3) Create an AVAssetWriter and add the modified CMSampleBuffers to its input
Link
In that article a CVPixelBuffer is used as the input to the AVAssetWriter via an adaptor. You don't actually need an adaptor: since you already have CMSampleBuffers ready, you can append them straight to the input using the appendSampleBuffer: method.
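If you don't need per-frame control, a simpler alternative to the reader/writer pipeline above is an AVMutableVideoComposition with a crop transform, exported through AVAssetExportSession. A rough Swift sketch (the render size and offset are placeholder assumptions; the question's iPhone 5s target would be 640x1136 pixels):

import AVFoundation

func cropVideo(at sourceURL: URL, to outputURL: URL) {
    let asset = AVAsset(url: sourceURL)
    guard let track = asset.tracks(withMediaType: .video).first,
          let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetHighestQuality) else { return }

    let renderSize = CGSize(width: 640, height: 1136)   // placeholder target size

    // Shift the source so the region you want to keep lands inside renderSize;
    // anything outside the render rectangle is cropped away.
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    let xOffset = (renderSize.width - track.naturalSize.width) / 2
    layerInstruction.setTransform(CGAffineTransform(translationX: xOffset, y: 0), at: .zero)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
    instruction.layerInstructions = [layerInstruction]

    let composition = AVMutableVideoComposition()
    composition.renderSize = renderSize
    composition.frameDuration = CMTime(value: 1, timescale: 30)
    composition.instructions = [instruction]

    export.videoComposition = composition
    export.outputURL = outputURL
    export.outputFileType = .mp4
    export.exportAsynchronously {
        print("Export status: \(export.status.rawValue), error: \(String(describing: export.error))")
    }
}

The same composition can also be built in Objective-C (the question's language); the class and property names are identical.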

iOS FFT on a file from iPod library

I'm new to core audio and I've been banging my head against a brick wall for a while on how to do this and I was hoping someone might be able to point me in the right direction.
I'm creating an app for an assignment and I want the user to select a file from the iPod library (MPMediaPickerController?) and then perform an FFT on that file to detect the pitch.
I have code working that selects the file and saves its location as an NSURL, and I have code working for OS X that will play a file from a URL! I can't get this part to work on iOS for reasons that are beyond me.
I've also seen lots of sample code that implements an FFT using the Remote I/O audio unit to fill the buffers, but I can't work out how to do this from the iPod library.
Can anyone help? Ideally, point me to some sample code that will show me how best to do some of these tasks? I've looked at previous threads and can't see anything that's quite what I need.
Many thanks in advance!
Since you already have an NSURL for your song, why not try using AVFoundation for the FFT part? The URL works out well because AVFoundation can load the song directly from it.
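To flesh that out, here is a hedged Swift sketch (my own illustration, not from the answer above): it assumes the URL is one AVAudioFile can open directly (for an iPod-library URL you may need AVAssetReader or an export step first), reads one window of PCM samples, and computes a magnitude spectrum with Accelerate's vDSP.

import AVFoundation
import Accelerate

// Sketch: read one window of PCM from the file and return its magnitude spectrum.
func magnitudeSpectrum(for url: URL, frameCount: AVAudioFrameCount = 4096) throws -> [Float] {
    let file = try AVAudioFile(forReading: url)
    // processingFormat is deinterleaved Float32, so floatChannelData is available.
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: frameCount) else { return [] }
    try file.read(into: buffer, frameCount: frameCount)

    var samples = Array(UnsafeBufferPointer(start: buffer.floatChannelData![0],
                                            count: Int(buffer.frameLength)))
    // Zero-pad if the file was shorter than one window.
    samples += [Float](repeating: 0, count: Int(frameCount) - samples.count)

    let log2n = vDSP_Length(log2(Float(frameCount)))
    guard let fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
    defer { vDSP_destroy_fftsetup(fftSetup) }

    let half = Int(frameCount) / 2
    var real = [Float](repeating: 0, count: half)
    var imag = [Float](repeating: 0, count: half)
    var magnitudes = [Float](repeating: 0, count: half)

    real.withUnsafeMutableBufferPointer { realPtr in
        imag.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!, imagp: imagPtr.baseAddress!)
            // Pack the real samples into split-complex form and run a real-to-complex FFT.
            samples.withUnsafeBufferPointer { samplesPtr in
                samplesPtr.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: half) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(half))
                }
            }
            vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(half))
        }
    }
    // The peak bin gives a rough pitch estimate: frequency = bin * sampleRate / frameCount.
    return magnitudes
}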
