Process Slow Motion Video Effect iOS

How would I go about creating a slow motion effect on a portion of a video recorded or obtained from the camera roll in iOS? I am using the AVFoundation framework to select a video from the camera roll or record a video. I intend to add the effect from Time1 to Time2 and then let the video continue at a normal speed.

Generally speaking, you create a slow motion effect by recording at a higher frame rate. If you record at 60 fps but play back at 30 fps, you have created a half-speed slow motion effect; this is how it is done with film. With prerecorded fixed-frame-rate footage you can play back at a fraction of the original frame rate. If the result is to be saved back to a container file, you will need to adjust the presentation timestamps accordingly.
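On iOS specifically, one way to do the timestamp adjustment for just a portion of the clip (a minimal sketch, not from the answer above) is to copy the asset into an AVMutableComposition and stretch only the chosen time range with scaleTimeRange(_:toDuration:). The function name and the factor parameter below are illustrative, not from the original post.

```swift
import AVFoundation

// Sketch: slow down the segment [start, end] of an asset by `factor`
// (e.g. 2.0 = half speed); everything outside the segment keeps normal speed.
func makeSlowMotionComposition(asset: AVAsset,
                               start: CMTime,
                               end: CMTime,
                               factor: Float64) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    let fullRange = CMTimeRange(start: .zero, duration: asset.duration)

    // Copy every track (video and audio) into the composition at normal speed.
    for assetTrack in asset.tracks {
        guard let compTrack = composition.addMutableTrack(
            withMediaType: assetTrack.mediaType,
            preferredTrackID: kCMPersistentTrackID_Invalid) else { continue }
        try compTrack.insertTimeRange(fullRange, of: assetTrack, at: .zero)
        if assetTrack.mediaType == .video {
            compTrack.preferredTransform = assetTrack.preferredTransform
        }
    }

    // Stretch only the chosen segment; later frames shift in time
    // but keep their original playback rate.
    let segment = CMTimeRange(start: start, end: end)
    let stretched = CMTimeMultiplyByFloat64(segment.duration, multiplier: factor)
    composition.scaleTimeRange(segment, toDuration: stretched)

    return composition
}
```

The resulting composition can be played directly via AVPlayerItem(asset:) or written out with AVAssetExportSession if it needs to go back to the camera roll.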

Related

Swift iOS Crop Video in Real Time

Videos are being recorded in a 16:9 ratio, uploaded to S3, and then downloaded to multiple devices (desktop, tablet, and phone). Playback of the video on iOS should be in a 9:16 ratio.
My goal is to crop the video playback in real-time to a 9:16, cutting off the outer edges, but also enlarging it if need be. What is the fastest and most efficient way of accomplishing this with Swift?
My concern is CPU overhead doing this on the phone.
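One low-overhead approach (a sketch, assuming the crop only needs to apply to playback rather than a re-encoded file): size the player view at 9:16 and let AVPlayerLayer do the cropping on the GPU by setting videoGravity to .resizeAspectFill. The class name below is illustrative.

```swift
import AVFoundation
import UIKit

// Sketch: display a 16:9 video inside a 9:16 view, cropping the outer edges.
// AVPlayerLayer scales and crops on the GPU, so per-frame CPU cost is negligible.
final class PortraitCropPlayerView: UIView {
    private let player = AVPlayer()
    private let playerLayer = AVPlayerLayer()

    override init(frame: CGRect) {
        super.init(frame: frame)
        playerLayer.player = player
        // .resizeAspectFill fills the layer and crops whatever overflows,
        // turning a 16:9 source into a center-cropped 9:16 image.
        playerLayer.videoGravity = .resizeAspectFill
        layer.addSublayer(playerLayer)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func layoutSubviews() {
        super.layoutSubviews()
        playerLayer.frame = bounds   // constrain this view to a 9:16 aspect ratio
    }

    func play(url: URL) {
        player.replaceCurrentItem(with: AVPlayerItem(url: url))
        player.play()
    }
}
```

If a cropped rendition actually has to be saved or uploaded, an AVMutableVideoComposition with a crop transform is the usual export path, but for display alone the layer approach avoids per-frame CPU work.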

iOS Swift buffer 30 FPS video for real-time object detection

I have trained an ObjectDetector for iOS. Now I want to use it on a video with a frame rate of 30 FPS.
The ObjectDetector is a bit too slow: it needs 85 ms for one frame, and for 30 FPS it should be below 33 ms.
Now I am wondering if it is possible to buffer the frames and the predictions for a specified time x and then play the video on the screen?
If you have already tried using a smaller/faster model (and have also ensured that your model is fully optimized to run in Core ML on the Neural Engine), we had success doing inference only on every nth frame.
The results were suitable for our use-case and you couldn't really tell that we were only doing it at 5 fps because we were able to continue to display the camera output at full frame-rate.
If you don't need realtime then yes, certainly you could store the video and do the processing per frame afterwards; this would let you parallelize things into bigger batch sizes as well.
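A minimal sketch of that every-nth-frame idea inside a capture delegate follows; the stride value and the visionRequest placeholder are assumptions, not part of the original answer.

```swift
import AVFoundation
import Vision

// Sketch: the preview keeps running at the full capture frame rate, but only
// 1 in `inferenceStride` frames is sent to the detector. `visionRequest`
// stands in for your VNCoreMLRequest wrapping the trained model.
final class FrameSkippingDetector: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let visionRequest: VNRequest
    private let inferenceStride = 6      // ~5 fps of inference at a 30 fps capture
    private var frameCounter = 0

    init(visionRequest: VNRequest) {
        self.visionRequest = visionRequest
        super.init()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        frameCounter += 1
        // Every frame still reaches the preview layer; only the model is skipped.
        guard frameCounter % inferenceStride == 0,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // Orientation handling is omitted here for brevity.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
        try? handler.perform([visionRequest])
        // The request's completion handler delivers detections; draw them over
        // the live preview and keep them on screen until the next inference frame.
    }
}
```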

Is there a quality difference between output of AVCaptureMovieFileOutput and AVCaptureVideoDataOutput?

In the process of capturing a light trail photo, I noticed that for fast moving objects, there is slightly more discontinuity between successive frames if I use the sample buffers from AVCaptureVideoDataOutput compared to if I record a movie and extract frames and run the same algo.
Is there a refresh rate/frame rate difference if the two modes are used?
A colleague who has experience in professional photography claims that there is a visible lag even in Apple's default camera app when comparing the preview in Photo mode and Video mode but it is not something very obvious to me.
Furthermore, I am actually capturing video at a low frame rate (close to the highest exposure).
To conclude these experiments, I need to know if there is any definitive proof to confirm or disprove this.
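For what it's worth, one way to make such an experiment conclusive (a sketch, not from the question itself) is to lock the device to the same fixed frame duration before recording with either output, so any remaining discontinuity cannot be blamed on a frame-rate difference between the two modes. The 10 fps figure below is only an example of a "low" rate.

```swift
import AVFoundation

// Sketch: pin the capture device to a fixed frame rate so that
// AVCaptureMovieFileOutput and AVCaptureVideoDataOutput runs can be
// compared under identical timing.
func lockFrameRate(of device: AVCaptureDevice, fps: Int32 = 10) throws {
    try device.lockForConfiguration()
    let frameDuration = CMTime(value: 1, timescale: fps)
    device.activeVideoMinFrameDuration = frameDuration
    device.activeVideoMaxFrameDuration = frameDuration
    device.unlockForConfiguration()
}
```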

How to record video with low audio quality

I am working on a project where we want to minimize the size of the sound part of a video. I know we can use AVAudioSession to record pure audio, and can set quality details such as the sampling rate and number of channels.
But when I want to design a video recorder which records audio at the same time, I found that with AVCaptureSession I can only set the quality of video and audio together using sessionPreset, which lowers the quality of video and audio at the same time.
I am wondering whether there is a way to keep the video in high quality while reducing the size of the audio when taking a video?
I appreciate the help.
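One common workaround (a sketch, assuming you can move off AVCaptureMovieFileOutput) is to write the file yourself with AVAssetWriter, which accepts independent video and audio output settings: the audio bit rate, sample rate, and channel count can be lowered without touching the video quality. The concrete numbers below are illustrative.

```swift
import AVFoundation

// Sketch: independent audio/video quality via AVAssetWriter instead of
// AVCaptureMovieFileOutput + sessionPreset.
func makeWriterInputs(outputURL: URL) throws -> (AVAssetWriter, AVAssetWriterInput, AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)

    // High-quality video: 1080p H.264.
    let videoSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1920,
        AVVideoHeightKey: 1080
    ]
    let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
    videoInput.expectsMediaDataInRealTime = true

    // Low-quality audio: mono AAC at a low sample rate and bit rate.
    let audioSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 22050,
        AVNumberOfChannelsKey: 1,
        AVEncoderBitRateKey: 32_000
    ]
    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
    audioInput.expectsMediaDataInRealTime = true

    writer.add(videoInput)
    writer.add(audioInput)
    return (writer, videoInput, audioInput)
}
```

Sample buffers from AVCaptureVideoDataOutput and AVCaptureAudioDataOutput are then appended to these inputs from the capture delegate callbacks.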

OpenCV: GoPro video editing blur

I am attempting to post-process a video in OpenCV. The problem is that the GoPro video is very blurry, even with a high frame rate.
Is there any way that I can remove blur? I've heard about deinterlacing, but don't know if this applies to a GoPro 3+, or where even to begin.
Any help is appreciated.
You can record at a higher frame rate to reduce the blur. Also make sure you are recording with enough natural light, so recording outdoors is recommended.
Look at this video: https://www.youtube.com/watch?v=-nU2_ERC_oE
At 30 fps there is some blur on the car, but at 60 fps the blur is nonexistent; just doubling the FPS can do some good. Since you have a HERO3+ you can record 720p at 60 fps and that will reduce the blur. WVGA at 120 fps can also do some justice (the example is 1080p but it still applies).
