Slow motion standard rates - image-processing

I got videos from YouTube showing soccer players while falling; most of these videos show the fall in slow motion. I need the actual fall without the applied slow motion.
The fps for most of these videos is 23, 25, or 29 fps.
I have seen the two ways of doing standard slow motion at this link. But how can I find the original rate used for these videos?
Any suggestions?

Generally, the slow-motion effect is produced by filming at a higher frame rate and playing the movie back at a lower frame rate. For instance, to get 2x slow motion, you could record at 50 fps and play back at 25 fps.
You say that the slow-motion videos you have are in 23, 25, and 29 fps. This is the playback rate. They were originally recorded at higher frame rates that are unknown to us, but we can try to restore the original speed by displaying more frames per second, or by cutting out frames, and then checking whether the result looks realistic. I had a look around and could not find what the standard slow-motion frame rates are. If you cannot find out either, you will have to guess.
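As a quick check, you can at least confirm the declared playback rate stored in the container with ffprobe (this only shows the stored rate, not the original capture rate):

ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of csv=p=0 input.mkv

This prints the frame rate as a fraction, e.g. 25/1.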
You can use ffmpeg to modify the frame rate of your videos as described here: https://trac.ffmpeg.org/wiki/How%20to%20speed%20up%20/%20slow%20down%20a%20video . If you want to double the playback speed (to restore from 2x slow motion), you can do:
ffmpeg -i input.mkv -filter:v "setpts=0.5*PTS" output.mkv
But I would recommend reading the short article at the link above to understand what this command and its alternatives are doing.
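One caveat worth adding: setpts only changes the video stream, so if your clips have audio you would need to speed that up as well. A sketch using the standard atempo filter (the 2x factor is just an example; atempo accepts factors between 0.5 and 2.0 per instance, so chain it, e.g. "atempo=2.0,atempo=2.0", for larger changes):

ffmpeg -i input.mkv -filter:v "setpts=0.5*PTS" -filter:a "atempo=2.0" output.mkv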

Related

iOS: Apply a slow motion effect to a video while shooting

Is it possible to apply a slow motion effect while recording video?
That is, the recording has not finished yet and the file has not been saved, but the user already sees the recording process in slow motion.
I think it is important to understand what slow motion actually means. To "slow down motion" in a movie, you need to film more images per second than usual and then play the movie back at normal speed; that is what creates the slow-motion effect.
Example: videos are often shot at 30 frames per second (fps), so for one second of movie you're creating 30 single images. If you want a motion to appear half as fast, you need to shoot at 60 fps (60 images per second). If you play those 60 images back at the normal 30 fps, the result is a movie 2 seconds in length showing the slow-motion effect.
As you can see, you cannot record and show a slow-motion effect at the same time. You'll need to save the footage first and then play it back slower than it was recorded.
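For the recording half of this, here is a minimal sketch of asking an AVCaptureDevice for a 60 fps capture rate (assuming the device's active format supports that rate; in real code you would first pick a format whose videoSupportedFrameRateRanges includes 60 fps):

import AVFoundation

// Lock the device and request 60 fps capture, so the footage can later be
// played back at 30 fps for a 2x slow-motion effect.
func configureForSlowMotion(_ device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    let frameDuration = CMTime(value: 1, timescale: 60)  // one frame per 1/60 s
    device.activeVideoMinFrameDuration = frameDuration
    device.activeVideoMaxFrameDuration = frameDuration
}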

Performance issues with AVMutableComposition - scaleTimeRange

I am using scaleTimeRange:toDuration: to produce a fast-motion effect of up to 10x the original video speed. But I noticed that videos start to stutter when played through an AVPlayer at 10x.
I also noticed that on OS X's QuickTime the same composition plays smoothly.
Another question states that the reason for this is a hardware limitation, but I want to know if there is a way around it, so that the fast-motion effect plays smoothly over the length of the entire video.
Video Specs
Format: H.264, 1280x544
FPS: 25
Data Size: 26 MB
Data Rate: 1.17 Mbit/s
I have a feeling that playing your videos at 10x using scaleTimeRange:toDuration: simply multiplies your data rate by 10, bringing it up to roughly 11.7 Mbit/s, which OS X machines can handle but iOS devices cannot.
In other words, you're creating video that needs to play back at 250 frames per second, which is pushing AVPlayer too hard.
If I didn't know about your other question, I would have said that the solution is to export your AVComposition using AVAssetExportSession, which should result in your high-FPS video being downsampled to an easier-to-handle 30 fps, and then to play that with AVPlayer.
If AVAssetExportSession isn't working, you could try applying the speed-up effect yourself: read the frames from the source video using AVAssetReader and write every tenth frame to the output file using AVAssetWriter (don't forget to set the correct presentation timestamps). A sketch of that approach follows.
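A minimal sketch under stated assumptions: the function name, the output dimensions (taken from the specs above), and the fixed 10x factor are illustrative, and error handling plus the audio track are omitted.

import AVFoundation

// Copy every tenth frame of the source into a new file, producing a 10x
// fast-motion video that plays at a normal frame rate.
func writeFastMotionCopy(from sourceURL: URL, to outputURL: URL) throws {
    let asset = AVAsset(url: sourceURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings:
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    reader.add(readerOutput)

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1280,
        AVVideoHeightKey: 544
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input, sourcePixelBufferAttributes: nil)
    writer.add(input)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    var frameIndex = 0
    while let sample = readerOutput.copyNextSampleBuffer() {
        defer { frameIndex += 1 }
        guard frameIndex % 10 == 0,
              let pixels = CMSampleBufferGetImageBuffer(sample) else { continue }
        // Divide the source timestamp by 10 so output frames stay evenly spaced.
        let time = CMTimeMultiplyByRatio(
            CMSampleBufferGetPresentationTimeStamp(sample), multiplier: 1, divisor: 10)
        while !input.isReadyForMoreMediaData { Thread.sleep(forTimeInterval: 0.001) }
        adaptor.append(pixels, withPresentationTime: time)
    }

    input.markAsFinished()
    writer.finishWriting { /* check writer.status and writer.error here */ }
}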

Simplified screen capture: record video of only what appears within the layers of a UIView?

This SO answer addresses how to do a screen capture of a UIView. We need something similar, but instead of a single image, the goal is to produce a video of everything appearing within a UIView over 60 seconds -- conceptually like recording only the layers of that UIView, ignoring other layers.
Our video app superimposes layers on whatever the user is recording, and the ultimate goal is to produce a master video merging those layers with the original video. However, using AVVideoCompositionCoreAnimationTool to merge layers with the original video is very, very, very slow: exporting a 60-second video takes 10-20 seconds.
What we found is that combining two videos (i.e., using only AVMutableComposition without AVVideoCompositionCoreAnimationTool) is very fast: ~1 second. The hope is to create an independent video of the layers and then combine that with the original video using only AVMutableComposition.
An answer in Swift is ideal but not required.
It sounds like your "fast" merge doesn't involve (re-)encoding frames, i.e. it's trivial and basically a glorified file concatenation, which is why it's achieving 60x realtime. By contrast, your "very slow" export runs at 3-6x realtime, which actually isn't that terrible (at least it wasn't on older hardware).
Encoding frames with an AVAssetWriter should give you an idea of the fastest possible non-trivial export, and this may reveal that on modern hardware you could halve or quarter your export times.
This is a long way of saying that there might not be that much more performance to be had. If you think about the typical iOS video encoding use case, which would probably be recording 1920p @ 120 fps or 240 fps, your encoding at ~6x realtime @ 30 fps is in the ballpark of what your typical iOS device "needs" to be able to do.
There are optimisations available to you (like lower/variable frame rates), but these may cost you the convenience of being able to capture CALayers; see the sketch below.
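If you do try the AVAssetWriter route, the core step is drawing the layer tree into pixel buffers yourself. A rough sketch, assuming 32BGRA buffers and leaving out timing, threading, and error handling:

import AVFoundation
import QuartzCore

// Render a CALayer into a fresh CVPixelBuffer that can then be appended to
// an AVAssetWriterInputPixelBufferAdaptor with a presentation timestamp.
func pixelBuffer(from layer: CALayer, width: Int, height: Int) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGBitmapContextCompatibilityKey: true] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA, attrs, &buffer)
    guard let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let context = CGContext(
        data: CVPixelBufferGetBaseAddress(pixelBuffer),
        width: width, height: height, bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue |
                    CGImageAlphaInfo.premultipliedFirst.rawValue) else { return nil }

    layer.render(in: context)  // draws the layer's current state
    return pixelBuffer
}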

seeking or scrubbing a video stream in reverse

I've created a scrubber in my app that allows the user to scrub forwards/backwards through a video via [AVPlayer seekToTime:toleranceBefore:toleranceAfter:].
The video being scrubbed is captured via an AVCaptureSession that uses AVCaptureMovieFileOutput. I've ffprobed the resulting .MOV and the results are as expected (e.g., on my iPhone 5s I'm recording at 120 fps at approximately 23000 kb/s, with approximately one keyframe per second).
Since there is only approximately one keyframe per second, it is difficult to scrub backward through the video with any precision and without lag (since the player has to go back to the closest keyframe and then compute the frame at the current scrubbing position).
So I'm wondering if there is a better strategy for smooth scrubbing. There are apps out there that do this really well (e.g., I've examined the Coach's Eye app, and it records video precisely the same way I do, yet its scrubbing performance is quite good).
I'd be very appreciative of any suggestions.
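One strategy worth noting (my suggestion, not something from this thread): since the lag comes from the sparse keyframes, you could re-encode the captured file so that every frame is a keyframe, which makes backward seeks cheap at the cost of a larger file. A sketch of the relevant AVAssetWriter compression settings (dimensions are illustrative):

import AVFoundation

// Writer settings where every frame is a sync frame, so a reverse scrub
// never has to decode forward from a distant keyframe.
let videoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1280,
    AVVideoHeightKey: 720,
    AVVideoCompressionPropertiesKey: [
        AVVideoMaxKeyFrameIntervalKey: 1  // keyframe interval of 1 frame
    ]
]
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)

Combined with zero-tolerance seeks (toleranceBefore/toleranceAfter of kCMTimeZero), this should remove most of the scrubbing lag.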

Problems in Audio/Video Slow motion

I am trying to do slow motion for my video file along with its audio. In my case, I have to do ramped slow motion (gradually slowing down and speeding up, like a parabola), not linear slow motion.
Ref: Linear slow motion:
Ref: Ramped slow motion:
What I have done so far:
Used AVFoundation for the first three bullets:
From the video files, separated audio and video.
Did slow motion for the video using the AVFoundation API (scaleTimeRange:toDuration:). It's really working fine; see the sketch after this list.
The same is not working for audio. There seems to be a bug in Apple's API itself (Bug ID: 14616144). The relevant question is: scaleTimeRange has no effect on audio type AVMutableCompositionTrack.
So I switched to Dirac, but later found a limitation in Dirac's open source edition: it doesn't support dynamic time stretching.
Finally, I'm trying to do it with OpenAL.
I've taken a sample OpenAL program from the Apple developer forum and executed it.
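For reference, a minimal sketch of the linear video slow motion from the bullet above (composition setup is elided and the 2x factor is just an example):

import AVFoundation

// Stretch the whole composition to twice its duration, i.e. 0.5x speed.
// Per the bug mentioned above, this reportedly scales video tracks but
// leaves audio tracks unaffected.
let composition = AVMutableComposition()
// ... insert the source video (and audio) tracks into the composition ...
let fullRange = CMTimeRange(start: .zero, duration: composition.duration)
composition.scaleTimeRange(fullRange,
                           toDuration: CMTimeMultiply(composition.duration, multiplier: 2))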
Here are my questions:
Can I store/save the processed audio in OpenAL? If it's not directly possible with OpenAL, can it be done with AVFoundation + OpenAL?
Very importantly, how do I do slow motion or stretch the time scale with OpenAL? If I know how to do time stretching, I can apply the logic for ramped slow motion.
Is there any other way?
I can't really speak to 1 or 2, but time-scaling audio can be as easy as resampling. If you have RAW/PCM audio sampled at 48 kHz and want to play it back at half speed, resample it to 96 kHz and play the audio back at 48 kHz. Since you have twice the number of samples, it will take twice as long to play. Generally:
scaledSampleRate = (originalSampleRate / playRate);
or
playRate = (originalSampleRate / scaledSampleRate);
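As a concrete illustration (mine, not the answer's), here is a naive linear-interpolation resampler over mono PCM samples:

// Resample by `factor`: factor 2.0 doubles the sample count, so playing the
// result at the original sample rate takes twice as long (half speed).
func resample(_ input: [Float], by factor: Double) -> [Float] {
    guard !input.isEmpty, factor > 0 else { return [] }
    let outputCount = Int(Double(input.count) * factor)
    var output = [Float](repeating: 0, count: outputCount)
    for i in 0..<outputCount {
        let srcPos = Double(i) / factor
        let j = Int(srcPos)
        let next = min(j + 1, input.count - 1)
        let frac = Float(srcPos - Double(j))
        // Linear interpolation between neighbouring source samples.
        output[i] = input[j] * (1 - frac) + input[next] * frac
    }
    return output
}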
This will affect the pitch of the track; however, that may be the desired effect, since such behavior is somewhat expected in "slow motion" audio. There are more advanced techniques that preserve pitch while scaling time. The open source software Audacity implements these algorithms; you could find inspiration there. There are many resources on the web that explain the tradeoffs of pitch shifting vs. time stretching:
http://en.wikipedia.org/wiki/Audio_time-scale/pitch_modification
http://www.dspdimension.com/admin/time-pitch-overview/
Another option you may not have considered is muting the audio during slow motion. That seems to be the technique employed by most AV playback utilities. However, depending on your use case, audibly distorted audio does signal to the viewer that time is being manipulated.
I have applied slow motion to a complete video, including its audio. This might help you; check this link: How to do Slow Motion video in iOS.
