FFmpeg: convert video to images with complex logic

I'm trying to use FFmpeg to implement some complex logic on my videos.
The business logic is the following:
I get videos in the formats avi, mp4, and mov.
I don't know the content of a video in advance, and it can be anywhere from 1 MB to 5 GB.
I want to output a list of images from the video at the highest quality I can get, capturing only frames that differ significantly from their previous frame (a new person, a new angle, a big movement, etc.).
In addition, I want to limit the number of frames per second, so that even if the video is dramatically fast and changes all the time, it will not produce more frames per second than this parameter.
I'm currently using the following command:
./ffmpeg -i "/tmp/input/fast_movies/3_1.mp4" -vf fps=3,mpdecimate=hi=14720:lo=7040:frac=0.5 -vsync 0 -s hd720 "/tmp/output/fast_movies/(#%04d).png"
According to my understanding, it does the following:
fps=3 - first reduces the video to 3 frames per second (so that is the limit I talked about)
mpdecimate - drops frames that don't differ from the previous frame by more than the thresholds I set
-vsync 0 - video sync method - I'm not sure why, but without it hundreds of duplicate frames are produced, ignoring the fps and mpdecimate filters. Can someone explain?
-s hd720 - sets the output size to 1280x720
It works pretty well, but I'm not so happy with the quality. Do you think I'm missing something? Is there any FFmpeg parameter I'd be better off using instead of these?

You can set the frame quality by appending -qscale:v 1 to your command.
qscale stands for quality scale, v stands for video, and the range is 1 to 31, with 1 being the highest quality and 31 the lowest. Note that PNG output is lossless, so -qscale:v mainly matters if you write to a lossy format such as JPEG.
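If it helps to reason about what the fps and mpdecimate filters are doing together, the selection logic can be sketched in plain Python. This is a simplification under assumptions: frames are modelled as flat lists of grayscale values, and the threshold and fps cap below are illustrative numbers, not mpdecimate's actual units.

```python
def mean_abs_diff(a, b):
    # Average absolute per-pixel difference between two frames.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_keyframes(frames, timestamps, threshold, max_fps):
    """Keep a frame only if it differs enough from the last kept frame,
    and never keep more than max_fps frames per second of video time."""
    kept = []          # timestamps of kept frames
    last_frame = None
    last_time = None
    min_gap = 1.0 / max_fps
    for frame, t in zip(frames, timestamps):
        if last_frame is None:
            kept.append(t)
            last_frame, last_time = frame, t
            continue
        if t - last_time < min_gap:
            continue  # rate cap, like putting fps=3 ahead of mpdecimate
        if mean_abs_diff(frame, last_frame) >= threshold:
            kept.append(t)
            last_frame, last_time = frame, t
    return kept

# A static frame is dropped; a big change is kept:
print(select_keyframes([[0] * 4, [0] * 4, [100] * 4], [0, 1, 2],
                       threshold=10, max_fps=1))  # [0, 2]
```

ffmpeg's mpdecimate actually compares 8x8 blocks against the hi/lo/frac thresholds rather than a whole-frame mean, so treat this only as a conceptual model of the filter chain.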

Related

Combing effect while reading interlaced video

Hi all. I have a very strange issue reading video with OpenCV's VideoCapture.
I have .MTS videos (MPEG-2), and I read them in OpenCV using the following code:
import cv2

cv2.namedWindow("frame", cv2.WINDOW_NORMAL)
cap = cv2.VideoCapture("00030.MTS")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("frame", frame)
    cv2.waitKey(0)
cap.release()
This shows me corrupted frames, with horizontal bands (combing) on them. The corruption is still there if I save a frame as an image and view it outside OpenCV, so it is not a display problem.
I've never seen this before while working with .avi or .mp4 files.
How can I get the same uncorrupted frames that a media player shows?
(I edited the title of the question. Some of this information wasn't apparent originally.)
Your file names suggest that this video material is a video camera's own recording, with no alterations to it.
Your video seems to be "interlaced", i.e. not "progressive". Interlaced video consists of "fields" instead of complete frames. A field contains all the even or odd lines of an image. Even and odd fields follow each other. With that method, you can have "50i" video that updates at 50 Hz, yet requires only half the data of a full "50p" video (half the data means reduced vertical resolution).
OpenCV (probably) uses ffmpeg to read your video file.
Both ffmpeg and VLC know when a video file contains interlaced video data. VLC automatically applies a suitable filter to make this data nicer to look at. ffmpeg does not, because filtering costs processing time and changes the data.
You should use ffmpeg to "de-interlace" your video files. I would suggest the yadif filter.
Example:
ffmpeg -i 00001.MTS -c copy -vf yadif -c:v libx264 -b:v 24M 00001_deinterlaced.MTS
You should look at the settings/options of yadif; maybe the defaults aren't to your liking. Try yadif=1 to get field-rate progressive output (50/60 fps).
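For intuition, deinterlacing in its crudest form keeps one field and interpolates the missing lines. Here is a minimal sketch of that idea, with frames as lists of rows of pixel values; yadif itself is considerably more sophisticated (motion-adaptive interpolation), so this is only to illustrate the concept:

```python
def deinterlace_even_field(frame):
    """Keep the even (top-field) lines and rebuild each odd line by
    averaging the lines above and below it -- a crude stand-in for yadif."""
    h = len(frame)
    out = [row[:] for row in frame]
    for y in range(1, h, 2):
        above = frame[y - 1]
        below = frame[y + 1] if y + 1 < h else frame[y - 1]
        out[y] = [(a + b) / 2 for a, b in zip(above, below)]
    return out

print(deinterlace_even_field([[0, 0], [9, 9], [2, 2], [9, 9]]))
# [[0, 0], [1.0, 1.0], [2, 2], [2.0, 2.0]]
```

Note how the odd lines (which came from the other field, captured at a different instant) are discarded and re-synthesized, which is exactly why the combing disappears.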

iOS: extract all frames from a video

I have to extract all frames from a video file and then save them to files.
I tried to use AVAssetImageGenerator, but it's very slow: it takes 1-3 s per frame (for a sample 1280x720 MPEG-4 video), not counting the saving-to-file step.
Is there any way to make it much faster? OpenGL, the GPU, (...)?
I would be very grateful for a pointer in the right direction.
AVAssetImageGenerator is a random access (seeking) interface, and seeking takes time, so one optimisation could be to use an AVAssetReader which will quickly and sequentially vend you frames. You can also choose to work in yuv format, which will give you smaller frames (and I think) faster decoding.
However, those raw frames are enormous: 1280 px * 720 px * 4 bytes/pixel (if in RGBA) is about 3.7 MB each. You're going to need some pretty serious compression if you want to keep them all (MPEG-4 @ 720p comes to mind :).
So what are you trying to achieve? Are you sure you want to fill up your users' disks at a rate of about 110 MB/s (at 30 fps) or 885 MB/s (at 240 fps)?
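The arithmetic behind those figures is worth writing out (the answer's numbers are rounded; "MB" here means decimal megabytes):

```python
width, height, bytes_per_pixel = 1280, 720, 4  # RGBA

frame_bytes = width * height * bytes_per_pixel
print(frame_bytes)              # 3686400 bytes, ~3.7 MB per raw frame

print(frame_bytes * 30 / 1e6)   # 110.592 -> ~110 MB/s at 30 fps
print(frame_bytes * 240 / 1e6)  # 884.736 -> ~885 MB/s at 240 fps
```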

Slow motion standard rates

I got videos from YouTube showing a soccer player falling. Most of these videos show the fall in slow motion, and I need the actual fall without the slow-motion effect.
The frame rates of most of these videos are 23, 25, or 29 fps.
I have seen the two standard ways of producing slow motion at this link, but how can I find the original rate used in these videos?
Any suggestions?
Generally the effect of slow-motion is produced by filming at a higher frame-rate and displaying the movie in a lower frame-rate. For instance, to get 2x slow-motion, you could record in 50fps and playback in 25fps.
You are saying that the slow-motion videos you have are in 23, 25, or 29 fps. This is the playback rate. Originally they were recorded at higher frame rates that are unknown to us, but we can try to restore the original speed by displaying more frames per second or by cutting out frames, and see if the result looks realistic. I had a look around and could not find what the standard slow-motion frame rates are. If you cannot find out either, you will have to guess.
You can use ffmpeg to modify the frame rate of your videos as described here: https://trac.ffmpeg.org/wiki/How%20to%20speed%20up%20/%20slow%20down%20a%20video . If you want to double the playback speed (to restore from 2x slow motion), you can do:
ffmpeg -i input.mkv -filter:v "setpts=0.5*PTS" output.mkv
But I would recommend reading the short article linked above to understand what this command and the alternatives are doing.

iOS: dynamically slow down video playback with continuous values

I have a problem with the iOS SDK: I can't find an API to slow down a video with continuous values.
I have made an app with a slider and an AVPlayer, and I would like to change the speed of the video from 50% to 150% according to the slider value.
So far I have only managed to change the speed with discrete values, and by recompositing the video (using the AVMutableComposition APIs).
Do you know if it is possible to change the speed continuously, and without recompositing?
Thank you very much!
Jery
The AVPlayer's rate property allows playback speed changes if the associated AVPlayerItem is capable of it (responds YES to canPlaySlowForward or canPlayFastForward). The rate is 1.0 for normal playback, 0 for stopped, and can be set to other values but will probably round to the nearest discrete value it is capable of, such as 2:1, 3:2, 5:4 for faster speeds, and 1:2, 2:3 and 4:5 for slower speeds.
With the older MPMoviePlayerController, and its similar currentPlaybackRate property, I found that it would take any setting and report it back, but would still round it to one of the discrete values above. For example, set it to 1.05 and you would get normal speed (1:1) even though currentPlaybackRate would say 1.05 if you read it. Set it to 1.2 and it would play at 1.25X (5:4). And it was limited to 2:1 (double speed), beyond which it would hang or jump.
For some reason, the iOS API Reference doesn't mention these discrete speeds; they were found by experimentation. They make some sense: since the hardware displays video frames at a fixed rate (e.g. 30 or 60 frames per second), some multiples are easier than others. Half speed can be achieved by showing each frame twice, and double speed by dropping every other frame. Dropping 1 out of every 3 frames gives you 150% (3:2) speed, but doing 105% is harder, requiring dropping 1 out of every 21 frames. Especially if this is done in hardware, you can see why it might be limited to only certain multiples.
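The rounding behaviour described above can be modelled as snapping to the nearest supported ratio. The set below is the one found by experimentation in this answer, not anything documented by Apple, so treat it as an assumption:

```python
# Ratios the player appeared to support: 1:2, 2:3, 4:5, 1:1, 5:4, 3:2, 2:1.
SUPPORTED_RATES = [0.5, 2 / 3, 0.8, 1.0, 1.25, 1.5, 2.0]

def effective_rate(requested):
    """Snap a requested playback rate to the nearest discrete rate."""
    return min(SUPPORTED_RATES, key=lambda r: abs(r - requested))

print(effective_rate(1.05))  # 1.0  (reported back as 1.05, played at 1:1)
print(effective_rate(1.2))   # 1.25 (played at 5:4)
```

This reproduces the two observations in the answer: a requested 1.05 plays at normal speed, and a requested 1.2 plays at 1.25x.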

Why does this gif seem to have a 0ms duration? How can I find the true duration?

I'm trying to get the duration and frame count from animated gif files so I can calculate the average framerate of the gif and then convert it to a video.
I came across this image recently during my testing and it seems to make everything believe it has a 0ms duration.
Why? How can I find the real duration?
So far I've tried:
exiftool
exiftool -v image.gif
ImageMagick
identify -verbose -format "Frame %s: %Tcs\n" image.gif
This Python script which uses the Python Imaging Library
And a couple other programs which are used for animating gifs such as Microsoft Gif Animator
The actual duration of this GIF animation really is zero seconds. It has 41 frames, and each of them has a frame duration of zero. (It also has a malformed XMP record, but that's irrelevant here.)
An infinite frame rate is obviously quite stupid, and there's no reason why your browser should even bother trying to display it. What in fact happens is that your browser slows down the frame rate of GIF animations like this so that they can actually be displayed sensibly without tying up your processor or giving you epileptic seizures.
There's no specific standard behaviour, but generally any GIF with a frame delay of less than 0.05 or 0.06 seconds per frame is liable to be slowed down by web browsers.
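If you want to inspect the per-frame delays without external tools, GIF stores each delay in a Graphic Control Extension block: the bytes 0x21 0xF9 0x04, then a flags byte, then a little-endian delay in centiseconds. A simplistic byte scan finds them in well-formed files (it can false-positive inside compressed image data, so treat it as a diagnostic sketch, not a parser):

```python
def gif_frame_delays(data):
    """Return each frame's delay in centiseconds by scanning for
    Graphic Control Extension blocks (0x21 0xF9 0x04)."""
    delays = []
    i = data.find(b'\x21\xf9\x04')
    while i != -1:
        delays.append(data[i + 4] | (data[i + 5] << 8))  # little-endian
        i = data.find(b'\x21\xf9\x04', i + 1)
    return delays

# Two hand-crafted extension blocks: delays of 0 cs and 10 cs (100 ms).
blob = (b'GIF89a'
        b'\x21\xf9\x04\x04\x00\x00\x00\x00'
        b'\x21\xf9\x04\x04\x0a\x00\x00\x00')
print(gif_frame_delays(blob))  # [0, 10]
```

A GIF like the one in the question would return a list of 41 zeros, which is the "0ms duration" the tools are reporting.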
References:
http://blogs.msdn.com/b/ieinternals/archive/2010/06/08/animated-gifs-slow-down-to-under-20-frames-per-second.aspx
http://forums.mozillazine.org/viewtopic.php?t=108528
