Extract every nth frame from an MPEG4 using ImageMagick - image-processing

I would like to try to extract every nth frame from a large MPEG4 file (20.1 MB) that is over 6 minutes long. However, if I try something like this:
convert video.mp4 *.png
my entire computer completely crashes. So I would like to save some time and computational power by extracting only every nth frame from the MPEG4 file, instead of extracting over 8000 images. How can this be achieved?

I'd use "ffmpeg" to do this. A command to extract PNG images at a given interval can be as simple as this:
ffmpeg -i input.mp4 -r 1 out%04d.png
Using the option -r 1 will set the output frame rate to 1 frame per second. Assuming the input video has a frame rate of 25 frames per second, -r 1 will output roughly every 25th frame. Calculate the rate you need from the frame rate of the input; for example, to get approximately every 10th frame, use -r 2.5.
The output file name in this example contains %04d, which creates a numbered sequence with four digits, padded with leading zeros.
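If you'd rather compute the rate programmatically, here's a minimal sketch of the same arithmetic in Python (assuming ffmpeg is on the PATH; the file names and the step of 10 are placeholders):
import subprocess

input_fps = 25.0             # frame rate of the source; check it with ffprobe
step = 10                    # keep roughly every 10th frame
out_rate = input_fps / step  # 25 / 10 -> "-r 2.5", as described above

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-r", str(out_rate),
    "out%04d.png",
], check=True)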

VLC Media player can do this:
Run VLC as Administrator.
Click Tools - Preferences in VLC.
Under “show settings”, click “all”.
Under “Video”, select “Filters”, but do NOT expand it yet. Tick the check box “Scene video filter”.
Expand “Filters” and select “Scene filter”.
Give an output path.
Set the “recording ratio” box, e.g. 10, 20, etc.
Click “save”.
Click Media - Open Video and find your video. Patiently let the whole thing play.
VLC will automatically capture frames and save to the output path.
Disable: Click Tools - Preferences. Under “show settings”, click “all”.
Under “Video”, select “Filters”. Uncheck “Scene video filter”. Click “save”.
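For what it's worth, the same scene filter can be driven from VLC's command line instead of the GUI. A sketch, assuming a recent VLC build (verify the exact option names with vlc --longhelp):
import subprocess

subprocess.run([
    "vlc", "video.mp4",
    "-I", "dummy",               # run without the GUI
    "--video-filter=scene",
    "--scene-format=png",
    "--scene-ratio=10",          # save every 10th frame
    "--scene-path=/tmp/frames",  # output directory (must already exist)
    "vlc://quit",                # quit when playback finishes
], check=True)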

Related

Combing effect while reading interlaced video

Hi all. I have a very strange issue reading video with VideoCapture in OpenCV.
I have .MTS videos (MPEG2), and I read them in OpenCV using the following code:
import cv2

cv2.namedWindow("frame", cv2.WINDOW_NORMAL)
cap = cv2.VideoCapture("00030.MTS")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("frame", frame)
    cv2.waitKey(0)  # press any key to advance one frame
cap.release()
This shows me corrupted frames (with bands on them). The same artifacts remain if I save a frame as an image and look at it outside OpenCV.
But this is how it should look:
I've never seen this before while working with .avi or .mp4 files.
How can I get the same not-corrupted frames like in the media player?
(I edited the title of the question. Some of this information wasn't apparent originally.)
Your file names suggest that this video material is a video camera's own recording, with no alterations to it.
Your video seems to be "interlaced", i.e. not "progressive". Interlaced video consists of "fields" instead of complete frames. A field contains all the even or odd lines of an image. Even and odd fields follow each other. With that method, you can have "50i" video that updates at 50 Hz, yet requires only half the data of a full "50p" video (half the data means reduced vertical resolution).
OpenCV (probably) uses ffmpeg to read your video file.
Both ffmpeg and VLC know when a video file contains interlaced video data. VLC automatically applies a suitable filter to make this data nicer to look at. ffmpeg does not, because filtering costs processing time and changes the data.
You should use ffmpeg to "de-interlace" your video files. I would suggest the yadif filter.
Example:
ffmpeg -i 00001.MTS -c copy -vf yadif -c:v libx264 -b:v 24M 00001_deinterlaced.MTS
You should look at the settings/options of yadif. Maybe the defaults aren't to your liking. Try yadif=1 to get field-rate progressive video (50/60p).
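To tie this back to the OpenCV workflow above, a sketch that deinterlaces first and then reads the progressive file (the -crf 18 quality setting and output name are my assumptions, not part of the answer):
import subprocess
import cv2

# yadif=1 outputs one frame per field (50/60p); re-encode video with x264
# and copy the audio stream untouched.
subprocess.run([
    "ffmpeg", "-i", "00030.MTS",
    "-vf", "yadif=1",
    "-c:v", "libx264", "-crf", "18",
    "-c:a", "copy",
    "deinterlaced.mp4",
], check=True)

cap = cv2.VideoCapture("deinterlaced.mp4")  # frames now read without combing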

FFmpeg convert video to images with complex logic

I'm trying to use FFmpeg to apply some complex logic to my videos.
The business logic is the following:
I get videos in the formats avi, mp4, and mov.
I don't know what the content of the video is. It can be anywhere from 1 MB to 5 GB.
I want to output a list of images from this video with the highest quality I can get, capturing only frames that differ significantly from the previous frame (a new person, a new angle, big movement, etc.).
In addition, I want to limit the number of frames per second, so that even if the video is dramatically fast and changes all the time, it will not produce more frames per second than this parameter.
I'm currently using the following command:
./ffmpeg -i "/tmp/input/fast_movies/3_1.mp4" -vf fps=3,mpdecimate=hi=14720:lo=7040:frac=0.5 -vsync 0 -s hd720 "/tmp/output/fast_movies/(#%04d).png"
According to my understanding, it does the following:
fps=3 - first cuts the video to 3 frames per second (that is the limit I talked about)
mpdecimate - filters out frames whose changes from the previous frame do not exceed the thresholds I set.
-vsync 0 - syncs video timestamps. I'm not sure why, but without it the command produces hundreds of duplicate frames, ignoring the fps and mpdecimate filters. Can someone explain?
-s hd720 - sets the output size to hd720 (1280x720).
It works pretty well, but I'm not so happy with the quality. Do you think I'm missing something? Is there any FFmpeg parameter I would be better off using instead of these?
You can set the frame quality by appending -qscale:v 1 to your command.
qscale stands for quality scale, v stands for video, and the range is 1 to 31, with 1 being the highest quality and 31 the lowest.
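One caveat: PNG is lossless, so -qscale:v mainly matters when writing a lossy format such as JPEG. A sketch of the full command with the flag appended and the output switched to .jpg (the paths are from the question):
import subprocess

subprocess.run([
    "ffmpeg", "-i", "/tmp/input/fast_movies/3_1.mp4",
    "-vf", "fps=3,mpdecimate=hi=14720:lo=7040:frac=0.5",
    "-vsync", "0",
    "-s", "hd720",
    "-qscale:v", "1",  # 1 = highest quality, 31 = lowest
    "/tmp/output/fast_movies/%04d.jpg",
], check=True)
If quality is the priority, it may also help to drop -s hd720 so the frames keep the source resolution.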

Adding an overlaid line on video footage following the center of the frames via video processing/pattern recognition

I have multiple 1-minute videos taken outside (a building, a sea cliff, etc.). The camera is not fixed; it is carried by a drone very slowly going up a cliff, for example.
What I need to do, if it is even possible, is use OpenCV or another video processing framework to automatically add a path to each of those videos: a red line.
The red line would follow the location pointed to by the centre of the video frame.
It means that on each frame, I must find the locations pointed at by the previous/next frames via some sort of pattern recognition and somehow link them with an overlaid red line path.
Is there some sort of algorithm or tool that facilitate this process? How would you approach this problem?
Example:
From those frames: [images: a few consecutive frames]
To those frames: [the same frames with the red centre path overlaid]
(In the real, slow footage, the consecutive frames would be much closer to each other.)
It looks like, for every frame, I must locate the previous/next frames and link their centres as a correctly ordered line. Surely that must have been done before?
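I'm not aware of a ready-made tool, but here is one way it could be sketched in OpenCV: estimate the global shift between consecutive frames with phase correlation, re-express all earlier frame centres in the current frame's coordinates, and draw a red polyline through them. This is only a sketch under strong assumptions (pure translation between frames; the sign of the shift may need flipping for your footage; file names and the 25 fps output rate are placeholders):
import cv2
import numpy as np

def gray32(img):
    return np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

cap = cv2.VideoCapture("cliff.mp4")
ok, prev = cap.read()
h, w = prev.shape[:2]
out = cv2.VideoWriter("cliff_path.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), 25.0, (w, h))
centres = [(w / 2.0, h / 2.0)]  # centres seen so far, in current-frame coords

while ok:
    canvas = prev.copy()
    pts = np.array(centres, np.int32).reshape(-1, 1, 2)
    cv2.polylines(canvas, [pts], False, (0, 0, 255), 2)  # red path (BGR)
    out.write(canvas)

    ok, frame = cap.read()
    if not ok:
        break
    # Global translation between the two frames (assumes mostly rigid motion).
    (dx, dy), _ = cv2.phaseCorrelate(gray32(prev), gray32(frame))
    # Shift every stored centre into the new frame's coordinate system,
    # then add the new frame's own centre.
    centres = [(x + dx, y + dy) for (x, y) in centres]
    centres.append((w / 2.0, h / 2.0))
    prev = frame

cap.release()
out.release()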

Overlay Image on moving object in Video (Augmented Reality / OpenCV)

I am using FFmpeg to overlay an image/emoji on a video with this command -
"-i "+inputfilePath+" -filter_complex "+"[0][1]overlay=enable='between(t,"+startTime+","+endTime+")'[v1]"+" -map [v1] -map 0:a "+OutputfilePath;
But the above command only overlays the image on the video, and the image stays still.
Instagram and Snapchat have a new pin feature. I want exactly the same, e.g. a blur on moving faces, or as in the videos below -
Here is the link.
Is it possible via FFmpeg?
I think someone with OpenCV or Augmented Reality knowledge can help with this. It is quite similar to AR, as we need to move/zoom the emoji exactly where we want it on the video/live camera.
Based on the overlay filter specification:
https://ffmpeg.org/ffmpeg-filters.html#overlay-1
when you specify a time interval, the overlay is applied only during that interval:
For example, to enable a blur filter (smartblur) from 10 seconds to 3 minutes:
smartblur = enable='between(t,10,3*60)'
What you need to do is overlay an image at specific coordinates; for example, the following uses a fixed x and y:
ffmpeg -i rtsp://[host]:[port] -i x.png -filter_complex 'overlay=10:main_h-overlay_h-10' http://[host]:[post]/output.ogg
Now the idea is to calculate those coordinates based on the current frame of the video and force the filter to use the updated coordinates on every frame.
For example, based on time:
FFmpeg move overlay from one pixel coordinate to another
ffmpeg -i bg.mp4 -i fg.mkv -filter_complex \
"[0:v][1:v]overlay=enable='between(t,10,20)':x=720+t*28:y=t*10[out]" \
-map "[out]" output.mkv
Or using some other expressions:
http://ffmpeg.org/ffmpeg-utils.html#Expression-Evaluation
Unfortunately this requires finding a formula for x and y before using those limited expressions, e.g. for the cat moving its head or the pen drawing. It can be a linear, trigonometric, or other dependency on time:
x=sin(t)
With free movement this is not always possible.
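For instance, a sketch of one such trigonometric motion, sweeping the overlay horizontally around the centre (file names and the 100-pixel amplitude are placeholders; W, H, w, h are the overlay filter's own width/height variables):
import subprocess

subprocess.run([
    "ffmpeg", "-i", "bg.mp4", "-i", "fg.png",
    "-filter_complex",
    "[0:v][1:v]overlay=x='(W-w)/2+100*sin(t)':y='(H-h)/2'[out]",
    "-map", "[out]",
    "out.mp4",
], check=True)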
To locate an object's coordinates more precisely and overlay something on it, it should be possible to write your own filter (FFmpeg is open source), similar to overlay:
https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/vf_overlay.c
It could calculate x and y either from an external file (where you dump x and y for all timestamps, if it is a static video) or by doing some image processing to find the specific region.
Hopefully this gives you an idea and a direction to move in.
It's a very interesting feature.
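If the motion can't be captured by a formula, the OpenCV route hinted at in the question would be to detect the object on every frame and composite the emoji yourself instead of using the overlay filter. A rough sketch with a Haar cascade face detector and a transparent PNG (all file names are placeholders, the PNG is assumed to have an alpha channel, and real footage would want a proper tracker for stability):
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
emoji = cv2.imread("emoji.png", cv2.IMREAD_UNCHANGED)  # BGRA image

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("output.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, fw, fh) in detector.detectMultiScale(gray, 1.3, 5):
        # Resize the emoji to the detected face box and alpha-blend it in.
        em = cv2.resize(emoji, (fw, fh))
        alpha = em[:, :, 3:4] / 255.0
        roi = frame[y:y + fh, x:x + fw]
        frame[y:y + fh, x:x + fw] = (alpha * em[:, :, :3] +
                                     (1 - alpha) * roi).astype(np.uint8)
    out.write(frame)

cap.release()
out.release()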

Generate 30FPS MP4 with Rmagick

I am using ImageMagick and Rmagick in Rails to generate MP4 videos. I am trying to change the frame rate, and it keeps coming up at 25 FPS.
I see in some examples that at the command-line level you can use -delay 1x2. Nobody explains the 1x2; I'm guessing it's ticks per second and ticks per frame, but it doesn't seem to change the delay when using Rmagick.
I've also set ticks_per_second to 600 and the delay to 30 on the ImageList object. No joy.
Okay, I made this work. If I do the math correctly, the frame rate is ticks_per_second / delay, so my earlier 600/30 gave 20 FPS. For 30 FPS:
Set ticks_per_second to 60 on the ImageList,
and delay to 2 on each image in the ImageList (60/2 = 30 FPS).
animation = ImageList.new
animation.ticks_per_second = 60
frames.each do |frame|
  animation.push(frame)
  animation.cur_image.delay = 2  # 60 ticks/sec / 2 ticks per frame = 30 FPS
end
animation.write('my file')
I get a film at the right speed, but the film info window in QuickTime still shows 25 FPS. That's probably a defect, but easy to ignore.
