I am using ImageMagick and RMagick in Rails to generate MP4 videos. I am trying to change the frame rate, but it keeps coming out at 25 FPS.
I see in some examples that at the command-line level you can use -delay 1x2. Nobody explains the 1x2; I'm guessing it's ticks per second and ticks per frame, but it doesn't seem to change the delay when using RMagick.
I've also set the ticks_per_second to 600 and the delay to 30 on the ImageList object. No joy.
Okay, I made this work. If I do the math correctly :) the frame rate is ticks_per_second / delay, so 600/30 = 20 FPS.
Set ticks_per_second to 60 on the ImageList,
and delay to 2 on each image in the ImageList:
require 'rmagick'

animation = Magick::ImageList.new
animation.ticks_per_second = 60    # 60 ticks per second
frames.each do |frame|
  animation.push(frame)
  animation.cur_image.delay = 2    # 2 ticks per frame (frame rate = ticks_per_second / delay)
end
animation.write('my file')
I get a film at the right speed. But the film info window in QuickTime still shows 25 FPS. That's probably a defect, but easy to ignore.
I have a very strange issue reading video with VideoCapture in OpenCV.
I have .MTS videos (MPEG2), and I read them in OpenCV using the following code:
import cv2

cv2.namedWindow("frame", cv2.WINDOW_NORMAL)
cap = cv2.VideoCapture("00030.MTS")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("frame", frame)
    cv2.waitKey(0)  # step through frame by frame
This shows me the corrupted frames (with bands on them). The quality is the same if I save the frame as an image and look at it outside OpenCV.
But this is how it should look:
I've never seen this before while working with .avi or .mp4 files.
How can I get the same uncorrupted frames that I see in a media player?
(I edited the title of the question. Some of this information wasn't apparent originally.)
Your file names suggest that this video material is a video camera's own recording, with no alterations to it.
Your video seems to be "interlaced", i.e. not "progressive". Interlaced video consists of "fields" instead of complete frames. A field contains all the even or odd lines of an image. Even and odd fields follow each other. With that method, you can have "50i" video that updates at 50 Hz, yet requires only half the data of a full "50p" video (half the data means reduced vertical resolution).
OpenCV (probably) uses ffmpeg to read your video file.
Both ffmpeg and VLC know when a video file contains interlaced video data. VLC automatically applies a suitable filter to make this data nicer to look at. ffmpeg does not, because filtering costs processing time and changes the data.
You should use ffmpeg to "de-interlace" your video files. I would suggest the yadif filter.
Example:
ffmpeg -i 00001.MTS -vf yadif -c:v libx264 -b:v 24M -c:a copy 00001_deinterlaced.MTS
You should look at the settings/options of yadif. Maybe the defaults aren't to your liking. Try yadif=1 to get field-rate progressive video (50/60p).
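If you want to keep everything inside the Python script, a rough sketch (assuming ffmpeg is on your PATH, and reusing the file name from the question; output name and bitrate are arbitrary choices) would be to deinterlace once up front and then read the deinterlaced copy as usual:

import subprocess
import cv2

# Deinterlace once with ffmpeg's yadif filter.
subprocess.run([
    "ffmpeg", "-y", "-i", "00030.MTS",
    "-vf", "yadif",                      # or "yadif=1" for field-rate output
    "-c:v", "libx264", "-b:v", "24M",
    "deinterlaced.mp4",
], check=True)

# Read the deinterlaced copy with OpenCV as before.
cap = cv2.VideoCapture("deinterlaced.mp4")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("frame", frame)
    cv2.waitKey(0)
cap.release()
cv2.destroyAllWindows()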
I have two solutions to this problem:
SOLUTION A
Convert the asset to an AVMutableComposition.
For every second, keep only one frame by removing the timing for all the other frames using the removeTimeRange(...) method.
SOLUTION B
Use AVAssetReader to extract all the individual frames as an array of CMSampleBuffers.
Write the [CMSampleBuffer] array back into a movie, skipping every 20 frames or so, as per the requirement.
Convert the resulting video file to an AVMutableComposition and use scaleTimeRange(..) to reduce the overall timeRange of the video for the timelapse effect.
PROBLEMS
The first solution is not suitable for full HD videos; the video freezes in multiple places and the seek bar shows inaccurate timing.
e.g. a 12-second timelapse might be shown as having a duration of only 5 seconds, so it keeps playing even after the seek has finished.
I mean the timing of the video gets all messed up for some reason.
The second solution is incredibly slow. For a 10-minute HD video, memory usage grows without bound, since everything is done in memory.
I am searching for a technique that can produce a timelapse for a video right away, without a waiting period. Solution A kind of does that, but it is unsuitable because of the timing problems and stuttering.
Any suggestion would be great. Thanks!
You might want to experiment with the built-in thumbnail generation functions to see if they are fast/efficient enough for your needs.
They have the benefit of being optimised to generate images efficiently from a video stream.
Simply displaying a 'slide show' like view of the thumbnails one after another may give you the effect you are looking for.
There is information on the key class, AVAssetImageGenerator, here, including how to use it to generate multiple images:
https://developer.apple.com/reference/avfoundation/avassetimagegenerator#//apple_ref/occ/instm/AVAssetImageGenerator/generateCGImagesAsynchronouslyForTimes%3acompletionHandler%3a
I'm trying to get the duration and frame count from animated GIF files so I can calculate the average frame rate of the GIF and then convert it to a video.
I came across this image recently during my testing and it seems to make everything believe it has a 0ms duration.
Why? How can I find the real duration?
So far I've tried:
exiftool
exiftool -v image.gif
ImageMagick
identify -verbose -format "Frame %s: %Tcs\n" image.gif
This Python script which uses the Python Imaging Library
And a couple of other programs used for animating GIFs, such as Microsoft GIF Animator.
The actual duration of this GIF animation really is zero seconds. It has 41 frames, and each of them has a frame duration of zero. (It also has a malformed XMP record, but that's irrelevant here.)
An infinite frame rate is obviously quite stupid, and there's no reason why your browser should even bother trying to display it. What in fact happens is that your browser slows down the frame rate of GIF animations like this so that they can actually be displayed sensibly without tying up your processor or giving you epileptic seizures.
There's no specific standard behaviour, but generally any GIF with a frame delay of less than 0.05 or 0.06 seconds per frame is liable to be slowed down by web browsers.
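If you want to check the stored delays yourself, a small sketch along these lines (using Pillow; the 100 ms fallback is only an assumption to mimic typical browser behaviour) will print 0 for every frame of a GIF like this one:

from PIL import Image, ImageSequence

img = Image.open("image.gif")

declared = 0.0   # duration as stored in the file
effective = 0.0  # duration after a browser-style minimum delay

for frame in ImageSequence.Iterator(img):
    delay_ms = frame.info.get("duration", 0)   # per-frame delay in milliseconds
    declared += delay_ms / 1000.0
    effective += max(delay_ms, 100) / 1000.0   # 100 ms fallback is an assumption

print("frames:", img.n_frames)
print("declared duration: %.3f s" % declared)
print("effective duration (approx.): %.3f s" % effective)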
References:
http://blogs.msdn.com/b/ieinternals/archive/2010/06/08/animated-gifs-slow-down-to-under-20-frames-per-second.aspx
http://forums.mozillazine.org/viewtopic.php?t=108528
I want to track a moving object on a video using swistrack. https://en.wikibooks.org/wiki/SwisTrack
I will use a simple background subtraction algorithm for that. Therefore, I need a snapshot of the first frame of my movie.
The movies are in .avi format, and I have tried taking snapshots using GNOME player and MPlayer (on Ubuntu) and VLC (on Windows). However, I always bump into the same problem: my movie has dimensions 720 x 576 and any screenshot I take has dimensions 768 x 576. This makes background subtraction impossible and it makes SwisTrack complain.
I have no idea what is going wrong here. Is it the movie format? I have uploaded a movie and a screenshot at this URL, so you could perhaps try it and see if you get the same results:
https://perswww.kuleuven.be/~u0065551/movies_and_snapshots/
The thing is, I want to batch process my videos using e.g. MPlayer, always automatically saving a movie in its folder together with the snapshot of its first frame and the mask made from it, so I can very easily read that into SwisTrack.
Thanks a lot for your help!
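One way to sidestep the player screenshots entirely: OpenCV decodes frames at the coded storage resolution, so grabbing the first frame like this should give a 720 x 576 image rather than the display-scaled 768 x 576 one. A minimal sketch, with the file names assumed:

import cv2

# Grab the first frame at the coded (storage) resolution, not the
# display-scaled size that media-player screenshots tend to use.
cap = cv2.VideoCapture("movie.avi")
ret, frame = cap.read()
cap.release()

if ret:
    print(frame.shape)                          # expected: (576, 720, 3)
    cv2.imwrite("movie_first_frame.png", frame)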
I'm analyzing a number of solutions to the problem I have at hand: I'm receiving images from a device and I need to make a video file out of them. However, the images arrive with a somewhat random delay between them, and I'm looking for the best way to encode this. I have to create the video frame by frame, and after each frame I must have a new video file with the new frame, replacing the old video file.
I was thinking of fixing the frame rate a little "faster" than the minimum delay that I might get and just repeating the last frame until a new one arrives, but I guess this solution is not optimal.
Also, this project is made with Delphi (no, I cannot change that) and I need a means to turn these frames into a video file after each frame. I was thinking about using mencoder as an external tool, but I'm reading the documentation and still haven't found an option that makes it insert a frame into an already encoded Motion JPEG video file. As my images come in as JPEG, I thought it would be reasonable to use Motion JPEG, but even that isn't certain yet. Also, I don't know if mencoder can be used as a library; it would help a lot if it could.
What would you suggest?
There are some media container formats that support variable frame rate, but I don't think MJPEG is a good choice because of the storage overhead. I believe the best way would be to transcode the JPEG frames to MP4 format using both I-frames and P-frames.
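For the actual Delphi project, the FFMPEG header files mentioned below would handle this; purely to illustrate the shape of that transcoding step (a stream of JPEG frames piped into an H.264/MP4 encode), here is a rough sketch driven from Python, with the frame rate and file names as assumptions:

import glob
import subprocess

# Pipe a sequence of JPEG frames into ffmpeg and encode them as H.264 in MP4.
proc = subprocess.Popen([
    "ffmpeg", "-y",
    "-f", "image2pipe", "-c:v", "mjpeg", "-framerate", "10", "-i", "-",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "out.mp4",
], stdin=subprocess.PIPE)

for path in sorted(glob.glob("frames/*.jpg")):
    with open(path, "rb") as f:
        proc.stdin.write(f.read())

proc.stdin.close()
proc.wait()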
You can use FFMPEG Delphi/FP header files for the transcoding.
Edit:
The most up-to-date version of the FFMPEG headers can be found in the GLScene repository on SourceForge.net. To view the files you can use this link.