Hi all. I have a very strange issue reading video with VideoCapture in OpenCV.
I have .MTS videos (MPEG-2), and I read them in OpenCV using the following code:
import cv2

cv2.namedWindow("frame", cv2.WINDOW_NORMAL)
cap = cv2.VideoCapture("00030.MTS")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:  # stop when no more frames can be decoded
        break
    cv2.imshow("frame", frame)
    cv2.waitKey(0)  # wait for a key press before showing the next frame
cap.release()
This shows me corrupted frames, with bands on them. The corruption stays the same if I save a frame as an image and look at it outside OpenCV.
But this is how it should look:
I've never seen this before while working with .avi or .mp4 files.
How can I get uncorrupted frames, like the ones a media player shows?
(I edited the title of the question. Some of this information wasn't apparent originally.)
Your file names suggest that this video material is a video camera's own recording, with no alterations to it.
Your video seems to be "interlaced", i.e. not "progressive". Interlaced video consists of "fields" instead of complete frames. A field contains only the even or only the odd lines of an image, and even and odd fields alternate. With that method you can have "50i" video that updates at 50 Hz yet requires only half the data of full "50p" video (the halved data comes at the cost of vertical resolution).
OpenCV (probably) uses ffmpeg to read your video file.
Both ffmpeg and VLC know when a video file contains interlaced video data. VLC automatically applies a suitable filter to make this data nicer to look at. ffmpeg does not, because filtering costs processing time and changes the data.
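If you want to confirm the diagnosis, ffprobe can report how the stream is flagged. This is a minimal sketch, assuming a reasonably recent ffprobe that exposes the field_order entry (values such as tt or bb indicate interlaced material; progressive does not):
ffprobe -v error -select_streams v:0 -show_entries stream=field_order -of default=noprint_wrappers=1 00030.MTS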
You should use ffmpeg to "de-interlace" your video files. I would suggest the yadif filter.
Example:
ffmpeg -i 00001.MTS -c copy -vf yadif -c:v libx264 -b:v 24M 00001_deinterlaced.MTS
You should look at the settings/options of yadif. Maybe the defaults aren't to your liking. Try yadif=1 to get field-rate progressive video (50/60p).
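For example, a variant of the command above (same placeholder file names) that produces one progressive frame per field, doubling the frame rate:
ffmpeg -i 00001.MTS -c copy -vf yadif=1 -c:v libx264 -b:v 24M 00001_deinterlaced_50p.MTS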
Related
I'm trying to use FFmpeg to implement some fairly complex logic on my videos.
The business logic is the following:
I receive videos in the formats AVI, MP4, and MOV.
I don't know in advance what the content of a video is. A file can be anywhere from 1 MB to 5 GB.
I want to output a set of images from this video at the highest quality I can get, capturing only frames that differ substantially from the previous frame (a new person, a new angle, big movement, etc.).
In addition, I want to limit the number of frames per second, so that even if the video is dramatically fast and changes all the time, it will not produce more frames per second than this parameter.
I'm currently using the following command:
./ffmpeg -i "/tmp/input/fast_movies/3_1.mp4" -vf fps=3,mpdecimate=hi=14720:lo=7040:frac=0.5 -vsync 0 -s hd720 "/tmp/output/fast_movies/(#%04d).png"
According to my understanding, it does the following:
fps=3 - first reduces the video to 3 frames per second (that is the limit I talked about)
mpdecimate - drops frames whose difference from the previous frame does not exceed the thresholds I set.
-vsync 0 - passes frames through on their own timestamps - I'm not sure why, but without it hundreds of duplicate frames are produced, ignoring the fps and mpdecimate filters. Can someone explain?
-s hd720 - sets the output size to 1280x720.
It works pretty well, but I'm not so happy with the quality. Do you think I'm missing something? Is there any FFmpeg parameter I would be better off using instead of these?
You can set the frame quality by appending -qscale:v 1 to your command.
qscale stands for quality scale, v stands for video, and the range is 1 to 31, with 1 being the highest quality and 31 the lowest.
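Applied to the command from the question, it would look like this - a sketch assuming JPEG output, since the quality scale applies to lossy encoders such as MJPEG rather than to PNG:
./ffmpeg -i "/tmp/input/fast_movies/3_1.mp4" -vf fps=3,mpdecimate=hi=14720:lo=7040:frac=0.5 -vsync 0 -s hd720 -qscale:v 1 "/tmp/output/fast_movies/(#%04d).jpg"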
I am using scaleTimeRange:toDuration: to produce a fast-motion effect of up to 10x the original video speed. But I noticed that videos start to stutter when played through an AVPlayer at 10x.
I also noticed that the same composition plays smoothly in QuickTime on OS X.
Another question states that the reason for this is a hardware limitation, but I want to know if there is a way around it, so that the fast-motion effect plays smoothly over the length of the entire video.
Video Specs
Format: H.264, 1280x544
FPS: 25
Data Size: 26 MB
Data Rate: 1.17 Mbit/s
I have a feeling that playing your videos at 10x via scaleTimeRange:toDuration: simply has the effect of multiplying your data rate by 10, bringing it up to nearly 12 Mbit/s, which OS X machines can handle but iOS devices cannot.
In other words, you're creating video that needs to play back at 250 frames per second (25 fps x 10), which is pushing AVPlayer too hard.
If I didn't know about your other question, I would have said that the solution is to export your AVComposition using AVAssetExportSession, which should result in your high-FPS video being downsampled to an easier-to-handle 30 fps, and then to play that with AVPlayer.
If AVAssetExportSession isn't working, you could try applying the speedup effect yourself, by reading the frames from the source video using AVAssetReader and writing every tenth frame to the output file using AVAssetWriter (don't forget to set the correct presentation timestamps).
We want to allow the user to place animated "stickers" over video that they record in the app and are considering different ways to composite these stickers.
Create a video in code from the frame-based animated stickers (which can be rotated and have translations applied to them) using AVAssetWriter. The problem is that AVAssetWriter only writes to a file and doesn't keep transparency. This would prevent us from being able to overlay it on the video using AVMutableComposition.
Create .mov files ahead of time for our frame-based stickers and composite them using AVMutableComposition and layer instructions with transformations. The problem with this is that there are no tools for easily converting our PNG-based frames to a .mov while maintaining an alpha channel, so we'd have to write our own.
Create separate CALayers for each frame in the sticker animations. This could potentially create a very large number of layers, depending on the frame rate of the video.
Or any better ideas?
Thanks.
I would suggest that you take a look at my blog post on this specific subject. Basically, the example there shows how RGBA video data can be loaded from a file attached to the app resources. The data is imported from a .mov containing Animation-codec RGBA data, prepared on the desktop. A conversion step is required to get the data from the desktop into iOS, since plain H.264 cannot carry an alpha channel directly (as you have discovered). Note that older hardware may have trouble decoding a user-recorded H.264 video and a second one on top of it at the same time, so this approach of using the CPU instead of the H.264 hardware for the sticker is actually better.
I got videos from YouTube showing soccer players falling. Most of these videos show the fall in slow motion, but I need the actual fall without the slow-motion effect.
The frame rates of most of these videos are 23, 25, or 29 fps.
I have seen the two standard ways of doing slow motion at this link. But how can I find the original rate used in these videos?
Any suggestions?
Generally, the effect of slow motion is produced by filming at a higher frame rate and displaying the movie at a lower frame rate. For instance, to get 2x slow motion, you could record at 50 fps and play back at 25 fps.
You are saying that the slow-motion videos you have are at 23, 25, and 29 fps. This is the playback rate; originally they were recorded at higher frame rates that are unknown to us. But we can try to restore the original speed by displaying more frames per second or by cutting out frames, and see if the result looks realistic. I had a look around and could not find what the standard slow-motion frame rates are. If you cannot find out either, you will have to guess.
You can use ffmpeg to modify the frame rate of your videos as described here: https://trac.ffmpeg.org/wiki/How%20to%20speed%20up%20/%20slow%20down%20a%20video. If you want to double the playback speed (to restore from 2x slow motion), you can do:
ffmpeg -i input.mkv -filter:v "setpts=0.5*PTS" output.mkv
But I would recommend reading the short article in the link above to understand what this and the alternative commands are doing.
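The same pattern works for other guessed factors; for instance, a sketch for restoring from a hypothetical 4x slow motion:
ffmpeg -i input.mkv -filter:v "setpts=0.25*PTS" output.mkv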
I'm analyzing a number of solutions to the problem I have at hand: I'm receiving images from a device and I need to make a video file out of them. However, the images arrive with a somewhat random delay between them, and I'm looking for the best way to encode this. I have to create this video frame by frame, and after each frame I must have a new video file containing the new frame, replacing the old video file.
I was thinking of fixing the frame rate a little "faster" than the minimum delay I might get and just repeating the last frame until a new one arrives, but I guess this solution is not optimal.
Also, this project is written in Delphi (no, I cannot change that), and I need a way to turn these frames into a video file after each frame arrives. I was thinking about using MEncoder as an external tool, but I'm reading the documentation and still haven't found an option to insert a frame into an already encoded Motion JPEG video file. As my images come in as JPEGs, I thought it would be reasonable to use Motion JPEG, but even that isn't certain yet. Also, I don't know if MEncoder can be used as a library; it would help a lot if it could.
What would you suggest?
There are some media container formats that support a variable frame rate, but I don't think MJPEG is a good choice because of its storage overhead. I believe the best way would be to transcode the JPEG frames to MP4 format using both I-frames and P-frames.
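As a command-line sketch of that idea (assuming hypothetical frame files named frame0001.jpg, frame0002.jpg, ... and a fixed frame rate; your real pipeline would drive this from code):
ffmpeg -framerate 25 -i frame%04d.jpg -c:v libx264 -pix_fmt yuv420p output.mp4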
From Delphi, you can use the FFMPEG Delphi/FP header files for the transcoding.
Edit:
The most up-to-date version of the FFMPEG headers can be found in the GLScene repository on SourceForge.net. To view the files, you can use this link.