Load Captures in OpenCV

Which video formats can we use in OpenCV? Can anything be used besides AVI files and capture from a camera?
If those are the only supported inputs, is a video converter required to use other video formats?

I'm not sure how up-to-date it is, but this OpenCV wiki page gives a good overview of which codecs are supported. It looks like AVI is the only format with decent cross-platform support. Your options are either to do the conversion with an external converter (as you suggest) or to write code that uses a video library to load the frames and create the appropriate cv::Mat or IplImage * header for the data.
Unless you're processing huge quantities of video, I suggest taking the path of least resistance and just converting the videos to AVI (see the above link for the details of what OpenCV supports). Just be careful to avoid lossy compression: it will wreak havoc with a lot of image processing algorithms.
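If you go the conversion route, reading the resulting AVI from OpenCV is straightforward. A minimal sketch with the C++ API (the file name input.avi is just a placeholder):
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap("input.avi");    // uses whatever backend/codecs OpenCV was built with
    if (!cap.isOpened()) {
        std::cerr << "Could not open input.avi (unsupported codec/container?)\n";
        return 1;
    }
    cv::Mat frame;
    while (cap.read(frame)) {             // returns false at end of stream or on a decode error
        // ... run your image processing on 'frame' here ...
    }
    return 0;
}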

OpenCV "farms out" video encoding and decoding to other libraries (e.g., ffmpeg and VFW). Also, have a look at the highgui source directory to see all of the VideoCapture wrappers available (specifically pay attention to the cap_* implementations). AVI is merely a container, and really isn't that critical to what video codecs that OpenCV can read. AVI can contain several different combinations of video, audio, and even subtitle streams. See my other answer about this. Here is also a quick article explaining the differences between containers and codecs.
So, if you're on Linux, make sure ffmpeg supports decoding the video codec you are interested in processing. You can check what your build of ffmpeg supports with the following commands (newer builds list codecs separately from container formats):
ffmpeg -formats
ffmpeg -codecs
On Windows, you'll want to make sure you have the codecs needed to decode various types of video installed, for example via the K-Lite Codec Pack.
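If you want to confirm from code which backend OpenCV is actually using (ffmpeg rather than, say, VFW or GStreamer), newer versions let you request a backend explicitly and query the one that was chosen. A small sketch, assuming OpenCV 3.4.5 or later for getBackendName() (the file name is a placeholder):
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap("input.avi", cv::CAP_FFMPEG);   // explicitly request the FFmpeg backend
    if (!cap.isOpened()) {
        std::cerr << "The FFmpeg backend could not open the file\n";
        return 1;
    }
    std::cout << "Decoding via backend: " << cap.getBackendName() << "\n";
    return 0;
}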

Related

OpenCV with FFMPEG back-end and h264_v4l2m2m codec

I'm trying to figure out whether there is a way to configure OpenCV 4.5.4, which uses the FFMPEG back-end, to write a video file (via VideoWriter) with the h264_v4l2m2m codec instead of h264. The difference between those two codecs on the ffmpeg side is that h264_v4l2m2m uses hardware support to encode frames into the video file.
When using the ffmpeg tool directly from the command line (on Linux), the codec can be chosen with the -vcodec argument. However, I don't see a way to accomplish the same thing in OpenCV, and it seems to me that it just uses h264.
I can see the difference in CPU usage: the h264 codec uses all CPU cores, while h264_v4l2m2m uses very little CPU because the encoding is offloaded to hardware.
Thus, ffmpeg by itself works fine. The question is: How to achieve the same via OpenCV?
EDIT (Feb 2022): At this point in time this is not supported or tested on the RPi 4, as stated by the dev team in this comment.
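For reference, the closest documented knob is the hardware-acceleration property added in OpenCV 4.5.2; there is no public VideoWriter option that names a specific FFmpeg encoder such as h264_v4l2m2m. A hedged sketch of requesting hardware acceleration through the FFmpeg backend (file name, resolution, and frame rate are placeholders), with no guarantee that it resolves to h264_v4l2m2m on your platform, and per the dev-team comment above not supported or tested on the RPi 4 at the time of writing:
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Ask the FFmpeg backend for any available hardware-accelerated encoder.
    std::vector<int> params = {
        cv::VIDEOWRITER_PROP_HW_ACCELERATION, cv::VIDEO_ACCELERATION_ANY
    };
    cv::VideoWriter writer("out.mp4",
                           cv::CAP_FFMPEG,
                           cv::VideoWriter::fourcc('a', 'v', 'c', '1'),   // H.264
                           30.0, cv::Size(1280, 720), params);
    if (!writer.isOpened()) return 1;                         // failed to open the encoder

    cv::Mat frame(720, 1280, CV_8UC3, cv::Scalar::all(0));    // dummy black frame
    for (int i = 0; i < 90; ++i) writer.write(frame);         // ~3 seconds at 30 fps
    return 0;
}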

How to stream from an .avi container without encoding it in H264 or H265

I would like to stream an .avi container without using any codec in the encoding process; that is, I do not want it encoded to H264 or H265, I just want to upload the video without encoding it. I am using the Azure Media Services SDK in .NET.
The presets that Azure Media Services offers in its SDK all use h264 or h265 to encode and return an mp4. I just want to upload the .avi, see whether it is possible for no compression to be applied, and then download the .avi.
Thanks!
Adding the answer here. It looks like you want to do a lossless, or near-lossless, encoding pass using CRF (constant rate factor) encoding. There is currently no support for setting CRF encoding in the standard encoder in AMS, but there is work going on to add CRF encoding settings to the SDK in the near future.
For now, you are limited to the settings available in the Transform preset in the H264 or H265 Layers.
You can see all of the available encoding settings most easily in the REST API
https://github.com/Azure/azure-rest-api-specs/blob/main/specification/mediaservices/resource-manager/Microsoft.Media/stable/2021-06-01/Encoding.json
Or look at the Transform object in your favorite SDK: the H264Video and H264Layer classes in the model, as well as their H265 equivalents, expose the settings you can control in your code.
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.management.media.models.h264video?view=azure-dotnet
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.management.media.models.h264layer?view=azure-dotnet
UPDATE: The SDK for .NET is now available with an exposed RateControlMode for H264 encoding, enabling two new rate-control modes: CBR (Constant Bit Rate) and CRF (Constant Rate Factor).
See https://www.nuget.org/packages/Microsoft.Azure.Management.Media

Which format is best for uploading video & audio?

I am working on uploading video & audio to a server, and I want to know which format is best for upload (considering quality and file size).
Video formats are just containers; if you want to consider quality and file size, you should look at how the video is encoded. For iOS-based devices, the h264 encoder with High profile at level 4 provides good compression, so you will get good quality at a smaller file size.
If you want to learn about converting video data from one format to another, please look into ffmpeg.

Can I use ffmpeg to create multi-bitrate (MBR) MPEG-4 videos?

I am currently working on a webcam streaming server project that requires dynamically adjusting the stream's bitrate according to the client's settings (screen size, processing power, ...) or the network bandwidth. The encoder is ffmpeg, since it's free and open source, and the codec is MPEG-4 Part 2. We use live555 for the server part.
How can I encode MBR MPEG-4 videos using ffmpeg to achieve this?
The multi-bitrate video you are describing is called Scalable Video Coding (SVC). See this wiki link for a basic understanding.
Basically, in a scalable video codec the base layer stream is itself completely decodable; additional information is represented in the form of one or more enhancement streams. There are a couple of techniques for achieving this, including scaling the resolution, the frame rate, and the quantization. The following papers explain the details of scalable video coding for MPEG-4 and H.264 respectively. Here is another good paper that explains what you intend to do.
Unfortunately, this is still broadly a research topic, and to date no open-source encoder (ffmpeg or xvid) supports such multi-layer encoding. I suspect commercial encoders don't support it either; it is significantly complex. You could check whether the reference encoder for H.264 supports it.
The alternative (but CPU-expensive) approach is to transcode in real time while transmitting the packets. In that case, you should start with a source of reasonably good quality. If you are using FFMPEG as an API, this should not be a problem. Handling multiple resolutions can still get messy, but you can keep changing the target encoding rate.
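Not true SVC, but a minimal sketch of that simulcast-style alternative: decode the input once and re-encode several independent renditions at different resolutions. OpenCV's VideoWriter is used here purely for illustration (a real streaming server would drive an encoder such as libavcodec directly and vary the target bitrate per rendition); all file names, sizes, and the frame rate are placeholders.
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main() {
    cv::VideoCapture cap("camera_or_file.avi");
    if (!cap.isOpened()) return 1;

    // One independently decodable output per target resolution.
    std::vector<cv::Size> sizes = { cv::Size(1280, 720), cv::Size(640, 360), cv::Size(320, 180) };
    int fourcc = cv::VideoWriter::fourcc('M', 'P', '4', 'V');   // MPEG-4 Part 2
    std::vector<cv::VideoWriter> outs;
    for (size_t i = 0; i < sizes.size(); ++i)
        outs.emplace_back("rendition_" + std::to_string(i) + ".avi", fourcc, 25.0, sizes[i]);

    cv::Mat frame, scaled;
    while (cap.read(frame)) {
        for (size_t i = 0; i < sizes.size(); ++i) {
            cv::resize(frame, scaled, sizes[i]);
            outs[i].write(scaled);   // each rendition can be served to a different client
        }
    }
    return 0;
}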

Library for decoding H.264 RTSP stream

I was planning to decode an H.264-based RTSP stream using FFMPEG in OpenCV, but when I tried it gave some errors. Later, I found that many people have faced issues while decoding H.264 streams using ffmpeg (libavcodec). Typically, error messages like the one below pop up while using libavcodec:
"[h264 # 0xa766dd0]concealing 1200 DC, 1200 AC, 1200 MV errors"
Has anyone successfully used any other library for decoding H.264-based RTSP? If so, which library? (I have heard of live555, which is used within the VLC player for decoding such streams.) I would also like to know the output format and how it can be made compatible with OpenCV (within OpenCV we can typically use cvQueryFrame to extract a frame directly from a video stream, but if we use a library other than ffmpeg, how do we go about it?).
Thanks in advance.
Regards,
Saurabh Gandhi
VLC uses ffmpeg to decode H.264.
The problem can happen when you have the wrong SPS/PPS, or none at all.
You need to extract them from the RTSP session and pass them to ffmpeg before trying to decode the video.
To decode your RTSP stream, the best libraries are FFMPEG and GStreamer.
To decode the stream you need to feed the decoder the right buffers, which means you have to understand your H.264 stream well enough to arrange the SPS, PPS, and NAL unit data before passing it to the library's decoder.
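If using ffmpeg indirectly is acceptable, the simplest route is often to let OpenCV's FFmpeg backend open the RTSP URL itself, so the SPS/PPS handling happens inside the backend. A minimal sketch (the URL is a placeholder, and cap.read() plays the role of cvQueryFrame in the C++ API):
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap("rtsp://user:pass@192.168.1.10:554/stream1", cv::CAP_FFMPEG);
    if (!cap.isOpened()) {
        std::cerr << "Failed to open the RTSP stream\n";
        return 1;
    }
    cv::Mat frame;
    while (cap.read(frame)) {
        // ... process the decoded BGR frame here ...
    }
    return 0;
}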
