OpenCL lossless video compression - opencv

I am looking for lossless video compression in OpenCL. It has to be lossless because it is a project requirement. I found a few lossless algorithms implemented with OpenCV and ffmpeg, but none of them supports OpenCL encoding/decoding. I am using Apple computers, which come with ATI graphics cards that do not support CUDA.
Any help would be most appreciated.

You can use x264, which already has OpenCL support, with a CRF of 0 (lossless). I know, it looks like H.264 is always lossy, but it turns out it also has a lossless mode that most of the time performs better than other lossless codecs.
avconv -i input -c:v libx264 -preset slow --opencl -crf 0 -c:a copy outvideo.mp4
OpenCL in x264 is only marginally faster than plain CPU encoding, so it is not widely used. EDIT: on my system libx264 does not accept --opencl, but I think newer versions do accept that parameter. You may need the standalone x264 binary, since libx264 may not expose all of the underlying functionality.
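If your avconv/ffmpeg build rejects that flag, a rough equivalent with the standalone x264 binary would look like the line below (a sketch only, assuming an x264 build with OpenCL lookahead and lavf input support; file names are placeholders, and --qp 0 is the CLI's lossless setting):
x264 --opencl --preset slow --qp 0 -o output_lossless.mkv input.mp4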

It is unlikely that you will find anything already implemented in OpenCL for this lossless video compression task. Your best bet would be to take something that already exists and try to adapt it; the basic approach with OpenCL is to split compute tasks into many threads that each operate on small chunks of memory. You might take a look at WebM as a starting point.

Related

OpenCV with FFMPEG back-end and h264_v4l2m2m codec

I'm trying to figure out whether there is a way to configure OpenCV 4.5.4 (built with the FFMPEG back-end) to write video files via VideoWriter with the h264_v4l2m2m codec instead of h264. The difference between these two codecs on the ffmpeg side is that h264_v4l2m2m uses hardware support to encode frames into the video file.
When using the ffmpeg tool directly from the command line (Linux), the codec can be chosen with the -vcodec argument; however, I don't see a way to accomplish the same in OpenCV, and it seems to just use h264.
I notice this from the CPU usage: the h264 codec uses all CPU cores, while h264_v4l2m2m takes only a small amount of CPU because the encoding is offloaded to hardware.
So ffmpeg by itself works fine. The question is: how do I achieve the same via OpenCV?
EDIT (Feb 2022): At this point in time this is not supported/tested on the RPI4, as stated by the dev team in this comment.
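For reference, the direct command-line invocation described above looks roughly like this (a sketch only; file names and the target bitrate are placeholders, and v4l2m2m encoders typically want an explicit bitrate):
ffmpeg -i input.mp4 -vcodec h264_v4l2m2m -b:v 4M output.mp4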

iOS - How to hardware accelerate MKV (+ others unsupported) playing

I saw that a few video players (e.g. AVPlayerHD) do hardware-accelerated playback on iOS for unsupported containers like MKV. How do you think they achieve that?
I'm thinking of reading packets with ffmpeg and decoding with Core Video. Does that make sense? I'm trying to achieve the same thing.
Thanks!
I think the HW accelerators used for video rendering (decoding) support only fixed formats, due to their hard-wired logic. I don't know of any HW accelerator that is able to transcode from MKV.
Another way of accelerating video playback would be to use OpenCL and make use of the integrated GPU on your device. This approach enables HW acceleration for a wider range of applications.
The problem with this approach is that, unless you are lucky enough to find a framework that already uses OpenCL to GPU-accelerate transcoding/decoding, you will probably need to implement it yourself.
Added info
To implement a fully HW accelerated solution you first need to transcode the MKV into an H.264 stream (plus subtitles), and from there you can use the HW decoder to render the H.264 component.
For the HW accelerated transcode step you could use the GPU (via OpenCL) and/or multithreading.
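As a sketch of that first step (assuming ffmpeg is available for the preprocessing; file names are placeholders): if the MKV already carries an H.264 video stream you can simply remux it into a container the HW decoder understands, otherwise you need a full re-encode.
# remux: keep the existing H.264 stream, drop subtitles
ffmpeg -i input.mkv -c:v copy -c:a aac -sn output.mp4
# full re-encode if the video stream is not H.264
ffmpeg -i input.mkv -c:v libx264 -c:a aac -sn output.mp4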
I found https://handbrake.fr/, which might have some OpenCL transcoding features.
Cheers!

WebP recommended settings for bulk conversion

WebP seems to have an incredible number of settings you can tweak. That's great if you are converting images by hand, but is there a set of recommended default settings for WebP, or has Google published somewhere what settings they use for YouTube?
See this talk: https://www.youtube.com/watch?v=8vJSCmIMIjI where they mention a 20% saving in file size. We'd love to see a similar decrease in file size from our JPEG images without sacrificing (much) quality, but I don't trust that I can just play with the settings for a little while and eyeball the results to decide whether I've degraded the quality too far...
cwebp -preset photo -q 75 input_file -o output_file.webp
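For bulk conversion you can simply apply the same settings in a loop; a minimal shell sketch, assuming all sources are .jpg files in the current directory:
for f in *.jpg; do
  cwebp -preset photo -q 75 "$f" -o "${f%.jpg}.webp"
done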

Load Captures in OpenCV

Which video formats can we use in OpenCV? Can anything besides AVI files and live camera capture be loaded?
If those are the only supported inputs, is a video converter required to use other video formats?
I'm not sure how up-to-date it is, but this OpenCV wiki page gives a good overview of which codecs are supported. It looks like AVI is the only format with decent cross-platform support. Your options are either to do the conversion with an external converter (as you suggest) or to write code that uses a video library to load the frames and create the appropriate cv::Mat or IplImage * header for the data.
Unless you're processing huge quantities of video, I suggest taking the path of least resistance and just converting the videos to AVI (see the above link for the details of what OpenCV supports). Just be careful to avoid lossy compression: it will wreak havoc with a lot of image processing algorithms.
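As an illustration of that lossless-to-AVI conversion (a sketch, assuming an ffmpeg build with the FFV1 encoder; whether OpenCV can read a given codec back still depends on your platform's backend):
# FFV1 is lossless and keeps file sizes reasonable
ffmpeg -i input.mov -c:v ffv1 output.avi
# uncompressed frames: much larger, but readable almost everywhere
ffmpeg -i input.mov -c:v rawvideo -pix_fmt bgr24 output.avi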
OpenCV "farms out" video encoding and decoding to other libraries (e.g., ffmpeg and VFW). Also, have a look at the highgui source directory to see all of the VideoCapture wrappers available (specifically pay attention to the cap_* implementations). AVI is merely a container, and really isn't that critical to what video codecs that OpenCV can read. AVI can contain several different combinations of video, audio, and even subtitle streams. See my other answer about this. Here is also a quick article explaining the differences between containers and codecs.
So, if you're on Linux, make sure ffmpeg supports decoding the video codec you are interested in processing. You can check what your version of ffmpeg supports with the following command (newer ffmpeg builds also list codecs separately via ffmpeg -codecs):
ffmpeg -formats
On Windows, you'll want to make sure you have plenty of codecs available to decode various types of video, for example via the K-Lite Codec Pack.

Can I use ffmpeg to create multi-bitrate (MBR) MPEG-4 videos?

I am currently working on a webcam streaming server project that requires dynamically adjusting the stream's bitrate according to the client's settings (screen size, processing power, ...) or the network bandwidth. The encoder is ffmpeg, since it's free and open source, and the codec is MPEG-4 Part 2. We use live555 for the server part.
How can I encode MBR MPEG-4 videos using ffmpeg to achieve this?
The multi-bitrate video you are describing is called Scalable Video Coding (SVC). See this wiki link for a basic understanding.
Basically, in a scalable video codec the base layer stream is itself completely decodable, while additional information is carried in one or more enhancement streams. There are a couple of techniques for achieving this, including scaling resolution, frame rate, and quantization. The following papers explain Scalable Video Coding for MPEG-4 and H.264, respectively, in detail. Here is another good paper that explains what you intend to do.
Unfortunately, this is broadly a research topic, and to date no open-source encoder (ffmpeg or Xvid) supports such multi-layer encoding. I suspect even commercial encoders don't support it; it is significantly complex. You could check whether the reference encoder for H.264 supports it.
The alternative (but CPU-expensive) way would be to transcode in real time while transmitting the packets. In that case you should start from a reasonably high-quality source. If you are using FFMPEG as an API, this should not be a problem. Handling multiple resolutions can still be messy, but you can keep changing the target encoding rate.
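A hedged sketch of that alternative with plain ffmpeg (no SVC, just independent renditions encoded from one input; bitrates, sizes, and file names are placeholders) that you could then serve according to the client's bandwidth:
ffmpeg -i input.avi \
  -c:v mpeg4 -b:v 2M -s 1280x720 high.mp4 \
  -c:v mpeg4 -b:v 500k -s 640x360 low.mp4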
