OpenCV VideoWriter (GStreamer + NVENC) freezes for more than 3 streams

I am trying to set up a multi-stream, hardware-accelerated (NVIDIA NVENC) encoding system using OpenCV compiled with the GStreamer backend, with the nvenc and nvdec plugins built into GStreamer.
The setup works fine for up to 3 streams, but as soon as I create a 4th VideoWriter object the program freezes.
Frozen output
Note that when I remove the 4th VideoWriter object, or change the encoding element from "nvh264enc" to "x264enc" for the 4th stream, the program works just fine. The issue does not reproduce with all 4 streams switched to "x264enc", so my guess is that it has something to do with NVIDIA's NVENC API or the underlying hardware? I am testing on a laptop with an RTX 3070.
Non-frozen output

I'm pretty sure that consumer-grade NVIDIA GPUs are limited to 3 concurrent NVENC sessions.
See https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new
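For reference, here is a minimal sketch (not the asker's actual code) of how such a multi-writer setup might look, with the 4th stream falling back to the CPU encoder so only three NVENC sessions are open at once. The pipeline strings, resolution, frame rate and output filenames are assumptions:

// Sketch: four cv::VideoWriter objects on GStreamer pipelines; streams 0-2
// use nvh264enc (NVENC), stream 3 uses x264enc to stay under the session limit.
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main() {
    const int width = 1280, height = 720;
    const double fps = 30.0;
    std::vector<cv::VideoWriter> writers;
    writers.reserve(4);

    for (int i = 0; i < 4; ++i) {
        std::string enc = (i < 3) ? "nvh264enc" : "x264enc";
        std::string pipeline =
            "appsrc ! videoconvert ! " + enc +
            " ! h264parse ! matroskamux ! filesink location=out" +
            std::to_string(i) + ".mkv";
        // fourcc is typically passed as 0 when a full GStreamer pipeline is given.
        writers.emplace_back(pipeline, cv::CAP_GSTREAMER, 0,
                             fps, cv::Size(width, height), true);
        if (!writers.back().isOpened())
            return 1;  // a pipeline failed to open
    }

    // Write some dummy frames to all four streams.
    cv::Mat frame(height, width, CV_8UC3, cv::Scalar::all(0));
    for (int n = 0; n < 100; ++n)
        for (auto& w : writers)
            w.write(frame);
    return 0;
}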

Related

Multiple webcams on a single USB controller [duplicate]

This question was closed 6 years ago as a duplicate of "2 usb cameras not working with opencv", which already has an answer.
I'd like to open 720p streams from two Canyon CNE-CWC3 webcams on a single USB controller (using a USB 2.0 hub) with OpenCV. It works in a rather unpredictable way; sometimes it succeeds, but most of the time it cannot open the second stream. I have checked the streams' bandwidth usage in VLC: it tops out at 150-160 Mbps per stream, so the two streams should fit within the 480 Mbps USB bandwidth without a problem. I guess the driver allocates more bandwidth for a stream during initialization, and this is the reason why the second stream fails.
Is there a workaround for this problem (either on Windows or Linux), or should I switch to different webcams? Do you know of any "reliable" model for which this problem is known not to come up?
I faced this problem on Linux. The possible solution depends on the driver; it's quite common for the driver to allocate more bandwidth than necessary. In my case I solved the problem by tweaking the driver, but that is not guaranteed to work. To estimate the necessary bandwidth, the VLC values may give you a rough idea, but the camera chip often needs more peak bandwidth because it supplies data in bursts. Reducing the resolution of one of the cameras may also help.
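If the driver honours format requests, one software-side mitigation is to ask both cameras for a compressed format and drop the resolution of one of them. A rough OpenCV sketch of that idea; the device indices and resolutions are assumptions, and set() calls can be silently ignored by the driver:

// Sketch: request MJPG on both cameras and lower one camera's resolution
// so both streams fit the shared USB 2.0 bandwidth.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture camA(0), camB(1);  // device indices are assumptions
    if (!camA.isOpened() || !camB.isOpened()) return 1;

    const double mjpg = cv::VideoWriter::fourcc('M', 'J', 'P', 'G');
    camA.set(cv::CAP_PROP_FOURCC, mjpg);
    camB.set(cv::CAP_PROP_FOURCC, mjpg);

    // Keep one camera at 720p and lower the other to reduce the combined
    // bandwidth reserved on the shared controller.
    camA.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
    camA.set(cv::CAP_PROP_FRAME_HEIGHT, 720);
    camB.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    camB.set(cv::CAP_PROP_FRAME_HEIGHT, 480);

    cv::Mat a, b;
    while (camA.read(a) && camB.read(b)) {
        cv::imshow("camA", a);
        cv::imshow("camB", b);
        if (cv::waitKey(1) == 27) break;  // Esc quits
    }
    return 0;
}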

Creating synchronized stereo videos using webcams

I am using OpenCV to capture video streams from two USB webcams (Microsoft LifeCam Studio) in Ubuntu 14.04. I am using very simple VideoCapture code (source here) and am trying to at least view two videos that are synchronized with each other.
I used Android stopwatch apps (UltraChron Stopwatch Lite and Stopwatch Timer) on my Samsung Galaxy S3 mini to discover that my viewed images are out of sync (they show different times on the stopwatch).
The frames are in sync maybe 50% of the time. The frame time differences I get range from 0 to about 300 ms, with an average of about 120 ms. The amount of timeout used seems to have very little effect on sync (it is the same for 1000 ms or 2000 ms). I tried minimizing the timeout (waitKey(1), the minimum for the OpenCV loop to work at all) and reading only every Xth iteration of the loop - this gave worse results than waitKey(1000). I run in Full HD, but lowering the resolution to 640x480 had no effect.
An ideal result would be a 100% synchronized stereo video stream at X FPS. As I said, so far I use OpenCV to view the video still images, but I do not mind using anything else to get the desired result (it can be on Windows too).
Thanks in advance for your help!
EDIT: In my search for low-cost hardware I found that it is probably possible to do some commodity hardware hacking (link here) and inject a single clock signal into multiple camera modules simultaneously to get the desired sync. The person who did that seems to have developed a GENLOCKed camera board (called NerdCam1) and even a synced stereo camera board that he now sells for about €200.
However, I have almost zero hardware-hacking ability. Also, I am not sure whether such clock injection is possible for resolutions above the NTSC/PAL standard (as it seems to be an "analog" solution?). Finally, I would prefer a variable-baseline option where the two cameras are not soldered onto a single board.
It is not possible to stereo-sync two common webcams, because webcams lack the external trigger feature that lets you precisely sync multiple cameras using a common trigger signal. Such a trigger may be implemented in software or hardware, but the latter gives better precision. Webcams only support a "free-running" mode: they stream at whatever FPS they support, but you cannot influence when exactly the frame integration/exposure happens.
There are USB cameras with a dedicated external trigger feature (usually scientific cameras like Point Grey); they are more expensive than webcams (starting at about $300 apiece), but they can be synced. If you really are on a low budget, you can try to hack a PS3 Eye camera to get the external trigger feature.
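For a purely software-side approximation (no hardware trigger, so residual skew of tens of milliseconds remains), calling grab() on both cameras back-to-back and only then doing the heavier retrieve() usually reduces the offset compared to two full read() calls. A rough sketch, with the device indices assumed:

// Sketch: grab() latches a frame cheaply on each device; retrieve() then does
// the expensive decode/copy. This only approximates sync, it does not replace
// an external hardware trigger.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture left(0), right(1);  // device indices are assumptions
    if (!left.isOpened() || !right.isOpened()) return 1;

    cv::Mat frameL, frameR;
    while (true) {
        left.grab();
        right.grab();
        left.retrieve(frameL);
        right.retrieve(frameR);

        cv::imshow("left", frameL);
        cv::imshow("right", frameR);
        if (cv::waitKey(1) == 27) break;  // Esc quits
    }
    return 0;
}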

H.264 decoder in OpenCV for real-time video transmission

I am writing a client-server application that does real-time video transmission from an Android-based phone to a server. The video captured from the phone camera is encoded using the Android-provided H.264 encoder and transmitted via a UDP socket. The frames are not RTP-encapsulated; I need this to reduce the overhead and hence the delay.
On the receiver, I need to decode the incoming encoded frames. The data sent on the UDP socket contains not only the encoded frame but also some other frame-related information as part of its header. Each frame is encoded as a NAL unit.
I am able to retrieve the frames from the received packets as a byte array. I can save this byte array as a raw H.264 file and play it back with VLC, and everything works fine.
However, I need to do some processing on these frames and hence need to use them with OpenCV.
Can anyone help me with decoding a raw H.264 byte array in OpenCV?
Can FFmpeg be used for this?
Short answer: ffmpeg and ffplay will work directly. OpenCV can be built on top of those two, so it shouldn't be difficult to use the FFMPEG/FFSHOW plugin to convert to cv::Mat. Follow the documentation:
OpenCV can use the FFmpeg library (http://ffmpeg.org/) as a backend to record, convert and stream audio and video. FFmpeg is a complete, cross-platform solution. If you enable FFmpeg while configuring OpenCV, then CMake will download and install the binaries in OPENCV_SOURCE_CODE/3rdparty/ffmpeg/. To use FFmpeg at runtime, you must deploy the FFmpeg binaries with your application.
https://docs.opencv.org/3.4/d0/da7/videoio_overview.html
Last time I had to play with the DJI PSDK, and they only allow streaming at the UDP endpoint udp://192.168.5.293:23003 with H.264.
So I wrote a simple FFmpeg interface to stream to the PSDK. But I had to debug it beforehand, so I used ffplay to display this network stream to prove it was working. This is the command that shows the stream; you have to build on top of this to use it through OpenCV:
ffplay -f h264 -i udp://192.168.1.45:23003
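Once the ffplay test works, the same endpoint can in principle be opened with cv::VideoCapture and the FFmpeg backend, which yields decoded frames as cv::Mat. This is only a sketch, assuming OpenCV was built with FFmpeg and that the demuxer can probe the raw H.264 stream without an explicit format hint:

// Sketch: open the same UDP endpoint the ffplay test used and pull decoded
// frames as cv::Mat for per-frame processing.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("udp://192.168.1.45:23003", cv::CAP_FFMPEG);
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        // ...per-frame processing goes here...
        cv::imshow("decoded", frame);
        if (cv::waitKey(1) == 27) break;  // Esc quits
    }
    return 0;
}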

Fastest way to get frames from webcam

I have a wee bit of a problem developing one of my programs in C++ (Visual Studio). Right now I'm struggling with connecting multiple webcams (via USB cables), creating a separate thread for each of them to capture frames, and a separate thread for processing the images.
I use OpenCV to process the frames, but the problem is that I don't reach the webcam's peak capability (it supports 25 fps, I get only 18). Is there some library that I could use to grab the frames and then process them with OpenCV, so that frames would be captured faster?
From my research, the most popular way is to use DirectShow to get the frames and OpenCV to process them.
Do you agree? Or do you have another solution?
I wouldn't be offended by some links :)
DirectShow is only used if you open your capture using the CV_CAP_DSHOW flag, like:
VideoCapture capture( CV_CAP_DSHOW + 0 ); // 0, 1, 2, ... your cam id there
(without it, it defaults to VFW)
The capture already runs in a separate thread, so wrapping it with more threads won't give you any gain.
Another obstacle with multiple cams is USB bandwidth: if you have ports on the back and the front of your machine, don't plug all your cams into the same port/controller, or you will just saturate it.
OpenCV uses DirectShow. Using DirectShow (the primary video capture API on Windows) directly will obviously get you equal or better performance (even more so if OpenCV is set to use Video for Windows). USB cams typically hit the USB bandwidth limit and hence a frame rate limit; using DirectShow to capture in compressed formats, or in formats with fewer bits per pixel, is the way to reach higher frame rates within the same USB bandwidth limit.
Another typical cause of low frame rates is slow synchronous processing delaying the capture. You can typically identify this by putting trivial processing into the same capture loop and seeing higher FPS compared to processing-enabled operation.
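Putting those two points together, here is a rough sketch of what this can look like: request a compressed (MJPG) format from the DirectShow backend and keep the heavy work in a worker thread so the capture loop itself stays trivial. The property values and the stand-in processing are assumptions, and property support varies by driver:

// Sketch: MJPG capture via DirectShow plus a worker thread, so slow processing
// does not delay the capture loop.
#include <opencv2/opencv.hpp>
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

int main() {
    // DirectShow backend, cam id 0 (same index + flag convention as above).
    cv::VideoCapture cap(cv::CAP_DSHOW + 0);
    if (!cap.isOpened()) return 1;
    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
    cap.set(cv::CAP_PROP_FPS, 25);

    std::mutex m;
    cv::Mat latest;
    std::atomic<bool> running{true};

    // Worker: processes the newest available frame, off the capture thread.
    std::thread worker([&] {
        cv::Mat local, gray;
        while (running) {
            { std::lock_guard<std::mutex> lock(m); latest.copyTo(local); }
            if (local.empty()) {
                std::this_thread::sleep_for(std::chrono::milliseconds(5));
                continue;
            }
            cv::cvtColor(local, gray, cv::COLOR_BGR2GRAY);  // stand-in for real processing
        }
    });

    cv::Mat frame;
    while (cap.read(frame)) {  // the capture loop only copies and displays
        { std::lock_guard<std::mutex> lock(m); frame.copyTo(latest); }
        cv::imshow("preview", frame);
        if (cv::waitKey(1) == 27) break;  // Esc quits
    }
    running = false;
    worker.join();
    return 0;
}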

Is PIX replay using the actual driver?

If I run a 3D application (like a benchmark tool or game) under PIX and replay the capture later, is the replay actually calling the same API (and thus invoking the actual driver and GPU, rather than a software fallback or CPU-emulated 3D) the same way it was when running the original 3D application? I'm focusing only on the Direct3D API part.
Is there any other way I can do the capture? For some applications, PIX fails to capture them.
Is there a way for me to capture only a subset of the rendering, say only the middle 50 frames?
