GStreamer pipeline + OpenCV RTSP VideoCapture does not work in Docker container

I'm trying to get GStreamer + OpenCV RTSP video capture working in a Docker container based on an NVIDIA PyTorch image. I ended up having to build OpenCV from source to enable GStreamer integration, which I do in my Dockerfile like so:
FROM nvcr.io/nvidia/pytorch:19.12-py3
# OpenCV custom build instructions from:
# https://medium.com/@galaktyk01/how-to-build-opencv-with-gstreamer-b11668fa09c
# https://github.com/junjuew/Docker-OpenCV-GStreamer/blob/master/opencv3-gstreamer1.0-Dockerfile
# Install base dependencies + gstreamer
RUN pip uninstall -y opencv-python
RUN apt-get update
RUN apt-get -y install build-essential
RUN apt-get -y install pkg-config
RUN apt-get install -y libgstreamer1.0-0 \
gstreamer1.0-plugins-base \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav \
gstreamer1.0-doc \
gstreamer1.0-tools \
libgstreamer1.0-dev \
libgstreamer-plugins-base1.0-dev \
cmake \
protobuf-compiler \
libgtk2.0-dev \
ocl-icd-opencl-dev
# Clone OpenCV repo
WORKDIR /
RUN git clone https://github.com/opencv/opencv.git
WORKDIR /opencv
RUN git checkout 4.2.0
# Build OpenCV
RUN mkdir /opencv/build
WORKDIR /opencv/build
RUN ln -s /opt/conda/lib/python3.6/site-packages/numpy/core/include/numpy /usr/include/numpy
RUN cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D PYTHON_EXECUTABLE=$(which python) \
-D BUILD_opencv_python2=OFF \
-D CMAKE_INSTALL_PREFIX=$(python -c "import sys; print(sys.prefix)") \
-D PYTHON3_EXECUTABLE=$(which python3) \
-D PYTHON3_INCLUDE_DIR=$(python -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-D PYTHON3_PACKAGES_PATH=$(python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
-D WITH_GSTREAMER=ON \
-D BUILD_EXAMPLES=ON ..
RUN make -j$(nproc)
# Install OpenCV
RUN make install
RUN ldconfig
This builds successfully, and if I print OpenCV's build information from within the Docker container, GStreamer shows as available:
python -c 'import cv2; print(cv2.getBuildInformation());'
/* snip */
Video I/O:
DC1394: NO
FFMPEG: NO
avcodec: NO
avformat: NO
avutil: NO
swscale: NO
avresample: NO
GStreamer: YES (1.14.5)
v4l/v4l2: YES (linux/videodev2.h)
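As a quick sanity check, the relevant line can also be pulled out programmatically; a minimal sketch:
import cv2

info = cv2.getBuildInformation()
# The Video I/O section lists 'GStreamer: YES (...)' when support is compiled in
gst_line = next(line for line in info.splitlines() if 'GStreamer' in line)
print(gst_line)
assert 'YES' in gst_line, 'OpenCV was built without GStreamer support'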
However, as soon as I try to use a GStreamer pipeline with cv2.VideoCapture() within the Docker container, it immediately fails:
import cv2
video = cv2.VideoCapture('gst-launch-1.0 rtspsrc location=<<rtsp URL>> latency=0 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink', cv2.CAP_GSTREAMER)
I get this "warning" (which is effectively an error, since I'm not able to pull frames from the RTSP feed):
[ WARN:0] global /opencv/modules/videoio/src/cap_gstreamer.cpp (713) open OpenCV | GStreamer warning: Error opening bin: unexpected reference "gst-launch-1" - ignoring
[ WARN:0] global /opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
If I do this outside of the Docker container, it works like a charm. Also note that if I run the GStreamer pipeline from the command line within the Docker container, I get reasonable output identical to running the same command outside of the Docker container:
root:/# gst-launch-1.0 rtspsrc location=<<rtsp URL>> latency=0 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to <<rtsp URL>>
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (request) SETUP stream 1
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
Redistribute latency...
Redistribute latency...
I'm not sure what to do next in terms of debugging the issue with GStreamer not working with OpenCV's VideoCapture - any suggestions?

I don't think you are supposed to include the gst-launch-1.0 command-line tool in the cv2 pipeline description.
Instead of a console command, it expects just the GStreamer pipeline itself. E.g.:
import cv2

def open_cam_v4l2(dev, width, height):  # helper wrapper so the fragment runs as-is
    gst_str = ('v4l2src device=/dev/video{} ! '
               'video/x-raw, width=(int){}, height=(int){} ! '
               'videoconvert ! appsink').format(dev, width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
So in your case try:
import cv2
video = cv2.VideoCapture('rtspsrc location=<<rtsp URL>> latency=0 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink', cv2.CAP_GSTREAMER)
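And to verify frames are actually delivered, a minimal read loop (same placeholder URL):
import cv2

pipeline = ('rtspsrc location=<<rtsp URL>> latency=0 ! queue ! '
            'rtph264depay ! h264parse ! avdec_h264 ! '
            'videoconvert ! appsink')
video = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not video.isOpened():
    raise RuntimeError('GStreamer pipeline failed to open')
ok, frame = video.read()  # ok is False if no frame could be pulled
print(ok, frame.shape if ok else None)
video.release()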

I have a few suggestions, but do note that I don't have much experience with GStreamer.
Could it be related to the resolution of the feeds? From here:
It was nothing docker related. I'm using my Android phone as webcam (using adb-ffmpeg-v4l2loopback) and I've just used the wrong resolution of 640x360 instead of 1280x720 (pick the wrong line from my bash history).
Have you tried installing opencv_contrib? It might be irrelevant, but I see a lot of opencv problems fixed by installing it.
Perhaps try adding the parameter format=(string)NV12, source: OpenCV VideoCapture not working with GStreamer plugin
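If you try that, the format would go in a caps filter after videoconvert, something like this (a sketch, untested):
pipeline = ('rtspsrc location=<<rtsp URL>> latency=0 ! queue ! '
            'rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! '
            'video/x-raw, format=(string)NV12 ! appsink')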
Here is the first example from Python cv2.CAP_GSTREAMER Examples:
# imports added so the example runs standalone
import subprocess
import cv2

def open_cam_rtsp(uri, width, height, latency):
    """Open an RTSP URI (IP CAM)."""
    gst_elements = str(subprocess.check_output('gst-inspect-1.0'))
    if 'omxh264dec' in gst_elements:
        # Use hardware H.264 decoder on Jetson platforms
        gst_str = ('rtspsrc location={} latency={} ! '
                   'rtph264depay ! h264parse ! omxh264dec ! '
                   'nvvidconv ! '
                   'video/x-raw, width=(int){}, height=(int){}, '
                   'format=(string)BGRx ! videoconvert ! '
                   'appsink').format(uri, latency, width, height)
    elif 'avdec_h264' in gst_elements:
        # Otherwise try to use the software decoder 'avdec_h264'
        # NOTE: in case resizing images is necessary, try adding
        # a 'videoscale' into the pipeline
        gst_str = ('rtspsrc location={} latency={} ! '
                   'rtph264depay ! h264parse ! avdec_h264 ! '
                   'videoconvert ! appsink').format(uri, latency)
    else:
        raise RuntimeError('H.264 decoder not found!')
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
They seem to check the output of subprocess.check_output('gst-inspect-1.0') for Gstreamer elements, and set gst_str accordingly.
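Usage would then look something like this (hypothetical URL, dimensions, and latency):
cap = open_cam_rtsp('<<rtsp URL>>', 1280, 720, 200)
ok, frame = cap.read()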
Of the 13 examples in the link above, I don't see the gst-launch-1.0 command. Could it be causing the problem?
Would this link help? Docker Error Could not capture frame on Ubuntu 18.04
Sorry if none of these helped. I hope you find a solution soon!

Related

Docker run vs build: different behaviour when building gstreamer

I'm trying to build a docker image that uses NVIDIA hardware decoding in GStreamer and have encountered a strange problem when making the image.
The build process does not find the NVIDIA CUDA related stuff while running docker build (or nvidia-docker build), but when I spin up the failed image as a container and do those very same steps from within the container, everything works. I even saved the container as an image, which gave me a persistent image that works as intended.
Has anyone experienced a similar problem and can shed some light on it?
Dockerfile:
FROM nvcr.io/nvidia/deepstream:3.0-18.11 AS base
ENV DEBIAN_FRONTEND noninteractive
#install some dependencies. NOTE - not removing apt cache for the MWE
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
libdc1394-22 \
tmux \
vim \
libjpeg-dev \
libpng-dev \
libpng12-dev \
cuda-toolkit-10-0 \
python3-setuptools \
python3-pip ninja-build pkg-config gobject-introspection gnome-devel bison flex libgirepository1.0-dev liborc-0.4-dev
RUN pip3 install meson && ldconfig
FROM base
#pull and make gstreamer:
RUN cd /tmp && mkdir gstreamer
RUN git clone https://github.com/GStreamer/gst-build.git /tmp/gstreamer \
&& cd /tmp/gstreamer \
&& git checkout tags/1.16.0 \
&& ./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure \
&& ninja -C build \
&& ninja install -C build
Testing:
build and run the container. Inside the container:
$ gst-inspect-1.0 nvdec
No such element or plugin 'nvdec'
$ cd /tmp/gstreamer
$ ./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure
$ ninja -C build
$ ninja install -C build
$ gst-inspect-1.0 nvdec
Factory Details:
Rank primary (256)
[... all plugin parameters show up]
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstVideoDecoder
+----GstNvDec
EDIT1
The image builds with no errors; it's only when I call GStreamer that it turns out to have been built without acceleration. I noticed that the major difference in the build process is
meson.build:109:2: Exception: Problem encountered: The nvdec plugin was enabled explicitly, but required CUDA dependencies were not found.
which does not happen when building from within the container.
The lack of an error is most likely due to the ninja+meson build system, which looks for compatible packages, reports the exception, but doesn't throw it and continues as if nothing wrong had happened.
EDIT2
Answering comment:
To build it and get the error, just build the attached docker image:
sudo docker build -t gst16:latest . > build.log
This will dump all the output into the build.log file.
I don't have a docker registry that I could use for this and the docker image gets quite big by docker standards (~8 GB), but reproducing the successful build is fairly simple:
sudo docker run --runtime="nvidia" -ti gst16:latest /bin/bash
or
sudo nvidia-docker run -ti gst16:latest /bin/bash
which seems to work the same for me. Notice no --rm flag! From within the container:
#check if nvidia decoder plugin is there:
gst-inspect-1.0 nvdec
#fail!
#now build it from within:
cd /opt/gstreamer
./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure
ninja -C build
ninja install -C build
gst-inspect-1.0 nvdec
#success reported
Now to get the image, exit the container (ctrl+d) and in the host shell:
sudo docker container ls -a to view all containers including stopped ones
from gst16:latest get the CONTAINER_ID and copy it
sudo docker commit <CONTAINER_ID> gst16:manual and after a few seconds you should have the container saved as an image. Verify with sudo docker images
run the new image with sudo docker run --runtime="nvidia" --rm -ti gst16:manual /bin/bash
from within the container try again the gst-inspect-1.0 nvdec to verify it's working
EDIT3
$ nvidia-docker --version
Docker version 18.09.0, build 4d60db4
I think I found the solution/reason.
Writing it here in case someone finds themselves in a similar situation, plus I hate finding old threads with a similar problem and no answer, or "nevermind, I solved it" as the only follow-up.
docker build has no ties to the NVIDIA runtime, and GStreamer requires access to the full NVIDIA toolchain in order to build the plugins that need it. This is to be resolved with GStreamer 1.18, but until then there is no way to build GStreamer with the NVIDIA codecs in docker build.
The workaround:
Build image with all dependencies.
Run a container of said image using runtime="nvidia" but don't use --rm flag
In the container, build gstreamer and install it as normally.
Verify with gst-inspect-1.0 (a scripted check is sketched after this list).
Commit the container as new image: docker commit <container_name> <temporary_image_name>
Tag the temporary image properly.
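For that verification step, here's a minimal Python sketch that mirrors the gst-inspect-1.0 check (it assumes gst-inspect-1.0 is on PATH; Python 3.7+ for capture_output):
import subprocess

def has_gst_element(name):
    # gst-inspect-1.0 exits non-zero when the element/plugin is missing
    result = subprocess.run(['gst-inspect-1.0', name],
                            capture_output=True, text=True)
    return result.returncode == 0

print('nvdec available:', has_gst_element('nvdec'))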

Build fails during make of OpenCV on Raspberry Pi with "segmentation fault" caused by "cc1plus"

I'm trying to make a build of OpenCV 4.0.0 on my Raspberry Pi 3B+, and keep running into this issue:
[ 83%] Building CXX object modules/stitching/CMakeFiles/opencv_perf_stitching.dir/perf/opencl/perf_stitch.cpp.o
c++: internal compiler error: Segmentation fault (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-7/README.Bugs> for instructions.
modules/stitching/CMakeFiles/opencv_perf_stitching.dir/build.make:62: recipe for target 'modules/stitching/CMakeFiles/opencv_perf_stitching.dir/perf/opencl/perf_stitch.cpp.o' failed
make[2]: *** [modules/stitching/CMakeFiles/opencv_perf_stitching.dir/perf/opencl/perf_stitch.cpp.o] Error 4
CMakeFiles/Makefile2:23142: recipe for target 'modules/stitching/CMakeFiles/opencv_perf_stitching.dir/all' failed
make[1]: *** [modules/stitching/CMakeFiles/opencv_perf_stitching.dir/all] Error 2
Makefile:162: recipe for target 'all' failed
make: *** [all] Error 2
This is the make/build portion of the script I'm running:
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D PYTHON_EXECUTABLE=~/.virtualenvs/py3cv4/bin/python \
-D WITH_GSTREAMER=ON \
-D WITH_FFMPEG=ON \
-D WITH_OPENMP=ON \
-D BUILD_EXAMPLES=ON ..
echo ""
echo "======================="
echo "Building OpenCV..."
make -j4
sudo make install
sudo ldconfig
I read somewhere that I should change the make -j4 command to not use all four cores, because I'm running out of memory. I tried make -j1, but still got the same error at the same spot. I'm going to try again with just plain make, but delete all the pre-built stuff that's in there and start over from scratch to see if that helps.
Turns out I needed to entirely delete the build I had created and rebuild it with a single core instead of all four, as it was using up too much memory. I deleted my /opencv/build/ directory and then did make with no -j command, and it worked fine. It took a really long time (5+ hours), but it did complete successfully. Now I just have to figure out why I can't import cv2...
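If you want to keep some parallelism without exhausting memory, one rough heuristic is to size -j from available RAM; a sketch (the ~2 GB-per-job figure is an assumption, and MemAvailable needs a reasonably recent kernel):
import os

# Read MemAvailable (in kB) from /proc/meminfo on Linux
with open('/proc/meminfo') as f:
    free_kb = int(next(line for line in f
                       if line.startswith('MemAvailable')).split()[1])
jobs = max(1, free_kb // (2 * 1024 * 1024))  # assume ~2 GB per compile job
os.system('make -j{}'.format(jobs))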

Build fails due to timeout

I have a project that is a wrapper for the OpenCV library, written in Rust.
In order to be able to test it I have to build OpenCV itself. Then I cache it, but a cold build takes more than 50 minutes and the job gets killed.
How could this timeout be increased? For example, I have a 50 min per-job timeout, but I'd like to have 500 minutes per 10 jobs, so I could run my first cold-start build for, say, 90 minutes and then run fast 10-minute builds after that.
I don't know if it's possible, so I'm looking for any workaround. Here is my script, which takes most of the time:
#!/bin/bash
set -eux -o pipefail
OPENCV_VERSION=${OPENCV_VERSION:-3.4.0}
URL=https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip
URL_CONTRIB=https://github.com/opencv/opencv_contrib/archive/${OPENCV_VERSION}.zip
INSTALL_DIR="$HOME/usr/installed-${OPENCV_VERSION}"
if [[ ! -e "$INSTALL_DIR" ]]; then
TMP=$(mktemp -d)
OPENCV_DIR="$(pwd)/opencv-${OPENCV_VERSION}"
OPENCV_CONTRIB_DIR="$(pwd)/opencv_contrib-${OPENCV_VERSION}"
if [[ ! -d "${OPENCV_DIR}/build" ]]; then
curl -sL ${URL} > ${TMP}/opencv.zip
unzip -q ${TMP}/opencv.zip
rm ${TMP}/opencv.zip
curl -sL ${URL_CONTRIB} > ${TMP}/opencv_contrib.zip
unzip -q ${TMP}/opencv_contrib.zip
rm ${TMP}/opencv_contrib.zip
mkdir $OPENCV_DIR/build
fi
pushd $OPENCV_DIR/build
cmake \
-D WITH_CUDA=ON \
-D BUILD_EXAMPLES=OFF \
-D BUILD_TESTS=OFF \
-D BUILD_PERF_TESTS=OFF \
-D BUILD_opencv_java=OFF \
-D BUILD_opencv_python=OFF \
-D BUILD_opencv_python2=OFF \
-D BUILD_opencv_python3=OFF \
-D CMAKE_INSTALL_PREFIX=$HOME/usr \
-D CMAKE_BUILD_TYPE=Release \
-D OPENCV_EXTRA_MODULES_PATH=$OPENCV_CONTRIB_DIR/modules \
-D CUDA_ARCH_BIN=5.2 \
-D CUDA_ARCH_PTX="" \
..
make -j4
make install && touch "$INSTALL_DIR"
popd
touch $HOME/fresh-cache
fi
sudo cp -r $HOME/usr/include/* /usr/local/include/
sudo cp -r $HOME/usr/lib/* /usr/local/lib/
How could this timeout be increased?
According to the Travis docs it's not possible; the timeout is fixed at 50 min (travis-ci.org) and 120 min (travis-ci.com).
You could consider upgrading the Travis plan. Though the real problem is not the timeout but the necessity to build a huge library before each build. Even though caching improves the situation a bit, it's still bad.
There are several ways to reduce the (per-build) build time; what fits best for you depends on your situation, of course.
A. PPA
If you are lucky and there's a PPA shipping a version of OpenCV, you can use that one. Travis runs Ubuntu 14.04 Trusty.
B. Pre-build binaries
You can always build OpenCV yourself and upload pre-built binaries to e.g. a server or a different Git repo. Travis can then download and install them from there.
C. Docker
Docker is imo the best approach to this. Either create a custom Docker image or use existing ones (there are enough around). Good places to start looking are Docker Hub and GitHub. In addition, this approach lets you pack in any further dependencies, compilers, and so on: simply everything you need.
D. Contact Travis
You can always drop an issue at Travis and ask for an updated version of OpenCV.

Torch OpenCV integration

I want to install the opencv package in Torch. I have already installed OpenCV and it is working fine. After running luarocks install cv to install the cv package in Torch, I get the following error.
CMake Error at CMakeLists.txt:30 (FIND_PACKAGE):
Could not find a configuration file for package "OpenCV" that exactly
matches requested version "3.1".
The following configuration files were considered but not accepted:
/home/user/opencv-3.1.0/cmake/OpenCVConfig.cmake, version: unknown
/usr/local/share/OpenCV/OpenCVConfig.cmake, version: 3.3.0
-- Configuring incomplete, errors occurred!
See also "/tmp/luarocks_cv-scm-1-5467/torch-opencv/build/CMakeFiles/CMakeOutput.log".
make: *** No targets specified and no makefile found. Stop.
Is there any way to fix this?
You may take a look at the installation guide on the GitHub page.
torch-opencv requires opencv-3.1.0 and is not (yet) compatible with opencv-3.3.0.
Therefore, you need to install opencv-3.1.0:
git clone https://github.com/daveselinger/opencv
cd opencv
git checkout 3.1.0-with-cuda8
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON \
    -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON \
    -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_OPENGL=ON ..
make
sudo make install
luarocks install cv
If your CUDA version is not 8, you should change it accordingly.

Cannot suppress ffmpeg output from ruby

I have a Ruby on Rails app that allows users to upload videos. When a video is added, I have a before_save filter that uses ffmpeg to generate a series of thumbnails. The problem is that ffmpeg produces tons of console output when I save a video item in the rails console, and when I run my tests.
My environment:
Host Machine: OS X 10.9.2
Vagrant Box: Ubuntu 10.04.4
ffmpeg version: SVN-r0.5.9-4:0.5.9-0ubuntu0.10.04.3
ruby version: 1.9.3-p194
Command I'm running:
`ffmpeg -v 0 -ss #{timestamp} -i #{video_file.path} -y -f image2 -vcodec mjpeg -vframes 1 -s 640x360 #{thumbnail_path}/thumbnail#{i}.jpg`
This version of ffmpeg on my VM doesn't seem to care about the "-v 0" option. I've also tried "-loglevel quiet" which causes ffmpeg to error, indicating that the option isn't recognized (both loglevel and v work on my host machine's ffmpeg).
Tried using both exec() and system(), which both caused execution to hang. Tried redirecting output to a file by doing:
`ffmpeg -v 0 -ss #{timestamp} -i #{video_file.path} -y -f image2 -vcodec mjpeg -vframes 1 -s 640x360 #{thumbnail_path}/thumbnail#{i}.jpg > #{thumbnail_path}/output.txt`
Still see output. Next I tried:
`ffmpeg -v 0 -ss #{timestamp} -i #{video_file.path} -y -f image2 -vcodec mjpeg -vframes 1 -s 640x360 #{thumbnail_path}/thumbnail#{i}.jpg &> /dev/null`
Still seeing output! Finally I tried:
$stdout.reopen("#{thumbnail_path}/output.txt", "w")
$stderr.reopen("#{thumbnail_path}/error.txt", "w")
`ffmpeg -v 0 -ss #{timestamp} -i #{video_file.path} -y -f image2 -vcodec mjpeg -vframes 1 -s 640x360 #{thumbnail_path}/thumbnail#{i}.jpg`
$stdout = STDOUT
$stderr = STDERR
Holy cow, that worked! Well, sort of. No more verbose output when running tests, BUT somehow anytime this runs I get kicked out of the rails console.
Does anyone have a more elegant solution?
You can try:
ffmpeg ... > output.txt 2>&1
which redirects both stdout and stderr into output.txt.
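The root cause is that ffmpeg writes its log to stderr, not stdout, so redirecting stdout alone changes nothing. The same fix in a quick Python sketch (timestamp and file names are placeholders):
import subprocess

# ffmpeg logs to stderr, so both streams are silenced here
subprocess.run(
    ['ffmpeg', '-ss', '10', '-i', 'input.mp4', '-y',
     '-f', 'image2', '-vcodec', 'mjpeg', '-vframes', '1',
     '-s', '640x360', 'thumbnail0.jpg'],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)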
