Torch OpenCV integration - opencv

I want to install the opencv package in Torch. I have already installed OpenCV and it is working fine. After running luarocks install cv to install the cv package in Torch, I get the following error:
CMake Error at CMakeLists.txt:30 (FIND_PACKAGE):
Could not find a configuration file for package "OpenCV" that exactly
matches requested version "3.1".
The following configuration files were considered but not accepted:
/home/user/opencv-3.1.0/cmake/OpenCVConfig.cmake, version: unknown
/usr/local/share/OpenCV/OpenCVConfig.cmake, version: 3.3.0
-- Configuring incomplete, errors occurred!
See also "/tmp/luarocks_cv-scm-1-5467/torch-opencv/build/CMakeFiles/CMakeOutput.log".
make: *** No targets specified and no makefile found. Stop.
Is there any way to fix this?

You may want to take a look at the installation guide on the GitHub page.

torch-opencv requires opencv-3.1.0 and is not (yet) compatible with opencv-3.3.0.
Therefore, you need to install opencv-3.1.0:
git clone https://github.com/daveselinger/opencv
cd opencv
git checkout 3.1.0-with-cuda8
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local \
-D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON \
-D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON \
-D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_OPENGL=ON ..
make
sudo make install
luarocks install cv
If your CUDA version is not 8, adjust the branch you check out accordingly.
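If you would rather keep 3.3.0 in /usr/local and install 3.1.0 into a separate prefix instead, you can hint CMake at that prefix when installing the rock, since find_package honours CMAKE_PREFIX_PATH. A minimal sketch, assuming /opt/opencv-3.1.0 as the alternative prefix:
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/opt/opencv-3.1.0 ..
make && sudo make install
CMAKE_PREFIX_PATH=/opt/opencv-3.1.0 luarocks install cv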

Related

OPENNI2 missing after building OpenCV

I've downloaded OPENNI2 and got it to work in NiViewer.exe through the libfreenect2 driver. But when I build OpenCV with OPENNI2, something weird happens.
My cmd line : cmake -D BUILD_opencv_python2=OFF -D BUILD_opencv_python3=ON -D PYTHON3_NUMPY_INCLUDE_DIRS=C:\Users\asus\anaconda3\envs\yolotag5\Lib\site-packages\numpy\core\include -D WITH_OPENNI2=ON ..
The result is shown in page 1 and page 2: OPENNI2 was embedded in the compile configuration.
But after I ran: cmake.exe --build . --config Release --target INSTALL
it returns the result shown in page 1 and page 2.
OPENNI2 is just gone!
Has anyone run into this problem before?
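Not an answer, but a quick sanity check: the build you actually installed reports whether OpenNI2 made it in. Assuming the Python 3 bindings produced by this configure line are importable, you can grep the build information for it:
python -c "import cv2; print(cv2.getBuildInformation())" | findstr OpenNI
If that line reports NO for the installed cv2, the INSTALL target picked up a configuration without OpenNI2.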

How can I enable GTK+ for OpenCV in cmake?

Host: UBUNTU (Linux 4.15.0-47-generic x86_64)
Target: i.MX 8QuadXPlus (Linux imx8qxpmek 4.9.88-imx_4.9.88_imx8qxp_beta2+g05f46d3)
I have already enabled WITH_GTK and WITH_GTK_2_X.
But after configuring, GTK+ is still NO.
How can I enable GTK+ for OpenCV in cmake?
sudo apt-get install libgtk2.0-dev libgtk-3-dev
If after that you still can't enable the options, try with:
sudo apt-get install gnome-devel
You need to make sure Qt is disabled before GTK3 support is possible
(-D WITH_QT=OFF).
@lb90 and @incubo4u are correct.
Writing this for anyone new who comes across this issue.
Install GTK3 in case it is not installed (source):
sudo apt install libgtk-3-dev
It is always Qt OR GTK: if you enable one, the other is automatically disabled.
This should give you an idea of which one you would like to build OpenCV with, GTK or Qt.
This is the official page for the GUI backends (highgui module).
-D WITH_QT=OFF \
-D WITH_GTK=ON \
OR
-D WITH_QT=ON \
-D WITH_GTK=OFF \
If you turn GTK ON, OpenGL support is turned OFF by default.
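Putting those flags together, a minimal configure line for a GTK build might look like the sketch below (run from a build directory next to the source; any other flags from your usual configuration still apply). Afterwards the cmake summary should list GTK in the GUI section instead of NO:
cmake -D WITH_QT=OFF -D WITH_GTK=ON -D WITH_OPENGL=OFF ..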

Docker run vs build - different behaviour building gstreamer

I'm trying to build a docker image that uses nvidia hardware decoding in gstreamer and have encountered a strange problem with making the image.
The build process does not find the nvidia cuda related stuff while running docker build (or nvidia-docker build), but when I spin up the failed image as a container and do those very same steps from within the container everything works. I even saved the container as image which gave me a persistent image that works as intended.
Has anyone experienced a similar problem and can shed some light on it?
Dockerfile:
FROM nvcr.io/nvidia/deepstream:3.0-18.11 AS base
ENV DEBIAN_FRONTEND noninteractive
#install some dependencies. NOTE - not removing apt cache for the MWE
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
libdc1394-22 \
tmux \
vim \
libjpeg-dev \
libpng-dev \
libpng12-dev \
cuda-toolkit-10-0 \
python3-setuptools \
python3-pip ninja-build pkg-config gobject-introspection gnome-devel bison flex libgirepository1.0-dev liborc-0.4-dev
RUN pip3 install meson && ldconfig
FROM base
#pull and make gstreamer:
RUN cd /tmp && mkdir gstreamer
RUN git clone https://github.com/GStreamer/gst-build.git /tmp/gstreamer \
&& cd /tmp/gstreamer \
&& git checkout tags/1.16.0 \
&& ./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure \
&& ninja -C build \
&& ninja install -C build
Testing:
Build and run the container. Inside the container:
$ gst-inspect-1.0 nvdec
No such element or plugin 'nvdec'
$ cd /tmp/gstreamer
$ ./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure
$ ninja -C build
$ ninja install -C build
$ gst-inspect-1.0 nvdec
Factory Details:
Rank primary (256)
[... all plugin parameters show up]
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstVideoDecoder
+----GstNvDec
EDIT1
The image builds with no errors; it's only when I try to call gstreamer that I find it was built with no acceleration. I noticed that the major difference in the build process is
meson.build:109:2: Exception: Problem encountered: The nvdec plugin was enabled explicitly, but required CUDA dependencies were not found.
which does not happen when building from within the container.
The lack of an error is most likely related to the ninja+meson build system, which looks for compatible packages, reports the exception, but doesn't throw it and continues as if nothing had gone wrong.
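One way to surface this at image-build time rather than at runtime is to let the Dockerfile fail when the plugin is missing; gst-inspect-1.0 returns non-zero in that case. A sketch, assuming gst-inspect-1.0 ends up on PATH inside the image after the ninja install step:
RUN gst-inspect-1.0 nvdec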
EDIT2
Answering comment:
To build it and get the error, just build the attached docker image:
sudo docker build -t gst16:latest . > build.log
This will dump all the output into the build.log file.
I don't have a docker registry that I could use for this, and the docker image gets quite big by docker standards (~8 GB), but producing it successfully is fairly simple:
sudo docker run --runtime="nvidia" -ti gst16:latest /bin/bash
or
sudo nvidia-docker run -ti gst16:latest /bin/bash
which seems to work the same for me. Notice no --rm flag! From within the container:
#check if nvidia decoder plugin is there:
gst-inspect-1.0 nvdec
#fail!
#now build it from within:
cd /opt/gstreamer
./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure
ninja -C build
ninja install -C build
gst-inspect-1.0 nvdec
#success reported
Now to get the image, exit the container (ctrl+d) and in the host shell:
sudo docker container ls -a to view all containers including stopped ones
from gst16:latest get the CONTAINER_ID and copy it
sudo docker commit <CONTAINER_ID> gst16:manual and after a few seconds you should have the container saved as an image. Verify with sudo docker images
run the new image with sudo docker run --runtime="nvidia" --rm -ti gst16:manual /bin/bash
from within the container, run gst-inspect-1.0 nvdec again to verify it's working
EDIT3
$ nvidia-docker --version
Docker version 18.09.0, build 4d60db4
I think I found the solution/reason
Writing it here in case someone finds themselves in a similar situation, plus I hate finding old threads with a similar problem and no answer, or "nevermind, I solved it" as the only follow-up.
docker build has no ties to the nvidia runtime, and gstreamer requires access to the full nvidia toolchain in order to build the plugins that need it. This is supposed to be resolved with gstreamer 1.18, but until then there is no way to build gstreamer with the nvidia codecs during docker build.
The workaround (a rough command sketch follows these steps):
Build an image with all the dependencies.
Run a container of that image using --runtime="nvidia", but don't use the --rm flag.
In the container, build and install gstreamer as normal.
Verify with gst-inspect-1.0.
Commit the container as a new image: docker commit <container_name> <temporary_image_name>
Tag the temporary image properly.
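A rough transcript of those steps, with placeholder image and container names (not the exact ones used above):
sudo docker build -t gst16:deps .
sudo docker run --runtime="nvidia" -ti --name gst_build gst16:deps /bin/bash
# inside the container: run setup.py, ninja -C build, ninja install -C build, check gst-inspect-1.0 nvdec, then exit
sudo docker commit gst_build gst16:tmp
sudo docker tag gst16:tmp gst16:1.16.0-nvdec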

Build fails during make of OpenCV on Raspberry Pi with "segmentation fault" caused by "cc1plus"

I'm trying to make a build of OpenCV 4.0.0 on my Raspberry Pi 3B+, and keep running into this issue:
[ 83%] Building CXX object modules/stitching/CMakeFiles/opencv_perf_stitching.dir/perf/opencl/perf_stitch.cpp.o
c++: internal compiler error: Segmentation fault (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-7/README.Bugs> for instructions.
modules/stitching/CMakeFiles/opencv_perf_stitching.dir/build.make:62: recipe for target 'modules/stitching/CMakeFiles/opencv_perf_stitching.dir/perf/opencl/perf_stitch.cpp.o' failed
make[2]: *** [modules/stitching/CMakeFiles/opencv_perf_stitching.dir/perf/opencl/perf_stitch.cpp.o] Error 4
CMakeFiles/Makefile2:23142: recipe for target 'modules/stitching/CMakeFiles/opencv_perf_stitching.dir/all' failed
make[1]: *** [modules/stitching/CMakeFiles/opencv_perf_stitching.dir/all] Error 2
Makefile:162: recipe for target 'all' failed
make: *** [all] Error 2
This is the make/build portion of the script I'm running:
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D PYTHON_EXECUTABLE=~/.virtualenvs/py3cv4/bin/python \
-D WITH_GSTREAMER=ON \
-D WITH_FFMPEG=ON \
-D WITH_OPENMP=ON \
-D BUILD_EXAMPLES=ON ..
echo ""
echo "======================="
echo "Building OpenCV..."
make -j4
sudo make install
sudo ldconfig
I read somewhere that I should change the make -j4 command to not use all four cores, because I'm running out of memory. I tried make -j1, but still got the same error at the same spot. I'm going to try again with just plain make, but delete all the pre-built stuff that's in there and start over from scratch to see if that helps.
Turns out I needed to entirely delete the build I had created and rebuild it with a single core instead of all four, as it was using up too much memory. I deleted my /opencv/build/ directory and then did make with no -j command, and it worked fine. It took a really long time (5+ hours), but it did complete successfully. Now I just have to figure out why I can't import cv2...
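In command form, what ended up working is roughly the following sketch (it assumes the usual ~/opencv/build layout and the same cmake flags as in the script above):
rm -rf ~/opencv/build
mkdir ~/opencv/build && cd ~/opencv/build
# re-run the cmake command from the script above, then:
make
# no -j flag: a single job keeps memory usage within the Pi's limits
sudo make install && sudo ldconfig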

RStudio in a docker image does not deal with some libraries

I have a problem using the docker RStudio image rocker/rstudio proposed
on https://www.rocker-project.org/ (docker containers for R). Since I am a beginner with both docker and RStudio, I suspect the problem comes from me and does not deserve a bug report:
I open a proper terminal with 'Docker Quickstart Terminal'
where I run the image with docker run -d -p 8787:8787 -e DISABLE_AUTH=true -v <...>:/home/rstudio/<...> --name rstudio rocker/rstudio
in my browser I then get a nice RStudio instance at the address http://192.168.99.100:8787
but in this instance I can't install several packages such as xml2. I get the message:
Using PKG_CFLAGS=
Using PKG_LIBS=-lxml2
------------------------- ANTICONF ERROR ---------------------------
Configuration failed because libxml-2.0 was not found. Try installing:
* deb: libxml2-dev (Debian, Ubuntu, etc)
* rpm: libxml2-devel (Fedora, CentOS, RHEL)
* csw: libxml2_dev (Solaris)
If libxml-2.0 is already installed, check that 'pkg-config' is in your
PATH and PKG_CONFIG_PATH contains a libxml-2.0.pc file. If pkg-config
is unavailable you can set INCLUDE_DIR and LIB_DIR manually via:
R CMD INSTALL --configure-vars='INCLUDE_DIR=... LIB_DIR=...'
--------------------------------------------------------------------
ERROR: configuration failed for package ‘xml2’
* removing ‘/usr/local/lib/R/site-library/xml2’
Warning in install.packages :
installation of package ‘xml2’ had non-zero exit status
I don't know whether xml2 is on the image, but the file libxml-2.0.pc does exist on my laptop in the directory /opt/local/lib/pkgconfig, and pkg-config is in /opt/local/bin. So I tried linking these pkg paths when running the image (to see what happens when I play with the image environment in RStudio), adding the options -v /opt/local/lib/pkgconfig:/home/rstudio/lib/pkgconfig -v /opt/local/bin:/home/rstudio/bin to the run command. But it doesn't work: for some reason I don't see the content of lib/pkgconfig in RStudio...
Also, the RStudio instance does not accept root/sudo commands, so I can't use tools such as apt-get in the RStudio terminal.
So, what's the trick?
Libraries on your laptop (the docker host) are not available to docker containers. You should create a custom image with the required libraries. Create a Dockerfile like this:
FROM rocker/rstudio
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
libxml2-dev # add any additional libraries you need
CMD ["/init"]
Above I added libxml2-dev, but you can add as many libraries as you need.
Then build your image using this command (you need to execute it in the directory where you created the Dockerfile):
docker build -t my_rstudio:0.1 .
Then you can start your container:
docker run -d -p 8787:8787 -e DISABLE_AUTH=true --name rstudio my_rstudio:0.1
(You can add any additional arguments, such as -v, to the above.)
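As a quick check that the missing system library is now in place, you can try installing xml2 from the host shell without opening the IDE (the container name comes from the run command above; the CRAN mirror is just an example):
docker exec -it rstudio Rscript -e 'install.packages("xml2", repos="https://cloud.r-project.org")'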
