Anaconda 3 and OpenCV 3

I would like to build OpenCV 3 from scratch with Anaconda 3. I tried to find instructions online but could not find any. I would appreciate it if anyone could point me in the right direction.
Thanks

Let's assume you are on Linux, that you want to install OpenCV in /foo/opencv, and that Anaconda 3 is installed in /foo/anaconda3. This should do it:
mkdir -p /foo/opencv/src
cd /foo/opencv/src
wget https://github.com/opencv/opencv/archive/3.2.0.zip
unzip 3.2.0.zip
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX:PATH=/foo/opencv \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_opencv_python2=OFF \
-DBUILD_opencv_python3=ON \
-DPYTHON3_EXECUTABLE=/foo/anaconda3/bin/python \
-DPYTHON3_INCLUDE_DIR=/foo/anaconda3/include/python3.6m \
-DWITH_1394=OFF -DWITH_VTK=OFF -DWITH_CUDA=OFF -DWITH_OPENMP=ON \
-DWITH_OPENCL=OFF -DWITH_MATLAB=OFF -DBUILD_SHARED_LIBS=ON \
-DBUILD_PERF_TESTS=OFF -DBUILD_TESTS=OFF \
../opencv-3.2.0
make
make install
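The two PYTHON3_* paths above must match your actual Anaconda installation. If you are unsure of the right values, you can ask the interpreter itself; run the following with /foo/anaconda3/bin/python (a sketch assuming a standard CPython layout; the exact include directory name, e.g. python3.6m, varies by version):

```python
import sys
import sysconfig

# Value for -DPYTHON3_EXECUTABLE
executable = sys.executable
# Value for -DPYTHON3_INCLUDE_DIR (the directory containing Python.h)
include_dir = sysconfig.get_paths()["include"]

print(executable)
print(include_dir)
```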

Related

OpenCV Docker multistage build - cannot install prebuilt source

I'm trying to build a Docker image including a very particular configuration of OpenCV with CUDA and GPU support.
The build succeeds, and if I run make install from the same context that built the image, it works with no problems.
The problem happens when I try to use a multi-stage build, to avoid keeping all the dependencies needed to build OpenCV. Before you continue reading: what follows might actually be an XY problem. If you have a better solution for copying OpenCV build artifacts (including the Python bindings!) in a Docker multi-stage build, that is my actual intent.
Now for my attempted solution and the struggle I have:
I run COPY --from=requirements /opencv /opencv, and it apparently copies everything to the right path (I checked the filesystem). But when I run make install from the build folder, I get this CMake error:
CMake Error: The source directory "" does not exist.
Specify --help for usage, or press the help button on the CMake GUI.
Makefile:2724: recipe for target 'cmake_check_build_system' failed
make: *** [cmake_check_build_system] Error 1
Again, the same command, from the same folder, but without multistage build, works.
Here is my Dockerfile:
# Stage 1: Build
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 AS requirements
# Install dependencies
RUN echo "deb http://es.archive.ubuntu.com/ubuntu eoan main universe" | tee -a /etc/apt/sources.list
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential cmake unzip pkg-config libjpeg-dev libpng-dev libtiff-dev libavcodec-dev \
libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev libgtk-3-dev libatlas-base-dev \
gfortran python3-dev libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libxvidcore-dev x264 \
libx264-dev libfaac-dev libmp3lame-dev libtheora-dev libfaac-dev libmp3lame-dev libvorbis-dev \
libjpeg-dev libpng-dev libtiff-dev git python3-pip libtbb-dev libprotobuf-dev protobuf-compiler \
libgoogle-glog-dev libgflags-dev libgphoto2-dev libeigen3-dev libhdf5-dev wget libtbb-dev gcc-8 g++-8 llvm \
python3-venv libgirepository1.0-dev
# Install my project requirements
WORKDIR /venv
RUN python3 -m venv /venv
ENV PATH="/venv/bin:$PATH"
ADD requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
# Build OpenCV
WORKDIR /opencv
RUN wget https://github.com/opencv/opencv/archive/4.4.0.zip && mv 4.4.0.zip opencv.zip && unzip opencv.zip && rm opencv.zip
RUN wget https://github.com/opencv/opencv_contrib/archive/4.4.0.zip && mv 4.4.0.zip opencv_contrib.zip && unzip opencv_contrib.zip && rm opencv_contrib.zip
WORKDIR /opencv/opencv-4.4.0/build
ENV SITE_PACKAGES /venv/lib/python3.7/site-packages
ENV EXTRA_MODULES /opencv/opencv_contrib-4.4.0/modules
ENV CUDA_ARCH 7.5
ADD docker/build_opencv.sh .
RUN ./build_opencv.sh
# Stage 2: runtime
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential cmake python3-venv
# Install OpenCV
COPY --from=requirements /opencv /opencv
WORKDIR /opencv/opencv-4.4.0/build
RUN make install && ldconfig
# build fails here; the rest is specific to my project, so I've omitted it
The build_opencv.sh script has these options:
#!/bin/bash
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_C_COMPILER=/usr/bin/gcc-8 \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D INSTALL_C_EXAMPLES=OFF \
-D WITH_TBB=ON \
-D WITH_CUDA=ON \
-D BUILD_opencv_cudacodec=OFF \
-D ENABLE_FAST_MATH=1 \
-D CUDA_FAST_MATH=1 \
-D WITH_CUBLAS=1 \
-D WITH_V4L=ON \
-D WITH_QT=OFF \
-D WITH_OPENGL=ON \
-D WITH_GSTREAMER=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_PC_FILE_NAME=opencv.pc \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_PYTHON3_INSTALL_PATH=$SITE_PACKAGES \
-D OPENCV_EXTRA_MODULES_PATH=$EXTRA_MODULES \
-D PYTHON_EXECUTABLE=/usr/bin/python3 \
-D WITH_CUDNN=ON \
-D OPENCV_DNN_CUDA=ON \
-D CUDA_ARCH_BIN=$CUDA_ARCH \
-D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 \
-D WITH_GTK_2_X=OFF \
-D BUILD_EXAMPLES=OFF ..
make -j16
You need at least numpy in your requirements.txt file.
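Note that the SITE_PACKAGES value above hardcodes python3.7. A hedged way to derive it from the interpreter instead (sysconfig reports the site-packages directory of whichever Python runs it, so executed inside the venv it should match /venv/lib/python3.7/site-packages):

```python
import sysconfig

# site-packages of the interpreter running this snippet;
# the result is what -D OPENCV_PYTHON3_INSTALL_PATH expects
site_packages = sysconfig.get_path("purelib")
print(site_packages)
```

In build_opencv.sh this could replace the hardcoded value, e.g. `SITE_PACKAGES=$(python3 -c "import sysconfig; print(sysconfig.get_path('purelib'))")`.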
In order to reproduce the issue, a minimal setup would have this structure:
- docker
  - Dockerfile
  - build_opencv.sh
- requirements.txt
Build from the root of the build context:
docker build -t opencvmultistage:latest -f docker/Dockerfile .
Am I doing something wrong? Maybe CMake has some weird cache that I'm not copying to the new image and makes the build fail?
For the sake of clarity: if I add make install to the build_opencv.sh script, it works, but then OpenCV is installed in the build stage and not in the runtime stage, which is not what I intend. make install runs in the same directory, and the same files should be present, so I don't really know what's going on.
It is simpler to run cmake, make, and make install in the same stage, and then copy only the installed folder. That way the final Docker image does not need build tools like cmake or build-essential.
We will use a custom CMAKE_INSTALL_PREFIX so that the OpenCV binaries are installed into a single directory that we can copy straight into the next stage. Using a custom prefix avoids having to copy the CUDA installation or development libraries that are no longer required. Then we run ldconfig on that directory to link the libraries as usual.
Modify the build script to use a custom CMAKE_INSTALL_PREFIX:
mkdir /prefix
cmake -D CMAKE_BUILD_TYPE=RELEASE \
# all compiler flags...
-D CMAKE_INSTALL_PREFIX=/prefix
Modifying the Dockerfile
to run make install in stage 1
# Stage 1: Build
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 AS requirements
...
ADD build_opencv.sh .
RUN ./build_opencv.sh && make install
copy the installation in stage 2
# Stage 2: runtime
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential python3-venv
# Install OpenCV
COPY --from=requirements /prefix /prefix
COPY --from=requirements /venv /venv
ENV PATH="/venv/bin:$PATH"
RUN ldconfig /prefix

Dockerfile builds correctly but ADD fails

I'm rather new to Docker and I'm trying to make a simple Dockerfile that combines an Alpine image with a Python one.
This is what the Dockerfile looks like:
FROM alpine
RUN apk update &&\
apk add -q --progress \
bash \
bats \
curl \
figlet \
findutils \
git \
make \
mc \
nodejs \
openssh \
sed \
wget \
vim
ADD ./src/ /home/src/
WORKDIR /home/src/
FROM python:3.7.4-slim
When running:
docker build -t alp-py .
the image builds as normal.
When I run
docker run -it alp-py bash
I can access bash, but when I cd into /home and run ls, the directory is empty:
root@5fb77bbc81a1:/# cd home
root@5fb77bbc81a1:/home# ls
root@5fb77bbc81a1:/home#
I've already tried changing ADD to COPY and also trying:
CPOY . /home/src/
but nothing works.
What am I doing wrong? Am I missing something?
Thanks!
There is no such thing as "combining two images". You should think of images as separate virtual machines (only for the purpose of understanding the concept, because they are more than that). You cannot combine them.
In your example you can start directly with the python image and install the tools you need on top of it:
FROM python:3.7.4-slim
RUN apt update &&\
apt-get install -y \
bash \
bats \
curl \
figlet \
findutils \
git \
make \
mc \
nodejs \
openssh \
sed \
wget \
vim
ADD ./src/ /home/src/
WORKDIR /home/src/
I didn't test whether all the packages are available, so you might want to do a bit of research in case you get errors.
When you use two FROM statements in your Dockerfile, you are creating a multi-stage build. Only the last stage produces the final image, which is why everything you ADDed in the first stage is gone. Multi-stage builds are useful when you want a final image that doesn't contain your source code, only the binaries of your product (the first stage builds the source, and the second copies just the binaries from the first).
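A minimal sketch of that behavior (the image names and file paths here are placeholders, not taken from the question): only files explicitly copied with COPY --from survive into the final stage.

```dockerfile
# Stage 1: named "builder"; its filesystem is discarded after the build
FROM alpine AS builder
RUN echo "built artifact" > /artifact.txt

# Stage 2: this is the final image; nothing from stage 1 exists here
# unless it is copied in explicitly
FROM python:3.7.4-slim
COPY --from=builder /artifact.txt /artifact.txt
```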

Docker: How to use Volumes for desktop applications

Let's take an example. I'm reading an image and writing it to a temp folder using OpenCV. I want to run this desktop application in Docker and save the output using a Docker volume, so that the output ends up on my local machine.
For the problem statement, I assigned a volume to the container so that I can save the output. When I run the code, it executes, but I don't understand how to save the output to my local machine.
This is the Dockerfile for the OpenCV example:
FROM python:3.7
RUN apt-get update \
&& apt-get install -y \
build-essential \
cmake \
git \
wget \
unzip \
yasm \
pkg-config \
libswscale-dev \
libtbb2 \
libtbb-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libavformat-dev \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
RUN pip install numpy
ENV OPENCV_VERSION="4.1.0"
RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip \
&& unzip ${OPENCV_VERSION}.zip \
&& mkdir /opencv-${OPENCV_VERSION}/cmake_binary \
&& cd /opencv-${OPENCV_VERSION}/cmake_binary \
&& cmake -DBUILD_TIFF=ON \
-DBUILD_opencv_java=OFF \
-DWITH_CUDA=OFF \
-DWITH_OPENGL=ON \
-DWITH_OPENCL=ON \
-DWITH_IPP=ON \
-DWITH_TBB=ON \
-DWITH_EIGEN=ON \
-DWITH_V4L=ON \
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
-DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=$(python3.7 -c "import sys; print(sys.prefix)") \
-DPYTHON_EXECUTABLE=$(which python3.7) \
-DPYTHON_INCLUDE_DIR=$(python3.7 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-DPYTHON_PACKAGES_PATH=$(python3.7 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
.. \
&& make install \
&& rm /${OPENCV_VERSION}.zip \
&& rm -r /opencv-${OPENCV_VERSION}
RUN ln -s \
/usr/local/python/cv2/python-3.7/cv2.cpython-37m-x86_64-linux-gnu.so \
/usr/local/lib/python3.7/site-packages/cv2.so
WORKDIR /opencv_example
COPY . .
I need some help understanding how Docker volumes are used for desktop applications, and how to save the volume's output to a local path.
How are you starting the container? Paste your command.
If you're using Windows, you need to enable shared drives in Docker settings and then start your container as shown below. If you're on macOS or Linux, you only need to run the command (you probably have other flags in there as well):
docker run -v <path-on-host>:<path-inside-container> <image-name>
For more details, check out this link.
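To connect this to the code side: inside the container, the program just writes to a fixed directory, and the -v mapping is what makes those files show up on the host. A minimal sketch (the /tmp/output path and the plain-text file are placeholders; in the real application this would be a cv2.imwrite call into the mounted folder):

```python
from pathlib import Path

# Placeholder for <path-inside-container>; start the container with
#   docker run -v <path-on-host>:/tmp/output <image-name>
output_dir = Path("/tmp/output")
output_dir.mkdir(parents=True, exist_ok=True)

# Stand-in for the real cv2.imwrite(...) of the processed image
result_file = output_dir / "result.txt"
result_file.write_text("processed")
print(result_file.read_text())
```

Anything written under /tmp/output in the container then appears in <path-on-host> on the local machine; no extra code is needed to "save to the volume".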

Using two docker images at once

I have the following scenario. I want to use TensorFlow for ML and OpenCV for some image processing. I recently learned about Docker and found out that both TF and OpenCV have Docker images. I can easily pull an image and run, e.g., a TensorFlow script. Is there a way to somehow merge what both images offer, or run on top of them? I want to write a piece of code that uses both OpenCV and TensorFlow. Is there a way to achieve this?
Or, more generically: Docker image A has the Python package AA preinstalled. Docker image B has the Python package BB. How can I write a script that uses functions from both AA and BB?
Really simple: build your own Docker image with both TF and OpenCV. Example Dockerfile (based on janza/docker-python3-opencv):
FROM python:3.7
LABEL maintainer="John Doe"
RUN apt-get update && \
apt-get install -y \
build-essential \
cmake \
git \
wget \
unzip \
yasm \
pkg-config \
libswscale-dev \
libtbb2 \
libtbb-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libavformat-dev \
libpq-dev && \
pip install numpy && \
pip install tensorflow
WORKDIR /
ENV OPENCV_VERSION="3.4.2"
RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip \
&& unzip ${OPENCV_VERSION}.zip \
&& mkdir /opencv-${OPENCV_VERSION}/cmake_binary \
&& cd /opencv-${OPENCV_VERSION}/cmake_binary \
&& cmake -DBUILD_TIFF=ON \
-DBUILD_opencv_java=OFF \
-DWITH_CUDA=OFF \
-DWITH_OPENGL=ON \
-DWITH_OPENCL=ON \
-DWITH_IPP=ON \
-DWITH_TBB=ON \
-DWITH_EIGEN=ON \
-DWITH_V4L=ON \
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
-DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=$(python3.7 -c "import sys; print(sys.prefix)") \
-DPYTHON_EXECUTABLE=$(which python3.7) \
-DPYTHON_INCLUDE_DIR=$(python3.7 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-DPYTHON_PACKAGES_PATH=$(python3.7 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
.. \
&& make install \
&& rm /${OPENCV_VERSION}.zip \
&& rm -r /opencv-${OPENCV_VERSION}
Of course, I don't know your exact requirements for this project, and there is some chance this Dockerfile won't work for you; just adjust it to your needs. But I recommend building from the ground up (basing it only on an existing image of some Linux distribution). Then you have full control over what is installed and in which versions, without the redundant stuff often found in third-party images (I'm not saying they are bad, but for most use cases many parts are redundant).
There is also a ready-made combined Docker image on the official hub:
https://hub.docker.com/r/fbcotter/docker-tensorflow-opencv/
If you really want to keep them separate, I guess you could link running containers of those images. Containers for a linked service are reachable at a hostname identical to the alias, or the service name if no alias was specified. But you would have to implement some kind of logic to use a package from another container (probably possible, but difficult and complex).
Docker Networking

Install dependencies of PHP extensions

I've started learning Docker and now I'm building my own container with PHP7 and Apache.
I have to enable some PHP extensions, but I would like to know how you find out which packages (dependencies) need to be installed before installing an extension.
This is my Dockerfile at the moment:
FROM php:7.0-apache
RUN apt-get update && apt-get install -y libpng-dev
RUN docker-php-ext-install gd
In this case, to enable the gd extension, I googled the error returned during the build step and found that it requires the libpng-dev package, but it's annoying to repeat these steps for every single extension I want to install.
How do you manage this kind of problem?
The process is indeed annoying and very much something that could be done by a computer. Luckily, someone wrote a script that does exactly that: docker-php-extension-installer.
Your example can then be written as:
FROM php:7.0-apache
#get the script
ADD https://raw.githubusercontent.com/mlocati/docker-php-extension-installer/master/install-php-extensions /usr/local/bin/
#install the script
RUN chmod uga+x /usr/local/bin/install-php-extensions && sync
#run the script
RUN install-php-extensions gd
Here is what I do: install PHP, some PHP extensions, and tools; things that I usually need...
# Add the "PHP 7" ppa
RUN add-apt-repository -y \
ppa:ondrej/php
#Install PHP-CLI 7, some PHP extentions and some useful Tools with apt
RUN apt-get update && apt-get install -y --force-yes \
php7.0-cli \
php7.0-common \
php7.0-curl \
php7.0-json \
php7.0-xml \
php7.0-mbstring \
php7.0-mcrypt \
php7.0-mysql \
php7.0-pgsql \
php7.0-sqlite \
php7.0-sqlite3 \
php7.0-zip \
php7.0-memcached \
php7.0-gd \
php7.0-fpm \
php7.0-xdebug \
php7.1-bcmath \
php7.1-intl \
php7.0-dev \
libcurl4-openssl-dev \
libedit-dev \
libssl-dev \
libxml2-dev \
xz-utils \
sqlite3 \
libsqlite3-dev \
git \
curl \
vim \
nano \
net-tools \
pkg-config \
iputils-ping
# disable loading the xdebug extension (only load it when running phpunit)
RUN sed -i 's/^/;/g' /etc/php/7.0/cli/conf.d/20-xdebug.ini
Creating your own Dockerfiles involves trial and error, or building on and tweaking the work of others.
If you haven't already found this, take a look: https://hub.docker.com/r/chialab/php/
This image appears to have extensions added on top of the official base image. If you don't need all of the extensions in this image, you could look at the source of this image and tweak it to your liking.