Using two docker images at once - docker

I have the following scenario: I want to use TensorFlow for ML and OpenCV for some image processing. I recently learned about Docker and found out that both TF and OpenCV are available as Docker images. I can easily pull an image and run e.g. a TensorFlow script. Is there a way to somehow merge what both images offer, or run one on top of the other? I want to write a piece of code that uses both OpenCV and TensorFlow. Is there a way to achieve this?
Or, in a more generic sense: Docker image A has the Python package AA preinstalled, and Docker image B has Python package BB. How can I write a script that uses functions from both AA and BB?

Really simple: build your own Docker image with both TF and OpenCV. Example Dockerfile (based on janza/docker-python3-opencv):
FROM python:3.7
LABEL maintainer="John Doe"
RUN apt-get update && \
apt-get install -y \
build-essential \
cmake \
git \
wget \
unzip \
yasm \
pkg-config \
libswscale-dev \
libtbb2 \
libtbb-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libavformat-dev \
libpq-dev && \
pip install numpy && \
pip install tensorflow
WORKDIR /
ENV OPENCV_VERSION="3.4.2"
RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip \
&& unzip ${OPENCV_VERSION}.zip \
&& mkdir /opencv-${OPENCV_VERSION}/cmake_binary \
&& cd /opencv-${OPENCV_VERSION}/cmake_binary \
&& cmake -DBUILD_TIFF=ON \
-DBUILD_opencv_java=OFF \
-DWITH_CUDA=OFF \
-DWITH_OPENGL=ON \
-DWITH_OPENCL=ON \
-DWITH_IPP=ON \
-DWITH_TBB=ON \
-DWITH_EIGEN=ON \
-DWITH_V4L=ON \
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
-DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=$(python3.7 -c "import sys; print(sys.prefix)") \
-DPYTHON_EXECUTABLE=$(which python3.7) \
-DPYTHON_INCLUDE_DIR=$(python3.7 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-DPYTHON_PACKAGES_PATH=$(python3.7 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
.. \
&& make install \
&& rm /${OPENCV_VERSION}.zip \
&& rm -r /opencv-${OPENCV_VERSION}
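Once you have the Dockerfile, build the image and sanity-check that both libraries import in the same interpreter. A minimal sketch (the tag tf-opencv is just a name I made up):
docker build -t tf-opencv .
# both imports should succeed if the build worked
docker run --rm tf-opencv python3.7 -c "import cv2, tensorflow; print(cv2.__version__)"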
Of course, I don't know your exact requirements for this project, and there's some chance this Dockerfile won't work for you as-is; just adjust it to your needs. But I do recommend building from the ground up (basing it only on an existing image of some Linux distribution). That way you have full control over what is installed and in which versions, without the redundant extras often found in third-party images (I'm not saying they're bad, but for most use cases large parts of them are unnecessary).
There is also an already-combined Docker image on the official hub:
https://hub.docker.com/r/fbcotter/docker-tensorflow-opencv/
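If that image covers your needs, pulling it directly is the quickest route. Basic usage would look like:
docker pull fbcotter/docker-tensorflow-opencv
docker run --rm -it fbcotter/docker-tensorflow-opencv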
If you really want to keep them separate, I guess you could link running containers of those images. Containers for a linked service are reachable at a hostname identical to the alias, or the service name if no alias was specified. But you would have to implement some kind of logic to use a package from another container (probably possible, but difficult and complex).
Docker Networking
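As a rough sketch of that approach (the network name, container names, and second image are made up for illustration; you would still have to build the cross-container communication yourself, e.g. a small HTTP service):
docker network create ml-net
docker run -d --network ml-net --name tf-service tensorflow/tensorflow
docker run -d --network ml-net --name cv-service my-opencv-image
# inside either container, the other one is reachable by name, e.g. tf-service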

Related

Set target for OpenBLAS in Docker image

I'm creating a Docker image with OpenBLAS; here's an MWE:
FROM ubuntu:22.04
# gfortran
RUN apt-get -qq update && apt-get -qq -y install \
build-essential \
gfortran \
curl
# open blas
RUN curl -L https://github.com/xianyi/OpenBLAS/archive/v0.3.7.tar.gz -o v0.3.7.tar.gz \
&& tar -xvf v0.3.7.tar.gz \
&& cd OpenBLAS-0.3.7 \
&& make -j2 USE_THREAD=0 USE_LOCKING=1 DYNAMIC_ARCH=1 NO_AFFINITY=1 FC=gfortran \
&& make install
When I build it, I get:
#8 14.58 Makefile:139: *** OpenBLAS: Detecting CPU failed. Please set TARGET explicitly, e.g. make TARGET=your_cpu_target. Please read README for the detail.. Stop.
As far as I understand from this post, the idea behind the flags DYNAMIC_ARCH=1 NO_AFFINITY=1 was exactly to avoid optimizing for the local architecture. Am I missing something?
Thanks,
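For reference, the Makefile error itself suggests passing TARGET explicitly on the make command line. A hedged sketch of that workaround (TARGET=GENERIC is only an illustrative portable choice, not a verified fix for this build):
make -j2 TARGET=GENERIC USE_THREAD=0 USE_LOCKING=1 DYNAMIC_ARCH=1 NO_AFFINITY=1 FC=gfortran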

Docker OpenGL support without GPU, gl error: linking with uncompiled/unspecialized shader

In order to build up a headless simulation cluster, we're working on containerizing our existing tools. Right now, the server we have access to does not have any NVIDIA GPUs.
One problem we encounter is that a specific application uses OpenGL for rendering. With a physical GPU, the simulation tool runs without any problem. To get around the GPU dependency, we're using Mesa 3D OpenGL software rendering (Gallium) with the LLVMpipe and OpenSWR drivers. For reference, we had a look at https://github.com/jamesbrink/docker-opengl.
The current Dockerfile, which builds Mesa 19.0.2 (using gcc-8) from source, looks like this:
# OPENGL SUPPORT ------------------------------------------------------------------------------
# start with plain ubuntu as base image for testing
FROM ubuntu AS builder
# install some needed packages and set gcc-8 as default compiler
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
llvm-7 \
llvm-dev \
autoconf \
automake \
bison \
flex \
gettext \
libtool \
python-dev \
git \
pkgconf \
python-mako \
zlib1g-dev \
x11proto-gl-dev \
libxext-dev \
xcb \
libx11-xcb-dev \
libxcb-dri2-0-dev \
libxcb-xfixes0-dev \
libdrm-dev \
g++ \
make \
xvfb \
x11vnc \
g++-8 && \
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 800 --slave /usr/bin/g++ g++ /usr/bin/g++-8
# get mesa (using 19.0.2, as later versions don't use the configure script)
WORKDIR /mesa
RUN git clone https://gitlab.freedesktop.org/mesa/mesa.git
WORKDIR /mesa/mesa
RUN git checkout mesa-19.0.2
#RUN git checkout mesa-18.2.2
# build and install mesa
RUN libtoolize && \
autoreconf --install && \
./configure \
--enable-glx=gallium-xlib \
--with-gallium-drivers=swrast,swr \
--disable-dri \
--disable-gbm \
--disable-egl \
--enable-gallium-osmesa \
--enable-autotools \
--enable-llvm \
--with-llvm-prefix=/usr/lib/llvm-7/ \
--prefix=/usr/local && \
make -j 4 && \
make install && \
rm -rf /mesa
# SIM -----------------------------------------------------------------------------------------
FROM ubuntu
COPY --from=builder /usr/local /usr/local
# copy all simulation binaries to the image
COPY .....
# update ubuntu and install all sim dependencies
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
xterm \
freeglut3 \
openssh-server \
synaptic \
nfs-common \
mesa-utils \
xfonts-75dpi \
libusb-0.1-4 \
python \
libglu1-mesa \
libqtgui4 \
gedit \
xvfb \
x11vnc \
llvm-7-dev \
expat \
nano && \
dpkg -i /vtdDeb/libpng12-0_1.2.54-1ubuntu1.1_amd64.deb
# set the environment variables (display -> 99 and LIBGL_ALWAYS_SOFTWARE)
ENV DISPLAY=":99" \
GALLIUM_DRIVER="llvmpipe" \
LIBGL_ALWAYS_SOFTWARE="1" \
LP_DEBUG="" \
LP_NO_RAST="false" \
LP_NUM_THREADS="" \
LP_PERF="" \
MESA_VERSION="19.0.2" \
XVFB_WHD="1920x1080x24"
If we now start the container and initialize the Xvfb session, all GLX examples like glxgears work. The output of glxinfo | grep '^direct rendering:' is also yes, so OpenGL is working.
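For context, initializing that session boils down to starting a virtual framebuffer on the display configured in the Dockerfile. A minimal sketch using the DISPLAY and XVFB_WHD values from above:
Xvfb :99 -screen 0 1920x1080x24 &
glxinfo | grep '^direct rendering:'   # should report: direct rendering: Yes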
However, if we start our simulation binary (which is provided by some company and cannot be changed now), the following error messages appear:
uniform block ub_lights has no binding.
uniform block ub_lights has no binding.
FRAGMENT glCompileShader "../data/Shaders/roadRendererFrag.glsl" FAILED
FRAGMENT Shader "../data/Shaders/roadRendererFrag.glsl" infolog:
0:277(48): error: unsized array index must be constant
0:344(48): error: unsized array index must be constant
glLinkProgram "RoadRenderingBase_Program" FAILED
Program "RoadRenderingBase_Program" infolog:
error: linking with uncompiled/unspecialized shader
Any idea how to fix that? For us, the error message is rather opaque.
Has anyone encountered a similar problem?

Docker: How to use Volumes for desktop applications

Let's take an example. Here, I'm trying to read an image and write it to a temp folder using OpenCV. I want to put this desktop application in Docker and save the output using a Docker volume. From the volume, I want to save the output to my local machine.
For the problem statement, I assigned a volume to the container so that I can save the output. When I run the code it executes fine, but I don't understand how to save the result to my local machine.
This is the Dockerfile for the OpenCV example:
FROM python:3.7
RUN apt-get update \
&& apt-get install -y \
build-essential \
cmake \
git \
wget \
unzip \
yasm \
pkg-config \
libswscale-dev \
libtbb2 \
libtbb-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libavformat-dev \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
RUN pip install numpy
ENV OPENCV_VERSION="4.1.0"
RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip \
&& unzip ${OPENCV_VERSION}.zip \
&& mkdir /opencv-${OPENCV_VERSION}/cmake_binary \
&& cd /opencv-${OPENCV_VERSION}/cmake_binary \
&& cmake -DBUILD_TIFF=ON \
-DBUILD_opencv_java=OFF \
-DWITH_CUDA=OFF \
-DWITH_OPENGL=ON \
-DWITH_OPENCL=ON \
-DWITH_IPP=ON \
-DWITH_TBB=ON \
-DWITH_EIGEN=ON \
-DWITH_V4L=ON \
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
-DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=$(python3.7 -c "import sys; print(sys.prefix)") \
-DPYTHON_EXECUTABLE=$(which python3.7) \
-DPYTHON_INCLUDE_DIR=$(python3.7 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-DPYTHON_PACKAGES_PATH=$(python3.7 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
.. \
&& make install \
&& rm /${OPENCV_VERSION}.zip \
&& rm -r /opencv-${OPENCV_VERSION}
RUN ln -s \
/usr/local/python/cv2/python-3.7/cv2.cpython-37m-x86_64-linux-gnu.so \
/usr/local/lib/python3.7/site-packages/cv2.so
WORKDIR /opencv_example
COPY . .
I need some help understanding how Docker volumes are used for desktop applications, and what code is needed to save the volume's output to a local path.
How are you starting the container? Paste your command.
If you're using Windows, you need to enable shared drives in the Docker settings and then start your container as written below. If you're on macOS or Linux, you only need to run the command (you probably have other flags in there as well):
docker run -v <path-on-host>:<path-inside-container> <image-name>
For more details, check out this link.
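Applied to the OpenCV example above, a concrete sketch might look like this (the image name, host path, and script name are illustrative; the key point is that the script must write into the container path the volume is mounted on):
# build the image, then mount ./out from the host at /opencv_example/out in the container
docker build -t opencv-example .
docker run --rm -v "$(pwd)/out:/opencv_example/out" opencv-example python3.7 process_image.py
# anything process_image.py writes under /opencv_example/out (e.g. via cv2.imwrite)
# shows up in ./out on the host after the container exits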

Install dependencies of PHP extensions

I've started learning Docker, and now I'm building my own container with PHP 7 and Apache.
I have to enable some PHP extensions, but I would like to know how you figure out which packages (dependencies) need to be installed before installing an extension.
This is my Dockerfile at the moment:
FROM php:7.0-apache
RUN apt-get update && apt-get install -y libpng-dev
RUN docker-php-ext-install gd
In this case, to enable the gd extension, I googled the error returned at the build step and found that it requires the libpng-dev package. But it's annoying to repeat these steps for every single extension I want to install.
How do you manage this kind of problem?
The process is indeed annoying, and very much something that could be done by a computer. Luckily, someone wrote a script to do exactly that: docker-php-extension-installer.
Your example can then be written as:
FROM php:7.0-apache
#get the script
ADD https://raw.githubusercontent.com/mlocati/docker-php-extension-installer/master/install-php-extensions /usr/local/bin/
#install the script
RUN chmod uga+x /usr/local/bin/install-php-extensions && sync
#run the script
RUN install-php-extensions gd
Here is what I do: install PHP, some PHP extensions, and tools that I usually need...
# Add the "PHP 7" ppa
RUN add-apt-repository -y \
ppa:ondrej/php
# Install PHP-CLI 7, some PHP extensions, and some useful tools with apt
RUN apt-get update && apt-get install -y --force-yes \
php7.0-cli \
php7.0-common \
php7.0-curl \
php7.0-json \
php7.0-xml \
php7.0-mbstring \
php7.0-mcrypt \
php7.0-mysql \
php7.0-pgsql \
php7.0-sqlite \
php7.0-sqlite3 \
php7.0-zip \
php7.0-memcached \
php7.0-gd \
php7.0-fpm \
php7.0-xdebug \
php7.1-bcmath \
php7.1-intl \
php7.0-dev \
libcurl4-openssl-dev \
libedit-dev \
libssl-dev \
libxml2-dev \
xz-utils \
sqlite3 \
libsqlite3-dev \
git \
curl \
vim \
nano \
net-tools \
pkg-config \
iputils-ping
# don't load the xdebug extension by default (only load it for phpunit runs)
RUN sed -i 's/^/;/g' /etc/php/7.0/cli/conf.d/20-xdebug.ini
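That sed call prefixes every line of the xdebug ini with ';', commenting the extension out so PHP doesn't load it by default. To load it only for a test run, one hedged option (assuming xdebug.so sits in PHP's default extension directory; the phpunit invocation is illustrative) is to pass it on the command line:
php -d zend_extension=xdebug.so $(which phpunit)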
Creating your own Dockerfiles involves trial and error - or building on and tweaking the work of others.
If you haven't already found this, take a look: https://hub.docker.com/r/chialab/php/
This image appears to add extensions on top of the official base image. If you don't need all of them, you could look at its source and tweak it to your liking.

Anaconda 3 and OpenCV 3

I would like to build OpenCV 3 from scratch with Anaconda 3. I tried to find instructions online but couldn't find any. I would appreciate it if anyone could point me in the right direction.
Thanks
Let's assume you're on Linux, that you want to install OpenCV in /foo/opencv, and that Anaconda 3 is installed in /foo/anaconda3. This should do it:
mkdir -p /foo/opencv/src
cd /foo/opencv/src
wget https://github.com/opencv/opencv/archive/3.2.0.zip
unzip 3.2.0.zip
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX:PATH=/foo/opencv \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_opencv_python2=OFF \
-DBUILD_opencv_python3=ON \
-DPYTHON3_EXECUTABLE=/foo/anaconda3/bin/python \
-DPYTHON3_INCLUDE_DIR=/foo/anaconda3/include/python3.6m \
-DWITH_1394=OFF -DWITH_VTK=OFF -DWITH_CUDA=OFF -DWITH_OPENMP=ON \
-DWITH_OPENCL=OFF -DWITH_MATLAB=OFF -DBUILD_SHARED_LIBS=ON \
-DBUILD_PERF_TESTS=OFF -DBUILD_TESTS=OFF \
../opencv-3.2.0
make
make install
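To confirm the bindings are visible to Anaconda's interpreter, a quick hedged check (the site-packages path under the install prefix is an assumption and may differ on your setup):
/foo/anaconda3/bin/python -c "import cv2; print(cv2.__version__)"
# if the import fails, point PYTHONPATH at the installed bindings, e.g.:
# export PYTHONPATH=/foo/opencv/lib/python3.6/site-packages:$PYTHONPATH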
