I've started learning Docker and now I'm building my own container with PHP7 and Apache.
I have to enable some PHP extensions, but I would like to know how to find out which packages (dependencies) need to be installed before installing an extension.
This is my Dockerfile at the moment:
FROM php:7.0-apache
RUN apt-get update && apt-get install -y libpng-dev
RUN docker-php-ext-install gd
In this case, to enable the gd extension, I googled the error returned during the build step and found that it requires the libpng-dev package, but it's tedious to repeat these steps for every single extension I want to install.
How do you manage this kind of problem?
The process is indeed annoying and very much something that could be done by a computer. Luckily, someone wrote a script to do exactly that: docker-php-extension-installer.
Your example can then be written as:
FROM php:7.0-apache
# get the script
ADD https://raw.githubusercontent.com/mlocati/docker-php-extension-installer/master/install-php-extensions /usr/local/bin/
# make the script executable
RUN chmod uga+x /usr/local/bin/install-php-extensions && sync
# run the script
RUN install-php-extensions gd
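The script can also install several extensions in one call, which keeps the Dockerfile short. A minimal sketch (the extra extension names are just examples):
RUN install-php-extensions gd zip intl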
Here is what I do: install PHP, some PHP extensions, and the tools that I usually need.
# Add the "PHP 7" ppa
RUN add-apt-repository -y \
ppa:ondrej/php
# Install PHP-CLI 7, some PHP extensions and some useful tools with apt
RUN apt-get update && apt-get install -y --force-yes \
php7.0-cli \
php7.0-common \
php7.0-curl \
php7.0-json \
php7.0-xml \
php7.0-mbstring \
php7.0-mcrypt \
php7.0-mysql \
php7.0-pgsql \
php7.0-sqlite \
php7.0-sqlite3 \
php7.0-zip \
php7.0-memcached \
php7.0-gd \
php7.0-fpm \
php7.0-xdebug \
php7.0-bcmath \
php7.0-intl \
php7.0-dev \
libcurl4-openssl-dev \
libedit-dev \
libssl-dev \
libxml2-dev \
xz-utils \
sqlite3 \
libsqlite3-dev \
git \
curl \
vim \
nano \
net-tools \
pkg-config \
iputils-ping
# disable loading the xdebug extension by default (only load it when running phpunit)
RUN sed -i 's/^/;/g' /etc/php/7.0/cli/conf.d/20-xdebug.ini
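With the extension commented out globally, xdebug can still be loaded on demand for a single run. A minimal sketch (assuming a Composer-installed phpunit and that xdebug.so sits in PHP's default extension directory):
php -d zend_extension=xdebug.so vendor/bin/phpunit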
Creating your own Dockerfiles involves trial and error - or building on and tweaking the work of others.
If you haven't already found this, take a look: https://hub.docker.com/r/chialab/php/
This image appears to have extensions added on top of the official base image. If you don't need all of the extensions in this image, you could look at the source of this image and tweak it to your liking.
Related
I'm using Docker in a development environment. I have two PCs, both with the same OS (Kubuntu 20.04) and the same Docker version.
On one, the Dockerfile builds without errors; on the other, it fails with:
$ docker build -t letsjump/mydockername -f docker-data/webserver73/Dockerfile .
#...lot of compile output...
configure: error: unrecognized options: --with-freetype, --with-jpeg, --with-freetype, --with-webp
The command '/bin/sh -c curl -sL https://deb.nodesource.com/setup_12.x | bash - && apt-key update && DEBIAN_FRONTEND=noninteractive apt-get -y -o Dpkg::Options::="--force-confdef" install $CUSTOM_BUILD_DEPS $PHPIZE_DEPS sendmail git mariadb-client openssh-client nano netcat linkchecker nodejs build-essential libzip-dev libz-dev wget unixodbc odbcinst unixodbc-dev gnupg libwebp-dev libonig-dev --no-install-recommends && npm -g install npm@latest && docker-php-ext-configure gd --with-freetype --with-jpeg --with-freetype --with-webp && docker-php-ext-configure bcmath && docker-php-ext-configure calendar && docker-php-ext-configure pdo_odbc --with-pdo-odbc=unixODBC,/usr && docker-php-ext-install gd intl pdo_mysql mysqli mbstring opcache zip bcmath calendar soap && pecl install memcached-3.1.5 && echo extension=memcached.so >> /usr/local/etc/php/conf.d/memcached.ini && printf "\n" | pecl -d preferred_state=beta install xdebug' returned a non-zero code: 1
Both hosts run the same OS (Kubuntu 20.04 desktop) and almost the same Docker version.
On this host, the Dockerfile builds successfully:
$ docker --version
Docker version 19.03.5, build 633a0ea838
On this other one, it does not:
$ docker --version
Docker version 19.03.12, build 48a66213fe
This is the Dockerfile:
# Dockerfile (c) letsjump 2018
FROM php:7.3-apache
MAINTAINER letsjump <letsjump@xxx>
WORKDIR /
COPY ./docker-data/webserver73/ /
RUN chown -R "$APACHE_RUN_USER:$APACHE_RUN_GROUP" "/usr/local/bin";
RUN chmod -R +xr "/usr/local/bin/";
EXPOSE 80
EXPOSE 443
RUN useradd -ms /dev/null dbmaker
RUN chown -R dbmaker:dbmaker /home/dbmaker
# define build dependency lists
# inherited from PHP base image:
# PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev libpcre3-dev make pkg-config re2c
ENV CUSTOM_BUILD_DEPS \
unzip \
libmemcached-dev \
libicu-dev \
libfreetype6-dev \
libjpeg-dev \
libjpeg62-turbo-dev \
libxml2-dev \
zlib1g-dev \
libpng-dev
# list of other packages which could be deinstalled at the end
ENV CUSTOM_REMOVE_LIST cpp \
cpp-4.9 \
g++-4.9 \
gcc-4.9 \
libgcc-4.9-dev \
libhashkit-dev \
libsasl2-dev \
libstdc++-4.9-dev
RUN apt-get update && apt-get install -my gnupg
# Install system packages for PHP extensions recommended for Yii 2.0 Framework
# Install PHP extensions required for Yii 2.0 Framework
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash - && \
apt-key update && \
DEBIAN_FRONTEND=noninteractive apt-get -y -o Dpkg::Options::="--force-confdef" install \
$CUSTOM_BUILD_DEPS \
$PHPIZE_DEPS \
sendmail \
git \
mariadb-client \
openssh-client \
nano \
netcat \
linkchecker \
nodejs \
build-essential \
libzip-dev \
libz-dev \
wget \
unixodbc \
odbcinst \
unixodbc-dev \
gnupg \
libwebp-dev \
libonig-dev \
--no-install-recommends && \
npm -g install npm@latest && \
docker-php-ext-configure gd --with-freetype --with-jpeg --with-freetype --with-webp && \
docker-php-ext-configure bcmath && \
docker-php-ext-configure calendar && \
docker-php-ext-configure pdo_odbc --with-pdo-odbc=unixODBC,/usr && \
docker-php-ext-install gd \
intl \
pdo_mysql \
mysqli \
mbstring \
opcache \
zip \
bcmath \
calendar \
soap && \
# printf "\n" | pecl -d preferred_state=beta install xdebug
pecl install memcached-3.1.5 && \
echo extension=memcached.so >> /usr/local/etc/php/conf.d/memcached.ini && \
printf "\n" | pecl -d preferred_state=beta install xdebug
# apt-get remove -y $PHPIZE_DEPS $CUSTOM_BUILD_DEPS $CUSTOM_REMOVE_LIST && \
# dpkg --purge $(dpkg -l | awk '/^rc/ { print $2 }') && \
# apt-get clean && \
# rm -rf /usr/src/php* && \
# rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
#set -x \
# && cd /usr/src/php/ext/odbc \
# && phpize \
# && sed -ri 's#^ *test +"\$PHP_.*" *= *"no" *&& *PHP_.*=yes *$##&#g' configure \
# odbc
# docker-php-ext-configure odbc --with-unixODBC=/usr --with-dbmaker=/home/dbmaker/5.2 && \
# docker-php-ext-configure odbc --with-unixODBC=/usr --with-dbmaker=/home/dbmaker/5.2 --with-adabas=no && \
# Install less-compiler
RUN npm install -g \
less \
lesshint \
uglify-js \
uglifycss
ENV PHP_USER_ID=33 \
PHP_ENABLE_XDEBUG=1 \
VERSION_COMPOSER_ASSET_PLUGIN=^1.4.6 \
VERSION_PRESTISSIMO_PLUGIN=^0.3.10 \
PATH=/var/www/:/root/.composer/vendor/bin:$PATH \
TERM=linux \
COMPOSER_ALLOW_SUPERUSER=1 \
GITHUB_API_TOKEN=${GITHUB_API_TOKEN}
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- \
--install-dir=/usr/local/bin \
--filename=composer --version=1.10.16 && \
composer global require --optimize-autoloader \
"fxp/composer-asset-plugin:${VERSION_COMPOSER_ASSET_PLUGIN}" \
"hirak/prestissimo:${VERSION_PRESTISSIMO_PLUGIN}" && \
composer global dumpautoload --optimize && \
composer clear-cache
RUN chown -R 1000:33 /var/www
# Install Yii framework bash autocompletion
RUN curl -L https://raw.githubusercontent.com/yiisoft/yii2/master/contrib/completion/bash/yii \
-o /etc/bash_completion.d/yii
# enabling specific apache2 modules
RUN a2enmod rewrite && \
service apache2 restart
WORKDIR /var/www
Yes, the Dockerfile is probably a mess, with a lot of additions made on the fly. But why does it build on one host and not on the other?
UPDATE
After reading Aankhen's reasonable answer, I emptied the docker-data/webserver73 folder, which in any case did not contain any relevant code except one Apache virtual host and one xdebug.ini configuration file in their respective folders.
The build failed again with the same error as above.
I also want to point out that the webserver73 folder contains exactly the same files as the one on the other host, with the same owner and permissions.
I would expect failures to happen sporadically and unpredictably with this Dockerfile. Why? Because you haven't pinned the versions of most of the tools you're installing. You implicitly install the latest versions of a bunch of GNU/Linux libraries, then explicitly install the latest version of npm, and you follow this up by using npm to install the latest versions of a bunch of JavaScript modules. I can see that you pin the versions of some libs, but to guarantee repeatability you have to pin everything.
Also, as @Aankhen said in the comments, you have a COPY command that copies files from the local filesystem, which could contain anything and, again, is unlikely to result in a portable, repeatable image.
The solution to this is a) pin everything, or b) use a registry to share an image that has been built once and rely on a pinned version of that. Generally, banks and large corporations require the pinning solution, and less regulated industries often go with the registry solution.
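As an illustration of option a), pinning can go all the way down to the base image digest and individual package versions. A minimal sketch; the digest and version numbers below are placeholders, not tested values:
# pin the base image by digest, not just by tag (placeholder digest)
FROM php:7.3-apache-stretch@sha256:<digest>
# pin apt packages to explicit versions (hypothetical version number)
RUN apt-get update && \
    apt-get install -y --no-install-recommends libpng-dev=1.6.28-1 && \
    rm -rf /var/lib/apt/lists/*
# pin npm itself instead of installing npm@latest
RUN npm -g install npm@6.14.8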
To answer my own question:
As Software-engineer said in his answer, pin, pin, pin.
In this case it was the PHP maintainers' (questionable) choice to change the Debian release within a minor version of the Docker image (php:7.3-apache): they switched from stretch to buster.
So the freshly pulled php:7.3-apache image had a completely different OS, with different dependent libraries and behaviors.
So, as Software-engineer said, the right and fastest solution to this problem is to pin the PHP image to a tag tied to the distribution: php:7.3-apache-stretch.
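In Dockerfile terms this is a one-line change from the floating tag to the distribution-specific one:
# before: FROM php:7.3-apache
FROM php:7.3-apache-stretch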
The slower and less stable solution, in case you want to keep using the unpinned version, is to adapt the GD configure options to the new release:
...
docker-php-ext-configure gd --with-gd --with-webp-dir --with-jpeg-dir \
--with-png-dir --with-zlib-dir --with-freetype-dir && \
...
taking care to install the necessary libraries first: libfreetype6-dev, libjpeg-dev and libjpeg62-turbo-dev.
But this will require a manual pull on any host that does not have a recent build of the same image.
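On such a host, refreshing the cached base image amounts to:
docker pull php:7.3-apache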
We developed a Windows-based application and are trying to convert it to run as a Docker container in a Linux-based environment.
Unfortunately, one of the 3rd-party libraries can't be ported to Linux.
We built a Docker image of Ubuntu 16.04 + Wine 4.0 + winetricks on which our application runs, but the three components together (Ubuntu, Wine and winetricks) weigh more than 3 GB.
Below is the part of the Dockerfile we use to build the image.
Our application is 64-bit and combines Python and C++ code.
How can we reduce the image size?
Is there another way to run a Windows-based application as a Docker container in a Linux environment?
FROM ubuntu:16.04
# recommended to add 32-bit arch for wine
RUN dpkg --add-architecture i386 \
# install things to help install wine
&& apt-get update \
&& apt-get install -y --allow-unauthenticated wget software-properties-common software-properties-common debconf-utils python-software-properties apt-transport-https cabextract telnet xvfb unzip build-essential \
# register repo and install winehq
&& wget -nc https://dl.winehq.org/wine-builds/Release.key \
&& apt-key add Release.key \
&& wget -nc https://dl.winehq.org/wine-builds/winehq.key \
&& apt-key add winehq.key \
&& apt-add-repository https://dl.winehq.org/wine-builds/ubuntu/ \
&& apt-get update \
&& apt-get install -y xvfb \
&& apt-get install --install-recommends -y --allow-unauthenticated winehq-stable
# setup vars for wine
ENV DISPLAY=":0.0"
ENV WINEARCH="win64"
ENV WINEPREFIX="/root/.wine64"
ENV WINESYSTEM32="/root/.wine64/drive_c/windows/system32"
ENV WINEDLLOVERRIDES="mscoree,mshtml="
ENV WINEDEBUG=-all
COPY scripts /root/scripts
# pull down winetricks, and install requirements
# vcrun2015 is the Visual C++ 2015 Redistributable
RUN set -e \
&& mkdir -p $WINEPREFIX \
&& cd $WINEPREFIX \
&& wget https://raw.githubusercontent.com/Winetricks/winetricks/20190615/src/winetricks \
&& chmod +x winetricks \
&& xvfb-run wine wineboot --init \
&& xvfb-run wineserver -w \
&& xvfb-run sh ./winetricks -q d3dx9 corefonts vcrun2015
RUN set -x \
&& pythonVersions='python3.7' \
&& apt-get update \
&& apt-get install -y --allow-unauthenticated --no-install-recommends software-properties-common \
&& apt-add-repository -y ppa:deadsnakes/ppa \
&& apt-get update \
&& apt-get install -y --allow-unauthenticated --no-install-recommends $pythonVersions \
&& rm -rf /var/lib/apt/lists/* \
...
You install many unneeded packages; you should read carefully
https://www.dajobe.org/blog/2015/04/18/making-debian-docker-images-smaller/
which explains why.
You should remove xvfb (I guess you need xvfb during the installation and configuration of Wine, but not afterwards) and all the "recommended" packages, as sketched below.
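A rough sketch of the Wine installation step with those two changes applied (untested; it assumes xvfb is only needed while wineboot and winetricks run):
RUN apt-get update \
    && apt-get install -y --no-install-recommends winehq-stable xvfb \
    && xvfb-run wine wineboot --init \
    && xvfb-run sh ./winetricks -q d3dx9 corefonts vcrun2015 \
    && apt-get purge -y xvfb \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*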
Look also at
https://github.com/wagoodman/dive
which is an excellent tool if you need to see how efficient your Docker image is.
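Basic usage is a single command against the image tag you built (the tag here is illustrative):
dive my-wine-app:latest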
Use also
https://github.com/jwilder/docker-squash
it can save some space.
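docker-squash operates on a saved image tarball, so a typical pipeline looks roughly like this (check the project README for the exact flags; the tag is illustrative):
docker save my-wine-app | docker-squash -t my-wine-app:squashed | docker load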
Good hunting!
In order to build up a headless simulation cluster, we're working on containerizing our existing tools. Right now, the accessible server does not have any NVIDIA GPUs.
One problem we encounter is that a specific application uses OpenGL for rendering. With a physical GPU, the simulation tool runs without any problem. To get around the GPU dependency, we're using Mesa 3D OpenGL software rendering (Gallium) with the LLVMpipe and OpenSWR drivers. For reference, we had a look at https://github.com/jamesbrink/docker-opengl.
The current Dockerfile, which builds mesa 19.0.2 (using gcc-8) from source, looks like this:
# OPENGL SUPPORT ------------------------------------------------------------------------------
# start with plain ubuntu as base image for testing
FROM ubuntu AS builder
# install some needed packages and set gcc-8 as default compiler
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
llvm-7 \
llvm-dev \
autoconf \
automake \
bison \
flex \
gettext \
libtool \
python-dev \
git \
pkgconf \
python-mako \
zlib1g-dev \
x11proto-gl-dev \
libxext-dev \
xcb \
libx11-xcb-dev \
libxcb-dri2-0-dev \
libxcb-xfixes0-dev \
libdrm-dev \
g++ \
make \
xvfb \
x11vnc \
g++-8 && \
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 800 --slave /usr/bin/g++ g++ /usr/bin/g++-8
# get mesa (using 19.0.2, as later versions don't use the configure script)
WORKDIR /mesa
RUN git clone https://gitlab.freedesktop.org/mesa/mesa.git
WORKDIR /mesa/mesa
RUN git checkout mesa-19.0.2
#RUN git checkout mesa-18.2.2
# build and install mesa
RUN libtoolize && \
autoreconf --install && \
./configure \
--enable-glx=gallium-xlib \
--with-gallium-drivers=swrast,swr \
--disable-dri \
--disable-gbm \
--disable-egl \
--enable-gallium-osmesa \
--enable-autotools \
--enable-llvm \
--with-llvm-prefix=/usr/lib/llvm-7/ \
--prefix=/usr/local && \
make -j 4 && \
make install && \
rm -rf /mesa
# SIM -----------------------------------------------------------------------------------------
FROM ubuntu
COPY --from=builder /usr/local /usr/local
# copy all simulation binaries to the image
COPY .....
# update ubuntu and install all sim dependencies
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
xterm \
freeglut3 \
openssh-server \
synaptic \
nfs-common \
mesa-utils \
xfonts-75dpi \
libusb-0.1-4 \
python \
libglu1-mesa \
libqtgui4 \
gedit \
xvfb \
x11vnc \
llvm-7-dev \
expat \
nano && \
dpkg -i /vtdDeb/libpng12-0_1.2.54-1ubuntu1.1_amd64.deb
# set the environment variables (display -> 99 and LIBGL_ALWAYS_SOFTWARE)
ENV DISPLAY=":99" \
GALLIUM_DRIVER="llvmpipe" \
LIBGL_ALWAYS_SOFTWARE="1" \
LP_DEBUG="" \
LP_NO_RAST="false" \
LP_NUM_THREADS="" \
LP_PERF="" \
MESA_VERSION="19.0.2" \
XVFB_WHD="1920x1080x24"
If we now start the container and initialize the Xvfb session, all GLX examples like glxgears work. The output of glxinfo | grep '^direct rendering:' is also yes, so OpenGL is working.
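For reference, the checks inside the container look roughly like this (a sketch; the resolution matches the XVFB_WHD value from the Dockerfile):
Xvfb :99 -screen 0 1920x1080x24 &
export DISPLAY=:99
glxinfo | grep '^direct rendering:'
glxgears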
However, if we start our simulation binary (which is provided by some company and cannot be changed right now), the following error messages appear:
uniform block ub_lights has no binding.
uniform block ub_lights has no binding.
FRAGMENT glCompileShader "../data/Shaders/roadRendererFrag.glsl" FAILED
FRAGMENT Shader "../data/Shaders/roadRendererFrag.glsl" infolog:
0:277(48): error: unsized array index must be constant
0:344(48): error: unsized array index must be constant
glLinkProgram "RoadRenderingBase_Program" FAILED
Program "RoadRenderingBase_Program" infolog:
error: linking with uncompiled/unspecialized shader
Any idea how to fix that? To us, the error message is rather opaque.
Has anyone encountered a similar problem?
I'm trying to install OpenVINO on my Raspberry Pi using Docker.
I have this Dockerfile:
FROM raspbian/stretch
ARG INSTALL_DIR="/opt/intel/inference_engine_vpu_arm"
RUN apt-get -y update \
&& DEBIAN_FRONTEND=noninteractive apt-get -y upgrade && apt-get autoremove && \
apt-get install -y \
apt-transport-https \
build-essential \
cmake \
cpio \
lsb-release \
pciutils \
python3.5 \
python3.5-dev \
python3-pip \
python3-setuptools \
ffmpeg \
libjpeg-dev \
libtiff5-dev \
libjasper-dev \
libpng12-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libxvidcore-dev \
libx264-dev \
libgtk2.0-dev \
libgtk-3-dev \
libatlas-base-dev \
gfortran \
libgstreamer1.0-0 \
libgstreamer-plugins-base1.0-0
RUN usermod -a -G users "$(whoami)"
COPY inference_engine_vpu_arm $INSTALL_DIR
RUN sed -i "s|<INSTALLDIR>|$INSTALL_DIR|" $INSTALL_DIR/bin/setupvars.sh && \
echo "source $INSTALL_DIR/bin/setupvars.sh" >> $HOME/.bashrc
RUN ["/bin/bash", "-c", "source $INSTALL_DIR/bin/setupvars.sh && /bin/bash $INSTALL_DIR/install_dependencies/install_NCS_udev_rules.sh"]
RUN pip3 install numpy
RUN apt autoremove -y && \
rm -rf /var/lib/apt/lists/*
CMD ["/bin/bash"]
But I get this error when I try to build:
E: Unable to correct problems, you have held broken packages.
The command '/bin/sh -c apt-get -y update..... returned a non-zero code: 100
Do you have any idea?
Thanks
After some googling, it seems the error happens because apt is not able to connect to the configured repositories. This is likely because the base image has not been updated for a while, as I can see on Docker Hub.
If you're not familiar with the available repositories, you can generate them easily with online tools such as: https://debgen.simplylinux.ch/index.php?generate
You can put the resulting sources.list into the Docker image with a simple COPY command like
COPY sources.list /etc/apt/sources.list
where the first argument refers to a local file and the second to a path inside the Docker image.
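For the raspbian/stretch base image used here, the copied sources.list could look something like this (an illustrative example; verify the mirror and components for your setup):
deb http://raspbian.raspberrypi.org/raspbian/ stretch main contrib non-free rpi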
I have the following scenario. I want to use TensorFlow for ML and OpenCV for some image processing. I recently learned about Docker and found out that both TF and OCV are dockerized. I can easily pull an image and run, e.g., a TensorFlow script. Is there a way to somehow merge what both images offer? Or to run one on top of the other? I want to write a piece of code that uses both OpenCV and TensorFlow. Is there a way to achieve this?
Or, in a more generic sense: Docker image A has the Python package AA preinstalled. Docker image B has the Python package BB. How can I write a script that uses functions from both AA and BB?
Really simple: build your own Docker image with both TF and OpenCV. Example Dockerfile (based on janza/docker-python3-opencv):
FROM python:3.7
LABEL maintainer="John Doe"
RUN apt-get update && \
apt-get install -y \
build-essential \
cmake \
git \
wget \
unzip \
yasm \
pkg-config \
libswscale-dev \
libtbb2 \
libtbb-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libavformat-dev \
libpq-dev && \
pip install numpy && \
pip install tensorflow
WORKDIR /
ENV OPENCV_VERSION="3.4.2"
RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip \
&& unzip ${OPENCV_VERSION}.zip \
&& mkdir /opencv-${OPENCV_VERSION}/cmake_binary \
&& cd /opencv-${OPENCV_VERSION}/cmake_binary \
&& cmake -DBUILD_TIFF=ON \
-DBUILD_opencv_java=OFF \
-DWITH_CUDA=OFF \
-DWITH_OPENGL=ON \
-DWITH_OPENCL=ON \
-DWITH_IPP=ON \
-DWITH_TBB=ON \
-DWITH_EIGEN=ON \
-DWITH_V4L=ON \
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
-DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=$(python3.7 -c "import sys; print(sys.prefix)") \
-DPYTHON_EXECUTABLE=$(which python3.7) \
-DPYTHON_INCLUDE_DIR=$(python3.7 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-DPYTHON_PACKAGES_PATH=$(python3.7 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
.. \
&& make install \
&& rm /${OPENCV_VERSION}.zip \
&& rm -r /opencv-${OPENCV_VERSION}
Of course, I don't know your exact requirements for this project, and there is some chance that this Dockerfile won't work for you. Just adjust it to your needs. But I recommend building from the ground up (basing the image on an existing image of some Linux distribution). Then you have full control over what is installed and in which versions, without the redundant stuff often found in 3rd-party images (I'm not saying they are bad, but for most use cases large parts of them are redundant).
There is also an already combined Docker image in the official hub:
https://hub.docker.com/r/fbcotter/docker-tensorflow-opencv/
If you really want to keep them separate, I guess you could link running containers of those images. Containers for a linked service are reachable at a hostname identical to the alias, or the service name if no alias was specified. But you would have to implement some kind of logic to use a package from another container (probably possible, but difficult and complex).
Docker Networking
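If you do go down that road, a user-defined bridge network gives each container a hostname the other can resolve; a minimal sketch (image and container names are illustrative):
docker network create ml-net
docker run -d --network ml-net --name tf tensorflow/tensorflow
docker run -d --network ml-net --name ocv my-opencv-image
# inside either container, the other is reachable as "tf" or "ocv"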