I'm trying to build a LAMP stack in containers, and I have already built several of them: httpd 2.4.23, redis 3.0.7 and mysql 5.6.30, compiling each one myself from downloaded source archives. All of them are based on the debian image.
Now that I'm working on the php 5.6.20 container, its configure step complains that it does not know about apache and mysql.
Here is the Dockerfile for the php container:
FROM debian
RUN apt-get update
RUN apt-get install -y build-essential;
RUN apt-get install -y cmake;
RUN apt-get install -y libfreetype6-dev libjpeg-dev libpng12-dev libcurl4-openssl-dev libbz2-dev libxml2-dev libxslt-dev libgd2-xpm-dev php5-imap libz-dev
WORKDIR /usr/bin/
COPY php-5.6.20.tar.gz /usr/bin/
RUN gzip -d php-5.6.20.tar.gz
RUN tar -xvf php-5.6.20.tar
RUN ln -s php-5.6.20 php
WORKDIR /usr/bin/php/
RUN ./configure \
--prefix=/usr/bin/ \
--with-apxs2=/usr/bin/apache/bin/apxs \
--with-config-file-path=/usr/bin/php-5.6.20/ \
--enable-libgcc \
--with-mysqli=/usr/bin/mysql/mysql_config \
--with-zlib-dir=/usr \
--with-jpeg-dir=/usr \
--with-png-dir=/usr \
--with-gd \
--enable-gd-native-ttf \
--with-freetype-dir=/usr \
--enable-ftp \
--enable-xml \
--enable-zip \
--with-bz2 \
--enable-wddx \
--without-pear \
--enable-mbstring \
--with-curl
RUN make
RUN make install
I wonder if I should instead base it on FROM httpd:2.4.23. But then I would need to base the httpd image on the mysql one, and/or on the redis one... I don't really like that setup.
I have also installed Docker Compose but I wonder if it could be helpful in my situation.
UPDATE: Here is the fully working Dockerfile
FROM debian
RUN apt-get update
RUN apt-get install -y build-essential;
RUN apt-get install -y cmake;
RUN apt-get install -y openssl libssl-dev;
RUN apt-get install -y libpcre3 libpcre3-dev
WORKDIR /usr/bin/
COPY httpd-2.4.23.tar.gz /usr/bin/
RUN gzip -d httpd-2.4.23.tar.gz
RUN tar -xvf httpd-2.4.23.tar
RUN ln -s httpd-2.4.23 httpd
COPY apr-1.5.2.tar.gz /usr/bin/httpd/srclib/
COPY apr-util-1.5.4.tar.gz /usr/bin/httpd/srclib/
WORKDIR /usr/bin/httpd/srclib/
RUN gzip -d apr-1.5.2.tar.gz
RUN gzip -d apr-util-1.5.4.tar.gz
RUN tar -xvf apr-1.5.2.tar
RUN tar -xvf apr-util-1.5.4.tar
RUN ln -s apr-1.5.2 apr;
RUN ln -s apr-util-1.5.4 apr-util
WORKDIR /usr/bin/httpd/
RUN ./configure \
--prefix=/usr/bin/apache \
--enable-rewrite \
--enable-deflate \
--enable-ssl
RUN make
RUN make install
RUN apt-get update
RUN apt-get install -y libncurses-dev
COPY mysql-5.6.30.tar.gz /usr/bin/
WORKDIR /usr/bin/
RUN gzip -d mysql-5.6.30.tar.gz
RUN tar -xvf mysql-5.6.30.tar
RUN ln -s mysql-5.6.30 mysql
WORKDIR /usr/bin/mysql/
RUN mkdir install; mkdir install/data; mkdir install/var; mkdir install/etc; mkdir install/tmp
RUN cd /usr/bin/mysql/; cmake \
-DCMAKE_INSTALL_PREFIX=/usr/bin/mysql/install \
-DWITH_INNOBASE_STORAGE_ENGINE=1 \
-DMYSQL_DATADIR=/usr/bin/mysql/install/data \
-DDOWNLOAD_BOOST=1 \
-DWITH_BOOST=/usr/bin/mysql/install/boost \
-DMYSQL_UNIX_ADDR=/usr/bin/mysql/install/tmp/mysql.sock
RUN make
RUN make install
RUN apt-get update
RUN apt-get install -y libfreetype6-dev libjpeg-dev libpng12-dev libcurl4-openssl-dev libbz2-dev libxml2-dev libxslt-dev libgd2-xpm-dev php5-imap libz-dev
WORKDIR /usr/bin/
COPY php-5.6.20.tar.gz /usr/bin/
RUN gzip -d php-5.6.20.tar.gz
RUN tar -xvf php-5.6.20.tar
RUN ln -s php-5.6.20 php
WORKDIR /usr/bin/php/
RUN ./configure \
--prefix=/usr/bin/php \
--with-apxs2=/usr/bin/apache/bin/apxs \
--with-config-file-path=/usr/bin/php-5.6.20/ \
--enable-libgcc \
--with-mysqli=/usr/bin/mysql/install/bin/mysql_config \
--with-zlib-dir=/usr \
--with-jpeg-dir=/usr \
--with-png-dir=/usr \
--with-gd \
--enable-gd-native-ttf \
--with-freetype-dir=/usr \
--enable-ftp \
--enable-xml \
--enable-zip \
--with-bz2 \
--enable-wddx \
--without-pear \
--enable-mbstring \
--with-openssl \
--with-curl
RUN make
RUN make install
ENTRYPOINT ["/usr/bin/apache/bin/apachectl", "start", "-D FOREGROUND"]
EXPOSE 80
# Build the container: docker build -t stephaneeybert/httpd:2.4.23 .
# Run the container: docker run -d -p 127.0.0.1:80:80 --name httpd stephaneeybert/httpd:2.4.23
# Check that the port is open: nmap -p 80 localhost
If you need Apache running in your container, you can install it in your image with the Dockerfile above, the same way you install the build-essential packages. Which means:
RUN apt-get install -y apache2
or a similar command. If you also need a configuration for that Apache instance, you can use the ADD or COPY instruction to bring your configuration file from outside into your container. More details can be found here.
If you need Apache as an independent container, you can use docker-compose to achieve it. Start Apache in another container, then use depends_on to declare the dependency between your containers. You can use ports to publish the port of each container so they can reach each other.
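For example, a minimal docker-compose.yml along these lines keeps each service in its own container (a sketch only; the service names, image tags, ports and the exact set of services are assumptions you would adapt to your own images):
version: "2"
services:
  httpd:
    image: stephaneeybert/httpd:2.4.23
    ports:
      - "80:80"              # publish Apache on the host
    depends_on:
      - mysql
      - redis
  mysql:
    image: stephaneeybert/mysql:5.6.30   # assumed tag for your self-built mysql image
    ports:
      - "3306:3306"
  redis:
    image: stephaneeybert/redis:3.0.7    # assumed tag for your self-built redis image
    ports:
      - "6379:6379"
Within the compose network the containers can also reach each other by service name (for example mysql or redis), so publishing ports is only needed for access from the host.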
Related
I am getting the following error inside my docker container:
How can I update my Dockerfile so that, while building the image, it builds with a version of wkhtmltopdf with patched Qt?
Following is my Dockerfile:
...
RUN apt-get install -y wkhtmltopdf
RUN apt-get install -y xvfb
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
...
Installing the following dependencies along with wkhtmltopdf worked for me:
...
RUN apt-get update
RUN apt-get install -y xvfb
RUN apt-get install -y wget
RUN apt-get install -y openssl build-essential xorg libssl1.0-dev
RUN wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.4/wkhtmltox-0.12.4_linux-generic-amd64.tar.xz
RUN tar xvJf wkhtmltox-0.12.4_linux-generic-amd64.tar.xz
RUN cp wkhtmltox/bin/wkhtmlto* /usr/bin/
...
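To confirm that the binary copied above is indeed the patched-Qt build (and not the distribution package), a quick check can be added right after the cp step; this is just a sanity check, and the exact output string is an assumption based on the 0.12.4 generic release:
RUN wkhtmltopdf --version   # the generic build should report something like "wkhtmltopdf 0.12.4 (with patched qt)"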
FROM php:7.4-fpm
# Install dependencies, libssl1.1-dev ?
RUN apt-get update && apt-get install -y \
xvfb \
wget \
openssl \
xorg \
libssl1.1 \
build-essential \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
zip \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl \
libzip-dev \
dcmtk \
nginx \
supervisor
RUN wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.4/wkhtmltox-0.12.4_linux-generic-amd64.tar.xz
RUN tar xvJf wkhtmltox-0.12.4_linux-generic-amd64.tar.xz
RUN cp wkhtmltox/bin/wkhtmlto* /usr/bin/
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
COPY php.ini /usr/local/etc/php/php.ini
# adjustments to php.ini based on the production version.
# Install extensions and configure
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
RUN docker-php-ext-configure zip
RUN docker-php-ext-install pdo_mysql zip exif pcntl gd sockets
# RUN sed -E -i -e 's/max_execution_time = 1200/max_execution_time = 120/' /etc/php.ini \
# && sed -E -i -e 's/memory_limit = 128M/memory_limit = 512M/' /etc/php.ini \
# && sed -E -i -e 's/post_max_size = 8M/post_max_size = 64M/' /etc/php.ini \
# && sed -E -i -e 's/upload_max_filesize = 2M/upload_max_filesize = 64M/' /etc/php.ini
# php artisan storage:link
# THAT NEEDS TO BE RUN FROM THE nginx-home/LaravelPortal/ directory to symlink the storage directory. NEED to figure out how to do that.
# sudo docker exec -it orthanc-docker-dev_ris_php-fpm_1 /bin/bash
COPY default.conf /etc/nginx/conf.d/default.conf
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY entrypoint.sh /
ENTRYPOINT ["/bin/bash","/entrypoint.sh"]
I am trying to deploy a Shiny app through Docker on a Windows 10 machine (8 GB RAM).
Here is the Dockerfile :
# Base image https://hub.docker.com/u/rocker
FROM rocker/shiny:3.6.3
# system libraries of general use
RUN apt-get update -qq && apt-get -y --no-install-recommends install \
libxml2-dev \
libcairo2-dev \
libsqlite3-dev \
libmariadbd-dev \
libmariadbclient-dev \
libpq-dev \
libssl-dev \
libcurl4-openssl-dev \
libssh2-1-dev \
unixodbc-dev \
&& install2.r --error \
--deps TRUE \
tidyverse \
dplyr \
devtools \
formatR \
remotes \
selectr \
caTools \
BiocManager \
&& rm -rf /tmp/downloaded_packages
# copy necessary files
## app folder
COPY /shinyapp/shinyapp.Rproj /srv/shiny-server/
COPY /shinyapp/ui.R /srv/shiny-server/
COPY /shinyapp/server.R /srv/shiny-server/
COPY /shinyapp/global.R /srv/shiny-server/
# install renv & restore packages
RUN Rscript -e 'install.packages("shiny", repos='http://cran.rstudio.com/')'
RUN Rscript -e 'install.packages("shinydashboard", repos='http://cran.rstudio.com/')'
RUN Rscript -e 'install.packages("glue", repos='http://cran.rstudio.com/')'
# expose port
EXPOSE 3838
RUN sudo chown -R shiny:shiny /srv/shiny-server
# run app on container start
CMD ["/usr/bin/shiny-server.sh"]
The folder structure is as follows:
Dockerfile
shinyapp/
    global.R
    ui.R
    server.R
I tried troubleshooting and checked the names of the files. Everything is correct, but I am still getting the error.
Any help would be great!
Follow these steps:
cd /Users/myUserName/.docker
ls -a
rm -rf .token_seed
If that doesn't resolve the issue, remove the .token_seed.lock file as well.
I'm trying to build a Docker image including a very particular configuration of OpenCV with CUDA and GPU support.
The build succeeds, and if I run make install from the same context that built the image, it works with no problems.
The problem happens when I try to use a multi-stage build, to avoid keeping all the dependencies needed to build OpenCV. Before you continue reading: what follows might actually be an XY problem, so if you have a better solution for copying OpenCV build artifacts (including the Python bindings!) in a Docker multi-stage build, that is my actual intent.
Now for my attempted solution and the struggle I have:
I run COPY --from=requirements /opencv /opencv and it apparently copies everything to the right path (I checked the filesystem). But when I run make install from the build folder, I get this CMake error:
CMake Error: The source directory "" does not exist.
Specify --help for usage, or press the help button on the CMake GUI.
Makefile:2724: recipe for target 'cmake_check_build_system' failed
make: *** [cmake_check_build_system] Error 1
Again, the same command, from the same folder, but without multistage build, works.
Here is my Dockerfile:
# Stage 1: Build
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 AS requirements
# Install dependencies
RUN echo "deb http://es.archive.ubuntu.com/ubuntu eoan main universe" | tee -a /etc/apt/sources.list
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential cmake unzip pkg-config libjpeg-dev libpng-dev libtiff-dev libavcodec-dev \
libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev libgtk-3-dev libatlas-base-dev \
gfortran python3-dev libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libxvidcore-dev x264 \
libx264-dev libfaac-dev libmp3lame-dev libtheora-dev libfaac-dev libmp3lame-dev libvorbis-dev \
libjpeg-dev libpng-dev libtiff-dev git python3-pip libtbb-dev libprotobuf-dev protobuf-compiler \
libgoogle-glog-dev libgflags-dev libgphoto2-dev libeigen3-dev libhdf5-dev wget libtbb-dev gcc-8 g++-8 llvm \
python3-venv libgirepository1.0-dev
# Install my project requirements
WORKDIR /venv
RUN python3 -m venv /venv
ENV PATH="/venv/bin:$PATH"
ADD requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
# Build OpenCV
WORKDIR /opencv
RUN wget https://github.com/opencv/opencv/archive/4.4.0.zip && mv 4.4.0.zip opencv.zip && unzip opencv.zip && rm opencv.zip
RUN wget https://github.com/opencv/opencv_contrib/archive/4.4.0.zip && mv 4.4.0.zip opencv_contrib.zip && unzip opencv_contrib.zip && rm opencv_contrib.zip
WORKDIR /opencv/opencv-4.4.0/build
ENV SITE_PACKAGES /venv/lib/python3.7/site-packages
ENV EXTRA_MODULES /opencv/opencv_contrib-4.4.0/modules
ENV CUDA_ARCH 7.5
ADD docker/build_opencv.sh .
RUN ./build_opencv.sh
# Stage 2: runtime
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential cmake python3-venv
# Install OpenCV
COPY --from=requirements /opencv /opencv
WORKDIR /opencv/opencv-4.4.0/build
RUN make install && ldconfig
# build fails here; the rest is specific to my project so I've omitted it
The build_opencv.sh script has these options:
#!/bin/bash
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_C_COMPILER=/usr/bin/gcc-8 \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D INSTALL_C_EXAMPLES=OFF \
-D WITH_TBB=ON \
-D WITH_CUDA=ON \
-D BUILD_opencv_cudacodec=OFF \
-D ENABLE_FAST_MATH=1 \
-D CUDA_FAST_MATH=1 \
-D WITH_CUBLAS=1 \
-D WITH_V4L=ON \
-D WITH_QT=OFF \
-D WITH_OPENGL=ON \
-D WITH_GSTREAMER=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_PC_FILE_NAME=opencv.pc \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_PYTHON3_INSTALL_PATH=$SITE_PACKAGES \
-D OPENCV_EXTRA_MODULES_PATH=$EXTRA_MODULES \
-D PYTHON_EXECUTABLE=/usr/bin/python3 \
-D WITH_CUDNN=ON \
-D OPENCV_DNN_CUDA=ON \
-D CUDA_ARCH_BIN=$CUDA_ARCH \
-D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 \
-D WITH_GTK_2_X=OFF \
-D BUILD_EXAMPLES=OFF ..
make -j16
You need at least numpy in your requirements.txt file.
In order to reproduce the issue, a minimal setup would have this structure:
- docker
- Dockerfile
- build_opencv.sh
- requirements.txt
Build from the root of the build context using:
docker build -t opencvmultistage:latest -f docker/Dockerfile .
Am I doing something wrong? Maybe CMake keeps some cache that I'm not copying to the new image, and that makes the build fail?
For the sake of clarity: if I add make install to the build_opencv.sh script it works, but then OpenCV is installed in the build stage and not in the runtime stage, which is not what I intend. make install runs in the same directory, and the same files should be present, so I don't really know what's going on.
It is simpler to run cmake, make and make install in the same stage and then copy the install folder. That way the final Docker image does not need any build tools such as cmake or build-essential.
We will use a custom CMAKE_INSTALL_PREFIX so that the OpenCV binaries are installed into a single directory and can be copied straight to the next stage. Using a custom prefix avoids having to copy the CUDA installation or development libraries that are no longer required. Then we run ldconfig on that directory to register the libraries as usual.
Modify the build script to use a custom CMAKE_INSTALL_PREFIX:
mkdir /prefix
cmake -D CMAKE_BUILD_TYPE=RELEASE \
# all compiler flags...
-D CMAKE_INSTALL_PREFIX=/prefix
Modify the Dockerfile to run make install in stage 1:
# Stage 1: Build
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 AS requirements
...
ADD build_opencv.sh .
RUN ./build_opencv.sh && make install
Then copy the installation in stage 2:
# Stage 2: runtime
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential python3-venv
# Install OpenCV
COPY --from=requirements /prefix /prefix
COPY --from=requirements /venv /venv
ENV PATH="/venv/bin:$PATH"
RUN ldconfig /prefix
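As an optional sanity check at the end of stage 2 (a sketch; it assumes the Python bindings were built into the copied /venv), you can verify that the bindings import cleanly:
RUN python3 -c "import cv2; print(cv2.__version__)"
If the COPY of /prefix and /venv went wrong, this fails the build immediately instead of at runtime.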
I have a Dockerfile like this:
FROM ubuntu:12.04
MAINTAINER me <me@c.com>
RUN apt-get -y update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install supervisor \
apache2 \
mysql-server \
php5 \
libapache2-mod-php5 \
php5-mysql \
php5-mcrypt
#ssh
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:root' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22 80
ADD ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
My question is: how do I add a couchdb server to this Dockerfile?
I can get a pre-built couchdb docker image from here: https://hub.docker.com/r/klaemo/couchdb/, but how do I create an image like that myself? I can't find any documentation on the process!
I spent 3 hours trying to google this with no luck, so I will take the risk of asking even if this is a dumb question!
Is there a specific version of couchdb that you want in your docker container?
If not, since you are using Ubuntu 12.04 as your base image, you can get the couchdb 1.0.1 binaries from the Ubuntu 12.04/precise [universe] repository simply by adding couchdb to your apt-get list, like this:
FROM ubuntu:12.04
MAINTAINER me <me@c.com>
RUN apt-get -y update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install supervisor \
apache2 \
mysql-server \
php5 \
libapache2-mod-php5 \
php5-mysql \
php5-mcrypt \
couchdb
#[--Rest of your dockerfile goes here unchanged--]
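Since your image starts everything through supervisord, you will also want an entry for couchdb in your supervisord.conf so that it actually runs; a minimal entry could look like this (a sketch, and the /usr/bin/couchdb path is what the Ubuntu package installs):
[program:couchdb]
command=/usr/bin/couchdb
autorestart=true
You may also want to add 5984 (the default CouchDB port) to your EXPOSE line if you need to reach it from outside the container.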
You can alternatively use the PPA maintained by the Apache CouchDB team to get the latest stable releases for your base image, built from the officially released tarballs. For this option you can use the following Dockerfile:
FROM ubuntu:12.04
MAINTAINER me <me@c.com>
RUN apt-get -y update
# Install the PPA management tool (provides add-apt-repository) in your docker container
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python-software-properties
RUN add-apt-repository ppa:couchdb/stable -y
RUN apt-get -y update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install supervisor \
apache2 \
mysql-server \
php5 \
libapache2-mod-php5 \
php5-mysql \
php5-mcrypt \
couchdb
#[--Rest of your dockerfile goes here unchanged--]
If you want the latest or a specific version of couchdb in your docker container, then you might have to build couchdb from source. Note that this approach will require you to install many more packages (g++, erlang-dev, erlang-manpages, erlang-base-hipe, erlang-eunit, libmozjs185-dev, libicu-dev, libcurl4-gnutls-dev, libtool) in your container to be able to build couchdb from source. However, you can purge/remove the packages that are only required for the build afterwards. The complete list of dependencies can be found on the official couchdb build wiki at Apache. If you really want THE latest version, you can refer to this dockerfile and update your own Dockerfile accordingly. Here is a complete Dockerfile [UNTESTED] for your ease of use:
FROM ubuntu:12.04
MAINTAINER me <me@c.com>
ENV COUCHDB_VERSION master
RUN groupadd -r couchdb && useradd -d /usr/src/couchdb -g couchdb couchdb
# download dependencies (git and ca-certificates are needed for the https clone below)
RUN apt-get update -y -qq && apt-get install -y --no-install-recommends \
build-essential \
ca-certificates \
erlang-dev \
erlang-manpages \
erlang-base-hipe \
erlang-eunit \
erlang-nox \
erlang-xmerl \
erlang-inets \
git \
libmozjs185-dev \
libicu-dev \
libcurl4-gnutls-dev \
libtool
RUN cd /usr/src && git clone https://git-wip-us.apache.org/repos/asf/couchdb.git \
&& cd couchdb && git checkout $COUCHDB_VERSION \
&& cd /usr/src/couchdb && ./configure && make
# You can optionally purge/remove the packages you installed to build the couchdb from source.
# permissions
RUN chmod +x /usr/src/couchdb/dev/run && chown -R couchdb:couchdb /usr/src/couchdb
USER couchdb
EXPOSE 5984 15984 25984 35984 15986 25986 35986
#[--Rest of your dockerfile can go here as required--]
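Note that, as written, the image never actually starts CouchDB. Since dev/run is already made executable above, one minimal (and, in the same [UNTESTED] spirit, unverified) way to start the dev cluster is:
CMD ["/usr/src/couchdb/dev/run"]
Alternatively, add a couchdb program entry to the supervisord.conf you already use in your original Dockerfile.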
I tried to install a LAMP stack with a Dockerfile in a directory on my desktop, /home/username/Desktop/docker/lamp/:
FROM ubuntu:16.04
VOLUME ["/var/www"]
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y \
apache2 \
php7.0 \
php7.0-cli \
libapache2-mod-php7.0 \
php7.0-gd \
php7.0-json \
php7.0-ldap \
php7.0-mbstring \
php7.0-mysql \
php7.0-pgsql \
php7.0-sqlite3 \
php7.0-xml \
php7.0-xsl \
php7.0-zip \
php7.0-soap
COPY apache_default /etc/apache2/sites-available/000-default.conf
COPY run /usr/local/bin/run
RUN chmod +x /usr/local/bin/run
RUN a2enmod rewrite
EXPOSE 80
CMD ["/usr/local/bin/run"]
Then, in my terminal, I run it:
$ docker build -t docker-lamp .
Step 4 : COPY apache_default /etc/apache2/sites-available/000-default.conf
lstat apache_default: no such file or directory
What have I done wrong? How can I fix this?
First of all, are you 100% sure the files apache_default and run are in the same directory as the Dockerfile?
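For reference, the paths in COPY are resolved relative to the build context (the . you pass to docker build), so a layout like this (using the directory from your question) is what the Dockerfile expects:
/home/username/Desktop/docker/lamp/
├── Dockerfile
├── apache_default
└── run
and the build should be run from inside that directory:
cd /home/username/Desktop/docker/lamp
docker build -t docker-lamp .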