I'm trying to dockerize my PHP application; this is my very first attempt.
Dockerfile:
FROM ubuntu:18.04
WORKDIR /php55
ARG GIT_TOKEN
ARG DEBIAN_FRONTEND=noninteractive
# Install apache2
RUN set -x; \
perl -pe 's/(\S+\.)?archive\.ubuntu\.com/mirror.sg.gs/g' /etc/apt/sources.list > temp-sc && mv temp-sc /etc/apt/sources.list \
&& sed -i 's#security.ubuntu.com#mirror.sg.gs#g' /etc/apt/sources.list \
&& apt-get update && apt-get install --yes apache2 curl wget nano \
&& a2enmod rewrite headers
# Configure apache2
RUN set -x; \
sed -i.backup 's#/var/www/html#/var/www#g' "/etc/apache2/sites-available/000-default.conf" \
&& echo "ServerName localhost" > "/etc/apache2/conf-available/fqdn.conf" && a2enconf fqdn \
&& cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/000-default.conf.backup
# Copy configuration
COPY ujian.conf /etc/apache2/sites-available/000-default.conf
RUN set -x; \
curl -H "Authorization: token ${GIT_TOKEN}" -O https://git.mydomain.com/api/v1/repos/liso/ujian/archive/main.tar.gz \
&& mkdir -p /var/www/ujian \
&& tar -xvzf main.tar.gz -C /var/www/ujian --strip-components=1 \
&& rm main.tar.gz
# Install PHP
COPY install-php5 .
RUN chmod +x install-php5 && ./install-php5
EXPOSE 80 7825
CMD ["apachectl", "-D", "FOREGROUND"]
docker-compose.yml
version: '3'
services:
  ujian:
    image: liso/ujian-dockerize
    container_name: docker-ujian
    build:
      context: .
      args:
        GIT_TOKEN: ${GIT_TOKEN} # from .env file
      dockerfile: ./Dockerfile
    ports:
      - 127.0.0.1:8080:80
    volumes:
      - ./www:/var/www
    extra_hosts:
      - "host.docker.internal:host-gateway"
The .env file contains my API token for the Git instance.
The problem is that after building, I can't find the downloaded files in `/var/www` inside the container; it's empty.
root@6835554968db:/var/www# ls -al
total 12
drwxr-xr-x 2 root root 4096 Jan 30 11:07 .
drwxr-xr-x 1 root root 4096 Jan 30 11:11 ..
I have rebuilt several times but /var/www is still empty. I've never touched Docker before, so I'm really lost. Can you help me debug this problem?
Yeah, it seems the volume was overwriting my previously downloaded files; that's why they kept going missing after I launched the container. Ultimately I had to create a docker-entrypoint.sh script which runs after the container has been provisioned. Then all is well.
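For anyone hitting the same thing, here is a minimal sketch of such an entrypoint. It assumes the Dockerfile is changed to unpack the application into a staging directory such as /opt/ujian (that path is an assumption, not the original script), so the bind-mounted ./www:/var/www volume cannot hide it; the script then seeds the mounted directory at startup:
#!/bin/sh
# docker-entrypoint.sh (hypothetical sketch, not the exact script used)
set -e
# If the mounted /var/www is empty, seed it from the files baked into the image.
if [ -z "$(ls -A /var/www 2>/dev/null)" ]; then
    cp -a /opt/ujian/. /var/www/
fi
# Hand control back to the image's CMD (apachectl -D FOREGROUND).
exec "$@"
Adding it with an ENTRYPOINT directive in front of the existing CMD means the copy runs at container start, after the volume has been mounted, rather than at build time.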
So I'm trying to follow these instructions:
https://github.com/open-forest/sendy
I'm using Portainer and trying to run a Sendy container (newsletter software). Instead of running a MySQL image with it, I'm just using my external managed database instead.
On my server I keep project data at: /var/docker/project-name. I use this structure for bind mounting if I need to bring data into the containers from the start.
So for this project, in the project-name folder I have sendy-6.0.2.zip and this Dockerfile (this file was provided via the instructions at the above link):
#
# Docker with Sendy Email Campaign Marketing
#
# Build:
# $ docker build -t sendy:latest --target sendy -f ./Dockerfile .
#
# Build w/ XDEBUG installed
# $ docker build -t sendy:debug-latest --target debug -f ./Dockerfile .
#
# Run:
# $ docker run --rm -d --env-file sendy.env sendy:latest
FROM php:7.4.8-apache as sendy
ARG SENDY_VER=6.0.2
ARG ARTIFACT_DIR=6.0.2
ENV SENDY_VERSION ${SENDY_VER}
RUN apt -qq update && apt -qq upgrade -y \
# Install unzip cron
&& apt -qq install -y unzip cron \
# Install php extension gettext
# Install php extension mysqli
&& docker-php-ext-install calendar gettext mysqli \
# Remove unused packages
&& apt autoremove -y
# Copy artifacts
COPY ./artifacts/${ARTIFACT_DIR}/ /tmp
# Install Sendy
RUN unzip /tmp/sendy-${SENDY_VER}.zip -d /tmp \
&& cp -r /tmp/includes/* /tmp/sendy/includes \
&& mkdir -p /tmp/sendy/uploads/csvs \
&& chmod -R 777 /tmp/sendy/uploads \
&& rm -rf /var/www/html \
&& mv /tmp/sendy /var/www/html \
&& chown -R www-data:www-data /var/www \
&& mv /usr/local/etc/php/php.ini-production /usr/local/etc/php/php.ini \
&& rm -rf /tmp/* \
&& echo "\nServerName \${SENDY_FQDN}" > /etc/apache2/conf-available/serverName.conf \
# Ensure X-Powered-By is always removed regardless of php.ini or other settings.
&& printf "\n\n# Ensure X-Powered-By is always removed regardless of php.ini or other settings.\n\
Header always unset \"X-Powered-By\"\n\
Header unset \"X-Powered-By\"\n" >> /var/www/html/.htaccess \
&& printf "[PHP]\nerror_reporting = E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n" > /usr/local/etc/php/conf.d/error_reporting.ini
# Apache config
RUN a2enconf serverName
# Apache modules
RUN a2enmod rewrite headers
# Copy hello-cron file to the cron.d directory
COPY cron /etc/cron.d/cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/cron \
# Apply cron job
&& crontab /etc/cron.d/cron \
# Create the log file to be able to run tail
&& touch /var/log/cron.log
COPY artifacts/docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["apache2-foreground"]
#######################
# XDEBUG Installation
#######################
FROM sendy as debug
# Install xdebug extension
RUN pecl channel-update pecl.php.net \
&& pecl install xdebug \
&& docker-php-ext-enable xdebug \
&& rm -rf /tmp/pear
Here is my Docker Compose file:
version: '3.7'
services:
  project-sendy:
    container_name: project-sendy
    image: sendy:6.0.2
    build:
      dockerfile: var/docker/project-sendy/Dockerfile
    restart: unless-stopped
    networks:
      - proxy
      - default
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.project-secure.entrypoints=websecure"
      - "traefik.http.routers.project-secure.rule=Host(`project.com`)"
    environment:
      SENDY_PROTOCOL: https
      SENDY_FQDN: project.com
      MYSQL_HOST: db-host-name-here
      MYSQL_DATABASE: db-name-here
      MYSQL_USER: db-user-name-here
      MYSQL_PASSWORD: db-password-here
      SENDY_DB_PORT: db-port-here
networks:
  proxy:
    external: true
When I try to deploy I get:
failed to deploy a stack: project-sendy Pulling project-sendy
Error could not find /data/compose/126/var/docker/project-sendy:
stat /data/compose/126/var/docker/project-sendy: no such file or directory
So here's what I've done.
I have the cron and artifacts folders in the same directory as the Dockerfile.
In the Dockerfile look for this line:
COPY artifacts/docker-entrypoint.sh /usr/local/bin/
Right below it put this line:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
Otherwise you will get this error:
Starting Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/usr/local/bin/docker-entrypoint.sh": permission denied: unknown
Then build it with:
docker build -t sendy:6.0.2 .
Then your image will show up in Portainer.
You can then remove the build section from your Docker Compose file and hit deploy. It now works for me.
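For illustration, the service definition from the question would then shrink to something like this (a sketch; the labels, networks and environment blocks stay exactly as they were):
version: '3.7'
services:
  project-sendy:
    container_name: project-sendy
    image: sendy:6.0.2   # the image built locally in the step above
    restart: unless-stopped
    # labels, networks and environment unchanged from the original file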
I have a yii1 application and a Dockerfile, and I had a docker-compose file.
But for the moment I only have one application, because I use a remote database; the database is not in a container.
So I have this Dockerfile:
FROM php:7.3-apache
#COPY BaltimoreCyberTrustRoot.crt.pem /usr/local/share/ca-certificates/AzureDB.crt
# Copy virtual host into container
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
# Enable rewrite mode
RUN a2enmod rewrite
# Install necessary packages
RUN apt-get update && \
apt-get install \
libzip-dev \
wget \
git \
unzip \
-y --no-install-recommends
# Install PHP Extensions
RUN docker-php-ext-install zip pdo_mysql
# RUN pecl install -o -f xdebug-3.1.3 \
# && rm -rf /tmp/pear
# Copy composer installable
COPY ./install-composer.sh ./
# Copy php.ini
COPY ./php.ini /usr/local/etc/php/
#COPY BaltimoreCyberTrustRoot.crt.pem /var/www/html/
EXPOSE 80
# Cleanup packages and install composer
RUN apt-get purge -y g++ \
&& apt-get autoremove -y \
&& rm -r /var/lib/apt/lists/* \
&& rm -rf /tmp/* \
&& sh ./install-composer.sh \
&& rm ./install-composer.sh
# Change the current working directory
WORKDIR /var/www/html
# Change the owner of the container document root
RUN chown -R www-data:www-data /var/www
# Start Apache in foreground
CMD ["apache2-foreground"]
And I had this docker-compose file:
version: '3'
services:
  web:
    build: ./docker
    container_name: dockeryiidisc
    ports:
      - 80:80
      - 443:443
    volumes:
      - C:\xampp\htdocs\webScraper/docker:/etc/apache2/sites-enabled/
      - C:\xampp\htdocs\webScraper:/var/www/html/
and that worked.
But now I only want to use the Dockerfile.
So I tried this:
docker build -t docker_webcrawler .
and this command:
docker run -d -p 80:80 --name cntr-apache docker_webcrawler
But if I then go to http://localhost:80, I only see an empty directory:
Index of /
[ICO] Name Last modified Size Description
So what do I have to change so that I only need to use the Dockerfile?
Thank you
It looks like you're missing the volume mappings that you have in your docker-compose file. Try this:
docker run -d -p 80:80 --name cntr-apache -v C:\xampp\htdocs\webScraper/docker:/etc/apache2/sites-enabled/ -v C:\xampp\htdocs\webScraper:/var/www/html/ docker_webcrawler
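If the goal is really to get by with only the Dockerfile and no volumes, another option is to bake the application into the image at build time. A sketch, assuming the Dockerfile (and the config files it copies) is moved to the application root so the source code is part of the build context:
# appended after WORKDIR /var/www/html in the Dockerfile
COPY . /var/www/html/
RUN chown -R www-data:www-data /var/www
Then rebuild and run without -v flags:
docker build -t docker_webcrawler .
docker run -d -p 80:80 --name cntr-apache docker_webcrawler
The trade-off is that code changes then require a rebuild, whereas the bind mounts pick them up immediately.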
My container is up and running successfully; I'm on macOS Catalina:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
487b211c7300 laravel-demo_laravel-app "docker-php-entrypoi…" 6 seconds ago Up 5 seconds 9000/tcp, 9021/tcp, 0.0.0.0:8021->80/tcp laravel-app
My docker-compose.yml looks like;
version: '3'
services:
  #Laravel App
  laravel-app:
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: laravel-app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: laravel-app
      SERVICE_TAGS: dev
    working_dir: /var/www/html
    ports:
      - 8021:80
    volumes:
      - ./:/var/www/html
and my Dockerfile
FROM php:7.2-fpm-alpine
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/html/
# Set working directory
WORKDIR /var/www/html
# Install Additional dependencies
RUN apk update && apk add --no-cache \
build-base shadow vim curl \
php7 \
php7-fpm \
php7-common \
php7-pdo \
php7-pdo_mysql \
php7-mysqli \
php7-mcrypt \
php7-mbstring \
php7-xml \
php7-openssl \
php7-json \
php7-phar \
php7-zip \
php7-gd \
php7-dom \
php7-session \
php7-zlib
# Add and Enable PHP-PDO Extensions
RUN docker-php-ext-install pdo pdo_mysql
RUN docker-php-ext-enable pdo_mysql
# Install PHP Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Remove Cache
RUN rm -rf /var/cache/apk/*
# Add UID '1000' to www-data
RUN usermod -u 1000 www-data
RUN usermod -u 501 www-data
# Copy existing application directory permissions
COPY --chown=www-data:www-data . /var/www/html
# Change current user to www
USER www-data
These two files are both on the root of my fresh Laravel install.
Internal/external ports look fine in my config, and with docker ps reporting success, my app should launch in a browser, no? In theory I should see the Laravel splash screen.
Well, unfortunately it doesn't, and it's kicking my ass.
Things I've tried:
EXPOSE 9000
CMD ["php-fpm", "--host", "http://localhost"]
and
#CMD ["php-fpm", "--host", "0.0.0.0"]
Plus various tweaks to docker-compose.yml, like removing the environment: section, but no joy.
By comparing my Dockerfile and docker-compose.yml, are you able to offer some clues?
Thanks
If you can't beat 'em, join 'em:
https://hub.docker.com/r/bitnami/laravel
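A minimal quick start with that image might look like this; the /app mount point, the default port 8000, and the tag are from memory, so check the image README on Docker Hub before relying on them:
docker run --rm -it -p 8000:8000 -v "$(pwd)":/app bitnami/laravel:latest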
I am setting up Kafka and Zookeeper through Docker; however, whenever I build my image I keep getting an error with exit code 8 (wget's code for a server error response) when it gets to:
wget -q https://www.apache.org/dist/zookeeper/zookeeper-${ZOOKEEPER_VERSION}/zookeeper-${ZOOKEEPER_VERSION}.tar.gz.asc .
I have tried to change the file format in the download-kafka.sh to unix already.
Below is my Dockerfile:
FROM wurstmeister/base
MAINTAINER Wurstmeister
ENV ZOOKEEPER_VERSION 3.4.13
#Download Zookeeper
RUN wget -q http://mirror.vorboss.net/apache/zookeeper/zookeeper-${ZOOKEEPER_VERSION}/zookeeper-${ZOOKEEPER_VERSION}.tar.gz && \
wget -q https://www.apache.org/dist/zookeeper/KEYS && \
wget -q https://www.apache.org/dist/zookeeper/zookeeper-${ZOOKEEPER_VERSION}/zookeeper-${ZOOKEEPER_VERSION}.tar.gz.asc && \
wget -q https://www.apache.org/dist/zookeeper/zookeeper-${ZOOKEEPER_VERSION}/zookeeper-${ZOOKEEPER_VERSION}.tar.gz.md5
#Verify download
RUN md5sum -c zookeeper-${ZOOKEEPER_VERSION}.tar.gz.md5 && \
gpg --import KEYS && \
gpg --verify zookeeper-${ZOOKEEPER_VERSION}.tar.gz.asc
#Install
RUN tar -xzf zookeeper-${ZOOKEEPER_VERSION}.tar.gz -C /opt
#Configure
RUN mv /opt/zookeeper-${ZOOKEEPER_VERSION}/conf/zoo_sample.cfg /opt/zookeeper-${ZOOKEEPER_VERSION}/conf/zoo.cfg
ENV JAVA_HOME /usr/lib/jvm/java-7-openjdk-amd64
ENV ZK_HOME /opt/zookeeper-${ZOOKEEPER_VERSION}
RUN sed -i "s|/tmp/zookeeper|$ZK_HOME/data|g" $ZK_HOME/conf/zoo.cfg; mkdir $ZK_HOME/data
ADD start-zk.sh /usr/bin/start-zk.sh
EXPOSE 2181 2888 3888
WORKDIR /opt/zookeeper-${ZOOKEEPER_VERSION}
VOLUME ["/opt/zookeeper-${ZOOKEEPER_VERSION}/conf", "/opt/zookeeper-${ZOOKEEPER_VERSION}/data"]
CMD /usr/sbin/sshd && bash /usr/bin/start-zk.sh
If you go to this link, you'll see that 3.4.13 doesn't exist there anymore:
https://www.apache.org/dist/zookeeper/
You can change to ENV ZOOKEEPER_VERSION 3.4.14, or just use an existing Zookeeper Docker image.
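Two concrete variants, as a sketch (the image tag below is only illustrative): old releases remain available on the Apache archive, and the official zookeeper image on Docker Hub avoids building it yourself.
# Option A: stay on 3.4.13 but fetch from archive.apache.org, which keeps old releases
RUN wget -q https://archive.apache.org/dist/zookeeper/zookeeper-${ZOOKEEPER_VERSION}/zookeeper-${ZOOKEEPER_VERSION}.tar.gz.asc
# Option B: run the official image instead of building your own
docker run -d --name zookeeper -p 2181:2181 zookeeper:3.5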
I would like to have a shared directory between my containers: ftp and s3fs. To do so, I have created a volume in my docker-compose file called s3.
If I stop s3fs from running in the s3fs container, then I can create files in the ftp container and they will show up inside the s3fs container under /home/files.
However, when s3fs is running, the directory /home/files remains empty while I create files in the ftp container.
This is what my /proc/mounts file looks like:
/dev/sda2 /home/files ext4 rw,relatime,data=ordered 0 0
s3fs /home/files fuse.s3fs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
I believe FUSE may be overriding my Docker volume; has anyone encountered this problem before?
docker-compose.yml
version: "3"
services:
ftp:
image: app/proftpd:latest
volumes:
- s3:/home/files
ports:
- 2222:2222
s3fs:
image: app/s3fs:latest
command: start
env_file:
- s3fs/aws.env
volumes:
- s3:/home/files
cap_add:
- SYS_ADMIN
devices:
- "/dev/fuse"
environment:
ENVIRONMENT: "dev"
volumes:
s3:
s3fs - Dockerfile
FROM ubuntu:16.04
RUN apt-get update -qq
RUN apt-get install -y \
software-properties-common
RUN apt-get update -qq
RUN apt-get install -y \
automake \
autotools-dev \
fuse \
g++ \
git \
libcurl4-openssl-dev \
libfuse-dev \
libssl-dev \
libxml2-dev \
make \
pkg-config \
curl
RUN curl -L https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.84.tar.gz | tar zxv -C /usr/src
RUN cd /usr/src/s3fs-fuse-1.84 && ./autogen.sh && ./configure --prefix=/usr --with-openssl && make && make install
COPY entrypoint.sh /opt/s3fs/bin/entrypoint.sh
RUN mkdir -p /home/files
WORKDIR /opt/s3fs/bin
ENTRYPOINT ["/bin/sh", "./entrypoint.sh"]
s3fs - entrypoint.sh
#!/usr/bin/env bash
case $1 in
start)
echo "Starting S3Fs: "
s3fs mybucket /home/files -o allow_other,nonempty -d -d
;;
esac
ftp - Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y \
openssh-server \
proftpd-basic \
proftpd-mod-mysql
COPY proftpd.conf /etc/proftpd/proftpd.conf
COPY sftp.conf /etc/proftpd/conf.d/sftp.conf
COPY setup.sh /etc/proftpd/setup.sh
RUN chmod 500 /etc/proftpd/setup.sh && /etc/proftpd/setup.sh
EXPOSE 2222
ENTRYPOINT ["/bin/sh", "/etc/proftpd/entrypoint.sh"]
You can mount S3 in your Docker container in the following way:
1. Add this to your Dockerfile:
RUN apt-get install -y fuse s3fs
RUN mkdir /root/.aws
RUN touch /root/.aws/.passwd-s3fs && chmod 600 /root/.aws/.passwd-s3fs
COPY entrypoint.sh ./
RUN chmod 700 entrypoint.sh
ENTRYPOINT entrypoint.sh
2. Create entrypoint.sh with the following script:
#!/bin/sh
echo "$AWS_CREDS" > /root/.aws/.passwd-s3fs
echo "$BUCKET_NAME /srv/files fuse.s3fs _netdev,allow_other,passwd_file=/root/.aws/.passwd-s3fs 0 0" > /etc/fstab
mount -a
<your old CMD or ENTRYPOINT>
3. In docker-compose.yml, add the following:
<your-container-name>:
  image: ...
  build: ...
  environment:
    - AWS_ID="AKI..."
    - AWS_KEY="omIE..."
    - AWS_CREDS=AKI...:2uMZ...
    - BUCKET_NAME=<YOUR bucket name>
  devices:
    - "/dev/fuse"
  cap_add:
    - SYS_ADMIN
  security_opt:
    - seccomp:unconfined
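Once the container is up, a quick way to verify that the bucket actually got mounted (the container name is the placeholder from the snippet above):
docker exec -it <your-container-name> sh -c "mount | grep s3fs && ls /srv/files"
Mounting s3fs inside the container that needs the files also sidesteps the behaviour described in the question, where a FUSE mount made in one container on top of a shared named volume did not show up in the other container.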