I am trying to learn Docker. I have installed Docker Desktop on my Windows 10 Pro computer.
I started by getting a PHP/MySQL website up and running with Docker.
My docker-compose.yml looks like this:
version: "3.8"
services:
www:
build: .
ports:
- "80:80"
volumes:
- ./www:/var/www/html
links:
- db
networks:
- default
db:
image: mysql:8.0
ports:
- "3306:3306"
command: --default-authentication-plugin=mysql_native_password
environment:
MYSQL_DATABASE: myDb
MYSQL_USER: user
MYSQL_PASSWORD: test
MYSQL_ROOT_PASSWORD: test
volumes:
- ./dump:/docker-entrypoint-initdb.d
- ./conf:/etc/mysql/conf.d
- persistent:/var/lib/mysql
networks:
- default
phpmyadmin:
image: phpmyadmin/phpmyadmin
links:
- db:db
ports:
- 8000:80
environment:
MYSQL_USER: user
MYSQL_PASSWORD: test
MYSQL_ROOT_PASSWORD: test
volumes:
persistent:
and the Dockerfile looks like this:
FROM php:7.3-apache
RUN docker-php-ext-install mysqli
RUN apt-get update \
    && apt-get install -y libzip-dev \
    && apt-get install -y wget \
    && apt-get install -y zlib1g-dev \
    && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-install zip
I have a www folder with an index.php.
When I run
docker-compose up -d
the PHP/MySQL site is up and running correctly and I can access index.php with the expected results.
So far so good.
Now I want to change the Dockerfile to set up a PHP forum website (phpBB), so I have updated my Dockerfile as follows:
FROM php:7.3-apache
RUN docker-php-ext-install mysqli
RUN apt-get update \
    && apt-get install -y libzip-dev \
    && apt-get install -y wget \
    && apt-get install -y zlib1g-dev \
    && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-install zip
WORKDIR /var/www/html
RUN wget https://download.phpbb.com/pub/release/3.3/3.3.0/phpBB-3.3.0.tar.bz2 \
    && tar -xvjf phpBB-3.3.0.tar.bz2 \
    && ls -l
When I run
docker-compose build --no-cache
I can see the expected results, i.e., the ls step shows all the expected phpBB files in /var/www/html.
However, when I run
docker-compose up -d
my container only has index.php in /var/www/html (the index.php from the www folder); none of the phpBB files are there.
What am I doing wrong?
The files are there, but you are hiding them with your bind mount.
You are bind-mounting ./www onto the same directory (/var/www/html), which hides the image's contents of that directory.
Here:
www:
  ...
  volumes:
    - ./www:/var/www/html
When you build the image you correctly see the files, but once you run docker-compose up the bind mount is created and it hides them.
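For illustration, a minimal way around it (assuming you no longer need to live-edit files from ./www, or are happy to mount them somewhere that does not shadow phpBB) would be to adjust the www service like this:
www:
  build: .
  ports:
    - "80:80"
  volumes:
    # Option 1: drop the bind mount entirely (leave it commented out) so the
    # phpBB files baked into the image at build time stay visible.
    # - ./www:/var/www/html
    # Option 2: mount your own code into a sub-directory instead of on top of
    # the whole document root (the "custom" folder name is just an example).
    - ./www:/var/www/html/custom
Alternatively, you can download and extract phpBB into the ./www folder on the host, so that the bind mount itself contains the forum files.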
I'm following the MariaDB docs. They say the database should be created if the entrypoint finds a .sql file in /docker-entrypoint-initdb.d.
I'm working on an Ubuntu Server in an Oracle VirtualBox VM.
My docker-compose.yml looks like this:
version: "3.9"
services:
db:
image: mariadb:10
container_name: mariadb
ports:
- 3306:3306
environment:
- MYSQL_USER=user
- MYSQL_ROOT_PASSWORD=password
- MYSQL_PASSWORD=password
- MARIADB_DATABASE=database // tried with MYSQL_DATABASE and without this line
volumes:
- "db_data:/var/lib/mysql"
- ".database/initdb/dump.sql:/docker-entrypoint-initdb.d/initdb.sql"
# networks:
# - network
volumes:
db_data:
My initdb.sql looks like this (the one that should work in the end looks different, but for simplicity I reduced it to a minimum and could not get even this simple one working):
CREATE DATABASE NEWDB;
I honestly don't know where to look or what to do now, because everywhere I looked for a possible solution I found that this is the bare-minimum example that should work.
I tried restarting Docker, deleted all containers, images, and volumes, and modified initdb.sql into:
CREATE USER user WITH PASSWORD 'password';
CREATE DATABASE IF NOT EXISTS database;
GRANT ALL PRIVILEGES ON DATABASE database TO user;
but the database is not initialized when I docker compose up.
I looked inside the container and initdb.sql was there.
EDIT: It somehow worked when I ran docker compose up with MARIADB_DATABASE=database, but the initdb.sql script still doesn't run, and that is the most important part because it sets up the whole database.
(NOTE: On top of that, I want to set up another PHP container that runs a PHP script to collect data to be stored in the MariaDB container above. The MariaDB instance is connected to a website that reads data from the container.)
Well I'm using the following stack and it works fine for me.
php-apache:
This is an Apache server that runs all my PHP scripts. You can place your scripts in the ./src directory and it will automatically be mounted to the Apache server's DocumentRoot.
db:
This is the latest Docker image of MariaDB.
adminer:
This is the lightweight database browser I use for creating and altering my databases. Just visit localhost:8081 and enter the following credentials; it makes managing the databases simpler.
username: root
password: example
version: '3.8'
services:
  php-apache:
    container_name: php-apache
    build:
      context: .
      dockerfile: Dockerfile
    image: php:8.0-apache
    volumes:
      - ./src:/var/www/html/
    ports:
      - 8080:80
  db:
    image: mariadb
    restart: always
    environment:
      MARIADB_ROOT_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    ports:
      - 8081:8080
Dockerfile:
This is a simple image extended from the base php:8.0-apache image, with the MySQL extensions installed in it for PDO support.
FROM php:8.0-apache
RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
RUN apt-get update && apt-get upgrade -y
P.S.:
Here you'll have to create all your databases manually via the Adminer GUI. If you prefer SQL scripts via initdb.sql, that works too; I've just provided this configuration as a suggestion.
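That said, if you do want the schema created automatically, a rough sketch (assuming an ./initdb folder next to the compose file holding your .sql scripts) is to mount it into the db service; note that the official MariaDB image only runs these scripts on first startup, while the data directory is still empty:
db:
  image: mariadb
  restart: always
  environment:
    MARIADB_ROOT_PASSWORD: example
  volumes:
    # .sql, .sql.gz and .sh files in this directory are executed once,
    # only while /var/lib/mysql is still empty
    - ./initdb:/docker-entrypoint-initdb.d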
I came up with a solution. I used a Laravel base setup (installed the Laravel project with curl -s "https://laravel.build/project-name?with=mariadb" | sudo bash) and modified it a little bit. So here's the docker-compose.yml:
# For more information: https://laravel.com/docs/sail
version: '3'
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.1
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.1/app
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    ports:
      - '${APP_PORT:-80}:80'
      - '${VITE_PORT:-5173}:${VITE_PORT:-5173}'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
      XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
      XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - mariadb
  mariadb:
    image: 'mariadb:10'
    container_name: 'mariadb-10'
    ports:
      - '${FORWARD_DB_PORT:-3306}:3306'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
      MYSQL_ROOT_HOST: "%"
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USERNAME}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      - 'sail-mariadb:/var/lib/mysql'
      - './vendor/laravel/sail/database/mysql/create-testing-database.sh:/docker-entrypoint-initdb.d/10-create-testing-database.sh'
    networks:
      - sail
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
      retries: 3
      timeout: 5s
networks:
  sail:
    driver: bridge
volumes:
  sail-mariadb:
    driver: local
Here you can see that "10-create-testing-database.sh" is executed on startup. I tested this container and it created a database, so I just had to modify the script a little, and now the container creates a database and tables on container startup. Here's "10-create-testing-database.sh":
#!/usr/bin/env bash
mysql --user=root --password="$MYSQL_ROOT_PASSWORD" <<-EOSQL
    CREATE DATABASE IF NOT EXISTS database_name;
    GRANT ALL PRIVILEGES ON \`testing%\`.* TO '$MYSQL_USER'@'%';
    USE database_name;
    CREATE TABLE IF NOT EXISTS table_name(
        table_entries ...
    );
EOSQL
I still don't know why my initial setup did not work. The only difference I can see is that this working file is a .sh and the non-working one was a .sql (this does not make sense to me, but it is what it is).
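For reference, the entrypoint also picks up plain .sql files from /docker-entrypoint-initdb.d, though only on first startup while the data volume is still empty. A minimal MariaDB-flavoured sketch (database_name, table_name, user and password are placeholders) would be:
-- only runs on first startup, while /var/lib/mysql is empty
CREATE DATABASE IF NOT EXISTS database_name;
CREATE USER IF NOT EXISTS 'user'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON database_name.* TO 'user'@'%';
USE database_name;
CREATE TABLE IF NOT EXISTS table_name (
    id INT AUTO_INCREMENT PRIMARY KEY
);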
Dockerfile:
FROM ubuntu:22.04
LABEL maintainer="Taylor Otwell"
ARG WWWGROUP
ARG NODE_VERSION=16
ARG POSTGRES_VERSION=14
WORKDIR /var/www/html
ENV DEBIAN_FRONTEND noninteractive
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update \
&& apt-get install -y gnupg gosu curl ca-certificates zip unzip git supervisor sqlite3 libcap2-bin libpng-dev python2 \
&& mkdir -p ~/.gnupg \
&& chmod 600 ~/.gnupg \
&& echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf \
&& echo "keyserver hkp://keyserver.ubuntu.com:80" >> ~/.gnupg/dirmngr.conf \
&& gpg --recv-key 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c \
&& gpg --export 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c > /usr/share/keyrings/ppa_ondrej_php.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/ppa_ondrej_php.gpg] https://ppa.launchpadcontent.net/ondrej/php/ubuntu jammy main" > /etc/apt/sources.list.d/ppa_ondrej_php.list \
&& apt-get update \
&& apt-get install -y php8.1-cli php8.1-dev \
php8.1-pgsql php8.1-sqlite3 php8.1-gd \
php8.1-curl \
php8.1-imap php8.1-mysql php8.1-mbstring \
php8.1-xml php8.1-zip php8.1-bcmath php8.1-soap \
php8.1-intl php8.1-readline \
php8.1-ldap \
php8.1-msgpack php8.1-igbinary php8.1-redis php8.1-swoole \
php8.1-memcached php8.1-pcov php8.1-xdebug \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& curl -sLS https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g npm \
&& curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | tee /usr/share/keyrings/yarn.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/yarn.gpg] https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& curl -sS https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor | tee /usr/share/keyrings/pgdg.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt jammy-pgdg main" > /etc/apt/sources.list.d/pgdg.list \
&& apt-get update \
&& apt-get install -y yarn \
&& apt-get install -y mysql-client \
&& apt-get install -y postgresql-client-$POSTGRES_VERSION \
&& apt-get -y autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN setcap "cap_net_bind_service=+ep" /usr/bin/php8.1
RUN groupadd --force -g $WWWGROUP sail
RUN useradd -ms /bin/bash --no-user-group -g $WWWGROUP -u 1337 sail
COPY start-container /usr/local/bin/start-container
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY php.ini /etc/php/8.1/cli/conf.d/99-sail.ini
RUN chmod +x /usr/local/bin/start-container
EXPOSE 8000
ENTRYPOINT ["start-container"]
I have a development environment where I run npm run serve in my local terminal and then docker-compose up -d in a different terminal to run the services I need to start my system.
Now I am attempting to run front-end tests, which I run inside one of the running containers using NightwatchJS, and for some reason the test runner is not seeing the files served by npm run serve. When I capture a screenshot from the test runner, the page looks as if npm run serve were not running at all; however, when I go to 127.0.0.1 in my browser, everything loads as usual.
I think my issue is that the test is being run inside a Docker container, like so:
docker-compose exec web bash -c "npx nightwatch ...file"
That container is not running npm run serve, but I am confused as to why the site works when I hit it from my browser. I have tried exposing ports in the Dockerfile, but that does not work.
Can anybody point me in the right direction?
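For context, a process inside a container cannot reach a server that only listens on the host via 127.0.0.1; inside the container, the loopback address is the container itself. A rough sketch of one way to point the in-container test runner at the host-run dev server (the FRONTEND_URL variable and the 8080 port are assumptions, not part of the original setup):
web:
  build:
    context: .
    dockerfile: Dockerfile-dev
  extra_hosts:
    # lets the container resolve the machine running `npm run serve`
    # (built in on Docker Desktop; this mapping is needed on Linux)
    - "host.docker.internal:host-gateway"
  environment:
    # hypothetical variable a nightwatch config could read as its launch_url
    FRONTEND_URL: "http://host.docker.internal:8080"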
Here is my Dockerfile:
FROM python:3.8.5-slim-buster
# the first 2 prevent Python from writing out pyc files or from buffering stdin/stdout
# the others are Node
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 12.7.0
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# the man1 directory is not present for slim-buster so we add that and then install all of the default system based dependencies
# NOTE...TOP LAYERS ARE CACHED FIRST!!!!
RUN mkdir -p /usr/share/man/man1 \
&& apt-get clean && apt-get update -y && apt-get install pdftk-java curl git -y \
&& curl --silent -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash \
&& apt-get install zlib1g-dev libjpeg-dev python3-pythonmagick inkscape xvfb poppler-utils libfile-mimeinfo-perl qpdf libimage-exiftool-perl ufraw-batch ffmpeg gcc procps -y \
&& apt-get clean && apt-get autoclean
# SELENIUM
# get wget...
# Adding trusting keys to apt for repositories
RUN apt-get install gnupg -y && apt-get install wget -y \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list' \
&& apt-get update -y \
&& apt-get install google-chrome-stable -y \
&& apt-get install unzip -yqq
# Set up Chromedriver Env Vars
ENV CHROMEDRIVER_VERSION 87.0.4280.20
ENV CHROMEDRIVER_DIR /chromedriver
# make directory for it...
RUN mkdir $CHROMEDRIVER_DIR
# Download and install Chromedriver
RUN wget -q --continue -P $CHROMEDRIVER_DIR "http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip" \
&& unzip $CHROMEDRIVER_DIR/chromedriver* -d $CHROMEDRIVER_DIR \
&& rm "$CHROMEDRIVER_DIR/chromedriver_linux64.zip"
# Put Chromedriver into the PATH
ENV PATH $CHROMEDRIVER_DIR:$PATH
# Set display port as an environment variable
ENV DISPLAY=:99
# SELENIUM
## NIGHTMARE
#RUN apt-get install wget -y && wget http://selenium-release.storage.googleapis.com/2.44/selenium-server-standalone-2.44.0.jar -P /bin/
#RUN apt install default-jre -y
#RUN apt-get install -y xvfb x11-xkb-utils xfonts-100dpi xfonts-75dpi xfonts-scalable xfonts-cyrillic x11-apps clang libdbus-1-dev libgtk2.0-dev libnotify-dev libgconf2-dev libasound2-dev libcap-dev libcups2-dev libxtst-dev libxss1 libnss3-dev gcc-multilib g++-multilib
# ensure node is installed, and at the end, make the working directory
RUN . $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default \
&& mkdir /code
# set working directory to /code...it was just made for this purpose
WORKDIR /code
# possible that these will cache so separate them from COPY . /code/
COPY requirements.txt /code/
# now install, this will normally also cache
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# place this at the end because the code will always change...this will almost never cache...
COPY . /code/
EXPOSE 8001
EXPOSE 8888
Here is my compose file:
version: '3.4'
services:
  redis:
    image: redis
    ports:
      - "6379"
    restart: unless-stopped
    networks:
      main:
        aliases:
          - redis
  postgres:
    image: postgres:12
    ports:
      - "5432:5432"
    env_file: ./.env
    restart: unless-stopped
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      main:
        aliases:
          - postgres
  # access by going to localhost:16543
  # when adding a server to the server list
  # the hostname is postgres
  # the username is postgres
  # the password is postgres
  pgadmin:
    image: dpage/pgadmin4
    links:
      - postgres
    depends_on:
      - postgres
    env_file: ./.env
    restart: unless-stopped
    ports:
      - "16543:80"
    networks:
      main:
        aliases:
          - pgadmin
  celery:
    build:
      network: host
      context: .
      dockerfile: Dockerfile-dev # use docker-dev because production npm installs and npm builds
    command: python manage.py celery
    env_file: ./.env
    restart: unless-stopped
    volumes:
      - .:/code
      - tmp:/tmp
    links:
      - redis
    depends_on:
      - redis
    networks:
      main:
        aliases:
          - celery
  web:
    build:
      network: host
      context: .
      dockerfile: Dockerfile-dev
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
      - tmp:/tmp
    ports:
      - "8000:8000"
    env_file: ./.env
    restart: unless-stopped
    links:
      - postgres
      - redis
      - celery
      - pgadmin
    depends_on:
      - postgres
      - redis
      - celery
      - pgadmin
    networks:
      main:
        aliases:
          - web
volumes:
  pgdata:
  tmp:
networks:
  main:
I created a docker-compose.yml file with the code below:
version: "3.8"
services:
db:
image: mysql:latest
command: --default-authentication-plugin=mysql_native_password
container_name: docker_database
restart: always
volumes:
- db-data:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=root
networks:
- dev
phpmyadmin:
image: phpmyadmin:latest
container_name: docker_phpmyadmin
restart: always
depends_on:
- db
ports:
- 8081:80
environment:
PMA_HOST: db
networks:
- dev
prestashop:
build: php
container_name: docker_prestashop
ports:
- 8080:80
volumes:
- ./php/vhosts:/etc/apache2/sites-enabled
- ./:/var/www/html
restart: always
networks:
- dev
networks:
dev:
volumes:
db-data:
And also this Dockerfile:
FROM php:7.4-apache
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
RUN apt-get update \
    && apt-get install -y --no-install-recommends locales apt-utils git libicu-dev g++ libpng-dev libxml2-dev libzip-dev libonig-dev libxslt-dev
RUN echo "en_US.UTF8 UTF8" > /etc/locale.gen && \
    echo "fr_FR.UTF-8 UTF-8" >> /etc/locale.gen && \
    locale-gen
RUN curl -sSk https://getcomposer.org/installer | php -- --disable-tls && \
    mv composer.phar /usr/local/bin/composer
RUN docker-php-ext-configure intl
RUN docker-php-ext-install pdo pdo_mysql gd opcache intl zip calendar dom mbstring xsl
RUN pecl install apcu && docker-php-ext-enable apcu
RUN a2enmod rewrite && service apache2 restart
The goal is to create a PrestaShop environment, but I'm struggling with an Error 500 when trying to install PrestaShop. I've read that it has something to do with file permissions. Some of the folders are accessible, but I don't know why others, such as the config folder, are read-only. I've created a volume shared between the container and the host, but I think I got something wrong in the configuration.
Even when I run chmod 770 <some-file> I can access the file, but when I try to change something in it I get this error:
Cannot save /home/benju/Bureau/docker/config/defines.inc.php.
Unable to create a backup file (defines.inc.php~).
The file left unchanged.
How can I access these files on the host with the same permissions as the container?
You could use the same user and group with the docker run parameter --user "$(id -u):$(id -g)".
With this alone the container can still give you errors (a shell in the container won't, but other applications can), because the user and group still need to be created inside the image. So, in the Dockerfile:
ARG USER_ID
ARG GROUP_ID
RUN addgroup --gid $GROUP_ID user
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID user
USER user
Then the file permissions the container sets on the volume will be the same as if the host had set them.
More information on the following webpage:
https://vsupalov.com/docker-shared-permissions/
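Since the question uses docker-compose rather than docker run, roughly the same thing can be done with build arguments (a sketch, assuming the ARG names from the snippet above and the existing prestashop service; replace 1000 with the output of id -u and id -g on your host):
prestashop:
  build:
    context: ./php
    args:
      USER_ID: 1000
      GROUP_ID: 1000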
Here is what I've done and it worked (not sure if this is good practice):
chown -R <hostuser>:<hostuser> <project root>
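For example, run from the project root on the host (a sketch; sudo may or may not be needed depending on who currently owns the files):
sudo chown -R "$(id -un):$(id -gn)" .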
I am using Docker with the open source BI tool Apache Superset. I have added a new file, specifically a .geojson file, in the CountryMap directory. Now, when I build using docker-compose up --build or make changes in the frontend, the running containers are not fully updated, and I get a file-not-found error when trying to run a query. Yet when I look inside the container via docker exec -it container_id bash, the new file is there.
Dockerfile:
FROM python:3.6-jessie
RUN useradd --user-group --create-home --no-log-init --shell /bin/bash superset
# Configure environment
ENV LANG=C.UTF-8 \
LC_ALL=C.UTF-8
RUN apt-get update -y
# Install dependencies to fix the `curl https support` error and the `delaying package configuration` warning
RUN apt-get install -y apt-transport-https apt-utils
# Install superset dependencies
# https://superset.incubator.apache.org/installation.html#os-dependencies
RUN apt-get install -y build-essential libssl-dev \
libffi-dev python3-dev libsasl2-dev libldap2-dev libxi-dev
# Install extra useful tool for development
RUN apt-get install -y vim less postgresql-client redis-tools
# Install nodejs for custom build
# https://superset.incubator.apache.org/installation.html#making-your-own-build
# https://nodejs.org/en/download/package-manager/
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - \
&& apt-get install -y nodejs
WORKDIR /home/superset
COPY requirements.txt .
COPY requirements-dev.txt .
COPY contrib/docker/requirements-extra.txt .
RUN pip install --upgrade setuptools pip \
&& pip install -r requirements.txt -r requirements-dev.txt -r requirements-extra.txt \
&& rm -rf /root/.cache/pip
RUN pip install gevent
COPY --chown=superset:superset superset superset
ENV PATH=/home/superset/superset/bin:$PATH \
PYTHONPATH=/home/superset/superset/:$PYTHONPATH
USER superset
RUN cd superset/assets \
&& npm ci \
&& npm run build \
&& rm -rf node_modules
COPY contrib/docker/docker-init.sh .
COPY contrib/docker/docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
HEALTHCHECK CMD ["curl", "-f", "http://localhost:8088/health"]
EXPOSE 8088
docker-compose.yml:
version: '2'
services:
  redis:
    image: redis:3.2
    restart: unless-stopped
    ports:
      - "127.0.0.1:6379:6379"
    volumes:
      - redis:/data
  postgres:
    image: postgres:10
    restart: unless-stopped
    environment:
      POSTGRES_DB: superset
      POSTGRES_PASSWORD: superset
      POSTGRES_USER: superset
    ports:
      - "127.0.0.1:5432:5432"
    volumes:
      - postgres:/var/lib/postgresql/data
  superset:
    build:
      context: ../../
      dockerfile: contrib/docker/Dockerfile
    restart: unless-stopped
    environment:
      POSTGRES_DB: superset
      POSTGRES_USER: superset
      POSTGRES_PASSWORD: superset
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      REDIS_HOST: redis
      REDIS_PORT: 6379
      # If using production, comment development volume below
      #SUPERSET_ENV: production
      SUPERSET_ENV: development
      # PYTHONUNBUFFERED: 1
    user: root:root
    ports:
      - 8088:8088
    depends_on:
      - postgres
      - redis
    volumes:
      # this is needed to communicate with the postgres and redis services
      - ./superset_config.py:/home/superset/superset/superset_config.py
      # this is needed for development, remove with SUPERSET_ENV=production
      - ../../superset:/home/superset/superset
volumes:
  postgres:
    external: false
  redis:
    external: false
Why is there a not found error?
Try using absolute paths in volumes:
volumes:
  - /home/me/my_project/superset_config.py:/home/superset/superset/superset_config.py
  - /home/me/my_project/superset:/home/superset/superset
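If you'd rather not hard-code the path, compose also interpolates environment variables, so something like this (a sketch, assuming you run docker-compose from the project root so that $PWD points at it) works as well:
volumes:
  - ${PWD}/superset_config.py:/home/superset/superset/superset_config.py
  - ${PWD}/superset:/home/superset/superset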
It is because docker-compose is using its cache: if the Dockerfile and the docker-compose.yml have not changed, it does not recreate the containers. To avoid this you should use the following flag:
--force-recreate
    Recreate containers even if their configuration and image haven't changed.
For development purposes I like to use the following switch as well:
-V, --renew-anon-volumes
    Recreate anonymous volumes instead of retrieving data from the previous containers.
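Putting it together, a typical development invocation would be something like:
docker-compose up -d --build --force-recreate --renew-anon-volumes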
Hi, I have two docker-compose files, but neither of them runs properly; I run into the error shown in the attached screenshot. Can anybody tell me how to fix it? Here is the content of my compose files.
(COMPOSE FILE 1)
db:
  build: ./mysql
  volumes:
    - /opt/containers/personal/mysql:/var/lib/mysql
web:
  build: ./web
  ports:
    - 80:80
  volumes:
    - /opt/containers/personal/php:/var/www/html
  links:
    - db:db
(COMPOSE FILE 2)
version: "3"
services:
nginx:
build: ./nginx
ports:
- 80:80
- 443:443
volumes:
- /opt/containers/personal/nginx/certs:/certs
depends_on:
- web
networks:
- webdbnet
web:
build: ./web
volumes:
# Example of host volume mounted in container
# - /opt/containers/personal/php:/var/www/html
# Example of docker volume mounted in container
- web-data:/var/www/html
networks:
- webdbnet
db:
# build: ./mysql
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
volumes:
- /opt/containers/personal/mysql:/var/lib/mysql
networks:
- webdbnet
networks:
webdbnet:
volumes:
web-data:
(AND THIS IS THE CONTENT OF MY BUILD DOCKERFILE)
FROM php:7-apache
RUN apt-get update && apt-get install -y \
        libmcrypt-dev \
        libfreetype6-dev \
        libjpeg-dev \
        libpng-dev \
    && a2enmod rewrite expires \
    && pecl install mcrypt-1.0.1 \
    && docker-php-ext-install gd mysqli opcache iconv \
    && docker-php-ext-configure gd \
        --with-freetype-dir=/usr/include/ \
        --with-jpeg-dir=/usr/include/ \
        --with-png-dir=/usr/include/ \
    && docker-php-ext-enable mcrypt mysqli
COPY index.html /var/www/html/
COPY index.php /var/www/html/
Sorry for my bad English.
It looks like you are running the wrong version of PHP for mcrypt.
Try replacing
FROM php:7-apache
with
FROM php:7.2.14-apache-stretch
in your Dockerfile.
php:7-apache takes you to the latest version, which is currently 7.3.1, and mcrypt seems to want 7.2.*.
When you use the official PHP image and, as in this example, install mcrypt with pecl install mcrypt-1.0.1, you will also need to add a line that enables the extension:
RUN pecl install mcrypt
RUN echo "extension=mcrypt.so" >> /usr/local/etc/php/conf.d/docker-php-ext-intl.ini
Only extensions installed via docker-php-ext-install are enabled without adding an extension=*.so line; an extension installed via pecl has to be enabled yourself.
I hope this helps.
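As a side note, the official PHP images also ship the docker-php-ext-enable helper, which writes that ini line for you, so an equivalent sketch of the pecl step would be:
RUN pecl install mcrypt-1.0.1 \
    && docker-php-ext-enable mcrypt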