I tried to learn an (admittedly a bit outdated) form of linking containers. I created an NGINX container and a PHP container, which should be linked. Everything runs on my local machine.
Dockerfile NGINX
FROM ubuntu:16.04
MAINTAINER Sebastian Scharf
# Install NGINX
RUN apt-get update && apt-get install -y nginx \
# Clean after apt-get
&& apt-get clean \
## remove content from apt/lists and var/tmp
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
## set daemon off
&& echo "daemon off;" >> /etc/nginx/nginx.conf
ADD default /etc/nginx/sites-available/default
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
CMD ["nginx"]
Dockerfile PHP
FROM ubuntu:16.04
MAINTAINER Sebastian Scharf
RUN apt-get update \
&& apt-get install -y locales \
&& locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
RUN apt-get update \
&& apt-get install -y curl zip unzip git software-properties-common \
&& add-apt-repository -y ppa:ondrej/php \
&& apt-get update \
&& apt-get install -y php7.0-fpm php7.0-cli php7.0-mcrypt php7.0-gd php7.0-mysql \
php7.0-pgsql php7.0-imap php-memcached php7.0-mbstring php7.0-xml php7.0-curl \
&& php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& mkdir /run/php \
&& apt-get remove -y --purge software-properties-common \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ADD php-fpm.conf /etc/php/7.0/fpm/php-fpm.conf
ADD www.conf /etc/php/7.0/fpm/pool.d/www.conf
EXPOSE 9000
CMD ["php-fpm7.0"]
NGINX CONFIG
server {
listen 8080 default_server;
root /var/www/html/public;
index index.html index.htm index.php;
server_name _;
charset utf-8;
location = /favicon.ico { log_not_found off; access_log off; }
location = /robots.txt { log_not_found off; access_log off; }
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass php:9000;
}
error_page 404 /index.php;
location ~ /\.ht {
deny all;
}
}
When I call localhost:8080, I get the error: This page isn't working (localhost didn't send any data). I was expecting to see a test file.
This is how I started the containers and linked them:
docker run -d --name=myphp -v $(pwd)/application:/var/www/html retronexus/php:0.1.0
docker run -d --link=myphp:php -p 8080:80 -v $(pwd)/application:/var/www/html retronexus/nginx:0.2.0
Nginx in your container listens on port 8080, but you are binding port 80 from the container to 8080 on the host. Switch to binding 8080 from the container.
docker run -d \
--link=myphp:php \
-p 8080:8080 \
-v $(pwd)/application:/var/www/html \
retronexus/nginx:0.2.0
See here for more details: https://docs.docker.com/config/containers/container-networking/#published-ports
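A quick way to confirm the mapping after the change (a sketch; the container name mynginx is hypothetical, so add --name=mynginx to the run command or substitute the actual name):

```shell
# Show the port nginx actually publishes on the host
docker port mynginx
# 8080/tcp -> 0.0.0.0:8080 would confirm the mapping
curl -I http://localhost:8080
# any HTTP status line means nginx is now reachable
```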
I have the app deployed in one docker container:
Frontend - VueJS (served by Nginx)
Backend - Flask (gunicorn)
Dockerfile:
# Builder
FROM node:10-alpine as builder
WORKDIR /vue-ui
COPY ./frontend/package*.json ./
RUN npm install
COPY ./frontend .
RUN npm run build
#Production container
FROM nginx:alpine as production-build
WORKDIR /app
RUN apk update && apk add --no-cache python3 && \
python3 -m ensurepip && \
rm -r /usr/lib/python*/ensurepip && \
pip3 install --upgrade pip setuptools && \
if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi && \
if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi && \
rm -r /root/.cache
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev libxslt-dev libffi-dev
COPY ./.nginx/nginx.conf /etc/nginx/nginx.conf
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /vue-ui/dist /usr/share/nginx/html
COPY ./backend/requirements.txt .
RUN pip install -r requirements.txt
RUN pip install gunicorn
COPY ./backend .
ENV DB_URL_EXTERNAL=postgres://logpasstodb/maindb
#ENV DB_URL_EXTERNAL=$DB_URL_EXTERNAL
EXPOSE 80 5432
#ENTRYPOINT ["nginx", "-g", "daemon off;"]
CMD gunicorn --chdir / -b 0.0.0.0:5000 app:create_app --daemon && \
nginx -g 'daemon off;'
When the frontend tries to send a request to the Flask app, I get the following:
POST http://localhost:5000/login net::ERR_CONNECTION_REFUSED
The Axios baseURL is set to axios.defaults.baseURL = 'http://localhost:5000/';. The issue is the same if it's changed to the Docker-internal host 172.17.0.2.
The odd thing is that if I run the backend directly on my local machine, where the Docker container is deployed, the frontend can connect to it successfully.
nginx.conf:
worker_processes 4;
events { worker_connections 1024; }
http {
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
include /etc/nginx/mime.types;
try_files $uri $uri/ /index.html;
}
}
}
Could you please suggest how to set the baseURL correctly, or could the problem be with the Nginx config?
Alternatively, you could add an extra mapping to your NGINX config:
location /api/ {
proxy_pass http://localhost:5000;
}
And then set the baseURL to localhost:80/api/.
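One detail worth hedging: nginx requires a scheme in proxy_pass, and a trailing slash changes how the matched prefix is forwarded. A sketch, assuming gunicorn listens on port 5000 inside the same container:

```nginx
location /api/ {
    # with the trailing slash, the matched /api/ prefix is stripped,
    # so a request to /api/login is forwarded upstream as /login
    proxy_pass http://localhost:5000/;
}
```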
You have exposed ports 80 and 5432, but not port 5000, which the backend application listens on.
Expose port 5000 and set baseUrl to :5000
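Note that EXPOSE only documents a port; it does not publish it. Since gunicorn in the Dockerfile above already binds 0.0.0.0:5000, publishing the port at run time should be all that's missing. A sketch (the image name myapp is a placeholder):

```shell
# Publish both nginx (80) and the Flask backend (5000) to the host
docker build -t myapp .
docker run -d -p 8080:80 -p 5000:5000 myapp
# the Axios baseURL would then be http://localhost:5000/
```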
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -yq --no-install-recommends \
apt-utils \
curl \
# Install git
git \
# Install apache
apache2 \
# Install php 7.2
libapache2-mod-php7.2 \
php7.2-cli \
php7.2-json \
php7.2-curl \
php7.2-fpm \
php7.2-gd \
php7.2-ldap \
php7.2-mbstring \
php7.2-mysql \
php7.2-soap \
php7.2-sqlite3 \
php7.2-xml \
php7.2-zip \
php7.2-intl \
php-imagick \
# Install tools
openssl \
nano \
graphicsmagick \
imagemagick \
ghostscript \
mysql-client \
iputils-ping \
locales \
sqlite3 \
ca-certificates \
&& apt-get clean && rm -f /var/www/html/index.html && rm -rf /var/lib/apt/lists/**
ENV LANG en_US.utf8
RUN groupadd --gid 5000 newuser \
&& useradd --home-dir /home/newuser --create-home --uid 5000 \
--gid 5000 --shell /bin/sh --skel /dev/null newuser
WORKDIR /var/www/html
COPY index.php /var/www/html
EXPOSE 80
HEALTHCHECK --interval=5s --timeout=3s --retries=3 CMD curl -f http://localhost || exit 1
CMD ["apachectl", "-D", "FOREGROUND"]
USER newuser
The error I get:
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
Action '-D FOREGROUND' failed.
The Apache error log may have more information.
As @Henry wrote:
A non-root user cannot bind to ports below 1024. Use a higher port, e.g. 8080.
I suggest you change the Apache port and, if you need to access Apache from the host, publish the container's port 8080 with Docker.
e.g.
docker build -t myapacheimg .
docker run -it --rm -p 8080:8080 myapacheimg
In order to have this stuff working you need to perform the following operations:
change the ports in the /etc/apache2/ports.conf
change the virtualhost in the /etc/apache2/sites-enabled/000-default.conf
change the ownership of the /var/log/apache2 and /var/run/apache2 folders
In other words, here's an excerpt of the Dockerfile:
...
&& apt-get clean && rm -f /var/www/html/index.html && rm -rf /var/lib/apt/lists/**
COPY ./ports.conf /etc/apache2/ports.conf
COPY ./000-default.conf /etc/apache2/sites-enabled/000-default.conf
ENV LANG en_US.utf8
RUN groupadd --gid 5000 newuser \
&& useradd --home-dir /home/newuser --create-home --uid 5000 \
--gid 5000 --shell /bin/sh --skel /dev/null newuser
RUN chown -R newuser /var/log/apache2 /var/run/apache2
...
ports.conf
Listen 8080
000-default.conf:
<VirtualHost *:8080>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
IMHO this is an ugly solution. I'd use the docker image php:7.2-apache and avoid all these problems.
A non-root user cannot bind to ports below 1024. Use a higher port, e.g. 8080.
It can also be done without changing the port, by using the setcap command to modify the Linux capabilities.
For example, if you wish to use the default port 80 for apache2, then your Dockerfile will look something like this:
[...]
RUN apt-get update \
&& apt-get install -y libcap2-bin \
&& setcap 'cap_net_bind_service=+ep' /usr/sbin/apache2 \
&& chown www-data:www-data /var/log/apache2
USER www-data
[...]
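To verify the capability was actually applied, getcap (shipped in the same libcap2-bin package) can be run against the binary; a sketch, with myapacheimg as a placeholder image name:

```shell
# Inspect the file capability inside the built image
docker run --rm myapacheimg getcap /usr/sbin/apache2
# the output should list cap_net_bind_service for /usr/sbin/apache2
```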
Good day.
I have been working with Docker recently and have run into the following problem:
The administrative part of the site works fine, but the public part returns a 502 Bad Gateway error.
Here is my Docker setup:
docker-compose.yml (root folder):
version: '3'
services:
nginx:
image: nginx:1.14
ports:
- "${NGINX_HOST}:${NGINX_PORT}:80"
volumes:
- ./:/var/www/html:cached
- ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
depends_on: [php]
env_file: .env
restart: always
php:
build:
context: docker/php
args:
TIMEZONE: ${TIMEZONE}
volumes:
- ./:/var/www/html:cached
- ./docker/php/php.ini:/usr/local/etc/php/php.ini:cached
- ./docker/php/php-fpm.conf:/usr/local/etc/php-fpm.d/99-custom.conf:cached
user: "${DOCKER_UID}"
env_file: .env
restart: always
docker-compose.override.yml (root folder):
version: '3'
services:
markup:
build: docker/markup
user: "${DOCKER_UID}"
ports:
- "${MARKUP_PORT}:4000"
volumes:
- ./:/app:cached
command: bash -c "npm install --no-save && bower install && gulp external"
environment:
NODE_ENV: "${MARKUP_ENV}"
.env file (root folder):
COMPOSE_PROJECT_NAME=nrd
# "docker-compose.yml:docker-compose.prod.yml" for prod
# "docker-compose.yml:docker-compose.stage.yml" for stage
COMPOSE_FILE=
DOCKER_UID=1000
DOCKER_ADDRESS=172.17.0.1
# "develop" or "production"
MARKUP_ENV=develop
MARKUP_PORT=4000
# 0.0.0.0 for external access
NGINX_HOST=127.0.0.1
NGINX_PORT=80
TIMEZONE=Europe/Moscow
nginx.conf (folder docker/nginx/):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log off;
error_log /dev/null crit;
client_max_body_size 1024M;
client_header_buffer_size 4k;
large_client_header_buffers 4 8k;
sendfile on;
keepalive_timeout 65;
map $http_x_forwarded_proto $fastcgi_https {
default off;
https on;
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
root /var/www/html;
listen 80;
server_name _;
access_log off;
error_log /dev/null crit;
charset utf-8;
location ~ ^/(\.git|log|docker|\.env) {
return 404;
}
location ~ \.php$ {
fastcgi_pass php:9000;
fastcgi_read_timeout 7200;
fastcgi_send_timeout 72000;
fastcgi_buffer_size 32k;
fastcgi_buffers 16 32k;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS $fastcgi_https if_not_empty;
fastcgi_param REMOTE_ADDR $http_x_forwarded_for;
include fastcgi_params;
if (!-f $request_filename) {
rewrite ^(.*)/index.php$ $1/ redirect;
}
}
location / {
index index.php index.html index.htm;
if (!-e $request_filename){
rewrite ^(.*)$ /bitrix/urlrewrite.php last;
}
}
location ~ /\.ht {
deny all;
}
location ~* ^.+\.(xml|html|jpg|jpeg|gif|ttf|eot|swf|svg|png|ico|mp3|css|woff2?|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|dat|avi|ppt|txt|tar|mid|midi|wav|bmp|rtf|wmv|mpeg|mpg|mp4|m4a|spx|ogx|ogv|oga|webm|weba|ogg|tbz|js)$ {
expires 30d;
access_log off;
}
}
}
Dockerfile (folder docker/php):
# See https://github.com/docker-library/php/blob/master/7.1/fpm/Dockerfile
FROM php:7.1-fpm
ARG TIMEZONE
RUN apt-get update && apt-get install -y \
openssl \
git \
zlibc \
zlib1g \
zlib1g-dev \
libfreetype6-dev \
libssl-dev \
libjpeg62-turbo-dev \
libmemcached-dev \
libmagickwand-dev \
libmcrypt-dev \
libpng-dev \
libicu-dev \
unzip
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN mkdir /.composer/ && chmod 777 /.composer/
# Set timezone
RUN ln -snf /usr/share/zoneinfo/${TIMEZONE} /etc/localtime && echo ${TIMEZONE} > /etc/timezone
RUN printf '[PHP]\ndate.timezone = "%s"\n' ${TIMEZONE} > /usr/local/etc/php/conf.d/tzone.ini
RUN "date"
# Type docker-php-ext-install to see available extensions
RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/
RUN docker-php-ext-install pdo_mysql gd
# Install memcached extension
RUN apt-get update \
&& apt-get install -y libmemcached11 libmemcachedutil2 build-essential libmemcached-dev libz-dev \
&& pecl install memcached \
&& echo extension=memcached.so >> /usr/local/etc/php/conf.d/memcached.ini \
&& apt-get remove -y build-essential libmemcached-dev libz-dev \
&& apt-get autoremove -y \
&& apt-get clean \
&& rm -rf /tmp/pear
RUN docker-php-ext-install intl
RUN docker-php-ext-install opcache
RUN docker-php-ext-install soap
RUN apt-get install -y --no-install-recommends default-libmysqlclient-dev
RUN apt-get install -y \
libzip-dev \
zip \
&& docker-php-ext-configure zip --with-libzip \
&& docker-php-ext-install zip
RUN docker-php-ext-install mysqli
WORKDIR /var/www/html
CMD ["php-fpm"]
php.ini (folder docker/php):
[PHP]
short_open_tag = On
upload_max_filesize = 200M
post_max_size = 250M
display_errors = Off
memory_limit = 1024M
max_execution_time = 60
[mbstring]
mbstring.internal_encoding = UTF-8
mbstring.func_overload = 2
php-fpm.conf (folder docker/php):
[www]
pm = dynamic
pm.max_children = 50
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 10
access.format = "%{REMOTE_ADDR}e - %u %t \"%m %{REQUEST_URI}e%Q%q\" %s %{miliseconds}dms %{megabytes}MM %C%%"
;slowlog = /proc/self/fd/2
;request_slowlog_timeout = 2s
php_admin_value[error_log] = /proc/self/fd/2
php_admin_flag[log_errors] = on
Dockerfile (folder docker/markup):
FROM node:10.13
RUN npm install -g gulp bower
RUN echo '{ "allow_root": true }' > /root/.bowerrc
RUN mkdir /.npm && mkdir /.config && mkdir /.cache && mkdir /.local && chmod 777 /.npm && chmod 777 /.config && chmod 777 /.cache && chmod 777 /.local
EXPOSE 4000
WORKDIR /app/markup
Please help, I do not understand what the matter is. Everything worked fine; I did not touch the settings or change any passwords anywhere, and the public part of my local project just started returning 502 errors.
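A first debugging step, sketched with the service names from the compose file above: the nginx config discards errors (error_log /dev/null crit;), so temporarily re-enabling logging and checking that the php service is reachable from nginx usually narrows a 502 down.

```shell
# Confirm the php service name resolves on the compose network
docker-compose exec nginx getent hosts php
# After removing the error_log /dev/null overrides from nginx.conf,
# reproduce the 502 while following both services' logs
docker-compose logs -f nginx php
```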
Using Docker for Windows.
I was trying to follow this article, but many things look different nowadays, so I ended up improvising. I'd like to simply connect my frontend with my backend, avoiding the CORS headache (although now I'm not sure which headache is the bigger one). Anyway, I defined my load balancer this way:
app.json:
{
"name": "NeuroCore Load Balancer",
"description": "A load balancer for NeuroCore"
}
Procfile:
web: sbin/haproxy -f haproxy.cfg
Dockerfile:
FROM heroku/heroku:18
RUN mkdir -p /app/user
WORKDIR /app/user
# Install HAProxy
RUN apt-get update && apt-get install -y libssl1.0.0 libpcre3 --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV HAPROXY_MAJOR 1.5
ENV HAPROXY_VERSION 1.5.14
ENV HAPROXY_MD5 ad9d7262b96ba85a0f8c6acc6cb9edde
# see http://sources.debian.net/src/haproxy/1.5.8-1/debian/rules/ for some helpful navigation of the possible "make" arguments
RUN buildDeps='curl gcc libc6-dev libpcre3-dev libssl-dev make' \
&& set -x \
&& apt-get update && apt-get install -y $buildDeps --no-install-recommends && rm -rf /var/lib/apt/lists/* \
&& curl -SL "http://www.haproxy.org/download/${HAPROXY_MAJOR}/src/haproxy-${HAPROXY_VERSION}.tar.gz" -o haproxy.tar.gz \
&& echo "${HAPROXY_MD5} haproxy.tar.gz" | md5sum -c \
&& mkdir -p /app/user/src/haproxy \
&& tar -xzf haproxy.tar.gz -C /app/user/src/haproxy --strip-components=1 \
&& rm haproxy.tar.gz \
&& make -C /app/user/src/haproxy \
TARGET=linux2628 \
USE_PCRE=1 PCREDIR= \
USE_OPENSSL=1 \
USE_ZLIB=1 \
PREFIX=/app/user \
all \
install-bin \
&& rm -rf /app/user/src/haproxy \
&& apt-get purge -y --auto-remove $buildDeps
COPY haproxy.cfg /app/user/haproxy.cfg
haproxy.cfg:
global
maxconn 256
defaults
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http
bind 0.0.0.0:$PORT
option forwardfor
# Force SSL
redirect scheme https code 301 if ! { hdr(x-forwarded-proto) https }
# And all other requests to `example-com`.
default_backend neurocore-backend
backend neurocore-backend
http-request set-header X-Forwarded-Host neurocore.herokuapp.com
http-request set-header X-Forwarded-Port %[dst_port]
reqirep ^Host: Host:\ neurocore-backend.herokuapp.com
server backend neurocore-backend.herokuapp.com:443 ssl verify none
After running docker-compose up web I get:
ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
As far as I can understand, this file should be generated by the heroku docker:init command, but that doesn't work nowadays. So, what should I do to get my app up and running?
I'm setting up a Docker stack for a Symfony 4 application with Nginx and PHP 7. I need advice on my Docker stack because I've hit a problem: changes to a PHP file (a controller, a repository, an entity, etc.) aren't displayed. Each time, I need to bring my Docker stack down and restart it to see the changes, even for a simple dump().
I verified that OPcache is enabled and configured as in the Symfony documentation.
I think the problem is in my Docker stack.
This is my docker-compose.yml:
version: '2.2'
services:
# PHP
php:
build:
context: docker/php7-fpm
args:
TIMEZONE: ${TIMEZONE}
container_name: dso_php
volumes:
- ".:/var/www/myproject:rw,cached"
- "./docker/php7-fpm/www.conf:/usr/local/etc/php-fpm.d/www.conf"
- "./docker/php7-fpm/php.ini:/usr/local/etc/php/conf.d/030-custom.ini"
env_file:
- .env
working_dir: /var/www/myproject
# NGINX
nginx:
build:
context: docker/nginx
args:
NGINX_HOST: ${NGINX_HOST}
container_name: dso_nginx
ports:
- 80:80
depends_on:
- php
volumes:
- ".:/var/www/myproject:cached"
- ./logs/nginx/:/var/log/nginx
env_file:
- .env
environment:
- NGINX_HOST=${NGINX_HOST}
I built my own Dockerfiles for PHP and Nginx.
First PHP; here is the Dockerfile:
FROM php:7.2-fpm
MAINTAINER HamHamFonFon <balistik.fonfon@gmail.com>
USER root
# Utils
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y curl less vim libpq-dev wget gnupg libicu-dev libpng-dev zlib1g-dev sudo wget \
&& docker-php-ext-install mysqli \
&& docker-php-ext-install pdo_mysql \
&& docker-php-ext-install intl \
&& docker-php-ext-install opcache \
&& docker-php-ext-install zip \
&& docker-php-ext-install gd
RUN apt-get install -y zip unzip
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
&& composer --version
# npm & node
RUN curl -sL https://deb.nodesource.com/setup_9.x | bash
RUN apt-get install -y nodejs npm \
&& update-alternatives --install /usr/bin/node node /usr/bin/nodejs 10
# build tools
RUN apt-get install -y build-essential
# yarn package manager
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
# Git
RUN apt-get install -y git
# bugfix: remove cmdtest to install yarn correctly.
RUN apt-get remove -y cmdtest
RUN apt-get update
RUN apt-get install -y yarn
# Clear archives in apt cache folder
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint
RUN chmod +x /usr/local/bin/docker-entrypoint
ENTRYPOINT ["docker-entrypoint"]
php.ini :
; General settings
date.timezone = Europe/Paris
xdebug.max_nesting_level=500
short_open_tag = Off
memory_limit="512M"
; Error reporting optimized for production (http://www.phptherightway.com/#error_reporting)
display_errors = Off
display_startup_errors = Off
error_reporting = E_ALL
log_errors = On
error_log = /var/log/php-app/error.log
apc.enable_cli = 1
# http://symfony.com/doc/current/performance.html
opcache.interned_strings_buffer = 16
opcache.memory_consumption = 256
opcache.max_accelerated_files = 20000
opcache.validate_timestamps=0
; maximum memory allocated to store the results
realpath_cache_size=4096K
; save the results for 10 minutes (600 seconds)
realpath_cache_ttl=600
And www.conf (I have removed all commented lines from this example):
[www]
user = site
listen = [::]:9000
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
php_admin_value[upload_max_filesize] = 50M
php_admin_value[post_max_size] = 50M
Now, for Nginx.
First, the Dockerfile:
FROM debian:jessie
ARG NGINX_HOST
MAINTAINER HamHamFonFon <balistik.fonfon@gmail.com>
# Install nginx
RUN apt-get update && apt-get install -y nginx
# Configure Nginx
ADD nginx.conf /etc/nginx/
ADD symfony.conf /etc/nginx/conf.d/
RUN sed "/server_name nginx_host;/c\ server_name ${NGINX_HOST};" -i /etc/nginx/conf.d/symfony.conf
RUN echo "upstream php-upstream { server php:9000; }" > /etc/nginx/conf.d/upstream.conf
RUN usermod -u 1000 www-data
# Run Nginx
CMD ["nginx"]
# Expose ports
EXPOSE 80
nginx.conf :
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 2048;
multi_accept on;
use epoll;
}
http {
server_tokens off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 15;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log off;
error_log off;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
open_file_cache max=100;
client_body_temp_path /tmp 1 2;
client_body_buffer_size 256k;
client_body_in_file_only off;
}
daemon off;
And finally, symfony.conf :
server {
listen 80;
listen [::]:80;
server_name nginx_host;
client_max_body_size 20M;
root /var/www/deep-space-objects/public;
location / {
try_files $uri /index.php$is_args$args;
}
location ~ ^/(index)\.php(/|$) {
fastcgi_pass php-upstream;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
internal;
}
location ~ \.php$ {
return 404;
}
error_log /var/log/nginx/symfony_error.log;
access_log /var/log/nginx/symfony_access.log;
}
In the Dockerfile, the sed command replaces "nginx_host" with the server name I declare in the .env file.
My problem looks like this one: Docker with Symfony 4 - Unable to see the file changes, but I have verified the OPcache configuration.
How can I check whether nginx and PHP communicate? Are there any bad things in my stack I can improve?
Thank you; I don't know where else to look.
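To the connectivity question at the end, a hedged sketch using the container names from the compose file above (dso_nginx, with the upstream host php on port 9000):

```shell
# From inside the nginx container, open a TCP connection to php-fpm;
# /dev/tcp is a bash feature, available in the debian:jessie base image
docker exec dso_nginx bash -c '(exec 3<>/dev/tcp/php/9000) && echo "php-fpm reachable"'
# Also check the error log declared in symfony.conf
docker exec dso_nginx tail -n 20 /var/log/nginx/symfony_error.log
```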