Nginx docker communication to php container - connection refused - docker

New to Docker and to setting up nginx at this level. I'm having issues getting nginx to talk to the PHP container: I keep getting a 502 Bad Gateway when navigating, with the following errors from Docker:
web | 2018/03/21 09:25:40 [error] 6#6: *6 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: web, request: "GET / HTTP/1.1", upstream: "fastcgi://172.18.0.4:9000", host: "localhost:8080"
web | 172.18.0.1 - - [21/Mar/2018:09:25:40 +0000] "GET / HTTP/1.1" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36" "-"
web | 2018/03/21 09:25:40 [error] 6#6: *6 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: web, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://172.18.0.4:9000", host: "localhost:8080", referrer: "http://localhost:8080/"
web | 172.18.0.1 - - [21/Mar/2018:09:25:40 +0000] "GET /favicon.ico HTTP/1.1" 502 575 "http://localhost:8080/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36" "-"
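For what it's worth, the upstream "fastcgi://172.18.0.4:9000" in these errors shows that nginx resolved the php hostname fine; the connection itself was refused, i.e. nothing is accepting connections on port 9000 in that container. Two quick host-side checks (just a sketch, using the container names from the compose file further down):
docker ps --filter name=php     # is the php container up, and is 9000 listed under PORTS?
docker logs --tail 50 php       # did php-fpm actually start, or did the startup script exit?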
My nginx conf:
server {
listen 80;
# listen 443 ssl;
server_name web;
charset utf-8;
client_max_body_size 10M;
root /usr/share/nginx/html/public;
index index.php index.html index.htm;
location / {
# try to serve file directly, fallback to rewrite
# see laravel docs On Nginx, the following directive in your site configuration will allow "pretty" URLs:
try_files $uri $uri/ /index.php?$query_string;
}
location ~ ^/.+\.php(/|$) {
include fastcgi_params;
fastcgi_pass php:9000;
fastcgi_param DB_HOST mariadb:33061;
fastcgi_param DB_DATABASE homestead;
fastcgi_param DEFAULT_HOME_URL $http_host;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_buffers 4 16k;
fastcgi_buffer_size 32k;
fastcgi_busy_buffers_size 32k;
}
}
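Unrelated to the 502, but worth noting: DB_HOST mariadb:33061 points at the port published on the host. From inside the Compose network, containers reach MariaDB on its container port, so (assuming the mariadb service name from the compose file below) the parameter would normally be:
fastcgi_param DB_HOST mariadb:3306;   # 33061 is only the host-side mapping of 3306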
and my docker compose yaml:
version: '3'
services:
  web:
    container_name: web
    build:
      context: ./nginx
    restart: always
    volumes:
      - "./nginx/insight.conf:/etc/nginx/conf.d/default.conf"
      - "/Users/lokisinclair/Projects/nps:/usr/share/nginx/html"
    ports:
      - "8080:80"
    links:
      - php
      - mariadb
  mailcatcher:
    container_name: mailcatcher
    image: yappabe/mailcatcher
    ports:
      - 1025:1025
      - 1080:1080
  php:
    container_name: php
    build:
      context: ./php
    restart: always
    volumes:
      - "/Users/lokisinclair/Projects/nps:/usr/share/nginx/html"
    ports:
      - "9000:9000" # PHP
      - "5000:5000" # ProcessTags.py
    links:
      - mariadb
      - mailcatcher
    environment:
      - DB_HOST=mariadb
      - DB_USER=homestead
      - DB_PASS=secret
  mariadb:
    container_name: db
    image: mariadb
    ports:
      - "33061:3306"
    restart: always
    volumes:
      - "./database:/var/lib/mysql"
      - "./mariadb:/tmp"
    command: mysqld --init-file="/tmp/init.sql"
    environment:
      - MYSQL_DATABASE=homestead
      - MYSQL_USER=homestead
      - MYSQL_PASSWORD=secret
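Note that the host port mappings here (8080:80, 9000:9000, 33061:3306) only affect access from the host; container-to-container traffic such as fastcgi_pass php:9000 goes over the Compose network using service names and container ports, so publishing 9000 isn't what lets nginx reach PHP. To see which network and IPs the containers ended up with (a sketch; the network name depends on your project directory):
docker network ls                                                 # look for <project>_default
docker inspect php --format '{{json .NetworkSettings.Networks}}'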
I've attempted to rewrite the nginx .conf several times using various guides, but nginx always responds with [XYZ] is not allowed here in..., which isn't overly helpful. The conf above works, but it just can't reach the PHP container. Can anyone shed some light, please?
Update
I've since installed telnet on the web container and attempted to connect to port 9000, but it's refused from the command line too.
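A refused connection from inside the web container usually means php-fpm isn't listening at all, rather than a DNS or firewall problem. A couple of checks inside the php container may narrow it down (a sketch; ss/netstat may not be installed in the image, but ps is available via procps in the Dockerfile below):
docker exec php ps aux | grep php-fpm                                  # is the FPM master process running?
docker exec php ss -lnt 2>/dev/null || docker exec php netstat -lnt   # is anything bound to :9000?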
php dockerfile
FROM php:7.0.0-fpm
LABEL maintainer="<REMOVED>"
RUN apt-get update && apt-get upgrade -y \
g++ \
libc-client-dev \
libfreetype6-dev \
libicu-dev \
libjpeg62-turbo-dev \
libkrb5-dev \
libpq-dev \
libmagickwand-dev \
libmcrypt-dev \
libpng-dev \
libmemcached-dev \
libssl-dev \
libssl-doc \
libsasl2-dev \
zlib1g-dev \
python3.4 \
python-pip \
libmysqlclient-dev \
python-dev \
procps \
&& docker-php-ext-install \
bz2 \
iconv \
mbstring \
mysqli \
pgsql \
pdo_mysql \
pdo_pgsql \
soap \
zip \
&& docker-php-ext-configure gd \
--with-freetype-dir=/usr/include/ \
--with-jpeg-dir=/usr/include/ \
--with-png-dir=/usr/include/ \
&& docker-php-ext-install gd \
&& docker-php-ext-configure intl \
&& docker-php-ext-install intl \
&& docker-php-ext-install sockets \
&& yes '' | pecl install imagick && docker-php-ext-enable imagick \
&& docker-php-ext-configure imap --with-kerberos --with-imap-ssl \
&& docker-php-ext-install imap \
&& pecl install memcached && docker-php-ext-enable memcached \
&& pecl install mongodb && docker-php-ext-enable mongodb \
&& pecl install redis && docker-php-ext-enable redis \
&& pecl install xdebug && docker-php-ext-enable xdebug \
&& apt-get autoremove -y --purge \
&& apt-get clean \
&& rm -Rf /tmp/* \
&& pip install nltk \
&& pip install MySQL-python
# Grab python NLTK libs
RUN python -m nltk.downloader -d /usr/local/share/nltk_data all
# Set working directory to root of Insight
WORKDIR "/usr/share/nginx/html"
ADD init.sh /tmp/init.sh
CMD chmod +x /tmp/init.sh
CMD /bin/bash /tmp/init.sh
# Expose network ports
EXPOSE 5000
EXPOSE 9000
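One thing that stands out here: a Dockerfile only honours the last CMD, so CMD chmod +x /tmp/init.sh is discarded and CMD /bin/bash /tmp/init.sh replaces the base image's default php-fpm command. Unless init.sh itself ends by starting php-fpm in the foreground, nothing will ever listen on 9000, which would explain the connection refused. A common pattern (just a sketch; the contents of init.sh aren't shown here):
# in the Dockerfile
COPY init.sh /tmp/init.sh
RUN chmod +x /tmp/init.sh
CMD ["/tmp/init.sh"]
# at the very end of init.sh, hand over to php-fpm so it keeps running in the foreground as the main process
exec php-fpm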

Related

Elastic beanstalk multicontainer backend not working

I have a Play application exposing a backend on port 9000, and a React.js front end wrapped within the same application which exposes the UI on port 3000. The whole application is deployed using this Dockerfile:
FROM hseeberger/scala-sbt:11.0.10_1.4.7_2.13.4
RUN apt-get --allow-releaseinfo-change update
RUN apt-get install -y unzip xvfb libxi6 libgconf-2-4 gnupg2
RUN apt-get update
#RUN apt-get clean
# Installing tools to build node packages
RUN apt-get update && apt-get install -y build-essential
#Installing docker-ce dependencies
RUN apt-get install -y ca-certificates gnupg-agent software-properties-common
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | \
tee -a /etc/apt/sources.list.d/docker.list
# Installing nodejs
RUN curl -fsSL https://deb.nodesource.com/setup_14.x | bash -
# Install g8, which unfortunately requires Java 8 to install
# https://github.com/foundweekends/giter8/issues/449
RUN apt-get install -y apt-transport-https ca-certificates wget dirmngr gnupg software-properties-common
RUN wget -qO - https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public | apt-key add -
RUN add-apt-repository -y https://adoptopenjdk.jfrog.io/adoptopenjdk/deb/
RUN apt-get update && apt install -y adoptopenjdk-8-hotspot
RUN PATH=/usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/bin:$PATH && curl https://raw.githubusercontent.com/foundweekends/conscript/master/setup.sh | sh && ~/.conscript/bin/cs foundweekends/giter8
RUN export PATH=/root/.conscript/bin:$PATH && g8 --version
RUN apt remove -y adoptopenjdk-8-hotspot
RUN add-apt-repository -r https://adoptopenjdk.jfrog.io/adoptopenjdk/deb/
# Install docker
RUN apt-get update && apt-get install -y docker-ce docker-ce-cli
RUN docker --version || true
# Install Node.js and npm
RUN apt-get update && apt-get install -y nodejs
RUN node --version || true
RUN npm --version || true
# Install protobuf & protoc-gen-grpc-web plugin
RUN apt-get install -y protobuf-compiler
RUN protoc --version || true
# jq for parsing config from secrets/cloudflow
RUN apt-get -y install jq
# Install kpt
RUN curl https://storage.googleapis.com/kpt-dev/latest/linux_amd64/kpt --output ./kpt
RUN chmod +x ./kpt && mv ./kpt /usr/bin
RUN kpt version
# Creating Home Directory in container
RUN mkdir -p /usr/src/app
# Setting Home Directory
WORKDIR /usr/src/app
# Copying src code to Container
COPY . /usr/src/app
# Compiling Scala Code
# RUN sbt compile
# Exposing Port
EXPOSE 3000
EXPOSE 9000
# Running Scala Application
CMD ["sbt", "clean", "compile", "run"]
I also have nginx with the following configuration:
upstream frontend {
server play:3000;
}
upstream backend {
server play:9000;
}
server {
listen 80;
location / {
proxy_pass http://frontend;
}
location /api {
client_max_body_size 200M;
client_body_buffer_size 200M;
proxy_pass http://backend;
}
}
It works fine locally when I run it with the following docker-compose:
version: '3'
services:
  play:
    build:
      dockerfile: Dockerfile
      context: .
    volumes:
      - .:/app
  nginx:
    depends_on:
      - play
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./nginx
    ports:
      - '80:80'
I have two containers running properly and the whole application works as expected:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eb5c29b5f361 danamex-app_nginx "/docker-entrypoint.…" 55 minutes ago Up 8 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp danamex-app_nginx_1
368ddb6f8fec danamex-app_play "sbt clean compile r…" 55 minutes ago Up 8 seconds 3000/tcp, 9000/tcp danamex-app_play_1
However, when I deploy the same application to Elastic Beanstalk using the same docker-compose, the Play application doesn't expose its ports:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e2186d4f1745 current_nginx "/docker-entrypoint.…" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, :::80->80/tcp current_nginx_1
96dccefba9a7 jeremycod/danamex "bin/danamex -Dpidfi…" About a minute ago Up About a minute current_play_1
I don't see anything in the EB logs. There is no indication that anything is wrong with the application.
play_1 | Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
play_1 | [info] play.api.Play - Application started (Prod) (no global state)
play_1 | [info] p.c.s.AkkaHttpServer - Listening for HTTP on /0.0.0.0:9000
nginx_1 | 2021/09/11 22:36:01 [error] 30#30: *43 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.xx.2.40, server: , request: "GET / HTTP/1.1", upstream: "http://172.19.0.2:3000/", host: "xxxxxxxxxx.us-west-2.elasticbeanstalk.com"
nginx_1 | xxx.xx.2.40 - - [11/Sep/2021:22:36:01 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36" "24.85.210.60"
nginx_1 | 2021/09/11 22:36:01 [error] 30#30: *43 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.xx.2.40, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://172.19.0.2:3000/favicon.ico", host: "xxxxxxxxx.us-west-2.elasticbeanstalk.com", referrer: "http://xxxxxxxxxx.us-west-2.elasticbeanstalk.com/"
I've tried changing the instance to t2.large in order to rule out running out of memory.
Any idea what could be the problem?
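Locally nginx and play only talk over the Compose network (the play service publishes no ports even in the working setup), so the empty PORTS column on EB is not by itself the problem; the 502 says nothing answered on play:3000. It may help to confirm on the EB host whether anything inside the play container is listening there (a sketch; the container names come from the docker ps output above, and ss/netstat may not exist in every image):
docker exec current_play_1 sh -c 'ss -lnt || netstat -lnt'   # is anything bound to 3000 and 9000?
docker logs --tail 100 current_play_1                        # the EB container runs bin/danamex rather than sbt run; did the frontend on 3000 ever start?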

Docker has an opened port, but it's not described in configuration files?

I have 2 Dockerfiles: one with nginx and another with PHP. The container with PHP exposes port 9000, but that port is not declared in docker-compose.yml or the Dockerfile.
What makes Docker open port 9000?
Has anyone come across this before? Any help is appreciated )))
Steps to reproduce
docker-compose.yml
version: "3.4"
services:
nginx:
build:
context: .
dockerfile: ./Dockerfile-Nginx
container_name: nginx
image: nginx
ports:
- "80:80"
- "443:443"
volumes:
- ./web:/var/www/vhosts
- ./config/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./config/nginx/common-nginx-host.conf:/etc/nginx/common-nginx-host.conf:ro
- ./config/nginx/csp.conf:/etc/nginx/csp.conf:ro
- ./config/nginx/expires-map.conf:/etc/nginx/expires-map.conf:ro
- ./config/nginx/mime.types:/etc/nginx/mime.types:ro
- ./config/nginx/sites-enabled:/etc/nginx/sites-enabled:ro
- ./logs/nginx:/var/log/nginx
- ./data/letsencrypt:/etc/letsencrypt
environment:
- TZ=Etc/UTC
links:
- php
networks:
- local
restart: always
php:
build:
context: .
dockerfile: ./Dockerfile-PHP7.4
container_name: php
image: php
volumes:
- ./web:/var/www/vhosts
- ./logs/php/7.4:/var/log/php
- ./config/php/7.4/www.conf:/usr/local/etc/php-fpm.d/www.conf:ro
environment:
- TZ=Etc/UTC
networks:
- local
restart: always
networks:
local:
driver: bridge
Dockerfile-Nginx
FROM ubuntu:latest as nginx
RUN \
apt-get -yqq update && \
apt-get -yqq install nginx && \
apt-get install -y tzdata && \
apt-get install -yqq --no-install-recommends apt-utils && \
apt-get -yqq install software-properties-common && \
apt-get -yqq install && \
LC_ALL=C.UTF-8 add-apt-repository -y ppa:ondrej/php && apt-get update && \
apt-get install -yqq php7.4 php7.4-cli php7.4-common php7.4-mysql && \
apt-get -yqq install letsencrypt && \
apt-get clean
WORKDIR /var/www/vhosts
CMD ["nginx", "-g", "daemon off;"]
Dockerfile-PHP7.4
FROM php:7.4-fpm as php-fpm-7.4
RUN \
apt-get update -yqq && \
apt-get install -yqq \
xvfb libfontconfig wkhtmltopdf \
git sass \
libpng-dev \
libjpeg-dev \
libzip-dev \
libfreetype6-dev \
libxrender1 \
libfontconfig1 \
libx11-dev \
libxtst6 \
zlib1g-dev \
imagemagick \
libmagickwand-dev \
libmagickcore-dev \
libmemcached-dev && \
pecl install memcached && \
yes '' | pecl install imagick && \
docker-php-ext-install gd && \
docker-php-ext-enable gd && \
docker-php-ext-enable imagick && \
docker-php-ext-enable memcached && \
docker-php-ext-configure pdo_mysql --with-pdo-mysql && \
docker-php-ext-install -j$(nproc) \
pdo_mysql \
zip
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
COPY config/ImageMagick-6/policy.xml /etc/ImageMagick-6/policy.xml
RUN ${TZ} > /etc/timezone
CMD ["php-fpm"]
Part of nginx config file
location /index.php {
# Rate limiting
limit_req zone=defaultlimit burst=40 nodelay;
limit_req_status 444;
limit_req_log_level warn;
fastcgi_split_path_info ^(.+.php)(.*)$;
set $fsn /index.php;
if (-f $document_root$fastcgi_script_name){
set $fsn $fastcgi_script_name;
}
#include snippets/fastcgi-php.conf;
fastcgi_pass php:9000;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fsn;
#PATH_INFO and PATH_TRANSLATED can be omitted, but RFC 3875 specifies them for CGI
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param PATH_TRANSLATED $document_root$fsn;
}
Docker opened port 9000
It's because it's exposed in the parent Docker image (php:7.4-fpm):
https://github.com/docker-library/php/blob/master/7.3/alpine3.12/fpm/Dockerfile#L233
Your PHP Dockerfile states FROM php:7.4-fpm, and that php:7.4-fpm image exposes port 9000.
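To confirm where an exposed port comes from, the image metadata can be inspected directly, e.g.:
docker image inspect php:7.4-fpm --format '{{json .Config.ExposedPorts}}'   # typically prints {"9000/tcp":{}}
docker history --no-trunc php:7.4-fpm | grep -i expose                      # shows the layer that declared EXPOSE 9000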

nginx 1.14.2 docker bad gateway response

Good day.
I have been working with Docker recently and ran into the following problem: the administrative part of the site works fine, but the public part gives a 502 Bad Gateway error.
Here are my Docker settings:
docker-compose.yml (root folder):
version: '3'
services:
  nginx:
    image: nginx:1.14
    ports:
      - "${NGINX_HOST}:${NGINX_PORT}:80"
    volumes:
      - ./:/var/www/html:cached
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
    depends_on: [php]
    env_file: .env
    restart: always
  php:
    build:
      context: docker/php
      args:
        TIMEZONE: ${TIMEZONE}
    volumes:
      - ./:/var/www/html:cached
      - ./docker/php/php.ini:/usr/local/etc/php/php.ini:cached
      - ./docker/php/php-fpm.conf:/usr/local/etc/php-fpm.d/99-custom.conf:cached
    user: "${DOCKER_UID}"
    env_file: .env
    restart: always
docker-compose.override.yml (root folder):
version: '3'
services:
  markup:
    build: docker/markup
    user: "${DOCKER_UID}"
    ports:
      - "${MARKUP_PORT}:4000"
    volumes:
      - ./:/app:cached
    command: bash -c "npm install --no-save && bower install && gulp external"
    environment:
      NODE_ENV: "${MARKUP_ENV}"
.env file (root folder):
COMPOSE_PROJECT_NAME=nrd
# "docker-compose.yml:docker-compose.prod.yml" for prod
# "docker-compose.yml:docker-compose.stage.yml" for stage
COMPOSE_FILE=
DOCKER_UID=1000
DOCKER_ADDRESS=172.17.0.1
# "develop" or "production"
MARKUP_ENV=develop
MARKUP_PORT=4000
# 0.0.0.0 for external access
NGINX_HOST=127.0.0.1
NGINX_PORT=80
TIMEZONE=Europe/Moscow
nginx.conf (folder docker/nginx/):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log off;
error_log /dev/null crit;
client_max_body_size 1024M;
client_header_buffer_size 4k;
large_client_header_buffers 4 8k;
sendfile on;
keepalive_timeout 65;
map $http_x_forwarded_proto $fastcgi_https {
default off;
https on;
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
root /var/www/html;
listen 80;
server_name _;
access_log off;
error_log /dev/null crit;
charset utf-8;
location ~ ^/(\.git|log|docker|\.env) {
return 404;
}
location ~ \.php$ {
fastcgi_pass php:9000;
fastcgi_read_timeout 7200;
fastcgi_send_timeout 72000;
fastcgi_buffer_size 32k;
fastcgi_buffers 16 32k;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS $fastcgi_https if_not_empty;
fastcgi_param REMOTE_ADDR $http_x_forwarded_for;
include fastcgi_params;
if (!-f $request_filename) {
rewrite ^(.*)/index.php$ $1/ redirect;
}
}
location / {
index index.php index.html index.htm;
if (!-e $request_filename){
rewrite ^(.*)$ /bitrix/urlrewrite.php last;
}
}
location ~ /\.ht {
deny all;
}
location ~* ^.+\.(xml|html|jpg|jpeg|gif|ttf|eot|swf|svg|png|ico|mp3|css|woff2?|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|dat|avi|ppt|txt|tar|mid|midi|wav|bmp|rtf|wmv|mpeg|mpg|mp4|m4a|spx|ogx|ogv|oga|webm|weba|ogg|tbz|js)$ {
expires 30d;
access_log off;
}
}
}
Dockerfile (folder docker/php):
# See https://github.com/docker-library/php/blob/master/7.1/fpm/Dockerfile
FROM php:7.1-fpm
ARG TIMEZONE
RUN apt-get update && apt-get install -y \
openssl \
git \
zlibc \
zlib1g \
zlib1g-dev \
libfreetype6-dev \
libssl-dev \
libjpeg62-turbo-dev \
libmemcached-dev \
libmagickwand-dev \
libmcrypt-dev \
libpng-dev \
libicu-dev \
unzip
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN mkdir /.composer/ && chmod 777 /.composer/
# Set timezone
RUN ln -snf /usr/share/zoneinfo/${TIMEZONE} /etc/localtime && echo ${TIMEZONE} > /etc/timezone
RUN printf '[PHP]\ndate.timezone = "%s"\n', ${TIMEZONE} > /usr/local/etc/php/conf.d/tzone.ini
RUN "date"
# Type docker-php-ext-install to see available extensions
RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/
RUN docker-php-ext-install pdo_mysql gd
# Install memcached extension
RUN apt-get update \
&& apt-get install -y libmemcached11 libmemcachedutil2 build-essential libmemcached-dev libz-dev \
&& pecl install memcached \
&& echo extension=memcached.so >> /usr/local/etc/php/conf.d/memcached.ini \
&& apt-get remove -y build-essential libmemcached-dev libz-dev \
&& apt-get autoremove -y \
&& apt-get clean \
&& rm -rf /tmp/pear
RUN docker-php-ext-install intl
RUN docker-php-ext-install opcache
RUN docker-php-ext-install soap
RUN apt-get install -y --no-install-recommends default-libmysqlclient-dev
RUN apt-get install -y \
libzip-dev \
zip \
&& docker-php-ext-configure zip --with-libzip \
&& docker-php-ext-install zip
RUN docker-php-ext-install mysqli
WORKDIR /var/www/html
CMD ["php-fpm"]
php.ini (folder docker/php):
[PHP]
short_open_tag = On
upload_max_filesize = 200M
post_max_size = 250M
display_errors = Off
memory_limit = 1024M
max_execution_time = 60
[mbstring]
mbstring.internal_encoding = UTF-8
mbstring.func_overload = 2
php-fpm.conf (folder docker/php):
[www]
pm = dynamic
pm.max_children = 50
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 10
access.format = "%{REMOTE_ADDR}e - %u %t \"%m %{REQUEST_URI}e%Q%q\" %s %{miliseconds}dms %{megabytes}MM %C%%"
;slowlog = /proc/self/fd/2
;request_slowlog_timeout = 2s
php_admin_value[error_log] = /proc/self/fd/2
php_admin_flag[log_errors] = on
Dockerfile (folder docker/markup):
FROM node:10.13
RUN npm install -g gulp bower
RUN echo '{ "allow_root": true }' > /root/.bowerrc
RUN mkdir /.npm && mkdir /.config && mkdir /.cache && mkdir /.local && chmod 777 /.npm && chmod 777 /.config && chmod 777 /.cache && chmod 777 /.local
EXPOSE 4000
WORKDIR /app/markup
Please help, I don't understand what the matter is. Everything was working fine; I didn't touch the settings anywhere and didn't change any passwords. The public part of my local project just started giving 502 errors.
Using Docker for Windows.
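With access_log off and error_log /dev/null crit in this nginx.conf, the reason for the 502 never gets logged, so a first step might be to turn the error log back on temporarily:
error_log /var/log/nginx/error.log warn;
and then look at the PHP side (a sketch, using the service names from the compose files above):
docker-compose logs --tail=50 php   # did php-fpm start under user ${DOCKER_UID}, or did it exit?
docker-compose exec php ps aux      # is the FPM master plus its workers actually running?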

Symfony 2.6 on Docker nginx giving 404

I'm currently working on a legacy app built with Symfony 2.6. (Originally the app runs on CentOS 7 on both dev and prod, but due to time constraints I just need to make it work locally so I can keep working on feature requests.)
I am adding Docker to the project as I am having problems managing dependencies on my local machine.
Setup:
ubuntu 18.04
nginx
docker 19.03
My current problem is:
- I am getting 404s on all routes I hit.
- I can see the requests in docker logs <container> -f (Update: these are the Apache logs).
Logs:
192.168.176.1 - - [01/Oct/2019:14:56:17 +0000] "GET /user HTTP/1.1" 404 209 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36"
192.168.176.1 - - [01/Oct/2019:14:57:51 +0000] "GET /user HTTP/1.1" 404 209 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36"
192.168.176.1 - - [02/Oct/2019:05:31:59 +0000] "GET /user HTTP/1.1" 404 209 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36"
//truncated
This is the folder structure:
|app
|bin
|docker
    |app
        default
        Dockerfile
        php-fpm.conf
        start-container.sh
        supervisord.conf
|src
|vendor
|web
    app.php
    app_dev.php
    .htaccess
    //some other stuff
.env
.gitignore
composer.json
composer.lock
docker-compose.yml
This is the ./web/.htaccess
DirectoryIndex app.php
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{REQUEST_URI}::$1 ^(/.+)/(.*)::\2$
RewriteRule ^(.*) - [E=BASE:%1]
RewriteCond %{HTTP:Authorization} .
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
RewriteCond %{ENV:REDIRECT_STATUS} ^$
RewriteRule ^app\.php(/(.*)|$) %{ENV:BASE}/$2 [R=301,L]
RewriteCond %{REQUEST_FILENAME} -f
RewriteRule .? - [L]
RewriteRule .? %{ENV:BASE}/app.php [L]
</IfModule>
<IfModule !mod_rewrite.c>
<IfModule mod_alias.c>
RedirectMatch 302 ^/$ /app.php/
</IfModule>
</IfModule>
This is the ./docker/app/default
server {
listen 80 default_server;
root /var/www/html/public;
index index.html index.htm index.php;
server_name _;
charset utf-8;
location = /favicon.ico { log_not_found off; access_log off; }
location = /robots.txt { log_not_found off; access_log off; }
location / {
try_files $uri $uri/ /app.php$is_args$args;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/run/php/php7.1-fpm.sock;
}
error_page 404 /app.php;
}
This is the ./docker/app/Dockerfile
FROM ubuntu:18.04
LABEL maintainer="My name"
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get install -y gnupg tzdata \
&& echo "Asia/Dubai" > /etc/timezone \
&& dpkg-reconfigure -f noninteractive tzdata \
&& apt-get install -y software-properties-common \
&& add-apt-repository -y ppa:ondrej/php
RUN apt-get update \
&& apt-get install -y curl zip unzip git supervisor sqlite3 make \
nginx php7.1-fpm php7.1-cli \
php7.1-pgsql php7.1-sqlite3 php7.1-gd \
php7.1-curl php7.1-memcached \
php7.1-imap php7.1-mysql php7.1-mbstring \
php7.1-xml php7.1-zip php7.1-bcmath php7.1-soap \
php7.1-intl php7.1-readline php7.1-xdebug \
php-msgpack php-igbinary \
&& php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& mkdir /run/php \
&& apt-get -y autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/\* /tmp/\* /var/tmp/\* \ //had to escape the * as it was commenting out the code in stackoverflow editor
&& echo "daemon off;" >> /etc/nginx/nginx.conf
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
ADD default /etc/nginx/sites-available/default
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
ADD php-fpm.conf /etc/php/7.1/fpm/php-fpm.conf
ADD start-container.sh /usr/bin/start-container
RUN chmod +x /usr/bin/start-container
ENTRYPOINT ["start-container"]
This is the ./docker-compose.yml
version: '3'
services:
  app:
    build: ./docker/app
    image: 'be/app:latest'
    networks:
      - appnet
    volumes:
      - './:/var/www/html:cached'
    ports:
      - '${APP_PORT}:80'
    working_dir: /var/www/html
  cache:
    image: 'redis:alpine'
    networks:
      - appnet
    volumes:
      - 'cachedata:/data'
  db:
    image: 'mysql:5.7'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_ROOT_PASSWORD}'
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USER}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
    ports:
      - '${DB_PORT}:3306'
    networks:
      - appnet
    volumes:
      - 'dbdata:/var/lib/mysql'
networks:
  appnet:
    driver: bridge
volumes:
  dbdata:
    driver: local
  cachedata:
    driver: local
Additional note:
I added Docker to another legacy project using Symfony 2.8, and it is working.
Using the same setup as mentioned in this question.
Any help is very much appreciated.
Thank you.
I'm not too familiar with nginx configs, but in my opinion root /var/www/html/public looks odd for old Symfony; it should be web instead of public.
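For reference, in Symfony 2.x the front controller is web/app.php rather than public/index.php, so with the project mounted at /var/www/html (as in the compose file above) the server block would normally look something like this (a sketch):
root /var/www/html/web;
index app.php;
location / {
    try_files $uri $uri/ /app.php$is_args$args;
}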

GET http://backend/api/countries net::ERR_NAME_NOT_RESOLVED

I am getting an error when making CORS calls to the backend:
GET http://backend/api/countries net::ERR_NAME_NOT_RESOLVED
I am using docker-compose for that:
docker-compose.yml
version: '2'
networks:
  minn_net:
services:
  backend:
    build: backend-symfony
    container_name: backend
    networks:
      - minn_net
    ports:
      - 81:80
    volumes:
      - ./backend-symfony/backend/var/logs-nginx:/var/log/nginx
      - ./backend-symfony/backend/:/var/www/html
      - ./backend-symfony/errors/:/var/www/errors
  db:
    image: mysql:5.7.19
    container_name: db
    networks:
      - minn_net
    ports:
      - 3306
    volumes:
      - "./.data/db:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:edge-4.7
    container_name: phpmyadmin
    networks:
      - minn_net
    ports:
      - 8080:80
    links:
      - db
  frontend:
    build: frontend-angular
    container_name: frontend
    networks:
      - minn_net
    links:
      - backend # I added links as extra: may be the frontend recognises the backend
    ports:
      - 88:80
    volumes:
      - ./frontend-angular/frontend2/dist:/var/www/frontend
      - ./frontend-angular/conf/docker/default.conf:/etc/nginx/conf.d/default.conf
      - ./frontend-angular/logs/nginx/:/var/log/nginx
config of nginx for the backend
server {
listen 80; ## listen for ipv4; this line is default and implied
listen [::]:80 default ipv6only=on; ## listen for ipv6
root /var/www/html/public;
index index.php;
# Make site accessible from http://localhost/
server_name _;
# Disable sendfile as per https://docs.vagrantup.com/v2/synced-folders/virtualbox.html
sendfile off;
# Add stdout logging
error_log /dev/stdout info;
access_log /dev/stdout;
# Add option for x-forward-for (real ip when behind elb)
#real_ip_header X-Forwarded-For;
#set_real_ip_from 172.16.0.0/12;
location / {
# Match host using a hostname if you like
#if ($http_origin ~* (https?://.*\.tarunlalwani\.com(:[0-9]+)?$)) {
# set $cors "1";
#}
set $cors "1";
# OPTIONS indicates a CORS pre-flight request
if ($request_method = 'OPTIONS') {
set $cors "${cors}o";
}
# OPTIONS (pre-flight) request from allowed
# CORS domain. return response directly
if ($cors = "1o") {
add_header 'Access-Control-Allow-Origin' '$http_origin' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE, PATCH' always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
add_header 'Access-Control-Allow-Headers' 'Origin, Content-Type, Accept, Lang, Authorization' always;
add_header Content-Length 0;
add_header Content-Type text/plain;
return 204;
}
add_header 'Access-Control-Allow-Headers' 'Content-Type,Authorization,Lang';
# add_header 'Access-Control-Allow-Headers' '*';
add_header 'Access-Control-Allow-Methods' 'POST,GET,PUT,DELETE,OPTIONS';
add_header 'Access-Control-Allow-Origin' '*';
try_files $uri /index.php$is_args$args;
}
location ~* \.(jpg|jpeg|gif|css|png|js|ico|html|eof|woff|ttf)$ {
add_header 'Access-Control-Allow-Headers' 'Content-Type,Authorization,Lang';
#add_header 'Access-Control-Allow-Headers' '*';
add_header 'Access-Control-Allow-Methods' 'POST,GET,PUT,DELETE,OPTIONS';
add_header 'Access-Control-Allow-Origin' '*';
if (-f $request_filename) {
expires 30d;
access_log off;
}
}
location ~ \.php$ {
add_header 'Access-Control-Allow-Headers' 'Content-Type,Authorization,Lang';
#add_header 'Access-Control-Allow-Headers' '*';
add_header 'Access-Control-Allow-Methods' 'POST,GET,PUT,DELETE,OPTIONS';
add_header 'Access-Control-Allow-Origin' '*';
fastcgi_pass unix:/var/run/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
Angular service that calls the backend
import { Injectable } from '@angular/core';
import { HttpClient, HttpErrorResponse, HttpHeaders } from '@angular/common/http';
// import { HttpClientModule } from '@angular/common/http';
// Grab everything with import 'rxjs/Rx';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/throw';
import { Observer } from 'rxjs/Observer';
import 'rxjs/add/operator/do';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/catch';
import * as _ from 'lodash';
import { ICountry } from '@app/shared/interfaces';
@Injectable()
export class DataService {
baseUrl = 'http://backend/api'; // it does not work
// baseUrl = 'http://172.18.0.4/api'; // it works perfectly
constructor(private http: HttpClient) { }
// private httpheadersGet = new HttpHeaders().set("Access-Control-Allow-Origin", "http://localhost:4200");
public getCountries(): Observable<ICountry[]> {
return (
this.http
.get<ICountry[]>(this.baseUrl + '/countries'/* , {"headers": this.httpheadersGet} */)
.do(console.log)
.map(data => _.values(data["hydra:member"]))
.catch(this.handleError)
);
}
}
Please note: if I set the backend IP address manually in the frontend (as can be seen in the Angular code), the CORS calls work perfectly. So, am I missing something in the config of the docker-compose.yml file?
PS: The network and the container names were even specified explicitly to avoid an incorrect container name in the Angular code.
Nginx Dockerfile for the frontend container
FROM debian:stretch-slim
LABEL maintainer="NGINX Docker Maintainers <docker-maint@nginx.com>"
ENV NGINX_VERSION 1.13.9-1~stretch
ENV NJS_VERSION 1.13.9.0.1.15-1~stretch
RUN set -x \
&& apt-get update \
&& apt-get install --no-install-recommends --no-install-suggests -y gnupg1 apt-transport-https ca-certificates \
&& \
NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; \
found=''; \
for server in \
ha.pool.sks-keyservers.net \
hkp://keyserver.ubuntu.com:80 \
hkp://p80.pool.sks-keyservers.net:80 \
pgp.mit.edu \
; do \
echo "Fetching GPG key $NGINX_GPGKEY from $server"; \
apt-key adv --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$NGINX_GPGKEY" && found=yes && break; \
done; \
test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; \
apt-get remove --purge --auto-remove -y gnupg1 && rm -rf /var/lib/apt/lists/* \
&& dpkgArch="$(dpkg --print-architecture)" \
&& nginxPackages=" \
nginx=${NGINX_VERSION} \
nginx-module-xslt=${NGINX_VERSION} \
nginx-module-geoip=${NGINX_VERSION} \
nginx-module-image-filter=${NGINX_VERSION} \
nginx-module-njs=${NJS_VERSION} \
" \
&& case "$dpkgArch" in \
amd64|i386) \
# arches officialy built by upstream
echo "deb https://nginx.org/packages/mainline/debian/ stretch nginx" >> /etc/apt/sources.list.d/nginx.list \
&& apt-get update \
;; \
*) \
# we're on an architecture upstream doesn't officially build for
# let's build binaries from the published source packages
echo "deb-src https://nginx.org/packages/mainline/debian/ stretch nginx" >> /etc/apt/sources.list.d/nginx.list \
\
# new directory for storing sources and .deb files
&& tempDir="$(mktemp -d)" \
&& chmod 777 "$tempDir" \
# (777 to ensure APT's "_apt" user can access it too)
\
# save list of currently-installed packages so build dependencies can be cleanly removed later
&& savedAptMark="$(apt-mark showmanual)" \
\
# build .deb files from upstream's source packages (which are verified by apt-get)
&& apt-get update \
&& apt-get build-dep -y $nginxPackages \
&& ( \
cd "$tempDir" \
&& DEB_BUILD_OPTIONS="nocheck parallel=$(nproc)" \
apt-get source --compile $nginxPackages \
) \
# we don't remove APT lists here because they get re-downloaded and removed later
\
# reset apt-mark's "manual" list so that "purge --auto-remove" will remove all build dependencies
# (which is done after we install the built packages so we don't have to redownload any overlapping dependencies)
&& apt-mark showmanual | xargs apt-mark auto > /dev/null \
&& { [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; } \
\
# create a temporary local APT repo to install from (so that dependency resolution can be handled by APT, as it should be)
&& ls -lAFh "$tempDir" \
&& ( cd "$tempDir" && dpkg-scanpackages . > Packages ) \
&& grep '^Package: ' "$tempDir/Packages" \
&& echo "deb [ trusted=yes ] file://$tempDir ./" > /etc/apt/sources.list.d/temp.list \
# work around the following APT issue by using "Acquire::GzipIndexes=false" (overriding "/etc/apt/apt.conf.d/docker-gzip-indexes")
# Could not open file /var/lib/apt/lists/partial/_tmp_tmp.ODWljpQfkE_._Packages - open (13: Permission denied)
# ...
# E: Failed to fetch store:/var/lib/apt/lists/partial/_tmp_tmp.ODWljpQfkE_._Packages Could not open file /var/lib/apt/lists/partial/_tmp_tmp.ODWljpQfkE_._Packages - open (13: Permission denied)
&& apt-get -o Acquire::GzipIndexes=false update \
;; \
esac \
\
&& apt-get install --no-install-recommends --no-install-suggests -y \
$nginxPackages \
gettext-base \
&& apt-get remove --purge --auto-remove -y apt-transport-https ca-certificates && rm -rf /var/lib/apt/lists/* /etc/apt/sources.list.d/nginx.list \
\
# if we have leftovers from building, let's purge them (including extra, unnecessary build deps)
&& if [ -n "$tempDir" ]; then \
apt-get purge -y --auto-remove \
&& rm -rf "$tempDir" /etc/apt/sources.list.d/temp.list; \
fi
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
EXPOSE 80
STOPSIGNAL SIGTERM
CMD ["nginx", "-g", "daemon off;"]
After installing dnsutils
root@123e38093010:/# nslookup backend
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: backend
Address: 172.18.0.3
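So the name resolves fine inside the Docker network; the catch is that the failing request is issued by the Angular app running in the browser on the host, where the service name backend means nothing. A likely fix (a sketch) is to point baseUrl at the port published for the backend (81:80 in the docker-compose.yml above), or at whatever public hostname fronts it in production:
baseUrl = 'http://localhost:81/api'; // host-published port of the backend; 'http://backend/api' only resolves inside the Docker network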
