Can't connect uWSGI container with NGINX container [Docker]

I can't find my mistake myself; could anyone help me, please?
I want to run Nginx with HTTPS and uWSGI+Flask in separate containers, for several reasons. I set it up, but the uWSGI container doesn't receive requests from the Nginx container.
My config:
N.B. The IP address 11.11.11.1 is a placeholder for my server's real IP, used here for example purposes only.
Nginx Dockerfile:
FROM nginx:alpine
RUN apk add --no-cache openssl
RUN mkdir -p /etc/nginx/ssl/ \
&& cd /etc/nginx/ssl/ \
&& openssl req -newkey rsa:2048 -sha256 -nodes \
-keyout cert.key \
-x509 \
-days 9999 \
-out cert.pem \
-subj "/C=US/ST=New York/L=Brooklyn/O=Me/CN=11.11.11.1"
ADD nginx.conf /etc/nginx/nginx.conf
ADD nginx.custom.conf /etc/nginx/conf.d/nginx.custom.conf
EXPOSE 443
EXPOSE 80
uWSGI Dockerfile:
FROM alpine:3.7
ADD requirements.txt requirements.txt
# Install uWSGI
RUN apk add --no-cache uwsgi-python3 python3 \
&& export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python3.6/site-packages:/usr/lib/python3.6/site-packages \
&& pip3 install --no-cache-dir -r requirements.txt
EXPOSE 4000
ADD ./app /app
WORKDIR /app
CMD [ "uwsgi", "--thunder-lock", "--ini", "/app/uwsgi.ini"]
uwsgi.ini:
[uwsgi]
app_base = /app
chmod-socket = 777
socket = 0.0.0.0:4000
chdir = %(app_base)
wsgi-file = uwsgi.py
callable = app
master = true
buffer-size = 32768
processes = 4
max-requests = 1000
harakiri = 20
vacuum = true
reload-on-as = 512
die-on-term = true
plugins = python3
uwsgi.py:
from bot.controllers import app
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=4000, debug=True, use_reloader=False)
nginx.conf:
upstream flaskapp {
    server 0.0.0.0:4000;
}
server {
    listen 80;
    listen 443 ssl;
    server_name 11.11.11.1;
    ssl on;
    ssl_protocols SSLv3 TLSv1;
    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/cert.key;
    location / {
        include /etc/nginx/uwsgi_params;
        uwsgi_pass flaskapp;
    }
}
docker-compose.yml:
version: '0.1'
services:
  app:
    build: .
    ports:
      - "4000:4000"
    links:
      - nginx
  nginx:
    image: nginx_ssl:5.0
    ports:
      - "443:443"
log:
app_1 | uwsgi socket 0 bound to TCP address 0.0.0.0:4000 fd 3
app_1 | uWSGI running as root, you can use --uid/--gid/--chroot options
app_1 | *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
app_1 | Python version: 3.6.3 (default, Nov 21 2017, 14:55:19) [GCC 6.4.0]
app_1 | *** Python threads support is disabled. You can enable it with --enable-threads ***
app_1 | Python main interpreter initialized at 0x55e6e3cc5f40
app_1 | uWSGI running as root, you can use --uid/--gid/--chroot options
app_1 | *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
app_1 | your server socket listen backlog is limited to 100 connections
app_1 | your mercy for graceful operations on workers is 60 seconds
app_1 | mapped 507960 bytes (496 KB) for 4 cores
app_1 | *** Operational MODE: preforking ***
app_1 | Set the MIATA_SERVER_IP environment variable
app_1 | Set the MIATA_PUBLIC_CERT environment variable
app_1 | WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x55e6e3cc5f40 pid: 1 (default app)
app_1 | uWSGI running as root, you can use --uid/--gid/--chroot options
app_1 | *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
app_1 | *** uWSGI is running in multiple interpreter mode ***
app_1 | spawned uWSGI master process (pid: 1)
app_1 | spawned uWSGI worker 1 (pid: 8, cores: 1)
app_1 | spawned uWSGI worker 2 (pid: 9, cores: 1)
app_1 | spawned uWSGI worker 3 (pid: 10, cores: 1)
app_1 | spawned uWSGI worker 4 (pid: 11, cores: 1)
nginx_1 | 2018/04/12 14:15:33 [error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: 11.11.11.1, request: "GET / HTTP/1.1", upstream: "uwsgi://0.0.0.0:4000", host: "77.37.214.6"
nginx_1 | 172.17.0.1 - - [12/Apr/2018:14:15:33 +0000] "GET / HTTP/1.1" 502 174 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.1 Safari/603.1.30" "-"
My question is: why can't Nginx connect to uWSGI? Where did I make a mistake, or what did I miss?
Thank you in advance!

Inside the nginx container, the upstream address 0.0.0.0:4000 effectively points at the nginx container itself, not at the uWSGI container, which is why the connection is refused. The nginx container should instead link to the app container, and the nginx upstream should reference the uWSGI container by its Compose service name; Docker's DNS resolves that name, letting nginx proxy requests to the right container.
nginx.conf
upstream flaskapp {
    server app:4000;
}
server {
    listen 80;
    listen 443 ssl;
    server_name 11.11.11.1;
    ssl on;
    ssl_protocols SSLv3 TLSv1;
    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/cert.key;
    location / {
        include /etc/nginx/uwsgi_params;
        uwsgi_pass flaskapp;
    }
}
docker-compose.yml
version: '2'
services:
  app:
    build: .
    ports:
      - "4000:4000"
  nginx:
    image: nginx_ssl:5.0
    ports:
      - "443:443"
    links:
      - app
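To confirm that the containers can actually reach each other once this is in place, a quick sanity check from inside the nginx container can help (a sketch, assuming the service names above and the busybox tools shipped in the nginx:alpine image):
# Does the nginx container resolve the Compose service name?
docker-compose exec nginx getent hosts app
# Is the uWSGI socket reachable? (uWSGI speaks its binary uwsgi protocol
# on this port, not HTTP, so only test the TCP connection)
docker-compose exec nginx nc -z app 4000 && echo "port 4000 open"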

Related

"This site can't be reached." error when connecting to pgadmin4 docker image hosted in Debian VM in GCP

pgadmin4 is a docker image running inside a Debian VM hosted in Google Cloud. I have already enabled http and https traffic for it. I ran the docker image using the following command:
root@fastapi-celery:~# docker run -p 5050:80 \
    -e "PGADMIN_DEFAULT_EMAIL=admin@admin.com" \
    -e "PGADMIN_DEFAULT_PASSWORD=admin" \
    -d dpage/pgadmin4
These are its logs during initialization:
pgAdmin 4 - Application Initialisation
======================================
[2023-01-25 09:23:20 +0000] [1] [INFO] Starting gunicorn 20.1.0
[2023-01-25 09:23:20 +0000] [1] [INFO] Listening at: http://[::]:80 (1)
[2023-01-25 09:23:20 +0000] [1] [INFO] Using worker: gthread
[2023-01-25 09:23:20 +0000] [90] [INFO] Booting worker with pid: 90
This is my netstat command result:
root@fastapi-celery:~# sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:5556 0.0.0.0:* LISTEN 180176/docker-proxy
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 497/sshd: /usr/sbin
tcp 0 0 0.0.0.0:5050 0.0.0.0:* LISTEN 262783/docker-proxy
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 182361/docker-proxy
tcp6 0 0 :::5556 :::* LISTEN 180182/docker-proxy
tcp6 0 0 :::22 :::* LISTEN 497/sshd: /usr/sbin
tcp6 0 0 :::5050 :::* LISTEN 262789/docker-proxy
tcp6 0 0 :::8000 :::* LISTEN 182366/docker-proxy
I have tried stopping and restarting the container, but that didn't help. I am really not sure how to proceed with debugging this.
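One way to narrow this down is to check whether the app answers locally on the VM, which separates a container problem from a GCP firewall problem. A minimal sketch, assuming the VM shell shown above:
# On the VM itself: if this returns an HTTP response, the container is fine
curl -I http://localhost:5050
# If it does, the blocker is likely the VPC firewall: the default
# allow-http/allow-https rules only cover ports 80/443, not 5050
gcloud compute firewall-rules list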

Laravel Sail + Caddy - local dev site not trusted

First time using Docker + Sail + Caddy. I copied the Caddy setup from here. I have a local Laravel dev project which uses 2x subdomains (sd1.project.local, sd2.project.local).
Everything looks fine on sail up. I can see the certs created in the expected local stores, one for each subdomain. I then import the two certs to Chrome.
I can get to the site, though Chrome does not trust it. I have restarted Chrome after the cert imports.
What have I missed / messed? Thanks!
The output of curl -vvv on sd1.project.local is:
$ curl -vvv https://sd1.project.local
* Trying 127.0.0.1:443...
* TCP_NODELAY set
* Connected to sd1.project.local (127.0.0.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
The output of the sail up command for caddy is:
project-caddy-1 | {"level":"info","ts":1651470513.5381901,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
project-caddy-1 | {"level":"warn","ts":1651470513.5383825,"logger":"caddyfile","msg":"Unnecessary header_up X-Forwarded-Host: the reverse proxy's default behavior is to pass headers to the upstream"}
project-caddy-1 | {"level":"warn","ts":1651470513.5384026,"logger":"caddyfile","msg":"Unnecessary header_up X-Forwarded-For: the reverse proxy's default behavior is to pass headers to the upstream"}
project-caddy-1 | {"level":"warn","ts":1651470513.5386436,"logger":"caddyfile","msg":"Unnecessary header_up X-Forwarded-Host: the reverse proxy's default behavior is to pass headers to the upstream"}
project-caddy-1 | {"level":"warn","ts":1651470513.5386622,"logger":"caddyfile","msg":"Unnecessary header_up X-Forwarded-For: the reverse proxy's default behavior is to pass headers to the upstream"}
project-caddy-1 | {"level":"warn","ts":1651470513.5391574,"msg":"Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
project-caddy-1 | {"level":"warn","ts":1651470513.5395052,"logger":"admin","msg":"admin endpoint disabled"}
project-caddy-1 | {"level":"info","ts":1651470513.5397356,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0001264d0"}
project-caddy-1 | {"level":"info","ts":1651470513.5441852,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
project-caddy-1 | {"level":"warn","ts":1651470513.5442154,"logger":"http","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv1","http_port":80}
project-caddy-1 | {"level":"info","ts":1651470513.5448365,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
project-caddy-1 | {"level":"info","ts":1651470513.544853,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["sd1.project.local","sd2.project.local"]}
project-caddy-1 | {"level":"info","ts":1651470513.545812,"logger":"tls.obtain","msg":"acquiring lock","identifier":"sd1.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.5458567,"logger":"tls.obtain","msg":"acquiring lock","identifier":"sd2.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.5476136,"logger":"tls.obtain","msg":"lock acquired","identifier":"sd1.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.5479813,"logger":"tls","msg":"finished cleaning storage units"}
project-caddy-1 | {"level":"info","ts":1651470513.5485632,"logger":"tls.obtain","msg":"lock acquired","identifier":"sd2.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.5563135,"logger":"tls.obtain","msg":"certificate obtained successfully","identifier":"sd1.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.556336,"logger":"tls.obtain","msg":"releasing lock","identifier":"sd1.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.5574615,"logger":"tls.obtain","msg":"certificate obtained successfully","identifier":"sd2.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.557494,"logger":"tls.obtain","msg":"releasing lock","identifier":"sd2.project.local"}
project-caddy-1 | {"level":"warn","ts":1651470513.5588439,"logger":"pki.ca.local","msg":"installing root certificate (you might be prompted for password)","path":"storage:pki/authorities/local/root.crt"}
project-caddy-1 | 2022/05/02 05:48:33 define JAVA_HOME environment variable to use the Java trust
project-caddy-1 | 2022/05/02 05:48:33 not NSS security databases found
project-caddy-1 | {"level":"warn","ts":1651470513.561766,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [sd1.project.local]: no OCSP server specified in certificate","identifiers":["sd1.project.local"]}
project-caddy-1 | {"level":"warn","ts":1651470513.5634398,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [sd2.project.local]: no OCSP server specified in certificate","identifiers":["sd2.project.local"]}
project-caddy-1 | 2022/05/02 05:48:33 certificate installed properly in linux trusts
project-caddy-1 | {"level":"info","ts":1651470513.5809436,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
project-caddy-1 | {"level":"info","ts":1651470513.580976,"msg":"serving initial configuration"}
My docker-compose is:
# For more information: https://laravel.com/docs/sail
version: '3'
services:
  project.local:
    build:
      context: ./vendor/laravel/sail/runtimes/8.0
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.0/app
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    # ports:
    #   - "${APP_PORT:-80}:80"
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
      XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
      XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
      GITHUB_TOKEN: '${GITHUB_TOKEN}'
      FONTAWESOME_NPM_AUTH_TOKEN: '${FONTAWESOME_NPM_AUTH_TOKEN}'
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - redis
      - mailhog
  redis:
    build:
      context: "./Docker/Redis"
      dockerfile: Dockerfile
    privileged: true
    command: sh -c "./init.sh"
    ports:
      - "${FORWARD_REDIS_PORT:-6379}:6379"
    volumes:
      - "sail-redis:/data"
    networks:
      - sail
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      retries: 3
      timeout: 5s
  mailhog:
    image: "mailhog/mailhog:latest"
    ports:
      - "${FORWARD_MAILHOG_PORT:-1025}:1025"
      - "${FORWARD_MAILHOG_DASHBOARD_PORT:-8025}:8025"
    networks:
      - sail
  caddy:
    build:
      context: "./Docker/Caddy"
      dockerfile: Dockerfile
      args:
        WWWGROUP: "${WWWGROUP}"
    restart: unless-stopped
    ports:
      - "${APP_PORT:-80}:80"
      - "${APP_SSL_PORT:-443}:443"
    environment:
      LARAVEL_SAIL: 1
      HOST_DOMAIN: project.local
    volumes:
      - "./Docker/Caddy/file:/etc/caddy"
      - ".:/srv:cache"
      - "./Docker/Caddy/certificates:/data/caddy/certificates/local"
      - "./Docker/Caddy/authorities:/data/caddy/pki/authorities/local"
      - "sailcaddy:/data:cache"
      - "sailcaddyconfig:/config:cache"
    networks:
      - sail
    depends_on:
      - project.local
networks:
  sail:
    driver: bridge
volumes:
  sail-redis:
    driver: local
  sailcaddy:
    external: true
  sailcaddyconfig:
    driver: local
My Caddyfile is:
{
    admin off
    # debug
    on_demand_tls {
        ask http://project.local/caddy
    }
    local_certs
    default_sni project
}
:80 {
    reverse_proxy project.local {
        header_up Host {host}
        header_up X-Real-IP {remote}
        header_up X-Forwarded-Host {host}
        header_up X-Forwarded-For {remote}
        header_up X-Forwarded-Port 443
        # header_up X-Forwarded-Proto {scheme}
        health_timeout 5s
    }
}
:443 {
    tls internal {
        on_demand
    }
    reverse_proxy project.local {
        header_up Host {host}
        header_up X-Real-IP {remote}
        header_up X-Forwarded-Host {host}
        header_up X-Forwarded-For {remote}
        header_up X-Forwarded-Port 443
        # header_up X-Forwarded-Proto {scheme}
        health_timeout 5s
    }
}
sd1.project.local {
    reverse_proxy project.local
}
sd2.project.local {
    reverse_proxy project.local
}
My Dockerfile is:
FROM caddy:alpine
LABEL maintainer="Adrian Mejias"
ARG WWWGROUP
ENV DEBIAN_FRONTEND noninteractive
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apk add --no-cache bash \
&& apk add --no-cache nss-tools \
&& rm -rf /var/cache/apk/*
RUN addgroup -S $WWWGROUP
RUN adduser -G $WWWGROUP -u 1337 -S sail
COPY start-container /usr/local/bin/start-container
RUN chmod +x /usr/local/bin/start-container
ENTRYPOINT ["start-container"]
My start-container is:
#!/usr/bin/env sh
if [ ! -z "$WWWUSER" ]; then
    addgroup $WWWUSER sail
fi
if [ $# -gt 0 ]; then
    # #todo find alpine equivalent of below
    # exec gosu $WWWUSER "$@"
    exec "$@"  # run the given command directly until a gosu equivalent is found
else
    /usr/bin/caddy run --config /etc/caddy/Caddyfile --adapter caddyfile
fi
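The curl error above (unable to get local issuer certificate) hints that the problem is trust in the issuing CA, not the leaf certificates: Caddy signs sd1/sd2 with its internal local CA, so importing the two leaf certs into Chrome is not enough. A sketch of trusting the root CA instead, assuming the container name project-caddy-1 from the logs and Caddy's default local CA path (with this compose file the same file is also mounted under ./Docker/Caddy/authorities):
# Copy Caddy's local root CA out of the container
docker cp project-caddy-1:/data/caddy/pki/authorities/local/root.crt ./caddy-root.crt
# Verify the chain validates against it
curl --cacert ./caddy-root.crt https://sd1.project.local
# Then import caddy-root.crt into the OS / Chrome trust store as a
# certificate authority, rather than importing the per-site certs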

Nginx connection refused

I'm using Docker containers to host a web app. I have three main containers: MySQL, Flask, and Nginx. The first two work as expected, and the latter also seems fine, since no error shows up during the docker-compose startup.
Nginx container initialization output:
nginx | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx | 2022/04/07 13:09:13 [notice] 1#1: using the "epoll" event method
nginx | 2022/04/07 13:09:13 [notice] 1#1: nginx/1.21.6
nginx | 2022/04/07 13:09:13 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
nginx | 2022/04/07 13:09:13 [notice] 1#1: OS: Linux 4.19.130-boot2docker
nginx | 2022/04/07 13:09:13 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx | 2022/04/07 13:09:13 [notice] 1#1: start worker processes
nginx | 2022/04/07 13:09:13 [notice] 1#1: start worker process 21
Nginx dockerfile
# Dockerfile-nginx
FROM nginx:latest
# Nginx will listen on this port
# EXPOSE 80
# Remove the default config file that
# /etc/nginx/nginx.conf includes
RUN rm /etc/nginx/conf.d/default.conf
# Copy our custom config for
# /etc/nginx/nginx.conf to include
COPY nginx.conf /etc/nginx/conf.d/
Containers after being deployed and their respective ports:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bffffcfe2f70 sc_server_nginx "/docker-entrypoint.…" 14 seconds ago Up 13 seconds 0.0.0.0:80->80/tcp nginx
a73d958c1407 sc_server_flask "uwsgi app.ini" 9 hours ago Up 9 hours 8080/tcp flask
d273db5f80ef mysql:5.7 "docker-entrypoint.s…" 21 hours ago Up 9 hours (healthy) 0.0.0.0:3306->3306/tcp, 33060/tcp mysql
I'm new to Nginx, so this may be a newbie error. I'm trying to forward all traffic from my host machine's port 80 to the container's port 80, which in turn forwards it to the WSGI container via a socket.
I'm using the following Nginx configuration (nothing too fancy, I guess):
server {
    listen 80;
    location / {
        include uwsgi_params;
        uwsgi_pass flask:8080;
    }
}
As you can see the server listens at port 80 and redirects all the traffic via the socket uwsgi_pass flask:8080; to the WSGI container that is hosting the app.
However, whenever I type 127.0.0.1:80 or 0.0.0.0:80 in my browser the connection is refused. I have no firewall deployed, so I guess that there is no problem with port 80 being down.
This is my app.ini configuration file, which holds the initialization and deployment parameters:
[uwsgi]
wsgi-file = wsgi.py
; This is the name of the variable
; in our script that will be called
callable = app
; We use the port 8080 which we will
; then expose on our Dockerfile
socket = :8080
; Set uWSGI to start up 4 workers
processes = 4
threads = 2
master = true
chmod-socket = 660
vacuum = true
die-on-term = true
I also include the docker-compose.yml, in case it helps:
docker-compose.yml
services:
  flask:
    build: ./flask
    container_name: flask
    restart: always
    environment:
      - APP_NAME=MyFlaskApp
      - YOURAPPLICATION_SETTINGS=docker_config.py
    expose:
      - 8080
    depends_on:
      mysql:
        condition: service_healthy
  mysql:
    image: mysql:5.7
    container_name: mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD:
      MYSQL_DATABASE:
      MYSQL_USER:
      MYSQL_PASSWORD:
    volumes:
      - ./db/init.sql:/data/application/init.sql
    healthcheck:
      test: mysql -u -p --database -e "show tables;"
      interval: 3s
      retries: 5
      start_period: 30s
  nginx:
    build: ./nginx
    container_name: nginx
    restart: always
    depends_on:
      - mysql
      - flask
    ports:
      - "80:80"
Can anyone help?
Update
I've used Wireshark to scan the loopback interface and inspect the server's response at 0.0.0.0:80 (I suspect there might be some problem with port 80), and I get the following payload:
Update 2:
After deploying the app on EC2, everything works fine. Thus, it has to be some problem with port 80 on localhost. My machine's OS is macOS Monterey 12.4, and the system firewall is turned off.
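One detail in the nginx startup log above may matter here: OS: Linux 4.19.130-boot2docker suggests Docker is running inside a docker-machine (Docker Toolbox) VM, and in that setup published ports are bound on the VM's address, not on the host's 127.0.0.1. A quick check, assuming the default machine name:
# Address of the docker-machine VM (Docker Toolbox setups)
docker-machine ip default
# Try the published port on the VM's address instead of 127.0.0.1
curl -I "http://$(docker-machine ip default):80/"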

Exposing local dockerised app using nginx docker container

I have a Julia app that has been dockerized using the Dockerfile below, and I am running it as a Docker container on port 8080. The goal is to expose this app (port 8080) to the public using an Nginx Docker container.
I have followed this tutorial: https://www.domysee.com/blogposts/reverse-proxy-nginx-docker-compose and based on the instructions I have created two files, docker-compose.yml and nginx.conf as shown below.
Dockerfile for the local-app:
FROM julia:1.6
RUN apt-get update && apt-get install -y gcc
ENV JULIA_PROJECT @.
WORKDIR /home
ENV VERSION 1
ADD . /home
EXPOSE 8080
ENTRYPOINT ["julia", "-JApp.so", "-t", "auto", "-L", "src/App.jl", "-e", "App.run()"]
Docker-Compose:
version: "3.9"
services:
nginx:
image: nginx:alpine
container_name: production_nginx
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
- .:/usr/share/nginx/html
- ./nginx/error.log:/etc/nginx/error_log.log
- ./nginx/cache/:/etc/nginx/cache
- /etc/letsencrypt/:/etc/letsencrypt/
ports:
- 8080:80
- 443:443
local-app:
image: local-app:latest
container_name: production-local-app
expose:
- "8080"
Nginx.conf:
events {
}
http {
    error_log /etc/nginx/error_log.log warn;
    client_max_body_size 20m;
    proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;
    server {
        server_name app.local.hosting;
        location /local-app {
            proxy_pass http://localhost:8080;
            rewrite ^/local-app(.*)$ $1 break;
        }
        listen 80;
        listen 443 ssl;
        ssl_certificate /etc/letsencrypt/live/app.local.hosting/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/app.local.hosting/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
    }
}
However, running docker-compose starts the two containers (the app and Nginx), but Nginx exits with an error. What could be the cause of this service failure? I look forward to any suggestions; thanks in advance!
Update:
Adding the terminal output from running docker-compose:
user@user:~/Desktop/App$ sudo docker-compose up
Starting production-local-app ... done
Starting production_nginx ... done
Attaching to production_nginx, production-local-app
production_nginx | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
production_nginx | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
production_nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
production_nginx | 10-listen-on-ipv6-by-default.sh: info: IPv6 listen already enabled
production_nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
production_nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
production_nginx | /docker-entrypoint.sh: Configuration complete; ready for start up
production_nginx | 2021/12/05 16:27:03 [emerg] 1#1: open() "/etc/letsencrypt/options-ssl-nginx.conf" failed (2: No such file or directory) in /etc/nginx/nginx.conf:23
production_nginx | nginx: [emerg] open() "/etc/letsencrypt/options-ssl-nginx.conf" failed (2: No such file or directory) in /etc/nginx/nginx.conf:23
production_nginx exited with code 1
production-local-app |
production-local-app | Web Server starting at http://localhost:8080 - press Ctrl/Cmd+C to stop the server.
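The [emerg] line above pinpoints the immediate failure: /etc/letsencrypt/options-ssl-nginx.conf does not exist inside the container, so nginx refuses to start. The config can be test-parsed in a throwaway container without bringing the stack up (a sketch, assuming the nginx.conf above sits in the current directory):
# Reproduces the same open() error until the letsencrypt files are
# mounted or the ssl lines are removed
docker run --rm -v "$(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro" nginx:alpine nginx -t
Note also that once nginx does start, proxy_pass http://localhost:8080 will point at the nginx container itself; inside the Compose network the app is reachable by its service name instead, e.g. proxy_pass http://local-app:8080;.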

nginx reverse proxy to other nginx 502 bad gateway

I would like to create two services, each with its own Nginx, and to use a third Nginx as a reverse proxy in front of them, but I get
502 Bad Gateway
when I request
http://127.0.0.1:8080/
http://127.0.0.1:8080/one
http://127.0.0.1:8080/two
accessing:
http://127.0.0.1:8081
http://127.0.0.1:8082
works ok
I have this docker-compose.yml
version: "3.3"
services:
nginx-one:
image: nginx:1.17.8
ports:
- "8081:80"
networks:
- frontend
- backend
volumes:
- ./nginx-one/html:/usr/share/nginx/html
nginx-two:
image: nginx:1.17.8
ports:
- "8082:80"
networks:
- frontend
- backend
volumes:
- ./nginx-two/html:/usr/share/nginx/html
nginx-reverse-proxy:
image: nginx:1.17.8
ports:
- "8080:80"
networks:
- frontend
- backend
volumes:
- ./nginx-reverse-proxy/html:/usr/share/nginx/html
- ./nginx-reverse-proxy/conf.d:/etc/nginx/conf.d
debian-network:
image: cslev/debian_networking
stdin_open: true # docker run -i
tty: true # docker run -t
networks:
- frontend
- backend
networks:
frontend:
internal: false
backend:
internal: true
and dir structure
.
├── docker-compose.yml
├── nginx-one
│   └── html
│       └── index.html
├── nginx-reverse-proxy
│   ├── conf.d
│   │   └── default.conf
│   └── html
│       └── index.html
└── nginx-two
    └── html
        └── index.html
nginx.conf content
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
and
conf.d/default.conf
server {
    listen 80;
    location /one {
        proxy_pass http://127.0.0.1:8081/;
    }
    location /two {
        proxy_pass http://127.0.0.1:8082/;
    }
}
When I comment out this line of docker-compose.yml
# - ./nginx-reverse-proxy/conf.d:/etc/nginx/conf.d
so conf.d/default.conf is not used, and I request from the host's browser:
http://127.0.0.1:8080/
it gives a proper response from the nginx-reverse-proxy itself but obviously
http://127.0.0.1:8080/one
http://127.0.0.1:8080/two
don't provide any response from
http://127.0.0.1:8081
http://127.0.0.1:8082
but 404 instead.
docker ps output:
IMAGE COMMAND CREATED STATUS PORTS NAMES
cslev/debian_networking "bash" 25 minutes ago Up 3 minutes nginxproblem_debian-network_1
nginx:1.17.8 "nginx -g 'daemon of…" 47 minutes ago Up 3 minutes 0.0.0.0:8080->80/tcp nginxproblem_nginx-reverse-proxy_1
nginx:1.17.8 "nginx -g 'daemon of…" 14 hours ago Up 3 minutes 0.0.0.0:8082->80/tcp nginxproblem_nginx-two_1
nginx:1.17.8 "nginx -g 'daemon of…" 14 hours ago Up 3 minutes 0.0.0.0:8081->80/tcp nginxproblem_nginx-one_1
Running this script:
#!/bin/sh
docker exec nginxproblem_debian-network_1 curl -sS 127.0.0.1:8080
docker exec nginxproblem_debian-network_1 curl -sS nginxproblem_nginx-reverse-proxy_1:8080
docker exec nginxproblem_debian-network_1 curl -sS 127.0.0.1:8080/one
docker exec nginxproblem_debian-network_1 curl -sS nginxproblem_nginx-reverse-proxy_1:8080/one
docker exec nginxproblem_debian-network_1 curl -sS 127.0.0.1:8080/two
docker exec nginxproblem_debian-network_1 curl -sS nginxproblem_nginx-reverse-proxy_1:8080/two
docker exec nginxproblem_debian-network_1 curl -sS 127.0.0.1:8081
docker exec nginxproblem_debian-network_1 curl -sS nginxproblem_nginx-one_1:8080
docker exec nginxproblem_debian-network_1 curl -sS 127.0.0.1:8082
docker exec nginxproblem_debian-network_1 curl -sS nginxproblem_nginx-two_1:8082
gives:
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
curl: (7) Failed to connect to nginxproblem_nginx-reverse-proxy_1 port 8080: Connection refused
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
curl: (7) Failed to connect to nginxproblem_nginx-reverse-proxy_1 port 8080: Connection refused
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
curl: (7) Failed to connect to nginxproblem_nginx-reverse-proxy_1 port 8080: Connection refused
curl: (7) Failed to connect to 127.0.0.1 port 8081: Connection refused
curl: (7) Failed to connect to nginxproblem_nginx-one_1 port 8080: Connection refused
curl: (7) Failed to connect to 127.0.0.1 port 8082: Connection refused
curl: (7) Failed to connect to nginxproblem_nginx-two_1 port 8082: Connection refused
when - ./nginx-reverse-proxy/conf.d:/etc/nginx/conf.d is not commented out, and the same output when it is commented out, unlike when accessing the addresses from the browser.
Unless you have host networking enabled, 127.0.0.1 points to the container itself. You can refer to the other two containers from inside a container on the same network by their service name, e.g. nginx-one or nginx-two.
You're also mapping container port 80 to ports 8080/8081/8082 on the host machine. This, however, does nothing for communication between containers on the same network. Check the docs:
By default, when you create or run a container using docker create or docker run, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host to the outside world.
So, try changing http://127.0.0.1:8081/; to http://nginx-one/; and it should work.
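As a verification (a sketch, assuming the service names and the attached cslev/debian_networking container from the compose file above; note the container port is 80, not the published 8081/8082):
# Service names resolve via Docker's embedded DNS on the shared networks
docker exec nginxproblem_debian-network_1 curl -sS http://nginx-one/
docker exec nginxproblem_debian-network_1 curl -sS http://nginx-two/
# After pointing proxy_pass at http://nginx-one/ and http://nginx-two/:
docker exec nginxproblem_debian-network_1 curl -sS http://nginx-reverse-proxy/one
docker exec nginxproblem_debian-network_1 curl -sS http://nginx-reverse-proxy/two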
