I have a Docker Compose file running an application that uses NGINX as a reverse proxy. The proxy serves STIG Manager and Keycloak over HTTPS, but the additional container I want to add listens on a different port and is not HTTPS.
#1 I want to add additional docker containers behind the proxy.
#2 I want to call the app using a DNS name.
Environment: (The server hosting docker)
gsil-docker1.gsil.mil
Compose File:
version: '3.7'
services:
  nginx:
    # image: nginx:1.23.1
    # alternative image from Ironbank
    image: registry1.dso.mil/ironbank/opensource/nginx/nginx:1.23.1
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./certs/localhost/localhost.crt:/etc/nginx/cert.pem
      - ./certs/localhost/localhost.key:/etc/nginx/privkey.pem
      - ./certs/dod/Certificates_PKCS7_v5.9_DoD.pem.pem:/etc/nginx/dod-certs.pem
      - ./nginx/index.html:/usr/share/nginx/html/index.html
    ports:
      - "443:443"
  keycloak:
    # image: quay.io/keycloak/keycloak:19.0.2
    # alternative image from Ironbank
    image: registry1.dso.mil/ironbank/opensource/keycloak/keycloak:19.0.2
    environment:
      - KEYCLOAK_ADMIN=admin
      - KEYCLOAK_ADMIN_PASSWORD=Pa55w0rd
      - KC_PROXY=edge
      - KC_HOSTNAME_URL=https://localhost/kc/
      - KC_HOSTNAME_ADMIN_URL=https://localhost/kc/
      - KC_SPI_X509CERT_LOOKUP_PROVIDER=nginx
      - KC_SPI_X509CERT_LOOKUP_NGINX_SSL_CLIENT_CERT=SSL-CLIENT-CERT
      - KC_SPI_TRUSTSTORE_FILE_FILE=/tmp/truststore.p12
      - KC_SPI_TRUSTSTORE_FILE_PASSWORD=password
    command: start --import-realm
    volumes:
      - ./certs/dod/Certificates_PKCS7_v5.9_DoD.pem.p12:/tmp/truststore.p12
      - ./kc/stigman_realm.json:/opt/keycloak/data/import/stigman_realm.json
      - ./kc/create-x509-user.jar:/opt/keycloak/providers/create-x509-user.jar
      # uncomment below to persist Keycloak data
      # - ./kc/h2:/opt/keycloak/data/h2
  stigman:
    # image: nuwcdivnpt/stig-manager:1.2.20
    # alternative image based on Ironbank Node.js
    image: nuwcdivnpt/stig-manager:latest-ironbank
    environment:
      - STIGMAN_OIDC_PROVIDER=http://keycloak:8080/realms/stigman
      - STIGMAN_CLIENT_OIDC_PROVIDER=https://localhost/kc/realms/stigman
      - STIGMAN_CLASSIFICATION=U
      - STIGMAN_DB_HOST=mysql
      - STIGMAN_DB_USER=stigman
      - STIGMAN_DB_PASSWORD=stigmanpw
      # uncomment below to fetch current STIG library from DISA and import it
      # - STIGMAN_INIT_IMPORT_STIGS=true
    init: true
  mysql:
    # image: mysql:8.0.21
    # alternative image from Ironbank
    image: registry1.dso.mil/ironbank/opensource/mysql/mysql8:8.0.31
    environment:
      - MYSQL_ROOT_PASSWORD=rootpw
      - MYSQL_USER=stigman
      - MYSQL_DATABASE=stigman
      - MYSQL_PASSWORD=stigmanpw
    # uncomment below to persist MySQL data
    volumes:
      - ./mysql-data:/var/lib/mysql
Nginx Config:
events {
    worker_connections 4096; ## Default: 1024
}

pid /var/cache/nginx/nginx.pid;

http {
    server {
        listen 443 ssl;
        server_name localhost;
        root /usr/share/nginx/html;
        client_max_body_size 100M;

        ssl_certificate /etc/nginx/cert.pem;
        ssl_certificate_key /etc/nginx/privkey.pem;
        ssl_prefer_server_ciphers on;
        ssl_client_certificate /etc/nginx/dod-certs.pem;
        ssl_verify_client optional;
        ssl_verify_depth 4;

        error_log /var/log/nginx/error.log debug;

        if ($return_unauthorized) { return 496; }

        location / {
            autoindex on;
            ssi on;
        }

        location /stigman/ {
            proxy_pass http://stigman:54000/;
        }

        location /kc/ {
            proxy_pass http://keycloak:8080/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header ssl-client-cert $ssl_client_escaped_cert;
            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;
        }
    }

    # define which endpoints require mTLS
    map_hash_bucket_size 128;
    map $uri $secured_url {
        default false;
        "/kc/realms/stigman/protocol/openid-connect/auth" true;
    }
    map "$secured_url:$ssl_client_verify" $return_unauthorized {
        default 0;
        "true:FAILED" 1;
        "true:NONE" 1;
        "true:" 1;
    }
}
I have tried adding settings to my docker-compose and nginx but I was unable to make it work.
docker-compose addition:
networks:
  default:
    name: grafana_default
    external: true
nginx addition:
server {
    listen 80;
    server_name grafana.gsil.mil;

    location / {
        proxy_pass http://grafana.gsil.smil:3000/;
    }
}
Additionally, I have created a CNAME DNS entry for grafana.gsil.mil and pointed it to gsil-docker1.gsil.mil
The container apps are all running and I can reach each of them by going to:
gsil-docker1.gsil.mil/stigman
gsil-docker1.gsil.mil/kc
gsil-docker1.gsil.mil:3000
The docker-compose file for grafana:
version: '3.0'
volumes:
  grafana-data:
services:
  grafana:
    container_name: grafana
    image: registry1.dso.mil/ironbank/opensource/grafana/grafana:9.3.2
    environment:
      - grafana.config
    restart: always
    volumes:
      - grafana-data:/var/lib/grafana
    ports:
      - 3000:3000/tcp
I have done a lot of searching, but the examples I found tended to show HTTP on nginx with HTTP backend apps. I was struggling to find something that would pull this all together. Can you have an HTTPS proxy in front of an HTTP backend app, or do I need to create certs and make all my backend apps run HTTPS?
The issue was simple to fix. I needed to publish port 80 for nginx in my docker-compose file. NGINX cannot accept plain-HTTP requests when it is only listening on HTTPS, so add an HTTP listener as well.
version: '3.7'
services:
  nginx:
    ports:
      - "443:443"
      - "80:80"
My presumptions about these specific items were all correct:
- making Docker aware of external networks (needed when the container you want to add/proxy is not part of the same network):
networks:
  default:
    name: grafana_default
    external: true
- adding DNS CNAME entries: I created a CNAME DNS entry for grafana.gsil.mil and pointed it to gsil-docker1.gsil.mil.
- adding the appropriate server block to nginx.conf for each additional container you want to proxy:
server {
    listen 80;
    server_name grafana.gsil.mil;

    location / {
        proxy_pass http://grafana.gsil.smil:3000/;
    }
}
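To answer the question I closed with: yes, an HTTPS proxy can sit in front of a plain-HTTP backend. A server block along these lines (a sketch, untested; it reuses the cert files already mounted into the nginx container and assumes the Grafana container is reachable by its service name on the shared grafana_default network) would terminate TLS for Grafana without making the backend run HTTPS:

server {
    listen 443 ssl;
    server_name grafana.gsil.mil;

    # cert/key paths assumed from the volumes already mounted for the proxy;
    # ideally this would be a certificate valid for grafana.gsil.mil
    ssl_certificate /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/privkey.pem;

    location / {
        proxy_pass http://grafana:3000/;   # plain HTTP to the backend container
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}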
Related
I have Joplin running in a Docker container on my NAS using Docker Compose. Now I want to set up a reverse proxy in order to make it accessible via my personal domain.
The joplin/docker-compose.yml file looks as follows:
version: '3'
services:
  db:
    image: postgres:13.1
    volumes:
      - /local/joplin:/var/lib/postgresql/data
    restart: unless-stopped
    environment:
      - APP_PORT=22300
      - POSTGRES_PASSWORD=********
      - POSTGRES_USER=user
      - POSTGRES_DB=database
  app:
    image: joplin/server:2.2.10
    depends_on:
      - db
    ports:
      - "22300:22300"
    restart: unless-stopped
    environment:
      - APP_PORT=22300
      - APP_BASE_URL=http://192.168.1.2:22300/
      - DB_CLIENT=pg
      - POSTGRES_PASSWORD=********
      - POSTGRES_DATABASE=database
      - POSTGRES_USER=user
      - POSTGRES_PORT=5432
      - POSTGRES_HOST=db
The nginx/docker-compose.yml file looks like this:
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - 8080:80
    volumes:
      - /local/nginx/nginx.conf:/etc/nginx/nginx.conf
      - /local/nginx/sites-enabled:/etc/nginx/sites-enabled
I used the default for my /local/nginx/nginx.conf. It is as follows:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Furthermore, inside the /local/nginx/sites-enabled/ folder I created the following files:
/local/nginx/sites-enabled/example.org,
/local/nginx/sites-enabled/my.example.org.
The content of /local/nginx/sites-enabled/example.org is:
##
# example.org -- Configuration
server {
    listen 80;
    listen [::]:80;
    root /var/www/html;
    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;
    server_name example.org;
}
The content of /local/nginx/sites-enabled/my.example.org is:
##
# my.example.org -- Configuration
server {
    listen 80;
    listen [::]:80;
    server_name my.example.org;

    location / {
        proxy_pass http://192.168.1.2:22300/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
    }
}
I set up port forwarding on my router to the nginx container and it works (I see the 404 screen of nginx when I go to http://example.org). However, I struggle to set up the reverse proxy for the Joplin container. When I try to access http://my.example.org, I get a 510 error message. What am I doing wrong?
The weird thing is, when I replace http://192.168.1.2:22300/ with the IP of my personal PC running a test webpage, I can access it via http://my.example.org. Even when I set up Joplin on my PC it works. Something seems to be wrong with either my nginx or Docker setup.
After lots of debugging and googling around I finally found the solution. What one has to do is the following:
Set up nginx with a network inside nginx/docker-compose.yml:
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - 8080:80
    volumes:
      - /local/nginx/nginx.conf:/etc/nginx/nginx.conf
      - /local/nginx/sites-enabled:/etc/nginx/sites-enabled
    #----------------------------------------
    # Setup network my_net
    #----------------------------------------
    networks:
      - my_net
networks:
  my_net:
    driver: bridge
Make Joplin use the network defined for nginx, and add an extra one for communication with the database. (Plus, set a name for the Joplin container.)
version: '3'
services:
  db:
    image: postgres:13.1
    #----------------------------------------
    # Setup communication to Joplin server
    #----------------------------------------
    container_name: database
    networks:
      - joplin_net
    #----------------------------------------
    volumes:
      - /local/joplin:/var/lib/postgresql/data
    restart: unless-stopped
    environment:
      - APP_PORT=22300
      - POSTGRES_PASSWORD=********
      - POSTGRES_USER=user
      - POSTGRES_DB=database
  app:
    image: joplin/server:2.2.10
    #----------------------------------------
    # Setup communication to Postgres server
    # and nginx
    #----------------------------------------
    container_name: joplin # This will be the name used by nginx.
    networks:
      - joplin_net
      - nginx_my_net
    #----------------------------------------
    depends_on:
      - db
    ports:
      - "22300:22300"
    restart: unless-stopped
    environment:
      - APP_PORT=22300
      - APP_BASE_URL=http://192.168.1.2:22300/
      - DB_CLIENT=pg
      - POSTGRES_PASSWORD=********
      - POSTGRES_DATABASE=database
      - POSTGRES_USER=user
      - POSTGRES_PORT=5432
      - POSTGRES_HOST=db

#----------------------------------------
# You can replace joplin_net with any
# name you like. However, the name for
# nginx_my_net has to be:
#   app folder + '_' + network name
# The nginx application is in the nginx
# folder, therefore the prefix has to be
# 'nginx_'. The network name is 'my_net',
# so this has to be the suffix.
#----------------------------------------
networks:
  joplin_net:
    driver: bridge
  nginx_my_net:
    external: true
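Whether the folder-prefix guess is right can be checked from the host. A couple of commands like these (a sketch, using the names assumed above) list the networks Compose created and show which containers are attached to nginx's network:

docker network ls --filter name=my_net
docker network inspect nginx_my_net --format '{{range .Containers}}{{.Name}} {{end}}'

Newer Compose versions also let you pin the network name explicitly with a top-level "name:" key under the network, which avoids having to guess the folder prefix at all.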
The /local/nginx/sites-enabled/my.example.org file has to be amended:
##
# my.example.org -- Configuration
server {
    listen 80;
    listen [::]:80;
    server_name my.example.org;

    # The next line makes nginx use the Docker DNS
    # to find the Joplin container by its name
    # ('joplin').
    resolver 127.0.0.11 valid=30;

    location / {
        # The server name used here has to be the
        # one defined using 'container_name' in the
        # docker-compose.yml for the application we
        # want to proxy to.
        proxy_pass http://joplin:22300/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
    }
}
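A quick way to verify the proxy path end to end, without touching router or DNS settings, is to send a request with the right Host header straight at the published port (a sketch; it assumes the 8080:80 mapping from the nginx compose file above):

curl -H "Host: my.example.org" http://localhost:8080/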
Hope this saves someone a couple of days of head-scratching.
I have an nginx.conf in which I am running an application on localhost. I need to redirect the application from HTTP to HTTPS. In the nginx.conf, I have a configuration as below:
http {
    error_log /etc/nginx/error/error.log warn; #./nginx/error.log warn;
    client_max_body_size 20m;
    proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;

    server {
        listen 80;
        server_name localhost;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name localhost;

        ssl_session_cache shared:SSL:50m;
        ssl_session_timeout 1d;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/ssl.crt;
        ssl_certificate_key /etc/nginx/ssl.key;
        ssl_protocols TLSv1.2;
        ssl_ciphers EECDH+AESGCM:EDH+AESGCM:EECDH:EDH:!MD5:!RC4:!LOW:!MEDIUM:!CAMELLIA:!ECDSA:!DES:!DSS:!3DES:!NULL;
        ssl_prefer_server_ciphers on;
        keepalive_timeout 70;

        location / {
            proxy_pass http://localhost:80;
            proxy_ssl_certificate /etc/nginx/ssl.crt;
            proxy_ssl_certificate_key /etc/nginx/ssl.key;
            proxy_ssl_verify off;
            allow all;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header X-Forwarded-Proto https;
            #access_log /var/log/nginx/access.log;
            #error_log /var/log/nginx/error.log;
            client_max_body_size 0;
            client_body_buffer_size 128k;
            proxy_connect_timeout 1200s;
            proxy_send_timeout 1200s;
            proxy_read_timeout 1200s;
            proxy_buffers 32 4k;
        }
    }
And docker-compose.yml as below:
version: '2'
services:
  mysql:
    image: mysql:5.7.21
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=admin
      - MYSQL_DATABASE=bookstack
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD=admin
    volumes:
      - ./mysql:/var/lib/mysql
    networks:
      - bookstack-bridge
  bookstack:
    image: solidnerd/bookstack:latest
    container_name: bookstack
    restart: always
    depends_on:
      - mysql
    environment:
      - APP_URL=http://localhost:8080
    volumes:
      - ./uploads:/var/www/bookstack/public/uploads
      - ./storage-uploads:/var/www/bookstack/public/storage
    ports:
      - 8080:8080
    networks:
      - bookstack-bridge
  nginx:
    image: nginx:latest
    container_name: bookstack-nginx
    restart: always
And in the docker-compose.yml, I have the APP_URL=http://localhost:8080 environment variable.
Does anybody have an idea, what needs to be changed to redirect from HTTP to HTTPS?
Thanks in advance.
I customized your docker-compose.yml.
Your docker-compose.yml would not work for HTTPS because some parts are wrong or missing.
To use HTTPS you have to create the certificates with OpenSSL. They must be in the folder /etc/nginx/certs in the container.
Once the certificates are in that folder, you have to change VIRTUAL_PORT=8080 to 443 and change the APP_URL from http to https.
When you start a service and assign it to the network "web", nginx automatically sees that a new service has been registered and automatically maps to the port specified in the image. This happens through the mounted Docker socket ("/var/run/docker.sock:/tmp/docker.sock:ro"; ":ro" stands for read-only).
If you assign a service to the network "internal", it is not accessible from the outside and nginx ignores it. See the "mysql" service.
With "depends_on:" I say that all services have to start before bookstack starts. This is important! First nginx, then MySQL, and finally bookstack.
I prefer to use VIRTUAL_HOST with its own local domain. You can also use localhost there; the only important thing is that the "hosts" file in your operating system points to your external Docker IP. Example: "192.168.5.121 bookstack.local".
My tip! I would keep the "nginx--proxy" service in a separate docker-compose file. Then you can easily register further services with nginx.
Good luck with that, and if you only want to use BookStack locally, HTTPS might not be that urgent right now. Otherwise, search for "Create Certs for Nginx local".
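For a purely local setup, a self-signed certificate is enough. Something like this should work (a sketch, assuming the jwilder/nginx-proxy convention of naming the files after the VIRTUAL_HOST and the certs folder used in the compose file below):

mkdir -p ./docker/data/certs
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -keyout ./docker/data/certs/bookstack.local.key \
    -out ./docker/data/certs/bookstack.local.crt \
    -subj "/CN=bookstack.local"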
Before you start, create the network "web":
docker network create web
version: '2.4'
services:
  mysql:
    image: mysql:5.7.21
    container_name: bookstack-mysql
    restart: unless-stopped
    networks:
      - "internal"
    healthcheck:
      test: "exit 0"
    environment:
      - MYSQL_ROOT_PASSWORD=admin
      - MYSQL_DATABASE=bookstack
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD=admin
    volumes:
      - ./docker/data/mysql:/var/lib/mysql
  bookstack:
    image: solidnerd/bookstack:0.29.3
    container_name: bookstack
    restart: unless-stopped
    networks:
      - "web"
      - "internal"
    depends_on:
      nginx--proxy:
        condition: service_started
      mysql:
        condition: service_healthy
    environment:
      - VIRTUAL_HOST=bookstack.local
      - VIRTUAL_PORT=8080
      - DB_HOST=mysql:3306
      - DB_DATABASE=bookstack
      - DB_USERNAME=bookstack
      - DB_PASSWORD=admin
      - APP_URL=http://bookstack.local
    volumes:
      - ./docker/data/uploads:/var/www/bookstack/public/uploads
      - ./docker/data/storage-uploads:/var/www/bookstack/storage/uploads
  nginx--proxy:
    image: jwilder/nginx-proxy:latest
    container_name: nginx--proxy
    restart: always
    environment:
      DEFAULT_HOST: default.vhost
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./docker/data/certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - "web"
      - "internal"
networks:
  web:
    external: true
  internal:
    external: false
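As mentioned above, any further service can then be registered with the proxy simply by joining the "web" network and setting VIRTUAL_HOST. A sketch of a separate compose file (service name, image and hostname are placeholders) might look like this:

version: '2.4'
services:
  someapp:
    image: example/someapp:latest    # placeholder image
    environment:
      - VIRTUAL_HOST=someapp.local   # nginx-proxy routes this hostname to the container
      - VIRTUAL_PORT=8080            # port the app listens on inside the container
    networks:
      - web
networks:
  web:
    external: true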
This solution worked for me:
In the docker-compose.yml, in the nginx service section, I added a networks tag:
networks:
  - bookstack-bridge
And in nginx.conf I set the proxy_pass to:
proxy_pass http://bookstack:8080;
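For completeness, the nginx service section then ends up looking roughly like this (a sketch pieced together from the question; the conf and cert paths on the host are assumptions):

nginx:
  image: nginx:latest
  container_name: bookstack-nginx
  restart: always
  ports:
    - 80:80
    - 443:443
  volumes:
    - ./nginx/nginx.conf:/etc/nginx/nginx.conf   # the config shown above
    - ./nginx/ssl.crt:/etc/nginx/ssl.crt         # assumed cert location
    - ./nginx/ssl.key:/etc/nginx/ssl.key
  networks:
    - bookstack-bridge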
Thank you guys for your help.
I have a simple app of 3 containers which all run on the same AWS EC2 server. I want to configure Nginx to act as a reverse proxy; however, I'm pretty new to Nginx and don't know how to set up the conf file correctly.
Here is my docker-compose:
version: "3"
services:
nginx:
container_name: nginx
image: nginx:latest
ports:
- "80:80"
volumes:
- ./conf/nginx.conf:/etc/nginx/nginx.conf
frontend:
container_name: frontend
image: myfrontend:image
ports:
- "3000:3000"
backend:
container_name: backend
depends_on:
- db
environment:
DB_HOST: db
image: mybackend:image
ports:
- "8400:8400"
db:
container_name: mongodb
environment:
MONGO_INITDB_DATABASE: myDB
image: mongo:latest
ports:
- "27017:27017"
volumes:
- ./initialization/db:/docker-entrypoint-initdb.d
- db-volume:/data/db
volumes:
db-volume:
The backend fetches data from the database and sends it to be presented by the frontend.
Here is my nginx.conf file:
events {
    worker_connections 4096;
}

http {
    server {
        listen 80;
        listen [::]:80;
        server_name myDomainName.com;

        location / {
            proxy_pass http://frontend:3000/;
            proxy_set_header Host $host;
        }

        location / {
            proxy_pass http://backend:8400/;
            proxy_pass_request_headers on;
        }
    }
}
How can I set nginx to serve the frontend and backend containers?
You can use the below Nginx configs to solve your issue
events {
    worker_connections 4096;
}

http {
    server {
        listen 80 default_server;
        server_name frontend.*;

        location / {
            resolver 127.0.0.11 ipv6=off;
            set $target http://frontend:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass $target;
        }
    }

    server {
        listen 80;
        server_name backend.*;

        location / {
            resolver 127.0.0.11 ipv6=off;
            set $target http://backend:8400;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass $target;
        }
    }
}
Nginx will serve the backend and frontend on different domain names. With the /etc/hosts entry below you will be able to reach the services on those domain names:
127.0.0.1 backend.me frontend.me
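If you would rather keep a single hostname, a path-based variant is also possible. The sketch below (the /api/ prefix is an assumption, and the trailing slash on proxy_pass strips it before the request reaches the backend) replaces the two server blocks with one:

server {
    listen 80;
    server_name myDomainName.com;

    location / {
        proxy_pass http://frontend:3000/;
        proxy_set_header Host $host;
    }

    location /api/ {
        # trailing slash strips the /api/ prefix before forwarding
        proxy_pass http://backend:8400/;
        proxy_set_header Host $host;
    }
}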
I've published my API, ID server (STS) and web UI in separate Docker containers, and I'm using an nginx container as the reverse proxy to serve these apps. I can browse to each of them and even open the discovery endpoint for the STS. The problem comes when I try to log in to the web portal: it redirects me back to the STS to log in, but I get ERR_CONNECTION_REFUSED. The URL looks okay; I think the STS is not reachable from the redirection issued by the web UI.
My docker-compose is as below:
version: '3.4'
services:
  reverseproxy:
    container_name: reverseproxy
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./proxy.conf:/etc/nginx/proxy.conf
      - ./cert:/etc/nginx
    ports:
      - 8080:8080
      - 8081:8081
      - 8082:8082
      - 443:443
    restart: always
    links:
      - sts
  sts:
    container_name: sts
    image: idsvrsts:latest
    links:
      - localdb
    expose:
      - "8080"
  kernel:
    container_name: kernel
    image: kernel_api:latest
    depends_on:
      - localdb
    links:
      - localdb
  portal:
    container_name: portal
    image: webportal:latest
    environment:
      - TZ=Europe/Moscow
    depends_on:
      - localdb
      - sts
      - kernel
      - reverseproxy
  localdb:
    image: mcr.microsoft.com/mssql/server
    container_name: localdb
    environment:
      - 'MSSQL_SA_PASSWORD=password'
      - 'ACCEPT_EULA=Y'
      - TZ=Europe/Moscow
    ports:
      - "1433:1433"
    volumes:
      - "sqldatabasevolume:/var/opt/mssql/data/"
volumes:
  sqldata:
And this is the nginx.conf:
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream docker-sts {
        server sts:8080;
    }

    upstream docker-kernel {
        server kernel:8081;
    }

    upstream docker-portal {
        server portal:8081;
    }

    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_certificate cert.pem;
    ssl_certificate_key key.pem;
    ssl_password_file global.pass;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_set_header X-Forwarded-Proto $scheme;

    server {
        listen 8080;
        listen [::]:8080;
        server_name sts;

        location / {
            proxy_pass http://docker-sts;
            # proxy_redirect off;
        }
    }

    server {
        listen 8081;
        listen [::]:8081;
        server_name kernel;

        location / {
            proxy_pass http://docker-kernel;
        }
    }

    server {
        listen 8082;
        listen [::]:8082;
        server_name portal;

        location / {
            proxy_pass http://docker-portal;
        }
    }
}
The web UI redirects to the URL below, which works okay if I browse to the STS directly, without nginx.
http://localhost/connect/authorize?client_id=myclient.id&redirect_uri=http%3A%2F%2Flocalhost%3A22983%2Fstatic%2Fcallback.html&response_type=id_token%20token&scope=openid%20profile%20kernel.api&state=f919149753884cb1b8f2b907265dfb8f&nonce=77806d692a874244bdbb12db5be40735
Found the issue. The containers could not reach each other because nginx was not appending the port to the URL.
I changed this:
proxy_set_header Host $host;
to this:
proxy_set_header Host $host:$server_port;
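In context, the relevant part of the http block becomes (a sketch based on the config above, with only the Host header line changed):

http {
    # ... upstreams, ssl and other settings unchanged ...

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_cache_bypass $http_upgrade;
    # include the port so URLs generated behind the proxy keep it
    proxy_set_header Host $host:$server_port;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_set_header X-Forwarded-Proto $scheme;
}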
I have a Docker stack running 2 containers: the first is Nginx, the second is the application.
The problem is that nginx shows a Bad Gateway error.
Here is nginx conf:
upstream example {
    server mystack_app1;
    # Also tried with just 'app1'
    # server mystack_app2;
    keepalive 32;
}

server {
    listen 80;
    server_name example;

    location / {
        proxy_pass http://example;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffers 4 32k;
        client_max_body_size 8m;
        client_body_buffer_size 128k;
    }
}
Here is docker-compose.yml
version: "3"
services:
app1:
image: my-app:latest
ports:
- "9000:9000"
networks:
- webnet
web:
image: my-web:latest
ports:
- "81:80"
networks:
- webnet
deploy:
restart_policy:
condition: on-failure
networks:
webnet:
I use the following command to deploy the Docker stack:
docker stack deploy -c docker-compose.yml mystack
So I can access the application from the host's browser at localhost:9000 and it works fine.
Also, from the nginx container, I can ping mystack_app1.
But when accessing localhost:81, nginx shows 502 Bad Gateway.
Please help.
It looks like your upstream definition is not correct. It's trying to connect to port 80 instead of port 9000.
Try
upstream example {
    server mystack_app1:9000;
    # Also tried with just 'app1'
    # server mystack_app2;
    keepalive 32;
}
Btw, I suggest you use container_name in your docker-compose file.