NginX reverse proxy for Kibana over Docker

I have a Docker Compose setup with NginX, ElasticSearch and Kibana like the following:
web:
  build:
    context: .
    dockerfile: ./system/docker/development/web.Dockerfile
  depends_on:
    - app
  volumes:
    - './system/ssl:/etc/ssl/certs'
  networks:
    - mynet
  ports:
    - 80:80
    - 443:443
elasticsearch_1:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
  container_name: "${COMPOSE_PROJECT_NAME:-service}_elasticsearch_1"
  environment:
    - node.name=elasticsearch_1
    - cluster.name=es-docker-cluster
    - discovery.seed_hosts=elasticsearch_2,elasticsearch_3
    - cluster.initial_master_nodes=elasticsearch_1,elasticsearch_2,elasticsearch_3
    - bootstrap.memory_lock=true
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  volumes:
    - es_volume_1:/usr/share/elasticsearch/data
  ports:
    - 9200:9200
  networks:
    - mynet
elasticsearch_2:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
  container_name: "${COMPOSE_PROJECT_NAME:-service}_elasticsearch_2"
  environment:
    - node.name=elasticsearch_2
    - cluster.name=es-docker-cluster
    - discovery.seed_hosts=elasticsearch_1,elasticsearch_3
    - cluster.initial_master_nodes=elasticsearch_1,elasticsearch_2,elasticsearch_3
    - bootstrap.memory_lock=true
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  volumes:
    - es_volume_2:/usr/share/elasticsearch/data
  ports:
    - 9201:9201
  networks:
    - mynet
elasticsearch_3:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
  container_name: "${COMPOSE_PROJECT_NAME:-service}_elasticsearch_3"
  environment:
    - node.name=elasticsearch_3
    - cluster.name=es-docker-cluster
    - discovery.seed_hosts=elasticsearch_1,elasticsearch_2
    - cluster.initial_master_nodes=elasticsearch_1,elasticsearch_2,elasticsearch_3
    - bootstrap.memory_lock=true
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  volumes:
    - es_volume_3:/usr/share/elasticsearch/data
  ports:
    - 9202:9202
  networks:
    - mynet
kibana:
  image: docker.elastic.co/kibana/kibana:7.7.0
  container_name: "${COMPOSE_PROJECT_NAME:-service}_kibana"
  ports:
    - 5601:5601
  environment:
    ELASTICSEARCH_URL: http://elasticsearch_1:9200
    ELASTICSEARCH_HOSTS: http://elasticsearch_1:9200
  networks:
    - mynet
volumes:
  es_volume_1: null
  es_volume_2: null
  es_volume_3: null
networks:
  mynet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/24
          gateway: 172.18.0.1
When I (build and) run this using docker-compose up, I'm able to access Kibana at http://localhost:5601/, but when I try to set up a reverse proxy for it using NginX, I get a 502 Bad Gateway error. Here's my NginX config file:
server {
    listen 80;
    listen 443 ssl http2;

    ssl_certificate /ssl/localhost.crt;
    ssl_certificate_key /ssl/localhost.key;

    ...

    location /app/kibana {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location ~ /\. {
        deny all;
    }

    ...
}
What I'm trying to do here is access Kibana at http://localhost/app/kibana. The articles I've gone through (like this) seem to be focused more on securing Kibana access through NginX (using Basic Auth) than on exposing it at a particular path on port 80.
Update
So, I changed localhost to kibana (as suggested by @mikezter) and now NginX can at least find the Kibana service (so there's no more 502 error).
However, I then got a blank page with a few errors in the browser debug console. Upon searching, I came across this location directive:
location ~ (/app|/translations|/node_modules|/built_assets/|/bundles|/es_admin|/plugins|/api|/ui|/elasticsearch|/spaces/enter) {
    proxy_pass http://kibana:5601;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $http_host;
    proxy_set_header Authorization "";
    proxy_hide_header Authorization;
}
Now the page loads and there is some UI, but there's still some issue with the scripting, so the page is not available for user interaction.
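A blank page with broken script/asset URLs usually means Kibana doesn't know it is being served under a sub-path. A hedged sketch, assuming Kibana 7.x (which maps kibana.yml settings such as server.basePath to uppercase environment variables; the /app/kibana prefix here must match whatever nginx location you use):

```yaml
kibana:
  image: docker.elastic.co/kibana/kibana:7.7.0
  environment:
    ELASTICSEARCH_HOSTS: http://elasticsearch_1:9200
    # tell Kibana about the proxy prefix so it generates correct
    # asset URLs; basePath must start with / and not end with one
    SERVER_BASEPATH: /app/kibana
    SERVER_REWRITEBASEPATH: "true"
  networks:
    - mynet
```

With rewriteBasePath enabled, nginx can proxy the prefixed location straight to http://kibana:5601 without stripping the prefix.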

You are connecting all the containers in this config via a container network. Look at the environment variable set in the Kibana config:
ELASTICSEARCH_URL: http://elasticsearch_1:9200
Here you can see that the hostname of the other container running ElasticSearch is elasticsearch_1. In the same manner, the hostname of the container running Kibana would be kibana. These hostnames are only available inside the container network.
So in your Nginx config, you'll have to proxy_pass to http://kibana:5601 instead of localhost.
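Applied to the config in the question, only the proxy_pass target changes; a minimal sketch:

```nginx
location /app/kibana {
    # "kibana" resolves via Docker's embedded DNS on the shared network;
    # "localhost" inside the nginx container is the nginx container itself
    proxy_pass http://kibana:5601;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
}
```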

This may not fully address the second part of your problem, but using the following:
ELK_VERSION=7.12.0
Kibana seems to work well on the default route '/'.
The below worked for me.
server {
    listen 80 default_server;
    server_name $hostname;
    location / {
        proxy_pass http://kibana:5601;
        # kindly add your header config that works for you
    }
}
I think it has to do with the way you're configuring your nginx location regex match.
The configuration I eventually went with was to have nginx listen on multiple ports, so I isolated Kibana on its own port, serving it on the default route.
E.g. in my nginx.conf:
server {
    listen 80 default_server;
    server_name $hostname;
    location / {
        proxy_pass http://identity-api;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
server {
    listen 81;
    server_name $hostname;
    location / {
        proxy_pass http://kibana:5601;
        # kindly add your header config that works for you
    }
}
Lastly, I updated my nginx ports in docker-compose:
nginx-reverseproxy:
  ports:
    - "80:80"
    - "81:81"

First create a site-file for Nginx:
$ sudo nano /etc/nginx/sites-available/kibana.example.com
$ sudo ln -s /etc/nginx/sites-available/kibana.example.com /etc/nginx/sites-enabled/
Put the following into it:
server {
    listen 80;
    client_max_body_size 4G;
    server_name kibana.example.com;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass http://kibana_server;
    }
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream kibana_server {
    server 127.0.0.1:5601;
}
In your docker-compose.yml, serve Kibana only locally on the host:
services:
  ...
  kibana:
    ports:
      - "127.0.0.1:5601:5601"
    ...
Execute docker compose up -d.
Run a sanity check on your nginx configuration: sudo nginx -t.
Reload nginx: sudo systemctl restart nginx.
Access your Kibana server at http://kibana.example.com.
PS: It's implied that kibana.example.com is just a placeholder domain name.

Related

certbot challenge fails with jellyfin as it returns 404

The jellyfin container runs behind an nginx reverse proxy.
When I try to get an SSL certificate, jellyfin unfortunately returns a 404 error. Does anyone know what I need to change in the configuration to make it work?
my docker-compose.yml
services:
  nginx:
    container_name: nginx
    image: nginx:1.23.3-alpine
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx/conf.d/:/etc/nginx/conf.d/
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/certs/:/etc/nginx/certs/
    networks:
      - jellyfin
  jellyfin:
    container_name: jellyfin
    image: jellyfin/jellyfin
    restart: unless-stopped
    user: 1000:1000
    volumes:
      - ./jellyfin/config:/config
      - ./jellyfin/cache:/cache
      - ./jellyfin/media/:/media
    networks:
      - jellyfin
networks:
  jellyfin:
    driver: bridge
my nginx .conf file
upstream jellyfin {
    server jellyfin:8096;
}

server {
    listen 80;
    server_name jellyfin.mydomain.com;

    location / {
        proxy_pass http://jellyfin/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # upgrade to WebSocket protocol when requested
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
certbot response
Type: unauthorized
Detail: Invalid response from
http://jellyfin.mydomain.com/.well-known/acme-challenge/C8YTfjbIku65D_Hb2BCTkWEzdcwBqk4g8Wks0umq4Hw:
404
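One common cause of this, offered as a guess rather than a confirmed fix for this setup: every request, including certbot's HTTP-01 challenge, is proxied to Jellyfin, which has no such file. The usual pattern is to serve the challenge path from a webroot shared with the certbot container; the /var/www/certbot path and shared volume below are assumptions:

```nginx
server {
    listen 80;
    server_name jellyfin.mydomain.com;

    # serve ACME challenges from disk instead of proxying them
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;   # hypothetical volume shared with certbot
    }

    location / {
        proxy_pass http://jellyfin/;
        # ... existing proxy headers unchanged
    }
}
```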

NGINX/Docker-Compose/Phoenix

I'm creating an application with docker-compose, vue.js, and phoenix/elixir. So far the phoenix application will work on localhost, but will not work when I run the application using docker-compose + NGINX and it's been difficult to debug. Any advice or suggestions would be helpful. The phoenix application itself does not have any of the configuration options changed from the "hello world" application and only adds in socket functionality for a chat room.
Here is the docker-compose file:
version: "3.9"
networks:
  main_network:
volumes:
  volume1:
  varwwwcertbot:
  certbotconf:
  data_volume:
services:
  phx:
    build:
      context: ./phx
    restart: always
    volumes:
      - ./phx:/phx
    ports:
      - "4000:4000"
    depends_on:
      db:
        condition: service_healthy
    networks:
      - main_network
  db:
    image: postgres
    restart: always
    volumes:
      - ./docker-entrypoint-initdb.d/init.sql:/docker-entrypoint-initdb.d/init.sql
      - data_volume:/var/lib/postgresql/data
    environment:
      - POSTGRES_NAME=dev-postgres
      - POSTGRES_USER=pixel
      - POSTGRES_DATABASE=lightchan
      - POSTGRES_PASSWORD=exploration
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U pixel"]
      interval: 5s
      timeout: 5s
      retries: 20
    networks:
      - main_network
  frontend:
    build:
      context: ./frontend
    restart: always
    volumes:
      - './frontend:/app'
      - '/app/node_modules'
    ports:
      - "3000:3000"
    networks:
      - main_network
    depends_on:
      - "phx"
  nginx:
    build:
      context: ./nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - volume1:/usr/share/nginx/html
      - varwwwcertbot:/var/www/certbot
      - certbotconf:/etc/letsencrypt
    networks:
      - main_network
  certbot:
    image: certbot/certbot:latest
    volumes:
      - varwwwcertbot:/var/www/certbot
      - certbotconf:/etc/letsencrypt
    networks:
      - main_network
Here is the nginx file:
events {
}

http {
    map $http_upgrade $connection_upgrade {
        default Upgrade;
        '' close;
    }
    upstream websocket {
        server 164.92.157.124:4000;
    }
    server {
        listen 80;
        server_name localhost lightchan.org www.lightchan.org;
        root /usr/share/nginx/html;
        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }
        if ($scheme = http) {
            return 301 https://lightchan.org$request_uri;
        }
    }
    server {
        listen 443 default_server ssl http2;
        listen [::]:443 ssl http2;
        server_name localhost lightchan.org www.lightchan.org;
        ssl_certificate /etc/letsencrypt/live/lightchan.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/lightchan.org/privkey.pem;
        location /media/pics/ {
            autoindex on;
        }
        location / {
            proxy_pass http://frontend:3000;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_intercept_errors on;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
        location ^~ /socket/ {
            proxy_pass http://websocket;
            add_header X-uri "$uri";
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            # proxy_set_header Upgrade $http_upgrade;
            # proxy_set_header Connection "upgrade";
        }
        location ^~ /api/ {
            proxy_pass http://phx:4000;
            add_header X-uri "$uri";
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            # proxy_set_header Upgrade $http_upgrade;
            # proxy_set_header Connection "upgrade";
        }
    }
}
Note here that I attempted to proxy through an upstream for the /socket/ config, and also attempted /api/ using just the phx docker tag. Both frontend and phx are linked in the docker-compose file, so they should be reachable. I've also turned on the 'upgrade' feature of Connection, so this should be forwarding websockets too.
The phoenix application itself still has the / front page URI, so I would expect that navigating to either www.lightchan.org/api or www.lightchan.org/socket would show the Phoenix hello world splash page, but instead there's a 502 error.
Any suggestions?
EDIT:
I've edited config/config.exs in the phoenix application to run on host 127.0.0.1 like this:
# Configures the endpoint
config :my_app, MyAppWeb.Endpoint,
url: [host: "127.0.0.1"],
render_errors: [view: MyAppWeb.ErrorView, accepts: ~w(html json), layout: false],
pubsub_server: MyApp.PubSub,
live_view: [signing_salt: "blah"]
and tested this running locally, and it works, correctly showing the splash page at /. I've attempted to get this to work using http://<SERVER IP>:4000 on my server, but no luck.
EDIT EDIT:
Frustratingly, my autoindex for /media/pics is not working, but rather routing to the frontend. Here is an embarrassingly thorough guide on routing priorities in NGINX (Guide on how to use regex in Nginx location block section?) and here (Nginx location priority). According to the second link:
location ^~ /images/ {
    # matches any query beginning with /images/ and halts searching,
    # so regular expressions will not be checked.
    [ configuration D ]
}
should mean that

location ^~ /media/pics/ {
    autoindex on;
}
location / {
    <...>

should stop searching and then return an autoindex. So https://www.lightchan.org/media/pics/myimage.png should return myimage.png. That doesn't work, and neither does

location /media/pics/ {
    autoindex on;
}
Although I could have sworn it was working before...hmm.
Thanks to GreenMachine on Discord the answer was found here:
https://dev.to/eikooc/using-docker-for-your-elixir-phoenix-application-n8n
change dev.exs in config:
# from this:
http: [ip: {127, 0, 0, 1}, port: 4000],
# to this:
http: [ip: {0, 0, 0, 0}, port: 4000],

NGINX, Docker-Compose, SSL Reverse Proxy Has failed (111: Connection refused)

I'm trying to get a docker-compose to use a nginx reverse proxy for ssl. I've looked at several different tutorials online, and the below is the best approximation of the answer. However, I am getting a 502 Bad Gateway Error and the following error in nginx. I'm not sure why. https seems to work (as it routes to the Error page), but I don't know what else is happening here. Any ideas?
production_nginx | 2021/01/12 02:54:34 [error] 29#29: *1 connect() failed (111: Connection refused) while connecting to upstream, client: <IP_ADDRESS_HERE>, server: www.websiteunderdevelopment.com, request: "GET / HTTP/1.1", upstream: "http://172.21.0.4:3001/", host: "www.websiteunderdevelopment.net"
Here is the docker container -
version: "3.3"
services:
  nginx:
    image: nginx:latest
    container_name: production_nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./error_log:/etc/nginx/error_log.log
      - ./nginx/cache/:/etc/nginx/cache
      - /etc/letsencrypt/:/etc/letsencrypt/
    ports:
      - 80:80
      - 443:443
    depends_on:
      - blog
      - api
      - db
  blog:
    container_name: blog
    build: ./blog
    ports:
      - 3001:3000
    expose:
      - "3000"
      - "3001"
      - "80"
    depends_on:
      - api
  api:
    container_name: api
    build: ./api
    restart: always
    ports:
      - 4001:4000
    expose:
      - "4000"
      - "4001"
    depends_on:
      - db
    command: ["./wait-for-it.sh", "http://localhost:3306", "--", "npm", "start"]
    volumes:
      - ./api:/var/lib/api
  db:
    container_name: db
    build: ./db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=True
      - MYSQL_DATABASE=blog
      - MYSQL_USER=SUFUPDFD
      - MYSQL_ROOT_PASSWORD=NEST
      - MYSQL_PASSWORD=SUPERSECCRETT
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data: {}
Here is my nginx file -
events {}

http {
    server {
        listen 80;
        listen 443 ssl;
        server_name www.websiteunderdevelopment.com websiteunderdevelopment.com;
        ssl_certificate /etc/letsencrypt/live/www.websiteunderdevelopment.net/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/www.websiteunderdevelopment.net/privkey.pem;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $host:443;
            proxy_set_header X-Forwarded-Port 443;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-Proto https;
            proxy_pass http://blog:3001/;
        }
        location /api {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $host:443;
            proxy_set_header X-Forwarded-Port 443;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-Proto https;
            proxy_pass http://api:4001/;
        }
    }
}
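For what it's worth, the host-port mappings in the compose file don't apply on the container network: nginx reaches blog by its container port, and the compose file maps 3001:3000, meaning blog listens on 3000 inside the network. A hedged sketch of the likely fix:

```nginx
location / {
    # target the container's internal port (3000), not the host port (3001)
    proxy_pass http://blog:3000/;
}
```

The same reasoning would point /api at http://api:4000/ (the api service maps 4001:4000).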

Nginx + Docker Compose - connect() failed (111: Connection refused) while connecting to upstream

thanks for taking the time to read this. I am trying to deploy my application to an AWS EC2 instance using docker-compose. When I run docker-compose up and visit the site, I get the nginx error below. I understand that nginx is receiving the request but is unable to reach an upstream connection to my React app, and I would appreciate any help in correctly configuring the ports/settings.
Error
2 connect() failed (111: Connection refused) while connecting to upstream, client: 108.212.77.70 server: example.com, request: "GET / HTTP/1.1", upstream: "http://172.29.0.4:8003/", host: "example.com"
Here is my nginx default config
upstream meetup_ws {
    server channels:8001;
}
upstream meetup_backend {
    server backend:8000;
}
upstream meetup_frontend {
    server frontend:8003;
}

server {
    listen 0.0.0.0:80;
    server_name example.com www.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com example.com;
    root /var/www/frontend;
    index index.html;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
    add_header Strict-Transport-Security "max-age=31536000";

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://meetup_frontend;
    }
    location /api {
        try_files $uri @proxy_api;
    }
    location @proxy_api {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://meetup_backend;
    }
    location /ws {
        try_files $uri @proxy_websocket;
    }
    location @proxy_websocket {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://meetup_ws;
    }
}
And this is my docker-compose.yml
version: '3'
services:
  nginx:
    build: ./nginx
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./frontend/build:/var/www/frontend
      - ./nginx/certs:/etc/nginx/certs
    depends_on:
      - channels
  db:
    image: postgres:12.0-alpine
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_HOST=db
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  backend: &backend
    build: ./backend
    volumes:
      - ./backend:/app
    ports:
      - 8000:8000
    command: ["python", "manage.py", "runserver"]
    env_file:
      - ./.env
    depends_on:
      - db
      - redis
  frontend:
    build: ./frontend
    volumes:
      - ./frontend:/app
      - node_modules:/app/node_modules
    ports:
      - 8003:8003
    command: npm start
    stdin_open: true
  redis:
    image: "redis:5.0.7"
  worker_channels:
    <<: *backend
    command: ["python", "manage.py", "runworker", "channels"]
    depends_on:
      - db
      - redis
    ports:
      - 8002:8002
  channels:
    <<: *backend
    command: daphne -b 0.0.0.0 -p 8001 backend.asgi:application
    ports:
      - 8001:8001
    depends_on:
      - db
      - redis
volumes:
  node_modules:
  postgres_data:
It is a bit embarrassing why the issue existed, but I was able to solve it. I ran ping frontend in my nginx container, and it successfully pinged the frontend container. Next I ran curl -L http://frontend:8003, and it said curl: (7) Failed to connect to frontend port 8003: Connection refused. I went into the frontend container, ran netstat -tulpn, and it listed 3000 as the port being listened on. I checked my .env file, and it was missing port=8003. Nginx was able to connect upstream afterwards.
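In compose terms, the mismatch looked like this; the sketch below assumes a dev server (e.g. create-react-app style) that reads its listen port from a port variable in .env:

```yaml
# .env (the missing line):
#   port=8003
frontend:
  build: ./frontend
  ports:
    - 8003:8003   # only correct once the app actually listens on 8003
```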

How can I configure IdentityServer4 (DotNet Core) to work in Nginx reverse proxy

I've published my API, IdentityServer STS, and web UI in separate docker containers, and I'm using an nginx container as the reverse proxy to serve these apps. I can browse to each one of them and even open the discovery endpoint for the STS. The problem comes when I try to log in to the web portal: it tries to redirect me back to the STS for logging in, but I get ERR_CONNECTION_REFUSED. The URL looks okay, so I think it's the STS that is not reachable from the redirection from the web UI.
My docker-compose is as below:
version: '3.4'
services:
  reverseproxy:
    container_name: reverseproxy
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./proxy.conf:/etc/nginx/proxy.conf
      - ./cert:/etc/nginx
    ports:
      - 8080:8080
      - 8081:8081
      - 8082:8082
      - 443:443
    restart: always
    links:
      - sts
  sts:
    container_name: sts
    image: idsvrsts:latest
    links:
      - localdb
    expose:
      - "8080"
  kernel:
    container_name: kernel
    image: kernel_api:latest
    depends_on:
      - localdb
    links:
      - localdb
  portal:
    container_name: portal
    image: webportal:latest
    environment:
      - TZ=Europe/Moscow
    depends_on:
      - localdb
      - sts
      - kernel
      - reverseproxy
  localdb:
    image: mcr.microsoft.com/mssql/server
    container_name: localdb
    environment:
      - 'MSSQL_SA_PASSWORD=password'
      - 'ACCEPT_EULA=Y'
      - TZ=Europe/Moscow
    ports:
      - "1433:1433"
    volumes:
      - "sqldatabasevolume:/var/opt/mssql/data/"
volumes:
  sqldata:
And this is the nginx.config:
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream docker-sts {
        server sts:8080;
    }
    upstream docker-kernel {
        server kernel:8081;
    }
    upstream docker-portal {
        server portal:8081;
    }

    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_certificate cert.pem;
    ssl_certificate_key key.pem;
    ssl_password_file global.pass;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_set_header X-Forwarded-Proto $scheme;

    server {
        listen 8080;
        listen [::]:8080;
        server_name sts;
        location / {
            proxy_pass http://docker-sts;
            # proxy_redirect off;
        }
    }
    server {
        listen 8081;
        listen [::]:8081;
        server_name kernel;
        location / {
            proxy_pass http://docker-kernel;
        }
    }
    server {
        listen 8082;
        listen [::]:8082;
        server_name portal;
        location / {
            proxy_pass http://docker-portal;
        }
    }
}
The web UI redirects to the URL below, which works okay if I browse to it using the STS server without nginx.
http://localhost/connect/authorize?client_id=myclient.id&redirect_uri=http%3A%2F%2Flocalhost%3A22983%2Fstatic%2Fcallback.html&response_type=id_token%20token&scope=openid%20profile%20kernel.api&state=f919149753884cb1b8f2b907265dfb8f&nonce=77806d692a874244bdbb12db5be40735
Found the issue. The containers could not see each other because nginx was not appending the port to the URL.
I changed this:
proxy_set_header Host $host;
To this:
proxy_set_header Host $host:$server_port;
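In context, the change sits in the proxy headers shared by all three server blocks; a minimal sketch of the STS one:

```nginx
server {
    listen 8080;
    server_name sts;
    location / {
        # keep the non-default port in the Host header so the STS
        # builds redirect URLs that include :8080
        proxy_set_header Host $host:$server_port;
        proxy_pass http://docker-sts;
    }
}
```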