502 Bad Gateway / Connection Refused with Nginx, Docker Compose and Keycloak

I'm having trouble configuring Nginx and Keycloak together with docker-compose. I keep receiving a 502 Bad Gateway error when trying to access the Keycloak dashboard behind an Nginx reverse proxy.
Here is my docker-compose.yaml file:
# Docker Compose file reference (https://docs.docker.com/compose/compose-file/)
version: '3.8'
services:
  nginx:
    image: my-nginx-image
    ports:
      - "80:80"
    depends_on:
      - db-keycloak
      - keycloak
    restart: always
    networks: # join the backend and frontend networks
      - backend
      - frontend
  # Keycloak service (auth server)
  keycloak:
    image: jboss/keycloak:15.0.0
    restart: always
    depends_on:
      - db-keycloak
    environment:
      DB_VENDOR: postgres
      DB_ADDR: db-keycloak
      DB_DATABASE: keycloak
      DB_USER: ${KEYCLOAK_DB_USER}
      DB_PASSWORD: ${KEYCLOAK_DB_PASSWORD}
      KEYCLOAK_USER: ${KEYCLOAK_ADMIN_USER}
      KEYCLOAK_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
      PROXY_ADDRESS_FORWARDING: "true"
    command: ["-Djboss.http.port=8100"]
    ports:
      - "8100:8100"
    networks: # join the backend and frontend networks
      - backend
      - frontend # commenting out this line somehow resolves my issue
  # Keycloak database service
  db-keycloak:
    image: postgres:latest
    ports:
      - "5432:5432"
    restart: always
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: ${KEYCLOAK_DB_USER}
      POSTGRES_PASSWORD: ${KEYCLOAK_DB_PASSWORD}
    networks:
      - backend # join the backend network only
    volumes:
      - db-keycloak-data:/var/lib/postgres # persist keycloak db data
# Volumes
volumes:
  db-keycloak-data:
# Networks for the backend and frontend
networks:
  backend:
  frontend:
I'm using a custom Nginx image built from the following Dockerfile:
FROM nginx
EXPOSE 80
COPY ./default.conf /etc/nginx/conf.d/default.conf
CMD ["nginx", "-g", "daemon off;"]
The default.conf file is:
upstream keycloak {
    server keycloak:8100;
}

server {
    listen 80;

    # keycloak
    location /auth {
        proxy_pass http://keycloak/auth;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location /auth/admin {
        proxy_pass http://keycloak/auth/admin;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
I've disabled SSL in Keycloak, and I'm running Keycloak on port 8100 because another container is using 8080 (I've excluded some of the irrelevant config for my other images; just note that I have other services running on the backend and frontend networks). The problem I'm having is that when I try to access the Keycloak dashboard at /auth, I am greeted with a 502 Bad Gateway page. However, if I remove the keycloak service from the frontend network, I can access the dashboard just fine, like the following:
networks: # only join the backend network
  - backend
This is the output from Nginx when I try to navigate to the page:
2022/01/07 17:27:52 [error] 31#31: *5 connect() failed (111: Connection refused) while connecting to upstream, client: 94.10.13.254, server: , request: "GET /auth HTTP/1.1", upstream: "http://172.18.0.4:8100/auth", host: "my-ec2-instance.eu-west-2.compute.amazonaws.com"
Running docker container inspect on the Keycloak container, I can see that the IP 172.18.0.4 does match, so it seems to be forwarding the request to the correct container address, and Nginx and Keycloak are both on the same network. Could this be an issue with my docker-compose configuration, or maybe Keycloak refusing the connection for another reason? Is there something I'm missing? Let me know if there is any other info I should include.
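One way to narrow this down is to check, from inside the nginx container, which address Docker's DNS returns for the keycloak name and whether that address accepts connections on 8100. This is a diagnostic sketch assuming the Docker Compose v2 CLI; getent ships with the stock Debian-based nginx image, but curl may need to be installed there first:

```shell
# Resolve the keycloak service name from inside the nginx container.
# With keycloak attached to two networks, Docker's DNS can return
# either address, and Keycloak may only be listening on one of them.
docker compose exec nginx getent hosts keycloak

# Probe the upstream port directly (curl may need to be installed
# in the nginx image for this to work)
docker compose exec nginx curl -sS -o /dev/null -w '%{http_code}\n' \
    http://keycloak:8100/auth/
```

If getent returns an address on which the curl probe is refused, the 502 is a binding/network-selection issue rather than an Nginx configuration problem, which would fit the observation that dropping the frontend network fixes it.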

Related

Upstream timed out error when deploying Docker Nginx FastAPI application on Google Cloud

I'm trying to deploy a simple FastAPI app with Docker and an Nginx proxy on Google Cloud using a plain ssh-terminal window.
My nginx.conf:
access_log /var/log/nginx/app.log;
error_log /var/log/nginx/app.log;

proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 128;

proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header Proxy "";

upstream app_server {
    server example.com:8000;
}

server {
    server_name example.com;
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /root/ssl/cert.pem;
    ssl_certificate_key /root/ssl/key.pem;

    location / {
        proxy_pass "http://app_server";
    }
}
My docker-compose.yml:
version: '3.8'
services:
  reverse-proxy:
    image: jwilder/nginx-proxy
    container_name: reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx:/etc/nginx/conf.d
      - ./ssl/cert1.pem:/root/ssl/cert.pem
      - ./ssl/privkey1.pem:/root/ssl/key.pem
      - ./ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem
    networks:
      - reverse-proxy
  web:
    environment: [.env]
    build: ./project
    ports:
      - "8000:8000"
    command: gunicorn main:app -k uvicorn.workers.UvicornWorker -w 2 -b 0.0.0.0:8000
    volumes:
      - ./project:/usr/src/app
    networks:
      - reverse-proxy
      - back
networks:
  reverse-proxy:
    external:
      name: reverse-proxy
  back:
    driver: bridge
After running docker-compose up and going to the example.com address, I get this error:
*3 upstream timed out (110: Connection timed out) while connecting to upstream...
I have also opened the ports with the Google Cloud Firewall service (checked with netstat) and configured my VM instance with the network parameters from this article.
I don't understand why I receive a 504 Gateway Time-out, because my service works with a similar configuration on a simple VPS hosting, and it also works from inside the Google Cloud VM's ssh-terminal when using curl against localhost instead of the example.com domain. How can I run my service on a Google Cloud VM using only docker-compose?
In the Nginx config file, reference the web container by its service name:
upstream app_server {
    server web:8000;
}
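Once the proxy reaches the app by service name over the shared Docker network, publishing port 8000 on the host is not required for the proxy path. A sketch of the web service (the ports mapping can stay if direct host access is still wanted):

```yaml
# Fragment of docker-compose.yml: the proxy connects to web:8000 over
# the shared reverse-proxy network, so no "ports:" entry is needed
# for proxying (only for direct host access).
web:
  build: ./project
  command: gunicorn main:app -k uvicorn.workers.UvicornWorker -w 2 -b 0.0.0.0:8000
  volumes:
    - ./project:/usr/src/app
  networks:
    - reverse-proxy
    - back
```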

Containerized Keycloak behind Nginx not working (502 Bad Gateway)

I need to serve containerized keycloak behind Nginx. Keycloak runs without any problem at 'localhost:8080' but when I try to access it through the reverse proxy at 'localhost/auth' I get '502 Bad Gateway'.
Here's the details of the error taken from Nginx logs:
[error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.80.1, server: , request: "GET /auth/ HTTP/1.1", upstream: "http://127.0.0.1:8080/auth/", host: "localhost"
Please find below my docker-compose file (I haven't pasted the other containers):
version: '3'
services:
  keycloak:
    image: jboss/keycloak
    container_name: keycloak
    ports:
      - "8080:8080"
    environment:
      KEYCLOAK_USER: ${KEYCLOAK_USER}
      KEYCLOAK_PASSWORD: ${KEYCLOAK_PASSWORD}
      KEYCLOAK_DB_VENDOR: ${KEYCLOAK_DB_VENDOR}
      KEYCLOAK_DB_ADDR: ${KEYCLOAK_DB_ADDR}
      KEYCLOAK_DB_DATABASE: ${KEYCLOAK_DB_DATABASE}
      KEYCLOAK_DB_USER: ${KEYCLOAK_DB_USER}
      KEYCLOAK_DB_PASSWORD: ${KEYCLOAK_DB_PASSWORD}
      PROXY_ADDRESS_FORWARDING: 'true'
    depends_on:
      - keycloak-db
  keycloak-db:
    image: postgres
    container_name: keycloak-db
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${POSTGRES_USERNAME}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: keycloak
    volumes:
      - ./keycloak/data:/var/lib/postgresql/data/
    networks:
      - my-app
  nginx:
    image: nginx:1.15-alpine
    container_name: nginx
    build:
      context: ./nginx
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - ./nginx/conf:/etc/nginx/conf.d
    networks:
      - my-app
networks:
  my-app:
This is the Nginx upstream.conf:
# path: /etc/nginx/conf.d/upstream.conf
# Keycloak server
upstream keycloak {
    server localhost:8080;
}
Nginx default.conf:
server {
    listen 80;
    root /usr/share/nginx/html;
    include /etc/nginx/mime.types;

    # keycloak
    location /auth {
        proxy_pass http://keycloak/auth;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location /auth/admin {
        proxy_pass http://keycloak/auth/admin;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
Does anyone have any idea what is wrong in my configuration?
Obvious problem:
upstream keycloak {
    server localhost:8080;
}
Each container has its own "localhost", so you are connecting to nginx's localhost, which is not keycloak's localhost, nor the host's localhost. Use the service name there instead, e.g.:
upstream keycloak {
    server keycloak:8080;
}
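Note also that in the compose file above, the keycloak service declares no networks: entry, so it joins the project's default network, while nginx joins only my-app. The service name can only resolve if both containers share a network, so keycloak likely needs attaching too. A sketch of the fragment to add:

```yaml
# Fragment of docker-compose.yml: attach keycloak to the same network
# as nginx so the name "keycloak" resolves via Docker's embedded DNS.
keycloak:
  image: jboss/keycloak
  container_name: keycloak
  # ... (ports, environment, depends_on as above)
  networks:
    - my-app   # must match the network nginx is attached to
```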

Docker nginx reverse proxy error - 502 bad gateway - connection refused

I'm having a problem getting my NGINX reverse proxy to work on Docker.
When I access:
local.lab: NGINX responds with the expected index.html page
127.0.0.1:2000, 127.0.0.1:2001 or 127.0.0.1:2002: the services work and I get the expected results
local.lab/a1, local.lab/a2 or local.lab/a3: I get a "502 Bad Gateway" error.
Detailed error from the nginx log:
2021/02/25 18:20:48 [error] 30#30: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: local.lab, request: "GET /a2 HTTP/2.0", upstream: "http://127.0.0.1:2006/", host: "www.local.lab"
I tried adding network_mode: host to the nginx service in docker-compose without success.
I'm using docker compose:
version: '3.7'
services:
  nginx:
    container_name: lab-nginx
    image: nginx:latest
    restart: always
    depends_on:
      - http1
      - http2
      - http3
    volumes:
      - ./html:/usr/share/nginx/html/
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./error_log/error.log:/var/log/nginx/error.log
      - ./cert:/var/log/nginx/cert/
    ports:
      - "80:80"
      - "443:443"
  http1:
    container_name: lab-http1
    image: httpd:latest
    restart: always
    # build:
    #   context: ./apache_service
    ports:
      - "2000:80"
      - "2005:443"
    volumes:
      - ./apache/index1.html:/usr/local/apache2/htdocs/index.html
  http2:
    container_name: lab-http2
    image: httpd:latest
    restart: always
    ports:
      - "2001:80"
      - "2006:443"
    volumes:
      - ./apache/index2.html:/usr/local/apache2/htdocs/index.html
  http3:
    container_name: lab-http3
    image: httpd:latest
    restart: always
    ports:
      - "2002:80"
      - "2007:443"
    volumes:
      - ./apache/index3.html:/usr/local/apache2/htdocs/index.html
My nginx config:
worker_processes auto;
events { worker_connections 1024; }
error_log /var/log/nginx/error.log error;

http {
    server {
        listen 443 ssl http2;
        server_name local.lab;
        ssl_certificate /var/log/nginx/cert/local.lab.crt;
        ssl_certificate_key /var/log/nginx/cert/local.lab.key;
        ssl_protocols TLSv1.3;

        location / {
            root /usr/share/nginx/html;
            index index.html;
        }
        location /a1 {
            proxy_pass http://127.0.0.1:2000/;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
        location /a2 {
            proxy_pass http://127.0.0.1:2001/;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
        location /a3 {
            proxy_pass http://127.0.0.1:2002/;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
How can I fix this?
The reverse proxy configuration in NGINX should reference the internal ports of your services, not the external ports they are mapped to in docker-compose.yml. Because the services run in separate containers with distinct names, they can all listen on the same internal port (80 in this case); use the service name rather than the loopback address. They still need different external ports, though, because only one service can bind a given port on the host.
For example:
location /a1 {
    proxy_pass http://http1:80/;
    proxy_set_header X-Forwarded-For $remote_addr;
}
location /a2 {
    proxy_pass http://http2:80/;
    proxy_set_header X-Forwarded-For $remote_addr;
}
location /a3 {
    proxy_pass http://http3:80/;
    proxy_set_header X-Forwarded-For $remote_addr;
}
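The same rule applies if named upstream blocks are preferred over inline addresses. A sketch using the service names from the compose file above (the upstream names apache1/apache2/apache3 are made up for illustration):

```nginx
# Goes inside the http block; each upstream targets the service name
# and the container-internal port, not the published host port.
upstream apache1 { server http1:80; }
upstream apache2 { server http2:80; }
upstream apache3 { server http3:80; }
```

The location blocks would then use proxy_pass http://apache1/; and so on.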

Nginx can't find upstream from docker-compose

I am trying to run an nginx proxy server with a ktor Java server behind it. However, nginx throws "111: Connection refused" with the configurations below. I've tried the suggestion found on the web of changing the upstream server name from localhost to the docker-compose service name, but it didn't help.
Thank you in advance, and sorry for my poor English.
docker-compose.yml
version: "3.8"
services:
  nginx:
    image: nginx:1.19.3
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Nginx/logs:/var/log/nginx
      - ./Nginx/confs:/etc/nginx/conf.d
      - ./Nginx/confs:/etc/nginx/keys
  mariadb:
    image: mariadb:10.5.6
    ports:
      - "3306:3306"
    volumes:
      - ./Mariadb/data:/var/lib/mysql
      - ./Mariadb/confs:/etc/mysql/conf.d
      - ./Mariadb/inits:/docker-entrypoint-initdb.d
    env_file:
      - .env
    environment:
      TZ: Asia/Seoul
      MYSQL_USER: dockhyub
  yangjin208:
    build: ./Yangjin208
    ports:
      - "3000:8080"
    env_file:
      - .env
      - ./Yangjin208/.env
    links:
      - mariadb:sql
yangjin208.conf under ./Nginx/confs
upstream yangjin208_app {
    server yangjin208:3000;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://yangjin208_app;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
localhost:3000 is accessible from a browser with no problems.
So, I've found the problem: on the Docker internal network, containers are reached on their original (container-internal) port, not the port published via the "ports" mapping in docker-compose.yml. I was using port 3000 (the published port) instead of 8080 (the container's internal port), and that was why it didn't work. Thanks, everyone.
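In other words, the fix is to point the upstream at the container-internal port rather than the published one:

```nginx
upstream yangjin208_app {
    # 8080 is the port the ktor server listens on inside the container;
    # 3000 is only the published host port and is not used on the
    # Docker-internal network.
    server yangjin208:8080;
}
```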
I think you're not using the yangjin208.conf in nginx. Rename yangjin208.conf to default.conf.

Change remote IP when using L2TP VPN with docker

I have an L2TP server set up with docker-compose, and nginx to restrict certain hosts behind a hostname, but when I try to connect, nginx reads the original IP, not the IP proxied through the VPN: nginx shows x.x.x.x instead of 192.168.x.x for the client IP.
As a result, it gives me a 403 (Forbidden) error when I connect from any remote IP that isn't one of those I allowed, even while connected to the VPN, and even when the VPN gives me an IP such as 192.168.43.12.
And when I try network_mode: host on the VPN, it fails to route any web traffic at all.
docker-compose.yml:
services:
  vpn:
    image: hwdsl2/ipsec-vpn-server
    restart: always
    env_file:
      - ../config/vpn/vpn.env
    ports:
      - "500:500/udp"
      - "4500:4500/udp"
      - "1701:1701/udp"
    privileged: true
    hostname: example.com
    volumes:
      - /lib/modules:/lib/modules:ro
  nginx:
    build: ../config/nginx
    restart: unless-stopped
    ports:
      - "80:80"
    network_mode: host
nginx site conf:
server {
    listen *:80;
    server_name bt.example.com;
    index index.html;
    access_log /dev/stdout upstreamlog;
    error_log /dev/stderr debug;

    location / {
        allow 127.0.0.1;
        allow 192.168.0.0/16;
        #allow x.x.x.x; # one remote IP I want to allow, normally uncommented
        deny all;
        proxy_pass http://localhost:9091;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
