Nginx reverse proxy for keycloak - docker

I've deployed a Keycloak server at localhost:7070 (in a Docker container, it runs on 8080), and now I want to set up a reverse proxy for it. Here is my conf:
server {
    listen 11080;

    location /auth/ {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:7070/auth/;
    }
}
When I access http://my-ip:11080/auth, I can see the welcome page. But when I follow the login link on the welcome page, it shows an error, and the URL is now http://my-ip/auth/admin/ without the port, but I expect http://my-ip:11080/auth/admin/ with the port 11080.
When I manually type http://my-ip:11080/auth/admin and press Enter, it redirects to http://my-ip/auth/admin/master/console/, but I expect http://my-ip:11080/auth/admin/master/console/ with the port 11080.
I have also tried many solutions that I found, but no luck so far. Could you guys tell me what the problem is here?
UPDATE: docker-compose.yml
version: "3.7"
services:
keycloak:
volumes:
- keycloak-pgdb:/var/lib/postgresql/data
build:
context: .
dockerfile: Dockerfile
ports:
- "7070:8080"
environment:
- KEYCLOAK_USER=admin
- KEYCLOAK_PASSWORD=password
- DB_VENDOR=postgres
- POSTGRES_PASSWORD=root
- POSTGRES_DB=keycloak
- DB_ADDR=localhost
- DB_USER=postgres
- DB_PASSWORD=root
- PROXY_ADDRESS_FORWARDING=true
volumes:
keycloak-pgdb:
Docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
30ad65460a0c pic-keycloak_keycloak "entrypoint.sh" 38 minutes ago Up 38 minutes 5432/tcp, 0.0.0.0:7070->8080/tcp pic-keycloak_keycloak_1

The application in the container is not aware that you are forwarding port 11080, so when the application renders the response, if it follows the X-Forwarded-* headers, it will use headers such as X-Forwarded-Proto and X-Forwarded-Port to determine where redirects should be sent.
Depending on your application, you have two options to deal with this:
An application that recognizes the X-Forwarded-Port header can be told to redirect to a specific port, as in this case:
proxy_set_header X-Forwarded-Port 11080;
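Applied to the question's config, the location block would look something like this (a sketch; only the X-Forwarded-Port line is new, and PROXY_ADDRESS_FORWARDING=true in the compose file should make Keycloak honor the X-Forwarded-* headers):
location /auth/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Port 11080;  # the external port nginx listens on, used when building redirect URLs
    proxy_pass http://localhost:7070/auth/;
}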
A legacy application that does not obey the rules provided in the headers can be handled by a response rewrite pass. Here is an example with sub_filter:
sub_filter 'http://my-ip/auth' 'http://my-ip:11080/auth';
For sub_filter to work, nginx must be built with the sub module (--with-http_sub_module).
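A fuller sketch of that approach (assuming the module is present; sub_filter cannot rewrite compressed responses, hence the Accept-Encoding reset):
location /auth/ {
    proxy_set_header Accept-Encoding "";  # force an uncompressed upstream response so sub_filter can rewrite it
    proxy_pass http://localhost:7070/auth/;

    sub_filter_once off;  # rewrite every occurrence, not just the first
    sub_filter 'http://my-ip/auth' 'http://my-ip:11080/auth';
}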

Related

Nginx config for unleash not working after reverse proxy

I have a very thin app on a server, and I just set up Unleash (a feature flag management tool) on it with Docker.
I opened port 4242 on both the host and the container (docker-compose segment below).
services:
  custom-unleash:
    container_name: custom_unleash
    image: unleashorg/unleash-server:latest
    command: docker-entrypoint.sh /bin/sh -c 'node index.js'
    ports:
      - "4242:4242"
    environment:
      - DATABASE_HOST=foo
      - DATABASE_NAME=bar
      - DATABASE_USERNAME=foo
      - DATABASE_PASSWORD=bar
      - DATABASE_SSL=false
      - DATABASE_PORT=5432
Then I added the following configuration to my nginx config:
location /unleash {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localhost:4242;
    access_log /var/log/nginx/unleash-access.log main;
}
But when I simply enter http://SERVER_IP:4242/ in my browser, the Unleash login page appears; yet when I try to access the Unleash panel via https://SERVER_DNS/unleash, I get a blank page.
I think this is because the browser tries to fetch the static/index.1f5d6bc3.js file from https://SERVER_DNS/ (i.e. GET https://SERVER_DNS/static/index.1f5d6bc3.js),
whereas in the first scenario, when I enter http://SERVER_IP:4242/, the browser GETs the file from http://SERVER_IP:4242/static/index.1f5d6bc3.js, which works because the Unleash server serves it there.
Why does this happen? How can I stop the Unleash server from pointing at https://SERVER_DNS/static/index.1f5d6bc3.js when that file does not exist on my host server? Is there something wrong with my nginx config?
I'm not sure about the nginx configuration, but since you're deploying under a subpath, maybe you need to add the environment variable UNLEASH_URL, as specified in the docs: https://docs.getunleash.io/reference/deploy/configuring-unleash#unleash-url
If that doesn't help, let me know and I'll get help from someone else from the team.
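Applied to the compose file above, that would be something along these lines (a sketch; the value must match your public URL, including the /unleash subpath):
services:
  custom-unleash:
    environment:
      - UNLEASH_URL=https://SERVER_DNS/unleash  # public base URL; generated asset links then include the /unleash prefix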

Docker Network Wildcard Subdomain routing of internal requests to nginx service

I'm building an e2e test suite inside a docker container for CI.
It has 3 services (admin, platform, dashboard) which all connect to a postgres instance.
I use nginx as a reverse proxy to direct the traffic to the correct service based on the subdomain.
Dashboard (dashboard.localtest.me) and Admin (admin.localtest.me) each have their own subdomain, and everything else goes to Platform (e.g. accounts.localtest.me, public.localtest.me); there are hundreds of such subdomains. The services also run on their own ports, as part of how the product is designed.
Currently I can bring all this up in docker-compose, make requests to <foo>.localtest.me:8080 from my browser and nginx will direct all the traffic to the correct endpoints. Works great.
But when the Cypress service tests start making requests (from inside the Docker host) they don't resolve. My assumption is that since the hostname isn't resolving, the request never reaches the nginx service, and so it can't be routed to the correct product service (admin, platform, or dashboard).
If I can route everything to nginx with a DNS wildcard (*.localtest.me) I think that would work, but I can't figure out which modules or tools allow for that. Everything I've found is about handling a reverse proxy connecting to the Docker host, not containers making URL requests internally. (One possible workaround is sketched after the nginx.conf below.)
TL;DR
How can I allow the Cypress container to make wildcard GET requests (*.localtest.me) to my nginx reverse-proxy container?
This is roughly what my docker-compose looks like; I've removed the internal env vars that aren't relevant:
services:
  postgres:
    build:
      context: ./
      dockerfile: postgres.dockerfile
    ports:
      - 5432:5432
  dashboard:
    build:
      context: ./
      dockerfile: web.dockerfile
    ports:
      - 9001:9000
  platform:
    build:
      context: ./
      dockerfile: web.dockerfile
    ports:
      - 8001:8000
  admin:
    build:
      context: ./
      dockerfile: web.dockerfile
    ports:
      - 7001:7000
  nginx:
    restart: always
    image: nginx
    build:
      context: ./
      dockerfile: nginx.dockerfile
    ports:
      - 8080:80
      - 443:443
      - 8000:8000
      - 9000:9000
      - 7000:7000
  cypress:
    image: cypress-testing
    build:
      context: ./
      dockerfile: cypress-tests.dockerfile
    ports:
      - 6000:6000
Here is my nginx.conf
server {
    listen 80;
    listen 9000;
    server_name dashboard.localtest.me;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_pass http://dashboard:9000/;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_buffering off;
        chunked_transfer_encoding off;
    }
}

server {
    listen 80;
    listen 7000;
    server_name admin.localtest.me;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header Connection "";
        proxy_pass http://admin:7000/;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_buffering off;
        chunked_transfer_encoding off;
    }
}

server {
    listen 80;
    listen 8000;
    server_name ~(^|\.)localtest\.me$;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_pass http://platform:8000/;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_buffering off;
        chunked_transfer_encoding off;
    }
}
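Docker's embedded DNS has no wildcard support, but one possible workaround (a sketch, assuming a known set of subdomains rather than a true wildcard) is to attach network aliases to the nginx service; other containers on the same compose network, including Cypress, will then resolve those names to the nginx container:
services:
  nginx:
    networks:
      default:
        aliases:
          - dashboard.localtest.me
          - admin.localtest.me
          - accounts.localtest.me  # one alias per subdomain the tests hit
A true wildcard would require something like a dnsmasq sidecar configured as the Cypress container's DNS server.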

Nginx run from docker-compose returns "host not found in upstream"

I'm trying to create a reverse proxy in front of an app using nginx, with this docker-compose:
version: '3'
services:
  nginx_cloud:
    build: './nginx-cloud'
    ports:
      - 443:443
      - 80:80
    networks:
      - mynet
    depends_on:
      - app
  app:
    build: './app'
    expose:
      - 8000
    networks:
      - mynet
networks:
  mynet:
And this is my nginx conf (shortened):
server {
    listen 80;
    server_name reverse.internal;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @to_app;
    }

    location @to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app:8000;
    }
}
When I run it, nginx returns:
[emerg] 1#1: host not found in upstream "app" in /etc/nginx/conf.d/app.conf:39
I tried several other proposed solutions without any success. Curiously, if I run nginx manually via shell access from inside the container, it works; I can ping app, etc. But running it from docker-compose, or directly via docker itself, doesn't work.
I tried setting up a separate upstream, adding the Docker internal resolver, waiting a few seconds to be sure the app is already running, etc., with no luck. I know this question has been asked several times, but nothing seems to work so far.
Can you try the following server definition?
server {
    listen 80;
    server_name reverse.*;

    location / {
        resolver 127.0.0.11 ipv6=off;
        set $target http://app:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass $target;
    }
}
The app service may not start in time. To diagnose the issue, try bringing the services up in two steps:
1. docker-compose up -d app
2. Wait 15-20 seconds (or whatever it takes for the app to be up and ready).
3. docker-compose up -d nginx_cloud
If that works, then you have to update the entrypoint in the nginx_cloud service to wait for the app service.
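A minimal entrypoint wrapper along those lines (a sketch; assumes nc is available in the image, as it is in nginx:alpine via BusyBox):
#!/bin/sh
# block until the app service accepts TCP connections, then start nginx in the foreground
until nc -z app 8000; do
  echo "waiting for app:8000..."
  sleep 1
done
exec nginx -g 'daemon off;'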

Connect simple docker compose to nginx?

I have been at this for very long but I can't seem to solve it. I am really stuck, so I turn to you guys.
I am trying something that is supposedly simple: I want to use nginx as a reverse proxy to my front end.
Docker-compose
version: '3.7'
services:
  frontend:
    expose:
      - 9080
    build: ./"..."
    volumes:
      - ./"..."/build:/usr/src/kitschoen-rj/
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/"..."/staticfiles
    ports:
      - 8080:8080
    depends_on:
      - restapi
volumes:
  static_volume:
nginx.conf
upstream kitschoen_frontend {
    server frontend:9080;
}

server {
    listen 8080;

    location / {
        proxy_pass http://kitschoen_frontend;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
I simply can't figure out why I get a "Bad Gateway" error when I go to "localhost:8080".
After some serious troubleshooting I ended my torment. The problem was utterly stupid: I had created a multi-stage build for my React application that also serves the React app with an nginx server (this really reduced the image size for React).
But that inner nginx server exposes port 80 for the React app and forwards all requests accordingly.
So the solution was to change my nginx.conf to:
upstream kitschoen_frontend {
    server frontend:80;
}

server {
    listen 8080;

    location / {
        proxy_pass http://kitschoen_frontend;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
What a horrible day!
If you have read this post this far and are thinking that serving my frontend via a separate nginx server is really horrible design, please feel free to tell me so...
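For context, a multi-stage build like the following (a sketch, not the poster's actual Dockerfile) produces exactly this situation: the final image is the stock nginx image, which listens on port 80 regardless of what the compose file declares in expose:
# build stage: compile the React app (base image and paths are illustrative)
FROM node:18 AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# serve stage: the stock nginx image serves on port 80
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html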

nginx does not automatically pick up dns changes in swarm

I'm running nginx via lets-nginx in the default nginx configuration (as per the lets-nginx project) in a docker swarm:
services:
  ssl:
    image: smashwilson/lets-nginx
    networks:
      - backend
    environment:
      - EMAIL=sas@finestructure.co
      - DOMAIN=api.finestructure.co
      - UPSTREAM=api:5000
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - letsencrypt:/etc/letsencrypt
      - dhparam_cache:/cache
  api:
    image: registry.gitlab.com/project_name/image_name:0.1
    networks:
      - backend
    environment:
      - APP_SETTINGS=/api.cfg
    configs:
      - source: api_config
        target: /api.cfg
    command:
      - run
      - -w
      - tornado
      - -p
      - "5000"
api is a flask app that runs on port 5000 on the swarm overlay network backend.
When the services are initially started up, everything works fine. However, whenever I update the api in a way that makes the api container move between nodes in the three-node swarm, nginx fails to route traffic to the new container.
I can see in the nginx logs that it sticks to the old internal IP, for instance 10.0.0.2, when the new container is now on 10.0.0.4.
In order to make nginx 'see' the new IP I need to either restart the nginx container or docker exec into it and kill -HUP the nginx process.
Is there a better and automatic way to make the nginx container refresh its name resolution?
Thanks to @Moema's pointer I've come up with a solution to this. The default configuration of lets-nginx needs to be tweaked as follows to make nginx pick up IP changes:
resolver 127.0.0.11 ipv6=off valid=10s;
set $upstream http://${UPSTREAM};
proxy_pass $upstream;
This uses docker swarm's resolver with a TTL and sets a variable, forcing nginx to refresh name lookups in the swarm.
Remember that when you use set, you need to generate the entire URL yourself.
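Put together, a fuller sketch (the upstream name and port are assumed here, not taken from the lets-nginx template); because the variable carries no URI part, nginx forwards the original request URI unchanged:
server {
    listen 80;

    # Docker's embedded DNS; re-resolve every 10 seconds so replaced containers are picked up
    resolver 127.0.0.11 ipv6=off valid=10s;
    set $upstream http://api:5000;

    location / {
        proxy_pass $upstream;  # using a variable forces a runtime lookup through the resolver
    }
}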
I was using nginx in a compose to proxy a zuul gateway :
location /api/v1/ {
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_pass http://rs-gateway:9030/api/v1/;
}

location /zuul/api/v1/ {
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_pass http://rs-gateway:9030/zuul/api/v1/;
}
Now, with Swarm, it looks like this:
location ~ ^(/zuul)?/api/v1/(.*)$ {
    set $upstream http://rs-gateway:9030$1/api/v1/$2$is_args$args;
    proxy_pass $upstream;

    # Set headers
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
}
Regexes are good, but don't forget to insert the GET params into the generated URL yourself.
