nginx does not automatically pick up dns changes in swarm - docker

I'm running nginx via lets-nginx in the default nginx configuration (as per the lets-nginx project) in a docker swarm:
services:
  ssl:
    image: smashwilson/lets-nginx
    networks:
      - backend
    environment:
      - EMAIL=sas@finestructure.co
      - DOMAIN=api.finestructure.co
      - UPSTREAM=api:5000
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - letsencrypt:/etc/letsencrypt
      - dhparam_cache:/cache
  api:
    image: registry.gitlab.com/project_name/image_name:0.1
    networks:
      - backend
    environment:
      - APP_SETTINGS=/api.cfg
    configs:
      - source: api_config
        target: /api.cfg
    command:
      - run
      - -w
      - tornado
      - -p
      - "5000"
api is a Flask app that runs on port 5000 on the swarm overlay network backend.
When the services are initially started up, everything works fine. However, whenever an update to api makes its container move to another node in the three-node swarm, nginx fails to route traffic to the new container.
I can see in the nginx logs that it sticks to the old internal IP, for instance 10.0.0.2, when the new container is now on 10.0.0.4.
To make nginx 'see' the new IP I need to either restart the nginx container or docker exec into it and kill -HUP the nginx process.
Is there a better and automatic way to make the nginx container refresh its name resolution?

Thanks to @Moema's pointer I've come up with a solution. The default configuration of lets-nginx needs to be tweaked as follows to make nginx pick up IP changes:
resolver 127.0.0.11 ipv6=off valid=10s;
set $upstream http://${UPSTREAM};
proxy_pass $upstream;
This uses Docker Swarm's embedded DNS resolver (127.0.0.11) with a short TTL (valid=10s) and routes the request through a variable. Because variables are re-evaluated at request time, nginx re-resolves the name on each request instead of pinning the IP it resolved at startup.
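Put together, a minimal proxy block using this trick might look like the following. This is only a sketch: the server_name and the upstream host api:5000 are taken from the compose file above, and lets-nginx normally generates the surrounding TLS configuration for you.

```nginx
server {
    listen 80;
    server_name api.finestructure.co;

    # 127.0.0.11 is Docker's embedded DNS server; cache lookups for only 10s
    resolver 127.0.0.11 ipv6=off valid=10s;

    location / {
        # Routing through a variable forces nginx to re-resolve "api" at
        # request time instead of pinning the IP resolved at startup
        set $upstream http://api:5000;
        proxy_pass $upstream;
    }
}
```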

Remember that when you use set, you need to build the entire URL yourself.
I was using nginx in a compose file to proxy a zuul gateway:
location /api/v1/ {
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_pass http://rs-gateway:9030/api/v1/;
}

location /zuul/api/v1/ {
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_pass http://rs-gateway:9030/zuul/api/v1/;
}
Now with Swarm it looks like that :
location ~ ^(/zuul)?/api/v1/(.*)$ {
    set $upstream http://rs-gateway:9030$1/api/v1/$2$is_args$args;
    proxy_pass $upstream;

    # Set headers
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
}
Regexes are handy, but don't forget to append the GET parameters ($is_args$args) to the generated URL yourself.

Related

Nginx config for unleash not working after reverse proxy

I have a very thin app on a server and I just set up Unleash (a feature-flag management tool) on it with Docker.
I opened port 4242 on both the host and the container (docker-compose segment below).
services:
  custom-unleash:
    container_name: custom_unleash
    image: unleashorg/unleash-server:latest
    command: docker-entrypoint.sh /bin/sh -c 'node index.js'
    ports:
      - "4242:4242"
    environment:
      - DATABASE_HOST=foo
      - DATABASE_NAME=bar
      - DATABASE_USERNAME=foo
      - DATABASE_PASSWORD=bar
      - DATABASE_SSL=false
      - DATABASE_PORT=5432
Then I added the following to my nginx config:
location /unleash {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localhost:4242;
    access_log /var/log/nginx/unleash-access.log main;
}
When I simply enter http://SERVER_IP:4242/ in my browser, the Unleash login page appears; but when I try to access the panel via https://SERVER_DNS/unleash, I get a blank page.
I think this is because the browser tries to fetch static/index.1f5d6bc3.js from https://SERVER_DNS/ (i.e. GET https://SERVER_DNS/static/index.1f5d6bc3.js).
In the first scenario, the browser fetches the file from http://SERVER_IP:4242/static/index.1f5d6bc3.js, which works because the Unleash server serves it.
Why does this happen? How can I stop the Unleash server from referencing https://SERVER_DNS/static/index.1f5d6bc3.js when it does not exist on my host? Is there something wrong with my nginx config?
I'm not sure about the nginx configuration, but since you're deploying under a subpath, you may need to set the environment variable UNLEASH_URL as specified in the docs: https://docs.getunleash.io/reference/deploy/configuring-unleash#unleash-url
If that doesn't help, let me know and I'll get help from someone else from the team.
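On the nginx side, a common pattern when proxying an app under a subpath is to strip the prefix with trailing slashes on both location and proxy_pass. This is only a sketch of that pattern; whether Unleash's asset URLs then resolve correctly still depends on UNLEASH_URL being set to the public URL:

```nginx
location /unleash/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # Trailing slashes on both sides: /unleash/foo is forwarded as /foo
    proxy_pass http://localhost:4242/;
}
```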

Nginx reverse proxy for keycloak

I've deployed a Keycloak server at localhost:7070 (in the Docker container it runs on 8080), and now I want to set up a reverse proxy for it. Here is my conf:
server {
    listen 11080;

    location /auth/ {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:7070/auth/;
    }
}
When I access http://my-ip:11080/auth, I can see the welcome page. But when I try to log in by following the link on the welcome page, it shows an error, and the URL is now http://my-ip/auth/admin/, but I expect http://my-ip:11080/auth/admin/ with the port 11080.
When I manually type http://my-ip:11080/auth/admin and press Enter, it redirects to http://my-ip/auth/admin/master/console/, but I expect http://my-ip:11080/auth/admin/master/console/ with the port 11080.
I have also tried many solutions I found, but no luck so far. Could you tell me what the problem is here?
UPDATE: docker-compose.yml
version: "3.7"
services:
  keycloak:
    volumes:
      - keycloak-pgdb:/var/lib/postgresql/data
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "7070:8080"
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=password
      - DB_VENDOR=postgres
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=keycloak
      - DB_ADDR=localhost
      - DB_USER=postgres
      - DB_PASSWORD=root
      - PROXY_ADDRESS_FORWARDING=true
volumes:
  keycloak-pgdb:
Docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
30ad65460a0c pic-keycloak_keycloak "entrypoint.sh" 38 minutes ago Up 38 minutes 5432/tcp, 0.0.0.0:7070->8080/tcp pic-keycloak_keycloak_1
The application in the container is not aware that you are forwarding port 11080, so when it renders a response and follows the X-Forwarded-* headers, it uses X-Forwarded-Proto and X-Forwarded-Port to decide where redirects should point.
Depending on your application, you have two options to deal with this:
An application that recognizes the X-Forwarded-Port header can be told to redirect to a specific port:
proxy_set_header X-Forwarded-Port 11080;
A legacy application that does not honor the headers can be handled with a response rewrite pass. Here is an example with sub_filter:
sub_filter 'http://my-ip/auth' 'http://my-ip:11080/auth';
For sub_filter to work, nginx must be built with the module enabled (--with-http_sub_module).
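A fuller sub_filter setup might look like the sketch below. Note that my-ip stands in for the real host here, and clearing Accept-Encoding is needed because sub_filter cannot rewrite compressed response bodies:

```nginx
location /auth/ {
    proxy_pass http://localhost:7070/auth/;

    # sub_filter only works on uncompressed bodies, so disable upstream gzip
    proxy_set_header Accept-Encoding "";

    sub_filter 'http://my-ip/auth' 'http://my-ip:11080/auth';
    # Rewrite every occurrence, not just the first one per response
    sub_filter_once off;
}
```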

Nginx run from docker-compose returns "host not found in upstream"

I'm trying to create a reverse proxy towards an app by using nginx with this docker-compose:
version: '3'
services:
  nginx_cloud:
    build: './nginx-cloud'
    ports:
      - 443:443
      - 80:80
    networks:
      - mynet
    depends_on:
      - app
  app:
    build: './app'
    expose:
      - 8000
    networks:
      - mynet
networks:
  mynet:
And this is my nginx conf (shortened):
server {
    listen 80;
    server_name reverse.internal;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @to_app;
    }

    location @to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app:8000;
    }
}
When I run it, nginx returns:
[emerg] 1#1: host not found in upstream "app" in /etc/nginx/conf.d/app.conf:39
I tried several other proposed solutions without any success. Curiously, if I start nginx manually via shell access from inside the container, it works and I can ping app etc. But when it is started by docker-compose, or directly by docker itself, it doesn't work.
I tried setting up a separate upstream, adding the Docker internal resolver, waiting a few seconds to be sure the app is already running, etc., with no luck. I know this question has been asked several times, but nothing seems to work so far.
Can you try the following server definition?
server {
    listen 80;
    server_name reverse.*;

    location / {
        resolver 127.0.0.11 ipv6=off;
        set $target http://app:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass $target;
    }
}
The app service may not start in time.
To diagnose the issue, try a two-step approach:
docker-compose up -d app
wait 15-20 seconds (or whatever it takes for the app to be up and ready)
docker-compose up -d nginx_cloud
If that works, then you have to update the entrypoint of the nginx_cloud service to wait for the app service.
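One way to implement that wait is a small polling helper in the container's entrypoint. This is a sketch under assumptions: the service name app and port 8000 come from the compose file above, and nc must be available in the nginx image (it is in the Alpine-based images):

```shell
#!/bin/sh
# wait_for HOST PORT [TRIES]: poll a TCP port once per second until it
# accepts connections, or give up after TRIES attempts (default 30).
wait_for() {
  host="$1"; port="$2"; tries="${3:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if nc -z "$host" "$port" 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# In the entrypoint, block until the upstream answers, then start nginx:
#   wait_for app 8000 && exec nginx -g 'daemon off;'
```

Alternatively, healthchecks plus depends_on: condition: service_healthy achieve the same in newer compose versions.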

Connect simple docker compose to nginx?

I have been at this for a very long time but I can't seem to solve it. I am really stuck, so I turn to you guys.
I am trying something that is supposedly simple: I want to use nginx as a reverse proxy to my front end.
Docker-compose
version: '3.7'
services:
  frontend:
    expose:
      - 9080
    build: "./..."
    volumes:
      - ./"..."/build:/usr/src/kitschoen-rj/
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/"..."/staticfiles
    ports:
      - 8080:8080
    depends_on:
      - restapi
volumes:
  static_volume:
nginx.conf
upstream kitschoen_frontend {
    server frontend:9080;
}

server {
    listen 8080;

    location / {
        proxy_pass http://kitschoen_frontend;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
I simply can't figure out why I get a "Bad Gateway" error when I go to "localhost:8080".
After some serious troubleshooting I ended my torment. The problem was utterly stupid: I had created a multi-stage build for my React application that also serves the app with an nginx server (this really reduced the image size for React).
But that inner nginx server exposes port 80 for the React app and forwards all requests accordingly.
So the solution was to change my nginx.conf to:
upstream kitschoen_frontend {
    server frontend:80;
}

server {
    listen 8080;

    location / {
        proxy_pass http://kitschoen_frontend;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
What a horrible day!
If you have read this far and are thinking that serving my frontend via a separate nginx server is really horrible design, please feel free to tell me so...

Subdomains, Nginx-proxy and Docker-compose

I'm looking for a way to configure nginx to access hosted services through a subdomain of my server. The services and nginx are instantiated with docker-compose.
In short, when typing jenkins.192.168.1.2, I should reach Jenkins hosted on 192.168.1.2, redirected through the nginx proxy.
A quick look at what I currently have: it doesn't work without a top-level domain name, so it works fine on play-with-docker.com, but not locally with, for example, 192.168.1.2.
server {
    server_name jenkins.REVERSE_PROXY_DOMAIN_NAME;

    location / {
        proxy_pass http://jenkins:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
To have a look of what I want: https://github.com/Ivaprag/devtools-compose
My overall goal is to access remote docker containers without modifying clients' DNS service.
Unfortunately nginx doesn't support subdomains on IP addresses like that.
You would either have to modify the clients' hosts file (which you said you didn't want to do)...
Or you can set up nginx to route by path instead:
location /jenkins {
    proxy_pass http://jenkins:8080;
    ...
}

location /other-container {
    proxy_pass http://other-container:8080;
}
which would allow you to access jenkins at 192.168.1.2/jenkins
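Note that path-based routing usually also requires Jenkins itself to know its prefix, or its redirects and asset URLs will point at the root. A sketch, assuming the jenkins service from the question and Jenkins' standard --prefix option:

```nginx
location /jenkins/ {
    # Keep the /jenkins/ prefix on the upstream request; Jenkins is started
    # with JENKINS_OPTS="--prefix=/jenkins" so it generates matching URLs
    proxy_pass http://jenkins:8080/jenkins/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```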
Or you can try and serve your different containers through different ports. E.g:
server {
    listen 8081;
    location / {
        proxy_pass http://jenkins:8080;
        ...
    }
}

server {
    listen 8082;
    location / {
        proxy_pass http://other-container:8080;
        ...
    }
}
And then access jenkins from 192.168.1.2:8081/
If you are already using docker-compose, I recommend the jwilder nginx-proxy container.
https://github.com/jwilder/nginx-proxy
This lets you put any number of web-service containers behind the defined nginx proxy, for example:
nginx-proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - "/etc/nginx/vhost.d"
    - "/usr/share/nginx/html"
    - "/var/run/docker.sock:/tmp/docker.sock:ro"
    - "nginx_certs:/etc/nginx/certs:rw"

nginx:
  build:
    context: ./docker/nginx/
    dockerfile: Dockerfile
  volumes_from:
    - data
  environment:
    VIRTUAL_HOST: www.host1.com

nginx_2:
  build:
    context: ./docker/nginx_2/
    dockerfile: Dockerfile
  volumes_from:
    - data
  environment:
    VIRTUAL_HOST: www.host2.com

apache_1:
  build:
    context: ./docker/apache_1/
    dockerfile: Dockerfile
  volumes_from:
    - data
  environment:
    VIRTUAL_HOST: www.host3.com
nginx-proxy mounts the host's Docker socket in order to get information about the other running containers; any container that has the VIRTUAL_HOST environment variable set is added to the proxy's configuration.
I was trying to configure subdomains in nginx (on the host) for two virtual hosts in one LXC container.
The way it worked for me:
For Apache (in the container), I created two virtual hosts: one on port 80 and the other on port 90.
To enable port 90 in apache2 (in the container), it was necessary to add the line "Listen 90" below "Listen 80" in /etc/apache2/ports.conf.
For nginx (on the host machine), I configured two domains, both on port 80, creating independent .conf files in /etc/nginx/sites-available and a symbolic link for each into /etc/nginx/sites-enabled.
In the first nginx file, myfirstdomain.conf, redirect to http://my.contai.ner.ip:80.
In the second nginx file, myseconddomain.conf, redirect to http://my.contai.ner.ip:90.
That was it for me!