I have a very thin app on a server and I just set up Unleash (a feature-flag management tool) on it with Docker. I opened port 4242 on both the host and the container (docker-compose segment below).
services:
  custom-unleash:
    container_name: custom_unleash
    image: unleashorg/unleash-server:latest
    command: docker-entrypoint.sh /bin/sh -c 'node index.js'
    ports:
      - "4242:4242"
    environment:
      - DATABASE_HOST=foo
      - DATABASE_NAME=bar
      - DATABASE_USERNAME=foo
      - DATABASE_PASSWORD=bar
      - DATABASE_SSL=false
      - DATABASE_PORT=5432
Then I added the following to my nginx config:
location /unleash {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localhost:4242;
    access_log /var/log/nginx/unleash-access.log main;
}
But while entering http://SERVER_IP:4242/ in my browser brings up the Unleash login page, accessing the Unleash panel via https://SERVER_DNS/unleash gives a blank page.
I think this is because the browser tries to fetch static/index.1f5d6bc3.js from https://SERVER_DNS/ (i.e. GET https://SERVER_DNS/static/index.1f5d6bc3.js).
In the first scenario, when I enter http://SERVER_IP:4242/, the browser does GET http://SERVER_IP:4242/static/index.1f5d6bc3.js, which works because the Unleash server serves it.
Why does this happen? How can I stop the Unleash server from referencing https://SERVER_DNS/static/index.1f5d6bc3.js when that path does not exist on my host? Is there something wrong with my nginx config?
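To double-check my reading of the path resolution, Python's urljoin shows how a root-absolute asset path discards the subpath (the URLs are the placeholder ones from above):

```python
from urllib.parse import urljoin

# The built SPA references its bundle with a root-absolute path ("/static/...").
# Resolved against the proxied subpath, the /unleash prefix is dropped:
base = "https://SERVER_DNS/unleash/"
print(urljoin(base, "/static/index.1f5d6bc3.js"))  # https://SERVER_DNS/static/index.1f5d6bc3.js
print(urljoin(base, "static/index.1f5d6bc3.js"))   # https://SERVER_DNS/unleash/static/index.1f5d6bc3.js
```

So unless the app is told about its public base URL, its root-absolute asset links bypass the /unleash prefix entirely.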
I'm not sure about the nginx configuration, but since you're deploying under a subpath you may need to set the UNLEASH_URL environment variable, as described in the docs: https://docs.getunleash.io/reference/deploy/configuring-unleash#unleash-url
If that doesn't help, let me know and I'll get help from someone else on the team.
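If UNLEASH_URL is indeed the fix, it would go into the compose environment alongside the database settings. A sketch, assuming the public URL is https://SERVER_DNS/unleash (substitute your real domain):

```yaml
services:
  custom-unleash:
    image: unleashorg/unleash-server:latest
    environment:
      # hypothetical value: the externally visible base URL, including the /unleash subpath
      - UNLEASH_URL=https://SERVER_DNS/unleash
      - DATABASE_HOST=foo
      # ...remaining DATABASE_* variables as before
    ports:
      - "4242:4242"
```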
Related
I've deployed a Keycloak server at localhost:7070 (in a Docker container; inside it runs on 8080), and now I want to set up a reverse proxy for it. Here is my conf:
server {
    listen 11080;
    location /auth/ {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:7070/auth/;
    }
}
When I access http://my-ip:11080/auth, I can see the welcome page. But when I try to log in following the link on the welcome page, it shows an error, and the URL is now http://my-ip/auth/admin/, but I expect http://my-ip:11080/auth/admin/ with the port 11080.
When I manually type http://my-ip:11080/auth/admin and press Enter, it redirects to http://my-ip/auth/admin/master/console/, but I expect http://my-ip:11080/auth/admin/master/console/ with the port 11080.
I have also tried many solutions that I found, but no luck so far. Could you tell me what the problem is here?
UPDATE: docker-compose.yml
version: "3.7"
services:
  keycloak:
    volumes:
      - keycloak-pgdb:/var/lib/postgresql/data
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "7070:8080"
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=password
      - DB_VENDOR=postgres
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=keycloak
      - DB_ADDR=localhost
      - DB_USER=postgres
      - DB_PASSWORD=root
      - PROXY_ADDRESS_FORWARDING=true
volumes:
  keycloak-pgdb:
Docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
30ad65460a0c pic-keycloak_keycloak "entrypoint.sh" 38 minutes ago Up 38 minutes 5432/tcp, 0.0.0.0:7070->8080/tcp pic-keycloak_keycloak_1
The application in the container is not aware that you are forwarding port 11080, so when it renders the response, if it follows the X-Forwarded-* headers, it will use X-Forwarded-Proto (and the related headers) to determine where the redirect should be sent.
Depending on your application, you have two options to deal with this:
An application that recognizes the X-Forwarded-Port header can be told to redirect to a specific port, like in this case:
proxy_set_header X-Forwarded-Port 11080;
A legacy application that does not obey those headers can be handled with a response-rewrite pass. Here is an example with sub_filter:
sub_filter 'http://my-ip/auth' 'http://my-ip:11080/auth';
For sub_filter to work, nginx must be built with the sub module (--with-http_sub_module).
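Putting the two options into the location block from the question, a sketch (port and paths are the ones from this setup; sub_filter only rewrites uncompressed responses, hence the Accept-Encoding reset):

```nginx
location /auth/ {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # option 1: a header-aware app will build redirects using this port
    proxy_set_header X-Forwarded-Port 11080;
    # option 2: rewrite absolute URLs in the response body (needs --with-http_sub_module)
    proxy_set_header Accept-Encoding "";
    sub_filter 'http://my-ip/auth' 'http://my-ip:11080/auth';
    sub_filter_once off;
    proxy_pass http://localhost:7070/auth/;
}
```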
I'm trying to create a reverse proxy toward an app using nginx, with this docker-compose:
version: '3'
services:
  nginx_cloud:
    build: './nginx-cloud'
    ports:
      - 443:443
      - 80:80
    networks:
      - mynet
    depends_on:
      - app
  app:
    build: './app'
    expose:
      - 8000
    networks:
      - mynet
networks:
  mynet:
And this is my nginx conf (shortened):
server {
    listen 80;
    server_name reverse.internal;
    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @to_app;
    }
    location @to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app:8000;
    }
}
When I run it, nginx returns:
[emerg] 1#1: host not found in upstream "app" in /etc/nginx/conf.d/app.conf:39
I have tried several other proposed solutions without any success. Curiously, if I start nginx manually via shell access from inside the container, it works; I can ping app, etc. But starting it from docker-compose, or directly via docker itself, doesn't work.
I have tried setting up a separate upstream, adding the Docker internal resolver, waiting a few seconds to be sure the app is already running, etc., with no luck. I know this question has been asked several times, but nothing seems to work so far.
Can you try the following server definition?
server {
    listen 80;
    server_name reverse.*;
    location / {
        resolver 127.0.0.11 ipv6=off;
        set $target http://app:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass $target;
    }
}
The app service may not be up in time.
To diagnose the issue, try a two-step approach:
docker-compose up -d app
wait 15-20 seconds (or however long it takes for the app to be up and ready)
docker-compose up -d nginx_cloud
If that works, then you have to update the entrypoint of the nginx_cloud service to wait for the app service.
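One way to encode that wait in the compose file itself, as a sketch: modern docker-compose supports gating a service on another's health (note this long-form depends_on is ignored by swarm mode, and the wget-based check assumes the app image ships wget):

```yaml
services:
  app:
    build: './app'
    expose:
      - 8000
    healthcheck:
      # hypothetical probe: any cheap HTTP request against the app port
      test: ["CMD-SHELL", "wget -qO- http://localhost:8000/ || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 10
  nginx_cloud:
    build: './nginx-cloud'
    depends_on:
      app:
        condition: service_healthy
```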
I have been at this for a very long time but I can't seem to solve it. I am really stuck, so I turn to you guys.
I am trying something that is supposedly simple: I want to use nginx as a reverse proxy to my front end.
Docker-compose
version: '3.7'
services:
  frontend:
    expose:
      - 9080
    build: ./"..."
    volumes:
      - ./"..."/build:/usr/src/kitschoen-rj/
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/"..."/staticfiles
    ports:
      - 8080:8080
    depends_on:
      - restapi
volumes:
  static_volume:
nginx.conf
upstream kitschoen_frontend {
    server frontend:9080;
}
server {
    listen 8080;
    location / {
        proxy_pass http://kitschoen_frontend;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
I simply can't figure out why I get a "Bad Gateway" error when I go to localhost:8080.
After some serious troubleshooting I ended my torment. The problem was utterly stupid: I had created a multi-stage build for my React application that also serves the app with an nginx server (this really reduced the image size).
But that inner nginx server exposes port 80 for the React app and forwards all requests accordingly.
So the solution was to change my nginx.conf to:
upstream kitschoen_frontend {
    server frontend:80;
}
server {
    listen 8080;
    location / {
        proxy_pass http://kitschoen_frontend;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
What a horrible day!
If you have read this far and are thinking that serving my frontend via a separate nginx server is really horrible design, please feel free to tell me so ...
I'm encountering a very specific problem with my NGINX/RabbitMQ setup, in which the desired result is only accessible via a mobile device. I hope there is someone who can shine a light on what I'm doing wrong :). I have the following setup:
Two droplets on DigitalOcean:
Droplet A with rancher server installed on it
Droplet B, which acts as a host, controlled by Rancher. For this example, assume its IP address is 123.45.678.90
Two images on docker-hub:
myaccount/customnginx
myaccount/customrabbitmq
myaccount/customnginx
Dockerfile:
FROM nginx:latest
COPY nginx.conf /etc/nginx/nginx.conf
nginx.conf (in which http://123.45.678.90:15672 = Droplet B + RabbitMQ port)
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    log_format compression '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $upstream_addr '
                           '"$http_referer" "$http_user_agent" "$gzip_ratio"';
    server {
        listen 80 default_server;
        server_name www.mydomain.nl mydomain.nl;
        access_log /dev/stdout;
        location /rabbitmq/ {
            proxy_pass http://123.45.678.90:15672/;
            rewrite ^/rabbitmq$ /rabbitmq/ permanent;
            rewrite ^/rabbitmq/(.*)$ /$1 break;
            proxy_buffering off;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
myaccount/customrabbitmq
I can provide the RabbitMQ configuration upon request, but I don't think it is of much importance at the moment.
Both images are built into a stack on Rancher via the following docker-compose.yml:
version: '2'
services:
  rabbitmq:
    image: myaccount/customrabbitmq
    ports:
      - 5672:5672
      - 15672:15672
  nginx:
    image: myaccount/customproxy
    ports:
      - 80:80
When I try to access my RabbitMQ manager via www.mydomain.nl/rabbitmq on a mobile device, everything works properly. When I try to do the same with any browser on my desktop (or laptop), nothing works; I don't even see the attempt logged on Rancher (nginx container). I also tried incognito mode and disabling Adblock Plus/Disconnect, but to no avail.
What's wrong with this configuration?
Thanks in advance.
OK, I think I managed to fix this. Either or both of the following had something to do with it:
I enabled IPv6 on the DigitalOcean droplet and added the IPv6 address as an AAAA record (for both www.mydomain.nl and mydomain.nl) in the DNS records with the domain registrar. I don't know much about this subject, but I thought the mobile device might have connected over IPv4 while the desktop tried to connect over IPv6 (which wasn't set up properly). I also went into the Firefox config (type about:config in the address bar) and set network.dns.disableIPv6 to true; this seemed to help.
I waited a day. Maybe it took a little longer for the DNS (the normal A records) to propagate properly.
I'm running nginx via lets-nginx in the default nginx configuration (as per the lets-nginx project) in a docker swarm:
services:
  ssl:
    image: smashwilson/lets-nginx
    networks:
      - backend
    environment:
      - EMAIL=sas@finestructure.co
      - DOMAIN=api.finestructure.co
      - UPSTREAM=api:5000
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - letsencrypt:/etc/letsencrypt
      - dhparam_cache:/cache
  api:
    image: registry.gitlab.com/project_name/image_name:0.1
    networks:
      - backend
    environment:
      - APP_SETTINGS=/api.cfg
    configs:
      - source: api_config
        target: /api.cfg
    command:
      - run
      - -w
      - tornado
      - -p
      - "5000"
api is a Flask app that runs on port 5000 on the swarm overlay network backend.
When the services are initially started up, everything works fine. However, whenever I update api in a way that makes its container move between nodes in the three-node swarm, nginx fails to route traffic to the new container.
I can see in the nginx logs that it sticks to the old internal ip, for instance 10.0.0.2, when the new container is now on 10.0.0.4.
In order to make nginx 'see' the new IP I need to either restart the nginx container or docker exec into it and kill -HUP the nginx process.
Is there a better and automatic way to make the nginx container refresh its name resolution?
Thanks to @Moema's pointer, I've come up with a solution to this. The default configuration of lets-nginx needs to be tweaked as follows to make nginx pick up IP changes:
resolver 127.0.0.11 ipv6=off valid=10s;
set $upstream http://${UPSTREAM};
proxy_pass $upstream;
This uses docker swarm's resolver with a TTL and sets a variable, forcing nginx to refresh name lookups in the swarm.
Remember that when you use set you need to generate the entire URL by yourself.
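For context, those three lines sit inside the proxied location. A fuller sketch of how the pieces fit together, assuming the same UPSTREAM=api:5000 from the compose file above (with the ${UPSTREAM} template already expanded):

```nginx
server {
    listen 80;
    # Docker's embedded DNS server; re-resolve every 10s instead of caching forever
    resolver 127.0.0.11 ipv6=off valid=10s;
    location / {
        # using a variable forces nginx to resolve the name at request time
        set $upstream http://api:5000;
        proxy_pass $upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```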
I was using nginx in a compose file to proxy a Zuul gateway:
location /api/v1/ {
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_pass http://rs-gateway:9030/api/v1/;
}
location /zuul/api/v1/ {
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_pass http://rs-gateway:9030/zuul/api/v1/;
}
Now with Swarm it looks like this:
location ~ ^(/zuul)?/api/v1/(.*)$ {
    set $upstream http://rs-gateway:9030$1/api/v1/$2$is_args$args;
    proxy_pass $upstream;
    # Set headers
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
}
Regexes are handy, but don't forget to append the GET parameters to the generated URL yourself ($is_args$args).
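If the positional $1/$2 captures get confusing, nginx also accepts named captures in a location regex. A sketch of the same block, headers omitted:

```nginx
location ~ ^(?<zuul>/zuul)?/api/v1/(?<rest>.*)$ {
    # $zuul is "/zuul" or empty; $rest is everything after /api/v1/
    set $upstream http://rs-gateway:9030$zuul/api/v1/$rest$is_args$args;
    proxy_pass $upstream;
}
```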