Connect simple docker compose to nginx?

I have been at this for a very long time but I can't seem to solve it. I am really stuck ... so I turn to you guys.
I am trying something that is supposedly simple: I want to use nginx as a reverse proxy to my front end.
docker-compose.yml
version: '3.7'
services:
  frontend:
    expose:
      - 9080
    build: ./"..."
    volumes:
      - ./"..."/build:/usr/src/kitschoen-rj/
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/"..."/staticfiles
    ports:
      - 8080:8080
    depends_on:
      - restapi
volumes:
  static_volume:
nginx.conf
upstream kitschoen_frontend {
    server frontend:9080;
}

server {
    listen 8080;

    location / {
        proxy_pass http://kitschoen_frontend;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
I simply can't figure out why I get a "Bad Gateway" error when I go to localhost:8080.

After some serious troubleshooting I ended my torment. The problem was utterly stupid. I had created a multi-stage build for my react application that also serves the react app with an nginx server (this really reduced the image size for react).
But that react nginx server exposes port 80 for the react app and forwards all requests accordingly.
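For context, the multi-stage setup looks roughly like this (a sketch, not my exact Dockerfile; the node image tag and paths are illustrative):
# build stage: compile the react app
FROM node:16-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# serve stage: the nginx base image listens on port 80 by default
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80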
So the solution was to change my nginx.conf to:
upstream kitschoen_frontend {
    server frontend:80;
}

server {
    listen 8080;

    location / {
        proxy_pass http://kitschoen_frontend;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
What a horrible day!
If you have read this far and are thinking that serving my frontend via a separate nginx server is really horrible design, please feel free to tell me so ...

Related

Domain name not working on Digital Ocean droplet with Docker and Nginx

I've searched a lot of online materials, but I wasn't able to find a solution for my problem. I'll try to make it as clear as possible. I think I'm missing something, and maybe someone with more experience on the server configuration side may have the answer.
I have a MERN stack app and I'm trying to deploy it on a DigitalOcean droplet, using Docker. All good so far, everything runs as it should, except for the fact that I'm not able to access my app by the domain. It works perfectly if I'm using the IP of the droplet.
What I've checked so far:
checked my ufw status and I have both HTTP and HTTPS enabled
the domain is from GoDaddy and it's live, linked with the proper nameservers from Digital Ocean
in the Domains section on Digital Ocean everything is set as it should be. I have the proper CNAME records pointing to the IP of my droplet
a direct ping to my domain works fine (it returns the correct IP)
also checked DNS LookUp tools and everything seems to be linked just fine
When it comes to the Docker containers, I have 3 of them: client, backend and nginx.
This is what my docker-compose looks like:
version: '3'
services:
  nginx:
    container_name: jtg-nginx
    depends_on:
      - backend
      - client
    restart: always
    image: host-of-my-image-nginx:latest
    networks:
      - local-net
    ports:
      - '80:80'
  backend:
    container_name: jtg-backend
    image: host-of-my-image-backend:latest
    ports:
      - "5000:5000"
    volumes:
      - logs:/app/logs
      - uploads:/app/uploads
    networks:
      - local-net
    env_file:
      - .env
  client:
    container_name: jtg-client
    stdin_open: true
    depends_on:
      - backend
    image: host-of-my-image-client:latest
    networks:
      - local-net
    env_file:
      - .env
networks:
  local-net:
    driver: bridge
volumes:
  logs:
    driver: local
  uploads:
    driver: local
I have two instances of Nginx. One is used inside the client container and the other one runs in its own container.
This is the default.conf from the client:
server {
    listen 3000;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
}
Now comes the most important part. This is the default.conf used inside the main Nginx container:
upstream client {
    server client:3000;
}

upstream backend {
    server backend:5000;
}

server {
    listen 80;
    server_name my-domain.com www.my-domain.com;

    location / {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /backend {
        rewrite /backend/(.*) /$1 break;
        proxy_pass http://backend;
    }
}
I really don't understand what's wrong with this configuration, and I think it's something very small that I'm missing.
Thank you!
If you want to set up a domain name in front, you'll need a webserver instance that allows you to proxy_pass your hostname to your container.
So this is what you may want to do:
server {
    listen 80;
    server_name my-domain.com www.my-domain.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /backend {
        rewrite /backend/(.*) /$1 break;
        proxy_pass http://backend;
    }
}
The mystery was solved. After adding an SSL certificate, everything works as it should.
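For reference, a minimal HTTPS server block for this setup could look like the sketch below (the certificate paths assume certbot's default Let's Encrypt layout, which is an assumption; adjust to wherever your certificates live):
server {
    listen 443 ssl;
    server_name my-domain.com www.my-domain.com;

    # paths assume certbot's default layout (an assumption)
    ssl_certificate     /etc/letsencrypt/live/my-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/my-domain.com/privkey.pem;

    location / {
        proxy_pass http://client;
        proxy_set_header Host $host;
    }
}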

Nginx reverse proxy for keycloak

I've deployed a keycloak server at localhost:7070 (inside the Docker container it runs on 8080), and now I want to set up a reverse proxy for it. Here is my conf:
server {
    listen 11080;

    location /auth/ {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:7070/auth/;
    }
}
When I access http://my-ip:11080/auth, I can see the welcome page. But when I try to log in following the link on the welcome page, it shows an error and the URL is now http://my-ip:auth/admin/, but I expect http://my-ip:11080/auth/admin/ with the port 11080.
When I manually type http://my-ip:11080/auth/admin and press Enter, it redirects to http://my-ip/auth/admin/master/console/, but I expect http://my-ip:11080/auth/admin/master/console/ with the port 11080.
I also tried many solutions that I found, but no luck so far. Could you guys tell me what the problem is here?
UPDATE: docker-compose.yml
version: "3.7"
services:
keycloak:
volumes:
- keycloak-pgdb:/var/lib/postgresql/data
build:
context: .
dockerfile: Dockerfile
ports:
- "7070:8080"
environment:
- KEYCLOAK_USER=admin
- KEYCLOAK_PASSWORD=password
- DB_VENDOR=postgres
- POSTGRES_PASSWORD=root
- POSTGRES_DB=keycloak
- DB_ADDR=localhost
- DB_USER=postgres
- DB_PASSWORD=root
- PROXY_ADDRESS_FORWARDING=true
volumes:
keycloak-pgdb:
Docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
30ad65460a0c pic-keycloak_keycloak "entrypoint.sh" 38 minutes ago Up 38 minutes 5432/tcp, 0.0.0.0:7070->8080/tcp pic-keycloak_keycloak_1
The application in the container is not aware that you are forwarding port 11080, so when the application renders the response, if it follows the X-Forwarded-xxxxx headers, it will use X-Forwarded-Proto to determine where the redirection should be sent.
Depending on your application, you have 2 options to deal with these cases:
An application that recognizes the X-Forwarded-Port header can be told to redirect to a specific port, like in this case:
proxy_set_header X-Forwarded-Port 11080;
A legacy application that does not obey the rules provided in the headers can be handled by a response rewrite pass. Here is an example with sub_filter:
sub_filter 'http://my-ip/auth' 'http://my-ip:11080/auth';
For sub_filter to work, the module must be installed and enabled (--with-http_sub_module).
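Put together with the proxy config from the question, option 1 might look like this (a sketch only; Keycloak versions differ in which forwarding headers they honor, and PROXY_ADDRESS_FORWARDING=true must be set as in the compose file above):
location /auth/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # tell the application which public port the proxy listens on
    proxy_set_header X-Forwarded-Port 11080;
    proxy_pass http://localhost:7070/auth/;
}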

NGINX: Connect to backend with service name over Docker Compose's network from frontend

I have a simple dockerized flask backend that listens on 0.0.0.0:8080 and simple dockerized react frontend that sends a request to localhost:8080/api/v1.0/resource.
Now I want to run those containers in docker compose and issue the request to the service's name backend.
The compose file looks like this:
version: '3'
services:
  backend:
    ports:
      - "8080:8080"
    image: "tobiaslocker/simple-dockerized-flask-backend:v0.1"
  frontend:
    ports:
      - "80:80"
    image: "tobiaslocker/simple-dockerized-react-frontend:v0.1"
The NGINX configuration that works for requests to localhost:
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
The frontend sends the request axios.get('http://localhost:8080/api/v1.0/resource')
My questions:
How do I have to configure NGINX to be able to use the service name (e.g. backend)?
How do I have to issue the request to match the configuration?
I am not sure how the proxy_pass will take effect when sending the request from the frontend, and found it hard to debug.
Regards
My Answers:
How do I have to configure NGINX to be able to use the service name (e.g. backend)?
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://backend:8080;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}
Taken from here. Not sure if all settings are relevant but only setting proxy_pass didn't work for me.
How do I have to issue the request to match the configuration?
Same as before: axios.get('http://localhost:8080/api/v1.0/resource'), which makes sense, since it works locally and proxied with NGINX.
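Note that, given the /api location above, the request could also be issued relative to the site origin, so it goes through the nginx proxy instead of the backend's published port (a sketch; whether you prefer this depends on your deployment):
// proxied by nginx: /api/... is forwarded to http://backend:8080
axios.get('/api/v1.0/resource')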

Nginx run from docker-compose returns "host not found in upstream"

I'm trying to create a reverse proxy towards an app by using nginx with this docker-compose:
version: '3'
services:
  nginx_cloud:
    build: './nginx-cloud'
    ports:
      - 443:443
      - 80:80
    networks:
      - mynet
    depends_on:
      - app
  app:
    build: './app'
    expose:
      - 8000
    networks:
      - mynet
networks:
  mynet:
And this is my nginx conf (shortened):
server {
    listen 80;
    server_name reverse.internal;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @to_app;
    }

    location @to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app:8000;
    }
}
When I run it, nginx returns:
[emerg] 1#1: host not found in upstream "app" in /etc/nginx/conf.d/app.conf:39
I tried several other proposed solutions without any success. Curiously, if I run nginx manually via shell access from inside the container, it works; I can ping app etc. But running it from docker-compose, or directly via docker itself, doesn't work.
I tried setting up a separate upstream, adding the docker internal resolver, waiting a few seconds to be sure the app is already running, etc., with no luck. I know this question has been asked several times, but nothing seems to work so far.
Can you try the following server definition?
server {
    listen 80;
    server_name reverse.*;

    location / {
        resolver 127.0.0.11 ipv6=off;
        # the app service listens on 8000, per the compose file above
        set $target http://app:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass $target;
    }
}
The app service may not start in time.
To diagnose the issue, try a 2-step approach:
docker-compose up -d app
wait 15-20 seconds (or whatever it takes for the app to be up and ready)
docker-compose up -d nginx_cloud
If it works, then you have to update the entrypoint of the nginx_cloud service to wait for the app service, as sketched below.
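One way to express that wait declaratively is a compose healthcheck (a sketch; condition-based depends_on needs a reasonably recent Docker Compose release, and the curl test command, endpoint, and port are assumptions about the app image):
services:
  app:
    build: './app'
    healthcheck:
      # assumes the app answers HTTP on port 8000 and the image has curl
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]
      interval: 5s
      retries: 5
  nginx_cloud:
    build: './nginx-cloud'
    depends_on:
      app:
        condition: service_healthy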

nginx does not automatically pick up dns changes in swarm

I'm running nginx via lets-nginx in the default nginx configuration (as per the lets-nginx project) in a docker swarm:
services:
  ssl:
    image: smashwilson/lets-nginx
    networks:
      - backend
    environment:
      - EMAIL=sas@finestructure.co
      - DOMAIN=api.finestructure.co
      - UPSTREAM=api:5000
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - letsencrypt:/etc/letsencrypt
      - dhparam_cache:/cache
  api:
    image: registry.gitlab.com/project_name/image_name:0.1
    networks:
      - backend
    environment:
      - APP_SETTINGS=/api.cfg
    configs:
      - source: api_config
        target: /api.cfg
    command:
      - run
      - -w
      - tornado
      - -p
      - "5000"
api is a flask app that runs on port 5000 on the swarm overlay network backend.
When services are initially started up, everything works fine. However, whenever I update the api in a way that makes the api container move between nodes in the three-node swarm, nginx fails to route traffic to the new container.
I can see in the nginx logs that it sticks to the old internal IP, for instance 10.0.0.2, when the new container is now on 10.0.0.4.
In order to make nginx 'see' the new IP, I need to either restart the nginx container or docker exec into it and kill -HUP the nginx process.
Is there a better and automatic way to make the nginx container refresh its name resolution?
Thanks to @Moema's pointer I've come up with a solution to this. The default configuration of lets-nginx needs to be tweaked as follows to make nginx pick up IP changes:
resolver 127.0.0.11 ipv6=off valid=10s;
set $upstream http://${UPSTREAM};
proxy_pass $upstream;
This uses docker swarm's resolver with a TTL and sets a variable, forcing nginx to refresh name lookups in the swarm.
Remember that when you use set you need to generate the entire URL by yourself.
I was using nginx in a compose to proxy a zuul gateway:
location /api/v1/ {
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_pass http://rs-gateway:9030/api/v1/;
}

location /zuul/api/v1/ {
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_pass http://rs-gateway:9030/zuul/api/v1/;
}
Now with Swarm it looks like this:
location ~ ^(/zuul)?/api/v1/(.*)$ {
    set $upstream http://rs-gateway:9030$1/api/v1/$2$is_args$args;
    proxy_pass $upstream;

    # Set headers
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
}
Regexes are good, but don't forget to insert GET params into the generated URL yourself.
