nginx reverse proxy with part of url as dynamic - docker

I have a Django app and nginx as services in docker-compose, and I'm using nginx as a reverse proxy.
Here is the docker-compose file:
version: "3.8"
services:
# nginx reverse proxy
nginx:
image: nginx:1.17.10
container_name: nginx
ports:
- "80:80"
restart: on-failure
depends_on:
- app
app:
image: django-app-image
conatinter_name: app
expose
- 8001
restart: on-failure
Here is the nginx configuration:
user www-data;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    # sendfile on;

    upstream django-app {
        server app:8001;
    }

    server {
        listen 80;
        listen [::]:80;

        location /vocal-bm {
            client_max_body_size 0;
            proxy_pass http://django-app/vocal-bm;
            proxy_set_header Host $host;
        }
    }
}
This is what I want to achieve: when I visit http://localhost/vocal-bm/<some-variable>, the request should be proxied to http://django-app/vocal-bm/<some-variable>. The configuration above just routes to http://django-app. How do I get it to route to http://django-app/vocal-bm/<some-variable>?
What I have tried: adding a trailing slash to the proxy_pass (proxy_pass http://django-app/vocal-bm/;), but then I get http://django-app/vocal-bm//<some-variable>, with a double slash before the dynamic variable. I have also tried a rewrite, but that gives http://django-app as well.
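A hedged sketch of the usual fix: when proxy_pass is given without a URI part, nginx forwards the original request URI unchanged, so the /vocal-bm prefix and everything after it survive intact. Assuming the upstream and location names from the question:

location /vocal-bm {
    client_max_body_size 0;
    # No URI after the upstream name: nginx passes the request URI
    # as-is, so /vocal-bm/<some-variable> reaches the app unmodified.
    proxy_pass http://django-app;
    proxy_set_header Host $host;
}

The double slash appeared because nginx replaces the matched location prefix /vocal-bm with the URI given in proxy_pass (/vocal-bm/), turning /vocal-bm/<var> into /vocal-bm//<var>; matching the slashes on both sides (location /vocal-bm/ with proxy_pass http://django-app/vocal-bm/) would also work.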

Related

Add Additional Docker Containers Behind NGINX Reverse Proxy

I have a Docker Compose file running an application that uses NGINX as a reverse proxy. The proxy runs HTTPS for STIG Manager and Keycloak, but the additional container I wish to add runs on a different port over plain HTTP.
#1 I want to add additional docker containers behind the proxy.
#2 I want to call the app using a DNS name.
Environment (the server hosting Docker):
gsil-docker1.gsil.mil
Compose File:
version: '3.7'
services:
  nginx:
    # image: nginx:1.23.1
    # alternative image from Ironbank
    image: registry1.dso.mil/ironbank/opensource/nginx/nginx:1.23.1
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./certs/localhost/localhost.crt:/etc/nginx/cert.pem
      - ./certs/localhost/localhost.key:/etc/nginx/privkey.pem
      - ./certs/dod/Certificates_PKCS7_v5.9_DoD.pem.pem:/etc/nginx/dod-certs.pem
      - ./nginx/index.html:/usr/share/nginx/html/index.html
    ports:
      - "443:443"
  keycloak:
    # image: quay.io/keycloak/keycloak:19.0.2
    # alternative image from Ironbank
    image: registry1.dso.mil/ironbank/opensource/keycloak/keycloak:19.0.2
    environment:
      - KEYCLOAK_ADMIN=admin
      - KEYCLOAK_ADMIN_PASSWORD=Pa55w0rd
      - KC_PROXY=edge
      - KC_HOSTNAME_URL=https://localhost/kc/
      - KC_HOSTNAME_ADMIN_URL=https://localhost/kc/
      - KC_SPI_X509CERT_LOOKUP_PROVIDER=nginx
      - KC_SPI_X509CERT_LOOKUP_NGINX_SSL_CLIENT_CERT=SSL-CLIENT-CERT
      - KC_SPI_TRUSTSTORE_FILE_FILE=/tmp/truststore.p12
      - KC_SPI_TRUSTSTORE_FILE_PASSWORD=password
    command: start --import-realm
    volumes:
      - ./certs/dod/Certificates_PKCS7_v5.9_DoD.pem.p12:/tmp/truststore.p12
      - ./kc/stigman_realm.json:/opt/keycloak/data/import/stigman_realm.json
      - ./kc/create-x509-user.jar:/opt/keycloak/providers/create-x509-user.jar
      # uncomment below to persist Keycloak data
      # - ./kc/h2:/opt/keycloak/data/h2
  stigman:
    # image: nuwcdivnpt/stig-manager:1.2.20
    # alternative image based on Ironbank Node.js
    image: nuwcdivnpt/stig-manager:latest-ironbank
    environment:
      - STIGMAN_OIDC_PROVIDER=http://keycloak:8080/realms/stigman
      - STIGMAN_CLIENT_OIDC_PROVIDER=https://localhost/kc/realms/stigman
      - STIGMAN_CLASSIFICATION=U
      - STIGMAN_DB_HOST=mysql
      - STIGMAN_DB_USER=stigman
      - STIGMAN_DB_PASSWORD=stigmanpw
      # uncomment below to fetch current STIG library from DISA and import it
      # - STIGMAN_INIT_IMPORT_STIGS=true
    init: true
  mysql:
    # image: mysql:8.0.21
    # alternative image from Ironbank
    image: registry1.dso.mil/ironbank/opensource/mysql/mysql8:8.0.31
    environment:
      - MYSQL_ROOT_PASSWORD=rootpw
      - MYSQL_USER=stigman
      - MYSQL_DATABASE=stigman
      - MYSQL_PASSWORD=stigmanpw
    # uncomment below to persist MySQL data
    volumes:
      - ./mysql-data:/var/lib/mysql
Nginx Config:
events {
    worker_connections 4096; ## Default: 1024
}

pid /var/cache/nginx/nginx.pid;

http {
    server {
        listen 443 ssl;
        server_name localhost;
        root /usr/share/nginx/html;
        client_max_body_size 100M;

        ssl_certificate /etc/nginx/cert.pem;
        ssl_certificate_key /etc/nginx/privkey.pem;
        ssl_prefer_server_ciphers on;
        ssl_client_certificate /etc/nginx/dod-certs.pem;
        ssl_verify_client optional;
        ssl_verify_depth 4;

        error_log /var/log/nginx/error.log debug;

        if ($return_unauthorized) { return 496; }

        location / {
            autoindex on;
            ssi on;
        }

        location /stigman/ {
            proxy_pass http://stigman:54000/;
        }

        location /kc/ {
            proxy_pass http://keycloak:8080/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header ssl-client-cert $ssl_client_escaped_cert;
            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;
        }
    }

    # define which endpoints require mTLS
    map_hash_bucket_size 128;
    map $uri $secured_url {
        default false;
        "/kc/realms/stigman/protocol/openid-connect/auth" true;
    }
    map "$secured_url:$ssl_client_verify" $return_unauthorized {
        default 0;
        "true:FAILED" 1;
        "true:NONE" 1;
        "true:" 1;
    }
}
}
I have tried adding settings to my docker-compose and nginx, but I was unable to make it work.
docker-compose addition:
networks:
  default:
    name: grafana_default
    external: true
nginx addition:
server {
    listen 80;
    server_name grafana.gsil.mil;
    location / {
        proxy_pass http://grafana.gsil.smil:3000/;
    }
}
Additionally, I have created a CNAME DNS entry for grafana.gsil.mil pointing to gsil-docker1.gsil.mil.
The container apps are all running, and I can reach each of them by going to:
gsil-docker1.gsil.mil/stigman
gsil-docker1.gsil.mil/kc
gsil-docker1.gsil.mil:3000
The docker-compose file for grafana:
version: '3.0'
volumes:
  grafana-data:
services:
  grafana:
    container_name: grafana
    image: registry1.dso.mil/ironbank/opensource/grafana/grafana:9.3.2
    environment:
      - grafana.config
    restart: always
    volumes:
      - grafana-data:/var/lib/grafana
    ports:
      - 3000:3000/tcp
I have done a lot of searching, but the examples I found tended to show HTTP on nginx with HTTP backend apps, and I was struggling to find something that would pull this all together. Can you have an HTTPS proxy with an HTTP backend app, or do I need to create certs and make all my backend apps run HTTPS?
The issue was simple to fix: I needed to add port 80 to my nginx config in my docker-compose file. NGINX cannot proxy HTTP traffic when it is listening on HTTPS only (so add HTTP):
version: '3.7'
services:
  nginx:
    ports:
      - "443:443"
      - "80:80"
My presumptions about these specific items were all correct:
- Making Docker aware of external networks (needed when the container you want to add behind the proxy is not part of the same network):
networks:
  default:
    name: grafana_default
    external: true
- Adding DNS CNAME entries: I created a CNAME record for grafana.gsil.mil pointing to gsil-docker1.gsil.mil.
- Adding the appropriate server block to nginx.conf for each additional container:
server {
    listen 80;
    server_name grafana.gsil.mil;
    location / {
        proxy_pass http://grafana.gsil.smil:3000/;
    }
}
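As for the closing question — whether an HTTPS proxy can front a plain-HTTP backend — yes: that is exactly what TLS termination does, and the /stigman/ and /kc/ locations above already speak plain HTTP to their upstreams. As a hedged sketch only (the container name grafana and the cert paths are taken from the configs above; the certificate would also need to cover grafana.gsil.mil for browsers to accept it), the grafana vhost could be served over HTTPS the same way:

server {
    listen 443 ssl;
    server_name grafana.gsil.mil;

    # reuse the certs already mounted into the nginx container
    ssl_certificate /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/privkey.pem;

    location / {
        # plain HTTP upstream; TLS terminates at the proxy
        proxy_pass http://grafana:3000/;
        proxy_set_header Host $host;
    }
}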

Docker - Nginx proxy_pass "502 bad gateway" only with client routes?

I have the following docker compose:
version: '3.1'
services:
  backend:
    container_name: backend
    image: backendnode
    restart: always
    ports:
      - 3000:3000
  frontend:
    container_name: frontend
    image: frontnginx
    restart: always
    ports:
      - 4200:80
  apigw:
    image: reverseproxy
    restart: always
    ports:
      - 80:80
    depends_on:
      - frontend
      - backend
This is the nginx.conf of the reverseproxy image:
worker_processes auto;
events { worker_connections 1024; }

http {
    server {
        listen 80;
        server_name localhost 127.0.0.1;

        location / {
            proxy_pass http://frontend:4200;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /api {
            proxy_pass http://backend:3000;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
When running docker-compose up, I get the following results:
localhost:80/api/users: works great, nginx proxies to the backend properly.
localhost:80/index.html: not working, I get the following error:
connect() failed (111: Connection refused) while connecting to upstream, client: 172.20.0.1, server: localhost, request: "GET /index.html HTTP/1.1", upstream: "http://172.20.0.5:4200/index.html", host: "localhost:80"
The frontend is a simple nginx web server; this is its nginx.conf:
events {}

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;
        index index.html;

        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
Any idea why the reverse proxy is not working with frontend routes?
Answer created from the comment thread:
Docker networking works like this: for communication within Docker's network, you refer to the container's internal port; port mapping only applies to the outside world. So in your case, you need to refer to frontend:80 instead of frontend:4200.
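A hedged sketch of the corrected block, assuming the service names from the compose file above (the frontend container listens on 80 inside the network; 4200 is only the host-side mapping):

location / {
    # use the container's internal port, not the published host port
    proxy_pass http://frontend:80;
    proxy_set_header X-Forwarded-For $remote_addr;
}

The /api location already worked because the backend's internal and published ports happen to both be 3000.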

HTTP redirected to HTTPS in nginx.conf

I have an nginx.conf in which I am running an application on localhost. I need to redirect the application from HTTP to HTTPS. In nginx.conf, I have the configuration below:
http {
    error_log /etc/nginx/error/error.log warn; #./nginx/error.log warn;
    client_max_body_size 20m;
    proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;

    server {
        listen 80;
        server_name localhost;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name localhost;

        ssl_session_cache shared:SSL:50m;
        ssl_session_timeout 1d;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/ssl.crt;
        ssl_certificate_key /etc/nginx/ssl.key;
        ssl_protocols TLSv1.2;
        ssl_ciphers EECDH+AESGCM:EDH+AESGCM:EECDH:EDH:!MD5:!RC4:!LOW:!MEDIUM:!CAMELLIA:!ECDSA:!DES:!DSS:!3DES:!NULL;
        ssl_prefer_server_ciphers on;
        keepalive_timeout 70;

        location / {
            proxy_pass http://localhost:80;
            proxy_ssl_certificate /etc/nginx/ssl.crt;
            proxy_ssl_certificate_key /etc/nginx/ssl.key;
            proxy_ssl_verify off;
            allow all;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header X-Forwarded-Proto https;
            #access_log /var/log/nginx/access.log;
            #error_log /var/log/nginx/error.log;
            client_max_body_size 0;
            client_body_buffer_size 128k;
            proxy_connect_timeout 1200s;
            proxy_send_timeout 1200s;
            proxy_read_timeout 1200s;
            proxy_buffers 32 4k;
        }
    }
}
And the docker-compose.yml as below:
version: '2'
services:
  mysql:
    image: mysql:5.7.21
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=admin
      - MYSQL_DATABASE=bookstack
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD=admin
    volumes:
      - ./mysql:/var/lib/mysql
    networks:
      - bookstack-bridge
  bookstack:
    image: solidnerd/bookstack:latest
    container_name: bookstack
    restart: always
    depends_on:
      - mysql
    environment:
      - APP_URL=http://localhost:8080
    volumes:
      - ./uploads:/var/www/bookstack/public/uploads
      - ./storage-uploads:/var/www/bookstack/public/storage
    ports:
      - 8080:8080
    networks:
      - bookstack-bridge
  nginx:
    image: nginx:latest
    container_name: bookstack-nginx
    restart: always
And in the docker-compose.yml I do have the APP_URL=http://localhost:8080 env variable.
Does anybody have an idea what needs to be changed to redirect from HTTP to HTTPS?
Thanks in advance.
I customized your docker-compose.yml. Yours would not work for HTTPS because some parts are wrong or missing.
To use HTTPS you have to create the certificates with OpenSSL. These must be in the folder /etc/nginx/certs in the container.
Once the certificates are in place, change VIRTUAL_PORT from 8080 to 443 and change APP_URL from http to https.
When you start a service and assign it to the network "web", nginx automatically sees that a new service has registered and maps to the port specified in the image. This works because the Docker socket is mounted into the proxy container ("/var/run/docker.sock:/tmp/docker.sock:ro"; ":ro" stands for read-only).
If you assign a service only to the network "internal", it is not accessible from the outside and nginx ignores it; see the "mysql" service.
With "depends_on:" I say that all services have to start before bookstack starts. This is important: first nginx, then MySQL, and finally bookstack.
I prefer to use VIRTUAL_HOST with its own local domain. You can also use localhost there; the only important thing is that the "hosts" file of your operating system points to your external Docker IP, for example "192.168.5.121 bookstack.local".
My tip: I would keep the "nginx--proxy" service in a separate docker-compose file. Then you can easily register further services with nginx.
Good luck with it, and if you only want to use BookStack locally, HTTPS might not be that urgent right now. Otherwise, search for "create certs for nginx locally".
Before you start, create the network "web":
docker network create web
version: '2.4'
services:
  mysql:
    image: mysql:5.7.21
    container_name: bookstack-mysql
    restart: unless-stopped
    networks:
      - "internal"
    healthcheck:
      test: "exit 0"
    environment:
      - MYSQL_ROOT_PASSWORD=admin
      - MYSQL_DATABASE=bookstack
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD=admin
    volumes:
      - ./docker/data/mysql:/var/lib/mysql
  bookstack:
    image: solidnerd/bookstack:0.29.3
    container_name: bookstack
    restart: unless-stopped
    networks:
      - "web"
      - "internal"
    depends_on:
      nginx--proxy:
        condition: service_started
      mysql:
        condition: service_healthy
    environment:
      - VIRTUAL_HOST=bookstack.local
      - VIRTUAL_PORT=8080
      - DB_HOST=mysql:3306
      - DB_DATABASE=bookstack
      - DB_USERNAME=bookstack
      - DB_PASSWORD=admin
      - APP_URL=http://bookstack.local
    volumes:
      - ./docker/data/uploads:/var/www/bookstack/public/uploads
      - ./docker/data/storage-uploads:/var/www/bookstack/storage/uploads
  nginx--proxy:
    image: jwilder/nginx-proxy:latest
    container_name: nginx--proxy
    restart: always
    environment:
      DEFAULT_HOST: default.vhost
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./docker/data/certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - "web"
      - "internal"

networks:
  web:
    external: true
  internal:
    external: false
The solution that worked for me: in the docker-compose.yml, I added a networks tag to the nginx service section:
networks:
  - bookstack-bridge
And in the nginx.conf I changed the proxy_pass to:
proxy_pass http://bookstack:8080;
Thank you guys for your help.
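For reference, a hedged sketch of how the relevant parts of the question's nginx.conf look with that change applied (proxying to localhost:80 inside the nginx container would loop back to nginx itself, which is why the compose service name is used instead):

server {
    listen 80;
    server_name localhost;
    # redirect all HTTP traffic to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name localhost;
    ssl_certificate /etc/nginx/ssl.crt;
    ssl_certificate_key /etc/nginx/ssl.key;

    location / {
        # reach the app by its compose service name over the shared network
        proxy_pass http://bookstack:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}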

Nginx reverse-proxy not serving static files

I tried to start some services via docker-compose. One of them is an nginx reverse proxy handling different paths. One path ("/react") points to a containerized React app with its own nginx on port 80. On its own, the reverse proxy works correctly; likewise, if I serve the react_app's nginx directly on port 80, all works fine. Combining both without changing anything in the config leads to 404s for static files like CSS and JS.
Setup #1
Correct forward for path /test to Google.
docker-compose.yml
version: "3"
services:
#react_app:
# container_name: react_app
# image: react_image
# build: .
reverse-proxy:
image: nginx:latest
container_name: reverse-proxy
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
ports:
- '80:80'
nginx.conf (reverse-proxy)
location /test {
    proxy_pass http://www.google.com/;
}
Setup #2
No reverse proxy. Correct answer from the nginx inside the react_app container.
docker-compose.yml
version: "3"
services:
react_app:
container_name: react_app
image: react_image
build: .
#reverse-proxy:
# image: nginx:latest
# container_name: reverse-proxy
# volumes:
# - ./nginx.conf:/etc/nginx/nginx.conf
# ports:
# - '80:80'
Setup #3 (not working!)
Reverse proxy and React app with nginx. Loads index.html, but fails to load files in /static.
nginx.conf (reverse-proxy)
location /react {
    proxy_pass http://react_app/;
}
docker-compose.yml
version: "3"
services:
react_app:
container_name: react_app
image: react_image
build: .
reverse-proxy:
image: nginx:latest
container_name: reverse-proxy
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
ports:
- '80:80'
Activating both systems leads to failing static content. It seems to me that the reverse proxy tries to serve the files itself and fails (for good reason), because there is no log entry in react_app's nginx. Here's the config from the react_app nginx, in case I'm missing something.
nginx.conf (inside react_app container)
events {}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;

        location / {
            try_files $uri /index.html;
        }
    }
}
Update: this is a rather unsatisfying workaround, but it works. Although now React's routing is messed up: I cannot reach /react/login.
http {
    server {
        server_name services;

        location /react {
            proxy_pass http://react_app/;
        }

        location /static/css {
            proxy_pass http://react_app/static/css;
            add_header Content-Type text/css;
        }

        location /static/js {
            proxy_pass http://react_app/static/js;
            add_header Content-Type application/x-javascript;
        }
    }
}
If you check the paths of the missing static files in your browser, you'll notice their relative paths are not what you expect. You can fix this by adding sub_filter directives inside your nginx reverse proxy configuration:
http {
    server {
        server_name services;

        location /react {
            proxy_pass http://react_app/;
            ######## Add the following ##########
            sub_filter 'action="/' 'action="/react/';
            sub_filter 'href="/' 'href="/react/';
            sub_filter 'src="/' 'src="/react/';
            sub_filter_once off;
            #####################################
        }
    }
}
This will update the relative paths to your static files.
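One caveat worth adding (an assumption about the setup, not something from the thread): the sub_filter module only rewrites uncompressed responses, so if the react_app nginx gzips its HTML the sub_filter directives silently do nothing. Asking the upstream for an uncompressed response avoids that:

location /react {
    proxy_pass http://react_app/;
    # sub_filter cannot rewrite compressed responses,
    # so ask the upstream not to gzip them
    proxy_set_header Accept-Encoding "";
    sub_filter 'href="/' 'href="/react/';
    sub_filter 'src="/' 'src="/react/';
    sub_filter_once off;
}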

Is it Possible to Connect Docker Containers Through HTTP?

I am attempting to use Docker to help deploy an application. The idea is to have two containers. One is the front-end containing Nginx and an Angular app.
FROM nginx
COPY ./dist/ /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/nginx.conf
It is supposed to contact a Spring Boot based API generated using the gradle-docker plugin and the Dockerfile recommended by Spring:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
They seem to run fine individually (I can access them on my development machine); however, I am having trouble connecting the two.
My docker-compose.yml file:
version: '3'
services:
  webapp:
    image: com.midamcorp/emp_front:latest
    ports:
      - 80:80
      - 443:443
    networks:
      - app
  api:
    image: com.midamcorp/employee_search:latest
    ports:
      - 8080:8080
    networks:
      - app
networks:
  app:
Based upon my understanding of the Docker documentation on networks, I was under the impression that the containers would be placed on the same network and could thus interact, with the service name (for example, api) acting as the "host". Based upon this assumption, I am attempting to access the API from the Angular application through the following:
private ENDPOINT_BASE: string = "http://api:8080/employee";
This returns an error: Http failure response for (unknown url): 0 Unknown Error.
To be honest, the samples I have looked at used this concept (substituting the service name for the host to connect two containers) for database connections, not HTTP. Is what I am attempting to accomplish not possible?
EDIT:
My nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;

        location / {
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }
    }
}
EDIT:
Updated nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream docker-java {
        server api:8080;
    }

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8081;
        server_name localhost;

        location / {
            proxy_pass http://docker-java;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }

    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;

        location / {
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }
    }
}
And the docker-compose.yml:
version: '3'
services:
  webapp:
    image: com.midamcorp/emp_front:latest
    ports:
      - 80:80
      - 443:443
    networks:
      - app
    depends_on:
      - api
  api:
    image: com.midamcorp/employee_search:latest
    networks:
      - app
networks:
  app:
The client / Angular app uses the following to contact the API:
private ENDPOINT_BASE: string = "http://localhost:8081/employee";
Output from docker ps
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
947eb757eb4b b28217437313 "nginx -g 'daemon of…" 10 minutes ago Up 10 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp employee_service_webapp_1
e16904db67f3 com.midamcorp/employee_search:latest "java -Djava.securit…" 10 minutes ago Up 10 minutes employee_service_api_1
The problem you are experiencing is not anything weird; it's just that you did not explicitly name your containers, so Docker generated the names by itself. nginx will resolve employee_service_api_1, but will not recognize just api. Open your webapp container and take a look at your hosts file (cat /etc/hosts): it will show employee_service_api_1 and its IP address.
How to fix it: add container_name to your docker-compose.yml:
version: '3'
services:
  webapp:
    image: com.midamcorp/emp_front:latest
    container_name: employee_webapp
    ports:
      - 80:80
      - 443:443
    networks:
      - app
    depends_on:
      - api
  api:
    image: com.midamcorp/employee_search:latest
    container_name: employee_api
    networks:
      - app
networks:
  app:
I always refrain from using "simple" names (i.e. just api), because on my system multiple containers with similar names might show up, so I add a prefix. In this case I named the api container employee_api, and nginx will resolve that name once you restart your containers.
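A hedged alternative worth noting: instead of exposing a second listener on 8081, the webapp's own nginx could proxy the API under the same origin, so the browser never needs to reach the API directly. A minimal sketch, assuming the container name employee_api from the answer above and the /employee endpoint from the question:

server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;

    location / {
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    location /employee {
        # same-origin path proxied to the API container; the browser
        # only ever talks to this nginx
        proxy_pass http://employee_api:8080;
        proxy_set_header Host $host;
    }
}

The Angular ENDPOINT_BASE could then be the relative path "/employee".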
