I have two Docker services (an Angular web app and a Tomcat backend) that I want to protect with a third Docker service: an nginx configured as a reverse proxy. My proxy configuration is working, but I'm struggling with the basic authentication my reverse proxy should also handle. When I protect my Angular frontend service with basic auth via the reverse-proxy config, everything works fine, but my backend is still exposed to everyone. When I also add basic auth to the backend service, the problem is that the basic auth Authorization header from my frontend is not forwarded/added to the backend REST requests. Is it possible to configure the nginx reverse proxy to add the Authorization header to each request sent by the frontend? Or maybe I'm thinking about this wrong and there is a better solution?
Here are my Docker and nginx configurations:
reverse-proxy config:
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream docker-nginx {
        server frontend-nginx:80;
    }

    upstream docker-tomcat {
        server backend-tomcat:8080;
    }

    map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
        '' 'registry/2.0';
    }

    server {
        listen 80;

        location / {
            auth_basic "Protected area";
            auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;
            add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;
            proxy_pass http://docker-nginx;
            proxy_redirect off;
        }
    }

    server {
        listen 8080;

        location / {
            auth_basic "Protected area";
            auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;
            add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;
            proxy_pass http://docker-tomcat;
            proxy_redirect off;
        }
    }
}
docker-compose (setting up all containers):
version: '2.4'
services:
  reverse-proxy:
    container_name: reverse-proxy
    image: nginx:alpine
    volumes:
      - ./auth:/etc/nginx/conf.d
      - ./auth/nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
      - "8080:8080"
    restart: always
    links:
      - registry:registry
  frontend-nginx:
    container_name: frontend
    build: './frontend'
    volumes:
      - /dockerdev/frontend/dist/:/usr/share/nginx/html
    depends_on:
      - reverse-proxy
      - backend-tomcat
    restart: always
  backend-tomcat:
    container_name: backend
    build: './backend'
    volumes:
      - /data:/data
    depends_on:
      - reverse-proxy
    restart: always
  registry:
    image: registry:2
    ports:
      - 127.0.0.1:5000:5000
    volumes:
      - ./data:/var/lib/registry
frontend Dockerfile:
FROM nginx
COPY ./dist/ /usr/share/nginx/html
COPY ./fast-nginx-default.conf /etc/nginx/conf.d/default.conf
frontend config:
server {
    listen 80;
    sendfile on;
    default_type application/octet-stream;

    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.";
    gzip_min_length 256;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;

    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html =404;
    }
}
backend Dockerfile:
FROM openjdk:11
RUN mkdir -p /usr/local/bin/tomcat
COPY ./backend-0.0.1-SNAPSHOT.jar /usr/local/bin/tomcat/backend-0.0.1-SNAPSHOT.jar
WORKDIR /usr/local/bin/tomcat
CMD ["java", "-jar", "backend-0.0.1-SNAPSHOT.jar"]
Try adding these directives to your location block:
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
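Applied to the backend server block from the question, that would look roughly like this (a sketch; it assumes Tomcat itself expects the same credentials that nginx validated):

server {
    listen 8080;

    location / {
        auth_basic "Protected area";
        auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;

        # Forward the client's Authorization request header to Tomcat,
        # so the backend receives the credentials nginx just checked.
        proxy_set_header Authorization $http_authorization;
        proxy_pass_header Authorization;

        proxy_pass http://docker-tomcat;
        proxy_redirect off;
    }
}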
I've solved my issue by listening on port 80 for requests starting with /api and proxying them to the Tomcat on port 8080. For that I also had to adjust my front- and backend requests; now all my backend requests begin with /api. With this solution I'm able to implement the basic auth on port 80, protecting both the front- and backend.
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;
    client_max_body_size 25M;

    upstream docker-nginx {
        server frontend-nginx:80;
    }

    upstream docker-tomcat {
        server backend-tomcat:8080;
    }

    server {
        listen 80;

        location /api {
            proxy_pass http://docker-tomcat;
        }

        location / {
            auth_basic "Protected area";
            auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;
            proxy_pass http://docker-nginx;
            proxy_redirect off;
        }
    }
}
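One caveat worth noting: as posted, the /api location has no auth_basic directives of its own, so nginx does not actually challenge requests that hit the backend path directly; the browser typically just reuses the credentials it already supplied for the same origin. If the backend should be enforced as well, a sketch reusing the same htpasswd file would be:

location /api {
    # Enforce the same credentials on backend calls, not only on the frontend.
    auth_basic "Protected area";
    auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;
    proxy_pass http://docker-tomcat;
}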
I am using docker-compose to build containers and to serve the frontend of my website at https://example.com and the backend at a subdomain, https://api.example.com. The SSL certificates for both the root and the subdomain are working properly, and I can access the live site (static files served by nginx) at https://example.com, so at least half of the configuration is working properly. The problem occurs when the frontend tries to communicate with the backend: all calls are met with a "No 'Access-Control-Allow-Origin'" 502 error in the console logs. In the logs of the Docker container, this is the error response.
Docker Container Error
2022/03/09 19:01:21 [error] 30#30: *7 connect() failed (111: Connection refused) while connecting
to upstream, client: xxx.xx.xxx.xxx, server: api.example.com, request: "GET /api/services/images/
HTTP/1.1", upstream: "http://127.0.0.1:8000/api/services/images/",
host: "api.example.com", referrer: "https://example.com/"
I think it's likely that something is wrong with my nginx or docker-compose configuration. When setting SECURE_SSL_REDIRECT, SECURE_HSTS_INCLUDE_SUBDOMAINS, and SECURE_HSTS_SECONDS to False or None (in the Django settings), I am able to hit http://api.example.com:8000/api/services/images/ and get the data I am looking for. So the backend is running and hooked up, just not taking requests from where I want it to. I've attached the nginx configuration and the docker-compose.yml. Please let me know if you need more info; I would greatly appreciate any input, and thanks in advance for the help.
Nginx-custom.conf
# Config for the frontend application under example.com
server {
    listen 80;
    server_name example.com www.example.com;

    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    }

    if ($host = example.com) {
        return 301 https://$host$request_uri;
    }

    return 404;
}

server {
    server_name example.com www.example.com;
    index index.html index.htm;

    add_header Access-Control-Allow-Origin $http_origin;
    add_header Access-Control-Allow-Credentials true;
    add_header Access-Control-Allow-Headers $http_access_control_request_headers;
    add_header Access-Control-Allow-Methods $http_access_control_request_method;

    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html =404;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

##### Config for the backend server at api.example.com
server {
    listen 80;
    server_name api.example.com;
    return 301 https://$host$request_uri;
}

server {
    server_name api.example.com;

    add_header Access-Control-Allow-Origin $http_origin;
    add_header Access-Control-Allow-Credentials true;
    add_header Access-Control-Allow-Headers $http_access_control_request_headers;
    add_header Access-Control-Allow-Methods $http_access_control_request_method;

    location / {
        proxy_pass http://127.0.0.1:8000/; # API server
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
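A likely culprit, given the log above: proxy_pass http://127.0.0.1:8000/ points at the container nginx itself runs in, and nothing inside that container listens on port 8000, which matches the "connection refused ... upstream: http://127.0.0.1:8000" line. Assuming nginx runs inside the frontend container and the backend service is named backend as in the compose file below, the usual fix is to proxy to the compose service name on the shared network; a sketch:

location / {
    proxy_pass http://backend:8000/; # compose service name, resolvable on shared-network
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect off;
}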
Docker-Compose File
version: '3.9'

# services that make up the development env
services:
  # DJANGO BACKEND
  backend:
    container_name: example-backend
    restart: unless-stopped
    image: example-backend:1.0.1
    build:
      context: ./backend/src
      dockerfile: Dockerfile
    command: gunicorn example.wsgi:application --bind 0.0.0.0:8000
    ports:
      - 8000:8000
    environment:
      - SECRET_KEY=xxx
      - DEBUG=0
      - ALLOWED_HOSTS=example.com,api.example.com,xxx.xxx.xxx.x
      - DB_HOST=postgres-db
      - DB_NAME=xxx
      - DB_USER=xxx
      - DB_PASS=xxx
      - EMAIL_HOST_PASS=xxx
    # sets a dependency on the db container and there should be a network connection between the two
    networks:
      - db-net
      - shared-network
    links:
      - postgres-db:postgres-db
    depends_on:
      - postgres-db

  # POSTGRES DATABASE
  postgres-db:
    container_name: postgres-db
    image: postgres
    restart: always
    volumes:
      - example-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    environment:
      - POSTGRES_DB=exampledb
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    networks:
      - db-net

  # ANGULAR & NGINX FRONTEND
  frontend:
    container_name: example-frontend
    build:
      context: ./frontend
    ports:
      - "80:80"
      - "443:443"
    networks:
      - shared-network
    links:
      - backend
    depends_on:
      - backend

networks:
  shared-network:
    driver: bridge
  db-net:

volumes:
  example-data:
Please assist
I'm trying to run both React.js and Nest.js on http://localhost:3000 with docker-compose and nginx; however, my React app isn't being served correctly. When I visit the link, I only see the nginx welcome page.
This is my docker-compose.yaml
version: "3.6"
services:
database:
image: postgres:13.1-alpine
env_file:
- ./database/.env
volumes:
- "db-data:/var/lib/postgresql/data"
networks:
- challenge
ports:
- "5432:5432"
backend:
build:
context: $PWD/../../backend
dockerfile: $PWD/backend/Dockerfile
volumes:
- ./backend/.env:/app/.env
- ../../backend/src:/app/src
- storage:/app/storage
ports:
- 3000
networks:
- challenge
depends_on:
- database
env_file:
- ./backend/.env
environment:
- FORCE_COLOR=1
frontend:
build:
context: $PWD/../../web
dockerfile: $PWD/frontend/Dockerfile
ports:
- 3001
networks:
- challenge
depends_on:
- backend
env_file:
- ./frontend/.env
volumes:
- ../../web/src:/app/frontend/src
nginx:
image: nginx
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
ports:
- 3000:80
depends_on:
- backend
- frontend
networks:
- challenge
volumes:
db-data:
storage:
networks:
challenge:
And this is my Dockerfile for the React app:
FROM node:14.18.1-alpine3.14 as build
WORKDIR /app/frontend
COPY package.json /app/frontend/
COPY yarn.lock /app/frontend/
RUN yarn install --frozen-lockfile
COPY . /app/frontend
RUN yarn run build
FROM nginx:1.21.3-alpine
COPY --from=build /app/frontend/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Also, this is my nginx.conf
events {
    worker_connections 1024;
}

http {
    client_max_body_size 1000M;

    server {
        gzip on;
        gzip_disable "msie6";
        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        listen 80;
        server_name localhost;

        location /api/v1 {
            proxy_pass http://backend:3000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_cache_bypass $http_upgrade;
        }

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
Please assist; I've tried changing the ports and exposing different ones, but that doesn't seem to work.
You have two nginx servers: one inside the frontend service and another one inside the nginx service. Is that intended?
I suppose you want to reach the nginx server provided by frontend, in which case you need to go to:
http://localhost:3001
and map 3001:80, because 80 is the port exposed by that image.
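Alternatively, if the goal is to keep everything on port 3000 behind the nginx service: the welcome page appears because its location / serves /usr/share/nginx/html from the plain nginx image, which contains only the default page; the built React files live in the frontend image. A sketch of the change (assuming the frontend container's nginx listens on its default port 80):

location / {
    # Proxy to the frontend container instead of serving the
    # default welcome page from this container's own html root.
    proxy_pass http://frontend:80;
}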
I have a situation where I have created a microservice that can change the network settings of the Cat5 Ethernet ports on the running server. To do this it needs to run with network_mode: host. The microservice exposes an HTTP REST API that I would like to have behind my nginx reverse proxy, but since the proxy uses a bridge network, I cannot seem to access the network_utilities service (see the docker-compose file below). Any suggestions on how to make this work?
Here is my condensed docker-compose file:
version: '3.3'
services:
  nginx:
    image: nginx:stable
    container_name: production_nginx
    restart: unless-stopped
    ports:
      - 80:80
    depends_on:
      - smart-mobile-server
      - network_utilities
    volumes:
      - ./config/front-end-gui:/var/www/html
      - ./config/nginx:/etc/nginx
    networks:
      - smart-mobile-loopback

  smart-mobile-server:
    container_name: smart-mobile-rest-api
    image: smartmobile_server:latest
    build:
      context: .
      dockerfile: main.Dockerfile
    environment:
      NODE_ENV: production
    command: sh -c "pm2 start --env production && pm2 logs all"
    depends_on:
      - 'postgres'
    restart: unless-stopped
    networks:
      - smart-mobile-loopback
    volumes:
      - ~/server:/usr/app/dist/express-server/uploads
      - ~/server/logs:/usr/app/logs

  network_utilities:
    image: smartgear/network-utilities-service:latest
    network_mode: host
    environment:
      NODE_ENV: production
      REST_API_PORT: '64000'
    privileged: true

networks:
  smart-mobile-loopback:
    driver: bridge
nginx.conf
worker_processes 2;

events {
    # Connections per worker process
    worker_connections 1024;

    # Turning epolling on is a handy tuning mechanism to use more efficient connection handling models.
    use epoll;

    # We turn off accept_mutex for speed, because we don't mind the wasted resources at low connection request counts.
    accept_mutex off;
}

http {
    upstream main_server {
        # least_conn specifies that a group should use a load balancing method where a request is
        # passed to the server with the least number of active connections,
        # taking into account weights of servers. If there are several such
        # servers, they are tried in turn using a weighted round-robin balancing method.
        ip_hash;

        # These are references to our backend containers, facilitated by
        # Compose, as defined in docker-compose.yml
        server smart-mobile-server:10010;
    }

    upstream network_utilities {
        least_conn;
        server 127.0.0.1:64000;
    }

    server {
        # GZIP SETTINGS FOR LARGE FILES
        gzip on;
        gzip_http_version 1.0;
        gzip_comp_level 6;
        gzip_min_length 0;
        gzip_buffers 16 8k;
        gzip_proxied any;
        gzip_types text/plain text/css text/xml text/javascript application/xml application/xml+rss application/javascript application/json;
        gzip_disable "MSIE [1-6]\.";
        gzip_vary on;

        include /etc/nginx/mime.types;

        ## SECURITY SETTINGS
        # don't send the nginx version number in error pages and Server header
        server_tokens off;

        # when serving user-supplied content, include a X-Content-Type-Options: nosniff header along with the Content-Type: header,
        # to disable content-type sniffing on some browsers.
        add_header X-Content-Type-Options nosniff;

        listen 80;

        location / {
            # This would be the directory where your React app's static files are stored at
            root /var/www/html/;
            index index.html;
            try_files $uri /index.html;
        }

        location /api/documentation/network-utilities {
            proxy_pass http://network_utilities/api/documentation/network-utilities;
            proxy_set_header Host $host;
        }

        location /api/v1/network-utilities/ {
            proxy_pass http://network_utilities/;
            proxy_set_header Host $host;
        }

        location /api/ {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://main_server/api/;
        }
    }
}
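One observation grounded in how these networks interact: because network_utilities runs with network_mode: host, it is not on the smart-mobile-loopback bridge at all, so from inside the nginx container 127.0.0.1:64000 points back at nginx itself. A common workaround is to have nginx target the Docker host instead; a sketch (host-gateway requires a reasonably recent Docker, and the bridge gateway address is a typical default rather than a guarantee):

# docker-compose.yml, under the nginx service:
extra_hosts:
  - "host.docker.internal:host-gateway"

# nginx.conf:
upstream network_utilities {
    least_conn;
    server host.docker.internal:64000;  # reaches the host network, where the service listens
}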
I have two Docker containers on the same network. One of them is a Spring Boot server app, the other is a React client app. I'm trying to get the client to make AJAX calls to the server. When I run them both locally on my machine, outside of Docker, everything works. When I run them with my docker configuration and using an Nginx proxy, I get 502 bad gateway errors.
Here is my docker-compose configuration:
version: '3'
services:
  video-server:
    build:
      context: .
      dockerfile: video-server_Dockerfile
    container_name: video-server
    networks:
      - videoManagerNetwork
    environment:
      - VIDEO_MANAGER_DIR=/opt/videos
    volumes:
      - ${VIDEO_MANAGER_DIR_PROD}:/opt/videos

  video-client:
    build:
      context: .
      dockerfile: video-client_Dockerfile
    container_name: video-client
    networks:
      - videoManagerNetwork
    ports:
      - 9000:80

networks:
  videoManagerNetwork:
As you can see, both containers are given explicit names and are on the same network. video-client is the Nginx React app, video-server is the Spring Boot app.
Here is my Nginx config:
worker_processes auto;

events {
    worker_connections 8000;
    multi_accept on;
}

http {
    log_format compression '$remote_addr - $remote_user [$time_local] '
        '"$request" $status $upstream_addr '
        '"$http_referer" "$http_user_agent"';

    include /etc/nginx/mime.types;
    default_type text/plain;

    server {
        listen 80;

        # TODO make sure the log is written to a docker volume
        access_log /var/log/nginx/access.log compression;

        root /var/www;
        index index.html;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /api/ {
            proxy_set_header Host $http_host;
            proxy_pass http://video-server:8080/api/;
        }

        location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
            expires 1M;
            access_log off;
            add_header Cache-Control "public";
        }

        location ~* \.(?:css|js)$ {
            try_files $uri =404;
            expires 1y;
            access_log off;
            add_header Cache-Control "public";
        }

        location ~ ^.+\..+$ {
            try_files $uri =404;
        }
    }
}
As you can see, I'm proxying all calls to /api/ to my video-server container. This should be working. I even shelled into the video-client container (docker exec -it video-client bash), installed curl, and was able to successfully make calls to the other container, e.g. http://video-server:8080/api/categories.
I'm looking for suggestions about what the problem with my configuration could be. I'm not particularly experienced with Nginx, so I'm assuming I'm doing something wrong there.
Edit
I finally figured out what was necessary to make this work. I would still be interested to understand why this helps.
I added the following lines to the "http" section of the Nginx config, and the problem was solved:
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
So it looks like this changed the buffer and timeout settings. Why did this help?
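For what it's worth, the fastcgi_* directives configure nginx's FastCGI module, while this setup proxies with proxy_pass, which is governed by the analogous proxy_* directives; if buffering or timeouts really were the culprit, the equivalent knobs would look like the sketch below (values mirroring the ones above, not a tested recommendation):

http {
    # Buffering of upstream responses, handled by the proxy module.
    proxy_buffers 8 16k;
    proxy_buffer_size 32k;

    # Timeouts for connecting to, sending to, and reading from the upstream.
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
}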
I'm trying to set up a simple web stack locally on my Mac.
nginx to serve as a reverse proxy
react web app #1 to be served on localhost
react web app #2 to be served on demo.localhost
I'm using docker-compose to spin all the services at once, here's the file:
version: "3"
services:
nginx:
container_name: nginx
build: ./nginx/
ports:
- "80:80"
networks:
- backbone
landingpage:
container_name: landingpage
build: ./landingpage/
networks:
- backbone
expose:
- 3000
frontend:
container_name: frontend
build: ./frontend/
networks:
- backbone
expose:
- 3001
networks:
backbone:
driver: bridge
and here's the nginx config file (copied into the container with a COPY command in the Dockerfile):
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;

    gzip on;
    gzip_http_version 1.1;
    gzip_comp_level 2;
    gzip_types text/plain text/css
               application/x-javascript text/xml
               application/xml application/xml+rss
               text/javascript;

    upstream landingpage {
        server landingpage:3000;
    }

    upstream frontend {
        server frontend:3001;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://landingpage;
        }
    }

    server {
        listen 80;
        server_name demo.localhost;

        location / {
            proxy_pass http://frontend;
        }
    }
}
I can successfully run docker-compose up, but only localhost opens the web app, while demo.localhost does not.
I've also changed the hosts file contents on my Mac so I have
127.0.0.1 localhost
127.0.0.1 demo.localhost
to no avail.
I am afraid I'm missing something as I'm no expert in web development nor docker or nginx!
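A quick way to check whether the problem is nginx's name-based routing or name resolution on the host is to force the Host header from the command line; a sketch (assuming curl is available):

# Should return web app #1 (landingpage):
curl -H "Host: localhost" http://127.0.0.1/

# Should return web app #2 (frontend) if nginx's server_name matching works:
curl -H "Host: demo.localhost" http://127.0.0.1/

If the second request returns the right app, the nginx side is fine and the issue is browser or host resolution; if it does not, the first thing to verify is the upstream itself, e.g. whether the app really listens on port 3001 inside its container.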
For reference: we were able to run this remotely using AWS Lightsail, with the following settings:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;

    gzip on;
    gzip_http_version 1.1;
    gzip_comp_level 2;
    gzip_types text/plain text/css
               application/x-javascript text/xml
               application/xml application/xml+rss
               text/javascript;

    upstream landingpage {
        server landingpage:5000;
    }

    upstream frontend {
        server frontend:5000;
    }

    server {
        listen 80;
        server_name domain.com www.domain.com;

        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }

        location / {
            proxy_pass http://landingpage;
        }
    }

    server {
        listen 80;
        server_name demo.domain.com www.demo.domain.com;

        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }

        location / {
            add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive, notranslate, noimageindex";
            proxy_pass http://frontend;
        }
    }
}
with the following Dockerfile for both React apps (basically exposing port 5000 for both services):
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install --verbose
COPY . /usr/src/app
RUN npm run build --production
RUN npm install -g serve
EXPOSE 5000
CMD serve -s build
Unfortunately, I cannot provide more details on doing this on a local machine.
This is working for me. The difference might be that I'm using a fake domain name, but I can't say for sure. I'm also using SSL, because I couldn't get Firefox to access the fake domain via http. I'm routing the subdomain to CouchDB. The webclient service is the parcel-bundler development server.
/etc/hosts
127.0.0.1 example.local
127.0.0.1 www.example.local
127.0.0.1 db.example.local
develop/docker-compose.yaml
version: '3.5'
services:
  nginx:
    build:
      context: ../
      dockerfile: develop/nginx/Dockerfile
    ports:
      - 443:443

  couchdb:
    image: couchdb:3
    volumes:
      - ./couchdb/etc:/opt/couchdb/etc/local.d
    environment:
      - COUCHDB_USER=admin
      - COUCHDB_PASSWORD=password

  webclient:
    build:
      context: ../
      dockerfile: develop/web-client/Dockerfile
    volumes:
      - ../clients/web/src:/app/src
    environment:
      - CLIENT=web
      - COUCHDB_URL=https://db.example.local
develop/nginx/Dockerfile
FROM nginx
COPY develop/nginx/conf.d/* /etc/nginx/conf.d/
COPY develop/nginx/ssl/certs/* /etc/ssl/example.local/
develop/nginx/conf.d/default.conf
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/example.local/server.crt;
    ssl_certificate_key /etc/ssl/example.local/server.key.pem;
    server_name example.local www.example.local;

    location / {
        proxy_pass http://webclient:1234;
    }
}

server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/example.local/server.crt;
    ssl_certificate_key /etc/ssl/example.local/server.key.pem;
    server_name db.example.local;

    location / {
        proxy_pass http://couchdb:5984/;
    }
}
develop/web-client/Dockerfile
FROM node:12-alpine
WORKDIR /app
COPY clients/web/*.config.js ./
COPY clients/web/package*.json ./
RUN npm install
CMD ["npm", "start"]
Here is the blog that shows how to generate the self-signed certs.
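For reference, a typical openssl invocation for a self-signed certificate covering a local fake domain looks like this (a sketch; the file names match the nginx config above, the subjectAltName entries covering the three hosts are an assumption, and -addext requires OpenSSL 1.1.1+):

# Generate a key and a self-signed cert valid for the three local hostnames.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key.pem -out server.crt \
    -subj "/CN=example.local" \
    -addext "subjectAltName=DNS:example.local,DNS:www.example.local,DNS:db.example.local"

The resulting server.crt still has to be trusted by the browser (or imported into the OS trust store) for the fake domain to load without warnings.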