I'm trying to set up a simple web stack locally on my Mac.
nginx to serve as a reverse proxy
react web app #1 to be served on localhost
react web app #2 to be served on demo.localhost
I'm using docker-compose to spin up all the services at once; here's the file:
version: "3"
services:
nginx:
container_name: nginx
build: ./nginx/
ports:
- "80:80"
networks:
- backbone
landingpage:
container_name: landingpage
build: ./landingpage/
networks:
- backbone
expose:
- 3000
frontend:
container_name: frontend
build: ./frontend/
networks:
- backbone
expose:
- 3001
networks:
backbone:
driver: bridge
and here's the nginx config file (copied into the container with a COPY command in the Dockerfile):
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;

    gzip on;
    gzip_http_version 1.1;
    gzip_comp_level 2;
    gzip_types text/plain text/css
               application/x-javascript text/xml
               application/xml application/xml+rss
               text/javascript;

    upstream landingpage {
        server landingpage:3000;
    }

    upstream frontend {
        server frontend:3001;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://landingpage;
        }
    }

    server {
        listen 80;
        server_name demo.localhost;

        location / {
            proxy_pass http://frontend;
        }
    }
}
I can successfully run docker-compose up, but only localhost opens the web app, while demo.localhost does not.
I've also changed the hosts file contents on my Mac so I have
127.0.0.1 localhost
127.0.0.1 demo.localhost
to no avail.
I'm afraid I'm missing something, as I'm no expert in web development, Docker, or nginx!
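To narrow down whether the problem is name resolution or the nginx routing itself, one thing worth trying (a diagnostic sketch, not part of the original setup) is sending requests with an explicit Host header, which bypasses DNS entirely:

    # if this returns app #2, nginx vhost routing works and the issue is name resolution
    curl -H "Host: demo.localhost" http://127.0.0.1/
    # compare against the default vhost
    curl -H "Host: localhost" http://127.0.0.1/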
For reference: we were able to run this remotely using AWS Lightsail, using the following settings
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;

    gzip on;
    gzip_http_version 1.1;
    gzip_comp_level 2;
    gzip_types text/plain text/css
               application/x-javascript text/xml
               application/xml application/xml+rss
               text/javascript;

    upstream landingpage {
        server landingpage:5000;
    }

    upstream frontend {
        server frontend:5000;
    }

    server {
        listen 80;

        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }

        server_name domain.com www.domain.com;

        location / {
            proxy_pass http://landingpage;
        }
    }

    server {
        listen 80;

        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }

        server_name demo.domain.com www.demo.domain.com;

        location / {
            add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive, notranslate, noimageindex";
            proxy_pass http://frontend;
        }
    }
}
with the following Dockerfile for both React apps (basically exposing port 5000 for both services):
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install --verbose
COPY . /usr/src/app
RUN npm run build --production
RUN npm install -g serve
EXPOSE 5000
CMD serve -s build
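Worth noting: serve -s build listens on serve's own default port (5000 in older serve releases, 3000 in newer ones) regardless of what EXPOSE declares, since EXPOSE is documentation only. If the local images reuse this Dockerfile while the nginx upstreams point at 3000 and 3001, nothing would be listening on those ports. A hedged sketch of pinning the listen port explicitly (the -l flag exists in current serve versions; these CMD lines are hypothetical local variants, not from the original setup):

    # landingpage Dockerfile: listen where upstream landingpage expects
    CMD ["serve", "-s", "build", "-l", "3000"]
    # frontend Dockerfile: listen where upstream frontend expects
    CMD ["serve", "-s", "build", "-l", "3001"]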
Unfortunately I cannot provide more details on doing this on a local machine.
This is working for me. The difference might be that I'm using a fake domain name, but I can't say for sure. I'm also using SSL, because I couldn't get Firefox to access the fake domain via HTTP. I'm routing the subdomain to CouchDB. The webclient service is the parcel-bundler development server.
/etc/hosts
127.0.0.1 example.local
127.0.0.1 www.example.local
127.0.0.1 db.example.local
develop/docker-compose.yaml
version: '3.5'
services:
  nginx:
    build:
      context: ../
      dockerfile: develop/nginx/Dockerfile
    ports:
      - 443:443
  couchdb:
    image: couchdb:3
    volumes:
      - ./couchdb/etc:/opt/couchdb/etc/local.d
    environment:
      - COUCHDB_USER=admin
      - COUCHDB_PASSWORD=password
  webclient:
    build:
      context: ../
      dockerfile: develop/web-client/Dockerfile
    volumes:
      - ../clients/web/src:/app/src
    environment:
      - CLIENT=web
      - COUCHDB_URL=https://db.example.local
develop/nginx/Dockerfile
FROM nginx
COPY develop/nginx/conf.d/* /etc/nginx/conf.d/
COPY develop/nginx/ssl/certs/* /etc/ssl/example.local/
develop/nginx/conf.d/default.conf
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/example.local/server.crt;
    ssl_certificate_key /etc/ssl/example.local/server.key.pem;

    server_name example.local www.example.local;

    location / {
        proxy_pass http://webclient:1234;
    }
}

server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/example.local/server.crt;
    ssl_certificate_key /etc/ssl/example.local/server.key.pem;

    server_name db.example.local;

    location / {
        proxy_pass http://couchdb:5984/;
    }
}
develop/web-client/Dockerfile
FROM node:12-alpine
WORKDIR /app
COPY clients/web/*.config.js ./
COPY clients/web/package*.json ./
RUN npm install
CMD ["npm", "start"]
Here is the blog that shows how to generate the self-signed certs.
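In case that link rots, a minimal sketch of generating such a certificate with openssl (assuming OpenSSL 1.1.1+ for -addext; the output filenames match the nginx config above):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server.key.pem -out server.crt \
      -subj "/CN=example.local" \
      -addext "subjectAltName=DNS:example.local,DNS:www.example.local,DNS:db.example.local"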
Refer to: https://demo.fifathailand.com/auth/signin/
On my server there is a Caddy container running on ports 80 and 443.
I then set up my NuxtJS website on port 8080 and a Caddyfile to reverse proxy from my domain to container:8080.
It worked fine. I can access my site on the domain, or to be specific, https://demo.fifathailand.com/.
But when I access routes such as https://demo.fifathailand.com/contacts and hit Ctrl+F5, it always redirects to http://demo.fifathailand.com:8080/contacts, which is not on SSL, and any login credentials the user might have disappear.
FYI, this does not happen on the homepage. I don't know why :<
Caddyfile
{
    email email@gmail.com
}

demo.fifathailand.com {
    reverse_proxy webcontainer:8080
}
nuxtjs docker-compose
version: '3'
services:
  nuxtjs-web:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: nuxtjs-web
    restart: unless-stopped
    volumes:
      - .:/app
      - ./docker/nginx:/etc/nginx/config.d
    ports:
      - '8080:8080'
networks:
  default:
    external:
      name: network
Dockerfile
# build stage
FROM node:alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY docker/nginx/default.conf /temp/default.conf
RUN envsubst /app < /temp/default.conf > /etc/nginx/conf.d/default.conf
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
nginx default.conf
server {
    listen 8080;
    listen [::]:8080;

    server_name localhost;

    gzip on;
    gzip_static on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_disable "MSIE [1-6]\.";

    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
I would like users to always stay on the domain, meaning https://demo.fifathailand.com, all the time.
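One thing worth checking (a hedged guess, not a confirmed fix): the inner nginx issues absolute redirects that include its own listen port, for example when try_files matches a directory and nginx appends a trailing slash. Telling it to emit relative redirects keeps the browser on whatever host and scheme Caddy presented:

    server {
        listen 8080;
        listen [::]:8080;
        absolute_redirect off;   # emit Location headers without scheme/host/port (nginx 1.11.8+)
        # alternatively: port_in_redirect off;  keeps just the port out of redirects
        # ... rest of the existing config unchanged
    }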
I'm using nginx in a docker-compose file to handle my frontend and backend website.
I had no problems for a long time, but at some point I got the error "504 Gateway Time-out" when trying to access my project through localhost and its port:
http://localhost:8080
When I use the Docker IP and its port:
http://172.18.0.1:8080
I can access the project and nginx works correctly.
I'm sure my config file is correct because it was working for 6 months, and I don't know what happened to it.
What should I check to find the problem?
docker-compose file:
.
.
.
  nginx:
    container_name: nginx
    image: nginx:1.19-alpine
    restart: unless-stopped
    ports:
      - '8080:80'
    volumes:
      - ./frontend:/var/www/html/frontend
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    networks:
      - backend_appx

networks:
  backend_appx:
    external: true
.
.
nginx config file:
upstream nextjs_upstream {
    server next_app:3000;
}

server {
    listen 80 default_server;

    server_name _;
    server_tokens off;

    # set root
    root /var/www/html/frontend;

    # set log
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    location /_next/static {
        proxy_cache STATIC;
        proxy_pass http://nextjs_upstream;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
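Since a 504 from nginx usually means the connection to the upstream timed out, a few hedged checks (container and network names assumed from the compose file and config above):

    # is next_app actually attached to the external network nginx uses?
    docker network inspect backend_appx
    # can the nginx container reach the upstream by name?
    docker exec -it nginx wget -qO- http://next_app:3000
    # anything in the nginx log about upstream timeouts?
    docker logs nginx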
I am trying to build a web application using docker-compose, with jwilder/nginx-proxy and the letsencrypt companion, but when I try it, nginx throws a 503 error:
"503 Service Temporarily Unavailable"
The docker-compose file that I have is as follows:
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /etc/nginx/certs
      - /etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    volumes_from:
      - nginx-proxy:rw
  www:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    expose:
      - "80"
    environment:
      - VIRTUAL_HOST=example.com, www.example.com
      - LETSENCRYPT_HOST=example.com, www.example.com
      - LETSENCRYPT_EMAIL=contact@example.com
My web app is built with React, and I made this Dockerfile to build the container image:
FROM node:10-alpine as build
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
FROM nginx:1.14-alpine
COPY --from=build /usr/src/app/build/ /usr/share/nginx/html
COPY www/nginx.config /etc/nginx/conf.d/default.conf
and this is the nginx.config used by this image:
server {
    server_name example.com www.example.com;
    listen 80 reuseport default;

    sendfile on;
    default_type application/octet-stream;

    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6].";
    gzip_min_length 1100;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;

    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html =404;
    }
}
The web app image is working fine; I can open it if I run only this service. The problem is with the nginx-proxy and companion containers. Maybe nginx-proxy is not able to find the www container?
Can someone help me with this, please?
You need to specify the correct VIRTUAL_HOST in the backend's environment variables and make sure that the containers are on the same network (or Docker bridge network).
Then they should automatically link to each other and be accessible via the domain you provided.
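For example, a minimal compose sketch (service and network names are assumptions for illustration, not from the original file):

    services:
      nginx-proxy:
        networks:
          - proxy
      www:
        environment:
          - VIRTUAL_HOST=example.com,www.example.com
        networks:
          - proxy
    networks:
      proxy: {}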
The domain you provided as VIRTUAL_HOST="domain" could have expired.
This was the reason my nginx-proxy was returning the response "503 Service Temporarily Unavailable".
I have two Docker services (an Angular web app and a Tomcat backend) that I want to protect with a third Docker service, an nginx configured as a reverse proxy. My proxy configuration is working, but I'm struggling with the basic authorization my reverse proxy should also handle. When I protect my Angular frontend service with basic auth via the reverse-proxy config, everything works fine, but my backend is still exposed to everyone. When I also add basic auth to the backend service, the problem is that the basic auth header from my frontend is not forwarded/added to the backend REST requests. Is it possible to configure the nginx reverse proxy to add the Authorization header to each request sent by the frontend? Or maybe I'm thinking about this wrong and there is a better solution?
Here is my docker and nginx configuration:
reverse-proxy config:
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream docker-nginx {
        server frontend-nginx:80;
    }

    upstream docker-tomcat {
        server backend-tomcat:8080;
    }

    map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
        '' 'registry/2.0';
    }

    server {
        listen 80;

        location / {
            auth_basic "Protected area";
            auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;
            add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;
            proxy_pass http://docker-nginx;
            proxy_redirect off;
        }
    }

    server {
        listen 8080;

        location / {
            auth_basic "Protected area";
            auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;
            add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;
            proxy_pass http://docker-tomcat;
            proxy_redirect off;
        }
    }
}
docker-compose (setting up all containers):
version: '2.4'
services:
  reverse-proxy:
    container_name: reverse-proxy
    image: nginx:alpine
    volumes:
      - ./auth:/etc/nginx/conf.d
      - ./auth/nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
      - "8080:8080"
    restart: always
    links:
      - registry:registry
  frontend-nginx:
    container_name: frontend
    build: './frontend'
    volumes:
      - /dockerdev/frontend/dist/:/usr/share/nginx/html
    depends_on:
      - reverse-proxy
      - bentley-tomcat
    restart: always
  backend-tomcat:
    container_name: backend
    build: './backend'
    volumes:
      - /data:/data
    depends_on:
      - reverse-proxy
    restart: always
  registry:
    image: registry:2
    ports:
      - 127.0.0.1:5000:5000
    volumes:
      - ./data:/var/lib/registry
frontend Dockerfile:
FROM nginx
COPY ./dist/ /usr/share/nginx/html
COPY ./fast-nginx-default.conf /etc/nginx/conf.d/default.conf
frontend config:
server {
    listen 80;

    sendfile on;
    default_type application/octet-stream;

    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.";
    gzip_min_length 256;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;

    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html =404;
    }
}
backend Dockerfile:
FROM openjdk:11
RUN mkdir -p /usr/local/bin/tomcat
COPY ./backend-0.0.1-SNAPSHOT.jar /usr/local/bin/tomcat/backend-0.0.1-SNAPSHOT.jar
WORKDIR /usr/local/bin/tomcat
CMD ["java", "-jar", "backend-0.0.1-SNAPSHOT.jar"]
Try adding these directives to your location block:
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
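For example, placed in one of the existing location blocks from the reverse-proxy config above (a sketch, not a confirmed fix):

    location / {
        auth_basic "Protected area";
        auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;
        # forward the credentials the browser already sent to the upstream
        proxy_set_header Authorization $http_authorization;
        proxy_pass_header Authorization;
        proxy_pass http://docker-nginx;
        proxy_redirect off;
    }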
I've solved my issue by listening on port 80 for requests starting with /api and redirecting them to Tomcat on port 8080. For that I also had to adjust my front- and backend requests, so now all my backend requests begin with /api. With this solution I'm able to implement the basic auth on port 80, protecting both the front- and backend.
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;
    client_max_body_size 25M;

    upstream docker-nginx {
        server frontend-nginx:80;
    }

    upstream docker-tomcat {
        server backend-tomcat:8080;
    }

    server {
        listen 80;

        location /api {
            proxy_pass http://docker-tomcat;
        }

        location / {
            auth_basic "Protected area";
            auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;
            proxy_pass http://docker-nginx;
            proxy_redirect off;
        }
    }
}
I'm very new to Docker and nginx, and this might be a stupid question, but how can I point my nginx box to look at the files in Rails' public directory? Basically, I have an nginx box and an application box. I would like to know where I can put those files so that the nginx box can read them.
version: "3"
services:
api:
build: "./api"
env_file:
- .env-dev
ports:
- "3000:3000"
depends_on:
- db
volumes:
- .:/app/api
command: rails server -b "0.0.0.0"
nginx:
build: ./nginx
env_file: .env-dev
volumes:
- .:/app/nginx
depends_on:
- api
links:
- api
ports:
- "80:80"
...
API Dockerfile:
FROM ruby:2.4.1-slim
RUN apt-get update && apt-get install -qq -y \
build-essential \
libmysqlclient-dev \
nodejs \
--fix-missing \
--no-install-recommends
ENV INSTALL_PATH /api
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY Gemfile $INSTALL_PATH
RUN bundle install
COPY . .
EXPOSE 3000
Nginx Dockerfile:
FROM nginx
ENV INSTALL_PATH /nginx
RUN mkdir -p $INSTALL_PATH
COPY nginx.conf /etc/nginx/nginx.conf
# COPY ?
EXPOSE 80
nginx config (this is correctly being copied over)
daemon off;
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/xml text/css
               text/comma-separated-values
               text/javascript
               application/x-javascript
               application/atom+xml;

    # Rails API
    upstream api {
        server api:3000;    # upstream entries take host:port, not a URL
    }

    # Configuration for the server
    server {
        # Listening port
        listen 80;

        # Proxy the connections to the Rails API
        location /api {
            proxy_pass http://api;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        error_page 500 502 503 504 /public/50x.html;
        error_page 404 /public/404.html;

        location = /50x.html {
            root /api/public;
        }

        location = /404.html {
            root /api/public;
        }
    }
}
Now, when I go to localhost:80, it shows the generic nginx page. However, I'm unsure how to link the public dir of Rails (api/public/) to the nginx container.
Can I just COPY path/to/rails/public path/nginx? Where is nginx expecting to find those files?
Edit
I believe I should be putting them in /var/www/app_name, correct?
I think what you want to achieve can be done by mounting a volume shared between the 'api' container and the 'nginx' container, something like this:
version: "3"
services:
api:
image: apiimg
volumes:
- apivol:/path/to/statics
nginx:
image: nginximg
volumes:
- apivol:/var/www/statics
volumes:
apivol: {}
So there's a shared volume declared for all containers, apivol, which is mapped to /path/to/statics in your Rails container and to /var/www/statics in your nginx container. This way you don't need to copy anything manually into the nginx container.
The default location for static content in nginx is /etc/nginx/html, but you could put it in /var/www/app_name as long as you remember to add
root /var/www/app_name;
in the corresponding location block for your static content.
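For example (a sketch tying the two pieces together; the location path is an assumption for illustration):

    server {
        listen 80;
        location /static/ {
            # serve files from the volume shared with the api container
            root /var/www/app_name;
        }
    }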