jwilder/nginx-proxy 503 Service Temporarily Unavailable - docker

I am trying to build a web application using docker-compose with jwilder/nginx-proxy and the letsencrypt companion, but when I try it, nginx throws a 503 error:
"503 Service Temporarily Unavailable"
The docker-compose file that I have is as follows:
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /etc/nginx/certs
      - /etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    volumes_from:
      - nginx-proxy:rw
  www:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    expose:
      - "80"
    environment:
      - VIRTUAL_HOST=example.com, www.example.com
      - LETSENCRYPT_HOST=example.com, www.example.com
      - LETSENCRYPT_EMAIL=contact@example.com
My web app is built with React, and I made this Dockerfile to build the container image:
FROM node:10-alpine as build
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
FROM nginx:1.14-alpine
COPY --from=build /usr/src/app/build/ /usr/share/nginx/html
COPY www/nginx.config /etc/nginx/conf.d/default.conf
and this is the nginx.config used by this image:
server {
    server_name example.com www.example.com;
    listen 80 reuseport default;
    sendfile on;
    default_type application/octet-stream;

    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6].";
    gzip_min_length 1100;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;

    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html =404;
    }
}
The web app image works fine on its own; I can open it if I run only that container. The problem is with the nginx-proxy and companion containers. Maybe nginx-proxy is not able to find the www container?
Can someone help me with this, please?

You need to specify the correct VIRTUAL_HOST in the backend's environment variables and make sure the containers are on the same network (or Docker bridge network).
Then nginx-proxy will link to the backend automatically, and it will be reachable via the domain you provided.
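For illustration, here is a minimal sketch of that layout (the network name proxy-net is my own placeholder, not from the question; services in a single compose file already share a default network, so an explicit network mainly matters when the backend lives in a separate compose project):

version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy-net
  www:
    build: .
    environment:
      # must match the Host header of incoming requests
      - VIRTUAL_HOST=example.com,www.example.com
    networks:
      - proxy-net
networks:
  proxy-net: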

The domain you provided as VIRTUAL_HOST="domain" could have expired.
This was the reason my nginx-proxy was returning the response "503 Service Temporarily Unavailable".
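When chasing a 503 from nginx-proxy, it can also help to inspect the configuration the proxy generated and check that an upstream block exists for your VIRTUAL_HOST (the container name nginx-proxy is taken from the compose file above):

docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf

If no upstream appears for your domain, the proxy never discovered the backend container, which usually points to a network or VIRTUAL_HOST mismatch.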

Related

NuxtJS on prod keeps redirecting from domain.com to domain.com:port

Refer to: https://demo.fifathailand.com/auth/signin/
On my server there is a Caddy container running on ports 80 and 443. I then set up my NuxtJS website on port 8080 and a Caddyfile to reverse proxy from my domain to container:8080.
It worked fine. I can access my site at domain.com, or to be specific, https://demo.fifathailand.com/.
But when I access routes such as https://demo.fifathailand.com/contacts and hit Ctrl+F5, it always redirects to http://demo.fifathailand.com:8080/contacts, which is not on SSL, and any login credentials the user may have entered disappear.
FYI, this does not happen on the homepage. I don't know why.
Caddyfile
{
    email email@gmail.com
}

demo.fifathailand.com {
    reverse_proxy webcontainer:8080
}
nuxtjs docker-compose
version: '3'
services:
  nuxtjs-web:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: nuxtjs-web
    restart: unless-stopped
    volumes:
      - .:/app
      - ./docker/nginx:/etc/nginx/config.d
    ports:
      - '8080:8080'
networks:
  default:
    external:
      name: network
Dockerfile
# build stage
FROM node:alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY docker/nginx/default.conf /temp/default.conf
RUN envsubst /app < /temp/default.conf > /etc/nginx/conf.d/default.conf
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
nginx default.conf
server {
    listen 8080;
    listen [::]:8080;
    server_name localhost;

    gzip on;
    gzip_static on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_disable "MSIE [1-6]\.";

    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
I would like users to always stay on domain.com, meaning https://demo.fifathailand.com, all the time.
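A guess worth checking, not a confirmed fix from this thread: with try_files $uri $uri/ /index.html, nginx issues its own 301 redirect when a request matches a directory without a trailing slash, and that redirect is built from nginx's listen port (8080) rather than the public port Caddy exposes. nginx can be told to emit relative redirects so the internal port never leaks:

server {
    listen 8080;
    listen [::]:8080;
    server_name localhost;
    absolute_redirect off;   # or: port_in_redirect off; keeps nginx-generated redirects from embedding :8080
    # ... rest of the config unchanged ...
}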

Error 504 Gateway Time-out when running nginx in docker through docker-compose

I'm using nginx in a docker-compose file to handle my frontend and backend website.
I had no problems for a long time, but recently I started getting the error "504 Gateway Time-out" when I try to access my project through localhost and its port:
http://localhost:8080
When I use the Docker IP and its port instead:
http://172.18.0.1:8080
I can access the project and nginx works correctly.
I'm sure my config file is correct, because it worked for six months and I don't know what has happened to it.
What should I check to find the problem?
docker-compose file:
.
.
.
  nginx:
    container_name: nginx
    image: nginx:1.19-alpine
    restart: unless-stopped
    ports:
      - '8080:80'
    volumes:
      - ./frontend:/var/www/html/frontend
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    networks:
      - backend_appx
networks:
  backend_appx:
    external: true
.
.
nginx config file:
upstream nextjs_upstream {
    server next_app:3000;
}

server {
    listen 80 default_server;
    server_name _;
    server_tokens off;

    # set root
    root /var/www/html/frontend;

    # set log
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    location /_next/static {
        proxy_cache STATIC;
        proxy_pass http://nextjs_upstream;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
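Incidentally, proxy_cache STATIC only works if a cache zone with that name is defined at the http level, which is not shown in the snippet. A minimal sketch of such a definition (the path and sizes are illustrative, not from the original post):

# at http level, e.g. at the top of the conf.d file
proxy_cache_path /var/cache/nginx keys_zone=STATIC:10m inactive=7d use_temp_path=off;

If the zone were actually missing, nginx would refuse to start (the config test reports the zone as unknown), so this is a hygiene note rather than the likely cause of the 504.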

Nginx reverse-proxy not serving static files

I tried to start some services via docker-compose. One of them is an nginx reverse proxy handling different paths. One path ("/react") leads to a containerized react_app with its own nginx on port 80. On its own, the reverse proxy works correctly. Likewise, if I serve the react_app's nginx on port 80 directly, everything works fine. Combining both without changing anything in the config leads to 404s for static files like CSS and JS.
Setup #1
Correct forward for path /test to Google.
docker-compose.yml
version: "3"
services:
#react_app:
# container_name: react_app
# image: react_image
# build: .
reverse-proxy:
image: nginx:latest
container_name: reverse-proxy
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
ports:
- '80:80'
nginx.conf (reverse-proxy)
location /test {
    proxy_pass http://www.google.com/;
}
Setup #2
No reverse proxy. Correct answer from nginx inside of container react_app.
docker-compose.yml
version: "3"
services:
react_app:
container_name: react_app
image: react_image
build: .
#reverse-proxy:
# image: nginx:latest
# container_name: reverse-proxy
# volumes:
# - ./nginx.conf:/etc/nginx/nginx.conf
# ports:
# - '80:80'
Setup #3 (not working!)
Reverse proxy and React app with nginx. Loads index.html, but fails to load files in /static.
nginx.conf (reverse-proxy)
location /react {
    proxy_pass http://react_app/;
}
docker-compose.yml
version: "3"
services:
react_app:
container_name: react_app
image: react_image
build: .
reverse-proxy:
image: nginx:latest
container_name: reverse-proxy
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
ports:
- '80:80'
Activating both systems leads to failing static content. It seems to me that the reverse proxy tries to serve the files itself and fails (for good reason), because there is no log entry in react_app's nginx. Here's the config from the react_app nginx; perhaps I'm missing something.
nginx.conf (inside react_app container)
events {}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;

        location / {
            try_files $uri /index.html;
        }
    }
}
--> Update
This is a rather unsatisfying workaround, but it works. Although now React's routing is messed up: I cannot reach /react/login.
http {
    server {
        server_name services;

        location /react {
            proxy_pass http://react_app/;
        }
        location /static/css {
            proxy_pass http://react_app/static/css;
            add_header Content-Type text/css;
        }
        location /static/js {
            proxy_pass http://react_app/static/js;
            add_header Content-Type application/x-javascript;
        }
    }
}
If you check the paths of the missing static files in your browser, you'll notice their relative paths are not what you expect. You can fix this by adding sub filters inside your nginx reverse proxy configuration.
http {
    server {
        server_name services;
        location /react {
            proxy_pass http://react_app/;
            ######## Add the following ##########
            sub_filter 'action="/' 'action="/react/';
            sub_filter 'href="/' 'href="/react/';
            sub_filter 'src="/' 'src="/react/';
            sub_filter_once off;
            #####################################
        }
    }
}
This will update the relative paths to your static files.
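One caveat worth adding (my own note, not from the original answer): sub_filter operates on the uncompressed response body, so if the backend serves gzip-compressed responses the substitutions silently do nothing. Forcing an uncompressed upstream response avoids that:

location /react {
    proxy_pass http://react_app/;
    # ask the backend for an uncompressed body so sub_filter can rewrite it
    proxy_set_header Accept-Encoding "";
    sub_filter 'href="/' 'href="/react/';
    sub_filter_once off;
}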

testing local subdomain with nginx and docker

I'm trying to set up a simple web stack locally on my Mac:
nginx to serve as a reverse proxy
react web app #1 to be served on localhost
react web app #2 to be served on demo.localhost
I'm using docker-compose to spin up all the services at once; here's the file:
version: "3"
services:
nginx:
container_name: nginx
build: ./nginx/
ports:
- "80:80"
networks:
- backbone
landingpage:
container_name: landingpage
build: ./landingpage/
networks:
- backbone
expose:
- 3000
frontend:
container_name: frontend
build: ./frontend/
networks:
- backbone
expose:
- 3001
networks:
backbone:
driver: bridge
and here's the nginx config file (copied into the container with a COPY command in the Dockerfile):
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    gzip on;
    gzip_http_version 1.1;
    gzip_comp_level 2;
    gzip_types text/plain text/css
               application/x-javascript text/xml
               application/xml application/xml+rss
               text/javascript;

    upstream landingpage {
        server landingpage:3000;
    }
    upstream frontend {
        server frontend:3001;
    }

    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://landingpage;
        }
    }
    server {
        listen 80;
        server_name demo.localhost;
        location / {
            proxy_pass http://frontend;
        }
    }
}
I can successfully run docker-compose up, but only localhost opens its web app; demo.localhost does not work.
I've also changed the hosts file on my Mac so I have
127.0.0.1 localhost
127.0.0.1 demo.localhost
to no avail.
I'm afraid I'm missing something, as I'm no expert in web development, Docker, or nginx!
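A quick way to check whether nginx's name-based routing works at all, independent of the browser and hosts file (a debugging suggestion, not from the original post), is to send requests with an explicit Host header:

curl -H "Host: localhost" http://127.0.0.1/
curl -H "Host: demo.localhost" http://127.0.0.1/

If the second command returns the frontend app, nginx is fine and the problem is on the client side; if it returns the landing page, the demo.localhost server block is not matching.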
For reference: we were able to run this remotely using AWS Lightsail, with the following settings:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    gzip on;
    gzip_http_version 1.1;
    gzip_comp_level 2;
    gzip_types text/plain text/css
               application/x-javascript text/xml
               application/xml application/xml+rss
               text/javascript;

    upstream landingpage {
        server landingpage:5000;
    }
    upstream frontend {
        server frontend:5000;
    }

    server {
        listen 80;
        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }
        server_name domain.com www.domain.com;
        location / {
            proxy_pass http://landingpage;
        }
    }
    server {
        listen 80;
        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }
        server_name demo.domain.com www.demo.domain.com;
        location / {
            add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive, notranslate, noimageindex";
            proxy_pass http://frontend;
        }
    }
}
with the following Dockerfile for both React apps (basically exposing port 5000 for both services):
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install --verbose
COPY . /usr/src/app
RUN npm run build --production
RUN npm install -g serve
EXPOSE 5000
CMD serve -s build
Unfortunately I cannot provide more details on doing this on a local machine
This is working for me. The difference might be that I'm using a fake domain name, but I can't say for sure. I'm also using SSL, because I couldn't get Firefox to access the fake domain via plain http. I'm routing the subdomain to CouchDB. The webclient service is the parcel-bundler development server.
/etc/hosts
127.0.0.1 example.local
127.0.0.1 www.example.local
127.0.0.1 db.example.local
develop/docker-compose.yaml
version: '3.5'
services:
  nginx:
    build:
      context: ../
      dockerfile: develop/nginx/Dockerfile
    ports:
      - 443:443
  couchdb:
    image: couchdb:3
    volumes:
      - ./couchdb/etc:/opt/couchdb/etc/local.d
    environment:
      - COUCHDB_USER=admin
      - COUCHDB_PASSWORD=password
  webclient:
    build:
      context: ../
      dockerfile: develop/web-client/Dockerfile
    volumes:
      - ../clients/web/src:/app/src
    environment:
      - CLIENT=web
      - COUCHDB_URL=https://db.example.local
develop/nginx/Dockerfile
FROM nginx
COPY develop/nginx/conf.d/* /etc/nginx/conf.d/
COPY develop/nginx/ssl/certs/* /etc/ssl/example.local/
develop/nginx/conf.d/default.conf
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/example.local/server.crt;
    ssl_certificate_key /etc/ssl/example.local/server.key.pem;
    server_name example.local www.example.local;
    location / {
        proxy_pass http://webclient:1234;
    }
}

server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/example.local/server.crt;
    ssl_certificate_key /etc/ssl/example.local/server.key.pem;
    server_name db.example.local;
    location / {
        proxy_pass http://couchdb:5984/;
    }
}
develop/web-client/Dockerfile
FROM node:12-alpine
WORKDIR /app
COPY clients/web/*.config.js ./
COPY clients/web/package*.json ./
RUN npm install
CMD ["npm", "start"]
Here is the blog that shows how to generate the self-signed certs.
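For completeness, a self-signed certificate covering the three local hostnames can be generated with a single OpenSSL command; this is a sketch assuming OpenSSL 1.1.1 or newer (for the -addext flag), not taken from the blog referenced above:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key.pem -out server.crt \
    -subj "/CN=example.local" \
    -addext "subjectAltName=DNS:example.local,DNS:www.example.local,DNS:db.example.local"

The browser still has to be told to trust the resulting server.crt (for example by adding it to the system keychain) before https://example.local loads without warnings.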

Pointing nginx container to static files in docker

I'm very new to Docker and nginx, and this might be a stupid question, but how can I point my nginx box at the files in Rails' public directory? Basically, I have an nginx box and an application box. I would like to know where I can put those files so that the nginx box can read them.
version: "3"
services:
api:
build: "./api"
env_file:
- .env-dev
ports:
- "3000:3000"
depends_on:
- db
volumes:
- .:/app/api
command: rails server -b "0.0.0.0"
nginx:
build: ./nginx
env_file: .env-dev
volumes:
- .:/app/nginx
depends_on:
- api
links:
- api
ports:
- "80:80"
...
API Dockerfile:
FROM ruby:2.4.1-slim
RUN apt-get update && apt-get install -qq -y \
    build-essential \
    libmysqlclient-dev \
    nodejs \
    --fix-missing \
    --no-install-recommends
ENV INSTALL_PATH /api
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY Gemfile $INSTALL_PATH
RUN bundle install
COPY . .
EXPOSE 3000
Nginx Dockerfile:
FROM nginx
ENV INSTALL_PATH /nginx
RUN mkdir -p $INSTALL_PATH
COPY nginx.conf /etc/nginx/nginx.conf
# COPY ?
EXPOSE 80
nginx config (this is correctly being copied over)
daemon off;
worker_processes: 1;

events { worker_connections: 1024; }

http {
    sendfile on;

    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/xml text/css
               text/comma-separated-values
               text/javascript
               application/x-javascript
               application/atom+xml;

    # Rails Api
    upstream api {
        server http://api/;
    }

    # Configuration for the server
    server {
        # Running port
        listen 80;

        # Proxying the connections
        location /api {
            proxy_pass http://api;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        error_page 500 502 503 504 /public/50x.html
        error_page 404 /public/404.html

        location = /50x.html {
            root /api/public;
        }
        location = /404.html {
            root /api/public
        }
    }
}
Now, when I go to localhost:80 it shows the generic nginx page. However, I'm unsure how to link the public dir of the Rails app (api/public/) to the nginx container.
Can I just COPY path/to/rails/public path/nginx? Where is nginx expecting to find those files?
Edit
I believe I should be putting them in /var/www/app_name, correct?
I think what you want to achieve can be done by mounting a volume of container 'api' into container 'nginx', something like this:
version: "3"
services:
api:
image: apiimg
volumes:
- apivol:/path/to/statics
nginx:
image: nginximg
volumes:
- apivol:/var/www/statics
volumes:
apivol: {}
So there's a shared volume declared for all containers, apivol, which is mapped to /path/to/statics in your Rails container and to /var/www/statics in your nginx container. This way you don't need to copy anything manually into the nginx container.
The default location for static content on nginx is /etc/nginx/html, but you could put it in /var/www/app_name as long as you remember to add
root /var/www/app_name;
in the corresponding location block for your static content.
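Put together, a minimal sketch of the corresponding server block (the paths follow the volume mapping above; assume the file is mounted or copied as /etc/nginx/conf.d/default.conf):

server {
    listen 80;
    location /statics/ {
        # serves files from /var/www/statics/... via the shared volume
        root /var/www;
    }
}

With root, the location prefix is appended to the root path, so a request for /statics/app.css resolves to /var/www/statics/app.css.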
