I am using Docker Compose to serve up a front-end (Vue.js), a back-end, and an nginx reverse proxy.
When I navigate to a route and hit refresh, I get a 404 error from nginx.
Here is part of my docker-compose.yml (a few lines omitted for brevity):
version: '3'
services:
  # Proxies requests to internal services
  dc-reverse-proxy:
    image: nginx:1.17.10
    container_name: reverse_proxy_demo
    depends_on:
      - dc-front-end
      - dc-back-end
    volumes:
      - ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 5004:80

  dc-front-end:
    ..
    container_name: dc-front-end
    ports:
      - 8080:80

  # API
  dc-back-end:
    container_name: dc-back-end
    ports:
      - 5001:5001
Here is the nginx.conf that belongs to the reverse proxy service:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://dc-front-end:80;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /dc-back-end/ {
            proxy_pass http://dc-back-end:5001/;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
And this is the nginx.conf for the front-end:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        root /app;
        #root /usr/share/nginx/html;

        location / {
            index index.html;
            try_files $uri $uri/ /index.html;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
And finally the Dockerfile for the front-end service:
# build stage
FROM node:16-alpine3.12 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I have tried using try_files $uri $uri/ /index.html; in both nginx files, but it still gives a 404 on page refresh, or if I try to navigate to the page directly in the browser (rather than clicking a link).
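One way to narrow down which nginx is returning the 404 (ports taken from the compose file above) is to request a deep route both through the proxy and directly from the front-end container; /some/route here is just a placeholder:

# through the reverse proxy
curl -I http://localhost:5004/some/route
# straight at the front-end container, bypassing the proxy
curl -I http://localhost:8080/some/route

If the direct request also 404s, the problem lives in the front-end image rather than in the proxy config.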
As usual, the laws of Stack Overflow dictate that you only solve your own question once you've posted it.
The Dockerfile was wrong. It threw me, as everything else worked:
FROM node:16-alpine3.12 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM nginx:stable-alpine as production-stage
RUN mkdir /app
COPY --from=build-stage /app/dist /app
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
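The key differences from the broken version: the build output now lands in /app, which is what root /app in the front-end nginx.conf expects, and the custom nginx.conf actually gets copied into the image (the CMD is inherited from the nginx base image). A quick sanity check from the host, assuming the service name from the compose file above:

# confirm the built files are where root points
docker-compose exec dc-front-end ls /app
# dump the config nginx is actually running with
docker-compose exec dc-front-end nginx -T | grep root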
Related
Refer to: https://demo.fifathailand.com/auth/signin/
On my server there is a Caddy container running on ports 80 and 443.
I then set up my NuxtJS website on port 8080 and a Caddyfile to reverse proxy from my domain to container:8080.
It worked fine. I can access my site on domain.com, or to be specific, https://demo.fifathailand.com/.
But when I access routes such as https://demo.fifathailand.com/contacts and hit Ctrl+F5, it always redirects to http://demo.fifathailand.com:8080/contacts, which is not on SSL, and any login credentials the user might have entered disappear.
FYI, this does not happen on the homepage. Don't know why :<
Caddyfile
{
    email email@gmail.com
}

demo.fifathailand.com {
    reverse_proxy webcontainer:8080
}
NuxtJS docker-compose:
version: '3'
services:
  nuxtjs-web:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: nuxtjs-web
    restart: unless-stopped
    volumes:
      - .:/app
      - ./docker/nginx:/etc/nginx/config.d
    ports:
      - '8080:8080'

networks:
  default:
    external:
      name: network
Dockerfile
# build stage
FROM node:alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY docker/nginx/default.conf /temp/default.conf
RUN envsubst /app < /temp/default.conf > /etc/nginx/conf.d/default.conf
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
nginx default.conf
server {
    listen 8080;
    listen [::]:8080;
    server_name localhost;

    gzip on;
    gzip_static on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_disable "MSIE [1-6]\.";

    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
I would like users to stay on domain.com, meaning https://demo.fifathailand.com, at all times.
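No answer is included here, but one plausible cause (an assumption, not confirmed by the source): the Nuxt static build contains a contacts/ directory, so a request for /contacts without a trailing slash makes nginx reply with a 301 to /contacts/, and by default nginx builds that redirect as an absolute URL that includes its own listen port, 8080. If that is what is happening, telling nginx to emit relative redirects in the server block shown above would keep users on the domain:

server {
    listen 8080;
    listen [::]:8080;
    server_name localhost;

    # emit relative redirects so the internal port never leaks to the browser
    absolute_redirect off;
    # alternatively: port_in_redirect off;  (absolute redirects, but without the port)

    # ... rest of the config unchanged ...
}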
I'm writing APK upload functionality in my React app. Every time I try to upload an .apk file bigger than 10 MB, I get back an error: 413 Request Entity Too Large. I've already used the client_max_body_size 888M; directive. Can anybody explain what I'm doing wrong?
Here is my nginx.conf:
user nginx;
worker_processes auto;
error_log /var/log/nginx/errors.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/accesses.log main;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        listen [::]:80;
        server_name up-app.unpluggedsystems.app;
        # return 301 http://$server_name$request_uri;

        root /usr/share/nginx/html;
        index index.html;
        client_max_body_size 888M;

        location /api/7/apps/search {
            proxy_pass http://ws75.aptoide.com/api/7/apps/search;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location ~ \.(apk)$ {
            proxy_pass https://pool.apk.aptoide.com;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /api {
            proxy_pass https://up-app.unpluggedsystems.app;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
And the Dockerfile that I use (maybe something is wrong here?):
# build environment
FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
COPY create-env-file.sh ./create-env-file.sh
RUN npm install
RUN npm install react-scripts@3.4.1 -g
COPY . ./
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY fullchain.crt /etc/ssl/fullchain.crt
COPY unpluggedapp.key.pem /etc/ssl/unpluggedapp.key.pem
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I'm an FE junior and weak in such things. Maybe I put client_max_body_size in the wrong place?
Please move client_max_body_size under the http {} section:
http {
    # some code here
    sendfile on;
    client_max_body_size 888M;
    #...
}
Make sure to restart nginx after modifying the configuration file.
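In the setup from this question the nginx.conf is baked into the image at build time (COPY nginx.conf /etc/nginx/nginx.conf), so a plain restart of the container will not pick up the change; the image has to be rebuilt. A sketch:

# rebuild the image and recreate the container
docker-compose up -d --build
# if the config were bind-mounted instead, a reload inside the container would suffice
# (<container-name> is a placeholder)
docker exec <container-name> nginx -s reload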
You can try
client_max_body_size 0;
(set to no limit, but this is not recommended for production environments.)
Moreover, remember that if you have SSL, you will need to set the above limit for the SSL server and location {} blocks in nginx.conf too. If your client (browser) tries to upload over HTTP and you expect them to get 301'd to HTTPS, nginx will actually drop the connection before the redirect because the file is too large for the HTTP server, so the limit has to be set for both HTTP and HTTPS.
The trick is to put it in both http {} and server {} (for the vhost):
http {
    # some code
    client_max_body_size 0;
}

server {
    # some code
    client_max_body_size 0;
}
As I deploy my first ever Rails app to a server, I keep getting this error right from the home URL.
The page you were looking for doesn't exist. You may have mistyped the address or the page may have moved.
My configuration:
Dockerfile
FROM ruby:3.0.0-alpine3.13
RUN apk add --no-cache --update alpine-sdk nodejs postgresql-dev yarn tzdata
WORKDIR /app
COPY Gemfile .
COPY Gemfile.lock .
RUN bundle install
COPY . .
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 4000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0", "-e", "production", "-p", "4000"]
entrypoint.sh
#!/bin/sh
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /app/tmp/pids/server.pid
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$@"
docker-compose.yml
central:
  build:
    context: ./central
    dockerfile: Dockerfile
  command: sh -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 4000 -e production -b '0.0.0.0'"
  volumes:
    - ./central:/usr/src/app
  env_file:
    - ./central/.env.prod
  stdin_open: true
  tty: true
  depends_on:
    - centraldb

centraldb:
  image: postgres:12.0-alpine
  volumes:
    - centraldb:/var/lib/postgresql/data/
  env_file:
    - ./central/.env.prod.db

nginx:
  image: nginx:1.19.0-alpine
  volumes:
    - ./nginx/prod/certbot/www:/var/www/certbot
    - ./central/public/:/home/apps/central/public/:ro
  ports:
    - 80:80
    - 443:443
  depends_on:
    - central
  links:
    - central
  restart: unless-stopped
  command: '/bin/sh -c ''while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"'''
nginx.conf
upstream theapp {
    server central:4000;
}

server {
    listen 80;
    server_name thedomain.com;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name thedomain.com;

    if ($scheme = http) {
        return 301 https://$server_name$request_uri;
    }

    # hidden ssl config

    root /home/apps/central/public;
    index index.html;

    location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
        try_files $uri @rails;
        access_log off;
        gzip_static on;
        # to serve pre-gzipped version
        expires max;
        add_header Cache-Control public;
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }

    location ~ /\. {
        deny all;
    }

    location ~* ^.+\.(rb|log)$ {
        deny all;
    }

    location / {
        try_files $uri @rails;
    }

    location @rails {
        proxy_pass http://theapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_redirect off;
        client_max_body_size 4G;
        keepalive_timeout 10;
    }

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /home/apps/central/public;
    }

    error_log /var/log/nginx/central_error.log;
    access_log /var/log/nginx/central_access.log;
}
I checked the nginx log file but it doesn't show anything.
The app works well locally, and even in production with the ports config (of course, if I go to thedomain.com:4000), but I need to serve it with nginx in production at thedomain.com, so I need a solution for this.
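No answer is included here, but note that the quoted error is Rails' own 404 page rather than an nginx error page, so requests are evidently reaching the app; that points at the Rails side (routes, production environment) rather than at the proxy. Some hedged first steps, using the service name from the compose file above:

# tail the Rails logs while hitting the home URL
docker-compose logs -f central
# confirm a root route actually exists in the production app
docker-compose exec central bundle exec rails routes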
Trying to set up an HTTPS server using nginx and Docker. I keep getting the same error while checking the nginx configuration file with nginx -t:
2020/11/13 13:37:52 [emerg] 6#6: cannot load certificate "/etc/nginx/certs/cert.crt": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/certs/cert.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file) nginx: [emerg] cannot load certificate "/etc/nginx/certs/cert.crt": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/certs/cert.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file) nginx: configuration file /etc/nginx/nginx.conf test failed
I tried copying the certs dir into /etc/nginx in the Dockerfile and it didn't work:
Dockerfile
FROM node:latest as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM nginx as production-stage
RUN mkdir /app
COPY --from=build-stage /app/dist /app
COPY certs /etc/nginx
COPY nginx.conf /etc/nginx/nginx.conf
RUN nginx -t
I tried setting up Docker volumes as well, and still got the same error.
nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80 default_server;
        server_name test;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        server_name test;
        ssl_certificate certs/cert.crt;
        ssl_certificate_key certs/cert.key;

        location / {
            root /app;
            index index.html;
            try_files $uri $uri/ /index.html;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
P.S. Permissions on the certs are set to 444:
chmod -R 444 certs
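No answer is included for this one either, but a likely culprit (an assumption on my part): COPY certs /etc/nginx copies the contents of the certs directory into /etc/nginx, not the directory itself, so the files end up at /etc/nginx/cert.crt instead of the /etc/nginx/certs/cert.crt that nginx.conf points to. A sketch of the fix:

# copy into an explicit certs/ subdirectory so certs/cert.crt resolves
COPY certs /etc/nginx/certs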
I am attempting to use an NGINX container to host a static web application. This container should also proxy certain requests (e.g. www.example.com/api/) to another container on the same network.
I am getting the "host not found in upstream" issue when calling docker-compose build, even though I am enforcing that the NGINX container is the last to be built.
I have tried the following solutions:
Enforcing a network name and aliases (as per Docker: proxy_pass to another container - nginx: host not found in upstream)
Adding a "resolver" directive (as per Docker Networking - nginx: [emerg] host not found in upstream and others), both for 8.8.8.8 and 127.0.0.11.
Rewriting the nginx.conf file to have the upstream definition before the location that will redirect to it, or after it.
I am running on a Docker for Windows machine that uses a MobyLinux VM to run the relevant container(s). Is there something I am missing? It isn't obvious to me that the http://webapi address should resolve correctly, since the images are built but not running when docker-compose build is called.
nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    upstream docker-webapi {
        server webapi:80;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            root /wwwroot/;
            try_files $uri $uri/ /index.html;
        }

        location /api {
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://docker-webapi;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
docker-compose:
version: '3'
services:
  webapi:
    image: webapi
    build:
      context: ./src/Api/WebApi
      dockerfile: Dockerfile
    volumes:
      - /etc/example/secrets/:/app/secrets/
    ports:
      - "61219:80"

  model.api:
    image: model.api
    build:
      context: ./src/Services/Model/Model.API
      dockerfile: Dockerfile
    volumes:
      - /etc/example/secrets/:/app/secrets/
    ports:
      - "61218:80"

  webapp:
    image: webapp
    build:
      context: ./src/Web/WebApp/
      dockerfile: Dockerfile
    ports:
      - "80:80"
    depends_on:
      - webapi
Dockerfile:
FROM nginx
RUN mkdir /wwwroot
COPY nginx.conf /etc/nginx/nginx.conf
COPY wwwroot ./wwwroot/
EXPOSE 80
RUN service nginx start
Your issue is the line below:
RUN service nginx start
You never run the service command inside Docker, because there is no init system. Also, RUN commands are executed at build time, not when the container starts.
The original nginx image has everything you need for nginx to start fine, so just remove that line and it will work.
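For reference, this is the Dockerfile from the question with the offending line removed and nothing else changed:

FROM nginx
RUN mkdir /wwwroot
COPY nginx.conf /etc/nginx/nginx.conf
COPY wwwroot ./wwwroot/
EXPOSE 80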
By default, the nginx image has the following CMD instruction:
CMD ["nginx", "-g", "daemon off;"]
You can easily find that out by running the command below:
docker history --no-trunc nginx | grep CMD