I'm currently using Nginx as a reverse proxy and to serve my static assets. I was using React Router's HashLocation setting since it was the default; it let me refresh on a route with no problems and no extra configuration, but the downside of that setting is that the URL has /#/ prefixed to my routes (e.g. http://example-app.com/#/signup).
I'm now trying to switch to React Router's HistoryLocation setting, but I can't figure out how to properly configure Nginx to serve index.html for all routes (e.g. http://example-app.com/signup).
Here's my initial nginx setup (not including my mime.types file):
nginx.conf
# The maximum number of connections for Nginx is calculated by:
# max_clients = worker_processes * worker_connections
worker_processes auto;
# Process needs to run in foreground within container
daemon off;
events {
    worker_connections 1024;
}

http {
    # Hide nginx version information.
    server_tokens off;

    # Define the MIME types for files.
    include /etc/nginx/mime.types;

    # Update charset_types due to updated mime.types
    charset_types
        text/xml
        text/plain
        text/vnd.wap.wml
        application/x-javascript
        application/rss+xml
        text/css
        application/javascript
        application/json;

    # Speed up file transfers by using sendfile() to copy directly
    # between descriptors rather than using read()/write().
    sendfile on;

    # Define upstream servers
    upstream node-app {
        ip_hash;
        server 192.168.59.103:8000;
    }

    include sites-enabled/*;
}
sites-enabled/default
server {
    listen 80;

    root /var/www/dist;
    index index.html index.htm;

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires 1d;
    }

    location @proxy {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;

        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_pass http://node-app;
        proxy_cache_bypass $http_upgrade;
    }

    location / {
        try_files $uri $uri/ @proxy;
    }
}
This setup worked fine when I was using HashLocation, but after switching to HistoryLocation (the only change I made), I get a 404 / Cannot GET error when I refresh on a sub-route's URL. I then tried adding:
if (!-e $request_filename){
    rewrite ^(.*)$ /index.html break;
}
in the location / block. This lets me refresh and access the routes directly, but now I can't submit PUT/POST requests; I get a 405 Method Not Allowed instead. The requests are clearly not being handled properly: the rewrite I added sends every request to /index.html, so that is the path my API receives for everything. I don't know how to get both behaviors at once: submitting PUT/POST requests to the right resource, and being able to refresh and access my routes directly.
location / {
    try_files $uri /your/index.html;
}
http://nginx.org/en/docs/http/ngx_http_core_module.html#try_files
I know your example is more complex with the @proxy, but the above works fine for my application.
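For completeness, one way to keep both behaviors with the @proxy upstream above is to route API traffic by an explicit prefix and let everything else fall back to index.html. This is only a minimal sketch, and the /api prefix is my assumption rather than something from the original setup:

location /api/ {
    # Assumed API prefix: send these requests (GET/POST/PUT/...) straight to the Node upstream
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://node-app;
}

location / {
    # Anything that is not an existing file or directory serves the SPA entry point
    try_files $uri $uri/ /index.html;
}

With that split, refreshing on /signup falls through to index.html for React Router to handle, while a POST to /api/... never hits the HTML fallback.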
Related
I have a NextJS application that acts as a monolith: there are multiple directories under /pages, each acting as a different project.
/pages/first_project/[...slugs]
/pages/second_project/[...slugs]
For the setup I use Docker, which runs NextJS as a container alongside Nginx acting as a reverse proxy in front of the application.
I would like Nginx to map its location blocks to the pages directory in NextJS.
For example:
/first_project --> /pages/first_project/
/second_project --> /pages/second_project/
In my company, the hosting of the projects is like this:
<company.com>/first_project/[...slugs]
<company.com>/second_project/[...slugs]
The issue I am facing is that each location in Nginx looks for all the build files under that particular path, so I can't make this hosting dynamic. I was wondering if this is possible at all?
If I don't provide a basePath in next.config.js, then for <company.com>/first_project/, Nginx expects files to be available under the first_project location. However, the files are available at the root directory of the project, so I end up getting errors.
I can obviously fix this by setting basePath in next.config.js to first_project, but then it fails for second_project. Is there any way to set basePath dynamically? Please let me know.
Company Nginx Config:
server {
    listen 80;
    server_name _;

    location /first_project {
        proxy_pass http://local:8000;
    }

    location /second_project {
        proxy_pass http://local:8000;
    }
}
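One thing worth checking here (my observation, assuming the default Next.js asset prefix): without a basePath, the HTML that Next.js renders references its build assets under /_next/static/..., and neither of the two prefixed locations above matches those requests, so they never reach the monolith. A minimal sketch of forwarding them as well:

location /_next/ {
    # Build assets shared by both projects; default asset prefix assumed
    proxy_pass http://local:8000;
}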
(local:8000) NextJS Nginx Monolith Config:
upstream nextjs_upstream {
    server <nextjs_project>:3000;
}

server {
    listen 80 default_server;
    server_name _;
    server_tokens off;

    gzip on;
    gzip_proxied any;
    gzip_comp_level 4;
    gzip_types text/css application/javascript image/svg+xml;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;

    location /_next/static {
        proxy_pass http://nextjs_upstream;
    }

    location / {
        proxy_pass http://nextjs_upstream;
    }
}
Any help will be appreciated. Thanks :)
Sorry for any mistakes; I am new to Nginx.
I have my application deployed on Docker Engine.
I have five Docker images in total, but two are the most important here:
1st backend. (Django DRF application using gunicorn)
2nd frontend. (React App on Nginx)
I define the backend as an upstream in Nginx, so in nginx.conf I have two locations defined:
"/" for frontend
"/api" for backend (upstream backend to be able to use it).
I am able to start my containers and they "talk" to each other when I use the IP address in my browser, so the backend receives requests and returns responses.
Now I bought a domain name and added SSL certificates (Let's Encrypt; I still have to add a browser exception, but that is a separate question). If I reach my site using the domain name, the frontend works, but the backend does not.
The request fails when I use the domain name, and the same request succeeds when I use the IP address.
Here is my nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    # include /etc/nginx/conf.d/*.conf;

    upstream backend {
        server api:8000;
    }
    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        ssl_certificate /etc/nginx/ssl/live/site.org/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/live/site.org/privkey.pem;

        location /api {
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Origin' '*';
                #
                # Om nom nom cookies
                #
                add_header 'Access-Control-Allow-Credentials' 'true';
                add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, OPTIONS';
                #
                # Custom headers and headers various browsers *should* be OK with but aren't
                #
                add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
                #
                # Tell client that this pre-flight info is valid for 20 days
                #
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Content-Type' 'text/plain charset=UTF-8';
                add_header 'Content-Length' 0;
                return 204;
            }

            # Tried this ipv6=off
            resolver 1.1.1.1 ipv6=off valid=30s;
            set $empty "";
            proxy_pass http://backend$empty;
            # proxy_pass http://backend;

            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_read_timeout 3600;
            proxy_headers_hash_max_size 512;
            proxy_headers_hash_bucket_size 128;
            proxy_set_header Content-Security-Policy upgrade-insecure-requests;
        }

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }

        # location /static/ {
        #     alias /home/app/web/staticfiles/;
        # }
    }
    server {
        listen 80;
        listen [::]:80;

        location / {
            return 301 https://$host$request_uri;
        }

        location ~ /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }
    }
}
This HTTP 400 Bad Request error looks like the one coming from Django's request validation, since your requests differ only in the Host HTTP request header value. You should include every domain name you use in the ALLOWED_HOSTS list in Django's settings.py. Domain names should be specified as they appear in the Host header (excluding any port number); a wildcard-like entry such as .example.com is allowed and matches the example.com domain and every subdomain. The special value * can be used to skip Host header validation entirely (not recommended unless you do that validation at some other level of request processing).
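On the Nginx side, the only moving part is which Host value reaches Gunicorn and Django. The config above already forwards the browser's hostname with proxy_set_header Host $host;, which is what ALLOWED_HOSTS needs to match. As a rough alternative sketch (the site.org name is just borrowed from the certificate paths above), the header could instead be pinned to a single value that is already whitelisted:

location /api {
    # Force a fixed, whitelisted hostname upstream instead of whatever the client sent
    proxy_set_header Host site.org;
    proxy_pass http://backend;
}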
The post_action in the config for my Nginx 1.4.2 instance is not firing (or does not appear to be). I'm wondering if it is because Rails is returning an X-Accel-Redirect header. The ultimate goal is to track when downloads hosted on S3 have completed.
Step 1: The request hits nginx at http://host/download/...
location ~ /download/ {
    proxy_pass http://rails-app-upstream;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    post_action @finished;
}
Step 2: After authenticating the request for the download file and marking the start of the download in the database, Rails responds with the following headers:
{
  'X-Accel-Redirect' => '/s3-zip/my-bucket.s3.amazonaws.com/downloads/0001.jpg',
  'Content-Disposition' => "attachment; filename=download.jpg",
  'X-Download-Log-Id' => log.id.to_s,
  'X-Download-Mem-Id' => log.membership_id.to_s
}
Step 3: Nginx catches the X-Accel-Redirect header and hits this location:
location ~ "^/s3-zip/(?<s3_bucket>.[a-z0-9][a-z0-9-.]*.s3.amazonaws.com)/(?<path>.*)$" {
# examples:
# s3_bucket = my-bucket.s3.amazonaws.com
# path = downloads/0001.mpg
# args = X-Amz-Credentials=...&X-Amz-Date=...
internal;
access_log /var/log/nginx/s3_assets-access.log main;
error_log /var/log/nginx/s3_assets-error.log warn;
resolver 8.8.8.8 valid=30s; # Google DNS
resolver_timeout 10s;
proxy_http_version 1.1;
proxy_set_header Host $s3_bucket;
proxy_set_header Authorization '';
# remove amazon headers
proxy_hide_header x-amz-id-2;
proxy_hide_header x-amz-request-id;
proxy_hide_header Set-Cookie;
proxy_ignore_headers "Set-Cookie";
# no file buffering
proxy_buffering off;
# bubble errors up
proxy_intercept_errors on;
proxy_pass https://$s3_bucket/$path?$args;
}
Missed Step: The following location, referenced by the post_action in Step 1, is never hit. Is this because of the X-Accel-Redirect header, or because Step 3 uses a proxy_pass, or something else? This last location is supposed to call the Rails route again once Nginx has completed the request, to mark the download as completed.
location @finished {
    internal;
    rewrite ^ /download/finish/$sent_http_x_download_log_id?bytes=$body_bytes_sent;
}
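Not from the original post, but one low-tech alternative for tracking completed transfers without post_action is a dedicated access log in the final S3 location, since an access log entry is only written once Nginx has finished sending the response. A rough sketch, assuming the response headers set by Rails are still visible at logging time:

# In the http block: a hypothetical format for download accounting.
# $request_completion is "OK" only if the whole response was sent to the client.
log_format downloads '$time_iso8601 uri=$uri bytes=$body_bytes_sent '
                     'complete=$request_completion log_id=$sent_http_x_download_log_id';

# In the existing /s3-zip/ location:
access_log /var/log/nginx/downloads.log downloads;

A small cron job, or the Rails app itself, could then tail that log and mark the corresponding downloads as finished.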
I have a download link that goes to a method in a controller which uses send_file so that I can rename the file (it is an MP3 with a UUID as its filename). After clicking the link I see the request in the NGINX and Rails logs, but it takes up to 90 seconds before the download begins. I have tried various settings for proxy_buffers and client_*_buffers with no effect. I have an HTML5 audio player that uses the real URL for the file and it streams the file right away with no delay.
My NGINX config:
upstream app {
    server unix:/home/archives/app/tmp/unicorn.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    server_name archives.example.com;
    root /home/archives/app/public/;

    client_max_body_size 200M;
    client_body_buffer_size 100M;
    proxy_buffers 2 100M;
    proxy_buffer_size 100M;
    proxy_busy_buffers_size 100M;

    try_files /maintenance.html $uri/index.html $uri.html $uri @production;

    location @production {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Sendfile-Type X-Accel-Redirect;
        proxy_set_header X-Accel-Mapping /home/archives/app/public/uploads/audio/=/uploads/audio/;
        proxy_redirect off;
        proxy_pass http://app;
    }

    location ~ "^/assets/*" {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    location ~ (?:/\..*|~)$ {
        access_log off;
        log_not_found off;
        deny all;
    }

    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/archives/app/public;
    }
}
Rails controller:
def download
  send_file @audio.path, type: @audio_content_type, filename: "#{@audio.title} - #{@audio.speaker.name}"
end
Maybe it is slow because you have set an overly large proxy buffer? A 100M proxy buffer means that your server will download up to 100M of data from the origin before starting to send it to the client. The default is 32kB, and something like 512kB would already be a generous number.
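To make that concrete, here is a minimal sketch of more modest settings for the proxied location (the numbers are examples, not tuned recommendations):

location @production {
    # Small buffers so the response starts flowing to the client quickly...
    proxy_buffer_size 64k;
    proxy_buffers 8 64k;
    proxy_busy_buffers_size 128k;
    # ...or disable response buffering entirely and stream bytes as they arrive:
    # proxy_buffering off;
    proxy_pass http://app;
}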
After testing I found out it was Turbolinks causing the issue. It was doing an XHR request in the background, downloading the file first and only then letting the browser actually download the file. After adding 'data-no-turbolink'='true' to my link, files download instantly.
I'm looking to set up an nginx server with Unicorn. The first app is set up, but it's at the root "/". What I really want is to type localhost/app1 and have it run, while if I just go to the root, plain HTML or PHP pages are served.
Any clue?
Here's the current nginx.conf:
worker_processes 4;
user nobody nogroup; # for systems with a "nogroup"
pid /tmp/nginx.pid;
error_log /tmp/nginx.error.log;

events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off; # "on" if nginx worker_processes > 1
}

http {
    include mime.types;
    default_type application/octet-stream;
    access_log /tmp/nginx.access.log combined;

    sendfile on;
    tcp_nopush on; # off may be better for *some* Comet/long-poll stuff
    tcp_nodelay off; # on may be better for some Comet/long-poll stuff

    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/html text/xml text/css
               text/comma-separated-values
               text/javascript application/x-javascript
               application/atom+xml;

    upstream sip {
        server unix:/home/analista/www/sip/tmp/sockets/sip.unicorn.sock fail_timeout=0;
    }
    server {
        listen 80 default deferred; # for Linux
        client_max_body_size 4G;
        server_name sip_server;
        keepalive_timeout 5;

        # path for static files
        root /home/analista/www/sip/public;

        try_files $uri/index.html $uri.html $uri @app;

        location @app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            # proxy_buffering off;
            proxy_pass http://sip;
        }

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;

            if (!-f $request_filename) {
                proxy_pass http://sip;
                break;
            }
        }

        # Rails error pages
        error_page 500 502 503 504 /500.html;
        location = /500.html {
            root /home/analista/www/sip/public;
        }
    }
}
I've got it!
Turns out it was really simple and I wrote a post about it on my blog. http://jrochelly.com/post/2013/08/nginx-unicorn-multiple-rails-apps/
Here's the content:
I'm using Ruby 2.0 and Rails 4.0. I suppose you already have nginx and unicorn installed. So, let's get started!
In your nginx.conf file we are going to make nginx point to a unicorn socket:
upstream unicorn_socket_for_myapp {
    server unix:/home/coffeencoke/apps/myapp/current/tmp/sockets/unicorn.sock fail_timeout=0;
}
Then, with your server listening on port 80, add a location block that points to the subdirectory your Rails app lives in (this code must be inside the server block):
location /myapp/ {
    try_files $uri @unicorn_proxy;
}

location @unicorn_proxy {
    proxy_pass http://unix:/home/coffeencoke/apps/myapp/current/tmp/sockets/unicorn.sock;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_set_header X-Forwarded-Proto $scheme;
}
Now you can just run Unicorn as a daemon:
sudo unicorn_rails -c config/unicorn.rb -D
The last thing to do, and the one that took the most digging, is to add a scope in your Rails routes file, like this:
MyApp::Application.routes.draw do
  scope '/myapp' do
    root :to => 'welcome#home'
    # other routes always go inside this block
    # ...
  end
end
This way, your app will map a link to /myapp/welcome instead of just /welcome.
But there's an even better way.
Well, the above will work on the production server, but what about development? Are you going to develop normally and then change your Rails config on every deployment? For every single app? That's not necessary.
So, you need to create a new module that we are going to put at lib/route_scoper.rb:
require 'rails/application'

module RouteScoper
  def self.root
    Rails.application.config.root_directory
  rescue NameError
    '/'
  end
end
After that, in your routes.rb do this:
require_relative '../lib/route_scoper'

MyApp::Application.routes.draw do
  scope RouteScoper.root do
    root :to => 'welcome#home'
    # other routes always go inside this block
    # ...
  end
end
What we are doing is checking whether the root directory is specified; if so, use it, otherwise fall back to "/". Now we just need to set the root directory in config/environments/production.rb:
MyApp::Application.configure do
  # Contains configurations for the production environment
  # ...

  # Serve the application at /myapp
  config.root_directory = '/myapp'
end
In config/environments/development.rb I do not specify config.root_directory, so it uses the normal URL root.
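To tie this back to the original question (plain pages at the root, the Rails app under a path like /app1), a minimal sketch of the server block could combine a static root with the prefixed location above. The document root below is a placeholder of my own, and the socket path is taken from the question's config:

server {
    listen 80;

    # Plain HTML at the site root (PHP would additionally need a fastcgi_pass location)
    root /var/www/html;
    index index.html;

    # Rails app mounted under /app1, with routes scoped as shown above
    location /app1/ {
        try_files $uri @unicorn_proxy;
    }

    location @unicorn_proxy {
        proxy_pass http://unix:/home/analista/www/sip/tmp/sockets/sip.unicorn.sock;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
}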