Please see follow-up at bottom
I'm new to NGINX and trying to set up a simple, in-house development Ubuntu server as an entry point for multiple REST APIs and SPAs, so I can learn some NGINX basics.
All the APIs and SPAs I want to serve are dockerized, and each exposes its services (for an API) or pages (for an SPA) on a localhost port (localhost being the Docker host).
For instance, I have an API at localhost:60380 and an Angular SPA app at localhost:4200, each running in its own Docker container.
I can confirm that these work fine, as I can reach both at their localhost-based URL. Each API also provides a Swagger entry point at its URL e.g. localhost:60380/swagger (or, more verbosely, localhost:60380/swagger/index.html).
I'd now like NGINX to listen at localhost:80 and reverse-proxy requests to the corresponding service based on the request's URL. To keep things clean, NGINX too is dockerized, i.e. it runs in a container using the open source NGINX version.
To dockerize NGINX I followed the directions at https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-docker/, i.e. I run a container from the nginx image, using volumes to point to the host's folders for NGINX configuration and static content. I just changed the Docker command, as I had issues using the mount-based syntax suggested in the documentation (it seems that / is not an allowed character, even when I specified the bind option). Note that the following command is executed from /var:
docker run --name mynginx -v $(pwd)/www:/usr/share/nginx/html:ro -v $(pwd)/nginx/conf:/etc/nginx/conf:ro -p 80:80 -d nginx
i.e.:
host /var/www => container /usr/share/nginx/html;
host /var/nginx/conf => container /etc/nginx/conf.
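For reference, the --mount form from the NGINX docs would look something like the sketch below (the usual pitfall with --mount is quoting and spacing rather than the / character itself; paths here match the volumes above):

```shell
docker run --name mynginx \
  --mount type=bind,source=/var/www,target=/usr/share/nginx/html,readonly \
  --mount type=bind,source=/var/nginx/conf,target=/etc/nginx/conf,readonly \
  -p 80:80 -d nginx
```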
As a test, I created a couple of static web sites in the host's folders mapped as the source for the volumes, i.e.:
/var/www/site1
/var/www/site2
Both these folders just have a static web page (index.html).
I placed in the host's /var/nginx/conf folder a nginx.conf file to serve these 2 static webs. This is the configuration I came up with:
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    # include imports configuration from a separate file.
    # In this case it imports a types block, mapping each MIME type
    # to a file extension, e.g.:
    # types {
    #     text/html html htm shtml;
    #     text/css css;
    #     application/javascript js;
    #     ... etc
    # }
    include /etc/nginx/mime.types;

    # the default type used if no mapping is found in types:
    # here the browser will just download the file.
    default_type application/octet-stream;

    # log's format: the 1st parameter is the format's name (main);
    # the second is a series of variables with different values
    # for every request.
    log_format main
        '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for"';

    # path to the log file and log format's name (main, defined above).
    access_log /var/log/nginx/access.log main;

    # set to on: do not block on disk I/O.
    sendfile on;

    # keep connection alive timeout. As a page usually has a lot of assets,
    # this keeps the connection alive the time required to send them;
    # otherwise, a new connection would be created for each asset.
    keepalive_timeout 65;

    # enable output compression. Recommendation is on.
    gzip on;

    # include all the .conf files under this folder:
    include /etc/nginx/conf.d/*.conf;
}

server {
    listen 80;
    server_name localhost;

    location /site1 {
        root /usr/share/nginx/html/site1;
        index index.html index.htm;
    }

    location /site2 {
        root /usr/share/nginx/html/site2;
        index index.html index.htm;
    }

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
This works fine, and I can browse to these two sites from localhost/site1 and localhost/site2.
I then started one of my dockerized APIs exposed at localhost:60380. I added to the NGINX configuration, in the same server block, the following location, to reach it at localhost/sample/api (and its swagger at localhost/sample/api/swagger):
location /sample/api {
    proxy_pass_header Server;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localhost:60380;
}
As this is an ASP.NET Core web API, I used as a starting point the configuration suggested at https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-3.1. Apart from some header-passing directives, it's essentially no different from the one found e.g. in How to use nginx to serve a web app on a Docker container.
I have then saved the NGINX configuration in the host folder, and signaled NGINX to refresh it with docker kill -s HUP <mycontainername>.
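As an alternative to sending SIGHUP with docker kill, the configuration can be validated and reloaded via docker exec (assuming the container name mynginx from the command above):

```shell
# check the configuration for syntax errors first
docker exec mynginx nginx -t
# then perform a graceful reload (same effect as SIGHUP)
docker exec mynginx nginx -s reload
```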
Anyway, while I am still able to reach the API at localhost:60380, and the two static webs still work, I get a 404 when accessing localhost/sample/api or localhost/sample/api/swagger.
I tried to add proxy_redirect http://localhost:60380/ /sample/api/; as suggested here, but nothing changes.
Could you suggest what I'm doing wrong?
Update 1
I tried adding the trailing / to the URI, but I'm still getting a 404. If this works for Kaustubh (see the answer below), that's puzzling, as I'm still on 404; or maybe we did something differently. Let me recap, also for the benefit of other inexperienced readers like me:
prepare the host:
cd /var
mkdir nginx
cd nginx
mkdir conf
cd ..
mkdir www
cd www
mkdir site1
mkdir site2
cd ../..
Then add an index.html page in each of the folders /var/www/site1 and /var/www/site2, and the below nginx.conf under /var/nginx/conf:
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    # include imports configuration from a separate file.
    # In this case it imports a types block, mapping each MIME type
    # to a file extension, e.g.:
    # types {
    #     text/html html htm shtml;
    #     text/css css;
    #     application/javascript js;
    #     ... etc
    # }
    include /etc/nginx/mime.types;

    # the default type used if no mapping is found in types:
    # here the browser will just download the file.
    default_type application/octet-stream;

    # log's format: the 1st parameter is the format's name (main);
    # the second is a series of variables with different values
    # for every request.
    log_format main
        '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for"';

    # path to the log file and log format's name (main, defined above).
    access_log /var/log/nginx/access.log main;

    # set to on: do not block on disk I/O.
    sendfile on;

    # keep connection alive timeout. As a page usually has a lot of assets,
    # this keeps the connection alive the time required to send them;
    # otherwise, a new connection would be created for each asset.
    keepalive_timeout 65;

    # enable output compression. Recommendation is on.
    gzip on;

    # include all the .conf files under this folder:
    include /etc/nginx/conf.d/*.conf;
}

server {
    listen 80;
    server_name localhost;

    location /site1 {
        root /usr/share/nginx/html/site1;
        index index.html index.htm;
    }

    location /site2 {
        root /usr/share/nginx/html/site2;
        index index.html index.htm;
    }

    # https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-3.1
    # https://stackoverflow.com/questions/57965728/how-to-use-nginx-to-serve-a-web-app-on-a-docker-container
    # https://serverfault.com/questions/801725/nginx-config-for-restful-api-behind-proxy
    location /sample/api {
        # proxy_redirect http://localhost:60380/ /sample/api/;
        proxy_pass_header Server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:60380/;
    }

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
docker run --name mynginx -v $(pwd)/www:/usr/share/nginx/html:ro -v $(pwd)/nginx/conf:/etc/nginx/conf:ro -p 80:80 -d --net=host nginx (notice the added --net=host; with host networking the -p 80:80 mapping is effectively ignored, since the container shares the host's network stack)
navigate to localhost/site1 and localhost/site2: this works.
start your API at localhost:60380 (this is the API port in my sample). I can see it working at localhost:60380 and its swagger page at localhost:60380/swagger.
navigate to localhost/sample/api: 404. Same for localhost/sample/api/swagger/index.html or any other URI with this prefix.
I tried to replicate this at my end as much as possible. I was able to get it working only after I used --net=host in the docker run command for nginx; I had to use this option because the nginx docker container was not able to connect to my api docker container. Below is the command I used:
$ docker run --name nginx -v $(pwd):/usr/share/nginx/html:ro -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf:ro -p 80:80 --net=host -id nginx
/etc/nginx/conf.d/default.conf is the default virtual host configuration in nginx that displays the Welcome to nginx page.
I changed it to below config:
server {
    listen 80;
    server_name localhost;

    # For static files
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    # For reverse proxy
    location /sample/api {
        proxy_pass http://localhost:8080/;
    }
}
According to this answer, a trailing slash after the port number should fix this.
I have tested the same at my end and it works.
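For readers puzzled by why the trailing slash matters: when proxy_pass carries a URI part (even just /), nginx replaces the part of the request path matched by the location prefix with that URI; without one, the request path is forwarded unchanged. A sketch, using the port from the answer above and an illustrative second prefix:

```nginx
# With a URI part on proxy_pass, the matched prefix is replaced:
# /sample/api/swagger  ->  http://localhost:8080/swagger
location /sample/api/ {
    proxy_pass http://localhost:8080/;
}

# Without a URI part, the full original path is forwarded as-is:
# /other/api/swagger  ->  http://localhost:8080/other/api/swagger
location /other/api {
    proxy_pass http://localhost:8080;
}
```

Note that a location without a trailing slash combined with a proxy_pass URI of / produces a double slash for sub-paths (/sample/api/swagger becomes //swagger), which is another common source of 404s.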
Related
I have a NextJS application which acts like a monolith, meaning I have multiple directories under /pages which act as different projects.
/pages/first_project/[...slugs]
/pages/second_project/[...slugs]
For the setup I have Docker running NextJS as a container, alongside Nginx in front of it as a reverse proxy back to the application.
I would like Nginx to map its location blocks against the pages directory in NextJS.
For e.g
/first_project --> /pages/first_project/
/second_project --> /pages/second_project/
In my company, the hosting of the projects are like this:
<company.com>/first_project/[...slugs]
<company.com>/second_project/[...slugs]
The issue I am facing is that, since each location in Nginx looks for the build files under that particular path, I can't host the projects dynamically. I was wondering if this is at all possible?
If I don't provide a basePath in next.config.js, then for <company.com>/first_project/, Nginx expects files to be available under first_project. However, the files are available at the root directory of the project, so I end up getting an error.
I can obviously fix this issue by setting basePath in next.config.js to first_project. However, this will fail for second_project. Is there any way to dynamically load basePath? Please let me know.
Company Nginx Config:
server {
    listen 80;
    server_name _;

    location /first_project {
        proxy_pass http://local:8000;
    }

    location /second_project {
        proxy_pass http://local:8000;
    }
}
NextJS Nginx monolith config (local:8000):
upstream nextjs_upstream {
    server <nextjs_project>:3000;
}

server {
    listen 80 default_server;
    server_name _;
    server_tokens off;

    gzip on;
    gzip_proxied any;
    gzip_comp_level 4;
    gzip_types text/css application/javascript image/svg+xml;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;

    location /_next/static {
        proxy_pass http://nextjs_upstream;
    }

    location / {
        proxy_pass http://nextjs_upstream;
    }
}
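While the basePath question is ultimately about Next.js itself, note that on the nginx side a trailing slash on proxy_pass would strip the project prefix before it reaches the app. A sketch against the company config above (this alone does not handle /_next/... asset requests, which would still need their own location):

```nginx
location /first_project/ {
    # the matched prefix is replaced by /, so
    # /first_project/foo is forwarded as /foo
    proxy_pass http://local:8000/;
}
```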
Any help will be appreciated. Thanks :)
I'm having trouble trying to get the following to work in Docker.
What I want is that when the user requests http://localhost/api, NGINX reverse-proxies to my .Net Core API running in another container.
Container Host: Windows
Container 1: NGINX
dockerfile
FROM nginx
COPY ./nginx.conf /etc/nginx/nginx.conf
nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        location /api1 {
            proxy_pass http://api1;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
Container 2: .Net Core API
Dead simple - API exposed on port 80 in the container
Then there is the docker-compose.yml
docker-compose.yml
version: '3'
services:
  api1:
    image: api1
    build:
      context: ./Api1
      dockerfile: Dockerfile
    ports:
      - "5010:80"
  nginx:
    image: vc-nginx
    build:
      context: ./infra/nginx
      dockerfile: Dockerfile
    ports:
      - "5000:80"
Reading the Docker documentation it states:
Links allow you to define extra aliases by which a service is
reachable from another service. They are not required to enable
services to communicate - by default, any service can reach any other
service at that service’s name.
So as my API service is called api1, I've simply referenced this in the nginx.conf file as part of the reverse proxy configuration:
proxy_pass http://api1;
Something is wrong, as when I enter http://localhost/api I get a 404 error.
Is there a way to fix this?
The problem is the nginx location configuration.
The 404 error is expected, because your mapping is for the /api1 path while you're requesting /api, so your configuration proxies requests from http://localhost/api/some-resource to a missing resource.
So you should just change the location to /api and it will work.
Keep in mind that requests to http://localhost/api will be proxied to http://api1/api (the path is kept). If your backend is configured to expose its api under a prefixing path this is ok; otherwise you will receive another 404 (this time from your service).
To avoid this you should rewrite the path before proxying the request with a rule like this:
# transform /api/some-resource/1 to /some-resource/1
rewrite /api/(.*) /$1 break;
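Putting the answer together, a sketch of the complete location block (the /api prefix and the api1 service name are taken from this thread):

```nginx
location /api {
    # strip the /api prefix before proxying:
    # /api/some-resource/1 -> /some-resource/1
    rewrite /api/(.*) /$1 break;
    proxy_pass http://api1;
}
```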
I'm currently using Nginx as a reverse proxy and to serve my static assets. I was using React Router's HashLocation setting since it was the default; it let me refresh on a route with no problems and no additional configuration, but the downside is that the URL has /#/ prepended to my routes (e.g. http://example-app.com/#/signup).
I'm now trying to switch to React Router's HistoryLocation setting, but I can't figure out how to properly configure Nginx to serve index.html for all routes (e.g. http://example-app.com/signup).
Here's my initial nginx setup (not including my mime.types file):
nginx.conf
# The maximum number of connections for Nginx is calculated by:
# max_clients = worker_processes * worker_connections
worker_processes auto;

# Process needs to run in foreground within container
daemon off;

events {
    worker_connections 1024;
}

http {
    # Hide nginx version information.
    server_tokens off;

    # Define the MIME types for files.
    include /etc/nginx/mime.types;

    # Update charset_types due to updated mime.types
    charset_types
        text/xml
        text/plain
        text/vnd.wap.wml
        application/x-javascript
        application/rss+xml
        text/css
        application/javascript
        application/json;

    # Speed up file transfers by using sendfile() to copy directly
    # between descriptors rather than using read()/write().
    sendfile on;

    # Define upstream servers
    upstream node-app {
        ip_hash;
        server 192.168.59.103:8000;
    }

    include sites-enabled/*;
}
default
server {
    listen 80;
    root /var/www/dist;
    index index.html index.htm;

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires 1d;
    }

    location @proxy {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_pass http://node-app;
        proxy_cache_bypass $http_upgrade;
    }

    location / {
        try_files $uri $uri/ @proxy;
    }
}
This setup worked fine when I was using HashLocation, but after changing to HistoryLocation (the only change I made), I get back a 404 Cannot GET when attempting to refresh on a sub-route's URL. Based on suggestions I found elsewhere, I added:
if (!-e $request_filename) {
    rewrite ^(.*)$ /index.html break;
}
in the location / block. This allows me to refresh and directly access the routes as top locations, but now I can't submit PUT/POST requests, instead getting back a 405 Method Not Allowed. I can see the requests are not being handled properly, as the configuration I added now rewrites all my requests to /index.html, and that's what my API receives. I don't know how to both submit my PUT/POST requests to the right resource and still be able to refresh and access my routes.
location / {
    try_files $uri /your/index.html;
}
http://nginx.org/en/docs/http/ngx_http_core_module.html#try_files
I know your example is more complex with the @proxy, but the above works fine for my application.
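One way to combine the SPA fallback with working PUT/POST requests is to route API traffic through its own location so it never hits the index.html fallback. A sketch, assuming the node-app upstream from the question and a hypothetical /api prefix for the backend routes:

```nginx
# API calls go straight to the backend, bypassing the SPA fallback
location /api/ {
    proxy_pass http://node-app;
}

# everything else falls back to index.html for client-side routing
location / {
    try_files $uri $uri/ /index.html;
}
```

The key point is that try_files with an /index.html fallback should only apply to routes the SPA owns, never to the API endpoints.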
This question already has answers here:
multiple rails apps on nginx and unicorn
(3 answers)
Closed 8 years ago.
I'm looking to set up an nginx server with unicorn. The first app is set up, but it's on the root "/". What I really want is to type localhost/app1 and have that app run, while if I just go to the root, html or php pages are served.
Any clue?
Here's the current nginx.conf:
worker_processes 4;
user nobody nogroup; # for systems with a "nogroup"
pid /tmp/nginx.pid;
error_log /tmp/nginx.error.log;

events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off; # "on" if nginx worker_processes > 1
}

http {
    include mime.types;
    default_type application/octet-stream;
    access_log /tmp/nginx.access.log combined;
    sendfile on;

    tcp_nopush on; # off may be better for *some* Comet/long-poll stuff
    tcp_nodelay off; # on may be better for some Comet/long-poll stuff

    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/html text/xml text/css
               text/comma-separated-values
               text/javascript application/x-javascript
               application/atom+xml;

    upstream sip {
        server unix:/home/analista/www/sip/tmp/sockets/sip.unicorn.sock fail_timeout=0;
    }

    server {
        listen 80 default deferred; # for Linux
        client_max_body_size 4G;
        server_name sip_server;
        keepalive_timeout 5;

        # path for static files
        root /home/analista/www/sip/public;
        try_files $uri/index.html $uri.html $uri @app;

        location @app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            # proxy_buffering off;
            proxy_pass http://sip;
        }

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            if (!-f $request_filename) {
                proxy_pass http://sip;
                break;
            }
        }

        # Rails error pages
        error_page 500 502 503 504 /500.html;
        location = /500.html {
            root /home/analista/www/sip/public;
        }
    }
}
I've got it!
Turns out it was really simple and I wrote a post about it on my blog. http://jrochelly.com/post/2013/08/nginx-unicorn-multiple-rails-apps/
Here's the content:
I'm using Ruby 2.0 and Rails 4.0. I suppose you already have nginx and unicorn installed. So, let's get started!
In your nginx.conf file we are going to make nginx point to a unicorn socket:
upstream unicorn_socket_for_myapp {
    server unix:/home/coffeencoke/apps/myapp/current/tmp/sockets/unicorn.sock fail_timeout=0;
}
Then, with your server listening on port 80, add a location block that points to the subdirectory your rails app is in (this code must be inside the server block):
location /myapp/ {
    try_files $uri @unicorn_proxy;
}

location @unicorn_proxy {
    proxy_pass http://unix:/home/coffeencoke/apps/myapp/current/tmp/sockets/unicorn.sock;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_set_header X-Forwarded-Proto $scheme;
}
Now you can just run Unicorn as a daemon:
sudo unicorn_rails -c config/unicorn.rb -D
The last thing to do, and the one that took me the most digging, is to add a scope in your rails routes file, like this:
MyApp::Application.routes.draw do
  scope '/myapp' do
    root :to => 'welcome#home'
    # other routes are always inside this block
    # ...
  end
end
This way, your app will map a link to /myapp/welcome instead of just /welcome.
But there's an even better way
Well, the above will work on the production server, but what about development? Are you going to develop normally, then change your rails config on every deployment? For every single app? That's not necessary.
So, you need to create a new module that we are going to put at lib/route_scoper.rb:
require 'rails/application'

module RouteScoper
  def self.root
    Rails.application.config.root_directory
  rescue NameError
    '/'
  end
end
After that, in your routes.rb do this:
require_relative '../lib/route_scoper'

MyApp::Application.routes.draw do
  scope RouteScoper.root do
    root :to => 'welcome#home'
    # other routes are always inside this block
    # ...
  end
end
What we are doing is checking whether the root directory is specified: if so, use it; otherwise, fall back to "/". Now we just need to set the root directory in config/environments/production.rb:
MyApp::Application.configure do
  # Contains configurations for the production environment
  # ...
  # Serve the application at /myapp
  config.root_directory = '/myapp'
end
In config/environments/development.rb I do not specify config.root_directory. This way it uses the normal URL root.
I'm trying to have my rails server listen on 2 different ports. One solution proposed to me was to use nginx. I installed nginx with sudo passenger-install-nginx-module and added the following to /etc/nginx/conf.d:
server {
    listen 80;
    listen 10000;
    server_name www.myapp.com;
    passenger_enabled on;
    root /root/myapp/public;
}
When I went to www.myapp.com I got a 403 Forbidden error. I figured it was because there were no static html files in /public. I dropped a simple "hello world" html page in there and it loaded correctly. I then proceeded to start my rails app using passenger start -e production, which caused it to run in standalone phusion passenger mode on port 3000. I go to myapp.com:3000 and I get the app. However, myapp:80 and myapp:10000 still don't work. I'm confused on how to get my nginx to point to the rails server I'm running. Am I doing this completely wrong? Thanks!
Set nginx to forward to my rails server using this https://gist.github.com/jeffrafter/1229497
worker_processes 1;
error_log /usr/local/var/log/nginx.error.log;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream dev {
        server 127.0.0.1:3000;
    }

    server {
        listen 80;

        # You could put a server_name directive here (or multiple) if
        # you have not setup wildcard DNS for *.dev domains
        # See http://jessedearing.com/nodes/9-setting-up-wildcard-subdomains-on-os-x-10-6

        # If we choose a root, then we can't switch things around easily
        # Using /dev/null means that static assets are served through
        # Rails instead, which for development is okay
        root /dev/null;
        index index.html index.htm;

        try_files $uri/index.html $uri.html $uri @dev;

        location @dev {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://dev;
        }

        error_page 500 502 503 504 /50x.html;
    }
}
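Since the original question asked for the app to be reachable on two ports, a sketch of how that fits this config (reusing the dev upstream from the answer above) is to add a second listen directive to the same server block:

```nginx
server {
    listen 80;
    listen 10000;  # the same server block answers on both ports

    root /dev/null;
    try_files $uri @dev;

    location @dev {
        proxy_pass http://dev;
    }
}
```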