multi-website host behind nginx reverse proxy - docker

I have followed what seems like countless guides on how to set up a reverse proxy with nginx.
I am going to run several websites in docker containers on an EC2 instance. The instance is in a target group behind an ALB - SSL termination at the ALB.
I have created sites A and B:
sitea.conf
server {
    root /var/www/html;
    server_name sitea.com;

    location / {
        proxy_pass http://127.0.0.1:9090;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
siteb.conf
server {
    root /var/www/html;
    server_name siteb.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
This was a default install of nginx on an Amazon Linux 2 AMI.
I put both sitea.conf and siteb.conf in the /etc/nginx/sites-available directory and then created the symlinks:
ln -s /etc/nginx/sites-available/* /etc/nginx/sites-enabled
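For reference, after creating the symlinks the configuration still has to pass a syntax check and nginx has to be reloaded; on Amazon Linux 2 that is typically:

nginx -t                  # validate the configuration, including the newly enabled vhosts
systemctl reload nginx    # pick up the changes without dropping connections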
What I am expecting is host-based routing by nginx.
What is happening is that sitea.com is getting ALL of the traffic.
Even the load balancer health checks are being routed by nginx to sitea. Tailing the logs on the container
docker logs --follow sitea
I see all of the health check requests coming in (and getting redirected because it is a WordPress container).
Nginx is not routing any traffic based on the Host header (the load balancer health checks being the telltale indicator).
Obviously something is wrong with my configuration - but I thought this was all there was to it. Where else do I need to configure nginx for a multi-site reverse proxy?
EDIT:
Including the /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 4096;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/sites-enabled/*.conf;

    server {
        listen 80;
        listen [::]:80;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        # include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
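For reference, nginx chooses a server block by matching the request's Host header against each block's server_name (among all blocks listening on the same port); if nothing matches, the request goes to the default server for that port, which is the first block defined unless one is marked default_server. That is how unmatched traffic such as ALB health checks can all land on a single site. A minimal sketch of explicit host-based vhosts plus a catch-all (the ports 9090/8080 are taken from the configs above; the dedicated health-check block is an assumption, not part of the original setup) might look like:

# catch-all for anything that does not match a server_name,
# e.g. ALB health checks that arrive with the instance IP as Host
server {
    listen 80 default_server;
    server_name _;
    return 200;
}

server {
    listen 80;
    server_name sitea.com;
    location / {
        proxy_pass http://127.0.0.1:9090;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name siteb.com;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}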

Related

Nginx upstream server works with IP address but not with DNS

Sorry for any mistakes; I am new to Nginx.
I have my application deployed on Docker Engine.
I have five Docker images, but two are the most important here:
1. Backend (a Django DRF application running under gunicorn)
2. Frontend (a React app served by Nginx)
I define the backend as an upstream in Nginx, so in nginx.conf I have two locations:
"/" for the frontend
"/api" for the backend (proxied to the upstream).
I am able to start my containers and they "talk" to each other when I use the IP address in my browser, so the backend receives requests and returns responses.
Now I bought a DNS name and added SSL certificates (Let's Encrypt; I still have to add a browser exception, but that is a separate question). If I reach my site using the DNS name, the frontend works, but the backend does not.
(Screenshots were included showing an unsuccessful request when using the DNS name and a successful request when using the IP address.)
Here is my nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    # include /etc/nginx/conf.d/*.conf;

    upstream backend {
        server api:8000;
    }

    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        ssl_certificate /etc/nginx/ssl/live/site.org/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/live/site.org/privkey.pem;

        location /api {
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Origin' '*';
                #
                # Om nom nom cookies
                #
                add_header 'Access-Control-Allow-Credentials' 'true';
                add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, OPTIONS';
                #
                # Custom headers and headers various browsers *should* be OK with but aren't
                #
                add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
                #
                # Tell client that this pre-flight info is valid for 20 days
                #
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Content-Type' 'text/plain charset=UTF-8';
                add_header 'Content-Length' 0;
                return 204;
            }

            # Tried this ipv6=off
            resolver 1.1.1.1 ipv6=off valid=30s;
            set $empty "";
            proxy_pass http://backend$empty;
            # proxy_pass http://backend;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_read_timeout 3600;
            proxy_headers_hash_max_size 512;
            proxy_headers_hash_bucket_size 128;
            proxy_set_header Content-Security-Policy upgrade-insecure-requests;
        }

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }

        # location /static/ {
        #     alias /home/app/web/staticfiles/;
        # }
    }

    server {
        listen 80;
        listen [::]:80;

        location / {
            return 301 https://$host$request_uri;
        }

        location ~ /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }
    }
}
This HTTP 400 Bad Request error looks like the one coming from Django's request validation, since your requests differ only in the Host HTTP request header value. You should include every domain name you use in the ALLOWED_HOSTS list in the Django settings.py file. Domain names should be specified as they would appear in the Host header (excluding any port number); a wildcard-like entry such as .example.com is allowed and matches the example.com domain and every subdomain. The special value * can be used to skip Host header validation entirely (not recommended unless you do this validation at some other request-processing level).
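As a minimal illustration (the domain names below are placeholders, not taken from the question), the relevant settings.py entry could look like:

# settings.py -- hypothetical domains for illustration
ALLOWED_HOSTS = [
    "example.org",     # exact domain as it appears in the Host header (no port)
    ".example.org",    # wildcard-style entry: example.org plus every subdomain
]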

Put two NGINX servers in series

Is there a way to put two NGINX servers in series?
In my configuration, I have multiple docker-compose instances of containers, which all run the same web applications. In addition, I have two NGINX servers. The NGINX1 server is located on my physical machine, and the other NGINX server (NGINX2) is located inside a docker-compose container.
Is there a way, when connecting to the NGINX1 server, to automatically reach the APP1 application (which is inside a container) by passing through the second NGINX (NGINX2, which is also internal to the container), simply by typing "mydomain.com/app1" into a browser?
I know a simpler solution would be to point the docker-compose container directly at the external NGINX, but could I use the scenario described above instead?
For better understanding, I made a simple diagram showing my architecture.
(Image: architecture of the project.)
Here is my NGINX1 config file:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 9999;

    server {
        listen 80;
        server_name client1.nginx.loc;
        access_log logs/nginx_client_loc-access.log;
        error_log logs/nginx_client_loc-error.log;

        location /loki {
            #proxy_http_version 1.1;
            #proxy_set_header Upgrade $http_upgrade;
            #proxy_set_header Connection "Upgrade";
            #proxy_set_header Host $http_host;
            proxy_pass http://172.29.161.227:3100;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
And here is the second NGINX config (NGINX2, internal to the container):
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 99999;

    server {
        listen 80;
        server_name localhost 127.0.0.1;
        resolver 127.0.0.11;

        location /APP1 {
            proxy_pass http://APP1/content;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
        }

        location /App2 {
            include /etc/nginx/mime.types;
            proxy_pass http://APP2/targets;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
Thanks so much
If I understood correctly, you want NGINX1 to pass traffic to NGINX2, which would then pass the packet onward to APP1?
In this case, the solution is rather straightforward:
Configure NGINX1 to send the packet to a specific port, e.g. port 777. Then add a listener on NGINX2 that listens on port 777 and forwards it on.
NGINX1:
http {
    ...
    server {
        listen 80;
        ...
        location /loki {
            #proxy_http_version 1.1;
            #proxy_set_header Upgrade $http_upgrade;
            #proxy_set_header Connection "Upgrade";
            #proxy_set_header Host $http_host;
            proxy_pass http://172.29.161.227:3100;
        }

        location /APP1 {
            proxy_pass <URL for NGINX2>:777;
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
        }

        #error_page 404 /404.html;
        ...
    }
NGINX2:
http {
    include mime.types;
    ...
    server {
        listen 80;
        ...
    }

    server {
        listen 777;
        server_name localhost 127.0.0.1;
        resolver 127.0.0.11;

        location /APP1 {
            proxy_pass http://APP1/content;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
        }
    }
    ...
This way, a packet that arrives at /APP1 is forwarded by NGINX1 to port 777 of NGINX2, which in turn forwards it to the APP1 content.
Also, if you could include ports on your architecture diagram next time, it would make the packet movement clearer to understand.
Hope this helps.

Ruby on Rails app with Nginx unreachable

I'm deploying my Ruby on Rails website on a remote server.
I put my code in /var/www/[websitename]
/opt/nginx/conf/nginx.conf is as follows:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    passenger_root /home/tamer/.rvm/gems/ruby-2.5.0@meraki/gems/passenger-5.2.0;
    passenger_ruby /home/tamer/.rvm/gems/ruby-2.5.0@meraki/wrappers/ruby;
    passenger_app_env development;

    include mime.types;
    default_type application/octet-stream;

    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 80;
        server_name http://[my external ip];

        # Tell Nginx and Passenger where your app's 'public' directory is
        root /var/www/[my directory]/public;
        index index.html index.htm;

        # Static assets are served from the mentioned root directory
        location / {
            # root /var/www/APPNAME/current;
            # index index.html index.htm;
            proxy_pass http://127.0.0.1:3000;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            # proxy_set_header X-Real-Port $server_port;
            # proxy_set_header X-Real-Scheme $scheme;
            proxy_set_header X-NginX-Proxy true;
        }
Then, I ran rails s -b 127.0.0.1 -p 3000
The code runs perfectly in my terminal.
However, the browser gives me "This site can’t be reached".
I get the same result with passenger start -a 127.0.0.1 -p 3000
How can I fix this problem?
It worked after running iptables -F and restarting my Rails application.
NOTE:
You are connecting to your Passenger instance at localhost:3000. You should be able to connect via http://localhost, i.e. the default port 80; I think this is what you want: nginx listens on port 80 (listen 80;) and proxies to your app (proxy_pass http://127.0.0.1:3000;).
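For context, iptables -F flushes every firewall rule, which is a rather blunt fix; a narrower approach (a sketch, assuming it was an INPUT rule blocking the traffic) would be to inspect the rules and allow only inbound HTTP:

iptables -L INPUT -n --line-numbers               # see which rules are in the way
iptables -I INPUT -p tcp --dport 80 -j ACCEPT     # allow inbound HTTP instead of flushing everything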

Puma shows all requests coming from 127.0.0.1 when behind nginx

I am having a problem where the only IP address that shows up in my Rails log is 127.0.0.1; it appears that the remote IP is not getting passed through the proxy. I am unsure of what I am missing. Nginx is custom compiled within an omnibus package, and I have included that build script below as well. If anyone can give me some insight, that would be greatly appreciated.
Nginx Build Recipe:
name "nginx"
default_version "1.9.10"
dependency "pcre"
dependency "openssl"
source url: "http://nginx.org/download/nginx-#{version}.tar.gz",
md5: "64cc970988356a5e0fc4fcd1ab84fe57"
relative_path "nginx-#{version}"
build do
command ["./configure",
"--prefix=#{install_dir}/embedded",
"--with-http_ssl_module",
"--with-http_stub_status_module",
"--with-http_gzip_static_module",
"--with-http_v2_module",
"--with-http_realip_module",
"--with-ipv6",
"--with-debug",
"--with-ld-opt=-L#{install_dir}/embedded/lib",
"--with-cc-opt=\"-L#{install_dir}/embedded/lib -I#{install_dir}/embedded/include\""].join(" ")
command "make -j #{workers}", :env => {"LD_RUN_PATH" => "#{install_dir}/embedded/lib"}
command "make install"
end
Nginx Config:
user smart-mobile smart-mobile;
worker_processes 1;
error_log stderr;
pid nginx.pid;
daemon off;

events {
    worker_connections 10240;
}

http {
    #log_format combined '$remote_addr - $remote_user [$time_local] '
    #                    '"$request" $status $body_bytes_sent '
    #                    '"$http_referer" "$http_user_agent"';
    #
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/json;

    proxy_cache_path proxy_cache keys_zone=smart-mobile:10m max_size=1g levels=1:2;
    proxy_cache smart-mobile;

    include /opt/smart-mobile/embedded/conf/mime.types;
    include /var/opt/smart-mobile/nginx/conf/smart-mobile.conf;
}
Nginx Site Config:
upstream smart_mobile {
    server unix:/var/opt/smart-mobile/puma/puma.socket;
}

server {
    listen 80;
    server_name 10.10.20.108;

    access_log /var/log/smart-mobile/nginx/smart-mobile-http.access.log;
    error_log /var/log/smart-mobile/nginx/smart-mobile-http.error.log;

    root /opt/smart-mobile/embedded/smart-mobile-rails/public;
    index index.html;

    ## Real IP Module Config
    ## http://nginx.org/en/docs/http/ngx_http_realip_module.html

    location / {
        if (-f /opt/smart-mobile/embedded/smart-mobile-rails/tmp/maintenance.enable) {
            return 503;
        }
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        try_files $uri $uri/index.html $uri.html @ruby;
    }

    location @ruby {
        proxy_pass http://smart_mobile;
    }

    error_page 404 /404.html;
    error_page 402 /402.html;
    error_page 500 /500.html;
    error_page 502 /502.html;
    error_page 503 @maintenance;

    location @maintenance {
        if ($uri !~ ^/icos/) {
            rewrite ^(.*)$ /503.html break;
        }
    }
}
Puma Config:
directory '/opt/smart-mobile/embedded/smart-mobile-rails'
threads 2,4
bind 'unix:///var/opt/smart-mobile/puma/puma.socket'
pidfile '/var/opt/smart-mobile/puma/puma.pid'
preload_app!

on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end

before_fork do
  ActiveRecord::Base.connection_pool.disconnect!
end
This worked for me (puma 3.4.0):
# Serve static content if a corresponding file exists.
location / {
    try_files $uri @proxy;

    # NOTE: Parameters below apply ONLY for static files that match.
    expires max;
    add_header Cache-Control "public";
    add_header By-Nginx "yes"; # DEBUG
}

# Serve dynamic content from the backend.
location @proxy {
    proxy_pass http://backend_for_www.site.com;
    proxy_pass_request_headers on;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
After some exploration, I've found out that:
- Puma looks specifically at the X-Forwarded-For HTTP header; once it's passed correctly, Puma should pick it up. No configuration on the Puma end is necessary.
- request.headers["REMOTE_ADDR"] will stay "127.0.0.1"; this doesn't change no matter how hard you try.
- Passing the X-Real-IP header does not affect the logging issue in any way.
- Basically, you can use set_remote_address header: "X-Real-IP" in the Puma configuration file to set the "remote address of the connection" from this header, but Puma itself doesn't look at that header by default, and I don't know of any other software that does. Documented here: http://www.rubydoc.info/gems/puma/3.2.0/Puma%2FDSL%3Aset_remote_address.
This was my own fault: I had all my proxy_set_header directives before the try_files in location /. I moved the proxy_set_header directives into the @ruby location block and removed the X-Real-IP header. Everything is working now; thank you for all the input.
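For illustration, a sketch of the reworked blocks based on that description (same smart_mobile upstream as in the site config above; the exact final configuration was not posted) would be roughly:

location / {
    if (-f /opt/smart-mobile/embedded/smart-mobile-rails/tmp/maintenance.enable) {
        return 503;
    }
    try_files $uri $uri/index.html $uri.html @ruby;
}

location @ruby {
    proxy_http_version 1.1;
    # set the headers here so they apply to the request that is actually proxied
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://smart_mobile;
}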

How can I get nginx to serve static files from two locations when also serving a unicorn rails server?

Ok, so I have pretty much the standard nginx config for serving a unicorn rails server (listens to a socket file and also serves static files from the rails_app/public directory).
However, I want to do the following:
1. serve static files from rails_app/public (as is currently done)
2. serve static files with the URL prefix /reports/ from a different root (like /mnt/files/)
I tried adding the following to my nginx config:
location /reports/ {
    root /mnt/matthew/web;
}
but it didn't work.
Any ideas how I can get this to happen?
Below is my entire nginx.conf file:
worker_processes 1;
pid /tmp/nginx.pid;
error_log /tmp/nginx.error.log;

events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off; # "on" if nginx worker_processes > 1
    # use epoll; # enable for Linux 2.6+
    # use kqueue; # enable for FreeBSD, OSX
}

http {
    # nginx will find this file in the config directory set at nginx build time
    include mime.types;

    # fallback in case we can't determine a type
    default_type application/octet-stream;

    # click tracking!
    access_log /tmp/nginx.access.log combined;

    sendfile on;
    tcp_nopush on; # off may be better for *some* Comet/long-poll stuff
    tcp_nodelay off; # on may be better for some Comet/long-poll stuff

    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/html text/xml text/css
               text/comma-separated-values
               text/javascript application/x-javascript
               application/atom+xml;

    # this can be any application server, not just Unicorn/Rainbows!
    upstream app_server {
        server unix:/home/matthew/server/tmp/unicorn.sock fail_timeout=0;
    }

    server {
        # enable one of the following if you're on Linux or FreeBSD
        listen 80 default deferred; # for Linux
        # listen 80 default accept_filter=httpready; # for FreeBSD

        client_max_body_size 4G;
        server_name _;
        keepalive_timeout 5;

        location /reports/ {
            root /mnt/matthew/web;
        }

        # path for static files
        root /home/matthew/server/public;
        try_files $uri/index.html $uri.txt $uri.html $uri @app;

        location @app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app_server;
        }

        # Rails error pages
        error_page 500 502 503 504 /500.html;
        location = /500.html {
            root public;
        }
    }
}
location @app is looking for the files in /home/matthew/server/public, as that's the parent root specified. If your try_files statement matches files in location /reports/, which has a different root, those files won't be found. You need to set things up like this:
location /reports/ {
    root /mnt/matthew/web;
    try_files $uri/index.html $uri.txt $uri.html $uri @foo;
}

root /home/matthew/server/public;
try_files $uri/index.html $uri.txt $uri.html $uri @app;

location @foo {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app_server;
    root /mnt/matthew/web;
}

location @app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app_server;
}
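As a side note (not part of the original answer): if the files for /reports/ do not actually live under a .../reports/ subdirectory on disk, alias may be what you want instead of root, because root appends the full request URI to the configured path:

location /reports/ {
    # with alias, /reports/foo.pdf maps to /mnt/files/foo.pdf
    alias /mnt/files/;
}
# with "root /mnt/matthew/web;", /reports/foo.pdf would map to /mnt/matthew/web/reports/foo.pdf instead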
