nginx index directive works fine locally but gives 404 on ec2 - docker

I have a web project that I want to deploy using docker-compose and nginx.
Locally, I:
docker-compose build
docker-compose push
If I docker-compose up, I can access localhost/ and get redirected to my index.html.
Now on my ec2 instance (a regular ec2 instance where I installed docker and docker-compose) I docker-compose pull, then docker-compose up.
All the containers launch correctly and I can exec sh into my nginx container and see there's a /facebook/index.html file.
If I go to [instance_ip]/index.html, everything works as expected.
If I go to [instance_ip]/, I get a 404 response.
nginx receives the request (I see it in the access logs) but does not redirect to index.html.
Why is the index directive not able to redirect to my index.html file?
I tried to:
Reproduce locally by removing all local images and pulling from my registry.
Kill my ec2 instance and launch a new one.
But I got the same result.
I'm using docker-compose 1.11.1 and docker 17.05.0 locally. On the ec2 instance it's docker 17.03.1, and I tried both docker-compose 1.11.1 and 1.14.1 (a sign that I'm a bit desperate ;)).
An extract from my docker-compose file:
nginx:
  image: [image from registry]
  build:
    context: ./
    dockerfile: deploy/nginx.dockerfile
  ports:
    - "80:80"
  depends_on:
    - web
My nginx image starts from alpine, installs nginx, adds the index.html file and copies my conf file in /etc/nginx/nginx.conf.
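For reference, that Dockerfile is roughly equivalent to this sketch (file names and paths are assumptions based on the description above, not the actual file):

FROM alpine:3.6
RUN apk add --no-cache nginx
# the static entry point lives under /facebook (matches the root directive below)
COPY index.html /facebook/index.html
# the config shown below; daemon off; keeps the container in the foreground
COPY deploy/nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx"]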
Here's my nginx config. I checked that it is present on the running containers (both locally and on ec2).
# prevent nginx from exiting when using `run` to launch the container
daemon off;

worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile off;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";

    server {
        error_log /var/log/nginx/file.log debug;
        listen 80 default_server;
        # root /home/domain.com;

        # Bad developers use underscore in headers.
        underscores_in_headers on;

        # root should be outside the location block
        root /facebook;

        location / {
            index index.html;
            # autoindex on;
            try_files $uri @app;
        }

        location @app {
            include uwsgi_params;
            # Using docker-compose linking, the nginx service depends on a 'web' service.
            uwsgi_pass web:3033;
        }
    }
}
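One line in this config deserves a close look, since it is a classic cause of exactly this kind of 404: with try_files $uri @app;, a request for / is checked as the path /facebook/, which is a directory and never matches a plain $uri file test, so the index directive is skipped and the request falls through to @app. A sketch of the usual fix (the extra $uri/ test lets nginx match the directory, which triggers the index module):

location / {
    index index.html;
    # $uri/ makes nginx try the directory, so index index.html can apply
    try_files $uri $uri/ @app;
}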
I have no idea why the container is behaving differently on the ec2 instance.
Any pointers appreciated!

Related

Reverse-proxy to Dockerized REST APIs with Dockerized NGINX

Please see follow-up at bottom
I'm new to NGINX and trying to set up a simple, in-house development Ubuntu server as an entry point for multiple REST APIs and SPA apps, so I can learn some NGINX basics.
All the APIs and SPAs I want to serve are dockerized, and each exposes its services (for API) or page (for SPA) on a localhost (the Docker's host) port.
For instance, I have an API at localhost:60380 and an Angular SPA app at localhost:4200, each running in its own Docker container.
I can confirm that these work fine, as I can reach both at their localhost-based URL. Each API also provides a Swagger entry point at its URL e.g. localhost:60380/swagger (or, more verbosely, localhost:60380/swagger/index.html).
I'd now like NGINX to listen at localhost:80 and reverse-proxy requests to each corresponding service, based on the request's URL. To keep things clean, NGINX too is dockerized, i.e. run from a container using the NGINX open source version.
To dockerize NGINX I followed the directions at https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-docker/, i.e. I run a container from the nginx image, using volumes to point to host's folders for NGINX configuration and static content. I just changed the Docker command, as I had issues in using the mount-based syntax suggested in the documentation (it seems that / is not an allowed character, even if I specified the bind option; please notice that the following command is executed from /var):
docker run --name mynginx -v $(pwd)/www:/usr/share/nginx/html:ro -v $(pwd)/nginx/conf:/etc/nginx/conf:ro -p 80:80 -d nginx
i.e.:
host /var/www => container /usr/share/nginx/html;
host /var/nginx/conf => container /etc/nginx/conf.
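For what it's worth, the --mount syntax from the documentation does accept paths containing slashes; the usual stumbling block is unquoted spaces or a missing key in its key=value list. A sketch of the equivalent command, under that assumption (same paths and options as above):

docker run --name mynginx \
  --mount type=bind,source="$(pwd)"/www,target=/usr/share/nginx/html,readonly \
  --mount type=bind,source="$(pwd)"/nginx/conf,target=/etc/nginx/conf,readonly \
  -p 80:80 -d nginx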
As a test, I created a couple of static web sites in the host's folders mapped as the source for the volumes, i.e.:
/var/www/site1
/var/www/site2
Both these folders just have a static web page (index.html).
I placed an nginx.conf file in the host's /var/nginx/conf folder to serve these two static sites. This is the configuration I came up with:
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    # include imports configuration from a separate file.
    # In this case it imports a types block, mapping each MIME type
    # to a file extension, e.g.:
    # types {
    #     text/html html htm shtml;
    #     text/css css;
    #     application/javascript js;
    #     ... etc
    # }
    include /etc/nginx/mime.types;

    # the default type used if no mapping is found in types:
    # here the browser will just download the file.
    default_type application/octet-stream;

    # log format: the 1st parameter is the format's name (main);
    # the second is a series of variables with different values
    # for every request.
    log_format main
        '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for"';

    # path to the log file and the log format's name (main, defined above).
    access_log /var/log/nginx/access.log main;

    # set to on: do not block on disk I/O.
    sendfile on;

    # keep-alive timeout. As a page usually has a lot of assets,
    # this keeps the connection alive for the time required to send them;
    # otherwise, a new connection would be created for each asset.
    keepalive_timeout 65;

    # enable output compression. Recommendation is on.
    gzip on;

    # include all the .conf files under this folder:
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name localhost;

        location /site1 {
            root /usr/share/nginx/html/site1;
            index index.html index.htm;
        }

        location /site2 {
            root /usr/share/nginx/html/site2;
            index index.html index.htm;
        }

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
This works fine, and I can browse to these two sites from localhost/site1 and localhost/site2.
I then started one of my dockerized APIs exposed at localhost:60380. I added to the NGINX configuration, in the same server block, the following location, to reach it at localhost/sample/api (and its swagger at localhost/sample/api/swagger):
location /sample/api {
    proxy_pass_header Server;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localhost:60380;
}
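For reference, the way proxy_pass treats the URI matters here: written without a URI part (http://localhost:60380), nginx forwards the request URI unchanged, so the backend receives /sample/api/swagger, a route it does not expose, hence 404. A sketch of the trailing-slash pairing that makes nginx strip the matched prefix before proxying:

# requests for /sample/api/... reach the backend as /...
location /sample/api/ {
    proxy_pass http://localhost:60380/;
    # ...same proxy_set_header lines as above...
}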
As this is an ASP.NET Core web API, I used as a starting point the configuration suggested at https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-3.1. Apart from some header-passing directives, it's essentially the same as the one found e.g. at How to use nginx to serve a web app on a Docker container.
I have then saved the NGINX configuration in the host folder, and signaled NGINX to refresh it with docker kill -s HUP <mycontainername>.
Anyway, while I am still able to reach the API at localhost:60380, and the two static webs still work, I get a 404 when accessing localhost/sample/api or localhost/sample/api/swagger.
I tried to add proxy_redirect http://localhost:60380/ /sample/api/; as suggested here, but nothing changes.
Could you suggest what I'm doing wrong?
Update 1
I tried adding the trailing / to the URI but I'm still getting 404. If this works for Kaustubh (see the answer below), that's puzzling, as I'm still on 404; or maybe we did something different. Let me recap, also for the benefit of other inexperienced readers like me:
prepare the host:
cd /var
mkdir nginx
cd nginx
mkdir conf
cd ..
mkdir www
cd www
mkdir site1
mkdir site2
cd ..
Then add an index.html page in each of the folders /var/www/site1 and /var/www/site2, and the below nginx.conf under /var/nginx/conf:
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    # include imports configuration from a separate file.
    # In this case it imports a types block, mapping each MIME type
    # to a file extension, e.g.:
    # types {
    #     text/html html htm shtml;
    #     text/css css;
    #     application/javascript js;
    #     ... etc
    # }
    include /etc/nginx/mime.types;

    # the default type used if no mapping is found in types:
    # here the browser will just download the file.
    default_type application/octet-stream;

    # log format: the 1st parameter is the format's name (main);
    # the second is a series of variables with different values
    # for every request.
    log_format main
        '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for"';

    # path to the log file and the log format's name (main, defined above).
    access_log /var/log/nginx/access.log main;

    # set to on: do not block on disk I/O.
    sendfile on;

    # keep-alive timeout. As a page usually has a lot of assets,
    # this keeps the connection alive for the time required to send them;
    # otherwise, a new connection would be created for each asset.
    keepalive_timeout 65;

    # enable output compression. Recommendation is on.
    gzip on;

    # include all the .conf files under this folder:
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name localhost;

        location /site1 {
            root /usr/share/nginx/html/site1;
            index index.html index.htm;
        }

        location /site2 {
            root /usr/share/nginx/html/site2;
            index index.html index.htm;
        }

        # https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-3.1
        # https://stackoverflow.com/questions/57965728/how-to-use-nginx-to-serve-a-web-app-on-a-docker-container
        # https://serverfault.com/questions/801725/nginx-config-for-restful-api-behind-proxy
        location /sample/api {
            # proxy_redirect http://localhost:60380/ /sample/api/;
            proxy_pass_header Server;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://localhost:60380/;
        }

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
docker run --name mynginx -v $(pwd)/www:/usr/share/nginx/html:ro -v $(pwd)/nginx/conf:/etc/nginx/conf:ro -p 80:80 -d --net=host nginx (notice the added --net=host)
navigate to localhost/site1 and localhost/site2: this works.
start your API at localhost:60380 (this is the API port in my sample). I can see it working at localhost:60380 and its swagger page at localhost:60380/swagger.
navigate to localhost/sample/api: 404. Same for localhost/sample/api/swagger/index.html or any other URI with this prefix.
I tried to replicate this at my end as much as possible. I was able to get it working only after I used --net=host in the docker run command for nginx, because the nginx docker container was not able to connect to my api docker container. Below is the command I used:
$ docker run --name nginx -v $(pwd):/usr/share/nginx/html:ro -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf:ro -p 80:80 --net=host -id nginx
/etc/nginx/conf.d/default.conf is the default virtual host configuration in nginx that displays the Welcome to nginx page.
I changed it to below config:
server {
    listen 80;
    server_name localhost;

    # For static files
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    # For reverse proxy
    location /sample/api {
        proxy_pass http://localhost:8080/;
    }
}
According to this answer, a trailing slash after the port number should fix this.
I have tested the same at my end and it works.
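A quick way to verify the routing from the host, assuming the nginx container and the API are both up (paths as in the recap above):

# should return the backend's swagger page via the nginx proxy
curl -i http://localhost/sample/api/swagger/index.html

# compare with the direct route to the backend
curl -i http://localhost:60380/swagger/index.html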

How to configure Nginx container to serve an app running on the host machine with a self signed cert?

I'm updating a legacy api to have a better dev experience. I have dockerized nginx and a java api and am managing them with vscode dev-containers plugin. There is another project that runs on node that currently is not dockerized and runs on my host machine (macOS). Previously nginx was configured on the host machine to allow https requests from the node client app to the java api. I need to have that same functionality without dockerizing the node app (yet).
I followed the steps in this post to sign my certs. There is an admin login page on the java api. When I try to access an admin page via https://localhost I get served the page fine. So no issues there.
The previous configuration expected you to have an entry in the hosts file, 127.0.0.1 website, in order to navigate to the node app via https. This isn't working anymore with the dockerized nginx. I'm open to any suggestions, as I've been spinning my wheels for a while.
This is my current nginx.conf file
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    upstream backend {
        # app is the java api docker service; this was localhost:8080 in the old configuration
        server app:8080;
    }

    upstream frontend {
        # points to the port where the node app is running on the host machine. Used to be localhost:4000
        server host.docker.internal:4000;
    }

    server {
        listen 443 ssl;
        server_name localhost;

        ssl_certificate /etc/nginx/certs/server.crt;
        ssl_certificate_key /etc/nginx/certs/server.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://backend;
        }
    }

    server {
        listen 443 ssl;
        server_name website;

        ssl_certificate /etc/nginx/certs/website.crt;
        ssl_certificate_key /etc/nginx/certs/website.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://frontend;
        }
    }

    include servers/*;
}
Here is my docker-compose.yml
version: '3.8'
services:
  app:
    user: vscode
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
    volumes:
      - ..:/workspace:cached
      - ./app/repository:/home/vscode/.m2/repository:cached
    ports:
      - "8080:8080"
    command: sleep infinity
  web:
    image: nginx:1.19.1-alpine
    ports:
      - "8082:8082"
      - "443:443"
      - "80:80"
    volumes:
      - ./web/certs:/etc/nginx/certs
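Two things stand out in this compose file: the web service mounts the certs but not the nginx.conf shown above, and on a Linux engine host.docker.internal is not defined by default (Docker Desktop for macOS provides it out of the box). A sketch of the web service with both addressed; the config path is an assumption, and extra_hosts with host-gateway requires Docker 20.10+:

web:
  image: nginx:1.19.1-alpine
  ports:
    - "443:443"
    - "80:80"
  volumes:
    - ./web/certs:/etc/nginx/certs
    # assumed location of the nginx.conf shown above
    - ./web/nginx.conf:/etc/nginx/nginx.conf:ro
  extra_hosts:
    # only needed on Linux engines; resolves host.docker.internal to the host
    - "host.docker.internal:host-gateway"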

Can't connect from one docker container to another by its public domain name

I have an application composed of containerized web-services deployed with docker-compose (it's a test env). One of the containers is nginx that operates as a reverse proxy for services and also serves static files. A public domain name points to the host machine and nginx has a server section that utilizes it.
The problem I am facing is that I can't talk to nginx by that public domain name from the containers launched on this same machine: the connection always times out. (For example, I tried doing curl https://<mypublicdomain>.com.)
Referring to the containers by name (using docker's hostnames) works just fine. Requests to the same domain name from other machines also work ok.
I understand this has to do with how docker does networking, but fail to find any docs that would outline what exactly goes wrong here. Could anyone explain the root of the issue to me or maybe just point in the right direction?
(For extra context: originally I was going to use this to set up monitoring with prometheus and blackbox exporter to make it see the server the same way anyone from the outside would do + to automatically check that SSL is working. For now I pulled back to point the prober to nginx by its docker hostname)
Nginx image
FROM nginx:stable
COPY ./nginx.conf /etc/nginx/nginx.conf.template
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
COPY ./dhparam/dhparam-2048.pem /dhparam-2048.pem
COPY ./index.html /var/www/index.html
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
docker-compose.yaml
version: "3"
networks:
mainnet:
driver: bridge
services:
my-gateway:
container_name: my-gateway
image: aturok/manuwor_gateway:latest
restart: always
networks:
- mainnet
ports:
- 80:80
- 443:443
expose:
- "443"
volumes:
- /var/stuff:/var/www
- /var/certs:/certsdir
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
(I show only the nginx service, as the others are irrelevant. I would, for example, spin up a nettools container without connecting it to the mainnet network and still expect the requests to reach nginx, since I am using the public domain name. The problem also happens with containers connected to the same network.)
nginx.conf (normally it comes with a bunch of env vars; here they are replaced, and the irrelevant backend is removed):
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    #include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name mydomain.com;

        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name mydomain.com;

        ssl_certificate /certsdir/fullchain.pem;
        ssl_certificate_key /certsdir/privkey.pem;

        server_tokens off;

        ssl_buffer_size 8k;
        ssl_dhparam /dhparam-2048.pem;

        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
        ssl_ecdh_curve secp384r1;
        ssl_session_tickets off;

        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 8.8.8.8;

        root /var/www/;
        index index.html;

        location / {
            root /var/www;
            try_files $uri /index.html;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Note: certificates are ok when I access the server from elsewhere
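This has the shape of the usual bridge-network hairpin problem: from inside a container, traffic to the host's public IP is NATed by docker's iptables rules in a way that does not loop back into the published ports. One common workaround, sketched under the assumption that the client containers sit on the same mainnet network, is to give the gateway service a network alias matching the public domain, so Docker's embedded DNS resolves the name straight to the nginx container:

services:
  my-gateway:
    networks:
      mainnet:
        aliases:
          # containers on mainnet now resolve the public name to this container
          - mydomain.com

TLS should keep working with this, since nginx serves the same certificate for that server_name regardless of how the packets arrive.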

Trouble starting docker container with Nginx

I created a Docker CentOS image exposing ports 80, 8000, and 443. When it was done being built, I ran the container using ...
docker run -it -p 8080:8000 -u root <image_id>
Installed on the image is Nginx. There are some issues using the service command to start nginx, so I simply started it by running /usr/bin/nginx directly. I can see that nginx is running by using ps aux | grep nginx:
bash-4.2# ps aux | grep nginx
root 8 0.0 0.1 122892 2172 ? Ss 18:57 0:00 nginx: master process nginx
nginx 9 0.0 0.3 123356 6972 ? S 18:57 0:00 nginx: worker process
nginx 10 0.0 0.3 123356 6972 ? S 18:57 0:00 nginx: worker process
nginx 11 0.0 0.3 123356 6972 ? S 18:57 0:00 nginx: worker process
nginx 12 0.0 0.3 123356 6972 ? S 18:57 0:00 nginx: worker process
I got the container's IP address from /etc/hosts, 172.17.0.11, but when I go to that address in a web browser it just loads for a long time and eventually times out. I am admittedly pretty new to Nginx and nginx configurations, so I'm not sure if there's something I'm missing in the default configuration file. I checked the access and error logs for nginx, and both are empty.
/etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.12 85a91e447fca
/etc/nginx/nginx.conf
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

    # Settings for a TLS enabled server.
    #
    #    server {
    #        listen 443 ssl http2 default_server;
    #        listen [::]:443 ssl http2 default_server;
    #        server_name _;
    #        root /usr/share/nginx/html;
    #
    #        ssl_certificate "/etc/pki/nginx/server.crt";
    #        ssl_certificate_key "/etc/pki/nginx/private/server.key";
    #        ssl_session_cache shared:SSL:1m;
    #        ssl_session_timeout 10m;
    #        ssl_ciphers HIGH:!aNULL:!MD5;
    #        ssl_prefer_server_ciphers on;
    #
    #        # Load configuration files for the default server block.
    #        include /etc/nginx/default.d/*.conf;
    #
    #        location / {
    #        }
    #
    #        error_page 404 /404.html;
    #        location = /40x.html {
    #        }
    #
    #        error_page 500 502 503 504 /50x.html;
    #        location = /50x.html {
    #        }
    #    }
}
The IP you searched for was the IP of the container from its own point of view. You must access the Docker container from the outside, as your browser lives on your local machine and not in the container.
The local machine and the container are completely separate worlds and can only talk through the ports you exposed. Think of it (as an analogy) like the IP of your internal network at home versus the IP you get from your internet provider.
If you are using docker-machine, you can find the IP you need with the following command:
docker-machine ip default
If you are using 'native' Docker, you can go to http://localhost:8080,
but as your nginx listens on ports 80 and 443, you need to start the container with a command like:
docker run -it -p 8080:80 -p 8443:443 imagename
Note that ports don't just exist because you want them to. If nginx is running on ports 80 and 443 in the container, it will not suddenly start running on port 8000 just because you add that to the docker run command; within the container it will still run on ports 80 and 443. So if you want to map the outside world (read: your own computer) to the container, you have to map a host port (e.g. 8000) to the actual port 80 in the container. Nothing else will work; any mapping to a container port other than 80 or 443 will in this case fail.
Your docker run command should map some host ports to container ports 80 and 443; it looks like you only mapped 8000.
Also, you should use the host IP (assuming you map all required container ports to host ports), such as http://{host ipv4}:8080.
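To make the mapping concrete, a quick check after starting the container with both ports published (the container name is a placeholder):

docker run -d --name web -p 8080:80 -p 8443:443 <image_id>

# list the published ports for the container
docker port web

# hit nginx through the published port, from the host
curl -I http://localhost:8080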

Passenger + Rails app only works on port 3000

Passenger will start up on port 80, but only the home page (which is 100% HTML) shows up. No other page will resolve. And even stranger, all traffic that fails to resolve is forwarded to HTTPS (which, of course, also fails to resolve).
This works:
rvmsudo passenger start --daemonize
This does not work:
rvmsudo passenger start --daemonize --port 80
My config.ru is pretty standard, too:
require ::File.expand_path('../config/environment', __FILE__)
run Rails.application
I am using Rails 4.2.0 and Ruby 2.2.2 with Passenger 5.0.7
Anyone have any ideas?
nginx conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
    #passenger_ruby /home/ubuntu/.rvm/gems/ruby-2.2.2
}
app-specific conf:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;
    server_name www.mydomain.com;

    # Tells Nginx to serve static assets from this directory.
    root /var/www/mydomain/public;

    location / {
        # Tells Nginx to forward all requests for www.foo.com
        # to the Passenger Standalone instance listening on port 4000.
        proxy_pass http://127.0.0.1:4000;

        # These are "magic" Nginx configuration options that
        # should be present in order to make the reverse proxying
        # work properly. Also contains some options that make WebSockets
        # work properly with Passenger Standalone. Please learn more at
        # http://nginx.org/en/docs/http/ngx_http_proxy_module.html
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_buffering off;
    }
}
I think you need a different setup/configuration if you want to run Passenger behind nginx.
Nginx should listen on port 80 (as specified in the server section of your nginx conf) and forward traffic to your app running under Passenger's hood (specified in the conf to be on port 4000, but started by hand on port 80), if I do not misread.
Probably nginx tells you that it's unhappy in its logfile (/var/log/nginx.log, I believe). You can confirm what is sitting on port 80 by executing netstat -tlpn.
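A minimal way to line things up, assuming the app-specific conf above stays as-is: start Passenger Standalone on the port nginx proxies to (4000, from the proxy_pass line), leave port 80 to nginx, then verify who owns each port:

# start the app where nginx expects it (per the proxy_pass above)
rvmsudo passenger start --daemonize --port 4000

# check the config and (re)start nginx on port 80
sudo nginx -t && sudo service nginx restart

# nginx should own :80, passenger :4000
sudo netstat -tlpn | grep -E ':80|:4000'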
