I want to configure NGINX with Rails 4 and run my application in production mode. The problem is that I get a 403 error when I start the app with rails s -e production and open localhost in the browser. Naturally, I set 755 permissions on all files in my application folder. Here is my nginx.conf:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
# include /etc/nginx/conf.d/*.conf;
# include /etc/nginx/sites-enabled/*;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
client_max_body_size 50M;
# fastcgi_buffers 8 16k;
# fastcgi_buffer_size 32k;
# fastcgi_connect_timeout 300;
# fastcgi_send_timeout 300;
# fastcgi_read_timeout 300;
upstream proxy-user {
server 127.0.0.1:2000;
}
upstream thin_cluster {
server unix:/tmp/thin.0.sock;
# server unix:/tmp/thin.1.sock;
# server unix:/tmp/thin.2.sock;
}
server {
listen 80;
server_name localhost;
# access_log /var/log/nginx-access.log;
root /home/user/Apps/myapp/public;
location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|mp3|flv|mpeg|avi)$ {
try_files $uri @app;
}
location /home/user/Apps/myapp/ {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://proxy-user;
if (!-f $request_filename) {
proxy_pass http://proxy-user;
break;
}
}
}
server {
listen 443;
server_name _;
ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
ssl_protocols SSLv3 TLSv1;
ssl_ciphers HIGH:!ADH:!MD5;
access_log /var/log/nginx-access-ssl.log;
root /home/user/Apps/myapp/public;
location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|mp3|flv|mpeg|avi)$ {
try_files $uri @app;
}
location /home/user/Apps/myapp/ {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-FORWARDED_PROTO https;
proxy_set_header SSL_CLIENT_S_DN $ssl_client_s_dn;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://proxy-user;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
You have a few issues in your config. I'll write what I would have done, and you can ask whatever questions you have. I'll assume the app server is on port 2000, because that's the upstream you used.
I'll also ignore the http block and only use the server and upstream blocks.
upstream rails {
server 127.0.0.1:2000;
}
server {
server_name domain.com; # or whichever
listen 80;
# ssl settings start
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
# ssl settings end
root /home/user/Apps/myapp/public;
error_page 500 502 503 504 /50x.html;
access_log /var/log/nginx/domain-access.log;
error_log /var/log/nginx/domain-error.log;
location @pass_to_rails {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-FORWARDED_PROTO $scheme;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://rails;
}
location / {
try_files $uri $uri/ @pass_to_rails;
}
}
You should place this inside sites-available and symlink it into sites-enabled if you are on a Debian/Ubuntu distro, or use /etc/nginx/conf.d if you are on another distro, to keep things tidy and maintainable.
Also make sure to uncomment one of these lines in your http block, depending on which layout you want to use:
# include /etc/nginx/conf.d/*.conf;
# include /etc/nginx/sites-enabled/*;
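On a Debian/Ubuntu layout, enabling the site would then look roughly like this (a sketch; myapp is just a placeholder file name):
# hypothetical file name; adjust to your app
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp
# check the configuration for syntax errors, then reload
sudo nginx -t
sudo service nginx reload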
I am using proxy_pass in Nginx to redirect /phpmyadmin[/] to a container where PHPMyAdmin is running (the project is available here on GitHub). The problem is that when I use https://localhost/phpmyadmin/, I get the HTML and content of PHPMyAdmin correctly, but when I drop the trailing "/" from the URL, the returned HTML contains the wrong resource URLs (such as https://localhost/favicon.ico instead of https://localhost/phpmyadmin/favicon.ico). My Nginx config file looks like this:
upstream backend {
server app1:9000;
server app2:9000;
server app3:9000;
server app4:9000;
}
upstream docker_phpmyadmin {
server phpmyadmin:443;
}
server {
server_name localhost;
listen 443 ssl http2;
root /var/www;
location ~ /phpmyadmin {
# (START) some queries don't work when modsecurity is on
modsecurity off;
# (END) some queries don't work when modsecurity is on
# (START) large databases might demand long time and large memory
client_max_body_size 100m;
client_body_buffer_size 10M;
proxy_read_timeout 6000;
proxy_send_timeout 6000;
# (END) large databases might demand long time and large memory
rewrite /phpmyadmin/?(.*)$ /$1 break;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass https://docker_phpmyadmin;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Proto https;
proxy_redirect off;
}
modsecurity on;
modsecurity_rules_file /etc/nginx/modsec/main.conf;
location / {
index index.php;
try_files $uri $uri/ /index.php;
}
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
error_page 404 error.html;
location = /error.html {
root /var/www;
}
location ~* \.(ico|css|js|gif|jpe?g|png)$ {
expires 10d;
}
location ~* \.php$ {
try_files $uri =404;
fastcgi_pass backend;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
The same issue arises when I try to use MailCow in my project using:
location ~ /mailcow {
rewrite /mailcow/?(.*)$ /$1 break;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass https://docker_nginx_mailcow;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Proto https;
# proxy_redirect off;
}
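For what it's worth, one common workaround for the missing trailing slash (not necessarily the fix for this exact setup) is to redirect the bare path so the browser resolves relative resource URLs under /phpmyadmin/ again. A minimal sketch:
# exact-match location; it takes precedence over the regex location above,
# so /phpmyadmin (no slash) gets redirected before it ever reaches the proxy
location = /phpmyadmin {
    return 301 /phpmyadmin/;
}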
I have two Docker containers for my two different .NET Core API projects running on my machine (Linux), on ports 3333:80 and 6666:8088 respectively. I have deployed their front-end parts on the Nginx server, each with its own configuration in the sites-available folder.
The problem is that my first container (API) works fine and gets responses from the front-end application as well as from Postman, but the second container does not work; it throws HTTP 502 Bad Gateway with this error message:
recv() failed (104: Connection reset by peer) while reading response header from upstream
What's wrong here? Kindly help me resolve this issue. My config files are below:
nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
client_max_body_size 50M;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
site1.conf:
server {
listen 80 default_server;
server_name _;
root /var/www/app.admin-crm.com;
index index.html;
location /api/ {
proxy_pass http://127.0.0.1:3333/api/;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded_For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded_Host $server_name;
add_header Access-Control_Allow-Credentials true;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
client_max_body_size 50M;
}
location / {
try_files $uri $uri/ /index.html;
}
}
site2.conf:
server {
listen 8088 default_server;
server_name _;
root /var/www/stilaar-web;
index index.html;
location /api/ {
proxy_pass http://127.0.0.1:6666/api/;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded_For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded_Host $server_name;
add_header Access-Control_Allow-Credentials true;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
client_max_body_size 50M;
}
location / {
try_files $uri $uri/ /index.html;
}
}
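Before touching the config, it can help to rule out the upstream itself. A quick sketch, assuming the second container really publishes host port 6666:
# confirm the container is up and the port mapping is what you expect
docker ps --format '{{.Names}} {{.Ports}}'
# bypass nginx and call the API directly on the published port
curl -v http://127.0.0.1:6666/api/
If curl also gets a connection reset, the problem is inside the container (for example, the app is not actually listening on the port you mapped) rather than in nginx.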
I have been trying to get my application working in production. I was able to access the site before setting config.force_ssl = true in my config/environments/production.rb.
I have seen that many others with this problem needed to add proxy_set_header X-Forwarded-Proto https;
I have tried adding this in my /etc/nginx/sites-available/default but haven't seen a difference.
My full default is below:
upstream puma {
server unix:///home/deploy/apps/appname/shared/tmp/sockets/appname-puma.sock;
}
server {
listen 80;
listen [::]:80;
listen 443 ssl;
listen [::]:443 ssl;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name appname.com www.appname.com;
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
}
After making changes, I reloaded nginx using sudo service nginx reload, followed by sudo service nginx stop and sudo service nginx start.
Am I missing something?
EDIT:
I updated my default and removed the config.force_ssl = true:
upstream puma {
server unix:///home/kiui/apps/appname/shared/tmp/sockets/appname-puma.sock;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
keepalive_timeout 70;
server_name appname.com www.appname.com;
ssl on;
ssl_certificate /root/appname.com.chain.cer;
ssl_certificate_key /root/appname.com.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
root /home/deploy/apps/appname/current/public;
access_log /home/deploy/apps/appname/current/log/nginx.access.log;
error_log /home/deploy/apps/appname/current/log/nginx.error.log info;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 10M;
}
I can now access the site with http but not https.
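To narrow down where the https side fails, you could test the TLS endpoint directly from the server (a sketch; replace appname.com with your actual domain):
# check that nginx presents a certificate on 443
openssl s_client -connect localhost:443 -servername appname.com </dev/null
# make a local https request, ignoring certificate validation
curl -vk https://localhost/ -H 'Host: appname.com'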
Could you try the following:
upstream puma {
server unix:///home/deploy/apps/appname/shared/tmp/sockets/appname-puma.sock;
}
server {
listen 80;
server_name appname.com www.appname.com;
return 301 https://$host$request_uri;
}
server {
# SSL configuration
ssl on;
listen 443 ssl;
ssl_certificate path-to-your-crt-file;
ssl_certificate_key path-to-your-key-file;
server_name appname.com www.appname.com;
...
}
My problem was where I was adding the code above: I was adding it to default rather than nginx.conf. Moving it solved the problem.
I followed Deploying a Rails App on Ubuntu 14.04 with Capistrano, Nginx, and Puma to deploy a Rails app to DigitalOcean.
It suggested keeping nginx.conf (/etc/nginx/sites-enabled/medical-app) as
upstream puma {
server unix:///home/myappuser/apps/medical-app/shared/tmp/sockets/medical-app-puma.sock;
}
server {
listen 80 default_server deferred;
# server_name example.com;
root /home/myappuser/apps/medical-app/current/public;
access_log /home/myappuser/apps/medical-app/current/log/nginx.access.log;
error_log /home/myappuser/apps/medical-app/current/log/nginx.error.log info;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 10M;
keepalive_timeout 10;
}
and then I added a domain and installed SSL using Let's Encrypt,
which changed the nginx.conf (/etc/nginx/sites-enabled/medical-app) to the following:
upstream puma {
server unix:///home/myappuser/apps/medical-app/shared/tmp/sockets/medical-app-puma.sock;
}
server {
listen 80 default_server deferred;
# server_name example.com;
root /home/myappuser/apps/medical-app/current/public;
access_log /home/myappuser/apps/medical-app/current/log/nginx.access.log;
error_log /home/myappuser/apps/medical-app/current/log/nginx.error.log info;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 10M;
keepalive_timeout 10;
}
server {
# server_name example.com;
root /home/myappuser/apps/medical-app/current/public;
access_log /home/myappuser/apps/medical-app/current/log/nginx.access.log;
error_log /home/myappuser/apps/medical-app/current/log/nginx.error.log info;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 10M;
keepalive_timeout 10;
server_name www.medtib.com medtib.com; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/www.medtib.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/www.medtib.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = medtib.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
if ($host = www.medtib.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80 ;
server_name www.medtib.com medtib.com;
return 404; # managed by Certbot
}
Now https is working fine, but if I enable force SSL through the Rails config
config.force_ssl = true
then it gives an error that the page is not working because it "redirected too many times",
and if I try to log in with Facebook, which requires https, it gives the following error.
I don't have much knowledge of nginx etc.
You should forward the X-Forwarded-Proto header to your application to inform it which protocol was used (https or http).
Put the following:
proxy_set_header X-Forwarded-Proto $scheme;
Before:
proxy_pass http://puma;
It should do the trick.
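With that change, the @puma location from the config above would look roughly like this (only the X-Forwarded-Proto line is new):
location @puma {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    # tell Rails which protocol the client actually used,
    # so force_ssl stops redirecting requests that are already https
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;
    proxy_pass http://puma;
}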
I have used OAuth authentication on my server. I'm trying a server configuration using an nginx reverse proxy and SSL.
I have redirected the https requests to http using the nginx reverse proxy configuration.
My Nginx server blocks look like:
server {
listen 193.169.200.88:8084;
server_name test2.test.com;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
proxy_pass http://localhost:50489/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 0;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:82
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
and
server {
listen 171.16.202.219:443 ssl;
server_name test2.test.com;
ssl_certificate /nginx/conf/nginxnew.crt;
ssl_certificate_key /nginx/conf/nginxnew.key;
ssl_session_timeout 5m;
ssl_protocols SSLv2 SSLv3 TLSv1;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
ssl_prefer_server_ciphers on;
location / {
proxy_pass http://193.169.200.88:8084;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 0;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
}
If I run the server on my local machine I can log in easily. But if I run the same code behind the reverse proxy, the OAuth request does not work properly.
The OAuth request goes out like http://test2.test.com:50489/oauth/v1/authorize?client_id=f7d1705a-b3fb-45b8-84ee-0afa4179f964&redirect_uri=https:%2F%2Ftest1.test.com%2Flogin&state=LPHxiHo_AmnfB1hqPjSNVQ&scope=Profile&response_type=code.
I want this OAuth URL to look like the one below:
https://test2.test.com/oauth/v1/authorize?client_id=f7d1705a-b3fb-45b8-84ee-0afa4179f964&redirect_uri=https:%2F%2Ftest1.test.com%2Flogin&state=LPHxiHo_AmnfB1hqPjSNVQ&scope=Profile&response_type=code
Any ideas for this?
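One commonly suggested direction (a sketch only; whether it helps depends on how your OAuth server builds absolute URLs and whether it trusts forwarded headers) is to pass the externally visible host and scheme through both proxy hops so the backend does not fall back to its own host and port:
location / {
    proxy_pass http://localhost:50489/;
    # forward the original host (without the backend port) and the original scheme
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}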