I'm hosting a Rails application on DigitalOcean. It's working perfectly. I would like to host a Sinatra application on the same VPS. I have set up the nameservers and DNS.
My /opt/nginx/conf/nginx.conf is:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    passenger_root /home/deploy/.rvm/gems/ruby-2.0.0-p0/gems/passenger-4.0.0.rc6;
    passenger_ruby /home/deploy/.rvm/wrappers/ruby-2.0.0-p0/ruby;

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name domain1.com;
        charset utf-8;
        root /home/deploy/apps/domain1/current/public;
        passenger_enabled on;
        rails_spawn_method smart;
        rails_env production;
    }

    server {
        listen 80;
        server_name domain2.com www.domain2.com;
        charset utf-8;
        root /home/deploy/apps/domain2-path/public;
        passenger_enabled on;
        rails_spawn_method smart;
    }
}
Now when I go to domain2.com it loads the application for domain1.com. What am I doing wrong?
PS: domain1.com is a Rails application and domain2.com is a Sinatra application.
You cannot do it just by defining another DNS entry. You should run the other app on a different URL, then do something like this:
upstream rails {
    server 127.0.0.1:8000;
}
upstream sinatra {
    server 127.0.0.1:7000;
}
server {
    location /rails {
        proxy_pass http://rails;
    }
    location /sinatra {
        proxy_pass http://sinatra;
    }
}
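For completeness, a snippet like the one above has to live inside the http context, and the server block needs listen/server_name directives before nginx will route anything to it. A fuller sketch of this proxying approach (the ports and backend commands here are illustrative assumptions, not from the question):

```nginx
http {
    # Backends started separately, e.g. `rails server -p 8000`
    # and `rackup -p 7000` for the Sinatra app
    upstream rails {
        server 127.0.0.1:8000;
    }
    upstream sinatra {
        server 127.0.0.1:7000;
    }

    server {
        listen 80;
        server_name domain1.com;

        location / {
            proxy_pass http://rails;
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name domain2.com www.domain2.com;

        location / {
            proxy_pass http://sinatra;
            proxy_set_header Host $host;
        }
    }
}
```

That said, with Passenger the original two-server-block approach should also work. When a request for domain2.com lands in the domain1.com block, it is usually because the Host header does not match any server_name, so nginx falls back to the first (default) server block; checking with `curl -H "Host: domain2.com" http://<server-ip>/` is one way to tell DNS problems apart from nginx matching problems.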
We have a website where each user will have his own subdomain; let's call the domain example.com.
When user1 gets created, he should be able to access his page through user1.example.com.
Right now, when the user accesses user1.example.com, he gets a
"Your connection is not private" error message.
We are using Rails 7 and we are hosted on AWS Lightsail.
The SSL certificate is created using AWS Certificate Manager and attached to the load balancer.
Our simple nginx config (the opening server line was missing from the original paste):

server {
    listen 80;
    listen [::]:80;
    server_name _;
    root /home/ubuntu/link/to/application/public;
    passenger_enabled on;
    passenger_app_env production;

    location /cable {
        passenger_app_group_name myapp_websocket;
        passenger_force_max_concurrent_requests_per_process 0;
    }

    # Limit upload size (5m here, despite the original "100MB" comment)
    client_max_body_size 5m;

    location ~ ^/(assets|packs) {
        expires max;
        gzip_static on;
    }
}
EDIT 1:
We got a new wildcard certificate from Let's Encrypt certbot and updated nginx with the following:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /home/ubuntu/link/to/app/current/public;
    server_name domain.com www.domain.com;
    passenger_enabled on;
    passenger_app_env production;

    location /cable {
        passenger_app_group_name myapp_websocket;
        passenger_force_max_concurrent_requests_per_process 0;
    }

    # Limit upload size (5m here, despite the original "100MB" comment)
    client_max_body_size 5m;

    location ~ ^/(assets|packs) {
        expires max;
        gzip_static on;
    }

    listen 443 ssl; # managed by Certbot

    # RSA certificate
    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

    # Redirect non-https traffic to https
    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    } # managed by Certbot
}
Now both domain.com and www.domain.com have SSL certificates, but I still can't get user1.domain.com to use that certificate.
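One thing that stands out in the EDIT: even with a wildcard certificate installed, this server block only names domain.com and www.domain.com. For user subdomains to be served under that certificate, the block has to match them too. A sketch, assuming the Let's Encrypt certificate actually covers *.domain.com:

```nginx
server {
    listen 443 ssl;
    # Match the bare domain, www, and every user subdomain
    server_name domain.com www.domain.com *.domain.com;

    ssl_certificate     /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;

    # ... rest of the block unchanged ...
}
```

Note that the browser error comes from whichever certificate is presented for user1.domain.com: a certificate issued only for domain.com and www.domain.com will trigger "Your connection is not private" on any subdomain. And if TLS still terminates at the Lightsail load balancer rather than at nginx, the certificate attached there must cover *.domain.com as well.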
I'm using Puma and nginx to run a Rails 3.2 app on AWS. I terminate HTTPS on the load balancer (ELB). How can I redirect www.mydomain.com to mydomain.com? My config below does not work.
user nginx;
worker_processes auto;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    use epoll;
    worker_connections 1024;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    keepalive_requests 200;
    reset_timedout_connection on;
    types_hash_max_size 2048;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    index index.html index.htm;

    upstream myapp {
        # Path to Puma SOCK file
        server unix:/var/www/nginx-default/myapp/tmp/sockets/puma.sock fail_timeout=0;
    }

    server {
        listen 80;
        server_name www.my-app.com;
        return 301 $scheme://my-app.com$request_uri;
    }

    server {
        listen 80;
        server_name my-app.com;
        root /var/www/nginx-default/myapp/public;
        #charset koi8-r;

        # set client body size (upload size)
        client_max_body_size 100M;

        location / {
            proxy_pass http://myapp; # match the name of the upstream directive defined above
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # We don't need .ht files with nginx.
        location ~ /\.ht {
            deny all;
        }

        open_file_cache max=1000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;
    }
}
If I remove

server {
    listen 80;
    server_name www.my-app.com;
    return 301 $scheme://my-app.com$request_uri;
}

and use

server {
    listen 80;
    server_name localhost;
    root /var/www/nginx-default/myapp/public;
    #...
}

it works, but does not handle the www subdomain.
This is a speculative answer, as I do not use AWS and ELB. However, since nginx only listens on port 80 behind the load balancer, $scheme is always http there, so using $scheme in your redirect will result in HTTPS requests being redirected to HTTP.
If you do not accept HTTP at the ELB, you should redirect to HTTPS only, with:
server {
    listen 80;
    server_name www.my-app.com;
    return 301 https://my-app.com$request_uri;
}
However, if you accept either HTTP or HTTPS at the ELB, you should be able to use the X-Forwarded-Proto header inserted by the ELB. Try this:
server {
    listen 80;
    server_name www.my-app.com;
    return 301 $http_x_forwarded_proto://my-app.com$request_uri;
}
Rails on Phusion Passenger with nginx does not allow uploading files larger than 2GB.
During upload I get a 500 error and a RackMultipart file in the /tmp folder of exactly 2GB.
nginx.conf:
worker_processes 2;
timer_resolution 100ms;
worker_priority -5;
error_log /opt/vhod/webapp/shared/log/nginx_error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    passenger_root /home/vhod-admin/.rbenv/versions/1.8.7-p370/lib/ruby/gems/1.8/gems/passenger-3.0.19;
    passenger_ruby /home/vhod-admin/.rbenv/versions/1.8.7-p370/bin/ruby;
    passenger_max_pool_size 3;
    passenger_pool_idle_time 1200;
    passenger_spawn_method smart;
    passenger_friendly_error_pages on;
    passenger_log_level 1;
    passenger_debug_log_file /opt/vhod/webapp/shared/log/passenger_debug.log;

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    client_max_body_size 0;
    proxy_max_temp_file_size 0;
    proxy_read_timeout 360s;
    keepalive_timeout 70;

    server {
        listen 443;
        server_name vhod;
        charset utf-8;
        root /opt/vhod/webapp/current/redmine/public;
        passenger_enabled on;
        passenger_use_global_queue on;
        passenger_min_instances 1;
        rails_env production;

        ssl on;
        ssl_certificate cert.pem;
        ssl_certificate_key cert.key;
        ssl_protocols SSLv3 TLSv1;

        if (-f /opt/vhod/webapp/shared/system/maintenance.html) {
            rewrite ^(.*)$ /opt/vhod/webapp/shared/system/maintenance.html last;
            break;
        }
    }
}
Everything works when I run the application without nginx, under a mongrel/thin/webrick server. Passenger is the latest version, 3.0.19, and nginx is 1.2.6.
What's the matter?
This is a possible bug in Phusion Passenger, which has been solved in version 4.0.0 RC 4.
Set the client_max_body_size to > 2000m.
http://wiki.nginx.org/HttpCoreModule#client_max_body_size
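For reference, client_max_body_size can be set at the http, server, or location level; a server-level sketch (the 2500m value is just an example above the 2GB threshold):

```nginx
server {
    listen 443;
    server_name vhod;

    # Allow request bodies up to ~2.5GB
    client_max_body_size 2500m;

    # ...
}
```

Note that the question's config already sets `client_max_body_size 0;` in the http block, which disables the limit entirely, so the Passenger 4.0.0 RC 4 fix mentioned in the other answer may be the more likely cure for the exactly-2GB cutoff.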
I have:
Ubuntu 12.04 LTS
ruby-1.9.3-p194
Rails 3.2.7
I am trying to access my Rails application through nginx + Passenger.
My /opt/nginx/conf/nginx.conf file is:
user test;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    passenger_root /home/test/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14;
    passenger_ruby /home/test/.rvm/wrappers/ruby-1.9.3-p194/ruby;

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name 10.11.11.178;
        root /efiling/public;
        passenger_enabled on;

        location / {
            root html;
            index index.html index.htm;
        }
        ....
When I visit 10.11.11.178 I get "Welcome to nginx!",
but I am expecting the Rails app default page.
What is wrong?
Thanks in advance.
@Jashwant: First, I removed the line mentioned by @Brandon. Second, I removed the index.html file from my_app/public folder.
Try removing the following from the config:
location / {
    root html;
    index index.html index.htm;
}
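With that location block removed, the server block would look something like this; the root must point at the app's public directory so Passenger can detect the application:

```nginx
server {
    listen 80;
    server_name 10.11.11.178;

    # Passenger detects the Rails app from the parent of this directory
    root /efiling/public;
    passenger_enabled on;
}
```

It also helps to delete public/index.html from the Rails app (as the comment above notes), because nginx serves a matching static file before the request ever reaches Passenger.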
I am totally new to nginx deployment and am having problems setting up subdomains for Rails apps running under Passenger. My app structure is like this:
-- sss.com (parent domain)
-- sub.sss.com (subdomain)
-- zzz.com (which will be redirected to sub.sss.com)
For a clearer picture, think of the Gmail structure:
-- google.com (parent domain)
-- mail.google.com (subdomain)
-- gmail.com (which will be redirected to mail.google.com)
And remember, sub.sss.com is not just a directory under sss.com; it is a completely different Rails app.
To set up a similar structure I have configured nginx like this:
server {
    listen 80;
    server_name sss.com *.sss.com;
    rewrite ^(.*) http://sss.com$1 permanent;
}

server {
    listen 80;
    server_name sss.com;
    passenger_enabled on;
    access_log logs/sss.log;
    error_log logs/sss_error.log;
    root /var/www/sss/public;
}

server {
    listen 80;
    server_name sub.sss.com;
    passenger_enabled on;
    access_log logs/sub.log;
    error_log logs/sub_error.log;
    root /var/www/sub/public;
}

server {
    listen 80;
    server_name zzz.com;
    rewrite ^(.*) http://sub.sss.com$1 permanent;
}
When I start nginx I get this warning message:
nginx: [warn] conflicting server name "sss.com" on 0.0.0.0:80, ignored
And I get this message when I try to access the URL www.sss.com:
Chrome - Error 310 (net::ERR_TOO_MANY_REDIRECTS): There were too many redirects.
FF - Firefox has detected that the server is redirecting the request for this address in a way that will never complete.
But when I access zzz.com, it successfully redirects to sub.sss.com, which then shows the same error.
It seems to be stuck in some kind of redirect loop. Does anybody have an idea how to solve this?
In your first server block you define sss.com, which you also define in the second one. You just need to delete it from the first, like this:
server {
    listen 80;
    server_name *.sss.com;
    rewrite ^(.*) http://sss.com$1 permanent;
}

server {
    listen 80;
    server_name sss.com;
    passenger_enabled on;
    access_log logs/sss.log;
    error_log logs/sss_error.log;
    root /var/www/sss/public;
}

server {
    listen 80;
    server_name sub.sss.com;
    passenger_enabled on;
    access_log logs/sub.log;
    error_log logs/sub_error.log;
    root /var/www/sub/public;
}

server {
    listen 80;
    server_name zzz.com;
    rewrite ^(.*) http://sub.sss.com$1 permanent;
}
You have 3 domains/subdomains, so there should be only 3 server blocks instead of the four you had.
Try:
server {
    # This server block serves sss.com
    listen 80;
    server_name sss.com;
    passenger_enabled on;
    access_log logs/sss.log;
    error_log logs/sss_error.log;
    root /var/www/sss/public;
}

server {
    # This server block serves sub.sss.com
    listen 80;
    server_name sub.sss.com;
    passenger_enabled on;
    access_log logs/sub.log;
    error_log logs/sub_error.log;
    root /var/www/sub/public;
}

server {
    # This server block redirects zzz.com to sub.sss.com
    listen 80;
    server_name zzz.com;
    rewrite ^ http://sub.sss.com$request_uri? permanent;
}