Whenever I try to access the Rails app, I get the default Nginx 404 page and the following error in /var/log/nginx/error.log:
2015/11/16 21:45:30 [error] 16240#0: *78 open() "/usr/local/apps/careers_api/current/public/application/test_aggregate.json" failed (2: No such file or directory), client: 70.184.87.69, server: careers-api.dynamynd.com, request: "GET /application/test_aggregate.json HTTP/1.1", host: "careers-api.dynamynd.com"
nginx.conf:
upstream api_server {
server unix:/run/unicorn/unicorn-api.sock fail_timeout=0;
}
server {
listen 443 ssl;
server_name abc.xyz.com;
root /usr/local/apps/abc_xyz/current/public;
ssl on;
ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
ssl_certificate_key /etc/ssl/private/STAR_xyz_com.key;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4;
location @app {
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
proxy_pass http://api_server;
}
}
I have pretty much the same configuration on other apps running on the same machine, and they work just fine.
location @app is a named location. It can only be invoked indirectly from another location block. You have an implicit location / block, but that contains no command to invoke the upstream service.
So, unless that file is a local static file, located at /usr/local/apps/abc_xyz/current/public/application/test_aggregate.json, you will get a 404 error.
Perhaps you are missing something like:
location / {
try_files $uri $uri/ @app;
}
or
try_files $uri @app;
in the server block.
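Putting it together, the server block from the question would then look something like this (just a sketch; the ssl_* lines stay exactly as they are):
server {
    listen 443 ssl;
    server_name abc.xyz.com;
    root /usr/local/apps/abc_xyz/current/public;
    # ssl_* directives unchanged

    location / {
        # serve the file from root if it exists, otherwise fall through to the app
        try_files $uri $uri/ @app;
    }

    location @app {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        proxy_pass http://api_server;
    }
}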
Related
I have been trying to get my application working in production. I was able to access the site before setting config.force_ssl = true in my config/environments/production.rb.
I have seen that many others with this problem needed to add proxy_set_header X-Forwarded-Proto https;
I have tried adding this in my /etc/nginx/sites-available/default but haven't seen a difference.
My full default is below:
upstream puma {
server unix:///home/deploy/apps/appname/shared/tmp/sockets/appname-puma.sock;
}
server {
listen 80;
listen [::]:80;
listen 443 ssl;
listen [::]:443 ssl;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name appname.com www.appname.com;
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
}
After making changes I reloaded nginx using sudo service nginx reload, followed by sudo service nginx stop and sudo service nginx start.
Am I missing something?
EDIT:
I updated my default and removed config.force_ssl = true:
upstream puma {
server unix:///home/kiui/apps/appname/shared/tmp/sockets/appname-puma.sock;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
keepalive_timeout 70;
server_name appname.com www.appname.com;
ssl on;
ssl_certificate /root/appname.com.chain.cer;
ssl_certificate_key /root/appname.com.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
root /home/deploy/apps/appname/current/public;
access_log /home/deploy/apps/appname/current/log/nginx.access.log;
error_log /home/deploy/apps/appname/current/log/nginx.error.log info;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 10M;
}
I can now access the site with http but not https.
Could you try the following:
upstream puma {
server unix:///home/deploy/apps/appname/shared/tmp/sockets/appname-puma.sock;
}
server {
listen 80;
server_name appname.com www.appname.com;
return 301 https://$host$request_uri;
}
server {
# SSL configuration
ssl on;
listen 443 ssl;
ssl_certificate path-to-your-crt-file;
ssl_certificate_key path-to-your-key-file;
server_name appname.com www.appname.com;
...
}
My problem was where I was adding the code above: I was adding it to default rather than nginx.conf. Moving it into nginx.conf solved the problem.
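For anyone hitting the same thing: nginx only reads server blocks that sit inside the http block of /etc/nginx/nginx.conf (or in a file that block includes), so the placement ends up looking roughly like this (a sketch, not the literal stock file):
http {
    # ... stock settings and include lines ...
    upstream puma {
        server unix:///home/deploy/apps/appname/shared/tmp/sockets/appname-puma.sock;
    }
    server {
        listen 80 default_server;
        return 301 https://$host$request_uri;
    }
    server {
        listen 443 ssl;
        # ... the rest of the https server block from the edit above ...
    }
}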
I use Active Storage successfully in development (Disk storage), but when the application is deployed (Amazon S3 storage) none of my attachments can be found.
Uploading works without any problems - files appear in the S3 bucket and Active Storage database records are created. But any time I use .variant() or url_for(), all those files are missing.
The Rails logs don't tell me anything, as if the request never happened at all. That makes me think that my web server configuration is wrong.
This is my current nginx configuration:
upstream my_app {
server unix:/srv/example.com/current/tmp/sockets/unicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name example.com;
return 301 https://example.com$request_uri;
}
server {
listen 443 ssl;
server_name example.com;
root /srv/example.com/current/public;
include h5bp/directive-only/ssl.conf;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include h5bp/basic.conf;
include h5bp/auth.conf;
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://my_app;
}
access_log /var/log/nginx/example.com/access.log;
error_log /var/log/nginx/example.com/error.log;
charset utf-8;
error_page 404 /404.html;
error_page 502 503 @maintenance;
if (-f /srv/example.com/shared/maintenance.txt) {
return 503;
}
location @maintenance {
root /srv/maintenance;
rewrite ^(.*)$ /index.html break;
}
}
basic.conf is from here: https://github.com/h5bp/server-configs-nginx/tree/master/h5bp
auth.conf is just HTTP basic authentication.
Nginx log shows lines like this:
2018/03/12 11:48:32 [error] 9402#9402: *16285 open() "/srv/example.com/current/public/rails/active_storage/blobs/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaHBDZz09IiwiZXhwIjpudWxsLCJwdXIiOiJibG9iX2lkIn19--fe2fe760c83020f37a9fe8f78bcb4fc958744008/test-powerpoint-document.pptx" failed (2: No such file or directory), client: xxx.xx.xxx.xx, server: example.com, request: "GET /rails/active_storage/blobs/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaHBDZz09IiwiZXhwIjpudWxsLCJwdXIiOiJibG9iX2lkIn19--fe2fe760c83020f37a9fe8f78bcb4fc958744008/test-powerpoint-document.pptx?disposition=upload HTTP/1.1", host: "example.com"
What am I missing?
In case anyone stumbles into the same issue, I fixed my case by commenting out the following:
# h5bp/location/expires.conf
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|mp4|ogg|ogv|webm|htc)$ {
access_log off;
add_header Cache-Control "max-age=2592000";
}
Now all images display and attachments download without any issue.
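If you would rather keep the long-lived caching for genuine static files, an alternative (a sketch, assuming the @unicorn named location from the config above) is a prefix location for Active Storage routes; the ^~ modifier takes precedence over the h5bp regex locations, so those URLs fall through to the app instead of being looked up on disk:
location ^~ /rails/active_storage/ {
    try_files $uri @unicorn;
}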
I'm trying to follow this article:
https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-puma-and-nginx-on-ubuntu-14-04
with a fresh Amazon Linux EC2 instance. I'm using the out-of-the-box /etc/nginx/nginx.conf file, and added my config file to /etc/nginx/sites-available/default.
Puma seems to be running fine:
/home/ec2-user/flviewer/shared/log/puma_error.log: [8006] * Listening on unix:///home/ec2user/flviewer/shared/sockets/tmp/puma.sock
But this shows up in /var/log/nginx/error.log:
2016/12/12 05:33:00 [error] 11018#0: *1 open() "/usr/share/nginx/html/flviewer" failed (2: No such file or directory), client: 173.73.119.219, server: localhost, request: "GET /flviewer HTTP/1.1", host: "54.86.222.53"
Why the heck is it looking in '/usr/share/nginx/html/flviewer' when it should be looking at the socket I opened?
Here is my config as dumped by 'nginx -T':
# configuration file /etc/nginx/sites-available/default:
upstream app {
# Path to Puma SOCK file, as defined previously
server unix:/home/ec2-user/flviewer/shared/tmp/sockets/puma.sock fail_timeout=0;
}
server {
listen 80;
server_name localhost;
root /home/ec2-user/flviewer/current/public;
try_files $uri/index.html $uri @app;
location @app {
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
#proxy_http_version 1.1;
proxy_set_header Connection '';
proxy_pass http://app;
#autoindex on;
}
location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
gzip_static on;
expires max;
add_header Cache-Control public;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
Nothing worked. I stripped /etc/nginx/nginx.conf down to just this, and am up and running. I had to throw away all of the boilerplate that was in nginx.conf. This works:
config file:
# Run nginx as a normal console program, not as a daemon
daemon off;
user ec2-user;
# Log errors to stdout
error_log /dev/stdout info;
events {} # Boilerplate
http {
# Print the access log to stdout
access_log /dev/stdout;
# Tell nginx that there's an external server called @app living at our socket
upstream app {
server unix:/home/ec2-user/flv/shared/tmp/sockets/puma.sock fail_timeout=0;
}
server {
# Accept connections on port 80
listen 80;
server_name localhost;
# Application root
root /home/ec2-user/flv/shared/public;
# If a path doesn't exist on disk, forward the request to @app
try_files $uri/index.html $uri @app;
# Set some configuration options on requests forwarded to @app
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app;
}
location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
gzip_static on;
expires max;
add_header Cache-Control public;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
}
I think it has to do with using the default nginx config file. Try moving /etc/nginx/sites-available/default to /etc/nginx/sites-enabled/flviewer.
$ mv /etc/nginx/sites-available/default /etc/nginx/sites-enabled/flviewer
Then reload and restart nginx.
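Note that whether anything in sites-enabled is read at all depends on the include lines in /etc/nginx/nginx.conf; on a stock Amazon Linux install the packaged http block typically only pulls in conf.d, so you may need to add an include yourself, along these lines (a sketch, not the literal stock file):
http {
    # ... stock settings ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;   # add this line if it is missing
}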
I need to deploy my Rails application, so I have followed all the steps from here: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-puma-and-nginx-on-ubuntu-14-04
But at the end of the tutorial, I get this error --> "502 Bad Gateway"
EDIT
The error message is now --> "We're sorry, but something went wrong."
But the Nginx error output is the same. I checked the Puma error messages, but they only log when it starts and when it stops gracefully.
The Rails logs under app_directory/log do not produce any output.
puma-manager --> I checked it, it works correctly
paths --> I have checked them three times
Nginx error.log output message:
2016/05/18 14:22:21 [crit] 1099#0: *7 connect() to unix:/home/deploy/hotel-automata/shared/sockets/puma.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.2.105, server: localhost, request: "GET /favicon.ico HTTP/1.1", upstream: "http://unix:/home/deploy/hotel-automata/shared/sockets/puma.sock:/500.html", host: "192.168.2.170"
OS -> VMware Player, bridged network, Ubuntu Server 14.04
Ruby Version: 2.3.1
Rails Version: 4.2.5.2
This is my nginx config, the contents of /etc/nginx/sites-available/default:
upstream app {
# Path to Puma SOCK file, as defined previously
server unix:/home/deploy/hotel-automata/shared/sockets/puma.sock fail_timeout=0;
}
server {
listen 80;
server_name localhost;
root /home/deploy/hotel-automata/public;
try_files $uri/index.html $uri @app;
location @app {
proxy_pass http://app;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
EDIT:
Make sure that the socket exists. Otherwise it is failing at this point:
In config/puma.rb you need to have a line pointing to your socket:
bind "unix://<path or variable for the path where the socket will be>/sockets/puma.sock"
example with variable:
application_path = '/home/deploy/hotel-automata/shared'
bind "unix://#{application_path}/sockets/puma.socket"
Check permissions on the socket
You will need to make sure that nginx is able to access your socket (i.e. has the required rights, RW).
To check the permissions on the whole path, try this:
namei -m /home/deploy/hotel-automata/shared/sockets/puma.sock
Alternatively try this:
sudo -u <user> test -r <path> && echo True (use -w instead of -r to check write access)
i.e.
sudo -u nginx test -w /home/deploy/hotel-automata/shared/sockets/puma.sock && echo True
Nginx will require RW access to that socket.
If it doesn't print True, then it means that the user does NOT have that permission (i.e. -w -> write).
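If it does not, one way to grant access (an example only, assuming the app runs as the deploy user and the nginx workers run as www-data; adjust the users and paths to your setup) is:
sudo usermod -aG deploy www-data    # let the nginx user read files owned by the deploy group
sudo chmod g+x /home/deploy /home/deploy/hotel-automata /home/deploy/hotel-automata/shared /home/deploy/hotel-automata/shared/sockets    # every directory on the path needs execute
sudo chmod g+rw /home/deploy/hotel-automata/shared/sockets/puma.sock    # the socket itself needs read/write
sudo service nginx restart    # group membership is picked up on restart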
Your puma.rb file should look like this.
# /config/puma.rb
app = "manabalss" # App-specific
root = "/home/deployer/apps/#{app}"
workers 5
threads 1, 1 # relying on many workers for thread-unsafe apps
rackup DefaultRackup
port 11592
environment ENV['RACK_ENV'] || 'production'
daemonize true
pidfile "#{root}/puma/puma.pid"
stdout_redirect "#{root}/puma/puma.log", "#{root}/puma/puma_error.log"
bind "unix:/tmp/puma.socket
And your nginx.conf should be like this.
# config/deploy/nginx.conf
upstream puma {
server unix:/tmp/puma.socket fail_timeout=1;
}
# This block redirects http requests to https version
server {
listen 37.139.0.211:80 default deferred;
server_name www.manabalss.lv manabalss.lv;
return 307 https://manabalss.lv$request_uri;
}
server {
listen 37.139.0.211:443 ssl;
server_name manabalss.lv;
ssl_certificate /etc/ssl/server.crt;
ssl_certificate_key /etc/ssl/server.key;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_session_cache shared:SSL:15m;
ssl_session_timeout 15m;
root /home/deployer/apps/manabalss/current/public;
location ^~ /assets/ {
gzip_static on;
gzip_vary on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
If this doesn't help, you might want to have a look at this:
Rails + Puma + Nginx Every Bad Gateway 502
I am trying to configure nginx with a separate subdirectory for my Rails 5 API backend (to separate the frontend from the backend).
Originally I called the backend API with GET "/bills". Now I'd like it to be GET "/api/bills", so all requests under /api should be redirected to the Rails app.
But I can't make it work. The redirection works, but in the logs on the Rails side I see: ActionController::RoutingError (No route matches [GET] "/api/bills"). Of course this route doesn't exist; Rails only knows about the "/bills" route. Can I configure nginx so that the redirection is transparent to Rails, and it sees the request as [GET] "/bills"?
Here is my current config:
upstream app {
# Path to Unicorn SOCK file, as defined previously
server unix:/var/sockets/unicorn.myapp.sock fail_timeout=0;
}
server {
#redirect to https
listen 0.0.0.0:80;
listen [::]:80 ipv6only=on default_server;
server_name localhost; ## Replace this with something like gitlab.example.com
server_tokens off; ## Don't show the nginx version number, a security best practice
return 301 https://$server_name$request_uri;
access_log /var/log/nginx/app_access.log;
error_log /var/log/nginx/app_error.log;
}
server {
listen 0.0.0.0:443 ssl;
listen [::]:443 ipv6only=on ssl default_server;
server_name localhost; ## Replace this with something like gitlab.example.com
ssl on;
ssl_certificate /etc/ssl/nginx/host.crt;
ssl_certificate_key /etc/ssl/nginx/host.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;
location ^~ /api {
#try_files $uri/index.html $uri $uri/;
try_files $uri/index.html $uri @app;
root /app/backend/public;
}
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app;
error_page 500 502 503 504 /500.html;
}
location / {
# Application Frontend root, as defined previously
try_files $uri $uri/ =404;
root /app/frontend/;
}
client_max_body_size 4G;
keepalive_timeout 10;
}
Inside your location @app block, try adding this:
rewrite ^/api(.*)$ $1 break;
That should just strip off the /api prefix before sending the remainder of the URI upstream.
See this document for details.
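In context, the named location from the question would then look something like this (a sketch; everything except the rewrite line is unchanged):
location @app {
    # strip the /api prefix before handing the request to the upstream
    rewrite ^/api(.*)$ $1 break;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app;
    error_page 500 502 503 504 /500.html;
}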