I have a Rails 4.2 app running on an Ubuntu server with Nginx and Passenger. It gets a lot of traffic, which Passenger doesn't handle very well (processes hang very often).
I decided to replace Passenger with Puma, as I did with other apps on another server, where things improved drastically. With this app, however, as soon as I deployed the new version running on Puma, problems started: I was getting a lot of 502 Bad Gateway errors, and in the logs I saw many occurrences of either of these errors:
puma.sock failed (11: Resource temporarily unavailable)
[error] 6658#6658: *5788 upstream timed out (110: Connection timed out) while reading response header from upstream
After googling around, I ended up trying several things, including the following sysctl tweaks:
/etc/sysctl.conf
# Increase number of incoming connections
net.core.somaxconn = 65535
# Increase number of incoming connections backlog
net.core.netdev_max_backlog = 65536
Then I reloaded with sudo sysctl -p.
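To confirm the new values are active after the reload (a generic check, not from the original post):
sysctl net.core.somaxconn net.core.netdev_max_backlog
# should echo back the 65535 / 65536 values set above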
I've also tweaked the following Nginx configs:
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 400000;

events {
    worker_connections 10000;
    use epoll;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    keepalive_requests 100000;
    server_tokens off;
    server_names_hash_bucket_size 256;
    [...]
}
Here's my Puma config:
workers 3
preload_app!
threads 1, 6

app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"

# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env

# Set up socket location
bind "unix://#{shared_dir}/sockets/puma.sock"

# Logging
if rails_env == "production"
  stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true
end

# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"

on_worker_boot do
  # reconnect to Mongo
  Mongoid::Clients.clients.each do |name, client|
    client.close
    client.reconnect
  end
  # reconnect to Redis
  $redis.redis.client.reconnect
end

before_fork do
  Mongoid.disconnect_clients
end
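One way to see whether Puma's own backlog is filling up is its control app. This is not in the original config; a minimal sketch, assuming the same shared_dir layout (no_token: true keeps the example simple and shouldn't be used as-is in production):
# hypothetical addition to config/puma.rb
activate_control_app "unix://#{shared_dir}/sockets/pumactl.sock", no_token: true
# then, from a shell on the server (path assumed):
#   bundle exec pumactl -C unix:///var/www/myapp/shared/sockets/pumactl.sock stats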
I've also tried specifying the backlog value when binding to the socket, like so:
bind "unix://#{shared_dir}/sockets/puma.sock?backlog=1024"
Here's the nginx config for the app:
upstream pumamyapp {
    server unix:///var/www/myapp/shared/sockets/puma.sock;
}

server {
    listen 80;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/myapp/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/myapp/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    server_name www.myapp.com;
    rewrite ^(.*) https://myapp.com$1 permanent;
}
server {
    listen 80;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/myapp/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/myapp/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    root /var/www/myapp/public;
    server_name myapp.com;

    if ($ssl_protocol = "") {
        rewrite ^ https://$server_name$request_uri? permanent;
    }

    client_max_body_size 100M;

    location ~* ^/assets/ {
        # Per RFC2616 - 1 year maximum expiry
        # http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
        expires 1y;
        add_header Cache-Control public;
        # Some browsers still send conditional-GET requests if there's a
        # Last-Modified header or an ETag header even if they haven't
        # reached the expiry date sent in the Expires header.
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }

    location /cgi-bin {
        return 404;
    }

    location /setup.cgi {
        return 404;
    }

    location / {
        try_files $uri @app;
    }

    location @app {
        proxy_pass http://pumamyapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
        proxy_redirect off;
    }
}
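For reference, and not part of the original config: the 110: Connection timed out error above is governed by nginx's proxy timeouts, which all default to 60 seconds; they can be raised inside location @app if slow requests need more headroom (the values below are the defaults, shown as examples):
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;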
I had to roll back to the previous version running on Passenger because the site was unusable. Any idea what is wrong and how I can make it right?
duelingpetWS2 droplet
address: http://68.183.163.139/
Currently installed:
NodeJS
NPM
Rbenv
ruby 2.5.1p57
Rails 5.2.2
MySQL
Ubuntu 18.04
nginx
/var/www/duelingpets.net/html/index.html
<html>
  <head>
    <title>Welcome to Duelingpets.net!</title>
  </head>
  <body>
    <h1>Success! The duelingpets.net server block is working!</h1>
  </body>
</html>
New Version
/etc/nginx/sites-available/duelingpets.net
upstream duelingpets {
    server unix:///path/to/web/tmp/puma.sock;
}

server {
    listen 80;
    listen [::]:80;

    root /var/www/duelingpets.net/html;
    #index index.html index.htm index.nginx-debian.html;

    server_name duelingpets.net www.duelingpets.net;
    try_files $uri @app;

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://duelingpets;
    }
}
Old Version
/etc/nginx/sites-available/duelingpets.net
server {
    listen 80;
    listen [::]:80;

    root /var/www/duelingpets.net/html;
    index index.html index.htm index.nginx-debian.html;

    server_name duelingpets.net www.duelingpets.net;

    location / {
        try_files $uri $uri/ =404;
    }
}
sudo ln -s /etc/nginx/sites-available/duelingpets.net /etc/nginx/sites-enabled/
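After symlinking, a quick way to confirm what nginx actually loaded (a generic check, not from the original post):
sudo nginx -t                      # syntax-check everything that gets included
ls -l /etc/nginx/sites-enabled/    # look for the stock "default" site still enabled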
sudo vim /etc/nginx/nginx.conf
http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
Current behavior of the site:
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
I guess you are using Puma as Rails' app server; check this ticket: https://github.com/puma/puma/issues/125
It sets up this Apache config:
<VirtualHost *:80>
    NameVirtualHost 99.99.99.99
    ServerName yourapp.com
    ServerSignature Off

    ProxyRequests Off
    <Proxy *>
        Order Allow,Deny
        Allow from all
    </Proxy>

    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
    ProxyVia On
</VirtualHost>
Note the ProxyPass to localhost:3000; this is the important part (you don't need a document root).
And make sure you start Puma with puma -b tcp://127.0.0.1:3000 so it works via TCP instead of sockets.
Anyway, I prefer using nginx over Apache: you can set nginx to use sockets, which is how Puma starts by default, and there are more tutorials for nginx + Puma (there's the config for nginx at that link too).
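For completeness, a minimal sketch of that same TCP proxy in nginx, assuming Puma was started with -b tcp://127.0.0.1:3000 (the server_name is a placeholder):
server {
    listen 80;
    server_name yourapp.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}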
Can't get past Sidekiq errors.
Trying to migrate from Heroku to AWS EB. I have a Rails app running Rails 4.2.0 and Ruby 2.3 on a Linux machine, but I keep running into issues. The web app won't load; it simply times out over and over.
INFO: Running in ruby 2.3.1p112 (2016-04-26 revision 54768) [x86_64-linux]
INFO: See LICENSE and the LGPL-3.0 for licensing details.
INFO: Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org
INFO: Booting Sidekiq 3.5.4 with redis options {:url=>nil}
ERROR: heartbeat: MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
Redis keeps telling me its URL is nil despite what seems to be a solid setup. (It works on another app I managed to get running with the same configuration.) I also found the MISCONF error troublesome, but the Redis URL isn't even being set. Further, both are in the same security group.
This is my config/sidekiq.rb:
rails_root = Rails.root || File.dirname(__FILE__) + '/../..'
rails_env = Rails.env || 'development'
redis_config = YAML.load_file(rails_root.to_s + '/config/redis.yml')
redis_config.merge! redis_config.fetch(Rails.env, {})
redis_config.symbolize_keys!

Sidekiq.configure_server do |config|
  config.redis = { url: "redis://#{ENV['REDIS_HOST']}:#{redis_config[:port]}/12" }
end

Sidekiq.configure_client do |config|
  config.redis = { url: "redis://#{ENV['REDIS_HOST']}:#{redis_config[:port]}/12" }
end
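Given the {:url=>nil} line in the Sidekiq boot log, one hypothetical guard for the initializer above is to fail fast when the variable is missing instead of letting Sidekiq boot with a nil host:
# hypothetical addition at the top of the initializer; not in the original
if Rails.env.production? && ENV['REDIS_HOST'].to_s.empty?
  raise "REDIS_HOST is not set; Sidekiq would boot with a nil Redis URL"
end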
And my config/redis.yml:
development:
  host: localhost
  port: 6379

test:
  host: localhost
  port: 6379

production:
  host: ENV['REDIS_HOST']
  port: 6379
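Worth noting, since the URL comes out nil in production: YAML.load_file does not evaluate Ruby, so the production host: above is read as the literal string ENV['REDIS_HOST'], not the variable's value. A minimal sketch of an ERB-based variant, assuming the initializer is changed to run the file through ERB first (e.g. YAML.load(ERB.new(File.read(path)).result)):
production:
  host: <%= ENV['REDIS_HOST'] %>
  port: 6379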
My application.yml:
REDIS_HOST: project-name-001.random-token.0001.use1.cache.amazonaws.com
Here's the setup_swap.config, sidekiq.config, and nginx.config.
I've also seen this issue, but I assume it's unrelated. Perhaps I'm mistaken? If it's irrelevant, I'll address it in another post.
Starting nginx: nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
What could it be?
Is there anything important I'm missing?
Edit: added nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    index index.html index.htm;

    server {
        listen 80;
        listen [::]:80;
        server_name localhost;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        # redirect server error pages to the static page /40x.html
        #
        error_page 404 /404.html;
        location = /40x.html {
        }

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root html;
        #    fastcgi_pass 127.0.0.1:9000;
        #    fastcgi_index index.php;
        #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        #    include fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny all;
        #}
    }

    # Settings for a TLS enabled server.
    #
    # server {
    #     listen 443 ssl http2;
    #     listen [::]:443 ssl http2;
    #     server_name _;
    #     root /usr/share/nginx/html;
    #
    #     ssl_certificate "/etc/pki/nginx/server.crt";
    #     ssl_certificate_key "/etc/pki/nginx/private/server.key";
    #     ssl_session_cache shared:SSL:1m;
    #     ssl_session_timeout 10m;
    #     ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    #     ssl_ciphers <redacted>;
    #     ssl_prefer_server_ciphers on;
    #
    #     # Load configuration files for the default server block.
    #     include /etc/nginx/default.d/*.conf;
    #
    #     location / {
    #     }
    #
    #     error_page 404 /404.html;
    #     location = /40x.html {
    #     }
    #
    #     error_page 500 502 503 504 /50x.html;
    #     location = /50x.html {
    #     }
    # }
}
Updated response: I updated nginx.conf to read: include /etc/nginx/conf.d/webapp_healthd.conf; but still got the following:
[root] service nginx restart
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
Stopping nginx: [ OK ]
Starting nginx: nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[ OK ]
And also, the following persists:
ERROR: heartbeat: MISCONF Redis is configured to save RDB snapshots,
but is currently not able to persist on disk. Commands that may modify
the data set are disabled. Please check Redis logs for details about
the error.
Update 2: I removed the duplicate references to localhost on port 80 and nginx stopped complaining, but I still get the heartbeat MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. error.
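For the MISCONF error specifically, redis-cli can show where the failed background save is writing and where Redis is logging; these are standard Redis commands, and the last one is only a stopgap that re-enables writes rather than a fix for the failing save:
redis-cli CONFIG GET dir        # directory the RDB snapshot is written to
redis-cli CONFIG GET logfile    # where Redis is (or isn't) logging
redis-cli LASTSAVE              # timestamp of the last successful save
redis-cli CONFIG SET stop-writes-on-bgsave-error no   # stopgap only, not a fix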
Output from Sidekiq.redis(&:info):
{
"redis_version"=>"3.2.8",
"redis_git_sha1"=>"00000000",
"redis_git_dirty"=>"0",
"redis_build_id"=>"12e5c8be08dc4d3",
"redis_mode"=>"standalone",
"os"=>"Linux 4.4.51-40.60.amzn1.x86_64 x86_64",
"arch_bits"=>"64",
"multiplexing_api"=>"epoll",
"gcc_version"=>"4.8.3",
"process_id"=>"24835",
"run_id"=>"83a8de8b50f482a4e271228435b2f0c8e3fa5b5c",
"tcp_port"=>"6379",
"uptime_in_seconds"=>"341217",
"uptime_in_days"=>"3",
"hz"=>"10",
"lru_clock"=>"1108155",
"executable"=>"/usr/local/bin/redis-server",
"config_file"=>"/etc/redis/redis.conf",
"connected_clients"=>"2",
"client_longest_output_list"=>"0",
"client_biggest_input_buf"=>"0",
"blocked_clients"=>"0",
"used_memory"=>"842664",
"used_memory_human"=>"822.91K",
"used_memory_rss"=>"3801088",
"used_memory_rss_human"=>"3.62M",
"used_memory_peak"=>"924360",
"used_memory_peak_human"=>"902.70K",
"total_system_memory"=>"1043574784",
"total_system_memory_human"=>"995.23M",
"used_memory_lua"=>"37888",
"used_memory_lua_human"=>"37.00K",
"maxmemory"=>"0",
"maxmemory_human"=>"0B",
"maxmemory_policy"=>"noeviction",
"mem_fragmentation_ratio"=>"4.51",
"mem_allocator"=>"jemalloc-4.0.3",
"loading"=>"0",
"rdb_changes_since_last_save"=>"177",
"rdb_bgsave_in_progress"=>"0",
"rdb_last_save_time"=>"1493941570",
"rdb_last_bgsave_status"=>"err",
"rdb_last_bgsave_time_sec"=>"0",
"rdb_current_bgsave_time_sec"=>"-1",
"aof_enabled"=>"0",
"aof_rewrite_in_progress"=>"0",
"aof_rewrite_scheduled"=>"0",
"aof_last_rewrite_time_sec"=>"-1",
"aof_current_rewrite_time_sec"=>"-1",
"aof_last_bgrewrite_status"=>"ok",
"aof_last_write_status"=>"ok",
"total_connections_received"=>"17",
"total_commands_processed"=>"141824",
"instantaneous_ops_per_sec"=>"0",
"total_net_input_bytes"=>"39981126",
"total_net_output_bytes"=>"72119284",
"instantaneous_input_kbps"=>"0.00",
"instantaneous_output_kbps"=>"0.00",
"rejected_connections"=>"0",
"sync_full"=>"0",
"sync_partial_ok"=>"0",
"sync_partial_err"=>"0",
"expired_keys"=>"3",
"evicted_keys"=>"0",
"keyspace_hits"=>"14",
"keyspace_misses"=>"533",
"pubsub_channels"=>"0",
"pubsub_patterns"=>"0",
"latest_fork_usec"=>"160",
"migrate_cached_sockets"=>"0",
"role"=>"master",
"connected_slaves"=>"0",
"master_repl_offset"=>"0",
"repl_backlog_active"=>"0",
"repl_backlog_size"=>"1048576",
"repl_backlog_first_byte_offset"=>"0",
"repl_backlog_histlen"=>"0",
"used_cpu_sys"=>"167.52",
"used_cpu_user"=>"46.03",
"used_cpu_sys_children"=>"0.00",
"used_cpu_user_children"=>"0.00",
"cluster_enabled"=>"0",
"db0"=>"keys=1,expires=0,avg_ttl=0"
}
Interestingly, I can't find my Redis logs to investigate further. In my redis.conf, all I see is this:
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
I've even run find / -path /sys -prune -o -path /proc -prune -o -name *redis* and don't see ANY log files. (╯°□°)╯︵ ┻━┻
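That would explain the missing logs: with logfile "" plus daemonize yes, Redis discards its output to /dev/null. A sketch of pointing it at a real file (the path is an example) before restarting Redis (the service name varies by distro):
# in redis.conf
logfile /var/log/redis/redis-server.log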
What's also strange is that production.log simply isn't being written to. Check the permissions: -rw-r--r-- 1 webapp webapp 0 May 8 20:01 production.log
Please share your /etc/nginx/nginx.conf. I guess nginx.conf includes other server conf files from the conf.d folder; check for the line include /etc/nginx/conf.d/*.conf; in your nginx.conf. If so, it might load the file twice, or load another default file with the same server name. You can change it to include /etc/nginx/conf.d/webapp_healthd.conf or whatever name you want, but first check which files are on the machine.
Also check the /etc/nginx/sites-enabled/ directory for any temp files such as ~default or .save; check with ls -lah, delete them, restart nginx and check for errors, or do it via ebextensions and deploy again.
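A sketch of that cleanup (the file names are examples of the leftovers described above):
ls -lah /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/~default /etc/nginx/sites-enabled/*.save
sudo service nginx restart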
UPDATE
Try removing the entire server { ... } section from nginx.conf, and make sure your /etc/nginx/conf.d/webapp_healthd.conf file is included inside http {}; that file already has a server listening on port 80 and on localhost.
nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/webapp_healthd.conf;

    index index.html index.htm;
}
003_nginx.config
files:
  "/etc/nginx/conf.d/webapp_healthd.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      upstream my_app {
        server unix:///var/run/puma/my_app.sock;
      }

      log_format healthd '$msec"$uri"'
                         '$status"$request_time"$upstream_response_time"'
                         '$http_x_forwarded_for';

      server {
        listen 80;
        server_name _ localhost; # need to listen to localhost for worker tier

        root /var/app/current/public;

        if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
          set $year $1;
          set $month $2;
          set $day $3;
          set $hour $4;
        }

        access_log /var/log/nginx/access.log main;
        access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;

        try_files $uri/index.html $uri @my_app;

        location @my_app {
          proxy_pass http://my_app; # match the name of upstream directive which is defined above
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location /assets {
          alias /var/app/current/public/assets;
          gzip_static on;
          gzip on;
          expires max;
          add_header Cache-Control public;
        }

        # redirect server error pages to the static page /40x.html
        #
        error_page 404 /404.html;
        location = /40x.html {
        }

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
      }

  "/opt/elasticbeanstalk/hooks/appdeploy/post/03_restart_nginx.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      rm /etc/nginx/conf.d/webapp_healthd.conf.bak
      rm /etc/nginx/conf.d/custom.conf
      service nginx restart
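Before relying on the restart hook, the rendered config can be sanity-checked on the instance; this is a generic step, not part of the .ebextensions above:
nginx -t && service nginx restart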
I have deployed my Rails app to a VPS (DigitalOcean). I have installed NGINX to handle all my static CSS, JS, and HTML files.
I have uploaded my project via Capistrano.
When I open my page at example.com it shows me the "Welcome to NGINX" page. I can only access my pages by entering example.com:8080/admin, and it does not load CSS, JS, or HTML files.
NGINX does not detect the static files generated by Rails.
What did I miss? Why is my Rails app on port 8080?
My nginx.conf file is:
upstream puma {
    server unix:///var/www/newsapp/shared/tmp/sockets/newsapp-puma.sock;
}

server {
    listen 80 default_server deferred;
    # server_name example.com;

    root /var/www/newsapp/current/public;
    access_log /var/www/newsapp/current/log/nginx.access.log;
    error_log /var/www/newsapp/current/log/nginx.error.log info;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    try_files $uri/index.html $uri @puma;

    location @puma {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://puma;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 10M;
    keepalive_timeout 10;
}
I'm using Puma. My deploy.rb file:
set :application, 'newsapp'
set :repo_url, 'https://example@bitbucket.org/example.git'

set :linked_dirs, %w(
  bin log vendor/bundle public/system
  tmp/pids tmp/cache tmp/sockets
)

set :puma_bind, "unix:///var/www/newsapp/shared/tmp/sockets/newsapp-puma.sock"
set :puma_state, "/var/www/newsapp/shared/tmp/pids/puma.state"
set :puma_pid, "/var/www/newsapp/shared/tmp/pids/puma.pid"
set :puma_access_log, "/var/www/newsapp/shared/log/puma.access.log"
set :puma_error_log, "/var/www/newsapp/shared/log/puma.error.log"

namespace :deploy do
  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
      # Here we can do anything such as:
      # within release_path do
      #   execute :rake, 'cache:clear'
      # end
    end
  end
end
My config/deploy/production.rb:
server "my.server.ip.here",
:user => "deployer",
:roles => %w(web app db)
My nginx.conf file, located on the VPS at /etc/nginx:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
There are a lot of lines that were commented out; I have skipped them.
My /var/log/nginx/error.log has lines like this:
2015/07/12 13:25:08 [emerg] 12215#0: open() "/var/www/newsapp/newsapp/current/log/nginx.access.log" failed (2: No such file or directory)
I think maybe you did not remove the default page in sites-enabled:
rm -f /etc/nginx/sites-enabled/default
Next, your nginx access log "open failed" problem: try changing your log file paths.
access_log /var/www/newsapp/current/log/nginx.access.log;
# => /var/www/newsapp/shared/log/ACCESS_LOG_FILENAME.log
error_log /var/www/newsapp/current/log/nginx.error.log info;
# => /var/www/newsapp/shared/log/ERROR_LOG_FILENAME.log
I think that when nginx first started there was no current folder, because the current folder is only generated when the app is deployed, and nginx is not restarted whenever you deploy your app.
So try restarting nginx, or point the nginx log paths at a static path rather than a symlinked one. (Don't forget to restart nginx after modifying the conf file.)
As for the CSS and JS not loading: did you execute assets:precompile while deploying?
Check that require 'capistrano/rails' is in your Capfile, then set your :rails_env in deploy.rb and deploy again! A sketch of those two settings follows below.
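In case either piece is missing, a minimal sketch (the environment value is an assumption):
# Capfile
require 'capistrano/rails'

# config/deploy.rb
set :rails_env, 'production'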
In my nginx.conf I have changed my upstream puma to:
upstream puma {
    server 0.0.0.0:8080;
}
In my config/deploy.rb I needed to write this:
set :puma_bind, "0.0.0.0:8080"
I have set this in my /etc/nginx/nginx.conf file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # Phusion Passenger config
    ##
    # Uncomment it if you installed passenger or passenger-enterprise
    ##
    passenger_root /home/dinshaw/.rvm/gems/ruby-2.1.5/gems/passenger-4.0.56;
    passenger_ruby /home/dinshaw/.rvm/gems/ruby-2.1.5/wrappers/ruby;

    client_max_body_size 2M;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    server {
        rails_env development;
        listen 80;
        server_name localhost;
        root /home/dinshaw/projects/freeway/freeway-sdk-portal/public;
        access_log /home/dinshaw/projects/freeway/freeway-sdk-portal/log/nginx_access.log;
        error_log /home/dinshaw/projects/freeway/freeway-sdk-portal/log/nginx_error.log;
        passenger_enabled on;
    }
}

# mail {
#     # See sample authentication script at:
#     # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#     # auth_http localhost/auth.php;
#     # pop3_capabilities "TOP" "USER";
#     # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#     server {
#         listen localhost:110;
#         protocol pop3;
#         proxy on;
#     }
#
#     server {
#         listen localhost:143;
#         protocol imap;
#         proxy on;
#     }
# }
And there is no effect after this. I have also had this outside the server block, i.e., like this:
passenger_root /home/dinshaw/.rvm/gems/ruby-2.1.5/gems/passenger-4.0.56;
passenger_ruby /home/dinshaw/.rvm/gems/ruby-2.1.5/wrappers/ruby;

server {
    rails_env development;
    client_max_body_size 2M;
    listen 80;
    server_name localhost;
    root /home/dinshaw/projects/myCode/public;
    access_log /home/dinshaw/projects/myCode/log/nginx_access.log;
    error_log /home/dinshaw/projects/myCode/log/nginx_error.log;
    passenger_enabled on;
}
But this is also not working. Even when I upload more than the limit, it does not give an error. My
nginx version: nginx/1.6.3
Please guide me; I have been working on this for more than two days and can't figure out what to do.
What you have done by putting client_max_body_size inside both your http {} and server {} blocks is define the same setting at two scopes.
NGINX works by nesting control settings, the outermost shell being http {} -> server {} -> location {}. If you set something in http {}, it applies to every server {} and location {} inside it. But if you set something in server {}, it only applies to that server {}, not to the enclosing http {}.
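An illustration of that inheritance (the values are arbitrary examples):
http {
    client_max_body_size 2M;           # default inherited by every server below

    server {
        client_max_body_size 10M;      # overrides the http-level value for this server only

        location /uploads {
            client_max_body_size 100M; # the narrowest matching scope wins
        }
    }
}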
Note: whenever you make changes to your nginx.conf file, you must restart your nginx server (as @ihsan suggested above):
sudo service nginx restart
What you've done here is a good try, but you've defined the same thing twice, so it shouldn't make much of a difference if it didn't work the first time.
I have run into this issue many times, and aside from changing your nginx.conf file to allow a 2M max size, you also need to change your php.ini file to allow uploads of a certain size (2M, I guess?).
Inside your php.ini file you will find something that looks like:
upload_max_filesize = 10M
post_max_size = 10M
You must also change these limits to match your expected upload file size. How big you go is your choice, but remember: the bigger you set this limit, the more opportunity you give spammers or server bullies to upload big files to your server, taking up precious space and bandwidth. These values take effect after PHP reloads, as sketched below.
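A sketch of that reload; the service name varies by setup, and php-fpm here is an assumed example:
sudo service php-fpm restart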
Finally, if you monitor your NGINX error.log, you should be able to see the exact process that's restricting your upload. You enable your error.log within your http {} or server {} blocks, like so:
error_log /var/log/nginx/error.log warn;
A reminder, though: setting your log level to warn will produce very big log files (especially if you have lots of virtual servers running), so it's best to keep it like this only while troubleshooting, and then turn it back to higher error levels only.
For more information on how to monitor NGINX errors, read here: https://www.nginx.com/resources/admin-guide/logging-and-monitoring/
Hope this helps!