I have a Ruby on Rails app. When I test uploading a file larger than 1 GB on Apache 2 it works, but I have to use nginx. On the nginx server I get this error after the upload reaches 100%:
2012/11/09 17:17:01 [error] 1436#0: *12 upstream prematurely closed connection while reading response header from upstream, client: 134.19.136.32, server: my_domain, request: "POST /attachments HTTP/1.1", upstream: "http://unix:/tmp-sock/unicorn.my_domain.sock:/attachments", host: "my_domain", referrer: "http://my_domain/"
I think the problem is in the Unicorn settings, but I don't know exactly where.
BTW: everything works only in Firefox.
My config files:
#nginx main config
user www-data;
worker_processes 5;
pid /run/nginx.pid;
events {
worker_connections 768;
accept_mutex off;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
output_buffers 1 2m;
send_timeout 50s;
client_body_temp_path /citishare/datastore0;
client_max_body_size 204800m;
reset_timedout_connection on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
#site config
upstream unicorn {
server unix:/tmp-sock/unicorn.citidrive.sock fail_timeout=10000;
}
server {
listen 80 default deferred;
server_name citidrive.citicom.sk;
root /home/deployer/apps/citidrive/current/public;
proxy_buffering on;
proxy_buffer_size 8M;
proxy_buffers 2048 8M;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
proxy_set_header X-Accel-Mapping /home/deployer/apps/citidrive/current/public/system/=/private_files/;
proxy_set_header X-Accel-Limit-Rate off;
location /private_files/ {
internal;
alias /home/deployer/apps/citidrive/current/public/system/;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
proxy_read_timeout 500;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
proxy_max_temp_file_size 3072m;
keepalive_timeout 80;
client_header_timeout 3m;
client_body_timeout 3m;
send_timeout 3m;
}
#unicorn.rb
root = "/home/deployer/apps/citidrive/current"
working_directory root
pid "#{root}/tmp/pids/unicorn.pid"
stderr_path "#{root}/log/unicorn.log"
stdout_path "#{root}/log/unicorn.log"
listen "/tmp-sock/unicorn.citidrive.sock"
worker_processes 5
timeout 80
#unicorn_init.sh
#!/bin/sh
### BEGIN INIT INFO
# Provides: unicorn
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Manage unicorn server
# Description: Start, stop, restart unicorn server for a specific application.
### END INIT INFO
set -e
# Feel free to change any of the following variables for your app:
TIMEOUT=${TIMEOUT-60}
APP_ROOT=/home/deployer/apps/citidrive/current
PID=$APP_ROOT/tmp/pids/unicorn.pid
CMD="cd $APP_ROOT; bundle exec unicorn -D -c $APP_ROOT/config/unicorn.rb -E production"
AS_USER=deployer
set -u
OLD_PIN="$PID.oldbin"
sig () {
test -s "$PID" && kill -$1 `cat $PID`
}
oldsig () {
test -s $OLD_PIN && kill -$1 `cat $OLD_PIN`
}
run () {
if [ "$(id -un)" = "$AS_USER" ]; then
eval $1
else
su -c "$1" - $AS_USER
fi
}
case "$1" in
start)
sig 0 && echo >&2 "Already running" && exit 0
run "$CMD"
;;
stop)
sig QUIT && exit 0
echo >&2 "Not running"
;;
force-stop)
sig TERM && exit 0
echo >&2 "Not running"
;;
restart|reload)
sig HUP && echo reloaded OK && exit 0
echo >&2 "Couldn't reload, starting '$CMD' instead"
run "$CMD"
;;
upgrade)
if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
then
n=$TIMEOUT
while test -s $OLD_PIN && test $n -ge 0
do
printf '.' && sleep 1 && n=$(( $n - 1 ))
done
echo
if test $n -lt 0 && test -s $OLD_PIN
then
echo >&2 "$OLD_PIN still exists after $TIMEOUT seconds"
exit 1
fi
exit 0
fi
echo >&2 "Couldn't upgrade, starting '$CMD' instead"
run "$CMD"
;;
reopen-logs)
sig USR1
;;
*)
echo >&2 "Usage: $0 <start|stop|restart|upgrade|force-stop|reopen-logs>"
exit 1
;;
esac
Sending a huge file straight to your Rails application is not a good way to handle big uploads. You can use the nginx upload module to handle large file uploads with unicorn+nginx. This way nginx itself receives the file, and instead of passing the whole file to Rails as multipart form data, you send the local path of the file nginx has already written to disk to the Unicorn/Rails server; all Rails has to do is move the file from the OS tmp path to the path you define. I configured nginx and Rails this way to handle uploads with the nginx upload module. Nowadays there are better solutions for handling uploads, such as the TUS protocol.
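As a rough sketch of the idea (this assumes the third-party nginx-upload-module is compiled into nginx; the storage directory and form-field names are illustrative, and @unicorn is the named location from the question's site config):
location /attachments {
  upload_pass @unicorn;                 # hand the request to Rails once the file is on disk
  upload_store /tmp/nginx_uploads;      # nginx writes the uploaded file here
  upload_store_access user:rw;
  # Rails receives the local path and original name instead of the file body
  upload_set_form_field $upload_field_name.name "$upload_file_name";
  upload_set_form_field $upload_field_name.path "$upload_tmp_path";
  upload_pass_form_field "^authenticity_token$";
  upload_cleanup 400 404 499 500-505;   # remove the stored file if the app responds with an error
}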
I solved the problem by increasing keepalive_timeout in the nginx configuration. Maybe this is not the proper solution, or maybe you can increase it only for certain locations and request types. As for the Firefox part: Firefox waits longer for a response than Chrome does.
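For example, a minimal sketch of raising the timeouts only inside the proxied location from the config above (the values are illustrative, not tuned):
location @unicorn {
  proxy_pass http://unicorn;
  keepalive_timeout  300s;   # the directive this answer increased
  proxy_read_timeout 600s;   # also give the app longer to respond once the upload finishes
  proxy_send_timeout 600s;
}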
Related
I'm getting a "We're sorry, but something went wrong" screen when attempting to deploy. It was previously working fine; however, when I tried to update it, I ran into this issue. This is my first time deploying anything, and I'm still not exactly sure what I'm doing, so I would really appreciate some input on what I'm doing incorrectly.
I get the following output in unicorn.log:
I, [2018-10-01T19:54:53.470419 #4905] INFO -- : unlinking existing socket=/home/deploy/production/appName/tmp/sockets/bcrypt_unicorn.todo.sock
I, [2018-10-01T19:54:53.470635 #4905] INFO -- : listening on addr=/home/deploy/production/appName/tmp/sockets/bcrypt_unicorn.todo.sock fd=10
I, [2018-10-01T19:54:53.470737 #4905] INFO -- : worker=0 spawning...
I, [2018-10-01T19:54:53.471045 #4905] INFO -- : worker=1 spawning...
I, [2018-10-01T19:54:53.471397 #4905] INFO -- : master process ready
I, [2018-10-01T19:54:53.476589 #4908] INFO -- : worker=0 spawned pid=4908
I, [2018-10-01T19:54:53.476714 #4908] INFO -- : Refreshing Gem list
I, [2018-10-01T19:54:53.477787 #4910] INFO -- : worker=1 spawned pid=4910
I, [2018-10-01T19:54:53.477910 #4910] INFO -- : Refreshing Gem list
I, [2018-10-01T19:54:59.740522 #4908] INFO -- : worker=0 ready
I, [2018-10-01T19:54:59.744825 #4910] INFO -- : worker=1 ready
and the following error from /var/log/nginx/error.log:
2018/10/01 20:00:41 [crit] 5067#5067: *2 connect() to unix:/home/deploy/production/appName/tmp/sockets/bcrypto_unicorn.todo.sock failed (2: No such file or directory) while connecting to upstream, client: 77.75.77.32, server: , request: "GET /genres/gaming HTTP/1.1", upstream: "http://unix:/home/deploy/production/appName/tmp/sockets/bcrypto_unicorn.todo.sock:/genres/gaming", host: "appName.com"
I restarted Nginx with
sudo service nginx restart
Then reloaded the updated configuration
sudo nginx -s reload
Then stopped the running Unicorn process
ps aux | grep "unicorn master"
kill -9 PID
Then pulled updated code to deploy
git status
git stash save -u quick-fix
git pull origin master
git stash apply
Then migrated the db
RAILS_ENV=production rake db:migrate
RAILS_ENV=production rake assets:precompile
Then finally restarted Unicorn
bundle exec unicorn -E production -c config/unicorn.rb -D
My unicorn.rb file is as follows:
app_path = "/home/deploy/production/appName"
working_directory app_path
pid app_path + "/tmp/pids/unicorn.pid"
stderr_path app_path + "/log/unicorn.log"
stdout_path app_path + "/log/production.log"
listen app_path + '/tmp/sockets/bcrypt_unicorn.todo.sock'
worker_processes 2
timeout 65
appName/config/nginx.conf
upstream unicorn {
server unix:/tmp/sockets/bcrypto_unicorn.todo.sock fail_timeout=0;
}
server{
listen 80 default deferred;
root /home/deploy/production/appName/public;
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}
error_page 403 404 /404.html;
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
and /etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
/etc/nginx/sites-enabled/bcrypto
upstream bcrypto_unicorn {
server unix:/home/deploy/production/appName/tmp/sockets/bcrypto_unicorn.todo.sock fail_timeout=0;
}
server {
listen 80 default deferred;
root /home/deploy/production/appName/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @bcrypto_unicorn;
location @bcrypto_unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://bcrypto_unicorn;
}
error_page 422 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 60;
}
Can someone please point me in the direction of what I'm doing incorrectly here?
First, check whether your tmp/sockets folder exists. If not, create it with the following command:
mkdir /home/deploy/production/appName/tmp/sockets
Then make the socket path match the one nginx uses by modifying your unicorn.rb:
listen app_path + '/tmp/sockets/bcrypt_unicorn.todo.sock'
Lastly, restart Unicorn:
bundle exec unicorn -E production -c config/unicorn.rb -D
The problem is that nginx is looking for the socket in a different place than the one Unicorn binds it to.
nginx is trying to find the socket at /home/deploy/production/appName/tmp/sockets/bcrypto_unicorn.todo.sock, while the Unicorn config sets it to /home/deploy/production/appName/tmp/sockets/bcrypt_unicorn.todo.sock.
To solve the problem both paths must be the same, so you need to put the same path in the upstream directive of /etc/nginx/sites-enabled/bcrypto and in the listen directive of unicorn.rb.
Then restart Unicorn and reload nginx.
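For example, a sketch of the nginx side only, assuming you keep the socket name your unicorn.rb already listens on (bcrypt_unicorn.todo.sock) and make the upstream match it exactly:
upstream bcrypto_unicorn {
  # must be byte-for-byte identical to the path passed to listen in unicorn.rb
  server unix:/home/deploy/production/appName/tmp/sockets/bcrypt_unicorn.todo.sock fail_timeout=0;
}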
My application is returning a 502 error. In the error.log I see:
2017/10/12 15:42:28 [error] 12727#12727: *415 connect() to unix:/var/www/autonomos/production/current/tmp/sockets/unicorn.sock failed (111: Connection refused) while connecting to upstream, client: 172.31.81.4, server: api.autonomosapp.com.br, request: "GET /v1/auth/validate_token HTTP/1.1", upstream: "http://unix:/var/www/autonomos/production/current/tmp/sockets/unicorn.sock:/v1/auth/validate_token", host: "api.autonomosapp.com.br"
My nginx/sites-enabled
upstream unicorn_autonomos_production {
server unix:/var/www/autonomos/production/current/tmp/sockets/unicorn.sock fail_timeout=0;
}
server {
listen 80;
#listen 443 ssl;
server_name api.autonomosapp.com.br;
root /var/www/autonomos/production/current/public;
access_log /var/www/autonomos/production/shared/log/access.log;
error_log /var/www/autonomos/production/shared/log/error.log;
client_max_body_size 500M;
keepalive_timeout 5;
gzip_types application/x-javascript text/css;
location /elb-status {
return 200;
}
location ~ /.well-known {
allow all;
root /var/www/autonomos/production/current/public;
}
location ~* ^/assets/ {
# Per RFC2616 - 1 year maximum expiry
# http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
expires 1y;
add_header Cache-Control public;
# Some browsers still send conditional-GET requests if there's a
# Last-Modified header or an ETag header even if they haven't
# reached the expiry date sent in the Expires header.
add_header Last-Modified "";
add_header ETag "";
break;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://unicorn_autonomos_production;
break;
}
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /var/www/autonomos/production/current/public;
}
}
nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
My etc/init/unicorn_autonomos_production
start on runlevel [2]
stop on runlevel [016]
console owner
setuid autonomos
pre-start exec /usr/local/rvm/bin/autonomos_production_unicorn_rails -E production -c /var/www/autonomos/production/current/config/unicorn.rb -D > /tmp/upstart_autonomos_production.log 2>&1
post-stop exec kill `cat /var/www/autonomos/production/current/tmp/pids/unicorn.pid`
respawn
unicorn.stderr.log
I, [2017-10-10T04:24:00.952787 #2245] INFO -- : reaped #<Process::Status: pid 2248 exit 0> worker=0
I, [2017-10-10T04:24:00.952946 #2245] INFO -- : master complete
My unicorn_autonomos_production is not in init.d; is that a problem?
When I try:
service unicorn_autonomos_production start
The error is:
Failed to start unicorn_autonomos_production.service: Unit unicorn_autonomos_production.service not found.
I reloaded the nginx server today; did I need to start Unicorn too? How do I do that?
I ran this command in the terminal:
exec /usr/local/rvm/bin/autonomos_production_unicorn_rails -E production -c /var/www/autonomos/production/current/config/unicorn.rb -D > /tmp/upstart_autonomos_production.log 2>&1
and that started Unicorn.
I tried the solution from a similar question and many others on Stack Overflow, but none of them seem to solve this issue. The default nginx "Welcome" page was still being served even after I configured /etc/nginx/nginx.conf and /etc/nginx/passenger.conf. It was only after I configured /etc/nginx/sites-enabled/default, changing the default root to point to my Rails app, that I started getting the 403 Forbidden error.
This is the error log.
2017/02/20 06:05:17 [error] 27311#27311: *2 directory index of "/home/deploy/Blog/current/public/" is forbidden, client: 111.93.247.206, server: mydomain.com, request: "GET / HTTP/1.1", host: "35.154.168.57"
My nginx files are as follows.
/etc/nginx/nginx.conf
user deploy;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Phusion Passenger config
##
# Uncomment it if you installed passenger or passenger-enterprise
##
# include /etc/nginx/passenger.conf;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
/etc/nginx/passenger.conf
passenger_ruby /home/deploy/.rvm/wrappers/ruby-2.3.1/ruby;
passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
#passenger_ruby /usr/bin/passenger_free_ruby;
/etc/nginx/sites-enabled/default
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
server_name mydomain.com;
passenger_enabled on;
rails_env production;
root /home/deploy/Blog/current/public;
# redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
The permissions are:
lrwxrwxrwx 1 root root 34 Feb 20 06:00 /etc/nginx/sites-enabled/default
-rw-r--r-- 1 root root 179 Feb 20 06:35 /etc/nginx/passenger.conf
-rw-r--r-- 1 root root 1608 Feb 20 06:34 /etc/nginx/nginx.conf
Can somebody please tell me what I'm doing wrong or what I haven't done?
Thank you.
Follow these steps:
Backup /home/deploy/Blog/current/public
chown -R <nginxuser>:<nginxuser> /home/deploy/Blog/current/public
nginxuser: the user that runs nginx; it's probably one of the following: nginx, www-data, root.
I'm not sure exactly what you're missing. Please compare your setup with mine at https://www.wiki11.com.
Your issue occurs because nginx is looking for an index.html file in /home/deploy/apps/mll/current/public, and it is not there.
To fix this, you will need to add Passenger to your nginx setup.
Instructions follow.
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 561F9B9CAC40B2F7
sudo apt-get install -y apt-transport-https ca-certificates
Add Passenger APT repository
sudo sh -c 'echo deb https://oss-binaries.phusionpassenger.com/apt/passenger xenial main > /etc/apt/sources.list.d/passenger.list'
sudo apt-get update
Install passenger and nginx
sudo apt-get install -y nginx-extras passenger
Now start nginx webserver.
sudo service nginx start
Next, we need to update the Nginx configuration to point Passenger to the version of Ruby that we're using.
sudo vim /etc/nginx/nginx.conf
And add or uncomment
include /etc/nginx/passenger.conf;
Save and close nginx.conf. Then open /etc/nginx/passenger.conf
sudo vim /etc/nginx/passenger.conf
If you are using .rbenv, then
passenger_ruby /home/deploy/.rbenv/shims/ruby;
Or if you are using rvm, then
passenger_ruby /home/deploy/.rvm/wrappers/ruby-2.5.0/ruby;
Or if you are using system ruby, then
passenger_ruby /usr/bin/ruby;
Next, restart nginx server
sudo service nginx restart
Add passenger_enabled on; to your sites-enabled vhost file (sites-enabled/default in your case).
server {
listen 80;
listen [::]:80;
root /home/deploy/apps/mll/current/public;
index index.html index.htm;
server_name myrailssite.com;
passenger_enabled on;
location / {
try_files $uri $uri/ =404;
}
}
Restart the nginx server again with sudo service nginx restart. Hopefully it will work now.
For more details, see:
https://www.phusionpassenger.com/library/install/nginx/install/oss/xenial/
I've just set up nginx and Unicorn. I start Unicorn like this:
unicorn_rails -c /var/www/Web/config/unicorn.rb -D
I've tried the various commands for stopping Unicorn, but none of them work. I usually just restart the server and start Unicorn again, but this is very annoying.
EDIT
unicorn.rb file (/var/www/Web/config/):
# Set the working application directory
# working_directory "/path/to/your/app"
working_directory "/var/www/Web"
# Unicorn PID file location
# pid "/path/to/pids/unicorn.pid"
pid "/var/www/Web/pids/unicorn.pid"
# Path to logs
# stderr_path "/path/to/log/unicorn.log"
# stdout_path "/path/to/log/unicorn.log"
stderr_path "/var/www/Web/log/unicorn.log"
stdout_path "/var/www/Web/log/unicorn.log"
# Unicorn socket
listen "/tmp/unicorn.Web.sock"
listen "/tmp/unicorn.Web.sock"
# Number of processes
# worker_processes 4
worker_processes 2
# Time-out
timeout 30
default.conf (/etc/nginx/conf.d/):
upstream app {
# Path to Unicorn SOCK file, as defined previously
server unix:/tmp/unicorn.Web.sock fail_timeout=0;
}
server {
listen 80;
server_name localhost;
# Application root, as defined previously
root /root/Web/public;
try_files $uri/index.html $uri @app;
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
This is what I do:
$ for i in `ps awx | grep unico | grep -v grep | awk '{print $1;}'`; do kill -9 $i; done && unicorn_rails -c /var/www/Web/config/unicorn.rb -D
If you don't want to type out that whole line each time, put it in a script, like this:
/var/www/Web/unicorn_restart.sh:
#!/bin/bash
for i in `ps awx | grep unicorn | grep -v grep | awk '{print $1;}'`; do
kill $i
done
unicorn_rails -c /var/www/Web/config/unicorn.rb -D
and then:
$ chmod +x /var/www/Web/unicorn_restart.sh
and invoke it each time by calling:
$ /var/www/Web/unicorn_restart.sh
This is the output I'm seeing in my nginx error log:
2013/11/10 09:40:38 [error] 20439#0: *1021 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: <server ip>, server: , request: "GET / HTTP/1.0", upstream: "http://some ip address:80/", host: "some ip address"
Here is the nginx.conf file contents:
user www-user;
worker_processes 1;
#error_log /var/log/nginx/error.log warn;
error_log /srv/app.myserver.com/current/log/nginx-error.log warn;
pid /var/run/nginx.pid;
worker_rlimit_nofile 30000;
events {
worker_connections 10000;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
#access_log /var/log/nginx/access.log main;
access_log /srv/app.myserver.com/current/log/nginx-access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/myserver.conf;
}
And here is the contents of /etc/nginx/conf.d/myserver.conf:
upstream myserver {
# This is the socket we configured in unicorn.rb
server unix:/srv/app.myserver.com/current/tmp/myserver.sock
fail_timeout=0;
}
server {
listen 80 default deferred;
#client_max_body_size 4G;
server_name app.myserver.com;
#keepalive_timeout 5;
# Location of our static files
root /srv/app.myserver.com/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @myserver;
location @myserver {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://myserver;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
Finally, here's the contents of my config/unicorn.rb file with comments stripped out to save space here:
worker_processes 4
user "www-user", "www-user"
root = "/srv/app.myserver.com/current/"
working_directory root
# QUESTION HERE: should this be considered relative to working_directory or from filesystem root?
listen "/tmp/myserver.sock", :backlog => 64
listen 8080, :tcp_nopush => true
listen 80, :tcp_nopush => true
timeout 30
pid "/srv/app.myserver.com/current/tmp/pids/unicorn.pid"
I'm using Capistrano to deploy and I've made sure that the tmp dir is there and there's a myserver.sock file in there.
And finally, when I do nginx -V I get this list of configuration flags:
--prefix=/etc/nginx
--sbin-path=/usr/sbin/nginx
--conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log
--pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp
--user=nginx
--group=nginx
--with-http_ssl_module
--with-http_realip_module
--with-http_addition_module
--with-http_sub_module
--with-http_dav_module
--with-http_flv_module
--with-http_mp4_module
--with-http_gunzip_module
--with-http_gzip_static_module
--with-http_random_index_module
--with-http_secure_link_module
--with-http_stub_status_module
--with-mail
--with-mail_ssl_module
--with-file-aio
--with-ipv6
--with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables'
I don't see anything in there calling out the upstream module. Could that be my problem?
This is my first pass at using nginx and unicorn so I'm kind of missing lots of context still...
If you need further information, let me know.
A couple of things to try:
In your nginx config, set your upstream server to use localhost:<unicorn-port> instead of the socket. Example:
upstream myserver {
server localhost:8080 fail_timeout=0;
}
Since nginx is your web server, I'd remove listen 80, :tcp_nopush => true from your unicorn.rb.
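A minimal sketch of what the listen lines in your unicorn.rb would look like after that change, keeping only the entries from your existing config:
# keep the socket, or the TCP port if you switch the nginx upstream to localhost:8080
listen "/tmp/myserver.sock", :backlog => 64
listen 8080, :tcp_nopush => true
# removed: listen 80, :tcp_nopush => true  (nginx already binds port 80)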
First, thanks everyone for the ideas. I figured this out, and I was barking up the wrong tree entirely. The problem was that Unicorn was failing to start. This was because one of our helpers was a class, not a module, and while Thin and WEBrick allow this, Unicorn was having kittens. I had a few other trivial issues, but once I was able to start Unicorn, things worked fine. At the time of this post I didn't realize I had to start Unicorn; my head was firmly up my... well, you get the picture.
Again, thanks for the ideas in answers and comments. I appreciate it greatly.
Error exists between Keyboard and Chair. :P