I followed the Rails one-click app deployment. The database is set up correctly; I even checked the rails console and everything is working fine.
The Ruby version is 2.3.0 and the Rails version is 5.0.1.
But when I hit the IP address, the request times out.
Checking the unicorn logs, I get:
/usr/local/rvm/gems/ruby-2.2.1/gems/unicorn-5.0.1/bin/unicorn:126:in `<top (required)>'
/usr/local/rvm/gems/ruby-2.2.1/bin/unicorn:23:in `load'
/usr/local/rvm/gems/ruby-2.2.1/bin/unicorn:23:in `<main>'
/usr/local/rvm/gems/ruby-2.2.1@global/bin/ruby_executable_hooks:15:in `eval'
/usr/local/rvm/gems/ruby-2.2.1@global/bin/ruby_executable_hooks:15:in `<main>'
E, [2017-02-26T15:47:18.969274 #9861] ERROR -- : reaped #<Process::Status: pid 11928 exit 1> worker=2
I, [2017-02-26T15:47:18.969471 #9861] INFO -- : worker=2 spawning...
I, [2017-02-26T15:47:18.974112 #11942] INFO -- : worker=2 spawned pid=11942
I, [2017-02-26T15:47:18.978540 #11936] INFO -- : Refreshing Gem list
I, [2017-02-26T15:47:18.986558 #11938] INFO -- : Refreshing Gem list
and the nginx error log shows:
2017/02/26 15:34:17 [error] 18564#0: *31 connect() to unix:/var/run/unicorn.sock failed (111: Connection refused) while connecting to upstream, client: 121.52.156.57, server: _, request: "GET / HTTP/1.1", upstream: "http://unix:/var/run/unicorn.sock:/", host: "188.166.157.124"
2017/02/26 15:35:42 [error] 32360#0: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 119.155.34.115, server: _, request: "GET / HTTP/1.1", upstream: "http://unix:/var/run/unicorn.sock/", host: "188.166.157.124"
2017/02/26 15:42:38 [error] 6296#0: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 119.152.140.90, server: _, request: "GET / HTTP/1.1", upstream: "http://unix:/var/run/unicorn.sock/", host: "188.166.157.124"
unicorn.conf
listen "unix:/var/run/unicorn.sock"
worker_processes 4
user "rails"
working_directory "/home/rails/company_startup"
pid "/var/run/unicorn.pid"
stderr_path "/var/log/unicorn/unicorn.log"
stdout_path "/var/log/unicorn/unicorn.log"
ps aux | grep unicorn
rails 4751 18.0 4.2 172880 21516 ? R 14:59 0:00 unicorn worker[2] -D -c /etc/unicorn.conf -E production
rails 4757 0.0 4.1 172404 20972 ? Rl 14:59 0:00 unicorn worker[3] -D -c /etc/unicorn.conf -E production
rails 4760 0.0 2.9 159860 14568 ? Rl 14:59 0:00 unicorn worker[1] -D -c /etc/unicorn.conf -E production
root 4764 0.0 0.1 11712 620 pts/0 S+ 14:59 0:00 grep --color=auto unicorn
root 20463 0.4 2.6 146740 13176 ? Sl 04:32 2:48 unicorn master -D -c /etc/unicorn.conf -E production
The nginx config file:
upstream app_server {
server unix:/var/run/unicorn.sock fail_timeout=0;
}
server {
listen 80;
root /home/rails/company_startup/public;
server_name _;
index index.htm index.html;
client_max_body_size 1M;
location / {
try_files $uri/index.html $uri.html $uri @app;
}
location ~* ^.+\.(jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|mp3|flv|mpeg|avi)$ {
try_files $uri @app;
}
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
}
This sounds like a unicorn restart problem. You said you don't use Capistrano. How do you deploy your application?
EDIT
Unicorn makes better use of the resources available to you through its multi-process architecture. When it starts, the master loads the Ruby environment and then spawns workers that handle the requests. The master never handles requests itself; the workers always do.
When a worker takes too long, the master can kill it and start a new worker.
You seem to use 4 workers. I don't know the size of your droplet on DO, but it seems that the master can't start any more workers. Could you tell me the size of your droplet (CPU & memory)?
I would install the unicorn-worker-killer gem and test the application again. This should restart your workers in a more effective way than the unicorn master does.
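For reference, unicorn-worker-killer hooks in as Rack middleware in config.ru rather than in unicorn.conf. A minimal sketch of the usual setup from the gem's README; the request counts and memory limits below are illustrative, so tune them to your droplet:
require ::File.expand_path('../config/environment', __FILE__)
require 'unicorn/worker_killer'
# Gracefully restart a worker after it has served between 3072 and 4096
# requests (the range is randomized so workers don't all restart at once)
use Unicorn::WorkerKiller::MaxRequests, 3072, 4096
# Gracefully restart a worker once its memory use (RSS) crosses a random
# threshold between 192 MB and 256 MB
use Unicorn::WorkerKiller::Oom, (192 * (1024**2)), (256 * (1024**2))
run Rails.application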
EDIT 2:
If this doesn't work, could you try replacing your upstream line with this in your nginx conf file:
upstream app_server { server 127.0.0.1:8080 fail_timeout=0; }
And this in your unicorn conf file:
listen "127.0.0.1:8080
And restart nginx then unicorn.
EDIT 3:
I think I've got it.
Could you please change your files like this:
unicorn.conf
listen "/var/run/unicorn.sock"
worker_processes 4
user "rails"
working_directory "/home/rails/company_startup"
pid "/var/run/unicorn.pid"
stderr_path "/var/log/unicorn/unicorn.log"
stdout_path "/var/log/unicorn/unicorn.log"
Nginx file
upstream app_server {
server unix:/var/run/unicorn.sock fail_timeout=0;
}
server {
listen 80;
root /home/rails/company_startup/public;
server_name <PLEASE PUT YOUR SERVER NAME>;
index index.htm index.html;
client_max_body_size 1M;
location / {
try_files $uri/index.html $uri.html $uri @app;
}
location ~* ^.+\.(jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|mp3|flv|mpeg|avi)$ {
try_files $uri @app;
}
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
}
Restart unicorn (make sure to replace the values between <>):
kill -s QUIT $(< /var/run/unicorn.pid)
bundle exec unicorn -c <PATH TO unicorn.conf FILE> -E <RAILS ENVIRONMENT> -D
Then restart nginx
sudo service nginx restart
and see if it works.
My Rails application is running fine in my local development environment (Mac). I pushed the code to a test server and am trying to start it manually, but I'm getting an error. The details are:
OS: Ubuntu 18.04
Rails 5.2.3
Ruby 2.6.3
Puma
Webpacker
Nginx
My /etc/nginx/sites-enabled/myfqdn.com.conf file:
upstream puma {
server unix:///home/test/shared/tmp/sockets/test-puma.sock;
}
server {
listen 80 default_server deferred;
server_name test.xxxxxx.com;
root /home/amptest/public;
access_log /home/test/log/nginx.access.log;
error_log /home/test/log/nginx.error.log info;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 10M;
keepalive_timeout 10;
}
I created the directories:
/home/myapp/shared
/home/myapp/shared/tmp
/home/myapp/shared/tmp/sockets
/home/myapp/shared/tmp/pids
My config/puma.rb has the following:
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
port ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RAILS_ENV") { "development" }
plugin :tmp_restart
In my .bashrc, I have:
export RAILS_ENV='test'
In my Gemfile, I have:
gem 'puma'
gem 'webpacker'
I ran:
bundle install
With no errors
And successfully ran the DB migration:
rake db:migrate
and seeded the DB:
rake db:seed
Then I did:
bundle exec rake assets:precompile
Which completed properly:
bundle exec rake assets:precompile
2019-09-10 01:36:10 WARN Selenium [DEPRECATION] Selenium::WebDriver::Chrome#driver_path= is deprecated. Use Selenium::WebDriver::Chrome::Service#driver_path= instead.
yarn install v1.17.3
[1/4] Resolving packages...
success Already up-to-date.
Done in 0.76s.
I, [2019-09-10T01:36:13.413669 #3461] INFO -- : Writing /home/myapp/public/assets/express/lib/application-9232ebdb20ad39572e70fb9e29810e63dbb63b58f5f18617c7c2bc8bd28321b5.js
I, [2019-09-10T01:36:13.413941 #3461] INFO -- : Writing /home/myapp/public/assets/express/lib/application-9232ebdb20ad39572e70fb9e29810e63dbb63b58f5f18617c7c2bc8bd28321b5.js.gz
Compiling…
Compiled all packs in /home/test/public/packs-test
But when I try to go to test.xxxxxx.com, I get the error "Something Went Wrong".
The nginx error log says:
2019/09/10 05:38:21 [crit] 1192#1192: *1 connect() to unix:///home/test/shared/tmp/sockets/test-puma.sock failed (2: No such file or directory) while connecting to upstream, client: xx.xxx.xxx.xxxx, server: test.xxxxxx.com, request: "GET / HTTP/1.1", upstream: "http://unix:///home/test/shared/tmp/sockets/test-puma.sock:/", host: "test.xxxxxx.com"
2019/09/10 05:38:47 [info] 1192#1192: *3 client closed connection while waiting for request, client: xx.xxx.xxx.xxxx, server: 0.0.0.0:80
So, I am missing a step or more, including what I need to do to make sure Puma is properly started. Any ideas?
This is because the app's Rails server is not running; you never ran the rails server command.
If you're using this for test/development purposes, consider this:
Start the rails server at port 3001, rails s -b 0.0.0.0 -p 3001
Then, in the nginx config, reverse-proxy requests to the Rails server URL:
proxy_pass http://127.0.0.1:3001
If you're using this for production, then consider using Passenger; you can see the Passenger installation docs here.
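Alternatively, if you want to keep the unix-socket setup your nginx upstream already expects, Puma can bind the socket itself. A sketch for config/puma.rb, assuming the socket path from the question's upstream block:
# config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
environment ENV.fetch("RAILS_ENV") { "production" }
# Bind to the unix socket nginx's upstream block points at,
# instead of listening on a TCP port
bind "unix:///home/test/shared/tmp/sockets/test-puma.sock"
plugin :tmp_restart
Then start it with bundle exec puma -C config/puma.rb; the socket file should appear under shared/tmp/sockets once Puma is up.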
I was setting up Passenger and Nginx for my Rails App (Mac OS X 10.11).
I used these commands:
gem install passenger
rvmsudo passenger-install-nginx-module
All of these get installed perfectly.
Added nginx path in .bash_profile:
export PATH=$PATH:/opt/nginx/sbin/
My nginx conf:
passenger_root /Users/MyUserName/.rvm/gems/ruby-2.2.5@rails4115/gems/passenger-5.3.4;
passenger_ruby /Users/MyUserName/.rvm/gems/ruby-2.2.5@rails4115/wrappers/ruby;
server {
listen 443;
ssl on;
server_name app1-local.staging.com;
rails_env development;
passenger_enabled on;
root /Users/MyUsername/Desktop/Github/MainApp/public;
ssl_certificate /opt/nginx/ssl/mainapp.com.crt;
ssl_certificate_key /opt/nginx/ssl/mainapp.com.key;
}
server {
listen 443;
ssl on;
server_name app2-local.staging.com app3-local.staging.com app4-local.staging.com local.staging.com;
rails_env development;
passenger_enabled on;
root /Users/MyUsername/Desktop/Github/MainApp2/public;
ssl_certificate /opt/nginx/ssl/mainapp.com.crt;
ssl_certificate_key /opt/nginx/ssl/mainapp.com.key;
}
server {
listen 80;
server_name app1-local.staging.com;
rails_env development;
passenger_enabled on;
root /Users/MyUsername/Desktop/Github/MainApp/public;
}
server {
listen 80;
server_name app2-local.staging.com app3-local.staging.com app4-local.staging.com local.staging.com;
rails_env development;
passenger_enabled on;
root /Users/MyUsername/Desktop/Github/MainApp2/public;
}
After this I started nginx with sudo nginx, and it worked. But now if I visit local.staging.com, I get a 502 Bad Gateway error.
Nginx error logs displays:
upstream prematurely closed connection while reading response header from upstream
If I do passenger-status, it outputs:
Phusion Passenger is currently not serving any applications.
Note: on running the passenger-config validate-install command, it says Everything looks good. :-)
EDIT
Output of sudo passenger-memory-stats:
-------- Apache processes --------
----------- Nginx processes ------------
PID PPID VMSize Resident Name
62366 1 2411.6 MB 1.3 MB nginx: master process nginx
63016 62366 2411.6 MB 1.5 MB nginx: worker process
63008 28339 2377.8 MB 0.3 MB tail -f /opt/nginx/logs/error.log
------ Passenger processes ------
PID VMSize Resident Name
36415 0.0 MB 0.0 MB (PassengerAgent)
63010 2416.8 MB 3.3 MB Passenger watchdog
63014 2417.6 MB 3.3 MB Passenger ust-router
63602 2457.3 MB 5.7 MB Passenger core
EDIT 2
Now, in the nginx error.log, I'm getting:
Process aborted! signo=SIGSEGV(11), reason=#0, signal sent by PID 0 with UID 0, si_addr=0x0, randomSeed=1534502322
And
upstream prematurely closed connection while reading response header from upstream
Any help here?
I'm deploying my Rails app using nginx, puma, and capistrano. It's deployed by a user called deploy, and the deploy location is under the home directory (/home/deploy).
I have Puma configured to create a socket under the shared folder that Capistrano symlinks all its releases to. Correspondingly, nginx is configured to look at that socket as well (see the config files below).
However, when I start up the Rails/Puma web server:
cd /home/deploy/my_app/current
SECRET_KEY_BASE=.... DATABASE_PASSWORD=... rails s -e production
I notice that no socket file is created. When I visit the site in my browser and then look at the Nginx error log, it is also complaining about that socket not existing.
2016/07/17 14:26:19 [crit] 26055#26055: *12 connect() to unix:/home/deploy/my_app/shared/tmp/sockets/puma.sock failed (2: No such file or directory) while connecting to upstream, client: XX.YY.XX.YY, server: localhost, request: "GET http://testp4.pospr.waw.pl/testproxy.php HTTP/1.1", upstream: "http://unix:/home/deploy/my_app/shared/tmp/sockets/puma.sock:/500.html", host: "testp4.pospr.waw.pl"
How do I go about getting puma to create that socket?
Thanks!
Puma Config
# config/puma.rb
...
# `shared_dir` is the symlinked `shared/` directory created
# by Capistrano - `/home/deploy/my_app/shared`
# Set up socket location
bind "unix://#{shared_dir}/tmp/sockets/puma.sock"
# Logging
stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true
# Set master PID and state locations
pidfile "#{shared_dir}/tmp/pids/puma.pid"
state_path "#{shared_dir}/tmp/pids/puma.state"
activate_control_app
...
Nginx sites config
# /etc/nginx/sites-available/default
upstream app {
# Path to Puma SOCK file
server unix:/home/deploy/my_app/shared/tmp/sockets/puma.sock fail_timeout=0;
}
server {
listen 80;
server_name localhost;
root /home/deploy/my_app/public;
try_files $uri/index.html $uri #app;
location #app {
proxy_pass http://app;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
Are you sure you are running Puma with that configuration? I don't think rails server is the proper way to start Puma in a production environment.
I would use this instead:
RACK_ENV=production bundle exec puma -C config/puma.rb
Once you get this working manually, then use the --daemon flag to keep the server running in the background.
Also, where is shared_dir defined in your config/puma.rb? Perhaps you omitted that part of the file, but if not, make sure you insert the correct value.
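If it is indeed missing, here is a sketch of defining shared_dir at the top of config/puma.rb, assuming the Capistrano layout from the question:
# config/puma.rb
# The shared/ directory that Capistrano symlinks releases against
shared_dir = "/home/deploy/my_app/shared"
# Socket, logging, and pid locations all derive from it
bind "unix://#{shared_dir}/tmp/sockets/puma.sock"
stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true
pidfile "#{shared_dir}/tmp/pids/puma.pid"
state_path "#{shared_dir}/tmp/pids/puma.state"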
I had a similar issue; the reason was an incorrect value of shared_dir. You need to update it with the following if you want it to work on deploy:
set :puma_bind,-> { "unix://#{shared_path}/tmp/sockets/puma.sock" }
set :puma_state, -> { "#{shared_path}/tmp/pids/puma.state" }
set :puma_pid, -> { "#{shared_path}/tmp/pids/puma.pid" }
Note: after these changes you may have a problem with manually running cap production puma:start/stop/restart, and you will need to remove the -> { wrapper.
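In other words, for manual runs the same settings would be plain strings rather than deferred lambdas — a sketch of what that note means (shared_path is Capistrano's own variable):
set :puma_bind, "unix://#{shared_path}/tmp/sockets/puma.sock"
set :puma_state, "#{shared_path}/tmp/pids/puma.state"
set :puma_pid, "#{shared_path}/tmp/pids/puma.pid"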
/etc/nginx/nginx.conf looks like:
user deploy;
worker_processes 5;
error_log logs/error.log;
events {
worker_connections 1024;
use epoll;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
upstream foreman4000 {
server x.x.x.x:4000;
server x.x.x.x:4001;
server x.x.x.x:4002;
server x.x.x.x:4003;
server x.x.x.x:4004;
}
server {
listen 80;
server_name x.x.x.x; #server IP
access_log /opt/nginx/foreman4000.access.log;
location / {
proxy_pass http://foreman4000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
}
Here I use the foreman gem, which uses upstart to manage all processes and start all servers with one command.
I created Procfile in the main directory of the project which contains:
redis: redis-server
thin: bundle exec thin start -p $PORT
faye: rackup faye.ru -E production -s thin
Added to Gemfile:
gem 'foreman'
gem 'thin'
gem "foreman-export-daemontools", "~> 0.0.1"
Ran bundle install locally to update Gemfile.lock
Deployed project on the server.
Started Nginx
deploy@dcards101:/opt/nginx/conf$ sudo /etc/init.d/nginx stop [ OK ]
deploy@dcards101:/opt/nginx/conf$ sudo /etc/init.d/nginx start [ OK ]
Exported data from Procfile to Upstart
deploy@dcards101:/var/www/cards/current$ rvmsudo foreman export upstart -a cards -u root
Started application
deploy@dcards101:/var/www/cards/current$ rvmsudo start cards
Now everything should have been fine, but all I see on the server is:
502 Bad Gateway
nginx/1.0.15
Logs say:
2012/07/17 17:22:30 [error] 11593#0: *148 no live upstreams while connecting to upstream, client: x.x.x.x, server: x.x.x.x, request: "GET / HTTP/1.1", upstream: "http://foreman4000/", host: "x.x.x.x"
Please help with anything you can. The server is Ubuntu 10 LTS.
I got the same error and solved it this way:
first install nginx_tcp_proxy_module
(I followed this tutorial but changed it to use passenger and thin with nginx.)
Then add the tcp block to your nginx.conf:
tcp {
upstream websockets {
## node processes
server 12.34.56.78:9292;
check interval=300 rise=2 fall=5 timeout=1000;
}
server {
listen 9200;
server_name domain.org;
tcp_nodelay on;
proxy_pass websockets;
}
}
This doesn't work on port 80 for me.
After that I still got empty responses from faye/private_pub, but there was an extremely trivial solution:
RAILS_ENV=production bundle exec rackup private_pub.ru -s thin -E production
See private_pub issue #29.
Now everything works, except that Chrome somehow fires twice (and I still need a daemon process for the rackup).
Hope it helps you too.
I think your problem is that you put your app-server and the faye server in the same upstream!
If I understand upstream and foreman correctly, your first visitor gets the app, the second gets faye, and so on. (Maybe I'm wrong, because I don't know foreman, but if foreman shares all available servers across all services, that might be your problem.)
I would say try capistrano instead of foreman, so you have full control over which server starts where. At my host, HTTP didn't work for private_pub (because of nginx), so I had to install the nginx_tcp_proxy_module to get the tcp block working in my nginx.conf.
Or just try server by server via ssh to find the error.
I am trying to set up an Ubuntu 11.04 server on Rackspace to run a Rails 3.2 app with nginx and unicorn.
I found this awesome blog post http://techbot.me/2010/08/deployment-recipes-deploying-monitoring-and-securing-your-rails-application-to-a-clean-ubuntu-10-04-install-using-nginx-and-unicorn/ that has helped me massively, and apart from MySQL setup issues I think I have everything nailed except for a Bad Gateway error.
The nginx error log shows
2012/02/25 14:38:34 [crit] 29139#0: *1 connect() to unix:/tmp/mobile.socket failed (2: No such file or directory) while connecting to upstream, client: xx.xx.xxx.xx, server: localhost, request: "GET / HTTP/1.1", upstream: "http://unix:/tmp/mobile.socket:/", host: xx.xx.xxx.xx
(I have x'd out the domains)
I guess this could be a user-permissions thing, but the file does not actually exist and I'm not sure how it should be created. I am reluctant to create it manually, as I feel that doing so would be fixing a symptom rather than the cause.
It should also be noted that the user I created on the server has sudo permissions and needs to use sudo to start nginx; I'm not sure if this is right.
Any pointers as to what I should be looking for to fix this are greatly appreciated.
For completeness my configuration files look like this
/etc/init.d/unicorn
#! /bin/sh
### BEGIN INIT INFO
# Provides: unicorn
# Required-Start: $local_fs $remote_fs $network $syslog
# Required-Stop: $local_fs $remote_fs $network $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts the unicorn web server
# Description: starts unicorn
### END INIT INFO
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/bin/unicorn_rails
DAEMON_OPTS="-c /home/testapp/mobile/current/unicorn.rb -E production -D"
NAME=unicorn_rails
DESC=unicorn_rails
PID=/home/testapp/mobile/shared/pids/unicorn.pid
case "$1" in
start)
echo -n "Starting $DESC: "
$DAEMON $DAEMON_OPTS
echo "$NAME."
;;
stop)
echo -n "Stopping $DESC: "
kill -QUIT `cat $PID`
echo "$NAME."
;;
restart)
echo -n "Restarting $DESC: "
kill -QUIT `cat $PID`
sleep 1
$DAEMON $DAEMON_OPTS
echo "$NAME."
;;
esac
and the nginx configuration in /etc/nginx/sites-available/default
# as we are going to use Unicorn as the application server
# we are not going to use common sockets
# but Unix sockets for faster communication
upstream mobile {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response (in case the Unicorn master nukes a
# single worker for timing out).
# for UNIX domain socket setups:
server unix:/tmp/mobile.socket fail_timeout=0;
}
server {
# if you're running multiple servers, instead of "default" you should
# put your main domain name here
listen 80 default;
# you could put a list of other domain names this application answers
server_name localhost;
root /home/testapp/mobile/current/public;
access_log /var/log/nginx/mobile_access.log;
rewrite_log on;
location / {
#all requests are sent to the UNIX socket
proxy_pass http://mobile;
proxy_redirect off;
proxy_set_header Host $host:$proxy_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
# if the request is for a static resource, nginx should serve it directly
# and add a far future expires header to it, making the browser
# cache the resource and navigate faster over the website
location ~ ^/(images|javascripts|stylesheets|system)/ {
root /home/testapp/mobile/current/public;
expires max;
break;
}
}
UPDATE
My unicorn.rb file
# See http://unicorn.bogomips.org/Unicorn/Configurator.html for complete
# documentation.
worker_processes 4
# Help ensure your application will always spawn in the symlinked
# "current" directory that Capistrano sets up.
working_directory "/home/testapp/mobile/current"
# listen on both a Unix domain socket and a TCP port,
# we use a shorter backlog for quicker failover when busy
listen "/tmp/mobile.socket", :backlog => 64
# nuke workers after 30 seconds instead of 60 seconds (the default)
timeout 30
# feel free to point this anywhere accessible on the filesystem
user 'testapp', 'testapp'
shared_path = '/home/testapp/mobile/shared'
pid "#{shared_path}/pids/unicorn.pid"
stderr_path "#{shared_path}/log/unicorn.stderr.log"
stdout_path "#{shared_path}/log/unicorn.stdout.log"
As per the suggestion, I manually created the mobile.socket file, and I now get the following error:
[error] 1083#0: *4 connect() to unix:/tmp/mobile.socket failed (111: Connection refused) while connecting to upstream
Is this just a permissions thing on the mobile.socket file? If so, what permissions do I need?
Update 2
nginx and unicorn both seem to be running OK:
testapp@airmob:~/mobile/current$ ps aux | grep nginx
root 6761 0.0 0.1 71152 1224 ? Ss 18:36 0:00 nginx: master process /usr/sbin/nginx
testapp 6762 0.0 0.1 71492 1604 ? S 18:36 0:00 nginx: worker process
testapp 6763 0.0 0.1 71492 1604 ? S 18:36 0:00 nginx: worker process
testapp 6764 0.0 0.1 71492 1604 ? S 18:36 0:00 nginx: worker process
testapp 6765 0.0 0.1 71492 1604 ? S 18:36 0:00 nginx: worker process
testapp 13071 0.0 0.0 8036 600 pts/0 R+ 21:21 0:00 grep --color=auto nginx
I have renamed mobile.socket to mobile.sock in the relevant config files (unicorn.rb and the nginx default site) and all is good; there was no need to create any socket files, it just works as expected.
This also happens if the app server (in my case unicorn) is not running. Unicorn creates the socket and nginx looks for it; if the socket is not there, nginx kicks up a fuss. So if you are reading this looking for a solution, make sure your app server (unicorn) is running, and make sure all your socket names match across the various configuration files (unicorn.rb and whatever nginx conf file mentions the socket).
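In other words, the two names must match character for character. A minimal sketch of the unicorn side, using the renamed socket from above; nginx's upstream must then read server unix:/tmp/mobile.sock:
# unicorn.rb — unicorn creates this socket at startup;
# the path must match the one in nginx's upstream block exactly
listen "/tmp/mobile.sock", :backlog => 64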
You specified that it should use the socket located at /tmp/mobile.socket, so yes, the solution is to simply create it.
upstream mobile {
# for UNIX domain socket setups:
server unix:/tmp/mobile.socket fail_timeout=0;
}
I'm assuming you're referencing the same socket in your unicorn.rb.