I am running into an odd issue trying to chown or chgrp a file from a unicorn process. Running the same code from rails c changes the group correctly. For example:
bash-$ whoami
zac
bash-$ groups
zachallett sysadmin
bash-$ ls -la
...
-rwxrw---- zac sysadmin 154 Nov 1 15:33 file.txt
...
rails controller action:
def controller
  file = "#{Rails.root}/file.txt"
  %x(chgrp zachallett #{file})
end
in unicorn log:
chgrp: changing group of `/var/www/app/current/file.txt': Operation not permitted
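As an aside, the same change can be made in-process with Ruby's FileUtils instead of shelling out. A minimal sketch; it goes through the same OS permission check, so it would presumably fail the same way under unicorn, but it surfaces the failure as a Ruby exception instead of a log line:
require "fileutils"

file = Rails.root.join("file.txt").to_s
# chgrp equivalent: a nil user leaves the owner untouched, only the group changes
FileUtils.chown(nil, "zachallett", file)  # raises Errno::EPERM on failure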
output of ps aux | grep unicorn:
zac 6579 0.0 1.1 254640 45188 ? Sl 17:13 0:01 unicorn_rails master -c config/unicorn.rb -E production -D
zac 6582 0.0 1.0 254640 42704 ? Sl 17:13 0:00 unicorn_rails worker[0] -c config/unicorn.rb -E production -D
zac 6585 0.0 1.0 254640 42704 ? Sl 17:13 0:00 unicorn_rails worker[1] -c config/unicorn.rb -E production -D
zac 6588 0.0 1.0 254640 42704 ? Sl 17:13 0:00 unicorn_rails worker[2] -c config/unicorn.rb -E production -D
zac 6591 0.0 1.0 254640 42704 ? Sl 17:13 0:00 unicorn_rails worker[3] -c config/unicorn.rb -E production -D
zac 6594 0.0 1.1 254728 45004 ? Sl 17:13 0:00 unicorn_rails worker[4] -c config/unicorn.rb -E production -D
zac 6597 0.0 1.1 254728 45072 ? Sl 17:13 0:00 unicorn_rails worker[5] -c config/unicorn.rb -E production -D
zac 7274 0.0 0.0 103232 848 pts/0 S+ 17:32 0:00 grep unicorn
Running the same chgrp from rails c changes the group just fine. So user zac owns the file and is a member of the sysadmin group, yet I am unable to run chgrp on the file from a unicorn process.
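A non-root chgrp succeeds only when the calling user owns the file and is a member of the target group, and membership is taken from the process's supplementary group list, which a daemon that switches users may not have initialized. A hedged diagnostic sketch to compare what rails c and a unicorn worker actually run with:
# log the worker's identity from a controller action, and compare with rails c
Rails.logger.info "uid=#{Process.uid} euid=#{Process.euid}"
Rails.logger.info "gid=#{Process.gid} egid=#{Process.egid}"
Rails.logger.info "groups=#{Process.groups.inspect}"
If the worker's Process.groups list lacks the zachallett gid, the "Operation not permitted" error is expected.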
EDIT: Add unicorn.rb config file
env = ENV["RAILS_ENV"] || "development"
working_directory "/var/www/<APP>/current"
pid "/var/www/<APP>/shared/pids/unicorn.pid"
stderr_path "/var/www/<APP>/shared/log/unicorn/stderr.log"
stdout_path "/var/www/<APP>/shared/log/unicorn/stdout.log"
listen "/var/www/<APP>/shared/sockets/unicorn.socket"
worker_processes env == "production" ? 6 : 2
timeout 120
preload_app true
user "zac", "sysadmin"
before_fork do |server, worker|
  old_pid = "/var/www/<APP>/shared/pids/unicorn.pid.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # already killed
    end
  end
end
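One note on the config itself: preload_app true is usually paired with an after_fork hook that re-establishes per-worker connections. A minimal sketch, assuming ActiveRecord (unrelated to the chgrp question, but standard alongside the before_fork above):
after_fork do |server, worker|
  # each forked worker needs its own database connection
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end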
I am having difficulty (re)starting Unicorn, both when deploying with Capistrano and when starting it manually.
This is deploy.rb:
# config valid only for current version of Capistrano
lock "3.8.1"
set :application, "project"
set :repo_url, "git@bitbucket.org:username/project.git"
set :branch, "master"
set :tmp_dir, '/home/deployer/tmp'
set :deploy_to, "/home/deployer/apps/project"
set :keep_releases, 5
set(:executable_config_files, %w(
  unicorn_init.sh
))
# files which need to be symlinked to other parts of the
# filesystem. For example nginx virtualhosts, log rotation
# init scripts etc.
set(:symlinks, [
  {
    source: "nginx.conf",
    link: "/etc/nginx/sites-enabled/default"
  },
  {
    source: "unicorn_init.sh",
    link: "/etc/init.d/unicorn_#{fetch(:application)}"
  },
  {
    source: "log_rotation",
    link: "/etc/logrotate.d/#{fetch(:application)}"
  },
  {
    source: "monit",
    link: "/etc/monit/conf.d/#{fetch(:application)}.conf"
  }
])
namespace :deploy do
  desc 'Restart application'
  task :restart do
    on roles(:app), in: :sequence, wait: 5 do
      invoke 'unicorn:restart'
    end
  end

  after :publishing, :restart

  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
      # Here we can do anything such as:
      # within release_path do
      #   execute :rake, 'cache:clear'
      # end
    end
  end

  desc "Make sure local git is in sync with remote."
  task :check_revision do
    on roles(:web) do
      unless `git rev-parse HEAD` == `git rev-parse origin/master`
        puts "WARNING: HEAD is not the same as origin/master"
        puts "Run `git push` to sync changes."
        exit
      end
    end
  end

  before "deploy", "deploy:check_revision"
end
And during the deploy, here's the error that Capistrano gives me:
SSHKit::Runner::ExecuteError: Exception while executing as deployer@IP: Exception while executing as deployer@IP: bundle exit status: 1
bundle stdout: Nothing written
bundle stderr: master failed to start, check stderr log for details
And here's the unicorn log:
I, [2017-07-26T09:27:41.274475 #21301] INFO -- : Refreshing Gem list
E, [2017-07-26T09:27:44.101407 #21301] ERROR -- : adding listener failed addr=/tmp/unicorn.project.sock (in use)
E, [2017-07-26T09:27:44.101620 #21301] ERROR -- : retrying in 0.5 seconds (4 tries left)
E, [2017-07-26T09:27:44.602753 #21301] ERROR -- : adding listener failed addr=/tmp/unicorn.project.sock (in use)
E, [2017-07-26T09:27:44.603927 #21301] ERROR -- : retrying in 0.5 seconds (3 tries left)
E, [2017-07-26T09:27:45.106551 #21301] ERROR -- : adding listener failed addr=/tmp/unicorn.project.sock (in use)
E, [2017-07-26T09:27:45.107051 #21301] ERROR -- : retrying in 0.5 seconds (2 tries left)
E, [2017-07-26T09:27:45.608434 #21301] ERROR -- : adding listener failed addr=/tmp/unicorn.project.sock (in use)
E, [2017-07-26T09:27:45.608617 #21301] ERROR -- : retrying in 0.5 seconds (1 tries left)
E, [2017-07-26T09:27:46.109480 #21301] ERROR -- : adding listener failed addr=/tmp/unicorn.project.sock (in use)
E, [2017-07-26T09:27:46.109664 #21301] ERROR -- : retrying in 0.5 seconds (0 tries left)
E, [2017-07-26T09:27:46.611021 #21301] ERROR -- : adding listener failed addr=/tmp/unicorn.project.sock (in use)
bundler: failed to load command: unicorn (/home/deployer/apps/project/shared/bundle/ruby/2.4.0/bin/unicorn)
Errno::EADDRINUSE: Address already in use - connect(2) for /tmp/unicorn.project.sock
/home/deployer/apps/project/shared/bundle/ruby/2.4.0/gems/unicorn-5.2.0/lib/unicorn/socket_helper.rb:122:in `initialize'
It says Errno::EADDRINUSE: Address already in use - connect(2) for /tmp/unicorn.project.sock -- but what does that mean exactly, and how do I fix it?
I also tried running Unicorn manually on the server with `bundle exec unicorn -D -c config/unicorn/production.rb -E production`, but only got
> master failed to start, check stderr log for details
which points back to the error in the log above.
EDIT: Instances of Unicorn
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
rails 2134 0.0 0.5 13620 5640 ? Ss Jul24 0:00 /bin/bash /home/rails/rails_project/.unicorn.sh
rails 2347 0.0 1.6 91052 17136 ? Sl Jul24 0:04 unicorn master -c /etc/unicorn.conf -E production --debug
rails 2349 0.0 7.3 270292 74716 ? Sl Jul24 0:10 unicorn worker[0] -c /etc/unicorn.conf -E production --debug
rails 2352 0.0 7.5 270420 77188 ? Sl Jul24 0:09 unicorn worker[1] -c /etc/unicorn.conf -E production --debug
rails 2354 0.0 7.4 270628 75236 ? Sl Jul24 0:09 unicorn worker[2] -c /etc/unicorn.conf -E production --debug
rails 2358 0.0 7.6 270288 77312 ? Sl Jul24 0:10 unicorn worker[3] -c /etc/unicorn.conf -E production --debug
deployer 9330 0.0 7.9 269032 80364 ? Sl 08:36 0:02 unicorn master -c /home/deployer/apps/project/current/config/unicorn/production.rb -E deployment -D
deployer 9334 0.0 7.2 269560 73800 ? Sl 08:36 0:00 unicorn worker[0] -c /home/deployer/apps/project/current/config/unicorn/production.rb -E deployment -D
deployer 9337 0.0 7.2 269560 73800 ? Sl 08:36 0:00 unicorn worker[1] -c /home/deployer/apps/project/current/config/unicorn/production.rb -E deployment -D
deployer 9340 0.0 7.2 269560 73748 ? Sl 08:36 0:00 unicorn worker[2] -c /home/deployer/apps/project/current/config/unicorn/production.rb -E deployment -D
deployer 24279 0.0 0.1 12948 1080 pts/0 S+ 11:13 0:00 grep --color=auto %CPU\|unicorn
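The "(in use)" error means another process is already bound to /tmp/unicorn.project.sock, and the ps listing shows a likely culprit: a Unicorn master from an earlier start (PID 9330) is still running and presumably still holding the socket, so the new master cannot bind it. A hedged sketch of a Capistrano task that quits such a stale master before starting a new one; the pid file path is an assumption and should match whatever your unicorn config writes:
namespace :unicorn do
  desc "Quit a stale Unicorn master if its pid file is still around"
  task :stop_stale do
    on roles(:app) do
      # assumed location -- adjust to the pid path in config/unicorn/production.rb
      pid_file = "/home/deployer/apps/project/shared/tmp/pids/unicorn.pid"
      execute :kill, "-QUIT", capture(:cat, pid_file) if test("[ -f #{pid_file} ]")
    end
  end
end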
I'm deploying a Ruby on Rails application on AWS Beanstalk. The application also needs a sidekiq process for background jobs. There's also a sneakers process running to listen on messages from a RabbitMQ instance.
I created an upstart process for sidekiq using ebextensions from the process outlined here. Using the same outline, I created another upstart process for running the sneakers rake task. All the config files are here in this gist.
The deploy runs fine and I can see the sidekiq and sneakers processes running, but after a few deploys I began seeing a number of rake processes being spawned, which are taking up database connections.
[root@ip-XXX ec2-user]# ps aux | grep '[/]opt/rubies/ruby-2.0.0-p648/bin/rake'
webapp 13563 0.0 2.2 1400644 184988 ? Sl 01:41 0:00 /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 13866 0.7 2.3 694804 193620 ? Sl 01:42 0:10 /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 14029 0.0 2.2 1400912 183700 ? Sl 01:42 0:00 /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 14046 0.0 2.2 1400912 183812 ? Sl 01:42 0:00 /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 14048 0.0 2.2 1400912 183804 ? Sl 01:42 0:00 /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 14073 0.0 2.2 1400912 183712 ? Sl 01:42 0:00 /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 14158 0.0 2.2 827056 187972 ? Sl Nov23 4:23 /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 19139 0.9 2.3 694744 193388 ? Sl 01:47 0:10 /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 19273 0.0 2.2 1400852 183680 ? Sl 01:47 0:00 /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 19290 0.0 2.2 1400852 183732 ? Sl 01:47 0:00 /opt/rubies/ruby-2.0.0-p648/bin/rake
[root@ip-XXX ec2-user]# ps auxf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
webapp 14158 0.0 2.2 827056 187972 ? Sl Nov23 4:24 /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 13563 0.0 2.2 1400644 185700 ? Sl 01:41 0:00 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 13866 0.4 2.3 694804 193620 ? Sl 01:42 0:11 /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 14029 0.0 2.2 1400912 184412 ? Sl 01:42 0:00 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 14046 0.0 2.2 1400912 184372 ? Sl 01:42 0:00 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 14048 0.0 2.2 1400912 184516 ? Sl 01:42 0:00 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 14073 0.0 2.2 1400912 184540 ? Sl 01:42 0:00 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 19139 0.4 2.3 694876 193428 ? Sl 01:47 0:11 /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 19273 0.0 2.2 1400852 184288 ? Sl 01:47 0:00 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 19290 0.0 2.2 1400852 184472 ? Sl 01:47 0:00 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 19293 0.0 2.2 1400852 184488 ? Sl 01:47 0:00 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 19333 0.0 2.2 1400852 184420 ? Sl 01:47 0:00 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
root 21038 0.0 0.0 217276 3460 ? Ssl 01:55 0:00 PassengerWatchdog
webapp 21041 0.1 0.0 704036 5652 ? Sl 01:55 0:02 \_ PassengerHelperAgent
webapp 21047 0.0 0.0 243944 7840 ? Sl 01:55 0:00 \_ PassengerLoggingAgent
root 21056 0.0 0.0 56404 1016 ? Ss 01:55 0:00 PassengerWebHelper: master process /var/lib/passenger/standalone/4.0.60/webhelper-1.8.1-x86_64-linux/PassengerWebHelper -c /tmp/passenger-standalone.e022jt/config -p /tmp/passenger-standalone.e022jt/
webapp 21057 0.0 0.0 56812 4436 ? S 01:55 0:00 \_ PassengerWebHelper: worker process
webapp 21058 0.0 0.0 56812 4436 ? S 01:55 0:00 \_ PassengerWebHelper: worker process
root 21063 0.0 0.0 8552 1104 ? Ss 01:55 0:00 /var/lib/passenger/standalone/4.0.60/support-x86_64-linux/agents/TempDirToucher /tmp/passenger-standalone.e022jt --cleanup --daemonize --pid-file /tmp/passenger-standalone.e022jt/temp_dir_toucher.pid --log-f
root 21078 0.0 0.0 11600 2748 ? Ss 01:55 0:00 /bin/bash
root 21102 0.0 0.0 54764 2556 ? S 01:55 0:00 \_ su -s /bin/bash -c bundle exec sidekiq -L /var/app/current/log/sidekiq.log -P /var/app/support/pids/sidekiq.pid
root 21103 8.1 2.6 1452872 212932 ? Sl 01:55 2:27 \_ sidekiq 4.1.2 current [0 of 25 busy]
root 21118 0.0 0.0 54768 2644 ? Ss 01:55 0:00 su -s /bin/bash -c bundle exec rake sneakers:run >> /var/app/current/log/sneakers.log 2>&1 webapp
webapp 21146 0.0 0.0 9476 2336 ? Ss 01:55 0:00 \_ bash -c bundle exec rake sneakers:run >> /var/app/current/log/sneakers.log 2>&1
webapp 21147 0.6 2.3 693604 193232 ? Sl 01:55 0:11 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 21349 0.0 2.2 1400608 184160 ? Sl 01:55 0:00 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 21411 0.0 2.2 1400608 183812 ? Sl 01:55 0:00 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 21414 0.0 2.2 1400608 183988 ? Sl 01:55 0:00 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 21475 0.0 2.2 1400608 183976 ? Sl 01:55 0:00 \_ /opt/rubies/ruby-2.0.0-p648/bin/rake
webapp 21720 0.3 3.5 1311928 293968 ? Sl 01:55 0:07 Passenger RackApp: /var/app/current
I'm not sure what spawned these processes (whether it was sidekiq, sneakers, or Passenger). With each deploy the number seems to grow, until the Postgres connections are maxed out.
Is my Beanstalk configuration incorrect? Can anybody help me debug this so I can figure out what's creating these processes?
It looks like every time the sneakers rake task was killed, it left behind orphaned processes. To rectify this, I added the following as a pre-deploy hook:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/04_mute_sneakers.sh":
    mode: "000755"
    content: |
      #!/bin/bash
      initctl stop sneakers 2>/dev/null
      kill $(ps aux | grep '[/]opt/rubies/ruby-2.0.0-p648/bin/rake' | awk '{print $2}') 2>/dev/null
      echo "Killed Sneakers Process"
I've set up monit to monitor my sunspot_solr process, which seems to work at first. If I restart the monit service with sudo service monit restart, my sunspot process starts:
ps aux | grep sunspot
root 4086 0.0 0.0 9940 1820 ? Ss 12:41 0:00 bash ./solr start -f -s /ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr/solr
root 4137 45.1 4.8 1480560 185632 ? Sl 12:41 0:09 java -server -Xss256k -Xms512m -Xmx512m -XX:NewRatio=3 -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -XX:CMSFullGCsBeforeCompaction=1 -XX:CMSTriggerPermRatio=80 -verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr/server/logs/solr_gc.log -Djetty.port=8983 -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=UTC -Djetty.home=/ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr/server -Dsolr.solr.home=/ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr/solr -Dsolr.install.dir=/ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr -jar start.jar --module=http
ubuntu 4192 0.0 0.0 10460 936 pts/3 S+ 12:41 0:00 grep --color=auto sunspot
However, I'm also running tail -f /var/logs/monit.log and see this at the same time:
[CST Mar 3 12:42:54] error : 'sunspot_solr' process is not running
[CST Mar 3 12:42:54] info : 'sunspot_solr' trying to restart
[CST Mar 3 12:42:54] info : 'sunspot_solr' start: /usr/bin/sudo
[CST Mar 3 12:43:25] error : 'sunspot_solr' failed to start
Plus, to make sure monit can actually restart the sunspot_solr process, I run sudo kill -9 <the pid> and monit can't restart sunspot_solr:
[CST Mar 3 12:44:25] error : 'sunspot_solr' process is not running
[CST Mar 3 12:44:25] info : 'sunspot_solr' trying to restart
[CST Mar 3 12:44:25] info : 'sunspot_solr' start: /usr/bin/sudo
[CST Mar 3 12:44:55] error : 'sunspot_solr' failed to start
Obviously something is wrong with my monit-solr_sunspot.conf file, but after messing around with it for a few hours now, I'm stumped:
check process sunspot_solr with pidfile /ebs/staging/shared/pids/sunspot-solr.pid
start program = "/usr/bin/sudo -H -u root /bin/bash -l -c 'cd /ebs/staging/releases/20160226191542; bundle exec sunspot-solr start -- -p 8983 -d /ebs/staging/shared/solr/data --pid-dir=/ebs/staging/shared/pids'"
stop program = "/usr/bin/sudo -H -u root /bin/bash -l -c 'cd /ebs/staging/releases/20160226191542; bundle exec sunspot-solr stop -- -p 8983 -d /ebs/staging/shared/solr/data --pid-dir=/ebs/staging/shared/pids'"
I've adapted this monit script to suit my needs (Sample sunspot-solr.monit), but am still having no luck!
UPDATE
I've gotten monit to successfully restart sunspot_solr if I kill it; however, monit.log still reports that the start failed.
I think monit runs as root. You may not want to use sudo, both because it prompts for a password and because monit doesn't need it.
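Building on that, a hedged rewrite of the check with sudo dropped (paths unchanged from the config above). The roughly 30-second gap between "trying to restart" and "failed to start" in the log also matches monit's default start timeout, so allowing more time may remove the false failure even when the process does come up:
check process sunspot_solr with pidfile /ebs/staging/shared/pids/sunspot-solr.pid
  start program = "/bin/bash -l -c 'cd /ebs/staging/releases/20160226191542; bundle exec sunspot-solr start -- -p 8983 -d /ebs/staging/shared/solr/data --pid-dir=/ebs/staging/shared/pids'"
    with timeout 60 seconds
  stop program = "/bin/bash -l -c 'cd /ebs/staging/releases/20160226191542; bundle exec sunspot-solr stop -- -p 8983 -d /ebs/staging/shared/solr/data --pid-dir=/ebs/staging/shared/pids'"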
Using Rails 3.2.21 with the whenever gem. This is my crontab:
# Begin Whenever generated tasks for: abc
0 * * * * /bin/bash -l -c 'cd /home/deployer/abc/releases/20141201171336 && RAILS_ENV=production bundle exec rake backup:perform --silent'
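For context, the whenever schedule that generates an entry like this is roughly the following sketch; the task name matches the crontab above, the rest is whenever's defaults:
# config/schedule.rb
every 1.hour do
  rake "backup:perform"
end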
Here's the output when the scheduled job is run:
deployer@localhost:~$ ps aux | grep rake
deployer 25593 0.0 0.0 4448 764 ? Ss 12:00 0:00 /bin/sh -c /bin/bash -l -c 'cd /home/deployer/abc/releases/20141201171336 && RAILS_ENV=production bundle exec rake backup:perform --silent'
deployer 25594 0.0 0.1 12436 3040 ? S 12:00 0:00 /bin/bash -l -c cd /home/deployer/abc/releases/20141201171336 && RAILS_ENV=production bundle exec rake backup:perform --silent
deployer 25631 69.2 4.4 409680 90072 ? Sl 12:00 0:06 ruby /home/deployer/abc/shared/bundle/ruby/1.9.1/bin/rake backup:perform --silent
deployer 25704 0.0 0.0 11720 2012 pts/0 S+ 12:00 0:00 grep --color=auto rake
Notice that the top two processes are actually similar. Are they running two instances of the same job concurrently? How do I prevent that?
deployer 25593 0.0 0.0 4448 764 ? Ss 12:00 0:00 /bin/sh -c /bin/bash …
deployer 25594 0.0 0.1 12436 3040 ? S 12:00 0:00 /bin/bash …
Notice that the top two processes are actually similar. Are they running two instances of the same job concurrently?
No, they aren't. The first is the /bin/sh -c that cron spawns to run the crontab command; it started the second, /bin/bash …. Most probably /bin/sh is just waiting for /bin/bash to terminate and does no work of its own while /bin/bash … executes; you can verify this with e.g. strace -p 25593.
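If the real worry is two runs of backup:perform overlapping (say, a backup that takes longer than an hour), one option is a custom whenever job type that serializes runs with flock(1). A hedged sketch; the job type name and lock path are my own, and the expansion of :task inside the lock path is an assumption about whenever's template substitution:
# config/schedule.rb
job_type :exclusive_rake,
  "cd :path && flock -n /tmp/:task.lock bundle exec rake :task RAILS_ENV=:environment --silent"

every 1.hour do
  exclusive_rake "backup:perform"
end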
Check your schedule.rb for a duplicate entry; if you find one, remove it and redeploy.
If there is no duplicate entry in schedule.rb, then you need to remove or comment out the duplicate directly in the crontab.
To delete or comment out cron jobs, see https://help.1and1.com/hosting-c37630/scripts-and-programming-languages-c85099/cron-jobs-c37727/delete-a-cron-job-a757264.html or http://www.esrl.noaa.gov/gmd/dv/hats/cats/stations/qnxman/crontab.html
I inherited a Rails application, and I'm trying to understand it. However, when I run:
rails s
I receive this log:
=> Booting Thin
=> Rails 3.2.1 application starting in development on http://0.0.0.0:3000
=> Call with -d to detach
=> Ctrl-C to shutdown server
>> Thin web server (v1.3.1 codename Triple Espresso)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:3000, CTRL+C to stop
However, this seems problematic to me, as it looks like both servers are trying to listen on 3000. What makes Rails launch Thin when I run rails s?
When the thin gem is installed, Rails will use it as the server by default.
You can change the port with the -p option, for example -p 3001. There are also some more options available to set the environment, bind address, and similar; there is more info on those in the Rails guide.
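For illustration, the switch is driven by the Gemfile alone; a minimal sketch, assuming a standard Rails 3.2 app:
# Gemfile
gem "thin"  # with this line present, `rails s` boots Thin instead of WEBrick

# remove the gem (and re-run bundle) to fall back to WEBrick,
# or just pick a different port: rails s -p 3001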
Example: a Padrino application with nginx and the Thin server:
Thin
# config/thin.yml
port: 3000
user: padrino
group: padrino
pid: tmp/pids/thin.pid
timeout: 30
wait: 30
log: log/thin.log
max_conns: 1024
require: []
environment: production
max_persistent_conns: 512
servers: 4
threaded: true
no-epoll: true
daemonize: true
socket: tmp/sockets/thin.sock
chdir: /home/padrino/my-padrino-app
tag: my-padrino-app
Nginx
# /etc/nginx/sites-enabled/my-padrino-app
server {
    listen 80 default_server;
    server_name my-padrino-app.com;

    location / {
        proxy_pass http://padrino;
    }
}

upstream padrino {
    server unix:/home/padrino/my-padrino-app/tmp/sockets/thin.0.sock;
    server unix:/home/padrino/my-padrino-app/tmp/sockets/thin.1.sock;
    server unix:/home/padrino/my-padrino-app/tmp/sockets/thin.2.sock;
    server unix:/home/padrino/my-padrino-app/tmp/sockets/thin.3.sock;
}
Script to start, stop, restart, status
#!/usr/bin/env bash
# bin/my-padrino-app-service.sh
APPDIR="/home/padrino/my-padrino-app"
CURDIR=$(pwd)
if [[ $# -lt 1 ]]
then
    echo
    echo "Usage:"
    echo "  $0 <start|stop|restart|status>"
    echo
    exit 1
fi

case $1 in
    "status")
        cat $APPDIR/tmp/pids/thin.* &> /dev/null
        if [[ $? -ne 0 ]]
        then
            echo "Service stopped"
        else
            for i in $(ls -C1 $APPDIR/tmp/pids/thin.*)
            do
                echo "Running: $(cat $i)"
            done
        fi
        ;;
    "start")
        echo "Making thin dirs..."
        mkdir -p $APPDIR/tmp/thin
        mkdir -p $APPDIR/tmp/pids
        mkdir -p $APPDIR/tmp/sockets
        echo "Starting thin..."
        cd $APPDIR
        # Production
        thin start -e production -C $APPDIR/config/thin.yml
        cd $CURDIR
        sleep 2
        $0 status
        ;;
    "stop")
        cat $APPDIR/tmp/pids/thin.* &> /dev/null
        if [[ $? -eq 0 ]]
        then
            for i in $(ls -C1 $APPDIR/tmp/pids/thin.*)
            do
                PID=$(cat $i)
                echo -n "Stopping thin ${PID}..."
                kill $PID
                if [[ $? -eq 0 ]]
                then
                    echo "OK"
                else
                    echo "FAIL"
                fi
            done
        fi
        $0 status
        ;;
    "restart")
        $0 stop
        $0 start
        $0 status
        ;;
esac
You can do something like this: thin start -p 3000 -e production, and so on for each parameter. But that gets tedious.
The best approach is to create a YAML configuration file in your app_name/config/ directory:
#config/my_thin.yml
user: www-data
group: www-data
pid: tmp/pids/thin.pid
timeout: 30
wait: 30
log: log/thin.log
max_conns: 1024
require: []
environment: production
max_persistent_conns: 512
servers: 1
threaded: true
no-epoll: true
daemonize: true
socket: tmp/sockets/thin.sock
chdir: /path/to/your/apps/root
tag: a-name-to-show-up-in-ps aux
and run it specifying this configuration file: thin start -C config/my_thin.yml