I ran into a very strange problem. I have nginx configured (and working fine) to sit in front of a Rails Unicorn server.
I use unicorn_rails -c config/unicorn.rb -E production -D to start my Unicorn server.
Everything works fine until I log out of my SSH session.
After I log out, the Rails app goes down.
When I check the nginx log, it says the connection to Unicorn's socket was refused.
Yet Unicorn's socket file is still sitting there and Unicorn's processes are all alive.
The only solution is to kill the Unicorn processes and start them again.
I am very confused. Can anyone help? Thanks!
Try running the process in the background using nohup:
nohup unicorn_rails -c config/unicorn.rb -E production -D
This may help you; however, it's been a while since I had to start my web server through SSH without an init.d script or similar. You may get better help on Super User, though, since they deal more with systems stuff.
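As a sketch of the same idea, assuming a login shell and that nothing else redirects Unicorn's early output (both assumptions on my part):
# nohup shields the brief foreground phase from SIGHUP and captures boot errors
nohup unicorn_rails -c config/unicorn.rb -E production -D > unicorn-boot.log 2>&1
# alternatively, detach the daemon from the SSH session's process group entirely
setsid unicorn_rails -c config/unicorn.rb -E production -D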
I deployed a Rails application using nginx and Unicorn and want to change a config file. Is it necessary to restart nginx, or is it enough to touch public/robots.txt?
If that works, why does touching public/robots.txt trigger a restart when touching other files does not?
I think the touch tmp/restart.txt method is Passenger-specific. With Unicorn, you can send a USR2 signal to the master process from the directory containing the updated code; Unicorn re-executes itself and starts a new master running the new code.
Depending on the OS you're running, sending the signal can look slightly different (kill, pkill, etc.). Also, assuming you use Capistrano for deployment:
# Kill unicorn
run "kill -s USR2 `cat #{unicorn_pid_file_location}`"
# then restart unicorn with updated config
run "#{unicorn_rails_or_unicorn} -c #{your_current_folder}/config/unicorn.rb -D -E production"
I am running a Rails 3.2 application on Amazon EC2, in the development environment and in detached mode:
$ rails s -d
After this command the EC2 terminal hangs and never returns to the prompt, but the server starts, since I can access the application. I have to close the terminal, and the server stays up.
After this I kill the application:
$ lsof -i :3000
$ kill -9 <pid>
Now if I try to restart the server, it gives an error:
A server is already running. Check /home/ubuntu/trade_ship/tmp/pids/server.pid.
Exiting
Now even if I delete the tmp folder and recreate it, the server won't start. Can anyone help me with these two issues?
I faced that issue too; try restarting your system and then check. That solution worked for me at the time.
First of all, if you are not able to use port 3000, use the rails s -p <port no> command.
Second, if you have to kill the Ruby instance the server started, use:
ps aux | grep ruby
username 17731 0.1 1.6 3127008 67996 ?? S 2:00PM 0:01.42 /Users/username/.rvm/rubies/ruby-1.9.2-p180/bin/ruby script/rails s -d
and then kill it:
kill -9 17731
This will definitely solve the issue.
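A gentler alternative that avoids the stale pid file in the first place, assuming the default Rails pid location (tmp/pids/server.pid is the Rails default, not something from the question):
# stop the server via its pid file so Rails can clean up after itself
kill -INT `cat tmp/pids/server.pid`
# if a kill -9 already left a stale pid file behind, remove just that file and restart
rm -f tmp/pids/server.pid
rails s -d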
I get the following error while trying to run "cap production unicorn:start"
F, [2013-07-12T04:36:18.134045 #28998] FATAL -- : error adding listener addr=0.0.0.0:80
/home/ec2-user/apps/foo_prod/shared/bundle/ruby/2.0.0/gems/unicorn-4.6.3/lib/unicorn/socket_helper.rb:147:in `initialize': Permission denied - bind(2) (Errno::EACCES)
Running the following command manually does work without any issues. What could be the problem here?
rvmsudo unicorn_rails -c config/unicorn/production.rb -D --env production
You need root access to bind to low ports like port 80. The command rvmsudo executes in a root context, and that is why it works.
The cap task executes in a normal user context (probably deploy, depending on your configuration). You should give the cap deploy user sudo ability and make sure your cap task uses sudo to start Unicorn.
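A minimal sketch of such a task, in the same Capistrano style as the other answers here (the namespace and paths are assumptions):
namespace :unicorn do
  # assumes the deploy user has passwordless sudo for unicorn_rails (see the sudoers answer below)
  task :start, :roles => :app do
    run "cd #{current_path} && rvmsudo unicorn_rails -c config/unicorn/production.rb -D --env production"
  end
end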
The answer by @Iuri G. gives you the reason and a possible solution.
I have another suggestion: unless you have an extremely compelling reason to run Unicorn on port 80, change it to a higher port (>1024), like 3000. This will solve your problem.
If the application is exposed to the public, it is too easy to overwhelm Unicorn directly and make your application unavailable to end users. In that case, do put Unicorn behind a proxy (like nginx): the proxy listens on port 80 and Unicorn on a higher port.
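A minimal sketch of that arrangement, assuming Unicorn listens on 127.0.0.1:3000 (the server name and upstream name are placeholders):
upstream unicorn_app {
  server 127.0.0.1:3000 fail_timeout=0;
}

server {
  listen 80;
  server_name example.com;

  location / {
    # forward the original host and client address to Unicorn
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://unicorn_app;
  }
}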
In my development environment, using RubyMine, I ran into this recently.
I used SSH to redirect port 80 to 8080:
sudo ssh -t -L 80:127.0.0.1:8080 user@0.0.0.0
I assume you are running Ubuntu on the production server. On your server you need to edit your sudoers file:
First type select-editor and select nano (or another editor you feel comfortable with), then open the file with sudo visudo.
Then at the bottom of the file, before the include line, add this line:
deployer ALL=(ALL) NOPASSWD: /path/to/your/unicorn_rails
You need to replace deployer with the user name you are using with Capistrano, and replace /path/to/your/unicorn_rails with its correct path. This will allow your deployer user to sudo unicorn_rails without being prompted for a password.
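You can verify the rule took effect before wiring it into Capistrano (deployer again stands in for your actual user name):
# lists the sudo privileges granted to the deploy user
sudo -l -U deployer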
Finally, edit your unicorn:start Capistrano task and add rvmsudo ahead of the command line that starts Unicorn:
rvmsudo unicorn_rails -c config/unicorn/production.rb -D --env production
If that does not work, you can try this instead:
bundle exec sudo unicorn_rails -c config/unicorn/production.rb -D --env production
I managed to host Redmine using rackup and Puma by running the following command in CMD:
rackup -I "script/rails" -s "puma" -O "-q" -E "production"
But this keeps the CMD window up and running, so I created a Windows service that runs a .BAT file executing this command. That worked, and Redmine is now hosted in the background.
And now my problem appears: I am unable to stop Redmine. Even if I stop the service that runs the .BAT file, Redmine is still being served, because I do not know how to kill the rackup process in the OnStop() function of the Windows service.
The only way I can kill it is by killing the ruby.exe process. I hope you can guide me to a better way of doing this. Thanks!
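One approach, as a sketch: rackup can record its pid with the -P option, and the service's OnStop() can then kill exactly that process. The paths below are assumptions:
rem start.bat - record the server's pid so it can be stopped later
rackup -I "script/rails" -s "puma" -O "-q" -E "production" -P "C:\redmine\redmine.pid"

rem stop.bat - kill only the recorded rackup process
for /f %%p in (C:\redmine\redmine.pid) do taskkill /F /PID %%p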
I'm very new to system administration and have no idea how init.d works, so maybe I'm doing something wrong here.
I'm trying to start Unicorn on boot, but it fails to start every time. I'm able to manually start/stop/restart it with service app_name start and the like. I can't understand why Unicorn doesn't start at boot when manually starting and stopping the service works. Some user permission issue, maybe?
My unicorn init script and the unicorn config files are available here https://gist.github.com/1956543
I'm setting up a development environment on Ubuntu 11.1 running inside a VM.
UPDATE - Could this be because of the VM? I'm currently sharing the entire codebase (folder) with the VM, which also happens to contain the unicorn config needed to start Unicorn.
Any help would be greatly appreciated !
Thanks
To get Unicorn to run when your system boots, you need to associate the init.d script with the default set of "runlevels", which are the modes that Ubuntu enters as it boots.
There are several different runlevels, but you probably just want the default set. To install Unicorn here, run:
sudo update-rc.d <your service name> defaults
For more information, check out the update-rc.d man page.
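To confirm the links were created (using the same service name you passed to update-rc.d):
# S??<your service name> symlinks should now exist in the default runlevels
ls /etc/rc2.d/ | grep <your service name>
# to remove them again later
sudo update-rc.d -f <your service name> remove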
You can configure a cron job to start the Unicorn server on reboot:
crontab -e
and add
@reboot /bin/bash -l -c 'service unicorn_<your service name> start >> /<path to log file>/cron.log 2>&1'
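After the next reboot, you can confirm the entry is installed and that it ran:
crontab -l | grep reboot
cat /<path to log file>/cron.log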