I'm trying to start Thin for my app, but the PID file never gets generated:
$ thin -C /var/www/project_path/current/config/myproject.testing.yml start
Now I can't stop it because there is no PID file:
$ thin -C /var/www/project_path/current/config/myproject.testing.yml stop
/home/usr/.rvm/gems/ruby-1.9.2-p180@api/gems/thin-1.5.1/lib/thin/daemonizing.rb:131:in `send_signal': Can't stop process, no PID found in tmp/pids/thin.pid (Thin::PidFileNotFound)
This is the YAML file:
pid: /home/usr/htdocs/testing/myproject/shared/pids/thin.pid
rackup: config.ru
log: /home/usr/htdocs/testing/myproject/shared/log/thin.log
timeout: 30
port: 1234
max_conns: 1024
chdir: /home/usr/htdocs/testing/myproject/current
max_persistent_conns: 128
environment: testing
address: 127.0.0.1
require: []
daemonize: true
Update:
Now I can start the server, but after a few seconds the process vanishes on its own, meaning the PID file that was created at startup disappears again shortly afterwards.
lsof -wni tcp:1234
will give you the process ID
kill -9 PID
will kill the process
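If you prefer a one-liner, the two steps can be combined (a sketch; lsof's -t flag prints just the PID so it can feed kill directly):
# -t makes lsof output only the PID of the listener on port 1234
$ kill -9 $(lsof -ti tcp:1234)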
I had exactly the same annoying problem.
I've found that if the server crashes on startup, the PID file gets created but no PID is written into it. Try cat'ing the server's log file, e.g. ./logs/thin.3001.log, and look for errors. You could also try starting the server manually via
rails s -p 3000
and see if there are any errors thrown.
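For the setup in the question above, that check might look like this (paths taken from the YAML file; adjust to your own):
# Look for a crash backtrace at the end of the Thin log
$ tail -n 50 /home/usr/htdocs/testing/myproject/shared/log/thin.log
# Or boot the app in the foreground so errors print straight to the terminal
$ cd /home/usr/htdocs/testing/myproject/current && rails s -p 3000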
Good luck
Chris
I started a Rails server (Puma) using the following command:
nohup rails server &
Its output was [2] 22481, along with the following:
nohup: ignoring input and appending output to 'nohup.out'
But now I have forgotten the returned process ID, so how can I find it in order to kill the process on AWS?
To kill whatever is on port 3000 (WEBrick's default port), run the command below to get the process ID for that port:
$ lsof -wni tcp:3000
Then, use process id (PID) to kill the process:
$ kill -9 PID
The Rails server process PID can also be found in this file:
tmp/pids/server.pid
Then run the
kill -9 PID
command.
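Those two steps can be combined, assuming the PID file exists:
# Read the PID that Rails wrote at boot and send SIGKILL
$ kill -9 $(cat tmp/pids/server.pid)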
ps -ef
returns the full list of processes, in which one of the entries looks like:
ec2-user 12992 1 0 Dec20 ? 00:00:57 puma 3.12.0 (tcp://0.0.0.0:3000) [tukatech_garmentstore_live]
So I force-killed the process with
kill -9 12992
and that did the job.
ps aux | grep 3000
This will show you the Rails server process running on port 3000.
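Note that grep will also match its own command line; pgrep avoids that (a sketch):
# -f matches against the full command line, -l prints the process name
$ pgrep -fl puma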
I am trying to deploy a Rails app with the Puma web server. When I start Puma with a config file, bundle exec puma -C config/puma.rb, I get an error that the address is already in use.
Does someone know how to fix this?
bundle exec puma -C config/puma.rb
[23699] Puma starting in cluster mode...
[23699] * Version 2.11.3 (ruby 2.0.0-p353), codename: Intrepid Squirrel
[23699] * Min threads: 5, max threads: 5
[23699] * Environment: development
[23699] * Process workers: 2
[23699] * Preloading application
Jdbc-MySQL is only for use with JRuby
[23699] * Listening on tcp://0.0.0.0:3000
/.rvm/gems/ruby-2.0.0-p353/gems/puma-2.11.3/lib/puma/binder.rb:210:in `initialize': Address already in use - bind(2) (Errno::EADDRINUSE)
from /.rvm/gems/ruby-2.0.0-p353/gems/puma-2.11.3/lib/puma/binder.rb:210:in `new'
from /Users/lexi87/.rvm/gems/ruby-2.0.0-p353/gems/puma-2.11.3/lib/puma/binder.rb:210:in `add_tcp_listener'
from /.rvm/gems/ruby-2.0.0-p353/gems/puma-2.11.3/lib/puma/binder.rb:96:in `block in parse'
from /.rvm/gems/ruby-2.0.0-p353/gems/puma-2.11.3/lib/puma/binder.rb:82:in `each'
from /.rvm/gems/ruby-2.0.0-p353/gems/puma-2.11.3/lib/puma/binder.rb:82:in `parse'
from /.rvm/gems/ruby-2.0.0-p353/gems/puma-2.11.3/lib/puma/runner.rb:119:in `load_and_bind'
from /.rvm/gems/ruby-2.0.0-p353/gems/puma-2.11.3/lib/puma/cluster.rb:302:in `run'
from /.rvm/gems/ruby-2.0.0-p353/gems/puma-2.11.3/lib/puma/cli.rb:216:in `run'
from /rvm/gems/ruby-2.0.0-p353/gems/puma-2.11.3/bin/puma:10:in `<top (required)>'
from /.rvm/gems/ruby-2.0.0-p353/bin/puma:23:in `load'
from /.rvm/gems/ruby-2.0.0-p353/bin/puma:23:in `<main>'
from /.rvm/gems/ruby-2.0.0-p353/bin/ruby_executable_hooks:15:in `eval'
from /.rvm/gems/ruby-2.0.0-p353/bin/ruby_executable_hooks:15:in `<main>'
You need to use kill -9 59780, with 59780 replaced by the PID you found (use lsof -wni tcp:3000 to see which process is using port 3000 and get its PID).
Or you can just modify your Puma config and change the TCP port in tcp://127.0.0.1:3000 from 3000 to 9292 or some other port that is not in use.
Or you can start your rails app by using:
bundle exec puma -C config/puma.rb -b tcp://127.0.0.1:3001
To kill the Puma process, first run
lsof -wni tcp:3000
to show what is using port 3000. Then use the PID from the result to kill the process.
For example, after running lsof -wni tcp:3000 you might get something like:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 3366 dummy 8u IPv4 16901 0t0 TCP 127.0.0.1:3000 (LISTEN)
Now run the following to kill the process (where 3366 is the PID):
kill -9 3366
That should resolve the issue.
You can also try this trick:
ps aux | grep puma
sample output:
myname 77921 0.0 0.0 2433828 1972 s000 R+ 11:17AM 0:00.00 grep puma
myname 67661 0.0 2.3 2680504 191204 s002 S+ 11:00AM 0:18.38 puma 3.11.2 (tcp://localhost:3000) [my_proj]
then:
kill -9 67661
I found the script below in a GitHub issue. It works great for me.
#!/usr/bin/env ruby
# Kill whatever process is listening on the given port (default 3000)
port = ARGV.first || 3000
# Trigger the sudo password prompt up front, before the backtick call below
system("sudo echo kill-server-on #{port}")
# Grab the first PID listening on the port
pid = `sudo lsof -iTCP -sTCP:LISTEN -n -P | grep #{port} | awk '{ print $2 }' | head -n 1`.strip
puts "PID: #{pid}"
`kill -9 #{pid}` unless pid.empty?
You can either run it in irb or put it in a Ruby file.
For the latter, create server_killer.rb and then run it with ruby server_killer.rb.
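Usage, assuming the file name above:
# Default port is 3000; pass another port as the first argument
$ ruby server_killer.rb 4000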
A GENERAL SOLUTION for Address already in use - bind(2) (Errno::EADDRINUSE)
This issue occurs because we are trying to use a port that is already in use, so we have to stop the service running on that port before we can run another one.
We can use kill, as in kill -9 {PID}, where {PID} is the PID of the service running on that port. To find the PID of a given service, say "firefox", we can use commands like pidof firefox, ps aux | grep -i firefox, or pgrep firefox, and then use the kill command to stop that service.
Sometimes we get into a situation where we don't know the PID or the service name to search for. In that case we can use the following little Ruby script to do it for us (here port 3000; change it according to your needs):
#!/usr/bin/env ruby
port = ARGV.first || 3000
system("sudo echo kill-server-on #{port}")
pid = `sudo lsof -iTCP -sTCP:LISTEN -n -P | grep #{port} | awk '{ print $2 }' | head -n 1`.strip
puts "PID: #{pid}"
`kill -9 #{pid}` unless pid.empty?
Save it as something.rb and run it with sudo ruby something.rb.
If the above solutions don't work on Ubuntu/Linux, you can try this:
sudo fuser -k -n tcp port
Run it several times to kill the processes on your port of choice; port could be 3000, for example. Once the command produces no output, all the processes have been killed.
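For example, to clear port 3000:
# fuser -k sends SIGKILL by default; -n tcp selects TCP sockets
$ sudo fuser -k -n tcp 3000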
You can find the running processes with ps aux | grep puma.
Then you can kill one with kill PID.
I had this issue on my MacBook Air, running Rails 5.0.3 and Puma 5.2.2.
I tried running
lsof -wni tcp:3000, but there was no process on this port number.
I managed to fix it by running
export PORT=3000 in my terminal; then I added that same line to my .bash_profile.
This might be old, but in my case the port was held by Docker. I hope it helps others.
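If you suspect Docker, a quick check might look like this (a sketch; the publish filter is available in recent Docker versions, and CONTAINER_ID is whatever the first command prints):
# List containers publishing port 3000 on the host, then stop the culprit
$ docker ps --filter "publish=3000"
$ docker stop CONTAINER_ID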
I have two Thin servers running for a Rails app. I start them up with bundle exec thin start.
chdir: /[root]/current
environment: production
address: 0.0.0.0
port: 3001
timeout: 30
log: /[root]/log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 100
require: []
wait: 30
threadpool_size: 20
servers: 2
daemonize: true
When I wait a few hours, usually one of the two servers is gone (e.g., I only see one with htop or with pgrep -lf thin). Even worse, sometimes both of them are gone after 10 hours or so, which results in a 500 error in the browser. Furthermore, when I start 3 or 4 servers, 2 of them die within a minute on average.
I don't see any error messages in my Rails production.log or in the thin.[port] log files specified in the app.yml file.
Is there a way to keep the Thin servers running?
Are you sure you can run your server with bundle exec -C app.yml start?
Try bundle exec thin -C app.yml start
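The same -C flag matters for stopping and restarting too, since Thin reads the PID file location from the config (a sketch, using the file name from the question):
$ bundle exec thin -C app.yml start
$ bundle exec thin -C app.yml restart
$ bundle exec thin -C app.yml stop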
As the title states,
I'm trying to start Ruby on Rails when the machine reboots.
I believe I have successfully set Thin to auto-start,
but my RoR page is still not working,
i.e., when I open localhost:3000 the page doesn't get displayed.
Does the RoR project start automatically when Thin starts?
If not, what settings do I need?
I'm using Ubuntu, and the RoR project is under /home/usr/test.
You need to tell Thin which application it should run automatically alongside it.
sudo thin config -C /etc/thin/testapp.yml -c /home/usr/test --servers 3 -e production
You can cross check the setting:
cat /etc/thin/testapp.yml
It should display something like:
---
pid: tmp/pids/thin.pid
address: 0.0.0.0
timeout: 30
port: 3000
log: log/thin.log
max_conns: 1024
require: []
environment: production
max_persistent_conns: 512
servers: 3
daemonize: true
chdir: /home/demo/public_html/testapp
Source: Rackspace's Knowledge Center
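For Thin to actually come up at boot, its init script also has to be installed and registered; on Ubuntu that is roughly the following (a sketch, following the same Rackspace guide):
# Generate /etc/init.d/thin, then hook it into the boot runlevels
$ sudo thin install
$ sudo /usr/sbin/update-rc.d -f thin defaults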
Try this out:
You have to start the Thin server in the directory /home/usr/test.
Try running Thin with the command bundle exec thin start.
I'm sending a USR2 signal to the master process in order to achieve zero downtime deploy with unicorn. After the old master is dead, I'm getting the following error:
adding listener failed addr=/path/to/unix_socket (in use)
unicorn-4.3.1/lib/unicorn/socket_helper.rb:140:in `initialize':
Address already in use - /path/to/unix_socket (Errno::EADDRINUSE)
The old master is killed in the before_fork block in the unicorn.rb config file. The process is started via upstart without the daemon (-D) option.
Any idea what's going on?
Well, it turns out you have to run in daemonized mode (-D) if you want to be able to do zero-downtime deployment. I changed a few things in my upstart script and now it works fine:
setuid username
pre-start exec unicorn_rails -E production -c /path/to/app/config/unicorn.rb -D
post-stop exec kill `cat /path/to/app/tmp/pids/unicorn.pid`
respawn
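For reference, the zero-downtime restart itself is just a USR2 signal to the running master (a sketch, using the PID path from the script above):
# Ask the old master to exec a new binary; the before_fork hook
# then shuts the old master down once the new workers are ready
$ kill -USR2 $(cat /path/to/app/tmp/pids/unicorn.pid)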