Whenever I hit a binding.pry while running an app locally, I enter the pry session as normal, but after about a minute, I see something like this in my server output.
[54438] ! Terminating timed out worker: 54455
Then the server seems to loop for a second or two (re-running the queries that lead up to the pry session) and I land in a new pry session from the same binding.pry, except in this new session nothing I type is echoed back. The only way to fix this is to quit the server and restart it.
I've tried inserting the following line in my config/puma.rb file but it doesn't seem to make any difference.
worker_timeout 900 if ENV["RACK_ENV"] == "development"
The only thing that works is setting the number of Puma workers to 0 in my .env file, e.g.
PUMA_WORKERS=0
Is there any way around this issue that doesn't involve just eliminating all Puma workers?
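For reference, a minimal config/puma.rb along these lines reproduces the setup described above (only the worker_timeout line and the PUMA_WORKERS variable come from this question; the rest is an assumed, typical configuration):

workers Integer(ENV["PUMA_WORKERS"] || 2)
threads_count = Integer(ENV["MAX_THREADS"] || 5)
threads threads_count, threads_count

# Give workers far longer than the default 60 seconds before they are
# reaped, so a long pry session is not killed mid-debugging.
worker_timeout 900 if ENV["RACK_ENV"] == "development"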
I am initiating long-running processes from a browser and showing the results after they complete. In my controller I have defined:
def runthejob
  pid = Process.fork
  if pid.nil?
    # Child: run the hour-long job, store the result, then exit so this
    # process doesn't continue with the request any further.
    output = execute_one_hour_job()
    update_output_in_database(output)
    exit
  else
    # Parent: detach the child and send the response - job started...
    Process.detach(pid)
  end
end
The request in the parent completes correctly. But in the child there is always a "500 internal server error"; Rails reports "Completed 500 Internal Server Error in 227192ms". I am guessing this happens because the request/response cycle of the child process is never completed, as there is an exit in the child. How do I fix this?
Is this the correct way to execute long-running processes? Is there a better way to do it?
When the child is running, if I do "ps -e | grep rails", I see that there are two instances of "rails server". (I use the command "rails server" to start my Rails server.)
ps -e | grep rails
75678 ttys002 /Users/xx/ruby-1.9.3-p194/bin/ruby script/rails server
75696 ttys002 /Users/xx/ruby-1.9.3-p194/bin/ruby script/rails server
Does this mean that there are two servers running? How are requests handled now? Won't a request go to the second server?
Thanks for helping me.
Try your code in production and see if the same error comes up. If not, your error may be from the development environment being cleared when the first request completes, whilst your forks still need the environment to exist. I haven't verified this with forks, but that is what happens with threads.
This line in your config/environments/development.rb will retain the environment, but any code changes will require a server restart.
config.cache_classes = true
There are a lot of frameworks to do this in Rails: DelayedJob, Resque, Sidekiq, etc.
You can find a whole lot of examples on RailsCasts: http://railscasts.com/
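For example, a minimal Sidekiq worker for the job in the question above might look like this sketch (the class name is made up; execute_one_hour_job and update_output_in_database are the question's own methods):

class LongJob
  include Sidekiq::Worker

  def perform
    # Runs in a separate Sidekiq process, outside the request/response cycle.
    output = execute_one_hour_job
    update_output_in_database(output)
  end
end

# In the controller, instead of Process.fork:
# LongJob.perform_async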
I just got into rails programming and it looks like there are two programs I can use to run my project locally: rackup and foreman.
One difference I noticed is that foreman will not output some things that I expect to see (and do see if I run rackup instead) until I press Ctrl+C to close the server. Then all those messages appear at once, as if they had been held back.
Is there a reason for this? How can I get foreman to be more verbose?
If you are not seeing output from your program, chances are it is buffering stdout. Ruby buffers stdout when it is not attached to a terminal, which is exactly what happens when foreman captures your process's output.
You can fix this by putting the following line in your config/environments/development.rb file:
$stdout.sync = true
http://github.com/ddollar/foreman/wiki/Missing-Output
Resque is currently showing me that I have a worker doing work on a queue. That worker was shut down by me in the middle of the queue (it's just for testing) and is still showing as running. I've confirmed the process ID has been killed and bluepill is no longer monitoring it. I can't find any way in the UI to force-clear that it is working.
What's the best way to update the status for the number of workers that are currently up (I have 2; the web UI reports 3)?
You may have a lingering pid file. This file is independent of the process running; in other words, when you killed the process, it didn't delete the pid file.
If you're using a typical Rails and Resque setup, Resque will store the pid in the Rails ./tmp directory.
Some Resque start scripts specify the pid file in a different location, something like this:
PIDFILE=foo/bar/resque/pid bundle exec rake resque:work
Wherever the script puts the pid file, look there, then delete it, then restart.
Also on the command line, you can ask redis for the running workers:
redis-cli keys "*worker:*"
If there are workers that you don't expect, you can delete them with:
redis-cli del <keyname>
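Alternatively, here is a rough Rails console sketch that unregisters workers whose process no longer exists (it assumes a standard Resque setup and that all workers run on the current host):

Resque.workers.each do |worker|
  host, pid, _queues = worker.to_s.split(':', 3)
  next unless host == `hostname`.strip
  begin
    Process.kill(0, pid.to_i)   # signal 0 only checks that the process exists
  rescue Errno::ESRCH
    worker.unregister_worker    # removes the stale worker keys from Redis
  end
end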
Try restarting the application.
For future references: also have a look under https://github.com/resque/resque/issues/299
I need to setup a connection to an external service in my Rails app. I do this in an initializer. The problem is that the service library uses threaded delivery (which I need, because I can't have it bogging down requests), but the Unicorn life cycle causes the thread to be killed and the workers never see it. One solution is to invoke a new connection on every request, but that is unnecessarily wasteful.
The optimal solution is to set up the connection in an after_fork block in the Unicorn config. The problem there is that it doesn't get invoked outside of Unicorn, which means we can't test it in the development/testing environments.
So the question is, what is the best way to determine whether a Rails app is running under Unicorn (either master or worker process)?
There is an environment variable that is accessible in Rails (I know it exists in 3.0 and 3.1): check the value of env['SERVER_SOFTWARE']. You could just run a regex or string comparison against that value to determine what server you are running under.
I have a template in my admin that goes through the env hash and spits out its contents.
Unicorn 4.0.1
env['SERVER_SOFTWARE'] => "Unicorn 4.0.1"
rails server (webrick)
env['SERVER_SOFTWARE'] => "WEBrick/1.3.1 (Ruby/1.9.3/2011-10-30)"
You can check for defined?(Unicorn), and in your Gemfile set: gem 'unicorn', require: false
In fact you don't need the Unicorn library loaded in your Rails application.
The server is started by the unicorn command from the shell.
Checking for the Unicorn constant seems like a good solution, BUT it depends very much on whether require: false is provided in the Gemfile. If it isn't (which is quite probable), the check might give a false positive.
I've solved it in a very straightforward manner:
# `config/unicorn.rb` (or similar):
ENV["UNICORN"] = "1"   # ENV values must be strings
...

# `config/environments/development.rb` (or similar):
...
# Log to stdout if the web server is Unicorn.
if ENV["UNICORN"].to_i > 0
  config.logger = Logger.new(STDOUT)
end
Cheers!
You could check to see if the Unicorn module has been defined with Object.constants.include?(:Unicorn) (Object.constants returns symbols on Ruby 1.9+).
This is very specific to Unicorn, of course. A more general approach would be to have a method which sets up your connection and remembers it's already done so. If it gets called multiple times, it just returns doing nothing on subsequent calls. Then you call the method in after_fork and in a before_filter in your application controller. If it's been run in the after_fork it does nothing in the before_filter, if it hasn't been run yet it does its thing on the first request and nothing on subsequent requests.
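Here is a rough sketch of that idea (ExternalService and the module name are placeholders, not from the original post):

module ServiceConnection
  def self.establish!
    # Memoized: repeated calls return the existing connection instead of
    # reconnecting, so calling this from several places is harmless.
    @connection ||= ExternalService.connect
  end
end

# config/unicorn.rb:
#   after_fork { |server, worker| ServiceConnection.establish! }
#
# app/controllers/application_controller.rb:
#   before_filter { ServiceConnection.establish! }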
Inside config/unicorn.rb, define an ENV variable:
ENV['RAILS_STDOUT_LOG'] = '1'
worker_processes 3
timeout 90
This variable, ENV['RAILS_STDOUT_LOG'], will then be accessible anywhere in your Rails app's worker processes.
My issue: I wanted to output all the logs (SQL queries) on the Unicorn workers and not on any other workers on Heroku, so what I did was add the env variable in the Unicorn configuration file.
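The flag can then be consumed in an environment file, for example (a sketch; only the variable name comes from the configuration above):

# config/environments/production.rb
if ENV['RAILS_STDOUT_LOG'] == '1'
  config.logger = Logger.new(STDOUT)
end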
If you use unicorn_rails, the code below will help:
defined?(::Unicorn::Launcher)
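For example, in an initializer (a sketch; the file name is made up):

# config/initializers/unicorn_check.rb
if defined?(::Unicorn::Launcher)
  Rails.logger.info "Booted via the Unicorn launcher"
end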
I'm using delayed_job to run jobs, with new jobs being added every minute by a cronjob.
I have an issue where the rake jobs:work task, currently started manually with 'nohup rake jobs:work &', is randomly exiting.
While God seems to be a solution to some people, the extra memory overhead is rather annoying and I'd prefer a simpler solution that can be restarted by the deployment script (Capistrano).
Is there some bash/Ruby magic to make this happen, or am I destined to run a monitoring service on my server with some horrid hacks to allow the unprivileged account the site deploys under to restart it?
For me the daemons gem was unreliable with delayed_job. It could have been a poorly written script (I was using the one on collectiveidea's delayed_job GitHub page) and not daemons' fault; I'm not really sure. But for whatever reason, it would restart inconsistently on deployments.
I read somewhere this was due to it not waiting for the process to actually exit, so the pid files would get overwritten or something. But I didn't really bother to investigate. I switched to the daemons-spawn gem using these instructions and it seems to be much more reliable now.
The delayed_job docs suggest that you use a monitoring service to manage the rake worker job(s). I use runit; it works well.
(You can install it in the mode where it does not replace init.)
Added:
Re: restart by Capistrano: yes, runit enables that. Just do a
sudo sv kill delayed_job
in your Capistrano recipe to kill the delayed_job worker. Runit will then restart it with your newly deployed code base.
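A sketch of how that could be wired into a Capistrano (2.x-style) recipe; the task name and the runit service name are assumptions:

namespace :delayed_job do
  task :restart, :roles => :app do
    # runit notices the killed process and restarts it with the new code.
    run "sudo sv kill delayed_job"
  end
end

after "deploy:restart", "delayed_job:restart"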
I have implemented a small rake task that restarts the jobs task over and over again:
desc "Start a delayed_job worker in a endless loop to prevent exits."
task :jobs => :environment do
while true
begin
Delayed::Worker.new(:min_priority => ENV['MIN_PRIORITY'],
:max_priority => ENV['MAX_PRIORITY'],
:quiet => false).start
rescue Exception => e
puts "Exception occured (#{e})"
end
puts "Task jobs:work exited, clearing queue and restarting"
sleep 1
Delayed::Job.delete_all
end
end
Apparently it did not work, so I ended up with this simple solution:
for (( ;; )); do rake jobs:work --trace; done
Get rid of delayed_job and use either whenever or Resque.
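For example, with the whenever gem the cron side could look like this sketch (the rake task name is made up):

# config/schedule.rb
every 1.minute do
  rake "jobs:enqueue_pending"   # hypothetical task that queues the pending work
end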