I seem to have a very odd issue. I'm developing a Rails app to be deployed on Heroku using Unicorn, so I'm using Foreman in my local development environment to replicate production as closely as possible.
As you'd expect, my web/worker processes write to the development.log file in path/to/app/log, and if I open the file it contains everything you would expect.
However, if I use the command
tail -f log/development.log
(from the app root), I get log output from Heroku! How is this possible? For example:
app[web.1]: [Worker(host:xxxx-xxx-xxx pid:5)] Starting job worker
heroku[web.1]: Idling
heroku[web.1]: Stopping process with SIGTERM
app[web.1]: I, [2012-02-19xxx-xxx-xxx #1] INFO -- : reaped #<Process::Status: pid 7 exit 0> worker=0
app[web.1]: I, [2012-02-19xxx-xxx-xxx #1] INFO -- : reaped #<Process::Status: pid 11 exit 0> worker=1
app[web.1]: I, [2012-02-19xxx-xxx-xxx #1] INFO -- : reaped #<Process::Status: pid 14 exit 0> worker=2
app[web.1]: I, [2012-02-19xxx-xxx-xxx #1] INFO -- : master complete
heroku[web.1]: Process exited with status 0
heroku[web.1]: State changed from up to down
heroku[slugc]: Slug compilation started
heroku[api]: Release v22 created by brandon@example.com
heroku[api]: Deploy xxxx by brandon@example.com
heroku[slugc]: Slug compilation finished
This is really annoying as I can't properly see my development log... help would be appreciated!
This isn't something that Foreman's doing.
What happens if you just look at log/development.log? Is it the same?
Do you have any wacky aliases set up that might be causing this?
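For example, you could check whether tail has been shadowed by an alias or shell function with:
type tail
alias | grep -i tail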
I'm making an expensive call to my Heroku Rails server. After 13-15 seconds the console in my browser reports a 503 Service Unavailable error. However, when I check my Heroku logs, they report:
Completed 200 OK in 45592ms (Views: 220.3ms | ActiveRecord: 33457.5ms)
Other times the Heroku logs report an exceeded memory quota. Here is an example:
2015-06-11T15:17:20.238285+00:00 app[web.1]: Completed 200 OK in 81881ms (Views: 201.6ms | ActiveRecord: 18021.2ms)
2015-06-11T15:17:33.482930+00:00 heroku[web.1]: Process running mem=841M(164.4%)
2015-06-11T15:17:33.482930+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
2015-06-11T15:17:53.147570+00:00 heroku[web.1]: Process running mem=841M(164.4%)
2015-06-11T15:17:53.147679+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
2015-06-11T15:17:59.751540+00:00 app[web.1]: E, [2015-06-11T15:17:59.695813 #3] ERROR -- : worker=2 PID:13 timeout (121s > 120s), killing
2015-06-11T15:17:59.916750+00:00 app[web.1]: E, [2015-06-11T15:17:59.906435 #3] ERROR -- : reaped #<Process::Status: pid 13 SIGKILL (signal 9)> worker=2
2015-06-11T15:18:02.487428+00:00 app[web.1]: I, [2015-06-11T15:18:02.427293 #16] INFO -- : worker=2 ready
Why is it reporting a Completed 200 when the console is reporting a 503?
There are two different things here:
your app
the Heroku load balancer (router)
In this case the load balancer sees that the request is taking too long and sends you the 503. In the background, the app keeps processing the request and eventually completes it with a 200.
See:
https://devcenter.heroku.com/articles/limits
https://devcenter.heroku.com/articles/request-timeout
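One common mitigation (a sketch, not a complete fix) is to cap the app's own request time below the router's 30-second limit with the rack-timeout gem, so slow requests fail inside the app instead of quietly completing after the router has already sent the 503:

# Gemfile
gem "rack-timeout"

# config/initializers/rack_timeout.rb
# Note: the configuration API has varied across rack-timeout versions;
# older releases used this class-level setter, newer ones read the
# RACK_TIMEOUT_SERVICE_TIMEOUT environment variable instead.
Rack::Timeout.timeout = 25  # seconds, safely under Heroku's 30s limit

For a request that genuinely needs 45+ seconds, though, the usual fix is to move the expensive work into a background job and respond immediately.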
Heroku does not give us much information for error code H10. Simply put, something is going wrong with your application code or configuration. To see what's going wrong, run heroku run rails console and you will be able to see the error details, which will help you resolve the error. No need to use the logs; heroku run rails console is a big help.
This happened to me when I used Node. I had set a timeout of 30 seconds and kept getting HTTP 503s. I then realized the timeout was causing the issue; I changed it to under 30 seconds and it is working fine. Maybe it's because of the load balancer that Heroku uses.
Thanks.
I'm developing an AngularJS front end, served by an Nginx web server, that sends requests to a Rails API back end running in a Unicorn application server.
I admit I'm just a developer and have no idea about server administration, so I just set the servers up and start them.
The application works; however, Unicorn behaves strangely. When I start it, I always get these errors:
roberto#ubuntu:~/dev/scripts$ ./start_unicorn.sh
I, [2014-06-14T11:46:06.085834 #4258] INFO -- : Refreshing Gem list
I, [2014-06-14T11:46:11.591592 #4258] INFO -- : listening on addr=0.0.0.0:8080 fd=10
I, [2014-06-14T11:46:12.087321 #4258] INFO -- : master process ready
I, [2014-06-14T11:46:12.151320 #4263] INFO -- : worker=0 ready
I, [2014-06-14T11:46:12.150526 #4266] INFO -- : worker=1 ready
E, [2014-06-14T11:46:39.112668 #4258] ERROR -- : worker=0 PID:4263 timeout (16s > 15s), killing
E, [2014-06-14T11:46:39.112898 #4258] ERROR -- : worker=1 PID:4266 timeout (16s > 15s), killing
E, [2014-06-14T11:46:39.118081 #4258] ERROR -- : reaped #<Process::Status: pid 4263 SIGKILL (signal 9)> worker=0
E, [2014-06-14T11:46:39.118634 #4258] ERROR -- : worker=1 PID:4266 timeout (16s > 15s), killing
E, [2014-06-14T11:46:39.121820 #4258] ERROR -- : reaped #<Process::Status: pid 4266 SIGKILL (signal 9)> worker=1
I, [2014-06-14T11:46:39.172067 #4284] INFO -- : worker=1 ready
I, [2014-06-14T11:46:39.172620 #4281] INFO -- : worker=0 ready
It takes some seconds until it responds, and this happens continuously.
I guess I'm missing some configuration, but I have no idea what.
If you need any more details, such as config files, just let me know.
Do you have your assets precompiled? Are you in production?
If not, when your server gets its first request, Rails will try to compile your assets, which can take more than 15 seconds and hit the Unicorn timeout.
Somewhere in your start.sh you should have:
export RAILS_ENV=production
And during your deployment you should run:
rake assets:precompile
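If the first boot is legitimately slow, you can also raise Unicorn's worker timeout in your Unicorn config file so a cold start isn't killed mid-request (a sketch; 60 is an arbitrary value above the 15 seconds seen in the log):

# config/unicorn.rb
timeout 60  # allow up to 60s per request instead of the 15s above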
I've got a large Rails 2.3 app running on Unicorn, which I'm using so that I can have zero-downtime deployments. However, I've noticed that the first request after a restart is very slow.
First request:
Completed 304 Not Modified in 2771.8ms (ActiveRecord: 98.6ms)
Second request:
Completed 304 Not Modified in 94.4ms (ActiveRecord: 26.9ms)
I do have preload_app true and I am re-establishing the DB connection in the after_fork hook.
I have no idea how to explain the roughly 2,600ms gap between these two requests.
Does anyone have any thoughts? Really, what I am looking for are ways to debug this issue.
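For readers unfamiliar with the setup described above, the widely documented preload_app pattern looks roughly like this in config/unicorn.rb (a sketch, not the asker's actual file):

preload_app true

before_fork do |server, worker|
  # the master's DB connection must not be shared with forked workers
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # each worker re-establishes its own DB connection
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end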
UPDATE
Here is my unicorn.log after a restart:
I, [2014-05-16T13:46:26.529305 #11637] INFO -- : executing ["/data/app/current/ey_bundler_binstubs/unicorn", "-E", "staging", "-c", "/data/app/shared/config/custom_unicorn.rb", "-D", "/data/app/current/config.ru", {12=>#<Kgio::UNIXServer:fd 12>}] (in /data/app/releases/20140516184210)
I, [2014-05-16T13:46:27.566115 #11637] INFO -- : inherited addr=/var/run/engineyard/unicorn_afar.sock fd=12
I, [2014-05-16T13:46:27.566551 #11637] INFO -- : Refreshing Gem list
I, [2014-05-16T13:47:13.036963 #8247] INFO -- : reaped #<Process::Status: pid 8681 exit 0> worker=3
I, [2014-05-16T13:47:14.093196 #8247] INFO -- : reaped #<Process::Status: pid 8670 exit 0> worker=2
I, [2014-05-16T13:47:14.100269 #12047] INFO -- : worker=0 ready
I, [2014-05-16T13:47:15.105249 #12063] INFO -- : worker=1 ready
I, [2014-05-16T13:47:15.114038 #8247] INFO -- : reaped #<Process::Status: pid 8655 exit 0> worker=1
I, [2014-05-16T13:47:15.957970 #8247] INFO -- : reaped #<Process::Status: pid 8638 exit 0> worker=0
I, [2014-05-16T13:47:15.958159 #8247] INFO -- : master complete
I, [2014-05-16T13:47:16.087761 #12082] INFO -- : worker=2 ready
I, [2014-05-16T13:47:16.876129 #11637] INFO -- : master process ready
I, [2014-05-16T13:47:17.102994 #12095] INFO -- : worker=3 ready
And here is the first request in my Rails logs:
Started GET "/" for 70.XX.XXX.XXX at 2014-05-16 13:47:51 -0700
Processing by HomeController#index as HTML
(1.1ms) SELECT ..... <regular controller/ActiveRecord queries>
Completed 304 Not Modified in 2724.8ms (ActiveRecord: 98.9ms)
The question "First request to Rails app is very slow" may be relevant.
Maybe there is a dependency that is loading / running on the first page load?
Some ideas:
Check the Rails log to see if there's anything funky going on
Is this just happening with Unicorn, or with other servers too?
Add log statements with timestamps to get a sense of which part of the app is taking a long time (see the sketch after this list)
Try using ruby-prof
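A minimal sketch of the timestamped-log idea, using the standard library's Benchmark (SomeModel stands in for whatever the first request touches):

require "benchmark"

elapsed = Benchmark.realtime do
  SomeModel.first  # hypothetical stand-in for a suspect call
end
Rails.logger.info "suspect call took #{(elapsed * 1000).round(1)}ms"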
I'm following along with a screencast (http://railscasts.com/episodes/335-deploying-to-a-vps?view=asciicast) that explains how to deploy to a VPS with Nginx and Unicorn (I'm installing a Ruby on Rails app on Ubuntu). After installing the various services (Nginx, Postgres, Unicorn, Ruby) and running cap deploy:cold, my app (as expected) displayed the default Nginx page, which the screencast instructs you to remove as follows so that Nginx serves the production app:
deployer@li349-144:~$ sudo rm /etc/nginx/sites-enabled/default
[sudo] password for deployer:
deployer@li349-144:~$ sudo service nginx restart
Restarting nginx: nginx.
I've previously deployed applications where, after removing this default page, the intended application appears. However, after running these commands on my server this time and navigating to the IP address, the browser tells me it can't connect to the server. Unsure what was wrong, I tried restarting Nginx, Unicorn, and Postgres (all restarted successfully), but I got the same error in the browser.
There's a log directory on my cloud server with a unicorn.log and a production.log, but neither of them indicates any problems. For example, the production.log only shows that database migrations have been run, and this is the unicorn.log:
I, [2013-07-11T23:26:57.717019 #6664] INFO -- : listening on addr=/tmp/unicorn.qbruby3.sock fd=10
I, [2013-07-11T23:26:57.717480 #6664] INFO -- : worker=0 spawning...
I, [2013-07-11T23:26:57.718428 #6664] INFO -- : worker=1 spawning...
I, [2013-07-11T23:26:57.719538 #6667] INFO -- : worker=0 spawned pid=6667
I, [2013-07-11T23:26:57.719713 #6667] INFO -- : Refreshing Gem list
I, [2013-07-11T23:26:57.722070 #6664] INFO -- : master process ready
I, [2013-07-11T23:26:57.726747 #6670] INFO -- : worker=1 spawned pid=6670
I, [2013-07-11T23:26:57.727030 #6670] INFO -- : Refreshing Gem list
I, [2013-07-11T23:27:09.930162 #6670] INFO -- : worker=1 ready
I, [2013-07-11T23:27:10.084362 #6667] INFO -- : worker=0 ready
I, [2013-07-12T01:05:52.638290 #6664] INFO -- : reloading config_file=/home/michael/apps/qbruby3/shared/config/unicorn.rb
I, [2013-07-12T01:05:52.668897 #6664] INFO -- : done reloading config_file=/home/michael/apps/qbruby3/shared/config/unicorn.rb
I, [2013-07-12T01:05:52.858858 #6664] INFO -- : reaped #<Process::Status: pid 6667 exit 0> worker=0
I, [2013-07-12T01:05:52.859032 #6664] INFO -- : worker=0 spawning...
I, [2013-07-12T01:05:52.860609 #7212] INFO -- : worker=0 spawned pid=7212
I, [2013-07-12T01:05:52.860839 #7212] INFO -- : Refreshing Gem list
I, [2013-07-12T01:05:52.875751 #6664] INFO -- : reaped #<Process::Status: pid 6670 exit 0> worker=1
I, [2013-07-12T01:05:52.875944 #6664] INFO -- : worker=1 spawning...
I, [2013-07-12T01:05:52.877405 #7215] INFO -- : worker=1 spawned pid=7215
I, [2013-07-12T01:05:52.877651 #7215] INFO -- : Refreshing Gem list
I, [2013-07-12T01:06:02.191290 #7212] INFO -- : worker=0 ready
I, [2013-07-12T01:06:02.269397 #7215] INFO -- : worker=1 ready
In this situation, what else could I check to identify the reason the app is not appearing?
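A few places to start (the socket path comes from the unicorn.log above; the Nginx error log location is a typical default and may differ on your system):

sudo nginx -t                        # validate the nginx configuration
tail -n 50 /var/log/nginx/error.log  # look for upstream or connection errors
ls -l /tmp/unicorn.qbruby3.sock      # confirm the socket nginx should proxy to exists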
I manage a Rails app for a client of mine, and recently it went down. The site was down for 9 hours before I noticed. I checked the logs, and every request for the past 9 hours is prefixed with the following error:
at=error code=H10 desc="App crashed"
Before that, I see the following logs:
2012-11-16T00:55:46+00:00 heroku[web.1]: Idling
2012-11-16T00:55:50+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2012-11-16T00:55:51+00:00 app[web.1]: [2012-11-16 00:55:51] ERROR SignalException: SIGTERM
2012-11-16T00:55:51+00:00 app[web.1]: /usr/local/lib/ruby/1.9.1/webrick/server.rb:90:in `select'
2012-11-16T00:56:00+00:00 heroku[web.1]: Error R12 (Exit timeout) -> At least one process failed to exit within 10 seconds of SIGTERM
2012-11-16T00:56:00+00:00 heroku[web.1]: Stopping remaining processes with SIGKILL
2012-11-16T00:56:02+00:00 heroku[web.1]: State changed from up to down
2012-11-16T00:56:02+00:00 heroku[web.1]: Process exited with status 137
2012-11-16T01:03:55+00:00 heroku[web.1]: Unidling
2012-11-16T01:03:55+00:00 heroku[web.1]: State changed from down to starting
2012-11-16T01:03:59+00:00 heroku[web.1]: Starting process with command `bundle exec rails server -p 4303`
2012-11-16T01:04:00+00:00 heroku[nginx]: 98.139.241.251 - - [16/Nov/2012:01:04:00 +0000] "GET / HTTP/1.1" 499 0 "-" "YahooCacheSystem" domain.com
2012-11-16T01:04:22+00:00 app[web.1]: => Ctrl-C to shutdown server
2012-11-16T01:04:22+00:00 app[web.1]: ** [NewRelic][11/16/12 01:04:21 +0000 b8af98a1-2246-4b34-9dfe-61b9d4b747bc (2)] INFO : Dispatcher: webrick
2012-11-16T01:04:22+00:00 app[web.1]: ** [NewRelic][11/16/12 01:04:21 +0000 b8af98a1-2246-4b34-9dfe-61b9d4b747bc (2)] INFO : Application: acsolar
2012-11-16T01:04:22+00:00 app[web.1]: ** [NewRelic][11/16/12 01:04:21 +0000 b8af98a1-2246-4b34-9dfe-61b9d4b747bc (2)] INFO : New Relic Ruby Agent 3.4.0.1 Initialized: pid = 2
2012-11-16T01:04:22+00:00 app[web.1]: => Booting WEBrick
2012-11-16T01:04:22+00:00 app[web.1]: => Rails 3.1.1 application starting in production on http://0.0.0.0:4303
2012-11-16T01:04:22+00:00 app[web.1]: => Call with -d to detach
2012-11-16T01:04:25+00:00 app[web.1]: [DEPRECATION] Your applications public directory contains an assets/products and/or assets/taxons subdirectory.
2012-11-16T01:04:25+00:00 app[web.1]: Run `rake spree:assets:relocate_images` to relocate the images.
2012-11-16T01:04:34+00:00 app[web.1]: ** [NewRelic][11/16/12 01:04:32 +0000 b8af98a1-2246-4b34-9dfe-61b9d4b747bc (2)] INFO : Reporting performance data every 60 seconds.
2012-11-16T01:04:34+00:00 app[web.1]: Connected to NewRelic Service at collector-5.newrelic.com
2012-11-16T01:05:00+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2012-11-16T01:05:00+00:00 heroku[web.1]: Stopping process with SIGKILL
2012-11-16T01:05:02+00:00 heroku[web.1]: Process exited with status 137
2012-11-16T01:05:02+00:00 heroku[web.1]: State changed from crashed to down
2012-11-16T01:05:02+00:00 heroku[web.1]: State changed from starting to crashed
I'm guessing that it may have spun down and had an error booting back up, but how come it stayed in the crashed state without restarting itself? Is there anything I can do to have it automatically restart if this happens again in the future?
I've got NewRelic running on this too and it didn't notify me at all, but that's another problem I'll have to investigate.
Heroku's support answer suggests restarting your app manually with heroku restart; they're fixing the issue right now:
Hi, a process management error on our side caused some crashed apps only running 1 web dyno to be reported as "idle" even though they were actually crashed. This means that the crashed dyno was never restarted, causing subsequent requests to fail. We've identified this problem and are implementing a fix. If your app is still unresponsive, please try restarting it with the heroku restart command. Please let us know if you need more help. Thanks, Heroku Support
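For reference, both of these are standard Heroku CLI commands:

heroku ps       # inspect current dyno state (up, idle, crashed)
heroku restart  # restart all dynos for the app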