I'm facing periodic unavailability of a RoR 3 application. Usually there is a roughly 15-minute period of unavailability each day. The problem doesn't correlate with application load, and I can't find any errors in the logs. The only errors I see are H12 Request timeout. I changed the Postgres plan to a production tier, but the problem still occurs.
In New Relic I see a serious increase in Postgres response time right before the H12 errors.
Please help.
Pavel.
Related
I have a Rails app which, when deployed to Heroku, takes 3 to 4 minutes to go live even after a successful deployment. During that period it shows a "Request timeout" error. I have enabled preboot as mentioned here:
https://devcenter.heroku.com/articles/preboot but the result is still the same. I am on Heroku Enterprise and still facing this issue. Please suggest what I am missing; here is the dyno and add-on info that I am using.
I have a Rails 4.2 app running on Heroku. Occasionally there is an issue that causes most incoming requests to get a server error. For example, there could be a memory leak or a max database connection issue. How can I set up a script or service to automatically restart the server when it detects errors?
I think this service could ping the app every few minutes and if it detects an error, it should confirm there's really a problem and then run heroku restart. How could this be set up?
After Googling this topic, I came across Neptune.io, which seems to provide a useful service for this task.
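A minimal sketch of that ping-and-restart watchdog, run from a scheduler every few minutes, assuming the platform-api gem and an OAuth token in HEROKU_OAUTH_TOKEN; the app name, URL, and retry thresholds below are hypothetical:

    # health_watchdog.rb
    require 'net/http'
    require 'platform-api'  # gem install platform-api

    APP_NAME  = 'my-rails-app'                              # hypothetical
    CHECK_URI = URI('https://my-rails-app.herokuapp.com/')  # hypothetical

    def healthy?
      Net::HTTP.get_response(CHECK_URI).is_a?(Net::HTTPSuccess)
    rescue StandardError
      false
    end

    # Confirm there's really a problem: require several consecutive
    # failures instead of restarting on a single blip.
    failures = 0
    3.times do
      failures += 1 unless healthy?
      sleep 10
    end

    if failures == 3
      heroku = PlatformAPI.connect_oauth(ENV.fetch('HEROKU_OAUTH_TOKEN'))
      heroku.dyno.restart_all(APP_NAME)  # same effect as `heroku restart`
    end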
Could you tell me what is happening with my AWS server? For the past 3 weeks, whenever I deploy my RoR app to AWS (using the Elastic Beanstalk tool), I run into a strange issue.
Deployment time is quite good (just about 10-15 minutes), and the health of the server stays green. But after that, the server is inaccessible, and this lasts about 3-4 hours! Then everything is OK, and the server runs fast and smoothly. I totally don't understand how the server health can stay unchanged while this error is happening. All I can do is refresh the browser periodically until it runs.
I don't think my application is big enough to explain a deployment time like that. It only takes about 20 minutes locally (in production mode).
Here are some errors I found while the server was hanging:
"An error occured while starting up the preloader."
"Gateway timeout" when loading application.js (using chrome debug)
"Bad gateway" when loading application.js (using chrome debug)
Please give me some advice on how to solve this. I have been stuck on this issue for a long time.
Thanks
I have a RoR app running on Elastic Beanstalk. I have occasionally seen 403 errors from Passenger for a while. Most of the time 1 server is running, but this is increased to 3 or 4 instances during busy periods of the day.
Session stickiness is not turned on.
I have noticed that when a new server is started, the ELB sends requests to it before bundle install has finished.
If I SSH into the newly started server, I can see in /var/app/current/ that the app has not yet been installed, and if I run top it looks like Bundler is running and compiling things with cc1, etc.
/var/app/support/log/passenger.log shows that requests to valid URLs within my Rails app are being received and responded to with 404. Hardly surprising, because the app isn't there yet.
After 5-10 minutes all of the compiling is complete, the app files appear in /var/app/current, and all is well.
This doesn't seem quite right to me. How do I set up the ELB / my rails app so that the ELB can tell when it is ready to receive requests?
I found the answer to this. There was no application health check URL set. In that case the ELB just pings the instance to see if it's healthy, i.e. it checks that the instance is booted rather than whether Rails is up and running. Setting the health check URL to '/login/' fixed it for me, because that path returns a 404 until Rails is running and a 200 afterwards.
Elastic Beanstalk requires 2 correct responses before it deems an instance healthy, and it checks the instance every 5 minutes. This means an instance can take a while to start serving requests: boot time + waiting for the next poll from the ELB + 5 minutes before it sees any real traffic.
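If your app has no convenient page like '/login/', a dedicated endpoint is another option. A minimal sketch with hypothetical route and controller names; the ELB health check URL would then point at /healthcheck:

    # config/routes.rb
    get '/healthcheck', to: 'health#show'

    # app/controllers/health_controller.rb
    class HealthController < ApplicationController
      # Returns 200 only once Rails has booted and the database is
      # reachable, so the ELB won't route traffic to an instance that
      # is still compiling and installing the app.
      def show
        ActiveRecord::Base.connection.execute('SELECT 1')
        head :ok
      rescue StandardError
        head :service_unavailable
      end
    end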
I'm getting the following error with Ruby on Rails, Heroku and PostgreSQL:
PG::Error (FATAL: too many connections for role "********")
I've restarted the server several times to no avail. Any ideas?
Paying Heroku more money isn't always the answer.
I had this problem temporarily when I was running up against the dev-level database's row limit. Deleting rows using the console until I was below the limit solved the issue.
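For example, in a heroku run rails console session, something like the following; the Event model and 30-day cutoff are hypothetical, so only delete data your app can actually afford to lose:

    # Delete expendable old rows until the count is back under the plan's limit.
    Event.where('created_at < ?', 30.days.ago).delete_all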
Another potential way you can run into this is if you're using Unicorn. The number of connections used is the number of dynos times the number of Unicorn workers per dyno, so e.g. 4 dynos running 5 workers each already consume 20 connections. Heroku explains it all here, along with a way to configure it in config/unicorn.rb (see the sketch below).
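A sketch of that config, following the pattern Heroku documents for Unicorn; the worker count comes from WEB_CONCURRENCY and is deployment-specific:

    # config/unicorn.rb
    worker_processes Integer(ENV['WEB_CONCURRENCY'] || 3)  # workers per dyno
    timeout 15
    preload_app true

    before_fork do |_server, _worker|
      # Disconnect the master's DB connection so each forked worker
      # opens its own instead of sharing a socket with the master.
      defined?(ActiveRecord::Base) &&
        ActiveRecord::Base.connection.disconnect!
    end

    after_fork do |_server, _worker|
      # Each worker holds one connection, so the total against the
      # database is dynos * WEB_CONCURRENCY.
      defined?(ActiveRecord::Base) &&
        ActiveRecord::Base.establish_connection
    end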
Also, seeing the number of connections being used can be useful. Just run heroku pg:info.
Apparently I was on a dev-level DB. I upgraded to a Crane-level production DB, and everything should be fine now.