Heroku “psql: FATAL: remaining connection slots are reserved for non-replication superuser connections”

I got the above error message running Heroku Postgres Basic (as per this question) and have been trying to diagnose the problem.
One of the suggestions is to use connection pooling but it seems Rails has this built in. Another suggestion is that the app is configured improperly and opens too many connections.
My app manages all its connections through Active Record, and I had one direct connection to the database from Navicat (or at least I thought I had).
How would I debug this?
RESOLUTION
Turns out it was a Heroku issue. From Heroku support:
We've detected an issue on the server running your Basic database. While we pinpoint this and address it, we would recommend you provision a new Basic database and migrate over with PGBackups as detailed here: https://devcenter.heroku.com/articles/upgrade-heroku-postgres-with-pgbackups. That should put your database on a new server. I apologize for this disruption – we're working to fix this issue and prevent it from occurring in the future.

This has happened a few times on my app -- somehow there is a connection leak, then all of a sudden the database is getting 10 times as many connections as it should. If you are being swamped by an error like this rather than by traffic, try running this:
heroku pg:killall
That will terminate all connections to the database. Be careful if abruptly cutting off queries could be dangerous in your situation. I just have a Rails app, and if it goes down, losing a couple of queries is not a big deal, because the browser requests will have looooooong since timed out anyway.

You might be able to find out why you have so many connections by inspecting the pg_stat_activity view:
SELECT * FROM pg_stat_activity;
Most likely, you have some stray loop that opens new connections without closing them.
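If you'd rather sift that output in Ruby, here is a minimal sketch that groups connections by state and application name to spot a leak. The rows below are made-up stand-ins for what pg_stat_activity returns; in a real app you would fetch them with ActiveRecord::Base.connection.select_all instead.

```ruby
# Toy stand-in for rows from: SELECT state, application_name FROM pg_stat_activity;
# (hard-coded here so the sketch runs without a database)
rows = [
  { state: "idle",   application_name: "puma" },
  { state: "idle",   application_name: "puma" },
  { state: "active", application_name: "psql" },
  { state: "idle",   application_name: "navicat" },
]

# A connection leak usually shows up as many "idle" rows from one application.
counts = rows.group_by { |r| [r[:state], r[:application_name]] }
             .transform_values(&:size)

counts.each { |(state, app), n| puts "#{state} #{app}: #{n}" }
# => idle puma: 2
#    active psql: 1
#    idle navicat: 1
```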

To save you the support call, here's the response I got from Heroku Support for a similar issue:
Hello,
One of the limitations of the hobby tier databases is unannounced maintenance. Many hobby databases run on a single shared server, and we will occasionally need to restart that server for hardware maintenance purposes, or migrate databases to another server for load balancing. When that happens, you'll see an error in your logs or have problems connecting. If the server is restarting, it might take 15 minutes or more for the database to come back online.
Most apps that maintain a connection pool (like ActiveRecord in Rails) can just open a new connection to the database. However, in some cases an app won't be able to reconnect. If that happens, you can heroku restart your app to bring it back online.
This is one of the reasons we recommend against running hobby databases for critical production applications. Standard and Premium databases include notifications for downtime events, and are much more performant and stable in general. You can use pg:copy to migrate to a standard or premium plan.
If this continues, you can try provisioning a new database (on a different server) with heroku addons:add, then use pg:copy to move the data. Keep in mind that hobby tier rules apply to the $9 basic plan as well as the free database.
Thanks,
Bradley

Related

How to properly organize work with database connections?

I have a Rails application that I've been developing for quite some time. All this time I tested it locally and on a DEV server. On the DEV server, next to the deployed application, there is also a PG database, and there were no problems with connections. I think there is simply no connection limit, or it is too high – not that important.
Today I started deploying to the PROD server. It is similar in power to the DEV one, but the DB is now a DigitalOcean managed Database. By the way, the servers themselves are also located in DigitalOcean.
The problem is that the DO Database has a limit of 20 connections, and as far as I understand, when this limit is exceeded the Rails application raises an error:
ActiveRecord::ConnectionNotEstablished (FATAL: remaining connection slots are reserved for non-replication superuser connections)
The most obvious option is to reduce the number of requests on page load. But that still would not solve the problem if, for example, the number of users increases.
Can you please tell me which way to look? Are there any solutions to the problem other than updating the DO Database power?
You might want to try PgBouncer (I've never tried it, though, so I can't really tell how it will impact the app).
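Before adding PgBouncer, it's worth doing the arithmetic on how many connections Rails itself can open: each server thread can hold one connection from the ActiveRecord pool, so the ceiling is roughly servers × processes × threads. A minimal sketch – all figures below are illustrative assumptions, not values from the question:

```ruby
# Rough ceiling on the Postgres connections a Rails deployment can open.
# Every number here is an assumption for illustration.
db_connection_limit  = 20  # DO managed-database limit from the question
servers              = 1   # machines running the Rails app
processes_per_server = 2   # e.g. Puma workers (assumed)
threads_per_process  = 5   # Puma threads == ActiveRecord pool size (assumed)

total = servers * processes_per_server * threads_per_process
puts "worst case: #{total} of #{db_connection_limit} connections"
# => worst case: 10 of 20 connections

# If total exceeds the limit, shrink the pool/threads,
# or put a pooler such as PgBouncer in front to multiplex connections.
```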

Rails Action Cable with Postgresql adapter on Heroku?

I'm making a new web app (Rails 6) that I would like to use websockets with. This is the first time I've attempted to use ActionCable with Rails. Ideally to start I would toss the app up on Heroku, since it is generally a fast and convenient way for me to get started with an app project.
If my app is successful I wouldn't be surprised if it had 200 concurrent users. This doesn't seem like a lot to me. I assume many of them would have open other tabs, so my estimate would be roughly 500 websocket connections for 200 users. If my site were more successful, I could see this going to 500 active users so more than 1000 websocket connections.
After preliminary research, it is recommended to use Redis on Heroku for ActionCable. I was dismayed to see that a Redis plan (Redistogo) with 500 websocket connections would cost $75/month and 1000 connections $200/month. More than dismayed, really. Shocked. Am I missing something? Why is this so expensive? It seems like these plans are also linked to large amounts of storage space, something that (my understanding) ActionCable doesn't even need.
I've seen that it's also possible (in theory) to configure ActionCable to use Postgresql as its adapter, though looking online I didn't see any examples of anyone doing this successfully on Heroku. So my questions are these:
Is it possible to use ActionCable on Heroku without paying through the nose? I assume this would mean Postgresql adapter...
How many connections could such a setup handle, what are the limitations, bottlenecks, etc?
I wasn't able to find much information about ActionCable and Postgresql, except that the number of pools has to match. I assume the pool number is not the same as the websocket connection count?
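For reference, pointing ActionCable at Postgres is a small change in cable.yml (Rails ships a postgresql adapter that uses LISTEN/NOTIFY), and the "pool" the question mentions is the ActiveRecord pool in database.yml, not the websocket count. A minimal sketch – exact keys may vary by Rails version:

```yaml
# config/cable.yml — use Postgres LISTEN/NOTIFY instead of Redis
production:
  adapter: postgresql

# config/database.yml — Cable-using processes draw from this same
# connection pool, so size it accordingly (the default shown is an assumption)
production:
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
```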
Thanks for any info...

Why is Heroku running so much slower than localhost?

I created a simple stock screener (filters out stocks given certain criteria) in Rails. On my localhost the stocks update instantly, but on Heroku it can take anywhere from 10-15 seconds before the stock list is updated.
My Heroku app is here: http://fruthscreener.herokuapp.com/
GitHub is here: https://github.com/anfruth/Fruth_Screener_Rails
The code involved in updating the queries can be found in the user_stocks model and in the stocks controller under def create.
Any ideas why this is happening and suggestions as to how to fix it?
Thank you.
Not slow for me
--
Heroku
The only thing which will slow Heroku down is if your db connection is "off-site"
We've had apps before which ran super slowly due to the case that the database provider was a different host, in a different country.
Heroku runs on AWS, meaning it will run super fast if you have all the dependencies in the same data-center. One of the drawbacks of using one of these powerful "cloud" hosting providers is they need to keep all requests local to help their system run quickly; hence if your DB is "off-site", it will slow it down profusely.
You must remember that Rails apps can't run unless they have a db connection; so if your connectivity is slow, your app's performance is going to be hit hard
--
Postgres
If your app is running slow on Heroku, the best thing to do is to make sure you're using Heroku's Postgres database. This is deployed on Heroku's AWS cloud, meaning it's on the same network as your app, allowing it to run as quickly as possible.
You'll need to change your app's database connection to the new production server like this:
#config/database.yml
production:
  .... #-> your Heroku db details here
This will allow you to run heroku run rake db:migrate after you push this new code to Heroku – which should define the db structure for you, allowing you to populate it as you wish.
It sounds like you would benefit from using New Relic or another performance management package for Heroku in order to find out what is causing you trouble exactly. The free tier of New Relic should be enough to get you started.
By the way, if your app is a Heroku free tier app (one single web dyno), then your dyno will go to sleep when not in use, and you may be encountering dyno spin-up costs, which are frequently about 5-15 seconds. Repeat the same query several times in several minutes and see if the slowness persists for every request, or only the first one.

What happens to existing server queries when pushing new Rails code to Heroku or putting it in maintenance mode?

This page http://devcenter.heroku.com/articles/maintenance-mode doesn't give any indication.
Server queries to Heroku can run for up to 30 seconds before they get terminated forcefully. So I am wondering what would happen if I push new code to the busy server, or set it to maintenance mode? Would the existing queries just stop? What if one is writing to the database? Would it leave my data in a corrupted state?
Is there a correct way to let Rails app to shut down gracefully (finishing existing queries but not accepting any new one), so that I can upgrade the server code?
Thanks.
When you put your app in maintenance mode you are not changing your codebase at all; it's a front-end configuration.
This means that if a query was already sent to the database, the database won't be stopped and the query will still be executed. Connections are not dropped when you switch to maintenance mode.

Rescue rails app from server failure

I have a Rails app which is now hosted on a dedicated server. Today something happened: the app doesn't respond, I have no SSH access, restarting doesn't help, and I am waiting for tech support to respond. But this is not the question – I just need this app to stay online even if the server fails. What is the easiest option? Can I create a second server on different hosting and serve from there in case of failure? If so, how do I sync the db and files? The application is not heavily loaded; I just need it to be available.
Difficult problem to solve. There's no one proven way to make this happen, but in general you need "No Single Point of Failure"
There's an entire science devoted to reliability in web applications -- no way can you get that answered in a SO question.
You can take frequent backups of your database and store them on S3 (and/or somewhere else). You can then:
- keep an image of your application server at your host
- spin it up when your server dies
- restore the database
- have the new application server take over responsibility (easiest way: assume the old server's IP address)
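The backup step above can be sketched as a small script that a cron job or scheduler runs nightly. This version only assembles the shell commands (so it stays runnable here without a database); the connection URL and bucket name are hypothetical placeholders:

```ruby
# Build (not run) the commands for a nightly Postgres dump pushed to S3.
# db_url and the s3 bucket are hypothetical placeholders.
db_url = "postgres://user:pass@db.example.com:5432/myapp_production"
stamp  = "2024-01-15" # in a real job: Time.now.strftime("%Y-%m-%d")

dump_cmd   = "pg_dump --format=custom #{db_url} --file=backup-#{stamp}.dump"
upload_cmd = "aws s3 cp backup-#{stamp}.dump s3://my-backups/#{stamp}.dump"

puts dump_cmd
puts upload_cmd
# A real job would run these with system(dump_cmd) etc. and check $?.success?
```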
