How to properly organize work with database connections? - ruby-on-rails

I have a Rails application that I've been developing for quite some time. Until now I tested it locally and on a DEV server. On the DEV server, the PostgreSQL database runs alongside the deployed application, and there were never any connection problems - I suspect there is simply no connection limit, or it is too high to matter.
Today I started deploying to the PROD server. It is similar in capacity to the DEV one, but the database lives in DigitalOcean's managed Database service (the servers themselves are also hosted on DigitalOcean).
The problem is that the DO Database has a limit of 20 connections, and as far as I understand, when this limit is exceeded the Rails application raises an error:
ActiveRecord::ConnectionNotEstablished (FATAL: remaining connection slots are reserved for non-replication superuser connections)
The most obvious option is to reduce the number of queries per page load, but that still would not solve the problem if, for example, the number of users grows.
Can you please tell me which way to look? Are there any solutions other than upgrading the DO Database plan?

You might want to try PgBouncer (I've never used it myself, so I can't really say how it will affect the app).
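If you go that route, the usual approach is to point database.yml at the pooler instead of Postgres itself. A minimal sketch, assuming PgBouncer listens on its conventional port 6432 (the PGBOUNCER_HOST variable is illustrative); with transaction pooling, prepared statements must also be disabled, and processes × pool should stay under the 20-connection limit:

production:
  adapter: postgresql
  host: <%= ENV["PGBOUNCER_HOST"] %>  # the pooler's address (assumed name), not the DB's
  port: 6432                          # PgBouncer's default listen port
  pool: 5                             # per-process pool; e.g. 3 app processes x 5 = 15 connections
  prepared_statements: false          # required for PgBouncer's transaction pooling mode

As far as I know, DigitalOcean's managed Postgres also offers a built-in, PgBouncer-based connection pool you can enable per database, which avoids running the pooler yourself.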

Related

Rails Action Cable with Postgresql adapter on Heroku?

I'm making a new web app (Rails 6) that I would like to use websockets with. This is the first time I've attempted to use ActionCable with Rails. Ideally to start I would toss the app up on Heroku, since it is generally a fast and convenient way for me to get started with an app project.
If my app is successful I wouldn't be surprised if it had 200 concurrent users. This doesn't seem like a lot to me. I assume many of them would have other tabs open, so my estimate would be roughly 500 websocket connections for 200 users. If my site were more successful, I could see this going to 500 active users, and thus more than 1000 websocket connections.
After preliminary research, it is recommended to use Redis on Heroku for ActionCable. I was dismayed to see that a Redis plan (Redistogo) with 500 websocket connections would cost $75/month and 1000 connections $200/month. More than dismayed, really. Shocked. Am I missing something? Why is this so expensive? It seems like these plans are also linked to large amounts of storage space, something that (my understanding) ActionCable doesn't even need.
I've seen that it's also possible (in theory) to configure ActionCable to use Postgresql as its adapter, though looking online I didn't see any examples of anyone doing this successfully on Heroku. So my questions are these:
Is it possible to use ActionCable on Heroku without paying through the nose? I assume this would mean Postgresql adapter...
How many connections could such a setup handle, what are the limitations, bottlenecks, etc?
I wasn't able to find much information about ActionCable and Postgresql, except that the connection pool size has to match. I assume the pool size is not the same as the websocket connection count?
Thanks for any info...
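For reference, the Postgres adapter is selected in config/cable.yml and, as I understand it, rides on the app's existing ActiveRecord setup using Postgres LISTEN/NOTIFY for pub/sub, so no separate service is needed. A minimal sketch (I can't vouch for how far it scales on Heroku):

# config/cable.yml
production:
  adapter: postgresql
development:
  adapter: async

Note that the pool in database.yml is a per-process pool of database connections, not a websocket count: each ActionCable server process holds one connection for its LISTEN loop, while websocket clients do not map one-to-one to database connections.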

Postgres Heroku Connection limit details - Rails application - ROR

I am exploring free and fair use of Heroku for a Rails + Postgres application.
Regarding the pricing on Heroku Postgres, I can see a connection limit (set to 120 for the cheapest offer).
What does this connection limit mean?
I have seen that it may be related to the max_connections parameter, but even so, I do not really understand how it works.
If my Rails server receives an order and queries the database for an update or a simple select, always as the same Postgres database user: does that count as one connection?
I do not think so but I would like to get more details on this limit.
Thank you very much.
max_connections defines the maximum number of client connections Postgres will accept at once. It is not per-user: every connection opened by the same Postgres user still counts against the limit, and each open connection is served by its own backend process on the server. Allowing more connections can increase throughput in some cases, but it can also degrade performance:
Disk contention: if the working set does not fit in RAM, more backends accessing tables and indexes at the same time cause heavier seeking all over the disk.
RAM: each backend process consumes additional memory.
Synchronization: the server must coordinate more concurrent backends (locks, shared buffers), which can reduce overall throughput.
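You can inspect both the limit and current usage from psql, for example:

SHOW max_connections;                   -- the server-wide limit
SELECT count(*) FROM pg_stat_activity;  -- connections currently open, across all users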
As your user base increases, you may hit "too many clients" errors because of the limited number of connections. The solution is twofold:
Pool connections: if Postgres cannot cope on its own, put a connection pooler such as PgBouncer or Pgpool-II in front of it, so that many clients share a smaller set of server connections.
Application optimization: open as few connections as possible and close each one as soon as its work is done (see the sketch after this list).
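On the application side, ActiveRecord already pools connections per process; the usual leak is threads you spawn yourself that never return their connection. A minimal Ruby sketch, assuming a hypothetical Order model:

Thread.new do
  # with_connection checks a connection out of ActiveRecord's pool and
  # guarantees it is checked back in when the block finishes.
  ActiveRecord::Base.connection_pool.with_connection do
    Order.where(status: "pending").find_each do |order|  # hypothetical model/query
      order.update!(status: "processed")
    end
  end
end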
You can read further at the following links:
https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
http://www.postgresql.org/docs/current/static/runtime-config-connection.html#GUC-MAX-CONNECTIONS
Ruby-specific guidance is here: https://devcenter.heroku.com/articles/concurrency-and-database-connections#maximum-database-connections

ActiveRecord::QueryCache#call taking over 70% of execution time

NewRelic is showing me that over 80% of execution time in the app server is taking place in "Middleware ActiveRecord::QueryCache#call"
Here is a gist of the relevant code tested (although I see similar results on other API endpoints).
I'm running the app server on AWS Elastic Beanstalk on a t2.medium instance and a t2.small Postgres RDS DB with max_connections set to 100. I'm testing this via loader.io, doing a test of 100 users with the maintain client load setting (this means about 6000 requests a minute).
Does anyone have an idea why the QueryCache is taking so much time?
Unfortunately, this issue with QueryCache is quite common and seems to have multiple causes, but the most common is that the connection between your EC2 app server and the DB was temporarily severed, and QueryCache doesn't handle this particularly well.
Remedies include increasing your default connection pool size substantially (e.g. an order of magnitude higher), disabling QueryCache entirely, or increasing read_timeout in database.yml to 15 seconds or more depending on your environment.
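As a sketch of the first and third remedies in database.yml (values are illustrative; note that read_timeout is a mysql2 adapter option, while the postgresql adapter's closest knobs are connect_timeout and the pool's checkout_timeout):

production:
  adapter: postgresql
  pool: 50              # default is 5; an order-of-magnitude increase as suggested above
  checkout_timeout: 15  # seconds to wait for a free connection from the pool
  connect_timeout: 15   # seconds to wait when establishing a new connection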
If the read_timeout setting resolves the problem, you may want to investigate why there are so many disconnects between your app server and db.
Another path, which might not be an option for you, would be to run the app server on the same machine as the DB. That doesn't suit everyone's architecture, but it can certainly be an effective test to see if eliminating the network variable helps. Good luck.

Using Heroku free tier for rails production low traffic website

I have a Rails application which I expect to be low traffic; it is working fine on the Heroku free tier as of now.
Can I use the Heroku free tier with my custom domain for my production environment? As of now I see that 750 dyno hours will be enough for my website to run continuously, but I want to know if there are any drawbacks to using the free tier for a production website.
For uploads I am already using Amazon S3.
Thanks,
Yes, you can use a custom domain for your production environment on Heroku.
Heroku limits database rows on the free tier: if you cross the limit, it revokes your INSERT privilege and prompts you to buy at least the Basic ($9/month) database plan (10 million rows).
My service was shut down for the same reason a while back; I got the following email from them:
The database HEROKU_[hidden] on Heroku app [hidden] has
exceeded its allocated storage capacity. Immediate action is required.
The database contains 129,970 rows, exceeding the Dev plan limit of
10,000. INSERT privileges to the database will be automatically
revoked in 7 days. This will cause service failures in most
applications dependent on this database.
To avoid a disruption to your service, migrate the database to a Basic
($9/month) or Production plan:
The Heroku free service is totally awesome for light production. Upgrade performance by buying adequate dynos and database capacity; you may need them once incoming requests queue up faster than they are served, leading to occasional timeouts. John sufficiently answers when you may need more dynos here - https://stackoverflow.com/a/8428998/1376448
You will totally love it!

Heroku “psql: FATAL: remaining connection slots are reserved for non-replication superuser connections”

I got the above error message running Heroku Postgres Basic (as per this question) and have been trying to diagnose the problem.
One of the suggestions is to use connection pooling but it seems Rails has this built in. Another suggestion is that the app is configured improperly and opens too many connections.
My app manages all its connections through Active Record, and I had one direct connection to the database from Navicat (or at least I thought I had).
How would I debug this?
RESOLUTION
Turns out it was a Heroku issue. From Heroku support:
We've detected an issue on the server running your Basic database.
While we pinpoint this and address it, we would recommend you
provision a new Basic database and migrate over with PGBackups as
detailed here:
https://devcenter.heroku.com/articles/upgrade-heroku-postgres-with-pgbackups
That should put your database on a new server. I apologize for this
disruption – we're working to fix this issue and prevent it from
occurring in the future.
This has happened a few times on my app -- somehow there is a connection leak, then all of a sudden the database is getting 10 times as many connections as it should. If you are being swamped by an error like this rather than by traffic, try running this:
heroku pg:killall
That will terminate all connections to the database, so be careful if cutting off in-flight queries would be dangerous in your situation. I just have a Rails app, and if it goes down, losing a couple of queries is not a big deal, because the browser requests will have long since timed out anyway.
You might be able to find out why you have so many connections by inspecting the pg_stat_activity view:
SELECT * FROM pg_stat_activity
Most likely, you have some stray loop that opens new connections without closing them.
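Grouping the rows makes the culprit easier to spot; a variant of the query above (my own addition, using standard pg_stat_activity columns):

SELECT usename, application_name, state, count(*)
FROM pg_stat_activity
GROUP BY usename, application_name, state
ORDER BY count(*) DESC;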
To save you the support call, here's the response I got from Heroku Support for a similar issue:
Hello,
One of the limitations of the hobby tier databases is unannounced maintenance. Many hobby databases run on a single shared server, and we will occasionally need to restart that server for hardware maintenance purposes, or migrate databases to another server for load balancing. When that happens, you'll see an error in your logs or have problems connecting. If the server is restarting, it might take 15 minutes or more for the database to come back online.
Most apps that maintain a connection pool (like ActiveRecord in Rails) can just open a new connection to the database. However, in some cases an app won't be able to reconnect. If that happens, you can heroku restart your app to bring it back online.
This is one of the reasons we recommend against running hobby databases for critical production applications. Standard and Premium databases include notifications for downtime events, and are much more performant and stable in general. You can use pg:copy to migrate to a standard or premium plan.
If this continues, you can try provisioning a new database (on a different server) with heroku addons:add, then use pg:copy to move the data. Keep in mind that hobby tier rules apply to the $9 basic plan as well as the free database.
Thanks,
Bradley
