I'm deploying a Rails 4 app on Heroku. Looking over the available database plans, I don't understand what the 'connection limit' means. The hobby tier plans have a connection limit of 20; the next tier has a limit of 60. I'd like to know when a database connection is actually established, so that I can work out which plan is best for me. Is there a connection for every query? If so, it would mean that only 20 users could use the app at a time. I guess some of these are cached, but either way, I'm not clear on this. Thanks for your help in advance! :)
When the Rails process starts up, it grabs a database connection and holds on to it until the process stops.
For most MRI Ruby apps you need one connection per process. On Heroku you will most likely run Unicorn with, say, 3 workers per dyno, and each worker needs its own database connection.
When you open a console with heroku run console, that session uses an additional database connection until you log out.
https://devcenter.heroku.com/articles/rails-unicorn
If you are running a threaded Ruby such as JRuby, then each thread will need its own database connection.
Check out "Concurrency and Database Connections in Ruby with ActiveRecord" in the Heroku docs; it has a very detailed explanation:
https://devcenter.heroku.com/articles/concurrency-and-database-connections
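For illustration, here is a minimal config/unicorn.rb along the lines of those articles. The worker count and the reconnect hooks follow the standard Heroku pattern, but treat this as a sketch to adapt, not a drop-in config:

# config/unicorn.rb -- each worker process holds one database connection
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 15
preload_app true

before_fork do |server, worker|
  # The master is about to fork; drop its connection so workers don't inherit it
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # Each worker opens its own connection after forking
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end

With 3 workers per dyno, that's 3 connections per web dyno, plus one more for each heroku run console session.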
I'm working on adding pgBouncer to my existing Rails Heroku application. My Rails app uses Sidekiq and has one Heroku Postgres DB and one TimescaleDB. I'm a bit confused about all the layers of connection pooling between these different services, and I'd love some help with the following questions:
Rails already provides connection pooling out of the box, right? If so, what's the benefit of adding pgBouncer? Will they conflict?
Sidekiq already provides connection pooling out of the box, right? If so, what's the benefit of adding pgBouncer? Will they conflict?
I'm trying to add pgBouncer via the Heroku Buildpack for pgBouncer. It seems that pgBouncer is only set up to run on the web dyno, not alongside Sidekiq. Why is that? Shouldn't the worker and web dynos both be using pgBouncer?
The docs say that I can use pgBouncer on multiple databases. However, pgBouncer fails when trying to add my TimescaleDB database URL to pgBouncer. I've checked the script and everything looks accurate but I get the following error: ActiveRecord::ConnectionNotEstablished: connection to server at "127.0.0.1", port 6000 failed: ERROR: pgbouncer cannot connect to server. What's the best way to console into the pgBouncer instance and see in more detail what's breaking?
Thanks so much for your help.
They serve different purposes. ActiveRecord's connection pool manages and limits connections at the thread level within a single process, while pgBouncer lets many processes, and even several applications, share pooled connections to the same DB or to several DBs.
Sidekiq likewise manages its own internal pool per process, but pgBouncer sits in front of the database and pools across processes in parallel, supporting three modes: session, transaction, and statement.
This doc can probably help you understand that part.
For the last question, you can try pgBouncer's admin console to check what's going on, though I'm not sure it will support the kind of troubleshooting you expect.
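For example, pgBouncer exposes its admin console as a virtual database named pgbouncer. Assuming the buildpack's pgBouncer is listening on 127.0.0.1:6000 as in your error message, and that you can get a shell where it is actually running (user and auth details depend on the buildpack's generated config), a sketch:

psql -h 127.0.0.1 -p 6000 -U pgbouncer pgbouncer
-- then, at the pgbouncer=# prompt:
SHOW POOLS;    -- per-database pool usage
SHOW SERVERS;  -- connections pgBouncer holds to the real database servers
SHOW CLIENTS;  -- clients currently connected to pgBouncer

SHOW SERVERS in particular should make it obvious whether pgBouncer ever manages to reach your TimescaleDB.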
I have 2 servers (A and B).
I am running the Rails app on A and the database on B.
On server B, I have pgbouncer and PostgreSQL running.
When I run 200 threads on A, I get that issue even though I increased pgbouncer's max client connections to 500. The pgbouncer pool_mode is session.
The PostgreSQL pool is 100.
I also increased the db pool to 500 on server A.
How can I avoid this and run 200 threads without any problems?
Later, I updated the code: I dropped pgbouncer and use PostgreSQL directly.
I created 2 new threads that do the db operations; the other threads no longer touch the db.
While the threads ran, I monitored active connections; the count stayed at 3.
But when the threads finished, I still got this issue.
I printed the connection pool status using ActiveRecord::Base.connection_pool.stat:
{:size=>500, :connections=>4, :busy=>3, :dead=>0, :idle=>1, :waiting=>0, :checkout_timeout=>5}
rake aborted!
ActiveRecord::StatementInvalid: PG::UnableToSend: no connection to the server
Is there anyone who can help me with this issue?
I merged the db instance and the app instance.
That works.
I am still not sure whether it was a db version issue or a PostgreSQL remote-access issue.
In my opinion, it was a remote access issue.
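One general pattern worth checking, whatever the root cause turns out to be: threads that touch the database should explicitly check a connection out of the pool and return it when done, instead of holding an implicit one for the thread's whole lifetime. A minimal sketch:

# Each thread checks a connection out of the pool and returns it on exit.
threads = 2.times.map do
  Thread.new do
    ActiveRecord::Base.connection_pool.with_connection do |conn|
      conn.execute("SELECT 1") # placeholder for the real db work
    end
  end
end
threads.each(&:join)
# ActiveRecord::Base.connection_pool.stat should now show those connections as idle.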
We are running a Ruby on Rails application on Rails 3.2.12 (Ruby 1.9.3) with the current TinyTDS gem, 0.6.2.
We use MS SQL 2012 or 2014 and are seeing, more often than usual, the following error message:
TinyTds::Error: Adaptive Server connection timed out: EXEC sp_executesql [...]
Database AUTOCLOSE is off.
TCP socket timeouts are at the Windows system defaults.
The application server is on machine #1 (Windows Server); the SQL server is on machine #2 (Windows Server).
When I check the connections (netstat), I see around 250 open connections for only 20-30 users.
I ran perfmon.exe to check idle time on the SQL server's data and log disks.
database.yml has pool: 32 and reconnect: true.
To me it looks like TinyTDS lost the connection and an exception then prevents it from reconnecting.
The question is: how can I dig into this to find out what the actual problem is?
UPDATE
My mistake, the original error message belongs to TinyTDS 0.5.x. Since updating to the latest version I get the following error in addition, or instead:
ActiveRecord::LostConnection (TinyTds::Error: DBPROCESS is dead or not enabled: BEGIN TRANSACTION):
First, that pool size seems excessive. Are you using a ton of threads? If not, then only one connection will be used per app request/response; that value just seems way too high.
Second, what SQL timed out? Have you found that certain SQL is slower than the rest? If so, you have two options. The first is to tune the DB using standard practices, like adding indexes. The second is to increase the "timeout" option in your database.yml. The default timeout is 5000, i.e. 5 seconds. Have you tried setting it to 10000? I guess what I am asking is: how are you sure this is a "connect" timeout vs. a "wait" timeout?
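For reference, a database.yml sketch with those knobs (values are illustrative only, and the sqlserver adapter name assumes activerecord-sqlserver-adapter):

production:
  adapter: sqlserver
  host: db-host          # placeholder
  database: myapp        # placeholder
  pool: 8                # sized to your actual concurrency, not 32
  timeout: 10000         # per the suggestion above; the default is 5000
  reconnect: true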
We have recently been having issues with Postgres running out of connection slots, and after a lot of debugging and shrugging of shoulders we have pretty much tracked it down to the fact that we had misunderstood connection pools.
We use Rails, Postgres, Unicorn, and Delayed Job.
Are we correct to assume that the connection pool is process-specific, i.e. each process has its own pool of 10 connections (our pool limit) to the db?
And if there are no threads anywhere in the app, are we correct to assume that for the most part each process will use only 1 connection, since nothing ever needs a second one?
Based on these assumptions we tracked it down to the number of processes:
Web server - 4x unicorn = 4 connections
Delayed Job - 3 servers x 30 processes = 90 connections
That's 94 connections, and a couple of connections for rails console sessions plus the odd rails runner or rake task would explain why we were hitting the limit so often, right? It has been particularly frequent this week, after I converted a ruby script into a rails runner script.
We are planning to raise the max from 100 to 200 or 250 to relieve this, but is there a trivial way to implement inter-process connection pooling in Rails?
You probably want to take a look at pgbouncer, a purpose-built PostgreSQL connection pooler. There are some notes on the wiki too, and it's packaged for most Linux distros.
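A minimal pgbouncer.ini sketch of the idea (names and numbers are placeholders; transaction pooling is what most Rails setups use, but check its restrictions, e.g. around prepared statements, first):

[databases]
myapp_production = host=127.0.0.1 port=5432 dbname=myapp_production

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction
max_client_conn = 200    ; how many app connections pgbouncer will accept
default_pool_size = 20   ; how many real Postgres connections it opens per database

Your Rails processes would then all point at port 6432 and share those 20 server connections.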
I got the above error message running Heroku Postgres Basic (as per this question) and have been trying to diagnose the problem.
One of the suggestions is to use connection pooling but it seems Rails has this built in. Another suggestion is that the app is configured improperly and opens too many connections.
My app manages all its connections through Active Record, and I had one direct connection to the database from Navicat (or at least I thought I had).
How would I debug this?
RESOLUTION
Turns out it was an Heroku issue. From Heroku support:
We've detected an issue on the server running your Basic database. While we pinpoint and address it, we would recommend you provision a new Basic database and migrate over with PGBackups as detailed here: https://devcenter.heroku.com/articles/upgrade-heroku-postgres-with-pgbackups. That should put your database on a new server. I apologize for this disruption – we're working to fix this issue and prevent it from occurring in the future.
This has happened a few times on my app -- somehow there is a connection leak, and then all of a sudden the database is getting 10 times as many connections as it should. If you are being swamped by an error like this, rather than by traffic, try running this:
heroku pg:killall
That will terminate all connections to the database. Be careful if cutting off in-flight queries would be dangerous in your situation. I just have a Rails app, and if it goes down, losing a couple of queries is not a big deal, because the browser requests will have looooooong since timed out anyway.
You might be able to find out why you have so many connections by inspecting the pg_stat_activity view:
SELECT * FROM pg_stat_activity
Most likely, you have some stray loop that opens new connections without closing them.
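For example, grouping by application and state makes the culprit easier to spot (column names as of PostgreSQL 9.2+):

SELECT application_name, state, count(*)
FROM pg_stat_activity
GROUP BY application_name, state
ORDER BY count(*) DESC;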
To save you the support call, here's the response I got from Heroku Support for a similar issue:
Hello,
One of the limitations of the hobby tier databases is unannounced maintenance. Many hobby databases run on a single shared server, and we will occasionally need to restart that server for hardware maintenance purposes, or migrate databases to another server for load balancing. When that happens, you'll see an error in your logs or have problems connecting. If the server is restarting, it might take 15 minutes or more for the database to come back online.
Most apps that maintain a connection pool (like ActiveRecord in Rails) can just open a new connection to the database. However, in some cases an app won't be able to reconnect. If that happens, you can heroku restart your app to bring it back online.
This is one of the reasons we recommend against running hobby databases for critical production applications. Standard and Premium databases include notifications for downtime events, and are much more performant and stable in general. You can use pg:copy to migrate to a standard or premium plan.
If this continues, you can try provisioning a new database (on a different server) with heroku addons:add, then use pg:copy to move the data. Keep in mind that hobby tier rules apply to the $9 basic plan as well as the free database.
Thanks,
Bradley