PG::UnableToSend: no connection to the server in Rails 5

I have 2 servers (A and B).
I am running the Rails app on A and the database on B.
On server B, I have PgBouncer and PostgreSQL running.
When I run 200 threads on A, I get this error even though I increased PgBouncer's max client connections to 500, and PgBouncer's pool_mode is session.
The PostgreSQL pool is 100.
I also increased the db pool to 500 on server A.
How can I avoid this issue and run 200 threads without errors?
Later, I updated the code: I dropped PgBouncer and use PostgreSQL directly.
I created 2 new threads that do the DB operations; the other threads no longer touch the DB.
While the threads ran, I monitored active connections; it stayed at 3 active.
But when the threads finished, I got this error.
I printed the connection pool status using ActiveRecord::Base.connection_pool.stat:
{:size=>500, :connections=>4, :busy=>3, :dead=>0, :idle=>1, :waiting=>0, :checkout_timeout=>5}
rake aborted!
ActiveRecord::StatementInvalid: PG::UnableToSend: no connection to the server
Is there anyone who can help me with this issue?

I merged the db instance and the app instance onto a single server.
That works.
I am still not sure whether it was a db version issue or a PostgreSQL remote-access issue.
In my opinion, it's a remote-access issue.
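Whatever the root cause turns out to be, one thing worth ruling out in a setup like this is a connection leak from the worker threads. A minimal sketch of the pattern (illustrative only, not the poster's actual code): each thread checks a connection out of ActiveRecord's pool and returns it when the block finishes, so nothing holds a stale connection after the threads end.

    # Each thread that touches the database checks a connection out of the pool
    # and releases it when the block returns, instead of holding one implicitly.
    threads = 2.times.map do |i|
      Thread.new do
        ActiveRecord::Base.connection_pool.with_connection do
          # ... run this thread's DB operations here ...
        end
      end
    end
    threads.each(&:join)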

Related

pgBouncer + Sidekiq + Rails + Heroku + multiple databases

I'm working on adding pgBouncer to my existing Rails Heroku application. My Rails app uses Sidekiq, has one Heroku postgres DB, and one TimescaleDB. I'm a bit confused about all the layers of connection pooling between these different services. I have the following questions I'd love some help with:
Rails already provides connection pooling out of the box, right? If so, what's the benefit of adding pgBouncer? Will they conflict?
Sidekiq already provides connection pooling out of the box, right? If so, what's the benefit of adding pgBouncer? Will they conflict?
I'm trying to add pgBouncer via the Heroku Buildpack for pgBouncer. It seems that I can only add pgBouncer to the web dyno but not to Sidekiq. Why is that? Shouldn't the worker and web dynos both be using pgBouncer?
The docs say that I can use pgBouncer on multiple databases. However, pgBouncer fails when trying to add my TimescaleDB database URL to pgBouncer. I've checked the script and everything looks accurate but I get the following error: ActiveRecord::ConnectionNotEstablished: connection to server at "127.0.0.1", port 6000 failed: ERROR: pgbouncer cannot connect to server. What's the best way to console into the pgBouncer instance and see in more detail what's breaking?
Thanks so much for your help.
They serve different purposes. The ActiveRecord connection pool manages connections at the thread level within a single process, while pgBouncer lets you pool connections from several applications (or several processes) against the same DB or several DBs.
Sidekiq manages its own internal pool, but pgBouncer works across multiple processes in parallel and supports three pooling modes: session, transaction, and statement.
This doc can probably help you understand that part.
I think you can try the admin console to check what's going on, but I'm not sure it will work for troubleshooting as much as you expect.
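For the Rails-side layer (separate from anything pgBouncer does), ActiveRecord's own pool can also be inspected directly; a rough sketch, runnable from a Rails console or inside a Sidekiq job:

    # Inspect ActiveRecord's per-process pool; this is independent of pgBouncer's pools.
    pool = ActiveRecord::Base.connection_pool
    p pool.stat               # e.g. {:size=>10, :connections=>3, :busy=>1, :idle=>2, :waiting=>0, ...}
    p pool.size               # the `pool:` value configured in database.yml
    p pool.connections.count  # physical connections this process has actually opened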

How do Puma threads interact with Postgres connection pools?

I have a Rails app running on OSE, 5 pods, 1 container per pod. The Rails app uses the Puma web server with default thread settings (min: 0, max: 16). In my database.yml I've defined a connection pool: of 10.
I'd like to know what my maximum PG connection footprint would be?
My current theory is:
5 pods x 1 container x 16 threads x 10 connection pool = 800 possible PostgreSQL connections.
However, I'm wondering whether all 16 Puma threads share the same PG connection pool, in which case the formula would be:
5 pods x 1 container x 10 connection pool = 50 possible PostgreSQL connections.
(Of course, if this math is correct, running Puma with 16 threads would be a problem, since at 1 connection per thread the app could request 6 more connections than the pool offers.)
Can anyone point me to definitive documentation on the subject? Thanks!
If the connection pool is per-process and doles out database connections across threads, with threads waiting if all connections are busy, then the second formula is correct. If not, the first. Either way it can actually be worse: if you are using rolling deployments, there could be one additional pod active during a restart.
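As a rough sketch of how the two settings are usually kept in line (illustrative values; RAILS_MAX_THREADS is just a conventional env var name, not necessarily what this app uses):

    # config/puma.rb -- keep the AR pool at least as large as Puma's max threads,
    # so a thread never has to wait on a checkout. The pool is per process, so the
    # total PostgreSQL footprint is roughly pods x processes_per_pod x pool_size.
    max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 16))
    threads 0, max_threads
    # and in config/database.yml:
    #   pool: <%= ENV.fetch("RAILS_MAX_THREADS", 16) %>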
Have a look at using pgbouncer (https://pgbouncer.github.io/) in front of the PostgreSQL database instance. My understanding is that it provides additional flexibility in managing a pool of database connections without needing to do anything in your application; instead it is dealt with in pgbouncer.

ActiveRecord::ConnectionTimeoutError: could not obtain a database connection when using sidekiq

We are unable to scale the frequency of our crons as much as we would have liked, and the thing holding us back is the number of database connection issues.
We have a primary server which has the master db, and 3 slaves. We run sidekiq on all our machines.
Our postgresql.conf has: max_connections = 200
The pool option is also set to pool: 200 in database.yml on all our Rails app servers.
We are running 2 sidekiq processes on each of our servers
On the green machine, if we change our concurrency from 6 to 7, we start getting a stream of errors: Sidekiq - could not obtain a database connection within 5.042 seconds. Where am I messing up? :-(
Could it be something else inside our app? The numbers just don't add up.
Also does the number of active record connections have any association with pg_stat_activity?
Thanks in advance
Just figured it out
We're using replication, and in shards.yml, we had not set the pool size :-(
It was picking up 5 by default.
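That matches the arithmetic: the pool is per process, so with the default of 5 and a concurrency of 7 some threads can never get a connection. A back-of-the-envelope check (illustrative):

    concurrency = 7   # Sidekiq threads per process, each may need its own AR connection
    pool_size   = 5   # default picked up because shards.yml had no pool: entry
    # Threads that can be left waiting for a connection at any moment:
    puts concurrency - pool_size   # => 2 threads stuck waiting, hitting the 5 s checkout timeout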

TinyTds Error: Adaptive Server connection timed out

We are running a Ruby on Rails application on Rails 3.2.12 (Ruby 1.9.3) with the current TinyTDS gem, 0.6.2.
We use MS SQL 2012 or 2014 and, more often than usual, are facing the following error message:
TinyTds::Error: Adaptive Server connection timed out: EXEC sp_executesql [...]
Database AUTOCLOSE is off.
TCP socket timeouts are at the Windows system defaults.
Application server is on machine #1 (windows server), SQL server is on machine #2 (windows server).
When I check the connections (netstat) I have like 250 connections open for around 20-30 users.
I ran perfmon.exe to check idle time on the SQL server's data and log disks.
database.yml has pool: 32 and reconnect: true.
To me it looks like TinyTDS loses the connection and an exception then prevents it from reconnecting.
The question is, how can I debug into the problem to find out what the problem is?
UPDATE
My mistake, the original error message belongs to TinyTDS 0.5.x. Since I updated to the latest version, I get the following error in addition or instead:
ActiveRecord::LostConnection (TinyTds::Error: DBPROCESS is dead or not enabled: BEGIN TRANSACTION):
First, that pool size seems excessive. Are you using a ton of threads? If not, then only one connection will be used per app request/response. That value just seems way too high.
Second, what SQL timed out? Have you found that certain SQL is slower than others? If so, then you have two options. The first would be to tune the DB using standard practices like indexes, etc. The second would be to increase the "timeout" option in your database.yml. The default timeout is 5000, which is 5 seconds. Have you tried setting it to 10000? I guess what I am asking is: how are you sure this is a "connect" timeout vs a "wait" timeout?
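One way to tell a connect timeout from a query ("wait") timeout is to exercise TinyTds directly with explicit timeouts; a rough sketch with placeholder host and credentials (note that TinyTds::Client takes seconds here, whereas the adapter's database.yml timeout is in milliseconds):

    require "tiny_tds"

    # Placeholder host and credentials -- purely illustrative.
    client = TinyTds::Client.new(
      host: "sqlserver.example.local",
      username: "app", password: "secret",
      login_timeout: 5,   # seconds allowed to establish the connection
      timeout: 10         # seconds allowed per query before the driver raises
    )
    result = client.execute("SELECT 1 AS ok")
    result.each { |row| p row }   # a slow query here trips :timeout, not the connect
    client.close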

Rails 4: When is a database connection established?

I'm deploying a Rails 4 app on Heroku. As I'm looking over the database plans available, I don't understand what the 'connection limit' means. The 'hobby tier' plans have a connection limit of 20. The next tier has a limit of 60. Now I'm curious when a database connection is established, so that I can calculate which plan is best for me. Is there a connection for every query? Because if so, it would mean that only 20 users can use the app at a time. I guess some of these are cached, but anyway, I'm not clear on this. Thanks for your help in advance! :)
When the Rails process starts up, it will grab a database connection and hold on to that connection until the process stops.
For most MRI Ruby apps you need 1 connection per process. On Heroku you will most likely run Unicorn with 3 workers per dyno, and each worker will need 1 database connection.
When you connect to the console with heroku run console, that will use a new database connection until you log out of the console.
https://devcenter.heroku.com/articles/rails-unicorn
If you are running a threaded Ruby like JRuby, then each thread will need its own database connection.
Check out "Concurrency and Database Connections in Ruby with ActiveRecord" in the Heroku docs; it has a very detailed explanation:
https://devcenter.heroku.com/articles/concurrency-and-database-connections
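For reference, the Unicorn setup that article walks through looks roughly like this (a sketch; the worker count is just an assumed example):

    # config/unicorn.rb -- each forked worker re-establishes its own
    # ActiveRecord connection after the fork, so workers don't share one.
    worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
    timeout 15
    preload_app true

    before_fork do |server, worker|
      ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord)
    end

    after_fork do |server, worker|
      ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
    end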
