ActiveRecord connection errors with PGBouncer - ruby-on-rails

We recently introduced PGBouncer into our stack because we were exhausting the connections to our RDS instance. Upon doing so, we started to see all sorts of connection exceptions, which I've posted below. The only thing of note is that we use multiple databases via Rails' built-in multi-db support. Only the primary/writer instance is going through PGBouncer at the moment, and that is where we are seeing all of the exceptions; the reader connections seem to be fine.
I'm wondering whether we need to fine-tune some of the timeouts or connection sizes, or what else could be causing this.
Exceptions
ActiveRecord::StatementInvalid: PG::ConnectionBad: PQconsumeInput() server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
ActiveRecord::ConnectionNotEstablished: connection to server at "{db server IP}", port 5432 failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing
ActiveRecord::StatementInvalid: PG::ConnectionBad: PQsocket() can't get socket descriptor
PGBouncer Config
We're running quite a few smaller instances of PGBouncer, since it is single-process and, I believe, single-threaded as well. We plan to fine-tune this later.
[databases]
production = our_connection_string
[pgbouncer]
max_client_conn = 500
pool_mode = transaction
default_pool_size = 200
server_idle_timeout = 30
reserve_pool_size = 0
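One thing worth noting about pool_mode = transaction: session-level state does not survive between transactions, because each transaction can be handed a different server connection. The Rails config below already disables prepared statements, which is required in this mode; the sketch below shows the database.yml settings commonly recommended alongside PgBouncer (the option names are real Rails settings; treat the combination as an assumption to verify against your setup):

```yaml
# Hedged sketch: Rails settings commonly paired with PgBouncer's
# transaction pooling mode (values illustrative).
default: &default
  prepared_statements: false  # server-side prepared statements are per-connection
  advisory_locks: false       # pg_advisory_lock is session-scoped and breaks under pooling
```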
Rails DB Config
default: &default
adapter: postgis
postgis_extension: true
encoding: unicode
pool: <%= ENV['DB_POOL'] || ENV['RAILS_MAX_THREADS'] || 5 %>
idle_timeout: 300
checkout_timeout: 5
schema_search_path: public, tiger
prepared_statements: false
production:
primary:
<<: *default
url: <%= ENV['DATABASE_URL'] %>
primary_replica:
<<: *default
url: <%= ENV['DATABASE_REPLICA_URL'] %>
Update 1
We tried the default value of 600 seconds for server_idle_timeout, and that doesn't seem to have made a difference.
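Until the root cause is found, one hedged mitigation is to discard stale pooled connections and retry once when a dropped socket surfaces. This is a generic sketch; the helper name and structure are my own, not from the question or from Rails:

```ruby
# A minimal, generic retry sketch (all names here are mine). The idea:
# when a pooled socket has been silently closed by PgBouncer or the
# server, run a cleanup hook (e.g. drop stale pooled connections) and
# retry the block.
def with_retry(on:, attempts: 2, before_retry: nil)
  yield
rescue *Array(on)
  attempts -= 1
  raise if attempts <= 0
  before_retry&.call  # e.g. drop stale pooled connections
  retry
end
```

In a Rails app this might wrap the failing call as `with_retry(on: [ActiveRecord::ConnectionNotEstablished, ActiveRecord::StatementInvalid], before_retry: -> { ActiveRecord::Base.connection_pool.disconnect! }) { Model.first }`; treat that usage as an assumption to verify, and only retry idempotent work.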

Related

Sidekiq getting ActiveRecord::ConnectionTimeoutError: could not obtain a connection from the pool

I am getting an ActiveRecord::ConnectionTimeoutError (could not obtain a connection from the pool) exception for background jobs in Sidekiq.
CONFIG
I have a PUMA web process and a SIDEKIQ process running on Heroku (2 hobby dynos) [A Rails app with background jobs]
In database.yml I have pool: 40 (in default and production)
In sidekiq.yml I have :concurrency: 7
In puma.rb I have max_threads_count = ENV.fetch("PUMA_MAX_THREADS") { 5 } and have set ENV["PUMA_MAX_THREADS"] = 5
I am using a Heroku pgsql hobby instance, which allows for 20 connections
EXPECTED BEHAVIOR
When the 7 Sidekiq workers are busy running jobs they should have enough available db connections.
Because:
Needed db connections:
5 for 5 PUMA threads
12: [7 + 5] for SIDEKIQ threads (7 workers + 5 for redis? - not sure about reasoning behind that one)
TOTAL NEEDED: 17 [12+5]
TOTAL AVAILABLE: 20
ACTUAL BEHAVIOR
When the 7 Sidekiq workers are busy running jobs, 2 jobs fail and raise the ConnectionTimeOutError (always 2 jobs, so actual max concurrency is 5)
STUFF I NOTICED (MIGHT HELP):
In SIDEKIQ dashboard, Redis connections reach a maximum 10 (never higher) [I guess 5 threads + 5]
In Heroku db, when enqueueing a lot of jobs, connections are always much lower than the 20 available (so no problem from the pgsql instance)
Any help or advice would be super appreciated :))
Thanks in advance!
UPDATE: Adding my database.yml file
default: &default
adapter: postgresql
encoding: unicode
pool: <%= ENV.fetch("DB_POOL") { 10 } %>
development:
<<: *default
database: tracker_app_development
test:
<<: *default
database: tracker_app_test
production:
url: <%= ENV['DATABASE_URL'] %>
pool: <%= ENV.fetch("DB_POOL") { 10 } %>
Procfile:
web: DB_POOL=$PUMA_MAX_THREADS bundle exec puma -C config/puma.rb
worker: DB_POOL=14 bundle exec sidekiq -C config/sidekiq.yml
release: rake db:migrate
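Those process settings imply a worst-case connection budget that can be sanity-checked with simple arithmetic (a sketch using the numbers above; each process can open at most its DB_POOL connections):

```ruby
# Back-of-the-envelope connection budget from the settings above.
puma_threads    = 5   # PUMA_MAX_THREADS, also DB_POOL for the web process
sidekiq_db_pool = 14  # DB_POOL for the worker process
plan_limit      = 20  # Heroku hobby-tier Postgres connection limit

required = puma_threads + sidekiq_db_pool
puts required                # => 19
puts plan_limit - required   # => 1 connection of headroom
```

With only one connection of headroom, any extra consumer (a console session, a migration, a second dyno) pushes past the plan limit.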
This exception is being raised from the ActiveRecord::ConnectionAdapters::ConnectionPool::Queue class in Rails, specifically in the poll method of the class, which accepts a timeout period (defaults to 5s). This is how the error is being raised:
if elapsed >= timeout
  msg = "could not obtain a connection from the pool within %0.3f seconds (waited %0.3f seconds); all pooled connections were in use" %
    [timeout, elapsed]
  raise ConnectionTimeoutError, msg
end
I think this is saying that if the time elapsed while trying to acquire a connection exceeds the timeout provided (default 5s), this exception is raised. It is happening because the number of available connections in your web pool is 10, while your Sidekiq process specifies a pool of 14. Try increasing the pool size of your web dyno to be greater than or equal to the pool size specified for your Sidekiq dyno. Hopefully, this resolves the exception.
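That poll logic can be illustrated with a small plain-Ruby sketch; the class and method names below are mine, not Rails', and it only mimics the timed wait, not the real pool:

```ruby
require "monitor"

class PoolTimeoutError < StandardError; end

# Plain-Ruby sketch of a timed checkout, in the spirit of ActiveRecord's
# ConnectionPool::Queue#poll: wait on a condition variable until a
# connection is available or the deadline passes.
class TinyPool
  def initialize(size)
    @conns = Array.new(size) { Object.new }  # stand-ins for real connections
    @lock  = Monitor.new
    @cond  = @lock.new_cond
  end

  def checkout(timeout = 5)
    deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + timeout
    @lock.synchronize do
      loop do
        return @conns.pop unless @conns.empty?
        remaining = deadline - Process.clock_gettime(Process::CLOCK_MONOTONIC)
        if remaining <= 0
          raise PoolTimeoutError,
                "could not obtain a connection from the pool within %0.3f seconds" % timeout
        end
        @cond.wait(remaining)  # releases the lock while waiting
      end
    end
  end

  def checkin(conn)
    @lock.synchronize do
      @conns.push(conn)
      @cond.signal
    end
  end
end
```

A pool of size 1 hands out one connection; a second checkout blocks and raises once the timeout elapses, which is exactly the failure mode the asker is seeing.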
If this does not work, then you can try increasing the checkout_timeout from 5s to a longer duration like so:
default: &default
adapter: postgresql
encoding: unicode
pool: <%= ENV.fetch("DB_POOL") { 10 } %>
checkout_timeout: 10
development:
<<: *default
database: tracker_app_development
test:
<<: *default
database: tracker_app_test
production:
url: <%= ENV['DATABASE_URL'] %>
pool: <%= ENV.fetch("DB_POOL") { 10 } %>
This is what the API documentation for Rails has to say about ConnectionPools.
https://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/ConnectionPool.html
SOLUTION FOUND:
In my database.yml file, the production config was indented one level when it should not have been indented at all...
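For reference, the corrected shape looks something like this, with production: at the top level, flush with the other environments (a sketch based on the config above):

```yaml
development:
  <<: *default
  database: tracker_app_development

# Correct: `production:` flush left, not nested under another key
production:
  url: <%= ENV['DATABASE_URL'] %>
  pool: <%= ENV.fetch("DB_POOL") { 10 } %>
```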

PG::UnableToSend: no connection to the server in Rails

I have a production server running Ubuntu 14.04, Rails 4.2.0, PostgreSQL 9.6.1 with gem pg 0.21.0/0.20.0. In the last few days, there has constantly been an error accessing a table, customer_input_datax_records, in the psql server.
D, [2017-07-20T18:08:39.166897 #1244] DEBUG -- : CustomerInputDatax::Record Load (0.1ms) SELECT "customer_input_datax_records".* FROM "customer_input_datax_records" WHERE ("customer_input_datax_records"."status" != $1) [["status", "email_sent"]]
E, [2017-07-20T18:08:39.166990 #1244] ERROR -- : PG::UnableToSend: no connection to the server
: SELECT "customer_input_datax_records".* FROM "customer_input_datax_records" WHERE ("customer_input_datax_records"."status" != $1)
The code which accesses the db server runs in a Rufus scheduler 3.4.2 loop:
s = Rufus::Scheduler.singleton
s.every '2m' do
new_signups = CustomerInputDatax::Record.where.not(:status => 'email_sent').all
.......
end
After restarting the server, it usually works for the first request (or a few). But after some time (e.g., 1 or 2 hours), the issue starts to show up. The app otherwise seems to run fine (reading/writing records and creating new ones). There are some online posts about this error, but they don't seem to match the problem I'm having. Before I re-install the psql server, I would like to get some ideas about what causes the lost connection.
UPDATE: database.yml
production:
adapter: postgresql
encoding: unicode
host: localhost
database: wb_production
pool: 5
username: postgres
password: xxxxxxx
So, the error is "RAILS: PG::UnableToSend: no connection to the server".
That reminds me of Connection pool issue with ActiveRecord objects in rufus-scheduler
You could do
s = Rufus::Scheduler.singleton
s.every '2m' do
ActiveRecord::Base.connection_pool.with_connection do
new_signups = CustomerInputDatax::Record
.where.not(status: 'email_sent')
.all
# ...
end
end
digging
It would be great to know more about the problem.
I'd suggest trying this code:
s = Rufus::Scheduler.singleton
def s.on_error(job, error)
Rails.logger.error(
"err#{error.object_id} rufus-scheduler intercepted #{error.inspect}" +
" in job #{job.inspect}")
error.backtrace.each_with_index do |line, i|
Rails.logger.error(
"err#{error.object_id} #{i}: #{line}")
end
end
s.every '2m' do
new_signups = CustomerInputDatax::Record.where.not(:status => 'email_sent').all
# .......
end
As soon as the problem manifests itself, I'd look for the on_error full output in the Rails log.
This on_error comes from https://github.com/jmettraux/rufus-scheduler#rufusscheduleron_errorjob-error
As we discussed in the comments, the problem seems related to your rufus-scheduler version.
I would suggest checking out the whenever gem and invoking a rake task instead of calling the ActiveRecord model directly.
It could be a good idea, however, to open an issue with the backtrace of your error in the rufus-scheduler repo on GitHub (just to let them know...)

Postgres: too many connections in rails console

I am developing a Ruby on Rails app using the pg gem, and this is what my database.yml looks like:
development:
adapter: postgresql
encoding: utf-8
pool: 5
username: "hytxlzju"
password: "xxxxx"
host: "jumbo.db.elephantsql.com"
port: "5432"
database: "hytxlzju"
production:
adapter: postgresql
encoding: utf-8
pool: 5
username: "hytxlzju"
password: "xxxxxx"
host: "jumbo.db.elephantsql.com"
port: "5432"
database: "hytxlzju"
Whenever I connect to this db locally from the rails console, I get a "too many connections" error. How can I kill a connection in the code once the user has logged out, and how can I kill one in my rails console after I have finished altering the tables?
[EDIT]
This is the error message:
C:/RailsInstaller/Ruby2.2.0/lib/ruby/gems/2.2.0/gems/activerecord-3.2.22.5/lib/active_record/connection_adapters/postgresql_adapter.rb:1222:in `initialize': FATAL: too many connections for role "hytxlzju" (PG::ConnectionBad)
[EDIT] I added my initializer, still no success:
Rails.application.config.after_initialize do
ActiveRecord::Base.connection_pool.disconnect!
ActiveSupport.on_load(:active_record) do
config = ActiveRecord::Base.configurations[Rails.env] ||
Rails.application.config.database_configuration[Rails.env]
config['pool'] = ENV['DB_POOL'] || ENV['RAILS_MAX_THREADS'] || 5
ActiveRecord::Base.establish_connection(config)
end
end
You can try the approach below.
Active Record limits the total number of connections per application through the database setting pool; this is the maximum number of connections your app can open to the database. In config/database.yml:
pool: <%= ENV['RAILS_MAX_THREADS'] || 5 %>
If you are using puma then use ENV['RAILS_MAX_THREADS'] (more here).
It might solve the problem.
[SOLVED]
Somehow my demo app was not finding the entries in the tables, so it was creating multiple pool connections without closing them: before the connection could be closed, a 500 error was thrown, so the bit of code where I closed the pool was never reached. More about Postgres sessions here:
https://devcenter.heroku.com/articles/concurrency-and-database-connections#connection-pool
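As for killing connections directly (the second half of the original question): a server-side session can be terminated with pg_terminate_backend, run as the role owner or a superuser. A sketch against the role from the question (on PostgreSQL 9.2+, where the column is pid rather than procpid):

```sql
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE usename = 'hytxlzju'
  AND pid <> pg_backend_pid()
  AND state = 'idle';
```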

How to properly connect ROR with Oracle database with Ruby-OCI8 gem?

I'm trying to use Oracle Database XE in my Ruby on Rails app,
but I'm having a lot of trouble with my database connection. I'm currently not sure what the problem is, but from what I have read I may have a problem with my TNS setup. The error message that I'm getting is:
OCIError: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
The error appears every time I try to run rake db:migrate.
In my rails console, when I try to run OCI8.new it gives me this error:
OCIError: ORA-12545: Connect failed because target host or object does not exist
I'm pretty much stuck and I'm really not sure what to do here.
TNS:
METRO=
(description=
(address_list=
(address = (protocol = TCP)(host = 127.0.0.1)(port = 1521))
)
(connect_data =
(service_name=METRO)
)
)
Database.yml :
development:
adapter: oracle_enhanced
database: metro
host: 192.168.18.55
username: metro
password: imperium
Looks like you are missing the port in your database.yml file.
development:
adapter: oracle_enhanced
host: localhost
port: 1521
database: xe
username: user
password: secret

Error connecting external DB to Rails 4 App

I am attempting to add a Postgresql Redshift Database to my Rails 4 app, for use locally and in production. I am testing in Development first.
I have altered my database.yml file to look like this:
development:
adapter: postgresql
encoding: unicode
database: new_db
pool: 5
username: test
password: password
host: test_db.us-east-1.redshift.amazonaws.com
port: 5439
Now, when I hit localhost:3000 I get this error:
permission denied to set parameter "client_min_messages" to "warning"
: SET client_min_messages TO 'warning'
I can't seem to find out what is causing this - it seems like maybe my new DB isn't allowing the SET command? I'm not really sure, but any help is appreciated.
Just answered this somewhere else, but I had the same issues today, here's what I did and it's working now:
#app/models/data_warehouse.rb
class DataWarehouse < ActiveRecord::Base
establish_connection "redshift_staging"
#or, if you want to have a db per environment
#establish_connection "redshift_#{Rails.env}"
end
Note that we are connecting on 5439, not the default 5432, so I specify the port.
Also, I specify a schema, beta, which is what we use for our unstable aggregates. You could either have a different db per environment as mentioned above, or use various schemas and include them in the search path for ActiveRecord.
#config/database.yml
redshift_staging:
adapter: postgresql
encoding: utf8
database: db03
port: 5439
pool: 5
schema_search_path: 'beta'
username: admin
password: supersecretpassword
host: db03.myremotehost.us #your remote host here, might be an aws url from Redshift admin console
###OPTION 2, a direct PG connection
class DataWarehouse < ActiveRecord::Base
  attr_accessor :conn

  def initialize
    # PG.connect takes dbname/user (not database/username) and has no
    # pool or schema_search_path options; set the search path after connecting.
    @conn = PG.connect(
      dbname:   'db03',
      port:     5439,
      user:     'admin',
      password: 'supersecretpassword',
      host:     'db03.myremotehost.us'
    )
    @conn.exec("SET search_path TO beta")
  end
end
[DEV] main:0> redshift = DataWarehouse
E, [2014-07-17T11:09:17.758957 #44535] ERROR -- : PG::InsufficientPrivilege: ERROR: permission denied to set parameter "client_min_messages" to "notice" : SET client_min_messages TO 'notice'
(pry) output error: #<ActiveRecord::StatementInvalid: PG::InsufficientPrivilege: ERROR: permission denied to set parameter "client_min_messages" to "notice" : SET client_min_messages TO 'notice'>
UPDATE:
I ended up going with option 1, but using this adapter for now for multiple reasons:
https://github.com/fiksu/activerecord-redshift-adapter
Reason 1: ActiveRecord postgresql adapter sets client_min_messages
Reason 2: adapter also attempts to set Time Zone, which redshift doesn't allow (http://docs.aws.amazon.com/redshift/latest/dg/c_redshift-and-postgres-sql.html)
Reason 3: Even if you change the code in ActiveRecord for the first two errors, you run into additional errors complaining that Redshift is based on PostgreSQL 8.0. At that point I moved on to the adapter; I will revisit and update if I find something better later.
I renamed my table to base_aggregate_redshift_tests (notice plural) so ActiveRecord was easily able to connect, if you can't change your table names in redshift use the set_table method I have commented out below
#Gemfile:
gem 'activerecord4-redshift-adapter', github: 'aamine/activerecord4-redshift-adapter'
Option 1
#config/database.yml
redshift_staging:
adapter: redshift
encoding: utf8
database: db03
port: 5439
pool: 5
username: admin
password: supersecretpassword
host: db03.myremotehost.us
timeout: 5000
#app/models/base_aggregates_redshift_test.rb
#Model named to match my tables in Redshift, if you want you can set_table like I have commented out below
class BaseAggregatesRedshiftTest < ActiveRecord::Base
establish_connection "redshift_staging"
self.table_name = "beta.base_aggregates_v2"
end
In the console, using self.table_name -- notice it queries the right table, so you can name your models whatever you want:
[DEV] main:0> redshift = BaseAggregatesRedshiftTest.first
D, [2014-07-17T15:31:58.678103 #43776] DEBUG -- : BaseAggregatesRedshiftTest Load (45.6ms) SELECT "beta"."base_aggregates_v2".* FROM "beta"."base_aggregates_v2" LIMIT 1
Option 2
#app/models/base_aggregates_redshift_test.rb
class BaseAggregatesRedshiftTest < ActiveRecord::Base
set_table "beta.base_aggregates_v2"
ActiveRecord::Base.establish_connection(
adapter: 'redshift',
encoding: 'utf8',
database: 'staging',
port: '5439',
pool: '5',
username: 'admin',
password: 'supersecretpassword',
search_schema: 'beta',
host: 'db03.myremotehost.us',
timeout: '5000'
)
end
#in console, an abbreviated example of the first record. Now it queries base_aggregates_redshift_tests, the model's default table name, because I didn't set the table_name
[DEV] main:0> redshift = BaseAggregatesRedshiftTest.first
D, [2014-07-17T15:09:39.388918 #11537] DEBUG -- : BaseAggregatesRedshiftTest Load (45.3ms) SELECT "base_aggregates_redshift_tests".* FROM "base_aggregates_redshift_tests" LIMIT 1
#<BaseAggregatesRedshiftTest:0x007fd8c4a12580> {
:truncated_month => Thu, 31 Jan 2013 19:00:00 EST -05:00,
:dma => "Cityville",
:group_id => 9712338,
:dma_id => 9999
}
Good luck!
What's your PostgreSQL version, or middleware version? Is it perhaps not a stock PostgreSQL server?
You can use \set VERBOSITY verbose
and then run it again to find which function in the code raises the error,
and then analyze why you can't set client_min_messages.
I use PostgreSQL 9.3.3 and don't have this problem:
both super and normal users can set client_min_messages correctly.
digoal=# \c digoal digoal
You are now connected to database "digoal" as user "digoal".
digoal=> set client_min_messages=debug;
SET
digoal=> set client_min_messages=warning;
SET
digoal=> SET client_min_messages TO 'warning';
SET
digoal=> \c postgres digoal
You are now connected to database "postgres" as user "digoal".
postgres=> SET client_min_messages TO 'warning';
SET
I was able to work around the issue by opening up the postgresql adapter for ActiveRecord and commenting out all of the SET commands.
However, this is not a best practice at all. After editing the postgresql adapter I was faced with errors due to an outdated postgres version (80002).
The right answer is just updating postgres, which unfortunately isn't possible on my shared Redshift DB that lives at Amazon.
