We recently introduced PGBouncer into our stack because we were exhausting the connections to our RDS instance. Since then we have started to see all sorts of connection exceptions, which I've posted below. The only thing of note is that we use multiple databases via Rails' built-in multi-db support. Only the primary/writer connection goes through PGBouncer at the moment, and that is where we are seeing all of the exceptions; the reader connections seem to be fine.
I'm wondering if we need to fine-tune some of the timeouts or connection pool sizes, or what else could be causing this.
Exceptions
ActiveRecord::StatementInvalid: PG::ConnectionBad: PQconsumeInput() server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
ActiveRecord::ConnectionNotEstablished: connection to server at "{db server IP}", port 5432 failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing
ActiveRecord::StatementInvalid: PG::ConnectionBad: PQsocket() can't get socket descriptor
PGBouncer Config
We're running quite a few smaller instances of PGBouncer, since it is single-process and, I believe, single-threaded as well. We plan to fine-tune this a bit later.
[databases]
production = our_connection_string
[pgbouncer]
max_client_conn = 500
pool_mode = transaction
default_pool_size = 200
server_idle_timeout = 30
reserve_pool_size = 0
Rails DB Config
default: &default
  adapter: postgis
  postgis_extension: true
  encoding: unicode
  pool: <%= ENV['DB_POOL'] || ENV['RAILS_MAX_THREADS'] || 5 %>
  idle_timeout: 300
  checkout_timeout: 5
  schema_search_path: public, tiger
  prepared_statements: false

production:
  primary:
    <<: *default
    url: <%= ENV['DATABASE_URL'] %>
  primary_replica:
    <<: *default
    url: <%= ENV['DATABASE_REPLICA_URL'] %>
Update 1
We tried the default value for server_idle_timeout of 600 seconds and that doesn't seem to have made a difference.
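For reference, the change amounted to bumping that one value in the [pgbouncer] section shown above, leaving everything else as-is:
[pgbouncer]
server_idle_timeout = 600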
I am getting an ActiveRecord::ConnectionTimeoutError: could not obtain a connection from the pool exception for background jobs in Sidekiq
CONFIG
I have a PUMA web process and a SIDEKIQ process running on Heroku (2 hobby dynos) [A Rails app with background jobs]
In database.yml I have pool: 40 (in default and production)
In sidekiq.yml I have :concurrency: 7
In puma.rb I have max_threads_count = ENV.fetch("PUMA_MAX_THREADS") { 5 } and have set ENV["PUMA_MAX_THREADS"] = 5
I am using a Heroku Postgres hobby instance, which allows for 20 connections (the relevant config pieces are sketched just below)
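Putting the pieces above together, the puma.rb lines in question look roughly like this (a sketch based on the description; the threads call is standard Puma config and my assumption about the rest of the file):
# config/puma.rb
max_threads_count = ENV.fetch("PUMA_MAX_THREADS") { 5 }
threads max_threads_count, max_threads_count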
EXPECTED BEHAVIOR
When the 7 Sidekiq workers are busy running jobs they should have enough available db connections.
Because:
Needed db connections:
5 for 5 PUMA threads
12: [7 + 5] for SIDEKIQ threads (7 workers + 5 for redis? - not sure about reasoning behind that one)
TOTAL NEEDED: 17 [12+5]
TOTAL AVAILABLE: 20
ACTUAL BEHAVIOR
When the 7 Sidekiq workers are busy running jobs, 2 jobs fail and raise the ConnectionTimeoutError (always 2 jobs, so the actual max concurrency is 5)
STUFF I NOTICED (MIGHT HELP):
In the SIDEKIQ dashboard, Redis connections reach a maximum of 10 (never higher) [I guess 5 threads + 5]
In Heroku db, when enqueueing a lot of jobs, connections are always much lower than the 20 available (so no problem from the pgsql instance)
Any help or advice would be super appreciated :))
Thanks in advance!
UPDATE: Adding my database.yml and Procfile
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("DB_POOL") { 10 } %>

development:
  <<: *default
  database: tracker_app_development

test:
  <<: *default
  database: tracker_app_test

  production:
    url: <%= ENV['DATABASE_URL'] %>
    pool: <%= ENV.fetch("DB_POOL") { 10 } %>
Procfile:
web: DB_POOL=$PUMA_MAX_THREADS bundle exec puma -C config/puma.rb
worker: DB_POOL=14 bundle exec sidekiq -C config/sidekiq.yml
release: rake db:migrate
This exception is being raised from the ActiveRecord::ConnectionAdapters::ConnectionPool::Queue class in Rails, specifically in the poll method of the class, which accepts a timeout period (defaults to 5s). This is how the error is being raised:
if elapsed >= timeout
msg = "could not obtain a connection from the pool within %0.3f seconds (waited %0.3f seconds); all pooled connections were in use" %
[timeout, elapsed]
raise ConnectionTimeoutError, msg
end
I think this is saying that if the time elapsed while trying to acquire a connection exceeds the timeout provided (default 5s), then it raises this exception. This is happening because the number of available connections in the pool is 10, while in Sidekiq you have specified 14 as the pool size. Try increasing the pool size of your web dyno so that it is at least as large as the pool specified for your Sidekiq dyno. Hopefully this resolves the exception.
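With the Procfile from the question, one way to apply that suggestion is to set DB_POOL explicitly for the web process so it matches the worker's value (just a sketch; adjust the number to whatever your Sidekiq dyno uses):
web: DB_POOL=14 bundle exec puma -C config/puma.rb
worker: DB_POOL=14 bundle exec sidekiq -C config/sidekiq.yml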
If this does not work, then you can try increasing the checkout_timeout from 5s to a longer duration like so:
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("DB_POOL") { 10 } %>
  checkout_timeout: 10

development:
  <<: *default
  database: tracker_app_development

test:
  <<: *default
  database: tracker_app_test

production:
  <<: *default   # inherit the defaults so checkout_timeout applies in production too
  url: <%= ENV['DATABASE_URL'] %>
  pool: <%= ENV.fetch("DB_POOL") { 10 } %>
This is what the API documentation for Rails has to say about ConnectionPools.
https://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/ConnectionPool.html
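If you want to see the pool state while reproducing the error, newer Rails versions (5.1+) also let you inspect it from a console; this is just a diagnostic aid, not part of the fix:
# Rails console: returns a hash with size, busy, idle, waiting, checkout_timeout, ...
ActiveRecord::Base.connection_pool.stat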
SOLUTION FOUND:
In my database.yml file, the production: key was indented one level when it should have been at zero indentation (the top level)...
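For anyone else hitting this: with the file from the update above, the fix is simply moving production: back to the top level, alongside development: and test:, like so:
production:
  url: <%= ENV['DATABASE_URL'] %>
  pool: <%= ENV.fetch("DB_POOL") { 10 } %>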
I'm trying to use Oracle Database XE with my Ruby on Rails app, but I'm having a lot of trouble with my database connection. I'm currently not sure what the problem is, but from what I have read I may have a problem with my TNS setup. The error message that I'm getting is
OCIError: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
The error appears every time I try to run rake db:migrate
In my Rails console, when I try to run OCI8.new, I get this error
OCIError: ORA-12545: Connect failed because target host or object does not exist
I'm pretty much stuck and I'm really not sure what to do here.
TNS:
METRO =
  (description =
    (address_list =
      (address = (protocol = TCP)(host = 127.0.0.1)(port = 1521))
    )
    (connect_data =
      (service_name = METRO)
    )
  )
database.yml:
development:
  adapter: oracle_enhanced
  database: metro
  host: 192.168.18.55
  username: metro
  password: imperium
Looks like you are missing the port in your database.yml file:
development:
  adapter: oracle_enhanced
  host: localhost
  port: 1521
  database: xe
  username: user
  password: secret
I am attempting to add a Postgresql Redshift Database to my Rails 4 app, for use locally and in production. I am testing in Development first.
I have altered my database.yml file to look like this:
development:
  adapter: postgresql
  encoding: unicode
  database: new_db
  pool: 5
  username: test
  password: password
  host: test_db.us-east-1.redshift.amazonaws.com
  port: 5439
Now, when I hit localhost:3000 I get this error:
permission denied to set parameter "client_min_messages" to "warning"
: SET client_min_messages TO 'warning'
I can't seem to find out what is causing this. It seems like maybe my new DB isn't allowing the SET command? I'm not really sure, but any help is appreciated.
Just answered this somewhere else, but I had the same issue today; here's what I did, and it's working now:
# app/models/data_warehouse.rb
class DataWarehouse < ActiveRecord::Base
  establish_connection "redshift_staging"
  # or, if you want to have a db per environment:
  # establish_connection "redshift_#{Rails.env}"
end
Note that we are connecting on 5439, not the default 5432, so I specify the port.
Also, I specify a schema, beta, which is what we use for our unstable aggregates. You could either have a different db per environment as mentioned above, or use various schemas and include them in the search path for ActiveRecord.
# config/database.yml
redshift_staging:
  adapter: postgresql
  encoding: utf8
  database: db03
  port: 5439
  pool: 5
  schema_search_path: 'beta'
  username: admin
  password: supersecretpassword
  host: db03.myremotehost.us # your remote host here, might be an aws url from the Redshift admin console
### OPTION 2, a direct PG connection (a plain Ruby class rather than an ActiveRecord model)
class DataWarehouse
  attr_accessor :conn

  def initialize
    # PG.connect takes libpq-style keys (dbname/user), not the Rails-style ones
    @conn = PG.connect(
      dbname:   'db03',
      port:     5439,
      user:     'admin',
      password: 'supersecretpassword',
      host:     'db03.myremotehost.us'
    )
    # the schema search path is set on the session rather than at connect time
    @conn.exec("SET search_path TO beta")
  end
end
[DEV] main:0> redshift = DataWarehouse
E, [2014-07-17T11:09:17.758957 #44535] ERROR -- : PG::InsufficientPrivilege: ERROR: permission denied to set parameter "client_min_messages" to "notice" : SET client_min_messages TO 'notice'
(pry) output error: #<ActiveRecord::StatementInvalid: PG::InsufficientPrivilege: ERROR: permission denied to set parameter "client_min_messages" to "notice" : SET client_min_messages TO 'notice'>
UPDATE:
I ended up going with option 1, but using this adapter for now for multiple reasons:
https://github.com/fiksu/activerecord-redshift-adapter
Reason 1: ActiveRecord postgresql adapter sets client_min_messages
Reason 2: adapter also attempts to set Time Zone, which redshift doesn't allow (http://docs.aws.amazon.com/redshift/latest/dg/c_redshift-and-postgres-sql.html)
Reason 3: Even if you change the code in ActiveRecord for the first two errors, you run into additional errors complaining that Redshift is based on PostgreSQL 8.0. At that point I moved on to the adapter; I will revisit and update if I find something better later.
I renamed my table to base_aggregates_redshift_tests (notice the plural) so ActiveRecord could find it easily. If you can't change your table names in Redshift, set the table name explicitly as I do in the model below.
#Gemfile:
gem 'activerecord4-redshift-adapter', github: 'aamine/activerecord4-redshift-adapter'
Option 1
# config/database.yml
redshift_staging:
  adapter: redshift
  encoding: utf8
  database: db03
  port: 5439
  pool: 5
  username: admin
  password: supersecretpassword
  host: db03.myremotehost.us
  timeout: 5000
# app/models/base_aggregates_redshift_test.rb
# Model named to match my tables in Redshift; if your table name doesn't match, set table_name explicitly as below
class BaseAggregatesRedshiftTest < ActiveRecord::Base
  establish_connection "redshift_staging"
  self.table_name = "beta.base_aggregates_v2"
end
In the console, using self.table_name -- notice it queries the right table, so you can name your models whatever you want:
[DEV] main:0> redshift = BaseAggregatesRedshiftTest.first
D, [2014-07-17T15:31:58.678103 #43776] DEBUG -- : BaseAggregatesRedshiftTest Load (45.6ms) SELECT "beta"."base_aggregates_v2".* FROM "beta"."base_aggregates_v2" LIMIT 1
Option 2
# app/models/base_aggregates_redshift_test.rb
class BaseAggregatesRedshiftTest < ActiveRecord::Base
  # self.table_name = "beta.base_aggregates_v2"  # uncomment to point at a differently named table
  ActiveRecord::Base.establish_connection(
    adapter: 'redshift',
    encoding: 'utf8',
    database: 'staging',
    port: '5439',
    pool: '5',
    username: 'admin',
    password: 'supersecretpassword',
    search_schema: 'beta',
    host: 'db03.myremotehost.us',
    timeout: '5000'
  )
end
# In console, an abbreviated example of the first record. Now it uses the model's default table name (base_aggregates_redshift_tests), since I didn't set table_name.
[DEV] main:0> redshift = BaseAggregatesRedshiftTest.first
D, [2014-07-17T15:09:39.388918 #11537] DEBUG -- : BaseAggregatesRedshiftTest Load (45.3ms) SELECT "base_aggregates_redshift_tests".* FROM "base_aggregates_redshift_tests" LIMIT 1
#<BaseAggregatesRedshiftTest:0x007fd8c4a12580> {
:truncated_month => Thu, 31 Jan 2013 19:00:00 EST -05:00,
:dma => "Cityville",
:group_id => 9712338,
:dma_id => 9999
}
Good luck!
What's your PostgreSQL version, or middleware version? Is it perhaps not a stock PostgreSQL server?
You can use \set VERBOSITY verbose
and then run the statement again to see which source function raised the error,
and then analyze why you can't set client_min_messages.
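For example, in psql:
\set VERBOSITY verbose
SET client_min_messages TO 'warning';
With verbose output, the error report also shows the source file, line, and routine that raised it.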
I use PostgreSQL 9.3.3 and don't have this problem; both superusers and normal users can set client_min_messages correctly:
digoal=# \c digoal digoal
You are now connected to database "digoal" as user "digoal".
digoal=> set client_min_messages=debug;
SET
digoal=> set client_min_messages=warning;
SET
digoal=> SET client_min_messages TO 'warning';
SET
digoal=> \c postgres digoal
You are now connected to database "postgres" as user "digoal".
postgres=> SET client_min_messages TO 'warning';
SET
I was able to solve the issue by opening up the PostgreSQL adapter for ActiveRecord and commenting out all of the SET commands.
However, this is not a best practice at all. After editing the adapter I was still faced with errors due to an outdated Postgres version (80002).
The right answer is just to update Postgres, which unfortunately isn't possible on my shared Redshift DB hosted at Amazon.
I have a problem with the seamless_database_pool gem.
development:
  adapter: jdbcmysql
  database: mydb_development
  username: read_user
  password: abc123
  pool_adapter: jdbcmysql
  port: 3306
  master:
    host: master-db.example.com
    port: 6000
    username: master_user
    password: 567pass
  read_pool:
    - host: read-db-1.example.com
      pool_weight: 2
    - host: read-db-2.example.com
It should read from the slave (read-db-1.example.com), right? But it's weird: it always reads from the master database [mydb_development].
Do you have any suggestions on how I should configure this gem so that reads go to the slave database by default?
Thank you
Specify pool_weight=0 in the master configuration
By default, the master connection will be included in the read pool. If you would like to dedicate this connection only for write operations, you should set the pool weight to zero.
seamless_database_pool plugin
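Applied to the configuration in the question, the master block would look something like this (same host and credentials as above, with only pool_weight added):
master:
  host: master-db.example.com
  port: 6000
  username: master_user
  password: 567pass
  pool_weight: 0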