Why do I get ActiveRecord::ConnectionTimeoutError? - ruby-on-rails

I have hosted my website on AWS. I did not get a connection timeout error when I had a single server instance handling 5000 requests per second from JMeter, but I started receiving the error after increasing the instance count to 4.
I am using an AWS db.t2.small RDS instance. I read in a blog post that it can handle 150 concurrent connections.
This is my database config:
adapter: mysql2
encoding: utf8
reconnect: false
pool: 5
If this error occurs when Rails gets more than 5 requests at a time, then why did it not happen when I had only one instance?
The database can handle 150 concurrent connections. 4 instances with a pool of 5 each would be 20 connections, which is way below the maximum limit. What could be the reason for the error?
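One thing worth noting: the pool is per Rails process, so exhaustion happens when the threads inside one process outnumber that process's 5 pooled connections, regardless of the database's 150-connection ceiling. A minimal plain-Ruby sketch of that behaviour (the `TinyPool` class and all numbers are illustrative; this is not ActiveRecord itself):

```ruby
require "timeout"

# A toy fixed-size connection pool: threads check a connection out,
# and a checkout that waits longer than the timeout raises an error,
# analogous to ActiveRecord::ConnectionTimeoutError.
class TinyPool
  def initialize(size, checkout_timeout: 1)
    @connections = Queue.new
    size.times { |i| @connections << "conn-#{i}" }
    @checkout_timeout = checkout_timeout
  end

  def with_connection
    conn = Timeout.timeout(@checkout_timeout) { @connections.pop }
    yield conn
  ensure
    @connections << conn if conn # return the connection to the pool
  end
end

pool = TinyPool.new(5)   # like pool: 5 in database.yml
timeouts = Queue.new

# 8 threads compete for 5 connections; each holds one for 2 seconds,
# longer than the 1-second checkout timeout, so 3 threads time out.
threads = 8.times.map do
  Thread.new do
    pool.with_connection { sleep 2 }
  rescue Timeout::Error
    timeouts << :timeout
  end
end
threads.each(&:join)
puts "checkout timeouts: #{timeouts.size}"  # prints 3 with these numbers
```

The database never sees more than 5 connections here; the timeouts come entirely from threads inside one process outnumbering that process's pool, which mirrors what happens when per-instance concurrency exceeds `pool: 5`.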

Related

Puma web server (ROR) and connection timeout

I have a custom setup of Ruby on Rails using Puma as the web server (behind Nginx via a socket).
The database I am connecting to is an RDS medium instance (so a 296-connection limit). My Puma setup is threads 1:32 with 4 workers, and a connection pool of 128.
I have a high load of 300 requests/sec, and roughly every 1000 requests a longer calculation is made that takes about 3 seconds (fetching all the events, doing some calculations, and updating them).
I am getting the error
ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5.000 seconds (waited 5.016 seconds)
But if I look at the RDS database, only 43 connections are open. Memory usage is around 2000 MB out of 7000 MB (though the 2 CPU cores are at 100%). I am wondering why I get a connection timeout even though not all my connections are in use (and, of course, whether the Puma configuration is OK)?
Thanks for your help!
EDIT:
In my puma.rb I have:
on_worker_boot do
  ActiveRecord::Base.connection_pool.disconnect!
  ActiveSupport.on_load(:active_record) do
    config = Rails.application.config.database_configuration[Rails.env]
    config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
    config['pool'] = ENV['DB_POOL'] || 128
    ActiveRecord::Base.establish_connection(config)
  end
end
As mentioned in the Rails configuration guide on database pooling, when all the db connections are exhausted, ActiveRecord will wait for one to free up. I assume the number you increased is the HTTP connection limit, not the db connection limit.
You could edit your database.yml and increase your connection limit to, say, 296, which is the limit of the RDS instance:
production:
  adapter: mysql2
  database: /path/to/sock
  pool: 296
  # username, password, etc.

Rails Oracle enhanced adapter ignores pool size

I use 3 Thin servers (behind an Nginx proxy) for my production Rails app. Each Thin server opens 5 connections to the database, so my app has 15 connections in total. My Oracle admin complains that I use too many connections.
I do not know how to reduce the number of connections. I tried pool: 2 in database.yml and restarted all Thin servers, but my app still opens 15 connections. It seems that the pool setting is not used at all.
Of course, I could reduce the number of Thin servers, but I would like to know how to use pool.
I have another Rails app using PostgreSQL; there, this parameter works as expected.
I use Rails 4.1 and Ruby 2.1.
production:
  adapter: oracle_enhanced
  database: "(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=xxx)(PORT=12345)))(CONNECT_DATA=(SID=yyy)))"
  pool: 2
  username: ORACLE_USER
  password: ORACLE_PASSWORD
I'd suggest opening an issue at https://github.com/rsim/oracle-enhanced; the maintainers check there more often.
After a reboot of the database server, everything works as expected ...

ActiveRecord::ConnectionTimeoutError w/ rails, sidekiq & postgresql

I have a Rails application running PostgreSQL and Sidekiq. When scheduling a large number of background jobs that take ~20 seconds each to complete, I get the following error:
ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5.000 seconds (waited 5.097 seconds)
In my database.yml file I have specified a pool of 100. Then in my Postgres config I have set max_connections to 200 (as there are other applications using Postgres). It seems like this should avoid the problem, but it does not.
Ideas?

Database connection timeout error in postgresql

I am currently running background jobs with Sidekiq, and while running it gives "ActiveRecord::ConnectionTimeoutError".
This is my current database.yml file:
production:
  adapter: postgresql
  encoding: unicode
  database: app_production
  username: password
  password:
  host: app.domain.com
  pool: 25
This is my sidekiq.yml file:
production:
  concurrency: 25
  timeout: 300
While running, it gives a connection timeout error. This is the error I was getting in the background:
could not obtain a database connection within 5 seconds (waited 5.82230675 seconds). The max pool size is currently 25; consider increasing it.
Your pool allows at most 25 connections to your Postgres database, but you have set your Sidekiq concurrency to 25 as well. So if all of Sidekiq's concurrent threads are running, you will not have any database connection available for your app server.
Either reduce the Sidekiq concurrency or increase the pool size (I recommend increasing the pool size).
Postgres allows 100 concurrent connections by default.
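The arithmetic behind that answer can be sketched as follows (numbers taken from the question; this assumes every concurrent Sidekiq job may hold one connection):

```ruby
# Illustrative connection budget for the setup in the question.
sidekiq_concurrency = 25   # concurrency: in sidekiq.yml
ar_pool             = 25   # pool: in database.yml
pg_max_connections  = 100  # Postgres default

# Every concurrent Sidekiq thread may hold a DB connection, so the pool
# in the Sidekiq process must be at least as large as the concurrency;
# here they are exactly equal, leaving zero slack inside that process.
raise "pool smaller than concurrency" if ar_pool < sidekiq_concurrency

# Headroom left on the Postgres side for the web processes.
headroom = pg_max_connections - sidekiq_concurrency
puts "connections left for the app servers: #{headroom}"  # 75
```

With pool == concurrency, a single slow checkout inside Sidekiq already forces other jobs to wait, which is why bumping the pool a little above the concurrency is the safer setting.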

Rails3 active record pool and Sidekiq multi-thread

I am using Sidekiq with Rails 3. Sidekiq runs 25 threads by default; I have increased this limit by changing sidekiq.yml.
So, what is the relation between the pool value in database.yml and Sidekiq's thread count? What is the maximum value for the MySQL pool? Does it depend on server memory?
sidekiq.yml
:verbose: true
:concurrency: 50
:pool: 50
:queues:
  - [queue_primary, 7]
  - [default, 5]
  - [queue_secondary, 3]
database.yml
production:
  adapter: mysql2
  encoding: utf8
  reconnect: false
  database: db_name
  pool: 50
  username: root
  password: root
  socket: /var/run/mysqld/mysqld.sock
Each Sidekiq job executes in one of up to 50 threads with your configuration. Inside a job, any time an ActiveRecord model needs to access the database, it checks out a connection from the pool shared by all ActiveRecord models in that process; if none is free, the thread blocks until one becomes available.
If you have fewer connections available in your ActiveRecord connection pool than running Sidekiq jobs/threads, jobs will block waiting for a connection and may time out (after ~5 seconds) and fail.
This is why it's important to have at least as many available database connections as threads in your Sidekiq worker process.
Unicorn is a single-threaded, multi-process server, so you shouldn't need more than one connection for each Unicorn worker process.
However, the database can only handle so many connections (depending on OS, hardware, and configuration limits) so you need to make sure that you are distributing your database connections where they are needed and not exceeding your maximum.
For example, if your database is limited to 1000 connections, you could only run 20 sidekiq processes with 50 threads each and nothing else.
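A worked version of that budget, extended to leave room for a web tier (the Unicorn worker count is an assumed example, not from the question):

```ruby
# Hedged sketch of dividing a fixed database connection budget.
db_max          = 1000  # database's max connections
sidekiq_threads = 50    # :concurrency per Sidekiq process
unicorn_workers = 8     # assumption: single-threaded, 1 connection each

# After reserving one connection per Unicorn worker, the remainder
# bounds how many full-size Sidekiq processes can run (integer division).
remaining = db_max - unicorn_workers
sidekiq_processes = remaining / sidekiq_threads
puts "Sidekiq processes that fit: #{sidekiq_processes}"  # 19

total_used = unicorn_workers + sidekiq_processes * sidekiq_threads
raise "over budget" if total_used > db_max
```

Without the web tier the ceiling is the answer's 20 processes (20 × 50 = 1000); reserving even a handful of web connections drops it to 19.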
