Whenever I run an application using ActiveRecord I eventually get this ConnectionTimeoutError, always after some unknown period of time:
ActiveRecord::ConnectionTimeoutError (could not obtain a database connection within 5 seconds. The max pool size is currently 30; consider increasing it.):
The pool was previously set to 5; we have already increased it to 30, and there is no way the app is using 30 connections at the same time. The only thing we use ActiveRecord for is our session store.
Our database.yml file looks like:
development:
  adapter: sqlite3
  database: db/development.sqlite3
  pool: 30
  timeout: 5000
(Test and production settings are the same)
I have been googling this occurrence, and just came across this posting:
https://groups.google.com/forum/#!msg/copenhagen-ruby-user-group/GEHgi_WudmM/gnCiwWqmVfMJ
which mentions that ActiveRecord does not check a connection back into the pool once it is done with it. Is that true? Do I need to manage the connections manually?
I appreciate any advice!!
Edit: I should probably mention I am running Rails 3.1.3.
Rails has a middleware called ActiveRecord::ConnectionAdapters::ConnectionManagement which clears active connections after every request so they do not stick around. Check your middleware stack to make sure it is there (it is included by default) by running rake middleware. And to answer your last question: no, you should not have to manage the connections manually.
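If rake middleware shows it missing for some reason, it can be added back explicitly; a minimal sketch, assuming Rails 3.x (the MyApp module name is hypothetical):
# config/application.rb
module MyApp
  class Application < Rails::Application
    # ConnectionManagement returns each request's connection to the pool once the
    # response is finished; it is in the default stack, so this is only needed if
    # it was removed somehow.
    config.middleware.use ActiveRecord::ConnectionAdapters::ConnectionManagement
  end
end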
Run this in your console
ActiveRecord::Base.clear_active_connections!
I used this code in my Sinatra app:
after do
ActiveRecord::Base.clear_active_connections!
end
This solved my problem.
This also applies to Rails 5, since Puma is the default server there.
If you are using a threaded server such as Puma or Phusion Passenger, it runs multiple threads of the same application, which makes the application faster by executing incoming requests concurrently.
Make sure that the pool size is equal to or greater than the number of threads. A few of my threads were raising ActiveRecord::ConnectionTimeoutError, and the problem was hard to pin down because it only occurred once in a while, not very often.
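As a rough illustration of that relationship, here is a hypothetical Puma config; the numbers are made up, not a recommendation:
# config/puma.rb
workers 2        # OS processes; each one gets its own ActiveRecord pool
threads 5, 5     # threads per worker; each thread may check out one connection

# Each worker therefore needs pool: 5 (or more) in database.yml, and the database
# must allow at least workers * threads = 10 connections for the web tier alone.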
I was also experiencing a similar problem with a Sinatra app, so I added
after do
ActiveRecord::Base.clear_active_connections!
end
to my application controller, and it solved my problem.
This construct is known as an after filter, and it runs after each request.
I'm not sure what was actually happening with the application, but I would suspect that connections weren't being closed after each request.
Related
I have a Rails application that I've been developing for quite some time. All this time I tested it locally and on a DEV server. On the DEV server a PG database runs next to the deployed application, and there were no problems with connections; I assume there is simply no connection limit there, or it is high enough not to matter.
Today I started deploying to the PROD server. It is similar in capacity to the DEV one, but the database is a managed DigitalOcean Database. The servers themselves are also hosted on DigitalOcean, by the way.
The problem is that the DO Database has a limit of 20 connections, and as far as I understand, when this limit is exceeded the Rails application raises this error:
ActiveRecord::ConnectionNotEstablished (FATAL: remaining connection slots are reserved for non-replication superuser connections)
The most obvious option is to reduce the number of database requests per page load, but that still would not solve the problem if, for example, the number of users grows.
Can you please tell me which way to look? Are there any solutions to this problem other than upgrading the DO Database plan?
You might want to try PgBouncer (I have never tried it myself, so I can't really tell how it will impact the app).
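Besides PgBouncer, it also helps to budget the connections the app already opens so the total stays under the cap. A back-of-the-envelope sketch with hypothetical process counts:
# Hypothetical deployment; keep the sum below the 20-connection cap,
# leaving headroom for consoles and migrations.
puma_workers       = 2
threads_per_worker = 5   # the pool value in database.yml should match this
sidekiq_processes  = 1
sidekiq_threads    = 5   # Sidekiq concurrency

total = puma_workers * threads_per_worker + sidekiq_processes * sidekiq_threads
puts total  # => 15, which fits under the 20-connection limit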
I have a problem with my Rails application.
I am on Rails 3.2.22 and Ruby 2.2.5, connecting to MongoDB 2.6.
The problem is that I see a huge performance difference between environments, on simple and even more complex queries.
For example:
If I run rails c development and then execute my (quite complex) function, it responds after 30 seconds.
If I run rails c production and execute the same function, it responds after 6 minutes 30 seconds, about 13 times slower.
So I tried copy-pasting the 'development' configuration into 'production', but the result stayed the same; likewise for the Gemfile.
I have looked through all the project code and found no difference between the production and development environments.
Do you know of any differences at the heart of Rails between these two environments? Has anyone encountered this problem before?
Importantly, I am of course connecting to the same database.
Thanks in advance.
You have not specified your mongo (Ruby driver) and mongoid versions; if they are old, you may need to upgrade and/or adjust the code to your environment.
To determine whether the slowdown happens in the database or in your application, use command monitoring as described here: https://docs.mongodb.com/ruby-driver/current/tutorials/ruby-driver-monitoring/#command-monitoring
Look at the log entries corresponding to your queries and make note of how long they take in each environment. By implementing a custom event subscriber you can also save the commands being sent and verify that they are identical between the two environments.
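A minimal subscriber sketch, assuming a mongo Ruby driver recent enough to support command monitoring (2.1+); the CommandTimer class name is made up:
require 'mongo'

# Logs every command with its duration so the two environments can be compared.
class CommandTimer
  def started(event)
    puts "MONGO START #{event.command_name} #{event.command.inspect}"
  end

  def succeeded(event)
    puts "MONGO OK    #{event.command_name} took #{event.duration}s"
  end

  def failed(event)
    puts "MONGO FAIL  #{event.command_name} took #{event.duration}s"
  end
end

Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::COMMAND, CommandTimer.new)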
I figured it out!
When I saw the number of requests in production, I immediately thought of the query cache.
I found the 'identity_map_enabled' setting for Mongoid, changed it to true, and presto: magic!
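For reference, this is the kind of setting meant here; a sketch assuming Mongoid 3.x, where the identity map caches documents per unit of work so repeated lookups of the same document skip the database (the feature was removed in later Mongoid versions):
# config/initializers/mongoid.rb
Mongoid.configure do |config|
  config.identity_map_enabled = true
end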
I'm running 7 Sidekiq processes (concurrency set to 40) plus a Passenger webserver, connecting to a Postgres database. The Rails pool setting is 100, and the Postgres max_connections setting is also at its default of 100.
I just added a new job class where each job makes multiple postgres requests, and I started getting this error on many sidekiq jobs and sometimes on my webserver: PG::ConnectionBad: FATAL: remaining connection slots are reserved for non-replication superuser connections
I tried increasing Postgres max_connections to 200, and the error still occurs. Then I tried reducing the ActiveRecord pool setting to 25 (25 connections for each process = 200 total connections), figuring I might start getting DB connection timeout errors but at least it would stop the "no remaining connection slots" errors.
But I'm still getting the "remaining connection slots are reserved" error.
The smarter way to deal with this issue might be to load the important Postgres data that I keep reusing into Redis and access it from there, which obviously plays much more nicely and quickly with Sidekiq. But even as I do that, I'd like to understand what's going on here with the Postgres connections:
1. Am I likely leaking connections, and is that something I should be managing inside the sidekiq jobs? (See Releasing ActiveRecord connection before the end of a Sidekiq job.)
2. Should I look into more obscure things like locking/contention issues or threading issues with the PG driver? (See https://github.com/mperham/sidekiq/issues/594. I think I'm using ActiveRecord pretty simply without much obscure or abnormal logic for a rails app...)
3. Or maybe I'm just not understanding how the ActiveRecord pool setting and postgres max_connection settings work together...?
My situation may be too specific to help many others running into this error, but I'll share what I found in case it points you in the right direction.
Am I likely leaking connections, and is that something I should be managing inside the sidekiq jobs?
No, not likely. Sidekiq's default middleware includes a hook that closes connections even if a job fails. It took me a long time to understand what that actually means, so tl;dr: Sidekiq won't leak connections if you're using it normally.
Should I look into more obscure things like locking/contention issues or threading issues with the PG driver?
Unless you're using a very obscure setup, it's probably something simpler.
Or maybe I'm just not understanding how the ActiveRecord pool setting and postgres max_connection settings work together...?
Anyone can feel free to correct me if I'm wrong, but here are the guidelines I'm going by for pool settings, max_connections, and Sidekiq processes (see the sketch after this list):
Minimum DB pool size = the Sidekiq concurrency setting
Maximum DB pool size* = Postgres max_connections / total Sidekiq processes (and leave a few connections for web processes)
*Note that ActiveRecord only creates a new connection when a thread needs one, so if 95% of your threads never use Postgres at the same time, you can get away with far fewer max_connections than if every thread tries to check out a connection simultaneously.
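Plugging the question's own numbers into those guidelines (7 Sidekiq processes, concurrency 40, max_connections raised to 200) shows why they are in tension; this is just a sketch of the arithmetic, not a recommendation:
sidekiq_processes   = 7
sidekiq_concurrency = 40    # threads per Sidekiq process
max_connections     = 200   # Postgres setting after the increase

min_pool = sidekiq_concurrency                  # => 40; every thread may need its own connection
max_pool = max_connections / sidekiq_processes  # => 28; ceiling before Postgres runs out of slots

# min_pool > max_pool here, so something has to give: lower the concurrency, raise
# max_connections further, or rely on most threads not holding a Postgres connection
# at the same time (the footnote above).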
What fixed my problem:
On my Ubuntu machine I had changed the vm.overcommit_memory setting to 1, as recommended by Redis, so that it can spawn its write-to-disk process without breaking the machine.
This is the right way to go, but it leaves Postgres vulnerable to being killed by the OOM (out of memory) killer if memory usage gets too high. It turns out that Postgres stops allowing new connections if it receives a kill signal from the OOM killer.
Once I restarted Postgres, Sidekiq was able to connect again. The longer-term solution is simply to work on the memory leaks and make sure memory usage doesn't get too high. It is also possible to configure the OOM killer to prioritize killing my Sidekiq processes before Postgres.
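For that last point, one option is to raise the Sidekiq processes' OOM score so the kernel prefers to kill them before Postgres. A Linux-only sketch, assuming you have permission to write the /proc files:
# Higher oom_score_adj (range -1000..1000) means "kill me first".
sidekiq_pids = `pgrep -f sidekiq`.split.map(&:to_i)
sidekiq_pids.each do |pid|
  begin
    File.write("/proc/#{pid}/oom_score_adj", "500")
  rescue Errno::ENOENT, Errno::EACCES
    # the process exited or we lack permission; skip it
  end
end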
I am having an issue deploying a JRuby Rails application to JBoss, using JNDI to manage database connections.
After the first request I get this error:
[CachedConnectionManager] Closing a connection for you. Please close them yourself
I think this is because JBoss uses a connection pool and expects Rails (JRuby) to release the connection after each use, which does not happen because Rails (ActiveRecord) has its own connection pool.
I've tried to call
ActiveRecord::Base.clear_active_connections!
after each request, in an after_filter, but this hasn't worked.
Does somebody have some idea?
Thanks in advance.
I am also having connection pool problems trying to run a Sinatra/ActiveRecord app in multithreaded mode in a Glassfish v3 container. The app includes a before block that calls ActiveRecord::Base.connection to force acquisition of a thread-owned connection, and an after block that calls ActiveRecord::Base.clear_active_connections! to release that connection.
Multi threaded mode
I have tried a great many variations with ActiveRecord 3.0.12 and 3.2.3: JNDI with a Glassfish connection pool, simply using ActiveRecord's own connection pool, and even monkey-patching the ActiveRecord connection pool to bypass it completely and use the Glassfish pool directly.
When testing with a simple multi-threaded HTTP fetcher, every variation I have tried has resulted in errors, with a greater percentage of failing requests as I increase the number of worker threads in the fetcher.
With AR 3.0.12, the typical error is the Glassfish pool throwing a timeout exception (for what it's worth, my AR connection pool is larger than my Glassfish pool; I understand AR will pool the connection adapter object, and arjdbc will acquire and release actual connections behind the scenes).
With AR 3.2.3 I get a more ominous error. The error below came from a test that did not use JNDI but simply used the ActiveRecord connection pool. This is one of the better configurations, in that about 95% of requests complete OK. The failing requests raise this exception:
org.jruby.exceptions.RaiseException: (ConcurrencyError) Detected invalid hash contents due to unsynchronized modifications with concurrent users
at org.jruby.RubyHash.keys(org/jruby/RubyHash.java:1356)
at ActiveRecord::ConnectionAdapters::ConnectionPool.release(/Users/pat/app/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:294)
at ActiveRecord::ConnectionAdapters::ConnectionPool.checkin(/Users/pat/app/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:282)
at MonitorMixin.mon_synchronize(classpath:/META-INF/jruby.home/lib/ruby/1.9/monitor.rb:201)
at MonitorMixin.mon_synchronize(classpath:/META-INF/jruby.home/lib/ruby/1.9/monitor.rb:200)
at ActiveRecord::ConnectionAdapters::ConnectionPool.checkin(/Users/pat/app/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:276)
at ActiveRecord::ConnectionAdapters::ConnectionPool.release_connection(/Users/pat/apps/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstrac/connection_pool.rb:110)
at ActiveRecord::ConnectionAdapters::ConnectionHandler.clear_active_connections!(/Users/pat/apps/glassfish/glassfish3/glassfish/domains/domain1/applications/lookup_service/WEB-INF/gems/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:375)
...
Single threaded mode
Losing confidence in the general thread safety of ActiveRecord (or arjdbc?), I gave up on using ActiveRecord multithreaded and configured Warbler and JRuby-Rack to do JRuby runtime pooling, emulating multiple single-threaded Ruby processes much like Unicorn, Thin, and other typical Ruby servers.
In config/warble.rb:
config.webxml.jruby.min.runtimes = 10
config.webxml.jruby.max.runtimes = 10
I reran my test, using JNDI this time, and all requests completed with no errors!
My Glassfish connection pool has size 5. Note that the number of JRuby runtimes is greater than the connection pool size. There is probably little point in making the JRuby runtime pool larger than the database connection pool, since in this application each request consumes a database connection, but I wanted to make sure that even with contention for database connections I did not get the timeout errors I had seen in multithreaded mode.
These concurrency problems are disappointing, to say the least. Is anyone successfully using ActiveRecord under moderate concurrency? I know that in a Rails app one must call config.threadsafe! to avoid the global lock. Looking at the code, it does not seem to modify any ActiveRecord setting; is there some configuration of ActiveRecord I am missing?
The Rails connection pool does hold connections open, even after they are returned to the pool, so the underlying JDBC connection does not get closed. Are you using a JNDI connection with activerecord-jdbc-adapter (adapter: jndi or jndi: location in database.yml)?
If so, this should happen automatically, and would be a bug.
If not, you can use the code in ar-jdbc and apply the appropriate connection pool callbacks yourself.
https://github.com/jruby/activerecord-jdbc-adapter/blob/master/lib/arjdbc/jdbc/callbacks.rb
class ActiveRecord::ConnectionAdapters::JdbcAdapter
  # This should force every connection to be closed when it gets checked back
  # into the connection pool
  include ActiveRecord::ConnectionAdapters::JndiConnectionPoolCallbacks
end
I have a site running a Rails application and Resque workers in production mode, on Ubuntu 9.10, Rails 2.3.4, Ruby Enterprise Edition 2010.01, and PostgreSQL 8.4.2.
The workers constantly raise errors: PGError: server closed the connection unexpectedly.
My best guess is that the master Resque process establishes a connection to the DB while loading the Rails app classes (e.g. Authlogic does this when you use User.acts_as_authentic), that connection becomes corrupted in the fork()ed process (on exit?), and subsequent forked children then get a broken global ActiveRecord::Base.connection.
I could reproduce very similar behaviour with this sample code imitating the fork/processing in a Resque worker. (AFAIK, users of libpq are advised to recreate connections in the forked process anyway; otherwise it's not safe.)
But the odd thing is that when I use pgbouncer or pgpool-II instead of a direct PostgreSQL connection, such errors do not appear.
So the question is: where and how should I dig to find out why it breaks for a plain connection but works through connection poolers? Or is there a reasonable workaround?
After doing a bit of research and trial and error, here is a clarification of what gc mentioned, for anyone who comes across the same issue.
Resque.after_fork = Proc.new { ActiveRecord::Base.establish_connection }
The above code should be placed in lib/tasks/resque.rake.
For example:
require 'resque/tasks'

task "resque:setup" => :environment do
  ENV['QUEUE'] = '*'
  Resque.after_fork do |job|
    ActiveRecord::Base.establish_connection
  end
end

desc "Alias for resque:work (to run workers on Heroku)"
task "jobs:work" => "resque:work"
Hope this helps someone, as much as it did for me.
When I created Nestor, I had the same kind of problem. The solution was to re-establish the connection in the forked process. See the relevant code at http://github.com/francois/nestor/blob/master/lib/nestor/mappers/rails/test/unit.rb#L162
From my very limited look at Resque code, I believe a call to #establish_connection should be done right about here: https://github.com/resque/resque/blob/master/lib/resque/worker.rb#L123
You cannot pass a libpq connection across a fork() (or to a new thread) unless your application takes very close care not to use it in conflicting ways (e.g. a mutex around every single attempt to use it, and you must never close it). This is the same for direct connections and for pgbouncer. If it worked through pgbouncer, that was pure luck in missing a race condition, and it will eventually break.
If your program uses forking, you must create the connection after the fork.
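A bare-bones sketch of that rule with ActiveRecord (calling establish_connection with no arguments re-reads the configured database settings for the current environment):
# Parent: drop the inherited connection so neither process reuses the other's socket.
ActiveRecord::Base.connection.disconnect!

pid = fork do
  # Child: open its own connection before touching the database.
  ActiveRecord::Base.establish_connection
  # ... the child's database work goes here ...
end

# Parent: reconnect for its own work, then wait for the child.
ActiveRecord::Base.establish_connection
Process.wait(pid)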
Change the Apache configuration and add:
PassengerSpawnMethod conservative
I had this issue with all of my mailer classes, and I needed to call ActiveRecord::Base.verify_active_connections! within the mailer methods in order to ensure a connection was made.
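Roughly what that looked like; a sketch assuming an old (Rails 2.3-era) ActionMailer, where verify_active_connections! still exists and reconnects stale connections before use. The mailer and method names are made up:
class UserMailer < ActionMailer::Base
  def welcome_email(user)
    # Reconnect any dead/stale connections before the mailer touches the database.
    ActiveRecord::Base.verify_active_connections!
    recipients user.email
    subject    "Welcome"
    body       :user => user
  end
end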