We have recently been having issues with Postgres running out of connection slots, and after a lot of debugging and shrugging of shoulders we have pretty much tracked it down to the fact that we misunderstood how connection pools work.
We use Rails, Postgres, Unicorn, and Delayed Job.
Are we correct to assume that the connection pool is process-specific, i.e. each process has its own pool of up to 10 connections (our connection pool limit) to the db?
And if there are no threads anywhere in the app, are we correct to assume that for the most part each process will use 1 connection, since nothing ever needs a second one?
Based on these assumptions, we tracked it down to the number of processes:
Web server: 4 Unicorn workers = 4 connections
Delayed Job: 3 servers × 30 processes each = 90 connections
That's 94 connections, and a couple of connections for rails console sessions plus a couple of rails runner or rake tasks would explain why we were hitting the limit so often, right? It has happened particularly often this week, after I converted a Ruby script into a rails runner script.
We are planning to increase max_connections from 100 to 200 or 250 to relieve this, but is there a trivial way to implement inter-process connection pooling in Rails?
You probably want to take a look at PgBouncer, a purpose-built PostgreSQL connection pooler. There are some notes on the wiki as well, and it's packaged for most Linux distros.
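If you go that route, here is a minimal sketch (not taken from the question) of pointing ActiveRecord at a local PgBouncer instead of Postgres directly; the host, port, database and username are placeholder assumptions, and prepared statements have to be disabled if PgBouncer runs in transaction pooling mode:

```ruby
# Sketch only: route ActiveRecord through a local PgBouncer (assumed to listen
# on 6432, PgBouncer's usual default) instead of Postgres on 5432.
require "active_record"

ActiveRecord::Base.establish_connection(
  adapter:  "postgresql",
  host:     "127.0.0.1",
  port:     6432,                 # PgBouncer's listen_port (assumed)
  database: "myapp_production",   # hypothetical database name
  username: "myapp",              # hypothetical role
  pool:     10,                   # per-process ActiveRecord pool, as before
  prepared_statements: false      # needed for PgBouncer's transaction pooling mode
)
```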
Related
Background:
On production we have a poorly understood error that occurs sporadically (more frequently by the week) and may take down our whole application at times – or sometimes just all of our background processes. Unfortunately, I am not certain what causes the issue; below is my working theory – could you please validate its logic?
The error preceding the downtime (occurring a couple of hundred times in a matter of seconds) is the PostgreSQL error FATAL: sorry, too many clients already.
Working theory:
Various parts of our API can request connections to the database. In our Ruby on Rails application, for example, we have 12 Puma workers with 16 threads each (12 * 16 = 192 possible db connections). We also have 10 background workers, each allowed a single db connection. If we also account for a single SSH session with 1 database connection, the maximum number of db connections we would have to anticipate is 192 + 10 + 1 = 203 PostgreSQL connections, a limit that is governed by the max_connections setting in the postgresql.conf config file.
Our max_connections, however, is still set to the PostgreSQL default of 100. My understanding is that this is problematic: when the application thinks more db connections are possible (looking at the application-side settings for Puma and our background workers), it allows new db connections to be made. But when those connections to PostgreSQL are initiated, PostgreSQL looks at its own configured maximum of 100 connections and rejects the connection.
If instead the number of "requestable" connections (in this case 203) were lower than or equal to the PostgreSQL max_connections, the application would use the pool timeout to queue the requested db connection until a db socket becomes available.
This is desirable, since a temporary excess of connection requests could be resolved within the pool timeout. Thus the solution to our problem is to make the "requestable" database connections <= the possible database connections. If that is still not enough, I should increase the 100 possible connections.
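To check the theory against the live system, here is a rough sketch that could be run from a Rails console (the worker counts are the figures quoted above, not read from any config):

```ruby
# Rough sanity check: compare the connections the app could theoretically open
# against what Postgres is configured to accept and what is currently in use.
puma_workers       = 12
puma_threads       = 16
background_workers = 10
other_sessions     = 1   # e.g. one SSH/psql session

requestable = puma_workers * puma_threads + background_workers + other_sessions

conn            = ActiveRecord::Base.connection
max_connections = conn.select_value("SHOW max_connections").to_i
in_use          = conn.select_value("SELECT count(*) FROM pg_stat_activity").to_i

puts "requestable: #{requestable}, max_connections: #{max_connections}, in use: #{in_use}"
puts "WARNING: the app can ask for more than Postgres allows" if requestable > max_connections
```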
Does this make sense...?
Any ideas or criticism would be very much appreciated!
Your app threads do not need to map 1-1 to database connections. You can use a connection pool for the database connections. See https://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/ConnectionPool.html
There is also a lot of good info on this subject at https://devcenter.heroku.com/articles/concurrency-and-database-connections
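To make the "threads do not map 1-1 to connections" point concrete: a thread only occupies a pool slot while it actually holds a connection. A minimal sketch:

```ruby
# The connection is checked out of ActiveRecord's pool only for the duration of
# the block; afterwards it goes back and another thread can use it.
ActiveRecord::Base.connection_pool.with_connection do |conn|
  conn.execute("SELECT 1")
end
# At this point the current thread holds no database connection at all.
```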
Here is my problem: each night, I have to process around 50k background jobs, each taking an average of 60s. Those jobs basically call the Facebook, Instagram and Twitter APIs to collect users' posts and save them in my DB. The jobs are processed by Sidekiq.
At first, my setup was:
:concurrency: 5 in sidekiq.yml
pool: 5 in my database.yml
RAILS_MAX_THREADS set to 5 in my Web Server (puma) configuration.
My understanding is:
my web server (rails s) will use at most 5 threads, hence at most 5 connections to my DB, which is OK as the connection pool is set to 5.
my Sidekiq process will use 5 threads (as the concurrency is set to 5), which is also OK as the connection pool is set to 5.
In order to process more jobs at the same time and reduce the overall time to process all my jobs, I decided to increase the Sidekiq concurrency to 25. In production, I provisioned a Heroku Postgres Standard database with a maximum of 120 connections, to be sure I would be able to use that Sidekiq concurrency.
Thus, now the setup is:
:concurrency: 25 in sidekiq.yml
pool: 25 in my database.yml
RAILS_MAX_THREADS set to 5 in my Web Server (puma) configuration.
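Just to spell out how I believe these settings relate, here is a boot-time check I could add (a sketch only, assuming the Sidekiq 6.x API where the configure block exposes an options hash):

```ruby
# config/initializers/sidekiq.rb (hypothetical location)
# Fail fast if the ActiveRecord pool is smaller than Sidekiq's concurrency,
# which would leave worker threads queueing for a database connection.
Sidekiq.configure_server do |config|
  pool_size   = ActiveRecord::Base.connection_pool.size
  concurrency = config.options[:concurrency]   # Sidekiq 6.x; Sidekiq 7 uses config.concurrency

  if pool_size < concurrency
    raise "AR pool (#{pool_size}) is smaller than Sidekiq concurrency (#{concurrency})"
  end
end
```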
I can see that 25 Sidekiq workers are working, but each job is taking way more time (sometimes more than 40 minutes instead of 1 minute)!?
Actually, I've been doing some tests and realized that processing 50 of my jobs with a Sidekiq concurrency of 5, 10 or 25 results in the same duration. It's as if, somehow, there were a bottleneck of 5 connections somewhere.
I have checked the Sidekiq documentation and some other posts on SO (sidekiq - Is concurrency > 50 stable?, Scaling sidekiq network archetecture: concurrency vs processes) but I haven't been able to solve my problem.
So I am wondering:
is my understanding of the Rails database.yml connection pool and Sidekiq concurrency right?
What's the correct way to set up those parameters?
Dropping this here in case someone else could use a quick, very general pointer:
Sometimes increasing the number of concurrent workers may not yield the expected results.
For instance, if there's a large discrepancy between the number of tasks and the number of cores, the scheduler will keep switching between your tasks and there isn't really much to gain; the jobs will just take about the same time or a bit longer.
Here's a link to a rather interesting read on how job scheduling works: https://en.wikipedia.org/wiki/Scheduling_(computing)#Operating_system_process_scheduler_implementations
There are other aspects to consider as well, such as datastore access: are your workers using the same table(s)? Are those tables backed by a storage engine that locks the entire table, such as MyISAM? If so, it won't matter that you have 100 workers running at the same time with plenty of RAM and cores; they will all be waiting in line for whichever query is currently running to release the lock on the table they're all meant to be working with.
This can also happen with tables using engines such as InnoDB, which doesn't lock the entire table on write; you may still have different workers contending for the same rows (InnoDB uses row-level locking), or simply some large indexes that don't lock anything but do slow down writes to the table.
Another issue I've encountered was related to Rails (which I'm assuming you're using) taking quite a toll on RAM in some cases, so you might want to look at your memory footprint as well.
My suggestion is to turn on logging and look at the data: where do your workers spend most of their time? Is it on the network layer (unlikely)? Waiting to get access to a core? Reading from or writing to your data store? Is your machine swapping?
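If you're on Rails, one cheap way to get that data is to subscribe to ActiveRecord's instrumentation and log how long each query takes from inside the workers. A rough sketch:

```ruby
# Sketch: log the duration of every SQL statement ActiveRecord runs, so you can
# see whether job time is dominated by the datastore or by something else.
ActiveSupport::Notifications.subscribe("sql.active_record") do |_name, started, finished, _id, payload|
  elapsed_ms = ((finished - started) * 1000).round(1)
  Rails.logger.info("[sql] #{elapsed_ms}ms #{payload[:sql].to_s[0, 120]}")
end
```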
Assume I have the below setup on Heroku + Rails, with one web dyno and two worker dynos.
Below is what I believe to be true, and I'm hoping that someone can confirm these statements or point out an assumption that is incorrect.
I'm confident in most of this, but I'm a bit confused by the usage of client and server, "connection pool" referring to both DB and Redis connections, and "worker" referring to both puma and heroku dyno workers.
I wanted to be crystal clear, and I hope this can also serve as a consolidated guide for any other beginners having trouble with this.
Thanks!
How everything interacts
A web dyno (where the Rails application runs)
only interacts with the DB when it needs to query it to serve a page request
only interacts with Redis when it is pushing jobs onto the Sidekiq queue (stored in Redis). It is the Sidekiq client
A Worker dyno
only interacts with the DB if the Sidekiq job it's running needs to query the DB
only interacts with Redis to pull jobs from the Sidekiq queue (stored in Redis). It is the Sidekiq server
ActiveRecord Pool Size
An ActiveRecord pool size of 25 means that each dyno has 25 connections to work with. (This is what I'm most unsure of. Is it each dyno or each Puma/Sidekiq worker?)
For the web dyno, it can only run 10 things (threads) at once (2 Puma workers x 5 threads), so it will only consume a maximum of 10 connections. 25 is above and beyond what it needs.
For worker dynos, the Sidekiq concurrency of 15 means 15 Sidekiq jobs (threads) can run at a time. Again, 25 connections is beyond what it needs, but it's a nice buffer to have in case there are stale or dead connections that won't clear.
In total, my Postgres DB can expect 10 connections from the web dyno and 15 connections from each worker dyno, for a total of 40 connections maximum.
Redis Pool Size
The web dyno (Sidekiq client) will use the connection pool size specified in the Sidekiq.configure_client block. Generally ~3 is sufficient because the client isn't constantly adding jobs to the queue. (Is it 3 per dyno, or 3 per Puma worker?)
Each worker dyno (Sidekiq server) will use the connection pool size specified in the Sidekiq.configure_server block. By default it's Sidekiq concurrency + 2, so here 17 Redis connections will be taken up by each worker dyno.
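For reference, this is roughly what those two blocks look like in an initializer (a sketch with the values discussed above; I'm assuming the classic Sidekiq API, where the :size key in config.redis sets the Redis connection pool size):

```ruby
Sidekiq.configure_client do |config|
  # Web dyno: it only enqueues jobs, so a small Redis pool is usually enough.
  config.redis = { url: ENV["REDIS_URL"], size: 3 }
end

Sidekiq.configure_server do |config|
  # Worker dyno: if :size is omitted, Sidekiq sizes the pool from its concurrency.
  config.redis = { url: ENV["REDIS_URL"] }
end
```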
I don't know Heroku + Rails but believe I can answer some of the more generic questions.
From the client's perspective, the setup/teardown of any connection is very expensive. The concept of connection pooling is to have a set of connections which are kept alive and can be used for some period of time. The JDK HttpUrlConnection does the same (assuming HTTP 1.1) so that - assuming you're going to the same server - the HTTP connection stays open, waiting for the next expected request. Same thing applies here - instead of closing a JDBC connection each time, the connection is maintained - assuming same server and authentication credentials - so the next request skips the unnecessary work and can immediately move forward in sending work to the database server.
There are many ways to maintain a client-side pool of connections: it may be part of the JDBC driver itself, or you might need to implement pooling with something like Apache Commons Pooling. But whatever you do, it's going to improve performance and reduce errors that might otherwise be caused by network hiccups preventing your client from connecting to the server.
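Translated to the Ruby side, the same idea looks roughly like this (a sketch using the connection_pool gem, which I believe Sidekiq itself uses internally; the pool size and database name are made up):

```ruby
# A fixed set of PG connections kept alive and shared across threads,
# instead of opening and tearing down a connection per request.
require "connection_pool"
require "pg"

DB_POOL = ConnectionPool.new(size: 5, timeout: 5) do
  PG.connect(dbname: "myapp_production")   # hypothetical database
end

# Borrow a connection, use it, and return it to the pool automatically.
DB_POOL.with do |conn|
  conn.exec("SELECT 1")
end
```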
Server-side, most database providers are configured with a pool of n possible connections that the database server may accept. Each additional connection has a footprint - usually quite small - so based on the memory available you can figure out the maximum number of available connections.
In most cases, you're going to want to have more connections available than you expect to use. For example, in Postgres, the configured connection limit applies to all connections to any database on that server. If you have development, test, and production all pointed at the same database server (obviously different databases), then connections used by test might prevent a production request from being fulfilled. Best not to be stingy.
I'm running 7 Sidekiq processes (concurrency set to 40) plus a Passenger web server, connecting to a Postgres database. The Rails pool setting is 100, and the Postgres max_connections setting is also at its default of 100.
I just added a new job class where each job makes multiple Postgres requests, and I started getting this error on many Sidekiq jobs and sometimes on my web server: PG::ConnectionBad: FATAL: remaining connection slots are reserved for non-replication superuser connections
I tried increasing Postgres max_connections to 200, and the error still occurs. Then I tried reducing the ActiveRecord pool setting to 25 (25 connections for each process = 200 total connections), figuring I might start getting DB connection timeout errors, but at least it would stop the "no remaining connection slots" errors.
But I'm still getting the "remaining connection slots are reserved" error.
The smarter way to deal with this issue might be to load the important Postgres data that I keep reusing into Redis and then access it from there - which obviously plays much more nicely and quickly with Sidekiq. But even as I do that, I'd like to understand what's going on here with the Postgres connections:
Am I likely leaking connections, and is that something I should be managing inside the Sidekiq jobs? (see Releasing ActiveRecord connection before the end of a Sidekiq job)
Should I look into more obscure things like locking/contention issues or threading issues with the PG driver? (see https://github.com/mperham/sidekiq/issues/594. I think I'm using ActiveRecord pretty simply, without much obscure or abnormal logic for a Rails app...)
Or maybe I'm just not understanding how the ActiveRecord pool setting and Postgres max_connections settings work together...?
My situation may be too specific to help many others running into this error, but I'll share what I've found out in case it helps to point you in the right direction.
Am I likely leaking connections, and is that something I should be managing inside the sidekiq jobs?
No, not likely. Sidekiq's default middleware includes a hook to close connections even if a job fails. It took me a long time to understand what the heck that means, so if you're not sure what that means, tl;dr: Sidekiq won't leak connections if you're using it normally.
Should I look into more obscure things like locking/contention issues or threading issues with the PG driver?
Unless you're using a very obscure setup, it's probably something simpler.
Or maybe I'm just not understanding how the ActiveRecord pool setting and Postgres max_connections settings work together...?
Anyone can feel free to correct me if I'm wrong, but here are the guidelines I'm going on for pool settings, max_connections, and Sidekiq processes:
Minimum DB pool size = sidekiq concurrency setting
Maximum DB pool size* = postgres max_connections / total sidekiq processes (+ leave a few connections for web processes)
*note that ActiveRecord will only create a new connection when a thread actually needs one, so if 95% of your threads don't use Postgres at the same time, you should be able to get away with far fewer max_connections than if every thread is trying to check out a connection at the same time.
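As a worked example of those two guidelines with the numbers from the question (7 Sidekiq processes, concurrency 40, max_connections 200; the reserve for web/console connections is my own assumption):

```ruby
max_connections   = 200
sidekiq_processes = 7
web_reserve       = 10   # assumed headroom for the web server and consoles

max_pool_per_process = (max_connections - web_reserve) / sidekiq_processes
# => 27, so a pool of ~25 per process keeps the worst case under max_connections.

min_pool_per_process = 40  # the Sidekiq concurrency, if every thread hits the DB at once
# Since 40 > 27, you either lower the concurrency, raise max_connections, or rely
# on most threads not holding a DB connection at the same time (see the note above).
```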
What fixed my problem:
On my Ubuntu machine, I had changed the vm.overcommit_memory setting to 1, as recommended by Redis, so that it can spawn its write-to-disk process without breaking the machine.
This is the right way to go, but it leaves Postgres vulnerable to being killed by the OOM (out-of-memory) killer if memory usage gets too high. It turns out that Postgres will stop allowing new connections if it receives a kill signal from the OOM killer.
Once I restarted Postgres, Sidekiq was able to connect again. The longer-term solution is simply to work on memory leaks and make sure memory usage doesn't get too high. It's also possible to configure the OOM killer to prioritize killing my Sidekiq processes before killing Postgres.
My database is very CPU-constrained, and I can't find the root cause of the issue. I currently have two application servers, each with a Rails API connecting to PostgreSQL via the ruby-pg gem. Both application servers also have Sidekiq running background jobs, and I have a handful of support servers processing new posts from a national feed via Sidekiq. If I were running out of memory, the solution would seemingly be straightforward. Any general ideas why I am CPU-constrained?
Database Specs:
Rackspace 8GB Performance Tier cloud VM (8GB RAM, 8x Core CPU, SSD)
Debian 7 Wheezy Linux OS
PostgreSQL 9.1 with PostGIS extension
Possible Problems:
PostgreSQL 9.1 is bad at indexes
The database has nearly 10GB of indexes. I am going to upgrade my database to PostgreSQL version >= 9.2. In version 9.2, index only scans were introduced.
Too many connections
In postgresql.conf, I have set max_connections to 500. Usually throughout the day only about 175 connections are utilized, but during peak times Sidekiq tasks will increase the current connections to 350. How many connections are recommended for an 8GB server instance?
Idle Connections
When I take a look at pg_stat_activity in the psql console, I see Sidekiq is leaving a lot of IDLE connections. Could these connections result in CPU inflation? Does the fix belong in the API or in Sidekiq?
Need a more powerful server
Maybe there is no bug and I simply need a bigger server instance. Again, this would make more sense if I were memory-bound. However, both app servers and 3 of the support Sidekiq servers are 4GB performance-tier instances. Essentially, the servers that interact with the database have, combined, more than double the resources of the database. Should this even matter?
Additional questions:
What tools/techniques should I employ to troubleshoot the issue?
Any basic settings in postgresql.conf related to CPU usage?
Are there any known issues related to Rails, Sidekiq, or the pg gem that could be a contributing factor? (I haven't seen any open issues.)
Are there any general PostgreSQL guidelines for CPU usage?
Any other ideas or thoughts that might help my search?
You are using massively too many concurrent connections. PostgreSQL will be wasting lots of its time on housekeeping and juggling concurrent queries. All the concurrent work will be fighting for CPU and buffer space, there'll be heavy contention on spinlocks, and it'll all generally be a mess.
On an 8 core machine, you should probably not have more than 20 actively working connections if you're mostly CPU constrained. If you're I/O limited, you can go higher, but 350 is just ridiculous.
If possible, put a PgBouncer in transaction pooling mode in front of your PostgreSQL instance, so queries get queued up and executed rapidly in series instead of slowly in parallel.
See number of database connections (Pg wiki).
Additionally, PostGIS can be very CPU-heavy; it sometimes needs to do very complex calculations. I suggest using the auto_explain module to record long-running queries, and using pg_stat_statements / pg_stat_plans to record what's taking up resources. Examine these queries to see if they need improvement.
Your idle-in-transaction sessions must be dealt with, too. Depending on why they're idle and whether they have a transaction ID or not, they might be causing serious table bloat. They're also creating unnecessary signalling overhead within PostgreSQL, as it has to do more co-ordination with backends that are actively doing things. Finally, the number of open transactions itself increases the cost of some internal housekeeping operations.
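If it helps, something like this will show which sessions are sitting idle in transaction and for how long (a sketch; these are the pg_stat_activity column names from 9.2 onwards, on 9.1 you would look for current_query = '<IDLE> in transaction' instead):

```ruby
# List idle-in-transaction backends, longest idle first.
ActiveRecord::Base.connection.select_all(<<~SQL).each { |row| p row }
  SELECT pid,
         now() - state_change AS idle_for,
         left(query, 80)      AS last_query
  FROM pg_stat_activity
  WHERE state = 'idle in transaction'
  ORDER BY idle_for DESC
SQL
```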
So. Your DB will probably perform better if you reduce the connection counts, put a PgBouncer in transaction pooling mode in front, and fix those idle connections.
Most likely you are CPU constrained because your work needs a lot of CPU. :)
9.1 is not generally bad at indexes. There may be some specific issues, as any version might have, and exactly what they are can change from version to version.
Index-only-scans are mostly a benefit when you are IO constrained. I wouldn't hold out much hope for that being a magic bullet for you.
350 connections are certainly not helpful, but they are probably not very harmful either. When they are harmful, though, it can be downright catastrophic. The correct value is determined more by the number of cores than by the amount of RAM. If it is easy to throttle down the Sidekiq connections, do it, even if you can't prove that it helps.
If the connections are just IDLE, not IDLE in transaction, then they probably aren't very harmful, but again there are a few cases where they can be. That is pretty much the same issue as the number of connections.
The connection you showed from top was idle in transaction. That status shouldn't be taking up much CPU, so it probably means the session is rapidly cycling through statements and top just happens to catch it while it is between them. But you didn't say how many similar lines there were in top; if it is just that one, it suggests your code is not running concurrently and 7 of your 8 CPUs are wasted.
Regarding the db server versus the other servers: if the database is fundamentally the limit, beating on it with a bigger hammer is not going to help. Often there is some flexibility about where computation is done. If you can get the app servers to do more of the computation that is currently done on the db, and let the db focus on ACID issues, that would be good. But no one but you can know whether that is possible or feasible.
My first stop would be to use pg_stat_statements to see what SQL statements are taking the most time. Maybe just adding an index to the slowest/most frequent query would make the problem magically go away.
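Something along these lines gives you that list (a sketch; pg_stat_statements must be installed and preloaded, and the timing column is total_time on PostgreSQL 12 and earlier, total_exec_time on 13+):

```ruby
# Top 10 statements by total time, via the pg_stat_statements extension.
rows = ActiveRecord::Base.connection.select_all(<<~SQL)
  SELECT left(query, 80) AS query,
         calls,
         total_time
  FROM pg_stat_statements
  ORDER BY total_time DESC
  LIMIT 10
SQL

rows.each do |r|
  puts format("%12.1f ms total  %8d calls  %s", r["total_time"].to_f, r["calls"].to_i, r["query"])
end
```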