I'm getting the following error with Ruby on Rails, Heroku and PostgreSQL:
PG::Error (FATAL: too many connections for role "********"
I've restarted the server several times to no avail. Any ideas?
Paying Heroku more money isn't always the answer.
I had this problem temporarily when I was running up against the dev-level database's row limit. Deleting rows using the console until I was below the limit solved the issue.
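For example, pruning old rows from a Heroku console session might look something like this (a hypothetical sketch; the model name and cutoff are placeholders, not from the original app):
# Run inside `heroku run rails console`; OldEvent and the 30-day cutoff are hypothetical.
OldEvent.where("created_at < ?", 30.days.ago).delete_all
# Re-check the row count afterwards with `heroku pg:info`.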
Another way you can run into this is if you're using Unicorn: the number of connections used is the number of dynos times the number of Unicorn workers per dyno. Heroku explains it all here, along with a way to configure it in config/unicorn.rb.
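A minimal config/unicorn.rb along those lines might look like this (a sketch; the worker count and the ActiveRecord fork handling are assumptions, adjust to your app):
# config/unicorn.rb (sketch); WEB_CONCURRENCY is the conventional Heroku env var.
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 15
preload_app true

before_fork do |server, worker|
  # Close the master process's connection before forking so workers don't share it.
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  # Each worker opens its own connection; total connections is roughly dynos x workers.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end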
Also, seeing the number of connections being used can be useful. Just run heroku pg:info.
Apparently I was on a dev-level DB. I upgraded to a Crane-level production DB and everything should be fine.
We are getting a lot of Sentry issues with PG::ConnectionBad on Postgres in AWS RDS and Ruby on Rails
PG::ConnectionBad Sidekiq/BookingExtensionCheckWorker
could not translate host name “ls-XXXXXXXXXXXXfee44.XXXXXXXXX.eu-west-1.rds.amazonaws.com” to address: Name or service not known
Two weeks ago, we migrated to a new database and changed the endpoint in the Ruby on Rails API files.
New database endpoint:
ls-XXXXXXXXXXXXXXXf3d4a.XXXXXXXX.eu-west-1.rds.amazonaws.com
It works fine, with no issues between the new database and the Ruby on Rails API. However, I get a lot of Sentry issues saying that Sidekiq has a problem connecting to the database, because it uses the old database address, which is no longer in use. I checked the Sidekiq database configuration; the code shows it is connected to the new database. It keeps going back to the old database when I run Ruby on Rails.
Is there some way to find out why Sidekiq keeps connecting to the old database?
Sidekiq is a background service, so whoever deployed it started it using some sort of setup. Right now you are seeing this issue and might think everything is fine, but actually most jobs are not running, which you will notice in a few days.
How can you check the jobs? If you have the Sidekiq web UI set up, the following URL will probably show all the jobs:
your_url/sidekiq
It will show you all jobs. I think there is an option there to restart services; just click to restart Sidekiq and everything should be fine.
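If the web UI isn't mounted yet, it is typically wired up in the routes file roughly like this (a sketch; add authentication before exposing it publicly):
# config/routes.rb
require "sidekiq/web"

Rails.application.routes.draw do
  # Exposes the Sidekiq dashboard at /sidekiq; protect it with authentication in production.
  mount Sidekiq::Web => "/sidekiq"
end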
How to start
Since your Sidekiq is running with the old configuration, the following steps could be dangerous; I think you must first check how you started your process. That said, here are some ways people configure and run it:
systemctl restart sidekiq
If this does not work, check whether whoever deployed it set up some sort of script inside the /etc/init.d folder.
Sometimes developers use the following simple line to run Sidekiq:
bundle exec sidekiq -d -P tmp/sidekiq.pid -L log/sidekiq.log
or
bundle exec sidekiqctl stop
I have had a Rails app deployed on Heroku for two years without trouble.
Today the app crashed.
The Rails logs show:
/app/vendor/bundle/ruby/2.3.0/gems/pg-0.21.0/lib/pg.rb:56:in `initialize': FATAL: sorry, too many clients already (PG::ConnectionBad)
FATAL: sorry, too many clients already
My Rails app is v5.2.0.
I use Heroku with 2 dynos.
The database is a Postgres on the "Hobby Dev" plan.
I tried:
Upgrading the database, but I got the same error:
heroku addons:create heroku-postgresql:standard-0 --follow DATABASE_URL --app locabri
Creating heroku-postgresql:standard-0 on ⬢ xxxx... !
▸ An error was encountered when contacting the add-on partner to create heroku-postgresql:standard-0: The database you are attempting to follow was not found.
Changing DB_POOL in the env variables (see the sketch below)
heroku pg:info
=== DATABASE_URL
Plan: Hobby-dev
Status: Available
Connections: 0/20
PG Version: 10.6
Created: 2017-05-29 07:40 UTC
Data Size: 138.8 MB
Tables: 12
Rows: 5748/10000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
Continuous Protection: Off
Region: Europe
Add-on: postgresql-regular-79163
But nothing worked.
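For reference, DB_POOL is usually read from config/database.yml via something like pool: <%= ENV.fetch("DB_POOL", 5) %>; an initializer-based sketch (the file name and the fallback of 5 are assumptions, and the exact approach depends on the Rails version) looks roughly like:
# config/initializers/database_pool.rb (hypothetical file name, minimal sketch)
Rails.application.config.after_initialize do
  ActiveSupport.on_load(:active_record) do
    # Re-establish the ActiveRecord connection with the pool size taken from DB_POOL.
    config = ActiveRecord::Base.configurations[Rails.env]
    config["pool"] = Integer(ENV["DB_POOL"] || 5)
    ActiveRecord::Base.establish_connection(config)
  end
end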
I can't do anything on the database because I can't connect to it, and I don't know how to restart it.
Thanks for your help or questions.
EDIT
heroku pg:killall
doesn't work.
SOLUTION
I finally found the solution: changing the number of available dynos!
heroku ps:scale web=0
heroku ps:scale web=2
heroku restart
Now I can investigate the "connection leak".
Recently I encountered the same error message for one of my Python apps. I reached out to Heroku and they acknowledged the error:
FATAL: sorry, too many clients already
"It is generally triggered because the hobby tier databases are a shared resource which means that several databases will share computation and storage from a host. We monitor these regularly and always aim to run them with spare capacity but occasionally coincidental spikes in usage from neighboring databases can cause the host machine to have trouble. This is one of the reasons we don't endorse the hobby tier for use in production applications."
They suggested trying one of the following options:
Provision a new hobby database on the same plan and migrate the data over. This will spin up a new database host, which means you no longer have the same noisy neighbors.
Upgrade to one of the professional plans
I haven't tried the first option, but the second one solved my issue. This answer shows you how to do the migration step by step.
This is a hard issue that has been happening in production for a long time, and we have no clue where it's coming from. We can sometimes reproduce it on localhost; Heroku Enterprise support has been clueless about it.
On our production database, we currently have the following setup:
Passenger Standalone, threading disabled, limited to 25 processes max; no minimum configured.
3 web dynos
Grouping pg_stat_activity by client_addr and counting connections per instance (SELECT client_addr, count(*) FROM pg_stat_activity GROUP BY client_addr) shows that more than 1 PostgreSQL connection is open for a single Passenger process during our peak days.
Assumptions:
A single address corresponds to a single dyno (confirmed by Heroku staff)
Passenger does not spawn more than 25 processes at a time (confirmed with passenger-status during those peaks)
Here is a screenshot of what SELECT * FROM pg_stat_activity; looks like:
In the screenshot, we can see 45 PostgreSQL connections coming from the same dyno that runs Passenger. Following the logic above, there should be no more than 1 connection per Passenger process, so at most 25.
The logs don't look unusual; nothing mentions either a dyno crash or a process crash.
Here is a screenshot of our passenger-status output for the same dyno (at a different time, just to show that no more than 25 processes are created for one dyno):
And finally, one of the responses we got from Heroku support (amazing support, by the way):
I have also seen previous reports of Passenger utilising more connections than expected, but most were closed due to difficulty reproducing, unfortunately.
The Passenger documentation explains that Passenger handles the ActiveRecord connections itself.
Any leads appreciated. Thanks!
Various information:
Ruby Version: 2.4.x
Rails Version: 5.1.x
Passenger Version: 5.3.x
PG Version: 10.x
ActiveRecord Version: 5.1.x
If you need any more info, just let me know in the comments, I will happily update this post.
One last thing: we use ActionCable. I've read somewhere that Passenger handles socket connections in an unusual way (it opens a somewhat hidden process to keep the connection alive). This is one of our leads, but so far we've had no luck reproducing it on localhost. If anyone can confirm how Passenger handles ActionCable connections, it would be much appreciated.
Update 1 (01/10/2018):
What we experimented with:
Disabled the New Relic auto-explain feature as explained here: https://devcenter.heroku.com/articles/forked-pg-connections#disabling-new-relic-explain
Ran a Passenger server locally with the min and max pool size set to 3 (more makes my computer burn), then killed processes with various signals (SIGKILL, SIGTERM) to see whether connections are closed properly. They are.
We finally managed to fix the issue we had on Passenger. We have had this issue for a very long time actually.
The fix
If you use ActionCable, and your default cable route is /cable, then change the Procfile from:
web: bundle exec passenger start -p $PORT --max-pool-size $PASSENGER_MAX_POOL_SIZE
to
web: bundle exec passenger start -p $PORT --max-pool-size $PASSENGER_MAX_POOL_SIZE --unlimited-concurrency-path /cable
Explanation
Before the change, each socket connection (ActionCable) would take up a whole process in Passenger.
But a socket should not take up a whole process. A single process can handle many open socket connections (more than ten thousand at the same time for some big names). Fortunately, we have far fewer socket connections, but still.
After the change, we basically told Passenger not to use a whole process for each socket connection, but rather to dedicate one process to handling all the socket connections.
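For context, the path passed to --unlimited-concurrency-path has to match wherever ActionCable is mounted; a minimal sketch, assuming an explicit mount rather than Rails' default mount path, looks like:
# config/routes.rb
Rails.application.routes.draw do
  # This path must match the value passed to --unlimited-concurrency-path in the Procfile.
  mount ActionCable.server => "/cable"
end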
Documentation
The in-depth documentation on how to do Sockets with Passenger: https://www.phusionpassenger.com/library/config/standalone/tuning_sse_and_websockets/
The flag to pass to Passenger: https://www.phusionpassenger.com/library/config/standalone/reference/#--unlimited-concurrency-path-unlimited_concurrency_paths
Some metrics, after 3 weeks with the fix
Number of forked processes on Passenger dramatically decreased (from 75 processes to ~ 15 processes)
Global memory usage on the web dynos dramatically decreased (related to previous point on forked Passenger processes)
The global number of PSQL connections dramatically decreased and has been steady for two days (even after deployment). (from 150 to ~30 connections)
Number of PSQL connections per dyno dramatically decreased, (from ~50 per dyno to less than 10 per dyno)
The number of Redis connections decreased and has been steady for two days (even after deployment)
Average memory usage on PostgreSQL dramatically decreased and has been steady for two days.
The overall throughput is a bit higher than usual (Throughput is the number of requests handled per minute)
I have been stuck on this issue for the past 3 days and am unsure where to look now.
I have a simple Sidekiq implementation in my Rails app.
I am working on: Rails 4.2.0, Sidekiq 4.1.2, Redis 3.0.6
The production app is running live on Heroku, and I have 1 worker dyno and 1 web dyno.
The issue is this, and I am unsure how to approach it or what I did to cause it.
When I run redis-cli on Heroku I can see the clients I have running. At most I have 2 or 3 clients running at any given time. I can easily kill the clients with:
CLIENT KILL TYPE normal
So that's all fine and dandy. Things get a little tricky when I fire up my server locally while working in development. All of a sudden my redis-cli shows that I have 19 clients running, which results in this being logged:
Err max clients reached
My assumption is that somehow, locally, I am directing Sidekiq to work off the production Redis URL. I have to admit my knowledge of Redis and Sidekiq is limited, but I do have a basic understanding of how it should work.
Any help or guidance would be appreciated.
Try using sidekiq -c 3 to limit your concurrency.
This ended up being a configuration error. In case anyone stumbles upon this question, hopefully this will help them not overlook something the way I did.
This issue was happening only when I fired up my local server, so I knew it had something to do with my local setup. I noticed that the production redis-cli was showing clients with my local IP in the ADDR column.
This led me to believe that my local machine was pushing clients to my production Redis server. Looking at my logs when I fired up my Procfile, I saw the production Redis URL there, which confirmed it.
Finally, after searching through my code, I discovered that I had actually added that URL to my .env, so when I fired up my server it was using the production Redis URL. I changed it in my .env file to the appropriate address for local development, redis://127.0.0.1:6379, and everything is now working as normal.
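To avoid this class of mistake, one option is to make the Sidekiq Redis configuration explicit and fall back to localhost when REDIS_URL isn't set; a minimal sketch (the initializer name and fallback URL are assumptions) looks like:
# config/initializers/sidekiq.rb (hypothetical file name)
redis_url = ENV.fetch("REDIS_URL", "redis://127.0.0.1:6379/0")

Sidekiq.configure_server do |config|
  # The server side processes jobs and needs its own Redis connection pool.
  config.redis = { url: redis_url }
end

Sidekiq.configure_client do |config|
  # The client side (your Rails processes) uses this to enqueue jobs.
  config.redis = { url: redis_url }
end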
I'm trying to deploy my first Rails app here, and I've been stuck on something since last night. I'm encountering some weird behaviors I can't explain.
I'm running Rails, Apache, Phusion Passenger, and for the moment, SQLite 3. (I'll move that over to MySQL shortly.) Currently this is being hosted on a too-small EC2 slice running Ubuntu Server 11.04 (Natty).
When I visit the address of the EC2 slice in the browser, I get the default Rails 500 notice. Here's what's weird, though. When I tail /log/production.log, I see the following error:
ActionView::Template::Error (SQLite3::SQLException: no such table: offers: SELECT "offers".* FROM "offers" WHERE (code = '') ORDER BY created_at desc LIMIT 25 OFFSET 0):
So, I manually opened up the development database in SQLite3, and saw that table in there. The production database, however, does not have that table.
OK, so I'm getting errors with the production database logged in the production log. The application has to be running in production mode, right?
That is what's throwing me. First of all, it's running in development mode on my development machine, and I didn't change any of the files when I deployed it. Neither did I use any fancy deployment tools to deploy it (which may have switched something without my knowledge) - I just did a simple git push.
Furthermore, I added the following to my httpd.conf VirtualHost config:
RailsEnv development
Also, when I run rails console, I get the following:
irb(main):002:0> Rails.env
=> "development"
So, the application really should be running in development mode, right? In fact, it seems to think (partially) that it is, right?
I'm really not sure what's happening here, and I'd really appreciate some expert advice.
Thanks everyone.
Edit - A few server reboots later, and now the thing just hangs when I try to view it in a browser. Also, Apache seems to hang when I try to restart it (hence the server reboots). Related problem, or a different problem altogether?
Well, this isn't a 100% satisfactory answer for me, but I did two things, and I think I got it working.
First, I re-installed the passenger Apache module. That may or may not have been necessary.
This was the big thing, though: after I had added the line to httpd.conf to pass the Rails Environment over to Passenger, I believe Apache restarted incorrectly. (Rather, I believe I've been restarting Apache incorrectly for my whole life!)
I was trying to restart Apache this way:
sudo /etc/init.d/apache2 restart
That has always worked for me (when programming PHP), but it simply wasn't working here. Apache would just stall on the restart.
This, however, works fine:
sudo apachectl restart
I'll have to ask Server Fault what the significant difference is between the two.
I hope that helps someone out.