Rails Postgres reconnection on failover for RDS - ruby-on-rails

I have a Rails application with a Postgres database under AWS RDS with multi-az architecture. The HA architecture used by RDS is master/slave and they provide the service with a single endpoint that points to the current master.
Whenever there's a database failover, Active Record will continue to try to connect to the same server, instead of retrying the connection in order to pick up the new IP for the master.
Is there a way to create a "global" rescue for the ActiveRecord::StatementInvalid: PG::ConnectionBad: PQsocket() can't get socket descriptor error that simply runs ActiveRecord::Base.connection_pool.disconnect!, so that the next query works?

I was able to make Active Record reconnect after the failover event by applying a monkey patch to postgres_adapter.
lib/core_ext/active_record/postgresql_adapter.rb:
require 'active_record/connection_adapters/postgresql_adapter'
class ActiveRecord::ConnectionAdapters::PostgreSQLAdapter
  private

  def exec_no_cache(sql, name, binds)
    log(sql, name, binds) { @connection.async_exec(sql, []) }
  rescue ActiveRecord::StatementInvalid => e
    if e.to_s.include?('PG::ConnectionBad')
      ActiveRecord::Base.connection_pool.disconnect!
    end
    raise e
  end
end
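For the patch to take effect it has to be loaded when the app boots; one way (an assumption about the app layout, not part of the original answer) is to require it from an initializer:

# config/initializers/postgresql_adapter_patch.rb
# Hypothetical glue code: load the monkey patch from lib/ once Rails has booted.
require Rails.root.join('lib', 'core_ext', 'active_record', 'postgresql_adapter').to_s

With that in place, the first query that hits a dead connection after a failover still raises, but the pool is flushed, so the next attempt reconnects and picks up the new master.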

Related

How can I switch database inside a Rails console on Heroku?

I'm trying to migrate users from one system to another. Each system has its own database and different classes.
My plan is to connect to one database and read some info from it via SQL commands:
ActiveRecord::Base.connection.execute(sql_command)
do something with the data, and then write some results to the new database using normal models.
I plan on doing that inside a Sidekiq job, but I'm trying to do some testing using a Rails console on Heroku.
This should be pretty straightforward, but it's proving ridiculously difficult.
When I launch a Rails console on Heroku, I'm connected to DATABASE_URL, which is ok, but when I try to connect to the old database and execute a command, like this:
ActiveRecord::Base.establish_connection(adapter: "postgresql", encoding: "unicode", pool: 10, url: "postgres://...info...")
ActiveRecord::Base.connection.execute("select count(*) from users")
I end up with:
PG::ConnectionBad (could not connect to server: No such file or directory)
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I can, however, connect to this old database by launching my Rails console on Heroku with DATABASE_URL set as the env variable:
$ heroku run 'DATABASE_URL=postgres://...info... rails console' -a app
BUT I don't know how to switch back to my new database so I can update things.
How does one switch databases at runtime when using a Rails console on Heroku?
Why try to switch the database at runtime? Why not have both connected at the same time and specify at the model level which database each one reads from and writes to? Rails supports connecting to multiple databases and specifying per model which database connection to use: https://guides.rubyonrails.org/active_record_multiple_databases.html
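For reference, a rough sketch of that multiple-database approach (Rails 6+ API from the linked guide; the seconddb name and model names here are hypothetical and need a matching entry in database.yml):

# app/models/second_db_record.rb
class SecondDbRecord < ApplicationRecord
  self.abstract_class = true
  # assumes database.yml defines a "seconddb" configuration for this environment
  connects_to database: { writing: :seconddb, reading: :seconddb }
end

# app/models/legacy_user.rb
class LegacyUser < SecondDbRecord
  self.table_name = "users"
end

Models inheriting from SecondDbRecord then talk to the old database, while everything else keeps using the primary one.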
The problem was that using url: didn't work and all parameters needed to be specified.
config = {"adapter"=>"postgresql", "encoding"=>"unicode", "pool"=>10, "username"=>"u...", "password"=>"p...", "port"=>5432, "database"=>"d...", "host"=>"ec2..."}
If you go for a three-tier database.yml, you can use this:
config = ActiveRecord::Base.configurations["production"]["seconddb"]
Then, you can use establish_connection
ActiveRecord::Base.establish_connection(config)
ActiveRecord::Base.connection.execute("select count(*) from users")
Once I started specifying username, password, port, database and host, it all worked like a charm.
To work with both databases at the same time, a good way is to create a class
class OtherDB < ActiveRecord::Base
  establish_connection(ActiveRecord::Base.configurations["production"]["seconddb"])
end
Then you can call things that way
OtherDB.table_name = "table_name"
OtherDB.first
ref (Establish a connection to another database only in a block?)
And to run SQL commands:
OtherDB.connection.execute("select count(*) from users")

How to check if postgres server is up through Rails App?

I am trying to create a health check page for my app. There are 3 different servers for back end, front end and database.
I have created an API to check if services (sidekiq, redis) are running.
Now, I want to check whether the Postgres server is up or not.
For this I have added a method
def database?
  ActiveRecord::Base.connection.active?
end
This method returns true when Postgres is running. If the Postgres server is stopped and I try to hit my API, I get:
PG::ConnectionBad (could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
):
How to rescue this error?
To prevent the Rails guts from being loaded before you actually check the DB connection, I would suggest creating a simple Rack middleware and putting it at the very beginning of the middleware stack:
class StatusMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    return @app.call(env) unless status_request?(env)

    # Feel free to respond with JS here if needed
    if database_alive?
      [200, {}, []]
    else
      [503, {}, []]
    end
  end

  private

  def status_request?(env)
    # Change the route to whatever you like
    env['PATH_INFO'] == '/postgres_status' && env['REQUEST_METHOD'] == 'GET'
  end

  def database_alive?
    ::ActiveRecord::Base.connection.verify!
    true
  rescue StandardError
    false
  end
end
And in your config/application.rb:
config.middleware.insert_before 0, StatusMiddleware
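If you would rather keep the check inside the app, the question's helper can simply rescue the connection error along the same lines (a minimal sketch, not part of the original answer):

def database?
  # verify! pings the server and raises if it cannot be reached
  ActiveRecord::Base.connection.verify!
  true
rescue StandardError
  false
end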
I haven't done anything like this, but here's how I'd do it.
Host a Rails app (or Sinatra, with no DB or a dummy one) at status.myapp.com which is just a simple index page with a bunch of checks - db, redis, sidekiq. I'd make sure it's hosted on the same machine as the production one.
db - try to establish a connection to your production db - fails or not
redis - try to see if there's a running process for redis-server
sidekiq - try to see if there's a running process for sidekiq
etc ...
Again, just an idea. Maybe someone did it differently.
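A rough sketch of what such checks could look like in plain Ruby (the env var, process names, and the presence of pgrep on the host are all assumptions):

require 'pg'

def db_up?(url)
  # open and immediately close a connection to the production database
  PG.connect(url).close
  true
rescue PG::Error
  false
end

def process_running?(pattern)
  system("pgrep -f #{pattern} > /dev/null")
end

db_up?(ENV['PRODUCTION_DATABASE_URL'])  # => true / false
process_running?('redis-server')
process_running?('sidekiq')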

Appfog + Rails 3 + Postgresql + IronWorker => could not connect to server: Connection timed out (PG::Error)

I have a Rails 3 app deployed on Appfog with a PostgreSQL database. I would like to run background jobs with IronWorker but I cannot connect to my database.
Here are my ironworker files (.worker + my_worker.rb)
.worker
runtime "ruby"
gemfile '../Gemfile'
dir "../app/models" # merge all models
full_remote_build true
exec "my_worker.rb"
my_worker.rb
require 'rubygems'
require 'active_record'
require 'pg'
require 'models/my_model.rb'
def setup_database
  puts "Database connection details:#{params['database'].inspect}"
  return unless params['database']
  # establish database connection
  ActiveRecord::Base.establish_connection(params['database'])
end

setup_database

@my_models = My_model.all
Then I create a task, passing the database connection to IronWorker:
client = IronWorkerNG::Client.new
client.tasks.create("my_worker", database:Rails.configuration.database_configuration[Rails.env])
And here is the error I get in IronWorker:
/task/__gems__/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1213:in `initialize': could not connect to server: Connection timed out (PG::Error)
Is the server running on host "10.0.48.220" and accepting TCP/IP connections on port 5432?
Can somebody help me connect to my Appfog DB from IronWorker?
Thanks a lot in advance
--
Mathieu
As far as I know, AppFog blocks external connections to their databases:
https://groups.google.com/forum/?fromgroups=#!topic/appfog-users/I31ni0pff9I
Possible solutions:
Poke AppFog
Switch to another database host
Make some kind of tunnel.
For example, the IronWorker platform allows you to run any non-privileged code during a worker's execution, so you can try to set up an SSH tunnel to a 'good' machine.
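A rough sketch of the tunnel idea inside a worker, using net/ssh local forwarding (every host, port and credential here is a placeholder; the tunnel machine must be reachable from IronWorker and allowed to reach the database):

require 'net/ssh'
require 'active_record'

Net::SSH.start('tunnel-host.example.com', 'tunneluser') do |ssh|
  # forward local port 15432 through the tunnel host to the database server
  ssh.forward.local(15432, 'db-host.internal', 5432)

  pid = fork do
    # child process: talk to Postgres through the forwarded port
    ActiveRecord::Base.establish_connection(
      adapter:  'postgresql',
      host:     '127.0.0.1',
      port:     15432,
      database: 'mydb',
      username: 'dbuser',
      password: 'secret'
    )
    puts ActiveRecord::Base.connection.select_value('select count(*) from my_models')
  end

  # parent process: keep servicing the SSH event loop until the child exits
  ssh.loop(0.1) { Process.waitpid(pid, Process::WNOHANG).nil? }
end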

Capistrano with PostgreSQL, error: database is being accessed by other users

I have a Rails app that uses PostgreSQL as a backend with a cert environment that tries to mimic production, except that it needs to have the database reset periodically for QA.
When I attempt to execute db:reset from a Capistrano task during deployment I get the error:
ERROR: database "database_name" is being accessed by other users
and the database cannot be dropped as part of the reset task, resulting in deployment failing. Is there a way I can reset database connections from Capistrano so I can successfully drop the database? Piping the SQL to psql from a Capistrano task might work, but I was wondering if there was a better way to go about this.
With PostgreSQL you can issue the following statement to return the backend pids of all open connections other than this one:
SELECT pid FROM pg_stat_activity where pid <> pg_backend_pid();
Then you can issue a termination request to each of those backends with
SELECT pg_terminate_backend($1);
binding the pids returned from the first statement to each pg_terminate_backend call.
If the other connections are not using the same user as you, you will have to connect as a superuser to successfully issue the terminates.
Admin signalling functions docs
Monitoring stats functions
pg_stat_activity view docs
UPDATE: Incorporating comments and expressing as Capistrano task:
desc "Force disconnect of open backends and drop database"
task :force_close_and_drop_db do
dbname = 'your_database_name'
run "psql -U postgres",
:data => <<-"PSQL"
REVOKE CONNECT ON DATABASE #{dbname} FROM public;
ALTER DATABASE #{dbname} CONNECTION LIMIT 0;
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE pid <> pg_backend_pid()
AND datname='#{dbname}';
DROP DATABASE #{dbname};
PSQL
end
I have combined dbenhur's answer with this Capistrano task to achieve the result I needed, and it works like a charm:
desc 'kill pgsql users so database can be dropped'
task :kill_postgres_connections do
  run 'echo "SELECT pg_terminate_backend(procpid) FROM pg_stat_activity WHERE datname=\'database_name\';" | psql -U postgres'
end
This assumes the auth_method for the postgres user is set to 'trust' in pg_hba.conf.
Then you can just call it in your deploy task after update_code and before migrate
after 'deploy:update_code', 'kill_postgres_connections'
You can simply monkeypatch the ActiveRecord code that does the dropping.
For Rails 3.x:
# lib/tasks/databases.rake
def drop_database(config)
  raise 'Only for Postgres...' unless config['adapter'] == 'postgresql'
  Rake::Task['environment'].invoke
  ActiveRecord::Base.connection.select_all "select pg_terminate_backend(pg_stat_activity.pid) from pg_stat_activity where datname='#{config['database']}' AND state='idle';"
  ActiveRecord::Base.establish_connection config.merge('database' => 'postgres', 'schema_search_path' => 'public')
  ActiveRecord::Base.connection.drop_database config['database']
end
For Rails 4.x:
# config/initializers/postgresql_database_tasks.rb
module ActiveRecord
  module Tasks
    class PostgreSQLDatabaseTasks
      def drop
        establish_master_connection
        connection.select_all "select pg_terminate_backend(pg_stat_activity.pid) from pg_stat_activity where datname='#{configuration['database']}' AND state='idle';"
        connection.drop_database configuration['database']
      end
    end
  end
end
(from: http://www.krautcomputing.com/blog/2014/01/10/how-to-drop-your-postgres-database-with-rails-4/)

rails Rake and mysql ssh port forwarding

I need to create a rake task to do some Active Record operations via an SSH tunnel.
The rake task is run on a remote Windows machine, so I would like to keep things in Ruby. This is my latest attempt.
desc "Syncronizes the tablets DB with the Server"
task(:sync => :environment) do
require 'rubygems'
require 'net/ssh'
begin
Thread.abort_on_exception = true
tunnel_thread = Thread.new do
Thread.current[:ready] = false
hostname = 'host'
username = 'tunneluser'
Net::SSH.start(hostname, username) do|ssh|
ssh.forward.local(3333, "mysqlhost.com", 3306)
Thread.current[:ready] = true
puts "ready thread"
ssh.loop(0) { true }
end
end
until tunnel_thread[:ready] == true do
end
puts "tunnel ready"
Importer.sync
rescue StandardError => e
puts "The Database Sync Failed."
end
end
The task seems to hang at "tunnel ready" and never attempts the sync.
I have had success when first running a rake task to create the tunnel and then running the rake sync task in a different terminal. I want to combine these, however, so that if there is an error with the tunnel it will not attempt the sync.
This is my first time using Ruby threads and Net::SSH forwarding, so I am not sure what the issue is here.
Any ideas?
Thanks
The issue is very likely the same as here:
Cannot connect to remote db using ssh tunnel and activerecord
Don't use threads; you need to fork the importer off in another process for it to work, otherwise you will lock up the SSH event loop.
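A rough sketch of that advice applied to the task above (untested; note that fork is unavailable on Windows MRI, so on the Windows machine from the question you would have to spawn a separate Ruby process instead):

Net::SSH.start('host', 'tunneluser') do |ssh|
  ssh.forward.local(3333, 'mysqlhost.com', 3306)

  pid = fork do
    # child process: the forward on localhost:3333 is already listening
    Importer.sync
  end

  # parent process: keep servicing the SSH event loop until the child exits
  ssh.loop(0.1) { Process.waitpid(pid, Process::WNOHANG).nil? }
end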
Just running the code itself as a Ruby script (with Importer.sync disabled) seems to work without any errors. This would suggest to me that the issue is with Importer.sync. Would it be possible for you to paste the Importer.sync code?
Just a guess, but could the issue here be that your :sync rake task has the rails environment as a prerequisite? Is there anything happening in your Importer class initialization that would rely on this SSH connection being available at load time in order for it to work correctly?
I wonder what would happen if instead of having environment be a prereq for this task, you tried...
...
Rake::Task["environment"].execute
Importer.sync
...
