PG::UnableToSend: no connection to the server in Rails

I have a production server running Ubuntu 14.04, Rails 4.2.0 and PostgreSQL 9.6.1 with the pg gem 0.21.0/0.20.0. In the last few days there has constantly been an error when accessing the table customer_input_datax_records on the PostgreSQL server.
D, [2017-07-20T18:08:39.166897 #1244] DEBUG -- : CustomerInputDatax::Record Load (0.1ms) SELECT "customer_input_datax_records".* FROM "customer_input_datax_records" WHERE ("customer_input_datax_records"."status" != $1) [["status", "email_sent"]]
E, [2017-07-20T18:08:39.166990 #1244] ERROR -- : PG::UnableToSend: no connection to the server
: SELECT "customer_input_datax_records".* FROM "customer_input_datax_records" WHERE ("customer_input_datax_records"."status" != $1)
The code that accesses the database runs in a Rufus scheduler 3.4.2 loop:
s = Rufus::Scheduler.singleton

s.every '2m' do
  new_signups = CustomerInputDatax::Record.where.not(:status => 'email_sent').all
  # .......
end
After restarting the server, there is usually no problem with the first request (or the first few). But after some time (e.g. 1 or 2 hours) the issue starts to show up, even though the app itself seems to be running fine (reading and writing records, creating new ones). There are some online posts about this error, but the problem they describe does not seem to be the one I am having. Before I reinstall the PostgreSQL server, I would like to get some ideas about what causes the "no connection".
UPDATE: database.yml
production:
  adapter: postgresql
  encoding: unicode
  host: localhost
  database: wb_production
  pool: 5
  username: postgres
  password: xxxxxxx

So, the error is "RAILS: PG::UnableToSend: no connection to the server".
That reminds me of Connection pool issue with ActiveRecord objects in rufus-scheduler
You could do
s = Rufus::Scheduler.singleton

s.every '2m' do
  ActiveRecord::Base.connection_pool.with_connection do
    new_signups = CustomerInputDatax::Record
      .where.not(status: 'email_sent')
      .all
    # ...
  end
end
digging
It would be great to know more about the problem.
I'd suggest trying this code:
s = Rufus::Scheduler.singleton

def s.on_error(job, error)
  Rails.logger.error(
    "err#{error.object_id} rufus-scheduler intercepted #{error.inspect}" +
    " in job #{job.inspect}")
  error.backtrace.each_with_index do |line, i|
    Rails.logger.error(
      "err#{error.object_id} #{i}: #{line}")
  end
end

s.every '2m' do
  new_signups = CustomerInputDatax::Record.where.not(:status => 'email_sent').all
  # .......
end
As soon as the problem manifests itself, I'd look for the full on_error output in the Rails log.
This on_error comes from https://github.com/jmettraux/rufus-scheduler#rufusscheduleron_errorjob-error

As we discussed in the comments, the problem seems related to your rufus-scheduler version.
I would suggest checking out the whenever gem and invoking a rake task instead of calling the ActiveRecord model directly, as sketched below.
It could also be a good idea to open an issue with the traceback of your error in the rufus-scheduler repo on GitHub (just to let them know...).
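For example, a rough sketch of that setup; the task name signups:process and the schedule below are hypothetical placeholders, not something taken from your app:
# lib/tasks/signups.rake (hypothetical task name)
namespace :signups do
  desc "Process records that have not been emailed yet"
  task process: :environment do
    CustomerInputDatax::Record.where.not(status: 'email_sent').find_each do |record|
      # ... send the email here, then mark the record ...
      record.update(status: 'email_sent')
    end
  end
end

# config/schedule.rb (whenever gem DSL)
every 2.minutes do
  rake "signups:process"
end
Since each cron run starts a fresh process, there is no long-lived scheduler thread holding on to a database connection that the server may have dropped.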

Errno::ENOTTY Inappropriate ioctl for device when connecting to a remote server through Net::SSH on SuSe (with Ruby on Rails 5.2.4)

My Ruby on Rails application remotely starts some scripts on a remote SuSE server (SUSE Linux Enterprise Server 15 SP2). It relies on the net-ssh gem, which is declared in the Gemfile: gem 'net-ssh'.
The script is triggered remotely through the following block:
Net::SSH.start(remote_host, remote_user, password: remote_password) do |ssh|
  feed_back = ssh.exec!("#{event.statement}")
end
This works as expected as long as the Rails server runs on Windows Server 2016, which is my DEV environment. But when I deploy to the Validation environment, which is SUSE Linux Enterprise Server 15 SP2, I get this error message:
Errno::ENOTTY in myController#myMethod
Inappropriate ioctl for device
On the other hand, issuing the SSH request through the command line - from SUSE to SUSE - works as expected too. Reading around, I did not find a relevant Net::SSH parameter to solve this.
Your suggestions are welcome!
I finally found out that the message refers to the operating mode of SSH: it requires a sort of terminal emulation - a so-called pty - wrapped into an SSH channel.
So I implemented it this way:
Net::SSH.start(remote_host, remote_user, password: remote_password) do |session|
  session.open_channel do |channel|
    channel.request_pty do |ch, success|
      raise "Error requesting pty" unless success
      puts "------------ pty successfully obtained"
    end
    channel.exec "#{@task.statement}" do |ch, success|
      abort "could not execute command" unless success
      channel.on_data do |ch, data|
        puts "------------ got stdout: #{data}"
        @task.update_attribute(:return_value, data)
      end
      channel.on_extended_data do |ch, type, data|
        puts "------------ got stderr: #{data}"
      end
      channel.on_close do |ch|
        puts "------------ channel is closing!"
      end
    end
  end
  ### Wait until the session closes
  session.loop
end
This solved my issue.
Note:
The answer proposed above was only a part of the solution. The same error occurred again with this source code when deploying to the production server.
The issue appears to be the password to the SSH target: I retyped it by hand instead of doing the usual copy/paste from MS Excel, and the SSH connection is now successful!
As the error raised is not a simple "connection refused", I suspect that the password string had a specific character encoding, or an unexpected ending character.
As the first proposed solution provides a working example, I leave it there.
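If you run into the same thing, a quick way to check whether the credential itself is the problem, before handing it to Net::SSH, is to inspect its encoding and raw bytes; a minimal sketch, using the remote_password variable from the snippets above:
# Log the encoding and bytes so a stray BOM, non-breaking space or trailing
# newline in the copied password becomes visible before the connection attempt.
Rails.logger.info("password encoding: #{remote_password.encoding}")
Rails.logger.info("password bytes: #{remote_password.bytes.inspect}")
# If trailing whitespace turns out to be the culprit, strip it:
cleaned_password = remote_password.strip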

Rails 4.2 Postgres 9.4.4 statement_timeout doesn't work

I am trying to set a statement_timeout. I tried both setting it in the database.yml file like this:
variables:
  statement_timeout: 1000
And this
ActiveRecord::Base.connection.execute("SET statement_timeout = 1000")
Tested with
ActiveRecord::Base.connection.execute("select pg_sleep(4)")
And neither of them has any effect.
I am running Postgres 10 locally and statement_timeout works as expected. But on my server, which runs Postgres 9.4.4, it simply doesn't do anything.
I've checked the Postgres docs for 9.4 and statement_timeout is available. Can anyone shed some light?
I wasn't able to replicate this issue locally using PostgreSQL 9.4.26, but it might be useful to share what I've tried and some thoughts about the server issue.
Here is what I've tried (a useful bit might be a query to verify the PG version from Rails):
# Confirming I am executing against 9.4.x PG:
irb(main):002:0> ActiveRecord::Base.connection.execute("select version()")
(10.8ms) select version()
=> #<PG::Result:0x00007ff74782e060 status=PGRES_TUPLES_OK ntuples=1 nfields=1 cmd_tuples=1>
irb(main):003:0> _.first
=> {"version"=>"PostgreSQL 9.4.26 on x86_64-apple-darwin18.7.0, compiled by Apple clang version 11.0.0 (clang-1100.0.33.17), 64-bit"}
# Set timeout:
irb(main):004:0> ActiveRecord::Base.connection.execute("SET statement_timeout = 1000")
(0.4ms) SET statement_timeout = 1000
=> #<PG::Result:0x00007ff7720a3d88 status=PGRES_COMMAND_OK ntuples=0 nfields=0 cmd_tuples=0>
# Confirm it works - it is ~1s and also stacktrace is pretty explicit about it:
irb(main):005:0> ActiveRecord::Base.connection.execute("select pg_sleep(4)")
(1071.2ms) select pg_sleep(4)
.... (stacktrace hidden)
ActiveRecord::StatementInvalid (PG::QueryCanceled: ERROR: canceling statement due to statement timeout)
: select pg_sleep(4)
Here is what to try
Since the issue occurs only on the server, and since statement_timeout works locally on another minor version, one thing that comes to mind is a lack of privileges to update statement_timeout from where it is attempted. Perhaps the Postgres role Rails uses for the DB connection is not allowed to update that setting.
The best would be to verify that, either via a Rails console on the server:
irb(main):004:0> ActiveRecord::Base.connection.execute("SET statement_timeout = 1000")
irb(main):005:0> ActiveRecord::Base.connection.execute("show statement_timeout").first
   (0.2ms)  show statement_timeout
=> {"statement_timeout"=>"1s"}
Or, it can be checked directly via psql console (some deployments allow this too):
psql myserveruser # if this was heroku's pg: heroku pg:psql
postgres=# set statement_timeout = 1000;
SET
postgres=# select pg_sleep(4);
ERROR: canceling statement due to statement timeout
Time: 1068.067 ms (00:01.068)
Another thing to keep in mind (taken from https://dba.stackexchange.com/a/83035/90903):
The way statement_timeout works, the time starts counting when the server receives a new command from the client...
And if a function does SET statement_timeout = 100; it will have an effect only starting at the next command from the client.
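If the session-level SET really is being blocked or reset, one possible workaround (just a sketch; app_user is a hypothetical role name and you need a role that is allowed to run ALTER ROLE) is to persist the timeout at the role level, so every new connection picks it up:
# Run once, e.g. from a Rails console or a migration.
# ALTER ROLE ... SET stores the value in the catalog and applies it to all
# future sessions opened by that role.
ActiveRecord::Base.connection.execute("ALTER ROLE app_user SET statement_timeout = '1s'")

# Verify from a fresh connection:
ActiveRecord::Base.connection_pool.disconnect!
ActiveRecord::Base.connection.execute("show statement_timeout").first
# => {"statement_timeout"=>"1s"}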

Configuring Backup gem in Rails 5.2 - Performing backup of PostgreSQL database

I would like to perform a regular backup of a PostgreSQL database, my current intention is to use the Backup and Whenever gems. I am relatively new to Rails and Postgres, so there is every chance I am making a very simple mistake...
I am currently trying to set up the process on my development machine (a Mac), but keep getting an error when trying to connect to the database.
In the terminal window, I have performed the following to check the details of my database and connection:
psql -d my_db_name
my_db_name=# \conninfo
You are connected to database "my_db_name" as user "my_MAC_username" via socket in "/tmp" at port "5432".
\q
I have also manually created a backup of the database:
pg_dump -U my_MAC_username -p 5432 my_db_name > name_of_backup_file
However, when I try to repeat this within db_backup.rb (created by the Backup gem) I get the following error:
[2018/10/03 19:59:00][error] Model::Error: Backup for Description for db_backup (db_backup) Failed!
--- Wrapped Exception ---
Database::PostgreSQL::Error: Dump Failed!
Pipeline STDERR Messages:
(Note: may be interleaved if multiple commands returned error messages)
pg_dump: [archiver (db)] connection to database "my_db_name" failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/pg.sock/.s.PGSQL.5432"?
The following system errors were returned:
Errno::EPERM: Operation not permitted - 'pg_dump' returned exit code: 1
The contents of my db_backup.rb:
Model.new(:db_backup, 'Description for db_backup') do
##
# PostgreSQL [Database]
#
database PostgreSQL do |db|
# To dump all databases, set `db.name = :all` (or leave blank)
db.name = "my_db_name"
db.username = "my_MAC_username"
#db.password = ""
db.host = "localhost"
db.port = 5432
db.socket = "/tmp/pg.sock"
# When dumping all databases, `skip_tables` and `only_tables` are ignored.
# db.skip_tables = ["skip", "these", "tables"]
# db.only_tables = ["only", "these", "tables"]
# db.additional_options = ["-xc", "-E=utf8"]
end
end
Could you please suggest what I need to do to resolve this issue and perform the same backup through the db_backup.rb code?
In case someone else gets stuck in a similar situation, the key to unlocking this problem was the lines:
psql -d my_db_name
my_db_name=# \conninfo
I realised that I needed to change db.socket = "/tmp/pg.sock" to db.socket = "/tmp", which seems to have resolved the issue.
However, I don't understand why the path on my computer differs from the default, as I didn't do anything to customise the installation of any gems or the Postgres app.
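In case it helps: you can ask the running server where it actually creates its socket, since Postgres.app and Homebrew builds typically use /tmp directly rather than a pg.sock subdirectory. A small sketch, run from a Rails console against the same database:
# unix_socket_directories (PostgreSQL 9.3+) is the server setting that controls
# where the .s.PGSQL.5432 socket file is created.
ActiveRecord::Base.connection.execute("SHOW unix_socket_directories").first
# => {"unix_socket_directories"=>"/tmp"} on a setup like this one
# The Backup gem's db.socket should point at that directory.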

rails i18n redis ERR unknown command [] railscast 256

Following this RailsCast: http://railscasts.com/episodes/256-i18n-backends but using Rails 5.2, I get this error:
Redis::CommandError in Pages#home
ERR unknown command '[]'
In config/initializers/i18n_backend.rb
TRANSLATION_STORE = Redis.new seems to be causing this problem.
Whereas TRANSLATION_STORE = {} works like a charm.
But without Redis!
Any hint?
The problem is defined here:
https://github.com/ruby-i18n/i18n/blob/master/lib/i18n/backend/key_value.rb#L25-L30
I haven't investigated redis 4, but it seems that these methods have been removed.
The solution is to wrap Redis in a small adapter class that provides these methods.
# config/initializers/redis.rb
class RedisHash
  def initialize(redis)
    @redis = redis
  end

  def [](key)
    @redis.get(key)
  end

  def []=(key, value)
    @redis.set(key, value)
  end
end

Redis.current = Redis.new(host: '127.0.0.1',
                          port: 6379,
                          db: 0,
                          thread_safe: true)

# config/initializers/i18n.rb
I18n.backend = I18n::Backend::KeyValue.new(RedisHash.new(Redis.current))
The code above is a sample initializer. It works with the newest version of redis, 4.3.5.
I've also tested redis-store/redis-i18n, and it works with the newest Redis versions too, but in my opinion that implementation puts a huge load on Redis.
EDIT: Following the Redis contributors' answer at https://github.com/redis/redis-rb/issues/997#issuecomment-871302883, I've updated my solution.
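Once the backend is wired up, translations can be written and read through the regular I18n API; a minimal sketch assuming the RedisHash initializer above (the greeting key is just an example):
# Store a translation in Redis through the KeyValue backend...
I18n.backend.store_translations(:en, { greeting: "Hello from Redis" }, escape: false)
# ...and read it back anywhere in the app.
I18n.t(:greeting) # => "Hello from Redis"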
Not a full answer, I know, but I had a similar problem after upgrading the redis gem from 3.3.1 to 4.0.2.
I set it back to 3.3.1 and it got fixed. The odd thing for me was that the problem only occurred in the production environment.
I am using a chained backend
I18n.backend = I18n::Backend::Chain.new(
  I18n::Backend::KeyValue.new(Redis.current),
  I18n.backend
)

How can I build a front end for querying a Redshift database (hopefully with Rails)

So I have a Redshift database with enough tables that it feels worth my time to build a front end to make querying it a little bit easier than just typing in SQL commands.
Ideally, I'd be able to do this by connecting the database to a Rails app (because I have a bit of experience with Rails). I'm not sure how I'd connect a remote Redshift database to a local Rails application though, or how to make activerecord work with redshift.
Does anyone have any suggestions/resources to help me get started? I'm open to other options to connect the Redshift database to a front end if there are pre-made options easier than Rails.
# app/models/data_warehouse.rb
class DataWarehouse < ActiveRecord::Base
  establish_connection "redshift_staging"
  # or, if you want to have a db per environment
  # establish_connection "redshift_#{Rails.env}"
end
Note that we are connecting on 5439, not the default 5432, so I specify the port.
I also specify a schema, beta, which is what we use for our unstable aggregates. You could either have a different database per environment as mentioned above, or use various schemas and include them in the search path for ActiveRecord.
# config/database.yml
redshift_staging:
  adapter: postgresql
  encoding: utf8
  database: db03
  port: 5439
  pool: 5
  schema_search_path: 'beta'
  username: admin
  password: supersecretpassword
  host: db03.myremotehost.us # your remote host here, might be an aws url from Redshift admin console
### OPTION 2, a direct PG connection
class DataWarehouse < ActiveRecord::Base
  attr_accessor :conn

  def initialize
    @conn = PG.connect(
      dbname:   'db03',
      port:     5439,
      user:     'admin',
      password: 'supersecretpassword',
      host:     'db03.myremotehost.us'
    )
    # PG.connect doesn't accept pool: or schema_search_path:; if you need the
    # beta schema, set it after connecting with @conn.exec("SET search_path TO beta").
  end
end
[DEV] main:0> redshift = DataWarehouse
E, [2014-07-17T11:09:17.758957 #44535] ERROR -- : PG::InsufficientPrivilege: ERROR: permission denied to set parameter "client_min_messages" to "notice" : SET client_min_messages TO 'notice'
(pry) output error: #<ActiveRecord::StatementInvalid: PG::InsufficientPrivilege: ERROR: permission denied to set parameter "client_min_messages" to "notice" : SET client_min_messages TO 'notice'>
UPDATE:
I ended up going with option 1, but using this adapter for now for multiple reasons:
https://github.com/fiksu/activerecord-redshift-adapter
Reason 1: ActiveRecord postgresql adapter sets client_min_messages
Reason 2: adapter also attempts to set Time Zone, which redshift doesn't allow (http://docs.aws.amazon.com/redshift/latest/dg/c_redshift-and-postgres-sql.html)
Reason 3: Even if you change the ActiveRecord code for the first two errors, you run into additional errors complaining that Redshift is based on PostgreSQL 8.0. At that point I moved on to the adapter; I will revisit and update if I find something better later.
I renamed my table to base_aggregate_redshift_tests (notice the plural) so ActiveRecord was easily able to connect. If you can't change your table names in Redshift, set the table name explicitly as I do below.
#Gemfile:
gem 'activerecord4-redshift-adapter', github: 'aamine/activerecord4-redshift-adapter'
Option 1
# config/database.yml
redshift_staging:
  adapter: redshift
  encoding: utf8
  database: db03
  port: 5439
  pool: 5
  username: admin
  password: supersecretpassword
  host: db03.myremotehost.us
  timeout: 5000
# app/models/base_aggregates_redshift_test.rb
# Model named to match my tables in Redshift; if you want, you can set the
# table name explicitly like I do below.
class BaseAggregatesRedshiftTest < ActiveRecord::Base
  establish_connection "redshift_staging"
  self.table_name = "beta.base_aggregates_v2"
end
In the console, using self.table_name - notice it queries the right table, so you can name your models whatever you want:
[DEV] main:0> redshift = BaseAggregatesRedshiftTest.first
D, [2014-07-17T15:31:58.678103 #43776] DEBUG -- : BaseAggregatesRedshiftTest Load (45.6ms) SELECT "beta"."base_aggregates_v2".* FROM "beta"."base_aggregates_v2" LIMIT 1
Option 2
# app/models/base_aggregates_redshift_test.rb
class BaseAggregatesRedshiftTest < ActiveRecord::Base
  self.table_name = "beta.base_aggregates_v2"
  ActiveRecord::Base.establish_connection(
    adapter: 'redshift',
    encoding: 'utf8',
    database: 'staging',
    port: '5439',
    pool: '5',
    username: 'admin',
    password: 'supersecretpassword',
    search_schema: 'beta',
    host: 'db03.myremotehost.us',
    timeout: '5000'
  )
end
In the console, an abbreviated example of the first record; now it's using the new name for my Redshift table (I'm assuming the records are at base_aggregates_redshift_tests because I didn't set the table_name in this run):
[DEV] main:0> redshift = BaseAggregatesRedshiftTest.first
D, [2014-07-17T15:09:39.388918 #11537] DEBUG -- : BaseAggregatesRedshiftTest Load (45.3ms) SELECT "base_aggregates_redshift_tests".* FROM "base_aggregates_redshift_tests" LIMIT 1
#<BaseAggregatesRedshiftTest:0x007fd8c4a12580> {
:truncated_month => Thu, 31 Jan 2013 19:00:00 EST -05:00,
:dma => "Cityville",
:group_id => 9712338,
:dma_id => 9999
}
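Since the goal is a query front end, it can also be handy to run ad-hoc SQL against the established connection instead of defining a model per table; a small sketch using standard ActiveRecord (the query itself is made up):
# select_all returns an ActiveRecord::Result that behaves like an array of hashes.
result = BaseAggregatesRedshiftTest.connection.select_all(
  "SELECT dma, count(*) AS n FROM beta.base_aggregates_v2 GROUP BY dma LIMIT 10"
)
result.to_a # => [{"dma" => "Cityville", "n" => 123}, ...]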
Good luck @johncorser!
This tutorial helps you setup a rails app with a redshift adapter:
https://www.credible.com/code/setting-up-a-data-warehouse-with-aws-redshift-and-ruby/
In a nutshell:
Clone the sample app:
git clone git@github.com:tuesy/redshift-ruby-tutorial.git
cd redshift-ruby-tutorial
Set up ENV variables via ~/.bashrc (or dotenv):
export REDSHIFT_HOST=redshift-ruby-tutorial.ccmj2nxbsay7.us-east-1.redshift.amazonaws.com
export REDSHIFT_PORT=5439
export REDSHIFT_USER=deploy
export REDSHIFT_PASSWORD=<your password here>
export REDSHIFT_DATABASE=analytics
export REDSHIFT_BUCKET=redshift-ruby-tutorial
Use the activerecord4-redshift-adapter gem; in the Gemfile:
gem 'activerecord4-redshift-adapter', '~> 0.2.0' # For Rails 4.2
gem 'activerecord4-redshift-adapter', '~> 0.1.1' # For Rails 4.1
Then you can query Redshift like you would with a normal AR model:
bundle exec rails c
RedshiftUser.count
(Disclosure: I haven't yet given this method a try, but I may soon.)
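For reference, the model behind that call presumably looks something like the sketch below, built from the ENV variables above; this is an assumption on my part and the tutorial may wire it up differently:
class RedshiftUser < ActiveRecord::Base
  # Point this model (and its subclasses) at Redshift instead of the app DB.
  establish_connection(
    adapter:  'redshift',
    host:     ENV['REDSHIFT_HOST'],
    port:     ENV['REDSHIFT_PORT'],
    database: ENV['REDSHIFT_DATABASE'],
    username: ENV['REDSHIFT_USER'],
    password: ENV['REDSHIFT_PASSWORD']
  )
end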
You might want to consider http://www.looker.com/. It's a frontend for exploring your DB, allowing easily savable queries and a GUI that the business guys can also use.
