How to override or disable Postgrex timeout setting: 15 seconds? - erlang

Working on an Elixir app. There's a Scraper function that copies data from a Google Spreadsheet into a Postgres database via the Postgrex driver. The connection through the Google API works fine, but the function always times out after 15 seconds.
01:48:36.654 [info] Running MyApp.Endpoint with Cowboy using http://localhost:80
Interactive Elixir (1.6.4) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> Scraper.update
542
iex(2)> 01:48:55.889 [error] Postgrex.Protocol (#PID<0.324.0>) disconnected: ** (DBConnection.ConnectionError) owner #PID<0.445.0> timed out because it owned the connection for longer than 15000ms
I have tried changing the 15_000 ms timeout setting everywhere in the source, but it seems the setting has been compiled into the binary. I am not an Erlang/Elixir developer, just helping a client install the app for a demo. My questions are:
How can I recompile the Postgrex driver with the modified timeout setting?
Is there another way to override this setting, or disable the timeout altogether? I have tried find-replace of basically every instance of "15" in the source.

When issuing a query with Postgrex, the last argument can be a keyword list of options:
Postgrex.query!(pid, "AN SQL STATEMENT;", [], timeout: 50_000, pool_timeout: 40_000)
https://hexdocs.pm/postgrex/Postgrex.html#query/4
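If the goal is to disable the timeout altogether, the :timeout option also accepts :infinity. A minimal sketch, assuming a direct Postgrex connection (the connection parameters are placeholders):
{:ok, pid} =
  Postgrex.start_link(
    hostname: "localhost",
    username: "postgres",
    password: "postgres",
    database: "my_app_dev"
  )

# :timeout is the per-query timeout; :infinity disables it for this call.
Postgrex.query!(pid, "SELECT pg_sleep(30);", [], timeout: :infinity)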

If the app uses Ecto on top of Postgrex, the timeouts can also be raised in the repo configuration:
config :my_app, MyApp.Repo,
adapter: Ecto.Adapters.Postgres,
username: "postgres",
password: "postgres",
database: "my_app_dev",
hostname: "localhost",
timeout: 600_000,
ownership_timeout: 600_000,
pool_timeout: 600_000
Look at timeout and ownership_timeout; these values are set to 600 seconds here, and probably not all of them are necessary. (The error in the question mentions the owner of the connection, which points at ownership_timeout.)
Note that I once had to remove everything from _build and recompile the application before these values were actually applied.
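With Ecto the timeout can also be overridden per call instead of globally. A minimal sketch, assuming a repo named MyApp.Repo and a schema MyApp.Record (both hypothetical names):
import Ecto.Query

# The :timeout option is passed down to the adapter for this query only;
# :infinity disables the query timeout for this one call.
MyApp.Repo.all(from(r in MyApp.Record), timeout: :infinity)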

Related

ActiveRecord connection errors with PGBouncer

We recently introduced PGBouncer into our stack as we were exhausting our connections to our RDS instance. Upon doing so we started to see all sorts of connection exceptions, which I posted below. The only thing of note is that we use multiple databases via Rails' built-in multi-db support. Only the primary/writer instance is going through PGBouncer at the moment, and that is where we are seeing all of the exceptions; the reader connections seem to be fine.
I'm wondering whether we need to fine-tune some of the timeouts or connection pool sizes, or what else could be causing this.
Exceptions
ActiveRecord::StatementInvalid: PG::ConnectionBad: PQconsumeInput() server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
ActiveRecord::ConnectionNotEstablished: connection to server at "{db server IP}", port 5432 failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing
ActiveRecord::StatementInvalid: PG::ConnectionBad: PQsocket() can't get socket descriptor
PGBouncer Config
We're running quite a few smaller instances of PGBouncer, since it is single-process and, I believe, single-threaded as well. We plan to fine-tune this later.
[databases]
production = our_connection_string
[pgbouncer]
max_client_conn = 500
pool_mode = transaction
default_pool_size = 200
server_idle_timeout = 30
reserve_pool_size = 0
Rails DB Config
default: &default
adapter: postgis
postgis_extension: true
encoding: unicode
pool: <%= ENV['DB_POOL'] || ENV['RAILS_MAX_THREADS'] || 5 %>
idle_timeout: 300
checkout_timeout: 5
schema_search_path: public, tiger
prepared_statements: false
production:
primary:
<<: *default
url: <%= ENV['DATABASE_URL'] %>
primary_replica:
<<: *default
url: <%= ENV['DATABASE_REPLICA_URL'] %>
Update 1
We attempted going with the default value for server_idle_timeout of 600 seconds and that doesn't seem to have made a difference.

Can we change the connection to an Elasticsearch server dynamically in a Rails app at run time

I have 2 Elasticsearch servers. In my Rails app, can I change the connection to the Elasticsearch servers at run time?
For example:
- If user 1 logs in to the app, it should connect to Elasticsearch server 1
- If user 2 logs in to the app, it should connect to Elasticsearch server 2
Thanks
You can use randomize_hosts when creating the connection:
args = {
hosts: "https://host1.local:9091,https://host2.local:9091",
adapter: :httpclient,
logger: (Rails.env.development? || Rails.env.test?) ? Rails.logger : nil,
reload_on_failure: false,
randomize_hosts: true,
request_timeout: 5
}
client = Elasticsearch::Client.new(args)
Randomize hosts doc
There you can also read about host selection strategies other than round robin; you could implement your own, as in the sketch below.
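The client also accepts a custom selector class, so the choice of host can be made per request. A rough sketch of the idea; the module and option names below come from the elasticsearch-transport gem and should be verified against your installed version:
require 'elasticsearch'

# A naive selector that always picks the first configured host;
# replace the logic in #select with your own routing rule
# (e.g. based on the current user).
class FirstHostSelector
  include Elasticsearch::Transport::Transport::Connections::Selector::Base

  def select(options = {})
    connections.first
  end
end

client = Elasticsearch::Client.new(
  hosts: ['https://host1.local:9091', 'https://host2.local:9091'],
  selector_class: FirstHostSelector
)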

Configuring Backup gem in Rails 5.2 - Performing backup of PostgreSQL database

I would like to perform a regular backup of a PostgreSQL database; my current intention is to use the Backup and Whenever gems. I am relatively new to Rails and Postgres, so there is every chance I am making a very simple mistake...
I am currently trying to set up the process on my development machine (a Mac), but keep getting an error when trying to connect to the database.
In the terminal window, I have performed the following to check the details of my database and connection:
psql -d my_db_name
my_db_name=# \conninfo
You are connected to database "my_db_name" as user "my_MAC_username" via socket in "/tmp" at port "5432".
\q
I have also manually created a backup of the database:
pg_dump -U my_MAC_username -p 5432 my_db_name > name_of_backup_file
However, when I try to repeat this within db_backup.rb (created by the Backup gem) I get the following error:
[2018/10/03 19:59:00][error] Model::Error: Backup for Description for db_backup (db_backup) Failed!
--- Wrapped Exception ---
Database::PostgreSQL::Error: Dump Failed!
Pipeline STDERR Messages:
(Note: may be interleaved if multiple commands returned error messages)
pg_dump: [archiver (db)] connection to database "my_db_name" failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/pg.sock/.s.PGSQL.5432"?
The following system errors were returned:
Errno::EPERM: Operation not permitted - 'pg_dump' returned exit code: 1
The contents of my db_backup.rb:
Model.new(:db_backup, 'Description for db_backup') do
##
# PostgreSQL [Database]
#
database PostgreSQL do |db|
# To dump all databases, set `db.name = :all` (or leave blank)
db.name = "my_db_name"
db.username = "my_MAC_username"
#db.password = ""
db.host = "localhost"
db.port = 5432
db.socket = "/tmp/pg.sock"
# When dumping all databases, `skip_tables` and `only_tables` are ignored.
# db.skip_tables = ["skip", "these", "tables"]
# db.only_tables = ["only", "these", "tables"]
# db.additional_options = ["-xc", "-E=utf8"]
end
end
Could you please suggest what I need to do to resolve this issue and perform the same backup through the db_backup.rb code?
In case someone else gets stuck in a similar situation, the key to unlocking this problem was the lines:
psql -d my_db_name
my_db_name=# \conninfo
I realised that I needed to change db.socket = "/tmp/pg.sock" to db.socket = "/tmp", which seems to have resolved the issue.
However, I don't understand why the path on my computer differs from the default, as I didn't do anything to customise the installation of any gems or the Postgres App.
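For what it's worth, libpq treats the socket setting as the directory that contains the socket file; the file itself is always named .s.PGSQL.<port> inside that directory. That is why \conninfo reporting a socket in "/tmp" means the setting should be "/tmp", not "/tmp/pg.sock". The relevant block then looks like this (a sketch based on the config above):
database PostgreSQL do |db|
  db.name     = "my_db_name"
  db.username = "my_MAC_username"
  db.port     = 5432
  # The directory holding .s.PGSQL.5432, as reported by \conninfo
  db.socket   = "/tmp"
end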

How To Get The Value Of An Environment Variable In Elixir On Windows?

Windows 10
Elixir 1.3.1
Per the advice in this article, I've tried to modify my config files to use the "${ENV_VAR}" syntax, but when I try to compile the code, Elixir complains about the values of the configuration settings. So I tried the syntax directly in iex, and it doesn't seem to work there either.
iex(1)> "${PATH}"
"${PATH}"
iex(2)> System.get_env("PATH")
"C:\\Program Files\\erl8.0\\erts-8.0\\bin;C: . . ." (rest omitted for brevity's sake)
I'd really like to use the "${ENV_VAR}" notation because it'd be nice to not have to hand-edit the sys.config file. Am I doing something wrong or is this just a Windows specific issue?
Here's part of my config file (even though, as I say, it seems I can reproduce the behavior in iex):
config :riismi, ecto_repos: [Riismi.Repo]
config :riismi, Riismi.Mailer,
adapter: Bamboo.SMTPAdapter,
server: "smtp.gmail.com",
port: 465,
username: "${RMI_MAIL_SERVERUSER}",
password: "${RMI_MAIL_SERVERPWD}",
tls: :if_available, # can be `:always` or `:never` or `:if_available`
ssl: true, # can be `true`
retries: 3
As I say, I realize this is likely to be a Windows issue; I just wanted to ensure that I'm not missing something otherwise.
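One thing worth noting (not from the original thread): the "${ENV_VAR}" strings are only substituted by release tooling (e.g. when a Distillery/exrm release runs with REPLACE_OS_VARS=true); under plain Mix or iex they are literal strings on every platform, not just Windows. Since config files are ordinary Elixir evaluated at compile time, the environment can instead be read directly. A minimal sketch:
config :riismi, Riismi.Mailer,
  adapter: Bamboo.SMTPAdapter,
  server: "smtp.gmail.com",
  port: 465,
  # Read at compile time on the build machine
  username: System.get_env("RMI_MAIL_SERVERUSER"),
  password: System.get_env("RMI_MAIL_SERVERPWD")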

How to debug/fix randomly occurring Redis::TimeoutError?

I have a Rails app running which uses Redis quite a lot. However, I'm seeing quite a few Redis::TimeoutError exceptions here and there, from time to time. There is no pattern to the circumstances: they occur both in the web app and in the background jobs (which are processed using Sidekiq), not often but from time to time.
Now I have no idea how to track down the root cause of this and hence no idea how to fix it.
Here is a little background on my setup:
The Redis instance is running on a separate physical server, which is connected to both my web server and background server over a private local 1Gbit network. All servers are running Ubuntu 12.04. The Redis version is 2.6.10. I'm connecting from my Rails app (which is 3.2) using an initializer like so:
require 'redis'
require 'redis/objects'
REDIS = Redis.new(:url => APP_CONFIG['REDIS_URL'])
Redis.current = REDIS
This is the output of redis-cli INFO:
# Server
redis_version:2.6.10
redis_git_sha1:00000000
redis_git_dirty:0
redis_mode:standalone
os:Linux 3.2.0-38-generic x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.6.3
process_id:28475
run_id:d89bbb1b81d3169c4228cf23c0988ae437d496a1
tcp_port:6379
uptime_in_seconds:14913365
uptime_in_days:172
lru_clock:1507056
# Clients
connected_clients:233
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:19
# Memory
used_memory:801637360
used_memory_human:764.50M
used_memory_rss:594706432
used_memory_peak:4295394784
used_memory_peak_human:4.00G
used_memory_lua:31744
mem_fragmentation_ratio:0.74
mem_allocator:jemalloc-3.3.0
# Persistence
loading:0
rdb_changes_since_last_save:23166
rdb_bgsave_in_progress:0
rdb_last_save_time:1378219310
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:4
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
# Stats
total_connections_received:932395
total_commands_processed:3088408103
instantaneous_ops_per_sec:837
rejected_connections:0
expired_keys:31428
evicted_keys:3007
keyspace_hits:124093049
keyspace_misses:53060192
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:17651
# Replication
role:master
connected_slaves:1
slave0:192.168.0.2,6379,online
# CPU
used_cpu_sys:54000.21
used_cpu_user:73692.52
used_cpu_sys_children:36229.79
used_cpu_user_children:420655.84
# Keyspace
db0:keys=1498962,expires=1310
In my Redis config I have the following set:
daemonize yes
pidfile /var/run/redis/redis-server.pid
timeout 0
loglevel notice
databases 1
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis
slave-serve-stale-data yes
slave-read-only yes
slave-priority 100
maxclients 1000
maxmemory 4GB
maxmemory-policy volatile-lru
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
That could come from many issues:
because you use the SAVE command (it is set up in your conf), generating a lot of I/O and hammering the server, especially if you use EBS volumes on Amazon;
because you have a Redis slave (same as before: it triggers a background save before mirroring);
because you use KEYS *, which is very slow when there are a lot of keys.
Try the SLOWLOG command on the Redis server to see if there are slow queries; a sketch of checking it from Ruby follows below.
Write some logs when the TimeoutError happens, to see whether the failing Redis command shows up in the slowlog.
Adjust your timeout setting on the client side.
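A quick way to inspect the slowlog from Ruby (a sketch; the URL is a placeholder):
require 'redis'

redis = Redis.new(url: 'redis://localhost:6379')
# Each slowlog entry is [id, unix timestamp, execution time in
# microseconds, command as an array of strings].
redis.slowlog('get', 10).each do |id, timestamp, microseconds, command|
  puts "#{Time.at(timestamp)} #{microseconds}us #{command.inspect}"
end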
It might be a problem on the client side even if the server performs normally. Each Redis client instance, not just the server, has its own timeout setting, and the default can be quite short, so if the server does not respond within that time, a Redis::TimeoutError will be raised by the client.
The first thing you can try is to set a longer timeout value and see if things get better:
redis_url = 'redis://user:password@host:port/'
redis = Redis.new(:url => redis_url, :timeout => 0.7)
Even with a longer timeout setting there is no guarantee that timeouts will never happen, but then it would be a problem with the design of your system.
Are you rolling your own code to connect to Redis, or just letting Sidekiq handle it? I think you should design your connection code to reconnect if the connection has been lost: you can rescue Redis::BaseConnectionError and retry, for example:
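A minimal sketch of that idea (the helper name and key are made up; Redis::TimeoutError is a subclass of Redis::BaseConnectionError):
require 'redis'

REDIS = Redis.new(url: 'redis://localhost:6379')

# Retries the block a few times when the connection drops;
# redis-rb re-establishes the connection on the next command.
def with_redis_retry(attempts = 3)
  yield
rescue Redis::BaseConnectionError
  attempts -= 1
  raise if attempts <= 0
  sleep 0.1
  retry
end

with_redis_retry { REDIS.get('some_key') }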
