Sharing a connection pool for all Rails uses of Redis - ruby-on-rails

Summary: I'm using a single Redis instance for the Rails cache, Action Cable, and (non-cache) use in my Rails code. Should all these uses share a single connection pool, and if so, how can I configure this, since there seem to be totally different ways to set up the pooling for each?
Details follow since people seem to like to see them.
I'm using Redis as the adapter for the Rails cache, with the following config.
config.cache_store = :redis_cache_store, {
  url: "redis://XXX.net:6379/0",
  pool_size: ENV.fetch('RAILS_MAX_THREADS') { 5 },
  password: Rails.application.credentials.dig(:redis, :password),
  expires_in: 24.hours,
  pool_timeout: 5
}
I've set the expires_in option so that I can set maxmemory-policy in my Redis config to one of the volatile-* policies (which evict only keys with an expiration set), letting me use the same Redis instance for both cache and non-cache data. Now, I want to also access Redis directly for non-cache related tasks, via something like the example config below:
# Cast to Integer: ENV.fetch returns a String when the variable is set
pool_size = ENV.fetch("RAILS_MAX_THREADS", 5).to_i

redis_pool = ConnectionPool.new(size: pool_size) do
  Redis.new(
    url: "redis://XXX.net:6379/0",
    password: Rails.application.credentials.dig(:redis, :password) # same credentials as the cache store
  )
end
But I'm not sure if that is correct. Shouldn't I be sharing a connection pool between the cache_store connections and the other connections to Redis? If so, how can I do this?
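For reference, here is roughly how I'd expect to use such a pool for direct access (a sketch using the connection_pool gem's with API; the key names are just placeholders):

redis_pool.with do |conn|
  conn.set("some_key", "some_value")
  conn.get("some_key")
end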
To complicate matters further, I'm also using Redis for Action Cable via a config like:
production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://XXX.net:6379/0" } %>
  password: <%= Rails.application.credentials.dig(:redis, :password) %>
I've seen suggestions that Action Cable will automatically handle connection pooling with Redis if the connection_pool gem is installed (is this right?), but I feel like all these connections should be drawing from the same pool. If so, how can I make that happen?

Related

Slow and frequent PostgreSQL database connection booting in Rails API

I have a Rails API with a PostgreSQL database.
Some requests to the API show a strange behavior that
doesn't depend on the endpoint.
These requests (around 5-10% of total requests) start with the same 7 database queries:
SET client_min_messages TO ?
SET standard_conforming_strings = on
SET SESSION timezone TO ?
SELECT t.oid, t.typname FROM pg_type AS t WHERE t.typname IN ( ? )
...
The request also takes a long time to start before the 7 queries are executed.
It seems to be the database adapter (ActiveRecord::ConnectionAdapters::PostgreSQLAdapter) initiating a connection.
This significantly slows down the query.
I am using a PostgreSQL 11.6 AWS RDS instance, with default parameters.
Here is my database.yml config:
default: &default
  adapter: postgresql
  encoding: unicode
  username: *****
  password: *****
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>

production:
  <<: *default
  database: *****
  username: *****
  password: *****
  pool: 50
How do I reduce the number of connections being initiated?
Is there a way to cache the queries?
Thank you,
Ran into the same thing and here's what I think is happening:
Every time a new connection is instantiated, it performs the bootstrapping queries you mention above. Assuming a new process is not spawned, a new connection would need to be instantiated because existing connections have been reaped by ActiveRecord.
By default, the ConnectionPool::Reaper will disconnect any connection that has been idle for over 5 minutes.
See: https://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/ConnectionPool.html
If your API does not receive any requests for a period of 5 minutes and all the connections are reaped, the next request will need to instantiate a new connection and therefore run the queries.
How do I reduce the number of connections being initiated?
You could set an idle_timeout of 0 in database.yml. This would prevent ActiveRecord from reaping the connections but could potentially cause issues depending on how many processes are running and what your PG max_connections value is.
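A sketch of what that could look like, merged into the production block of the database.yml above (0 disables reaping of idle connections):

production:
  <<: *default
  pool: 50
  idle_timeout: 0 # keep idle connections open indefinitely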
Is there a way to cache the queries?
There's a closed issue that talks about this but it doesn't look like it's possible to cache these today.
https://github.com/rails/rails/issues/35311

Setting up Rails to use multiple Redis instances

We have set up multiple Redis instances in Heroku to handle different caches and queues. How do you set up Rails to use the different instances?
You can instantiate as many Redis clients as you want and use them in different places.
Say, something like this:
cache1 = Redis.new(host: 'cache1.redis-server.com', port: 6379)
cache2 = Redis.new(host: 'cache2.redis-server.com', port: 6379)
queues = Redis.new(host: 'queues.redis-server.com', port: 6379)
cache1.set('my_key', 'my_value')
queues.lpush('my_queue', 'my_job')
If you are also using Sidekiq and want a separate connection for it, refer to the docs here.
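For example, a minimal sketch of pointing Sidekiq at its own Redis instance (the URL below is a placeholder; check the Sidekiq docs for the exact options your version supports):

Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://queues.redis-server.com:6379/0' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://queues.redis-server.com:6379/0' }
end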

Ruby Rack App - Couchbase DNS/Hostname lookup error

I'm using couchbase as session storage in my rack application (couchbase gem v1.3.9).
When I test the rack app with more requests (for example, 50 parallel threads in JMeter)
or just reload the app many times, I always get this error:
Rack app error: Couchbase::Error::UnknownHost: bootstrap error, DNS/Hostname lookup failed (error=0x15)>
My questions:
Has anyone else seen this error when using Couchbase with Ruby, and how can I solve it?
What about the performance of Couchbase as a session store in a Ruby Rack application?
Additional informations:
My config.ru
session_options = PlainRackApplication::Config.session_options
use ActionDispatch::Session::CouchbaseStore, session_options
run RackApp.new
and my Couchbase options:
module PlainRackApplication
  class Config
    @session_options = {
      path: '/',
      namespace: 'sessions_',
      key: 'foo_session',
      expire_after: 30.days,
      couchbase: { bucket: 'foo',
                   username: 'foo',
                   password: 'bar',
                   default_format: :json }
    }

    # expose the class-level options referenced in config.ru
    class << self
      attr_reader :session_options
    end
  end
end
In what environment did you encounter this error?
If this happens on your localhost, verify that
127.0.0.1 localhost
is included in your /etc/hosts. Worked for me.
The (error=0x15) error message suggests that one of the host names in the bootstrap list is incorrect.
The client randomises the bootstrap list, which explains why you only see the error when you make more requests or reload the application a number of times.
Furthermore, creating and destroying Couchbase client objects can slow down your application. If you can, you should try to use a long-lived persistent connection that is shared by all of your requests.
A number of users do use Couchbase as a session store, mainly because of its high performance.
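For example, a rough sketch of memoizing one client for the life of the process instead of connecting on every request (this assumes the 1.x couchbase gem's Couchbase.connect API; the bucket and credentials mirror the config above):

require 'couchbase'

module PlainRackApplication
  # One long-lived client, shared across requests
  def self.couchbase_client
    @couchbase_client ||= Couchbase.connect(bucket: 'foo',
                                            username: 'foo',
                                            password: 'bar')
  end
end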

Mongodb server goes down, how to prevent Rails app from timing out?

I'm using central_logger to store logs from our Rails app in MongoDB. When the Mongo server went down recently, our app started timing out on Mongo inserts. How can I prevent Rails from timing out if the Mongo server goes down?
The Ruby driver supports timeouts, like so:
@conn = Connection.new("localhost", 27017, :pool_size => 5, :timeout => 5)
But the central_logger gem isn't using that. So you can either fork it to add that in there, or monkey-patch the CentralLogger::MongoLogger.connect method.
It currently has
def connect
  @mongo_connection ||= Mongo::Connection.new(@db_configuration['host'],
                                              @db_configuration['port'],
                                              :auto_reconnect => true).db(@db_configuration['database'])
  if @db_configuration['username'] && @db_configuration['password']
    # the driver stores credentials in case reconnection is required
    @authenticated = @mongo_connection.authenticate(@db_configuration['username'],
                                                    @db_configuration['password'])
  end
end
You would need to monkey-patch :timeout => 5 (or whatever) into the Mongo::Connection.new call.
I would bet the author of central_logger would like to have this in there, so a fork and pull request would likely be welcome.
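A hedged sketch of that monkey-patch, reopening connect to pass :timeout through (verify the method body against the central_logger version you actually run):

module CentralLogger
  class MongoLogger
    def connect
      # Same as the original, plus :timeout so a downed server fails fast
      @mongo_connection ||= Mongo::Connection.new(@db_configuration['host'],
                                                  @db_configuration['port'],
                                                  :auto_reconnect => true,
                                                  :timeout => 5).db(@db_configuration['database'])
      if @db_configuration['username'] && @db_configuration['password']
        @authenticated = @mongo_connection.authenticate(@db_configuration['username'],
                                                        @db_configuration['password'])
      end
    end
  end
end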
You could use replica sets - so if the master goes down, it can failover automatically to one of the replicas.
Usually the database insert should be fast, so you could work with the Ruby timeout:
require 'timeout'

Timeout::timeout(0.2) do
  # ... write to log server
end
This code will time out and continue after 200 milliseconds in any case.

FTPS (TLS/SSL) from Ruby on Rails App

I have an FTP server which only accepts connections through running FTPS (explicit FTP over TLS). I need to be able to connect to this using a Ruby on Rails app.
Does anybody know of a method to do this? I have tried the Net::FTP library but this does not appear to support FTPS connections.
How about using Net::FTPTLS?
Since Ruby 2.4, TLS over FTP has been available with Net::FTP. This has caused gems like double-bag-ftps to become archived, and all your Google searches to yield outdated answers.
If you can do explicit FTP over TLS (the client connects normally, then issues an AUTH TLS command to switch to TLS mode), then great... that should work with Ruby's Net::FTP out of the box by just passing { ssl: true } in the options.
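For example, a minimal sketch of explicit FTPS with the stdlib (the host and credentials are placeholders):

require 'net/ftp'

# ssl: true upgrades the control connection via AUTH TLS
Net::FTP.open('ftp.example.com', ssl: true, username: 'user', password: 'secret') do |ftp|
  ftp.list.each { |line| puts line }
end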
Implicit FTP over TLS (which runs over TLS from the get-go) does not work out of the box, however, and you must override Net::FTP's connect method to establish an SSL socket and then optionally send commands to the FTP server.
Inidka K posted a GitHub Gist, but since those are bad form (they can go stale), I've posted my version, which works against a ShareFile implicit FTP setup (which seems to only support implicit FTP):
require 'net/ftp'

class ImplicitFtp < Net::FTP
  FTP_PORT = 990

  def connect(host, port = FTP_PORT)
    synchronize do
      @host = host
      @bare_sock = open_socket(host, port)
      begin
        ssl_sock = start_tls_session(Socket.tcp(host, port))
        @sock = BufferedSSLSocket.new(ssl_sock, read_timeout: @read_timeout)
        voidresp
        if @private_data_connection
          voidcmd("PBSZ 0")
          voidcmd("PROT P")
        end
      rescue OpenSSL::SSL::SSLError, Net::OpenTimeout
        @sock.close
        raise
      end
    end
  end
end
Then, in your code:
ftp_options = {
  port: 990,
  ssl: true,
  debug_mode: true, # If you want to see what's going on
  username: FTP_USER,
  password: FTP_PASS
}

ftp = ImplicitFtp.open(FTP_HOST, ftp_options)
puts ftp.list
ftp.close
I did something like this with implicit/explicit FTPS. I used the double-bag-ftps gem, which I patched to support reuse of the SSL session (a requirement for a lot of FTPS servers).
I put the code on GitHub here: https://github.com/alain75007/double-bag-ftps
EDIT: I figured out how to get it running locally, but am having issues getting it to work on Heroku. That's a bit of a departure from this question, so I've created a new one:
Heroku with FTPTLS - Error on SSL Connection
require 'net/ftptls'

ftp = Net::FTPTLS.new()
ftp.passive = true
# make sure you define port_number
ftp.connect('host.com', port_number)
ftp.login('Username', 'Password')
ftp.gettextfile('filename.ext', 'where/to/save/file.ext')
ftp.close
If you want to use Implicit FTPS, please try this gist.
For Explicit FTPs, you can use the ruby gem ftpfxp.
Support for implicit FTPS was merged into ruby/net-ftp in January 2022 (in this PR). If you want to make use of this straight away, you can include the latest version directly in your Gemfile:
gem "net-ftp", github: "ruby/net-ftp", branch: "master"
Then you just need:
options = {
  ssl: true,
  port: 990,
  implicit_ftps: true,
  username: "your-user",
  password: "*********",
  debug_mode: true
}

Net::FTP.open("yourhost.com", options) do |ftp|
  ftp.list.map { |f| puts f }
end
I implemented an FTPS solution using the double-bag-ftps gem.
