I am using a global variable in a Rails application to store a Redis client from the redis gem. In config/initializers/redis.rb, I have:
$redis = Redis.new(host: "localhost", port: 6379)
Then in application code, I use $redis to work with the data in the Redis store.
I also use Puma as the web server in the production environment, and Capistrano to deploy code. As part of the deploy, Capistrano restarts Puma.
Every time I start or restart the Puma web servers, I get an "Internal Server Error" on the first request that uses $redis to access data in the Redis store. The errors look like: Redis::InheritedError (Tried to use a connection from a child process without reconnecting. You need to reconnect to Redis after forking.)
Searching around on Google and Stack Overflow led me to think that I needed to reconnect to Redis after Puma forks its child processes. So I added this to my config/puma.rb:
on_worker_boot do
  $redis.ping
end
But I was still getting the "Internal Server Error", caused by the same Redis::InheritedError.
I saw this post: http://qiita.com/yaotti/items/18433802bf1720fc0c53. I then tried adding this to config/puma.rb:
on_restart do
  $redis.quit
end
That did not work.
I also tried calling $redis.ping right after Redis.new in config/initializers/redis.rb. That did not work either.
I got this error whether Puma was started with no Puma processes running or restarted while an instance of the Puma process was already running.
Refreshing the page gets me past the error, but I want to get rid of it even on the first attempt to use $redis. I suspect I am not using the redis gem, or configuring its reconnection, correctly. Could someone tell me:
Is this the right way to use the redis gem in a Rails application?
How should the Redis connection be reconnected under Puma?
The puma gem documentation says, "You should place code to close global log files, redis connections, etc in this block so that their file descriptors don't leak into the restarted process. Failure to do so will result in slowly running out of descriptors and eventually obscure crashes as the server is restarted many times." It was talking about the on_restart block, but it did not say how that should be done.
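A commonly suggested alternative to pinging the inherited client (not from the question, just a sketch reusing the same host/port as the initializer above) is to build a fresh client in each forked worker:

```ruby
# config/puma.rb -- sketch: replace the inherited client with a brand-new
# connection in every forked worker, instead of pinging the old one
on_worker_boot do
  $redis = Redis.new(host: "localhost", port: 6379)
end
```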
I was able to fix the error with a monkey patch. It changes the behaviour so the client just reconnects instead of raising Redis::InheritedError:
###### MONKEYPATCH redis-rb
# https://github.com/redis/redis-rb/issues/364
# taken from https://github.com/redis/redis-rb/pull/389/files#diff-597c124889a64c18744b52ef9687c572R314
class Redis
  class Client
    def ensure_connected
      tries = 0

      begin
        if connected?
          # reconnect instead of raising when used from a forked child
          reconnect if Process.pid != @pid
        else
          connect
        end

        tries += 1

        yield
      rescue ConnectionError
        disconnect

        if tries < 2 && @reconnect
          retry
        else
          raise
        end
      rescue Exception
        disconnect
        raise
      end
    end
  end
end
## MONKEYPATCH end
I'm running a Rails application with IdentityCache, using Puma in clustered mode with workers=4.
It is essential that the reconnects happen in the on_worker_boot callback.
I have to reconnect both Rails.cache and IdentityCache to avoid restart errors. Here's what I got working:
puma-config.rb
on_worker_boot do
  puts 'On worker boot...'
  puts "Reconnecting Rails.cache"
  Rails.cache.reconnect

  begin
    puts "Reconnecting IdentityCache"
    IdentityCache.cache.cache_backend.reconnect
  rescue Exception => e
    puts "Error trying to reconnect identity_cache_store: #{e.message}"
  end
end
Then I see the following in my logs, showing me proof that it all works:
On worker boot...
Reconnecting Rails.cache
Reconnecting IdentityCache
On worker boot...
Reconnecting Rails.cache
Reconnecting IdentityCache
On worker boot...
Reconnecting Rails.cache
Reconnecting IdentityCache
On worker boot...
Reconnecting Rails.cache
Reconnecting IdentityCache
[7109] - Worker 7115 booted, phase: 0
[7109] - Worker 7123 booted, phase: 0
[7109] - Worker 7119 booted, phase: 0
[7109] - Worker 7127 booted, phase: 0
Sure enough, the first-request problems that used to appear after a server restart are gone. QED.
Upgrade redis-rb to 3.1.0 or above.
The details: https://github.com/redis/redis-rb/pull/414/files#
Or apply the change as a monkey patch:
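If the app is on Bundler, pinning the gem ensures the fixed client is used (a sketch of the Gemfile line):

```ruby
# Gemfile -- require a redis-rb release that handles forked processes
gem 'redis', '>= 3.1.0'
```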
# https://github.com/redis/redis-rb/pull/414/files#diff-5bc007010e6c2e0aa70b64d6f87985c20986ee1b2882b63a89b52659ee9c91f8
class Redis
  class Client
    def ensure_connected
      tries = 0

      begin
        if connected?
          if Process.pid != @pid
            raise InheritedError,
                  "Tried to use a connection from a child process without reconnecting. " +
                  "You need to reconnect to Redis after forking."
          end
        else
          connect
        end

        tries += 1

        yield
      rescue ConnectionError, InheritedError
        disconnect

        if tries < 2 && @reconnect
          retry
        else
          raise
        end
      rescue Exception
        disconnect
        raise
      end
    end
  end
end
Here's what I did:

Redis.current.client.reconnect
$redis = Redis.current

($redis is my global instance of a Redis client.)
I've put this into my config/puma.rb file; it works for me:
on_restart do
  $redis = DiscourseRedis.new
  Discourse::Application.config.cache_store.reconnect
end

on_worker_boot do
  $redis = DiscourseRedis.new
  Discourse::Application.config.cache_store.reconnect
end
Related
I am trying to create a health-check page for my app. There are three different servers for the back end, the front end, and the database.
I have created an API to check whether services (Sidekiq, Redis) are running.
Now I want to check whether the Postgres server is up or not.
For this I have added a method:

def database?
  ActiveRecord::Base.connection.active?
end
This method returns true when Postgres is running. If the Postgres server is stopped and I try to hit my API, I get:
PG::ConnectionBad (could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
):
How can I rescue this error?
To prevent the Rails guts from being loaded before you actually check the DB connection, I would suggest creating a simple Rack middleware and putting it at the very beginning of the middleware stack:
class StatusMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    return @app.call(env) unless status_request?(env)

    # Feel free to respond with JS here if needed
    if database_alive?
      [200, {}, []]
    else
      [503, {}, []]
    end
  end

  private

  def status_request?(env)
    # Change the route to whatever you like
    env['PATH_INFO'] == '/postgres_status' && env['REQUEST_METHOD'] == 'GET'
  end

  def database_alive?
    ::ActiveRecord::Base.connection.verify!
    true
  rescue StandardError
    false
  end
end
And in your config/application.rb:
config.middleware.insert_before 0, StatusMiddleware
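As a sanity check, the same pattern can be exercised outside Rails by injecting the aliveness check; in this self-contained sketch the alive_check lambda stands in for ::ActiveRecord::Base.connection.verify!:

```ruby
# A variant of the middleware with the DB check injected, so the routing
# and rescue logic can run without Rails or Postgres.
class StatusMiddleware
  def initialize(app, alive_check: -> { true })
    @app = app
    @alive_check = alive_check
  end

  def call(env)
    return @app.call(env) unless status_request?(env)

    database_alive? ? [200, {}, []] : [503, {}, []]
  end

  private

  def status_request?(env)
    env['PATH_INFO'] == '/postgres_status' && env['REQUEST_METHOD'] == 'GET'
  end

  def database_alive?
    @alive_check.call # the real middleware calls connection.verify! here
    true
  rescue StandardError
    false
  end
end

app = ->(env) { [200, {}, ['app body']] }
down = StatusMiddleware.new(app, alive_check: -> { raise 'connection refused' })
down.call('PATH_INFO' => '/postgres_status', 'REQUEST_METHOD' => 'GET')
# => [503, {}, []] because the injected check raises
```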
I didn't do anything like this, but here's how I'd do it.
Host a Rails app (or Sinatra, with no DB or a dummy one) at status.myapp.com that is just a simple index page with a bunch of checks: db, redis, sidekiq. I'd make sure it's hosted on the same machine as the production one.
db - try to establish a connection to your production DB; see whether it fails
redis - check whether there is a running redis-server process
sidekiq - check whether there is a running sidekiq process
etc ...
Again, just an idea. Maybe someone did it differently.
My Rails 4.1 Sidekiq application is running into an issue with ActiveRecord connections to Postgres not being closed.
Since the connections are not being closed, Postgres does not release the memory, and after a while I run out of memory on my Postgres server (I have 15 GB of RAM and a fairly small database).
The other issue is that after a while my Sidekiq workers start to take about 10x as long as they should to run, and eventually the workers just get stuck and stop processing any new jobs.
I have tried wrapping my worker code in the following, to no effect:
def with_connection(&block)
  ActiveRecord::Base.connection_pool.with_connection do
    yield block
  end
ensure
  ActiveRecord::Base.clear_active_connections!
  ActiveRecord::Base.connection.close
end
I have also added this to the after_fork block in my unicorn.rb file:
if defined?(ActiveRecord::Base)
  config = ActiveRecord::Base.configurations[Rails.env] ||
           Rails.application.config.database_configuration[Rails.env]
  config['reaping_frequency'] = 10 # seconds
  ActiveRecord::Base.establish_connection(config)
end
Is there anything I can do to get these connections to close, or at least release the postgres memory?
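One thing worth trying (a sketch, not from the thread; the class name is made up) is a Sidekiq server middleware that hands ActiveRecord connections back to the pool after every job, so finished jobs don't keep connections checked out:

```ruby
# Sketch: return the job's ActiveRecord connection to the pool as soon
# as the job finishes, whether it succeeded or raised.
class ReleaseConnections
  def call(worker, job, queue)
    yield
  ensure
    # no-op when ActiveRecord isn't loaded (e.g. in this standalone sketch)
    ActiveRecord::Base.clear_active_connections! if defined?(ActiveRecord::Base)
  end
end

# Registered in a Sidekiq initializer:
# Sidekiq.configure_server do |config|
#   config.server_middleware { |chain| chain.add ReleaseConnections }
# end
```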
With Puma, the number of threads can be altered to handle multiple requests at the same time. But on Heroku, the number of database connections to Postgres is limited.
To handle more requests we can increase the number of dynos, where each dyno runs, say, the default of 0:16 threads. In that case, under load, each dyno can make up to 16 connections to the database.
With Rails ActiveRecord we can limit the number of database connections per Rails worker process using this configuration:
Rails.application.config.after_initialize do
  ActiveRecord::Base.connection_pool.disconnect!

  ActiveSupport.on_load(:active_record) do
    config = ActiveRecord::Base.configurations[Rails.env]
    config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
    config['pool'] = ENV['DB_POOL'] || ENV['MAX_THREADS'] || 5
    ActiveRecord::Base.establish_connection(config)
  end
end
However, because of the DB connection limit, the limit is hit as the number of dynos increases.
Is there any way to kill a thread and close its database connection as soon as the request has been served?
I've tried using pgbouncer as a buildpack, but there are issues with prepared statements.
I'm currently on Rails 4.0.0, using Puma 2.7.1.
Is there some event hook in Puma which we can configure, like the one below, for when a request is complete?
on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end
Try using Puma's cluster mode if you are on MRI. I suggest setting up 4 workers:

puma -w 4

If you use JRuby, you should specify around 20 threads instead, but no more than the number of allowed connections to the DB:

puma -t 20:20
More info:
https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#thread-safety
https://devcenter.heroku.com/articles/concurrency-and-database-connections#maximum-database-connections
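The Heroku guide linked above also recommends handling ActiveRecord explicitly when Puma runs in cluster mode (a config sketch, not from the answer):

```ruby
# config/puma.rb -- disconnect in the master before forking, reconnect
# in each worker, so forked workers don't share database sockets
before_fork do
  ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord::Base)
end

on_worker_boot do
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end
```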
I have a problem: a Sidekiq process is using almost all of my server's CPU.
Today I removed Sidekiq from the app, but when I run htop it still shows heavy CPU usage from usr/local/bin/bundle exec sidekiq.
The weirdest thing is that Sidekiq was implemented about 5 months ago, but the problem started to occur only recently (around two weeks ago).
I'm using Sidekiq for only one background job.
I have tried running killall sidekiq on the server to kill all Sidekiq processes, but nothing happens.
Here is my pretty simple sidekiq worker:
class UserWorker
  include Sidekiq::Worker

  def perform(user_id)
    user = User.find(user_id)

    url = "http://url_to_external_api"
    uri = URI.parse(url)
    request = Net::HTTP::Get.new(uri.request_uri)
    response = Net::HTTP.start(uri.host, uri.port) do |http|
      http.request(request)
    end

    age = JSON.parse(response.body)['user']['age']
    user.age = age
    user.save!
  rescue ActiveRecord::RecordNotFound
    # nothing to do here
  end
end
At this point I'm pretty desperate, because this Sidekiq process is killing my server every other day and I can't even remove it completely.
EDIT:
When I run kill -9 PID to kill the process, it gets killed, but right after that it starts up again with another PID.
QUESTION
I just manually ran the command /usr/bin/ruby1.9.1 /usr/local/bin/bundle exec sidekiq -r /var/www/path to my application on an Amazon EC2 instance and got this error: Error fetching message: Error connecting to Redis on 127.0.0.1:6379 (ECONNREFUSED)
So do I need to install Redis on the server to run Sidekiq?
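Yes: Sidekiq needs a reachable Redis server. It does not have to be on the same machine; you can point Sidekiq at wherever Redis runs (a sketch; REDIS_URL is a conventional env var, not something from the question):

```ruby
# config/initializers/sidekiq.rb -- point both the server and the client
# at the Redis instance; falls back to the local default
Sidekiq.configure_server do |config|
  config.redis = { url: ENV.fetch('REDIS_URL', 'redis://127.0.0.1:6379/0') }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV.fetch('REDIS_URL', 'redis://127.0.0.1:6379/0') }
end
```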
I have a Rails (web) app to which I need to add a (Redis) pub/sub subscriber.
Below is my PubsubSubscriber class, which I need to kick off when the app starts up.
The Redis connection is created in a resque.rb initializer file. I tried calling PubsubSubscriber.new after the connection is created, but when I try to start the Rails server it hangs at:
=> Booting Thin
=> Rails 3.2.13 application starting in development on http://0.0.0.0:5000
=> Call with -d to detach
=> Ctrl-C to shutdown server
As opposed to when the server starts successfully:
=> Booting Thin
=> Rails 3.2.13 application starting in development on http://0.0.0.0:5000
=> Call with -d to detach
=> Ctrl-C to shutdown server
>> Thin web server (v1.5.1 codename Straight Razor)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:5000, CTRL+C to stop
Any idea why the server hangs when I try to instantiate the PubsubSubscriber class in the initializer? Is there a better place to start this up?
# example modified from https://github.com/redis/redis-rb/blob/master/examples/pubsub.rb
class PubsubSubscriber
  def initialize
    $redis.psubscribe(:channel_one) do |on|
      on.psubscribe do |event, total|
      end

      on.pmessage do |pattern, event, message|
        # message received, kick off some workers
      end

      on.punsubscribe do |event, total|
      end
    end
  end
end
One problem you are having is that EventMachine is not ready yet while you are in the initializer. Wrapping your initialization in EM.next_tick will delay your code until EventMachine is ready:

EM.next_tick { ... EventMachine code here ... }
When I tried this, my server started up all the way... and then blocked when the call to $redis.psubscribe fired.
However, switching to em-hiredis worked:
# config/initializers/redis_pubsub.rb
EM.next_tick do
  $emredis = EM::Hiredis.connect(uri)
  $emredis.pubsub.subscribe('channelone') do |message|
    Rails.logger.debug message
  end
end
This happens because the standard redis gem does not listen for events through the EventMachine interface; it just creates a connection to your Redis server and then blocks everything else.
In order to take advantage of EM you would need to set up a simple connection, in the form of:
class RedisConnection
  # connect to redis
  def connect(host, port)
    EventMachine.connect host, port, self
  end

  def post_init
    # send "subscribe channel_one" to redis
  end

  def receive_data(data)
    # channel_one messages will arrive here
  end
end
I've got a working example in this gist: https://gist.github.com/wheeyls/5647713
em-hiredis is a redis client that uses this interface.